ABSTRACT
We study how generative AI, and in particular agentic AI, shapes human learning incentives and the long-run evolution of society’s information ecosystem. We build a dynamic model of learning and decision-making in which successful decisions require combining shared, community-level general knowledge with individual-level, context-specific knowledge; these two inputs are complements. Learning exhibits economies of scope: an agent’s costly effort jointly produces a private signal about that agent’s own context and a “thin” public signal that accumulates into the community’s stock of general knowledge, generating a learning externality. Agentic AI delivers context-specific recommendations that substitute for human effort. By contrast, a richer stock of general knowledge complements human effort by raising its marginal return. The model highlights a sharp dynamic tension: while agentic AI can improve contemporaneous decision quality, it can also erode the learning incentives that sustain long-run collective knowledge. When human effort is sufficiently elastic and agentic recommendations exceed an accuracy threshold, the economy can tip into a knowledge-collapse steady state in which general knowledge ultimately vanishes, despite high-quality personalized advice. Welfare is generally non-monotone in agentic accuracy, implying an interior, welfare-maximizing level of agentic precision and motivating information-design regulations. In contrast, greater aggregation capacity for general knowledge—meaning more effective sharing and pooling of human-generated general knowledge—unambiguously raises welfare and increases resilience to knowledge collapse.
Daron Acemoglu Massachusetts Institute of Technology Department of Economics and NBER daron@mit.edu
Dingwen Kong Massachusetts Institute of Technology dingwenk@mit.edu
Asuman Ozdaglar Massachusetts Institute of Technology Department of Electrical Engineering asuman@mit.edu
The disagreements are in part about whether AI-provided information is a complement or a substitute to human learning. If the former, the expansion of AI will lead humans to put their effort and attention where they matter most and to use AI’s inputs with growing effectiveness. If the latter, however, better and better AI will increasingly discourage human effort and learning, because most relevant information comes to be served to humans on a platter.
This paper is an attempt to contribute to a better theoretical understanding of how AI tools impact human cognition and knowledge. We build a dynamic model of learning and decision-making where AI inputs can be either complementary or substitutable to human effort. At the center of our approach is a distinction between two types of information: general and individual- (or context-) specific. To perform any task, individuals require general knowledge. For example, for investment decisions one needs a basic understanding of different financial instruments such as treasury bonds, corporate bonds, stocks and options, as well as information on how world stock markets and economies have been performing, some relevant aspects of their institutional structure, and an understanding of macroeconomic risks. But one also needs information related to an individual’s context: what is the risk tolerance and planning horizon of the individual in question? How are their other income sources correlated with different asset returns? Do they have information, hunches, preferences or beliefs affecting how they should invest and what types of risks they should take? Notably, human decision-makers often acquire both general and specific knowledge jointly. For example, most individuals will learn general financial knowledge in a finance course or by reading the relevant financial literature, and during the same process they will come to recognize their own needs and form the preferences and beliefs relevant for investment. Put differently, there are often economies of scope in learning, with the same effort generating both general and individual- or context-specific knowledge.
Like humans, generative AI tools can also acquire both general and context-specific knowledge.
But it is their ability—especially the promise of the much-anticipated agentic AI models—to develop and provide context-specific information and decision support that is most promising. Building on an architecture that enables the storage and rapid inspection of vast amounts of data, and inference about the connections between context and relevant information, these models promise to find patterns that are relevant to a specific context and uniquely useful to individual decision-makers.
Indeed, at some level, the Internet already aggregated a huge amount of general knowledge that was out there, and current generative AI tools can do this quite satisfactorily in many domains.
The big next step—with both great potential and danger—is individual- and context-specific aid from AI models. It is this type of context-specific recommendation from agentic AI that we focus on in our analysis. Returning to the investment example, textbooks and online resources can teach the mechanics of a broad range of financial instruments, but a future agentic system might translate an individual’s particular context into a concrete portfolio choice or even autonomously execute trades subject to that person’s conditions and constraints.
A key premise underlying our approach is that good prediction or performance in tasks typically requires both general knowledge and context-specific knowledge, and that these inputs are complements in the production of successful decisions. General knowledge makes context-specific evidence interpretable and valuable; conversely, context-specific knowledge pinpoints where the decision-maker sits within that general framework. Another important feature of general knowledge is that it builds on an entire community’s learning efforts—by its nature, general knowledge can be shared. This implies that most of the general knowledge an individual generates is an externality that he or she does not internalize. These elements imply that additional general knowledge raises the marginal return to an individual’s learning effort: with more general knowledge, the same unit of effort is more beneficial for understanding and usefully acting on an individual’s specific context, and this motivates more effort when general knowledge is more abundant. Thus general knowledge is complementary to human learning effort. By the same token, with context-specific recommendations from agentic AI, there is less impetus to exert costly effort, because one of the objectives of this effort is already well served by agentic AI. Consequently, agentic recommendations are a substitute for human effort.
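In reduced form, this pattern can be summarized by two cross-partial conditions. The notation below is our own illustrative shorthand, not the paper’s formal model:

```latex
% G: stock of general knowledge; e: individual learning effort;
% \alpha: accuracy of the agentic recommendation.
% (All notation here is illustrative shorthand, not the formal model.)
V(e;\,G,\alpha) \;=\; q(e,\,G,\,\alpha) \;-\; c(e),
\qquad
\frac{\partial^{2} q}{\partial e\,\partial G} \;>\; 0,
\qquad
\frac{\partial^{2} q}{\partial e\,\partial \alpha} \;<\; 0 .
% The first cross-partial says general knowledge raises the marginal
% return to effort (complementarity); the second says a more accurate
% agentic recommendation lowers it (substitutability), so optimal
% effort e^{*}(G,\alpha) is increasing in G and decreasing in \alpha.
```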
Our model embeds this relationship between different types of knowledge and individual learning effort into a dynamic model of community learning. An additional important element is that community-level knowledge is itself an input into AI models—without human effort, experimentation and discovery, there would not be enough valuable information for AI models to aggregate and sift through, either for distilling general knowledge or for making individual-specific recommendations.
We model the decision problem of a collection of human agents as a prediction problem—each agent’s payoff depends on the distance between a common state representing general knowledge and their prediction of this state, and on the distance between their prediction of their context-specific state and its true value. In making these predictions, agents use their own learning effort, which generates two signals: one correlated with their context-specific state and another correlated with the common state. By virtue of being about the common state, the latter signal is useful to all decision-makers, thus generating learning or data externalities. Agentic AI also provides context-specific recommendations, again modeled as predictions, which agents optimally combine with their own signals and priors.
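For concreteness, a Gaussian-quadratic parametrization of such a prediction problem might look as follows; the functional forms and symbols here are our own illustrative choices, not the paper’s:

```latex
% Hypothetical parametrization (ours, for illustration). Agent i predicts
% a common state \theta and a context-specific state \omega_i, suffering
% quadratic loss in both predictions:
u_i \;=\; -\,(\hat{\theta}_i - \theta)^{2} \;-\; (\hat{\omega}_i - \omega_i)^{2} .
% Effort e_i yields a private signal s_i = \omega_i + \varepsilon_i and a
% thin public signal about \theta; agentic AI supplies a recommendation
% r_i = \omega_i + \eta_i with precision \alpha. With Gaussian signals and
% a zero prior mean, the optimal prediction is the precision-weighted
% posterior mean, e.g.
\hat{\omega}_i \;=\; \frac{\pi(e_i)\, s_i \;+\; \alpha\, r_i}{\pi_0 \;+\; \pi(e_i) \;+\; \alpha}\,,
% where \pi(e_i) is the signal precision purchased by effort and \pi_0 is
% the prior precision. A higher \alpha mechanically shrinks the weight on
% s_i and thus the marginal value of raising \pi(e_i).
```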
The main result of the paper is a cautionary one: a powerful agentic AI model can statically help human decision-makers, but it can dynamically harm collective knowledge building. In fact, it can lead to what we call “knowledge collapse,” whereby in the long-run equilibrium all human knowledge is ultimately destroyed.
Understanding the intuition for this result helps clarify our main contribution and the interpretation of our results. It is not surprising that, statically, individuals gain from additional information, in the absence of any misspecification in their model of the world or other biases. They reduce their effort only because they already receive a fairly good recommendation from agentic AI, which is naturally a substitute for their effort. This substitution, when done optimally, cannot harm their utility statically. However, human effort feeds into collective knowledge, and this externality is not internalized by the agents. As they reduce their learning effort, the amount of information that either the community or AI models can aggregate starts diminishing. The long-run welfare effects then depend on the balance of these two forces—better static decision-making versus less dynamic collective knowledge. We show that the negative effect is likely to dominate when agentic AI is very accurate. On the other hand, the ability of agents to learn from the general knowledge of others—either via community aggregation or via more traditional AI or Internet-type tools—always improves welfare.
These same comparative statics also apply to the likelihood that the system converges to the knowledge-collapse steady state. Specifically, in domains where learning effort is sufficiently elastic, the system can exhibit multiple steady states: a high-knowledge one and a knowledge-collapse trap with zero general knowledge. As agentic AI improves, the basin of attraction of the knowledge-collapse steady state expands. Moreover, once agentic AI is accurate enough, a “complete collapse” occurs in which the high-knowledge steady state disappears and the system converges to the knowledge-collapse trap regardless of initial conditions. On the other hand, better aggregation of human-produced general knowledge has the opposite effect: it shrinks the basin of attraction of the knowledge-collapse steady state and can offset part of the discouragement effect of agentic AI.
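The tipping logic described above can be illustrated with a minimal numerical sketch. The functional forms below are our own stylized choices, not the paper’s: effort is increasing in the knowledge stock (complementarity) and decreasing in agentic accuracy (substitutability), and the knowledge stock depreciates unless replenished by effort.

```python
# Stylized knowledge dynamics: illustrative functional forms only,
# not the paper's model.

def effort(G, alpha):
    """Best-response learning effort: increasing in general knowledge G,
    decreasing in agentic accuracy alpha."""
    return G**2 / (1.0 + G**2 + alpha)

def step(G, alpha, decay=0.25):
    """One period: knowledge depreciates, human effort replenishes it."""
    return (1.0 - decay) * G + effort(G, alpha)

def long_run(G0, alpha, T=2000):
    """Iterate the dynamics from initial stock G0."""
    G = G0
    for _ in range(T):
        G = step(G, alpha)
    return G

# Moderate agentic accuracy, high initial stock: the system converges
# to a high-knowledge steady state.
high = long_run(G0=1.0, alpha=1.0)

# Same accuracy, low initial stock: knowledge collapses to zero.
low = long_run(G0=0.3, alpha=1.0)

# Very accurate agentic AI: the high steady state disappears and the
# system collapses regardless of initial conditions ("complete collapse").
collapsed = long_run(G0=5.0, alpha=6.0)

print(f"high-knowledge steady state: {high:.3f}")   # ~3.41
print(f"collapse from low start:     {low:.3f}")    # ~0.00
print(f"complete collapse:           {collapsed:.3f}")
```

Under these parameters the unstable interior steady state (roughly 0.59 when alpha is 1) acts as the basin boundary: initial stocks below it slide into the collapse trap, stocks above it recover. Raising alpha pushes that boundary upward until the high steady state vanishes altogether.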
We also consider several extensions to show both the flexibility of the model and the robustness of our main findings. First, in our baseline model we assume that, even without AI, human-generated general knowledge is relatively well aggregated, so that AI does not lead to a significant improvement in the aggregation of general knowledge. In our first extension we relax this assumption and establish that similar results apply even when AI simultaneously improves the aggregation of general knowledge relative to what the community was able to do and introduces the agentic element of providing individual-specific recommendations to human decision-makers. Our results readily generalize to this case. Second, we investigate the extent to which “synthetic data,” generated by AI models without relying on human learning and experimentation, can substitute for human effort.
We show that the same qualitative insights apply even with synthetic data, provided that such data is not a perfect substitute for human effort. However, in this case the knowledge-collapse steady state still features some positive amount of general information about the common state, since even without human effort synthetic data generates new knowledge. Third, we study a version of the model in which individuals can decide the direction of their learning effort, determining the balance between acquiring general knowledge and context-specific knowledge. Provided that general and context-specific knowledge cannot be perfectly separated, our results continue to apply in this case.
Conclusion
In this paper, we introduced a simple framework to study the implications of new generative AI technologies that promise to provide context-specific information and recommendations to human decision-makers. Our framework is based on three core ideas:
1. Good human decisions combine general knowledge with context-specific information.
2. Human effort directed at improving cognition generates both types of information, with the primary private return coming from context-specific information.
3. Individual contributions to general knowledge create externalities on others who build on this general knowledge.
These three observations together imply that the main motive for individual effort is often the acquisition of context-specific information, while the general knowledge an individual generates is primarily an externality. Consequently, better general knowledge in society is a complement to human learning effort, while better context-specific recommendations are substitutes. Because generative AI promises to provide this kind of context-specific information, it can be such a substitute and reduce human effort. As lower human effort reduces general knowledge-building, generative AI (especially agentic AI) can dynamically push a society towards lower effective information and even lead to a knowledge-collapse steady state.
Our analysis is purely theoretical and shows that such a framework is tractable and yields a number of new and intuitive comparative statics. It also clarifies the conditions under which a knowledge-collapse steady state emerges and what determines how large its basin of attraction is.
The tractability of our model also enables us to consider a number of extensions, aimed at showing that our main qualitative insights are robust.
There are several interesting areas for future research, including incorporating synthetic data and other new AI capabilities, and exploring whether they can break the strong substitution between context-specific AI recommendations and human effort. The tractability of our model makes a range of other theoretical applications and extensions possible, for example, considering different types of effort on the side of humans and the implications of AI technologies that provide varying mixes of general and context-specific recommendations.
Our framework also provides guidelines on different types of effects that need to be measured empirically to evaluate the overall welfare impacts of new AI advances.