Economics of Artificial Intelligence

An NBER conference on the Economics of Artificial Intelligence took place in Toronto on September 13-14, 2018. Research Associates Ajay K. Agrawal, Joshua S. Gans, and Avi Goldfarb of the University of Toronto and Catherine Tucker of MIT organized the meeting, sponsored by the Alfred P. Sloan Foundation, CIFAR, and the Creative Destruction Lab. These researchers' papers were presented and discussed:

Emilio Calvano, Vincenzo Denicolò, and Sergio Pastorello, University of Bologna, and Giacomo Calzolari, European University Institute

Q-Learning to Cooperate (slides) (video)

AI algorithms are increasingly replacing human decision making in real marketplaces. To inform the debate on the potential consequences, Calvano, Calzolari, Denicolò, and Pastorello run experiments with AI agents powered by reinforcement learning in controlled environments (computer simulations). In particular, the researchers study multi-agent interaction in the context of a workhorse oligopoly model: price competition with Logit demand and constant marginal costs. They show that two Q-learners with no previous knowledge of such an environment consistently learn to charge supra-competitive prices. They do so by successfully coordinating on classic collusive strategies, that is, schemes that provide incentives to cooperate through off-path, incentive-compatible, and thus credible punishments. Calvano, Calzolari, Denicolò, and Pastorello show that this finding is robust to asymmetries in cost or demand structure and to changes in the number of players, the degree of product differentiation, and the level of demand.
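
A minimal simulation sketch of this kind of setup follows; the price grid, Logit parameters, learning rate, and exploration schedule are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

# Illustrative parameters only -- not the authors' calibration.
rng = np.random.default_rng(0)
prices = np.linspace(1.0, 2.0, 11)        # discrete price grid
c = 1.0                                    # constant marginal cost
quality, mu = 2.0, 0.25                    # Logit demand parameters
alpha, beta, delta = 0.1, 2e-5, 0.95       # learning rate, exploration decay, discount

def logit_shares(p):
    """Logit market shares for two symmetric firms plus an outside option."""
    u = np.exp((quality - p) / mu)
    return u / (u.sum() + 1.0)

def profits(p):
    return (p - c) * logit_shares(p)

n = len(prices)
# State = last period's price pair; Q[i][own_last, rival_last, action]
Q = [np.zeros((n, n, n)) for _ in range(2)]
state = (int(rng.integers(n)), int(rng.integers(n)))

for t in range(500_000):
    eps = np.exp(-beta * t)                # decaying epsilon-greedy exploration
    actions = []
    for i in range(2):
        own, rival = state[i], state[1 - i]
        if rng.random() < eps:
            actions.append(int(rng.integers(n)))
        else:
            actions.append(int(np.argmax(Q[i][own, rival])))
    pi = profits(prices[np.array(actions)])
    new_state = (actions[0], actions[1])
    for i in range(2):
        own, rival = state[i], state[1 - i]
        target = pi[i] + delta * Q[i][new_state[i], new_state[1 - i]].max()
        Q[i][own, rival, actions[i]] += alpha * (target - Q[i][own, rival, actions[i]])
    state = new_state

# The pattern the paper reports: learned prices settle above the one-shot Nash level.
print("long-run prices:", prices[list(state)])
```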


Prasanna Tambe, University of Pennsylvania

Machine Learning and Domain Knowledge (slides) (video)

Tambe analyzes a database of online job listings to investigate the structure of machine learning jobs. First, descriptive statistics about the demand side of the machine learning labor market are presented. Second, the paper tests the hypothesis that, unlike in prior waves of IT, machine learning skills tend to be bundled with domain knowledge, such as economics or life sciences. Complementarities between technical skills and domain knowledge are also reflected in the college degrees listed in job ads. When compared with more traditional technical jobs, job postings related to machine learning often indicate that applicants can have degrees either in technical disciplines or in subject areas related to the data context. In general, the findings indicate that the adoption of machine learning skills into jobs is likely to be governed by the need to couple these skills with domain knowledge, potentially turning a wider category of jobs into "technical" jobs. Implications for labor market competition and education policy are discussed.


Erik Brynjolfsson, MIT and NBER; Tom Mitchell, Carnegie Mellon University; and Daniel Rock, MIT

Machine Learning and Occupational Change (video)


Anton Korinek, University of Virginia and NBER

Artificially Intelligent Agents in Our Economy (slides) (video)

Given recent advances in AI, Korinek observes that machines increasingly behave like artificially intelligent agents (AIAs). This raises fundamental questions about what an economy with humans and AIAs will look like – questions that stretch from the allocation of resources between humans and non-humans to the potential for an existential race. He develops an economic framework that describes humans and AIAs symmetrically as goal-oriented entities that each (i) absorb scarce resources, (ii) supply their factor services to the economy, (iii) exhibit defined behavior, and (iv) are subject to specified laws of motion. After introducing a resource allocation frontier that captures the distribution of resources between humans and machines, the paper describes several mechanisms that may provide AIAs with autonomous control over resources, both within and outside of the human system of property rights. In a number of these scenarios, competition over scarce factors eventually reduces human absorption. In the limit case of an AIA-only economy, AIAs both produce and absorb large quantities of output without any role for humans, rejecting the fallacy that human demand is necessary to support economic activity. Finally, the paper discusses a number of recent macroeconomic trends that are harbingers of the rise of autonomous AIAs.
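
A one-line sketch of the kind of resource constraint such a frontier traces out, in notation of our own choosing rather than the paper's:

```latex
% Illustrative notation (c_H, c_A, ell_H, ell_A, k are not the paper's symbols):
% humans (H) and AIAs (A) each absorb resources and supply factor services.
\[
  c_H + c_A \;\le\; F(\ell_H, \ell_A, k).
\]
% The resource allocation frontier is the boundary of the feasible pairs (c_H, c_A);
% competition over scarce factors moves the economy along it, and in the limiting
% AIA-only case c_H tends to zero while output is still produced and absorbed.
```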


Paul M. Romer, New York University and NBER

Machine Learning as a 'Wind Tunnel' for Research on Human Learning (slides)

The proven way for a nation to dampen the inequality that technological change would induce if nothing else changes is to raise average human capital. If machine learning yielded practical insights into ways to improve the productivity of human learning, the net effect of machine learning on inequality might be to reduce it.
The problem with scaling up existing systems of education is that they tend to amplify small initial differences in ability. Scaling up will therefore tend to raise the mean level of human capital but increase its variance. To reduce inequality, the most valuable innovations would be those that raise human capital disproportionately for people who would otherwise be in the lower tail of the distribution, for example, because they never learned to read.
Romer explains that, if machine learners could accurately mimic human learners, they could be used to screen educational innovations quickly and cheaply. To use an analogy from the design of airplanes, artificial learners that capture realistic features of human learning could be used as a "wind tunnel" that fits in between paper and pencil theory and field tests. As the analogy from aeronautics suggests, progress will entail simultaneous improvements in theory, practice, and in the wind tunnels that connect them.
A testbed for assessing the feasibility of this approach would be to develop machine learning systems that learn how to read. A test problem for these learners would be to assess the effect of a range of simplifications in the orthography of English, which is an outlier among languages because of its inconsistent mapping between letters and sounds. From a practical perspective, this would be a very hard question to investigate with real students, but it might become possible with the rapid progress in machine learning. Were useful insights to emerge, it would be virtually impossible to change standard spelling in the Anglosphere, but it might instead be possible to offer a rationalized spelling for people who learn English as a second language. It would be trivial to have dictionaries that translate text flawlessly between the two spelling systems, and everyone could still understand spoken English. This would help the many people who fail to learn to read English as a second language. If the rationalized spelling is not too far from standard spelling, it might also spread back into the Anglosphere on a time scale of decades. This pattern of benefits would make the effort more attractive to the government of a country such as China, which would not have the same incentive as governments in the Anglosphere to keep the rationalized spelling close to standard spelling and thereby ease its eventual adoption in the Anglosphere.
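
As a purely illustrative toy of the "wind tunnel" idea, the sketch below builds a synthetic vocabulary, pronounces it under a perfectly consistent orthography and under one with ambiguous letter-to-sound mappings, and compares how quickly a trivial learner masters each; every name and number in it is invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: synthetic letters, sounds, and words.
random.seed(0)
LETTERS = list("abcdefgh")
CONSISTENT = {l: i for i, l in enumerate(LETTERS)}                                # one sound per letter
AMBIGUOUS = {l: ([i] if i % 2 else [i, i + 10]) for i, l in enumerate(LETTERS)}   # some letters have two sounds

def pronounce(word, ambiguous):
    if ambiguous:
        return tuple(random.choice(AMBIGUOUS[l]) for l in word)
    return tuple(CONSISTENT[l] for l in word)

def sample_corpus(n, ambiguous):
    words = ["".join(random.choices(LETTERS, k=4)) for _ in range(n)]
    return [(w, pronounce(w, ambiguous)) for w in words]

def learn_and_test(train, test):
    # Learner: remember the most frequent sound observed for each letter.
    votes = defaultdict(Counter)
    for word, sounds in train:
        for l, s in zip(word, sounds):
            votes[l][s] += 1
    table = {l: counts.most_common(1)[0][0] for l, counts in votes.items()}
    correct = sum(all(table.get(l) == s for l, s in zip(w, ss)) for w, ss in test)
    return correct / len(test)

for ambiguous in (False, True):
    test = sample_corpus(500, ambiguous)
    curve = [learn_and_test(sample_corpus(n, ambiguous), test) for n in (20, 100, 500)]
    print("ambiguous" if ambiguous else "consistent", [round(a, 2) for a in curve])
```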


Edmund S. Phelps, Columbia University

Two Kinds of Robots in Growth Models: An Introduction (slides) (video)

Two scenarios following the introduction of robots into the economy are discussed. In the first, robots are purely additive: their labor is added to that of human workers. In the second, robots multiply the productivity of humans. The framework is a two-sector equilibrium growth model in which the consumer good is produced with the capital good and the latter is produced with labor. Phelps examines the dynamics of wages after the robots arrive. If adding robots doubles total labor, robotic plus human, it will in the long run double both the capital stock and consumer-good output. In the short run, the injection of robots will cause an initial drop in the real price of capital goods, thus a drop in the marginal value productivity of labor, robotic and human, and thus a drop in the real hourly pay rate. In the long run, with the ensuing capital formation, the real pay rate gradually recovers to its original level. The result changes in this first scenario if robots are introduced to the economy in rapid succession. In the second scenario, in which robots multiply the productivity of humans, the effect on the real price of goods and the hourly pay rate is theoretically ambiguous.
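
A stylized rendering of the additive scenario, in notation of our own rather than Phelps's, just to make the mechanism concrete:

```latex
% Illustrative two-sector structure: the consumer good is produced from capital,
% the capital good from labor (human plus robotic in the additive scenario).
\[
  C_t = A\,K_t, \qquad \dot K_t = B\,\big(L^{\text{human}} + L^{\text{robot}}\big) - \delta K_t .
\]
% Doubling total labor doubles the long-run capital stock and consumer-good output.
% On impact, the relative price of capital goods falls, so the marginal value
% productivity of labor and the real hourly pay rate fall, recovering only as the
% capital stock accumulates.
```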


Susan Athey, Stanford University and NBER

Contextual Bandits (video)


Matthew Gentzkow, Stanford University and NBER

Artificial Intelligence, Media, and Fake News (slides) (video)

Digital technologies have upended media markets, causing a cascade of social effects both positive and negative. Rapidly advancing machine learning and AI now seem poised to usher in a new set of changes that could be equally profound. Gentzkow discusses what past and recent research in economics has to say about the likely impact of AI on media markets, with particular emphasis on the extent to which AI can stem the flow of misinformation and support democracy.


Sendhil Mullainathan, University of Chicago and NBER

Using Machine Learning to Understand Human Decision-Making: Application to Health Care (video)


Kathryn L. Shaw, Stanford University and NBER

AI and Personnel Economics (slides) (video)


Michael Schwarz, Microsoft

Open Questions and Research Directions — AI and the Marginal Value of Data (slides) (video)


James Bessen, Boston University, and Robert Seamans, New York University

Startups' Use of Data for Artificial Intelligence (slides) (video)

AI has been making dramatic progress and is starting to be commercialized. To understand the challenges and implications of this commercialization, Bessen and Seamans develop and administer a survey of AI-startup firms, and they present several findings from it. The AI startups in the sample sell their products across a wide range of industries and seem in particular to target mid-size firms. The startups rely on a variety of data sources, particularly data from their customers. Their products provide a range of benefits, and these benefits generally augment existing business practices for their customers rather than reduce costs or replace human labor. While it appears that customer jobs are replaced in some cases, notably clerical, manual, and frontline service jobs, customer jobs are created in other cases, notably professional, managerial, and sales jobs.


Joao Guerreiro, Northwestern University; Sergio Rebelo, Northwestern University and NBER; and Pedro Teles, Banco de Portugal

Should Robots be Taxed? (NBER Working Paper No. 23806) (video)

Guerreiro, Rebelo, and Teles use a model of automation to show that, under the current U.S. tax system, a fall in automation costs could lead to a massive rise in income inequality. This inequality can be reduced by raising marginal income tax rates and taxing robots, but that solution involves a substantial efficiency loss for the reduction in inequality it achieves. A Mirrleesian optimal income tax can reduce inequality at a smaller efficiency cost but is difficult to implement. An alternative approach is to amend the current tax system to include a lump-sum rebate. In the researchers' model, with the rebate in place, it is optimal to tax robots only when there is partial automation.


Jason Furman, Harvard Kennedy School

A.I. Policy Considerations (slides) (video)


Mitsuru Igami, Yale University

Artificial Intelligence as Structural Estimation: Economic Interpretations of Deep Blue, Bonanza, and AlphaGo (slides) (video)

Artificial intelligence (AI) has achieved superhuman performance in a growing number of tasks, but understanding and explaining AI remain challenging. Igami clarifies the connections between the machine-learning algorithms used to develop AIs and the econometrics of dynamic structural models through case studies of three famous game AIs. Chess-playing Deep Blue is a calibrated value function, whereas shogi-playing Bonanza is a value function estimated via Rust's (1987) nested fixed-point method. AlphaGo's "supervised-learning policy network" is a deep neural network implementation of Hotz and Miller's (1993) conditional choice probability estimation, and its "reinforcement-learning value network" is equivalent to Hotz, Miller, Sanders, and Smith's (1994) conditional choice simulation method. Relaxing these AIs' implicit econometric assumptions would improve their structural interpretability.
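
The mapping rests on standard dynamic discrete-choice objects; in generic notation (not Igami's exact expressions), the value function and conditional choice probabilities are:

```latex
% Generic dynamic discrete-choice notation, not Igami's exact expressions.
\[
  V(s) \;=\; \max_{a}\Big[\, u(s,a) + \beta\,\mathbb{E}\big[V(s') \mid s, a\big] \Big],
  \qquad
  P(a \mid s) \;=\; \Pr\!\Big(a \in \arg\max_{a'} \big[u(s,a') + \beta\,\mathbb{E}[V(s')\mid s,a'] + \varepsilon_{a'}\big]\Big).
\]
% In these terms, Deep Blue calibrates V, Bonanza estimates V by a nested fixed-point
% procedure, and AlphaGo's policy network approximates P(a | s), the object that
% Hotz-Miller-style estimators invert to recover the value function.
```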


Hal Varian, University of California at Berkeley

Automation and Procreation (slides) (video)

It is widely believed that robotics and AI will reduce the demand for certain types of labor. What is not so widely appreciated is that demographic forces will reduce the supply of labor in most developed countries, in some cases by quite large amounts. Varian shows that two huge demographic shocks in the twentieth century -- the baby boom and rising female labor force participation -- made labor markets "loose," but that during the next 25-30 years labor markets will likely be "tight" unless there are dramatic advances in robotics.


Isil Erel, Ohio State University; Léa H. Stern, University of Washington; Chenhao Tan, University of Colorado, Boulder; and Michael S. Weisbach, Ohio State University and NBER

Selecting Directors Using Machine Learning (NBER Working Paper No. 24435) (slides) (video)

Can an algorithm assist firms in their hiring decisions for corporate directors? Erel, Stern, Tan, and Weisbach propose a method of selecting boards of directors that relies on machine learning. They develop algorithms with the goal of selecting directors that would be preferred by the shareholders of a particular firm. Using shareholder support for individual directors in subsequent elections and firm profitability as performance measures, the researchers construct algorithms to make out-of-sample predictions of these measures of director performance. They then test the quality of these predictions and show that, when compared with a realistic pool of potential candidates, directors predicted by the algorithms to do poorly indeed rank much lower in performance than directors predicted to do well. Relative to the benchmark provided by the algorithms, firm-selected directors are more likely to be male, to have previously held more directorships, to have fewer qualifications, and to have larger networks. Machine learning holds promise for understanding the process by which existing governance structures are chosen, and has the potential to help real-world firms improve their governance.
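
A hedged sketch of the prediction step only, with hypothetical feature names, a hypothetical data file, and gradient boosting standing in for whatever learner the authors actually use:

```python
# Sketch under stated assumptions: feature names, the model choice, and the data
# file below are hypothetical, not the paper's exact design.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("director_candidates.csv")           # hypothetical dataset
features = ["age", "num_prior_boards", "network_size", "is_male", "has_mba"]
target = "future_shareholder_support"                  # performance measure from later elections

train, test = train_test_split(df, test_size=0.3, random_state=0)
model = GradientBoostingRegressor().fit(train[features], train[target])

# Compare actual performance of candidates predicted to do poorly vs. well.
test = test.assign(predicted=model.predict(test[features]))
predicted_poor = test.nsmallest(50, "predicted")
predicted_good = test.nlargest(50, "predicted")
print("actual support, predicted-poor:", predicted_poor[target].mean())
print("actual support, predicted-good:", predicted_good[target].mean())
```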


Kristina McElheran, University of Toronto

Economic Measurement of AI (video)

McElheran provides a partial review of empirical research on how firms use new information technologies, arguing for early and careful measurement of recent advances in machine learning and artificial intelligence (AI). She summarizes novel findings on important precursors to AI, such as big data analytics and cloud computing, distilling key implications for researchers and policy makers interested in the diffusion and economic impact of AI. She further highlights a new data collection effort by the U.S. Census Bureau that promises to make progress with representative micro data on both the use and organizational context of new business technologies across the U.S. economy.


Bo Cowgill, Columbia University

Impact of Algorithms on Judicial Discretion: Evidence from Regression Discontinuities (video)

How do judges use algorithmic suggestions in criminal trials? Cowgill studies criminal cases in Broward County, Florida, where judges are provided guidance about defendants' recidivism risk by a predictive algorithm derived from historical data. The algorithm's output is continuous but is shared with judges in rounded buckets (low, medium, and high). Using the underlying continuous score, Cowgill examines judicial decisions close to the thresholds using a regression discontinuity design. Defendants slightly above the thresholds are sentenced to an average of ten extra days in jail before trial. Black defendants' outcomes are more sensitive to the thresholds than white defendants', although this is partly the result of a different mix of crimes. When jail decisions are linked to outcomes, the findings show that the extra jail time given to defendants above the thresholds corresponds to a small increase in recidivism. These results suggest that algorithmic suggestions have a causal impact on criminal proceedings and recidivism.
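
A minimal sketch of the regression discontinuity comparison, assuming hypothetical variable names, a hypothetical data file, and an illustrative threshold and bandwidth:

```python
# Sketch under stated assumptions: the file, column names, threshold, and bandwidth
# below are illustrative, not the paper's actual data or choices.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("broward_cases.csv")                  # hypothetical dataset
threshold, bandwidth = 4.5, 1.0                        # e.g., a low/medium cutoff of the score

local = df[(df.risk_score - threshold).abs() <= bandwidth].copy()
local["above"] = (local.risk_score >= threshold).astype(int)
local["running"] = local.risk_score - threshold

# Local linear RD: separate slopes on each side; the discontinuity is the 'above' coefficient.
fit = smf.ols("pretrial_jail_days ~ above + running + above:running", data=local).fit()
print(fit.params["above"], fit.conf_int().loc["above"])
```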


Daron Acemoglu, MIT and NBER, and Pascual Restrepo, Boston University

Automation and New Tasks: The Implications of Task Content of Technology for Labor Demand (video)

Acemoglu and Restrepo present a framework for understanding the effects of automation and other types of technological changes on labor demand, and use it for interpreting changes in US employment over the recent past. Automation enables capital to replace labor in tasks it was previously engaged in. Because of the displacement effect it generates, automation is qualitatively different from factor-augmenting technological changes -- it always reduces the labor share in value added (of an industry or economy) and may also reduce employment and wages even as it raises productivity. The effects of automation are counterbalanced by the creation of new tasks in which labor has a comparative advantage, which generates a reinstatement effect raising the labor share and labor demand by expanding the set of tasks allocated to labor. The researchers show how the role of changes in the task content of production -- due to automation and new tasks -- can be inferred from industry-level data. The researchers' empirical exercise suggests that the slower growth of employment over the last three decades is accounted for by an acceleration in the displacement effect, especially in manufacturing, a weaker reinstatement effect, and slower growth of productivity than in previous decades.
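
A stylized version of the decomposition described above, in our own notation rather than the paper's exact expressions:

```latex
% Stylized decomposition of industry labor demand (illustrative notation only).
\[
  \Delta \ln L \;=\;
  \underbrace{\Delta \ln Y}_{\text{productivity effect}}
  \;+\;
  \underbrace{\Delta \ln s^{L}\big|_{\text{automation}}}_{\text{displacement effect }(<0)}
  \;+\;
  \underbrace{\Delta \ln s^{L}\big|_{\text{new tasks}}}_{\text{reinstatement effect }(>0)},
\]
% where s^L is the labor share of industry value added. Automation lowers the labor
% share by reallocating existing tasks to capital; new tasks raise it by expanding the
% set of tasks allocated to labor, and both pieces can be inferred from industry-level data.
```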


Gillian Hadfield, University of Toronto

Incomplete Contracting and AI Alignment (slides) (video)

Hadfield suggests that the analysis of incomplete contracting developed by law and economics researchers can provide a useful framework for understanding the AI alignment problem and help to generate a systematic approach to finding solutions. Hadfield first provides an overview of the incomplete contracting literature and explores parallels between this work and the problem of AI alignment. As is emphasized, misalignment between principal and agent is a core focus of economic analysis. Hadfield highlights some technical results from the economics literature on incomplete contracts that may provide insights for AI alignment researchers. The research's core contribution, however, is to bring to bear an insight that economists have been urged to absorb from legal scholars and other behavioral scientists: the fact that human contracting is supported by substantial amounts of external structure, such as generally available institutions (culture, law) that can supply implied terms to fill the gaps in incomplete contracts. Hadfield proposes a research agenda for AI alignment work that focuses on the problem of how to build AI that can replicate the human cognitive processes that connect individual incomplete contracts with this supporting external structure.