Economics of Artificial Intelligence

An NBER conference on the Economics of Artificial Intelligence took place online on September 24-25. Research Associates Ajay K. Agrawal, Joshua S. Gans, and Avi Goldfarb of the University of Toronto, and Catherine Tucker of MIT organized the meeting, sponsored by the Alfred P. Sloan Foundation and the Creative Destruction Lab. The following researchers' papers were presented and discussed:


Martin Beraja, MIT and NBER; David Y. Yang, Harvard University and NBER; and Noam Yuchtman, London School of Economics and NBER

Data-Intensive Innovation and the State: Evidence from AI Firms in China (NBER Working Paper No. 27723)

Data-intensive technologies, like AI, are increasingly widespread. Beraja, Yang, and Yuchtman argue that the direction of innovation and growth in data-intensive economies may be crucially shaped by the state because: (i) the state is a key collector of data and (ii) data is sharable across uses within firms, potentially generating economies of scope. They study a prototypical setting: facial recognition AI in China. Collecting comprehensive data on firms and government procurement contracts, the researchers find evidence of economies of scope arising from government data: firms awarded contracts providing access to more government data produce both more government and commercial software. They then build a directed technical change model to study the implications of government data access for the direction of innovation, growth, and welfare. The researchers conclude with three applications showing how data-intensive innovation may be shaped by the state: both directly, by setting industrial policy, and indirectly, by choosing surveillance levels and privacy regulations.


Stephanie Assad and Robert Clark, Queen's University; Daniel Ershov, Toulouse School of Economics; and Lei Xu, Bank of Canada

Algorithmic Pricing and Competition: Empirical Evidence from the German Retail Gasoline Market

Economic theory provides ambiguous and conflicting predictions about the relationship between algorithmic pricing and competition. In this paper, Assad, Clark, Ershov, and Xu provide the first empirical analysis of this relationship. They study Germany's retail gasoline market, where algorithmic-pricing software became widely available by mid-2017 and for which they have access to comprehensive, high-frequency price data. The analysis involves two steps. First, they identify stations that adopt algorithmic-pricing software by testing for structural breaks in markers associated with algorithmic pricing. They find a large number of station-level structural breaks around the suspected time of large-scale adoption. Second, the researchers investigate the impact of adoption on outcomes linked to competition. Since station-level adoption is endogenous, they instrument for it using brand headquarters-level adoption decisions. The IV results show that adoption increases margins by 12%, but only in non-monopoly markets. Furthermore, restricting attention to duopoly markets, they find that market-level margins do not change when only one of the two stations adopts, but increase by nearly 30% in markets where both do. These results suggest that AI adoption has a significant effect on competition.
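
The first step of this design, flagging adoption through station-level structural breaks, can be illustrated with a simple Chow-type test. The sketch below is only a stylized illustration of that idea, not the authors' code; the regressors and break date in the usage comment are hypothetical placeholders.

    import numpy as np
    from scipy import stats

    def chow_test(y, X, break_idx):
        """F-test for a structural break at break_idx in the OLS fit of y on X."""
        def ssr(y_, X_):
            beta, *_ = np.linalg.lstsq(X_, y_, rcond=None)
            resid = y_ - X_ @ beta
            return resid @ resid
        k = X.shape[1]
        ssr_pooled = ssr(y, X)
        ssr_split = ssr(y[:break_idx], X[:break_idx]) + ssr(y[break_idx:], X[break_idx:])
        f_stat = ((ssr_pooled - ssr_split) / k) / (ssr_split / (len(y) - 2 * k))
        return f_stat, stats.f.sf(f_stat, k, len(y) - 2 * k)

    # Hypothetical usage: a station's daily margins regressed on a constant and a
    # time trend, testing for a break at a candidate adoption date (here, the midpoint).
    # T = len(margins); X = np.column_stack([np.ones(T), np.arange(T)])
    # f_stat, p_value = chow_test(margins, X, break_idx=T // 2)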


Wei Jiang, Columbia University and NBER, and Sean Cao, Baozhong Yang, and Alan L. Zhang, Georgia State University

How to Talk When a Machine is Listening: Corporate Disclosure in the Age of AI

Cao, Jiang, Yang, and Zhang analyze how corporate disclosure has been reshaped by machine processors employed by algorithmic traders, robot investment advisors, and quantitative analysts. The findings indicate that increasing machine and AI readership, proxied by machine downloads, motivates firms to prepare filings that are friendlier to machine parsing and processing. Moreover, firms with high expected machine downloads manage textual sentiment and audio emotion in ways catered to machine and AI readers, for example by differentially avoiding words that computational algorithms, as opposed to human readers, perceive as negative, and by exhibiting speech emotion favored by machine learning processors. The publication of Loughran and McDonald (2011) helps attribute the change in measured sentiment to machine and AI readership. While existing research has explored how investors and researchers apply machine learning and computational tools to quantify qualitative information from disclosures and news, this study is the first to identify and analyze the feedback effect on corporate disclosure decisions, i.e., how companies adjust the way they talk knowing that machines are listening.
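
As a rough illustration of the kind of dictionary-based sentiment measure involved, the sketch below computes the share of Loughran-McDonald-style negative words in a filing. The word list is a tiny hypothetical subset, not the actual dictionary, and this is not the authors' pipeline.

    import re

    # Illustrative subset only; the real Loughran-McDonald word list is far larger.
    LM_NEGATIVE = {"loss", "impairment", "litigation", "adverse", "decline", "default"}

    def negative_word_share(text):
        """Share of tokens in a filing that appear on the negative word list."""
        tokens = re.findall(r"[a-z]+", text.lower())
        return sum(t in LM_NEGATIVE for t in tokens) / max(len(tokens), 1)

    print(negative_word_share("Adverse litigation and impairment charges reduced margins."))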


Kate Bundorf and Maria Polyakova, Stanford University and NBER, and Ming Tai-Seale, University of California, San Diego

How do Humans Interact with Algorithms? Experimental Evidence from Health Insurance (NBER Working Paper No. 25976)

Algorithms are increasingly available to help consumers make purchasing decisions. How does algorithmic advice affect human decisions, and what types of consumers are likely to use such advice? Bundorf, Polyakova, and Tai-Seale examine these questions in a randomized controlled trial in which consumers choosing prescription drug insurance plans were exposed to personalized information, either with or without algorithmic expert recommendations, relative to a control group receiving no personalized information. The researchers develop an empirical model of consumer choice to examine the mechanisms by which expert recommendations affect choices. The experimental data are consistent with a version of the model in which consumers have noisy beliefs not only about product features but also about the parameters of their own utility function. Expert advice, in turn, changes how consumers value product features by changing their beliefs about these utility parameters. Further, there is substantial selection into who demands expert advice: consumers who the researchers predict would have responded more to algorithmic advice were less likely to demand it.
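
One stylized way to formalize the mechanism, not necessarily the authors' exact specification: consumer $i$ evaluates plan $j$ with noisy beliefs about her own preference parameters,

\[
\tilde{U}_{ij} = x_j'\tilde{\beta}_i + \varepsilon_{ij}, \qquad \tilde{\beta}_i = \beta_i + \nu_i,
\]

where $x_j$ are plan features, $\beta_i$ are the true utility parameters, and $\nu_i$ is belief noise. In this reading, expert advice shifts $\tilde{\beta}_i$ toward $\beta_i$, reducing the variance of $\nu_i$; it changes how features are valued rather than merely adding information about the features $x_j$ themselves.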


Laura Blattner, Stanford University, and Scott T. Nelson, University of Chicago

How Costly is Noise? Data and Disparities in the US Mortgage Market


Anton Korinek, University of Virginia and NBER, and Joseph E. Stiglitz, Columbia University and NBER

Steering Technological Progress

Rapid progress in new technologies such as Artificial Intelligence has recently led to widespread anxiety about potential job losses. Korinek and Stiglitz ask how to guide innovative efforts so as to increase labor demand and create better-paying jobs. They develop a theoretical framework to identify the properties that make an innovation desirable from the perspective of workers, including its technological complementarity to labor, the factor share of labor in producing the goods involved, and the relative income of the affected workers. An example of a labor-friendly innovation is an intelligent assistant that enhances the productivity of human workers. The researchers also discuss measures to steer technological progress in a direction desirable for workers, ranging from nudges for entrepreneurs to changes in tax, labor market, and intellectual property policies to direct subsidies and taxes on innovation. They find that, in the future, progress should increasingly be steered toward providing workers with utility from the non-monetary aspects of their jobs.
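
A stylized way to summarize the three properties (an illustrative decomposition, not the authors' formula): the worker-welfare effect of a small innovation $dA$ can be written as

\[
dW \;\approx\; \sum_i \omega_i \, s_i^L \, \varepsilon_i \, dA,
\]

where $\varepsilon_i$ captures the innovation's complementarity with the labor of worker group $i$ (the responsiveness of that group's wage to $dA$), $s_i^L$ is the labor share of that group in producing the affected goods, and $\omega_i$ is a welfare weight that is higher for lower-income workers.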


Stephan T. Zheng, Alexander Trott, Sunil Srinivasa, Melvin Gruesbeck, and Richard Socher, Salesforce Research; Nikhil Naik, MIT; and David Parkes, Harvard University

The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies

Tackling real-world socio-economic challenges requires designing and testing economic policies. However, this is hard in practice, due to a lack of appropriate (micro-level) economic data and limited opportunity to experiment. Zheng, Trott, Srinivasa, Naik, Gruesbeck, Parkes, and Socher train social planners that discover tax policies in dynamic economies which effectively trade off economic equality and productivity. The researchers propose a two-level deep reinforcement learning approach to learn dynamic tax policies, based on economic simulations in which both agents and a government learn and adapt. The data-driven approach does not rely on economic modeling assumptions and learns from observational data alone. They make four main contributions. First, they present an economic simulation environment that features competitive pressures and market dynamics. The researchers validate the simulation by showing that baseline tax systems perform in a way that is consistent with economic theory, including with regard to learned agent behaviors and specializations. Second, they show that AI-driven tax policies improve the trade-off between equality and productivity by 16% over baseline policies, including the prominent Saez tax framework. Third, they showcase several emergent features: AI-driven tax policies are qualitatively different from baselines, setting a higher top tax rate and higher net subsidies for low incomes. Moreover, AI-driven tax policies perform strongly in the face of emergent tax-gaming strategies learned by AI agents. Lastly, AI-driven tax policies are also effective when used in experiments with human participants: in experiments conducted on MTurk, an AI tax policy provides an equality-productivity trade-off similar to that of the Saez framework, along with higher inverse-income-weighted social welfare.
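
The two-level structure can be sketched in toy form: an inner loop in which agents adapt their labor supply to the current tax schedule, and an outer loop in which the planner adjusts marginal rates to maximize an equality-times-productivity objective of the kind described above. The code below uses grid best responses and hill climbing rather than deep reinforcement learning, and every parameter is illustrative rather than taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    skills = rng.lognormal(mean=0.0, sigma=0.5, size=30)   # heterogeneous agents
    brackets = np.array([0.0, 10.0, 20.0])                 # income thresholds
    labor_grid = np.linspace(0.0, 1.0, 51)

    def tax(income, rates):
        """Piecewise-linear tax: marginal rate rates[k] applies within bracket k."""
        owed, lower = 0.0, 0.0
        for rate, upper in zip(rates, list(brackets[1:]) + [np.inf]):
            owed += rate * max(0.0, min(income, upper) - lower)
            lower = upper
        return owed

    def best_response(skill, rates):
        """Inner loop: an agent picks labor to maximize after-tax income net of effort cost."""
        incomes = skill * 10.0 * labor_grid
        taxes = np.array([tax(y, rates) for y in incomes])
        utility = incomes - taxes - 5.0 * labor_grid ** 2
        return incomes[np.argmax(utility)]

    def planner_objective(rates):
        """Equality times productivity, evaluated against adapting agents."""
        incomes = np.array([best_response(s, rates) for s in skills])
        after_tax = incomes - np.array([tax(y, rates) for y in incomes])
        productivity = incomes.sum()
        gini = np.abs(after_tax[:, None] - after_tax[None, :]).mean() / (2 * after_tax.mean() + 1e-9)
        return (1.0 - gini) * productivity

    # Outer loop: the planner hill-climbs the marginal rates.
    rates = np.array([0.1, 0.2, 0.3])
    current = planner_objective(rates)
    for _ in range(100):
        trial = rates.copy()
        i = rng.integers(len(rates))
        trial[i] = np.clip(trial[i] + rng.normal(scale=0.05), 0.0, 1.0)
        value = planner_objective(trial)
        if value > current:
            rates, current = trial, value
    print("learned marginal rates:", np.round(rates, 2))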


Simona Abis, Columbia University, and Laura Veldkamp, Columbia University and NBER

The Changing Economics of Knowledge Production

Big data technologies change the way in which data and human labor combine to create knowledge. Is this a modest technological advance or a transformation of our basic economic processes? Using hiring and wage data from the financial sector, Abis and Veldkamp estimate firms' data stocks and the shape of their knowledge production functions. Knowing how much production functions have changed is informative about the likely long-run changes in output, factor shares, and the distribution of income due to the new big data technologies. Using data from the investment management industry, the results suggest that the labor share of income in knowledge work may fall from 44% to 27%, and the researchers quantify the corresponding increase in the value of data.
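
As a stylized illustration of what a changing knowledge production function implies (not necessarily the authors' estimated specification), suppose knowledge is produced Cobb-Douglas from analyst labor $L$ and data $D$:

\[
K = A\, L^{\alpha} D^{1-\alpha}.
\]

With competitive factor payments, $\alpha$ is the labor share of income in knowledge work; a decline in the estimated $\alpha$ from roughly 0.44 to 0.27 under big data technologies corresponds to the reported fall in the labor share and a matching rise in the share, and hence the value, attributable to data.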


Emma J. Pierson and Jure Leskovec, Stanford University; David M. Cutler, Harvard University and NBER; Sendhil Mullainathan, University of Chicago and NBER; and Ziad Obermeyer, University of California, Berkeley

An Algorithmic Approach to Explaining Why the Underserved Feel More Pain

Underserved populations experience higher levels of pain. These disparities persist even after controlling for the objective severity of diseases like osteoarthritis, as graded by human experts using medical images, raising the possibility that underserved patients' pain stems from factors external to the knee (e.g., stress). Pierson, Leskovec, Cutler, Mullainathan, and Obermeyer use deep learning to create an alternative measure of the severity of osteoarthritis, using knee x-rays to predict patients' experienced pain. They show that this approach dramatically reduces unexplained racial disparities in pain. Relative to standard measures of severity graded by radiologists, which explain only 9% (95% CI, 3%-16%) of racial disparities in pain, algorithmic predictions explain 43%, or 4.7x more (95% CI, 3.2x-11.8x), with similar results for lower-income and less-educated patients. These results suggest that much of underserved patients' pain stems from factors within the knee, but ones not reflected in standard radiographic measures of severity. The researchers show that the algorithm's ability to explain disparities is rooted in the racial and socioeconomic diversity of the training set, and that it does not simply reconstruct race or known radiographic features. Since algorithmic predictions better capture underserved patients' pain, they could potentially redress disparities in access to treatments: because patients with severe osteoarthritis are empirically more likely to receive arthroplasty, access to surgery would double for Black patients (22% vs. 11%; p < 0.001) if an algorithmic severity measure were used instead of standard measures.
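
The headline "share of the racial disparity explained" can be read through a simple regression decomposition (a stylized illustration, not necessarily the authors' exact estimator):

\[
\text{share explained by } S \;=\; 1 - \frac{\beta_{\text{race}}^{\,\text{controlling for } S}}{\beta_{\text{race}}^{\,\text{unadjusted}}},
\]

where $\beta_{\text{race}}^{\,\text{unadjusted}}$ is the race coefficient in a regression of reported pain on race alone, and the numerator adds the severity measure $S$ as a control, with $S$ being either the radiologist grade or the algorithmic prediction.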


Dirk Bergemann and Tan Gan, Yale University, and Alessandro Bonatti, MIT

The Economics of Social Data

Bergemann, Bonatti, and Gan propose a model of data intermediation to analyze the incentives for sharing individual data in the presence of informational externalities. A data intermediary acquires signals from individual consumers regarding their preferences. The intermediary resells the information in a product market in which firms and consumers can tailor their choices to the demand data. The social dimension of the individual data -- whereby an individual's data are predictive of the behavior of others -- generates a data externality that can reduce the intermediary's cost of acquiring information. The researchers derive the intermediary's optimal data policy and establish that it preserves the privacy of consumer identities while providing precise information about market demand to the firms. This policy enables the intermediary to capture the total value of the information as the number of consumers becomes large.
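
The data externality can be illustrated with a simple signal structure (a stylized example, not the full model): consumer $i$'s data take the form

\[
s_i = \theta + \varepsilon_i,
\]

combining idiosyncratic noise $\varepsilon_i$ with a common preference component $\theta$. Because every $s_i$ is informative about $\theta$, and hence about other consumers' demand, each individual's decision to sell data imposes an externality on the others, which is what lowers the intermediary's cost of acquiring information.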


Danielle Li, MIT and NBER; Lindsey R. Raymond, MIT; and Peter Bergman, Columbia University and NBER

Hiring as Exploration (NBER Working Paper No. 27736)

In looking for the best workers over time, firms must balance exploitation (selecting from groups with proven track records) with exploration (selecting from under-represented groups to learn about quality). Yet modern hiring algorithms, based on "supervised learning" approaches, are designed solely for exploitation. Li, Raymond, and Bergman view hiring as a contextual bandit problem and build a resume screening algorithm that values exploration by evaluating candidates according to their upside potential. Using data from professional services recruiting within a Fortune 500 firm, they show that this approach improves both the quality (as measured by eventual offer and acceptance rates) and the diversity of candidates selected for an interview relative to the firm's existing practices. The same is not true for traditional supervised learning-based algorithms, which improve quality but select far fewer minority applicants. In an extension, the researchers show that exploration-based algorithms are also able to learn more effectively about simulated changes in applicant quality over time. Together, the results highlight the importance of incorporating exploration in developing hiring algorithms that are potentially both more efficient and equitable.
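
The exploration idea, scoring applicants by upside potential rather than by point-predicted quality alone, can be sketched with an upper-confidence-bound rule. The code below is a generic LinUCB-style stand-in, not the authors' algorithm; the features, outcomes, and parameters are hypothetical.

    import numpy as np

    class UCBScreener:
        """Scores applicants by predicted quality plus an uncertainty bonus."""
        def __init__(self, n_features, alpha=1.0):
            self.alpha = alpha                  # width of the exploration bonus
            self.A = np.eye(n_features)         # ridge-regularized X'X
            self.b = np.zeros(n_features)       # X'y

        def update(self, x, outcome):
            """Record an interviewed applicant's features and realized outcome."""
            self.A += np.outer(x, x)
            self.b += outcome * x

        def score(self, x):
            """Upside potential: point prediction plus a bonus for unfamiliar profiles."""
            A_inv = np.linalg.inv(self.A)
            theta = A_inv @ self.b
            return float(x @ theta + self.alpha * np.sqrt(x @ A_inv @ x))

    # Hypothetical usage: interview the applicant with the highest UCB score,
    # then feed the observed outcome back into the model.
    screener = UCBScreener(n_features=3)
    applicants = np.random.default_rng(1).normal(size=(5, 3))
    chosen = max(range(len(applicants)), key=lambda i: screener.score(applicants[i]))
    screener.update(applicants[chosen], outcome=1.0)   # e.g., received and accepted an offer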


Debraj Ray, New York University and NBER, and Dilip Mookherjee, Boston University and NBER

Growth, Automation and the Long Run Share of Labor

Ray and Mookherjee study a model of long-run growth and distribution with two key features. First, there is an asymmetry between physical and human capital. Individual claims on the former can be reproduced linearly and indefinitely. Because no similar claim on humans is possible, human capital accumulation instead takes the form of acquiring occupational skills, the returns to which are determined by an endogenous collection of wages. Second, physical capital can take the form of machines that are complementary to human labor, or robots, a substitute for it. Under a self-replication condition on the production of robot services, the theory delivers progressive automation, with the share of labor in national income converging to zero. The displacement of human labor is gradual, and real wages could rise indefinitely. The results extend to endogenous technical change, as well as relaxations of the sharply posited human-physical asymmetry.
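
One stylized production structure that captures the distinction (an illustration, not the authors' model) is

\[
Y = F\big(K_M,\; L + R\big),
\]

where machines $K_M$ complement labor while robot services $R$ enter additively with human labor $L$. If robot services can themselves be reproduced from output at constant cost (the self-replication condition), $R$ grows without bound, so human labor becomes a vanishing fraction of the composite input and its income share converges to zero, even though the wage, tied to the marginal product of the composite, can continue to rise.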


Katherine A. Stapleton, University of Oxford, and Michael Webb, Stanford University

Automation, Trade and Multinational Activity: Micro Evidence from Spain

Stapleton and Webb use a rich dataset of Spanish manufacturing firms from 1990 to 2016 to shed new light on how automation in a high-income country affects trade and multinational activity involving lower-income countries. The researchers exploit supply-side improvements in the technical capabilities of robots, as described in robotics patents, that made it feasible over time to automate certain tasks and not others. They show that, contrary to the speculation that automation in high-income countries will cause the reshoring of production, the use of robots in Spanish firms actually had a positive impact on their imports from, and number of affiliates in, lower-income countries. Robot adoption caused firms to expand production and increase labor productivity and TFP. The impact of robot adoption on offshoring then varied depending on whether a firm first automated or first offshored production. For firms that had not yet offshored production to lower-income countries, the productivity- and revenue-enhancing effects of robot adoption made them more likely to start doing so. By contrast, for firms that were already offshoring to lower-income countries, robot adoption had no impact on offshoring. The effect for the former group dominates, so that the net impact of robot adoption on offshoring is positive. The researchers show that these findings can be explained in a framework that incorporates firm heterogeneity and the choice among automation, offshoring, and performing tasks at home, and in which automation and offshoring both involve upfront fixed costs, so that their sequencing matters.


Ashesh Rambachan, Harvard University; Jon Kleinberg, Cornell University; Jens Ludwig, University of Chicago and NBER; and Sendhil Mullainathan, University of Chicago and NBER

An Economic Approach to Regulating Algorithms

There is growing concern about "algorithmic bias" -- that predictive algorithms used in decision-making might bake in or exacerbate discrimination in society. When will these "biases" arise? What should be done about them? Rambachan, Kleinberg, Ludwig, and Mullainathan argue that such questions are naturally answered using the tools of welfare economics: a social welfare function for the policymaker, a private objective function for the algorithm designer, and a model of their information sets and interaction. The researchers build such a model that allows the training data to exhibit a wide range of "biases." The prevailing wisdom is that biased data change how the algorithm is trained and whether an algorithm should be used at all. In contrast, the researchers find two striking irrelevance results. First, when the social planner builds the algorithm, her equity preference has no effect on the training procedure. So long as the data, however biased, contain signal, they will be used and the algorithm built on top will be the same. Any characteristic that is predictive of the outcome of interest, including group membership, will be used. Second, they study how the social planner regulates private (possibly discriminatory) actors building algorithms. Optimal regulation depends crucially on the disclosure regime. Absent disclosure, algorithms are regulated much like human decision-makers: disparate impact and disparate treatment rules dictate what is allowed. In contrast, under stringent disclosure of all underlying algorithmic inputs (data, training procedure and decision rule), once again the researchers find an irrelevance result: private actors can use any predictive characteristic. Additionally, now algorithms strictly reduce the extent of discrimination against protected groups relative to a world in which humans make all the decisions. As these results run counter to prevailing wisdom on algorithmic bias, at a minimum, they provide a baseline set of assumptions that must be altered to generate different conclusions.


Daron Acemoglu and David Autor, MIT and NBER; Pascual Restrepo, Boston University and NBER; and Jonathon Hazell, MIT

AI and Jobs: Evidence from Online Vacancies

Artificial intelligence (AI) technologies are developing rapidly, yet there is limited evidence on how AI is affecting hiring in the job categories most likely to be either substituted or complemented by AI. Acemoglu, Autor, Restrepo, and Hazell study the impact of AI on US hiring from 2010 onwards, using establishment-level data on vacancies with detailed occupation information comprising the near-universe of online vacancies in the US. They classify establishments as "AI exposed," meaning likely to replace workers with AI, based on the detailed skill mix garnered from their job postings in 2010. The researchers offer three sets of findings. First, they document rapid growth in AI-related vacancies over 2010-2018 that is not limited to the information technology sector and is greater in AI-exposed establishments. Second, AI-exposed establishments reduce vacancy postings in occupations that are "at risk" of AI replacement and increase vacancy postings in occupations that are not at risk. These countervailing effects are essentially fully offsetting: exposed establishments do not significantly alter total vacancy postings. Finally, the researchers find suggestive evidence that AI adoption has non-neutral effects on aggregate vacancy postings at the local labor market level: when an establishment posts more AI vacancies, other establishments in the same labor market post fewer overall vacancies. These "spillovers" are confirmed when they apply a novel identification strategy that leverages the occupation mix in establishments' non-local headquarters.