The Allocation of Decision Authority to Human and Artificial Intelligence
The allocation of decision authority by a principal to either a human agent or an artificial intelligence (AI) is examined. The principal trades off an AI's more aligned choice against the need to motivate the human agent to expend effort in learning choice payoffs. When agent effort is desired, it is shown that the principal is more likely to give that agent decision authority, to reduce investment in AI reliability, and to adopt an AI that may be biased. Organizational design considerations are thus likely to affect how AIs are trained.
Thanks to Jorge Guzman for an excellent discussion. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.
Susan C. Athey
Susan Athey serves on the boards of directors of Expedia (EXPE), Lending Club (LC), Rover, Ripple, Turo, Innovations for Poverty Action, and CoinCenter. She previously had a long-term consulting relationship with Microsoft. She also advises the venture capital firms X/Seed Capital and NYCA Partners, and works on various consulting cases for a variety of clients through Keystone Strategy. She is the founding faculty director of the Golub Capital Social Impact Lab, as well as associate director of Stanford University's Institute for Human-Centered Artificial Intelligence.
Susan C. Athey & Kevin A. Bryan & Joshua S. Gans, 2020. "The Allocation of Decision Authority to Human and Artificial Intelligence," AEA Papers and Proceedings, vol 110, pages 80-84.