The A.I. Dilemma: Growth versus Existential Risk
Advances in artificial intelligence (A.I.) are a double-edged sword. On the one hand, they may increase economic growth as A.I. augments our ability to innovate. On the other hand, many experts worry that these advances entail existential risk: creating a superintelligence misaligned with human values could lead to catastrophic outcomes, possibly even human extinction. This paper considers the optimal use of A.I. technology in the presence of these opportunities and risks. Under what conditions should we continue the rapid progress of A.I., and under what conditions should we stop?
Nothing to disclose. I’m grateful to Jean-Felix Brouillette, Tom Davidson, Sebastian Di Tella, Maya Eden, Joshua Gans, Tom Houlden, Pete Klenow, Anton Korinek, Kevin Kuruc, Pascual Restrepo, Charlotte Siegmann, Chris Tonetti, Phil Trammell and seminar participants at the Markus Academy, the Minneapolis Fed, the NBER A.I. conference, Oxford, PSE Macro Days 2023, and Stanford for helpful comments and discussions. The views expressed herein are those of the author and do not necessarily reflect the views of the National Bureau of Economic Research.