Two-Armed Restless Bandits with Imperfect Information: Stochastic Control and Indexability
We present a two-armed bandit model of decision making under uncertainty where the expected return to investing in the "risky" arm increases when choosing that arm and decreases when choosing the "safe" arm. These dynamics are natural in applications such as human capital development, job search, and occupational choice. Using new insights from stochastic control, along with a monotonicity condition on the payoff dynamics, we show that optimal strategies in our model are stopping rules that can be characterized by an index which formally coincides with Gittins' index. Our result implies the indexability of a new class of "restless" bandit models.
We are grateful to Richard Holden, Peter Michor, Derek Neal, Ariel Pakes, Yuliy Sannikov, Mete Soner, Josef Teichmann and seminar participants at Barcelona GSE and Harvard University for helpful comments and suggestions. Financial support from the Education Innovation Laboratory at Harvard University is gratefully acknowledged. Correspondence can be addressed to the authors by e-mail: firstname.lastname@example.org [Fryer] or email@example.com [Harms]. The usual caveat applies. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.