Designing Human-AI Collaboration: A Sufficient-Statistic Approach
Working Paper 33949
DOI 10.3386/w33949
We propose a sufficient statistic for designing AI information-disclosure and selective-automation policies. The approach accommodates endogenous and biased beliefs, as well as effort crowd-out, without requiring a structural model of human decision-making. We deploy and validate the approach in a fact-checking experiment. Humans under-respond to AI predictions and reduce effort when shown confident AI predictions. This under-response is driven by overconfidence in their own signal rather than under-confidence in the AI. The optimal policy automates decisions where the AI is confident and delegates the remaining decisions to humans while fully disclosing the AI prediction. Although automation is valuable, the benefit of assisting humans with AI is negligible.