Behavioral Economics of AI: LLM Biases and Corrections
Working Paper 34745
DOI 10.3386/w34745
Do generative AI models, particularly large language models (LLMs), exhibit systematic behavioral biases in economic and financial decisions? If so, how can these biases be mitigated? Drawing on the cognitive psychology and experimental economics literatures, we conduct the most comprehensive set of experiments to date, originally designed to document human biases, on prominent LLM families across model versions and scales. We document systematic patterns in LLM behavior. In preference-based tasks, responses become more human-like as models become more advanced or larger, while in belief-based tasks, advanced large-scale models frequently generate rational responses. Prompting LLMs to make rational decisions reduces these biases.
Pietro Bini, Lin William Cong, Xing Huang, and Lawrence J. Jin, "Behavioral Economics of AI: LLM Biases and Corrections," NBER Working Paper 34745 (2026), https://doi.org/10.3386/w34745.