How Does AI Distribute the Pie? Large Language Models and the Ultimatum Game
As Large Language Models (LLMs) are increasingly tasked with autonomous decision making, understanding their behavior in strategic settings is crucial. We investigate the choices of various LLMs in the Ultimatum Game, a setting where human behavior notably deviates from theoretical rationality. We conduct experiments varying the stake size and the nature of the opponent (Human vs. AI) across both Proposer and Responder roles. Three key results emerge. First, LLM behavior is heterogeneous but predictable when conditioning on stake size and player types. Second, while some models approximate the rational benchmark and others mimic human social preferences, a distinct “altruistic” mode emerges where LLMs propose hyper-fair distributions (greater than 50%). Third, LLM Proposers forgo a large share of total payoff, and an even larger share when the Responder is human. These findings highlight the need for careful testing before deploying AI agents in economic settings.
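The behaviors described in the abstract can be made concrete with a minimal sketch of the Ultimatum Game's payoff structure: a Proposer splits a stake, the Responder accepts or rejects, and rejection leaves both with nothing. The function names and numeric values below are illustrative assumptions, not taken from the paper's experimental design.

```python
# Minimal sketch of the Ultimatum Game (illustrative, not the paper's setup):
# the Proposer offers a share of the stake; the Responder accepts or rejects.
# Rejection yields zero for both players.

def payoffs(stake, offer, accepted):
    """Return (proposer_payoff, responder_payoff) for a given offer and response."""
    if not accepted:
        return (0.0, 0.0)
    return (stake - offer, offer)

def rational_responder(offer):
    # Theoretical benchmark: a payoff-maximizing Responder accepts
    # any strictly positive offer, since rejection pays zero.
    return offer > 0

stake = 100.0

# Subgame-perfect outcome: the Proposer offers the smallest positive unit.
print(payoffs(stake, 1.0, rational_responder(1.0)))    # (99.0, 1.0)

# A "hyper-fair" proposal (more than 50% of the stake), the altruistic
# mode the abstract notes for some LLM Proposers.
print(payoffs(stake, 60.0, rational_responder(60.0)))  # (40.0, 60.0)
```

Human Responders typically reject low offers despite the rational benchmark, which is why observed Proposer behavior, human or LLM, tends to deviate from the minimal-offer prediction.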
Douglas K.G. Araujo and Harald Uhlig, "How Does AI Distribute the Pie? Large Language Models and the Ultimatum Game," NBER Working Paper 34919 (2026), https://doi.org/10.3386/w34919.