Biased Beliefs About Random Samples: Evidence from Two Integrated Experiments
This paper describes the results of a pair of incentivized experiments on biases in judgments about random samples. Consistent with the Law of Small Numbers (LSN), participants exaggerated the likelihood that short sequences and random subsets of coin flips would be balanced between heads and tails. Consistent with the Non-Belief in the Law of Large Numbers (NBLLN), participants underestimated the likelihood that large samples would be close to 50% heads. However, we identify some shortcomings of existing models of LSN, and we find that NBLLN may not be as stable as previous studies suggest. We also find evidence for exact representativeness (ER), whereby people tend to exaggerate the likelihood that samples will (nearly) exactly mirror the underlying odds, as an additional bias beyond LSN. Our within-subject design of asking many different questions about the same data lets us disentangle the biases from possible rational alternative interpretations by showing that the biases lead to inconsistent answers. Our design also centers on identifying and controlling for bin effects, whereby the probability assigned to outcomes systematically depends on the categories used to elicit beliefs, in a way predicted by support theory. The bin effects are large and systematic and affect some results, but we find LSN, NBLLN, and ER even after controlling for them.
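As a point of reference, the normative benchmarks against which these biases are measured follow from the binomial distribution. The sketch below (the sample sizes and ranges are illustrative, not the paper's actual elicitation design) computes the true probability that a short sequence of fair coin flips is exactly balanced, and that a large sample falls close to 50% heads:

```python
from math import comb

def prob_heads_in_range(n, lo, hi, p=0.5):
    """Probability that the number of heads in n flips of a coin with
    heads-probability p falls in the inclusive range [lo, hi]."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(lo, hi + 1))

# Short sequence: chance that 4 fair flips are exactly balanced (2 heads)
# is only 6/16 = 0.375, lower than LSN-biased intuition suggests.
p_balanced_4 = prob_heads_in_range(4, 2, 2)

# Large sample: chance that 1000 fair flips land between 45% and 55% heads
# exceeds 0.99, higher than NBLLN-biased intuition suggests.
p_close_1000 = prob_heads_in_range(1000, 450, 550)
```

The contrast between the two numbers illustrates both biases: balance is rarer than people expect in small samples, and concentration near 50% is far more likely than people expect in large ones.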