Field Experiments in the Science of Science: Lessons from Peer Review and the Evaluation of New Knowledge
Scientific evaluation and peer review govern the allocation of resources and the certification of knowledge in science, yet they have been subjected to limited causal investigation. This chapter synthesizes randomized experiments embedded in live peer-evaluation systems at journals, conferences, and funding agencies, restricting attention to published studies. I organize this evidence using a Q–A–R–S framework that decomposes peer review into submission attributes (Q), author attributes (A), reviewer composition and expertise (R), and features of the evaluation system and institutional context (S), interpreting outcomes through a view of peer review as an information-processing institution in which reviewers form assessments using the existing stock of knowledge. The chapter treats experimental design choices as objects of analysis, assessing what existing interventions can and cannot identify about peer review under practical institutional constraints. Randomized experimentation is indeed feasible and currently spans the full Q–A–R–S space, albeit sparsely. The analytic synthesis supports several claims: (1) interventions often affect reviewer or author behavior with little impact on core evaluative judgments; (2) evaluations are shaped primarily by reviewer identity and expertise; and (3) peer review functions more reliably as a filter of unacceptable submissions than as a fine-grained ranking mechanism among higher-quality contributions. Overall, the evidence and synthesis here do not support the view, often implied in the literature, that peer review is highly ineffective. Instead, they point to a functioning institution operating under binding constraints, with substantial scope for institutional innovation. The chapter concludes by identifying priorities for future high-yield experimentation, emphasizing system-level interventions, the role of automation in filtering tasks, and the distinct challenges of improving ranking reliability at the knowledge frontier.
Kevin Boudreau, Economics of Science (University of Chicago Press, 2026), chap. 3, https://www.nber.org/books-and-chapters/economics-science/field-experiments-science-science-lessons-peer-review-and-evaluation-new-knowledge.