Forecasting Social Science: Evidence from 100 Projects
Forecasts about research findings affect critical scientific decisions, such as the treatments an R&D lab invests in or the statistical power of an experiment. How accurate are these forecasts, and what are the implications of potentially inaccurate beliefs? We analyze a unique data set of all 100 projects posted on the Social Science Prediction Platform from 2020 to 2024, which received 53,298 forecasts in total, including 66 projects for which we have results. We show that forecasters, on average, overestimate treatment effects; however, the average forecast is quite predictive of the actual treatment effect. We also examine differences in accuracy. Academics have higher accuracy than non-academics, but expertise in a field does not increase accuracy. A panel of motivated repeat forecasters has higher accuracy, but this does not extend to all repeat forecasters. Confidence in the accuracy of one's forecasts is perversely associated with lower accuracy. We also document substantial cross-study correlation in accuracy and identify a group of "superforecasters". Integrating these lessons, we show how to design optimal forecasts that are appropriately shrunk and that place more weight on high-accuracy predictions. We then highlight how these optimal forecasts can be used in experimental design to increase statistical power.
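The abstract's closing idea can be illustrated with a minimal sketch. The paper's actual estimator is not specified here, so the following is only a stylized version of the two ingredients named in the abstract: weighting individual forecasts by a measure of each forecaster's accuracy, and shrinking the pooled forecast toward zero to offset the average overestimation of treatment effects. All forecast values, accuracy weights, and the shrinkage factor below are hypothetical.

```python
def combine_forecasts(forecasts, weights):
    """Accuracy-weighted average of individual forecasts.

    `weights` are hypothetical stand-ins for each forecaster's estimated
    accuracy (e.g., inverse historical error); higher weight = more influence.
    """
    total = sum(weights)
    return sum(f * w for f, w in zip(forecasts, weights)) / total


def shrink(pooled_forecast, shrink_factor):
    """Shrink the pooled forecast toward zero.

    `shrink_factor` in (0, 1) is a hypothetical calibration parameter
    correcting for the average overestimation of treatment effects.
    """
    return shrink_factor * pooled_forecast


# Hypothetical forecasts of one treatment effect (standard-deviation units)
forecasts = [0.30, 0.45, 0.20, 0.50]
weights = [2.0, 1.0, 3.0, 0.5]  # made-up accuracy weights

pooled = combine_forecasts(forecasts, weights)
optimal = shrink(pooled, 0.7)  # shrunk forecast, usable as a power-analysis prior
```

A shrunk, accuracy-weighted forecast like `optimal` could then feed a standard power calculation as the assumed effect size, which is the use in experimental design the abstract points to.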
Stefano DellaVigna and Eva Vivalt, "Forecasting Social Science: Evidence from 100 Projects," NBER Working Paper 34493 (2025), https://doi.org/10.3386/w34493.