Department of Economics
Cambridge, MA 02138
NBER Affiliation: Research Associate
Institutional Affiliation: Harvard University
NBER Working Papers and Publications
November 2018 | On the Informativeness of Descriptive Statistics for Structural Estimates
with Matthew Gentzkow, Jesse M. Shapiro: w25217
Researchers often present treatment-control differences or other descriptive statistics alongside structural estimates that answer policy or counterfactual questions of interest. We ask to what extent confidence in the researcher's interpretation of the former should increase a reader's confidence in the latter. We consider a structural estimate ĉ that may depend on a vector of descriptive statistics γ̂. We define a class of misspecified models in a neighborhood of the assumed model. We then compare the bounds on the bias of ĉ due to misspecification across all models in this class with the bounds across the subset of these models in which misspecification does not affect γ̂. Our main result shows that the ratio of the lengths of these tight bounds depends only on a quantity we call the i...
September 2017 | Weighting for External Validity
with Emily Oster: w23826
External validity is a challenge in treatment effect estimation. Even in randomized trials, the experimental sample often differs from the population of interest. If participation decisions are explained by observed variables, such differences can be overcome by reweighting. However, participation may depend on unobserved variables. Even in such cases, under a common support assumption there exist weights which, if known, would allow reweighting the sample to match the population. While these weights cannot in general be estimated, we develop approximations which relate them to the role of private information in participation decisions. These approximations suggest benchmarks for assessing external validity.
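The benchmark case in this abstract, where participation is fully explained by observables, can be illustrated with a small simulation. The sketch below is not the paper's method; it is the textbook inverse-probability reweighting the abstract takes as its starting point, with an illustrative participation rule and effect heterogeneity chosen purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population: a single observable x drives both participation and effects.
n_pop = 100_000
x = rng.normal(size=n_pop)

# Illustrative participation rule depending only on the observable x.
p_participate = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))
in_sample = rng.random(n_pop) < p_participate

# Individual treatment effects vary with x, so the self-selected
# experimental sample is unrepresentative of the population.
effect = 1.0 + 0.5 * x

naive_ate = effect[in_sample].mean()               # biased: high-x units over-sampled
weights = 1 / p_participate[in_sample]             # inverse participation probabilities
weighted_ate = np.average(effect[in_sample], weights=weights)
true_ate = effect.mean()

print(naive_ate, weighted_ate, true_ate)
```

Reweighting by inverse participation probabilities recovers the population average effect; the paper's contribution concerns the harder case where participation also depends on unobservables, so these weights are not identified.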
March 2017 | Identification of and Correction for Publication Bias
with Maximilian Kasy: w23298
Some empirical results are more likely to be published than others. Such selective publication leads to biased estimates and distorted inference. This paper proposes two approaches for identifying the conditional probability of publication as a function of a study’s results, the first based on systematic replication studies and the second based on meta-studies. For known conditional publication probabilities, we propose median-unbiased estimators and associated confidence sets that correct for selective publication. We apply our methods to recent large-scale replication studies in experimental economics and psychology, and to meta-studies of the effects of minimum wages and de-worming programs.
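The first sentence of the abstract, that selective publication biases estimates, is easy to see in a simulation. The sketch below is only an illustration of the problem, not of the paper's identification strategy or estimators; the publication rule (significant results always published, insignificant ones with 10% probability) and all parameter values are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Many studies of the same true effect, each reporting a noisy estimate.
true_effect = 0.2
se = 0.1
n_studies = 50_000
estimates = rng.normal(true_effect, se, size=n_studies)

# Illustrative selection rule: statistically significant results are always
# published; insignificant results are published 10% of the time.
z = estimates / se
significant = np.abs(z) > 1.96
published = significant | (rng.random(n_studies) < 0.1)

# The published literature overstates the effect.
print(estimates.mean(), estimates[published].mean())
```

Averaging only published estimates overstates the true effect, which is why corrections require knowing (or identifying, as in the paper) the conditional publication probability.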
November 2014 | Measuring the Sensitivity of Parameter Estimates to Estimation Moments
with Matthew Gentzkow, Jesse M. Shapiro: w20673
We propose a local measure of the relationship between parameter estimates and the moments of the data they depend on. Our measure can be computed at negligible cost even for complex structural models. We argue that reporting this measure can increase the transparency of structural estimates, making it easier for readers to predict the way violations of identifying assumptions would affect the results. When the key assumptions are orthogonality between error terms and excluded instruments, we show that our measure provides a natural extension of the omitted variables bias formula for nonlinear models. We illustrate with applications to published articles in several fields of economics.
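For a minimum-distance estimator, a local measure of this kind can be sketched as the matrix Λ = (H′WH)⁻¹H′W, where H is the Jacobian of the model-implied moments and W the weight matrix: each column says how the estimate moves with a small perturbation of the corresponding moment. The toy problem below (matching two statistics with a one-parameter model) is an assumption-laden illustration of that idea, not a reproduction of the paper's procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy classical-minimum-distance problem: match the statistics ghat with
# model-implied moments h(theta) = (theta, theta**2). Numbers are illustrative.
ghat = np.array([1.0, 1.2])
W = np.eye(2)

def h(theta):
    return np.array([theta, theta**2])

def objective(theta, g):
    r = g - h(theta)
    return r @ W @ r

def estimate(g):
    return minimize_scalar(objective, args=(g,), bounds=(-5, 5), method="bounded").x

theta_hat = estimate(ghat)

# Local sensitivity of theta_hat to the moments: Lambda = (H'WH)^{-1} H'W,
# with H the Jacobian of h evaluated at theta_hat.
H = np.array([[1.0], [2 * theta_hat]])
Lam = np.linalg.solve(H.T @ W @ H, H.T @ W)  # shape (1, 2)

# Check: a small perturbation of the statistics moves the estimate by ~ Lam @ delta,
# without re-deriving the estimator from scratch.
delta = np.array([0.01, -0.01])
predicted = (Lam @ delta)[0]
actual = estimate(ghat + delta) - theta_hat
print(predicted, actual)
```

The point of such a measure is that Λ is cheap to compute once the Jacobian is available, so readers can gauge how a given violation of the identifying assumptions (a shifted moment) would propagate into the estimate.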
Published: Isaiah Andrews & Matthew Gentzkow & Jesse M. Shapiro, 2017. "Measuring the Sensitivity of Parameter Estimates to Estimation Moments," The Quarterly Journal of Economics, vol 132(4), pages 1553-1592.