# generated by /homes/nber/adrepec/bin/nbrred running on mysql0
Template-Type: ReDIF-Paper 1.0
Title: Semiparametric Estimation of a Dynamic Game of Incomplete Information
Classification-JEL: L0; L5; C1
Author-Name: Patrick Bajari
Author-Name: Han Hong
Note: TWP
Number: 0320
Creation-Date: 2006-02
Order-URL: http://www.nber.org/papers/t0320
File-URL: http://www.nber.org/papers/t0320.pdf
File-Format: application/pdf
Abstract: Recently, empirical industrial organization economists have proposed estimators for dynamic games of incomplete information. In these models, agents choose from a finite number of actions and maximize expected discounted utility in a Markov perfect equilibrium. Previous econometric methods estimate the probability distribution of agents' actions in a first stage. In a second step, a finite vector of parameters of the period return function is estimated. In this paper, we develop semiparametric estimators for dynamic games allowing for continuous state variables and a nonparametric first stage. The estimates of the structural parameters are T^{1/2}-consistent (where T is the sample size) and asymptotically normal even though the first stage is estimated nonparametrically. We also propose sufficient conditions for identification of the model.
Handle: RePEc:nbr:nberte:0320
Template-Type: ReDIF-Paper 1.0
Title: Estimating Macroeconomic Models: A Likelihood Approach
Classification-JEL: C11; C15; E10; E32
Author-Name: Jesus Fernandez-Villaverde
Author-Person: pfe14
Author-Name: Juan F. Rubio-Ramirez
Author-Person: pru25
Note: TWP
Number: 0321
Creation-Date: 2006-02
Order-URL: http://www.nber.org/papers/t0321
File-URL: http://www.nber.org/papers/t0321.pdf
File-Format: application/pdf
Abstract: This paper shows how particle filtering allows us to undertake likelihood-based inference in dynamic macroeconomic models. The models can be nonlinear and/or non-normal. We describe how to use the output from the particle filter to estimate the structural parameters of the model, those characterizing preferences and technology, and to compare different economies. Both tasks can be implemented from either a classical or a Bayesian perspective. We illustrate the technique by estimating a business cycle model with investment-specific technological change, preference shocks, and stochastic volatility.
Handle: RePEc:nbr:nberte:0321
Template-Type: ReDIF-Paper 1.0
Title: Regression Discontinuity Inference with Specification Error
Classification-JEL: C1; C5; J0
Author-Name: David S. Lee
Author-Name: David Card
Author-Person: pca271
Note: TWP
Number: 0322
Creation-Date: 2006-03
Order-URL: http://www.nber.org/papers/t0322
File-URL: http://www.nber.org/papers/t0322.pdf
File-Format: application/pdf
Abstract: A regression discontinuity (RD) research design is appropriate for program evaluation problems in which treatment status (or the probability of treatment) depends on whether an observed covariate exceeds a fixed threshold. In many applications the treatment-determining covariate is discrete. This makes it impossible to compare outcomes for observations "just above" and "just below" the treatment threshold, and requires the researcher to choose a functional form for the relationship between the treatment variable and the outcomes of interest. We propose a simple econometric procedure to account for uncertainty in the choice of functional form for RD designs with discrete support. In particular, we model deviations of the true regression function from a given approximating function -- the specification errors -- as random. Conventional standard errors ignore the group structure induced by specification errors and tend to overstate the precision of the estimated program impacts. The proposed inference procedure that allows for specification error also has a natural interpretation within a Bayesian framework.
Handle: RePEc:nbr:nberte:0322
Template-Type: ReDIF-Paper 1.0
Title: Heteroskedasticity-Robust Standard Errors for Fixed Effects Panel Data Regression
Classification-JEL: C23; C12
Author-Name: James H. Stock
Author-Person: pst148
Author-Name: Mark W. Watson
Author-Person: pwa582
Note: TWP
Number: 0323
Creation-Date: 2006-06
Order-URL: http://www.nber.org/papers/t0323
File-URL: http://www.nber.org/papers/t0323.pdf
File-Format: application/pdf
Publication-Status: published as Stock, James H. and Mark W. Watson. "Heteroskedasticity-Robust Standard Errors for Fixed Effects Panel Data Regression." Econometrica 76, 1 (2008): 155-174.
Abstract: The conventional heteroskedasticity-robust (HR) variance matrix estimator for cross-sectional regression (with or without a degrees of freedom adjustment), applied to the fixed effects estimator for panel data with serially uncorrelated errors, is inconsistent if the number of time periods T is fixed (and greater than two) as the number of entities n increases. We provide a bias-adjusted HR estimator that is (nT)^{1/2}-consistent under any sequences (n, T) in which n and/or T increase to infinity.
Handle: RePEc:nbr:nberte:0323
Template-Type: ReDIF-Paper 1.0
Title: Nonparametric Tests for Treatment Effect Heterogeneity
Classification-JEL: C14; C21; C52
Author-Name: Richard K. Crump
Author-Name: V. Joseph Hotz
Author-Person: pho4
Author-Name: Guido W. Imbens
Author-Person: pim4
Author-Name: Oscar A. Mitnik
Note: TWP
Number: 0324
Creation-Date: 2006-06
Order-URL: http://www.nber.org/papers/t0324
File-URL: http://www.nber.org/papers/t0324.pdf
File-Format: application/pdf
Abstract: A large part of the recent literature on program evaluation has focused on estimation of the average effect of the treatment under assumptions of unconfoundedness or ignorability following the seminal work by Rubin (1974) and Rosenbaum and Rubin (1983). In many cases, however, researchers are interested in the effects of programs beyond estimates of the overall average or the average for the subpopulation of treated individuals. It may be of substantive interest to investigate whether there is any subpopulation for which a program or treatment has a nonzero average effect, or whether there is heterogeneity in the effect of the treatment. The hypothesis that the average effect of the treatment is zero for all subpopulations is also important for researchers interested in assessing assumptions concerning the selection mechanism. In this paper we develop two nonparametric tests. The first test is for the null hypothesis that the treatment has a zero average effect for any subpopulation defined by covariates. The second test is for the null hypothesis that the average effect conditional on the covariates is identical for all subpopulations, in other words, that there is no heterogeneity in average treatment effects by covariates. Sacrificing some generality by focusing on these two specific null hypotheses, we derive tests that are straightforward to implement.
Handle: RePEc:nbr:nberte:0324
Template-Type: ReDIF-Paper 1.0
Title: On the Failure of the Bootstrap for Matching Estimators
Classification-JEL: C14; C21; C52
Author-Name: Alberto Abadie
Author-Person: pab7
Author-Name: Guido W. Imbens
Author-Person: pim4
Note: TWP
Number: 0325
Creation-Date: 2006-06
Order-URL: http://www.nber.org/papers/t0325
File-URL: http://www.nber.org/papers/t0325.pdf
File-Format: application/pdf
Publication-Status: published as Abadie, Alberto and Guido W. Imbens. "On the Failure of the Bootstrap for Matching Estimators." Econometrica 76, 6 (2008): 1537-1557.
Abstract: Matching estimators are widely used for the evaluation of programs or treatments. Often researchers use bootstrapping methods for inference. However, no formal justification for the use of the bootstrap has been provided. Here we show that the bootstrap is in general not valid, even in the simple case with a single continuous covariate when the estimator is root-N consistent and asymptotically normally distributed with zero asymptotic bias. Due to the extreme non-smoothness of nearest neighbor matching, the standard conditions for the bootstrap are not satisfied, leading the bootstrap variance to diverge from the actual variance. Simulations confirm the difference between actual and nominal coverage rates for bootstrap confidence intervals predicted by the theoretical calculations. To our knowledge, this is the first example of a root-N consistent and asymptotically normal estimator for which the bootstrap fails to work.
Handle: RePEc:nbr:nberte:0325
Template-Type: ReDIF-Paper 1.0
Title: Approximately Normal Tests for Equal Predictive Accuracy in Nested Models
Classification-JEL: C22; C53; E17; F37
Author-Name: Kenneth D. West
Author-Person: pwe16
Author-Name: Todd Clark
Author-Person: pcl55
Note: AP EFG TWP
Number: 0326
Creation-Date: 2006-08
Order-URL: http://www.nber.org/papers/t0326
File-URL: http://www.nber.org/papers/t0326.pdf
File-Format: application/pdf
Abstract: Forecast evaluation often compares a parsimonious null model to a larger model that nests the null model. Under the null that the parsimonious model generates the data, the larger model introduces noise into its forecasts by estimating parameters whose population values are zero. We observe that the mean squared prediction error (MSPE) from the parsimonious model is therefore expected to be smaller than that of the larger model. We describe how to adjust MSPEs to account for this noise. We propose applying standard methods (West (1996)) to test whether the adjusted mean squared error difference is zero. We refer to nonstandard limiting distributions derived in Clark and McCracken (2001, 2005a) to argue that use of standard normal critical values will yield actual sizes close to, but a little less than, nominal size. Simulation evidence supports our recommended procedure.
Handle: RePEc:nbr:nberte:0326
Template-Type: ReDIF-Paper 1.0
Title: Robust Inference with Multi-way Clustering
Classification-JEL: C12; C21; C23
Author-Name: A. Colin Cameron
Author-Person: pca565
Author-Name: Jonah B. Gelbach
Author-Person: pge238
Author-Name: Douglas L. Miller
Author-Person: pmi179
Note: TWP
Number: 0327
Creation-Date: 2006-09
Order-URL: http://www.nber.org/papers/t0327
File-URL: http://www.nber.org/papers/t0327.pdf
File-Format: application/pdf
Abstract: In this paper we propose a new variance estimator for OLS as well as for nonlinear estimators such as logit, probit and GMM, that provides cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar relatively weak distributional assumptions. Our method is easily implemented in statistical packages, such as Stata and SAS, that already offer cluster-robust standard errors when there is one-way clustering. The method is demonstrated by a Monte Carlo analysis for a two-way random effects model; a Monte Carlo analysis of a placebo law that extends the state-year effects example of Bertrand et al. (2004) to two dimensions; and by application to two studies in the empirical public/labor literature where two-way clustering is present.
Handle: RePEc:nbr:nberte:0327
Template-Type: ReDIF-Paper 1.0
Title: Non-response in the American Time Use Survey: Who Is Missing from the Data and How Much Does It Matter?
Classification-JEL: J2
Author-Name: Katharine G. Abraham
Author-Person: pab32
Author-Name: Aaron Maitland
Author-Name: Suzanne M. Bianchi
Note: CH LS TWP
Number: 0328
Creation-Date: 2006-09
Order-URL: http://www.nber.org/papers/t0328
File-URL: http://www.nber.org/papers/t0328.pdf
File-Format: application/pdf
Abstract: This paper examines non-response in a large government survey. The response rate for the American Time Use Survey (ATUS) has been below 60 percent for the first two years of its existence, raising questions about whether the results can be generalized to the target population. The paper begins with an analysis of the types of non-response encountered in the ATUS. We find that non-contact accounts for roughly 60 percent of ATUS non-response, with refusals accounting for roughly 40 percent. Next, we examine two hypotheses about the causes of this non-response. We find little support for the hypothesis that busy people are less likely to respond to the ATUS, but considerable support for the hypothesis that people who are weakly integrated into their communities are less likely to respond, mostly because they are less likely to be contacted. Finally, we compare aggregate estimates of time use calculated using the ATUS base weights without any adjustment for non-response to estimates calculated using the ATUS final weights with a non-response adjustment and to estimates calculated using weights that incorporate our own non-response adjustments based on a propensity model. While there are some modest differences, the three sets of estimates are broadly similar. The paper ends with a discussion of survey design features, their effect on the types and level of non-response, and the tradeoffs associated with different design choices.
Handle: RePEc:nbr:nberte:0328
Template-Type: ReDIF-Paper 1.0
Title: Researcher Incentives and Empirical Methods
Classification-JEL: A11; B4
Author-Name: Edward L. Glaeser
Author-Person: pgl9
Note: TWP
Number: 0329
Creation-Date: 2006-10
Order-URL: http://www.nber.org/papers/t0329
File-URL: http://www.nber.org/papers/t0329.pdf
File-Format: application/pdf
Abstract: Economists are quick to assume opportunistic behavior in almost every walk of life other than our own. Our empirical methods are based on assumptions of human behavior that would not pass muster in any of our models. The solution to this problem is not to expect a mass renunciation of data mining, selective data cleaning or opportunistic methodology selection, but rather to follow Leamer's lead in designing and using techniques that anticipate the behavior of optimizing researchers. In this essay, I make ten points about a more economic approach to empirical methods and suggest paths for methodological progress.
Handle: RePEc:nbr:nberte:0329
Template-Type: ReDIF-Paper 1.0
Title: Moving the Goalposts: Addressing Limited Overlap in the Estimation of Average Treatment Effects by Changing the Estimand
Classification-JEL: C01; C1; C13; C14; C2; C21
Author-Name: Richard K. Crump
Author-Name: V. Joseph Hotz
Author-Person: pho4
Author-Name: Guido W. Imbens
Author-Person: pim4
Author-Name: Oscar A. Mitnik
Author-Person: pmi129
Note: TWP
Number: 0330
Creation-Date: 2006-10
Order-URL: http://www.nber.org/papers/t0330
File-URL: http://www.nber.org/papers/t0330.pdf
File-Format: application/pdf
Abstract: Estimation of average treatment effects under unconfoundedness or exogenous treatment assignment is often hampered by lack of overlap in the covariate distributions. This lack of overlap can lead to imprecise estimates and can make commonly used estimators sensitive to the choice of specification. In such cases researchers have often used informal methods for trimming the sample. In this paper we develop a systematic approach to addressing such lack of overlap. We characterize optimal subsamples for which the average treatment effect can be estimated most precisely, as well as optimally weighted average treatment effects. Under some conditions the optimal selection rules depend solely on the propensity score. For a wide range of distributions a good approximation to the optimal rule is provided by the simple selection rule to drop all units with estimated propensity scores outside the range [0.1,0.9].
Handle: RePEc:nbr:nberte:0330
Template-Type: ReDIF-Paper 1.0
Title: Vector Multiplicative Error Models: Representation and Inference
Classification-JEL: C01
Author-Name: Fabrizio Cipollini
Author-Name: Robert F. Engle
Author-Name: Giampiero M. Gallo
Author-Person: pga48
Note: AP TWP
Number: 0331
Creation-Date: 2006-11
Order-URL: http://www.nber.org/papers/t0331
File-URL: http://www.nber.org/papers/t0331.pdf
File-Format: application/pdf
Abstract: The Multiplicative Error Model introduced by Engle (2002) for positive valued processes is specified as the product of a (conditionally autoregressive) scale factor and an innovation process with positive support. In this paper we propose a multivariate extension of such a model, by taking into consideration the possibility that the vector innovation process be contemporaneously correlated. The estimation procedure is hindered by the lack of probability density functions for multivariate positive valued random variables. We suggest the use of copula functions and of estimating equations to jointly estimate the parameters of the scale factors and of the correlations of the innovation processes. Empirical applications on volatility indicators are used to illustrate the gains over the equation by equation procedure.
Handle: RePEc:nbr:nberte:0331
Template-Type: ReDIF-Paper 1.0
Title: DSGE Models in a Data-Rich Environment
Classification-JEL: C10; C32; C53; E01; E32; E37
Author-Name: Jean Boivin
Author-Person: pbo43
Author-Name: Marc Giannoni
Author-Person: pgi36
Note: EFG ME TWP
Number: 0332
Creation-Date: 2006-12
Order-URL: http://www.nber.org/papers/t0332
File-URL: http://www.nber.org/papers/t0332.pdf
File-Format: application/pdf
Abstract: Standard practice for the estimation of dynamic stochastic general equilibrium (DSGE) models maintains the assumption that economic variables are properly measured by a single indicator, and that all relevant information for the estimation is summarized by a small number of data series. However, recent empirical research on factor models has shown that information contained in large data sets is relevant for the evolution of important macroeconomic series. This suggests that conventional model estimates and inference based on estimated DSGE models might be distorted. In this paper, we propose an empirical framework for the estimation of DSGE models that exploits the relevant information from a data-rich environment. This framework provides an interpretation of all information contained in a large data set, and in particular of the latent factors, through the lenses of a DSGE model. The estimation involves Markov-Chain Monte-Carlo (MCMC) methods. We apply this estimation approach to a state-of-the-art DSGE monetary model. We find evidence of imperfect measurement of the model's theoretical concepts, in particular for inflation. We show that exploiting more information is important for accurate estimation of the model's concepts and shocks, and that it implies different conclusions about key structural parameters and the sources of economic fluctuations.
Handle: RePEc:nbr:nberte:0332
Template-Type: ReDIF-Paper 1.0
Title: Using Randomization in Development Economics Research: A Toolkit
Classification-JEL: C93; I0; J0; O0
Author-Name: Esther Duflo
Author-Person: pdu166
Author-Name: Rachel Glennerster
Author-Name: Michael Kremer
Author-Person: pkr20
Note: CH TWP
Number: 0333
Creation-Date: 2006-12
Order-URL: http://www.nber.org/papers/t0333
File-URL: http://www.nber.org/papers/t0333.pdf
File-Format: application/pdf
Abstract: This paper is a practical guide (a toolkit) for researchers, students and practitioners wishing to introduce randomization as part of a research design in the field. It first covers the rationale for the use of randomization, as a solution to selection bias and a partial solution to publication biases. Second, it discusses various ways in which randomization can be practically introduced in field settings. Third, it discusses design issues such as sample size requirements, stratification, level of randomization and data collection methods. Fourth, it discusses how to analyze data from randomized evaluations when there are departures from the basic framework. It reviews in particular how to handle imperfect compliance and externalities. Finally, it discusses some of the issues involved in drawing general conclusions from randomized evaluations, including the necessary use of theory as a guide when designing evaluations and interpreting results.
Handle: RePEc:nbr:nberte:0333