National Bureau of Economic Research Working Papers
The Latest NBER Working Papers
http://www.nber.org/new.html
Indirect Inference with Importance Sampling: An Application to Women's Wage Growth -- by Robert M. Sauer, Christopher R. Taber

This paper has two main parts. In the first, we describe a method that smooths the objective function in a general class of indirect inference models. Our smoothing procedure uses importance sampling weights in the estimation of the auxiliary model on simulated data. The importance sampling weights are constructed from likelihood contributions implied by the structural model. Because this approach does not require transformations of endogenous variables in the structural model, we avoid the potential approximation errors that may arise in other smoothing approaches for indirect inference. We show that our alternative smoothing method yields consistent estimates. The second part of the paper applies the method to estimating the effect of women's fertility on their human capital accumulation. We find that the curvature in the wage profile is determined primarily by curvature in the human capital accumulation function as a function of previous human capital, rather than being driven primarily by age. We also find a modest effect of fertility-induced nonemployment spells on human capital accumulation. We estimate that the difference in wages among prime-age women would be approximately 3% higher if the relationship between fertility and working were eliminated.
http://papers.nber.org/papers/w23669#fromrss
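The smoothing mechanism described in the abstract can be sketched in a toy setting. This is a hypothetical one-parameter model (y ~ N(theta, 1)), not the authors' application: a fixed simulated sample is re-weighted with self-normalized importance weights built from the structural model's likelihood contributions, so the auxiliary statistic, and hence the indirect-inference objective, varies smoothly in theta without re-simulating.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical structural model: y ~ N(theta, 1).
# Simulate once at a fixed point theta0 and re-weight the draws as
# theta changes, instead of drawing a new sample at every theta.
theta0 = 0.0
S = 5000
y_sim = rng.normal(theta0, 1.0, size=S)  # fixed simulated sample

def log_density(y, theta):
    # Gaussian log-likelihood contribution, up to an additive constant.
    return -0.5 * (y - theta) ** 2

def weighted_aux_mean(theta):
    # Self-normalized importance weights: structural likelihood at theta
    # over the likelihood at the simulation point theta0.
    logw = log_density(y_sim, theta) - log_density(y_sim, theta0)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Auxiliary statistic on simulated data: a weighted sample mean.
    return np.sum(w * y_sim)

# Indirect-inference objective: distance between the auxiliary statistic
# on observed data and on the re-weighted simulated data.
y_obs = rng.normal(0.7, 1.0, size=2000)  # "observed" data, true theta = 0.7
aux_obs = y_obs.mean()

def objective(theta):
    return (aux_obs - weighted_aux_mean(theta)) ** 2

grid = np.linspace(-0.5, 1.5, 41)
theta_hat = grid[np.argmin([objective(t) for t in grid])]
```

Because the simulated draws stay fixed and only the weights move with theta, the objective is a smooth function of the parameter, which is the point of the smoothing device described above.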
Level and Volatility Factors in Macroeconomic Data -- by Yuriy Gorodnichenko, Serena Ng

The conventional wisdom in macroeconomic modeling is to attribute business cycle fluctuations to innovations in the level of the fundamentals. Though volatility shocks could be important too, their propagating mechanism is still not well understood, partly because modeling the latent volatilities can be quite demanding. This paper suggests a simple methodology that can separate the level factors from the volatility factors and assess their relative importance without directly estimating the volatility processes. This is made possible by exploiting features of the second-order approximation of equilibrium models and information in a large panel of data. Our largest volatility factor, V1, is strongly countercyclical, persistent, and loads heavily on housing-sector variables. When added to a VAR in housing starts, industrial production, the fed funds rate, and inflation, the innovations to V1 can account for a non-negligible share of the variations at horizons of four to five years. However, V1 is only weakly correlated with the volatility of our real activity factor and does not displace various measures of uncertainty. This suggests that there are second-moment shocks and nonlinearities with cyclical implications beyond the ones we studied. More theorizing is needed to understand the interaction between the level and second-moment dynamics.
http://papers.nber.org/papers/w23672#fromrss
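The separation of level and volatility factors can be illustrated with a minimal principal-components sketch. This is a hedged illustration under assumed data-generating choices, not the authors' estimator: a level factor is extracted as the first principal component of a simulated panel, and a volatility factor as the first principal component of the log squared residuals that remain after the level factor is removed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical panel: one level factor f_t and one volatility factor v_t.
T, N = 400, 60
f = rng.normal(size=T)                    # level factor
v = np.abs(rng.normal(size=T)) + 0.5      # volatility factor (positive)
lam = rng.normal(size=N)                  # level loadings
gam = rng.uniform(0.5, 1.5, size=N)       # volatility loadings
X = np.outer(f, lam) + rng.normal(size=(T, N)) * np.outer(v, gam)

def first_pc(Z):
    # First principal component of the column-demeaned panel.
    Zc = Z - Z.mean(axis=0)
    U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    return U[:, 0] * s[0]

# Level factor: first PC of the panel itself.
f_hat = first_pc(X)

# Volatility factor: remove the estimated level component, then take the
# first PC of the log squared residuals (second-moment information).
beta = np.linalg.lstsq(f_hat[:, None], X, rcond=None)[0][0]
resid = X - np.outer(f_hat, beta)
v_hat = first_pc(np.log(resid ** 2 + 1e-12))

# PC signs are arbitrary, so compare via absolute correlations.
corr_f = abs(np.corrcoef(f, f_hat)[0, 1])
corr_v = abs(np.corrcoef(np.log(v), v_hat)[0, 1])
```

The design choice mirrored here is the abstract's: the volatility processes are never estimated directly; second-moment common variation is read off a simple transformation of the panel residuals.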
Opportunities and Challenges: Lessons from Analyzing Terabytes of Scanner Data -- by Serena Ng

This paper seeks to better understand what makes big data analysis different, what we can and cannot do with existing econometric tools, and what issues need to be dealt with in order to work with the data efficiently. As a case study, I set out to extract any business cycle information that might exist in four terabytes of weekly scanner data. The main challenge is to handle the volume, variety, and characteristics of the data within the constraints of our computing environment. Scalable and efficient algorithms are available to ease the computational burden, but they often have unknown statistical properties and are not designed for efficient estimation or optimal inference. Economic data also have unique characteristics that generic algorithms may not accommodate. There is a need for computationally efficient econometric methods, as big data is likely here to stay.
http://papers.nber.org/papers/w23673#fromrss