Institutional Affiliation: Purdue University, West Lafayette, IN 47906
NBER Working Papers and Publications
May 2020: Reconstructing the Yield Curve
With coauthors; NBER Working Paper w27266
The constant-maturity zero-coupon Treasury yield curve is one of the most widely studied datasets in finance. We reconstruct the yield curve using a non-parametric kernel-smoothing method with a novel adaptive bandwidth designed specifically to fit the Treasury yield curve. Our curve is globally smooth while still capturing important local variation. Economically, we show that using our data leads to conclusions different from those based on the leading alternative dataset of Gürkaynak et al. (2007) (GSW) when we repeat two influential studies, Cochrane and Piazzesi (2005) and Giglio and Kelly (2018). Statistically, we show that our dataset preserves the information in the raw data and has much smaller pricing errors than GSW. Our new yield curve is maintained and updated online, complemented by bandwidths that summarize infor...
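The abstract's kernel-smoothing idea can be illustrated with a minimal Nadaraya-Watson sketch. The paper's adaptive-bandwidth specification is its own; here the bandwidths are simply supplied per grid point (narrow at the short end, wide at the long end) as an assumption, and the toy yields are illustrative, not Treasury data.

```python
import numpy as np

def kernel_smooth_yield(maturities, yields, grid, bandwidths):
    """Nadaraya-Watson kernel regression of yields on maturity with a
    maturity-dependent (adaptive) Gaussian bandwidth. A generic
    illustration, not the paper's exact estimator."""
    fitted = np.empty(len(grid), dtype=float)
    for i, (m, h) in enumerate(zip(grid, bandwidths)):
        # Gaussian weights on observed maturities around grid point m
        w = np.exp(-0.5 * ((maturities - m) / h) ** 2)
        fitted[i] = np.sum(w * yields) / np.sum(w)
    return fitted

# toy inputs: yields (%) at observed maturities (years)
mats = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0])
ylds = np.array([0.10, 0.15, 0.20, 0.35, 0.80, 1.50, 2.10])
grid = np.array([1.0, 5.0, 10.0])
# adaptive choice: tighter bandwidth where quotes are dense (short end)
bw = np.array([0.5, 2.0, 5.0])
curve = kernel_smooth_yield(mats, ylds, grid, bw)
```

A local bandwidth lets the fitted curve track the closely spaced short-maturity quotes while smoothing heavily across the sparse long end.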
March 2016: Rethinking Performance Evaluation
With coauthors; NBER Working Paper w22134
We show that the standard equation-by-equation OLS used in performance evaluation ignores information in the alpha population and leads to severely biased estimates of it. We propose a new framework that treats fund alphas as random effects. Our framework allows us to make inferences about the alpha population while controlling for various sources of estimation risk. At the individual fund level, our method pools information from the entire alpha distribution to make a density forecast for the fund's alpha, offering a new way to think about performance evaluation. In simulations, we show that our method generates parameter estimates that universally dominate the OLS estimates, both at the population level and at the individual fund level. While it is generally accepted that few if a...
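The random-effects idea can be sketched with textbook empirical-Bayes shrinkage: true alphas are modeled as draws from a population, and each fund's noisy OLS alpha is pulled toward the cross-sectional mean in proportion to its estimation noise. This is a simplified stand-in for the paper's framework, and the fund data below are invented for illustration.

```python
import numpy as np

def eb_shrink_alphas(alpha_ols, se):
    """Empirical-Bayes shrinkage of fund-level OLS alphas toward the
    cross-sectional mean, treating true alphas as random effects
    alpha_i ~ N(mu, tau^2). A textbook sketch, not the paper's
    estimator."""
    mu = alpha_ols.mean()
    # method-of-moments estimate of the alpha-population variance:
    # total dispersion minus average sampling variance, floored at zero
    tau2 = max(alpha_ols.var(ddof=1) - np.mean(se ** 2), 0.0)
    # posterior mean: precision-weighted combination of fund and population
    shrink = tau2 / (tau2 + se ** 2)
    return mu + shrink * (alpha_ols - mu)

# toy example: four funds, the last two estimated with more noise
alpha_ols = np.array([0.5, -0.2, 0.1, 0.8])   # monthly alphas (%)
se = np.array([0.1, 0.1, 0.2, 0.2])           # standard errors
shrunk = eb_shrink_alphas(alpha_ols, se)
```

Funds with larger standard errors are shrunk more aggressively toward the population mean, which is the pooling-of-information intuition in the abstract.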
October 2014: … and the Cross-Section of Expected Returns
With Campbell R. Harvey and Heqing Zhu; NBER Working Paper w20592
Hundreds of papers and hundreds of factors attempt to explain the cross-section of expected returns. Given this extensive data mining, it makes neither economic nor statistical sense to apply the usual significance criterion to a newly discovered factor, e.g., a t-ratio greater than 2.0. What hurdle, then, should current research use? Our paper introduces a multiple testing framework and provides a time series of historical significance cutoffs from the first empirical tests in 1967 to today. Our new method allows for correlation among the tests as well as missing data. We also project forward 20 years, assuming the rate of factor production remains similar to the experience of the last few years. The estimation of our model suggests that a newly discovered factor needs to clear ...
Published: Campbell R. Harvey, Yan Liu, and Heqing Zhu, 2016, "… and the Cross-Section of Expected Returns," Review of Financial Studies, vol. 29(1), pages 5-68.
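The idea that the t-ratio hurdle must rise with the number of factors tried can be sketched with the simplest member of the multiple-testing family, a two-sided Bonferroni correction. The paper's method is richer (it handles correlation among tests and missing tests), so this is only a back-of-the-envelope illustration; the factor count of 316 below is an arbitrary stand-in for "hundreds of factors".

```python
from statistics import NormalDist

def bonferroni_t_hurdle(n_tests, alpha=0.05):
    """t-ratio a new factor must clear under a two-sided Bonferroni
    correction when n_tests factors have been tried. One simple member
    of the multiple-testing family, not the paper's full method."""
    p = alpha / n_tests          # per-test size after correction
    return NormalDist().inv_cdf(1 - p / 2)

single = bonferroni_t_hurdle(1)      # classic two-sided 5% hurdle, ~1.96
mined = bonferroni_t_hurdle(316)     # hurdle after hundreds of tried factors
```

With a single test the familiar 1.96 cutoff applies; after hundreds of tried factors the Bonferroni hurdle climbs well past 3, which is the qualitative point of the abstract.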