Forecast Evaluation of Small Nested Model Sets
We propose two new procedures for comparing the mean squared prediction error (MSPE) of a benchmark model to the MSPEs of a small set of alternative models that nest the benchmark. Our procedures compare the benchmark to all the alternative models simultaneously rather than sequentially, and do not require reestimation of models as part of a bootstrap procedure. Both procedures adjust MSPE differences in accordance with Clark and West (2007); one procedure then examines the maximum t-statistic, while the other computes a chi-squared statistic. Our simulations examine the proposed procedures alongside two existing procedures that do not adjust the MSPE differences: a chi-squared statistic and White's (2000) reality check. In these simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure based on the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for U.S. inflation.
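To make the comparison concrete, the following is a minimal sketch of the kind of computation the abstract describes: forming Clark–West (2007) adjusted MSPE differences between a benchmark and each nesting alternative, then summarizing them with a maximum t-statistic and a chi-squared statistic. This is an illustrative simplification, not the authors' implementation; in particular, it uses a plain sample covariance of the loss differentials rather than a HAC estimator, and the function names are invented for this example.

```python
import numpy as np

def cw_adjusted_diff(y, f_bench, f_alt):
    """Clark-West (2007) adjusted MSPE difference for nested models.

    Subtracts the term (f_bench - f_alt)^2 from the alternative's squared
    error to correct for the noise introduced by estimating parameters
    that are zero under the null (benchmark) model.
    """
    e_bench = y - f_bench
    e_alt = y - f_alt
    return e_bench**2 - (e_alt**2 - (f_bench - f_alt)**2)

def simultaneous_stats(y, f_bench, alt_forecasts):
    """Max t-statistic and chi-squared statistic over all alternatives.

    alt_forecasts: list of forecast arrays, one per alternative model.
    Uses a simple i.i.d.-style covariance estimate (an assumption of
    this sketch; serially correlated forecast errors would call for a
    HAC estimator instead).
    """
    # P x m matrix of adjusted loss differentials
    F = np.column_stack([cw_adjusted_diff(y, f_bench, fa)
                         for fa in alt_forecasts])
    P = F.shape[0]
    fbar = F.mean(axis=0)
    # Covariance matrix of the vector of sample means
    V = np.atleast_2d(np.cov(F, rowvar=False, ddof=1)) / P
    t_stats = fbar / np.sqrt(np.diag(V))
    chi2 = float(fbar @ np.linalg.solve(V, fbar))
    return t_stats.max(), chi2
```

A one-sided rejection based on the maximum t-statistic asks whether at least one alternative significantly improves on the benchmark; the chi-squared statistic tests all adjusted differences jointly.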
We thank Roberto Duncan, Eleonora Granziera and Maria Zucca for research assistance, Michael McCracken for supplying the unpublished tables of quantiles referenced in section 3, and Raffaella Giacomini, participants at the 5th ECB Workshop on Forecasting Techniques, two anonymous referees and Tim Bollerslev (the editor) for helpful comments. West thanks the National Science Foundation for financial support. The views expressed here are not necessarily those of the European Central Bank or of the National Bureau of Economic Research.
Kirstin Hubrich & Kenneth D. West, 2010. "Forecast evaluation of small nested model sets," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 25(4), pages 574-594.