NBER Reporter: Research Summary Spring 2005
Policy Evaluation in Macroeconomics
Stephanie Schmitt-Grohe and Martin Uribe(1)
Much of our recent research has been devoted to developing and applying tools for the evaluation of macroeconomic stabilization policy. This choice of topic is motivated by the fact that, by the late 1990s, empirical research using macroeconomic data from industrialized countries had cast serious doubt on the ability of the neoclassical growth model to provide a satisfactory account of aggregate fluctuations. In response, the new Keynesian paradigm emerged as an alternative framework for understanding business cycles. One key difference between the neoclassical and the new Keynesian paradigms is that in the latter, the presence of various nominal and real distortions provides a meaningful role for stabilization policy, opening the door once again, after decades of dormancy, to policy evaluation.
Optimal Fiscal and Monetary Policy Under Sticky Prices
A well-known result in macroeconomic theory is that optimal fiscal and monetary policy features smooth distortionary income tax rates and highly volatile and unpredictable inflation rates. The intuition behind this result is straightforward: surprise inflation is equivalent to a lump-sum tax on nominal asset holdings. The Ramsey planner finances innovations in the fiscal budget, such as government spending shocks or unexpected declines in the tax base, through surprise changes in the price level. In this way, distortionary tax rates can be kept relatively stable over time. In calibrated model economies, under the Ramsey policy, the public would become accustomed to seeing inflation rates jump from -15 percent to +15 percent from one year to the next. This result is completely at odds not only with observed inflation behavior but also with the primary goal of central banks around the world, namely, price stability.
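The fiscal arithmetic behind this intuition is simple (the notation below is ours, introduced only for illustration): if the public enters the period holding nominal government liabilities B purchased at price level P, a surprise jump in the price level to P' > P erodes their real value by

```latex
\underbrace{\frac{B}{P}}_{\text{real value ex ante}}
\;-\;
\underbrace{\frac{B}{P'}}_{\text{real value ex post}}
\;=\;
\frac{B}{P}\left(1 - \frac{P}{P'}\right) \;>\; 0 ,
```

and the government captures exactly this amount in real terms. Because B is predetermined when the surprise occurs, the levy distorts no current decision margin, which is why it operates like a lump-sum tax.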
We argue that the price stability goal of central banks can indeed be justified on theoretical grounds.(2) One key assumption of existing studies on optimal monetary and fiscal policy is that there are no impediments to nominal price adjustments. We relax this assumption and instead assume that product prices are sticky.
Obviously, by making price changes costly, we expected that under the Ramsey policy inflation would be less volatile than in an economy with flexible prices. But our findings go well beyond that expectation. The introduction of a minuscule amount of price stickiness, less than one tenth of the degree of price stickiness estimated for the U.S. economy, suffices to make price stability the overriding goal of optimal monetary policy. Specifically, even when firms are assumed to be able to change prices every three to four weeks, the optimal volatility of inflation is below 0.52 percent per year, which is 13 times smaller than the optimal inflation volatility predicted under full price flexibility.
One may naturally expect that the reduced inflation volatility under the Ramsey plan would have to be compensated by increased unpredictability in income tax rates. But this is not the case. The Ramsey planner finances surprises to the fiscal budget mainly through adjustments in the stock of public debt. By using government debt as a shock absorber, the Ramsey planner can smooth tax rates over time. For instance, an unexpected fiscal deficit calls for a permanent increase in debt in the amount of the fiscal deficit and a small but permanent increase in taxes equal in size to the interest payments on the additional debt. Consequently, tax rates and government debt display a near-random-walk property. It follows that the mere introduction of a small amount of price stickiness resurrects the classical Barro(3) tax-smoothing result. This stands in contrast to the flexible-price case, in which tax rates inherit the stochastic process of the underlying shocks and thus, in general, do not display the near-random-walk property.
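The debt-as-shock-absorber logic can be illustrated with back-of-the-envelope numbers (all figures below are hypothetical, chosen only to make the arithmetic concrete):

```python
# Hypothetical numbers illustrating the tax-smoothing logic described above:
# a one-time fiscal surprise is rolled into the stock of debt, and taxes rise
# permanently only by the interest bill on that extra debt.

r = 0.04                  # steady-state real interest rate (assumed)
deficit_shock = 100.0     # unexpected one-time fiscal deficit

debt_increase = deficit_shock        # debt rises permanently, one for one
tax_increase = r * debt_increase     # permanent tax rise = interest on new debt

print(f"debt up by {debt_increase:.0f}, taxes up by only {tax_increase:.1f} per period")
```

Because each new shock is again absorbed by debt, the level of debt (and hence of taxes) carries the cumulated history of past shocks, which is exactly the near-random-walk behavior described above.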
Our investigation delivers three additional results of interest for the computation of Ramsey policies. First, we show that stationary Ramsey equilibria can be computed accurately by solving a first-order approximation to the Ramsey optimality conditions ignoring the implementability constraint. (Of course, this constraint must be taken into account in deriving the Ramsey optimality conditions.) Second, we show that in the economic environments we analyze, first-order accurate solutions to the Ramsey problem are virtually identical to second-order accurate solutions. Finally, and more importantly, in the case of flexible prices, with or without imperfect competition in product markets, we show that the Ramsey problem admits an exact numerical solution.(4) We demonstrate that the exact numerical solution to the Ramsey problem is remarkably similar to the first-order accurate solution. These results are significant in light of the fact that first-order approximations to Ramsey problems can be computed fairly easily.
Developing Tools for Policy Evaluation
One obstacle we encountered early on in the research summarized here was the lack of appropriate tools for evaluating stabilization policies in the context of distorted economies. An important part of our effort was therefore devoted to developing such tools.
Most models used in modern macroeconomics are too complex to allow for exact solutions. For this reason, researchers have appealed to numerical approximation techniques. One widely used approximation technique is a first-order perturbation method delivering a linear approximation to the policy function. One reason for the popularity of first-order perturbation techniques is that they do not suffer from the "curse of dimensionality." That is, problems with a large number of state variables can be handled at modest computational cost. Because models that are successful in accounting for many aspects of observed business cycles are bound to be large,(5) this advantage of perturbation techniques is of particular importance for policy evaluation. However, first-order approximation techniques suffer from serious limitations when applied to welfare evaluation. The problem is that when welfare is evaluated using a first-order approximation to the equilibrium laws of motion of endogenous variables, some second- and higher-order terms of the equilibrium welfare function are omitted while others are included. Consequently, the resulting criterion is inaccurate to order two or higher. This inaccuracy may result in spurious welfare rankings. For instance, in a recent paper Jinill Kim and Sunghyun Kim(6) show that in a simple two-agent economy, a welfare comparison based on an evaluation of the utility function using a linear approximation to the policy function may yield the erroneous result that welfare is higher under autarky than under full risk sharing.
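The order-mixing problem can be made precise with a one-variable sketch (our notation, not that of the papers cited). Write the equilibrium value of a variable as x = x&#772; + x₍₁₎ + x₍₂₎ + O(3), where x₍₁₎ and x₍₂₎ collect the first- and second-order terms of the policy function. Expanding period utility around the steady state gives

```latex
U(x) \;=\; U(\bar{x})
\;+\; U'(\bar{x})\,\bigl(x_{(1)} + x_{(2)}\bigr)
\;+\; \tfrac{1}{2}\,U''(\bar{x})\,x_{(1)}^{2}
\;+\; O(3).
```

Evaluating U at the first-order solution x&#772; + x₍₁₎ retains the second-order term ½U''(x&#772;)x₍₁₎² but drops the equally second-order term U'(x&#772;)x₍₂₎. Unless U'(x&#772;) happens to multiply something that vanishes in equilibrium, the omitted and retained terms are of the same order, so the resulting welfare measure is accurate only to first order.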
In general, a correct second-order approximation of the equilibrium welfare function requires a second-order approximation to the policy function. This is what we set out to accomplish in a recent article.(7) There, we derive a second-order approximation to the solution of a general class of discrete-time rational expectations models. Our main theoretical contribution is to show that for any model belonging to this general class, the coefficients on the terms linear and quadratic in the state vector in a second-order expansion of the decision rule are independent of the volatility of the exogenous shocks. In other words, these coefficients must be the same in the stochastic and the deterministic versions of the model. Thus, up to second order, the presence of uncertainty affects only the constant term of the decision rules. But the fact that only the constant term is affected by the presence of uncertainty is by no means inconsequential. For it implies that up to second order the unconditional mean of endogenous variables in general can be significantly different from their non-stochastic steady state values. Thus, second-order approximation methods in principle can capture important effects of uncertainty on, for example, average rate-of-return differentials across assets with different risk characteristics, or on the average level of consumer welfare. An additional advantage of higher-order perturbation methods is that, like their first-order counterparts, they do not suffer from the curse of dimensionality. This is because, given the first-order approximation to the policy function, finding the coefficients of a second-order approximation simply entails solving a system of linear equations.
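In the spirit of that article's notation (state vector x, perturbation parameter σ scaling the volatility of the exogenous shocks), the result says that the second-order expansion of a policy function g takes the form

```latex
g(x,\sigma) \;=\; g(\bar{x},0)
\;+\; g_x\,(x-\bar{x})
\;+\; \tfrac{1}{2}\,(x-\bar{x})'\,g_{xx}\,(x-\bar{x})
\;+\; \tfrac{1}{2}\,g_{\sigma\sigma}\,\sigma^{2},
```

where the coefficients g_x and g_{xx} coincide with their deterministic (σ = 0) counterparts and the cross derivative with respect to x and σ is zero. Uncertainty thus enters only through the constant term ½ g_{σσ} σ², which is precisely what shifts the unconditional means of the endogenous variables away from their nonstochastic steady-state values.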
Our main practical contribution is the development of a set of MATLAB programs that compute the coefficients of the second-order approximation to the solution to the general class of models described above. This computer code is publicly available at our websites.
Optimal Operational Monetary Policy for the U.S. Economy
After the completion of the second-order approximation toolkit, we felt that we were suitably equipped to undertake a systematic and rigorous evaluation of stabilization policy. A contemporaneous development that helped to facilitate our work was the emergence of estimated medium-scale dynamic general equilibrium models of the U.S. economy with the ability to explain the behavior of a relatively large number of macroeconomic variables at business-cycle frequency.(8)
A central characteristic of the studies on optimal monetary policy that existed at the time we initiated our research on policy evaluation was that they were conducted in the context of highly stylized environments. One important drawback of that approach is that highly simplified models are unlikely to provide a satisfactory account of cyclical movements except for a few macroeconomic variables of interest. For this reason, the usefulness of this strategy to produce policy advice for the real world is necessarily limited.
In a recent paper,(9) we depart from the extant literature by conducting policy evaluation within the context of a rich theoretical framework capable of explaining observed business cycle fluctuations for a wide range of nominal and real variables. Following the lead of Miles Kimball(10), we emphasize the importance of combining nominal and real rigidities in explaining the propagation of macroeconomic shocks. Specifically, the model features four nominal frictions -- sticky prices, sticky wages, money in the utility function, and a cash-in-advance constraint on the wage bill of firms -- and four sources of real rigidities: investment adjustment costs, variable capacity utilization, habit formation, and imperfect competition in product and factor markets. We assume aggregate fluctuations to be driven by supply shocks, which take the form of stochastic variations in total factor productivity, and demand shocks stemming from exogenous innovations to the level of government purchases. David Altig and his co-authors(11) argue that the model economy for which we seek to design optimal operational monetary policy indeed can explain the observed responses of inflation, real wages, nominal interest rates, money growth, output, investment, consumption, labor productivity, and real profits to productivity and monetary shocks in the postwar United States. In this respect, our paper aspires to be a step forward in the research program of generating monetary policy evaluation that is of relevance for the actual practice of central banking.
In our quest for the optimal monetary policy scheme we restrict attention to what we call operational interest-rate rules. By an operational interest-rate rule we mean an interest-rate rule that satisfies three requirements. First, it prescribes that the nominal interest rate is set as a function of a few readily observable macroeconomic variables. In the tradition of John Taylor's seminal 1993 paper,(12) we focus on rules whereby the nominal interest rate depends on measures of inflation, aggregate activity, and possibly its own lag. Second, the operational rule must induce an equilibrium satisfying the zero lower bound on nominal interest rates. And third, operational rules must render the rational expectations equilibrium unique. This last restriction closes the door to expectations-driven aggregate fluctuations.
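A minimal sketch of such a rule (the function name and all coefficient values below are hypothetical placeholders, not the optimized values reported in the paper):

```python
def operational_rule(pi, y, i_lag, alpha_pi=1.5, alpha_y=0.5, rho=0.8,
                     i_star=0.04):
    """Taylor-type operational rule: the nominal rate responds to inflation
    (pi), a measure of aggregate activity (y), and its own lag (i_lag).
    pi and y are deviations from their target/steady-state values.
    All coefficient values here are illustrative only."""
    # Requirement one: only a few readily observable variables enter the rule.
    i = (1 - rho) * (i_star + alpha_pi * pi + alpha_y * y) + rho * i_lag
    # Requirement two: the prescribed rate must respect the zero lower bound.
    return max(i, 0.0)

# At target inflation and trend output, the rule keeps the steady-state rate;
# a deep enough deflation drives the prescribed rate to the zero lower bound.
print(operational_rule(pi=0.0, y=0.0, i_lag=0.04))
print(operational_rule(pi=-0.5, y=0.0, i_lag=0.0))
```

Requirement three, uniqueness of the rational expectations equilibrium, cannot be checked from the rule in isolation: it depends on the model in which the rule is embedded, although in simple New Keynesian settings an inflation coefficient above one (the Taylor principle) typically delivers it.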
The object that monetary policy aims to maximize in our study is the expectation of lifetime utility of the representative household conditional on a particular initial state of the economy. Our focus on a conditional welfare measure represents a fundamental departure from most existing normative evaluations of monetary policy, which rank policies based upon unconditional expectations of utility.(13) Unconditional welfare measures ignore the welfare effects of transitioning from a particular initial state to the stochastic steady state induced by the policy under consideration.(14) Indeed, we document that under plausible initial conditions, conditional welfare measures can result in different rankings of policies than the more commonly used unconditional measure. This finding highlights the fact that transitional dynamics matter for policy evaluation.
In our welfare evaluations, we depart from the widespread practice in the neo-Keynesian literature on optimal monetary policy of limiting attention to models in which the nonstochastic steady state is undistorted. Most often, this approach involves assuming the existence of a battery of subsidies to production and employment financed by lump-sum taxes that are aimed at eliminating the long-run distortions originating from monopolistic competition in factor and product markets. The efficiency of the deterministic steady-state allocation is assumed for purely computational reasons. It allows the use of first-order approximation techniques to evaluate welfare accurately up to second order.(15) This practice has two potential shortcomings. First, the instruments necessary to bring about an undistorted steady state (for example, labor and output subsidies financed by lump-sum taxation) are empirically uncompelling. Second, it is not clear ex ante whether a policy that is optimal for an economy with an efficient steady state will also be so for an economy where the instruments necessary to engineer the nondistorted steady state are unavailable. For these reasons, we refrain from making the efficient-steady-state assumption and instead work with a model whose steady state is distorted.
Working with a model whose steady state is not Pareto efficient has a number of important ramifications. One is that a second-order accurate measure of welfare can no longer be obtained from a first-order approximation to the equilibrium. Instead, we obtain a second-order accurate approximation to welfare by solving for the equilibrium of the model up to second order, using our second-order methodology and computer code.
Our numerical work suggests that, in the model economy we study, the optimal operational interest-rate rule takes the form of a real-interest-rate targeting rule. It is forward looking and features an inflation coefficient close to unity, a mute response to output, and no interest-rate smoothing. The optimal rule satisfies the Taylor principle because its inflation coefficient exceeds unity, albeit only barely. Optimal operational monetary policy calls for significant inflation volatility. This result stands in contrast to those obtained in the related literature. The main element of the model driving the desirability of inflation volatility is the indexation of nominal factor and product prices to one-period lagged inflation. Under the alternative assumption of no indexation, or of indexation to long-run inflation, the conventional result of the optimality of inflation stability reemerges.
Schmitt-Grohe is a Research Associate, and Uribe a Faculty Research Fellow, in the NBER's Program on Economic Fluctuations and Growth. They are also professors of economics at Duke University. Their profiles appear later in this issue.
2. S. Schmitt-Grohe and M. Uribe, "Optimal Fiscal and Monetary Policy under Sticky Prices," NBER Working Paper No. 9220, September 2002, and Journal of Economic Theory, 114, (February 2004), pp. 198-230.
3. R. J. Barro, "On the Determination of the Public Debt," Journal of Political Economy, 87, (1979), pp. 940-71.
4. S. Schmitt-Grohe and M. Uribe, "Optimal Fiscal and Monetary Policy under Imperfect Competition," NBER Working Paper No. 10149, December 2003, and Journal of Macroeconomics, 26, (June 2004), pp. 183-209.
5. See for example, F. Smets and R. Wouters, "Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach," European Central Bank, April 21, 2004; and L. J. Christiano, M. Eichenbaum, and C. Evans, "Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy," NBER Working Paper No. 8403, July 2001.
6. J. Kim and S. H. Kim, "Spurious Welfare Reversals in International Business Cycle Models," Journal of International Economics, 60, (2003), pp. 471-500.
7. S. Schmitt-Grohe, and M. Uribe, "Solving Dynamic General Equilibrium Models Using a Second-Order Approximation to the Policy Function," NBER Technical Working Paper No. 282, October 2002, and Journal of Economic Dynamics and Control, 28, (January 2004), pp. 755-75.
8. See the papers by Christiano, Eichenbaum, and Evans and Smets and Wouters cited earlier.
11. D. Altig, L. J. Christiano, M. Eichenbaum, and J. Linde, "Technology Shocks and Aggregate Fluctuations," manuscript, Northwestern University, 2003.
12. J. Taylor, "Discretion versus Policy Rules in Practice," Carnegie Rochester Conference Series on Public Policy, 39, (December 1993), pp. 195-214.
13. Exceptions are R. Kollmann, "Welfare Maximizing Fiscal and Monetary Policy Rules," mimeo, University of Bonn, March 2003; and our "Optimal Simple And Implementable Monetary and Fiscal Rules," NBER Working Paper No. 10253, January 2004.
14. J. Kim, S. Kim, E. Schaumburg, and C. Sims, "Calculating and Using Second Order Accurate Solutions of Discrete Time Dynamic Equilibrium Models," mimeo, Princeton University, 2003.
15. This simplification was pioneered in J. J. Rotemberg and M. D. Woodford, "Interest Rate Rules in an Estimated Sticky Price Model," NBER Working Paper No. 6618, June 1998, and in J. B. Taylor, ed., Monetary Policy Rules, Chicago: University of Chicago Press, 1999, pp. 57-119.