Meetings, Winter 2006

03/01/2006

Monetary Economics

NBER’s Program on Monetary Economics met in Cambridge on November 4. NBER Research Associates Michael D. Bordo of Rutgers University and Julio J. Rotemberg of MIT organized the meeting. The following papers were discussed:

  • Gauti Eggertsson, Federal Reserve Bank of New York, “Great Expectations and the End of the Great Depression” Discussant: Hugh Rockoff, Rutgers University
  • Michael D. Bordo, Christopher Erceg, Andrew Levin, and Ryan Michaels, Federal Reserve Board, “Three Great American Disinflations” Discussant: Francois Velde, Federal Reserve Bank of Chicago
  • Refet S. Gürkaynak, Bilkent University; Andrew Levin; and Eric Swanson, Federal Reserve Bank of San Francisco, “Does Inflation Targeting Anchor Long-Run Inflation Expectations? Evidence from Long-Term Bond Yields in the U.S., U.K., and Sweden” Discussant: Kenneth Kuttner, Oberlin College
  • Timothy Cogley, University of California, Davis, and Argia Sbordone, Federal Reserve Bank of New York, “A Search for a Structural Phillips Curve” Discussant: Jean Boivin, Columbia University
  • Igor Livshits and James MacGee, University of Western Ontario; and Michele Tertilt, Stanford University, “Accounting for the Rise in Consumer Bankruptcies” Discussant: Stephen Zeldes, Columbia University
  • Lutz Kilian, University of Michigan, “Exogenous Oil Supply Shocks: How Big Are They and How Much Do They Matter for the U.S. Economy?” Discussant: Ana Maria Herrera, Michigan State University

Eggertsson argues that the recovery from the Great Depression was driven by a shift in expectations, caused by President Franklin Delano Roosevelt’s (FDR) policy actions. On the monetary policy side, FDR abolished the gold standard and — even more importantly — announced an explicit policy objective of inflating the price level to pre-Depression levels. On the fiscal policy side, FDR expanded government real and deficit spending, making his policy objective credible. Eggertsson evaluates the economic consequences of FDR’s policies using a dynamic stochastic general equilibrium model with sticky prices and rational expectations.

In their paper, Bordo and his coauthors examine three famous episodes of disinflation (or deflation) in U.S. history: the deflations that followed the Civil War and World War I, and the Volcker disinflation of the early 1980s. For each of these episodes, they derive measures of policy predictability that attempt to quantify the extent to which each deflation was anticipated by economic agents. They use their measures to help account for the disparate real effects observed across episodes, and in turn relate them to the policy actions and communication strategy of the monetary authority. They then proceed to account for the salient features of each episode within the context of a stylized model. Their simulations indicate how a more predictable policy of gradual deflation could have helped avoid the sharp post-WWI depression. But the simulations also suggest that securing the benefits of gradualism requires a supporting institutional framework and communication strategy that allows the private sector to make reliable inferences about the course of policy.

Gürkaynak, Levin, and Swanson investigate the extent to which inflation targeting helps anchor long-run inflation expectations by comparing the behavior of daily bond yield data in the United Kingdom and Sweden, both inflation targeters, to that in the United States, a non-inflation-targeter. Using the difference between far-ahead forward rates on nominal and indexed bonds as a measure of compensation for expected inflation and inflation risk at long horizons, the authors examine the extent to which far-ahead forward inflation compensation moves in response to macroeconomic data releases and monetary policy announcements. In the United States, forward inflation compensation exhibits highly significant responses to economic news. In the United Kingdom, there is a similar level of sensitivity prior to the Bank of England gaining independence in 1997, but a striking absence of such sensitivity since the central bank became independent. In Sweden, inflation compensation has been insensitive to economic news over the whole period for which the authors have data. The authors show that these results are also matched by the time-series behavior of far-ahead forward interest rates and inflation compensation over this period. All of these findings suggest that a known and credible inflation target significantly helps to anchor the private sector’s views of the distribution of long-run inflation outcomes.
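The measure itself is easy to reconstruct. Below is a minimal sketch in Python of the forward-rate arithmetic, with invented yields (the maturities and numbers are illustrative, not the authors' data); the paper's tests then regress daily changes in this quantity on macroeconomic surprises.

```python
# A minimal sketch of the paper's measure: far-ahead forward rates are
# implied by pairs of zero-coupon yields, and inflation compensation is
# the gap between the nominal and indexed forward rates.

def forward_rate(y_short, m, y_long, n):
    """One-period forward rate between maturities m and n years,
    assuming continuously compounded zero-coupon yields (in percent)."""
    return (y_long * n - y_short * m) / (n - m)

# Hypothetical 9- and 10-year zero-coupon yields (percent).
nominal_9y, nominal_10y = 4.60, 4.65   # nominal government curve
real_9y, real_10y = 2.10, 2.12         # indexed (inflation-protected) curve

nominal_fwd = forward_rate(nominal_9y, 9, nominal_10y, 10)
real_fwd = forward_rate(real_9y, 9, real_10y, 10)

# Far-ahead forward inflation compensation: expected long-run inflation
# plus an inflation risk premium.
print(f"9-to-10-year inflation compensation: {nominal_fwd - real_fwd:.2f}%")
```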

The foundation of the New Keynesian Phillips curve is a model of price setting with nominal rigidities that implies that the dynamics of inflation are well explained by the evolution of real marginal costs. Cogley and Sbordone ask whether this is a structurally invariant relationship. To assess it, they first estimate an unrestricted time-series model for inflation, unit labor costs, and other variables, and present evidence that their joint dynamics are well represented by a vector autoregression with drifting coefficients and volatilities, as in Cogley and Sargent (2004). Then, following Sbordone (2002, 2003), they apply a two-step minimum distance estimator to estimate deep parameters. Based on their results, they argue that the price-setting model is structurally invariant.
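For reference, the baseline relationship being tested is the purely forward-looking New Keynesian Phillips curve (the paper's specification is richer, allowing for drifting trend inflation, but this is the core equation):

```latex
% Forward-looking New Keynesian Phillips curve under Calvo pricing:
% inflation depends on expected inflation and real marginal cost.
\pi_t = \beta\,\mathbb{E}_t[\pi_{t+1}] + \kappa\,\widehat{mc}_t,
\qquad
\kappa = \frac{(1-\theta)(1-\beta\theta)}{\theta}
```

Here \widehat{mc}_t is real marginal cost, θ is the Calvo probability that a firm leaves its price unchanged, and β is the discount factor. Structural invariance means that deep parameters such as θ and β remain stable even as the reduced-form VAR coefficients drift.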

Personal bankruptcies in the United States have increased dramatically, rising from 1.4 per thousand working-age population in 1970 to 8.5 in 2002. Livshits, MacGee, and Tertilt use a heterogeneous agent life-cycle model with competitive financial intermediaries who can observe households’ earnings, age, and current asset holdings to evaluate several commonly offered explanations. They find that an increase in uncertainty (income shocks, expense uncertainty) cannot quantitatively account for the rise in bankruptcies. Instead, stories related to a change in the credit market environment are more plausible. In particular, a combination of a decrease in credit market transaction costs together with a decline in “stigma” does a good job of accounting for the rise in consumer bankruptcy. The authors also argue that the abolition of usury laws and other legal changes have played little role.

Since the oil crises of the 1970s, there has been strong interest in the question of how oil production shortfalls caused by wars and other exogenous political events in OPEC countries affect oil prices, U.S. real GDP growth, and U.S. CPI inflation. Kilian focuses on the modern OPEC period since 1973. His results differ along a number of dimensions from the conventional wisdom. First, he shows that under reasonable assumptions, the timing, magnitude, and even the sign of exogenous oil supply shocks may differ greatly from current state-of-the-art estimates. Second, the common view — that the case for the exogeneity of at least the major oil price shocks is strong — is supported by the data for the 1980–1 and 1990–1 oil price shocks, but not for other oil price shocks. Notably, statistical measures of the net oil price increase relative to the recent past do not represent the exogenous component of oil prices. In fact, only a small fraction of the observed oil price increases during crisis periods can be attributed to exogenous oil production disruptions. Third, compared to previous indirect estimates of the effects of exogenous supply disruptions on real GDP growth that treated major oil price increases as exogenous, the direct estimates that Kilian obtains suggest a sharp drop after five quarters rather than an immediate and sustained reduction in economic growth for a year. They also suggest a spike in CPI inflation three quarters after the exogenous oil supply shock rather than a sustained increase in inflation, as is sometimes conjectured. Finally, Kilian’s results put into perspective the importance of exogenous oil production shortfalls in the Middle East. He shows that exogenous oil supply shocks made remarkably little difference overall for the evolution of U.S. real GDP growth and CPI inflation since the 1970s, although they did matter for some historical episodes.
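The "net oil price increase" measures Kilian criticizes are simple to state. In Hamilton's common construction, the net increase is the amount by which the current price exceeds its maximum over a trailing window, and zero otherwise; a sketch (the window length and price series are illustrative):

```python
import numpy as np

def net_oil_price_increase(prices, window=12):
    """Net price increase: the amount by which the current log price exceeds
    its maximum over the previous `window` periods, and zero otherwise."""
    p = np.log(np.asarray(prices, dtype=float))
    out = np.zeros_like(p)
    for t in range(window, len(p)):
        out[t] = max(0.0, p[t] - p[t - window:t].max())
    return out

# Hypothetical monthly prices: flat, then a crisis-driven spike.
prices = [30] * 14 + [35, 45, 40]
print(net_oil_price_increase(prices).round(3))
```

Kilian's point is that movements in such a measure mix exogenous supply disruptions with the endogenous response of prices to demand, so it cannot be read as the exogenous component of the oil price.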

Health Care

The NBER’s Health Care Program met in Cambridge on November 4. NBER Research Associate David M. Cutler, of Harvard University, and Program Director Alan M. Garber, of Stanford University, organized the meeting. These papers were presented:

  • Nancy Beaulieu, Harvard University and NBER; David M. Cutler; and Katherine Ho, Columbia University and NBER, “The Business Case for Diabetes Disease Management at Two Managed Care Organizations”
  • Mark Pauly, University of Pennsylvania and NBER; Christy Thompson, Independence Blue Cross; Thomas Abbott, Medstat, Inc.; William Sage, Columbia University; and James Margolis, Medical Group Management Association, “Who Pays: The Incidence of Higher Malpractice Premiums”
  • Yu-chu Shen, Naval Postgraduate School and NBER; and Glenn Melnick, University of Southern California, “Is Managed Care Still an Effective Cost Containment Device?”
  • Dana Goldman, RAND Corporation and NBER; Pinar Karaca-Mandic and Geoffrey Joyce, RAND Corporation; and Neeraj Sood, RAND Corporation and NBER, “Adverse Selection in Retiree Prescription Drug Plans”
  • Ernst R. Berndt, MIT and NBER; Alisa S. Busch and Sharon-Lise Normand, Harvard University; and Richard G. Frank, Harvard University and NBER, “Real Output in Mental Health Care During the 1990s” (NBER Working Paper No. 11557)


Diabetes is the most common and costly of all chronic diseases. There is broad-based agreement on how to manage the disease, yet fewer than 40 percent of diabetics receive guideline levels of medical care. Beaulieu, Cutler, and Ho investigate the reasons for this phenomenon by examining the business case for improved diabetes care from the perspective of a single health plan (HealthPartners of Minnesota). The potential benefits accruing to a health plan from diabetes disease management include medical care cost savings and higher premiums. The potential costs to the health plan derive from disease management program costs and adverse selection. The authors find that the implementation of diabetes disease management coincided with large health improvements. Medical care cost savings over several years were small in the closed panel group practice but moderate for the health plan overall. The difference in cost savings between these two patient populations could be attributable to scale or to differences in the baseline health of the populations. They find evidence that adverse selection and the timing of costs and benefits worsen the health plan business case. In addition, the payment systems, from purchaser to health plan and health plan to provider, are very weakly connected to the quality of diabetes care, further weakening the business case. Finally, overlapping provider networks create a public goods externality that limits the health plan’s ability to privately capture the benefits from its investments.

Malpractice premiums are higher in some states than in others for apparently similar physician practices. They are rising, and they are rising at different rates. Someone clearly is paying more into the health care or health insurance system, but who? In the first instance, obviously physician practices pay the malpractice premium, but they may be able to shift some or all of high or growing premiums onto insurers and patients. The question of the “incidence” of premiums is an important part of understanding how the system behaves and has been behaving over time. An answer to this question would also help in judging the distribution of gains and losses from efforts to constrain premiums or damage awards. If all the gain from lower premiums goes to physicians, public attitudes may be different than if it is shared with the public. Pauly, Thompson, Abbott, Sage, and Margolis report on a study of premium incidence over the period 1994-2002, when the malpractice insurance system again went into “crisis” as premiums rose significantly in some geographic areas and for some kinds of physicians.

Shen and Melnick take a historical perspective in examining the effects of managed care growth and hospital competition on hospital cost and revenue growth. Looking at managed care’s boom period (1990-4), its mature period (1994-8), and its backlash period (1998-2003), they find that higher managed care presence was indeed effective in slowing down hospital cost and revenue growth during the boom and the mature periods. However, it lost its cost containment effect during the backlash period. This result persists under different estimation methods designed to reduce biases that might result from omitted variables and measurement error. On the other hand, competition effects appear to persist throughout the three periods. Such persistent competition effects were initially the result of aggressive selective contracting in the high managed care markets, but were later dominated by the less saturated, but growing, managed care markets that seem to be catching up with the more developed markets.

Rising co-payments for prescription drugs, coupled with already low rates of compliance for chronic therapies, raise concerns about the consequences of the design of pharmacy benefits. Goldman, Karaca-Mandic, Joyce, and Sood consider one such innovative benefit, under which patients with the greatest therapeutic benefit from a prescription drug have lower co-payments. Patients often do not fully internalize future medical benefits of a drug therapy and hence do not use the drug optimally. Better price incentives (through lower co-payments) to patients with higher potential efficacy would encourage optimal compliance and could lead to future health plan savings in terms of avoided health services utilization. The authors model such a co-payment scheme for one of the largest classes of prescription drugs: cholesterol-lowering (CL) therapy. Using claims data from 88 health plans, they study 62,274 patients aged 20 and older who initiated CL therapy between 1997 and 2001. They examine the association between co-payments and compliance in the year following initiation of therapy, and the association between compliance and subsequent hospital and emergency department service use for up to four years following initiation. They use the results to simulate the effects of co-payments that vary depending on a patient’s risk of cardiovascular events. They show that strategically reducing co-payments for patients most at risk can improve overall compliance and reduce use of other expensive services. In an era of consumer-directed health care and improved information technology, tailoring co-payments to the expected therapeutic benefit of a patient can increase the clinical and economic efficacy of prescription medications.

Health accounts document changes over time in the level and composition of health spending. There has been a continued evolution in the ability to track such outlays; less rapid has been the ability to interpret changes in spending. Berndt, Busch, Frank, and Normand apply quality-adjusted price indexes for several major mental disorders to national mental health account estimates to assess changes in real “output.” They show that using the new price indexes reveals large gains in real output relative to the application of BLS indexes.

Macroeconomics and Individual Decisionmaking

The NBER’s Working Group on Macroeconomics and Individual Decisionmaking met in Cambridge on November 5. Working Group Directors George A. Akerlof, University of California, Berkeley, and Robert J. Shiller, NBER and Yale University, set the following agenda:

  • Nabil Al-Najjar, Sandeep Baliga, and David Besanko, Northwestern University, “The Sunk Cost Bias and Managerial Pricing Practices” Discussant: Truman Bewley, Yale University
  • William T. Dickens, Brookings Institution; Lorenz Goette, University of Zurich; Erica L. Groshen, Federal Reserve Bank of New York; Steinar Holden, University of Oslo; Julian Messina, Jarkko Turunen, and Melanie Ward, European Central Bank; and Mark E. Schweitzer, Federal Reserve Bank of Cleveland, “The Interaction of Labor Markets and Inflation: Analysis of Micro Data from the International Wage Flexibility Project” Discussant: Ricardo Reis, Princeton University and NBER
  • Refet S. Gürkaynak, Bilkent University, and Justin Wolfers, University of Pennsylvania and NBER, “Macroeconomic Derivatives: An Initial Analysis of Market-Based Macro Forecasts, Uncertainty and Risk” Discussant: Paul Willen, Federal Reserve Bank of Boston
  • Ricardo J. Caballero, MIT and NBER, and Arvind Krishnamurthy, Northwestern University, “Financial System Risk and Flight to  Quality” Discussant: Jon Faust, Federal Reserve Board
  • Annamaria Lusardi, Dartmouth College and NBER, and Olivia S. Mitchell, University of Pennsylvania and NBER, “Financial Literacy and Planning: Implications for Retirement Well-Being” Discussant: Andrew Caplin, New York University and NBER


Al-Najjar, Baliga, and Besanko provide an explanation for why the sunk cost bias persists among firms competing in a differentiated product oligopoly. Firms experiment with cost methodologies that are consistent with real-world accounting practices, including ones that allocate fixed and sunk costs to determine variable costs. These firms follow naive adaptive learning to adjust prices. Costing methodologies that increase profits are reinforced. The authors show that all firms eventually display the sunk cost bias. For the canonical case of symmetric linear demand, they obtain comparative statics results showing how the degree of the sunk cost bias changes with demand.

The adoption of explicit or implicit inflation targets by many central banks, and the low stable rates of inflation that have ensued, raise the question of how inflation affects market efficiency. Dickens, Goette, Groshen, Holden, Messina, Schweitzer, Turunen, and Ward study three market imperfections that cause the rate of inflation to affect labor market efficiency. First, the presence of substantial resistance to nominal wage cuts in a low inflation environment can slow the adjustment of relative wages to labor market shocks and thus result in a misallocation of resources. Second, to the extent that downward rigidity prevents real rather than nominal wage cuts, inflation will not improve efficiency. In this case, only increases in real wages resulting from productivity growth can reduce the misallocation of resources caused by a real wage floor. Third, higher inflation is associated with more frequent wage and price changes, higher search costs for goods or jobs, and greater uncertainty about the future path of wages and prices. These effects can lead to errors and adjustment lags in wage setting and diminish the information value of observed wages. Thus, increased inflation may also cause a misallocation of resources. In short, inflation can grease the wheels of economic adjustment in the labor market by relieving the constraint imposed by downward nominal wage rigidity, but not if there is also substantial downward real wage rigidity. At the same time, inflation can throw sand in the wheels of economic adjustment by degrading the value of price signals. Knowledge of which of these imperfections dominates at different levels of inflation and under different institutional regimes can be valuable for choosing an inflation target and for learning more about the economic environment in which monetary policy is conducted. The authors briefly review the empirical literature in order to motivate the method used to distinguish these three labor market imperfections. Next, they describe the data and empirical approach, which applies a common protocol to 31 distinct panels of workers’ wage changes. Then they establish that wage changes show substantial dispersion that rises with the rate of wage inflation, as predicted by grease and sand effects. To identify the three imperfections under consideration, they examine histograms of wage changes (corrected for measurement errors) for the particular asymmetries and spikes that are characteristic of downward real and nominal wage rigidity. This process yields estimates of the prevalence of real and nominal wage rigidity for each dataset and year, which they then analyze for insight into the causes and consequences of wage rigidities. Finally, they examine the linkage between estimates of true wage change dispersion and inflation for evidence of sand effects.

In September 2002, a new market in “Economic Derivatives” was launched, allowing traders to take positions on future values of several macroeconomic data releases. Gürkaynak and Wolfers provide an initial analysis of the prices of these options. They find that market-based measures of expectations are similar to survey-based forecasts, although the market-based measures somewhat more accurately predict financial market responses to surprises in data. These markets also provide implied probabilities of the full range of specific outcomes, allowing the authors to measure uncertainty, assess its driving forces, and compare this measure of uncertainty with the dispersion of point estimates among individual forecasters (a measure of disagreement). They also assess the accuracy of market-generated probability density forecasts. A consistent theme is that few of the behavioral anomalies present in surveys of professional forecasts survive in equilibrium, and that these markets are remarkably well calibrated. Finally, they assess the role of risk, finding little evidence that risk aversion drives a wedge between market prices and probabilities in this market.
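The implied probabilities come from the structure of the contracts: abstracting from discounting over the short horizon of these auctions, the price of a digital option paying $1 if the data release lands in a given bin is the market's risk-neutral probability of that bin. A sketch with invented prices (the bins and numbers here are hypothetical):

```python
import numpy as np

# Hypothetical prices of digital options on a nonfarm payrolls release,
# each paying $1 if the release lands in the corresponding bin.
bin_midpoints = np.array([-50, 0, 50, 100, 150, 200])  # thousands of jobs
prices = np.array([0.05, 0.15, 0.30, 0.30, 0.15, 0.05])

probs = prices / prices.sum()                       # normalize to a density
mean = probs @ bin_midpoints                        # market-based point forecast
std = np.sqrt(probs @ (bin_midpoints - mean) ** 2)  # market-based uncertainty
print(f"implied forecast: {mean:.0f}k, implied uncertainty: {std:.0f}k")
```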

Caballero and Krishnamurthy present a model of flight-to-quality episodes that emphasizes financial system risk and the Knightian uncertainty surrounding these episodes. In the model, agents are uncertain about the probability distribution of shocks in markets different from theirs, treating such uncertainty as Knightian. Aversion to Knightian uncertainty generates demand for safe financial claims. It also leads agents to require financial intermediaries to lock up capital to cover their own markets’ shocks in a manner that is robust to uncertainty over other markets, but is wasteful in the aggregate. Locked collateral cannot move across markets to offset negative shocks and hence can trigger a financial accelerator. A lender of last resort can unlock private capital markets to stabilize the economy during these periods by committing to intervene should conditions worsen.

Some recent evidence suggests that many American households will not be able to maintain their lifestyles in retirement. Little is known about why people fail to plan for retirement, and whether planning and information costs might affect retirement saving patterns. To better understand these issues, Lusardi and Mitchell devised and fielded a purpose-built module on planning and financial literacy for the 2004 Health and Retirement Study (HRS). This module measures how workers make their saving decisions, how they collect the information for making these decisions, and whether they possess the financial literacy needed to make these decisions. The resulting analysis shows that financial illiteracy is widespread among older Americans: only half of the age 50+ respondents could correctly answer two simple questions regarding interest compounding and inflation, and only one-third understood these as well as stock market risk. Women, minorities, and those without a college degree were particularly at risk of displaying low financial knowledge. The authors also evaluate whether people tried to figure out how much they need to save for retirement, whether they devised a plan, and whether they succeeded at the plan. In fact, these calculations prove to be difficult: fewer than one-third of the age 50+ respondents ever tried to devise a retirement plan, and only two-thirds of those who tried actually claim to have succeeded. Overall, fewer than one-fifth of the respondents believed that they engaged in successful retirement planning. The authors also find that financial knowledge and planning are clearly interrelated: those who displayed financial knowledge were more likely to plan and to succeed in their planning. Moreover, those who did plan were more likely to rely on formal planning methods such as retirement calculators, retirement seminars, and financial experts, and less likely to rely on family/relatives or coworkers. Finally, Lusardi and Mitchell show that keeping track of spending and budgeting habits appears conducive to retirement saving.

Asset Pricing

The NBER’s Program on Asset Pricing met in Cambridge on November 11. Jessica Wachter, NBER and The Wharton School, and Luis M. Viceira, Harvard Business School, organized the meeting. The following papers were discussed:

  • Bernard Dumas, INSEAD and NBER, and Alexander Kurshev and Raman Uppal, London Business School, “What Can Rational Investors Do About Excessive Volatility and Sentiment Fluctuations?” Discussant: Leonid Kogan, MIT and NBER
  • Lubos Pastor and Pietro Veronesi, University of Chicago and NBER, “Technological Revolutions and Stock Prices” Discussant: Markus Brunnermeier, Princeton University and NBER
  • Jun Pan, MIT and NBER, and Kenneth Singleton, Stanford University and NBER, “Default and Recovery Implicit in the Term Structure of Sovereign CDS Spreads” Discussant: Francis Longstaff, University of California, Los Angeles and NBER
  • Ravi Bansal and Ed Fang, Duke University, and Amir Yaron, University of Pennsylvania and NBER, “Equity Capital: A Puzzle?” Discussant: John Heaton, University of Chicago and NBER
  • Borja Larrain, Federal Reserve Bank of Boston, and Motohiro Yogo, University of Pennsylvania, “Does Firm Value Move Too Much to be Justified by Subsequent Changes in Cash Flow?” Discussant: Malcolm Baker, Harvard University and NBER
  • Jacob Boudoukh, Matthew Richardson, and Robert Whitelaw, New York University and NBER, “The Myth of Long-Horizon Predictability”
  • Amit Goyal, Emory University, and Ivo Welch, Brown University and NBER, “A Comprehensive Look at the Empirical Performance of Equity Premium Prediction”
  • John Y. Campbell, Harvard University and NBER, and Samuel B. Thompson, Harvard University, “Predicting the Equity Premium Out of Sample: Can Anything Beat the Historical Average?” (NBER Working Paper No. 11468) Discussant for all three papers: John Cochrane, University of Chicago and NBER

 
Dumas, Kurshev, and Uppal analyze the trading strategy that would allow an investor to take advantage of “excessive” stock price volatility and “sentiment” fluctuations. They construct a general equilibrium model of sentiment. In it, there are two classes of agents; stock prices are excessively volatile because one class is overconfident about a public signal. As a result, this class of irrational agents changes its expectations too often, sometimes being excessively optimistic, sometimes being excessively pessimistic. The authors determine and examine the trading strategy of the rational investors who are not overconfident about the signal. They find that, because irrational traders introduce an additional source of risk, rational investors reduce the proportion of wealth invested in equity, except when they are extremely optimistic about future growth. Moreover, their optimal portfolio strategy is based not just on a current price divergence but also on a model of irrational behavior and a prediction concerning the speed of convergence. Thus, the portfolio strategy includes a protection in case there is a deviation from that prediction. The authors find that long maturity bonds are an essential accompaniment of equity investment, as they serve to hedge this “sentiment risk.” Even though rational investors find it beneficial to trade on their belief that the market is excessively volatile, the answer to the question posed in the title is: “There is little that rational investors can do optimally to exploit, and hence eliminate, excessive volatility, except in the very long run.”

During technological revolutions, stock prices of innovative firms tend to exhibit high volatility and bubble-like patterns, which are often attributed to investor irrationality. Pastor and Veronesi develop a general equilibrium model that rationalizes the observed price patterns. High volatility is a result of high uncertainty about the average productivity of a new technology. Investors learn about this productivity before deciding whether to adopt the technology on a large scale. For technologies that ultimately are adopted, the nature of uncertainty changes from idiosyncratic to systematic as the adoption becomes more likely; as a result, stock prices fall after an initial run-up. This “bubble” in stock prices is observable ex post but unpredictable ex ante, and it is most pronounced for technologies characterized by high uncertainty and fast adoption. The authors examine stock prices in the early days of American railroads, and find evidence consistent with a large-scale adoption of the railroad technology by the late 1850s.

Pan and Singleton explore in depth the nature of the risk-neutral credit-event intensities (λ^Q) that best describe the term structures of sovereign CDS spreads. They examine three distinct families of stochastic processes: the square-root, lognormal, and three-halves processes. Their models use different specifications of mean reversion and time-varying volatility to fit both the distributions of spreads and the variation over time in the shapes of the term structures of spreads. They find that the models imply highly persistent λ^Q that are strongly correlated with measures of global credit event risk and the VIX index of option-implied volatilities. Moreover, the correlations across countries of the model-implied credit-event intensities are large, and change with credit-market conditions. There are substantial model-implied risk premiums associated with unpredictable future variation in λ^Q. The authors show that the term structure of CDS spreads allows them to separately identify both the loss rate in the event of default, L^Q, and the parameters of the intensity process λ^Q. Unconstrained estimates of L^Q are much lower than the values typically assumed in the financial industry. Finally, to shed light on the economic consequences of differing levels of L^Q or persistence in λ^Q, the authors explore the sensitivity of the prices of options on CDS contracts to alternative settings of the parameters governing the default process.
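A back-of-the-envelope identity, sometimes called the credit triangle, clarifies the identification problem: with a constant intensity and loss rate, the CDS spread is approximately the product of the two,

```latex
\mathrm{CDS}_t \;\approx\; \lambda^{\mathbb{Q}}\, L^{\mathbb{Q}}
```

so a single spread cannot disentangle λ^Q from L^Q. It is the joint behavior of spreads across maturities, combined with a dynamic model for λ^Q, that allows the two to be identified separately.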

In almost any equilibrium model, shifts in sectoral wealth have direct implications for asset returns, inducing investors to hold more or less of their wealth in the sector. For an expanding sector, these inducements can be in the form of higher-mean or lower-volatility assets. Bansal, Fang, and Yaron document that shifts in sectoral financial wealth have virtually no bearing on the subsequent mean and volatility of sectoral returns. About 90 percent of the wealth share fluctuations are attributable to movements in net payout and 10 percent to changes in expected returns. The evidence shows that sectoral wealth and asset returns are not related — this leads to the equity capital puzzle.

Through the flow-of-funds identity and the capital accumulation equation, Larrain and Yogo develop a present-value model that relates the market value of corporate assets to expected future cash flows. The relevant measure of cash flow is net payout, which is the sum of dividends, interest, and net equity and debt repurchases. A variance decomposition of the ratio of net payout to assets shows that 12 percent of its variation is explained by asset returns, while 88 percent is explained by cash flow growth. The constant discount rate present-value model is adequate for valuing corporate assets, in contrast to its failure for valuing equity.
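The decomposition follows the standard log-linear present-value logic, applied to net payout rather than dividends (a sketch; the notation here is generic, not the paper's):

```latex
% Log-linear present-value identity: the log net payout-to-value ratio moves
% only with expected future returns r or expected net payout growth \Delta d.
d_t - v_t \;\approx\; \mathrm{const} + \mathbb{E}_t \sum_{j=0}^{\infty}
\rho^{\,j}\left(r_{t+1+j} - \Delta d_{t+1+j}\right)
```

Projecting each term on the ratio and dividing by its variance yields the reported shares: about 12 percent from returns and 88 percent from cash flow growth.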

The prevailing view in finance is that the evidence for being able to predict long-horizon stock returns is significantly stronger than for short-horizon returns. Boudoukh, Richardson, and Whitelaw show that, for all practical purposes, the estimators are almost perfectly correlated across horizons under the null hypothesis of no predictability. For example, for the persistence levels of dividend yields, the analytical correlation is 99 percent between the 1- and 2-year horizon estimators and 94 percent between the 1- and 5-year horizons, because of the combined effects of overlapping returns and persistence of the predictive variable. Common sampling error across equations leads to OLS coefficient estimates and R-squares that are roughly proportional to the horizon under the null of no predictability. This is the precise pattern found in the data. The authors corroborate the asymptotic theory and extend the analysis using extensive simulation evidence. They then perform joint tests across horizons for a variety of explanatory variables, and find little or no evidence of predictability in the data.
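The mechanism is easy to reproduce by simulation under the null. A single simulated sample illustrates the setup (parameter values are arbitrary); repeating it many times reproduces the near-perfect correlation of estimators across horizons:

```python
import numpy as np

rng = np.random.default_rng(0)
T, horizons = 600, [1, 2, 3, 4, 5]

# Null of no predictability: i.i.d. returns, but a persistent predictor
# (like the dividend yield) and overlapping multi-year returns.
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.98 * x[t - 1] + rng.standard_normal()
r = rng.standard_normal(T)

for k in horizons:
    rk = np.convolve(r, np.ones(k), mode="valid")[1:]  # sum of r_{t+1..t+k}
    xk = x[: len(rk)]                                  # predictor at time t
    beta = np.cov(rk, xk, ddof=0)[0, 1] / xk.var()
    r2 = beta**2 * xk.var() / rk.var()
    print(f"horizon {k}: beta = {beta: .3f}, R2 = {r2:.3f}")
```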

Economists have suggested a whole range of variables that predict the equity premium: dividend price ratios, dividend yields, earnings-price ratios, dividend payout ratios, corporate or net issuing ratios, book-market ratios, beta premia, interest rates (in various guises), and consumption-based macroeconomic ratios (cay). Goyal and Welch comprehensively reexamine the performance of these variables, both in-sample and out-of-sample. They find that most variables would not have helped an investor outpredict the historical equity premium mean. Most would have hurt outright. None deserves an unqualified endorsement.

A number of variables are correlated with subsequent returns on the aggregate U.S. stock market in the twentieth century. Some of these variables are stock market valuation ratios; others reflect patterns in corporate finance or the levels of short- and long-term interest rates. Goyal and Welch (2004) have argued that in-sample correlations conceal a systematic failure of these variables out of sample: none are able to beat a simple forecast based on the historical average stock return. In their paper, Campbell and Thompson show that forecasting variables with significant forecasting power in-sample generally have better out-of-sample performance than a forecast based on the historical average return, once sensible restrictions are imposed on the signs of coefficients and return forecasts. The out-of-sample predictive power is small, but they find that it is economically meaningful. They also show that a variable is quite likely to have poor out-of-sample performance for an extended period of time even when the variable genuinely predicts returns with a stable coefficient.
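Both horse races reduce to an out-of-sample R² against the recursively estimated historical mean, with Campbell and Thompson adding the sign restrictions described above. A minimal sketch (the function name and data are mine; the example series are simulated):

```python
import numpy as np

def oos_r2(returns, predictor, start=60, restrict=True):
    """Out-of-sample R^2 of a predictive regression relative to the
    recursively estimated historical-mean forecast (1 - SSE_model/SSE_mean)."""
    err_model, err_mean = [], []
    for t in range(start, len(returns) - 1):
        y, x = returns[1 : t + 1], predictor[:t]   # pairs (x_s, r_{s+1}), s < t
        beta, alpha = np.polyfit(x, y, 1)
        if restrict and beta < 0:                  # impose the theoretically
            beta, alpha = 0.0, y.mean()            # expected (positive) slope
        forecast = alpha + beta * predictor[t]
        if restrict:
            forecast = max(forecast, 0.0)          # rule out a negative premium
        err_model.append(returns[t + 1] - forecast)
        err_mean.append(returns[t + 1] - y.mean())
    return 1 - np.sum(np.square(err_model)) / np.sum(np.square(err_mean))

rng = np.random.default_rng(1)
r = 0.005 + 0.04 * rng.standard_normal(400)   # simulated excess returns
z = rng.standard_normal(400)                  # simulated, uninformative predictor
print(oos_r2(r, z))                           # near zero or negative here
```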

Corporate Finance

The NBER’s Program on Corporate Finance met at the Harvard Business School on November 11. Heitor Almeida and Daniel Wolfenzon, NBER and New York University Stern School of Business, organized this program:

  • Viral Acharya, London Business School, and Rangarajan Sundaram and Kose John, New York University, “Cross-Country Variations in Capital Structures: The Role of Bankruptcy Codes” Discussant: Matias Braun, University of California, Los Angeles
  • Jean Helwege, University of Arizona; Christo Pirinsky, Texas A&M University; and Rene Stulz, Ohio State University and NBER, “Why Do Firms Become Widely Held? An Analysis of the Dynamics of Corporate Ownership” Discussant: Randall Morck, NBER and University of Alberta
  • Amir Sufi, University of Chicago and NBER, “Bank Lines of Credit in Corporate Finance: An Empirical Analysis” Discussant: Murillo Campello, University of Illinois
  • Marianne Bertrand, University of Chicago and NBER; Francis Kramarz, CREST-ENSAE; Antoinette Schoar, MIT and NBER; and David Thesmar, HEC, “Politically Connected CEOs and Corporate Outcomes: Evidence from France” Discussant: Mara Faccio, Vanderbilt University
  • Massimo Massa, INSEAD, and Andrei Simonov, Stockholm School of Economics, “Shareholder Homogeneity and Firm Value: The Disciplining Role of Non-Controlling Shareholders” Discussant: Martijn Cremers, Yale University
  • Alexander Dyck, University of Toronto; Natalya Volchkova, Russian-European Center for Economic Policy; and Luigi Zingales, Harvard University and NBER, “The Corporate Governance Role of the Media: Evidence from Russia” Discussant: Stefano DellaVigna, University of California, Berkeley


Acharya, Sundaram, and John investigate the impact of bankruptcy codes on firms’ capital-structure choices. They develop a theoretical model to identify how firm characteristics may interact with the bankruptcy code in determining optimal capital structures. A novel and sharp empirical implication emerges from this model: the difference in leverage choices under a relatively equity-friendly bankruptcy code (such as the U.S. code) and one that is relatively more debt-friendly (such as the U.K. code) should be a decreasing function of the anticipated liquidation value of the firm’s assets. Using a large database of U.S. and U.K. firms over the period 1990 to 2002, they subject this prediction to extensive empirical testing, both parametric and non-parametric, using different proxies for liquidation values and different measures of leverage. They find strong support for the theory; that is, proxies for liquidation value are both statistically and economically significant in explaining leverage differences across the two countries. On the other hand, many of the other factors that are known to affect within-country leverage (such as size) cannot explain across-country differences in leverage.

Helwege, Pirinsky, and Stulz consider IPO firms from 1970 to 2001 and examine the evolution of their insider ownership over time to better understand why and how U.S. firms become widely held. In their sample, a majority of firms has insider ownership below 20 percent after ten years. The authors find that a firm’s stock market performance and trading play an extremely important role in its insider ownership dynamics. Firms that experience large decreases in insider ownership and/or become widely held have high valuations, good recent stock market performance, and liquid markets for their stocks. In contrast and surprisingly, variables suggested by agency theory have limited success in explaining the evolution of insider ownership.

Sufi uses novel data collected from annual 10-K SEC filings to conduct the first large sample empirical examination of the use of bank lines of credit by public corporations. He finds that the supply of lines of credit by banks to corporate borrowers is particularly sensitive to the borrowers’ historical profitability. Even among borrowers with access to a bank line of credit, banks use strict covenants on profitability, and the borrower loses access to the unused portion of the line of credit when it experiences a drop in profitability. These findings identify a specific constraint (the inability to obtain a line of credit) that causes low profitability firms to hold larger cash balances in their liquidity management strategies.

A number of papers have documented that political leaders may use their power to grant favors to connected private firms. In this paper, Bertrand, Kramarz, Schoar, and Thesmar investigate the reverse perspective: they ask whether politically connected business leaders alter corporate decisions to bestow “re-election favors” on incumbent politicians. They study this question in the context of France, where they document a large overlap in educational and professional background between the CEOs of publicly-traded firms and politicians: more than half of the assets traded on the French stock markets are managed by CEOs who were formerly civil servants. Overall, the results support the hypothesis that connections between CEOs and politicians factor into corporate decisions on job creation and destruction. Firms managed by connected CEOs create more jobs (and destroy fewer plants) in politically more contested areas, and especially around election years. The authors find weak evidence that these networks between politicians and business executives follow partisan lines. In return, “favors” extended by connected CEOs to politicians seem to be reciprocated through privileged access to subsidy programs and lower taxes. Finally, the authors show that firms managed by politically connected CEOs have lower performance than non-connected firms, suggesting that political connections might impose a cost on the firms.

Massa and Simonov study how the shareholding structure of a firm affects its stock price and profitability. They argue that the degree of shareholder homogeneity affects firm value: homogeneous shareholders act as a disciplining device on managers, inducing higher profitability, a higher stock price, lower volatility, and greater transparency. Shareholder homogeneity thus represents an alternative and indirect source of corporate governance based on the stock market. The authors test this hypothesis by using a dataset containing information on all the shareholders of each firm in Sweden from 1995 to 2001. They construct two novel proxies for shareholder homogeneity: the first is based on the age cohort of the shareholders, and the second on their degree of college interaction. For each firm, they measure the degree of homogeneity of all shareholders. Using these proxies, they show that greater homogeneity increases firm profitability and returns, and reduces analyst error, analyst dispersion, and stock volatility.

Dyck, Volchkova, and Zingales study the effect of media coverage on corporate governance outcomes by focusing on Russia in the period 1999-2002. Russia provides multiple examples of corporate governance abuses, where traditional corporate governance mechanisms are ineffective, and where the authors can identify an exogenous source of news coverage arising from the presence of an investment fund, the Hermitage fund, that tried to shame companies by exposing their abuses in the international media. The authors find that the probability that a corporate governance abuse is reversed is affected by its coverage in the Anglo-American press. The result is not attributable to the endogeneity of news reporting, since it holds even when they instrument media coverage with the presence of the Hermitage fund among a company’s shareholders and the “natural” newsworthiness of the company involved. They confirm this evidence with a case study.

Behavioral Economics

The NBER’s Working Group on Behavioral Economics, directed by NBER Research Associates Robert J. Shiller of Yale University and Richard H. Thaler, University of Chicago, met in Cambridge on November 12. The following papers were discussed:

  • Andrei Shleifer and Sendhil Mullainathan, Harvard University and NBER, “Persuasion in Finance” Discussant: Shane Frederick, MIT
  • Jeffrey Wurgler, New York University and NBER, and Malcolm Baker, Harvard University and NBER, “Government Bonds and the Cross-Section of Stock Returns” Discussant: Bhaskaran Swaminathan, Cornell University
  • Markus Brunnermeier, Princeton University, and Christian Julliard, London School of Economics, “Money Illusion and Housing Frenzies” Discussant: Christopher J. Mayer, Columbia University and NBER
  • Yiming Qian and Matthew Billett, University of Iowa, “Are Overconfident Managers Born or Made? Evidence of Self-Attribution Bias from Frequent Acquirers” Discussant: Ulrike Malmendier, Stanford University and NBER
  • Harrison Hong, Princeton University, and Marcin Kacperczyk, University of British Columbia, “The Price of Sin: The Effects of Social Norms on Markets” Discussant: Owen Lamont, Yale University and NBER
  • Luigi Guiso, University of Chicago; Paola Sapienza, Northwestern University and NBER; and Luigi Zingales, Harvard University and NBER, “Trusting the Stock Market” (NBER Working Paper No. 11648) Discussant: Joshua Coval, Harvard University

Persuasion is a fundamental part of social activity, yet it is rarely studied by economists. Mullainathan and Shleifer compare the traditional economic model, in which persuasion is an effort to change the listener’s mind using information, with a behavioral model, in which persuasion is an effort to fit the message into the audience’s already-held beliefs. They present a simple formalization of the behavioral model, and compare the two models using data on financial advertising in Money and Business Week magazines over the course of the Internet bubble. The evidence on the content of persuasive messages is broadly consistent with the behavioral model of persuasion.

Baker and Wurgler document that U.S. government bonds co-move more strongly with bond-like stocks: stocks of large, mature, low-volatility, profitable, dividend-paying firms that are neither high growth nor distressed. This pattern may be caused by common shocks to real cash flows, rationally required returns, or flights to quality in which drops in investor sentiment increase the demand for both government bonds and bond-like stocks. Consistent with both the required returns and sentiment channels, the authors find a common predictable component in bonds and bond-like stocks. Consistent with the sentiment channel, they find that bonds and bond-like stocks co-move with inflows into government bond and conservative stock mutual funds.

A reduction in inflation can fuel run-ups in housing prices if people suffer from money illusion. For example, agents who base the decision of whether to rent or buy a house simply on monthly rent relative to current monthly mortgage payments do not properly take into account that inflation lowers future real mortgage payments; they therefore systematically mis-evaluate real estate. After empirically decomposing the price-rent ratio into a rational component and an implied mispricing, Brunnermeier and Julliard find that: 1) inflation and the nominal interest rate explain a large share of the time-series variation of the mispricing; 2) the run-ups in housing prices starting in the late 1990s can be reconciled with the contemporaneous reduction in inflation and nominal interest rates; and 3) the tilt effect cannot rationalize these findings.
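A small worked example shows the mechanism. With a constant real rate, higher inflation raises the nominal mortgage rate one-for-one and front-loads the real payments; an agent who compares only the first payment to rent will see buying under high inflation as expensive, ignoring that inflation erodes the real burden of later payments. All numbers below are illustrative:

```python
def annuity_payment(principal, rate, years):
    """Fixed annual payment on a standard fixed-rate mortgage."""
    return principal * rate / (1 - (1 + rate) ** -years)

principal, years, real_rate = 200_000, 30, 0.03

for inflation in (0.01, 0.06):
    nominal_rate = real_rate + inflation          # constant real rate assumed
    pay = annuity_payment(principal, nominal_rate, years)
    real_pay_y20 = pay / (1 + inflation) ** 20    # real value of year-20 payment
    print(f"inflation {inflation:.0%}: first payment {pay:,.0f}, "
          f"year-20 payment in real terms {real_pay_y20:,.0f}")
```

Under high inflation the first nominal payment is far larger, but the real value of later payments is far smaller, so the total real burden is similar across regimes.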

Billett and Qian explore the source of managerial hubris in mergers and acquisitions by examining the history of deals made by individual acquirers. Their study has three main findings: 1) compared to their first deals, acquirers of second and higher-order deals experience significantly more negative announcement effects; 2) while acquisition likelihood increases in the performance associated with previous acquisitions, previous positive performance does not curb the negative wealth effects associated with future deals; and 3) top management’s net purchase of stock is greater preceding higher-order deals than it is for first deals. The authors interpret these results as consistent with self-attribution bias leading to managerial overconfidence. They also find evidence that the market anticipates future deals and impounds such anticipation into stock prices.

Hong and Kacperczyk provide evidence for the effects of social norms on markets by studying “sin” stocks — publicly-traded companies involved in producing alcohol, tobacco, and gaming. They hypothesize that there is a societal norm against funding operations that promote vice and that some investors, particularly institutions subject to norms, pay a financial cost in abstaining from these stocks. Consistent with this hypothesis, sin stocks are less held by certain institutions, such as pension plans (but not by mutual funds, which are natural arbitrageurs), and less followed by analysts than other stocks. Consistent with their facing greater litigation risk and/or being neglected because of social norms, sin stocks outperform the market even after accounting for well-known return predictors. Corporate financing decisions and time-variation in norms for tobacco also indicate that norms affect stock prices. Finally, the authors gauge the relative importance of litigation risk versus neglect for returns. Sin stock returns are not systematically related to various proxies for litigation risk, but are weakly correlated with the demand for socially responsible investing, consistent with these stocks being neglected.

Guiso, Sapienza, and Zingales provide a new explanation for the limited stock market participation puzzle. In deciding whether to buy stocks, investors factor in the risk of being cheated. The perception of this risk is a function not only of the objective characteristics of the stock, but also of the subjective characteristics of the investor. Less trusting individuals are less likely to buy stock and, conditional on buying stock, they will buy less stock. The calibration of the model shows that this problem is sufficiently severe to account for the lack of participation of some of the richest investors in the United States as well as for differences in the rate of participation across countries. The authors also find evidence consistent with these propositions in Dutch and Italian microdata, as well as in cross-country data.

Higher Education

The NBER’s Working Group on Higher Education met in Cambridge on November 17. Director Charles T. Clotfelter, NBER and Duke University, organized this program:

  • Jesse Rothstein, Princeton University and NBER, and Albert Yoon, Northwestern University, “Mismatch in Law School” Discussant: Thomas J. Kane, Harvard University and NBER
  • Christopher Cornwell and David Mustard, University of Georgia, “Merit Aid and Sorting: The Effects of HOPE-Style Scholarships on College Ability Stratification” Discussant: Susan Dynarski, Harvard University and NBER
  • Devin Pope, University of California, Berkeley, and Jaren Pope, North Carolina State University, “Understanding College Choice Decisions: How Sports Success Garners Attention and Provides Information” Discussant: Sarah Turner, University of Virginia and NBER
  • John J. Siegfried and T. Aldrich Finegan, Vanderbilt University, and Wendy Stock, Montana State University, “Time-to-Degree for the Economics PhD Class of 2001–02” and “Attrition in Economics Ph.D. Programs” Discussant: Ronald G. Ehrenberg, Cornell University and NBER
  • Scott Carrell, Dartmouth College, and Frederick V. Malmstrom and James E. West, U.S. Air Force Academy, “Peer Effects in Academic Cheating” Discussant: David Zimmerman, Williams College and NBER
  • Ofer Malamud, University of Chicago, “Breadth vs. Depth: The Effect of Academic Specialization on Labor Market Outcomes” Discussant: Bruce Sacerdote, Dartmouth College and NBER


According to the “mismatch” hypothesis, affirmative action preferences in admissions induce minority students to attend selective schools where they are unable to compete with their more qualified white classmates. Rothstein and Yoon implement two tests of mismatch using data on law students. Students attending more selective law schools earn substantially lower grades than similarly qualified students at less selective schools, but are no less likely to graduate or pass the bar exam, and they obtain better jobs at higher salaries. The authors also compare black students to whites. In the upper four quintiles of the LSAT-undergraduate GPA distribution, blacks and whites graduate and pass the bar exam at similar rates, though blacks attend more selective schools and earn lower grades; blacks also obtain better post-law-school jobs. In the bottom quintile, black bar passage rates are lower. However, this cannot confidently be attributed to mismatch, as many more whites than blacks are unable to gain admission to law school, introducing the potential for sample selection bias.

In the last 15 years there has been a significant increase in merit aid. Since the early 1990s nearly 30 state-sponsored merit programs have been started, about half of which are based largely on Georgia’s HOPE Scholarship. Coincident with this increase in merit aid has been increased attention to sorting in various aspects of life, especially in education. Cornwell and Mustard examine the extent to which merit-based aid exacerbates or ameliorates sorting by ability in higher education. They use data from Peterson’s Guide to Colleges and the Integrated Post-secondary Education Data System (IPEDS) to evaluate this relationship. From these sources they create a large panel dataset of institutions of higher learning in the Southern Regional Education Board (SREB), and test how merit aid affects sorting between and within states. Their empirical strategy treats HOPE as a natural experiment and contrasts the quality of freshmen at Georgia colleges with that of their out-of-state counterparts; a sketch of the estimator appears after this paragraph. The difference-in-differences estimates show that HOPE increased the quality of entering freshmen in Georgia institutions relative to their out-of-state peers. At the highest-quality institutions HOPE raises all measures of student quality and the homogeneity of students by ability. The lowest-quality institutions experience no statistically significant effect from HOPE on any measure of student quality. The authors conclude that state-sponsored merit aid programs increase the retention of high-ability students for college and also increase the ability stratification of institutions within states. They also examine two indirect measures of student selectivity: acceptance and yield rates. HOPE decreases acceptance rates at all types of institutions, but the percentage change is largest at the universities, which are most space constrained. HOPE increases yield rates for universities but not for any other institution categories. Put together, these results suggest that HOPE substantially increased the selectivity of universities. In addition to Georgia, six other states in the SREB (Arkansas, Florida, Kentucky, Louisiana, Maryland, and South Carolina) started large-scale merit aid programs during the sample period. The data show that in general universities in these states experience similar gains in verbal and math SAT scores and in the percentage of students who graduated in the top 10 percent of their high school classes. There are two exceptions. Louisiana, which uses a relatively low threshold criterion to qualify for its merit award, experiences no statistically significant increase in SAT scores from its merit program. Florida’s Bright Futures Scholarship appears to reduce the SAT scores of incoming students while increasing the fraction of students who graduated in the top 10 percent of their high school classes.
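The difference-in-differences comparison at the heart of this strategy is standard. A minimal sketch on simulated data (the column names and magnitudes are hypothetical, not the authors' dataset):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 400

# Simulated institution-year panel: Georgia vs. other SREB states,
# before vs. after HOPE, with a true treatment effect of +20 SAT points.
df = pd.DataFrame({
    "georgia": rng.integers(0, 2, n),
    "post_hope": rng.integers(0, 2, n),
})
df["freshman_sat"] = (1000 + 15 * df["georgia"] + 5 * df["post_hope"]
                      + 20 * df["georgia"] * df["post_hope"]
                      + rng.normal(0, 30, n))

# DiD: (GA post - GA pre) - (other post - other pre)
cell = df.groupby(["georgia", "post_hope"])["freshman_sat"].mean()
did = (cell[1, 1] - cell[1, 0]) - (cell[0, 1] - cell[0, 0])
print(f"difference-in-differences estimate: {did:.1f}")  # close to 20
```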

Deciding where to apply to college among the thousands of four-year schools in the United States is a daunting task for most teenagers. High school students are typically not aware of all of the benefits that each school might offer. In fact, observation suggests that many students may be more familiar with a school’s recent sports record than with its academic quality. Pope and Pope develop a simple model of school choice that incorporates the limited awareness that high school students may have regarding the utility of attending different colleges. Their model predicts that college sports success may increase a school’s future applications both by making students more aware of that college and by increasing the utility associated with attending that school. Using an administrative dataset that records where high school students sent their SAT scores, the authors analyze the effect of sports success on sent test scores for all 332 schools that participate in NCAA Division I basketball or football. They show that sent test scores act as a reasonable proxy for sent applications. Their results indicate that sports success in a given year can increase the total number of students who send their test scores the following year by up to 10 percent. They also show that certain demographic groups (males, blacks, and students who played sports in high school) are significantly more influenced by sports success, and that for these groups schools can expect increases in sent test scores of up to 15-20 percent after a good sports year. The authors conclude that the increase in sent test scores stems from both the increased exposure/awareness that schools receive because of sports and the increased utility that students associate with attending a school with a strong sports program.

Stock and Siegfried use survey responses from Ph.D. graduates and thesis advisors to estimate the time required for the class of 2001–2 to earn a degree. Median time to earn the Ph.D. is 5.5 years, up from 5.25 years for the class of 1996–7. The time required to write a dissertation is a little longer than the time required to complete comprehensive examinations and course work. Graduates who had their first child while in a Ph.D. program are estimated to finish almost one year later than others. Those with predominantly fellowship support finished about six months faster than those funded predominantly by a teaching assistantship, as did those whose dissertation was a set of essays rather than a single-topic treatise. Americans who did their undergraduate work at a top-50 U.S. liberal arts college, or at another U.S. college or university that does not offer a Ph.D. in economics, finished faster than their counterparts who earned a bachelor’s degree from a U.S. university that offers a Ph.D. in economics. International students from predominantly English-speaking countries finished faster than other students studying in the United States on temporary visas.

Stock, Finegan, and Siegfried use information about 586 individuals who matriculated into 27 economics Ph.D. programs in Fall 2002 to estimate first- and second-year attrition rates. After two years, 26.5 percent of the initial cohort had left, divided equally between the first and second years. Attrition varies widely across individual programs. It is lower among the 15 most highly rated programs, for students with higher verbal and quantitative GRE scores, and for those on a research assistantship. Poor academic performance is the most frequently cited reason for withdrawal. About 15 percent transfer to other economics programs because they are dissatisfied with some aspect of the program where they first enrolled.

Using self-reported academic honor violations from the classes of 1959 through 2002 at the three major U.S. military service academies (Air Force, Army, and Navy), Carrell, Malmstrom, and West measure how peer honesty influences individual cheating behavior. All else equal, they find that higher levels of peer cheating result in a substantially increased probability that an individual will cheat. Through separate estimation procedures, they identify an exogenous (contextual, or pre-treatment) peer effect and an endogenous (during-treatment) peer effect. Results for the (first-order) exogenous peer effect indicate that one additional high school cheater creates 0.33 to 0.48 new college cheaters. Results for the (first-order) endogenous peer effect indicate that one additional college cheater creates 0.61 to 0.86 new college cheaters. These results imply that, in equilibrium, the social multiplier for academic cheating ranges between 2.56 and 3.97.
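
To see where a social multiplier of this magnitude comes from, consider the standard linear-in-means logic (a back-of-the-envelope illustration, not the authors' full model): if each additional peer cheater raises an individual's propensity to cheat by the endogenous effect \(\beta\), an initial shock propagates through the group and is amplified by

\[
m = \frac{1}{1-\beta}.
\]

With the lower endogenous estimate \(\beta = 0.61\), this gives \(m \approx 2.56\), matching the bottom of the reported range; the upper end of the range depends on details of the authors' specification.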

Malamud examines the tradeoff between early and late specialization in the context of higher education. While some educational systems require students to specialize early by choosing a major field of study prior to entering university, others allow students to postpone this choice. Malamud develops a model in which individuals, by taking courses in different fields of study, accumulate field-specific skills and receive noisy signals of match quality in these fields. With later specialization, students have more time to learn about match quality in each field but less time to acquire specific skills once a field is chosen. Malamud derives comparative-static predictions between regimes with early and late specialization, and tests these predictions across British systems of higher education using university administrative data and survey data on 1980 university graduates. He finds that individuals in Scotland, where specialization occurs relatively late, are less likely than their English counterparts, who specialize earlier, to switch to an occupation unrelated to their field of study. According to the model, this implies that the return to being well matched to an occupational field is high relative to the return to specific skills, so there may be benefits to later specialization. Malamud also finds strong evidence in support of the prediction that individuals who switch to unrelated occupations earn lower wages, but no evidence that the cost of switching differs between those specializing early and late.

Education

The NBER’s Education Program met in Cambridge on November 18. Program Director Caroline M. Hoxby, NBER and Harvard University, chose the following papers for discussion:

  • David N. Figlio, University of Florida and NBER, “Why Barbie Says ‘Math is Hard’”
  • Margaret Ledyard, University of Texas at Austin, “Why are Private Schools Small? School Location, Returns to Scale, and Size”
  • Stephanie Riegg Cellini, University of California, Los Angeles, “Funding Schools or Financing Students: Public Subsidies and the Market for Two-Year College Education”
  • Joshua Angrist, MIT and NBER; Aimee Chin, University of Houston; and Ricardo Godoy, Brandeis University, “Is Spanish-Only Schooling Responsible for the Puerto Rican Language Gap?”
  • Lex Borghans and Bart Golsteyn, University of Maastricht, “Imagination, Time Discounting, and Human Capital Investment Decisions”
  • Nora Gordon, University of California, San Diego and NBER; Elizabeth Cascio, University of California, Davis and NBER; Sarah Reber, University of California, Los Angeles; and Ethan Lewis, Federal Reserve Bank of Philadelphia, “Financial Incentives and the Desegregation of Southern Public Schools”


Figlio adopts a novel approach to discerning one pathway through which family and cultural expectations, and the resultant identity formation, could influence young women's choices about studies and potential future careers. He posits that a girl with a more feminine name may be treated systematically differently by parents, teachers, and peers, or may herself relate to more feminine stereotypes. In such a circumstance, girls with more feminine names may be more likely to select coursework that is more “traditionally female”— such as the humanities and foreign languages — and shy away from coursework that is more “traditionally male” — such as advanced math and science. Of course, names are not exogenously given to girls. Parents often pay great attention to the names they give their children, and parents with different proclivities toward mathematics and science, say, may systematically select different names for their daughters. In order to avoid confounding unmeasured family-specific factors with causal effects of names, Figlio uses a unique dataset of pairs of highly achieving sisters provided to him by a large Florida school district. He then relates the name that a given high-achieving sister has to her propensity to take calculus and physics in high school, as compared with her high-achieving sisters with different names. Parents often give pairs of sisters very different names in terms of their femininity, offering the opportunity to directly test the presumption that a name can have causal influences on a girl's academic development. Figlio finds that girls with more feminine names are less likely to self-select into advanced mathematics and science classes in high school, holding constant family fixed effects and prior achievement. These results suggest that environmental factors play a large role in determining whether women choose mathematics and science as potential careers.
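
A minimal sketch of the within-family comparison implied by this design (an illustration, not Figlio's exact specification):

\[
y_{if} = \alpha_f + \beta\, F(name_{if}) + \gamma' X_{if} + \varepsilon_{if},
\]

where \(y_{if}\) indicates whether sister \(i\) in family \(f\) takes calculus or physics, \(F(name_{if})\) is a measure of the femininity of her name, and \(X_{if}\) includes prior achievement. The family fixed effect \(\alpha_f\) absorbs unmeasured family-specific factors, so \(\beta\) is identified from pairs of sisters with differently gendered names.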

Private schools are, on average, a third of the size of public schools. But why are they small? Two possible explanations are differences in demand and differences in production. If private schools are smaller simply because fewer people want to attend them, that is a demand effect; and if the returns to scale are similar for private and public high schools, then an increase in school choice through private school vouchers could lead to larger private high schools. Holding costs fixed at the public school cost estimates from Ledyard (2004), Ledyard uses data on private school location, size, and affiliation to predict the size of Catholic schools; this approach can account for approximately 70 percent of the difference in size between public and Catholic high schools.

Both public and private two-year colleges rely on public subsidies to make their education affordable for students. Public community colleges receive government support directly in the form of subsidies, while private for-profit colleges, or proprietary schools, receive government support indirectly in the form of grants or vouchers given to students. Cellini analyzes the impact of these two funding schemes on the entry decisions of proprietary schools and on enrollments in community colleges. She uses a new administrative dataset of for-profit colleges in California, panel data methods, and a unique regression discontinuity design. She finds that an increase in public funding for a local community college diverts students from the private to the public sector and causes a corresponding decline in the number of proprietary schools in the county. Raising student financial aid awards, on the other hand, expands the overall pool of sub-baccalaureate students and causes proprietary schools to enter the market. This effect is particularly strong in counties with high poverty rates, where more students are eligible for aid.
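
As an illustration of the regression discontinuity logic, here is a minimal sketch; the running variable (a vote share on a local funding measure), the cutoff, and the effect size are hypothetical, since the summary does not spell out the design's details:

```python
import numpy as np

def sharp_rd(running, outcome, cutoff=0.5, bandwidth=0.05):
    """Minimal sharp RD estimate: difference of local linear fits
    evaluated at the cutoff from each side."""
    left = (running >= cutoff - bandwidth) & (running < cutoff)
    right = (running >= cutoff) & (running <= cutoff + bandwidth)
    fit_l = np.polyfit(running[left], outcome[left], 1)
    fit_r = np.polyfit(running[right], outcome[right], 1)
    return np.polyval(fit_r, cutoff) - np.polyval(fit_l, cutoff)

# Toy data: hypothetical vote shares on a local funding measure and
# proprietary-school counts that drop by 2 where the measure passes.
rng = np.random.default_rng(1)
vote = rng.uniform(0.3, 0.7, 5000)
schools = 10 - 2.0 * (vote >= 0.5) + rng.normal(0, 1, 5000)
print(round(sharp_rd(vote, schools), 2))  # estimate near -2
```

The estimate compares counties just above and just below the cutoff, where crossing the threshold is plausibly as good as randomly assigned.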

Between 1898 and 1948, English was the language of instruction for most post-primary grades in Puerto Rican public schools. Since 1949, the language of instruction in all grades has been Spanish. Angrist, Chin, and Godoy use this policy change to estimate the effect of English-intensive instruction on the English-language skills of Puerto Ricans. Although naïve estimates suggest that English instruction increased English-speaking ability among Puerto Rican natives, estimates that allow for education-specific cohort trends show no effect. This result is surprising in light of the strong presumption by American policymakers at the time that English instruction was the best way to raise English proficiency. It suggests that increased emphasis on using English as the language of instruction may do little to benefit Puerto Ricans who remain on the island today.

While economic theory regards education as an important investment, the reality of students' behavior does not always seem to support this view. Borghans and Golsteyn analyze the behavior of college students from an investment perspective. They document the robust but paradoxical finding that college students with higher discount rates stay in education longer. The explanation they pursue is that a higher discount rate can be partly a consequence of a lack of imagination about future working life. If so, the discount rate will be very high at moments when there are major changes in circumstances, in this case when students move from college to work. This gives students who lack a clear picture of their future working life an incentive to stay in education. To test this model, the authors measure the crucial individual attributes, ask students about the way they made their choices, and present them with other choices that reveal the nature of their behavior. The empirical results support the model; the main conclusion is that a lack of imagination induces students to stay in education longer while reducing the efficiency of this investment.

Cascio, Gordon, Lewis, and Reber examine whether the financial incentives put in place by two pieces of federal legislation — the Civil Rights Act of 1964 and the Elementary and Secondary Education Act of 1965 — played a causal role in desegregating southern schools. The latter targeted a large federal education program toward the South, while the former tied the receipt of funds under this new program to nondiscrimination. Using a newly collected dataset on school desegregation and school finance for the 1960s, the authors find that districts with relatively more to lose under federal funding allocation rules engaged in more student desegregation, were more likely to have desegregated their faculties, and were more likely to have received their federal funding by the fall of 1967. Qualitatively similar results are found for the fall of 1966. These results suggest that legislative and executive enforcement efforts — not just the courts — contributed to the desegregation of southern education.

Political Economy

The NBER’s Working Group on Political Economy, directed by NBER Research Associate Alberto F. Alesina of Harvard University, met in Cambridge on November 19. The following papers were discussed:

  • Timothy Besley, London School of Economics; Torsten Persson, Stockholm University and NBER; and Daniel Sturm, University of Munich, “Political Competition and Economic Performance: Theory and Evidence from the United States” Discussant: Roberto Perotti, Universita Bocconi and NBER
  • Allan Drazen, University of Maryland and NBER, and Marcela Eslava, Universidad de los Andes, “Pork Barrel Cycles” Discussant: Alessandro Lizzeri, New York University
  • John N. Friedman and Richard T. Holden, Harvard University, “Optimal Gerrymandering” Discussant: Roland G. Fryer, Harvard University
  • Kenneth L. Sokoloff, University of California, Los Angeles and NBER, and Eric M. Zolt, University of California, Los Angeles, “Inequality and the Evolution of Institutions of Taxation: Evidence from the Economic History of the Americas” Discussant: William Easterly, New York University
  • Per Pettersson-Lidbom, Stockholm University, and Matz Dahlberg, Uppsala University, “An Empirical Approach for Estimating the Causal Effect of Soft Budget Constraints on Economic Outcomes” Discussant: Antonio Merlo, University of Pennsylvania

Besley, Persson, and Sturm formulate a model to explain why a lack of political competition may stifle economic performance; they use the United States as a testing ground for the model's predictions, exploiting the 1965 Voting Rights Act, which helped to break the Democrats' near monopoly on political power in southern states. They find that changes in political competition have quantitatively important effects on state income growth, state policies, and the quality of governors. By their bottom-line estimate, the increase in political competition triggered by the Voting Rights Act raised long-run per capita income in the average affected state by about 20 percent.

Drazen and Eslava present a model of political budget cycles in which incumbents influence voters by targeting government spending to specific groups of voters at the expense of other voters or other expenditures. Each voter faces a signal extraction problem: being targeted with expenditure before the election may reflect opportunistic manipulation, but may also reflect a sincere preference of the incumbent for the types of spending that voter prefers. The authors show the existence of a political equilibrium in which rational voters support an incumbent who targets them with spending before the election even though they know it may be electorally motivated. In equilibrium, voters in the more “swing” regions are targeted at the expense of types of spending not favored by these voters. This will be true even if they know they live in swing regions. However, the responsiveness of these voters to electoral manipulation depends on whether they face some degree of uncertainty about the electoral importance of the group they are in. The use of targeted spending also implies that voters can be influenced without election-year deficits, consistent with recent findings for established democracies.

Standard intuitions for optimal gerrymandering involve concentrating one's extreme opponents in “unwinnable” districts (“throwing away”) and spreading one's supporters evenly over “winnable” districts (“smoothing”). These intuitions are not robust and depend crucially on arbitrary modeling assumptions. Friedman and Holden characterize the solution to a problem in which the gerrymanderer observes a noisy signal of voter preferences from a continuous distribution and creates N districts of equal size to maximize the expected number of districts that she wins. They show that neither “throwing away” districts nor “smoothing” is generally optimal. The optimal solution involves creating a district that matches extreme “Republicans” with extreme “Democrats,” then continuing to match toward the center of the signal distribution. The value of being the gerrymanderer increases with the extremity of voter preferences, the quality of the signal, and the number of districts.
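
A deliberately simplified sketch of this matching logic, assuming equal-sized districts built symmetrically from the two tails of the signal distribution (the paper solves for the optimal slice proportions, which need not be symmetric):

```python
import numpy as np

def matching_slices(signals, n_districts):
    """Illustrative 'matching' plan: pair the most extreme Republicans
    with the most extreme Democrats, then continue matching inward."""
    signals = np.sort(signals)           # ascending: Democrats ... Republicans
    size = len(signals) // n_districts   # equal-size districts
    half = size // 2
    districts = []
    for i in range(n_districts):
        dem = signals[i * half:(i + 1) * half]   # i-th slice, Democratic tail
        rep = signals[len(signals) - (i + 1) * half:
                      len(signals) - i * half]   # i-th slice, Republican tail
        districts.append(np.concatenate([dem, rep]))
    return districts

rng = np.random.default_rng(0)
plan = matching_slices(rng.normal(size=9000), n_districts=9)
print([round(d.mean(), 2) for d in plan])  # mean signal by district
```

This symmetric version only illustrates the pairing order, from the most extreme slices inward toward the center of the distribution; the optimal asymmetric slice sizes are what generate the gerrymanderer's expected seat gains.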

Sokoloff and Zolt turn to history to gain a better perspective on how and why tax systems vary. They focus on the societies of the Americas over the nineteenth and twentieth centuries, for two major reasons. First, despite the region having the most extreme inequality in the world, the tax structures of Latin America are generally recognized as among the most regressive, even by developing country standards. Second, as has come to be widely appreciated, the colonization and development of the Americas constitute a natural experiment of sorts that students of economic and social development can exploit. Beginning more than 500 years ago, a small number of European countries established colonies in diverse environments across the hemisphere. The different circumstances meant that largely exogenous differences existed across these societies, not only in national heritage, but also in the extent of inequality. The principal concern in this paper is with how the extent of inequality may influence the design and implementation of tax systems. Several salient patterns emerge. The United States and Canada (like Britain, France, Germany, and even Spain) were much more inclined to tax wealth and income during their early stages of growth — and into the twentieth century — than developing countries are today. Although the U.S. and Canadian federal governments were similar to those of their counterparts in Latin America in relying primarily on the taxation of foreign trade (overwhelmingly tariffs) and excise taxes, the greater success or inclination of state (provincial) and local governments in North America to tax wealth (primarily in the form of property or estate taxes) and income (primarily in the form of business taxes), as well as the much larger relative sizes of these sub-national governments in North America, accounted for a radical divergence in the overall structure of taxation. Tapping these progressive sources of government revenue, state and local governments in the United States and Canada, even before independence, began directing substantial resources toward public schools, improvements in infrastructure involving transportation and health, and other social programs. In contrast, the societies of Latin America, which had come to be characterized soon after initial settlement by rather extreme inequality in wealth, human capital, and political influence, tended to adopt tax structures that were significantly less progressive in incidence and manifested greater reluctance or inability to impose local taxes to fund local public investments and services. These patterns persisted, moreover, well into the twentieth century – indeed up to the present day. The apparent association between initial inequality and the institutions of taxation and public finance is all the more intriguing in that Sokoloff and Zolt find corresponding patterns across different regions of the United States and across different countries of Latin America.

Pettersson-Lidbom and Dahlberg develop an empirical framework for estimating the causal effect of soft budget constraints on economic outcomes. Their point of departure is that the problem of the soft budget constraint is a problem of credibility: that is, the inability of a supporting organization to commit itself not to extend more resources to a budget-constrained organization ex post (in other words, bailouts) than it was prepared to provide ex ante. This means that the current economic behavior of a budget-constrained organization will depend on its expectations of being bailed out in the future. Thus, to estimate the causal effect of soft budget constraints (that is, of bailout expectations) on economic outcomes, one has to measure these expectations and link them to the current behavior of the budget-constrained organization. The authors argue that one can use information about realized bailouts to construct credible measures of bailout expectations. They apply their framework to Swedish local governments, which provide an attractive testing ground for the soft budget constraint because the central government extended a total of 1,697 bailouts over the period 1974 to 1992. The authors find that bailout expectations have a causal effect on economic behavior. The estimated effect is quite sizeable: on average, a local government increases its debt by 30 percent if it is certain of being bailed out, relative to being certain of not being bailed out.

International Trade and Investment

The NBER’s Program on International Trade and Investment met at the Bureau’s California office on December 2 and 3. Program Director Robert C. Feenstra of the University of California, Davis, organized the meeting. The following papers were discussed:

  • Carsten Eckel, University of Goettingen, and J. Peter Neary, University College Dublin, “Multi-Product Firms and Flexible Manufacturing in the Global Economy”
  • Volker Nocke, University of Pennsylvania, and Stephen Yeaple, University of Pennsylvania and NBER, “Endogenizing Firm Scope: Multiproduct Firms in International Trade”
  • Pol Antras, Mihir A. Desai, and C. Fritz Foley, Harvard University and NBER, “FDI Flows and Multinational Firm Activity”
  • Lee J. Branstetter and Raymond Fisman, Columbia University and NBER; C. Fritz Foley; and Kamal Saggi, Southern Methodist University, “Intellectual Property Rights, Imitation, and Foreign Direct Investment: Theory and Evidence”
  • Andrew B. Bernard, Dartmouth College and NBER; J. Bradford Jensen, Institute for International Economics; and Peter Schott, Yale University and NBER, “Transfer Pricing by U.S.-Based Multinational Firms”
  • Diego Puga and Daniel Trefler, University of Toronto and NBER, “Wake Up and Smell the Ginseng: The Rise of Incremental Innovation in Low-Wage Countries”
  • Svetlana Demidova, Pennsylvania State University; Hiau Looi Kee, The World Bank; and Kala Krishna, Pennsylvania State University and NBER, “Rules of Origin and Firm Heterogeneity”
  • Christian Broda, University of Chicago; Nuno Limão, University of Maryland; and David E. Weinstein, Columbia University and NBER, “Optimal Tariffs: The Evidence”

Eckel and Neary present a new model of multi-product firms (MPFs) and flexible manufacturing and explore its implications in partial and general equilibrium. International trade integration affects the scale and scope of MPFs through a competition effect and a demand effect. The authors demonstrate how MPFs adjust in the presence of single-product firms and in heterogeneous industries. Their results are in line with recent empirical evidence and suggest that MPFs in conjunction with flexible manufacturing play an important role in the impact of international trade on product diversity – that is, the range of products produced by all firms.

Nocke and Yeaple develop a theory of multi-product firms that differ in their organizational capabilities. In the model, a firm's unit cost is the endogenous outcome of its choice of the number of its product lines. The more product lines a firm manages, the higher are its unit costs, but this trade-off is less severe for firms with greater organizational capabilities. Paradoxically, more efficient firms optimally increase their scope to such an extent that their unit costs are higher than those of less efficient firms. The model thus explains the empirical puzzle that there is a negative relationship between firm size and Tobin's Q. Positive industry shocks — such as those caused by trade liberalization — induce a merger wave that alters the intra-industry dispersion of observed productivity as high-Q firms trade product lines with low-Q firms.
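
One way to formalize the stated trade-off (a sketch consistent with this summary, not necessarily the paper's exact functional form): let \(c(n, a)\) be a firm's unit cost as a function of its number of product lines \(n\) and its organizational capability \(a\). The assumptions described above amount to

\[
\frac{\partial c(n,a)}{\partial n} > 0, \qquad \frac{\partial^2 c(n,a)}{\partial n\,\partial a} < 0,
\]

so unit costs rise with scope, but less steeply for more capable firms. If the optimal scope \(n^*(a)\) rises quickly enough with \(a\), more capable firms can end up with higher equilibrium unit costs \(c(n^*(a), a)\) than less capable firms, which is the paradox described above.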

How are foreign direct investment (FDI) flows and patterns of multinational firm (MNC) activity determined in a world with frictions in financial contracting and variations in institutional environments? As developers of technologies, MNCs have long been characterized as having a comparative advantage in monitoring the deployment of their technology. Antras, Desai, and Foley show that, in a setting of non-contractible monitoring and financial frictions, this comparative advantage endogenously gives rise to MNC activity and FDI flows. The mechanism generating MNC activity is not the risk of technological expropriation by local partners but the demands of external funders who require MNC participation to ensure value maximization by local entrepreneurs. The model delivers distinctive predictions for the impact of weak institutions on patterns of MNC activity and FDI flows, with weak institutional environments limiting the scale of multinational firm activity but increasing the share of that activity that is financed by capital flows from the multinational parent. In addition to accounting for distinctions between patterns of MNC activity and FDI flows, the model can help explain substantial two-way FDI flows between countries with high levels of financial development, and small and unbalanced FDI flows between countries with different levels of financial development. The main predictions of the model are tested and confirmed using firm-level data on U.S. outbound FDI.

Does the adoption of stronger intellectual property rights (IPR) in developing countries enhance or retard their industrial development? How does such a policy shift affect industrial activity in the developed countries, where most innovative activity is concentrated? Branstetter, Fisman, Foley, and Saggi address these questions both theoretically and empirically. On the theoretical side, they develop a North-South product cycle model in which Northern innovation, Southern imitation, and FDI are all endogenous. This model predicts that IPR reform in the South leads to increased FDI from the North, as Northern firms shift production to Southern affiliates. This increased FDI drives an acceleration of Southern industrial development, as the South’s share of global manufacturing and the pace at which production of more recently invented goods shifts to the South both increase. The model also predicts that as production shifts to the South, Northern resources will be reallocated to R and D, driving an increase in the global rate of innovation. The authors confront the theoretical model with evidence on the response of U.S. multinationals to a series of well-documented IPR reforms by developing countries in the 1980s and 1990s. Their results indicate that U.S.-based MNCs expand the scale of their activities in reforming countries after IPR reform, and this effect is disproportionately strong for affiliates whose parents rely strongly on patented intellectual property as part of their global business strategy. Data tracking industry level value-added in the reforming countries point to an overall expansion of industrial activity after IPR reform. Finally, evidence from highly disaggregated trade data also suggests that the expansion of multinational activity leads to a higher net level of production shifting to developing countries, more than offsetting any possible decline in the imitative activity of indigenous firms.

Bernard, Jensen, and Schott examine how prices set by multinational firms vary across arm’s-length and related-party customers. They find that arm’s length prices are substantially and significantly higher than related party prices for U.S.-based multinational exporters. The price difference is large even when comparing the export of the same good by the same firm to the same destination country in the same month by the same mode of transport. The price wedge is smaller for commodities than for differentiated goods and is increasing in firm size and firm export share. The difference between arm’s length and related party prices is also significantly greater for goods sent to countries with lower taxes and higher tariffs. Changes in exchange rates have differential effects on arm’s length and related party prices; an appreciation of the dollar strongly reduces the difference between the prices.
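
The identification behind this comparison can be summarized in a stylized within-cell regression (an illustration, not the authors' exact specification):

\[
\ln p_k = \beta\, RP_k + \delta_{c(k)} + \varepsilon_k,
\]

where \(k\) indexes export transactions, \(RP_k\) indicates a related-party transaction, and \(\delta_{c(k)}\) is a fixed effect for the cell defined by good, firm, destination country, month, and mode of transport. An estimate of \(\beta < 0\) then measures how much lower related-party prices are than arm's-length prices within the same cell.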

Increasingly, a small number of low-wage countries such as China and India are involved in innovation — not “big ideas” innovation, but the constant incremental innovations needed to stay ahead in business. Puga and Trefler provide some evidence of this new phenomenon and develop a model in which there is a transition from old-style product-cycle trade to trade involving incremental innovation in low-wage countries. They explain why levels of involvement in innovation vary across low-wage countries and even across firms within each low-wage country. They then draw out implications for the location of production, trade, capital flows, earnings, and living standards.

Demidova, Kee, and Krishna develop a heterogeneous firm model to study the effects of trade policy, trade preferences, and the rules of origin (ROOs) needed to obtain them. They apply their model to Bangladeshi garment exports to the United States and European Union. There are differences across products and export destinations that make for an interesting natural experiment. These differences generate differences in the composition of exporters and productivity. The authors use data on Bangladeshi garment exporters to construct firm-level total factor productivity (TFP) estimates. They then test the predictions of the model on the relationship between the distributions of TFP of various groups of firms. They show that the facts match the predictions of the model.

The theoretical debate over whether countries can and should set tariffs in response to export elasticities goes back over a century to the writings of Edgeworth (1894) and Bickerdike (1907). Despite the optimal tariff argument's centrality in debates over commercial policy, there exists no evidence about whether countries actually use it in setting tariffs. Broda, Limão, and Weinstein estimate disaggregate export elasticities and find that countries that are not members of the World Trade Organization systematically set higher tariffs on goods that are supplied inelastically. The result is robust to the inclusion of political economy variables and a variety of model specifications. Moreover, they find that countries with higher aggregate market power have higher average tariffs. In short, there is strong evidence in favor of the optimal tariff argument.
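
The classical result behind this argument is the inverse-elasticity formula for a large country's optimal tariff (stated here in its textbook form):

\[
\tau^{*} = \frac{1}{\varepsilon^{*}_x},
\]

where \(\varepsilon^{*}_x\) is the elasticity of foreign export supply. Goods supplied more inelastically (small \(\varepsilon^{*}_x\)) warrant higher tariffs, which is exactly the cross-good pattern the authors find among non-WTO members.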

Productivity

The NBER’s Program on Productivity met in Cambridge on December 2. Program Director Ernst R. Berndt of MIT and Pierre Azoulay, NBER and Columbia University, organized the meeting. The agenda was:

  • Amy Finkelstein and Daron Acemoglu, MIT and NBER, “Input and Technology Choices in Regulated Industries: Evidence from the Health Care Sector” Discussant: David M. Cutler, Harvard University and NBER
  • Pierre Azoulay and Joshua Graff Zivin, Columbia University and NBER, “Peer Effects in the Workplace: Evidence from Professional Transitions for the Superstars of Medicine” Discussant: Manuel Trajtenberg, Tel Aviv University and NBER
  • Laura Schultz and Sumiye Okubo, Bureau of Economic Analysis — Report on the BEA/NSF R and D Satellite Accounts: Estimating the Returns and Spillovers from Business R and D
  • Yonghong Wu, University of Illinois; David Popp, Syracuse University and NBER; and Stuart Bretschneider, Syracuse University, “The Effects of Innovation Policies on Business R&D: A Cross-National Empirical Study”  Discussant: Margaret Kyle, Duke University and NBER
  • Nick Bloom, Stanford University, and John Van Reenen, London School of Economics, “Measuring and Explaining Management Practices Across Firms and Countries” Discussant: Richard B. Freeman, Harvard University and NBER
  • James D. Adams, Rensselaer Polytechnic Institute and NBER, and J. Roger Clemmons, University of Florida, “Industrial Scientific Discovery” Discussant: Scott Stern, Northwestern University and NBER

Acemoglu and Finkelstein examine the implications of regulatory change for the input mix and technology choices of regulated industries. They present a simple neoclassical framework that emphasizes changes in relative factor prices faced by regulated firms under different regimes, and investigate how this might affect technology choices through substitution of (capital embodied) technologies for tasks previously performed by labor. They empirically examine some of the implications of the framework by studying the change from full cost to partial cost reimbursement under the Medicare Prospective Payment System (PPS) reform, which increased the relative price of labor faced by U.S. hospitals. Using the interaction of hospitals' pre-PPS Medicare share of patient days with the introduction of these regulatory changes, they document a substantial increase in capital-labor ratios and a large decline in labor inputs associated with PPS. Most interestingly, they find that the PPS reform seems to have encouraged the adoption of a range of new medical technologies. They also show that the reform was associated with an increase in the skill composition of these hospitals, which is consistent with technology-skill or capital-skill complementarities.
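
The relative-price mechanism can be illustrated with a standard cost-minimization condition (a textbook sketch, not the authors' full framework): under a CES technology with substitution elasticity \(\sigma\), the cost-minimizing input mix satisfies

\[
\frac{K}{L} = \left(\frac{\alpha}{1-\alpha}\right)^{\sigma} \left(\frac{w}{r}\right)^{\sigma},
\]

where \(w/r\) is the price of labor relative to capital and \(\alpha\) is the capital distribution parameter. A reform like PPS that raises the relative price of labor faced by hospitals pushes \(K/L\) up, and more so the easier it is to substitute capital-embodied technology for labor (larger \(\sigma\)).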

Azoulay and Zivin estimate the magnitude of knowledge spillovers generated by 4,764 academic superstars in the life sciences on their coauthors' research productivity. Using matched employee-employer data, the authors measure how scientific output (grants, publications, and patents) for a coauthor changes when a superstar arrives at or departs from the coauthor's institution. Preliminary results indicate that superstars generate substantial spillovers through two independent channels: location and co-authorship. Location spillovers decline more than linearly with geographic distance. Substitution away from collaboration with other scientists cancels a significant portion of the benefits of exposure to superstar talent. The authors also find that location spillovers declined markedly in the 1990s.

Wu, Popp, and Bretschneider examine the effect of three major national innovation policies (patent protection, R and D tax incentives, and government funding of business R and D) on business R and D spending. Unlike previous work, their study considers the effect of openness to international trade. They use data from nine OECD countries (Australia, Canada, France, Germany, Italy, Japan, Spain, the United Kingdom, and the United States) for 1985–95. Their results show that all three innovation policies play a significant role in stimulating R and D funded and performed by business. Among the components of patent rights, enforcement of the patent legal regime and the duration of the protection term consistently have a positive effect on business R and D decisions. In addition, R and D performed by the government has a positive effect on business R and D, while R and D performed by the higher education sector has a negative impact. The authors also find modest empirical support for a positive role of openness to international trade in business R and D investment.

Bloom and Van Reenen use an innovative survey tool to collect management practice data from 732 medium-sized manufacturing firms in the United States, France, Germany, and the United Kingdom. These measures of managerial practice are strongly associated with productivity, profitability, Tobin's Q, sales growth, and survival rates. Management practices also display significant cross-country differences, with U.S. firms on average better managed than European firms, and significant within-country differences, with a long tail of extremely badly managed firms. The authors attribute these patterns to two factors: the level of product market competition, which is associated with better management, and family firms that pass management control down to the eldest son (primogeniture), which is associated with worse management. European firms report lower levels of competition, and French and British firms also report substantially higher levels of primogeniture because of the influence of Norman legal origin. These two factors explain up to two thirds of the average U.S.-Europe management gap.

Adams and Clemmons estimate science production functions for top R and D firms in the United States. Their data include estimated flows of basic science from universities to firms, from firms to other firms, and within firms. The underlying evidence consists of papers and citations from the Institute for Scientific Information (ISI) in Philadelphia, Pennsylvania. The data cover the top 200 R and D firms and the top 110 universities during 1981–99; these account for most U.S. scientific research during this period. Their empirical estimates are based on a panel of firms, science fields, and years that is an extract from the papers and citations data. Using this panel, they find that science spillovers from universities and other firms occur primarily within fields. Industry is much less of a barrier and, in fact, most knowledge flows occur between, rather than within, industries. Citation and collaboration spillovers from universities, citation spillovers from other firms, and citation spillbacks from firms' past research all make significant contributions to scientific discovery. The authors also uncover a host of potential biases. First, the response of discovery to the firms' own R and D is biased upward by the failure to include science spillovers from universities and other firms. Second, the university citation spillover is biased upward by the failure to include collaboration between firms and universities. Third, the effects of spillovers and spillbacks are biased downward when zeroes of the spillovers and spillbacks are not considered by the estimation procedure. The elasticity of firms' science output with respect to university citation spillovers is consistently larger than the firm spillover elasticity. In addition, the marginal product of university spillovers exceeds the marginal product of firm spillovers, so that additional science output per dollar of university R and D is several times larger than additional output per dollar of firm R and D. University collaboration only serves to increase this productivity advantage of universities. Since university R and D is primarily funded by government, this potency of university spillovers appears to reassert the role of publicly funded science in propagating knowledge externalities throughout the U.S. economy.
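
A stylized version of the kind of science production function estimated here (an illustration of the setup, not the authors' exact equation):

\[
\ln Q_{ift} = \beta_0 + \beta_1 \ln R_{ift} + \beta_2 \ln S^{univ}_{ift} + \beta_3 \ln S^{firm}_{ift} + \beta_4 \ln B_{ift} + \mu_i + \tau_t + \varepsilon_{ift},
\]

where \(Q_{ift}\) is the scientific output (papers) of firm \(i\) in field \(f\) and year \(t\), \(R\) is the firm's own R and D, \(S^{univ}\) and \(S^{firm}\) are citation-weighted spillovers from universities and from other firms, and \(B\) is the spillback from the firm's own past research. In this notation, the findings above say that \(\beta_2\) consistently exceeds \(\beta_3\), and that omitting \(S^{univ}\) and \(S^{firm}\) biases \(\beta_1\) upward.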