Meetings: Winter, 2001

03/01/2001
Featured in print Reporter

Economic Fluctuations and Growth

The fall research meeting of the NBER's Program on Economic Fluctuations and Growth took place in Chicago on October 27. Steven J. Davis, NBER and University of Chicago, and Peter J. Klenow, Federal Reserve Bank of Minneapolis, organized the program and chose the following papers for discussion:

  • George-Marios Angeletos, Jeremy Tobacman, and Stephen Weinberg, Harvard University; David I. Laibson, NBER and Harvard University; and Andrea Repetto, University of Chile, "The Hyperbolic Buffer Stock Model: Calibration, Simulation, and Empirical Evaluation"
  • Discussants: Jonathan A. Parker, Princeton University, and David E. Altig, Federal Reserve Bank of Cleveland
  • Francesco Caselli, NBER and Harvard University, and Wilbur J. Coleman II, Duke University, "The World Technology Frontier" (NBER Working Paper No. 7904)
  • Discussants: Per Krusell, University of Rochester, and Jaume Ventura, NBER and MIT
  • K. Daron Acemoglu, NBER and MIT; Simon Johnson, MIT; and James A. Robinson, University of California, Berkeley, "The Colonial Origins of Comparative Development: An Empirical Investigation" (NBER Working Paper No. 7771)
  • Discussants: Kenneth L. Sokoloff, NBER and University of California, Los Angeles, and Robert E. Hall, NBER and Stanford University
  • Jeffrey R. Campbell, NBER and University of Chicago, and Jonas D. M. Fisher, Federal Reserve Bank of Chicago, "Idiosyncratic Risk and Aggregate Employment Dynamics" (NBER Working Paper No. 7936)
  • Discussants: John C. Haltiwanger, NBER and University of Maryland, and Richard Rogerson, NBER and University of Pennsylvania
  • Andrew G. Atkeson and Patrick J. Kehoe, Federal Reserve Bank of Minneapolis, "The Transition to a New Economy"
  • Discussants: Peter Howitt, Brown University, and Robert J. Gordon, NBER and Northwestern University
  • Judith A. Chevalier and Anil K Kashyap, NBER and University of Chicago, and Peter E. Rossi, University of Chicago, "Why Don't Prices Rise During Periods of Peak Demand? Evidence from Scanner Data" (NBER Working Paper No. 7981)
  • Discussants: Julio J. Rotemberg, NBER and Harvard University, and Valerie A. Ramey, NBER and University of California, San Diego

Laboratory and field studies of time preference find that discount rates are much greater in the short run than in the long run. Hyperbolic discount functions capture this property. In their paper, Angeletos, Tobacman, Weinberg, Laibson, and Repetto present simulations of the savings and asset allocation choices of households with hyperbolic preferences. They compare the behavior of the "hyperbolic households" to that of "exponential households." The authors find that the hyperbolic households hold relatively more illiquid wealth and relatively less liquid wealth. The hyperbolic households also exhibit greater comovement between consumption and income and experience a greater drop in consumption around retirement. The hyperbolic simulations match observed consumption and balance sheet data much better than the exponential simulations do.
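The short-run/long-run gap in discount rates can be made concrete with the quasi-hyperbolic (beta-delta) form commonly used to approximate hyperbolic discounting; the parameter values in this sketch are illustrative assumptions, not the authors' calibration.

```python
# Quasi-hyperbolic (beta-delta) vs. exponential discounting.
# Illustrative parameters (not the paper's calibration): beta = 0.7, delta = 0.96.

def exponential(t, delta=0.96):
    """Standard exponential discount factor for a delay of t periods."""
    return delta ** t

def quasi_hyperbolic(t, beta=0.7, delta=0.96):
    """Beta-delta discount factor: immediate payoffs are undiscounted,
    and all future payoffs are scaled by an extra factor beta."""
    return 1.0 if t == 0 else beta * delta ** t

def implied_rate(discount, t):
    """One-period discount rate between periods t and t+1: D(t)/D(t+1) - 1."""
    return discount(t) / discount(t + 1) - 1

# Under beta-delta discounting the rate between today and tomorrow
# is much higher than the rate between two distant periods, which is
# the short-run impatience that exponential discounting cannot capture.
short_run = implied_rate(quasi_hyperbolic, 0)    # ~48.8 percent
long_run = implied_rate(quasi_hyperbolic, 10)    # ~4.2 percent
```

With an exponential discount function the implied one-period rate is the same at every horizon, which is why that specification cannot reproduce the short-run impatience seen in laboratory and field data.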

Caselli and Coleman define a country's technology as a trio of efficiencies: one for unskilled labor, one for skilled labor, and one for capital. The authors then find that the efficiency of unskilled labor and the efficiencies of skilled labor and capital are negatively correlated across countries. They interpret this as evidence of a "world technology frontier." On this frontier, increases in the efficiency of unskilled labor come at the cost of declines in the efficiency of skilled labor and capital. Caselli and Coleman estimate a model in which firms in each country optimally choose their technology subject to a technology frontier. The optimal choice of technology depends on the country's endowment of skilled and unskilled labor, so that the model is one of appropriate technology. The estimation allows for country-specific technology frontiers, attributable to barriers to technology adoption. The authors find that poor countries disproportionately tend to be inside the world technology frontier.

Acemoglu, Johnson, and Robinson argue that Europeans adopted very different colonization policies, with distinct associated institutions, in their various colonies. The choice of colonization strategy was determined, at least in part, by whether Europeans could settle in the colony. In places where Europeans faced high mortality rates, they could not settle and were thus more likely to set up extractive institutions. These early institutions have persisted to this day. Exploiting differences in the mortality rates faced by soldiers, bishops, and sailors in the colonies during the seventeenth, eighteenth, and nineteenth centuries as an instrument for current institutions, the authors estimate that these institutions have large effects on per capita income. The estimates imply that differences in institutions explain approximately three-quarters of the per capita income differences across former colonies. After controlling for the effect of institutions, the authors find that countries in Africa, or closer to the equator, do not have lower incomes.

Campbell and Fisher study how producers' idiosyncratic risks affect an industry's aggregate dynamics in an environment in which certainty equivalence fails. In the model, producers can place workers in two types of jobs, organized and temporary. Workers are less productive in temporary jobs, but creating organized jobs requires the irreversible investment of managerial resources. Increasing productivity risk raises the value of an unexercised option to create an organized job. Losing this option is one cost of immediate organized job creation, so an increase in its value induces substitution toward cheaper temporary jobs. Because these jobs are costless to create and destroy, a producer using temporary jobs can be more flexible, responding more to both idiosyncratic and aggregate shocks. If all of an industry's producers adapt to heightened idiosyncratic risk in this way, then the industry as a whole can respond more to a given aggregate shock. This insight helps to explain the observation from the U.S. manufacturing sector that groups of plants displaying high idiosyncratic variability also have large aggregate fluctuations.

During the Second Industrial Revolution, from 1860 to 1900, a large number of new technologies, including electricity, were invented. These inventions launched a transition to a new economy: 70 years of ongoing, rapid technical change. However, following this revolution, there was a delay of several decades before growth in both output and productivity rose to new levels. Historians hypothesize that this delay was caused by the slow diffusion of the new technologies, which were embodied in the design of new plants, combined with ongoing learning in plants after they had adopted the new technologies. Motivated by these hypotheses, Atkeson and Kehoe build a quantitative model of this transition and show that it implies both slow diffusion and a delay in growth similar to that in the data.

Chevalier, Kashyap, and Rossi examine the retail and wholesale prices of a large supermarket chain in Chicago over seven and a half years. They show that prices tend to fall during the seasonal demand peak for a product and that changes in retail margins account for most of those price changes. This research adds to the growing body of evidence that markups are countercyclical. The pattern of margin changes is consistent with "loss leader" models, such as the Lal and Matutes (1994) model of retailer pricing and advertising competition. Other models of imperfect competition are less consistent with retailer behavior. The authors find that manufacturer behavior plays only a limited role in the countercyclicality of prices.

 

Public Economics

The NBER's Program on Public Economics met in Cambridge on November 2-3. Program Director James M. Poterba of MIT served as organizer and chose the following papers for discussion:

  • Shlomo Yitzhaki, NBER and Hebrew University, "A Public Finance Approach to Assessing Poverty Alleviation"
  • Discussant: Holger Sieg, NBER and Duke University
  • Karen E. Dynan, Federal Reserve Board, Jonathan S. Skinner, NBER and Dartmouth College, and Stephen P. Zeldes, NBER and Columbia University, "Do the Rich Save More?" (NBER Working Paper No. 7906)
  • Discussant: Christopher D. Carroll, NBER and Johns Hopkins University
  • Louis Kaplow, NBER and Harvard University, "A Framework for Assessing Estate and Gift Taxation" (NBER Working Paper No. 7775)
  • Discussant: Antonio Rangel, NBER and Stanford University
  • Julie B. Cullen, NBER and University of Michigan, Steven D. Levitt, NBER and University of Chicago, and Brian Jacob, University of Chicago, "The Impact of School Choice on Student Outcomes: An Analysis of the Chicago Public Schools" (NBER Working Paper No. 7888)
  • Discussant: Cecilia E. Rouse, NBER and Princeton University
  • Francesco Caselli, NBER and Harvard University, and Massimo Morelli, University of Minnesota, "Bad Politicians"
  • Aaron Yelowitz, NBER and University of California, Los Angeles, "Public Housing and Labor Supply"
  • Discussant: Mark G. Duggan, NBER and University of Chicago
  • Roger H. Gordon, NBER and University of Michigan, and Young Lee, University of Maryland, "Do Taxes Affect Corporate Debt Policy? Evidence from U.S. Corporate Tax Return Data" (NBER Working Paper No. 7433)
  • Discussant: Mihir A. Desai, NBER and Harvard University

Yitzhaki compares cost-benefit analysis and tax reform. He shows that both concepts can be handled by the same method: in both, there is a need to define social distributional weights and to evaluate the marginal efficiency of public funds. He suggests that the social distributional weights be derived from popular indexes of inequality. This would enable the decomposition of the impact of tax reform on growth and redistribution, allowing one to evaluate the trade-off between the two.

The issue of whether households with higher lifetime incomes save a larger fraction of their income is important to the evaluation of tax and macroeconomic policy. Dynan, Skinner, and Zeldes consider the various ways in which life-cycle models can generate differences in saving rates across income groups: through variation in Social Security benefits, time preference rates, non-homothetic preferences, bequest motives, uncertainty, and consumption floors. They find a strong positive relationship between personal saving rates and lifetime income. The data do not support theories relying on time preference rates, non-homothetic preferences, or variations in Social Security benefits. Instead, the evidence is consistent with models in which precautionary saving and bequest motives drive variations in saving rates across income groups. Finally, the authors illustrate how models that assume a constant rate of saving across income groups can yield erroneous predictions.

Whether and how estates and gifts should be taxed has long been a controversial subject, and the approach to estate and gift taxation varies among developed countries. Kaplow examines the conceptual basis for various arguments for and against the current estate and gift tax regime and proposed alternatives. He then considers the integration of policy analysis of transfer taxation with analysis of the rest of the tax system, notably, the income tax. How would it be optimal to tax transfers if they are viewed simply as one of many forms of expenditure by donors? And, how do the distinctive features of gifts and bequests alter the conclusions? Kaplow discusses the importance of different transfer motives and reconsiders the analysis in light of the importance of: human capital in intergenerational transfers; differences between inter vivos transfers and bequests; differences between gifts to individuals and gifts to charitable institutions; differences among gifts to donees having varying relationships to the donor; and the possibility that transfers are not explained by maximizing behavior.

Cullen, Jacob, and Levitt explore the impact of school choice through the open enrollment program of the Chicago Public Schools (CPS). Roughly half of the students within the CPS opt out of their assigned high school to attend other neighborhood schools or special career academies and magnet schools. Students who opt out are more likely to graduate than observationally similar students who remain at their assigned schools. However, except for those attending career academies, the gains appear to be driven by the fact that the more motivated students are disproportionately likely to opt out. Students with easy geographical access to a range of schools, other than career academies, are no more likely to graduate on average than students in more isolated areas. Open enrollment apparently benefits those students who take advantage of having access to vocational programs without harming those who do not.

Caselli and Morelli present a simple theory of the quality of elected officials. Quality has (at least) two dimensions: competence and honesty. Voters prefer competent and honest policymakers, so high-quality citizens have a greater chance of being elected to office. But low-quality citizens have a "comparative advantage" in pursuing elective office because their market wages are lower than the market wages of high-quality citizens (competence), and/or because they reap higher returns from holding office (honesty). In the political equilibrium, the average quality of the elected body depends on the structure of rewards from holding public office. Under the assumption that the rewards from office increase with the average quality of office holders, there can be multiple equilibria in quality. Under the assumption that incumbent policymakers set the rewards for future policymakers, there can be path dependence in quality.

Yelowitz explores how public housing rules affect the work behavior of female-headed households. The public housing program's generosity varies by metropolitan area. It also varies over time, through year-to-year changes in the subsidy and income eligibility limit. And, unlike other welfare programs, the housing benefits vary based on the sex composition of the children. For example, a family with one boy and one girl gets a three-bedroom apartment or voucher, while a family with two boys or two girls gets a two-bedroom apartment or voucher. Yelowitz finds that the public housing rules induce labor supply distortions. Among female-headed households, a one-standard deviation increase in the subsidy reduces labor force participation by 3.6-4.2 percentage points from a baseline participation rate of 70-75 percent.

Using data on all U.S. corporations, Gordon and Lee estimate the effects of changes in corporate tax rates on the debt policies of firms of different sizes. Small firms face very different tax rates than larger firms, and relative tax rates also have changed frequently over time, providing substantial information to identify tax effects. Their results suggest that taxes have had a strong and statistically significant effect on debt levels. For example, cutting the corporate tax rate by 10 percentage points (for example, from 46 percent to 36 percent) and holding personal tax rates fixed will reduce the fraction of assets financed with debt by around 3.5 percent. Since small firms normally rely much more heavily on debt finance yet face much lower tax incentives to use debt, the estimated effect of taxes would be strongly biased downwards without controls for firm size.

 

Asset Pricing

The NBER's Program on Asset Pricing met in Cambridge on November 3. Jacob Boudoukh, NBER and New York University, and Jiang Wang, NBER and MIT, organized the program and chose the following papers for discussion:

  • John Y. Campbell and Luis M. Viceira, NBER and Harvard University, and Lewis Chan, Hong Kong University of Science and Technology, "A Multivariate Model of Strategic Asset Allocation"
  • Discussant: Anthony Lynch, New York University
  • Joao F. Gomes, Leonid Kogan, and Lu Zhang, University of Pennsylvania, "Equilibrium Cross Section of Returns"
  • Discussant: Jonathan Berk, NBER and University of California, Berkeley
  • Qiang Dai, New York University, "From Equity Premium Puzzle to Expectations Puzzle: A General Equilibrium Production Economy of Stochastic Habit Formation"
  • Discussant: John H. Cochrane, NBER and University of Chicago
  • Nicholas C. Barberis, NBER and University of Chicago, and Ming Huang, Stanford University, "Mental Accounting, Loss Aversion, and Individual Stock Returns"
  • Discussant: John Heaton, NBER and University of Chicago
  • Erzo G. J. Luttmer and Thomas Mariotti, London School of Economics, "Subjective Discounting in an Exchange Economy"
  • Discussant: Stanley Zin, NBER and Carnegie Mellon University
  • Michael W. Brandt, NBER and University of Pennsylvania, Qi Zeng, University of Pennsylvania, and Lu Zhang, "Equilibrium Stock Return Dynamics under Alternative Rules of Learning about Hidden States"
  • Discussant: Pietro Veronesi, University of Chicago

Campbell, Chan, and Viceira show how the predictability of asset returns can affect the portfolio choices of long-lived investors who value wealth not for its own sake but for the consumption it can support. The authors develop an approximate solution method for the optimal consumption-and-portfolio-choice problem of an infinitely-lived investor with Epstein-Zin utility who faces a set of asset returns described by a vector autoregression in returns and state variables. Their empirical estimates, based on long-run annual and postwar quarterly U.S. data, suggest that the predictability of stock returns greatly increases the optimal demand for stocks. Nominal bonds have only a small role in optimal long-term portfolios. The authors extend the analysis to consider long-term inflation-indexed bonds and find that extremely conservative investors should hold large positions in these bonds when they are available.

Gomes, Kogan, and Zhang explicitly link expected stock returns to firm characteristics--such as firm size and book-to-market (B/M) ratio--in a dynamic general equilibrium production economy. Although stock returns in the model are characterized by an intertemporal Capital Asset Pricing Model (CAPM) with the market portfolio as the only factor, both size and B/M play separate roles in describing the cross section of returns. These two firm characteristics appear to predict stock returns because they are correlated with the true conditional market beta of returns. These cross-sectional relations can persist even after controlling for a typical empirical estimate of market beta. This supports the view that the documented ability of size and B/M to explain the cross section of stock returns is not necessarily inconsistent with a single-factor conditional CAPM.

Dai develops a general equilibrium model for a representative agent production economy with stochastic internal habit formation. The model Dai describes has a scale-independent economy with a unique stochastic investment opportunity set. Local correlation between the stochastic interest rate and the time-varying market price of risk can be determined endogenously and leads to correct predictions of the sign and magnitude of several major empirical puzzles in both equity and bond markets. Dai shows that the equity premium puzzle, the risk-free rate puzzle, and the expectations puzzle are completely resolved under reasonable parameter values. Thus, he establishes the inextricable link between the equity and bond markets, both theoretically and empirically.

Barberis and Huang study equilibrium asset prices in a model where investors are loss averse, paying particular attention to what they are loss averse about. The authors consider two possibilities, which correspond to different assumptions about how people do mental accounting or about how they evaluate their investment performance. In one case, investors track their performance stock by stock and are loss averse over individual stock fluctuations. In the other case, they measure their performance at the portfolio level and are loss averse only over portfolio fluctuations. The authors find that loss aversion over individual stock fluctuations helps to explain a wide range of empirical facts, both in the time series and in the cross section. In simulated data, individual stock returns have a high mean, excess volatility, and are slightly predictable in the time series. There are also large "value" and "size" premiums in the cross section. Investor loss aversion over portfolio fluctuations is less successful in explaining the facts: individual returns are insufficiently volatile and excessively correlated, while the premiums for value and size largely disappear.

Luttmer and Mariotti describe the equilibrium of a discrete-time exchange economy in which consumers with arbitrary subjective discount factors and quasi-homothetic period utility functions follow linear Markov consumption and portfolio strategies. The authors provide an analytically convenient continuous-time approximation and show how subjective rates of time preference affect risk-free rates but not instantaneous risk-return trade-offs. They also examine the quantitative effects of hyperbolic discounting in an economy in which log endowments are subject to temporary and permanent shocks that are governed by a Feller (1951) square-root process. They find that hyperbolic and quasi-hyperbolic discount factors can significantly increase the volatility of aggregate wealth and raise the expected excess return on aggregate wealth.

Brandt, Zeng, and Zhang examine the properties of equilibrium stock returns in an incomplete information economy in which the agents need to learn the hidden state of the endowment process. They consider both optimal Bayesian learning and several suboptimal learning rules, including near-rational learning, over- or underconfidence, optimism or pessimism, adaptive learning, and limited memory. They find that Bayesian learning can quantitatively explain long-run mean-reversion, predictability, volatility clustering, and leverage effects in stock returns. However, it cannot generate enough short-run momentum because any uncertainty about the state is resolved too quickly (that is, agents learn too fast). Among the suboptimal learning rules, only overconfidence can marginally improve some aspects of the model (that is, introduce short-run momentum) without substantially deteriorating other aspects.

 

Labor Studies

The NBER's Program on Labor Studies met in Cambridge on November 3. Program Director Richard B. Freeman and NBER Research Associate Lawrence F. Katz, both of Harvard University, chose these papers for discussion:

  • Aaron Yelowitz, NBER and University of California, Los Angeles, "Public Housing and Labor Supply"
  • Joshua D. Angrist, NBER and MIT, "Economic and Social Consequences of Imbalanced Sex Ratios: Evidence from America's Second Generation"
  • Charles C. Brown, NBER and University of Michigan, "Relatively Equal Opportunity in the Armed Forces: Impacts on Children of Military Families"
  • Orley C. Ashenfelter, NBER and Princeton University, and David Card, NBER and University of California, Berkeley, "How Did the Elimination of Mandatory Retirement Affect Faculty Retirement?"
  • Steven J. Davis, NBER and University of Chicago, and Paul Willen, University of Chicago, "Occupation-Level Income Shocks and Asset Returns: Their Covariance and Implications for Portfolio Choice" (NBER Working Paper No. 7905)
  • Brian J. Hall, NBER and Harvard University, and Kevin J. Murphy, University of Southern California, "Stock Options for Undiversified Executives"

Yelowitz uses data from the Survey of Income and Program Participation (SIPP) and the Current Population Survey (CPS) to explore how rules governing public housing affect the work behavior of female-headed households. The generosity of the public housing program varies according to: 1) metropolitan area; 2) time, because of year-to-year changes in the subsidy and income eligibility limit; and 3) sex composition of the household's children (for example, a family with one boy and one girl gets a three-bedroom apartment or voucher, while a family with two boys or two girls gets a two-bedroom apartment or voucher). Yelowitz concludes that the public housing rules do induce labor supply distortions. Among female-headed households, a single standard deviation increase in the subsidy reduces labor force participation by 3.6 to 4.2 percentage points from a baseline participation rate of 70 to 75 percent.

A combination of changing migration patterns and U.S. immigration restrictions resulted in a shift in the male-female balance in many ethnic groups in the early twentieth century. Angrist asks how this change in sex ratios affected the children of immigrants. He finds that higher sex ratios, defined as the number of men per woman, had a large positive impact on the likelihood of marriage for females. More surprising, perhaps, marriage rates among second-generation males were also an increasing function of immigrant sex ratios. This suggests that higher sex ratios also raised male earnings and the incomes of parents with young children. Changes in extended family structure associated with changing sex ratios complicate the interpretation of these findings. On balance, though, the results are consistent with theories in which higher sex ratios increase male competition in the marriage market.

Equal opportunity policy and market forces have made the military a distinctive institution in U.S. society. Blacks are well represented in the military, compared to the civilian sector. Integration of both work groups and housing started earlier and proceeded more rapidly in the military. And, unlike many civilian jobs, the military provides medical care for both soldiers and dependents. While one might look for impacts of such relatively equal opportunities on a number of child outcomes, Brown focuses on the test scores of children in eighth grade. Data from the National Assessment of Educational Progress suggest that, while white children from military families score slightly higher than do their civilian counterparts, black children from military families do significantly better than their counterparts. The test score gap is about 40 percent smaller in the military than in civilian schools. Moreover, a variety of evidence suggests that this is not primarily attributable to enlistment policies that determine who is able to enter the armed forces.

Using information on retirement flows between 1986 and 1996 among older faculty at a large sample of four-year colleges and universities, Ashenfelter and Card attempt to measure the effect of the elimination of mandatory retirement. Comparisons of retirement rates before and after 1994, the year most institutions were forced to eliminate mandatory retirement, suggest that the abolition of compulsory retirement led to a dramatic drop in retirement rates for faculty aged 70 and 71. Comparisons of retirement rates in the early 1990s between schools that were still enforcing mandatory retirement and those that were forced to stop by state laws lead to the same conclusion. In the era of mandatory retirement, fewer than 10 percent of 70-year-old faculty were still teaching two years later. After the elimination of mandatory retirement, this fraction rose to 50 percent. These findings suggest that most U.S. colleges and universities will experience a significant rise in the fraction of older faculty in the coming years.

Davis and Willen develop and apply a simple graphical approach to portfolio selection that accounts for covariance between asset returns and an investor's labor income. The authors apply the approach to occupation-level components of innovations in individual income estimated from the CPS and characterize several properties of these innovations, including their covariance with aggregate equity returns, long-term bond returns, and returns on several other assets. They find that aggregate equity returns are not correlated with the occupation-level income innovations. A portfolio based on firm size is significantly correlated with income innovations for several occupations, though, as are selected industry-level equity portfolios. Applying their theory to the empirical results yields large predicted levels of risky asset holdings compared to observed levels, considerable variation in optimal portfolio allocations over the life cycle, and large departures from the two-fund separation principle.

Hall and Murphy use a certainty-equivalence framework to analyze the cost and value of, and pay/performance incentives provided by, nontradable options held by undiversified, risk-averse executives. They derive "Executive Value" lines -- the risk-adjusted analogues to Black-Scholes lines -- and distinguish between "executive value" and "company cost." Their findings suggest that the divergence between the value and cost of options explains or provides insight into virtually every major issue regarding stock option practice, including: executive views about Black-Scholes measures of options; tradeoffs between options, stock, and cash; exercise price policies; connections between the pay-setting process and exercise price policies; institutional investor views regarding options and restricted stock; option repricings; early exercise policies and decisions; and the length of vesting periods.

 

Behavioral Finance

The NBER's Working Group on Behavioral Finance met on November 10 in New Haven. Robert J. Shiller, NBER and Yale University, and Richard H. Thaler, NBER and University of Chicago, organized the program and chose the following papers for discussion:

  • Nicholas C. Barberis, NBER and University of Chicago, and Andrei Shleifer, NBER and Harvard University, "Style Investing" (NBER Working Paper No. 8039)
  • Discussant: Sanford J. Grossman, NBER and University of Pennsylvania
  • Anna Scherbina, Northwestern University, "Stock Prices and Differences of Opinion: Empirical Evidence that Prices Reflect Optimism"
  • Discussant: Richard H. Thaler
  • Joseph Chen and Harrison Hong, Stanford University, and Jeremy C. Stein, NBER and Harvard University, "Breadth of Ownership and Stock Returns"
  • Discussant: Jeffrey A. Wurgler, Yale University
  • Brad M. Barber and Terrance Odean, University of California, Davis, and Lu Zheng, University of Michigan, "The Behavior of Mutual Fund Investors"
  • Discussant: William N. Goetzmann, NBER and Yale University
  • Louis K. C. Chan, University of Illinois; Jason J. Karceski, University of Florida; and Josef Lakonishok, NBER and University of Illinois, "The Level and Persistence of Growth Rates"
  • Discussant: Cliff Asness, AQR Capital Management, LLC
  • Jeffery S. Abarbanell, University of North Carolina, and Reuven Lehavy, University of California, Berkeley, "Biased Forecasts or Biased Earnings? The Role of Earnings Management in Explaining Apparent Optimism and Inefficiency in Analysts' Earnings Forecasts"
  • Discussant: Jay Patel, Boston University

Barberis and Shleifer study asset prices in an economy in which some investors classify risky assets into different styles and move funds back and forth between these styles depending on relative performance. News about one style can affect the prices of other, apparently unrelated styles; assets in the same style will move together too much, while assets in different styles will comove too little; and high average returns on a style will be associated with common factors unrelated to risk. The model also implies that style momentum strategies will be very profitable. The authors use their model to shed light on a number of puzzling features of the data.

Scherbina investigates how differences of opinion about stock valuations influence prices. She finds that stock prices are driven by investors with an optimistic outlook whenever market and institutional frictions prevent pessimistic investors from expressing their opinion. As a result, market prices are more likely to exceed consensus valuations when differences of opinion are substantial. Using data on analysts' forecasts, Scherbina divides stocks into portfolios based on the dispersion in earnings forecasts and finds that portfolios of stocks with highly dispersed forecasts on average earn 0.82 percent lower returns per month than portfolios of low-dispersion stocks. The difference in returns is more pronounced for low book-to-market and small stocks. She also documents that consensus forecasts are more upwardly biased the higher the dispersion in the underlying forecasts. This bias arises because the more pessimistic analysts choose not to issue a forecast for fear of jeopardizing their relationship with management.

Chen, Hong, and Stein develop a model of stock prices in which there are differences of opinion among investors and constraints on short sales. Breadth of ownership is a valuation indicator in the model. When breadth is low -- that is, when few investors have long positions in the stock -- this signals that the short-sales constraint is binding tightly; it implies that prices are high relative to fundamentals and that expected returns therefore are low. Reductions (increases) in breadth thus should forecast lower (higher) returns. Another prediction of the model is that changes in breadth should be positively correlated with other variables that forecast increased risk-adjusted returns. Using quarterly data on mutual fund holdings over the period 1979-98, the authors find evidence supportive of both of these predictions.

Barber, Odean, and Zheng analyze the mutual fund purchase and sale decisions of over 30,000 households with accounts at a large U.S. discount broker for the six years ending in 1996. They document three primary results. First, investors buy funds with strong past performance; over half of all fund purchases occur in funds ranked in the top quintile of past annual returns. Second, investors sell funds with strong past performance and are reluctant to sell their losing fund investments: they are twice as likely to sell a winning mutual fund as a losing one. Thus, nearly 40 percent of fund sales occur in funds ranked in the top quintile of past annual returns. Third, investors are sensitive to the form in which fund expenses are charged. Although investors are less likely to buy funds with high transaction fees (for example, broker commissions or front-end loads), their purchases are relatively insensitive to a fund's operating expense ratio. Given evidence on the persistence of mutual fund performance, the purchase of last year's winning funds seems rational. However, the authors argue that selling winning fund investments and neglecting a fund's operating expense ratio when purchasing a fund are clearly counterproductive.

Chan, Karceski, and Lakonishok analyze historical long-term growth rates across a broad cross-section of stocks using a variety of indicators of operating performance. They ask whether it is possible to predict which firms will achieve high future growth using attributes such as past growth, industry affiliation (technology versus nontechnology), book-to-market ratio, past return, and security analysts' long-term forecasts. Historically, some firms have attained very high growth rates, but this is relatively rare. Only about 5 percent of surviving firms do better than a growth rate of 29 percent per year over ten years. Moreover, there is very limited ability to identify beforehand which firms will be able to generate such high long-term growth in the future. The historical patterns thus raise strong doubts about the sustainability of many stocks' valuations. Looking forward, the past growth record does not suggest a high expected return on stocks in general.
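A quick compounding calculation shows why the 29 percent threshold cited above is so demanding; the threshold is from the summary, while the code itself is only an illustrative sketch:

```python
# Compounding illustrates why sustained 29%-per-year growth is rare:
# it implies roughly a 13-fold expansion over a decade.

def cumulative_growth(annual_rate, years):
    """Total growth factor from compounding a constant annual rate."""
    return (1.0 + annual_rate) ** years

factor = cumulative_growth(annual_rate=0.29, years=10)
print(f"10-year growth factor at 29%/yr: {factor:.1f}x")  # roughly 12.8x
```

Only a firm whose sales or earnings expand nearly thirteenfold in ten years clears the bar that, per the paper, only about 5 percent of surviving firms achieve.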

Abarbanell and Lehavy demonstrate that relatively small numbers of large optimistic and small pessimistic errors in analysts' forecasts have a disproportionate impact on the statistical measures relied on in the earlier literature for drawing inferences about analysts' incentives and their proclivity to issue biased forecasts. The authors show that a common empirical source underlies evidence of bias and of inefficiency in distributions of analysts' forecasts, two phenomena that previously have been analyzed as separate manifestations of analyst irrationality. Also, the authors find that analysts do not account completely for systematic forms of earnings management intended to create accounting reserves or to slightly beat market earnings expectations. Taken together, these findings challenge researchers to refine the existing judgment- and incentive-based explanations for systematic analyst forecast errors in order to account for the role of unusual reported earnings realizations. The results also raise the possibility that systematic "errors" characterize equilibriums in which analysts are completely rational and face symmetric incentives.

 

Corporate Finance

The NBER's Program on Corporate Finance met on November 10 in Cambridge. Rene M. Stulz, NBER and Ohio State University, organized the meeting and chose these papers for discussion:

  • Luigi Guiso, University of Sassari; Paola Sapienza, Northwestern University; and Luigi G. Zingales, NBER and University of Chicago, "The Role of Social Capital in Financial Development" (NBER Working Paper No. 7563)
  • Discussant: Tarun Khanna, Harvard University
  • Charles P. Himmelberg and Inessa Love, Columbia University, and R. Glenn Hubbard, NBER and Columbia University, "Investor Protection, Ownership, and Investments: Some Cross-Country Empirical Evidence"
  • Discussant: David S. Scharfstein, NBER and MIT
  • Julie Wulf, University of Pennsylvania, "Internal Capital Markets and Firm-Level Compensation Incentives for Division Managers"
  • Discussant: Antoinette Schoar, MIT
  • Michael J. Barclay, NBER and University of Rochester; Clifford G. Holderness, Boston College; and Dennis P. Sheehan, Pennsylvania State University, "The Block Pricing Paradox"
  • Discussant: Karen H. Wruck, Ohio State University
  • Brett Trueman, M. H. Franco Wong, and Xiao-Jun Zhang, University of California, Berkeley, "The Eyeballs Have It: Searching for the Value in Internet Stocks"
  • Discussant: Jay R. Ritter, University of Florida
  • Eugene F. Fama, University of Chicago, and Kenneth R. French, NBER and MIT, "The Equity Premium"
  • Discussants: G. William Schwert, NBER and University of Rochester, and Andrei Shleifer, NBER and Harvard University

To identify the effect of social capital on financial development, Guiso, Sapienza, and Zingales exploit the well-known differences in social capital and trust across different areas of Italy. In regions with high levels of social trust, households hold less of their wealth in cash and more in stock, use more checks, have greater access to institutional credit, and make less use of informal credit. The effect of social capital is stronger where legal enforcement is weaker and among less-educated people. These results are not driven by omitted environmental variables, because the authors also show that the behavior of people who move is still affected by the level of social capital in the province where they were born.

Himmelberg, Love, and Hubbard investigate the effect of investor protection on corporate investment, emphasizing the endogeneity of ownership structure as one means of identifying firms operating under weak legal protections. Building on the idea that a weak legal environment increases the cost of external financing, the authors derive a model of investment in which changes in the marginal cost of capital are identified by changes in leverage and by the interactions of leverage with the concentration of inside equity ownership. Using firm-level data for a broad sample of 39 countries, they confirm that weaker legal protection empirically predicts higher concentrations of inside equity ownership. They also find that the marginal cost of capital is more sensitive to changes in leverage when inside equity ownership is highly concentrated. These results provide evidence that weak investor protection inhibits the efficient allocation of capital.

Using Compustat financial data and compensation data from a proprietary survey, Wulf finds that compensation and investment incentives are substitutes: firms that more strongly link firm performance to incentive compensation for division managers also provide weaker investment incentives through the capital budgeting process. Specifically, as the proportion of incentive pay for division managers that is based on firm performance increases, division investment is less responsive to division profitability. These findings are consistent with a model of influence activities by division managers and the implied relative weights placed on imperfect, objective signals (that is, accounting measures) versus distortable, subjective signals (that is, manager recommendations) in interdivisional capital allocation decisions.

Barclay, Holderness, and Sheehan examine the disparity in prices of large traded blocks of stock. On average, block trades are priced at an 11 percent premium to the post-announcement exchange price, while private placements are priced at a 19 percent discount. This paradox cannot be resolved by obvious considerations such as block size or liquidity. According to the authors, resolution comes from what happens after the transactions. Most block-trade purchasers become involved in management, suggesting that their premiums reflect anticipated private benefits from control. Most private-placement purchasers remain passive: firm value declines, and there are few acquisitions and little management turnover. This suggests that discounts on private placements reflect implicit compensation for helping to entrench management, not for monitoring or for providing certification.

Trueman, Wong, and Zhang show how the market uses limited accounting information and measures of Internet usage to value Internet firms. The authors do not find a significant association between bottom-line net income and their sample firms' market prices; this is consistent with some investors' claims that financial statement information is of very limited use in the valuation of Internet stocks. However, they do find that gross profits are positively and significantly associated with prices. In addition, they find that unique visitors and page views, as measures of Internet usage, provide incremental explanatory power for stock prices, over and above net income and its components. They also find significant differences in valuation between e-tailers and portal and content/community firms with respect to their financial data and measures of Internet usage.

Fama and French compare estimates of the equity premium for 1872-1999 from realized returns and the Gordon constant dividend growth model. The two approaches produce similar estimates of the real equity premium for 1872-1949, about 4 percent per year. But for 1950-99, the Gordon estimate of 3.4 percent per year is about 40 percent of the estimate from realized stock returns of 8.28 percent. The authors suggest that the difference between the two estimates for 1950-99 is largely attributable to unexpected capital gains, the result of a decline in discount rates to unusually low values at the end of the sample period. They conclude that the unconditional expected stock return of the last half-century is a lot lower than the realized average return.
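The two estimators being compared can be sketched in a few lines. The functions mirror the approaches described above, but the input numbers are hypothetical placeholders, not Fama and French's 1872-1999 series:

```python
# Two ways to estimate the equity premium, per the comparison above.
# All inputs are illustrative annual averages in decimal form.

def premium_from_realized_returns(mean_stock_return, mean_riskfree_return):
    """Average realized stock return minus the average risk-free return."""
    return mean_stock_return - mean_riskfree_return

def premium_from_gordon_model(dividend_yield, dividend_growth, mean_riskfree_return):
    """Gordon constant-growth model: expected return = D/P + g."""
    return dividend_yield + dividend_growth - mean_riskfree_return

realized = premium_from_realized_returns(0.09, 0.01)      # 0.08
gordon = premium_from_gordon_model(0.03, 0.015, 0.01)     # 0.035
print(f"realized-return estimate: {realized:.3f}, Gordon estimate: {gordon:.3f}")
```

A persistent gap between the two, as in the 1950-99 sample, is what the authors attribute to unexpected capital gains from falling discount rates rather than to higher expected returns.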

 

Higher Education

The NBER's Working Group on Higher Education met in Cambridge on November 10. Charles T. Clotfelter, NBER and Duke University, organized the meeting at which the following papers were discussed:

  • Kelly Dugan, Charles Mullin, and John Siegfried, Vanderbilt University, "Undergraduate Financial Aid and Subsequent Alumni Giving Behavior"
  • Discussant: Bruce I. Sacerdote, NBER and Dartmouth College
  • Christopher Cornwell, David B. Mustard, and Deepa Sridhar, University of Georgia, "The Enrollment Effects of Merit Aid: Evidence from Georgia's HOPE Scholarship Program"
  • Discussant: Caroline M. Hoxby, NBER and Harvard University
  • Jerry G. Thursby, Purdue University, and Marie C. Thursby, NBER and Purdue University, "Who's Selling the Ivory Tower? Sources of Growth in University Licensing" (NBER Working Paper No. 7718)
  • Discussant: Paula E. Stephan, Georgia State University
  • Amy E. Schwartz, New York University, and Benjamin P. Scafidi, Georgia State University, "What's Happened to the Price of College? Quality Adjusted Price Indexes for Four-Year Colleges"
  • Discussant: A. Abigail Payne, University of Illinois
  • Todd R. Stinebrickner and Ralph Stinebrickner, University of Western Ontario, "The Importance of Nontuition Factors in Determining the Family Income-Schooling Relationship: Evidence from a Liberal Arts College with a Full Tuition Subsidy Program"
  • Discussant: Susan M. Dynarski, NBER and Harvard University
  • Michelle McLennan, Ursinus College, and Susan L. Averett, Lafayette College, "Black and White Women: Differences in College Attendance, Does the Rate of Return Matter?"
  • Discussant: Bridget T. Long, Harvard University

Dugan, Mullin, and Siegfried use data on 2,822 Vanderbilt University graduates to investigate alumni giving behavior during the eight years after graduation. They first estimate the likelihood of making a contribution and then the average gift size, conditional on contributing. They find that the type of financial aid received as an undergraduate has a greater influence on subsequent alumni generosity than the amount received. Adding some scholarship to a loan-only package, or eliminating all loans from a mixed loan-grant package, increases the likelihood of a subsequent contribution. Increasing the total size of the package, or altering the proportions of an already mixed package, appears to be inconsequential for future donations. The authors also find that students who receive small merit scholarships contribute more as alumni than students who receive either no merit scholarship or a large merit scholarship.

Georgia's lottery-funded HOPE Scholarship allows high-school students graduating with a "B" average to qualify for scholarships at degree-granting public or private colleges. Since HOPE's inception, more than $1 billion in scholarship funds have been disbursed to over half a million students. Exploiting HOPE as a natural experiment, Cornwell, Mustard, and Sridhar contrast enrollment rates in Georgia with those in a set of control-group states over 1988-97. They find that HOPE has led to about an 8 percentage point increase, or an 11 percent rise, in the first-time-freshmen enrollment rate in Georgia. The effect is concentrated in four-year schools and roughly evenly split between public and private colleges. HOPE has induced increases of at least 10 and 20 percent, respectively, in the enrollment rates of four-year public and private schools. Finally, these results support the view that HOPE has served primarily to influence college choice rather than to expand access.
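The two headline numbers are internally consistent, as a back-of-the-envelope check shows. The implied baseline rate below is our inference from the two reported figures, not a number taken from the paper:

```python
# Consistency check: an 8-percentage-point increase that amounts to an
# 11 percent proportional rise implies a pre-HOPE baseline rate of
# roughly 8 / 0.11, i.e. about 73 percentage points.

pp_increase = 8.0     # reported percentage-point effect
percent_rise = 0.11   # reported proportional rise

implied_baseline = pp_increase / percent_rise
print(f"implied baseline enrollment rate: {implied_baseline:.0f} percentage points")
```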

Historically, the commercial use of university research has been viewed in terms of spillovers. But there has been a dramatic increase recently in technology transfer through licensing as universities attempt to appropriate the returns from faculty research. This change has prompted concerns regarding the source of the growth -- specifically, whether it reflects a change in the nature of university research. Thursby and Thursby examine the extent to which the growth in licensing is attributable to the productivity of observable inputs or driven by a change in the propensity of faculty and administrators to engage in commercializing university research. They use survey data from 65 universities to calculate total factor productivity (TFP) growth in each stage of research. They augment the productivity analysis with survey evidence from businesses that license in university inventions. Their results suggest that increased licensing is primarily attributable to an increased willingness of faculty and administrators to license and to an increased business reliance on external R&D, rather than to a shift in faculty research.

According to estimates from the consumer price index (CPI), the "sticker" or list price of a college education in the United States has risen significantly faster since the early 1980s than the overall rate of inflation. This has raised considerable concern among policymakers, parents, and students that college attendance is becoming less and less affordable even as it becomes more and more important for success in the job market. For the CPI, the government does not adjust the sticker price of college (tuition and fees) for scholarships awarded, discounts given, or changes in the quality or characteristics of the services provided, such as attributes of the faculty, course offerings, or facilities. Thus, the estimated price indexes reflect changes in the quality and characteristics of college as well as changes in prices. Schwartz and Scafidi, by contrast, develop aid- and quality-adjusted price indexes for U.S. colleges, based on the estimation of hedonic models of the consumer price of college. They find that adjusting for financial aid and quality of services yields an increase in the net price of college over this period that is 45 percent below the increase in the CPI's "college tuition and fees" index.

Researchers have long sought to better understand why a strong relationship between family income and educational attainment exists at virtually all levels of schooling. In part because of a recent increase in the disparity between the wages of college graduates and the wages of individuals with less than a college degree, researchers now want to know exactly why individuals from low-income families are less likely to graduate from college. Using unique new data obtained directly from a liberal arts school that maintains a full tuition subsidy program, Stinebrickner and Stinebrickner show that non-tuition reasons are very important. Their findings have implications for expensive policy programs such as the full tuition subsidy program recently approved by California.

McLennan and Averett focus on the college attendance decisions of women by race, and specifically whether they respond to the rate of return. Their results suggest that both black and white women are likelier to attend college if they are faced with higher rates of return. Further, early childbearing reduces the probability of attending college for both white and black women, even after controlling for family and individual background characteristics.

 

Health Care

The NBER's Program on Health Care met on November 17 at the Bureau's headquarters in Cambridge. Program Director Alan M. Garber of Stanford University presided over a day-long discussion of these topics:

  • "Economic Consequences of Health Insurance Reform"--Presentations and Roundtable Discussion
  • David M. Cutler, NBER and Harvard University; Jonathan Gruber, NBER and MIT; and Mark B. McClellan, NBER and Stanford University
  • "Is Health Insurance Affordable for the Uninsured?"
  • M. Kate Bundorf, NBER and Stanford University, and Mark V. Pauly, NBER and University of Pennsylvania
  • "Incentives in HMOs"
  • Martin S. Gaynor, NBER and Carnegie Mellon University; James B. Rebitzer, NBER and Case Western Reserve University; and Lowell J. Taylor, Carnegie Mellon University
  • "Association between Intensity of Treatment and Mortality in Cohorts of Medicare Beneficiaries"
  • Elliott S. Fisher and Therese A. Stukel, Dartmouth College, and David E. Wennberg, Maine Medical Center

In the first of the day's discussions, Gruber analyzed the economic consequences of a national health insurance plan based on a structured approach to competition among private health plans, tax credits to subsidize health insurance purchase among low-income Americans, and other features to promote near-universal coverage. McClellan presented an analysis of a similar plan, and Cutler led a discussion of several issues in the valuation and costs of national health insurance financing proposals that incorporate competition among private insurance plans.

In their paper, Bundorf and Pauly investigate the meaning of the term "affordability" in the context of the purchase of health insurance. After proposing a definition and estimating the proportion of those currently uninsured who, by this definition, are unable to afford coverage, they find that health insurance actually was affordable for anywhere from 24 to 55 percent of the uninsured in 1998.

Gaynor, Rebitzer, and Taylor use unique proprietary data from an HMO network to analyze the effect of financial and other incentives on medical costs and quality. They report three findings: 1) costs fall as financial incentives for physicians to control costs increase; 2) nonfinancial features of the incentive system (notably peer pressure and mutual monitoring among physicians) may also influence costs; and 3) incentives can be structured so that cost control need not have a negative impact on quality. Indeed, the authors find that panels of physicians who controlled costs most effectively also had the highest score on quality indicators.

Studies of variations in regional medical practice find marked disparities in the amount of medical care provided to Medicare enrollees. Fisher, Stukel, and Wennberg ask whether the more intensive practice patterns observed in some regions result in improved health outcomes. They study the relationship between intensity of treatment and mortality in three groups of Medicare enrollees and find that enrollees residing in high-intensity regions have no better survival than those residing in regions where enrollees use less health care. After the initial episode of care, enrollees who lived in the highest-intensity regions received approximately 60 percent more care during the follow-up period than those in the lowest-intensity regions. Yet higher-intensity treatment was not associated with improved survival; in fact, the authors observed slightly increased mortality as the intensity of medical practice increased.

 

Monetary Economics

Members and guests of the NBER's Program on Monetary Economics met in Cambridge on November 17. Program Director Ben S. Bernanke of Princeton University organized the program and chose the following papers for discussion:

  • Chang-Tai Hsieh, Princeton University, and Christina D. Romer, NBER and University of California, Berkeley, "Was the Federal Reserve Fettered? Devaluation Expectations in the 1932 Monetary Expansion"
  • Discussant: Richard Grossman, Wesleyan University
  • Glenn D. Rudebusch, Federal Reserve Bank of San Francisco, "Term Structure Evidence on Interest Rate Smoothing and Monetary Policy Inertia"
  • Discussant: Brian Sack, Federal Reserve Board
  • Lars E. O. Svensson, NBER and Stockholm University, and Michael Woodford, NBER and Princeton University, "Indicator Variables for Optimal Policy" (NBER Working Paper No. 7953)
  • Discussant: James Bullard, Federal Reserve Bank of St. Louis
  • James A. Kahn, Margaret M. McConnell, and Gabriel Perez-Quiros, Federal Reserve Bank of New York, "The Reduced Volatility of the U.S. Economy: Policy or Progress?"
  • Discussant: Jean Boivin, Columbia University
  • Esteban Jadresic, International Monetary Fund, "Can Staggered Price Setting Explain Short-Run Inflation Dynamics?"
  • Discussant: John Roberts, Federal Reserve Board
  • Aaron Tornell, NBER and University of California, Los Angeles, "Robust Forecasting and Asset Pricing Anomalies" (NBER Working Paper No. 7753)
  • Discussant: James Stock, NBER and Harvard University

Hsieh and Romer consider the $1 billion expansionary open market operation undertaken in the spring of 1932 as a crucial case study of the link between monetary expansion and expectations of devaluation. They use data on forward exchange rates to measure expectations of devaluation during this episode but find little evidence that the large monetary expansion led investors to believe that the United States would devalue. The financial press and the records of the Federal Reserve System also show little evidence of expectations of devaluation or fear of a speculative attack. The authors find that a flawed model of the effects of monetary policy and conflict among the 12 Federal Reserve banks, rather than concern about the gold standard, led the Fed to suspend the expansionary policy in the summer of 1932.

A number of studies have used quarterly data to estimate monetary policy rules or reaction functions. These rules seem to imply a very slow adjustment of the policy interest rate: about 20 percent of the target per quarter. The conventional wisdom is that this gradual adjustment reflects policy inertia or interest rate smoothing behavior by central banks. However, Rudebusch notes that such slow quarterly adjustment implies predictable future variation in the policy rate at horizons of several quarters. In contrast, evidence from the term structure of interest rates suggests that there is no information about such changes in financial markets. Rudebusch provides an alternative interpretation: the large lag coefficients in the estimated policy rules may reflect persistent special factors that cause the central bank to deviate from the policy rule in unpredictable ways.
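The partial-adjustment rule at issue can be sketched as a simple recursion: with smoothing parameter rho = 0.8, the policy rate closes about 20 percent of the gap to its target each quarter, matching the estimates described above. The target path and starting rate below are arbitrary illustrations:

```python
# Interest-rate smoothing as partial adjustment: each quarter the policy
# rate closes a fraction (1 - rho) of the gap to the target rate.
# rho = 0.8 matches the "about 20 percent per quarter" adjustment in the text.

def smoothed_path(target, initial_rate, rho, quarters):
    """Simulate i_t = rho * i_{t-1} + (1 - rho) * target."""
    path = [initial_rate]
    for _ in range(quarters):
        path.append(rho * path[-1] + (1.0 - rho) * target)
    return path

path = smoothed_path(target=5.0, initial_rate=3.0, rho=0.8, quarters=8)
# After one quarter, 20% of the 2-point gap is closed: 3.0 -> 3.4.
print([round(r, 2) for r in path[:3]])
```

Because the gap closes only gradually, future rate changes remain predictable several quarters ahead; the absence of such predictability in the term structure is precisely the tension Rudebusch highlights.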

Svensson and Woodford derive and interpret the optimal weights on indicators in models with partial information about the state of the economy and forward-looking variables, for equilibriums under discretion and under commitment. They examine an example of optimal monetary policy with a partially observable potential output and a forward-looking indicator. The optimal response to the optimal estimate of potential output displays certainty-equivalence, while the optimal response to the imperfect observation of output depends on the noise in this observation.

The U.S. economy has experienced a dramatic decline in the volatility of both inflation and output since the early 1980s. Kahn, McConnell, and Perez-Quiros examine two competing explanations for this. The first is the popular view that improved Fed policy since the late 1970s is chiefly responsible. The second view asserts that improvements in information technology have reduced aggregate output volatility, primarily through their effects on inventory behavior. The authors model the joint determination of output, inflation, and policy in an optimizing framework and argue that the technology story plays the primary role in explaining the relative stability of the last two decades.

While staggered price setting models are increasingly popular in macroeconomics, recent empirical studies question their ability to explain short-run inflation dynamics. Jadresic shows that a staggered price setting model that allows for a flexible distribution of price durations can replicate the persistence of inflation found in the data. The model also can explain the empirical regularity that, although inflation surprises are followed by a period of slow output growth, booms in output growth are followed by a period of high inflation. The distribution of price durations that yields these results, estimated from aggregate data on prices and other variables, is consistent with the microeconomic evidence suggesting that the duration of prices and wages is about a year on average, but that there is a great deal of heterogeneity across individual prices and wages.

Tornell presents an alternative expectation formation mechanism that helps rationalize well-known asset pricing anomalies, such as the predictability of excess returns, excess volatility, and the equity-premium puzzle. As with rational expectations (RE), the expectation formation mechanism that Tornell considers is based on a rigorous optimization algorithm that does not presume misperceptions -- it simply departs from some of the implicit assumptions that underlie RE. The new element is that uncertainty cannot be modeled via probability distributions. Tornell considers an asset pricing model in which uncertainty is represented by unknown disturbance sequences, as in the H-infinity-control literature. Agents must filter the "persistent" and "transitory" components of a sequence of observations to make consumption and portfolio decisions. Tornell finds that H-infinity forecasts are more sensitive to news than RE forecasts and that equilibrium prices exhibit the anomalies previously mentioned.

 

Macroeconomics and Individual Decisionmaking

As part of the NBER's Project on Behavioral Macroeconomics, there was a meeting on "Macroeconomics and Individual Decisionmaking" in Cambridge on November 18. Project Directors George A. Akerlof, University of California, Berkeley, and Robert J. Shiller, NBER and Yale University, set the following agenda:

  • Xavier Gabaix, MIT, and David I. Laibson, NBER and Harvard University, "The 6D Bias and the Equity Premium Puzzle"
  • Discussant: Karen E. Dynan, Federal Reserve Board of Governors
  • George A. Akerlof, University of California, Berkeley; and William T. Dickens and George L. Perry, Brookings Institution, "Near-Rational Wage and Price Setting and the Long-Run Phillips Curve"
  • Discussant: Robert J. Shimer, NBER and Princeton University
  • Steven N. Durlauf, NBER and University of Wisconsin, "A Framework for the Study of Individual Behavior and Social Interactions"
  • Discussant: Russell W. Cooper, NBER and Boston University
  • Roland J. Benabou, NBER and Princeton University, and Jean Tirole, Université des Sciences Sociales, Toulouse, "Willpower and Personal Rules"
  • Discussant: Botond Koszegi, University of California, Berkeley
  • Sendhil Mullainathan, NBER and MIT, "Thinking through Categories: A Model of Cognition"
  • Discussant: Edward D. O'Donoghue, Cornell University
  • David E. Lebow, Raven E. Saks, and Beth Anne Wilson, Federal Reserve Board of Governors, "Downward Nominal Wage Rigidity: Evidence from the Employment Cost Index"
  • Discussant: Shulamit Kahn, Boston University

If decision costs lead agents to update consumption only every D periods, then high-frequency data will exhibit an unusually low correlation between equity returns and consumption growth. Gabaix and Laibson characterize the dynamic properties of an economy composed of consumers who delay updating in this way. Using a Mehra-Prescott procedure, an econometrician would infer a coefficient of relative risk aversion biased upward by a factor of 6D. With quarterly data, if agents adjust their consumption every D = 4 quarters, the imputed coefficient of relative risk aversion will be 24 times greater than the true value. High levels of risk aversion implied by the equity premium and violations of the Hansen-Jagannathan bounds cease to be puzzles. The neoclassical model with delayed adjustment explains the consumption behavior of shareholders. Once limited participation is taken into account, the model matches the high-frequency properties of aggregate consumption and equity returns.
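The headline multiplier is simple arithmetic: the bias factor 6D is from the paper, and the code below merely checks it for the quarterly example in the text:

```python
# The 6D bias: if consumers update consumption only every D periods, the
# coefficient of relative risk aversion imputed from high-frequency data
# is biased upward by a factor of 6D.

def imputed_risk_aversion(true_gamma, D):
    """Risk aversion an econometrician would infer under D-period delay."""
    return 6 * D * true_gamma

# With quarterly data and D = 4 quarters between adjustments,
# the imputed coefficient is 6 * 4 = 24 times the true value.
print(imputed_risk_aversion(true_gamma=1.0, D=4))  # 24.0
```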

In their paper, Akerlof, Dickens, and Perry question the basic assumptions about how expectations of inflation are used. From evidence about how people actually use information in making decisions, they develop an alternative to the natural-rate model based on more realistic, near-rational behavior. They find that rather than having a unique natural rate, the economy exhibits a range of sustainable unemployment rates consistent with low rates of inflation. The lowest sustainable unemployment rate is well below the natural rate as usually estimated and is associated with inflation rates moderately above zero.

Recent work in economics has begun to integrate sociological ideas into the modeling of individual behavior. In particular, this new approach emphasizes how social context and social interdependency influence the ways in which individuals make choices. Durlauf provides an overview of an approach to integrating the theoretical and empirical analysis of such environments. His analysis is based on a framework in Brock and Durlauf (2000a and 2000b). In this paper, he assesses empirical evidence in support of this perspective and explores some of its policy implications.

Benabou and Tirole study internal commitment mechanisms or "personal rules" (diets, exercise regimens, resolutions, moral or religious precepts, and so on) through which people attempt to achieve self-discipline. The basic idea, which builds on Ainslie (1992), is that rules cause lapses to be interpreted as precedents, resulting in a loss of self-reputation that has an adverse impact on future self-control. The authors model the behavior of individuals who are unsure of their willpower, and they characterize rules as self-reputational equilibria in which impulses are held in check by the fear of "losing faith in oneself." They then examine how equilibrium conduct is affected by opportunistic distortions of memory or inference, such as finding excuses for one's past behavior. The authors show that excessively rigid rules--anorexia, workaholism--can be understood as costly forms of self-signaling. In equilibrium, individuals are so afraid of appearing weak to themselves that every decision becomes a test of their willpower, even when the stakes are minor or when self-restraint is not desirable. The authors' results show that "salience of the future" is not only consistent with, but actually generated by, present-oriented preferences.

Mullainathan presents a model in which people use categories to think about the world around them. Faced with data, they first pick a category that best matches it. To make predictions, they ask how representative an outcome would be of the chosen category. This simple model unifies many of the experimentally documented biases: the law of small numbers, the hot hand, representativeness, and the conjunction fallacy. Moreover, the model provides enough structure that it results in readily testable out-of-sample predictions regarding these biases.

Lebow, Saks, and Wilson examine the extent of downward nominal wage rigidity using the microdata underlying the Bureau of Labor Statistics' employment cost index. This dataset has two significant advantages over those used previously. It is extensive, nationally representative, and based on establishment records. Thus it is free from much of the reporting error that has plagued work using the Panel Study of Income Dynamics and Current Population Survey. It also contains detailed information on benefit costs, allowing a first look at the rigidity of total compensation -- arguably the more relevant measure from the firm's perspective. In general, the authors find significantly stronger evidence of downward nominal wage rigidity than did studies using panel data on individuals. Total compensation appears somewhat more flexible than wages and salaries. However, this increased flexibility does not seem to reflect a deliberate attempt by firms to use benefits to circumvent wage and salary rigidity.

 

Productivity and Technological Change

The NBER's Program on Productivity and Technological Change met at the Bureau's offices in Cambridge on December 1 to discuss "Technological Change and Institutional Structure." Manuel Trajtenberg, NBER and Tel Aviv University, organized the meeting and chose these papers for discussion:

  • Philippe Aghion, Harvard University; Christopher J. Harris, King's College, Cambridge; Peter Howitt, Brown University; and John Vickers, All Souls College, Oxford, "Competition, Imitation, and Growth with Step-by-Step Innovation"
  • Ricardo J. Caballero, NBER and MIT, and Mohamad L. Hammour, CEPR and Delta, "Creative Destruction and Development: Institutions, Crises, and Restructuring" (NBER Working Paper No. 7849)
  • George P. Baker, NBER and Harvard University, and Thomas N. Hubbard, NBER and University of Chicago, "Make versus Buy in Trucking: Asset Ownership, Job Design, and Information"
  • Erik Brynjolfsson, MIT, and Shinkyu Yang, New York University, "Intangible Assets and Growth Accounting: Evidence from Computer Investments"
  • Susan C. Athey and Scott Stern, NBER and MIT, "The Impact of Information Technology on Emergency Health Care Outcomes" (NBER Working Paper No. 7887)

Aghion, Harris, Howitt, and Vickers ask whether more intense market competition and imitation are good for growth. They use an endogenous growth model with "step-by-step" innovations, in which technological laggards must first catch up with the leading-edge technology before battling for technological leadership in the future. The authors find that the usual Schumpeterian effect of more intense product market competition (PMC) is almost always outweighed by the increased incentive for firms to innovate in order to escape competition; this means that PMC has a positive effect on growth. They also find that a little imitation almost always enhances growth, as it promotes more frequent neck-and-neck competition, but too much imitation unambiguously reduces growth. Thus, their model points to complementary roles for competition (antitrust) policy and patent policy.

Creative destruction, driven by experimentation and the adoption of new products and processes when investment is sunk, is a core mechanism of development. Generically, underdeveloped and politicized institutions are a major impediment to a well-functioning creative destruction process and result in sluggish creation, technological sclerosis, and spurious reallocation. Those ills reflect the macroeconomic consequences of contracting failures in the presence of sunk investments. Recurrent crises are another major obstacle to creative destruction. But Caballero and Hammour reject the common inference that increased liquidations during crises result in increased restructuring. Rather, they suggest that crises freeze the restructuring process, and this is associated with the tight financial-market conditions that follow. This productivity cost of recessions adds to the traditional costs of resource underutilization.

Both organizational economics and industrial organization seek to explain patterns of asset ownership in the economy. To that end, Baker and Hubbard develop a model of asset ownership in trucking. They test it by examining how the adoption of different classes of on-board computers (OBCs) from 1987 to 1997 influenced shippers to use their own trucks for hauls versus contracting with for-hire carriers. Baker and Hubbard find that OBCs' incentive-improving features pushed hauls toward private carriage, but their resource-allocation-improving features pushed them toward for-hire carriage. The authors conclude that ownership patterns in trucking reflect the importance of both incomplete contracts (Grossman and Hart, 1986) and job design and measurement issues (Holmstrom and Milgrom, 1994).

Brynjolfsson and Yang revise growth accounting methodology to address several puzzles: rapid computer investment, the disappointing productivity performance after 1973, and the productivity surge of the late 1990s. They show that the computer-related portion of intangible investments is substantial and growing rapidly. In particular, the authors find that the intangible capital investments that accompany the computerization of the economy are far larger than the direct investments in computers themselves. The apparent productivity slowdown after 1973 may be in part an artifact of the omission of this capital accumulation from the national accounts. A revised estimate that takes the intangible investments into account indicates that the total factor productivity of the U.S. economy grew up to one percent per year faster during this period than previously estimated. If the ratio of intangible assets to computer investments has remained approximately constant, then the recent productivity surge may have been underestimated as well.

Athey and Stern analyze the productivity of technology and job design in emergency response (911) systems. During the 1990s, many systems adopted Enhanced 911 (E911), which uses information technology to link automatic caller identification to a database of address and location information. Using data from Pennsylvania counties in 1994-96, when almost half of them experienced a change in technology, Athey and Stern analyze the health status of cardiac patients at the time of ambulance arrival, which timely response should improve. The authors find that E911 increases the short-term survival rates for patients with cardiac diagnoses by about one percent. They also find that E911 reduces hospital charges. Finally, the authors find that Emergency Medical Dispatching (EMD), in which call-takers gather medical information, provide medical instructions over the telephone, and prioritize the allocation of ambulance and paramedic services, does not affect the E911 results: EMD and E911 are neither substitutes nor complements.

 

International Trade and Investment

The NBER's Program on International Trade and Investment met on December 1-2 at the NBER's offices in Palo Alto, California. Program Director Robert C. Feenstra, University of California, Davis, organized the meeting. The following papers were discussed:

  • James E. Anderson, NBER and Boston College, and Eric van Wincoop, Federal Reserve Bank of New York, "Gravity with Gravitas: A Resolution of the Border Puzzle"
  • Peter Debaere, University of Texas, "Testing 'New' Trade Theory without Testing for Gravity: Reinterpreting the Evidence"
  • Donald R. Davis and David E. Weinstein, NBER and Columbia University, "A New Approach to Bilateral Trade Patterns and Balances"
  • Eckhard Janeba, NBER and University of Colorado, Boulder, "Global Corporations and Local Politics: A Theory of Voter Backlash"
  • Theo Eicher, University of Washington, Seattle, and Thomas Osang, Southern Methodist University, "Politics and Trade Policy: An Empirical Investigation"
  • James A. Levinsohn, NBER and University of Michigan, and Wendy Petropoulos, University of Michigan, "Creative Destruction or Just Plain Destruction? The U.S. Textile and Apparel Industries since 1972"
  • Robert C. Feenstra, NBER and University of California, Davis, and Gordon H. Hanson, NBER and University of Michigan, "Intermediaries in Entrepôt Trade: Hong Kong Reexports of Chinese Goods"
  • Bruce A. Blonigen, NBER and University of Oregon, and Ronald B. Davies, University of Oregon, "The Effect of Bilateral Tax Treaties on U.S. FDI Activity" (NBER Working Paper No. 7929)

The gravity model has been widely used to infer that such institutions as customs unions and exchange rate mechanisms have substantial effects on trade flows. However, Anderson and van Wincoop show that the gravity model as usually estimated does not correspond to the theory behind it. They solve the "border puzzle" by applying that theory differently and find that national borders reduce trade between the United States and Canada by about 40 percent, while reducing trade among other industrialized countries by about 30 percent.

Debaere revisits the ongoing debate about empirical support for monopolistic competition models in international trade. He refutes the claim that monopolistic competition models, which should explain trade primarily among developed countries, find empirical support among just any group of non-OECD countries. After reexamining Helpman's 1987 analysis of trade-to-GDP ratios and country similarity, Debaere reintroduces trade-to-GDP ratios in the test equations for these models; this has the advantage of providing a test of New Trade Theory that does not rely on gravity. The results therefore will not depend on the strong correlation between countries' size and their volume of trade.

The standard approach to bilateral trade patterns is the so-called "gravity model," which holds that bilateral trade volumes are proportional to the product of country GDPs and inversely proportional to bilateral distance. Though this model fits the data well, it has an important shortcoming: it posits that all traded goods are differentiated by source, predicting that trade volumes move smoothly with distance and size. If instead there are also homogeneous goods for which price is the principal determinant of bilateral trade patterns, then the standard gravity model needs to be supplemented with a model of bilateral trade in homogeneous goods. Davis and Weinstein implement such a dual approach to bilateral trade patterns for a sample of 61 countries and 30 industries. The empirical exercise identifies the countries and industries whose trade appears to be concentrated largely in homogeneous goods. Moreover, the authors find substantial improvements in predictions of bilateral trade patterns and balances.
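The benchmark gravity relationship described above can be written down directly; this is a minimal sketch with illustrative numbers, not the authors' specification or data:

```python
# Minimal sketch of the standard gravity benchmark described above
# (illustrative numbers only): bilateral trade is proportional to the
# product of the two countries' GDPs and inversely proportional to
# the distance between them. G is a hypothetical constant of
# proportionality, not a parameter from the paper.

def gravity_trade(gdp_i: float, gdp_j: float, distance: float,
                  G: float = 1.0) -> float:
    """Predicted bilateral trade volume: T_ij = G * Y_i * Y_j / D_ij."""
    return G * gdp_i * gdp_j / distance

# The model's smooth comparative statics: doubling either country's GDP
# doubles predicted trade; doubling distance halves it.
base = gravity_trade(100.0, 50.0, 10.0)
assert gravity_trade(200.0, 50.0, 10.0) == 2 * base
assert gravity_trade(100.0, 50.0, 20.0) == base / 2
```

The shortcoming the authors point to is visible here: predicted trade varies smoothly with size and distance, with no role for the price competition that governs homogeneous-goods trade.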

Host governments often display two types of behavior toward outside investors. Initially, they compete eagerly by offering subsidy packages, but often they reverse these policies later. In contrast to the literature that explains this behavior as a result of a hold-up problem, Janeba argues that these policy reversals are the result of a change in the policy choice or the identity of the policymaker. Voters disagree over the net benefits of attracting corporations because of a redistributional conflict. Economic shocks change the identity of the policymaker over time by affecting the number of people who support the corporation, the incentive of an opponent to become a candidate, and the opponent's probability of winning the election against a proponent. Janeba also shows that societies with more income inequality are less likely to attract outside investment.

Eicher and Osang examine the empirical relevance of three prominent endogenous protection models. Is protection for sale, or do altruistic policymakers worry about political support? They find that protection is indeed "for sale." However, the existence of lobbies matters, as does the relative size of the sectoral pro- and anti-protection contributions. The authors extend the previous tests of the Influence Driven approach (Grossman and Helpman, 1994), comparing its performance to well-specified alternatives. Using J-tests to compare the power of the models directly, they find significant misspecification in the Political Support Function approach. They cannot reject the null hypothesis of correct specification of the Influence Driven model, and they find evidence of some misspecification in the Tariff Function model (Findlay and Wellisz, 1982).

Levinsohn and Petropoulos use plant-level data to examine changes in the U.S. textile and apparel industries. They find that although industry-level evidence suggests that these industries are declining, some plants have experienced significant job creation, investment, and productivity gains. The authors make the case that these two industries are good examples of Schumpeterian Creative Destruction, but that this conclusion requires plant-level data, because the industry-level data paint a very pessimistic picture.

Feenstra and Hanson examine Hong Kong's role in intermediating trade between China and the rest of the world. Hong Kong distributes a large fraction of China's exports. Net of customs, insurance, and freight charges, reexports of Chinese goods are much more expensive when they leave Hong Kong than when they enter. Hong Kong markups on reexports of Chinese goods are higher for differentiated products, products with higher variance in export prices, products sent to China for further processing, and products shipped to countries that have less trade with China. These results are consistent with quality-sorting models of intermediation and with the outsourcing of production tasks from Hong Kong to China. Additional results suggest that Hong Kong traders price discriminate across destination markets and use transfer pricing to shift income from high-tax countries to Hong Kong.

The effects of bilateral tax treaties on foreign direct investment (FDI) activity have been largely unexplored, despite significant ongoing efforts by countries to negotiate and ratify these treaties. Blonigen and Davies estimate the impact of bilateral tax treaties using data on both U.S. inbound and outbound FDI from 1966 to 1992. Their estimates suggest a statistically significant average annual increase ranging from 2 to 8 percent of FDI activity for each additional year of a treaty, depending on the measure of FDI activity and the empirical framework the authors use. While treaties have an immediate impact on FDI flows, there is a substantial lag before treaty adoption positively affects FDI stocks and affiliate sales. Finally, the results suggest that bilateral tax treaties have an effect on investment beyond the withholding tax rates that they alter; this may include the commitment and risk-reduction effects of these treaties.

 

Market Microstructure

Members and guests of the NBER's Market Microstructure Project met in Cambridge on December 8. Bruce Lehmann, University of California, San Diego; Andrew W. Lo, NBER and MIT; Matthew Spiegel, Yale University; and Avanidhar Subrahmanyam, University of California, Los Angeles, organized the program. The following papers were discussed:

  • Michael J. Barclay, NBER and University of Rochester; Terrence Hendershott, University of Rochester; and D. Timothy McCormick, NASD, Inc., "Electronic Communications Networks and Market Quality"
  • Discussant: Robert Battalio, University of Notre Dame
  • Christine A. Parlour and Uday Rajan, Carnegie Mellon University, "Payment for Order Flow"
  • Discussant: Leslie Marx, University of Rochester
  • Charles M. Jones, Columbia University, "A Century of Stock Market Liquidity and Trading Costs"
  • Discussant: Joel Hasbrouck, New York University
  • Tarun Chordia, Emory University, Richard Roll, University of California, Los Angeles, and Avanidhar Subrahmanyam, "Market Liquidity and Trading Activity"
  • Discussant: Larry Glosten, Columbia University
  • Mark Peterson, Southern Illinois University, and Erik Sirri, Babson College, "Order Submission Strategy and the Curious Case of Marketable Limit Orders"
  • Discussant: Venkatesh Panchapagesan, Washington University

Barclay, Hendershott, and McCormick compare the execution quality of trades with market makers to trades on Electronic Communications Networks (ECNs). Average realized and effective spreads are smaller for ECN trades than for market-maker trades. The lower effective spreads for ECN trades are generated by lower quoted spreads at the time of the trade and by the fact that market makers give more price improvement to small trades than ECNs do. ECN trades are also more informative than trades with market makers. The authors show that increased trading on ECNs improves most measures of overall market quality. In the cross section, more ECN trading is associated with lower quoted, effective, and realized spreads, both overall and on trades with market makers. More ECN trading is also associated with less quoted depth.

Parlour and Rajan develop a dynamic model of price competition in broker and dealer markets. Competing market makers quote bid-ask spreads, and competing brokers choose a commission to be paid by an investor. Brokers also choose a routing strategy across market makers. Then, to minimize their total transaction costs, investors choose a broker. This environment changes the order mix and can make retail investors worse off. It leads to lower brokerage commissions but higher market-maker spreads, thereby increasing the total transactions cost for investors.

Jones assembles an annual time series of bid-ask spreads on Dow Jones stocks from 1898 to 1998, along with an annual estimate of the weighted-average commission rate for trading New York Stock Exchange stocks since 1925. Spreads gradually declined over the course of the century, punctuated by sharp rises during periods of market turmoil. Proportional one-way commissions rose dramatically to a peak of nearly one percent in the late 1960s and early 1970s, then fell sharply following commission deregulation in 1975. The sum of half-spreads and one-way commissions, multiplied by annual turnover, is an estimate of the annual proportional cost of aggregate equity trading. This cost drives a wedge between aggregate gross equity returns and net equity returns, but the wedge accounts for only a small part of the observed equity premium. All else equal, the gross equity premium is perhaps one percent lower today than it was early in the 1900s. Finally, Jones shows that these measures of liquidity -- spreads and turnover -- predict stock returns up to one year ahead. High spreads predict high stock returns; high turnover predicts low stock returns. This suggests that liquidity is an important determinant of conditional expected returns.
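The cost measure described above is simple arithmetic; here is a back-of-the-envelope version with hypothetical numbers, not figures from Jones's series:

```python
# Back-of-the-envelope version of the trading-cost measure described
# above (hypothetical inputs, not Jones's data): the annual proportional
# cost of aggregate equity trading is the sum of the half-spread and the
# one-way commission, multiplied by annual turnover.

def annual_trading_cost(half_spread: float, one_way_commission: float,
                        annual_turnover: float) -> float:
    """Proportional drag that trading costs place on gross equity returns."""
    return (half_spread + one_way_commission) * annual_turnover

# E.g., a 0.30% half-spread plus a 0.70% one-way commission, with shares
# turning over once per year on average, costs about 1% of returns a year.
cost = annual_trading_cost(0.003, 0.007, 1.0)
print(f"{cost:.1%}")  # 1.0%
```

With higher turnover the same spreads and commissions produce a proportionally larger wedge between gross and net returns, which is why both components enter the calculation multiplicatively.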

After studying spreads, depths, and trading activity for U.S. equities over an extended time sample, Chordia, Roll, and Subrahmanyam find that daily changes in market averages of liquidity and trading activity are highly volatile, negatively serially dependent, and influenced by a variety of factors. Liquidity plummets significantly in down markets but increases weakly in up markets. Trading activity increases in either up or down markets. Recent market volatility induces less trading activity and reduces spreads. There are strong day-of-the-week effects; Fridays are relatively sluggish and illiquid while Tuesdays are the opposite. Long- and short-term interest rates influence liquidity and trading activity. Depth and trading activity increase just prior to major macroeconomic announcements.

Peterson and Sirri compare the execution costs of market orders and marketable limit orders (that is, limit orders with the same trading priority as market orders) to provide empirical evidence on the order submission strategy of investors with similar commitments to trade. The results indicate that the unconditional trading costs of marketable limit orders are significantly greater than those of market orders. The authors attribute the difference in costs to a selection bias and show that the order submission strategy decision is based on prevailing market conditions, stock characteristics, and the type of investor. After correcting for the selection bias, their results suggest that the average trader chooses the order type with lower conditional trading costs.