Meetings: Winter, 2003
Economic Fluctuations and Growth
The NBER's Program on Economic Fluctuations and Growth met in Chicago on October 18. Organizers Daron Acemoglu, NBER and MIT, and Thomas J. Sargent, NBER and New York University, chose these papers for discussion:
- Stephen L. Parente and Rui Zhao, University of Illinois, "From Bad Institutions to Worse: The Role of History in Development"
- Discussant: Luigi Zingales, NBER and University of Chicago
- James Feyrer, Dartmouth College, "Demographics and Productivity"
- Discussant: Peter J. Klenow, Federal Reserve Bank of Minneapolis
- Steven J. Davis, NBER and University of Chicago; Felix Kubler, Stanford University; and Paul Willen, University of Chicago, "Borrowing Costs and the Demand for Equity over the Life Cycle"
- Discussant: M. Fatih Guvenen, University of Rochester
- John Ameriks, TIAA-CREF Institute, and Andrew Caplin and John Leahy, NBER and New York University, "Wealth Accumulation and the Propensity to Plan" (NBER Working Paper No. 8920)
- Discussant: Robert E. Hall, NBER and Stanford University
- Russell W. Cooper, NBER and Boston University, and Jonathan L. Willis, Federal Reserve Bank of Kansas City, "The Economics of Labor Adjustment: Mind the Gap" (NBER Working Paper No. 8527)
- Discussant: Eduardo M. Engel, NBER and Yale University
- Charles I. Jones, NBER and University of California, Berkeley, "Why Have Health Expenditures as a Share of GDP Risen So Much?"
- Discussant: Robert Coppell, University of Chicago
Parente and Zhao consider the role of history in the evolution of a country's institutions and in its development. In particular, they ask how a policy implemented at an economy's agrarian stage of development to protect the vested interests of landowners will affect a country's subsequent development. The authors find that such a policy negatively affects the economy's development path in two ways. First, it delays the formation of industry. Second, it facilitates the formation of industry insider groups that further slow the growth process by delaying the adoption of better technology and limiting its use to a smaller group of workers.
Feyrer studies the impact of workforce demographics on aggregate productivity. He finds that the age structure of the workforce significantly affects aggregate productivity. A large cohort of workers aged 40 to 49 has a large positive impact on productivity. Feyrer estimates that U.S. productivity growth in the 1970s was 2 percent lower than trend because of the entry of the baby boom into the workforce. As the baby boomers entered their forties in the 1980s and 1990s, productivity growth rebounded. Japanese demographics predict almost the opposite pattern, with high growth in the 1970s followed by low growth in the 1990s. Demographics also can explain part of the productivity divergence between rich and poor nations between 1960 and 1990.
Davis, Kubler, and Willen analyze consumption and portfolio behavior in a life-cycle model calibrated to U.S. data on income processes, borrowing costs, and returns on risk-free and equity securities. Even a modest wedge between borrowing costs and the risk-free return dramatically shrinks the demand for equity. When the cost of borrowing equals or exceeds the expected return on equity -- the relevant case according to the data -- households hold little or no equity during much of the life cycle. The model also implies that the correlation between consumption growth and equity returns is low at all ages, and that risk aversion estimates based on the standard excess return formulation of the consumption Euler Equation are greatly biased upward. The bias diminishes, but remains large, for "samples" of households with positive equity holdings.
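Schematically (a textbook formulation, not the authors' exact specification), the log-linearized excess-return Euler equation under power utility links the average excess return to the consumption-return covariance:

```latex
% Sketch in standard notation; the paper's specification may differ.
\[
\mathbb{E}\!\left[r^{e}_{t+1}\right] + \tfrac{1}{2}\sigma^{2}_{r^{e}}
  \;\approx\; \gamma \,\operatorname{cov}\!\left(\Delta c_{t+1},\, r^{e}_{t+1}\right),
\qquad
\hat{\gamma} \;=\;
  \frac{\mathbb{E}\!\left[r^{e}_{t+1}\right] + \tfrac{1}{2}\sigma^{2}_{r^{e}}}
       {\operatorname{cov}\!\left(\Delta c_{t+1},\, r^{e}_{t+1}\right)}.
\]
```

If the true covariance between consumption growth and equity returns is small at all ages, as the model implies, then any sizable average excess return pushes the estimated risk aversion coefficient upward, which is the bias the authors document.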
Why do similar households end up with very different levels of wealth? Ameriks, Caplin, and Leahy show that differences in attitudes and skills related to financial planning are a significant factor. The authors use new and unique survey data to assess these differences and to measure each household's "propensity to plan." They show that those with a higher propensity spend more time developing financial plans, and that this shift in planning effort is associated with increased wealth. The propensity to plan is not correlated with survey measures of the discount factor and the bequest motive, raising a question as to why it is associated with wealth accumulation. Part of the answer may lie in the very strong relationship found between the propensity to plan and the care with which households monitor their spending. It appears that this detailed monitoring activity helps households to save more and to accumulate more wealth.
Cooper and Willis study inferences about the dynamics of labor adjustment obtained by the "gap methodology" of Caballero and Engel [1993] and Caballero, Engel, and Haltiwanger [1997]. In that approach, the policy function for employment growth is assumed to depend on an unobservable gap between the target and current levels of employment. Using time-series observations, these studies reject the partial adjustment model and find that aggregate employment dynamics depend on the cross-sectional distribution of employment gaps. Thus, nonlinear adjustment at the plant level appears to have aggregate implications. The authors argue that this conclusion is not justified: these findings of nonlinearities in time-series data may reflect mismeasurement of the gaps rather than the aggregation of plant-level nonlinearities.
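In stylized notation (not the papers' exact specification), the gap approach writes employment growth at plant i as a function of its gap:

```latex
% Stylized statement of the gap methodology; notation is simplified here.
\[
\Delta e_{it} \;=\; \phi(z_{it})\, z_{it},
\qquad
z_{it} \;\equiv\; e^{*}_{it} - e_{i,t-1},
\]
```

where the target employment level is unobservable. Partial adjustment corresponds to a constant adjustment hazard, whereas a gap-dependent hazard makes aggregate employment growth depend on the whole cross-sectional distribution of gaps; Cooper and Willis's point is that a mismeasured gap can mimic that dependence in time-series data.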
Aggregate health expenditures as a share of GDP have risen in the United States from about 5 percent in 1960 to nearly 14 percent in recent years. Jones explores a simple explanation for this based on technological progress. Medical advances allow diseases to be cured today -- at a cost -- that could not be cured at any price in the past. When this technological progress is combined with a Medicare-like transfer program to pay the health expenses of the elderly, Jones's model can reproduce the basic facts of recent U.S. experience, including the large increase in the health expenditure share, a rise in life expectancy, and an increase in the size of health-related transfer payments as a share of GDP.
International Finance and Macroeconomics
The NBER's Program on International Finance and Macroeconomics met in Cambridge on October 18. Organizers Charles M. Engel, NBER and University of Wisconsin, and Linda Tesar, NBER and University of Michigan, chose the following papers to be discussed:
- Daron Acemoglu and Simon Johnson, NBER and MIT; James Robinson, University of California, Berkeley; and Yunyong Thaicharoen, Bank of Thailand, "Institutional Causes, Macroeconomic Symptoms: Volatility, Crises, and Growth" (NBER Working Paper No. 9124)
- Discussants: Ross Levine, NBER and University of Minnesota, and Romain Wacziarg, NBER and Stanford University
- Itay Goldstein, Duke University, and Assaf Razin, NBER and Cornell University, "Volatility of FDI and Portfolio Investments: The Role of Information, Liquidation Shocks, and Transparency"
- Discussants: Michael Klein, NBER and Tufts University, and Enrique Mendoza, NBER and Duke University
- Xavier Gabaix, NBER and MIT, "Eliminating Self-Fulfilling Liquidity Crises Through Fundamentals-Revealing Securities"
- Discussants: Nouriel Roubini, NBER and New York University, and Roberto Chang, Rutgers University
- Philippe Bacchetta, Study Center Gerzensee, and Eric van Wincoop, University of Virginia, "Can Information Dispersion Explain the Exchange Rate Disconnect Puzzle?"
- Discussants: Kenneth Froot, NBER and Harvard University, and Margarida Duarte, Federal Reserve Bank of Richmond
- Maurice Obstfeld, NBER and University of California, Berkeley; Jay Shambaugh, Dartmouth College; and Alan M. Taylor, NBER and University of California, Davis, "The Trilemma in History: Tradeoffs among Exchange Rates, Monetary Policies, and Capital Mobility"
- Discussants: Jeffrey Frankel, NBER and Harvard University, and Lars Svensson, NBER and Princeton University
Acemoglu and his co-authors document that countries that inherited more "extractive" institutions from their colonial past were more likely to experience high volatility and economic crises during the postwar period. More specifically, societies where European colonists faced high mortality rates more than one hundred years ago are much more volatile and prone to crises. Based on their previous work, the authors interpret this relationship as attributable to the causal effect of institutions on economic outcomes: Europeans did not settle in, and were more likely to set up extractive institutions in, areas where they faced high mortality. Once the authors control for the effect of institutions, macroeconomic policies appear to have only a minor impact on volatility and crises. This suggests that distortionary macroeconomic policies are more likely to be symptoms of underlying institutional problems rather than the main causes of economic volatility, and also that the effects of institutional differences on volatility do not appear to be mediated primarily by any of the standard macroeconomic variables. Instead, it appears that weak institutions cause volatility through a number of microeconomic, as well as macroeconomic, channels.
Goldstein and Razin develop a model of foreign direct investment (FDI) and foreign portfolio investment. FDI is characterized by a hands-on management style that enables the owner to obtain relatively refined information about the productivity of the firm. This superiority, relative to portfolio investment, comes with a cost: a firm owned by the relatively well-informed FDI investor has a low resale price because of asymmetric information between the owner and potential buyers. Consequently, investors with a higher (lower) probability of getting a liquidity shock that forces them to sell early will choose portfolio (direct) investments. This result can explain the greater volatility of portfolio investment relative to direct investment. The authors show that this pattern may become weaker as transparency in the capital market or corporate governance in the host economy improves.
Gabaix proposes a mechanism that realistically could be used to avoid self-fulfilling liquidity crises. It rests on the general idea of "fundamentals revealing" securities. Those securities give a market-based assessment of variables such as "future solvency of the country if it receives a bail-out at reasonable rates in the near future." Hence, they are likely to be more informative and robust to misspecification than contingent rates based on macroeconomic variables. In all variants and extensions Gabaix considers, self-fulfilling crises are eliminated by the mechanism.
Exchange rates tend to be disconnected from fundamentals over substantial periods of time. The recent "microstructure approach to exchange rates" has shed some important light on this puzzle: most exchange rate volatility at short to medium horizons is related to order flows. This suggests that investor heterogeneity is key to understanding exchange rate dynamics, in contrast to the common representative agent approach in macroeconomic models of exchange rate determination. Bacchetta and van Wincoop introduce investor heterogeneity into an otherwise standard monetary model of exchange rate determination. There is both heterogeneous information about fundamentals and non-fundamentals-based heterogeneity. The implications of the model are consistent with the evidence on the relationship between exchange rates and fundamentals: the exchange rate is disconnected from fundamentals in the short to medium run; over longer horizons the exchange rate is primarily driven by fundamentals; and exchange rate changes are a weak predictor of future fundamentals.
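To see where the heterogeneity enters (a schematic statement in simplified notation, not the paper's), recall that in the common-information monetary model the log exchange rate is a present value of fundamentals:

```latex
% Standard present-value formula for the common-information benchmark.
\[
s_t \;=\; (1-b)\sum_{k=0}^{\infty} b^{k}\, \mathbb{E}_t\!\left[f_{t+k}\right].
\]
```

With dispersed information, the single conditional expectation is replaced by average market expectations iterated over horizons ("higher-order" expectations), which need not obey the law of iterated expectations; this is what lets the exchange rate drift away from observed fundamentals at short horizons while remaining anchored to them in the long run.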
Recently, the political economy of macroeconomic policy choice has been guided by the simple prescriptions of the classic trilemma. Policymakers often speak of the hollowing out of exchange rate regimes in a world of unstoppable capital mobility, and policy autonomy and a fixed nominal anchor present an unpleasant dichotomy for emerging markets beset by the fear of floating. Yet the trilemma is not an uncontroversial maxim, and its empirical foundations deserve greater attention. Using new techniques to study the coherence of international interest rates at high frequency, along with an examination of capital mobility policies and a data-based classification of exchange rate regimes, Obstfeld and his co-authors look at the empirical content of the trilemma based on consistent data over more than 130 years. On the whole, the predictions of this influential adage are borne out by history.
Labor Studies
The NBER's Program on Labor Studies met in Cambridge on October 18. Program Director Richard B. Freeman of Harvard University and NBER Research Associate Lawrence F. Katz, also of Harvard, organized this program:
- Bruce D. Meyer, NBER and Northwestern University, and James X. Sullivan, University of Notre Dame, "Measuring the Well-Being of the Poor Using Income and Consumption"
- James J. Choi, Harvard University; David Laibson, NBER and Harvard University; Brigitte C. Madrian, NBER and University of Chicago; and Andrew Metrick, NBER and University of Pennsylvania, "Benign Paternalism and Active Decisions: A Natural Experiment in Savings"
- Michael Ostrovsky and Michael Schwarz, Harvard University, "Equilibrium Information Disclosure: Grade Inflation and Unraveling"
- Austan Goolsbee and Jonathan Guryan, NBER and University of Chicago, "The Impact of Internet Subsidies in Public Schools" (NBER Working Paper No. 9090)
- Christopher Avery and Caroline M. Hoxby, NBER and Harvard University, "Do and Should Financial Aid Packages Affect Students' College Choices?"
- Lance Lochner, NBER and University of Rochester, and Alexander Monge-Naranjo, Northwestern University, "Education and Default Incentives with Government Student Loan Programs"
Meyer and Sullivan examine the relative merits of consumption and income as measures of the material well-being of the poor. Consumption offers several advantages: it is a more direct measure of well-being than income and is less subject to underreporting bias. Measurement problems with income complicate analyses of changes in the well-being of the poor because the biases appear to have changed over time and are correlated with government policies. On the other hand, income is often easier to report and is available for much larger samples, providing greater power for testing hypotheses. The authors begin by considering the conceptual and pragmatic reasons why consumption might be better or worse than income. Then, using several empirical strategies, they examine the quality of income and consumption data. Although the evidence tends to favor consumption measures, these analyses suggest that both measures should be used to assess the material well-being of the poor.
Decision makers tend to blindly accept default options. In this paper, Choi, Laibson, Madrian, and Metrick identify an overlooked but practical alternative to defaults. They analyze the experience of a company that required its employees to either affirmatively elect to enroll in the company's 401(k) plan or affirmatively elect not to enroll in the company's 401(k) plan. Employees were told that they had to actively make a choice, one way or the other, with no default option. This "active decision" regime provides a neutral middle ground that avoids the implicit paternalism of a one-size-fits-all default election. The active decision approach to 401(k) enrollment yields participation rates that are up to 25 percentage points higher than those under a regime with the standard default of non-enrollment. Requiring employees to make an active 401(k) election also raises average saving rates and asset accumulation with no increase in the rate of attrition from the 401(k) plan.
Ostrovsky and Schwarz explore information disclosure in matching markets -- for example, how informative are the transcripts released by universities? The authors show that the same amount of information is disclosed in all equilibria. They then demonstrate that if universities disclose the equilibrium amount of information, unraveling does not occur; if they reveal more, then some students find it profitable to contract early.
In an effort to alleviate the perceived growth of a digital divide, the U.S. government enacted a major subsidy for Internet and communications investment in schools starting in 1998. The program subsidized spending by 20-90 percent, depending on school characteristics. Using new data on school technology usage in every school in California from 1996 to 2000, as well as application data from the E-Rate program, Goolsbee and Guryan show that the subsidy did succeed in significantly increasing Internet investment. The implied first-dollar price elasticity of demand for Internet investment is between -0.9 and -2.2, and the greatest sensitivity shows up among urban schools and schools with large black and Hispanic student populations. Rural and predominantly white and Asian schools show much less sensitivity. Overall, by the final year of the sample, there were about 66 percent more Internet classrooms than there would have been without the subsidy. A variety of test score results, however, make clear that the success of the E-Rate program, at least so far, has been restricted to the increase in access. The increase in Internet connections has had no measurable impact on any measure of student achievement.
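As a rough back-of-the-envelope illustration of what an elasticity of this size means (illustrative numbers only; the paper's estimates come from school-level regressions, and `subsidy_quantity_ratio` is a hypothetical helper, not the authors' code):

```python
# Under a constant-elasticity demand curve, a subsidy at rate s cuts the
# effective price of Internet investment to (1 - s), so the quantity
# demanded scales by (1 - s) ** elasticity.

def subsidy_quantity_ratio(subsidy_rate: float, elasticity: float) -> float:
    """Ratio of subsidized to unsubsidized investment levels."""
    return (1.0 - subsidy_rate) ** elasticity

# A 30 percent subsidy with elasticity -1.5 (inside the estimated
# -0.9 to -2.2 range) implies roughly 70 percent more investment,
# in the same ballpark as the 66 percent figure reported above.
print(subsidy_quantity_ratio(0.30, -1.5))  # -> about 1.71
```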
Every year, thousands of high school seniors with high college aptitude face complicated "menus" of scholarship and aid packages designed to affect their college choices. Using an original survey designed for this paper, Avery and Hoxby investigate whether students respond to their "menus" like rational investors in human capital. Whether these students invest efficiently matters not only because they are the equivalent of the "Fortune 500" for human capital, but also because they are likely to be the most analytic and far-sighted student investors. Avery and Hoxby find that the typical high-aptitude student chooses his college and responds to aid in a manner that is broadly consistent with rational investment. However, some serious anomalies exist: excessive response to loans and work-study; strong response to superficial aspects of a grant (such as whether it has a name); and response to a grant's share of college costs rather than its amount. Approximately 30 percent of high-aptitude students respond to aid in a way that apparently reduces their lifetime present value. While both a lack of sophistication or information and credit constraints could explain the behavior of this 30 percent of students, the weight of the evidence favors a lack of sophistication.
Lochner and Monge-Naranjo examine data on student loan default from the Baccalaureate and Beyond Survey. Their main findings are: 1) conditional on debt, the probability of default is declining in both predicted and actual post-school earnings; 2) conditional on earnings, the probability of default is increasing in debt; 3) default rates vary across undergraduate majors, but those differences disappear when controlling for debt and earnings; and, most interestingly, 4) there is a U-shaped relationship between ability and the probability of default, even after controlling for debt and earnings. The authors go on to develop a model of endogenous human capital investment and default that attempts to replicate these facts. Within the context of the model, they ask what types of heterogeneity and market shocks explain their empirical findings, and how consumption and investment under the current program differ from those under an optimal lending program. In contrast to the conventional wisdom, the model suggests that credit constraints do not necessarily imply under-investment in human capital, given the current lending system.
International Trade and Organizations
The NBER's Working Group on International Trade and Organizations met in Cambridge on October 26. The group's director, Gordon H. Hanson, of NBER and the University of California, San Diego, organized this program:
- Niko Matouschek, Northwestern University, and Paolo Ramezzana, University of Virginia, "Globalization, Market Efficiency, and the Organization of Firms"
- George P. Baker, NBER and Harvard University; Robert Gibbons, NBER and MIT; and Kevin J. Murphy, University of Southern California, "Relational Contracts and the Theory of the Firm"
- Gene M. Grossman, NBER and Princeton University, and Elhanan Helpman, NBER and Harvard University, "Managerial Incentives and the International Organization of Production"
- Robert C. Feenstra, NBER and University of California, Davis, and Gordon H. Hanson, "Ownership and Control in Outsourcing to China"
Matouschek and Ramezzana develop a matching model in which bilateral bargaining between agents is inefficient because of the presence of private information about the gains from trade. Globalization, by reducing market frictions, increases the probability that bilateral trade breaks down, but also reduces the costs of such breakdowns. Overall, a fall in market frictions reduces welfare if the initial level of market frictions is high; otherwise it increases welfare. Firms respond to the increased probability of trade breakdowns caused by globalization by adopting more flexible vertical structures that make it less costly for them to transact with third parties, for instance by switching from vertical integration to outsourcing, thereby further increasing the probability of trade breakdowns.
Relational contracts -- informal agreements sustained by the value of future relationships -- are prevalent within and between firms. Baker, Gibbons, and Murphy develop repeated-game models showing why and how relational contracts within firms (vertical integration) differ from those between firms (non-integration). They show that integration affects the parties' temptations to renege on a given relational contract, and hence affects the best relational contract the parties can sustain. In this sense, the integration decision can be an instrument in the service of the parties' relationship. The authors' approach also has implications for joint ventures, alliances, and networks, and for the role of management within and between firms.
Grossman and Helpman develop a model in which the heterogeneous firms in an industry choose their modes of organization and the location of their subsidiaries or suppliers. The authors assume that the principals of a firm are constrained in the nature of the contracts they can write with suppliers or employees. The main result concerns the sorting of firms with different productivity levels into different organizational forms. The authors use the model to examine the implications of falling trade costs for the relative prevalence of outsourcing and foreign direct investment.
Feenstra and Hanson examine the organization of export processing operations in China. During the 1990s, export processing accounted for over half of China's total exports. The authors take into account who owns the plant and who controls the inputs that the plant processes. To explain how parties organize export processing in China, they apply two influential theories of the firm, the Holmstrom-Milgrom model and the Grossman-Hart-Moore model. In the Holmstrom-Milgrom framework, it is optimal for a single party to own the processing factory and to control the inputs used in export processing. In the Grossman-Hart-Moore framework, the gains to giving one party factory ownership tend to be greater when that party lacks control over inputs. The authors find that multinational firms engaged in export processing in China tend to split factory ownership and input control with factory managers in China. Chinese ownership of export processing factories is more common when the foreign buyer (the multinational) controls the inputs than when the processing factory (the factory manager) controls the inputs. This evidence is consistent with Grossman-Hart-Moore but is strongly inconsistent with Holmstrom-Milgrom.
Health Care
The NBER's Program on Health Care met in Cambridge on November 1. Alan M. Garber, NBER and Stanford University, organized the meeting, at which these papers were discussed:
- Sherry A. Glied, NBER and Columbia University, and Adriana Lleras-Muney, NBER and Princeton University, "Health Inequality, Education, and Medical Innovation"
- Dana Goldman, Rand Corporation, and Darius Lakdawalla, NBER and Rand Corporation, "Health Disparities and Medical Technology"
- Janet Currie, NBER and University of California, Los Angeles, and Mark Stabile, NBER and University of Toronto, "Socioeconomic Status and Health: Why is the Relationship Stronger for Older Children?"
- Kate Bundorf, NBER and Stanford University, "The Effects of Offering Health Plan Choice within Employment-Based Purchasing Groups"
- Michael Chernew, NBER and University of Michigan; David M. Cutler, NBER and Harvard University; and Patricia Keenan, Harvard University, "Rising Health Care Costs and the Decline in Health Insurance Coverage"
Recent studies suggest that health inequalities across socioeconomic groups in the United States are large and have been growing. Glied and Lleras-Muney hypothesize that, as in other non-health contexts, this pattern occurs because more educated people benefit more than do the less educated from technological advances in medicine. They test this hypothesis by examining the evolution of mortality differentials and medical innovation over time. Glied and Lleras-Muney focus on cancer mortality and examine the incidence of cancer and survival rates conditional on disease incidence. Although there have not been great improvements in cancer survival overall, there has been substantial progress in the treatment of some forms of cancer. Glied and Lleras-Muney find that more educated people are better able to take advantage of new medical innovations.
Although better-educated people are healthier, the relationship between health and education varies substantially across groups and over time. Goldman and Lakdawalla ask how health disparities by education vary according to underlying health characteristics and market forces. Consumer theory suggests that improvements in the productivity of health care will tend to confer the most benefits on the heaviest users of health care. Since richer and more educated patients tend to use the most health care, new technologies -- by making more diseases treatable, reducing the price of health care, or improving its productivity -- will tend to widen disparities in health. On the other hand, by the same reasoning, new "timesaving" technologies can lessen health disparities by making patients' own time investments in health less important. These ideas explain several empirical patterns. First, compared to healthy people, the chronically ill exhibit wider disparities in health status, but the terminally ill exhibit narrower ones. Second, the advent of complex new HIV technologies increased immune function among HIV patients but seemed to benefit educated patients disproportionately. By contrast, new drugs for hypertension lowered health inequality by making investments in diet, exercise, and weight control much less important for hypertension control.
The well-known relationship between socioeconomic status (SES) and health exists in childhood and grows more pronounced with age. However, it is difficult to distinguish between two possible explanations: are low-SES children less able to respond to a given health shock, or do they experience more shocks? Using panel data on Canadian children, Currie and Stabile show that: 1) the cross-sectional relationship between low family income or low maternal education and health is very similar in Canada and the United States; and 2) high- and low-SES children recover from past health shocks to about the same degree. Hence, the relationship between SES and health must grow stronger over time mainly because low-SES children receive more negative health shocks. In addition, the authors examine the effect of health shocks on math and reading scores. They find that health shocks affect test scores and future health in very similar ways. These results suggest that public policy aimed at reducing SES-related health differentials in children should focus on reducing the incidence of health shocks as well as on reducing disparities in access to palliative care.
Over the last two decades, employers increasingly have offered workers a choice of health plans. Yet, relatively little is known about the effects of this trend on consumers. The availability of choice has the potential benefits of lowering the cost and increasing the quality of health care through greater competition among health plans, as well as allowing consumers to enroll in the type of coverage that most closely matches their preferences. On the other hand, concerns exist about the potential for adverse selection within employment-based purchasing groups in response to the availability of choice. Bundorf examines the effects of offering choice in employment-based purchasing groups on access to and the cost of employer-sponsored coverage. She hypothesizes that the introduction of managed care, specifically HMOs, facilitated the offering of choice within employment-based purchasing groups. She then uses geographic variation in the availability of HMOs in the 1990s as an instrument for offering health plan choice. She finds that greater availability of choice was associated with sizable reductions in the premiums of employer-sponsored coverage and modest increases in the proportion of workers covered by the plans offered by employers. However, a large portion of the premium reduction was attributable to a major shift from family to single coverage within employment-based purchasing groups. The results suggest that gains to employees from the availability of choice in the form of lower premiums and increased employee coverage came at the cost of reductions in dependent coverage.
Chernew, Cutler, and Keenan examine the determinants of declining insurance coverage during the 1990s, with a focus on the role of rising health care costs relative to other explanations, such as regulatory changes (including changing Medicaid rules and tax rates) or the rise of working spouses. They use the annual March supplements to the Current Population Survey (CPS) as the primary data source on insurance coverage and analyze coverage for two periods, 1989-91 and 1998-2000. Their models control for changes in population demographics and employment patterns. They estimate the impact of changing health care costs, tax subsidies, Medicaid reforms, other state regulatory reforms, rising spousal employment, and general economic conditions on declining coverage. The researchers find that the decline in coverage over the 1990s was not uniform across metropolitan statistical areas (MSAs), even after accounting for sampling variation in the coverage estimates. Cost growth also varied across MSAs. Preliminary estimates suggest that a large share (over 50 percent) of the decline in coverage is attributable to rising health care costs. This effect operates largely through the relationship between rising health care costs and private coverage; there is no statistically significant relationship between rising health care costs and public coverage. Apart from the general policy interest of the findings, the importance of premiums (as opposed to loads) suggests that renewed attention to how the price of insurance is conceptualized may be warranted.
Monetary Economics
The NBER's Program on Monetary Economics met in Cambridge on November 1. Organizers Simon Gilchrist, NBER and Boston University, and John V. Leahy, NBER and New York University, chose the following papers to be discussed:
- Daron Acemoglu and Simon Johnson, NBER and MIT; James Robinson, University of California, Berkeley; and Yunyong Thaicharoen, Bank of Thailand, "Institutional Causes, Macroeconomic Symptoms: Volatility, Crises and Growth" (NBER Working Paper No. 9124)
- Discussant: Stephen H. Haber, NBER and Stanford University
- (See "International Finance and Macroeconomics" earlier in this section for a summary of this paper)
- Timothy Cogley, Arizona State University, and Thomas J. Sargent, NBER and New York University, "Drifts and Volatilities: Monetary Policies and Outcomes in the Post-WWII U.S."
- Discussant: Mark W. Watson, NBER and Princeton University
- Jean Boivin, NBER and Columbia University, and Marc Giannoni, Columbia University, "Has Monetary Policy Become Less Powerful?"
- Discussant: Charles Evans, University of Chicago
- Ariel Burstein, University of Michigan, "Inflation and Output Dynamics with State Dependent Pricing Decisions"
- Discussant: Robert King, NBER and Boston University
- Peter N. Ireland, NBER and Boston College, "Technology Shocks in the New Keynesian Model"
- Discussant: Miles S. Kimball, NBER and University of Michigan
- N. Gregory Mankiw, NBER and Harvard University, and Ricardo Reis, Harvard University, "What Measure of Inflation Should a Central Bank Target?"
- Discussant: Pierpaolo Benigno, New York University
Cogley and Sargent present posterior densities for several objects relevant to designing and evaluating monetary policy, including: measures of inflation persistence; the natural rate of unemployment; a core rate of inflation; and "activism coefficients" for monetary policy rules. The posteriors imply that all of these objects vary substantially in post-WWII U.S. data. After adjusting for changes in volatility, the persistence of inflation increases during the 1970s, then falls in the 1980s and 1990s. Innovation variances change systematically, and are substantially larger in the late 1970s than at other times. Measures of uncertainty about core inflation and the degree of persistence covary positively. The authors use their posterior distributions to evaluate the power of several tests that have been used to test the null of time invariance of the autoregressive coefficients of vector autoregressions against the alternative of time-varying coefficients. They find that, with one exception, those tests have low power against the form of time variation captured by their model; the one exceptional test also rejects time invariance in the data.
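Schematically, the model is a vector autoregression whose coefficients drift as random walks and whose innovation variances themselves move over time (simplified notation, not the paper's exact specification):

```latex
% Stylized time-varying-parameter VAR with stochastic volatility.
\[
y_t \;=\; X_t'\,\theta_t + \varepsilon_t,
\qquad
\theta_t \;=\; \theta_{t-1} + v_t,
\qquad
\operatorname{var}(\varepsilon_t) \;=\; \Sigma_t .
\]
```

Time invariance is the null hypothesis that the drift terms are identically zero; the power question is whether standard tests can detect drift of the size implied by the posterior.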
Recent VAR studies have shown that monetary policy shocks have had a reduced effect on the economy since the beginning of the 1980s. Boivin and Giannoni first estimate an identified VAR over the pre- and post-1980 periods and corroborate the existing results, which suggest a stronger systematic response of monetary policy to the economy in the later period. They then present and estimate a fully specified model that replicates well the dynamic responses of output, inflation, and the federal funds rate to monetary policy shocks in both periods. Using the estimated structural model, they perform counterfactual experiments to quantify the relative importance of changes in monetary policy and changes in the private sector in explaining the reduced effect of monetary policy shocks. Their main finding is that changes in the systematic elements of monetary policy are consistent with a more stabilizing monetary policy in the post-1980 period and largely account for the reduced effect of unexpected exogenous interest rate shocks. Consequently, there is little evidence that monetary policy has become less powerful.
Burstein studies the effects of monetary policy on output and inflation in a dynamic general equilibrium model. He assumes that firms face a fixed cost of changing their pricing plans: once a firm pays this fixed cost, it can choose both its current price and a plan specifying an entire sequence of future prices. He finds that the model's predictions are qualitatively consistent with the conventional wisdom about the response of the economy to three widely studied monetary experiments. Allowing firms to choose a sequence of prices rather than a single price generates inflation inertia in the response of the economy to small changes in the growth rate of money. Allowing firms to choose when to change their pricing plan generates a non-linear response of inflation and output to small and large changes in the money growth rate. The non-linear solution method makes it possible to quantify the range of changes in the growth rate of money for which time-dependent models are a good approximation to state-dependent models. This approach also reveals that the model generates an asymmetric response of output and inflation to monetary expansions and contractions.
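In outline (a stylized statement of the firm's problem in notation chosen here, not the paper's), each period a firm compares the value of keeping its current pricing plan with the value of paying the fixed cost to adopt a new one:

```latex
% Stylized keep-vs-adjust problem; \kappa is the fixed cost of a new plan.
\[
V(s_t) \;=\; \max\Big\{\, V^{\mathrm{keep}}(s_t),\;\;
  \max_{\{p_{t+j}\}_{j \ge 0}} V^{\mathrm{adjust}}\!\big(s_t, \{p_{t+j}\}\big) \;-\; \kappa \,\Big\}.
\]
```

Because the adjust-or-wait margin responds to the state, small monetary shocks leave most plans in place, generating inertia, while large shocks trigger widespread plan revisions, which is the source of the non-linear responses described above.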
Ireland notes that in a New Keynesian model, technology and cost-push shocks compete as factors that stochastically shift the Phillips curve. A version of this model, estimated via maximum likelihood, points to the cost-push shock as far more important than the technology shock in explaining the behavior of output, inflation, and interest rates in the postwar U.S. data. These results weaken the links between the current generation of New Keynesian models and the real business cycle models from which they were originally derived; they also suggest that Federal Reserve officials often have faced difficult trade-offs in conducting monetary policy.
Mankiw and Reis first assume that a central bank commits itself to maintaining an inflation target and then ask what measure of the inflation rate the central bank should use if it wants to maximize economic stability. They show how the weight of a sector in the stability price index depends on the sector's characteristics, including size, cyclical sensitivity, sluggishness of price adjustment, and magnitude of sectoral shocks. When they calibrate the problem to U.S. data, one tentative conclusion is that the central bank should use a price index that gives substantial weight to the level of nominal wages.
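In schematic form (notation chosen here; the paper derives the weights from an explicit stability criterion), the target is a weighted average of sectoral inflation rates:

```latex
% Stylized stability price index; weights need not equal expenditure shares.
\[
\pi^{\star}_t \;=\; \sum_{i} w_i\, \pi_{it},
\qquad
\sum_{i} w_i = 1,
\]
```

with the weights chosen so that keeping the index on target minimizes the variance of economic activity rather than mirroring expenditure shares; sluggish, cyclically sensitive sectors, such as nominal wages, then receive disproportionate weight.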
Macroeconomics and Individual Decisionmaking
The NBER's Working Group on Macroeconomics and Individual Decisionmaking met in Cambridge on November 2. Organizers George A. Akerlof, University of California, Berkeley, and Robert J. Shiller, NBER and Yale University, chose these papers to discuss:
- Truman Bewley, Yale University, "An Interview Study of Price Setting"
- Discussant: Beth Ann Wilson, Federal Reserve Board of Governors
- Hanming Fang and Giuseppe Moscarini, Yale University, "Overconfidence, Morale, and Wage-Setting Policies"
- Edward L. Glaeser, NBER and Harvard University, "Political Economy of Hatred"
- Discussant: Vai-Lam Mui, University of Notre Dame
- Roland Benabou, NBER and Princeton University, and Jean Tirole, Institut d'Economie Industrielle, "Belief in a Just World and Redistributive Politics"
- Discussant: Bruce Sacerdote, NBER and Dartmouth College
- John Shea, NBER and University of Maryland, "Childhood Deprivation and Adult Wealth"
- Discussant: Laurence J. Kotlikoff, NBER and Boston University
- John Ameriks, TIAA-CREF Institute; Andrew Caplin, NBER and New York University; and John V. Leahy, NBER and New York University, "Wealth Accumulation and the Propensity to Plan" (NBER Working Paper No. 8920)
- Discussant: Annamaria Lusardi, Dartmouth College
Bewley summarized tentative conclusions from a survey currently being conducted on pricing in manufacturing, service, wholesale, and retail companies, as well as their intermediaries or brokers of various sorts. The survey asks how prices are set, how they are adjusted when costs or demand change, and whether any influences stand in the way of price changes. Bewley finds a surprising amount of price flexibility, both downward and upward. Downward price rigidity occurs primarily in retail sales of differentiated commodities with repeat customers and fluctuating costs, and in sales to businesses of small differentiated items. Other influences, such as the fear of starting a price war and long-term contracts, sometimes do cause downward price rigidity. Another conclusion from the survey is that most price setters are aware of the distinction between fixed and variable costs and use only variable costs to determine the lowest price they will accept before walking away from a deal. Another finding is that marginal costs at one production unit tend to be flat or to decline with output until a capacity constraint is reached, and these constraints usually are reached abruptly. Another of Bewley's themes is the near universality of price discrimination. Sellers vary the product in order to price discriminate, and the bargaining that occurs over many sales is a way to gather information from customers that is then used to extract as high a margin as possible from each of them. Still another theme is the importance of product differentiation. Even if there are few sellers, competitive pressures quickly erode margins on undifferentiated goods and services, known as "commodities" in business parlance. The prices of such "commodities" fluctuate freely. Bewley also explains why long-term, fixed-price contracts exist in such industries as coal, steel, and rail and truck transport, where the sales are to other companies. The contracts free sellers to invest in production by guaranteeing them a market, and they protect buyers against increases in the price of supplies. Buyers do not want price increases because those increases would put them at a disadvantage against a competitor protected by a long-term contract.
Psychologists consistently have documented people's tendency to be overconfident about their own ability. Fang and Moscarini interpret workers' confidence in their own skills as morale and investigate the implications of worker overconfidence for a firm's optimal wage-setting policies. In their model, a wage contract both provides incentives and conveys to workers the firm's opinion about their ability, hence affecting their morale. In numerical examples, the authors show that worker overconfidence is a necessary condition for the firm to prefer no wage differentiation, so as to preserve some workers' morale. A policy of wage non-differentiation itself breeds more worker overconfidence; thus, "overconfidence begets overconfidence." Furthermore, wage compression is more likely when aggregate productivity is low.
What determines the intensity and objects of hatred? Glaeser theorizes that hatred arises when people believe that out-groups are responsible for past and future crimes. However, the reality of past crimes has little to do with the level of hatred. Instead, hatred is the result of an equilibrium in which politicians supply stories of past atrocities in order to discredit the opposition and consumers listen to the stories. The supply of hatred is a function of the degree to which minorities gain or lose from particular party platforms; as such, groups that are particularly poor or rich are likely to be hated. Strong constitutions that limit the policy space and ban specific anti-minority policies in turn will limit hate. The demand for hatred falls if consumers interact regularly with the hated group, unless those interactions are primarily abusive. The power of hatred is so strong that its opponents motivate their supporters by "hating the haters."
Benabou and Tirole propose a model of why people may feel a need to believe in a just world; of why this need, and therefore the prevalence of the belief, may vary considerably across countries; and of its implications for redistributive policies (taxes and welfare payments) and the stigma borne by the poor. At the heart of the model are general-equilibrium interactions between each individual's psychologically based "demand" for a belief in a just world (or similar ideology) and the degree of redistribution chosen by the polity. Because of complementarities between individuals' desired beliefs or ideological choices, arising through the aggregate political outcome, there can be two equilibria. The first is characterized by a high prevalence of the "belief in a just world" among the population (a high degree of repression or denial of bad news about the world) and a relatively laissez-faire public policy; the two are mutually sustaining. The other equilibrium is characterized by more "realistic pessimism" (less collective denial, leading to a more cynical majority) and a more generous welfare state, which in turn reduces the need for individuals to invest in optimistic beliefs. In this equilibrium there is also less stigma attached to being poor, in the sense that fewer agents are likely to blame poverty on a lack of effort or willpower.
Popular mythology holds that children growing up during the Great Depression are more frugal than subsequent generations. Shea asks whether childhood deprivation raises adult thriftiness. Specifically, he examines whether children whose fathers are displaced from their jobs have more wealth as adults than children with otherwise similar fathers who do not experience job loss. He finds that father's displacement indeed does raise children's net worth. This impact is most pronounced for job losses occurring when the child is an adolescent, and is concentrated on vehicles, unsecured debt, and business equity. The impact of father's displacement does not appear to be driven by lower expected bequests, and Shea finds no evidence that father's displacement forced children to become economically mature at a younger age. His results are consistent with anecdotal evidence suggesting that childhood deprivation causes subsequent aversion to debt, and are also consistent with the popular idea that post-Baby Boom generations are more entrepreneurial and independent than previous generations because of their exposure to corporate downsizing and deindustrialization.
(See "Economic Fluctuations and Growth" earlier in this section for a summary of this paper.)
Public Economics
The NBER's Program on Public Economics met on November 7-8 in Cambridge. Organizer James M. Poterba, NBER and MIT, chose these papers for discussion:
- Austan D. Goolsbee and Jonathan Guryan, NBER and University of Chicago, "The Impact of Internet Subsidies in Public Schools" (NBER Working Paper No. 9090)
- Discussant: David Figlio, NBER and University of Florida
- Holger Sieg and Dennis Epple, NBER and Carnegie Mellon University, and Richard E. Romano, University of Florida, "Admission, Tuition, and Financial Aid Policies in the Market for Higher Education"
- Discussant: Patrick J. Bayer, Yale University
- Peter A. Diamond, NBER and MIT, "Optimal Tax Treatment of Private Contributions for Public Goods with and without Warm Glow Preferences"
- Discussant: Louis Kaplow, NBER and Harvard University
- William D. Nordhaus, NBER and Yale University, "After Kyoto: Alternative Mechanisms to Control Global Warming"
- Discussant: Emmanuel Saez, NBER and University of California, Berkeley
- Esther Duflo, NBER and MIT, and Emmanuel Saez, "The Role of Information and Social Interactions in Retirement Plan Decisions: Evidence from a Randomized Experiment"
- Discussant: Brigitte C. Madrian, NBER and University of Chicago
- Patric H. Hendershott, NBER and Aberdeen University Business School; Gwilym Pryce, University of Glasgow; and Michael White, University of Aberdeen, "Household Leverage and the Deductibility of Home Mortgage Interest: Evidence from U.K. House Borrowers" (NBER Working Paper No. 9207)
- Discussant: Todd M. Sinai, NBER and University of Pennsylvania
- Katherine Baicker, NBER and Dartmouth College, "The Budgetary Repercussions of Capital Convictions"
- Discussant: Arik Levinson, NBER and Georgetown University
In an effort to alleviate the perceived growth of a digital divide, the U.S. government enacted a major subsidy for Internet and communications investment in schools starting in 1998. Goolsbee and Guryan evaluate the effect of the subsidy -- known as the E-Rate -- on Internet investment in California public schools. The program subsidized spending by 20-90 percent, depending on school characteristics. Using new data on school technology usage in every school in California from 1996 to 2000, as well as application data from the E-Rate program, the authors find that the subsidy did succeed in significantly increasing Internet investment. The greatest sensitivity shows up among urban schools and schools with large black and Hispanic student populations. Rural and predominantly white and Asian schools show much less sensitivity. Overall, by the final year of the sample, there were about 66 percent more Internet classrooms than there would have been without the subsidy. The program accelerated the spread of the Internet by about four years. However, the success of the E-Rate program, at least so far, appears to be restricted to the increase in access. The increase in Internet connections has had no apparent impact on any measure of student achievement.
Epple, Romano, and Sieg present a general equilibrium model of the market for higher education. Their model simultaneously predicts student selection into institutions, financial aid, and educational outcomes. The model gives rise to a strict hierarchy of colleges that differ in the educational quality they provide to students. To evaluate the model, the authors develop an estimation strategy that accounts for the fact that important variables are likely to be measured with error. Using data collected by the National Center for Education Statistics and aggregate data from Peterson's and the NSF, they find that their model explains observed admission and price policies reasonably well. The findings also suggest that the market for higher education is quite competitive.
The United States relies on tax-favored private contributions as well as direct government expenditures to finance some public goods. From a political perspective, this approach shifts some decisionmaking from the legislative process to individual donors (and the managers of charitable organizations). From an economic perspective, it can be a useful part of optimal tax and expenditure policy. Diamond explores the latter issue, first in a model with standard preferences and then in a model with a "warm glow of giving" (Andreoni, 1990). In addition to deriving conditions that determine the optimal rate of subsidy to private provision, Diamond considers the pattern of optimal subsidization across earnings levels. The analysis of optimal taxation with warm-glow preferences is sensitive to which preferences are deemed relevant for social welfare evaluation. After deriving optimal rules under formulations of social welfare that do and do not include warm-glow utility, Diamond considers the choice of normative criterion. His paper focuses on private contributions under nonlinear income taxation. Like the earlier literature, it assumes that organizing private donations is costless while tax collection has a deadweight burden. Since private charitable fundraising is very far from costless, the paper is an exploration of economic mechanisms, not a direct guide to policy.
Nordhaus reviews different approaches to the political and economic control of global public goods, with global warming as the leading example. He compares quantity-oriented control mechanisms, like the Kyoto Protocol, with price-type control mechanisms, such as internationally harmonized carbon taxes. He focuses on such issues as performance under uncertainty, the volatility of induced carbon prices, the excess burden of taxation and regulation, accounting finagling, and ease of implementation. Nordhaus concludes that, although virtually all discussions of economic global public goods have analyzed quantitative approaches, price-type approaches are likely to be more effective and more efficient.
Duflo and Saez analyze a randomized experiment to shed light on the role of information and social interactions in employees' decisions to enroll in a Tax Deferred Account (TDA) retirement plan within a large university. The experiment encouraged a random sample of employees in a subset of departments to attend a benefits information fair organized by the university, promising a monetary reward for attendance. The encouragement more than quintupled the attendance rate of treated individuals relative to the control group, and tripled the attendance rate of untreated individuals within departments where some individuals were treated. TDA enrollment 5 and 11 months after the fair was significantly higher in departments where some individuals were treated than in departments where nobody was treated. However, the effect on TDA enrollment is almost as large for individuals in treated departments who did not receive the encouragement as for those who did. The authors provide three interpretations -- differential treatment effects, social network effects, and motivational reward effects -- to explain these results.
Hendershott, Pryce, and White analyze over 117,000 loans used to finance home purchases originated in the United Kingdom during the 1988-91 and 1995-98 periods. They first estimate whether a household's loan exceeds the £30,000 deductibility ceiling and then construct debt tax penalty variables that explain household loan-to-value ratios (LTVs) on these loans. The penalty variables depend on the predicted probability of having a loan that exceeds the ceiling, the market mortgage rate, and exogenous household-specific tax rates. From these results the authors estimate the impact of removing deductibility on initial LTVs in the United Kingdom and on the weighted average cost of capital for owner-occupied housing. Removing deductibility is estimated to reduce initial LTVs by about 30 percent, which mitigates the rise in the weighted average cost of capital; the reduction varies with household age, loan size (above or below the £30,000 limit), and tax bracket.
Control of public spending and revenues is increasingly being left to states and localities. In order to understand the consequences of such a movement on the distribution of social spending, it is necessary to understand how fiscal distress will affect state and local budgets. Baicker exploits the large and unexpected negative shock to county budgets imposed by the presence of capital crime trials, first to understand the real incidence of the cost of capital convictions, and second to uncover the effects of local fiscal distress on the level and distribution of public spending and revenues. She shows that these trials are quite costly relative to county budgets, and that the costs are borne primarily by increasing taxes (although perhaps in part by decreases in spending on police and highways). The results highlight the vulnerability of county budgets to fiscal shocks: each trial causes an increase in county spending of more than $2 million, implying an increase of more than $6 billion in both expenditures and revenues between 1982 and 1997. Using these trials as a source of exogenous variation to examine interjurisdictional spillovers, she finds significant spillovers of both spending and revenues between counties.
Asset Pricing
The NBER's Program on Asset Pricing met in Cambridge on November 8. John H. Cochrane, NBER and University of Chicago, and Jonathan Lewellen, NBER and MIT, organized this program:
- John Y. Campbell and Tuomo Vuolteenaho, NBER and Harvard University, "Bad Beta, Good Beta"
- Discussant: Jay A. Shanken, NBER and Emory University
- Jing-Zhi Huang, Pennsylvania State University, and Ming Huang, Stanford University, "How much of the Corporate-Treasury Yield Spread is Due to Credit Risk? A New Calibration Approach"
- Discussant: Jun Pan, MIT
- Peter M. DeMarzo and Ilan Kremer, Stanford University, and Ron Kaniel, University of Texas, Austin, "Diversification as a Public Good: Community Effects in Portfolio Choice"
- Discussant: Stephen Shore, Harvard University
- Jonathan B. Berk, NBER and University of California, Berkeley, and Richard C. Green, Carnegie Mellon University, "Mutual Fund Flows and Performance in Rational Markets"
- Discussant: Harrison Hong, Stanford University
- Bernard Dumas, NBER and INSEAD, and Pascal Maenhout, INSEAD, "A Central-Planning Approach to Dynamic Incomplete-Market Equilibrium"
- Discussant: Deborah J. Lucas, NBER and Northwestern University
- Eli Ofek and Matthew Richardson, New York University, and Robert F. Whitelaw, NBER and New York University, "Limited Arbitrage and Short Sales Restrictions: Evidence From the Options Market"
- Discussant: Owen Lamont, NBER and University of Chicago
Campbell and Vuolteenaho explain the size and value "anomalies" in stock returns using an economically motivated two-beta model. They break the beta of a stock with the market portfolio into two components, one reflecting news about the market's future cash flows and one reflecting news about the market's discount rates. Intertemporal asset pricing theory suggests that the former should have a higher price of risk; thus beta, like cholesterol, comes in "bad" and "good" varieties. The authors find that value stocks and small stocks have considerably higher cash-flow betas than growth stocks and large stocks, and this can explain their higher average returns. The post-1963 negative Capital Asset Pricing Model alphas of growth stocks are explained by the fact that their betas are predominantly of the good variety.
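In the notation of standard log-linear return decompositions (a sketch of the setup, not necessarily the authors' exact formulation), the unexpected market return splits into cash-flow news and discount-rate news, and the market beta splits correspondingly:

```latex
% Sketch of the two-beta decomposition; the notation here is assumed.
\[
  r_{M,t+1} - \mathrm{E}_{t}\,r_{M,t+1} = N_{CF,t+1} - N_{DR,t+1},
\]
\[
  \beta_{i,CF} = \frac{\mathrm{Cov}(r_{i},\,N_{CF})}{\mathrm{Var}(r_{M})},
  \qquad
  \beta_{i,DR} = \frac{\mathrm{Cov}(r_{i},\,-N_{DR})}{\mathrm{Var}(r_{M})},
  \qquad
  \beta_{i,M} = \beta_{i,CF} + \beta_{i,DR}.
\]
```

Cash-flow news lowers wealth permanently, while discount-rate news is partly offset by better future investment opportunities; that asymmetry is why intertemporal theory assigns the cash-flow ("bad") beta a higher price of risk than the discount-rate ("good") beta.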
Huang and Huang show that credit risk accounts for only a small fraction of the observed corporate-Treasury yield spreads for investment grade bonds of all maturities, with the fraction smaller for bonds of shorter maturities; it accounts for a much higher fraction of yield spreads for junk bonds. This conclusion is robust across a wide class of structural models -- both existing and new ones -- that incorporate many different economic considerations. The authors obtain such consistent results by calibrating each of the models to be consistent with data on historical default loss experience. Although in theory these models can generate a very wide range of credit risk premiums, they predict fairly similar premiums under empirically reasonable parameter choices, which accounts for the robustness of the conclusion.
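To see the flavor of the exercise, the sketch below works through the Merton (1974) model, one simple member of the class of structural models at issue; the function and all parameter values are illustrative assumptions, not the authors' calibration. For a low-leverage firm the model-implied spread is tiny, echoing the finding that credit risk explains little of observed investment-grade spreads.

```python
# A minimal, illustrative Merton (1974) structural model of credit risk.
# All parameter values below are hypothetical, chosen only for illustration.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_credit_spread(V: float, K: float, r: float,
                         sigma: float, T: float) -> float:
    """Spread over the riskless rate r on a zero-coupon bond with face K
    maturing at T, issued by a firm with asset value V and asset
    volatility sigma. Equity is valued as a call option on firm assets."""
    d1 = (log(V / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    equity = V * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    debt = V - equity                  # debt holders own the rest of the firm
    bond_yield = -log(debt / K) / T    # continuously compounded yield
    return bond_yield - r

# Hypothetical low-leverage, investment-grade firm: the implied spread
# is only a couple of dozen basis points.
spread = merton_credit_spread(V=100.0, K=60.0, r=0.05, sigma=0.20, T=10.0)
print(round(spread * 1e4), "bps")
```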
DeMarzo, Kaniel, and Kremer examine the impact of community interaction on risk sharing, investments, and consumption. They do this using a rational general equilibrium model in which agents care only about their personal consumption. The authors consider a setting in which, because of borrowing constraints, individuals who are endowed with local resources under-participate in financial markets. As a result, individuals "compete" for local resources through their portfolio choices. Even with complete financial markets (in the sense of spanning) and no aggregate risk, agents herd into risky portfolios in all stable equilibria. This yields a Pareto-dominated outcome, as agents introduce "community" risk that does not follow from fundamentals. The authors show that when some agents are behaviorally biased, a unique equilibrium exists in which rational agents choose even more extreme portfolios and amplify the behavioral effect. This can rationalize the behavioral bias, since following it becomes optimal. A similar effect will result if some investors cannot completely diversify their holdings (for control or moral hazard reasons) and are biased towards a certain sector. Finally, the authors show that equilibrium Sharpe ratios can be high, even absent aggregate consumption risk. Also, from a welfare perspective, diversification has "public good" features. This provides a potential justification for policies that subsidize diversified holdings and limit trade in risky securities.
Berk and Green develop a simple rational model of active portfolio management that provides a natural benchmark against which to evaluate the observed relationship between returns and fund flows. They show that many effects widely regarded as anomalous are consistent with this simple explanation. In the model, investments with active managers do not outperform passive benchmarks because of the competitive market for capital provision, combined with decreasing returns to scale in active portfolio management. Consequently, past performance cannot be used to predict future returns, or to infer the average skill level of active managers. The lack of persistence in active manager returns does not imply that differential ability across managers is nonexistent or unrewarded, that gathering information about performance is socially wasteful, or that chasing performance is pointless. A strong relationship between past performance and the flow of funds exists in this model; indeed, it is the market mechanism that ensures that no predictability in performance exists. Calibrating the model to fund flows and survivorship rates, the authors find that these features of the data are consistent with the vast majority (80 percent) of active managers having at least enough skill to make back their fees.
Dumas and Maenhout show that a central planner with two selves, or two "pseudo welfare functions," is sufficient to deliver the market equilibrium that prevails among any (finite) number of heterogeneous individual agents acting competitively in an incomplete financial market. Furthermore, the authors are able to demonstrate a recursive formulation of the two-central planner problem. In that formulation, every aspect of the economy can be derived one step at a time, by a process of backward induction, as in dynamic programming.
Ofek, Richardson, and Whitelaw empirically investigate the well-known put-call parity no-arbitrage relation in the presence of short sale restrictions. They use a new and comprehensive sample of options on individual stocks, in combination with a measure of the cost and difficulty of short selling: the spread between the rate a short-seller earns on the proceeds from the sale relative to the normal rate (the rebate rate spread). They find statistically and economically significant violations of put-call parity that are strongly related to the rebate rate spread. Stocks with negative rebate rate spreads exhibit prices in the stock market that are up to 7.5 percent greater than those implied in the options market (for the extreme 1 percent tail). Even after accounting for transaction costs in the options markets, these violations persist and their magnitude appears to be related to the general level of valuations in the stock market. Moreover, the extent of violations of put-call parity and the rebate rate spread for individual stocks are significant predictors of future stock returns. For example, cumulative abnormal returns, net of borrowing costs, over a 2½ year sample period can exceed 70 percent. It is difficult to reconcile these results with rational models of investor behavior, and, in fact, they are consistent with the presence of over-optimistic irrational investors in the market for some individual securities.
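The no-arbitrage relation at issue can be written compactly; the violation measure below is one natural choice, stated as an illustration rather than as the authors' exact definition. For European options on a non-dividend-paying stock:

```latex
% Put-call parity and an option-implied stock price (dividends ignored).
\[
  C - P = S - K e^{-rT}
  \qquad\Longrightarrow\qquad
  S^{*} = C - P + K e^{-rT},
\]
\[
  \text{violation} = \ln\!\bigl(S / S^{*}\bigr),
\]
```

where $C$ and $P$ are the call and put prices at strike $K$ and maturity $T$, $r$ is the riskless rate, and $S^{*}$ is the stock price implied by the options market. When short selling is costly, $S$ can remain above $S^{*}$ because the trade that would close the gap requires shorting the stock.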
Behavioral Finance
The NBER's Working Group on Behavioral Finance met in Cambridge on November 9. Robert J. Shiller, NBER and Yale University, and Richard H. Thaler, NBER and University of Chicago, organized the meeting. The program was:
- Markus K. Brunnermeier, Princeton University, and Stefan Nagel, London Business School, "Arbitrage at its Limits: Hedge Funds and the Technology Bubble"
- Discussant: Cliff Asness, AQR Capital
- Jose Scheinkman and Wei Xiong, Princeton University, "Overconfidence and Speculative Bubbles"
- Discussant: Owen Lamont, NBER and University of Chicago
- Malcolm P. Baker, Harvard University, and Jeremy C. Stein, NBER and Harvard University, "Market Liquidity as a Sentiment Indicator" (NBER Working Paper No. 8816)
- Discussant: Dimitri Vayanos, NBER and MIT
- Massimo Massa, INSEAD, and Andrei Simonov, Stockholm School of Economics, "Behavioral Biases and Investment"
- Discussant: Terrance Odean, University of California, Berkeley
- Nicholas Barberis and Richard H. Thaler, NBER and University of Chicago, and Ming Huang, Stanford University, "Individual Preferences, Monetary Gambles, and the Equity Premium: The Case for Narrow Framing"
- Discussant: John Y. Campbell, Harvard University
- Malcolm P. Baker and Jeffrey Wurgler, New York University, "A Catering Theory of Dividends"
- Discussant: Sendhil Mullainathan, NBER and MIT
Classical finance theory maintains that rational arbitrageurs would find it optimal to attack price bubbles and thus exert a correcting force on prices. Brunnermeier and Nagel examine the stock holdings of hedge funds during the time of the technology bubble on the NASDAQ. Counter to the classical view, they find that hedge fund portfolios were heavily tilted towards (overpriced) technology stocks. This does not seem to be the result of unawareness of the bubble: at the individual stock level, these investments were well timed. On average, hedge funds started to reduce their exposure in the quarter prior to the price peaks of individual technology stocks, and their overall stock holdings in the technology segment outperformed characteristics-matched benchmarks. These findings are consistent with models in which arbitrage is limited because arbitrageurs face constraints and cannot coordinate the timing of their strategies, and in which investor sentiment is predictable. The results also suggest that frictions such as short-sales constraints are not sufficient to explain why the presence of sophisticated investors failed to contain the bubble.
Motivated by the behavior of internet stock prices in 1998-2000, Scheinkman and Xiong present a continuous-time equilibrium model of bubbles in which overconfidence generates disagreements among agents regarding asset fundamentals. With short-sale constraints, an asset owner has an option to sell the asset to other over-confident agents who have more optimistic beliefs. This re-sale option has a recursive structure; that is, a buyer of the asset gets the option to resell it. This generates a significant bubble component in asset prices, even when small differences of beliefs are sufficient to generate a trade. Agents pay prices that exceed their own valuation of future dividends because they believe that in the future they will find a buyer willing to pay even more. The model generates prices that are above fundamentals, excessive trading, excess volatility, and predictable returns. However, the analysis shows that while a Tobin tax can substantially reduce speculative trading when transaction costs are small, it has only a limited impact on the size of the bubble or on price volatility. The authors give an example in which the price of a subsidiary is larger than that of its parent firm. Finally, they show how overconfidence can justify the use of corporate strategies that would not be rewarding in a "rational" environment.
Baker and Stein build a model that helps to explain why increases in liquidity -- such as lower bid-ask spreads, a lower price impact of trade, or higher turnover -- predict lower subsequent returns in both firm-level and aggregate data. The model features a class of irrational investors who underreact to the information contained in order flow, thereby boosting liquidity. In the presence of short-sales constraints, high liquidity is a symptom of the fact that the market is dominated by these irrational investors, and hence is overvalued. This theory also can explain how managers might successfully time the market for seasoned equity offerings (SEOs), simply by following a rule of thumb that involves issuing when the SEO market is particularly liquid. The authors find that aggregate measures of equity issuance and share turnover are highly correlated. Still, in a multiple regression, both have incremental predictive power for future equal-weighted market returns.
Massa and Simonov use a new and unique dataset to investigate how investors react to prior gains/losses and the so-called "familiarity" bias. They distinguish between different behavioral theories (loss aversion, house-money effect, mental accounting) and between behavioral and rational hypotheses (pure familiarity and information-based familiarity). They show that, on a yearly horizon, investors react to previous gains/losses according to the house-money effect. There is no evidence of narrow accounting: investors consider wealth in its entirety, and risk taking in the financial market is affected by gains/losses in overall wealth, as well as by financial and real estate wealth. In terms of individual stock picking, the authors' evidence favors the information-based theory and shows that familiarity can be considered a proxy for the availability of information, as opposed to a behavioral heuristic.
Many different preference specifications have been proposed as a way of addressing the equity premium. How should we pick between them? Barberis, Thaler, and Huang suggest one possible metric, namely these utility functions' ability to explain other evidence on attitudes toward risk. They consider some simple observations about attitudes toward monetary gambles with just two outcomes and show that the vast majority of utility functions used in asset pricing have difficulty explaining these observations. However, utility functions with two features -- first-order risk aversion and narrow framing -- can explain them easily. The authors argue that, by this metric at least, such utility functions may be very attractive to financial economists: they can generate substantial equity premiums and, at the same time, make sensible predictions about attitudes toward monetary gambles.
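A stylized utility specification with both features might look as follows (an illustrative form, not the authors' exact model):

```latex
% Illustrative preferences with narrow framing and first-order risk aversion.
\[
  U = \mathrm{E}\,[u(W)] \;+\; b_{0}\,\mathrm{E}\,[v(G)],
  \qquad
  v(x) =
  \begin{cases}
    x, & x \ge 0,\\
    \lambda x, & x < 0,
  \end{cases}
  \qquad \lambda > 1,
\]
```

where $W$ is total wealth and $G$ is the gain or loss on the monetary gamble evaluated in isolation. The second term captures narrow framing, and the kink at zero delivers first-order risk aversion, so the agent dislikes even small actuarially fair gambles, which in equilibrium can generate a substantial equity premium.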
Baker and Wurgler develop a theory in which the decision to pay dividends is driven by investor demand. Managers cater to investors by paying dividends when investors put a stock price premium on payers and not paying when investors prefer nonpayers. To test this prediction, the authors construct four time-series measures of the investor demand for dividend payers. By each measure, nonpayers initiate dividends when demand for payers is high. By some measures, payers omit dividends when demand is low. Further analysis confirms that the results are better explained by the catering theory than by other theories of dividends.
Program on Education
The NBER's Program on Education met in Cambridge on November 14. Program Director Caroline M. Hoxby of Harvard University organized the meeting. These papers were discussed:
- Benjamin Scafidi and David Sjoquist, Georgia State University, and Todd R. Stinebrickner, University of Western Ontario, "Where Do Teachers Go?"
- Discussant: Richard Murnane, NBER and Harvard University
- Eric A. Hanushek, NBER and Stanford University; John F. Kain, University of Texas at Dallas; and Steven G. Rivkin, NBER and Amherst College, "The Impact of Charter Schools on Academic Achievement"
- Discussant: Miguel Urquiola, Cornell University
- Martin R. West, Harvard University, and Ludger Wößmann, University of Kiel, "Class-Size Effects in School Systems Around the World: Evidence from Between Grade Variation in TIMSS"
- Discussant: Joshua Angrist, NBER and MIT
- Sarah Simmons and Sarah Turner, University of Virginia, "Taking Classes and Taking Care of the Kids: Do Childcare Benefits Increase Collegiate Attainment?"
- Discussant: Cecilia E. Rouse, NBER and Princeton University
- Christopher Avery, Harvard University; Mark Glickman, Boston University; Caroline Hoxby; and Andrew Metrick, NBER and University of Pennsylvania, "A Revealed Preference Ranking of American Colleges"
- Discussant: Bruce Sacerdote, NBER and Dartmouth College
- Eric Bettinger, Case Western Reserve University, and Bridget T. Long, Harvard University, "The Plight of Underprepared Students in Higher Education: The Role and Effect of Remedial Education"
- Discussant: Brian Jacob, Harvard University
Using new and unique administrative data from Georgia, Scafidi, Sjoquist, and Stinebrickner analyze transitions from full-time elementary and high school teaching. Contrary to public perception, they find that new female teachers are not leaving the teaching profession for high-paying jobs in alternative occupations. In their sample of female teachers, only 3.8 percent of elementary school teachers and 5.4 percent of high school teachers who left full-time teaching took a non-education sector job in Georgia that paid more than the state minimum teaching wage. This implies that less than one percent of new female teachers leave full-time teaching for a relatively high-paying non-education job in Georgia after the first year of teaching. Other groups of teachers, including males, also have low rates of exit to relatively high-paying occupations. Given that these results are in direct contrast to public discussion of the issue, the authors consult the 1994-95 Teacher Followup Survey in an effort to provide some independent validation of their conclusions. While this national survey of teachers does not provide direct evidence on what individuals actually do when they leave teaching, its circumstantial evidence in the form of motives and anticipated activities is strongly consistent with their results.
Charter schools have become a very popular instrument for reforming public schools because they expand choices, facilitate local innovation, and provide incentives for the regular public schools while remaining under public control. Despite their conceptual appeal, little is known about their performance. Hanushek, Kain, and Rivkin provide a preliminary investigation of the quality of charter schools in Texas. They find that average school quality -- measured by gains in student achievement in math and science for elementary students -- in the charter sector is not significantly different from that in regular public schools after the initial start-up period. Furthermore, the substantial variation in estimated school quality within the charter sector is quite similar to that of regular public schools. Perhaps most important, parents' decisions to exit a school appear to be much more sensitive to education quality in the charter sector than in regular public schools, consistent with the notion that the introduction of charter schools substantially reduces the transactions costs of switching schools.
Previous studies of class-size effects have been limited to individual countries, most often the United States. Wößmann and West use data from the Third International Mathematics and Science Study (TIMSS) to obtain comparable estimates of the effect of class size on student performance for 18 countries. To identify causal class-size effects, the authors compare the relative performance of students in adjacent grades with different average class sizes within the same school, thereby eliminating biases caused by the sorting of students between and within schools; TIMSS actually tested classes in two adjacent grades within each sampled school. Variation in average class sizes between grades presumably is driven by natural fluctuations in school enrollment, and should be exogenous to student performance. The results indicate that smaller classes have a sizable beneficial effect on achievement in Greece and Iceland. However, this cannot be interpreted as a general finding for all school systems: the possibility of even small effects is rejected in six countries, and the possibility of large beneficial effects is rejected in an additional five. Comparing the education systems in these three groups of countries indicates that noteworthy class-size effects appear in countries with relatively low teacher salaries. This suggests an interpretation based on educational production: the performance of poorly paid, and presumably less capable, teachers may deteriorate when they are faced with additional students, which would explain the existence of class-size effects in countries with low teacher salaries. Conversely, highly paid teachers appear capable of teaching well regardless of class size, at least within the range of variation observed from year to year within the same school.
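A stylized version of this identification strategy (with assumed notation) is a school-fixed-effects regression estimated across adjacent grades:

```latex
% Stylized between-grade estimating equation; notation is assumed.
\[
  y_{igs} = \beta\, CS_{igs} + \mu_{s} + \gamma_{g} + \varepsilon_{igs},
\]
```

where $y_{igs}$ is the test score of student $i$ in grade $g$ of school $s$, $CS$ is class size, $\mu_{s}$ is a school fixed effect that absorbs sorting across schools, and $\gamma_{g}$ is a grade effect. Because sorting within a grade can make a student's own class size endogenous, actual class size can be instrumented with the grade's average class size in the school, which moves with natural enrollment fluctuations.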
College participation among non-traditional students -- that is, students who are not recent high school graduates, and who are often older and have dependents -- has increased markedly in the last two decades, with over half of Pell grants awarded to these "independent" students. Yet the extent to which the availability of financial aid changes the enrollment and attainment of non-traditional students has received little attention in the research literature. For women with children, particularly those in disadvantaged circumstances, direct college costs combined with the need to finance childcare may impede investment in skills that would lead to long-run increases in economic well-being. In the academic year beginning in 1988, up to $1000 in childcare expenditures for families with children was included in the cost of attendance used to determine Pell Grant amounts. For many women with children, this change led to a substantial increase in Pell grant eligibility. Simmons and Turner use data from the Current Population Survey and the National Longitudinal Survey of Youth to examine how introducing a childcare cost allowance into the Pell formula in 1988 affected maternal enrollment, attainment, and employment. They find that this program change had a substantial positive impact on enrollment of women with children, but very little effect on attainment or persistence. This finding raises significant questions about how colleges and universities serve these nontraditional students after their initial enrollment.
Avery, Glickman, Hoxby, and Metrick construct a ranking of U.S. colleges and universities based on students' revealed preferences. That is, they show which colleges students prefer when they are able to choose among alternatives. Students should be interested in a revealed preference ranking for two reasons: the ranking shows students where their most talented peers are concentrated; and, because the ranking reflects information gathered by many students, it is a more reliable indicator than the observations of any individual student. The authors use data from a survey of 3,240 highly meritorious students that was conducted specifically for this study. Although they account for the potentially confounding effects of tuition, financial aid packages, alumni preferences, and other preferences, these factors turn out not to affect the ranking significantly. The authors develop a statistical model that is a logical extension of the models used for ranking players in tournaments, such as chess and tennis. When a student makes his matriculation decision among colleges that have admitted him, he chooses which college "wins" in head-to-head competition. The model exploits the information contained in thousands of these "wins" and "losses." Simultaneously, the authors use information from colleges' admissions decisions, which implicitly rank students.
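A minimal sketch of the tournament idea behind such a ranking is a Bradley-Terry model fit to head-to-head matriculation "wins." The code below uses toy, hypothetical data and a deliberately simple estimator; the authors' statistical model is a richer extension of this basic setup.

```python
# A minimal Bradley-Terry-style sketch of the tournament idea on toy data.
# College names and "duel" outcomes are hypothetical.
import numpy as np

# Each pair (winner, loser) records a student admitted to both colleges
# who matriculated at the first one.
duels = [("A", "B"), ("A", "B"), ("B", "C"), ("A", "C"), ("C", "B")]
colleges = sorted({c for duel in duels for c in duel})
idx = {c: i for i, c in enumerate(colleges)}
strength = np.zeros(len(colleges))  # log-strengths, initialized equal

# Gradient ascent on the Bradley-Terry log-likelihood:
# P(w beats l) = 1 / (1 + exp(strength[l] - strength[w])).
for _ in range(500):
    grad = np.zeros_like(strength)
    for w, l in duels:
        p_win = 1.0 / (1.0 + np.exp(strength[idx[l]] - strength[idx[w]]))
        grad[idx[w]] += 1.0 - p_win
        grad[idx[l]] -= 1.0 - p_win
    strength += 0.1 * grad
    strength -= strength.mean()  # normalize: log-strengths sum to zero

ranking = sorted(colleges, key=lambda c: -strength[idx[c]])
print(ranking)  # colleges ordered by revealed preference
```

The estimated log-strengths order the colleges by how often they "win" matriculation contests, adjusted for the strength of the opponents they face, which is exactly the adjustment that raw yield rates lack.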
Remediation has become an important part of American higher education, with over one-third of all students requiring remedial or developmental courses. With the costs of remediation amounting to over $1 billion each year, many policymakers have become critical of the practice. In contrast, others argue that these courses provide opportunities for underprepared students. Despite the growing debate on remediation and the thousands of underprepared students who enter the nation's higher education institutions each year, little research exists on the role or effects of remediation on student outcomes. Bettinger and Long address these critical issues by examining how higher education attempts to assimilate students in need of remediation and prepare them for future college-level work and labor market success. Using a unique dataset of students in Ohio's public higher education system, they explore the characteristics and features of remedial education, examine participation within the programs, and analyze the effects of remedial education on student outcomes in college.
Corporate Finance
The NBER's Program on Corporate Finance met in Cambridge on November 15. Program Director Raghuram Rajan, University of Chicago, organized the meeting. These papers were discussed:
- Oliver S. Hart, NBER and Harvard University, and Bengt R. Holmstrom, NBER and MIT, "A Theory of Firm Scope"
- Discussant: Amar Bhide, Columbia University
- Raymond Fisman, NBER and Columbia University, and Inessa Love, The World Bank, "Patterns of Industrial Development Revisited: The Role of Finance"
- Discussant: Luigi Zingales, NBER and University of Chicago
- James Dow, London Business School and CEPR; Gary Gorton, NBER and University of Pennsylvania; and Arvind Krishnamurthy, Northwestern University, "Corporate Finance and the Term Structure of Interest Rates"
- Discussant: Philip Dybvig, Washington University
- Malcolm Baker, NBER and Harvard University, and Jeffrey Wurgler, New York University, "A Catering Theory of Dividends"
- Discussant: Laurie Hodrick, Columbia University
- Mark Aguiar and Gita Gopinath, University of Chicago, "Fire-Sale FDI and Liquidity Crises"
- Discussant: Anusha Chari, University of Michigan
- Matthew Rhodes-Kropf, Columbia University, and S. Viswanathan, Duke University, "Market Valuation and Merger Waves"
- Discussant: Augustin Landier, NBER and University of Chicago
- Heitor Almeida, New York University; Murillo Campello, University of Illinois; and Michael S. Weisbach, NBER and University of Illinois, "Corporate Demand for Liquidity" (NBER Working Paper No. 9253)
- Discussant: Viral Acharya, London Business School
The existing literature on firms, based on incomplete contracts and property rights, emphasizes that the ownership of assets -- and thereby firm boundaries -- is determined so as to encourage relationship-specific investments by the appropriate parties. It is generally accepted that this approach applies to owner-managed firms better than to large companies. Hart and Holmstrom attempt to broaden the scope of the property rights approach by developing a simple model with three key ingredients: decisions are non-contractible, but transferable through ownership; managers (and possibly workers) enjoy private benefits that are non-transferable; and owners can divert a firm's profit. With these assumptions, firm boundaries matter. Nonintegrated firms fail to account for the external effects that their decisions have on other firms. An integrated firm can internalize such externalities, but it does not put enough weight on the private benefits of managers and workers. The authors explore this trade-off first in a basic model that focuses on the difficulties companies face in cooperating through the market when the benefits of cooperation are unevenly distributed, which is why they sometimes may end up merging. They then extend the analysis to study industrial structure in a model with intermediate production. This analysis sheds light on industry consolidation in times of excess capacity.
Fisman and Love re-examine the role of financial market development in the intersectoral allocation of resources. First, they characterize the assumptions underlying previous work in this area, in particular, that of Rajan and Zingales (1998). The authors find that countries have more highly correlated growth rates across sectors when both countries have well-developed financial markets, suggesting that financial markets play an important role in allowing firms to take advantage of global growth opportunities. These results are particularly strong when the measure of financial development takes into account both the level and the composition of financial activity: private banking appears to play a particularly important role in resource allocation. The authors' technique allows them to further distinguish between this "growth opportunities" hypothesis and the related "finance and external dependence" hypothesis, which would imply that countries with similar levels of financial development should specialize in similar sectors. They do not find evidence in support of this alternative view of finance and development.
Dow, Gorton, and Krishnamurthy present a dynamic equilibrium model of the term structure of interest rates. The short-term interest rate is the price at which investors supply funds to the corporate sector. However, the authors assume that firms are run by managers whose interests conflict with those of their shareholders. Managers are empire-builders who prefer to invest all free cash flow rather than distributing it to shareholders. Shareholders are aware of this problem, but it is costly for them to intervene to increase earnings payouts. Firms with more cash invest more. Aggregate investment and the short-term interest rate are highest at business cycle peaks, when corporate cash flow is high, but the term spread is lowest at these times. Procyclical movements in interest rates are driven primarily by changes in corporate earnings rather than by shocks to the expected marginal rate of transformation. The pricing kernel derived under this free-cash-flow friction mimics one in which investors are "debt-holders" on the productive sector. They bear downside risk, but do not share equally on the upside. This aspect of the model sheds light on empirical regularities concerning the pricing of risky securities.
Baker and Wurgler develop a theory in which the decision to pay dividends is driven by investor demand. Managers cater to investors by paying dividends when investors put a stock price premium on payers and not paying when investors prefer nonpayers. To test this prediction, the authors construct four time-series measures of the investor demand for dividend payers. By each measure, nonpayers initiate dividends when demand for payers is high. By some measures, payers omit dividends when demand is low. Further analysis confirms that the results are better explained by the catering theory than by other theories of dividends.
In placing capital market imperfections at the center of emerging market crises, the theoretical literature has associated a liquidity crisis with low foreign investment and the exit of investors from the crisis economy. However, a liquidity crisis is equally consistent with an inflow of foreign capital in the form of mergers and acquisitions (M&A). To support this hypothesis, Aguiar and Gopinath use a firm-level dataset to show that foreign acquisitions increased by 88 percent in East Asia between 1996 and 1998, while intra-national merger activity declined. Firm liquidity plays a significant and sizeable role in explaining both the increase in foreign acquisitions and the decline in the price of acquisitions during the crisis. This effect is most prominent in the tradable sectors and represents a significant departure from the pattern of M&A observed both before and after the crisis. Quantitatively, the observed decline in liquidity can explain nearly 30 percent of the increase in foreign acquisition activity in the tradable sectors. The authors argue that the nature of M&A activity during the crisis contradicts productivity-based explanations of the East Asian crisis.
Does valuation affect takeovers? The data suggest that periods of merger activity are correlated with high market valuations and that firms use stock in acquisitions during these periods. If bidders are simply overvalued, then targets should not accept the offers. However, Rhodes-Kropf and Viswanathan show that private information on both sides can lead rationally to a correlation between stock merger activity and market valuation. They assume that bidding firms have private information about the synergistic value of the target. All firms have a market price that may be over or under the true stand-alone value of the firm. The target's and the bidding firm's private information tells each whether it is over- or under-valued, but not why (whether the misvaluation is market-, sector-, or firm-specific). Thus, target firms cannot distinguish whether high bids reflect synergies, relative target under-valuation, or bidder over-valuation. A rational target is unwilling to accept a takeover bid with expected value less than the true value of its firm. Consequently, the target uses all available information in an attempt to filter the misvaluation out of the bids. The rational target on average filters correctly, but underestimates the market-wide effect when the market is overvalued and overestimates it when the market is undervalued. Thus, the target rationally assesses high synergies when the market is overvalued, or when it is relatively undervalued, and accepts more bids, leading to merger waves. Furthermore, the market learns from observing takeover activity and slowly readjusts prices until they realign with fundamental value. Thus, a simple, fully rational model can explain a number of empirical puzzles.
Almeida, Campello, and Weisbach propose a theory of corporate liquidity demand and provide new evidence on corporate cash policies. Firms have access to valuable investment opportunities, but may be unable to fund them with external finance. Firms that are not financially constrained can undertake all positive-NPV projects regardless of their cash position, so their cash positions are irrelevant. In contrast, firms facing financial constraints have an optimal cash position determined by the value of today's investments relative to the expected value of future investments. The model predicts that constrained firms will save a positive fraction of incremental cash flows, while unconstrained firms will not. The authors also consider the impact of Jensen (1986)-style overinvestment on the model's equilibrium, and derive conditions under which overinvestment affects corporate cash policies. They test the model's implications on a large sample of publicly traded manufacturing firms over the 1981-2000 period, and find that firms classified as financially constrained save a positive fraction of their cash flows, while firms classified as unconstrained do not. Moreover, constrained firms save a higher fraction of cash inflows during recessions. These results are robust to the use of alternative proxies for financial constraints, and to several changes in the empirical specification. There is also weak evidence consistent with an agency-based model of corporate liquidity.
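The model's central empirical prediction can be cast as a "cash flow sensitivity of cash" regression; the specification below is a stylized sketch, with an assumed control set:

```latex
% Stylized cash-flow-sensitivity-of-cash regression; controls are assumed.
\[
  \Delta \mathit{CashHoldings}_{it}
    = \alpha_{0}
      + \alpha_{1}\,\mathit{CashFlow}_{it}
      + \alpha_{2}\,Q_{it}
      + \alpha_{3}\,\mathit{Size}_{it}
      + \varepsilon_{it},
\]
```

with the theory predicting $\alpha_{1} > 0$ for financially constrained firms and $\alpha_{1}$ statistically indistinguishable from zero for unconstrained firms.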
Higher Education
The NBER's Working Group on Higher Education met in Cambridge on November 15. Director Charles T. Clotfelter of Duke University organized the meeting. These papers were discussed:
- John M. de Figueiredo, NBER and MIT, and Brian S. Silverman, University of Toronto, "Academic Earmarks and the Returns to Lobbying" (NBER Working Paper No. 9064)
- Discussant: Irwin Feller, Pennsylvania State University
- Michael Rothschild, NBER and Princeton University, "What Makes American Public Universities Great?"
- Discussant: Paul Courant, University of Michigan
- Ronald G. Ehrenberg, NBER and Cornell University, and George H. Jakubson and Michael J. Rizzo, Cornell University, "Who Bears the Growing Cost of Science at Universities?"
- Discussant: Paula Stephan, NBER and Georgia State University
- Michael Robinson, Mount Holyoke College, and James Monks, University of Richmond, "Making SAT Scores Optional in Selective College Admissions: A Case Study"
- Discussant: David Zimmerman, Williams College
De Figueiredo and Silverman statistically estimate the returns to lobbying by universities for educational earmarks (which now represent 10 percent of federal funding of university research). The returns to lobbying approximate zero for universities not represented by a member of the Senate Appropriations Committee (SAC) or House Appropriations Committee (HAC). However, the average lobbying university with representation on the SAC receives a return of $11-$17 for each dollar spent on lobbying; lobbying universities with representation on the HAC receive $20-$36 for each dollar spent. Moreover, lobbying universities with SAC or HAC representation appear to set the marginal benefit of lobbying equal to its marginal cost, although the vast majority of universities with representation on the HAC and SAC do not lobby and thus do not take advantage of their representation in Congress. Overall, an estimated 45 percent of universities choose the optimal level of lobbying.
Rothschild's paper has three parts: first, speculation about explanations for the strength of U.S. public higher education, in which he identifies the following factors: wealth; competition; political acceptance of the differing roles of different public facilities; and diversity of revenue sources. Second, an attempt to explain stylized facts about the relationship between ability, price, cost, and wealth in U.S. higher education with a neoclassical (competitive) model. And third, a discussion of how to think about the efficiency of sorting and matching in education. Abstract examples illustrate that the technology of teaching can make tracking efficient in some cases and inefficient in others.
Ehrenberg, Rizzo, and Jakubson address the impact of the growing cost of scientific research at universities. What is not well known is that an increasing share of these growing costs is financed out of internal university funds rather than external funds. After providing some data on these costs, including information on the magnitudes of start-up costs in various disciplines, the authors present econometric evidence on the impact of the growing internal costs of science on student/faculty ratios, faculty salaries, and tuition levels at public and private universities. They use data for over 200 universities spanning the period 1972 to 1998. They find that student/faculty ratios, especially at public research universities, are modestly higher today than they would have been if the increase in university expenditures for research had not occurred. They also show that most universities are not earning large sums from commercialization of their faculty members' research.
Despite heightened scrutiny of the use of standardized tests in college admissions, there has been little public empirical analysis of the effects of an optional SAT score submission policy on college admissions. Robinson and Monks examine the results of the decision by Mount Holyoke College to make SAT scores optional in the admissions process. They find that students who "under-performed" on the SAT relative to their high school GPA were more likely to withhold their scores; the admissions office rated applicants who withheld their scores more highly than they otherwise would have been rated; and, matriculants who withheld their scores had a lower average GPA than those who submitted their standardized test results.
International Trade and Investment
The NBER's Program on International Trade and Investment met at the Bureau's California office on December 6-7. Program Director Robert C. Feenstra, University of California, Davis, organized this meeting. The following papers were discussed:
- Gordon H. Hanson, NBER and University of California, San Diego; Raymond J. Mataloni, Jr., U.S. Bureau of Economic Analysis; and Matthew J. Slaughter, NBER and Dartmouth College, "Vertical Specialization in Multinational Firms"
- Deborah L. Swenson, NBER and University of California, Davis, "Overseas Assembly and Country Sourcing Choices"
- Lee G. Branstetter and Raymond Fisman, NBER and Columbia University, and Fritz Foley, University of Michigan, "Will Stronger Intellectual Property Rights Increase International Technology Transfer? Empirical Evidence from U.S. Firm-Level Panel Data"
- Mihir A. Desai, NBER and Harvard Business School; Fritz Foley, University of Michigan; and James R. Hines, Jr., NBER and University of Michigan, "International Joint Ventures and the Boundaries of the Firm" (NBER Working Paper No. 9115)
- Zadia Feliciano and Robert E. Lipsey, NBER and Queens College, "Foreign Entry into U.S. Manufacturing by Takeovers and the Creation of New Firms"
- Kevin H. O'Rourke, NBER and Trinity College, and Jeffrey G. Williamson, NBER and Harvard University, "From Malthus to Ohlin: Trade, Growth and Distribution Since 1500" (NBER Working Paper No. 8955)
- Brian Copeland, University of British Columbia, and Scott Taylor, NBER and University of Wisconsin, "Trade, Tragedy, and the Commons"
- Gilles Duranton, London School of Economics, and Diego Puga, NBER and University of Toronto, "Microfoundations of Urban Agglomeration Economies"
In recent decades, the growth of overall world trade has been driven largely by the growth of trade in intermediate inputs. This input trade results in part from multinational firms choosing to outsource input processing to their foreign affiliates, thereby creating global production networks in which each actor is vertically specialized. Hanson, Mataloni, and Slaughter use firm-level data on U.S. multinationals to examine trade in intermediate inputs between parent firms and their foreign affiliates. They estimate affiliate demand for imported inputs as a function of host-country trade costs, factor prices, and other variables. They find that affiliate demand for imported inputs for further processing is decreasing in host-country tariffs, host-country wages for less-skilled labor (both in absolute terms and relative to wages for more-skilled labor), and host-country corporate income tax rates. Consistent with recent theory, these results suggest that vertical specialization within multinational firms rises as trade barriers between countries fall and as factor-price differences between countries widen.
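A stylized log-linear version of such an affiliate input-demand equation (variable names and the control set are assumptions for illustration) is:

```latex
% Stylized affiliate demand for imported inputs; notation is assumed.
\[
  \ln M_{ac}
    = \beta_{1} \ln (1 + \tau_{c})
      + \beta_{2} \ln w^{u}_{c}
      + \beta_{3} \ln\!\bigl(w^{u}_{c} / w^{s}_{c}\bigr)
      + \beta_{4}\, t_{c}
      + \Gamma' X_{ac}
      + \varepsilon_{ac},
\]
```

where $M_{ac}$ is affiliate $a$'s imports of intermediate inputs for processing in host country $c$, $\tau_{c}$ is the tariff, $w^{u}_{c}$ and $w^{s}_{c}$ are less-skilled and more-skilled wages, $t_{c}$ is the corporate tax rate, and $X_{ac}$ collects other controls; the findings summarized above correspond to negative estimates of $\beta_{1}$ through $\beta_{4}$.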
The fragmentation of production has resulted in an increasing degree of vertical specialization across countries. Swenson studies one venue that has facilitated growth in U.S. vertical specialization, examining how the cross-country pattern of U.S. overseas assembly responds to changes in country and competitor costs. A number of interesting regularities emerge. Changes in sourcing are influenced not only by changes in import values, but also by a high degree of country entry to and exit from the program. Both developed and developing countries face exit pressures when their own costs rise or their competitors' costs decline. For those countries that are selected to provide assembly, the value of assembly imports also is influenced by own and competitor costs. In all cases, the estimated cost sensitivity for developing countries is larger than it is for the richer nations of the OECD.
Branstetter, Fisman, and Foley examine the response of U.S. multinational firms to a series of reforms of intellectual property rights (IPR) regimes undertaken by 12 countries over 1982-99. Their results indicate that changes in the IPR regime produce 8.5 percent increases on average in royalty payment flows to parent firms and 22.8 percent increases for firms that hold more patents than the median firm prior to the reforms. The affiliates of parent companies that had a large number of U.S. patents before reforms experienced larger increases in employment, sales, and profitability than other firms around the time of policy changes. Since there is no evidence of an increase in royalties paid by unaffiliated foreigners, multinationals seem to respond to the IPR regime changes by exploiting their technologies inside the firm. The data on international patent filings suggest that some component of the increased royalty flows represents the transfer of new technologies to the host country; the increased flows do not merely reflect an increase in the price of the flows or greater rent extraction.
Desai, Foley, and Hines analyze what determines partial ownership of the foreign affiliates of U.S. multinational firms and, in particular, why partial ownership has declined markedly over the last 20 years. Whole ownership appears most common when firms: coordinate integrated production activities across different locations; transfer technology; and benefit from worldwide tax planning. Since operations and ownership levels are determined jointly, the authors use the liberalization of ownership restrictions by host countries and the imposition of joint venture tax penalties in the U.S. Tax Reform Act of 1986 as sources of exogenous variation in ownership levels. Firms responded to these regulatory and tax changes by expanding the volume of their intrafirm trade as well as the extent of whole ownership; 4 percent greater subsequent sole ownership of affiliates is associated with 3 percent higher intrafirm trade volumes. The implied complementarity of whole ownership and intrafirm trade suggests that reduced costs of coordinating global operations, together with regulatory and tax changes, gave rise to the sharply declining propensity of American firms to organize their foreign operations as joint ventures over the last two decades. The forces of globalization appear to have increased the desire of multinationals to structure many transactions inside firms rather than through exchanges involving other parties.
Using U.S. Bureau of Economic Analysis data for individual foreign acquisitions and new establishments in the United States from 1988 to 1998, and aggregate data for 1980 to 1998, Feliciano and Lipsey find that acquisitions and establishment of new firms tend to occur in periods of high U.S. growth and take place mainly in industries in which the investing country has some comparative advantage in exporting. New establishments are largely in industries of U.S. comparative disadvantage, and the relation of U.S. comparative advantage to takeovers is negative, but never significant. High U.S. stock prices, industry profitability, and industry growth discourage takeovers. High U.S. interest rates and high investing country growth and currency values encourage takeovers. Direct investments in acquisitions and new establishments thus tend to flow in the same direction as trade. They originate in countries with comparative advantages in particular industries and flow to industries of U.S. comparative disadvantage.
A recent endogenous growth literature has focused on the transition from a Malthusian world, where real wages were linked to factor endowments, to one where modern growth has broken that link. O'Rourke and Williamson present evidence on another, related phenomenon: the dramatic reversal in distributional trends -- from a steep secular fall to a steep secular rise in the ratio of wages to land rents -- which occurred some time early in the 19th century. What explains this reversal? While it may seem logical to locate the causes in the Industrial Revolutionary forces emphasized by endogenous growth theorists, the authors show that something else mattered just as much: the opening up of the European economy to international trade.
Copeland and Taylor investigate the conditions under which the market integration of resource-rich countries into the global trading system leads to greater or lesser conservation of natural resources. The authors present a model of common property resources where the strength of property rights varies endogenously with world market conditions. They find that some countries will never be able to develop control over access to their renewable resources, but others will, and increases in resource prices work towards solving the tragedy of the commons. The paper divides the set of resource-rich countries into three categories according to their ability to graduate to tighter resource management, and links these categories to country characteristics, such as resource growth rates, technologies, and the expected lifetime of agents. The authors also consider extensions to allow for political economy elements and government corruption.
Market Microstructure
The NBER's Working Group on Market Microstructure met in Cambridge on December 6. Bruce Lehmann, NBER and University of California, San Diego; Andrew Lo, NBER and MIT; Matthew Spiegel, Yale University; and Avanidhar Subrahmanyam, University of California, Los Angeles, organized this program:
- Bruno Biais and Christophe Bisière, Toulouse University, and Chester S. Spatt, Carnegie Mellon University, "Imperfect Competition in Financial Markets: ISLAND vs. NASDAQ"
- Discussant: Stewart Mayhew, University of Georgia
- Sugato Chakravarty, Purdue University; Venkatesh Panchapagesan, Washington University; and Robert A. Wood, University of Memphis, "Has Decimalization Hurt Institutional Investors? An Investigation into Trading Costs and Order Routing Practices of Buy-Side Institutions"
- Discussant: Tarun Chordia, Emory University
- Kee H. Chung, SUNY; Chairat Chuwonganant, Purdue University; and D. Timothy McCormick, NASDAQ, "Order Preferencing and Market Quality on NASDAQ Before and After Decimalization"
- Discussant: Michael Barclay, NBER and University of Rochester
- Burton Hollifield and Robert A. Miller, Carnegie Mellon University; Patrik Sandas, University of Pennsylvania; and Joshua Slive, HEC Montreal, "Liquidity Supply and Demand in Limit Order Markets"
- Discussant: Ohad Kadan, Washington University
- Magueye Dia, Oxford University, and Sébastien Pouget, Georgia State University, "Sunshine Trading in West Africa: Liquidity and Price Formation of Infrequently Traded Stocks"
- Discussant: Barbara Ostdiek, Rice University
Internet technology reduces the cost of transmitting and exchanging information. Electronic Communications Networks (ECNs) -- including Island, Archipelago, and Redi -- exploit this opportunity, enabling investors to place quotes at very little cost and to compete with incumbent stock exchanges. Does this quasi-free-entry situation lead to competitive liquidity supply? Biais, Bisière, and Spatt analyze trades and order book dynamics on Nasdaq and Island. The Nasdaq Touch -- the best price quote available through the Nasdaq market makers' network for a given security at a point in time -- frequently is undercut by Island limit orders, using the finer tick size prevailing on that ECN. Before decimalization, the coarse tick size constrained Nasdaq spreads, and undercutting Island limit order traders earned oligopoly rents. After decimalization, the hypothesis that liquidity suppliers do not earn rents cannot be rejected.
Chakravarty, Panchapagesan, and Wood examine the effect of decimalization on institutional investors. Using proprietary data, they find that decimalization has not increased trading costs for institutions. In fact, they find an average decrease of 13 basis points, or roughly $224 million a month, in institutional trading costs after the move to decimal trading. As to institutional order-routing practices, the smaller and easier-to-fill orders more often are routed to electronic brokers, while the larger and more-difficult-to-fill orders are sent to traditional brokers. The trading costs of orders routed to electronic and independent research brokers increase, while the costs of trading with full service and soft dollar brokers go down. Interestingly, the authors find less usage of soft dollar brokers, suggesting that decimalization may have altered the incentives of this multi-billion dollar industry. These results survive extensive partitioning of the data and differ in spirit from those reported around the transition of the minimum tick size from eighths to sixteenths. The results are also surprising in light of an oft-repeated complaint among professional traders: that liquidity is hard and expensive to find in a post-decimal trading milieu.
No hard evidence exists on the extent and determinants of order preferencing and its impact on dealer competition and execution quality. Chung, Chuwonganant, and McCormick show that the bid-ask spread (dealer quote aggressiveness) is positively (negatively) related to the proportion of internalized trades during both the pre- and post-decimalization periods. Although decimal pricing led to lower order preferencing on NASDAQ, the proportion of preferenced trades after decimalization is much higher than some prior studies had predicted. The authors find that the price impact of preferenced trades is smaller than that of unpreferenced trades, and that preferenced trades receive greater size improvements but smaller price improvements than unpreferenced trades.
Hollifield, Miller, Sandas, and Slive model a trader's decision to supply liquidity by submitting limit orders, or demand liquidity by submitting market orders, in a limit-order market. The best quotes and the execution probabilities and picking-off risks of limit orders determine the price of immediacy. The price of immediacy and the trader's willingness to pay for it determine the trader's optimal order submission; the trader's willingness to pay for immediacy depends on the trader's valuation for the stock. The authors estimate the execution probabilities and the picking-off risks using a sample from the Vancouver Stock Exchange to compute the price of immediacy. The price of immediacy changes with market conditions -- a trader's optimal order submission changes with market conditions. The authors combine the price of immediacy with the actual order submissions to estimate the unobserved arrival rates of traders and the distribution of the traders' valuations. High realized stock volatility increases the arrival rate of traders and increases the number of value traders arriving -- liquidity supply is more competitive after periods of high volatility. An increase in the spread decreases the arrival rate of traders and decreases the number of value traders arriving -- liquidity supply is less competitive when the spread widens.
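In stylized terms (notation assumed, not the authors' exact formulation), the trade-off each trader faces can be written as:

```latex
% Stylized expected payoff of a limit order versus a market order.
\[
  \mathrm{E}[\text{limit order payoff}]
    = \Pr(\text{execution})\,(v - p)\;-\;\text{picking-off cost},
  \qquad
  \mathrm{E}[\text{market order payoff}] = v - p_{\text{quote}},
\]
```

where $v$ is the trader's private valuation, $p$ the chosen limit price, and $p_{\text{quote}}$ the current best quote. The gap between the two expected payoffs is the price of immediacy, so traders with extreme valuations tend to demand immediacy with market orders while those with moderate valuations supply it with limit orders.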
Dia and Pouget study liquidity and price formation in the West-African Bourse. They provide evidence consistent with investors using the preopening period to implement sunshine trading, and with prices revealing information before trading actually occurs. They argue that market participants implement order-placement strategies bound to enhance market liquidity. They also underline the role of the preopening period as a powerful tool for disseminating information regarding both liquidity needs and stock valuation. The authors interpret the empirical results in the framework of a simple theoretical model. For some parameter values, at equilibrium, market non-anonymity and repeated interaction enable investors to coordinate on trading strategies, improving market quality, as observed in the West-African Bourse. These findings have implications for global portfolio management and for the design of financial markets.
Productivity
The NBER's Program on Productivity met in Cambridge on December 6. Bronwyn H. Hall, NBER and University of California, Berkeley, organized this program:
- Bee Yan Aw, Pennsylvania State University; Mark J. Roberts, NBER and Pennsylvania State University; and Tor Winston, U.S. Department of Justice, "Export Market Participation, Investments in R and D and Worker Training, and the Evolution of Firm Productivity"
- Discussant: Amil Petrin, NBER and University of Chicago
- Hajime Katayama, Pennsylvania State University; Shihua Lu, Charles River Associates; and James R. Tybout, NBER and Pennsylvania State University, "Why Plant-Level Productivity Studies Are Often Misleading, and an Alternative Approach to Inference"
- Discussant: Marc Melitz, Harvard University
- Barbara M. Fraumeni and Sumiye Okubo, Bureau of Economic Analysis, "R and D in the National Income and Product Accounts: A First Look at Its Effects on GDP"
- Discussant: Bronwyn H. Hall
- Saul Lach, NBER and Hebrew University, Jerusalem, and Mark Schankerman, London School of Economics, "Incentives, Academic Research, and Licensing"
- Discussant: Arvids Ziedonis, University of Michigan
- James D. Adams, NBER and University of Florida; Grant C. Black and Paula E. Stephan, Georgia State University; and Roger Clemmons, University of Florida, "Patterns of Research Collaboration in U.S. Universities, 1981-99"
- Discussant: Manuel Trajtenberg, NBER and Tel Aviv University
Aw, Roberts, and Winston use data for firms in the Taiwanese electronics industry in 1986, 1991, and 1996 to investigate a firm's decision to invest in two sources of knowledge: participation in the export market and investments in R and D and/or worker training. They also assess the effects of these investments on the firm's future total factor productivity. They find that past experience in exporting increases the likelihood that a firm currently exports, but that past experience in R and D and/or worker training does not have lasting effects on a firm's investment decisions. These results are consistent with the belief that exporting is less costly for firms that have already incurred some necessary sunk costs. In addition, the results indicate that larger firms and more productive firms are more likely to participate in each activity. The findings also suggest that, on average, firms that export but do not invest in R and D and/or worker training have significantly higher future productivity than firms that do not participate in either activity. In addition, firms that export and invest in R and D and/or worker training have significantly higher future productivity than firms that only export. These findings are consistent with the hypothesis that export experience is an important source of productivity growth for Taiwanese firms and that firm investments in R and D and worker training facilitate their ability to benefit from their exposure to the export market.
Applied economists often wish to measure the effects of policy changes (such as trade liberalization) or managerial decisions (for example, how much to spend on R and D) on plant-level productivity patterns. But plant-level data on physical quantities of output, capital, and intermediate inputs are usually unavailable. Therefore, when constructing productivity measures, most analysts proxy these variables with real sales revenues, depreciated capital spending, and real input expenditures. The first objective of Katayama, Lu, and Tybout is to show that the resulting productivity indexes have little to do with technical efficiency, product quality, or contributions to social welfare. Nonetheless, they are likely to be correlated with policy shocks and managerial decisions in misleading ways. The authors' second objective is to develop an alternative approach to inference. Applying their methodology to panel data on Colombian pulp and paper plants, the authors then study the relation between their welfare-based measures and conventional productivity measures. They find that conventional productivity measures are correlated positively with producer surplus because they depend positively on mark-ups. But the conventional measures are not related closely to product quality measures, and they are nearly orthogonal to consumer surplus measures; from a social welfare standpoint, they are poor characterizations of producer performance. Finally, the authors show that conventional productivity measures imply that firms importing their intermediate inputs tend to perform worse, while the welfare-based measures suggest they do not.
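To see the mechanics of the critique, consider a minimal sketch (the notation here is illustrative, not the authors' own). The conventional index is built from deflated values:

```latex
\ln \widehat{TFP}_{it} = \ln R_{it} - \alpha_K \ln K_{it} - \alpha_L \ln L_{it} - \alpha_M \ln M_{it},
\qquad R_{it} = P_{it} Q_{it},
```

where R is deflated sales revenue and K, L, and M are expenditure-based input proxies. Because industry-level deflators cannot strip out the plant-specific price P, the index absorbs the plant's own price: a plant with high mark-ups or high-priced output can score as "productive" without being technically efficient, which is precisely why such indexes can correlate with policy and managerial variables in misleading ways.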
According to the estimates of Fraumeni and Okubo, R and D is a significant contributor to economic growth. Over the 40-year period studied, 1961-2000, returns to R and D capital accounted for 10 percent of growth in real GDP. Treating R and D as an investment raises the national savings rate by 2 percentage points, from 19 to 21 percent. Their paper is a preliminary and exploratory examination of the role of R and D in the U.S. economy. It extends the National Income and Product Accounts (NIPA) framework by treating R and D as an investment and imputing a net return to general government capital. Capitalizing R and D investment has a small positive effect on the rate of growth of GDP, and a significant effect on the composition of consumption and investment on the product side and of property-type income and labor income on the income side. Most importantly, the partial R and D satellite account developed in the paper increases our understanding of the sources of economic growth.
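A back-of-the-envelope illustration of the accounting, under the simplifying assumption that business R and D, currently expensed as an intermediate input, is reclassified as investment (the R and D figure is a round number chosen for illustration, not from the paper): with GDP of 100, savings of 19, and R and D spending of roughly 2.5, capitalization adds R and D to both measured output and measured savings,

```latex
\frac{S + E_{RD}}{GDP + E_{RD}} = \frac{19 + 2.5}{100 + 2.5} \approx 0.21,
```

which reproduces a move from a 19 percent to a roughly 21 percent savings rate.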
Lach and Schankerman study how economic incentives affect university research and licensing outcomes. Using data on inventions, license income, and scientists' royalty shares for 103 U.S. universities, they examine how the cash flow rights from inventions affect their quantity and value. Controlling for other determinants, including university size, quality, and research funding, they find that universities offering higher royalty shares produce fewer, but on average more valuable, inventions; overall, total income from licensing increases with the royalty share. These incentive effects are much stronger in private than in public universities.
Adams, Black, Clemmons, and Stephan explore recent time trends, as well as cross-sectional patterns, in the size of scientific teams and in collaboration between scientific institutions. The data derive from 2.4 million scientific papers written in 110 leading U.S. universities over the period 1981-99. The authors' measure of team size is the number of authors on a scientific paper. By this measure, the size of scientific teams increases by 50 percent over the 19-year period. Much of the increase takes place during 1991-95, when the Internet was rapidly commercialized. Cross-sectional patterns indicate that sciences that are intensive in instruments or research assistants employ larger teams, as do top departments that receive large amounts of federal R and D and employ faculty who have received prestigious prizes and awards. There is also evidence of rapid growth in institutional collaboration, especially international collaboration. Since these two dimensions of collaboration determine the location of team members, the authors conclude that the geographic dispersion of scientific teams has increased over time. Finally, collaboration in its different dimensions generally contributes positively to the number of papers produced and the citations they receive. Since collaboration implies an increase in the division of labor, these results support the notion that the division of labor increases scientific productivity, in line with Smith's famous dictum of 1776.
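As a concrete illustration of the measure, here is a minimal Python sketch of how mean team size and its growth could be computed from per-paper author counts (the data layout and the numbers are hypothetical toys, not the authors' 2.4-million-paper dataset):

```python
from collections import defaultdict

# Hypothetical records: (publication_year, number_of_authors), one per paper.
papers = [(1981, 2), (1981, 3), (1990, 3), (1995, 4), (1999, 5)]

by_year = defaultdict(list)
for year, n_authors in papers:
    by_year[year].append(n_authors)

# Mean team size per year; the headline figure is the percentage
# change in this series between the first and last year.
mean_size = {y: sum(v) / len(v) for y, v in sorted(by_year.items())}
growth = (mean_size[1999] / mean_size[1981] - 1) * 100
print(mean_size)
print(f"Growth in mean team size, 1981-99: {growth:.0f}%")
```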