NATIONAL BUREAU OF ECONOMIC RESEARCH

The NBER Reporter Winter 2006/2007: Program and Working Group Meetings



China Working Group Meeting
Health Care Program Meeting
Entrepreneurship Working Group Meeting
International Finance and Macroeconomics
Public Economics
Monetary Economics
Higher Education

Education Program Meeting
Asset Pricing
Corporate Finance
Behavioral Economics
Labor Studies
Productivity
International Trade and Investment

China Working Group Meeting

The NBER's Working Group on China met in Cambridge on October 13. NBER Research Associate Shang-Jin Wei of the IMF organized the meeting. The following papers were discussed:

Douglas Almond, Columbia University and NBER; Lena Edlund, Columbia University; and Hongbin Li and Junsen Zhang, Chinese University of Hong Kong, "Long-Term Effects of China's Great Famine in Hong Kong and Mainland China"
Discussant: Xiaobo Zhang, International Food Policy Research Institute

Tarun Khanna and Felix Oberholzer-Gee, Harvard University, "The Political Economy of Firm Size Distributions: Evidence from Post-Reform China"
Discussant: Bruce Reynolds, University of Virginia

David Dollar, World Bank, and Shang-Jin Wei, "Das (Wasted) Kapital: Firm Ownership and Investment Efficiency in China"
Discussant: Galina Hale, Federal Reserve Bank of San Francisco

Geert Bekaert, Columbia University and NBER; Campbell R. Harvey, Duke University and NBER; and Christian Lundblad, University of North Carolina, "Financial Openness and the Chinese Growth Experience"
Discussant: Zhiwu Chen, Yale University

Chang-Tai Hsieh, University of California, Berkeley, and Peter Klenow, Stanford University and NBER, "Misallocation and Manufacturing TFP in China and India"
Discussant: Lee Branstetter, Carnegie Mellon University and NBER

Yi Qian, Northwestern University, "Pricing and Marketing Impacts of Entry by Counterfeiters and Imitators"
Discussant: Nancy Qian, Brown University

Almond and his co-authors evaluate whether China's Great Famine had negative long-term effects on its survivors. According to the fetal-origins hypothesis, cohorts in utero during the famine should have suffered the greatest long-term damage. Consistent with this hypothesis, the authors find a broad spectrum of compromised outcomes for cohorts born in 1960 who appear in the 2000 Chinese Census. These effects are greatest for those in rural areas, but extend to those who were born in urban areas. The authors also find that Hong Kong residents who were born in China exhibit inferior health outcomes, including reduced birth weight of children born to parents who themselves were in utero during the famine. Health effects exist among emigrants from mainland China despite the selective effects of emigration, which are generally positive. Moreover, no corresponding damage is observed among cohorts born in Hong Kong, who were thereby shielded from the famine.

Khanna and Oberholzer-Gee study the relationship between firm size distributions and aspects of political economy. They exploit a new database covering China, with up to five million observations in each of two years, 1999 and 2003. They are particularly interested in the effects of China's uneven march to the market on firms of different ownership, namely state-owned enterprises, collectively owned enterprises, foreign-invested enterprises, and private firms. Their results show that massive liberalization in China has encouraged the growth of foreign-invested enterprises and, to a lesser extent, collective enterprises (including Township and Village Enterprises), but it has not encouraged genuinely private firms. The best that can be said for private enterprise in China is that foreign direct investment appears to spur the entry of small firms. Surprisingly, price flexibility, an important form of liberalization, does not help private firms, although it does help large foreign firms and large collectives to become even larger. The researchers are also able to distinguish between government interference directed at provincial insiders (incumbents, if you will) and that directed at potential provincial outsiders (potential entrants), and they show that the effects on the size distribution are opposite. The results are consistent with local governments - in an attempt to protect the autonomy granted them by the center during the reform process - "hitting back" at central government efforts to contain them, perhaps in order to encourage their own local (provincial) firms. The authors conclude that these simple political economy measures appear more important than conventional measures of financial constraints in determining firm size distributions, even after controlling for industry (technological) effects.

Using a survey they designed, covering a stratified random sample of 12,400 firms in 120 cities in China with firm-level accounting information for 2002-4, Dollar and Wei examine the presence of systematic distortions in capital allocation that result in uneven marginal returns to capital across firm ownership, regions, and sectors. The survey provides a systematic comparison of investment efficiency among wholly and partially state-owned, wholly and partially foreign-owned, and domestic privately owned firms, conditioning on their sector, location, and size. The researchers find that even after a quarter-century of reforms, state-owned firms still have significantly lower returns to capital, on average, than domestic private or foreign-owned firms. Similarly, certain regions and sectors have consistently lower returns to capital than others. A back-of-the-envelope calculation suggests that if China could allocate capital more efficiently, it could reduce its investment intensity from the current 40 percent of GDP to 35 percent without sacrificing its economic growth (and hence deliver a greater improvement in its citizens' living standards).

Bekaert and his co-authors reflect on China's economic performance from the perspective of the experiences of a broad panel of countries. The authors formulate an econometric framework, building on standard growth regressions, that allows them to measure the impact of various factors on economic growth and growth variability. Because China has become increasingly integrated into the world's economic and financial landscape, the authors devote special attention to measures of (de jure) financial openness. They also document how the real effects of openness are affected by financial development, political risk, and the quality of institutions. Standard growth regressions cannot explain China's extraordinary growth experience, and the authors fail to find an important role for foreign trade and foreign direct investment. In contrast, the sheer volume of investment has played a significant role in China's growth. As China's per capita GDP continues to grow, it must find sustainable sources of growth. The authors identify a more efficient financial sector, less state ownership, higher-quality government institutions, and full financial openness as important factors. Interaction analysis suggests that the beneficial effects of financial openness first require further financial and institutional development. China is less of an outlier in its growth variability experience, but it has achieved high growth with surprisingly low growth volatility.

Resource misallocation can lower aggregate total factor productivity (TFP). Hsieh and Klenow use micro data on manufacturing establishments to quantify the extent of this misallocation in China and India in recent years. For each country, they measure sizable gaps in the marginal products of labor and capital across plants within narrowly-defined industries. When capital and labor are hypothetically reallocated to equalize the marginal products, they calculate manufacturing TFP gains on the order of a factor of 2. Output gains are nearly a factor of 4 if physical capital accumulates to restore the original average marginal product of capital.
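The reallocation experiment can be illustrated with a stylized calculation. The sketch below is only a minimal illustration of the logic, not the paper's framework: the plant TFPs, the distorted capital allocation, and the curvature parameter are all hypothetical, and the paper works with capital and labor jointly within narrowly defined industries.

```python
import numpy as np

# Stylized reallocation sketch (hypothetical numbers, not the paper's model):
# plants produce y_i = A_i * k_i**alpha with diminishing returns, so the
# efficient interior allocation equalizes marginal products of capital.
alpha = 0.5
A = np.array([1.0, 2.0, 4.0])         # hypothetical plant TFPs
k_obs = np.array([5.0, 3.0, 2.0])     # hypothetical distorted capital allocation
K = k_obs.sum()

def output(k):
    return (A * k**alpha).sum()

# Equalizing alpha * A_i * k_i**(alpha - 1) across plants implies
# k_i proportional to A_i**(1 / (1 - alpha)).
weights = A**(1.0 / (1.0 - alpha))
k_eff = K * weights / weights.sum()

gain = output(k_eff) / output(k_obs)
print(f"aggregate output gain from equalizing marginal products: {gain:.2f}x")
```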

Counterfeit and imitative products appear similar to authentic products but usually have lower quality. However, unlike imitation, counterfeiting infringes upon intellectual property rights by claiming a brand name that it does not own. Qian models the pricing, quality, and marketing strategies of producers of authentic and counterfeit goods in a setting of oligopolistic competition under both complete and asymmetric information. The model explains the effects of both counterfeit and imitative entry with different parameter specifications. Qian collects data on Chinese shoe companies for 1993-2004 to test the theoretical predictions. Exploiting the discontinuity of government enforcement efforts for the footwear sector in 1995, and the differences in authentic companies' relationships with their local governments, Qian uses two different techniques to measure the effects of counterfeit entry on authentic manufacturers' prices, qualities, and profits. The empirical results are consistent with the theoretical predictions. First, low-quality counterfeit entrants induce authentic producers to both produce higher-quality products and raise prices. Second, there is empirical evidence for the presence of asymmetric information, under which authentic prices rise further to signal quality (or authenticity). However, this price-signaling effect diminishes over time. Third, other costly non-price devices are used for signaling and for reducing counterfeit sales.



Health Care Program Meeting

The NBER's Program on Health Care met in Cambridge on October 20. David Meltzer, NBER and University of Chicago, organized the program. These papers were discussed:

Amy Finkelstein and Daron Acemoglu, MIT and NBER, "Input and Technology Choices in Regulated Industries: Evidence from the Health Care Sector" (NBER Working Paper No. 12254)

Darius Lakdawalla and Neeraj Sood, The RAND Corporation and NBER, "Health Insurance as a Two-Part Pricing Contract" (NBER Working Paper No. 12681)

Dhaval Dave, Bentley College and NBER, and Robert Kaestner, University of Illinois at Chicago and NBER, "Medicare and Health Behaviors"

Joshua Graff Zivin, Columbia University and NBER; Harsha Thirumurthy, Yale University; and Markus Goldstein, The World Bank, "AIDS Treatment and Intrahousehold Resource Allocations: Children's Nutrition and Schooling in Kenya"

Sean Nicholson, Cornell University and NBER; Michael Waldman, Cornell University; and Nodir Adilov, Purdue University, "Does Television Cause Autism?"

David Meltzer, NBER and University of Chicago, and Domenico Salvatore, Universita Bocconi, "Sex and Physician Practice Variation"

David M. Cutler and Robert S. Huckman, Harvard University and NBER, and Jonathan T. Kolstad, Harvard University, "Is Entry Efficient When Inputs are Constrained? Lessons from Cardiac Surgery"

Joshua Lerner, Harvard University and NBER, and Ulrike Malmendier, University of California, Berkeley and NBER, "Contractibility and the Design of Research Agreements"

Finkelstein and Acemoglu ask how regulatory change might affect the input mix and technology choices of regulated industries. They present a simple neoclassical framework that emphasizes the change in relative factor prices associated with a shift from full-cost to partial-cost reimbursement, and investigate how this affects firms' technology choices through substitution of (capital embodied) technologies for tasks previously performed by labor. Empirically, they study the change from full-cost to partial-cost reimbursement under the Medicare Prospective Payment System (PPS) reform, which increased the relative price of labor faced by U.S. hospitals. Using the interaction of hospitals' pre-PPS Medicare share of patient days with the introduction of these regulatory changes, they document a substantial increase in capital-labor ratios and a large decline in labor inputs associated with PPS. Most interestingly, they find that the PPS reform seems to have encouraged the adoption of a range of new medical technologies. They also show that the reform was associated with an increase in the skill composition of these hospitals, which is consistent with technology-skill or capital-skill complementarities.

Monopolies appear throughout medical care markets, as a result of patents, limits to the extent of the market, or the presence of unique inputs and skills. Economists typically think of such monopolies as necessary evils or even pure inefficiencies. However, in the health care industry, the deadweight costs of monopoly may be much smaller or even absent. Health insurance, frequently implemented as an ex ante premium coupled with an ex post co-payment per unit consumed, operates as a two-part pricing contract. This allows monopolists to extract consumer surplus without inefficiently constraining quantity. Lakdawalla and Sood note that this view of health insurance contracts has several novel implications: 1) medical care monopolies may have smaller or no deadweight costs in the goods market, because insured consumers face low co-payments; 2) since monopolists have incentives to seek low co-payments, price regulation of health care monopolies is inferior to laissez-faire or simple tax-and-transfer schemes that redistribute monopoly profits; and 3) competitive health insurance markets or optimally designed public health insurance can eliminate static losses in the goods market while still improving dynamic efficiency in the innovation market.
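The efficiency claim follows the textbook logic of a two-part tariff. As a minimal sketch (a stylized single-consumer case, not the authors' full model), let q(p) be demand at co-payment p, c the marginal cost of care, S(p) consumer surplus, and T the premium:

```latex
\max_{T,\,p}\; T + (p - c)\,q(p)
\quad \text{subject to} \quad S(p) \ge T,
\qquad S(p) \equiv \int_{p}^{\infty} q(x)\,dx ,
```

which is solved by p* = c and T* = S(c): the insured consumer faces a co-payment equal to marginal cost, consumes the efficient quantity, and the monopolist captures the surplus through the premium rather than by restricting output.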

Basic economic theory suggests that health insurance coverage may cause a reduction in prevention activities, but empirical studies have yet to provide evidence to support this prediction. Dave and Kaestner extend the analysis of the effect of health insurance on health behaviors by allowing for the possibility that health insurance has a direct (ex ante moral hazard) and indirect effect on health behaviors. The indirect effect works through changes in health promotion information and the probability of illness that may be a byproduct of insurance-induced greater contact with medical professionals. The authors identify these two effects and in doing so identify the pure ex ante moral hazard effect. They find limited evidence that obtaining health insurance reduces prevention and increases unhealthy behaviors among elderly persons.

The provision of life-saving antiretroviral (ARV) treatment has emerged as a key component of the global response to HIV/AIDS, but very little is known about the impact of this intervention on the welfare of children in the households of treated persons. Zivin, Thirumurthy, and Goldstein estimate the impact of ARV treatment on children's schooling and nutrition outcomes using longitudinal household survey data collected in collaboration with a treatment program in western Kenya. They find that children's weekly hours of school attendance increase by over 20 percent within about six months after treatment is initiated for the adult household member. For boys in treatment households, these increases are closely related to decreases in their market labor supply. Similarly, young children's short-term nutritional status - as measured by their weight-for-height Z-score - also improves dramatically. The researchers argue that these treatment effects will be considerably larger when compared to the counterfactual scenario of no ARV treatment. Their results show how intrahousehold resource allocation is altered in response to significant health improvements. Because the improvements in children's schooling and nutrition will affect their socioeconomic outcomes in adulthood, the provision of ARV treatment is likely to generate significant long-run macroeconomic benefits.

Autism is currently estimated to affect approximately one in every 166 children, yet the cause or causes of the condition are not well understood. One of the current theories concerning the condition is that among a set of children vulnerable to developing the condition because of their underlying genetics, the condition manifests itself when such a child is exposed to a (currently unknown) environmental trigger. Nicholson, Waldman, and Adilov empirically investigate the hypothesis that early childhood television viewing serves as such a trigger. Using the Bureau of Labor Statistics' American Time Use Survey, they first establish that the amount of television a young child watches is positively related to the amount of precipitation in the child's community. This suggests that, if television is a trigger for autism, then autism should be more prevalent in communities that receive substantial precipitation. Next they look at county-level autism data for three states - California, Oregon, and Washington - characterized by high precipitation variability. Employing a variety of tests, they show that in each of the three states (and across all three states when pooled) there is substantial evidence that county autism rates are indeed positively related to county-wide levels of precipitation. In the final set of tests, they use California and Pennsylvania data on children born between 1972 and 1989 to show, again consistent with the television-as-trigger hypothesis, that county autism rates are also positively related to the percentage of households that subscribe to cable television. The precipitation tests indicate that just under 40 percent of autism diagnoses in the three states studied are the result of television watching because of precipitation, while the cable tests indicate that approximately 17 percent of the growth in autism in California and Pennsylvania during the 1970s and 1980s is attributable to the growth of cable television. These findings are consistent with early childhood television viewing being an important trigger for autism.

In 2003, 281 attending physicians on the general medicine services of six academic hospitals were surveyed concerning demographic and professional attributes. They also were asked to name three other general medicine attendings to whom they were most likely to turn for advice during a typical month on the inpatient general medical services. Based on the survey results, Meltzer and Salvatore find that both professional and personal factors affect whom such physicians ask for advice. Attendings are more likely to name colleagues who are more experienced (older), graduated from a higher-ranked medical school, and read more journals. However, personal factors are also highly influential, with attendings less likely to seek advice from colleagues who differ from them in gender, experience (age), sub-specialty, medical school rank, and journal readership. Women are more likely to name men as advisors than men are to name women.

Cutler, Huckman, and Kolstad test their theoretical predictions concerning the welfare effects of free entry in the presence of scarce inputs and heterogeneous quality by considering how the 1996 repeal of certificate-of-need (CON) legislation in Pennsylvania affected the market for coronary artery bypass graft (CABG) surgery in that state from 1993 to 2003. Variation in entry across markets following this exogenous regulatory change allows them to estimate the effect of entry on market quantity, output quality, and welfare. The 1996 repeal of CON led to a 56 percent increase in the number of hospitals offering CABG by 2003. This dramatic entry was not associated with an increase in the number of surgeries performed in the state. Rather, entry led to a redistribution of surgeries from lower- to higher-quality surgeons. Further, the researchers argue, the inelastic supply of high-quality cardiac surgeons limited the degree of excess entry following the repeal of CON. While they cannot observe cost, the facts that quality improved and that excessive entry was limited by the availability of surgeons together suggest that free entry increased welfare in Pennsylvania's market for cardiac surgery.

Lerner and Malmendier analyze how variations in contractibility affect the design of contracts in the context of biotechnology research agreements. A major concern of firms financing biotechnology research is that the R&D firms might use the funding to subsidize other projects or to substitute one project for another. The researchers develop a model based on the property-rights theory of the firm that allows researchers in the R&D firms to pursue multiple projects. When research activities are non-verifiable, it is optimal for the financing company to obtain the option right to terminate the research agreement while maintaining broad property rights to the terminated project. The option right induces the biotechnology firm researchers not to deviate from the proposed research activities. The contract prevents opportunistic exercise of the termination right by conditioning payments on the termination of the agreement. Using a new dataset on 584 biotechnology research agreements, the researchers find that the assignment of termination and broad intellectual property rights to the financing firm occurs in contractually difficult environments in which there is no specifiable lead product candidate. They also analyze how the contractual design varies with the R&D firm's financial constraints and research capacities and with the type of financing firm. The additional empirical results allow them to distinguish the property-rights explanation from alternative stories based on uncertainty and asymmetric information about project quality or research abilities.



Entrepreneurship Working Group Meeting

The Entrepreneurship Working Group met in Cambridge on October 20. Group Director Josh Lerner, NBER and Harvard Business School, organized this program:

Steven J. Davis, NBER and University of Chicago; John Haltiwanger, NBER and University of Maryland; and Javier Miranda and Ron Jarmin, Bureau of the Census, "Volatility and Dispersion in Business Growth Rates: Publicly Traded versus Privately Held Firms" (NBER Working Paper No. 12354)
Discussant: Richard Caves, Harvard University

Suresh De Mel, University of Peradeniya; David McKenzie, The World Bank; and Chris Woodruff, University of California, San Diego, "Returns to Capital in Microenterprises: Evidence from a Field Experiment"
Discussant: Shawn Cole, Harvard University

Thomas Hellmann, University of British Columbia; Laura Bottazzi, Bologna University; and Marco Da Rin, Tilburg University, "The Importance of Trust for Investment: Evidence from Venture Capital"
Discussant: Rebecca Zarutskie, Duke University

Morten Sorensen, University of Chicago, "Learning by Investing: Evidence from Venture Capital"
Discussant: Dirk Bergemann, Yale University

Panel Discussion: "Where is the Venture Capital Going? And Does it Matter?" Paul Gompers, Harvard University and NBER; Bill Helman, Greylock Partners; and Philip Rotner, MIT

Yael Hochberg, Northwestern University; and Alexander Ljungqvist and Yang Lu, New York University, "Networking as a Barrier to Entry and the Competitive Supply of Venture Capital"
Discussant: Toby Stuart, Harvard University

Davis and his co-authors study the variability of business growth rates in the U.S. private sector from 1976 onwards. They exploit the recently developed Longitudinal Business Database (LBD), which contains annual observations on employment and payroll for all U.S. businesses. Their central finding is a large secular decline in the cross-sectional dispersion of firm growth rates and in the average magnitude of firm-level volatility. Measured as in other recent research, the employment-weighted mean volatility of firm growth rates has declined by more than 40 percent since 1982. This stands in sharp contrast to previous findings of rising volatility for publicly traded firms in COMPUSTAT data. The researchers confirm the rise in volatility among publicly traded firms using the LBD, but show that its impact is overwhelmed by declining volatility among privately held firms. This pattern holds in every major industry group. Employment shifts toward older businesses account for 27 percent or more of the volatility decline among privately held firms. Simple cohort effects that capture higher volatility among more recently listed firms account for most of the volatility rise among publicly traded firms.
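For reference, one widely used convention for firm growth rates and their employment-weighted dispersion is the Davis-Haltiwanger-Schuh growth rate; the sketch below uses made-up firm data and may differ in details from the paper's exact volatility statistic.

```python
import numpy as np

# Davis-Haltiwanger-Schuh growth rate and an employment-weighted dispersion
# measure; the four firms and their employment levels are hypothetical.
emp_prev = np.array([120.0, 40.0, 8.0, 300.0])   # employment in year t-1
emp_curr = np.array([135.0, 25.0, 10.0, 310.0])  # employment in year t

avg_size = 0.5 * (emp_prev + emp_curr)
growth = (emp_curr - emp_prev) / avg_size        # bounded in [-2, 2]

weights = avg_size / avg_size.sum()
mean_g = (weights * growth).sum()
dispersion = np.sqrt((weights * (growth - mean_g) ** 2).sum())
print(f"employment-weighted growth dispersion: {dispersion:.3f}")
```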

Small and informal firms account for a large share of employment in developing countries. The rapid expansion of microfinance services is based on the belief that these firms have productive investment opportunities and can enjoy high returns to capital if given the opportunity. However, measuring the return to capital is complicated by unobserved factors such as entrepreneurial ability and demand shocks, which are likely to be correlated with capital stock. De Mel, McKenzie, and Woodruff use a randomized experiment to overcome this problem, and to measure the return to capital for the average microenterprise in their sample, regardless of whether or not it applies for credit. The researchers provide cash and equipment grants to small firms in Sri Lanka, and measure the increase in profits arising from this exogenous (positive) shock to capital stock. They find the average real return to capital to be around 4 percent per month, substantially higher than the market interest rate. They then use the heterogeneity of treatment effects to explore whether missing credit markets or missing insurance markets are the most likely cause of the high returns. Returns vary with entrepreneurial ability and with measures of other sources of cash within the household, but not with risk aversion or uncertainty. The researchers therefore conclude that credit constraints are the main reason for the high returns.
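For scale, compounding the reported average monthly return over a year (simple illustrative arithmetic, not a figure taken from the paper) gives:

```python
# Annualizing a ~4 percent-per-month average real return (illustration only).
monthly = 0.04
annual = (1 + monthly) ** 12 - 1
print(f"{monthly:.0%} per month compounds to roughly {annual:.0%} per year")
```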

The social capital literature finds a positive relationship between trust and economic growth or trade. Yet the use of macro-level data makes it difficult to identify the direction of causality. Hellmann, Bottazzi, and Da Rin examine hand-collected micro data on the patterns of venture capital investments, where the trust between the investor's and the company's countries is clearly exogenous. The researchers find that trust among nations has a significant effect on the likelihood that a venture capitalist invests in a company. This holds even after accounting for alternative factors, such as geographic distance, information, a variety of transaction costs, and even investor and company fixed effects. They also consider the relationship between trust and contracts and find no evidence that sophisticated contracts can be used to overcome a lack of trust. They conclude that trust is a fundamental force driving investment choices.

Venture capital investors (VCs) can create value by actively exploring new investment opportunities to learn about their returns. In traditional financial markets, a free-rider problem reduces exploration and learning, but VCs' organizational structure may limit information spillovers and reduce this problem. Sorensen presents a basic model of learning, based on the statistical Multi-Armed Bandit model. The value of an investment consists of both its immediate return and an option value of learning. When he estimates the model, it turns out that VCs who explore more have higher returns.
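A minimal two-period Bayesian bandit, with made-up numbers rather than the paper's estimated model, shows how an investment's value splits into an immediate expected return plus an option value of learning:

```python
# Two-arm, two-period sketch: a safe arm with known success probability and a
# risky arm with a Beta(1, 1) prior. All numbers are hypothetical.
known_mean = 0.55          # safe arm
alpha, beta = 1.0, 1.0     # prior on the risky arm
prior_mean = alpha / (alpha + beta)

# Exploit: take the safe arm in both periods.
value_exploit = 2 * known_mean

# Explore: try the risky arm first, then pick the better arm given the outcome.
post_if_success = (alpha + 1) / (alpha + beta + 1)
post_if_failure = alpha / (alpha + beta + 1)
continuation = (prior_mean * max(post_if_success, known_mean)
                + (1 - prior_mean) * max(post_if_failure, known_mean))
value_explore = prior_mean + continuation

# Option value of learning: gain over playing risky-then-safe without adapting.
option_value = value_explore - (prior_mean + known_mean)
print(f"exploit twice: {value_exploit:.3f}")
print(f"explore, then choose: {value_explore:.3f}")
print(f"option value of learning: {option_value:.3f}")
```

The gap between the adaptive "explore" plan and the non-adaptive one is the option value of learning; in the paper's estimates, this is the margin on which VCs who explore more earn higher returns.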

Many financial markets are characterized by strong relationships and networks, rather than arm's length, spot-market transactions. Hochberg, Ljungqvist, and Lu examine the potential entry-deterring effects of this organizational choice in the context of relationships established when VCs syndicate portfolio company investments, using U.S. data for the period 1980 to 2003. The results show that networking does help reduce entry: VC markets with more extensive networking among the incumbent players experience less entry, and the economic effect is sizeable. However, potential entrants can use their prior relationships with the incumbents, as well as previous investment experience in the industry or state, to overcome this barrier to entry. The researchers also document that companies seeking venture capital raise money on worse terms in more densely networked markets, and that increased entry into a market is associated with companies receiving increased valuations.



International Finance and Macroeconomics

The NBER's Program on International Finance and Macroeconomics met in Cambridge on October 27. Research Associates Charles Engel and Linda Tesar organized the program. The following papers were discussed:

Enrique G. Mendoza, International Monetary Fund and NBER; Vincenzo Quadrini, University of Southern California and NBER; and Victor Rios-Rull, University of Pennsylvania and NBER, "Financial Integration, Financial Deepness and Global Imbalances"
Discussant: Manuel Amador, Stanford University and NBER

Bong-Chan Kho, Seoul National University; Rene M. Stulz, Ohio State University and NBER; and Frank E. Warnock, University of Virginia and NBER, "Financial Globalization, Governance, and the Evolution of the Home Bias" (NBER Working Paper No. 12389)
Discussant: Joshua Coval, Harvard University and NBER

Ricardo J. Caballero and Guido Lorenzoni, MIT and NBER, "Persistent Appreciations, Overshooting, and Optimal Exchange Rate Interventions"
Discussant: Enrique G. Mendoza

Michael W. Klein, Tufts University and NBER; and Jay C. Shambaugh, Dartmouth College, "The Nature of Exchange Rate Regimes"
Discussant: Christian Broda, University of Chicago and NBER

Craig Burnside, Duke University and NBER; Martin Eichenbaum and Sergio Rebelo, Northwestern University and NBER; and Isaac Kleshchelski, Northwestern University, "The Returns to Currency Speculation"
Discussant: Eric van Wincoop, University of Virginia and NBER

Marianne Baxter, Boston University and NBER, "International Risk Sharing in the Short Run and the Long Run"
Discussant: Fabrizio Perri, New York University and NBER

Contrary to what many authors believe, large global financial imbalances need not be the harbinger of a world financial crash. Instead, Mendoza and his co-authors show that large and persistent global imbalances can be the outcome of financial integration when countries have different financial market characteristics. In particular, countries with more advanced financial markets accumulate foreign liabilities vis-a-vis countries with less developed financial systems in a gradual, long-lasting process. Moreover, differences in financial development affect the composition of foreign portfolios, so that a country with a negative net foreign asset position can receive positive net factor payments. Three empirical observations support these arguments: 1) financial deepness varies widely even amongst industrial countries, with the United States ranking at the top; 2) the secular decline in the U.S. net foreign asset position started with a gradual process of financial market liberalization; and 3) net exports and current account balances are negatively correlated with indicators of financial market development.

Despite the disappearance of formal barriers to international investment across countries, Kho and his co-authors find that the average home bias of U.S. investors towards the 46 countries with the largest equity markets did not fall from 1994 to 2004 if countries are equally weighted, but it fell if countries are weighted by market capitalization. This is inconsistent with portfolio theory explanations of the home bias, but it is consistent with what the authors call the optimal insider ownership theory of the home bias. Since foreign investors can only own shares not held by insiders, there will be a large home bias towards countries in which insiders own large stakes in corporations. Consequently, for the home bias to fall substantially, insider ownership has to fall in countries where it is high. Poor governance leads to concentrated insider ownership, so that governance improvements make it possible for corporate ownership to become more dispersed and for the home bias to fall. The researchers find that the home bias of U.S. investors decreased most towards countries in which ownership by corporate insiders was low and countries in which ownership by corporate insiders fell. Using firm-level data for Korea, they find that portfolio equity investment by foreign investors in Korean firms is inversely related to insider ownership, and that the firms that attract the most foreign portfolio equity investment are large firms with dispersed ownership.

Most economies experience episodes of large real exchange rate appreciations, when the question arises whether there is a need for intervention to protect the export sector. Caballero and Lorenzoni present a model of irreversible export destruction where exchange rate stabilization may be justified if the export sector is financially constrained. However, the criterion for intervention is not whether there are inefficient bankruptcies or not, but whether these can cause a large exchange rate overshooting (and real wage decline) once the factors behind the appreciation subside. The optimal policy often involves a mild initial intervention followed by an increasingly aggressive stabilization as the appreciation persists and the financial resources of the export sector dwindle. In some instances, the policy also involves an exacerbation of the initial overshooting during the depreciation phase.

The impermanence of fixed exchange rates has become a stylized fact in international finance. The combination of a view that pegs do not really peg with the "fear of floating" view that floats do not really float generates the conclusion that exchange rate regimes are, in practice, unimportant for the behavior of the exchange rate. This is consistent with evidence on the irrelevance of a country's choice of exchange rate regime for general macroeconomic performance. Recently, though, more studies have shown that the exchange rate regime does matter in some contexts. Klein and Shambaugh attempt to reconcile the perception that fixed exchange rates are only a "mirage" with the recent research showing the effects of fixed exchange rates on trade, monetary autonomy, and growth. First, they demonstrate that, while pegs frequently break, many do last, and those that break tend to re-form, so a fixed exchange rate today is a good predictor that one will exist in the future. Second, they study the effect of fixed exchange rate regimes on the behavior of the exchange rate itself: pegs exhibit greater bilateral exchange rate stability today and in the future, and they also display lower multilateral volatility, which may explain why exchange rate regimes have an effect on a number of different macroeconomic variables.

Currencies that are at a forward premium tend to depreciate. This "forward-premium puzzle" represents an egregious deviation from uncovered interest parity. Burnside and his co-authors document the properties of returns to currency speculation strategies that exploit this anomaly. The first strategy, known as the carry trade, is widely used by practitioners. This strategy involves selling currencies forward that are at a forward premium and buying currencies forward that are at a forward discount. The second strategy relies on a particular regression to forecast the payoff to selling currencies forward. The researchers show that these strategies yield high Sharpe ratios that are not a compensation for risk. However, these Sharpe ratios do not represent unexploited profit opportunities. In the presence of microstructure frictions, spot and forward exchange rates move against traders as they increase their positions. The resulting "price pressure" drives a wedge between average and marginal Sharpe ratios. The authors argue that marginal Sharpe ratios are zero even though average Sharpe ratios are positive.
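A minimal sketch of the carry-trade rule described above, using made-up monthly quotes rather than the authors' data or results:

```python
import numpy as np

# Carry trade: sell a currency forward when it trades at a forward premium
# (F > S), buy it forward when at a discount. All quotes are hypothetical.
spot      = np.array([1.00, 1.01, 0.99, 1.02, 1.00])     # S_t, domestic per foreign
forward   = np.array([1.02, 1.00, 1.01, 1.01, 1.02])     # F_t, one-month forward
spot_next = np.array([1.005, 1.005, 1.00, 1.005, 1.01])  # realized S_{t+1}

position = np.where(forward > spot, -1.0, 1.0)    # -1: sell forward, +1: buy forward
payoff = position * (spot_next - forward) / spot  # per unit of domestic currency bet

sharpe = payoff.mean() / payoff.std(ddof=1)
print(f"mean monthly payoff: {payoff.mean():.4f}, monthly Sharpe ratio: {sharpe:.2f}")
```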

Baxter extends and refines the study of international risk-sharing in two dimensions. First, she investigates risk-sharing at different horizons. In other words, countries might pool risks associated with high-frequency shocks (for example, seasonal fluctuations in crop yields) but might not share risks associated with low frequency shocks (for example, different long-run national growth rates). Second, she studies bilateral risk-sharing, which is different from the approach taken in most previous studies of risk-sharing. Her focus on bilateral risk-sharing stems from the observation that, because of such factors as financial linkages, common cultural linkages, or simply proximity, countries might share risks with some countries but not with others. Baxter compares direct tests of risk-sharing to indirect tests and finds that the indirect tests common in the literature are less informative in evaluating the extent of international risk-sharing.



Public Economics

The NBER's Program on Public Economics met in Cambridge on November 2-3. Joshua Rauh and Austan Goolsbee, NBER and University of Chicago, organized the meeting. These papers were discussed:

Mark Duggan, University of Maryland and NBER; Perry Singleton, University of Maryland; and Jae Song, Social Security Administration, "Aching to Retire? The Rise in the Full Retirement Age and its Impact on the Disability Rolls" (NBER Working Paper No. 11811)
Discussant: Jonathan Gruber, MIT and NBER

Jeffrey Liebman, Harvard University and NBER, and Emmanuel Saez, University of California, Berkeley and NBER, "Earnings Responses to Increases in Payroll Taxes"
Discussant: Austan Goolsbee

Jonathan Gruber, MIT and NBER, and Daniel M. Hungerman, University of Notre Dame and NBER, "The Church vs. The Mall: What Happens When Religion Faces Increased Secular Competition?" (NBER Working Paper No. 12410)
Discussant: Edward Glaeser, Harvard University and NBER

Joseph E. Stiglitz, Columbia University and NBER, and Anton Korinek, "Dividend Taxation and Intertemporal Tax Arbitrage"
Discussant: Raj Chetty, University of California, Berkeley and NBER

Stefania Albanesi, Columbia University and NBER, "Optimal Taxation of Entrepreneurial Capital with Private Information"
Discussant: Aleh Tsyvinski, Harvard University and NBER

Stephen Coate, Cornell University and NBER, and Brian Knight, Brown University and NBER, "Socially Optimal Districting: A Theoretical and Empirical Exploration"
Discussant: Richard Holden, MIT

Woodrow T. Johnson, University of Oregon, and James M. Poterba, MIT and NBER, "Taxes and the Trading Behavior of Mutual Fund Investors around Fund Distribution Dates"
Discussant: William Gentry, Williams College

Leora Friedberg, University of Virginia and NBER, and Anthony Webb, Center for Retirement Research, "Life is Cheap: Using Mortality Bonds to Hedge Aggregate Mortality Risk"
Discussant: Jeffrey Brown, University of Illinois and NBER

The Social Security Amendments of 1983 reduced the generosity of benefits for retired workers in the United States by increasing the program's full retirement age from 65 to 67 and increasing the penalty for claiming benefits at the early retirement age of 62. These changes were phased in gradually, so that individuals born in, or before, 1937 were unaffected and those born in 1960 or later were fully affected. No corresponding changes were made to the program's disabled worker benefits, and thus the relative generosity of Social Security Disability Insurance (SSDI) benefits increased. Duggan and his co-authors investigate the effect of the Amendments on SSDI enrollment by exploiting variation across birth cohorts in the policy-induced reduction in the present value of retired worker benefits. They find that the Amendments significantly increased SSDI enrollment since 1983, with an additional 0.6 percent of men and 0.9 percent of women between the ages of 45 and 64 receiving SSDI benefits in 2005 as a result of the changes. Their results further indicate that these effects will continue to increase during the next two decades, as those fully exposed to the reduction in retirement benefit generosity reach their fifties and early sixties.

Liebman and Saez use SIPP data matched to longitudinal uncapped earnings records from the Social Security Administration for 1981 to 1999 to analyze earnings responses to increases in tax rates and to inform discussions about the likely effects of raising the Social Security taxable maximum. The earnings distribution of workers around the current taxable maximum is inconsistent with an annual model in which people are highly responsive to the payroll tax rate, even in the subset of self-employed individuals. Panel data on married men with high earnings display a tremendous increase in earnings relative to other groups over the 1980s and 1990s, with no clear breaks around the key tax reforms. This suggests that other income groups cannot serve as a control group for the high earners. This analysis does not support the finding of a large behavioral response to taxation by wives of high earners. The researchers actually find a decrease in the labor supply of wives of high earners around both the 1986 and the 1993 tax reforms, which they attribute to an income effect attributable to the surge in primary earnings at the top. Policy simulations suggest that with an earnings elasticity of 0.5, lost income tax revenue and increased deadweight loss would swamp any benefits from the increase in payroll tax revenue. In contrast, with an elasticity of 0.2, the ratio of the gain in OASDI revenue to lost income tax revenue and deadweight loss would be much greater.

Gruber and Hungerman identify a policy-driven change in the opportunity cost of religious participation based on state laws that prohibit retail activity on Sunday, known as "blue laws." Many states have repealed these laws in recent years, raising the opportunity cost of religious participation. The researchers construct a model that predicts, under fairly general conditions, that allowing retail activity on Sundays will lower attendance levels but may increase or decrease religious donations. They then use a variety of datasets to show that when a state repeals its blue laws, religious attendance falls, and church donations and spending fall as well. These results do not seem to be driven by declines in religiosity prior to the law change, nor are comparable declines observed in membership or giving to nonreligious organizations after a state repeals its laws. The authors then assess the effects of changes in these laws on drinking and drug use reported in the NLSY. They find that repealing blue laws leads to an increase in drinking and drug use, and that this increase is found only among the initially religious individuals who were affected by the blue laws. The effect is economically significant; for example, the gap in heavy drinking between religious and non-religious individuals falls by about half after the laws are repealed.

Stiglitz and Korinek develop a life-cycle model of the firm to analyze the effects of dividend tax policy on aggregate investment. They find that new firms raise less equity and invest less as the level of dividend taxes increases, in accordance with the traditional view of dividend taxation. However, the dividend tax rate is irrelevant for the investment decisions of internally growing and mature firms, as postulated by the new view of dividend taxation. Since aggregate investment is dominated by these latter two categories, the level of dividend taxation as well as unanticipated changes in dividend tax rates have only a minor impact on aggregate investment and output. Anticipated dividend tax changes, on the other hand, allow firms to engage in inter-temporal tax arbitrage so as to reduce investors' tax burden. This can significantly distort aggregate investment. Anticipated tax cuts (increases) delay (accelerate) firms' dividend payments, which leads them to hold higher (lower) cash balances and, for capital-constrained firms, can significantly increase (decrease) aggregate investment for periods after the tax change.

Albanesi studies optimal taxation of entrepreneurial capital and financial assets in economies with private information. Returns to entrepreneurial capital are risky and depend on entrepreneurs' hidden effort. The idiosyncratic risk in capital returns implies that the intertemporal wedge on entrepreneurial capital that characterizes constrained-efficient allocations can be positive or negative. The properties of optimal marginal taxes on entrepreneurial capital depend on the sign of this wedge. If the wedge is positive, the optimal marginal capital tax is decreasing in capital returns, while the opposite is true when the wedge is negative. Optimal marginal taxes on other assets depend on their correlation with idiosyncratic capital returns. The optimal tax system equalizes after tax returns on all assets, thus reducing the variance of after tax returns on capital relative to other assets. If entrepreneurs are allowed to sell shares of their capital to outside investors, returns to externally owned capital are subject to double taxation: at the level of the entrepreneur and at the level of the outside investors. Even if entrepreneurs can purchase private insurance against their idiosyncratic risk, optimal asset taxes are essential to implement the constrained-efficient allocation if entrepreneurial portfolios are private information.

Coate and Knight investigate the problem of optimal districting in the context of a simple model of legislative elections. In the model, districting matters because it determines the seat-vote curve, which describes the relationship between seats and votes. The paper first characterizes the optimal seat-vote curve and shows that, under a weak condition, there exist districtings that generate this ideal relationship. The paper then develops an empirical methodology for computing seat-vote curves and measuring the welfare gains from implementing optimal districting. This is applied to the districting plans used to elect U.S. state legislators during the 1990s.

Capital gain distributions by open-end mutual funds accelerate the capital gains tax liability of their taxable investors, thereby reducing their after-tax return. Investors can reduce this acceleration by delaying the purchase of fund shares until after capital gain distributions take place. Tax-exempt investors, including individual investors who hold mutual funds in their IRAs, Keogh plans, and 401(k) accounts, have no such disincentives to invest prior to distributions. Johnson and Poterba examine data on fund inflows and outflows, and account openings and closures, at a sample of open-end mutual funds offered by a single fund complex. The data display a significant increase in inflows to taxable accounts, and an increase in the number of new accounts opened, in the weeks following distribution dates relative to the weeks prior to distributions. These findings are consistent with the hypothesis that at least some taxable investors adapt their portfolio trades to reduce their tax liabilities.

Insurance companies, employers, and the U.S. government all provide annuities and therefore assume aggregate mortality risk. Using the widely-cited Lee-Carter mortality model, Friedberg and Webb quantify aggregate mortality risk as the risk that the average annuitant lives longer than is predicted by the model, and determine that annuities expose providers to substantial risk. They also consider other recent actuarial forecasts, some of which lie at the edge or outside of the 95 percent confidence interval of Lee-Carter. They focus on the implications of aggregate mortality risk for insurance companies; this analysis can be extended to private pension providers and Social Security. Given the forecasts of the Lee-Carter model, they calculate that a markup of 3.9 percent on an annuity premium (or shareholders' capital equal to 3.9 percent of the expected present value of annuity payments) would be required to reduce the probability of insolvency resulting from aggregate mortality shocks to 5 percent, and a markup of 5.7 percent would reduce the probability of insolvency to 1 percent. Based on the same model, they find that a projection scale commonly referred to by the insurance industry underestimates aggregate mortality improvements and would leave annuities underpriced. Annuity providers could deal with aggregate mortality risk more efficiently by transferring it to financial markets through mortality-contingent bonds. The researchers calculate the returns that one recently-proposed mortality bond would have paid had it been available over a long period. Using both the Capital and the Consumption Capital Asset Pricing Models, they determine the risk premium that investors would have required to hold the bond. At plausible coefficients of risk aversion, annuity providers should be able to hedge aggregate mortality risk via such bonds at very low cost.
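For reference, the standard Lee-Carter specification models the log age-specific mortality rate with a common time factor, usually projected as a random walk with drift (the notation here is the conventional one, not necessarily the paper's):

```latex
\ln m_{x,t} = a_x + b_x\,k_t + \varepsilon_{x,t},
\qquad k_t = k_{t-1} + d + e_t .
```

Aggregate mortality risk then corresponds to realizations of the common factor k_t, and uncertainty about its drift d, that are more favorable to annuitants than forecast.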



Monetary Economics

The NBER's Program on Monetary Economics met in Cambridge on November 3. Christopher House and Matthew D. Shapiro, NBER and University of Michigan, organized the meeting, at which these papers were discussed:

Michael Elsby and Gary Solon, University of Michigan and NBER, and Ryan Michaels, University of Michigan, "Reassessing the Ins and Outs of Unemployment Again: Everyone's a Winner"
Discussant: Robert Shimer, University of Chicago and NBER

Olivier Blanchard, MIT and NBER, and Jordi Gali, Universitat Pompeu Fabra and NBER, "A New Keynesian Model with Unemployment"
Discussant: Robert E. Hall, Stanford University and NBER

Monika Piazzesi, University of Chicago and NBER, and Martin Schneider, New York University, "Inflation and the Price of Real Assets"
Discussant: Robert B. Barsky, University of Michigan and NBER

Laura Veldkamp, New York University, and Justin Wolfers, University of Pennsylvania and NBER, "Aggregate Shocks or Aggregate Information? Costly Information and Business Cycle Comovement"
Discussant: Mirko Wiederholt, Northwestern University

Emi Nakamura and Jon Steinsson, Harvard University, "Five Facts About Prices: A Reevaluation of Menu Cost Models"
Discussant: Mark Bils, University of Rochester and NBER

Responding to Shimer's already-influential manuscript "Reassessing the Ins and Outs of Unemployment," Elsby, Solon, and Michaels reconsider the extent to which the increased unemployment during a recession arises from an increase in the number of unemployment spells versus an increase in their duration. Like Shimer, they find an important role for increased duration. But contrary to Shimer's conclusions, they find that even his own methods and data, when viewed in an appropriate metric, reveal an important role for increased inflows to unemployment as well. This finding is further strengthened by their refinements of Shimer's methods of correcting for data problems and by an extension of his approach that enables a more detailed examination of particular components of the inflow to unemployment.
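A convenient benchmark for this ins-versus-outs decomposition (standard in the literature rather than specific to this paper) is the steady-state unemployment rate implied by the inflow rate s and the outflow rate f:

```latex
u^{*} = \frac{s}{s + f}
```

so a recessionary rise in unemployment can reflect a higher s (more spells), a lower f (longer durations), or both; the debate is over their relative contributions.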

Blanchard and Gali develop a utility-based model of fluctuations with nominal rigidities and unemployment. In doing so, they combine two strands of research: the New Keynesian model, with its focus on nominal rigidities, and the Diamond-Mortensen-Pissarides model, with its focus on labor market frictions and unemployment. Their analysis proceeds in steps. First, they leave nominal rigidities aside and show that, under a standard utility specification, productivity shocks have no effect on unemployment in the constrained efficient allocation. They then focus on the implications of alternative real wage setting mechanisms for fluctuations in unemployment. Next, they introduce nominal rigidities in the form of staggered price setting by firms. They derive the relationship between inflation and unemployment and discuss how it is influenced by the presence of real wage rigidities. Finally, they show the nature of the tradeoff between inflation and unemployment stabilization and draw the implications for optimal monetary policy.

In the 1970s, U.S. asset markets witnessed: 1) a 25 percent dip in the ratio of aggregate household wealth to GDP; and 2) negative co-movement of house and stock prices that drove a 20 percent portfolio shift out of equity into real estate. Piazzesi and Schneider use an overlapping generations model with uninsurable nominal risk to quantify the role of structural change in these events. They attribute the dip in wealth to the entry of baby boomers into asset markets, and to the erosion of bond portfolios by surprise inflation, both of which lowered the overall propensity to save. They also show that the Great Inflation led to a portfolio shift by making housing more attractive than equity. Apart from tax effects, a new channel is that disagreement about inflation across age groups drives up collateral prices when credit is nominal.

When similar patterns of expansion and contraction are observed across sectors, we call this a business cycle. Yet explaining the similarity and synchronization of these cycles across industries remains a puzzle. Whereas output growth across industries is highly correlated, identifiable shocks, like shocks to productivity, are far less correlated. Previous work has examined complementarities in production, but Veldkamp and Wolfers propose that sectors make similar input decisions because of complementarities in information acquisition. Because information about driving forces has a high fixed cost of production and a low marginal cost of replication, it can be more efficient for firms to share the cost of discovering common shocks than to invest in uncovering detailed sectoral information. Firms basing their decisions on this common information make highly correlated production choices. This mechanism amplifies the effects of common shocks, relative to sectoral shocks.

Nakamura and Steinsson establish five facts about prices in the U.S. economy: 1) The median duration of consumer prices when sales are excluded at the product level is 11 months. The median duration of finished goods producer prices is 8.7 months. 2) One-third of regular price changes are price decreases. 3) The frequency of price increases responds strongly to inflation while the frequency of price decreases and the size of price increases and price decreases do not. 4) The frequency of price change is highly seasonal: it is highest in the first quarter and lowest in the fourth quarter. 5) The hazard function of price changes for individual consumer and producer goods is downward sloping for the first few months and then flat (except for a large spike at 12 months in consumer services and all producer prices). These facts are based on CPI microdata and a new comprehensive dataset of microdata on producer prices that they construct from raw production files underlying the PPI. They show that the first, second and third facts are consistent with a benchmark menu-cost model, while the fourth and fifth facts are not.
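One common back-of-the-envelope conversion in this literature, shown here with hypothetical monthly frequencies rather than the authors' statistics, maps a constant monthly frequency of price change into an implied expected price duration:

```python
import math

# Under a constant monthly hazard, a frequency of price change f implies an
# expected duration of -1/ln(1 - f) months. Frequencies below are made up.
for f in (0.09, 0.11, 0.21):
    duration = -1.0 / math.log(1.0 - f)
    print(f"frequency {f:.2f}/month -> implied duration {duration:.1f} months")
```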



Higher Education

The NBER's Working Group on Higher Education met in Cambridge on November 9. Working Group Director Charles T. Clotfelter of Duke University organized the meeting. These papers were discussed:

Susan M. Dynarski, Harvard University and NBER, and Judith E. Scott-Clayton, Harvard University, "The Cost of Complexity in Federal Student Aid: Lessons from Optimal Tax Theory and Behavioral Economics" (NBER Working Paper No. 12227)
Discussant: Eric Bettinger, Case Western Reserve University

Marko Tervio, University of California, Berkeley, "Network Analysis of Three Academic Labor Markets"
Discussant: Richard Jensen, University of Notre Dame

Brian C. Cadena and Benjamin J. Keys, University of Michigan, "Self-Control Induced Debt Aversion: Evidence from Interest-Free Student Loans"
Discussant: Ofer Malamud, University of Chicago

Megan MacGarvie, Boston University and NBER, "Foreign Students and the Diffusion of Scientific and Technological Knowledge to and from American Universities"
Discussant: William Kerr, Harvard University

Zeynep Hansen, Washington University and NBER; Hideo Owan, Aoyama Gakuin University; and Jie Pan, Washington University, "The Impact of Group Diversity on Performance and Knowledge Spillover: An Experiment in a College Classroom" (NBER Working Paper No. 12251)
Discussant: Jacob Vigdor, Duke University and NBER

The federal system for distributing student financial aid rivals the tax code in its complexity. Both have been a source of frustration and a focus of reform efforts for decades, yet the complexity of the student aid system has received comparatively little attention from economists. Dynarski and Scott-Clayton describe the complexity of the aid system, and apply lessons from optimal tax theory and behavioral economics to show that complexity is a serious obstacle to both efficiency and equity in the distribution of student aid. They show that complexity disproportionately burdens those with the least ability to pay and undermines redistributive goals. They use detailed data from federal student aid applications to show that a radically simplified aid process can reproduce the current distribution of aid using a fraction of the information now collected.

Tervio analyzes the academic labor market as a citation network, where departments gain citations by placing their Ph.D. graduates into the faculty of other departments. The aim is to measure the distribution of influence and the possible division into clusters between academic departments in three disciplines (economics, mathematics, and comparative literature). Departmental influence is measured by a method similar to that used by Google to rank web pages. In all disciplines, the distribution of influence is significantly more skewed than the distribution of academic placements, because of a strong hierarchy of schools in which movements are seldom upwards. This hierarchy is strongest in economics. Tervio also finds that, in economics, there are clusters of departments that are significantly more connected within the cluster than across clusters. These clusters are consistent with anecdotal evidence about Freshwater and Saltwater schools of thought. There is a similar but weaker division within comparative literature, but not within mathematics.
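
The sketch below conveys the flavor of such an influence measure on a small, invented placement matrix: each hiring department distributes credit to the departments that trained its faculty, and influence is the fixed point of that recursion, computed here with a PageRank-style damped iteration. The data and parameters are hypothetical, and the specification is only a simplified stand-in for Tervio's method.

```python
import numpy as np

# Toy placement counts (hypothetical): P[i, j] = number of Ph.D. graduates of
# department i hired onto the faculty of department j. Not data from the paper.
P = np.array([[0, 5, 3, 2],
              [1, 0, 2, 1],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def influence_scores(P, damping=0.85, iters=1000):
    """A department is influential if its graduates are hired by departments
    that are themselves influential; each hiring department distributes one
    unit of credit across the departments that trained its faculty."""
    n = P.shape[0]
    col_sums = P.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    M = P / col_sums                      # column j: shares of j's hires by origin
    r = np.ones(n) / n
    for _ in range(iters):
        r = (1 - damping) / n + damping * M @ r
    return r / r.sum()

print(influence_scores(P))
```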

Cadena and Keys use insights from behavioral economics to offer an explanation for a particularly surprising borrowing phenomenon: nearly 20 percent of undergraduate students who are offered interest-free loans turn them down. The authors present a formal model of the financial aid process emphasizing that a rational agent would not reject interest-free student loans because doing so requires foregoing a significant government subsidy. A student with time-inconsistent preferences, however, may optimally choose to turn down subsidized loans to avoid excessive consumption during school. Thus, debt-averse behavior arises even among consumers who have no direct distaste for debt. Using the 2003-4 wave of the National Post-Secondary Student Aid Study (NPSAS), the authors investigate students' financial aid situations and subsidized loan take-up decisions. They exploit an institutional detail of the financial aid process to identify a group of students who should be especially vulnerable to self-control problems. Their results suggest that consumers choose to limit their liquidity in economically meaningful situations, consistent with the predictions of the behavioral model.

MacGarvie combines counts of the number of Science and Engineering doctorates by country of origin at U.S. universities with data on citations to and from U.S. universities' patents to study the relationship between labor mobility and international patterns of diffusion of scientific and technological knowledge. Preliminary findings suggest that knowledge diffuses from U.S. universities to foreign countries when doctoral recipients migrate internationally, and there is some evidence of foreign knowledge acquisition by U.S. universities when doctoral recipients move abroad. However, there appears to be little evidence that foreign countries benefit from improved access to U.S. science and technology contained in patents when doctoral recipients remain in the U.S. after graduation.

Hansen, Owan, and Pan combine class performance data from an undergraduate class with students' personal records to explore diversity and knowledge spillover effects. A major advantage of their dataset is the exogenous assignment of groups, which rules out the self-selection problem. Their results indicate that male-dominant groups performed worse, both in group work and in individually taken exams, than female-dominant and equally mixed gender groups. Individual members from a group with more diversity in age and gender scored higher in exams. Another novel aspect of this natural experiment is that each group chooses its own contract form: members of "autonomous" groups receive an equal grade for their group work, while those in "democratic" groups can adopt a differentiated point allocation, thus providing a mechanism to punish free riders. The estimation results show a significant correlation between the choice of a democratic contract and both group and individual performance.



Education Program Meeting

The NBER's Program on Education, directed by Caroline M. Hoxby of Harvard University, met in Cambridge on November 9 and 10. These papers were presented and discussed at the meeting:

Mark Hoekstra, University of Pittsburgh, "The Effect of Attending the Flagship State University on Earnings: A Regression Discontinuity Approach"

Carlos Dobkin, University of California, Santa Cruz, and Fernando Ferreira, University of Pennsylvania, "Should We Care About the Age at Which Children Enter School? The Impact of School Entry Laws on Educational Attainment and Labor Market Outcomes"

Philip Babcock, University of California, Santa Barbara, "From Ties to Gains? Evidence on Connectedness, Skill Acquisition, and Diversity"

Andrea Ichino, European University Institute; Pietro Garibaldi, University of Turin; Francesco Giavazzi, MIT and NBER; and Enrico Rettore, University of Padova, "College Cost and Time to Obtain a Degree: Evidence from Tuition Discontinuities"

Moshe Justman, Ben Gurion University, and Yaakov Gilboa, Sapir Academic College, "Equal Opportunity in Education: Lessons from the Kibbutz"

Adalbert Mayer and Steven Puller, Texas A&M University, "The Old Boy (and Girl) Network: Social Network Formation on University Campuses"

Sally Kwak, University of Hawaii-Manoa, "The Impact of Intergovernmental Incentives on Disability Rates and Special Education Spending"

By combining confidential admissions records from a large state university with earnings data collected through the state's Unemployment Insurance program, Hoekstra examines the effect of attending the flagship state university on the earnings of 28-33 year-olds. To distinguish this effect from the effects of confounding factors correlated with the university's admission decision, and/or the applicant's enrollment decision, he uses a regression discontinuity approach along with a conventional instrumental variable approach. The results indicate that attending the most selective state university causes earnings to be at least 10 percent higher for white men, an effect that is considerably higher than ordinary least squares estimates. However, he finds no effect on earnings for white women generally and only weak evidence of a positive effect for white women with strong attachment to the labor force.
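
The sketch below illustrates the regression discontinuity logic on simulated data: log earnings are regressed on an admission indicator and the running variable, with separate slopes on each side of the cutoff, within a bandwidth around the threshold, so the coefficient on admission captures the jump at the cutoff. The data, bandwidth, and functional form are invented for illustration and are not Hoekstra's specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data (not the confidential admissions or earnings records):
# `score` is an adjusted admission index, centered at the cutoff.
n = 5000
score = rng.uniform(-1, 1, n)
admitted = (score >= 0).astype(float)            # sharp cutoff for simplicity
log_earn = 10 + 0.5 * score + 0.10 * admitted + rng.normal(0, 0.5, n)

# Local linear regression within a bandwidth around the cutoff, with separate
# slopes on each side; the coefficient on `admitted` is the RD estimate.
h = 0.25
w = np.abs(score) <= h
X = sm.add_constant(np.column_stack([admitted[w], score[w], score[w] * admitted[w]]))
res = sm.OLS(log_earn[w], X).fit()
print("RD estimate of the log-earnings jump at the cutoff:", res.params[1])
```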

Dobkin and Ferreira examine how state laws regulating the age at which students can enter school affect children's progression through school, adult educational attainment, and labor market outcomes. Using the exact day of birth from the 2000 Long Form Decennial Census for the states of California and Texas, the authors first show that students born right before the cut-off date for school enrollment are 50-60 percentage points more likely to enroll in kindergarten a year earlier than similar students who were born right after the cut-off date. This effect is significantly larger for minorities and the children of parents with less than a high school education. The researchers also find that almost one third of the initial differences in enrollment rates disappear by ninth grade because the youngest children in a cohort are considerably more likely to be held back a grade during elementary school. Despite the striking differences in the timing of enrollment and the rates at which students are held back, the authors find only a modest impact on adult educational attainment, with individuals who enter school a year earlier having only about a one percentage point increase in their probability of completing high school. They find slightly larger effects for Hispanics and older cohorts. However, the additional education resulting from early school entry does not result in differences in employment rates or wages. This may be because of other confounding mechanisms, such as the impact of school entry laws on a student's age relative to peers and on the absolute age at which an individual is taught particular material. These results suggest that it is problematic to use school entry laws as an instrument for educational attainment when trying to estimate the returns to education.

Babcock uses micro-level data on social networks in middle and secondary schools to estimate the effects of connectedness on educational attainment and the association between racial diversity and connectedness. The analysis addresses concerns about unobserved neighborhood and school-level heterogeneity by using within-school variation between grade cohorts to identify effects of connectedness. There are two main findings: 1) Being part of a more connected cohort within a given secondary or middle school is associated with significantly more years of schooling attained and a higher probability of having attended college, seven years later. 2) Being part of a more racially diverse grade cohort, within a given school, is associated with significantly lower levels of connectedness - rare micro-level evidence to augment existing cross-region evidence on ethnic fractionalization and disconnectedness.

For many students throughout the world, time to obtain an academic degree extends beyond the normal completion time, while college tuition is essentially constant during the years of enrollment and, in particular, does not increase when a student remains in a program after its regular end. Using a Regression Discontinuity Design on data from Bocconi University in Italy, Ichino and his co-authors show that if tuition is raised by 1000 Euros in the last year of the program, the probability of late graduation decreases by 6.1 percentage points with respect to a benchmark average probability of 80 percent. The researchers conclude by showing that an upward sloping tuition profile may be efficient when effort is sub-optimally supplied in the presence of peer effects.

Justman and Gilboa use the unique circumstances of education in Israeli kibbutzim --communal villages - to derive two related findings. First, by regressing kibbutz members' test scores on parental education, the authors obtain an egalitarian standard of equal opportunity in education, measured as origin-independence, to which the degree of equal opportunity in other education systems can be compared. Second, by comparing this effect to its counterpart for the general Israeli population, they can quantitatively decompose the parental effect on test scores (in the general population) so as to distinguish between what money can and cannot buy.

Mayer and Puller document the structure and composition of social networks on university campuses and investigate the processes that lead to their formation. They use a large dataset that identifies students in one another's social network on campus and link these data to university records on each student's demographic and school outcome characteristics. The campus networks exhibit common features of social networks, such as clusteredness. The authors show that race is strongly related to social ties. In particular, blacks and Asians have disproportionately more same race friends than would arise from the random selection of friends, even after controlling for a variety of measures of socioeconomic background, ability, and college activities. Also, two students are more likely to be friends if they share the same major, participate in the same campus activities, and, to a lesser extent, come from the same socioeconomic background. Next, the authors develop a model of the formation of social networks that decomposes the formation of social links into effects based upon the exogenous school environment and effects of endogenous choice arising from preferences for certain characteristics in one's friends. They use student-level data from an actual social network to calibrate the model, which generates many of the characteristics common to social networks. They then simulate network structures under alternative university policies and find that changes in the school environment that affect the likelihood that two students interact have only a limited potential to reduce the segmentation of the social network.

In 1998, California converted from a system that awarded funds based on the number of disabled students in a district to one based on total enrollment. This change induced changes in the total funding awarded to different districts, and reduced the marginal "price" of an additional disabled student to zero. Kwak finds that the reform created both "income" and "substitution" effects on the number of students classified as disabled. In the short run, additional state special education grants translate into sizeable increases in special education spending, but in the longer run special education funds appear fungible across other spending needs.



Asset Pricing

The NBER's Program on Asset Pricing met at The Wharton School, University of Pennsylvania, on November 10. NBER researchers Leonid Kogan, Sloan School of Management at MIT, and Amir Yaron, The Wharton School, organized the meeting. These papers were discussed:

Stijn Van Nieuwerburgh, New York University and NBER, and Pierre-Olivier Weill, University of California, Los Angeles, "Why Has House Price Dispersion Gone Up?"
Discussant: Markus K. Brunnermeier, Princeton University and NBER

Robert Novy-Marx, University of Chicago, "Investment Cash Flow Sensitivity and the Value Premium"
Discussant: Joao Gomes, University of Pennsylvania

Arvind Krishnamurthy and Annette Vissing-Jorgensen, Northwestern University and NBER, "The Demand for Treasury Debt"
Discussant: Monika Piazzesi, University of Chicago and NBER

Martijn Cremers and Antti Petajisto, Yale University, "How Active is Your Fund Manager? A New Measure That Predicts Performance"
Discussant: Jonathan Berk, University of California, Berkeley and NBER

Long Chen, Michigan State University, and Xinlei Zhao, Kent State University, "Return Decomposition"
Discussant: John Heaton, University of Chicago and NBER

Lubos Pastor, University of Chicago and NBER, and Robert Stambaugh, University of Pennsylvania and NBER, "Predictive Systems: Living with Imperfect Predictors"
Discussant: Jonathan Lewellen, Dartmouth College and NBER

Van Nieuwerburgh and Weill investigate the 30-year increase in the level and dispersion of house prices across U.S. metropolitan areas, using a calibrated dynamic general equilibrium island model. The model is based on two main assumptions: households flow in and out of metropolitan areas in response to local wage shocks, and the housing supply cannot adjust instantly because of regulatory constraints. Feeding the documented 30-year increase in cross-sectional wage dispersion from metropolitan-level data into the model, the authors generate the observed increase in house price level and dispersion. In equilibrium, workers flow towards exceptionally productive metropolitan areas and drive house prices up. The calibration also reveals that, while a baseline level of regulation is important, a tightening of regulation by itself cannot account for the increase in house price level and dispersion: in equilibrium, workers flow out of tightly regulated and towards less regulated metropolitan areas, undoing most of the price impact of additional local supply regulations. Finally, the calibration with increasing wage dispersion suggests that the welfare effects of housing supply regulation are large.

Firms' equilibrium investment behavior explains two seemingly unrelated economic puzzles. Endogenous variation in firms' exposures to fundamental risks, resulting from optimal investment behavior, generates both investment-cash flow sensitivity and a countercyclical value premium. Novy-Marx explicitly characterizes the investment strategies of heterogeneous firms as rules in industry average-Q that depend on the industry's concentration and capital intensity. Firms are unconstrained, investing when and because marginal-q equals one, but investment is still associated strongly with positive cash-flow shocks and only weakly with average-Q shocks, because firm value is insensitive to demand when demand is high. A value premium arises, both within and across industries, because the market-to-book sorting procedure over-weights the value portfolio with high-cost producers, firms in slow growing industries, and firms in industries that employ irreversible capital, which are riskier, especially in "bad" times. The two puzzles are linked directly, with theory predicting value firms should exhibit stronger investment-cash flow sensitivities than growth firms.

Krishnamurthy and Vissing-Jorgensen show that the U.S. debt/GDP ratio is negatively correlated with the spread between corporate bond yields and Treasury bond yields. The result holds even when controlling for the default risk on corporate bonds. The authors argue that the corporate bond spread reflects a convenience yield that investors attribute to Treasury debt. Changes in the supply of Treasury debt trace out the demand for convenience by investors. The authors further show that the aggregate demand curve for the convenience provided by Treasury debt is downward sloping; they provide estimates of the elasticity of demand. They also analyze disaggregated data from the Flow of Funds Accounts of the Federal Reserve and show that individual groups of Treasury bond holders have downward sloping demand curves. Even groups with the most elastic demand curves have demand curves that are far from flat. The authors discuss the implications for the behavior of corporate bond spreads, interest rate swap spreads, and the value of aggregate liquidity and for the financing of the U.S. deficit, Ricardian equivalence, and the effects of foreign central bank demand on Treasury yields.
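
The sketch below illustrates, on simulated data, the kind of supply-demand relationship the authors estimate: regressing the corporate-Treasury spread on the log of the debt-to-GDP ratio traces out a downward-sloping demand curve for the convenience of Treasury debt. The series and coefficients are invented for illustration and are not the authors' data or estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated annual series (illustrative only): a larger supply of Treasuries
# lowers the convenience yield, so the corporate-Treasury spread falls as
# debt/GDP rises.
T = 80
log_debt_gdp = rng.normal(-0.8, 0.3, T)                     # log(Treasury debt / GDP)
spread = 0.9 - 0.5 * log_debt_gdp + rng.normal(0, 0.1, T)   # spread in percentage points

X = sm.add_constant(log_debt_gdp)
res = sm.OLS(spread, X).fit()
print("slope of the convenience-demand curve:", res.params[1])
```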

To quantify active portfolio management, Cremers and Petajisto introduce a new measure they label "Active Share". It describes the share of portfolio holdings that differ from the portfolio's benchmark index. They argue that to determine the type of active management for a portfolio, they need to measure it in two dimensions, using both Active Share and tracking error. They apply this approach to the universe of all-equity mutual funds to characterize how much and what type of active management each practices. The authors test how active management is related to characteristics such as fund size, expenses, and turnover in the cross-section, and look at the evolution of active management over time. They find that active management predicts fund performance: the funds with the highest Active Share significantly outperform their benchmark indexes, both before and after expenses, while the non-index funds with the lowest Active Share underperform. The most active stock pickers tend to create value for investors while factor bets and closet indexing tend to destroy value.
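
Active Share is one half of the sum of absolute differences between a fund's portfolio weights and its benchmark weights. The minimal sketch below computes it for two hypothetical portfolios; the holdings are invented for illustration.

```python
def active_share(fund_weights: dict, benchmark_weights: dict) -> float:
    """Active Share = 0.5 * sum of |fund weight - benchmark weight| over all holdings."""
    tickers = set(fund_weights) | set(benchmark_weights)
    return 0.5 * sum(abs(fund_weights.get(t, 0.0) - benchmark_weights.get(t, 0.0))
                     for t in tickers)

# Hypothetical weights: a pure index fund has Active Share 0, while a fund
# holding none of its benchmark has Active Share 1.
fund = {"AAA": 0.50, "BBB": 0.30, "CCC": 0.20}
bench = {"AAA": 0.40, "BBB": 0.40, "DDD": 0.20}
print(active_share(fund, bench))  # 0.5 * (0.10 + 0.10 + 0.20 + 0.20) = 0.30
```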

Chen and Zhao study the robustness of the return decomposition approach, in which unexpected equity return is decomposed into discount rate (DR) news and cash flow (CF) news. In this approach, DR news is modeled directly but CF news is usually backed out as a residual component. The researchers argue that the approach has serious limitations because CF news, as a residual, depends critically on how successfully DR news is modeled. One missing forecasting variable can change the balance of the two news components, and the conclusions that compare the two parts. To illustrate this, the researchers apply their approach to Treasury bonds, which should have zero CF variance and zero CF betas. Instead, they find that the variance of the "CF news" is larger than that of the DR news, and that bonds with longer maturities have higher "CF betas." Applying the approach to equity returns, they show that the relative importance of CF variance and DR variance of the market portfolio is sensitive to the choice of forecasting variables; and, for most forecasting-variable specifications, value stocks usually do not have higher CF betas. These results run counter to what is reported using the decomposition approach in the current literature. Further decomposing the CF news in the current literature into directly-modeled CF news and residual news, the authors show that opposite conclusions can be drawn depending on the nature of the residual news. They also reconcile their finding that value stocks do not have higher CF betas with the finding in a related literature that value stocks have higher CF covariation with aggregate CFs.
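
The sketch below shows, on simulated placeholder data, the mechanics of the decomposition being critiqued: a first-order VAR in returns and forecasting variables is estimated, discount-rate news is computed directly from the VAR, and cash-flow news is backed out as the residual, so its measured variance inherits whatever the chosen predictors miss. The state variables, discount coefficient, and sample are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
T, k = 600, 3                       # k state variables: the return first, then predictors
Z = rng.normal(size=(T, k))         # placeholder (demeaned) state vector, simulated

# Estimate a first-order VAR  Z_{t+1} = A Z_t + u_{t+1}  by OLS.
X, Y = Z[:-1], Z[1:]
A = np.linalg.lstsq(X, Y, rcond=None)[0].T
U = Y - X @ A.T                     # VAR innovations

rho = 0.96                          # log-linearization discount coefficient (assumed)
e1 = np.zeros(k); e1[0] = 1.0       # selects the return equation
# Discount-rate news: revision in expected future returns implied by the VAR.
lam = rho * A @ np.linalg.inv(np.eye(k) - rho * A)
dr_news = U @ (e1 @ lam)
# Cash-flow news backed out as the residual: unexpected return = CF news - DR news.
cf_news = U[:, 0] + dr_news
print("var(CF news):", cf_news.var(), " var(DR news):", dr_news.var())
```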

The standard regression approach to investigating return predictability seems too restrictive in one way but too lax in another. A predictive regression assumes that expected returns are captured exactly by a set of given predictors but does not exploit the likely economic property that innovations in expected returns are negatively correlated with unexpected returns. Pastor and Stambaugh develop an alternative framework - a predictive system - that accommodates imperfect predictors and beliefs about that negative correlation. In this framework, the predictive ability of imperfect predictors is supplemented by information in lagged returns as well as lags of the predictors. Compared to predictive regressions, predictive systems deliver different and substantially more precise estimates of expected returns as well as different assessments of a given predictor's usefulness.
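
The short simulation below conveys the structure of a predictive system: expected returns follow a persistent latent process, the observed predictor tracks them only with noise, and innovations to expected returns are negatively correlated with unexpected returns. All parameter values are invented for illustration and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
T, phi = 2000, 0.9
sigma_w, sigma_u, corr = 0.01, 0.04, -0.7     # invented parameter values
cov = [[sigma_w**2, corr * sigma_w * sigma_u],
       [corr * sigma_w * sigma_u, sigma_u**2]]

mu = np.zeros(T); x = np.zeros(T); r = np.zeros(T)
for t in range(1, T):
    w, u = rng.multivariate_normal([0.0, 0.0], cov)
    mu[t] = phi * mu[t - 1] + w               # latent expected return
    x[t] = mu[t] + rng.normal(0, 0.02)        # imperfect observed predictor
    r[t] = mu[t - 1] + u                      # realized return = expected + unexpected

print("corr(predictor_t, return_{t+1}):", np.corrcoef(x[:-1], r[1:])[0, 1])
```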



Corporate Finance

The NBER's Program on Corporate Finance met in Cambridge on November 10. NBER Research Associate Andrei Shleifer of Harvard University organized the meeting. These papers were discussed:

Camelia M. Kuhnen, Northwestern University, and Jeffrey Zwiebel, Stanford University, "Executive Pay, Hidden Compensation, and Managerial Entrenchment"
Discussant: Xavier Gabaix, MIT and NBER

Jin Xu, University of Chicago, "What Determines Capital Structure? Evidence from Import Competition"
Discussant: Michael Weisbach, University of Illinois and NBER

Bart Lambrecht, University of Lancaster, and Stewart C. Myers, NBER and MIT, "Debt and Managerial Rents in a Real-Options Model of the Firm"
Discussant: Douglas W. Diamond, University of Chicago and NBER

Douglas Baird, University of Chicago; Arturo Bris, Yale University; and Ning Zhu, University of California, Davis, "The Dynamics of Large and Small Chapter 11 Cases: An Empirical Study"
Discussant: Michelle J. White, University of California, San Diego and NBER

Nicola Gennaioli, Stockholm University, and Stefano Rossi, Stockholm School of Economics, "Optimal Resolutions of Financial Distress by Contract"
Discussant: Kenneth Ayotte, Columbia University

Simeon Djankov and Caralee McLiesh, World Bank, and Oliver D. Hart and Andrei Shleifer, Harvard University and NBER, "Debt Enforcement Around the World"
Discussant: David S. Scharfstein, Harvard University and NBER

Kuhnen and Zwiebel consider a managerial optimal framework for top executive compensation, where top management sets its own compensation subject to limited entrenchment, instead of the conventional setting where such compensation is set by a board that maximizes firm value. Top managers would like to pay themselves as much as possible, but are constrained by the need to ensure sufficient efficiency to avoid replacement. Shareholders can remove a manager, but only at a cost, and therefore will only do so if the anticipated future value of the manager (given by anticipated future performance net of future compensation) falls short of that of a replacement by this replacement cost. In this setting, observable compensation (salary) and hidden compensation (perks, pet projects, pensions) serve different roles for management and have different costs, and both are used in equilibrium. The authors examine the relationship between observable and hidden compensation and other variables in a dynamic model, and derive a number of unique predictions regarding these two types of pay. They then test these implications and find results that generally support the predictions of their model.

Xu uses the exogenous shock of greater import competition to study the effect of product market competition on corporate capital structure in the U.S. domestic textile and apparel sector. Theoretically, when import competition increases, expected domestic profitability drops, increasing the probability of bankruptcy and reducing the tax benefit of debt. According to the trade-off theory, optimal financial leverage should go down. Xu finds that after a quota-eliminating trade law took effect, firms in the sector significantly de-levered by reducing debt and increasing outside equity. The average textile and apparel firm reduced its leverage by 0.10, a 30 percent reduction, while the rest of the manufacturing sector barely de-levered. Xu extends the analysis by looking at the relation of capital structure to industry-level import penetration in a large sample of all U.S. manufacturing industries. Financial leverage is strongly negatively correlated with industry-level import penetration, controlling for documented determinants of leverage. Economically, a one standard deviation increase in import penetration corresponds to a decrease in leverage of about 20 percent of a standard deviation. Since import penetration could be endogenous to capital structure because of firms' strategic behavior, Xu uses the industry tariff as an instrumental variable for import penetration. The IV result is consistent with the OLS result. Robustness checks with alternative leverage measures confirm the basic result. The results can be explained best by the trade-off theory. Xu also finds some evidence consistent with the disciplinary role of debt hypothesis, but it accounts for only a small portion of the competition-leverage effect.

Lambrecht and Myers present a theory of capital investment and debt and equity financing in a real-options model of a public corporation. The model assumes that managers maximize the present value of their future compensation (managerial rents), subject to constraints imposed by outside shareholders' property rights to the firm's assets. The authors show that managers adopt an optimal debt policy that generates efficient investment and disinvestment decisions. Optimal debt equals the liquidation value of the firm's assets and is therefore default-risk free. But managers' personal wealth constraints can justify additional risky debt to fund positive-NPV investments. Changes in cash flow can cause changes in investment by tightening or loosening the wealth constraints.

Baird and his co-authors show that the dynamics of Chapter 11 turn dramatically on the size of the business. The vast majority of the assets administered in Chapter 11 are concentrated in a handful of large cases, but most of the businesses in Chapter 11 are small, and the smaller the business, the smaller the distribution to general unsecured creditors. For businesses with assets above $5 million, unsecured creditors typically collect half of what they are owed. Where the business's assets are worth less than $200,000, ordinary general creditors usually recover nothing. In the typical small Chapter 11 case, the tax collector is the central figure. In small business bankruptcies, priority tax liabilities are the largest unsecured liabilities of the business. Tax obligations are entitled to priority and are obligations of both the corporation and those who run it. Given the large shadow that tax claims cast over small Chapter 11 reorganizations, accounts of small Chapter 11 cases must focus squarely on them.

Gennaioli and Rossi theoretically explore the possibility that parties might efficiently resolve financial distress by contract as opposed to exclusively relying on state intervention. They characterize which financial contracts are optimal depending on legal protection of investors against fraud, and how efficient the resulting resolution of financial distress is. They find that when legal protection against fraud is strong, issuing a convertible debt security to a large, secured creditor allows the parties to attain the first best. Conversion of debt into equity upon default allows the debtor to collateralize the whole firm to the creditor, not just certain physical assets, thereby inducing the creditor to internalize the upside from efficient reorganization. When instead legal protection against fraud is poor, straight debt with foreclosure is the only feasible contract, even if it induces over-liquidation. The normative implication of this analysis is that an efficient resolution of financial distress is attained under freedom of contracting and strong protection against fraud.

Djankov, McLiesh, Hart, and Shleifer present insolvency practitioners from 88 countries with an identical case of a hotel about to default on its debt, and ask them to describe in detail how debt enforcement against this hotel will proceed in their countries. The researchers use the data on time, cost, and the likely disposition of the assets (preservation as a going concern versus piecemeal sale) to construct a measure of the efficiency of debt enforcement in each country. They identify several characteristics of the debt enforcement procedure, such as the structure of appeals and the availability of floating charge finance, that influence efficiency. This measure of efficiency is strongly correlated with per capita income and legal origin and predicts debt market development across countries. Interestingly, it is also highly correlated with measures of the quality of government obtained in other studies.



Behavioral Economics

The NBER's Working Group on Behavioral Economics met in Cambridge on November 11. NBER Research Associates and Group Directors Robert J. Shiller of Yale University and Richard H. Thaler, University of Chicago, organized this program:

Tal Fishman and Harrison Hong, Princeton University, and Jeffrey D. Kubik, Syracuse University, "Do Arbitrageurs Amplify Economic Shocks?"
Discussant: Michael Rashes, Bracebridge Capital

John Y. Campbell, Harvard University and NBER, and Jens Hilscher and Jan Szilagyi, Harvard University, "In Search of Distress Risk" (NBER Working Paper No. 12362)
Discussant: Tyler Shumway, University of Michigan

Andrea Frazzini, University of Chicago, and Owen Lamont, Yale University and NBER, "The Earnings Announcement Premium and Trading Volume"
Discussant: Steven Heston, University of Maryland

David Hirshleifer and Siew Hong Teoh, University of California, Irvine, and Sonya Seongyeon Lim, DePaul University, "Driven to Distraction: Extraneous Events and Underreaction to Earnings News"
Discussant: Stefano Della Vigna, University of California, Berkeley and NBER

Robin Greenwood, Harvard University, and Stefan Nagel, Stanford University and NBER, "Inexperienced Investors and Speculative Bubbles"
Discussant: Jeremy C. Stein, Harvard University and NBER

Massimo Massa and Lei Zhang, INSEAD, "Cosmetic Mergers: The Effect of Style Investing on the Market for Corporate Control"
Discussant: Malcolm Baker, Harvard University and NBER

Fishman, Hong, and Kubik consider whether arbitrageurs amplify fundamental shocks in the context of short arbitrage in equity markets. The ability of arbitrageurs to hold on to short positions depends on asset values: shorts are often reduced following good news about a stock. As a result, the prices of highly shorted stocks are excessively sensitive to economic shocks. Using monthly short interest data and exploiting differences in short selling regulations across stock exchanges to instrument for the amount of shorting in a stock, the authors find: 1) The price of a highly shorted stock is more sensitive to earnings news than a stock with little short interest. 2) Short interest changes around announcements (proxied by share turnover) are more sensitive to earnings surprises for highly shorted stocks. 3) For highly shorted stocks, returns to shorting are higher following better earnings news. 4) These differential sensitivities are driven by very good earnings news as opposed to very bad earnings news. These findings point to the importance of limited arbitrage in affecting asset price dynamics and the potentially destabilizing role of speculators.

Campbell, Hilscher, and Szilagyi explore the determinants of corporate failure and the pricing of financially distressed stocks using U.S. data for 1963-2003. Firms with higher leverage, lower profitability, lower market capitalization, lower past stock returns, more volatile past stock returns, lower cash holdings, higher market-book ratios, and lower prices per share are more likely to file for bankruptcy, be de-listed, or receive a D rating. When predicting failure at longer horizons, the most persistent firm characteristics - market capitalization, the market-book ratio, and equity volatility - become relatively more significant. The model here captures much of the time variation in the aggregate failure rate. Since 1981, financially distressed stocks have delivered anomalously low returns. They have lower returns but much higher standard deviations, market betas, and loadings on value and small-cap risk factors than stocks with a low risk of failure. These patterns hold in all size quintiles but are particularly strong in smaller stocks. They are inconsistent with the conjecture that the value and size effects are compensation for the risk of financial distress.

On average, stock prices rise around scheduled earnings announcement dates. Frazzini and Lamont show that this earnings announcement premium is large, robust, and strongly related to the fact that volume surges around announcement dates. Stocks with high past announcement period volume earn the highest announcement premium, suggesting some common underlying cause for both volume and the premium. The authors show that high premium stocks experience the highest levels of imputed small investor buying, suggesting that the premium is driven by buying by small investors when the announcement catches their attention.

Psychological evidence indicates that it is hard to process multiple stimuli and perform multiple tasks at the same time. Hirshleifer and his co-authors test the investor distraction hypothesis, which holds that the arrival of extraneous news causes trading and market prices to react sluggishly to relevant news about a firm. They focus on the competition for investor attention between a firm's earnings announcements and the earnings announcements of other firms. They find that the immediate stock price and volume reaction to a firm's earnings surprise is weaker, and post-earnings announcement drift is stronger, when a greater number of earnings announcements by other firms are made on the same day. A trading strategy that exploits post-earnings announcement drift is most profitable for earnings announcements made on days with a lot of competing news, but it is not profitable for announcements made on days with little competing news.

Asset market experiments suggest that inexperienced investors play a role in the formation of asset price bubbles. Without first-hand experience of a downturn, these investors are more optimistic and likely to exhibit trend chasing in their portfolio decisions. Greenwood and Nagel examine this hypothesis with mutual fund manager data from the technology bubble. Using age as a proxy for managers' investment experience, they find that around the peak of the bubble, mutual funds run by younger managers are more heavily invested in technology stocks, relative to their style benchmarks, than their older colleagues. Consistent with the experimental evidence, the authors find that young managers, but not old managers, exhibit trend-chasing behavior in their technology stock investments. As a result, young managers increase their technology holdings during the run-up, and decrease them during the downturn. The economic significance of young managers' actions is amplified by large inflows into their funds prior to the peak in technology stock prices. These results are unlikely to be explained by standard career concerns models or by differences in the ability to pick technology stocks between young and old managers.

Massa and Zhang study the impact of style investing on the market for corporate control. By using data on the flows in mutual funds, they construct a measure of "neglectedness" that is not a direct transformation of stock market data, but directly relies on the identification of the sentiment-induced investor demand. They show that bidders tend to pair with targets that are relatively less neglected. The merger with a less neglected target generates a "halo effect" from the target to the bidder that induces the market to evaluate the assets of the more neglected bidder at the (inflated) market value of the less neglected target. Both bidder and target premiums are positively related to the difference in neglectedness between bidder and target. However, the target's ability to appropriate the gain is reduced by the fact that its bargaining position is weaker when the potential for asset appreciation of the bidder is higher. The effect on the value of the bidder is persistent in the medium run (1-2 years). The authors document a better medium-term performance of more neglected firms taking over less neglected ones. The bidder managers engaging in these types of "cosmetic mergers" take advantage of the temporary window of opportunity created by the higher stock price induced by the M and A deal to reduce their stake in the firm on favorable terms.



Labor Studies

The NBER's Program on Labor Studies met in Cambridge on November 17. NBER Research Associates Lawrence F. Katz and Richard B. Freeman, both of Harvard University, organized the program. These papers were discussed:

Joseph G. Altonji, Yale University and NBER, and Anthony A. Smith and Ivan Vidangos, Yale University, "Modeling Earnings Dynamics"

Thomas Lemieux, University of British Columbia and NBER; W. Bentley Macleod, Columbia University; and Daniel Parent, McGill University, "Performance Pay and Wage Inequality"

Sandra E. Black, University of California, Los Angeles and NBER, and Alexandra Spitz-Oener, Humboldt University Berlin, "Explaining Women's Success: Technological Change and the Skill Content of Women's Work"

Bryan S. Graham, University of California, Berkeley and NBER; Guido W. Imbens, Harvard University and NBER; and Geert Ridder, University of Southern California, "Complementarity and Aggregate Implications of Assortative Matching: A Nonparametric Analysis"

Carmit Segal, Harvard University, "Motivation, Test Scores, and Economic Success"

Anne Case and Christina Paxson, Princeton University and NBER, "Stature and Status: Height, Ability, and Labor Market Outcomes" (NBER Working Paper No. 12466)

Altonji, Smith, and Vidangos use generalized indirect inference to estimate a joint model of earnings, employment, job changes, wage rates, and work hours over a career. Their model incorporates state and duration dependence in several variables, multiple sources of unobserved heterogeneity, job-specific error components in both wages and hours, and measurement error. They estimate the dynamic response of wage rates, hours, and earnings to various shocks, and measure the relative contributions of the shocks to the variance of earnings in a given year and over a lifetime. Shocks associated with job changes make a large contribution to the variance of career earnings and operate mostly through the job-specific error components in wages and hours. Unemployment shocks also make a large contribution and operate mostly through long-term effects on the wage rate.

An increasing fraction of jobs in the U.S. labor market explicitly pay workers for their performance using a bonus, a commission, or a piece rate. In this paper, Lemieux, Macleod, and Parent look at the effect of the growing incidence of performance pay on wage inequality. The basic premise of the paper is that performance pay jobs have a more "competitive" pay structure that rewards productivity differences more than other jobs. Consistent with this view, the authors show that compensation in performance pay jobs is more closely tied to both measured (by the econometrician) and unmeasured productive characteristics of workers. The authors conclude that the growing incidence of performance pay accounts for 25 percent of the growth in male wage inequality between the late 1970s and the early 1990s, and for most of the growth in top-end wage inequality (above the 80th percentile) during this period.

Black and Spitz-Oener adopt a task-based view of technological change and examine how the proliferation of computers in the 1980s and 1990s has affected women's tasks relative to those of men. Using data from West Germany, they find that women have witnessed large relative increases in non-routine analytic tasks and non-routine interactive tasks between 1979 and 1999. However, the most notable difference between the genders is the pronounced decline in routine task inputs among women with almost no change in routine task input for men. Consistent with the skill-biased technological change hypothesis, task changes were most pronounced within occupations, whereas only minor parts of the aggregate trends are attributable to women who were moving towards more skill-intensive occupations. In addition, the task changes are occurring most rapidly in occupations in which computers have made major headway. Overall - and in contrast to recent literature that puts a strong emphasis on only one dimension of activities on the job, namely interactive tasks - the researchers show that changes in job content have evolved differently for men and women on several dimensions.

Graham, Imbens, and Ridder present methods for evaluating the effects of reallocating an indivisible input across production units. When production technology is nonseparable such reallocations, although leaving the marginal distribution of the reallocated input unchanged by construction, may nonetheless alter average output. Examples include reallocations of teachers across classrooms composed of students of varying mean ability and altering assignment mechanisms for college roommates in the presence of social interactions. The researchers focus on the effects of reallocating one input while holding the assignment of another, potentially complementary input, fixed. They present a class of such reallocations - correlated matching rules - that includes the status quo allocation, a random allocation, and both the perfect positive and negative assortative matching allocations as special cases. Their econometric approach involves first nonparametrically estimating the production function and then averaging this function over the distribution of inputs induced by the new assignment rule.
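
The sketch below mimics that two-step logic on simulated data: a production function in two inputs is estimated nonparametrically (here by simple binned means), and the estimated function is then averaged over the joint input distribution induced by a counterfactual reallocation (here, perfect positive assortative matching), holding both marginal distributions fixed. The data-generating process and estimator are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
teacher = rng.uniform(0, 1, n)            # input to be reallocated (e.g., teacher quality)
classroom = rng.uniform(0, 1, n)          # input held fixed (e.g., mean student ability)
y = teacher * classroom + 0.5 * teacher + rng.normal(0, 0.1, n)   # complementary technology

# Step 1: nonparametric estimate of the production function by binned means.
bins = 20
ti = np.minimum((teacher * bins).astype(int), bins - 1)
ci = np.minimum((classroom * bins).astype(int), bins - 1)
f_hat = np.zeros((bins, bins)); counts = np.zeros((bins, bins))
np.add.at(f_hat, (ti, ci), y); np.add.at(counts, (ti, ci), 1)
f_hat = f_hat / np.maximum(counts, 1)

# Step 2: average the estimated function over the input distribution induced by
# a reallocation (perfect positive assortative matching), keeping both marginals fixed.
tj = np.minimum((np.sort(teacher) * bins).astype(int), bins - 1)
cj = np.minimum((np.sort(classroom) * bins).astype(int), bins - 1)
print("status quo, predicted mean output:        ", f_hat[ti, ci].mean())
print("positive assortative, predicted mean output:", f_hat[tj, cj].mean())
```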

Segal investigates how low-stakes test scores relate to economic success. The inferences in the economic literature on this subject are mostly based on tests, without performance-based incentives, administered to survey participants. Segal argues that the lack of performance-based incentives allows for the possibility that higher test scores are caused by noncognitive skills associated with test-taking motivation, and not necessarily by cognitive skills alone. This suggests that the coding speed test, which is a short and very simple test available for participants in the National Longitudinal Survey of Youth 1979 (NLSY), may serve as a proxy for test-taking motivation. To gather more definite evidence on the motivational component in the coding speed test, Segal conducts a controlled experiment, inducing motivation by the provision of incentives. In the experiment, the average performance improved substantially and significantly once incentives were provided. More importantly, there were heterogeneous responses to incentives. Roughly a third of the participants improved their performance significantly in response to performance-based incentives, while the others did not. These two groups have the same test score distributions when incentives are provided, suggesting that some participants are less motivated and invest less effort when no performance-based incentives are provided. These participants, however, are not less able. How do the coding-speed test scores relate to economic success? Focusing on male NLSY participants, Segal shows that the coding speed scores are highly correlated with earnings 23 years after NLSY participants took the test, even after controlling for usual measures of cognitive skills, like the Armed Forces Qualification Test (AFQT) scores. Moreover, for highly educated workers, the association between AFQT scores and earnings is significantly larger than the one between coding speed scores and earnings, while for less educated workers these associations are of similar size.

It has long been recognized that taller adults hold jobs of higher status and, on average, earn more than other workers. A large number of hypotheses have been put forward to explain the association between height and earnings. In developed countries, researchers have emphasized factors such as self esteem, social dominance, and discrimination. In this paper, Case and Paxson offer a simpler explanation. Prenatal and early childhood health and nutrition have critical effects on both growth and cognitive development. As a result, on average, taller people earn more because they are smarter. As early as age 3 - before schooling has had a chance to play a role - and throughout childhood, taller children perform significantly better on cognitive tests. The correlation between height in childhood and adulthood is approximately 0.7 for both men and women, so that tall children are much more likely to become tall adults. As adults, taller individuals are more likely to select into higher paying occupations that require more advanced verbal and numerical skills and greater intelligence, for which they earn handsome returns. Using four datasets from the United States and the United Kingdom, the researchers find that the height premium in adult earnings can be explained by childhood scores on cognitive tests. Furthermore, they show that taller adults select into occupations that have higher cognitive skill requirements and lower physical skill demands.



Productivity

The NBER's Program on Productivity met in Cambridge on December 1, 2006. Nick Bloom and Kathryn L. Shaw, Stanford University and NBER, organized the program. These papers were discussed:

Jan De Loecker, New York University, "Product Differentiation, Multi-Product Firms and Structural Estimation of Productivity"
Discussant: Marc Muendler, University of California, San Diego

Francine Lafontaine and Jagadeesh Sivadasan, University of Michigan, "The Microeconomic Implications of Input Market Regulations: Cross-Country Evidence from Within the Firm"
Discussant: Lee Branstetter, Carnegie-Mellon University and NBER

Nick Bloom; Raffaela Sadun, London School of Economics; and John Van Reenen, London School of Economics and NBER, "It Ain't What You Do But the Way that You Do IT: Investigating the U.S. Productivity Miracle Using Multinationals"
Discussant: Susanto Basu, Boston College and NBER

Anne P. Bartel and Casey Ichniowski, Columbia University and NBER; Kathryn L. Shaw; and Ricardo Correa, Federal Reserve Board of Governors, "International Differences in the Adoption and Impact of New Information Technologies and New HR Practices: The Valve-Making Industry in the U.S. and the U.K."
Discussant: Scott Stern, Northwestern University and NBER

Sabien Dobbelaere, Ghent University, and Jacques Mairesse, CREST and NBER, "Product Market and Labor Market Imperfections and Heterogeneity in Panel Data Estimates of the Production Function"
Discussant: Chad Syverson, University of Chicago and NBER

Bronwyn H. Hall, University of California, Berkeley and NBER; Grid Thoma, Bocconi University; and Salvatore Torrisi, University of Bologna, "The Market Value of Patents and R&D: Evidence from European Firms"
Discussant: Megan MacGarvie, Boston University and NBER

De Loecker proposes a methodology for estimating (total factor) productivity in an environment of product differentiation and multi-product firms. In addition to correcting for the simultaneity bias in the estimation of production functions, he controls for the omitted price bias, as documented by Klette and Griliches (1996). By aggregating demand and production from product space into firm space, he can use plant-level data to estimate productivity. The productivity estimates are corrected for demand shocks and, as by-products, he recovers the elasticity of demand and implied mark-ups. He applies this methodology to the Belgian textile industry, using a dataset where he has matched firm-level with product-level information. The resulting production coefficients and productivity estimates change considerably after taking into account the demand variation and the product mix. Finally, he analyzes the effects of trade liberalization in the Belgian textile industry. While he finds significant productivity gains from trade liberalization, the estimated effects are approximately half of those obtained with standard techniques.

Lafontaine and Sivadasan investigate the microeconomic implications of labor regulations that protect employment and are expected to increase rigidity in labor markets. They exploit a unique outlet-level dataset obtained from a multi-national food chain operating about 2,840 retail outlets in over 48 countries outside the United States. The dataset provides information on output, input costs, and labor costs at a weekly frequency over a four-year period, allowing the authors to examine the consequences of increased rigidity at a much more detailed level than has been possible with commonly available annual frequency or aggregate data. They find that higher levels of the index of labor market rigidity are associated with significantly lower output elasticity of labor demand, as well as significantly higher levels of hysteresis (measured as the elasticity of current labor costs with respect to the previous week's). Specifically, an increase of one standard deviation in the labor regulation rigidity index 1) reduces the response of labor cost to a one standard deviation increase in output (revenue) by about 4.7 percentage points (from 27.2 percent to 22.5 percent); and 2) increases the response of labor cost to a one standard deviation increase in lagged labor cost by about 9.6 percentage points (from 17.8 percent to 27.4 percent). These estimates imply an increase in gross misallocation of labor of about 2 to 5 percent for a single standard deviation increase in the index of labor regulation. Finally, they find that the Company delayed entry, operates fewer outlets, and favors franchising in countries with more rigid labor laws. Overall, the data imply a strong impact of rigid labor laws on labor input and related decisions at the micro level.

Productivity growth in sectors that intensively use information technologies (IT) appears to have accelerated much faster in the United States than in Europe since 1995, leading to the U.S. "productivity miracle". If this was partly attributable to the superior management or organization of U.S. firms (rather than simply the advantages of being located in the United States geographically), we would expect to see a stronger association of productivity with IT for U.S. multinationals (compared to non-US multinationals) located in Europe. Bloom and his co-authors examine a large panel of U.K. establishments and provide evidence that U.S.-owned establishments do have a stronger relationship between productivity and IT capital than either non-U.S. multinationals or domestic establishments. Indeed, the differential effect of IT appears to account for almost all of the difference in total factor productivity between U.S.-owned and all other establishments. This finding holds in the cross section, when including fixed effects, and even when the authors examine a sample of establishments taken over by U.S. multinationals. They find that the U.S. multinational effect on IT is particularly strong in the sectors that intensively use information technologies (such as retail and wholesale): the very same industries that accounted for the U.S.-European productivity growth differential since the mid-1990s.

There is now a well-developed body of macroeconomic evidence that information technology (IT) investments are likely to have "paid off" with higher levels of productivity growth in industries that invested more heavily in IT in recent years. In their earlier work, Bartel, Ichniowski, and Shaw provided evidence confirming that valve manufacturing plants that adopt new IT are in fact the same ones that increase the customization and reliability of their products and increase the speed and efficiency of their operations, thereby providing an explanation of what lies behind the macro-level trends. An important question is whether plants outside of the United States gain as much from IT as U.S. plants. In this paper with Correa, they add data from the United Kingdom to their U.S. dataset. Based on this combined data, they find that the plants in the United Kingdom have experienced the same changes that are evident in the United States: pronounced increases in productivity and increased skill demand associated with increases in the purchase of new capital that has IT embedded in it.

Embedding the efficient bargaining model into the original R. E. Hall (1988) approach for estimating price-cost margins shows that imperfections in the product and labor markets generate a wedge between factor elasticities in the production function and their corresponding shares in revenue. Dobbelaere and Mairesse investigate these two sources of discrepancies, both at the sector level and the firm level, using an unbalanced panel of 10,646 French firms in 38 manufacturing sectors over the period 1978-2001. By estimating standard production functions and comparing the estimated factor elasticities for labor and materials with their shares in revenue, they are able to derive estimates of the average price-cost mark-up and the extent of rent sharing. For manufacturing as a whole, their preferred estimates of these parameters are of an order of magnitude of 1.3 and 0.5, respectively. Their sector-level results indicate that sector differences in these parameters, and in the underlying estimated factor elasticities and shares, are quite sizeable. Since firm production functions, behavior, and market environments are very likely to vary even within sectors, they also investigate firm-level heterogeneity in the estimated mark-up and rent-sharing parameters. To determine the degree of true heterogeneity in these parameters, they adopt the P.A. Swamy (1970) methodology, allowing them to correct the observed variance of the firm-level estimates for their sampling variance. The median of the firm estimates of the price-cost mark-up, ignoring labor market imperfections, is 1.10, while as expected it is higher - 1.20 - when taking them into account. The median of the corresponding firm estimates of the extent of rent sharing is 0.62. The corresponding Swamy robust estimates of true dispersion are about 0.18, 0.37, and 0.35, indicating very sizeable within-sector firm heterogeneity. The authors find that firm size, capital intensity, distance to the sector technology frontier, and investing in R and D seem to account for a significant part of this heterogeneity.
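
A stripped-down numerical illustration of the wedge logic described above: under imperfect competition in the product market, the estimated output elasticity of a flexibly adjusted input such as materials exceeds its revenue share by the price-cost mark-up, while the corresponding labor wedge also reflects rent sharing. The numbers below are invented for illustration and are not estimates from the paper.

```python
# Invented illustrative numbers, not estimates from the paper.
eps_materials, share_materials = 0.52, 0.40   # output elasticity vs. revenue share
eps_labor, share_labor = 0.26, 0.24

markup = eps_materials / share_materials      # mark-up inferred from a flexible input
print(f"implied price-cost mark-up from materials: {markup:.2f}")

# Under a competitive labor market the labor ratio should equal the same mark-up;
# the gap between the labor wedge and the materials wedge is what the authors use
# to identify the extent of rent sharing (formula omitted here).
print(f"labor elasticity/share ratio:              {eps_labor / share_labor:.2f}")
```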

Hall and her co-authors provide novel empirical evidence on the private value of patents and R and D. They analyze an unbalanced sample of firms from five EU countries -- France, Germany, Switzerland, Sweden, and the United Kingdom -- in the period 1985-2005. They explore the relationship between firms' stock market value and their patents, accounting for the "quality" of EPO patents. They find that Tobin's q is positively and significantly associated with R and D and patent stocks. In contrast to results for the United States, forward citations do not add information beyond that in patents. However, the composite quality indicator based on backward citations, forward citations, and the number of technical fields covered by the patent is informative for value. Software patents account for a rising share of total patents at the EPO. Moreover, some scholars of innovation and intellectual property rights argue that software and business-methods patents on average are of poor quality and that these patents are applied for merely to build portfolios rather than to protect real inventions. The authors therefore test for the impact of software patents on the market value of the firm and, in contrast to results for the United States, do not find any significant effect. However, in Europe, such patents are highly concentrated, with 90 percent of the software patents in this sample held by just 15 of the firms.



International Trade and Investment

The NBER's International Trade and Investment Program met at the NBER's office in California on December 1-2, 2006. Program Director Robert C. Feenstra, University of California, Davis, organized the meeting. These papers were discussed:

"Trading Tasks: A Simple Theory of Offshoring", Gene M. Grossman and Esteban Rossi-Hansberg, Princeton University and NBER

"Trade, Diffusion, and the Gains from Openness", Andres Rodriguez-Clare, Pennsylvania State University and NBER

"Multi-Product Firms and Trade Liberalization", Andrew B. Bernard,

Dartmouth College and NBER; Stephen J. Redding, London School of Economics; and Peter K. Schott, Yale University and NBER

"Quality Pricing and Endogenous Entry: A Model of Exchange Rate Pass-Through", Raphael Auer, Swiss National Bank, and Thomas Chaney, University of Chicago and NBER

"Explaining Import Variety and Quality: The Role of the Income Distribution", Yo Chul Choi and Chong Xiang, Purdue University, and David Hummels, Purdue University and NBER

"Trade Adjustment and Human Capital Investments:

Evidence from Indian Tariff Reform", Eric Edmonds and Nina Pavcnik, Dartmouth College and NBER, and Petia Topalova, International Monetary Fund

"Trade, Knowledge, and the Industrial Revolution", Kevin H. O'Rourke, Trinity College, Dublin and NBER; Ahmed S. Rahman, United States Naval Academy; and Alan M. Taylor, University of California, Davis and NBER

"Buffalo Hunt: International Trade and the Virtual Extinction of the North American Bison", M. Scott Taylor, University of Calgary and NBER

For centuries, most international trade involved an exchange of complete goods. But with recent improvements in transportation and communications technology, trade increasingly entails different countries adding value along global supply chains, or what might be called "trade in tasks." Grossman and Rossi-Hansberg propose a new conceptualization of the global production process that focuses on tradable tasks, and use it to study how falling costs of offshoring affect factor prices in the source country. The authors identify a productivity effect of task trade that benefits the factor whose tasks are more easily moved offshore. In light of this effect, reductions in the cost of trading tasks can generate shared gains for all domestic factors, in contrast to the distributional conflict that typically results from reductions in the cost of trading goods.

Rodriguez-Clare presents and calibrates a model in which countries interact through trade and the diffusion of ideas, and then quantifies the overall gains from openness and the contribution of trade to those gains. For this quantification to be reasonable, the calibrated model must match the trade data (that is, the gravity equation) and the observed growth rate; the paper shows that achieving this match requires introducing diffusion and/or knowledge spillovers into the basic model of trade and growth in Eaton and Kortum (2001). The main result of the paper is that, when diffusion is included in the model, the gains from trade are smaller whereas the gains from openness are much larger than in the model without diffusion.
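For readers unfamiliar with the gravity structure such calibrations target, an Eaton-Kortum-type trade-share equation is written schematically below; the notation is ours and the expression illustrates the general form rather than the paper's exact formulation:

    % Eaton-Kortum-type gravity: pi_ni is the share of country n's spending that
    % falls on goods from country i; T_i is i's technology level, c_i its input
    % cost, d_ni >= 1 the iceberg trade cost, and theta the dispersion parameter.
    \begin{equation}
      \pi_{ni} \;=\; \frac{T_i \,(c_i d_{ni})^{-\theta}}{\sum_{k} T_k \,(c_k d_{nk})^{-\theta}} .
    \end{equation}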

Bernard, Redding, and Schott develop a general equilibrium model of multi-product firms and analyze their behavior during trade liberalization. Firm productivity in a given product is modeled as a combination of firm-level "ability" and firm-product-level "expertise", both of which are stochastic and unknown prior to the firm's payment of a sunk cost of entry. Higher managerial ability raises a firm's productivity across all products, which induces a positive correlation between a firm's intensive (output per product) and extensive (number of products) margins. Trade liberalization fosters productivity growth within and across firms, and in the aggregate, by inducing firms to shed their marginally productive products and forcing the lowest-productivity firms to exit. Though exporters produce a smaller range of products after liberalization, they increase both the share of products sold abroad and exports per product. All of these adjustments are shown to be relatively more pronounced in countries' comparative advantage industries.
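To make the product-shedding mechanism concrete, the toy simulation below -- entirely our own illustration, with arbitrary lognormal draws and a hypothetical productivity cutoff standing in for tougher post-liberalization competition, not the authors' model -- shows that a higher cutoff prunes low-expertise products and raises average productivity among those that survive:

    # Toy illustration: firm "ability" times firm-product "expertise" gives product
    # productivity; only products above a cutoff are kept. Raising the cutoff
    # (a stand-in for liberalization) drops weak products and raises the average.
    import numpy as np

    rng = np.random.default_rng(1)
    n_firms, n_products = 1000, 10
    ability = rng.lognormal(0.0, 0.5, size=(n_firms, 1))
    expertise = rng.lognormal(0.0, 0.5, size=(n_firms, n_products))
    productivity = ability * expertise

    for cutoff in (1.0, 1.5):
        kept = productivity[productivity > cutoff]
        print(f"cutoff {cutoff}: {kept.size / productivity.size:.0%} of products kept, "
              f"average productivity of kept products {kept.mean():.2f}")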

Auer and Chaney develop a model of international trade under perfect competition and flexible prices that accounts for the slow and incomplete pass-through of exchange rate fluctuations into consumer prices. They build an extension of the Mussa and Rosen (1978) model of quality pricing: exporters sell goods of different quality to consumers with heterogeneous preferences for quality, and in equilibrium higher-quality goods are more expensive. The authors derive three testable predictions. First, exchange rate fluctuations are only partially passed through to consumers. Second, there is more pass-through in the long run than in the short run, and more pass-through for aggregate prices than for individual prices. Third, there is more pass-through for low-quality goods than for high-quality goods. When the exchange rate of an exporting country appreciates, existing exporters scale down their production, driving prices up. In the long run, low-quality exporters pull out, driving prices up even further. Since those goods are inexpensive, aggregate prices go up more than individual prices. And because the exit of low-quality exporters has a larger impact on the price of low-quality goods than on the price of high-quality goods, low-quality goods' prices adjust more than high-quality goods' prices.
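A stylized numerical example of the composition effect -- entirely our own, with arbitrary magnitudes chosen only to fix ideas -- illustrates why the exit of cheap, low-quality varieties pushes measured aggregate pass-through above the pass-through of any individual surviving price:

    # Toy composition example (arbitrary numbers; only the ranking matters):
    # after a 10% appreciation each surviving price rises 5%, but the cheapest,
    # lowest-quality variety exits, so the average available price rises by more.
    import numpy as np

    appreciation = 0.10
    prices_before = np.array([1.0, 2.0, 3.0])      # low, mid, high quality
    prices_after = prices_before * 1.05            # partial pass-through on survivors
    surviving = prices_after[1:]                   # low-quality variety exits

    individual_pt = 0.05 / appreciation
    aggregate_pt = (surviving.mean() / prices_before.mean() - 1) / appreciation
    print(f"pass-through of any individual surviving price: {individual_pt:.2f}")
    print(f"measured pass-through of the average price:     {aggregate_pt:.2f}")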

Choi, Xiang, and Hummels examine a generalized version of Flam and Helpman's (1987) model of vertical differentiation that maps cross-country differences in income distributions into variation in import variety and import price distributions. The theoretical predictions are examined and confirmed using micro data on incomes from the Luxembourg Income Study for 30 countries over 20 years. Pairs of importers whose income distributions look more similar have more export partners in common and more similar import price distributions. Similarly, importers whose income distributions look more like the world's buy from more exporters and have import price distributions that look more like the world's.

Can the short- and medium-term adjustment costs associated with trade liberalization have long-term consequences through their impact on schooling and child labor? Edmonds, Pavcnik, and Topalova examine this question in the context of India's 1991 tariff reforms. Overall, rural India experienced a dramatic increase in schooling and a decline in child labor in the 1990s. However, communities that relied heavily on employment in protected industries before liberalization do not experience as large an increase in schooling or decline in child labor. The data suggest that this failure to follow the national trend of increasing schooling and diminishing work is associated with a failure to follow the national trend in poverty reduction. Schooling costs appear to play a large role in this relationship between poverty, schooling, and child labor. Extrapolating from these results, the estimates imply that roughly half of India's rise in schooling and a third of the fall in child labor during the 1990s can be explained by falling poverty and the improved capacity to afford schooling that came with it.

O'Rourke, Rahman, and Taylor address a basic empirical problem facing previous unified growth models, exemplified by Galor and Weil (2000). In such models, the onset of industrialization leads to an increase in skill premia, which is required to induce families to limit fertility and increase the education of their children. However, the onset of the Industrial Revolution saw a marked decline in skill premia, and this cannot be explained by supply-side educational reforms, since those came much later. The authors therefore construct a model, in the tradition of Galor and Weil and Galor and Mountford (2004), that endogenizes the direction of technical change. They show that technological change during the early phases of the Industrial Revolution was inevitably unskilled-labor-biased. They also show that growth in "Baconian knowledge" and international trade can explain a shift in the direction of technical change away from unskilled-labor-intensive innovations and towards skilled-labor-intensive innovations. Simulations show that the model does a good job of tracking reality, at least until the late nineteenth century with its mass education reforms.

In the sixteenth century, North America contained 25-30 million buffalo; by the late nineteenth century fewer than 100 remained. While it took settlers two centuries to remove the buffalo east of the Mississippi, the remaining 10 to 15 million buffalo on the Great Plains were killed in a punctuated slaughter lasting little more than ten years. Taylor uses theory, international trade statistics, and first-person accounts to argue that the slaughter on the plains was initiated by a foreign-made innovation and fueled by foreign demand for industrial leather. Ironically, the ultimate cause of this sad chapter in American environmental history was European, not American, in origin.


 