The NBER Reporter 2007 Number 2: Conferences
Innovation Policy and the Economy
An Economic Perspective on the Problems of Disadvantaged Youth
Labor Market Intermediation
Universities Research Conference: Behavioral Corporate Finance
International Dimensions of Monetary Policy
International Seminar on Macroeconomics
Frontiers in Health Policy Research
Eighteenth Annual EASE Conference
NBER Conference in Beijing
Numerous scholars have expressed concern over the growing "privatization of the scientific commons" represented by the growth in academic patenting. However, even before the Bayh-Dole Act and the pervasive patenting of academic science, there was an earlier concern over the extent to which the drive for recognition among scientists and competition for priority and associated rewards also limited contributions to the scientific commons. This suggests the utility of a more open-ended consideration of the different factors, not just patenting, that might affect knowledge flows across scientists. Cohen and Walsh suggest, first, that one might distinguish between legal and practical excludability, and that practical excludability, at least in the world of academic research, may have little to do with patents. At the same time, however, they suggest that excludability may indeed be a real concern for academic, and particularly biomedical, research, but that to understand where and how it occurs, they need to look beyond patents to consider additional ways in which flows of knowledge and other inputs into research may be restricted (including secrecy and control over materials). They do find restrictions imposed on the flow of knowledge and material across biomedical researchers. While intellectual property plays some role, it is not determinative. They find that what matters are both academic and commercial incentives, and effective (that is, cost-effective) excludability. This practical excludability is rarely associated with the existence of a patent in academic settings, but may be more readily imposed through secrecy or not sharing research materials.
Litan, Mitchell, and Reedy note that with the passage of the Bayh-Dole Act of 1980, the federal government explicitly endorsed the transfer of exclusive control over government-funded inventions to universities and businesses operating with federal contracts. While this legislation was intended to accelerate further development and commercialization of the ideas and inventions developed under federal contracts, the government did not provide any strategy, process, tools, or resources to shepherd innovations from the halls of academia into the commercial market. And more than 25 years later, it is clear that few universities have established an overall strategy to foster innovation, commercialization, and spillovers. Multiple pathways for university innovation exist and can be codified to provide broader access to innovation, allow a greater volume of deal flow, support standardization, and decrease the redundancy of innovation and the cycle time for commercialization. Technology Transfer Offices (TTOs) were envisioned as gateways to facilitate the flow of innovation but instead have become gatekeepers, in many cases constraining the flow of inventions and frustrating faculty, entrepreneurs, and industry. The authors discuss potential changes focused on creating incentives that will maximize social benefit from the existing investments being made in R and D and commercialization on university campuses.
Innovation within Internet access markets can be usefully understood through the lens of economic experiments. Economic experiments yield lessons to market participants through market experience. Greenstein distinguishes between directed and undirected economic experiments and discusses how the spreading of lessons transforms a market. As a lesson becomes common, it becomes a part of industry know-how. Further innovations build on that know-how, renewing a cycle of experimentation. Greenstein ends with insights about why some market institutions encourage economic experiments and what policy can do to nurture a positive outcome.
Economists and policymakers have long recognized that innovators must be able to appropriate a reasonable portion of the social benefits of their innovations if innovation is to be suitably rewarded and encouraged. However, Shapiro identifies a number of specific fact patterns under which the current U.S. patent system allows patent holders to capture private rewards that exceed their social contributions. Such excessive patentee rewards are socially costly, since they raise the deadweight loss associated with the patent system and discourage innovation by others. Economic efficiency is promoted if rewards to patent holders are aligned with and do not exceed their social contributions. He analyzes two major reforms to the patent system designed to spur innovation by better aligning the rewards and contributions of patent holders: establishing an independent invention defense in patent infringement cases, and strengthening the procedures by which patents are re-examined after they are issued. Three additional reforms relating to patent litigation are also studied: limiting the use of injunctions, clarifying the way in which "reasonable royalties" are calculated, and narrowing the definition of "willful infringement."
The past two decades have seen an explosion of patent awards and litigation across a wide variety of technologies, which numerous commentators have suggested has socially detrimental consequences. Patent pools, in which owners of intellectual property share patent rights with each other and third parties, have been proposed as a way in which firms can address this "patent thicket" problem. Lerner and Tirole discuss the current regulatory treatment of patent pools and highlight why a more nuanced view than focusing on the extreme cases of perfect complements and perfect substitutes is needed. They also highlight the importance of regulators' stances toward independent licensing, grantback policies, and royalty control. Finally, they present case study and large-sample empirical evidence.
These papers will appear in an annual volume published by the MIT Press. Its availability will be announced in a future issue of the Reporter. They can also be found at "Books in Progress" on the NBER's website.
Figlio and Roth investigate the effects of public pre-kindergarten participation on the subsequent behavioral outcomes of disadvantaged youth. They use a unique longitudinal dataset that links student birth records to pre-kindergarten participation for every child born in Florida in or after 1994 who subsequently attended public school in Florida. They find that children whose neighborhood-zoned elementary school offers a pre-kindergarten program are more likely to attend. Further, differences persist within families whose local schools add or drop programs. Specifically, within a family, the sibling with easier access to public pre-kindergarten (measured by whether his or her locally-zoned elementary school offers a pre-kindergarten program when he or she is four years old) is considerably more likely to attend. Using this differential access within a family as an instrument to predict public pre-kindergarten attendance, the authors find that pre-kindergarten participation reduces the likelihood of behavioral problems at school or classification of emotional disabilities, at least in the very disadvantaged neighborhoods where center-based Head Start is also offered.
To what extent does socioeconomic disadvantage lead to early childbearing? Kearney and Levine first use the Panel Study of Income Dynamics (PSID) to confirm the results of past studies showing that being born to a young mother, a mother with a low level of education, or an unmarried mother, is associated with substantially higher rates of teen childbearing. Then they use Vital Statistics natality data from 1968 to 2003 to consider whether the relationship is causal. To do so, they construct data for several state/year birth cohorts, tracking their rate of disadvantage at birth along with their subsequent rate of early childbearing. Using traditional panel data methods, they then ask: if we were to reduce (or eliminate) socioeconomic disadvantage among a birth cohort of girls, how much of a reduction in their teen pregnancy should we expect? Their estimates suggest that the impact would not be very large. Defining disadvantage alternately as having been born to a mother with a low level of education, to an unmarried mother, or to a teen or minor mother, the estimates imply that a 10 percent reduction in the proportion of a cohort with a particular disadvantage characteristic would lead to a decline of 0.5 to 2.5 percent in the proportion who give birth by age 18.
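The magnitudes above can be made concrete with a quick back-of-envelope calculation. The sketch below uses only the ranges stated in the summary; the implied elasticities (roughly 0.05 to 0.25) are inferred from those ranges, not taken directly from the paper:

```python
# Back-of-envelope: translate the Kearney-Levine estimates into expected
# declines in early childbearing from reducing cohort disadvantage.
# A 10% cut in the share of a cohort with a disadvantage characteristic
# lowers the share giving birth by age 18 by 0.5% to 2.5%, implying an
# elasticity of roughly 0.05 to 0.25.

def expected_decline(pct_cut_in_disadvantage, elasticity):
    """Proportional decline in early childbearing, given an elasticity."""
    return pct_cut_in_disadvantage * elasticity

for elasticity in (0.05, 0.25):
    decline = expected_decline(10.0, elasticity)
    print(f"elasticity {elasticity}: a 10% cut implies a {decline:.1f}% decline")
```

Even at the top of the range, eliminating a quarter of a cohort's measured disadvantage would reduce early childbearing only modestly, which is the sense in which the authors conclude the impact "would not be very large."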
Cullen and Jacob review the existing evidence and provide new evidence on whether expanded access to sought-after schools can improve achievement. The setting they study is the "open enrollment" system in the Chicago Public Schools (CPS). Elementary students in Chicago can apply to gain access to public magnet schools and programs outside of their neighborhood school, but within the same school district. All but a handful of academically advanced elementary schools use lotteries to allocate spots when oversubscribed, and the authors analyze nearly 600 lotteries for grades K-4 at 34 popular schools in 2000 and 2001. Since those who randomly win and lose any given lottery will, on average, have the same characteristics, it is possible to obtain unbiased estimates of the impact of gaining access to one of these schools through a straightforward comparison of subsequent mean outcomes across the two groups, as long as there is not selective attrition. Comparing lottery winners and losers, the authors find that lottery winners attend higher quality schools as measured by both the average achievement level of peers in the school and by value-added indicators of the school's contribution to student learning. Yet, tracking students for up to five years following the application, the authors do not find that winning a lottery systematically confers any academic benefits. This suggests that the strong cross-sectional relationship observed between test score performance and school quality for the typical CPS elementary student is largely spurious.
Dropout rates have fallen little over the last 30 years and remain disproportionately high among blacks, Hispanics, and children from low-income families. Many states have considered raising the minimum school leaving age as a means to improve on these outcomes. Several states have already raised the school leaving age above 16, although often with exceptions. Oreopoulos uses these recent changes to estimate the effects of more compulsory schooling. The results suggest that more restrictive laws would reduce dropout rates, increase college enrollment, and improve career outcomes. Some caution is warranted because focusing on more recent law changes leads to more imprecision. But the findings, consistent with previous studies, suggest that compulsory high school at later ages can benefit disadvantaged youth.
Although mental disorders are common among children, we know little about their long-term effects on child outcomes. Currie and Stabile examine U.S. and Canadian children with symptoms of Attention Deficit Hyperactivity Disorder (ADHD), depression, conduct disorders, and other behavioral problems. Their work offers a number of innovations. First, they use large nationally representative samples of children from both countries. Second, they focus on "screeners" that were administered to all children in their sample, rather than on diagnosed cases. Third, they address omitted variables bias by estimating sibling-fixed effects models. Fourth, they examine a range of outcomes. Fifth, they ask how the effects of mental health conditions are mediated by family income and maternal education. They find that mental health conditions, and especially ADHD, have large negative effects on future test scores and schooling attainment, regardless of family income and maternal education.
For children overall, obesity rates have tripled from 5 percent in the early 1970s to about 15 percent by the early 2000s. For disadvantaged children, obesity rates are closer to 20 percent. At the same time, obesity rates among adults also have increased. Parental obesity is very closely tied to children's obesity, for reasons of both nature and nurture. Anderson, Butcher, and Schanzenbach ask: how does parental obesity relate to children's obesity? Is this different for disadvantaged families? Have these relationships changed over time? They find that the elasticity between mothers' and children's body mass index (BMI) has increased over the period when obesity is rising, suggesting the common parent-child environment has become more important in determining obesity. However, this increase is smaller for disadvantaged children. If the only determinants of children's body mass were their genes and the environment they share with their parents, then we would expect to find that increases in parents' BMI go a long way toward explaining the increases seen in children's BMI over the last few decades. This is not the case. Based on the estimates here of the mother-child BMI elasticity, the 4.1 percent increase in mothers' average BMI during the 1980s can explain between 20 and 30 percent of the 2.8 percent increase in children's average BMI. This suggests that there are factors in the environment that are unique to children that are important for determining their body mass, and that these factors are even more important for disadvantaged children.
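The 20-to-30-percent figure follows directly from the elasticity arithmetic. In the sketch below, the mother and child BMI increases are the ones stated in the summary, while the elasticity values are illustrative, chosen only to reproduce the stated range:

```python
# Back-of-envelope: how much of the rise in children's BMI can the rise
# in mothers' BMI explain, given a mother-child BMI elasticity?
# From the summary: mothers' average BMI rose 4.1% in the 1980s and
# children's rose 2.8%. The elasticity values below are illustrative
# (not from the paper); they bracket the stated 20-30% range.

def share_explained(elasticity, mother_pct_rise, child_pct_rise):
    """Fraction of the child BMI rise attributable to the mother BMI rise."""
    return elasticity * mother_pct_rise / child_pct_rise

for elasticity in (0.14, 0.20):
    share = share_explained(elasticity, 4.1, 2.8)
    print(f"elasticity {elasticity}: explains {share:.0%} of the child BMI rise")
```

Since even the larger elasticity leaves roughly 70 percent of the increase unexplained, most of the rise in children's BMI must come from child-specific environmental factors.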
Page, Stevens, and Lindo use layoffs and business closings to identify the effect of a permanent parental income shock on children's long-run socioeconomic outcomes. They find that estimates of the intergenerational effects of parental job loss are sensitive to their definition of job displacement. Focusing on measures of displacement that are most likely to be exogenous, they find that the long-term consequences of parental job loss are substantive for children growing up in low-income families, and largest for young children.
Dehejia and his co-authors examine whether participation in religious or other social organizations can help offset the negative effects of growing up in a disadvantaged environment. Using the National Survey of Families and Households, they collect measures of disadvantage as well as parental involvement with religious and other social organizations when the youth were ages 3 to 19 and observe outcomes of the youth 13 to 15 years later. They consider a range of definitions of disadvantage in childhood (family income and poverty measures, family characteristics including parental education, and child characteristics including parental assessments of the child) and a range of outcome measures in adulthood (including education, income, and measures of health and psychological well-being). Overall, they find that youth with religiously active parents were less affected later in life by childhood disadvantage than youth whose parents did not frequently attend religious services. These buffering effects of religious organizations are most pronounced when outcomes are measured by high school graduation or non-smoking and when disadvantage is measured by family resources or maternal education, but they also find buffering effects for a number of other outcome-disadvantage pairs. For some other social organizations, they find moderate buffering effects but these effects are not nearly as strong as for religious organizations.
Using individual survey data on urban youth and their families from Los Angeles, Aizer finds that the most violent neighborhoods are also characterized by the highest degree of disadvantage: greatest poverty, highest unemployment, least education. And, while living in a violent neighborhood increases the probability of exposure to violence, within violent neighborhoods those personally exposed to street violence are significantly more disadvantaged and are more likely to associate with violent peers than their (unexposed) neighbors. After controlling for underlying measures of neighborhood and family disadvantage, the impact of violence on child outcomes such as math and reading scores is considerably diminished, and for some measures of violence even disappears. This suggests that underlying disadvantage, and not exposure to neighborhood violence, is responsible for the poor child outcomes observed.
These papers will be published by the University of Chicago Press in an NBER Conference Volume; its availability will be announced in a future issue of the NBER Reporter. They will also appear at "Books in Progress" on the NBER's website.
Stevenson examines the effect of Internet job search on the U.S. labor market. Combining monthly CPS data, the CPS Computer Use Supplements, and proprietary data on Internet penetration, she documents how the availability of the Internet has changed job search methods, demonstrating that job search activity by the unemployed has increased. Looking at employment-to-employment transitions, she examines on-the-job search and finds that the Internet has led to greater mobility among jobs, particularly for those with more education. Workers who are online are 15 to 30 percent more likely to change jobs than those who are not online, after controlling for observable worker and job characteristics. Similarly, workers who are online are 25 percent more likely to change jobs within the same firm. Among employed workers, employment-to-employment flows increase, particularly for those in the private sector with a college degree or higher.
Sylos Labini and Bagues empirically address the effects of on-line labor market intermediaries. In particular, they study the impact of the intermediation activity carried out by the inter-university consortium called AlmaLaurea on graduates' labor market outcomes. They argue that the existence of counterfactuals and the organizational features of AlmaLaurea allow them to overcome the problems faced by previous empirical investigations. Their evaluation is performed using the difference-in-differences method applied to a repeated cross-section dataset. It is shown that, if the usual assumption concerning parallel outcomes holds, AlmaLaurea reduces individual unemployment probability and improves different measures of matching quality. Most interestingly, online intermediaries are shown to foster graduates' geographical mobility.
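The difference-in-differences logic behind this evaluation can be sketched in a few lines. All numbers below are hypothetical, purely to illustrate the estimator, not results from the paper:

```python
# Minimal sketch of the difference-in-differences estimator: compare the
# change in an outcome (e.g., graduate employment rates) at universities
# that joined the intermediary ("treated") with the change at universities
# that did not ("controls"). The employment rates below are hypothetical.

def did_estimate(treated_before, treated_after, control_before, control_after):
    """DiD effect: change for the treated group minus change for controls."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical employment shares before and after the intermediary appears:
effect = did_estimate(treated_before=0.60, treated_after=0.70,
                      control_before=0.58, control_after=0.62)
print(f"estimated effect: {effect:.2f}")
```

The control group's change stands in for what would have happened to the treated group absent the intermediary, which is exactly the "parallel outcomes" assumption the authors invoke.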
Freeman put online an English-language job search survey. The survey questions fall into five main groups: 1) demographic and education information; 2) work experience and current/recent employment status; 3) general job search experience; 4) computer access and Internet usage; and 5) Internet job search usage. The survey instrument was advertised worldwide using Google AdWords and the international AdBrite service. The enticement offered to encourage those who saw the ads to complete the survey was a promise that they would be entered in a drawing for $1,000 U.S. A total of 1,602 usable survey forms were filled out by respondents 16 to 64 years of age. In this paper, Nakamura and her co-authors explore the responses obtained from the survey. It is often said and written that personal contacts are of greatest importance for finding employment. The respondents to the 2007 Job Search Survey mostly agree that personal contacts and referrals are important, and networking and word of mouth are selected as important, too, by large percentages of the respondents. However, Internet recruitment sites are selected by even higher percentages in all of the age groups. More than three fourths of the job seekers in each age group indicated that Internet recruitment sites are useful for job search. The researchers are curious about whether those using job sites also check for work opportunities on employer web sites. They find that those with more education were especially likely to report using employer web sites to look for work.
Finlay examines how employer access to criminal history data influences the labor market outcomes of ex-offenders and non-offenders using detailed self-reported criminal history data and labor market variables from the 1997 cohort of the National Longitudinal Survey of Youth and a dataset he collected on state policies regarding criminal history records. Specifically, are the labor market effects of incarceration stronger and longer lasting in states that provide public access to criminal history records? Do non-offenders who are otherwise similar to ex-offenders have improved labor market outcomes when employers can verify their records of non-offense? He tests if these effects vary by race in the context of possible statistical discrimination by employers. He finds evidence that employment effects of incarceration are more negative and last longer in states that provide criminal history records over the Internet than in states that do not. There is some evidence that ex-offenders have lower wages in those states with open records policies.
In the United States around the turn of the twentieth century, state and local governments began to establish public employment offices. These non-profit governmental organizations matched businesses and job seekers. Their goals were to reduce search and information costs and to eliminate fraudulent activities by private employment agencies in the labor exchange. Lee estimates the usage and placements for job seekers through public employment offices in relation to the labor market conditions in order to explore the development of this labor market intermediary. He also estimates the geographical diffusion and the number of offices over time. In addition, he analyzes who used public employment offices. Finally, he proposes a theory of public employment offices to describe a situation that differs from today's.
How successful, if at all, have unions and nonunion groups been in delivering intermediary services to workers who lack collective bargaining contracts? Can unions or other groups create on-line communities that can complement or substitute for off-line worker groups to advance worker interests? Freeman seeks to answer these questions by analyzing innovative uses of the Internet by unions in the United States and the United Kingdom. He first lays out the challenge facing unions or other organizations in providing services outside of collective bargaining; he then reviews some of the major innovations in delivering services to workers outside of collective bargaining. Next he provides a detailed analysis of the Trade Union Congress's effort to build an on-line community of union activists (www.unionreps.org.uk) which he argues is the union innovation that most fits the Internet era.
New gastroenterologists participated in a labor market clearinghouse (a "match") from 1986 through the late 1990s, after which the match was abandoned. This provided an opportunity to study the effects of a match, by observing the differences in the outcomes and organization of the market when a match was operating, and when it was not. After the GI match ended, programs hired fellows earlier each year, eventually almost a year earlier than when the match was operating. It became customary for GI program directors to make very short offers, rarely exceeding two weeks and often shorter. Consequently, many potential fellows had to accept positions before they finished their planned interviews, and most programs experienced cancellations of interviews they had scheduled. Furthermore, without a match, many programs hired more local fellows, and fewer from other hospitals and cities than they did during the match. Wages, however, seem not to have been affected. To restart the match, Niederle and Roth proposed a policy, subsequently adopted by the gastroenterology professional organizations, that even if applicants had accepted offers prior to the match, they could subsequently decline those offers and participate in the match. This made it safe for programs to delay hiring until the match, confident that programs that did not participate would not be able to "capture" the most desirable candidates beforehand. Consequently, it appears that most programs waited for the match in an orderly way in 2006.
Mortgage brokers are an emerging regulated occupation. Thirty years ago, there were almost no mortgage brokers, because individuals who wanted a mortgage dealt directly with a bank, a savings and loan, or a government-lending agency. With deregulation of financial services and major technology improvements that allowed for easy development and dissemination of credit scores and other financial records, lenders began to outsource the task of interacting with prospective borrowers. Today, about 60 percent of all mortgages are initiated through a mortgage broker. Thirty years ago, no state regulated mortgage brokering firms or members of the occupation. Currently, all states except Alaska regulate the industry, and 18 states license the occupation by law. Kleiner and Todd have gathered all the principal laws and major administrative rulings covering the industry and occupation over the 1996-2006 period by date and type of statute and administrative procedure. They also have gathered information by state (and sometimes finer geographies) on foreclosures and, for 2005, prices of loans. In addition, they have data on the wages and employment of mortgage brokers and loan officers over a period that coincides with the information on regulatory statutes. Their preliminary exploration of the influence of regulation on mortgage broker markets finds that it had a small and usually insignificant impact on both labor market outcomes and on the quality of service provided to consumers in mortgage markets.
Based on administrative data from the federal employment service in Germany, Kvasnicka applies statistical matching techniques to estimate for unemployed job seekers the stepping-stone function of temporary help work to regular employment. His results show that workers who took temporary work within twelve months of unemployment registration in 1994-96 did not enjoy subsequent greater chances of employment outside of temporary help work over the next four years. Neither, however, did these workers suffer from future greater risks of unemployment. Quite to the contrary, they appeared significantly less likely to be unemployed. Moreover, they enjoyed higher chances of being in paid work throughout the four-year period they were followed in the data. While these results do not lend empirical support to a stepping-stone function of temporary help employment for the unemployed, neither do they confirm the existence of adverse effects on the future regular employment and unemployment chances of unemployed job seekers. If anything, temporary help work appears to provide an access-to-work function for the unemployed in Germany.
Whereas there is widespread belief that workers in temporary work agencies (TWAs) are subject to poorer working conditions, in particular pay, than comparable workers in the rest of the economy, there is little evidence on whether that is driven by the sector per se or by the workers' characteristics. Boeheim and Cardoso therefore first aim to quantify the wage penalty, if any, for workers in TWAs. Remarkable linked employer-employee data covering the whole private sector in Portugal enable them to account for observable as well as unobservable worker quality. The results suggest that workers in TWAs are indeed subject to a wage penalty and that the major share of this penalty is associated with the attributes, both observable and unobservable, of the worker. If the authors control for the workers' observable and unobservable characteristics, they estimate that comparable workers receive 5 percent lower wages in TWAs than workers in all other firms. Once the TWA workers leave that sector and work elsewhere, their wages are not significantly different from those of other workers who switched jobs. The authors conclude that working for a TWA does not lead to a stigma that lowers future wages.
Holzer and his co-authors use a very large matched database on firms and employees to analyze the use of temporary agencies by low earners, and to estimate the impact of temp employment on subsequent employment outcomes for these workers. Their results show that, while temp workers have lower earnings than others while working at these agencies, their subsequent earnings are often higher but only if they manage to gain stable work with other employers. Furthermore, the positive effects seem mostly to occur because those working for temp agencies subsequently gain access to higher-wage firms than do comparable low earners who do not work for temps. The positive effects seem to persist for up to six years beyond the period during which the temp employment occurred.
Heinrich and her co-authors examine the effects of temporary help service (THS) employment on later earnings and employment for individuals participating in three federal programs providing supportive services to those facing employment difficulties. The programs include TANF, whose participants are seriously disadvantaged; a job training program, with a highly heterogeneous population of participants; and employment exchange services, whose participants consist of Unemployment Insurance claimants and individuals seeking assistance in obtaining employment. The researchers undertake their analyses for two periods: the late 1990s, a time of very strong economic growth, and shortly after 2000, a time of relative stagnation. Their results suggest that temporary help service firms may facilitate quicker access to jobs for those seeking employment assistance and impart substantial benefits as transitional employment, especially for individuals whose alternatives are severely limited. Those who do not move out of temporary help jobs, however, face substantially poorer prospects, and the authors observe that nonwhites are more likely than whites to remain in THS positions in the two years following program participation. These results are robust to program and time period.
These papers will be published by the University of Chicago Press in an NBER Conference Volume; its availability will be announced in a future issue of the NBER Reporter. They will also appear at "Books in Progress" on the NBER's website.
Greenwood examines a series of stock splits in Japan in which firms restrict the ability of their investors to sell their shares for a period of approximately two months. By removing potential sellers from the market, the restrictions have the effect of increasing the impact of trading on prices. The greater investors' desire to trade, and the greater the restrictions, the larger the impact of the restrictions. In the data, particularly severe restrictions are associated with returns of over 30 percent around the ex-date, most of which are reversed when investors are allowed to sell again. Firms are more likely to issue equity or redeem convertible debt during the restricted period, suggesting strong incentives for manipulation.
Chahine, Ljungqvist, and Michaely examine a comprehensive set of 524 pre-IPO analyst reports made available to investors and the broker's sales force in the course of marketing equity IPOs in France over the period 1991-2002. Exploiting exogenous variation in the behavior of affiliated and independent analysts caused by well-known conflicts of interest, they find that analyst reports are used to "hype" the issuer's stock in harder-to-place offerings and when more stock is reserved for allocation to retail investors. Moreover, retail (but not institutional) demand is stimulated by hype, and this demand boost in turn enables the underwriting bank to increase the offer price for the benefit of its corporate client. Over the first year of trading, hyped stocks underperform non-hyped stocks by a wide margin. These results underscore the marketing (as opposed to information production) role of research analysts. They also suggest that analysts sometimes pander to certain psychological biases retail investors may be prone to, consistent with recent theoretical models.
Barberis and Huang study the asset pricing implications of Tversky and Kahneman's (1992) cumulative prospect theory, with particular focus on its probability weighting component. Their main result, derived from a novel equilibrium with non-unique global optima, is that, in contrast to the prediction of a standard expected utility model, a security's own skewness can be priced: a positively skewed security can be "overpriced," and can earn a negative average excess return. These results offer a unifying way of thinking about a number of seemingly unrelated financial phenomena, such as the low average return on IPOs, private equity, and distressed stocks; the diversification discount; the low valuation of certain equity stubs; the pricing of out-of-the-money options; and the lack of diversification in many household portfolios.
A diversified firm can trade at a discount to a matched portfolio of single-segment firms if the diversified firm has either lower expected cash flows or higher expected returns than the single-segment firms. Mitton and Vorkink study whether firms with diversification discounts have higher expected returns in order to compensate investors for offering less upside potential (or skewness exposure) than focused firms. Their empirical tests support this hypothesis. First, they find that focused firms offer greater skewness exposure than diversified firms. Second, they find that diversified firms have significantly larger discounts when the diversified firm offers less skewness than matched single-segment firms. Finally, they find that up to 53 percent of the excess returns received on diversification-discount firms relative to diversification-premium firms can be explained by differences in exposure to skewness.
Jenter, Lewellen, and Warner study put option sales undertaken by corporations during their repurchase programs. Put sales' main theoretical motivation is market timing, providing an excellent framework for studying whether security issues reflect managers' ability to identify mispricing. The evidence is that these bets reflect timing ability, and are not simply a result of overconfidence. In the 100 days following put option issues, there is roughly a 5 percent abnormal stock price return, and the abnormal return is concentrated around the first earnings release date following put option sales. Longer-term effects are generally not detected. Put sales also appear to reflect successful bets on the direction of stock price volatility.
DeGeorge, Patel, and Zeckhauser (1999) show that companies manage their earnings to meet or exceed three earnings targets: zero earnings, earnings reported four quarters ago, and analysts' earnings forecasts. In this paper, they extend the work along two lines. First, they document time and cross-sectional variation in threshold-regarding behavior. Second, they examine the market responses when earnings surpass thresholds, controlling for the earnings surprises. Their results suggest that investors view threshold crossing as an important indicator of the health of companies.
Corporate governance disasters often could be averted if directors asked hard questions, demanded clear answers, and blew whistles. Work in social psychology, by Milgram and others, suggests that humans have an innate predisposition to obey authority. This excessive subservience of agent to principal, which Morck here dubs a "type I agency problem", explains directors' eerie submission. He reviews rational explanations for it, but argues that behavioral explanations are more complete. Further behavioral studies reveal this predisposition to be disrupted by dissenting peers, conflicting authorities, and distant authorities. This suggests that independent directors, non-executive chairs, and committees composed of independent directors and excluding CEOs might induce greater rationality and more considered ethics in corporate governance. The empirical evidence for this is scant, though. This may reflect measurement problems, for many apparently independent directors may have hidden financial or personal ties to CEOs. But it also might reflect other behavioral considerations that reinforce agentic subservience to CEOs.
Many firms and organizations involve a team of agents in which the marginal productivity of any one agent increases with the effort of others. Because the effort of each agent is not observable to any other agent, the performance of the firm is negatively affected by a free-rider problem and a general lack of cooperation between agents. In this context, Gervais and Goldstein show that an agent who mistakenly overestimates her own marginal productivity works harder, thereby increasing the marginal productivity of her colleagues who then work harder as well. This not only enhances firm performance and value but also may make all agents better off, including the biased ones. Indeed, although biased agents overwork, they benefit from the positive externality generated by other agents working harder. The presence of a leader improves coordination and firm value, but self-perception biases can never be Pareto-improving when they affect the leader. Self-perception biases also are shown to affect the likelihood and value of mergers, job assignments within firms, and compensation contracts.
Ben-David, Graham, and Harvey use a direct measure of overconfidence to test whether managerial overconfidence manifests itself in corporate policies. They collect a unique panel of over 6,500 quarterly stock market forecasts (expected returns, and the tenth and ninetieth percentiles of their perceived distributions) by Chief Financial Officers (CFOs) over a span of more than six years. On average, CFOs are miscalibrated: realized returns are within respondents' 80 percent confidence intervals only 40 percent of the time. After one controls for firm characteristics, the companies with overconfident CFOs (that is, CFOs with narrow confidence intervals) invest more, have higher debt leverage, pay out fewer dividends, use proportionally more long-term than short-term debt, engage in market timing activity, and tilt executive compensation towards performance-based bonuses. In addition, merger announcements by firms with overconfident CFOs are received negatively by investors.
Using a detailed dataset with assessments of CEO candidates for companies involved in private equity (PE) transactions, including both buyout (LBO) and venture capital (VC) deals, Kaplan, Klebanov, and Sorensen study how CEOs' characteristics and abilities relate to hiring decisions, PE investment decisions, and subsequent performance. The candidates are assessed on more than 40 individual characteristics in seven general areas: leadership, personal, intellectual, motivational, interpersonal, technical, and functional. In general, characteristics and abilities are found to be highly correlated. For both LBO and VC firms, outside CEO candidates are more highly rated than insiders, and both LBO and VC firms are more likely to hire and invest in more highly rated and talented CEOs. Investors also value "soft" or team-related skills in the hiring decisions. However, these skills are not necessarily associated with greater success. For LBO deals in particular, "hard" abilities and execution skills predict success. Finally, they find that insiders are no more likely to succeed than outside CEOs, holding observable talent and ability constant.
Erceg and his co-authors use an open economy model to explore how trade openness affects the transmission of domestic shocks. For some calibrations, closed and open economies appear dramatically different, reminiscent of the implications of Mundell-Fleming style models. However, these researchers argue that such stark differences hinge on calibrations that impose an implausibly high trade price elasticity and Frisch elasticity of labor supply. Overall, their results suggest that the main effects of openness are on the composition of expenditure, and on the wedge between consumer and domestic prices, rather than on the response of aggregate output and domestic prices.
Coenen and his co-authors use a version of the New Area-Wide Model developed at the European Central Bank to quantify the gains from monetary policy cooperation. The model is calibrated to match a set of empirical moments. The researchers then derive the cooperative and (open-loop) Nash monetary policies, assuming that the central bank's objectives reflect the preferences of households. There are three main results in this paper. First, in line with the recent literature, the gains from cooperation are small in the benchmark model. They amount to about 0.03 percent of steady-state consumption. Second, decomposing the sources of the gains from cooperation with respect to the various shocks, it turns out that mark-up shocks are the most important source for gains from international monetary policy cooperation. Third, a sensitivity analysis with respect to various key parameters of the model suggests that the gains from cooperation become larger the more open is the economy.
Boivin and Giannoni quantify the changes in the relationship between international forces and many key U.S. macroeconomic variables over 1984-2005, and analyze changes in the monetary policy transmission mechanism. They do so by estimating a Factor-Augmented Vector Autoregression on a large set of U.S. and international data series. They find that the role of international factors in explaining U.S. variables has been changing over this period. However, while some U.S. series have become more correlated with global factors, there is little evidence suggesting that these factors have become systematically more important. The authors do not find strong evidence of a change in the transmission mechanism of monetary policy because of global forces. Taking their point estimates literally, global forces do not seem to have played an important role in the U.S. monetary transmission mechanism between 1984 and 1999. In addition, since the year 2000, the initial response of the U.S. economy following a monetary policy shock (the first 6 to 8 quarters) is essentially the same as what was observed in the 1984-1999 period. However, point estimates suggest that the growing importance of global forces might have contributed to reducing some of the persistence in the responses, two or more years after the shocks. Overall, the authors conclude that if global forces have had an effect on the monetary transmission mechanism, this is a recent phenomenon.
Blanchard and Gali characterize the macroeconomic performance of a set of industrialized economies in the aftermath of the oil price shocks of the 1970s and the last decade, focusing on the differences across episodes. They examine four different hypotheses for the mild effects on inflation and economic activity of the recent increase in the price of oil: 1) good luck (offsetting shocks); 2) smaller share of oil in production; 3) more flexible labor markets; 4) improvements in monetary policy. They conclude that all four have played an important role.
Batini and her co-authors build a two-bloc emerging market-rest of the world model. The emerging market bloc incorporates partial transactions and financial dollarization, as well as financial frictions including a "financial accelerator", where capital financing is partly or totally in foreign currency as in Gertler et al. (2003) and Gilchrist (2003). Simulations of the model under various "operational" monetary policy rules, derived assuming that the central bank maximizes households' utility, point to important results. First, the authors reaffirm the finding in the literature that financial frictions, especially when coupled with financial dollarization, severely increase the costs of a fixed exchange rate regime. By contrast, transactions dollarization has only a small impact on the choice of the monetary regime. Second, with dollarization and frictions, the zero lower bound constraint on the nominal interest rate makes simple Taylor-type rules perform much worse than fully optimal monetary policy rules.
Ferrero, Gertler, and Svensson explore the implications of current account adjustment for monetary policy within a simple two-country model. Their framework nests Obstfeld and Rogoff's (2005) static model of exchange rate responsiveness to current account reversals. It extends this approach by endogenizing the dynamic adjustment path and by incorporating production and nominal price rigidities in order to study the role of monetary policy. They consider two different adjustment scenarios. The first is a "slow burn" where the adjustment of the current account deficit of the home country is smooth and slow. The second is a "fast burn" where, owing to a sudden shift in expectations of relative growth rates, there is a rapid reversal of the home country's current account. They examine several different monetary policy regimes under each of these scenarios. Their principal finding is that the behavior of the domestic variables (for instance, output, inflation) is quite sensitive to the monetary regime, while the behavior of the international variables (for instance, the current account and the real exchange rate) is less so.
Uhlig compares monetary policy in the United States and European Monetary Union during the last decade, using an estimated hybrid New Keynesian cash-in-advance model, driven by five shocks. It appears that the difference between the two monetary policies between 1998 and 2006 is attributable to both surprises in productivity and surprises in wage demands, moving interest rates in opposite directions in Europe and the United States, but not to a more sluggish response in Europe to the same shocks or to different monetary policy surprises.
Sbordone addresses the issue of how globalization may have affected the slope of the Phillips curve. Starting from the Calvo model of staggered price setting, she modifies some of its standard underlying assumptions in order to provide a channel for an increase in trade openness to affect the elasticity of inflation with respect to marginal costs. After qualitatively assessing the nature of the impact of trade openness, Sbordone evaluates whether the increase in the varieties of goods traded since the beginning of globalization, as documented in the empirical literature, might have had a sizable negative impact on the component of the slope of the new Keynesian Phillips curve that relates domestic inflation to real marginal costs.
Corsetti and his co-authors reconsider the policy trade-offs created by stable import prices in local currency, which realistically result from both nominal rigidities and endogenous destination-specific markup adjustment by exporters. The main novelty of their model consists of placing price stickiness at the heart of strategic interactions between upstream producers and local firms. They show that price dispersion at the consumer level, attributable to staggered pricing decisions by local firms, affects the desired markup by upstream producers, magnifying their price response to shocks. The researchers solve for the optimal policy under cooperation, unveiling a reason specific to the international dimension of the economy in support of price stability as the main prescription for monetary policy. As stable consumer prices feed back into a low volatility of markups among upstream producers, this contains inefficient deviations from the law of one price at the border.
Woodford considers three possible mechanisms through which it might be feared that globalization can undermine the ability of monetary policy to control inflation: by making liquidity premiums a function of global liquidity rather than the supply of liquidity by a national central bank alone; by making real interest rates dependent on the global balance between saving and investment rather than the balance in one country alone; or, by making inflationary pressure a function of global slack rather than a domestic output gap alone. He reviews the consequences of global integration of financial markets, final goods markets, and factor markets and finds that globalization, even of a much more thorough sort than has yet occurred, is unlikely to weaken the ability of national central banks to control the dynamics of inflation.
The conference also included a panel discussion on "International Dimensions of Monetary Policy." The panelists were: NBER President Martin Feldstein of Harvard University; Donald Kohn, Vice Chairman of the Federal Reserve Board of Governors; Laurence H. Meyer, former governor of the Federal Reserve Board and currently of Macroeconomic Advisers; Lucas Papademos, Vice President of the European Central Bank; and Carlos Sales, Deputy Governor, Bank of Spain.
The University of Chicago Press expects to publish these papers and discussions in an NBER Conference Volume. Its availability will be announced in a future issue of the NBER Reporter.
Gosselin and his co-authors extend the recent literature on central bank transparency by noting that most central banks now publish their interest rates. As a consequence, central banks cannot hide all of their information; they must reveal some of it as they set and publish the interest rate. These researchers study the role of transparency in the presence of the distortionary common knowledge effect emphasized by Morris and Shin (2002). Publishing the interest rate implies that the central bank becomes the source of such a distortion. The question, then, is whether additional information is desirable. The answer depends not just on the relative precision of central bank information but also on what is known of this precision. The authors find that the interest rate can be used by the central bank to offset the common knowledge effect. Transparency, which undermines the information content of the interest rate, tends to undermine its strategic use. They also find that uncertainty about the precision of private sector information leads to monetary policy mistakes that become the source of price volatility. Transparency helps to reduce this volatility.
Nominal exchange rates are asset prices: they are the relative price of two moneys, and are determined largely by expected future macroeconomic conditions. When some goods prices are sticky in each currency, exchange rate changes also determine changes in the relative prices of goods. There is a sticky-price distortion, since freely set relative goods prices do not in general act like the relative price of two moneys. Large nominal exchange rate swings, reflecting expectations of the future, can lead to substantial misalignments in prices, even flexible commodity prices, when some nominal prices are sticky. In a series of stylized models, Devereux and Engel examine the implications for monetary policy.
Using a structural Vector Autoregression approach, Perotti compares the macroeconomic effects of the two components of government purchases of goods and services: government consumption and government investment. He studies both the size and the speed of their effects on GDP and its components. Contrary to common opinion, there is no evidence that government investment shocks are more effective than government consumption shocks in boosting GDP: this is true both in the short and, perhaps more surprisingly, in the long run. In fact, government investment appears to crowd out private investment. And, there is no evidence that government investment "pays for itself" in the long run, as proponents of the "Golden Rule" implicitly or explicitly argue. Defense purchases have even smaller (or negative) effects on GDP and private investment.
Ekinci and his co-authors investigate the degree of financial integration within European countries. They construct two measures of de facto integration across European regions to capture "diversification" and "development" finance in the language of Obstfeld and Taylor (2004). They find that capital market integration within the EU is less than what is implied by theoretical benchmarks and less than what is found for U.S. states. Why is this the case? Using data from the World Value Surveys, the authors investigate the effect of "social capital" on financial integration among European regions controlling for the effect of country level institutions. They find that regions where the level of confidence and trust is high are more financially integrated with each other.
Coeurdacier and his co-authors seek to understand three stylized facts observed in industrialized countries: 1) portfolios are biased toward local equity; 2) international portfolios are long in foreign currency and short in domestic currency; 3) valuation effects are such that an exchange rate depreciation is associated with a positive transfer of wealth. They build a two-country, two-good model with stocks and bonds where uncertainty is not only attributable to productivity shocks but also to shocks on the distribution of income between labor and capital, and to demand shocks. They show that, in this case, optimal portfolios are broadly consistent with these stylized facts. They perform the analysis in situations of both complete and incomplete markets.
Pesenti and Corsetti provide a graphical introduction to the recent literature on macroeconomic stabilization in closed and open economies. Among the issues they discuss are: international transmission of real and monetary shocks and the role of exchange rate pass-through; optimal monetary policy and the welfare gains from macroeconomic stabilization; and monetary coordination among interdependent economies.
Aoki and his co-authors construct a model of a small open economy in which it is difficult to insist that debtors repay debt unless it is secured by collateral, and where assets and projects usable as collateral for international borrowing are more restricted than for domestic borrowing. Using this model, they analyze how adjustment to capital account liberalization depends upon the degree of development of domestic financial institutions. They show that asset prices importantly affect the dynamic adjustment of output and total factor productivity during the adjustment process. They also analyze why an economy with an underdeveloped financial system may be vulnerable to shocks to domestic and foreign credit conditions.
It is widely argued that countries can reap large gains from liberalizing their capital accounts if financial globalization is accompanied by the development of domestic institutions and financial markets. However, if liberalization does not lead to financial development, globalization can result in adverse effects on social welfare and the distribution of wealth. Mendoza and his co-authors use a multi-country model with non-insurable idiosyncratic risk to show that, if countries differ in the degree of asset market incompleteness, financial globalization will hurt the poor in countries with less developed financial markets. This is because in these countries liberalization leads to an increase in the cost of borrowing, which is harmful for the poor because they have larger liabilities relative to their assets. The quantitative analysis shows that the welfare effects are sizable and can justify policy intervention.
The MIT Press will publish these papers in an annual conference volume later this year. They are also available at "Books in Progress" on the NBER's website.
Lichtenberg examines the impact of pharmaceutical innovation and other factors on the survival of U.S. cancer patients during the 1990s. In particular, he investigates whether cancer survival rates increased more for those cancer sites that had the largest increases in the proportion of drug treatments that were "new" treatments. He controls for "expected survival," that is, the survival of a comparable set of people that did not have cancer, thereby measuring the excess mortality that is associated with a cancer diagnosis. He also controls for other types of medical innovation, such as innovation in surgical procedures, diagnostic radiology procedures, and radiation oncology procedures. Data on observed and expected survival rates, the number of people diagnosed, mean age at diagnosis, and stage distribution come from the National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER) 1973-2003 Public-use Data. Estimates of rates of innovation in drugs and other treatment and diagnostic procedures were constructed from the MEDSTAT MarketScan database and other data sources. He computes weighted least-squares estimates of 12 versions of a survival model, based on different survival intervals, functional forms, and sets of weights. The drug vintage coefficient is positive and significant in almost every model. This indicates that the cancer sites whose drug vintage (measured by the share of post-1990 treatments) increased the most during the 1990s tended to have larger increases in observed survival rates, ceteris paribus. Estimates of the fraction of the 1992-99 change in the observed survival rate attributable to the increased use of post-1990 drugs range from 12 percent to 121 percent. The estimated fraction is higher for shorter survival intervals, when observations are weighted by the number of MEDSTAT drug treatments, and for the logarithmic specification.
The mean of the 12 estimates of the fraction of the 1992-99 change in the observed survival rate attributable to the increased use of post-1990 drugs is 44 percent. Because of sampling and other measurement errors, these estimates may be conservative. The coefficients on measures of other types of medical innovation (in radiation oncology, diagnostic radiology, and surgery) are generally not significant. However, these measures may be less reliable than the drug innovation measure: they are based upon the year in which the AMA established a new procedure code, which may be a far less meaningful indicator of innovation than the year in which the FDA first approved a drug. This topic warrants further research.
Government-sponsored health reinsurance increasingly has been promoted as a strategy for addressing problems in the non-group health insurance market. While reinsurance is promising, economic analysis has identified superior schemes for better accomplishing the same goals at lower cost. Specifically, reinsurance can be considered a crude special case of risk-adjusted insurance subsidies. Dow considers economic benefits and budgetary costs of reinsurance schemes as compared to more sophisticated risk-adjustment, calibrated to the current U.S. context. In particular, risk adjustment is likely to perform better at reducing insurer cream-skimming incentives. Although in the past risk adjustment had been considered too complex to implement in practice, recent experience shows that risk adjustment is now feasible, and he argues that incorporation of risk adjustment would strengthen many current U.S. health insurance reform proposals.
Policies designed to expand insurance coverage may have very different implications beyond the number of newly insured, particularly among the working poor. These broader effects include employment, wages, and the distribution of costs and benefits across families. Meara and her co-authors compare the likely effects of three common approaches to covering the uninsured: public insurance expansions, refundable tax credits for low-income people, and employer and individual mandates. The most common approaches being pursued by the states are likely to miss a large share of the uninsured working poor. Approaches that expand coverage most broadly have potentially significant negative labor market consequences, while market-based approaches redistribute dollars but insure relatively few. Policymakers must consider the full range of economic costs when designing health insurance expansions.
Despite numerous attempts to enact legislation throughout the twentieth century, the United States is the only developed country without a system of national health insurance. Yet, public opinion polls over the last 20 years consistently find that a solid majority of Americans support national health insurance. Bundorf and Fuchs examine the relationship between public support for national health insurance and attitudes toward different roles of government and individual beliefs. They find that people who have favorable attitudes toward government economic intervention and government redistribution are more likely to favor national health insurance than those who have less favorable attitudes toward these roles of government. The most intense support for national health insurance is among those who have favorable attitudes toward both roles of government. Consistent with research about other social programs, the authors find that beliefs regarding racial minorities, as well as beliefs regarding individual control over life, limit support for national health insurance in the United States. On the other hand, negative beliefs regarding businesses are an important source of support for national health insurance.
Many believe the high level of U.S. health care costs compared with other countries is attributable to high administrative costs inherent in our pluralistic health care system. Instead of the usual statistics examining percentage of GDP that various countries spend on health care, Newhouse and Sinaiko show the percentage of Gross State Product that various states spend on health care. Even adjusting for age and income, there is considerable variation across the states in spending levels, with the lowest quintile of states spending approximately the same percentage as the higher spending OECD countries other than the United States. Although a single-payer system may be a sufficient condition to spend at these levels, it is not a necessary condition.
It is often presumed that a publicly funded, single payer health care system like Canada's will deliver better health outcomes and distribute health resources more equitably than the multi-payer, largely private system in the United States. This judgment has been bolstered by comparisons of life expectancy and infant mortality, two measures on which Canada scores higher than the United States. However, both measures are influenced by factors unrelated to the quality or accessibility of medical care (for example, the relatively high teen birth rate in the United States, 2.8 times the Canadian rate, contributes to the differential in infant mortality because teen births are more likely to be high risk: preterm and low birth weight). Thus, the efficacy of the two health care systems cannot be usefully evaluated by comparisons of the two popular measures. O'Neill and O'Neill use a unique dataset, The Joint Canada/U.S. Survey of Health, that was designed to enable comparisons of health outcomes and access to health resources in Canada and the United States. Based on their analysis of these data, they find a somewhat higher incidence of chronic health conditions in the United States, combined with evidence of greater access to health treatments for those conditions. Moreover, with respect to preventive care, a higher percentage of U.S. than Canadian women receive cancer screening in the form of mammograms and pap smears. Health status, measured in various ways, is similar in both countries. But Canada has no more abolished the tendency for health status to improve with income than have other countries. Indeed, the health-income gradient is slightly steeper in Canada than it is in the United States, even after adjusting for education and other variables related to income. The need to ration when care is delivered "free" ultimately leads to unmet needs because of long waits or unavailable services. In the United States, costs are more often a source of unmet needs.
But costs may be more easily overcome than the absence of services. When asked about satisfaction with health services and the ranking of the quality of services recently received, a significantly higher percentage of U.S. residents than Canadians respond that they are very satisfied with services and rank quality of care received as excellent.
Using a comprehensive dataset, Hosono and his co-authors investigate the motives for and consequences of bank consolidation in Japan during fiscal years 1990-2004. Their analysis suggests that regulators' attempts to stabilize local financial markets through consolidation played an important role in the mergers and acquisitions (M&As) conducted by regional banks and credit cooperative (shinkin) banks, although those attempts do not seem to have been successful. Value-maximization motives also seem to have driven the M&As conducted by major banks and regional banks in the early 2000s. The researchers find no evidence supporting managerial motives for empire building.
Using event-study methodology, Sakuragawa and Watanabe ask how the stock market evaluated Japanese financial reform, the so-called "Takenaka Plan." They examine how market participants perceived three important events that occurred in 2003: the release of a package of monetary policies initiated by Fukui, the new governor of the Bank of Japan, and the failures of Resona Bank and Ashikaga Bank. The response of Japanese banks' stock market returns to the failures of Resona and Ashikaga Banks reveals that bank shareholders differentiate individual banks by their financial conditions, as targeted for disciplinary purposes by the Takenaka Plan. The response to the monetary package, however, shows no evidence of such differentiation by individual banks' conditions.
At some point in the not too distant future, China will ease its capital controls and make the renminbi fully convertible into foreign currencies. Shortly after that, Shanghai will re-emerge as an international financial center. What then will become of Hong Kong, now an established financial center? McCauley and Chan argue that Hong Kong will gain stature as an international financial center when China is more open financially and Shanghai returns as a competing center. This thesis is in the tradition of Kindleberger (1974), who argued that federal states can support more than one financial center. The thesis that the development of an onshore international financial center can contribute to the development of a nearby offshore international financial center is in some ways the inverse of that of Rose and Spiegel (2006), who argue that offshore competition can spur the onshore center.
Park documents the rapid growth of household credit in Korea since the foreign exchange crisis of 1997 and the development of the credit card crisis of 2003, then evaluates the adequacy of policy responses to both. The increase in household debt was primarily the result of financial deregulation and a paradigm shift in the financial industry. The new principle adopted by financial institutions after deregulation was mainly to emphasize resource allocation based on market mechanisms. The deregulation of the financial industry after the foreign exchange crisis was no exception, in the sense that it was accompanied by a boom in the financial market that eventually resulted in a violent crash landing. Park discusses the development of the credit card crisis, arguing that it was a classic example of regulatory failure. He contends that, with timely and proper regulatory action, much of the difficulty inflicted by the credit card crisis could have been alleviated, if not averted.
Lim notes that the mechanisms for corporate restructuring differ significantly depending on whether the financial system of the economy is "bank-based" or "stock market-based." In the bank-based system, corporate control remains within the firms; only in the case of (near) bankruptcy is corporate control transferred to banks. In the stock market-based financial system, on the other hand, the owners or management of the firms have to share corporate control with outside investors in the market. This activation of the market for corporate control is exactly the key to corporate restructuring. Before the financial crisis of 1997, Korea's financial system was mainly bank-based; the financial crisis itself meant a collapse of that system. After the crisis, the Korean government made extensive reform efforts to improve the financial system, enhancing the workings of both the banking system and the capital market. Since then, Korea has been moving away from a bank-based financial system toward a stock market-based system. In particular, in the Korean stock market, foreign investors have been increasing their shares (of large blue-chip firms). This exposure of corporate control to outside competition has a significant impact on the behavior of (large) firms. Since large firms are mostly affiliated with big business groups, cross-shareholdings within the groups have increased significantly. In addition, the ownership structure within the groups has changed to strengthen control over affiliated firms.
Green and his co-authors provide a conceptual basis for the price discovery potential of tradable market instruments and, specifically, for the development of mortgage securitization in Asia. Securitization in Asia, they argue, is particularly important because of its potential role in increasing the transparency of the financial sector of Asian economies. The authors put forth a model explaining how misaligned incentives can lead to bank generated real estate crashes and to macroeconomic instability. They examine the banking sector's performance in Asia compared to securitized real estate returns, in order to provide evidence on the contribution of misaligned incentives. They discuss how the addition of mortgage-backed securities (MBS) helps to inoculate markets from the shocks arising from bank-financed mortgage lending. They conclude with a brief discussion of current MBS markets in Asia.
Shen and Lin study the motivations that drive financial institutions to engage in cross-border M&A activity in ten Asian countries before and after the 1997 financial crisis. Five hypotheses are examined: the gravity hypothesis; the following-the-client hypothesis; the market opportunity hypothesis; the information cost hypothesis; and the regulatory restriction hypothesis. Each hypothesis involves two or three proxies, so the results are better interpreted in terms of those proxies. The results suggest that although the following-the-client hypothesis and the regulatory restriction hypothesis are applicable in both periods, the information cost hypothesis is supported only after the Asian crisis.
Recent studies have established that the Japanese stock market was quite large in the pre-war period and played an important role in financing economic development. However, the pre-war stock market in Japan did not achieve its size and status quickly. Indeed, market capitalization remained relatively small during the early years of stock market development in Japan. Hamao and his co-authors study the pre-war development of the Tokyo Stock Exchange (TSE), which eventually grew to be one of the two largest stock exchanges in pre-war Japan. They ask why the development was rather stagnant between its establishment in 1878 and the 1910s, and what led to its take-off in the late 1910s. They argue that the TSE stayed small because low liquidity discouraged new companies from listing their stocks. The lack of growth in newly listed stocks meant that liquidity continued to be low until 1918, when the TSE changed its policy to start listing companies without waiting for their listing applications. They provide empirical evidence from the listing behavior of cotton spinning firms showing that the size of the market indeed mattered for their listing decision before 1918.
What is good for a country's biggest businesses is not good for its overall economy, at least in the late twentieth century. More turnover in a country's list of top ten businesses between 1975 and 1996 is associated with faster per capita GDP growth, productivity growth, and (in low-income countries) capital accumulation. This accords with Schumpeter's early concept of creative destruction, wherein growth is a positive feedback process of innovative firms blooming as stagnant firms wither. Notably, Fogel, Morck, and Yeung find that growth correlates with old big businesses declining and disappearing, not just with new ones arising, which seems more consistent with creative destruction than with other theories of economic growth. Consistent with Aghion and Howitt's (1992) theory that creative destruction matters more to economies nearer the technological frontier, growth correlates more strongly with big business turnover in higher-income countries. More rapid turnover of big business also correlates with smaller government, common law, a less bank-dependent economy, stronger shareholder rights, and greater openness. Only the last is more prominent in low- than in high-income countries. In low-income countries, the turnover of onetime state-controlled big businesses correlates markedly with growth.
Merger activity has grown significantly in China's stock markets in recent years. Adopting the event-study method and the accounting (financial indicators) method, Wu and Zhang examine 752 merger and acquisition events involving 587 companies traded on the Shanghai and Shenzhen stock exchanges in 2005. They find that within the event window (-50 days, +40 days), M&A companies' values increased. The cumulative abnormal return (CAR) of all M&A firms within the event window was 1.74 percent; for acquiring firms and target firms, the CARs were 1.68 percent and 2.03 percent, respectively. They also examine whether M&A type, the M&A companies' industries, ownership structure (the type of controlling shareholder), and the stock market's aggregate performance affect the returns to M&A events. Analyzing accounting indicators over a longer observation period (four years), they find that the financial condition of M&A companies declined somewhat in the first year after the M&A but improved markedly in the following year. However, both short-term and long-term ability to repay debt declined, with no clear sign of improvement after the merger.
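The event-study logic behind a CAR figure can be sketched as follows. This is a minimal illustration of the generic market-model event study, not Wu and Zhang's actual estimation: all returns below are synthetic, and the window lengths and model parameters are assumptions chosen only for demonstration.

```python
import numpy as np

# Synthetic daily returns for one firm and the market (illustrative only).
rng = np.random.default_rng(0)

# Estimation window: fit the market model r_firm = alpha + beta * r_market.
market_est = rng.normal(0.0005, 0.01, 200)
firm_est = 0.0002 + 1.1 * market_est + rng.normal(0, 0.005, 200)
beta, alpha = np.polyfit(market_est, firm_est, 1)  # slope first, then intercept

# Event window, e.g. -50 to +40 trading days around the announcement (91 days).
market_evt = rng.normal(0.0005, 0.01, 91)
firm_evt = 0.0002 + 1.1 * market_evt + rng.normal(0, 0.005, 91)

# Abnormal return = actual return minus the market model's prediction;
# CAR = cumulative sum of abnormal returns over the event window.
abnormal = firm_evt - (alpha + beta * market_evt)
car = abnormal.sum()
print(f"CAR over event window: {car:.4f}")
```

In practice the per-firm CARs would be averaged across all sampled M&A events and tested against zero; the 1.74 percent figure above is such a cross-event average.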
Chinn and Ito investigate the role of budget balances, financial development, and openness in the evolution of global imbalances. Financial development, or the lack thereof, has received considerable attention as a possible contributing factor to the development of persistent and expanding current account imbalances. Several observers have argued that the depth and sophistication of U.S. capital markets have caused capital to flow from relatively underdeveloped East Asian financial markets. Here the authors extend their previous work by examining the effect of different types and aspects of financial development. Their cross-country analysis, encompassing a sample of 19 industrialized countries and 70 developing countries for the period 1986 through 2005, yields a number of new results. First, they confirm a role for budget balances in industrial countries when bond markets are incorporated. Second, empirically, both credit to the private sector and stock market capitalization appear to be equally important determinants of current account behavior. Third, while increases in the size of financial markets induce a decline in the current account balance in industrial countries, the reverse is more often the case for developing countries, especially when other measures of financial development are included. However, because of nonlinearities incorporated into the specifications, this characterization is contingent. Fourth, a greater degree of financial openness is typically associated with a smaller current account balance in developing countries.
Sturm and Williams examine the factors that determine differences in the efficiency of foreign banks in the host market (Australia). They consider the impact of home market, host market, and parent bank characteristics, within the frameworks offered by comparative advantage and new trade theories. They also use parametric distance functions to estimate the efficiency of foreign banks in Australia, and they test the robustness of model specification using both general-to-specific modelling and extreme-bounds analysis. They find that following clients reduces the efficiency of profit creation. Incumbent banks' market share acts as a barrier to entry, while parent bank profits do not improve host nation efficiency. The limited global advantage hypothesis is relevant for banks from the United Kingdom, while banks from the United States were generally less efficient.
At this conference, the discussion topics presented by Chinese participants included: the relationship between trade, foreign direct investment, and technological upgrades in China; measurements of the return to capital in China; how entrepreneurship will influence China's growth; Chinese energy policy and the environment; healthcare reforms in China; and, how aging of China's population will affect economic development.
U.S. participants at this year's conference were: NBER President Martin Feldstein, and Professor Shang-Jin Wei of Columbia University and NBER, both serving as the U.S. conference organizers; NBER researchers Kristin Forbes of MIT, Douglas A. Irwin of Dartmouth College, Kathleen M. McGarry of University of California Los Angeles, Gilbert E. Metcalf of Tufts University, Alvin E. Roth of Harvard University, Scott Stern of Northwestern University, and Mark W. Watson of Princeton University. They discussed: the lessons from history regarding trade policy; the impact of patents on scientific research; international business cycle dynamics; energy policy and the environment; health insurance and health care; and the economics of kidney exchange.
The entire conference program, with links to other related information, is available on the NBER's web site.