The NBER Reporter Summer 2003: News
President Bush Taps NBER Economists for CEA
Levitt Receives John Bates Clark Medal
NBER Researchers Lead American Economics Association
NBER Announces 2003-4 Nonprofit Fellowships
Thomas D. Flynn, Director Emeritus, Dead at 90
2002 Japan Conference: A Summary of the Papers
International Trade and Organizations
President Bush has nominated two more NBER researchers to become members of his Council of Economic Advisers. Harvey S. Rosen and Kristin Forbes will be joining N. Gregory Mankiw, a Harvard economics professor and NBER Research Associate, who is CEA Chairman. Rosen, an NBER Research Associate and professor of economics at Princeton University, was nominated in April. He holds a Ph.D. from Harvard University and served as deputy assistant secretary for tax analysis in the administration of George H. W. Bush. Forbes, an NBER Faculty Research Fellow and professor at MIT's Sloan School of Management, was nominated in May. She has also served as deputy assistant secretary of the Treasury Department in the current Bush Administration. NBER Research Associate Andrew A. Samwick, a member of the economics faculty at Dartmouth College, is the CEA's Chief Economist.
Levitt Receives John Bates Clark Medal
NBER Research Associate Steven D. Levitt, a professor of economics at the University of Chicago, received the American Economic Association's John Bates Clark Medal this year. The award is given every two years to the economist under the age of 40 who has made the most substantial contribution to the field of economics. Levitt is a member of the NBER's Programs on Public Economics, Law and Economics, Children, and Education. He received the medal for his work on crime and corruption, including studies of teacher cheating in Chicago schools, corruption in Japanese sumo wrestling circles, and the economics of gang life. Levitt received his B.A. from Harvard University in 1989 and his Ph.D. from MIT in 1994. Previous Clark medalists among NBER Research Associates were: Zvi Griliches, Daniel L. McFadden, Martin S. Feldstein, Joseph E. Stiglitz, James J. Heckman, Jerry A. Hausman, Sanford J. Grossman, Paul R. Krugman, Lawrence H. Summers, David Card, Kevin M. Murphy, and Andrei Shleifer.
NBER Researchers Lead American Economics Association
Three NBER Research Associates are the President, President-Elect, and nominee for President-Elect, respectively, of the American Economic Association (AEA). Peter A. Diamond, an MIT economics professor and member of the NBER's Programs on Monetary Economics and on Economic Fluctuations and Growth, is the current AEA President. NBER President Martin Feldstein, a professor of economics at Harvard University and a member of the NBER's Programs on Aging, Health Care, Public Economics, and Economic Fluctuations and Growth, is President-Elect. He is slated to become the AEA's President in January, succeeding Diamond. The AEA's Executive Committee also has chosen Daniel L. McFadden as the nominee for President-Elect, to succeed Feldstein. McFadden is a professor of economics at the University of California, Berkeley, and a member of the NBER's Program on Aging.
NBER Announces 2003-4 Nonprofit Fellowships
NBER Research Associate James M. Poterba has announced the recipients of NBER Fellowships for the Study of Nonprofit Institutions for the 2003-4 academic year. This NBER fellowship program is designed to encourage research on nonprofit institutions by NBER Research Associates and Faculty Research Fellows, and to support dissertation research on the same subject by graduate students in economics who work closely with them. The four graduate students who will receive fellowship support are: Martha Bailey, Vanderbilt University, whose topic is "Nonprofit Organizations Providing Family Planning Services"; Guy David, University of Chicago, who is studying the "Performance of Not-for-profit versus For-Profit Hospitals"; Dan Hungerman, Duke University, for "Behavior of Organized Religious Organizations"; and Peter Katuscak, University of Michigan, for "Managerial Pay in Nonprofit Organizations." Faculty grants were made to: Joshua Angrist, MIT, and Kevin Lang, Boston University, who will study "A Nonprofit Charter School: Media and Technology Charter School, Boston"; Esther Duflo, MIT, who will focus on "The Effectiveness of Nonprofit Organizations in Developing Countries"; Charles Phelps, University of Rochester, whose research is on "Governance in Not-for-Profit Hospitals and Universities"; David Reiley, University of Arizona, who will analyze "Fundraising Mechanisms Used by Charitable Organizations"; and Michael Rothschild, Princeton University, who will ask "How Does California Produce First-Rate Research Universities?"
Thomas D. Flynn, Director Emeritus, Dead at 90
Thomas D. Flynn, who became a Director of the NBER in 1967 representing the American Institute of Certified Public Accountants as its president, died on April 12 at age 90. He had been a Director Emeritus of the NBER since 1968. Flynn was a 1935 graduate of Princeton University with a degree in economics. He received his M.B.A. in accounting from the Columbia Graduate School of Business in 1938. After a short stint on the staff of the Federal Trade Commission, Flynn joined the accounting firm Arthur Young & Company in 1940. He rose to senior partner before being named vice chairman of the firm's management committee. When he retired in 1975, Governor Hugh L. Carey of New York named him to head the Municipal Assistance Corporation, an agency set up to help manage New York City's troubled finances. He remained a member of that board until 1979. Flynn was also a trustee emeritus of Columbia University. He leaves his wife of 62 years, Harriett Howland Flynn, two daughters, one son, and six grandchildren.
2002 Japan Conference: A Summary of the Papers
The NBER recently published a booklet summarizing the papers and discussions presented at a September 2002 conference in Tokyo. This conference, which has been held annually for the past several years, presents some of the best English-language papers on the Japanese economy. This year, there was also a panel discussion on monetary policy in Japan, and a luncheon address by Haruhiko Kuroda, at the time the Vice Minister of the Japanese Ministry of Finance. The full versions of these papers are available as NBER Working Papers and can be found at /papers/. An electronic version of the booklet is also available at /2002japanconf/.
Farber models the labor supply of taxi drivers, suggesting that because income effects in response to temporary fluctuations in daily earnings opportunities are likely to be small, cumulative hours will be much more important than cumulative income in the decision to stop work on a given day. However, if income effects are large because of very high discount and interest rates, then labor supply functions could bend backward and, in the extreme case where the wage elasticity of daily labor supply is minus one, drivers could be target earners. Indeed, earlier studies by other researchers find that the daily wage elasticity of labor supply of New York City cab drivers is substantially negative; they conclude that cab drivers probably are target earners. However, based on new data, Farber concludes that -- when accounting for earnings opportunities in a reduced form with measures of clock hours, day of the week, weather, and geographic location -- cumulative hours worked on the shift are a primary determinant of the likelihood of stopping work, while cumulative income earned on the shift is, at best, weakly related to the likelihood of stopping work. This is consistent with the existence of intertemporal substitution and inconsistent with the hypothesis that taxi drivers are target earners.
Many markets have organizations that influence or try to establish norms concerning when offers can be made, accepted, and rejected. An examination of a dozen previously studied markets suggests that markets in which transactions are made far in advance are those in which it is acceptable for firms to make exploding offers, and where it is unacceptable for workers to renege on commitments they make, however early. Still, markets differ in many ways other than norms concerning offers. Laboratory experiments allow Niederle and Roth to isolate the effects of exploding offers and binding acceptances. In a simple environment, where uncertainty about applicants' quality is only resolved over time, the authors find inefficient early contracting when firms can make exploding offers and applicants' acceptances are binding. Relaxing either of these two conditions causes matching to take place later, when more information about applicants' qualities is available, and consequently results in higher efficiency and fewer blocking pairs. This suggests that elements of market culture may play an important role in influencing market performance.
Botero, Djankov, La Porta, Lopez-de-Silanes, and Shleifer investigate the regulation of labor markets through employment laws, collective bargaining laws, and social security laws in 85 countries. They find that richer countries regulate labor less than poorer countries do, although the former have more generous social security systems. The authors' measures of the political power of the left are associated with more stringent labor regulations and more generous social security systems, although the former result is not robust. Finally, countries with socialist and French legal origin have sharply higher levels of labor regulation than do common law countries. These results are difficult to reconcile with straightforward versions of efficiency and political power theories of institutional choice, but are broadly consistent with legal theories, according to which countries have pervasive regulatory styles inherited from the transplantation of legal systems.
Bitler, Gelbach, and Hoynes use experimental data to estimate quantile treatment effects (QTEs) of Connecticut's Jobs First reform. They find considerable evidence of systematic heterogeneity, which is generally consistent with theoretical predictions. Moreover, they find that mean impacts computed for subgroups would not have uncovered a number of important findings. For example, earnings at the very top of the distribution fall, as theory predicts should happen when more generous policies encourage women to stay on or re-enter welfare. Also, Jobs First has essentially no impact on earnings at the bottom of the earnings distribution: income at the bottom deciles is constant before time limits hit (contrary to large positive mean impacts for a disadvantaged subgroup) and falls after time limits begin to take effect (again contrary to a zero effect using means). These findings suggest cause for concern about the ability of programs like Jobs First to end welfare dependence of significant numbers of women.
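Under random assignment, the QTE at quantile q is simply the difference between the q-th quantiles of the treatment-group and control-group outcome distributions. The following sketch shows the mechanics on simulated earnings data; the lognormal distributions and all parameter values are illustrative assumptions, not the Jobs First data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulated earnings for control and treatment groups;
# these distributions are assumptions, not the Connecticut data.
control = rng.lognormal(mean=7.0, sigma=1.0, size=5000)
treated = rng.lognormal(mean=7.1, sigma=0.9, size=5000)

def quantile_treatment_effects(treated, control, quantiles):
    """QTE at each quantile q: the difference between the q-th quantiles
    of the treatment and control outcome distributions."""
    return np.quantile(treated, quantiles) - np.quantile(control, quantiles)

deciles = [i / 10 for i in range(1, 10)]  # 0.1, 0.2, ..., 0.9
qte = quantile_treatment_effects(treated, control, deciles)
# Unlike a single mean impact, qte traces the program's effect across
# the whole earnings distribution, decile by decile.
```

A mean impact collapses this vector to one number; inspecting `qte` decile by decile is what reveals the kind of heterogeneity, such as losses at the top alongside no effect at the bottom, that Bitler, Gelbach, and Hoynes emphasize.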
Each day, close to 20,000 people worldwide become infected with HIV; a large portion of them are infected through unprotected sex with commercial sex workers. While condoms are an effective defense against the transmission of HIV and other sexually transmitted infections, large numbers of sex workers are not using them with their clients. Gertler, Shah, and Bertozzi argue that sex workers are willing to take the risk because clients are willing to pay more to avoid using condoms. Using a panel data set from Mexico, the authors estimate that commercial sex workers received a 23 percent premium for unprotected sex from clients who requested not to use a condom. However, this premium jumped to 46 percent if the sex worker was considered very attractive. These results suggest that the current supply-side policies aimed at educating sex workers about risk and empowering them are not sufficient to increase condom use significantly. Rather, complementary interventions aimed at reducing the demand for unprotected sex are needed.
Advocates of teacher incentive programs argue that they can strengthen weak incentives, while opponents argue that they lead to "teaching to the test." Glewwe, Ilias, and Kremer find that existing teacher incentives in Kenya are indeed weak, with teachers absent 20 percent of the time. The authors then report on a randomized evaluation of a program that provided primary school teachers in rural Kenya with incentives based on students' test scores and dropout rates. Students in program schools had higher test scores, significantly so on at least some exams, during the time the program was in place. An examination of the channels through which this effect took place, however, provides little evidence of more teacher effort aimed at increasing long-run learning. Teacher attendance did not improve, homework assignments did not increase, dropout rates did not fall, and pedagogy did not change. There is evidence, however, that teachers increased their efforts to manipulate short-run test scores. Conditional on being enrolled, students in treatment schools were more likely to take tests, and test scores in treatment schools were higher than in comparison schools during the life of the program; these gains were not retained after the program ended, consistent with a model in which teachers focused on manipulating short-run scores.
A common view in macroeconomics is that business cycles can be decomposed meaningfully into fluctuations driven by demand shocks -- shocks that have no short- or long-run effects on productivity -- and fluctuations driven by unexpected changes in technology. Beaudry and Portier propose a means of evaluating this view and show that it is strongly at odds with the data. They show instead that the data favor a view of business cycles driven primarily by a shock that does not affect productivity in the long run. The structural interpretation they suggest for this shock is that it represents news about future technological opportunities. They show that this shock explains about 50 percent of business cycle fluctuations and therefore deserves to be acknowledged and further understood by macroeconomists.
Conventional measures of monetary policy, such as the federal funds rate, are surely influenced by forces other than monetary policy. More importantly, central banks adjust policy in response to a wide range of information about future economic developments. As a result, estimates of the effects of monetary policy derived using conventional measures tend to be biased. To address this problem, Romer and Romer develop a measure of monetary policy shocks in the United States from 1969 to 1996 that is relatively free of endogenous and anticipatory movements. To address the problem of forward-looking behavior, the authors control for the Federal Reserve's forecasts of output and inflation prepared for scheduled FOMC meetings. They remove from their measure policy actions that are a systematic response to the Federal Reserve's anticipations of future developments. To address the problem of endogeneity, and to ensure that the forecasts capture the main information the Federal Reserve had at the times decisions were made, they consider only changes in the intended federal funds rate around scheduled FOMC meetings. The series for intended changes spans periods when the Federal Reserve was not exclusively targeting the funds rate. It is derived using information on the expected funds rate from the records of the Open Market Manager and information on intentions from the narrative records of FOMC meetings. Estimates of the effects of monetary policy obtained using the new measure indicate that policy has large, relatively rapid, and statistically significant effects on both output and inflation. The authors find that the effects using the new measure are substantially stronger and quicker than those using prior measures. This suggests that previous measures of policy shocks are significantly contaminated by forward-looking Federal Reserve behavior and endogeneity.
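The purging step can be sketched as a regression: the change in the intended funds rate at each scheduled meeting is regressed on the Fed's internal forecasts, and the residual is the shock series. The data below are simulated stand-ins; the forecast series, coefficients, and sample size are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical number of scheduled FOMC meetings

# Stand-ins for the Greenbook forecasts of output growth and inflation.
forecast_output = rng.normal(2.5, 1.0, n)
forecast_inflation = rng.normal(3.0, 1.0, n)

# Intended funds-rate change: a systematic response to the forecasts
# plus a non-systematic component (the shock we want to recover).
true_shock = rng.normal(0.0, 0.25, n)
d_intended = 0.1 * forecast_output + 0.2 * forecast_inflation + true_shock

# Regress the intended change on the forecasts; the residuals are the
# policy-shock series purged of anticipatory movements.
X = np.column_stack([np.ones(n), forecast_output, forecast_inflation])
beta, *_ = np.linalg.lstsq(X, d_intended, rcond=None)
shocks = d_intended - X @ beta
```

By construction the residual series is orthogonal to the forecasts, which is the sense in which it is free of anticipatory movements; Romer and Romer's actual specification includes additional controls, such as the level of the intended funds rate going into the meeting.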
Beyer and Farmer propose a method for estimating a subset of the parameters of a structural rational expectations model by exploiting changes in policy. They define a class of models, midway between a vector autoregression (VAR) and a structural model, that they call the recoverable structure. They provide an application of their method by estimating the parameters of a three-equation model of the monetary transmission mechanism using data from 1970:Q1 to 1999:Q4. They first estimate a VAR and find that its parameters are unstable. However, using their proposed identification method, they are able to attribute instability in the parameters of the VAR solely to changes in the parameters of the policy rule. They recover parameter estimates of the recoverable structure and demonstrate that these parameters are invariant to changes in policy. Since the recoverable structure includes future expectations as explanatory variables, their parameter estimates are not subject to the Lucas critique of econometric policy evaluation.
Recent papers have analyzed how adaptive agents may converge to and escape from self-confirming equilibria. All of these papers have imputed to agents a particular prior about drifting coefficients. In the context of a model of monetary policy, Sargent and Williams analyze the dynamics that govern both convergence and escape under a more general class of priors for the government. They characterize how the shape of the prior influences the dynamics in important ways. There are priors for which the E-stability condition is not enough to assure local convergence to a self-confirming equilibrium. This analysis also tracks down the source of differences in the sustainability of Ramsey inflation encountered in the analyses of Sims (1988) and Chung (1990), on the one hand, and Cho, Williams, and Sargent (2002), on the other.
Using survey data on expectations, Leduc, Sill, and Stark examine whether the post-war data are consistent with theories of a self-fulfilling inflation episode during the 1970s. Among commonly cited factors, oil and fiscal shocks do not appear to have triggered an increase in expected inflation that was subsequently validated by monetary policy. However, the evidence suggests that, prior to 1979, the Fed accommodated temporary shocks to expected inflation, which then led to permanent increases in actual inflation. The authors do not find this behavior in the post-1979 data.
To reproduce the high variation of employment and low variation of real wages observed in the post-war U.S. data, many business cycle models must assume that there are high price markups and that agents have high labor supply elasticities. The problem with these assumptions is that microeconomic evidence indicates that markups and labor supply elasticities are generally low. To eliminate the need for these assumptions, Alexopoulos introduces a labor market friction into a monetary general equilibrium model with limited participation. In the model, firms imperfectly observe their workers' effort levels; detected shirkers forgo an increase in their compensation; and households make their decisions about their level of monetary deposits for the period before seeing the shocks to the economy. The estimated model is able to account for the observed variation in employment and real wages over the business cycle, and is consistent with the existing evidence about the qualitative responses of the U.S. economy to fiscal and monetary policy shocks.
Beginning in 1998, all high school students in the state of Texas who graduated in the top 10 percent of their class were guaranteed admission to any public higher education institution, including the University of Texas. While the goal of the policy was to improve access for disadvantaged and minority students, the use of a school-specific standard to determine eligibility could have unintended consequences. Students may benefit from switching schools near the end of their high school career in order to change their peer reference group and to increase the chances of being in the top 10 percent. In their analysis of student mobility patterns before and after the policy change, Cullen, Long, and Reback find evidence that these strategic moves did occur.
Jensen and Thursby examine concerns about whether recent changes in patent policy that increased the ability and the incentives for U.S. universities to patent inventions have been detrimental to academic research and education. They analyze a model in which a researcher allocates time between applied and basic research, given administration choices of salary and teaching load. The choice depends on a "composite" marginal rate of substitution of applied for basic research, which takes into account both the utility of each type of research and its productivity in generating income and prestige. In equilibrium, if license income is the same whether the researcher remains in the university or not, an increase in the share of income leads either to no change in optimal teaching load and salary, or to changes in opposite directions. A decrease in the portion of patentable knowledge that can be used in education likewise either has no effect on the optimal teaching load and salary, or causes them to move in opposite directions. Results on the quality of education are typically ambiguous because changes in the teaching load induce changes in research that have an opposite effect on educational quality.
Using data from 13 liberal arts colleges, Baum and Goodstein test for affirmative action for men in the college admissions process. Despite the relative shortage of men in the applicant pools, gender either has no effect on the probability of admission or there is a slight preference for women, except at some formerly all-female institutions. The authors do find, however, that the bottom quartile of both the applicant and acceptance pools, as measured by academic record, is disproportionately male. As a result, even with a gender-blind admissions policy, the lower tail of college classrooms is likely to be dominated by men.
To ask whether the best-informed consumers of higher education -- the faculty -- make different choices from other similarly endowed consumers, Siegfried and Getz compare the pattern of colleges chosen by 5,592 children of college and university faculty with the pattern chosen by the children of non-faculty families of similar socioeconomic status. The patterns are remarkably different. The children of faculty are more likely to choose research universities and even more likely to choose selective liberal arts colleges. This evidence is consistent with the view that the level of information makes a difference in the choice of college.
Are federal Pell grants "appropriated" by universities through increases in tuition -- consistent with what is known as the Bennett hypothesis? Based on a panel of 71 universities from 1983 to 1996, Singell and Stone find little evidence of the Bennett hypothesis among either public or lower-ranked private universities. For top-ranked private universities, though, increases in Pell grants appear to be more than matched by increases in net tuition. The behavior most consistent with this result is price discrimination that is not purely redistributive from wealthier to needier students.
International Trade and Organizations
Legros and Newman show that vertical integration may be chosen by managers to the detriment of consumers, even in the absence of monopoly power in either supply or product markets. This effect is most likely to occur when demand is initially high and there is a negative supply shock, or when demand is low and there is a positive demand shock. Therefore, there is a need for scrutiny of vertical mergers even in the absence of market power. This result is robust to the introduction of active shareholders who may oppose the merger and to other extensions.
The incomplete nature of contracts governing international transactions limits the extent to which the production process can be fragmented across borders. In a dynamic, general-equilibrium Ricardian model of North-South trade, the incompleteness of contracts leads to the emergence of product cycles. Goods initially are manufactured in the North, where product development takes place. As the good matures and becomes more standardized, manufacturing shifts to the South, benefiting from lower wages. Following the property-rights approach to the theory of the firm, the same force that creates product cycles -- that is, incomplete contracts -- opens the door to a parallel analysis of the determinants of the mode of organization. As a result, a new version of the product cycle emerges, in which manufacturing shifts to the South first within firm boundaries, and only later to independent firms in the South. Relative to a world with only arm's length transacting, allowing for intrafirm production transfer by multinational firms accelerates the shift of production towards the South, while having an ambiguous effect on relative wages. Antràs also discusses several microeconomic implications of the model and relates them to the empirical literature on the product cycle.
Andrabi, Ghatak, and Khwaja develop a simple model of flexible specialization under demand uncertainty. A buyer faces multiple suppliers with heterogeneous quality who have the option of selling to other buyers. The more specific a seller's assets are to the buyer, the lower is his flexibility to cater to the outside market, and this cost is greater for higher quality suppliers. Therefore, even if a buyer typically prefers high quality suppliers, some low quality suppliers might be kept as marginal suppliers because of their greater willingness to invest more in assets specific to the buyer, especially in the presence of contracting problems and high uncertainty. Andrabi, Ghatak, and Khwaja examine a primary dataset on contracts between the largest tractor assembler in Pakistan and its suppliers and ask how the extent of asset specificity and other supplier characteristics affect contractual outcomes, such as prices and distribution of orders. They find that the more dedicated suppliers are indeed of lower quality.
Many experts have identified globalization with a new way for firms to organize their activities and with the emergence of talent as the new stakeholder in the firm. Marin and Verdier examine the role of trade integration in the changing nature of the corporation. International trade leads to a "war for talent," which makes it more likely that, in the integrated world economy, an organizational equilibrium emerges in which control is delegated to lower levels of the firms' hierarchy, empowering human capital. Furthermore, trade integration leads to waves of outsourcing and to convergence in corporate cultures across countries.
There are a variety of models that seek to explain the process of technological innovation, but less is understood about the nature of innovation and diffusion in health care. Most models of technological innovation imply convergence: gradually, even the laggard adopters of new technology catch up to the innovators. And, while different hospitals or regions may develop their own strategies for treating specific diseases, the productivity of their strategies -- that is, their quality-adjusted price -- should exhibit convergence over time. Using Medicare data on 2.6 million heart attack patients during 1989-2000, Skinner and Staiger find, first, that the rapid technological gains of the late 1980s and early 1990s had flattened by the late 1990s; indeed, the quality-adjusted price has risen since 1995. Second, in considering regional differences in costs and mortality outcomes, the authors find no evidence of convergence with respect to productivity during this period. The evidence favors long-term differences across states in their propensity to adopt new technology; states likely to adopt hybrid corn in the 1930s and 1940s were most likely to adopt the use of highly effective and low-cost beta-blockers in the 1990s.
Khwaja uses microdata on a longitudinal sample of individuals from the Health and Retirement Study and conducts a computer-simulated thought experiment comparing trends and outcomes under the status quo to those in a world without Medicare. Further, Khwaja analyzes the effect of Medicare on life-cycle health-related behaviors, that is, alcohol consumption, smoking, and exercise. The simulations from the model show that, in the absence of Medicare, health outcomes decline only marginally while the utilization of medical care falls dramatically. In the absence of Medicare, alcohol consumption, smoking, and exercise also increase. This suggests that Medicare induces significant moral hazard -- that is, over-consumption of medical care -- with small consequent effects in terms of improving health outcomes. Medicare also provides an incentive to reduce the risky habits of smoking and alcohol consumption. But the largest benefit of Medicare is in mitigating financial loss, especially for the elderly: the presence of Medicare reduces out-of-pocket medical expenditures by large amounts. Thus, in sum, the benefits of Medicare are primarily in providing financial protection against illness. There is a small benefit in terms of improving health outcomes, because of better access to care, but Medicare leads to over-consumption of medical care. Finally, Medicare also seems to generate the right incentives vis-à-vis health-related habits.
A number of studies suggest that collecting and publicly reporting quality data about health plans is of limited value because consumers make little use of the information in their health plan choices. However, this does not address another possible beneficial effect of collecting quality data: health care providers may respond to the information, independent of the extent to which it is used by consumers, resulting in improvements in the practice of medicine. Bundorf and Baker investigate the extent to which reporting on quality among managed care plans within an area affects area-level health care delivery patterns. They study a nationally representative sample of individuals with employer-sponsored health insurance living in metropolitan areas during 1996-9. The authors consider four services: breast cancer screening for women 52-69; cervical cancer screening for women 21-64; eye exams for diabetics; and annual check-ups for adults. They find that area-level reporting on health plan quality is associated with area-level utilization of some, but not all, measured services. Rates of screening for cervical cancer and eye exams for diabetics were higher in markets with a greater proportion of HMOs participating in HEDIS (Health Plan Employer Data and Information Set) quality measurement. Mammogram and physical exam rates, in contrast, appear not to have been affected by HEDIS quality measurement and reporting activity. The implication is that health plan quality measurement and reporting activity may drive changes in health care delivery, improving the quality of care throughout a market. This suggests that collection of quality data can have beneficial effects on health care even if consumers do not actively use health plan quality information when making health plan choices. However, some measures of quality may be more effective than others in stimulating change in provider behavior.
Beaulieu develops a model and presents evidence of a spillover, or free-riding externality, that may arise when managed care organizations (MCOs) have overlapping provider networks and compete on non-price characteristics. With overlapping provider networks, quality improvements initiated by one MCO may spill over and improve the quality of other MCOs sharing the same provider network. Since the quality of all products potentially improves, less differentiation is achieved by the MCO making the initial investment. This free-riding externality reduces the return on investment in quality improvement by increasing the costs of differentiation. Beaulieu presents evidence on the degree of physician network overlap for health plans competing in the same market. She confirms that MCOs with broader provider networks are less likely to invest in quality improvement activities that might spill over to other MCOs' plans. Consistent with the finding on health plan investment, a second analysis shows that MCOs with tighter provider networks, and MCOs that operate in markets with lower degrees of network overlap, have higher performance on HEDIS quality measures. One implication of the model is that the degree of exclusivity of an MCO's provider network is a strategic organizational choice that interacts with the MCO's product choice.
Picone, Brown, and Sloan use a longitudinal national sample of Medicare claims linked to the National Long-Term Care Survey (NLTCS) to assess the productivity of routine eye examinations. Although such exams are widely recommended by professional organizations for certain populations, there is limited empirical evidence on the productivity of these preventive services. The authors measure two outcomes: the ability to continue reading, and blindness or low vision. They find a statistically significant and beneficial effect of routine eye exams for both outcomes. The marginal effects for reading ability are large but declining in the number of years with annual visits. Effects for blindness are smaller for the general population, but larger for diabetics.
Goettler, Parlour, and Rajan provide an algorithm for solving for equilibrium in a dynamic limit-order market. The authors formulate a limit-order market as a stochastic sequential game and use a simulation technique based on Pakes and McGuire (2001) to find a stationary equilibrium. Given the stationary equilibrium, they generate artificial time series and perform comparative dynamics. They can then compare transaction prices to true, or efficient, prices. Because of the endogeneity of order flow, the midpoint is not always a good proxy for the true price. The authors also find that the effective spread is negatively correlated with transactions costs and uncorrelated with welfare. They explicitly determine investor welfare in their numerical solution. As one policy experiment, they evaluate the effect of changing tick size.
Hasbrouck examines various approaches for estimating effective costs and price effects using daily data. Then he compares the daily-based estimates with corresponding estimates based on high-frequency TAQ data. The best daily-based estimate of effective cost is the Gibbs sampler estimate of the Roll model. The correlation between the best daily-based estimate and the TAQ value is about 0.90 for individual securities and about 0.98 for portfolios. Daily-based proxies for price effect costs are more problematic, though. Among the proxies Hasbrouck considers, the illiquidity measure (Amihud (2000)) appears to be the best: its correlation with the TAQ-based price impact measure is 0.47 for individual stocks and 0.90 for portfolios. Hasbrouck then extends the Gibbs estimate to the full sample covered by the daily CRSP database (beginning in 1962). The CRSP estimates exhibit considerable cross-sectional variation -- consistent with the presumption that trading costs vary widely across firms -- but only modest time-series variation. In specifications using Fama-French factors, the Gibbs effective cost estimates are found to be positive determinants of expected returns.
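The Amihud illiquidity proxy that Hasbrouck finds most useful is conventionally computed as the average, over trading days with positive volume, of absolute daily return divided by dollar volume. A minimal sketch, with hypothetical daily figures:

```python
# Sketch of the Amihud illiquidity proxy: the daily average of
# |return| / dollar volume, taken over days with positive volume.
# The input numbers below are purely illustrative.

def amihud_illiquidity(returns, dollar_volumes):
    """Average of |daily return| / dollar volume over days with volume."""
    ratios = [abs(r) / v for r, v in zip(returns, dollar_volumes) if v > 0]
    if not ratios:
        raise ValueError("no days with positive dollar volume")
    return sum(ratios) / len(ratios)

rets = [0.01, -0.02, 0.005, 0.0]      # hypothetical daily returns
vols = [1e6, 2e6, 5e5, 1e6]           # hypothetical daily dollar volume
illiq = amihud_illiquidity(rets, vols)  # ~7.5e-9 for this toy sample
```

Higher values indicate a larger price move per dollar traded, that is, a less liquid stock; this is the quantity whose correlation with the TAQ-based price impact measure is reported above.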
Sadka demonstrates the importance of liquidity for asset pricing, especially for understanding the momentum anomaly. First, systematic liquidity risk, rather than the absolute level of liquidity, is shown to be important in explaining the cross-sectional variation of expected returns. Moreover, momentum returns can be attributed partially to compensation for liquidity risk. Second, seemingly profitable momentum strategies that earn superior risk-adjusted returns (in absolute value) are in fact associated with low levels of liquidity. Therefore, the liquidity level of momentum portfolios suggests possible limits to arbitrage.
Werner uses unique order data to compare trading costs for institutional orders routed to Nasdaq sell-side dealers pre- and post-decimals. Her results show that average execution-value-weighted trading costs have declined by 20 basis points compared to open mid-quotes, by 16 basis points compared to the quotes at order arrival (effective half-spreads), and by 1 basis point compared to mid-quotes at the close (realized half-spreads). The reduction in effective half-spreads represents a 22 percent reduction in average institutional trading costs, which corresponds to $69 million in total cost savings for one week's worth of institutional orders. The results are robust to controlling for order, stock, trading, and dealer characteristics. Moreover, fill rates have improved and duration has not increased significantly. Werner also documents differences in trading costs by order type: effective half-spreads for market orders are now 52 basis points and for marketable limit orders 42 basis points, while limit orders enjoy a 46 basis point spread gain on average.
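The effective and realized half-spreads above follow the standard definitions: execution price compared with the quote midpoint at order arrival (effective) or with a later midpoint, here the close (realized), signed by trade direction. A minimal sketch with hypothetical prices:

```python
# Standard half-spread measures, in basis points of the midpoint.
# direction = +1 for a buy order, -1 for a sell order.
# All quoted prices below are hypothetical.

def effective_half_spread(price, mid_at_arrival, direction):
    """Execution price vs. the quote midpoint when the order arrived."""
    return direction * (price - mid_at_arrival) / mid_at_arrival * 1e4

def realized_half_spread(price, mid_later, direction):
    """Same sign convention, but against a later midpoint
    (the closing midpoint in Werner's study)."""
    return direction * (price - mid_later) / mid_later * 1e4

# A buy filled at 20.05 against an arrival midpoint of 20.00:
eff = effective_half_spread(20.05, 20.00, +1)   # 25 bps paid at execution
# If the closing midpoint is 20.04, most of that cost was transitory:
real = realized_half_spread(20.05, 20.04, +1)   # ~5 bps
```

The gap between the two is the part of the execution cost recovered as the price reverts, which is why the realized half-spread figure in the text is so much smaller than the effective one.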
Boehmer, Saar, and Yu investigate an important feature of market design: pre-trade transparency, defined as the availability of information about pending trading interest in the market. They look at the way the NYSE's introduction of OpenBook, which enables traders off the exchange floor to observe depth in the limit-order book in real time, affects the trading strategies of investors and specialists, and influences informational efficiency, liquidity, and returns. The authors find that traders attempt to manage the exposure of their limit orders: the cancellation rate increases, time-to-cancellation shortens, and smaller orders are submitted. The new information that OpenBook provides seems to cause traders to prefer to manage the trading process themselves, rather than to delegate the task to floor brokers. The authors also show that specialists' participation rate in trading declines, and the depth they add to the quote is reduced, consistent with a loss of their information advantage or with being "crowded out" by active limit-order strategies. There is an improvement in the informational efficiency of prices after the introduction of OpenBook. Greater pre-trade transparency leads to some improvement in displayed liquidity in the book and a reduction in the execution costs of trades. The authors find that cumulative abnormal returns are positive following the introduction of OpenBook, consistent with the view that improvement in liquidity affects stock returns.
Fahlenbrach and Sandas study the determinants of the bid-ask spreads for index options using a sample that consists of all trades and quotes for the European-style options and the futures on the FTSE 100 stock index from August 2001 to July 2002. They compute two measures that show that bid-ask spreads in their sample are economically large. First, they show that the reservation spread for a liquidity provider who hedges the delta risk of a bought or sold index option by trading index futures is on average only 47 percent of the observed spread. Then they show that, on average, the spread for a synthetic index futures contract is 8-14 times the spread for the actual index futures contract, even after adjusting for differences in trading volume and contract size. The authors estimate fixed effects regressions for a panel of options and show that 42 percent of the variation in the daily average spreads is driven by order processing and inventory costs. The results show that inventory risk is an important determinant of spreads for index options, but that the compensation for inventory risk together with order processing costs does not explain all systematic variation in the observed spreads. The authors reject the null that all fixed effects are equal to zero for both calls and puts. The fixed effects vary systematically with option characteristics; for example, out-of-the-money options have wider spreads and in-the-money options have narrower spreads than predicted by these regressions. One interpretation is that the spreads include a mark-up above the marginal costs of providing liquidity for at least some of the options in this sample.
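One standard way to construct the synthetic futures contract compared above is via put-call parity: a long call plus a short put at the same strike replicates a long futures position, so a round trip in the synthetic costs the sum of the two option spreads. A minimal sketch under that assumption (the paper's exact construction may differ), with hypothetical quotes:

```python
# Synthetic futures via put-call parity (long call + short put, same
# strike and expiry). Buying the synthetic means lifting the call ask
# and hitting the put bid; selling it is the reverse. All quotes are
# hypothetical.

def synthetic_futures_spread(call_bid, call_ask, put_bid, put_ask):
    """Round-trip spread of the synthetic = sum of both option spreads."""
    buy_price = call_ask - put_bid    # cost to open a long synthetic
    sell_price = call_bid - put_ask   # proceeds from a short synthetic
    return buy_price - sell_price     # = (call spread) + (put spread)

# Hypothetical call quoted 10.0/10.5 and put quoted 8.0/8.4:
spread = synthetic_futures_spread(10.0, 10.5, 8.0, 8.4)  # ~0.9 index points
```

Because the synthetic's spread stacks two option spreads, comparing it to the actual futures spread isolates how much more expensive it is to trade index exposure through the options market, the 8-14x gap reported above.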