Bergemann, Heumann, and Morris analyze demand function competition with a finite number of agents and private information. The researchers show that the nature of the private information determines the market power of the agents and thus the price and volume of equilibrium trade. They establish their results by characterizing the set of all joint distributions over demands and payoff states that can arise in equilibrium under any information structure. In demand function competition, the agents condition their demand on the endogenous information contained in the price. Bergemann, Heumann, and Morris compare the set of feasible outcomes under demand function competition to the feasible outcomes under Cournot competition. They find that the first and second moments of the equilibrium distribution respond very differently to the private information of the agents under these two market structures. The first moment of the equilibrium demand, the average demand, is more sensitive to the nature of the private information in demand function competition, reflecting the strategic impact of private information. By contrast, the second moments are less sensitive to the private information, reflecting the agents' common conditioning on the price.
Discrete choice models of demand are widely used for counterfactual policy simulations, yet their out-of-sample performance is rarely assessed. Pathak and Shi use a large-scale policy change in Boston to investigate the performance of discrete choice models of school demand. In 2013, Boston Public Schools considered several new choice plans that differ in where applicants can apply. At the request of the mayor and district, the researchers estimated discrete choice demand models to forecast the effects of these alternatives. This work led to the adoption of a plan that significantly altered choice sets for thousands of applicants. Pathak and Shi (2014) update forecasts prior to the policy change and describe prediction targets involving access, travel, and unassigned students. Here, the researchers assess how well these ex ante counterfactual predictions compare to the actual choices made under the new choice sets. For equilibrium outcomes, a simple ad hoc model performs as well as the more complicated structural choice models for one of the two grades they examine. However, the inconsistent performance of the structural models is largely due to prediction errors in the characteristics of applicants, which are auxiliary inputs. Once the researchers condition on the characteristics of the actual applicants, the structural choice models outperform the ad hoc alternative in predicting both equilibrium outcomes and choice patterns. Moreover, refitting the models using the new choice data does not significantly improve their prediction accuracy, suggesting that the choice models are indeed "structural" and are robust across the reform. Pathak and Shi's findings show that structural choice models can be effective in predicting counterfactual outcomes, as long as there are accurate forecasts of auxiliary input variables.
This paper was distributed as Working Paper 24017, where an updated version may be available.
Milgrom and Segal demonstrate how a deferred-acceptance (DA) clock auction for procurement chooses winning bids by reducing prices in each round to the least attractive current bids. In contrast to Vickrey auctions, DA clock auctions for single-minded bidders are obviously strategy-proof and group strategy-proof, preserve winners' privacy, avoid intractable optimizations, can incorporate the auctioneer's budget constraint, and set prices to be no higher than either competitive equilibrium or Nash equilibrium in the related first-price auction. In simulations based on the US Incentive Auction, the DA clock auction used by the FCC leads to nearly efficient outcomes at a lower cost than a Vickrey auction while using a fraction of the computational effort.
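The descending-clock logic described above can be illustrated with a deliberately simplified sketch (hypothetical numbers; a single common clock price and a plain budget constraint stand in for the FCC design's bidder-specific prices and repacking-feasibility checks):

```python
def da_clock_auction(costs, budget, start_price, decrement=1):
    """Toy deferred-acceptance clock auction for procurement.

    Every active bidder faces a common clock price that ticks down each
    round.  A bidder exits permanently once the price falls below its
    privately known cost, and the auction stops as soon as paying the
    current price to all remaining bidders fits within the budget.
    (The real FCC design uses bidder-specific prices and repacking
    feasibility checks instead of this simple budget rule.)
    """
    price = start_price
    active = set(range(len(costs)))
    while active and price * len(active) > budget:
        price -= decrement
        # staying in iff cost <= current price is the obvious dominant strategy
        active = {i for i in active if costs[i] <= price}
    return active, price  # winners, and the uniform price each is paid
```

Because a bidder's only decision is when to exit, and exiting is optimal exactly when the clock price drops below its cost, truthful behavior is transparent, which is the obvious strategy-proofness property the abstract highlights. For example, with costs [10, 20, 30, 40] and a budget of 60, the clock descends until only the two cheapest bidders remain, and each is paid the final clock price.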
This paper was distributed as Working Paper 24349, where an updated version may be available.
Using a rich data set on Australian college admissions, Artemov, Che, and He show that a non-negligible fraction of applicants adopt strategies that are unambiguously dominated; however, the majority of these 'mistakes' are payoff irrelevant. In a model where colleges rank applicants strictly, the researchers demonstrate that such strategic mistakes jeopardize the empirical analysis based on the truth-telling hypothesis but not the one based on a weaker stable-matching assumption. Artemov, Che, and He's Monte Carlo simulations further illustrate this point and quantify the differences among the methods in the estimation of preferences and in a hypothetical counterfactual analysis.
The prominent Top Trading Cycles (TTC) mechanism has attractive properties for school choice: it is strategy-proof and Pareto efficient, and it allows school boards to guide the assignment by specifying priorities. However, the common combinatorial description of TTC does little to explain the relationship between student priorities and the eventual assignment. This creates difficulties in transparently communicating TTC to parents and in guiding the policy choices of school boards. Leshno and Lo show that the TTC assignment can be described by n × n admission thresholds, where n is the number of schools. These thresholds can be observed after the mechanism is run and can serve as non-personalized prices that allow parents to verify their assignment. In a continuum model, the thresholds can be computed directly from the distribution of preferences and priorities, providing a framework for evaluating policy choices. The researchers provide closed-form solutions for the assignment under a family of distributions and derive comparative statics. As an application of the model, they solve for the welfare-maximizing investment in school quality, and find that a more egalitarian investment can be more efficient because it promotes more efficient sorting by students.
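The "common combinatorial description of TTC" that the abstract contrasts with the threshold characterization is the familiar cycle-trading procedure, sketched below (an illustrative toy implementation, assuming total capacity suffices to seat every student):

```python
def top_trading_cycles(prefs, priorities, capacities):
    """Illustrative sketch of the standard TTC procedure for school choice.

    prefs[s]      : student s's ranked list of schools
    priorities[c] : school c's ranked list of students
    capacities[c] : number of seats at school c (assumed sufficient overall)

    Each round, every unassigned student points at her favorite school with
    seats left, every such school points at its highest-priority unassigned
    student, and every cycle in the pointing graph trades: each student in
    a cycle is assigned the school she points at.
    """
    assignment = {}
    cap = dict(capacities)
    unassigned = set(prefs)
    while unassigned:
        # build this round's pointers
        s_pt = {s: next(c for c in prefs[s] if cap[c] > 0) for s in unassigned}
        c_pt = {c: next(st for st in priorities[c] if st in unassigned)
                for c in set(s_pt.values())}
        # walk student -> school -> student ... until a cycle repeats
        s = next(iter(unassigned))
        seen = []
        while s not in seen:
            seen.append(s)
            s = c_pt[s_pt[s]]
        for st in seen[seen.index(s):]:  # trade along the cycle
            assignment[st] = s_pt[st]
            cap[s_pt[st]] -= 1
            unassigned.discard(st)
    return assignment
```

The cycles formed in a round are disjoint, so processing them one at a time (as here) yields the same assignment as clearing them simultaneously; the opacity of this pointer-chasing description is exactly what the authors' threshold characterization is meant to remedy.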
A transplant can improve a patient's life while saving several hundred thousand dollars in healthcare expenditures. Organs from deceased donors, like many other common-pool resources (e.g., public housing, child-care slots, publicly funded long-term care), are rationed via a waitlist. The efficiency and equity properties of design choices such as penalties for refusing offers or object-type-specific lists are not well understood and depend on agent preferences. Agarwal, Ashlagi, Rees, Somaini, and Waldinger establish an empirical framework for analyzing the trade-offs involved in waitlist design and apply it to study the allocation of deceased-donor kidneys. They model the decision to accept an offer from a waiting list as an optimal stopping problem and use it to estimate the value of accepting various kidneys. The researchers' estimated values for various kidneys are highly correlated with predicted patient outcomes as measured by life-years from transplantation (LYFT). While some types of donors are preferable for all patients (e.g., young donors), there is substantial heterogeneity in willingness to wait for good donors, as well as substantial match-specific heterogeneity in values (due to biological similarity). The researchers find that the high willingness to wait for good donors, combined with agents' failure to internalize the effects of their decisions on others, results in agents being too selective relative to the social optimum. This suggests that mild penalties for refusal (e.g., loss in priority) may improve efficiency. Similarly, the heterogeneity in willingness to wait for young, healthy donors suggests that separate queues by donor quality may increase efficiency by inducing sorting without significantly hurting assignments based on match-specific payoffs.
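The optimal-stopping idea — accept an offer only if its value exceeds the value of continuing to wait — can be sketched with a toy dynamic program (hypothetical numbers; the paper's empirical model is far richer):

```python
import statistics

def wait_value(offer_values, offer_prob, discount=0.95, tol=1e-9):
    """Toy optimal-stopping calculation, not the paper's estimated model.

    Each period an offer arrives with probability offer_prob, drawn
    uniformly from offer_values; the patient accepts iff the offer beats
    the discounted value of waiting.  Iterates the Bellman equation
        V = p * E[max(v, d*V)] + (1 - p) * d*V
    to its fixed point (a contraction since discount < 1).
    """
    v = 0.0
    while True:
        ev = statistics.mean(max(x, discount * v) for x in offer_values)
        new = offer_prob * ev + (1 - offer_prob) * discount * v
        if abs(new - v) < tol:
            return new  # accept any offer worth more than discount * new
        v = new
```

With offers worth 0 or 10 arriving half the time, the patient optimally declines the low offer and waits. The reservation rule "accept iff the offer's value exceeds discount × V" is the kind of stopping boundary the authors recover from observed accept/reject decisions.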
Kyle and Lee propose continuous scaled limit orders to implement Fischer Black’s vision of financial markets. By making trading continuous in price, quantity, and time, continuous scaled limit orders eliminate the rents that high-frequency traders earn by exploiting artifacts of the current market design. By avoiding time priority, this new order type protects slow traders from being picked off by high-frequency traders and makes high-frequency traders compete among themselves. All traders, regardless of their technological capacity, can optimally spread trades out over time to minimize adverse price impact. Organized exchanges should move not toward more discreteness but toward full continuity.
As of early 2017 there are 12 stock exchanges in the US, across which 1.5 trillion shares ($60 trillion) are traded annually. All 12 exchanges use the continuous limit order book market design, a design that causes latency arbitrage and the associated high-frequency trading arms race (Budish, Cramton and Shim 2015). Will the market adopt new market designs, such as frequent batch auctions (FBA) that address the negative aspects of high-frequency trading? Budish, Lee, and Shim build a simple new model of stock exchange competition to address this question. The model, which is guided by institutional details of the US equities market, shows that under the status quo market design: (i) the 12 distinct exchanges aggregate up into a "single virtual platform"; (ii) competition among exchanges is fierce on the dimension of traditional trading fees; but (iii) exchanges have market power in the sale of exchange-specific speed technology — arms for the arms race — from which they earn economic rents. The researchers use a variety of data to empirically validate these three sets of results. They then use the model to study the private and social incentives for market design innovation. If a new exchange enters with a market design that eliminates latency arbitrage (e.g., FBA), it would win share and tip other exchanges into also adopting the new design; perhaps surprisingly, the usual coordination problems associated with getting a new market design off the ground would not be an issue. However, the researchers find that the private returns to introducing the new design are zero for a de novo entrant and negative for an incumbent, in contrast with social returns that are large. There are two sources of this tension. First is a version of the classic problem of non-excludability, leading to competitive trading fees and no economic profits for the innovator. Second is incumbents' rents from speed technology. Budish, Lee, and Shim conclude with policy implications. 
Despite the pessimistic results, their analysis does not imply that a market-wide market design mandate is necessary. Rather, the model points to a more circumscribed policy response that would tip the balance of incentives and encourage the "market to fix the market."
The liver is the second most commonly transplanted organ from living donors. A living donor can usually donate either the smaller left lobe or the larger right lobe. Left-lobe donation is substantially less risky for the donor, but because size compatibility is required in addition to blood-type compatibility for liver transplantation, doctors often resort to right-lobe donation due to the organ shortage. To remedy the shortage, living-donor liver exchange has already been utilized in some countries. Ergin, Sönmez, and Unver model liver exchange as a matching market design problem. They first introduce an algorithm to find a two-way efficient matching when only left-lobe donation is feasible and there are only two sizes of donors and patients. The researchers then introduce a Pareto-efficient, individually rational, and incentive-compatible mechanism to elicit the willingness of patients' donors in the liver exchange pool to donate their right lobes, and they extend this approach to any number of patient and donor sizes. The approach is quite general and introduces a new class of mechanisms for bilateral exchange problems with weak preferences induced by a multi-dimensional vector partial order. Using simulations, Ergin, Sönmez, and Unver show that the decrease in the number of transplants due to the incentive-compatibility requirement is very small, while the number of transplants can increase substantially as liver exchange is utilized.
Traffic congestion is a global problem with annual costs approaching $1 trillion. The cost of traffic congestion across the combined British, French, German, and American economies was estimated at $200 billion, or about 0.8 percent of GDP, in 2013. In Los Angeles alone, traffic jams cost $23 billion each year. The health and environmental costs are severe in urban centers worldwide. With the right policies, those high social costs can be avoided. Advances in mobile communications and computer technology now make it possible to efficiently schedule, route, and price the use of roads. Efficient real-time pricing of road use can eliminate traffic congestion, enhance safety, improve the environment, and increase vehicle throughput. It also raises reliable, much-needed revenue to modernize decaying infrastructure while improving the allocation of transportation investment. Cramton, Geddes, and Ockenfels describe the design of a market for road use and transportation based on efficient scheduling, routing, and pricing. Under their design, road use is priced dynamically by marginal demand at constrained times and locations. At unconstrained times and locations, a nominal fee is paid for road use to recover costs, as in other utilities. Transport is scheduled based on forward prices and then routed in real time based on real-time road-use prices. Efficient pricing of network capacity is not new. Indeed, wholesale electricity markets have been dynamically priced for over a decade, and communications markets are adopting dynamic pricing today. Efficient pricing of road use, however, has only recently become feasible. Advances in mobile communications make it possible to identify and communicate the location of a vehicle to within one cubic meter, allowing precise measurement of road use. User preferences can be communicated both in advance, to determine scheduled transport, and in real time, to optimize routes based on the latest information.
Computer advances also facilitate efficient scheduling and pricing of road use. Consumer apps help road users translate detailed price information into preferred transport plans. Computers also allow an independent system operator to better model demand and adjust prices to eliminate congestion and maximize the total value of road infrastructure. An independent market monitor, distinct from the operator, observes the market, identifies problems, and suggests solutions. A board governs the market subject to regulatory oversight. The market objective is to maximize the value of road infrastructure via scheduling, routing, and real-time pricing of its use. The optimization of road use eliminates congestion, making roads safer, faster, cleaner and more enjoyable to use. The road-use market thus maximizes the value of existing transport infrastructure while simultaneously providing essential funding for the roads network as well as valuable price information to evaluate road enhancements. The market is highly complementary with and indeed promotes rapid innovation in the transport sector.
Many markets in developing countries, particularly those for public services, are marked by inefficiencies stemming from mismatches between demand and supply and from market power. Houde, Johnson, Lipscomb, and Schechter instituted just-in-time procurement auctions for desludging services in Dakar, Senegal, over three years and test the effect of increased competition on prices and take-up of the services. The auctions ran from June 2013 through July 2016 and comprised 4,674 procurement auctions with 104 desludging operators. The researchers supplement the auctions data with survey data from 5,991 households around Dakar and compare prices between the auctions and the general market: prices in the auctions are 7% lower than in the general market. They show that auctions in which more desludgers are invited are more competitive and that much of the cost of distance is priced into the bids. They estimate that the expected profit of participating firms is approximately 895 CFA, or 17% of the realized profit margin of the winning firm. Assuming that the current winners in the auctions are retained, the researchers find that perfect competition would lower prices by an additional 20%, increasing the percentage of prices accepted by consumers by 70%. Houde, Johnson, Lipscomb, and Schechter simulate the effect of inviting more active, lower-cost bidders to the auctions, and find that prices could decrease by up to an additional 13% and acceptances by households could increase by 29% if more bidders are chosen from among those who bid actively on the platform.
Daskalakis, Papadimitriou, and Tzamos study the problem of optimal auction design in a valuation model, explicitly motivated by online ad auctions, in which there is two-way informational asymmetry, in the sense that private information is available to both the seller (the item type) and the bidders (their types), and the value of each bidder for the item depends both on his own and the item’s type. Importantly, the researchers allow arbitrary auction formats involving, potentially, several rounds of signaling from the seller and decisions by the bidders, and seek to find the optimum co-design of signaling and auction (they call this optimum the “optimum augmented auction”). Daskalakis, Papadimitriou, and Tzamos characterize exactly the optimum augmented auction for their valuation model by establishing its equivalence with a multi-item Bayesian auction with additive bidders. Surprisingly, in the optimum augmented auction there is no signaling whatsoever, and in fact the seller need not access the available information about the item type until after the bidders choose their bids. Sub-optimal solutions to this problem, which have appeared in the recent literature, are shown to correspond to well-studied ways to approximate multi-item auctions by simpler formats, such as grand-bundling (this corresponds to Myerson’s auction without any information revelation), selling items separately (this corresponds to Myerson’s auction preceded by full information revelation as in [FJM+12]), and fractional partitioning (this corresponds to Myerson’s auction preceded by optimal signaling). Consequently, all these solutions are separated by large approximation gaps from the optimum revenue.
Doraszelski, Seim, Sinkinson, and Wang explore the sensitivity of the U.S. government's ongoing incentive auction to multi-license ownership by broadcasters. The researchers document significant broadcast TV license purchases by private equity firms prior to the auction and perform a prospective analysis of the effect of ownership concentration on auction outcomes. They find that multi-license holders are able to raise spectrum acquisition costs by 22% by strategically withholding some of their licenses to increase the price for their remaining licenses. A proposed remedy reduces the distortion in payouts to license holders by up to 80%, but lower participation could greatly increase payouts and exacerbate strategic effects.
This paper was distributed as Working Paper 23034, where an updated version may be available.
The impacts of cash grants and access to credit are known to vary widely, but progress on targeting these services to high-ability, reliable entrepreneurs has so far been limited. This paper reports on a field experiment in Maharashtra, India that assesses (1) whether community members have information about one another that can be used to identify high-ability microentrepreneurs, (2) whether organic incentives for community members to misreport their information obscure its value, and (3) whether simple techniques from mechanism design can be used to realign incentives for truthful reporting. Hussam, Rigol, and Roth asked 1,380 respondents to rank their entrepreneur peers on various metrics of business profitability and growth and on entrepreneur characteristics. They also randomly distributed cash grants of about $100 to measure entrepreneurs' marginal return to capital. The researchers find that the information provided by community members is predictive of many key business and household characteristics, including the marginal return to capital. While the average marginal return to capital is modest, preliminary estimates suggest that entrepreneurs given a community rank one standard deviation above the mean enjoy an 8.8% monthly marginal return to capital, and those ranked two standard deviations above the mean enjoy a 13.9% monthly return. When respondents are told their reports influence the distribution of grants, the researchers find a considerable degree of misreporting in favor of family members and close friends, which substantially diminishes the value of the reports. Finally, Hussam, Rigol, and Roth find that monetary incentives for accuracy, eliciting reports in public, and cross-reporting techniques motivated by implementation theory all significantly improve the accuracy of reports.
Market-based mechanisms for the allocation of spectrum licenses have been a prominent feature of the
telecommunications landscape for more than two decades. Generally, spectrum auctions have taken the
spectrum's use as given and have sought to allocate licenses efficiently. In March 2017, the FCC completed
the Incentive Auction, the first-ever auction that incorporated voluntary clearing directly into the allocation
mechanism. As such, it is the most prominent auction to date that has sought to let the market determine
not only who uses the spectrum but how it is used. This article reviews the design and assesses the outcome
of the FCC Incentive Auction.
Ride-hailing apps introduced a more efficient matching technology than traditional taxis (Cramer and Krueger, 2016), with potentially large welfare gains under the appropriate market design. However, Castillo, Knoepfle, and Weyl show that when prices are too low, these platforms fall into a failure mode first pointed out by Arnott (1996) that leads to market collapse. An over-burdened platform is depleted of idle drivers on the streets and is forced to send cars on a "wild goose chase" to pick up distant customers. These chases occupy cars, reducing the number of customers served and drivers' earnings, effectively removing drivers from the road and exacerbating the problem. The researchers use data from Uber to show that wild goose chases are indeed a problem in the Manhattan market. The effects of wild goose chases dominate more traditional price-theoretic considerations: welfare and profits fall dramatically as the price falls below a certain threshold, but change only gradually in price above it. A platform forced to charge uniform prices over time will therefore have to set very high prices to avoid catastrophic chases. Dynamic "surge pricing" can avoid these high prices while keeping the system functioning when demand is high.
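A stylized feedback model (a toy construction with made-up parameters, not the authors' empirical model) shows the collapse mechanism: pickup times grow as idle drivers become scarce, which occupies cars longer, which further depletes idle drivers:

```python
import math

def steady_state_idle(n_drivers, demand_rate, trip_min=15.0, k=30.0):
    """Stylized 'wild goose chase' feedback (illustrative assumptions only).

    Pickup time is assumed to grow as k / sqrt(idle), so each served trip
    occupies a car for trip_min + k / sqrt(idle) minutes, and the number of
    idle cars must satisfy
        idle = n_drivers - demand_rate * (trip_min + k / sqrt(idle)).
    Returns the stable (larger) root, or None when demand is so high that
    no steady state with idle cars exists -- the collapse regime.
    """
    f = lambda i: n_drivers - demand_rate * (trip_min + k / math.sqrt(i)) - i
    i_peak = (demand_rate * k / 2.0) ** (2.0 / 3.0)  # where f is maximized
    if i_peak >= n_drivers or f(i_peak) < 0:
        return None  # no fixed point: chases tie up the whole fleet
    lo, hi = i_peak, float(n_drivers)  # f(lo) >= 0 > f(hi): bisect
    for _ in range(100):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if f(mid) >= 0 else (lo, mid)
    return (lo + hi) / 2.0
```

Past a demand threshold the fixed point with idle cars disappears entirely rather than degrading smoothly, mirroring the paper's point that outcomes fall off a cliff once the price, and hence driver supply relative to demand, is too low.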
Randomized controlled trials (RCTs) enroll hundreds of millions of people, and many involve human lives. In this paper, Narita proposes an RCT design for settings with high-stakes treatments. Unlike a conventional RCT, her design respects subject welfare: it optimally randomizes, assigning each treatment with higher probability to subjects predicted to experience better treatment effects, or to subjects with stronger preferences for the treatment. For preference elicitation, Narita's design is also approximately incentive compatible. Yet the design unbiasedly estimates any causal effect estimable with a standard RCT. To quantify these properties, the researcher applies her proposal to a water-cleaning experiment in Kenya (Kremer et al., 2011). Compared to the usual RCT, her design substantially improves subjects' well-being while reaching almost the same treatment effect estimates.
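The core statistical idea — tilt assignment probabilities toward predicted beneficiaries, then undo the tilt with inverse-propensity weighting so the effect estimate stays unbiased — can be sketched as follows (a toy illustration, not Narita's actual mechanism):

```python
import random

def welfare_weighted_rct(pred_effects, base_p=0.5, tilt=0.3, seed=0):
    """Sketch of the idea only (not Narita's actual mechanism): treatment
    propensities are tilted toward subjects with higher predicted benefit,
    clamped away from 0 and 1 so every subject retains some chance of
    either arm, and recorded so the analyst can undo the tilt later."""
    rng = random.Random(seed)
    mx = max(abs(e) for e in pred_effects) or 1.0
    props = [min(0.95, max(0.05, base_p + tilt * e / mx)) for e in pred_effects]
    assign = [rng.random() < p for p in props]
    return assign, props

def ipw_ate(outcomes, assign, props):
    """Horvitz-Thompson estimate of the average treatment effect; unbiased
    because the assignment propensities are known and strictly inside (0, 1)."""
    n = len(outcomes)
    treated = sum(y / p for y, d, p in zip(outcomes, assign, props) if d) / n
    control = sum(y / (1 - p) for y, d, p in zip(outcomes, assign, props) if not d) / n
    return treated - control
```

Averaged over many randomizations, the inverse-propensity-weighted estimate recovers the true average effect even though better-off subjects are treated more often, which is the welfare-versus-identification trade-off the design resolves.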
The Dodd-Frank Act mandates that certain standard OTC derivatives be traded on swap execution facilities (SEFs). Onur, Reiffen, Riggs, and Zhu provide a granular analysis of SEF trading mechanisms, using message-level data for May 2016 from the two largest customer-to-dealer SEFs in index CDS markets. Both SEFs offer customers various execution mechanisms that differ in how widely customers' trading interests are exposed to dealers. A theoretical model shows that although exposing an order to more dealers increases competition, it also creates a more severe winner's curse. Consistent with this trade-off, the data show that customers contact fewer dealers if the trade size is larger or nonstandard. Dealers are more likely to respond to customers' inquiries if fewer dealers are involved in the competition, if the notional size is larger, or if more dealers are making markets. Finally, dealers' quoted spreads and customers' transaction costs increase in notional quantity and in the number of dealers involved. Onur, Reiffen, Riggs, and Zhu's results contribute to the understanding of swaps markets by providing insights into investors' and dealers' revealed preferences and strategic behaviors.