Summary
van Binsbergen, Han, and Lopez-Lira use machine learning to construct a statistically optimal and unbiased benchmark for firms' earnings expectations. They show that analyst expectations are on average biased upward, and that this bias exhibits substantial time-series and cross-sectional variation. On average, the bias increases with the forecast horizon, and analysts revise their expectations downward as earnings announcement dates approach. The researchers find that analysts' biases are associated with negative cross-sectional return predictability, and that the short legs of many anomalies consist of firms for which analysts' forecasts are excessively optimistic relative to the machine-learning benchmark. Managers of companies with the most upwardly biased earnings forecasts are more likely to issue stock.
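As a rough illustration of how such a benchmark-and-bias exercise works, the sketch below fits a machine-learning model to earnings data and measures bias as the analyst forecast minus the model's prediction. The random-forest choice, the features, and the synthetic data are assumptions for demonstration, not the authors' specification.

```python
# Illustrative sketch: measure analyst bias against a machine-learning
# earnings benchmark. Model choice (random forest), features, and the
# synthetic data are assumptions for demonstration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
# Hypothetical firm-level predictors (e.g., lagged earnings, size, returns).
X = rng.normal(size=(n, 3))
true_earnings = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.3, size=n)
# Analyst forecasts with an optimistic bias that grows with horizon.
horizon = rng.integers(1, 5, size=n)          # forecast horizon in quarters
analyst_forecast = true_earnings + 0.05 * horizon + rng.normal(scale=0.2, size=n)

# Statistically unbiased benchmark: fit on one half, predict the other.
split = n // 2
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], true_earnings[:split])
benchmark = model.predict(X[split:])

# Bias = analyst forecast minus ML benchmark; positive means optimism.
bias = analyst_forecast[split:] - benchmark
for h in range(1, 5):
    mask = horizon[split:] == h
    print(f"horizon {h}: mean bias = {bias[mask].mean():.3f}")
```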
In addition to the conference paper, the research was distributed as NBER Working Paper w27843, which may be a more recent version.
Beason and Wahal study the anatomy of four widely used institutional trading algorithms representing $675 billion in demand from 961 institutions between 2012 and 2016. Parent orders generate hundreds of child orders, which strategically employ price, time-in-force, and display priority rules to navigate the tradeoff between the desire to trade and the desire to minimize transaction costs. Child orders incur price impact at the time they are submitted to the book, regardless of whether they are (ex post) filled and even when they are passively priced relative to the prevailing quote. The intra-parent distribution of child orders is non-random, generating strategic runs that oscillate between the aggressive and passive sides of the spread. Despite algorithmic attempts to reduce their influence, programmatic child-level price, time-in-force, and display choices aggregate up to parent-level trading costs borne by investors.
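A minimal sketch of the kind of child-order impact measurement the paper describes follows: impact is computed at submission against the prevailing midquote, for filled and unfilled orders alike. The column names and the post-submission horizon are illustrative assumptions.

```python
# Illustrative sketch: child-order price impact measured at submission,
# relative to the prevailing midquote, whether or not the order fills.
# Field names and the impact horizon are assumptions.
import pandas as pd

child_orders = pd.DataFrame({
    "submit_mid": [100.00, 100.02, 100.01],   # midquote at submission
    "mid_after":  [100.03, 100.04, 100.00],   # midquote shortly after
    "side":       [1, 1, -1],                 # +1 buy, -1 sell
    "filled":     [True, False, True],
})

# Signed impact in basis points: positive = price moved against the order.
child_orders["impact_bps"] = (
    child_orders["side"]
    * (child_orders["mid_after"] - child_orders["submit_mid"])
    / child_orders["submit_mid"] * 1e4
)
# Impact is incurred by filled and unfilled child orders alike.
print(child_orders.groupby("filled")["impact_bps"].mean())
```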
Conventional wisdom warns that exchange-traded funds (ETFs) harm stock price discovery, either by "stealing" single-stock liquidity or by forcing stock prices to co-move. Contrary to this belief, Ernst develops a theoretical model and presents empirical evidence demonstrating that investors with stock-specific information trade both single stocks and ETFs. Single-stock investors can access ETF liquidity by means of this tandem trading, and stock prices can flexibly adjust to ETF price movements. Using high-resolution data on the SPDR and Sector SPDR ETFs, he exploits exchange latencies to show that investors place simultaneous, same-direction trades in both a stock and an ETF. Consistent with his model's predictions, the effects are strongest when an individual stock has a large weight in the ETF and a large stock-specific informational asymmetry. Ernst concludes that ETFs can provide single-stock price discovery.
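The latency-based identification can be illustrated with a toy matching exercise: flag stock and ETF trades of the same sign that occur within a short window of each other. The 5-millisecond window and field names below are assumptions, not the paper's exact procedure.

```python
# Illustrative sketch: flag "tandem" trades, i.e., same-direction trades in
# a stock and an ETF within a short time window. Timestamps, the window
# length, and trade signs are assumptions for demonstration.
import pandas as pd

stock = pd.DataFrame({"ts": pd.to_datetime(["2020-01-02 09:30:00.001",
                                            "2020-01-02 09:30:00.050"]),
                      "sign": [1, -1]})               # +1 buy, -1 sell
etf = pd.DataFrame({"ts": pd.to_datetime(["2020-01-02 09:30:00.003",
                                          "2020-01-02 09:30:00.200"]),
                    "sign": [1, 1]})

# Match each stock trade to the nearest ETF trade within the window.
merged = pd.merge_asof(stock.sort_values("ts"), etf.sort_values("ts"),
                       on="ts", direction="nearest",
                       tolerance=pd.Timedelta("5ms"),
                       suffixes=("_stock", "_etf"))
merged["tandem"] = merged["sign_stock"] == merged["sign_etf"]
print(merged[["ts", "sign_stock", "sign_etf", "tandem"]])
```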
Chen, Hoberg, and Maksimovic propose that the product life cycle is important in understanding the firm's disclosure policy and test this hypothesis using a 4-dimensional text-based life cycle model. Mature-stage life cycle firms disclose more, consistent with an outward-focused investment strategy that lowers search costs for finding synergistic alliance partners. Early-stage life cycle firms are secretive, consistent with inward-focused organic investment and mitigating competitive threats. These results obtain across disclosure measures relating to intellectual property, redaction of contracts, and readability. A quasi-natural experiment based on waves of rapid depreciation of protected intellectual property, and analysis of pairwise co-search of peer filings on the SEC EDGAR website, reinforce this interpretation.
Putnins and Barbara show that behind the aggregate effects of algorithmic and high-frequency traders (AT/HFT) is substantial heterogeneity in how individual algorithms impact institutional trading costs. Using unique trader-identified regulatory data, they find that the cluster of "harmful" algorithmic traders doubles institutional trading costs. "Beneficial" algorithmic traders offset much of this increase. The researchers find no evidence that speed (e.g., being an HFT) is a characteristic of harmful traders. Traders that hold inventory overnight are more likely to benefit institutional investors by providing more sustained liquidity. The heterogeneity explains why AT/HFT appear detrimental to some investors despite being beneficial or benign in aggregate.
Rossi and Utkus study the effects of a large U.S. hybrid robo-adviser on the portfolios of previously self-directed investors. Across all investors, robo-advising reduces idiosyncratic risk by lowering the holdings of individual stocks and active mutual funds and raising exposure to low-cost indexed mutual funds. It further eliminates investors' home bias and increases investors' overall risk-adjusted performance, mainly by lowering investors' portfolio risk. The researchers use a machine learning algorithm, known as Boosted Regression Trees (BRT), to explain the cross-sectional variation in the effects of advice on portfolio allocations and performance. Finally, they study the determinants of investors' sign-up and attrition.
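A brief sketch of the BRT approach on synthetic data follows; the investor characteristics and the outcome variable are hypothetical stand-ins for the ones the researchers use.

```python
# Illustrative sketch: boosted regression trees (BRT) relating investor
# characteristics to the change in portfolio outcomes after sign-up.
# Feature names and synthetic data are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000
# Hypothetical pre-advice characteristics: age, wealth, home bias, # funds.
X = rng.normal(size=(n, 4))
# Outcome: change in risk-adjusted performance after robo-advice (synthetic).
y = 0.4 * X[:, 2] - 0.1 * X[:, 3] + rng.normal(scale=0.5, size=n)

brt = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                learning_rate=0.05, random_state=1)
brt.fit(X, y)
# Relative importances show which characteristics drive the effect of advice.
for name, imp in zip(["age", "wealth", "home_bias", "n_funds"],
                     brt.feature_importances_):
    print(f"{name:10s} importance = {imp:.2f}")
```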
Traditional liquidity measures can provide a false impression of the liquidity and stability of financial market trading. Using data on auctions (bids wanted in competition, or BWICs) from the collateralized loan obligation (CLO) market, the researchers show that a standard measure of liquidity, the effective bid-ask spread, dramatically underestimates the true cost of immediacy because it does not account for failed attempts to trade. The true cost of immediacy is substantially higher than the observed cost for successful BWICs. This cost gap is larger for lower-rated CLOs and in stressful market conditions, when failure rates exceed 50%. Across the 2012-2020 sample period, for trades in senior CLOs the observed cost is four basis points (bps) while the true cost of immediacy is 13 bps. In stressful periods, such as the COVID-19 pandemic, the observed cost of trading junior tranches increases from an average of 12 bps to 25 bps, while the true cost of immediacy increases from less than 3% to almost 15%.
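The gap between the observed cost and the true cost of immediacy can be illustrated with a toy calculation: conditioning on successful auctions drops the (costlier) failed attempts from the average. The failure rate and re-trade penalty below are assumptions, not the paper's estimates.

```python
# Illustrative sketch: observed effective cost conditions on successful
# auctions, while the true cost of immediacy averages over failed attempts
# as well. All numbers are assumptions, not the paper's estimates.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
fail_rate = 0.5                      # failure rates can exceed 50% in stress
success = rng.random(n) > fail_rate
cost_success_bps = rng.normal(12, 4, size=n)    # cost when the BWIC trades
retry_penalty_bps = rng.normal(40, 10, size=n)  # extra concession after failure

# Observed measure: average cost over successful auctions only.
observed = cost_success_bps[success].mean()
# True cost of immediacy: failed attempts eventually trade at a worse price.
realized = np.where(success, cost_success_bps,
                    cost_success_bps + retry_penalty_bps)
print(f"observed cost  : {observed:6.1f} bps")
print(f"true immediacy : {realized.mean():6.1f} bps")
```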
The closing auction is the most important event of the trading day, now accounting for over 10% of trading volume. The NYSE closing auction design is highly advantageous to NYSE floor brokers, who have near-exclusive auction access from 3:50pm to 4:00pm. Murphy and Hu show that closing auction quality, as measured by the accuracy of closing auction information feeds and the efficiency of closing prices, is significantly worse on NYSE than on Nasdaq. However, closing auction quality improved when NYSE halted floor trading during the COVID-19 pandemic. Their findings highlight the tradeoffs associated with designing a single-price call auction that accepts orders during regular trading hours.
Gerken and Painter show that analysts incorporate geographically dispersed information about firms into individual forecasts and that limited analyst geographic diversity adversely affects consensus forecasts and firm liquidity. Using satellite imagery of U.S. retailers' parking lots, they find that analysts shade their own forecasts in the direction of local car counts, relative to other analysts covering the same firm at the same time but from different locations. Examining all industries, the researchers find that firms with more geographically concentrated analyst coverage have higher consensus forecast errors and are less liquid. Evidence from shocks to geographic coverage due to brokerage closures suggests these relations are causal.
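The shading result can be sketched as a simple regression of an analyst's deviation from consensus on the local car-count signal; the variables and synthetic data below are illustrative, not the authors' specification.

```python
# Illustrative sketch: does an analyst's forecast deviate from the
# contemporaneous consensus in the direction of local car counts?
# Variable names and synthetic data are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
local_car_growth = rng.normal(size=n)     # growth in nearby lot traffic
# Deviation of the analyst's forecast from other analysts' consensus.
forecast_deviation = 0.15 * local_car_growth + rng.normal(scale=1.0, size=n)

ols = sm.OLS(forecast_deviation, sm.add_constant(local_car_growth)).fit()
print(ols.summary().tables[1])  # positive slope = shading toward local signal
```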
In this paper, Cao, Jiang, Yang, and Zhang analyze how corporate disclosure has been reshaped by machine processors employed by algorithmic traders, robo-advisers, and quantitative analysts. Their findings indicate that increasing machine and AI readership, proxied by machine downloads, motivates firms to prepare filings that are friendlier to machine parsing and processing. Moreover, firms with high expected machine downloads manage textual sentiment and audio emotion in ways catered to machine and AI readers, for example by disproportionately avoiding words that are perceived as negative by computational algorithms rather than by human readers, and by exhibiting speech emotion favored by machine learning software. The publication of Loughran and McDonald (2011) serves as a shock that helps attribute the change in measured sentiment to machine and AI readership. While existing research has explored how investors and researchers apply machine learning and computational tools to quantify qualitative information from disclosures and news, this study is the first to identify and analyze the feedback effect on corporate disclosure decisions, i.e., how companies adjust the way they talk knowing that machines are listening.
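A toy version of the sentiment-measurement step follows, scoring a filing against a machine-oriented negative-word list in the spirit of Loughran and McDonald (2011); the tiny word lists are stand-ins for the actual dictionaries.

```python
# Illustrative sketch: score a filing's negativity with a machine-oriented
# word list in the spirit of Loughran and McDonald (2011). The tiny word
# lists here are stand-ins, not the actual dictionaries.
lm_negative = {"impairment", "litigation", "default", "adverse"}  # stand-in
generic_negative = {"bad", "decline", "adverse", "loss"}          # stand-in

filing = ("The company recorded an impairment charge and faces "
          "ongoing litigation, though sales did not decline.")
tokens = [w.strip(".,").lower() for w in filing.split()]

def neg_share(words, lexicon):
    """Share of tokens flagged as negative by a given lexicon."""
    return sum(w in lexicon for w in words) / len(words)

# Firms courting machine readers would lower the LM-based score specifically.
print(f"LM-negative share     : {neg_share(tokens, lm_negative):.3f}")
print(f"generic-negative share: {neg_share(tokens, generic_negative):.3f}")
```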
Imprecise language in corporate disclosures can convey valuable information about firms' fundamentals during uncertain times. To evaluate this idea, Cookson, Moon, and Noh develop a novel measure of linguistic imprecision based on sentences marked with the "weasel tag" on Wikipedia. For a 10-week window following the 10-K disclosure, the researchers find that the use of imprecise language in 10-Ks predicts (1) positive and non-reverting abnormal returns, (2) improvements in stock liquidity, (3) greater intensity of insider and informed buying, and (4) higher news sentiment. These findings are strongest when the firm's disclosures are more forward-looking and for firms with greater idiosyncratic volatility. Taken together, the findings imply that imprecise language in 10-Ks contains new information about positive but not-yet-mature prospects for future cash flows.
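A simple stand-in for such an imprecision measure counts weasel-style sentences in a text; the phrase list below is a small illustrative sample, not the authors' Wikipedia-derived classifier.

```python
# Illustrative sketch: a linguistic-imprecision score that counts
# weasel-style phrases per sentence. The phrase list is a small stand-in
# inspired by Wikipedia's "weasel word" tag, not the authors' measure.
import re

weasel_phrases = ["some believe", "it is said", "arguably", "many",
                  "often", "possibly"]                   # stand-in list

text = ("Many analysts believe demand is recovering. Results will "
        "possibly improve. Margins were 41 percent.")
sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s]

hits = sum(any(p in s.lower() for p in weasel_phrases) for s in sentences)
imprecision_score = hits / len(sentences)
print(f"imprecision score = {imprecision_score:.2f}")  # share of weasel sentences
```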
Fresard, Foucault, and Dessaint study how data abundance affects the informativeness of financial analysts' forecasts at various horizons. In their model, analysts forecast short-term and long-term earnings and choose how much information to process about each horizon to minimize forecasting errors, net of information processing costs. When the cost of obtaining short-term information drops (i.e., more data become available), analysts change their information processing strategy in a way that renders their short-term forecasts more informative but possibly reduces the informativeness of their long-term forecasts. The researchers provide empirical support for this prediction using a large sample of forecasts at various horizons and novel measures of analysts' exposure to abundant data. Data abundance can thus impair the quality of long-term financial forecasts.
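As a stylized formalization of this tradeoff (a sketch under assumed functional forms, not the paper's actual model), the analyst's problem can be written as allocating processing effort across horizons under a capacity constraint:

```latex
% Stylized sketch, not the paper's model: the analyst splits processing
% effort a_s (short-term) and a_l (long-term) under a capacity constraint.
\[
  \min_{a_s,\, a_l \,\ge\, 0} \;
  \sigma_s^2(a_s) + \sigma_l^2(a_l)
  \quad \text{s.t.} \quad c_s a_s + c_l a_l \le K,
  \qquad \frac{d\sigma_h^2}{da_h} < 0 .
\]
% A fall in the short-term data cost c_s raises a_s; with the capacity K
% binding, a_l must fall, so short-term forecasts become more informative
% while long-term forecasts become less so.
```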