Barbiero, Farhi, Gopinath, and Itskhoki analyze the dynamic macroeconomic effects of border adjustment taxes, both as a feature of corporate tax reform (C-BAT) and in the case of value-added taxes (VAT). Their analysis arrives at the following main conclusions. First, C-BAT is unlikely to be neutral at the macroeconomic level, as the conditions required for neutrality are unrealistic. The basis for neutrality of VAT is even weaker. Second, in response to the introduction of an unanticipated permanent C-BAT of 20% in the U.S., the dollar appreciates strongly, by almost the size of the tax adjustment; U.S. exports and imports decline significantly, while the overall effect on output is small. Third, an equivalent change in VAT, by contrast, generates only a weak appreciation of the dollar and a small decline in imports and exports, but has a large negative effect on output. Lastly, border taxes increase government revenues in periods of trade deficit; however, given the net foreign asset position of the U.S., they result in a long-run loss of government revenues and an immediate net transfer to the rest of the world.
Comparing U.S. GDP to the sum of measured payments to labor and imputed rental payments to capital results in a large and volatile residual, or "factorless income." Interpreting factorless income as economic profits implies a tight negative relationship between common measures of the real interest rate and markups, leads to large fluctuations in inferred factor-augmenting technologies, and results in markup levels that have risen since the early 1980s but that remain lower today than in the 1960s and 1970s. Alternatively, unmeasured capital plausibly accounts for all factorless income in recent decades, but its value in the 1960s would have to be even larger than the sum of non-residential and residential capital. Attributing factorless income to deviations of the rental rate of capital from standard measures based on bond returns leads to an inference of more stable factor shares and technological growth, but requires an explanation for why these cost-of-capital deviations exhibit trends. Using a multi-sector model with multiple types of capital, Karabarbounis and Neiman demonstrate that the assessment of the drivers of changes in output, factor shares, and inequality between representative workers and capitalists depends critically on the strategy chosen to allocate factorless income.
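The accounting identity behind factorless income can be illustrated with a minimal sketch, using entirely hypothetical numbers rather than actual national-accounts data:

```python
# Illustrative accounting sketch (hypothetical numbers, not actual BEA data):
# factorless income is the residual of GDP after subtracting measured labor
# payments and imputed rental payments to capital.

gdp = 100.0            # value added, hypothetical units
labor_payments = 58.0  # measured compensation of labor
rental_rate = 0.10     # imputed rental rate on capital (assumed)
capital_stock = 300.0  # measured capital stock (assumed)

capital_payments = rental_rate * capital_stock   # imputed payments to capital
factorless_income = gdp - labor_payments - capital_payments

print(factorless_income)        # residual: 12.0
print(factorless_income / gdp)  # "factorless" share of GDP: 0.12
```

The paper's point is that how this residual is allocated (to profits, to unmeasured capital, or to deviations in the rental rate) changes the inferred markups, factor shares, and technology paths.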
Banks' ratio of market equity to book equity was close to one until the 1990s, then more than doubled during the 1996-2007 period, and fell again to values close to one after the 2008 financial crisis. Sarin and Summers (2016) and Chousakos and Gorton (2017) argue that the drop in banks' market-to-book ratio since the crisis is due to a loss in bank franchise value or profitability. Atkeson, D'Avernas, Eisfeldt, and Weill argue that the market-to-book ratio is the sum of two components: franchise value and government guarantees. They empirically decompose the ratio into these two components, and find that a large portion of the variation in this ratio over time is due to changes in the value of government guarantees.
Riskless interest rates fell in the wake of the financial crisis and have remained low. Kozlowski, Veldkamp, and Venkateswaran explore a simple explanation: This recession was perceived as an extremely unlikely event before 2007. Observing such an episode led all agents to re-assess macro risk, in particular, the probability of tail events. Since changes in beliefs endure long after the event itself has passed, perceived tail risk remains high, generates a demand for riskless, liquid assets, and continues to depress the riskless rate. The researchers embed this mechanism in a simple production economy with liquidity constraints and use observable macro data, along with standard econometric tools, to discipline beliefs about the distribution of aggregate shocks. When agents observe an extreme, adverse realization, they re-estimate the distribution and attach a higher probability to such events recurring. As a result, even transitory shocks have persistent effects because, once observed, the shock stays forever in the agents' data set. Kozlowski, Veldkamp, and Venkateswaran show that their belief revision mechanism can help explain the persistent nature of the fall in the risk-free rates.
This paper was distributed as Working Paper 24362, where an updated version may be available.
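The belief-revision mechanism can be illustrated with a minimal sketch, assuming a hypothetical sample of aggregate shocks and a simple empirical tail-probability estimate (the paper itself uses richer econometric tools and a production economy):

```python
# Hypothetical pre-crisis sample of aggregate shocks (illustrative numbers,
# not actual macro data): ordinary realizations, no extreme tail event.
pre_crisis = [0.03, 0.02, 0.01, 0.02, 0.04, -0.01, 0.02, 0.03, 0.00, 0.02]

def tail_prob(sample, threshold=-0.04):
    """Empirical probability of a shock falling below `threshold`."""
    return sum(1 for x in sample if x < threshold) / len(sample)

p_before = tail_prob(pre_crisis)   # no tail event in the data yet: 0.0

# An extreme adverse realization (the "crisis") enters the data set and stays
# there forever, so the re-estimated tail probability remains elevated even
# after the shock itself has passed.
post_crisis = pre_crisis + [-0.08]
p_after = tail_prob(post_crisis)

print(p_before, p_after)
```

This captures the key persistence logic: because the observed crisis never leaves the sample, the estimated probability of a recurrence stays permanently above its pre-crisis level.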
Using data from a variety of sources, Charles, Hurst, and Schwartz comprehensively document the dramatic changes in the manufacturing sector and the large decline in employment rates and hours worked among prime-age Americans since 2000. They use cross-region variation to explore the link between declining manufacturing employment and labor market outcomes. They find that manufacturing decline in a local area in the 2000s had large and persistent negative effects on local employment rates, hours worked, and wages. The researchers also show that declining local manufacturing employment is related to rising local opioid use and deaths. These results suggest that some of the recent opioid epidemic is driven by demand factors in addition to increased opioid supply. Charles, Hurst, and Schwartz conclude the paper with a discussion of potential mediating factors associated with declining manufacturing labor demand, including public and private transfer receipt, sectoral switching, and inter-region mobility. Overall, they conclude that the decline in manufacturing employment was a substantial cause of the decline in employment rates during the 2000s, particularly for less-educated prime-age workers. Given the trends in both capital and skill deepening within this sector, the researchers further conclude that many policies currently being discussed to promote the manufacturing sector will have only a modest labor market impact for less-educated individuals.
This paper was distributed as Working Paper 24468, where an updated version may be available.
It is common to analyze the effects of alternative possible monetary policy commitments under the assumption of optimization under rational (or fully model-consistent) expectations. This implicitly assumes unrealistic cognitive abilities on the part of economic decision makers. The relevant question, however, is not whether the assumption can be literally correct, but how much it would matter to model decision making in a more realistic way. Woodford proposes a model based on the architecture of artificial intelligence programs for games such as chess or Go, in which decision makers look ahead only a finite distance into the future, and use a value function learned from experience to evaluate situations that may be reached after a finite sequence of actions by themselves and others. Conditions are discussed under which the predictions of a model with finite-horizon forward planning are similar to those of a rational expectations equilibrium, and under which they are instead quite different. The model is used to re-examine the consequences that should be expected from a central-bank commitment to maintain a fixed nominal interest rate for a substantial period of time. "Neo-Fisherian" predictions are shown to depend on using rational expectations equilibrium analysis under circumstances in which it should be expected to be unreliable.
This paper was distributed as Working Paper 24692, where an updated version may be available.
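The finite-horizon forward-planning architecture can be sketched in a minimal toy form: the agent looks ahead only a fixed number of steps and evaluates states at the horizon with a value function learned from experience. Everything below (the state space, the reward, and the terminal value function) is an illustrative assumption, not the paper's model:

```python
# Minimal sketch of depth-limited lookahead with a learned terminal value
# function, in the spirit of game-playing AI programs: beyond the planning
# horizon, the agent falls back on a value function rather than reasoning
# further ahead.

def plan(state, horizon, actions, step, value_fn, discount=0.95):
    """Best achievable discounted value from `state` when planning only
    `horizon` steps ahead; horizon states are scored by `value_fn`."""
    if horizon == 0:
        return value_fn(state)   # learned evaluation beyond the horizon
    best = float("-inf")
    for a in actions:
        next_state, reward = step(state, a)
        best = max(best, reward + discount *
                   plan(next_state, horizon - 1, actions, step,
                        value_fn, discount))
    return best

# Hypothetical toy problem: the state is a number, actions nudge it up or
# down, and the reward is the negative distance from a target of 10.
def step(s, a):
    s2 = s + a
    return s2, -abs(10 - s2)

value_fn = lambda s: -abs(10 - s)   # stand-in for a learned value function

v = plan(0, 3, actions=[-1, 0, 1], step=step, value_fn=value_fn)
print(v)
```

In this sketch, whether the agent's behavior resembles a rational-expectations solution depends on how well the truncated lookahead plus the learned value function approximates the full infinite-horizon problem, which is the comparison the paper formalizes.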