Pitchers appear to throw too many fastballs; football teams pass less than they should.
In the perfect world of game theory, two players locked in a zero-sum contest always make rational choices. They opt for the "minimax" solution -- the set of plays that minimizes their maximum possible loss -- and their play selection does not follow a predictable pattern that might give their opponent an edge. But minimax predictions typically have not fared well in lab experiments. And real-world studies, while more supportive, have often used small samples.
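The minimax benchmark can be made concrete with a toy two-pitch game in Python. The payoff numbers below are purely illustrative, not taken from the study; the point is the equilibrium's testable property, namely that at the pitcher's optimal mix, the batter does equally well no matter what he guesses.

```python
# Mixed-strategy minimax for a 2x2 zero-sum game.
# Rows: pitcher's choice; columns: batter's guess. Entries are the
# batter's payoff (say, expected OPS), so the pitcher minimizes.
# All numbers are illustrative, not from the study.

def minimax_row_mix(a, b, c, d):
    """Row player's equilibrium probability of playing row 0 for the
    payoff matrix [[a, b], [c, d]] (payoffs to the column player).
    Assumes no saddle point, so the equilibrium mix is interior."""
    # The row player mixes so the column player is indifferent:
    # p*a + (1-p)*c == p*b + (1-p)*d
    return (d - c) / ((a - b) + (d - c))

# Batter's expected OPS for each (pitch, guess) pair:
#            guess FB  guess off-speed
payoffs = [[0.900,     0.650],   # pitcher throws fastball
           [0.600,     0.850]]   # pitcher throws off-speed

(a, b), (c, d) = payoffs
p_fastball = minimax_row_mix(a, b, c, d)

# At the equilibrium mix, the batter's expected OPS is the same
# whichever pitch he guesses -- the prediction the study tests.
v_guess_fb = p_fastball * a + (1 - p_fastball) * c
v_guess_os = p_fastball * b + (1 - p_fastball) * d
print(round(p_fastball, 3), round(v_guess_fb, 3), round(v_guess_os, 3))
```

With these made-up payoffs the pitcher should throw fastballs half the time, and either guess leaves the batter with the same expected OPS; the study's finding is that real pitch outcomes are not equalized in this way.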
Now a new study, Professionals Do Not Play Minimax: Evidence from Major League Baseball and the National Football League (NBER Working Paper No. 15347), looks at two of the biggest high-stakes examples of zero-sum contests: pitch selection in Major League Baseball and play-calling in the National Football League. Authors Kenneth Kovash and Steven Levitt find that: "Pitchers appear to throw too many fastballs; football teams pass less than they should." They also find that the selection of pitches or plays is too predictable. The researchers conclude that "correcting these decisionmaking errors could be worth as many as two additional victories a year to a Major League Baseball franchise and more than a half win per season for a professional football team."
Kovash and Levitt examine all Major League pitches -- more than 3 million of them -- during the regular seasons from 2002 to 2006 (excluding extra innings). They categorize them as fastballs, curveballs, sliders, or changeups. They measure the outcome of each pitch using the sum of the batter's on-base percentage and slugging percentage (a measure they label OPS) and they determine that fastballs lead to a slightly higher OPS than other types of pitches.
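OPS is simply on-base percentage plus slugging percentage, which a short sketch can make explicit. The event list below is invented for illustration, and the calculation ignores refinements such as hit-by-pitches and sacrifices.

```python
# On-base plus slugging (OPS), the study's per-pitch outcome measure.
# Toy computation from plate-appearance outcomes; the sample events
# are made up, and HBP/sacrifices are ignored for simplicity.

def ops(events):
    """events: outcomes per plate appearance, e.g. 'out', 'walk',
    'single', 'double', 'triple', 'homer'."""
    bases = {'single': 1, 'double': 2, 'triple': 3, 'homer': 4}
    pa = len(events)
    walks = sum(e == 'walk' for e in events)
    at_bats = pa - walks                       # walks are not at-bats
    hits = sum(e in bases for e in events)
    total_bases = sum(bases.get(e, 0) for e in events)
    obp = (hits + walks) / pa                  # on-base percentage
    slg = total_bases / at_bats                # slugging percentage
    return obp + slg

sample = ['out', 'single', 'out', 'walk', 'homer', 'out', 'double', 'out']
print(round(ops(sample), 3))
```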
If batters are more likely to score runs on fastballs, then minimax theory says that pitchers should adjust. To find out why they haven't, the authors look more deeply into the data, controlling for everything from the inning and number of strikes to the number of runners on base. A key factor, they find, is the ball-strike count. As long as there are fewer than two strikes during an at-bat, the difference in outcome between fastballs and non-fastballs tends to be small. But with two strikes, the outcomes diverge dramatically: fastballs generate an OPS more than 100 points higher than non-fastballs. The authors calculate that if a team's pitchers reduced their share of fastballs by 10 percentage points, they would allow roughly 15 fewer runs in a season, about 2 percent of their total runs allowed.
The study then looks at the order of pitches. Because pitch selection can depend on so many variables that the authors cannot measure (pitcher fatigue, whether the curveball is 'working' that day, and so on), they limit their study to situations where the same pitcher faces the same batter at the same count, having thrown the same mix of pitches but in a different order.
Say, for example, the count is 2-and-1 after two fastballs and a slider -- minimax theory predicts that it shouldn't matter which of the three pitches was the slider. But the study finds that pitchers are more predictable than that. If the last pitch was a fastball, the likelihood that the next one will be a fastball falls by 4.1 percentage points. If the last pitch was a slider, then it is 2 percentage points less likely that the next one will be a slider. Other patterns also emerge: fastballs are more likely to follow changeups than other types of pitches; curveballs are most likely to follow fastballs and least likely to follow changeups. Based on interviews with MLB executives and some assumptions of their own, the authors estimate that knowing these statistics would boost a batter's OPS by .006 -- worth about 10 to 15 runs per team per year.
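The kind of predictability the study documents -- switching away from the last pitch too often -- amounts to a gap between the unconditional fastball rate and the rate right after a fastball. A small sketch, run on a synthetic pitch sequence invented for illustration:

```python
# Negative serial correlation in pitch selection: compare the
# unconditional fastball rate with the rate immediately after a
# fastball. The sequence is synthetic, for illustration only.

def fastball_rates(pitches):
    overall = pitches.count('FB') / len(pitches)
    after_fb = [nxt for prev, nxt in zip(pitches, pitches[1:])
                if prev == 'FB']
    conditional = after_fb.count('FB') / len(after_fb)
    return overall, conditional

seq = ['FB', 'SL', 'FB', 'CB', 'FB', 'FB', 'SL', 'FB', 'CH', 'FB',
       'SL', 'FB', 'CB', 'FB', 'SL']
overall, after = fastball_rates(seq)
# A shortfall (after < overall) is the pattern a batter could exploit:
# the pitcher switches away from the fastball too often.
print(round(overall, 2), round(after, 2))
```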
For the NFL, the study concentrates on 125,000 plays from the 2001 to 2005 seasons in which the offense was clearly going to run or pass. The authors construct their own measure of the likelihood of scoring based on the down, the distance to a first down, field position, and so on. Then they analyze the change in a team's expected points before and after each play. They find that a pass on average gains .55 yards more than a run, is 9 percentage points more likely to yield a first down, and leads to a score with a 3.8 percent probability, versus 2.8 percent for a run, although in fairness runs lead to fewer turnovers. Using an expanded set of measures, the authors find that if a team went from passing 56 percent of the time (the current average) to 70 percent, it would score an additional 10 points over the course of a season -- or 3 percent of its total scoring.
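Valuing a play by the change in expected points can be sketched as follows. The expected-points function here is a crude, made-up linear stand-in -- the authors' actual model conditions on down, distance, and more -- but it shows the before-and-after accounting.

```python
# Valuing a play as the change in expected points. The expected-
# points function below is an assumed linear stand-in for
# illustration, NOT the authors' estimated model.

def expected_points(yardline):
    """Rough stand-in: expected net points for an offense with the
    ball `yardline` yards from its own goal line (0-100). The real
    model also conditions on down and distance to a first down."""
    return -1.0 + 0.07 * yardline   # assumed linear shape

def play_value(start, end):
    """Change in expected points from a play that moves the ball
    from `start` to `end` (both measured from the offense's goal)."""
    return expected_points(end) - expected_points(start)

gain_pass = play_value(30, 42)   # a 12-yard completion from the 30
gain_run = play_value(30, 33)    # a 3-yard run from the same spot
print(round(gain_pass, 2), round(gain_run, 2))
```

Averaging such per-play values over all runs and all passes is what lets the authors compare the two play types on a common scale.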
Kovash and Levitt then look at the order of plays, and again find patterns that minimax theory would not predict. Conditional on other factors, a team that has passed is 10 percentage points less likely to pass on the next play. After a passing play with a poor outcome, a team is 14.5 percentage points more likely to switch from a pass to a run on the next play (or vice versa), even after controlling for the down and distance.
The authors estimate, under some conservative assumptions, that if a defense could better anticipate the play using such offensive tendencies, it would give up an average of 10.8 fewer yards per game. That would translate to a point a game, or half a victory a year -- a gain that is slightly larger than that from calling more passing plays.
The authors conclude that "These deviations are not enormous in magnitude -- meaning that they might plausibly not have been detected in the smaller datasets that have been available in most prior field research on the topic -- but are large enough that a team that successfully exploited these patterns could add one or two season wins and millions of dollars in associated revenue."
-- Laurent Belsie