For those who are new to the feature, here's the deal: every week, I break down a topic related to regression to the mean. Some weeks, I'll explain what it is, how it works, why you hear so much about it, and how you can harness its power for yourself. In other weeks, I'll give practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Justin Jefferson is one of the top performers in my sample, then Justin Jefferson goes into Group A, and may the fantasy gods show mercy on my predictions.
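For readers who like the mechanics spelled out, here's a minimal sketch of that process in Python. Everything concrete in it (the column names, the group size of ten, the made-up "TD rate" metric and stats at the bottom) is an illustrative assumption rather than this column's actual data or code; the point is just the shape of the procedure: rank by a metric, take the extremes, confirm the gap, and bet on it shrinking.

```python
import pandas as pd

def build_groups(stats: pd.DataFrame, metric: str, group_size: int = 10):
    """Rank every qualifying player by one metric and take the extremes.

    No manual exclusions: if a superstar lands among the top performers,
    he goes into Group A anyway.
    """
    ranked = stats.sort_values(by=metric, ascending=False)
    group_a = ranked.head(group_size)  # best by the chosen metric so far
    group_b = ranked.tail(group_size)  # worst by the chosen metric so far

    # Verify the gap exists: Group A should have outscored Group B to date,
    # otherwise there is nothing for regression to shrink.
    assert group_a["ppg"].mean() > group_b["ppg"].mean()

    # The prediction itself: going forward, Group B outscores Group A.
    return group_a, group_b

# Toy usage with invented numbers (hypothetical "TD rate" metric):
players = pd.DataFrame({
    "player": [f"Player {i}" for i in range(40)],
    "td_rate": [0.10 - 0.002 * i for i in range(40)],  # touchdowns per touch
    "ppg": [20.0 - 0.3 * i for i in range(40)],        # fantasy points per game
})
group_a, group_b = build_groups(players, metric="td_rate", group_size=10)
```

The assert mirrors the "verify Group A has outscored Group B" step; if it fails, the metric isn't producing the kind of gap this column bets against.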
And because predictions are meaningless without accountability, I track and report my results. Here's last year's season-ending recap, which covered the outcome of every prediction made in our seven-year history, giving our top-line record (41-13, a 76% hit rate) and lessons learned along the way.
Our Year to Date
Sometimes, I use this column to explain the concept of regression to the mean. In Week 2, I discussed what it is and what this column's primary goals would be. In Week 3, I explained how we could use regression to predict changes in future performance-- who would improve, who would decline-- without knowing anything about the players themselves. In Week 7, I explained why large samples are our biggest asset when attempting to benefit from regression.
In Week 9, I gave a quick trick for evaluating whether unfamiliar statistics are likely stable or unstable. In Week 11, I explained the difference between regression and the gambler's fallacy, or the idea that players are "due" to perform a certain way.
Sometimes, I point out broad trends. In Week 5, I shared twelve years' worth of data demonstrating that preseason ADP held as much predictive power as performance to date through the first four weeks of the season.
Other times, I use this column to make specific predictions. In Week 4, I explained that touchdowns tend to follow yards and predicted that the players with the highest yard-to-touchdown ratios would begin outscoring the players with the lowest. In Week 6, I explained that yards per carry was a step away from a random number generator and predicted the players with the lowest averages would outrush those with the highest going forward.
In Week 8, I broke down how teams with unusual home/road splits usually performed going forward and predicted the Cowboys would be better at home than on the road for the rest of the season. In Week 10, I explained why interceptions varied so much from sample to sample and predicted that the teams that had thrown the fewest interceptions would throw more going forward than the teams that had thrown the most.
The Scorecard
| Statistic Being Tracked | Performance Before Prediction | Performance Since Prediction | Weeks Remaining |
| --- | --- | --- | --- |
| Yard-to-TD Ratio | Group A averaged 17% more PPG | Group B averages 10% more PPG | None (Win!) |
| Yards per Carry | Group A averaged 22% more yards per game | Group B averages 38% more yards per game | None (Win!) |
| Cowboys Point Differential | Cowboys were 90 points better on the road than at home | Cowboys are 40 points better on the road than at home | 6 |
| Team Interceptions | Group A threw 58% as many interceptions | Group B has thrown 60% as many interceptions | 2 |
The Cowboys have played significantly worse at home than on the road since our prediction-- but they had Dak Prescott for both road games and Cooper Rush for both home games, so it's still too early to tell how meaningful that is. Either way, given the early hole they've dug themselves into, it's looking less likely that their remaining road performances will be bad enough (or their remaining home performances good enough) to pull out a win on this prediction.
I was similarly worried about our interception prediction after a math error made it much harder than intended, but Group B continues to hold off Group A. All of the Group B teams are past their bye, while Group A still has four byes to go, so we'll see whether Group A catches back up.
Predicting the Past
Understanding regression to the mean is not just useful for predicting the future-- it's also handy for predicting the past. "Predicting the past?" you may scoff. "Surely that's not a thing." Reader, your skepticism wounds me-- predicting the past is called "retrodiction" and is very much a thing.
Why on earth would someone want to predict something that already happened? Because it's one of the easiest ways to check whether one's model is accurate.
For instance, in 1859, astronomer Urbain Le Verrier discovered that Mercury was "orbiting wrong"-- under the standard Newtonian model of gravity, its orbit should be a fixed ellipse, but instead the ellipse was itself slowly rotating around the sun over time.
(Le Verrier wasn't known to be shoddy with his calculations; thirteen years earlier, he had noticed several unexplained deviations in Uranus' orbit and, from those deviations, predicted not only the existence of Neptune-- an as-yet-undiscovered planet-- but also its precise location. This was an example of predicting the present, which unfortunately lacks a catchy name.)
Mercury's odd orbit remained a puzzle for more than 50 years until Albert Einstein began working on his own theory of gravity. One of the first tests of his new system was "predicting" what Mercury's orbit should look like. The fact that his model succeeded where Newton's failed helped convince him of its accuracy.
We may not be an Einstein or a Le Verrier, but we do have a model. And we, too, can harness the power of retrodiction to reassure ourselves of its accuracy.
The Arrow of Time and Regression to the Mean
Most of the time, when we talk about regression to the mean in this space, we're talking about it from the beginning of the process. We take a starting state with large gaps between players or teams and predict an ending state where those gaps are much smaller.
But if regression operates that way (it does), then given an end-state with small gaps, we should be able to "predict" a prior starting state where those gaps were much larger (we can).
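One quick way to convince yourself that the logic runs in both directions is a simulation. Below is a minimal sketch assuming a toy talent-plus-luck model (every number in it is arbitrary): whichever half of the season you use to select an extreme group, that group's edge over the field is largest in the half you selected on and smaller in the other half, regardless of whether that other half comes later (the familiar forward prediction) or earlier (retrodiction).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each half-season's observed output = fixed "true talent" + luck.
n = 1000
talent = rng.normal(15.0, 3.0, n)               # true points per game
first_half = talent + rng.normal(0.0, 4.0, n)   # observed early, with noise
second_half = talent + rng.normal(0.0, 4.0, n)  # fresh noise, same talent

# Forward regression: the first-half leaders keep only part of their edge.
early_top = np.argsort(first_half)[-100:]
print(first_half[early_top].mean() - first_half.mean())    # large gap
print(second_half[early_top].mean() - second_half.mean())  # smaller gap

# Retrodiction: the same logic run in reverse. The second-half leaders were
# also above average in the first half, but by a smaller margin.
late_top = np.argsort(second_half)[-100:]
print(second_half[late_top].mean() - second_half.mean())   # large gap
print(first_half[late_top].mean() - first_half.mean())     # smaller gap
```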