Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes I'll give some practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Cooper Kupp is one of the top performers in my sample, then Cooper Kupp goes into Group A and may the fantasy gods show mercy on my predictions.
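If you prefer to see the mechanics spelled out, here's a minimal sketch of that Group A / Group B process. Everything in it is hypothetical: the player names, the stat lines, and the choice of yards per carry as the metric are purely for illustration.

```python
# A minimal sketch of the Group A / Group B split described above.
# All player names, stats, and the choice of yards per carry are hypothetical.

def split_groups(players, metric, group_size):
    """Rank players by a metric; top group_size -> Group A, bottom group_size -> Group B."""
    ranked = sorted(players, key=lambda p: p[metric], reverse=True)
    return ranked[:group_size], ranked[-group_size:]

players = [
    {"name": "Back 1", "ypc": 5.8, "rush_yds_per_game": 88},
    {"name": "Back 2", "ypc": 5.4, "rush_yds_per_game": 61},
    {"name": "Back 3", "ypc": 4.9, "rush_yds_per_game": 74},
    {"name": "Back 4", "ypc": 3.9, "rush_yds_per_game": 95},
    {"name": "Back 5", "ypc": 3.7, "rush_yds_per_game": 70},
    {"name": "Back 6", "ypc": 3.5, "rush_yds_per_game": 82},
]

group_a, group_b = split_groups(players, "ypc", group_size=3)

# Step 1: verify that Group A has outproduced Group B so far.
avg = lambda group: sum(p["rush_yds_per_game"] for p in group) / len(group)
print(f"Group A: {avg(group_a):.1f} rush yds/gm, Group B: {avg(group_b):.1f} rush yds/gm")
# Step 2: the prediction is simply that Group B out-produces Group A from here forward.
```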
Most importantly, because predictions mean nothing without accountability, I track the results of my predictions over the course of the season and highlight when they prove correct and also when they prove incorrect. At the end of last season, I provided a recap of the first half-decade of Regression Alert's predictions. The executive summary is we have a 32-7 lifetime record, which is an 82% success rate.
If you want even more details, here's a list of my predictions from 2020 and their final results. Here's the same list from 2019 and their final results, here's the list from 2018, and here's the list from 2017.
The Scorecard
In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.
In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.
In Week 4, I discussed the tendency for touchdowns to follow yards and predicted that players scoring a disproportionately high or low amount relative to their yardage total would see significant regression going forward.
In Week 5, I revisited an old finding that preseason ADP tells us as much about rest-of-year outcomes as fantasy production to date does, even a quarter of the way through a new season. No specific prediction was made.
In Week 6, I explained the concept of "face validity" and taught the "leaderboard test", my favorite quick-and-dirty way to tell how much a statistic is likely to regress. No specific prediction was made.
In Week 7, I talked about trends in average margin of victory and tried my hand at applying the concepts of regression to a statistic I'd never considered before, predicting that teams would win games by an average of between 9.0 and 10.5 points per game.
In Week 8, I lamented that interceptions weren't a bigger deal in fantasy football, given that they're a tremendously good regression target, and then I predicted interceptions would regress.
In Week 9, I explained why the single greatest weapon for regression to the mean is large sample sizes. For individual players, individual games, or individual weeks, regression might only be a 55/45 bet, but if you aggregate enough of those bets, it becomes a statistical certainty. No specific prediction was made.
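To put a rough number on that idea, here's a quick back-of-the-envelope sketch; the 55% hit rate and the bet counts below are just illustrative.

```python
from math import comb

def prob_majority_hit(n_bets, p=0.55):
    """Probability that more than half of n independent bets at win probability p hit."""
    return sum(comb(n_bets, k) * p**k * (1 - p)**(n_bets - k)
               for k in range(n_bets // 2 + 1, n_bets + 1))

# Odd bet counts so there's never a tie.
for n in (1, 11, 101, 1001):
    print(f"{n:>4} bets at 55%: majority hits {prob_majority_hit(n):.1%} of the time")
# One 55/45 bet is barely better than a coin flip; a thousand of them is a near-lock.
```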
In Week 10, I explored the link between regression and luck, noting that the more something was dependent on luck, the more it would regress, and predicted that "schedule luck" in the Scott Fish Bowl would therefore regress completely going forward.
In Week 11, I broke down the very important distinction between "mean reversion" (the tendency of players to perform around their "true talent level" going forward, regardless of how they have performed to date) and "gambler's fallacy" (the idea that overperformers or underperformers are "due" for a correction).
In Week 12, I talked about how much of a team's identity was really just random noise and small samples and projected that some of the most rush-heavy teams would skew substantially more pass-heavy going forward.
In Week 13, I explained why the optimal "hit rate" isn't anywhere close to 100% and suggested that fantasy players should be willing to press even marginal edges if they want to win in the long run.
In Week 14, I sympathized with how tempting it is to assume that players on a hot streak can maintain that level of play but discussed how larger (full-season) samples were almost always more accurate. I predicted that the hottest players in fantasy would all cool down substantially toward their full-season averages.
In Week 15, I discussed several methods of estimating your championship odds and explained why virtually every team is more likely to lose than to win.
In Week 16, I examined what happened to some of our failed predictions if you looked at them over longer timespans and found that while regression could be deferred, the bill eventually came due.
| STATISTIC FOR REGRESSION | PERFORMANCE BEFORE PREDICTION | PERFORMANCE SINCE PREDICTION | WEEKS REMAINING |
| --- | --- | --- | --- |
| Yards per Carry | Group A had 24% more rushing yards per game | Group B has 25% more rushing yards per game | None (Win!) |
| Yards per Touchdown | Group A scored 3% more fantasy points per game | Group A has 12% more fantasy points per game | None (Loss) |
| Margin of Victory | Average margins were 9.0 points per game | Average margins are 9.9 points per game | None (Win!) |
| Defensive INTs | Group A had 65% more interceptions | Group B has 50% more interceptions | None (Win!) |
| Schedule Luck | Group A had 38% more wins | Group A has 4% more wins | None (Loss*) |
| Offensive Identity | Group A had 12% more rushing TDs | Group A has 4% more rushing TDs | None (Loss) |
| "Hot" Players Regress | Players were performing at an elevated level | Players have regressed 128% to season avg. | 1 |
Our previously hot players remain ice cold and getting colder. This underperformance isn't the result of a single player; you could remove the worst underperformer from the sample, and the prediction would still be on track to win. In fact, you could remove the nine worst underperformers from the sample (Davante Adams, DeAndre Carter, Josh Jacobs, Tyler Bass, Christian Kirk, Amon-Ra St. Brown, Garrett Wilson, Samaje Perine, and Justin Fields), and the group still would have regressed 70% of the way toward their full-season average, enough to secure a win. Remove the least-favorable 9 out of 24 players and the prediction still hits.
The good news is that outside of a couple of players who have completely lost their role (DeAndre Carter, say, went from playing ~80% of offensive snaps in the seven weeks before the prediction to ~20% in the three weeks since), the group as a whole is averaging within 10% of its full-season average. Performance to date actually is quite predictive, at least in the macro sense! It's only small sub-samples (like "most recent weeks") that fail the test.
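Roughly speaking, here's how to read those "percent regressed" figures. The sketch below assumes they measure how much of the gap between a player's hot-streak average and his full-season average has been given back since the prediction; the point totals are made up for illustration.

```python
def pct_regressed(hot_streak_avg, since_avg, season_avg):
    """Share of the gap between the hot-streak average and the full-season average
    that has been given back since the prediction. 100% means the player fell exactly
    to his season average; more than 100% means he has overshot it."""
    return (hot_streak_avg - since_avg) / (hot_streak_avg - season_avg) * 100

# Hypothetical player: 22 ppg during the hot streak, 15 ppg on the season,
# and 13 ppg since the prediction was made.
print(f"{pct_regressed(22, 13, 15):.0f}% regressed")   # 129% -- past the season average
```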
Regression and Dr. Ian Malcolm
My first love in fantasy football is dynasty, a format where, once players are on your roster, you keep them indefinitely, drafting new rookies every year as a fresh batch of players enters the league. This column naturally focuses on shorter time scales with predictions that are testable over four-week windows, but late in the season, I like to deviate a bit and look at a way that regression impacts my favorite format. By now, the redraft cake has already been baked, so to speak; 80+% of teams are already eliminated from contention, and the ones that are still alive don't have much of a future to look forward to. But dynasty is forever.
In recent years, I've taken the opportunity to look at how incoming talent tended to regress over time, with positions getting younger once a strong crop of rookies entered the league and then older over time as that group aged. The 2017 running back class was the best in history, so top running backs were very young after 2017 and 2018 but much older today as Christian McCaffrey, Dalvin Cook, Austin Ekeler, Joe Mixon, Leonard Fournette, James Conner, Aaron Jones, and even fantasy-viable role players like D'Onta Foreman and Jamaal Williams get a year older with each passing season, with few new backs entering the league to take their place.
This year, I wanted to talk about one of my most important beliefs in dynasty and how that belief is shaped by my knowledge of regression. The belief is expressed in some quarters as "talented players tend to shine eventually" or "good players get theirs". Personally, I like to call it the Dr. Ian Malcolm Hypothesis, after Jeff Goldblum's character in Jurassic Park. Confronted by a scientist over his concern that the dinosaurs of Jurassic Park might begin to breed despite all the dinosaurs in question being female, Dr. Malcolm is asked how that could happen. Malcolm replies that he doesn't know but that "life finds a way".