Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes I'll give some practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Cooper Kupp is one of the top performers in my sample, then Cooper Kupp goes into Group A and may the fantasy gods show mercy on my predictions.
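For readers who think in code, the group-splitting process above can be sketched in a few lines. This is a minimal illustration, not the column's actual tooling; the player names, yards-per-carry figures, and yardage totals are all made up:

```python
# A minimal sketch of the column's Group A / Group B method.
# Player names and numbers below are invented for illustration.

def split_groups(players, metric, top_n):
    """Rank players by a metric: top performers go to Group A, bottom to Group B."""
    ranked = sorted(players, key=lambda p: p[metric], reverse=True)
    return ranked[:top_n], ranked[-top_n:]

players = [
    {"name": "RB1", "ypc": 6.4, "rush_yds": 620},
    {"name": "RB2", "ypc": 5.9, "rush_yds": 580},
    {"name": "RB3", "ypc": 4.4, "rush_yds": 500},
    {"name": "RB4", "ypc": 3.9, "rush_yds": 450},
    {"name": "RB5", "ypc": 3.6, "rush_yds": 430},
]

group_a, group_b = split_groups(players, "ypc", 2)

# The column's sanity check: Group A should lead Group B in raw production
# to date, before we predict that Group B will outproduce them going forward.
yards_a = sum(p["rush_yds"] for p in group_a)
yards_b = sum(p["rush_yds"] for p in group_b)
assert yards_a > yards_b
```

Note that the only discretionary input is the metric itself; once it's chosen, group membership is fully determined by the rankings.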
Most importantly, because predictions mean nothing without accountability, I track the results of my predictions over the course of the season and highlight when they prove correct and also when they prove incorrect. At the end of last season, I provided a recap of the first half-decade of Regression Alert's predictions. The executive summary is that we have a 32-7 lifetime record, an 82% success rate.
If you want even more details, here's a list of my predictions from 2020 and their final results. Here's the same list from 2019 and their final results, here's the list from 2018, and here's the list from 2017.
The Scorecard
In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.
In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.
In Week 4, I discussed the tendency for touchdowns to follow yards and predicted that players scoring a disproportionately high or low amount relative to their yardage total would see significant regression going forward.
In Week 5, I revisited an old finding that preseason ADP tells us as much about rest-of-year outcomes as fantasy production to date does, even a quarter of the way through a new season. No specific prediction was made.
In Week 6, I explained the concept of "face validity" and taught the "leaderboard test", my favorite quick-and-dirty way to tell how much a statistic is likely to regress. No specific prediction was made.
STATISTIC FOR REGRESSION | PERFORMANCE BEFORE PREDICTION | PERFORMANCE SINCE PREDICTION | WEEKS REMAINING |
---|---|---|---|
Yards per Carry | Group A had 24% more rushing yards per game | Group B has 25% more rushing yards per game | None (Win!) |
Yards per Touchdown | Group A scored 3% more fantasy points per game | Group A has 6% more fantasy points per game | 1 |
We've covered the journey of our yards per carry prediction well enough that all that's left is to announce the final score. Our "high-ypc" backs fell from 6.41 to 4.28 yards per carry, while our "low-ypc" backs rose from 3.81 to 4.53 yards per carry. For context, league-average yards per carry among running backs is 4.47, so both groups have essentially been average (with our "bad" backs slightly above average and our "good" backs slightly below). This isn't a small, fluky sample, either; Group B has logged 583 carries since our prediction, a strong two seasons' worth of work. (Due to injuries, Group A has "only" logged 320 carries, which would still be one massive single-season workload.) Yards per carry is not, in any sense that matters for our purposes here, "a thing".
Meanwhile, our yards-per-touchdown prediction fared a bit better; both groups had bad weeks, but Group B's was marginally less bad, and as a result, they moved us closer to a "flip". Remember, Group B needs not just to outscore Group A, but to do so by at least 10%. With one week to go, it's very unlikely they'll manage to salvage a win.
Testing Our Intuitions Regarding Regression
While preparing for the column this week I saw a tweet that immediately sparked my interest.
One stat NFL owners will hear at today’s fall meetings: average margin of victory this season is 8.9 points, on pace to shatter the Super Bowl-era record of 10.2. The only other year below 10 was 1932, when the Bears opened with three straight scoreless ties and won the title.
— Tom Pelissero (@TomPelissero) October 18, 2022
Now, hopefully, I've taught enough about regression that your first thought on reading "this is the most extreme X in 90 years" should be, "I bet X is going to regress toward historical averages going forward". That was my first thought, too. But how much of its outlier status is just randomness being especially random this year, and how much is due to actual structural realities in the league today?
I'm fairly certain I've never considered "average margin of victory" as a standalone stat before, in much the same way that I've never considered "average temperature at kickoff" or "average miles traveled by the visiting team". As a factor in individual games, sure, I've given thought to games that were especially close or especially cold or where teams had to travel especially far. But as an aggregate, leaguewide measure it was entirely new, so I didn't have any preconceived notions about what it "should" be or what factors might drive it higher or lower. Which makes this a really good test for my intuitions, which are coming in entirely naive.
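For concreteness, here's how a leaguewide average margin of victory could be computed from final scores. This is a sketch with invented scores, not real 2022 results:

```python
# A minimal sketch of computing leaguewide average margin of victory.
# The scores below are made up for illustration, not real NFL results.

games = [
    (24, 17),  # each tuple is one game's two final scores
    (31, 30),
    (20, 3),
    (27, 24),
]

# Margin of victory for each game is the absolute score difference.
margins = [abs(a - b) for a, b in games]
avg_margin = sum(margins) / len(margins)
print(round(avg_margin, 2))  # -> 7.0 for these made-up games
```

One thing the aggregate hides is the distribution: a few blowouts can pull the average well above the typical game's margin, which is part of why a single season's figure can be noisy.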