Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes I'll give some practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Cooper Kupp is one of the top performers in my sample, then Cooper Kupp goes into Group A and may the fantasy gods show mercy on my predictions.
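For readers who think in code, the group-splitting process above can be sketched in a few lines. The player data here is invented purely for illustration; the real column ranks actual league-wide stats.

```python
import random

# Hypothetical illustration of the column's methodology: rank every player by
# a chosen metric, put the top performers in Group A and the bottom performers
# in Group B, then predict that Group B outscores Group A going forward.
random.seed(42)
players = [{"name": f"Player {i}", "yards_per_carry": random.uniform(2.5, 6.5)}
           for i in range(40)]

ranked = sorted(players, key=lambda p: p["yards_per_carry"], reverse=True)
group_a = ranked[:10]   # best per-carry averages so far
group_b = ranked[-10:]  # worst per-carry averages so far

def avg(group):
    return sum(p["yards_per_carry"] for p in group) / len(group)

# By construction, Group A leads to date; the regression prediction is that
# Group B closes the gap (or flips it) from this point on.
assert avg(group_a) > avg(group_b)
```

Note that the selection rule is mechanical: whoever tops the leaderboard goes into Group A, with no discretion to exclude anyone.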
Most importantly, because predictions mean nothing without accountability, I track the results of my predictions over the course of the season and highlight when they prove correct and also when they prove incorrect. At the end of last season, I provided a recap of the first half-decade of Regression Alert's predictions. The executive summary is we have a 32-7 lifetime record, which is an 82% success rate.
If you want even more details, here's a list of my predictions from 2020 and their final results. Here's the same list from 2019 and their final results, here's the list from 2018, and here's the list from 2017.
The Scorecard
In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.
In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.
In Week 4 I discussed the tendency for touchdowns to follow yards and predicted that players scoring a disproportionately high or low amount relative to their yardage total would see significant regression going forward.
In Week 5, I revisited an old finding that preseason ADP tells us as much about rest-of-year outcomes as fantasy production to date does, even a quarter of the way through a new season. No specific prediction was made.
In Week 6, I explained the concept of "face validity" and taught the "leaderboard test", my favorite quick-and-dirty way to tell how much a statistic is likely to regress. No specific prediction was made.
In Week 7, I talked about trends in average margin of victory and tried my hand at applying the concepts of regression to a statistic I'd never considered before, predicting that teams would win games by an average of between 9.0 and 10.5 points per game.
In Week 8, I lamented that interceptions weren't a bigger deal in fantasy football given that they're a tremendously good regression target, and then I predicted interceptions would regress.
| STATISTIC FOR REGRESSION | PERFORMANCE BEFORE PREDICTION | PERFORMANCE SINCE PREDICTION | WEEKS REMAINING |
|---|---|---|---|
| Yards per Carry | Group A had 24% more rushing yards per game | Group B has 25% more rushing yards per game | None (Win!) |
| Yards per Touchdown | Group A scored 3% more fantasy points per game | Group A has 12% more fantasy points per game | None (Loss) |
| Margin of Victory | Average margins were 9.0 points per game | Average margins are 11.4 points per game | 2 |
| Defensive INTs | Group A had 65% more interceptions | Group B has 125% more interceptions | 3 |
Average margins were closer to our target in the second week of the prediction, but at 10.9 points per game they still finished slightly above it, which brings our two-week average to 11.4. Remember, in our attempt to predict a specific range, we followed three steps: predicting that margins would be higher than early-season averages (check), lower than last season's average (check), and closer to the early-season average than to last season's average (through two weeks, not a check). I still don't have strong intuitions about what this stat will do over the next two weeks, but we'll keep watching and find out.
At the time of our prediction last week, Group A averaged 1.28 interceptions per game and Group B averaged 0.38 interceptions per game. Last week, Group A averaged 0.67 interceptions per game against 0.82 for Group B, and since Group B featured more total teams, their total edge was even larger than their per-game edge. (For comparison, the teams in the middle of the distribution who didn't land in either Group A or Group B averaged 0.88 interceptions per game at the time of prediction and 0.77 interceptions per game last week.)
Given that intercepting passes is genuinely a skill, I would expect Group A to pass Group B in interceptions per game again at some point. But given how thoroughly individual interceptions are dependent on luck, I think Group B is going to handily win this prediction overall.
Regression and Large Samples
One of the key features of regression to the mean is that outlier performances are significantly more likely over small samples. If I flip a coin that's weighted to land on heads 60% of the time, there's still a 40% chance it lands on tails. Given those odds, a single tails wouldn't be surprising at all. But if I flipped the same coin a million times, the odds of seeing tails come up more often than heads dwindle down to nothing.
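The weighted-coin claim is easy to check with a quick simulation. The flip counts and trial counts below are arbitrary choices for illustration:

```python
import random

# Simulate a coin weighted 60% heads and measure how often tails ends up
# ahead of heads, for small and large numbers of flips.
random.seed(0)

def tails_beats_heads(flips, p_heads=0.6, trials=200):
    """Fraction of trials in which tails comes up more often than heads."""
    wins = 0
    for _ in range(trials):
        heads = sum(random.random() < p_heads for _ in range(flips))
        if flips - heads > heads:
            wins += 1
    return wins / trials

print(tails_beats_heads(1))     # roughly 0.40: one flip is a coin toss-ish
print(tails_beats_heads(1001))  # essentially 0: tails almost never leads
```

Over a single flip, tails wins about 40% of the time; over a thousand flips, the edge for heads is effectively insurmountable.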
This idea that variance evens out over larger samples is one of the key insights in fantasy football. Why do top DFS players compete with so many different lineups every week? The answer is not, as is commonly believed, because it increases their expected return on investment. Indeed, every DFS player has a "best" lineup, a lineup that they think is most likely to win that week, and every other lineup that player submits actually decreases expected payout (because it's a worse lineup than the best lineup).
So why submit so many different lineups? Because outlier performances are significantly more likely over small samples. By using 20 lineups in a week, top players reduce the amount of money they'd be expected to win, but they also reduce the chances of a single injury or bad performance wiping out their entire bankroll, and that's a worthwhile trade.
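A toy simulation makes the variance tradeoff concrete. This is a simplified model of my own devising, not real DFS payout math: every lineup here has identical expected value (either busting or paying 2.5x, 50/50), whereas in reality the extra lineups are slightly worse than the best one.

```python
import random
import statistics

# Toy model: split a fixed bankroll across n identical lineups, each of which
# independently busts (0x) or cashes (2.5x) with equal probability. Expected
# return is the same either way; only the spread of outcomes changes.
random.seed(1)

def weekly_returns(n_lineups, bankroll=100.0, weeks=2000):
    """Simulate total weekly returns with the bankroll split across lineups."""
    stake = bankroll / n_lineups
    return [sum(stake * random.choice([0.0, 2.5]) for _ in range(n_lineups))
            for _ in range(weeks)]

one = weekly_returns(1)
twenty = weekly_returns(20)
# Similar means, far smaller spread with 20 lineups:
print(statistics.mean(one), statistics.stdev(one))
print(statistics.mean(twenty), statistics.stdev(twenty))
```

Both strategies return about 125 on average, but the single-lineup player swings between 0 and 250 every week, while the 20-lineup player clusters tightly around the mean, which is exactly the bankroll protection described above.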