Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes, I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes, I'll give some practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Justin Jefferson is one of the top performers in my sample, then Justin Jefferson goes into Group A, and may the fantasy gods show mercy on my predictions.
Most importantly, because predictions mean nothing without accountability, I report on all my results in real time and end each season with a summary. Here's a recap from last year detailing every prediction I made in 2022, along with all results from this column's six-year history (my predictions have gone 36-10, a 78% success rate). And here are similar roundups from 2021, 2020, 2019, 2018, and 2017.
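If you want to tinker with this process yourself, here's a minimal sketch of the group-split step in Python, using a hypothetical pandas DataFrame. The column names (player, yards_per_target, points_per_game) and the tiny sample are placeholders for illustration, not anything from my actual workflow.

```python
import pandas as pd

def split_groups(df, metric, top_n, bottom_n):
    """Rank players by a metric, then peel off the top and bottom performers."""
    ranked = df.sort_values(metric, ascending=False)
    return ranked.head(top_n), ranked.tail(bottom_n)

# Hypothetical per-player data through the current week.
players = pd.DataFrame({
    "player": ["WR1", "WR2", "WR3", "WR4", "WR5", "WR6"],
    "yards_per_target": [11.2, 10.4, 9.8, 7.1, 6.5, 5.9],
    "points_per_game": [18.3, 16.9, 15.2, 12.4, 11.8, 10.1],
})

group_a, group_b = split_groups(players, "yards_per_target", top_n=3, bottom_n=3)

# Step 1: verify that Group A has outscored Group B so far...
print(group_a["points_per_game"].mean(), group_b["points_per_game"].mean())
# Step 2: ...then predict that, by regression, Group B closes the gap going forward.
```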
The Scorecard
In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.
In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.
In Week 4, I explained that touchdowns follow yards, but yards don't follow touchdowns, and predicted that high-yardage, low-touchdown receivers were going to start scoring a lot more going forward.
In Week 5, we revisited one of my favorite findings. We know that early-season overperformers and early-season underperformers tend to regress, but every year, I test the data and confirm that preseason ADP is still as predictive as early-season results even through four weeks of the season. I sliced the sample in several new ways to see if we could find some split where early-season performance was more predictive than ADP, but I failed in all instances.
In Week 6, I talked about how when we're confronted with an unfamiliar statistic, checking the leaderboard can be a quick and easy way to guess how prone that statistic will be to regression.
In Week 7, I discussed how just because something is an outlier doesn't mean it's destined to regress and predicted that this season's passing yardage per game total would remain significantly below recent levels.
In Week 8, I wrote about why statistics for quarterbacks don't tend to regress as much as statistics for receivers or running backs and why interception rate was the one big exception. I predicted that low-interception teams would start throwing more picks than high-interception teams going forward.
In Week 9, I explained the critical difference between regression to the mean (the tendency for players whose performance had deviated from their underlying average to return to that average) and the gambler's fallacy (the belief that players who deviate in one direction are "due" to deviate in the opposite direction to offset).
In Week 10, I discussed not only finding stats that were likely to regress to their "true mean", but also how we could estimate what that true mean might be.
In Week 11, I explained why larger samples work to regression's benefit and made another yards per carry prediction.
In Week 12, I used a simple model to demonstrate why outlier performances typically require a player to be both lucky and good.
In Week 13, I talked about how a player's mean wasn't a fixed target and predicted that rookie performance would improve later in the season.
| Statistic Being Tracked | Performance Before Prediction | Performance Since Prediction | Remaining Weeks |
| --- | --- | --- | --- |
| Yards Per Carry | Group A had 42% more rushing yards/game | Group A has 10% more rushing yards/game | None (Loss) |
| Yard-to-TD Ratio | Group A had 7% more points/game | Group B has 38% more points/game | None (Win) |
| Passing Yards | Teams averaged 218.4 yards/game | Teams average 220.1 yards/game | 4 |
| Interceptions Thrown | Group A threw 25% fewer interceptions | Group B has thrown 11% fewer interceptions | None (Win) |
| Yards Per Carry | Group A had 10% more rushing yards/game | Group A has 10% more rushing yards/game | 1 |
| Rookie PPG | Group A averaged 9.05 ppg | Group A averages 9.09 ppg | 3 |
| Rookie Improvement | | 60% are beating their prior average | 3 |
I recognize there's a bit of a "heads I win, tails you lose" situation with the yards per carry prediction. I predict that yards per carry will regress while workloads stay constant, and that Group B will outrush Group A as a result. Usually, both happen, and I get to declare victory. Rarely, workloads stay constant but yards per carry doesn't regress, and I get to say, "I may have lost, but I was right about the workloads, wasn't I?" On the other hand, if yards per carry regresses but workloads shift, I get to say, "I may have lost, but I was right about the yards per carry, wasn't I?"
Anyway, both Group A and Group B are averaging an identical 4.02 yards per carry since the prediction, but the volume edge has shifted from Group B to Group A, the first time we've had this happen in a yards per carry prediction. So... I may lose, but at least I'll be right about the yards per carry, right?
The rookie improvement prediction started off splendidly, except for one tiny little problem: Tank Dell broke his leg without recording a catch and is now done for the season. Not only was he one of the most productive rookie WRs (greatly increasing the average our group has to beat), but having a zero sitting on the books will make it more difficult to beat that higher average, too. Right now, rookies are averaging 9.09 points per game, compared to 9.05 prior to the prediction, with 60% beating their prior average.
But if we removed Dell from the sample, rookies would be averaging 10.10 points per game compared to a prior average of 8.56, with an improvement rate of 67%. That's quite the difference.
Anyway, we're not going to remove Dell, because our whole schtick here is making these predictions as difficult as possible for ourselves.
Do Players Get Hot?
It's widely acknowledged that succeeding in the fantasy playoffs is largely about securing players who all "get hot" at the right time. But is "getting hot" a real, predictable phenomenon? Certainly, some players outscore other players in any given sample, but any time performance is randomly distributed, you'd expect clusters of good games or clusters of bad games to occur by chance alone.
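To put a rough number on "by chance alone," here's a quick simulation sketch: every player's weekly score is drawn independently from the same distribution, so there is no hot or cold mechanism at all, and we count how many of those purely random players still post a 3-game stretch averaging 50% above their own baseline. The distribution and the streak definition are my own illustrative assumptions, not anything from this column's research.

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 hypothetical players, 17 games each, scores drawn i.i.d. from a
# normal distribution (mean 12, sd 6) -- no real "heating up" mechanism.
n_players, n_games = 10_000, 17
scores = rng.normal(loc=12, scale=6, size=(n_players, n_games))

# How many of these random players post a 3-game stretch averaging at least
# 50% above their long-run mean (i.e., 18+ points per game) at some point?
window = 3
rolling = np.lib.stride_tricks.sliding_window_view(scores, window, axis=1).mean(axis=2)
had_hot_streak = (rolling >= 18).any(axis=1)

print(f"{had_hot_streak.mean():.1%} of purely random players had a 'hot' 3-game stretch")
```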
If a player has been putting up better games recently, does that indicate that he's "heating up" and will likely sustain that performance going forward? Or does it just mean that he happened to string together a couple of good games and is no more likely to do so again than he ever was? The fantasy community often believes the former, but I'll venture that the truth is much closer to the latter.
Indeed, looking at how a player has performed over the last three, four, or five games is almost always a worse predictor of his future performance than looking at how he's performed over the last nine, ten, or eleven games. As I keep saying around here, large samples are more predictable than small samples. Ignoring half or more of a player's games doesn't give you a better idea of how well that player will perform in the near future; it gives you a worse idea.
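Here's a sketch of that idea under deliberately simplified assumptions: each player has a fixed true scoring mean, weekly scores are just noisy draws around it, and we check whether a 3-game trailing average or a 10-game trailing average better predicts the next game. The player counts and variances are invented for illustration; the point is only that the larger window estimates the underlying mean better.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified model: each player has a fixed true mean, and weekly scores are
# noisy draws around that mean (no momentum, no "heating up").
n_players = 5_000
true_means = rng.normal(loc=12, scale=3, size=n_players)
games = rng.normal(loc=true_means[:, None], scale=6, size=(n_players, 11))

history, next_game = games[:, :10], games[:, 10]

# Predict game 11 from the last 3 games vs. from all 10 prior games.
last3_avg = history[:, -3:].mean(axis=1)
last10_avg = history.mean(axis=1)

print("corr(last 3 games, next game): ", np.corrcoef(last3_avg, next_game)[0, 1])
print("corr(last 10 games, next game):", np.corrcoef(last10_avg, next_game)[0, 1])
# The 10-game average is a better estimate of each player's true mean,
# so it predicts the next game better than the "hot" 3-game window.
```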