For those who are new to the feature, here's the deal: every week, I break down a topic related to regression to the mean. Some weeks, I'll explain what it is, how it works, why you hear so much about it, and how you can harness its power for yourself. In other weeks, I'll give practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Justin Jefferson is one of the top performers in my sample, then Justin Jefferson goes into Group A, and may the fantasy gods show mercy on my predictions.
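To make that procedure concrete, here's a toy simulation. None of the numbers come from the column's actual data; the player count, talent spread, and noise level are all invented for illustration. It ranks players on a noisy half-season stat, splits them into a top group and a bottom group, and then shows the gap between the groups collapsing in the second half, which is regression to the mean doing its work:

```python
import random

random.seed(42)

N = 100  # hypothetical player pool
# Each player has a modest amount of "true" talent, but weekly output is
# dominated by luck -- mirroring unstable stats like yards per carry.
talent = [random.gauss(70, 5) for _ in range(N)]

def half_season(talent):
    # Observed per-game average over a half season: talent plus heavy noise.
    return [t + random.gauss(0, 25) for t in talent]

first = half_season(talent)
second = half_season(talent)

# Rank by first-half production; top quartile is Group A, bottom is Group B.
order = sorted(range(N), key=lambda i: first[i], reverse=True)
group_a, group_b = order[:25], order[-25:]

def avg(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

gap_before = avg(first[i] for i in group_a) - avg(first[i] for i in group_b)
gap_after = avg(second[i] for i in group_a) - avg(second[i] for i in group_b)

print(f"First-half gap (A minus B):  {gap_before:.1f}")
print(f"Second-half gap (A minus B): {gap_after:.1f}")
```

Because the groups were selected on first-half noise rather than talent, Group A's lead is mostly luck, and the second-half gap shrinks toward the (small) real talent difference. The noisier the stat, the harder the snap-back.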
And because predictions are meaningless without accountability, I track and report my results. Here's last year's season-ending recap, which covered the outcome of every prediction made in our seven-year history, giving our top-line record (41-13, a 76% hit rate) and lessons learned along the way.
Our Year to Date
Sometimes, I use this column to explain the concept of regression to the mean. In Week 2, I discussed what it is and what this column's primary goals would be. In Week 3, I explained how we could use regression to predict changes in future performance-- who would improve, who would decline-- without knowing anything about the players themselves. In Week 7, I explained why large samples are our biggest asset when attempting to benefit from regression. In Week 9, I gave a quick trick for evaluating whether unfamiliar statistics are likely stable or unstable.
Sometimes, I point out broad trends. In Week 5, I shared twelve years' worth of data demonstrating that preseason ADP held as much predictive power as performance to date through the first four weeks of the season.
Other times, I use this column to make specific predictions. In Week 4, I explained that touchdowns tend to follow yards and predicted that the players with the highest yard-to-touchdown ratios would begin outscoring the players with the lowest. In Week 6, I explained that yards per carry was a step away from a random number generator and predicted the players with the lowest averages would outrush those with the highest going forward.
In Week 8, I broke down how teams with unusual home/road splits usually performed going forward and predicted the Cowboys would be better at home than on the road for the rest of the season. In Week 10, I explained why interceptions varied so much from sample to sample and predicted that the teams throwing the fewest interceptions would pass the teams throwing the most.
The Scorecard
Statistic Being Tracked | Performance Before Prediction | Performance Since Prediction | Weeks Remaining |
---|---|---|---|
Yard-to-TD Ratio | Group A averaged 17% more PPG | Group B averages 10% more PPG | None (Win!) |
Yards per carry | Group A averaged 22% more yards per game | Group B averages 38% more yards per game | None (Win!) |
Cowboys Point Differential | Cowboys were 90 points better on the road than at home | Cowboys are 16 points better on the road than at home | 7 |
Team Interceptions | Group A threw 58% as many interceptions | Group B has thrown 41% as many interceptions | 3 |
The Cowboys put Dak Prescott on injured reserve, and it's possible that he took the rest of the offense with him; against the Eagles, Dallas finished with fewer than 50 net passing yards. This is quite bad for our prediction. At the time we made it, the team had five home and five road games remaining in the sample. Because of how those games are ordered, though, 40% of those road games will come with Dak Prescott under center, compared to 0% of the home games.
Ordinarily, this is why we prefer to bundle our predictions into groups-- over larger samples, the unlucky breaks will hit Group A at about the same rate that they hit Group B, and everything evens out. But we do like to try something different on occasion, and while there are still seven weeks to go, this might be a situation where it bites us.
Our interception prediction is faring much better so far. There was an error in my math last week-- I said that Group B led Group A in interceptions 89 to 64, but the real lead was 111 to 64. This will make it more difficult for Group B to flip the result. I worried about it for much of last week... until Sunday Night Football, when the "low-interception" Lions combined with the "low-interception" Texans to throw 7 picks, as many as all Group B teams combined.
Gambler's Fallacy and Regression to the Mean
Before we start, a quick quiz:
Imagine a receiver plays especially well over the first eight games of a sixteen-game season, averaging 100 yards per game (on pace for 1600 total). Imagine that we also happen to know this player is overperforming; his "true mean" performance level is just 80 yards per game. How many yards per game should we expect this receiver to average at the end of the year?
We'll get to the answer in a bit.
The goal of this column is to convince you to view regression to the mean as a force of nature, implacable and inevitable, a mathematical certainty. I can generate a list of players and, without knowing a single thing about any of them, predict which ones will perform better going forward and which will perform worse. I like to say that I don't want any analysis in this column to be beyond the abilities of a moderately precocious 10-year-old.
But it's important that we give regression to the mean as much respect as it deserves... and not one single solitary ounce more.
This is difficult because regression is essentially the visible arm of random variation, and our brains are especially bad at dealing with genuine randomness. We're just not wired that way. We see patterns in everything. There's even a name for this hardwired tendency to "discover" patterns in random data: apophenia.
A fun example of apophenia is pareidolia, or the propensity to "see" faces in random places. Our ancestors used to tell stories of the "Man in the Moon". We... type silly faces to communicate emotion over the internet. Yes, pareidolia is why I can type a colon and a close paren and you'll immediately know that I'm happy and being playful. :)
Our ability to "see" these faces is surprisingly robust. -_- is just three short lines, and not only do most people see a face, they also mentally assign it a specific mood. '.' works as well. With small changes, I can convey massive differences in that mood. (^.^) and (v.v) are remarkably similar, yet the interpreted moods are drastically different.