In 2009, the New York Giants' Steve Smith was one of the most consistent fantasy wide receivers. He averaged 13.6 FP/G with a standard deviation of only 6.3 FP; that ratio of 6.3 to 13.6 (46.1%) was the lowest among the top 50 fantasy wide receivers that season. Thirteen of his sixteen games produced between 8.5 and 16.5 fantasy points, an incredibly narrow range for a top-end fantasy wideout. Only his two monster games early in the year (10/134/1 against the Cowboys, 11/134/2 against the Chiefs) stand out as outliers, and as far as "worst" games go, his 4/44 performance against the Saints was pretty good. In fact, he was the only receiver in 2009 to score at least 6 fantasy points in all sixteen games (using a 0.5 PPR scoring system).
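That consistency measure (the per-game standard deviation divided by the per-game average, better known as the coefficient of variation) is easy to compute from a game log. Here is a minimal sketch; the two game logs are invented for illustration, not Smith's actual 2009 numbers:

```python
# Rank receivers by consistency: lower coefficient of variation
# (std dev / mean of per-game fantasy points) = more consistent.
from statistics import mean, pstdev

# Hypothetical game logs, not real player data.
game_logs = {
    "Receiver A": [12, 14, 11, 15, 13, 12, 16, 14],  # steady possession type
    "Receiver B": [4, 25, 6, 22, 5, 24, 3, 19],      # boom-or-bust type
}

for name, points in game_logs.items():
    avg = mean(points)
    sd = pstdev(points)  # population std dev over the games played
    print(f"{name}: {avg:.1f} FP/G, std dev {sd:.1f}, CV {sd / avg:.1%}")
```

Both hypothetical receivers average around 13 FP/G, but the boom-or-bust profile produces a far larger coefficient of variation.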

But does this *mean* anything?

Over the 20-year period from 1989 to 2008, 134 different receivers averaged between 12.6 and 14.6 fantasy points per game. Those receivers ranged from guys with super low standard deviations like Terry Glenn in 1996 or Chad Johnson in 2002 (4.8 for both) to really inconsistent players like ... Chad Johnson in 2006 (11.8). Most fantasy owners would agree that some level of consistency is good; it's better to have players consistently score around 15 fantasy points than for someone to alternate between 5 and 25 points every other week. Assuming, for the sake of argument, that consistency is desirable, we need to know whether consistency is predictable. That is, do players with high standard deviations continue to perform inconsistently? Or is a player's standard deviation in any one season something that has little predictive value?

Of those 134 receivers, 123 played in 11 or more games the following season. Let's break that group into three subgroups -- the most inconsistent receivers (highest standard deviation), the most consistent receivers (lowest standard deviation), and everyone in the middle. The table below shows how those players performed in the year in question (when they averaged between 12.6 and 14.6 FP/G) and their production in Year N+1.

| Category | Rec | Yrds | TDs | FP/G | StdDev | N+1 G | N+1 FP | N+1 FP/G | N+1 StdDev |
|---|---|---|---|---|---|---|---|---|---|
| High StdDev | 77.4 | 1191 | 8.8 | 13.7 | 9.6 | 15.2 | 182.3 | 12.0 | 7.2 |
| Mid StdDev | 83.4 | 1175 | 8.4 | 13.5 | 8.1 | 15.3 | 185.4 | 12.1 | 6.6 |
| Low StdDev | 83.5 | 1176 | 7.6 | 13.4 | 6.2 | 15.4 | 186.4 | 12.1 | 7.2 |

As you can see, knowing a player's standard deviation appears to have little predictive value. The really inconsistent players and the really consistent players performed nearly identically the following season, in terms of both average and standard deviation. I also looked at just the 15 most extreme players on each side of the list: the fifteen most inconsistent receivers averaged 13.6 FP/G with a standard deviation of 10.4 in Year N; the next year, they averaged 11.9 FP/G with a standard deviation of 7.1. The 15 most consistent wideouts averaged 13.3 FP/G with a standard deviation of just 5.4; they averaged 12.7 FP/G in Year N+1 with a higher standard deviation of 7.4. I don't see much evidence that being inconsistent -- or consistent -- in one year has any bearing on your consistency in the future.
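For what it's worth, the three-way split used above is just a sort-and-cut on standard deviation. A minimal sketch of that grouping, with invented names and numbers:

```python
# Sort receivers by Year N standard deviation, then cut the ranked
# list into three roughly equal tiers (high, mid, low).
def split_into_tiers(players, key):
    """players: list of (name, stddev) tuples; returns (high, mid, low)."""
    ranked = sorted(players, key=key, reverse=True)
    third = len(ranked) // 3
    return ranked[:third], ranked[third:-third], ranked[-third:]

# Hypothetical receivers, not the actual 123-player sample.
players = [("A", 9.6), ("B", 8.1), ("C", 6.2),
           ("D", 10.1), ("E", 7.0), ("F", 5.5)]
high, mid, low = split_into_tiers(players, key=lambda p: p[1])
```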

But -- and this is a big but -- inconsistent and consistent aren't terms that exist in a vacuum. There's a correlation between consistency and receptions, and there's a correlation between inconsistency and touchdowns. Among the 123 wide receivers we've been looking at, the correlation coefficient between a player's standard deviation and his receiving touchdowns was 0.24; this indicates a mild, but noticeable, correlation between inconsistency and touchdowns. In that same group, the correlation coefficient between a player's receptions and standard deviation was -0.13, indicating a slight negative correlation between the two variables.

And this makes sense. Touchdowns are worth a lot of points, and if you're a player who gets a large percentage of his value by scoring touchdowns, you're more likely to be inconsistent since touchdowns occur relatively infrequently. Receptions, on the other hand, are more consistent than touchdowns. If you're a possession receiver, you're less likely to have those really high and really low scoring games.

At least, that's what I would think. What do the numbers say?

I looked at those same 123 receivers and broke them up into three tiers, based on what percentage of their fantasy points came via receptions. Here are the results:

| Category | Rec | Yrds | TDs | FP/G | StdDev | N+1 G | N+1 FP | N+1 FP/G | N+1 StdDev |
|---|---|---|---|---|---|---|---|---|---|
| High Catch | 92.7 | 1189 | 7.1 | 13.4 | 7.4 | 15.3 | 182.3 | 11.8 | 6.9 |
| Mid Catch | 81.4 | 1202 | 8.1 | 13.6 | 8.1 | 15.4 | 195.1 | 12.7 | 7.3 |
| Low Catch | 70.3 | 1151 | 9.6 | 13.6 | 8.3 | 15.2 | 176.7 | 11.6 | 6.8 |

The high catch guys caught about 93 passes for 1,189 yards and 7 scores, while the low catch guys had only 70 receptions, slightly fewer yards, but about 2.5 more receiving touchdowns. Both groups averaged around 13.5 FP/G, but -- as expected -- the high catch guys had the lower standard deviation. What happened the next year? The high catch guys and the low catch guys, surprisingly, had nearly identical fantasy points per game averages and standard deviations in Year N+1.

Ultimately, I think the connection is just too tenuous to expect a trait like consistency to, well, be consistent from year to year. The quality of the quarterback, offensive line, and coaching staff, injuries to key offensive players, and the dynamics of each game all have an impact on a receiver that far outweighs any theoretical notion like consistency. The lesson, as usual, is to draft good players, not consistent ones.

Questions, suggestions and comments are always welcome to stuart@footballguys.com.