Using Team WR Rankings to Identify Early MFL10 Targets
MFL10 season is underway! If you are one of the sick degenerates who could not bear the last month of detox from fantasy football, you are probably champing at the bit. For those of you in the “but how can you pick players if free agency has not even started yet” camp, this article is here to help you get started.
There really is no surefire way to pick players this early in the offseason. While many writers will preach picking “safe” players, how many “safe” players are there, really? Between coaching changes, free agency, the draft, and the preseason, there are many moving parts to track over the next seven months. The truth is, the only “safe” idea is that very little is actually safe. As many as a quarter of NFL teams could have new starting QBs by Week 1; RB committees are, in most cases, very volatile; and who knows what injuries or draft picks could shake up a depth chart in an instant. The best we can do is use the information we do have to make better estimates than everyone else.
Regression to the Mean
Sometimes the data we have to look at is obvious stuff: points, targets, etc. To really start estimating better, we need to look one step further. Perhaps we can identify attractive opportunities at the team level before worrying about the players themselves, by starting with team points.
I went through every wide receiver who played from 2012 to 2016 and aggregated their receiving statistics by team. I then ranked each team by its WRs’ total PPR points in that season. To start, let’s look at the correlation matrix between all of the years.
“Fin12” refers to the team’s finish in 2012. Pay the most attention to the diagonal immediately below the identity rows as those are the year-over-year, or YOY, correlations.
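The rank table behind that matrix is straightforward to rebuild. Here is a minimal sketch in pandas using a tiny made-up data set (the real version would use full 2012-2016 player-season receiving logs); the team names and point totals below are placeholders, not the article’s actual data:

```python
import pandas as pd

# Hypothetical (team, season, PPR points) rows for individual WRs.
rows = [
    ("GB", 2015, 210.0), ("GB", 2015, 150.0),
    ("NO", 2015, 250.0), ("NO", 2015, 90.0),
    ("SF", 2015, 120.0), ("SF", 2015, 80.0),
    ("GB", 2016, 260.0), ("GB", 2016, 200.0),
    ("NO", 2016, 240.0), ("NO", 2016, 180.0),
    ("SF", 2016, 100.0), ("SF", 2016, 70.0),
]
wr = pd.DataFrame(rows, columns=["team", "season", "ppr"])

# Sum WR PPR points per team-season, then rank within each season
# (rank 1 = most team WR points).
team_pts = wr.groupby(["team", "season"])["ppr"].sum().reset_index()
team_pts["rank"] = team_pts.groupby("season")["ppr"].rank(ascending=False)

# Pivot to one "FinYY" column per season, then correlate the columns.
ranks = team_pts.pivot(index="team", columns="season", values="rank")
ranks.columns = [f"Fin{str(s)[2:]}" for s in ranks.columns]
corr = ranks.corr()
print(corr)
```

With all five seasons as columns, the diagonal immediately below the identity row of `corr` holds the YOY correlations discussed above.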
So generally speaking, outside of 2014 to 2015, the year-over-year correlation of team WR PPR ranks is very strong — a little over 0.5. I took it a step further and then looked at just the top eight teams in each season to see how they correlated to their next season(s).
Notice the YOY correlation lagged behind the entire league’s correlation in every year (and was basically a free-for-all for 2015’s top eight last season). As any sabermetrician will probably tell you, counting stats and ratios for teams/players at either end of the spectrum will tend to regress to the mean the following year.
The idea behind this is that most stats tend to have a large luck component attached to them that will balance out over time. This could be a combination of injuries, dropped passes, a defender slipping while chasing down a WR, etc. We can further validate that by now looking at the bottom eight teams YOY and noticing that their n+1 correlation was also far less predictable than the rest of the league’s (even last year’s abnormally-higher 0.48 correlation lagged behind the league as a whole).
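The top-eight and bottom-eight comparisons are just the same correlation computed on a subset of the rank table. A sketch, again with made-up rank columns standing in for the real 2012-2016 finishes:

```python
import pandas as pd

# Hypothetical rank table: one row per team, one "FinYY" column per season.
# The real analysis would use the full 32-team tables.
ranks = pd.DataFrame(
    {"Fin15": [1, 2, 3, 4, 5, 6, 7, 8, 25, 26, 27, 28, 29, 30, 31, 32],
     "Fin16": [5, 1, 12, 20, 3, 9, 15, 2, 18, 30, 10, 24, 32, 14, 22, 7]})

# Next-season correlation for just the prior season's top eight...
top8 = ranks.nsmallest(8, "Fin15")
print(top8["Fin15"].corr(top8["Fin16"]))

# ...and for the prior bottom eight. Per the article, both subsets tend
# to correlate far more weakly with next season than the league does.
bottom8 = ranks.nlargest(8, "Fin15")
print(bottom8["Fin15"].corr(bottom8["Fin16"]))
```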
To visualize that better, take a look at the next-year finishes of the top and bottom eight teams in team PPR points from 2012-2015. The average top-eight team finished 5.7 spots lower the next season, while the bottom-eight teams finished 6.7 spots higher.
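Those averages are simple rank differences. A self-contained sketch of the calculation, with placeholder numbers rather than the article’s real 2012-2015 figures (5.7 and 6.7 spots):

```python
import pandas as pd

# Hypothetical (this-year rank, next-year rank) pairs for a handful of
# top and bottom teams; real data would cover all qualifying team-seasons.
pairs = pd.DataFrame(
    {"this_year": [1, 2, 3, 4, 29, 30, 31, 32],
     "next_year": [6, 9, 1, 11, 20, 31, 25, 28]})

top = pairs[pairs["this_year"] <= 8]
bottom = pairs[pairs["this_year"] >= 25]

# Top teams: average spots *lower* (larger rank number) the next year.
print((top["next_year"] - top["this_year"]).mean())
# Bottom teams: average spots *higher* (smaller rank number) the next year.
print((bottom["this_year"] - bottom["next_year"]).mean())
```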
Sell the Top, Buy the Bottom
So what does this mean for our MFL10s? For one, we probably want to back off a little on teams that finished near the top of the team WR ranks last season (GB, NO, ATL, ARI, NYG, PIT, OAK, DET). I’m not advising you to avoid them completely — it would be idiotic to avoid Antonio Brown because of a broad math principle — but the theory is sound. These teams are more likely to rank lower in team WR points next season than where they finished in 2016, and the fervor from drafters wanting to buy into last year’s numbers may make many of these teams’ receivers candidates for over-drafting.
For instance, both GB and NO produced two WR1s last season. Odds are, this will not happen again due to general regression. Over the last two seasons, the top two teams in team WR points both regressed the next season (ATL and PIT finished eighth and ninth in 2015; NYJ and PIT finished 16th and sixth last season). That doesn’t mean avoid Davante Adams, Jordy Nelson, Brandin Cooks, and/or Michael Thomas, but there is an opportunity cost to buying the top of last year’s production: taking a team’s WR2 over another team’s WR1.
Another takeaway may be to look for targets on last year’s bottom-feeders. That’s right, all of the 49ers and Eagles! From 2012-2015, the rank of the bottom three teams went up by an average of 7.5 spots the next season (even removing the 2014 Jets going from 31st to first, it’s still an average of 5.5 spots). The only teams in the last five years to repeat a bottom-three finish the next season were the 2012 and 2013 Chiefs (who threw zero TDs to WRs, a comical outlier) and the 2014 Rams. It may not sound sexy to draft Torrey Smith or Jordan Matthews anymore, but remember we can’t fill our rosters with seven or eight copies of Mike Evans, so we need to look for an alpha anywhere we can get it.
The full table of yearly team WR ranks is below. Next time — once we have some actual ADP to compare to — we will try to use this information to source some potentially under-priced WRs.