Heading into the 2021 draft, Ja’Marr Chase was lauded by many as one of the best wide receiver prospects to enter the NFL in a decade. He was the highest-ranked prospect in our Rookie Guide that season and sat inside our top 20 dynasty rankings prior to the NFL Draft. Why were we, and many others, so optimistic about his future? It was pretty simple. He projected as an early Round 1 pick, made an impact at an early age in college, and had one of the most productive WR seasons in the history of college football.
When considering prospects, we often ask whether they “check all of the boxes.” It’s great when players like Chase do, but we know there are plenty of successful players who only checked a handful. Some boxes are more important than others, and in some cases the combination of checks matters more than the total.
Regression trees are a great way to help us reframe this idea. Throughout the history of the site, writers such as Kevin Cole and Anthony Amico have used regression trees to consider the importance of production and age in predicting NFL outcomes. Regression trees help us understand the mixture of attributes that tends to drive NFL performance and provide a visual way to see how those attributes interact. Heading into the 2021 draft, I built a simple regression tree model with the intent of outlining a basic “rubric” readers could use to better understand a WR’s profile and whether it lent itself to NFL success. With two seasons now behind us, that model has done a solid job of predicting the results of that class. That said, the tree it produced was a little too specific about the touchdown rates in certain branches. As a result, I decided to revisit the use of regression trees to better understand the incoming class of rookies.
Since you’re probably interested, here are the model results I published in late February 2021, sorted by projected draft position (sourced from NFLMockDraftDatabase).
Player | School | Projected Draft Position | Projected PPG |
---|---|---|---|
Ja’Marr Chase | LSU | 5 | 15 |
DeVonta Smith | Alabama | 6 | 11 |
Jaylen Waddle | Alabama | 12 | 11 |
Rashod Bateman | Minnesota | 23 | 11 |
Kadarius Toney | Florida | 27 | 11 |
Rondale Moore | Purdue | 37 | 11 |
Terrace Marshall Jr. | LSU | 39 | 6.3 |
Elijah Moore | Ole Miss | 56 | 11 |
Amon-Ra St. Brown | USC | 72 | 8.3 |
Tylan Wallace | Oklahoma State | 77 | 7.6 |
Chatarius Atwell | Louisville | 85 | 6.3 |
Sage Surratt | Wake Forest | 91 | 7.6 |
Seth Williams | Auburn | 93 | 6.3 |
Marquez Stevenson | Houston | 101 | 6.3 |
Nico Collins | Michigan | 106 | 8.1 |
D’Wayne Eskridge | Western Michigan | 106 | 4.5 |
Tamorrion Terry | Florida State | 107 | 6.2 |
Anthony Schwartz | Auburn | 113 | 8.1 |
Jaelon Darden | North Texas | 116 | 8.1 |
Amari Rodgers | Clemson | 126 | 4.5 |
Dyami Brown | North Carolina | 129 | 6.2 |
Shi Smith | South Carolina | 131 | 4.5 |
Marlon Williams | UCF | 140 | 6.2 |
Dazz Newsome | North Carolina | 170 | 4.5 |
What Is A Regression Tree?
So what in the world is a regression tree? For the purposes of this article, think of a regression tree as a series of questions. The answer to each question leads to the next question, and the process repeats until the questions run out and the tree arrives at an estimate of the points per game a prospect is expected to score over the first three seasons of his NFL career.
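To make that idea concrete, here is a minimal sketch of what a walk down such a tree looks like in code. The metrics and thresholds here (draft pick, breakout age, dominator rating) are hypothetical stand-ins, not the actual splits from my model.

```python
# A toy "series of questions" tree. All thresholds and outputs are made up
# for illustration; they are not the splits from the model in this article.

def predict_ppg(draft_pick: int, breakout_age: float, dominator: float) -> float:
    """Walk a toy tree of yes/no questions and return a projected PPG."""
    if draft_pick <= 32:                 # Question 1: early Round 1 pick?
        if breakout_age <= 19.5:         # Question 2: early college breakout?
            return 13.0
        return 10.5
    if dominator >= 0.30:                # Question 2 (other branch): dominant college production?
        return 8.0
    return 5.5

print(predict_ppg(draft_pick=5, breakout_age=19.1, dominator=0.45))  # -> 13.0
```

Every prospect falls into exactly one “leaf” at the bottom of the tree, and everyone in the same leaf gets the same projection.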
To create the regression trees used in this article, I gathered collegiate production, age, athletic measurables, and draft data for every WR included in the Prospect Box Score Scout. I filtered this listing to include only players who had logged two or more NFL seasons. With some help from an algorithm, I then worked through different combinations of measures until I arrived at a mixture that was well tied to NFL fantasy points, easy to follow visually, and built on intuitive inputs.
The specifics of how the tree was built fall outside the scope of this article. But for a little more background, I split my listing of players into training sets and test sets. I then fed the training sets into an algorithm that built the tree by continually separating players based on thresholds. Once the tree was built, I fed the test sets into the model and compared each player’s predicted result (as calculated by the regression tree) to his actual result. I repeated this process, workshopping different mixtures of statistics until the gap between predicted and actual results was small while the tree itself stayed compact enough to interpret easily.
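For readers who want to tinker, here is a rough sketch of that build-and-test loop using scikit-learn’s DecisionTreeRegressor. The file name, column names, and settings are all hypothetical; the point is just the shape of the process: filter to players with two or more NFL seasons, split into training and test sets, fit a shallow tree, and compare predicted to actual results.

```python
# A minimal sketch under stated assumptions: a hypothetical prospects.csv with
# columns like 'draft_pick', 'breakout_age', 'dominator', 'nfl_seasons', and
# 'nfl_ppg' (average PPG over the first three NFL seasons).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("prospects.csv")
df = df[df["nfl_seasons"] >= 2]              # keep players with 2+ NFL seasons

features = ["draft_pick", "breakout_age", "dominator"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["nfl_ppg"], test_size=0.25, random_state=42
)

# Keep the tree shallow so the final set of "questions" stays easy to read.
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=10, random_state=42)
tree.fit(X_train, y_train)

# Compare each test player's predicted PPG to his actual result.
preds = tree.predict(X_test)
print("Test MAE:", round(mean_absolute_error(y_test, preds), 2))
print(export_text(tree, feature_names=features))
```

The export_text call prints the finished tree as a nested list of thresholds, which is essentially the “rubric” format used throughout this article.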
Keep in mind, this exercise isn’t so much about building the most accurate model possible as it is about giving us another tool to understand how WR metrics interact. Further, it gives us another input into building an expected range of outcomes for incoming rookies. One challenge with regression trees is that as you get into the lower branches, you’ll often find a few splits that seem counterintuitive. This can happen for a variety of reasons, but more often than not it is because the model is overfitting. (This happens when the model aligns too closely to the specific training data, which prevents it from being usefully applied to new data.) It’s also possible there are genuinely counterintuitive relationships, or specific profiles that tend to violate some of the general lessons we have learned.
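One common way to keep a tree from overfitting is to constrain its depth and check a few candidate depths with cross-validation. The snippet below is a hypothetical illustration of that idea, reusing the same made-up prospects.csv and column names from the sketch above.

```python
# Hypothetical overfitting check: score a few tree depths with cross-validation
# and prefer the shallowest depth that still holds up on held-out folds.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

df = pd.read_csv("prospects.csv")            # same made-up file as above
df = df[df["nfl_seasons"] >= 2]
features = ["draft_pick", "breakout_age", "dominator"]

for depth in (2, 3, 4, 6, None):             # None = let the tree grow fully
    model = DecisionTreeRegressor(max_depth=depth, min_samples_leaf=10, random_state=42)
    scores = cross_val_score(model, df[features], df["nfl_ppg"],
                             scoring="neg_mean_absolute_error", cv=5)
    print(f"max_depth={depth}: CV MAE = {-scores.mean():.2f}")
```

If the fully grown tree scores no better than a shallow one, the extra branches are likely memorizing the training data rather than capturing anything real.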