Monte Carlo Strategies to Win 2016 MFL10s – Part II
Editor’s note: This is one of two Monte Carlo simulation articles aimed at solving the best-ball puzzle, each using different assumptions. We believe doing so gives a good idea of the range of possible outcomes. For the other article by A.J. Bessette and Greg Meade, click here.

Almost exactly two years ago, A.J. Bessette and Greg Meade wrote a great piece on using Monte Carlo simulation to solve the best-ball puzzle, aimed at helping you win your MFL10 leagues. There’s no doubt their groundbreaking work led to the popular, and successful, RB-heavy approach employed by many top best-ball drafters in 2014. In fact, their recommended start of RB-TE-RB-RB over the first four rounds was, on average, the 13th-best start among the 1,296 possible four-round combinations.

I italicized *on average* for a reason. To win an MFL10 you have to place first out of 12 teams, so I tend to look at upside rather than averages. There’s nothing wrong with averages, but I am also a GPP-oriented DFS player, so upside tends to be my personal focus. So rather than giving the roster combination that would, under the given assumptions, produce the best team on average, I’ll look at the top-scoring rosters among the individual simulations to give you an idea of which roster combinations have the most upside, and how often they land in that upside range.
Assumptions

Like A.J. and Greg, I’ll be using 2014-2015 data. (I will also have future articles with Monte Carlo simulations using data from multiple time frames, to get a better idea of which upside combinations emerge depending on the type of output we might see.) This allows us to capture the changing landscape of positional output in the NFL. I am also using the current MFL10 format, which is different from the format when they ran their original article in 2014.

Also like A.J. and Greg (and full credit to them for suggesting this), I’ve adjusted positional ADP to the current 2016 overall ADP, because WRs are now going earlier than their RB counterparts, and QBs are going later than ever. For example, the QB1 in 2015 went at a 19.54 overall ADP. If I simulated based on overall ADP, the model would treat this year’s QB1, Cam Newton, as approximately the QB3, diminishing his statistical output compared to what it would historically be for the QB1. But we know that’s not the case. Making the adjustment allows us to account appropriately for 2016 ADP.

I also incorporated zeroes for bye weeks and for missed games, which I assigned proportionally based on historic missed-game rates at each position, whether due to injury, suspension, or benching. I found that missed games do correlate with positional ADP, but much of that is likely because players with later ADPs may not have posted stats at all: they were more likely to be replaced on the depth chart, or were backups to begin with. I will have more on missed games in a future article.

The other, and most major, difference from their prior work is that I used a Bayesian version of Monte Carlo simulation to capture the uncertainty in the parameters of the optimal fit to the data. I also allowed the code to identify the appropriate distribution at each position when sampling weekly points by positional ADP. This lets the simulation pick up the appropriate upside by position.
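To make the simulation steps concrete, here is a minimal sketch of how a single player-season might be drawn. The gamma distribution, the parameter-uncertainty step, and all numeric values below are illustrative assumptions for demonstration, not the article’s actual fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_season(mean_pts, sd_pts, bye_week, miss_rate, weeks=16):
    """Draw one simulated season of weekly fantasy points (illustrative only)."""
    # Bayesian flavor: sample this simulation's "true" mean from its
    # uncertainty, rather than fixing it at the point estimate.
    mu = rng.normal(mean_pts, sd_pts / 4)
    # Convert mean/sd into gamma shape and scale for the weekly draws
    # (a stand-in for whatever distribution best fits each position).
    shape = (mu / sd_pts) ** 2
    scale = sd_pts ** 2 / mu
    pts = rng.gamma(shape, scale, size=weeks)
    pts[bye_week - 1] = 0.0                 # zero for the bye week
    missed = rng.random(weeks) < miss_rate  # zero for missed games, at an
    pts[missed] = 0.0                       # assumed position-level rate
    return pts

season = simulate_season(mean_pts=18.0, sd_pts=7.0, bye_week=9, miss_rate=0.10)
```

Repeating this draw many times per player, with the parameters themselves resampled each pass, is what lets the tails of each position’s distribution show up in the results.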
For example, here are the points scored above and below positional ADP expectation for QBs (on the left) and RBs (on the right), drafted in rounds 1-4 over the last two years.
Methodology

For the sake of time and computational power, I chose to start drafts from the third spot, which typically falls at the end of the WR trio of Antonio Brown, Julio Jones, and Odell Beckham Jr., and from the 1.09 spot (for no particular reason other than to have a late draft spot).

Finally, I am looking at upside. To do so, I am taking the top 8.33 percent of all my individual simulations and looking at the roster combinations that made up those simulations. I chose 8.33 percent because it equals 1/12, which is where you need to finish in an MFL10 to win. Anything else doesn’t matter.1 In other words, I’m not looking at what happens on average. Instead, I’m looking at which individual data points put me in the upper tail of the distribution. Here are the results.
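The top-8.33-percent selection step can be sketched as follows. The roster labels and score distribution here are made-up placeholders purely to show the mechanics of tallying which combinations appear in the upside tail:

```python
from collections import Counter

import numpy as np

rng = np.random.default_rng(1)

# Made-up example: each simulation produces a season total and the
# first-four-round roster combination that generated it.
combos = ["RB-TE-RB-RB", "WR-WR-RB-TE", "WR-RB-WR-RB", "RB-RB-WR-WR"]
n_sims = 12_000
rosters = rng.choice(combos, size=n_sims)
totals = rng.normal(1600, 120, size=n_sims)

# Keep only the top 1/12 (8.33 percent) of simulations -- the "win" region.
cutoff = np.quantile(totals, 1 - 1 / 12)
winners = rosters[totals >= cutoff]

# Tally how often each combination appears in the upside tail.
counts = Counter(winners)
for combo, n in counts.most_common():
    print(combo, n / len(winners))
```

With real simulation output, the frequencies printed at the end would tell you which roster constructions most often produce a league-winning season total, rather than the best average.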
1. Okay, second place matters to a degree, but the big prize is winning. (back)