Sabermetrics, Stabilization Rates, and Regression to the Mean

Fantasy Douche wrote a great post the other day asking whether Jarvis Landry being good even matters. It reminded me of something I think about from time to time: the conclusions that can be drawn from baseball statistics, and how much harder that task is in football. It's interesting to consider how data is compiled in the two sports.

Baseball is largely a series of one-on-one interactions. Pitcher pitches. Hitter hits. Rinse and repeat. Each hitter comes up four or five times a game, 162 games a year. Controlling for the external variables that can affect the outcome of that one-on-one interaction seems, if not easy, at least doable.

While a 16-game stretch of good or bad play is usually written off as a small sample in baseball — a hot streak or a slump — in football, it's all we have. In the offseason, players change teams, maybe a quarter of the coaches in the league are replaced, and it becomes nearly impossible to control for the same factors we could in the previous season. Maybe one way to think about this is that a player's career is divided into a series of 16-game splits, not much more predictive in nature than smaller splits taken from within a season. Of course, it's easy to place significantly more weight on the 16-game splits because they amount to the defined measure that is a season. But let's look at some baseball numbers and see if that's really a logical way to think of stats.
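To see how much noisier a 16-game sample is than a 162-game one, here's a minimal simulation sketch. It assumes a hypothetical player with a fixed "true talent" success rate of .300 and treats games as independent trials (a simplification — real games aren't single trials), then measures how widely observed rates spread at each sample size:

```python
import random
import statistics

def sample_rates(true_rate, n_trials, n_samples, rng):
    """Simulate n_samples seasons of n_trials attempts each and
    return the observed success rate from each simulated season."""
    return [
        sum(rng.random() < true_rate for _ in range(n_trials)) / n_trials
        for _ in range(n_samples)
    ]

rng = random.Random(42)
TRUE_RATE = 0.300  # hypothetical "true talent" level, chosen for illustration

# A 16-game football season vs. a 162-game baseball season,
# treated here simply as sample sizes.
short = sample_rates(TRUE_RATE, 16, 10_000, rng)
long_ = sample_rates(TRUE_RATE, 162, 10_000, rng)

print(f"spread of observed rates over 16 trials:  {statistics.stdev(short):.3f}")
print(f"spread of observed rates over 162 trials: {statistics.stdev(long_):.3f}")
```

The spread over 16 trials comes out roughly three times wider than over 162 — which is why a baseball analyst shrugs off a 16-game stretch while a football analyst has no choice but to lean on one.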

By Ben Gretch | @YardsPerGretch | Archive
