The game against Wisconsin was predicted to be the closest of the season so far - the median NU prediction, which I calculate by combining the median predictions of each team's score, was 31-31, with the median predicted margin of victory at NU +3. Oops. Predictors from Bucky's 5th Quarter and Madtown Badgers did relatively well, but still pretty poorly in an absolute sense.
The big takeaway from the Week 7 rankings is that I need to find a new way to deal with missing predictions. Not making a prediction should never yield a lower MSE than making a prediction, but missing a week shouldn't permanently relegate someone to the bottom of the season rankings. Let me know if you have any suggestions that satisfy those criteria. I may just stop including the three consistently missing predictors.
In the season rankings, this week sees three new entrants into the top five, as Chris Johnson, Lowes Line, and Adam Rittenberg replace Josh Rosenblot, MNWildcat, and Rodger.
This shakeup at the top is mostly due to the measure I'm using to rank predictors. To recap briefly, I'm using mean squared error, which I calculate by finding the difference between predicted and realized values of Northwestern's score, the opponent's score, and the margin of victory, squaring those differences, and averaging them. The "squared" part of mean squared error means this approach severely punishes big misses - squaring 2 or 3 gets you 4 or 9, but squaring 20 or 30 gets you 400 or 900. In other words, using MSE makes being off by an additional point hurt more the further one is from being correct.
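To make the calculation concrete, here's a minimal sketch of the MSE described above. The tuple-based interface and the example scores are my own illustration, not the actual prediction data:

```python
def mse(predicted, realized):
    """Mean squared error over NU's score, the opponent's score, and the margin.

    `predicted` and `realized` are (nu_score, opp_score) tuples;
    the margin is derived as NU score minus opponent score.
    """
    pred = (predicted[0], predicted[1], predicted[0] - predicted[1])
    real = (realized[0], realized[1], realized[0] - realized[1])
    squared_errors = [(p - r) ** 2 for p, r in zip(pred, real)]
    return sum(squared_errors) / len(squared_errors)

# A hypothetical 31-31 call against an actual 35-21 NU win:
# squared errors are 16, 100, and 196, so the MSE is 104.
print(mse((31, 31), (35, 21)))
```

Note how the margin miss alone (14 points, squared to 196) contributes more than the two score misses combined - that's the penalty for big misses at work.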
Although that's arguably a desirable feature for a ranking system to have, it's not necessary. For example, I could also rank predictors according to least absolute deviations (LAD). To calculate a predictor's LAD, I find the same differences between predicted and realized values I used for MSE, but instead of squaring and averaging them, I take their absolute values and sum them. Under LAD, being off by an additional point is the same amount of bad, regardless of how wrong the prediction was initially.
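The LAD version is the same calculation with the squaring swapped out for an absolute value. Again, the interface and example numbers are just mine for illustration:

```python
def lad(predicted, realized):
    """Sum of absolute deviations over NU's score, the opponent's score, and the margin.

    `predicted` and `realized` are (nu_score, opp_score) tuples;
    the margin is derived as NU score minus opponent score.
    """
    pred = (predicted[0], predicted[1], predicted[0] - predicted[1])
    real = (realized[0], realized[1], realized[0] - realized[1])
    return sum(abs(p - r) for p, r in zip(pred, real))

# The same hypothetical 31-31 call against an actual 35-21 NU win:
# deviations are 4, 10, and 14, so the LAD is 28.
print(lad((31, 31), (35, 21)))
```

Under LAD the 14-point margin miss counts exactly 14, no more - which is why a predictor with one blowout miss can rank much better under LAD than under MSE.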
Prior to this week, the season rankings under MSE and LAD moved pretty much in tandem: after each week the order was mostly the same across the two systems, save for a few predictors shifting one or two spots. This week, however, some bigger differences between the MSE and LAD season rankings emerge. The table below shows predictors ranked by MSE as above, but with values for LAD and rankings according to LAD appended. Looking at this table together with the Week 7 table above, you can really see the penalty for big misses at work.