The Offense-Defense model is a version of this I first implemented for the 2010 March Madness Predictive Analytics Challenge. It's a fairly simple model. For each team, for each game, it predicts a score based upon the current Offense and Defense ratings. It then determines the error between the prediction and the actual game result, and adjusts the appropriate Offense and Defense ratings to remove 75% of the error. It then iterates this across all the games for a fixed number of iterations. (This algorithm isn't guaranteed to converge, although in practice it usually does.)
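As a sketch of how this might look in code — using the multiplicative scoring rule (Offense times opposing Defense) described for the home/away variant below, and with the initial ratings and the even split of the correction between Offense and Defense being my assumptions:

```python
from collections import defaultdict

def offense_defense(games, iterations=20, removal=0.75):
    """Fit multiplicative Offense/Defense ratings iteratively.

    `games` is a list of (team_a, team_b, score_a, score_b) tuples.
    A team's predicted score against an opponent is its Offense
    rating times the opponent's Defense rating.  After each game we
    move the prediction `removal` (75%) of the way toward the actual
    score, then repeat for a fixed number of iterations.
    """
    # Starting every rating at 8.0 is an arbitrary assumption; any
    # positive starting point works for this multiplicative scheme.
    offense = defaultdict(lambda: 8.0)
    defense = defaultdict(lambda: 8.0)
    for _ in range(iterations):
        for a, b, score_a, score_b in games:
            for scorer, opp, actual in ((a, b, score_a), (b, a, score_b)):
                pred = offense[scorer] * defense[opp]
                target = pred + removal * (actual - pred)
                # Split the multiplicative correction evenly between
                # the scorer's Offense and the opponent's Defense --
                # the even split is my assumption.
                factor = (target / pred) ** 0.5
                offense[scorer] *= factor
                defense[opp] *= factor
    return offense, defense
```

With a single game, each pass removes 75% of the remaining error, so the prediction converges quickly; with a full schedule, updates to shared ratings interact, which is why convergence isn't guaranteed in general.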

Testing this algorithm with our usual methodology gives these results (for comparison, I show the best non-MOV predictor as well):

Predictor | % Correct | MOV Error |
---|---|---|
TrueSkill + iRPI | 72.9% | 11.01 |
Offense-Defense | 69.6% | 11.84 |

This performance (with some adjustments) was enough to win the 2010 Challenge, but it is disappointing in comparison to the TrueSkill + iRPI performance. In particular, we might expect ratings based upon MOV to have lower MOV error, but that is not the case here. I also implemented a version of the Dick Vitale methodology, calculating separate home and away ratings for every team. In this case, the home team's predicted score is its home Offense times the away team's away Defense (and vice versa for the away team). Here's how that performs:

Predictor | % Correct | MOV Error |
---|---|---|
TrueSkill + iRPI | 72.9% | 11.01 |
Offense-Defense (home & away) | 68.2% | 12.26 |

Surprisingly (at least to me), this is significantly worse than the undifferentiated ratings. Perhaps this is additional evidence that teams don't actually play differently at home than on the road; home court advantage would then be due primarily to the referees -- a conclusion shared by Sports Illustrated.
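For concreteness, the split-rating scheme changes only the prediction step. A minimal sketch (the dictionary names and the example ratings are hypothetical):

```python
def predict_scores(home, away, home_off, home_def, away_off, away_def):
    """Predicted final score under split home/away ratings.

    `home_off`/`home_def` map each team to its ratings for home games;
    `away_off`/`away_def` hold its road-game ratings.  (These names
    are my own; they are not from the original implementation.)
    """
    home_score = home_off[home] * away_def[away]  # home Offense x away Defense
    away_score = away_off[away] * home_def[home]  # and vice versa
    return home_score, away_score

# Hypothetical ratings: Duke at home hosting UNC.
predict_scores("Duke", "UNC",
               home_off={"Duke": 9.0}, home_def={"Duke": 8.5},
               away_off={"UNC": 8.0}, away_def={"UNC": 9.5})
# → (85.5, 68.0)
```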

The second model I tested is the "Probabilistic Matrix Model" (PMM). This model is based upon the code Danny Tarlow released for his tournament predictor, which he discusses here. This is similar in spirit to the Offense-Defense model, if much more sophisticated mathematically. (You can tell this because the code has variables like s_hat_i in it.) Testing PMM gives these results:

Predictor | % Correct | MOV Error |
---|---|---|
TrueSkill + iRPI | 72.9% | 11.01 |
Offense-Defense | 69.6% | 11.84 |
PMM | 71.7% | 11.23 |

The PMM does better than my naive Offense-Defense model (apparently there's something to all that math stuff) but still does not approach the performance of TrueSkill + iRPI. I did not implement separate home & away ratings for it, and given that they hurt the Offense-Defense model, there's no reason to think they would improve performance here.
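Tarlow's released code is the authoritative reference for the PMM. Purely to illustrate the general latent-factor shape such a model can take — the vector dimension, squared-error loss, regularization, and initialization below are all my assumptions, not a reconstruction of his code — here is a gradient-descent sketch:

```python
def fit_pmm(games, dim=2, steps=500, lr=0.005, reg=0.1):
    """Sketch of a latent-factor score model in the spirit of the PMM.

    Each team gets an offense vector and a defense vector; a team's
    predicted score against an opponent is the dot product of its
    offense vector with the opponent's defense vector.  Ratings are
    fit by gradient descent on squared error with L2 regularization.
    (These modeling choices are illustrative assumptions only.)
    """
    teams = {t for g in games for t in g[:2]}
    # Initialize every component so the starting prediction equals the
    # league-average score -- an assumed heuristic, not from the model.
    mean = sum(g[2] + g[3] for g in games) / (2 * len(games))
    init = (mean / dim) ** 0.5
    off = {t: [init] * dim for t in teams}
    dfn = {t: [init] * dim for t in teams}
    for _ in range(steps):
        for a, b, score_a, score_b in games:
            for scorer, opp, actual in ((a, b, score_a), (b, a, score_b)):
                o, d = off[scorer], dfn[opp]
                err = sum(x * y for x, y in zip(o, d)) - actual
                for k in range(dim):
                    grad_o = err * d[k] + reg * o[k]
                    grad_d = err * o[k] + reg * d[k]
                    o[k] -= lr * grad_o
                    d[k] -= lr * grad_d
    return off, dfn

def pmm_predict(off, dfn, team, opp):
    """Predicted score for `team` against `opp`."""
    return sum(x * y for x, y in zip(off[team], dfn[opp]))
```

The extra latent dimensions let the model capture matchup effects a single scalar rating cannot, which is one plausible reason the real PMM outperforms the naive Offense-Defense model.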