Tuesday, April 19, 2011


So far, we have explored various shortcomings of the RPI: home court advantage, averaging, and the distribution of the elements.  We've found a few tweaks that have improved the performance of RPI as a predictor.  We turn now to yet another potential area of improvement: the depth of evaluation.

Recall the (revised) formula for RPI:
RPI = 0.23*WP + 0.23*OWP + 0.54*OOWP
The last two terms of this formula can be thought of as a measure of a team's "Strength of Schedule" expressed as the winning percentage of a team's opponents and their opponents.  RPI arbitrarily stops evaluating this "Strength of Schedule" term at two levels.  Does extending this to more levels (e.g., OOOWP) add any predictive value?
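To make the terms concrete, here is a minimal sketch of the RPI calculation from a list of game results. This is an illustrative simplification, not the post's actual code: unlike the official RPI it does not remove games against the rated team when computing opponents' winning percentages, and the weights are left as a parameter.

```python
from collections import defaultdict

def rpi(games, weights=(0.25, 0.50, 0.25)):
    """Compute a simplified RPI for each team from (winner, loser) pairs.

    weights are (WP, OWP, OOWP) coefficients -- a parameter here, since
    the post experiments with different weightings.
    """
    wins, losses = defaultdict(int), defaultdict(int)
    opps = defaultdict(list)          # team -> list of opponents, one per game
    for w, l in games:
        wins[w] += 1
        losses[l] += 1
        opps[w].append(l)
        opps[l].append(w)
    teams = set(opps)

    def wp(t):
        # Team's own winning percentage
        g = wins[t] + losses[t]
        return wins[t] / g if g else 0.0

    def owp(t):
        # Average of opponents' winning percentages
        return sum(wp(o) for o in opps[t]) / len(opps[t])

    def oowp(t):
        # Average of opponents' OWPs
        return sum(owp(o) for o in opps[t]) / len(opps[t])

    a, b, c = weights
    return {t: a * wp(t) + b * owp(t) + c * oowp(t) for t in teams}
```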

The answer turns out to be yes and no.  With a formula of approximately:
RPI = 7*WP + 7*OWP + 7*OOWP + OOOWP
we get a performance of:

  Predictor              % Correct    MOV Error
  RPI (unw, 15+15+70)    75.4%        11.49
  RPI (+oowp)            74.6%        11.36

This reduces the MOV Error (11.49 to 11.36) but doesn't improve % Correct; in fact, % Correct drops slightly (75.4% to 74.6%).

So extending the depth of RPI another step provides at least some value.  This raises the natural question: is there value in extending it yet another step?  ...and another step?
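The pattern generalizes readily: each additional level is just another round of averaging over opponents. A hypothetical helper (not from the post) that computes the depth-k term for any k might look like:

```python
def depth_wp(team, k, wp, opps):
    """Depth-k opponents' winning percentage.

    Depth 0 is the team's own WP, depth 1 is OWP, depth 2 is OOWP,
    depth 3 is OOOWP, and so on.  `wp` maps team -> winning percentage;
    `opps` maps team -> list of opponents (one entry per game).
    """
    if k == 0:
        return wp[team]
    # Average the depth-(k-1) values of this team's opponents
    return sum(depth_wp(o, k - 1, wp, opps) for o in opps[team]) / len(opps[team])
```

With this, the OOOWP term above is just `depth_wp(team, 3, wp, opps)`, and exploring deeper levels is a matter of raising k.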

While we could certainly explore those possibilities manually by calculating OOOOWP, etc., it's perhaps better to cut to the chase and ask whether we can extend the depth of RPI infinitely, and see what predictive value that has.  It may seem counter-intuitive, but it is possible to extend RPI to an "infinite" depth; it just requires a different computational approach.
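One natural way to formalize "infinite depth" (an assumption for illustration, not necessarily the approach the post eventually takes) is a geometrically weighted sum of all the depth-k terms. Rather than computing each level separately, the whole sum can be found as the fixed point of r = (1-decay)*WP + decay*AvgOpponents(r), iterated until it converges:

```python
def infinite_depth_rating(wp, opps, decay=0.5, tol=1e-12):
    """Fixed-point sketch of an 'infinite depth' rating.

    Equivalent to (1 - decay) * sum over k of decay**k * depth_k_wp,
    i.e. every depth contributes, with geometrically shrinking weight.
    `decay` and the weighting scheme are illustrative assumptions.
    """
    r = dict(wp)                      # start from each team's own WP
    while True:
        new = {t: (1 - decay) * wp[t]
                  + decay * sum(r[o] for o in opps[t]) / len(opps[t])
               for t in r}
        # The update is a contraction (factor `decay`), so this converges
        if max(abs(new[t] - r[t]) for t in r) < tol:
            return new
        r = new
```

Note that a plain, unweighted iteration of opponent-averaging would wash out to the same value for every team in a connected schedule, which is why some form of weighting or a different formulation is needed to make "infinite depth" meaningful.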


  1. Extending the depth infinitely sounds like an approach closer to the Colley Matrix.

  2. @Probable: Thanks for that pointer, I hadn't seen that. There are several different approaches to "infinite depth" ratings and I'll hit some of them after I'm done bashing RPI. I'll add the Colley Matrix to the list. At a quick glance it looks reasonable to implement.

