The question I want to ponder for a moment is whether that's a meaningful distinction.
As a mental exercise, imagine that we decided to create a rating system for the sport of competitive lifting. Based upon how much weight competitors lifted in various competitions, we'd assign them a rating that would reflect their weightlifting ability. What would this rating represent? Most people would say that it represents (or measures) the "strength" of the competitor.
That seems like a silly exercise for weightlifting, because we already have a direct measure of the competitors' strength -- how much they lifted. It makes more sense for sports like basketball, where we understand that the final score doesn't directly measure a team's "basketball strength" but is instead a complicated function of the two teams' strengths and factors like the officiating crew, the venue, and so on. The rating is intended to tease out the hidden variable -- the team's basketball strength -- which cannot be measured directly. So a rating represents a team's basketball strength, and usually a higher number represents more strength.
Now let's return to the distinction between predictive ratings and achievement ratings.
There's an easy and intuitive way to assess a predictive rating: we test its ability to predict future games. A rating that does that better is a better predictive rating. An achievement rating looks backward rather than forward, so we should assess it by how well it predicts past games. A rating that does that better is a better achievement rating.
Here's the rub, though: those are the same test! The more accurately a rating reflects the true "basketball strength" of a team, the better it will perform at predicting all of the team's games -- whether they have already occurred or are in the future.
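To make the point concrete, here's a minimal sketch of what "assessing a rating by prediction accuracy" might look like. The names and the simple "higher rating wins" rule are my own illustrative assumptions, not anything from Monte's post -- the point is just that the scoring function is identical whether the games it's fed are past or future:

```python
def predict_winner(ratings, team_a, team_b):
    """Predict the winner as the team with the higher rating."""
    return team_a if ratings[team_a] >= ratings[team_b] else team_b

def prediction_accuracy(ratings, games):
    """Fraction of games whose winner the rating picks correctly.

    Note that nothing here cares whether a game has already been
    played or is yet to come -- the same computation scores both,
    which is exactly the argument in the text.
    """
    correct = sum(
        1 for team_a, team_b, winner in games
        if predict_winner(ratings, team_a, team_b) == winner
    )
    return correct / len(games)

# Hypothetical ratings and results, purely for illustration.
ratings = {"Indiana": 1650, "Michigan": 1540, "Butler": 1500}
past_games = [("Indiana", "Butler", "Indiana"),
              ("Michigan", "Butler", "Michigan")]
print(prediction_accuracy(ratings, past_games))  # prints 1.0
```

A "better achievement rating" and a "better predictive rating" are both just ratings that push this number higher on their respective sets of games.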
Monte also argues in this posting that:
"When assessing how well a team has played over a season, the only factors that should come into play are: (1) how often did you win and (2) how difficult was your schedule."

I think there's a simple counter-example to this notion. Imagine that going into the last game of conference play, Indiana and Michigan have played exactly the same schedule of opponents, and they're about to play each other. They each played Butler in the third game of the season, but I won't tell you how that game came out. In all the other games, Indiana beat each opponent by at least 12 points, while Michigan never won by more than 6 points and went to OT in three of the games.
Now I ask you two questions: (1) Who was more likely to have won against Butler when they played early in the season? and (2) Who is more likely to win when they play each other tonight on a neutral floor?
My guess is that almost everyone would answer Indiana to both questions -- which means that Indiana should be rated higher than Michigan. Regardless of whether you're trying to assess what a team has already achieved or how it might perform in a future game, how a team wins (or loses) a game is very important.
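The Indiana/Michigan thought experiment can be sketched in a few lines. The margins and the "average scoring margin" rating below are my own illustrative assumptions (not Monte's method or any real system); they just show that a wins-and-schedule-only rating sees the two teams as identical, while any rating that looks at margins separates them:

```python
# Hypothetical data: identical records against the same schedule,
# differing only in margin of victory.
indiana_margins  = [12, 14, 13, 15, 12]  # won every game by at least 12
michigan_margins = [6, 3, 1, 1, 2]       # never won by more than 6

def wins_only_rating(margins):
    """A wins-and-schedule-only view: count the wins.

    (Schedule strength is equal here by construction, so it drops out.)
    """
    return sum(1 for m in margins if m > 0)

def margin_rating(margins):
    """A margin-aware view: average scoring margin."""
    return sum(margins) / len(margins)

# The wins-only rating cannot tell the teams apart...
print(wins_only_rating(indiana_margins) == wins_only_rating(michigan_margins))  # prints True
# ...but the margin-aware rating ranks Indiana higher.
print(margin_rating(indiana_margins) > margin_rating(michigan_margins))  # prints True
```

Any rating built only from (1) and (2) collapses the distinction that made almost everyone answer "Indiana" to both questions.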
Of course, you may reject the notion that ratings should reflect a team's "basketball strength". But then I challenge you to express clearly what a rating should mean. I think you'll find it very hard to come up with a meaningful definition that doesn't come back to being an accurate measure of a team's strength.