First, I developed and quickly discarded PM 1.1. It used the season's win/loss records and adjusted the value of each win based on the quality of the opponent, using a basic Elo chess rating format. I then included a factor for home-field advantage and gave the world PM 1.11.
EP(w) = 1/(1+10^((opp rating-team rating+x*Home)/400))
EP(w) = the estimated probability of a win
x = the home adjustment factor
Home = location (1=home, -1=away)
Each team starts the season with a rating of 1200 (Div 1AA teams enter each game at 800) and is then adjusted based on its performance in each game. The adjustment formula is:
team rating(t) = team rating(t-1) + k*(X(w)-EP(w))
X(w) = the outcome of the game (1=win, 0=loss)
k = the adjustment factor
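The two formulas above can be sketched in Python. Note that under the sign convention as written, a negative x is what boosts the home team's odds; the value -65.0 below is purely an illustrative guess, not a parameter from the post.

```python
def expected_win_prob(team_rating, opp_rating, home, x=-65.0):
    """EP(w) = 1 / (1 + 10^((opp rating - team rating + x*Home)/400)).

    home is 1 for a home game, -1 for away. With the exponent written
    this way, a negative x raises the home team's expected probability;
    x=-65.0 is an illustrative guess, not the author's value.
    """
    return 1.0 / (1.0 + 10.0 ** ((opp_rating - team_rating + x * home) / 400.0))


def update_rating(team_rating, opp_rating, won, home, k=300.0, x=-65.0):
    """team rating(t) = team rating(t-1) + k*(X(w) - EP(w)),
    where X(w) is 1 for a win and 0 for a loss."""
    ep = expected_win_prob(team_rating, opp_rating, home, x)
    return team_rating + k * ((1.0 if won else 0.0) - ep)
```

For two evenly matched 1200-rated teams, the home side's expected probability comes out a bit above 0.5, and a loss still costs it the full k times that expectation.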
k can be adjusted to give greater weight to certain games. You might want to place more importance on games against better opponents, such that k(opp rating) = f(opp rating)*l. Or you could put more emphasis on losses, or wins, or home games, or games late in the season, etc.
For this week, I have opted for a k of 300. Chess uses values of 32 and 16, but football seasons are not long enough for that kind of adjustment to matter. A larger value also places greater emphasis on a team's recent performance, and in practice the k value had very little impact on the model's accuracy.
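An opponent-weighted k of the form k(opp rating) = f(opp rating)*l might look like the sketch below. The linear choice of f and the multiplier l = 0.25 are purely illustrative (they happen to reproduce the flat k of 300 at the 1200 baseline), not anything specified in the post.

```python
def k_by_opponent(opp_rating, l=0.25):
    """Illustrative k(opp rating) = f(opp rating) * l with a linear f:
    beating a stronger opponent moves ratings more. With l = 0.25,
    a 1200-rated opponent yields the flat k of 300 used this week."""
    return opp_rating * l
```

The same pattern extends to the other weightings mentioned: a function of the outcome (heavier k on losses), of the venue, or of the week number.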
First, let's review its past performance. I didn't bother with week 1 because the ratings at that point were random.
week 2 = 59.09%
week 3 = 65.22%
week 4 = 60.55%
week 5 = 69.91%
week 6 = 61.82%
I have two major concerns with these results. First, they aren't very good. Second, they are consistent: theoretically, the model's predictive power should increase through the season--unless teams' performances vary more than this model assumes. To the model's credit, it actually predicts a team's odds of winning and is, in this sense, more accurate than it is at picking the winner outright.
And the ratings for the first six weeks produce the following top 25:
[Top 25 table omitted; the one surviving row: 15. Brigham Young University]
Illinois got a big boost and Wisconsin took a fall after the Illini completed the un-upset last week (did anyone outside of Badger nation think Wisconsin would win that game?). BYU, Texas A&M and Maryland are surprises in the top 25--though as a BYU grad, a U of Maryland student and lifelong Aggie I can't disagree--but the biggest shocker, in my opinion, is the Jayhawks breaking the top 10.
There are three prominent exclusions from this top 25. West Virginia had been in the top 10 for the two weeks prior to their loss to South Florida, and the Syracuse game did not help them climb back into the top 25--the Elo chess model adjusts only for the difference between a team's odds of winning and the actual game outcome, so a win over an overmatched opponent earns almost nothing. This is, obviously, a flaw in the system.
But PM 1.11 does have the wisdom to recognize that a bad Texas team, which has lost two straight and barely escaped against Arkansas State and Central Florida, has no place in the top 25. PM 1.11 places them at a more deserving 64.
USC dropped 34 spots to 41. PM 1.11 had given them a 95.3% chance of winning, and anyone, even Harbaugh himself, who claims to have believed Stanford would pull that off has got to be an idiot. But USC lost with an odds differential of -0.953, which means their rating dropped almost 300 points in one week.
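The size of that drop follows directly from the update rule with k = 300:

```python
k = 300.0
ep_usc = 0.953           # PM 1.11's pre-game win probability for USC
outcome = 0.0            # X(w) = 0: USC lost to Stanford
delta = k * (outcome - ep_usc)
print(round(delta, 1))   # -285.9, i.e. "almost 300 points"
```

This is the flip side of a large k: one huge upset can wipe out most of a season's accumulated rating.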
On a side note, it's a bad sign for college football as a whole this year that Ohio State is contending for a national championship. No offense to the Buckeyes, of whom I can name one or two at most, but they have half the team they had a year ago. LSU is a very good team, but they needed a heroic effort to beat a rebuilding Florida squad.
As for this week's picks, PM 1.11 is pretty conservative. It favors the higher-ranked team (in the national polls, not the PM 1.11 poll) in every case except one: it gives Missouri a 70% chance against Oklahoma. Aggie fans might want to get excited. If they win (and PM 1.11 gives them a 92% chance at Texas Tech) and Oklahoma does lose, they will have a two-game lead on the rest of the South.