#3 on Lindy's Five Essential Websites (Non-Major Media) for 2013

Monday, August 31, 2009

This week's games: Oregon at Boise State

I am posting two sets of graphs to compare teams that will be playing this week. One set shows how the teams have performed from 1994 to 2008 using the Matrix Hybrid rating. The second shows the trend-O-meter rating from 2008. Enjoy.


Boise State and Oregon have been fairly good and fairly even the last few years. Both teams improved substantially over the course of last season as they broke in new quarterbacks. Boise State caught Oregon at a low point last season and won't catch them down again this year, but the game is in Boise, few are better at home than the Broncos, and Boise needs the win more.

Saturday, August 29, 2009

The Myth of Home Field Advantage

Complete Home Field Advantage Statistics

About a year ago, in my most widely read and discussed post to date, I detailed the hard facts of home field advantage. I showed that it was small, isolated stadiums that gave their teams the most boost on the scoreboard, not the rocking behemoths that we love so much. But some people just couldn't handle the truth. I now return to the topic to show how I was right and they were wrong (so suck it, Trebek) . . . but also how I was wrong and they were right, as Obi-Wan would say, from a certain point of view.

First, we need to cover some facts. Since 1994, when playing FBS opponents, home teams have won 60% of the time and have outscored their opponents by an average of about 10.5 points. In part this is because home teams are more often the better teams: lesser programs often take paychecks to travel to bigger programs, so the home side wins more often regardless of venue.

Home field advantage, though, is very real. On average, home field advantage is about 3.5 points (specifically, from 1994 to 2008 it was 3.500949). In other words, the home team could expect to do 3.5 points better on average playing at home than at a neutral site against the same team. That means a 7-point swing between playing at home and playing at someone else's home, exactly 2/3 of the average margin of victory for home teams (10.5). The other 1/3 is because Louisiana-Monroe goes to Alabama and not vice-versa (oh, wait, bad example--suck it, Saban).
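
The arithmetic in that paragraph is easy to check. Here's a quick sketch, using the figures quoted above:

```python
# Back-of-the-envelope check of the HFA arithmetic above.
hfa = 3.5            # average home field advantage, in points
home_margin = 10.5   # average margin of victory for home teams

# Moving a game from your opponent's stadium to your own swings the
# expected margin by twice the HFA: you gain 3.5 and they lose 3.5.
swing = 2 * hfa                       # 7.0
share_from_hfa = swing / home_margin  # about 2/3

print(swing, round(share_from_hfa, 3))  # 7.0 0.667
```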

To understand HFA, we first look at the point differential (PD), the difference between the average margin of victory at home and on the road. Again, this is not my opinion; this is data. Over this period, Arkansas State has lost home games by an average of 1 point but has lost road games by an average of 20, for a differential of 19. The highest-ranked BCS team is Texas A&M at 10, and there are only 7 BCS teams in the top 25.

This, of course, does not actually measure HFA because it does not account for the strength of schedule. For example, Arkansas State's average home opponent was about 12.4 points worse than its average road opponent, so when we take that into account we see that Arkansas State had a 6.8-point HFA, 13th best in the country.
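
The adjustment can be sketched in a few lines. Note that the inputs here are the rounded figures quoted above, so the result comes out near 6.6 rather than the 6.8 the unrounded data produce:

```python
# Strength-of-schedule correction for raw point differential, using the
# rounded Arkansas State figures from the text.
avg_home_margin = -1.0    # loses home games by about 1 point
avg_road_margin = -20.0   # loses road games by about 20 points
point_differential = avg_home_margin - avg_road_margin  # 19.0

# Home opponents were about 12.4 points worse than road opponents,
# which inflates the raw differential; subtract that schedule gap.
opponent_gap = 12.4
adjusted_hfa = point_differential - opponent_gap

print(point_differential, round(adjusted_hfa, 1))  # 19.0 6.6
```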

After accounting for strength of schedule, Boise State and Hawaii come out on top. Oklahoma State is at 4, and Texas A&M and Texas Tech are at 8 and 9. Beaver Stadium comes in just a hair below Arkansas State at 14. You have to go down to 39, with Florida, before you find an SEC team.

These are hard, undeniable facts, but there is more to football than point margins. Arkansas State has a real home field advantage, but getting less plastered at home is nothing to write home about.

So I decided to measure HFA as the oomph that helps a team win at home when it would lose on the road. This measure is a bit more technical, but the results are also a bit more satisfying. Interpreting the numbers is just about impossible, but the most important thing to remember is that teams with a larger number have won more games at home that they would have lost on the road than teams with smaller numbers have.
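
The post doesn't spell out the formula behind this measure, but the general idea can be illustrated with a logistic win-probability model, where home field shifts the probability of winning rather than the point margin. Everything here (the function, the scale, the coefficients) is made up for illustration; it is not the rating actually used:

```python
import math

def win_prob(rating_gap, at_home, scale=0.15, home_coef=0.5):
    """P(win) from the point-rating gap plus a home-field bump.

    All parameter values are illustrative, not fitted.
    """
    z = scale * rating_gap + (home_coef if at_home else -home_coef)
    return 1.0 / (1.0 + math.exp(-z))

# A team 2 points worse than its opponent: a likely loser on the road,
# but better than a coin flip at home. The gap between these two
# probabilities is the "oomph" this kind of measure tries to capture.
print(round(win_prob(-2.0, at_home=False), 2))  # 0.31
print(round(win_prob(-2.0, at_home=True), 2))   # 0.55
```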

Texas Tech is number 1, as Longhorn fans know all too well. Texas is 12, which might come in handy when they are looking for revenge against the Red Raiders this year. Florida State is at 3, showing the superiority of the tomahawk chop over the gator chomp, which comes in at 14. Despite the long home winning streak at Kyle Field in the 90's, Texas A&M drops to 26.

In summary, home field advantage means different things at different times. It helps almost all teams put more points on the board than their opponents (with the exception of Navy), and this characteristic of home field advantage seems to have less to do with big stadiums and raucous crowds than we might think. On the other hand, home field advantage helps some teams win when they might otherwise have lost. It might not show up in gaudy numbers, but Nebraska is able to win games in Lincoln that it would have lost somewhere else. And at the end of the day, that's what really matters. And Georgia plays better and is more likely to win on the road. Go figure.

Wednesday, August 26, 2009

Ranking high: scientific proof that preseason polls matter

This post will be a bit technical, but bear with me. I have argued before (rather convincingly, I think) that preseason polls are somewhat effective at predicting the eventual national champ. This raises the question: do preseason polls just predict the final rankings, or do they actually influence them?

Those who argue that preseason and postseason polls are independent say that any correlation between the two shows that pollsters made some good guesses about which teams would be good and which wouldn't. Florida might not finish #1 in 2009, but I can guarantee that they'll finish in the top ten. It's also possible that the relationship is spurious: voters put Notre Dame too high and Utah too low at all times, be it pre-, post- or mid-season.

Those in the other camp, though, point to the stepwise fashion in which teams move in the polls. It is usually controversial for a team to jump another team that also won that week, and therefore those teams that start on top have an advantage over those that need to jump them. It can also be hard to get noticed if you start outside of the top 25. Consequently, preseason poll results improperly influence the final outcome.

I also think we should not underestimate the importance of the pernicious disease I call Neuheiselitus. Much like Eli Manning or Mall Cop, people can't seem to figure out that Rick Neuheisel isn't actually good at coaching football. It often takes a while for pundits to realize that some talented teams with high expectations aren't any good. On the other end of the spectrum is Applewhiteocious-just because they couldn't find a helmet that didn't cover his eyes didn't mean Major Applewhite wasn't twice the quarterback that Chris Simms could ever hope to be, and yet he had a hard time staying on the field. This is alternatively called Flutiecoccus and is now plaguing Hyundai and Canadian bacon.

Who's right? To answer that question, I used regression to estimate the importance of different factors: win/loss record, strength of schedule, national prestige, and, of course, preseason ranking. Basically, by taking into account other factors that can influence a team's final ranking, I can isolate the unique influence of preseason polls on postseason results.

I've used data from 1994 to 2008 from AP Poll Archive. I first used regression to predict the final rankings using only the win/loss records and the strength of schedule. In the blue box, you see the R-squared is .78; this means that using just these factors we can very accurately predict the final rankings. The green box shows the strength of the effects. Each win moves a team up the polls (closer to number 1) by 1.6 on average, and each loss moves a team down 3.4. That should seem about right. A tougher schedule also moves a team up in the polls; no surprise there.

Next, I add prestige factors: total wins for the program, national championships, and whether or not the team is in a BCS conference. Of these, only being in a BCS conference really matters (if the number below P>|t| is above .05, the factor is not significant). On average, a team in a BCS conference will finish about 5 spots higher than a team outside a BCS conference with the same record and strength of schedule. Figures.

Next, I add general measures of the team's performance. The PerfRating is based on margin of victory, and the EloRating only on win/loss record (like those used for the BCS computer rankings). The EloRating is not significant because it measures the same thing as the win/loss record and strength of schedule, but the PerfRating is important.

Finally, I add the preseason rankings. You will first notice that the P>|t| value is below .05, which means that preseason polls have a real influence on postseason polls. In other words, the results in the final rankings would be different if we didn't do preseason polling. But before we get too excited, it is important to also look at the coefficient (.0539). This means that two equal teams with the same performance and backgrounds would finish about one spot apart in the final poll if they started 20 spots apart. So, while preseason polls do inappropriately influence final rankings, the effect is not large. Being in a BCS conference, though, still bumps a team up 4 spots.

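
For readers who want to see the mechanics, here is a minimal sketch of this kind of regression on simulated data. The real analysis used the AP Poll Archive data and the ratings described above; the coefficients and column names here are invented, chosen only so the simulation roughly mirrors the reported effect sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Simulated seasons: records, schedule strength, conference, preseason rank.
wins = rng.integers(5, 13, n)
losses = rng.integers(0, 6, n)
sos = rng.normal(0, 5, n)            # schedule strength
bcs = rng.integers(0, 2, n)          # 1 = BCS conference
preseason = rng.integers(1, 41, n)   # preseason rank (1 = best)

# Simulated final rank: lower (better) with wins, SOS, and BCS membership,
# plus a small lingering pull from the preseason rank, plus noise.
final = (40 - 1.6 * wins + 3.4 * losses - 0.5 * sos
         - 5.0 * bcs + 0.05 * preseason + rng.normal(0, 2, n))

# Ordinary least squares: solve X @ beta ~= final.
X = np.column_stack([np.ones(n), wins, losses, sos, bcs, preseason])
beta, *_ = np.linalg.lstsq(X, final, rcond=None)

# With enough data the preseason coefficient comes back near 0.05: two
# otherwise equal teams starting 20 spots apart finish about 1 apart.
print(np.round(beta, 2))
```

Plain least squares gives the coefficients; the P>|t| values quoted in the post come from attaching standard errors to the same setup, which a full statistics package (such as statsmodels) would do.
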
One group, though, does seem to benefit more than others. The table below lists the biggest beneficiaries of preseason polling. The Pred. column is where the team should have finished, but these teams all finished a few spots ahead of that. They also have some other commonalities: they are major programs from BCS conferences, started between 2 and 6, and finished between 9 and 18. Classic cases of Neuheiselitus.

In summary, preseason polls do influence final results in a way they are not supposed to, but not enough to really worry about. It will help you more if you are a disappointing major program that was supposed to have a shot at a national championship. And teams in BCS conferences can lose one more game than an otherwise equal non-BCS team and still finish higher in the polls. The non-BCS conspiracy theorists have been right all along.

Tuesday, August 25, 2009

Trend-O-Meter 2008, cont.

Again, for a quick explanation: the curve represents a team's trended performance over the course of the season. It is fit to the data points that measure roughly how well a team played in each game. You will notice in some cases that, even when team A beats team B, team B might have a higher score for that week. Essentially, this means that team A won on luck: untimely turnovers, injuries, or defensive mistakes by team B.
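
The exact curve-fitting method behind the trend-O-meter isn't specified, but the idea can be sketched by fitting a low-order polynomial to a season of weekly performance scores (the scores below are invented):

```python
import numpy as np

weeks = np.arange(1, 13)
scores = np.array([55, 58, 52, 60, 63, 61, 66, 70, 68, 74, 73, 78],
                  dtype=float)

# A quadratic trend smooths out single-game luck (turnovers, injuries,
# one bad week) while still showing the direction the team is headed.
coeffs = np.polyfit(weeks, scores, deg=2)
trend = np.polyval(coeffs, weeks)

# The trend line, not any single data point, is what the curves compare:
# a rising line means the team improved even if it lost along the way.
print(trend[-1] > trend[0])  # True for this improving team
```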

You will notice that, for the most part, Oklahoma was better than Texas for all but one week, Utah was consistently better than Alabama (though the gap in the lines exaggerates the difference a little), and Michigan was really bad and not getting better under Rich Rod. And before I forget, Washington State really was the worst BCS conference team in the history of the world, even if they did peak at just the right time to take home the Apple Cup win.

I've also ranked the top 25 by improvement over the season. This does not include bowl game performances. This year we'll see if NC State and Ole Miss can pick up where they left off last season.

I would be happy to produce any other two-team comparisons that y'all might be interested in seeing.





Monday, August 24, 2009

Trend-O-Meter 2008

I will soon be posting the results from the trend-O-meter 2008. For those who are unfamiliar with the concept (and most are, since I invented it), the trend-O-meter, formerly known as the trend-O-matic, tracks a team's performance over the course of the season. The curves below follow the performance trends of the listed teams.

As you can see, Florida was the best team for most of the season, particularly after "the speech" and the Ole Miss game. Kudos to Tebow, because they faced an opponent in the national championship game that was also peaking at the end of the season.

cRPI goes for baseball too

In the spirit of the season, I've ranked MLB teams using the cRPI. I only had data for 2007, so it's a bit dated, but it does demonstrate some of the advantages of the cRPI.

For example, from the more competitive AL East, the Rays, Red Sox, and Yankees are 3, 4, and 7 by winning percentage, respectively, but are 2, 3, and 5 by cRPI, a more accurate assessment of their performance.
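
The cRPI formula itself isn't given here, but for readers unfamiliar with the family it belongs to, this is a simplified sketch of the standard RPI it presumably builds on: 25% winning percentage, 50% opponents' winning percentage, 25% opponents' opponents' winning percentage (teams and results invented):

```python
# Simplified standard-RPI sketch; this is NOT the author's cRPI, just
# the baseline formula that rating builds on. Records are invented.
results = {  # team -> list of (opponent, won?)
    "A": [("B", True), ("C", True)],
    "B": [("A", False), ("C", True)],
    "C": [("A", False), ("B", False)],
}

def wp(team, exclude=None):
    """Winning pct, optionally excluding games against one opponent."""
    games = [(o, w) for o, w in results[team] if o != exclude]
    return sum(w for _, w in games) / len(games) if games else 0.0

def rpi(team):
    opps = [o for o, _ in results[team]]
    owp = sum(wp(o, exclude=team) for o in opps) / len(opps)
    oowp = sum(
        sum(wp(oo, exclude=o) for oo, _ in results[o]) / len(results[o])
        for o in opps
    ) / len(opps)
    return 0.25 * wp(team) + 0.50 * owp + 0.25 * oowp

print(sorted(results, key=rpi, reverse=True))  # ['A', 'B', 'C']
```

The point of schedule-aware ratings like this (and, presumably, the cRPI's refinements) is exactly the AL East effect above: a good record against strong opponents outranks a slightly better record against weak ones.
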