Note: a similar effect has been found in basketball, with refs showing various biases, including toward the home team and the losing team, both of which increase profitability. Read on for an analysis of college football asking the same question, but with different metrics.
On the heels of my previous speculation about
path-dependency in BCS college football rankings, I was recently discussing the not-infrequently-heard assertion that some teams (and some entire conferences) seem to get overly generous preseason rankings in the BCS polls - most famously, Notre Dame. (And if you're a Notre Dame fan, please stop making my blog dirty with your eyes right now, thank you.) I freely admit to being an outsider, so while I hope I can provide a new perspective, it also means this analysis may have been done before. Still, so far as I know, no one has actually run the numbers. My prediction before I started crunching was that schools with historically strong "legacy" programs (even if they aren't so strong any more) would get overly gentle pre-season ratings (e.g. Notre Dame), and also that certain conferences would get an advantage, in particular the Big 12 and SEC. (Skip ahead to "
THE BOTTOM LINE" if you don't care about the individual team numbers.)
Before I commence BCS bashing, I would be remiss not to link to all the excellent Onion headlines about the BCS over the last few years: "Coaches Thought BCS Computer Would At Least Make a Noise
When Boise State Lost", and then comparing to March Madness, "Cheering Fans, Thrilling Tournament
Disgust BCS Officials"; also see "BCS Picture Made Clearer by
Pretending Certain Teams Don't Exist", and a 2001: A Space Odyssey reference for the win, "Lip-Reading BCS Computer
Kills Officials Who Want to Shut It Down". It's curious that any discussion of the BCS rankings usually turns into a discussion of the bowl system. (Why is this? Keep reading.) That said, can you imagine an NFL or MLB or NCAA basketball official suggesting a switch to a bowl system in those sports? They'd be laughed out of their organization.
Back to the cold, hard math. How, statistically, can we tell who's getting an unfair break? Easy. For each team that's been ranked in the Top 25 in the coaches' poll at the beginning or end of a season, look at the average movement between the preseason ranking and the ranking at the end of the bowls. Same for conferences. Here's how it shakes out for all teams ranked at the beginning or end of the 2006 through 2010 seasons:
| Team | Post Minus Pre Rank Diff |
| --- | --- |
| Boise State | 8.4 |
| BYU | 5.2 |
| TCU | 4.8 |
| Stanford | 4.4 |
| Oklahoma State | 4.2 |
| Boston College | 3.2 |
| Utah | 3.2 |
| Arkansas | 3.0 |
| Alabama | 2.8 |
| Hawaii | 2.8 |
| Missouri | 2.8 |
| Nevada | 2.6 |
| Michigan State | 2.4 |
| Oregon | 2.4 |
| Georgia Tech | 1.8 |
| Texas A&M | 1.8 |
| Wake Forest | 1.8 |
| Oregon State | 1.4 |
| Illinois | 1.2 |
| Texas Tech | 1.2 |
| UCF | 1.2 |
| Arizona State | 1.0 |
| Kansas | 1.0 |
| Pittsburgh | 0.8 |
| South Carolina | 0.8 |
| Maryland | 0.4 |
| North Carolina State | 0.2 |
| Arizona | 0.0 |
| Ball State | 0.0 |
| Cincinnati | 0.0 |
| Connecticut | 0.0 |
| Northwestern | 0.0 |
| Rutgers | 0.0 |
| USF | 0.0 |
| Virginia | 0.0 |
| Fresno State | -0.2 |
| Wisconsin | -0.2 |
| Ohio State | -0.6 |
| Tennessee | -0.6 |
| Auburn | -1.0 |
| Mississippi | -1.2 |
| Louisville | -1.6 |
| UCLA | -1.6 |
| LSU | -2.0 |
| Virginia Tech | -2.0 |
| Michigan | -2.2 |
| Notre Dame | -2.2 |
| Clemson | -2.8 |
| Iowa | -2.8 |
| North Carolina | -2.8 |
| Miami (Fla.) | -3.4 |
| Nebraska | -3.6 |
| Penn State | -3.6 |
| West Virginia | -3.6 |
| Oklahoma | -3.8 |
| Florida State | -4.8 |
| Florida | -5.4 |
| USC | -6.2 |
| Cal | -6.8 |
| Georgia | -7.2 |
| Texas | -9.4 |
*Note: this system counts unranked as 26, and of course unranked teams (including independents and small-conference teams) do break into or drop out of the rankings all the time, so it's a non-zero-sum game; hence, at the conference level, most (or even all) of a conference's teams can in principle gain or lose on average.
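To make the method concrete, here's a minimal sketch of the drift calculation in Python. The season history below is invented for illustration, not real poll data; the sign convention matches the table, so positive means a team finished better than its preseason slot.

```python
# Unranked appearances count as rank 26, per the note above.
UNRANKED = 26

def rank_value(rank):
    """Map a poll rank (1-25) or None (unranked) to a number."""
    return UNRANKED if rank is None else rank

def avg_drift(pre_post_pairs):
    """Average preseason-to-postseason movement over several seasons.

    Positive = the team finished better (a numerically lower rank)
    than its preseason ranking, i.e. it was under-rated going in.
    """
    diffs = [rank_value(pre) - rank_value(post) for pre, post in pre_post_pairs]
    return sum(diffs) / len(diffs)

# Hypothetical five-season history: (preseason rank, post-bowl rank),
# with None meaning the team started the season unranked.
boise_like = [(None, 11), (25, 12), (16, 4), (14, 8), (9, 7)]
print(avg_drift(boise_like))  # (15 + 13 + 12 + 6 + 2) / 5 = 9.6
```

A team that the polls priced correctly would average out near zero; a consistently positive or negative number over five seasons is the signal of interest here.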
The conferences look different too. Here are the pre-season vs. post-season average ranking changes for the conferences, calculated the same way:
| Conference | Post Minus Pre Rank Diff |
| --- | --- |
| Mountain West | 4.4 |
| WAC | 3.67 |
| MAC | 0.0 |
| Big East | -0.629 |
| Big 10 | -0.725 |
| Big 12 | -0.725 |
| SEC | -0.92 |
| ACC | -1.125 |
| Pac 10 | -6.80 |
Mountain West and WAC teams do better than expected, and ACC and Pac 10 do the worst. (I had predicted SEC and Big 12 would be at the lower end, and they were until I added 2010 to the 2006 through 2009 numbers.)
And here's how the distribution of the mis-rankings of individual teams looks for 2006 through 2010:
Are there any characteristics that the outliers on the left, i.e. the consistently under-rated teams, have in common? Yes. Many are newer programs with smaller fan bases in the less-populated western states. And although it looks basically like a normal distribution, it's worth noting that there's a little cluster at the over-rated (right) end, but no matching one at the under-rated end. That over-rated cluster actually became even more pronounced when I added the '10 season data; that is, the curve got less normal, statistically speaking, which makes the over-rated bump harder to explain away as mere incompetence (that is to say, as just random, unbiased inaccuracy). Sure, the median postseason-minus-preseason ranking difference is zero, but if the BCS polls knew what they were doing, each individual team's difference would be zero - the polls would accurately predict performance during the season. One objection to this might be summarized as follows: "Well, the coaches (or any other poll) can't possibly be expected to accurately predict the rank of all these teams, because anything can happen during the season." And for those people who say that:
YES! THAT'S EXACTLY THE POINT! SO WHY ARE THEY BOTHERING WITH PRE-SEASON RANKINGS AT ALL?
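For the statistically inclined, "got less normal" can be put on a number: the sample skewness of the postseason-minus-preseason differences, which is zero for a symmetric distribution and moves away from zero as one tail gets heavier. A quick sketch, using made-up values rather than the actual table data:

```python
def skewness(xs):
    """Sample skewness (third standardized moment): 0 for a symmetric
    sample, nonzero when one tail is heavier than the other."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # variance
    m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
    return m3 / m2 ** 1.5

print(skewness([-2, -1, 0, 1, 2]))         # symmetric sample: 0.0
print(skewness([-2, -1, 0, 1, 2, 9, 10]))  # heavy right tail: positive
```

Pure random inaccuracy should wash out symmetrically; a skew that grows as more seasons are added is what makes the bias explanation more plausible than the incompetence one.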
THE BOTTOM LINE
There are teams that consistently get an unfairly high or low ranking at the start of seasons, relative to how they end up performing. And it turns out that there's a single clear explanation for not only this observation, but in fact for several mysteries in college football. Those mysteries are:
1) Why does college football bother with pre-season rankings at all?
2) Why are there certain college football programs that are consistently ranked unfairly low in the preseason, and why are some consistently given an advantage in the preseason?
3) Why are there so damn many bowls?
For crying out loud, I started writing this on the Thursday night after New Year's, when we're all back at work, and there were still 4 more bowls to go at that point! Why so many? It's not your imagination; there are more every decade. Again, the visual makes it more striking:
The answer for all of these is the same, and to red-blooded American males who know how the world works, it should be obvious: there's money to be made from the bowls, namely $400 million in revenues (and that number is
3 years old). If there's a conflict between identifying an unambiguous champion on one hand and revenues on the other, revenues will win. (Anytime you hear someone grunting about how money misses The True Meaning of The Game and/or If I Have to Explain It You'll Never Understand It, run away. He's either justifying his own irrational spending on his team, or he's trying to sell you something.) So, if you want to maximize how much money the bowls make, you have to
load the dice at the start of the season to make it more likely that those bowls will be packed with people wearing the classic school colors of legacy programs - people who
spend money - regardless of whether those teams suck that year, or have sucked for many years. Sorry, Boise State: for all your talent and hard work, your fans don't spend enough money at bowl games, and your sparsely-populated part of the country doesn't have enough fans watching the advertising during the game to drive up ratings! Prove that your fans are tchotchke-buying suckers like Notre Dame and Cal fans, and soon you'll get better pre-season rankings!
Suddenly the more byzantine aspects of the BCS ranking and bowl system start to make sense. The rankings are how you can give the bowls
some semblance of athletic legitimacy without a play-off system. So if it's all driven by dollars, you ask, then why even worry about seeming legitimate? Because that semblance of fairness is necessary to keep the fans coming - even the die-hard fans of those legacy teams will stop coming once they realize their team is on BCS affirmative action, and they're actually playing in the "We're All Winners!" Bowl. Solution: seed the rankings with big-market teams. That way you can have your cake (a semblance of legitimate athletic competition) and eat it too (profit maximization).
On the other end, have you ever actually heard a good argument for why there's no play-off system? The only attempt I've ever heard is that
play-offs would take more time out of the student-athletes' schedules which would damage their academic performance. Yes, I've actually seen people say this with a straight face. Seriously? That's all you got? I guess we must not care about basketball players then, since I've never heard anyone say we should stop March Madness for that reason.
While you're in the mood to read more cynical sports economics, how's this for a quantitative definition of loyalty: price insensitivity relative to win-loss record. In other words, fans who are loyal to their team buy stuff with logos on it and go to games consistently, even when the teams lose, even when the teams
always lose, year after year. In other words, winning doesn't necessarily correlate with a sports club's income, and the less it correlates with income, the more loyal the fans are. (In all sincerity - is there a better definition than this?) I'm going to measure that at some point, but that's a later post.
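As a sketch of how that measurement might go - with invented numbers, and assuming Pearson correlation as the yardstick - loyalty could be scored as one minus the strength of the wins-to-revenue correlation:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

def loyalty(win_pcts, revenues):
    """1 = revenue ignores the win-loss record (loyal fans);
    0 = revenue tracks the record exactly (fair-weather fans)."""
    return 1 - abs(pearson(win_pcts, revenues))

# Invented four-season examples: same records, different fan bases.
wins = [0.2, 0.8, 0.4, 0.6]
fair_weather_rev = [40, 160, 80, 120]   # revenue tracks wins exactly
loyal_rev = [100, 100, 101, 101]        # revenue shrugs at the record
print(loyalty(wins, fair_weather_rev))  # near 0.0
print(loyalty(wins, loyal_rev))         # near 1.0
```

Real club data would be noisier, and revenue would need deflating for ticket-price changes, but the basic shape of the metric is that simple.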
I recognize that I'm probably not concluding anything that perceptive sports fans haven't already figured out, but the relationship between sports, business, and human psychology is damn interesting.