Roller coaster polls, popularity and experience

posted by Jeff | Tuesday, March 30, 2010, 12:48 AM | comments: 3

A couple of months ago, I opened up the track record feature of CoasterBuzz to everyone, and put a rating mechanism in place (not to be confused with a ranking mechanism). For better or worse, people really care about polls. They're always a hot topic when they come out. For that reason, I decided that I should probably do some kind of poll, because regardless of my feelings on it, clearly our members have a great interest in them.

With this concession, I still have some of my own requirements. The first is that it can't be annual, because in the Internet world that just seems as stupid as printing a newsletter. It should be real-time, or something nearly real-time. The second criterion is that members shouldn't have to over-think it. Ranking your rides is easy if you've only been on 20, but we have members who have been on 600 or more. Even with the drag-and-drop ease that's already in place, it's still a deterrent. I've got a little over 150 and have zero desire to ever order them. I'm content that you can rate a ride as among the worst, below average, average, above average, or among the best. That's good enough for me, and it requires very little thought to arrive at a conclusion.

So with that in mind, people have been rating coasters, and there's a shit-ton of data to draw from. Now I have to figure out what to do with it. The straight popularity contest is bullshit, and I decided that I wanted to avoid that. Experience is important, but if it can overcome a total lack of popularity, then that's bullshit too. One of the issues I've had with the Hawker poll for years is that a half-dozen uber-enthusiast assholes could travel to Zimbabwe to ride some obscure ride, rank it highly, and suddenly it's a top-5 ride. That's nonsense. The selection bias and warm fuzzies of those people having the travel experience taints the results, and the sample size is simply way too small to be statistically significant.

That leaves you with a need to somehow balance popularity and experience. A popular ride deserves a certain amount of weight, and that's why something like Millennium Force (rightfully, in my mind) ranks high no matter what. At the same time, an experienced rider's opinion definitely deserves more weight, meaning they wouldn't put Raptor, for example, as high up relative to other coasters that haven't seen as much action.

My first stab at this data was simply to apply experience and popularity factors, on a scale of 0 to 1. The person with the largest track record would get the full 1 for an experience index, and the person with the smallest would get zero (or .0001 or whatever). Conversely, the coaster appearing in the most track records would get 1 (Millennium Force, in case you were wondering), while those not appearing would get 0. The math looked something like:

SUM(user rating * user experience index) / SUM(user experience index) * popularity index
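As a minimal sketch, the scoring described above might look like the following. All of the names and data shapes here (`ratings`, `track_record_sizes`, `ride_counts`) are hypothetical, not anything from the actual CoasterBuzz code:

```python
def normalize(value, lo, hi):
    """Min-max scale a value onto 0..1."""
    if hi == lo:
        return 0.0
    return (value - lo) / (hi - lo)

def score_coaster(ratings, track_record_sizes, ride_counts, coaster):
    """Weighted-average score for one coaster, per the formula above.

    ratings: {coaster: {user: rating}}
    track_record_sizes: {user: number of coasters ridden}
    ride_counts: {coaster: number of track records it appears in}
    """
    min_tr, max_tr = min(track_record_sizes.values()), max(track_record_sizes.values())
    min_rc, max_rc = min(ride_counts.values()), max(ride_counts.values())

    # Coaster appearing in the most track records gets a full 1.
    popularity = normalize(ride_counts[coaster], min_rc, max_rc)

    # Ratings weighted by each rater's experience index.
    weighted_sum = 0.0
    weight_total = 0.0
    for user, rating in ratings[coaster].items():
        experience = normalize(track_record_sizes[user], min_tr, max_tr)
        weighted_sum += rating * experience
        weight_total += experience

    if weight_total == 0:
        return 0.0
    return (weighted_sum / weight_total) * popularity
```

Note that a rater at the minimum track-record size gets an experience index of exactly zero here, so their vote vanishes entirely; the ".0001 or whatever" floor mentioned above avoids that.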

The truth of the matter is that these results look pretty good after the first 20 entries. Unfortunately, the top 20 are mostly Cedar Point and Kings Island coasters, with two from Magic Kingdom, including Disaster Transport, which is a steaming pile of shit working off of a high popularity index. But beyond that, you see a pretty rational distribution of good rides, including The Voyage, Montu, Superman/Bizarro, X2, Phantom's Revenge, etc., despite having lower popularity indices.

The problem is probably with the popularity index. With it being linear, it gives far too much advantage to well-ridden rides, and goes too far to penalize those that don't see as much action. It over-compensates for the problem I described. I don't remember anything from high school statistics, and didn't take any in college, so I'm not sure what to do. It feels like the linear experience index actually works really well, as rides like The Voyage fall somewhere in the middle of the popularity index and still come up pretty high.
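One common way to soften a linear index (offered here as a suggestion, not something from the post) is to dampen it with a square root or a logarithm, which compresses the gap between mid-popularity and very popular rides while still penalizing near-zero counts:

```python
import math

def popularity_index(count, max_count, mode="sqrt"):
    """Popularity index on 0..1. 'linear' is the original behavior;
    'sqrt' and 'log' are dampened alternatives that give mid-popularity
    rides a fairer shake."""
    if max_count <= 0:
        return 0.0
    linear = count / max_count
    if mode == "sqrt":
        return math.sqrt(linear)
    if mode == "log":
        # log1p avoids log(0) for rides with no track records.
        return math.log1p(count) / math.log1p(max_count)
    return linear
```

With the square root, a ride appearing in a quarter as many track records as the most popular one scores 0.5 instead of 0.25, so a well-liked but less-traveled ride is no longer crushed by raw exposure.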

Any suggestions?


Bob Hansen

March 30, 2010, 8:05 AM #

Take a look at Bayesian averages.

These tend to smooth out the big anomalies and allow for an accurate ranking when a coaster may only have a few votes. It takes into account the overall average of all scores when determining the average score for an individual coaster.

One of my other hobbies is board gaming, and a popular site for board gaming ranks every single board game ever made this way. It does require a certain minimum of votes for something to be "ranked," but it's a fairly small number (some of the popular games have tens of thousands of scores). The system tends to wash out the scores of fanboys and people who purposely tank a game.

The system is not perfect. After one game company issued a cease-and-desist letter to the website, enough angry users dropped their scores for that game to knock its overall ranking from the top twenty to around 150 (still a respectable position with over 10,000 games in the database). Another time, the community as a whole attempted to get a silly game called Monkey Auto Races to number one. Nearly everyone on the site rated it a ten, and it made it to number one for a week before people started dropping their scores again.

Anyways, just an idea. It's difficult to have any perfect system, but I think that one does work for the most part.

Skydiving Jeff

March 30, 2010, 11:06 AM #

When I made my own track record app for my personal website back in the day (it's no longer up), I ranked each coaster by various criteria, all of which themselves would be rated — airtime and speed were near the top, while location and theming were towards the bottom. It's a whole new layer of complexity, but I've always wondered what would happen if such a system were expanded to encompass a much bigger sample size of people.
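The per-criterion idea above can be sketched as a weighted mean, where each criterion carries its own importance. The criteria names and weights here are made up for illustration, not from the commenter's actual app:

```python
# Hypothetical criterion weights: airtime and speed near the top,
# location and theming towards the bottom, as described above.
CRITERION_WEIGHTS = {
    "airtime": 1.0,
    "speed": 0.9,
    "smoothness": 0.7,
    "location": 0.3,
    "theming": 0.2,
}

def criteria_score(scores):
    """Weighted mean of a coaster's per-criterion scores.

    scores: {criterion_name: score}, using only criteria the rater
    actually filled in.
    """
    total = sum(CRITERION_WEIGHTS[c] * s for c, s in scores.items())
    weight = sum(CRITERION_WEIGHTS[c] for c in scores)
    return total / weight if weight else 0.0
```

Aggregating these per-user scores across many users would then run into the same popularity-versus-experience weighting questions the post raises.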


April 7, 2010, 12:04 PM #

So, what’s the verdict? What did you come up with? Did you find a formula that works? How about a follow up post sometime? This is the sort of thing I find to be very interesting!
