Who Won Sochi? Wrangling Olympic Medal Count Data

I have always been a big fan of the Olympics (though I like the Summer Games better, given my interest in Track & Field, Fencing and Soccer). However, something that has always bothered me is the concept of the Medal Count. For years I have seen countries listed as “winning” because their medal count was higher, even though several countries “below” them often had many more Gold medals. Shouldn’t a Gold medal count for more than a Silver (and much more than a Bronze)? What would you rather have as an athlete: three Gold medals or four Bronzes?

Evidently, I am not the only one debating this point. Googling “value of olympic medals for rank count” yielded a range of debates on the first page alone (Bleacher Report, USA Today, The New Republic, the Washington Post and even Bicycling.com). Wikipedia even has an entry on this debate.

This year, however, I noticed throughout the games that Google’s medal count stats page (Google “olympic medal count”) was not ranking countries by absolute medal count. For quite a while Norway and Germany were on top, even when they did not have the highest total number of medals, because they had more Gold medals than anyone else. Clearly Google was using a different weighting than “all medals are alike.” Not a surprise given their background in data.

I started to wonder what type of weighting they were using. In 1984 (when the Olympics were in Los Angeles) a bunch of gaming companies came out with various Olympic games. Konami’s standup arcade game Track & Field was widely popular (and highly abusive to trackballs). The game I used to play the most (thanks to hacking it) was Epyx’s Summer (and Winter) Games. This game had the “real” challenge of figuring out who “won the Olympics,” as it was a head-to-head multi-player game (someone had to win). It used the 5:3:1 Medal Weighting Model to determine this: each Gold medal was worth 5 points, each Silver 3 points, and each Bronze 1. I wondered if Google was using this model, so I decided to wrangle the data and find out.

Data processing

I used Google’s Sochi Olympic Medal Count as my source of data, as it had both the counts and Google’s ranking of the winners (I got this via their Russian site so I could get final results; 26 countries won at least one Olympic Medal).

Of course, by the end of the Olympics it was a bit less interesting, as Russia had both the most medals and the highest rank. However, I still wanted to figure out their weighting as a curious exercise. I built a model that calculated ranks for various Medal Weighting Model (MWM) approaches and computed the absolute value of each Rank Error delta from Google’s ranking. I computed both the sum of these errors (Total Rank Error or TRE) and highlighted any non-zero error, enabling me to quickly spot mismatches for various MWM weightings.
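To make the TRE calculation concrete, here is a minimal sketch of the approach in Python. The medal counts and Google ranks below are illustrative placeholders rather than the full 26-country table, and ties are handled by simple sort order:

```python
from typing import Dict, Tuple

# Country -> (Gold, Silver, Bronze); Google's published rank for each country.
# These five rows are illustrative placeholders, not the full 26-country table.
medals: Dict[str, Tuple[int, int, int]] = {
    "Russia": (13, 11, 9),
    "Norway": (11, 5, 10),
    "Canada": (10, 10, 5),
    "United States": (9, 7, 12),
    "Netherlands": (8, 7, 9),
}
google_rank = {"Russia": 1, "Norway": 2, "Canada": 3,
               "United States": 4, "Netherlands": 5}


def total_rank_error(weights: Tuple[float, float, float]) -> int:
    """Score each country with the given Gold/Silver/Bronze weights, rank the
    scores, and sum the absolute rank deltas versus Google's ranking (TRE)."""
    w_gold, w_silver, w_bronze = weights
    scores = {c: g * w_gold + s * w_silver + b * w_bronze
              for c, (g, s, b) in medals.items()}
    ordered = sorted(scores, key=scores.get, reverse=True)  # rank 1 = highest score
    my_rank = {c: i + 1 for i, c in enumerate(ordered)}
    return sum(abs(my_rank[c] - google_rank[c]) for c in medals)


print(total_rank_error((1, 1, 1)))  # the "Bob Costas" all-medals-equal model
print(total_rank_error((5, 3, 1)))  # the Epyx 5:3:1 model
```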

Trying out a few random models

The first model I tried was the “Bob Costas Model,” where every medal is the same (1:1:1). This was clearly different from Google’s, as it had a TRE of 72. I then tried the Epyx 5:3:1 model… no dice: this one had a TRE of 35 (better than Bob, but not great). I tried a few other mathematical series:

  • Fibonacci: 0,1,1 (TRE=50); 1,1,2 (TRE=42); and 1,2,3 (TRE=43)
  • Fibonacci Prime (TRE=54)
  • Abundant Numbers (TRE=54)
  • Prime Numbers: (TRE=42)
  • Lucas Numbers (TRE=28)
  • Geometric Sequence (TRE=23)
  • Weird Numbers (TRE=2)
  • Happy Numbers (TRE=39)

I then tried logical sequences, such as the lowest ratios where a Silver is worth more than a Bronze and a Gold is worth more than both (TRE=31). Still no luck.
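Checking a batch of candidate weightings is just a loop over the same total_rank_error() function from the sketch above. The specific Gold:Silver:Bronze tuples here are my assumed readings of the named sequences, not the exact values behind the TREs quoted above:

```python
# Batch-evaluate a few candidate weightings with total_rank_error() from above.
# The Gold:Silver:Bronze tuples are assumed orderings of the named sequences.
candidates = {
    "Bob Costas (1:1:1)": (1, 1, 1),
    "Epyx (5:3:1)":       (5, 3, 1),
    "Fibonacci (3:2:1)":  (3, 2, 1),
    "Primes (5:3:2)":     (5, 3, 2),
    "Geometric (4:2:1)":  (4, 2, 1),
}
for name, w in candidates.items():
    print(f"{name:20s} TRE = {total_rank_error(w)}")
```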

Getting more systematic

I decided to get more systematic and visualize the TRE for different MWM weights. I kept Whole Number weights, as I was operating under the general principle that each Medal is worth a whole number of points (true in most sports, but not in things like Diving, Figure Skating and Gymnastics; nevertheless, I wanted to keep things simple).

I first looked at Gold Weight influence, WGOLD:1:1, where I varied WGOLD from 1 upwards. This clearly showed a rapid decay in TRE that flattened out at 2 once Gold was worth 13x a single Silver or Bronze medal:

Rapid decay in TRE as Gold medals gain higher weighting
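A sketch of that sweep, reusing total_rank_error() from above; the flattening at a TRE of 2 around a 13x Gold weight comes from the full 26-country table, so the toy data in the earlier sketch will not reproduce the exact numbers:

```python
# Sweep the Gold weight while holding Silver and Bronze at 1 (WGOLD:1:1) and
# watch the TRE decay; on the full table it flattens out at 2 near WGOLD = 13.
for w_gold in range(1, 21):
    print(f"{w_gold:2d}:1:1  TRE = {total_rank_error((w_gold, 1, 1))}")
```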

This reinforced that Gold was King, but that Silver was better than Bronze by some value (not surprising). I then kept WGOLD at 13 and started to reduce WBRONZE. I found an interesting result: as soon as I made Bronze worth any value smaller than Silver (even by ε = 0.001), I got Zero TRE (a complete match to Google’s Rank). However, I could not imagine a scoring system of 13:1:<1 (or 13:1:0.99). It was just too geeky. As such, I tried a different approach, using Whole Number ratios of Gold:Silver:Bronze. The lowest ratios I found with Zero TRE were the following:

  • Gold=21, Silver=2, Bronze=1
  • Gold=29, Silver=3, Bronze=1
  • Gold=40, Silver=4, Bronze=1
  • Gold=45, Silver=5, Bronze=1

TRE never went to zero when Bronze was given zero weight. Of these models, 40:4:1 had the most symmetry (10:1 Gold:Silver and 4:1 Silver:Bronze), so I used that as my approximation of the Google Olympic Rank MWM (it did have zero TRE for all medal winners).
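A brute-force version of that search, again reusing total_rank_error() from above; the bounds are arbitrary, and on the full table this is the kind of scan that surfaces ratios like 21:2:1 and 40:4:1:

```python
# Scan whole-number weightings with Gold > Silver > Bronze >= 1 and report any
# that exactly reproduce Google's ranking (TRE = 0).
zero_tre = []
for gold in range(1, 50):
    for silver in range(1, gold):
        for bronze in range(1, silver):
            if total_rank_error((gold, silver, bronze)) == 0:
                zero_tre.append((gold, silver, bronze))

for ratio in sorted(zero_tre)[:10]:  # lowest Gold weights first
    print("Zero TRE at %d:%d:%d" % ratio)
```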

So who won?

I figured I would look at the Top Five Ranked Countries over various models:

Demonstration of how easy it is to add a Grading Curve to the rankings. The higher the TRE, the more the model underweights winning Gold medals (i.e., truly winning events). The country in bold is the one that benefits most from the Grading Curve.

Obviously, Russia is the all-around winner, as they won the most medals, the most Golds and the most Silvers (making this exercise a bit less interesting than it was about a week ago). However, it will be fun to apply this in 2016.

And at least Mr Putin is happy.