
Bringing Machine Vision to Olympic Judging

If you’re like me, your favorite part of the Olympics is watching athletes from all over the world come together and compete to see who is the best. In many events it is easy to determine who is the best. The team that scores the most goals wins at Football (a.k.a. Soccer). The person who crosses the finish line first wins the 100-meter Dash. The swimmer who touches the wall first wins the 200-meter Breaststroke.

Victims of Human Error (and Bias)

However, in some cases, determining what happened is less clear. All of these cases involve subjective human judgment. I am not just talking about judgment regarding stylistic components; I am talking about judgment on absolute principles of scoring and penalties. As a result, athletes (who have trained for tens of thousands of hours over years of their lives) are often at the mercy of human judgment of motions that are almost too fast to observe. A few examples:

  1. A sprinter can be disqualified if she or he kicks off the starting blocks before the sound of the starting gun could possibly have reached him or her
  2. A boxer may miss a point because he punches and connects too quickly for the judges to see
  3. A diver or gymnast can receive unwarranted penalties (or conversely, not receive warranted ones) because human judges misperceive the smallest of angles during a movement that takes just a few seconds

Even worse, athletes in these situations are not only subject to human error, they are often subject to human bias as well. We have all seen countless questionable judgment calls based on national or political bias in too many events. As upsetting as these are to the spectator, they are utterly heartbreaking for the athletes involved.

Bringing Machine Intelligence to the Rescue

We already use technology to aid judging in places where events happen too quickly for humans to perceive them accurately. In racing (humans or horses, on land or water), we use photo-finish cameras to resolve which athlete has actually won when a finish is too close to call (or, as happened this year, when there is a tie for the Gold Medal). In Gymnastics and Skating we allow judges to review slow-motion footage as part of their judging. In Fencing, we go one step further and equip athletes with electronic sensors to measure when a blade has touched a target area (or which touched first, to resolve near-simultaneous touches).

It is time to go a few steps further and actually bring machine intelligence (machine vision + machine learning) to the stage to provide the same absolute scoring that photo-finish cameras bring. I am not advocating using machines to replace people for stylistic judging. However, it makes absolutely no sense to not use machines to detect and score absolutes such as:

  • A gymnast’s bent arms, separated knees or mixed tempo
  • The degree to which a diver’s twist deviates from 90°
  • The actual time a sprinter kicks off the blocks, based on a microphone’s detection of when the sound of the gun arrived
  • Detection of a skater’s under-rotated jump

Not only would this significantly reduce bias and error; it would also be a great training tool. Just as advanced athletes today use sensors to measure performance and conditioning, they could use a similar approach to detect small errors and work to eliminate them earlier in training.

This is Now Possible

Just a few years ago, this was the stuff of science fiction. Today it is feasible. Half a dozen companies have developed self-driving cars equipped with sensors and machine learning programs that handle conditions with much higher levels of variability than judging a 10-meter dive or a Balance Beam program. However, one does not even need to equip arenas with multiple cameras and LIDAR arrays. Researchers at DARPA have even moved in the direction of teaching robots to cook by having them review two-dimensional YouTube videos.

Similar approaches could be used for “Scoring Computers.” If we wanted to let computers see exactly (and only) what humans can see, we could go down the machine-learning route. First, program the rules for scores and penalties. Then create training sets with identified scores and infractions to train a computer to detect penalties and score them as a judge would—but with the aid of slow-motion review in a laboratory, without the pressure of on-the-spot judging on live TV. This would not remove the human; it would simply let the human teach a computer to do something with higher accuracy and speed than a person can manage in real time.
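To make that concrete, here is a minimal sketch (in Python, with scikit-learn) of what the supervised-learning step could look like. It assumes pose keypoints have already been extracted from each clip with an off-the-shelf pose estimator and flattened into one feature vector per routine, and that each routine carries a judge-assigned penalty label; the arrays below are random stand-ins, not real scoring data, and every name here is illustrative rather than part of any existing system.

```python
# Minimal sketch: training a "scoring computer" from judge-labeled routines.
# Assumes per-frame pose keypoints have already been extracted with an
# off-the-shelf pose estimator and flattened into one feature vector per clip.
# The data below are random stand-ins for a real, judge-labeled training set.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: one routine's flattened keypoint sequence
# (60 frames x 17 joints x 2 coordinates). Each label: a judge's call made
# offline with slow-motion review (1 = penalty observed, 0 = clean execution).
X = rng.normal(size=(500, 60 * 17 * 2))
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# On real data, the held-out set would be routines scored by judges in the
# lab, so the computer's calls can be compared directly against theirs.
print(classification_report(y_test, model.predict(X_test)))
```

On real footage the random arrays would of course be replaced with extracted keypoints and judge-labeled scores; the point is only that the training loop itself is commodity machinery.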

If we wanted to go a step further, just as Fencing has done, we could add sensors to the mix. A LIDAR array could measure exact motion (actually measuring that bent knee or over-rotation). Motion-capture (mo-cap) would make the measurements even more accurate. Both would also create amazing advanced sports training technology.
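As a rough illustration of what such a sensor could feed a scoring computer, the sketch below computes a knee angle from three 3D joint positions of the kind a mo-cap or LIDAR rig would supply, and flags a deviation from a straight leg. The coordinates and the 5° tolerance are assumptions chosen for illustration, not actual scoring rules.

```python
# Minimal sketch of the sensor-based approach: measuring a "bent knee"
# directly from 3D joint positions (as a LIDAR or mo-cap rig would supply).
# The coordinates and the 5-degree tolerance below are illustrative only.

import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (in degrees) formed by segments b->a and b->c."""
    ba, bc = a - b, c - b
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Hypothetical hip, knee, and ankle positions captured mid-routine (meters).
hip = np.array([0.00, 1.00, 0.00])
knee = np.array([0.02, 0.55, 0.05])
ankle = np.array([0.01, 0.10, 0.02])

angle = joint_angle(hip, knee, ankle)
bend = 180.0 - angle  # deviation from a fully straight leg
print(f"knee angle: {angle:.1f} deg, deviation from straight: {bend:.1f} deg")
if bend > 5.0:  # tolerance is an assumption, not a real scoring rule
    print("flag: bent-knee deduction candidate")
```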

It’s More Feasible Than You May Think

All of this technology sounds pretty expensive: computers, sensors, data capture, programming, testing, verification, deployment, etc. However, it is not nearly as expensive or “sci-fi-ish” as one might think (or fear).

Tens of thousands of hours of video already exist to train computers to judge events (the same videos that judges, athletes and coaches review in training libraries—libraries even better than robo.watch). Computing time is getting cheaper every year thanks to Moore’s Law and public cloud computing. An abundance of Open Source libraries for machine learning is available (some companies have opened proprietary libraries; others are offering Machine Learning-as-a-Service). There are now even low-cost LIDAR sensors available for less than $500 that can resolve distances of 1 cm or less (for $2,000, college programs and Tier I competitive venues can get sensors that resolve to 1 mm or less).

Given the millions of dollars poured into these sports (and the billions into transmission rights), it would not require an Apollo Program to build a pilot of this in time for the 2020 Olympics (or even the 2018 Winter Olympics). Companies like Google and IBM would likely donate some R&D to show off their capabilities. Universities like MIT, Carnegie Mellon, and Stanford are already putting millions of dollars into biomimetics, computer vision, and more. Even companies like ILM and Weta Digital might offer their mo-cap expertise, as they would benefit from joint R&D. Thousands of engineers would likely jump in to help out via Kaggle Competitions and Hackathons, as this would be really fun to create.

Some Interesting Side Benefits

There are benefits to this technology beyond “just” providing more accurate judging and better training tools. The same technology could create amazing television that would enable spectators to better appreciate and understand these sports. Yes, you could also add your Oculus Rift or similar VR technology to create immersive games (creating new sources of funding for organizations like the US Olympic Team or USA Gymnastics to help pay for athlete training).

An Opportunity Missed: The Olympics-as-a-Platform

Article first published as An Opportunity Missed: The Olympics-as-a-Platform on Technorati. Embedded video of “Rethink Possible” added in this blog post.

The Summer Olympics are very special. Every four years, for over two weeks, people all over the world (even those who are not normally sports fans) spend hours every day engrossed in the innermost details of dozens of sports—at home, at work, at school and at play.

However, in 2012 the IOC had opportunities never seen in any prior Summer Olympics…

This year was not just the first Summer Olympics since social media, multimedia mobile phones, and smartphone (and tablet) apps became the ubiquitous means that over a billion people use to find and share information, opinions, photos and video globally—and instantly. It was also the first Summer Olympics since the rise of Open Data Platforms and Apps Competitions to tap the innovation of thousands of people to create better ways to access information (without adding the cost and complexity of hiring thousands of designers, developers and testers).

The IOC could have taken advantage of this by doing four things:

If the IOC had done this they could have created the biggest, most exciting Open Data and App competition we have ever seen. Not only would this have tapped into the innovation of tens of thousands of developers, it would have harnessed competition between teams who wanted to highlight the technology strength of their countries, their love of their country’s history and culture, and their passion for the athletes representing them in their favorite sports.

Imagine what kinds of Apps this global competition could have created:

  • Apps written by ex-gymnasts that combined athlete bios and explanations of events and rules with (official and fan) video of preliminary rounds and the World Championships. Apps that even let the audience score what they saw in real-time.
  • Apps combining location-based data with captured photos and video along the entire 26-mile, 385-yard course of the marathon, letting you play back key parts of the race, see every part of the course at once, and cheer on runners via Facebook and Twitter
  • Fantasy Olympic Team apps that let you assemble your own dream team for events and compete with your friends—or globally in the Olympic spirit
  • Training gamification apps that let you record and visually display your running and swimming times (like Nike’s training apps) to understand in new ways the tremendous speed, strength and endurance of Olympians


AT&T’s Rethink Possible Ad: Imagine if the swimmer did not have to write down the new record (and instead an App logged his times and showed them against every Olympic Record—and every qualifying round—back to 1896)

Apps like these would have made these Olympics more interactive and participatory than any in history. While we did not get this in 2012, I am keeping my fingers crossed for a 2014 Sochi Winter Apps Competition, and perhaps even a 2016 Rio Summer Apps Competition.