L3: What If’s

Lagrange Point 3 (L3): Exploring “What If” scenarios and flights of fancy

Bringing Machine Vision to Olympic Judging

If you’re like me, your favorite part of the Olympics is watching athletes from all over the world come together and compete to see who is the best. In many events it is easy to determine who that is. The team that scores the most goals wins at Football (a.k.a. Soccer). The person who crosses the finish line first wins the 100-meter Dash. The swimmer who touches the wall first wins the 200-meter Breaststroke.

Victims of Human Error (and Bias)

However, in some cases, determining what happened is less clear. All of these cases involve subjective human judgment. I am not just talking about judgment of stylistic components; I am talking about judgment on absolute principles of scoring and penalties. As a result, athletes (who have trained for tens of thousands of hours over years of their lives) are often at the mercy of human judgment of motions that are almost too fast to observe. A few examples:

  1. A sprinter can be disqualified if she or he kicks off the starting blocks before the sound of the starting gun could potentially reach him or her
  2. A boxer may miss a point because he punches and connects too quickly
  3. A diver or gymnast can receive unwarranted penalties (or conversely, not receive warranted ones) because human judges misperceive the smallest of angles during a movement that takes just a few seconds

Even worse, athletes in these situations are not only subject to human error, they are often subject to human bias as well. We have all seen countless questionable judgment calls based on national or political bias in too many events. As upsetting as these are to the spectator, they are utterly heartbreaking for the athletes involved.

Bringing Machine Intelligence to the Rescue

We already use technology to aid in places where events happen too quickly for humans to accurately perceive them. In racing (human or horse, on land or water), we use photo-finish cameras to resolve which athlete has actually won when a finish is too close (or, as happened this year, when there is actually a tie for the Gold Medal). In Gymnastics and Skating we allow judges to review slow-motion footage as part of their judging. In Fencing, we go one step further and equip athletes with electronic sensors to measure when a blade has touched a target area (or which touched first, to resolve simultaneous touches).

It is time to go a few steps further and actually bring machine intelligence (machine vision + machine learning) to the stage to provide the same absolute scoring that photo-finish cameras bring. I am not advocating using machines to replace people for stylistic judging. However, it makes absolutely no sense not to use machines to detect and score absolutes such as:

  • A gymnast’s bent arms, separated knees or mixed tempo
  • The deviation of a diver’s twist from 90°
  • The actual time a sprinter kicks off the blocks, based on a microphone’s detection of when the sound of the gun arrived
  • Detection of a skater’s under-rotated jump

Not only would this significantly reduce bias and error; it would also be a great training tool. Just as advanced athletes today use sensors to measure performance and conditioning, they could use a similar approach to detect small errors and work to eliminate them earlier in training.

This is Now Possible

Just a few years ago, this was the stuff of science fiction. Today it is feasible. Half a dozen companies have developed self-driving cars equipped with sensors and machine learning programs to deal with conditions with much higher levels of variability than judging a 10-meter dive or Balance Beam routine. However, one does not even need to equip arenas with multiple cameras and LIDAR arrays: researchers at DARPA have moved in the direction of teaching robots to cook by having them review two-dimensional YouTube videos.

Similar approaches could be used for “Scoring Computers.” If we wanted to let computers see exactly (and only) what humans can see, we could go down the machine-learning route. First, program the rules for scores and penalties. Then create training sets with identified scores and infractions to train a computer to detect penalties and score them as a judge would, but with the aid of slow-motion review in a laboratory, without the pressure of on-the-spot judging on live TV. This would not remove the human; it would just let the human teach a computer to do something with higher accuracy and speed than a person could manage in real time.
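As a minimal sketch of what that machine-learning route could look like, assume a pose-estimation step has already turned video into numeric features per routine, and that human judges have labeled a library of clips in slow-motion review. The feature names, labels, and data below are hypothetical stand-ins, not an actual scoring system:

```python
# Sketch: train a classifier to flag one type of infraction (e.g., "bent arms")
# from per-routine features. Everything here is illustrative placeholder data;
# a real system would use features extracted from video by pose estimation
# and labels assigned by human judges reviewing slow-motion footage.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training set: each row summarizes one routine
# (e.g., min/mean elbow angle, knee separation, tempo variance).
X = rng.normal(size=(500, 4))          # placeholder features
y = (X[:, 0] < -0.5).astype(int)       # placeholder "bent arms" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

judge_model = RandomForestClassifier(n_estimators=200, random_state=0)
judge_model.fit(X_train, y_train)

print("held-out accuracy:", judge_model.score(X_test, y_test))
```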

If we wanted to go a step further, just as Fencing has done, we could add sensors to the mix. A LIDAR array could measure exact motion (actually measuring that bent knee or over-rotation). Motion capture (mo-cap) would make this accuracy even better. Both would also create amazing advanced sports training technology.
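The geometry on the sensor side is straightforward. Here is a sketch of how a bent knee could be measured from three 3-D marker positions (hip, knee, ankle); the marker coordinates and the tolerance are arbitrary placeholders, not real scoring rules:

```python
import numpy as np

def joint_angle_deg(a, b, c):
    """Angle at joint b (degrees) formed by markers a-b-c, e.g. hip-knee-ankle."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical marker positions (meters) from one mo-cap or LIDAR frame
hip   = np.array([0.0, 1.0, 0.90])
knee  = np.array([0.0, 0.5, 0.95])
ankle = np.array([0.0, 0.0, 0.90])

bend = 180.0 - joint_angle_deg(hip, knee, ankle)   # 0 degrees = perfectly straight leg
if bend > 0.5:                                     # placeholder tolerance, not a real rule
    print(f"possible bent-knee deduction: {bend:.1f} degrees of flexion")
```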

It’s More Feasible Than You May Think

All of this technology sounds pretty expensive: computers, sensors, data capture, programming, testing, verification, deployment, etc. However, it is not nearly as expensive or as “sci-fi-ish” as one might think (or fear).

Tens of thousands of hours of video already exist to train computers to judge events (the same videos that judges, athletes, and coaches review in training libraries, libraries even better than robo.watch). Computing time is getting cheaper every year thanks to Moore’s Law and public cloud computing. Abundant open-source libraries for machine learning are available (some companies have opened proprietary libraries; others are offering Machine Learning-as-a-Service). There are now even low-cost LIDAR sensors available for less than $500 that can resolve distances of 1 cm or less (for $2,000, college programs and Tier I competitive venues can get sensors that resolve to 1 mm or less).

Given the millions of dollars poured into these sports (and the billions into transmission rights), it would not require an Apollo Program to build a pilot of this in time for the 2020 Olympics (or even the 2018 Winter Olympics). Companies like Google and IBM would likely donate some R&D to show off their capabilities. Universities like MIT, Carnegie Mellon, and Stanford are already putting millions of dollars into biomimetics, computer vision, and more. Even companies like ILM and Weta Digital might offer their mo-cap expertise, as they would benefit from joint R&D. Thousands of engineers would likely jump in to help via Kaggle competitions and hackathons, as this would be really fun to create.

Some Interesting Side Benefits

There are benefits to this technology beyond “just” providing more accurate judging and better training tools. It could create amazing television that would enable spectators to better appreciate and understand these amazing sports. Yes, you could also add your Oculus Rift or similar VR technology to create some amazing immersive games (creating new sources of funding for organizations like the US Olympic Team or USA Gymnastics to help pay for athlete training).

The Expanding (Digital) Universe: Visualizing How BIG a Zettabyte Really Is

Note: This post was originally published at Oulixeus Consulting

A lot of recent news articles (Google News currently shows 1,060 of them) cite the annual EMC-IDC Digital Universe studies of the massive growth of the digital universe through 2020. If you have not read the study, it indicates that the digital universe is now doubling every two years and will grow 55-fold, from 0.8 Zettabytes (ZB) of data in 2009 to 44 Zettabytes in 2020. (Every year IDC has revised the growth curve upward by several Zettabytes; earlier editions projected 44-fold and then 50-fold growth, to 35 and then 40 ZB.)

Usually these articles show a diagram such as this:

DigitalDecade

This type of diagram is great at showing how dramatic that growth is. However, it really does not convey how big a Zettabyte really is, or how much data we will be swimming in (or drowning in) by 2020.

A Zettabyte (ZB) is really, really big – in terms of today’s information systems. It is not a capacity that people encounter every day. It’s not even in Microsoft Office’s spell-checker; Word “recommended” that I meant to type “Petabyte” instead 😉

The Raw Definition: How big is a Zettabyte?

A computer scientist will tell you that 1 Zettabyte is 2^70 bytes. That does not sound very big to a person who does not usually think in exponential or scientific notation, especially given that a one-Terabyte (1 TB) solid-state drive has the capacity to store 2^40 bytes.

Wikipedia describes a ZB (in decimal math) as one sextillion bytes. While this sounds large, it is hard to visualize. It is easier to visualize 1 ZB (and 44 ZB) in relation to things we use every day.
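To put raw numbers on those definitions, here is a quick back-of-the-envelope check using the binary powers quoted above (a sketch, nothing more):

```python
# 1 TB and 1 ZB expressed in bytes, using binary (power-of-two) definitions
TB = 2**40
ZB = 2**70

print(f"1 TB = {TB:,} bytes")                  # 1,099,511,627,776
print(f"1 ZB = {ZB:,} bytes")                  # 1,180,591,620,717,411,303,424 (~1.18 sextillion)
print(f"1 ZB is {ZB // TB:,}x a 1 TB drive")   # 1,073,741,824 - over a billion TB drives
```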

Visualizing Zettabytes in Units of Smartphones

The most popular new smartphones today have 32 Gigabytes (GB), or 32 × 2^30 bytes, of capacity. To get to 1 ZB you would have to fill 34,359,738,368 (34.4 billion) smartphones to capacity. If you put 34.4 billion Samsung Galaxy S5s end-to-end (length-wise) you would circle the Earth 121.8 times:

1ZB-Earth-Distance
Click to see a higher-resolution image and the dot that represents Earth to scale vs. the line

You could actually circumnavigate Jupiter almost 11 times, but that is harder to visualize.

The number of bytes in 44 Zettabytes is too large for Microsoft Excel to compute exactly. (The number you will get is so large that Excel will cut off seven digits of accuracy; read that as a potential rounding error of up to one million bytes.) Assuming that Moore’s Law allows us to double the capacity of smartphones three times between now and 2020, it would take 188,978,561,024 (nearly 189 billion) smartphones to store 44 ZB of data. Placing these end-to-end would circumnavigate the world nearly 670 times.
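The smartphone arithmetic is easy to reproduce. In the sketch below, the handset length (142 mm, roughly a Galaxy S5) and the Earth’s equatorial circumference (40,075 km) are the assumptions behind the “circle the Earth” figures quoted above:

```python
ZB = 2**70                   # one Zettabyte in bytes
PHONE_LEN_M = 0.142          # assumed handset length (~Galaxy S5), meters
EARTH_CIRC_M = 40_075_000    # Earth's equatorial circumference, meters

# 1 ZB on today's 32 GB phones
phones_1zb = ZB // (32 * 2**30)
laps_1zb = phones_1zb * PHONE_LEN_M / EARTH_CIRC_M
print(f"1 ZB: {phones_1zb:,} phones, circling Earth {laps_1zb:.1f} times")

# 44 ZB on hypothetical 2020-era 256 GB phones (capacity doubled three times)
phones_44zb = 44 * ZB // (256 * 2**30)
laps_44zb = phones_44zb * PHONE_LEN_M / EARTH_CIRC_M
print(f"44 ZB: {phones_44zb:,} phones, circling Earth {laps_44zb:.0f} times")
```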

This is still hard to visualize, so let’s look at it another way. You could tile the entire City of New York two times over (and the Bronx and Manhattan three times over) with smartphones filled to capacity with data to store 44 ZB. That’s a big Data Center!

Number of smartphones (with 2020 tech) you would need to store 44 ZB (click for higher resolution)

This number also represents 25 smartphones per person for the entire population of the planet. Imagine the challenge of managing data spread out across that many smartphones.

Next Page: Visualizing Zettabytes in Units of Facebook