
On the 45th Anniversary of the Moon Landing: 5 Lessons Apollo’s Program Manager Taught Me at MIT

I originally posted a version of this five years ago, on the 40th Anniversary of the Apollo Moon landing. At that time, social media and smartphones were just starting to explode. Today, as social sharing and mobile give rise to the IoT, these lessons from 1969 are perhaps even more important.

Putting things in perspective

It is easy to feel really proud of our accomplishments, whether we are scaling a consumer application 1,000-fold in one year, rolling out a huge ERP program or even creating a new technology. However, these accomplishments pale in comparison to what the Apollo, Gemini, and Mercury missions achieved 45 years ago. Imagine this scenario:

You are listening to the radio and the President announces that the country is going to put a man on the Moon by the end of the decade. Keep in mind that no one has ever even escaped low Earth orbit, let alone escaped Earth’s gravity, executed Hohmann transfers AND navigated to another body. Now you have to implement the largest engineering project in history, while inventing not only technologies but also whole fields of study. All under the watch of the press, and all completed within one decade.

This is inconceivable to most of us in our work today. It is inspirational.

Success: One small step for man, one giant leap for mankind. (Credit: NASA)

My lucky exposure to the people of Apollo

When I studied aerospace engineering at MIT, we were lucky enough to have several veterans of the Apollo Program on staff as our instructors. Not only were they great instructors; they could also recount first-hand experiences of events that the rest of us could only read about in the history books.

One of these professors was Joe Shea, the original Program Manager of NASA’s Apollo Program (portrayed by Kevin Pollak on HBO’s excellent series, “From the Earth to the Moon”). Contrary to what that series depicted, it was Joe who came up with the concept of splitting the Apollo Program into a series of missions, each achieving technological feats that had never been achieved before.

Joe is also considered by some a founder of the Systems Engineering profession (many consider him the greatest systems engineer who ever lived). This made him the perfect person to teach the capstone class of the aerospace curriculum: Systems Engineering. (Fred Wilson of USV has written a great post on how fun Systems Engineering is and how important it is for engineering leadership.) Every year, Joe would get a project from NASA and guide his students through all aspects of design, simulation, planning and even cost analysis. Our midterms and finals were real-life presentations to the Administrator of NASA.

Under Joe, I got to work on something called “Project Phoenix”: returning to the Moon, but this time with a reusable capsule, landing four astronauts at the pole and keeping them there for 30 days (a much harder prospect). In this project I learned about everything from active risk management to critical path costing to lifting bodies to Class-E solar flares. (How cool was that for a 20-year-old?)

Life lessons I learned from Joe

The technical things I learned from Joe got me my first job at Lockheed Martin (then GE Aerospace). It was great to be able to say that I had worked on a NASA program, helped create both a PDR (Preliminary Design Review) and a CDR (Critical Design Review), and presented elements of them to the Administrator of NASA in Washington.

However, I learned five much more important lessons, independent of aerospace or any other technology, that I have used in the twenty-three years since:

  1. Break Big Challenges into Small Parts. Any obstacle can be overcome if you break it down into smaller items. If these are too large, break them down again. Eventually you will get to things that have clear, straightforward paths to success. Essentially, this is the engineer’s version of “a journey of a thousand miles begins with a single step.”
  2. Know Your Stuff Inside and Out. You cannot be a technology leader who only manages from above. You must understand how the components work. This is the only way you will see problems before they happen. Remember, as the leader you are the only one positioned to connect the “Big Picture” to the execution details.
  3. S#!% Happens. Things break. Schedules are late. People leave the project. Plan for this. Ask yourself every week what can go wrong. Put contingency plans together to address the biggest or most likely of these. Today, this is done in everything from Risk Management to DevOps.
  4. There is No Such Thing as Partial Credit. Yes, unlike a rocket, you can “back out” (essentially un-launch) software. However, the costs of this type of failure are enormous: not only does it cost 3-5x more to back out, fix and regression-test changes, it also frequently results in lost revenue and customers. Get things right in development, then certify them in testing (not the other way around). Don’t count on being able to “back out” after a failed launch; this will become more and more true as we push software to the millions of “things” comprising the IoT. Joe hammered this lesson into our heads with a chilling story: when people forgot it and rushed, three astronauts died during a basic systems test on Apollo 1.
  5. Take Ownership. If you are the leader, you are responsible for the team’s or product’s success. If you are a line manager, you are not only responsible for your area but are being relied upon by your peers for success. If you are a hands-on analyst or engineer you are actually delivering the work that leads to success. In all cases, ensure you do your job right, ask for help when you need it and never lie or hide anything.

Five really important lessons. I am grateful I had the opportunity to learn them before I entered the full-time workforce. I try to “pay this back” by teaching these lessons and concepts everywhere I go.

Before I forget…

Thank you to the men and women of Apollo. Thank you also to the men and women of Gemini and Mercury (it is easy to forget them on this day). You achieved miracles on a daily basis and inspired whole generations of scientists and engineers.

Who Won Sochi? Wrangling Olympic Medal Count Data

I have always been a big fan of the Olympics (albeit I like the Summer Games better, given my interest in Track & Field, Fencing and Soccer). However, something that has always bothered me is the concept of the Medal Count. For years I have seen countries listed as “winning” because their medal count was higher, even though several countries “below” them often had many more Gold medals. Shouldn’t a Gold medal count for more than a Silver (and much more than a Bronze)? What would you rather have as an athlete: three Gold medals or four Bronzes?

Evidently, I am not the only one debating this point. Googling “value of olympic medals for rank count” yielded a range of debates on the first page alone (Bleacher Report, USA Today, The New Republic, the Washington Post and even Bicycling.com). Wikipedia even has an entry on this debate.

This year, however, I noticed throughout the games that Google’s medal count stats page (Google “olympic medal count”) was not ranking countries by absolute medal count. For quite a while Norway and Germany were on top, even when they did not have the highest total number of medals, because they had more Gold medals than anyone else. Clearly Google was using a different weighting than “all medals are alike.” Not a surprise, given their background in data.

I started to wonder what type of weighting they were using. In 1984 (when the Olympics were in Los Angeles) a bunch of gaming companies came out with various Olympic games. Konami’s standup arcade game Track & Field was wildly popular (and highly abusive to trackballs). The game I used to play the most (thanks to hacking it) was Epyx’s Summer (and Winter) Games. This game had the “real” challenge of figuring out who won the Olympics, as it was a head-to-head multi-player game (someone had to win). It used the 5:3:1 Medal Weighting Model to determine this: each Gold medal was worth 5 points, each Silver 3 points, and each Bronze 1. I wondered if Google was using this model, so I decided to wrangle the data and find out.
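For concreteness, the 5:3:1 model reduces to a one-line formula. Here is a minimal Python sketch (the function name and sample counts are mine, purely for illustration, not Epyx’s actual code):

```python
# 5:3:1 Medal Weighting Model: Gold = 5 points, Silver = 3, Bronze = 1
def weighted_score(gold, silver, bronze, weights=(5, 3, 1)):
    """Return a country's point total under a Gold:Silver:Bronze weighting."""
    w_gold, w_silver, w_bronze = weights
    return gold * w_gold + silver * w_silver + bronze * w_bronze

# Under 5:3:1, three Golds beat four Bronzes (15 points vs. 4)
print(weighted_score(3, 0, 0))  # 15
print(weighted_score(0, 0, 4))  # 4
```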

Data processing

I used Google’s Sochi Olympic Medal Count as my source of data, as it had both the counts and Google’s ranks of the winners (I got this via their Russian site so I could get final results; 26 countries won at least one Olympic medal).

Of course, by the end of the Olympics it was a bit less interesting, as Russia had both the most medals and the highest rank. However, I still wanted to figure out their weighting as a curiosity exercise. I built a model that calculated ranks for various Medal Weighting Model (MWM) approaches and computed the absolute value of each Rank Error delta from Google’s ranking. I then summed these errors into a Total Rank Error (TRE) and highlighted any non-zero error, enabling me to quickly see where a given MWM weighting diverged.
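In code, that calculation looks something like the following Python sketch, assuming the medal counts and Google’s ranks have already been loaded into simple structures (all names here are illustrative; ties are broken by sort order, which a real comparison would need to handle more carefully):

```python
def rank_countries(medals, weights):
    """Rank countries by weighted medal score (highest score = rank 1).

    medals:  dict of country -> (gold, silver, bronze) counts
    weights: (w_gold, w_silver, w_bronze) tuple
    """
    scores = {country: sum(c * w for c, w in zip(counts, weights))
              for country, counts in medals.items()}
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {country: rank for rank, country in enumerate(ordered, start=1)}

def total_rank_error(medals, weights, google_ranks):
    """Total Rank Error (TRE): sum of |model rank - Google rank|."""
    model_ranks = rank_countries(medals, weights)
    return sum(abs(model_ranks[c] - google_ranks[c]) for c in google_ranks)
```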

Trying out a few random models

The first model I tried was the “Bob Costas Model,” where every medal is worth the same (1:1:1). This was clearly different from Google’s, as it had a TRE of 72. I then tried the Epyx 5:3:1 model… no dice: it had a TRE of 35 (better than Bob, but not great). I tried a few other mathematical series:

  • Fibonacci: 0,1,1 (TRE=50); 1,1,2 (TRE=42); and 1,2,3 (TRE=43)
  • Fibonacci Prime (TRE=54)
  • Abundant Numbers (TRE=54)
  • Prime Numbers (TRE=42)
  • Lucas Numbers (TRE=28)
  • Geometric Sequence (TRE=23)
  • Weird Numbers (TRE=2)
  • Happy Numbers (TRE=39)

I then tried logical sequences, such as the lowest ratios where a Silver is worth more than a Bronze and a Gold is worth more than both (TRE=31). Still no luck.
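Each of these trials amounts to a single call to the total_rank_error sketch above with a different weight tuple, reusing the medals and google_ranks structures from before (the candidate values below are illustrative, not necessarily the exact series I tested):

```python
# A few candidate weightings (Gold, Silver, Bronze)
candidates = {
    "Bob Costas 1:1:1": (1, 1, 1),
    "Epyx 5:3:1": (5, 3, 1),
    "Geometric 4:2:1": (4, 2, 1),
}
for name, weights in candidates.items():
    print(name, total_rank_error(medals, weights, google_ranks))
```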

Getting more systematic

I decided to get more systematic and visualize the TRE for different MWM weights. I kept Whole Number weights, operating under the general principle that each medal is worth a whole number of points (true in most sports, but not in events like Diving, Figure Skating and Gymnastics; nevertheless, I wanted to keep things simple).

I first looked at the influence of the Gold weight, WGOLD:1:1, where I varied WGOLD from 1 upwards. This clearly showed a rapid decay in TRE that flattened out at a TRE of 2 once a Gold was worth 13x a single Silver or Bronze medal:

Rapid decay in TRE as Gold medals gain higher weighting
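That sweep is just a loop around the same helper (the upper bound here is mine, chosen to cover the flattening point):

```python
# Sweep the Gold weight while holding Silver and Bronze at 1
for w_gold in range(1, 21):
    print(w_gold, total_rank_error(medals, (w_gold, 1, 1), google_ranks))
```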

This reinforced that Gold was King, but that Silver was better than Bronze by some amount (not surprising). I then kept WGOLD at 13 and started to reduce WBRONZE. I found an interesting result: as soon as I made Bronze worth any value smaller than Silver (even ε = 0.001), I got zero TRE (a complete match to Google’s rank). However, I could not imagine a scoring system of 13:1:<1 (or 13:1:0.99); it was just too geeky. As such, I tried a different approach, using only Whole Number ratios of Gold:Silver:Bronze. The lowest ratios I found with zero TRE were the following:

  • Gold=21, Silver=2, Bronze=1
  • Gold=29, Silver=3, Bronze=1
  • Gold=40, Silver=4, Bronze=1
  • Gold=45, Silver=5, Bronze=1

TRE never went to zero when Bronze was given zero weight. Of these models, 40:4:1 had the most symmetry (10:1 and 4:1), so I used that as my approximated Google Olympic Rank MWM (it had zero TRE for all medal winners).
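A brute-force version of that whole-number search might look like the sketch below (the bound of 50 is arbitrary but large enough to cover the ratios above; requiring Gold > Silver > Bronze ≥ 1 encodes the “Silver beats Bronze” finding):

```python
# Exhaustive search for whole-number weightings that exactly
# reproduce Google's ranking (TRE == 0)
zero_tre_models = []
for w_gold in range(2, 51):
    for w_silver in range(2, w_gold):
        for w_bronze in range(1, w_silver):
            weights = (w_gold, w_silver, w_bronze)
            if total_rank_error(medals, weights, google_ranks) == 0:
                zero_tre_models.append(weights)
print(zero_tre_models)
```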

So who won?

I figured I would look at the Top Five Ranked Countries over various models:

Demonstration of how easy it is to add a Grading Curve to the rankings. The higher the TRE, the more underweighted winning Gold medals (i.e., truly winning events) are. The country in bold is the one that benefits most from the Grading Curve.

Obviously, Russia is the all-around winner, as they won the most medals, the most Golds and the most Silvers (making this exercise a bit less interesting than it was about a week ago). However, it will be fun to apply this model again in 2016.

And at least Mr Putin is happy.