
Planning for a post-work future

It’s that time of year when many bloggers make their predictions for the coming year. Rather than do that, I wanted to look a generation out, to when those entering college today send their own children to college, and think about the effects of automation on our future. This is not a prediction per se. Instead it is more of an RFI (a request for ideas).

As a caveat, I work in automation (machine intelligence and machine learning, sensor and computer vision, automated controls and planning systems). I also have a prior background in policy—one that is driving me to think about the bigger picture of automation. However, this post is not about the work I am doing now. It is about the near-term “practical realities” I can imagine.

We are at the onset of an “Automation Renaissance.” Five years ago, most people outside of tech thought of self-driving cars as something from the Jetsons. Last week, the governor of Michigan signed a bill allowing live testing of self-driving cars without human drivers. Chatbots are not just the stuff of start-ups. Last month, I attended a conference where Fortune-500, large-cap companies shared the results of pilots to replace back-office help desks and call centers with chatbots. Two weeks ago, I was at a supply chain conference where we discussed initial pilots that are replacing planning and decision-making with machine learning (pilots involving billions of dollars of shipments). Automation is not coming—it is here already, and accelerating. Last week, I was at a conference on advanced manufacturing where the speakers discussed the current and future impacts (good and bad) on jobs in the US.

So what will life (and work) be like in 20 years? Here are just a few things we already have the technology to do today (what remains are the less-futuristic problems of mass production, rollout, support and adoption):

  • If you live in a city or suburb, you will not need to own a car. Instead you can call an on-demand autonomous car to pick you up. No direct insurance costs, no auto loans, less traffic and pollution. In fact, the cars will tell Public Works about detected potholes (and street light and infrastructure sensors will tell Public Works when maintenance is needed).
  • If you work in a manufacturing plant, you will be one of a smaller number of workers monitoring and coordinating advanced manufacturing (automation plus additive manufacturing). The parts will be more durable and require fewer component suppliers—which also means fewer delays, lower cost and less pollution.
  • If you work on a farm, you will demonstrate (via supervised learning) to drones how you want plants pruned and picked, holes dug, etc. These drones will reduce back-breaking labor, reduce accidents and automatically provide complete traceability of the food supply chain (likely via blockchain).
  • If you do data entry or transcription, your work will be replaced with everything from voice recognition-based entry, to blockchain-secured data exchange, to automated data translation (like the team is doing at Tamr).
  • 95% of call center interactions will be handled by chatbots. Waiting for an agent will be eliminated (as will large, power-hungry call centers). The remaining 5% of jobs will be humans handling escalations of the things the machines cannot resolve.

These are just five examples. They are all “good outcomes” in terms of saving labor, increasing the quantity and quality of output, reducing cost (and price), and even improving the environment. (If you are worried about the impact of computing on energy, look at what Google is doing to make computing greener.)

However, they will all radically change the jobs picture worldwide. Yes, they will create new, more knowledge-oriented jobs. Nevertheless, they will greatly reduce the number of other jobs, and ultimately, I believe, we will have fewer net jobs overall. This is the “post-work future” (really a “post-labor future,” but that term sounds a bit too political). What do we do about that?

We could ban automation. However, any company or country that embraces it will gain such an economic advantage that the rest will eventually be forced to adopt automation as well. The smarter answer is to begin planning for an automation-enhanced future. The way I see it, our potential outcomes fall between the following two extremes:

  1. The “Gene Roddenberry” Outcome: After eliminating the Three D’s (dirt, danger, and drudgery) and using automation to reduce cost and increase quantity, we free up much capacity for people to explore more creative outcomes. We still have knowledge-based jobs (medicine, engineering, planning). However, more people can spend time on art, literature, etc. This is the ideal future.
  2. The “Haves vs. Have Nots” Outcome: Knowledge workers and the affluent do incredibly well. Others are left out. We have the resources (thanks to higher productivity), but we wind up directing them to programs that essentially consign many people to living “on the dole,” as it was called when I lived in the UK. While this is humane, it omits the creative ideas and contributions of whole segments of our population. This is a bad future.

Crafting where we will be in 20 years is not just an exercise in public policy. It will require changes in how we think and talk about education, technology, jobs, entitlement programs, etc. Thinking about this often keeps me up at night. To be successful, we will need to do this across all of society (urban and rural, pre-school through retirement age, across all incomes and education levels, across all countries and political parties).

Regardless of what we do, we need to get started now. Automation is accelerating. Guess how many autonomous vehicles are expected to be on the roads in the US alone by 2020 (barely three years from now):

10 million

Note: The above image is labeled for re-use by Racontour. Read more on the post-work world at The Atlantic magazine and Aeon magazine.

Twitter traffic jams in Washington, created by… John Oliver

Summary: In the first week of June, 20% of the Tweets about traffic, delays and congestion by people around the Washington Beltway were caused by John Oliver’s “Last Week Tonight” segment about Net Neutrality.

At work, we are always exploring a wide range of sensors to obtain useful insights that can be used to make work and routine activities faster, more efficient and less risky. One of our Alpha Tests examines the use of “arrays” of highly targeted Twitter sensors to detect early indications of traffic congestion, accidents and other sources of delay. Specifically, we are training our system to determine whether Twitter is a good traffic sensor (by “good,” in data-science speak, we mean whether we can train a model for traffic detection with a good balance of precision and recall, and hence a good F1 score). To do this, I set up a test bed around the nation’s second-worst commuter corridor: the Washington DC Beltway (my own backyard).
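For readers who want the F1 score made concrete, here is a minimal sketch (in Python, with made-up labels and predictions rather than our actual data) of how precision, recall and F1 are computed when scoring a traffic-detection model against hand-labeled tweets:

    # Minimal sketch: scoring a tweet classifier against hand-labeled examples.
    # The labels and predictions below are illustrative placeholders, not real data.

    def precision_recall_f1(y_true, y_pred):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        return precision, recall, f1

    # 1 = tweet really is about commuter traffic, 0 = it is not
    labels      = [1, 1, 0, 1, 0, 0, 1, 0]
    predictions = [1, 0, 0, 1, 1, 0, 1, 0]

    p, r, f1 = precision_recall_f1(labels, predictions)
    print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")

A high-precision, low-recall model misses real congestion; a high-recall, low-precision model cries wolf. The F1 score rewards a balance of the two, which is what we need from a usable traffic sensor.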

Earlier this month, our array of geographic Twitter sensors picked up an interesting surge in highly localized tweets about traffic-related congestion and delays. This was not the expected “bad commuter day” kind of surge: the number of topic- and geographically-related tweets seen on June 4th was more than double the expected number for a Tuesday in June around the Beltway, and the number seen during lunchtime was almost 5x normal.
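As a toy illustration of how such a surge stands out against a baseline, here is a hedged sketch that flags hours whose tweet counts far exceed expected levels; the counts, hours and threshold below are invented for illustration and are not our actual data or detection logic:

    # Toy sketch of baseline-vs-surge detection on hourly tweet counts.
    # All numbers here are invented; the production models are considerably richer.

    # Expected hourly counts for a typical June weekday (hypothetical baseline)
    baseline = {11: 40, 12: 45, 13: 42, 14: 38}
    # Observed counts on the day of the surge (hypothetical)
    observed = {11: 90, 12: 210, 13: 195, 14: 85}

    SURGE_RATIO = 2.0  # flag anything at least 2x the expected level

    for hour in sorted(observed):
        ratio = observed[hour] / baseline[hour]
        if ratio >= SURGE_RATIO:
            print(f"{hour}:00 surge detected: {ratio:.1f}x the expected volume")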

So what was the cause? Before answering, it is worth taking a step back.

The folks at Twitter have done a wonderful job of not only letting you fetch tweets based on topics, hashtags and geographies, but also adding some great machine learning-driven processing to screen out likely spammers and suspect accounts. Nevertheless, Twitter data, like all sensor data, is messy. It is common to see tweets with words spelled wrong, words used out of context, or tweets that are simply nonsensical. In addition, people frequently repeat the same tweets throughout the day (a tactic to raise social media exposure) and do lots of other things that you must train the machine to account for.
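As one small, hedged illustration of the kind of cleanup this implies, the sketch below normalizes tweet text and drops near-verbatim repeats from the same account; the helper functions and sample tweets are invented for illustration and are not part of Twitter’s API or our production pipeline:

    # Sketch of simple tweet cleanup: normalize text and drop near-verbatim repeats
    # posted by the same account. Function names and sample data are hypothetical.
    import re

    def normalize(text):
        text = text.lower()
        text = re.sub(r"https?://\S+", "", text)    # strip links
        text = re.sub(r"[^a-z0-9#@ ]+", " ", text)  # strip punctuation and odd characters
        return " ".join(text.split())

    def drop_repeats(tweets):
        seen = set()
        kept = []
        for user, text in tweets:
            key = (user, normalize(text))
            if key not in seen:          # keep only the first copy from each user
                seen.add(key)
                kept.append((user, text))
        return kept

    sample = [
        ("@commuterA", "Beltway jammed at exit 45, again!!!"),
        ("@commuterA", "Beltway jammed at exit 45, AGAIN!!!"),   # repeat for exposure
        ("@commuterB", "Construction delays near Tysons http://t.co/xyz"),
    ]
    print(drop_repeats(sample))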

That’s why we use a Lambda Architecture to process our streaming sensor data (I’ll write about why everyone–from marketers to DevOps staff–should be excited about Lambda architectures in a future post). As such, not only do we use Complex Event Processing (via Apache Storm) to detect patterns as they happen; we also keep a permanent copy of all raw data that we can explore to discover new patterns and improve our machine learning models.
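For readers new to the idea, here is a much-simplified conceptual sketch of the two halves of a Lambda Architecture: a speed layer keeping running counts for real-time detection, and a batch layer archiving every raw event for later replay. This is illustrative Python only, not our actual Storm topology, and the threshold and event fields are invented:

    # Conceptual sketch of a Lambda-style split: a speed layer maintains
    # incremental counts for real-time pattern detection, while a batch layer
    # keeps every raw event for later exploration and model retraining.
    # This stands in for the real system (Apache Storm plus durable raw storage).
    import json
    from collections import Counter

    raw_archive = []           # batch layer: immutable log of every raw event
    hourly_counts = Counter()  # speed layer: real-time view, cheap to query

    def handle_event(event):
        raw_archive.append(json.dumps(event))   # always keep the raw copy
        hourly_counts[event["hour"]] += 1       # update the real-time view
        if hourly_counts[event["hour"]] > 100:  # illustrative threshold only
            print(f"possible surge at hour {event['hour']}")

    # Later, the batch layer can be replayed to build new views or retrain models:
    def replay(process):
        for line in raw_archive:
            process(json.loads(line))

    handle_event({"hour": 12, "text": "Beltway backed up near the Wilson Bridge"})

The important property is that the raw archive is never thrown away: when a new pattern (like a late-night comedy show masquerading as a traffic jam) shows up, we can replay history and teach the model about it.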

That is exactly what we did as soon as we detected the surge. Here is what we found: the cause of the traffic- and congestion-related Twitter surge around the Beltway was… John Oliver:

  1. In the back half of June 1st’s episode of “Last Week Tonight” (HBO, 11pm ET), John Oliver had an interesting 13-minute segment on Net Neutrality. In this segment he encouraged people to visit the FCC website and comment on this topic.
  2. Seventeen hours later, the FCC tweeted that “[they were] experiencing technical difficulties with [their] comment system due to heavy traffic.” They tweeted a similar message 74 minutes later.
  3. This triggered a wave of re-tweets and comments about the outage in many places. Interestingly, this wave was delayed around the Beltway: it surged the next day, just before lunchtime in DC, and continued throughout the afternoon. The two spikes came at lunchtime and just after work. Evidently, people are not re-tweeting while working. The timing of the spikes also reveals some interesting patterns in Twitter use in DC.
  4. By 4am on Wednesday the surge was over. People around the Beltway were back to their normal tweeting about traffic, construction, delays, lights, outages and other items confounding their commute.

Of course, as soon as we saw the new pattern, we adjusted our model to account for it. However, we thought it would be interesting to show in a simple graph how much “traffic on traffic, delays and congestion” Mr. Oliver induced in the geography around the Beltway over a 36-hour period. Over the first week of June, one out of every five tweets about traffic, delays and congestion by people around the Beltway was not about commuter traffic, but about FCC website traffic caused by John Oliver:

Tweets from people geographically Tweeting around the Washington Beltway on traffic, congestion, delays and related frustration for the first week of June.

Obviously, a simple count of tweets is a gross measure. To really use Twitter as a sensor, one needs to factor in many other variables: text vs. hashtags, tweets vs. mentions and re-tweets, the software client used to send the tweet (e.g., HootSuite is less likely to be a good source of accurate commuter traffic data), the number of followers the tweeter has (not a simple linear weighting) and much more. However, the simple count is a useful first-order visualization. It also makes for interesting “water-cooler conversation.”
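To make that point concrete, here is a deliberately toy sketch of how a single tweet’s contribution to a traffic signal might be weighted; the fields, client names and weights are invented for illustration and are not the factors or values our production model actually uses:

    # Illustrative only: a toy weighting of one tweet's contribution to a traffic
    # signal. The fields and weights are invented; a real model learns these.
    import math

    def tweet_weight(tweet):
        w = 1.0
        if tweet["client"] in {"HootSuite", "Buffer"}:  # scheduled/marketing clients
            w *= 0.3                                    # are weaker traffic evidence
        if tweet["is_retweet"]:
            w *= 0.5                                    # retweets echo, rather than observe
        # follower count helps a little, but far from linearly
        w *= 1.0 + 0.1 * math.log10(1 + tweet["followers"])
        return w

    example = {"client": "Twitter for iPhone", "is_retweet": False, "followers": 250}
    print(round(tweet_weight(example), 2))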