After Moving to Slack: Inbox Zero (at least twice per week)

TL;DR Version: Moving my entire Engineering organization to Slack and adopting a ChatOps collaboration mindset has reduced email volume by over 95% and now enables us to resolve issues in a quarter of the time.

I have always been a fan of using automation, webhooks, and chat-assisted operations to streamline work and enable collaboration across different locations. However, this traditionally required some Engineering and Operations (i.e., DevOps) investment in setting up collaboration servers, programming bots, etc. Slack has changed this, virtually eliminating all the technical friction of moving to a ChatOps model. Here is the story of how Slack enabled us to move to a ChatOps environment with far less email, faster response times, and greater overall productivity.

In late 2013, I joined a new company that, at the time, had no one on staff with a DevOps-style background. After coming from several years of chat-integrated operations, it felt like hitting the brakes. One night a few weeks later, on February 9, I saw a tweet by @mparca about this great new service called Slack. I took a quick look and realized I could achieve everything that HipChat and Hubot did, with a simple push-button SaaS service. My initial setup (our organization, some channels, and hooks to GitHub) took less than 10 minutes. As it was free (for many features), I did not even need to process a Purchase Order (even better). I was off to setting up ChatOps at a new place.

Initially, things went pretty slowly. At the time, most of our tech stack and tools were hosted on-premise. Our chat tool of choice was Skype (not a hook-friendly app). I got a few people to move to Slack, but not many.

Over time, as we implemented a full continuous integration and deployment chain, we added more and more hooks into Slack. First came a move to Atlassian (Cloud). Next Jenkins. Next Sentry. Then Ansible. Then Icinga. Then came custom RTM scripts for more complex things, such as letting our Data Scientists know they have left an idle PySpark context running for more than eight hours. What made this so easy was that everything but the custom RTM scripts could be done in fewer than five mouse clicks (it is very helpful that so many collaboration and monitoring tools support webhooks).
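To give a flavor of those custom scripts, here is a minimal sketch of the idle-PySpark watchdog. It posts through a Slack incoming webhook rather than the RTM API for brevity, and the webhook URL, driver hosts, and idle-detection details are illustrative assumptions rather than our production code; it leans on Spark's monitoring REST API, which the driver exposes on its UI port.

```python
import datetime as dt
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder
DRIVERS = ["ds-notebook-1:4040", "ds-notebook-2:4040"]  # placeholder hosts
IDLE_LIMIT = dt.timedelta(hours=8)

def parse_ts(ts):
    # Spark's REST API renders timestamps like "2016-05-01T12:34:56.789GMT".
    return dt.datetime.strptime(ts.replace("GMT", ""), "%Y-%m-%dT%H:%M:%S.%f")

def last_activity(host):
    # Most recent job completion across all applications on this driver.
    # (A production check would also skip apps with jobs still running.)
    newest = None
    for app in requests.get(f"http://{host}/api/v1/applications").json():
        jobs = requests.get(
            f"http://{host}/api/v1/applications/{app['id']}/jobs").json()
        done = [parse_ts(j["completionTime"]) for j in jobs if "completionTime" in j]
        if done:
            newest = max(done) if newest is None else max(newest, max(done))
    return newest

for host in DRIVERS:
    seen = last_activity(host)
    if seen and dt.datetime.utcnow() - seen > IDLE_LIMIT:
        requests.post(SLACK_WEBHOOK, json={
            "text": f":zzz: PySpark context on {host} idle since {seen} UTC"})
```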

As we added more hooks and started to bring more people onboard, I noticed an interesting shift. People joining our team began to just use Slack to communicate. One developer would come across an interesting new open source repo or article and share it with the rest in #general. Developers with DevOps privileges would jump on issues as soon as they saw a Sentry alert in #prod (saving the need to even text or phone the on-call engineer). Some people even answered questions in code review while doing things like waiting at the airport to depart on vacation.

Today our Engineering teams (Product, Hardware, Software, Data, and Ops) all primarily use Slack for communications. Most even use it in favor of texting. We Slack each other tickets that are ready for work, UAT, or release. We use group chats to answer questions about stories, designs, bugs, and more. We use Slack channels integrated with GitHub for better code reviews. We use Slack to facilitate pair programming (and pair testing). Slack is now our default tool for issue diagnosis, as sharing log messages, code snippets, and JSON is much clearer thanks to native markdown.

We achieve this with the following channel model:

  • One channel for each environment (so we can let people know if we are about to add nodes, run a load test, etc.). We have our respective Jenkins, Ansible, Icinga, and Sentry hooks tied into each environment channel as well.
  • One channel for each code repository (to see PRs and conduct code reviews). We have aligned our JIRA projects with these to integrate tickets as well.
  • Some basic team channels for more focused group conversations.
  • A #rm channel that provides a simple-to-read log of what was released, and when (a sketch of the release-log hook follows this list).
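As an illustration, the #rm release log is just a one-line post from the tail end of a deployment job to the incoming webhook configured for that channel. The URL, function name, and arguments below are placeholders, not our actual pipeline:

```python
import requests

RM_WEBHOOK = "https://hooks.slack.com/services/T000/B000/ZZZZ"  # placeholder URL

def log_release(version, git_sha, environment):
    # Post a one-line release record to #rm; arguments are illustrative.
    requests.post(RM_WEBHOOK, json={
        "text": f"Released {version} ({git_sha}) to {environment}"})

log_release("2.4.1", "abc1234", "production")
```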

As our organization moved to this model, life and work got easier in some rather visceral ways:

First, I have been able to dial down my notifications to ping me only when one of four things happens: I get a call, I get a text, I get a direct Slack message, or there is a public Slack message in a mission-critical channel. I no longer get endless interruptions, making me more productive at work (and more attentive in meetings and at home). If my phone does ping, it means there is something very important, which actually lets me react to these issues faster.

Second, my email volume is down over 95%. The bulk of the emails I now get are related to true business questions (vs. endless status messages and FYIs). As a result, I can answer email faster and now regularly hit “Inbox Zero” at least twice a week—while managing a 24x7xForever SaaS Engineering organization with follow-the-sun development and operations spanning California, Washington, Europe, and Asia.

Our full embrace of Slack did not happen overnight. It evolved organically over a period of about 18 months, a natural rate of adoption for organizational change. Because it was organic, we did not have to institute policies that forced usage. Instead, we allowed our teams to adopt Slack naturally in ways that made work easier. I hope more organizations can make this transition, as everyone could benefit from less email and fewer interruptions.

Natural adoption of Slack over other forms of communication would not have happened if the usability were not as good as it is. One of my favorite features is how well Slack detects when I am no longer at my desk: if I walk down the hall, my phone chirps on key Slack messages; when I sit back down, my phone stops and my laptop takes over. This happens within seconds.

Oh, BTW: we do all of this with the baseline free Slack account. That’s one less excuse not to give it a try.

PS – Want to work in an environment like this? Check us out.

If a tree falls in the woods and no one hears it, did it happen? Not in Streaming Analytics

Interest in “Streaming Analytics” has exploded over the past few years. The reasons are two-fold. First, the rise of the Internet of Things has made it possible for the first time ever to get data directly (and automatically) from infrastructure, cars, homes, factories, and more, all without a human ever having to do anything. To put this in perspective, last quarter more new automobiles than new cellphones were connected to mobile networks. Second, the technology is now readily available to implement streaming analytics at massive scale without needing to invent your own frameworks. Not one but three technology projects (Storm, Spark, and Flink) are available to choose from. One of them, Apache Spark, is now the second-fastest growing open source project in history.

Streaming Analytics is a very fun field to be in (I have been in it for 22 years, spanning the national security arena, eCommerce, med-tech, and now Industrial IoT). Taking in data faster than any human being could examine it and analyzing it in near real-time to make split-second decisions provides omnipresent knowledge and enormous business value. However, Streaming Analytics presents a new challenge that does not exist in traditional After-the-fact Analytics:

You need to figure out how to make decisions on data that you do not have yet, and may not receive in time for it to be of any use.

Three real-world examples

To put it in philosophical parlance: how does anyone know that a tree in the forest fell down if no one ever sees (or hears) it fall? As philosophical as this sounds, it can have multi-million-dollar impacts in the real world. Here are three examples:

Example 1: eCommerce Chatbot

My chatbot is engaged with a new prospective customer who may be eligible (based on her mobile number) for our bank’s highest-value credit card. Unfortunately, that data is delayed in getting to my bot. As a result, at this point in time, I do not know whether the customer is: very valuable, average, or a credit risk. What does my chatbot do?


Example 2: Guaranteed Shipping

I have a booking to deliver high-value cargo to a customer site by end of business today. It is now 15 minutes after the day is over. I might be inclined to escalate to my carrier that the container has not arrived. However, at this point in time, I cannot tell whether the container arrived but the signal from the carrier is delayed in getting to me, or the container did not arrive. What do I do?


Example 3: Infrastructure Security Monitoring

I run a cattle ranch that spans hundreds of thousands of acres. I have equipped all gates in my Smart Ranch with sensors that alert me if any are open (so I can prevent the cattle from getting away). The sensors send updates every 15 minutes. However, one of the gate sensors is a few minutes late. At this point in time, I do not know whether the gate is open or closed. Does my system trigger an alert?


What makes Streaming Analytics different

All of these challenges stem from lack of information. Lack of information is typical in analytics (as are messy data, data gaps, corrupt data, duplicate data, and many other issues). However, in streaming analytics there is one critical difference: you will eventually have the data you need right now to answer your question. By the time you receive it, though, it will be too late to make your decision: the eCommerce customer will be gone; your freight contract will be honored or broken; the cattle will be safe or will have gotten away.

What makes this especially difficult is that all the parties involved with your business will eventually know the answer as well. If my chatbot fails to offer a valuable customer the best credit card, the line-of-business GM will ask why “it was so stupid.” If I call the customer to tell them the freight has not arrived and they respond with “but it got here 10 minutes before closing”, I will look stupid. It all boils down to this:

People may not know when After-the-fact Analytics miss a point; however, everyone will know that your Streaming Analytics made a mistake.

That can be stressful 😉

What’s a person to do?

The essential thing to remember when designing your Streaming Analytics solution is this:

Close enough and in time is much more valuable than perfect and too late

This means you need to build your solution to make a decision based on the information available (rather than waiting until the critical moment has passed). The trick is determining what is “close enough”. The answer to that question depends on your business context. Specifically, given your context, is it better to accidentally do something you should not have (a Type I error), or is it better to not do something you should have done (a Type II error)?

Let’s look at how this works in each of the three examples:

Example 1: eCommerce Chatbot

Our business context determines that it is far worse to get a prospective customer excited about an offer we cannot deliver than to offer a less valuable package (i.e., we are Type II biased, something typical in ad-tech and eCommerce). We do not make the highest-value offer.

Depending on our Risk Policies, we make the normal offer (one for which a majority of customers qualify) or shunt the customer to a slower process (email vs. chat) to wait for the data to catch up (essentially shifting to batch). Most commerce companies have created default packages that allow the former action, allowing them to make more money in the “80% most likely case”. We could also apply a machine learning algorithm to guess the best alternative offer, maximizing revenue and minimizing the risk of an angry customer (or wasted time). A sketch of this decision logic follows.
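Here is a minimal sketch of that Type II-biased decision; the names (`credit_profile`, `fallback_model`, the offer objects) are all hypothetical illustrations, not a real API:

```python
# Type II-biased offer selection: never promise the best offer without data.
def choose_offer(credit_profile, best_offer, default_offer, fallback_model=None):
    if credit_profile is None:
        # The data has not arrived yet; decide with what we have.
        if fallback_model is not None:
            # Optional ML guess at the most valuable offer we can safely make.
            return fallback_model.predict_safe_offer()
        return default_offer  # the "80% most likely case"
    return best_offer if credit_profile.qualifies_for_best else default_offer
```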

Example 2: Guaranteed Shipping

Our business context indicates that it does not make sense to alert that we are late when we do not know it (yet), especially given the likelihood that this could result in some “egg on our face” when the customer asks why we did not know the container arrived 20 minutes ago. As a result, we do not alert that we are late at 5:00pm. We make the call when we know for sure that the container was on time vs. late (i.e., when the delivery message actually arrives). This scenario is also Type II biased.

However, we do not want to expose ourselves to a completely irate customer in high-value circumstances. As such, we place a secondary streaming analytic in place: if we have not received confirmation within 60 minutes of scheduled delivery, we trigger an alert to reach out to our delivery carrier and find out the real status (i.e., by taking the expensive step of talking to a person vs. a sensor). We determined the “magic number” of 60 minutes by doing After-the-fact Analytics, which showed that waiting this long automatically resolves 80% of false positives while still giving us enough heads-up to detect the true issues. If we are even smarter, we can have our After-the-fact Analytics system automatically calculate the magic number to delay alerts based on location, time-of-day, day-of-week, and other features.
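In code, the secondary analytic reduces to a grace-period check. This is a sketch with assumed names, and the 60-minute constant stands in for whatever number the batch analysis actually produces:

```python
import datetime as dt

GRACE = dt.timedelta(minutes=60)  # "magic number" learned from batch analytics

def should_escalate(scheduled_delivery, confirmed_at=None, now=None):
    """Escalate only once confirmation is more than GRACE past schedule."""
    now = now or dt.datetime.utcnow()
    # 5:15pm with no confirmation: no alert yet; 6:01pm: call the carrier.
    return confirmed_at is None and now > scheduled_delivery + GRACE
```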

Example 3: Infrastructure Security Monitoring

Our business context indicates that it is no good closing the gate after all the cattle have gotten away. As such, we have programmed our Streaming Analytics system to alert us if the gate is opened (before a human has sent an “I am opening the gate” message) OR if we have not received confirmation that the gate is closed for longer than 15 minutes. Essentially, we are Type I biased (not uncommon in safety and security situations).

Unfortunately, this bias will result in lots of alerts: essentially, any time a sensor message is delayed in the cell network, our alarm will go off. Luckily, we have some more advanced analytic techniques to help with this. Namely, we can use a Lambda Architecture model that provides self-healing: the initial lack of confirmation that the gate is closed triggers an alert; the arrival of the delayed message that the gate WAS closed then cancels this alert (with a resolution message). This is still a bit chatty. However, it short-circuits false positives and prevents the need to send a worker (or a drone) all the way out to the gate to check whether it is open.
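Here is a minimal sketch of that self-healing pattern; the class, message fields, and notification call are all assumptions for illustration (a real system would page or post to Slack rather than print):

```python
import datetime as dt

REPORT_INTERVAL = dt.timedelta(minutes=15)  # expected sensor cadence

class GateMonitor:
    """Raise an alert on a missed 'closed' report; auto-resolve on late arrival."""

    def __init__(self):
        self.last_closed = {}    # gate_id -> time of last "closed" report
        self.open_alerts = set()

    def on_report(self, gate_id, status, at):
        if status == "closed":
            self.last_closed[gate_id] = at
            if gate_id in self.open_alerts:  # the delayed message finally arrived
                self.open_alerts.discard(gate_id)
                self.notify(f"RESOLVED: gate {gate_id} confirmed closed at {at}")
        else:
            self.notify(f"ALERT: gate {gate_id} reported OPEN at {at}")

    def sweep(self, now):
        # Called on a timer: alert on any gate overdue for a "closed" report.
        for gate_id, seen in self.last_closed.items():
            if now - seen > REPORT_INTERVAL and gate_id not in self.open_alerts:
                self.open_alerts.add(gate_id)
                self.notify(f"ALERT: gate {gate_id} silent since {seen}")

    def notify(self, message):
        print(message)  # stand-in for a pager or chat notification
```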

Conclusion

Yes, Streaming Analytics is harder than After-the-fact Analytics. However, its near real-time omnipresence (not omniscience) offers tremendous benefits. You just need to think in philosophical terms when designing your analytic rules.

