
After Moving to Slack: Inbox Zero (at least twice per week)

TL;DR Version: Moving my entire Engineering organization to Slack and adopting a ChatOps collaboration mindset has reduced email volume by over 95% and now enables us to resolve issues in a quarter of the time.

I have always been a fan of using automation, web hooks, and chat-assisted operations to streamline work and enable collaboration across different locations. However, this traditionally required some Engineering and Operations (i.e., DevOps) investment in setting up collaboration servers, programming bots, etc. Slack has changed this, virtually eliminating all the technical friction of moving to a ChatOps model. Here is the story of how Slack enabled us to move to a ChatOps environment with far less email, faster response times, and greater overall productivity.

In late 2013, I joined a new company that, at the time, had no one on staff with a DevOps-style background. After coming from several years of chat-integrated operations, it felt like hitting the brakes. A few weeks later, on February 9, I saw a Tweet by @mparca about a great new service called Slack. I took a quick look and realized I could achieve everything that HipChat and Hubot did, with a simple push-button SaaS service. My initial setup (our organization, some channels, and hooks to GitHub) took less than 10 minutes. Since it was free (for many features), I did not even need to process a Purchase Order (even better). I was off to setting up ChatOps at a new company.

Initially, things went pretty slowly. At the time, most of our tech stack and tools were hosted on-premise. Our chat tool of choice was Skype (not a hook-friendly app). I got a few people to move to Slack, but not many.

Over time, as we implemented a full continuous integration and deployment chain, we added more and more hooks into Slack. First came a move to Atlassian (Cloud). Next Jenkins. Next Sentry. Then Ansible. Then Icinga. Then came custom RTM scripts for more complex things, such as letting our Data Scientists know they have left an idle PySpark context running for more than eight hours. What made this so easy was that everything but the custom RTM scripts could be set up in fewer than five mouse clicks (it helps that so many collaboration and monitoring tools support web hooks).
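To give a flavor of what those custom scripts do, here is a minimal sketch of an idle-session notifier. It is an illustration only: the channel name, the find_idle_spark_sessions() helper, and the use of today's slack_sdk Web API client (rather than the RTM scripts we actually wrote) are all assumptions, not our production code.

```python
# Hypothetical sketch: warn data scientists about idle PySpark sessions via Slack.
# Assumes a SLACK_TOKEN environment variable and a find_idle_spark_sessions()
# helper that returns (user, hours_idle) pairs -- both stand-ins for real lookups.
import os
from slack_sdk import WebClient

IDLE_LIMIT_HOURS = 8
client = WebClient(token=os.environ["SLACK_TOKEN"])

def find_idle_spark_sessions():
    """Stand-in for a query against the cluster manager (e.g., a REST API)."""
    return [("a-data-scientist", 9.5)]  # (username, hours idle)

for user, hours_idle in find_idle_spark_sessions():
    if hours_idle >= IDLE_LIMIT_HOURS:
        client.chat_postMessage(
            channel="#data-science",  # assumed channel name
            text=f"@{user}: your PySpark context has been idle for "
                 f"{hours_idle:.1f} hours. Please shut it down if unused.",
        )
```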

As we added more hooks and started to bring more people onboard, I noticed an interesting shift. People joining our team began to just use Slack to communicate. One developer would come across an interesting new open source repo or article and share it with the rest of the team in #general. Developers with DevOps privileges would jump on issues as soon as they saw a Sentry alert in #prod (saving the need to even text or phone the on-call engineer). Some people even answered code review questions while waiting at the airport to depart on vacation.

Today, our Engineering teams (Product, Hardware, Software, Data, and Ops) all primarily use Slack for communication. Most even use it in favor of texting. We Slack each other tickets that are ready for work, UAT, or release. We use group chats to answer questions about stories, designs, bugs, and more. We use Slack channels integrated with GitHub for better code reviews. We use Slack to facilitate pair programming (and pair testing). Slack is now our default tool for issue diagnosis, as sharing log messages, code snippets, and JSON is much clearer thanks to native Markdown support.

We achieve this with the following channel model:

  • One channel for each environment (so we can let people know if we are about to add nodes, run a load test, etc.). We also have our respective Jenkins, Ansible, Icinga, and Sentry hooks tied into each environment channel (see the webhook sketch after this list).
  • One channel for each code repository (to see PRs and conduct code reviews). We have aligned our JIRA projects with these to integrate tickets as well.
  • Some basic team channels for more focused group conversations
  • A #rm channel that provides a simple-to-read log of what was released, and when
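For completeness, here is a hedged sketch of the kind of glue behind these hooks: posting a release note into the #rm channel through a Slack incoming webhook. The webhook URL, bot name, and message text are placeholders, not what our tooling actually sends.

```python
# Hypothetical sketch: post a release note to the #rm channel via a Slack
# incoming webhook. The webhook URL and message contents are placeholders.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX"  # placeholder

payload = {
    "channel": "#rm",            # assumed channel from the list above
    "username": "release-bot",   # display name shown for the hook
    "text": "Released api-service v1.4.2 to production",  # example text
}

response = requests.post(WEBHOOK_URL, json=payload)
response.raise_for_status()
```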

As our organization moved to this model, life and work got easier in some rather visceral ways:

First, I have been able to dial down my notifications so that I am only pinged when one of four things happens: I get a call, I get a text, I get a direct Slack message, or there is a public Slack message in a mission-critical channel. I no longer get endless interruptions, which makes me more productive at work (and more attentive in meetings and at home). If my phone does ping, it means something very important has happened, which actually lets me react to these issues faster.

Second, my email volume is down over 95%. The bulk of the emails I now get are related to true business questions (vs. endless status messages and FYIs). As a result, I can answer email faster and now regularly hit “Inbox Zero” at least twice a week—while managing a 24x7xForever SaaS Engineering organization with follow-the-sun development and operations spanning California, Washington, Europe, and Asia.

Our full embrace of Slack did not happen overnight. It evolved organically over a period of about 18 months, a natural rate of adoption for organizational change. Because it was organic, we did not have to institute policies that forced usage. Instead, we allowed our teams to adopt Slack naturally, in ways that made work easier. I hope more organizations can make this transition, as everyone could benefit from less email and fewer interruptions.

Natural adoption of Slack over other forms of communication would not have happened if the usability were not as good as it is. One of my favorite features is how well Slack detects when I am no longer at my desk: if I walk down the hall, my phone chirps on key Slack messages; when I sit back down, my phone stops and my laptop takes over. This happens within seconds.

Oh BTW, we do all of this with the baseline free Slack account. That’s one less excuse not to give it a try.

PS – Want to work in an environment like this? Check us out.

Moving from Storm to Spark Streaming: Real-life Results and Analysis

In my last post, I explained why we decided to move the Speed Layer of our Lambda Architecture from Apache Storm to Apache Spark Streaming. In this post, I get to the “real meat”:

  • How did it go?
  • What did we learn?
  • And most important of all… would we do it again?

This post recounts our detailed experiences moving to a 100%-Spark architecture. While Spark is arguably the most popular open source project in history (currently its only rival in terms of number of contributors is AngularJS), our experience with it was not all wine and roses. Some experiences were great. Others remain frustrating today, after nearly nine months of live operation streaming mission-critical data. Whether you love Spark or Storm, there are some bragging rights in here for your favorite platform.

Before I get started, I should warn you that this post is pretty long. I could have broken it up into separate posts, one for each category of analysis. However, I thought it was more useful as a single blog post.

Our Real-world Environment

This is not one of those simple streaming analytics run-offs using the canonical “Twitter Word Count” test (Spark version, Storm version). This is a real-life comparison of Storm vs. Spark Streaming after months of live operation in production, analyzing complex, real-life data from many enterprise customers.

We do not use either technology by itself, but instead use it in conjunction with Apache Kafka (Cloudera’s distribution), Apache Cassandra (DataStax’s distribution), and Apache Hadoop (also Cloudera’s distribution, storing data in Apache Parquet format). I am not divulging any trade secrets here, as we list these technologies on our job descriptions for recruiting.

Similarly, we do not simply pass data through a single-stage graph (no robust real-world system uses a single-stage DAG). Instead, our DAG processing traverses 3 to 7 stages, depending on the type of data we receive. At each stage, we persist data back to Kafka for durability and recovery.
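As a rough illustration of what one of those stages looks like (not our actual code), here is a minimal Spark Streaming sketch that reads a stage's input topic from Kafka, applies a transformation, and persists the results to the next stage's topic. The topic names, broker list, and enrich() logic are placeholders, and the Spark 1.x-era Python KafkaUtils API is used purely for illustration.

```python
# Hypothetical sketch of one stage in a multi-stage streaming DAG:
# read from an upstream Kafka topic, transform, and persist the output to a
# downstream topic so the next stage can recover independently.
import json
from kafka import KafkaProducer                      # kafka-python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

BROKERS = "kafka1:9092,kafka2:9092"                  # placeholder broker list

def enrich(event):
    """Stand-in for a stage's real transformation logic."""
    event["stage"] = "enriched"
    return event

def publish_partition(records):
    # One producer per partition; each output record goes to the next stage's topic.
    producer = KafkaProducer(bootstrap_servers=BROKERS)
    for record in records:
        producer.send("stage2-input", json.dumps(record).encode("utf-8"))
    producer.flush()

sc = SparkContext(appName="stage1")
ssc = StreamingContext(sc, batchDuration=5)          # 5-second micro-batches

stream = KafkaUtils.createDirectStream(
    ssc, ["stage1-input"], {"metadata.broker.list": BROKERS})

(stream.map(lambda kv: json.loads(kv[1]))            # value holds the JSON payload
       .map(enrich)
       .foreachRDD(lambda rdd: rdd.foreachPartition(publish_partition)))

ssc.start()
ssc.awaitTermination()
```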

Obviously, everything we run is clustered (no single servers). Along these lines, we only use native installations of downloaded distributions. Everything here can be hosted anywhere you like: your own data center, GCE, AWS, Azure, etc. The results are not tied to managed services like AWS EMR.

This comparison is also not a short-duration test (which would also be artificial). We run our streaming processing 24×7, without scheduled downtime. Our Lambda Architecture enabled us to stream the same data into Storm and Spark at the same time, allowing a true head-to-head comparison of development, deployment, performance, and operations.

Finally, these results are not just based on uniform sample data (e.g., 140-character Tweets). We used a wide range of real-life sensor data, in multiple encoding formats, with messages ranging from 100 bytes to 110 megabytes in size (i.e., real-world, multi-tenant complexity). We tested this at data rates exceeding 48 Gbps per node. We have come up with novel ways to stream data larger than Kafka's message.max.bytes limit in real time along our DAG (disclosing how we do this would be a trade secret 😉).

So what did we learn? I will discuss the results from four perspectives:

  1. Developing with each (a.k.a., the software engineering POV)
  2. Head-to-head performance comparison
  3. Using each with other “Big Data” technologies
  4. Managing operations of each (a.k.a., the DevOps POV)

BTW, Spark Streaming is of course a micro-batch architecture, while Storm is a true event-processing architecture, so Storm Trident vs. Spark Streaming would be the true “apples-to-apples” comparison. However, core Storm vs. Spark Streaming is what we actually ran, so that is what I compare. The move from one-transaction-at-a-time to micro-batches required some changes in conceptual thinking (especially for “exactly once” processing). I include some learnings from this as well.
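To make that conceptual shift concrete, one common pattern (a hedged sketch, not necessarily how we do it) is to make micro-batch output idempotent, so that a replayed batch overwrites its earlier, possibly partial, write instead of duplicating it. The keyspace, table, and reliance on Cassandra's upsert-by-primary-key semantics below are illustrative assumptions.

```python
# Hypothetical sketch: idempotent micro-batch writes keyed on Kafka position.
# If a failed batch is replayed, the same primary keys are written again and
# simply overwrite the earlier attempt. Host, keyspace, and table are placeholders,
# with (source_partition, source_offset) assumed to be the primary key.
from cassandra.cluster import Cluster

session = Cluster(["cassandra1"]).connect("metrics")   # placeholder contact point
insert = session.prepare(
    "INSERT INTO events (source_partition, source_offset, payload) "
    "VALUES (?, ?, ?)")

def write_batch(records):
    """records: iterable of (kafka_partition, kafka_offset, payload) tuples."""
    for partition, offset, payload in records:
        # Cassandra INSERT is an upsert by primary key, so replays are harmless.
        session.execute(insert, (partition, offset, payload))
```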

Next Page: Storm vs. Spark Streaming: Developing With Each