
If a tree falls in the woods and no one hears it, did it happen? Not in Streaming Analytics

Interest in “Streaming Analytics” has exploded over the past few years. The reasons are two-fold. First, the rise of the Internet of Things has made it possible for the first time ever to get data directly (and automatically) from infrastructure, cars, homes, factories and more, all without a human ever having to do anything. To put this in perspective, last quarter more new automobiles were connected to mobile networks than new cellphones. Second, the technology is now readily available to implement streaming analytics at massive scale without needing to invent your own frameworks. Not one but three technology projects (Storm, Spark, and Flink) are available to choose from. One of them, Apache Spark, is now the second-fastest-growing open source project in history.

Streaming Analytics is a very fun field to be in (I have been in it for 22 years, across the national security arena, eCommerce, med-tech, and now Industrial IoT). Taking in data faster than any human being could examine it and analyzing it in near real-time to make split-second decisions provides omnipresent knowledge and enormous business value. However, Streaming Analytics presents a new challenge that does not exist in traditional After-the-fact Analytics:

You need to figure out how to make decisions on data that you do not know about yet, and may not ever find out about in time for it to be worth acting on.

Three real-world examples

To put it in philosophical parlance, how does anyone know that a tree in the forest fell down if no one ever sees (or hears) it fall? As philosophical as this sounds, it can have multi-million-dollar impacts in the real world. Here are three examples:

Example 1: eCommerce Chatbot

My chatbot is engaged with a new prospective customer who may be eligible, based on her mobile number, for our bank’s highest-value credit card. Unfortunately, that data is delayed in getting to my bot. As a result, at this point in time, I do not know whether the customer is very valuable, average, or a credit risk. What does my chatbot do?


Example 2: Guaranteed Shipping

I have a booking to deliver high-value cargo to a customer site by end of business today. It is now 15 minutes after the day is over. I might be inclined to escalate to my carrier that the container has not arrived. However, at this point in time, I cannot tell whether the container arrived but the signal from the carrier is delayed in getting to me, or the container did not arrive. What do I do?


Example 3: Infrastructure Security Monitoring

I run a cattle farm that is hundreds of thousands of acres. I have equipped all gates in my Smart Ranch with sensors to alert me if any are open (so I can prevent the cattle from getting away). The sensors send updates every 15 minutes. However, one of the gate sensors is a few minutes late. At this point in time, I do not know if the gate is open or closed. Does my system trigger an alert?


What makes Streaming Analytics different

All of these challenges are based on a lack of information. Lack of information is typical in analytics (as are messy data, data gaps, corrupt data, duplicate data and many other issues). However, in streaming analytics there is one critical difference: you will eventually have the data you need right now to answer your question. By the time you receive it, though, it will be too late to make your decision: the eCommerce customer will be gone; your freight contract will be honored or broken; the cattle will be safe or will have gotten away.

What makes this especially different is that all the parties involved with your business will know the answer as well. If my chatbot fails to offer a valuable customer the best credit card, the line-of-business GM will ask why “it was so stupid.” If I call the customer to tell them the freight has not arrived and they respond with “but it got here 10 minutes before closing,” I will look stupid. It all boils down to this:

People may not know when After-the-fact Analytics miss a point; however, everyone will know that your Streaming Analytics made a mistake.

 

That can be stressful 😉

What’s a person to do?

The essential thing to remember when designing your Streaming Analytics solution is this:

Close enough and in time is much more valuable than perfect and too late

This means you need to build your solution to make a decision based on the information available (rather than waiting until the critical moment has passed). The trick is determining what is “close enough”. The answer to that question depends on your business context. Specifically, given your context, is it better to accidentally do something you should not have (a Type I error) or to not do something you should have done (a Type II error)?

Let’s look at how this works in each of the three examples:

Example 1: eCommerce Chatbot

Our business context determines that it is far worse to get a prospective customer excited about an offer that we cannot deliver than to offer a less valuable package (i.e., we are Type II biased, something typical in ad-tech and eCommerce). We do not make the highest-value offer.

Depending on our Risk Policies, we either make the normal offer (one for which a majority of customers qualify) or shunt the customer to a slower process (email vs. chat) to wait for the data to catch up (essentially shifting to batch). Most commerce companies have created default packages that allow the former action, letting them make more money in the “80% most likely case”. We could also apply a machine learning algorithm to guess the best alternative offer, maximizing revenue while minimizing the risk of an angry customer (or wasted time).
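To make the bias concrete, here is a minimal Python sketch. The names (fetch_tier_from_store, choose_offer, the timeout value) are hypothetical stand-ins for whatever your real enrichment lookup and offer engine are, not an actual API:

```python
import concurrent.futures
import time

def fetch_tier_from_store(mobile_number):
    """Stand-in for the slow upstream lookup (feature store, credit bureau, etc.)."""
    time.sleep(0.5)                    # simulate the delayed signal
    return "premium"

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def lookup_credit_tier(mobile_number, timeout_s=0.2):
    """Return the credit tier, or None if the data does not arrive in time."""
    future = pool.submit(fetch_tier_from_store, mobile_number)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return None                    # the data has not caught up yet

def choose_offer(mobile_number):
    tier = lookup_credit_tier(mobile_number)
    if tier == "premium":
        return "highest-value-card"    # we know the customer qualifies
    # Type II bias: never promise what we cannot deliver. Fall back to the
    # default package most customers qualify for (or shunt to a slower
    # email/batch flow, per risk policy).
    return "default-card"

print(choose_offer("+1-555-0100"))     # "default-card": the tier arrived too late
```

The key design choice is the timeout: the chatbot always answers within its conversational deadline, with or without the enrichment data.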

Example 2: Guaranteed Shipping

Our business context indicates that it does not make sense to alert that we are late if we do not know it (yet), especially given the likelihood that this could result in some “egg on our face” when the customer asks why we did not know the container arrived 20 minutes ago. As a result, we do not alert that we are late at 5:00pm. We make the call when we know for sure that the container was on time vs. late (i.e., when the delivery message actually arrives). This scenario is also Type II biased.

However, we do not want to expose ourselves to a completely irate customer in high-value circumstances. As such, we put a secondary streaming analytic in place: if we do not receive confirmation within 60 minutes of scheduled delivery, we trigger an alert to reach out to our delivery carrier and find out the real status (i.e., by taking the expensive step of talking to a person vs. a sensor). We determined the “magic number” of 60 minutes by doing After-the-fact Analytics, which showed that waiting this long automatically resolves 80% of false positives while still giving us enough of a heads-up to detect the true issues. If we are even smarter, we can have our After-the-fact Analytics system automatically calculate the magic number to delay alerts based on location, time-of-day, day-of-week and other features.
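A sketch of that secondary analytic, again with hypothetical names; the 60-minute grace period is exactly the “magic number” the after-the-fact analysis would produce:

```python
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(minutes=60)   # the "magic number" from After-the-fact Analytics

def overdue_bookings(bookings, confirmations, now):
    """Yield booking IDs that should trigger a 'call the carrier' alert.

    bookings:      dict of booking_id -> scheduled delivery time
    confirmations: set of booking_ids for which a delivery message has arrived
    """
    for booking_id, scheduled in bookings.items():
        if booking_id in confirmations:
            continue                               # delivered (even if the message arrived late)
        if now - scheduled > GRACE_PERIOD:
            yield booking_id                       # silent for > 60 min: escalate to a human

# Usage: run this check every few minutes over all open bookings.
bookings = {"BKG-1001": datetime(2017, 3, 1, 17, 0)}   # due at 5:00pm
print(list(overdue_bookings(bookings, confirmations=set(),
                            now=datetime(2017, 3, 1, 17, 15))))  # [] -- Type II bias: not yet
print(list(overdue_bookings(bookings, confirmations=set(),
                            now=datetime(2017, 3, 1, 18, 15))))  # ['BKG-1001'] -- time to call
```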

Example 3: Infrastructure Security Monitoring

Our business context indicates that it does no good to close the gate after all the cattle have gotten away. As such, we have programmed our Streaming Analytics system to alert us if the gate is opened (before a human has sent an “I am opening the gate” message) OR if we have not received confirmation that the gate is closed for a period longer than 15 minutes. Essentially we are Type I biased (not uncommon in safety and security situations).

Unfortunately, this bias will result in lots of alerts. Essentially, any time the sensor message is delayed in the cell network our alarm will go off. Luckily, we have some more advanced analytic techniques to help with this. Namely, we can use a Lambda Architecture model that provides self-healing: the initial lack of confirmation that the gate is closed triggers an alert; the arrival of the delayed message that the gate WAS closed then cancels this alert (with a resolution message). This is still a bit chatty. However, it short-circuits false positives and prevents the need to send a worker (or a drone) all the way out to the gate to check whether it is open.
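Here is a sketch of that self-healing pattern, using a simple in-memory class in place of a real Storm/Spark/Flink job (and omitting the “a human said the gate would open” check for brevity); the class and method names are illustrative only:

```python
from datetime import datetime, timedelta

HEARTBEAT = timedelta(minutes=15)      # expected reporting interval of the gate sensors

class GateMonitor:
    """Type I-biased rule with self-healing: alert early, cancel on late data."""

    def __init__(self):
        self.last_seen = {}            # gate_id -> (timestamp, state)
        self.active_alerts = set()     # gates with an unresolved alert

    def on_sensor_message(self, gate_id, state, timestamp):
        self.last_seen[gate_id] = (timestamp, state)
        if state == "open":
            self._raise(gate_id, "gate reported open")
        elif gate_id in self.active_alerts:
            # The delayed "closed" message finally arrived: self-heal.
            self.active_alerts.discard(gate_id)
            print(f"RESOLVED {gate_id}: late confirmation, gate is closed")

    def on_timer(self, now):
        """Called periodically: alert on any gate we have not heard from in time."""
        for gate_id, (timestamp, state) in self.last_seen.items():
            if state != "open" and now - timestamp > HEARTBEAT:
                self._raise(gate_id, "no confirmation in over 15 minutes")

    def _raise(self, gate_id, reason):
        if gate_id not in self.active_alerts:
            self.active_alerts.add(gate_id)
            print(f"ALERT {gate_id}: {reason}")

# Usage
monitor = GateMonitor()
monitor.on_sensor_message("gate-7", "closed", datetime(2017, 3, 1, 9, 0))
monitor.on_timer(datetime(2017, 3, 1, 9, 20))                               # ALERT (overdue)
monitor.on_sensor_message("gate-7", "closed", datetime(2017, 3, 1, 9, 16))  # RESOLVED (late message)
```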

Conclusion

Yes, Streaming Analytics is harder than After-the-Fact Analytics. However, the near real-time omnipresence (not omniscience) it offers provides tremendous benefits. You just need to think in philosophical terms when designing your analytic rules.


Why business owners should care about this thing called the Lambda Architecture

Updated on April 19, adding “Mapping this back to…” final section

In the past 25 years I have seen four things that really made me step back and say, “This changes everything.” The first was the browser (before that we got data from the Internet using news groups and anonymous FTP). The second was open source distribution (we could get whole architectures up in hours, not weeks or months). The third was App Stores (Amazon and Apple allowed us to distribute software with zero marginal cost). The most recent is the Lambda Architecture.

Yep, it is that big.

If you are a business owner or product manager who is into Big Data, data-driven decision-making, iterative A/B testing, machine learning-driven recommendations or any similar analytics application, you have probably heard a passing reference to this thing called the Lambda Architecture. However, anyone digging in deeper immediately finds a menagerie of arcane terms that could only appeal to a developer: Kafka, Storm, Spark, Cassandra, Elephant DB, Impala, Speed Layer, Batch Layer, Immutable Data Store, etc. This is unfortunate, because it obscures how disruptive a change the Lambda Architecture represents. As a result, many people with the decision-making authority to fund technology changes are missing out on something really big.

Life in the traditional architecture world

Traditional architectures are based on transactions. They force collection of data into the formats required to complete a given transaction (i.e., I need to collect N fields of information to process the sale of an item). In addition, traditional architectures allow data to be changed: I can update my profile, update my shopping cart, update my order status, etc. This makes perfect sense if your objective is to complete a transaction.

But what if I want to understand more about who buys what, who is doing what, or, often more importantly, what leads something to happen (or not happen)? I cannot get this from the transaction data; instead I have to perform “data archaeology,” stitching multiple sources of data together to reconstruct what happened just before and after the transaction. If I am lucky, I have all this data. However, more often than not I need to engage in development efforts to collect more data at the time of transaction, log more info, pull it into a data warehouse, change my reports, and then dig in to see if I can figure things out. This not only takes a lot of time and effort; it is also a ripe source of errors.

Lambda flips how we view data on its head

The Lambda Architecture starts with an entirely different premise: that it is impossible to understand today all the future uses and interpretations we will need from our data.

This is not just a platitude. It is an underlying philosophy that the value of data comes from the ability to ask of it as many questions as you would ever want to ask. This drives entirely different approaches to how data is captured, stored, interpreted and, most importantly of all, continuously reinterpreted as you learn and discover more about your company, customers and operations:

  • First, data is preserved in its original form and never changed or destroyed. This lets you look at any piece of data at any point in time and factor in changes over time. For example, you could re-segment your customers every year, quarter, or even day as you learn new patterns.
  • Second, data is not forced into arbitrary formats (i.e., schemas) but is preserved raw, as you may want to go back and glean different elements. For example, you could later realize that a variable such as the source IP address of a customer visit to your site may entirely change how you measure, interpret and react to customers from that address.
  • Third, data is engineered so it can be easily reinterpreted as you learn more. This does not just make reinterpretation fast; it also makes reinterpretation fault-tolerant (i.e., easy to correct in the event of a bug, without any loss of information).
  • Finally, it allows all of this in real time with two points of view: a just-in-time view and a deep cross-sectional view (both of which are always current). This lets you make decisions quickly without sacrificing the 100% loss-less accuracy needed for important business areas (such as finance, medicine, or mission-critical operations). A minimal sketch of how these pieces fit together follows this list.
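None of the Lambda-specific tooling is needed to see the shape of the idea. Below is a small, framework-free Python sketch of an immutable master dataset, a batch view recomputed from all raw events, and a speed layer that keeps the answer current between batch runs. All names are illustrative, not a real API:

```python
from collections import defaultdict

master_dataset = []                    # immutable, append-only log of raw events

def record_event(event):
    """Raw facts are appended, never updated or deleted."""
    master_dataset.append(event)

def batch_view(events):
    """Batch layer: recompute a view from scratch over ALL raw data.
    If the view logic had a bug, fix the code and rerun; nothing was lost."""
    totals = defaultdict(float)
    for e in events:
        if e["type"] == "purchase":
            totals[e["customer"]] += e["amount"]
    return dict(totals)

class SpeedLayer:
    """Speed layer: incrementally maintains the same view for events that
    arrive after the last batch run, so queries stay current."""
    def __init__(self):
        self.delta = defaultdict(float)

    def update(self, event):
        if event["type"] == "purchase":
            self.delta[event["customer"]] += event["amount"]

def query(customer, batch, speed):
    """Serving layer: merge the batch view with the real-time delta."""
    return batch.get(customer, 0.0) + speed.delta.get(customer, 0.0)

# Usage
record_event({"type": "purchase", "customer": "alice", "amount": 40.0})
batch = batch_view(master_dataset)          # e.g., the nightly recompute
speed = SpeedLayer()

late_event = {"type": "purchase", "customer": "alice", "amount": 10.0}
record_event(late_event)                    # arrives after the batch run...
speed.update(late_event)                    # ...but the speed layer picks it up

print(query("alice", batch, speed))         # 50.0 -- current and loss-less
```

Because the raw events are never destroyed, any view (customer segments, revenue by IP address, whatever you think of next year) can be rebuilt from the master dataset at any time.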

Once you have these capabilities, the things you can do with data—quickly and at scale—are pretty amazing. I will share some of these in future posts, as I want to keep this post short.

However, I will close this post out with a simple analogy…

“Think Like a Chef” vs. the Fast Food Menu

Traditional architectures are like fast food menus. You have these options. If you want to change the menu, we can do some market research, see what works and roll out a new menu. If you want to change again (or explore “what if we had done this?”), we can repeat this process.

Lambda architecture is like the pantry of a great chef. You have all these ingredients. If you feel like duck à l’orange, we can make this. If you want a duck confit salad, we can re-purpose the ingredients. If you want really rich potatoes, we can render the fat and cook the potatoes in it. If you want vegan, we can pull other items out of the pantry and make something else. There are so many more options.

Mapping This Back to Things Business People Care About

So what does this mean for your business? Do you remember the last time you heard these comments:

  • “You’ll see that report. It will be in our Data Warehouse tomorrow, around 10am.”
  • “Oh, that’s in our warehouse. We can build a program to convert and load the data into production. It will only take 3 weeks. Can you submit your TPS form to the Steering Committee so we can prioritize this?”
  • “Gee, it’s too bad we did not capture that data. We can start to capture it now. In a few months we can start analyzing it.”

With Lambda, all of these comments, and many more, go away. Data is never thrown away. It is always in production, ready to be used for analysis or real-time transactions. There is no delay between transactional use and analysis; data flows down both paths at once.

Just imagine what problems you can solve when these limitations go away.