Four Common IoT Security Holes

If you follow the Internet of Things space, not a day passes without an analyst report or news article describing IoT security vulnerabilities across every sector: consumer, enterprise, industrial, and government/Smart City.

I’ve been working with Internet-connected devices (medical devices, industrial actuators, sensors for environmental and security monitoring, even military systems) for many years. In my job, I am lucky enough to be able to work with industrial and enterprise devices daily. At home, I play with them both as a consumer and as a developer. Time and again, I see the following IoT security holes with alarming frequency:

Security Hole #1: Not Using Strong Encryption

It is amazing that in 2016 people are still not using strong encryption to protect important data. However, I frequently see IoT devices that use no encryption at all: they store and transmit data in the clear. Other devices use homegrown encryption techniques that are unproven by peer review and relatively easy to hack.

Most of the arguments I have seen against encryption fall into three camps: 1) it is too computationally expensive for low-powered devices, 2) it is too hard to use for IoT protocols, and 3) the device data is too obscure to understand. Let’s look at each:

  1. Yes, encryption is computationally expensive. However, ongoing investments in the space are providing more efficient RSA, AES, and ECC algorithms that work on smaller devices. In addition, Moore’s Law is even allowing penny-sized devices to have enough power to use these.
  2. IoT protocols are also getting better and better at providing strong encryption and secure connections (see Security Hole #2).
  3. Finally, the old “Our-data-is-too-obscure-for-hackers-to-understand Argument” was proven a fallacy years ago, first by the credit card industry’s Cardholder Information Security Program, and later by its replacement: PCI DSS. Any disgruntled employee (or hacker masquerading as a contractor) can bypass the “obscurity protection.”

Not using strong encryption is probably the most egregious security vulnerability. Any 14-year-old can use downloadable packet-sniffing programs to capture your data. Solutions that mitigate this risk are readily available. There is no excuse for not encrypting your data.
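
As a concrete illustration, here is a minimal sketch of authenticated encryption for a small sensor payload using the AES-GCM primitive from the Python `cryptography` package. The payload, device ID, and key handling are hypothetical; on a constrained device you would use an equivalent embedded crypto library, and the key would be provisioned securely rather than generated in place.

```python
# Minimal sketch: AES-GCM authenticated encryption of a (hypothetical) sensor payload.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)    # in practice, provisioned securely to the device
aesgcm = AESGCM(key)

payload = b'{"device_id": "sensor-42", "temp_c": 21.5}'   # hypothetical reading
nonce = os.urandom(12)                       # 96-bit nonce; must never repeat for a given key
associated_data = b"sensor-42"               # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, payload, associated_data)

# The receiver needs the nonce and associated data to decrypt and verify integrity.
assert aesgcm.decrypt(nonce, ciphertext, associated_data) == payload
```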

Security Hole #2: Not Using Secured Sessions

A common error in information/cyber security is forgetting that secure communication consists of two components:

  1. Encryption of data and
  2. Establishment of secured sessions

Secured sessions use protocols to establish mutual authentication and to exchange a shared secret that only the transmitter and receiver have. If you do not establish a secured session, you are blindly guessing that the recipient of your data is the correct party. When you do not use secured sessions, you invite a Man-In-The-Middle (MITM) attack, in which the attacker can intercept and redirect your transmissions.
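
To make that concrete, here is a minimal sketch of a device establishing a mutually authenticated TLS session using Python's standard `ssl` module. The host name, port, and certificate file names are placeholders; the point is that both sides present certificates and the handshake negotiates a session key only the two endpoints share.

```python
# Minimal sketch: mutually authenticated TLS session with Python's ssl module.
# Host, port, and certificate paths are placeholders.
import socket
import ssl

context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.crt")
context.load_cert_chain(certfile="device.crt", keyfile="device.key")  # our cert, for mutual auth

with socket.create_connection(("iot.example.com", 8883)) as sock:
    with context.wrap_socket(sock, server_hostname="iot.example.com") as tls:
        # The handshake verified the server's certificate, presented ours,
        # and established a shared session key before any data was sent.
        tls.sendall(b'{"device_id": "sensor-42", "status": "ok"}')
```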

Many people think they are not likely targets of a MITM attack. Here is a simple scenario:

  • A disgruntled employee or a hacker posing as a contractor first intercepts and copies traffic from your devices.
  • From this data, he learns what devices are attached to items of interest (a patient, your house, etc.). He can then also learn the normal pattern of communication from the device.
  • Next, he replaces the data from your device with his own. This can make a sick patient appear healthy (or vice versa), or make it appear that your house is not being broken into (allowing his partners to break in). The hacker can even intercept your over-the-air commands and software downloads, or send commands to shut down devices.

This work is technically hard, but doable with software downloadable from the Internet. If the communication between your IoT devices and servers is secured (and encrypted), the hacker would have to gain enough permissions to get hold of your SSL certificates and hijack DNS (and if he has those, you are in a lot of trouble already). However, if the communication between your IoT devices and servers is not secured, a hacker can conduct this MITM attack from anywhere. By the time you learn about it, the damage will be long done.

Thankfully, there are many solutions available in the IoT domain that provide both strong encryption and secured sessions (plugging Security Holes #1 and #2):

  • If you are using standard “Internet of Servers” protocols, simply installing a full complement of certificates will enable you to use TLS/SSL for HTTPS and FTPS (but not SFTP, which runs over SSH).
  • If you are using MQTT (one of my favorites), there are many brokers available that also support TLS/SSL (see the sketch below).
  • If you are using CoAP (which rides over UDP), you can use DTLS.
  • If your devices have edge constellations, you can turn on Bluetooth Security Mode 4 and get link encryption with the same Elliptic Curve Diffie-Hellman key exchange used by the NSA.
  • You can even download and borrow the wonderful MTProto protocol designed by the folks over at Telegram (it is designed for low-powered, lossy, distributed communication).

None of these solutions are perfect. However, all reduce security risks significantly. Furthermore, all are evolving in the open source community as people find new vulnerabilities. Why more people do not use them is puzzling.
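
As one example (referenced from the MQTT bullet above), here is a minimal sketch of publishing over TLS with the Eclipse Paho Python client (1.x-style API). The broker address, topic, and certificate paths are placeholders, and the broker must be configured for client-certificate authentication for the mutual-auth part to apply.

```python
# Minimal sketch: MQTT over TLS with client certificates (Eclipse Paho, 1.x-style API).
# Broker host, topic, and certificate paths are placeholders.
import ssl
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="sensor-42")
client.tls_set(
    ca_certs="ca.crt",        # broker's CA, so the device authenticates the broker
    certfile="device.crt",    # device certificate, so the broker authenticates the device
    keyfile="device.key",
    tls_version=ssl.PROTOCOL_TLSv1_2,
)

client.connect("broker.example.com", 8883)   # 8883 is the conventional MQTT-over-TLS port
client.loop_start()
client.publish("site/42/temperature", payload=b"21.5", qos=1)
client.loop_stop()
client.disconnect()
```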

Security Hole #3: Not Protecting Against Buffer Overflow

When a hacker triggers a Buffer Overflow vulnerability, she typically causes a program to do two things: dump critical data and crash.

The first documented cases of Buffer Overflow exploits date back to 1972. As more and more computers were connected to the Internet, these attacks became more pervasive. Fifteen years ago, Code Red showed much of the general public what a Buffer Overflow exploit can do.

Over the past few years, application framework libraries and higher-level languages have added many defensive-programming protections that make these vulnerabilities less prevalent than they were in the past. (As anyone who has encountered an awful error page showing a raw stack trace knows, these defenses are still far from perfect.) Nevertheless, they have plugged many holes.

However, IoT devices are bringing this vulnerability back into the mainstream. Because most IoT devices operate with far less memory and CPU than expensive devices like your laptop or smartphone, their firmware and applications are primarily written in lower-level programming languages. It is much easier to trigger buffer overflows in these languages than in more forgiving higher-level languages. Exception-handling libraries are less robust. More often than not, memory management is handled with good old-fashioned C/C++ programming (there is no Garbage Collector to save you). This significantly raises the risk of buffer overflows in devices.

When buffer overflow crashes occur in the data center, there is at least someone around to fix things. When they happen to a remote IoT device in the field, they can literally shut down a security or medical sensor. There is no IT or Ops department nearby to fix it. The device is shut down at best, or bricked at worst; essentially, the device is dead to the world. Depending on what it was responsible for, real-world physical damage can ensue.

Devices that maintain continuously open Internet connections (like all those connected baby monitors) are especially prone to buffer overflow attacks, as remote hackers can discover them using port-scanning software. However, even industrial IoT devices that only pull commands and programs down over-the-air are vulnerable to MITM attacks that can shut them down by flooding the device with data (this reinforces the need to plug Security Holes #1 and #2 discussed above).

The fix for this problem is fairly clear: implement defensive programming and test it aggressively. Today’s automation technologies for continuous integration and delivery make this a much easier and more trustworthy process than it was even a decade ago.
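
To show the defensive-programming mindset in miniature (in Python for brevity; on a device the same checks would be written in C/C++), here is a sketch that validates a hypothetical length-prefixed sensor frame before touching its contents, rejecting anything oversized or truncated instead of blindly reading past the end of a buffer. The frame layout and limits are illustrative only.

```python
# Minimal sketch: defensive parsing of a hypothetical length-prefixed sensor frame
# (2-byte big-endian length followed by the payload).
import struct

MAX_PAYLOAD = 1024   # hard upper bound chosen for this illustration

def parse_frame(buf: bytes) -> bytes:
    if len(buf) < 2:
        raise ValueError("frame too short to contain a length header")
    (declared_len,) = struct.unpack_from(">H", buf, 0)
    if declared_len > MAX_PAYLOAD:
        raise ValueError("declared payload length exceeds maximum")
    if len(buf) - 2 < declared_len:
        raise ValueError("truncated frame: payload shorter than declared length")
    # Only now is it safe to read exactly the declared number of bytes.
    return buf[2:2 + declared_len]

# The kind of quick checks a CI pipeline would run aggressively.
assert parse_frame(b"\x00\x03abc") == b"abc"
try:
    parse_frame(b"\xff\xffabc")   # claims 65,535 bytes but delivers 3
except ValueError:
    pass
```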

Security Hole #4: Weak Systems Engineering

The fourth big security hole I commonly see spans the intersection of technical design, system processes, and human behavior. It essentially boils down to this: if you use flawless technology in ways it was not intended to be used, you can create big vulnerabilities. If I design a perfectly secure medical device but put it on the wrong patient (accidentally or maliciously), I will fail to capture data about the patient I intended to monitor. If the person who installs the security sensors in my house sets my account up to call their cell phone (and not mine), they can break in while I am gone and trick the company into thinking it is a false alarm.

The way around this is to design IoT devices that work when things (humans, the network, servers, etc.) fail.

  • Build in redundancy (devices, network paths and servers) to mitigate technical failures
  • Build in positive and negative feedback loops to mitigate human failures. For example, I should not just be notified if my home security sensor goes off. I should also be notified if my smartphone and my security company’s servers both cannot communicate with my home security IoT devices. (A minimal sketch of this idea follows below.)
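
Here is a minimal sketch of that negative-feedback idea: a watchdog that treats silence from a device as an event worth escalating to more than one party. The device IDs, timeout, and notification targets are hypothetical stand-ins.

```python
# Minimal sketch: a heartbeat watchdog that escalates missing signals, not just alarms.
# Device IDs, timeout, and notification targets are hypothetical.
import time

HEARTBEAT_TIMEOUT_S = 120
last_seen = {"front-door-sensor": time.time()}   # updated whenever a heartbeat arrives

def record_heartbeat(device_id: str) -> None:
    last_seen[device_id] = time.time()

def notify(recipient: str, message: str) -> None:
    print(f"notify {recipient}: {message}")      # stand-in for SMS/push/email

def check_devices() -> None:
    now = time.time()
    for device_id, seen in last_seen.items():
        if now - seen > HEARTBEAT_TIMEOUT_S:
            # Silence is reported to BOTH parties, so neither can be quietly cut off.
            notify("homeowner", f"{device_id} silent for {int(now - seen)}s")
            notify("monitoring-company", f"{device_id} silent for {int(now - seen)}s")
```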

Plugging this systems-engineering IoT security hole takes a combination of technology engineering and business process design. This is a natural fit for the enterprise, where IoT can be used as a component of business transformation. In the consumer segment, the answer is usually an ecosystem solution; Amazon’s and Google’s solutions stand out for robustness and security.

***

The Internet of Things offers great potential to transform how we work and live by removing many tedious tasks from our day-to-day activities. Making this a reality requires a secure Internet of Things. We will never make security perfect. However, we have the tools to make it trustworthy. What is needed is just the discipline to include them as we build new IoT devices, systems and processes.

Moving from Storm to Spark Streaming: Real-life Results and Analysis

In my last post, I explained why we decided to move the Speed Layer of our Lambda Architecture from Apache Storm to Apache Spark Streaming. In this post, I get to the “real meat”:

  • How did it go?
  • What did we learn?
  • And most important of all… would we do it again?

This post recounts our detailed experiences moving to a 100%-Spark architecture. While Spark is arguably the most popular open source project in history (currently its only rival in terms of number of contributors is AngularJS), our experience with it was not all wine and roses. Some experiences were great. Others remain frustrating today, after nearly nine months of live operation streaming mission-critical data. Whether you love Spark or Storm, there are some bragging rights here for your favorite platform.

Before I get started, I should warn you that this post is pretty long. I could have broken it up into separate posts, one per category of analysis. However, I thought it was more useful as a single blog post.

Our Real-world Environment

This is not one of those simple streaming-analytics run-offs using the canonical “Twitter Word Count” test (Spark version, Storm version). This is a real-life comparison of Storm vs. Spark Streaming after months of live operation in production, analyzing complex, real-life data from many enterprise customers.

We do not use either technology by itself, but instead use it in conjunction with Apache Kafka (Cloudera’s distribution), Apache Cassandra (DataStax’s distribution), and Apache Hadoop (also Cloudera’s distribution, storing data in Apache Parquet format). I am not divulging any trade secrets here, as we list these technologies on our job descriptions for recruiting.

Similarly, we do not simply pass data through a single-stage graph (no robust real-world system uses a single-stage DAG). Instead, our DAG processing traverses 3 to 7 stages, depending on the type of data we receive. At each stage we persist data back to Kafka for durability and recovery.
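
To give a feel for the shape of such a pipeline (a sketch of the general pattern, not our actual code), here is a minimal PySpark Streaming stage in the Spark 1.x style we were running: a direct Kafka stream in, one transformation, and the results written back out to Kafka via the kafka-python producer. Broker addresses, topic names, and the enrichment logic are placeholders.

```python
# Minimal sketch (Spark 1.x-era APIs): Kafka -> Spark Streaming stage -> Kafka.
# Broker addresses, topic names, and the enrichment logic are placeholders;
# kafka-python must be available on the executors.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from kafka import KafkaProducer

sc = SparkContext(appName="stage-2-enrichment")
ssc = StreamingContext(sc, 5)                 # 5-second micro-batches

stream = KafkaUtils.createDirectStream(
    ssc, ["stage1-output"], {"metadata.broker.list": "kafka1:9092,kafka2:9092"})

def enrich(record):
    _key, value = record
    return value.upper()                      # stand-in for real parsing/enrichment

def write_partition(values):
    # One producer per partition per batch keeps connections off the driver.
    producer = KafkaProducer(bootstrap_servers="kafka1:9092,kafka2:9092")
    for value in values:
        producer.send("stage2-output", value.encode("utf-8"))
    producer.flush()

stream.map(enrich).foreachRDD(lambda rdd: rdd.foreachPartition(write_partition))

ssc.start()
ssc.awaitTermination()
```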

Obviously, everything we run is clustered (no single servers). Along these lines, we only use native installations of downloaded distributions. Everything here can be hosted anywhere you like: your own data center, GCE, AWS, Azure, etc. The results are not tied to managed services like AWS EMR.

This comparison is also not a short-duration test (which would also be artificial). We run our streaming processing 24×7, without scheduled downtime. Our Lambda Architecture enabled us to stream the same data into Storm and Spark at the same time, allowing a true head-to-head comparison of development, deployment, performance, and operations.

Finally, these results are not just based on uniform sample data (e.g., 140-character Tweets). We used a wide range of real-life sensor data, in multiple encoding formats, with messages ranging from 100 Bytes to 110 Megabytes in size (i.e., real-world, multi-tenant complexity). We tested this at data rates exceeding 48 Gbps per node. We have come up with novel ways to stream data larger than Kafka's message.max.bytes limit in real time along our DAG; disclosing how we do this would be a trade secret 😉

So what did we learn? I will discuss the results from four perspectives:

  1. Developing with each (a.k.a., the software engineering POV)
  2. Head-to-head performance comparison
  3. Using each with other “Big Data” technologies
  4. Managing operations of each (a.k.a., the DevOps POV)

BTW, Spark Streaming is of course a micro-batch architecture while Storm is a true event-processing architecture, so Storm Trident vs. Spark Streaming would be the true “apples-to-apples” comparison. However, that was not our real-life experience. The move from one-transaction-at-a-time to micro-batches required some changes in conceptual thinking (especially for “exactly once” processing), and I include some of those lessons learned.
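
To illustrate the shift, here is a minimal sketch (again Spark 1.x-era PySpark, not our production code) of the two ingredients micro-batching gives you to reason about “exactly once”: checkpointing the streaming context for recovery, and capturing the Kafka offset ranges each micro-batch covers so a downstream sink can be made idempotent per (topic, partition, offset range). The checkpoint path, brokers, and topic are placeholders.

```python
# Minimal sketch (Spark 1.x-era APIs): checkpointing plus per-batch Kafka offset
# tracking, the building blocks for "exactly once" reasoning with micro-batches.
# Checkpoint path, brokers, and topic are placeholders.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

CHECKPOINT_DIR = "hdfs:///checkpoints/speed-layer"

def create_context():
    sc = SparkContext(appName="speed-layer")
    ssc = StreamingContext(sc, 5)                # 5-second micro-batches
    ssc.checkpoint(CHECKPOINT_DIR)               # driver metadata for recovery

    stream = KafkaUtils.createDirectStream(
        ssc, ["sensor-events"], {"metadata.broker.list": "kafka1:9092"})

    def tag_offsets(rdd):
        # Each micro-batch maps to an exact set of Kafka offset ranges. An idempotent
        # sink can use (topic, partition, fromOffset, untilOffset) as a natural
        # de-duplication key if a failed batch is replayed.
        for o in rdd.offsetRanges():
            print(o.topic, o.partition, o.fromOffset, o.untilOffset)
        return rdd

    stream.transform(tag_offsets).count().pprint()   # stand-in for the real stages
    return ssc

# On restart, the driver rebuilds the DAG and pending batches from the checkpoint.
ssc = StreamingContext.getOrCreate(CHECKPOINT_DIR, create_context)
ssc.start()
ssc.awaitTermination()
```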

Next Page: Storm vs. Spark Streaming: Developing With Each