internetOfUnsecureThings

Four Common IoT Security Holes

If you follow the Internet of Things space, not a day passes without an analyst report or news article on IoT security vulnerabilities in every sector: consumer, enterprise, industrial, and government/Smart City.

I’ve been working with Internet-connected devices (medical devices, industrial actuators, sensors for environmental and security monitoring, even military systems) for many years. In my job, I am lucky enough to be able to work with industrial and enterprise devices daily. At home, I play with them both as a consumer and as a developer. Time and again, I see the following IoT security holes:

Security Hole #1: Not Using Strong Encryption

It is amazing that in 2016 people are still not using strong encryption to protect important data. Yet I frequently see IoT devices that use no encryption at all: they store and transmit data in the clear. Other devices use homegrown encryption techniques that are unproven by peer review and relatively easy to hack.

Most of the arguments I have seen against encryption fall into three camps: 1) it is too computationally expensive for low-powered devices, 2) it is too hard to use for IoT protocols, and 3) the device data is too obscure to understand. Let’s look at each:

  1. Yes, encryption is computationally expensive. However, ongoing investments in the space are providing more efficient RSA, AES, and ECC algorithms that work on smaller devices. In addition, Moore’s Law is even allowing penny-sized devices to have enough power to use these.
  2. IoT protocols are also getting better and better at providing strong encryption and secure connections (see Security Hole #2).
  3. Finally, the old “Our-data-is-too-obscure-for-hackers-to-understand Argument” was proven a fallacy years ago, first by the credit card industry’s Cardholder Information Security Program, and later by its replacement: PCI DSS. Any disgruntled employee (or hacker masquerading as a contractor) can bypass the “obscurity protection.”

Not using strong encryption is probably the most egregious security vulnerability. Any 14-year-old can use downloadable packet-sniffing programs to capture your data. Solutions that mitigate this risk are readily available. There is no excuse for not encrypting your data.
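To see why homegrown schemes fail, consider a toy repeating-key XOR “cipher” of the kind that sometimes passes for encryption in device firmware. The message format and key here are invented for illustration; one known plaintext fragment is enough to recover the whole key:

```python
# A homegrown "cipher": repeating-key XOR. It looks opaque on the wire,
# but a single known plaintext fragment leaks the entire key.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"s3cret"
ciphertext = xor_cipher(b'{"temp": 21.5, "door": "closed"}', key)

# The attacker guesses the message format: every reading starts with
# '{"temp'. XORing ciphertext against known plaintext yields the key.
recovered = xor_cipher(ciphertext[:6], b'{"temp')
assert recovered == key
```

With the key recovered, the attacker can decrypt every past and future transmission. Peer-reviewed algorithms like AES are designed to survive exactly this kind of known-plaintext scrutiny; homegrown schemes rarely do.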

Security Hole #2: Not Using Secured Sessions

A common error in information/cyber security is forgetting that secure communication consists of two components:

  1. Encryption of data and
  2. Establishment of secured sessions

Secured sessions use protocols to establish mutual authentication and to exchange a shared secret that only the transmitter and receiver have. If you do not establish a secured session, you are blindly guessing that the recipient of your data is the correct party. When you do not use secured sessions, you invite a Man-In-The-Middle (MITM) attack, in which the attacker can intercept and redirect your transmissions.
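In practice, both components usually come from TLS. Here is a minimal sketch of a client-side configuration using Python’s standard `ssl` module; the certificate file paths are illustrative placeholders, and loading is skipped when they are not supplied:

```python
import ssl

# Sketch: a client-side TLS context providing both ingredients above --
# encryption and an authenticated session. Certificate paths are
# illustrative; pass None to build the context without loading files.
def make_mtls_context(ca_file=None, cert_file=None, key_file=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/early TLS
    ctx.check_hostname = True                     # verify the server's identity
    ctx.verify_mode = ssl.CERT_REQUIRED           # reject unauthenticated peers
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)  # trust only our own CA
    if cert_file and key_file:
        ctx.load_cert_chain(cert_file, key_file)   # client cert: mutual auth
    return ctx

# e.g. make_mtls_context("ca.pem", "client.crt", "client.key")
ctx = make_mtls_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

With `CERT_REQUIRED` on both ends plus a client certificate, each side proves its identity before any application data flows.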

Many people think they are not likely targets of a MITM attack. Here is a simple scenario.

  • A disgruntled employee or hacker posing as a contractor first intercepts and copies traffic from your devices.
  • From this data, he learns which devices are attached to items of interest (a patient, your house, etc.). He can then also learn the device’s normal pattern of communication.
  • Next, he replaces the data from your device with his own. This can give the appearance that a patient who is sick is now healthy (or vice versa), or that your house is not being broken into (allowing his partners to break in). The hacker can even intercept your over-the-air commands and software downloads, or send commands to shut down devices.

This work is technically hard, but doable with software downloadable from the Internet. If communication between your IoT devices and servers is secured (and encrypted), the hacker would have to gain enough permissions to get hold of your SSL certificates and hijack DNS (if he has those, you are in a lot of trouble already). However, if the communication between your IoT devices and servers is not secured, a hacker can conduct this MITM attack from anywhere. By the time you learn about it, the damage will be long done.

Thankfully, there are many solutions available in the IoT domain that provide both strong encryption and secured sessions (plugging Security Holes #1 and #2):

  • If you are using standard “Internet of Servers” protocols, simply installing a full complement of certificates will let you use SSL/TLS for HTTPS and FTPS (but not SFTP, which relies on SSH instead).
  • If you are using MQTT (one of my favorites), there are many brokers available that also support SSL/TLS.
  • If you are using CoAP (which rides over UDP), you can use DTLS.
  • If your devices form edge constellations, you can turn on Bluetooth Security Mode 4 and get encryption with the same Elliptic Curve Diffie-Hellman secret key exchange recommended by the NSA.
  • You can even download and borrow the wonderful MTProto protocol designed by the folks over at Telegram (it is designed for low-powered, lossy, distributed communication).

None of these solutions are perfect. However, all reduce security risks significantly. Furthermore, all are evolving in the open source community as people find new vulnerabilities. Why more people do not use them is puzzling.

Security Hole #3: Not Protecting Against Buffer Overflow

When a hacker triggers a Buffer Overflow vulnerability, she typically causes a program to do two things: dump critical data and crash.

The first documented cases of Buffer Overflow exploits date back to 1972. As more and more computers were connected to the Internet, these attacks became more pervasive. Fifteen years ago, Code Red showed much of the general public what a Buffer Overflow exploit can do.

Over the past few years, application framework libraries and higher-level languages have added many defensive programming protections that make these vulnerabilities less prevalent than they were in the past. (As anyone who has encountered an awful error page displaying a raw stack trace knows, these defenses are still far from perfect.) Nevertheless, they have plugged many holes.

However, IoT devices are bringing this vulnerability back into the mainstream. As most IoT devices operate with far less memory and CPU than expensive devices like your laptop or smartphone, their firmware and applications are primarily written in lower-level programming languages. It is much easier to trigger buffer overflows in these languages than in more forgiving, higher-level languages. Exception-handling libraries are less robust. More often than not, memory management is handled using good old-fashioned C/C++ programming (there is no Garbage Collector to save you). This significantly raises the risk of buffer overflows in devices.

When buffer overflow crashes occur in the data center, there is at least someone around to fix things. When they happen to a remote IoT device in the field, they can literally shut down a security or medical sensor. There is no IT or Ops department nearby to fix it. The device is shut down (at best) or bricked (at worst). Essentially, the device is dead to the world. Depending on what it was responsible for, real-world physical damage can ensue.

Devices that maintain continuously open Internet connections (like all those connected baby monitors) are especially prone to buffer overflow attacks, as remote hackers can discover them using port-scanning software. However, even industrial IoT devices that only pull commands and programs down over-the-air are vulnerable to MITM attacks that can shut them down by flooding the device with data (this reinforces the need to plug Security Holes #1 and #2 discussed above).

The fix to this problem is fairly clear: implement defensive programming and test it aggressively. Today’s automation technologies for continuous integration and delivery make this a much easier and more trustworthy process than it was even a decade ago.
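As a small illustration of defensive programming at the protocol boundary, here is a length-checked parser for a hypothetical sensor frame (a two-byte big-endian length header followed by a payload). The frame layout and limit are invented; the point is that every read is validated before it happens:

```python
import struct

# Hypothetical frame: 2-byte big-endian payload length, then payload.
MAX_PAYLOAD = 64  # illustrative upper bound for this device class

def parse_frame(frame: bytes) -> bytes:
    if len(frame) < 2:
        raise ValueError("truncated header")
    (length,) = struct.unpack_from(">H", frame)
    if length > MAX_PAYLOAD:          # reject absurd declared lengths up front
        raise ValueError("declared length exceeds limit")
    if len(frame) - 2 < length:       # never read past the end of the buffer
        raise ValueError("payload shorter than declared")
    return frame[2:2 + length]

assert parse_frame(b"\x00\x03abc") == b"abc"
```

Checks like these are cheap to write and trivial to cover with automated tests in a CI pipeline, which is exactly where a continuous delivery process earns its keep.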

Security Hole #4: Weak Systems Engineering

The fourth big security hole I commonly see spans the intersection of technical design, system processes, and human behavior. It essentially boils down to this: if you use flawless technology in ways it was not intended, you can create big vulnerabilities. If I design a perfectly secure medical device but put it on the wrong patient (accidentally or maliciously), I will fail to capture data about the patient I intended to monitor. If the person who installs the security sensors in my house sets my account up to call his cell phone (and not mine), he can break in while I am gone and trick the company into thinking it is a false alarm.

The way around this is to design IoT devices that work when things (humans, the network, servers, etc.) fail.

  • Build in redundancy (devices, network paths, and servers) to mitigate technical failures.
  • Build in positive and negative feedback loops to mitigate human failures. For example, I should not just be notified if my home security sensor goes off. I should also be notified if my smartphone and my security company’s servers both cannot communicate with my home security IoT devices.
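The negative feedback loop in the second bullet is essentially a dead-man’s switch: raise an alert when a device has *not* been heard from, not only when it fires. A minimal sketch, with an illustrative timeout and device names:

```python
import time

# Dead-man's switch sketch: track the last heartbeat per device and
# report any device silent longer than the timeout. The 30-second
# timeout and device names are illustrative.
class HeartbeatMonitor:
    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_seen = {}

    def beat(self, device_id, now=None):
        # Record a heartbeat; monotonic clock avoids wall-clock jumps.
        self.last_seen[device_id] = now if now is not None else time.monotonic()

    def silent_devices(self, now=None):
        # Devices we have NOT heard from within the timeout window.
        now = now if now is not None else time.monotonic()
        return [d for d, t in self.last_seen.items() if now - t > self.timeout]

mon = HeartbeatMonitor(timeout=30.0)
mon.beat("front-door-sensor", now=0.0)
mon.beat("garage-sensor", now=25.0)
assert mon.silent_devices(now=40.0) == ["front-door-sensor"]
```

The key design point is that silence itself becomes an event: a cut cable or a jammed radio produces an alert instead of comfortable quiet.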

Plugging this systems engineering IoT security hole takes a combination of technology engineering and business process design. This is a natural fit for the enterprise, where IoT can be used as a component of business transformation. In the consumer segment, the answer is usually an ecosystem solution; Amazon’s and Google’s solutions stand out for robustness and security.

***

The Internet of Things offers great potential to transform how we work and live by removing many tedious tasks from our day-to-day activities. Making this a reality requires a secure Internet of Things. We will never make security perfect. However, we have the tools to make it trustworthy. What is needed is just the discipline to include them as we build new IoT devices, systems and processes.

twoBrainsBetterThanOne

Data Scientists vs. Data Engineers: Facts vs. Interpretation

Some of the things we build at work are closed-loop, Internet-scale machine learning micro-services. We have created algorithms that run in milliseconds that we can invoke via REST calls, thousands of times per second. We also have created data pipeline processes that process new (mostly sensor) data and build and publish new models when critical thresholds are reached. This work requires the collaboration of two very in-demand specialists: Data Scientists and Data Engineers.

Contrary to the classic Math vs. Coding vs. Domain Expertise Venn diagram, Data Scientists and Data Engineers share many similarities. Both love data. Both have domain expertise. Both are great functional programmers. Both are good at solving complicated mathematical problems—both discrete and continuous. Both use many similar tools and languages (in our case, Spark, Hadoop, Python and Scala).

However, over the past two years, as we have improved the collaboration between the two roles to build better machine learning services, we have identified some key differences between them. These differences are not just based on skill set or disposition. They also include differences in areas of responsibility that are essential to creating fast, scalable, and accurate machine learning services.

It is easy to muddle raw data, fully deterministic derived data, and algorithmically derived data. Raw data never changes. Rules may change, but they are easy to manage with clean version control. However, even the same deterministic algorithms can produce different results (one example: whenever you refit or rebuild a model using new data, your results can change). If you are building algorithmic services, you need to keep everything clean and separate. If not, you cannot cleanly “learn” from new data and continuously improve your services.
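One lightweight way to keep the three layers from muddling is to tag every record with its provenance. A sketch with invented field names and values:

```python
# Three layers, kept separate and tagged with provenance.
# All names, values, and version identifiers are illustrative.

raw = {"sensor_id": "s1", "reading_c": 20.0}       # raw: immutable, never changes

derived = {                                        # deterministic rule output
    "reading_f": raw["reading_c"] * 9 / 5 + 32,    # same input -> same output
    "rule_version": "v2",                          # rules live in version control
}

scored = {                                         # algorithmic guess
    "anomaly_score": 0.07,                         # can change whenever the model
    "model_id": "m-2016-03",                       # is refit on new data
}

assert derived["reading_f"] == 68.0
```

Because each record carries `rule_version` or `model_id`, you can always tell whether a number is a fact, a versioned rule output, or a guess that may shift on the next model build.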

We have found a very nice separation of responsibility that prevents muddling things:

  • Our Data Engineers are responsible for deterministic facts
  • Our Data Scientists are responsible for the interpretation of those facts

This boils down to the following: deterministic rules are the purview of engineers, while algorithmic guesses come from scientists. This is a gross simplification (as both roles deal in many, many complexities). However, this separation keeps things very clear, not only in determining “who does what” but also in preventing errors, guesses, and other unintended consequences that pollute data-driven decision-making.

Let’s take Google Now’s “Where you parked” service as an example. Data Engineers are responsible for processing the streaming sensor updates from your phone, combining them with past data, determining motion vs. at rest, factoring out duplicate transmissions, geospatial drift, etc. Data Scientists are responsible for coming up with the algorithm to determine whether your detected stop state is a place where you parked (vs. simply being at work, at home, or at a really bad stop light). Essentially, Data Engineers capture and process the data to extract the required model features, while Data Scientists come up with the algorithm to interpret these features and provide an answer.
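A toy version of that split, with made-up thresholds: the engineer’s stop extractor is deterministic (same input, same output, every time), while the scientist’s classifier encodes a revisable guess about what a “parked” stop looks like:

```python
# Data Engineering side: deterministic feature extraction.
# Thresholds (0.5 m/s "at rest", 120 s minimum stop) are invented.
def extract_stops(points, min_stop_s=120):
    """points: list of (timestamp_s, speed_m_s). Returns stop durations."""
    stops, start = [], None
    for t, speed in points:
        if speed < 0.5:                       # hard "at rest" rule
            start = t if start is None else start
        elif start is not None:
            if t - start >= min_stop_s:
                stops.append(t - start)
            start = None
    return stops

# Data Science side: an interpretive guess. The window (2 min to 8 h)
# is a model choice that may change when refit on new data.
def classify_stop(duration_s):
    return "parked" if 120 <= duration_s < 8 * 3600 else "other"

stops = extract_stops([(0, 10.0), (10, 0.1), (400, 0.1), (410, 12.0)])
assert stops == [400]
assert classify_stop(stops[0]) == "parked"
```

If the model is later refit, only `classify_stop` changes; the extracted facts, and everything downstream that depends on them, stay reproducible.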

Once you have that separation down, both teams can collaborate cleanly. Data Scientists experiment with and test algorithms, while Data Engineers design how to apply them at scale with sub-second execution. Data Scientists determine what approach is used to build models (and what triggers model optimization, build, and re-fitting); Data Engineers build seamless implementations of this. Data Scientists build algorithm prototypes and MVPs; Data Engineers scale these into fast, reliable services. Data Scientists worry about (and define rules for) excluding outliers that would wreak havoc on F-tests; Data Engineers implement defensive programming and automated test coverage to ensure unplanned data does not wreak havoc on production operation.

5 points where tech balances between life and work