The rise of Industrial/Enterprise IoT

As the IoT hype peaks, the time has come for marketing to become implementation, for promise to become fulfillment. However, not all variants of the IoT are created equal, and this interview with Steve Jennis, Senior Vice President, PrismTech, reveals how the Industrial/Enterprise IoT differs from consumer-oriented versions of the IoT, as well as the requirement for an Industrial IoT data connectivity architecture and projections for when the major industry players will make the IIoT a reality for big business.

What’s the difference between the Consumer IoT and the Industrial/Enterprise IoT?

JENNIS: It’s a question of market segmentation and I think many people are still trying to work it out to be honest. Let’s start by looking at all the contributory parts.

You’ve got the consumer world, the world of Adidas and Nike, where large manufacturers are selling connected products to the consumer. That’s one flavor of the IoT. Then you’ve got the market with Google Nest and Apple and those guys, which is not quite the consumer market of Nike and Adidas because it’s really looking at establishing a foothold in the family home and selling all sorts of connected devices in a plugged-together systems environment. So it’s consumer, but it’s more systems-led than product-led. Then another flavor is what you might call the traditional M2M space, with applications like fleet management and vending machine monitoring that are more business than consumer, but are still what I’d call tactical rather than strategic – point solutions that are very often silos of data and lacking some of the vendor support or robustness that major enterprises are looking for.

And then you’ve got the enterprise, and I include Industrial IoT in Enterprise IoT. As the operational technology (OT) and information technology (IT) worlds converge, they start to become indistinguishable from each other in the sense that enterprises want end-to-end systems, enterprise-wide data sharing, and to be able to extract new insights – or business value – from data wherever it makes sense, whether it’s at the system edge device, or in a gateway, or in the cloud, or even in an inter-company supply chain where you abstract above an individual enterprise. [The inter-company supply chain] is one of the thrusts of Germany’s Industry 4.0 initiative, where they’re not looking at just factory automation, but inter-factory automation – to integrate the supply chain from raw materials generation, whether in a mine or an oil refinery or any other raw material production, right through to the new connected car sitting in your driveway.

So you’ve got what I call Enterprise IoT, and I think that’s a thrilling place to be right now for several reasons. Firstly, it’s clearly going to be the biggest market. Secondly, it’s also the most demanding market because of all the data sharing requirements for the optimization and coordination of edge devices – like the “Brilliant Machines” GE talks about that stream terabytes of data – as well as the analysis of Big Data in the cloud for trend analysis, preventative maintenance, or failure prediction, or locally for fine-tune control, fault analysis, etc. Whether the data delivery requirements are in microseconds, milliseconds, or seconds, the system has to cope. So there is lots of technical innovation and opportunity for new players. Thirdly, it is a “hot” market with lots of press coverage and investment activity.

Enterprise IoT is also where IT and OT organizations have to interface. So it’s forcing the OT guys to get familiar with things like tablets and cloud services and using the Internet for communications rather than proprietary in-house LANs, and it’s forcing the IT guys to understand how their company actually makes money – what they make, how they make it, how they deliver it, what suppliers they have to work with to do that, and so on. The IT guys are coming out of their computer rooms and away from their Oracle databases and SAP configurations, and they’re having to get their shoes “dirty” on the shop floor.

In summary, the consumer and tactical space is the IoT sandpit right now where people are checking out self-contained applications, seeing what pays off, what works, what makes money, and what doesn’t – toe-in-the-water stuff. But if we’re really going to move towards the digital enterprise, that requires much more than tactical IoT implementations or single-purpose consumer products. What’s required is an end-to-end strategy for the enterprise that makes data – whether it’s operational data or corporate data – available on demand for control and analysis and fine-tuning and insights and feedback and new services, wherever that value add needs to be. The big Enterprise IoT market is just over the horizon, but it will not take off until the trusted vendors to that enterprise market have viable solutions.

How does PrismTech help facilitate the IT/OT integration?

JENNIS: The way we look at it is that, inevitably, in any scenario, you’re going to have multiple protocols in your system. It’s never going to be a single-protocol solution in an enterprise environment. The reality is that most Enterprise IoT systems will have elements of brownfield or legacy subsystems, and you will have to extract data from or deliver data to those subsystems, which could be running on anything from a proprietary protocol to an out-dated standard, for example. So you have to have some way of bridging to and from those subsystems. That’s the first thing. You’ve got to have access to the data and be able to deliver, so you need good gateway technology.
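The bridging role described above can be pictured as a thin gateway that normalizes data from heterogeneous legacy subsystems into one common data model before anything reaches the backbone. The following is a minimal illustrative sketch; the protocol names and wire formats are hypothetical, invented purely to show the pattern:

```python
# Illustrative sketch of a protocol-bridging gateway: legacy subsystems
# speak different wire formats, and the gateway normalizes each one into
# a single common data model for the enterprise backbone.
# All protocol names and payload formats here are hypothetical.

from dataclasses import dataclass


@dataclass
class Sample:
    """Common data model used on the backbone."""
    topic: str
    value: float
    unit: str


def from_legacy_csv(line: str) -> Sample:
    # e.g. an older SCADA feed emitting "boiler_temp,87.5,C"
    topic, value, unit = line.split(",")
    return Sample(topic, float(value), unit)


def from_legacy_kv(text: str) -> Sample:
    # e.g. a vendor protocol emitting "id=pump_rpm val=1450 unit=rpm"
    fields = dict(pair.split("=") for pair in text.split())
    return Sample(fields["id"], float(fields["val"]), fields["unit"])


# One adapter per bridged protocol; adding a subsystem means adding an entry.
BRIDGES = {"csv-feed": from_legacy_csv, "kv-feed": from_legacy_kv}


def bridge(protocol: str, payload: str) -> Sample:
    """Route an inbound payload through the matching protocol adapter."""
    return BRIDGES[protocol](payload)
```

For example, `bridge("csv-feed", "boiler_temp,87.5,C")` and `bridge("kv-feed", "id=pump_rpm val=1450 unit=rpm")` both come out as uniform `Sample` records, which is the point: the backbone only ever sees one data model, however many legacy formats sit behind the gateway.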

Secondly, for a corporate data connectivity platform you need to satisfy a number of requirements, so you’re going to have to have a high quality of service (QoS) data delivery backbone that is not in any way constraining the potential of the system. So it’s a bit like telecoms backhaul, where you have to be able to support a lot of calls and you don’t know what peak demand is going to be or what people are going to be streaming to their cell phones, but you’ve got to be able to deliver the data and you’ve got to be up and ready to do that 99.9999 percent of the time. Regardless of what the system needs, you’ve got to be able to deliver the data.

So the data connectivity backbone protocol has to be able to do many things in terms of QoS over and above just moving data around. When you start looking at candidate protocols for this sort of enterprise-wide backbone, you start to discount many pretty quickly. Anything that’s proprietary is going to be out the window straight away because nobody today is going to tie their whole corporate infrastructure to one particular vendor. The days of being 100 percent single-vendor are gone – users want openness, they want standards, and they even want open source in some cases.

So, you discount all the proprietary protocols – and that rules out a lot of the traditional industrial/OT protocols, by the way. Then you have to discount lightweight protocols like MQTT or CoAP that don’t have the QoS for an enterprise backbone. These support very few QoS – MQTT supports three – and don’t support sophisticated data-centric features such as content-based filtering, traffic optimization for bandwidth constraints, and dynamic discovery, or the many other QoS needed for the high performance, efficiency, fault tolerance, and reliable recovery required in an enterprise system. That’s all the clever stuff a backbone needs to support so that you can manipulate the data in multiple dimensions – in terms of latency, determinism, routing, bandwidth efficiency, security, and so on.
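The content-based filtering mentioned above can be sketched in miniature: the middleware evaluates a subscriber-supplied predicate against each published sample and delivers only the matches, so non-matching traffic never reaches (or is never sent to) the subscribing application. This is a simplified stand-in for DDS content-filtered topics, not the actual DDS API:

```python
# Minimal sketch of content-based filtering: the middleware applies a
# subscriber-supplied predicate to each sample and delivers only the
# samples that match. A simplified stand-in for DDS content-filtered
# topics, not the real DDS API.

from typing import Callable


class FilteredReader:
    def __init__(self, predicate: Callable[[dict], bool]):
        self.predicate = predicate
        self.received = []

    def deliver(self, sample: dict) -> None:
        # Filtering happens in the middleware layer, before the
        # application sees the data, saving bandwidth and subscriber CPU.
        if self.predicate(sample):
            self.received.append(sample)


# This subscriber only wants over-temperature readings from plant "A".
reader = FilteredReader(lambda s: s["plant"] == "A" and s["temp"] > 90.0)

for sample in [
    {"plant": "A", "temp": 95.2},
    {"plant": "B", "temp": 99.9},   # wrong plant: filtered out
    {"plant": "A", "temp": 72.0},   # below threshold: filtered out
]:
    reader.deliver(sample)
```

In real DDS the filter is expressed as an SQL-like expression on the topic’s data type and can be evaluated at the publisher side, which is what makes it a bandwidth optimization and not just an application convenience.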

In addition, you need a protocol that can be implemented in resource-constrained edge devices, the cloud, and everything in between.

So if you’re going to go with an enterprise strategy you need a backbone protocol that’s going to be able to support everything you need to do in the future, so you need to choose one that isn’t specific to a particular vertical market or a particular tactical application. You need an enterprise-grade data connectivity solution, and one that can easily bridge to other protocols. What’s it going to be for the enterprise IoT? We think DDS has a great shot at being the protocol of choice because of its comprehensive QoS and proven capability in mission-critical systems.

Already a lot of the lightweight protocols touted for the IoT today are showing their limitations – particularly for edge computing, where low-latency peer-to-peer interoperability is essential (in addition to device to cloud). There are a number of influential papers published recently by experts in mission-critical markets that basically say DDS is essential because it’s the only protocol that has the suite of QoS required for a backbone data connectivity platform in critical systems.

PrismTech’s Vortex platform utilizes the DDS protocol to provide 23 QoS that support it being used as an enterprise data connectivity platform. It also includes the Vortex Gateway, which utilizes Apache Camel bridging technology to provide support for integration of legacy or new subsystems into the Enterprise IoT environment, with specific connectors for DDS and over 80 other protocols (Figure 1).

Figure 1: The PrismTech Vortex utilizes DDS as the transport and management protocol for enterprise IoT deployments, but with numerous connectors/plugins for additional protocols utilized by brownfield or other subsystems.

We’re not shy about telling people that they can choose any protocol they’d like, but to be aware that it might be the weakest link in their chain. If you have extremely high-performance hardware and software running at the edge and you’ve got unlimited, affordable resources in the cloud, you also need the right data-connectivity platform to make the best use of all that computing power or it could be like trying to drive a Ferrari down a road full of potholes; great data, but poor delivery infrastructure. It doesn’t really matter if you have “Brilliant Machines” at the edge and incredible Big Data analytics in the cloud if you can’t move the data in a reliable and timely way to take advantage of it in real time. You’d really be wasting a lot of your investment in the “Things” and the cloud services without the right data-connectivity glue between them.

A lot of the IoT buzz is about device-to-cloud, but seems to ignore device-to-device. Maybe this is because many vendors are selling cloud services, and when you’re in the cloud everything at ground level only looks like a simple data source. But if you don’t really care about anything except what data goes up to the cloud then you’re missing a key piece of the puzzle – your requirements for edge computing, for instance, are conveniently ignored. However, when you talk to forward-looking enterprises or an advanced OEM, they need to do their computing wherever it generates value in the system. Not just in the device, not just in the cloud, but in the device, in the cloud, in the gateway, and anywhere between where new value can be generated. Anywhere that you can extract value from the data you need to be capable of control and/or analytics, and if you’re going to have an enterprise-wide strategy you need to have this distributed computing paradigm and you need to have the IoT tools and infrastructure to support it. That’s when experts arrive at DDS, because DDS, with its heritage as a real-time distributed computing protocol, handles really demanding performance at the edge as well as in the cloud.
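The distributed computing paradigm described here – compute wherever value is generated – often shows up in practice as local aggregation at the edge, with only compact summaries forwarded device-to-cloud while time-critical decisions stay device-to-device. A toy sketch, with all names and thresholds hypothetical:

```python
# Toy sketch of computing at the edge: raw high-rate samples are reduced
# locally, only a compact summary is forwarded upstream, and the
# latency-sensitive decision (the alarm) is taken at the edge itself.
# All names and thresholds here are hypothetical.

def edge_summarize(samples: list[float]) -> dict:
    """Reduce a window of raw readings to a summary for the cloud."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": sum(samples) / len(samples),
    }


raw_window = [20.1, 20.4, 35.9, 20.2]   # e.g. one window of sensor readings
summary = edge_summarize(raw_window)    # this is what goes device-to-cloud
alarm = summary["max"] > 30.0           # the alarm fires locally, device-to-device
```

The cloud still sees enough for trend analysis and predictive maintenance, but the microsecond-to-millisecond reactions never depend on a round trip to a data center.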

DDS is a protocol that’s already cracked the problem at the edge and has now been extended to the cloud. It’s much easier to do that than to start in the cloud and then try to solve edge problems because you just don’t have the low-latency capability you require. If your protocol has been designed to support applications that have one-second updates, you’re a million times away from handling microsecond computing. But if you start at microseconds and crack that problem, doing one-second updates is easy. You just take a break.

What we’re finding is that people look at a cloud-based approach, and that’s all well and good until they get out to the sharp end of the spear, where you need to integrate machines that are operating in real time and require data analytics at the edge. People are starting to realize now that both are required: Cisco has recognized this with Fog Computing; Intel has recognized this with their background in industrial automation and real-time systems such as their Wind River franchise; and the IBMs and Microsofts are beginning to realize it as they begin to face IT/OT integration challenges.

When are we going to see the Industrial/Enterprise IoT living up to all the hype?

JENNIS: We have a strange situation today in that we have an extremely exciting market that everyone now pretty much agrees is going to be massive, but you don’t see the major players really coming in yet with full force. You might argue that IBM has had their Smarter Planet initiative around for several years, and that’s true, and there’s a lot of talk about the IoT from people like Cisco in terms of the Internet of Everything and Fog Computing. But I’m not aware that they have yet delivered complete Enterprise IoT solutions. And when you look at the traditional M2M vendors, whether you’re talking about Xively or SeeControl or Digi International or Eurotech, they’re all relatively small companies primarily focused on tactical applications.

But I fully expect that in the next 12-18 months we’re going to see the big IT guys coming to market with much more complete platform solutions and really energize the enterprise space to move aggressively forward to exploit the IIoT. They are going to do very well in that market because, basically, very large organizations like to do business with very large organizations. You’re not going to provide a strategic IoT platform to a major global enterprise if you’re a smaller vendor. Most major enterprises will want to be partnered with an Intel or a Cisco or an IBM or a Microsoft for their IoT infrastructure. The same applies to many of the large OEMs (with maybe the obvious exception being GE, which is doing it all themselves at the moment and marketing the heck out of it and positioning themselves as the dominant player, but that doesn’t come free – they claim over 1,000 software engineers are working on technologies and deployments[1]). The majority of large enterprises and OEMs are not going to build platforms themselves or go to smaller vendors, they’re going to turn around to their trusted large vendors and say, “you’re our enterprise platform provider, where’s your IIoT platform?”

The big users and OEMs, the guys who spend seven or eight figures a year on their IT infrastructure and applications and knitting it all together, those with thousands of employees in dozens of countries, they’re waiting for their major vendors to come to market with Enterprise IoT solutions, and I think 2015 is going to be the year that starts to happen. The big IT vendors will select and assemble the component parts into comprehensive IIoT platforms and get their acts together over the next year, and then you’re going to start seeing the fight over billions of IoT dollars globally. We’ll also see Asian and European players getting seriously into the game as well, such as some of the German OEMs who are as advanced as anyone in the IIoT market.

So when the major vendors eventually come to market with their soup-to-nuts enterprise IoT platforms, with software functionality that does everything you need so you can just build your analytical apps, share data system-wide, and deploy within weeks – and, by the way, also interface with every other part of your enterprise so you can liberate data as and when you’re ready and manage security because you can partition the data – there will be all sorts of IIoT options available to you as an enterprise customer. And, at that point, enterprise executives will say, “If my vendor has this IIoT platform for me I better be doing something with it since the board of directors has asked for my plan to exploit the IoT in the last four board meetings.”

1. The Harvard Business Review. “HBR on GE’s Revolutionary Evolution.”