Edge Computing: Meeting Things Where They Live

Written by: Alex Jablokow
The edge is where the Internet part of the IoT encounters the actual Things part, via sensors and actuators. As the number and capabilities of connected things grow, the question of how much of the IoT’s processing and analytics should take place at or near that edge, and how much should take place centrally, in the cloud, becomes more pressing.

Edge computing, sometimes called fog computing, is currently hot, and a lot of market players are moving into it, including Dell, Cisco, HPE, Microsoft, and PTC.

What is edge computing?

Processing data locally isn’t new. That’s pretty much how things were done before widespread network connectivity. Edge computing is a matter of meshing the local and the cloud appropriately through a gateway, distributing data storage and analytics in the optimal way, while ensuring that the result is seamless to the user.

Choosing between edge and cloud

If data transmission were free and took no time to get to and from the cloud, then it would make sense to process everything centrally, in a giant Google, Rackspace, or AWS server farm. It’s cheap, easy to scale, and ensures that data is always available for various levels of analytics.

But transmission costs rise with data volume and distance, and the round-trip time between sending data and receiving a response (usually called latency) can become significant for time-critical operations. The importance of edge processing grows as that edge gets farther away and harder to access.

Two main considerations govern edge computing choices:

  • How hard and expensive is it to get data to and from the edge location? The more remote the site, or the larger the amount of generated data, the more it makes sense to handle much of that data locally, distilling it and sending only the most relevant data onward to storage and further analysis in the cloud.

  • How quickly must the response be, and how slow is central processing relative to that speed requirement? What is the latency? Processes that are farther away, or need fast response times, are relatively more likely to benefit from edge computing.
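The two considerations above can be sketched as a simple decision heuristic. This is an illustrative assumption, not a standard formula: the function name, parameters, and thresholds are all made up for the example.

```python
# Hypothetical decision sketch weighing the two considerations above:
# (1) can the link carry the data, and (2) does the cloud round trip
# fit the response budget? All names and numbers are illustrative.

def should_process_at_edge(data_rate_mb_s: float,
                           uplink_mb_s: float,
                           required_response_ms: float,
                           round_trip_ms: float) -> bool:
    """Return True when local (edge) processing is the safer choice."""
    # Consideration 1: the raw data stream exceeds the uplink's capacity.
    link_saturated = data_rate_mb_s > uplink_mb_s
    # Consideration 2: the cloud round trip misses the response deadline.
    too_slow = round_trip_ms > required_response_ms
    return link_saturated or too_slow

# A remote site with a thin uplink and a tight control loop:
print(should_process_at_edge(data_rate_mb_s=12.0, uplink_mb_s=2.0,
                             required_response_ms=50.0, round_trip_ms=600.0))
# → True
# A well-connected site with a relaxed deadline can defer to the cloud:
print(should_process_at_edge(data_rate_mb_s=0.5, uplink_mb_s=100.0,
                             required_response_ms=5000.0, round_trip_ms=80.0))
# → False
```

In practice the decision is rarely binary: most deployments split the work, handling time-critical processing locally and deferring the rest.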

There are also considerations of security, survivability, and robustness, but these are usually secondary to transmission cost and latency.

The benefits, and the costs

Edge computing has the advantages of minimizing transmission costs and response times. But these advantages don’t come free. Remote applications can malfunction in ways hard to analyze and debug, maintaining and upgrading distributed computing hardware can be complicated and costly, and balancing loads between edge and center will always be tricky.

Distributing processing means moving more complex devices out into the world, sometimes to harsh and difficult environments. Edge processors aren’t getting slid into a rack in a climate-humidity-and-access-controlled data center. The cost and complexity of ruggedizing and maintaining these processors must be factored in.

The farther the edge and the more critical its decisions, the smarter it needs to be

So edge capability becomes relatively more important as the location grows more remote, the volume of data generated grows larger, and the need for a quick response becomes more critical. As a result, many of the most active edge computing efforts are taking place in various parts of the energy industry, with its remote production facilities, wide-flung transmission networks, complex processes, and minimal-downtime requirements.

Oil and gas

An offshore oil rig generates over 1TB of data per day, much of it related to time-sensitive functions and equipment status. Transmitting that mass of data, processing it, and returning instructions can take days. A malfunction can cost hundreds of thousands of dollars per day in lost production. Since each device often has its own diagnostics, while the platform as a whole has significant computing resources, along with trained staff, it makes sense to keep time-sensitive decisions on-site, while sending cleaned and processed data to the cloud where it can be used to make larger-scale production and utilization decisions.
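The "distilling" described above might look like the following sketch: keep the raw stream on-site, and forward only a compact summary plus out-of-range readings to the cloud. The field names and alarm thresholds are assumptions for illustration, not from any real rig system.

```python
import statistics

# Illustrative edge-side distillation: the raw readings stay local; only a
# summary and anomalous points travel upstream. Thresholds are assumed.

def distill(readings, low=20.0, high=80.0):
    """Reduce a window of sensor readings to a small upstream payload."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": max(readings),
        "min": min(readings),
        # Only out-of-range points are forwarded in full detail.
        "anomalies": [r for r in readings if r < low or r > high],
    }

# A window of hypothetical pressure readings, two of them anomalous:
pressure_psi = [55.1, 54.8, 91.3, 55.0, 54.9, 12.2]
print(distill(pressure_psi))
```

A six-point payload shrinks to a handful of summary fields; at a terabyte a day, that reduction is what makes the satellite uplink workable.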

Wind turbines

Remotely located wind turbines need to respond individually on a second-by-second basis to changes in the wind and other conditions to maximize efficiency, while the wind farm as a whole needs to integrate the changes in each turbine to make larger-scale decisions on a slightly longer time horizon. Here too, edge processing can forward actionable data to a central location for higher-level business decisions.
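The two time horizons above can be sketched as two functions: a fast per-turbine reaction and a slower farm-wide aggregation. The control logic is a deliberately crude assumption (real pitch controllers are far more sophisticated); the speed thresholds and names are illustrative only.

```python
# Sketch of the two time horizons: each turbine reacts locally every second,
# while the farm aggregates over a longer window. Values are illustrative.

def turbine_adjust(wind_speed_m_s: float, current_pitch_deg: float) -> float:
    """Per-turbine, second-by-second: crude blade-pitch response to wind."""
    if wind_speed_m_s > 25.0:                     # assumed cut-out speed:
        return 90.0                               # feather the blades fully
    if wind_speed_m_s > 12.0:                     # above assumed rated speed:
        return min(current_pitch_deg + 2.0, 90.0) # pitch out, shed load
    return max(current_pitch_deg - 2.0, 0.0)      # pitch in, capture more wind

def farm_summary(turbine_outputs_kw: list) -> dict:
    """Farm-level, longer horizon: aggregate output for central decisions."""
    return {"total_kw": sum(turbine_outputs_kw),
            "mean_kw": sum(turbine_outputs_kw) / len(turbine_outputs_kw)}

print(turbine_adjust(wind_speed_m_s=28.0, current_pitch_deg=10.0))  # → 90.0
print(farm_summary([1500.0, 1480.0, 0.0]))   # one turbine feathered
```

The fast loop must run at the edge; the aggregate is small and slow-changing, so it can travel to the cloud.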

Autonomous vehicles

Self-driving cars are on the way. These will be perhaps the most visible manifestation of edge computing and analytics for the IoT. Each vehicle needs to make instant-by-instant decisions that affect human safety as well as longer-term decisions about its engine and other systems, while also providing traffic control and other systems with the appropriate granularity of data.

Keep your eyes on the edge(s)

There is more than one edge, and more than one way to balance those edges against the center. Look for a wide variety of implementations involving both familiar and new industry players, each with a specific take.


About the Author

Alex Jablokow

A former engineer, Alex is now a writer on technical and healthcare business topics. He also provides marketing content for technical and healthcare businesses of all kinds at www.sturdywords.com.