Insight by Red Hat and Intel

Reduced cost, lower latency, faster decision making, greater flexibility: The benefits of edge computing

It’s difficult to find an article about digital transformation today that doesn’t mention the growth of data at the edge. Enterprises are experiencing a transformative shift in the generation and storage of data, from centralized repositories to highly distributed locations in what we broadly term the network edge. This rapid transition, powered by a surge in Internet of Things devices and smart sensors, increased mobility, a greater focus on user experience, and demands for real-time analytics, is redefining the way we capture, process, secure, and analyze data.

This is the first of a three-part series on edge computing that explores its benefits, the implications for data security, and strategies for managing complex edge systems.

A primary benefit of edge computing is reducing dependency on the network, which is costly in terms of both bandwidth and time. By moving compute closer to the data, analysis can be performed on location, and only the results need to be transmitted back across the network.

Consider toll booths in smart cities. They generally do not have much network connectivity, and they’re using what they have to transmit photos of license plates. By integrating an edge system that adds compute and storage on site, those toll booths no longer have to transmit large picture files. Software running at the edge can analyze the photos and extract the license plate numbers; a string of seven characters is far cheaper and easier to transmit than a photo. Organizations are able to gain insight into edge-produced data faster.
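In rough terms, the pattern looks something like the Python sketch below; the recognition function, booth identifier and payload format are illustrative placeholders rather than any particular product.

```python
# Sketch of edge-side plate extraction: the camera image is processed on site
# and only the recognized characters travel across the constrained link.
import json

def recognize_plate(image_bytes: bytes) -> str:
    """Placeholder for the on-site OCR/ML model that reads the plate."""
    return "ABC1234"  # a handful of characters instead of a multi-megabyte photo

def build_upstream_message(image_bytes: bytes, booth_id: str) -> str:
    plate = recognize_plate(image_bytes)
    # The payload sent upstream is a few dozen bytes, not the original photo.
    return json.dumps({"booth": booth_id, "plate": plate})

photo = bytes(2_000_000)                  # stand-in for a ~2 MB camera image
message = build_upstream_message(photo, booth_id="toll-17")
print(len(photo), "bytes captured ->", len(message), "bytes transmitted")
```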

“The lifecycle of data goes from data to information to knowledge. Raw data is collected by sensors, often in the field. Historically that raw data was sent somewhere else, where it was processed, analyzed and would become information. In the case of edge systems, transmitting that raw data is expensive and very time consuming,” said Damien Eversmann, chief architect for education at Red Hat. “And in the case of disconnected or disadvantaged networks or high latency networks, it takes forever to send a lot of data. So the concept of edge is simply pushing the processing of data into information out to where the data is gathered, or at the least, sifting out the noise and only sending relevant data.”
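The same pattern applies to any sensor feed: reduce raw readings to a small summary on site and let only the relevant values leave. A minimal sketch, with made-up thresholds and field names:

```python
# "Sifting out the noise" at the edge: raw readings are reduced to a small
# summary locally, and only anomalies plus aggregates cross the network.
from statistics import mean

def summarize(readings: list[float], alert_threshold: float = 90.0) -> dict:
    anomalies = [r for r in readings if r > alert_threshold]
    return {
        "count": len(readings),
        "avg": round(mean(readings), 2),
        "max": max(readings),
        "anomalies": anomalies,   # the only raw values that leave the site
    }

raw = [71.2, 70.8, 95.6, 69.9, 71.0]   # collected locally at high frequency
print(summarize(raw))                  # the summary is all that gets transmitted
```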

That’s a huge benefit to decision makers out in the field, because one of the chief challenges with the network is latency. It takes time for large amounts of data to be sent back to a central data center, analyzed, and for the results to be returned. By moving the compute closer to where the data originates, producing results in the field, and sending only those results back, latency becomes less of a barrier to decision-making. The measure that improves is mean time to insight.
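Some back-of-the-envelope arithmetic shows the difference. Assuming, purely for illustration, a 1 Mbps disadvantaged link, a roughly 2 MB photo and a 64-byte edge result:

```python
# Illustrative comparison of transfer time over a constrained link.
# Link speed and payload sizes are assumptions chosen for this example.
LINK_MBPS = 1.0    # assumed disadvantaged, low-bandwidth link

def transfer_seconds(size_bytes: int) -> float:
    return size_bytes * 8 / (LINK_MBPS * 1_000_000)

raw_photo = 2_000_000    # ~2 MB image sent to a central data center
edge_result = 64         # plate string plus metadata after edge processing

print(f"raw photo:   {transfer_seconds(raw_photo):.1f} s per vehicle")
print(f"edge result: {transfer_seconds(edge_result) * 1000:.2f} ms per vehicle")
```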

To illustrate this, take the toll booth scenario one step further and add national security implications. Customs and Border Protection uses similar systems at border crossings, scanning license plates and sending that information to be analyzed and cross-referenced for red flags. If that analysis is performed by machine learning models in the field, at the source of the data, just as with the toll booths, the reduction in latency allows CBP to act faster to mitigate potential risks.
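Conceptually, the on-site check can be as simple as comparing a recognized plate against a locally cached watchlist. The sketch below uses placeholder data and is not meant to describe CBP’s actual systems or workflow:

```python
# Edge-side screening against a locally cached watchlist, so a hit can be
# flagged on site without waiting on a round trip to a central system.
LOCAL_WATCHLIST = {"XYZ9876", "QRS4321"}   # periodically synced from the center

def screen_plate(plate: str) -> str:
    if plate in LOCAL_WATCHLIST:
        return f"ALERT: {plate} matched the local watchlist"  # actionable on site
    return f"{plate}: clear; record forwarded when the link allows"

print(screen_plate("XYZ9876"))
print(screen_plate("ABC1234"))
```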

“What’s vital to enable this capability is to place machine learning algorithms and what can be loosely termed AI running close to where the data originates, so that you can process that data where it sits in a flexible way,” said John Dvorak, chief technology officer for North America Public Sector at Red Hat. “But we’re trying to not create a new operational reality for people at the edge. When it comes to running these AI/ML workloads, we want them to be operated and maintained in the same way as they are in the data center, in a performant way for the kinds of devices encountered at the edge.”

That’s because going to the edge often introduces extra layers of complexity. But it doesn’t have to; by using edge management platforms that orchestrate workloads consistently across any environment, developers and operators can avoid the complexity of managing multiple tools.

One important step in making edge computing more flexible is the introduction of containers.

Operators in the field, be they investigators, security personnel or first responders to natural disasters, don’t always know what tools they’re going to need until they’re there. Containerization allows those digital tools to be modular and interchangeable on the fly; it’s much like having a bit driver in your toolkit versus a dozen different screwdrivers. It’s faster, more portable, and more versatile, and now you have room for a ratchet too, just in case.
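In practice, the bit-driver analogy loosely translates to swapping container images rather than reinstalling software on a device. The sketch below assumes Podman as the container runtime and uses made-up image names:

```python
# Running interchangeable containerized tools on the same edge host.
import subprocess

def run_tool(image: str, *args: str) -> None:
    # Each tool ships as a container image, so swapping tools means pulling a
    # different image rather than reinstalling software on the device.
    subprocess.run(["podman", "run", "--rm", image, *args], check=True)

# For example, the same edge host could switch between missions:
# run_tool("registry.example.com/edge/plate-ocr:1.4")
# run_tool("registry.example.com/edge/damage-assessment:2.0")
```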

“The tools that data scientists use are so rapidly updating and so quickly advancing that if you don’t have this sort of modular toolset, the containerization, it’s very easy to fall behind,” Eversmann said. “But when you look at a containerization solution like OpenShift, you can easily update your tools to the latest version without redeploying the entire stack and rebuilding your entire solution. You can have the latest and greatest at your fingertips.”
