Insight by Red Hat

Management of edge systems requires planning for unique circumstances, limitations

Edge computing happens in the kinds of environments you can’t necessarily manage moment to moment: the bottom of the ocean, outer space, battlefields, etc. Government agencies need to be able to control, manage and scale these environments remotely, without having to send humans to far-off or dangerous places. Managing at scale at the edge requires a different approach than managing a traditional data center.

“I think it’s all wrapped up in the distance piece. Because with distance comes a lot of other challenges,” said Damien Eversmann, chief architect for Education at Red Hat. “Connectivity is a big one, especially with federal agencies. You have systems that are rarely connected or never connected. Latency, when you talk about running workloads in space, that communication is not happening over a 6G network. It’s much slower than that. And you have systems that are somewhat autonomous themselves, too. So they are changing during that disconnected period, too. You often have more information that needs to be sent to sync that up.”

Agencies need to architect edge systems with these constraints in mind, long before the systems are deployed to production. And to reduce the cognitive load on teams, remote management solutions should use the same tools and user interfaces employed for other workloads in the network, whether that’s on-premises, in a data center or in the cloud. Not only does this reduce training requirements, it also simplifies management.

For example, on the International Space Station, which recently began using edge systems enabled by Red Hat, NASA can change its use cases remotely. That eliminates the need to send up new physical equipment between experiments, which saves NASA significant time and money, considering the cost to send cargo to the ISS is measured in thousands of dollars per pound. Similarly, the Defense Department could push workload updates to a Humvee in the field, eliminating the need for it to return to base and potentially delay the mission.

It all comes down to planning ahead: making edge systems usable and easy to manage requires significant design work before deployment. Practically speaking, the best way agencies can accomplish that is through automation.

“There are three fundamental axes for running any workload. Those are compute, storage and network. The goal of these edge systems and automating them is to orchestrate those three parameters for any given specific need,” said Michael Epley, chief architect and security strategist for the public sector at Red Hat. “If you’re operating in an edge environment with intermittent network connectivity (on the space station, for example, you might only have a ground station downlink capability periodically), the goal is to orchestrate across those three dimensions. While you have network, take advantage of the network and buffer something, a workload or data, to storage, then move that into compute when you have available compute capacity. You can do that across all the different workloads you might need to manage. And the automation we need at the edge is to detect what the current capabilities along those dimensions are and then apply whatever business rules you’ve got to optimize the utilization of those resources.”
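In code, that store-and-forward pattern might look something like the minimal Python sketch below. The probe and job functions are hypothetical stand-ins for whatever telemetry and transport a real edge platform exposes; the point is the shape of the loop: buffer while the link is up, process when compute frees up.

```python
import random
import time
from collections import deque

def network_available() -> bool:
    """Hypothetical probe; e.g., a ground station downlink that is only up periodically."""
    return random.random() < 0.3

def compute_available() -> bool:
    """Hypothetical probe for spare local compute capacity."""
    return random.random() < 0.7

def fetch_next_job():
    """Hypothetical: pull the next workload or dataset while the link is up."""
    return {"received_at": time.time()}

def process(job):
    print(f"processing {job}")

buffer = deque()  # stands in for durable local storage

def orchestrate_once():
    if network_available():
        buffer.append(fetch_next_job())  # use the network while you have it
    if compute_available() and buffer:
        process(buffer.popleft())        # spend spare cycles on buffered work

if __name__ == "__main__":
    for _ in range(20):
        orchestrate_once()
```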

The key here, Epley said, is to use a declarative approach to automation. Declaring an end state, rather than a sequence of procedural steps, provides more confidence in reaching the desired result and reduces the likelihood of unintended side effects. It also makes errors less likely and diminishes the risk of losing an edge system to misconfiguration or misdeployment. And if the system knows the end state, it can account for and adapt to procedural or environmental failures by taking different courses of action, rather than adhering to a single scripted path.
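To make the declarative idea concrete, here is a minimal Python sketch of a reconciliation loop, the mechanism platforms such as Kubernetes use to converge on a declared state. The workload names and the probe function are hypothetical illustrations, not a real platform API.

```python
# The operator declares only the desired end state.
DESIRED_STATE = {
    "telemetry-collector": {"replicas": 2, "version": "1.4"},
    "image-classifier": {"replicas": 1, "version": "2.0"},
}

def observe_actual_state() -> dict:
    """Hypothetical probe of what is actually running on the edge node."""
    return {"telemetry-collector": {"replicas": 1, "version": "1.4"}}

def reconcile(desired: dict, actual: dict) -> None:
    """Converge on the declared end state from wherever the system currently is."""
    for name, spec in desired.items():
        running = actual.get(name)
        if running is None:
            print(f"start {name} (version {spec['version']}, {spec['replicas']} replicas)")
        elif running != spec:
            print(f"adjust {name}: {running} -> {spec}")
    for name in set(actual) - set(desired):
        print(f"stop {name}: not in the declared end state")

if __name__ == "__main__":
    # Because only the end state is declared, rerunning the loop after a
    # partial failure converges again without a scripted recovery path.
    reconcile(DESIRED_STATE, observe_actual_state())
```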

So in concrete terms, how do agencies accomplish this?

The key, said John Dvorak, chief technology officer for North America public sector at Red Hat, is to choose a consistent management platform that is edge-aware and capable of operating in disrupted, disconnected, intermittent and low-bandwidth environments.

It’s also important to do risk and threat analysis, because edge environments have different risk and threat profiles than, say, a data center. That means the management platform also needs to employ different security controls and approaches to mitigate those risks and threats.

“We talk about being able to write software once and run it anywhere; that’s the concept around containerization. So the idea is it can run on your laptop, it can run in the data center. But of course the reality is to be performant it’s got to work well for the edge devices that you have,” Dvorak said. “Make sure that you understand the scale of the compute and storage, those limitations, and plan around those as well. Make sure that the software is tested for those environments before you deploy.”
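One way to act on that advice is a pre-flight check that compares a workload’s declared requirements against what the target device can actually supply, as in this hypothetical Python sketch. The requirement figures are illustrative, not from the article.

```python
import os
import shutil

# Illustrative requirement figures for a hypothetical edge workload.
WORKLOAD_REQUIREMENTS = {"cpus": 2, "free_disk_bytes": 4 * 1024**3}

def device_capacity() -> dict:
    """Measure what this device can actually supply."""
    return {
        "cpus": os.cpu_count() or 1,
        "free_disk_bytes": shutil.disk_usage("/").free,
    }

def preflight(requirements: dict) -> bool:
    """Return True only if every declared requirement fits this device."""
    capacity = device_capacity()
    ok = True
    for resource, needed in requirements.items():
        have = capacity[resource]
        if have < needed:
            print(f"FAIL {resource}: need {needed}, device has {have}")
            ok = False
    if ok:
        print("OK: workload fits this device")
    return ok

if __name__ == "__main__":
    preflight(WORKLOAD_REQUIREMENTS)
```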
