Edge computing reveals new risks, requires new security strategies

Advancements in technologies like 5G, cloud and the internet of things are making computing at the edge more and more viable. As federal agencies begin to take advantage of the new capabilities those technologies provide, they also need to adjust their security strategies to compensate for new vulnerabilities.

“The idea is that if you’re going to go gather data, it’s best to process the data when you get it; there’s no sense in sending all the data backwards to some other station someplace where it can be processed because it takes a while to actually send the data down, also takes time to actually process the data,” Frank Konieczny, Air Force chief technology officer, said during a July 21 FedInsider webinar. “So why can’t you just think of it as a computing platform out there that’s in the skies, that grabs the data. And the data could be reconnaissance data or any kind of other data that’s going on, and processes in the sky and a computer resource that is sitting there as opposed to anyplace else. That’s the tactical edge.”

For the Air Force, that tactical edge is the aircraft itself. For example, the F-35 Joint Strike Fighter collects massive amounts of data while in the air. Chris Cleary, chief information security officer for the Department of the Navy, even shared a joke about it during the webinar: “Is it an airplane that carries a computer, or a computer with an airplane wrapped around it?” Autonomous vehicles are being developed around the same concept.

But that exposes a weak link: transferring all of that data back to a central location for processing and analysis stresses networks and bandwidth. And that’s if it’s even safe to do so – warfighters have to assume the environment they’re operating in could be degraded or even denied by an adversary, leaving them with no good options to send that data out, Cleary said. Even if that’s not the case, adversaries could use such transmissions to geolocate warfighters and their equipment.

That’s one reason the Navy has essentially always operated at the edge. Its ships, in effect miniature and mostly self-sufficient cities, have what they need to process and analyze data on the spot. The same goes for most military bases.

“The idea is always wherever you gather the data is where you want to process the data. And I think that as we go to a more 5G relevant communications technology, the data is going to stream in faster than we had before,” Konieczny said. “And there’s no way of actually moving the data down to [a] production network to some centralized computing facility. You really need to process that mostly where it’s located. And I think everybody’s going to go that route, especially when we have smaller devices now, and we have more power than we ever had.”

To that point, Alma Cole, CISO for Customs and Border Protection, said CBP is already implementing this in the field. Ports of entry and border crossings are already heavily monitored by video, which used to mean employees monitoring screens for abnormalities all day. Now CBP is applying logic at the edge to do the watching, and only the most relevant data gets sent over the network. That also provides agents with better, more timely information on demand.
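
As a rough sketch of that pattern (illustrative only, not CBP's actual system), an edge node might score each frame locally and push only flagged events over the constrained link. The model stub, threshold and endpoint below are all hypothetical:

```python
import json
import time
import urllib.request

ANOMALY_THRESHOLD = 0.8  # hypothetical confidence cutoff
COMMAND_CENTER = "https://ops.example.gov/events"  # placeholder endpoint

def detect_anomaly(frame: bytes) -> float:
    """Stand-in for an on-device vision model returning a 0-1 score.

    In practice this is where local inference would run, on the same
    hardware that sits next to the camera.
    """
    return 0.0  # placeholder score; no real model here

def monitor(camera_id: str, frames) -> None:
    """Score every frame locally; transmit only the flagged ones."""
    for frame in frames:
        score = detect_anomaly(frame)
        if score < ANOMALY_THRESHOLD:
            continue  # routine footage never leaves the site
        event = {"camera": camera_id, "ts": time.time(), "score": score}
        req = urllib.request.Request(
            COMMAND_CENTER,
            data=json.dumps(event).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # only a small event record crosses the link
```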

Originally, Cole said, many of those monitoring systems were air gapped to prevent tampering. Now, with the data being processed onsite, those networks can be rolled together with others from more remote outposts with poor bandwidth to provide a more complete operational picture. But it also creates new vulnerabilities.

“Obviously, we’re making a lot of investments into zero trust types of technologies,” Cole said. “We’re no longer [limited to] ‘either you’re trusted and you’re on the network’ or ‘you’re off the network.’ Now we’re doing more micro-segmentation in the cloud and other areas in [software-defined networking] where we can more and more comply-to-connect. We’re really just changing the face of everything we do to push that data out to the edge where it’s needed in the most efficient way possible.”
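
One way to picture that per-request, per-segment gate (a hedged illustration, not CBP's implementation) is a policy check that weighs identity, device posture and the specific microsegment requested, rather than a single on-or-off network decision. The identities, segment names and posture flag below are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity: str         # authenticated user or service identity
    device_patched: bool  # posture signal from endpoint management
    segment: str          # microsegment being requested

# Hypothetical per-segment policy: which identities may reach a segment,
# and whether a compliant device posture is required to connect.
POLICY = {
    "video-analytics": {"identities": {"svc-edge-node"}, "require_patched": True},
    "admin":           {"identities": {"ops-admin"},     "require_patched": True},
}

def authorize(req: AccessRequest) -> bool:
    """Grant access per segment and per request, never network-wide."""
    rule = POLICY.get(req.segment)
    if rule is None:
        return False  # default deny: unknown segment
    if req.identity not in rule["identities"]:
        return False  # identity not cleared for this segment
    if rule["require_patched"] and not req.device_patched:
        return False  # posture check: comply to connect
    return True

# Every connection is evaluated on its own; being "on the network" grants nothing.
print(authorize(AccessRequest("svc-edge-node", True, "video-analytics")))   # True
print(authorize(AccessRequest("svc-edge-node", False, "video-analytics")))  # False
```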

But with all these devices gathering and analyzing data, new challenges in implementing that zero trust framework begin to appear.

“We’re getting to machine-to-machine communication, as opposed to person-to-machine communication,” Konieczny said. “And so you have to look at what kind of authentication capabilities that I can trust between a machine to another machine to actually exchange information.”

Many authentication measures currently rely on a person being on the other end of the device, such as personal identity verification (PIV) cards, the military’s Common Access Card or biometrics. Machines don’t have those. And even standard machine-to-machine authentication happens at the device level, which doesn’t fit into a zero trust framework. The question becomes how to authenticate at the data level.
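
In its simplest form, authenticating at the data level could mean each machine signs every record it produces, so a consumer can verify the data itself regardless of which session or network delivered it. This is a minimal sketch assuming a symmetric key provisioned at enrollment; the key, device ID and field names are placeholders:

```python
import hashlib
import hmac
import json

# Hypothetical per-device key, provisioned out of band at enrollment
# (in practice this might live in a TPM or hardware security module).
DEVICE_KEY = b"example-key-provisioned-at-enrollment"

def sign_record(device_id: str, payload: dict) -> dict:
    """Attach a MAC to the record itself, not to the transport session."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, device_id.encode() + body, hashlib.sha256)
    return {"device": device_id, "payload": payload, "mac": tag.hexdigest()}

def verify_record(record: dict) -> bool:
    """Any holder of the key can verify the data wherever it arrives."""
    body = json.dumps(record["payload"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, record["device"].encode() + body,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["mac"])

signed = sign_record("sensor-07", {"reading": 42.0})
print(verify_record(signed))  # True: trust travels with the data, not the pipe
```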

The supply chain is another concern, one that extends beyond the recent scrutiny and bans on hardware.

“If I now am depending on this particular device to actually make decisions for me, obviously, that means that where the hardware came from, where the software came from, and the processes that go into that software and that intelligence, that those are things that are all part of my plan, and that I have really high assurance on,” Cole said. “Because now we’re pushing a lot of these decisions, a lot of this trust that normally we’d like to do in house, potentially to third parties that might be developing machine learning code. And so now I really want to make sure also that I have some sort of a way to manage third party risk.”
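
One simplified control for that kind of third-party risk is to pin vetted artifacts to known digests and refuse anything that does not match before it can make decisions at the edge. This sketch is illustrative; the artifact name and digests are placeholders:

```python
import hashlib

# Hypothetical allowlist recorded when a third-party model was vetted:
# artifact name -> SHA-256 digest of the approved build.
APPROVED_DIGESTS = {
    "detector-v3.onnx": hashlib.sha256(b"vetted-model-bytes").hexdigest(),
}

def verify_artifact(name: str, blob: bytes) -> bool:
    """Refuse to load any third-party artifact whose digest wasn't vetted."""
    expected = APPROVED_DIGESTS.get(name)
    if expected is None:
        return False  # unknown artifact: never ships to the edge
    return hashlib.sha256(blob).hexdigest() == expected

# The edge node checks provenance before trusting the model with decisions.
print(verify_artifact("detector-v3.onnx", b"vetted-model-bytes"))    # True
print(verify_artifact("detector-v3.onnx", b"tampered-model-bytes"))  # False
```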
