Insight by Red Hat

AI/ML are critical to near-peer competition but require support from automated infrastructure

Sensors gather the data on the battlefield. Information systems like AI and machine learning process that data faster, for better decision-making. But what's talked about far less often is the infrastructure that supports them.

This content is sponsored by Red Hat.

The Defense Department has been focused for some time on bringing artificial intelligence and machine learning more prominently into the decision-making process. AI and ML are key components of DoD's near-peer competition strategy.

To prevail in a conflict, Defense agencies need to make better decisions than their adversaries, and they must make them faster. That means gathering more data and processing that data faster and more efficiently. That’s where sensors and AI/ML come into play, the two stars of this initiative. But what’s talked about far less often is the infrastructure that supports them.

Sensors gather the data on the battlefield. Information systems leveraging AI/ML triage the data, decide how far up the chain of command to elevate it, and help those various tiers of decision-makers process it and identify the best courses of action. This function is similar to the way first responders prioritize care in an emergency. And just as ambulances and helicopters are needed to transport people to hospitals once that triage is performed, basic network infrastructure is needed to bridge the battlefield with the forward operating base or up-echelon resources, such as an intelligence data center in the continental U.S., and ensure the data gets where it needs to go.

“The military needs to bring that capability with them; we’re not going to be in a foreign country, and be able to use their existing telecom networking infrastructure. We can’t depend on that,” said Chris Yates, chief architect at Red Hat. “So we need to have this ability to rapidly deploy these networking capabilities. The challenge here is that this is a really sophisticated and complex way of managing data. And that means that there’s opportunities to introduce human error. And so the way that we see this being ameliorated is through automation.”

Automation supports network scalability

Automation allows this infrastructure to rapidly scale up in any environment. The first thing that's needed is a networking capability, whether that's electromagnetic spectrum, over the wire or satellite uplinks. The information needs a path from the battlefield to the appropriate decision-making echelon. Once that's stood up, the information systems that process the data and decide which echelon the data goes to need to be deployed.

“We need to have in our back pocket this plan and automation for how we’re going to deploy these components so that when we face these challenges, we aren’t trying to solve them anew, on the fly, every time we encounter them,” Yates said.

That's where operational efficiency comes in. It requires understanding the difference between parallelizable work and serial work. Parallelizable work can be split into multiple pieces that can be accomplished simultaneously. Serial work has to be accomplished in a specific order: the roof of a building can't be constructed before the walls. Operational efficiency requires understanding what work is serial, what work is parallel and, after that, what can be automated.
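As a rough illustration of that distinction (a sketch, not Red Hat's tooling), the snippet below runs independent steps simultaneously and dependent steps in order, using Python's standard concurrent.futures module. The task names are hypothetical stand-ins for deployment steps.

```python
# Minimal sketch: parallelizable vs. serial work in an automated deployment.
# Task names (provision_uplink, stage_containers, etc.) are hypothetical,
# chosen only to illustrate the ordering constraints described above.
from concurrent.futures import ThreadPoolExecutor
import time


def task(name: str, seconds: float) -> str:
    """Stand-in for a real deployment step; just sleeps to simulate work."""
    time.sleep(seconds)
    return f"{name} done"


# Parallelizable work: independent steps can run at the same time.
with ThreadPoolExecutor() as pool:
    futures = [
        pool.submit(task, "provision_uplink", 2),
        pool.submit(task, "stage_containers", 2),
        pool.submit(task, "load_sensor_configs", 2),
    ]
    for f in futures:
        print(f.result())  # all three finish in roughly 2 seconds total

# Serial work: each step depends on the one before it, like walls before a roof.
print(task("stand_up_network", 1))
print(task("deploy_information_system", 1))
print(task("start_data_triage", 1))
```

Only the serial chain has to be waited out step by step; everything else is a candidate for parallel execution and automation.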

Automation can help reduce the impact of serial tasks on work timetables by reducing the amount of specialized domain knowledge they require. For example, a specific capability often needs to be deployed in a certain theater, and it needs to be reliable. If only certain experts understand how to deploy and operate an information system, then those experts have to be transported to the necessary geographic location, along with whatever support they need, such as physical protection in a conflict area.

Automation extends human expertise

“Automation around that reduces the need for these deep human experts to be available and local to deploy that capability in theater. So we can gain benefit through the automation of encapsulating applications, deployment configurations and dependencies to be able to stand these systems up. We can encapsulate that automation through containerization. And then we don’t need system experts, we need a person who understands how to deploy a containerized application, which has a lot more general purpose applications,” Yates said.

“Now we have a person who understands how to deploy containers, and they can deploy multiple different applications without being domain experts on those applications, instead of each application being an artisanal, bespoke deployment. To an extent, it’s like bringing interchangeable parts and the assembly line concept to the deployment and building of applications.”
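As a rough sketch of that "interchangeable parts" idea (an illustration, not Red Hat's actual tooling), a small Python wrapper around a container runtime such as Podman lets one operator deploy very different applications with the same repeatable procedure. The registry, image names and ports below are hypothetical placeholders.

```python
# Minimal sketch: deploying any containerized application the same way,
# regardless of what is inside it. The image references and port mappings
# are hypothetical placeholders, not a real registry or application.
import subprocess


def deploy_container(name: str, image: str, port_map: str) -> None:
    """Pull and run a container with Podman; the operator needs no
    application-specific expertise, only this repeatable procedure."""
    subprocess.run(["podman", "pull", image], check=True)
    subprocess.run(
        ["podman", "run", "-d", "--name", name, "-p", port_map, image],
        check=True,
    )


# The same procedure deploys very different applications.
deploy_container("triage-svc", "registry.example.com/triage-svc:1.0", "8080:8080")
deploy_container("uplink-mgr", "registry.example.com/uplink-mgr:2.3", "9090:9090")
```

The domain knowledge lives inside the container image; the person in theater only needs to know the one deployment procedure.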

That repeatability also simplifies processes, saving man-hours and freeing those personnel for other critical functions. It reduces the need for esoteric expertise, allowing personnel to have a wider focus and more well-rounded training. It also saves money, because expert labor is more expensive and simplified, repeatable processes can be completed more quickly. And it saves on opportunity costs as well.

“It doesn’t have to just be financial,” Yates said. “It could be that I can’t execute my mission goals within the timeframe that I need. That could mean critical infrastructure is lost, like the power grid, or it could mean that I lose a tactical advantage, because I wasn’t able to execute on that need in that timeframe.”

To listen to and watch all the sessions from the 2022 Federal News Network DoD Cloud Exchange, go to the event page.

