Building trust in AI is key to autonomous drones, flying cars

DoD is experimenting with ways to improve autonomy in unmanned vehicles, and to build trust in that autonomy between the platforms, their operators and the commanders that will deploy them.

Drones have been a mainstay of military conflicts and operations for nearly two decades. So it’s only natural that the military is looking to enhance the next generation of unmanned vehicles with the technology expected to have some of the most significant repercussions on the battlefields of the next decade: artificial intelligence. Organizations like the Defense Advanced Research Projects Agency, the Army’s Artificial Intelligence Task Force and AFWERX – the Air Force’s in-house technology incubator – are already experimenting with ways to improve autonomy in unmanned vehicles, and to build trust in that autonomy among the platforms, their operators and the commanders who will deploy them.

“When you look at not just 10 years out, or 20 years out, we’re looking to design a future system from the ground up that’ll be fully AI-enabled, perhaps, at some farther end of the spectrum in terms of autonomy. We’ll have legacy systems and hybrid systems on the way,” Col. David Barnes, chief AI ethics officer for the Army AI Task Force, said during an Aug. 18 webinar.

The goal for these systems is total zone reconnaissance – in other words, each unmanned aerial or ground vehicle will be another node of sensors deployed to provide information to the warfighter. One way AI will be involved in these efforts is through automatic target recognition. That will be accomplished through a full spectrum of sensors, including optical, thermal and electromagnetic.
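To make that idea concrete, here is a minimal sketch of how confidences from several sensor modalities might be blended into a single target score. Everything in it, from the Detection type to the modality weights, is a hypothetical illustration of the general fusion concept, not the task force’s actual method.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One sensor's report on a candidate target (hypothetical type)."""
    modality: str      # "optical", "thermal" or "electromagnetic"
    confidence: float  # classifier confidence in [0, 1]

# Assumed reliability weights per modality; a real system would calibrate
# these against test data rather than hard-code them.
WEIGHTS = {"optical": 0.5, "thermal": 0.3, "electromagnetic": 0.2}

def fused_confidence(detections: list[Detection]) -> float:
    """Blend per-sensor confidences into a single target score."""
    weight = sum(WEIGHTS[d.modality] for d in detections)
    if weight == 0:
        return 0.0
    return sum(WEIGHTS[d.modality] * d.confidence for d in detections) / weight

# Example: an optical hit corroborated by a weaker thermal hit.
score = fused_confidence([Detection("optical", 0.9), Detection("thermal", 0.7)])
print(f"fused target confidence: {score:.2f}")
```

The point is less the arithmetic than the architecture: each vehicle is a node that reports a judgment, not a raw sensor feed.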

That goal is shared by DARPA’s Gremlins program, whose air vehicle is designed to be launched from, and recovered in midair by, another aircraft.

“So when we first started the program, we were really looking at using the Gremlins air vehicle as a sensor. It’s a node,” said Scott Wierzbanowski, a program manager in DARPA’s Tactical Technology Office. “The biggest thing to consider about this is that it’s like any other UAV out there, in that you could kind of swap out different payloads. You could do different things with it. But what we were really focusing on was using this to put non-kinetic systems on board that can be used for geolocation, for different types of gathering of electronic emissions, or for electro-optical payloads. So really, it was more of an enhancer, pushing our sensors forward so that the actual manned aircraft and high-value assets don’t have to get as close to the fight.”

But the AI has to be able to understand and interpret the data from these sensors correctly.

“What the task force is focused on is where artificial intelligence can enhance and allow for better autonomy, and therefore better teaming,” Barnes said. “Because again, what’s really going to matter in terms of employment on the battlefield and elsewhere is the trust that the commanders and operators are going to have in that system.”

One of the technical challenges will be the processing power of the systems themselves. In a battlefield environment, these platforms are likely to be comms-restricted, which is why autonomy is so important: a vehicle won’t be able to send data back for processing, so it will have to perform the analysis and draw conclusions on the spot. Part of the question, then, is how to push that AI to the edge in order to give the warfighter the best information possible.
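A minimal sketch of that edge pattern follows, assuming a hypothetical onboard classifier and radio interface (classify_frame and radio_send are stand-ins, not any fielded system’s API): the loop analyzes every frame locally and uplinks only a few bytes of conclusions.

```python
import json

def classify_frame(frame) -> dict:
    """Stand-in for an onboard model, e.g. a quantized neural net.
    Assumption: returns a label and a confidence for the frame."""
    return {"label": "vehicle", "confidence": 0.91}

def edge_loop(sensor_frames, radio_send, min_confidence=0.8):
    """Analyze every frame locally; uplink only high-confidence findings."""
    for frame in sensor_frames:
        result = classify_frame(frame)
        if result["confidence"] >= min_confidence:
            # A few bytes of conclusion instead of megabytes of imagery,
            # which is what a comms-restricted link can actually carry.
            radio_send(json.dumps(result))

# Example: raw frames stay on the vehicle; compact reports go over the "radio."
edge_loop(sensor_frames=[b"f1", b"f2", b"f3"], radio_send=print)
```

The design choice is the same one Barnes describes: the link carries conclusions the operator can act on, not data that still needs analysis somewhere else.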

“All of this is an important aspect moving forward to ensure that the human and the machine are both cooperating and we develop a trust that goes both ways. And so we use a concept: soldier-centered design. We’re sort of riffing on what’s going on in industry,” Barnes said. “But how do we get multiple touch points through the sprints and throughout these different projects, so we can ensure that we are taking in the considerations of the operator and the commander that are going to actually be employing these systems on the battlefield, as well as the input from the smart folks in academia and in industry. And so together, we can move forward, because if we don’t, we’ll find ourselves restricted or going down a certain path. But I think that this is beginning to show promise.”

And this goes beyond strictly military applications. AFWERX is experimenting with the same fusion of autonomy and AI in its Agility Prime program, working alongside the Federal Aviation Administration and NASA to make flying cars a reality. Again, that will require building trust between the AI and the vehicle’s operator.

In this case, the autonomy around the sensors applies chiefly to detect-and-avoid systems, which let the vehicle spot and steer clear of other aircraft.
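For a sense of the computation a detect-and-avoid system performs, here is a textbook closest-point-of-approach check. The geometry is standard, but the 500-meter protection radius and the interfaces are placeholders of ours, not anything AFWERX or the FAA has specified.

```python
import math

def time_to_cpa(rel_pos, rel_vel):
    """Seconds until closest point of approach, from the intruder's
    position and velocity relative to us (2D, meters and m/s)."""
    speed_sq = rel_vel[0] ** 2 + rel_vel[1] ** 2
    if speed_sq == 0:
        return 0.0  # no relative motion, so separation never changes
    dot = rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]
    return max(0.0, -dot / speed_sq)  # a past CPA means already diverging

def miss_distance(rel_pos, rel_vel):
    """Projected separation at the closest point of approach."""
    t = time_to_cpa(rel_pos, rel_vel)
    return math.hypot(rel_pos[0] + rel_vel[0] * t, rel_pos[1] + rel_vel[1] * t)

def conflict(rel_pos, rel_vel, radius_m=500.0) -> bool:
    """Alert when the projected miss distance enters the protection radius."""
    return miss_distance(rel_pos, rel_vel) < radius_m

# Example: an intruder 2 km ahead, closing at 50 m/s with a slight offset.
print(conflict(rel_pos=(2000.0, 100.0), rel_vel=(-50.0, 0.0)))  # True
```

An onboard system would run a check like this against every track it holds, handing off to the autonomy stack whenever a conflict is projected.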

Another application of this fusion of AI and autonomy is automating critical parts of flight operations, which would make those operations safer while reducing the amount of training pilots have to go through.

“In the near term, we’ve been doing some great things with our Pilot Training Next program to reduce the time it takes to do pilot training. When you take advanced approaches to education and training, and combine those with improvements in autonomy, where are the places where you start to gain significant savings in the development of operators?” Col. Nathan Diller, AFWERX director, said. “Eventually, where are those cases where you do some of the manned/unmanned teaming, where the operators are out of the system? Obviously the value is reduced cost, as well as reduced risk of taking a life. So that near-term autonomy piece, with multiple programs that we’re seeing, I think probably has the biggest value.”

