The Defense Department has pinned its hopes on someday putting artificial intelligence tools in the hands of warfighters to help them make data-driven decisions on the battlefield. But given the current state of the technology and the dearth of training data that algorithms need, that goal appears difficult to achieve in the short term.
The defense community, including the Defense AI Center stood up last year, has rolled out AI pilots on everything from predictive maintenance of aircraft and vehicles to autonomous ships. But for all of DoD’s aspirational projects, AI tools tend not to fare well in situations where data is sparse or not structured in a way the algorithm can process.
Ranjeev Mittu, the head of the Naval Research Lab’s information management and decision architectures branch, said the AI algorithms of today are starved for reliable training data to make informed decisions in the real world.
“It’s not really clear how much data is needed under what scenarios, for what kinds of problems yet, and I think it’s kind of emerging. There’s a lot of research going on, but I think fundamentally there’s still a lot more research that needs to be done in the relationship between data and training, and what the right tradeoffs are for the different kinds of problems,” Mittu said in an interview with Federal News Network.
As part of taking AI to the next level, the Naval Research Lab explored using bots and machine learning tools to streamline hiring, retention and retraining. Mittu said some of these ideas include using AI algorithms to filter through applications and have virtual assistants help conduct interviews or screen candidates.
“I think we should be looking at that and looking at what’s successful — what’s working in industry and what isn’t, and figuring out how we can use those techniques for the DoD,” Mittu said.
To track employee engagement and satisfaction, the lab has also brainstormed ways to use AI and machine learning tools to correlate the number of publications and patents NRL workers have produced over the years with changes to the workforce environment, such as new management approaches.
“You can see what kinds of things actually improved the climate and the productivity, and what things tend to make it decline,” Mittu said. “You can fix those along the way, to make sure that your employees are happy, [that] they’re growing. And you can see that by the tangible outcomes they produce, as opposed to learning it at the end when they’re exiting and it’s too late to do anything to bring them back or keep them on. I think these are things we ought to be looking at.”
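The analysis Mittu describes could, in its simplest form, compare research output before and after a workplace change. The sketch below is purely illustrative: the yearly counts, the 2017 policy-change year, and the idea of using a before/after average as the signal are all assumptions, not NRL's actual method.

```python
from statistics import mean

# Hypothetical yearly counts of publications plus patents at the lab.
yearly_output = {2014: 310, 2015: 322, 2016: 298, 2017: 355, 2018: 371, 2019: 384}
policy_change_year = 2017  # assumed year a new management approach was introduced

before = [n for year, n in yearly_output.items() if year < policy_change_year]
after = [n for year, n in yearly_output.items() if year >= policy_change_year]

# A rough signal of whether the climate change coincided with rising
# or declining productivity -- the kind of overlay Mittu describes.
delta = mean(after) - mean(before)
print(f"Average yearly output changed by {delta:+.1f} after the change")
```

A real version would need to control for headcount, funding, and publication lag before attributing any shift to a management change.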
While the Trump administration has looked at ways to get ahead of AI’s impact on the workforce by launching efforts to reskill federal employees and working on an ethical framework for the government’s use of AI, Mittu said a general lack of quality data remains a fundamental challenge to greater AI transparency and explainability.
“We don’t control our own data. We’re interfacing with systems that are providing us data that we don’t necessarily own. We might just be ingesters of data,” he said. “So I think from that perspective, it gets very, very difficult to understand that lineage of data: Where did it come from? Who generated it and what processes were executed on that data, before it got to you? I think it’s really an issue of control [and] management, that the less control and management you have over data, you don’t really know how it was fused together and was integrated to give you that piece to train [an algorithm].”
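The lineage questions Mittu raises — where data came from, who generated it, and what was done to it before it arrived — can be captured as a simple provenance record attached to each dataset. This is a minimal sketch of that idea; the class name, fields, and traceability rule are assumptions for illustration, not any DoD schema.

```python
from dataclasses import dataclass, field

@dataclass
class LineageRecord:
    """Provenance for one ingested dataset: origin, producer,
    and the processing applied upstream before it reached us."""
    source_system: str                  # the external system that provided the data
    producer: str                       # who generated it
    processing_steps: list = field(default_factory=list)

    def add_step(self, step: str) -> None:
        self.processing_steps.append(step)

    def is_traceable(self) -> bool:
        # Only treat data as fit for training if its full history is known.
        return bool(self.source_system and self.producer and self.processing_steps)

record = LineageRecord(source_system="sensor-feed-A", producer="external partner")
record.add_step("deduplicated")
record.add_step("fused with feed B")
print(record.is_traceable())
```

An ingester that only receives fused data, as Mittu describes, often cannot fill in these fields at all — which is exactly why the lineage problem makes training data hard to trust.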
Agencies, for the most part, still have some catching up to do with the private sector when it comes to the rollout of AI tools. Compared to most private companies, agencies have a harder time sharing their data with each other, and have data siloed in a variety of different networks with different classification systems. In addition, industry can acquire and experiment with emerging tech faster than the government.
“I think there are just some roadblocks that the DoD has to overcome, with not only the acquisition process of fielding these kinds of systems, but how do they work across different organizations, when the different organizations themselves could be very different from an IT perspective,” Mittu said.
While all of these challenges exist writ large for agencies looking to roll out AI and automation solutions, they become more apparent for the military services, which operate in warfighting situations where AI technology, at its current stage of maturity, is not robust or flexible enough to handle the environments the Navy operates in, Mittu said.
“They’re dealing with chaotic environments where data is very poor, communications may be degraded. You’re having all of these issues and you’re not operating in a clean environment … In areas where there’s so much uncertainty, I think traditional techniques in the AI community, the research that’s being done, can’t really be applied in a straightforward way,” he said.
Given those challenges, Mittu said NRL is looking to address the gaps in AI technology that industry is unlikely to solve, such as figuring out ways to train an algorithm in situations where personnel can’t guarantee access to good training data.
But in the meantime, AI tools are only as good as the data that feeds them.
“If you can control your data and curate it, and manage it, and know how it’s flowing down the information-processing pipeline, I think you’re in a better position to use that and have a measure of quality on it to be able to train any kind of algorithm in defense,” Mittu said.