The Defense Department has made no secret about its ambitions to employ commercial advances in artificial intelligence in military systems. But adapting the government acquisition process to those technological developments is no small matter.
Three new contract vehicles the department is in the process of deploying should make things a whole lot easier — at least that’s the hope.
For starters, last month, the department’s Joint Artificial Intelligence Center reached an agreement that’s intended to create a rapid procurement process for AI. Defense officials are hopeful that the project, called Tradewind, will help the JAIC meet one of its core missions: finding use cases for AI throughout the military and getting the right algorithms in the right places.
Thus far, the JAIC has been using traditional contracts that follow the standard Federal Acquisition Regulation to establish new relationships with industry. But officials said there’s reason to believe those contracts aren’t a great fit for integrating AI into military systems — a project a recent request for information characterized as “complicated and fraught with risk.”
Instead, Tradewind builds on DoD’s authority to use other transaction agreements (OTAs). The department selected the nonprofit Indiana Innovation Institute to manage the OTA process. The institute will spend most of its effort on AI market research and on finding new teaming arrangements with industry, with a heavy emphasis on nontraditional vendors.
“It will focus on developing efficient AI acquisition processes, and of course ordering through Tradewind will not be limited to the JAIC; it’ll extend to the entire DoD,” Jane Pinelis, the JAIC’s chief for test, evaluation and assessment, said Tuesday during an online forum hosted by AFCEA D.C. “The idea is that it’s a collaborative ecosystem, partnering with commercial, academic and industry partners. We’ll work together to develop, design and implement new AI capabilities. It’s an environment that will be transparent between DoD, academia and industry, hopefully fostering a kind of healthy, trustworthy whole-of-nation approach to support DoD with AI innovations.”
DoD said it wants to use the flexibility of the OTA regime to “customize” the acquisition process for AI innovations. At the same time, it wants the Tradewind project to build a model that DoD agencies and the military services can use to onboard those technologies.
The current schedule calls for that prototype business model to be usable by next month, further refined in April and May, and fully deployed throughout DoD by June.
But adapting government procurement processes is only one part of DoD’s AI challenge. Another major one is testing how well the nascent technologies can integrate with military systems, and whether they’ll work at all for the department’s use cases. To address that, the JAIC has set up a separate blanket purchase agreement (BPA) for test and evaluation support, one of the other new contract vehicles. Pinelis said it removes a longstanding bottleneck.
“That’s just tremendous. Previously, even when we had a willing industry partner and even when we had the money, it was really hard to obligate. This will enable us to do this so much easier,” she said. “And then a parallel effort that we have is on the data side, where we’ll have a data readiness multiple award contract as well. That will enable us to evaluate various parts of the DoD for AI readiness when it comes to data. Another piece of that contract allows us to pilot software before we buy it, so it will allow us to run something for some time at the JAIC and evaluate the capability before we decide to invest heavily.”
For the test and evaluation BPA, the department is looking for vendors who can not only assist with actual testing work, but also find new ways to improve the department’s processes for testing, evaluating and integrating new AI technologies.
Pinelis said there will be a heavy emphasis on automating the process as much as possible.
“What we’re looking for is the entire spectrum of support along our test and evaluation framework. We already have some technology to test our AI tools, so that’s a less pressing need,” she said. “But for things like system integration, for instance, we need a way to automate that to make it very repeatable and quick so that as soon as software is updated, we can very easily test not just how well it works, but how well it integrates and whether it’s interoperable with everything around it. We need an automated, quick and repeatable way of doing that. We need help with integrating humans into our systems and properly testing that. We’re working right now with some cognitive behavioral scientists to try to figure out what are the aspects of human-system integration that we should actually be measuring, and which parts of it can be automated.”
Pinelis said work under the contract will also extend into the operational test and evaluation phases of AI integration.
“[It includes] the enormous amount of modeling and simulation and live virtual constructive testing that is required before we can put some of these systems on a range, as well as how we properly instrument both the systems and the range to measure what we need to measure during operational test and evaluation,” she said. “I think we need help with the entire ecosystem, with the main goal of automating as much of it and making it as quick as possible so we can continue to be part of the agile development process.”