After 15 months of brainstorming and gathering feedback from the public, the Defense Innovation Board has pinned down the thorniest problems that artificial intelligence could raise for the Pentagon if left unchecked.
In the final draft of its recommendations on AI ethics to Defense Secretary Mark Esper, the board of tech industry experts strongly urged DoD to have the ability to pull the plug on AI systems, just in case those algorithms begin to make decisions that are beyond their pay grade.
Drawing on his experience building AI systems, Danny Hillis, the co-founder of Applied Invention, said AI bots are adept at evolving their behavior in ways that are very difficult for the designer to predict, and have an aptitude for discovering “very tricky ways around the rules I try to put on them.”
“Sometimes those forms of behavior are actually kind of self-preserving, that can get a bit out of sync with the intent and goals of the designer,” Hillis said. “I think this is actually one of the most dangerous potential aspects about them.”
AI systems deployed by DoD, Hillis said, should have a reliable “off switch” that can be triggered automatically or manually by a human operator. That emphasis on human intervention stands out as one of five principles the board outlined for DoD to consider.
“The issue we’re bringing to light here is what happens if a system in the field begins to demonstrate behavior that was not the intended behavior, and we believe that it’s important to recognize that a principle should allow us to be able to disengage and deactivate that behavior,” said Michael McQuade, Carnegie Mellon University’s vice president for research.
Milo Medin, Google’s vice president for wireless services, told reporters after the presentation that building that kind of off-switch into AI systems would look much like the process government and industry use today for standard IT systems.
“You have a different system saying, ‘Look, it’s never supposed to do this,’ so I’m going to put hardware or a dedicated processor or something in place to make sure that’s supervised, just to make sure that the software never, ever does that,” Medin said.
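To make that description concrete, the pattern Medin outlines resembles a software watchdog: an independent supervisor that checks every action the AI proposes against hard limits and trips a kill switch the moment the system steps out of bounds. The sketch below is purely illustrative, written in Python, and every name in it (Supervisor, Action, authorize) is hypothetical rather than drawn from any DoD system.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk: float  # 0.0 (benign) .. 1.0 (irreversible)

class Supervisor:
    """Independent watchdog: the AI never executes its own actions directly."""

    def __init__(self, allowed: set[str], max_risk: float):
        self.allowed = allowed      # actions the system is ever permitted to take
        self.max_risk = max_risk    # hard ceiling on acceptable risk
        self.engaged = True

    def authorize(self, action: Action) -> bool:
        # Any out-of-bounds request trips the kill switch automatically.
        if not self.engaged:
            return False
        if action.name not in self.allowed or action.risk > self.max_risk:
            self.disengage(reason=f"unauthorized action: {action.name}")
            return False
        return True

    def disengage(self, reason: str) -> None:
        # Can also be invoked manually by a human operator at any time.
        self.engaged = False
        print(f"SYSTEM DISENGAGED: {reason}")

# Usage: the supervisor sits between the model and the world.
supervisor = Supervisor(allowed={"scan", "report"}, max_risk=0.3)
for proposed in [Action("scan", 0.1), Action("engage_target", 0.9)]:
    if supervisor.authorize(proposed):
        print(f"executing {proposed.name}")
    else:
        print(f"blocked {proposed.name}")
```

In a real deployment, as Medin notes, the supervisor would live in separate hardware or on a dedicated processor, so that a fault in the AI software cannot disable its own off switch.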
Starting with a list of 25 draft principles, the board settled on five core ethical values for DoD to consider when implementing this emerging technology: that AI be responsible, equitable, traceable, reliable and governable.
DoD isn’t the only agency to wrestle with the concept of ethical AI. The Trump administration has also tapped the National Institute of Standards and Technology to lay down ground rules for AI use in civilian government.
But the Pentagon, unlike NIST, needs an AI strategy that not only ensures ethical use but also holds up on the battlefield, where the technology could help warfighters make lethal, irreversible decisions.
“It becomes much more complicated when you go into a domain like cyber, where you don’t have the ability to limit scope in the same way that we do in the physical world, and where you may have interlinked systems, which may end up having consequences that are not as easily understood,” Medin said. “I think in the cyber domain, it is where the department will face these complex questions.”
Depending on the task and level of risk, the board recognized in its final recommendations that some AI systems may work “with little human supervision,” and conceded that when deployed at scale, humans cannot be “in the loop” all the time.
But board chairman Eric Schmidt, the former CEO of Google and its parent company Alphabet, said human operators must have authority over the AI systems they deploy.
“I think there’s a line, and I think that line should not be crossed, because it would violate this notion of a human in the loop — human control, human responsibility,” Schmidt said.
In its list of recommendations, the board has urged DoD to establish an AI steering committee led by the deputy defense secretary to maintain top-level AI expertise.
“It is a rapidly evolving discipline. There’s investments that need to be made in the underlying technology, but there are significant investments that need to be made in the way you engineer these systems – test, evaluate and deploy those systems,” McQuade said.
In handing this work over to DoD, the board also recommends the department’s Joint AI Center hold an annual conference on “AI safety, security and robustness” in order to stay on top of this rapidly changing technology.
“We endeavored to create a set of recommended principles that are not just for today, only to disappear tomorrow. I would not, however, use the word ‘timeless,’” McQuade said. “It is an evolving field … We believe that the department should continue in its convening and discussing and engaging with the public – both inside the department and outside – to continue to understand how the field and how the ethical issues surrounding the field continue to evolve.”
Jory Heckman is a reporter at Federal News Network covering U.S. Postal Service, IRS, big data and technology issues.