The intelligence community, following in the footsteps of the Defense Department, has rolled out its own set of ethics policies for the use of artificial intelligence in its operations.
The Principles of AI Ethics for the Intelligence Community and AI Ethics Framework draw inspiration from the DoD’s own set of AI ethics principles that Secretary Mark Esper approved in February.
Ben Huebner, chief of the Office of the Director of National Intelligence's Civil Liberties, Privacy and Transparency Office, previewed the policies in March. He told reporters on a call Thursday that the intelligence community crafted its policies broadly to cover all 17 of its agencies, each with different authorities and missions.
“Creating a common document for all of them that is equally applicable had to be conducted in a different way than the Department of Defense could do, when they are doing one document for one department,” Huebner said. “Our approach was to start with these principles, to move to the ethical framework to provide that sort of practical guidance in terms of how to apply these. We will then be moving to use cases to show how different intelligence agencies do this across different parts of their mission set.”
The documents include input from IC data scientists as well as privacy and civil liberties officers, and provide guidance on how agency personnel should develop and use AI and machine learning as part of their intelligence-gathering responsibilities.
Huebner said the principles and the framework lay the groundwork for ensuring that AI applications in the intelligence community are secure and resilient to attacks from adversaries. They also emphasize the need for AI-powered analysis that is transparent enough for officials to understand how an algorithm reached a given conclusion.
“No one has a stronger case than the IC that AI needs to produce results that our policymakers outside the IC, our customers for our intelligence, can interpret and understand and then use their human judgment to act upon. If a member of the Cabinet or any senior policymaker turns to their intelligence briefer and says, ‘How do we know that?’ We never have the option of saying, ‘Well, we don’t really know, that’s kind of what the algorithm is telling us.’ That’s just inconsistent with what intelligence is,” he said.
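As a rough illustration of the kind of explainability Huebner describes, consider a model whose conclusions can be traced back to its inputs. The sketch below is not IC tooling; the model, the feature names and the data are invented for illustration. With a linear model, each feature's contribution to a prediction is simply its coefficient times its value, so a briefer can point to what drove the output.

```python
# Hypothetical sketch: an interpretable model whose per-feature
# contributions can be inspected. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["signal_a", "signal_b", "signal_c"]  # hypothetical indicators
X = np.random.rand(200, 3)                            # toy training data
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)       # toy labels

model = LogisticRegression().fit(X, y)

def explain(sample: np.ndarray) -> None:
    """Print each feature's contribution to the log-odds, largest first."""
    contributions = model.coef_[0] * sample
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda t: -abs(t[1])):
        print(f"{name}: {c:+.3f}")

explain(X[0])
```

The design choice matters here: an opaque model might score better on a benchmark, but it leaves the briefer with exactly the answer Huebner rules out, "that's kind of what the algorithm is telling us."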
The documents also align with the broader Principles of Professional Ethics for the IC and ensure that AI technology always complies with “limits on the authorities that the American people have granted to our agencies to conduct our national security mission,” Huebner said.
Dean Souleles, the chief of ODNI’s Augmenting Intelligence Using Machines Innovation Hub, said the “science of AI is not 100% complete yet,” but the ethics documents give intelligence officials a current roadmap of how best to use this emerging technology.
“It is too early to define a long list of do’s and don’ts. We need to understand how this technology works, we need to spend our time under the framework and guidelines that we’re putting out to make sure that we’re staying within the guidelines. But this is a very, very fast-moving train with this technology,” Souleles said.
The intelligence community does not use the same “human in the loop” terminology the Defense Department uses in its AI principles. However, Souleles said the documents seek to define the role AI algorithms should play in intelligence-gathering, which remains a “judgment and reasoning issue, which is really beyond the capabilities of current generations of technology.”
“We can build a terrific computer-vision algorithm to count the number of airplanes on runways and have pretty high confidence that we’re going to get that right every single day. That’s the work that intelligence analysts, imagery analysts have been doing for years. They look at an image from space, and they count things and they identify things. We can build a computer vision that will assist with analyzing that mission,” he said. “What we can’t do is build an AI that tells that analyst why there are less airplanes today than there were yesterday, or where they went or what was the intention of the leaders in repositioning those airplanes to different parts of the world. That is the job of an intelligence analyst who writes intelligence to be read by others.”
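Souleles' airplane-counting example maps onto a standard object-detection workflow. The sketch below is a minimal, hypothetical version, not anything the IC has confirmed using: it assumes a pretrained torchvision detector, the COCO "airplane" class, and an arbitrary confidence threshold.

```python
# Minimal sketch of the counting task Souleles describes. The model
# choice, COCO class index, and threshold are assumptions for
# illustration, not IC tooling.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

COCO_AIRPLANE_CLASS = 5     # "airplane" in torchvision's COCO label set
CONFIDENCE_THRESHOLD = 0.7  # arbitrary; tuned against ground truth in practice

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def count_airplanes(image_path: str) -> int:
    """Count detections labeled 'airplane' above the confidence threshold."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        detections = model([image])[0]
    keep = (detections["labels"] == COCO_AIRPLANE_CLASS) & \
           (detections["scores"] >= CONFIDENCE_THRESHOLD)
    return int(keep.sum())

print(count_airplanes("runway.jpg"))  # hypothetical image file
```

The model answers "how many?"; answering why there are fewer airplanes today, or where they went, remains, as Souleles notes, the analyst's job.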
Huebner said the intelligence community expects to release updates to the AI documents as the emerging technology evolves.
“The important question as we talk to our data scientists in the intelligence community about this is where and when should the human be engaged and, most critically, what does that person need to know to properly interpret the information,” he said. “If I’m using a voice-to-text application, I might need to know, for example, what was the underlying test data, with respect to dialects. If an analytic was mostly trained on a dialect from one region of the world, even though it is purportedly the same language, it would be a lot less accurate with respect to the dialect of another. That’s something that I need to know about it.”
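Huebner's dialect example suggests a concrete check: measure a speech-to-text model's accuracy separately for each dialect in the evaluation set, so the analyst knows where the tool is weak before relying on it. A minimal sketch, assuming a hypothetical transcribe function and invented sample data:

```python
# Hedged sketch: per-dialect word error rate (WER) for a speech-to-text
# model. `transcribe` stands in for any real model; samples are invented.
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def evaluate_by_dialect(samples, transcribe):
    """samples: list of (dialect, audio, reference_transcript) tuples."""
    errors = defaultdict(list)
    for dialect, audio, reference in samples:
        errors[dialect].append(word_error_rate(reference, transcribe(audio)))
    # Mean WER per dialect; a large gap flags the weakness Huebner warns about.
    return {dialect: sum(e) / len(e) for dialect, e in errors.items()}
```

A large gap between dialects in the resulting table is exactly the "something that I need to know" Huebner describes.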
Jory Heckman is a reporter at Federal News Network covering U.S. Postal Service, IRS, big data and technology issues.