Building on the Defense Department’s recent adoption of five artificial intelligence principles, the Office of the Director of National Intelligence will soon release its own public set of AI principles.
Ben Huebner, the chief of ODNI’s Civil Liberties, Privacy and Transparency Office, gave a preview of that upcoming strategy Wednesday at an Intelligence and National Security Alliance (INSA) conference.
“None of them will be surprises,” Huebner said. “We have been very integrated with the work of the [DoD Joint AI Center] and others, and fundamentally, there’s a lot of consensus here.”
Aside from DoD, other corners of the federal government have begun to roll out their own AI ethics platforms.
The National Security Commission on Artificial Intelligence, led by former Google CEO Eric Schmidt and former Deputy Defense Secretary Robert Work, released an interim report last November that warned of a “brain drain” if federal research and development funding declines. The commission expects to send its final report to Congress in March 2021.
The National Institute of Standards and Technology last August set a roadmap for the government’s role in developing future AI breakthroughs, while the White House’s Office of Science and Technology Policy in January released what it described as a “first of its kind” set of principles that agencies must meet when drafting AI regulations.
But the intelligence community can’t copy-and-paste from any of these other AI frameworks, Huebner said, because so far it hasn’t found an analog in the public or private sector that addresses its unique concerns.
“I am less concerned about the kind of custom-built AI used to do one thing. We’ve done that pretty well,” he said. “Where we could potentially run into trouble is that it works so well that someone steals it to do something else, without thinking through, ‘What was the underlying test data that we used to train that analytic? Is it going to work in the same way with the level of accuracy that we need?’”
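To make that risk concrete, here is a minimal Python sketch of the kind of reuse check Huebner describes, in which an analytic trained for one task must re-prove its accuracy on data from a new mission before being repurposed. All of the data, names and thresholds below are purely illustrative and not drawn from any IC system:

```python
# Illustrative sketch: before a model built for one task is pointed at
# another, confirm it still meets the accuracy bar on the new mission's
# data. Data and the threshold are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Original task: model trained and validated on its own data.
X_train, y_train = rng.normal(0, 1, (500, 8)), rng.integers(0, 2, 500)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Proposed reuse: data from a different mission, with a shifted distribution.
X_new, y_new = rng.normal(2, 1, (200, 8)), rng.integers(0, 2, 200)

REQUIRED_ACCURACY = 0.90  # hypothetical mission-specific bar
new_domain_score = accuracy_score(y_new, model.predict(X_new))
if new_domain_score < REQUIRED_ACCURACY:
    print(f"Reuse blocked: accuracy {new_domain_score:.2f} on new-domain "
          f"data is below the required {REQUIRED_ACCURACY:.2f}")
```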
While the intelligence community faces a growing volume of data that outpaces what its workforce can process, Huebner, seeking to clear up some of the “hype” around AI, noted that the IC has worked with automation tools for decades.
“The amount of data that we’re processing worldwide has grown exponentially, but having a process for handling large data sets for the intelligence community is not new either,” he said.
But if the intelligence community has decades of experience with automation, why does it need new standards for AI?
Huebner explained that the IC’s assessment of privacy and civil liberties issues with automation, as well as concerns about accuracy and mission impact, generally assumes that the analysis remains “fundamentally static.”
“We go back and check them — certainly the world changes on us, and the underlying dataset does as well — but that the analytic is static. That is entirely untrue by design when we’re talking about machine learning. And the difference is that it is not a human there making those modifications to the analytic,” he said. “So does that mean that we don’t do artificial intelligence? Clearly, no. But it means we need to think a little bit differently how we’re going to manage the risks and ensure we’re providing the accuracy and objectivity that we need to.”
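A minimal sketch of what managing that risk might look like in practice is a rolling accuracy check that re-validates an analytic as it changes and routes degradation to a human. The baseline, tolerance and alert path below are illustrative assumptions, not ODNI policy:

```python
# Illustrative sketch: because a machine-learning analytic changes without
# a human editing it, re-check its accuracy on each new batch of labeled
# data and flag a human when performance degrades. Thresholds are
# hypothetical.
from collections import deque

BASELINE_ACCURACY = 0.92   # accuracy measured when the analytic was approved
MAX_DEGRADATION = 0.05     # hypothetical tolerance before human review

recent_scores = deque(maxlen=10)  # rolling window of per-batch accuracy

def record_batch(correct: int, total: int) -> None:
    recent_scores.append(correct / total)
    rolling = sum(recent_scores) / len(recent_scores)
    if BASELINE_ACCURACY - rolling > MAX_DEGRADATION:
        # In practice this would route to an analyst for re-validation.
        print(f"Drift alert: rolling accuracy {rolling:.2f} vs "
              f"baseline {BASELINE_ACCURACY:.2f}; flag for human review.")

record_batch(80, 100)  # a batch where the analytic scored 0.80
```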
While the ability to explain and interpret analysis produced by an AI system remains an elusive goal for agencies and industry, Huebner said the intelligence community has the strongest business case for making sure that the information it’s gathering from AI holds up to scrutiny.
“If we’re providing intelligence to the president that is based on AI analytics, and he asks — as he does — ‘How do you know this?’ that’s a question we have to be able to answer. And that goes to the need for explainability and interpretability in this space,” Huebner said.
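One generic technique for beginning to answer that question is permutation importance, which measures how much each input drives a model’s output. The sketch below, with illustrative feature names and synthetic data, shows the idea; it is not ODNI’s stated method:

```python
# Illustrative sketch: report which inputs most influenced a model's
# predictions, as one step toward explainability. Data and names are
# hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # depends on features 0 and 1

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

feature_names = ["signal_a", "signal_b", "noise_c", "noise_d"]  # illustrative
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```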
Huebner said ODNI’s upcoming AI principles will focus on a “human-centered approach,” but clarified that won’t always require a “human in the loop” for AI decisions.
“We need to figure out, for the various risks that we’re taking, where’s the human best positioned and what does that person actually need to know to improve the process,” he said.
These AI principles must also confront concerns about bias, although the intelligence community already has policies in place for its analysts.
Intelligence Community Directive 203, signed in July 2015 by former Director of National Intelligence James Clapper, states that: “Analysts must perform their functions with objectivity and with awareness of their own assumptions and reasoning. They must employ reasoning techniques and practical mechanisms that reveal and mitigate bias. Analysts must be alert to influence by existing analytic positions or judgments and must consider alternative perspectives and contrary information.”
The intelligence community must also mitigate the threat of adversaries deliberately biasing AI systems to produce incorrect analysis.
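One generic defensive step against that kind of data poisoning is to screen incoming training examples for statistical outliers before they reach the analytic. The sketch below uses a standard anomaly detector on synthetic data as an illustration, not as any stated IC practice:

```python
# Illustrative sketch: flag suspicious training examples for human review
# before they are used to update an analytic. Data and the contamination
# rate are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
clean = rng.normal(0, 1, (500, 6))       # typical training examples
poisoned = rng.normal(6, 0.5, (10, 6))   # injected out-of-distribution rows
batch = np.vstack([clean, poisoned])

detector = IsolationForest(contamination=0.02, random_state=0).fit(batch)
flags = detector.predict(batch)          # -1 marks suspected outliers
print(f"{(flags == -1).sum()} of {len(batch)} samples flagged for review")
```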
Jory Heckman is a reporter at Federal News Network covering U.S. Postal Service, IRS, big data and technology issues.