The Defense Innovation Board is looking for feedback about ethical AI, but the National Science Foundation says policy has yet to define exactly what that means...
The Defense Innovation Board wants opinions on how the Defense Department should use artificial intelligence, and it will hold its first public listening session next week to gather them. The DIB is looking for feedback about ethical AI and how to use the technology responsibly.
“We emphasize responsibility in use of AI through our vision and guidance principles for using AI in a safe, lawful and ethical way,” Dana Deasy, DoD’s chief information officer, said at an AI event last month. “We’ve done this for every technology in the past, and we will do so for AI.”
Likewise, Lt. Gen. John “Jack” Shanahan said at the same event that the DIB is undertaking a months-long effort to develop principles around DoD’s use of AI; the upcoming listening session is part of that effort.
But what exactly is ethical AI?
Jim Kurose, the National Science Foundation’s assistant director for computer and information science and engineering, shed a little light on that subject during a Feb. 27 webinar. Essentially, ethical AI is a do-no-harm philosophy for how operators and stakeholders use the technology. He said it’s a concept some international standards bodies and intergovernmental organizations have started discussing recently, but not much actual work has gone into it yet.
Drawing on the discussions of those organizations, he laid out some early draft principles for an ethical AI framework. These aren’t set in stone, nor are there any official policies yet.
First, ethical AI has to include a human rights component. This especially applies to the idea of data privacy: an ethically aligned AI system has to include individual access controls so that people have the ability to protect their data. Such systems also need to account for human and economic well-being, a literal do-no-harm clause, and ensure that growth in their use is sustainable.
But an ethical AI framework would also include further research into the way AI systems work. For one thing, Kurose raised the question of explainability, which goes hand in hand with transparency.
“In the U.S., DARPA has undertaken a research program explicitly on explainability of outcome on machine learning systems,” he said. “So that when answers come out or predictions come out of machine learning, or classifications come out of a machine-learning classifier, that we want to be able to ask the question ‘how is it that this answer came out,’ or maybe ‘why didn’t this alternate answer come out?’ Let me tell you, that’s a deep research problem.”
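To give a sense of what that question looks like in practice, the sketch below uses one common, much simpler stand-in: measuring which inputs a trained classifier leaned on for its answers. It uses scikit-learn’s permutation importance and an off-the-shelf dataset purely for illustration; it is not drawn from DARPA’s program.

```python
# Illustrative only: one way to probe "how did this answer come out" is to
# measure how much each input feature contributes to a model's accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much accuracy drops;
# large drops point to the features the classifier leaned on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")
```

Techniques like this only rank which inputs mattered; explaining why a deep learning system produced a particular answer, as Kurose notes, remains a hard research problem.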
There’s also the problem of validating fairness. AI systems follow a standard lifecycle: design, development, implementation, operation, monitoring and maintenance.
Designers and operators of ethical AI systems will have to constantly ask, and satisfactorily answer, a series of questions to determine where and how bias could occur, and what the implications of that bias are.
“When we’re building the system, and first designing it, has there been a process of inclusive design?” he asked. “Is there selection bias in the data? Is the data representative of the full set of data which is going to operate? When we choose a model, is there experimental bias associated with it? Is there bias in the verification? So we’re testing it against other data to see how well it performs. How representative is that data? When we deploy it, is it fully deployed? Who’s getting access to it when we think about use? Is there bias in the use application or monitoring?”
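To make one of those questions concrete, the sketch below checks for selection bias by comparing how a demographic attribute is distributed in a training sample against the population the system is meant to serve. The group names and counts are invented for the example; real systems would use their own data.

```python
# Illustrative sketch of "is there selection bias in the data?": compare the
# distribution of a demographic attribute in the training sample against the
# population the system will serve. Numbers below are made up.
import numpy as np
from scipy.stats import chisquare

population_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
training_counts  = {"group_a": 720,  "group_b": 190,  "group_c": 90}

observed = np.array([training_counts[g] for g in population_share])
expected = np.array([population_share[g] for g in population_share]) * observed.sum()

# Chi-square goodness-of-fit: does the training sample match the population?
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Training data distribution differs from the population it will serve.")
```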
Kurose said it’s important to look for issues with fairness and bias early on in the lifecycle of an AI system, calibrate the models wherever possible and continue to probe throughout. This will require a validation process.
“I think any of the questions one has about ethics in AI, you can unpack it down to this next layer and actually start asking much more meaningful and sort of contentful questions about fairness,” he said. “If we’re talking about fairness, what does fairness mean in the context, not of broad AI systems, but unpacking it and saying ‘when we deploy it, what does fairness mean? When we gather data, what does fairness mean?’ We could talk about bias then in the context of the data being used to train a machine learning classifier.”
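As one purely illustrative way of unpacking “what does fairness mean when we deploy it,” the sketch below computes a demographic parity gap, the difference in favorable-outcome rates across groups, on synthetic predictions. It is one of many possible fairness metrics, not one Kurose specifically endorsed.

```python
# Illustrative sketch: demographic parity compares the rate of favorable
# predictions across groups. The data here is synthetic.
import pandas as pd

predictions = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b", "b", "b"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

rates = predictions.groupby("group")["approved"].mean()
parity_gap = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")
# A gap near zero means groups receive favorable outcomes at similar rates;
# a large gap is a signal to go back and probe the earlier lifecycle stages.
```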