Given the data and computational prowess available today, agencies should cautiously implement AI, advises Noblis’ Abby Emrey. She shares practical tips on how to do it responsibly.
With all of the hype around artificial intelligence, federal agencies need to be extra vigilant when designing and deploying AI systems, ensuring that those early capabilities do what they’re supposed to do, ethically and transparently.
As AI systems grow in power and scope, it is critical that the government actively plan and manage AI use within federal organizations, said Abby Emrey, senior machine learning engineer and principal investigator at Noblis.
Emrey described what’s happening with AI right now as a bit of a renaissance, driven by the convergence of multiple technological advancements.
“We’re seeing huge improvements in terms of both machine learning architectures that are used to accomplish this incredible performance … and also in the amount of scale that’s available to accomplish these things, such as both data that’s available and computational power,” she said during the Tackling Government Challenges Through Science and Technology podcast on Federal News Network.
After all, Emrey noted, the concepts of AI go back decades. It’s just that now data and computing capabilities are catching up.
That’s why, she advised, “it’s important to take a step back and take a breather, and think, ‘How can we make the best out of the situation? How can we mitigate the potential harms that can come from this new technology?’ ”
Would-be federal AI adopters need to understand any potential risks going in, Emrey said.
In the public sector, “I would say one of the most important risks to pay attention to is actually the risk of overzealous adoption, especially by government agencies,” she said.
As part of developing a project plan, an agency should analyze and detail possible risks from introducing AI, Emrey recommended.
Any AI-specific application should operate under an overarching approach that encompasses ethics and accountability, Emrey said.
“As part of this plan, generally, you want to make sure you’re … declaring ethical principles for AI development,” she said, and then following them. “Being strategic about adopting AI is incredibly important.”
Emrey noted that the various policymaking entities within the government recognize the need for these types of controls.
“We’ve seen a variety of executive orders come out directly targeting the issue of responsible AI development,” she said. “It’s really becoming a priority of the current administration to make sure that the responsible path is being followed.”
Equally important, Emrey added, is that agencies make it apparent to the public that the government is on top of AI ethics and transparency requirements.
“We want to prioritize a social good,” she said. “We want to understand the kinds of biases that can be present in AI systems and to limit them to provide for chains of accountability and to protect user rights.”
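To make that point concrete, one common first check is a demographic parity gap: the difference in a system’s favorable-outcome rates across groups. The short Python sketch below is a hypothetical illustration only; the group labels, function names and data are invented and are not from Noblis or the podcast.

```python
# Hypothetical bias check: compare an AI system's favorable-outcome
# rates across groups (demographic parity gap). Labels and decisions
# below are invented for illustration.

def approval_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions) if decisions else 0.0

def parity_gap(by_group: dict[str, list[bool]]) -> float:
    """Largest difference in approval rate between any two groups;
    a big gap is a signal to investigate, not proof of unfairness."""
    rates = [approval_rate(d) for d in by_group.values()]
    return max(rates) - min(rates)

decisions = {
    "group_a": [True, True, False, True],    # 75% approved
    "group_b": [True, False, False, False],  # 25% approved
}
print(f"Demographic parity gap: {parity_gap(decisions):.2f}")  # 0.50
```

A large gap does not prove unfairness on its own, but it flags exactly where the chains of accountability Emrey describes should kick in.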
While there are risks with any new technology, that doesn’t mean the government should refrain from deploying AI. In fact, choosing the right uses can help mitigate the risk, Emrey said.
That way, federal organizations can develop best practices and build on lessons learned.
So, what applications for AI make sense in the federal government? “The most feasible use cases for AI are those that we would call augmentation,” Emrey said. “This is where human work is assisted by an AI agent.” That way, judgment and decision-making stay with humans, while automation and AI algorithms sort through datasets and deliver insights from the culled information.
In augmentation, “the work is still basically a human effort,” Emrey said. “AI is used to scale beyond what’s feasible if only humans are involved, such as if you’re trying to sift through lots of records.”
For instance, augmentation might speed up program leaders’ ability to make near-real-time or real-time decisions “when there’s just this huge influx of data,” she said.
That might seem simple, and something that machines have always done. But Emrey said AI augmentation produces a step-function increase in the human ability to use data.
“AI as a device to be able to process more data in a quicker fashion or possibly limit the amount of data that actually requires human review — that can be a game changer,” she said. “I would say that’s generally the most feasible use case when it comes to the technology right now.”
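As a hypothetical sketch of the augmentation pattern Emrey describes, the Python below scores records with a model and routes only low-confidence ones to human review; confident ones are auto-cleared, but judgment stays with people. The names here (model_score, triage, the 0.9 threshold) are illustrative assumptions, not an actual agency system.

```python
# Hypothetical sketch of AI-assisted triage ("augmentation"):
# a model scores each record, auto-clears only the high-confidence
# ones, and routes the rest to a human reviewer. All names and the
# toy scoring rule are illustrative, not any real agency system.

from dataclasses import dataclass

@dataclass
class Record:
    record_id: str
    text: str

def model_score(record: Record) -> float:
    """Toy stand-in for a trained classifier's confidence that a
    record is routine. A real system would call an actual model."""
    return 0.95 if "routine" in record.text else 0.2

def triage(records: list[Record], threshold: float = 0.9):
    """Split records into auto-cleared and human-review queues.

    The model only shrinks the pile people must read; judgment on
    anything uncertain stays with the human reviewer."""
    auto_cleared, needs_review = [], []
    for rec in records:
        if model_score(rec) >= threshold:
            auto_cleared.append(rec)
        else:
            needs_review.append(rec)
    return auto_cleared, needs_review

if __name__ == "__main__":
    batch = [Record("r1", "routine renewal form"),
             Record("r2", "unusual benefits claim")]
    cleared, review = triage(batch)
    print(f"{len(cleared)} auto-cleared, {len(review)} sent to human review")
```

In practice, the threshold is the key design choice: set it high and nearly everything still reaches a human; lower it only as measured model performance earns that trust.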
Right now, AI still requires human involvement to succeed. Autonomous systems that can produce repeatable, identical results without any human in the verification loop promise enormous efficiencies, Emrey said, and people are understandably excited about them.
“But I would like people to understand that this is not feasible in a lot of cases that exist,” she said, citing the experience with self-driving vehicles, an effort marked in some cases by fatal failures.
Emrey noted that to help agencies looking to expand their use of AI, Noblis has developed a Responsible AI Framework (RAIF) that’s based on trustworthiness as a key attribute.
“The goal is to embed ethical and responsible practices into our full AI pipelines at the project level, everywhere at Noblis,” Emrey said.
RAIF has three main elements, which Emrey walked through on the podcast.
The framework “is a free resource catered to the public sector,” Emrey said.
To listen to the full discussion between Noblis’ Abby Emrey and The Federal Drive’s Tom Temin, find the Tackling Government Challenges Through Science and Technology podcast on Federal News Network.