The Biden administration is planning to set new rules for how federal agencies use emerging artificial intelligence tools to do their jobs.
The Office of Management and Budget will release draft guidance this summer on the use of AI systems within the federal government.
The OMB guidance will establish specific policies for federal agencies to follow when it comes to the development, procurement and use of AI systems — all while upholding the rights of the American public.
A senior administration official told reporters Wednesday that the upcoming draft guidance reflects the reality that “AI is coming into virtually every part of the public mission.”
“Our north star here is this idea that if we’re going to seize these benefits, we have to start by managing the risks,” the official said.
The Biden administration on Thursday announced the upcoming OMB guidance as part of a broader launch of several AI initiatives.
OMB expects the upcoming guidance will also enable agencies to tap into AI tools to meet their missions and equitably serve Americans. It also expects the guidance will serve as a template for state and local governments to follow.
The senior administration official said agencies and industry need to confront a wide array of potential risks from AI tools, including safety and security concerns stemming from autonomous vehicles and cybersecurity tools.
Agencies also need to be wary of AI’s impact on civil rights, including bias embedded in AI tools used in housing or employment decisions, as well as the use of AI tools for surveillance.
The senior administration official said AI also poses risks to the economy and “job displacement from automation now coming into fields that we previously thought were immune.”
“It’s a very broad set of risks that need to be grappled with,” the official said.
The senior administration official said the focus on the federal government’s use of AI tools allows agencies to set an example, and “show how serving the public can be a place to lead on using AI wisely and responsibly.”
“There’s so many public missions that the government does, for which AI can be enormously beneficial. But the whole ballgame is going to be how it’s implemented,” the official said. “And it’s really on the shoulders of the federal employees, and I’m sure they’re going to step up to it.”
The Biden administration is also getting a commitment from top developers to have their generative AI systems go through a public assessment process.
The White House says AI developers — including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI — will participate in a public evaluation of their generative AI systems at DEFCON, one of the world’s leading hacker conventions.
The senior administration official said the DEFCON event will be a first-of-its-kind public assessment of multiple large language models.
“This is a technology that has many, many, many different applications. That’s true commercially, and it’s equally true in terms of public purposes,” the official said.
During the assessment, thousands of AI experts will examine how these AI systems meet the standards outlined in the Biden administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework.
“It’s going to be done in a way that responsibly discloses any of the issues that are discovered to the companies to let them mitigate those issues,” the senior administration official said. “Red-teaming has been really helpful and very successful in cybersecurity for identifying vulnerabilities. That’s what we’re now working to adapt for large language models.”
The White House Office of Science and Technology Policy last October released a Blueprint for a “Bill of Rights” for the design, development and deployment of artificial intelligence and other automated systems.
The Bill of Rights outlines what more than a dozen agencies will do to ensure AI tools deployed in and out of government align with privacy rights and civil liberties.
NIST last January also rolled out new, voluntary rules of the road for what responsible use of artificial intelligence tools looks like for many U.S. industries.
“Here’s the bottom line: the Biden administration has been leading on these issues since long before these newest generative AI products,” the senior administration official said.
Last week, the Federal Trade Commission, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission, and Department of Justice’s Civil Rights Division issued a joint statement on efforts to root out discrimination and bias in automated systems.
As part of the Biden administration’s latest AI actions, the National Science Foundation is spending an additional $140 million to launch seven new National AI Research Institutes.
The institutes serve as hubs for research and development across federal agencies, academia and industry to accelerate breakthroughs in trustworthy AI.
NSF’s latest investment will bring the total number of National AI Research Institutes up to 25, and will extend an R&D network that includes participants from nearly every state.
The new institutes will focus on AI breakthroughs that impact climate, agriculture, energy, public health, education, and cybersecurity. They will also focus on broadening the diversity of the national AI workforce.
Copyright © 2024 Federal News Network. All rights reserved.
Jory Heckman is a reporter at Federal News Network covering U.S. Postal Service, IRS, big data and technology issues.