White House sets ‘binding requirements’ for agencies to vet AI tools before using them

The Biden administration is calling on federal agencies to step up their use of artificial intelligence tools, but in a way that keeps the risk of misuse in check.

The Office of Management and Budget on Thursday released its first governmentwide policy on how agencies should mitigate the risks of AI while harnessing its benefits.

Among its mandates, OMB will require agencies to publicly report on how they’re using AI, the risks involved and how they’re managing those risks.

Senior administration officials told reporters Wednesday that OMB’s guidance will give agency leaders, such as their chief AI officers or AI governance boards, the information they need to independently assess their use of AI tools, identify flaws, prevent biased or discriminatory results and suggest improvements.

Vice President Kamala Harris told reporters in a call that OMB’s guidance sets up several “binding requirements to promote the safe, secure and responsible use of AI by our federal government.”

“When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” Harris said.

‘Concrete safeguards’ for agency AI use

OMB is giving agencies until Dec. 1, 2024, to implement “concrete safeguards” that protect Americans’ rights or safety when agencies use AI tools.

“These safeguards include a range of mandatory actions to reliably assess, test, and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI,” OMB wrote in a fact sheet.

By putting these safeguards in place, OMB says, travelers in airports will be able to opt out of AI facial recognition tools used by the Transportation Security Administration “without any delay or losing their place in line.”

The Biden administration also expects AI algorithms used in the federal health care system will have a human being overseeing the process to verify the AI algorithm’s results and avoid biased results.

“If the Veterans Administration wants to use AI in VA hospitals, to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses,” Harris said.

A senior administration official said OMB is providing overarching AI guidelines for the entire federal government, “as well as individual guidelines for specific agencies.”

“Each agency is in its own unique place in its technology and innovation journey related to AI. So we will make sure, based on the policy, that we will know how all government agencies are using AI, what steps agencies are taking to mitigate risks. We will be providing direct input on the government’s most useful impacts of AI,” the official said. “And we will make sure, based on the guidance, that any member of the public is able to seek remedy when AI potentially leads to misinformation or false decisions about them.”

OMB’s first-of-its-kind guidance covers all federal use of AI, including projects developed internally by federal officials and those purchased from federal contractors.

Under OMB’s policy, agencies that don’t follow these steps “must cease using the AI system,” except in some limited cases where doing so would create an “unacceptable impediment to critical agency operations.”

OMB is requiring agencies to release expanded inventories of their AI use cases every year, including identifying use cases that impact rights or safety, and how the agency is addressing the relevant risks.

Agencies have already identified hundreds of AI use cases on AI.gov.

“The American people have a right to know when and how their government is using AI, that it is being used in a responsible way. And we want to do it in a way that holds leaders accountable for the responsible use of AI,” Harris said.

OMB will also require agencies to release government-owned AI code, models and data — as long as it doesn’t pose a risk to the public or government operations.

The guidance requires agencies to designate chief AI officers — although many agencies have already done so since OMB released its draft guidance last May. Those chief AI officers recently met with OMB and other White House officials as part of the newly launched Chief AI Officer Council.

OMB’s guidance also gives agencies until May 27 to establish AI governance boards that will be led by their deputy secretaries or an equivalent executive.

The Departments of Defense, Veterans Affairs, Housing and Urban Development and State have already created their AI governance boards.

“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris said.

A senior administration official said the OMB guidance expects federal agency leadership, in many cases, to assess whether AI tools adopted by the agency adhere to risk management standards and standards to protect the public.

Federal government ‘leading by example’ on AI

OMB Director Shalanda Young said the finalized guidance “demonstrates that the federal government is leading by example in its own use of AI.”

“AI presents not only risks, but also a tremendous opportunity to improve public services,” Young said. “When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services, improve accuracy and expand access to essential public services.”

Young said the OMB guidance will make it easier for agencies to share and collaborate across government, as well as with industry partners. She said it’ll also “remove unnecessary barriers to the responsible use of AI in government.”

Many agencies are already putting AI tools to work.

The Centers for Disease Control and Prevention is using AI to predict the spread of disease and detect illegal opioids, while the Centers for Medicare & Medicaid Services is using AI to reduce waste and identify anomalies in drug costs.

The Federal Aviation Administration is using AI to manage air traffic in major metropolitan areas and improve travel time.

OMB’s guidance encourages agencies to “responsibly experiment” with generative AI, with adequate safeguards in place. The administration notes that many agencies have already started this work, including by using AI chatbots to improve customer experience.

100 new AI hires coming to agencies by this summer

Young said the federal government is on track to hire at least 100 AI professionals into the federal workforce this summer, and is holding a career fair on April 18 to fill AI roles across the federal government.

President Joe Biden called for an “AI talent surge” across the government in his executive order last fall.

As federal agencies increasingly adopt AI, Young said agencies must also “not leave the existing federal workforce behind.”

OMB is calling on agencies to adopt the Labor Department’s upcoming principles for mitigating AI’s potential harm to employees.

The White House says the Labor Department is leading by example, consulting with federal employees and labor unions in the development of those principles, as well as on its own governance and use of AI.

Later this year, OMB will take additional steps to ensure agencies’ AI contracts align with its new policy and protect the rights and safety of the public from AI-related risks.

As part of that work on federal procurement of AI, OMB released a request for information on Thursday to collect public input.

A senior administration official said OMB, as part of the RFI, is looking for feedback on how to “support a strong and diverse and competitive federal ecosystem of AI vendors,” as well as how to incorporate OMB’s new AI risk management requirements into federal contracts.

The public has until April 28 to respond to the RFI.
