OMB tells agencies to name chief AI officer to accelerate tech adoption across government

The Biden administration is setting new rules for how federal agencies should accelerate the use of artificial intelligence tools, and set up guardrails for this emerging technology.

The Office of Management and Budget on Wednesday released proposed guidance for how agencies should implement the AI executive order that President Joe Biden signed Monday.

OMB Director Shalanda Young tells agencies in the memo that AI “is one of the most powerful technologies of our time, and the president has been clear that we must seize the opportunities AI presents while managing its risks.”

The guidance focuses on building up agency leadership around AI, accelerating the adoption of AI tools within agencies and managing the risks of those AI tools.

“Every day, the federal government makes decisions and takes actions that have profound impacts on the lives of Americans. Federal agencies have a distinct responsibility to identify and manage AI risks because of the role they play in our society,” the White House wrote in a press release.

OMB is accepting comments on the draft guidance through Dec. 5.

Vice President Kamala Harris discussed OMB’s draft guidance with foreign leaders at the AI Safety Summit in the United Kingdom.

“President Biden and I reject the false choice that suggests we can either protect the public or advance innovation. We can — and we must — do both. The actions we take today will lay the groundwork for how AI will be used in the years to come,” Harris said.

OMB’s draft guidance closely resembles details that Federal News Network first reported last month. Senior administration officials previewed the OMB guidance in May.

The proposed guidance charges agency heads with accelerating the adoption of AI tools, and ensuring that their agencies follow all legal requirements when implementing AI tools. It also directs agency heads to consider the funding and resources they’ll need to adopt AI tools in upcoming budget proposals.

Under the draft guidance, OMB gives agencies 60 days to name a chief AI officer who will lead implementation efforts. OMB said CAIOs will primarily focus on “coordination, innovation and risk management for their agency’s use of AI.”

The draft guidance states AI is “deeply interconnected” with data, IT, cybersecurity, civil rights, customer experience and workforce management functions of agencies, and that CAIOs must coordinate their work with other senior executives within their agencies.

Among their duties, agency CAIOs will work with the top financial and workforce officials within their agencies to better understand the funds and personnel they’ll need to manage AI tools.

The draft guidance suggests agencies in some cases may designate an existing official, such as a chief technology officer or chief data officer, as their CAIO, “provided they have significant expertise in AI.”

The draft guidance requires 24 large agencies covered under the CFO Act to develop an enterprise AI strategy.

OMB is also requiring those same agencies to create AI governance boards, led by the deputy secretary of each agency and their newly appointed chief AI officer. OMB is requiring that these agency AI governance boards meet no less than quarterly to oversee the internal use of AI tools.

Members of the board will include senior leaders working on IT, cybersecurity, data, human capital, procurement, budget, agency management, customer experience, performance evaluation, statistics, risk management, equity, privacy, civil rights and civil liberties.

“Agencies must ensure that AI issues receive adequate attention from the agency’s senior leadership,” OMB wrote in its draft guidance.

The proposed guidance also directs agencies to explore the use of generative AI tools, like ChatGPT, with “adequate safeguards and oversight mechanisms.”

“Agencies should assess potential beneficial use cases of generative AI in their missions and establish adequate safeguards and oversight mechanisms that allow generative AI to be used in the agency without posing undue risk,” the draft guidance states.

Several agencies — including the General Services Administration, Environmental Protection Agency and the Department of the Navy — have issued policies this year restricting internal use of generative AI and large language models.

OMB directs agencies to remove “barriers to the responsible use of artificial intelligence.” That includes a better understanding of AI’s impact on agencies’ IT infrastructure, data, cybersecurity, and workforce.

The draft guidance calls on agencies to provide resources and training to upskill federal employees and develop AI talent internally. It also directs them to increase AI training offerings, “including opportunities that provide federal employees pathways to AI occupations and that assist employees affected by the application of AI to their work.”

OMB is giving agencies a year to develop an AI strategy that will be available online to the public. Each agency’s AI strategy will detail its top use cases of AI, as well as planned uses of AI that are under development.

The annual strategy should describe the state of the agency’s AI workforce and anticipated workforce needs, “as well as a plan to recruit, hire, train, retain and empower AI practitioners and achieve AI literacy for non-practitioners involved in AI to meet those needs.”

“Agencies should take full advantage of available special hiring and retention authorities to fill gaps in AI talent, encouraging applications from individuals with diverse perspectives and experiences, and ensure the use of recruitment best practices for AI positions, such as descriptive job titles and skills-based assessments,” the draft guidance states.

Agencies have until August 2024 to complete an AI impact assessment, and implement minimum risk management requirements for AI use cases that impact safety or rights, “or else stop using any that is not compliant with the minimum practices.”

“When AI is used in agency functions, the public deserves assurance that the government will respect their rights and protect their safety,” the White House wrote in its press release.

The draft guidance states that AI use cases that cover health, education, employment, housing, federal benefits, law enforcement, immigration, child welfare, transportation, critical infrastructure, safety and the environment are presumed to impact safety or rights.

After finalizing the guidance, OMB says it will work on ensuring federal contracts adhere to the requirements set out in the Biden administration’s latest AI executive order.

“While agencies will realize significant benefits from AI, they must also manage a range of risks from the use of AI,” the memo states.

The White House states AI is already helping the government better serve the public.

The National Oceanic and Atmospheric Administration, for example, is using AI to notify households about severe weather events. The Department of Homeland Security is using AI to protect civilian agencies from cyber threats.

Federal agencies identified over 700 ways they use AI to advance their missions. Agencies are expected to keep updating their inventory of AI use cases under OMB’s proposed guidance.

Copyright © 2024 Federal News Network. All rights reserved.