Federal agencies flag 1,200 AI use cases, but few actually implemented

Federal agencies see lots of possibilities for using artificial intelligence tools in their day-to-day work. But they’ve only put a fraction of those ideas into practice.

The Government Accountability Office, in a report released Tuesday, found 20 nondefense agencies identified more than 1,200 use cases for AI in government.

According to GAO, these agencies have implemented about 16% of all the AI use cases they’ve submitted to the Office of Management and Budget.

The Office of Personnel Management, for example, is using AI to provide job seekers on USAJobs with recommendations for positions, based on the skills they’ve identified.
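
As a rough illustration of how a skills-based recommender of this kind might work, the sketch below ranks postings by the overlap between a seeker's self-reported skills and the skills tagged on each posting. The postings, skills and function names are invented for illustration and are not drawn from OPM's actual USAJobs system.

```python
# Illustrative sketch only: rank hypothetical job postings by skill overlap
# with a seeker's self-identified skills. Not OPM's actual recommender.

def recommend_jobs(seeker_skills, postings, top_n=3):
    """Return up to top_n posting titles with the greatest skill overlap."""
    seeker = {s.lower() for s in seeker_skills}
    scored = []
    for title, required in postings.items():
        required_set = {s.lower() for s in required}
        # Jaccard similarity: shared skills over all distinct skills
        score = len(seeker & required_set) / len(seeker | required_set)
        scored.append((score, title))
    return [title for score, title in sorted(scored, reverse=True)[:top_n] if score > 0]

postings = {
    "Data Scientist": ["python", "statistics", "machine learning"],
    "IT Specialist": ["networking", "cybersecurity", "linux"],
    "Program Analyst": ["excel", "reporting", "statistics"],
}
print(recommend_jobs(["Python", "Statistics"], postings))
# ['Data Scientist', 'Program Analyst']
```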

The Department of Health and Human Services is also using an AI chatbot to give automated email responses to general physical security questions, which allows its help desk team to better assist employees and contractors.
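
A minimal sketch of how an FAQ-style auto-responder along these lines could work appears below; it matches an incoming question against a small set of invented physical-security questions and canned answers by keyword overlap. It assumes nothing about HHS's actual chatbot.

```python
# Illustrative sketch only: pick the canned answer whose FAQ question shares
# the most words with the incoming query. FAQ entries are made up.
import re

FAQ = {
    "How do I replace a lost badge?": "Contact the badging office to report the loss and schedule a reissue.",
    "What are the visitor escort rules?": "Visitors must be escorted by a cleared employee at all times.",
    "When does the building close?": "Standard access hours end at 6 p.m.; after-hours entry requires approval.",
}

def tokenize(text):
    """Lowercase and split into word tokens, dropping punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def auto_reply(question):
    """Return the answer for the FAQ question with the largest word overlap."""
    words = tokenize(question)
    best_q = max(FAQ, key=lambda q: len(words & tokenize(q)))
    return FAQ[best_q]

print(auto_reply("I lost my badge, how do I get a new one?"))
```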

Federal Chief Information Officer Clare Martorana said last week that AI tools are already being used to rewrite federal website content for search engine optimization.

GAO said most of the reported AI use cases are “in the planning phase, and not yet in production.” The report states agencies are also using AI to combat fraud, identify cybersecurity threats and review large datasets more quickly.

The report states that AI “holds substantial promise for improving the operations of government agencies.”

But auditors also warned AI poses risks to the public — such as producing biased outcomes that can “amplify existing inequity” among historically underserved populations.

“Given the rapid growth in capabilities and widespread adoption of AI, the federal government must manage its use of AI in a responsible way to minimize risk, achieve intended outcomes, and avoid unintended consequences,” GAO wrote.

NASA identified 390 AI use cases, the most submitted by any agency, followed by the Commerce Department with 285. The audit excludes AI use cases from the Defense Department.

GAO found federal agencies in fiscal 2023 requested $1.8 billion for nondefense AI research and development.

Agencies made 888 of their AI use cases public. The remaining 353 use cases were considered sensitive or not disclosed to the public.

Three agencies — the Department of Housing and Urban Development (HUD), the Nuclear Regulatory Commission (NRC), and the Small Business Administration (SBA) — reported they didn’t have any AI use cases.

GAO notes that the inventory of AI use cases across the government isn’t comprehensive. Auditors found agencies submitted some duplicate use cases, didn’t include all the OMB-requested data for each use case and, in some cases, submitted use cases that were “determined to be not AI.”

The report also highlights foundational steps agencies have completed to accelerate the use of AI in government.

The General Services Administration, as required under the 2020 AI in Government Act, stood up its AI Center of Excellence. The center expands GSA’s mission as a shared service provider and seeks to facilitate the adoption of artificial intelligence technologies in the federal government.

The National Institute of Standards and Technology in January released its AI Risk Management Framework, which serves as new, voluntary rules of the road for what responsible use of artificial intelligence tools looks like for many U.S. industries.

The framework gives public and private-sector organizations several criteria on how to maximize the reliability and trustworthiness of AI algorithms they are about to develop or deploy.

The Office of Management and Budget last month released draft guidance directing agencies to build up leadership around AI and accelerate the adoption of AI tools. OMB accepted public comments on the draft through Dec. 5.

OMB Deputy Director for Management Jason Miller told GAO that, once it finalizes its AI memo, each agency will be required to submit an implementation plan.

“As proposed, the draft memorandum would establish AI governance structures in federal agencies, recommend approaches for AI use, and manage risks from government uses of AI, including through a series of required steps to identify and mitigate discrimination and disparate impact caused by AI,” Miller wrote.

National Science Foundation Director Sethuraman Panchanathan told reporters on Monday that the “AI of today has been made possible by five to six decades of investments by NSF” — even through “AI winters,” or periods of stagnation in underlying research.

“Even when things don’t look like … a technology will make it, or make it in the form that is being talked about, NSF invests in those ideas that we believe have the potential to ensure that we are able to get those to transcend those AI winters,” Panchanathan said.

The Biden administration, under a recent AI executive order, is also focused on a “governmentwide AI talent surge.” 

“How do we take people who are not in the STEM workforce — how can they be steered into the STEM workforce? These are things that we’re very much committed to and engage in,” Panchanathan said.

OPM in July outlined more than 50 total competencies agencies should consider when hiring staff into AI-related positions.

But the agency still hasn’t established or updated a federal occupational category for employees performing AI work. It also has yet to prepare a two-year and five-year forecast of the federal employees in these positions.

Under the 2020 AI in Government Act, OPM is also on the hook for providing an inventory of federal rotational programs that could help expand the number of federal employees with AI expertise.

It’s also expected to issue a report with recommendations on expanding AI expertise in government.

“Until OPM completes these actions, agencies will likely have issues identifying requirements for AI positions,” GAO wrote.

OPM officials told GAO that creating a single occupational series or variations of AI series “is not conducive to the agency’s needs and missions, as AI work impacts many occupational series.”

Agency officials also said, “competing legislative priorities have further delayed the agency’s decision to establish a new occupational series or update an existing series.”

GAO released the report the same day as the first meeting of the White House AI Council.

A White House official said the council discussed ways to bring more AI talent and expertise into government. The council also discussed safety testing for new AI models, efforts to prevent AI fraud and deception, protections against bias and discrimination, maximizing benefits to workers and enhanced privacy.

The AI Council also received a classified intelligence briefing from the president’s national security team, with a focus on “the international dimensions and capabilities of artificial intelligence.”
