Web of policies not syncing up into single federal AI strategy, report warns

A tangled web of policies on artificial intelligence is coming from the White House, Congress and agency leadership, but those policies aren’t yet syncing up into a single strategy for how the federal government should develop or field AI tools.

Agencies have launched dozens of AI and machine-learning tools and pilots in recent years that make back-office functions easier for federal employees. But a recent report from the Advanced Technology Academic Research Center (ATARC) finds agencies are focused on getting the policy right before pushing ahead with greater adoption of AI systems.

The report from ATARC’s AI data policy working group found dozens of separate AI ethics, policy and technical working groups scattered across the federal government.

The working group includes members from the General Services Administration, the Defense Health Agency, the National Institutes of Health and the Veterans Affairs Department.

“While a few overall governance structures for AI policy have begun, we are concerned that the resulting policies may be incomplete, inconsistent or incompatible with each other,” the report states.

The report identifies six entities working on AI policies that apply to the entire federal government:

  • White House Office of Science and Technology Policy
  • General Services Administration
  • National Security Commission on Artificial Intelligence
  • Commerce Department
  • Office of Management and Budget
  • Senate and House Artificial Intelligence Caucuses

The Trump administration in January stood up a National AI Initiative Office aimed at promoting research and development of AI systems throughout the federal government. The office, which the Biden administration kept in place, is also focused on advancing data availability and assessing issues related to the AI workforce.

The Biden White House in June also stood up a National AI Research Resource (NAIRR) Task Force with the National Science Foundation. The task force, required under the 2020 National AI Initiative Act, will look at how to expand access to AI education and other critical resources.

The task force includes members from the National Institute of Standards and Technology (NIST), the Energy Department and top universities.

Meanwhile, NIST is working on a framework to help agencies design and implement AI systems that are ethical and trustworthy.

It’s more common, however, to see agencies develop their own AI strategies. The working group found that at least 10 agencies are developing AI policies or tools focused on internal use, rather than a cross-agency framework.

That group includes the Defense Department, which has its Joint AI Center (JAIC) and its AI Center of Excellence. DoD adopted a set of AI ethics principles in February 2020, developed by the Defense Innovation Board, a panel of science and technology experts from industry and academia.

The JAIC this March also reached initial operating capability for its Joint Common Foundation, a DoD-wide development platform meant to accelerate testing and adoption of AI tools across the department.

The intelligence community borrowed elements of the DoD AI ethics principles to create its own AI ethics policy in July 2020.

DoD and the intelligence community adopted these policies at the urging of the National Security Commission on AI, which called on both to achieve AI readiness by 2025 to retain a tactical advantage over international adversaries. The NSCAI issued its final report in March and will sunset Friday.

Agencies take cautious approach to ‘high-risk’ AI

Agencies already fielding AI tools are still taking a cautious approach to this technology.

Oki Mek, the chief AI officer at the Department of Health and Human Services, said HHS is training its workforce to better understand what AI and machine learning can do for the agency, while also ensuring that it builds and acquires AI systems in a way that meets legal requirements.

“We want to make sure that we have this ecosystem of support because AI and machine learning, it’s a new, innovative, transformative domain,” Mek said Tuesday during an ATARC panel. “The failure rate is going to be high. Even just IT projects, the failure rate is quite high, but with new emerging technology, it’s going to be really high.”

Pamela Isom, director of the Energy Department’s AI and Technology Office, said her office is holding listening sessions as part of an effort to create an agency AI risk management playbook.

The playbook, she added, will highlight some of the agency’s best practices in fielding ethical AI systems and avoiding bias when training algorithms.

“These are issues that could transpire not necessarily intentionally. It’s a matter of awareness and understanding what are some of the things that we can do to prevent inherited biases,” Isom said.
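Isom’s point about inherited bias lends itself to a concrete illustration. Below is a minimal sketch — not drawn from DoE’s playbook — of one basic check a risk management process might include: comparing a model’s error rates across subgroups of the training data. The column names and data here are hypothetical.

```python
# Minimal sketch (illustrative, not an agency's actual tooling): compare a
# model's misclassification rate across subgroups as a basic bias check.
import pandas as pd

def error_rate_by_group(df: pd.DataFrame, group_col: str,
                        label_col: str, pred_col: str) -> pd.Series:
    """Return the misclassification rate for each subgroup in group_col."""
    errors = df[label_col] != df[pred_col]  # True where the model was wrong
    return errors.groupby(df[group_col]).mean()

# Toy data with hypothetical column names. A large gap between subgroup
# error rates is a signal to re-examine the training data before fielding.
df = pd.DataFrame({
    "region": ["urban", "urban", "rural", "rural", "rural"],
    "label":  [1, 0, 1, 1, 0],
    "pred":   [1, 0, 0, 0, 0],
})
print(error_rate_by_group(df, "region", "label", "pred"))
# rural rows are misclassified ~67% of the time, urban rows 0% -- the kind
# of unintentional disparity Isom describes.
```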

While DoE is looking at how AI can provide insights into climate change and its health impacts on communities, Isom said the agency also needs to get a handle on fundamentals like AI labeling and standards for annotation.

“AI in itself is not about the technology. AI in itself is about mission challenges and addressing those challenges. And in order to do so, we can’t think about it the way we have traditionally thought about software development. The whole life cycle is much more incremental, it’s much more iterative, it’s much more agile. You are training with production data,  you’re training with the real deal, because the AI is going to be making real-life safety decisions. You’ve got to test it, you’ve got to train it well,” Isom said.
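The iterative lifecycle Isom describes — retraining on fresh production data and testing before each release — can be sketched in a few lines. This is an illustrative assumption, not DoE’s actual process; the acceptance threshold and function below are hypothetical.

```python
# Minimal sketch (assumed, not DoE's process): an incremental "train, test,
# field" gate of the kind Isom describes for safety-relevant AI.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

ACCEPTANCE_THRESHOLD = 0.95  # hypothetical safety bar for fielding a model

def retrain_and_evaluate(X_train, y_train, X_test, y_test):
    """Train a fresh model on the latest production data and gate on a holdout."""
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    # Field the new model only if it clears the bar; otherwise keep the old one.
    return (model, accuracy) if accuracy >= ACCEPTANCE_THRESHOLD else (None, accuracy)
```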

Ed McLarney, NASA’s digital transformation lead for AI and ML, said the agency is backing AI pilots where the technology assists the workforce in areas including mission support, human resources and finance. He said NASA has also created an ethical AI framework.

“It’s really important that we have a more widespread discussion about AI ethics, AI risks and mitigations and approaches. It’s such a kind of turbulent, Wild West, new frontier, that there’s just a lot of discussion and debate that our overall communities have got to have,” he said.

Brett Vaughn, the Navy’s chief AI officer, said the service is looking at AI to enable autonomous and unmanned systems, as well as improve the efficiency of back-office functions that support readiness.

The Navy, Vaughn said, faces its biggest AI challenge in making sure these tools keep operating in rugged environments with limited connectivity.

“For us, that means possibly literally the middle of the Pacific Ocean, so it’s a challenging technologic and operating environment,” he said.

But the Navy also needs to overcome organizational and cultural challenges.

“We’re an organization that’s hundreds of years old. For us, getting digital, which is a predicate to being effective in AI — let’s say we’ll have to work at it. It’s a constant challenge, it’s widespread,” Vaughn said.
