NIST sets AI ground rules for agencies without ‘stifling innovation’

The National Institute of Standards and Technology has set a roadmap for the government’s role in developing future AI breakthroughs.


As agencies continue to experiment with artificial intelligence as a tool to transform the way they do business, the National Institute of Standards and Technology has released a roadmap for the government’s role in developing future AI breakthroughs.

After months of feedback from industry and elsewhere in government, as well as an in-person workshop in May, NIST has laid down some ground rules for what agencies should and shouldn’t do with AI tools going forward.

NIST’s plan marks the federal government’s first major effort to provide clarity and guidance to agencies looking to adopt a technology that, while buzzworthy now, actually dates back to the 1950s, yet still remains in its infancy.

In recognition that AI technology remains in its early stages, Elham Tabassi, the acting chief of staff at NIST’s Information Technology Lab and the lead author of the NIST report, said one of the report’s main takeaways is striking the balance between regulation and innovation.

“It’s important that the standards should not be over-prescriptive. Otherwise, they are stifling innovation. And it’s extremely important for the case of AI, being a fast-moving technology. So flexibility is the key,” Tabassi said in an interview.

The report stems from an executive order President Donald Trump signed in February, which tasked NIST with developing a strategy to “minimize vulnerability to attacks from malicious actors,” build public confidence in AI and develop international standards for AI use.

For the first two goals, Tabassi said NIST has more guidance in the works. In the next few months, NIST will seek public comment on its recommendations on confronting adversarial machine learning. And early next year, the agency will also seek feedback on upcoming standards for explainable AI.

On its third goal, the government has already made progress. In May, 40 countries, including the U.S., signed off on a common set of AI principles through the Organization for Economic Cooperation and Development.

NIST, in its report, found industry groups had already established AI standards on everything from data to risk management, but no organization had yet finalized a template for developing public trust in decisions reached by AI algorithms. Tabassi said that remains a significant hurdle for rolling out AI tools at agencies.

“We need them to be explainable, rather than just give an answer. They should be able to explain how they derive that prediction or [arrive at] that decision,” she said. “That goes a long way on increasing trust in the system.”

The Defense Advanced Research Projects Agency has invested billions over the past few years in developing the next generation of “explainable” AI. But Tabassi said agencies need different standards of AI explainability, considering use cases for AI in government range from the Transportation Department’s work on self-driving cars to the Patent and Trademark Office improving its patent approval process.

“If you’re explaining the results of an AI system that looks at an MRI of a brain, and it says if there’s a tumor or not — if you’re explaining it to a patient, rather than a technician, there’s a different level of explainability that’s needed,” Tabassi said.
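One way to make Tabassi’s distinction concrete is a model that reports the rules behind each answer, not just the answer. The sketch below is a minimal illustration, not anything prescribed by the NIST report: it assumes scikit-learn is available and uses a small decision tree, one of the simpler inherently interpretable models, to print a prediction along with the conditions that produced it.

```python
# Illustrative only: a decision tree can report not just a prediction but
# the rules that led to it. The dataset, model and output format here are
# assumptions for demonstration, not anything from the NIST report.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(data.data, data.target)

sample = data.data[:1]                       # one record to classify
label = data.target_names[clf.predict(sample)[0]]
print(f"Prediction: {label}")

# Walk this sample's path through the tree and print each decision rule,
# so the system explains how it derived the prediction.
path = clf.decision_path(sample)
for node in path.indices:
    feat = clf.tree_.feature[node]
    if feat >= 0:                            # internal node (leaves store -2)
        name = data.feature_names[feat]
        thresh = clf.tree_.threshold[node]
        op = "<=" if sample[0, feat] <= thresh else ">"
        print(f"  because {name} {op} {thresh:.2f}")
```

In the MRI scenario Tabassi describes, the same underlying rules could be rendered in plain language for a patient and in full technical detail for a technician; the level of explanation, not the model, is what changes with the audience.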

To improve AI reliability, NIST has advised agencies to develop data standards that “make the training data needed for machine learning applications more visible and more usable to all authorized users.”

The report also recommends developing better standards for agency “AI testbeds,” to ensure that “researchers can use actual operational data to model and run experiments on real-world systems … and scenarios in good test environments.”

As for next steps, NIST suggests that the White House National Science and Technology Council’s Machine Learning and Artificial Intelligence Subcommittee appoint a standards coordinator to “gather and share AI standards-related needs.”

NIST also recommends that the Office of Personnel Management and the Commerce Department take the lead in hiring and training federal employees so that they’ll have the skills they need to work with AI tools.

That includes building a “clear career development and promotion path that values and encourages participation in and expertise in AI standards and standards development,” the report states.

While most employees who work with AI tools will need a foundation in math and computer science, Tabassi said agencies will need to rely on a “multidisciplinary” skillset that includes engineering, statistics, social sciences and psychology.

“This particular recommendation, just like the whole context of the plan, is around standards development,” she said. “And when it comes to standards development, there is an extra set of skills that people need to have: Do you have to be a technical expert to have knowledge of the topic, to be able to spot and discuss the needs of industry and … figure out how the government can help with that?”
