NIST seeks input on guidance to pin down trustworthy AI

The National Institute of Standards and Technology is seeking public input on what to include in forthcoming guidance that will set rules of the road for fielding trustworthy artificial intelligence in and out of government.

NIST, following the recommendations of the National Security Commission on AI, is working on an AI Risk Management Framework that will set voluntary standards for agencies and industries to consider when adopting AI solutions.

NIST, in a request for information posted Wednesday, said the upcoming framework will define trustworthy AI in terms of transparency, fairness and accountability. The agency plans to release the framework as a “living document” that adapts to changes in technology and practices.

“Defining trustworthiness in meaningful, actionable, and testable ways remains a work in progress,” the agency wrote in its RFI. “Inside and outside the United States there are diverse views about what that entails, including who is responsible for instilling trustworthiness during the stages of design, development, use, and evaluation.”

NIST plans to write the framework in plain language accessible to senior executives and non-AI professionals, while still providing technical depth for multiple disciplines.

The agency points out that AI-enabled systems are already benefiting industries including healthcare, transportation and cybersecurity.

Beyond these opportunities, however, NIST is seeking feedback on how users should account for “unintentional, unanticipated, or harmful outcomes” that stem from AI algorithms.

NIST is accepting public comments through Aug. 19 and is asking how best to define the following concepts as they pertain to AI standards:

    • Accuracy
    • Explainability and interpretability
    • Reliability
    • Privacy
    • Robustness
    • Safety
    • Security (resilience)
    • Mitigation of unintended and/or harmful bias

OSTP, NSF seek feedback on shared AI data resource

The task force the Biden administration is launching to expand access to federal data and tools for artificial intelligence research is seeking public feedback on how it should get started.

The White House Office of Science and Technology Policy (OSTP) and the National Science Foundation (NSF) are asking for input on steps the administration’s National AI Research Resource (NAIRR) Task Force should take to meet its mission.

The task force will stand up a shared computing and data infrastructure that gives AI researchers and students from multiple scientific backgrounds access to computing power, high-quality data and training resources to support AI breakthroughs.

OSTP and NSF are seeking comments through Sept. 1 on a broad set of questions, including how the administration should fund and operate the NAIRR, and how to balance the sharing and security of federal data on this shared infrastructure.

Both agencies, in their request for information posted Friday, are also asking for feedback on what “building blocks” already exist across the public and private sectors to help stand up the NAIRR, and how this shared infrastructure can democratize access to AI R&D.

“The goal for such a national resource is to democratize access to the cyberinfrastructure that fuels AI research and development, enabling all of America’s diverse AI researchers to fully participate in exploring innovative ideas for advancing AI, including communities, institutions, and regions that have been traditionally underserved — especially with regard to AI research and related education opportunities,” the RFI states.

Congress mandated the NAIRR task force in the National AI Initiative Act, which passed as part of this year’s National Defense Authorization Act.

The task force includes members from NIST, the Energy Department and top universities.

Lynne Parker, the director of OSTP’s National AI Initiative Office, co-chairs the task force along with Erwin Gianchandani, NSF’s deputy assistant director for computer and information science and engineering.

The task force will submit two reports to Congress — an interim report in May 2022 and a final report in November 2022.
