The National Institute of Standards and Technology is seeking public input on what to include in forthcoming guidance that will set rules of the road for fielding trustworthy artificial intelligence in and out of government.
NIST, following the recommendations of the National Security Commission on AI, is working on an AI Risk Management Framework that will set voluntary standards for agencies and industries to consider when adopting AI solutions.
In a request for information posted Wednesday, the agency said the upcoming framework will define trustworthy AI in terms of transparency, fairness and accountability. NIST plans to release the framework as a “living document” that adapts to changes in technology and practices.
“Defining trustworthiness in meaningful, actionable, and testable ways remains a work in progress,” the agency wrote in its RFI. “Inside and outside the United States there are diverse views about what that entails, including who is responsible for instilling trustworthiness during the stages of design, development, use, and evaluation.”
The National AI Research Resource (NAIRR) Task Force, meanwhile, will stand up a shared computing and data infrastructure that gives AI researchers and students from a wide range of scientific backgrounds access to computing power, high-quality data and training resources to support AI breakthroughs.
OSTP and NSF are seeking comments through Sept. 1 on a broad set of questions, including how the administration should fund and operate the NAIRR, and how to balance the sharing and security of federal data on this shared infrastructure.
Both agencies, in their request for information posted Friday, are also asking for feedback on what “building blocks” already exist across the public and private sectors to help stand up the NAIRR, and how this shared infrastructure can democratize access to AI R&D.
“The goal for such a national resource is to democratize access to the cyberinfrastructure that fuels AI research and development, enabling all of America’s diverse AI researchers to fully participate in exploring innovative ideas for advancing AI, including communities, institutions, and regions that have been traditionally underserved — especially with regard to AI research and related education opportunities,” the RFI states.
The task force includes members from NIST, the Energy Department and top universities.
Lynne Parker, the director of OSTP’s National AI Initiative Office, co-chairs the task force along with Erwin Gianchandani, NSF’s deputy assistant director for computer and information science and engineering.
The task force will submit two reports to Congress — an interim report in May 2022 and a final report in November 2022.