What’s in the ‘black box’ of AI? NIST invites industry to brainstorm standards



As agencies still grapple with what artificial intelligence can do for them, and as public and private-sector researchers continue to push the boundaries of what’s possible, the National Institute of Standards and Technology is laying down some AI ground rules.

An executive order signed by President Donald Trump in February tasked NIST with developing technical standards for AI tools and gave the agency 180 days to produce a plan. NIST now has just over two months to submit it.

As part of that effort, the agency held a workshop Thursday at its headquarters to better understand both the challenges and the opportunities of AI.

NIST put out a request for information on May 1, asking for public feedback on what the agency should include in its AI standards plan. But on Thursday, the agency extended the comment deadline from May 31 to June 10.

NIST Director Walter Copan said setting technical standards for AI would reduce the government’s vulnerability to attacks from malicious actors. He also said standards would reflect federal priorities for innovation and would aim to build public confidence in systems that use AI technology.

“AI is already transforming so many aspects of our lives and it has the potential to do so much more,” Copan said. “We’re living in an environment where there’s concern, and so the standardization process, as well as the development of the appropriate tools, is an important initiative. We need to work together to ensure that we make the most of this technology while ensuring safety, privacy and security.”

International push for AI standards

But it’s not just NIST that’s coming up with AI ethics guidelines. On May 22, 40 countries, including the U.S., signed off on a common set of AI principles through the Organisation for Economic Co-operation and Development (OECD).

Those principles include a greater sharing of data, as well as equipping workers with the skills they’ll need to compete for jobs in the future.


“The principles begin to answer the questions of how we should think about developing and deploying AI technologies,” Copan said. “We now need to work on how we implement these principles in the real world — hence, this workshop. We need science and technology-based standards that we can use to translate these aspirational principles into actionable and measurable solutions. Our efforts today are an important step in defining these solutions.”

Those international principles also underscore the global arms race among countries striving to have the best AI. Lynne Parker, the assistant director for Artificial Intelligence in the White House’s Office of Science and Technology Policy, said competition has pushed the U.S. to be more proactive about staying on the cutting edge of AI. That marks a different tack from the rise of the computer and information age, when American companies developed most of the tech innovations and set the standards for how they’d function.

“We have to recognize that we’re in a new climate now, so that global competitiveness now requires us to be more intentionally proactive in promoting that open, transparent process, so that we can make sure that all of our good ideas that are coming out of the United States have an equal footing. We’re not afraid of competition internationally … in some sense, the federal government, not NIST in particular, but broadly speaking, has not recognized the importance of standards for AI, because we presumed that the process of the past would continue to work well going forward,” Parker said.

Standards first, oversight second

But there are a lot of concepts to break down when talking about AI. Joshua New, a senior policy analyst at the Center for Data Innovation, said when most people talk about AI standards, they’re really talking about two separate things: First, there are technical standards — metrics like reliability, performance and accuracy. Then there are oversight and ethics issues. Those conversations include what AI should do, and whether the data feeding AI could bias its outcome.

Lawmakers have already focused on the oversight piece of this, but New said those conversations can’t really get off the ground without figuring out the tech standards first.

“Unfortunately, this prioritization of oversight is seemingly coming at the expense of a focus on standards development,” he said. “You know, the activities required to develop standards require a really robust scientific understanding that can serve as a technological underpinning for this oversight.”

In April, Sens. Cory Booker (D-N.J.) and Ron Wyden (D-Ore.), as well as Rep. Yvette Clarke (D-N.Y.), introduced the Algorithmic Accountability Act, which would require the Federal Trade Commission to set standards to avoid algorithmic bias. However, New said algorithmic transparency doesn’t have a standard definition.

“From a technical perspective, we don’t know what that means. We don’t know how to compare the transparency of one system to another, and rushing to make those rules without actually doing the scientific legwork behind it is going to be short-sighted, as any rules will necessarily be arbitrary. So, I guess the challenge for us is how do we get non-technical policymakers to care about this really important technical work? It’s a challenge,” New said.

In the health care field, the stakes around AI and data privacy are a lot higher. Jon White, the deputy national coordinator for health information technology at the Department of Health and Human Services, said his team has been working with HHS’s Office for Civil Rights on updating guidance for the Health Insurance Portability and Accountability Act (HIPAA) to reflect the coming age of AI.

“If you want to participate in a research initiative that is training a really powerful algorithm, you can choose to share your information if you want to, but what are the policies around that? What does the research initiative have to make clear to you about your data and the conditions under which it’s used? This initiative actually has been really forward-looking in that respect,” White said.

AI conversation is bigger than IT

Talks about AI standards will include more than engineers and the IT community. Jason Matusow, the general manager of the Corporate Standards Group at Microsoft, said professional ethicists need to be a part of these conversations as well.

“If you talk to an engineer about … different ethical models, they’re going to look at you really blankly because it’s not something they’ve studied,” Matusow said. “We have a need to have people who are truly trained ethicists, people who have a basis for the discussion.”

The Defense Advanced Research Projects Agency has spent billions on the next generation of “explainable” AI. Think of a bot that can give an answer and then show the steps it took to get that answer. But until researchers make that next big breakthrough, Parker said it’s a challenge to build public trust in AI’s reliability.

“Some of the techniques, particularly the deep-learning techniques that are very popular today that have contributed to these use cases being successful now, that technology is a black box. In that sense, because we’re using it very frequently, and we can’t look in the black box, that raises these questions about is it being used appropriately, because we can’t really understand what’s going on,” Parker said.
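To make Parker’s point concrete, here is a minimal sketch, not from the article and assuming only the scikit-learn library, that contrasts an inherently interpretable model with a “black box” one: a shallow decision tree can print every prediction as a readable chain of threshold tests, while a small neural network of similar accuracy offers no comparable account of any individual answer.

# A minimal sketch (not from the article): an interpretable model vs. a
# "black box," assuming scikit-learn is installed. The data and model
# choices are illustrative; the article names no specific system.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# A shallow decision tree is "explainable" in Parker's sense: every
# prediction traces to a human-readable chain of threshold tests.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# A small neural network can match that accuracy, but its learned
# weights give no step-by-step account of any individual answer.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, y)
print("tree accuracy:", round(tree.score(X, y), 3))
print("mlp accuracy:", round(mlp.score(X, y), 3))

DARPA’s explainable-AI research aims at closing exactly that gap: models with the accuracy of the second kind and the self-accounting of the first.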

For all the concern about AI ethics, the actual rollout of AI technology in government has been slow-going. The Defense Department has tested AI for predictive maintenance on vehicles and aircraft. Civilian agencies have experimented with robotic process automation. RPA pilots at the General Services Administration and the IRS helped employees save time on repetitive, low-skill tasks.

But even if AI adoption remains in its early stages, Parker said developing standards will help get the ball rolling.

“There’s always this question about the timing: When is the right time to push for standards in a particular area — and an AI is still very new — but also people are concerned about issues of governance and oversight,” Parker said. “You can’t govern and oversee if you don’t have good ways of measuring is a system fair … There’s this concern that if you standardize too early, then you’re missing out on good ideas that are still coming down the pipe, because the area is still new. But it can also really accelerate progress in the field.”
