White House releases ‘first of its kind’ set of binding AI principles for agency regulators

Administration officials said this set of principles marks the first “binding document” that holds agencies accountable to how they regulate the private sector.


In a follow-up to President Donald Trump’s executive order on artificial intelligence, the White House’s Office of Science and Technology Policy has released what it has described as a “first of its kind” set of principles that agencies must meet when drafting AI regulations.

Other agencies have spent considerable effort pinning down ethical AI use over the past year. The defense and national security communities, for example, recently mapped out their own ethical considerations for AI, and the United States last spring signed off on a common set of international AI principles with more than 40 other countries.

But administration officials told reporters Monday that this set of principles marks the first “binding document” that holds agencies accountable to how they regulate the private sector’s use of AI technology.

U.S. Chief Technology Officer Michael Kratsios said agencies must demonstrate to the White House that they’ve met all 10 of its AI principles when proposing rules governing the private sector’s deployment of AI technology. These principles, he said, demonstrate the Trump administration’s commitment to “maintain and strengthen the U.S. position of leadership” on AI.

“The U.S. AI regulatory principles provide official guidance and reduce uncertainty for innovators about how the federal government is approaching the regulation of artificial intelligence technologies,” Kratsios said. “By providing this regulatory clarity, our intent is to remove impediments to private-sector AI innovation and growth. Removing obstacles to the development of AI means delivering the promise of this technology for all Americans, from advancements in health care, transportation, communication — innovations we haven’t even thought of yet.”

The public will have 60 days to comment on the White House’s draft guidance. Following those 60 days, the White House will issue a final memorandum to federal agencies and instruct agencies to submit implementation plans.

Deputy U.S. Chief Technology Officer Lynne Parker said these agency implementation plans will cover a wide swath of policy issues and will help avoid a “one-size-fits-all” approach to regulating AI.

The White House policy would, for example, come into effect when the Department of Transportation sets rules for AI-enabled drones, or the Food and Drug Administration approves AI-powered medical devices. However, Parker clarified that the government’s use of AI remains “outside the purview” of this document.

“While there are ongoing policy discussions about the use of AI by the government, this action in particular though addresses the use of AI in the private sector,” Parker said. “It’s also important to note that these principles are intentionally high-level. Federal agencies will implement the guidance in accordance with their sector-specific needs. We purposefully want to avoid top-down, one-size-fits-all blanket regulation, as AI-powered technologies reach across vastly different industries.”

Parker outlined OSTP’s 10 AI principles:

  1. Public trust in AI — “The government’s regulatory and non-regulatory approaches to AI must promote reliable, robust and trustworthy AI applications.”
  2. Public participation — Agencies should provide “ample opportunities” for the public to participate in all stages of the rulemaking process.
  3. Scientific integrity and information quality — “Agencies should develop technical information about AI through an open and objective pursuit of verifiable evidence that both inform policy decisions and foster public trust in AI.”
  4. Risk assessment and management — “A risk-based approach should be used to determine which risks are acceptable, and which risks present the possibility of unacceptable harm or harm that has expected costs greater than expected benefits.”
  5. Benefits and costs — Agencies should “carefully consider the full societal benefits, and distributional effects” before considering regulations.
  6. Flexibility — Regulations should “adapt to rapid changes and updates to AI applications.”
  7. Fairness and non-discrimination — Agencies should consider issues of fairness and non-discrimination “with respect to outcomes and decisions produced by the AI application at issue.”
  8. Disclosure and transparency — “Transparency and disclosure can increase public trust and confidence in AI applications.”
  9. Safety and security — “Agencies should pay particular attention to the controls in place to ensure the confidentiality, integrity and availability of information processed, stored and transmitted by AI systems.”
  10. Interagency coordination — “Agencies should coordinate with each other to share experiences and ensure consistency and predictability of AI-related policies that advance American innovation and growth in AI.”

Parker said agencies should also consider non-regulatory oversight of AI, including pilot programs, and reducing the barrier to AI development by releasing more federal data sets publicly.

OSTP’s AI principles meet some of the goals of the American AI Initiative that Trump created through an executive order last February. That executive order mandated that the administration develop a national action plan to “protect the advantage” of the U.S. in AI research and development.

Other agencies have already made progress toward the administration’s AI goals. The National Institute of Standards and Technology, in developing technical standards for AI use, has proposed that agencies set ground rules for AI ethics without “stifling innovation.”

Administration officials said that state or local governments banning the use of facial-recognition AI tools, for example, impose too heavy a regulatory burden.

“When particular states or localities make decisions like banning facial recognition across the board, you end up in tricky situations where civil servants may be breaking the law when they try to unlock their government-issued phone,” an administration official said. “We want to avoid situations where we’re outright banning new technologies and rather creating a robust regulatory framework, which allows these technologies to thrive in our country.”

The Defense Innovation Board and the National Security Commission on Artificial Intelligence have also released their own recommendations for ethical AI use. The former panel has sent its non-binding proposals to Defense Secretary Mark Esper for consideration, while the latter group will submit its final recommendations to Congress later this year.

NSCAI Chairman Eric Schmidt, the former CEO of Google and its parent company Alphabet, and Vice Chairman Robert Work, a former deputy defense secretary, told reporters in November that the U.S. role as the top authority on AI research could be in jeopardy, with rivals like China gaining momentum in certain key areas.

Meanwhile, Kratsios said the Trump administration encourages the European Commission, which will release its own AI regulatory document in the coming months, to use this OSTP policy as a framework.

Separately, an administration official said the White House’s AI principles will serve as an example for other Western countries.

“The West needs to lead the world in next-generation technological development because if we do not, some of the other players around the world will be attempting to insert their values into these next-generation technologies, and it is more imperative than ever before for the United States and the West to make the next great technological breakthroughs,” the official said.

Copyright © 2024 Federal News Network. All rights reserved.