The White House is calling on the federal government and industry to keep the risks of artificial intelligence in check.
The Biden administration, in its latest effort to promote ethical and technical standards for increasingly common AI tools, is getting top AI developers to agree to certain safeguards for their products.
A senior administration official told reporters Thursday that the White House is also developing an executive order that “will ensure the federal government is doing everything in its power to advance safe, secure and trustworthy AI,” as well as manage its risks.
“We’re looking at actions across agencies and departments, given how cross-cutting AI is,” the administration official said.
The Biden administration, the official added, is also pursuing legislation “to help America lead the way in responsible innovation,” and is currently working with members of Congress on proposals.
“We know that legislation is going to be critical to establishing a legal and regulatory regime to make sure these technologies are safe,” the official said.
The White House is also getting a voluntary commitment from top AI developers to make sure their products meet certain ethical and technical standards.
The companies, according to a fact sheet released Friday, will commit to internal and external security testing of their AI systems before their release.
“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” the fact sheet states. “To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”
President Biden is hosting seven of those leading AI companies at the White House on Friday: Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.
The senior administration official said the White House is building external monitoring into some of the agreements with tech companies.
“We’re working on government actions here to make sure we manage the risks posed by this technology, and I wouldn’t suggest that this is anything like the end of the road. But this is a step in the right direction and a step getting us towards where we need to go with this technology,” the official said.
The companies have also committed to investing in cybersecurity and insider threat safeguards to protect AI models from tampering, and will allow third-party discovery and reporting of vulnerabilities in their AI systems.
“Some issues may persist even after an AI system is released and a robust reporting mechanism enables them to be found and fixed quickly,” the White House wrote in its fact sheet.
The companies also agree to share information across the industry and with governments, civil society and academia on managing AI risks.
To help build public trust in AI systems, the White House said companies will prioritize research on the societal risks of AI, including avoiding harmful bias and discrimination, as well as protecting civil liberties.
Companies will also publicly disclose the capabilities and limitations of their AI products, as well as areas where AI tools are appropriate and inappropriate for use.
With effective safeguards in place, the Biden administration expects AI tools to help drive scientific breakthroughs.
“From cancer prevention to mitigating climate change to so much in between, AI — if properly managed — can contribute enormously to the prosperity, equality and security of all,” the fact sheet states.
Later this summer, the Office of Management and Budget will release draft guidance on the use of AI systems within the federal government.
The OMB guidance will establish specific policies for federal agencies to follow when it comes to the development, procurement and use of AI systems — all while upholding the rights of the American public.
The White House Office of Science and Technology Policy last October released its Blueprint for an AI Bill of Rights, covering the design, development and deployment of artificial intelligence and other automated systems.
The blueprint outlines what more than a dozen agencies will do to ensure AI tools deployed in and out of government align with privacy rights and civil liberties.
Last January, the National Institute of Standards and Technology also released voluntary guidelines outlining what responsible use of artificial intelligence tools looks like across many U.S. industries.