DoJ says its policy will encourage independent security and safety research, but will large AI companies follow suit in encouraging vulnerability disclosure?
The Justice Department is revising its policy aimed at curbing legal risks for third-party cybersecurity researchers so that it addresses vulnerability reporting for artificial intelligence systems.
Nicole Argentieri, principal deputy assistant attorney general in DoJ’s criminal division, said her division is “hard at work” updating the 2017 framework for vulnerability disclosure programs.
The updates will "address the reporting of vulnerabilities for AI systems" and "contemplate issues that might arise under intellectual property laws," Argentieri said during an event Wednesday hosted by the Center for Strategic and International Studies.
“As we work to update this document, the criminal division, along with other department components, will engage with external stakeholders, including researchers and companies, to solicit feedback so that they may share any concerns that they have, including about the potential applicability of criminal statutes to good faith AI red teaming efforts,” she said.
Argentieri noted that third-party security research has helped defend computer systems and networks from previously unknown cyber bugs. Federal agencies, including the Defense Department and the Department of Homeland Security, have established vulnerability disclosure programs to find cybersecurity flaws in their own systems and even in contractor systems.
But she noted that independent research on AI systems can go beyond pure security concerns.
“It can also help protect against discrimination, bias and other harmful outcomes,” she said. “As AI becomes more prevalent in our lives, it is critical that we do not let it undermine our shared national principles of fairness and equity.”
The DoJ effort is in line with the White House’s September 2023 voluntary AI commitments, which encourage companies to incentivize third-party vulnerability discovery.
While leading AI companies have pledged to follow the White House's commitments, AI researchers have raised concerns about whether those companies will follow through on protecting good-faith research. Earlier this year, more than 350 leading AI researchers called on companies to establish a safe harbor for AI evaluation, arguing that current policies "chill independent evaluation."
Argentieri added that DoJ also supports a push for the U.S. Copyright Office to clarify that its exemption allowing good-faith security research into AI systems also covers research into bias and "other harmful and unlawful outputs of such systems."
“Although we believe that good faith research can coexist with the statutes that we enforce in the criminal division, we know that we are just at the beginning of this AI revolution, and there is much that is still unsettled in the world of AI from some of the core technology to the technical details of AI systems and how research into them can most effectively be conducted,” Argentieri said.
DoJ’s plan to revise its vulnerability disclosure framework comes as the criminal division sets out a new “Strategic Approach to Countering Cybercrime.”
The goals include boosting efforts to disrupt ransomware gangs, botnets and other cybercriminal activity; strengthening DoJ's tools for combating cybercrime; and promoting more "capacity building," public education and information sharing about the misuse of emerging technologies.
“Criminals have always sought to exploit new technology, but while their tools may change, our mission does not,” Argentieri said. “The criminal division will continue to work closely with its law enforcement partners to aggressively pursue criminals, including cyber criminals, who exploit AI and other emerging technologies, and to hold them accountable for their misconduct.”