Two of the leading voices in Congress on artificial intelligence will introduce a resolution recognizing the steps the federal government has taken so far in AI research – and acknowledging how much more progress is needed.
Rep. Will Hurd (R-Texas) said Tuesday that he and Rep. Robin Kelly (D-Ill.) will introduce a resolution next month that would raise the profile of AI’s implications for the workforce, national security, research and ethics.
The resolution will include recommendations specific to those four topics and is meant to serve as a jumping-off point for future legislation.
This approach borrows in some respects from the bipartisan Cyberspace Solarium Commission, whose leading recommendations have taken hold in the annual defense spending bill. Hurd said this approach would also make more headway than trying to move individual bills out of committee and onto the House floor.
“A resolution can be a broader stroke and say, ‘This is the direction to go.’ This is going to recognize those previous bipartisan accomplishments not only in Congress, but in the executive branch under the previous and current administration,” Hurd said in a virtual conference hosted by the Bipartisan Policy Center.
With the future of cybersecurity coming down to “good AI versus bad AI,” Hurd said agencies should work with the private sector on a kind of “cyber National Guard,” in which private sector workers spend about six weeks a year on detail in the federal government.
The National Security Commission on AI, a federal advisory committee created by Congress, reached a similar conclusion last month when it recommended standing up a Digital Service Academy to develop new AI talent and a reserve corps modeled after the National Guard to tap private sector expertise.
While the global race to develop the most cutting-edge AI has been compared to the space race of the 1960s, Eric Schmidt, the commission’s chairman and former head of Google, expressed concerns that federal R&D spending – about 0.7% of gross domestic product – has dropped below “pre-Sputnik” levels. At the height of the space race, he said, U.S. R&D spending peaked at 2% of GDP.
To remain a leader in AI research, Schmidt urged the U.S. to “place some big bets” and double the amount of funding for research within the next five years.
“There are things that the federal government is the only potential funder of,” he said. “We need more money, because money does drive the signals around hiring, building organizations, making experiments and so forth.”
Without increased investment, Schmidt said China is on track to surpass the U.S. in many aspects of AI capabilities.
“They’re going to end up with a bigger economy, more R&D investments, better quality of research, wider applications of technology and a stronger computing infrastructure. How is that OK? It’s clearly not OK. We’ve got to act, and the good news is we have time,” he said.
In national security applications of AI, Schmidt emphasized the need for a “human in the loop” of decision-making. The Defense Innovation Board he also chairs crafted the AI ethics principles that the Pentagon adopted earlier this year.
“The time is crunched, everything is happening incredibly quickly, and the AI system says ‘press the button.’ Do you really think the human is going to have the kind of judgment and quality of thinking and insight to say, ‘Let me debate the accuracy of the AI system while this thing is coming at me?’” he said.
Hurd said the U.S. has an opportunity to “take advantage of this technology before it takes advantage of us,” and shape the ethical use of AI on a global scale.
“An authoritarian country is always going to have more data than us; they don’t care about civil liberties,” he said. “So in order to beat them at their game, we’re going to need more data or we’re going to need algorithms that work on less data.”
As for domestic uses of AI, the National Institute of Standards and Technology has led the charge on setting standards to prevent bias in AI algorithms. Beyond those standards, Hurd said agencies and industry should recruit a demographically diverse workforce to keep unintentional bias from taking root in AI algorithms.
“We have laws on the books. If you have a teller at a bank [that] can’t discriminate against issuing a home loan, the algorithm can’t either — and whether it’s the person using the algorithm in an improper way, or it’s the algorithm itself, it’s still a violation of the law,” he said.
Jory Heckman is a reporter at Federal News Network covering U.S. Postal Service, IRS, big data and technology issues.