AI & Data Exchange 2024: Sen. Mark Warner on creating AI guardrails

Virginia senator sees AI’s value but expresses concern about potential harm to federal programs, elections and financial markets without the right guardrails.

For members of Congress, no less than for the users and sellers of technology, artificial intelligence presents a challenging work in progress. That’s how Virginia Sen. Mark Warner described AI — a technology still changing and one that Congress as a body struggles to understand.

Speaking at Federal News Network’s AI and Data Exchange, Warner said that even with his own technology background and many hours spent studying the potential ramifications of AI, he and his fellow members of Congress aren’t yet sure what regulations might be needed.

As for the White House’s October 2023 executive order on AI, Warner said: “The president’s executive order was good and broad. But it’s an executive order. EOs are great, but they don’t go as far as legislation.”

4 potential areas for AI legislation

That’s why he has sponsored a bill with Sen. Jerry Moran (R-Kansas) that he called “kind of” a backup to the EO. The EO requires the National Institute of Standards and Technology to establish specific AI standards and a framework for federal use of AI. The Warner-Moran bill would codify the NIST framework.

“When the NIST standards come out, they ought to be the rules at least all government operates against,” Warner said. “You drive government as first actor — not the whole society — but start with government.”

A second area for potential legislation, he said, would focus on the question, “Where can AI tools immediately have a screw-up tomorrow?”

He named two areas of national concern: “Messing in our elections or potentially manipulating our public markets.” A bill he co-sponsored with Sen. John Kennedy (R-Louisiana) would increase penalties for already-illegal market activities when those activities are abetted through the use of AI.

Another percolating legislative idea would mandate watermarking of AI-generated content so that the digital trace would be impossible to remove.

“And then, questions around national security are areas where I think we might see some action,” Warner said.

“What I hope we don’t have happen is some tragic manipulation of the market or some gross interference in one of the primary elections,” with Congress subsequently rushing to do something, he added.

Seeking broad collaboration across agencies and with Hill on AI

When it comes to congressional action on federal agency use of AI, Warner advocates a collaborative approach.

“This is where we really do need the input, ideas and suggestions of our federal workforce,” he said. “The federal workers should not say, ‘Well, we’re just going to be worried about how AI tools are going to affect us.’ We need your ideas. We need your suggestions.”

Warner expressed particular concern about the potential for bias if agencies train algorithms with the wrong data. He noted how federal legislation over the years has curtailed discrimination in housing and access to capital.

“If you haven’t trained on the right set of humans or you’re not inputting the right set of data, then all of these inherent biases could be exponentially worse,” Warner pointed out. “How you train the model will have a dramatic effect on the output. And I hope our federal workers who’ve had experience, particularly on this front-line issue around bias, will keep us policymakers on our toes to make sure we don’t lose sight of that.”

Warner said he wants agencies to exercise as much vigilance over synthetic data generated for training purposes as they do over program-generated data.

He acknowledged that for Congress to deal with AI, whether in the context of agencies’ internal use or its potential to affect elections and financial markets, members themselves need to understand the technology and its implications from the get-go.

“AI is an area where most members, myself included, need to be a little bit humble and acknowledge that we don’t know everything on this subject matter,” Warner said.

Short of creating a single agency as the locus for federal AI, he said, members with oversight responsibilities, himself included, must have an idea of how the specific agencies they oversee deal with AI. That by itself is a moving target. As an illustration, he pointed to the 19 agencies under the purview of the Senate Intelligence Committee, which he chairs.

Just a year ago — “way, way long ago in a galaxy far, far away,” he quipped — Warner’s team sat down with the Intelligence Community leadership and generative AI pioneer Sam Altman.

“It felt like the Intel Community said they were going to build one large language model that would include all our pixels from overhead, most of our intercepts from the National Security Agency, the thumb drives that our spies obtained and a host of other things,” he recalled.

The premise that such an approach might succeed, he said, “has fundamentally changed in the last, say, seven or eight months.” Now the IC is asking whether each unit should have its own LLM. The point is that no single best practice has emerged, Warner said.

“It gets back to this notion of how you’re using AI, agency by agency or department by department, which you oversee,” he said.

Making sure protections keep pace with AI innovations

Warner also cautioned against the sort of infatuation with AI that once characterized the government’s thinking about all things Silicon Valley, especially in light of negative developments in social media and Congress’s tendency to go in circles about whether or how to respond.

He recalled the 2009 to 2010 timeframe by way of comparison. Between “normal ‘let’s trust business’ Republicans” and Democratic “techno optimists” inhabiting the Obama administration, Warner said the whole country bought the notion: “OK, Silicon Valley, you guys are really smart, you go break things, and we’ve got to innovate as fast as we can. We’ll come in after the fact and put up the guardrails.”

But that didn’t work, Warner said, pointing to the recent Senate Judiciary Committee hearing on protecting children on social media. “Well, they innovated, they broke things. We saw at a Judiciary Committee hearing a small sampling of the number of families who have literally had their kids die.”

For Congress, Warner said, the challenge lies in finding what he called the sweet spot between necessary AI guardrails and stifling legitimately needed innovation.

“The reason why I’m more afraid of not getting some guardrails right for AI, even more than social media, is that AI can accomplish things at a speed and scale unlike anything we’ve seen in the past,” he said.

Although Congress may not be able to craft any grand-design AI legislation, Warner said he hopes some blocking and tackling can occur on the most potent threats, like market manipulation and deepfakes in politics.

He’s working with tech companies now and asking, “Can’t we at least get a set of voluntary guidelines for AI misuse in elections?”
