How exactly should the government regulate artificial intelligence anyway?

An old saying goes like this: “If it moves, tax it. If it moves too fast, regulate it. If it stops moving, subsidize it.” Well, artificial intelligence is in that fast-moving stage, but no one seems to quite have any sense of how or even why to regulate it. Federal Drive Host Tom Temin’s guest has a few clues: Dr. Charles Clancy, the Senior Vice President and General Manager of MITRE Labs.

Interview Transcript: 

Tom Temin You and a team here have put together some really strong recommendations on cogent ways to regulate artificial intelligence. But let’s back up for a moment to the whys. It seems a lot of people have a vague notion or a fear of this thing, and it’s not any single thing. It’s a lot of things. So tell us the background here.

Charles Clancy Sure. I’d say over the last six months, we’ve really seen AI spring from something that was very much researcher focused, with a very small community around it, to something that’s now much more ubiquitous in the public consciousness. And it really has to do with the growth of large language models and broadly accessible tools like ChatGPT, that have, I think, shifted our understanding of the relationship between humans and AI.

Tom Temin In some sense, it’s parallel to how Bitcoin made people realize there’s something called blockchain and cryptocurrency out there. ChatGPT made people realize the reality of AI.

Charles Clancy Yeah, and certainly AI has been incrementally improving almost every imaginable industry in small ways. But the visible leap forward that we saw with ChatGPT is one that’s, I think, causing people to take it much more seriously.

Tom Temin Right, because earlier AI applications did augmentation of human activity. You call these newer ones AI with agency, in that they can almost act on their own.

Charles Clancy Exactly. I think there are kind of three tiers here. One is the AI that’s already around us, involved in almost everything we do digitally, where AI is just a small component in a much more complex system. Think about an autonomous vehicle, right? It’s got cameras that it’s using to detect the road, other cars, traffic signs and lights. All of that is AI. That’s pretty well understood. We know how to test it. We know how to train it. We know how to assure it. But this sort of AI with agency, AI that can execute tasks autonomously and just sort of lives out on the internet, is the new frontier.

Tom Temin And as people worry about half-truths or non-truths or fake news, there’s a million words for it. We have seen that these generative programs can create something that looks truthful. But to any expert, or someone who delves deeply, or people who have created their own biographies using it and seen the falsehoods in there, it’s not the machine deliberately lying. But it could be made to do that, too. And that’s one of the concerns.

Charles Clancy Of course. Yeah. The role of AI in accelerating mis- and disinformation is a major concern that I think many people share. And it’s not so much that we can now create an image that we never could create before. Certainly there were digital artists who could create completely convincing images. It’s just that now an untrained amateur can create something that is undetectably different from the real thing. And also the people who do this for a living, the propagandists, the mis- and disinformation folks, now have tools that allow them to do ten to 100 times more than they could before.

Tom Temin Yeah, it’s like people who can get ten people together on Twitter, and you’d think the whole world is saying something when it’s ten people on Twitter.

Charles Clancy Exactly. The same holds true for cyber as well, right? So large language models have the potential to take amateur hackers and turn them into world-class ones, and take world-class hackers and turn them into people who can hack hundreds of targets at once. That’s the other companion concern.

Tom Temin And before we get into some of the details of your regulatory recommendations, and there’s a good list, I think, of about seven of them, what’s the methodology by which your group arrived at this framework?

Charles Clancy Well, first, I think we took the approach that we already have a lot of regulatory agencies that have significant responsibility for either regulated industries or critical infrastructure, and they’re the domain experts already. And AI is really just kind of the next phase. Many of these industries have already seen the migration from hardware to software, and from software to AI, and so they’re the most equipped to understand the context and the risks of their industry. And that’s where the regulation should be happening, rather than some new agency that would be out trying to regulate AI in all its ubiquity.

Tom Temin We’re speaking with Dr. Charles Clancy. He’s senior vice president and general manager of MITRE Labs. And let’s get into some of the framework items. You have a list of possible regulatory approaches. What are some of the highlights of those?

Charles Clancy Yeah. So first, I think we want to empower the existing regulatory agencies to really incorporate AI as part of the existing framework. So good examples would be aviation. Aviation has gone from a very hardware-centric model to things like the Boeing 787, which increasingly relied on software, and updatable software. That was a big change in how we had to think about certifying and regulating aviation. And you can think about AI as the next generation of that, in terms of a whole new class of software with new capabilities. I think the FDA is another great example as you think about how we regulate medical devices. We’ve already seen a shift from hardware to software-based medical device regulation. Right? AI is that next frontier. And so I think the first is really about helping those agencies understand in a systematic way what the risks are and be able to apply that effectively to their industry.

Tom Temin Yeah, those agencies have their domains, so they should develop expertise in AI in that domain. For example, the FDA has a program now to test whether AI can be used to speed up drug approval, that kind of thing.

Charles Clancy Exactly. Yeah. And then there are roles for agencies like NIST, which are responsible for technology standards in general, to create the frameworks and the standards that those other agencies could apply in the regulatory process.

Tom Temin And establishing liability for AI-caused harms. That’s where you’re going to really get into the political buzzsaw and cross-currents of Washington. Tell us more about that one.

Charles Clancy Yeah, from a liability perspective, we’re seeing this already in the software domain, right, where the National Cybersecurity Strategy that came out of the White House earlier this year suggested that we start holding software vendors more accountable for vulnerabilities in their code, not just the companies that deploy and operate that software. And I think as we go into this AI space, it’s still an open question as to who’s accountable. Right? Is it the people who design the model, train the model, deploy the model or use the model? Where is the liability in that chain? And we really just don’t have the answers to that from a legal perspective.

Tom Temin Because someone could design a model, and it’s a perfectly good model that doesn’t know A from B, and then someone could feed it deliberately biased training data to make it do something. It would seem like, wow, okay, it works great, to anyone not knowing the bias that was fed into it. And so it’s not really the model creator, but rather the trainer, that is the issue.

Charles Clancy Exactly. So the industry today is adopting what’s called model cards. When you create an AI model, you have to say how you trained it, how you tested it, and what assumptions you had about its use. You can release the model into the wild, but it’s sort of a buyer-beware disclaimer, so that if someone uses the model in a way it wasn’t trained for, wasn’t intended for or wasn’t tested for, they know that the performance may be degraded or it may not work as expected. So the first step is kind of this nutrition labeling for AI, so people at least are informed in their use of the models.
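
To make the “nutrition label” idea concrete, here is a minimal sketch of the kinds of disclosures a model card carries, written as a Python dictionary. Every field name and value below is a hypothetical illustration, loosely inspired by common model-card conventions rather than drawn from any specific standard or real model.

# A hypothetical model card expressed as a Python dict. All fields and values
# are illustrative examples, not taken from any real model or standard.
model_card = {
    "model_name": "example-news-summarizer-v1",  # hypothetical model
    "training_data": "Publicly available English-language news corpora",
    "evaluation": "Tested on a held-out set of news articles only",
    "intended_use": "Summarizing English-language news articles",
    "out_of_scope_use": "medical, legal, or financial advice; non-English text",
    "known_limitations": "May produce fluent but factually incorrect summaries",
}

# Buyer beware: a downstream user can consult the card before repurposing the
# model. This naive keyword check flags uses the card lists as out of scope.
def fits_use_case(card: dict, use_case: str) -> bool:
    return use_case.lower() not in card["out_of_scope_use"].lower()

print(fits_use_case(model_card, "legal"))  # False: the card flags legal advice
print(fits_use_case(model_card, "news"))   # True: within the stated intended use

The point is exactly what Clancy describes: the card travels with the model, so a mismatch between the stated assumptions and the actual use is visible before deployment, not after something goes wrong.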

Tom Temin Yeah, it seems like the aviation analogy is an apt one, because the earlier planes were just cables and pulleys, and you could twist a rudder back here and it would make the thing move in front. And you knew your system was good because it was just cables and pulleys. Whereas now they’re just these software-stuffed black boxes, and people getting on presume they’ll do what they say they will do. But as we found out in some recent incidents and regulatory and liability disasters in aviation, it doesn’t always work out that way.

Charles Clancy 100%. And I think you can imagine all of the innovation that we have today in autonomous vehicles around being able to do machine perception, orchestration and control. Imagine applying that in an aviation context, right? You’d have even greater concerns from a safety perspective.

Tom Temin A very old form of certification for the consumer, who can’t test meat and produce themselves, is the USDA stamp. So do you envision that type of output, when this is all done and the agencies have the expertise they need to regulate, that there would be some kind of equivalent of the Pennsylvania Agriculture Department or USDA stamp on AI-driven products?

Charles Clancy So I think there are roles for third-party auditing and certification of AI models. I think that can be part of the ecosystem. But I guess I really want to tie it back to the agencies that understand the context of the areas they’re regulating, in helping bring that context forward. I would be very concerned about a third-party AI auditor who’s responsible for auditing AI in every imaginable application, because they just lack the necessary domain expertise to make the needed risk-informed decisions. So like I said, I think that role is important, but it should be sort of managed through the existing regulatory functions we have.

Tom Temin And we could go on for hours. This report is pretty detailed, and it gives the pros and cons of many possible approaches. Who’s reading it? Where are you promulgating it, and any reaction to it?

Charles Clancy We have been circulating it certainly among the US government agencies that MITRE supports through our federally funded research and development centers, working at the White House and on the Hill, and getting a lot of positive feedback. We’re trying to do something constructive that is implementable within our current government regulatory ecosystem, and so far there’s a lot of interest. So we’ll see how things go on the Hill with the current legislative session, but I expect some of these things will find their way into legislation.
