Is the AI leash on federal agencies too long?

That new White House guidance on agency use of artificial intelligence embodies guardrails, but also a few opt-out scenarios.

The new White House guidance on agency use of artificial intelligence embodies guardrails, but also a few opt-out scenarios that give agencies plenty of discretion. One analyst thinks it gives them too much discretion. For more on the Federal Drive with Tom Temin, Federal News Network’s Eric White spoke to Amos Toh, senior counsel at the Brennan Center for Justice.

Interview Transcript: 

Amos Toh I think that the OMB memo does provide one of the most robust frameworks for regulating how federal agencies should use AI. But the problem is that this robust framework has major loopholes that government agencies may invoke to avoid the very strong and sensible safeguards that OMB has outlined. So let’s start with the good: For AI that impacts rights and safety, the OMB memo requires agencies to conduct an impact assessment before they deploy the system. During the impact assessment phase, they have to consider whether the expected benefits of the system will actually outweigh the costs, including its risks to civil liberties and rights. They also have to assess whether an AI system is better suited to accomplishing the tasks that the agency wishes to accomplish compared to non-AI strategies. So all of these criteria and mechanisms are really in place to compel agencies to think very carefully about deploying AI and whether it’s suitable for that function and that task. There’s also ongoing risk monitoring that the federal agency must do once they’ve deployed the system. There are requirements for human training and oversight, transparency, and providing notice to people affected by the system, as well as requiring agencies to provide options to opt out of an AI system and providing alternatives to people directly affected by these systems. So there’s a lot of good in there. But there are two main ways that agencies can exempt themselves from these safeguards. One is obtaining a waiver when they determine that complying with these safeguards would somehow impact safety or would be an impediment to critical agency operations. The other is that they can define their AI systems in such a way that they are exempt from the guidance.

Eric White So those are the two critical loopholes, as you see it. Is your beef with them that they give agency leaders almost a little too much leeway? Because I’ve read that same part of the rules, and I got a bit of a sense of the implied powers clause from the Constitution there. It kind of describes how leaders can make the decision if they feel the need to.

Amos Toh Right. So the OMB memo grants chief artificial intelligence officers at agencies, who are essentially the agency-appointed leads on overseeing AI systems for that particular agency, the discretion to apply waivers to these systems, and also to define whether an AI system is a principal basis for a given agency decision or action. If it is, then it would be covered by the OMB memo and the minimum practices would apply. If it does not qualify as a principal basis according to the CAIO and the agency, then it would be exempt from those practices. So those are really the two ways in which agencies can exempt themselves from the minimum practices. And I think the criteria are highly subjective and potentially also overbroad. Particularly in the law enforcement context, we have great concern that agencies will seek to waive minimum practices in the name of critical law enforcement imperatives, when we’ve seen time and time again that when agencies don’t follow safeguards, it leads to significant impacts on civil rights and civil liberties.

Eric White So this is all new territory for everybody. I get the sense that you’re not saying the waivers are unnecessary; there does need to be some sort of escape clause for agencies if they feel the need. But what you’re saying is that those waivers should be under more scrutiny.

Amos Toh Well, I do think that there are certain minimum practices in the OMB memo that should never be waived. I do think, for example, that the duty to conduct an impact assessment before you deploy an AI system is something that should apply regardless of whether there are emergency circumstances or not. AI can be a very powerful tool, and rolling it out without even an assessment of its potential impact on people, rights and safety can really serve to mask violations of people’s rights and may actually be damaging to safety at scale. So you can imagine an agency that needs to roll something out quickly, because it’s a critical agency operation happening in a limited time frame. They could opt to do a shorter impact assessment and do a more deliberate one later on when there is less time pressure. But to waive that requirement entirely can lead to the opposite of what the agency intends; it may even be counterproductive to the operation the agency is trying to undertake. So I do think that there are certain requirements that shouldn’t be waived. For requirements that are arguably waivable, such as certain transparency and public disclosure requirements, I do think that needs to be confined to instances where aspects of the AI system may actually be sensitive law enforcement or national security information that can be genuinely classified as such. We’ve seen a glut of overclassification of information in the government, and so there’s not a lot of confidence that agencies will apply classification markings that actually adhere to both the letter and the spirit of classification directives.
And I think, to OMB’s credit, they’ve said in some of their guidelines that even if some aspects of an AI system may not be disclosed to the public, the agency should make best efforts to disclose the rest of the system that can and should be disclosed to the public.

Copyright © 2024 Federal News Network. All rights reserved.
