The Select Committee on the Modernization of Congress has been at work for several years now. It’s found that nearly every function in the Legislative Branch is due for updating. Recently, the committee held a hearing on whether artificial intelligence technology could help and if so, where? Our next guest testified at that hearing. He is Joe Mariani, research manager for Deloitte’s Center for Government Insights, and he talked to the Federal Drive with Tom Temin.
Joe Mariani: We’ve been looking at the impact that artificial intelligence has in government for about five years now, across the whole breadth of government, from executive to legislative. And from that, I think we found two ways that AI could help in the legislative process. The first is what we’re calling AI’s microscope: the idea of using AI to examine the impact of existing legislation. And the other is using AI like a flight simulator, where you create a model and are able to test things that you may want to do in the future, and see what types of future legislation may actually have a positive impact and what types may not.
Tom Temin: Well, getting to the idea of the impact legislation might have, they have the Congressional Budget Office, which scores legislation on what it might cost and by whom the costs might be borne, based on the econometric models they have, and they always make the judgment assuming nothing changes with policy or law and so on. Could AI enhance that? Is that what you’re driving at?
Joe Mariani: Yeah, exactly. I think it’s taking that idea the next two or three steps. So imagine the limitations of the CBO analysis, as great as it is. On one hand, it’s just looking at one type of outcome of legislation, right? Exactly like you said, it’s just looking at the cost and budget implications. With AI models, you could look at other implications as well. You can look at whether these interventions will achieve the intended effects, whether there will be second- and third-order consequences, and so on. And then, exactly like you said, many of those budget models are built on the assumption that the future will replicate the past. What some of these AI models can do, especially things like agent-based models, is actually do away with that assumption and see how individual agents, whether individuals or companies, might actually act based on what we know from different types of inputs. So AI allows you to test more, and do it in a more varied way, than I think we do with other analysis today.
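To make the agent-based idea concrete, here is a minimal sketch of what such a model can look like. Everything in it, the investment-threshold rule, the tax-credit rates, the agent counts, is an invented illustration, not a real policy model: each agent gets its own behavior, so aggregate outcomes emerge from individual decisions rather than from an assumption that the future replicates the past.

```python
import random

random.seed(42)  # fixed seed so the illustrative run is repeatable

class Company:
    """A hypothetical firm that invests only if the incentive is big enough."""
    def __init__(self):
        # Each agent draws its own threshold, so the population is
        # heterogeneous rather than one averaged historical trend.
        self.threshold = random.uniform(0.0, 0.3)

    def invests(self, credit_rate):
        return credit_rate >= self.threshold

def simulate(credit_rate, n_agents=10_000):
    """Return the share of simulated firms that invest at a given credit rate."""
    agents = [Company() for _ in range(n_agents)]
    investing = sum(a.invests(credit_rate) for a in agents)
    return investing / n_agents

# "Flight simulator" use: test several candidate bills before enacting any.
for rate in (0.05, 0.15, 0.25):
    print(f"credit rate {rate:.2f} -> {simulate(rate):.1%} of firms invest")
```

Real legislative models would of course ground the behavioral rules in data, but even a toy like this shows the shape of the approach: vary the intervention, rerun the population, compare outcomes.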
Tom Temin: And what would a model like that look like? It sounds like an economist’s model, which takes various inputs and tries to impute future behavior from them with the understanding, for example, that if you tax something, you get less of it? And if you incentivize something, you get more of it? Is that all quantifiable? And is the data out there that could actually be honestly imported by a congressional panel or office such that they could really get an objective and not a political evaluation?
Joe Mariani: Exactly. Yeah, absolutely. And I think there’s the models, the data, and then, exactly like you said, the implication from that political angle. The good thing is, on the model side, there are lots of different models. There are the agent-based models we described, and there are systems models, and those can look very much like the economic ones. So for example, Ireland had some researchers create a systems-based model to look at certain parts of their economy, so they could model whether different interventions were likely to grow high-tech sectors in certain parts of the country. Then for the data, I think government’s actually in a good spot here, because so much of government data is already open and discoverable. Whether that’s things like the census, where we can drill down and get details about the population, or other types of data, having that open data available can allow Congress to use it, and then also have the knock-on benefit of creating clean datasets for others to use. And then finally, in terms of the outcomes, I think AI has a great advantage there, because it actually has limitations. AI is not going to be able to tell you what’s right, what’s wrong, what’s the best solution, what’s desirable and what’s not desirable. It’s just going to tell you the answer to the question you ask it. And what that can do, especially in a lot of these models, is uncover the assumptions that we all bring to many of these especially emotionally charged topics. And there’s some research that shows just experimenting with these models can help drive consensus even on those emotionally charged issues. So just playing around with the models itself can help improve the legislative process.
Tom Temin: We’re speaking with Joe Mariani, research manager for Deloitte’s Center for Government Insights. And really, the question of bias has to come up, because it comes up across the whole artificial intelligence field: what biases are built into algorithms? Biases can also come from the data that is used. I don’t mean racial bias, which is the most commonly cited one, but just biases in favor of one outcome or another. So it seems like, alongside all of these tools and databases, there would need to be some sort of broker mechanism to make sure that we all agree that what’s going in here will be accurate for the questions we have.
Joe Mariani: Yeah, exactly. And I think you can imagine a governance process that stretches throughout the lifecycle of these models as used in a legislative context. In fact, maybe even before the models, because exactly as you say, even before the creation of the model, one of the most important steps in ensuring accuracy and equity of the models is getting the right data: having the right amount of data, having clean data, having data that’s fit for purpose. So exactly like you say, if you gather a data set that’s representative in one case, whether that’s markers like race or gender or, like you said, different policy outcomes, and then you use it in a different context to answer a different question, it can begin to introduce some of those biases unintentionally. So you need governance processes in data selection, tagging data sets for the context of use, and similar but different governance processes through the creation and use of the models, especially ensuring transparency around the weights and the assumptions built into some of these models. And then even in use, one of the things we talked about with the members of the committee was ensuring that members of Congress have the data literacy to understand what they’re actually seeing when these models give them outputs, so they’re viewing it in context, understanding what the outputs of the model are, and understanding even what its limitations are. Because AI is not an infallible oracle; it’s much more of a decision aid, a structured analytical tool. And then, most importantly, that members can communicate how those AI tools are being used to their constituents, so that the general public can have confidence in what Congress is doing and how these tools are being used.
Tom Temin: I guess if we can teach members of Congress about Facebook, we can teach them about artificial intelligence. But you mentioned Ireland earlier. Are there examples, such as Ireland or any other places you might be able to name, states even, that have successfully used these tools in crafting legislation and modeling it?
Joe Mariani: Yeah, exactly. I think we’re seeing the first tentative steps of legislatures and parliaments around the world into the domain of applying AI. South Africa, for example, is using an AI-enabled personal assistant to help members understand what legislation is all about, and even things like where their conference rooms are and when they need to be there. You’re seeing the Netherlands use natural language processing to automatically transcribe debate and tag it with the topics it fits best, and Brazil is actually taking the next step in incorporating that same technology into its legislative workflows: if a member is entering a bill into the system, it will automatically suggest some of the relevant pieces of debate or other pieces of legislation. So we’re seeing those tentative steps. And I think the important thing for the U.S. Congress is that those are starting to uncover some of the unique challenges of applying AI to a legislative context. If you combine that with the experience you’ve seen in the Executive Branch and other industries, you get some powerful lessons that can help guide how to implement AI in Congress.
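The tagging-and-suggestion workflow described above can be sketched in miniature. This is a toy illustration, not how the Dutch or Brazilian systems actually work: the debate snippets and topic labels are invented, and real systems use far richer language models, but it shows the basic idea of matching new bill text against labeled material by word overlap.

```python
import math
from collections import Counter

# Hypothetical labeled debate excerpts; real systems would have thousands.
DEBATES = {
    "agriculture": "farm subsidies crop insurance rural farming support",
    "technology": "broadband internet access data privacy technology",
}

def vectorize(text):
    """Turn text into a simple bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_topic(bill_text):
    """Return the debate topic whose text is most similar to the bill."""
    bill_vec = vectorize(bill_text)
    return max(DEBATES, key=lambda t: cosine(bill_vec, vectorize(DEBATES[t])))

print(suggest_topic("a bill to expand rural broadband internet access"))
```

A production workflow would swap the bag-of-words step for a trained language model, but the surrounding plumbing, tag incoming text, rank related material, surface suggestions to the member, has the same shape.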
Tom Temin: Yeah, because I’ve had members of the committee itself on the show, and they are honestly committed to this type of technological development. What do you think it takes to incorporate these tools into an organization, an institution that’s pretty hidebound, and where people have expectations that go back centuries in some cases?
Joe Mariani: Yeah, exactly. And I think from the five years we’ve been looking, we’ve found a number of common traits of success in AI adoption, especially adoption at scale, across all manner of organizations, government and industry. Those are things like having a clear and common strategy, and being able to find the right talent, whether in your ecosystem or by hiring the right people, and so on. And we talked about the governance structures already. But I think, to your point, some of the unique challenges Congress may face are the ones around process. One of the things we found is that if you’re able to change workflows to take advantage of AI, you’re about 36% more likely to succeed with AI at scale. And for Congress, that can often be a challenge, right, where you’re talking about an organization whose rules may be governed by legislation or by rules of the chamber that are difficult to change. So paying attention to those unique challenges specific to Congress may be one of the great places to start, and then layering on the more traditional factors of success that we see across every industry.