Incorporating artificial intelligence has been a key goal for agencies across the executive branch for quite some time. But now, Congress is considering jumping on the bandwagon as well. Lawmakers on the House Select Committee on the Modernization of Congress are interested in exploring just what AI might be able to help them accomplish.
Joe Mariani, a research manager for the Deloitte Center for Government Insights, told the committee during a July 28 hearing about two ways AI might be able to help. The first is what he referred to as “AI as microscope.”
“That is using AI to assess the impact of existing legislation,” he said. “So machine learning or ML models can accurately find patterns in data without having to specify ahead of time what those patterns should be. So just as a microscope can look at a leaf, for example, and find structures and patterns invisible to the human eye, these machine learning models can look at programs and find patterns in their outcomes that may be invisible to humans just because of the size, scope or even age of the data. So for example, machine learning models have found that patterns in government R&D investment during World War Two have impacted the location of innovation hubs even today.”
So by examining the outcomes of previous policies, AI might be able to identify elements that had the best outcomes, or were most effective. That can help Congress make better decisions when considering – or deciding not to consider – legislation.
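The “AI as microscope” idea can be sketched in miniature. The example below is purely illustrative: the program records are invented, and a simple correlation screen stands in for the kind of machine learning model Mariani describes, which would discover far subtler patterns in much larger datasets.

```python
# Hypothetical sketch of "AI as microscope": scan past program data for
# patterns without specifying them in advance. A correlation screen
# stands in for a real ML model; all data here is invented.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each row: one past program, with features and an outcome score (made up).
programs = [
    {"funding": 10, "duration": 5, "outcome": 2.0},
    {"funding": 30, "duration": 2, "outcome": 5.9},
    {"funding": 20, "duration": 4, "outcome": 4.1},
    {"funding": 40, "duration": 1, "outcome": 8.2},
]

outcome = [p["outcome"] for p in programs]
# Rank each feature by the strength of its association with the outcome.
scores = {
    f: abs(pearson([p[f] for p in programs], outcome))
    for f in ("funding", "duration")
}
strongest = max(scores, key=scores.get)
```

In this toy dataset, funding level emerges as the feature most strongly tied to outcomes — the kind of pattern a human reviewer might miss in a dataset spanning decades of programs.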
The second way AI might be valuable to Congress is what he described as “AI as simulation.” This idea builds on the first; with data about the outcomes of previous policies, AI can project the outcomes of future policies as well, and enable Congress to model the outcomes of trying something new.
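A minimal sketch of the “AI as simulation” idea follows, under invented assumptions: fit a simple model to past policy outcomes, then run Monte Carlo draws to project the outcome of a proposed policy. Real legislative simulations would involve far richer models and real data.

```python
import random

# Hypothetical "AI as simulation" sketch: learn a relationship from past
# policy data, then simulate outcomes of a proposed policy level.
# The data and the linear model are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def simulate(a, b, level, noise=0.5, runs=1000, seed=42):
    """Monte Carlo projection: average outcome over noisy draws."""
    rng = random.Random(seed)
    draws = [a + b * level + rng.gauss(0, noise) for _ in range(runs)]
    return sum(draws) / runs

# Past programs: (funding in $M, jobs created in thousands) — made up.
funding = [10, 20, 30, 40, 50]
jobs = [2.1, 3.9, 6.2, 8.0, 9.8]

a, b = fit_line(funding, jobs)
projected = simulate(a, b, level=60)  # project a new $60M program
```

The projection comes with the uncertainty of the noise term built in, which is part of what makes simulation useful for debate: it surfaces the assumptions (here, the noise level and the linear form) that would otherwise stay unspoken.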
“Using AI in this way to simulate the complex systems that Congress deals with every day can actually improve the quality of debate, and do so in three key ways. First, it can articulate the often unspoken assumptions and values that we all bring to these issues. Second, it can uncover the drivers of particular problems. And third, it can help us understand which interventions will be the most effective and at what cost,” he said. “Ultimately, these simulations can help members agree on what they disagree on. And, in fact, there’s even evidence that just experimenting with these models alone can help drive consensus on emotionally charged issues.”
He did caution that implementing AI would bring challenges of its own. Much like federal agencies that adopt AI, Congress would require new skills, business processes and security requirements.
Committee Vice Chairman William Timmons (R-S.C.) suggested one problem facing Congress that AI might be able to help with: scheduling. With 23 standing committees, five select committees and 104 subcommittees, Timmons pointed out that that’s around 130 people with scheduling authority, making conflicts the rule rather than the exception. And that’s not even taking into account conference and caucus meetings, floor votes, constituent services and fundraising.
Mariani said AI should be able to help with that as well.
“The good news is what you’re describing is basically just an optimization problem. There’s a ton of data, and we need to find the optimal solution within these defined parameters. And the good news is that’s exactly the type of thing that AI is really good at,” he said.
When it comes to crunching an incomprehensible amount of data, that’s where AI prevails over humans. But humans are still required to make value judgments about AI’s solutions, highlighting the importance of human-AI teaming, Mariani said. For example, an AI might optimize Congress’ schedule perfectly, but humans would need to be there to inform it that no one will be willing to work on Christmas.
Mariani pointed out that this is actually a perfect example of his “AI as simulator” concept. By presenting AI with all of the variables, it can model any number of potential changes and project how well they would work. This could provide Congress with more than one alternative to the current scheduling system, and allow them to pick and choose between them.
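The scheduling problem Timmons raised can be framed exactly this way. The sketch below is hypothetical — the committees, members and slots are invented — but it shows the shape of the optimization: search the space of assignments, minimize member conflicts, and apply a human-supplied value judgment (no hearings on the holiday) as a hard constraint.

```python
from itertools import product

# Hypothetical committee-scheduling sketch: brute-force search over slot
# assignments, minimizing the number of member conflicts. Real data
# would come from the chamber's calendar systems.

hearings = {
    "Armed Services": {"Timmons", "Lee"},
    "Judiciary": {"Lee", "Ross"},
    "Modernization": {"Timmons", "Ross"},
}
slots = ["Mon 9am", "Mon 11am", "Tue 9am", "Dec 25"]

def conflicts(assignment):
    """Count pairs of hearings that share both a slot and a member."""
    names = list(assignment)
    return sum(
        1
        for i, h1 in enumerate(names)
        for h2 in names[i + 1:]
        if assignment[h1] == assignment[h2] and hearings[h1] & hearings[h2]
    )

def best_schedule():
    best, best_cost = None, None
    for combo in product(slots, repeat=len(hearings)):
        # Human-supplied value judgment: nothing scheduled on the holiday.
        if "Dec 25" in combo:
            continue
        assignment = dict(zip(hearings, combo))
        cost = conflicts(assignment)
        if best_cost is None or cost < best_cost:
            best, best_cost = assignment, cost
    return best

schedule = best_schedule()
```

At congressional scale, with roughly 130 scheduling authorities, brute force would give way to integer programming or constraint solvers, but the division of labor is the same: the machine searches, and humans set the constraints worth honoring.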
This is exactly the kind of scenario AI is used for every day in the executive branch, as well as the private sector, Mariani said. That means there are any number of proofs of concept Congress can learn from, not to mention contractors whose expertise they can leverage.
“I think … the challenge is everyone out there has experience adopting AI models and using them potentially at scale. How do you then cross that with the unique context of having those technological tools work in Congress?” he said. “And I think what we’re starting to see is, as we heard from some of the other examples, other legislatures, other parliaments starting to take those first tentative steps and using AI at small scale. South Africa has an AI-enabled personal assistant that gets at kind of what you’re talking about. Members can ask it questions, and it’ll automatically respond back about ‘here’s the content in a bill,’ or ‘here’s the time and conference room you need to get to in the next two minutes.’”
The challenge, he said, is crossing those two experiences: implementing AI at scale and implementing AI in a legislative context.
One other challenge Mariani addressed is the potential biases inherent in AI. Any such solution Congress implemented would have to be neutral not only toward both parties, but also toward the power dynamics between the majority and minority.
That starts even before the models and algorithms are built, he said. As in any AI implementation, it starts with ensuring clean, accurate, equitable data. Context is key here; a dataset may be assembled for one purpose, but applying it to an entirely different question can introduce bias. And those data controls then need to be extended into the model itself.
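One concrete pre-modeling data control might look like the sketch below — a hypothetical balance check, with invented records and an arbitrary threshold, that flags a dataset skewed toward one party before it ever reaches a model.

```python
from collections import Counter

# Hypothetical data control: check whether a training dataset is
# balanced across the two parties before using it. Records and the
# 40% threshold are invented for illustration; real controls would be
# set by policy, not hard-coded.

records = [
    {"sponsor_party": "D", "passed": True},
    {"sponsor_party": "D", "passed": False},
    {"sponsor_party": "R", "passed": True},
    {"sponsor_party": "D", "passed": True},
]

counts = Counter(r["sponsor_party"] for r in records)
total = sum(counts.values())
# Flag any party supplying less than 40% of the records.
flags = [party for party, c in counts.items() if c / total < 0.4]
```

Here the check would flag the dataset as underrepresenting one party — the kind of imbalance that, left uncaught, gets baked into whatever model is trained on it.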
“Focusing on transparency in those steps is probably the most important — if you can identify what are the model weights, what are the variables, what are the assumptions that we’re using, and then even into the members yourselves to make sure that when you’re using the outputs, that you have kind of the literacy of how those models work, so you can understand kind of their left and right lateral limits,” he said. “Because AI is a powerful tool, but it’s not an infallible oracle, really. It’s just more of a decision aid for yourself. And probably also unique to the legislative context, then also having enough knowledge to be able to communicate to constituents how those models are being used, so you can build their trust and confidence in how AI is being used as well.”
Daisy Thornton is Federal News Network’s digital managing editor. In addition to her editing responsibilities, she covers federal management, workforce and technology issues. She is also the commentary editor; email her your letters to the editor and pitches for contributed bylines.