The General Services Administration is launching an interagency community on artificial intelligence to help agencies and private industry work together on citizen services.
When heavy August rain flooded Louisiana neighborhoods, residents self-organized in online communities to reach out for help.
For federal service providers, the problem was taking all that information and using it to provide assistance.
“The difficulty was how do you take that data and make it A) digestible and B) actionable in a way, or even to be able to spot trends before they happen,” said Justin Herman, SocialGov community leader at the General Services Administration. “This isn’t just like an outreach thing, this isn’t just a chatbot thing, this is being able to look at where to provide lifesaving food, water, shelter into areas.”
That struggle was one of a growing list of instances highlighting, for GSA, the need to collect, translate and use massive amounts of data for citizen services. One of GSA’s answers is next week’s launch of an interagency AI for citizen services community.
“What we’re trying to develop is a ‘no wrong door’ approach to citizen service, whether you’re accessing through a website, a third party platform, you’re going through SMS, you’re going through a call center, going through email, accessing a survey — no matter how you’re accessing your public services, that data is used and managed in such a way that we can draw actionable conclusions out of it and use it,” Herman said.
Speaking on the first day of Fedstival, a Washington, D.C., event hosted by Government Executive and Nextgov, Herman said the hope for the interagency community is to give agencies opportunities to learn more and work better with other agencies that have expertise in AI, as well as with private sector companies “who are going to likely be developing the cognition as a service tools that agencies will probably be purchasing in order to cover these services.”
GSA already manages about 15 interagency communities, including ones for agile development and open data, and about 10,000 employees are actively involved in them.
“For 4 1/2 years we’ve been getting agencies and working with them to really meet that mission of citizen engagement,” Herman said. “It oftentimes has been a great frustration, the limitations we’ve had, because ultimately the citizen-to-government interaction too often is reliant upon whoever is on the other side of a keyboard at that given time.”
What that did, Herman said, was create a model completely reliant on individual product updates and recreate call centers of people who need to respond one-on-one.
“That might be nice unless you’re a person for instance who’s different than whoever is on the other end,” Herman said. “We’ve in a lot of ways failed to meet the needs of accessibility for persons with disabilities, bilingual services, all these things. … There’s such a need that we see, we’ve been working so hard to address and we have come up short thus far.”
Herman told Federal News Radio the newest interagency community grew out of a recent workshop on AI for industry and agencies.
“Our workshop we did was one of the most frustrating I’ve ever put together,” Herman said. “It became clear there was such a divide. We have to get people in a room, talking in new ways, together. Right now everybody is coming at it from such a far, distant angle.”
The launch of this newest interagency community reflects the growing interest and acceptance of artificial intelligence and machine learning within the federal government as well as the military.
Lawrence Schuette, director of the Office of Research at the Office of Naval Research, said during the Fedstival panel that the Navy has a history “of betting on interesting science.”
In the area of AI, the Navy was one of the original sponsors of Marvin Minsky, the “father of artificial intelligence.” The work of another recent sponsor, Stanford University’s Christopher Ré, whose focus is “unstructured data,” reflects what ONR’s investigators are thinking about, Schuette said.
“We see those systems as being necessary, that the data streams that are coming in are such that a human can’t do it,” Schuette said. “A human might be able to observe and mentor and coach and teach this system to react better, but I’m very comfortable with that. Just like we’ve gotten used to pumping our own gas and not talking to the gas station attendant and we’ve gotten used to using an ATM machine, we are going to get used to these other inventions.”
Asked whether the advancement of AI would ultimately result in a utopian or dystopian future, Timothy Persons, chief scientist at the Government Accountability Office, said there is a lot of possibility in the middle, as long as policy is crafted that enhances innovation.
In late September, GAO published a report on data and analytics.
“Every technology is a two-edged sword, it’s agnostic, it doesn’t care about constitutional republics, it doesn’t respect sovereign borders, it’s going to be able to be used for good or for ill,” Persons said. “So what we try and do in our technology studies is how do we create policy that mixes the upside, the opportunities, the potential, and not kill innovation. Yet minimize the downside, recognizing that earlier and trying to support a more prospective governance approach than a reactive, potentially damaging, unintentional recourse.”