The Government Accountability Office’s chief scientist, Timothy Persons, says testing AI’s risks in a controlled environment is crucial to implementing the technology at federal agencies.
Implementing artificial intelligence at federal agencies requires an emphasis on safety, ethics and testing, according to Government Accountability Office Chief Scientist Timothy Persons. As departments consider AI’s applications in their own operations, he said, regulators need to be part of the preliminary conversations.
So far, agencies have mostly used AI for data crunching and customer service. But some, such as the Defense Department, are eager to expand its scope.
GAO’s comptroller general held a forum last month to discuss possible federal AI applications, as outlined in the agency’s report to the House Committee on Science, Space and Technology. Persons told Federal Drive with Tom Temin that Congress wants a forward-looking approach to AI for economic as well as regulatory purposes.
“They want to have a prospective look at what’s emerging in this area,” he said. “This is obviously a key technological development that’s having tremendous impact on all sectors of the economy and the public sector.”
The common concern that AI will replace human jobs is something Jeff Lau, acting regional commissioner for the Northeast and Caribbean region of the General Services Administration’s Federal Acquisition Service, discussed last month at an event sponsored by the American Council for Technology and Industry Advisory Council (ACT-IAC). He said automation tools can relieve employees of menial tasks, but people will likely still have to do the more difficult work.
Persons said last month’s forum included people from academic and nonprofit circles who have studied potential AI applications in criminal justice and automobile safety. Regulators interested in weighing checks on the technology were also present.
“We like to have that cross-sectoral conversation because the federal nexus is quite important and all these various agencies have a particular role, but it’s much broader in terms of what’s happening in the private sector and what needs to happen in terms of education and other nonprofit activities,” Persons said.
He used self-driving vehicles as an example of the collaboration needed to utilize AI. With automated vehicles, safety is paramount and the machines must adhere to both federal and state transportation statutes. Persons recommended testing the vehicles in “regulatory sandboxes” where regulators can determine what questions or standards to enforce on tech developers.
He also said AI could help with the constant challenges of cybersecurity. AI has the potential to help human operators, of whom Persons said the public and private sectors will never have enough, monitor networks and handle other problems.
“The key issue is doing it in a reliable way,” he said. “How do you do it while preserving personal privacy and constitutional civil liberties at the same time?”
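To make the network-monitoring idea concrete, here is a minimal sketch of how a machine learning model might triage traffic for human analysts. The features, data and thresholds are illustrative assumptions rather than anything GAO prescribes; the sketch uses scikit-learn’s IsolationForest, one common anomaly-detection technique.

```python
# Minimal sketch: flagging anomalous network activity for human review.
# The features (bytes sent, connections, distinct ports) are illustrative
# assumptions, not a GAO-endorsed feature set.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" traffic: [bytes_sent_kb, connections, distinct_ports]
normal = rng.normal(loc=[500, 40, 5], scale=[100, 10, 2], size=(1000, 3))

# Train on historical traffic assumed to be mostly benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# Score new observations; -1 marks a suspected anomaly.
new_traffic = np.array([
    [520, 42, 6],      # looks routine
    [9000, 800, 120],  # possible exfiltration or scan
])
for row, label in zip(new_traffic, detector.predict(new_traffic)):
    status = "flag for analyst review" if label == -1 else "normal"
    print(row, "->", status)
```

The division of labor matches Persons’ framing: the model surfaces candidates, and scarce human operators make the judgment calls.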
On the civil liberties point, Persons said ethics are a major concern regarding AI. Algorithms can have unintended consequences that conflict with constitutional values of equal opportunity and other protections, he said.
“There’s just a need to look at things and get explainability on the algorithms, so you can ask the question of why did you come out with this answer, given these inputs?” he said.
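One way to picture that question: for a simple linear model, a prediction can be decomposed into per-feature contributions, giving a basic answer to “why this output, given these inputs.” The loan-screening framing and feature names below are hypothetical, and more complex models require dedicated explainability tooling, which is part of Persons’ point.

```python
# Minimal sketch: explaining one prediction from a linear model.
# The "loan screening" features are hypothetical, chosen only to
# illustrate the "why this answer, given these inputs" question.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_50k", "debt_ratio", "years_employed"]

# Tiny synthetic training set: rows are applicants.
X = np.array([
    [1.2, 0.2, 10],
    [0.4, 0.8, 1],
    [0.9, 0.3, 5],
    [0.3, 0.9, 2],
    [1.5, 0.1, 12],
    [0.5, 0.7, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved

model = LogisticRegression().fit(X, y)

# Decompose one applicant's score into per-feature contributions.
applicant = np.array([0.6, 0.75, 3])
for name, c in zip(features, model.coef_[0] * applicant):
    print(f"{name}: {c:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
# The signed contributions show which inputs pushed the decision
# toward approval or denial -- a basic form of explainability.
```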
Another ethical concern is bias in the data that drives AI algorithms. Those biases may be unintended, he said, but programmers will need an “appropriate sampling of data,” whether for criminal justice or financial systems.
“You can have the best algorithm, but if you don’t have quality data or sufficient data, unbiased data, then no algorithm will be able to overcome that,” Persons said. “And conversely, if you have great data but your algorithm isn’t good or the weighting factors are wrong in various things, then that also can compound things.”
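As a minimal illustration of what “appropriate sampling of data” can mean before any model is trained, a sketch like the following compares group representation and label base rates in a training set. The column names are hypothetical placeholders, and real bias auditing goes well beyond this kind of tabulation.

```python
# Minimal sketch: a pre-training check for skew in labeled data.
# Column names ("group", "label") are hypothetical placeholders.
import pandas as pd

data = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "label": [1, 1, 1, 0, 0, 0, 0, 1, 0, 0],
})

# Representation: is each group adequately sampled?
print(data["group"].value_counts(normalize=True))

# Base rates: do positive labels differ sharply by group?
print(data.groupby("group")["label"].mean())

# A large gap in either table is a signal to resample or
# re-examine the labeling process before training a model.
```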
To avoid that, Persons said agencies should create an AI testing environment that fosters innovation, is contained and takes ethical risks into account.
Those who conduct AI tests for federal agencies should ask themselves what the goal is and which ethical norms need to be reinforced, Persons said.
“This isn’t something you just take and throw right into mission operations right away,” he said. “You need to have a robust research, development and testing environment. And that’s entirely possible in today’s computational world.”
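As a hedged sketch of what such a gate between the research environment and mission operations might look like, the following checks a model’s evaluation report against explicit accuracy and parity thresholds before clearing it for deployment. The thresholds and metrics are illustrative assumptions, not GAO guidance.

```python
# Minimal sketch: a release gate between a test sandbox and
# mission operations. Thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EvalReport:
    accuracy: float
    # Ratio of positive-outcome rates between groups (1.0 = parity).
    group_outcome_ratio: float

def ready_for_operations(report: EvalReport,
                         min_accuracy: float = 0.90,
                         min_parity: float = 0.80) -> bool:
    """Return True only if the model clears both gates."""
    if report.accuracy < min_accuracy:
        return False
    if report.group_outcome_ratio < min_parity:
        return False
    return True

# Example: accurate overall, but outcomes are skewed across groups,
# so the model stays in the research environment.
print(ready_for_operations(EvalReport(accuracy=0.94,
                                      group_outcome_ratio=0.62)))  # False
```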
Amelia Brust is a digital editor at Federal News Network.