Agencies are scoring quick wins with robotic process automation, deploying bots that handle rote tasks more quickly and efficiently than federal employees can, freeing those employees to focus on higher-value work.
The demonstrated value of these bots, however, represents only a fraction of the time and money agencies could save by letting them run 24 hours a day, rather than for eight hours a day under the supervision of on-the-clock employees.
Making that shift requires collaboration between agency IT offices and their counterparts in information security. To assist in that dialogue, the General Services Administration is launching a Digital Worker Credentialing Handbook that will give agencies a common reference.
GSA has not yet publicly released the handbook, but Daria Medved, the agency’s deputy director for emerging technology, said the idea came from an interagency workshop, where GSA interviewed officials from other agencies on how they’re currently using and credentialing digital workers.
GSA analyzed the common policy and implementation challenges, and put together a set of recommendations. From there, the agency’s Identity Assurance and Trusted Access Division used those recommendations as the basis for the handbook.
Medved said the handbook should give agencies a common reference for launching and overseeing bots. More significantly, however, she said it aims to help agencies overcome a common “mental barrier” to leaving bots unattended.
“That leap is actually very minor compared to some of the other factors that really need to be assessed — more so about what data does this digital worker have access to, and what systems does this digital worker have access to? How much damage can this digital worker actually do?” Medved said Tuesday in a webinar hosted by the Advanced Technology Academic Research Center (ATARC).
While nearly all agencies have fielded some level of automation, few agencies have taken the leap of leaving a bot unattended.
The GSA handbook aims to meet the goals of a 2019 Office of Management and Budget memo that calls on agencies to manage the digital identities of automated technologies, including RPA bots and artificial intelligence algorithms.
The handbook outlines a three-step process for agencies to consider when fielding a supervised or unsupervised bot. The first step calls on agencies to determine the bot’s impact level, which is calculated through a six-factor impact score.
Ken Myers, a GSA cyber policy and strategy planner, said the handbook makes it clear that not every digital worker requires a digital identity, especially if the bot is considered low-impact, but that determination can vary by agency.
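Since the handbook has not been publicly released, the six factors themselves are not known; but the general shape of a score-then-threshold assessment can be sketched as follows. The factor names, weights and thresholds below are purely illustrative assumptions, not the handbook's actual criteria.

```python
# Illustrative sketch of a six-factor impact score for a digital worker.
# The handbook's real factors are not public; these names and thresholds
# are hypothetical stand-ins for the kind of assessment described.

FACTORS = [
    "data_sensitivity",     # what data can the bot read?
    "system_criticality",   # what systems can it touch?
    "level_of_autonomy",    # unsupervised bots score higher
    "privilege_scope",
    "potential_for_fraud",
    "operational_damage",   # "how much damage can this bot actually do?"
]

def impact_score(ratings: dict) -> int:
    """Sum per-factor ratings (1 = low, 2 = moderate, 3 = high)."""
    return sum(ratings[f] for f in FACTORS)

def impact_level(score: int) -> str:
    """Map a summed score onto a coarse impact level (illustrative cutoffs)."""
    if score <= 8:
        return "low"        # per Myers, may not require a full digital identity
    if score <= 13:
        return "moderate"
    return "high"

# An unattended bot with otherwise low-risk access still scores higher
# on autonomy, reflecting the handbook's higher expected risk for
# unsupervised digital workers.
ratings = {f: 1 for f in FACTORS}
ratings["level_of_autonomy"] = 3
print(impact_level(impact_score(ratings)))
```

Under a scheme like this, the low-impact outcome Myers describes is exactly the case where an agency might decide a digital identity is unnecessary — a determination the handbook leaves to each agency.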
“We found that a lot of agencies were using human-based identity processes to credential a digital worker, and it doesn’t always work like that. There’s some things that are specific to humans that don’t correlate to a digital identity,” Myers said.
Agencies should expect a higher risk score for unsupervised bots. As an example of a digital worker with a critical risk level, Medved pointed to a hypothetical case of the Federal Aviation Administration fielding AI for air-traffic control.
“That is not something we’re trying to prevent from implementing, we are just trying to say that if you do want to go in that direction, let’s just make sure that we have the right identity for this bot, or for this AI as well as the right controls to ensure that there’s validations and certifications that this bot is monitored,” she said.
The handbook recommends agencies routinely conduct access reviews to determine whether a digital worker has the privileges needed to complete a task, but none beyond that. During this review, agency program offices should consider whether a digital worker has access to privileges that could result in fraud, theft or other errors.
Myers said agencies should also determine how often they should review the code for a digital worker and how to conduct an ethics and bias review of a digital worker. He said formal guidance doesn’t exist yet for an ethics or bias review, but said agency communities of interest could put together their own sets of recommendations.
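The core of such an access review is a least-privilege comparison: what the bot holds versus what its task requires. A minimal sketch, assuming a simple set-based privilege model (the handbook's actual procedure is not public):

```python
# Illustrative access-review check: flag privileges a digital worker holds
# beyond what its task requires, which are the candidates for the
# fraud/theft/error risk the handbook asks program offices to consider.

def review_access(granted: set, required: set) -> dict:
    """Compare granted vs. required privileges for a digital worker."""
    return {
        "missing": required - granted,  # bot cannot complete its task
        "excess": granted - required,   # privileges to justify or revoke
    }

# Hypothetical example: an invoice bot that was also given admin rights.
granted = {"read:invoices", "write:payments", "admin:user_accounts"}
required = {"read:invoices", "write:payments"}
result = review_access(granted, required)
print(result["excess"])  # {'admin:user_accounts'}
```

A review that surfaces a non-empty "excess" set is the trigger for the program office to either document why the privilege is needed or remove it.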
The handbook’s final step recommends agencies provision and govern the digital identities of bots.
Along with this step, GSA urged agencies to assign both a sponsor and a custodian for the bots they deploy. A sponsor is usually an executive, such as a chief information security officer, who is accountable for the digital worker, while a custodian oversees the day-to-day functions of the bot.
Bill Bunce, the director for federal sales at Automation Anywhere, said agencies have made it through the “low-hanging fruit” of using attended bots to handle simple tasks. The next step, he said, is to save more time and money through unattended bots.
“They’re getting pressure from management up above, to say, ‘What else can we apply this to, what else can we do?’ As soon as you begin to push the boundaries of what you’re looking at doing with automation, security becomes the number-one issue,” Bunce said.