Machines are taking your job.
Text readers cheaply and quickly catalog workforce data, virtual assistants answer customer service calls on immigration status, and artificial intelligence applications save endangered animals.
But talk to the federal employees working alongside these machines and they’ll tell you that while computers and codes are performing some jobs better than their human colleagues, they’re also creating opportunities.
Mark Krzysko, deputy director for enterprise information at the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, said during a Nov. 29 FCW event in Washington that artificial intelligence (AI) “is a technique, it’s not an answer.”
“I think we all need to kind of get our minds around what that is,” Krzysko said. “It can contribute quite immensely, but you have to have humans in the loop certainly to be a part of it, and humans that understand what’s going on from that perspective and that’s going to require a bit of cultural change and workforce change associated with that.”
“I think oftentimes we talk in hyperbole that it’s going to fix all the problems,” he continued. “It helps identify some of the opportunities. I think we need to think about it as an opportunity for us to either refactor our management, refactor our data, refactor our organizations. It is just a tool and technique to do that. It is not an answer in and of itself.”
Federal News Radio is exploring how agencies are using artificial intelligence to meet their mission, and how it impacts the federal workforce, in a two-day special report “AI: The Reality in Your Agency.”
The White House, in its October report Preparing for the Future of Artificial Intelligence, admits “the diversity of AI problems and solutions, and the foundation of AI in human evaluation of the performance and accuracy of algorithms, makes it difficult to clearly define a bright-line distinction between what constitutes AI and what does not.”
“What is important,” the report stated, “is that a core objective of AI research and applications over the years has been to automate or replicate intelligent behavior.”
The Bureau of Labor Statistics (BLS) is doing just that through its use of text readers. The BLS mission includes collecting and categorizing job data for its monthly reports on the health of the economy.
The need to regularly produce reports, combined with the growing amount of data and data sources, led to the bureau’s exploration of text readers as a way to standardize and categorize information.
“At its basic level the program is scanning large amounts of data and trying to make human-like (or better) decisions,” said Rick Kryger, director of survey processing at the BLS, in an email to Federal News Radio. “When an organization gives us employment and wage data we need to translate their proprietary occupation titles into the Standard Occupational Classification (SOC) codes and titles so that we can provide data on employment, unemployment, wages, etc., over time. Similarly, for SOII (Survey of Occupational Injuries and Illnesses), we have to translate descriptions of injuries into injury codes for aggregation and publication. For prices, (such as the Consumer Price Index), product descriptions have to be translated into product codes. For example, to determine if the price is for ‘whole milk’ or ‘milk other than whole.'”
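BLS has not published the internals of its text readers, but the kind of autocoding Kryger describes can be sketched as a similarity match between a free-text title and a reference table of codes. The SOC codes and titles below are real examples, while the token-overlap matching is a deliberately simplified stand-in for the bureau’s actual statistical methods:

```python
# Toy autocoder sketch -- illustrative only, not BLS's production system.
# Maps an employer's proprietary job title to the closest Standard
# Occupational Classification (SOC) code by token overlap.
SOC_TITLES = {
    "15-1252": "software developer",
    "29-1141": "registered nurse",
    "41-2011": "cashier",
}

def tokenize(text):
    """Lowercase a title and split it into a set of words."""
    return set(text.lower().replace("-", " ").split())

def classify(proprietary_title):
    """Return the (code, canonical title) whose tokens best overlap the input,
    scored with Jaccard similarity (intersection over union)."""
    query = tokenize(proprietary_title)

    def score(item):
        _code, title = item
        ref = tokenize(title)
        return len(query & ref) / len(query | ref)

    return max(SOC_TITLES.items(), key=score)

print(classify("Senior Software Developer II"))  # matches 15-1252
```

A production coder would use a trained statistical model over far more categories, plus the human quality review Kryger describes, but the core task is the same: turn inconsistent free text into one standard code.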
Kryger said the text readers have been used more in the last two or three years, as budgets have remained flat but operating costs continue to rise.
“The large volumes of data that are available now, you really have to implement more automation,” Kryger said. “We can’t afford to hire more people to look at the data and code it, you really have to move to automation. It also produces more consistency, too. If you have five people looking at the same occupation, you could end up with two or three different ways that they classify, just because they kind of view things differently. This also can help to bring more consistency in how the data are classified.”
The readers are constantly learning, Kryger said, to keep up with changes in product descriptions and new occupations.
“As you go through time you have to continue to update and review the accuracy of those readers, to make sure they’re still producing high quality results and are not producing something erroneous,” Kryger said.
A core team of IT specialists, economists, statisticians and mathematicians helps monitor and develop the readers. But there are also quality and review processes, which are standard throughout the BLS, Kryger said.
Kryger admitted there were some growing pains, but “just in what we do, ultimately the results prove themselves.”
“We see ourselves in the early stages of this still,” Kryger said. “We need to be confident about the methodology, we need to be confident about what we’re getting out of it. Then we can really implement it in a more strategic way from a cost savings and cost avoidance perspective.”
Some might argue a fee-based agency comes with a higher level of responsibility for customer service. Factor in that the services your agency provides include naturalization and citizenship, and you get an idea of the weight of the work at the U.S. Citizenship and Immigration Services.
To help customer service employees cut back on repetitive calls, or calls that customers could answer for themselves online, USCIS launched its virtual assistant “Emma.”
Emma went live on Dec. 1, 2015, after months of input and testing from customers and USCIS employees. The Spanish version of Emma launched in June of this year.
“She is a form of artificial intelligence, meaning that customers can ask Emma questions in a million, billion different ways, and Emma has the ability to analyze the input and determine what’s the most appropriate response,” said Emma program manager Vashon Citizen.
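The core mechanism Citizen describes, mapping many phrasings of a question onto one appropriate answer, can be sketched in miniature. The intent table and responses below are hypothetical stand-ins, not USCIS’s actual knowledge base, and the fuzzy string matching is far simpler than a commercial virtual assistant’s language model:

```python
import difflib

# Hypothetical intent table -- a simplified stand-in for a virtual
# assistant's knowledge base. Each intent lists example phrasings
# and one canned response.
INTENTS = {
    "case_status": {
        "examples": ["check my case status", "where is my application",
                     "status of my case"],
        "response": "To check your case status, use the online case status tool.",
    },
    "green_card_renewal": {
        "examples": ["renew green card", "replace my expired green card",
                     "my green card expired"],
        "response": "File Form I-90 to replace an expired Green Card.",
    },
}

def answer(question, cutoff=0.5):
    """Match a free-form question to the closest example phrasing;
    apologize when nothing scores above the cutoff."""
    best_intent, best_score = None, cutoff
    for intent, data in INTENTS.items():
        for example in data["examples"]:
            score = difflib.SequenceMatcher(
                None, question.lower(), example).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    if best_intent is None:
        return "Sorry, I don't understand. Could you rephrase?"
    return INTENTS[best_intent]["response"]

print(answer("How do I check my case status?"))
```

The design choice mirrors what the USCIS team describes: the assistant improves not by changing the matcher but by growing the table of example phrasings, which is exactly the feedback-driven work the Emma team does each week.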
Citizen said in researching how Emma would work, her team reached out to USCIS employees, customers, and even the Homeland Security Department as a whole, asking for feedback on questions that would normally prompt a call to the customer service department.
Some of the most popular query topics include checking case status, how to replace an expired Green Card, and how to contact USCIS.
The team behind Emma conducted similar surveys for the Spanish version, but Jessica Membreno, project manager for Spanish Emma, said it was a bit of a challenge incorporating all of the different phrasing and dialects across Spanish-speaking areas.
“She’s learning across the channels of customers inputting from different backgrounds, from different countries,” Membreno said. “We’re trying to create that knowledge base, in addition with our analysts that go in there and put these different phrases into Spanish.”
Each day there is feedback among the team behind Emma and the USCIS Office of Communications, regarding ongoing development and requirements.
Those daily meetings are put into a weekly update, which can then lead to additional updates of the virtual assistant.
For example, the team added more specific answers to some of the most popular questions, such as “How do I get my papers?” Citizen said.
Before, Emma would apologize for not understanding the question, but now she might respond with an answer about a green card or work authorization.
Emma has about 1,300 responses, and collects about 20,000 customer inputs each day. The virtual assistant answers 90 percent of English questions, as opposed to saying “I don’t know,” and she has an answer rate of 87 percent in Spanish.
“It’s really an ongoing, iterative process with Emma,” Citizen said. “You have to get her out there, understand what’s most important to customers — definitely case status is one of them — then work with the team to develop the right experience.”
Citizen and Membreno said employees have embraced the AI as a colleague, and some even use the program when they are looking for an answer on the site.
“Everything has been positive internally,” Membreno said. “Our department as well as our other interagency departments within the Homeland Security family, have loved the fact that they can refer their customers that are asking about the immigration process on our side of the house, to follow up using Emma as a first channel of communication.”
Rick Collins, president of enterprise at Next IT, the Washington-based company that built Emma, said when it comes to people who are hired to provide customer support, he doesn’t believe they want to spend all their time on “tier 1 issues.”
“They would like to spend more time — certainly the metrics show — they want to spend time on those things that are difficult,” Collins said of the customer support environment. “Still the goal is always time-to-resolution. So if you can take the tier 1 and in some cases tier 2 customer support issues and automate that, it’s better for everybody.”
In Next IT’s 2016 “Special Report for the Travel Industry” whitepaper, several advisory firms reported that within the next four years, customer service would see a decrease in human interaction.
“Gartner predicts that by 2020, customers will manage 85 percent of their relationships with businesses without ever interacting with a human,” the report stated. “Forrester saw the same trend unfolding when their channel adoption survey revealed that customers are increasingly accessing self-service options for customer support. Use of customer-support intelligent virtual assistants (IVAs) and other intelligent interfaces — think Siri, but with company-specific expertise — spiked 10 percent in 2014, FAQ page usage surpassed voice calls for the first time and online chat adoption rose from 38 percent in 2009 to 43 percent in 2012 to 58 percent in 2014.”
The team behind the Protection Assistant for Wildlife Security (PAWS) has hard evidence that artificial intelligence can make a difference.
With support from the National Science Foundation and the Army Research Office, PAWS looks at forensic crime data, in this case wildlife crimes such as poaching, and makes predictions about future attacks and where they might happen, explained Milind Tambe, the Helen N. and Emmett H. Jones Professor in Engineering and founding co-director of the University of Southern California’s Center for AI in Society.
The AI algorithms then generate a randomized patrol strategy.
“We don’t just want to go to a hot spot,” Tambe said. “If you just go to a hot spot the adversary will just shift somewhere else. You want to sort of project where they will shift to, and provide an unpredictable sort of patrol strategy, so they can’t guess where we will be tomorrow, and that’s kind of this game theory-based randomized patrol strategy.”
So how do you predict a poacher’s next move?
Tambe said using data on past attacks helps educate the team about poaching sites, or the type of location favored by poachers for setting up traps. The data comes from NGOs and groups like the Uganda office of the Wildlife Conservation Society.
“We may be able to record the location, the pattern of land, whether there’s a slope there, whether there’s trees there, what kind of vegetation was there, and so there may be features of that spot,” Tambe said. “We may have knowledge of how much patrolling had gone on in the past. And so based on all of these features, the machine learning algorithm that we have will be able to say, ‘OK, given the hundreds of attacks we have seen, we see a particular pattern to these attacks, where they tend to happen, where the snares tend to be kept by the poachers and therefore predict where they would do similar types of action in the future.'”
The second component of PAWS is setting up patrols based on the “probability calculation” from that past data, Tambe said.
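The two components Tambe describes can be sketched together: predicted attack probabilities per zone (step one), then a patrol drawn at random in proportion to risk rather than always sent to the hottest spot (step two). The zone names and probabilities below are hypothetical, and this weighted sampling is a deliberately crude stand-in for PAWS’ game-theoretic strategy:

```python
import random

# Hypothetical predicted attack probabilities per patrol zone.
# In PAWS these would come from a machine learning model trained on
# terrain features and past snare finds; here they are made up.
RISK = {"river_bend": 0.6, "ridge": 0.3, "grassland": 0.1}

def greedy_patrol():
    """Always visit the hottest spot -- predictable, so poachers can
    simply shift their snares to the second-hottest zone."""
    return max(RISK, key=RISK.get)

def randomized_patrol(rng):
    """Sample a zone in proportion to its predicted risk, so every zone
    gets some coverage and tomorrow's patrol cannot be guessed."""
    zones, weights = zip(*RISK.items())
    return rng.choices(zones, weights=weights, k=1)[0]

# Over many days, the randomized strategy covers all zones, weighted by risk.
rng = random.Random(42)
visits = {zone: 0 for zone in RISK}
for _day in range(1000):
    visits[randomized_patrol(rng)] += 1
print(greedy_patrol(), visits)
```

The actual PAWS system goes further, using game theory to compute a randomization that accounts for how poachers best-respond to the patrol distribution, but the contrast above is the intuition Tambe gives: unpredictability itself is part of the defense.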
When Tambe and the PAWS team talked to Federal News Radio earlier this autumn, they said they’d recently gotten news that several snares were recovered in Uganda before they were able to trap animals.
“We had made predictions on where to patrol, we had chosen a spot where there weren’t that many patrols in place previously,” Tambe said. “Normally, they wouldn’t go there as frequently; we directed those patrols. This is an ongoing test, so we just have started receiving feedback of results of this test. We understand this is a significant find in terms of snares placed, snares that were caught before they were deployed.”
A similar effort is also underway to prevent illegal logging in Madagascar, and Tambe said while this kind of AI algorithm is still in its research phase, it could be used to help fight drug smuggling or other illegal activities “that might be closer to home.”
Whether it’s protecting citizens at home or fighting the illegal drug market abroad, the federal government continues to apply AI technology to its missions and look at future applications.
“I think you’ll hear a lot more about artificial intelligence, which in my opinion is probably more machine learning than artificial intelligence,” said Kris Rowley, chief data officer at the General Services Administration, during the November FCW event. “We have to get down into talking about algorithms, statistics, mathematics and how the machine learning actually works. You have to combine that with the right type of coding talent to come in and build the rules associated with those algorithms and then you have to build the capability to present those in a way as options to leadership.”
Rowley said the first two parts are starting to work well, the problem is the final piece.
“You get in to present it and people start going into talking about the algorithm rather than talking about the outcome and they lose people,” Rowley said. “So I think it’s going to be all of those pieces, and I think the government on a case by case basis, on an acceptable basis, it’s going to be working through those.”
Jason Matheny, Ph.D., director of the Intelligence Advanced Research Projects Activity (IARPA), told Federal News Radio in an interview that not only are there a lot of smart people in and outside government looking at artificial intelligence and its impact on the human workforce, but machines might make it even more obvious that a person is needed for a role — particularly one within national intelligence, medicine or law enforcement.
Surveying the community, getting a sense of what’s going on from neighbors, “[making] sense of events that may not be caught by cameras,” can’t be captured by an automated security system, Matheny offered as an example.
“That kind of policing is I think going to probably increase, that sort of human touch because we’ll have the ability to reposition police officers to focus on that, as opposed to watching screens,” Matheny said. “So ultimately I think a lot of these technologies will free human labor to focus on where human cognition and human abilities are most needed.”
Read the special report, AI: The Reality in Your Agency.
Copyright © 2024 Federal News Network. All rights reserved.