Government creating a mind of its own

Artificial intelligence has many definitions and even more uses, and the federal government is embracing this emerging technology. In a two-day special report, Federal News Radio explores how agencies are putting AI to work and what it means for their missions and their workforce.

A knife is brandished on a crowded subway platform. A backpack sits abandoned in a busy airport terminal. Images of sickness and death appear on social media.

Security cameras often play a forensic role after a crime is committed, while an online platform can serve as both a megaphone and a recorder.

The Intelligence Advanced Research Projects Activity (IARPA) hopes to reach a point where a camera not only recognizes the difference between a routine action and a suspicious one, but can alert law enforcement before something bad happens, and where a trend on Twitter can signal a disease outbreak to doctors.

IARPA’s Deep Intermodal Video Activity (DIVA) and Open Source Indicators (OSI) programs are just two of the machine-learning initiatives the federal government is exploring as it embraces the emerging technology of artificial intelligence (AI).

“AI is a category for technologies that aim to replicate some aspect of human cognition, so that could be human planning, reasoning, the ability to learn, the ability to develop strategy,” said Jason Matheny, Ph.D., IARPA director. “AI has also started to have a real influence in the way that the government operates. Some government agencies are using automated assistance, for instance, to help with managing calls to individual federal agencies who want information regarding, say, health and benefits.

“Some agencies are using automated tools to help with making projections or forecasts about things such as changes in attendance, or changes in workflow. Some federal agencies, like the ones that we work with in the intelligence community, are applying AI to understand trends and patterns in the world in order to make sense of the risks to, say, U.S. national security or to global security.”

Federal News Radio is exploring the use and impact of artificial intelligence on agency missions and the government workforce, in a two-day special report “AI: The Reality in Your Agency.”

A key issue for federal use of AI, Matheny said, is to be able to trust that the system or computer being used is correct; that it’s reached a conclusion — such as a legitimate threat of violence or confirmed disease outbreak — through good evidence and reasoning.

Explainability, along with investments in research, safety and testing in real environments, is among the aspects of AI that need to be addressed, Matheny said, and the White House is leading the effort to bring those aspects under one policy.

Machine learning

Before laying out the Obama administration’s AI policy, it’s important to define what artificial intelligence is. The problem: even the White House can’t put AI under one label.

“There is no single definition of AI that is universally accepted by practitioners,” stated the administration’s October report called Preparing for the Future of Artificial Intelligence. “Some define AI loosely as a computerized system that exhibits behavior that is commonly thought of as requiring intelligence. Others define AI as a system capable of rationally solving complex problems or taking appropriate actions to achieve its goals in whatever real world circumstances it encounters.”

Kris Rowley, chief data officer at the General Services Administration, told Federal News Radio that in the context of the federal government, AI is closer to “machine learning.”

“When you see an article posted on artificial intelligence and it always starts with a robot on top and people are like, ‘Oh my god. Robots are going to do my job,’ and that’s not really what we’re talking about when we’re talking about machine learning,” Rowley said during a Nov. 29 FCW event in Washington. “We have to get down into talking about algorithms, statistics, mathematics and how the machine learning actually works.

“You have to combine that with the right type of coding talent to come in and build the rules associated with those algorithms and then you have to build the capability to present those in a way as options to leadership, so there’s this whole bandwidth of capability of culture, technology and the ability to present it.”

In October, GSA launched an Artificial Intelligence for Citizen Services Community. Justin Herman, GSA’s SocialGov community leader, told Federal News Radio that month that the interagency community of practice grew out of a recent workshop on AI for industry and agencies.

The hope is to give agencies opportunities to learn more and work better with other departments that have expertise in AI, as well as with private sector companies, “who are going to likely be developing the cognition as a service tools that agencies will probably be purchasing in order to cover these services.”

In October, the Obama administration published its report and an accompanying National Artificial Intelligence Research and Development Strategic Plan.

The plan has two goals: provide a platform for identifying and developing AI opportunities, and provide a workforce to support that platform.

The report includes information on everything from self-driving cars to workforce development, and makes recommendations for agencies when considering the future of AI and how it can apply to their missions.

Those recommendations include prioritizing short and long-term research on artificial intelligence, exploring ways to use AI in agency work, and communicating with industry to keep up to date on AI progress in the private sector.

“Government has several roles to play,” the report stated. “It should convene conversations about important issues and help to set the agenda for public debate. It should monitor the safety and fairness of applications as they develop, and adapt regulatory frameworks to encourage innovation while protecting the public. It should support basic research and the application of AI to public goods, as well as the development of a skilled, diverse workforce. And government should use AI itself, to serve the public faster, more effectively, and at lower cost.”

A centralized capability

Lower cost is one reason why the Bureau of Labor Statistics (BLS) turned to artificial intelligence for its mission to taxpayers.

BLS is charged with providing information on the labor market, which informs both the government and the private sector on the health of the economy, and helps them make decisions.

Rick Kryger, director of survey processing at BLS, said the agency is using text readers to analyze large amounts of data for its reports, like the monthly survey behind the Consumer Price Index (CPI).

The text readers are learning things like product descriptions and new occupation titles so the data can be categorized consistently. For example, one company might label a job “doctor,” while another is more specific and uses a label like “pediatrician” or “oncologist.”

“New names are being created every day by companies, new titles for occupations,” Kryger said. “Ultimately, we need to be able to analyze that and map that back to the standard occupation classification system, the SOC, which is a governmentwide classification system. That’s what we publish data on to say this is the average wage of doctors, or the employment of doctors, how many doctors are employed in the United States.”

When employers provide information to the bureau, it’s not always categorized the same way other businesses categorize theirs, or the way the bureau classifies its data.

Thanks to the growing amount of data — and the variety of ways it is classified — BLS needed a fast and affordable way to digest information and prepare it for its reports.
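The mapping Kryger describes, taking a free-text job title and assigning it a standard occupation code, is at heart a text classification problem. The following is a minimal, hypothetical sketch of that idea using scikit-learn; BLS has not published the code behind its text readers, and the sample titles and SOC-style codes below are illustrative placeholders only.

```python
# Hypothetical sketch: mapping free-text job titles to standardized occupation
# codes with a simple text classifier. The titles and codes are illustrative
# placeholders, not BLS data or the bureau's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny labeled sample: employer-supplied titles -> standardized codes.
titles = [
    "pediatrician", "oncologist", "family doctor", "physician",
    "registered nurse", "RN", "software engineer", "java developer",
]
codes = [
    "29-1221", "29-1229", "29-1215", "29-1229",
    "29-1141", "29-1141", "15-1252", "15-1252",
]

# TF-IDF over character n-grams tolerates new or misspelled titles.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(titles, codes)

# A title the classifier has never seen is mapped to the closest known code.
print(model.predict(["pediatric oncologist", "senior java programmer"]))
```

Character n-grams are one simple way to cope with the problem Kryger describes, where employers keep inventing new or slightly different names for the same occupation.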

“A lot of it is resource driven,” Kryger said. “The smaller government budgets, especially being a non-Defense entity, we’re kind of second tier in the government model, typically. The budgets have shrunk, or just simply the cost of us operating, let’s say the budgets are flat, but our operational costs continue to go up. Federal pay raises or other contracting increases.

“So we’ve had to try to maintain and do basically the same thing we’ve been doing, with fewer financial resources and fewer positions. So we’ve had to turn to finding more automation to maintain the quality of our products at the same financial level, but it’s really not the same financial level, it’s really a reduced level financially.”

Kryger said BLS has been using the readers for the last two or three years, and the hope is to develop the readers into a kind of “centralized capability” for the agency, since there are similarities among its surveys. Kryger said the bureau also wants to analyze how the public is using its data.

“Doing things like analyzing responses to our Twitter feed, for example, or analyzing news articles that are being published about our data to determine is there some data gap or something that we should focus on, publishing some article about, that would help better inform the public,” Kryger said. “Core to our mission is informing the public through the production of data and through the articles that we produce.”

Learning on the job

U.S. Citizenship and Immigration Services (USCIS) is using artificial intelligence to improve customer service, both because it is a fee-based agency — which means it operates largely on the money it charges for its work — and because the services it provides may directly impact a person’s future in America.

Emma is a virtual assistant on the USCIS website, and a form of artificial intelligence. Visitors to the website get a quick hello from Emma in a dialogue box on the screen. Users can type in English or Spanish — she’s also programmed to decipher spelling errors and some slang — to get answers or shortcuts to online resources.

“Immigration is very complex, and so we wanted to provide a way where customers can get access to information 24-hours a day,” said Vashon Citizen, project manager for Emma. “So access is always available as well as, because our customers are all over the world, they speak different languages, we wanted a way where customers could get answers but they use their own ‘natural language,’ they could speak in their own words, they don’t need to know government speak.”
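Next IT has not detailed how Emma works under the hood, but the general pattern Citizen describes, matching a free-form and possibly misspelled question to a known topic and returning a prepared answer or link, can be sketched in a few lines of Python. The questions, answers, links and matching threshold below are simplified stand-ins, not Emma’s actual knowledge base.

```python
# Minimal sketch of an FAQ-style virtual assistant: match a free-form,
# possibly misspelled question to a known intent and return a canned answer.
# The intents and answers are simplified stand-ins for illustration; this is
# not how Next IT's Emma is actually implemented.
import difflib

INTENTS = {
    "how do i check my case status": "You can check your case status online at <case-status page>.",
    "how do i renew my green card": "File Form I-90 to renew or replace your green card.",
    "where is my local office": "Use the office locator at <office-locator page>.",
}

def answer(question: str, cutoff: float = 0.6) -> str:
    """Return the canned answer for the closest known question, if any."""
    match = difflib.get_close_matches(
        question.lower().strip("?! "), INTENTS.keys(), n=1, cutoff=cutoff
    )
    if match:
        return INTENTS[match[0]]
    return "I don't know yet -- this question gets logged for the content team."

print(answer("How do I chek my case status?"))  # tolerates the typo
print(answer("Can my cousin sponsor me?"))      # falls through to "I don't know"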

The team behind the Emma program said they hope to continue to improve the design and accessibility of AI. That includes adding more popular questions and reducing the number of “I don’t know” responses for the English and Spanish versions of Emma.

“Right now, the focus is we want customers to walk away feeling it was a good experience,” Citizen said. “The key to that is content, knowledge base, and having her be able to understand as many questions as possible to provide good, accurate responses.”

That knowledge base is built over time by collecting feedback, which the teams behind Emma learn from and apply to the program.

Citizen’s team, along with the USCIS Office of Communications, Verizon and its subcontractor Next IT — the Washington-based company that built the Emma program — have weekly meetings to improve the quality of information provided to Emma.

“Prior to the meeting, our content analysts analyze a statistical sample of customer questions and Emma’s responses using a sophisticated suite of tools,” Citizen said. “The results of the analysis are discussed during the meeting with our team of subject matter experts and developers, to identify content areas that need improvement.  We focus on Emma responses that need to be strengthened and new topics that Emma currently does not support.”
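A rough sketch of that review loop, assuming a simple log of question-and-answer pairs, might look like the following. The field names, sample size and fallback text are assumptions made for the example, not details of the USCIS or Next IT process.

```python
# Hypothetical sketch of the review loop Citizen describes: pull a random
# sample of logged exchanges, then surface the questions that drew fallback
# ("I don't know") responses so content analysts can prioritize new topics.
# The log format and field names are assumptions for the example.
import random
from collections import Counter

FALLBACK = "I don't know"

def sample_for_review(chat_log, sample_size=500, seed=42):
    """Return fallback-triggering questions, most frequent first."""
    random.seed(seed)
    sample = random.sample(chat_log, min(sample_size, len(chat_log)))
    misses = [
        entry["question"].lower()
        for entry in sample
        if entry["response"].startswith(FALLBACK)
    ]
    return Counter(misses).most_common(20)

# Example: each log entry is a dict with the user's question and Emma's reply.
log = [
    {"question": "How do I check my case status?", "response": "You can check..."},
    {"question": "Can my cousin sponsor me?", "response": "I don't know yet..."},
]
print(sample_for_review(log, sample_size=2))
```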

Striking a balance

Virtual assistants can ease the burden on a workforce, which is why these “chatbots” are seeing an explosion of development interest in both industry and government for specific use cases, said Rick Collins, president of enterprise for Next IT.

“Definitely government is recognizing how with a move toward self help, within websites and other channels and things like that, the most effective way to deal with self help is through conversational interface,” Collins said.

As any market or technology matures, Collins said, “You look for a place where you can strike balance.”

Collins said when government and private clients approach Next IT, they bring a problem statement, which often is about more effectively engaging and supporting both customers and employees.

“I think with the awareness of artificial intelligence, machine learning and Siri and Google Now, and all the rest, there is a rise in expectation around having access to a conversational approach or a conversational agent,” Collins said. “So some customers are saying hey, our customers are asking for these things, so where do we get started?”

Jen Snell, vice president at Next IT, said traditional channels like call centers are not what people want; it’s more about connecting with customers over multiple channels to better serve them.

There are always going to be people who are defensive or afraid of a new technology and what it means for their jobs, Snell said, but new technology means an evolution into different and more exciting jobs.

“I think the government — a lot like in the private sector — is going through a digital transformation,” Snell said. “There is also the opportunity within the workforce for optimizing and augmenting how employees can work, and work more effectively.”

Read the full special report, AI: The Reality in Your Agency.
