Federal agency officials are trying to strike a balance between embracing the potential benefits of artificial intelligence and addressing ethical, safety and transparency concerns.

The Office of Management and Budget’s latest guidance requires agencies to publicly report on how they’re using AI, while also describing how they’re managing the risks associated with the technology.

The Labor Department, a crucial federal agency for monitoring how AI is being used in workplaces across the country, has 18 active AI use cases internally.

Lou Charlier, the chief AI officer and deputy chief information officer at the Labor Department, said his agency wants to “innovate while we balance that risk.”

“I really believe that AI can be a force multiplier for the department,” Charlier said during the Federal News Network webinar, “AI progress 2024: Challenges and opportunities in government.”

“AI is often viewed as an IT or a tech movement,” Charlier continued. “We have our CIO’s office taking the lead on the technical approach. But we’re also using our business agencies to help drive the model. They do the mission. And we want them to consider, from an ethical point of view, how to drive use.”

Labor’s internal AI platform

With many federal officials concerned about the safety and security of internet-accessible large language models, Charlier said Labor has built an internal platform that allows employees to use generative AI without the risk of exposing sensitive data.

“We’re trying to utilize techniques like guardrails, the fine tuning, prompt engineering and the retrieval augmented generation, where we’re using our own data to help mitigate the potential for hallucination in the models,” Charlier said. “We’re communicating about the quality data and creating a pathway to get to the data faster, and build business cases and use cases while we’re conducting the quality checks that are needed to ensure that we’re putting out a model that is useful but also factual and benefits what we’re trying to accomplish to support the missions.”
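To make the technique concrete, here is a minimal sketch of retrieval-augmented generation, in which agency text retrieved by similarity search is stitched into the prompt so the model answers from that context rather than from memory. The embedding function, sample documents and prompt wording below are invented for illustration and do not reflect Labor’s actual platform.

```python
# Minimal RAG sketch. Everything here is illustrative: the placeholder
# embedding, the sample documents and the prompt template are invented,
# not Labor's actual system.
from dataclasses import dataclass

import numpy as np


@dataclass
class Document:
    text: str
    embedding: np.ndarray


def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model (stable within one run)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)


def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query; return the top k."""
    q = embed(query)
    scored = sorted(
        corpus,
        key=lambda d: float(q @ d.embedding)
        / (np.linalg.norm(q) * np.linalg.norm(d.embedding)),
        reverse=True,
    )
    return [d.text for d in scored[:k]]


def build_prompt(query: str, corpus: list[Document]) -> str:
    """Ground the model in retrieved agency text to curb hallucination."""
    context = "\n\n".join(retrieve(query, corpus))
    return (
        "Answer using only the context below. If the answer is not there, "
        f"say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )


corpus = [
    Document(t, embed(t))
    for t in (
        "Form WH-347 is used for certified payroll on federal contracts.",
        "OSHA Form 300 is the log of work-related injuries and illnesses.",
    )
]
print(build_prompt("Which form covers certified payroll?", corpus))
```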

Charlier said the Labor Department has sought to include its employees in conversations about AI use through public forums and “lunch-and-learn” meetings, as well as trainings. Labor has also included public sector employee unions in its conversations around AI. He said the aim is to ensure AI is used ethically while protecting workers’ rights.

“Our goal is job enrichment, not job replacement,” Charlier said.

Army eyes predictive personnel models

The Army is looking at how it can use AI to streamline some dull, time-consuming tasks in its personnel system, while also considering how predictive models could be used to shape both recruitment and retention.

Kris Saling, acting director and chief of staff for the innovation directorate at Army Recruiting Command, said the proliferation of text analytics and large language models can help speed up the command’s paperwork-intensive processes.

“We sometimes just have to get in there and automate some of the busywork out of the way so we can get into the truly transformational things,” Saling said.

The Army has been exploring using AI to expedite file screening and augment performance evaluations, including as part of the service’s new talent attribute framework.

Saling said the Army is also experimenting with predictive models to help with retention.

“We have a retention prediction model right now that creates an individual vector for everybody who’s on active duty service, so we can see when different populations are likely to make a decision to leave,” Saling said. “We’ve been looking at different incentives to test and actually to see statistically what kind of things people are willing to make that decision to stay for.”
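As a rough illustration of the kind of model Saling describes, the sketch below scores per-soldier feature vectors with a simple classifier to estimate separation risk. The features, data and model choice are assumptions made for illustration, not the Army’s actual system.

```python
# Hypothetical retention-prediction sketch: one feature vector per soldier,
# a predicted probability of separating. Features, data and model choice
# are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented features: years of service, deployments, months since promotion,
# incentive received. Label is 1 if the soldier separated within a year.
X = rng.standard_normal((1_000, 4))
y = (X @ np.array([0.8, 0.3, 0.5, -0.9]) + rng.standard_normal(1_000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each soldier's vector yields a separation probability; aggregating by
# population shows where attrition risk concentrates and which incentives
# might be worth testing.
risk = model.predict_proba(X)[:, 1]
print(f"mean predicted separation risk: {risk.mean():.2f}")
```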

The Army is also looking at a similar use case on the recruiting side.

“We’re looking at it from the recruiting side, looking at different populations’ desire to serve, what kind of things can we offer, not just in that initial enlistment, but throughout somebody’s career, to entice those high performers to stay longer,” Saling said. “So it’s pretty exciting what we’ve been able to do so far. But I think we’re still just kind of making those initial tweaks. When we have a little bit more free space to maneuver in, we’re really going to be able to make more cool stuff happen.”

Treasury looks to bolster anti-fraud capabilities

The Treasury Department, meanwhile, is looking at how AI can bolster its work internally, while it also oversees how AI is being used throughout the financial sector.

Brian Peretti, Treasury’s deputy chief AI officer and director for domestic and international cyber policy in the Office of Cybersecurity and Critical Infrastructure Protection, said an early win for the agency has been the Bureau of the Fiscal Service’s use of an AI-driven fraud detection process to recover more than $375 million in fiscal 2023.
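The article does not describe Treasury’s method, but payment-screening systems of this kind commonly rest on anomaly detection. The sketch below is purely illustrative, with invented payment features and an arbitrary contamination rate; it is not Treasury’s actual process.

```python
# Illustrative anomaly-detection sketch for payment screening, one common
# approach to AI-driven fraud detection. Features and thresholds are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Invented per-payment features: log amount, payee account age (scaled),
# recent payment count to the same payee.
payments = rng.standard_normal((10_000, 3))
payments[:25] += 4.0  # plant a few outliers to stand in for suspect payments

detector = IsolationForest(contamination=0.005, random_state=1).fit(payments)
flags = detector.predict(payments)  # -1 marks suspected anomalies

print(f"flagged for manual review: {(flags == -1).sum()} of {len(payments)}")
```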

“We see that as one good use case where it’s already starting to pay off,” Peretti said. “How do we look across the rest of the organization to see where different areas and how it’s being used may be able to help us deliver the services better at the end of the day.”

Treasury is also setting up an AI governance board, in line with OMB guidance, to help address some of the major ethical considerations around AI. For instance, OMB is giving agencies until Dec. 1 to implement “concrete safeguards” that protect Americans’ rights or safety when agencies use AI tools.

Peretti said Treasury’s governance board will include representatives who come from offices across the agency, including those involved in cybersecurity, diversity and inclusion, legal, risk management, human capital and other areas, “to make sure we’re taking into account all these potential challenges.”

“With any kind of new technology, we always see there’s opportunities and challenges,” Peretti said. “How do we work to be able to balance that and have that broad conversation across the organization to get that holistic approach so we’re all looking at it from all the different angles, but then also looking at it in a way in which we’re going to be able to use it in that safe, secure and trustworthy manner.”

Learning objectives:

  • Trends in AI adoption across government
  • Risks and concerns of using AI
  • Future technology and milestone goals for 2024

By providing your contact information to us, you agree: (i) to receive promotional and/or news alerts via email from Federal News Network and our third party partners, (ii) that we may share your information with our third party partners who provide products and services that may be of interest to you and (iii) that you are not located within the European Economic Area.

Please register using the form on this page.
Have questions or need help? Visit our Q&A page for answers to common questions or to reach a member of our team.

Speakers

Lou Charlier

Chief AI Officer and Deputy Chief Information Officer

Labor Department

Brian Peretti

Deputy Chief AI Officer and Director, Domestic and International Cyber Policy

Office of Cybersecurity and Critical Infrastructure Protection, Treasury Department

Kris Saling

Acting Director and Chief of Staff, Innovation Directorate

Army Recruiting Command

Amy Jones

Public Sector AI Lead

EY

Justin Doubleday

Reporter

Federal News Network
