S&T official picked to lead intelligence community’s AI work

The appointment of a chief AI officer comes as the IC looks to safely adopt large language models and other technologies.

The top U.S. spy office has tapped a research official to spearhead the intelligence community’s work on AI.

John Beieler, who serves as Director of National Intelligence Avril Haines’ top science and technology advisor, has been named chief artificial intelligence officer at the Office of the Director of National Intelligence. Beieler confirmed his additional role during a speech today at an event hosted by the Intelligence and National Security Alliance in Arlington, Va.

Beieler now leads a council of chief AI officers from the 18 elements of the intelligence community, including the CIA, the National Security Agency and the Defense Intelligence Agency. He said the council, which reports directly to Haines, has been meeting every two weeks for the last two months.

“What we’re focusing on as a group is AI governance,” Beieler said.

He said the group is writing the first IC-wide directive on AI. It will describe what intelligence agencies need to do to deploy AI and machine learning.

“Things like documentation, standards, [application programming interfaces], what sort of data documentation needs to happen, how all these things fit together, the responsible adoption, ongoing monitoring,” Beieler said, describing what goes into the directive. “The responsibility of an individual developer, the responsibility of management and leadership. We’re really focusing on that responsible, ethical adoption.”

He added that the directive will also lay out civil liberties and privacy protections that need to be included in the algorithms developed by the intelligence community.

The new AI council is also leading an update to ODNI’s AI strategy.

“We want to make sure that we have that one consolidated viewpoint of, what do we think is important for AI and the IC, to drive some of those resource conversations,” Beieler said.

Concerned about rapid advances in AI by China and other countries, lawmakers have also urged the intelligence community to prioritize the adoption of AI, with safeguards.

The Fiscal 2024 National Defense Authorization Act directs the DNI to establish new policies “for the acquisition, adoption, development, use, coordination, and maintenance of artificial intelligence capabilities,” including minimum guidelines for the performance of AI models used by spy agencies.

Beieler has a background in data science and machine learning. Prior to joining ODNI in 2019, he led research programs on human language technology, machine learning and vulnerabilities in AI at the Intelligence Advanced Research Projects Agency.

At ODNI, he has also helped lead the intelligence community’s Augmenting Intelligence using Machines, or “AIM,” strategy. With many intel agencies dealing with a deluge of data, the goal of AIM has been to coordinate the adoption of AI and automation across spy agencies.

While spy agencies have used forms of artificial intelligence and machine learning for decades, the emergence of widely available large language models like ChatGPT has added both new considerations and renewed urgency to the AI race.

“A lot of this is focused on making sure that folks that are using these tools understand them,” Beieler said.

ODNI has already funded various training and upskilling programs across intelligence agencies. Beieler also acknowledged the challenges that come with large language models and other generative AI, such as hallucinations, copyright issues and privacy concerns.

“Getting analysts, collectors and the broad base of the IC workforce familiar with these things, so they understand some of these failure modes, but doing that in such a way that they don’t immediately write off the technology,” Beieler said. “That’s the tricky part in upskilling across the workforce.”

With just a handful of companies — rather than government labs or academia — developing the so-called advanced “frontier AI models,” Beieler acknowledged the intelligence community finds itself in a unique “LLM moment.”

He said it will be crucial to test and evaluate the models for different failure modes. He added that the IC isn’t interested in just “buying a widget” from companies, but in partnering with industry and academia to test and evaluate how AI will impact the world of intelligence.

“That doesn’t mean that we won’t have humans. In fact, I think it might mean that we have more humans, but again, what is the role?” Beieler said. “What is that teaming, what is that partnership, and how do we work? And how do we put some of those guardrails in so that analysts understand and collectors understand some of these models that they’re working with?”

