Cyber Leaders Exchange 2024: Los Alamos’ Chris Rawlings on advancing AI while ensuring security

Los Alamos National Laboratory explores risks and opportunities AI presents as it becomes integrated into more complex systems.

Ever since artificial intelligence started permeating the federal government, the White House has made implementing AI technologies safely and securely a priority, issuing the AI executive order in 2023, followed by major guidance and directives.

Meanwhile, research institutions such as Los Alamos National Laboratory, which has been at the forefront of AI research for years, are exploring the opportunities and risks that AI presents as it becomes more integrated into complex systems, especially in defense and critical infrastructure.

At LANL, multiple cyber groups tackle both immediate and long-term cybersecurity challenges. Some perform day-to-day firefighting, defending networks used by defense and civilian agencies. Others explore short- and medium-term cyber challenges that don't necessarily rise to the level of big research projects. Several more teams conduct long-term research.

AI research to stay ahead of adversaries

Chris Rawlings, group leader for advanced research in cyber systems at LANL, said that integrating AI into complex systems introduces more attack surfaces that bad actors can exploit. And even when an issue is detected, understanding why it happened, whether it was a malfunction or an attack, can be challenging.

“We know that our adversaries are going to be doing more and more work in the traditional cyber sense with automating attacks against computer systems, whether that be an automated scan, whether that be automated vulnerability detection — trying to find new zero-days — whether it be self-contained AI system code that’s an agent, if you will, that works its way through a network and learns how we defend, learns how we instrument our systems and finds creative ways around that,” said Rawlings during Federal News Network’s Cyber Leaders Exchange 2024.

Beyond cyberattacks, the risks of integrating AI into traditional kinetic warfare include, among others, the potential for bad actors to exploit vulnerabilities in automated systems or to manipulate AI-driven decisions.

One example of how AI already delivers an information advantage to frontline soldiers is the AI-powered Maven Smart System. The system pulls data from a wide range of sources — existing intelligence databases, satellite intelligence, publicly available data, such as social media feeds and open source intelligence — and provides a single interface that lets users access, analyze and make decisions based on data.

AI research to reveal potential fragility, biases

Rawlings said that although AI provides significant operational advantages, its fragility cannot be overlooked.

“One of my spiels that I always give when I explain AI and how fragile it can be, and why we really need to think about this and put the time and effort into it, is if a system misidentifies a cat as a dog, or a dog as a cat, that doesn’t really matter,” he said. “But if we start putting automated defenses into our military systems and one of our adversaries starts intentionally going at its processing, trying to trick it into doing something, that can have all sorts of unintended consequences. We have to be exceptionally cognizant and careful of that as we go forward.”
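The fragility Rawlings describes is often demonstrated with adversarial examples: tiny, targeted perturbations to a model's input that flip its output. As a minimal sketch, assuming a PyTorch image classifier (the model and labels here are placeholders, not any LANL or military system), the fast gradient sign method nudges each pixel in the direction that most increases the model's loss:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Minimal fast-gradient-sign-method sketch: perturb each input
    pixel in the direction that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # The perturbation is tiny (epsilon per pixel) and often invisible
    # to a human, yet it can be enough to flip the prediction,
    # e.g. from "cat" to "dog".
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A perturbation this small is typically imperceptible to a person, which is exactly why an adversary "going at its processing" can mislead an automated defense without anyone noticing.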

One of the areas LANL researchers focus on is understanding AI at the systems engineering level: how it's created, connected and integrated.

“This ties directly back into the system of AI. How are these systems at a systems engineering level? How are they created? How are they connected? Is it something that, with enough poking and prodding, you are able to get it to run code on that system and it’s sort of a modern version of a remote code execution via AI, or persistence via AI, or data exfiltration via AI? You couple that with some of these really complex military systems or intelligence systems, and it can get scary pretty quick if you jump in and start doing things before you really think about all of the threats and all the potential impact,” said Rawlings.
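One way to picture the "remote code execution via AI" scenario Rawlings raises is an AI component whose output gets executed without validation. The following is a deliberately simplified, hypothetical sketch, not drawn from any real system, contrasting that with treating model output as untrusted data:

```python
# Hypothetical and deliberately simplified, not any real system's design.

def run_ai_output_unsafely(model_output: str) -> None:
    # If an AI component's output is executed directly, anyone who can
    # influence its input (e.g., via prompt injection) effectively gets
    # remote code execution on the host.
    exec(model_output)  # dangerous: no sandbox, no validation

ALLOWED_ACTIONS = {"status", "report"}

def run_ai_output_safely(model_output: str) -> None:
    # Safer pattern: treat model output as untrusted data, never as code,
    # and map it onto a small set of vetted actions.
    action = model_output.strip().lower()
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"rejected unvetted action: {action!r}")
    print(f"running vetted action: {action}")
```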

In addition to security vulnerabilities, AI carries the risk of perpetuating biases. Because AI models are trained on historical data, systems can inherit and automate the biases in that data.

“AI is based on what we’ve already done in the past, and that raises some concerns. We know that we can be biased. Are we embedding our existing biases into automated systems and making them automated biases?” Rawlings said. “There was a human resources system that was trained on all the people that have been hired in an organization, and so it reflected what their biases were, whether it be gender, whether it be race, whether it be education. And so we have to be very careful with that.”
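The pattern behind the hiring example Rawlings cites is general: a model fit to past decisions reproduces whatever correlations those decisions contain. A hypothetical sketch with synthetic data (the feature names and numbers are illustrative only) shows a classifier learning a bias that was baked into its training labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical historical hiring data: a protected attribute that should
# be irrelevant, and a skill score that should drive the decision.
protected = rng.integers(0, 2, n)   # e.g., an encoded demographic group
skill = rng.normal(0.0, 1.0, n)

# Past (biased) decisions favored one group regardless of skill.
hired = (skill + 1.5 * protected + rng.normal(0.0, 0.5, n) > 1.0).astype(int)

X = np.column_stack([protected, skill])
model = LogisticRegression().fit(X, hired)

# Hold skill constant and vary only the protected attribute: the model
# reproduces the historical bias as sharply different hire rates.
for group in (0, 1):
    probe = np.column_stack([np.full(100, group), np.zeros(100)])
    rate = model.predict_proba(probe)[:, 1].mean()
    print(f"group {group}: predicted hire probability {rate:.2f}")
```

Even with skill held constant, the model predicts very different hire rates for the two groups, because the historical labels it learned from did.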

Accelerating security approvals for AI uses

As agencies work to integrate AI, cloud computing and machine learning into their operations, getting these systems approved for secure use is cumbersome.

Rawlings said Los Alamos is exploring using AI to speed up the system approval process.

“Ironically enough, that’s one of the things that we’re looking at accelerating. How can we get AI systems and cloud systems and new compute systems approved for use faster? It’s the next evolution of a federal system accreditation,” he said.

“For the longest time, computer networks and federal systems were pretty plain and boring. You had a bunch of desktop computers. You had a bunch of laptops. You had a data center somewhere on site, and everything was nice and contained and behind your firewall and all your protections. But with cloud services, whether it be infrastructure as a service, building a network in the cloud, whether it be software as a service, we think about things like zero trust. We think about things like people working remotely and how we can enable that. How can we enable that for the variety of missions?”

Discover more articles and videos now on Federal News Network’s Cyber Leaders Exchange 2024 event page.

