Air Force’s new policy sets guardrails around generative AI

Airmen and guardians can now experiment with commercial generative AI, and a new policy lays out how they can do it safely.

The Air Force is putting in place guardrails for the service to test and experiment with commercial generative artificial intelligence.

Air Force Chief Information Officer Venice Goodwine, who signed the new AI policy on Wednesday, said she has created an innovation zone within the Air Force’s Office 365 environment where service members can safely experiment with the technology.

“The one thing that we are very cautious of and, you saw this, there are those that didn’t understand the technology,” Goodwine said at the AFCEA Air Force IT Day on Thursday. “We’re figuring out ways how do we really just within our own confines of our own data then use this technology.”

Goodwine said the new policy requires training so airmen and guardians can understand how the technology works. She also cautioned against using government accounts to sign up for commercial generative AI platforms and advised against putting non-public data into the GenAI tools.

In addition, Goodwine’s office has set up a specialized team with deep knowledge of generative AI to help the service better understand the technology and adopt it safely and responsibly.

“If I just want to use GenAI to help me create PowerPoints, I can do that as an innovator. If I want to use GenAI for something more sensitive, because remember what I do for a living. I drop bombs. And I have to be very careful how I use GenAI and AI and any technology in that environment. So understand that we look at the innovation cycle and we figure out based on risk, where we want to go and how fast we want to, should we be an innovator in that space or early adopter?” Goodwine said.

The Air Force is also participating in Task Force Lima. Led by the Defense Department’s Chief Digital and Artificial Intelligence Office (CDAO), the task force’s objective is to monitor, evaluate, and guide the implementation of the nascent technology, including identifying the security risks it can pose and ensuring it is adopted responsibly.

The Air Force set a goal to be AI-ready by 2025 and AI-competitive by 2027. As of now, the service has 44 different AI projects. For example, the Advanced Battle Management System (ABMS), the Air Force’s contribution to the Defense Department’s Joint All Domain Command and Control (JADC2) framework, uses artificial intelligence to enable decision-making processes for combat operations. The service is also using this technology to help predict how a decision could affect a particular program and its budget.

Last year, the Office of the Secretary of Defense issued a policy framework for AI to accelerate its safe and responsible implementation across the Defense Department. While the military service branches have been implementing policies around safe and responsible use of AI, action is also needed from Congress. The National Defense Authorization Act for fiscal 2024, which is currently awaiting President Joe Biden’s signature, includes key AI priorities. Once signed into law, the bill will require the deputy secretary of defense to establish data management, AI, and digital solutions for business systems and capabilities that will accelerate decision-making.

Copyright © 2024 Federal News Network. All rights reserved.