While generative artificial intelligence and large language models could transform Navy operations, the service’s top technology official is warning that they also could create operational security risks and would require human review.
In a September memo, the Navy's acting chief information officer, Jane Rathbun, noted that LLMs present an operational security risk because they save every prompt they are given. Rathbun said these tools must be verified and validated by humans.
“These models have the potential to transform mission processes by automating and executing certain tasks with unprecedented speed and efficiency,” the memo said. “In order to effectively leverage the complete potential of these tools, they must be complemented by human expertise.”
For general use, the memo calls for a thorough human review process that applies critical thinking to counter hallucinations: generative AI responses containing false data or information that merely appears correct.
The memo said users should also proofread and fact-check inputs and outputs, check source credibility, and address inaccuracies and potential intellectual property issues. AI-generated code must go through a thorough review, evaluation and testing process in a controlled, non-production environment before it can be used.
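The memo does not spell out what such a review pipeline looks like. As a purely illustrative sketch in Python, and not a process drawn from the memo, a reviewer might run an AI-generated snippet through automated checks in an isolated namespace before any human sign-off; the slugify function and its test cases below are hypothetical.

# Illustrative sketch only (not from the Navy memo): gate AI-generated code
# behind automated tests in an isolated, non-production namespace before use.
# The candidate function "slugify" and its test cases are hypothetical.

AI_GENERATED_SOURCE = '''
def slugify(text):
    return "-".join(text.lower().split())
'''

def evaluate_candidate(source: str) -> bool:
    """Run the candidate in an empty namespace and check it against known cases."""
    sandbox = {}
    try:
        exec(source, sandbox)  # isolated namespace, not the application's globals
        slugify = sandbox["slugify"]
        expected = {"Hello World": "hello-world", "  Mixed   Spacing ": "mixed-spacing"}
        return all(slugify(raw) == out for raw, out in expected.items())
    except Exception:
        return False  # any error means the candidate is rejected outright

if __name__ == "__main__":
    verdict = "passed automated checks" if evaluate_candidate(AI_GENERATED_SOURCE) else "rejected"
    print(f"AI-generated candidate {verdict}; human review still required before production use.")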
However, for military use, commercial AI language models such as OpenAI’s ChatGPT, Google’s Bard and Meta’s LLaMA “are not recommended for operational use cases until security control requirements have been fully investigated, identified and approved for use within controlled environments,” the memo said.
Rathbun said feeding sensitive or classified information into unregulated LLMs could pose a security risk by inadvertently disclosing that information. Existing policy governing the handling of sensitive information and the proper use of distribution A content therefore applies. The Navy will establish rules for, and access to, LLMs through Jupiter, its enterprise data and analytics platform, and will put security measures in place to protect data.
Organizational leadership is responsible and accountable for user-created vulnerabilities, violations and unintended consequences arising from the use and adoption of LLMs, the memo said.
While the Navy recently issued its guidance on generative AI and LLMs, other parts of the Defense Department are being encouraged to explore the technology and to establish the proper channels for doing so. The National Academies of Sciences, Engineering and Medicine said the Air Force and Space Force must invest in AI development, testing and evaluation. The September report stated that building AI expertise within the workforce will be important, as will AI-related testing, infrastructure, methods, policies and tools. A restrictive Space Force policy, however, works against that recommendation.
Federal News Network has learned that a Space Force memo prohibits the use of generative AI for any work publication, including situations where a web-accessible generative AI tool is given publicly accessible or low-confidentiality unclassified information as input.
The recent Navy guidance follows moves by other agencies, such as the General Services Administration and the Environmental Protection Agency, to regulate employees’ use of these tools. It also comes after the Office of Management and Budget began work in September on new requirements for managing AI.
Kirsten Errick covers the Defense Department for Federal News Network. She previously reported on federal technology for Nextgov on topics ranging from space to the federal tech workforce. She has a Master’s in Journalism from Georgetown University and a B.A. in Communication from Villanova University.