NARA sees ‘unacceptable risks’ from using ChatGPT for agency business

The National Archives and Records Administration says there’s an “unacceptable risk” of its sensitive data making its way into ChatGPT’s large language model.

The National Archives and Records Administration has become the latest federal agency to bar its employees from using ChatGPT for work purposes, citing “unacceptable risk” to the agency’s data.

The policy decision stems from what agency officials said are concerns that any data employees enter as prompts into the commercial version of the AI service might not only be used to train the ChatGPT model, but could also make its way into responses to other users.

“Various media reports indicate there is a growing amount of personally identifiable information and corporate proprietary information showing up in ChatGPT and other AI services,” Keith Day, NARA’s chief information security officer, wrote in a memo to employees Wednesday. “Employees who want to use AI to help them in their jobs often don’t realize that these types of AI services keep your input data for further training of the AI. If sensitive, non-public NARA data is entered into ChatGPT, our data will become part of the living data set without the ability to have it removed or purged.”

The new policy takes effect on Monday, and the agency will enforce it by blocking access to OpenAI’s ChatGPT websites on all NARA desktop and laptop computers.

OpenAI press representatives did not immediately respond to a request for comment. The new NARA policy was first reported by 404 Media.

NARA is not alone in its concerns, as federal agencies continue to seek a balance between incorporating AI into their operations and safeguarding sensitive information.

Last June, for example, the General Services Administration implemented an interim policy that blocks all publicly available large language model generative AI tools from GSA computers. The policy allows exceptions in some areas, such as research projects, but even in those cases bars employees from entering any non-public data into the tools.

The Environmental Protection Agency issued a similar directive last summer, warning employees that entering agency information into publicly available LLMs “could lead to potential data breaches, identity theft, financial fraud or inadvertent release of privileged information.”

NARA’s policy doesn’t go quite as far — at least not yet. Wednesday’s memo singles out ChatGPT for prohibition.

“NARA understands the benefits of using AI tools to help you with work products. To ensure the responsible use of these tools, we are exploring the use of other AI solutions, such as Microsoft Copilot and Google Gemini, which provide services similar to ChatGPT, but in a more controlled environment,” Day wrote. “These tools differ from ChatGPT because they protect data input by federal agencies, placing it in a private repository that is not shared with others.”

The Biden administration, meanwhile, has been trying to establish more consistency across agencies’ AI policies.

In late March, the Office of Management and Budget issued its first-ever governmentwide policy on how agencies should leverage AI while accounting for some of the technology’s risks, including 10 specific steps each agency must take.

Among other requirements, by Dec. 1, all agencies will have to outline “concrete safeguards” to protect data and mitigate safety risks. The policy also calls for more transparency about how agencies are using AI, including annual inventories of each agency’s use cases and metrics about programs that are too sensitive to disclose to the public.
