Driving agency AI literacy utilizing guardrails and frameworks
With functionalities ranging from translation services to automated content creation, these tools can streamline complex tasks and free up valuable resources.
Generative artificial intelligence, particularly through the use of large language models (LLMs), has the capacity to revolutionize the public sector.
An LLM is a type of AI that has been trained on vast amounts of text data to understand and generate human-like text, enabling it to answer questions, create content and provide insights.
For public sector leaders, it serves as a powerful tool to enhance decision-making, automate routine tasks and engage with constituents more effectively through advanced natural language processing capabilities. With functionalities ranging from translation services to automated content creation, these tools can streamline complex tasks and free up valuable resources.
The agility and adaptability of GenAI make it an attractive option for modernizing and improving government services and fostering a more dynamic interaction with citizens to enhance user experience. GenAI can quickly synthesize data and present it into a coherent response that is easily understandable and actionable. For example, instead of manually sifting and reading through hundreds of emails on a particular email thread or subject topic, GenAI (Microsoft Copilot) can scan all relevant emails to generate a content summary of the email thread to help with human understanding and recommend any follow-up actions.
LLMs enable machines to learn and understand textual information to provide the GenAI capabilities described above.
AI literacy: Challenges and risks to adoption
Despite the promise of GenAI, its implementation is not without obstacles. The primary concerns revolve around data security, privacy and the integrity of the models themselves. Risks include:
Sensitive information disclosure: The use of GenAI tools, especially those that are open source or web based, can lead to inadvertent exposure of sensitive information. User prompts may become part of the training data, potentially leaking proprietary or confidential data into the public domain. LLMs may inadvertently reveal confidential data in responses, leading to unauthorized data access, privacy violations and security breaches.
Prompt manipulation: Malicious actors could exploit the responsiveness of LLMs to prompts, leading to the generation of false or harmful content, or execution of exploitative activities.
Training data poisoning: This type of cyber attack involves manipulating data or fine-tuning processes to introduce vulnerabilities, backdoors or biases that could compromise the model’s security, effectiveness or ethical behavior. Inherent biases in training data can be perpetuated and amplified by LLMs, leading to skewed outputs that may affect decision-making processes.
The advent of GenAI brings with it a proliferation of social engineering threats. As GenAI technologies become more sophisticated and accessible, they provide powerful tools that can be used to craft highly convincing and manipulative content. This capability significantly lowers the barrier for malicious actors to conduct social engineering attacks, which are designed to exploit human psychology rather than technical vulnerabilities.
The pervasiveness of GenAI has made it easier for threat actors to generate phishing emails, fake news and other forms of deceptive content at scale. These materials can be tailored to target specific individuals or organizations, making them more difficult to detect and resist. The challenge is further compounded by the speed at which GenAI can produce such content, outpacing traditional security measures that rely on manual detection and response.
The complexity and potential risks associated with LLMs might instinctively lead some chief information security officers to block these tools until they have a better handle on the risk and mitigation techniques. However, this approach presents its own challenges. Large agencies have implemented various rules over the years, limiting access to certain applications and social media platforms. Employees, often unaware of the nuanced reasons behind these decisions, might equate their inability to access LLMs to concerns about productivity rather than sensitivity toward data protection. Often, when applications are blocked on official networks, employees access them through personal devices. According to the 2024 Work Trend Index Annual Report by Microsoft and LinkedIn, at least three in four employees are using GenAI at work, and over half of users are reluctant to admit to it. Cyberhaven Labs recently analyzed ChatGPT usage for 1.6m workers across various industries and detected thousands of attempts to paste corporate data into ChatGPT (and employees copied data out of ChatGPT even more, at a nearly 2:1 ratio). We’re beginning to understand the impact of tools such as ChatGPT on organizations — and the enterprise risks it creates.
Strategic recommendations for AI literacy
In this environment, the workforce at large must be empowered to become the first line of defense against these emerging threats. By integrating AI capabilities into both mission-focused and support functions, employees gain firsthand experience with leading-edge technology, fostering a deeper understanding of its potential and limitations. This familiarity is crucial for developing security literacy, as it allows individuals to recognize the nuances of AI-generated content, discern patterns that may indicate manipulation and appreciate the sophistication of the technology that adversaries might employ.
When the workforce is well-versed in how GenAI operates and what its outputs entail, they are better equipped to spot inconsistencies or anomalies that could signal malicious attempts to exploit AI tools for disinformation or deception. In essence, the more knowledgeable individuals are about the inner workings and outputs of GenAI, the less susceptible they become to the social engineering tactics that increasingly sophisticated threat actors might use to craft persuasive and manipulative narratives.
To harness the benefits of GenAI while mitigating the associated risks, government agencies should consider the following recommendations:
Education and training: Develop systematic programs to educate the workforce on the nuances and risks of GenAI technology. Understanding the capabilities and limitations of AI is crucial for recognizing potential threats and reducing susceptibility to social engineering and unintentional sensitive data leaks. Encourage a culture of security literacy where employees can discern AI-generated content and identify patterns indicative of manipulation.
AI guardrails and frameworks: Implement AI governance frameworks that provide both reactive and proactive measures to guide the use and security of GenAI tools. This will promote self-directed compliance and minimize the risk of errors.
Risk management: Adopt novel approaches to risk management that extend beyond traditional cyber risk frameworks, addressing the inherent vulnerabilities of LLMs attributable to threat surfaces associated with training data, model bias and prompts.
Security as a service and AI application development blueprints: Provide a low barrier to entry for security and LLM risk detection services, inclusive of GitHub-style shared tools that can be deployed across multiple environments for consistent management.
The evolving technological landscape necessitates a growth mindset — embracing proactive and reactive strategies and fostering understanding and familiarity with AI among employees. This will pave the way for government agencies to fully harness the benefits of AI while mitigating risks, ensuring accountability and forging a resilient, digitally driven future.
Amy Jones is U.S. public sector AI lead at EY. The views reflected in this article are the views of the author and do not necessarily reflect the views of Ernst & Young LLP or other members of the global EY organization.
Driving agency AI literacy utilizing guardrails and frameworks
With functionalities ranging from translation services to automated content creation, these tools can streamline complex tasks and free up valuable resources.
Generative artificial intelligence, particularly through the use of large language models (LLMs), has the capacity to revolutionize the public sector.
An LLM is a type of AI that has been trained on vast amounts of text data to understand and generate human-like text, enabling it to answer questions, create content and provide insights.
For public sector leaders, it serves as a powerful tool to enhance decision-making, automate routine tasks and engage with constituents more effectively through advanced natural language processing capabilities. With functionalities ranging from translation services to automated content creation, these tools can streamline complex tasks and free up valuable resources.
The agility and adaptability of GenAI make it an attractive option for modernizing and improving government services and fostering a more dynamic interaction with citizens to enhance user experience. GenAI can quickly synthesize data and present it into a coherent response that is easily understandable and actionable. For example, instead of manually sifting and reading through hundreds of emails on a particular email thread or subject topic, GenAI (Microsoft Copilot) can scan all relevant emails to generate a content summary of the email thread to help with human understanding and recommend any follow-up actions.
Learn how high-impact service providers have helped the government reinvent the way they deliver their mission and services to the public in this exclusive ebook, sponsored by Carahsoft. Download today!
LLMs enable machines to learn and understand textual information to provide the GenAI capabilities described above.
AI literacy: Challenges and risks to adoption
Despite the promise of GenAI, its implementation is not without obstacles. The primary concerns revolve around data security, privacy and the integrity of the models themselves. Risks include:
The advent of GenAI brings with it a proliferation of social engineering threats. As GenAI technologies become more sophisticated and accessible, they provide powerful tools that can be used to craft highly convincing and manipulative content. This capability significantly lowers the barrier for malicious actors to conduct social engineering attacks, which are designed to exploit human psychology rather than technical vulnerabilities.
The pervasiveness of GenAI has made it easier for threat actors to generate phishing emails, fake news and other forms of deceptive content at scale. These materials can be tailored to target specific individuals or organizations, making them more difficult to detect and resist. The challenge is further compounded by the speed at which GenAI can produce such content, outpacing traditional security measures that rely on manual detection and response.
The complexity and potential risks associated with LLMs might instinctively lead some chief information security officers to block these tools until they have a better handle on the risk and mitigation techniques. However, this approach presents its own challenges. Large agencies have implemented various rules over the years, limiting access to certain applications and social media platforms. Employees, often unaware of the nuanced reasons behind these decisions, might equate their inability to access LLMs to concerns about productivity rather than sensitivity toward data protection. Often, when applications are blocked on official networks, employees access them through personal devices. According to the 2024 Work Trend Index Annual Report by Microsoft and LinkedIn, at least three in four employees are using GenAI at work, and over half of users are reluctant to admit to it. Cyberhaven Labs recently analyzed ChatGPT usage for 1.6m workers across various industries and detected thousands of attempts to paste corporate data into ChatGPT (and employees copied data out of ChatGPT even more, at a nearly 2:1 ratio). We’re beginning to understand the impact of tools such as ChatGPT on organizations — and the enterprise risks it creates.
Strategic recommendations for AI literacy
In this environment, the workforce at large must be empowered to become the first line of defense against these emerging threats. By integrating AI capabilities into both mission-focused and support functions, employees gain firsthand experience with leading-edge technology, fostering a deeper understanding of its potential and limitations. This familiarity is crucial for developing security literacy, as it allows individuals to recognize the nuances of AI-generated content, discern patterns that may indicate manipulation and appreciate the sophistication of the technology that adversaries might employ.
When the workforce is well-versed in how GenAI operates and what its outputs entail, they are better equipped to spot inconsistencies or anomalies that could signal malicious attempts to exploit AI tools for disinformation or deception. In essence, the more knowledgeable individuals are about the inner workings and outputs of GenAI, the less susceptible they become to the social engineering tactics that increasingly sophisticated threat actors might use to craft persuasive and manipulative narratives.
To harness the benefits of GenAI while mitigating the associated risks, government agencies should consider the following recommendations:
Read more: Commentary
The evolving technological landscape necessitates a growth mindset — embracing proactive and reactive strategies and fostering understanding and familiarity with AI among employees. This will pave the way for government agencies to fully harness the benefits of AI while mitigating risks, ensuring accountability and forging a resilient, digitally driven future.
Amy Jones is U.S. public sector AI lead at EY. The views reflected in this article are the views of the author and do not necessarily reflect the views of Ernst & Young LLP or other members of the global EY organization.
Copyright © 2024 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area.
Related Stories
White House says agencies hired 200 AI experts so far through governmentwide ‘talent surge’
Federal contractors get some guidance on using AI when hiring
Why artificial intelligence will never replace your job