Shining a light on shadow AI: Three ways to keep your enterprise safe
To mitigate shadow AI, leaders should build cross-functional teams that leverage interdisciplinary expertise, categorize risk across AI initiatives, and deploy monitoring tools that detect unauthorized use.
AI promises to revolutionize every industry, empowering employees to work smarter and faster – and many are eager to get started. But AI is a double-edged sword: Without proper management, unmonitored AI usage can expose organizations to serious risks. While more than half of U.S. employees are already using generative AI tools, 38% are using them without their manager’s knowledge.
This unauthorized use of AI, dubbed shadow AI, is a growing threat that must be top of mind for federal IT leaders. To mitigate shadow AI, leaders should build cross-functional teams to leverage interdisciplinary expertise and foster organizational awareness, practice structured risk categorization across AI initiatives, and implement monitoring tools to detect unauthorized AI use. By taking these steps, federal IT leaders can better control AI deployment and safeguard their organizations from hidden vulnerabilities.
Shadow AI and the risks of experimentation in the dark
Let’s first unpack shadow AI and the risks it poses. Shadow AI is the use of AI within an organization without the explicit approval or awareness of its IT department or governance framework. Employees often turn to shadow AI with good intentions, such as streamlining their own tasks, but this hidden usage does more harm than good.
Shadow AI can cause costly organizational inefficiencies because unauthorized AI tools are often incompatible with existing infrastructure. More seriously, it opens the door to security vulnerabilities that can result in hefty fines for violating compliance rules or data privacy regulations. Above all, shadow AI means that IT leaders have lost visibility into, and control over, the organization’s AI landscape, making it difficult to manage risk or ensure compliance.
Three ways to combat shadow AI
Faced with the consequences of AI misuse, what can IT leaders do? The first step is to build the right team. IT leaders can stand up fusion teams, cross-functional groups that combine the technical expertise of IT with the business knowledge of other departments, to comprehensively assess AI risk and improve communication across the enterprise. Including legal and compliance staff on these teams is particularly important to ensure regulatory and ethical considerations are appropriately addressed. This collaborative approach enables organizations to create robust policies and clear guidelines for the safe development and deployment of AI technologies throughout the entire enterprise.
Risk categorization is another key tool for protecting an organization against shadow AI. It helps organizations identify, evaluate and prioritize the potential risks associated with AI initiatives, and a structured, standardized approach lets them direct resources to the most pressing vulnerabilities first. To get started, leaders can draw on the National Institute of Standards and Technology’s AI Risk Management Framework to identify generative AI risks and implement strategies for responsible use. Key recommendations include mapping AI use cases across the organization to understand where and how AI is deployed and what risks each use carries; establishing measures for the safe use of AI, such as protocols to assess accuracy, fairness and reliability; and continuously monitoring AI usage to detect unauthorized activity.
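To make the mapping step concrete, here is a minimal sketch of an AI use-case risk register in Python. The fields, weights and thresholds are illustrative assumptions for this article, not anything prescribed by NIST; a real inventory would reflect an agency’s own risk criteria.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in an organization-wide inventory of AI use cases."""
    name: str
    owner: str                # accountable business unit
    approved: bool            # registered with IT governance?
    handles_pii: bool         # touches personally identifiable information?
    impacts_decisions: bool   # influences decisions about people?

def risk_score(uc: AIUseCase) -> int:
    """Coarse additive score; the weights are illustrative only."""
    score = 0
    if uc.impacts_decisions:
        score += 3
    if uc.handles_pii:
        score += 2
    if not uc.approved:       # unsanctioned (shadow) use raises the score
        score += 2
    return score

def risk_tier(score: int) -> str:
    """Bucket a score into a tier so review effort goes to the riskiest uses."""
    return "high" if score >= 4 else "medium" if score >= 2 else "low"

# A hypothetical inventory built from the mapping exercise.
inventory = [
    AIUseCase("resume screening assistant", "HR",
              approved=False, handles_pii=True, impacts_decisions=True),
    AIUseCase("meeting-notes summarizer", "Communications",
              approved=True, handles_pii=False, impacts_decisions=False),
]

# Review the highest-risk use cases first.
for uc in sorted(inventory, key=risk_score, reverse=True):
    print(f"{uc.name}: {risk_tier(risk_score(uc))}")
```

Even a rough register like this forces the conversation the framework is after: which uses exist, who owns them, and which deserve scrutiny first.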
And the time to start is now. As of December 1, 2024, federal agencies in the United States must apply risk management practices to any AI application with the potential to impact human lives. It is therefore more important than ever for federal IT leaders to enforce risk management internally and ensure compliance with rapidly evolving regulations.
Lastly, IT leaders should implement monitoring tools that can detect unauthorized AI use across the organization. These include cybersecurity frameworks that limit and control interactions with AI systems and track all use, flagging unauthorized activity. Model observability tools can also support algorithmic accountability by surfacing the reasoning behind model decisions, giving leaders greater visibility into AI usage and flagging deviations from normal model behavior. Above all, there is no one-size-fits-all approach to AI governance and risk management; organizations should identify the tools that best fit their unique needs and capabilities.
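To illustrate what such detection can look like at its simplest, here is a hedged Python sketch that scans a web-proxy log export for traffic to known generative AI services that are not on an approved list. The CSV columns, host lists and file name are hypothetical assumptions; in practice this logic would typically run as a secure web gateway or SIEM rule rather than a standalone script.

```python
import csv

# Hosts associated with public generative AI services (illustrative list).
GENAI_HOSTS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
# Services sanctioned by IT governance (illustrative).
APPROVED_HOSTS = {"gemini.google.com"}

def flag_unsanctioned(log_path: str) -> list[dict]:
    """Return proxy-log rows that reached a generative AI host
    not approved for organizational use."""
    flagged = []
    with open(log_path, newline="") as f:
        # Assumes a CSV export with at least: timestamp, user, host
        for row in csv.DictReader(f):
            host = row["host"].strip().lower()
            if host in GENAI_HOSTS and host not in APPROVED_HOSTS:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for hit in flag_unsanctioned("proxy_log.csv"):  # hypothetical export
        print(f"{hit['timestamp']}  {hit['user']} -> {hit['host']}")
```

An allow-list check this simple will miss plenty, but it gives IT a first signal of where unsanctioned tools are in use, which can then feed the fusion team’s policy discussions.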
Ensuring a balanced approach to innovation
IT leaders face a difficult balancing act with shadow AI. They must foster a transparent, collaborative culture that encourages AI experimentation while ensuring the organization remains secure and controlled. As they implement the tactics above, leaders should communicate clearly with employees and continue to encourage experimentation with AI, albeit with the proper guardrails. Instilling safe and responsible AI habits now will pay off as the technology becomes even more intertwined with our daily work and lives.
Kyle Tuberson is chief technology officer at ICF.