Use of AI comes with new security threats. Agencies will need innovative strategies to protect their AI models
By next year, 75% of government agencies will have launched at least three enterprise-wide initiatives for AI-assisted automation, according to Deloitte. And 60...
Generative AI and other forms of artificial intelligence and machine learning have captured the public’s attention. Individuals, industry and government alike are captivated by the promise of lightning-fast, laser-accurate analysis, predictions and innovation.
By next year, 75% of government agencies will have launched at least three enterprise-wide initiatives for AI-assisted automation, according to Deloitte. And 60% of agency investments in AI and data analytics will be designed to have a direct impact on real-time operational decisions and outcomes, Deloitte says.
The order imposes new requirements on companies that create AI systems and on organizations that use them. For instance, the National Institute of Standards and Technology (NIST) must develop tests to measure the safety of AI models. Vendors that build powerful AI systems must notify government and share results of safety tests before models are released to the public. And federal agencies must take steps to prevent AI models from worsening discrimination in housing, federal benefits programs and criminal justice.
Such protections are important in ensuring safe and fair use of AI and in maintaining the public’s trust in government. But there’s another aspect of AI that requires similar attention, and that’s security. AI introduces several new cyber threats, some of which agencies might not currently be prepared for. For organizations that hope to benefit from AI, now is the time to address AI-related cyber issues.
New AI functionality, new security threats
AI models and outputs are vulnerable to a number of novel cyberattacks. Here are key threats agencies should look out for, along with strategies for mitigating risks:
Poisoning: Poisoning involves the introduction of false or junk information into AI model training to trick the AI system into making inaccurate classifications, decisions or predictions. For example, subtly altering training data in a facial recognition system could cause the system to misidentify people. Poisoning can have serious consequences in use cases like fraud detection or autonomous vehicles.
To protect against poisoning, agencies should restrict who has access to AI models and training data, with strong access controls. Data validation and filtering can exclude potentially malicious data. Anomaly detection tools can help identify unusual patterns or poisoning attempts. And continuous monitoring can spot unusual outputs or any “drift” toward inaccurate responses.
Prompt injection: In a prompt-injection attack, attackers input malicious queries with the goal of obtaining sensitive information or otherwise misusing the AI system. For example, if attackers think the AI model was trained with proprietary data, they could ask questions to expose that intellectual property. Let’s say the model was trained on network device specifications. The attacker could ask the AI solution how to connect to that device, and in the process learn how to circumvent security mechanisms.
To protect against prompt injection, limit access only to authorized users. Use strong access controls like multifactor authentication (MFA). Employ encryption to protect sensitive information. Conduct penetration tests on AI solutions to uncover vulnerabilities. And implement input-validation mechanisms to check prompts for anomalies such as unexpected characters or access through potential attack vectors.
Spoofing: Spoofing presents an AI solution with false or misleading information in an attempt to trick it into making an inaccurate decision or prediction. For example, a spoofing attack might try to convince a facial recognition system that a photograph is actually a live person.
Protecting against spoofing involves standard security measures such as identity and access control. Anti-spoofing solutions can detect common spoofing techniques, and “liveness detection” solutions can ensure that data is from a live source. Ongoing testing of AI solutions with known spoofing techniques can uncover built-in weaknesses.
Fuzzing: Fuzzing is a valid cybersecurity testing technique designed to identify vulnerabilities in AI models. It presents the AI solution with random inputs, or “fuzz,” of both valid and invalid data to gauge how the solution responds. But fuzzing can also be a cyberattack designed to reveal and exploit AI system weaknesses.
The best defense against malicious fuzzing is legitimate fuzzing tests. Also deploy input filtering and validation to block known patterns or IP addresses associated with malicious fuzzing. Continuous monitoring can detect input patterns that indicate a fuzzing attack.
Government agencies are just beginning to explore the myriad potential use cases for AI. They’ll soon start gaining a growing range of new AI-assisted capabilities, from generating content to writing computer code to making accurate, context-aware predictions. They’ll be able to operate more efficiently and serve constituencies more effectively.
As they deploy AI, however, agencies will also be exposed to AI-specific cyber threats. But by becoming aware of the risks and implementing the right protective measures, they can realize the benefits of AI while ensuring its safe use for their organizations and the people they serve.
Burnie Legette is director of IoT sales and artificial intelligence for Intel Corp. Gretchen Stewart is chief data scientist for Intel Public Sector.
Use of AI comes with new security threats. Agencies will need innovative strategies to protect their AI models
By next year, 75% of government agencies will have launched at least three enterprise-wide initiatives for AI-assisted automation, according to Deloitte. And 60...
Generative AI and other forms of artificial intelligence and machine learning have captured the public’s attention. Individuals, industry and government alike are captivated by the promise of lightning-fast, laser-accurate analysis, predictions and innovation.
By next year, 75% of government agencies will have launched at least three enterprise-wide initiatives for AI-assisted automation, according to Deloitte. And 60% of agency investments in AI and data analytics will be designed to have a direct impact on real-time operational decisions and outcomes, Deloitte says.
At the same time, both citizens and government oversight bodies are increasingly worried about the safe and equitable use of AI data and outputs. In response, on October 30 President Biden issued a 100-page Executive Order on Safe, Secure and Trustworthy Artificial Intelligence.
The order imposes new requirements on companies that create AI systems and on organizations that use them. For instance, the National Institute of Standards and Technology (NIST) must develop tests to measure the safety of AI models. Vendors that build powerful AI systems must notify government and share results of safety tests before models are released to the public. And federal agencies must take steps to prevent AI models from worsening discrimination in housing, federal benefits programs and criminal justice.
Such protections are important in ensuring safe and fair use of AI and in maintaining the public’s trust in government. But there’s another aspect of AI that requires similar attention, and that’s security. AI introduces several new cyber threats, some of which agencies might not currently be prepared for. For organizations that hope to benefit from AI, now is the time to address AI-related cyber issues.
New AI functionality, new security threats
AI models and outputs are vulnerable to a number of novel cyberattacks. Here are key threats agencies should look out for, along with strategies for mitigating risks:
Poisoning: Poisoning involves the introduction of false or junk information into AI model training to trick the AI system into making inaccurate classifications, decisions or predictions. For example, subtly altering training data in a facial recognition system could cause the system to misidentify people. Poisoning can have serious consequences in use cases like fraud detection or autonomous vehicles.
To protect against poisoning, agencies should restrict who has access to AI models and training data, with strong access controls. Data validation and filtering can exclude potentially malicious data. Anomaly detection tools can help identify unusual patterns or poisoning attempts. And continuous monitoring can spot unusual outputs or any “drift” toward inaccurate responses.
Prompt injection: In a prompt-injection attack, attackers input malicious queries with the goal of obtaining sensitive information or otherwise misusing the AI system. For example, if attackers think the AI model was trained with proprietary data, they could ask questions to expose that intellectual property. Let’s say the model was trained on network device specifications. The attacker could ask the AI solution how to connect to that device, and in the process learn how to circumvent security mechanisms.
To protect against prompt injection, limit access only to authorized users. Use strong access controls like multifactor authentication (MFA). Employ encryption to protect sensitive information. Conduct penetration tests on AI solutions to uncover vulnerabilities. And implement input-validation mechanisms to check prompts for anomalies such as unexpected characters or access through potential attack vectors.
Spoofing: Spoofing presents an AI solution with false or misleading information in an attempt to trick it into making an inaccurate decision or prediction. For example, a spoofing attack might try to convince a facial recognition system that a photograph is actually a live person.
Protecting against spoofing involves standard security measures such as identity and access control. Anti-spoofing solutions can detect common spoofing techniques, and “liveness detection” solutions can ensure that data is from a live source. Ongoing testing of AI solutions with known spoofing techniques can uncover built-in weaknesses.
Read more: Commentary
Fuzzing: Fuzzing is a valid cybersecurity testing technique designed to identify vulnerabilities in AI models. It presents the AI solution with random inputs, or “fuzz,” of both valid and invalid data to gauge how the solution responds. But fuzzing can also be a cyberattack designed to reveal and exploit AI system weaknesses.
The best defense against malicious fuzzing is legitimate fuzzing tests. Also deploy input filtering and validation to block known patterns or IP addresses associated with malicious fuzzing. Continuous monitoring can detect input patterns that indicate a fuzzing attack.
Government agencies are just beginning to explore the myriad potential use cases for AI. They’ll soon start gaining a growing range of new AI-assisted capabilities, from generating content to writing computer code to making accurate, context-aware predictions. They’ll be able to operate more efficiently and serve constituencies more effectively.
As they deploy AI, however, agencies will also be exposed to AI-specific cyber threats. But by becoming aware of the risks and implementing the right protective measures, they can realize the benefits of AI while ensuring its safe use for their organizations and the people they serve.
Burnie Legette is director of IoT sales and artificial intelligence for Intel Corp. Gretchen Stewart is chief data scientist for Intel Public Sector.
Copyright © 2024 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area.
Related Stories
Navigating the era of innovation: How artificial intelligence and automation are driving a digital-first government
A leading research group takes on the artificial intelligence cyber threat question
How to build trust in artificial intelligence