Understanding regulation with the AI executive order
The White House executive order on AI is heavily positioned around data protection, privacy and disclosure. All of these are important priorities when considering how AI is used to model and accelerate business processes and systems that handle critical user data across the supply chain. The focus categories the order introduces follow a theme familiar from many of the cybersecurity frameworks already on the market.
There are a few initiatives within the order that draw particular interest from a regulatory perspective and deserve careful inspection, so that businesses fully understand their responsibilities under the order and can maintain a compliant environment without disrupting business processes.
Data testing disclosure requirement: Both AI developers and users are required to disclose the results of their security testing and to show authorities how they have tested their AI models to ensure data protection, safety and trustworthiness. This is similar to several federal rules and frameworks that have emerged in the last year (e.g., the National Institute of Standards and Technology Cybersecurity Framework 2.0 and the new Securities and Exchange Commission security reporting requirements for Form 8-K filings). These place an additional regulatory burden on businesses to prove the effectiveness of their security controls, along with a need to proactively collect and disclose the evidence-based data required to demonstrate the posture of their systems. This may also open the door to several types of fraud, as spoof attacks prey on regulations that require data disclosure.
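To make the proactive evidence collection concrete, here is a minimal, hypothetical sketch in Python of recording AI security-test results as hashed, timestamped records that could be compiled for later disclosure. The field names and record format are illustrative assumptions, not a prescribed filing format.

```python
# Hypothetical sketch: recording AI security-test results as hashed,
# timestamped evidence for later disclosure. Field names and the record
# format are illustrative assumptions, not a prescribed filing format.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TestEvidence:
    model_id: str
    test_name: str   # e.g., a red-team prompt-injection suite
    passed: bool
    run_at: str      # ISO 8601 timestamp, UTC

def evidence_record(model_id: str, test_name: str, passed: bool) -> dict:
    """Build a disclosure-ready record with a hash for tamper evidence."""
    record = asdict(TestEvidence(
        model_id=model_id,
        test_name=test_name,
        passed=passed,
        run_at=datetime.now(timezone.utc).isoformat(),
    ))
    # Hashing the serialized record lets an auditor verify it later.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

if __name__ == "__main__":
    print(json.dumps(evidence_record("fraud-model-v2", "pii-leak-scan", True), indent=2))
```

Keeping a hash alongside each record is one simple way to give auditors confidence that disclosed evidence was not altered after the fact.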
New AI Safety and Security Board: The establishment of a new Department of Homeland Security AI authority, which will govern the creation of new standards and technology for the security testing and measurement of AI systems, will no doubt increase the scrutiny that companies employing AI models in their business will experience. From a regulatory or security assessment perspective, companies will clearly need more dynamic control over how they measure gaps across their estate, so they can stay ahead of their security vulnerabilities with a remediation plan before they are subjected to audit. This increase in security control scrutiny closely mirrors several recent cybersecurity framework amendments, driven in part by threats to the national supply chain over recent years. Since NIST will have a hand in designing these new testing frameworks, companies could get a jump on measuring their controls by adopting the new NIST CSF 2.0. It adds an entirely new function (Govern) dedicated to helping companies provide proper, cyclical disclosure of their security risk profile and posture from the top down.
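As a rough illustration of staying ahead of gaps before an audit, the sketch below tallies open control gaps by CSF 2.0 function. The function names follow CSF 2.0; the gap entries themselves are invented for the example.

```python
# Hypothetical sketch: tallying open control gaps by NIST CSF 2.0 function
# so remediation can be planned before an audit. The function names follow
# CSF 2.0; the gap entries themselves are invented for illustration.
from collections import defaultdict

CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]

open_gaps = [
    ("Govern", "No recurring board-level cyber risk reporting"),
    ("Protect", "AI training data store lacks encryption at rest"),
    ("Detect", "No anomaly alerts on model-serving endpoints"),
]

by_function: dict[str, list[str]] = defaultdict(list)
for function, description in open_gaps:
    by_function[function].append(description)

# Print a simple top-down posture summary, Govern first.
for function in CSF_FUNCTIONS:
    items = by_function.get(function, [])
    print(f"{function}: {len(items)} open gap(s)")
    for item in items:
        print(f"  - {item}")
```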
Protect Americans from AI-enabled fraud: For developers and users of AI, this initiative could be challenging and will require careful handling of personally identifiable information and consumer data. Other data privacy mandates have required businesses to disclose personal or consumer data as part of upholding privacy rights, and many of those regulations have been abused. When the General Data Protection Regulation introduced the right for consumers to request and receive their PII and private data, it created multiple opportunities for cybercrime: spoofed data requests designed to fool businesses into unknowingly handing over private data. These privacy mandates gave a whole new life to many groups operating in the underground, such as access brokers and data exposure culprits.
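One simple guardrail against the kind of spoofed data requests described above is to refuse to release anything until the requester's identity is verified. The sketch below is deliberately minimal; verify_identity is a hypothetical stand-in for whatever real identity proofing a business uses (an authenticated session, a document check, and so on), not a real API.

```python
# Hypothetical sketch: refusing to release subject data until the requester
# is verified. verify_identity is an assumed stand-in for real identity
# proofing (an authenticated session, document check, etc.), not a real API.
def verify_identity(requester_email: str, email_on_file: str) -> bool:
    # Assumption: at minimum, the request must come from the address on file.
    return requester_email.strip().lower() == email_on_file.strip().lower()

def handle_access_request(requester_email: str, email_on_file: str) -> str:
    if not verify_identity(requester_email, email_on_file):
        # Reject rather than risk handing PII to an access broker.
        return "rejected: requester identity not verified"
    return "approved: compile and release subject data"

print(handle_access_request("attacker@example.net", "customer@example.com"))
print(handle_access_request("customer@example.com", "customer@example.com"))
```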
AI tools to help find and fix vulnerabilities in critical software: This initiative is only as good as the teams that implement and train the policy under which AI will seek out gaps and vulnerabilities in systems. Like any other automated technology, it will depend greatly on how well the AI system has been trained on the inspection policy. Careful attention will need to be paid to monitoring the checks and balances of the tests and measurements conducted on those systems. Many major data breaches were caused by gaps slipping into systems because a policy configuration was not strong enough, or because its threshold was not strict enough to identify the outlying vulnerabilities that led to a negative security event. Many of the recent breaches caused by third-party vulnerabilities were simply cases of failed security policy rather than a lack of security controls in the parent enterprise.
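The threshold problem is easy to see in miniature. In the hypothetical sketch below, a scan policy that only reports "critical" findings silently drops a high-severity vulnerability that a slightly stricter cutoff would have caught; the CVE identifiers and scores are invented for illustration.

```python
# Hypothetical sketch of the threshold problem: a scan policy that only
# reports findings at or above a severity cutoff. Set the cutoff too high
# and outlying vulnerabilities slip through unreported. CVE IDs and CVSS
# scores here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float  # CVSS base score, 0.0-10.0

def reportable(findings: list[Finding], cutoff: float) -> list[Finding]:
    """Return only the findings the policy considers worth reporting."""
    return [f for f in findings if f.cvss >= cutoff]

findings = [
    Finding("CVE-2024-00001", 9.8),
    Finding("CVE-2024-00002", 7.4),  # dropped by a "critical only" policy
]

print([f.cve_id for f in reportable(findings, cutoff=9.0)])  # weak policy: 1 finding
print([f.cve_id for f in reportable(findings, cutoff=7.0)])  # stricter: 2 findings
```

The same failure mode applies whether the scanner is AI-driven or not; what changes with AI is how much of the policy is encoded in training rather than in an explicit, reviewable configuration.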
All in all, for the security assessment and regulatory community, the AI executive order could be a positive initiative and may contribute to a stronger cybersecurity environment. Efforts must be made to ensure that the policies and legislation that evolve from the order foster an environment that harnesses AI in a positive way, while bearing in mind that the frameworks implemented will need safeguards against errors in automation and a lack of human intervention.
Chris Strand is chief risk and compliance officer at Cybersixgill.