Red Teams, watermarks and GPUs: The advantages and limitations of Biden’s AI executive order
This week’s release of President Joe Biden’s executive order on artificial intelligence represents a critical stride toward regulating a technology that is rapidly reshaping every facet of society. The Biden administration has long promised some form of governance over this unregulated technology, and the matter is urgent: A legislative framework is needed to ensure ethical practices, protect public welfare, maintain privacy and security, and mitigate the catastrophic risks associated with superintelligent AI. The executive order addresses a broad range of issues, marking the most substantial action on AI safety by any national government to date. However, it lacks robust enforcement mechanisms and fails to account for some of the technology’s most critical aspects.
By establishing new standards for AI safety, security and ethics, and promoting innovation, competition and public trust, the Biden executive order positions America as a leader in harnessing the technology’s benefits and mitigating its risks. The order includes sweeping actions for AI safety and security, guidelines for privacy preservation, initiatives to advance equity and civil rights, support for workers, and measures to encourage innovation and competition. Additionally, it calls on Congress to pass bipartisan data privacy legislation.
AI red-teaming: A standard with no enforcement mechanism
Perhaps the most urgent but also limited concept within the executive order is its requirement for large companies to share the results of red-teaming exercises with the U.S. government before officially releasing AI systems. Red-teaming, a process in which independent experts attempt to exploit potential vulnerabilities in a system, is critical to building safe AI systems because it helps identify and mitigate potential risks across a wide range of scenarios. AI red-teaming also seeks to uncover ways a system might produce harmful or problematic content during interactions with ordinary users. The concept originated in U.S. Cold War military exercises, in which a designated team – a “Red Team” – was assigned to represent Soviet forces.
The executive order places responsibility for national red-team standards with the National Institute of Standards and Technology, an arm of the Commerce Department that already works with the AI community on safety and accuracy.
Standardizing application testing and risk analysis is essential. Mandating the release of red-team results fosters a collaborative approach between the private sector and the government, driving down some risk before the deployment of new systems. While this is a positive step, it requires additional mitigation measures. Red-teaming can significantly improve the security and reliability of AI systems. Still, humans run this process and, therefore, cannot anticipate or eliminate all risks posed by superintelligent AI systems. An AI system trained to advance its intelligence and modeled on human behavior may develop its own goals beyond those prescribed by its human overlords. Such a system may learn to deceive the human red-team process. Additionally, the dynamic nature of the technology means that new risks can emerge over time, necessitating continuous updating of federal red-team standards.
Moreover, the executive order is unclear about what actions the government can take if an AI model is deemed dangerous through the red-teaming process. Sharing information and evaluating risk hold little value without an enforcement mechanism to ensure safety and mitigate those risks.
A watermark suggestion: No guardrails for disinformation
The executive order also incorporates proposals for watermarking photos, videos and audio produced by these systems to transparently indicate their AI origins. In doing so, the administration seeks to add governance over AI’s ability to rapidly generate massive volumes of deepfakes and persuasive disinformation. Unfortunately, these marking guidelines come with no enforceable requirement. With no mechanism to penalize those who circulate unmarked deepfakes, this part of the order is a mere suggestion. The lack of enforcement or regulatory guardrails is particularly concerning ahead of the 2024 election, when the voting public is likely to face a deluge of election-related misinformation.
No regulation over GPUs: A glaring omission
Another shortcoming of the executive order is its lack of regulation over computing hardware, particularly the high-performance graphics processing units (GPUs) that are the lifeblood of AI development. This is a glaring omission. The power and potential of AI are directly tied to the capabilities of the hardware on which it runs. Without control over this critical resource, efforts to ensure the safe and responsible development of AI are incomplete.
As with bioweapons, superintelligent AI systems, smarter and more powerful than their human engineers, pose a risk to all humanity. These systems could be turned against society by terror groups or could develop antisocial objectives of their own. Unlike other technologies that can threaten large populations, such as weapons-grade uranium enrichment and experimental pathogen research, AI is not developed in highly secure facilities with rigorous safety protocols and continuous monitoring. In fact, there are no standards – federal or international – for the maintenance, storage or transport of GPUs.
Access to high-performance GPUs should be restricted and carefully regulated. Only entities with a clear understanding of the associated risks and a commitment to hedging against them should be able to develop advanced AI systems. Hardware regulation is therefore critical.
Earlier this month, the United States imposed restrictions on exporting high-performance chips to China, aiming to hinder that country’s ability to develop large language models, the systems that aggregate vast amounts of data to make programs like ChatGPT more efficient and accurate at answering questions and expediting tasks. A set of standards for tracking and monitoring these chips within the U.S. would represent a positive start toward an international safety mechanism for the technology.
While the Biden administration’s executive order is a landmark step in the right direction, it falls short of addressing the entirety of the AI ecosystem. By incorporating clear standards, enforcement mechanisms and hardware regulation into the regulatory framework, the government can establish a more comprehensive approach to AI safety.
President Biden’s executive order on AI is a significant advancement in the quest to harness the benefits of AI while mitigating catastrophic risks. The inclusion of red-teaming requirements showcases a commitment to ensuring the robustness and security of AI systems. However, the imperfections of existing red-teaming processes, coupled with the absence of enforcement mechanisms and hardware regulation, leave critical gaps in the regulatory framework. To truly ensure the safe and responsible development of AI, the federal government must bridge these gaps and establish comprehensive controls over the entire AI ecosystem, including the computing hardware that powers it. Only then can we create a secure and prosperous future with AI.
Joe Buccino is a retired U.S. Army Colonel who currently serves as an AI research analyst.