Biden’s executive order on AI: Where to go from here

Thanks to the executive order, 2024 will be a year in which both AI adoption and the initiatives to govern it accelerate.

“Generative AI will go mainstream in 2024,” declares a headline in The Economist. “In 2024, Generative AI will transition from hype to intent,” Forbes proclaims. “AI poised to begin shifting from excitement to deployment in 2024,” predicts Goldman Sachs.

As these sentiments show, 2024 is widely expected to be a banner year for mainstream AI adoption, especially generative AI systems like OpenAI’s ChatGPT. But while AI is one of the most transformative technologies in decades, as with any major advance, artificial intelligence will be used for both good and bad.

That’s why President Biden on Oct. 30, 2023, signed a wide-ranging executive order detailing new federal standards for “safe, secure and trustworthy” AI – the U.S. government’s farthest-reaching official action on the technology to date.

“Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative and secure,” the order said. “At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”

That’s all true, which is why a purposeful set of guardrails was clearly needed to govern AI development and deployment. The 111-page order aims to provide them, with sections covering safety and security, privacy, equity and civil rights, jobs, and international cooperation.

But while the order is a promising start toward a responsible framework for regulating AI, it is just that – a framework – and only time will tell how the policies are implemented.

For one thing, the order is limited in its reach: the President can direct how the federal government uses AI but has far less authority over the private sector. Congressional action and international agreements will be needed to give many of the order’s provisions force.

With that in mind, here’s one view of four priorities that policymakers and legislators should keep in mind as they move to implement the executive order.

  1. Maintain a balance between managing AI risk and protecting innovation

The order is correct when it says that AI “holds extraordinary potential for both promise and peril.” But federal regulatory efforts will need to address legitimate concerns about AI without stifling innovation. That innovation, after all, is needed for better decision-making and operational efficiency in businesses, improved healthcare, optimized energy management, stronger cybersecurity and many other benefits of AI.

The Software and Information Industry Association, in a statement after the order was released, praised the order’s focus on AI as it relates to national security, economic security, and public health and safety. But the group said it is concerned the order “imposes requirements on the private sector that are not well calibrated to those risks and will impede innovation that is critical to realize the potential of AI to address societal challenges.”

It’s a good point. As governmental efforts to create AI safeguards proceed, it is crucial that responsible AI research and development not get caught in the crossfire, and that such work remains financially rewarding.

  2. Recommit to public input

The order followed voluntary commitments by seven AI companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – to foster safe, secure and transparent development of AI technology. Also, the White House Office of Science and Technology Policy led a year-long process to seek input from the public before releasing its Blueprint for an AI Bill of Rights in October 2022.

This inclusive approach must continue in 2024. Insights from myriad stakeholders, including industry, academia and the general public, are critical in pinpointing the unique opportunities and risks associated with AI, shaping policies, and establishing effective and reasonable regulations.

It’s really the only way to ensure the government can promote AI that serves the best interests of society.

  3. Keep our eye on the geopolitical ball

Whatever shape AI regulation takes in 2024 and beyond, the U.S. can’t lose sight of AI’s geopolitical consequences.

AI-powered cyberwarfare is an extremely potent and relatively easy way for adversaries to disrupt world order. For example, warnings earlier this year that Chinese state-sponsored hackers had compromised organizations in industries including transportation and maritime illustrated how bad actors can and will use AI-powered hacking tools to disrupt critical infrastructure.

AI guardrails are important, but it’s also essential to foster an aggressive AI development environment that protects national security interests.

  4. Continue collaborating with allies

The United Kingdom, the European Union and Canada have all released guidelines encouraging ethical and responsible AI development. The U.S. was a bit late to the party with the executive order, but better late than never.

It was good to see the order note that the administration consulted on AI governance frameworks over the past several months with a slew of international partners, including Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea and the U.K.

Such a strength-in-numbers approach is important in tackling AI as the global issue it truly is.

Thanks to the executive order, 2024 will bring not just accelerating AI adoption but also accelerating efforts to govern it. These four priorities would help maximize AI’s potential as a force for good and minimize its dangers. It will be very interesting to see how it all plays out.

Tom Guarente is VP of External and Government Affairs at Armis, the asset intelligence cybersecurity company. 

