Is the United States primed to spearhead global consensus on AI policy?
The U.S. strategy offers a flexible framework that can swiftly adapt to the rapidly evolving AI landscape.
Artificial intelligence is quickly becoming an indispensable asset in addressing a range of challenges in today’s society – from domestic and international cyber threats to healthcare advancements and environmental management. While there are some mixed opinions on many aspects of this technology and its capabilities, there’s no question that in order for AI to meet its full potential, we will need an agile and dynamic policy framework that spurs responsible innovation – a framework that the United States could soon model.
AI is becoming more entrenched in our daily lives and will soon be ubiquitous around the world. Countries need a framework to look to for guidance, and a leader to set it. Without a broadly accepted, flexible policy framework in place, we risk missing out on many of AI’s benefits to society. Trust in AI is pivotal to realizing its full potential, yet that trust will be hard-earned. It demands effort from both private organizations and governments to develop AI in a responsible, ethical manner. Without trust, the promise of AI could remain unfulfilled, its capabilities only partially tapped.
Efforts and innovations must be coordinated across the globe, guided by a responsible pioneer. Without some level of synchronization, society could face a confusing system of disparate AI regulations, making the safe advancement of AI initiatives challenging across the board.
With its flexible governance structure informed by valuable international, public-private input, the U.S. could be a clear choice to lead the world to success in this new age of AI.
Current AI governance initiatives
Steps are being taken around the world to regulate the use of AI, enhance its safety, and foster innovation. Naturally, different jurisdictions have placed different emphases on their priorities, resulting in a diverse range of regulations – some more prescriptive than others. This variation reflects the unique cultural perspectives of different regions and could produce a patchwork of AI regulations. As of October 2023, 31 countries had passed AI legislation and 13 more were debating AI laws.
Europe took an early lead in December 2023 by reaching political agreement on the AI Act, the world’s first comprehensive AI law, which categorizes AI systems by the risks they pose to users. The original text of the AI Act was drafted in 2021 – long before generative AI went mainstream in 2023. In contrast to the EU’s approach, the United Kingdom took a more pro-innovation stance and underscored its leadership aspirations by hosting an international AI Safety Summit at Bletchley Park in November 2023.
The United States played a prominent role at the summit, which focused on the importance of global cooperation in addressing the risks posed by AI alongside fostering innovation. Meanwhile, China mandates state review of algorithms, requiring them to align with core socialist values. The U.S. and UK, by contrast, are taking a more collaborative and decentralized approach.
The U.S. has taken a more proactive approach to asserting its leadership in AI governance, in contrast to its approach to data privacy, where the EU has largely dominated with the General Data Protection Regulation (GDPR). A series of recent federal initiatives, including President Biden’s exhaustive AI executive order, signals a commitment to eventually leading global AI governance. The order lays out a blistering pace of regulatory action, mandating detailed reporting and risk assessments by developers and agencies. Notably, many of these requirements and assessments will come into force long before the EU’s AI Act is settled and enforced.
In the absence of comprehensive federal legislation, states are stepping in. In the 2023 legislative session, at least 25 U.S. states introduced AI bills, while 15 states and Puerto Rico adopted resolutions or enacted legislation around AI. It is encouraging to see this progress and innovation around the world, but we must also recognize the next steps needed to move forward on AI.
Without global harmonization and a leader to look to for guidance, we could end up with a complex patchwork of AI regulations, making it difficult for organizations to operate and innovate safely with AI, both throughout the U.S. and globally.
The blueprint for AI regulation: The U.S.
Without trust, AI will not be fully adopted. The U.S. and like-minded governments can ensure that AI is safe and that it benefits humanity as a whole. The White House has begun to pave the way with a recent flurry of AI activity, remaining proactive and agile amid evolving demands. Congress, meanwhile, is examining targeted areas within AI that will inform current and future regulation. The U.S. can further promote transparency, confidence and safety by collaborating with industry to ensure that the benefits of this evolving technology are realized, that risk concerns do not stifle innovation, and that society can trust in AI.
Domestically, the Biden administration has been exceedingly open to input from all sectors, shaping a holistic view of what is needed for advancement. Abroad, the U.S. prioritizes collaboration with its allies, ensuring that best practices are followed and ethical concerns are addressed. This is a key trait of a global leader: regulations must not be developed in a vacuum. By linking arms with countries around the world to develop standards, conflicting viewpoints can be reconciled and international AI regulations shaped in the way most beneficial to society.
Furthermore, by encouraging strong public-private partnerships, the U.S. sets the precedent needed to take responsible AI innovation to the next level. Just like the public sector, private companies must innovate responsibly, accepting the duty to develop AI in a trustworthy manner. By moving forward with cautious enthusiasm, the private sector can considerably bolster efforts to ensure AI reaches its full potential safely, at home and abroad.
Of course, the geopolitical dimension must be considered as well. By leading in AI standards and regulations, the U.S. can establish globally accepted norms and protocols that deter an unregulated AI arms race or other catastrophic misuse in modern warfare. Through its technical prowess and deep experience, the U.S. is uniquely positioned to lead the development of a global consensus on responsible AI use.
The future of AI governance is here
The U.S. is just beginning to establish itself as a global leader in AI governance, spearheaded by initiatives such as President Biden’s executive order, Office of Management and Budget guidelines, the National Institute of Standards and Technology’s AI Risk Management Framework, and widely publicized commitments from AI companies. The U.S. strategy offers a flexible framework that can swiftly adapt, an agility that will help it keep pace with the rapidly evolving AI landscape.
As the U.S. continues to quietly refine its approach to AI regulation, its policies will not only have far-reaching impacts on American society and government but also offer a balanced blueprint for international partners. The onus to innovate responsibly with AI does not fall solely on the public sector; private companies, too, must share the burden with their public counterparts to optimize results. This balanced approach, informed by a wide range of international and public-private insights, is bound to shape the future of AI governance and innovation worldwide.
Bill Wright is global head of government affairs at Elastic.