As the U.S. deepens and formalizes its economic and military partnerships with key allies such as India, collaborative development of artificial intelligence is an integral part of those agreements. AI is revolutionizing the international political economy at an unprecedented pace.
The framework for building AI agents across our country and those of our allies must be grounded in the democratic principle of protecting citizens’ civil liberties, including minimizing surveillance at the individual level. AI models rooted in this framework learn from more diverse data, resulting in increased performance: enhanced reasoning, more accurate predictions, and better decision making. Most importantly, AI agents built with an emphasis on explainability, bias reduction, privacy, and national security – all standards that support a robust and healthy democracy – will result in AI that can be trusted and held accountable.
The changing nature of viral influence
Growing threats such as AI-generated synthetic media and viral disinformation perpetrated by adversaries have the potential to destabilize civil society and threaten democracy. Disinformation is of particular concern, as deepfakes and false narratives are spread by adversaries through influencers, including those with relatively few followers. The nature of predicting viral influence has changed: the AI models employed by social media platforms such as TikTok, Instagram and YouTube promote content for reasons too opaque for humans to predict.
Traditionally, one would assume that if a TikTok account with a million followers posts content, all one million followers will see it. In reality, if TikTok’s AI model determines the content is irrelevant, only a few hundred of those one million followers may see it. Conversely, an account with just a few hundred followers may post content that goes viral for no discernible reason.
Adversaries employ sophisticated AI models designed to spread disinformation by targeting influencers with a history of posting content that major social media platforms promote and that goes viral. Adversaries then create viral content that feeds the biases of those influencers for the purpose of manipulating thought and exploiting ignorance.
The opportunity for a more democratic future with AI
The U.S. leads the world in innovation, and the innovation we’ve seen in AI, particularly generative AI, has been nothing short of groundbreaking. As profound a technological advance as generative AI has been, the next phase of AI is going to be even more significant, as AI agents will be able to reason, predict, decide and act autonomously. Despite the speed of innovation in AI, both here and in other countries, there is as yet little ethical oversight of AI bias and explainability. This could have substantial consequences if next-generation AI agents take action based on biased reasoning that is discriminatory or even illegal.
To ensure ethical and well-considered execution of AI recommendations, technological innovation must be accompanied by policy innovation. For example, standards around AI explainability could require AI agents to explain their reasoning to human supervisors before taking action. Human work could then evolve to focus more on critical reasoning about the consequences, legality and ethics of AI actions, ensuring AI acts in accordance with democratic values and to the benefit of human beings.
Since technological innovation continues to far outpace policy innovation, it’s critical for the private sector to step up and establish ethical frameworks that ensure AI continues to work in favor of human beings. AI that gets smarter alongside human beings, with AI reducing human bias and humans reducing AI bias, will result in superintelligence for all and a more utopian outcome. In contrast, AI that gets smarter at the expense of most human beings, to the benefit of just a few, will result in an asymmetric distribution of superintelligence and, potentially, a more dystopian outcome.
Greater innovation through principled collaboration
One excellent example of coordinated AI development based on shared democratic principles is the India-U.S. Defense Acceleration Ecosystem (INDUS-X). In June 2023, India’s Prime Minister Narendra Modi and President Joe Biden announced the launch of INDUS-X, a strategic technology partnership between the U.S. Department of Defense and the Indian Ministry of Defence. INDUS-X will provide government support to encourage collaborative defense-sector innovation between the enterprises and research institutions of both nations. The initiative will include multiple pathways to encourage and support defense technology innovation in and between India and the U.S.: connecting defense firms with technology startups for mentorship, supply chain collaboration and development acceleration; funding and prize challenges for dual-use technologies; easing of regulations for cross-border development and trade; and more.
India and the U.S. share similar values regarding the importance of maintaining a civil and free society that protects the civil liberties of its citizens, and joint development between the two countries will carry a shared commitment to promoting democratic principles and ethical AI governance. Both countries face common adversaries engaged in political manipulation through AI-based disinformation, so coordinating to counter those attacks with advanced AI solutions is the best way forward.
The INDUS-X initiative will accelerate progress in meeting these modern defense challenges. Hopefully, INDUS-X will serve as a blueprint for similar “innovation bridges” between allies, encouraging future collaboration and hastening the strategic development of defense technologies, including AI solutions, that are grounded in shared democratic values while appropriately incentivizing the private sector.