The impact and associated risks of AI on future military operations

With the transformative potential of artificial intelligence, we are entering an era of national security reshaped by revolutionary technology. AI has the potential to improve future military operations by enhancing decision-making, combat effectiveness and operational efficiency.

The U.S. military is already harnessing AI for autonomous reconnaissance and combat systems, data analysis and cybersecurity. Within the next five years, AI will enable new, advanced applications such as swarm intelligence for enhanced situational awareness and predictive analytics to forecast enemy movements, while further strengthening cybersecurity. These developments will be facilitated by the convergence of computational growth, big data and emerging technologies in wearables and embedded systems that could make the military more efficient, agile and capable.

Given the impending global “AI arms race,” we must take steps now to ensure that the U.S. military remains at the forefront of this evolving landscape.

Adapting to AI’s benefits without falling behind

Managing investments and policies to embrace AI will enable the military to maintain technological superiority. However, traditional Defense Department funding strategies, contract vehicles and acquisition paths cannot keep pace with AI’s advance. That must change. An immediate action would be reallocating R&D budget resources to include both near-term applications of AI and long-term foundational research. That change requires parallel preparation of acquisition offices and operational end users, so they can capture and scale foundational results to mission-ready status as soon as they are available.

New data repositories from cross-branch training and mission operations must be continuously fed into a robust corpus designed for interoperability and quick use in validating hypotheses and measuring performance. A new investment ecosystem that fosters low-risk, low-barrier, fail-fast explorations of fundamental science feasibility, with necessary guardrails, will also get more candidate technologies across the DoD’s notorious technology “valley of death.”
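
To make “designed for interoperability” concrete, here is a minimal sketch of a shared record schema that any branch could write to and read from a common corpus. The field names, markings and structure are hypothetical illustrations, not an existing DoD standard.

```python
# Minimal sketch of a cross-branch interoperable record schema. Illustrative
# only: field names and markings are hypothetical, not a DoD standard.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class MissionRecord:
    source_branch: str    # e.g., "Army", "Navy"
    sensor_type: str      # e.g., "EO/IR", "SIGINT"
    collected_at: str     # ISO-8601 UTC timestamp for unambiguous ordering
    classification: str   # handling marking, e.g., "UNCLASS"
    payload: dict         # the observation itself, kept schema-flexible

    def to_json(self) -> str:
        """Serialize to line-delimited JSON that any branch can parse."""
        return json.dumps(asdict(self))

record = MissionRecord(
    source_branch="Army",
    sensor_type="EO/IR",
    collected_at=datetime.now(timezone.utc).isoformat(),
    classification="UNCLASS",
    payload={"track_id": 42, "lat": 34.05, "lon": -117.60},
)
print(record.to_json())  # ready to append to a shared corpus
```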

Defense Federal Acquisition Regulation Supplement and other policies need revision to facilitate more agile AI procurement. For instance, adopting modular contracting methods under Other Transaction Authority or indefinite delivery, indefinite quantity vehicles could enable fast on-ramping of AI technology offerors without the traditional burdens of prime oversight.

Further, a central repository for AI best practices and development frameworks, one that can also incorporate standardized data formats from non-AI technologies, will expedite cross-branch learning and accelerate R&D efforts.

Fostering improved partnerships with industry and academia, promoting AI technology transitions, and investing in startup R&D will also be critical. To motivate partner contributions, the DoD should not restrict intellectual property or data rights. An access-controlled data sharing ecosystem, invested in by our military and our allied nations, will enable AI models to be trained faster and more thoroughly. The advantage will go to the nations that apply those models most effectively for the right outcomes.
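
What “access-controlled” could look like in practice: in the minimal sketch below, each record carries a releasability marking and each partner nation sees only what its marking allows. The partner names and markings are hypothetical.

```python
# Illustrative sketch of an access check for an allied data sharing ecosystem.
# Partner names and releasability markings are hypothetical placeholders.
RELEASABILITY = {
    "UNCLASS": {"US", "ALLY_A", "ALLY_B"},
    "REL_ALLIES": {"US", "ALLY_A"},
    "US_ONLY": {"US"},
}

def can_access(partner: str, marking: str) -> bool:
    """Return True if the partner nation may train on data with this marking."""
    return partner in RELEASABILITY.get(marking, set())

# A training pipeline would filter the shared corpus per requester:
corpus = [
    {"marking": "UNCLASS", "payload": "..."},
    {"marking": "US_ONLY", "payload": "..."},
]
ally_view = [r for r in corpus if can_access("ALLY_A", r["marking"])]
print(len(ally_view))  # 1: ALLY_A sees only releasable records
```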

Along with rewards, AI brings heightened risks to military operations

AI systems operating without human oversight raise moral, sociopolitical and legal concerns, especially when automating parts of the “kill chain.” Carefully considered ethical and legal frameworks for AI-driven actions and decisions will require changes to national and global policies and standards.

This should go beyond simply maintaining a human in the loop. AI is outperforming humans across a myriad of technical, creative and strategic domains. Using AI to quantify risks under valid operational scenarios could refocus humans on establishing ethically acceptable risk thresholds that partner nations can follow. Remember that blanket policies forbidding or degrading the application of AI create opportunities for foreign actors with less oversight to gain a technological advantage.
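
A minimal sketch of that division of labor follows, assuming a placeholder risk model and an illustrative threshold, neither of which reflects actual doctrine: the AI quantifies risk, and a human-set threshold decides whether the system proceeds or escalates.

```python
# Sketch of "humans set the threshold, AI quantifies the risk." The risk
# model and the 0.15 threshold are placeholders, not doctrine.
def assess_risk(scenario: dict) -> float:
    """Placeholder for an AI model scoring operational risk in [0, 1]."""
    return 0.4 * scenario["civilian_proximity"] + 0.6 * scenario["id_uncertainty"]

HUMAN_APPROVED_THRESHOLD = 0.15  # set in advance by accountable commanders

def authorize(scenario: dict) -> str:
    risk = assess_risk(scenario)
    if risk <= HUMAN_APPROVED_THRESHOLD:
        return "proceed"           # within the pre-approved ethical envelope
    return "escalate_to_human"     # outside it, a person must decide

print(authorize({"civilian_proximity": 0.1, "id_uncertainty": 0.1}))  # proceed
```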

AI also elevates security concerns, including increased vulnerability to cyberattacks and the potential manipulation of data. Every sensor, data transfer and endpoint creates an attack surface that skilled adversaries could target. The DoD needs policies and technology investment strategies that address those data collection points. When adversaries do find an attack surface, resilience will be critical. Security methods from other high-risk technical domains, like nuclear power, offer valuable lessons on how to approach risks to complex AI-based systems.
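
One concrete defense for a data collection point is sketched below under simplified assumptions (a pre-provisioned shared key; real systems would use a key vault or hardware security module): authenticate every sensor message so that manipulation in transit is detectable.

```python
# Sketch of protecting a data collection point: tag each sensor message with
# an HMAC so tampering in transit is detectable. Key handling is simplified
# here for illustration.
import hmac, hashlib, json

SHARED_KEY = b"provisioned-per-sensor-key"  # in practice, from a key vault/HSM

def sign(message: dict) -> str:
    body = json.dumps(message, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify(message: dict, tag: str) -> bool:
    return hmac.compare_digest(sign(message), tag)

msg = {"sensor_id": "uav-7", "lat": 34.05, "lon": -117.60}
tag = sign(msg)
msg["lat"] = 35.00          # an adversary alters the data in transit
print(verify(msg, tag))     # False: the manipulation is caught
```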

Accountability and responsibility for the development and use of AI in military operations are essential. AI systems will require rigorous testing, validation and safeguards to ensure their reliability, robustness and security. Existing practices for medical devices offer a useful analogy.
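
In the spirit of that analogy, a release gate might look like the sketch below, in which a model must clear fixed reliability and robustness thresholds before fielding. The metrics and thresholds are illustrative, not a certification standard.

```python
# Sketch of a pre-deployment gate in the spirit of device validation. The
# metrics and thresholds are illustrative, not a certification standard.
def validate_for_release(metrics: dict) -> bool:
    gates = {
        "accuracy": 0.95,            # reliability on a held-out test set
        "perturbed_accuracy": 0.90,  # robustness under noisy/adversarial inputs
        "false_positive_rate_max": 0.02,
    }
    return (metrics["accuracy"] >= gates["accuracy"]
            and metrics["perturbed_accuracy"] >= gates["perturbed_accuracy"]
            and metrics["false_positive_rate"] <= gates["false_positive_rate_max"])

print(validate_for_release({
    "accuracy": 0.97, "perturbed_accuracy": 0.93, "false_positive_rate": 0.01,
}))  # True: all gates pass, so the system may advance to fielding review
```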

There are also legal questions like those raised in social media cases: who bears the burden of policing and enforcing content, the ISP, the service provider or the user who supplied the data? Policies and constructs that penalize new AI developers striving to establish market share may well drive technology innovators away from DoD applications.

Data privacy presents another consideration. The European Union’s General Data Protection Regulation and similar laws call for a “right to transparency.” Similarly, we all want AI to be able to explain how it arrived at an outcome. The challenge is how to define acceptable standards for achieving such transparency, which in turn requires an understanding of how AI works.
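
One simple route to “explaining how it arrived at an outcome” is sensitivity analysis: perturb each input and measure how much the output moves. In the sketch below, a toy scorer stands in for an opaque AI system; production systems would use richer attribution methods.

```python
# Sketch of explaining an outcome via sensitivity analysis. The model here is
# a toy stand-in for an opaque AI system.
def model(features: dict) -> float:
    # toy scorer standing in for an opaque model
    return 0.7 * features["speed"] + 0.2 * features["heading"] + 0.1 * features["size"]

def sensitivity(features: dict, delta: float = 0.1) -> dict:
    """Report how much the output moves when each input is nudged by delta."""
    base = model(features)
    report = {}
    for name in features:
        bumped = dict(features, **{name: features[name] + delta})
        report[name] = round(model(bumped) - base, 4)
    return report

print(sensitivity({"speed": 0.9, "heading": 0.3, "size": 0.5}))
# The largest entry identifies the input that most drove the outcome.
```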

Finally, there is the question of how to restore trust in AI once it is lost. If an AI learns a behavior and acts in an unexpected or undesirable manner, how can that behavior not only be prevented in the future, but also be unlearned so it is never considered again? How far back in the verification process does that AI need to go to reach a necessary and acceptable level of trust? This will be a challenging issue to negotiate, as different governments and cultures have different risk acceptance criteria. But given how quickly AI is advancing, there may not be time for lengthy debate.
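
Conceptually, one unlearning approach is to remove the examples that taught the bad behavior and retrain from a point that predates them; certified machine unlearning remains an open research area. The sketch below uses placeholder data and a stand-in trainer to illustrate the idea.

```python
# Conceptual sketch of one unlearning approach: remove the examples that
# taught the bad behavior and retrain from a pre-incident state. The data,
# predicate and trainer are placeholders; certified unlearning is an open
# research area.
def taught_bad_behavior(example: dict) -> bool:
    return example.get("label") == "engage" and example.get("verified") is False

def unlearn_and_retrain(dataset: list, train):
    cleaned = [ex for ex in dataset if not taught_bad_behavior(ex)]
    # Rolling back far enough that no retained weights were shaped by the
    # removed data is one answer to "how far back to go."
    return train(cleaned)

retrained = unlearn_and_retrain(
    [{"label": "engage", "verified": False}, {"label": "hold", "verified": True}],
    train=lambda data: {"trained_on": len(data)},  # stand-in trainer
)
print(retrained)  # {'trained_on': 1}: the offending example was excluded
```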

The human-machine dynamic

Humans have always co-evolved with tools and technologies that make our lives more productive. Future AI-based systems must accommodate both humans and AI as users of the system itself. People must learn to approach and regard AI systems as new team members. As with any team, productivity often stems from sharing situational awareness and understanding the goals, motivations, behaviors and impacts of actions on teammates. In the military, much of this understanding comes through rigorous joint training and explicit tactics, techniques and procedures that foster trust and build common ground.

Future AI systems may need to start at this level, onboarding and training human and AI users concurrently as a team. But when that team actually needs to execute work, the interfaces each member requires are dramatically different.

For example, humans are visually oriented and rely on graphical human-machine interfaces (HMIs) to identify and make sense of visual patterns. With AI as a collaborator, people will still require a natural HMI to understand their role in the system’s operation. They will also need some means of productive dialogue with the AI, such as natural language models and training in the emerging field of “prompt engineering” to direct the AI’s actions. The AI itself will need data that is consistent and well formatted so it can correctly integrate that information into its model and let it inform its outputs.
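
A minimal sketch of what that dialogue discipline could look like: the human’s intent is wrapped in a structured template before it reaches the model. The template fields are illustrative, and `call_model` stands in for whichever language model API a given system exposes.

```python
# Minimal sketch of prompt engineering as an interface discipline: the human's
# intent is wrapped in a structured template before it reaches the model. The
# template fields are illustrative; call_model stands in for any LLM API.
PROMPT_TEMPLATE = """Role: mission planning assistant.
Context: {context}
Constraints: respond only with courses of action; cite the data used.
Task: {task}"""

def build_prompt(context: str, task: str) -> str:
    return PROMPT_TEMPLATE.format(context=context, task=task)

prompt = build_prompt(
    context="Sector 4 sensor feed, last 6 hours, UNCLASS summary.",
    task="Flag movement patterns that deviate from the prior week.",
)
print(prompt)  # this string is what call_model(prompt) would receive
```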

The AI future is approaching fast

While AI holds great promise for the future of U.S. military operations, clearly there are many complex issues to be sorted out. There is no time for hesitation.

To fully leverage AI’s potential, the DoD must act quickly to understand its current utilization, anticipate upcoming development and address associated risks. By adapting investments, policies and strategies, the U.S. military can maintain its technological edge and ensure the security and success of future operations.

Michael P. Jenkins is chief scientist at Knowmadics Inc.
