That’s not the droid you’re looking for: Why human-AI partnership is the key to responsible AI
My kids and I have watched a lot of sci-fi movies over the years. One of our early choices was the original Terminator movie; otherwise, they would miss out on several of life’s most useful quotes! I was also trying to convince them that the protagonist and I share a name. That part didn’t work out, but the movie helped me parse the current discussion about artificial intelligence in combat, which is often framed from a “Rise of the Machines” perspective.
There’s currently a significant debate about the appropriateness of using artificial intelligence in combat. Over 30 nations are working through the U.N. to introduce a ban on autonomous AI weapons. There are similar discussions among some U.S. AI companies about whether their technology should be used in military applications, the highest profile of which was Google’s decision to not renew a contract with the Defense Department to perform image recognition.
This reticence to use AI in military applications is notable because advances in technology have always been used to improve warfighting capabilities, from radar to computers to smart bombs. Each innovation makes the military more effective at its job while reducing risk to our soldiers. Countries adopt technology to improve their warfighting abilities, knowing that failing to do so would leave their national security at risk against adversaries with better technology.
In this light, it is necessary to discuss responsible ways to adopt artificial intelligence to support our national defense.
Why AI in DoD is challenging
AI is fundamentally different from the previous technologies that have been adopted for defense because it is more difficult for a user to understand what an AI algorithm is doing. Contrast this with an earlier innovation such as the introduction of computer-assisted targeting. In an analog world, a soldier would compute targeting coordinates with a map, protractor and ruler. Adding a computer made this significantly faster and more accurate, but the basic process was the same and was thus something that a targeter could trust.
AI is much more of a black box, where users may not understand how or why a particular direction was given. Why did the algorithm recommend a particular route? How did the algorithm choose a particular target? Our warfighters need AI solutions, but they need to know they can trust and rely on those solutions. That means AI needs to be transparent, with clear explanations behind recommendations.
Responsible and intentional AI
The DoD’s plan for responsible AI (RAI) development signals a critical step forward in accelerating efforts to embrace AI technologies in lawful and ethical ways to keep pace with evolving security threats. The RAI Strategy and Implementation Pathway details that the DoD’s AI ethical standards will be implemented via RAI, a dynamic approach to the design, development, deployment and usage of AI, to increase the trustworthiness of AI capabilities.
According to the pathway document, the following foundational tenets will serve as priority areas to guide the implementation of RAI across the DoD: RAI governance, warfighter trust, AI product and acquisition lifecycle, requirements validation, responsible AI ecosystem and AI workforce. Together, these principles form the framework that DoD will use to deploy effective and secure AI environments.
Keep the human at the core
The tenets of responsible AI are notable in that they are not primarily algorithm-focused. Instead, they tie back to the human participants in the environment. This approach is key to the success of AI in the DoD: it recognizes that AI will work in partnership with soldiers, so the focus is not just on the algorithms but on how the solutions will help those soldiers. The National Security Commission on Artificial Intelligence (NSCAI) identified human-AI teaming early on as a critical factor in its final report.
So, what does human-AI teaming look like? Three main attributes will help bring RAI into practice:
Explainable AI is the simple premise that nothing should be a black box. Machine learning algorithms can be too complicated for their creators to readily understand, never mind a less technical user. But soldiers need to understand the algorithms they are depending upon and be confident in the results; a brief illustration of what this can look like in practice follows the three attributes below.
Open-ended exploration is the concept that the AI environment needs to be flexible and adaptable so that users can understand their data in new ways, including ways not envisioned by the system’s designers. This flexibility will help ensure that systems don’t get locked into a certain way of doing things without regard to a changing mission.
Diversity of background and experience recognizes that currently, AI is often the domain of a small group of data scientists. This limits the effectiveness of the solutions and increases the risk of bias in the resulting algorithms. Opening AI to personnel who are soldiers, analysts, pilots or logisticians will result in more thorough and robust solutions.
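To make the first attribute concrete, here is a minimal sketch of what an explanation alongside a recommendation can look like. It is written in Python with scikit-learn purely as an illustration; the feature names, data and model are hypothetical placeholders, not any fielded DoD system.

    # Minimal explainability sketch: train a toy model, then report which
    # input features actually drive its recommendations. All names and data
    # below are hypothetical placeholders, not a real DoD system.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)

    # Hypothetical mission features for a route recommendation
    feature_names = ["distance_km", "threat_level", "fuel_margin", "weather_risk"]
    X = rng.random((500, 4))
    # Toy label: recommend the route when threat and weather risk are both low
    y = ((X[:, 1] < 0.5) & (X[:, 3] < 0.5)).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Permutation importance: how much performance drops when each feature is
    # shuffled, i.e. how much the model actually relies on it
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, result.importances_mean),
                              key=lambda pair: -pair[1]):
        print(f"{name:>12}: importance {score:.3f}")

A report like this lets a non-specialist confirm that the model is leaning on the factors they would expect, threat level and weather risk in this toy case, rather than something irrelevant. That is the kind of check the explainability attribute is meant to enable.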
This focus on the human in the AI solution is what will bring responsible AI to fruition and make AI safe and effective in DoD applications. It turns out there’s another movie to help show the way; it wasn’t until we started watching Star Wars that my kids saw a more balanced portrayal of an AI-enabled future. Take R2-D2: he’s a droid that can fly, navigate and target autonomously. But even though he has all the capabilities needed to become a mini-Skynet, he’s never a menace. Why? Because his main role is simply to help a human pilot perform better. This type of transparent, trusting partnership is what will make the application of AI successful in the DoD.
Kyle Rice is the director of solutions in the office of the chief technology officer at Virtualitics.