The U.S. military gets legal advice from the JAGs before using force, and the Air Force is exploring whether AI can improve and speed up that advice.
When the U.S. military actually uses force, it does so after getting legal advice from the Judge Advocate General (JAG) Corps. To improve and speed up the assembly of that advice, the Air Force is looking to see if artificial intelligence can help. The Federal Drive with Tom Temin spoke to West Point law professor Hitoshi Nasu and Alex Heit, vice president of AI vendor VISIMO, to explain this initiative.
Interview transcript:
Tom Temin: Tell us, Professor, what is it we're trying to solve here? What is the issue that artificial intelligence might come into play for with the JAGs?
Hitoshi Nasu: So during military operations, JAG officers, the legal advisers to commanders, need to assimilate and assess a large amount of information for each targeting decision. And sometimes they're required to do so in a very constrained timeframe. So we are trying to develop an AI-based tool to help them, so that they can focus their time on the most pragmatic aspects of targeting information within the vast amount of information available to them.
Tom Temin: I get it. So right now you’re working with the Air Force in particular, they were kind of a test bed here?
Hitoshi Nasu: So as I understand it, the Air Force awarded the grant to VISIMO, the company we are working together with. And with that grant, they approached us to see what expertise we could provide in terms of legal compliance for targeting operations and requirements. So that's how we got into this project.
Tom Temin: And Alex, just a quick question, how come you didn’t go to the Air Force Academy?
Alex Heit: Yeah, so that's a good question. When we started the project, we were awarded this STTR funding through AFWERX, under the Air Force. And we did some research and found that West Point, and the Lieber Institute there, really had the best knowledge, both in terms of AI and the law on the use of force, at their center. And so that's why we felt that West Point was the best partner for us.
Tom Temin: So the lessons learned there could probably apply to all the services once you're through this process?
Alex Heit: That’s correct.
Tom Temin: And what is the technical issue here? What is it you're trying to do? VISIMO, I presume, is an AI company, you're based in Pittsburgh, and so you must have a data set and a hoped-for outcome?
Alex Heit: Yes. So right now what we're working on is really just building up a proof of concept in phase two of this STTR. That's where we want to identify the right data set, a targeting package, that we can use to train this AI to do legal reviews and help increase the effectiveness and decrease the time of decision making.
Tom Temin: So the datasets then would be legal decisions and legal history and statutes related to use of force?
Alex Heit: A little different, it would really be these targeting packages. They're multiple-page images and slide decks, both for terminology and for situations that occur during real-time actions taken on the battlefield.
Tom Temin: We're speaking with Alex Heit, he is a vice president at VISIMO of Pittsburgh, and with Professor Hitoshi Nasu, a law professor at the U.S. Military Academy at West Point. So Professor, tell us what the legal situation is. Someone wants to use force, a general says, OK this is what we need to do, and here is the targeting package. What are the legal issues that come into play if they feel troops are in danger, or that sort of thing?
Hitoshi Nasu: So under international law, there are rules that apply to warfighting, and those rules regulate the conduct of hostilities, even in situations of armed conflict. So the military forces of any nation are required to comply with those rules when they make decisions on the use of force. In practice, that has meant that legal advisers, the JAG officers, need to be made available to commanders for planning or directing targeting decisions and operations.
Tom Temin: So does this vary from country to country? Because it's not just U.S. law that has to apply here, it's the law of wherever they're operating.
Hitoshi Nasu: The rules themselves are universal. All states are required to comply with them as a matter of customary international law, but the way in which they implement those obligations may vary. And we have a specific way of doing it. The availability of JAG officers to commanders in this country is quite phenomenal compared to other countries.
Tom Temin: And what are the consequences if they make a mistake in targeting and using force, and the JAG says, well guess what, you violated this particular statute or this particular provision?
Hitoshi Nasu: Yes, mistakes can happen, and the law indeed accommodates room for making mistakes. That's why we have, for example, an obligation to take feasible precautions to avoid unnecessary civilian casualties, or to avoid a mistake about the status of a target. So mistakes can happen. But what we are trying to do with this AI-based tool is to eliminate or reduce the chance of making mistakes, because we are all human, and humans make mistakes. An AI tool may actually help us reduce the chance of making those mistakes.
Tom Temin: And Alex, maybe just explain a little bit more how you tie in the targeting package, which sounds like it has imagery and other information of that nature. Is there any tie back to statute in this? I mean, tell us what the output is of applying the algorithm to the targeting package.
Alex Heit: Yeah, so to get to the output that we're looking for, what we want to be able to do is help people, judge advocates and others, make that decision more effectively. As an example, a couple of years ago, there was a bombing of a Doctors Without Borders hospital. This occurred because there was a mistake with the targeting coordinates, and maybe if there had been another review process, that wouldn't have happened. So we want to better inform, and provide other sources of information that inform, how and when things should be targeted, and produce outcomes that say, in real time, yes or no, this is the right target. If it is the right target, the AI is not going to say "yes, send the bomb," because that's not something that we allow. But it's going to say "under the law in which we operate, this is the right step, we verified that." And then it's going to go back to a human to make that decision.
Tom Temin: And if this proves to be an effective tool, once some of the data sets are run through and you can, I guess, run scenarios, if/then types of scenarios, what form could this take? What would be the deliverable to the armed services? Would they have a dashboard, or something on their smartphone, or what?
Alex Heit: Yeah, so the output could really be software with a front end on their computer. It really could be on their smartphone, if that's the level of granularity they want to see it in. Or it could just be a script that pops up and shows them on that targeting package. Right now, as I understand it, a lot of this is done in sort of a war room situation with a PowerPoint slide deck. And one thing about us at VISIMO: we're experts in technology and AI software development, we're not experts in the way that wartime decisions are made. And so that's one of the benefits of working with our partners at West Point. They know how all of these situations play out.
Tom Temin: And Hitoshi, I imagine this could then feed back into the way that you train JAGs for this type of situation over time, as you learn more from these algorithms.
Hitoshi Nasu: Oh, absolutely. I think it'd be really helpful for training, as well as for actual practical application on the battlefield.
Tom Temin: Hitoshi Nasu is a law professor at the U.S. Military Academy at West Point. Thanks so much for joining me.
Hitoshi Nasu: Thank you for inviting me.
Tom Temin: And Alex Heit is vice president at AI vendor VISIMO. Thanks so much.
Alex Heit: Thanks very much, good speaking with you.
Copyright © 2024 Federal News Network. All rights reserved.
Tom Temin is host of the Federal Drive and has been providing insight on federal technology and management issues for more than 30 years.