Recently, the Defense Innovation Unit issued what it calls responsible AI guidelines.
At the Defense Department, artificial intelligence has been topic one lately. But unlike, say, China, the U.S. federal government, including DoD, has a policy of ethical and unbiased application of AI. Recently, the Defense Innovation Unit issued what it calls responsible AI guidelines. Here with what they’re all about, the unit’s technical director for artificial intelligence and machine learning, Jared Dunnmon, spoke to the Federal Drive with Tom Temin.
Interview transcript:
Tom Temin: Dr. Dunnmon, good to have you on.
Jared Dunnmon: Thanks a lot. Appreciate it.
Tom Temin: And just review for us very briefly the way that DoD thinks or intends to use artificial intelligence. It’s a pretty broad based application set, isn’t it?
Jared Dunnmon: Yeah, it is. There’s a lot of different applications within the DoD that one could think about using artificial intelligence for. And that ranges from making ourselves more efficient and how we analyze remote sensing data to respond to natural disasters. It involves making sure that we optimize business processes, when we are doing things many times that require a lot of manual work within the DoD in places like our financial systems, or our business processes, making sure that we can run those, but more effectively and more efficiently. Also providing value to folks who spend a great deal of their time out in the field, analyzing data, trying to figure out what pieces of important information they can use to make better decisions in the context of their operational environment. And that spans the gamut of things from folks who are again, say, looking at satellite imagery to folks who are looking through open source information, all the way down to folks in the medical system, who are trying to analyze medical information to make better decision for veteran’s health care. So there’s a huge gamut of potential applications for artificial intelligence within a venture that has kind of the breadth and depth of the DoD. So it’s a unique context, and we should be able to work on these things.
Tom Temin: And so what prompted the development of these guidelines then?
Jared Dunnmon: So a couple of years ago, really February 2020, DoD committed to five major AI ethical principles. And these were pretty well thought out and concisely stated, but there’s a wide gap between stating those principles and translating them into concrete actions that can be taken on any given program. So because we work at DIU, with a large number of companies, many of whom have put a good deal of thought in some of these same issues, we’re able not only to prototype and rapidly iterate upon our approach to operationalizing those principles, but also integrate feedback from some of the best folks in the world on the commercial side. So in that context, let me give you a little bit on kind of what DIU does and how it works to contextualize how I’m going to answer this question. What we do is we’re a DoD entity within the Office of the Secretary of Defense that focuses on deploying commercial technology in support of national security. We do that primarily by running commercial solutions openings programs, in which we work with DoD partners to identify a DoD need. We put out a brief solicitation describing that need, we received bids for large number both traditional and non traditional companies. And then we run it down select to get these folks on contract within a matter of months, help to manage a one-to-two-year period of performance, and then ultimately, if the company is able to meet the DoD partner success criteria, leverage an OTA-based contract structure that we work with to scale those solutions broadly within DoD. So because we’re executing programs, we’ve been really focused on making sure that when vendors came to us and said, okay, you have these ethical principles of DoD, what does that mean, we need to do on these programs? It’s really important for us to be able to say, not as DoD policy, but as guidelines that are based on our experience working with private companies, it then informed by best practices from academia, from industry, from folks in government, how can we give at least some guidance to say when we say traceable equitable, governable, etc., in our AI ethical principles, how does that translate into the concrete process of developing and deploying an AI capability? And that’s what those guidelines are really intended to inform.
Tom Temin: We’re speaking with Jared Dunnmon, he is technical director for artificial intelligence and machine learning at the Defense Innovation Unit. So these guidelines then are aimed at industry that would hope to do business with DIU. Or do you also kind of hope they get throughout DoD itself and read there also?
Jared Dunnmon: So they’re certainly aimed at DIU specifically, in the sense that we are laser focused on executing these programs. That being said, of course, the hope is that if folks, we’ve gone through the effort of writing up the supporting materials for these things that they really they involve kind of three sets of processes, three flow charts, almost, if you will: One for the planning phase, one for the development phase, one for the deployment phase. And, we have worksheets describing kind of how we think about each of those phases and the things that we need to work through the questions we need to ask. So things like, have you have you thought about where your data is coming from? Do you know what its provenance is? How have you convinced yourself it’s relevant? What is your task? What is your input? What is your output? And what is the benchmark by which you’re measuring performance? Have you done a concentrated effort to identify your stakeholders and end users? And have you thought about harms modeling? Have you done a targeted harms modeling process? So all of these, the worksheets that we’ve got to guide that process for ourselves, really, we’ve gone to the effort of making so that we can put it in a public forum. It’s up on our website, along with a white paper that actually describes kind of some of our learnings from applying the star programs over the last year or so. And so yes, the short answer is that we certainly aim to do it for ourselves to hold, both make our lives easier, and to hold ourselves accountable for how we’re running these programs. But at the same time, we certainly hope that it’ll provide value not only for folks in DOD, but also for folks who are even in the private sector, because there are aspects of this where when you’re working on a DOD application, there are questions that you have to ask where you have to really asked hard questions, sometimes. There are occasionally times where your folks are developing applications in the private sector where it’s not always obvious what the best process is. And so our hope is that by documenting some of these things that we’ve had to wrestle with, we can both, a) provide value more broadly, but also give folks from academia, industry, the partners that we work with the chance to throw tomatoes out and say, hey, you should do this better, here’s a way you can do this better, etc. So it’s transparent in that way.
Tom Temin: And the implication here and tell me if I’m correct about this is that when deploying an algorithm to do something to direct some sort of a system or outcome, the algorithm is not as crucial to the ethical deployment of it as the data used to train it, or is that going too far?
Jared Dunnmon: It’s all related. You can certainly have data that makes sense and would align with the ethical principles that you have. And you could certainly find a way to train an algorithm that would not reflect kind of those ethical principles. So it’s the entire process. So that’s why it’s not only a matter of, have you gotten the right data, and have you trained an algorithm that seemed like it would make sense? There’s also a big part of this, which is in that deployment phase I mentioned, which is what do you do? Building AI systems is not just a matter of training a model and saying, yes, I have it. It is a continuous process of maintenance and inquiry, and monitoring, and making sure that you’re thinking post deployment, am I continuing to do that harms modeling? Am I continuing to monitor? What about this model is going right? What about this model is going wrong? Has my environment changed such that the assumptions that I built this model under are no longer correct? And that’s causing me to have suboptimal outcomes. So the short answer is, it’s all the above. And the important thing to realize is it’s not just a matter of okay, I counted for those things so I’m good. It’s I have to continue to do that every single day. And by the way, that can also mean that if you look at that, the costs of doing that, like there’s a cost associated with that, right? So there are some algorithms where if you tell a human hey, human, go and read these 1 billion documents, look, there’s no way. It’d be very helpful for machine learning to help me out doing that and identifying important pieces of information. So maybe the cost to maintain that model there is worth it. But there are other contexts where people have looked at the process that we go through and kind of gotten a sense of, well, this is actually what it takes to maintain an AI system. And their response was, I’m not sure it’s worth it. And then our response being yeah, that may be the right answer?
Tom Temin: In the DoD context there’s probably, or in the federal context, it’s not simply a matter of doing those constant accountability checks that you mentioned, but also documenting them and being able to show that you’ve done them, which itself is a big effort and an expensive one.
Jared Dunnmon: Yeah. And that’s why again, that’s why we went to the effort of creating kind of those worksheets, again, for ourselves to kind of guide and make that process efficient, both for ourselves and for our partners, and to make sure that we’re all on the same page, because you can also say, here’s what I want, and the folks you’re working with take that in a slightly different way. And then you end up talking past each other. So yeah, it is work. But that doesn’t mean that it shouldn’t be done, right? And so I guess the response to be clear, the response is, yes, it is work. But really doing anything worthwhile requires work. And so the question becomes, is that work worth it to accomplish the outcome that one wants to achieve with an AI system? And that’s why it’s so critically important to be clear about from the start, what are the first questions we ask, what is the task you’re doing? How are you measuring performance? And what is the benchmark? What is the current way you’re doing that? Because if you can’t answer those questions, crisply and concisely, you will not be able to measure value. And you can’t actually make that value judgment about whether it’s worth doing the entire rest of this, because you won’t be clear about how the system’s going to provide you value.
Tom Temin: And it sounds like then that these guidelines are useful to anybody, any agency deploying AI or contemplating it, not just Defense Department.
Jared Dunnmon: Yeah, I mean, we’ve certainly talked with a number of folks outside the DoD and kind of both gotten their input from kind of their own experience, and certainly tried to make these things accessible. So I won’t go in … and speak for the other folks that have to read things that we wrote, but at the same time, certainly our hope is that there’s at least some value provided to those folks.
Tom Temin: Jared Dunnmon is technical director for Artificial Intelligence and Machine Learning at the Defense Innovation Unit. Thanks so much for joining me.
Jared Dunnmon: Thanks, Tom.