DoD unveils responsible AI toolkit

The Defense Department’s responsible artificial intelligence toolkit has 70 tools to help perform tasks like studying when things go awry, as well as tools to...

The Pentagon unveiled a responsible artificial intelligence toolkit as it continues to push for responsible use of the technology across the Defense Department and to find more use cases for it, while working to mitigate harm.

“The Responsible Artificial Intelligence (RAI) Toolkit provides a centralized process that identifies, tracks and improves the alignment of AI projects toward RAI best practices and the DoD AI Ethical Principles while capitalizing on opportunities for innovation,” according to an executive summary DoD released last week. “The RAI Toolkit provides an intuitive flow guiding the user through tailorable and modular assessments, tools and artifacts throughout the AI product lifecycle. Using the toolkit enables people to incorporate traceability and assurance concepts throughout their development cycle.”

As part of that toolkit, the Defense Department will maintain an AI incident repository and an AI incident database. The repository, which is still in development, will collect AI incidents and failures for the department to review and learn from when developing future AI systems. The database will index the collective history of real-world harms or near-harms caused by deployed AI. It is aimed at helping DoD learn from prior experience to prevent or mitigate harm and bad outcomes.
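
To make the idea concrete, the sketch below shows what a single record in such an incident repository might look like. The field names, severity labels and example entry are assumptions for illustration only; DoD has not published the repository's actual schema.

```python
# A minimal sketch of one record in an AI incident repository.
# All field names and the severity scale are illustrative assumptions,
# not the DoD repository's actual schema.
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class AIIncidentRecord:
    """One logged AI incident or near-miss, indexed for later review."""
    system_name: str                 # AI system or model involved
    incident_date: date              # when the incident occurred
    severity: str                    # e.g. "near-miss", "minor", "major"
    description: str                 # what went wrong and in what context
    harms_observed: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)


# Purely illustrative example entry
record = AIIncidentRecord(
    system_name="image-classifier-demo",
    incident_date=date(2023, 11, 1),
    severity="near-miss",
    description="Model misclassified low-light imagery during a test exercise.",
    harms_observed=["degraded situational awareness"],
    mitigations=["retrain on low-light data", "add human review step"],
)
print(record)
```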

As part of DoD’s continued effort to reduce risks, the toolkit includes a tool for documenting AI risks, along with guidance on how to calculate and mitigate them. There is also a guide to help teams think through issues when creating AI systems.
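
The toolkit’s documentation does not spell out a particular formula, but risk calculations of this kind are often a simple likelihood-times-impact score. The sketch below illustrates that generic approach; the 1-5 rating scales and the thresholds are assumptions, not the toolkit’s actual method.

```python
# A generic likelihood-times-impact risk score, shown only to illustrate the
# kind of calculation a risk-documentation tool might support. The 1-5 scales
# and the level thresholds are illustrative assumptions.

def risk_score(likelihood: int, impact: int) -> int:
    """Score a documented AI risk on a simple 5x5 risk matrix."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be rated 1-5")
    return likelihood * impact


def risk_level(score: int) -> str:
    """Bucket a score into a qualitative level (thresholds are illustrative)."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"


# Example: a risk judged likely (4) with moderate impact (3)
score = risk_score(likelihood=4, impact=3)
print(score, risk_level(score))  # 12 medium
```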

“A responsible approach to AI means innovating at a speed that can outpace existing and emerging threats and with a level of effectiveness that provides justified confidence in the technology and its employment,” the toolkit’s background information said. “Responsible AI involves ensuring that our technology matches our values. It positions our nation to lead in technological innovation while remaining committed to and advocates of our democratic principles. In order to accomplish this goal, RAI work at the DoD translates ethical principles into concrete benchmarks for each use case. The RAI Toolkit provides the resources to do this work and is designed for accessibility, modularity and customization.”

DoD is also looking to minimize harm. Among the 70 tools is an auditing tool to help DoD examine, report and mitigate discrimination and bias in machine learning models, along with metrics for evaluating those models. There is also a toolset to reduce human bias in AI projects, and the toolkit includes a framework for mitigating threats to AI and machine learning systems.
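
As one illustration of the kind of metric a bias audit can report, the sketch below computes a demographic parity difference, the gap in positive-prediction rates between two groups. The toy data and the 0.2 review threshold are made up for illustration and are not drawn from the toolkit.

```python
# Illustrative bias-audit metric: demographic parity difference, i.e. the gap
# in positive-prediction rates between two groups. Data and threshold are
# invented for this example.

def positive_rate(predictions, groups, group):
    """Fraction of positive predictions (1s) assigned to members of `group`."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members) if members else 0.0


def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(positive_rate(predictions, groups, group_a)
               - positive_rate(predictions, groups, group_b))


# Toy audit: model predictions for ten records split across two groups
preds = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
grps = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, grps, "A", "B")
print(f"demographic parity difference: {gap:.2f}")
if gap > 0.2:  # illustrative threshold
    print("flag for review: groups receive positive predictions at different rates")
```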

The toolkit has items to help throughout a project’s lifecycle. For example, there is a tool for setting a project’s responsible AI goals, a guide for senior leadership to evaluate AI project program managers, and a tool to define and establish roles and responsibilities for AI projects. Another tool catalogs AI use cases; it comes after the Pentagon created Task Force Lima to explore generative AI use cases.

The toolkit, released last week, follows DoD’s adoption of AI ethical principles in 2020 and the 2022 RAI Strategy and Implementation Pathway, which outlined more than 60 efforts across DoD to support those principles. The toolkit helps DoD carry out that strategy and implementation plan. It also comes on the heels of DoD’s new AI strategy, released earlier this month, which focused on agile AI adoption throughout the department. Meanwhile, DoD’s Chief Digital and Artificial Intelligence Office is looking to increase knowledge of AI across the department with a digital learning platform.

The toolkit builds on tools and guidelines from the Defense Innovation Unit and the National Institute of Standards and Technology, as well as the IEEE 7000 Standard Model Process for Addressing Ethical Concerns.

DoD’s responsible AI toolkit will be updated as technologies and best practices change and as new capabilities become available.
