Challenges of DoD’s Ethical Principles for AI

In February 2020, the Department of Defense (DoD) issued five Ethical Principles for Artificial Intelligence (AI): Responsible, Equitable, Traceable, Reliable and Governable. The DoD principles build on 2019 recommendations from the Defense Innovation Board and on the interim report of the National Security Commission on AI (NSCAI). Both Congress, through the legislation creating the NSCAI, and the Trump Administration, through Executive Order 13859 (Feb. 11, 2019), have begun to address the issue of AI and national security. The defense industry and others in the private sector have also been considering ethical issues regarding AI, including whether businesses should have an AI code of ethics. When cyber first became an issue about 22 years ago, the trend was to raise awareness and think through the consequences. Similarly, we are now developing awareness of the issues AI raises and beginning to think through its consequences.

Having spent many years at DoD, both as a lawyer and in the policy shop, and on Capitol Hill at the House Armed Services Committee and the House Ethics Committee, I see many potentially significant practical issues with the use of AI in both the military and business spheres.

However, the development and publication of the DoD principles, and the actions the department has taken since, demonstrate that DoD is taking this seriously.

Department of Defense and AI

The DoD AI principles were developed by the Defense Innovation Board, which submitted a series of recommendations to Secretary of Defense Mark Esper in October 2019. The recommendations were developed over a 15-month period by AI leaders in government and the private sector and were grounded in existing Law of War principles and statutes. They are designed to address the new ethical issues raised by AI.

The DoD General Counsel, Paul C. Ney Jr., has commented on the importance of applying the existing principles of the law of war to new legal issues, and to AI in particular. Speaking before an international Law of War conference in May 2019, Ney said, “The advantage of artificial intelligence and other autonomy-related emerging technologies is the use of software or machine control of the systems rather than manual control by a human being.” However, Ney went on to stress that any such weapon system would, at some point, remain under “human control.”

To implement the DoD AI principles, DoD has established an AI policy team in the Pentagon’s Joint Artificial Intelligence Center (JAIC) under the command of Air Force Lt. Gen. Jack Shanahan. Shanahan quickly hired attorney Alka Patel to head the policy team implementing the principles. Patel had been the executive director of the Risk & Regulatory Services Innovation Center at Carnegie Mellon University.

Early implementation of the DoD principles

Patel and her colleagues have begun to use the new DoD principles at JAIC. Among the first steps taken:

  • Incorporating the principles as applicable standards in requests for proposals, including a May award to Booz Allen Hamilton.
  • JAIC participation, through Patel, in a Responsible AI subcommittee, part of a larger DoD working group writing a broader DoD policy document.
  • Establishment of a pilot program, “Responsible AI Champions,” bringing together a broad group inside JAIC to examine the principles throughout the entire life cycle of AI programs.
  • Early work on the creation of a Data Governance Council within the U.S. government and with other countries.

Issues

As DoD and the defense sector take up the challenge of applying ethical principles to real-world business and national security problems, several issues must be examined carefully:

  • Clarification of the terms used in the principles, including, but not limited to, what is “appropriate” in the first principle (Responsible) and what constitutes “unintended bias” in the second principle (Equitable).
  • As acknowledged by Ney, Stuart Russell and many others, what will be the scope of human control of AI involved in military applications?
  • As also broadly discussed, what will be the scope of ultimate human accountability for AI decision-making?
  • And finally, the overarching problem of “moral machines”: what to think about machines that think.

Implications for the private sector

The implications for defense contractors and the private sector are immense. The Pentagon is still litigating disputes over its $10 billion cloud computing contract, which Microsoft won in 2019 and which Amazon and others are contesting. In addition to the DoD issues addressed by Ney, the use of AI in other areas of the private sector is broad and growing (examples include healthcare, project management, and automation of e-commerce).

Paul Lewis is Director of Ethics and Business Conduct for the Intelligence & Security sector of BAE Systems Inc. He is the former General Counsel of the House Armed Services Committee and teaches Ethics at Georgetown University.
