The Biden administration is setting a higher bar for how federal agencies oversee artificial intelligence and automation tools, and how they implement them in their own operations.
The White House Office of Science and Technology Policy released a blueprint for a “Bill of Rights” on Tuesday for the design, development and deployment of artificial intelligence and other automated systems.
The Bill of Rights outlines what more than a dozen agencies will do to ensure AI tools deployed in and out of government align with privacy rights and civil liberties.
The administration, as part of those efforts, is working on new federal procurement policy and guidance to ensure agencies purchase and implement AI and automation tools that are transparent and free of bias.
Other agency actions include cracking down on employers who use discriminatory hiring algorithms, rooting out algorithmic bias and discrimination in health care and protecting renters and home buyers from automated systems that reinforce housing segregation.
OSTP Director Arati Prabhakar said automation and AI tools are becoming more common to make sense of an “increasingly data-dense world,” but require guidelines to protect privacy, civil rights and other democratic values.
“Government agencies and researchers and individuals are rushing to use them, but you actually can’t figure what’s going on by just looking here at the bits and the algorithms. We can’t really understand it fully from the vantage point of a single use case, or from the vantage point of a single company or agency, or even a single community. It really requires a big picture,” Prabhakar said.
Deputy OSTP Director Alondra Nelson said the White House started its work on the AI Bill of Rights a year ago, to ensure that privacy and civil liberty protections “are baked in from the start” of this technology.
“Data-driven tools do tremendous good. They can empower us, they can help us solve some of our greatest challenges, but too often the use of these technologies results in just the opposite, the deepening of inequality and the undermining of our rights,” Nelson said.
Chiraag Bains, deputy director of the Domestic Policy Council for Racial and Economic Justice, called the Bill of Rights a “core part” of the administration’s executive orders promoting diversity, equity, inclusion and accessibility in government.
“Make no mistake, the harms that automated systems cause can constitute and do constitute a new civil rights frontier,” Bains said. “It’s imperative that all of us, from every sector, ensure that such technologies are designed and deployed in ways that safeguard and strengthen our civil rights and our civil liberties and that bolster equal opportunity for American society rather than undermining them.”
The AI Bill of Rights builds on some steps agencies already took this year, and sets new goals for agencies to weed out AI bias and discrimination.
The Equal Employment Opportunity Commission (EEOC) and the Justice Department issued guidance in May outlining ways AI and automated hiring tools can violate the Americans with Disabilities Act (ADA).
EEOC Chair Charlotte Burrows said the commission is rolling out new AI training for its investigators, and is working with the Labor Department's Office of Federal Contract Compliance Programs and the Justice Department to ensure the federal government is “sending consistent messages” on its efforts to crack down on algorithmic discrimination.
“What we’re realizing is that pretty much every kind of employment practice that has existed is now being automated in various ways,” Burrows said.
“If we don’t keep pace with that, we will lose the power of the civil rights laws that people fought and died to have,” she added.
Secretary of Health and Human Services Xavier Becerra said the department is working on an equity assessment of AI systems used within the health care sector that will be released by the end of the year.
HHS is also considering federal rulemaking that would put the force of law behind the AI Bill of Rights.
“What we’re going to ultimately do, if the courts don’t get in our way, is have a rule that we’ll have out there that essentially would prohibit discrimination with the use of algorithms when it comes to health technology. And so if we’re able to move forward, we should be able to have a better look at how it is used is making use of AI,” Becerra said.
The blueprint for an AI Bill of Rights is a non-binding white paper and does not, in its current form, impose requirements on the public or federal agencies.
Becerra said HHS is also directing Micky Tripathi, the National Coordinator for Health Information Technology, to investigate cases where AI and automation have discriminated against patients.
The Department of Veterans Affairs, meanwhile, is taking steps to notify veterans about when AI systems are used in their health care.
The Consumer Financial Protection Bureau (CFPB) is also taking steps under federal anti-discrimination law to ensure creditors provide consumers with specific and accurate explanations when credit applications are denied.
“Every single day, people are essentially being falsely accused by an algorithm of having a criminal conviction or some sort of court filing, because they happen to have a common last name,” CFPB Director Rohit Chopra said.
Chopra said the bureau is looking to “muscle up” with staff who understand AI and automation.
CFPB is hiring more technologists and has launched a whistleblower program that lets tech workers report law violations baked into the user experience (UX) and user interface (UI) of the products they work on.
“We need to make sure that we’re bringing all the different disciplines to really be able to understand and frankly, to be able to police and regulate it,” Chopra said.
The Energy Department, meanwhile, will join a growing list of agencies with their own AI ethics documents later this fall.
The department soon plans to release its Principles and Guidelines for Advancing Responsible and Trustworthy AI, as well as an AI Risk Management Playbook that suggests mitigations to proactively manage AI risks such as algorithmic discrimination.
OSTP also worked with the Office of Management and Budget and the Federal Chief Information Officers Council to publish an inventory of non-classified and non-sensitive government AI use cases.
The Bill of Rights outlines five core protections:

- Safe and effective systems
- Algorithmic discrimination protections
- Data privacy
- Notice and explanation
- Human alternatives, consideration and fallback
Nelson said the blueprint for an AI Bill of Rights is meant for the people who interact with these technologies every day, as well as the developers who create them.
“Many building these technologies across America, from businesses and engineers, want to do the right thing. Some have demonstrated a willful ignorance about the harms of the tools they deploy. Policymakers want to do the right thing, too, but need support and partnership to shape the laws and regulations to protect their constituents,” she said.
Jory Heckman is a reporter at Federal News Network covering U.S. Postal Service, IRS, big data and technology issues.