The imperative of AI guardrails: A human factors perspective
With the surge of consumer-facing AI tools and systems such as ChatGPT and others, we are witnessing the insertion of AI into our lives in one way or another.
The burgeoning landscape of artificial intelligence represents a sea change in the capability of technology. With this rapid paradigm shift in the making comes a challenge that could not be more urgent: the need to create a high level of synergy between humans and machines.
With the surge of consumer-facing AI tools and systems such as ChatGPT and others, we are witnessing the insertion of AI into our lives in one way or another. The serious possibility of adverse side effects can easily be veiled by the dazzling allure of these new tools. On one hand, AI embodies a technological achievement that, if wielded prudently, could benefit a wide range of human activities. On the other, routine AI errors, biases and the loss of consumer autonomy create a palpable sense of unease.
Many people are worried about possible long-term effects of AI on human employment, increases in wealth disparity, and loss of critical skills as AI takes over current tasks and jobs. In the more immediate time frame, however, human decision-making and performance can be negatively impacted by AI systems as they are being implemented today. As such, a critical and immediate concern is the effect of AI on the people who interact with it.
As AI permeates human endeavors in areas as varied as driving, healthcare and online chatbots, it becomes urgent to implement “guardrails,” or protective frameworks that shield against the erosion of people’s ability to oversee and interact with these systems. These guardrails are needed to protect people from significant harm while supporting their ability to engage with AI usefully and effectively. To implement AI safely and effectively, we need to draw heavily on research that has uncovered how AI affects human performance, and the factors that determine people’s ability to detect and recover from AI limitations.
Human factors and ergonomics, champion of safety
Human factors and ergonomics is a multidisciplinary scientific field that focuses on understanding how humans interact with systems, products and environments. Its goal is to improve human performance, health, safety and comfort using evidence-based research and design principles. The field combines insights from psychology, engineering and other sciences to design and optimize these interactions for both human well-being and overall system performance. It means accounting for human abilities, limitations and characteristics in the design of, well, everything people interact with, from computer keyboards to aircraft displays to the workflow of healthcare workers.
This field is crucial in ensuring that products and systems are tailored to meet the needs and capabilities of their users, ultimately leading to more efficient, effective and satisfying interactions between humans and technology.
As such, human factors researchers are in a unique position to provide scientifically based input on how to avoid negative outcomes from AI systems while retaining the positive effects that AI can provide. Over the past 40 years, the field has produced a substantial body of research on the effects of AI and automation on human performance, and on the key characteristics of the technology that lead to both good and bad outcomes.
Governments and organizations around the world have begun to develop principles for the use of AI and are starting to implement legislation and regulations to govern it. For example, the European Union has introduced the Artificial Intelligence Act, establishing initial regulations for the use of AI. The Defense Department has stated that AI should be responsible, equitable, traceable, reliable and governable. The White House Office of Science and Technology Policy (OSTP) has issued a Blueprint for an AI Bill of Rights stating that AI should (1) be safe and effective, (2) protect people from algorithmic discrimination, (3) have built-in data privacy protections, (4) provide notice that it is an AI and explanations for its actions, and (5) allow for human alternatives, consideration and fallback, letting people opt out of the AI.
These are all sound and appropriate principles that need to be further detailed to allow for meaningful and actionable implementation in different settings.
The foundation of AI guardrails
Human factors research, traditionally the bastion of safety and user experience, offers a vision for the development of useful pathways for AI. The Human Factors and Ergonomics Society (HFES) recently issued policy guidance, based on rigorous research, that provides useful insights for mitigating the darker sides of AI while fortifying its potential to positively support human endeavors.
First, all AI outputs should be explicitly labeled as the product of a computer system, and the sources of information used to generate those outputs should be specifically identified. Transparency is vital in revealing the origin of the information and its trustworthiness, whether it comes from an AI, a peer-reviewed academic journal, or simply an individual’s opinion. This is important so that people can determine what to believe from new generative AI tools.
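To make this principle concrete, the sketch below shows one way, purely illustrative, that a generative AI service might attach a machine-generated label and source attributions to every response it returns. The class names, model identifier and URL are hypothetical assumptions for the example; they are not drawn from the HFES guidance or any particular product.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SourceCitation:
    """Where a piece of the response came from, so users can judge its trustworthiness."""
    origin: str     # e.g. "peer-reviewed journal", "news release", "individual opinion"
    reference: str  # URL, DOI or document title

@dataclass
class LabeledAIOutput:
    """An AI response explicitly marked as machine-generated, carrying its sources."""
    text: str
    model_name: str                                   # hypothetical model identifier
    generated_by_ai: bool = True                      # the disclosure travels with the content
    sources: List[SourceCitation] = field(default_factory=list)

    def render(self) -> str:
        """Produce the user-facing text with the AI label and source list attached."""
        header = f"[AI-generated output from {self.model_name}]"
        if self.sources:
            cites = "\n".join(f"  - {s.origin}: {s.reference}" for s in self.sources)
        else:
            cites = "  - (no sources identified)"
        return f"{header}\n{self.text}\nSources:\n{cites}"

# Usage: the label and provenance are part of the output itself, not left to the interface.
answer = LabeledAIOutput(
    text="The proposed rule is scheduled to take effect next quarter.",
    model_name="example-assistant",
    sources=[SourceCitation(origin="agency news release",
                            reference="https://example.gov/newsroom/item-123")],
)
print(answer.render())
```

The design point is simply that the disclosure and the sources travel with the content, so they survive copying and sharing rather than depending on where the output happens to be displayed.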
Second, it should be illegal to use AI to commit or promote fraud. AI is already being employed to create text, images and video that do not accurately portray real events, often for fraudulent purposes. Any AI output that alters or creates text, images or video to communicate factually inaccurate events or information must be explicitly and prominently labeled as “fiction” or “fake,” and violations of this rule should be legally treated as fraud. Such labeling would make it much easier for people to recognize election misinformation or fake images online, for example, and developers must avoid intentionally altering content in ways that spread misinformation.
Third, AI systems should both avoid overt biases and work to expose any biases that do exist in the system. Biases in AI systems can have disparate impacts on people, for example in systems that screen resumes. Any known limitation on the conditions or circumstances to which an AI system applies must be made transparent to the users of the AI.
AI can exhibit biases stemming from non-representative training data or development processes, leading to poor performance or incorrect conclusions in scenarios outside its training. These biases are not only opaque, making them difficult for users and even developers to identify, but they also influence human decision-making in subtle and potentially harmful ways. Because such biases can degrade human judgment itself, individuals find it hard to detect and correct them on their own.
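As an illustration of what “exposing” bias could look like in the resume-screening context mentioned above, the sketch below computes per-group selection rates from an audit log and flags any group whose rate falls below the widely cited four-fifths screening heuristic. The audit data, group labels and threshold are assumptions made for the example, not prescriptions from the HFES guidance.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group selection rates from (group, was_selected) records."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / total[group] for group in total}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times the
    highest group's rate (the common 'four-fifths' screening heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

# Hypothetical audit log: (demographic group, advanced to interview?)
audit_log = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(selection_rates(audit_log))         # roughly {'A': 0.67, 'B': 0.33}
print(disparate_impact_flags(audit_log))  # {'A': False, 'B': True} -> group B flagged for review
```

A check like this does not remove bias, but it makes a hidden disparity visible to developers and users, which is the transparency the guardrail calls for.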
Fourth, the developers of AI systems must be required to assume liability for the performance of their systems. AI systems are often presented as solutions to human error, yet research indicates that new problems can emerge when AI takes over tasks. Accidents involving automated or semi-automated vehicles, for instance, stem from how those vehicles are programmed, creating new challenges for human performance. People often find it hard to understand or predict AI behavior and to compensate for its shortcomings, and they have a reduced level of situational understanding when such systems are in use. As such, developers of AI systems should be held accountable for their products, especially since end users might lack sufficient knowledge about the AI’s capabilities and limitations in specific contexts.
Fifth, AI needs to be both transparent and explainable. Unless people have sufficient information to understand what the AI is doing and why, and to recognize when it is in a situation it cannot handle, they will be unable to take over control when needed. Many accidents with new automated vehicles, for example, have been caused by problems with human oversight. People will face similar difficulties in developing enough understanding of AI systems to properly oversee them in healthcare, military operations, aviation and many other safety-critical areas.
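One simple pattern for supporting that kind of oversight is to package every AI recommendation with a plain-language rationale and an explicit hand-off signal when the system judges itself to be outside the conditions it can handle. The sketch below is a minimal illustration of that pattern; the field names, confidence threshold and driving scenario are assumptions for the example, not a specification of how any particular system works.

```python
from dataclasses import dataclass

@dataclass
class ExplainedRecommendation:
    """An AI recommendation packaged with what a human overseer needs to evaluate it."""
    action: str                     # what the system proposes to do
    rationale: str                  # why, stated in terms the operator can check
    confidence: float               # the system's own estimate, from 0.0 to 1.0
    handoff_threshold: float = 0.6  # illustrative cutoff, not an empirical recommendation

    @property
    def needs_human_takeover(self) -> bool:
        # Below the threshold, the system declares it may be outside conditions it can handle.
        return self.confidence < self.handoff_threshold

# Usage with a hypothetical driving scenario: low confidence triggers an explicit hand-off.
rec = ExplainedRecommendation(
    action="Maintain current lane",
    rationale="Lane markings detected on both sides; no obstacles within 50 meters.",
    confidence=0.42,  # e.g., markings degraded by heavy rain
)
if rec.needs_human_takeover:
    print(f"TAKE OVER: {rec.action} (confidence {rec.confidence:.0%}). {rec.rationale}")
```

The point is not the particular threshold but that the explanation and the hand-off signal are delivered together, so the person overseeing the system knows both what it is doing and when it needs help.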
The tumultuous road ahead: Navigating cultural and ethical frontiers
The integration of these AI guardrails requires action by Congress as well as careful development and testing of AI systems by developers. By viewing the development and implementation of AI tools and systems through the lens of human factors and these research-based guardrails, developers can build user-centered AI systems that give people the information they need.
While AI tools and technology can provide some useful products and services, the potential for negative impacts on human performance is significant. Only by mitigating these problems through the establishment and use of effective guardrails can the benefits of AI be realized, and negative outcomes minimized.
Mica Endsley is the government relations chair for the Human Factors and Ergonomics Society (HFES). She’s also president of SA Technologies, a situation awareness research, design and training firm. She was formerly the chief scientist of the United States Air Force.