Nurturing the future through human-centric AI guardrails

People cannot be expected to perform well with AI systems without training on their capabilities and limitations.

In film, artificial intelligence has often been portrayed as a threat to humans. As AI’s role in people’s daily lives increases, public concern continues to grow. Although AI poses several challenges, including potential long-term effects on employment, wealth disparity and loss of skills, as well as the need to develop and test robust, trustworthy AI software, the most critical and immediate concern is the effect of AI on the people who interact with it today.

Like other forms of automation, AI can bias human decisions in inappropriate ways. It can also make it hard for people to understand what AI systems are doing, leaving them out of the loop and unable to effectively take over from the AI or override its decisions when needed. These challenges create a whole new class of errors and accidents that can undermine human wellbeing. For example, when AI is used to provide information, people may not understand how accurate or reliable its recommendations are.

Human factors and ergonomics is a multidisciplinary field that studies the interactions between people and the systems, products and environments they use, in order to optimize efficiency, safety and wellbeing. It applies psychology, engineering and design to create human-centric solutions, ensuring that products and systems are designed with people’s abilities and limitations in mind. The result is technology that enhances overall user experience and performance and is therefore better able to achieve its intended purpose.

Human-centric AI design, as defined by this multidisciplinary field, considers the physical, cognitive and emotional aspects of human interaction with technology. Human factors and ergonomics professionals, grounded in extensive research on how AI affects human performance, have developed seven recommended guardrails for creating effective AI tools that avoid the many potential pitfalls of AI systems.

  1. User-centered design:

User-centered design places the needs and capabilities of users at the forefront of the design process. AI systems should be intuitive, accommodating users with varying levels of technical expertise. Conducting user research and usability testing, and incorporating feedback throughout the development lifecycle, helps create AI interfaces that are not only efficient but also aligned with users’ mental models, reducing the risk of errors and enhancing overall user satisfaction.

  2. Transparency and explainability:

AI systems often process complex information and use it to present recommendations or perform tasks. It needs to be very clear to people when AI is being used, and how accurate or reliable that AI is. People need to understand what the system is doing and how much they should trust it to perform well in any given situation: what its capabilities are in general, what types of situations it works in, and where its limits lie. Developers need to provide context on why the AI makes the recommendations it does, while ensuring transparency, meaning that people get real-time information on the reliability of the AI in the current situation. What is it doing, and what will it do next? Is the current situation within its capabilities? Transparency and explainability are critical for supporting people’s ability to work effectively with AI.
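
As a loose illustration, and not a prescription from any particular system, the sketch below shows one way an interface might package a recommendation with the real-time transparency cues described above. All names and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI recommendation bundled with the transparency cues a user needs."""
    action: str         # what the AI suggests doing
    confidence: float   # calibrated estimate that the suggestion is correct
    in_scope: bool      # does this situation resemble validated conditions?
    rationale: str      # short, plain-language explanation of the "why"

def present(rec: Recommendation) -> str:
    """Render a recommendation so the user can judge how much to trust it."""
    if not rec.in_scope:
        return (f"CAUTION: this situation is outside the conditions the AI was "
                f"validated for. Suggested action: {rec.action} (verify manually).")
    return (f"Suggested action: {rec.action} "
            f"(estimated reliability: {rec.confidence:.0%}). Why: {rec.rationale}")

# Hypothetical example: a route-planning assistant reporting its own limits.
print(present(Recommendation("reroute via Exit 12", 0.87, True,
                             "an accident is blocking the usual route")))
```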

  3. Bias and ethics:

Biases can creep into AI systems through limitations in the datasets used to train them, limitations that even developers may not know exist. If the AI is then used in situations outside that training, unwanted outcomes can result. Developers must be vigilant in identifying and mitigating biases that may arise in training data, as biased algorithms can lead to poor human decision making and, in some cases, perpetuate existing inequalities, as with hiring algorithms. Understanding how AI systems arrive at decisions is essential for detecting biases and can help prevent inadvertent discrimination. Employing interpretable models and providing transparent explanations for AI decisions ensures that users can make informed choices, allowing for the responsible use of AI.
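
One simple, widely used audit, sketched below with entirely hypothetical data, is to compare a model’s selection rates across groups, as in employment screening, where a ratio below roughly 0.8 (the “four-fifths rule”) is commonly treated as a red flag. This catches only the crudest disparities, but it illustrates the kind of check developers can build into their pipelines.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Rate at which each group is selected, from (group, selected) pairs."""
    totals, picked = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        picked[group] += int(was_selected)
    return {g: picked[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit of a hiring model over a held-out applicant pool.
audit = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", False), ("B", True), ("B", False), ("B", False)]
rates = selection_rates(audit)
print(rates)                          # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(rates))  # ~0.33 -- well below the 0.8 flag
```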

  4. Collaboration between humans and AI:

Human-AI collaboration is a cornerstone of positive user experience. Instead of replacing human capabilities, AI should augment and complement them. Human factors and ergonomics underscore the importance of designing AI systems that integrate seamlessly into existing workflows, providing valuable support without creating unnecessary friction. Interfaces that facilitate effective communication between humans and AI contribute to positive collaboration, drawing on the strengths of both parties for optimal outcomes.

  5. Safety:

AI is being developed for many safety-critical applications, for example in driving, flying, healthcare, power systems and military systems. Ensuring that AI leads to safe outcomes and supports the needs of human users in maintaining system safety is essential. AI should provide salient and timely alerts to users when human intervention or manual takeover is required. When the AI gets into situations it cannot handle, the system needs to revert to a safe state until people can successfully take control. People are a critical component of system safety, and AI systems should enhance their safety processes without creating new safety concerns.
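
This alert-then-revert pattern can be stated quite compactly. The sketch below is a minimal illustration, with hypothetical thresholds and a simplified per-second reading format; a real system would derive these from hazard analysis for its specific domain.

```python
TAKEOVER_BUDGET_S = 8   # seconds the human gets to respond before fallback
CONFIDENCE_FLOOR = 0.6  # below this, the AI declares itself unable to cope

def supervise(readings):
    """Decide who ends up in control, given per-second telemetry.

    `readings` is a list of (ai_confidence, human_has_taken_over) tuples,
    e.g. from a driving-simulator log. Returns the final control state.
    """
    alert_age = None  # seconds since the takeover alert was raised
    for confidence, human_in_control in readings:
        if confidence >= CONFIDENCE_FLOOR:
            alert_age = None              # AI is coping; it stays in control
            continue
        alert_age = 0 if alert_age is None else alert_age + 1
        if human_in_control:
            return "human_in_control"     # successful manual takeover
        if alert_age >= TAKEOVER_BUDGET_S:
            return "safe_state"           # e.g., slow down and pull over
    return "ai_in_control"

# Hypothetical trace: confidence drops and the human never responds.
trace = [(0.9, False), (0.5, False)] + [(0.4, False)] * 9
print(supervise(trace))  # -> safe_state
```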

  6. Joint human-AI testing:

The quality of AI, like that of many software products, cannot be determined just by whether the program runs or has bugs. It must be tested in a wide variety of realistic situations with representative human users. Testing should carefully assess how well people understand the AI system’s accuracy and its ability to perform tasks. In safety-critical applications, the ability of people to detect AI performance deficiencies and safely assume control of operations within the time available to avoid accidents or unwanted outcomes must be demonstrated. This is particularly important given that human vigilance in monitoring AI can be low and people may be distracted by competing tasks. Any deficiencies found should be used to improve AI interfaces.
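
To make that evaluation criterion concrete, one way such a test might be scored, sketched here with entirely hypothetical simulator data, is the fraction of trials in which the participant detected an injected AI failure and took over within the available time budget.

```python
def takeover_success_rate(trials, time_budget_s):
    """Fraction of trials where control was safely assumed in time.

    `trials` holds failure-to-takeover times in seconds, with None for
    trials in which the participant never took over at all.
    """
    successes = sum(1 for t in trials if t is not None and t <= time_budget_s)
    return successes / len(trials)

# Hypothetical results from twelve simulator runs with a distraction task.
times = [3.1, 4.7, None, 6.0, 2.5, 9.8, 5.2, None, 4.0, 7.5, 3.3, 12.1]
print(f"{takeover_success_rate(times, time_budget_s=8.0):.0%} succeeded in time")
```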

  7. Training:

People cannot be expected to perform well with AI systems without training on their capabilities and limitations. This is critical for developing a good understanding of what the system can do. Follow-on training is also needed whenever those capabilities change, which may often be the case with AI.

As AI continues to shape the way people live, it is crucial to leverage it for positive outcomes, and attention to the design of AI using human factors and ergonomics principles must play a pivotal role. By prioritizing user-centered design, ensuring explainability and transparency, addressing bias and other ethical considerations, fostering collaboration, and paying careful attention to safety, system testing and user training, we can create AI systems that succeed in advancing our goals. These recommended guardrails serve as a blueprint for developing AI technologies that push the boundaries of innovation and align with human needs and capabilities. In embracing human-centric AI, we pave the way for a future where AI technology elevates the human experience, promoting safe and effective outcomes.

Mica Endsley is a panelist at the Human Factors and Ergonomics Society (HFES), president of SA Technologies and former chief scientist of the U.S. Air Force.
