NIST’s AI risk management framework must focus on ethical AI
AI is developing at a breakneck pace, and research and policy guidance must keep pace to ensure AI is implemented ethically.
Our world is becoming more digital by the minute, and the amount of data generated is growing exponentially. At the same time, AI is developing at a breakneck pace, and research and policy guidance must keep pace to ensure AI is implemented ethically.
To implement AI ethically, agencies need to ensure AI is acting within scope and that its behavior has been thoroughly investigated for fairness and harm. If an AI model cannot be trusted to support the agency's mission, or cannot be shown to limit its harm to the public, it cannot become part of the public infrastructure.
As a result, explainable AI (XAI) is an essential part of any risk management framework. Tools for understanding AI models, and model architectures capable of generating human-understandable explanations, are prerequisites for appreciating the risks in any AI system. Protecting a system first requires seeing where the risks lie.
The National Institute of Standards and Technology's four principles for XAI outline what an explainable system needs: the model must be able to generate an explanation; that explanation must accurately reflect how the model reached its output; it must be meaningful to stakeholders in the context of the outcome; and it must acknowledge the limits of its explanatory power, so the model cannot be misused without setting off alarms.
While necessary, achieving XAI in line with NIST's principles can be challenging. Most AI models attempt to replicate the way humans think, but the underlying mathematical models are significantly different, which makes it difficult for humans to understand the logic or heuristics behind model decisions. Three steps can help agencies use AI responsibly and align with NIST's principles: adopting a transparency-first approach to AI, maintaining production monitoring, and investing in employee education.
Making AI explainable
First, agencies must be able to interpret the algorithms of varying complexity driving these AI models. Depending on the underlying approach, that could be something relatively intuitive, such as regressions and decision trees, or something more abstract, such as natural language transformers, whose results are much harder to trace back to their inputs.
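To make the interpretable end of that spectrum concrete, the minimal sketch below prints the learned decision rules of a shallow tree using scikit-learn. The dataset (Iris) and tree depth are illustrative choices, not a recommendation for any particular agency workload.

```python
# Minimal sketch: printing the decision rules of a small tree model.
# The dataset and tree depth are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the learned if/then splits in plain language,
# roughly the level of interpretability that regressions and shallow
# trees offer out of the box.
print(export_text(tree, feature_names=feature_names))
```

Transformers and other deep architectures offer no comparable readout, which is why they need dedicated explainability tooling.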
For a model to be explainable, the interpretation must also be easily understandable. This is where AI knowledge and human-computer interaction can join forces, translating the statistical metrics and model measurements into language that helps humans decide whether to take action based on the model output.
For example, with computer vision algorithms that help identify anomalies in medical images, such as looking for cancer on a lung CT scan, the final diagnosis should be made by medical professionals. The computer vision algorithm highlights "areas of interest," such as the pixels that contribute most to the automated detection of cancerous tissue, but these findings are confirmed by experienced human professionals.
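As a rough sketch of how such areas of interest can be surfaced, the snippet below computes a simple gradient saliency map for an arbitrary image classifier. The untrained ResNet, random input and single-image batch are placeholders; production medical-imaging systems typically rely on more robust attribution methods, reviewed by clinicians.

```python
# Illustrative gradient-saliency sketch: highlight which input pixels
# most influence a classifier's prediction. The model and input are
# placeholders, not a medical-imaging system.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for a trained model
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input

scores = model(image)
top_class = scores.argmax(dim=1)
scores[0, top_class.item()].backward()

# Per-pixel saliency: magnitude of the gradient of the top score with
# respect to the input, collapsed across color channels.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
print(saliency.shape)
```

The saliency map can then be overlaid on the original image so the human reviewer sees what drove the model's call.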
A transparency-first approach to AI
There are two major components to enabling a transparency-first approach to AI model development and operationalization. The first is MLOps (machine learning operations), the technology infrastructure and process that enables reproducible and repeatable model development and deployment. Without MLOps, model explainability is ad hoc and manual, minor changes or upgrades to models become time-intensive and expensive, and monitoring of production AI model performance is inconsistent and unreliable.
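There is no single way to build this. As one hedged illustration, the sketch below writes a minimal run manifest (code version, data fingerprint, parameters and metrics) alongside a trained model so a deployed artifact can be traced back to exactly how it was produced. The file names and fields are assumptions for the example; mature MLOps platforms capture the same information automatically.

```python
# Illustrative sketch of a minimal reproducibility record for a training
# run. Field names and file layout are assumptions for this example.
import hashlib
import json
import subprocess
from datetime import datetime, timezone

def data_fingerprint(path: str) -> str:
    """Hash the training data file so the exact inputs can be verified later."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def write_run_manifest(data_path: str, params: dict, metrics: dict,
                       out_path: str = "run_manifest.json") -> None:
    manifest = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # The git commit ties the model artifact to the exact training code.
        "git_commit": subprocess.check_output(
            ["git", "rev-parse", "HEAD"]).decode().strip(),
        "data_sha256": data_fingerprint(data_path),
        "params": params,
        "metrics": metrics,
    }
    with open(out_path, "w") as f:
        json.dump(manifest, f, indent=2)

# Example usage (placeholder values):
# write_run_manifest("train.csv", {"max_depth": 3}, {"accuracy": 0.91})
```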
The reality is that all AI models will degrade in performance over time. This can happen due to data or concept drift, new algorithms, or changing business ROI. Whatever the reason, production monitoring is how the owner or operator is alerted and can trigger the right actions.
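As a hedged sketch of what that monitoring can look like, the snippet below compares a feature's live distribution against its training baseline with a two-sample Kolmogorov-Smirnov test and raises an alert when the difference is statistically significant. The threshold and synthetic data are placeholders.

```python
# Illustrative drift check: compare a production feature distribution
# against the training baseline. Threshold and data are placeholders.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(baseline: np.ndarray, live: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Return True when the live distribution differs significantly
    from the baseline, signalling possible data drift."""
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < p_threshold

# Example with synthetic data: the live feature has shifted upward.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=5_000)
print(drift_alert(baseline, live))  # True -> alert the model owner
```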
The second component is algorithmic explainability: the ability to extract the heuristics behind model decisions and translate them into actionable language. Machine learning expertise is crucial for extracting those decision heuristics. Depending on the use case, there may be a tradeoff between model explainability and model performance, in terms of accuracy and time-to-predict, so human involvement is required to make the optimal algorithm selection and recommendation.
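One widely used, model-agnostic way to extract those heuristics is permutation importance, sketched below with scikit-learn. The dataset and model are placeholders, and depending on the use case a more specialized attribution method may be the better tradeoff.

```python
# Model-agnostic explainability sketch: permutation importance measures
# how much shuffling each feature degrades model performance.
# Dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Translate the raw scores into a ranked, human-readable summary.
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: mean importance {score:.3f}")
```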
The role of employee education
As with the adoption of other new technologies, part of the solution is proper internal education and change management. AI is often most effective when integrated behind the scenes, or "invisibly," into the existing technology infrastructure and workflow. But this makes educating employees about the presence of AI difficult.
A few key pieces of information should be communicated and made available to employees: a list of the AI models in their workflow, a basic primer on each model's scope, the model's owners and their contact information, a guide to proper usage, training on how to protect the model's security, and a feedback mechanism for reporting issues with the model.
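As a hedged illustration of how that information might live in one place, the sketch below defines a simple model-registry record whose fields mirror the list above. The structure and example values are assumptions, not an established standard.

```python
# Illustrative model-registry record collecting the employee-facing
# information listed above. Field names and values are placeholders.
from dataclasses import dataclass

@dataclass
class ModelRegistryEntry:
    name: str                   # which AI model sits in the workflow
    scope: str                  # plain-language primer on what it does
    owner: str                  # accountable team or person
    contact: str                # how to reach the owner
    usage_guide_url: str        # guide to proper usage of the model
    security_training_url: str  # training on protecting the model
    feedback_channel: str       # where employees report issues

registry = [
    ModelRegistryEntry(
        name="invoice-anomaly-detector",  # hypothetical example
        scope="Flags unusual invoices for human review; does not reject them.",
        owner="Data Science Team",
        contact="ds-team@example.gov",
        usage_guide_url="https://intranet.example.gov/ai/invoice-guide",
        security_training_url="https://intranet.example.gov/ai/security",
        feedback_channel="#ai-feedback",
    ),
]
```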
It is also important to highlight the crucial role humans play in the incorporation of AI. Keeping a human in the loop is essential to ensuring AI is applied ethically, and explainability builds the information bridge that keeps humans in the loop as the ultimate decision-makers.
AI has made vast strides in the last decade. But these strides have been measured in terms of model performance alone, often at the cost of model interpretability and explainability. Today, explainability is one of the most active areas of research and investment.
We are seeing promising new findings in applying contrastive learning and counterfactual approaches to explaining deep learning models. There has also been exciting research into the fundamental logic driving key neural network components, as well as comparisons between different powerful neural networks.
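To give a flavor of the counterfactual idea, the sketch below searches for the smallest single-feature change that flips a classifier's prediction. The model, dataset and search grid are placeholders, and published counterfactual methods are considerably more sophisticated.

```python
# Illustrative counterfactual sketch: for one input, find the smallest
# change to a single feature that flips the classifier's prediction.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
model = LogisticRegression(max_iter=5000).fit(data.data, data.target)

x = data.data[0].copy()
original = model.predict([x])[0]

best = None  # (num_std_devs, feature_index, direction)
for j in range(x.shape[0]):
    scale = data.data[:, j].std() or 1.0
    for sign in (1.0, -1.0):
        for step in np.linspace(0.1, 3.0, 30):
            candidate = x.copy()
            candidate[j] += sign * step * scale
            if model.predict([candidate])[0] != original:
                if best is None or step < best[0]:
                    best = (step, j, sign)
                break  # smallest flipping step found for this feature/direction

if best is not None:
    step, j, sign = best
    direction = "increasing" if sign > 0 else "decreasing"
    print(f"Prediction flips when {direction} '{data.feature_names[j]}' "
          f"by {step:.1f} standard deviations.")
```

An explanation of this form ("the decision would change if this value were slightly different") is often far more meaningful to a non-specialist than a list of raw coefficients.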
Applying these new techniques and making them the norm for AI implementation should be a priority for all organizations. Explainable AI is not a luxury; it is an essential component for keeping humans in the decision-making process, making AI models more resilient and sustainable, and minimizing potential harm to users.
Henry Jia is the data science lead at Excella