Understanding AI decision-making: Research examines model transparency

The SAGE framework for explaining context in explainable AI (Mill et al., 2024). Credit: Applied Artificial Intelligence (2024). DOI: 10.1080/08839514.2024.2430867

Are we putting our faith in technology that we don’t fully understand? A new study from the University of Surrey comes at a time when AI systems are making decisions impacting our daily lives—from banking and health care to crime detection. The study calls for an immediate shift in how AI models are designed and evaluated, emphasizing the need for transparency and trustworthiness in these powerful algorithms.

The research is published in the journal Applied Artificial Intelligence.

As AI becomes integrated into high-stakes sectors where decisions can have life-altering consequences, the risks associated with “black box” models are greater than ever. The research sheds light on instances where AI systems must provide adequate explanations for their decisions, allowing users to trust and understand AI rather than leaving them confused and vulnerable.

With cases of misdiagnosis in health care and erroneous fraud alerts in banking, the potential for harm—which could be life-threatening—is significant.

Surrey’s researchers detail alarming instances where AI systems have failed to adequately explain their decisions, leaving users confused and vulnerable. Fraud detection is a case in point: the datasets are inherently imbalanced, with only around 0.01% of transactions being fraudulent, yet that tiny fraction accounts for losses on the scale of billions of dollars.

It is reassuring that the vast majority of transactions are genuine, but the imbalance makes it harder for AI to learn what fraud looks like. Even so, AI algorithms can identify a fraudulent transaction with great precision; what they currently lack is the capability to adequately explain why it is fraudulent.
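
To make the scale of that imbalance concrete, here is a minimal illustrative sketch, not taken from the study: a classifier trained on synthetic data in which only a tiny fraction of samples are "fraud" can still score well on precision, while offering nothing beyond a bare label as its explanation. The library (scikit-learn), the numbers, and the feature setup below are all assumptions made for illustration.

```python
# Illustrative sketch only: synthetic data standing in for a fraud dataset.
# Assumes scikit-learn is available; numbers are hypothetical, not from the study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

# Heavily imbalanced data: roughly 0.1% of samples are "fraud" (class 1).
# (A 0.01% rate would need millions of rows; 0.1% keeps the sketch small.)
X, y = make_classification(
    n_samples=200_000, n_features=20, n_informative=8,
    weights=[0.999, 0.001], random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0,
)

model = RandomForestClassifier(
    n_estimators=200, class_weight="balanced", random_state=0,
)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("precision on the rare fraud class:",
      precision_score(y_test, pred, zero_division=0))

# The model can be precise, yet for any single flagged transaction the only
# output is a label and a score: no human-readable reason why it was flagged.
print("flagged:", bool(pred[0]),
      "score:", model.predict_proba(X_test[:1])[0, 1])
```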

Dr. Wolfgang Garn, co-author of the study and Senior Lecturer in Analytics at the University of Surrey, said, “We must not forget that behind every algorithm’s solution, there are real people whose lives are affected by the determined decisions. Our aim is to create AI systems that are not only intelligent but also provide explanations to people—the users of technology—that they can trust and understand.”

The study proposes a comprehensive framework known as SAGE (Settings, Audience, Goals, and Ethics) to address these critical issues. SAGE is designed to ensure that AI explanations are not only understandable but also contextually relevant to the end-users.

By focusing on the specific needs and backgrounds of the intended audience, the SAGE framework aims to bridge the gap between complex AI decision-making processes and the human operators who depend on them.
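
One way to picture what the framework asks of an explanation is a simple checklist object that forces a developer to state the setting, audience, goals, and ethical constraints before any explanation is generated. This is a speculative sketch rather than an API from the paper; the field names and example values are our own illustration.

```python
# Speculative illustration of the four SAGE dimensions as a checklist object.
# The field names and example values are assumptions, not code from the paper.
from dataclasses import dataclass, field

@dataclass
class SageContext:
    settings: str                    # where and under what constraints the AI is deployed
    audience: str                    # who consumes the explanation and their background
    goals: str                       # what the explanation must let that audience do
    ethics: list[str] = field(default_factory=list)  # fairness, privacy, accountability

# A hypothetical fraud-alert context: the explanation must be usable by a
# bank analyst, not a machine-learning engineer.
alert_context = SageContext(
    settings="real-time card-fraud screening in retail banking",
    audience="fraud analyst with domain knowledge but no ML training",
    goals="decide within minutes whether to block a card or contact the customer",
    ethics=["avoid disparate impact across customer groups", "protect personal data"],
)
print(alert_context)
```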

In conjunction with this framework, the research uses Scenario-Based Design (SBD) techniques, which delve deep into real-world scenarios to find out what users truly require from AI explanations. This method encourages researchers and developers to step into the shoes of the end-users, ensuring that AI systems are crafted with empathy and understanding at their core.

Dr. Garn said, “We also need to highlight the shortcomings of existing AI models, which often lack the contextual awareness necessary to provide meaningful explanations. By identifying and addressing these gaps, our paper advocates for an evolution in AI development that prioritizes user-centric design principles.

“It calls for AI developers to engage with industry specialists and end-users actively, fostering a collaborative environment where insights from various stakeholders can shape the future of AI. The path to a safer and more reliable AI landscape begins with a commitment to understanding the technology we create and the impact it has on our lives. The stakes are too high for us to ignore the call for change.”

The research highlights the importance of AI models explaining their outputs in text form or through graphical representations, catering to the diverse comprehension needs of users.

This shift aims to ensure that explanations are not only accessible but also actionable, enabling users to make informed decisions based on AI insights.
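
As a rough sketch of what a text-form explanation could look like, a system might convert a model's top per-feature contributions into a short sentence an end-user can act on. This is our own illustration, not the paper's method; the helper function and the attribution scores below are invented, and in practice the scores would come from an attribution technique such as SHAP-style values.

```python
# Hypothetical helper that turns per-feature contribution scores into a short
# plain-language explanation. The scores are invented for illustration.
def explain_in_text(contributions: dict[str, float], top_k: int = 2) -> str:
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = [
        f"{name.replace('_', ' ')} {'raised' if value > 0 else 'lowered'} the fraud score"
        for name, value in ranked[:top_k]
    ]
    return "This transaction was flagged mainly because " + " and ".join(parts) + "."

# Invented attribution scores for a single flagged transaction.
scores = {
    "transaction_amount": 0.42,        # unusually large purchase
    "merchant_country_mismatch": 0.31,
    "card_present": -0.05,
}
print(explain_in_text(scores))
# Prints: "This transaction was flagged mainly because transaction amount raised
# the fraud score and merchant country mismatch raised the fraud score."
```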

More information:
Eleanor Mill et al., Real-World Efficacy of Explainable Artificial Intelligence using the SAGE Framework and Scenario-Based Design, Applied Artificial Intelligence (2024). DOI: 10.1080/08839514.2024.2430867

Provided by
University of Surrey


Citation:
Understanding AI decision-making: Research examines model transparency (2025, February 19)
retrieved 19 February 2025
from https://techxplore.com/news/2025-02-ai-decision-transparency.html
