Sansoni, M., Beretta, A., De Gaspari, S., Giannotti, F., Pedreschi, D., Pellungrini, R., Pugnana, A., Pupillo, C., Repetto, C., Sajno, E., Villani, D., Riva, G., Beyond Rationality: Bridging Cognitive, Emotional, and Individual Differences in XAI Decision-Making, Annual Review of Cybertherapy and Telemedicine, 2025; 23: 12-19 [https://hdl.handle.net/10807/327318]
Beyond Rationality: Bridging Cognitive, Emotional, and Individual Differences in XAI Decision-Making
Sansoni, Maria; De Gaspari, Stefano; Pugnana, Alessandra; Pupillo, Chiara; Repetto, Claudia; Sajno, Elena; Villani, Daniela; Riva, Giuseppe
2025
Abstract
eXplainable Artificial Intelligence (XAI) aims to enhance human understanding of AI outputs, particularly in high-stakes domains. While recent research has emphasized the need for more human-centered approaches, many efforts remain technically focused, often overlooking how users’ cognitive, emotional, and dispositional characteristics shape their engagement with AI explanations. This mini-narrative review explores six key psychological factors that influence decision-making in the context of XAI. Our goal is to support the development of more human-centered systems by addressing not only what the system explains, but also how people think, feel, and behave when making decisions. We examine how cognitive elements (such as cognitive biases and cognitive load) affect the processing of explanations, and how emotional responses (both general and AI-directed) modulate trust and engagement. We also consider individual differences, particularly intolerance of uncertainty and decision-making styles, which shape how users evaluate and respond to AI recommendations. Together, these factors provide a psychologically grounded framework for understanding what people decide, how they process information, and why they trust or reject AI outputs. We propose that adaptive, ethically responsible XAI systems tailored to user traits can improve clarity and usability, ultimately enabling users to navigate, rather than avoid, uncertainty in AI-supported decision-making. This may have important practical implications for developers, users, and policymakers.



