Sajno, E., Beretta, A., De Gaspari, S., Giannotti, F., Pedreschi, D., Pellungrini, R., Pugnana, A., Pupillo, C., Repetto, C., Sansoni, M., Villani, D., & Riva, G. (2025). AI Says I’m Better: Evaluating the Effect of AI Defer on Users. A Study Protocol. Annual Review of CyberTherapy and Telemedicine, 23, 282-288. https://hdl.handle.net/10807/327316
AI Says I’m Better: Evaluating the Effect of AI Defer on Users. A Study Protocol
Sajno, Elena; De Gaspari, Stefano; Pugnana, Alessandra; Pupillo, Chiara; Repetto, Claudia; Sansoni, Maria; Villani, Daniela; Riva, Giuseppe
2025
Abstract
The integration of AI into decision support systems raises concerns about overreliance and distrust. To address this, we propose an experimental protocol combining Learning to Defer (LtD)—where AI delegates decisions to humans when appropriate—and Explainable AI (XAI), which provides users with decision rationales. Our study investigates how these approaches impact human decision-making, particularly in high-stakes contexts. Participants will classify noisy images from ImageNet under three between-subjects conditions: Defer (AI defers to user), Defer + XAI (AI provides an explanation), and Hidden Delegation (AI involvement is concealed). Each condition will be tested in neutral and high-stakes scenarios, the latter framed through narratives emphasizing the danger of misclassification. We will assess decision accuracy and reaction times, as well as psychological measures exploring the influence of individual differences (i.e., intolerance of uncertainty and cognitive styles) and emotions (e.g., emotion regulation and AI-related anxiety). We hypothesize that Defer may prompt more analytical thinking, improving accuracy over Hidden Delegation, while Defer + XAI may further enhance performance. In contrast, Hidden Delegation could promote reliance on intuitive processing. We expect higher accuracy and longer response times in high-stakes conditions. Findings will inform the design of human-AI systems that optimize user engagement and reliability, particularly in domains like clinical decision-making.
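
To make the deferral mechanism concrete: a common LtD-style policy has the model answer only when its confidence clears a threshold and hand the decision to the human otherwise. The Python sketch below illustrates this under stated assumptions; the protocol does not specify its deferral rule, and the function name decide_or_defer, the 0.8 threshold, and the three-class logits are illustrative, not taken from the paper.

# Minimal sketch of a confidence-threshold deferral rule (an assumption;
# the study protocol does not specify how its LtD model decides to defer).
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D array of logits.
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def decide_or_defer(logits, threshold=0.8):
    # Answer with the top class when confident; otherwise defer to the user.
    probs = softmax(np.asarray(logits, dtype=float))
    label = int(np.argmax(probs))
    if probs[label] >= threshold:
        return ("ai", label)    # AI keeps the decision
    return ("defer", None)      # decision is delegated to the human

# A low-confidence prediction (e.g., on a noisy image) triggers deferral.
print(decide_or_defer([1.2, 1.0, 0.9]))  # -> ('defer', None)
print(decide_or_defer([4.0, 0.5, 0.1]))  # -> ('ai', 0)

In the Defer + XAI condition, a real system would pair the ("defer", None) outcome with an explanation of the model's uncertainty; the threshold here stands in for whatever learned rejector an actual LtD model would use.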