Aquilino, L., Bisconti, P., & Marchetti, A. (2024). Trust in AI: Transparency, and Uncertainty Reduction. Development of a new theoretical framework. Paper, in CEUR Workshop Proceedings (Gothenburg, Sweden, 4 December 2023), CEUR-WS.org, pp. 19-26. https://hdl.handle.net/10807/261794

Trust in AI: Transparency, and Uncertainty Reduction. Development of a new theoretical framework

Aquilino, Letizia; Marchetti, Antonella
2024

Abstract

Trust plays a pivotal role in the acceptance of AI (Artificial Intelligence), particularly when people's health and safety are at stake. AI systems have shown great potential when applied to the medical field; however, users still find it difficult to trust AI over a human doctor for decisions regarding their health. This paper establishes a new theoretical framework that integrates Uncertainty Reduction Theory (URT) with the theorization on agency locus. The framework examines how transparency, agency locus, and human oversight influence trust development, with uncertainty reduction as a mediating factor. Transparency has already been shown to be a key element in fostering trust: AI systems that exhibit some degree of transparency, providing insight into their inner workings, are generally perceived as more trustworthy. One explanation is that such systems become more understandable and predictable to the user, which reduces the uncertainty of the interaction. The framework also addresses the differences that arise across application domains, namely healthcare and first response intervention. Finally, the paper outlines multiple experiments designed to validate the model, shedding light on the complex dynamics of trust in AI.
Year: 2024
Language: English
Published in: CEUR Workshop Proceedings
Event: MULTITTRUST 2023 - Multidisciplinary Perspectives on Human-AI Team Trust
Venue: Gothenburg, Sweden
Type: Paper
Date: 4 December 2023
Publisher: CEUR-WS.org
Files in this item:
paper7.pdf - Publisher's version (PDF), open access, Creative Commons license, 281.69 kB, Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10807/261794