Aquilino, L., Di Dio, C., Manzi, F., Massaro, D., Bisconti, P., & Marchetti, A. (2025). Decoding Trust in Artificial Intelligence: A Systematic Review of Quantitative Measures and Related Variables. Informatics, 12(3), 1–40. doi:10.3390/informatics12030070. https://hdl.handle.net/10807/318996
Decoding Trust in Artificial Intelligence: A Systematic Review of Quantitative Measures and Related Variables
Aquilino, Letizia; Di Dio, Cinzia; Manzi, Federico; Massaro, Davide; Marchetti, Antonella
2025
Abstract
As artificial intelligence (AI) becomes ubiquitous across fields, understanding people's acceptance of and trust in AI systems becomes essential. This review aims to identify the quantitative measures used to assess trust in AI and the constructs studied alongside it. Following the PRISMA guidelines, three databases were consulted, selecting articles published before December 2023. Ultimately, 45 of 1283 articles were selected. Articles were included if they were peer-reviewed journal publications in English reporting empirical studies that measured trust in AI systems with multi-item questionnaires. Studies were analyzed through the lenses of cognitive and affective trust. We investigated trust definitions, the questionnaires employed, the types of AI systems, and trust-related constructs. Results reveal diverse trust conceptualizations and measurements. The studies also covered a wide range of AI system types, including virtual assistants, content detection tools, chatbots, medical AI, robots, and educational AI. Overall, the studies show consistency in cognitive or affective trust focus across theorization, items, experimental stimuli, and the systems' level of anthropomorphism. The review underlines the need to adapt the measurement of trust to the specific characteristics of human–AI interaction, accounting for both its cognitive and affective sides. Trust definitions and measures may also be chosen according to the systems' level of anthropomorphism and the context of application.
| File | Size | Format |
|---|---|---|
| informatics-12-00070.pdf (open access; publisher's version; Creative Commons license) | 759.38 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.



