

The Artificial Face (ART-F) Project: Addressing the Problem of Interpretability, Interface, and Trust in Artificial Intelligence

Triberti, Stefano; Riva, Giuseppe
2023

Abstract

A critical issue that AI–human interaction must address is that AI does not have a distinct face. When non-specialist users are told that they will be interacting with AI, they have no idea what to expect: a humanoid android with a body? An omniscient voice that floats around the room, as in a movie? Or website-like software that simply performs analyses and outputs a PDF file? This uncertainty is closely related to the so-called black box issue, namely the fact that the outcomes of machine learning processes are not fully retraceable or understandable to users.2 In other words, it is impossible to know how an AI arrived at a given response or produced a given output. The black box causes trust issues in AI, and these are exacerbated in sensitive contexts. For example, if one is required to make a delicate decision (e.g., a diagnosis or treatment choice in medicine) based on AI outputs, and the consequences of that decision may affect one's own career and the safety of others, AI users may find themselves in a state of uncertainty and decision paralysis. In such cases, AI becomes controversial or even useless. In this view, the main goals of the Artificial Face (ART-F) project, which involves researchers from different institutions (the Laboratory for Advanced Human-Technology Interaction at Italy's Università Telematica Pegaso, the Artificial Intelligence Institute at France's SKEMA Business School, and the Humane Technology Laboratory at Italy's Catholic University of Milan), are to identify, explore, and test different ways to solve the black box problem, find a "face" for AI, and improve the way humans and AI interact.
English
Triberti, S., Torre, D. L., Riva, G., The Artificial Face (ART-F) Project: Addressing the Problem of Interpretability, Interface, and Trust in Artificial Intelligence, Cyberpsychology, Behavior, and Social Networking, 2023; 26(4): 318-320. doi:10.1089/cyber.2023.29273.ceu. https://hdl.handle.net/10807/269917
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10807/269917
Citations
  • PMC 3
  • Scopus 3
  • Web of Science 3