Sajno, E., De Gaspari, S., Pupillo, C., Riva, G., CO-XAI—Cognitive Decision Intelligence Framework for Explainable AI Systems, «Cyberpsychology, Behavior and Social Networking», 2024; 27 (12): 954-956. [doi:10.1089/cyber.2024.87594.ceu] [https://hdl.handle.net/10807/312928]
CO-XAI—Cognitive Decision Intelligence Framework for Explainable AI Systems
Sajno, Elena; De Gaspari, Stefano; Pupillo, Chiara; Riva, Giuseppe
2024
Abstract
The rapid evolution of Artificial Intelligence (AI) has significantly reshaped decision-making processes, enabling more sophisticated data analysis, predictive modeling, and insight generation. However, many of these advancements rely on complex “black box” models, whose inner workings are opaque and difficult to interpret. This lack of transparency makes it difficult to understand how decisions are derived, raising concerns about their reliability, fairness, and ethical soundness. In response to these challenges, the field of eXplainable AI (XAI) has emerged, focusing on methodologies that enhance transparency by enabling humans to understand and verify the outputs of machine learning algorithms. XAI frameworks include approaches such as eXplanation by Design, which incorporates interpretability from the outset of model development, and Black Box eXplanation, which provides post-hoc explanations for existing systems. XAI aims to demystify AI decision-making by revealing the underlying logic, potential biases, and expected impacts of these systems. To address these needs, the Humane Technology Lab of the Catholic University of Sacred Heart (Milan, Italy) and the University of Pisa (Pisa, Italy) have launched a new research project, CO-XAI (Cognitive Decision Intelligence Framework for Explainable AI Systems), conducted within the framework of the Italian PNRR—M4C2—Investment 1.3, Extended Partnership PE00000013 “FAIR—Future Artificial Intelligence Research,” Spoke 1 “Human-centered AI.” CO-XAI aims to develop and validate a novel Decision Intelligence (DI) framework that integrates cognitive neuroscience, decision-making, and user experience principles into the design and evaluation of explainable AI systems.
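
As a concrete illustration of the Black Box eXplanation approach mentioned in the abstract, the sketch below applies a model-agnostic post-hoc method (permutation feature importance) to a trained classifier whose internals are never inspected. This is a minimal sketch, assuming scikit-learn is available; the dataset and model are illustrative stand-ins, not components of the CO-XAI framework.

```python
# Minimal post-hoc ("Black Box eXplanation") sketch, assuming scikit-learn.
# The dataset and model are illustrative stand-ins, not CO-XAI components.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque classifier on a standard benchmark dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature on held-out data and measure
# how much accuracy drops, using only the model's predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Because this method needs only the model's predictions and held-out labels, it can explain an already deployed system without access to its internals, which is the post-hoc setting that Black Box eXplanation targets, in contrast to eXplanation by Design, where interpretability is built into the model from the start.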