The ethical relevance of explainability for AI-solutions for early diagnosis of dementia

Refolo, Pietro; Sacchini, Dario; Papavero, Sara Consilia; Cicchetti, Americo
2023

Abstract

The AI-MIND EU-funded project aims to develop an artificial intelligence (AI)-based digital tool to estimate the risk that people with mild cognitive impairment will develop dementia. To facilitate its implementation in clinical practice, the project pays attention to Health Technology Assessment (HTA) principles, criteria and models. HTA requires assessing the clinical, safety, economic, organizational, ethical and legal implications of a given health technology (HT). In the AI-MIND project, a group first explored the potential impact of the use of AI on the doctor-patient relationship; HTA experts then identified issues potentially relevant for AI; and finally bioethical experts, trained in HTA, analysed the role of explainability, a new ethical issue to take into consideration in an HTA process regarding AI-based HTs. A Delphi panel concluded in April 2023 identified HTA issues potentially relevant for AI. Forty-six qualified experts provided their feedback. The highest level of consensus was reached on the importance of ethical issues, explainability included. Bioethical experts then analysed the role of explainability for AI-based digital tools that assess the risk of developing dementia. Both global explainability (information on the model as a whole) and local explainability (information on a specific prediction) were investigated and applied to the specific use case. Attention was dedicated to the designated role of the AI-MIND predictive tool in the decision-making process (algorithm-based, -driven, or -determined). From the HTA perspective, explainability is increasingly considered crucial for ethical analysis and should be assessed in relation to other aspects such as the design of the technology, the target population, or market access. It should be one of the topics to address in building and implementing ethical AI solutions for early diagnosis of dementia.
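The distinction between global and local explainability can be made concrete with a minimal sketch. The toy linear risk model below is purely illustrative: the feature names, weights, and bias are hypothetical assumptions for this example and are not taken from the AI-MIND tool. Global explainability asks what the model weighs most overall; local explainability asks why one specific prediction came out as it did.

```python
# Illustrative sketch only: a toy logistic-style risk model contrasting
# global explainability (the model as a whole) with local explainability
# (one specific prediction). All names and values are hypothetical.
import math

# Hypothetical weights of a dementia-risk model (not from AI-MIND).
WEIGHTS = {"age": 0.04, "mmse_score": -0.15, "eeg_slowing": 0.8}
BIAS = -1.0

def predict_risk(patient):
    """Return a risk score in (0, 1) for one patient's feature values."""
    z = BIAS + sum(WEIGHTS[f] * patient[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def global_explanation():
    """Global view: features ranked by the magnitude of their weight."""
    return sorted(WEIGHTS.items(), key=lambda kv: abs(kv[1]), reverse=True)

def local_explanation(patient):
    """Local view: each feature's additive contribution to this prediction."""
    return {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}

# Hypothetical patient: the local explanation shows which of their
# individual values pushed the predicted risk up or down.
patient = {"age": 72, "mmse_score": 24, "eeg_slowing": 1}
risk = predict_risk(patient)
```

In an algorithm-driven setting, a clinician could inspect the local explanation for one patient alongside the global ranking before deciding how much weight to give the prediction.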
2023
English
33rd Alzheimer Europe Conference
Helsinki
16-Oct-2023
18-Oct-2023
N/A
N/A
Refolo, P., Sacchini, D., Di Bidino, R., Papavero, S. C., Gove, D., Tveter, M., Cicchetti, A., Haraldsen, I., The ethical relevance of explainability for AI-solutions for early diagnosis of dementia, Abstract from the <<33rd Alzheimer Europe Conference>> (Helsinki, 16-18 October 2023), 2023 [https://hdl.handle.net/10807/255794]
Files in this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10807/255794