Digital Compliance: The Case for Algorithmic Transparency

Mozzarelli, Michele Cesare Maria
2022

Abstract

Together with their undeniable advantages, the new technologies of the Fintech Revolution bring new risks. Some of these risks are already known but have taken on a new form; some are entirely new. Among the latter, one of the most relevant concerns the opacity of artificial intelligence (AI). This lack of transparency generates questions not only about measuring the correctness and efficiency of the choices made by the algorithm, but also about the impact of these choices on third parties. There is, therefore, an issue of the legitimacy of the decision thus made: its opacity makes it arbitrary and insensitive to the rights of third parties affected by the choice. Thus it is essential to understand what level of explanation is needed in order to allow the use of the algorithm. Focusing on the AI transparency issue, there are grounds for believing that, at least in the EU, the costs deriving from a lack of transparency cannot be passed on to third parties and must instead be managed inside the enterprise. Therefore, the task of the enterprise, its directors, and in particular its compliance function must be dynamic, taking into account all foreseeable AI risks.
Year: 2022
Language: English
Published in: Corporate Compliance on a Global Scale. Legitimacy and Effectiveness
ISBN: 9783030816544
Publisher: Springer
Citation: Mozzarelli, M. C. M., "Digital Compliance: The Case for Algorithmic Transparency", in Centonze, F., Manacorda, S. (eds.), Corporate Compliance on a Global Scale. Legitimacy and Effectiveness, Springer, Cham 2022: 259-284 [https://hdl.handle.net/10807/193623]

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10807/193623