This study explores adolescents' moral judgments of decisions made by humans and robots, using variations of the Trolley Dilemma. Adolescents judged scenarios in which a human or a robot decided between sacrificing one human to save five robots, or vice versa. Results indicate that adolescents evaluate human and robotic decisions similarly: a decision to sacrifice a human to save five robots elicited harsher moral judgment, greater blame attribution, and stronger suggested punishment. When asked about their own choices, adolescents typically favored the utilitarian option, except when it entailed sacrificing a human to save five robots. These findings support the notion that adolescents apply consistent moral principles to both human and robot decisions, and that moral judgment focuses on the social and moral consequences of actions rather than on the nature of the decision-maker. The outcomes also suggest that a utilitarian principle prevails when the entities involved belong to the same ontological category. The research advances our understanding of adolescents' moral judgments in Human-Robot Interaction (HRI), a rapidly developing field with significant social implications.
Tacci, A. L., Manzi, F., Di Dio, C., Marchetti, A., Riva, G., Massaro, D., "Moral Context Matters: A Study of Adolescents' Moral Judgment towards Robots," in 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), Boston, Massachusetts, USA, 10-13 September 2023, IEEE, 2023, pp. 1-6. DOI: 10.1109/ACIIW59127.2023.10388182. [https://hdl.handle.net/10807/262294]
Moral Context Matters: A study of Adolescents’ Moral Judgment towards Robots
Tacci, Andrea Luna; Manzi, Federico; Di Dio, Cinzia; Marchetti, Antonella; Riva, Giuseppe; Massaro, Davide
2023