Can a robot lie? Young children's understanding of intentionality beneath false statements

Peretti, Giulia; Manzi, Federico; Di Dio, Cinzia; Massaro, Davide; Marchetti, Antonella

Abstract

Including robots in children's lives calls for reflection on the psychological and moral aspects of such relationships, especially with respect to children's ability to differentiate intentional from unintentional false statements, that is, lies from mistakes. This ability requires an understanding of an interlocutor's intentions. This study examined the ability of 5- to 6-year-olds to recognize, and morally evaluate, lies and mistakes produced by a human as compared to a NAO robot, and to attribute relevant emotions to the deceived party. Irrespective of the agent, children had more difficulty understanding mistakes than lies. In addition, they were disinclined to attribute a lie to the robot. Children's age and their understanding of intentionality were the strongest predictors of their performance on the lie-mistake task. Children's Theory of Mind, but not their executive function skills, also correlated with their performance. Our findings suggest that, regardless of age, a robot is perceived as an intentional agent. The robot's behaviour was more acceptable to children because its actions could be attributed to someone who had programmed it to act in a specific way.
2023
English
Peretti, G., Manzi, F., Di Dio, C., Cangelosi, A., Harris, P. L., Massaro, D., Marchetti, A., Can a robot lie? Young children's understanding of intentionality beneath false statements, <<INFANT AND CHILD DEVELOPMENT>>, 2023; (e2398): 1-25. [doi:10.1002/icd.2398] [https://hdl.handle.net/10807/222928]

Use this identifier to cite or link to this document: https://hdl.handle.net/10807/222928
Citations
  • PMC: not available
  • Scopus: 7
  • Web of Science (ISI): 5