In this chapter we presented a dimensional semantic model (Multidimensional Emotional Appraisal Semantic Space, MEAS) which locates emotion in a four-axis space, in an attempt to detect rules linking patterns of signals to underlying emotional axes, namely novelty, valence, coping and arousal. Although the development of this set of rules is still in progress, the model is aimed at providing hints for the area of Affective Computing concerned with emotion decoding, i.e. the design and implementation of automatic emotion recognizers. In particular, within this field of study, we addressed the subtask of semantic attribution: once the machine is able to capture and process the multimodal signals (and patterns of signals) exhibited by the human user during the interaction, how can an emotional meaning be attributed to them (in other words, how can they be labelled)? First of all, it is important to note that despite the use of the term «rule», the MEAS scoring system is not meant as a set of fixed and stable laws to be rigidly applied to every type of HM interaction and context. Indeed, this would contradict one of the principles on which the system is based (embodiment) and the more general conception of HM interaction proposed here. Concerning the former – as previously explained – the MEAS system is conceived as strictly linked to the context, that is, to the type of running task and to the actions performed by the human user. Concerning the latter, in our view the machine should use the user's emotional signals to tune to his/her emotional state (a process of attunement). Moreover, as clearly shown by our data, the users themselves display emotional responses which are highly influenced by, and congruent with, the type of eliciting stimuli and the way they are appraised. Therefore, the MEAS rules are flexible and may change according to these contextual elements, adjusting to them.
Second, the MEAS system is designed to record the continuous modification of emotional dimensions rather than the number of occurrences of certain emotion categories, since it is based on a theoretical conception of emotion as a process rather than a state.
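The two points above — context-dependent rules mapping signal patterns onto four axes, and continuous recording of dimensional change rather than counts of emotion categories — can be illustrated with a minimal sketch. This is not the authors' implementation: the signal names, contexts, rule weights, value scale and update scheme are all illustrative assumptions.

```python
# Hypothetical sketch of a MEAS-style appraisal state (illustrative only).
# It tracks four emotional axes continuously over time and updates them via
# context-dependent rules, mirroring the claims in the abstract.
from dataclasses import dataclass, field

AXES = ("novelty", "valence", "coping", "arousal")


@dataclass
class AppraisalState:
    """Continuous record of the four axes on an assumed [-1, 1] scale."""
    values: dict = field(default_factory=lambda: {a: 0.0 for a in AXES})
    history: list = field(default_factory=list)

    def apply_signal(self, signal: str, context: str) -> None:
        # Illustrative rule table: (signal, context) -> axis deltas.
        # Keyed by context as well as signal, reflecting the claim that
        # MEAS rules adjust to the running task rather than being fixed laws.
        rules = {
            ("raised_brows", "unexpected_event"): {"novelty": 0.4, "arousal": 0.2},
            ("smile", "task_success"): {"valence": 0.5, "coping": 0.3},
            ("frown", "task_failure"): {"valence": -0.4, "coping": -0.2},
        }
        for axis, delta in rules.get((signal, context), {}).items():
            v = self.values[axis] + delta
            self.values[axis] = max(-1.0, min(1.0, v))  # clamp to scale
        # Record the modification of the dimensions, not an emotion label.
        self.history.append(dict(self.values))


state = AppraisalState()
state.apply_signal("raised_brows", "unexpected_event")
state.apply_signal("smile", "task_success")
print(state.values)        # continuous dimensional state
print(len(state.history))  # one snapshot per processed signal
```

Note that the output is a trajectory through the four-dimensional space (`history`), not a tally of discrete emotion categories — the distinction the abstract draws between emotion as process and emotion as state.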

Ciceri, M. R., Balzarotti, S., From signals to emotions: Applying emotion models to HM interactions, in Or, J. (ed.), Affective Computing. Emotion Modelling, Synthesis and Recognition, I-Tech Education and Publishing, Vienna 2008: 271-296 [http://hdl.handle.net/10807/14010]

From signals to emotions: Applying emotion models to HM interactions

Ciceri, Maria Rita; Balzarotti, Stefania
2008

English
Affective Computing. Emotion Modelling, Synthesis and Recognition
978-3-902613-23-3


Use this identifier to cite or link to this document: https://hdl.handle.net/10807/14010