Multimodal multimodel emotion analysis as linked data

J. Fernando Sanchez-Rada, Carlos A. Iglesias, Hesam Sagha, Bjorn Schuller, Ian Wood, Paul Buitelaar
2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 2017
The lack of a standard emotion representation model hinders emotion analysis due to the incompatibility of annotation formats and models from different sources, tools and annotation services. This is also a limiting factor for multimodal analysis, since recognition services for different modalities (audio, video, text) tend to use different representation models (e.g., continuous vs. discrete emotions). This work presents a multi-disciplinary effort to alleviate this problem by formalizing conversion between emotion models. The specific contributions are: i) a semantic representation of emotion conversion; ii) an API proposal for services that perform automatic conversion; iii) a reference implementation of such a service; and iv) validation of the proposal through use cases that integrate different emotion models and service providers.
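One of the conversions the abstract mentions, mapping a continuous emotion representation to a discrete category, can be sketched as a nearest-centroid lookup. This is a minimal illustration only; the centroid names and coordinates below are hypothetical and are not taken from the paper or its reference implementation.

```python
import math

# Hypothetical centroids for a few discrete emotions in a
# continuous valence-arousal space (illustrative values only).
CENTROIDS = {
    "joy": (0.8, 0.5),
    "anger": (-0.6, 0.7),
    "sadness": (-0.7, -0.4),
    "calm": (0.4, -0.6),
}

def continuous_to_discrete(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) point to the nearest discrete emotion label."""
    return min(
        CENTROIDS,
        key=lambda label: math.dist((valence, arousal), CENTROIDS[label]),
    )
```

A conversion service along the lines proposed in the paper would wrap transformations like this behind a uniform API, so that annotations from different providers become interoperable.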
doi:10.1109/aciiw.2017.8272599 dblp:conf/acii/Sanchez-RadaISS17 fatcat:jdkadkqdvnbcbjp5tcyiyszfmm