A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
An Investigation of Annotation Delay Compensation and Output-Associative Fusion for Multimodal Continuous Emotion Prediction
2015
Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge - AVEC '15
Continuous emotion dimension prediction has increased in popularity over the last few years, as the shift away from discrete classification-based tasks has introduced more realism into emotion modeling. However, many questions remain, including how best to combine information from several modalities (e.g., audio and video). As part of the AV+EC 2015 Challenge, we investigate annotation delay compensation and propose a range of multimodal systems based on an output-associative fusion framework.
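Annotation delay compensation, as described above, addresses the lag between observed behavior and the annotators' continuous ratings. A minimal sketch of the idea, assuming a fixed frame-level delay (the function name, data, and delay value here are illustrative, not taken from the paper):

```python
import numpy as np

def compensate_delay(features, labels, delay_frames):
    """Pair features with annotations shifted `delay_frames` earlier in time.

    Hypothetical sketch: annotators are assumed to react late, so the label
    recorded at frame t + delay actually describes the behavior at frame t.
    Both sequences are trimmed so every feature frame keeps a valid label.
    """
    if delay_frames <= 0:
        return features, labels
    # Drop the last `delay_frames` feature frames and the first `delay_frames`
    # labels, so features[t] aligns with the label originally given later.
    return features[:-delay_frames], labels[delay_frames:]

# Toy example: 10 frames of 1-D features and 10 continuous annotations.
features = np.arange(10).reshape(10, 1)
labels = np.arange(10, dtype=float)
f, l = compensate_delay(features, labels, delay_frames=2)
print(f.shape, l.shape)  # (8, 1) (8,)
print(l[0])              # 2.0 -- the label from frame 2 now pairs with frame 0
```

In practice the delay would be tuned per emotion dimension (e.g., by maximizing agreement between predictions and shifted labels on a development set) rather than fixed a priori.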
doi:10.1145/2808196.2811640
dblp:conf/mm/HuangDCSLSE15
fatcat:msmozxht2jgkxmstnpemenlfle