Combining Long Short-Term Memory and Dynamic Bayesian Networks for Incremental Emotion-Sensitive Artificial Listening
IEEE Journal on Selected Topics in Signal Processing
The automatic estimation of human affect from the speech signal is an important step towards making virtual agents more natural and human-like. In this work we present a novel technique for incremental recognition of the user's emotional state as it is applied in a Sensitive Artificial Listener (SAL) system designed for socially competent human-machine communication. Our method is capable of using acoustic, linguistic, as well as long-range contextual information in order to continuously predict the current quadrant in a two-dimensional emotional space spanned by the dimensions valence and activation. The main system components are a hierarchical Dynamic Bayesian Network (DBN) for detecting linguistic keyword features and Long Short-Term Memory (LSTM) recurrent neural networks that model phoneme context and emotional history to predict the affective state of the user. Experimental evaluations on the SAL corpus of non-prototypical real-life emotional speech data consider a number of variants of our recognition framework: continuous emotion estimation from low-level feature frames is evaluated as a new alternative to the common approach of computing statistical functionals of given speech turns. Further performance gains are achieved by discriminatively training LSTM networks and by using bidirectional context information, leading to a quadrant prediction F1-measure of up to 51.3 %, which is only 7.6 % below the average inter-labeler consistency.
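The abstract describes mapping continuous valence/activation estimates to one of four quadrants and scoring quadrant predictions with the F1-measure. The following minimal sketch illustrates that idea; the sign convention for quadrant labels, the helper names, and the sample values are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: assign a (valence, activation) estimate to one of the four
# quadrants of the two-dimensional emotional space, and compute a per-class
# F1-measure. Quadrant naming convention here is an assumption.

def quadrant(valence: float, activation: float) -> str:
    """Return an (assumed) quadrant label for a (valence, activation) point,
    with the origin as the boundary between positive and negative halves."""
    if valence >= 0:
        return "positive-active" if activation >= 0 else "positive-passive"
    return "negative-active" if activation >= 0 else "negative-passive"

def f1(tp: int, fp: int, fn: int) -> float:
    """Per-class F1-measure from true-positive, false-positive, and
    false-negative counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Example: frame-wise valence/activation estimates for a short utterance
frames = [(0.4, 0.7), (0.3, 0.5), (-0.2, 0.6)]
labels = [quadrant(v, a) for v, a in frames]
```

In a frame-level setup like the one evaluated in the paper, such a mapping would be applied to each low-level feature frame's regression output, whereas the turn-level alternative would map one functional-based estimate per speech turn.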