A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit <a rel="external noopener" href="https://link.springer.com/content/pdf/10.1007/s00779-019-01246-9.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
<i title="Springer Science and Business Media LLC">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/yubpzzxtazfhzllyyxylnnu7ru" style="color: black;">Personal and Ubiquitous Computing</a>
Since contextual information has an important impact on the speaker's emotional state, how to use emotion-related context information for feature learning is a key problem. Although existing speech emotion recognition algorithms achieve relatively high recognition rates, they do not apply well to real-life speech emotion recognition systems. Therefore, to address these issues, a novel speech emotion recognition algorithm based on<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s00779-019-01246-9">doi:10.1007/s00779-019-01246-9</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/nv6kc6maffdtnhjo2qvgtvp7oi">fatcat:nv6kc6maffdtnhjo2qvgtvp7oi</a> </span>
a stacked kernel sparse deep model is proposed in this paper. It builds on the auto-encoder, denoising auto-encoder, and sparse auto-encoder to improve Chinese speech emotion recognition. The first layer of the structure uses a denoising auto-encoder to learn a hidden feature representation whose dimension is larger than that of the input features, and the second layer employs a sparse auto-encoder to learn sparse features. Finally, a wavelet-kernel sparse SVM classifier is applied to classify the features. The proposed algorithm is evaluated on a testing dataset containing spontaneous, non-prototypical, and long-term speech emotion data. The experimental results show that the proposed algorithm outperforms existing state-of-the-art algorithms in speech emotion recognition.
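The two-stage encoder pipeline described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: all dimensions, weights, and the noise level are placeholder assumptions (the paper only states that the first hidden layer is larger than the input), training is omitted, and the wavelet kernel shown is a common translation-invariant choice rather than the paper's exact kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions: the paper only states that the first hidden
# layer is larger than the input feature dimension.
n_input, n_hidden1, n_hidden2 = 64, 128, 32

# Layer 1: denoising auto-encoder encoder. The input is corrupted with
# Gaussian noise, then mapped to an over-complete hidden representation.
W1 = rng.normal(scale=0.1, size=(n_input, n_hidden1))
b1 = np.zeros(n_hidden1)

def dae_encode(x, noise_std=0.1):
    x_noisy = x + rng.normal(scale=noise_std, size=x.shape)  # corruption step
    return sigmoid(x_noisy @ W1 + b1)

# Layer 2: sparse auto-encoder encoder. During training, sparsity would be
# enforced via a KL-divergence penalty on the mean activations (not shown).
W2 = rng.normal(scale=0.1, size=(n_hidden1, n_hidden2))
b2 = np.zeros(n_hidden2)

def sae_encode(h):
    return sigmoid(h @ W2 + b2)

# An illustrative translation-invariant wavelet kernel built from the
# mother wavelet h(u) = cos(1.75 u) * exp(-u^2 / 2); the paper does not
# specify its exact kernel, so this is an assumption.
def wavelet_kernel(X, Y, a=1.0):
    u = (X[:, None, :] - Y[None, :, :]) / a       # (n, m, d) pairwise diffs
    return np.prod(np.cos(1.75 * u) * np.exp(-u**2 / 2.0), axis=-1)

# Forward pass for a batch of 10 utterance-level feature vectors, then the
# Gram matrix that a kernel SVM would consume (e.g. a callable kernel
# passed to scikit-learn's SVC).
X = rng.normal(size=(10, n_input))
features = sae_encode(dae_encode(X))
gram = wavelet_kernel(features, features)
print(features.shape, gram.shape)  # (10, 32) (10, 10)
```

The Gram matrix is symmetric with unit diagonal, as expected for a stationary kernel evaluated on identical inputs; in a real system the encoder weights would be learned layer-wise before the SVM is trained on the resulting features.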
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201108072811/https://link.springer.com/content/pdf/10.1007/s00779-019-01246-9.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c2/49/c249c97226b67423eb9390c150456df0101891fd.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s00779-019-01246-9"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>