Continuing Pre-trained Model with Multiple Training Strategies for Emotional Classification
2022
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis
unpublished
This paper describes a continual pre-training method based on masked language modeling (MLM) to enhance the DeBERTa pre-trained language model. ...
Moreover, our submission ranked first on all metrics in the evaluation phase of the Emotion Classification task. ...
., 2020) model with a continual pre-training method for this classification task, where the main method structure is shown in Figure 1. ...
doi:10.18653/v1/2022.wassa-1.22
fatcat:dwd2qgyznjazbh5kvajf2frlgm
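The continual MLM pre-training this entry describes can be outlined with Hugging Face transformers. The sketch below is a minimal illustration only: the checkpoint name, the two-line placeholder corpus, and all hyperparameters are assumptions, not the authors' settings.

```python
# Minimal sketch of continual masked-language-model pre-training on an
# in-domain corpus before classification fine-tuning (assumed setup).
from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForMaskedLM.from_pretrained("microsoft/deberta-v3-base")

# Unlabeled task text; placeholder examples stand in for the real corpus.
corpus = Dataset.from_dict({"text": ["I feel hopeful today.",
                                     "This is so frustrating."]})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# Dynamic masking: 15% of tokens are masked per batch.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True,
                                           mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="deberta-continual",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # afterwards, reuse the encoder for classification fine-tuning
```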
Codified audio language modeling learns useful representations for music information retrieval
2021
Zenodo
four MIR tasks: tagging, genre classification, emotion recognition, and key detection. ...
For key detection, we observe that representations from Jukebox are considerably stronger than those from models pre-trained on tagging, suggesting that pre-training via codified audio language modeling ...
We also thank all reviewers for their helpful feedback. ...
doi:10.5281/zenodo.5624605
fatcat:trinlottffdvpizt5r43be7ajq
Codified audio language modeling learns useful representations for music information retrieval
[article]
2021
arXiv
pre-print
four MIR tasks: tagging, genre classification, emotion recognition, and key detection. ...
For key detection, we observe that representations from Jukebox are considerably stronger than those from models pre-trained on tagging, suggesting that pre-training via codified audio language modeling ...
We also thank all reviewers for their helpful feedback. ...
arXiv:2107.05677v1
fatcat:26yvxejc7vdlnph66gd6rnrcze
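The probing protocol implied by these two records (a frozen pre-trained audio model, then a simple classifier per MIR task) can be sketched as a linear probe. Everything except the probe itself is a placeholder here: the pooled Jukebox activations are stubbed with random arrays.

```python
# Hedged sketch of a linear probe over frozen audio-LM representations;
# X stands in for (n_clips, n_dims) pooled activations from the model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4800))   # placeholder for pooled activations
y = rng.integers(0, 24, size=200)  # e.g. 24 keys for key detection

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))
```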
Sentiment Analysis for Spanish Tweets based on Continual Pre-training and Data Augmentation
2021
Annual Conference of the Spanish Society for Natural Language Processing
In addition, we leverage two augmentation strategies, namely continual pre-training and data augmentation, to enhance the classic fine-tuned model and improve its generalization capability. ...
Experimental results demonstrate the effectiveness of the BERT model and the two augmentation strategies. ...
In addition, continual pre-training on the training set and back-translation of a small proportion of the data respectively outperform continual pre-training on a general corpus and back-translation of the whole data. ...
dblp:conf/sepln/FuYLWC21
fatcat:74q72ppbu5b55gtm3gvkjpk53y
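Of the two strategies above, back-translation is easy to illustrate. A hedged sketch for Spanish (es -> en -> es) using publicly available MarianMT checkpoints follows; the paper's actual translation setup is not specified in the snippets, so the model choice is an assumption.

```python
# Minimal back-translation augmentation sketch: Spanish -> English -> Spanish.
from transformers import MarianMTModel, MarianTokenizer

def translate(texts, model_name):
    tok = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    out = model.generate(**batch, max_length=128)
    return [tok.decode(t, skip_special_tokens=True) for t in out]

def back_translate(texts):
    english = translate(texts, "Helsinki-NLP/opus-mt-es-en")
    return translate(english, "Helsinki-NLP/opus-mt-en-es")

# Paraphrased tweets come back as extra training examples.
print(back_translate(["No puedo creer lo que pasó hoy."]))
```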
Speech Emotion Recognition with Heterogeneous Feature Unification of Deep Neural Network
2019
Sensors
Automatic speech emotion recognition is a challenging task due to the gap between acoustic features and human emotions, which rely strongly on the discriminative acoustic features extracted for a given ...
for the recognition task. ...
For example, in Lakomkin et al. [29] , two models which use a pre-trained automatic speech recognition (ASR) network were proposed for speech emotion recognition. ...
doi:10.3390/s19122730
fatcat:n4pgdcbcnzannd5p3ystsyuclm
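As an illustration of the "discriminative acoustic features" the abstract refers to, the sketch below extracts MFCCs with librosa and pools them into a single clip-level vector. The feature choice and pooling scheme are assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: MFCC extraction plus clip-level mean/std pooling.
import librosa
import numpy as np

def clip_features(path, sr=16000, n_mfcc=40):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    # Collapse variable-length frame features into one fixed-length vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```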
Joint Deep Cross-Domain Transfer Learning for Emotion Recognition
[article]
2020
arXiv
pre-print
Despite such substantial progress, existing approaches are still hindered by insufficient training data, and the resulting models do not generalize well under mismatched conditions. ...
Deep learning has been applied to achieve significant progress in emotion recognition. ...
fusing the pre/post-trained models with a classification loss. ...
arXiv:2003.11136v1
fatcat:rtlw75elrvcope6zfkkz2qvku4
CAiRE: An Empathetic Neural Chatbot
[article]
2020
arXiv
pre-print
., 2019) learning approach that fine-tunes a large-scale pre-trained language model with multi-task objectives: response language modeling, response prediction and dialogue emotion detection. ...
We evaluate our model on the recently proposed empathetic-dialogues dataset (Rashkin et al., 2019); the experimental results show that CAiRE achieves state-of-the-art performance on dialogue emotion detection ...
The cross-entropy is applied for emotion classification loss L E . ...
arXiv:1907.12108v4
fatcat:4qh7cs5un5chjp3wrxy3uqlthe
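The three objectives this entry lists (response language modeling, response prediction, dialogue emotion detection) can be combined into one loss. A minimal sketch follows; the logit/target shapes and the equal weighting of the terms are placeholder assumptions, since the snippet only states that cross-entropy is used for the emotion loss L_E.

```python
# Sketch of a multi-task objective: L = L_LM + L_RP + L_E (assumed weighting).
import torch
import torch.nn.functional as F

def multitask_loss(lm_logits, lm_targets,
                   rp_logits, rp_targets,
                   em_logits, em_targets):
    # lm_logits: (batch, seq, vocab); rp/em logits: (batch, n_classes).
    l_lm = F.cross_entropy(lm_logits.transpose(1, 2), lm_targets,
                           ignore_index=-100)  # response language modeling
    l_rp = F.cross_entropy(rp_logits, rp_targets)  # response prediction
    l_e = F.cross_entropy(em_logits, em_targets)   # emotion detection, L_E
    return l_lm + l_rp + l_e
```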
A Data-Driven Adaptive Emotion Recognition Model for College Students Using an Improved Multifeature Deep Neural Network Technology
2022
Computational Intelligence and Neuroscience
Second, feature fusion is performed on multiple features using the autosklearn model integration technique. ...
With the increasing pressure on college students in terms of study, work, emotion, and life, their emotional changes are becoming increasingly pronounced. ...
Typical continuous emotional expression models include Wundt emotional space [30] , Schlosberg 3D cone emotional space [31] , 3D emotional space for PAD [32] , and Plutchik emotional wheel continuous ...
doi:10.1155/2022/1343358
pmid:35665293
pmcid:PMC9162810
fatcat:neiez4r2f5fldik4isszengbci
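The "autosklearn model integration" fusion step mentioned above can be approximated with auto-sklearn's standard classifier interface. In this sketch the feature arrays are random placeholders, and simple concatenation stands in for whatever fusion the paper performs.

```python
# Hedged sketch: concatenate multiple feature sets, let auto-sklearn build
# and ensemble candidate models internally.
import numpy as np
import autosklearn.classification

feats_a = np.random.rand(100, 32)   # placeholder feature set A
feats_b = np.random.rand(100, 64)   # placeholder feature set B
y = np.random.randint(0, 4, size=100)

X = np.hstack([feats_a, feats_b])   # concatenation-based feature fusion
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300)    # ensemble selection happens internally
automl.fit(X, y)
```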
Emotion Classification for Spanish with XLM-RoBERTa and TextCNN
2021
Annual Conference of the Spanish Society for Natural Language Processing
Finally, the output of the model is fed into the fully connected layer for classification. Our model ranks 14th in this task. The weighted-average F1 is 0.5570, and the accuracy is 0.5368. ...
Our team (team name: Dong) first uses XLM-RoBERTa for embedding. ...
Acknowledgements We would like to thank the organizers for organizing this task and providing data support, and thank the review experts for their patience. ...
dblp:conf/sepln/QuYQ21
fatcat:svb7w2kymjckliqfjqsbalfl2y
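The pipeline this entry describes (XLM-RoBERTa embeddings feeding a TextCNN, then a fully connected classifier) can be sketched as below. The kernel sizes, filter count, and the 7-class head are assumptions; only the overall structure comes from the snippets.

```python
# Sketch: XLM-R token embeddings -> parallel 1D convolutions -> max pooling
# -> fully connected classification layer.
import torch
import torch.nn as nn
from transformers import AutoModel

class XLMRTextCNN(nn.Module):
    def __init__(self, n_classes=7, kernel_sizes=(2, 3, 4), n_filters=128):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("xlm-roberta-base")
        hidden = self.encoder.config.hidden_size
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, n_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids,
                         attention_mask=attention_mask).last_hidden_state
        h = h.transpose(1, 2)                     # (batch, hidden, seq)
        pooled = [c(h).relu().max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))  # class logits
```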
Text Emotion Distribution Learning via Multi-Task Convolutional Neural Network
2018
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
However, emotion analysis exhibits inherent ambiguity, since a single sentence can evoke multiple emotions with different intensities. ...
the emotion classification. ...
[Wang and Pal, 2015] propose a model with several constraints based on an emotion lexicon for emotion classification. ...
doi:10.24963/ijcai.2018/639
dblp:conf/ijcai/ZhangFSZWY18
fatcat:mh6gdl2m6vdyjbcdhfzr6chhei
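Because a sentence can evoke several emotions with different intensities, the target here is a distribution rather than a single label. A common loss for that framing is KL divergence between predicted and annotated distributions, sketched below; the paper's exact multi-task CNN formulation is not shown in the snippets.

```python
# Sketch of emotion distribution learning with a KL-divergence loss.
import torch
import torch.nn.functional as F

def emotion_distribution_loss(logits, target_dist):
    # logits: (batch, n_emotions); each row of target_dist sums to 1.
    log_pred = F.log_softmax(logits, dim=-1)
    return F.kl_div(log_pred, target_dist, reduction="batchmean")
```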
EmotionX-IDEA: Emotion BERT – an Affectional Model for Conversation
[article]
2019
arXiv
pre-print
In this paper, we investigate the emotion recognition ability of the pre-trained language model BERT. ...
The experiments show that by mapping the continuous dialogue into causal utterance pairs, each constructed from an utterance and its reply, models can better capture the emotions of the reply ...
The pre-training strategies are described below. ...
arXiv:1908.06264v1
fatcat:37m4nyb2xbbkjkzygqttb43oyy
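The causal utterance-pair construction maps naturally onto BERT's sentence-pair input format: encode (utterance, reply) as one pair and classify the reply's emotion. The checkpoint and label count below are assumptions.

```python
# Sketch: encode an (utterance, reply) pair as [CLS] u [SEP] r [SEP] and
# classify the emotion of the reply.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)

utterance, reply = "I lost my keys again.", "Oh no, that's so annoying!"
inputs = tokenizer(utterance, reply, return_tensors="pt")
logits = model(**inputs).logits  # emotion logits for the reply
```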
Emotion Embedding Spaces for Matching Music to Stories
2021
Zenodo
Both the music and text domains have existing datasets with emotion labels, but mismatched emotion vocabularies prevent us from using mood or emotion annotations directly for matching. ...
., books), use multiple sentences as input queries, and automatically retrieve matching music. We formalize this task as a cross-modal text-to-music retrieval problem. ...
Embedding Models to Bridge the Modality Gap
Classification: As a starting point, we train two separate mood classification models for text and music (Figure 2-(a)). ...
doi:10.5281/zenodo.5624482
fatcat:uqlm3s5korb5rm2ybkbvr42qpi
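Once text and music are projected into a shared emotion embedding space, the retrieval step reduces to nearest-neighbor search. The sketch below ranks music clips by cosine similarity to a text query; the embeddings are placeholders, and the projection models themselves are omitted.

```python
# Hedged sketch of cross-modal retrieval in a shared embedding space.
import numpy as np

def retrieve(text_emb, music_embs, top_k=5):
    # After normalization, cosine similarity is a dot product.
    t = text_emb / np.linalg.norm(text_emb)
    m = music_embs / np.linalg.norm(music_embs, axis=1, keepdims=True)
    scores = m @ t
    return np.argsort(-scores)[:top_k]  # indices of best-matching clips
```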
Dimensional Emotion Detection from Categorical Emotion
[article]
2021
arXiv
pre-print
We present a model to predict fine-grained emotions along the continuous dimensions of valence, arousal, and dominance (VAD) with a corpus with categorical emotion annotations. ...
We use pre-trained RoBERTa-Large and fine-tune on three different corpora with categorical labels and evaluate on EmoBank corpus with VAD scores. ...
We use the pre-defined train/validation/test splits of the dataset. EmoBank: sentences paired with continuous VAD scores as labels. ...
arXiv:1911.02499v2
fatcat:yedt7nfdnnhijkle5e32fwth7u
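The setup this entry describes (RoBERTa-Large fine-tuned to predict continuous VAD scores) maps onto a standard 3-output regression head; in transformers, problem_type="regression" selects an MSE loss. The input sentence and targets below are placeholders.

```python
# Sketch: RoBERTa-Large with a 3-dimensional regression head for
# valence, arousal, and dominance (VAD).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=3, problem_type="regression")

inputs = tokenizer("I can't believe this happened!", return_tensors="pt")
labels = torch.tensor([[0.3, 0.8, 0.4]])  # placeholder VAD targets
out = model(**inputs, labels=labels)      # out.loss is MSE; out.logits is VAD
```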
Image based Static Facial Expression Recognition with Multiple Deep Network Learning
2015
Proceedings of the 2015 ACM on International Conference on Multimodal Interaction - ICMI '15
The pre-trained models are then fine-tuned on the training set of SFEW 2.0. ...
Each CNN model is initialized randomly and pre-trained on a larger dataset provided by the Facial Expression Recognition (FER) Challenge 2013. ...
Network Pre-training on FER: We pre-train our CNN model on the combined FER dataset formed by the train, validation, and test sets. ...
doi:10.1145/2818346.2830595
dblp:conf/icmi/YuZ15
fatcat:t5v5qcpj65crdhmmmzzytl3vqq
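The pre-train-then-fine-tune recipe above is standard: initialize from the larger FER dataset, replace the classifier head for the SFEW 2.0 classes, and fine-tune with a small learning rate. The sketch below uses a generic torchvision backbone as a stand-in; the paper trains its own CNN architectures, so everything here is illustrative.

```python
# Sketch of fine-tuning a FER-pre-trained CNN on SFEW 2.0 (7 classes).
import torch.nn as nn
import torch.optim as optim
from torchvision.models import resnet18

model = resnet18(num_classes=7)  # placeholder backbone, randomly initialized
# model.load_state_dict(torch.load("fer_pretrained.pt"))  # assumed FER weights
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# ...standard training loop over SFEW 2.0 batches goes here...
```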
BERT-based Acronym Disambiguation with Multiple Training Strategies
[article]
2021
arXiv
pre-print
Since few works have been done for AD in the scientific field, we propose in this paper a binary classification model incorporating BERT and several training strategies, including dynamic negative sample selection, task-adaptive pretraining, adversarial training, and pseudo labeling. ...
Combining the advantages of the above works, we propose a binary classification model utilizing BERT and several training strategies, such as adversarial training. ...
arXiv:2103.00488v2
fatcat:rnllmywavjbl5onjhhxgejdm54
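Of the strategies listed, adversarial training is often implemented for BERT in the FGM style: perturb the word embeddings along the gradient, re-run the forward pass, and accumulate the adversarial loss. The sketch below assumes this variant; epsilon, the user-supplied loss_fn callable, and the whole-matrix normalization are assumptions, as the snippets do not specify the paper's exact method.

```python
# Hedged FGM-style adversarial training step for a transformers model.
# optimizer.step()/zero_grad() are assumed to happen outside this helper.
import torch

def fgm_step(model, loss_fn, batch, epsilon=1.0):
    emb = model.get_input_embeddings().weight
    loss = loss_fn(model, batch)
    loss.backward()                         # gradients for the clean loss
    grad = emb.grad.detach()
    delta = epsilon * grad / (grad.norm() + 1e-12)
    emb.data.add_(delta)                    # perturb the embedding matrix
    adv_loss = loss_fn(model, batch)
    adv_loss.backward()                     # accumulate adversarial gradients
    emb.data.sub_(delta)                    # restore the embeddings
```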