A Generalized Zero-Shot Framework for Emotion Recognition from Body Gestures
[article]
2020
arXiv
pre-print
The performance of our framework on an emotion recognition dataset is significantly superior to traditional emotion classification methods and to state-of-the-art zero-shot learning methods. ...
To solve this problem, we introduce a Generalized Zero-Shot Learning (GZSL) framework, which consists of three branches to infer the emotional state of new body gestures from only their semantic ...
Zero-Shot Learning and Generalized Zero-Shot Learning: The early works of zero-shot learning directly construct classifiers for attributes of seen and unseen classes. Lampert et al. ...
arXiv:2010.06362v2
fatcat:t7eb5qdcmvejhb32rgkxdytr6q
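The entry above describes inferring the emotions of unseen gesture classes from semantic descriptions. As a minimal sketch of how such generalized zero-shot inference commonly works (nearest class prototype in a semantic space, with a calibrated-stacking penalty on seen classes), assuming a visual-to-semantic regressor trained elsewhere; this is not the paper's three-branch architecture, and all names and dimensions are hypothetical:

```python
# Minimal GZSL inference sketch: match a sample's predicted semantic
# embedding to class prototypes, penalizing seen classes so unseen
# classes are not crowded out (calibrated stacking).
import numpy as np

def gzsl_predict(x_semantic, prototypes, seen_mask, gamma=0.3):
    """x_semantic: (d,) semantic embedding of a gesture sample.
    prototypes: (C, d) semantic vectors for all classes, seen and unseen.
    seen_mask: (C,) boolean, True for seen classes.
    gamma: penalty subtracted from seen-class scores."""
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    x = x_semantic / np.linalg.norm(x_semantic)
    scores = p @ x                      # cosine similarity per class
    scores = scores - gamma * seen_mask  # discourage seen-class bias
    return int(np.argmax(scores))

# Toy usage with hypothetical emotion classes (4 seen + 1 unseen).
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(5, 16))
seen = np.array([True, True, True, True, False])
sample = prototypes[4] + 0.1 * rng.normal(size=16)  # near the unseen class
print(gzsl_predict(sample, prototypes, seen))       # -> 4
```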
2021 Index IEEE Transactions on Multimedia Vol. 23
2021
IEEE transactions on multimedia
..., +, TMM 2021 899-910. A Novel Perspective to Zero-Shot Learning: Towards an Alignment of Manifold Structures via Semantic Feature Expansion. ...
Zhou, G., +, TMM 2021 1630-1639. A Novel Perspective to Zero-Shot Learning: Towards an Alignment of Manifold Structures via Semantic Feature Expansion. ...
..., Low-Rank Pairwise Alignment Bilinear Network for Few-Shot Fine-Grained Image Classification; TMM 2021 1666-1680. Huang, H., see 1855-1867. Huang, H., see Jiang, X., TMM 2021 2602-2613. Huang, J., ...
doi:10.1109/tmm.2022.3141947
fatcat:lil2nf3vd5ehbfgtslulu7y3lq
Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network
[article]
2020
arXiv
pre-print
that it has been trained in a zero- and few-shot learning setting. ...
Three of its iconic tasks are automatic recognition of basic expressions (e.g. happy, sad, surprised), estimation of continuous emotions (e.g., valence and arousal), and detection of facial action units ...
(zero-shot and few-shot learning). ...
arXiv:1910.11111v3
fatcat:2n2xtyge5fawxj7vlbujkqgidq
Affective Image Content Analysis: Two Decades Review and New Perspectives
[article]
2021
arXiv
pre-print
We then summarize and compare the representative approaches on (1) emotion feature extraction, including both handcrafted and deep features, (2) learning methods on dominant emotion recognition, personalized ...
Recently, with the rapid advancement of emotional intelligence and the explosive growth of visual data, extensive research efforts have been dedicated to affective image content analysis (AICA). ...
[38] proposed an affective structural embedding framework, which constructs an intermediate embedding space using ANP features for zero-shot emotion recognition. ...
arXiv:2106.16125v1
fatcat:5y5y5nhoebccxjjybarnveecgq
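The snippet above mentions an affective structural embedding that uses ANP features as an intermediate space for zero-shot emotion recognition. A generic sketch of that intermediate-embedding idea, assuming a linear visual-to-semantic map learned by ridge regression; this illustrates the recipe, not the method of [38]:

```python
# Learn a linear map W from visual features into a semantic space (standing
# in for ANP-style descriptors), then classify unseen emotions by nearest
# prototype in that space.
import numpy as np

def fit_visual_to_semantic(X, S, lam=1.0):
    """Ridge regression: W minimizes ||X W - S||^2 + lam ||W||^2.
    X: (n, dv) visual features, S: (n, ds) semantic targets."""
    dv = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(dv), X.T @ S)

def zero_shot_classify(x, W, emotion_protos):
    """Project a visual feature and match unseen emotion prototypes."""
    s = x @ W
    d = np.linalg.norm(emotion_protos - s, axis=1)
    return int(np.argmin(d))
```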
Semantic movie summarization based on string of IE-RoleNets
2015
Computational Visual Media
An IE-RoleNet (interaction and emotion rolenet) models the emotion and interactions of roles in a shot of the movie. The whole movie is represented as a string of IE-RoleNets. ...
Roles, their emotion, and interactions between them are three key elements for semantic content understanding of movies. ...
Then affect-related features are extracted from music and speech, respectively. Most existing approaches to vocal affect recognition use multiple acoustic features. ...
doi:10.1007/s41095-015-0015-3
fatcat:y7kzew2fufdqzlr7q6mr5ezrkq
2018 Index IEEE Transactions on Cognitive and Developmental Systems Vol. 10
2018
IEEE Transactions on Cognitive and Developmental Systems
..., +, TCDS June 2018 205-212. Zero-Shot Image Classification Based on Deep Feature Extraction. ...
..., +, TCDS Dec. 2018 894-902. Zero-Shot Image Classification Based on Deep Feature Extraction. ...
doi:10.1109/tcds.2019.2892259
fatcat:b2njmkdqk5azpmll74ucef3h5m
Feature Dimensionality Reduction for Video Affect Classification: A Comparative Study
[article]
2018
arXiv
pre-print
Affective computing has become a very important research area in human-machine interaction. However, affects are subjective, subtle, and uncertain. ...
Thus, dimensionality reduction is critical in affective computing. This paper presents our preliminary study on dimensionality reduction for affect classification. ...
Moreover, 22,881 features were extracted from 64-channel EEG signals in [15] for emotion recognition. In contrast, affects are very subjective, subtle, and uncertain. ...
arXiv:1808.02956v1
fatcat:qj5ngc2gpbbfje5xuzq5pw6mbm
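The entry above motivates dimensionality reduction for very high-dimensional affect features. A minimal sketch of that step using PCA from scikit-learn; the 22,881-dimensional matrix and labels below are random placeholders, not data from the paper:

```python
# Reduce high-dimensional affect features with PCA before classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X = np.random.randn(200, 22881)        # placeholder EEG-style feature matrix
y = np.random.randint(0, 2, size=200)  # placeholder binary affect labels

clf = make_pipeline(PCA(n_components=30), LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```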
Use of Affective Visual Information for Summarization of Human-Centric Videos
[article]
2021
arXiv
pre-print
First, we train a visual input-driven state-of-the-art continuous emotion recognition model (CER-NET) on the RECOLA dataset to estimate emotional attributes. ...
Then, we integrate the estimated emotional attributes and the high-level representations from the CER-NET with the visual information to define the proposed affective video summarization architectures ...
First, they learn video representations using Image Transfer Encoding and textual representations using zero-shot learning from auxiliary datasets. ...
arXiv:2107.03783v1
fatcat:jwiaetby2fgovejhqdoxtixkpu
Toward Open-World Electroencephalogram Decoding Via Deep Learning: A Comprehensive Survey
[article]
2021
arXiv
pre-print
However, an open-world environment is a more realistic setting, where situations affecting EEG recordings can emerge unexpectedly, significantly weakening the robustness of existing methods. ...
... learning. Semi-supervised learning: how to exploit the structure of unlabeled EEG data to provide supervision. Zero-shot learning: how to recognize both the known and the unknown categories of EEG. ...
For example, Jia et al. proposed a novel, semi-supervised DL framework for EEG emotion recognition [16] . ...
arXiv:2112.06654v2
fatcat:roxf5k7ypfcvtdzz3pbho3kdri
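The survey snippet above mentions semi-supervised deep learning for EEG emotion recognition. A generic self-training sketch of that idea (not the framework of Jia et al. [16]): unlabeled EEG windows receive pseudo-labels from a confident base classifier, which is then refit on them. Features and labels are placeholders:

```python
# Self-training on partially labeled EEG-style features with scikit-learn.
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X = np.random.randn(300, 64)           # placeholder EEG features
y = np.random.randint(0, 3, size=300)  # 3 emotion classes
y[100:] = -1                           # -1 marks unlabeled samples

base = SVC(probability=True)           # predict_proba needed for thresholding
model = SelfTrainingClassifier(base, threshold=0.8)
model.fit(X, y)
print(model.predict(X[:5]))
```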
Evidence Theory-Based Multimodal Emotion Recognition
[chapter]
2009
Lecture Notes in Computer Science
Automatic recognition of human affective states is still a largely unexplored and challenging topic. ...
In this paper, we explore audio-visual multimodal emotion recognition. ...
Fig. 4: Multimodal emotion recognition. Fig. 5: Facial feature points. Fig. 6: NNET classifier fusion structure. Table 1: results (CR + MAP) over Anger, Disgust, Fear, Happiness, Sadness, Surprise. ...
doi:10.1007/978-3-540-92892-8_44
fatcat:qdcqbmcfyzcdtmb5yg3xb2yvoi
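The entry above fuses audio and visual emotion evidence using evidence theory. A worked sketch of Dempster's rule of combination, the core of such fusion: each modality assigns belief masses over the six basic emotions (including a mass on the full set, representing ignorance), and the rule combines them. The mass values below are illustrative, not the paper's:

```python
# Dempster's rule of combination over frozenset focal elements.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions; keys are frozensets of emotion labels."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # Normalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

EMOTIONS = frozenset({"anger", "disgust", "fear",
                      "happiness", "sadness", "surprise"})
m_audio = {frozenset({"anger"}): 0.6, EMOTIONS: 0.4}  # 0.4 = ignorance
m_video = {frozenset({"anger"}): 0.5, frozenset({"fear"}): 0.2, EMOTIONS: 0.3}
print(dempster_combine(m_audio, m_video))
```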
Shot scale matters: The effect of close-up frequency on mental state attribution in film viewers
2020
Poetics
zero close-up control condition. ...
The present experiment focuses on the role of close-up shots of the character's face in viewers' mental state attribution, as well as in their cognitive and affective processing more generally. ...
A higher level of language emotionality can indicate a higher degree of emotional engagement with the film (Tausczik & Pennebaker, 2010) . ...
doi:10.1016/j.poetic.2020.101480
fatcat:dwlbj2ur2fhj7bddyruuln4zxe
Efficient Single-Shot Multi-Object Tracking for Vehicles in Traffic Scenarios
2021
Sensors
To alleviate this problem, single-shot methods, which simultaneously perform object detection and embedding extraction, have been developed and have drastically improved the inference speed. ...
Therefore, this study proposes an enhanced single-shot multi-object tracking system that displays improved accuracy while maintaining a high inference speed. ...
To address the aforementioned limitation, single-shot approaches, which apply a parallel structure to object detection and embedding extraction, have been developed [4] [5] [6] [7] . ...
doi:10.3390/s21196358
pmid:34640675
fatcat:vrezyq6vbfagbgbsffhkejfoay
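The entry above pairs object detection with embedding extraction in a single shot; the remaining step is associating detections to tracks. A minimal sketch of that association, assuming cosine distance between appearance embeddings and Hungarian matching; the embeddings and threshold are placeholders, not the paper's pipeline:

```python
# Match current detections to existing tracks by appearance embedding.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_emb, det_emb, max_dist=0.4):
    """track_emb: (T, d), det_emb: (D, d); returns (track, det) pairs."""
    t = track_emb / np.linalg.norm(track_emb, axis=1, keepdims=True)
    d = det_emb / np.linalg.norm(det_emb, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T                      # cosine distance matrix
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```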
Multi-Perspective Cost-Sensitive Context-Aware Multi-Instance Sparse Coding and Its Application to Sensitive Video Recognition
2016
IEEE transactions on multimedia
The experiments demonstrate that the features with an emotional meaning are effective for violent and horror video recognition, and our cost-sensitive context-aware MI-SC and multi-perspective MI-J-SC ...
Based on color emotion and color harmony theories, we extract visual emotional features from videos. ...
A video is divided into a series of shots via shot segmentation and a key frame from each shot is selected. ...
doi:10.1109/tmm.2015.2496372
fatcat:oeenmbw43jebrivncuixurxwqm
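The entry above divides a video into shots before extracting emotional features. A minimal sketch of one common shot-segmentation approach, declaring a boundary when the color-histogram distance between consecutive frames exceeds a threshold; the path and threshold are placeholders, and this is not necessarily the paper's segmentation method:

```python
# Histogram-difference shot segmentation with OpenCV.
import cv2

def shot_boundaries(path, threshold=0.5):
    cap, prev, boundaries, idx = cv2.VideoCapture(path), None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev is not None:
            # Bhattacharyya distance in [0, 1]; large = dissimilar frames.
            if cv2.compareHist(prev, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
                boundaries.append(idx)
        prev, idx = hist, idx + 1
    cap.release()
    return boundaries
```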
MovieGraphs: Towards Understanding Human-Centric Situations from Videos
2018
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
This requires machines to have the ability to "read" people's emotions, motivations, and other factors that affect behavior. ...
This requires proper reading of people's emotions, understanding their mood, motivations, and other factors that affect behavior. ...
doi:10.1109/cvpr.2018.00895
dblp:conf/cvpr/VicolTCF18
fatcat:q4uejlxaafcdba3fmb7w2t6hu4
Expert and Crowd-Guided Affect Annotation and Prediction
[article]
2021
arXiv
pre-print
clip labels is used for binary emotion recognition on the Evaluation set for which only dynamic crowd annotations are available. ...
Observed experimental results confirm the effectiveness of the EG-MTL algorithm, which is reflected via improved arousal and valence estimation for [...], and higher recognition accuracy for [...]. ...
We now illustrate the utility of MTL, and show how it ...
... commonly employed for emotion recognition. ...
arXiv:2112.08432v1
fatcat:fhwokluzd5drxfll5bmhh4reyy
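The entry above uses multi-task learning over arousal/valence estimation and emotion recognition. A minimal sketch of that general setup, assuming a shared encoder with two heads trained on a weighted sum of losses; dimensions, weights, and data are placeholders, not the EG-MTL model:

```python
# Shared-encoder multi-task network: regression head (arousal, valence)
# plus a binary emotion classification head.
import torch
import torch.nn as nn

class MTLNet(nn.Module):
    def __init__(self, in_dim=128, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.av_head = nn.Linear(hidden, 2)    # arousal, valence
        self.emo_head = nn.Linear(hidden, 2)   # binary emotion logits

    def forward(self, x):
        h = self.shared(x)
        return self.av_head(h), self.emo_head(h)

model = MTLNet()
x = torch.randn(8, 128)
av_true, emo_true = torch.randn(8, 2), torch.randint(0, 2, (8,))
av_pred, emo_logits = model(x)
loss = (nn.MSELoss()(av_pred, av_true)
        + 0.5 * nn.CrossEntropyLoss()(emo_logits, emo_true))
loss.backward()
```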
Showing results 1 — 15 out of 4,813 results