
A Saliency based Feature Fusion Model for EEG Emotion Estimation [article]

Victor Delvigne, Antoine Facchini, Hazem Wannous, Thierry Dutoit, Laurence Ris, Jean-Philippe Vandeborre
2022 arXiv   pre-print
In this paper, we propose a dual model considering two different representations of EEG feature maps: 1) a sequential based representation of EEG band power, 2) an image-based representation of the feature  ...  We also propose an innovative method to combine the information based on a saliency analysis of the image-based model to promote joint learning of both model parts.  ...  Another advantage of the saliency-based feature fusion is its low standard deviation compared to other models.  ... 
arXiv:2201.03891v3 fatcat:ky2xtlyjjndkppppj3qkizuzyq

Rethinking Saliency Map: An Context-aware Perturbation Method to Explain EEG-based Deep Learning Model [article]

Hanqi Wang, Xiaoguang Zhu, Tao Chen, Chengfang Li, Liang Song
2022 arXiv   pre-print
To validate our idea and make a comparison with the other methods, we select three representative EEG-based models to implement experiments on the emotional EEG dataset DEAP.  ...  Based on the characteristic of EEG data, we suggest a context-aware perturbation method to generate a saliency map from the perspective of the raw EEG signal.  ...  Considering these weaknesses, we cannot regard their works as a promising way to investigate saliency problem for the EEG-based model.  ... 
arXiv:2205.14976v1 fatcat:tkwh5fnotjhv3ds6z26jezxnny

Gender and Emotion Recognition with Implicit User Signals [article]

Maneesh Bilalpur, Seyed Mostafa Kia, Manisha Chawla, Tat-Seng Chua and Ramanathan Subramanian
2017 arXiv   pre-print
Also, fairly modest valence (positive vs negative emotion) recognition is achieved with EEG and eye-based features.  ...  Implicit viewer responses in the form of EEG brain signals and eye movements are then examined for existence of (a) emotion and gender-specific patterns from event-related potentials (ERPs) and fixation  ...  Focusing on specifics, EEG features considerably outperform eye-based features for GR and higher AUC scores are obtained with emotion-specific (exclusively negative emotion) features as compared to emotion  ... 
arXiv:1708.08735v1 fatcat:zd7qf4x7mfaq5kwr55ycfr47ba

Gender and emotion recognition with implicit user signals

Maneesh Bilalpur, Seyed Mostafa Kia, Manisha Chawla, Tat-Seng Chua, Ramanathan Subramanian
2017 Proceedings of the 19th ACM International Conference on Multimodal Interaction - ICMI 2017  
Also, fairly modest valence (positive vs negative emotion) recognition is achieved with EEG and eye-based features.  ...  Implicit viewer responses in the form of EEG brain signals and eye movements are then examined for existence of (a) emotion and gender-specific patterns from event-related potentials (ERPs) and fixation distributions  ...  Focusing on specifics, EEG features considerably outperform eye-based features for GR and higher AUC scores are obtained with emotion-specific (exclusively negative emotion) features as compared to emotion  ... 
doi:10.1145/3136755.3136790 dblp:conf/icmi/BilalpurKCCS17 fatcat:uqwv23vgbzhchjjo2yftw3igo4

Fusion Of Musical Contents, Brain Activity And Short Term Physiological Signals For Music-Emotion Recognition

Jimmy Jarjoura, Sergio Giraldo, Rafael Ramirez
2017 Zenodo  
In this study we propose a multi-modal machine learning approach, combining EEG and Audio features for music emotion recognition using a categorical model of emotions.  ...  EEG data was obtained from three male participants listening to the labeled music excerpts. Feature level fusion was adopted to combine EEG and Audio features.  ...  Conclusion In this study we explored the effect of combining EEG and audio features on the classification accuracy of trained machine learning models to estimate the emotional state based on EEG data using  ... 
doi:10.5281/zenodo.1095499 fatcat:neoiqqhvsjf7fkofgoi4dwaqmy

Gender and Emotion Recognition from Implicit User Behavior Signals [article]

Maneesh Bilalpur, Seyed Mostafa Kia, Mohan Kankanhalli, Ramanathan Subramanian
2020 arXiv   pre-print
Experimental results reveal that (a) reliable GR and ER is achievable with EEG and eye features, (b) differential cognitive processing of negative emotions is observed for females and (c) eye gaze-based  ...  This work explores the utility of implicit behavioral cues, namely, Electroencephalogram (EEG) signals and eye movements for gender recognition (GR) and emotion recognition (ER) from psychophysical behavior  ...  NB is a generative classifier that estimates the test label based on the maximum a-posteriori criterion, p(C | X), assuming classconditional feature independence.  ... 
arXiv:2006.13386v1 fatcat:y2baswyrwffdtflkse5uk3qvbe
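The snippet above describes the Naive Bayes (NB) classifier used in that work: pick the class maximising the posterior p(C | X) under class-conditional feature independence. A minimal Gaussian NB sketch of that MAP rule (an illustration, not the paper's implementation; all function names here are my own):

```python
import numpy as np

def fit_gnb(X, y):
    """Estimate per-class priors and per-feature Gaussian parameters."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),       # prior p(C)
                     Xc.mean(axis=0),        # per-feature means
                     Xc.var(axis=0) + 1e-9)  # per-feature variances (regularised)
    return params

def predict_gnb(X, params):
    """MAP rule: argmax_C  log p(C) + sum_i log p(x_i | C)."""
    scores = []
    for c, (prior, mu, var) in params.items():
        # log of a Gaussian density, evaluated per feature, summed (independence)
        log_lik = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
        scores.append((c, np.log(prior) + log_lik.sum(axis=1)))
    labels, s = zip(*scores)
    return np.array(labels)[np.argmax(np.stack(s), axis=0)]
```

The class-conditional independence assumption is what lets the joint likelihood factorise into the per-feature sum of log-densities above.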

2020 Index IEEE Transactions on Multimedia Vol. 22

2020 IEEE transactions on multimedia  
., TMM Dec. 2020 3115-3127 Jevremovic, A., see Kostic, Z., TMM July 2020 1904-1916 Ji, Q., see Wang, S., TMM April 2020 1084-1097 Jia, K., see 1345-1357 Jia, Y., see 2138-2148 Jian, M., Dong, J.,  ...  ATMFN: Adaptive-Threshold-Based Multi-Model Fusion Network for Compressed Face Hallucination.  ... 
doi:10.1109/tmm.2020.3047236 fatcat:llha6qbaandfvkhrzpe5gek6mq

Front Matter: Volume 10836

Ruidan Su
2018 2018 International Conference on Image and Video Processing, and Artificial Intelligence  
using a Base 36 numbering system employing both numerals and letters.  ...  Publication of record for individual papers is online in the SPIE Digital Library. Paper Numbering: Proceedings of SPIE follow an e-First publication model.  ...  10836 18 A complementary tracking model with multiple features [10836-20] 10836 19 Linguistic attention-based model for aspect extraction [10836-46] 10836 1A Structural-attentioned LSTM for action  ... 
doi:10.1117/12.2516621 fatcat:s4qjw53qhvf6jhkfsdnnhwzymm

Affective video recommender systems: A survey

Dandan Wang, Xiaoming Zhao
2022 Frontiers in Neuroscience  
Therefore, it is suitable for reliable emotion analysis. The physical signals can be recorded by a webcam or recorder.  ...  (SCR) by a galvanic skin response (GSR), and photoplethysmography (PPG) estimating users' pulse.  ...  In Alhagry (2017) , an LSTM is adopted to learn the EEG features for emotional video recognition.  ... 
doi:10.3389/fnins.2022.984404 pmid:36090291 pmcid:PMC9459336 fatcat:f6545crzj5afvhnyj5tdcw2p4m

Behavioral, Physiological and EEG Activities Associated with Conditioned Fear as Sensors for Fear and Anxiety

Jui-Hong Chien, Luana Colloca, Anna Korzeniewska, Timothy J. Meeker, O. Joe Bienvenu, Mark I. Saffer, Fred A. Lenz
2020 Sensors  
., Dot Probe Test and emotional Stroop.  ...  ERS and ERD are related to the ratings above as well as to anxious personalities and clinical anxiety and can resolve activity over short time intervals like those for some moods and emotions.  ...  The manuscript is in accordance with the statement of ethical standards for manuscripts submitted to MDPI (Sensors).  ... 
doi:10.3390/s20236751 pmid:33255916 fatcat:4pm4yg3pbrfyvgmx3rkwb3ctne

Data, Signal and Image Processing and Applications in Sensors

Manuel J. C. S. Reis
2021 Sensors  
With the rapid advance of sensor technology, a vast and ever-growing amount of data in various domains and modalities are readily available [...]  ...  [20] propose a highly efficient optimal estimation algorithm for MEMS arrays based on Wavelet Compressive Fusion (WCF), to improve the performance of MEMS inertial devices.  ...  Dai and Li [22] proposed a saliency detection method for multiple targets based on multi-saliency detection, aiming to solve the problem of incomplete saliency detection and unclear boundaries in infrared  ... 
doi:10.3390/s21103323 pmid:34064747 fatcat:l5advw57nzgfbdi4txenowhzcy

Review of Visual Saliency Prediction: Development Process from Neurobiological Basis to Deep Models

Fei Yan, Cheng Chen, Peng Xiao, Siyu Qi, Zhiliang Wang, Ruoxiu Xiao
2021 Applied Sciences  
the saliency model, and the emerging applications, to provide new saliency predictions for follow-up work and the necessary help and advice.  ...  Deep learning models can automatically learn features, thus solving many drawbacks of the classic models, such as handcrafted features and task settings, among others.  ...  [81] proposed a deep feature-based saliency (DeepFeat) model to utilize features by combining bottom-up and top-down saliency maps. AKa et al.  ... 
doi:10.3390/app12010309 fatcat:u5yvrsykkbcevj46un5e4hrzs4

Psychophysiology-Based QoE Assessment: A Survey

Ulrich Engelke, Daniel P. Darcy, Grant H. Mulliken, Sebastian Bosse, Maria G. Martini, Sebastian Arndt, Jan-Niklas Antons, Kit Yan Chan, Naeem Ramzan, Kjell Brunnstrom
2017 IEEE Journal on Selected Topics in Signal Processing  
We present a survey of psychophysiology-based assessment for Quality of Experience (QoE) in advanced multimedia technologies.  ...  This survey is not considered to be exhaustive but serves as a guideline for those interested to further explore this emerging field of research.  ...  No fusion. ECG features for classification of arousal level, EEG features for valence recognition (positive/negative).  ... 
doi:10.1109/jstsp.2016.2609843 fatcat:6hekxtfmozebnfjstiwof5vqzi

Affective Computing for Large-scale Heterogeneous Multimedia Data

Sicheng Zhao, Shangfei Wang, Mohammad Soleymani, Dhiraj Joshi, Qiang Ji
2019 ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)  
We begin this survey by introducing the typical emotion representation models from psychology that are widely employed in AC. We briefly describe the available datasets for evaluating AC algorithms.  ...  ., images, music, videos, and multimodal data, with the focus on both handcrafted features-based methods and deep learning methods.  ...  Model-based methods address multimodal fusion in model construction.  ... 
doi:10.1145/3363560 fatcat:m56udtjlxrauvmj6d5z2r2zdeu

Automatic, Dimensional and Continuous Emotion Recognition

Hatice Gunes, Maja Pantic
2010 International Journal of Synthetic Emotions  
Savran et al. (2006) have obtained feature/decision level fusion of the fNIRS and EEG feature vectors on a block-by-block basis.  ...  Results from feature and decision level fusion: 51% for bio-signals, 54% for speech; 55% applying feature fusion, 52% for decision fusion, and 54% for hybrid fusion (subject-independent validation)  ... 
doi:10.4018/jse.2010101605 fatcat:hipfyafiybfl5fk2ag6gvflm24
Showing results 1 — 15 out of 430 results