
Facial Expression and Peripheral Physiology Fusion to Decode Individualized Affective Experience [article]

Yu Yin, Mohsen Nabian, Miolin Fan, ChunAn Chou, Maria Gendron, Sarah Ostadabbas
2018 arXiv   pre-print
We validated our approach using a multimodal dataset consisting of (i) facial videos and (ii) several peripheral physiological signals, synchronously recorded from 12 participants while watching 4 emotion-eliciting  ...  video-based stimuli.  ...  To eliminate the interference of head movement, we first extract the depth information of each face pixel from the 2D video frames using the 3D morphable face model described in (Kittler et al., 2016).  ... 
arXiv:1811.07392v1 fatcat:xq2bvudmdfferjcakmt62oshri

Non-Contact Method of Heart Rate Measurement Based on Facial Tracking

Ruqiang Huang, Weihua Su, Shiyue Zhang, Wei Qin
2019 Journal of Computer and Communications  
Image photoplethysmography can realize low-cost, easy-to-operate, non-contact heart rate detection from facial video, and effectively overcome the limitations of traditional contact methods in daily  ...  analysis and experimental verification, this method effectively reduces the error rate under different experimental variables and has good consistency with the heart rate values collected by the medical physiological  ...  Experimental Procedure The experimental flow arrangement is as shown in Figure 5: while facial video is recorded by a camera, physiological parameters of the subjects are collected by a physiological vest, recorded  ... 
doi:10.4236/jcc.2019.75002 fatcat:x7nszy4n4nfrnozyngenlgrcbu
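The image-photoplethysmography idea in the entry above can be sketched in a few lines: average the green channel over a facial region per frame, then read the heart rate off the dominant spectral peak inside a plausible pulse band. This is a minimal illustration under assumed band limits, not the authors' pipeline; the function name and interface are invented for the sketch.

```python
import numpy as np

def estimate_heart_rate(green_trace, fps, lo=0.7, hi=4.0):
    """Estimate heart rate (BPM) from the mean green-channel trace of a
    facial region. The dominant spectral peak inside the plausible pulse
    band (lo..hi Hz, i.e. 42..240 BPM) is taken as the pulse frequency."""
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()                         # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)     # restrict to the pulse band
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0                    # Hz -> beats per minute

# Synthetic check: a 1.2 Hz pulse (72 BPM) sampled at 30 fps for 10 s.
np.random.seed(0)
t = np.arange(300) / 30.0
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(300)
print(round(estimate_heart_rate(trace, fps=30)))   # prints 72
```

Real systems add skin-region tracking and motion suppression before the spectral step; the FFT peak-picking shown here is only the final stage.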

Emotion Analysis: Bimodal Fusion of Facial Expressions and EEG

Huiping Jiang, Rui Jiao, Demeng Wu, Wenbo Wu
2021 Computers Materials & Continua  
This indicated that the expression mode helped eliminate EEG signals that contained few or no emotional features, enhancing emotion recognition accuracy.  ...  Compared with the EEG-LSTM model, the Facial-LSTM model improved by about 3%.  ...  Acknowledgement: The author thanks all subjects who participated in this research and the technical support from FISTAR Technology Inc.  ... 
doi:10.32604/cmc.2021.016832 fatcat:5nwasdmqfvhlxhicomlwxezecq

New Breakthroughs and Innovation Modes in English Education in Post-pandemic Era

Yumin Shen, Hongyu Guo
2022 Frontiers in Psychology  
A dual-modality emotion recognition method is proposed that mainly recognizes and analyzes facial expressions and physiological signals of students and the virtual character in each scene.  ...  The outbreak of COVID-19 has brought drastic changes to English teaching as it has shifted from the offline mode before the pandemic to the online mode during the pandemic.  ...  After obtaining the physiological signals, EVM was used to magnify the color of the facial videos.  ... 
doi:10.3389/fpsyg.2022.839440 pmid:35222216 pmcid:PMC8873145 fatcat:kjnivesfnvdppp6kck6hkwc6ve
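Eulerian Video Magnification (EVM), used above to magnify facial color changes, amplifies subtle per-pixel temporal variation in a chosen frequency band. A minimal sketch follows; it omits the Laplacian pyramid of the full method, and the band limits and gain are illustrative assumptions, not values from the paper.

```python
import numpy as np

def magnify_color(frames, fps, lo=0.8, hi=3.0, alpha=50.0):
    """Amplify subtle pulse-band color variation in a video.

    frames: array (T, H, W), one color channel of a facial video.
    Each pixel's time series is band-pass filtered by FFT masking,
    scaled by alpha, and added back onto the input frames."""
    f = np.asarray(frames, dtype=float)
    F = np.fft.rfft(f, axis=0)
    freqs = np.fft.rfftfreq(f.shape[0], d=1.0 / fps)
    mask = (freqs >= lo) & (freqs <= hi)     # keep only pulse-band variation
    F[~mask] = 0.0
    bandpassed = np.fft.irfft(F, n=f.shape[0], axis=0)
    return f + alpha * bandpassed
```

After magnification, the otherwise invisible blood-volume color oscillation dominates the frame-to-frame variation, which is what makes the physiological signal recoverable from ordinary video.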

An Improved Fatigue Detection System Based on Behavioral Characteristics of Driver [article]

Rajat Gupta, Kanishk Aman, Nalin Shiva, Yadvendra Singh
2017 arXiv   pre-print
Principal Component Analysis is thus implemented to reduce the features while minimizing the amount of information lost.  ...  The camera detects the driver's face, observes the alterations in its facial features, and uses these features to estimate the fatigue level. Facial features include the eyes and mouth.  ...  The images from the video capture unit are RGB images, and for very dim light conditions we perform low-light image enhancement and noise elimination [20].  ... 
arXiv:1709.05669v1 fatcat:olbvorblobekbkpvuez2cjvfae
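The PCA step described above — reducing features while minimizing the information lost — can be sketched with a plain SVD. The function name and interface are invented for illustration; this is not the authors' code.

```python
import numpy as np

def pca_reduce(X, k):
    """Project X (n_samples, n_features) onto its top-k principal
    components; also return the variance ratio each one explains."""
    Xc = X - X.mean(axis=0)                # center each feature
    # SVD of the centered data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)        # variance ratio per component
    return Xc @ Vt[:k].T, explained[:k]

# Toy data whose variance lies almost entirely along one direction:
np.random.seed(0)
t = np.random.randn(100, 1)
X = t @ np.array([[1.0, 2.0, 3.0]]) + 0.01 * np.random.randn(100, 3)
Z, ratio = pca_reduce(X, 2)
```

Keeping components until the cumulative `ratio` crosses a threshold (e.g. 95%) is the usual way to pick `k` in practice.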

Neonatal Facial Coding System for Assessing Postoperative Pain in Infants: Item Reduction is Valid and Feasible

Jeroen W. B. Peters, Hans M. Koot, Ruth E. Grunau, Josien de Boer, Marieke J. van Druenen, Dick Tibboel, Hugo J. Duivenvoorden
2003 The Clinical Journal of Pain  
Objective: The objectives of this study were to: (1) evaluate the validity of the Neonatal Facial Coding System (NFCS) for assessment of postoperative pain and (2) explore whether the number of NFCS facial  ...  Stepwise backward elimination (one by one; P_out > 0.10) served to retain only the relevant facial variables. Conversely, none of the covariates was removed from the model.  ...  These findings suggest that apart from overt behavior, nurses also take into account additional information when assigning VAS scores.  ... 
doi:10.1097/00002508-200311000-00003 pmid:14600535 fatcat:n7sw7ju2zvbqjhgq3frs255ylu
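The stepwise backward elimination above (one variable at a time; P_out > 0.10) can be sketched as follows. This is a generic illustration, not the authors' analysis: it fits ordinary least squares without an intercept and uses a normal approximation to the t distribution for the p-values.

```python
import numpy as np
from math import erf, sqrt

def p_value(t_stat):
    """Two-sided p-value, normal approximation to the t distribution."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(t_stat) / sqrt(2.0))))

def backward_eliminate(X, y, names, p_out=0.10):
    """Drop predictors one at a time while the least significant
    coefficient's p-value exceeds p_out."""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = X[:, keep]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        sigma2 = resid @ resid / (len(y) - len(keep))
        cov = sigma2 * np.linalg.inv(Xk.T @ Xk)
        pvals = [p_value(b / np.sqrt(v)) for b, v in zip(beta, np.diag(cov))]
        worst = int(np.argmax(pvals))
        if pvals[worst] <= p_out:
            break                          # everything left is relevant
        keep.pop(worst)                    # remove the weakest predictor
    return [names[i] for i in keep]

# Toy example: y depends on predictors "a" and "b" but not "c".
np.random.seed(1)
X = np.random.randn(200, 3)
y = 2.0 * X[:, 0] + 3.0 * X[:, 1] + np.random.randn(200)
kept = backward_eliminate(X, y, ["a", "b", "c"])
```

With 200 samples the t statistics for "a" and "b" are enormous, so both survive elimination regardless of the noise draw.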

UBFC-Phys: A Multimodal Database For Psychophysiological Studies Of Social Stress

Rita Meziatisabour, Yannick Benezeth, Pierre De Oliveira, Julien Chappe, Fan Yang
2021 IEEE Transactions on Affective Computing  
Video recordings allowed remote pulse signals to be computed, using remote photoplethysmography (RPPG), along with facial expression features.  ...  Our dataset makes it possible to evaluate video-based physiological measures against more conventional contact-based modalities.  ...  estimated from video recordings to describe the participants' facial expressions.  ... 
doi:10.1109/taffc.2021.3056960 fatcat:h2vnp6sqkbeybgkyqp73bdxgyq

A Systematic Review on Affective Computing: Emotion Models, Databases, and Recent Advances [article]

Yan Wang, Wei Song, Wei Tao, Antonio Liotta, Dawei Yang, Xinlei Li, Shuyong Gao, Yixuan Sun, Weifeng Ge, Wei Zhang, Wenqiang Zhang
2022 arXiv   pre-print
However, it is hard to reveal one's inner emotions when they are purposely hidden from facial expressions, audio tones, body gestures, etc.  ...  Thus, the fusion of physical information and physiological signals can provide useful features of emotional states and lead to higher accuracy.  ...  The video modality (facial expression, voice, gesture, posture, etc.) may also be integrated with multimodal physiological signals for video-physiological modality fusion for affective analysis [397,  ... 
arXiv:2203.06935v3 fatcat:h4t3omkzjvcejn2kpvxns7n2qe

Emotion Classification from Facial Images and Videos Using a Convolutional Neural Network

2022 International Journal of Advanced Trends in Computer Science and Engineering  
And that is indeed what we will be focusing on: reading the emotions of human beings from facial images and videos.  ...  For emotion recognition from videos, we segment the video into individual frames at 30 frames per second, repeat the process used for facial images on each frame, then do sentiment analysis, and finally  ...  Verbal and nonverbal information such as facial expression changes, voice tone, and physiological signs [4] may be used to identify a person's emotional state.  ... 
doi:10.30534/ijatcse/2022/031112022 fatcat:pp5ozwilynaobn4gdtjq3i7w7u
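The video pipeline above — classify each extracted frame, then fuse the per-frame results into one video-level decision — can be illustrated with a simple majority vote over per-frame labels. The per-frame CNN itself is omitted; the labels are assumed already predicted, and the function name is invented for the sketch.

```python
from collections import Counter

def video_emotion(frame_predictions):
    """Fuse per-frame emotion labels (one per extracted frame) into a
    single video-level label by majority vote."""
    label, _ = Counter(frame_predictions).most_common(1)[0]
    return label

# e.g. a 3-second clip at 30 fps yields 90 per-frame predictions:
frames = ["happy"] * 40 + ["neutral"] * 15 + ["happy"] * 35
print(video_emotion(frames))   # prints happy
```

Averaging per-frame class probabilities instead of voting on hard labels is a common variant that is less sensitive to borderline frames.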

Multimodal-Based Stream Integrated Neural Networks for Pain Assessment

Ruicong ZHI, Caixia ZHOU, Junwei YU, Tingting LI, Ghada ZAMZMI
2021 IEICE transactions on information and systems  
Nonverbal pain indicators such as pain-related facial expressions and changes in physiological parameters could provide valuable insights for pain assessment.  ...  Pain is an essential physiological phenomenon of human beings. Accurate assessment of pain is important to develop proper treatment.  ...  However, it is far from sufficient to learn facial features with 2D convolution over spatial dimensions alone when applied to video analysis tasks.  ... 
doi:10.1587/transinf.2021edp7065 fatcat:r3h6cews75duhhadmb73rvvokm

Drowsiness Classification for Internal Driving Situation Awareness on Mobile Platform

Julkar Nine, Naeem Ahmed, Rahul Mathavan
2021 Embedded Selforganising Systems  
This work aims to classify drowsiness and to warn and inform drivers, helping to keep them from falling asleep at the wheel.  ...  This paper proposes an implementation of a lightweight method to detect driver sleepiness using facial landmarks and head pose estimation based on neural-network methodologies on a mobile device.  ...  [22] utilized filtering and thresholding techniques to eliminate noise from the ECG input data.  ... 
doi:10.14464/ess.v8i2.491 fatcat:2jubj2gqbvac5k6dtz42thxu7m
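A common facial-landmark cue for the kind of drowsiness detection described above is the eye aspect ratio (EAR). The sketch below assumes the usual six-landmark eye ordering of the 68-point face model; whether this paper uses EAR specifically is not stated in the snippet, so take it as a representative technique.

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks ordered as
    in the 68-point face model (p1..p6 around the eye). EAR falls
    toward 0 as the eye closes, so a sustained low EAR flags drowsiness."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])   # eyelid gaps
    return vertical / (2.0 * dist(eye[0], eye[3]))           # / eye width

# Toy landmark sets: a wide-open eye vs. a nearly closed one.
open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```

In practice the EAR is averaged over both eyes and compared against a calibrated threshold over several consecutive frames, so single-frame blinks do not trigger an alarm.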

Multimodal Affect Recognition: Current Approaches and Challenges [chapter]

Hussein Al Osman, Tiago H. Falk
2017 Emotion and Attention Recognition Based on Biological Signals and Images  
Third, we present publicly accessible multimodal datasets designed to expedite work on the topic by eliminating the laborious task of dataset collection.  ...  as video and physiological data.  ...  Moreover, recent studies have explored the extraction of physiological information (e.g., heart rate and breathing) from face videos [81, 82] , and thus may open doors for multimodal systems, which, in  ... 
doi:10.5772/65683 fatcat:du7u2lfx4nhkzf5d7zq7g5ofty

A New Approach for Noncontact Imaging Photoplethysmography Using Chrominance Features and Low-rank in the IoT Operating Room

Hongwei Yue, Xiaorong Li, Hongtao Wang, Huazhou Chen, Xiaojun Wang, Ken Cai
2019 IEEE Access  
Factors such as variations in ambient light and face shaking can also easily affect heart rate detection based on face videos, thus resulting in inaccurate estimations of heart rate from the blood volume  ...  Finally, users can log on to the health-related cloud platform and gain information regarding their health status in real time.  ...  CONCLUSION AND FUTURE DIRECTIONS Operating rooms based on IoT technology utilize numerous sensors to gather physiological information from patients.  ... 
doi:10.1109/access.2019.2932204 fatcat:iddg4ar4lnc6jesd2luln2qga4
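The chrominance features in the title above are commonly computed as in the CHROM method of De Haan and Jeanne: two chrominance projections of the normalized skin color are combined so that motion and illumination components cancel. A minimal sketch follows; it is not the authors' exact algorithm, which additionally applies low-rank processing.

```python
import numpy as np

def chrom_pulse(rgb):
    """Chrominance-based pulse extraction from mean skin-pixel color.

    rgb: array (T, 3) of mean R, G, B over the skin region per frame.
    Two chrominance signals are combined with a data-driven weight so
    that specular/motion components cancel, leaving the pulse signal."""
    norm = rgb / rgb.mean(axis=0)            # normalize each channel
    x = 3.0 * norm[:, 0] - 2.0 * norm[:, 1]
    y = 1.5 * norm[:, 0] + norm[:, 1] - 1.5 * norm[:, 2]
    alpha = np.std(x) / np.std(y)            # balance the two signals
    return x - alpha * y                     # pulse-dominated signal
```

Because `alpha` is recomputed per window, the cancellation adapts to the current lighting, which is what gives chrominance methods their robustness to ambient-light variation and face shaking.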

Driver Drowsiness Detection Using Multi-Channel Second Order Blind Identifications

Chao Zhang, Xiaopei Wu, Xi Zheng, Shui Yu
2019 IEEE Access  
Video streams containing subject's facial region are analyzed to identify the physiological sources that are mixed in each image.  ...  Experiments on 15 subjects show that the multi-channel SOBI presents a promising framework to accurately detect drowsiness by merging multi-physiological information in a less complex way.  ...  More channels of data should be properly involved to extract more physiological sources from the facial videos.  ... 
doi:10.1109/access.2019.2891971 fatcat:5tikoj24avfdlfuj4f2oom35zu

Neural correlates of affective context in facial expression analysis: A simultaneous EEG-fNIRS study

Yanjia Sun, Hasan Ayaz, Ali N. Akansu
2015 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP)  
The affective states were registered from the video capture of facial expression and related neural activity was measured using wearable and portable neuroimaging systems: functional near infrared spectroscopy  ...  These findings encourage further studies of the joint utilization of video and brain signals for face perception and braincomputer interface (BCI) applications.  ...  Motion artifacts were eliminated prior to extracting the features from fNIRS signals by applying a fast Independent Component Analysis (ICA) [18] .  ... 
doi:10.1109/globalsip.2015.7418311 dblp:conf/globalsip/SunAA15 fatcat:pzab63o5vbb37lgqh7ql4i7c4e
Showing results 1 — 15 out of 9,901 results