1,867 Hits in 6.5 sec

Spatio-Temporal Relation and Attention Learning for Facial Action Unit Detection [article]

Zhiwen Shao, Lixin Zou, Jianfei Cai, Yunsheng Wu, Lizhuang Ma
2020 arXiv   pre-print
Spatio-temporal relations among facial action units (AUs) convey significant information for AU detection yet have not been thoroughly exploited.  ...  To tackle these limitations, we propose a novel spatio-temporal relation and attention learning framework for AU detection.  ...  Spatio-Temporal Relation Learning for AU Detection.  ... 
arXiv:2001.01168v1 fatcat:zsvic45l7jcjnpekqwyrnldj5q
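
To make the attention idea in this abstract concrete, here is a minimal sketch of learned spatial attention over per-frame CNN features for multi-label AU detection. It is an illustrative stand-in, not the authors' model; the module names, the 256-channel input, and the 12-AU output are assumptions.

```python
# Minimal sketch of spatial attention over per-frame CNN features for
# multi-label AU detection (illustrative only, not the paper's model).
import torch
import torch.nn as nn

class SpatialAttentionAU(nn.Module):
    def __init__(self, in_channels: int, num_aus: int):
        super().__init__()
        # 1x1 conv produces one attention logit per spatial location
        self.att = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.classifier = nn.Linear(in_channels, num_aus)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, channels, height, width) feature map of one frame
        b, c, h, w = feats.shape
        weights = torch.softmax(self.att(feats).view(b, -1), dim=1)      # (b, h*w)
        pooled = (feats.view(b, c, -1) * weights.unsqueeze(1)).sum(-1)   # (b, c)
        return self.classifier(pooled)  # AU logits; apply sigmoid for probabilities

model = SpatialAttentionAU(in_channels=256, num_aus=12)
logits = model(torch.randn(2, 256, 14, 14))  # -> torch.Size([2, 12])
```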

Spatio-Temporal Analysis of Facial Actions using Lifecycle-Aware Capsule Networks [article]

Nikhil Churamani, Sinan Kalkan, Hatice Gunes
2021 arXiv   pre-print
Most state-of-the-art approaches for Facial Action Unit (AU) detection rely upon evaluating facial expressions from static frames, encoding a snapshot of heightened facial activity.  ...  For this purpose, we propose the Action Unit Lifecycle-Aware Capsule Network (AULA-Caps) that performs AU detection using both frame and sequence-level features.  ...  Figure 1: Action Unit Lifecycle-Aware Capsule Network (AULA-Caps) for Multi-label Facial Action Unit Detection. Figure 2: Onset and Apex segment contiguous frames.  ... 
arXiv:2011.08819v2 fatcat:76ixpovisrhr7k4t5lms7xwnra
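
The snippet mentions combining frame-level and sequence-level features; the capsule routing of AULA-Caps is omitted here. Below is a generic hedged sketch assuming precomputed 512-d per-frame descriptors; the apex/lifecycle pooling is a hypothetical reading of the lifecycle idea, not the paper's architecture.

```python
# Generic sketch of fusing frame-level and sequence-level cues for AU
# detection (hypothetical stand-in; no capsule routing as in AULA-Caps).
import torch
import torch.nn as nn

class FrameSequenceAU(nn.Module):
    def __init__(self, feat_dim=128, num_aus=12):
        super().__init__()
        self.frame_enc = nn.Linear(512, feat_dim)       # per-frame projection
        self.seq_enc = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim * 2, num_aus)    # frame + sequence cues

    def forward(self, frames):
        # frames: (batch, time, 512) precomputed per-frame descriptors
        f = torch.relu(self.frame_enc(frames))          # (b, t, d)
        seq_out, _ = self.seq_enc(f)                    # (b, t, d)
        apex = f.max(dim=1).values        # peak frame feature ~ apex snapshot
        lifecycle = seq_out[:, -1]        # last hidden state ~ full lifecycle
        return self.head(torch.cat([apex, lifecycle], dim=-1))

logits = FrameSequenceAU()(torch.randn(2, 16, 512))  # -> (2, 12) AU logits
```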

Multi-modal Multi-label Facial Action Unit Detection with Transformer [article]

Lingfeng Wang, Shisen Wang, Jin Qi
2022 arXiv   pre-print
After that, we propose an action unit correlation module to learn the relationships between action unit labels and refine the action unit detection results.  ...  We propose a transformer-based model to detect facial action units (FAUs) in video. Specifically, we first train a multi-modal model to extract both audio and visual features.  ...  RELATED WORKS Previous studies on the Aff-Wild2 dataset have proposed some effective facial action unit detection models. Kuhnke et al.  ... 
arXiv:2203.13301v2 fatcat:p4p2redrlbe6pfsnopibehm3xy
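
As a rough illustration of a transformer over fused audio-visual tokens with a label-correlation refinement, the sketch below refines raw AU logits with a learned AU-to-AU matrix. The feature dimensions and the correlation mechanism are assumptions, not the authors' exact design.

```python
# Illustrative multi-modal transformer for multi-label FAU detection with a
# simple label-correlation refinement (not the authors' exact architecture).
import torch
import torch.nn as nn

class MultiModalFAU(nn.Module):
    def __init__(self, d_model=128, num_aus=12):
        super().__init__()
        self.vis_proj = nn.Linear(512, d_model)   # visual feature per frame
        self.aud_proj = nn.Linear(128, d_model)   # audio feature per frame
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_aus)
        # learned AU-to-AU correlation matrix refines the raw logits
        self.corr = nn.Parameter(torch.eye(num_aus))

    def forward(self, vis, aud):
        # vis: (b, t, 512), aud: (b, t, 128); tokens from both modalities
        tokens = torch.cat([self.vis_proj(vis), self.aud_proj(aud)], dim=1)
        logits = self.head(self.encoder(tokens).mean(dim=1))
        return logits @ self.corr   # correlation-refined AU logits

out = MultiModalFAU()(torch.randn(2, 8, 512), torch.randn(2, 8, 128))
```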

Micro-expression Action Unit Detection with Spatial and Channel Attention

Yante Li, Xiaohua Huang, Guoying Zhao
2021 Neurocomputing  
Action Unit (AU) detection plays an important role in facial behaviour analysis. In the literature, AU detection has been extensively researched for macro-expressions.  ...  as spatial and channel attentions, respectively.  ...  YKJ201982), the Jiangsu joint research project of the Sino-foreign cooperative education platform and the Technology Innovation Project of Nanjing for Overseas Scientists.  ... 
doi:10.1016/j.neucom.2021.01.032 fatcat:t2hzoeycp5ggblhduyk5yti3pe
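
Spatial and channel attention as named in this snippet is commonly realized in CBAM style: squeeze-and-excite per channel, then a convolution over pooled maps per spatial location. Below is a minimal sketch under that assumption; the paper's exact attention design may differ.

```python
# Sketch of CBAM-style channel-then-spatial attention on facial feature maps
# (illustrative; the paper's exact attention modules may differ).
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # channel attention: squeeze spatial dims, excite per channel
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        # spatial attention: one weight per location from pooled channel maps
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        ca = self.channel_mlp(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        x = x * ca                                   # channel-reweighted map
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.max(1, keepdim=True).values], dim=1)
        return x * self.spatial(pooled)              # spatially reweighted map

y = SpatialChannelAttention(64)(torch.randn(2, 64, 28, 28))
```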

Non-contact Pain Recognition from Video Sequences with Remote Physiological Measurements Prediction [article]

Ruijing Yang, Ziyu Guan, Zitong Yu, Xiaoyi Feng, Jinye Peng, Guoying Zhao
2021 arXiv   pre-print
The framework is able to capture both local and long-range dependencies via the proposed attention mechanism for the learned appearance representations, which are further enriched by temporally attended  ...  This framework is dubbed rPPG-enriched Spatio-Temporal Attention Network (rSTAN) and allows us to establish the state-of-the-art performance of non-contact pain recognition on publicly available pain databases  ...  Acknowledgments We thank Wei Peng, Youqi Zhang, Henglin Shi as well as Long Chen for their help, support and valuable discussions on this project. This  ... 
arXiv:2105.08822v2 fatcat:s6ujvgwr2rguhffsqxwk3xx33e
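
The rPPG signal referenced here is classically approximated by averaging the green channel over a facial region and band-pass filtering around plausible heart rates. The sketch below shows that textbook baseline, not the rSTAN network; the 0.7-4 Hz band is a conventional choice.

```python
# Classic green-channel rPPG baseline: average the green channel over a face
# ROI per frame, then band-pass around physiological heart rates.
import numpy as np
from scipy.signal import butter, filtfilt

def rppg_green(frames: np.ndarray, fps: float) -> np.ndarray:
    """frames: (T, H, W, 3) uint8 RGB face crops; returns a pulse signal."""
    green = frames[..., 1].reshape(len(frames), -1).mean(axis=1)
    green = (green - green.mean()) / (green.std() + 1e-8)
    # pass 0.7-4 Hz (~42-240 bpm), a common physiological range
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    return filtfilt(b, a, green)

pulse = rppg_green(np.random.randint(0, 255, (300, 64, 64, 3), np.uint8), fps=30.0)
```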

Estimating Blink Probability for Highlight Detection in Figure Skating Videos [article]

Tamami Nakano, Atsuya Sakata, Akihiro Kishimoto
2020 arXiv   pre-print
It is thus imperative to detect highlight scenes that better match human interest with high temporal accuracy.  ...  Highlight detection in sports videos has a broad viewership and huge commercial potential.  ...  infrastructure technologies harmonized with societies" #11027, awarded to TN from the Japan Science and Technology Agency (JST), Japan.  ... 
arXiv:2007.01089v1 fatcat:kpq4ihk4d5ef7ahngcdsw7uy6i
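
The paper learns blink probability from video; as a simple geometric point of reference, the classic eye-aspect-ratio (EAR) heuristic flags a blink when the eye outline momentarily flattens. The 0.2 threshold below is a commonly quoted default, not a value from this paper.

```python
# Eye-aspect-ratio (EAR) blink heuristic, a classic geometric baseline
# (not the learned blink-probability model described above).
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) landmarks ordered as in the common 68-point convention."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye-corner distance
    return (v1 + v2) / (2.0 * h)

def is_blinking(eye: np.ndarray, threshold: float = 0.2) -> bool:
    # EAR drops sharply while the eye is closed
    return eye_aspect_ratio(eye) < threshold
```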

G3AN: Disentangling Appearance and Motion for Video Generation [article]

Yaohui Wang, Piotr Bilinski, Francois Bremond, Antitza Dantcheva
2020 arXiv   pre-print
as the Weizmann and UCF101 datasets on human action.  ...  To tackle this challenge, we introduce G^3AN, a novel spatio-temporal generative model, which seeks to capture the distribution of high-dimensional video data and to model appearance and motion in disentangled  ...  The feature maps F_{S_n}, F_{T_n} and F_{V_n} represent inputs for the following G^3_{n+1} module. Factorized spatio-temporal Self-Attention (F-SA).  ... 
arXiv:1912.05523v3 fatcat:6z6dkivg7nbmlk5vbstastthvq
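
Factorized spatio-temporal self-attention (F-SA) is typically read as attention over spatial positions within each frame followed by attention across time at each position. Below is a minimal sketch under that reading; G^3AN's actual F-SA details and dimensions are not reproduced here.

```python
# Sketch of factorized spatio-temporal self-attention: attend over spatial
# positions per frame, then over time per position (illustrative reading).
import torch
import torch.nn as nn

class FactorizedSTAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (batch, time, positions, dim) video feature tokens
        b, t, p, d = x.shape
        s = x.reshape(b * t, p, d)                    # attend within each frame
        s, _ = self.spatial(s, s, s)
        s = s.reshape(b, t, p, d).transpose(1, 2).reshape(b * p, t, d)
        o, _ = self.temporal(s, s, s)                 # attend across time
        return o.reshape(b, p, t, d).transpose(1, 2)  # back to (b, t, p, d)

y = FactorizedSTAttention()(torch.randn(2, 8, 16, 64))
```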

Compound emotion recognition of Autistic children during Meltdown Crisis based on deep spatio-temporal analysis of facial geometric features

Salma Kammoun Jarraya, Marwa Masmoudi, Mohamed Hammami
2020 IEEE Access  
INDEX TERMS Autism, deep spatio-temporal features, meltdown crisis, facial expressions, compound emotions.  ...  Certainly, the indications of Meltdown are linked to abnormal facial expressions related to compound emotions.  ...  Shots of different facial expressions were taken of 230 human subjects. Furthermore, a manual examination based on the action coding system was introduced to reveal 26 action units.  ... 
doi:10.1109/access.2020.2986654 fatcat:sgw4id5wcjazrd22azpq5niile

Lecture quality assessment based on the audience reactions using machine learning and neural networks

Elahe Mohammadreza, Reza Safabakhsh
2021 Computers and Education: Artificial Intelligence  
Four methods are presented that extract spatio-temporal features from the videos and classify them with different classifiers to measure the audience's attention and, consequently, the lecture quality  ...  In this paper, we try to solve this problem with the help of machine learning and neural networks. A dataset of lectures in real classrooms is collected.  ...  Acknowledgements This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.  ... 
doi:10.1016/j.caeai.2021.100022 fatcat:ksubb4lyd5h75p37en3w4x64zy

On pain assessment from facial videos using spatio-temporal local descriptors

Ruijing Yang, Shujun Tong, Miguel Bordallo, Elhocine Boutellaa, Jinye Peng, Xiaoyi Feng, Abdenour Hadid
2016 2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)  
Automatically recognizing pain from spontaneous facial expressions has attracted increasing attention, since it can provide a direct and relatively objective indication of the pain experience.  ...  In this context, this paper investigates and quantifies for the first time the role of the spatio-temporal information in pain assessment by comparing the performance of several baseline local descriptors  ...  The Facial Action Coding System (FACS) [12] is used to describe the correspondence between different facial muscle movements and facial expressions by means of 44 independent action units (AUs)  ... 
doi:10.1109/ipta.2016.7820930 dblp:conf/ipta/YangTLBPFH16 fatcat:zo33ouns4ba6bftuyb6icepzua
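
A representative spatio-temporal local descriptor from this family is LBP-TOP: local binary pattern histograms computed on the XY, XT, and YT planes of a gray video cube and concatenated. The sketch below uses a single middle slice per plane for brevity, whereas full LBP-TOP aggregates histograms over all slices; the parameters are illustrative.

```python
# LBP-TOP-style descriptor sketch: LBP histograms on the three orthogonal
# planes of a video cube. Full LBP-TOP averages over all slices per plane;
# this simplified version uses only the middle slice of each.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top(video: np.ndarray, p: int = 8, r: int = 1) -> np.ndarray:
    """video: (T, H, W) grayscale cube; returns the concatenated histogram."""
    t, h, w = video.shape
    planes = [video[t // 2],           # XY plane at the temporal midpoint
              video[:, h // 2, :],     # XT plane at the middle row
              video[:, :, w // 2]]     # YT plane at the middle column
    hists = []
    for plane in planes:
        codes = local_binary_pattern(plane, p, r, method="uniform")
        hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
        hists.append(hist)
    return np.concatenate(hists)

cube = (np.random.rand(30, 64, 64) * 255).astype(np.uint8)
desc = lbp_top(cube)   # 30-dimensional descriptor for p=8
```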

Combining Sequential Geometry and Texture Features for Distinguishing Genuine and Deceptive Emotions

Liandong Li, Tadas Baltrusaitis, Bo Sun, Louis-Philippe Morency
2017 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)  
To utilize the temporal information, we introduce a temporal attention gated model for this emotion recognition task.  ...  Compared to texture features, which describe the whole face area, the facial landmark sequences may also indicate the temporal changes of the face; thus we utilize them by encoding the feature sequences in an unsupervised manner  ...  [3] developed continuous conditional random fields for facial action unit detection. Liu et al. [16] developed spatio-temporal manifold learning for dynamic facial expression recognition.  ... 
doi:10.1109/iccvw.2017.372 dblp:conf/iccvw/LiBSM17 fatcat:nzgb2ztd2vhvnkssdk2k2nadey
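
Temporal attention gating can be sketched as a scalar gate per time step that weights recurrent states before pooling, as below. This is a hypothetical stand-in for the model referenced in the snippet; the flattened 68-landmark input size and two-class head are assumptions.

```python
# Minimal sketch of temporal attention over a landmark-sequence encoding:
# a scalar gate per time step weights the recurrent states before pooling.
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    def __init__(self, in_dim=136, hidden=64, num_classes=2):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.gate = nn.Linear(hidden, 1)      # attention logit per time step
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, landmarks):
        # landmarks: (batch, time, 136) flattened 68-point (x, y) sequences
        h, _ = self.rnn(landmarks)                            # (b, t, hidden)
        w = torch.softmax(self.gate(h).squeeze(-1), dim=1)    # (b, t) gates
        pooled = (h * w.unsqueeze(-1)).sum(dim=1)             # gated pooling
        return self.head(pooled)              # e.g. genuine vs. deceptive logits

out = TemporalAttentionPool()(torch.randn(2, 20, 136))
```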

Automatic Analysis of Facial Affect: A Survey of Registration, Representation, and Recognition

Evangelos Sariyanidi, Hatice Gunes, Andrea Cavallaro
2015 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Automatic affect analysis has attracted great interest in various contexts including the recognition of action units and basic or non-basic emotions.  ...  This survey allows us to identify open issues and to define future directions for designing real-world affect recognition systems.  ...  The former set of systems usually relies on the Facial Action Coding System (FACS) [38]. FACS consists of facial Action Units (AUs), which are codes that describe certain facial configurations (e.g.  ... 
doi:10.1109/tpami.2014.2366127 pmid:26357337 fatcat:5uv4jaqu4nhihnkwcpqimvylye

Automatic Analysis of Facial Actions: A Survey

Brais Martinez, Michel F. Valstar, Bihan Jiang, Maja Pantic
2017 IEEE Transactions on Affective Computing  
As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has recently received significant attention.  ...  , AU recognition, AU temporal segment detection, AU intensity estimation 1949-3045 (c)  ...  The revision specifies 32 atomic facial muscle actions, named Action Units (AUs), and 14 additional Action Descriptors (ADs) that account for head pose, gaze direction, and miscellaneous actions such as  ... 
doi:10.1109/taffc.2017.2731763 fatcat:6qp6zg5rrrgwrladsnwmyeqbq4

Graph-based Facial Affect Analysis: A Review [article]

Yang Liu, Xingming Zhang, Yante Li, Jinzhao Zhou, Xin Li, Guoying Zhao
2022 arXiv   pre-print
For the relational reasoning in graph-based FAA, existing studies are categorized according to their non-deep or deep learning methods, emphasizing the latest graph neural networks.  ...  As one of the most important affective signals, facial affect analysis (FAA) is essential for developing human-computer interaction systems.  ...  ACKNOWLEDGMENTS The authors would like to thank Muzammil Behzad and Tuomas Varanka for providing materials and suggestions for the figures used in this paper.  ... 
arXiv:2103.15599v6 fatcat:o2r6wi7qtzdcnbhicm45yvclzy
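
Graph-based FAA methods commonly operate on a facial-landmark graph; one plain graph-convolution step (mean-aggregate neighbors, then project) looks like the sketch below. The 68-node graph and toy adjacency are illustrative; the surveyed methods vary widely in graph construction and propagation rules.

```python
# One Kipf-&-Welling-style graph-convolution step over a facial-landmark
# graph: nodes are landmarks, edges encode facial structure (illustrative).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (nodes, in_dim), adj: (nodes, nodes) with self-loops included
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(self.lin(adj @ x / deg))   # mean-aggregate, project

n = 68                                # e.g. 68 facial landmarks as graph nodes
adj = torch.eye(n)                    # self-loops; add structural edges as needed
adj[0, 1] = adj[1, 0] = 1.0           # toy edge between two landmarks
feats = GCNLayer(2, 16)(torch.randn(n, 2), adj)   # (68, 16) node features
```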

FusionSense: Emotion Classification Using Feature Fusion of Multimodal Data and Deep Learning in a Brain-Inspired Spiking Neural Network

Clarence Tan, Gerardo Ceballos, Nikola Kasabov, Narayan Puthanmadam Subramaniyam
2020 Sensors  
SNNs have been shown to handle spatio-temporal data, which is essentially the nature of the data encountered in the emotion recognition problem, in an efficient manner.  ...  Several studies have utilized state-of-the-art deep learning methods and combined physiological signals, such as the electrocardiogram (ECG), electroencephalogram (EEG), and skin temperature, along with facial  ...  Based on the Facial Action Coding System (FACS), which originally described 44 single action units (AUs), including head and eye movements, with each action unit linked with an independent motion on the  ... 
doi:10.3390/s20185328 pmid:32957655 pmcid:PMC7571195 fatcat:hasiuoax3fb5hhyu5nriad3gve
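
The basic unit of the spiking networks discussed in this entry is the leaky integrate-and-fire neuron; one Euler update of the membrane potential is sketched below. The time constant, threshold, and input current are illustrative, not values from the paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron update, the basic unit of
# spiking neural networks; constants here are illustrative.
import numpy as np

def lif_step(v, i_in, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of membrane potential v driven by input current i_in."""
    v = v + dt / tau * (-v + i_in)     # leak toward 0, integrate the input
    spikes = v >= v_thresh             # boolean spike mask
    v = np.where(spikes, v_reset, v)   # reset neurons that fired
    return v, spikes

v = np.zeros(4)                        # four neurons at rest
for _ in range(50):
    v, s = lif_step(v, i_in=np.full(4, 1.5))   # constant drive elicits spikes
```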
Showing results 1 — 15 out of 1,867 results