
Differential video coding of face and gesture events in presentation videos

Robin Tan, James W. Davis
2004 Computer Vision and Image Understanding  
The detected face/hand regions and gesture events in the video are then encoded at higher resolution than the remaining lower-resolution background.  ...  In this research, we aim to provide a better compression of presentation videos (e.g., lectures).  ...  In our research, we are interested in detecting the presence of a person (face and hands) and the key hand-gesture events in presentation videos to provide a saliency map for differential encoding of the  ... 
doi:10.1016/j.cviu.2004.02.008 fatcat:evihaw5loffy3em3paqse5nc6u

Seeing Touches Early in Life

Margaret Addabbo, Elena Longhi, Nadia Bolognini, Irene Senna, Paolo Tagliabue, Viola Macchi Cassia, Chiara Turati, Marcello Costantini
2015 PLoS ONE  
Looking times and orienting responses were measured in a visual preference task, in which participants were simultaneously presented with two videos depicting a touching and a no-touching gesture involving  ...  In Experiment 1, 2-day-old newborns and 3-month-old infants viewed two videos: in one video a moving hand touched a static face, in the other the moving hand stopped before touching it.  ...  Acknowledgments We thank all the babies who participated in this study, their parents and the staff at the Neonatal Ward at the Hospital San Gerardo, Monza. We also thank Dr. V. Gaddi, C.  ... 
doi:10.1371/journal.pone.0134549 pmid:26366563 pmcid:PMC4569186 fatcat:zmzifhdgdrgldcyfovodvm7s74

The quantification of gesture–speech synchrony: A tutorial and validation of multimodal data acquisition using device-based and video-based motion tracking

Wim Pouw, James P. Trujillo, James A. Dixon
2019 Behavior Research Methods  
We further introduce and compare, using an actual gesture dataset (392 gesture events), the performance of two video-based motion-tracking methods (deep learning vs. pixel change) against a high-performance  ...  In the present article, we provide a tutorial on more efficient methods to quantify the temporal properties of gesture kinematics, in which we focus on common challenges and possible solutions that come  ...  A tutorial has been held at GESPIN2019 conference in Paderborn on the basis of this paper. The tutorial was recorded and can be viewed at https://osf.io/rxb8j/.  ... 
doi:10.3758/s13428-019-01271-9 pmid:31659689 fatcat:jvik6kriqbbbffzisahbuzhfh4

Speeding up the detection of non-iconic and iconic gestures (SPUDNIG): A toolkit for the automatic detection of hand movements and gestures in video data

Jordy Ripperda, Linda Drijvers, Judith Holler
2020 Behavior Research Methods  
To facilitate this process, we here present SPUDNIG (SPeeding Up the Detection of Non-iconic and Iconic Gestures), a tool to automatize the detection and annotation of hand movements in video data.  ...  In human face-to-face communication, speech is frequently accompanied by visual signals, especially communicative hand gestures.  ...  We thank Katharina Menn, Marlijn ter Bekke, Naomi Nota, Mareike Geiger, and Chloe Berry for assistance with reliability coding.  ... 
doi:10.3758/s13428-020-01350-2 pmid:31974805 fatcat:wrah435vlfhjjn7viwidgcknbm

Real Time Multimodal Emotion Recognition System using Facial Landmarks and Hand over Face Gestures

Mahesh Krishnananda Prabhu, Dinesh Babu Jayagopi
2017 International Journal of Machine Learning and Computing  
In this paper we propose a method for building a Multimodal Emotion Recognition System (MERS), which combines mainly face cues and hand-over-face gestures and works in near real time with an average frame  ...  Index Terms-Hand-over-face gesture, facial landmark, histogram of oriented gradient, space-time interest points.  ...  For hand over face gesture first we find out if there is a hand occlusion or not by using some of the coding descriptors mentioned in [7] and then classify the gestures based on certain hypothesis.  ... 
doi:10.18178/ijmlc.2017.7.2.615 fatcat:umslkxkb2ze2pd27polo7copte

Automatic Analysis of Naturalistic Hand-Over-Face Gestures

Marwa Mahmoud, Tadas Baltrušaitis, Peter Robinson
2016 ACM transactions on interactive intelligent systems (TiiS)  
In this paper, we present an analysis of automatic detection and classification of hand-over-face gestures.  ...  We detect hand-over-face occlusions and classify hand-over-face gesture descriptors in videos of natural expressions using multi-modal fusion of different state-of-the-art spatial and spatio-temporal features  ...  We would like also to thank Yousef Jameel and Qualcomm for providing funding as well.  ... 
doi:10.1145/2946796 fatcat:mnbbc3cl4nbklcwuqcmfpzuq7m

Automatic Detection of Naturalistic Hand-over-Face Gesture Descriptors

Marwa M. Mahmoud, Tadas Baltrušaitis, Peter Robinson
2014 Proceedings of the 16th International Conference on Multimodal Interaction - ICMI '14  
In this paper, we detect hand-over-face occlusions and classify hand-over-face gesture descriptors in videos of natural expressions using multi-modal fusion of different state-of-the-art spatial and spatio-temporal  ...  To our knowledge, this work is the first attempt to automatically detect and classify hand-over-face gestures in natural expressions.  ...  We present the details of gesture coding and dataset used in Section 3.  ... 
doi:10.1145/2663204.2663258 dblp:conf/icmi/MahmoudB014 fatcat:a2wqdabuxffy5pisutsc56phhe

Differential cerebral activation during observation of expressive gestures and motor acts

M. Lotze, U. Heymans, N. Birbaumer, R. Veit, M. Erb, H. Flor, U. Halsband
2006 Neuropsychologia  
Our data suggest that both the VLPFC and the STS code for differential emotional valence during the observation of expressive gestures.  ...  We compared brain activation involved in the observation of isolated right hand movements (e.g. twisting a lid), body-referred movements (e.g. brushing teeth) and expressive gestures (e.g. threatening)  ...  Acknowledgements We thank Martin Kircher for help with the computerized video presentation in the scanner. The study was supported by the "Deutsche Forschungsgemeinschaft".  ... 
doi:10.1016/j.neuropsychologia.2006.03.016 pmid:16730755 fatcat:7lxtvy6xa5hh7aj6sdgzngmtyq

The magic of storytelling: Does storytelling through videos improve EFL students' oral performance?

Tgk Maya Silviyanti, Diana Achmad, Fathimath Shaheema, Nurul Inayah
2022 Studies in English Language and Education  
This shows that even though the participants recorded their performance, and there was no audience watching them directly, they still faced barriers and a lack of confidence when presenting the storytelling  ...  The students performed retelling of narratives such as fables, legends, myths, and fairy tales using their smartphones and video recorder.  ...  The intonations of words produced differentiate written and spoken activities (Norrick, 2000) and thus make the oral presentation interesting.  ... 
doi:10.24815/siele.v9i2.23259 fatcat:475q7suopnh75jrlbdoxbt4yre

Imitation of Non-Speech Oral Gestures by 8-Month-Old Infants

Heidi Diepstra, Sandra E Trehub, Alice Eriks-Brophy, Pascal HHM van Lieshout
2016 Language and Speech  
This study investigates the oral gestures of 8-month-old infants in response to audiovisual presentation of lip and tongue smacks.  ...  Infants exhibited more lip gestures than tongue gestures following adult lip smacks and more tongue gestures than lip gestures following adult tongue smacks.  ...  Funding This work was supported by the National Sciences and Engineering Research Council of Canada (458145) and in part by the Canada Research Chairs Program.  ... 
doi:10.1177/0023830916647080 pmid:28326993 fatcat:q5tb6uesxfdjheej4d6ufd7gva

Gesture's body orientation modulates the N400 for visual sentences primed by gestures

Yifei He, Svenja Luell, R Muralikrishnan, Benjamin Straube, Arne Nagels
2020 Human Brain Mapping  
To address this research question, we carried out an electroencephalography (EEG) experiment presenting to participants (n = 21) videos of frontal and lateral communicative hand gestures of 5 s (e.g.,  ...  Body orientation of gesture entails social-communicative intention, and may thus influence how gestures are perceived and comprehended together with auditory speech during face-to-face communication.  ...  Therefore, given the social nature of communicative gestures, it might be hypothesized that gestures differing in social aspects, as in the case of facing versus not facing the addressee, may differentially  ... 
doi:10.1002/hbm.25166 pmid:32808721 pmcid:PMC7643362 fatcat:ml6ekh4lobgqtgdt2ght2mnd3y

What speakers do and what addressees look at: Visual attention to gestures in human interaction live and on video

Marianne Gullberg, Kenneth Holmqvist
2006 Pragmatics & Cognition  
We compare a live face-to-face setting to two video conditions. In all conditions, the face dominates as a fixation target and only a minority of gestures draw fixations.  ...  The attraction force of holds is unaffected by changes in social and size parameters, suggesting a bottom-up response, whereas speaker-fixated gestures draw significantly less attention in both video conditions  ...  Acknowledgements We gratefully acknowledge the support of Birgit and Gad Rausing's Foundation for Research in the Humanities through a grant to the first author, as well as financial and technical support  ... 
doi:10.1075/pc.14.1.05gul fatcat:ts2iwklopfazzoz42mbdxoi5oe

Contextual Factors and Adaptative Multimodal Human-Computer Interaction: Multi-level Specification of Emotion and Expressivity in Embodied Conversational Agents [chapter]

Myriam Lamolle, Maurizio Mancini, Catherine Pelachaud, Sarkis Abrilian, Jean-Claude Martin, Laurence Devillers
2005 Lecture Notes in Computer Science  
In this paper we present an Embodied Conversational Agent (ECA) model able to display rich verbal and non-verbal behaviors.  ...  We explain how the multi-level annotation of a corpus of emotionally rich TV video interviews can provide context-dependent knowledge as input for the specification of the ECA (e.g. which contextual cues  ...  We are very grateful to Bjoern Hartmann for implementing the expressive behavior module and to Vincent Maya for his help in this project.  ... 
doi:10.1007/11508373_17 fatcat:363bu2qhmvcffawrokmea32zzu

A survey on video classification using action recognition

Caleb Andrew, Rex Fiona
2018 International Journal of Engineering & Technology  
The growth in multimedia technology has resulted in producing a variety of videos every day.  ...  Various techniques used for video classification such as Multiple Instance Learning (MIL), Conditional Random Fields (CRFs) and classifying based on the action and gesture are studied.  ...  Gesture recognition Gesture recognition is the process of interpreting a human motion mathematically by the use of computer devices. Gestures originate from the human body, mostly from the face and hands.  ... 
doi:10.14419/ijet.v7i2.31.13404 fatcat:p6ursb46fjd33dwpcwpjr524eu

Discourse management

Zeynep Azar, Aslı Özyürek
2016 Dutch Journal of Applied Linguistics  
Hence we provide supportive evidence for the reverse correlation between the accessibility of a discourse referent and its coding in speech and gesture.  ...  We find that Turkish speakers mostly use fuller forms to code subject referents in re-introduction context and the null form in maintenance context and they point to gesture space for referents more in  ...  Aylin Küntay for providing room facilities and access to Turkish speakers at Koç University in Istanbul, Turkey. We thank Dr. Pamela Perniss for the stimulus material.  ... 
doi:10.1075/dujal.4.2.06aza fatcat:ag364xp2sndbzfhvlgt5iuiimi
Showing results 1–15 of 18,377