Biometric liveness checking using multimodal fuzzy fusion

Girija Chetty
2010 International Conference on Fuzzy Systems  
Liveness checking can detect fraudulent impostor attacks on security systems and ensure that biometric cues are acquired from a live person who is actually present at the time of capture for authenticating ... The proposed fuzzy fusion of audio-visual features is based on mutual dependency models, which extract the spatio-temporal correlation between face and voice dynamics during speech production. Performance ... The joint analysis of co-occurring acoustic and visual speech signals during speech production can improve the robustness of automatic recognition systems [2, 3]. ...
doi:10.1109/fuzzy.2010.5584864 dblp:conf/fuzzIEEE/Chetty10 fatcat:sgex3otadjcwjfwkhwytlw5tiy
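
The snippet above names a concrete technique: fuzzy fusion of audio-visual scores, weighted by the mutual dependency between face and voice dynamics. As a hedged illustration of that idea (not Chetty's actual model; the features, membership rule, and toy data below are all our assumptions), a minimal NumPy sketch:

    import numpy as np

    def dependency_score(audio_feat, visual_feat):
        # Toy stand-in for the paper's mutual dependency models: mean
        # absolute correlation between audio and visual feature dimensions,
        # computed over the frames of one utterance.
        a = (audio_feat - audio_feat.mean(0)) / (audio_feat.std(0) + 1e-9)
        v = (visual_feat - visual_feat.mean(0)) / (visual_feat.std(0) + 1e-9)
        corr = a.T @ v / len(a)            # audio-dim x visual-dim correlations
        return float(np.abs(corr).mean())  # roughly in [0, 1]

    def fuzzy_fuse(audio_score, visual_score, dependency):
        # Illustrative fuzzy rule: high audio-visual dependency suggests a
        # live, synchronized speaking face, so both matcher scores are
        # trusted equally; low dependency falls back to the stronger cue.
        w_live = float(np.clip(dependency, 0.0, 1.0))
        return (w_live * 0.5 * (audio_score + visual_score)
                + (1.0 - w_live) * max(audio_score, visual_score))

    rng = np.random.default_rng(0)
    audio = rng.standard_normal((120, 13))                    # e.g. MFCC frames
    lips = 0.6 * audio[:, :4] + 0.4 * rng.standard_normal((120, 4))
    print(fuzzy_fuse(0.8, 0.7, dependency_score(audio, lips)))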

Multimodal Fusion for Robust Identity Authentication: Role of Liveness Checks [chapter]

Girija Chetty, Emdad Hossain
2011 Advanced Biometric Technologies  
It is possible to detect the semantic correlation between visual faces and their associated speech based on the LSA technique. ... The joint analysis of acoustic and visual speech can improve the robustness of automatic speech recognition systems (Gurbuz et al., 2002). ... Multimodal Fusion for Robust Identity Authentication: Role of Liveness Checks, Advanced Biometric Technologies, Dr. ...
doi:10.5772/18131 fatcat:3bbyaq7cfbgu3fomn3isenflne
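
The snippet mentions detecting the semantic correlation between faces and speech with the LSA technique. A minimal latent-semantic-style sketch under our own assumptions (frame-synchronized features, rank k, and a joint-variance criterion that is not necessarily the chapter's exact formulation):

    import numpy as np

    def lsa_correspondence(audio_feats, visual_feats, k=3):
        # Stack the two streams frame-by-frame, take an SVD of the joint
        # matrix, and report the fraction of variance captured by the top-k
        # latent dimensions: genuinely co-occurring face/voice pairs tend to
        # concentrate variance in a few shared dimensions (toy criterion).
        joint = np.hstack([audio_feats, visual_feats])
        joint = joint - joint.mean(axis=0)
        s = np.linalg.svd(joint, compute_uv=False)   # singular values
        energy = s ** 2
        return float(energy[:k].sum() / energy.sum())

    rng = np.random.default_rng(1)
    audio = rng.standard_normal((200, 12))
    lips_synced = audio[:, :5] + 0.3 * rng.standard_normal((200, 5))
    lips_random = rng.standard_normal((200, 5))
    print(lsa_correspondence(audio, lips_synced))   # higher
    print(lsa_correspondence(audio, lips_random))   # lower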

Audiovisual Speech Synchrony Measure: Application to Biometrics

Hervé Bredin, Gérard Chollet
2007 EURASIP Journal on Advances in Signal Processing  
This paper reviews recent works in the field of audiovisual speech, and more specifically techniques developed to measure the level of correspondence between audio and visual speech. ... It overviews the most common audio and visual speech front-end processing, transformations performed on audio, visual, or joint audiovisual feature spaces, and the actual measure of correspondence between ... ACKNOWLEDGMENT The research leading to this paper was supported by the European Commission under Contract FP6-027026, Knowledge Space of semantic inference for automatic annotation and retrieval of multimedia ...
doi:10.1155/2007/70186 fatcat:nnyxpklkjjgh7cdtjt4qcg532e
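
Canonical correlation analysis (CCA) is among the correspondence measures this review covers. A self-contained sketch of a CCA-based synchrony score between frame-synchronized audio and visual feature streams; the ridge regularizer and the synthetic MFCC/lip data are our assumptions:

    import numpy as np

    def cca_synchrony(x, y, reg=1e-6):
        # First canonical correlation between x (frames x da) and
        # y (frames x dv): the squared canonical correlations are the
        # eigenvalues of Cxx^-1 Cxy Cyy^-1 Cyx.
        x = x - x.mean(0)
        y = y - y.mean(0)
        n = len(x)
        cxx = x.T @ x / n + reg * np.eye(x.shape[1])
        cyy = y.T @ y / n + reg * np.eye(y.shape[1])
        cxy = x.T @ y / n
        m = np.linalg.solve(cxx, cxy) @ np.linalg.solve(cyy, cxy.T)
        rho2 = np.linalg.eigvals(m).real.clip(0.0, 1.0)
        return float(np.sqrt(rho2.max()))

    rng = np.random.default_rng(2)
    mfcc = rng.standard_normal((300, 13))
    lips = mfcc[:, :2] @ rng.standard_normal((2, 6)) + 0.5 * rng.standard_normal((300, 6))
    print(cca_synchrony(mfcc, lips))                           # high: coupled streams
    print(cca_synchrony(mfcc, rng.standard_normal((300, 6))))  # lower: independent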

A virtual reality-based method for examining audiovisual prosody perception [article]

Hartmut Meister, Isa Samira Winter, Moritz Waechtler, Pascale Sandmann, Khaled Abdellatif
2022 arXiv pre-print
The purpose of this report is to present a method for examining audiovisual prosody using virtual reality. ... We show that animations based on a virtual human provide motion cues similar to those obtained from video recordings of a real talker. ... Acknowledgements Supported by the Deutsche Forschungsgemeinschaft (ME 2751/4-1) to HM and (SA 3615/1-1 and SA 3615/2-1) to PS. ...
arXiv:2209.05745v1 fatcat:pwshk6jtabbhtlt5foskqgvksa

The development of face perception in infancy: Intersensory interference and unimodal visual facilitation

Lorraine E. Bahrick, Robert Lickliter, Irina Castellanos
2013 Developmental Psychology  
The present studies tested the prediction, generated from the intersensory redundancy hypothesis (IRH), that face discrimination, which relies on detection of visual featural information, would be impaired ... in the context of intersensory redundancy provided by audiovisual speech and enhanced in the absence of intersensory redundancy (unimodal visual and asynchronous audiovisual speech) in early development ... Further, 2-month-old infants were expected to show evidence of face discrimination in unimodal visual speech in the present study, given that infants of this age discriminated between two static live faces ...
doi:10.1037/a0031238 pmid:23244407 pmcid:PMC3975831 fatcat:wl4csx7arzfafbnfphdyepq54u

Multi-Level Liveness Verification for Face-Voice Biometric Authentication

Girija Chetty, Michael Wagner
2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference  
In this paper we present the details of the multilevel liveness verification (MLLV) framework proposed for realizing a secure face-voice biometric authentication system that can thwart different types ... The proposed MLLV framework, based on novel feature extraction and multimodal fusion approaches, uncovers the static and dynamic relationships between voice and face information in speaking faces, and ... The visual manifestation of speech in speaking faces, for example the synchrony between the speech signal and the facial movements associated with an utterance, provides powerful cues for robust authentication ...
doi:10.1109/bcc.2006.4341615 fatcat:6fetgeual5ezbo3iwxoaxs5lxu
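
As a hedged sketch of how a multi-level liveness check can be organized (the paper's actual levels, features, and thresholds differ; everything below is illustrative), the cascade rejects a claim if the face matcher, the speaker verifier, or a face-voice synchrony test fails, so a still photo, a replayed recording, or a mismatched face/voice pair is caught at some level:

    from dataclasses import dataclass

    @dataclass
    class Claim:
        face_score: float   # face matcher output, assumed in [0, 1]
        voice_score: float  # speaker verifier output, assumed in [0, 1]
        sync_score: float   # audio-visual synchrony measure, assumed in [0, 1]

    def mllv_decision(c: Claim, t_face=0.5, t_voice=0.5, t_sync=0.4):
        # Level 1: identity of the face; level 2: identity of the voice;
        # level 3: liveness via face-voice synchrony (illustrative ordering
        # and thresholds, not the paper's values).
        if c.face_score < t_face:
            return "reject: face mismatch"
        if c.voice_score < t_voice:
            return "reject: voice mismatch"
        if c.sync_score < t_sync:
            return "reject: face and voice out of sync (possible replay)"
        return "accept"

    # A photo plus a replayed recording may pass levels 1-2 but fail level 3.
    print(mllv_decision(Claim(face_score=0.9, voice_score=0.8, sync_score=0.1)))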

Multimodal Analysis of Laughter for an Interactive System [chapter]

Jérôme Urbain, Radoslaw Niewiadomski, Maurizio Mancini, Harry Griffin, Hüseyin Çakmak, Laurent Ach, Gualtiero Volpe
2013 Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering  
In this paper, we focus on the development of new methods to detect and analyze laughter, in order to enhance human-computer interactions. ... Then, we propose the use of two new modalities, namely body movements and respiration, to enrich the audiovisual laughter detection and classification phase. ... Acknowledgment The research leading to these results has received funding from the European ...
doi:10.1007/978-3-319-03892-6_22 fatcat:unm4li2wxzcl3avtramijr6s2i

Visual Influences on Auditory Behavioral, Neural, and Perceptual Processes: A Review

Collins Opoku-Baah, Adriana M. Schoenhaut, Sarah G. Vassall, David A. Tovar, Ramnarayan Ramachandran, Mark T. Wallace
2021 Journal of the Association for Research in Otolaryngology  
... nature of audiovisual interactions, and on the limitations of current imaging-based approaches. ... The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and ... Neuroanatomical and Electrophysiological Evidence of Visual Influences on Auditory Processes in Animal Models ... Methods to Study Audiovisual Interactions in the Auditory System ... Multisensory integration ...
doi:10.1007/s10162-021-00789-0 pmid:34014416 pmcid:PMC8329114 fatcat:whkrn47rlvhztf6ndh2pi3hpuy

Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

Eswen Fava, Rachel Hull, Heather Bortfeld
2014 Brain Sciences  
Thus far, tracking the developmental trajectory of this tuning process has focused primarily on auditory speech alone, generally using isolated sounds. ... Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). ... Acknowledgments This work was supported by the National Institutes of Health R01DC010075. ...
doi:10.3390/brainsci4030471 pmid:25116572 pmcid:PMC4194034 fatcat:n5rpskfee5bvrheyoss3yvfycq

Multimedia content processing through cross-modal association

Dongge Li, Nevenka Dimitrova, Mingkun Li, Ishwar K. Sethi
2003 Proceedings of the eleventh ACM international conference on Multimedia - MULTIMEDIA '03  
Among them, CFA gives the best retrieval performance. Compared to CCA, CFA provides better noise resistance and imposes no constraints on the features to be processed. Finally, this paper addresses the use of cross-modal association to detect talking heads. ... The focus of existing research in this area has been predominantly on the use of fusion technology. ... These include audiovisual speech recognition, cross-modal compression, and audiovisual animation. ...
doi:10.1145/957013.957143 dblp:conf/mm/LiDLS03 fatcat:rs5i65fdmfhkbnxx2ibm5wsbu4
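
The abstract contrasts cross-modal factor analysis (CFA) with CCA. A compact sketch in the spirit of that comparison: CFA derives coupled orthonormal projections from the SVD of the cross-modality matrix X^T Y, avoiding the per-modality covariance inversions that make CCA noise-sensitive (the subspace dimension k and the synthetic talking-head data are our assumptions):

    import numpy as np

    def cfa(x, y, k=4):
        # Orthonormal A, B minimizing ||XA - YB||_F come from the SVD of
        # X^T Y; no covariance inverses are needed, unlike CCA.
        x = x - x.mean(0)
        y = y - y.mean(0)
        u, s, vt = np.linalg.svd(x.T @ y)
        return x @ u[:, :k], y @ vt[:k].T   # coupled k-dim representations

    rng = np.random.default_rng(3)
    audio = rng.standard_normal((500, 20))
    video = audio[:, :8] @ rng.standard_normal((8, 15)) + 0.8 * rng.standard_normal((500, 15))
    xa, yb = cfa(audio, video)
    # Strong per-component correlation of the coupled representations is the
    # kind of evidence used to flag a talking head.
    print([round(float(np.corrcoef(xa[:, i], yb[:, i])[0, 1]), 2) for i in range(4)])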

Identifying Cortical Lateralization of Speech Processing in Infants Using Near-Infrared Spectroscopy

Heather Bortfeld, Eswen Fava, David A. Boas
2009 Developmental Neuropsychology  
Older infants (aged 6-9 months) were allowed to sit on their caretakers' laps during stimulus presentation to determine relative differences in focal activity in the temporal region of the brain during ... This work suggests that infants' ability to perceive speech actually begins in the womb and progresses dramatically across the first year of life (DeCasper & Fifer, 1980; Werker & Tees, 1984). ... agreed to have their infants participate in the research. ...
doi:10.1080/87565640802564481 pmid:19142766 pmcid:PMC2981820 fatcat:33shebickjdebagp3yfbwm4hxm

Survey on audiovisual emotion recognition: databases, features, and data fusion strategies

Chung-Hsien Wu, Jen-Chun Lin, Wen-Li Wei
2014 APSIPA Transactions on Signal and Information Processing  
In this paper, a survey on the theoretical and practical work offering new and broad views of the latest research in emotion recognition from bimodal information including facial and vocal expressions ... Emotion recognition is the ability to identify what people would think someone is feeling from moment to moment and understand the connection between his/her feelings and expressions. ... On the one hand, since both laughter and speech are also naturally audiovisual events, the MAHNOB Laughter audiovisual database [55] containing laughter, speech, posed laughs, speech-laughs, and other ...
doi:10.1017/atsip.2014.11 fatcat:6ujyy4sv55ezvdfbn3rt3leki4

AGE-RELATED ALTERATIONS IN AUDIOVISUAL INTEGRATION: A BRIEF OVERVIEW

Yanna Ren, Zhihan Xu, Tao Wang, Weiping Yang
2020 Psychologia  
Because dysfunctions arise in auditory sensory processing and visual sensory processing during healthy ageing, the function of audiovisual integration (AVI), the process of integrating auditory and visual ... To shed light on the basic mechanisms involved in the influence of age on AVI, this comprehensive literature review summarizes the current behavioural and neural studies on AVI in ageing and discusses ... The basic phenomenon is that detection responses are faster when signals are presented on bimodal audiovisual channels than on either the auditory or the visual channel alone, and this phenomenon is referred to ...
doi:10.2117/psysoc.2020-a002 fatcat:aqnvxqqusjeunhzky4d5tmb2mq
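
The "faster bimodal detection" this overview describes is the redundant signals effect, and ageing studies of AVI commonly test for genuine integration with Miller's race-model inequality. A small sketch of that test on synthetic reaction times (the Gaussian RT distributions and the percentile grid are our choices, not the review's data):

    import numpy as np

    def race_model_violation(rt_a, rt_v, rt_av):
        # Race model bound: P(RT <= t | AV) <= P(RT <= t | A) + P(RT <= t | V).
        # A positive maximum violation over the probed times suggests that
        # audiovisual integration, not mere statistical facilitation, sped
        # up the bimodal responses.
        grid = np.percentile(np.concatenate([rt_a, rt_v, rt_av]), np.arange(5, 100, 5))
        cdf = lambda rts, t: float(np.mean(rts <= t))
        return max(cdf(rt_av, t) - min(1.0, cdf(rt_a, t) + cdf(rt_v, t)) for t in grid)

    rng = np.random.default_rng(4)
    rt_a = rng.normal(420, 60, 200)    # unimodal auditory RTs (ms)
    rt_v = rng.normal(450, 70, 200)    # unimodal visual RTs (ms)
    rt_av = rng.normal(360, 50, 200)   # bimodal RTs, faster on average
    print(race_model_violation(rt_a, rt_v, rt_av))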

Animated virtual characters to explore audio-visual speech in controlled and naturalistic environments

Raphaël Thézé, Mehdi Ali Gadiri, Louis Albert, Antoine Provost, Anne-Lise Giraud, Pierre Mégevand
2020 Scientific Reports  
Although the McGurk effect has been widely applied to the exploration of audio-visual speech processing, it relies on isolated syllables, which severely limits the conclusions that can be drawn from the ... This audiovisual mismatch is known to induce the illusion of hearing /v/ in a proportion of trials. ... Acknowledgements This work was supported by the Swiss National Science Foundation (Grant #167836 to PM). ...
doi:10.1038/s41598-020-72375-y pmid:32968127 pmcid:PMC7511320 fatcat:mhwov3g7h5gidblwb2wocwr7jy
