
Prosody based co-analysis for continuous recognition of coverbal gestures

S. Kettebekov, M. Yeasin, R. Sharma
Proceedings of the Fourth IEEE International Conference on Multimodal Interfaces (ICMI 2002)
This paper presents a Bayesian formulation that uses a phenomenon of gesture and speech articulation for improving accuracy of automatic recognition of continuous coverbal gestures.  ...  It was found that the above co-analysis helps in detecting and disambiguating visually small gestures, which subsequently improves the rate of continuous gesture recognition.  ...
doi:10.1109/icmi.2002.1166986 dblp:conf/icmi/KettebekovYS02 fatcat:lhsaoe4ikzb4hegnwhbq3ro4pu

Prosody based audiovisual coanalysis for coverbal gesture recognition

S. Kettebekov, M. Yeasin, R. Sharma
2005 IEEE Transactions on Multimedia
Multimodal co-analysis of visual gesture and speech signals provides an attractive means of improving continuous gesture recognition.  ...  We present a computational framework for improving continuous gesture recognition based on two phenomena that capture voluntary (co-articulation) and involuntary (physiological) contributions of prosodic  ...  Discussion: Results of the continuous gesture recognition have demonstrated the effectiveness of the prosody-based co-analysis, showing a significant improvement of the continuous gesture recognition rates  ...
doi:10.1109/tmm.2004.840590 fatcat:47s5r2fyfnghnppvygz5krupae

Exploiting prosodic structuring of coverbal gesticulation

Sanshzar Kettebekov
2004 Proceedings of the 6th International Conference on Multimodal Interfaces (ICMI '04)
One of the main reasons for that is the modeling complexity of spontaneous gestures.  ...  Those types of articulatory strokes represent different communicative functions. The analysis is based on the temporal alignment of detected vocal perturbations and the concurrent hand movement.  ...  Multimodal co-analysis with speech has an attractive prospect of improving coverbal gesture classification.  ... 
doi:10.1145/1027933.1027953 dblp:conf/icmi/Kettebekov04 fatcat:papsa65lszg35cn2fvke6hwp2y

Graphical models for social behavior modeling in face-to-face interaction

Alaeddine Mihoub, Gérard Bailly, Christian Wolf, Frédéric Elisei
2016 Pattern Recognition Letters  
The challenge for this behavioral model is to generate coverbal actions (gaze, hand gestures) for the subject given his verbal productions, the current phase of the interaction and the perceived actions  ...  For this end, we present a multimodal behavioral model based on a Dynamic Bayesian Network (DBN).  ...  a robot for the generation of its coverbal behavior.  ... 
doi:10.1016/j.patrec.2016.02.005 fatcat:267o5jnx3vd6fjcnf6ynsjb3sq

A Comparison of Coverbal Gesture Use in Oral Discourse Among Speakers With Fluent and Nonfluent Aphasia

Anthony Pak-Hin Kong, Sam-Po Law, Gigi Wan-Chi Chak
2017 Journal of Speech, Language, and Hearing Research
The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed.  ...  Purpose: Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific.  ...
doi:10.1044/2017_jslhr-l-16-0093 pmid:28609510 pmcid:PMC5831092 fatcat:lgf2y5tczjegrgq7tdmnvqdrmq

The annotation of gesture and gesture/prosody synchronization in multimodal speech corpora

Giorgina Cantalini, Massimo Moneglia
2020 Journal of Speech Sciences  
Gesticulation co-occurs with speech in about 90% of the speech flow examined, and gestural arcs are synchronous with prosodic boundaries.  ...  This paper was written with the aim of highlighting the functional and structural correlations between gesticulation and prosody, focusing on gesture/prosody synchronization in spontaneous spoken Italian  ...
doi:10.20396/joss.v9i00.14956 fatcat:yvskgx3xwrgkfo5gveyifjjsyu

Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse

Anthony Pak-Hin Kong, Sam-Po Law, Watson Ka-Chun Wat, Christy Lai
2015 Journal of Communication Disorders  
As for the non-content-carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval  ...  This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of coverbal gestures, with reference to gesture forms and functions, by  ...
doi:10.1016/j.jcomdis.2015.06.007 pmid:26186256 pmcid:PMC4530578 fatcat:3u26fh4mprf2vgmxdrn25trjxm

A Coding System with Independent Annotations of Gesture Forms and Functions During Verbal Communication: Development of a Database of Speech and GEsture (DoSaGE)

Anthony Pak-Hin Kong, Sam-Po Law, Connie Ching-Yin Kwan, Christy Lai, Vivian Lam
2014 Journal of Nonverbal Behavior
About one-third of the subjects did not use any co-verbal gestures.  ...  This paper first described a recently developed Database of Speech and GEsture (DoSaGE) based on independent annotation of gesture forms and functions among 119 neurologically unimpaired right-handed native  ...
doi:10.1007/s10919-014-0200-6 pmid:25667563 pmcid:PMC4319117 fatcat:esvj5aepjrb4hla2hjmrzkx3zq

Speech-gesture driven multimodal interfaces for crisis management

R. Sharma, M. Yeasin, N. Krahntoever, I. Rauschert, Guoray Cai, I. Brewer, A.M. Maceachren, K. Sengupta
2003 Proceedings of the IEEE  
Dialogue-enabled devices, based on natural, multimodal interfaces have the potential of making a variety of information technology tools accessible during crisis management.  ...  The existing continuous gesture recognition system in the weather domain has 80% accuracy, and we expect that recognition rates as high as 90% can be achieved using prosody-based co-analysis of speech gesture  ...
doi:10.1109/jproc.2003.817145 fatcat:flbaisvreresla7wufztzpnvfq

A Framework for Emotions and Dispositions in Man-Companion Interaction [chapter]

Harald Traue, Frank Ohl, André Brechmann, Friedhelm Schwenker, Henrik Kessler, Kerstin Limbrecht, Holger Hoffmann, Stefan Scherer, Michael Kotzyba, Andreas Scheck, Steffen Walter
2013 Coverbal Synchrony in Human-Machine Interaction  
Such automated video processing systems for the recognition of facial movements are based on the gathering and classification of features (Wimmer and Radig, 2007).  ...  Psychomotor functions: gestures, body movements and attention focus. The automatic recognition of gestures generally takes place in three steps: (1) the object detection, (2) the chronologically recursive  ...
doi:10.1201/b15477-6 fatcat:2pxlpoe6bvb3ljtb4oit2od3mq

Multimodal human discourse: gesture and speech

Francis Quek, David McNeill, Robert Bryll, Susan Duncan, Xin-Feng Ma, Cemil Kirbas, Karl E. McCullough, Rashid Ansari
2002 ACM Transactions on Computer-Human Interaction  
The basis for this integration is the psycholinguistic concept of the coequal generation of gesture and speech from the same semantic intent.  ...  two independent sets of analyses on the video and audio data: video and audio analysis to extract segmentation cues, and expert transcription of the speech and gesture data by microanalyzing  ...  Wachsmuth [2000, 1999] describe a study based on a system for using coverbal iconic gestures for describing objects in the performance of an assembly task in a virtual environment.  ...
doi:10.1145/568513.568514 fatcat:nxvyn642pjbolbtmaxqrvf4pse

Gesture Salience as a Hidden Variable for Coreference Resolution and Keyframe Extraction

J. Eisenstein, R. Barzilay, R. Davis
2008 Journal of Artificial Intelligence Research
We present conditional modality fusion, a conditional hidden-variable model that learns to predict which gestures are salient for coreference resolution, the task of determining whether two noun phrases  ...  In addition, we show that the model of gesture salience learned in the context of coreference accords with human intuition, by demonstrating that gestures judged to be salient by our model can be used  ...
doi:10.1613/jair.2450 fatcat:qaula6fcjbfxrp3bwiuuo6n6py

Language is a complex adaptive system [article]

Kristine Lund, Pierluigi Basso Fossali, Audrey Mazur, Magali Ollagnier-Beldame
2022 Zenodo  
Finally, we argue for a change in vantage point regarding the search for linguistic universals.  ...  Our specific contributions include adding elements to and extending the field of application of the models proposed by others through new examples of emergence, interplay of heterogeneous elements, intrinsic  ...
doi:10.5281/zenodo.6546418 fatcat:fddaimg7x5g7lko77zifxrx5vm

A real-time framework for natural multimodal interaction with large screen displays

N. Krahnstoever, S. Kettebekov, M. Yeasin, R. Sharma
Proceedings of the Fourth IEEE International Conference on Multimodal Interfaces (ICMI 2002)
The core of the proposed framework is a principled method for combining information derived from audio and visual cues.  ...  The performance of the proposed framework has been validated through the development of several prototype systems as well as commercial applications for the retail and entertainment industry.  ...
doi:10.1109/icmi.2002.1167020 dblp:conf/icmi/KrahnstoeverKYS02 fatcat:lrstdythxzgi3bmxijhb4ohh74

Effects of regiolects on the perception of developmental foreign accent syndrome

W. Tops, S. Neimeijer, P. Mariën
2018 Journal of Neurolinguistics  
Experimental design: The main purpose of the experimental protocol was to induce multimodal communication based on speech and co-verbal gestures.  ...
doi:10.1016/j.jneuroling.2017.10.002 fatcat:evymttu5rffjdhxsou74puz5qq