3,066 Hits in 5.3 sec

Piavca: A Framework for Heterogeneous Interactions with Virtual Characters

Marco Gillies
2008 2008 IEEE Virtual Reality Conference  
Human to human interaction is highly multimodal, involving features such as verbal language, tone of voice, facial expression, gestures and gaze.  ...  It also encourages the creation of behavior out of basic building blocks, making it easy to create and configure new behaviors for novel situations.  ...  We would like to thank the funders of this work: BT plc, the European Union FET project PRESENCIA (contract number 27731) and the Empathic Avatars project funded by the UK Engineering and Physical Sciences  ... 
doi:10.1109/vr.2008.4480788 dblp:conf/vr/Gillies08 fatcat:n6wzvznivzhqnguvycsq42pnny

Piavca: a framework for heterogeneous interactions with virtual characters

Marco Gillies, Xueni Pan, Mel Slater
2010 Virtual Reality  
Human to human interaction is highly multimodal, involving features such as verbal language, tone of voice, facial expression, gestures and gaze.  ...  It also encourages the creation of behavior out of basic building blocks, making it easy to create and configure new behaviors for novel situations.  ...  We would like to thank the funders of this work: BT plc, the European Union FET project PRESENCIA (contract number 27731) and the Empathic Avatars project funded by the UK Engineering and Physical Sciences  ... 
doi:10.1007/s10055-010-0167-5 fatcat:2h2meudzcvfq5fcikwz2syskhu

Interactive editing in French Sign Language dedicated to virtual signers: requirements and challenges

Sylvie Gibet, François Lefebvre-Albaret, Ludovic Hamon, Rémi Brun, Ahmed Turki
2015 Universal Access in the Information Society  
In recent years, an emerging approach uses captured data to edit and generate Sign Language (SL) gestures.  ...  A solution is to insert the human operator in a loop for constructing new utterances, and to incorporate within the utterance's structure constraints that are derived from linguistic patterns.  ...  Section 5 highlights the key issues for building and accessing an efficient heterogeneous database containing both the captured and annotated motion.  ... 
doi:10.1007/s10209-015-0411-6 fatcat:6jyfq2ilxbgqvdlpj7xljcuxby

Hybrid inverse motion control for virtual characters interacting with sound synthesis

A. Bouënard, S. Gibet, M. M. Wanderley
2011 The Visual Computer  
Finally, we present an architecture offering an effective way to manage heterogeneous data (motion and sound parameters) and feedback (visual and sound) that influence the resulting virtual percussion performances  ...  Combining physics-based simulation with motion data is a recent approach to finely represent and modulate this motion-sound interaction, while keeping the realism and expressivity of the original captured  ...  and the Pôle de Compétitivité Images & Réseaux (France).  ... 
doi:10.1007/s00371-011-0620-9 fatcat:nukif2h22ffmjgya5f7rn57feu

MIS: Multimodal Interaction Services in a cloud perspective [article]

Patrizia Grifoni, Fernando Ferri, Maria Chiara Caschera, Arianna D'Ulizia, Mauro Mazzei
2017 arXiv   pre-print
It can offer adequate computational resources to manage the complexity implied by the use of the five senses when involved in human-machine interaction.  ...  The Web is increasingly becoming a broad software framework on which anyone can compose and use contents, software applications and services.  ...  Beyond the capability to capture the different signals from humans, there are devices able to capture signals from the environment where they are situated; therefore they can acquire contextual information  ... 
arXiv:1704.00972v1 fatcat:hld4rrlbyrbi7bii6yskvezf2m

The Medical Cyber-physical Systems Activity at EIT: A Look under the Hood

Daniel Sonntag, Sonja Zillner, Samarjit Chakraborty, Andras Lorincz, Esko Strommer, Luciano Serafini
2014 2014 IEEE 27th International Symposium on Computer-Based Medical Systems  
In this paper, we describe how we combine active and passive user input modes in clinical environments for knowledge discovery and knowledge acquisition towards decision support in clinical environments  ...  This combination for knowledge acquisition and decision support (while using machine learning techniques) has not yet been explored in clinical environments and is of specific interest because it combines  ...  The authors would like to thank the industrial and academic project partners and software engineers.  ... 
doi:10.1109/cbms.2014.83 dblp:conf/cbms/SonntagZCLSS14 fatcat:hly4pj46orhz5lc4j2dwc253r4

Adaptive Multimodal Emotion Detection Architecture for Social Robots

Juanpablo Heredia, Edmundo Lopes-Silva, Yudith Cardinale, Jose Diaz-Amado, Irvin Dongo, Wilfredo Graterol, Ana Aguilera
2022 IEEE Access  
Emotion recognition is a strategy for social robots used to implement better Human-Robot Interaction and model their social behaviour.  ...  Results reveal that our approach is able to adapt to the quality and presence of modalities.  ...  For example, from the audio, the content of the speech and voice modulation can be analyzed; from images, it is possible to analyze human faces, human postures, and the context to detect the emotion.  ... 
doi:10.1109/access.2022.3149214 fatcat:ehlptl44yneufdthidrkgjecvm

Simulating Multi-Scale Pulmonary Vascular Function by Coupling Computational Fluid Dynamics With an Anatomic Network Model

Behdad Shaarbaf Ebrahimi, Haribalan Kumar, Merryn H. Tawhai, Kelly S. Burrowes, Eric A. Hoffman, Alys R. Clark
2022 Frontiers in Network Physiology  
This allows us to estimate the effect of posture on left and right pulmonary artery wall shear stress, with predictions varying by 0.75–1.35 dyne/cm² between postures.  ...  The model predicts interactions between 3D and 1D models that lead to a redistribution of blood between postures, both on a macro- and a micro-scale.  ...  between 20-30 years old, derived from the Human Lung Atlas Database Hoffman et al. (2004) .  ... 
doi:10.3389/fnetp.2022.867551 fatcat:atcssdhfgreuhn4kfvpig6dcha

Multimodal signal processing and interaction for a driving simulator: Component-based architecture

Alexandre Benoit, Laurent Bonnaud, Alice Caplier, Frédéric Jourde, Laurence Nigay, Marcos Serrano, Ioannis Damousis, Dimitrios Tzovaras, Jean-Yves Lionel Lawson
2007 Journal on Multimodal User Interfaces  
Capturing and interpreting the driver's focus of attention and fatigue state will be based on video data (e.g., facial expression, head movement, eye tracking).  ...  Active input modalities are added in the meta-User Interface to let the user dynamically select the output modalities.  ...  She works in the domain of facial expressions classification, human postures recognition, Cued Speech language classification and head rigid or non rigid motion analysis. Y.  ... 
doi:10.1007/bf02884432 fatcat:exltqxahpzg2lfstiik6m7eyjy

Requirements for Robotic Interpretation of Social Signals "in the Wild": Insights from Diagnostic Criteria of Autism Spectrum Disorder

Madeleine E Bartlett, Cristina Costescu, Paul Baxter, Serge Thill
2020 Information  
Just as for social robotics, ASD diagnosis fundamentally requires the ability to characterise human behaviour from observable aspects.  ...  The last few decades have seen widespread advances in technological means to characterise observable aspects of human behaviour such as gaze or posture.  ...  Posture and Gesture Behaviour Intention Recognition in Social Robotics Vision-based methods (using standard cameras/2D images) for human motion capture are well established [68] , with face tracking  ... 
doi:10.3390/info11020081 fatcat:pwtplx2x2vco5i27rrvvkapp7q

Advances in Emotion Recognition: Link to Depressive Disorder [chapter]

Xiaotong Cheng, Xiaoxia Wang, Tante Ouyang, Zhengzhi Feng
2020 Mental Disorders [Working Title]  
Emotion recognition enables real-time analysis, tagging, and inference of cognitive affective states from human facial expression, speech and tone, body posture and physiological signal, as well as social  ...  It refers to the reactions that can be directly observed from the appearance, such as facial expression, speech, and posture.  ...  Posture emotion recognition Posture refers to the expressional actions of other parts of human body than face.  ... 
doi:10.5772/intechopen.92019 fatcat:jmss4llbpnfrxcue6bzebsgmby

Accuracy of Markerless 3D Motion Capture Evaluation to Differentiate between On/Off Status in Parkinson's Disease after Deep Brain Stimulation

Hector R. Martinez, Alexis Garcia-Sarreon, Carlos Camara-Lemarroy, Fortino Salazar, María L. Guerrero-González
2018 Parkinson's Disease  
The objective of this study was to determine whether a markerless 3D motion capture system is a useful instrument to objectively differentiate between PD patients with DBS in On and Off states and controls  ...  and Off.  ...  Before submission, the paper was sent to Enago-Global for a substantive editing, which was funded by Tecnológico de Monterrey ITESM.  ... 
doi:10.1155/2018/5830364 pmid:30363689 pmcid:PMC6180930 fatcat:23o5yu2m2ff4bn5jvsvpqo6e4a

Visible-Infrared Person Re-Identification: A Comprehensive Survey and a New Setting

Huantao Zheng, Xian Zhong, Wenxin Huang, Kui Jiang, Wenxuan Liu, Zheng Wang
2022 Electronics  
To this end, combining visible images with infrared images is a natural trend, though the two are considerably heterogeneous modalities.  ...  Additionally, we elaborate on frequently used datasets and metrics for performance evaluation. We give insights on the historical development and conclude the limitations of off-the-shelf methods.  ...  A large cross-modal discrepancy and intra-modal variations generated by varied camera angles, human postures, etc., impact VI-ReID.  ... 
doi:10.3390/electronics11030454 fatcat:rbjugiqeaffl7mmllbz6xjjuvq

Pervasive technologies and assistive environments: cognitive systems for assistive environments: special issue of PETRA 2010 and 2011 conferences

Ilias Maglogiannis, Fillia Makedon, Grammati Pantziou, Margrit Betke
2013 Universal Access in the Information Society  
In conclusion, this special issue presents novel data capture and processing technologies, which appear in different modalities.  ...  and postures.  ... 
doi:10.1007/s10209-013-0311-6 fatcat:g2qr6tyh5jeudcqg24xf4r7uya

Challenges in multimodal gesture recognition

Sergio Escalera, Vassilis Athitsos, Isabelle Guyon
2016 Journal of machine learning research  
We published papers using this technology and other more conventional methods, including regular video cameras, to record data, thus providing a good overview of uses of machine learning and computer vision  ...  This paper surveys the state of the art on multimodal gesture recognition and introduces the JMLR special topic on gesture recognition 2011-2015.  ...  Acknowledgments This work has been partially supported by ChaLearn Challenges in Machine Learning http://chalearn.org, the Human Pose Recovery and Behavior Analysis Group, the Pascal2 network of excellence  ... 
dblp:journals/jmlr/EscaleraAG16 fatcat:r4q2iywy7balhjlh2vpknltrde