
Adapting X3D for multi-touch environments

Y. Jung, J. Keil, J. Behr, S. Webel, M. Zöllner, T. Engelke, H. Wuest, M. Becker
2008 Proceedings of the 13th international symposium on 3D web technology - Web3D '08  
We present a robust FTIR-based optical tracking system, examine to what extent the current sensor and navigation abstractions in the X3D standard are useful, and finally present extensions to the standard, which  ...  In this paper we present a comprehensive hardware and software setup, which includes an X3D-based layer to simplify the application development process.  ...  Different subjects have to be taken into account, such as finger and gesture tracking and recognition, software setup and implementation, and also more sophisticated graphical interfaces and interaction  ...
doi:10.1145/1394209.1394218 dblp:conf/vrml/JungKBWZEWB08 fatcat:f6qbjgao6bcqxolkl4q2emxtk4
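
The FTIR tracking described in this entry essentially detects bright fingertip blobs in an infrared camera image of the touch surface. The sketch below is a generic Python illustration of that detection step, assuming SciPy is available; the thresholding, the touch_points helper, and the normalised output coordinates are illustrative choices, not the pipeline of the paper.

```python
import numpy as np
from scipy import ndimage

def touch_points(ir_frame, threshold=0.6):
    """Extract touch candidates from one FTIR infrared camera frame.

    Bright blobs correspond to fingertips pressing the surface. Returns blob
    centroids in normalised (0..1) surface coordinates. Thresholding plus
    connected components is a generic sketch, not the paper's tracker.
    """
    mask = np.asarray(ir_frame, dtype=float) >= threshold
    labels, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    h, w = mask.shape
    return [(x / w, y / h) for (y, x) in centroids]

# Synthetic frame with two bright "finger" blobs.
frame = np.zeros((120, 160))
frame[30:35, 40:45] = 1.0
frame[80:86, 100:106] = 1.0
print(touch_points(frame))
```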

Rendering of X3D content on mobile devices with OpenGL ES

Daniele Nadalutti, Luca Chittaro, Fabio Buttussi
2006 Proceedings of the eleventh international conference on 3D web technology - Web3D '06  
In this paper, we exploit the main emerging standard in 3D rendering on mobile devices (OpenGL ES) to build a mobile player (called MobiX3D) for X3D and H-Anim content.  ...  The rendering engine of the MobiX3D player supports classic lighting and shading algorithms. We discuss the performance of the player and we apply it to sign language visualization.  ...  Acknowledgements Roberto Ranon and Stefano Burigat provided precious advice during the development of the described work.  ... 
doi:10.1145/1122591.1122594 dblp:conf/vrml/NadaluttiCB06 fatcat:tkx4nhwcsvcchdxvtgtwgssqhi

Waving Real Hand Gestures Recorded by Wearable Motion Sensors to a Virtual Car and Driver in a Mixed-Reality Parking Game

David Bannach, Oliver Amft, Kai S. Kunze, Ernst A. Heinz, Gerhard Tröster, Paul Lukowicz
2007 2007 IEEE Symposium on Computational Intelligence and Games  
gesture recognition to allow for smooth game control.  ...  We envision adding context awareness and ambient intelligence to edutainment and computer gaming applications in general.  ...  We used the X3D loader in our application for models, materials, textures, camera, and lights.  ...
doi:10.1109/cig.2007.368076 dblp:conf/cig/BannachAKHTL07 fatcat:sf6kdk5eprch3glgxkfnzsheuu
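
The snippet above does not say which recognition method the wearable motion sensors feed into, so the following is only a minimal, generic sketch of accelerometer-based gesture classification in Python: per-window mean and standard-deviation features matched against labelled templates by nearest neighbour. The features and classify helpers and the template names are assumptions for illustration, not the paper's recogniser.

```python
import numpy as np

def features(window):
    """Summarise a window of 3-axis accelerometer samples (N x 3) with
    per-axis mean and standard deviation -- a common, simple feature set."""
    w = np.asarray(window, dtype=float)
    return np.concatenate([w.mean(axis=0), w.std(axis=0)])

def classify(window, templates):
    """Nearest-neighbour match of a feature vector against labelled templates."""
    f = features(window)
    return min(templates, key=lambda label: np.linalg.norm(f - templates[label]))

# Hypothetical templates, e.g. computed from recorded example gestures.
templates = {
    "wave":  features(np.random.default_rng(0).normal([0, 0, 1], 0.8, (50, 3))),
    "steer": features(np.random.default_rng(1).normal([1, 0, 0], 0.2, (50, 3))),
}
print(classify(np.random.default_rng(2).normal([0, 0, 1], 0.8, (50, 3)), templates))
```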

Enhancing realism of mixed reality applications through real-time depth-imaging devices in X3D

Tobias Franke, Svenja Kahn, Manuel Olbrich, Yvonne Jung
2011 Proceedings of the 16th International Conference on 3D Web Technology - Web3D '11  
In this paper, we present a framework to include depth-sensing devices in X3D in order to enhance the visual fidelity of X3D Mixed Reality applications by introducing some extensions for advanced rendering  ...  We furthermore outline how to calibrate depth and image data in a meaningful way for devices that do not already come with precalibrated sensors, and discuss some of  ...  and gesture recognition, the Mixed Reality (MR) and electronic art performance scene saw a sudden increase in installations and demos using these devices.  ...
doi:10.1145/2010425.2010439 dblp:conf/vrml/FrankeKOJ11 fatcat:7mf45y2r5bagtnmrdioxlx2hli
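
Registering depth and colour data, as outlined in this entry, typically means back-projecting each depth pixel with the depth camera's intrinsics, transforming it into the colour camera's frame, and reprojecting it. The sketch below shows that standard pinhole computation in Python with NumPy; the intrinsic matrices, the extrinsic transform, and the register_depth_to_color function are hypothetical names and values, not taken from the paper.

```python
import numpy as np

def register_depth_to_color(depth_px, depth_value_m, K_depth, K_color, R, t):
    """Map one depth pixel to the corresponding colour-image pixel.

    depth_px        : (u, v) pixel coordinates in the depth image
    depth_value_m   : measured depth in metres at that pixel
    K_depth/K_color : 3x3 pinhole intrinsic matrices (illustrative values below)
    R, t            : rotation (3x3) and translation (3,) from depth to colour camera
    """
    u, v = depth_px
    # Back-project the depth pixel into a 3D point in the depth camera frame.
    p_depth = depth_value_m * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Transform the point into the colour camera frame.
    p_color = R @ p_depth + t
    # Project into the colour image plane.
    uvw = K_color @ p_color
    return uvw[:2] / uvw[2]

# Hypothetical intrinsics/extrinsics for a Kinect-class sensor pair.
K_depth = np.array([[570.0, 0, 320.0], [0, 570.0, 240.0], [0, 0, 1]])
K_color = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1]])
R, t = np.eye(3), np.array([0.025, 0.0, 0.0])  # ~2.5 cm baseline

print(register_depth_to_color((400, 300), 1.2, K_depth, K_color, R, t))
```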

Employing virtual humans for education and training in X3D/VRML worlds

Lucio Ieronutti, Luca Chittaro
2007 Computers & Education  
To test the applicability and effectiveness of our approach, we have applied it in a virtual museum of computer science.  ...  Advances in computer graphics, improvements in hardware performance, and network technologies have enabled a new class of interactive applications involving virtual humans, three-dimensional simulations  ...  To test the benefits of our approach, we use it in a 3D Web site representing a Computer Science museum in which the virtual human leads users through the environment and invites students to interact with  ...
doi:10.1016/j.compedu.2005.06.007 fatcat:jk22jlq7sjfm3idnd2l6netwem

A user interface framework for multimodal VR interactions

Marc Erich Latoschik
2005 Proceedings of the 7th international conference on Multimodal interfaces - ICMI '05  
A modified Augmented Transition Network (ATN) approach accesses the knowledge layer as well as the preprocessing components to integrate linguistic, gestural, and context information in parallel.  ...  multimodally initialized gestural interactions.  ...  AI representation for the KRL as well as a neural network layer which will support the KRL and the matching stage of the gesture processing.  ...
doi:10.1145/1088463.1088479 dblp:conf/icmi/Latoschik05 fatcat:u6l5l7zqyzdszosbxd4qrne6di

IMMIView: a multi-user solution for design review in real-time

Ricardo Jota, Bruno R. de Araújo, Luís C. Bruno, João M. Pereira, Joaquim A. Jorge
2009 Journal of Real-Time Image Processing  
Users can interact with the system using laser pointers, speech commands, body gestures and mobile devices.  ...  In particular, our system takes advantage of multiple modalities to provide a natural interaction for design review.  ...  To enable interaction with the GUI, IMMIView includes a number of modalities: pen or laser interaction, speech recognition, mobile devices, and body tracking.  ...
doi:10.1007/s11554-009-0141-1 fatcat:hnh4a7xsj5hife6vo7l677v3ni

Context-dependent multimodal communication in human-robot collaboration

Csaba Kardos, Zsolt Kemény, András Kovács, Balázs E. Pataki, József Váncza
2018 Procedia CIRP  
A new methodology is proposed to analyze existing products in view of their functional and physical architecture.  ...  An industrial case study on two product families of steering columns of thyssenkrupp Presta France is then carried out to give a first industrial evaluation of the proposed approach.  ...  Acknowledgment This research has been supported by the GINOP-2.3.2-15-2016-00002 grant on an "Industry 4.0 research and innovation center of excellence" and by the EU H2020 Grant SYMBIO-TIC No. 637107.  ... 
doi:10.1016/j.procir.2018.03.027 fatcat:y6fwjzquanauhlz4ifjz7dzuia

Believable Virtual Characters in Human-Computer Dialogs [article]

Yvonne Jung, Arjan Kuijper, Dieter Fellner, Michael Kipp, Jan Miksatko, Jonathan Gratch, Daniel Thalmann
2011 Eurographics State of the Art Reports  
Therefore, in this report we give a comprehensive overview of how to go from communication models to actual animation and rendering.  ...  including its perceivable behavior, from a decoding perspective, such as facial expressions and gestures, belongs to the domain of computer graphics and likewise involves many open issues concerning  ...
doi:10.2312/eg2011/stars/075-100 fatcat:ry6wr3y2p5dsjpb67gp6jvblja

Interactive Agents Learning Their Environment [chapter]

Michiel Hildebrand, Anton Eliëns, Zhisheng Huang, Cees Visser
2003 Lecture Notes in Computer Science  
Interactive agents are designed to perform tasks requested by a user in natural language.  ...  In particular, an interactive agent can tell when necessary information for a task is missing, giving the user a chance to supply this information, which may in effect result in teaching the agent.  ...  In addition, their system uses cameras and speech recognition. In [10], Virtual Teletubbies are developed to create believable agents.  ...
doi:10.1007/978-3-540-39396-2_3 fatcat:tqisb37ml5e27akwqvlxpainb4

Vision-Augmented Molecular Dynamics Simulation of Nanoindentation

Rajab Al-Sayegh, Charalampos Makatsoris
2015 Journal of Nanomaterials  
The hand gestures are used to pick and place atoms on the screen, thereby making molecular dynamics simulation easier and more efficient to carry out.  ...  The end result is that users with limited expertise in developing molecular structures can now do so easily and intuitively by using body gestures to interact with the simulator to study the system  ...  Kinect includes gesture recognition, facial recognition, and voice recognition.  ...
doi:10.1155/2015/857574 fatcat:i2fl4nreevgf5bunncd7ywnk6u
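
Picking an atom with a tracked hand, as described above, can be reduced to a nearest-neighbour test between the hand's screen position and the projected atom positions. The Python sketch below illustrates only that picking step under assumed normalised screen coordinates; the pick_atom helper, the distance threshold, and the atom list are illustrative and do not reproduce the paper's Kinect or molecular dynamics integration.

```python
import numpy as np

def pick_atom(hand_xy, atom_positions_xy, max_dist=0.05):
    """Return the index of the atom nearest to the tracked hand position
    (both given in normalised 0..1 screen coordinates), or None if no atom
    lies within max_dist."""
    atoms = np.asarray(atom_positions_xy, dtype=float)
    d = np.linalg.norm(atoms - np.asarray(hand_xy, dtype=float), axis=1)
    i = int(d.argmin())
    return i if d[i] <= max_dist else None

atoms = [(0.20, 0.30), (0.55, 0.40), (0.70, 0.75)]
print(pick_atom((0.56, 0.41), atoms))   # -> 1
print(pick_atom((0.05, 0.95), atoms))   # -> None (nothing close enough)
```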

Web browser accessibility using open source software

Željko Obrenović, Jacco van Ossenbruggen
2007 Proceedings of the 2007 international cross-disciplinary conference on Web accessibility (W4A) - W4A '07  
with additional interaction modalities; another describing a non-disabled user browsing in a suboptimal interaction situation.  ...  A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved.  ...  ACKNOWLEDGEMENTS Part of this research was funded by the European ITEA Passepartout project and the MultimediaN project of the BSIK programme of the Dutch Government, and by the European Commission under contract  ...
doi:10.1145/1243441.1243451 dblp:conf/w4a/ObrenovicO07 fatcat:6lovcrdb7vdxnpofmrt6akwkgu

Automatic speech grammar generation during conceptual modelling of virtual environments

Lode Vanacken, Chris Raymaekers, Karin Coninx
2008 The Visual Computer  
In this paper, we introduce an approach to automatically generate a speech grammar from semantic information.  ...  Speech interfaces are becoming more and more popular as a means to interact with virtual environments, but the development and integration of these interfaces is usually still ad hoc, especially the speech  ...  Speech interfaces are increasingly being used in virtual environment applications since this way of interacting allows for more flexible and natural forms of interaction within a virtual environment.  ...
doi:10.1007/s00371-008-0276-2 fatcat:z4jgqs4y75edvcmb5xcsxtt7eq
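
Generating a speech grammar from semantic scene information, as this entry describes, can be illustrated by deriving a small JSGF-style command grammar from annotated scene objects. The sketch below is a simplification under assumed annotation and grammar formats; the scene_objects structure and build_grammar function are illustrative, not the authors' pipeline.

```python
# Minimal sketch: derive a JSGF-style command grammar from semantic
# annotations of scene objects. The annotation format and grammar layout
# are illustrative assumptions, not the method described in the paper.

scene_objects = {
    "red_door":  {"spoken": "red door",  "actions": ["open", "close"]},
    "blue_lamp": {"spoken": "blue lamp", "actions": ["switch on", "switch off"]},
}

def build_grammar(objects):
    object_alts = " | ".join(o["spoken"] for o in objects.values())
    action_alts = " | ".join(sorted({a for o in objects.values() for a in o["actions"]}))
    return (
        "#JSGF V1.0;\n"
        "grammar scene_commands;\n"
        f"<object> = {object_alts};\n"
        f"<action> = {action_alts};\n"
        "public <command> = <action> [the] <object>;\n"
    )

print(build_grammar(scene_objects))
```

Note that this flat grammar does not restrict which actions pair with which objects; a fuller pipeline would derive such constraints from the semantic model as well.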

ISAS

James Oliverio, Yvonne R. Masakowski, Howard Beck, Raja Appuswamy
2007 Proceedings of the twelfth international conference on 3D web technology - Web3D '07  
high-level collaboration and augmented decision-making in civil and coalition activities.  ...  Institute has demonstrated an effective web services-enhanced graphically-based environment for globally-distributed operations ranging from humanitarian aid during large-scale environmental disasters to  ...  ACKNOWLEDGMENTS The authors wish to acknowledge and thank the various collaborators and members of the original ISAS team including Rick Lind, Andy Quay, Arturo Sinclair, Georges El Khoury, Tommy Chuan  ...
doi:10.1145/1229390.1229403 dblp:conf/vrml/OliverioMBA07 fatcat:mcfttgp34vch5no4j46fntijuq

Creating explorable extended reality environments with semantic annotations

Jakub Flotyński
2020 Multimedia tools and applications  
Such analysis can be intended, in particular, to monitor, comprehend, examine, and control XR environments as well as users' skills, experience, interests and preferences, and XR objects' features.  ...  Such actions and interactions constitute the evolution of the content over time.  ...
doi:10.1007/s11042-020-09772-y fatcat:i4rsrqpzajbm5hfcqdk2vh42ve
Showing results 1 — 15 of 81