
Real-time 3D Face-Eye Performance Capture of a Person Wearing VR Headset [article]

Guoxian Song and Jianfei Cai and Tat-Jen Cham and Jianmin Zheng and Juyong Zhang and Henry Fuchs
2019 pre-print
However, in order to facilitate face-to-face communication for HMD users, real-time 3D facial performance capture of a person wearing an HMD is needed, which is a very challenging task due to the large occlusion ... In this paper, we propose a convolutional neural network (CNN) based solution for real-time 3D face-eye performance capture of HMD users without complex modification to devices. ... INTRODUCTION This paper considers the problem of using only a commodity RGB camera for real-time 3D facial performance capture of a person wearing a virtual reality (VR) head-mounted display (HMD). ...
doi:10.1145/3240508.3240570 arXiv:1901.06765v1 fatcat:onqdisma7vgzpbuaftc32j3ltm
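
As a loose illustration of the CNN-based capture this entry describes, the sketch below regresses a per-frame expression parameter vector from a face-camera image. The layer sizes, 128x128 input, and 64-dimensional output are assumptions for illustration, not the authors' architecture.

```python
# Hypothetical sketch: a small CNN that regresses per-frame expression
# parameters (e.g., blendshape weights) from an RGB face-camera crop.
# All dimensions are illustrative assumptions, not the paper's network.
import torch
import torch.nn as nn

class ExpressionRegressor(nn.Module):
    def __init__(self, n_params: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_params)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) camera frames; returns (batch, n_params).
        return self.head(self.features(x).flatten(1))

model = ExpressionRegressor()
frame = torch.rand(1, 3, 128, 128)  # stand-in for a camera frame
weights = model(frame)              # predicted expression parameters
print(weights.shape)                # torch.Size([1, 64])
```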

Expressive Telepresence via Modular Codec Avatars [article]

Hang Chu, Shugao Ma, Fernando De la Torre, Sanja Fidler, Yaser Sheikh
2020 arXiv   pre-print
This paper aims in this direction and presents Modular Codec Avatars (MCA), a method to generate hyper-realistic faces driven by the cameras in the VR headset. ... We demonstrate that MCA achieves improved expressiveness and robustness w.r.t. CA in a variety of real-world datasets and practical scenarios. ... Finally, once the person-specific face animation model is learned using these correspondences, a real-time photo-realistic avatar is driven from the VR headset cameras. In telepresence, the users wear the ...
arXiv:2008.11789v1 fatcat:43vp653rwrcfbmd4mjn7lhz5ju

Facial performance sensing head-mounted display

Hao Li, Laura Trutoiu, Kyle Olszewski, Lingyu Wei, Tristan Trutna, Pei-Lun Hsieh, Aaron Nicholls, Chongyang Ma
2015 ACM Transactions on Graphics  
To map the input signals to a 3D face model, we perform a single-instance offline training session for each person. ... To advance virtual reality as a next-generation communication platform, we develop a novel HMD that enables 3D facial performance-driven animation in real time. ... algorithms, Frances Chen for being our capture model, Liwen Hu for helping with the results, and Ryan Ebert for his mechanical design. ...
doi:10.1145/2766939 fatcat:7j6h6lkvsbhjbdg3novpojd5pu
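
The per-person offline training session mentioned above amounts to fitting a mapping from raw sensor signals to face-model parameters. A minimal sketch, assuming a linear ridge-regression mapping, eight sensor channels, and 30 blendshape weights, all illustrative choices rather than the paper's actual setup:

```python
# Illustrative per-person calibration: fit a ridge mapping from sensor
# readings to blendshape weights offline, then apply it in real time.
# Channel and blendshape counts are assumptions, not the paper's config.
import numpy as np

def fit_ridge(X: np.ndarray, Y: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    """Solve W = argmin ||XW - Y||^2 + lam*||W||^2 in closed form."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))   # 500 calibration frames, 8 sensor channels
Y_train = rng.normal(size=(500, 30))  # corresponding blendshape weights
W = fit_ridge(X_train, Y_train)

x_live = rng.normal(size=(1, 8))      # one live sensor reading
blendshapes = x_live @ W              # real-time prediction is a single matmul
print(blendshapes.shape)              # (1, 30)
```

The appeal of a closed-form linear fit here is that run-time inference is a single matrix multiply, which comfortably meets a real-time budget.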

Real-time Visual Representations for Mixed Reality Remote Collaboration [article]

Lei Gao, Huidong Bai, Thammathip Piumsomboon, Gun A. Lee, Robert W. Lindeman, Mark Billinghurst
2017 International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments  
We conducted a pilot study to evaluate the usability of our system by comparing the performance of three different interface designs (showing the real-time view in the form of a 2D first-person view, a 2D third-person ... By combining a low-resolution 3D point-cloud of the environment surrounding the local worker with a high-resolution real-time view of small focused details, the remote expert can see a virtual copy of ... The depth sensors, video camera, and VR headset are directly connected via USB to a single local PC, which is responsible for local data processing, such as 3D point-cloud fusion of the local scene, real-time ...
doi:10.2312/egve.20171344 dblp:conf/egve/GaoBPLLB17 fatcat:3yii6zzfi5f3lecb2taobn4ov4
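
The snippet above describes combining a coarse environment point cloud with a high-resolution focused view. A minimal sketch of that combination, assuming both clouds are (N, 3) arrays and the detail camera's 4x4 pose in the room is known (all values here are made up):

```python
# Rough sketch of "low-resolution context + high-resolution detail":
# express both clouds in one world frame and concatenate, transforming
# the detail cloud by the (assumed known) pose of the detail camera.
import numpy as np

def transform(points: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Apply a 4x4 rigid transform to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ pose.T)[:, :3]

env_cloud = np.random.rand(10_000, 3) * 5.0     # coarse room-scale cloud (meters)
detail_cloud = np.random.rand(50_000, 3) * 0.2  # dense close-up patch

camera_pose = np.eye(4)
camera_pose[:3, 3] = [1.0, 0.5, 2.0]  # detail camera's position in the room

merged = np.vstack([env_cloud, transform(detail_cloud, camera_pose)])
print(merged.shape)                   # (60000, 3)
```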

Eyemotion: Classifying facial expressions in VR using eye-tracking cameras [article]

Steven Hickson, Nick Dufour, Avneesh Sud, Vivek Kwatra, Irfan Essa
2017 arXiv   pre-print
Specifically, we show that images of the user's eyes captured from an IR gaze-tracking camera within a VR headset are sufficient to infer a select subset of facial expressions without the use of any fixed ... Using these inferences, we can generate dynamic avatars in real time that function as an expressive surrogate for the user. ... [35] also perform real-time gaze-aware facial reenactment in VR using an RGB-D camera to capture the unoccluded regions, and two internal infra-red (IR) cameras to track the eye gaze. ...
arXiv:1707.07204v2 fatcat:zab7t5by2vemrp4xghox3hsrkm
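
To make the eye-image-to-expression idea concrete, here is a hedged sketch of a tiny classifier over single-channel eye crops, including one training step. The five-class label set, 64x64 input, and architecture are placeholders, not Eyemotion's actual configuration.

```python
# Placeholder sketch of classifying expressions from IR eye crops.
# Class set, input size, and network are invented for illustration.
import torch
import torch.nn as nn

N_CLASSES = 5  # e.g., neutral / smile / squint / brow-raise / frown (assumed)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, N_CLASSES),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

eyes = torch.rand(8, 1, 64, 64)             # batch of IR eye crops
labels = torch.randint(0, N_CLASSES, (8,))  # ground-truth expression labels

logits = model(eyes)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(logits.argmax(dim=1))                 # predicted expression per image
```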

FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality [article]

Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner
2018 arXiv   pre-print
The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking  ...  Based on reenactment of a prerecorded stereo video of the person without the HMD, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances  ...  TUM-IAS Rudolf Mößbauer Fellowship, and a Google Faculty Award.  ... 
arXiv:1610.03151v2 fatcat:pnx2nfjqrrgpbcpaarsrg6ykwi

AudienceMR: Extending the Local Space for Large-Scale Audience with Mixed Reality for Enhanced Remote Lecturer Experience

Bin Han, Gerard Jounghyun Kim
2021 Applied Sciences  
Most subjects preferred AudienceMR over the alternatives despite the nuisance of having to wear a video see-through headset. ... Compared to 3D VR, AudienceMR offers more natural and easily usable interaction based on real objects. ... Although it is true that the need to wear a headset and other limitations (e.g., close and natural interactions) are still significant obstacles to the wider adoption of 3D immersive VR, the younger generations ...
doi:10.3390/app11199022 fatcat:zd2iowf45rd2zpizw4jf2r4lqy

A Robust Real-Time 3D Reconstruction Method for Mixed Reality Telepresence

Fazliaty Edora Fadzli, Ajune Wanis Ismail
2020 International Journal of Innovative Computing  
of a remote person. ... Therefore, this paper explores a robust real-time 3D reconstruction method for MR telepresence. ... ACKNOWLEDGMENT Deepest gratitude and appreciation to Mixed and Virtual Reality Laboratory (mivielab) at Vicubelab, Faculty of Engineering, Universiti Teknologi Malaysia (UTM). ...
doi:10.11113/ijic.v10n2.265 fatcat:mjk65liwtbfrfpzork3ggsxbeu

Deep appearance models for face rendering

Stephen Lombardi, Jason Saragih, Tomas Simon, Yaser Sheikh
2018 ACM Transactions on Graphics  
This representation, together with a novel unsupervised technique for mapping images to facial states, results in a system that is naturally suited to real-time interactive settings such as Virtual Reality ... To generate this data, we build personalized blendshape models of the face from the captured expression performances, similar to Laine et al. [2017], and use them to track the face through the captured ...
doi:10.1145/3197517.3201401 fatcat:37mhsrw43zhfvaxbhmz26y6fae
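
A deep appearance model of this kind decodes a latent facial state, conditioned on viewpoint, into geometry and a view-dependent texture. The sketch below shows only that decoder shape; every dimension (128-d latent, 1,000 vertices, 64x64 texture) is invented for illustration and is not the paper's model.

```python
# Loose sketch of a view-conditioned appearance decoder: latent code plus
# view direction -> mesh vertices and a texture map. Sizes are made up.
import torch
import torch.nn as nn

class AppearanceDecoder(nn.Module):
    def __init__(self, z_dim=128, n_verts=1000, tex_size=64):
        super().__init__()
        self.geometry = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, n_verts * 3))
        # Texture depends on view direction to model view-dependent effects.
        self.texture = nn.Sequential(
            nn.Linear(z_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 3 * tex_size * tex_size))
        self.n_verts, self.tex_size = n_verts, tex_size

    def forward(self, z, view_dir):
        verts = self.geometry(z).view(-1, self.n_verts, 3)
        tex = self.texture(torch.cat([z, view_dir], dim=1))
        tex = tex.view(-1, 3, self.tex_size, self.tex_size)
        return verts, tex

decoder = AppearanceDecoder()
z = torch.randn(1, 128)                 # latent facial state
view = torch.tensor([[0.0, 0.0, 1.0]])  # unit view direction
verts, tex = decoder(z, view)
print(verts.shape, tex.shape)           # (1, 1000, 3) (1, 3, 64, 64)
```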

D5.2 - Market analysis report

Landais Loïc, Perrot Pascal
2019 Zenodo  
This deliverable describes the market for audio-visual immersive products and provides a set of recommendations as input for the use cases and the exploitation and innovation transfer activities in T.5.1  ...  A real-time motion capture system captures the positions and orientations of active markers on the user's hands and feet, as well as on the VR headset and backpack PC.  ...  Mimesys from Belgium has taken a different approach, working with real-time capture of the users with 3D cameras and representing them as photorealistic "holograms" in VR or AR.  ... 
doi:10.5281/zenodo.4531484 fatcat:7fstvk32gjhqrlzvxfbao5eh7a

Virtual reality based novel use case in remote sensing and GIS

Jai Gopal Singla
2021 Current Science  
Users can move around or teleport to different locations inside the 3D scene and interact with real-time objects available in the scene. ... Available GIS software allows users to interact with remote sensing data only as a third person. ... We also thank Shri Kirti Padia for guidance and support during the development of this work. Suggestions from internal referees to improve an earlier version of this paper are sincerely acknowledged. ...
doi:10.18520/cs/v121/i7/958-961 fatcat:ksmgvxodhjdyvammb6snuyxqc4

Hands-Free User Interface for AR/VR Devices Exploiting Wearer's Facial Gestures Using Unsupervised Deep Learning

Jaekwang Cha, Jinhyuk Kim, Shiho Kim
2019 Sensors  
The UI device was embedded in a commercial AR headset, and several experiments were performed on the online sensor data to verify operation of the device.  ...  Developing a user interface (UI) suitable for headset environments is one of the challenges in the field of augmented reality (AR) technologies.  ...  A wink gesture of the headset wearer was captured as an input command by a custom-designed skin-deformation-detection sensor.  ... 
doi:10.3390/s19204441 pmid:31614988 pmcid:PMC6832972 fatcat:42ckbx5qlvdtnkv6l2h3tqtjhe
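
One way to read the unsupervised angle in this entry: train an autoencoder on neutral sensor traces and flag a gesture (e.g., a wink) when reconstruction error spikes. A speculative sketch follows; the window length, architecture, and threshold are arbitrary demo choices, not the paper's method.

```python
# Speculative sketch: autoencoder trained on neutral skin-deformation
# windows; a gesture is flagged when reconstruction error spikes.
import torch
import torch.nn as nn

WINDOW = 32  # samples per sensor window (assumed)

autoencoder = nn.Sequential(
    nn.Linear(WINDOW, 8), nn.ReLU(),  # compress to an 8-d bottleneck
    nn.Linear(8, WINDOW),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

neutral = torch.randn(256, WINDOW) * 0.1  # neutral-face training windows
for _ in range(200):
    optimizer.zero_grad()
    loss = ((autoencoder(neutral) - neutral) ** 2).mean()
    loss.backward()
    optimizer.step()

window = torch.randn(1, WINDOW)           # incoming live window
error = ((autoencoder(window) - window) ** 2).mean().item()
print("gesture detected" if error > 0.05 else "neutral")  # threshold assumed
```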

Using a Virtual Reality Social Network During Awake Craniotomy to Map Social Cognition: Prospective Trial

Florian Bernard, Jean-Michel Lemée, Ghislaine Aubin, Aram Ter Minassian, Philippe Menei
2018 Journal of Medical Internet Research  
and explore gesture communication while wearing a VR headset. ... Methods: This was a single-center, prospective, unblinded trial. During wound closure, different VR experiences with a VR headset were proposed to the patient. ... Multimedia Appendix 1: During awake brain surgery, (A) the patient and the neuropsychologist performing a language task; (B) direct electrical stimulation and mapping of the cortex during the task; and ...
doi:10.2196/10332 pmid:29945859 pmcid:PMC6039768 fatcat:qgkkqjmsvrd2dgzya266sqnfba

Assessing Facial Expressions in Virtual Reality Environments

Catarina Runa Miranda, Verónica Costa Orvalho
2016 Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications  
Hence, to reproduce the users' facial expressions in VR scenarios, we need on-the-fly animation of the embodied 3D characters. ... However, current facial animation approaches based on Motion Capture (MoCap) are defeated by the persistent partial occlusions produced by VR headsets. ... Also, when the same person wears the headset again, only a smaller calibration is needed to re-adapt the hardware measurements. ...
doi:10.5220/0005716604860497 dblp:conf/visapp/MirandaO16 fatcat:nnvs4ijxhndsjmpxph6zbmih34

Immersive Virtual Reality and Ocular Tracking for Brain Mapping During Awake Surgery: Prospective Evaluation Study

Morgane Casanova, Anne Clavreul, Gwénaëlle Soulard, Matthieu Delion, Ghislaine Aubin, Aram Ter Minassian, Renaud Seguier, Philippe Menei
2021 Journal of Medical Internet Research  
Language mapping was performed with a naming task, DO 80, presented on a computer tablet and then in 2D and 3D via the VRH. Patients were also immersed in a visuospatial and social VR experience. ... This study aims to evaluate the feasibility and safety of a virtual reality headset equipped with an eye-tracking device that is able to promote an immersive visuospatial and social virtual reality (VR ... Neurochirurgie, Centre Hospitalier Universitaire d'Angers, Angers, France), Doctor Jérémy Besnard, and Professor Philippe Allain (LPPL-EA4638, Université d'Angers, Angers, France) for their help in the design of ...
doi:10.2196/24373 pmid:33759794 pmcid:PMC8074984 fatcat:ffzx6gjtcja7fptgnnm67nqg5i
Showing results 1 — 15 out of 696 results