
QuickTime VR

Shenchang Eric Chen
1995 Proceedings of the 22nd annual conference on Computer graphics and interactive techniques - SIGGRAPH '95  
Traditionally, virtual reality systems use 3D computer graphics to model and render virtual environments in real-time.  ...  The panoramic image is digitally warped on-the-fly to simulate camera panning and zooming.  ...  Another solution to the static environment constraint is the combination of image warping and 3D rendering. Since most backgrounds are static, they can be generated efficiently from environment maps.  ... 
doi:10.1145/218380.218395 dblp:conf/siggraph/Chen95 fatcat:kccwdtd5szhtnlzkfpkl5hjkhu
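The on-the-fly panoramic warping described in the snippet above can be sketched as per-column resampling of a cylindrical panorama. The function below is a hypothetical nearest-neighbour illustration, not Chen's implementation; it assumes the panorama is sampled uniformly in cylinder angle and ignores the vertical cylindrical correction for brevity:

```python
import numpy as np

def pan_cylindrical(panorama, pan_deg, fov_deg, out_w):
    """Simulate camera panning by resampling columns of a cylindrical panorama.

    panorama : H x W array, columns uniformly spaced in cylinder angle.
    pan_deg  : pan angle of the virtual camera, in degrees.
    fov_deg  : horizontal field of view of the virtual camera, in degrees.
    out_w    : width of the rendered planar view, in pixels.
    """
    H, W = panorama.shape[:2]
    # focal length (in pixels) that yields the requested field of view
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)
    xs = np.arange(out_w) - (out_w - 1) / 2
    # ray angle of each output column, offset by the pan angle
    theta = np.arctan2(xs, f) + np.radians(pan_deg)
    # map ray angles back to panorama columns (wrap around at 360 degrees)
    cols = np.round(theta / (2 * np.pi) * W).astype(int) % W
    return panorama[:, cols]
```

Zooming would work the same way, by changing `fov_deg` rather than the pan angle.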

A Tele-immersive System Based On Binocular View Interpolation [article]

Pierre Boulanger, Martha Benitez, Winston Wong
2004 ICAT-EGVE - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments  
other forms of shared digital data (video, 3D models, images, text, etc.).  ...  We also need to do this for two virtual cameras corresponding to the inter-ocular distance of each participant.  ...  What distinguishes virtual environments from 3D graphic environments is the idea of 'immersion', meaning the user is totally absorbed inside the virtual world while outside stimuli are minimized.  ... 
doi:10.2312/egve/egve04/137-146 fatcat:v35mxvp6kre6dik6v5a6eoffny

3D High Dynamic Range dense visual SLAM and its application to real-time object re-lighting

Maxime Meilland, Christian Barat, Andrew Comport
2013 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)  
ACKNOWLEDGEMENTS The present work is funded by the French DGA Rapid project Fraudo on dense visual SLAM in real-time.  ...  reality with virtual objects and realistic relighting using dense 3D HDR environment-maps.  ...  As proposed in [25] , this 3D model is built incrementally in a SLAM approach, and is used to predict a dense virtual image by rasterising and blending nearby key-frames at a desired camera pose within  ... 
doi:10.1109/ismar.2013.6671774 dblp:conf/ismar/MeillandBC13 fatcat:d5ke4q6cofbe7nbcnytjcelq4m
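The key-frame blending step mentioned in this snippet (predicting a dense virtual image from nearby key-frames at a desired pose) can be illustrated in a much simplified form by weighting key-frames by pose distance. The helper below is a hypothetical sketch, not the paper's method; in particular it skips the rasterisation of each key-frame through the incrementally built 3D model:

```python
import numpy as np

def blend_keyframes(keyframes, poses, query_pose, sigma=1.0):
    """Blend key-frame images with Gaussian weights on pose distance.

    keyframes  : list of H x W arrays (already aligned to the query view
                 in this simplified sketch).
    poses      : list of pose vectors, one per key-frame.
    query_pose : pose vector of the desired virtual camera.
    """
    # weight each key-frame by how close its pose is to the query pose
    w = np.array([np.exp(-np.linalg.norm(p - query_pose) ** 2 / (2 * sigma ** 2))
                  for p in poses])
    w = w / w.sum()
    # weighted average of the (pre-warped) key-frame images
    return sum(wi * kf for wi, kf in zip(w, keyframes))
```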

Peeking Behind Objects: Layered Depth Prediction from a Single Image

Helisa Dhamo, Keisuke Tateno, Iro Laina, Nassir Navab, Federico Tombari
2019 Pattern Recognition Letters  
This limits the use of depth prediction in augmented and virtual reality applications, that aim at scene exploration by synthesizing the scene from a different vantage point, or at diminished reality.  ...  While conventional depth estimation can infer the geometry of a scene from a single RGB image, it fails to estimate scene regions that are occluded by foreground objects.  ...  [10] instead, predict a 3D point cloud from an RGB image by generating multiple 3D shapes. Wu et al. [11] (3D-VAE-GAN) build a 3D model from the latent vector of an image.  ... 
doi:10.1016/j.patrec.2019.05.007 fatcat:dmrezb4cfvaipazpr6o6jxe5va

A flexible technique to select objects via convolutional neural network in VR space

Huiyu Li, Linwei Fan
2019 Science China Information Sciences  
In this paper, we propose a flexible 3D selection technique in a large display projection-based virtual environment.  ...  Herein, we present a body tracking method using convolutional neural network (CNN) to estimate 3D skeletons of multi-users, and propose a region-based selection method to effectively select virtual objects  ...  displayed objects is necessary in a head-tracked projection-based virtual environment (VE).  ... 
doi:10.1007/s11432-019-1517-3 fatcat:me5ndxdw7vc6rk7zslphylcooq

Modern Augmented Reality: Applications, Trends, and Future Directions [article]

Shervin Minaee, Xiaodan Liang, Shuicheng Yan
2022 arXiv   pre-print
Although it has been around for nearly fifty years, it has seen a lot of interest from the research community in recent years, mainly because of the huge success of deep learning models for various computer  ...  We then give an overview of around 100 recent promising machine learning based works developed for AR systems, such as deep learning works for AR shopping (clothing, makeup), AR based image filters (such  ...  ACKNOWLEDGMENTS We would like to thank Iasonas Kokkinos, Qi Pan, Lyric Kaplan, and Liz Markman for reviewing this work, and providing very helpful comments and suggestions.  ... 
arXiv:2202.09450v2 fatcat:x436ycnvxnhdpfdvhnxkzgbqce

Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses

Llogari Casas Cambra, Matthias Fauconneau, Maggie Kosek, Kieran Mclister, Kenny Mitchell
2019 Computers  
We demonstrate the practical application of the method in generating otherwise laborious in-betweening frames for 3D printed stop-motion animation.  ...  Results are presented on a range of objects, deformations, and illumination conditions in real-time Augmented Reality (AR) on a mobile device.  ...  markerless 3D object tracking [4] (using Vuforia [5] ).  ... 
doi:10.3390/computers8020029 fatcat:ksotur7qgrfrfdfd6o6474cdqy

Pixel-wise closed-loop registration in video-based augmented reality

Feng Zheng, Dieter Schmalstieg, Greg Welch
2014 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)  
We discuss the trade-offs between, and different use cases of, forward and backward warping with model-based tracking in terms of specific properties for registration.  ...  leading to surface details that are not present in the virtual object; and (2) backward warping of the camera image into the real scene model, preserving the full use of the dense geometry buffer (depth  ...  for the "City-of-Sights" dataset.  ... 
doi:10.1109/ismar.2014.6948419 dblp:conf/ismar/ZhengSW14 fatcat:pxpsvcagfrgvbfizcocl37akfa
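The backward-warping strategy contrasted in this entry (for every destination pixel, look up a source coordinate and sample there, so the destination has no holes, unlike forward warping, which scatters source pixels and can leave gaps) can be sketched as follows. The function name and its nearest-neighbour sampling are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def backward_warp(src, inv_map):
    """Backward warping with nearest-neighbour sampling.

    src     : H_s x W_s source image.
    inv_map : H x W x 2 array of (row, col) source coordinates, one per
              destination pixel (the inverse mapping).
    """
    # round the inverse-mapped coordinates and clamp them to the source bounds
    r = np.clip(np.round(inv_map[..., 0]).astype(int), 0, src.shape[0] - 1)
    c = np.clip(np.round(inv_map[..., 1]).astype(int), 0, src.shape[1] - 1)
    # every destination pixel gets a value, so no hole-filling pass is needed
    return src[r, c]
```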

AADS: Augmented Autonomous Driving Simulation using Data-driven Algorithms [article]

Wei Li, Chengwei Pan, Rong Zhang, Jiaping Ren, Yuexin Ma, Jin Fang, Feilong Yan, Qichuan Geng, Xinyu Huang, Huajun Gong, Weiwei Xu, Guoping Wang, Dinesh Manocha, Ruigang Yang
2019 arXiv   pre-print
Scalability is particularly important for AD simulation and we believe the complexity and diversity of the real world cannot be realistically captured in a virtual environment.  ...  In addition, the fidelity of CG images still lacks the richness and authenticity of real-world images and using these images for training leads to degraded performance.  ...  Data and materials availability: The RGB and point cloud datasets (ApolloScape-RGB and ApolloScape-PC) are hosted with the web link http://apolloscape.auto/scene.html.  ... 
arXiv:1901.07849v2 fatcat:s6esfsa6zzbt3m4dmmrssy5gni

AutoRemover: Automatic Object Removal for Autonomous Driving Videos [article]

Rong Zhang, Wei Li, Peng Wang, Chenye Guan, Jin Fang, Yuhang Song, Jinhui Yu, Baoquan Chen, Weiwei Xu, Ruigang Yang
2019 arXiv   pre-print
Experiments show that our method outperforms other state-of-the-art (SOTA) object removal algorithms, reducing the RMSE by over 19%.  ...  To deal with large ego-motion, we take advantage of the multi-source data, in particular the 3D data, in autonomous driving.  ...  Acknowledgements Weiwei Xu is partially supported by NSFC (No. 61732016) and the fundamental research fund for the central universities. Jinhui Yu is partially supported by NSFC (No. 61772463).  ... 
arXiv:1911.12588v1 fatcat:223s5gmqjbczzcccbb6pt64neq

Interaction Between Real and Virtual Humans: Playing Checkers [article]

Rémy Torre, Pascal Fua, Selim Balcisoy, Michal Ponder, Daniel Thalmann
2000 ICAT-EGVE - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments  
For some years, we have been able to integrate virtual humans into virtual environments.  ...  Instead, we rely on purely image-based techniques to address the registration issue, when the camera or the objects move, and to drive the virtual human's behavior.  ...  As shown in Fig. 7 , the virtual world is rendered with a black background, and all the 3D objects which represent real objects are black.  ... 
doi:10.2312/egve/egve00/023-032 fatcat:aj3jmc6rszdwxfx5w7kj6klncq

AutoRemover: Automatic Object Removal for Autonomous Driving Videos

Rong Zhang, Wei Li, Peng Wang, Chenye Guan, Jin Fang, Yuhang Song, Jinhui Yu, Baoquan Chen, Weiwei Xu, Ruigang Yang
2020 Proceedings of the AAAI Conference on Artificial Intelligence  
Experiments show that our method outperforms other state-of-the-art (SOTA) object removal algorithms, reducing the RMSE by over 19%.  ...  To deal with large ego-motion, we take advantage of the multi-source data, in particular the 3D data, in autonomous driving.  ...  Acknowledgements Weiwei Xu is partially supported by NSFC (No. 61732016) and the fundamental research fund for the central universities. Jinhui Yu is partially supported by NSFC (No. 61772463).  ... 
doi:10.1609/aaai.v34i07.6982 fatcat:kgrz7qzw6zhmrcsdoearrzzwna

Interaction Between Real and Virtual Humans: Playing Checkers [chapter]

Rémy Torre, Pascal Fua, Selim Balcisoy, Michal Ponder, Daniel Thalmann
2000 Eurographics  
For some years, we have been able to integrate virtual humans into virtual environments.  ...  Instead, we rely on purely image-based techniques to address the registration issue, when the camera or the objects move, and to drive the virtual human's behavior.  ...  As shown in Figure 8 , the virtual world is rendered with a black background, and all the 3D objects which represent real objects are black.  ... 
doi:10.1007/978-3-7091-6785-4_4 fatcat:5m2muxaacjh4lldaeb333msdze

Segmentation and tracking of nonplanar templates to improve VSLAM

Abdelsalam Masoud, William Hoff
2016 Robotics and Autonomous Systems  
A VSLAM system estimates its position and orientation (pose) by tracking distinct landmarks in the environment using its camera.  ...  We present an algorithm that estimates the 3D structure of a nonplanar template as it is tracked through a sequence of images.  ...  (d) Number of inlier 3D points tracked, for each frame. Fig. 11: Tracked points in frame 6, for WPM (left) and PPM (right); 2D points are labeled in red, 3D points in green.  ... 
doi:10.1016/j.robot.2016.07.007 fatcat:ymbyelpaundyvnimbcgviktm2q

VirtualCube: An Immersive 3D Video Communication System [article]

Yizhong Zhang, Jiaolong Yang, Zhen Liu, Ruicheng Wang, Guojun Chen, Xin Tong, Baining Guo
2021 arXiv   pre-print
We use VirtualCubes as the basic building blocks of a virtual conferencing environment, and we provide each VirtualCube user with a surrounding display showing life-size videos of remote participants.  ...  The key ingredient is VirtualCube, an abstract representation of a real-world cubicle instrumented with RGBD cameras for capturing the 3D geometry and texture of a user.  ...  At run-time, we compute the differences of pixels' depth and color to the background image values.  ...  G refers to the global virtual environment defined by the V-Cube Assembly (as shown in  ... 
arXiv:2112.06730v2 fatcat:ce7dls4fvbf7zbyqdn7mx4hvpe