3,733 hits in 5.5 sec

Mobile Devices Based 3D Image Display Depending on User's Actions and Movements

Kohei Arai, Herman Tolle, Akihiro Serita
2013 International Journal of Advanced Research in Artificial Intelligence (IJARAI)  
One application of the proposed system is demonstrated: a virtual interior tour of a house of interest. The user can look around inside the house in accordance with their own movement.  ...  Thus the user can imagine what the interior of the house looks like.  ...  ACKNOWLEDGMENT The authors would like to thank Arai's laboratory members for their useful comments and suggestions during this research work.  ... 
doi:10.14569/ijarai.2013.020612 fatcat:mtyys2lghrg3fasa24c3b2xp4e

A multi-camera person tracking system for robotic applications in virtual reality TV studio

S. Nair, G. Panin, M. Wojtczyk, C. Lenz, T. Friedlhuber, A. Knoll
2008 IEEE/RSJ International Conference on Intelligent Robots and Systems
In this paper, an integrated multi-camera person tracking system for virtual reality television studios (VR-TV) is presented.  ...  The system robustly tracks the moderator while freely moving, sitting or walking around the studio, and the estimation result can be used in order to drive the main broadcasting camera mounted on a large  ...  ACKNOWLEDGEMENTS The authors wish to express their acknowledgements to the RTL Television Studio Köln, Germany for providing the environment, and the pictures and live video sequences for our tracking  ... 
doi:10.1109/iros.2008.4650727 dblp:conf/iros/NairPWLFK08 fatcat:rn3oypne2fhqvhb6axvjc2c4ai

A distributed and scalable person tracking system for robotic visual servoing with 8 dof in virtual reality TV studio automation

Suraj Nair, Giorgio Panin, Thorsten Roder, Thomas Friedlhuber, Alois Knoll
2009 6th International Symposium on Mechatronics and its Applications
In this paper, a distributed and scalable person tracking system for visual servoing using Industrial Robot Arms for virtual reality television studios (VR-TV) is presented.  ...  The system robustly tracks the moderator while freely moving, sitting or walking around the studio, and the estimation result can be used to drive the main broadcasting camera mounted on a large robotic  ...  ACKNOWLEDGEMENTS The authors wish to express their acknowledgements to the RTL Television Studio Köln, Germany for providing the environment, and the pictures and live video sequences for our tracking  ... 
doi:10.1109/isma.2009.5164807 fatcat:igr73gjcvbhajbkjutbbrxjzoi

EgoCap

Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov, Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele, Christian Theobalt
2016 ACM Transactions on Graphics  
Therefore, we propose a new method for real-time, marker-less, and egocentric motion capture: estimating the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet  ...  Our approach combines the strength of a new generative pose estimation framework for fisheye views with a ConvNet-based body-part detector trained on a large new dataset.  ...  Acknowledgements We thank all reviewers for their valuable feedback, Dushyant Mehta, James Tompkin, and The Foundry for license support.  ... 
doi:10.1145/2980179.2980235 fatcat:kx3rcoljurb3xgewan4acsb2qa

EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras [article]

Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov, Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele, Christian Theobalt
2016 arXiv   pre-print
We therefore propose a new method for real-time, marker-less and egocentric motion capture which estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras that are attached  ...  It combines the strength of a new generative pose estimation framework for fisheye views with a ConvNet-based body-part detector trained on a large new dataset.  ...  Acknowledgements We thank all reviewers for their valuable feedback, Dushyant Mehta, James Tompkin, and The Foundry for license support.  ... 
arXiv:1609.07306v1 fatcat:xatetlqtpbbsbclzgv3ik2fxee

Four Metamorphosis States in a Distributed Virtual (TV) Studio: Human, Cyborg, Avatar, and Bot – Markerless Tracking and Feedback for Realtime Animation Control [chapter]

Jens Herder, Jeff Daemen, Peter Haufs-Brusberg, Isis Abdel Aziz
2015 Lecture Notes in Computer Science  
Virtual studios differ from other virtual environments because two concurrent views always exist: the view of the TV consumer and the view of the talent in front of the camera.  ...  The major challenge in virtual studio technology is the interaction between actors and virtual objects.  ...  Some work was carried out within the "IVO [at] hiTV -Interaction with virtual objects in iTV productions" project, supported by the "FHprofUnt" program of the Federal Ministry of Education and Research  ... 
doi:10.1007/978-3-319-17043-5_2 fatcat:yt6s3dppvzckpjz3m4ksge5inq

4D Visualization of Dynamic Events from Unconstrained Multi-View Videos [article]

Aayush Bansal, Minh Vo, Yaser Sheikh, Deva Ramanan, Srinivasa Narasimhan
2020 arXiv   pre-print
We present a data-driven approach for 4D space-time visualization of dynamic events from videos captured by hand-held multiple cameras.  ...  This model allows us to create virtual cameras that facilitate: (1) freezing the time and exploring views; (2) freezing a view and moving through time; and (3) simultaneously changing both time and view  ...  We are also thankful to Gengshan Yang for his help with the disparity estimation code and many other friends for their patience in collecting the various sequences. We list them on our project page.  ... 
arXiv:2005.13532v1 fatcat:sebhlm3mlbfwfnihehjwuk4a5q

Motion tracking: no silver bullet, but a respectable arsenal

G. Welch, E. Foxlin
2002 IEEE Computer Graphics and Applications  
Thus, the typical inside-looking-out characterization would be misleading.  ...  These two alternatives are often referred to as outside-looking-in and inside-looking-out respectively, although that characterization can be misleading (see the sidebar, "Outside-In or Inside-Out?").  ... 
doi:10.1109/mcg.2002.1046626 fatcat:32vtilloojf3voow3ysowcsa4i

Efficient optical camera tracking in virtual sets

Y.S. Xirouhakis, A.I. Drosopoulos, A.N. Delopoulos
2001 IEEE Transactions on Image Processing  
Optical tracking systems have become particularly popular in virtual studio applications, tending to replace electromechanical ones.  ...  However, optical systems are reported to be inferior in terms of accuracy in camera motion estimation.  ...  ACKNOWLEDGMENT The authors wish to thank the three anonymous reviewers and the Associate Editor for their encouragement in providing error-analysis results and their guidelines on improving the manuscript  ... 
doi:10.1109/83.913595 pmid:18249650 fatcat:ovnquqsalbc3replpv72cpewci

Human Pose Manipulation and Novel View Synthesis using Differentiable Rendering [article]

Guillaume Rochette, Chris Russell, Richard Bowden
2022 arXiv   pre-print
We show how our approach can be used for motion transfer between individuals; novel view synthesis of individuals captured from just a single camera; to synthesize individuals from any virtual viewpoint  ...  We present a new approach for synthesizing novel views of people in new poses. Our novel differentiable renderer enables the synthesis of highly realistic images from any viewpoint.  ...  This work reflects only the authors' view and the Commission is not responsible for any use that may be made of the information it contains.  ... 
arXiv:2111.12731v2 fatcat:4ncbud64frbwrmdwt7liexj62y

Learning Lightprobes for Mixed Reality Illumination

David Mandl, Kwang Moo Yi, Peter Mohr, Peter M. Roth, Pascal Fua, Vincent Lepetit, Dieter Schmalstieg, Denis Kalkofen
2017 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)  
Note that we only register the real world lighting and do not consider any camera effects such as exposure or blur.  ...  To keep the pipeline accurate and efficient, we propose to fuse the light estimation results from multiple CNN instances, and we show an approach for caching estimates over time.  ...  ACKNOWLEDGEMENTS This work was funded in part by the EU FP7-ICT project MAGELLAN under contract 611526 and the Christian Doppler Laboratory for Semantic 3D Computer Vision.  ... 
doi:10.1109/ismar.2017.25 dblp:conf/ismar/MandlYMRFLSK17 fatcat:ctnhktjlsrhbndk6k6gyn3ahra

UNOC: Understanding Occlusion for Embodied Presence in Virtual Reality [article]

Mathias Parger, Chengcheng Tang, Yuanlu Xu, Christopher Twigg, Lingling Tao, Yijing Li, Robert Wang, Markus Steinberger
2020 arXiv   pre-print
Unlike the popular 3D pose estimation setting, the problem is often formulated as inside-out tracking based on embodied perception (e.g., egocentric cameras, handheld sensors).  ...  Tracking body and hand motions in the 3D space is essential for social and self-presence in augmented and virtual environments.  ...  To simulate the multiple inside-out tracking cameras of headsets like the Oculus Quest or Microsoft Hololens, we place a virtual camera four centimeters in front of the nose.  ... 
arXiv:2012.03680v1 fatcat:xj6ndxwpvrd5hdnubelpkqqfva

Direct Manipulation of 3D Virtual Objects by Actors for Recording Live Video Content

Michihiko Minoh, Hideto Obara, Takuya Funtatomi, Masahiro Toyoura, Koh Kakusho
2007 Second International Conference on Informatics Research for Development of Knowledge Society Infrastructure (ICKS'07)  
The 3D virtual objects used as virtual props are modeled by recovering the 3D shapes of real objects with the volume intersection method.  ...  In this article, we describe our work toward direct manipulation of 3D virtual objects used as virtual props by a human playing the role of an actor while recording live video content.  ...  The studio camera is equipped with sensors for measuring the camera work, including panning, tilting, zooming and dollying, which are used for creating the CG image of the virtual set with the same camera  ... 
doi:10.1109/icks.2007.10 fatcat:qtzccowikze6vlzay6sip3jtvy

You2Me: Inferring Body Pose in Egocentric Video via First and Second Person Interactions [article]

Evonne Ng, Donglai Xiang, Hanbyul Joo, Kristen Grauman
2020 arXiv   pre-print
The body pose of a person wearing a camera is of great interest for applications in augmented reality, healthcare, and robotics, yet much of the person's body is out of view for a typical wearable camera  ...  We propose a learning-based approach to estimate the camera wearer's 3D body pose from egocentric video sequences.  ...  Acknowledgements: We thank Hao Jiang for helpful discussions. UT Austin is supported in part by ONR PECASE and NSF IIS-1514118.  ... 
arXiv:1904.09882v2 fatcat:6svureohwnfmldvzpkev6hc6re

Motion capture from body-mounted cameras

Takaaki Shiratori, Hyun Soo Park, Leonid Sigal, Yaser Sheikh, Jessica K. Hodgins
2011 ACM SIGGRAPH 2011 papers on - SIGGRAPH '11  
Outward-looking cameras are attached to the limbs of the subject, and the joint angles and root pose are estimated through non-linear optimization.  ...  Structure-from-motion is used to estimate the skeleton structure and to provide initialization for the non-linear optimization procedure.  ...  We would also like to thank Moshe Mahler, Valeria Reznitskaya, and Matthew Kaemmerer for their help in modeling and rendering, and Justin Macey for his help in recording the motion capture data.  ... 
doi:10.1145/1964921.1964926 fatcat:5ujb5rclwranfawpdom2bs47ly
Showing results 1–15 of 3,733