
Jump: Virtual Reality Video

Robert Anderson, David Gallup, Jonathan T. Barron, Janne Kontkanen, Noah Snavely, Carlos Hernández, Sameer Agarwal, Steven M. Seitz
2016 ACM Transactions on Graphics  
The Jump system produces omnidirectional stereo (ODS) video. (a) The multi-camera rig with the ODS viewing circle overlaid and three rays for the left (green) and right (red) stitches.  ...  (b) A stitched ODS video generated from 16 input videos, shown in anaglyphic stereo here. (c) A VR headset in which the video can be viewed.  ...  This approach of enforcing temporal consistency by connecting each pixel to its nearby pixels in the video sequence implicitly reasons about object motion by assuming motion is small and temporally smooth  ... 
doi:10.1145/2980179.2980257 fatcat:qwqow24s6bhhxhvpioko6u7wlq
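The ODS model this snippet describes has a compact geometric core: each panorama column is assigned a ray whose origin lies on a viewing circle of diameter equal to the interpupillary distance and is tangent to that circle, with the left and right eyes on opposite tangents. A minimal sketch of that ray model; the axis and sign conventions here are illustrative assumptions, not taken from the Jump system itself.

```python
import numpy as np

def ods_ray(theta, ipd=0.064, eye="left"):
    """Ray for one panorama column under the omnidirectional-stereo (ODS)
    model: each eye's rays are tangent to a viewing circle whose diameter
    is the interpupillary distance. Conventions here are illustrative."""
    r = ipd / 2.0                                           # viewing-circle radius
    sign = -1.0 if eye == "left" else 1.0
    d = np.array([np.sin(theta), 0.0, np.cos(theta)])       # horizontal view direction
    o = sign * r * np.array([np.cos(theta), 0.0, -np.sin(theta)])  # origin on circle,
    return o, d                                             # perpendicular to d (tangent ray)
```

Stitching then amounts to evaluating these rays for every column and eye, and sampling the input cameras along them.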

MatryODShka: Real-time 6DoF Video View Synthesis using Multi-Sphere Images [article]

Benjamin Attal, Selena Ling, Aaron Gokaslan, Christian Richardt, James Tompkin
2020 arXiv   pre-print
Stereo 360 imagery can be captured from multi-camera systems for virtual reality (VR), but lacks motion parallax and correct-in-all-directions disparity cues.  ...  We introduce a method to convert stereo 360 (omnidirectional stereo) imagery into a layered, multi-sphere image representation for six degree-of-freedom (6DoF) rendering.  ...  Acknowledgments: We thank Ana Serrano for help with RGB-D comparisons and Eliot Laidlaw for improving the Unity renderer.  ... 
arXiv:2008.06534v1 fatcat:vdgpevz7brc6tbaq5sgiqa6ylm
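The layered multi-sphere image (MSI) named in this abstract is rendered by alpha-compositing concentric spheres from the farthest layer inward. A hedged sketch of that "over" compositing step, assuming the layers are already resampled onto a shared equirectangular grid (shapes and names are illustrative):

```python
import numpy as np

def composite_msi(colors, alphas):
    """Back-to-front 'over' compositing of multi-sphere image layers.
    colors: (L, H, W, 3) RGB per layer, ordered far -> near.
    alphas: (L, H, W, 1) opacity per layer."""
    out = np.zeros_like(colors[0])
    for c, a in zip(colors, alphas):        # far to near
        out = c * a + out * (1.0 - a)
    return out
```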

TeleParallax: Low-Motion-Blur Stereoscopic System With Correct Interpupillary Distance for 3D Head Rotations

Tomohiro Amemiya, Kazuma Aoyama, Michitaka Hirose
2021 Frontiers in Virtual Reality  
Binocular parallax provides cues for depth information when a scene is viewed with both eyes. In visual telepresence systems, stereo cameras are commonly used to simulate human eyes.  ...  The use of omnidirectional cameras can reduce the motion blur, but does not provide the correct interpupillary distance (IPD) when viewers tilt or turn their heads sideways.  ...  ACKNOWLEDGMENTS The authors would like to thank Tsubasa Morita for his help in developing the first prototype using WebRTC.  ... 
doi:10.3389/frvir.2021.726285 fatcat:yobi5e5li5bydangxirdickqdu
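A back-of-envelope model (an illustration, not a result from the paper) shows why a fixed horizontal baseline breaks down under head roll: only the projection of the captured baseline onto the viewer's interocular axis contributes to stereo, so disparity cues vanish as the head rolls toward 90 degrees.

```latex
% Effective stereo baseline under head roll \varphi (illustrative model):
% the captured baseline b = IPD is horizontal, so rolling the head leaves
% only its projection onto the new interocular axis.
b_{\mathrm{eff}}(\varphi) = \mathrm{IPD} \cdot \cos\varphi,
\qquad b_{\mathrm{eff}}(90^{\circ}) = 0 .
```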

Image-based interactive exploration of real-world environments

M. Uyttendaele, A. Criminisi, Sing Bing Kang, S. Winder, R. Szeliski, R. Hartley
2004 IEEE Computer Graphics and Applications  
We decided instead on omnidirectional capture at video rates. A common approach to omnidirectional capture is to use catadioptric systems consisting of mirrors and lenses.  ...  The user postprocesses the captured data with a more complex stitching and parallax compensation step to produce seamless panoramas.  ... 
doi:10.1109/mcg.2004.1297011 pmid:15628073 fatcat:a4v2bwkxfbd3ddyjdy3hdf3z5e

3-D scene data recovery using omnidirectional multibaseline stereo

Sing Bing Kang, R. Szeliski
1996 Proceedings CVPR IEEE Computer Society Conference on Computer Vision and Pattern Recognition  
By taking such image panoramas at different camera locations, we can recover 3-D data of the scene using a set of simple techniques: feature tracking, an 8-point structure from motion algorithm, and multibaseline stereo.  ...  Acknowledgements We would like to thank Andrew Johnson for the use of his 3-D modeling and rendering program and Richard Weiss for helpful discussions.  ... 
doi:10.1109/cvpr.1996.517098 dblp:conf/cvpr/KangS96 fatcat:hvgqttyo3rgr3jldq6g4yej7we
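The multibaseline stereo the snippet names has a well-known core: at a fixed depth, disparity scales linearly with baseline, so candidate inverse depths can be scored by summing SSD matching costs across all baselines. A toy rectified, horizontal-shift sketch of that idea (the paper's formulation is omnidirectional; the pinhole setup and names here are illustrative):

```python
import numpy as np

def pick_inverse_depth(ref, others, baselines, f, inv_depths, x, y, win=2):
    """Sum-of-SSD multibaseline scoring at pixel (x, y): all baselines must
    agree at the true inverse depth (disparity = f * baseline / depth).
    Assumes all shifted windows stay inside the image bounds."""
    patch = ref[y - win:y + win + 1, x - win:x + win + 1]
    costs = []
    for iz in inv_depths:
        c = 0.0
        for img, b in zip(others, baselines):
            d = int(round(f * b * iz))                  # disparity for baseline b
            q = img[y - win:y + win + 1, x - d - win:x - d + win + 1]
            c += float(((patch - q) ** 2).sum())
        costs.append(c)
    return inv_depths[int(np.argmin(costs))]
```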

Low-cost 360 stereo photography and video capture

Kevin Matzen, Michael F. Cohen, Bryce Evans, Johannes Kopf, Richard Szeliski
2017 ACM Transactions on Graphics  
We validate our method by generating both stills and videos. We have conducted a user study to better understand what kinds of geometric processing are necessary for a pleasant viewing experience.  ...  In this work, we describe a method that takes images from two 360 • spherical cameras and synthesizes an omni-directional stereo panorama with stereo in all directions.  ...  Applying our algorithm naively to frames of video independently produces some amount of temporal flickering, since none of our dense correspondence methods explicitly account for temporal consistency.  ... 
doi:10.1145/3072959.3073645 fatcat:3w2fbxymzjffvjzwljhbwv7ozm
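The flicker the last snippet mentions comes from estimating correspondences per frame with no temporal term. One generic remedy, sketched here as an assumption rather than as the paper's method, is to smooth the per-frame disparity fields over time, e.g. with an exponential moving average:

```python
import numpy as np

def smooth_disparity(frames_disp, beta=0.8):
    """Exponential moving average over per-frame disparity maps to damp
    temporal flicker (a generic fix, not the paper's method).
    frames_disp: iterable of (H, W) disparity arrays, in time order."""
    smoothed, state = [], None
    for d in frames_disp:
        state = d if state is None else beta * state + (1.0 - beta) * d
        smoothed.append(state.copy())
    return smoothed
```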

360-Degree Video Streaming: A Survey of the State of the Art

Rabia Shafi, Wan Shuai, Muhammad Usman Younus
2020 Symmetry  
360-degree video streaming is expected to grow as the next disruptive innovation due to the ultra-high network bandwidth (60–100 Mbps for 6k streaming), ultra-high storage capacity, and ultra-high computation  ...  Next, the latest ongoing standardization efforts for enhanced degree-of-freedom immersive experience are presented.  ...  Acknowledgments: We would like to thank Kaifang Yang from Shaanxi Normal University for his insightful comments to improve the quality of the manuscript.  ... 
doi:10.3390/sym12091491 fatcat:wciqpwsi75grffl2ug73uzsbsm
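The quoted 60–100 Mbps is for delivering the full sphere; tile-based viewport-adaptive streaming, one family of approaches such surveys cover, fetches only the viewport tiles at full quality. A rough estimate with illustrative fractions (the specific numbers below are assumptions, not figures from the survey):

```python
def tiled_bitrate(full_mbps, viewport_frac=0.25, low_q_ratio=0.1):
    """Rough bitrate for viewport-adaptive tiled streaming: viewport tiles
    at full quality, the remaining sphere at a low-quality ratio.
    Both fractions are illustrative assumptions."""
    return full_mbps * (viewport_frac + (1.0 - viewport_frac) * low_q_ratio)

# e.g. an 80 Mbps full-sphere stream drops to 80 * 0.325 = 26.0 Mbps
print(tiled_bitrate(80.0))
```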

Enhancing Light Fields through Ray-Space Stitching

Xinqing Guo, Zhan Yu, Sing Bing Kang, Haiting Lin, Jingyi Yu
2016 IEEE Transactions on Visualization and Computer Graphics  
The final LF stitching is done using multi-resolution, high-dimensional graph-cut in order to account for possible scene motion, imperfect RSMM estimation, and/or undersampling.  ...  Our technique consists of two key components: LF registration and LF stitching. To register LFs, we use what we call the ray-space motion matrix (RSMM) to establish pairwise ray-ray correspondences.  ...  There are techniques to generate 3D panoramas, mostly in the context of stereoscopic images and videos. In the angular domain, Peleg et al.  ... 
doi:10.1109/tvcg.2015.2476805 pmid:26357400 fatcat:ghmmyryhfjg47kzzq3hqnp5wfa
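In the two-plane parameterization a ray is a 4-vector, so the pairwise ray-ray correspondences the RSMM encodes can be sketched as a homogeneous linear map; the exact RSMM construction and its estimation are given in the paper, and the form below is only the general shape of such a map.

```latex
% Sketch of a ray-ray correspondence in two-plane light-field coordinates:
% a ray r = (s, t, u, v) in one light field maps to r' in another via a
% homogeneous linear map M (see the paper for the exact RSMM).
\begin{pmatrix} s' \\ t' \\ u' \\ v' \\ 1 \end{pmatrix}
\;\sim\; M \begin{pmatrix} s \\ t \\ u \\ v \\ 1 \end{pmatrix},
\qquad M \in \mathbb{R}^{5 \times 5}.
```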

Content Format and Quality of Experience in Virtual Reality [article]

Henrique Galvan Debarba, Mario Montagud, Sylvain Chagué, Javier Lajara, Ignacio Lacosta, Sergi Fernandez Langa, Caecilia Charbonnier
2020 arXiv   pre-print
Namely, 360 stereoscopic video, the combination of a 3D environment with a video billboard for dynamic elements, and a full 3D rendered scene.  ...  On the other hand, 3D content allows for point of view translation, but real-time photorealistic rendering is not trivial and comes at high production and processing costs.  ...  simulate motion parallax when the user moves.  ...  a cinematic segment with two actors.  ... 
arXiv:2008.04511v1 fatcat:e4hnnshzjvefvorlcqjggn3sby

Survey of image-based representations and compression techniques

Heung-Yeung Shum, Sing Bing Kang, Shing-Chow Chan
2003 IEEE transactions on circuits and systems for video technology (Print)  
Capturing panoramas is even easier if omnidirectional cameras (e.g., [60] and [61]) or fisheye lenses [91] are used.  ... 
doi:10.1109/tcsvt.2003.817360 fatcat:44xdumu5rjdcfe6uk7omo2uvhy

Review of image-based rendering techniques

Harry Shum, Sing B. Kang, King N. Ngan, Thomas Sikora, Ming-Ting Sun
2000 Visual Communications and Image Processing 2000  
In this paper, we survey the techniques for image-based rendering.  ...  For example, the samples used in [28] are cylindrical panoramas. The disparity of each pixel in stereo pairs of cylindrical panoramas is computed and used for generating new plenoptic function samples.  ...  Capturing panoramas is even easier if omnidirectional cameras (e.g., [30, 29]) or fisheye lenses [45] are used.  ... 
doi:10.1117/12.386541 fatcat:iojdcke6zjgvhb4juxqp223yzm
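The disparity-based synthesis these survey snippets describe reduces to shifting pixels horizontally in proportion to their disparity. A minimal forward-warping sketch under that assumption (disocclusion holes are left unfilled and overlaps simply overwrite; a real system would handle both):

```python
import numpy as np

def forward_warp(image, disparity, t):
    """Forward-warp an image row by row, shifting each pixel by
    t * disparity: a minimal sketch of disparity-based view synthesis.
    image: (H, W, 3); disparity: (H, W); t: viewpoint offset factor."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        xt = np.clip(np.round(xs + t * disparity[y]).astype(int), 0, w - 1)
        out[y, xt] = image[y]
    return out
```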

Survey of image-based rendering techniques

Sing B. Kang, Sabry F. El-Hakim, Armin Gruen
1998 Videometrics VI  
In this paper, we survey the techniques for image-based rendering.  ...  For example, the samples used in [28] are cylindrical panoramas. The disparity of each pixel in stereo pairs of cylindrical panoramas is computed and used for generating new plenoptic function samples.  ...  Capturing panoramas is even easier if omnidirectional cameras (e.g., [30, 29]) or fisheye lenses [45] are used.  ... 
doi:10.1117/12.333774 fatcat:gkwroprrznegbktrbipwun76iy

Multi-perspective stereoscopy from light fields

Changil Kim, Alexander Hornung, Simon Heinzle, Wojciech Matusik, Markus Gross
2011 Proceedings of the 2011 SIGGRAPH Asia Conference on - SA '11  
Figure 1 : We propose a framework for flexible stereoscopic disparity manipulation and content post-production.  ...  The proposed framework is novel and useful for stereoscopic image processing and post-production.  ...  Acknowledgements We thank Kenny Mitchell, Maurizio Nitti, Manuel Lang, Thomas Oskam, and Wojciech Jarosz for providing various data sets, and the reviewers for their comments and suggestions.  ... 
doi:10.1145/2024156.2024224 fatcat:wkrgri5e4rgvfbtfr4i3trbkdy

Multi-perspective stereoscopy from light fields

Changil Kim, Alexander Hornung, Simon Heinzle, Wojciech Matusik, Markus Gross
2011 ACM Transactions on Graphics  
Acknowledgements We thank Kenny Mitchell, Maurizio Nitti, Manuel Lang, Thomas Oskam, and Wojciech Jarosz for providing various data sets, and the reviewers for their comments and suggestions.  ...  For example, the complete graph construction for a single video frame consisting of 50 images with 640 × 480 resolution used for Figure 1 takes about 2.5 seconds.  ...  As high frame rate light field cameras are not yet available, we captured stop motion videos to demonstrate our method on live-action footage.  ... 
doi:10.1145/2070781.2024224 fatcat:72msddctwzdq7jzcntwqgi5c4a
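Both entries of this paper build stereoscopic output by cutting through a light field so that different output columns come from different perspectives. A toy version of that idea, with the per-column view choice given as input rather than optimized by the paper's graph-based cut:

```python
import numpy as np

def multi_perspective(light_field, view_of_column):
    """Assemble a multi-perspective image by picking, per output column,
    one view of a horizontal light field (the paper instead optimizes a
    smooth cut through ray space for goal-driven disparity control).
    light_field: (V, H, W, 3); view_of_column: (W,) integer view indices."""
    V, H, W, _ = light_field.shape
    cols = [light_field[view_of_column[x], :, x] for x in range(W)]
    return np.stack(cols, axis=1)   # (H, W, 3)
```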

Neural Camera Models [article]

Igor Vasiljevic
2022 arXiv   pre-print
Machine-learning-aided depth perception, or depth estimation, predicts for each pixel in an image the distance to the imaged scene point.  ...  To enable these embodied agents to interact with real-world objects, cameras are increasingly being used as depth sensors, reconstructing the environment for a variety of downstream reasoning tasks.  ...  OmniMVS: End-to-end learning for omnidirectional stereo matching.  ... 
arXiv:2208.12903v1 fatcat:kmmanyu7brekxdbe3ynoasq3nq
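The per-pixel depth this abstract describes becomes a 3-D reconstruction once combined with a camera model. A standard pinhole unprojection sketch for reference (the thesis's point is precisely that learned camera models generalize beyond this pinhole case):

```python
import numpy as np

def unproject(depth, fx, fy, cx, cy):
    """Lift a depth map to a per-pixel point cloud with a pinhole model:
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.stack([X, Y, depth], axis=-1)   # (H, W, 3)
```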