
3D Human Motion Reconstruction Using Video Processing [chapter]

Nadiya Roodsarabi, Alireza Behrad
2008 Lecture Notes in Computer Science  
In this method, 2D tracking is used for 3D reconstruction, in which a database of selected frames is used to correct the tracking process.  ...  Finally, 3D reconstruction is performed using the Taylor method. By using the DCT, we can select the best frequency regions for various tasks such as tracking, matching, and joint correction.  ...  Extracting monocular 3D human motion poses a number of difficulties, such as depth 3D-2D projection ambiguities, high-dimensional representation, physical constraints, self-occlusion, and observation ambiguities  ... 
doi:10.1007/978-3-540-69905-7_44 fatcat:r3ceix2vfjcyzkesllsmopyivu

Compression and interpolation of 3D stereoscopic and multiview video

Mel Siegel, Sriram Sethuraman, Jeffrey S. McVeigh, Angel G. Jordan, Scott S. Fisher, John O. Merritt, Mark T. Bolas
1997 Stereoscopic Displays and Virtual Reality Systems IV  
It is natural and usually advantageous to integrate motion compensation with the disparity calculation and coding.  ...  Compression implicit in conventional TV coding: in the temporal domain, the information content of the optical stream is reduced by averaging over the electronic shutter time and by sampling at the frame  ...  Thus P_A and B_A frames use a combination of disparity-compensated prediction (DCP) and motion-compensated prediction (Fig. 2).  ... 
doi:10.1117/12.274461 fatcat:udhq5zruwbdrbdvd3dmxe6ib4q

Disparity-compensated view synthesis for S3D content correction

Philippe Robert, Cédric Thébault, Pierre-Henri Conze, Andrew J. Woods, Nicolas S. Holliman, Gregg E. Favalora
2012 Stereoscopic Displays and Applications XXIII  
High-quality material must be delivered to the audience, but this is not always ensured, and correction of the stereo views may be required. This is done via disparity-compensated view synthesis.  ...  itself (vertical disparity, color difference between views...).  ...  The coefficients used for the bidirectional interpolation are changed on the right of the regions occluded in the left view, and on the left of the regions occluded in the right view.  ... 
doi:10.1117/12.908831 fatcat:fqvrbz22pzcd5hm7gu2m7w2mbm

3D Human Motion Tracking and Reconstruction Using DCT Matrix Descriptor

Alireza Behrad, Nadia Roodsarabi
2012 ISRN Machine Vision  
In this method, 2D tracking is used for 3D reconstruction, in which a database of selected frames is used to correct the tracking process.  ...  The advantage of using this descriptor is the capability of selecting proper frequency regions for various tasks, which results in efficient tracking and pose-matching algorithms.  ...  We use the information of the middle-frequency regions for this purpose, to remove clothing color (low frequency) and body deformation details (high frequency).  ... 
doi:10.5402/2012/235396 fatcat:uyruoxnt5vcwlln3r2nsrinhjq
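
The middle-frequency selection this snippet describes can be sketched with a 2D DCT and a band mask. This is an illustrative reconstruction, not the authors' code: the function name and the band limits `low_cut`/`high_cut` are assumptions, and a diagonal index `u + v` stands in for whatever band shape the paper actually uses.

```python
import numpy as np
from scipy.fft import dctn

def mid_frequency_descriptor(patch, low_cut=2, high_cut=12):
    """Keep only middle-frequency DCT coefficients of a 2D patch.

    Low frequencies (coarse appearance, e.g. clothing color) and high
    frequencies (fine deformation detail) are zeroed out; the band in
    between is returned as a flat descriptor vector.
    """
    coeffs = dctn(patch, norm="ortho")              # 2D DCT-II of the patch
    u, v = np.indices(coeffs.shape)
    band = (u + v >= low_cut) & (u + v < high_cut)  # mid-frequency mask
    return coeffs[band]
```

For a constant patch the descriptor is all zeros, since the only nonzero coefficient is the DC term, which the mask excludes.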

Motion magnification

Ce Liu, Antonio Torralba, William T. Freeman, Frédo Durand, Edward H. Adelson
2005 ACM SIGGRAPH 2005 Papers on - SIGGRAPH '05  
After an initial image registration step, we measure motion by a robust analysis of feature point trajectories, and segment pixels based on similarity of position, color, and motion.  ...  a) Input sequence (b) Motion magnified sequence Figure 1: Frames from input and motion magnified output sequence.  ...  We acknowledge support from the National Geospatial-Intelligence Agency under contract BAA NEGI 1582-04-0004 and support from Shell Research.  ... 
doi:10.1145/1186822.1073223 fatcat:mnkvj4siubgxhn4kphsmyu3spa

Motion magnification

Ce Liu, Antonio Torralba, William T. Freeman, Frédo Durand, Edward H. Adelson
2005 ACM Transactions on Graphics  
After an initial image registration step, we measure motion by a robust analysis of feature point trajectories, and segment pixels based on similarity of position, color, and motion.  ...  a) Input sequence (b) Motion magnified sequence Figure 1: Frames from input and motion magnified output sequence.  ...  We acknowledge support from the National Geospatial-Intelligence Agency under contract BAA NEGI 1582-04-0004 and support from Shell Research.  ... 
doi:10.1145/1073204.1073223 fatcat:xmsxlfle7ragtexcr2j7f56w6e
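
The Lagrangian idea behind motion magnification, scaling up the displacement of tracked feature points, can be sketched in a few lines. This is a toy illustration only; the paper's actual pipeline additionally performs registration, layer segmentation, and hole filling, and the function name is hypothetical.

```python
import numpy as np

def magnify_trajectories(tracks, alpha=5.0):
    """Amplify the motion of feature-point trajectories.

    tracks: array of shape (n_points, n_frames, 2). Each point's
    displacement from its first-frame position is multiplied by alpha.
    """
    rest = tracks[:, :1, :]            # reference (first-frame) position
    return rest + alpha * (tracks - rest)
```

A point drifting 1 pixel per frame drifts `alpha` pixels per frame after magnification, while stationary points are untouched.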

SceneFlowFields++: Multi-frame Matching, Visibility Prediction, and Robust Interpolation for Scene Flow Estimation [article]

René Schuster, Oliver Wasenmüller, Christian Unger, Georg Kuschk, Didier Stricker
2019 arXiv   pre-print
With the successful concept of pixel-wise matching and sparse-to-dense interpolation, we push the limits of scene flow estimation.  ...  Using image information from multiple time steps and explicit visibility prediction based on previous results, we achieve competitive performance on different data sets.  ...  Therefore, we tackle the problem before it occurs, by using image information from multiple frames to avoid mismatches and resolve ambiguity in unmatchable regions.  ... 
arXiv:1902.10099v1 fatcat:ljs5iq57gvgwrodycqa4uvqsfi

A Dataset for Visual Navigation with Neuromorphic Methods

Francisco Barranco, Cornelia Fermuller, Yiannis Aloimonos, Tobi Delbruck
2016 Frontiers in Neuroscience  
For both datasets the cameras move with a rigid motion in a static scene, and the data includes the images, events, optic flow, 3D camera motion, and the depth of the scene, along with calibration procedures  ...  We present datasets to evaluate the accuracy of frame-free and frame-based approaches for tasks of visual navigation.  ...  The authors thank the sensors research group at the Institute of Neuroinformatics in Zurich (ETH Zurich and University of Zurich), and IniLabs for their support.  ... 
doi:10.3389/fnins.2016.00049 pmid:26941595 pmcid:PMC4763084 fatcat:kejvcqhn2fghxfgiyx4azphrnq

MoSculp: Interactive Visualization of Shape and Time [article]

Xiuming Zhang, Tali Dekel, Tianfan Xue, Andrew Owens, Qiurui He, Jiajun Wu, Stefanie Mueller, William T. Freeman
2018 arXiv   pre-print
By providing viewers with 3D information, motion sculptures reveal space-time motion information that is difficult to perceive with the naked eye, and allow viewers to interpret how different parts of  ...  We validate the effectiveness of this approach with user studies, finding that our motion sculpture visualizations are significantly more informative about motion than existing stroboscopic and space-time  ...  We thank Kevin Burg for allowing us to use the ballet clips from [10] . We thank Katie Bouman, Vickie Ye, and Zhoutong Zhang for their help with the supplementary video.  ... 
arXiv:1809.05491v1 fatcat:2fkf32opvvfgngxe3vm5jqs3v4

Towards Plenoptic Raumzeit Reconstruction [chapter]

Martin Eisemann, Felix Klose, Marcus Magnor
2011 Lecture Notes in Computer Science  
The goal of image-based rendering is to evoke a visceral sense of presence in a scene using only photographs or videos.  ...  Examining the underlying models, we find three main categories: view interpolation based on geometry proxies, pure image interpolation techniques, and complete scene flow reconstruction.  ... 
doi:10.1007/978-3-642-24870-2_1 fatcat:yn2reedkorbt3okam6ronebthu

Model-Based Reinforcement of Kinect Depth Data for Human Motion Capture Applications

Luis Calderita, Juan Bandera, Pablo Bustos, Andreas Skiadopoulos
2013 Sensors  
New cheap depth sensors and open source frameworks, such as OpenNI, allow for perceiving human motion on-line without using invasive systems.  ...  However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue using a model-based pose generator to complement the OpenNI human tracker.  ...  Acknowledgments This work has been partially supported by the Spanish Ministerio de Ciencia e Innovación, TIN2011-27512-C05-04 and AIB2010PT-00149, and by the Junta de Extremadura project, IB10062.  ... 
doi:10.3390/s130708835 pmid:23845933 pmcid:PMC3758625 fatcat:2vbze6bbzjfxzn4twgm4rcovju

3D display size matters: Compensating for the perceptual effects of S3D display scaling

Karim Benzeroual, Robert S. Allison, Laurie M. Wilcox
2012 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops  
This paper will review the primary human factors issues related to S3D image scaling and the techniques and algorithms used to scale content.  ...  Of these, one of the most variable and unpredictable factors influencing the observer's S3D experience is the display size, which ranges from S3D mobile devices to large-format 3D movie theatres.  ...  The depth map is a grayscale image associated with each frame and defines the depth information for each pixel in the frame.  ... 
doi:10.1109/cvprw.2012.6238907 dblp:conf/cvpr/BenzeroualAW12 fatcat:ge247jjsubhsljuwhc7gpwot24

Stereo Analysis by Hybrid Recursive Matching for Real-Time Immersive Video Conferencing

N. Atzpadin, P. Kauff, O. Schreer
2004 IEEE Transactions on Circuits and Systems for Video Technology (Print)  
To cope with this problem, mismatches have to be detected and substituted by a sophisticated interpolation and extrapolation scheme.  ...  In particular, errors in occluded regions and homogeneous or less structured regions lead to disturbing artifacts in the synthesized virtual views.  ...  Therefore, color information is used for further decisions.  ... 
doi:10.1109/tcsvt.2004.823391 fatcat:hq3sti2wxbehdj2dx2bc6naqa4
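
The detect-and-substitute step this entry mentions can be sketched with a left-right consistency check followed by scanline interpolation. This is a minimal stand-in, not the paper's hybrid recursive matcher or its more sophisticated substitution scheme; the function name and tolerance are assumptions.

```python
import numpy as np

def fill_mismatches(disp_left, disp_right, tol=1.0):
    """Detect disparity mismatches and fill them by interpolation.

    A pixel's disparity is kept only if the corresponding pixel in the
    right view carries (roughly) the same disparity; detected
    mismatches are replaced by linear interpolation along the scanline.
    """
    h, w = disp_left.shape
    filled = disp_left.astype(float).copy()
    cols = np.arange(w)
    for y in range(h):
        d = filled[y]
        x_r = np.clip(cols - d.round().astype(int), 0, w - 1)
        ok = np.abs(d - disp_right[y, x_r]) <= tol   # consistency check
        if ok.any() and not ok.all():
            filled[y] = np.interp(cols, cols[ok], d[ok])
    return filled
```

A single outlier in an otherwise consistent scanline fails the check and is replaced by the interpolated value of its consistent neighbors.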

EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras [article]

Anton Mitrokhin, Chengxi Ye, Cornelia Fermuller, Yiannis Aloimonos, Tobi Delbruck
2020 arXiv   pre-print
Our approach is based on an efficient implementation of the SfM learning pipeline, using a low-parameter neural network architecture on event data.  ...  By 3D scanning the room and the objects, accurate depth map ground truth and pixel-wise object masks are obtained, which are reliable even in poor lighting conditions and during fast motion.  ...  In addition, constraints on the occlusion regions [27] and discontinuities [14] have been used. Recently, machine learning techniques have been used for motion segmentation [13] , [5] .  ... 
arXiv:1903.07520v2 fatcat:56uvc4u6cba3firmurxmwip244

Relief texture mapping

Manuel M. Oliveira, Gary Bishop, David McAllister
2000 Proceedings of the 27th annual conference on Computer graphics and interactive techniques - SIGGRAPH '00  
We present an extension to texture mapping that supports the representation of 3-D surface details and view motion parallax.  ...  The 1-D warping functions work in texture coordinates to handle the parallax and visibility changes that result from the 3-D shape of the displacement surface.  ...  Brooks, Jr. for his detailed critique of an earlier draft of this paper. Cássio Ribeiro designed Relief Town. The UNC IBR group provided the reading room data set.  ... 
doi:10.1145/344779.344947 dblp:conf/siggraph/OliveiraBM00 fatcat:rurmfeli25byncp4m2okid2hny