Deep Two-View Structure-from-Motion Revisited
[article]
2021
arXiv
pre-print
2D optical flow correspondences, and 3) a scale-invariant depth estimation network that leverages epipolar geometry to reduce the search space, refine the dense correspondences, and estimate relative depth ...
Extensive experiments show that our method outperforms all state-of-the-art two-view SfM methods by a clear margin on KITTI depth, KITTI VO, MVS, Scenes11, and SUN3D datasets in both relative pose and ...
Acknowledgements Yuchao Dai was supported in part by National Natural Science Foundation of China (61871325) and National Key Research and Development Program of China (2018AAA0102803). ...
arXiv:2104.00556v1
fatcat:tubxrltigvahrh7zlk3bnuckty
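The snippet above leans on classical two-view epipolar geometry: given calibrated 2D correspondences (here, from optical flow), the essential matrix yields the relative rotation and a translation direction, while the translation magnitude is unobservable from two views, which is why the paper pairs this step with a scale-invariant depth network. A minimal sketch of that classical pose-recovery step using OpenCV, not the paper's learned pipeline (the function name and RANSAC settings are illustrative):

    import cv2
    import numpy as np

    def relative_pose_from_matches(pts1, pts2, K):
        """Relative camera pose (R, t up to scale) from 2D-2D correspondences.

        pts1, pts2: (N, 2) float arrays of matched pixel coordinates
        K:          (3, 3) camera intrinsic matrix
        """
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        # recoverPose applies the cheirality check to pick the valid (R, t)
        # among the four decompositions of E.
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t  # t is a unit direction; the metric scale is not recoverable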
Towards Self-Supervised Category-Level Object Pose and Size Estimation
[article]
2022
arXiv
pre-print
In this work, we tackle the challenging problem of category-level object pose and size estimation from a single depth image. ...
In particular, shape deformation and registration are applied to the template mesh to eliminate the differences in shape, pose and scale. ...
Then, a point cloud registration network is applied to explore correspondence and estimate the pose and scale parameters. ...
arXiv:2203.02884v2
fatcat:td7xux4ntrbwfik3kxqbpkgoka
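The registration step described above, estimating pose and scale between a deformed template and the observed point cloud, amounts to fitting a similarity transform once correspondences are known. The paper uses a learned registration network; the following numpy sketch only illustrates the classical closed-form baseline for that step (Umeyama alignment), with the function name chosen here for illustration:

    import numpy as np

    def umeyama_similarity(src, dst):
        """Closed-form similarity transform with dst ~= s * R @ src + t."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        src_c, dst_c = src - mu_s, dst - mu_d
        cov = dst_c.T @ src_c / len(src)          # cross-covariance of the clouds
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1.0                        # guard against a reflection
        R = U @ S @ Vt
        var_src = (src_c ** 2).sum() / len(src)
        s = np.trace(np.diag(D) @ S) / var_src    # isotropic scale
        t = mu_d - s * R @ mu_s
        return s, R, t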
Texture-less planar object detection and pose estimation using Depth-Assisted Rectification of Contours
2012
2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
In order to achieve invariance to rotation, scale and perspective distortions, a rectified representation of the contours is obtained using the available depth information. ...
DARC requires only a single RGB-D image of the planar objects in order to estimate their pose, as opposed to some existing approaches that need to capture a number of views of the target object. ...
This work is financially supported by the CAPES/INRIA/CONICYT STIC-AmSud project ARVS and by CNPq (process 141705/2010-8). Special thanks go to Mário Gonçalves, for lending a Kinect device. ...
doi:10.1109/ismar.2012.6402582
dblp:conf/ismar/LimaUTM12
fatcat:3anxedjjejgvvk6nrvbekpobbu
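The rectification idea in the snippet, mapping a plane seen under perspective to a canonical fronto-parallel view using depth, can be illustrated with a rotation-only homography: fit a plane to the back-projected depth points, rotate a virtual camera to face it, and warp. This is a generic sketch of depth-assisted rectification under those assumptions, not the exact DARC contour representation; the function name and plane-fitting choice are illustrative:

    import cv2
    import numpy as np

    def fronto_parallel_rectify(rgb, plane_points, K):
        """Warp a planar region toward a fronto-parallel canonical view.

        rgb:          color image containing the planar object
        plane_points: (N, 3) 3D points on the plane from the depth map
        K:            (3, 3) camera intrinsics
        """
        # Plane normal = smallest singular vector of the centered 3D points.
        centered = plane_points - plane_points.mean(0)
        normal = np.linalg.svd(centered)[2][-1]
        if normal[2] > 0:
            normal = -normal                      # make the normal face the camera
        # Rotation aligning the plane normal with the camera's -z axis.
        target = np.array([0.0, 0.0, -1.0])
        v, c = np.cross(normal, target), float(np.dot(normal, target))
        vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        R = np.eye(3) + vx + vx @ vx / (1.0 + c)
        # A pure camera rotation induces the image homography H = K R K^-1.
        H = K @ R @ np.linalg.inv(K)
        return cv2.warpPerspective(rgb, H, (rgb.shape[1], rgb.shape[0]))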
Global localization and relative pose estimation based on scale-invariant features
2004
Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004.
In this paper we describe a vision-based hybrid localization scheme based on scale-invariant keypoints. ...
Once the best match has been found, the relative pose between the model view and the current image is recovered. ...
This work is supported by NSF grant IIS-0118732 and George Mason University Provost Scholarship fund. ...
doi:10.1109/icpr.2004.1333767
dblp:conf/icpr/KoseckaY04
fatcat:qwgc5abmcrbatmbzmlyf4vwciq
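The scheme above first retrieves the best-matching model view with scale-invariant keypoints and only then recovers the relative pose. A small sketch of the retrieval half using SIFT and Lowe's ratio test in OpenCV; the helper name and ratio value are assumptions, and the pose-recovery half is the same essential-matrix step sketched earlier:

    import cv2

    def best_model_view(query_img, model_imgs, ratio=0.75):
        """Index of the database view with the most ratio-test SIFT matches."""
        sift = cv2.SIFT_create()
        bf = cv2.BFMatcher(cv2.NORM_L2)
        _, q_des = sift.detectAndCompute(query_img, None)
        best, best_count = None, -1
        for idx, img in enumerate(model_imgs):
            _, des = sift.detectAndCompute(img, None)
            pairs = bf.knnMatch(q_des, des, k=2)
            good = [p[0] for p in pairs
                    if len(p) == 2 and p[0].distance < ratio * p[1].distance]
            if len(good) > best_count:
                best, best_count = idx, len(good)
        return best, best_count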
Comparison of local image descriptors for full 6 degree-of-freedom pose estimation
2009
2009 IEEE International Conference on Robotics and Automation
From the experiments we can conclude that duplet features, that use pairs of interest points, improve pose estimation accuracy, and that affine covariant features do not work well in current pose estimation ...
Recent years have seen advances in the estimation of full 6 degree-of-freedom object pose from a single 2D image. ...
The authors gratefully acknowledge the contribution and input of Prof. Robert Forchheimer. ...
doi:10.1109/robot.2009.5152360
dblp:conf/icra/VikstenFJM09
fatcat:ph7snjwuznhdvbou4rec77hmoy
Three-dimensional model-based object recognition and pose estimation using probabilistic principal surfaces
2000
Applications of Artificial Neural Networks in Image Processing V
Each node on the spherical manifold also corresponds nicely to a pose on a viewing sphere with 2 degrees of freedom. The proposed system is applied to aircraft classification and pose estimation. ...
A novel scheme using spherical manifolds is proposed for the simultaneous classification and pose estimation of 3-D objects from 2-D images. ...
ACKNOWLEDGMENTS This research was supported in part by Army Research contracts DAAG55-98-1-0230 and DAAD19-99-1-0012, and NSF grant ECS-9900353. ...
doi:10.1117/12.382913
fatcat:tmeengnbfndwfbdxctaa6exv54
View and Illumination Invariant Iterative Based Image Matching
2013
International Journal of Research in Engineering and Technology
In this paper, we propose a view and illumination invariant image matching method. ...
We iteratively estimate the relationship of the relative view and illumination of the images, transform the view of one image to the other, and normalize their illumination for accurate matching. ...
The radius of the region is determined by a priori setting (Harris corner) or the region scale (scale invariant features). ...
doi:10.15623/ijret.2013.0211029
fatcat:yy7f3cyi2fdudk5riuheat2h5m
3D model-based pose invariant face recognition from multiple views
2007
IET Computer Vision
First, pose estimation and 3D face model adaptation are achieved by means of a three-layer linear iterative process. Frontal view face images are synthesised using the estimated 3D models and poses. ...
A 3D model-based pose invariant face recognition method that can recognise a human face from its multiple views is proposed. ...
This paper proposes a 3D model based pose invariant face recognition method using the multiple-view approach. ...
doi:10.1049/iet-cvi:20060014
fatcat:evmxe7o5tbf5fflac343onnzfi
A Novel Algorithm for View and Illumination Invariant Image Matching
2012
IEEE Transactions on Image Processing
In this paper, we propose a view and illumination invariant image-matching method. ...
We iteratively estimate the relationship of the relative view and illumination of the images, transform the view of one image to the other, and normalize their illumination for accurate matching. ...
The radius of the region is determined by a priori setting (Harris corner) or the region scale (scale invariant features). ...
doi:10.1109/tip.2011.2160271
pmid:21712161
fatcat:hyrdtxsaafctzcazjk7aqz2nc4
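Both this entry and the IJRET entry above describe iteratively transforming the view of one image toward the other and normalizing illumination before matching. The illumination-normalization step can be illustrated by removing an affine gain/offset per patch and comparing with normalized cross-correlation; this is a sketch of that idea only, not the papers' full iterative view-plus-illumination estimation (function names are illustrative):

    import numpy as np

    def normalize_patch(patch, eps=1e-8):
        """Remove affine illumination (gain and offset): zero mean, unit variance."""
        p = patch.astype(np.float64)
        return (p - p.mean()) / (p.std() + eps)

    def ncc(patch_a, patch_b):
        """Normalized cross-correlation between two equally sized patches."""
        a, b = normalize_patch(patch_a).ravel(), normalize_patch(patch_b).ravel()
        return float(np.dot(a, b) / a.size)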
Object Pose Estimation Using Patch-Duplet/SIFT Hybrids
2009
IAPR International Workshop on Machine Vision Applications
For the application of object pose estimation, one comparison showed that a local descriptor, called the Patch-Duplet, performs equally well or better than SIFT. ...
This paper examines different properties of those two descriptors by forming hybrids between them and extending the object pose tests of the original Patch-Duplet paper. All tests use real images. ...
In object recognition and wide baseline stereo, view invariance for features is a good thing. ...
dblp:conf/mva/Viksten09
fatcat:dmwzynngufesvfdw2bv3c3i2w4
Untangling Object-View Manifold for Multiview Recognition and Pose Estimation
[chapter]
2014
Lecture Notes in Computer Science
We propose an efficient computational framework that can untangle such a complex manifold, and achieve a model that separates a view-invariant category representation from a category-invariant pose representation ...
We outperform the state of the art on three widely used multiview datasets, for both category recognition and pose estimation. ...
Each object is imaged from 24 poses on a viewing sphere (8 azimuth angles × 3 zenith angles), and from 3 scales. ...
doi:10.1007/978-3-319-10593-2_29
fatcat:2pjsfunbcnezvga7dlemhmi5w4
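The viewing-sphere sampling mentioned in the snippet (24 poses from 8 azimuth angles by 3 zenith angles) is a regular grid on the sphere. A small sketch of enumerating such viewpoints; the zenith band and radius below are assumptions, not the dataset's actual values:

    import numpy as np

    def viewing_sphere_grid(n_azimuth=8, n_zenith=3, radius=1.0):
        """Camera positions on a viewing sphere, indexed by (azimuth, zenith)."""
        azimuths = np.linspace(0.0, 2.0 * np.pi, n_azimuth, endpoint=False)
        zeniths = np.linspace(np.pi / 6, np.pi / 2, n_zenith)  # assumed band
        nodes = []
        for theta in zeniths:
            for phi in azimuths:
                nodes.append((radius * np.sin(theta) * np.cos(phi),
                              radius * np.sin(theta) * np.sin(phi),
                              radius * np.cos(theta)))
        return np.asarray(nodes)   # one (x, y, z) camera position per pose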
Depth-assisted rectification for real-time object detection and pose estimation
2015
Machine Vision and Applications
Two methods based on depth-assisted rectification are proposed, which transform features extracted from the color image to a canonical view using depth data in order to obtain a representation invariant to rotation, scale and perspective distortions. ...
Fig. 16 Scale-invariant keypoint matching example using ORB+DARP, where 11 matches are found. Fig. 17 Scale-invariant pose estimation example using ORB+DARP. Fig. 18 Non-planar smooth ...
doi:10.1007/s00138-015-0740-8
fatcat:wsac44uezrfq3bqeu4khlj4wcy
Motion-Based View-Invariant Articulated Motion Detection and Pose Estimation Using Sparse Point Features
[chapter]
2009
Lecture Notes in Computer Science
This observational probability is fed to a Hidden Markov Model defined over multiple poses and viewpoints to obtain temporally consistent pose estimates. ...
To estimate the pose and viewpoint we introduce a novel motion descriptor that computes the spatial relationships of motion vectors representing various parts of the person using the trajectories of a ...
We have presented a motion-based approach for detection, tracking, and pose estimation of articulated human motion that is invariant of scale, viewpoint, illumination, and camera motion. ...
doi:10.1007/978-3-642-10331-5_40
fatcat:txwtb4piwfghbk3y7msmtoqrjm
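The temporal smoothing described above, feeding per-frame observation probabilities into an HMM defined over poses and viewpoints, is typically decoded with the Viterbi algorithm. A generic numpy sketch of that decoding step under assumed log-probability inputs; the state layout and function name are illustrative, not the paper's exact model:

    import numpy as np

    def viterbi(log_obs, log_trans, log_prior):
        """Most likely pose/viewpoint state sequence for an HMM.

        log_obs:   (T, S) per-frame log-likelihood of each state
        log_trans: (S, S) log transition probabilities between states
        log_prior: (S,)   log prior over the initial state
        """
        T, S = log_obs.shape
        delta = log_prior + log_obs[0]
        backptr = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_trans          # prev state x next state
            backptr[t] = np.argmax(scores, axis=0)
            delta = scores[backptr[t], np.arange(S)] + log_obs[t]
        path = [int(np.argmax(delta))]
        for t in range(T - 1, 0, -1):
            path.append(int(backptr[t, path[-1]]))
        return path[::-1]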
Head pose estimation using multilinear subspace analysis for robot human awareness
2009
2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops
We demonstrate a pipeline of on-the-move detection of pedestrians with a robot stereo vision system, segmentation of the head, and head pose estimation in cluttered urban street scenes. ...
Making such inferences requires estimating head pose from facial images that are a combination of multiple varying factors, such as identity, appearance, head pose, and illumination. ...
We are able to classify head pose in low-resolution images, and automatically localize the face and determine its scale. ...
doi:10.1109/iccvw.2009.5457694
dblp:conf/iccvw/IvanovMV09
fatcat:sgew4rjmyrhizgstxxgwbskrje
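Multilinear subspace analysis, as named in the title, factors an image tensor indexed by variation factors such as identity, pose, and illumination into per-factor subspaces. A compact numpy sketch of the underlying truncated higher-order SVD (HOSVD); this shows only the generic decomposition, under an assumed (identity x pose x pixels) tensor layout, not the paper's specific pipeline:

    import numpy as np

    def unfold(tensor, mode):
        """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    def hosvd(tensor, ranks):
        """Truncated HOSVD: per-mode factor matrices plus a core tensor."""
        factors = []
        for mode, r in enumerate(ranks):
            U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
            factors.append(U[:, :r])
        core = tensor
        for mode, U in enumerate(factors):
            # Mode-n product: contract mode `mode` of the core with U^T.
            core = np.moveaxis(
                np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
        return core, factors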
Unrestricted Recognition of 3D Objects for Robotics Using Multilevel Triplet Invariants
2004
The AI Magazine
position, scale, orientation, and pose against a structured background. ...
a wheel, a bicycle, a room, a house, a landscape ...
Yuan, C., and Niemann, H. 2001. Neural Networks for the Recognition and Pose Estimation of 3D Objects. ...
Lowe, D. G. 2001. Local Feature View Clustering for 3D Object Recognition. ...
doi:10.1609/aimag.v25i2.1760
dblp:journals/aim/GranlundM04
fatcat:ujvudjh2ufhw7n4czyqp2o4xrq