6,730 Hits in 5.1 sec

Unsupervised Feature Learning for Dense Correspondences across Scenes [article]

Chao Zhang, Chunhua Shen, Tingzhi Shen
2015 arXiv   pre-print
We propose a fast, accurate matching method for estimating dense pixel correspondences across scenes.  ...  It is a challenging problem to estimate dense pixel correspondences between images depicting different scenes or instances of the same object category.  ...  To our knowledge, for dense correspondence estimation across scenes, to date the SIFT feature is still the standard due to its very good performance.  ... 
arXiv:1501.00642v2 fatcat:tmrxi23wpvblxfyofpxyhlccoe

Tracking Emerges by Looking Around Static Scenes, with Neural 3D Mapping [article]

Adam W. Harley, Shrinidhi K. Lakshmikanth, Paul Schydlo, Katerina Fragkiadaki
2020 arXiv   pre-print
We propose to leverage multiview data of static points in arbitrary scenes (static or dynamic), to learn a neural 3D mapping module which produces features that are correspondable across time.  ...  We train the voxel features to be correspondable across viewpoints, using a contrastive loss, and correspondability across time emerges automatically.  ...  FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center.  ... 
arXiv:2008.01295v1 fatcat:xpel37v5wzhgbbvhf6ofwwpxiy
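The contrastive objective this abstract mentions, pulling matched features together across viewpoints while pushing non-matches apart, is commonly realized as an InfoNCE loss. A minimal NumPy sketch of that general pattern (illustrative only, not the paper's implementation; the function name and arguments are assumptions):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss: anchor i should match positive i;
    every other row of `positives` serves as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                    # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                # cross-entropy on the diagonal
```

Perfectly matched feature pairs drive the loss toward zero; permuted (mismatched) pairs drive it up, which is what makes correspondability "emerge" from the objective.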

DeFeat-Net: General Monocular Depth via Simultaneous Unsupervised Representation Learning [article]

Jaime Spencer, Richard Bowden, Simon Hadfield
2020 arXiv   pre-print
The resulting feature representation is learned in an unsupervised manner with no explicit ground-truth correspondences required.  ...  We propose DeFeat-Net (Depth & Feature network), an approach to simultaneously learn a cross-domain dense feature representation, alongside a robust depth-estimation framework based on warped feature consistency  ...  We would also like to thank NVIDIA Corporation for their Titan Xp GPU grant.  ... 
arXiv:2003.13446v1 fatcat:i5eic7zz7zebdjozvygcd63nai

Learning Topology from Synthetic Data for Unsupervised Depth Completion [article]

Alex Wong, Safa Cicek, Stefano Soatto
2021 arXiv   pre-print
We present a method for inferring dense depth maps from images and sparse depth measurements by leveraging synthetic data to learn the association of sparse point clouds with dense natural shapes, and  ...  Our learned prior for natural shapes uses only sparse depth as input, not images, so the method is not affected by the covariate shift when attempting to transfer learned models from synthetic data to  ...  Learning Topology using Synthetic Data Can we learn to infer the dense topology of the scene given only sparse points?  ... 
arXiv:2106.02994v3 fatcat:zanngo2conhqhaa74dnjl76z4q

RANSAC-Flow: generic two-stage image alignment [article]

Xi Shen, François Darmon, Alexei A. Efros, Mathieu Aubry
2020 arXiv   pre-print
Despite its simplicity, our method shows competitive results on a range of tasks and datasets, including unsupervised optical flow on KITTI, dense correspondences on Hpatches, two-view geometry estimation  ...  This paper considers the generic problem of dense alignment between two images, whether they be two frames of a video, two widely different views of a scene, two paintings depicting similar content, etc  ...  We thank Shiry Ginosar, Thibault Groueix and Michal Irani for helpful discussions, and Elizabeth Alice Honig for her help in building the Brueghel dataset.  ... 
arXiv:2004.01526v2 fatcat:ltpp6c4gcnfmdo3pxdt2ulqqp4
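The coarse stage of a two-stage alignment like this typically runs RANSAC over feature matches to estimate a parametric transform before a dense flow refines it. A toy sketch of the RANSAC pattern on a 2D translation model (illustrative only; RANSAC-Flow itself estimates homographies from deep feature matches):

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, inlier_thresh=1.0, rng=None):
    """Robustly estimate a 2D translation mapping src points onto dst points,
    tolerating outlier matches (the role RANSAC plays in coarse alignment)."""
    rng = np.random.default_rng(rng)
    best_t, best_count = np.zeros(2), -1
    for _ in range(n_iters):
        i = rng.integers(len(src))                  # minimal sample: one match
        t = dst[i] - src[i]                         # candidate translation
        count = (np.linalg.norm(src + t - dst, axis=1) < inlier_thresh).sum()
        if count > best_count:
            best_t, best_count = t, count
    # refine on the consensus set of the best candidate
    mask = np.linalg.norm(src + best_t - dst, axis=1) < inlier_thresh
    return (dst[mask] - src[mask]).mean(axis=0), mask
```

The same sample-score-refine loop generalizes to homographies by drawing four matches per iteration instead of one.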

Deep Spectral Methods: A Surprisingly Strong Baseline for Unsupervised Semantic Segmentation and Localization [article]

Luke Melas-Kyriazi and Christian Rupprecht and Iro Laina and Andrea Vedaldi
2022 arXiv   pre-print
These tasks are particularly interesting in an unsupervised setting due to the difficulty and cost of obtaining dense image annotations, but existing unsupervised approaches struggle with complex scenes  ...  Furthermore, by clustering the features associated with these segments across a dataset, we can obtain well-delineated, nameable regions, i.e. semantic segmentations.  ...  We see that the mattes are significantly more useful for editing with our feature affinity term, as they better correspond to objects in the scene.  ... 
arXiv:2205.07839v1 fatcat:6iyz22evgzak5h2apmxbywowuq

Unsupervised Depth Completion with Calibrated Backprojection Layers [article]

Alex Wong, Stefano Soatto
2021 arXiv   pre-print
A decoder, exploiting skip-connections, produces a dense depth map.  ...  We propose a deep neural network architecture to infer dense depth from an image and a sparse point cloud.  ...  Row 4: "After Sparse-to-Dense" denotes the depth features learned by the proposed sparse-to-dense (S2D) module.  ... 
arXiv:2108.10531v2 fatcat:oksdumjhwnfsbb2gvamh3oz4cq
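Calibrated backprojection, the operation this architecture builds a layer around, lifts each pixel to a 3D point along its camera ray using the intrinsics K: X = z * K^{-1} [u, v, 1]^T. A standard NumPy sketch of the underlying geometry (not the paper's learned layer):

```python
import numpy as np

def backproject(depth, K):
    """Lift a dense depth map to a 3D point cloud using camera intrinsics K."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels
    rays = np.linalg.inv(K) @ pix                                      # 3 x N camera rays
    return (rays * depth.reshape(1, -1)).T                             # N x 3 points, z = depth
```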

Almost Unsupervised Learning for Dense Crowd Counting

Deepak Babu Sam, Neeraj N Sajjan, Himanshu Maurya, R. Venkatesh Babu
2019 Proceedings of the AAAI Conference on Artificial Intelligence
We present an unsupervised learning method for dense crowd count estimation.  ...  Furthermore, we present comparisons and analyses regarding the quality of learned features across various models.  ...  When applied on highly diverse dense crowd images, we show that current unsupervised methods do not learn enough useful features for density regression, as evidenced by their performance scores.  ...
doi:10.1609/aaai.v33i01.33018868 fatcat:fuuvzlulfrehdgcxufxh5z2wm4

4DContrast: Contrastive Learning with Dynamic Correspondences for 3D Scene Understanding [article]

Yujin Chen, Matthias Nießner, Angela Dai
2021 arXiv   pre-print
We present a new approach to instill 4D dynamic object priors into learned 3D representations by unsupervised pre-training.  ...  that can then be effectively transferred to improved performance in downstream 3D semantic scene understanding tasks.  ...  We only visualize the inter-frame correspondence for Ft−2 and Ft−1, and the spatio-temporal correspondence for Ft−2, while those losses are established across all pairs of frames for L̄3D and all  ...
arXiv:2112.02990v1 fatcat:r45rizkmmfhjtazbteueuxjwv4

SIGNet: Semantic Instance Aided Unsupervised 3D Geometry Perception

Yue Meng, Yongxi Lu, Aman Raj, Samuel Sunarjo, Rui Guo, Tara Javidi, Gaurav Bansal, Dinesh Bharadia
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Unsupervised learning for geometric perception (depth, optical flow, etc.) is of great interest to autonomous systems.  ...  SIGNet is shown to improve upon the state-of-the-art unsupervised learning for depth prediction by 30% (in squared relative error).  ...  Multi-Task Learning for Semantic and Depth: Multitask learning [3] achieves better generalization by allowing the system to learn features that are robust across different tasks.  ... 
doi:10.1109/cvpr.2019.01004 dblp:conf/cvpr/MengLRSGJBB19 fatcat:c46o7rnvordzhnf62b35i3dhmy

Unsupervised Part Discovery from Contrastive Reconstruction [article]

Subhabrata Choudhury, Iro Laina, Christian Rupprecht, Andrea Vedaldi
2022 arXiv   pre-print
The goal of self-supervised visual representation learning is to learn strong, transferable image representations, with the majority of research focusing on the object or scene level.  ...  Secondly, prior work argues for reconstructing or clustering pre-computed features as a proxy to parts; we show empirically that this alone is unlikely to find meaningful parts, mainly because of their  ...  We thank Luke Melas-Kyriazi for providing precomputed masks for [54] .  ...
arXiv:2111.06349v2 fatcat:qxtuzama7vfvdmkg7gthkaeeh4

Unsupervised Learning of Visual 3D Keypoints for Control [article]

Boyuan Chen, Pieter Abbeel, Deepak Pathak
2021 arXiv   pre-print
In this work, we propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner.  ...  The proposed approach outperforms prior state-of-the-art methods across a variety of reinforcement learning benchmarks. Code and videos at  ...  Acknowledgments We thank Zackory Erickson, Alexander Clegg, and Charlie Kemp for fruitful discussions. This work was supported by NSF IIS-2024594 and NSF IIS-2024675.  ...
arXiv:2106.07643v1 fatcat:5vmjphexcbeydjixga245sr764

Unsupervised Learning of Dense Visual Representations [article]

Pedro O. Pinheiro, Amjad Almahairi, Ryan Y. Benmalek, Florian Golemo, Aaron Courville
2020 arXiv   pre-print
In this paper, we propose View-Agnostic Dense Representation (VADeR) for unsupervised learning of dense representations.  ...  Specifically, this is achieved through pixel-level contrastive learning: matching features (that is, features that describe the same location of the scene in different views) should be close in an embedding  ...  In this paper, we propose a method for unsupervised learning of dense representations.  ...
arXiv:2011.05499v2 fatcat:eaxgc3f3trdwxjvttumon2jfei

Unsupervised Object-Level Representation Learning from Scene Images [article]

Jiahao Xie, Xiaohang Zhan, Ziwei Liu, Yew Soon Ong, Chen Change Loy
2021 arXiv   pre-print
We hope our approach can motivate future research on more general-purpose unsupervised representation learning from scene data.  ...  Our key insight is to leverage image-level self-supervised pre-training as the prior to discover object-level semantic correspondence, thus realizing object-level representation learning from scene images  ...  While early efforts learn dense correspondence with labeled data [21, 68, 29, 55, 41] , some recent works learn the similarity between the parts or landmarks of the data in an unsupervised manner [52  ... 
arXiv:2106.11952v2 fatcat:eg4r3pcijvc6heksuo6hnwtdr4

Unsupervised Learning for Improving Efficiency of Dense Three-Dimensional Scene Recovery in Corridor Mapping [chapter]

Thomas Warsop, Sameer Singh
2011 Lecture Notes in Computer Science  
Typical three-dimensional scene recovery methods initialise recovered feature positions by searching for correspondences between image frames.  ...  We build multi-dimensional Gaussian models of recurrent visual features associated with distributions representing recovery results from our own dense planar recovery method.  ...  Unsupervised Learning for Temporal Search Space Reduction As previously mentioned, the intention of TSR is to link image features with 3D scene recovery results.  ... 
doi:10.1007/978-3-642-21227-7_37 fatcat:ix4m6dsshjep5fwtkze2xff5ia
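Modeling recurrent visual features with multi-dimensional Gaussians, as this entry describes, amounts to fitting a mean and covariance per feature cluster and scoring new observations by Mahalanobis distance. A hypothetical NumPy sketch of that building block (not the authors' pipeline; function names are illustrative):

```python
import numpy as np

def fit_gaussian(feats):
    """Fit a multivariate Gaussian (mean, covariance) to feature vectors."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])  # regularized
    return mu, cov

def mahalanobis_sq(x, mu, cov):
    """Squared Mahalanobis distance of a feature vector to the fitted model."""
    d = x - mu
    return float(d @ np.linalg.solve(cov, d))
```

Features with small Mahalanobis distance to a learned model can reuse that model's cached recovery result, which is how such models reduce the correspondence search space.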
Showing results 1 — 15 out of 6,730 results