Learning Compositional Shape Priors for Few-Shot 3D Reconstruction
[article]
2021
arXiv
pre-print
In this work we experimentally demonstrate that naive baselines fail in this few-shot learning setting, in which the network must learn informative shape priors for inference of new categories. ...
However, building large collections of 3D shapes for supervised training is a laborious process; a more realistic and less constraining task is inferring 3D shapes for categories with few available training ...
More importantly, true 3D shape understanding ...
We tackle the problem of single-view 3D reconstruction in the few-shot learning setup. ...
arXiv:2106.06440v2
fatcat:ncnobbj7mndbpdgmuzz6cefumq
Few-Shot Single-View 3-D Object Reconstruction with Compositional Priors
[article]
2020
arXiv
pre-print
On the other hand, settings where 3D shape must be inferred for new categories with few examples are more natural and require models that generalize about shapes. ...
In this work we demonstrate experimentally that naive baselines do not apply when the goal is to learn to reconstruct novel objects using very few examples, and that in a few-shot learning setting, the ...
In summary, we make the following contributions: we investigate the few-shot learning setting for 3D shape reconstruction and demonstrate that this setup constitutes an ideal testbed for the development ...
arXiv:2004.06302v2
fatcat:5cphxa5awrarvn6b3brm33wzhm
Pose Adaptive Dual Mixup for Few-Shot Single-View 3D Reconstruction
[article]
2021
arXiv
pre-print
We present a pose adaptive few-shot learning procedure and a two-stage data interpolation regularization, termed Pose Adaptive Dual Mixup (PADMix), for single-image 3D reconstruction. ...
PADMix significantly outperforms previous literature on few-shot settings over the ShapeNet dataset and sets new benchmarks on the more challenging real-world Pix3D dataset. ...
... (ICLR). Few-Shot Single-View 3D Reconstruction with Compositional Priors. Fan, H.; Su, H.; and Guibas, L. 2017. ...
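To make the "two-stage data interpolation regularization" in the snippet concrete, here is a minimal Python sketch of input-level and latent-level mixup. The function names, the Beta(0.4, 0.4) sampling, and the omission of the pose-adaptive alignment step are assumptions of this illustration, not the paper's code.

```python
import torch

def mixup(a, b, alpha=0.4):
    """Convex combination of two tensors with a Beta(alpha, alpha) weight."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * a + (1.0 - lam) * b, lam

# Stage 1 (illustrative): interpolate paired images and their voxel targets
# with a shared weight, so inputs and labels stay consistent.
def input_level_mixup(img_a, img_b, vox_a, vox_b, alpha=0.4):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * img_a + (1 - lam) * img_b, lam * vox_a + (1 - lam) * vox_b, lam

# Stage 2 (illustrative): interpolate latent codes produced by the encoder
# before decoding, regularizing the decoder on unseen latent combinations.
def latent_level_mixup(z_a, z_b, alpha=0.4):
    return mixup(z_a, z_b, alpha)
```

Identifying the two stages with input-level and latent-level interpolation is an inference from the title and snippet; treat that mapping as an assumption.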
arXiv:2112.12484v1
fatcat:ht6sjw4o2zex5hhfi6k4w573ne
GeLaTO: Generative Latent Textured Objects
[article]
2020
arXiv
pre-print
Accurate modeling of 3D objects exhibiting transparency, reflections and thin structures is an extremely challenging problem. ...
We demonstrate the effectiveness of our approach by reconstructing complex objects from a sparse set of views. ...
... [19] learn a linear shape basis for 3D keypoints for each category, using a variant of NRSfM [6]. Kanazawa et al. ...
arXiv:2008.04852v1
fatcat:mfum5snn7jeh5dhas54hwyjrme
Author Index
2010
2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
... Connectivity Constraints for Reconstruction of 3D Line Segments from Images
Tommasi, Tatiana: Safety in Numbers: Learning Categories from Few Examples with Multi Model Knowledge Transfer
Tong, Yan: Workshop ...
Georgiev, Todor: Demo: Real Time Lightfield Rendering Using GPUs
Gerónimo, David: Learning Appearance in Virtual Scenarios for Pedestrian Detection
Gevers, Theo: 3D Scene Priors for Road Detection ...
doi:10.1109/cvpr.2010.5539913
fatcat:y6m5knstrzfyfin6jzusc42p54
Using Shape to Categorize: Low-Shot Learning with an Explicit Shape Bias
[article]
2021
arXiv
pre-print
... about 3D shape can be used to improve low-shot learning methods' generalization performance. ...
We propose a new way to improve existing low-shot learning approaches by learning a discriminative embedding space using 3D object shape, and using this embedding by learning how to map images into it. ...
While prior work has demonstrated effective approaches to object categorization using 3D shapes as input [31, 33, 52, 53, 4] , and there is a large literature on few-shot learning from images alone [ ...
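The snippet describes two steps: learning a discriminative embedding space from 3D shape, then learning to map images into it. Below is a minimal sketch of that second mapping step, assuming frozen per-class shape embeddings and a cosine-alignment objective; both choices are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def image_to_shape_alignment_loss(image_encoder, shape_embeddings, images, labels):
    """Illustrative second stage: with one shape-derived embedding per class held
    fixed, train the image encoder so each image lands near its class's shape
    embedding (the cosine objective is an assumption)."""
    z_img = F.normalize(image_encoder(images), dim=-1)        # (B, D) image embeddings
    z_shape = F.normalize(shape_embeddings[labels], dim=-1)   # (B, D) target shape embeddings
    return (1.0 - (z_img * z_shape).sum(dim=-1)).mean()       # 1 - cosine similarity
```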
arXiv:2101.07296v2
fatcat:qbam6yf7uvdtxaqkl6ibihnbvi
CodeNeRF: Disentangled Neural Radiance Fields for Object Categories
[article]
2021
arXiv
pre-print
CodeNeRF is an implicit 3D neural representation that learns the variation of object shapes and textures across a category and can be trained, from a set of posed images, to synthesize novel views of unseen ...
Unlike the original NeRF, which is scene specific, CodeNeRF learns to disentangle shape and texture by learning separate embeddings. ...
... [34, 3, 32] to propose learned representations for 3D reconstruction with 3D supervision. ...
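As a rough sketch of the disentanglement described above, the MLP below conditions density on a shape code and color on a separate texture code. Layer sizes are arbitrary and positional encoding is omitted, so this illustrates the conditioning pattern rather than CodeNeRF's actual architecture.

```python
import torch
import torch.nn as nn

class DisentangledNeRF(nn.Module):
    """Illustrative conditional radiance field: a shape code conditions density,
    a separate texture code conditions color (names and sizes are assumptions)."""
    def __init__(self, shape_dim=256, tex_dim=256, hidden=256):
        super().__init__()
        self.shape_mlp = nn.Sequential(
            nn.Linear(3 + shape_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)
        self.color_mlp = nn.Sequential(
            nn.Linear(hidden + 3 + tex_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir, shape_code, tex_code):
        h = self.shape_mlp(torch.cat([xyz, shape_code], dim=-1))
        sigma = torch.relu(self.sigma_head(h))                       # density from shape only
        rgb = self.color_mlp(torch.cat([h, view_dir, tex_code], dim=-1))  # color from texture
        return rgb, sigma
```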
arXiv:2109.01750v1
fatcat:wllnvifnrbai3e65eqjc72uzzu
MetaSDF: Meta-learning Signed Distance Functions
[article]
2020
arXiv
pre-print
Generalizing across shapes with such neural implicit representations amounts to learning priors over the respective function space and enables geometry reconstruction from partial or noisy observations ...
Here, we formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task. ...
Acknowledgments and Disclosure of Funding We would like to offer special thanks to Julien Martel, Matthew Chan, and Trevor Chan for fruitful discussions and assistance in completing this work. ...
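A minimal sketch of the gradient-based meta-learning formulation in the snippet: a shared SDF network initialization is specialized to one shape by a few inner-loop gradient steps on that shape's observations. The tiny MLP, the functional forward pass, the MSE objective, and the step count are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SDFNet(nn.Module):
    """Tiny MLP mapping a 3D point to a signed distance (illustrative sizes)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, params=None):
        if params is None:
            return self.net(x)
        # Functional forward pass using supplied (inner-loop adapted) parameters.
        h = x
        layers = [m for m in self.net if isinstance(m, nn.Linear)]
        for i, _ in enumerate(layers):
            w = params[f"net.{i * 2}.weight"]
            b = params[f"net.{i * 2}.bias"]
            h = F.linear(h, w, b)
            if i < len(layers) - 1:
                h = torch.relu(h)
        return h

def inner_adapt(model, points, sdf_targets, steps=3, lr=1e-2):
    """MAML-style inner loop: specialize the shared initialization to one shape."""
    params = {k: v for k, v in model.named_parameters()}
    for _ in range(steps):
        pred = model(points, params)
        loss = F.mse_loss(pred, sdf_targets)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {k: v - lr * g for (k, v), g in zip(params.items(), grads)}
    return params
```

The outer loop (not shown) would backpropagate a reconstruction loss, evaluated with the adapted parameters, into the shared initialization.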
arXiv:2006.09662v1
fatcat:ffasyuasirht7kwpakrfluidnm
Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations
[article]
2020
arXiv
pre-print
We demonstrate the potential of SRNs by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model ...
This formulation naturally generalizes across scenes, learning powerful geometry and appearance priors in the process. ...
Michael Zollhöfer was supported by the Max Planck Center for Visual Computing and Communication (MPC-VCC). ...
arXiv:1906.01618v2
fatcat:mdywjeydm5asbacdtbs7wmgj5u
BAE-NET: Branched Autoencoder for Shape Co-Segmentation
[article]
2019
arXiv
pre-print
By complementing the shape reconstruction loss with a label loss, BAE-NET is easily tuned for one-shot learning. ...
We treat shape co-segmentation as a representation learning problem and introduce BAE-NET, a branched autoencoder network, for the task. ...
This encourages the network to learn to represent the final complex hyperdimensional shape as a composition of a few simple hyperdimensional shapes. ...
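To illustrate the branched composition described above, the decoder below evaluates one implicit field per branch and takes the pointwise maximum as the reconstructed occupancy, with the arg-max branch serving as a part label. Widths, activations, and branch count are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BranchedImplicitDecoder(nn.Module):
    """Illustrative branched decoder: each branch predicts an implicit field for
    one simple part; the shape is their pointwise max (a union of parts)."""
    def __init__(self, code_dim=128, num_branches=8, hidden=256):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Linear(code_dim + 3, hidden), nn.LeakyReLU(0.02),
                nn.Linear(hidden, 1), nn.Sigmoid(),
            )
            for _ in range(num_branches)
        ])

    def forward(self, code, points):
        # code: (B, code_dim); points: (B, N, 3)
        B, N, _ = points.shape
        code_exp = code.unsqueeze(1).expand(B, N, code.shape[-1])
        x = torch.cat([code_exp, points], dim=-1)
        per_branch = torch.cat([branch(x) for branch in self.branches], dim=-1)  # (B, N, K)
        occupancy = per_branch.max(dim=-1).values   # union of per-branch fields
        part_labels = per_branch.argmax(dim=-1)     # branch responsible for each point
        return occupancy, part_labels
```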
arXiv:1903.11228v2
fatcat:htxwsw6wovdjzmzrb2hiveb6zi
Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering
[article]
2021
arXiv
pre-print
In the setting of simple scenes, we leverage meta-learning to learn a prior over LFNs that enables multi-view consistent light field reconstruction from as little as a single image observation. ...
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to hundreds of evaluations per ray for ray-marching or volumetric based renderers in 3D-structured neural scene representations ...
For the setting of simple scenes, we demonstrate that this challenge can be overcome by learning a prior over 4D light fields in a meta-learning framework. ...
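A minimal sketch of the single-evaluation idea: parameterize each ray (here by Plücker coordinates, one common choice) and map it directly to a color with one MLP forward pass, with no ray marching. Layer sizes and the absence of the meta-learned prior are simplifications for illustration.

```python
import torch
import torch.nn as nn

def plucker_coordinates(origin, direction):
    """Parameterize a ray by its normalized direction and moment (6-D)."""
    d = direction / direction.norm(dim=-1, keepdim=True)
    moment = torch.cross(origin, d, dim=-1)
    return torch.cat([d, moment], dim=-1)

class LightFieldMLP(nn.Module):
    """Illustrative light field network: one forward pass per ray yields a color
    (layer sizes are assumptions)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, origins, directions):
        rays = plucker_coordinates(origins, directions)
        return self.net(rays)  # a single network evaluation per ray
```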
arXiv:2106.02634v1
fatcat:niopcbm5cjhw5iwih4uv27oo54
Self-supervised Tumor Segmentation through Layer Decomposition
[article]
2021
arXiv
pre-print
In this paper, we target self-supervised representation learning for zero-shot tumor segmentation. ...
Fourth, our approach achieves superior results for zero-shot tumor segmentation on different downstream datasets, BraTS2018 for brain tumor segmentation and LiTS2017 for liver tumor segmentation. ...
Network Architecture. Our proposed self-supervised learning method is flexible and can be used with any 3D encoder-decoder network. ...
arXiv:2109.03230v3
fatcat:xaxxtxopzjgxzhvty4psfnswjy
Disentangling 3D Prototypical Networks For Few-Shot Concept Learning
[article]
2021
arXiv
pre-print
We present neural architectures that disentangle RGB-D images into objects' shapes and styles and a map of the background scene, and explore their applications for few-shot 3D object detection and few-shot ...
We show that the proposed 3D neural representations are compositional: they can generate novel 3D scene feature maps by mixing object shapes and styles, resizing and adding the resulting object 3D feature ...
Relation to previous works: few-shot concept learning. Few-shot learning methods attempt to learn a new concept from one or a few annotated examples at test time, yet, at training time, these models still ...
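For the few-shot classification setup referenced in the snippet, a prototype-based classifier is the standard baseline: average each class's support features into a prototype and assign queries to the nearest one. The sketch below shows only that generic step on already-pooled feature vectors, not the paper's disentangled 3D representation.

```python
import torch

def prototype_classify(support_feats, support_labels, query_feats, num_classes):
    """Illustrative prototype classifier: one mean feature per class, queries
    labeled by nearest prototype (Euclidean distance is an assumption)."""
    protos = torch.stack([
        support_feats[support_labels == c].mean(dim=0) for c in range(num_classes)
    ])                                              # (C, D) class prototypes
    dists = torch.cdist(query_feats, protos)        # (Q, C) query-to-prototype distances
    return dists.argmin(dim=-1)                     # predicted class per query
```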
arXiv:2011.03367v3
fatcat:bqxpkyamcnf53cvbgrravlgk4m
Flow Guided Transformable Bottleneck Networks for Motion Retargeting
[article]
2021
arXiv
pre-print
Few-shot motion transfer techniques, which only require one or a few images from a target, have recently drawn considerable attention. ...
This allows us to learn our 3D representation solely from videos of moving people. ...
Conclusion. Our approach to few-shot human motion retargeting exploits the advantages of 3D representations of the human body while avoiding the limitations of more straightforward prior methods. ...
arXiv:2106.07771v1
fatcat:yjprgqyphbdenkfxguq5dpysxu
Im2Struct: Recovering 3D Shape Structure from a Single RGB Image
[article]
2018
arXiv
pre-print
We propose to recover 3D shape structures from single RGB images, where structure refers to shape parts represented by cuboids and part relations encompassing connectivity and symmetry. ...
We demonstrate two applications of our method including structure-guided completion of 3D volumes reconstructed from single-view images and structure-aware interactive editing of 2D images. ...
Human brains do well both in shape inference based on low-level visual stimulus and structural reasoning with the help of prior knowledge about 3D shape compositions. ...
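The structure described above, cuboid parts plus connectivity and symmetry relations, can be captured by a small container like the following; the field names and types are illustrative, not the paper's data format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Cuboid:
    """One shape part as an oriented box (fields are illustrative)."""
    center: Tuple[float, float, float]
    dims: Tuple[float, float, float]                   # width, height, depth
    axes: Tuple[Tuple[float, float, float], ...]       # local orthonormal frame

@dataclass
class ShapeStructure:
    """Recovered structure: cuboid parts plus connectivity and symmetry relations."""
    parts: List[Cuboid] = field(default_factory=list)
    connectivity: List[Tuple[int, int]] = field(default_factory=list)  # indices of touching parts
    symmetries: List[Tuple[int, int]] = field(default_factory=list)    # reflectively symmetric pairs
```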
arXiv:1804.05469v1
fatcat:2tqr7m5o3fhwlpbdrgso7gwcia
Showing results 1 — 15 out of 3,686 results