SynthCam3D: Semantic Understanding With Synthetic Indoor Scenes [article]

Ankur Handa, Viorica Patraucean, Vijay Badrinarayanan, Simon Stent, Roberto Cipolla, Apollo-University Of Cambridge Repository
2018
[Figure 1 diagram: a rendering engine produces 4-D inputs (depth, height from ground, angle with gravity, curvature) and ground-truth annotations for training an encoder/decoder network; at run time, per-frame predictions from different viewpoints are fused using camera poses from the reconstruction system.]

Figure 1: Our system is trained exclusively on synthetic data obtained from our scene library, SynthCam3D. During testing, per-frame predictions returned by the network are fused using the camera poses provided by the reconstruction system.

Abstract: We are interested in automatic scene understanding from geometric cues. To this end, we aim to bring semantic segmentation into the loop of real-time reconstruction. Our semantic segmentation is built on a deep autoencoder stack trained exclusively on synthetic depth data generated from our novel 3D scene library, SynthCam3D. Importantly, our network is able to segment real-world scenes without any noise modelling. We present encouraging preliminary results.
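The fusion step described above (combining per-frame predictions from different viewpoints using camera poses) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the helper names (`back_project`, `fuse_labels`), the voxel-hashing scheme, and the simple probability-averaging rule are all assumptions made for the example.

```python
import numpy as np

def back_project(depth, K, T_wc):
    """Lift a depth map into world-space 3-D points (hypothetical helper).

    depth: (H, W) metric depth; K: 3x3 intrinsics; T_wc: 4x4 camera-to-world pose.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole back-projection: pixel rays scaled by depth.
    rays = np.stack([(u - K[0, 2]) / K[0, 0],
                     (v - K[1, 2]) / K[1, 1],
                     np.ones_like(depth)], axis=-1)
    pts_cam = rays * depth[..., None]
    pts_w = pts_cam @ T_wc[:3, :3].T + T_wc[:3, 3]
    return pts_w.reshape(-1, 3)

def fuse_labels(frames, K, voxel=0.05, n_classes=3):
    """Average per-frame class probabilities in a shared voxel grid.

    frames: iterable of (depth, per-pixel class probabilities, camera pose).
    Returns {voxel index: fused class label}. Simple averaging is an
    assumption; the actual fusion rule is not specified in the abstract.
    """
    grid = {}  # voxel key -> (probability sum, observation count)
    for depth, probs, T_wc in frames:
        pts = back_project(depth, K, T_wc)
        keys = np.floor(pts / voxel).astype(int)
        flat = probs.reshape(-1, n_classes)
        for key, p in zip(map(tuple, keys), flat):
            s, n = grid.get(key, (np.zeros(n_classes), 0))
            grid[key] = (s + p, n + 1)
    # Fused label = argmax of the mean class probability per voxel.
    return {k: int((s / n).argmax()) for k, (s, n) in grid.items()}
```

Because fusion happens in a shared world frame, a voxel observed from several viewpoints accumulates evidence from each, which is what lets noisy per-frame predictions reinforce or correct one another.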
doi:10.17863/cam.26487