Our semantic segmentation is built on a deep autoencoder stack trained exclusively on synthetic depth data generated from our novel 3D scene library, SynthCam3D. ... Abstract: We are interested in automatic scene understanding from geometric cues. To this end, we aim to bring semantic segmentation into the loop of real-time reconstruction. ... Figure 2: SynthCam3D is a library of synthetic indoor scenes collected from various online 3D repositories and hosted at http://robotvault.bitbucket.org. ... doi:10.17863/cam.26487
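The snippet above describes the high-level idea: an autoencoder-style network maps a depth image to per-pixel class labels. As a loose illustration only, here is a toy NumPy sketch of that mapping; the layer sizes, random weights, and class count are invented for illustration and bear no relation to the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise rectified linear activation.
    return np.maximum(x, 0.0)

def segment_depth(depth, n_classes=5, hidden=32):
    """Toy per-pixel 'encode then decode to class logits' pass.

    depth: (H, W) array of depth values.
    Returns an (H, W) array of integer class labels.
    All weights are random: this only illustrates the data flow
    (depth in, dense per-pixel semantic labels out).
    """
    h, w = depth.shape
    x = depth.reshape(-1, 1)                          # (H*W, 1) per-pixel features
    w_enc = rng.standard_normal((1, hidden))          # stand-in "encoder" weights
    w_dec = rng.standard_normal((hidden, n_classes))  # stand-in "decoder" weights
    logits = relu(x @ w_enc) @ w_dec                  # (H*W, n_classes) class scores
    return logits.argmax(axis=1).reshape(h, w)        # per-pixel class labels

labels = segment_depth(rng.random((4, 4)))
```

A real system would replace the random matrices with convolutional encoder/decoder layers trained on the synthetic depth renderings; the point here is only the input/output contract of dense depth-based segmentation.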
This implies that annotated training data is preferred. ... The reason is that, at a minimum, we have to frame object examples with bounding boxes in thousands of images. ... SynthCam3D: Semantic understanding with synthetic indoor scenes. ... arXiv:1612.09134v1
Computing in Civil Engineering 2019
A recent example, SynthCam3D, gathers a library of synthetic indoor scenes collected from various online 3D repositories of offices and indoor building scenes (Handa et al., 2015). ... of outdoor scenes in Paris and Lille with more than 50 classes. ... doi:10.1061/9780784482445.009