A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit <a rel="external noopener" href="http://pdfs.semanticscholar.org/8b0b/f81dd3ffeefb544143d386afa63c046affbe.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
<i title="Institute of Electrical and Electronics Engineers (IEEE)">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/mst7benysjfpbf6tk2lttvwoaq" style="color: black;">IEEE Computer Graphics and Applications</a>
We present a simple and efficient deep-learning method to automatically decompose sketched objects into semantically valid parts. We train a deep neural network to transfer existing segmentations and labelings from 3D models to freehand sketches, without requiring numerous well-annotated sketches as training data. The network takes the binary image of a sketched object as input and produces a corresponding segmentation map with per-pixel labelings as output. A subsequent post-processing procedure with multi-label graph cuts further refines the segmentation and labeling result. We validate our proposed method on two sketch datasets. Experiments show that our method outperforms the state-of-the-art method in both segmentation and labeling accuracy and is significantly faster, enabling integration into interactive drawing systems. We demonstrate the efficiency of our method in a sketch-based modeling application that automatically transforms input sketches into 3D models by part assembly.

Freehand sketching is frequently adopted as an efficient means of visual communication. Nowadays, the wide adoption of touch devices, together with well-designed drawing software (e.g., Autodesk SketchBook), makes it easy to create digital sketches without pen and paper. Unlike photos, which are faithful captures of the real world from cameras, sketches are artistic depictions made by humans. Due to the varying levels of abstraction and distortion in sketches, computers are still far from being able to robustly interpret the semantics they convey. Existing studies on sketch analysis, such as sketch classification and sketch-based retrieval, have mainly focused on interpreting an input sketch globally, without further understanding of its constituent parts. Sketch segmentation is a step towards finer-level sketch analysis.<sup>1-3</sup> Its goal is to decompose an input sketch into several semantically meaningful components, to which
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/mcg.2018.2884192">doi:10.1109/mcg.2018.2884192</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/aqjs5nukyneblp7o2lnt45id2u">fatcat:aqjs5nukyneblp7o2lnt45id2u</a> </span>
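The abstract's pipeline — a network that maps a binary sketch image to a per-pixel part-label map — can be sketched minimally as follows. This is an illustration only: random scores stand in for the trained network's output, and the part count, label layout, and background label of -1 are assumptions, not details from the paper.

```python
import numpy as np

# Binary input image of a sketched object (True = stroke pixel).
sketch = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [1, 1, 1, 1],
                   [0, 0, 1, 0]], dtype=bool)

# Hypothetical per-pixel class scores, shape (num_parts, H, W).
# In the paper these come from the trained deep network; random
# values are used here purely as stand-ins.
rng = np.random.default_rng(0)
num_parts = 3
scores = rng.random((num_parts, *sketch.shape))

# Per-pixel labeling: argmax over part classes, restricted to
# stroke pixels; background pixels receive no part label (-1).
labels = scores.argmax(axis=0)
labels[~sketch] = -1

print(labels.shape)  # (4, 4)
```

The multi-label graph-cut refinement the abstract mentions would then smooth this raw label map by trading off the per-pixel scores against label consistency along strokes; that step is omitted here.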
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190227074438/http://pdfs.semanticscholar.org/8b0b/f81dd3ffeefb544143d386afa63c046affbe.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/mcg.2018.2884192">doi:10.1109/mcg.2018.2884192</a>