
Object Segmentation Without Labels with Large-Scale Generative Models [article]

Andrey Voynov, Stanislav Morozov, Artem Babenko
2021 arXiv   pre-print
This work demonstrates that large-scale unsupervised models can also perform a more challenging object segmentation task, requiring neither pixel-level nor image-level labeling.  ...  By extensive comparison on standard benchmarks, we outperform existing unsupervised alternatives for object segmentation, achieving a new state of the art.  ...  ., 2009) without labels. These directions allow distinguishing object/background pixels in the generated images, providing decent segmentation masks.  ... 
arXiv:2006.04988v2 fatcat:nt3ae3lbsra3tbqwyu5stoce64
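
The snippet above describes latent directions in a pretrained generative model that separate object pixels from background pixels. The toy sketch below illustrates that idea only under stated assumptions: the generator G, the learned direction d, the shift magnitude, and the threshold are placeholders, not the authors' actual pipeline.

    # Illustrative PyTorch sketch: compare a generated image with its counterpart
    # shifted along a latent direction; pixels that react strongly to the shift are
    # treated as one side of the object/background split. G, d, shift and thresh are
    # placeholders and not taken from the paper.
    import torch

    @torch.no_grad()
    def synthetic_mask(G, z, d, shift=5.0, thresh=0.1):
        """G: generator mapping latent codes to (1, 3, H, W) images in [-1, 1].
        z: latent code, d: latent direction; returns a (1, 1, H, W) binary mask."""
        img = G(z)
        img_shifted = G(z + shift * d)
        diff = (img - img_shifted).abs().mean(dim=1, keepdim=True)  # per-pixel change
        return (diff > thresh).float()  # 1 where the shift changed the pixel noticeably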

FSS-1000: A 1000-Class Dataset for Few-Shot Segmentation [article]

Xiang Li, Tianhan Wei, Yau Pun Chen, Yu-Wing Tai, Chi-Keung Tang
2020 arXiv   pre-print
To evaluate and validate the performance of our approach, we have built a few-shot segmentation dataset, FSS-1000, which consists of 1000 object classes with pixelwise annotation of ground-truth segmentation  ...  Over the past few years, we have witnessed the success of deep learning in image recognition thanks to the availability of large-scale human-annotated datasets such as PASCAL VOC, ImageNet, and COCO.  ...  There is no large-scale object dataset for few-shot segmentation.  ... 
arXiv:1907.12347v2 fatcat:xvybcleocba2vjt7tet4hr6xi4

Scale-Aware Feature Network for Weakly Supervised Semantic Segmentation

Lian Xu, Mohammed Bennamoun, Farid Boussaid, Ferdous Sohel
2020 IEEE Access  
Weakly supervised semantic segmentation with image-level labels is of great significance since it alleviates the dependency on dense annotations.  ...  Inspired by the successful use of multi-scale features for improved performance in a wide range of visual tasks, we propose a Scale-Aware Feature Network (SAFN) for generating object localization maps  ...  It can be observed that, trained with our generated pseudo ground-truth images, the model can produce good segmentation results for images of various scenes with multiple objects.  ... 
doi:10.1109/access.2020.2989331 fatcat:vezu2wqrlvdlhhsn6ffewwi4g4

ConceptMask: Large-Scale Segmentation from Semantic Concepts [chapter]

Yufei Wang, Zhe Lin, Xiaohui Shen, Jianming Zhang, Scott Cohen
2018 Lecture Notes in Computer Science  
With a large number of labels, training and evaluation of such a task become extremely challenging due to correlation between labels and the lack of datasets with complete annotations.  ...  We formulate semantic segmentation as a problem of image segmentation given a semantic concept, and propose a novel system which can potentially handle an unlimited number of concepts, including objects  ...  Large Scale Segmentation/Parsing: Zhao et al. aim to recognize and segment objects with an open vocabulary [35], which is in line with our goal of large-scale segmentation.  ... 
doi:10.1007/978-3-030-01258-8_33 fatcat:qeofn3zug5addixudl532bhnwi

Self-supervised Scale Equivariant Network for Weakly Supervised Semantic Segmentation [article]

Yude Wang, Jie Zhang, Meina Kan, Shiguang Shan, Xilin Chen
2019 arXiv   pre-print
This work mainly explores the advantages of scale equivariant constraints for CAM generation, formulated as a self-supervised scale equivariant network (SSENet).  ...  Weakly supervised semantic segmentation has attracted much research interest in recent years considering its advantage of low labeling cost.  ...  Finally, a classical semantic segmentation model, DeepLab, is trained with these pseudo labels.  ... 
arXiv:1909.03714v1 fatcat:kzdnzixiqvgqtmgp6uzx3zb2ge
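
As a rough illustration of the scale-equivariant constraint mentioned above, the sketch below penalizes disagreement between the CAM of a downscaled image and the downscaled CAM of the original image. The model interface (a callable returning CAMs), the scale factor, and the L1 penalty are assumptions for illustration, not the SSENet implementation.

    # Minimal PyTorch sketch of a scale-equivariance consistency term; the CAM-producing
    # `model`, the chosen scale and the L1 penalty are assumptions, not SSENet's code.
    import torch.nn.functional as F

    def scale_equivariance_loss(model, images, scale=0.5):
        """images: (B, 3, H, W). `model` is assumed to return class activation maps (B, C, h, w)."""
        cam_full = model(images)
        small = F.interpolate(images, scale_factor=scale, mode="bilinear", align_corners=False)
        cam_small = model(small)                              # CAM computed on the downscaled input
        cam_resized = F.interpolate(cam_full, size=cam_small.shape[-2:],
                                    mode="bilinear", align_corners=False)
        return F.l1_loss(cam_small, cam_resized)              # encourage scale-consistent CAMs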

Multi-Task Self-Training for Learning General Representations [article]

Golnaz Ghiasi, Barret Zoph, Ekin D. Cubuk, Quoc V. Le, Tsung-Yi Lin
2021 arXiv   pre-print
MuST is scalable with unlabeled or partially labeled datasets and outperforms both specialized supervised models and self-supervised models when training on large-scale datasets.  ...  Finally, the dataset, which now contains pseudo labels from teacher models trained on different datasets/tasks, is then used to train a student model with multi-task learning.  ...  [61] showed that a model "pre-trained" with pseudo labels on a large unlabeled dataset (at the scale of hundreds of millions) can improve classification accuracy.  ... 
arXiv:2108.11353v1 fatcat:i2bt3bxbxzfsjgvn7k53x3mlry
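
To make the pseudo-labeling step above concrete, here is a hedged sketch: task-specific teachers annotate an unlabeled batch, and a student with a shared backbone and per-task heads is trained on all pseudo labels jointly. All names (teachers, heads, losses) are placeholders, not the MuST code.

    # Hedged PyTorch sketch of multi-task self-training: teachers produce pseudo labels,
    # a shared-backbone student with per-task heads learns from them. Placeholder names.
    import torch

    @torch.no_grad()
    def pseudo_label(teachers, images):
        """Return {task_name: teacher_prediction} for one batch of unlabeled images."""
        return {task: teacher(images) for task, teacher in teachers.items()}

    def multitask_step(backbone, heads, losses, images, pseudo, optimizer):
        """One student update: each task head is supervised by its task's pseudo label."""
        features = backbone(images)
        total = sum(losses[t](heads[t](features), y) for t, y in pseudo.items())
        optimizer.zero_grad()
        total.backward()
        optimizer.step()
        return total.item()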

Weakly Supervised Learning of Object Segmentations from Web-Scale Video [chapter]

Glenn Hartmann, Matthias Grundmann, Judy Hoffman, David Tsai, Vivek Kwatra, Omid Madani, Sudheendra Vijayanarasimhan, Irfan Essa, James Rehg, Rahul Sukthankar
2012 Lecture Notes in Computer Science  
Specifically, given a large collection of raw YouTube content, along with potentially noisy tags, our goal is to automatically generate spatiotemporal masks for each object, such as "dog", without employing  ...  We propose to learn pixel-level segmentations of objects from weakly labeled (tagged) internet videos.  ...  object instances with associated high-precision labels.  ... 
doi:10.1007/978-3-642-33863-2_20 fatcat:4i7y7b7xafg6lbnacdizvrwzby

Tracking Emerges by Colorizing Videos [article]

Carl Vondrick, Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama, Kevin Murphy
2018 arXiv   pre-print
We use large amounts of unlabeled video to learn models for visual tracking without manual human supervision.  ...  Although the model is trained without any ground-truth labels, our method learns to track well enough to outperform the latest methods based on optical flow.  ...  Tracking without Labels: We build on pioneering work for learning to segment videos without labels [51, 52, 53].  ... 
arXiv:1806.09594v2 fatcat:c5y74nmepnfvxiaknfcsuasut4
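
The tracking mechanism described above can be pictured as label propagation by pointing: each pixel of a target frame copies the label (or color) of the reference pixels it attends to in a learned embedding space. The sketch below assumes per-pixel embeddings are already computed; the embedding network, temperature, and shapes are illustrative, not the paper's code.

    # Sketch (PyTorch) of attention-based label propagation between two frames;
    # embeddings, temperature and shapes are assumptions made for illustration.
    import torch

    @torch.no_grad()
    def propagate_labels(embed_ref, embed_tgt, labels_ref, temperature=0.1):
        """embed_ref, embed_tgt: (N, D) per-pixel embeddings of reference/target frames.
        labels_ref: (N, C) one-hot labels or colors of the reference frame."""
        sim = embed_tgt @ embed_ref.t() / temperature   # (N_tgt, N_ref) similarity scores
        attn = sim.softmax(dim=-1)                      # pointer distribution per target pixel
        return attn @ labels_ref                        # (N_tgt, C) labels copied from the reference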

Video Class Agnostic Segmentation Benchmark for Autonomous Driving [article]

Mennatullah Siam, Alex Kendall, Martin Jagersand
2021 arXiv   pre-print
Semantic segmentation approaches are typically trained on large-scale data with a closed finite set of known classes without considering unknown objects.  ...  We then compare it to a model that uses an auxiliary contrastive loss to improve the discrimination between known and unknown objects.  ...  First, we insert unknown objects and modify the basic virtual driving agent in Carla to avoid obstacles, in order to collect large-scale data with ground-truth depth and semantic segmentation labels.  ... 
arXiv:2103.11015v2 fatcat:6e2buw35kvcdnn5cnlbozclf4q

Weakly Supervised Semantic Segmentation Based on Web Image Co-segmentation [article]

Tong Shen, Guosheng Lin, Lingqiao Liu, Chunhua Shen, Ian Reid
2017 arXiv   pre-print
The method utilizes the internet to retrieve a large number of images and uses a large-scale co-segmentation framework to generate masks for the retrieved images.  ...  Training a Fully Convolutional Network (FCN) for semantic segmentation requires a large number of masks with pixel-level labelling, which involves a large amount of human labour and time for annotation  ...  There are some other methods aiming at large-scale co-segmentation with a large number of images, including noisy data [7, 11]. Faktor et al.  ... 
arXiv:1705.09052v3 fatcat:ppxlgrynnza55ldn4nkqag44ey

Tracking Emerges by Colorizing Videos [chapter]

Carl Vondrick, Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama, Kevin Murphy
2018 Lecture Notes in Computer Science  
We use large amounts of unlabeled video to learn models for visual tracking without manual human supervision.  ...  Although the model is trained without any ground-truth labels, our method learns to track well enough to outperform the latest methods based on optical flow.  ...  Tracking without Labels: We build on pioneering work for learning to segment videos without labels [51, 52, 53].  ... 
doi:10.1007/978-3-030-01261-8_24 fatcat:uhhkvzxgrjaqtnwdc7pxxb77fq

RGB-based Semantic Segmentation Using Self-Supervised Depth Pre-Training [article]

Jean Lahoud, Bernard Ghanem
2020 arXiv   pre-print
We show how our proposed self-supervised pre-training with HN-labels can be used to replace ImageNet pre-training, while using 25x fewer images and without requiring any manual labeling.  ...  We pre-train a semantic segmentation network with our HN-labels, which resembles our final task more than pre-training on a less related task, e.g. classification with ImageNet.  ...  Self-Supervised Learning: The technique presented in [38] proposes a self-supervised method to generate a large labeled dataset without the need for manual labeling.  ... 
arXiv:2002.02200v1 fatcat:ovydmbvdargafe7mo7jx7p3rou

Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation [article]

Golnaz Ghiasi, Yin Cui, Aravind Srinivas, Rui Qian, Tsung-Yi Lin, Ekin D. Cubuk, Quoc V. Le, Barret Zoph
2021 arXiv   pre-print
Building instance segmentation models that are data-efficient and can handle rare object categories is an important challenge in computer vision.  ...  Furthermore, we show Copy-Paste is additive with semi-supervised methods that leverage extra data through pseudo labeling (e.g. self-training).  ...  We present results of our EfficientNet-B7 NAS-FPN model pre-trained with and without Copy-Paste on COCO († indicates multi-scale/flip ensembling inference).  ...  segmentation we find our models trained with  ... 
arXiv:2012.07177v2 fatcat:aspbbkscazd6tgq5adnmcv5nbe
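
For intuition about the augmentation named above, here is a toy sketch: instances from a source image are pasted onto a target image using their binary masks, and existing target masks are updated for occlusion. Shapes and the random subset policy are simplifications, not the paper's exact pipeline.

    # Toy NumPy sketch of Copy-Paste augmentation; shapes and the selection policy are
    # simplifications of the paper's pipeline (which also applies scale jittering).
    import numpy as np

    def copy_paste(src_img, src_masks, dst_img, dst_masks, rng=None):
        """src_img, dst_img: (H, W, 3) images of equal size.
        src_masks, dst_masks: lists of (H, W) boolean instance masks."""
        rng = rng or np.random.default_rng()
        out_img, out_masks = dst_img.copy(), [m.copy() for m in dst_masks]
        n_paste = rng.integers(1, len(src_masks) + 1)   # paste a random non-empty subset
        for idx in rng.choice(len(src_masks), size=n_paste, replace=False):
            m = src_masks[idx]
            out_img[m] = src_img[m]                     # copy source pixels under the mask
            out_masks = [np.logical_and(om, ~m) for om in out_masks]  # occlude pasted-over regions
            out_masks.append(m.copy())
        return out_img, out_masks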

Large-scale Unsupervised Semantic Segmentation [article]

Shanghua Gao and Zhong-Yu Li and Ming-Hsuan Yang and Ming-Ming Cheng and Junwei Han and Philip Torr
2022 arXiv   pre-print
We propose a new problem of large-scale unsupervised semantic segmentation (LUSS) with a newly created benchmark dataset to track the research progress.  ...  There are two major challenges to allowing such an attractive learning modality for segmentation tasks: i) a large-scale benchmark for assessing algorithms is missing; ii) unsupervised category/shape representation  ...  To facilitate the LUSS task, we propose a benchmark with large-scale data with high diversity, a clear objective of learning semantic segmentation without direct/indirect human-annotation, and sufficient  ... 
arXiv:2106.03149v2 fatcat:f2q34ku5znarhmcnnrvptstapy

Mix3D: Out-of-Context Data Augmentation for 3D Scenes [article]

Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann
2021 arXiv   pre-print
We present Mix3D, a data augmentation technique for segmenting large-scale 3D scenes.  ...  Since scene context helps reasoning about object semantics, current works focus on models with large capacity and receptive fields that can fully capture the global context of an input 3D scene.  ...  Mix3D with labels performs best, while mixing without labels remains a viable approach when large amounts of unlabeled data are available or labeling is too costly.  ... 
arXiv:2110.02210v2 fatcat:u4dro3eclnhu3mkmbahrp2f3am
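
The mixing operation described above is essentially the union of two scenes. Below is a minimal sketch, assuming (N, 3) point arrays with per-point labels and simple center alignment; voxelization, cropping and other details of the actual pipeline are omitted.

    # Minimal NumPy sketch of scene mixing: center two point clouds and concatenate
    # points and labels into one out-of-context training sample. Details are omitted.
    import numpy as np

    def mix3d(points_a, labels_a, points_b, labels_b):
        """points_*: (N, 3) xyz coordinates; labels_*: (N,) per-point semantic labels."""
        pa = points_a - points_a.mean(axis=0)           # align the scene centers so they overlap
        pb = points_b - points_b.mean(axis=0)
        mixed_points = np.concatenate([pa, pb], axis=0)
        mixed_labels = np.concatenate([labels_a, labels_b], axis=0)
        return mixed_points, mixed_labels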

Showing results 1 — 15 out of 256,620 results