10,439 Hits in 3.1 sec

Distilling Image Classifiers in Object Detectors [article]

Shuxuan Guo and Jose M. Alvarez and Mathieu Salzmann
2022 arXiv   pre-print
To this end, we study the case of object detection and, instead of following the standard detector-to-detector distillation approach, introduce a classifier-to-detector knowledge transfer framework.  ...  Our experiments on several detectors with different backbones demonstrate the effectiveness of our approach, allowing us to outperform the state-of-the-art detector-to-detector distillation methods.  ...  Acknowledgments and Disclosure of Funding This work was supported in part by the Swiss National Science Foundation and by NVIDIA during an internship.  ... 
arXiv:2106.05209v2 fatcat:3bqmn5exg5f63ktet2qgyujcwa
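Classifier-to-detector transfer, like most of the entries below, builds on the standard knowledge-distillation objective: match the student's softened class distribution to the teacher's. A minimal numpy sketch of that objective (the function names and temperature value are illustrative assumptions, not the authors' code):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the teacher's softened class distribution
    # and the student's, scaled by T^2 (Hinton et al.'s formulation).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

# A student that matches the teacher exactly incurs zero loss;
# any mismatch yields a positive penalty.
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))
print(distillation_loss([0.1, 2.0, -1.0], [2.0, 0.5, -1.0]))
```

In detector-oriented variants the same loss is applied per region proposal rather than per image.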

Open-vocabulary Object Detection via Vision and Language Knowledge Distillation [article]

Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, Yin Cui
2022 arXiv   pre-print
Our method distills the knowledge from a pretrained open-vocabulary image classification model (teacher) into a two-stage detector (student).  ...  It is costly to further scale up the number of classes contained in existing object detection datasets.  ...  In contrast, we use an image-text pretrained model as a teacher model to supervise student object detectors.  ... 
arXiv:2104.13921v3 fatcat:5mjvaxkeijbbrjxiobsiigc6dq
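Open-vocabulary detection of this kind classifies each detected region by scoring its embedding against per-class text embeddings from the vision-language teacher. A hedged numpy sketch of that scoring step (the shapes, temperature `tau`, and function name are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def classify_region(region_emb, text_embs, tau=0.01):
    # Score a region embedding against one text embedding per class
    # by cosine similarity, then softmax over classes.
    r = region_emb / np.linalg.norm(region_emb)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = t @ r / tau          # (num_classes,)
    logits = logits - logits.max()
    p = np.exp(logits)
    p = p / p.sum()
    return int(np.argmax(p)), p

# Toy example: three orthogonal "text" embeddings; the region embedding
# is closest to class 1, so class 1 wins.
idx, probs = classify_region(np.array([0.1, 0.9, 0.0]), np.eye(3))
print(idx)
```

Because the class set is defined only by the text embeddings, novel categories can be added at inference time without retraining the detector.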

Boosting Weakly Supervised Object Detection with Progressive Knowledge Transfer [article]

Yuanyi Zhong, Jianfeng Wang, Jian Peng, Lei Zhang
2020 arXiv   pre-print
The box-level pseudo ground truths mined by the target-domain detector in each iteration effectively improve the one-class universal detector.  ...  In this paper, we propose an effective knowledge transfer framework to boost the weakly supervised object detection accuracy with the help of an external fully-annotated source dataset, whose categories  ...  The testing time of the final distilled detector is similar to the usual detector. The details are in the supplementary.  ... 
arXiv:2007.07986v1 fatcat:h7p67bydljcwfbhx5z7zx3i3yy

Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection [article]

Hanoona Rasheed, Muhammad Maaz, Muhammad Uzair Khattak, Salman Khan, Fahad Shahbaz Khan
2022 arXiv   pre-print
In essence, the proposed model seeks to minimize the gap between object and image-centric representations in the OVD setting.  ...  Existing open-vocabulary object detectors typically enlarge their vocabulary sizes by leveraging different forms of weak supervision. This helps generalize to novel objects at inference.  ...  Given an image I ∈ R H×W ×3 , we design an open-vocabulary object detector to solve two subsequent problems: (1) effectively localize all objects in the image, (2) classify the detected region into one  ... 
arXiv:2207.03482v1 fatcat:l75lktl6pbf4vdnoskfzz6a52m

Unknown-Aware Object Detection: Learning What You Don't Know from Videos in the Wild [article]

Xuefeng Du, Xin Wang, Gabriel Gozum, Yixuan Li
2022 arXiv   pre-print
We propose a new unknown-aware object detection framework through Spatial-Temporal Unknown Distillation (STUD), which distills unknown objects from videos in the wild and meaningfully regularizes the model's  ...  Building reliable object detectors that can detect out-of-distribution (OOD) objects is critical yet underexplored.  ...  Notably, our distillation process for object detection is performed at the object level, in contrast to constructing the image-level outliers [18] .  ... 
arXiv:2203.03800v1 fatcat:w2zd57ynn5f25mevpeukk6i3ye

WSOD^2: Learning Bottom-up and Top-down Objectness Distillation for Weakly-supervised Object Detection [article]

Zhaoyang Zeng, Bei Liu, Jianlong Fu, Hongyang Chao, Lei Zhang
2019 arXiv   pre-print
In this paper, we propose a novel WSOD framework with Objectness Distillation (i.e., WSOD^2) by designing a tailored training mechanism for weakly-supervised object detection.  ...  We study weakly-supervised object detection (WSOD), which plays a vital role in relieving human involvement from object-level annotations.  ...  Approach The overview of our proposed weakly-supervised object detector with objectness distillation (WSOD^2) is illustrated in Figure 2. We first adopt a basic multiple instance detector (i.e.  ... 
arXiv:1909.04972v1 fatcat:lazg32wyg5gztoommiwpwuifmu

Learning Lightweight Pedestrian Detector with Hierarchical Knowledge Distillation

Rui Chen, Haizhou Ai, Chong Shang, Long Chen, Zijie Zhuang
2019 2019 IEEE International Conference on Image Processing (ICIP)  
In particular, the proposed distillation is performed at multiple hierarchies and multiple stages in a modern detector, which empowers the student detector to learn both low-level details and high-level abstractions  ...  This work presents a novel hierarchical knowledge distillation framework to learn a lightweight pedestrian detector, which significantly reduces the computational cost and still maintains high accuracy  ...  Faster R-CNN [6] is a typical two-stage detector, which generates region proposals in the first stage and classifies the proposals in the second stage.  ... 
doi:10.1109/icip.2019.8803079 dblp:conf/icip/ChenASCZ19 fatcat:nkcs46kqjjctpdxdbt4i42a7zi
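Hierarchical distillation of this kind typically adds an L2 "hint" loss between student and teacher feature maps at several stages of the network. A minimal sketch under the assumption that the feature shapes already match (e.g. via a 1x1 adapter layer; the function name and shapes are illustrative, not the paper's exact formulation):

```python
import numpy as np

def hint_loss(student_feats, teacher_feats):
    # Mean-squared error between intermediate feature maps, summed over
    # hierarchy levels so the student mimics the teacher at every stage.
    return sum(float(np.mean((s - t) ** 2))
               for s, t in zip(student_feats, teacher_feats))

# Toy features at two stages (channels x height x width); the student's
# maps are a small perturbation of the teacher's.
rng = np.random.default_rng(0)
t1 = rng.standard_normal((8, 4, 4))
t2 = rng.standard_normal((16, 2, 2))
s1, s2 = t1 + 0.1, t2 + 0.1
print(hint_loss([s1, s2], [t1, t2]))
```

In practice this term is added to the usual detection losses with a weighting coefficient tuned per stage.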

Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks [article]

Yoshitomo Matsubara, Marco Levorato
2020 arXiv   pre-print
The code and trained models are available at .  ...  by introducing a bottleneck layer in the early layers of the head model, and (ii) prefilter pictures that do not contain objects of interest using a convolutional neural network.  ...  Additionally, we observe that while in image classification tasks all images are classified, in object detection tasks only a fraction of the images may contain objects of interest.  ... 
arXiv:2007.15818v2 fatcat:rsofmyqgxfdapoam7gyltaja4y

Adapting Models to Signal Degradation using Distillation [article]

Jong-Chyi Su, Subhransu Maji
2017 arXiv   pre-print
We apply this technique to learn models for recognizing low-resolution images using labeled high-resolution images, non-localized objects using labeled localized objects, line-drawings using labeled color  ...  We show that in many scenarios of practical importance such aligned data can be synthetically generated using computer graphics pipelines allowing domain adaptation through distillation.  ...  Acknowledgement: This research was supported in part by the NSF grants IIS-1617917 and ABI-1661259, and a faculty gift from Facebook.  ... 
arXiv:1604.00433v2 fatcat:46hiev4s2je65eovxige5vdsc4

Learning Efficient Detector with Semi-supervised Adaptive Distillation [article]

Shitao Tang, Litong Feng, Wenqi Shao, Zhanghui Kuang, Wei Zhang, Yimin Chen
2019 arXiv   pre-print
Knowledge Distillation (KD) has been used in image classification for model compression. However, few studies apply this technique to single-stage object detectors.  ...  Focal loss shows that the accumulated errors of easily-classified samples dominate the overall loss in the training process. This problem is also encountered when applying KD in the detection task.  ...  In [20], experiments show that object detectors can gain extra improvement from semi-supervised learning. Another work is data distillation [16].  ... 
arXiv:1901.00366v2 fatcat:p2u5twvsgbajbopx6x2j2cmeki
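The focal loss referenced here down-weights easy samples so that hard, misclassified ones dominate training, which is why it interacts with KD on single-stage detectors. A sketch of the standard binary formulation (Lin et al.); the gamma/alpha defaults follow the original focal-loss paper, not this entry:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    # Binary focal loss: the (1 - pt)^gamma factor shrinks the loss of
    # well-classified examples; alpha balances positives vs. negatives.
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    pt = np.where(y == 1, p, 1.0 - p)          # prob. of the true class
    at = np.where(y == 1, alpha, 1.0 - alpha)  # class-balance weight
    return -at * (1.0 - pt) ** gamma * np.log(pt)

# An easy positive (p = 0.95) contributes far less loss than a hard
# positive (p = 0.30), so hard examples dominate the gradient.
easy = focal_loss(np.array([0.95]), np.array([1]))[0]
hard = focal_loss(np.array([0.30]), np.array([1]))[0]
print(easy, hard)
```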

Towards Noise-resistant Object Detection with Noisy Annotations [article]

Junnan Li, Caiming Xiong, Richard Socher, Steven Hoi
2020 arXiv   pre-print
Training deep object detectors requires a significant amount of human-annotated images with accurate object labels and bounding box coordinates, which are extremely expensive to acquire.  ...  The first step performs class-agnostic bounding box correction by minimizing classifier discrepancy and maximizing region objectness.  ...  Weakly-supervised Object Detection Weakly-supervised object detection aims to learn object detectors with only image-level labels.  ... 
arXiv:2003.01285v1 fatcat:d6b3xjy3dzbqnbcjnql34qgpdq

Incremental Learning of Object Detectors without Catastrophic Forgetting [article]

Konstantin Shmelkov, Cordelia Schmid, Karteek Alahari
2017 arXiv   pre-print
We present a method to address this issue, and learn object detectors incrementally, when neither the original training data nor annotations for the original classes in the new training set are available.  ...  objects of new classes, in the absence of the initial training data.  ...  This work was supported in part by the ERC advanced grant ALLEGRO, a Google research award, and gifts from Facebook and Intel.  ... 
arXiv:1708.06977v1 fatcat:sali36jssbetbfdhxbhp6o3ohi

Comprehensive Attention Self-Distillation for Weakly-Supervised Object Detection [article]

Zeyi Huang, Yang Zou, Vijayakumar Bhagavatula, Dong Huang
2020 arXiv   pre-print
Weakly Supervised Object Detection (WSOD) has emerged as an effective tool to train object detectors using only the image-level category labels.  ...  However, without object-level labels, WSOD detectors are prone to detect bounding boxes on salient objects, clustered objects and discriminative object parts.  ...  WSOD instance classifiers (object detectors) are trained over these bags.  ... 
arXiv:2010.12023v1 fatcat:yg25zw3bjrb5ze4h4brbod7r7u

Progressive Object Transfer Detection

Hao Chen, Yali Wang, Guoyou Wang, Xiang Bai, Yu Qiao
2019 IEEE Transactions on Image Processing  
In LSTD, we distill the implicit object knowledge of the source detector to enhance the target detector with few annotations. It can effectively warm up WSTD later on.  ...  In WSTD, we design a recurrent object labelling mechanism for learning to annotate weakly-labeled images.  ...  Hence, we propose to distill it for each object proposal in the target domain. (1) Extracting SDK from Source Detector.  ... 
doi:10.1109/tip.2019.2938680 fatcat:vcqeip2q3vc2vfzp7vfqshj3xq

Contrast R-CNN for Continual Learning in Object Detection [article]

Kai Zheng, Cen Chen
2021 arXiv   pre-print
The continual learning problem has been widely studied in image classification, while little work has explored it in object detection.  ...  In our paper, we propose a new scheme for continual learning of object detection, namely Contrast R-CNN, an approach that strikes a balance between retaining the old knowledge and learning the new knowledge  ...  VOC 2007 is composed of 5K images in the trainval set and 5K images in the test set, covering 20 object categories.  ... 
arXiv:2108.04224v1 fatcat:x36lkrekabh77ky5uas4nk4hru
Showing results 1 — 15 out of 10,439 results