Rethinking ImageNet Pre-training
[article]
2018
arXiv
pre-print
The results are no worse than their ImageNet pre-training counterparts even when using the hyper-parameters of the baseline system (Mask R-CNN) that were optimized for fine-tuning pre-trained models, with ...
To push the envelope we demonstrate 50.9 AP on COCO object detection without using any external data---a result on par with the top COCO 2017 competition results that used ImageNet pre-training. ...
It can be sufficient to directly train on the target data if its dataset scale is large enough. ...
arXiv:1811.08883v1
fatcat:qv7hctiy3ndopdirjhsbdbcore
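The paper's central comparison is a detector whose backbone is ImageNet-pretrained versus one that is randomly initialized. A minimal sketch, assuming torchvision >= 0.13 (the authors' experiments used Mask R-CNN in Detectron, not torchvision):

```python
# Minimal sketch of the paper's central comparison, assuming torchvision >= 0.13
# (the authors' experiments used Mask R-CNN in Detectron, not torchvision).
from torchvision.models import ResNet50_Weights
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Baseline: backbone initialized from ImageNet pre-training, then fine-tuned.
pretrained = maskrcnn_resnet50_fpn(
    weights=None,                                    # no COCO-trained head
    weights_backbone=ResNet50_Weights.IMAGENET1K_V1,
    num_classes=91,                                  # COCO label space
)

# "From scratch": random initialization everywhere. The paper reaches
# comparable AP this way by training 2-3x longer and using normalization
# suited to small batches (GroupNorm / SyncBN).
from_scratch = maskrcnn_resnet50_fpn(
    weights=None,
    weights_backbone=None,
    num_classes=91,
)
```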
Greedy Layerwise Learning Can Scale to ImageNet
[article]
2019
arXiv
pre-print
To our knowledge, this is the first competitive alternative to end-to-end training of CNNs that can scale to ImageNet. ...
Using a simple set of ideas for architecture and training, we find that solving sequential 1-hidden-layer auxiliary problems leads to a CNN that exceeds AlexNet performance on ImageNet. ...
Supervised end-to-end learning is the standard approach to neural network optimization. However, it has potential issues that can be valuable to consider. ...
arXiv:1812.11446v3
fatcat:sc7yz4pwsrhzfkxdcp64ijpyji
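The greedy scheme the abstract describes can be sketched as: freeze the already-trained prefix, then fit one new block together with a throwaway 1-hidden-layer auxiliary classifier. A hedged PyTorch illustration; the block widths, head size, and train_loader are assumptions, not the paper's exact setup:

```python
import torch
import torch.nn as nn

def make_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(), nn.MaxPool2d(2))

chans = [64, 128, 256]                     # illustrative widths
blocks = [make_block(3, 64), make_block(64, 128), make_block(128, 256)]

def train_block(k, loader, num_classes=10, epochs=1):
    """Train blocks[k] with all earlier blocks frozen, so each stage is an
    independent shallow problem; the auxiliary head is discarded afterwards."""
    for b in blocks[:k]:
        for p in b.parameters():
            p.requires_grad_(False)
    aux = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                        nn.Linear(chans[k], 128), nn.ReLU(),   # 1 hidden layer
                        nn.Linear(128, num_classes))
    opt = torch.optim.SGD(list(blocks[k].parameters()) + list(aux.parameters()),
                          lr=0.1, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():          # frozen prefix, no gradients
                for b in blocks[:k]:
                    x = b(x)
            loss = nn.functional.cross_entropy(aux(blocks[k](x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

# for k in range(len(blocks)):             # one stage at a time
#     train_block(k, train_loader)         # train_loader: assumed DataLoader
```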
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
[article]
2016
arXiv
pre-print
Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. ...
We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. ...
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks ...
arXiv:1603.05279v4
fatcat:dsfl3pwi55fxzcicde65yh4s3y
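The weight binarization the abstract refers to approximates each real-valued filter W by alpha * sign(W), where the per-filter scale alpha = mean(|W|) minimizes the L2 approximation error. A minimal sketch in PyTorch:

```python
# Sketch of XNOR-Net-style per-filter weight binarization:
# W is approximated by alpha * sign(W), with alpha = mean(|W|).
import torch

def binarize_weights(w: torch.Tensor):
    """w: conv weight of shape (out_ch, in_ch, kH, kW).
    Returns (b, alpha) with w ~= alpha.view(-1, 1, 1, 1) * b."""
    alpha = w.abs().mean(dim=(1, 2, 3))   # one scalar per output filter
    b = torch.sign(w)
    b[b == 0] = 1.0                       # map sign(0) to +1 by convention
    return b, alpha

w = torch.randn(64, 3, 3, 3)
b, alpha = binarize_weights(w)
w_approx = alpha.view(-1, 1, 1, 1) * b
print((w - w_approx).pow(2).mean())       # small approximation error
```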
Do ImageNet Classifiers Generalize to ImageNet?
[article]
2019
arXiv
pre-print
We evaluate a broad range of models and find accuracy drops of 3%-15% on CIFAR-10 and 11%-14% on ImageNet. ...
However, accuracy gains on the original test sets translate to larger gains on the new test sets. ...
while working on this paper. ...
arXiv:1902.10811v2
fatcat:d32xzqv56fbxfh26omlfy6g4bi
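The evaluation protocol is simple: score the same frozen model on the original test set and on the newly collected replication set (ImageNetV2), then report the drop. A schematic sketch; the model and loaders are assumed to exist:

```python
# Schematic evaluation: same model, two test sets, report the accuracy drop.
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    model.eval().to(device)
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

# acc_orig = accuracy(model, imagenet_val_loader)  # original test set
# acc_new = accuracy(model, imagenet_v2_loader)    # new replication test set
# print(f"drop: {100 * (acc_orig - acc_new):.1f} points")
```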
Systematic evaluation of convolution neural network advances on the Imagenet
2017
Computer Vision and Image Understanding
The paper systematically studies the impact of a range of recent advances in CNN architectures and learning methods on the object categorization (ILSVRC) problem. ...
We show that the use of 128x128 pixel images is sufficient to make qualitative conclusions about optimal network structure that hold for the full-size Caffe and VGG nets. ...
There has not been much research on the optimal colorspace or pre-processing techniques for CNNs. ...
doi:10.1016/j.cviu.2017.05.007
fatcat:4u2un2jkp5aapc5wfoxihjxtey
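The paper's speed trick, running architecture sweeps at 128x128 instead of full resolution, amounts to a different input pipeline. An illustrative torchvision transform; the 146-pixel resize is a guess that keeps the usual 256:224 resize-to-crop ratio, not a value from the paper:

```python
from torchvision import transforms

fast_sweep_tf = transforms.Compose([
    transforms.Resize(146),            # illustrative: preserve crop ratio
    transforms.RandomCrop(128),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```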
Self-EMD: Self-Supervised Object Detection without ImageNet
[article]
2021
arXiv
pre-print
Our Faster R-CNN (ResNet50-FPN) baseline achieves 39.8% mAP on COCO, which is on par with the state of the art self-supervised methods pre-trained on ImageNet. ...
More importantly, it can be further improved to 40.4% mAP with more unlabeled images, showing its great potential for leveraging more easily obtained unlabeled data. Code will be made available. ...
For Mask R-CNN with FPN, the AP increases from 38.5% to 39.3% (+0.8%); for Mask R-CNN with the C4 architecture, the AP increases from 37.9% to 38.5% (+0.6%). ...
arXiv:2011.13677v3
fatcat:2dv6eda335gltpnlz2cgv6bkpu
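The Earth Mover's Distance objective can be illustrated by treating each spatial location of a feature map as one unit of mass with uniform weights, in which case the optimal transport reduces to an assignment problem. A simplified sketch of the idea (the paper itself uses weighted marginals, so this approximates the concept, not the method):

```python
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def emd_similarity(f1, f2):
    """f1, f2: (C, H, W) feature maps from two augmented views."""
    a = F.normalize(f1.flatten(1).t(), dim=1)   # (H*W, C), unit rows
    b = F.normalize(f2.flatten(1).t(), dim=1)
    cost = (1 - a @ b.t()).numpy()              # pairwise cosine distance
    rows, cols = linear_sum_assignment(cost)    # optimal 1-to-1 transport
    return 1 - cost[rows, cols].mean()          # higher = more similar

print(emd_similarity(torch.randn(256, 7, 7), torch.randn(256, 7, 7)))
```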
Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?
[article]
2018
arXiv
pre-print
We examine the architectures of various 3D CNNs from relatively shallow to very deep ones on current video datasets. ...
(ii) The Kinetics dataset has sufficient data for training deep 3D CNNs, and enables training of ResNets up to 152 layers deep, interestingly similar to 2D ResNets on ImageNet. ...
datasets are obviously too small to be used for optimizing CNN representations from scratch. ...
arXiv:1711.09577v2
fatcat:vdo7vkhspjfkbe2w5ojho5zj4e
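A shape sketch using torchvision's r3d_18, an 18-layer 3D ResNet (much shallower than the 152-layer variant the paper trains on Kinetics): video clips enter as 5-D tensors, batch x channels x frames x height x width.

```python
import torch
from torchvision.models.video import r3d_18

model = r3d_18(weights=None)             # random init, to be trained on video
clip = torch.randn(2, 3, 16, 112, 112)   # 2 clips, 16 RGB frames of 112x112
print(model(clip).shape)                 # torch.Size([2, 400]): Kinetics-400
```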
Beyond ImageNet: Deep Learning in Industrial Practice
[chapter]
2019
Applied Data Science
We will first present a classical application of CNNs on image-like data, in particular, phenotype classification of cells based on their morphology, and then extend the task to clustering voices based ...
We will focus on convolutional neural networks (CNNs), which, since the seminal work of Krizhevsky et al. (2012), have revolutionized image classification and even started surpassing human performance on ...
The architecture of the CNN is inspired by the second-best entry of the 2014 ImageNet competition (Simonyan & Zisserman, 2015). ...
doi:10.1007/978-3-030-11821-1_12
fatcat:5uo3qf2uw5e2dlzkjvtapynbhu
Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results
[article]
2022
arXiv
pre-print
We test USI on a wide variety of architectures, including CNNs, Transformers, Mobile-oriented and MLP-only. On all models tested, USI outperforms previous state-of-the-art results. ...
Hence, we are able to transform training on ImageNet from an expert-oriented task to an automatic seamless routine. ...
Hence, the optimization process on ImageNet is more sensitive to different hyper-parameters, and the architecture used. ...
arXiv:2204.03475v2
fatcat:lorfkniriras5pawocfcttxowu
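A core ingredient of the unified scheme is knowledge distillation from a fixed teacher, which lets the same recipe drive any student backbone. A hedged sketch of such a loss; the temperature and weighting below are illustrative, not USI's published values:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T     # distillation term
    hard = F.cross_entropy(student_logits, labels)     # ground-truth term
    return alpha * soft + (1 - alpha) * hard
```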
Diverse Imagenet Models Transfer Better
[article]
2022
arXiv
pre-print
A commonly accepted hypothesis is that models with higher accuracy on Imagenet perform better on other downstream tasks, leading to much research dedicated to optimizing Imagenet accuracy. ...
We demonstrate our results on several architectures and multiple downstream tasks, including both single-label and multi-label classification. ...
A common practice is to pre-train a model on a large-scale supervised dataset such as ImageNet [68] and fine-tune it on the downstream (target) dataset that is typically of a smaller scale. ...
arXiv:2204.09134v1
fatcat:5eq6c327rzhtjpb7iyimrtqrrq
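The pre-train/fine-tune recipe the snippet describes is, in code, loading ImageNet weights, swapping the classification head for the downstream label space, and training on the smaller target set. A minimal torchvision sketch; the 37-class head and the learning rates are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 37)   # e.g. 37 downstream classes

# A common choice: smaller learning rate for the pretrained trunk than the head.
opt = torch.optim.SGD([
    {"params": [p for n, p in model.named_parameters()
                if not n.startswith("fc")], "lr": 1e-3},
    {"params": model.fc.parameters(), "lr": 1e-2},
], momentum=0.9)
```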
Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet
[article]
2021
arXiv
pre-print
However, ViT achieves inferior performance to CNNs when trained from scratch on a midsize dataset like ImageNet. ...
It also outperforms ResNets and achieves comparable performance with MobileNets by directly training on ImageNet. ...
Though ViT proves the full-transformer architecture is promising for vision tasks, its performance is still inferior to that of similar-sized CNN counterparts (e.g. ...
arXiv:2101.11986v3
fatcat:fupiobqjdzfkno3uclf2wu6fbq
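The "Tokens-to-Token" step replaces ViT's non-overlapping patch split with overlapping windows, so neighboring tokens share pixels. A minimal sketch of this soft split with nn.Unfold; the window sizes here illustrate the idea and are not necessarily the paper's exact stages:

```python
import torch
import torch.nn as nn

soft_split = nn.Unfold(kernel_size=7, stride=4, padding=2)

x = torch.randn(1, 3, 224, 224)           # input image
tokens = soft_split(x).transpose(1, 2)    # (1, num_tokens, 3*7*7)
print(tokens.shape)                       # torch.Size([1, 3136, 147])
```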
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
[chapter]
2016
Lecture Notes in Computer Science
Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. ...
We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. ...
This was the first CNN architecture shown to be successful on the ImageNet classification task. This network has 61M parameters. ...
doi:10.1007/978-3-319-46493-0_32
fatcat:n757ualbzzc5pof4hcv2e3tema
Resizing Tiny Imagenet: An Iterative Approach Towards Image Classification
2020
International Journal of Innovative Research in Computer Science & Technology
It is generally considered one of the harder datasets in the domain of image classification. ...
Stanford's Tiny ImageNet dataset has been around for a while and neural networks have struggled to classify it. ...
Most importantly, it uses the backpropagation algorithm to carry the weight updates back to the head of the model.
Tiny ImageNet is a subset of ImageNet. ...
doi:10.21276/ijircst.2020.8.6.8
fatcat:fa3fxf3jznh5jpazg3q667gceu
Deep Learning Earth Observation Classification Using ImageNet Pretrained Networks
2016
IEEE Geoscience and Remote Sensing Letters
In this letter, we propose a novel method by considering a pretrained CNN designed for tackling an entirely different classification problem, namely, the ImageNet challenge, and exploit it to extract an ...
Deep learning methods such as convolutional neural networks (CNNs) can deliver highly accurate classification results when provided with large enough data sets and respective labels. ...
The trainable CNN stage is essential due to its high discriminative potential: the system is able to learn complex relations among highly abstract representations and correctly separate ...
doi:10.1109/lgrs.2015.2499239
fatcat:xcwx2qiiqrfcdiolmga5q62jta
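The transfer idea here, using an ImageNet-pretrained CNN purely as a feature extractor for remote-sensing scenes, can be sketched as below; ResNet-18 is a stand-in (the 2016 letter used a different pretrained network), and a light classifier (SVM, small MLP) would then be trained on the descriptors:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()               # expose 512-d global features
backbone.eval()

@torch.no_grad()
def describe(batch):                      # batch: (B, 3, 224, 224)
    return backbone(batch)                # (B, 512) scene descriptors
```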
Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective
[article]
2021
arXiv
pre-print
In the NAS-Bench-201 and DARTS search spaces, TE-NAS completes a high-quality search at a cost of only 0.5 and 4 GPU hours with one 1080Ti on CIFAR-10 and ImageNet, respectively. ...
Neural Architecture Search (NAS) has been explosively studied to automate the discovery of top-performer neural networks. ...
A one-shot super network can share its parameters to sampled sub-networks and accelerate the architecture evaluations, but it is very heavy and hard to optimize and suffers from a poor correlation between ...
arXiv:2102.11535v4
fatcat:pnid7ogihncrvh26roc5cijdcu
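One of the training-free indicators in the TE-NAS spirit is the number of linear regions of the ReLU network at initialization. A crude, hedged proxy: count distinct activation patterns over random inputs (the paper's estimator is more careful; everything below is illustrative):

```python
import torch
import torch.nn as nn

def activation_pattern_count(model, sample_shape=(3, 32, 32), n=256):
    acts = []
    hooks = [m.register_forward_hook(lambda _m, _i, o: acts.append(o > 0))
             for m in model.modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(torch.randn(n, *sample_shape))
    for h in hooks:
        h.remove()
    patterns = torch.cat([a.flatten(1) for a in acts], dim=1)   # (n, units)
    return len({tuple(row.tolist()) for row in patterns})       # distinct

net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
print(activation_pattern_count(net))      # higher = more expressive (proxy)
```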
Showing results 1 — 15 out of 12,434 results