133,669 Hits in 7.1 sec

Finding the Optimal Network Depth in Classification Tasks [article]

Bartosz Wójcik, Maciej Wołczyk, Klaudia Bałazy, Jacek Tabor
2020 arXiv   pre-print
This operation, which can be seen as finding the optimal depth of the model, significantly reduces the number of parameters and accelerates inference across different hardware processing units, which is  ...  We show the performance of our method on multiple network architectures and datasets, analyze its optimization properties, and conduct ablation studies.  ...  Inspired by those findings, we introduce NetCut, a quick end-to-end method for reducing the depth of the network in classification tasks.  ... 
arXiv:2004.08172v1 fatcat:uh2zwxqxabcetopiko4nrtstgi
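The payoff this entry claims (fewer parameters at a shallower depth) is easy to see concretely. The toy sketch below is not NetCut itself, only an illustration of how parameter count shrinks as a stack of hidden blocks is cut shorter; the block width and sizes are arbitrary assumptions.

```python
# Not NetCut itself -- just a toy illustration of the payoff the snippet
# describes: cutting a classifier to a shallower depth shrinks the
# parameter count that inference has to touch.
import torch.nn as nn

def make_classifier(depth: int, width: int = 256, n_classes: int = 10) -> nn.Sequential:
    """Stack `depth` hidden blocks, then a linear classification head."""
    blocks = [nn.Sequential(nn.Linear(width, width), nn.ReLU())
              for _ in range(depth)]
    return nn.Sequential(nn.Flatten(), nn.Linear(784, width),
                         *blocks, nn.Linear(width, n_classes))

for depth in (2, 4, 8, 16):
    model = make_classifier(depth)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"depth={depth:2d}  params={n_params:,}")
```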

Auto-Meta: Automated Gradient Based Meta Learner Search [article]

Jaehong Kim, Sangyeul Lee, Sungwan Kim, Moonsu Cha, Jung Kwon Lee, Youngduck Choi, Yongseok Choi, Dong-Yeon Cho, Jiwon Kim
2018 arXiv   pre-print
We adopt progressive neural architecture search (Liu et al., DBLP:journals/corr/abs-1712-00559) to find optimal architectures for meta-learners.  ...  To the best of our knowledge, this work is the first successful neural architecture search implementation in the context of meta-learning.  ...  To investigate how our progressive network architecture search algorithm can find the best cells for few-shot image classification tasks, we observed the distribution of depths of the promising cells as  ... 
arXiv:1806.06927v2 fatcat:uqn2tsabd5gsbm42l7p5c2hm54

From generic to specific deep representations for visual recognition

Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, Stefan Carlsson
2015 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
In the common scenario, a ConvNet is trained on a large labeled dataset and the feed-forward unit activations at a certain layer of the network are used as a generic representation of an input image.  ...  Furthermore, by optimizing these factors, we achieve state-of-the-art performance on 16 visual recognition tasks.  ...  Another example is that we could do fine-tuning with the optimal choices of parameters for nearly all tasks. Obviously, it was highly computationally expensive to produce all the existing results.  ... 
doi:10.1109/cvprw.2015.7301270 dblp:conf/cvpr/AzizpourRSMC15 fatcat:m3ivxh3xgzcn3hyeba6klgnoay
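The "generic representation" recipe this abstract describes can be sketched in a few lines. The backbone choice (torchvision's ResNet-50 with ImageNet weights) and the random stand-in batch below are assumptions for illustration, not the paper's exact setup.

```python
# A minimal sketch of the generic-representation recipe the snippet
# describes: freeze a ConvNet trained on a large source dataset and use
# activations from a late layer as off-the-shelf features for a new task.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()            # drop the source classifier head
backbone.eval()                        # frozen feature extractor

images = torch.randn(8, 3, 224, 224)   # stand-in for a target-task batch
with torch.no_grad():
    features = backbone(images)        # (8, 2048) generic representations
print(features.shape)
# `features` would then feed a simple classifier (e.g. a linear SVM)
# trained on the target task's labels.
```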

Bayesian Optimization Combined with Successive Halving for Neural Network Architecture Optimization

Martin Wistuba
2017 European Conference on Principles of Data Mining and Knowledge Discovery  
However, the automated methods find shallow convolutional neural networks that outperform human-crafted shallow neural networks with respect to classification error and training time.  ...  this task.  ...  In the following, we will assume that we want to find the optimal network architecture for an image classification task.  ... 
dblp:conf/pkdd/Wistuba17 fatcat:hxkcdq5h55hwtb3cg6z4zeh6k4
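The successive-halving half of this entry's method can be sketched independently of the Bayesian-optimization half that proposes candidates. `train_and_score` below is a hypothetical callback standing in for actual network training.

```python
# A bare-bones successive-halving loop: train all survivors for a growing
# budget, keep the top 1/eta, repeat until one configuration remains.
import math

def successive_halving(configs, train_and_score, min_budget=1, eta=2):
    budget = min_budget
    while len(configs) > 1:
        scores = [(train_and_score(cfg, budget), cfg) for cfg in configs]
        scores.sort(key=lambda pair: pair[0], reverse=True)  # higher = better
        configs = [cfg for _, cfg in scores[: max(1, len(scores) // eta)]]
        budget *= eta                  # survivors earn more training budget
    return configs[0]

# Example with a toy scoring function (stands in for real training):
best = successive_halving(
    configs=[{"lr": 10 ** -k} for k in range(1, 6)],
    train_and_score=lambda cfg, b: -abs(math.log10(cfg["lr"]) + 3) + 0.01 * b,
)
print(best)
```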

Deep Architectures for Modulation Recognition [article]

Nathan E West, Timothy J. O'Shea
2017 arXiv   pre-print
We survey the latest advances in machine learning with deep neural networks by applying them to the task of radio modulation recognition.  ...  Results show that radio modulation recognition is not limited by network depth, and further work should focus on improving learned synchronization and equalization.  ...  For deep learning classification tasks, the probability distribution is usually a softmax (Eq. 2) of the output of the classifier network, which is then converted to a one-hot encoding for classification  ... 
arXiv:1703.09197v1 fatcat:n3gjj4o5pfgkze5mdpwq3wo4yy
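The "Eq. 2" the snippet refers to is presumably the standard softmax, softmax(z)_i = exp(z_i) / sum_j exp(z_j). A minimal NumPy rendering of that step, plus the one-hot conversion the snippet mentions:

```python
# Standard softmax (presumably the snippet's "Eq. 2") followed by the
# argmax -> one-hot step used to read off a predicted modulation class.
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)       # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([2.0, 0.5, -1.0, 0.1])        # raw classifier outputs
probs = softmax(logits)                         # class probability distribution
one_hot = np.eye(len(logits))[probs.argmax()]   # hard one-hot prediction
print(probs.round(3), one_hot)
```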

Enhanced Transfer Learning with ImageNet Trained Classification Layer [article]

Tasfia Shermin, Shyh Wei Teng, Manzur Murshed, Guojun Lu, Ferdous Sohel, Manoranjan Paul
2019 arXiv   pre-print
Thus, we hypothesize that the presence of this layer is crucial for growing network depth to adapt better to a new task.  ...  However, the impact of the ImageNet pre-trained classification layer in parameter fine-tuning is mostly unexplored in the literature.  ...  The main contributions of the paper are as follows: we propose to include the pre-trained classification layer in fine-tuning and find that the transfer learning performance with the pre-trained classification  ... 
arXiv:1903.10150v2 fatcat:ckfpdln2dfbazbcje7vqdfk4ba
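One plausible reading of the paper's hypothesis, sketched below: keep the ImageNet 1000-way classification layer in the stack and grow a fresh head on top of it, rather than replacing it as the usual fine-tuning recipe does. The backbone (ResNet-18) and target-class count are assumptions for illustration, not the paper's exact architecture.

```python
# The usual fine-tuning recipe throws the ImageNet classifier layer away;
# this sketch instead keeps it and grows a new task head on its outputs,
# one reading of the hypothesis in this entry.
import torch.nn as nn
from torchvision import models

num_target_classes = 20                   # hypothetical new task

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Sequential(
    model.fc,                             # keep the ImageNet 1000-way layer
    nn.ReLU(),
    nn.Linear(1000, num_target_classes),  # fresh head for the target task
)
# ...then fine-tune `model` end-to-end on the target dataset as usual.
```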

Factors of Transferability for a Generic ConvNet Representation [article]

Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, Stefan Carlsson
2015 arXiv   pre-print
In the common scenario, a ConvNet is trained on a large labeled dataset (source) and the feed-forward unit activations of the trained network, at a certain layer, are used as a generic representation  ...  Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks.  ...  These include the source task (encoded in the labelled training data), network width and depth, the distribution of the training data, and optimization parameters.  ... 
arXiv:1406.5774v3 fatcat:he252tis7faqjbxkg7y5gqicmy

Is deeper better? It depends on locality of relevant features [article]

Takashi Mori, Masahito Ueda
2021 arXiv   pre-print
It has been recognized that a heavily overparameterized artificial neural network exhibits surprisingly good generalization performance in various machine-learning tasks.  ...  In most of those previous works, the overparameterization is achieved by increasing the width of the network, while the effect of increasing the depth has remained less well understood.  ...  Conclusion In this work, we have studied the effect of increasing the depth in classification tasks.  ... 
arXiv:2005.12488v2 fatcat:bpa6wxdxzzfaxosxc4n4jmzopi

Rethinking Graph Neural Architecture Search from Message-passing [article]

Shaofei Cai, Liang Li, Jincan Deng, Beichen Zhang, Zheng-Jun Zha, Li Su, Qingming Huang
2021 arXiv   pre-print
The GNAS can automatically learn a better architecture with the optimal depth of message passing on the graph.  ...  The searched network achieves remarkable improvement over state-of-the-art manually designed and search-based GNNs on five large-scale datasets at three classical graph tasks.  ...  Specifically, it depends on the diameter of the graph in the specific dataset. In order to find the optimal network depth, current works usually resort to enumeration, at high computational cost.  ... 
arXiv:2103.14282v4 fatcat:rshdr2tqvbbm7ijborgvtjju7a

Analysis of Dimensional Influence of Convolutional Neural Networks for Histopathological Cancer Classification [article]

Shreyas Rajesh Labhsetwar, Alistair Michael Baretto, Raj Sunil Salvi, Piyush Arvind Kolte, Veerasai Subramaniam Venkatesh
2020 arXiv   pre-print
Convolutional Neural Networks can be designed with different levels of complexity depending upon the task at hand.  ...  This paper analyzes the effect of dimensional changes to the CNN architecture on its performance on the task of Histopathological Cancer Classification.  ...  accuracy in image classification tasks.  ... 
arXiv:2011.04057v2 fatcat:hd5raqhsb5cs3g5fpkmoi7a2j4

Greedy Network Enlarging [article]

Chuanjian Liu, Kai Han, An Xiao, Yiping Deng, Wei Zhang, Chunjing Xu, Yunhe Wang
2021 arXiv   pre-print
In this paper, we propose to enlarge the capacity of CNN models by improving their width, depth and resolution at the stage level.  ...  By modifying the computations on different stages step by step, the enlarged network will be equipped with optimal allocation and utilization of MACs.  ...  So as to maximize the utilization of MACs (as shown in Eq. 1), for each stage i in the network we try to find its optimal depth d_i and width w_i.  ... 
arXiv:2108.00177v3 fatcat:iaa25hbwyfb63la5f3777iephq
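The stage-level greedy loop the snippet outlines can be sketched as follows; `estimate_macs` and `proxy_score` are hypothetical stand-ins for the paper's cost model and accuracy proxy, and the step sizes are arbitrary.

```python
# A schematic of the greedy loop: at each step, try enlarging one stage's
# depth or width, keep the single change that stays under the MACs budget
# and scores best; stop when no affordable move remains.
def greedy_enlarge(stages, macs_budget, estimate_macs, proxy_score):
    while True:
        best = None
        for i in range(len(stages)):
            for key, step in (("depth", 1), ("width", 16)):
                trial = [dict(s) for s in stages]
                trial[i][key] += step
                if estimate_macs(trial) > macs_budget:
                    continue                      # over budget, skip
                score = proxy_score(trial)
                if best is None or score > best[0]:
                    best = (score, trial)
        if best is None:                          # no affordable move left
            return stages
        stages = best[1]

# Toy usage: MACs ~ depth * width^2 per stage; score rewards capacity.
stages = greedy_enlarge(
    [{"depth": 1, "width": 16} for _ in range(4)],
    macs_budget=50_000,
    estimate_macs=lambda ss: sum(s["depth"] * s["width"] ** 2 for s in ss),
    proxy_score=lambda ss: sum(s["depth"] + s["width"] for s in ss),
)
print(stages)
```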

Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics

Roberto Cipolla, Yarin Gal, Alex Kendall
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss.  ...  This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings.  ...  At nearby values to the optimal weight the network performs worse on one of the tasks.  ... 
doi:10.1109/cvpr.2018.00781 dblp:conf/cvpr/KendallGC18 fatcat:rrkxslhy6ff47dlqfundq4p324
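The paper's uncertainty weighting is commonly implemented by learning a log-variance s_i per task and minimizing sum_i exp(-s_i) * L_i + s_i, so high-uncertainty tasks are automatically down-weighted while the +s_i term keeps the weights from collapsing to zero. A hedged PyTorch rendering (the paper derives slightly different constants for regression versus classification terms, glossed over here):

```python
# Homoscedastic-uncertainty loss weighting: each task gets a learned
# log-variance s_i; the combined loss is sum_i exp(-s_i) * L_i + s_i.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self, n_tasks: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))  # s_i, learned

    def forward(self, task_losses):
        total = 0.0
        for loss, s in zip(task_losses, self.log_vars):
            total = total + torch.exp(-s) * loss + s
        return total

weigher = UncertaintyWeightedLoss(n_tasks=2)
seg_loss, depth_loss = torch.tensor(0.7), torch.tensor(2.3)  # toy values
print(weigher([seg_loss, depth_loss]))  # optimize alongside model params
```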

Deep Neural Network Architectures for Modulation Classification [article]

Xiaoyu Liu, Diyu Yang, Aly El Gamal
2018 arXiv   pre-print
In this work, we investigate the value of employing deep learning for the task of wireless signal modulation recognition.  ...  Here, we follow the framework of [1] and find deep neural network architectures that deliver higher accuracy than the state of the art.  ...  Next, we explore the optimal depth of the CNN by increasing the number of convolutional layers from 2 to 5. We find that the best accuracy at high SNR is approximately 83.8%.  ... 
arXiv:1712.00443v3 fatcat:qetjoxg3ivhgbpb5wypxduyc3e
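The depth sweep the snippet describes (2 to 5 convolutional layers) has a simple skeleton; the layer widths, 11-class output, and I/Q frame length below are illustrative assumptions, not necessarily the paper's configuration, and the training loop is elided.

```python
# Skeleton of a convolutional-depth sweep for modulation classification:
# build CNNs with n conv layers and train/evaluate each (training elided).
import torch
import torch.nn as nn

def make_cnn(n_conv: int, n_classes: int = 11) -> nn.Sequential:
    layers, ch = [], 2                      # 2 input channels: I/Q signal
    for _ in range(n_conv):
        layers += [nn.Conv1d(ch, 64, kernel_size=3, padding=1), nn.ReLU()]
        ch = 64
    return nn.Sequential(*layers, nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                         nn.Linear(64, n_classes))

for n_conv in range(2, 6):
    model = make_cnn(n_conv)
    x = torch.randn(4, 2, 128)              # batch of 128-sample I/Q frames
    print(n_conv, "conv layers ->", model(x).shape)  # train/evaluate here
```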

Evolving Deep LSTM-based Memory Networks using an Information Maximization Objective

Aditya Rawal, Risto Miikkulainen
2016 Proceedings of the 2016 on Genetic and Evolutionary Computation Conference - GECCO '16  
In the first phase (unsupervised phase), independent memory modules are evolved by optimizing for the info-max objective. In the second phase, the networks are trained by optimizing the task fitness.  ...  To overcome these challenges, a new secondary optimization objective is introduced that maximizes the information (Info-max) stored in the LSTM network. The network training is split into two phases.  ...  This research was supported in part by the National Science Foundation under grant DBI-0939454 and National Institutes of Health under grant 1R01GM105042.  ... 
doi:10.1145/2908812.2908941 dblp:conf/gecco/RawalM16 fatcat:ctb4d6xe3zhg5jgwhsxzah2f2u

Accelerating neural architecture exploration across modalities using genetic algorithms

Daniel Cummings, Sharath Nittur Sridhar, Anthony Sarah, Maciej Szankin
2022 Proceedings of the Genetic and Evolutionary Computation Conference Companion  
Neural architecture search (NAS), the study of automating the discovery of optimal deep neural network architectures for tasks in domains such as computer vision and natural language processing, has seen  ...  Most NAS research efforts have centered around computer vision tasks and only recently have other modalities, such as natural language processing, been investigated in depth.  ...  Automating the process of finding optimal deep neural network (DNN) architectures for a given task, known as neural architecture search (NAS), has seen significant progress in the research  ... 
doi:10.1145/3520304.3528786 fatcat:kvf65k6xerckznllnmgt6vtfty
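A toy genetic-algorithm skeleton of the kind of search this entry studies: genomes are (depth, width) pairs, truncation selection plus mutation evolves the population, and the fitness function is a stand-in for validation accuracy, not the authors' system.

```python
# Minimal GA for architecture search: mutate (depth, width) genomes,
# keep the fittest quarter each generation, refill by mutation.
import random

def mutate(genome):
    depth, width = genome
    if random.random() < 0.5:
        depth = max(1, depth + random.choice((-1, 1)))
    else:
        width = max(8, width + random.choice((-8, 8)))
    return depth, width

def evolve(fitness, pop_size=16, generations=20):
    population = [(random.randint(1, 8), random.choice((8, 16, 32)))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]          # truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

# Stand-in fitness: prefer depth near 4 and width near 32.
print(evolve(lambda g: -abs(g[0] - 4) - abs(g[1] - 32) / 8))
```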
Showing results 1 — 15 out of 133,669 results