
Generalization Guarantees for Neural Architecture Search with Train-Validation Split [article]

Samet Oymak, Mingchen Li, Mahdi Soltanolkotabi
2022 arXiv   pre-print
Neural Architecture Search (NAS) is a popular method for automatically designing optimized architectures for high-performance deep learning.  ...  Extensions to transfer learning are developed in terms of the mismatch between training & validation distributions. (2) We establish generalization bounds for NAS problems with an emphasis on an activation  ...  (TVO) Generalization with Train-Validation Split: In this section we state our generic generalization bounds for bilevel optimization problems with train-validation split.  ...
arXiv:2104.14132v3 fatcat:ftc4qbwzkvbn7liexpaohiwkyy
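The train-validation split studied in this entry is the usual bilevel formulation of NAS; as a hedged sketch in our own notation (not necessarily the paper's), the architecture parameters α are selected on the validation split while the weights are fit on the training split:

```latex
\min_{\alpha}\; \mathcal{L}_{\mathrm{val}}\bigl(w^\star(\alpha),\,\alpha\bigr)
\qquad \text{s.t.} \qquad
w^\star(\alpha) \;=\; \arg\min_{w}\; \mathcal{L}_{\mathrm{train}}(w,\,\alpha)
```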

JMSNAS: Joint Model Split and Neural Architecture Search for Learning over Mobile Edge Networks [article]

Yuqing Tian, Zhaoyang Zhang, Zhaohui Yang, Qianqian Yang
2021 arXiv   pre-print
In this paper, a joint model split and neural architecture search (JMSNAS) framework is proposed to automatically generate and deploy a DNN model over a mobile edge network.  ...  Considering both the computing and communication resource constraints, a computational graph search problem is formulated to find the multi-split points of the DNN model, and then the model is trained  ...  split and neural architecture search (JMSNAS) framework for deploying an ML model over the MEC.  ... 
arXiv:2111.08206v1 fatcat:6xzcdzzqbbfthmnwjo6hrtocze
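As a rough illustration of the split-point idea in the snippet above (hypothetical helper, not the JMSNAS code), a sequential model can be partitioned at chosen layer indices so that each segment runs on a different edge node:

```python
# Hypothetical sketch: partition a sequential DNN at chosen split points so
# each segment can be deployed on a different edge device.
import torch.nn as nn

def split_model(layers, split_points):
    """Return one nn.Sequential segment per device, cut at the given indices."""
    bounds = [0] + sorted(split_points) + [len(layers)]
    return [nn.Sequential(*layers[a:b]) for a, b in zip(bounds[:-1], bounds[1:])]

layers = [nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10)]
segments = split_model(layers, split_points=[2, 4])  # three segments, three devices
```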

Differentially-private Federated Neural Architecture Search [article]

Ishika Singh, Haoyi Zhou, Kunlin Yang, Meng Ding, Bill Lin, Pengtao Xie
2020 arXiv   pre-print
Neural architecture search, which aims to automatically search for architectures (e.g., convolution, max pooling) of neural networks that maximize validation performance, has achieved remarkable progress  ...  To address this problem, we propose federated neural architecture search (FNAS), where different parties collectively search for a differentiable architecture by exchanging gradients of architecture variables  ...  To this end, we study federated neural architecture search (FNAS), where multiple parties collaboratively search for an optimal neural architecture without exchanging sensitive data with each other for  ... 
arXiv:2006.10559v2 fatcat:7xpjs4bc6rf5nj6ljle4mnzdpm
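A minimal sketch of the gradient-exchange step described above, assuming simple averaging of per-party architecture gradients and an optional Gaussian perturbation for the differentially-private variant (the actual FNAS/DP mechanism may differ):

```python
# Toy aggregation of architecture-variable gradients from several parties;
# noise_std > 0 mimics a Gaussian-mechanism style perturbation (assumption).
import numpy as np

def aggregate_alpha_grads(local_grads, lr=0.01, noise_std=0.0, seed=0):
    rng = np.random.default_rng(seed)
    avg = np.mean(local_grads, axis=0)
    if noise_std > 0:
        avg = avg + rng.normal(0.0, noise_std, size=avg.shape)
    return -lr * avg  # update applied to the shared architecture variables

grads = [np.full(4, 0.2), np.full(4, 0.4)]  # two parties' gradients (toy values)
delta_alpha = aggregate_alpha_grads(grads, noise_std=0.1)
```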

Evaluating volumetric and slice-based approaches for COVID-19 detection in chest CTs

Radu Miron, Cosmin Moisii, Sergiu Dinu, Mihaela Elena Breaban
2021 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)  
on the validation subset.  ...  Our best results reach a macro F1 score of 92.34% on the validation subset and 90.06% on the test set, obtained with the volumetric approach, which was ranked second in the competition.  ...  Thus, one model is trained on the official train-validation split for 150 epochs, whereas the other 4 models are trained on 4 in-house generated folds for 100 epochs.  ...
doi:10.1109/iccvw54120.2021.00065 fatcat:aommhqelsnb45hpokfnfkv3e74
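The macro F1 quoted above is the unweighted mean of per-class F1 scores; a toy computation (not the competition data):

```python
# Macro F1 = mean of per-class F1 scores, computed on a toy 3-class example.
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(f1_score(y_true, y_pred, average="macro"))  # averages the three per-class F1s
```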

Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets [article]

Hayeon Lee, Eunyoung Hyung, Sung Ju Hwang
2021 arXiv   pre-print
In this paper, we propose an efficient NAS framework that is trained once on a database consisting of datasets and pretrained networks and can rapidly search for a neural architecture for a novel dataset  ...  mostly tackled the optimization of searching for the network architecture for a single task (dataset), which does not generalize well across multiple tasks (datasets).  ...  Acknowledgements This work was conducted by Center for Applied Research in Artificial Intelligence (CARAI) grant funded by DAPA and ADD (UD190031RD).  ... 
arXiv:2107.00860v1 fatcat:nmi3nfy7cjhz5kdrk37gh67hou

Differentiable Neural Architecture Search with Morphism-based Transformable Backbone Architectures [article]

Renlong Jie, Junbin Gao
2021 arXiv   pre-print
This study aims at making the architecture search process more adaptive for one-shot or online training.  ...  This study introduces a growing mechanism for differentiable neural architecture search based on network morphism.  ...  The second sub-optimization is generally a normal training process of model parameters with training data, while the first one is optimized on the validation set and can be done with finite  ...
arXiv:2106.07211v1 fatcat:7idiaskyirgnznh73lvgwngofi
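The two sub-optimizations mentioned in the snippet can be pictured as DARTS-style alternating updates; a toy, hedged sketch (not the paper's morphism-based procedure) in which α is updated on validation data and w on training data:

```python
# Toy alternating optimization: architecture weights alpha on the validation
# split, model weights w on the training split. Illustrative only.
import torch

w = torch.randn(3, requires_grad=True)        # model parameters
alpha = torch.zeros(2, requires_grad=True)    # architecture mixing weights
opt_w = torch.optim.SGD([w], lr=0.1)
opt_a = torch.optim.Adam([alpha], lr=0.01)

def forward(x):
    mix = torch.softmax(alpha, dim=0)
    return mix[0] * (x @ w) + mix[1] * (x @ w).relu()   # two candidate ops

x_tr, y_tr = torch.randn(8, 3), torch.randn(8)
x_va, y_va = torch.randn(8, 3), torch.randn(8)

for _ in range(10):
    # architecture step on the validation split
    opt_a.zero_grad(); ((forward(x_va) - y_va) ** 2).mean().backward(); opt_a.step()
    # weight step on the training split
    opt_w.zero_grad(); ((forward(x_tr) - y_tr) ** 2).mean().backward(); opt_w.step()
```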

Continual Learning with Guarantees via Weight Interval Constraints [article]

Maciej Wołczyk, Karol J. Piczak, Bartosz Wójcik, Łukasz Pustelnik, Paweł Morawiecki, Jacek Tabor, Tomasz Trzciński, Przemysław Spurek
2022 arXiv   pre-print
We introduce a new training paradigm that enforces interval constraints on neural network parameter space to control forgetting.  ...  any firm guarantees that network performance will not deteriorate uncontrollably over time.  ...  The main idea is to constrain the parameter search within the set of parameters for which any particular solution is valid for the previous tasks.  ... 
arXiv:2206.07996v1 fatcat:yiqoubkgwnazjmstjakotbvmpq
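One simple way to picture "interval constraints on the parameter space" is projecting each weight back into a per-parameter interval after every update; a hedged toy sketch that only conveys the flavour, not the paper's interval-arithmetic training:

```python
# Toy projection of parameters onto per-parameter intervals [low, high] after
# an update, so weights never leave a region known to work for previous tasks.
import numpy as np

def project_to_intervals(params, low, high):
    return np.clip(params, low, high)

low, high = np.array([-0.5, 0.0, 0.2]), np.array([0.5, 0.3, 0.9])
params = np.array([0.7, -0.1, 0.4])               # after some gradient step
params = project_to_intervals(params, low, high)  # -> [0.5, 0.0, 0.4]
```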

DC-NAS: Divide-and-Conquer Neural Architecture Search [article]

Yunhe Wang, Yixing Xu, Dacheng Tao
2020 arXiv   pre-print
Neural architecture search is a way of automatically exploring optimal deep neural networks in a given huge search space.  ...  In contrast to conventional methods, here we present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.  ...  Thus, for a given parameter K, the RS method randomly selects 156 + K different neural architectures from the search space, and the architectures are fully trained to obtain the validation result.  ...
arXiv:2005.14456v1 fatcat:4pledk4txjefhg4tu6gz2h5v7i
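The random-search (RS) baseline described in the snippet amounts to sampling a fixed number of architectures, fully training each, and keeping the best validation result; a hedged sketch with hypothetical stand-in helpers:

```python
# Random-search baseline: sample N architectures, evaluate each, keep the best.
# `search_space` and `train_and_validate` are hypothetical stand-ins.
import random

def random_search(search_space, num_candidates, train_and_validate, seed=0):
    rng = random.Random(seed)
    candidates = [rng.choice(search_space) for _ in range(num_candidates)]
    scored = [(train_and_validate(arch), arch) for arch in candidates]
    return max(scored, key=lambda s: s[0])  # best (validation accuracy, architecture)

# e.g. num_candidates = 156 + K, matching the comparison quoted above
```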

NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size [article]

Xuanyi Dong, Lu Liu, Katarzyna Musial, Bogdan Gabrys
2021 IEEE Transactions on Pattern Analysis and Machine Intelligence   accepted
for both aspects of the neural architectures.  ...  NATS-Bench includes the search space of 15,625 neural cell candidates for architecture topology and 32,768 for architecture size on three datasets.  ...  We hope to use the proposed splits to unify the training, validation and test sets for a fairer comparison.  ... 
doi:10.1109/tpami.2021.3054824 pmid:33497330 arXiv:2009.00437v5 fatcat:skeqlmissvdxfiaclgyfr5gujy
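The two search-space sizes quoted above factor as simple powers; a hedged reading (candidate operations per edge for the topology space, candidate channel counts per stage for the size space):

```python
print(5 ** 6)  # 15625 topology candidates: 5 operations on each of 6 edges
print(8 ** 5)  # 32768 size candidates: 8 channel choices in each of 5 stages
```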

Towards Privacy-Preserving Neural Architecture Search [article]

Fuyi Wang and Leo Yu Zhang and Lei Pan and Shengshan Hu and Robin Doss
2022 arXiv   pre-print
To address these issues, we propose a privacy-preserving neural architecture search (PP-NAS) framework based on secure multi-party computation to protect users' data and the model's parameters/hyper-parameters  ...  However, massive user data collected for training deep learning models raises privacy concerns and increases the difficulty of manually adjusting the network structure.  ...  NAS Search for CNN: The NAS paradigm aims to automatically search for the optimal network architecture that leads to the best validation accuracy or efficiency under time and resource constraints.  ...
arXiv:2204.10958v1 fatcat:ta7neeob4newdgridsdvcxz73i

Neural Differential Equations for Single Image Super-resolution [article]

Teven Le Scao
2020 arXiv   pre-print
The adjoint method previously proposed for gradient estimation has no theoretical stability guarantees; we find a practical case where this makes it unusable, and show that discrete sensitivity analysis  ...  Inspired by variational methods for image restoration relying on partial differential equations, we choose to benchmark several forms of Neural DEs and backpropagation methods on single image super-resolution  ...  Architecture search on the BSD dataset: As this smaller dataset allows an extensive grid architecture search, we identify the best-performing differential system on the BSD dataset before moving on to DIV2K  ...
arXiv:2005.00865v1 fatcat:xrelxw76vbg33nehz5lnjhn37i
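"Discrete sensitivity analysis" here can be read as backpropagating through the solver's discretisation instead of integrating a separate adjoint ODE; a toy sketch on linear dynamics (not the paper's super-resolution model):

```python
# Backprop through an explicit-Euler discretisation of dy/dt = -theta * y:
# the autograd graph records every solver step, so the gradient is exact for
# the discretised trajectory (the "discrete sensitivity").
import torch

theta = torch.tensor(0.5, requires_grad=True)
y = torch.tensor(1.0)
dt, steps = 0.1, 10
for _ in range(steps):
    y = y + dt * (-theta * y)   # one Euler step
y.backward()
print(theta.grad)
```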

Single Path One-Shot Neural Architecture Search with Uniform Sampling [article]

Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, Jian Sun
2020 arXiv   pre-print
We revisit the one-shot Neural Architecture Search (NAS) paradigm and analyze its advantages over existing NAS approaches.  ...  All architectures (and their weights) are trained fully and equally. Comprehensive experiments verify that our approach is flexible and effective. It is easy to train and fast to search.  ...  We randomly split the original training set into two parts: 50,000 images for validation (exactly 50 images per class) and the rest as the training set.  ...
arXiv:1904.00420v4 fatcat:inacuybierd3bboe5aguisrf74
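A hedged sketch of the uniform single-path sampling idea (toy choice blocks, not the released SPOS code): at each supernet training step one candidate op is drawn uniformly for every choice block.

```python
# Uniform single-path sampling over a toy supernet of choice blocks.
import random
import torch
import torch.nn as nn

choice_blocks = [
    [nn.Conv2d(16, 16, 3, padding=1), nn.Conv2d(16, 16, 5, padding=2), nn.Identity()]
    for _ in range(4)
]

def sample_path(rng):
    return [rng.randrange(len(block)) for block in choice_blocks]

def forward(x, path):
    for block, choice in zip(choice_blocks, path):
        x = block[choice](x)
    return x

out = forward(torch.randn(2, 16, 8, 8), sample_path(random.Random(0)))
```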

Competitive neural trees for pattern classification

S. Behnke, N.B. Karayiannis
1998 IEEE Transactions on Neural Networks  
This paper introduces different search methods for the CNeT, which are utilized for training as well as for recall.  ...  This paper presents competitive neural trees (CNeTs) for pattern classification. The CNeT contains m-ary nodes and grows during learning by using inheritance to initialize new nodes.  ...  The neural tree architectures reported in the literature can be grouped according to the learning paradigm employed for their training.  ...
doi:10.1109/72.728387 pmid:18255815 fatcat:i2qpxp6icvglrpaa63thih5n4q

A scalable constructive algorithm for the optimization of neural network architectures [article]

Massimiliano Lupo Pasini, Junqi Yin, Ying Wai Li, Markus Eisenbach
2021 arXiv   pre-print
The proposed algorithm, called Greedy Search for Neural Network Architecture, aims to determine a neural network with a minimal number of layers that is at least as performant as neural networks of the same  ...  by the selected neural network architecture, and time-to-solution for the hyperparameter optimization to complete.  ...  Vladimir Protopopescu for his valuable feedback in the preparation of this manuscript and three anonymous reviewers for their very useful comments and suggestions.  ...
arXiv:1909.03306v3 fatcat:5z6yt2i4pvhvpo2pgokt5gvfsa
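The constructive idea in the snippet (keep adding layers only while validation performance improves) can be sketched as follows, with `build_and_evaluate` a hypothetical stand-in for training and scoring a network of a given depth:

```python
# Greedy constructive depth search: grow the network one layer at a time and
# stop when an extra layer no longer improves validation performance.
# `build_and_evaluate(depth)` is a hypothetical stand-in returning a score.
def greedy_depth_search(build_and_evaluate, max_layers=20, tol=1e-4):
    best_depth, best_score = 1, build_and_evaluate(1)
    for depth in range(2, max_layers + 1):
        score = build_and_evaluate(depth)
        if score <= best_score + tol:   # no meaningful gain: keep the smaller net
            break
        best_depth, best_score = depth, score
    return best_depth, best_score
```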

Simplifying Architecture Search for Graph Neural Network [article]

Huan Zhao and Lanning Wei and Quanming Yao
2020 arXiv   pre-print
To overcome these drawbacks, we propose the SNAG framework (Simplified Neural Architecture search for Graph neural networks), consisting of a novel search space and a reinforcement learning based search  ...  To obtain optimal data-specific GNN architectures, researchers turn to neural architecture search (NAS) methods, which have made impressive progress in discovering effective architectures in convolutional  ...  For all datasets, we split the nodes in all graphs into 60%, 20%, 20% for training, validation, and test. For the transductive task, we use the classification accuracy as the evaluation metric.  ...
arXiv:2008.11652v2 fatcat:7mgbknvplvcsxoh3mcs3ofhduy
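The 60/20/20 node split mentioned above can be reproduced with a simple shuffled partition; a toy sketch (indices only, not the SNAG pipeline):

```python
# Shuffle node indices and cut them into 60% train / 20% validation / 20% test.
import numpy as np

def split_nodes(num_nodes, seed=0):
    idx = np.random.default_rng(seed).permutation(num_nodes)
    n_train, n_val = int(0.6 * num_nodes), int(0.2 * num_nodes)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_nodes(2708)  # e.g. a Cora-sized graph
```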
Showing results 1–15 of 19,290 results.