8 Hits in 4.8 sec

NAS-Bench-NLP: Neural Architecture Search Benchmark for Natural Language Processing [article]

Nikita Klyuchnikov, Ilya Trofimov, Ekaterina Artemova, Mikhail Salnikov, Maxim Fedorov, Evgeny Burnaev
2020 arXiv   pre-print
In this work, we step outside the computer vision domain by leveraging the language modeling task, which is the core of natural language processing (NLP).  ...  Neural Architecture Search (NAS) is a promising and rapidly evolving research area.  ...  We acknowledge the usage of the Skoltech CDISE HPC cluster Zhores for obtaining the results presented in this paper.  ... 
arXiv:2006.07116v1
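The entry above describes a tabular NAS benchmark: results for every architecture in the search space are precomputed once, so a search method queries a lookup table instead of training the network. A minimal sketch of that querying pattern, with all keys, metric names, and values illustrative (this is not the actual NAS-Bench-NLP API):

```python
# Toy stand-in for a tabular NAS benchmark: precomputed metrics keyed by
# an architecture encoding. A search method does a dictionary lookup
# instead of hours of training. All names/values are illustrative.
table = {
    "lstm|h256|l2": {"val_perplexity": 98.4, "train_seconds": 5210.0},
    "gru|h512|l1":  {"val_perplexity": 104.1, "train_seconds": 3880.0},
}

def query(arch_key: str) -> dict:
    """Return precomputed metrics for an architecture encoding."""
    return table[arch_key]

# "Search" reduces to ranking stored results.
best = min(table, key=lambda k: table[k]["val_perplexity"])
print(best)  # prints lstm|h256|l2
```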

Learning Where To Look – Generative NAS is Surprisingly Efficient [article]

Jovita Lukasik, Steffen Jung, Margret Keuper
2022 arXiv   pre-print
The efficient, automated search for well-performing neural architectures (NAS) has drawn increasing attention in the recent past.  ...  To this aim, surrogate models embed architectures in a latent space and predict their performance, while generative models for neural architectures enable optimization-based search within the latent space  ...  NAS-bench-NLP [27] provides a search space for natural language processing.  ... 
arXiv:2203.08734v1
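The snippet above mentions surrogate models that embed architectures in a latent space and predict their performance. A hedged sketch of that general idea under toy assumptions (synthetic latent codes and a linear surrogate, not the paper's generative model):

```python
import numpy as np

# Surrogate-based search in a latent space, in miniature: fit a cheap
# predictor on a few evaluated architectures, then rank unevaluated
# candidates by predicted score. Data and embedding are synthetic.
rng = np.random.default_rng(0)
X_seen = rng.normal(size=(32, 6))              # latent codes of evaluated archs
w_true = np.array([1.0, -2.0, 0.5, 0.0, 0.3, -0.1])
y_seen = X_seen @ w_true                       # their (synthetic) accuracies

# Fit a linear surrogate by least squares.
w_hat, *_ = np.linalg.lstsq(X_seen, y_seen, rcond=None)

# Score 100 new candidates without training any of them.
candidates = rng.normal(size=(100, 6))
best = candidates[np.argmax(candidates @ w_hat)]
```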

NAS-Bench-Suite: NAS Evaluation is (Now) Surprisingly Easy [article]

Yash Mehta, Colin White, Arber Zela, Arjun Krishnakumar, Guri Zabergja, Shakiba Moradian, Mahmoud Safari, Kaicheng Yu, Frank Hutter
2022 arXiv   pre-print
The release of tabular benchmarks, such as NAS-Bench-101 and NAS-Bench-201, has significantly lowered the computational overhead for conducting scientific research in neural architecture search (NAS).  ...  Recently, several new NAS benchmarks have been introduced that cover significantly larger search spaces over a wide range of tasks, including object detection, speech recognition, and natural language  ...  We thank Danny Stoll and Falak Vora for their helpful contributions to this project.  ... 
arXiv:2201.13396v2
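A benchmark suite of the kind described above typically exposes one interface so that the same experiment script runs unchanged across benchmarks. A minimal sketch of that pattern; class and method names are illustrative, not the NAS-Bench-Suite API:

```python
# One abstract interface, multiple benchmark backends: the search driver
# never needs to know whether results come from a table or a model.
class Benchmark:
    def query(self, arch):
        raise NotImplementedError

class TabularBenchmark(Benchmark):
    def __init__(self, table):
        self.table = table
    def query(self, arch):
        return self.table[arch]

class ConstantBaseline(Benchmark):
    def query(self, arch):
        return 0.5  # stand-in for e.g. a surrogate model's prediction

def run_search(bench, archs):
    """The same driver works for any Benchmark implementation."""
    return max(archs, key=bench.query)

tab = TabularBenchmark({"a": 0.91, "b": 0.88})
print(run_search(tab, ["a", "b"]))  # prints a
```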

How Powerful are Performance Predictors in Neural Architecture Search? [article]

Colin White, Arber Zela, Binxin Ru, Yang Liu, Frank Hutter
2021 arXiv   pre-print
Early methods in the rapidly developing field of neural architecture search (NAS) required fully training thousands of neural networks.  ...  To reduce this extreme computational cost, dozens of techniques have since been proposed to predict the final performance of neural architectures.  ...  [figure residue removed: comparison of predictors (GP, Sparse GP, XGBoost, LGBoost, NGBoost, DNGO, LcSVR, Bayesian linear regression) on NAS-Bench-NLP under Kendall Tau, Sparse Kendall Tau, Pearson, and Spearman correlations]  ... 
arXiv:2104.01177v2
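Performance predictors like those surveyed above are commonly scored by rank correlation between predicted and true accuracies (Kendall Tau being one of the metrics the paper reports). A self-contained sketch of that evaluation, with made-up accuracy values for illustration:

```python
from itertools import combinations

def kendall_tau(pred, true):
    """Kendall rank correlation: how well a performance predictor orders
    architectures relative to their true accuracies (+1 = same order)."""
    concordant = discordant = 0
    for (p1, t1), (p2, t2) in combinations(zip(pred, true), 2):
        s = (p1 - p2) * (t1 - t2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(pred) * (len(pred) - 1) / 2
    return (concordant - discordant) / n_pairs

# A predictor on a different scale but with one ranking mistake:
true_acc = [0.91, 0.88, 0.93, 0.85, 0.90]
pred_acc = [0.60, 0.58, 0.70, 0.50, 0.55]
print(kendall_tau(pred_acc, true_acc))  # → 0.8
```

Rank correlation is the natural metric here because NAS only needs the predictor to order architectures correctly, not to estimate accuracy on the right scale.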

NAS-HPO-Bench-II: A Benchmark Dataset on Joint Optimization of Convolutional Neural Network Architecture and Training Hyperparameters [article]

Yoichi Hirose, Nozomu Yoshinari, Shinichi Shirakawa
2021 arXiv   pre-print
The benchmark datasets for neural architecture search (NAS) have been developed to alleviate the computationally expensive evaluation process and ensure a fair comparison.  ...  Building the benchmark dataset for joint optimization of architecture and training hyperparameters is essential to further NAS research.  ...  Frank Hutter as the authors of NAS-HPO-Bench for allowing us to call our paper NAS-HPO-Bench-II.  ... 
arXiv:2110.10165v1
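Joint optimization of the kind this benchmark targets means the search space is the cross product of architectural choices and training hyperparameters, and a tabular benchmark must tabulate every combination. A toy illustration of how quickly that space is enumerated (values illustrative, not the dataset's):

```python
from itertools import product

# A joint architecture/hyperparameter space: every (operation,
# learning rate, batch size) combination gets a precomputed result.
ops = ["conv3x3", "conv1x1", "skip"]
lrs = [0.1, 0.01]
batch_sizes = [64, 128]

configs = list(product(ops, lrs, batch_sizes))
print(len(configs))  # → 12 joint configurations to tabulate
```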

Surrogate NAS Benchmarks: Going Beyond the Limited Search Spaces of Tabular NAS Benchmarks [article]

Arber Zela, Julien Siems, Lucas Zimmer, Jovita Lukasik, Margret Keuper, Frank Hutter
2022 arXiv   pre-print
The most significant barrier to the advancement of Neural Architecture Search (NAS) is its demand for large computational resources, which hinders scientifically sound empirical evaluations of NAS methods  ...  To overcome this fundamental limitation, we propose a methodology to create cheap NAS surrogate benchmarks for arbitrary search spaces.  ...  NAS-Bench-NLP (Klyuchnikov et al., 2020) was recently proposed as a tabular benchmark for NAS in the Natural Language Processing domain.  ... 
arXiv:2008.09777v4
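The contrast the abstract draws is between a tabular benchmark, which stores every architecture's result, and a surrogate benchmark, which fits a model on a sampled subset and predicts results for the rest of an arbitrarily large search space. A toy 1-nearest-neighbour "surrogate" standing in for the paper's actual models (all encodings and numbers illustrative):

```python
def encode(arch):
    """Toy encoding of an architecture: (num_layers, hidden_size / 1000)."""
    return (arch["layers"], arch["hidden"] / 1000.0)

sampled = [  # a small set of architectures evaluated by actually training
    ({"layers": 1, "hidden": 256}, 0.88),
    ({"layers": 2, "hidden": 512}, 0.91),
    ({"layers": 3, "hidden": 768}, 0.90),
]

def surrogate_query(arch):
    """Predict performance of an unseen architecture from its nearest
    evaluated neighbour in encoding space."""
    x = encode(arch)
    def dist(item):
        a, _ = item
        e = encode(a)
        return (e[0] - x[0]) ** 2 + (e[1] - x[1]) ** 2
    return min(sampled, key=dist)[1]

# Architectures never trained can still be "queried":
print(surrogate_query({"layers": 2, "hidden": 480}))  # → 0.91
```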

Learning Versatile Neural Architectures by Propagating Network Codes [article]

Mingyu Ding, Yuqi Huo, Haoyu Lu, Linjie Yang, Zhe Wang, Zhiwu Lu, Jingdong Wang, Ping Luo
2022 arXiv   pre-print
We first introduce a unified design space for multiple tasks and build a multitask NAS benchmark (NAS-Bench-MR) on many widely used datasets, including ImageNet, Cityscapes, KITTI, and HMDB51.  ...  architecture design, i.e., multitask neural architectures and architecture transferring between different tasks.  ...  Zhiwu Lu was supported by National Natural Science Foundation of China (61976220).  ... 
arXiv:2103.13253v2

Generic Neural Architecture Search via Regression [article]

Yuhong Li, Cong Hao, Pan Li, Jinjun Xiong, Deming Chen
2021 arXiv   pre-print
However, extensive experiments have shown that prominent neural architectures, such as ResNet in computer vision and LSTM in natural language processing, are generally good at extracting patterns from  ...  In this paper, we attempt to answer two fundamental questions related to NAS. (1) Is it necessary to use the performance of specific downstream tasks to evaluate and search for good neural architectures  ...  We thank IBM-Illinois Center for Cognitive Computing Systems Research (C3SR) for supporting this research. We thank all reviewers and the area chair for valuable discussions and feedback.  ... 
arXiv:2108.01899v2
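The question posed above — whether downstream-task performance is needed to rank architectures — motivates proxy scores computed from cheap synthetic tasks. A rough sketch of that idea under strong simplifying assumptions: the "architectures" are tiny one-hidden-layer networks, and the proxy is the loss after a few gradient steps on an unlabeled synthetic regression target. This is an illustration of the general regression-proxy idea, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 8))
target = np.sin(X @ rng.normal(size=8))  # synthetic signal, no task labels

def proxy_score(width, steps=50, lr=0.1):
    """Score a toy 'architecture' (hidden width) by its final loss after
    a few gradient-descent steps on the synthetic regression target;
    lower loss suggests a better pattern extractor."""
    w1 = rng.normal(size=(8, width)) * 0.1
    w2 = rng.normal(size=(width, 1)) * 0.1
    y = target[:, None]
    for _ in range(steps):
        h = np.tanh(X @ w1)            # hidden activations
        err = h @ w2 - y               # prediction error
        g2 = h.T @ err / len(X)        # gradient w.r.t. output weights
        g1 = X.T @ ((err @ w2.T) * (1 - h ** 2)) / len(X)  # backprop through tanh
        w1 -= lr * g1
        w2 -= lr * g2
    return float(np.mean((np.tanh(X @ w1) @ w2 - y) ** 2))
```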