Latency-Aware Differentiable Neural Architecture Search [article]

Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Bowen Shi, Qi Tian, Hongkai Xiong
2020 arXiv   pre-print
Differentiable neural architecture search methods have become popular in recent years, mainly due to their low search costs and flexibility in designing the search space.  ...  This paper deals with this problem by adding a differentiable latency loss term to the optimization, so that the search process can trade off between accuracy and latency with a balancing coefficient.  ...  Latency-Aware Differentiable Architecture Search: We present a search framework which we call latency-aware differentiable neural architecture search (LA-DNAS).  ... 
arXiv:2001.06392v2 fatcat:f7yx36zbwjbbblkka3yk4jflla
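
The latency term described in this entry can be illustrated with a minimal sketch: the expected latency of a softmax-relaxed architecture is the sum, over layers, of per-operator latencies weighted by the architecture parameters, and a balancing coefficient trades it off against the task loss. The variable names, lookup values, and coefficient below are assumptions for illustration, not the LA-DNAS code.

```python
import torch

def expected_latency(alphas, op_latencies):
    """Differentiable expected latency of a relaxed architecture.

    alphas: list of [num_ops] tensors, one per searchable layer (architecture params).
    op_latencies: list of [num_ops] tensors of measured per-operator latencies (ms).
    """
    total = torch.zeros(())
    for a, lat in zip(alphas, op_latencies):
        total = total + (torch.softmax(a, dim=0) * lat).sum()
    return total

def latency_aware_loss(ce_loss, alphas, op_latencies, lam=0.1):
    # The balancing coefficient lam trades accuracy against expected latency.
    return ce_loss + lam * expected_latency(alphas, op_latencies)
```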

Hardware-aware Real-time Myocardial Segmentation Quality Control in Contrast Echocardiography [article]

Dewen Zeng, Yukun Ding, Haiyun Yuan, Meiping Huang, Xiaowei Xu, Jian Zhuang, Jingtong Hu, Yiyu Shi
2021 arXiv   pre-print
In this paper, we propose a hardware-aware neural architecture search framework for automatic myocardial segmentation and quality control of contrast echocardiography.  ...  The proposed method searches for the best neural network architecture for the segmentation module and quality prediction module under strict latency constraints.  ...  Neural Architecture Search. Neural architecture search (NAS) is a technique that automatically searches for optimal neural architectures to replace the human effort in designing handcrafted neural architectures  ... 
arXiv:2109.06909v1 fatcat:p33u6zm62javbfeqjfvm7qn3cy

EH-DNAS: End-to-End Hardware-aware Differentiable Neural Architecture Search [article]

Qian Jiang, Xiaofan Zhang, Deming Chen, Minh N. Do, Raymond A. Yeh
2021 arXiv   pre-print
In hardware-aware Differentiable Neural Architecture Search (DNAS), it is challenging to compute gradients of hardware metrics to perform architecture search.  ...  Given a desired hardware platform, we propose to learn a differentiable model predicting the end-to-end hardware performance of neural network architectures for DNAS.  ...  ConvNet design via differentiable neural architecture search.  ... 
arXiv:2111.12299v1 fatcat:hrtt7wr6ifep5nrhlruyeqajre
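
A learned, differentiable hardware-performance model of the kind this entry describes could be sketched as a small MLP that maps a relaxed architecture encoding to predicted end-to-end latency, so the prediction can be back-propagated through during the search. The encoding scheme, layer sizes, and names below are assumptions, not the EH-DNAS implementation.

```python
import torch
import torch.nn as nn

class LatencyPredictor(nn.Module):
    """MLP surrogate: relaxed architecture encoding -> predicted latency (ms)."""
    def __init__(self, encoding_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(encoding_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, arch_encoding):
        return self.net(arch_encoding).squeeze(-1)

# Usage idea: fit on measured (encoding, latency) pairs, then reuse in the search loss,
# e.g. loss = task_loss + lam * predictor(encode(alphas)).
```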

Towards Cardiac Intervention Assistance: Hardware-aware Neural Architecture Exploration for Real-Time 3D Cardiac Cine MRI Segmentation [article]

Dewen Zeng, Weiwen Jiang, Tianchen Wang, Xiaowei Xu, Haiyun Yuan, Meiping Huang, Jian Zhuang, Jingtong Hu, Yiyu Shi
2020 arXiv   pre-print
In this work, we present the first hardware-aware multi-scale neural architecture search (NAS) framework for real-time 3D cardiac cine MRI segmentation.  ...  In addition, the formulation is fully differentiable with respect to the architecture parameters, so that stochastic gradient descent (SGD) can be used for optimization to reduce the computation cost while  ...  Then a differentiable neural architecture search method is used to enable hardware-awareness for real-time 3D MRI segmentation tasks.  ... 
arXiv:2008.07071v2 fatcat:jpj7ogvtwjb2dirio5v254xa5e

FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search [article]

Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, Kurt Keutzer
2019 arXiv   pre-print
To address these, we propose a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately.  ...  Due to this, previous neural architecture search (NAS) methods are computationally expensive. ConvNet architecture optimality depends on factors such as input resolution and target devices.  ...  To address the above problems, we propose to use differentiable neural architecture search (DNAS) to discover hardware-aware efficient ConvNets.  ... 
arXiv:1812.03443v3 fatcat:eelcfkt3hzfyzjj7cfo6ydn2sa
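
One common way to realize this kind of gradient-based search over per-layer block choices is a Gumbel-softmax relaxation combined with a lookup table of measured per-block latencies, roughly as sketched below. The candidate blocks and latency numbers are placeholders, not FBNet's released search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SearchableLayer(nn.Module):
    """One searchable layer: candidate blocks mixed by Gumbel-softmax samples."""
    def __init__(self, blocks, block_latencies_ms):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        self.register_buffer("lat", torch.tensor(block_latencies_ms))
        self.theta = nn.Parameter(torch.zeros(len(blocks)))  # architecture logits

    def forward(self, x, tau=1.0):
        m = F.gumbel_softmax(self.theta, tau=tau)              # soft one-hot sample
        out = sum(mi * blk(x) for mi, blk in zip(m, self.blocks))
        exp_latency = (m * self.lat).sum()                     # differentiable latency
        return out, exp_latency
```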

S3NAS: Fast NPU-aware Neural Architecture Search Methodology [article]

Jaeseong Lee, Duseok Kang, Soonhoi Ha
2020 arXiv   pre-print
For a fast neural architecture search, we apply a modified Single-Path NAS technique to the proposed supernet structure.  ...  In this paper, we present a fast NPU-aware NAS methodology, called S3NAS, to find a CNN architecture with higher accuracy than the existing ones under a given latency constraint.  ...  They also design a differentiable latency-aware loss function to consider hardware latency in the search algorithm.  ... 
arXiv:2009.02009v1 fatcat:lfhojdc3nbhwtpxefycax4y5ea
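
The modified Single-Path NAS technique mentioned in this entry builds on the idea of a shared "superkernel" whose larger kernel choice is gated by a differentiable threshold. The sketch below is a minimal illustration of that idea under toy initialization, not the S3NAS implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperKernelConv(nn.Module):
    """A 5x5 conv whose effective kernel can collapse to its 3x3 core.

    The outer ring of the 5x5 kernel is gated by a sigmoid indicator comparing
    the ring's norm against a learned threshold, keeping the kernel-size choice
    differentiable in the Single-Path-NAS style.
    """
    def __init__(self, cin, cout):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(cout, cin, 5, 5) * 0.01)
        self.threshold = nn.Parameter(torch.tensor(0.0))  # learned switch point

    def forward(self, x):
        core_mask = torch.zeros_like(self.weight)
        core_mask[:, :, 1:4, 1:4] = 1.0           # 3x3 core of the 5x5 kernel
        outer = self.weight * (1.0 - core_mask)   # ring unique to the 5x5 kernel
        gate = torch.sigmoid(outer.norm() - self.threshold)
        effective = self.weight * core_mask + gate * outer
        return F.conv2d(x, effective, padding=2)
```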

RHNAS: Realizable Hardware and Neural Architecture Search [article]

Yash Akhauri, Adithya Niranjan, J. Pablo Muñoz, Suvadeep Banerjee, Abhijit Davare, Pasquale Cocchini, Anton A. Sorokin, Ravi Iyer, Nilesh Jain
2021 arXiv   pre-print
RHNAS is a method that combines reinforcement learning for hardware optimization with differentiable neural architecture search.  ...  The rapidly evolving field of Artificial Intelligence necessitates automated approaches to co-design neural network architecture and neural accelerators to maximize system efficiency and address productivity  ...  We propose a method that integrates a reinforcement learning-based hardware optimizer with differentiable neural architecture search.  ... 
arXiv:2106.09180v1 fatcat:j7zj3qvzjrgbvp3dh2tyg5snai
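
The reinforcement-learning half of such a co-search can be illustrated with a toy REINFORCE loop that samples an accelerator configuration, scores it with a latency model, and heavily penalizes unrealizable designs. The configuration space, latency model, and realizability check below are all assumptions for the sketch, not RHNAS itself.

```python
import torch
import torch.nn as nn

PE_CHOICES = [64, 128, 256, 512]  # toy accelerator knob: number of processing elements

class AccelPolicy(nn.Module):
    """Categorical policy over accelerator configurations."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(len(PE_CHOICES)))

    def sample(self):
        dist = torch.distributions.Categorical(logits=self.logits)
        idx = dist.sample()
        return idx.item(), dist.log_prob(idx)

def reward(pe_count, network_macs=1e9, area_budget=256):
    realizable = pe_count <= area_budget           # toy realizability constraint
    latency_ms = network_macs / (pe_count * 1e6)   # toy latency model
    return -latency_ms if realizable else -1e3     # heavy penalty if unrealizable

policy = AccelPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=0.05)
for _ in range(200):
    idx, logp = policy.sample()
    loss = -logp * reward(PE_CHOICES[idx])         # REINFORCE gradient estimator
    opt.zero_grad(); loss.backward(); opt.step()
```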

HAO: Hardware-aware neural Architecture Optimization for Efficient Inference [article]

Zhen Dong, Yizhao Gao, Qijing Huang, John Wawrzynek, Hayden K.H. So, Kurt Keutzer
2021 arXiv   pre-print
Differing from existing hardware-aware neural architecture search (NAS) algorithms that rely solely on expensive learning-based approaches, our work incorporates integer programming into the search  ...  However, this process remains challenging due to the intractable search space of neural network architectures and hardware accelerator implementation.  ...  Based on our hardware latency model and network accuracy predictor, we propose a hardware-aware neural architecture optimization (HAO) method to generate Pareto-optimal DNN designs to run on embedded FPGAs  ... 
arXiv:2104.12766v1 fatcat:wvpt6sil4zhf5dknqhv5zj76lu
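
The interplay of a latency model, an accuracy predictor, and integer programming can be approximated with a knapsack-style dynamic program: pick one configuration per layer to maximize predicted accuracy under a latency budget. The toy interface below is an assumption made for illustration, not the HAO tool flow.

```python
def select_configs(layer_options, budget_ms, step_ms=0.5):
    """layer_options: per layer, a list of (latency_ms, predicted_score) choices.

    Returns the best total predicted score reachable within the latency budget,
    using a dynamic program over a discretized latency axis.
    """
    slots = int(budget_ms / step_ms) + 1
    NEG = float("-inf")
    best = [NEG] * slots
    best[0] = 0.0
    for options in layer_options:
        new = [NEG] * slots
        for used, score_so_far in enumerate(best):
            if score_so_far == NEG:
                continue
            for lat, score in options:
                nxt = used + int(round(lat / step_ms))
                if nxt < slots:
                    new[nxt] = max(new[nxt], score_so_far + score)
        best = new
    return max(best)

# Example: two layers, each with a fast/low-score and a slow/high-score option.
print(select_configs([[(1.0, 0.6), (2.0, 0.8)], [(1.0, 0.5), (3.0, 0.9)]], budget_ms=4.0))
```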

FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search

Bichen Wu, Kurt Keutzer, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Figure 1. Differentiable neural architecture search (DNAS) for ConvNet design. DNAS explores a layer-wise space in which each layer of a ConvNet can choose a different block.  ...  To address these, we propose a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately.  ...  To address the above problems, we propose to use differentiable neural architecture search (DNAS) to discover hardware-aware efficient ConvNets.  ... 
doi:10.1109/cvpr.2019.01099 dblp:conf/cvpr/WuDZWSWTVJK19 fatcat:eyxymk3kwnc3jnonjunf7nmahy

Rethinking Co-design of Neural Architectures and Hardware Accelerators [article]

Yanqi Zhou, Xuanyi Dong, Berkin Akin, Mingxing Tan, Daiyi Peng, Tianjian Meng, Amir Yazdanbakhsh, Da Huang, Ravi Narayanaswami, James Laudon
2021 arXiv   pre-print
Our experiments show that the joint search method consistently outperforms previous platform-aware neural architecture search, manually crafted models, and the state-of-the-art EfficientNet on all latency targets.  ...  Neural architectures and hardware accelerators have been two driving forces for the progress in deep learning.  ...  Platform-aware neural architecture search (NAS) (Wu et al., 2019; Cai et al., 2019) optimizes the neural architectures for a target inference device.  ... 
arXiv:2102.08619v1 fatcat:rah3mjwrdna5hk7hqdipuynmva
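
A joint search space over the network and the accelerator can be illustrated, under toy models, as sampling both together and scoring them with a single objective. The parameter names, latency model, and accuracy proxy below are assumptions made only for the sketch, not the paper's co-design framework.

```python
import random

def latency_ms(arch, accel):
    macs = arch["depth"] * arch["width"] ** 2 * 1e3           # toy MAC count
    throughput = accel["pes"] * 2e6                           # toy MACs per ms
    return macs / throughput + accel["buffer_kb"] * 0.0005    # toy memory overhead

def accuracy_proxy(arch):
    return 1.0 - 1.0 / (arch["depth"] * arch["width"])        # toy monotone proxy

def joint_random_search(n_trials=1000, target_ms=0.3):
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = {"depth": random.choice([8, 12, 16, 20]),
                "width": random.choice([16, 24, 32, 48])}
        accel = {"pes": random.choice([64, 128, 256]),
                 "buffer_kb": random.choice([128, 256, 512])}
        # Reward trades accuracy against exceeding the latency target.
        score = accuracy_proxy(arch) - max(0.0, latency_ms(arch, accel) - target_ms)
        if score > best_score:
            best, best_score = (arch, accel), score
    return best

print(joint_random_search())
```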

Effective Algorithm-Accelerator Co-design for AI Solutions on Edge Devices [article]

Cong Hao, Yao Chen, Xiaofan Zhang, Yuhong Li, Jinjun Xiong, Wen-mei Hwu, Deming Chen
2020 arXiv   pre-print
and efficient DNN and accelerator co-search method.  ...  High-quality AI solutions require joint optimization of AI algorithms, such as deep neural networks (DNNs), and their hardware accelerators.  ...  Figure 1: (a) The general hardware-aware neural architecture search: the hardware accelerator design space is fixed.  ... 
arXiv:2010.07185v2 fatcat:erfoyh536rgmlklbr4pvrfuds4

EDD: Efficient Differentiable DNN Architecture and Implementation Co-search for Embedded AI Solutions [article]

Yuhong Li, Cong Hao, Xiaofan Zhang, Xinheng Liu, Yao Chen, Jinjun Xiong, Wen-mei Hwu, Deming Chen
2020 arXiv   pre-print
In this work, we are the first to propose a fully simultaneous, efficient differentiable DNN architecture and implementation co-search (EDD) methodology.  ...  Each model produced by EDD achieves similar accuracy as the best existing DNN models searched by neural architecture search (NAS) methods on ImageNet, but with superior performance obtained within 12 GPU-hour  ...  We provided an initial discussion of the potential of simultaneous neural architecture and implementation co-search in [17] , called NAIS.  ... 
arXiv:2005.02563v1 fatcat:felajve7h5bdtmjgdm3vg2ln2u

Fast Hardware-Aware Neural Architecture Search [article]

Li Lyna Zhang, Yuqing Yang, Yuhang Jiang, Wenwu Zhu, Yunxin Liu
2020 arXiv   pre-print
This paper addresses the hardware diversity challenge in Neural Architecture Search (NAS).  ...  Designing accurate and efficient convolutional neural architectures for a vast amount of hardware is challenging because hardware designs are complex and diverse.  ...  Introduction: Neural Architecture Search (NAS) is a powerful mechanism to automatically generate efficient Convolutional Neural Networks (CNN) without requiring huge manual effort from human experts to design  ... 
arXiv:1910.11609v3 fatcat:7qdyft7iqzdqjatslcbzglqfau

LC-NAS: Latency Constrained Neural Architecture Search for Point Cloud Networks [article]

Guohao Li, Mengmeng Xu, Silvio Giancola, Ali Thabet, Bernard Ghanem
2020 arXiv   pre-print
Recent progress in automatic Neural Architecture Search (NAS) minimizes the human effort in network design and optimizes high-performing architectures.  ...  We implement a novel latency constraint formulation to trade off between accuracy and latency in our architecture search.  ...  We dub this new framework Latency Constrained Neural Architecture Search (LC-NAS).  ... 
arXiv:2008.10309v1 fatcat:miuacrmhnzg6jhrxnonvbjnpca
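
A latency constraint of this kind is often folded into the training objective as a one-sided penalty that only activates when the budget is exceeded. A minimal sketch, assuming a differentiable latency estimate lat_pred of the sampled architecture (names and the penalty weight are assumptions):

```python
import torch

def latency_constrained_loss(task_loss, lat_pred, budget_ms, alpha=1.0):
    """One-sided penalty: architectures under the latency budget are not penalized."""
    penalty = torch.clamp(lat_pred - budget_ms, min=0.0)
    return task_loss + alpha * penalty
```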

HALF: Holistic Auto Machine Learning for FPGAs [article]

Jonas Ney, Dominik Loroch, Vladimir Rybalkin, Nico Weber, Jens Krüger, Norbert Wehn
2021 arXiv   pre-print
It comprises optimizations starting from a hardware-aware topology search for DNNs down to the final optimized implementation for a given FPGA platform.  ...  Deep Neural Networks (DNNs) are capable of solving complex problems in domains related to embedded systems, such as image and natural language processing.  ...  The most prominent gradient-based method is differentiable architecture search (DARTS) [9], which searches for subgraphs in a larger, differentiable supergraph.  ... 
arXiv:2106.14771v1 fatcat:76ahgbifd5antgtrzk6kvgq6gi
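
The differentiable supergraph mentioned above boils down to mixed operations like the following sketch, where each edge computes a softmax-weighted sum of candidate operations over learnable architecture parameters. The candidate ops shown are placeholders, not the DARTS search space.

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """One supergraph edge: a softmax-weighted sum of candidate operations."""
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))  # architecture params

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Usage sketch with placeholder operations.
ops = [nn.Conv2d(16, 16, 3, padding=1), nn.Conv2d(16, 16, 5, padding=2), nn.Identity()]
edge = MixedOp(ops)
y = edge(torch.randn(1, 16, 8, 8))
```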
Showing results 1–15 of 4,284