651 Hits in 9.4 sec

Enabling On-Device CNN Training by Self-Supervised Instance Filtering and Error Map Pruning [article]

Yawen Wu, Zhepeng Wang, Yiyu Shi, Jingtong Hu
2020 arXiv   pre-print
To tackle this problem, we explore the computational redundancies in training and reduce the computation cost with two complementary approaches: self-supervised early instance filtering at the data level and  ...  error map pruning at the algorithm level.  ...  We propose an instance filter to predict the loss of each instance and develop a self-supervised algorithm to train the filter.  ... 
arXiv:2007.03213v1 fatcat:ecbaiqlalbcy3j6tznrkrksd54
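
The loss-prediction idea in this snippet can be sketched as follows. This is a hypothetical PyTorch illustration, not the authors' released implementation; `InstanceFilter`, `filtered_step`, and the keep ratio are illustrative names and defaults:

```python
# Hedged sketch of loss-prediction-based instance filtering: a tiny filter
# network predicts each example's loss from cheap features, the main model
# trains only on the predicted-hardest instances, and the observed losses
# supervise the filter (the "self-supervised" part). Not the paper's exact
# algorithm; names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceFilter(nn.Module):
    """Tiny network that predicts the training loss of each input image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        return self.head(self.features(x)).squeeze(1)

def filtered_step(model, filt, model_opt, filt_opt, x, y, keep_ratio=0.5):
    # Cheap pass: predict per-instance loss, keep only the hardest examples.
    with torch.no_grad():
        pred = filt(x)
    k = max(1, int(keep_ratio * x.size(0)))
    idx = pred.topk(k).indices

    # Expensive pass: train the main model on the selected instances only.
    loss = F.cross_entropy(model(x[idx]), y[idx], reduction="none")
    model_opt.zero_grad()
    loss.mean().backward()
    model_opt.step()

    # Self-supervised update: observed losses become the filter's targets.
    filt_opt.zero_grad()
    filt_loss = F.mse_loss(filt(x[idx]), loss.detach())
    filt_loss.backward()
    filt_opt.step()
    return loss.mean().item()
```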

Table of contents

2020 IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems  
Enabling On-Device CNN Training by Self-Supervised Instance Filtering and Error Map Pruning . . . . . . . . . . . Y. Wu, Z. Wang, Y. Shi, and J. Hu 3433  ... 
doi:10.1109/tcad.2020.3028016 fatcat:s4y5qe6n45hltizz3txl5g5ila

Towards Compact ConvNets via Structure-Sparsity Regularized Filter Pruning [article]

Shaohui Lin, Rongrong Ji, Yuchao Li, Cheng Deng, Xuelong Li
2019 arXiv   pre-print
Moreover, by imposing structured sparsity, online inference is extremely memory-light, since the number of filters and the number of output feature maps are reduced simultaneously.  ...  The success of convolutional neural networks (CNNs) in computer vision applications has been accompanied by a significant increase in computation and memory costs, which prohibits their usage on resource-limited  ...  For APoZ, the sparsity of feature maps is a reasonable criterion for pruning redundant filters, owing to the inherent sparsity induced in the pre-trained model by ReLU activations.  ... 
arXiv:1901.07827v2 fatcat:rh2n7f45qnhihnpy7sizoxfb6y
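
A minimal sketch of the structured-sparsity idea, using a plain group-lasso penalty over convolutional filters in PyTorch; the paper's exact regularizer and pruning rule differ, and the threshold here is an arbitrary placeholder:

```python
# Group-lasso filter regularization: each output filter is one group, so the
# penalty drives whole filters (and hence whole output feature maps) to zero.
# Illustrative only; not the paper's formulation.
import torch
import torch.nn as nn

def group_lasso_penalty(model: nn.Module) -> torch.Tensor:
    """Sum of per-filter L2 norms over all Conv2d layers."""
    terms = [m.weight.flatten(1).norm(dim=1).sum()
             for m in model.modules() if isinstance(m, nn.Conv2d)]
    return torch.stack(terms).sum()

def small_filters(conv: nn.Conv2d, threshold: float = 1e-2) -> torch.Tensor:
    """Indices of output filters whose norm has collapsed below threshold."""
    norms = conv.weight.detach().flatten(1).norm(dim=1)
    return (norms < threshold).nonzero(as_tuple=True)[0]

# Training objective: loss = task_loss + lam * group_lasso_penalty(model);
# after training, filters returned by small_filters() can be removed.
```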

Lightweight Residual Densely Connected Convolutional Neural Network [article]

Fahimeh Fooladgar, Shohreh Kasaei
2020 arXiv   pre-print
Extremely efficient convolutional neural network architectures are one of the most important requirements for limited-resource devices (such as embedded and mobile devices).  ...  The proposed method decreases the cost of the training and inference processes without any special hardware or software, simply by reducing the number of parameters and computational operations while  ...  Consequently, kernel pruning is performed as a condensation procedure governed by a condensation factor. In the second half of the training phase, the remaining fixed filters are trained.  ... 
arXiv:2001.00526v2 fatcat:y2m6gvpucraorgk5qmebv3adhi
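
A condensation-style pruning schedule, in the spirit of the snippet, might look like the sketch below. This is an assumption-laden PyTorch illustration (the `condense` helper, L1 importance criterion, and staging are all hypothetical), not the paper's procedure:

```python
# Hedged sketch of staged kernel "condensation": over the first half of
# training, each call masks out another fraction of the weakest input
# kernels, controlled by a condensation factor C; surviving kernels then
# train with fixed connectivity. Illustrative, not the paper's algorithm.
import torch
import torch.nn as nn

def condense(conv: nn.Conv2d, mask: torch.Tensor, stage: int, C: int):
    """After stage s, s/C of the input kernels are zeroed, weakest first."""
    with torch.no_grad():
        # Importance of each input kernel: L1 norm over (out, kH, kW).
        imp = (conv.weight.abs() * mask).sum(dim=(0, 2, 3))
        n_drop = stage * conv.in_channels // C
        drop = imp.argsort()[:n_drop]      # indices of the weakest kernels
        mask[:, drop] = 0.0
        conv.weight.mul_(mask)             # freeze pruned kernels at zero
    return mask

# Usage: mask = torch.ones_like(conv.weight); reapply conv.weight.data.mul_(mask)
# after every optimizer step so pruned kernels stay zero.
```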

A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions [article]

Rahul Mishra, Hari Prabhat Gupta, Tanima Dutta
2020 arXiv   pre-print
However, the colossal computation, energy, and storage requirements of DNN models make their deployment prohibitive on resource-constrained IoT devices.  ...  ., network pruning, sparse representation, bits precision, knowledge distillation, and miscellaneous, based upon the mechanism incorporated for compressing the DNN model.  ...  In other words, with self-knowledge distillation the model improves its own performance by training rapidly on, and employing, its more precise predictions.  ... 
arXiv:2010.03954v1 fatcat:n65hoshh3bbsvfgndq7qnqb3sm
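
For the knowledge-distillation category the survey names, the standard Hinton-style loss is a useful reference point; in self-knowledge distillation the teacher logits come from the model's own earlier predictions rather than a separate network. A minimal sketch:

```python
# Standard knowledge-distillation objective: match the teacher's
# temperature-softened distribution plus the usual hard-label loss.
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps gradient magnitudes comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```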

Artificial Intelligence for 5G Wireless Systems: Opportunities, Challenges, and Future Research Directions [article]

Youness Arjoune, Saleh Faruque
2020 arXiv   pre-print
In this respect, the aim of this paper is to survey AI in 5G wireless communication systems by discussing many case studies and the associated challenges, and by shedding new light on future research directions  ...  The advent of wireless communication systems augurs new cutting-edge technologies, including self-driving vehicles, unmanned aerial systems, autonomous robots, the Internet-of-Things, and virtual reality  ...  Examples of techniques under this category are clustering techniques such as K-means and self-organizing maps.  ... 
arXiv:2009.04943v1 fatcat:noedzavffng6fo35zzbax3bfsa
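
As a concrete instance of the clustering techniques the survey lists, here is a plain-NumPy K-means, purely illustrative (e.g., for grouping users or channel states in a wireless setting):

```python
# Textbook Lloyd's K-means: alternate nearest-center assignment and
# cluster-mean updates until the centers stop moving.
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute centers as cluster means (keep old center if empty).
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```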

Edge Intelligence: Architectures, Challenges, and Applications [article]

Dianlei Xu, Tong Li, Yong Li, Xiang Su, Sasu Tarkoma, Tao Jiang, Jon Crowcroft, Pan Hui
2020 arXiv   pre-print
Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in locations close to where the data is captured, based on artificial intelligence.  ...  We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed  ...  pruning (speed up inference: 10× faster, lossy); [238] CNN, global filter pruning (accelerate CNN: 70% FLOPs reduction, lossless); [239] CNN, network pruning (energy efficiency: 3.7× energy reduction)  ... 
arXiv:2003.12172v2 fatcat:xbrylsvb7bey5idirunacux6pe
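
The "global filter pruning" row in that table refers to ranking filters across all layers by a single criterion. A hedged sketch of one common variant (L1-norm ranking; the cited works' exact criteria differ, and `global_filter_prune` is a hypothetical helper):

```python
# Global filter pruning sketch: score every conv filter in the network by
# its L1 norm, then zero the globally weakest fraction. Illustrative only.
import torch
import torch.nn as nn

def global_filter_prune(model: nn.Module, fraction: float = 0.3):
    scores, owners = [], []
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            s = m.weight.detach().abs().sum(dim=(1, 2, 3))  # per-filter L1
            scores.append(s)
            owners += [(m, i) for i in range(len(s))]
    alls = torch.cat(scores)
    cutoff = alls.kthvalue(max(1, int(fraction * len(alls)))).values
    with torch.no_grad():
        for (m, i), s in zip(owners, alls):
            if s <= cutoff:
                m.weight[i].zero_()          # zero the whole output filter
                if m.bias is not None:
                    m.bias[i] = 0.0
    return model
```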

The Lottery Ticket Hypothesis for Object Recognition [article]

Sharath Girish, Shishira R. Maiya, Kamal Gupta, Hao Chen, Larry Davis, Abhinav Shrivastava
2021 arXiv   pre-print
In this work, we perform the first empirical study investigating LTH for model pruning in the context of object detection, instance segmentation, and keypoint estimation.  ...  This makes it exceedingly difficult to deploy these systems on low power embedded devices.  ...  This work was partially supported by DARPA GARD #HR00112020007 and a gift from Facebook AI.  ... 
arXiv:2012.04643v2 fatcat:twt3iwxairadpnqhuo2laxrz54
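
The lottery-ticket procedure the paper studies is iterative magnitude pruning with rewinding. A compact sketch, where `train` is a placeholder for your own training loop that applies the masks:

```python
# Lottery-ticket search: train, prune the smallest surviving weights,
# rewind survivors to their initial values, repeat. Sketch only; the paper's
# exact schedule and per-layer handling differ.
import copy
import torch
import torch.nn as nn

def find_ticket(model: nn.Module, train, rounds=3, prune_per_round=0.2):
    init_state = copy.deepcopy(model.state_dict())      # theta_0 for rewinding
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()
             if p.dim() > 1}                            # prune weight tensors only
    for _ in range(rounds):
        train(model, masks)                             # caller applies masks
        for n, p in model.named_parameters():
            if n not in masks:
                continue
            alive = p.detach().abs()[masks[n].bool()]
            thresh = alive.quantile(prune_per_round)    # cut weakest survivors
            masks[n] *= (p.detach().abs() > thresh).float()
        model.load_state_dict(init_state)               # rewind to init
        with torch.no_grad():
            for n, p in model.named_parameters():
                if n in masks:
                    p.mul_(masks[n])
    return model, masks
```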

Balancing Specialization, Generalization, and Compression for Detection and Tracking [article]

Dotan Kaufman, Koby Bibas, Eran Borenstein, Michael Chertok, Tal Hassner
2019 arXiv   pre-print
We apply our method to existing tracker and detector models. We report detection results on the VIRAT and CAVIAR data sets.  ...  Our tests on the OTB2015 benchmark show that applying compression at test time actually improves tracking performance.  ...  Moreover, our novel loss enables the use of self-supervision in the restricted domain, which improves the general detector by up to 1.3× in average precision.  ... 
arXiv:1909.11348v1 fatcat:wyeiokps2nhwxbzdkdagsnvylm

NetTailor: Tuning the Architecture, Not Just the Weights

Pedro Morgado, Nuno Vasconcelos
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Besides minimizing classification error, the new network is trained to mimic the internal activations of a strong unconstrained CNN, and minimize its complexity by the combination of 1) a soft-attention  ...  graduate fellowship SFRH/BD/109135/2015 from the Portuguese Ministry of Sciences and Education, NRI Grants IIS-1546305 and IIS-1637941, and NVIDIA GPU donations. 1 Source code and pre-trained models available  ...  Then, we train the augmented CNN with a loss that penalizes both classification error and complexity.  ... 
doi:10.1109/cvpr.2019.00316 dblp:conf/cvpr/MorgadoV19 fatcat:a3j3olxw2vcobmjb2jfvs64zty
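
The three-part objective described in the snippet (classification error, mimicking the teacher's internal activations, and a complexity penalty on soft block-selection weights) can be sketched as below. Variable names are illustrative assumptions, not NetTailor's API:

```python
# Hedged sketch of a NetTailor-style objective: task loss + feature
# mimicking against a strong teacher + a sparsity penalty on soft-attention
# gates, so low-gate blocks can later be removed. Illustrative only.
import torch
import torch.nn.functional as F

def nettailor_style_loss(logits, labels, student_acts, teacher_acts,
                         block_gates, lam_mimic=1.0, lam_complex=0.1):
    cls = F.cross_entropy(logits, labels)
    # Match internal activations layer by layer (teacher is frozen).
    mimic = sum(F.mse_loss(s, t.detach())
                for s, t in zip(student_acts, teacher_acts))
    # Gates near zero mark blocks that can be pruned after training.
    complexity = torch.stack([g.abs().sum() for g in block_gates]).sum()
    return cls + lam_mimic * mimic + lam_complex * complexity
```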

Machine Learning for Microcontroller-Class Hardware – A Review [article]

Swapnil Sayan Saha, Sandeep Singh Sandha, Mani Srivastava
2022 arXiv   pre-print
This paper highlights the unique requirements of enabling onboard machine learning for microcontroller-class devices.  ...  We characterize a closed-loop, widely applicable workflow of machine learning model development for microcontroller-class devices and show that several classes of applications adopt a specific instance  ...  On-device Training: On-device training frameworks generally divide the learning process into three parts.  ... 
arXiv:2205.14550v3 fatcat:y272riitirhwfgfiotlwv5i7nu

A Survey on Green Deep Learning [article]

Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, Lei Li
2021 arXiv   pre-print
Massive computations not only have a surprisingly large carbon footprint but also have negative effects on research inclusiveness and on deployment in real-world applications.  ...  Green deep learning is an increasingly active research field that calls on researchers to pay attention to energy usage and carbon emissions during model training and inference.  ...  Here we take CV and NLP as examples to review recent self-supervised pre-training models.  ... 
arXiv:2111.05193v2 fatcat:t2blz24y2jakteeeawqqogbkpy

A Survey on Machine Learning-Based Performance Improvement of Wireless Networks: PHY, MAC and Network Layer

Merima Kulin, Tarik Kazaz, Eli De Poorter, Ingrid Moerman
2021 Electronics  
First, the related work and paper contributions are discussed, followed by the necessary background on data-driven approaches and machine learning to help non-machine-learning experts understand  ...  We first categorize these works into radio analysis, MAC analysis, and network prediction approaches, followed by subcategories within each.  ...  on the instances from the training set, i = 1, ..., m.  ... 
doi:10.3390/electronics10030318 fatcat:p6jslz26dvfvbpnqzmrpptloim

Towards Performing Image Classification and Object Detection with Convolutional Neural Networks in Autonomous Driving Systems: A Survey (December 2021)

Tolga Turay, Tanya Vladimirova
2022 IEEE Access  
Thus, weight matrices are turned into sparse ones. Parameter pruning is quite robust across various layer settings and can support training from scratch or from pre-trained models.  ...  For instance, while VGGNet [75] employs 3×3 filters, Inception-v1 [29] makes use of filters of different sizes, such as 1×1, 3×3, and 5×5.  ...  His research focuses on both computer vision tasks for autonomous vehicles with deep learning techniques and optimization methods.  ... 
doi:10.1109/access.2022.3147495 fatcat:i4xtly3gizck3eorcqknbz4xdm
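
The "weight matrices turned into sparse ones" mechanism is simple magnitude pruning. A small self-contained sketch (the `magnitude_prune` helper and 90% sparsity are illustrative choices):

```python
# Magnitude-based parameter pruning: zero the smallest-magnitude entries so
# the matrix becomes sparse, then store it in PyTorch's sparse COO format.
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float = 0.9):
    """Zero out the smallest entries so `sparsity` fraction become zero."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return weight
    thresh = weight.abs().flatten().kthvalue(k).values
    pruned = torch.where(weight.abs() > thresh, weight,
                         torch.zeros_like(weight))
    return pruned.to_sparse()  # compact storage for the now-sparse matrix

W = torch.randn(256, 256)
W_sparse = magnitude_prune(W, 0.9)
print(W_sparse.values().numel(), "nonzeros out of", W.numel())
```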

Efficient Processing of Deep Neural Networks: A Tutorial and Survey [article]

Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, Joel Emer
2017 arXiv   pre-print
While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity.  ...  It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations  ...  ACKNOWLEDGMENTS Funding provided by DARPA YFA, MIT CICS, and gifts from Nvidia and Intel.  ... 
arXiv:1703.09039v2 fatcat:fpqfxu5zufdixfeb2ymsktlpwm
Showing results 1–15 out of 651 results