Filters

On Implicit Filter Level Sparsity in Convolutional Neural Networks [article]

Dushyant Mehta, Kwang In Kim, Christian Theobalt
2019 arXiv   pre-print
We investigate filter level sparsity that emerges in convolutional neural networks (CNNs) which employ Batch Normalization and ReLU activation, and are trained with adaptive gradient descent techniques  ...  Lastly, we show that the implicit sparsity can be harnessed for neural network speedup at par or better than explicit sparsification / pruning approaches, with no modifications to the typical training  ...  Supplementary Document: On Implicit Filter Level Sparsity In Convolutional Neural Networks In this supplemental document, we provide additional experiments that show how filter level sparsity manifests  ... 
arXiv:1811.12495v2 fatcat:7vj37vbzrffipl3tafovfreggy
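
As an illustration of the phenomenon the entry above describes, the sketch below inspects the Batch Normalization scale parameters of a Conv-BN-ReLU block and lists filters whose scales have collapsed toward zero, which is one way the emergent sparsity can be harvested for pruning. This is a minimal sketch, not the authors' code; the threshold value and the simulated "dead" filters are assumptions.

    # Minimal sketch: detect implicitly "dead" filters via near-zero BatchNorm scales.
    # The 1e-3 threshold and the simulated collapsed scales are assumptions.
    import torch
    import torch.nn as nn

    def sparse_filter_indices(bn: nn.BatchNorm2d, threshold: float = 1e-3):
        """Return indices of filters whose learned BN scale is ~0 (implicitly pruned)."""
        gamma = bn.weight.detach().abs()
        return (gamma < threshold).nonzero(as_tuple=True)[0].tolist()

    bn = nn.BatchNorm2d(32)
    with torch.no_grad():
        bn.weight[::4] = 1e-5   # simulate scales driven to zero during training

    dead = sparse_filter_indices(bn)
    print(f"{len(dead)}/{bn.num_features} filters could be dropped without retraining")

In practice such filters would be removed from the Conv-BN pair (and the corresponding input channels of the next layer) to obtain the speedups discussed above.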

On Implicit Filter Level Sparsity in Convolutional Neural Networks

Dushyant Mehta, Kwang In Kim, Christian Theobalt
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
We investigate filter level sparsity that emerges in convolutional neural networks (CNNs) which employ Batch Normalization and ReLU activation, and are trained with adaptive gradient descent techniques  ...  Lastly, we show that the implicit sparsity can be harnessed for neural network speedup at par or better than explicit sparsification / pruning approaches, without needing any modifications to the typical  ...  Supplementary Document: On Implicit Filter Level Sparsity In Convolutional Neural Networks In this supplemental document, we provide additional experiments that show how filter level sparsity manifests  ... 
doi:10.1109/cvpr.2019.00061 dblp:conf/cvpr/MehtaKT19 fatcat:qmezbqojfvahpdrabwz4ziebfm

Implicit Filter Sparsification In Convolutional Neural Networks [article]

Dushyant Mehta, Kwang In Kim, Christian Theobalt
2019 arXiv   pre-print
We show implicit filter level sparsity manifests in convolutional neural networks (CNNs) which employ Batch Normalization and ReLU activation, and are trained with adaptive gradient descent techniques  ...  Emergence of, and the subsequent pruning of selective features is observed to be one of the contributing mechanisms, leading to feature sparsity at par or better than certain explicit sparsification /  ...  Introduction In this article we discuss the findings from (Mehta et al., 2019) regarding filter level sparsity which emerges in certain types of feedforward convolutional neural networks.  ... 
arXiv:1905.04967v1 fatcat:2zqa53evrrgiblqbqxx24fvaea

Exploring the Granularity of Sparsity in Convolutional Neural Networks

Huizi Mao, Song Han, Jeff Pool, Wenshuo Li, Xingyu Liu, Yu Wang, William J. Dally
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
Our analysis, which is based on the framework of a recent sparse convolutional neural network (SCNN) accelerator, further demonstrates that it saves 30%–35% of memory references compared with fine-grained  ...  In this paper we quantitatively measure the accuracy-sparsity relationship with different granularity.  ...  Range of Granularity: Sparsity in deep neural networks, explicit or implicit, has been studied in a lot of literature.  ... 
doi:10.1109/cvprw.2017.241 dblp:conf/cvpr/MaoHPLL0D17 fatcat:b4lxb2fsmbeijaxdejbwe2ji5i
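
To make the granularity distinction in the entry above concrete, the sketch below applies magnitude-based masks at three granularities (individual weights, 2-D kernels, whole filters) to the same convolution weight tensor. The tensor shape and the 50% keep ratio are illustrative assumptions.

    # Illustrative sketch of pruning granularities on a conv weight tensor of
    # shape (out_channels, in_channels, kH, kW). Shapes and ratios are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 32, 3, 3))
    keep = 0.5  # keep the largest-magnitude 50% at each granularity

    def mask_from_scores(scores, keep_ratio):
        k = int(keep_ratio * scores.size)
        thresh = np.sort(scores, axis=None)[-k]
        return scores >= thresh

    fine_mask   = mask_from_scores(np.abs(W), keep)                      # per weight
    kernel_mask = mask_from_scores(np.abs(W).sum(axis=(2, 3)), keep)     # per 2-D kernel
    filter_mask = mask_from_scores(np.abs(W).sum(axis=(1, 2, 3)), keep)  # per filter

    W_fine   = W * fine_mask
    W_kernel = W * kernel_mask[:, :, None, None]
    W_filter = W * filter_mask[:, None, None, None]
    print([float((x == 0).mean()) for x in (W_fine, W_kernel, W_filter)])

Coarser masks (kernel, filter) trade some accuracy for regular memory access patterns, which is the accelerator-side benefit the paper quantifies.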

Combining Knowledge with Deep Convolutional Neural Networks for Short Text Classification

Jin Wang, Zhongyuan Wang, Dawei Zhang, Jun Yan
2017 Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence  
In this paper, we propose a framework based on convolutional neural networks that combines explicit and implicit representations of short text for classification.  ...  Text classification is a fundamental task in NLP applications. Most existing work relied on either explicit or implicit text representation to address this problem.  ...  Conclusion In this paper, we propose a novel model that takes advantage of both explicit and implicit representations for short text classification.  ... 
doi:10.24963/ijcai.2017/406 dblp:conf/ijcai/WangWZY17 fatcat:3fplpob2ergire45p72b5ygnqu
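
The sketch below shows the general idea of fusing an implicit (word-embedding CNN) channel with an explicit (concept/knowledge) channel in a single short-text classifier. The vocabulary sizes, dimensions, and fusion by concatenation are assumptions for illustration, not the paper's exact architecture.

    # Sketch: fuse an implicit word-embedding CNN branch with an explicit
    # concept-embedding branch for short-text classification. Sizes and the
    # concatenation-based fusion are illustrative assumptions.
    import torch
    import torch.nn as nn

    class TwoChannelTextCNN(nn.Module):
        def __init__(self, vocab=10000, concepts=2000, dim=128, classes=5):
            super().__init__()
            self.word_emb = nn.Embedding(vocab, dim)
            self.concept_emb = nn.EmbeddingBag(concepts, dim)   # bag of linked concepts
            self.conv = nn.Conv1d(dim, 100, kernel_size=3, padding=1)
            self.fc = nn.Linear(100 + dim, classes)

        def forward(self, word_ids, concept_ids):
            x = self.word_emb(word_ids).transpose(1, 2)          # (B, dim, seq)
            implicit = torch.relu(self.conv(x)).max(dim=2).values
            explicit = self.concept_emb(concept_ids)             # (B, dim)
            return self.fc(torch.cat([implicit, explicit], dim=1))

    model = TwoChannelTextCNN()
    logits = model(torch.randint(0, 10000, (4, 20)), torch.randint(0, 2000, (4, 6)))
    print(logits.shape)  # torch.Size([4, 5])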

ConvSVD++: A Hybrid Deep CF Recommender Model using Convolutional Neural Network

Mohamed Grida, Lamiaa Fayed, Mohamed Hassan
2020 Journal of Computer Science  
It proposes a hybrid deep CF recommender model called ConvSVD++ that tightly integrates a Convolutional Neural Network (CNN) and Singular Value Decomposition (SVD++).  ...  The proposed model incorporates items' content and implicit user feedback along with explicit item-user interactions to enhance prediction accuracy and handle the sparsity problem.  ...  Lamiaa Fayed: Participated in all experiments, coordinated the data analysis. Mohamed Hassan: Revised the manuscript, designed the research plan.  ... 
doi:10.3844/jcssp.2020.1697.1708 fatcat:s4wifrdvuzghbpfccnvsyc5equ
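
For orientation, the sketch below shows an SVD++-style prediction rule augmented with a CNN-derived item content vector, which is the kind of hybrid the entry above describes. The additive fusion of q_i and the CNN embedding, and all symbols, are illustrative assumptions rather than ConvSVD++'s exact formulation.

    # Sketch of an SVD++-style prediction augmented with a CNN item-content
    # vector. The additive fusion (q_i + cnn_i) is an illustrative assumption.
    import numpy as np

    def predict(mu, b_u, b_i, p_u, q_i, y_N, cnn_i):
        """r_hat = mu + b_u + b_i + (q_i + cnn_i) . (p_u + |N(u)|^-0.5 * sum_j y_j)"""
        implicit = y_N.sum(axis=0) / np.sqrt(len(y_N)) if len(y_N) else 0.0
        return mu + b_u + b_i + (q_i + cnn_i) @ (p_u + implicit)

    rng = np.random.default_rng(1)
    d = 16
    print(predict(3.5, 0.1, -0.2,
                  rng.normal(size=d), rng.normal(size=d),
                  rng.normal(size=(7, d)),   # y_j for 7 implicitly rated items
                  rng.normal(size=d)))       # CNN embedding of the item's content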

SparseRT: Accelerating Unstructured Sparsity on GPUs for Deep Learning Inference [article]

Ziheng Wang
2020 arXiv   pre-print
In recent years, there has been a flurry of research in deep neural network pruning and compression. Early approaches prune weights individually.  ...  For sparse 3x3 convolutions, we show speedups of over 5x on use cases in ResNet-50.  ...  In convolution operations, entire channels are pruned at once from filters [21] . In recurrent neural networks, entire neurons are removed from the network [43] .  ... 
arXiv:2008.11849v1 fatcat:x4usrp5ocrhifkuicim3nujtlm
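
As a CPU-side illustration of the computation such kernels accelerate, the sketch below runs a 90%-unstructured-sparse weight matrix through SciPy's CSR format and checks it against the dense result. The sparsity level and sizes are assumptions, and this does not reflect SparseRT's GPU code generation.

    # Unstructured-sparsity inference sketch: a 90%-sparse weight matrix applied
    # in CSR form. Sizes and sparsity are assumptions; SparseRT itself generates
    # specialized GPU kernels rather than using SciPy.
    import numpy as np
    from scipy import sparse

    rng = np.random.default_rng(0)
    W = rng.normal(size=(1024, 1024))
    W[rng.random(W.shape) < 0.9] = 0.0           # ~90% of weights zeroed
    W_csr = sparse.csr_matrix(W)

    x = rng.normal(size=(1024, 64))               # a batch of 64 activation vectors
    dense_out = W @ x
    sparse_out = W_csr @ x
    print(np.allclose(dense_out, sparse_out), W_csr.nnz / W.size)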

Local Convolutions Cause an Implicit Bias towards High Frequency Adversarial Examples [article]

Josue Ortega Caro, Yilong Ju, Ryan Pyle, Sourav Dey, Wieland Brendel, Fabio Anselmi, Ankit Patel
2021 arXiv   pre-print
Inspired by theoretical work on linear full-width convolutional models, we hypothesize that the local (i.e. bounded-width) convolutional operations commonly used in current neural networks are implicitly  ...  Adversarial Attacks are still a significant challenge for neural networks.  ...  Recent work has uncovered an implicit bias towards sparsity in the frequency domain in deep linear convolutional networks (Gunasekar et al., 2018) .  ... 
arXiv:2006.11440v4 fatcat:l5kykgnqlngyrgodgqlcxxwrne
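
One simple way to probe the hypothesis above is to measure how much of a perturbation's energy lies above a radial frequency cutoff in the 2-D Fourier spectrum. The sketch below does exactly that; the cutoff value and the synthetic perturbation are placeholders, since in practice the perturbation would come from an adversarial attack on a trained CNN.

    # Sketch: fraction of a perturbation's spectral energy above a radial cutoff.
    # The cutoff and the random "perturbation" are illustrative assumptions.
    import numpy as np

    def high_freq_energy_fraction(delta, cutoff=0.25):
        spec = np.abs(np.fft.fftshift(np.fft.fft2(delta))) ** 2
        h, w = delta.shape
        yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)   # normalized radial frequency
        return spec[radius > cutoff].sum() / spec.sum()

    delta = np.random.default_rng(0).normal(size=(32, 32))
    print(high_freq_energy_fraction(delta))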

Neural Personalized Ranking via Poisson Factor Model for Item Recommendation

Yonghong Yu, Li Zhang, Can Wang, Rong Gao, Weibin Zhao, Jing Jiang
2019 Complexity  
In this paper, we propose a neural personalized ranking model for collaborative filtering with implicit frequency feedback.  ...  NRPFM applies the ranking-based Poisson factor model on neural networks, which endows the linear ranking-based Poisson factor model with a high level of nonlinearity.  ...  Acknowledgments This work is supported in part by the Natural Science  ... 
doi:10.1155/2019/3563674 fatcat:rc4kaow6fzg5dpppucsjdcewsy
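
The sketch below shows the kind of pairwise ranking objective such a model optimizes over implicit feedback, using a small neural scorer on user and item embeddings. The MLP scorer and the BPR-style softplus loss are illustrative assumptions, not the paper's exact Poisson-based formulation.

    # Sketch: pairwise ranking over implicit feedback with a small neural scorer.
    # The MLP scorer and BPR-style softplus loss are illustrative assumptions.
    import torch
    import torch.nn as nn

    n_users, n_items, d = 100, 500, 32
    user_emb = nn.Embedding(n_users, d)
    item_emb = nn.Embedding(n_items, d)
    scorer = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, 1))

    def score(u, i):
        return scorer(torch.cat([user_emb(u), item_emb(i)], dim=1)).squeeze(1)

    u = torch.randint(0, n_users, (16,))
    pos = torch.randint(0, n_items, (16,))   # items the user interacted with
    neg = torch.randint(0, n_items, (16,))   # sampled unobserved items
    loss = nn.functional.softplus(score(u, neg) - score(u, pos)).mean()
    loss.backward()
    print(float(loss))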

Neural Network Compression Framework for fast model inference [article]

Alexander Kozlov and Ivan Lazarevich and Vasily Shamporov and Nikolay Lyalyushkin and Yury Gorbachev
2020 arXiv   pre-print
In this work we present a new framework for neural network compression with fine-tuning, which we call the Neural Network Compression Framework (NNCF).  ...  It leverages recent advances in various network compression methods and implements some of them, such as sparsity, quantization, and binarization.  ...  NNCF implements a set of filter pruning algorithms for convolutional neural networks.  ... 
arXiv:2002.08679v4 fatcat:5syyycecjfbptnberplaxa3nha
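
For context, the sketch below shows the general shape of wrapping a PyTorch model with one of NNCF's sparsity algorithms via a JSON-like config. Entry points and configuration keys vary across NNCF releases, so the names used here are assumptions to verify against the installed version's documentation.

    # Sketch of wrapping a model with NNCF's magnitude-sparsity algorithm.
    # Entry points and config keys differ between NNCF releases; the names
    # below are assumptions to check against the version in use.
    import torchvision
    from nncf import NNCFConfig
    from nncf.torch import create_compressed_model

    nncf_config = NNCFConfig.from_dict({
        "input_info": {"sample_size": [1, 3, 224, 224]},
        "compression": {"algorithm": "magnitude_sparsity",
                        "sparsity_init": 0.1},
    })

    model = torchvision.models.resnet18()
    compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)
    # ...fine-tune compressed_model as usual; compression_ctrl drives the sparsity schedule.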

Neural Similarity Learning [article]

Weiyang Liu, Zhen Liu, James M. Rehg, Le Song
2019 arXiv   pre-print
Inner product-based convolution has been the founding stone of convolutional neural networks (CNNs), enabling end-to-end learning of visual representation.  ...  Further, we consider the neural similarity learning (NSL) in order to learn the neural similarity adaptively from training data.  ...  In a high-level sense, CNNs with dynamic neural similarity share the same spirits with HyperNetworks [18] and dynamic filter networks [28] .  ... 
arXiv:1910.13003v3 fatcat:q3oo3de6b5et3mer75pyykpt4m
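
To illustrate the idea of replacing the inner product in convolution with a learned similarity, the sketch below scores image patches with a bilinear form wᵀ M patch instead of the plain inner product wᵀ patch. The bilinear parameterization is one simple choice made here for illustration, not the paper's exact similarity family.

    # Sketch: convolution where each filter response is a learned bilinear
    # similarity w^T M patch rather than the plain inner product w^T patch.
    # The bilinear parameterization is an illustrative assumption.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BilinearSimConv2d(nn.Module):
        def __init__(self, in_ch, out_ch, k):
            super().__init__()
            d = in_ch * k * k
            self.weight = nn.Parameter(torch.randn(out_ch, d) * 0.05)
            self.M = nn.Parameter(torch.eye(d))   # identity => ordinary convolution
            self.k = k

        def forward(self, x):
            patches = F.unfold(x, self.k)                     # (B, d, L)
            w = self.weight @ self.M                          # (out_ch, d)
            out = w @ patches                                 # (B, out_ch, L)
            b, _, h, width = x.shape
            return out.view(b, -1, h - self.k + 1, width - self.k + 1)

    y = BilinearSimConv2d(3, 8, 3)(torch.randn(2, 3, 16, 16))
    print(y.shape)  # torch.Size([2, 8, 14, 14])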

Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks

Jack Turner, José Cano, Valentin Radu, Elliot J. Crowley, Michael O'Boyle, Amos Storkey
2018 2018 IEEE International Symposium on Workload Characterization (IISWC)  
Convolutional Neural Networks (CNNs) are extremely computationally demanding, presenting a large barrier to their deployment on resource-constrained devices.  ...  In this paper we unify the two viewpoints in a Deep Learning Inference Stack and take an across-stack approach by implementing and evaluating the most common neural network compression techniques (weight  ...  Intuitively this should control the sparsity level, although this is not necessarily implicit.  ... 
doi:10.1109/iiswc.2018.8573503 dblp:conf/iiswc/TurnerCRCOS18 fatcat:hxxhuovm6fhyhheg55vtwyvsoi

Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks [article]

Jack Turner, José Cano, Valentin Radu, Elliot J. Crowley, Michael O'Boyle, Amos Storkey
2018 arXiv   pre-print
Convolutional Neural Networks (CNNs) are extremely computationally demanding, presenting a large barrier to their deployment on resource-constrained devices.  ...  In this paper we unify the two viewpoints in a Deep Learning Inference Stack and take an across-stack approach by implementing and evaluating the most common neural network compression techniques (weight  ...  Intuitively this should control the sparsity level, although this is not necessarily implicit.  ... 
arXiv:1809.07196v1 fatcat:wxevr5hprveiro5lg2aie5nnem

Finding Storage- and Compute-Efficient Convolutional Neural Networks

Daniel Becking, Simon Wiedemann, Klaus-Robert Müller
2020 Zenodo  
Convolutional neural networks (CNNs) have taken the spotlight in a variety of machine learning applications.  ...  Ternary networks are not only efficient in terms of storage but also in terms of computational complexity. By explicitly boosting sparsity we reach further efficiency gains.  ...  Convolutional Neural Networks (CNNs) are often used in image analysis.  ... 
doi:10.5281/zenodo.5501151 fatcat:zjh4kngadrgtdgzniphqrvfndq
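
The sketch below shows one common way to ternarize a weight tensor to {-α, 0, +α}, which is the kind of quantization the entry above refers to and which directly yields sparsity through the zero level. The 0.7·mean(|w|) threshold follows the widely used Ternary Weight Networks heuristic and is an assumption, not necessarily the scheme used in this particular paper.

    # Sketch: ternarize weights to {-alpha, 0, +alpha}. The 0.7*mean(|w|)
    # threshold is the common TWN heuristic and is an assumption here.
    import numpy as np

    def ternarize(w):
        delta = 0.7 * np.mean(np.abs(w))          # sparsity-inducing threshold
        mask = np.abs(w) > delta
        alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
        return alpha * np.sign(w) * mask, float((~mask).mean())

    w = np.random.default_rng(0).normal(size=(64, 32, 3, 3))
    w_t, zero_fraction = ternarize(w)
    print(sorted(np.unique(np.round(w_t, 4))), zero_fraction)

Raising the threshold boosts the zero fraction explicitly, trading accuracy for further storage and compute savings.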

Implicit Discourse Relation Classification via Multi-Task Neural Networks [article]

Yang Liu, Sujian Li, Xiaodong Zhang, Zhifang Sui
2016 arXiv   pre-print
To exploit the combination of different discourse corpora, we design related discourse classification tasks specific to a corpus, and propose a novel Convolutional Neural Network embedded multi-task learning  ...  The experimental results on the PDTB implicit discourse relation classification task demonstrate that our model achieves significant gains over baseline systems.  ...  Acknowledgments We thank all the anonymous reviewers for their insightful comments on this paper.  ... 
arXiv:1603.02776v1 fatcat:pa3zrmscm5atvahspahatalpkq
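
A minimal sketch of the multi-task pattern described above: a shared convolutional sentence-pair encoder feeds separate classification heads, one per discourse corpus or task. The dimensions, number of tasks, and class counts are assumptions, not the paper's exact configuration.

    # Sketch: shared CNN sentence-pair encoder with one classification head per
    # related discourse task. Sizes and the number of tasks are assumptions.
    import torch
    import torch.nn as nn

    class MultiTaskDiscourseCNN(nn.Module):
        def __init__(self, vocab=20000, dim=100, n_classes_per_task=(4, 11, 3)):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.conv = nn.Conv1d(dim, 128, kernel_size=3, padding=1)
            self.heads = nn.ModuleList([nn.Linear(2 * 128, c) for c in n_classes_per_task])

        def encode(self, ids):
            h = torch.relu(self.conv(self.emb(ids).transpose(1, 2)))
            return h.max(dim=2).values                     # (B, 128)

        def forward(self, arg1_ids, arg2_ids, task: int):
            pair = torch.cat([self.encode(arg1_ids), self.encode(arg2_ids)], dim=1)
            return self.heads[task](pair)                  # logits for the chosen task

    m = MultiTaskDiscourseCNN()
    print(m(torch.randint(0, 20000, (2, 30)),
            torch.randint(0, 20000, (2, 25)), task=0).shape)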