7,485 Hits in 6.3 sec

Exploring Sparsity in Recurrent Neural Networks [article]

Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta
<span title="2017-11-06">2017</span> <i>arXiv</i> <span class="release-stage">pre-print</span>
Recurrent Neural Networks (RNNs) are widely used to solve a variety of problems, and as the quantity of data and the amount of available compute have increased, so have model sizes.  ...  At the end of training, the parameters of the network are sparse while accuracy is still close to the original dense neural network.  ...  We propose a method to reduce the number of weights in recurrent neural networks.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1704.05119v2">arXiv:1704.05119v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/zlpjgblfzrcj7k45j6gh5cg734">fatcat:zlpjgblfzrcj7k45j6gh5cg734</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200930205521/https://arxiv.org/pdf/1704.05119v2.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1704.05119v2" title="arxiv.org access">arxiv.org</a>
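The snippet above describes zeroing out small-magnitude weights during training so that the final parameters are sparse. A minimal magnitude-pruning sketch (the `sparsity` level and toy matrix are illustrative; the paper uses a gradual threshold schedule, not this one-shot version) might look like:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries until roughly `sparsity`
    fraction of the matrix is zero (one-shot illustration, not the
    paper's gradual schedule)."""
    k = int(round(sparsity * weights.size))
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.random.default_rng(0).normal(size=(4, 4))
w_sparse = magnitude_prune(w, sparsity=0.75)
print(np.count_nonzero(w_sparse))  # at most 4 of 16 weights survive
```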

Block-Sparse Recurrent Neural Networks [article]

Sharan Narang, Eric Undersander, Gregory Diamos
<span title="2017-11-08">2017</span> <i>arXiv</i> <span class="release-stage">pre-print</span>
Recurrent Neural Networks (RNNs) are used in state-of-the-art models in domains such as speech recognition, machine translation, and language modelling.  ...  Additionally, we can prune a larger dense network to recover this loss in accuracy while maintaining high block sparsity and reducing the overall parameter count.  ...  We would also like to thank Varun Arora for creating a figure in the paper.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1711.02782v1">arXiv:1711.02782v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6c5vb43zujg5dbwgivj7o6seaq">fatcat:6c5vb43zujg5dbwgivj7o6seaq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200930174445/https://arxiv.org/pdf/1711.02782v1.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1711.02782v1" title="arxiv.org access">arxiv.org</a>
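Block sparsity, as in the entry above, prunes whole tiles of the weight matrix rather than individual entries, which maps better onto hardware. A hand-rolled sketch (the block size, the max-magnitude score, and the one-shot cutoff are illustrative choices, not the paper's exact procedure) could be:

```python
import numpy as np

def block_prune(weights, block=(4, 4), sparsity=0.5):
    """Zero out entire `block`-shaped tiles whose largest absolute
    weight is smallest, keeping the other tiles fully dense."""
    rows, cols = weights.shape
    br, bc = block
    assert rows % br == 0 and cols % bc == 0
    # One score per tile: the tile's largest absolute weight.
    tiles = weights.reshape(rows // br, br, cols // bc, bc)
    scores = np.abs(tiles).max(axis=(1, 3))
    k = int(round(sparsity * scores.size))
    if k == 0:
        return weights.copy()
    cutoff = np.partition(scores.ravel(), k - 1)[k - 1]
    mask = (scores > cutoff).astype(weights.dtype)
    # Broadcast the tile-level mask back to full resolution.
    full_mask = np.repeat(np.repeat(mask, br, axis=0), bc, axis=1)
    return weights * full_mask

w = np.random.default_rng(1).normal(size=(8, 8))
print(np.count_nonzero(block_prune(w, sparsity=0.5)))  # at most half survive
```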

Optimizing Speech Recognition For The Edge [article]

Yuan Shangguan, Jian Li, Qiao Liang, Raziel Alvarez, Ian McGraw
<span title="2020-02-07">2020</span> <i>arXiv</i> <span class="release-stage">pre-print</span>
This leap to the edge is powered by the progression from traditional speech recognition pipelines to end-to-end (E2E) neural architectures, and the parallel development of more efficient neural network  ...  In this paper, we begin with a baseline RNN-Transducer architecture comprised of Long Short-Term Memory (LSTM) layers.  ...  After pruning, we examine alternative recurrent neural network (RNN) layer architectures.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.12408v3">arXiv:1909.12408v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/g2mi3xwla5b2lok5ymc2uwsvcy">fatcat:g2mi3xwla5b2lok5ymc2uwsvcy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200321125847/https://arxiv.org/pdf/1909.12408v3.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.12408v3" title="arxiv.org access">arxiv.org</a>

Structured in Space, Randomized in Time: Leveraging Dropout in RNNs for Efficient Training [article]

Anup Sarma, Sonali Singh, Huaipan Jiang, Rui Zhang, Mahmut T Kandemir, Chita R Das
<span title="2021-06-22">2021</span> <i>arXiv</i> <span class="release-stage">pre-print</span>
Recurrent Neural Networks (RNNs), more specifically their Long Short-Term Memory (LSTM) variants, have been widely used as a deep learning tool for tackling sequence-based learning tasks in text and speech  ...  While sparsity in Deep Neural Nets has been widely seen as an opportunity for reducing computation time in both training and inference phases, the usage of non-ReLU activation in LSTM RNNs renders the  ...  Weight Sparsity in Deep Neural Networks. Sparsity in Deep Neural Nets has also been extensively explored, and the work in this area can be categorized as unstructured or structured types.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.12089v1">arXiv:2106.12089v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qwhqod2otrcgfavnar3wnq3w4q">fatcat:qwhqod2otrcgfavnar3wnq3w4q</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210625153220/https://arxiv.org/pdf/2106.12089v1.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.12089v1" title="arxiv.org access">arxiv.org</a>

Bayesian Sparsification of Recurrent Neural Networks [article]

Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov
<span title="2017-07-31">2017</span> <i>arXiv</i> <span class="release-stage">pre-print</span>
Recurrent neural networks show state-of-the-art results in many text analysis tasks but often require a lot of memory to store their weights.  ...  We apply this technique to sparsify recurrent neural networks. To account for recurrent specifics we also rely on Binary Variational Dropout for RNN.  ...  We would also like to thank the Department of Algorithms and Theory of Programming, Faculty of Innovation and High Technology in Moscow Institute of Physics and Technology for provided computational resources  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1708.00077v1">arXiv:1708.00077v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/fjahpcop3be3pattqvhpdzfwiy">fatcat:fjahpcop3be3pattqvhpdzfwiy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200827042855/https://arxiv.org/pdf/1708.00077v1.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1708.00077v1" title="arxiv.org access">arxiv.org</a>

Effect of dilution in asymmetric recurrent neural networks

Viola Folli, Giorgio Gosti, Marco Leonetti, Giancarlo Ruocco
<span title="">2018</span> <i title="Elsevier BV"><a target="_blank" rel="noopener" href="https://fatcat.wiki/container/oml24fsyizfuhn3rn5np75ubdi">Neural Networks</a></i>
We study with numerical simulation the possible limit behaviors of synchronous discrete-time deterministic recurrent neural networks composed of N binary neurons as a function of a network's level of dilution  ...  These attractors form the set of all the possible limit behaviors of the neural network.  ...  Conflict of Interest Statement The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.neunet.2018.04.003">doi:10.1016/j.neunet.2018.04.003</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/29705670">pmid:29705670</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ddp3ohi5krgf5c4gk3c7iiemba">fatcat:ddp3ohi5krgf5c4gk3c7iiemba</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200908045804/https://arxiv.org/pdf/1805.03886v1.pdf" title="fulltext PDF download [not primary version]">Web Archive [PDF]</a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.neunet.2018.04.003">elsevier.com</a>

Exploring sparsity of firing activities and clock gating for energy-efficient recurrent spiking neural processors

Yu Liu, Yingyezhe Jin, Peng Li
<span title="">2017</span> <i title="IEEE"><a target="_blank" rel="noopener" href="https://fatcat.wiki/container/qcoemsk4lfgznflartfnzb5rhy">2017 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED)</a></i>
As a model of recurrent spiking neural networks, the Liquid State Machine (LSM) offers a powerful brain-inspired computing platform for pattern recognition and machine learning applications.  ...  While operated by processing neural spiking activities, the LSM naturally lends itself to an efficient hardware implementation via exploration of typical sparse firing patterns emerged from the recurrent  ...  The main objective of this paper is to further improve the energy efficiency of LSM neural processors at runtime by exploring: 1) sparsity of firing activities in the recurrent reservoir via event-driven  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/islped.2017.8009197">doi:10.1109/islped.2017.8009197</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/islped/LiuJL17.html">dblp:conf/islped/LiuJL17</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/p7o6jiz5y5dt7b5lmtrf7nb7dm">fatcat:p7o6jiz5y5dt7b5lmtrf7nb7dm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200321000521/https://par.nsf.gov/servlets/purl/10026438" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/islped.2017.8009197">ieee.com</a>

Dynamic Sparsity Neural Networks for Automatic Speech Recognition [article]

Zhaofeng Wu, Ding Zhao, Qiao Liang, Jiahui Yu, Anmol Gulati, Ruoming Pang
<span title="2021-02-08">2021</span> <i>arXiv</i> <span class="release-stage">pre-print</span>
In this paper, we present Dynamic Sparsity Neural Networks (DSNN) that, once trained, can instantly switch to any predefined sparsity configuration at run-time.  ...  In automatic speech recognition (ASR), model pruning is a widely adopted technique that reduces model size and latency to deploy neural network models on edge devices with resource constraints.  ...  Dynamic Sparsity Neural Networks In this section, we first provide a formulation of dynamic sparsity neural networks (DSNN) and justify it using previous studies.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.10627v3">arXiv:2005.10627v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/hitxhjskz5chxeztappxiwlmmq">fatcat:hitxhjskz5chxeztappxiwlmmq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200827191544/https://arxiv.org/pdf/2005.10627v2.pdf" title="fulltext PDF download [not primary version]">Web Archive [PDF]</a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.10627v3" title="arxiv.org access">arxiv.org</a>
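The DSNN entry above describes a single trained model that switches between predefined sparsity configurations at run time. The run-time side can be sketched as one shared weight matrix with one precomputed magnitude mask per sparsity level (the class name, mask construction, and levels here are illustrative; the paper's contribution is jointly training the weights under all target sparsities):

```python
import numpy as np

class DynamicSparseLayer:
    """One shared weight matrix plus a precomputed mask per sparsity
    level, so the caller can pick a sparsity configuration per call."""

    def __init__(self, weights, sparsity_levels=(0.0, 0.5, 0.9)):
        self.weights = weights
        order = np.argsort(np.abs(weights), axis=None)  # ascending magnitude
        self.masks = {}
        for s in sparsity_levels:
            k = int(round(s * weights.size))
            mask = np.ones(weights.size)
            mask[order[:k]] = 0.0  # drop the k smallest-magnitude weights
            self.masks[s] = mask.reshape(weights.shape)

    def forward(self, x, sparsity):
        # Apply the mask for the requested sparsity configuration.
        return x @ (self.weights * self.masks[sparsity])

layer = DynamicSparseLayer(np.random.default_rng(2).normal(size=(8, 8)))
x = np.ones(8)
out_dense = layer.forward(x, 0.0)   # full model
out_small = layer.forward(x, 0.9)   # ~90% of weights masked out
```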

Structured Pruning of Recurrent Neural Networks through Neuron Selection [article]

Liangjian Wen, Xuanyang Zhang, Haoli Bai, Zenglin Xu
<span title="2019-12-08">2019</span> <i>arXiv</i> <span class="release-stage">pre-print</span>
Recurrent neural networks (RNNs) have recently achieved remarkable successes in a number of applications.  ...  A practically effective approach is to reduce the overall storage and computation costs of RNNs by network pruning techniques.  ...  Conclusion In this paper, we propose a novel structured sparsity learning method for recurrent neural networks.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1906.06847v2">arXiv:1906.06847v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/c2xtxbzdlfcqnksaeqydpctfou">fatcat:c2xtxbzdlfcqnksaeqydpctfou</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200907171734/https://arxiv.org/pdf/1906.06847v2.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1906.06847v2" title="arxiv.org access">arxiv.org</a>
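Structured pruning by neuron selection, as in the entry above, removes whole hidden units so the pruned layer stays dense. A hand-rolled sketch (the magnitude score is an illustrative stand-in; the paper learns the selection rather than scoring by norm) might be:

```python
import numpy as np

def prune_neurons(w_in, w_rec, keep):
    """Keep the `keep` hidden units with the largest attached weight
    norm; delete whole rows of the input matrix and whole rows *and*
    columns of the recurrent matrix for the removed units."""
    # Score each hidden unit by the norm of its input and recurrent weights.
    scores = np.linalg.norm(w_in, axis=1) + np.linalg.norm(w_rec, axis=1)
    idx = np.sort(np.argsort(scores)[-keep:])  # indices of surviving units
    return w_in[idx], w_rec[np.ix_(idx, idx)]
```

Unlike unstructured pruning, the result is a smaller dense RNN, so it speeds up inference without sparse-kernel support.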

Deep Collaborative Filtering Approaches for Context-Aware Venue Recommendation

Jarana Manotumruksa
<span title="">2017</span> <i title="ACM Press"><a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ibcfmixrofb3piydwg5wvir3t4">Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR &#39;17</a></i>
the usefulness of Deep Neural Network algorithms (DNN) in alleviating the sparsity problem for CAVR remains untouched or partially studied.  ...  In particular, based on the previous successes of the DNN approaches for recommendation systems, which exploit convolutional and recurrent neural networks to model the user's temporal preferences, we plan  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1145/3077136.3084159">doi:10.1145/3077136.3084159</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/sigir/Manotumruksa17.html">dblp:conf/sigir/Manotumruksa17</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/hmk7jdrzufathg4xfa7vnib7se">fatcat:hmk7jdrzufathg4xfa7vnib7se</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180722054751/http://eprints.gla.ac.uk/142933/7/142933.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1145/3077136.3084159">acm.org</a>

Efficient and effective training of sparse recurrent neural networks

Shiwei Liu, Iftitahu Ni'mah, Vlado Menkovski, Decebal Constantin Mocanu, Mykola Pechenizkiy
<span title="2021-01-26">2021</span> <i title="Springer Science and Business Media LLC"><a target="_blank" rel="noopener" href="https://fatcat.wiki/container/a3wauupnbbdj7hbo62upc6grdq">Neural computing &amp; applications (Print)</a></i>
AbstractRecurrent neural networks (RNNs) have achieved state-of-the-art performances on various applications.  ...  We demonstrate state-of-the-art sparse performance with long short-term memory and recurrent highway networks on widely used tasks, language modeling, and text classification.  ...  ST-RNNs In this section, we introduce sparse training of recurrent neural networks (ST-RNNs), the new class of sparse recurrent neural network models which we are proposing in this paper.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s00521-021-05727-y">doi:10.1007/s00521-021-05727-y</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/5kecjkpdsjbitlsuevdtpe3lga">fatcat:5kecjkpdsjbitlsuevdtpe3lga</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210429113727/https://link.springer.com/content/pdf/10.1007/s00521-021-05727-y.pdf?error=cookies_not_supported&amp;code=34e3fe68-a216-4597-bc0e-103c0a6fcbc9" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s00521-021-05727-y">springer.com</a>
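Sparse-from-scratch training like ST-RNNs keeps a fixed parameter budget and periodically redistributes connections during training. One such redistribution step can be sketched in a generic sparse-evolutionary style (the drop fraction and random regrowth rule below are illustrative, not necessarily the paper's exact update):

```python
import numpy as np

def prune_and_regrow(weights, mask, drop_frac=0.3, rng=None):
    """Drop the weakest active connections, then regrow the same number
    at random positions that were inactive, keeping total sparsity fixed."""
    rng = rng or np.random.default_rng()
    flat_mask = mask.ravel().copy()
    active = np.flatnonzero(flat_mask)
    n_drop = int(round(drop_frac * active.size))
    if n_drop == 0:
        return mask.copy()
    # Drop the n_drop active weights with the smallest magnitude.
    mags = np.abs(weights.ravel()[active])
    flat_mask[active[np.argsort(mags)[:n_drop]]] = 0
    # Regrow the same number among positions inactive before the drop.
    inactive = np.flatnonzero(mask.ravel() == 0)
    flat_mask[rng.choice(inactive, size=n_drop, replace=False)] = 1
    return flat_mask.reshape(mask.shape)
```

Between redistribution steps the masked weights are trained as usual, so the network is sparse for the whole of training, not just at the end.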

Selfish Sparse RNN Training [article]

Shiwei Liu, Decebal Constantin Mocanu, Yulong Pei, Mykola Pechenizkiy
<span title="2021-06-15">2021</span> <i>arXiv</i> <span class="release-stage">pre-print</span>
the Recurrent Neural Networks (RNNs) setting.  ...  However, previous sparse-to-sparse methods mainly focus on Multilayer Perceptron Networks (MLPs) and Convolutional Neural Networks (CNNs), failing to match the performance of dense-to-sparse methods in  ...  STACKED LSTMS Recurrent Highway Networks Recurrent Highway Networks (Zilly et al., 2017 ) is a variant of RNNs allowing RNNs to explore deeper architectures inside the recurrent transition.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2101.09048v3">arXiv:2101.09048v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/yiseccpkonazvfwnijqxup465i">fatcat:yiseccpkonazvfwnijqxup465i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210621225057/https://arxiv.org/pdf/2101.09048v3.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2101.09048v3" title="arxiv.org access">arxiv.org</a>

Dynamically Hierarchy Revolution: DirNet for Compressing Recurrent Neural Network on Mobile Devices [article]

Jie Zhang, Xiaolong Wang, Dawei Li, Yalin Wang
<span title="2018-06-08">2018</span> <i>arXiv</i> <span class="release-stage">pre-print</span>
Recurrent neural networks (RNNs) achieve cutting-edge performance on a variety of problems.  ...  optimized fast dictionary learning algorithm, which 1) dynamically mines the dictionary atoms of the projection dictionary matrix within layer to adjust the compression rate 2) adaptively changes the sparsity  ...  , which are used to initialize parameters of neural network layers in the new network structure ( Fig.1 (b) ).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1806.01248v2">arXiv:1806.01248v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/zej3mizrpzdnjjer3ksc2np6ay">fatcat:zej3mizrpzdnjjer3ksc2np6ay</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200905045722/https://arxiv.org/pdf/1806.01248v2.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1806.01248v2" title="arxiv.org access">arxiv.org</a>

An Application-oriented Review of Deep Learning in Recommender Systems

Jyoti Shokeen, Chhavi Rana
<span title="2019-05-08">2019</span> <i title="MECS Publisher"><a target="_blank" rel="noopener" href="https://fatcat.wiki/container/koacvjimprbbvhwasrewxdbiuy">International Journal of Intelligent Systems and Applications</a></i>
In recent years, deep learning has also proved effective in handling information overload and recommending items.  ...  Recommender systems have proved helpful in choosing relevant items. Several algorithms for recommender systems have been proposed in previous years.  ...  Recurrent neural networks have recently been used in session recommenders.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5815/ijisa.2019.05.06">doi:10.5815/ijisa.2019.05.06</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/67fgexfbfjh2no5b3phvohbole">fatcat:67fgexfbfjh2no5b3phvohbole</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200215010748/http://www.mecs-press.org/ijisa/ijisa-v11-n5/IJISA-V11-N5-6.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5815/ijisa.2019.05.06">Publisher / doi.org</a>

Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators [article]

Marko Stamenovic, Nils L. Westhausen, Li-Chia Yang, Carl Jensen, Alex Pawlicki
<span title="2021-11-09">2021</span> <i>arXiv</i> <span class="release-stage">pre-print</span>
We explore network sparsification strategies with the aim of compressing neural speech enhancement (SE) down to an optimal configuration for a new generation of low power microcontroller based neural accelerators  ...  Our method supports all three structures above and jointly learns integer quantized weights along with sparsity.  ...  Model Compression Network connection pruning is a well-studied approach to compress neural models [6, 27, 13] .  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.02351v2">arXiv:2111.02351v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/3sfua4wchbbfxkdxklwu4akzfi">fatcat:3sfua4wchbbfxkdxklwu4akzfi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211105095326/https://arxiv.org/pdf/2111.02351v1.pdf" title="fulltext PDF download [not primary version]">Web Archive [PDF]</a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.02351v2" title="arxiv.org access">arxiv.org</a>
Showing results 1&mdash;15 out of 7,485 results