215,302 Hits in 4.9 sec

On the Expressive Power of Overlapping Architectures of Deep Learning [article]

Or Sharir, Amnon Shashua
2018 arXiv   pre-print
Moreover, while denser connectivity can increase the expressive capacity, we show that the most common types of modern architectures already exhibit an exponential increase in expressivity, without relying  ...  represented by a deep network of polynomial size.  ...  In other words, there exist tasks that require the high expressive power of overlapping architectures, on which non-overlapping architectures would have to grow by an exponential factor to achieve the  ... 
arXiv:1703.02065v4 fatcat:vckazsuezvcd5khki5i3ljb2ri

On the Expressive Power of Deep Architectures [chapter]

Yoshua Bengio, Olivier Delalleau
2011 Lecture Notes in Computer Science  
On the Expressive Power of Deep Architectures Yoshua Bengio and Olivier Delalleau  ...  Function approximation f(x) = ....  ...  Deep architectures: Advantages: fewer nodes for the same function; lost the ability to model a lot of functions Input Low level features  ... 
doi:10.1007/978-3-642-24412-4_3 fatcat:dvr4psguy5gl5nze2gn2v3soni

EmotionNet Nano: An Efficient Deep Convolutional Neural Network Design for Real-Time Facial Expression Recognition

James Ren Lee, Linda Wang, Alexander Wong
2021 Frontiers in Artificial Intelligence  
To the best of the authors' knowledge, this is the very first deep neural network architecture for facial expression recognition leveraging machine-driven design exploration in its design process, and  ...  While recent advances in deep learning have led to significant improvements in facial expression classification (FEC), a major challenge that remains a bottleneck for the widespread deployment of such  ...  ACKNOWLEDGMENTS The authors would like to thank NSERC, the Canada Research Chairs program, and Microsoft.  ... 
doi:10.3389/frai.2020.609673 pmid:33733225 pmcid:PMC7861268 fatcat:ckzikei5hreejespczhim4guo4

GradNets: Dynamic Interpolation Between Neural Architectures [article]

Diogo Almeida, Nate Sauder
2015 arXiv   pre-print
Traditionally in deep learning, one makes a static trade-off between the needs of early and late optimization.  ...  Neural Networks, in particular, have enormous expressive power and yet are notoriously challenging to train. The nature of that optimization challenge changes over the course of learning.  ...  ACKNOWLEDGMENTS We thank NVIDIA for their generosity in providing access to part of their cluster in support of Enlitic's mission and our research.  ... 
arXiv:1511.06827v1 fatcat:kkpfwhtlt5anlkc3zitevgcxue

EmotionNet Nano: An Efficient Deep Convolutional Neural Network Design for Real-time Facial Expression Recognition [article]

James Ren Hou Lee, Linda Wang, Alexander Wong
2020 arXiv   pre-print
While recent advances in deep learning have led to significant improvements in facial expression classification (FEC), a major challenge that remains a bottleneck for the widespread deployment of such  ...  images/sec/watt at 15W) on an ARM embedded processor, thus further illustrating the efficacy of EmotionNet Nano for deployment on embedded devices.  ...  of highly efficient deep neural network architectures tailored for the task of real-time embedded facial expression recognition.  ... 
arXiv:2006.15759v1 fatcat:kjxpea3sgngexlgpleoukvhrfa

Deep Learning Methodologies for Genomic Data Prediction: Review

Yusuf Aleshinloye Abass, Steve A. Adeshina
2021 Journal of Artificial Intelligence for Medical Sciences  
In this review, we captured the most frequently used deep learning architectures for the genomic domain.  ...  We outline the limitations of deep learning methodologies when dealing with genomic data and we conclude that advancement in deep learning methodologies will help rejuvenate genomic research and build  ...  In terms of gene expression inference, the authors of deep learning for gene expression (D-GEX) provided a deep architecture to infer the expression of target genes from the expression available  ... 
doi:10.2991/jaims.d.210512.001 fatcat:cyadysvs6bbo7bz6gpcbmjvbuy

Convolutional Neural Network for Face Recognition in Mobile Phones

Andry Chowanda, Rhio Sutoyo
2019 ICIC Express Letters  
This paper presents the implementation of deep learning models for face recognition on a mobile phone using CoreML.  ...  The results show that the two models achieved accuracy scores of 93.2% and 93.3% for the VGG-19 and Google Inception V3 architectures, respectively.  ...  We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.  ... 
doi:10.24507/icicel.13.07.569 fatcat:32pmoxy67zgodc7amgqlffgwki

Optimizing Deep Convolutional Neural Network for Facial Expression Recognition

Umesh B. Chavan, Dinesh Kulkarni
2020 European Journal of Engineering Research and Science  
The proposed architecture achieves 61% accuracy. This work presents results of an accelerated implementation of the CNN with graphics processing units (GPUs).  ...  We designed a large, deep convolutional neural network to classify 40,000 images in the data set into one of seven categories (disgust, fear, happy, angry, sad, neutral, surprise).  ...  ACKNOWLEDGMENT The authors would like to thank NVIDIA Corporation for donating an NVIDIA GPU card for this research work.  ... 
doi:10.24018/ejers.2020.5.2.495 fatcat:euv4bc7cvjcqnaevup43xjzbaq

Building energy load forecasting using Deep Neural Networks

Daniel L. Marino, Kasun Amarasinghe, Milos Manic
2016 IECON 2016 - 42nd Annual Conference of the IEEE Industrial Electronics Society  
Therefore, the power grid of the future should provide an unprecedented level of flexibility in energy management.  ...  The presented work investigates two variants of the LSTM: 1) standard LSTM and 2) LSTM-based Sequence to Sequence (S2S) architecture.  ...  The input vector is defined in Eq. (4). The output of the network, ŷ[t] ∈ ℝ, is an estimation of the active power for the next time step.  ... 
doi:10.1109/iecon.2016.7793413 dblp:conf/iecon/MarinoAM16 fatcat:b3rtdewlmzfcrhy3kkuxvedp3a
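The sequence-to-sequence forecasting scheme this entry describes (encode a load history into a state, then roll the state forward, feeding each prediction back in) can be sketched roughly as follows. This is an illustrative stand-in, not the paper's implementation: the toy tanh cell replaces the LSTM, and all parameter names here are assumptions.

```python
import numpy as np

def rnn_cell(h, x, Wh, Wx, b):
    # Toy tanh recurrence standing in for the paper's LSTM cell.
    return np.tanh(Wh @ h + Wx @ x + b)

def s2s_forecast(history, horizon, Wh, Wx, b, Wout):
    """Encoder: fold the observed load history into a hidden state.
    Decoder: roll out `horizon` steps, feeding each prediction back
    as the next input (the S2S pattern sketched in this entry)."""
    h = np.zeros(Wh.shape[0])
    for x in history:                  # encoder pass over observed loads
        h = rnn_cell(h, x, Wh, Wx, b)
    preds, y = [], history[-1]
    for _ in range(horizon):           # decoder rollout
        h = rnn_cell(h, y, Wh, Wx, b)
        y = Wout @ h                   # next-step load estimate
        preds.append(y)
    return np.array(preds)
```

With a 1-dimensional load series, `history` has shape `(T, 1)` and the result has shape `(horizon, 1)`; the real model would also concatenate calendar features into each input vector.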


2015 Journal of Architecture and Planning (Transactions of AIJ)  
In the competition, there were two major types of Nihon-shumi designs: one had narrow eaves on flat walls, a typical expression of Nihon-shumi, and the other exaggerated traditional Japanese motifs of wooden buildings, such as deep eaves, cloister pillars, or Azekura walls.  ...  One has narrow eaves on flat walls, a typical expression of 'Nihon-shumi' that appeared in the early 1930s.  ... 
doi:10.3130/aija.80.2101 fatcat:oqihyfapy5eq5o5czzni3qsviq

Survey of Expressivity in Deep Neural Networks [article]

Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein
2016 arXiv   pre-print
We survey results on neural network expressivity described in "On the Expressive Power of Deep Neural Networks".  ...  They suggest that parameters earlier in a network have greater influence on its expressive power -- in particular, given a layer, its influence on expressivity is determined by the remaining depth of the  ...  Random networks The results on networks after random initialization examine the effect of depth and width of a network architecture on its expressive power after random initialization via three natural  ... 
arXiv:1611.08083v1 fatcat:wbzrm6x3rrf7vlknn5fekqlib4

UPR: A Model-Driven Architecture for Deep Phase Retrieval [article]

Naveed Naimipour, Shahin Khobahi, Mojtaba Soltanalian
2020 arXiv   pre-print
Specifically, the proposed method benefits from versatility and interpretability of well established model-based algorithms, while simultaneously benefiting from the expressive power of deep neural networks  ...  Our numerical results illustrate the effectiveness of such hybrid deep architectures and showcase the untapped potential of data-aided methodologies to enhance the existing phase retrieval algorithms.  ...  the expressive power of deep neural networks.  ... 
arXiv:2003.04396v1 fatcat:wdyc35gacjct7pj6opfdathxpe

An Analysis of the Expressiveness of Deep Neural Network Architectures Based on Their Lipschitz Constants [article]

SiQi Zhou, Angela P. Schoellig
2019 arXiv   pre-print
Despite their empirical successes, there is still a lack of theoretical understanding of the representative power of such deep architectures.  ...  We consider this work to be a step towards understanding the expressive power of DNNs and towards designing appropriate deep architectures for practical applications such as system control.  ...  There are several recent works analyzing the expressive power of deep architectures.  ... 
arXiv:1912.11511v1 fatcat:yfemhs7mazhjjdp7tac7rqxuym
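As a rough illustration of the Lipschitz-constant viewpoint this entry takes on expressiveness (a generic sketch, not the paper's actual analysis): for a feed-forward ReLU network, the product of the layers' spectral norms is a standard upper bound on the network's Lipschitz constant, since ReLU itself is 1-Lipschitz.

```python
import numpy as np

def lipschitz_upper_bound(weight_matrices):
    """Upper-bound the Lipschitz constant of a ReLU network by the
    product of its layers' spectral norms (largest singular values).
    Valid because ReLU is 1-Lipschitz, so each layer contracts or
    expands distances by at most its spectral norm."""
    bound = 1.0
    for W in weight_matrices:
        bound *= np.linalg.norm(W, 2)  # spectral norm of the layer
    return bound
```

For example, a two-layer network with weights `I` and `2I` gets the bound 2.0; tighter bounds exist, but this product is the usual starting point.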

Deep Mixture of Experts via Shallow Embedding [article]

Xin Wang, Fisher Yu, Lisa Dunlap, Yi-An Ma, Ruth Wang, Azalia Mirhoseini, Trevor Darrell, Joseph E. Gonzalez
2019 arXiv   pre-print
We explore a mixture of experts (MoE) approach to deep dynamic routing, which activates certain experts in the network on a per-example basis.  ...  Larger networks generally have greater representational power at the cost of increased computational complexity.  ...  Recent work [5] proves that the expressive power of a deep neural network increases super-exponentially with its depth, for a fixed width.  ... 
arXiv:1806.01531v3 fatcat:rjflq35hirav5ecig5jtgdpahu
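The per-example routing this entry describes can be sketched as a sparse gating step: a softmax gate scores every expert, but only the top-k experts are actually evaluated for a given input. This is a minimal generic sketch of sparse MoE routing, not the paper's architecture; the function and parameter names are assumptions.

```python
import numpy as np

def moe_forward(x, experts, gate_w, k=2):
    """Sparse mixture-of-experts forward pass: score all experts with a
    softmax gate, evaluate only the top-k, and combine their outputs
    weighted by the renormalized gate probabilities."""
    logits = gate_w @ x                        # one gating score per expert
    probs = np.exp(logits - logits.max())      # numerically stable softmax
    probs /= probs.sum()
    active = np.argsort(probs)[-k:]            # indices of the top-k experts
    weights = probs[active] / probs[active].sum()
    # Only the selected experts run on this example (the compute saving).
    return sum(w * experts[i](x) for w, i in zip(weights, active))
```

In a real network the experts would be sub-networks and the gate would be trained jointly with them; here each expert is just a callable mapping the input to an output vector.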

Model Complexity of Deep Learning: A Survey [article]

Xia Hu, Lingyang Chu, Jian Pei, Weiqing Liu, Jiang Bian
2021 arXiv   pre-print
Model complexity is a fundamental problem in deep learning. In this paper, we provide a systematic overview of the latest studies on model complexity in deep learning.  ...  The model complexity of deep learning can be categorized into expressive capacity and effective model complexity.  ...  [92] find that smaller widths in the first few layers of a deep ReLU network create a bottleneck on its expressive power. Kileel et al.  ... 
arXiv:2103.05127v2 fatcat:uknedzqea5evdcqm7mnlatytya