
Multi-scale Convolution Aggregation and Stochastic Feature Reuse for DenseNets [article]

Mingjie Wang, Jun Zhou, Wendong Mao, Minglun Gong
2018 arXiv   pre-print
To feed even richer information into the network, a novel adaptive Multi-scale Convolution Aggregation module is presented in this paper.  ...  Composed of layers for multi-scale convolutions, trainable cross-scale aggregation, maxout, and concatenation, this module is highly non-linear and can boost the accuracy of DenseNet while using much fewer  ...  Inspired by the benefits of multi-scale convolutions [19, 34] and feature fusion for training deep networks, we design a novel module, referred to as Multi-scale Convolution Aggregation (MCA), to work with  ... 
arXiv:1810.01373v1 fatcat:tj5zeutrq5bqdfuqahpxcxazee
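The snippet lists the MCA module's ingredients (multi-scale convolutions, trainable cross-scale aggregation, maxout, concatenation) without its exact wiring. Below is a minimal PyTorch sketch of one plausible arrangement; the class name, kernel sizes, and merge order are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class MultiScaleAggregation(nn.Module):
    """Sketch of a multi-scale aggregation block: parallel convs at
    several kernel sizes, a learnable weighted sum across scales, a
    maxout-style competition, and a final concatenation."""
    def __init__(self, in_ch, out_ch, scales=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in scales]
        )
        # trainable cross-scale aggregation weights, one per branch
        self.alpha = nn.Parameter(torch.ones(len(scales)) / len(scales))

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.branches])   # (S, N, C, H, W)
        w = torch.softmax(self.alpha, dim=0).view(-1, 1, 1, 1, 1)
        agg = (w * feats).sum(dim=0)                          # weighted fusion
        competed = torch.max(agg, feats.max(dim=0).values)    # maxout competition
        return torch.cat([agg, competed], dim=1)              # 2 * out_ch channels
```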

Deep Maxout Networks Applied to Noise-Robust Speech Recognition [chapter]

F. de-la-Calle-Silos, A. Gallardo-Antolín, C. Peláez-Moreno
2014 Lecture Notes in Computer Science  
In this paper, we investigate Deep Maxout Networks (DMN) for acoustic modeling in a noisy automatic speech recognition environment.  ...  Deep Neural Networks (DNN) have become very popular for acoustic modeling due to the improvements found over traditional Gaussian Mixture Models (GMM).  ...  This contribution has been supported by an Airbus Defense and Space Grant (Open Innovation -SAVIER) and Spanish Government-CICYT project 2011-26807/TEC.  ... 
doi:10.1007/978-3-319-13623-3_12 fatcat:j2hen2c2ozcsdeoyi7qg62nepm
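The maxout non-linearity these acoustic models are built from has a standard definition: the unit outputs the maximum of k affine projections of its input. A minimal sketch (layer sizes and k are illustrative):

```python
import torch
import torch.nn as nn

class Maxout(nn.Module):
    """Maxout layer: k parallel affine maps, output is their
    element-wise maximum (Goodfellow et al., 2013)."""
    def __init__(self, in_features, out_features, k=3):
        super().__init__()
        self.k, self.out = k, out_features
        self.linear = nn.Linear(in_features, out_features * k)

    def forward(self, x):
        z = self.linear(x)                 # (N, out * k)
        z = z.view(-1, self.out, self.k)   # (N, out, k)
        return z.max(dim=2).values         # max over the k pieces
```

Because the max over affine pieces is itself piecewise linear, stacking such layers gives the DMN its non-saturating behavior without a fixed activation shape.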

CIFAR-10: KNN-based Ensemble of Classifiers [article]

Yehya Abouelnaga, Ola S. Ali, Hager Rady, Mohamed Moustafa
2016 arXiv   pre-print
We reduce KNN overfitting using Principal Component Analysis (PCA), and ensemble it with a CNN to increase its accuracy. Our approach improves our best CNN model from 93.33% to 94.03%.  ...  We show that, on CIFAR-10, K-Nearest Neighbors (KNN) and Convolutional Neural Networks (CNN) are, on some classes, mutually exclusive, and thus yield higher accuracy when combined.  ...  ACKNOWLEDGMENT The authors relied on the implementations in the Scikit-Learn Python library [30] and Torch for most of the experiments carried out in this paper.  ... 
arXiv:1611.04905v1 fatcat:lj6dkek5cvbphbdvfunt5jzdkm
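The PCA-then-KNN step maps directly onto scikit-learn, which the acknowledgment says the authors used; the component count, neighbor count, and stand-in data below are placeholders, not the paper's tuned values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Stand-in for flattened CIFAR-10 images (n, 32*32*3) and labels.
X_train = np.random.rand(500, 3072)
y_train = np.random.randint(0, 10, 500)

# Reducing dimensionality before KNN is the overfitting fix the
# paper describes; 100 components and k=5 are placeholder choices.
knn = make_pipeline(PCA(n_components=100), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_train, y_train)

# An ensemble with a CNN could then combine the two models' class
# probabilities; the paper's exact weighting is not shown here.
proba_knn = knn.predict_proba(X_train[:10])
```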

Deep Learning Models Based on Image Classification: A Review

Kavi B. Obaid, Subhi R. M. Zeebaree, Omar M. Ahmed
2020 Zenodo  
With the development of the big data age, deep learning has developed more complex network structures and more powerful feature learning and feature expression abilities than traditional  ...  machine learning methods.  ...  Lin et al. (2014) proposed a novel deep network called "Network In Network" (NIN) for classification tasks.  ... 
doi:10.5281/zenodo.4108433 fatcat:boa4clckbvcepjze6et6vsfjpq

Deep representation for partially occluded face verification

Lei Yang, Jie Ma, Jian Lian, Yan Zhang, Houquan Liu
2018 EURASIP Journal on Image and Video Processing  
Bearing this in mind, we propose a novel convolutional neural network designed specifically for verification between occluded and non-occluded faces of the same identity.  ...  It can learn both shared and unique features based on a multi-network convolutional neural network architecture.  ...  Acknowledgements The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions. Funding  ... 
doi:10.1186/s13640-018-0379-2 fatcat:k6zl3xbvgrg5xo5cdzfhom4cle

Convolutional Neural Networks In Convolution [article]

Xiaobo Huang
2018 arXiv   pre-print
In contrast, we propose a novel, wider Convolutional Neural Network (CNN) architecture, motivated by the Multi-column Deep Neural Networks and the Network In Network (NIN), aiming for higher accuracy without  ...  Further classification is then carried out by a global average pooling layer and a softmax layer.  ...  We sincerely thank him for his significant contribution to both the writing and founding of the paper.  ... 
arXiv:1810.03946v1 fatcat:7wvazgfsdzcy7i5px75r22fse4

Network In Network [article]

Min Lin, Qiang Chen, Shuicheng Yan
2014 arXiv   pre-print
We propose a novel deep network structure called "Network In Network" (NIN) to enhance model discriminability for local patches within the receptive field.  ...  We instantiate the micro neural network with a multilayer perceptron, which is a potent function approximator.  ...  Conclusions We proposed a novel deep network called "Network In Network" (NIN) for classification tasks.  ... 
arXiv:1312.4400v3 fatcat:bicbw4jwqnaazcszad2pev2dpa
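NIN's sliding micro-network is equivalent to a regular convolution followed by 1x1 convolutions, and the paper replaces the fully connected classifier with global average pooling over one feature map per class. A condensed sketch (channel widths are illustrative):

```python
import torch
import torch.nn as nn

def mlpconv(in_ch, mid_ch, out_ch, k=3):
    """One NIN block: a k x k conv followed by two 1x1 convs,
    i.e. a small MLP applied at every spatial location."""
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, k, padding=k // 2), nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, mid_ch, 1), nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, out_ch, 1), nn.ReLU(inplace=True),
    )

# Classifier head: one feature map per class, spatially averaged --
# no fully connected layers (widths and depth are illustrative).
net = nn.Sequential(
    mlpconv(3, 192, 160),
    mlpconv(160, 192, 10),       # 10 maps = 10 classes
    nn.AdaptiveAvgPool2d(1),     # global average pooling
    nn.Flatten(),                # (N, 10) logits for softmax
)
```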

Asian Female Facial Beauty Prediction using Deep Neural Networks via Transfer Learning and Multi-channel Feature Fusion

Yikui Zhai, Yu Huang, Ying Xu, Junying Gan, He Cao, Wenbo Deng, Ruggero Donida Labati, Vincenzo Piuri, Fabio Scotti
2020 IEEE Access  
Secondly, in order to improve the CNN's self-learning ability on the facial beauty prediction task, an effective CNN using a novel Softmax-MSE loss function and a double activation layer has been proposed.  ...  Neural Networks.  ...  Section 3 presents the method of structuring an effective CNN with a novel Softmax-MSE loss function and a double activation layer.  ... 
doi:10.1109/access.2020.2980248 fatcat:xvndnoq4bjakvdspcpjqesnd3q
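The snippet names a Softmax-MSE loss but not its formula. One plausible reading, shown strictly as an assumption, is a mean-squared error between the softmax output and one-hot targets in place of the usual cross-entropy:

```python
import torch
import torch.nn.functional as F

def softmax_mse_loss(logits, targets, num_classes):
    """Assumed form of a 'Softmax-MSE' loss: MSE between the softmax
    distribution and one-hot labels. The paper's exact definition
    may differ from this sketch."""
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(targets, num_classes).float()
    return F.mse_loss(probs, onehot)
```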

Reconstruction for Diverging-Wave Imaging Using Deep Convolutional Neural Networks [article]

Jingfeng Lu, Fabien Millioz, Damien Garcia, Sebastien Salles, Wanyu Liu, Denis Friboulet
2020 arXiv   pre-print
To deal with this limitation, we propose a convolutional neural network (CNN) architecture for high-quality reconstruction of DW ultrasound images using a small number of transmissions.  ...  The performance of the proposed approach was evaluated in terms of contrast-to-noise ratio and lateral resolution, and compared with the standard compounding method and conventional CNN methods.  ...  In the training stage, the network weights were initialized with the Xavier initializer [45].  ... 
arXiv:1911.03416v3 fatcat:nuu6czc7grgrfnzb57i3uwplku
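The quoted training detail, Xavier initialization [45], has a standard definition: sample weights with variance 2/(fan_in + fan_out). In PyTorch it is a single call; the explicit uniform-distribution form is shown alongside for clarity (the example layer's sizes are arbitrary):

```python
import math
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 32, kernel_size=3)   # example layer, sizes illustrative

# Built-in Xavier/Glorot uniform initialization ...
nn.init.xavier_uniform_(conv.weight)

# ... which is equivalent to sampling from U(-a, a) with
# a = sqrt(6 / (fan_in + fan_out)):
k = 3
fan_in = 1 * k * k     # in_channels * kernel area
fan_out = 32 * k * k   # out_channels * kernel area
a = math.sqrt(6.0 / (fan_in + fan_out))
with torch.no_grad():
    conv.weight.uniform_(-a, a)
```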

Sentiment Analysis via Deep Multichannel Neural Networks with Variational Information Bottleneck

Tong Gu, Guoliang Xu, Jiangtao Luo
2020 IEEE Access  
However, deep neural networks often suffer from over-fitting and vanishing gradients during training.  ...  Neural Network (CNN) and Variational Information Bottleneck (VIB).  ... 
doi:10.1109/access.2020.3006569 fatcat:ssdkrt6annhvtnlnxx3bijo2cy
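The variational information bottleneck in the title adds a KL penalty that compresses the learned representation, which is one way to counter the over-fitting the snippet mentions. A minimal sketch of the loss under a standard-normal prior, with the paper's multichannel CNN encoder elided and beta as a placeholder:

```python
import torch
import torch.nn.functional as F

def vib_loss(logits, labels, mu, logvar, beta=1e-3):
    """Task loss plus beta * KL(q(z|x) || N(0, I)); the encoder is
    assumed to output a Gaussian with mean mu and log-variance
    logvar. beta is a placeholder value, not the paper's."""
    task = F.cross_entropy(logits, labels)
    kl = -0.5 * torch.mean(
        torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    )
    return task + beta * kl

def sample_z(mu, logvar):
    """Reparameterization trick used to sample z during training."""
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()
```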

Light Multi-Segment Activation for Model Compression

Zhenhui Xu, Guolin Ke, Jia Zhang, Jiang Bian, Tie-Yan Liu
2020 PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE  
Model compression has become necessary when applying neural networks (NN) to many real application tasks that can accept slightly reduced model accuracy but impose strict constraints on model complexity.  ...  Specifically, we propose a highly efficient multi-segment activation, called Light Multi-segment Activation (LMA), which can rapidly produce multiple linear regions with very few parameters by leveraging  ...  On the other hand, improving the capacity of activations is also a novel and significant direction for simplifying complex architectures and applying neural networks more efficiently.  ... 
doi:10.1609/aaai.v34i04.6128 fatcat:cllk3hq24fgi5bllb3jnffa6lu
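The abstract describes LMA only as a piecewise-linear activation that produces many linear regions from few parameters. The sketch below shows a generic learnable multi-segment activation in that spirit, built from a ReLU basis; it is not the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class MultiSegmentActivation(nn.Module):
    """Generic continuous piecewise-linear activation:
    f(x) = a0 * x + sum_i a_i * relu(x - b_i).
    Slopes a_i are learned; breakpoints b_i are fixed here for
    simplicity. LMA's actual scheme may differ."""
    def __init__(self, breakpoints=(-1.0, 0.0, 1.0)):
        super().__init__()
        self.register_buffer("b", torch.tensor(breakpoints))
        self.a0 = nn.Parameter(torch.ones(1))
        self.a = nn.Parameter(torch.zeros(len(breakpoints)))

    def forward(self, x):
        # broadcast x against breakpoints on a new trailing dim
        pieces = torch.relu(x.unsqueeze(-1) - self.b)   # (..., n_breaks)
        return self.a0 * x + (self.a * pieces).sum(dim=-1)
```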

Competition vs. Concatenation in Skip Connections of Fully Convolutional Networks [chapter]

Santiago Estrada, Sailesh Conjeti, Muneer Ahmad, Nassir Navab, Martin Reuter
2018 Lecture Notes in Computer Science  
Increased information sharing through short and long-range skip connections between layers in fully convolutional networks has demonstrated significant improvements in performance for semantic segmentation  ...  based state-of-the-art methods.  ...  other deep learning variants that employ concatenation layers.  ... 
doi:10.1007/978-3-030-00919-9_25 fatcat:i5ttvoqxlrhe7nyux7hp4kne64
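The competition-vs-concatenation contrast in the title reduces to two merge rules for skip connections: stacking encoder and decoder features along channels versus keeping the element-wise winner (maxout-style). Both fit in a few lines; tensor shapes follow the usual (N, C, H, W) convention.

```python
import torch

def concat_skip(enc, dec):
    """Concatenation merge (U-Net style): channels stack up."""
    return torch.cat([enc, dec], dim=1)   # (N, C_enc + C_dec, H, W)

def compete_skip(enc, dec):
    """Competitive (maxout-style) merge: same channel count, the
    element-wise maximum keeps the stronger response."""
    return torch.max(enc, dec)            # (N, C, H, W), needs C_enc == C_dec
```

The competitive merge keeps the channel count (and downstream parameter count) fixed, which is the practical trade-off against the richer but wider concatenated features.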

Regularizing Neural Networks via Stochastic Branch Layers [article]

Wonpyo Park, Paul Hongsuck Seo, Bohyung Han, Minsu Cho
2019 arXiv   pre-print
We introduce a novel stochastic regularization technique for deep neural networks, which decomposes a layer into multiple branches with different parameters and merges stochastically sampled combinations  ...  An extensive set of experiments shows that our method effectively regularizes networks and further improves the generalization performance when used together with other existing regularization techniques  ...  While one common form of the techniques is to penalize the weight tensor with a constant (Krogh and Hertz, 1992; Srebro and Shraibman, 2005) , a popular method for deep neural networks is to inject random  ... 
arXiv:1910.01467v1 fatcat:b7sccp23ijemrexu4vyfvh2mzq
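The recipe in the abstract (decompose a layer into parallel branches with different parameters, sample stochastically during training, use the expectation at test time) suggests something like the following; the branch count, branch type, and merge rule here are assumptions.

```python
import torch
import torch.nn as nn

class StochasticBranch(nn.Module):
    """Sketch of a stochastic branch layer: several branches with
    separate parameters; one branch is sampled per training forward
    pass, and outputs are averaged at inference."""
    def __init__(self, in_f, out_f, num_branches=4):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Linear(in_f, out_f) for _ in range(num_branches)]
        )

    def forward(self, x):
        if self.training:
            i = torch.randint(len(self.branches), (1,)).item()
            return self.branches[i](x)
        # expectation over branches at test time
        return torch.stack([b(x) for b in self.branches]).mean(dim=0)
```

Like dropout, the randomness acts as a regularizer during training while inference stays deterministic.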

All you need is a good init [article]

Dmytro Mishkin, Jiri Matas
2016 arXiv   pre-print
Layer-sequential unit-variance (LSUV) initialization - a simple method for weight initialization for deep net learning - is proposed. The method consists of two steps.  ...  Experiments with different activation functions (maxout, ReLU-family, tanh) show that the proposed initialization leads to learning of very deep nets that (i) produce networks with test accuracy better  ...  ACKNOWLEDGMENTS The authors were supported by The Czech Science Foundation Project GACR P103/12/G084 and CTU student grant SGS15/155/OHK3/2T/13.  ... 
arXiv:1511.06422v7 fatcat:no2qdu35mjfyjbdna2oh35wf6q
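The two steps the abstract mentions are (1) orthonormal pre-initialization and (2) rescaling each layer's weights until its output variance on real data reaches one. A compact sketch, assuming a flat list of layers and a data batch x (the tolerance and iteration cap are illustrative):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def lsuv_init(layers, x, tol=0.05, max_iter=10):
    """Layer-sequential unit-variance init: orthonormal weights,
    then per-layer rescaling so the output variance on a data
    batch reaches ~1."""
    for layer in layers:
        if hasattr(layer, "weight") and layer.weight.dim() >= 2:
            nn.init.orthogonal_(layer.weight)     # step 1: orthonormal init
            for _ in range(max_iter):             # step 2: variance scaling
                var = layer(x).var()
                if abs(var - 1.0) < tol:
                    break
                layer.weight /= var.sqrt()
        x = layer(x)                              # feed the batch forward
    return layers

# Usage sketch: lsuv_init([nn.Linear(64, 64), nn.ReLU(),
#                          nn.Linear(64, 10)], torch.randn(128, 64))
```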

MGFN: A Multi-Granularity Fusion Convolutional Neural Network for Remote Sensing Scene Classification

Zhiguo Zeng, Xihong Chen, Zhihua Song
2021 IEEE Access  
We provide a novel Multi-Granularity Fusion convolutional neural network in Section III.  ...  The learning rate of the classification layer is initially set to 0.1, and the entire network is fine-tuned with a learning rate of 0.001.  ... 
doi:10.1109/access.2021.3081922 fatcat:lnabnm7zung3jadqzkcusq3m3q
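The quoted fine-tuning schedule (classification layer at 0.1, the rest of the network at 0.001) is expressed in PyTorch with optimizer parameter groups; the backbone and class count below are stand-ins, since MGFN itself is not reproduced in this snippet.

```python
import torch
import torchvision

# Stand-in backbone with a fresh classification head; 45 classes is
# a placeholder (e.g. a remote sensing scene dataset), not MGFN.
model = torchvision.models.resnet18(num_classes=45)

head = list(model.fc.parameters())
backbone = [p for n, p in model.named_parameters() if not n.startswith("fc.")]

# Two parameter groups reproduce the quoted schedule: the new
# classification layer at lr=0.1, pretrained layers at lr=0.001.
optimizer = torch.optim.SGD(
    [{"params": head, "lr": 0.1},
     {"params": backbone, "lr": 0.001}],
    momentum=0.9,
)
```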