41,364 Hits in 2.7 sec

Going deeper with convolutions

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
We propose a deep convolutional neural network architecture codenamed Inception, which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale  ...  the famous "we need to go deeper" internet meme [1].  ...  One big problem with the above modules, at least in this naïve form, is that even a modest number of 5×5 convolutions can be prohibitively expensive on top of a convolutional layer with a large number  ... 
doi:10.1109/cvpr.2015.7298594 dblp:conf/cvpr/SzegedyLJSRAEVR15 fatcat:lqm5bh23tjhlpip27wrc5abzju
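The cost argument in this snippet (a 5×5 convolution over a wide input is expensive, so Inception inserts a 1×1 reduction first) can be sketched with multiply-accumulate counts; all layer sizes below are illustrative assumptions, not the paper's configuration:

```python
def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulate count for a k x k convolution over an h x w x c_in input."""
    return h * w * c_out * k * k * c_in

H, W = 28, 28            # spatial size (illustrative)
C_IN, C_OUT = 192, 96    # channel counts (illustrative)
BOTTLENECK = 32          # width of the 1x1 dimension-reduction layer

# Naive module: 5x5 convolution applied directly to the wide input.
naive = conv_macs(H, W, C_IN, C_OUT, 5)

# Inception-style module: 1x1 reduction, then 5x5 on the narrow tensor.
reduced = conv_macs(H, W, C_IN, BOTTLENECK, 1) + conv_macs(H, W, BOTTLENECK, C_OUT, 5)

print(naive, reduced, round(naive / reduced, 1))  # the bottleneck is several times cheaper
```

With these sizes the naive path costs about 5.6× more multiply-accumulates, which is the "prohibitively expensive" effect the abstract describes.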

Going Deeper with Dense Connectedly Convolutional Neural Networks for Multispectral Pansharpening

Dong Wang, Ying Li, Li Ma, Zongwen Bai, Jonathan Cheung-Wai Chan
2019 Remote Sensing  
However, small-scale data and the vanishing-gradient problem have prevented existing CNN-based fusion approaches from leveraging deeper networks that potentially have better representation  ...  In this paper, we introduce a very deep network with dense blocks and residual learning to tackle these problems.  ...  We introduce a deeper CNN with dense blocks than other existing deep networks for pansharpening.  ... 
doi:10.3390/rs11222608 fatcat:zqk6sq33ynfz7his6hywzdby3i

Going Deeper With Contextual CNN for Hyperspectral Image Classification

Hyungtae Lee, Heesung Kwon
2017 IEEE Transactions on Image Processing  
In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification.  ...  The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline.  ...  The first convolutional layer applied to the input hyperspectral image uses an inception module [5] that locally convolves the input image with two convolutional filters with different sizes (1×1×B and  ... 
doi:10.1109/tip.2017.2725580 pmid:28708555 fatcat:yqti5bhofzalhep2rqhqb3bele

Deeper and wider fully convolutional network coupled with conditional random fields for scene labeling

Kien Nguyen, Clinton Fookes, Sridha Sridharan
2016 IEEE International Conference on Image Processing (ICIP)
The new strategy of a deeper, wider convolutional network coupled with graphical models has shown promising results on the PASCAL-Context dataset.  ...  Deep convolutional neural networks (DCNNs) have been employed in many computer vision tasks with great success due to their robustness in feature learning.  ...  The above experiments have shown the effectiveness of going deeper and wider, coupled with graphical modeling, for scene labeling.  ... 
doi:10.1109/icip.2016.7532577 dblp:conf/icip/NguyenFS16 fatcat:bcl3cqzrnnbgxpqvs5iiy247li

Going Deeper With Lean Point Networks

Eric-Tuan Le, Iasonas Kokkinos, Niloy J. Mitra
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
By combining these blocks, we design wider and deeper point-based architectures.  ...  time, and accuracy: a convolution-type block for point sets that blends neighborhood information in a memory-efficient manner; a crosslink block that efficiently shares information across low- and high-resolution  ...  We build on the decreased memory budget to go deeper with point networks.  ... 
doi:10.1109/cvpr42600.2020.00952 dblp:conf/cvpr/LeKM20 fatcat:7nhts7fh75ha5h7cwksmsgyt4u

Very Deep Convolutional Networks for Text Classification [article]

Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun
2017 arXiv   pre-print
We are able to show that the performance of this model increases with depth: using up to 29 convolutional layers, we report improvements over the state-of-the-art on several public text classification  ...  The dominant approach for many NLP tasks is recurrent neural networks, in particular LSTMs, and convolutional neural networks.  ...  We plan to further explore adaptations of residual networks to temporal convolutions, as we think this is a milestone for going deeper in NLP.  ... 
arXiv:1606.01781v2 fatcat:3cr67tgtjvdt7kpc7dg2dez3tq

Very Deep Convolutional Networks for Text Classification

Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun
2017 Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers  
We are able to show that the performance of this model increases with depth: using up to 29 convolutional layers, we report improvements over the state-of-the-art on several public text classification  ...  The dominant approach for many NLP tasks is recurrent neural networks, in particular LSTMs, and convolutional neural networks.  ...  We plan to further explore adaptations of residual networks to temporal convolutions, as we think this is a milestone for going deeper in NLP.  ... 
doi:10.18653/v1/e17-1104 dblp:conf/eacl/SchwenkBCL17 fatcat:uawf445nezbcjazpwghlatelni

Efficient symmetry-driven fully convolutional network for multimodal brain tumor segmentation

Haocheng Shen, Jianguo Zhang, Weishi Zheng
2017 IEEE International Conference on Image Processing (ICIP)
In this paper, we present a novel and efficient method for brain tumor (and sub-region) segmentation in multimodal MR images based on a fully convolutional network (FCN) that enables end-to-end training  ...  First, we compare the performance with 3 or 4 convolutional blocks to see whether going 'deeper' with the model is helpful for our tasks.  ...  Our method contains three convolutional blocks and encodes multi-scale features from different layers in one loss function. Going deeper did not make a big difference.  ... 
doi:10.1109/icip.2017.8297006 dblp:conf/icip/ShenZZ17 fatcat:24ylfbef65atfjdkw6czar5eei

DeepCaps: Going Deeper With Capsule Networks

Jathushan Rajasegaran, Vinoj Jayasundara, Sandaru Jayasekara, Hirunima Jayasekara, Suranga Seneviratne, Ranga Rodrigo
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Drawing intuition from the success achieved by Convolutional Neural Networks (CNNs) by going deeper, we introduce DeepCaps, a deep capsule network architecture which uses a novel 3D convolution based  ...  Capsule Network is a promising concept in deep learning, yet its true potential is not fully realized thus far, providing sub-par performance on several key benchmark datasets with complex data.  ...  We believe, to the best of our knowledge, that this is the first attempt to go deeper with capsule networks.  ... 
doi:10.1109/cvpr.2019.01098 dblp:conf/cvpr/RajasegaranJJJS19 fatcat:lazlwrxbkbbdniy6q2bjre632i

DeepCaps: Going Deeper with Capsule Networks [article]

Jathushan Rajasegaran, Vinoj Jayasundara, Sandaru Jayasekara, Hirunima Jayasekara, Suranga Seneviratne, Ranga Rodrigo
2019 arXiv   pre-print
Drawing intuition from the success achieved by Convolutional Neural Networks (CNNs) by going deeper, we introduce DeepCaps, a deep capsule network architecture which uses a novel 3D convolution based  ...  Capsule Network is a promising concept in deep learning, yet its true potential is not fully realized thus far, providing sub-par performance on several key benchmark datasets with complex data.  ...  We believe, to the best of our knowledge, that this is the first attempt to go deeper with capsule networks.  ... 
arXiv:1904.09546v1 fatcat:gi4c7xlb2neszkpuebeltzpnlq

Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections [article]

Xiao-Jiao Mao, Chunhua Shen, Yu-Bin Yang
2016 arXiv   pre-print
We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum.  ...  Significantly, with the large capacity, we can handle different levels of noises using a single model.  ...  Our 10-layer network outperforms the compared methods already, and we achieve better performance with deeper networks.  ... 
arXiv:1603.09056v2 fatcat:iusp33lw5vh43l6p4xsuookk6m
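The symmetric skip-layer connections this abstract describes can be illustrated with a toy, shape-only sketch; the `conv_like` stand-in below is an assumption for illustration, not the paper's actual layers:

```python
import math

def conv_like(v):
    # Stand-in for a convolutional (or de-convolutional) layer:
    # any shape-preserving transform will do for this sketch.
    return [math.tanh(x) for x in v]

def encoder_decoder_with_skips(v, depth=3):
    """Toy symmetric encoder-decoder: decoder layer i adds, element-wise,
    the feature map of the mirrored encoder layer (depth - 1 - i).
    The shortcut paths are what speed up convergence in the paper's setup."""
    feats = []
    h = list(v)
    for _ in range(depth):                      # encoder path: save each feature map
        h = conv_like(h)
        feats.append(h)
    for i in range(depth):                      # decoder path with symmetric skips
        h = [a + b for a, b in zip(conv_like(h), feats[depth - 1 - i])]
    return h

out = encoder_decoder_with_skips([0.5, -0.2, 1.0])
```

The point of the symmetry is that every decoder stage receives a same-resolution feature map from its encoder twin, so detail lost to down-sampling can be reinjected directly.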

Improved Policy Networks for Computer Go [chapter]

Tristan Cazenave
2017 Lecture Notes in Computer Science  
Golois uses residual policy networks to play Go. Two improvements to these residual policy networks are proposed and tested. The first one is to use three output planes.  ...  Conclusion We evaluated two improvements to deep residual networks for computer Go. Using three output planes enables the networks to generalize better and reach a greater accuracy.  ...  A new residual layer with Spatial Batch Normalization has been shown to perform better than existing residual layers.  ... 
doi:10.1007/978-3-319-71649-7_8 fatcat:kxr3vl4t7rcfphgicnlajrtloe
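The residual layer with Spatial Batch Normalization mentioned here follows the standard identity-shortcut pattern; a minimal plain-Python sketch (the `layer` stand-in is an assumption, not Golois's actual network) looks like:

```python
def layer(v):
    # Stand-in for convolution + spatial batch normalization + ReLU:
    # normalize to zero mean / unit variance, then clamp negatives.
    mean = sum(v) / len(v)
    var = sum((x - mean) ** 2 for x in v) / len(v)
    return [max(0.0, (x - mean) / (var ** 0.5 + 1e-5)) for x in v]

def residual_layer(v):
    """Identity shortcut: the stacked layers learn only a residual
    correction on top of the input, which keeps gradients flowing
    in deep policy networks."""
    return [x + r for x, r in zip(v, layer(layer(v)))]

out = residual_layer([1.0, 2.0, 3.0, 4.0])
```

Where the residual branch outputs zero, the input passes through unchanged, which is exactly why very deep stacks of such layers remain trainable.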

Towards Deeper Generative Architectures for GANs using Dense connections [article]

Samarth Tripathi, Renbo Tu
2018 arXiv   pre-print
We have experimented with different numbers of layers and with inserting these connections in different sections of the network.  ...  Our findings suggest that networks implemented with the connections produce better images than the baseline, and that the number of connections added has only a slight effect on the result.  ...  As we go deeper, adding more layers results in an exponential increase in feature maps being passed on to the next layer, which increases computation and decreases performance.  ... 
arXiv:1804.11031v2 fatcat:yu7jwv676zburpycgu4fqpsspu

Linear regression on a set of selected templates from a pool of randomly generated templates

Peter Taraba
2021 Machine Learning with Applications  
With these templates, we use linear and logistic regression and achieve high accuracy, comparable with deep neural networks.  ...  For the MNIST dataset we do so using max convolutions, whose parameters are generated directly from training images for the digit recognition problem, hence we call them max convolution templates.  ...  This means once we have probability features, there might be no reason to go deeper.  ... 
doi:10.1016/j.mlwa.2021.100126 fatcat:riiszkw23bewxbon2zbx5qpeiq

A Fast Dense Spectral–Spatial Convolution Network Framework for Hyperspectral Images Classification

Wenju Wang, Shuguang Dou, Zhongmin Jiang, Liujie Sun
2018 Remote Sensing  
Inspired by the SSRN and to alleviate its problems, we aimed at building a deeper convolution network that can learn deeper spectral and spatial features separately, but much faster.  ...  Third, SSRN has a deeper CNN structure than other deep learning methods. Early work showed that the deeper a CNN is, the higher the accuracy.  ...  Going Deeper with Densely-Connected Structures: assume that the CNN has l convolution layers, X_l is the output of the l-th layer, and H_l(·) represents the complex nonlinear  ... 
doi:10.3390/rs10071068 fatcat:k3pxjsvaurepzkx7lgjipehpkq
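The densely-connected structure the snippet starts to define, X_l = H_l([X_0, ..., X_{l-1}]), can be sketched by tracking channel counts alone; the `H` stand-in and the sizes below are illustrative assumptions, not the paper's architecture:

```python
def H(channels, growth=12):
    # Stand-in for the composite function H_l(.): consumes the concatenation
    # of all previously produced channels and emits `growth` new ones
    # (here it just averages, since only the shapes matter for the sketch).
    avg = sum(channels) / len(channels)
    return [avg] * growth

def dense_block(x0, layers=4, growth=12):
    """Each layer sees the concatenation of the block input and every
    earlier layer's output: X_l = H_l([X_0, ..., X_{l-1}]).
    Channel count therefore grows linearly by `growth` per layer."""
    feats = list(x0)                 # running concatenation of X_0, X_1, ...
    for _ in range(layers):
        feats += H(feats, growth)    # append this layer's new channels
    return feats

out = dense_block([0.1, 0.2, 0.3])   # 3 input channels
```

With 3 input channels, 4 layers, and a growth rate of 12, the block's output concatenation has 3 + 4 * 12 = 51 channels; every earlier feature map stays directly reachable, which is the reuse that dense connectivity buys.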
Showing results 1–15 of 41,364