
Improving the Certified Robustness of Neural Networks via Consistency Regularization [article]

Mengting Xu, Tao Zhang, Zhongnian Li, Daoqiang Zhang
2021 arXiv   pre-print
A range of defense methods have been proposed to improve the robustness of neural networks against adversarial examples, among which provable defense methods have been demonstrated to be effective for training neural networks that are certifiably robust to the attacker.  ...
arXiv:2012.13103v2 fatcat:ulg5a53muna63jenjwjvigmotm
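As a hedged sketch only (the paper's exact objective is not given in the snippet), consistency regularization for robust training can be illustrated as a clean-input cross-entropy plus a KL term tying predictions on perturbed inputs to predictions on clean inputs; `model`, `x_adv`, and the weight `lam` are illustrative assumptions:

```python
# Illustrative consistency-regularization loss; NOT the paper's exact
# objective. `x_adv` is assumed to be a perturbed copy of `x`.
import torch.nn.functional as F

def consistency_loss(model, x, x_adv, y, lam=1.0):
    logits_clean = model(x)    # predictions on clean inputs
    logits_adv = model(x_adv)  # predictions on perturbed inputs
    ce = F.cross_entropy(logits_clean, y)
    # Penalize disagreement between the two views of the same input.
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1),
                  reduction="batchmean")
    return ce + lam * kl
```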

The Benefit of the Doubt: Uncertainty Aware Sensing for Edge Computing Platforms [article]

Lorena Qendro, Jagmohan Chauhan, Alberto Gil C. P. Ramos, Cecilia Mascolo
2021 arXiv   pre-print
Our layerwise distribution approximation to the convolution layer cascades through the network, providing uncertainty estimates in a single run, which ensures minimal overhead, especially compared with  ...  Neural networks (NNs) lack measures of "reliability" that would enable reasoning over their predictions.  ...  In the same way as using dropout on the FC layer, MCDrop can be applied to the individual convolutions in convolutional neural networks [18].  ...
arXiv:2102.05956v1 fatcat:6kiliqxbyvelvh4xcordc4blki
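For contrast with the paper's single-run approach, the MCDrop baseline the snippet mentions keeps dropout active at test time and averages several stochastic forward passes, using the spread of the samples as the uncertainty estimate. A minimal sketch, with illustrative layer sizes and dropout rate:

```python
# Minimal MC-dropout sketch (the baseline named in the snippet), not the
# paper's layerwise distribution approximation. Rates/shapes illustrative.
import torch
import torch.nn as nn

class MCDropConv(nn.Module):
    def __init__(self, in_ch, out_ch, p=0.2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.drop = nn.Dropout2d(p)  # stays active during sampling

    def forward(self, x):
        return torch.relu(self.drop(self.conv(x)))

@torch.no_grad()
def mc_predict(model, x, n_samples=20):
    model.train()  # keep dropout stochastic at inference time
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)  # prediction and its uncertainty
```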

Learning Low-Precision Structured Subnetworks Using Joint Layerwise Channel Pruning and Uniform Quantization

Xinyu Zhang, Ian Colbert, Srinjoy Das
2022 Applied Sciences  
Pruning and quantization are core techniques used to reduce the inference costs of deep neural networks.  ...  Using our proposed algorithms, we demonstrate increased performance per memory footprint over existing solutions across a range of discriminative and generative networks.  ...  In Section 2, we review prior work in neural network pruning and quantization.  ... 
doi:10.3390/app12157829 fatcat:6ihyiujmordovf3stohpzue22q
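The two operations the paper combines can be illustrated separately; the sketch below assumes magnitude-based channel scoring and symmetric per-tensor quantization, which may differ from the paper's joint layerwise algorithm:

```python
# Hedged sketch of the two building blocks: channel pruning and uniform
# quantization. The paper's joint layerwise optimization is more involved.
import torch

def prune_channels(weight, keep_ratio=0.5):
    """weight: conv tensor (O, I, kH, kW); keep channels with largest L1."""
    scores = weight.abs().flatten(1).sum(dim=1)
    k = max(1, int(keep_ratio * weight.shape[0]))
    idx = scores.topk(k).indices.sort().values
    return weight[idx], idx

def uniform_quantize(weight, num_bits=8):
    """Symmetric uniform ("fake") quantization to signed num_bits levels."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = weight.abs().max().clamp(min=1e-8) / qmax
    return (weight / scale).round().clamp(-qmax - 1, qmax) * scale
```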

Deep Context-Aware Kernel Networks [article]

Mingyuan Jiu, Hichem Sahbi
2019 arXiv   pre-print
The solution of this objective function defines a particular deep network architecture whose parameters correspond to different variants of learned contexts, including layerwise, stationary and classwise  ...  In the particular scenario of kernel machines, context-aware kernel design aims at learning positive semi-definite similarity functions which return high values not only when data share similar contents  ...  Note that with the resurgence of deep convolutional neural networks (CNNs) [36], [38], further "impressive" progress has recently been observed in the aforementioned image classification methods.  ...
arXiv:1912.12735v1 fatcat:wypxuigr4fegbkk3tkzmvzwul4
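As an illustrative equation only (the symbols below are assumptions, not the paper's exact formulation), a context-aware kernel can be written as a recursion that mixes a content term with similarities propagated through spatial neighbors:

```latex
% Illustrative context-aware kernel recursion; K_c, P and \alpha are
% assumed notation, not taken from the paper.
\[
  K^{(t+1)} \;=\; K_c \;+\; \alpha \, P \, K^{(t)} P^{\top}
\]
% K_c: content similarity matrix, P: context (neighborhood) matrix,
% \alpha \ge 0: weight of contextual propagation. Each iterate stays
% positive semi-definite because P K^{(t)} P^{\top} is PSD whenever
% K^{(t)} is, and sums of PSD matrices are PSD.
```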

Towards Demystifying Subliminal Persuasiveness: Using XAI-Techniques to Highlight Persuasive Markers of Public Speeches [chapter]

Klaus Weber, Lukas Tinnes, Tobias Huber, Alexander Heimerl, Marc-Leon Reinecker, Eva Pohlen, Elisabeth André
2020 Lecture Notes in Computer Science  
...  and trained a neural network capable of predicting the degree of perceived convincingness based on visual input only.  ...  Our results show that the neural network learned to focus on the person, more specifically their posture and contours, as well as on their hands and face.  ...  However, since people often seem not to be aware of the importance of body-language-based argumentation, we trained a convolutional neural network which can predict perceived persuasiveness solely based  ...
doi:10.1007/978-3-030-51924-7_7 fatcat:2dmmt6dpkvgb3njsf2rxry3fya
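The snippet does not say which XAI technique produced the highlighted regions, so the sketch below shows only the simplest gradient-based saliency map as a stand-in; the model, input shape, and target index are assumptions:

```python
# Plain gradient saliency as an illustrative XAI technique; the paper may
# use a different attribution method (e.g., LRP). Shapes are assumptions.
import torch

def saliency_map(model, image, target_index):
    """image: (C, H, W) tensor; returns an (H, W) saliency heatmap."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_index]
    score.backward()
    return image.grad.abs().max(dim=0).values  # max over channels
```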

Cellular automata as convolutional neural networks [article]

William Gilpin
2018 arXiv   pre-print
We show that any CA may readily be represented using a convolutional neural network with a network-in-network architecture.  ...  Our results suggest how the entropy of a physical process can affect its representation when learned by neural networks.  ...  Figure 2: Training 2560 convolutional neural networks on random cellular automata.  ...
arXiv:1809.02942v1 fatcat:wwggh25xxbc5fpl5ehruhdgyoq
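A concrete instance of the paper's claim fits in a few lines: Conway's Game of Life, a binary CA, becomes a single 3x3 convolution (the neighbor count) followed by a pointwise rule, which is exactly the network-in-network shape. The circular padding below is an assumption about boundary conditions:

```python
# Game of Life as a convolution + pointwise rule, illustrating how a CA
# maps onto a CNN layer. Circular padding assumes a toroidal grid.
import torch
import torch.nn.functional as F

def life_step(grid):
    """grid: (1, 1, H, W) float tensor of 0s and 1s; returns next state."""
    kernel = torch.ones(1, 1, 3, 3)
    kernel[0, 0, 1, 1] = 0.0  # count the 8 neighbors, not the cell itself
    neighbors = F.conv2d(F.pad(grid, (1, 1, 1, 1), mode="circular"), kernel)
    born = (grid == 0) & (neighbors == 3)
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    return (born | survive).float()
```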

Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE) [article]

Jacques Kaiser and Hesham Mostafa and Emre Neftci
2019 arXiv   pre-print
A growing body of work underlines striking similarities between biological neural networks and recurrent, binary neural networks.  ...  A relatively smaller body of work, however, discusses similarities between learning dynamics employed in deep artificial neural networks and synaptic plasticity in spiking neural networks.  ...  Decoupled Neural Interfaces (DNI) were proposed to mitigate layerwise locking in training deep neural networks (Jaderberg et al., 2016).  ...
arXiv:1811.10766v3 fatcat:36i7zpgd7beirjbudi36sus5jy
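DECOLLE itself operates on spiking dynamics with surrogate gradients; the non-spiking sketch below only illustrates its layerwise structure, where each layer trains against its own fixed random readout and detaches its output so no global backward pass is needed. Names and sizes are illustrative:

```python
# Layerwise local-loss structure in the spirit of DECOLLE, minus the
# spiking dynamics. The fixed random readout per layer is the key idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalLayer(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.fc = nn.Linear(in_dim, hid_dim)
        self.readout = nn.Linear(hid_dim, n_classes)
        for p in self.readout.parameters():
            p.requires_grad_(False)  # readout stays fixed and random

    def forward(self, x, y):
        h = torch.relu(self.fc(x))
        local_loss = F.cross_entropy(self.readout(h), y)
        # Detach: gradients from deeper layers never reach this one.
        return h.detach(), local_loss
```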

Alzheimer's Disease: A Survey

Harshitha, Gowthami Chamarajan, Charishma Y
2021 International Journal of Artificial Intelligence  
Based on our survey, we came across many methods such as the Convolutional Neural Network (CNN), wherein each brain area is split into small three-dimensional patches which act as input samples for the CNN.  ...  The other method used was Deep Neural Networks (DNN), where the brain MRI images are segmented to extract the brain chambers and then features are extracted from the segmented area.  ...  Deep Convolutional Neural Network Jyoti & Zhang [17] have proposed a method which performs four basic operations: convolution, batch normalization, rectified linear unit and pooling.  ...
doi:10.36079/lamintang.ijai-0801.220 fatcat:s5375uw5j5hxlci3yj4tz6icfu
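The four basic operations named in the snippet compose directly into a standard block; a minimal rendering, with placeholder channel counts:

```python
# Convolution -> batch norm -> ReLU -> pooling, the four operations the
# cited method is described as performing. Channel sizes illustrative.
import torch.nn as nn

def conv_bn_relu_pool(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2),
    )
```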

Where Should We Begin? A Low-Level Exploration of Weight Initialization Impact on Quantized Behaviour of Deep Neural Networks

Stone Yun, Alexander Wong
2021 Journal of Computational Vision and Imaging Systems  
With the proliferation of deep convolutional neural network (CNN) algorithms for mobile processing, limited precision quantization has become an essential tool for CNN efficiency.  ...  The fine-grained, layerwise analysis enables us to gain deep insights into how initial weight distributions affect final accuracy and quantized behaviour.  ...  Introduction Deep Convolutional Neural Networks (CNN) have enabled dramatic advances in the field of computer vision.  ...
doi:10.15353/jcvis.v6i1.3538 fatcat:b57xmmcja5cuxmalahuwneuxmy
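One way to picture the fine-grained layerwise analysis described is to fake-quantize each convolution layer and record its per-layer error, then compare the profile across weight initializations; the sketch below assumes symmetric per-tensor quantization, which may differ from the paper's setup:

```python
# Hedged sketch of a layerwise quantization-error profile. The bit width
# and symmetric per-tensor scheme are assumptions.
import torch
import torch.nn as nn

def layerwise_quant_error(model, num_bits=8):
    qmax = 2 ** (num_bits - 1) - 1
    errors = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            w = module.weight.detach()
            scale = w.abs().max().clamp(min=1e-8) / qmax
            w_q = (w / scale).round().clamp(-qmax - 1, qmax) * scale
            errors[name] = (w - w_q).pow(2).mean().item()  # per-layer MSE
    return errors
```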

Ensemble One-Dimensional Convolution Neural Networks for Skeleton-Based Action Recognition

Yangyang Xu, Jun Cheng, Lei Wang, Haiying Xia, Feng Liu, Dapeng Tao
2018 IEEE Signal Processing Letters  
Limited by the skeleton sequence representations, two-dimensional convolutional neural networks cannot be used directly, so we chose the one-dimensional convolution layer as the basic layer.  ...  In this paper, we proposed an effective yet extensible residual one-dimensional convolutional neural network as the base network; based on this network, we proposed four subnets to explore the features of  ...  First, we proposed an effective and extensible one-dimensional residual convolutional neural network as the base network.  ...
doi:10.1109/lsp.2018.2841649 fatcat:dtnehmjgivhc3mvbymp6l6piu4
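A residual one-dimensional convolution block of the kind such a base network stacks can be sketched as follows; treating the skeleton sequence as a 1D signal over time, with one channel per joint coordinate, is an assumption consistent with the snippet:

```python
# Illustrative residual 1D-conv block; x is (batch, channels, time),
# e.g. channels = joints x coordinates for a skeleton sequence.
import torch
import torch.nn as nn

class Residual1DBlock(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # identity skip connection
```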

On Implicit Filter Level Sparsity in Convolutional Neural Networks

Dushyant Mehta, Kwang In Kim, Christian Theobalt
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
We investigate filter level sparsity that emerges in convolutional neural networks (CNNs) which employ Batch Normalization and ReLU activation, and are trained with adaptive gradient descent techniques  ...  Lastly, we show that the implicit sparsity can be harnessed for neural network speedup on par with or better than explicit sparsification / pruning approaches, without needing any modifications to the typical  ...  Introduction In this work we show that filter-level sparsity emerges in certain types of feedforward convolutional neural networks.  ...
doi:10.1109/cvpr.2019.00061 dblp:conf/cvpr/MehtaKT19 fatcat:qmezbqojfvahpdrabwz4ziebfm
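Since the sparsity the paper studies shows up in the learned BatchNorm scales (a near-zero scale followed by ReLU silences a filter's feature map), one can measure it without any pruning machinery; the threshold below is an illustrative assumption:

```python
# Count filters whose BatchNorm scale has collapsed toward zero, a
# simple proxy for the implicit filter-level sparsity described.
import torch
import torch.nn as nn

def bn_filter_sparsity(model, threshold=1e-3):
    total, dead = 0, 0
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            gamma = module.weight.detach().abs()
            total += gamma.numel()
            dead += (gamma < threshold).sum().item()
    return dead / max(total, 1)  # fraction of effectively inactive filters
```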

SAWNet: A Spatially Aware Deep Neural Network for 3D Point Cloud Processing [article]

Chaitanya Kaul, Nick Pears, Suresh Manandhar
2019 arXiv   pre-print
Deep neural networks have established themselves as the state-of-the-art methodology in almost all computer vision tasks to date.  ...  In our work, we introduce a neural network layer that combines both global and local information to produce better embeddings of these points.  ...  Convolutional Neural Networks: Alexnet [18] and VGGNet [28] started the deep learning revolution with their state-of-the-art results on the Imagenet dataset.  ... 
arXiv:1905.07650v1 fatcat:ckyraevd3vftve7ylickbbqdse
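The general shape of a layer that combines global and local point information can be sketched as below; SAWNet's actual layer is more elaborate, and the pooling choice and sizes here are assumptions:

```python
# Illustrative global+local point embedding: per-point features are
# fused with a max-pooled global descriptor. Not SAWNet's exact layer.
import torch
import torch.nn as nn

class GlobalLocalEmbed(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.local = nn.Linear(in_dim, out_dim)
        self.fuse = nn.Linear(2 * out_dim, out_dim)

    def forward(self, pts):  # pts: (batch, n_points, in_dim)
        local = torch.relu(self.local(pts))
        global_feat = local.max(dim=1, keepdim=True).values  # over points
        global_feat = global_feat.expand_as(local)
        return torch.relu(self.fuse(torch.cat([local, global_feat], -1)))
```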

On Implicit Filter Level Sparsity in Convolutional Neural Networks [article]

Dushyant Mehta, Kwang In Kim, Christian Theobalt
2019 arXiv   pre-print
We investigate filter level sparsity that emerges in convolutional neural networks (CNNs) which employ Batch Normalization and ReLU activation, and are trained with adaptive gradient descent techniques  ...  Lastly, we show that the implicit sparsity can be harnessed for neural network speedup on par with or better than explicit sparsification / pruning approaches, with no modifications to the typical training  ...  Introduction In this work we show that filter-level sparsity emerges in certain types of feedforward convolutional neural networks.  ...
arXiv:1811.12495v2 fatcat:7vj37vbzrffipl3tafovfreggy

Stacks of convolutional Restricted Boltzmann Machines for shift-invariant feature learning

Mohammad Norouzi, Mani Ranjbar, Greg Mori
2009 2009 IEEE Conference on Computer Vision and Pattern Recognition  
Recently a greedy layerwise procedure was proposed to initialize weights of deep belief networks, by viewing each layer as a separate Restricted Boltzmann Machine (RBM).  ...  This framework learns a set of features that can generate the images of a specific object class.  ...  [4] developed the convolutional neural network (CNN), a specialized type of neural network in which weight sharing is employed with the result that the learned weights play the role of convolution kernels  ... 
doi:10.1109/cvpr.2009.5206577 dblp:conf/cvpr/NorouziRM09 fatcat:uh67wi37vvbtzm4nvjrcllbw3i
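The layerwise building block here is the RBM trained with contrastive divergence; a one-step (CD-1) update for a plain binary RBM is sketched below, with the convolutional variant differing mainly in tying weights across spatial positions. Learning rate and shapes are illustrative:

```python
# CD-1 update for a binary RBM, the unit stacked greedily layer by layer.
# The convolutional variant shares W across spatial positions.
import torch

def cd1_step(v0, W, b_h, b_v, lr=1e-2):
    """v0: (batch, n_visible) binary data; returns updated W, b_h, b_v."""
    ph0 = torch.sigmoid(v0 @ W + b_h)      # P(h = 1 | v0)
    h0 = torch.bernoulli(ph0)              # sampled hidden states
    pv1 = torch.sigmoid(h0 @ W.t() + b_v)  # reconstruction of the visibles
    ph1 = torch.sigmoid(pv1 @ W + b_h)     # hidden probs on reconstruction
    batch = v0.shape[0]
    dW = (v0.t() @ ph0 - pv1.t() @ ph1) / batch  # positive - negative phase
    return (W + lr * dW,
            b_h + lr * (ph0 - ph1).mean(0),
            b_v + lr * (v0 - pv1).mean(0))
```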

Stacks of convolutional Restricted Boltzmann Machines for shift-invariant feature learning

M. Norouzi, M. Ranjbar, G. Mori
2009 2009 IEEE Conference on Computer Vision and Pattern Recognition  
Recently a greedy layerwise procedure was proposed to initialize weights of deep belief networks, by viewing each layer as a separate Restricted Boltzmann Machine (RBM).  ...  This framework learns a set of features that can generate the images of a specific object class.  ...  [4] developed the convolutional neural network (CNN), a specialized type of neural network in which weight sharing is employed with the result that the learned weights play the role of convolution kernels  ... 
doi:10.1109/cvprw.2009.5206577 fatcat:i2eanucmq5a4fbsssgn26xumrm
Showing results 1 — 15 out of 381