103,663 Hits in 3.6 sec

Deep Neural Network with l2-Norm Unit for Brain Lesions Detection [chapter]

Mina Rezaei, Haojin Yang, Christoph Meinel
2017 Lecture Notes in Computer Science  
We propose a new operating unit which receives features from several projections of a subset of units in the bottom layer and computes a normalized l2-norm for the next layer.  ...  We evaluated the proposed approach on two different CNN architectures and a number of popular benchmark datasets. The experimental results demonstrate the superior ability of the proposed approach.  ...  Table 1: Brain lesion classification performance of the re-designed ResNet architecture using the l2-norm unit.  ... 
doi:10.1007/978-3-319-70093-9_85 fatcat:yynb4tmkkndb7muxlxpy2cytwi
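The snippet describes a unit that projects a subset of bottom-layer activations and emits a normalized l2-norm. A minimal numpy sketch of that idea, where the projection matrix `W`, the subset size, and the normalization by the number of projections are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def l2_norm_unit(x, W):
    """Illustrative l2-norm unit: project a subset of bottom-layer
    activations x through several linear projections (the rows of W),
    then emit the l2-norm of the projected vector, normalized by the
    number of projections."""
    z = W @ x                           # k projections of the input subset
    return np.linalg.norm(z) / np.sqrt(W.shape[0])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)              # subset of bottom-layer units
W = rng.standard_normal((4, 8))         # 4 projection vectors
y = l2_norm_unit(x, W)                  # single non-negative scalar output
```

In a re-designed ResNet, such a unit would replace (or follow) a conventional activation, feeding one pooled scalar per group of units to the next layer.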

HyCAD-OCT: A Hybrid Computer-Aided Diagnosis of Retinopathy by Optical Coherence Tomography Integrating Machine Learning and Feature Maps Localization

Mohamed Ramzy Ibrahim, Karma M. Fathalla, Sherin M. Youssef
2020 Applied Sciences  
A new modified deep learning architecture (Norm-VGG16) is introduced, integrating a kernel regularizer.  ...  The proposed system assimilates a range of techniques including RoI localization and feature extraction, followed by classification and diagnosis.  ...  Feature Fusion and Classification: A set of 512 CNN-based features is extracted per image from our Norm-VGG16 network architecture. The features are extracted from the global average pooling layer.  ... 
doi:10.3390/app10144716 fatcat:cyrxxr4u2neadkmnfma2wwoeni

Deep neural network ensemble by data augmentation and bagging for skin lesion classification [article]

Manik Goyal, Jagath C. Rajapakse
2018 arXiv   pre-print
The DNN architectures are combined into an ensemble using a 1×1 convolution for fusion in a meta-learning layer.  ...  This work summarizes our submission for Task 3: Disease Classification of the ISIC 2018 challenge in Skin Lesion Analysis Towards Melanoma Detection.  ...  We build and train an ensemble of CNN architectures (DABEA) for two-class classification of skin lesions.  ... 
arXiv:1807.05496v2 fatcat:rv6q7iwjlrhqdni63yfr3iogou

Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint [article]

Xanadu Halkias, Sebastien Paris, Herve Glotin
2013 arXiv   pre-print
In this paper we present a theoretical approach for sparse constraints in the DBN using the mixed norm for both non-overlapping and overlapping groups.  ...  We explore how these constraints affect the classification accuracy for digit recognition in three different datasets (MNIST, USPS, RIMES) and provide initial estimations of their usefulness by altering  ...  probabilities for the vanilla RBM and the mixed-norm RBM using a batch of the USPS data set, and classification accuracy for the USPS data set using the different architectures. From Table 1 we can infer  ... 
arXiv:1301.3533v2 fatcat:l2cthbunsvcjzf4ynf65ore57u
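The mixed norm referenced here is typically the l1/l2 norm: an l2 norm within each group of coefficients, summed (l1) across groups, which zeroes out whole groups at once. A small numpy sketch under that common definition (the grouping itself is an illustrative choice):

```python
import numpy as np

def mixed_norm(w, groups):
    """l1/l2 mixed norm: the l2 norm within each group of coefficients,
    summed (l1) across groups. `groups` is a list of index lists;
    overlapping groups are allowed, each contributing its own l2 term."""
    return float(sum(np.linalg.norm(w[g]) for g in groups))

w = np.array([3.0, 4.0, 0.0, 5.0])
nonoverlap = mixed_norm(w, [[0, 1], [2, 3]])   # ||(3,4)|| + ||(0,5)|| = 5 + 5
overlap = mixed_norm(w, [[0, 1], [1, 2]])      # ||(3,4)|| + ||(4,0)|| = 5 + 4
```

Used as a penalty during RBM training, this term encourages entire groups of hidden-unit parameters to switch off together rather than sparsifying individual weights independently.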

Evaluation of Complexity Measures for Deep Learning Generalization in Medical Image Analysis [article]

Aleksandar Vakanski, Min Xian
2021 arXiv   pre-print
A better understanding of the generalization capacity of models on new images is crucial for clinicians' trust in deep learning.  ...  The results indicate that PAC-Bayes flatness-based and path norm-based measures produce the most consistent explanation for the combination of models and data.  ...  Supported by the Center for Modeling Complex Interactions at the University of Idaho through National Institutes of Health Award P20GM104420.  ... 
arXiv:2103.03328v2 fatcat:6cmc76svtvb4viranh24qabsty
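One of the complexity measures named above, the path norm, has a standard closed form for bias-free ReLU networks: forward an all-ones input through the network with every weight squared, then take the square root of the summed output. A minimal numpy sketch of that computation (the layer list is an assumed representation, not the paper's code):

```python
import numpy as np

def path_norm(weights):
    """Path norm of a bias-free ReLU network: the sum over all
    input-to-output paths of the product of squared weights along the
    path, computed by forwarding an all-ones vector with every weight
    squared, then taking the square root."""
    v = np.ones(weights[0].shape[1])
    for W in weights:                   # weights listed input -> output
        v = (W ** 2) @ v
    return float(np.sqrt(v.sum()))

# one-layer example: paths are just the two weights, 3 and 4
single = path_norm([np.array([[3.0, 4.0]])])   # sqrt(9 + 16) = 5
```

Larger path norms indicate a richer effective hypothesis class, which is why the measure correlates with generalization gaps.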

Learning the Structure of Deep Architectures Using L1 Regularization

Praveen Kulkarni, Joaquin Zepeda, Frederic Jurie, Patrick Pérez, Louis Chevallier
2015 Proceedings of the British Machine Vision Conference 2015  
The architecture we consider consists of a sequence of fully-connected layers, with a diagonal matrix between them.  ...  We present a simple algorithm to solve the proposed formulation and demonstrate it experimentally on a standard image classification benchmark. We can express the architecture in Fig.  ... 
doi:10.5244/c.29.23 dblp:conf/bmvc/KulkarniZJPC15 fatcat:5446jqhc4jesnkh27axnhb5vmi
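The described architecture, fully-connected layers with a learned diagonal matrix between them, lends itself to a short sketch: an l1 penalty on the diagonal entries drives them to zero, pruning hidden units. A minimal numpy version under those stated assumptions (function names and the ReLU choice are illustrative):

```python
import numpy as np

def forward(x, W1, d, W2):
    """Two fully-connected layers with a learned diagonal matrix
    (stored as the vector d) between them. Zeroed entries of d remove
    the corresponding hidden units from the computation entirely."""
    h = np.maximum(W1 @ x, 0.0)        # first FC layer + ReLU
    return W2 @ (d * h)                # diagonal scaling, second FC layer

def l1_penalty(d, lam):
    """l1 regularizer on the diagonal, encouraging exact zeros."""
    return lam * np.abs(d).sum()
```

Training minimizes the task loss plus `l1_penalty(d, lam)`; units whose diagonal entry reaches zero can then be dropped from `W1` and `W2`, shrinking the learned structure.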

Generative Adversarial Perturbations [article]

Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge Belongie
2018 arXiv   pre-print
We also demonstrate that similar architectures can achieve impressive results in fooling classification and semantic segmentation models, obviating the need for hand-crafting attack methods for each task  ...  Using extensive experiments on challenging high-resolution datasets such as ImageNet and Cityscapes, we show that our perturbations achieve high fooling rates with small perturbation norms.  ...  Similar to [35], a value of 2000 is set as the L2-norm threshold of the universal perturbation, and a value of 10 is set for the L∞-norm when images are considered in the [0, 255] range.  ... 
arXiv:1712.02328v3 fatcat:wazc637kn5amtdkbgmxxupkrxm
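The norm thresholds quoted in the snippet (L2 of 2000, L∞ of 10 in [0, 255]) correspond to the standard projections onto norm balls used to keep a perturbation small. A short numpy sketch of those two projections (the defaults mirror the snippet's values; everything else is generic):

```python
import numpy as np

def project_l2(delta, radius=2000.0):
    """Rescale a perturbation so its l2 norm does not exceed `radius`."""
    n = np.linalg.norm(delta)
    return delta if n <= radius else delta * (radius / n)

def project_linf(delta, bound=10.0):
    """Clip each per-pixel perturbation to [-bound, bound],
    for images represented in the [0, 255] range."""
    return np.clip(delta, -bound, bound)
```

In a generative attack, the generator's raw output is passed through one of these projections before being added to the image, guaranteeing the norm constraint by construction.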

Sparsifying Word Representations for Deep Unordered Sentence Modeling

Prasanna Sattigeri, Jayaraman J. Thiagarajan
2016 Proceedings of the 1st Workshop on Representation Learning for NLP  
The proposed approach produces competitive results in sentiment and topic classification tasks with a high degree of sparsity.  ...  In this paper, we introduce an architecture to infer the appropriate sparsity pattern for the word embeddings while learning the sentence composition in a deep network.  ...  Figure 5(a) shows the mean ℓ1-norm of each dimension of the word vector across all the words in the vocabulary for the SST sentiment classification dataset.  ... 
doi:10.18653/v1/w16-1624 dblp:conf/rep4nlp/SattigeriT16 fatcat:avyl6hk4frachgg2cndfw5azmq

Neuro-Inspired Deep Neural Networks with Sparse, Strong Activations [article]

Metehan Cekic, Can Bakiskan, Upamanyu Madhow
2022 arXiv   pre-print
Experiments with standard image classification tasks on CIFAR-10 demonstrate that, relative to baseline end-to-end trained architectures, our proposed architecture (a) leads to sparser activations (with  ...  Instead of batch norm, we use divisive normalization of activations (suppressing weak outputs using strong outputs), along with implicit ℓ_2 normalization of neuronal weights.  ...  Fig. 4: Comparison of classification accuracies as a function of noise σ.  ... 
arXiv:2202.13074v3 fatcat:sfr3wdfu4ncrffviaebp5rapdm
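Divisive normalization, as described in the snippet, divides each activation by a pooled measure of the layer's overall activity, so strong outputs suppress weak ones. A minimal numpy sketch; the pooling neighborhood (here, the whole layer) and the constant `sigma` are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

def divisive_norm(a, sigma=1.0):
    """Divisive normalization sketch: each activation is divided by the
    pooled magnitude of the layer's activations plus a constant, so a
    few strong responses dominate and weak ones are suppressed."""
    return a / (sigma + np.linalg.norm(a))
```

Unlike batch norm, this requires no running batch statistics; the suppression depends only on the current activation vector, which is what yields the sparse, strong responses the abstract reports.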

Set Norm and Equivariant Skip Connections: Putting the Deep in Deep Sets [article]

Lily H. Zhang, Veronica Tozzo, John M. Higgins, Rajesh Ranganath
2022 arXiv   pre-print
Additionally, layer norm, the normalization of choice in Set Transformer, can hurt performance by removing information useful for prediction.  ...  We additionally introduce Flow-RBC, a new single-cell dataset and real-world application of permutation invariant prediction.  ...  Acknowledgements This work was supported by NIH/NHLBI Award R01HL148248, NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science, a DeepMind Fellowship, and NIH  ... 
arXiv:2206.11925v2 fatcat:t73s6zhfdvcpjj35mcj6y22gxa
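The contrast the snippet draws with layer norm suggests a set-level normalization: one mean and standard deviation computed over the entire set and all features, rather than per element. A minimal numpy sketch under that reading (the exact statistics used by the paper's Set Norm may differ):

```python
import numpy as np

def set_norm(X, eps=1e-5):
    """Set-level normalization sketch: standardize a set of element
    embeddings (rows of X) with a single mean and std over the whole
    set and all features, preserving cross-element differences that
    per-element layer norm would erase."""
    return (X - X.mean()) / (X.std() + eps)
```

Because every element is shifted and scaled identically, information about how elements differ from one another survives the normalization, which is the property the abstract argues layer norm destroys.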

Generative Adversarial Perturbations

Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge Belongie
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
We also demonstrate that similar architectures can achieve impressive results in fooling both classification and semantic segmentation models, obviating the need for hand-crafting attack methods for each  ...  Using extensive experiments on challenging high-resolution datasets such as ImageNet and Cityscapes, we show that our perturbations achieve high fooling rates with small perturbation norms.  ...  This work is supported in part by a Google Focused Research Award and a Facebook equipment donation.  ... 
doi:10.1109/cvpr.2018.00465 dblp:conf/cvpr/PoursaeedKGB18 fatcat:gflwwbhp6fhe5pvbfthsezpfqa

Weakly Supervised Segmentation of Cracks on Solar Cells using Normalized Lp Norm [article]

Martin Mayr, Mathis Hoffmann, Andreas Maier, Vincent Christlein
2020 arXiv   pre-print
We use a modified ResNet-50 to derive a segmentation from network activation maps. We use defect classification as a surrogate task to train the network.  ...  In this work, we propose a weakly supervised learning strategy that only uses image-level annotations to obtain a method that is capable of segmenting cracks on EL images of solar cells.  ...  Of course, neither of them should be segmented as cracks. Architecture: We start with a general classification architecture and modify it to obtain the segmentation result.  ... 
arXiv:2001.11248v1 fatcat:uhlixvabxfbptouxy6qkjhoqzq
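The normalized Lp norm in the title is a pooling operator, (mean |a|^p)^(1/p), that interpolates between average pooling (p = 1) and max pooling (p → ∞). A short numpy sketch of the operator; the exponent `p` below is an illustrative default, not the paper's tuned value:

```python
import numpy as np

def normalized_lp_pool(act_map, p=4):
    """Normalized Lp-norm pooling of an activation map:
    (mean |a|^p)^(1/p). Larger p emphasizes strong, localized
    (crack-like) activations over the background while remaining
    differentiable, unlike a hard max."""
    a = np.abs(np.asarray(act_map, dtype=float))
    return float((a ** p).mean() ** (1.0 / p))
```

Training the classifier through this pooling of the activation maps pushes crack evidence into sharp peaks, which is what makes the maps usable as weak segmentations at test time.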

1D Convolutional Neural Network Models for Sleep Arousal Detection [article]

Morteza Zabihi, Ali Bahrami Rad, Serkan Kiranyaz, Simo Särkkä, Moncef Gabbouj
2019 arXiv   pre-print
Sleep arousals transition the depth of sleep to a more superficial stage. The occurrence of such events is often considered as a protective mechanism to alert the body of harmful stimuli.  ...  A detailed set of evaluations is performed on the benchmark dataset provided by PhysioNet/Computing in Cardiology Challenge 2018, and the results show that the best 1D CNN model has achieved an average  ...  [Figure: the topology of Model 3, built from repeated Convolution → Batch Norm → ReLU blocks (Block A, Block B) with max pooling.]  ... 
arXiv:1903.01552v1 fatcat:uaujed3xqrebfkchssqf7vpv3e
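The figure residue in the entry describes a topology of repeated Convolution → Batch Norm → ReLU blocks. A minimal numpy sketch of one such 1D block; the single-signal "batch norm" here standardizes one sequence rather than batch statistics, a simplification for illustration:

```python
import numpy as np

def conv1d(x, k):
    """Valid 1D cross-correlation of signal x with kernel k."""
    x, k = np.asarray(x, float), np.asarray(k, float)
    n = len(k)
    return np.array([x[i:i + n] @ k for i in range(len(x) - n + 1)])

def conv_bn_relu(x, k, eps=1e-5):
    """One Convolution -> Batch Norm -> ReLU block, the repeating unit
    of the described models (normalization here uses the statistics of
    a single signal as a stand-in for batch statistics)."""
    z = conv1d(x, k)
    z = (z - z.mean()) / (z.std() + eps)
    return np.maximum(z, 0.0)
```

Stacking several such blocks with occasional max pooling reproduces the coarse structure of the model sketched in the figure.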

Scale-invariant learning and convolutional networks

Soumith Chintala, Marc'Aurelio Ranzato, Arthur Szlam, Yuandong Tian, Mark Tygert, Wojciech Zaremba
2017 Applied and Computational Harmonic Analysis  
In the specific application to supervised learning for convnets, a simple scale-invariant classification stage is more robust than multinomial logistic regression, appears to result in somewhat lower errors  ...  Multinomial logistic regression and other classification schemes used in conjunction with convolutional networks (convnets) were designed largely before the rise of the now standard coupling with convnets  ...  The spectral norm of a vector viewed as a matrix having only one column or one row is the same as the Euclidean norm of the vector; the Euclidean norm of a matrix viewed as a vector is the same as the  ... 
doi:10.1016/j.acha.2016.06.005 fatcat:lm2xonu4kffivlhhhlyaetsqoi

Vision-Based Autonomous Navigation Using Supervised Learning Techniques [chapter]

Jefferson R. Souza, Gustavo Pessin, Fernando S. Osório, Denis F. Wolf
2011 IFIP Advances in Information and Communication Technology  
This paper presents a mobile control system capable of learning behaviors from human examples.  ...  It also uses supervised learning techniques which work with different levels of memory of the templates.  ...  Half, Double, and Equal denote the different architectures tested in this work; for example, for LMT = 3, the number of neurons in the intermediate layer of a Rprop MLP is varied across the tested architectures  ... 
doi:10.1007/978-3-642-23957-1_2 fatcat:yjgizo4iqjgkjidhzaekgzjvby
Showing results 1–15 of 103,663