
Activized Learning with Uniform Classification Noise

Liu Yang, Steve Hanneke
2013 International Conference on Machine Learning  
distribution satisfying a uniform classification noise condition.  ...  We prove that for any VC class, it is possible to transform any passive learning algorithm into an active learning algorithm with strong asymptotic improvements in label complexity for every nontrivial  ...  Finally, define an activizer under uniform classification noise as follows. Definition 4.  ... 
dblp:conf/icml/YangH13 fatcat:w4ye3ymbrrdtxlky6vgsfv66ka

DCNet: Noise-Robust Convolutional Neural Networks for Degradation Classification on Ancient Documents

Fitri Arnia, Khairun Saddami, Khairul Munadi
2021 Journal of Imaging  
In general, the proposed architecture performed better than MobileNet, ShuffleNet, ResNet101, and conventional machine learning (support vector machine and random forest), particularly for documents with  ...  Then, we tested the resulting models with document images containing different levels of ZMGN and speckle noise.  ...  Therefore, we argue that increasing the number of learnable parameters in DCNet is a compromise to accomplish robust degradation classification on documents with heavy noise.  ... 
doi:10.3390/jimaging7070114 fatcat:kztjmgaterhprhw6456uigenx4
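
A rough sketch of the noise conditions the excerpt describes: corrupting a grayscale document image with zero-mean Gaussian noise (ZMGN) and multiplicative speckle noise at several levels. The noise levels, the [0, 1] pixel range, and the helper names (`add_zmgn`, `add_speckle`) are illustrative assumptions, not the paper's exact test protocol.

```python
import numpy as np

def add_zmgn(image, sigma):
    """Add zero-mean Gaussian noise (ZMGN) with standard deviation
    `sigma` to an image with pixel values in [0, 1]."""
    noisy = image + np.random.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0.0, 1.0)

def add_speckle(image, variance):
    """Multiplicative speckle noise: each pixel is scaled by (1 + n),
    with n drawn from a zero-mean Gaussian of the given variance."""
    n = np.random.normal(0.0, np.sqrt(variance), image.shape)
    return np.clip(image * (1.0 + n), 0.0, 1.0)

# Corrupt a tiny synthetic "document" at increasing noise levels.
doc = np.full((64, 64), 0.9)   # bright page background
doc[20:24, 8:56] = 0.1         # one dark line of "text"
for sigma in (0.05, 0.1, 0.2):
    mse = np.mean((add_zmgn(doc, sigma) - doc) ** 2)
    print(f"ZMGN sigma={sigma}: MSE={mse:.5f}")
```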

NICE: Noise Injection and Clamping Estimation for Neural Network Quantization

Chaim Baskin, Evgenii Zheltonozhkii, Tal Rozen, Natan Liss, Yoav Chai, Eli Schwartz, Raja Giryes, Alexander M. Bronstein, Avi Mendelson
2021 Mathematics  
This leads to state-of-the-art results on various regression and classification tasks, e.g., ImageNet classification with architectures such as ResNet-18/34/50 with as low as 3-bit weights and activations  ...  The method proposed in this work trains quantized neural networks by noise injection and a learned clamping, which improve accuracy.  ...  The scheme is based on using uniform quantized parameters, additive uniform noise injection, and learning the quantization clamping range.  ... 
doi:10.3390/math9172144 fatcat:zgrkaxcoazd3vdesjviiv25vca
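
The scheme quoted above (uniform quantized parameters, additive uniform noise injection, learned clamping) can be sketched as a fake-quantization transform. This is a simplified stand-in rather than the NICE implementation: the actual method learns the clamp value by backpropagation and injects noise into only part of the weights at a time, while here `clamp` and `bits` are fixed illustrative parameters.

```python
import numpy as np

def noisy_uniform_quant(w, clamp, bits, training=True):
    """Uniform quantization with noise injection and clamping (sketch).
    Values are clipped to [-clamp, clamp] and mapped onto 2**bits
    levels; during training, hard rounding is replaced by additive
    uniform noise of one quantization step."""
    step = 2.0 * clamp / (2 ** bits - 1)   # width of one level
    w = np.clip(w, -clamp, clamp)
    if training:
        # simulate quantization error with noise in [-step/2, step/2]
        return w + np.random.uniform(-step / 2, step / 2, w.shape)
    return np.round(w / step) * step       # real quantization at inference

w = np.random.randn(6)
print(noisy_uniform_quant(w, clamp=1.0, bits=3, training=True))
print(noisy_uniform_quant(w, clamp=1.0, bits=3, training=False))
```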

Statistical Query Learning (1998; Kearns) [chapter]

Vitaly Feldman
2014 Encyclopedia of Algorithms  
INDEX TERMS: Statistical query, PAC learning, classification noise, noise-tolerant learning, SQ dimension.  ...  In the random classification noise model of Angluin and Laird [1], the label of each example given to the learning algorithm is flipped randomly and independently with some fixed probability η called the  ...  The SQ model of learning was generalized to active learning (or learning where labels are requested only for some of the points) and used to obtain new efficient noise-tolerant active learning algorithms  ... 
doi:10.1007/978-3-642-27848-8_401-2 fatcat:wpv4onrvlnfiph4zr5wpaj5zvy
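
A minimal sketch of the random classification noise (RCN) model named in the excerpt, together with the rescaling identity that makes statistical queries noise-tolerant: under RCN with rate η < 1/2, E[y_noisy f(x)] = (1 − 2η) E[y f(x)], so a correlation estimated from noisy labels can be divided by (1 − 2η) to recover the clean value. The function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_rcn(labels, eta):
    """Random classification noise (Angluin and Laird): flip each
    +/-1 label independently with probability eta."""
    flips = rng.random(labels.shape) < eta
    return np.where(flips, -labels, labels)

def denoise_correlation(noisy_corr, eta):
    """Rescale a correlation measured on noisy labels back to an
    unbiased estimate of the clean correlation (needs eta < 1/2)."""
    return noisy_corr / (1.0 - 2.0 * eta)

y = np.sign(rng.standard_normal(100_000))    # true +/-1 labels
eta = 0.2
corr = np.mean(apply_rcn(y, eta) * y)        # about 1 - 2*eta = 0.6
print(corr, denoise_correlation(corr, eta))  # second value about 1.0
```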

Statistical Query Learning [chapter]

Vitaly Feldman
2008 Encyclopedia of Algorithms  
INDEX TERMS: Statistical query, PAC learning, classification noise, noise-tolerant learning, SQ dimension.  ...  In the random classification noise model of Angluin and Laird [1], the label of each example given to the learning algorithm is flipped randomly and independently with some fixed probability η called the  ...  The SQ model of learning was generalized to active learning (or learning where labels are requested only for some of the points) and used to obtain new efficient noise-tolerant active learning algorithms  ... 
doi:10.1007/978-0-387-30162-4_401 fatcat:cerelbnlzjffxniof5arf474d4

Object Detection for Accident Prevention using Convolution Neural Network

Gaurpriya Chodanker
2019 International Journal for Research in Applied Science and Engineering Technology  
This CNN model was tested with datasets of different sizes, with and without noise. The results obtained using different random filter initialization functions are presented.  ...  [Flattened results table: S. No. | Total No. of Images in the Dataset | Filter Initialisation | Total Training Time, Accuracy, and Loss, each reported Without Noise / With Noise; data rows not recoverable]  ...  Classification of the test samples using CNN is very efficient and can outperform humans. We have achieved a maximum accuracy of 98.23% without noise and 96.3% with noisy images.  ... 
doi:10.22214/ijraset.2019.5040 fatcat:xl23svdj4nhpdkuysbf4r5tt6i
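
The excerpt reports results for "different random filter initialization functions" without naming them, so the sketch below draws convolution filters under three standard schemes (Glorot uniform, He normal, LeCun normal) purely as plausible examples of what such functions look like.

```python
import numpy as np

def init_filters(kh, kw, c_in, c_out, scheme, rng):
    """Sample conv filters of shape (kh, kw, c_in, c_out) under common
    random initializers; each scales variance by fan-in / fan-out."""
    fan_in, fan_out = kh * kw * c_in, kh * kw * c_out
    shape = (kh, kw, c_in, c_out)
    if scheme == "glorot_uniform":
        limit = np.sqrt(6.0 / (fan_in + fan_out))
        return rng.uniform(-limit, limit, shape)
    if scheme == "he_normal":
        return rng.normal(0.0, np.sqrt(2.0 / fan_in), shape)
    if scheme == "lecun_normal":
        return rng.normal(0.0, np.sqrt(1.0 / fan_in), shape)
    raise ValueError(f"unknown scheme: {scheme}")

rng = np.random.default_rng(42)
for scheme in ("glorot_uniform", "he_normal", "lecun_normal"):
    w = init_filters(3, 3, 3, 16, scheme, rng)
    print(f"{scheme:15s} std={w.std():.4f}")
```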

Sufficient Conditions for Agnostic Active Learnable

Liwei Wang
2009 Neural Information Processing Systems  
We study pool-based active learning in the presence of noise, i.e. the agnostic setting.  ...  We show that under some noise condition, if the Bayesian classification boundary and the underlying distribution are smooth to a finite order, active learning achieves polynomial improvement in the label  ...  Conclusion We show that if the Bayesian classification boundary is smooth and the distribution is bounded by a smooth function, then under some noise condition active learning achieves polynomial or exponential  ... 
dblp:conf/nips/Wang09 fatcat:yvcfc466xvb2xgv4ofujrpvmuq

Neural Classifiers with Limited Connectivity and Recurrent Readouts

Lyudmila Kushnir, Stefano Fusi
2018 Journal of Neuroscience  
Our study shows that feedforward neural classifiers with numerous long-range afferent connections can be replaced by recurrent networks with sparse long-range connectivity without sacrificing the classification  ...  This observation seems to contrast with the theoretical studies showing that for many neural network models the performance scales with the number of connections per neuron and not with the total number  ...  signal-to-noise ratio with the increasing number of learned patterns.  ... 
doi:10.1523/jneurosci.3506-17.2018 pmid:30249794 pmcid:PMC6596245 fatcat:nuxtrnhtybfcvjjsrfute3jyuq

Exploiting the Short-term to Long-term Plasticity Transition in Memristive Nanodevice Learning Architectures [article]

Christopher H. Bennett, Selina La Barbera, Adrien F. Vincent, Fabien Alibart, Damien Querlioz
2016 arXiv   pre-print
Here, we explore the integration of electrochemical metallization cell (ECM) nanodevices with tunable filamentary switching in nanoscale learning systems.  ...  In attempting to classify the MNIST database under the same conditions, conventional ELM obtains 84% classification, the imprinted, uniform device system obtains 88% classification, and the imprinted,  ...  In the uniform case, classification reaches 70% with 5 samples and approaches 100% after 10.  ... 
arXiv:1606.08366v1 fatcat:2bik4swcjrbrlmzcfrjtaakr7m
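
For context on the "conventional ELM" baseline in the excerpt: an extreme learning machine is a fixed random hidden layer followed by a closed-form least-squares readout, as in the minimal sketch below. The layer sizes and tanh nonlinearity are arbitrary choices, and nothing here models the memristive devices themselves.

```python
import numpy as np

def elm_train(X, Y, n_hidden, rng):
    """Extreme learning machine: random, untrained hidden weights plus
    a least-squares solve for the readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random features
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # closed-form readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
Y = np.eye(4)[rng.integers(0, 4, 200)]            # one-hot labels
W, b, beta = elm_train(X, Y, n_hidden=100, rng=rng)
acc = np.mean(elm_predict(X, W, b, beta).argmax(1) == Y.argmax(1))
print(f"training accuracy: {acc:.2f}")
```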

An Effective Label Noise Model for DNN Text Classification [article]

Ishan Jindal, Daniel Pressel, Brian Lester, Matthew Nokleby
2019 arXiv   pre-print
While training image classification models with label noise has received much attention, training text classification models has not.  ...  Through extensive experiments on several text classification datasets, we show that this approach enables the CNN to learn better sentence representations and is robust even to extreme label noise.  ...  Table 3 (SVM classification): the last fully-connected layer's activations for all the training samples are treated as the learned feature representation of the input sentence.  ... 
arXiv:1903.07507v1 fatcat:ivmqfimng5abhfsjkx3kbcwb3q
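
A standard way to formalize a label-noise model like the one described above is a row-stochastic transition matrix T with T[i][j] = P(observed label j | true label i), composed with the classifier's clean posterior. The sketch below builds the symmetric (uniform-flip) case; the paper's actual model is a learned, nonlinear noise layer, so treat this as the common linear simplification rather than the authors' method.

```python
import numpy as np

def uniform_noise_matrix(k, noise_rate):
    """Label-noise channel: with probability `noise_rate` the observed
    label is drawn uniformly from the other k - 1 classes."""
    T = np.full((k, k), noise_rate / (k - 1))
    np.fill_diagonal(T, 1.0 - noise_rate)
    return T

# A noise-model layer maps the clean class posterior through T, so the
# network can be trained against noisy labels while its softmax still
# models the clean label distribution.
p_clean = np.array([0.7, 0.1, 0.1, 0.1])
T = uniform_noise_matrix(4, noise_rate=0.3)
p_noisy = p_clean @ T
print(p_noisy, p_noisy.sum())   # still a valid distribution
```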

Statistical Moments based Noise Classification using Feed Forward Back Propagation Neural Network

Shamik Tiwari, Ajay Kumar Singh, V.P. Shukla
2011 International Journal of Computer Applications  
A noise identification method based on neural network classification is presented: representative noise samples are isolated and their statistical features extracted for noise type identification.  ...  The isolation of representative noise samples is achieved using commonly used image filters, whereas noise identification is performed by a classification system based on statistical moment features.  ...  This BPNN provides a computationally efficient method for changing the weights in a feed-forward network, with differentiable activation function units, to learn a training set of input-output data.  ... 
doi:10.5120/2254-2886 fatcat:sffo5mbdyvbw3nechf5rmezluy
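
A plausible reading of "statistical moment features" is the first four moments of an isolated noise patch. The sketch below computes them for two synthetic noise types to show why they separate noise classes (Gaussian noise has near-zero skewness and excess kurtosis, impulsive noise has very high kurtosis); the exact feature set and the filter-based isolation step are assumptions.

```python
import numpy as np
from scipy import stats

def moment_features(noise_patch):
    """Feature vector for a noise patch: mean, variance, skewness,
    kurtosis -- the kind of input a feed-forward back-propagation
    classifier could be trained on."""
    x = noise_patch.ravel()
    return np.array([x.mean(), x.var(), stats.skew(x), stats.kurtosis(x)])

rng = np.random.default_rng(0)
gaussian = rng.normal(0.0, 0.1, (32, 32))
impulses = np.where(rng.random((32, 32)) < 0.5, -1.0, 1.0)
impulses *= rng.random((32, 32)) < 0.05          # 5% salt-and-pepper
print("gaussian     :", np.round(moment_features(gaussian), 3))
print("salt & pepper:", np.round(moment_features(impulses), 3))
```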

Improving Noise Tolerance of Mixed-Signal Neural Networks [article]

Michael Klachko, Mohammad Reza Mahmoodi, Dmitri B. Strukov
2019 arXiv   pre-print
The resulting model is robust enough to achieve 80.2% classification accuracy on the CIFAR-10 dataset with just a 1.4 mW power budget, while a 6 mW budget allows us to achieve 87.1% accuracy, which is within 1%  ...  Mixed-signal hardware accelerators for deep learning achieve orders of magnitude better power efficiency than their digital counterparts.  ...  Noise sampled from a uniform distribution with range proportional only to the dynamic range of pre-activation values in each layer.  ... 
arXiv:1904.01705v1 fatcat:tlgburwhpnhdrbtr4mvyqcqxji
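
The noise-injection scheme quoted in the excerpt, uniform noise with range tied to each layer's pre-activation dynamic range, can be sketched as below. The proportionality constant `alpha` and the max-minus-min definition of dynamic range are assumptions.

```python
import numpy as np

def inject_uniform_noise(pre_act, alpha, rng):
    """Noise-injection training step: add uniform noise whose half-range
    is `alpha` times the layer's pre-activation dynamic range, to mimic
    analog-hardware noise during training."""
    dyn_range = pre_act.max() - pre_act.min()
    noise = rng.uniform(-alpha * dyn_range, alpha * dyn_range, pre_act.shape)
    return pre_act + noise

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))                   # one layer's pre-activations
z_noisy = inject_uniform_noise(z, alpha=0.05, rng=rng)
print(np.abs(z_noisy - z).max())               # bounded by alpha * range
```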

On the Robustness of Monte Carlo Dropout Trained with Noisy Labels [article]

Purvi Goel, Li Chen
2021 arXiv   pre-print
The memorization effect of deep learning hinders its ability to generalize effectively on the test set when learning with noisy labels.  ...  deviation on each neuron's activation; 3. network sparsity: investigating the network support of MCDropout in comparison with deterministic neural networks.  ...  If the observed label differs from the true label with a uniform probability, then the noise is considered label-independent and is called symmetric or uniform noise.  ... 
arXiv:2103.12002v1 fatcat:7s5bh2hmhvcttplrdwv7pkc2q4
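
Monte Carlo dropout, as studied above, keeps dropout active at test time and aggregates many stochastic forward passes; the per-activation standard deviation across passes is the quantity the excerpt's item 2 refers to. The single-layer ReLU network below is a toy stand-in, not the paper's setup.

```python
import numpy as np

def mc_dropout_forward(x, W, keep_prob, rng):
    """One stochastic forward pass with dropout left ON at test time
    (inverted dropout: surviving inputs are rescaled by 1/keep_prob)."""
    mask = rng.random(W.shape[0]) < keep_prob
    return np.maximum(0.0, (x * mask / keep_prob) @ W)

rng = np.random.default_rng(0)
x = rng.normal(size=64)                        # one input example
W = rng.normal(size=(64, 10))                  # single ReLU layer
samples = np.stack([mc_dropout_forward(x, W, 0.8, rng) for _ in range(100)])
print("predictive mean:", samples.mean(0).round(2))
print("per-unit std   :", samples.std(0).round(2))
```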

An Effective Label Noise Model for DNN Text Classification

Ishan Jindal, Daniel Pressel, Brian Lester, Matthew Nokleby
2019 Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)  
While training image classification models with label noise has received much attention, training text classification models has not.  ...  Through extensive experiments on several text classification datasets, we show that this approach enables the CNN to learn better sentence representations and is robust even to extreme label noise.  ...  We would also like to thank Patrick Haffner, Sagnik Ray Choudhury, Yanjie Zhao and Amy Hemmeter for their valuable discussions with us during the course of this research.  ... 
doi:10.18653/v1/n19-1328 dblp:conf/naacl/JindalPLN19 fatcat:3nlfzdmwdjdovdmjbvqaxecf2y

CAD Scheme To Detect Brain Tumour In MR Images using Active Contour Models and Tree Classifiers

R. Helen, N. Kamaraj
2015 Journal of Electrical Engineering and Technology  
This method gives an accuracy of 97% with minimal classification error. The time taken to detect a tumour is approximately 2 minutes per examination (30 slices).  ...  The objective of this paper is to develop a Computer Aided Diagnostics (CAD) scheme for Brain Tumour detection from Magnetic Resonance Image (MRI) using active contour models and to investigate with several  ...  [Flattened results table: Average Region Non-Uniformity and Average Correlation for the Balloon Force, GVF, DR LSE, and FCM LSM contour models, with and without noise, over examinations of 30 slices each]  ... 
doi:10.5370/jeet.2015.10.2.670 fatcat:2c3fczf5vfdehkxwlqflkcr2du
Showing results 1 — 15 out of 74,503 results