Detection of Urban Damage Using Remote Sensing and Machine Learning Algorithms: Revisiting the 2010 Haiti Earthquake

Austin Cooner, Yang Shao, James Campbell
2016 Remote Sensing  
This study evaluates the effectiveness of multilayer feedforward neural networks, radial basis neural networks, and Random Forests in detecting earthquake damage caused by the 2010 Port-au-Prince, Haiti  ...  A total of 1,214,623 undamaged and 134,327 damaged pixels were used for training.  ...  Also, we extend our gratitude to our peer-reviewers and thank them for their time and helpful comments and suggestions.  ... 
doi:10.3390/rs8100868 fatcat:c3houqm2frebtbr5epv67l7ika
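
The snippet above describes a heavily imbalanced pixel-level classification task (about 1.2 million undamaged versus 134 thousand damaged pixels). Below is a minimal, purely illustrative sketch of such a Random Forest setup with class re-weighting; the feature matrix, labels, and all parameter values are stand-ins, not the authors' actual data or configuration.

```python
# Hypothetical sketch: pixel-level damage classification with a Random Forest.
# X (n_pixels x n_features) and binary labels y (0 = undamaged, 1 = damaged) are
# synthetic stand-ins; class_weight="balanced" is one common way to handle the
# heavy class imbalance mentioned in the snippet, not the authors' exact setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))             # stand-in spectral/texture features
y = (rng.random(5000) < 0.1).astype(int)   # ~10% "damaged" pixels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```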

A loss function for classification based on a robust similarity metric

Abhishek Singh, Jose C. Principe
2010 The 2010 International Joint Conference on Neural Networks (IJCNN)  
This makes the C-loss function a more natural cost function for training classifiers as compared to the mean squared error, which is common in neural network classifiers.  ...  square loss function commonly used in neural network classifiers.  ... 
doi:10.1109/ijcnn.2010.5596485 dblp:conf/ijcnn/SinghP10 fatcat:vouc55gzordjba6pee7px3hmby
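
As background for the entry above, the following sketch contrasts a bounded, correntropy-style loss with the squared error; it is only meant to illustrate why such losses are considered more robust to large errors, and the kernel width sigma and normalization are assumptions rather than the paper's precise C-loss definition.

```python
# Illustrative sketch (not the paper's exact definition): a correntropy-induced
# loss saturates for large errors, unlike the squared error which grows without
# bound. sigma is an assumed kernel-width hyperparameter.
import numpy as np

def correntropy_loss(e, sigma=0.5):
    """Bounded, Gaussian-kernel-based loss of the error e = y_true - y_pred."""
    return 1.0 - np.exp(-(e ** 2) / (2.0 * sigma ** 2))

def squared_loss(e):
    return e ** 2

errors = np.array([0.1, 0.5, 1.0, 3.0, 10.0])
print(correntropy_loss(errors))  # saturates toward 1 for large |e|
print(squared_loss(errors))      # keeps growing quadratically
```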

Approximating the Gradient of Cross-entropy Loss Function

Li Li, Milos Doroslovacki, Murray H. Loew
2020 IEEE Access  
A loss function has two crucial roles in training a conventional discriminant deep neural network (DNN): (i) it measures the goodness of classification and (ii) generates the gradients that drive the training  ...  In this paper, we approximate the gradients of cross-entropy loss, which is the most often used loss function in classification DNNs.  ...  One can see that the training errors for G_1 and G_2 decay much faster than those for the cross-entropy gradient in most of the scenarios, e.g., in Fig. 3(f) G_1 and G_2 achieve the training error of  ... 
doi:10.1109/access.2020.3001531 fatcat:weyblw7n3rf37fl7am6gfqqoye
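
For context on the entry above: with a softmax output p = softmax(z) and a one-hot target y, the exact gradient of the cross-entropy loss with respect to the logits is p − y. The sketch below only verifies this standard analytic form numerically; the approximations G_1 and G_2 studied in the paper are not reproduced here.

```python
# Verify numerically that d(cross-entropy)/dz = softmax(z) - y for a one-hot y.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(z, y):
    return -np.sum(y * np.log(softmax(z)))

z = np.array([1.0, -0.5, 2.0])
y = np.array([0.0, 0.0, 1.0])          # one-hot target

analytic = softmax(z) - y              # p - y
numeric = np.zeros_like(z)
eps = 1e-6
for i in range(len(z)):
    dz = np.zeros_like(z); dz[i] = eps
    numeric[i] = (cross_entropy(z + dz, y) - cross_entropy(z - dz, y)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-5))  # True
```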

Calibrated Adversarial Training [article]

Tianjin Huang, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy
2021 arXiv   pre-print
We provide a theoretical analysis of the calibrated robust error and derive an upper bound for it.  ...  The method produces pixel-level adaptations to the perturbations based on the novel calibrated robust error.  ...  We denote a neural network classifier as f_θ(x), the cross-entropy loss as L(·), and the Kullback-Leibler divergence as KL(·||·).  ... 
arXiv:2110.00623v2 fatcat:tvqzw5tgzffrle4l4etdjezxle
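
Using the notation quoted in the snippet, one common way such cross-entropy and KL terms are combined in robust training is the TRADES-style objective below; this is a standard surrogate given for orientation only and is not the paper's calibrated robust error (β and ε are assumed hyperparameters).

```latex
\min_{\theta} \; \mathbb{E}_{(x, y)} \Big[ L\big(f_\theta(x), y\big)
  + \beta \max_{\|x' - x\| \le \epsilon} \mathrm{KL}\big(f_\theta(x) \,\|\, f_\theta(x')\big) \Big]
```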

Dynamically Weighted Balanced Loss: Class Imbalanced Learning and Confidence Calibration of Deep Neural Networks

K. Ruwani M. Fernando, Chris P. Tsokos
2021 IEEE Transactions on Neural Networks and Learning Systems  
Theoretical results supported by superior empirical performance provide justification for the validity of the proposed dynamically weighted balanced (DWB) loss function.  ...  We further show that the proposed loss function is classification calibrated.  ...  Loss Function Formulation (Revisiting Categorical Cross-Entropy): Let the training set with n samples be denoted by $D = \{(x_i, y_i)\}_{i=1}^{n} \subset \mathbb{R}^{d_x} \times \mathbb{R}^{d_y}$, where $X \subset \mathbb{R}^{d_x}$ is the feature space and $Y \subset \mathbb{R}^{d_y}$  ... 
doi:10.1109/tnnls.2020.3047335 pmid:33444149 fatcat:c3y7y4p4afdppbo5pm4dgbjhpi
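
For orientation, the sketch below shows a class-frequency-weighted categorical cross-entropy, a common baseline for the imbalanced-learning setting the paper addresses; the paper's DWB loss additionally adapts weights dynamically using prediction confidence, which is not reproduced here, and the toy data are made up.

```python
# Class-frequency-weighted cross-entropy: rare classes get larger per-sample weights.
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """probs: (n, K) predicted probabilities; labels: (n,) integer class ids."""
    n = len(labels)
    w = class_weights[labels]                       # per-sample weight
    return np.mean(-w * np.log(probs[np.arange(n), labels] + 1e-12))

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labels = np.array([0, 1, 1])
freq = np.bincount(labels) / len(labels)
class_weights = 1.0 / (freq + 1e-12)                # inverse-frequency weighting
print(weighted_cross_entropy(probs, labels, class_weights))
```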

E Pluribus Unum Ex Machina: Learning from Many Collider Events at Once [article]

Benjamin Nachman, Jesse Thaler
2021 arXiv   pre-print
Empirically, we find that training a single-event (per-instance) classifier is more effective than training a multi-event (per-ensemble) classifier, at least for the cases we studied, and we relate this  ...  This is illustrated for a Gaussian example as well as for classification tasks relevant for searches and measurements at the Large Hadron Collider.  ...  We train each network with 50,000 events to minimize the binary cross-entropy loss function, and we test the performance with an additional 50,000 events.  ... 
arXiv:2101.07263v2 fatcat:hf3lraxh4nfo5ijiqdzmd7nuye

Emergence of a finite-size-scaling function in the supervised learning of the Ising phase transition [article]

Dongkyu Kim, Dong-Hee Kim
2020 arXiv   pre-print
Proposing a minimal one-free-parameter neural network model, we analytically formulate the supervised learning problem for the canonical ensemble used as the training data set.  ...  We show that just one free parameter is enough to describe the data-driven emergence of the universal finite-size-scaling function in the network output that is observed in a large neural network  ...  For instance, two different trainings, on the square and triangular lattices, would give neural networks with exactly the same scaling form of the output if it is plotted as a function of m.  ... 
arXiv:2010.00351v1 fatcat:uvz5j7jidbbn7e2qxhf2mqexoe

Adaptive Natural Gradient Method for Learning of Stochastic Neural Networks in Mini-Batch Mode

Hyeyoung Park, Kwanyong Lee
2019 Applied Sciences  
For two representative stochastic neural network models, we present explicit parameter-update rules and a learning algorithm.  ...  The gradient descent method is an essential algorithm for learning in neural networks.  ...  a typical neural network model trained with the squared error function, which is widely used for regression tasks.  ... 
doi:10.3390/app9214568 fatcat:m5aeepltwvdgdeklizddxycmm4
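
As a rough illustration of the entry above, the sketch below performs one natural-gradient step θ ← θ − η F⁻¹∇L using an empirical (outer-product) Fisher estimate with damping; the per-sample gradients are random stand-ins and this is not the paper's specific adaptive mini-batch rule.

```python
# One damped natural-gradient step from per-sample gradients (assumed inputs).
import numpy as np

def natural_gradient_step(theta, per_sample_grads, lr=0.1, damping=1e-3):
    grads = np.asarray(per_sample_grads)            # (batch, dim)
    grad = grads.mean(axis=0)                       # mini-batch gradient
    fisher = grads.T @ grads / len(grads)           # empirical Fisher estimate
    fisher += damping * np.eye(len(theta))          # damping keeps it invertible
    return theta - lr * np.linalg.solve(fisher, grad)

theta = np.zeros(3)
g = np.random.default_rng(0).normal(size=(32, 3))   # stand-in per-sample gradients
print(natural_gradient_step(theta, g))
```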

Revisiting Recent and Current Anomaly Detection based on Machine Learning in Ad-Hoc Networks

Zhixiao Wang, Mingyu Chen, Wenyao Yan, Wendong Wang, Ang Gao, Gaoyang Nie, Feng Wang, Shaobo Yang
2019 Journal of Physics, Conference Series  
This article analyzes the existing security problems in Ad-Hoc networks, presents the basic theory of intrusion detection for Ad-Hoc networks, and reviews the current and recent anomaly detection methods  ...  An Ad-Hoc network, a kind of self-organized network, is much more vulnerable than an infrastructure-based network because of its highly changeable links, dynamic structure, and wireless connections  ...  The training process of an artificial neural network for classification is to compare the difference between the output and the expected label, feed the error back to earlier nodes, and update  ... 
doi:10.1088/1742-6596/1288/1/012075 fatcat:bu542t356nacxltjwnc5l2vhnu
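
The snippet above describes training as comparing the output with the expected label and feeding the error back to update the weights. The toy example below shows that loop for a single sigmoid unit on made-up "traffic" features; real anomaly detectors for Ad-Hoc networks use deeper models and domain-specific features.

```python
# Toy error-feedback loop: output vs. expected label drives the weight update.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                  # stand-in traffic features
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # stand-in "anomaly" labels
w, b, lr = np.zeros(4), 0.0, 0.5

for _ in range(200):
    out = sigmoid(X @ w + b)
    err = out - y                              # difference from expected label
    w -= lr * X.T @ err / len(y)               # feed error back to the weights
    b -= lr * err.mean()

print("training accuracy:", ((sigmoid(X @ w + b) > 0.5) == y).mean())
```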

Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors [article]

Gintare Karolina Dziugaite, Daniel M. Roy
2019 arXiv   pre-print
Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels and the same holds of the Gibbs posterior.  ...  We show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior) classifier, i.e., a randomized classifier obtained  ...  The authors would like to thank Pratik Chaudhari, Pascal Germain, David McAllester, and Stefano Soatto for helpful discussions. GKD is supported by an EPSRC studentship.  ... 
arXiv:1712.09376v3 fatcat:l3fssx5csbhedcrtl2ojaaznle

Generalized Negative Correlation Learning for Deep Ensembling [article]

Sebastian Buschjäger, Lukas Pfahler, Katharina Morik
2020 arXiv   pre-print
A common explanation for their excellent performance is the bias-variance decomposition of the mean squared error, which shows that the algorithm's error can be decomposed into its bias and variance  ...  We show how GNCL encapsulates many previous works and discuss under which circumstances training of an ensemble of Neural Networks might fail and what ensembling method should be favored depending on the  ...  By using the cross-entropy loss and setting ψ = − we arrive at the DivLoss function for λ_1 = M and λ_2 = 1.  ... 
arXiv:2011.02952v2 fatcat:qpnhmmezabcnzlwvwy2ydod4dm
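
Related to the bias-variance reasoning in the snippet above, the classic ambiguity decomposition for ensembles under squared error states that the ensemble's error equals the average member error minus the average spread of the members around their mean, which is the diversity term negative correlation learning tries to control. The check below verifies the identity numerically on made-up predictions.

```python
# Numerical check of (f_bar - y)^2 = mean((f_i - y)^2) - mean((f_i - f_bar)^2).
import numpy as np

rng = np.random.default_rng(0)
y = 1.0                                      # a single target value
f = rng.normal(loc=1.2, scale=0.3, size=5)   # predictions of 5 ensemble members
f_bar = f.mean()

ensemble_err = (f_bar - y) ** 2
avg_member_err = np.mean((f - y) ** 2)
avg_ambiguity = np.mean((f - f_bar) ** 2)    # diversity of the ensemble

print(np.isclose(ensemble_err, avg_member_err - avg_ambiguity))  # True
```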

Blind source mobile device identification based on recorded call

Mehdi Jahanirad, Ainuddin Wahid Abdul Wahab, Nor Badrul Anuar, Mohd Yamani Idna Idris, Mohd Nizam Ayub
2014 Engineering applications of artificial intelligence  
All feature sets are analyzed by using five supervised learning techniques, namely, support vector machine, naïve Bayesian, neural network, linear logistic regression, and rotation forest classifiers, as  ...  The experimental results show that the best performance was achieved with entropy-MFCC features used with the naïve Bayesian classifier, which resulted in an average accuracy of 99.99% among 21 mobile devices  ...  In our experiments, the training and testing data sets for all classifiers were selected by using 10-fold cross-validation.  ... 
doi:10.1016/j.engappai.2014.08.008 fatcat:vscggmgztrfs7g2hrtrbuqug6q
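
As a generic illustration of the 10-fold cross-validation mentioned in the snippet, the sketch below uses scikit-learn with a Gaussian naive Bayes classifier on synthetic data; the actual study extracts entropy-MFCC features from recorded calls, which are not reproduced here.

```python
# 10-fold cross-validation on synthetic stand-in features for 21 "devices".
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(210, 12))                 # stand-in feature vectors
y = np.repeat(np.arange(21), 10)               # 21 classes, 10 samples each

scores = cross_val_score(GaussianNB(), X, y, cv=10)
print("mean 10-fold accuracy:", scores.mean())
```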

Application of Deep Networks to Oil Spill Detection Using Polarimetric Synthetic Aperture Radar Images

Guandong Chen, Yu Li, Guangmin Sun, Yuanzhi Zhang
2017 Applied Sciences  
Featured Application: Using polarimetric synthetic aperture radar (SAR) remote sensing to detect and classify sea surface oil spills, for the early warning and monitoring of marine oil spill pollution.  ...  The results show that oil spill classification achieved by deep networks outperformed both support vector machines (SVM) and traditional artificial neural networks (ANN) with similar parameter settings,  ...  The reconstruction error can be described by the cross-entropy function $L(x, y) = -\sum_{i=1}^{n}\left[x_i \log(y_i) + (1 - x_i)\log(1 - y_i)\right]$; for the training set $S$, the average reconstruction error can hence be established as  ... 
doi:10.3390/app7100968 fatcat:eoptvxqv65d27foy5aomhodqtm
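
The sketch below computes the average cross-entropy reconstruction error over a training set, following the per-sample formula quoted above; the inputs and "reconstructions" are synthetic stand-ins rather than outputs of an actual autoencoder on SAR imagery.

```python
# Average cross-entropy reconstruction error over a training set S (synthetic data).
import numpy as np

def reconstruction_error(x, y, eps=1e-12):
    """Per-sample cross-entropy between input x and reconstruction y, both in [0, 1]."""
    return -np.sum(x * np.log(y + eps) + (1 - x) * np.log(1 - y + eps), axis=-1)

rng = np.random.default_rng(0)
X = rng.random((100, 16))                                          # stand-in training set S
Y = np.clip(X + 0.05 * rng.normal(size=X.shape), 1e-6, 1 - 1e-6)   # fake reconstructions

print("average reconstruction error over S:", reconstruction_error(X, Y).mean())
```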

LAND COVER CLASSIFICATION BASED ON MODIS IMAGERY DATA USING ARTIFICIAL NEURAL NETWORKS

Arthur Stepchenko
2017 Environment Technology Resources Proceedings of the International Scientific and Practical Conference  
In this study, multispectral MODIS Terra NDVI images and an artificial neural network (ANN) were used in land cover classification.  ...  An artificial neural network is a computing tool designed to simulate the way the human brain analyzes and processes information.  ...  Minimizing cross-entropy leads to good classifiers.  ... 
doi:10.17770/etr2017vol2.2545 fatcat:nwygs2q5xfbpteqejbavu55ixq

Combinatorial codes in ventral temporal lobe for object recognition: Haxby (2001) revisited: is there a "face" area?

Stephen José Hanson, Toshihiko Matsuka, James V. Haxby
2004 NeuroImage  
Neural network classifier configurations implemented and tested in the present study. Abbreviations: SSE, sum of squared error; MSE, mean squared error; BP, back propagation; SCG, scaled conjugate gradient.  ...  The error function for our NN classifier was the cross-entropy function: $E = -\sum_{n=1}^{N}\sum_{k=1}^{K} t_k^n \ln \frac{O_k^n}{t_k^n}$  ...  Scaled conjugate gradient: The scaled conjugate gradient (SCG)  ... 
doi:10.1016/j.neuroimage.2004.05.020 pmid:15325362 fatcat:zbtf4gqhcnedzbaghfwhzxym7e
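
For reference, the error function reconstructed above is the cross-entropy written relative to the targets; for fixed targets t it differs from the plain cross-entropy only by the (constant) target entropy, so the two forms have the same minimizers:

```latex
E = -\sum_{n=1}^{N}\sum_{k=1}^{K} t_k^n \ln \frac{O_k^n}{t_k^n}
  = -\sum_{n=1}^{N}\sum_{k=1}^{K} t_k^n \ln O_k^n
  \;+\; \sum_{n=1}^{N}\sum_{k=1}^{K} t_k^n \ln t_k^n .
```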
Showing results 1–15 out of 1,960 results