A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is application/pdf.
Specification and Simulation of Statistical Query Algorithms for Efficiency and Noise Tolerance
1998
Journal of Computer and System Sciences (Print)
We show that the learning algorithms obtained by simulating efficient relative error SQ algorithms both in the absence of noise and in the presence of malicious noise have roughly optimal sample complexity ...
We show that uniform convergence with respect to the d& metric yields "uniform convergence" with respect to (+, %) accuracy. ...
Two formalizations of learning with faulty data are the variants of the PAC model with classification noise and malicious errors. ...
doi:10.1006/jcss.1997.1558
fatcat:fzb5m5crrvbepo6qlcmw6jjrim
Learning logic programs with random classification noise
[chapter]
1997
Lecture Notes in Computer Science
We consider the learnability of classes of logic programs in the presence of noise, assuming that the label of each example is reversed with a fixed probability. ...
Also, we show that arbitrary nonrecursive Horn clauses with forest background knowledge remain polynomially PAC learnable in the presence of noise. ...
Learning Conjunctions from Noisy Data: Angluin and Laird [1] gave the first algorithm for PAC learning conjunctions from data with random classification noise. ...
doi:10.1007/3-540-63494-0_63
fatcat:ecttkvrzo5eytjdjnpidqai5lm
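The noise model quoted in this entry — each example's label reversed independently with a fixed probability — can be sketched in a few lines. The code below is an illustrative toy, not from the paper, and `corrupt_labels` is a hypothetical name:

```python
import random

def corrupt_labels(examples, eta, rng=None):
    """Random classification noise: flip each {0, 1} label
    independently with probability eta."""
    rng = rng or random.Random(0)
    return [(x, 1 - y) if rng.random() < eta else (x, y) for x, y in examples]

clean = [((0, 1), 1), ((1, 1), 0), ((1, 0), 1)]
noisy = corrupt_labels(clean, eta=0.2)  # each label survives with probability 0.8
```

With `eta = 0` the data is returned unchanged, and with `eta = 1` every label is flipped; a learner in this model must succeed for any fixed `eta` below 1/2.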
Page 1208 of Mathematical Reviews Vol. , Issue 97B
[page]
1997
Mathematical Reviews
We also present an algorithm for learning in the PAC model with malicious noise. ...
each conjunct forms a constraint on positive examples. ...
Smooth Boosting and Learning with Malicious Noise
[chapter]
2001
Lecture Notes in Computer Science
We show that this new boosting algorithm can be used to construct efficient PAC learning algorithms which tolerate relatively high rates of malicious noise. ...
The bounds on sample complexity and malicious noise tolerance of these new PAC algorithms closely correspond to known bounds for the online p-norm algorithms of Grove, Littlestone and Schuurmans (1997) ...
Acknowledgments We thank Avrim Blum for a helpful discussion concerning the malicious noise tolerance of the Perceptron algorithm. ...
doi:10.1007/3-540-44581-1_31
fatcat:vgnxe3ab4vd55a3hcatkijudke
Page 502 of Mathematical Reviews Vol. , Issue 2001A
[page]
2001
Mathematical Reviews
noise rate (it is well known that any nontrivial target class cannot be PAC learned with accuracy ε and malicious noise rate η > ε/(1 + ε), this irrespective of sample complexity). ...
Summary: “In this paper, we prove various results about PAC learning in the presence of malicious noise. Our main interest is the sample size behaviour of learning algorithms. ...
On-line learning with malicious noise and the closure algorithm
[chapter]
1994
Lecture Notes in Computer Science
Finally, we show how to efficiently turn any algorithm for the on-line noise model into a learning algorithm for the PAC model with malicious noise. ...
We investigate a variant of the on-line learning model for classes of {0, 1}-valued functions (concepts) in which the labels of a certain fraction of the input instances are corrupted by adversarial noise ...
learned on-line with adversarial noise. ...
doi:10.1007/3-540-58520-6_67
fatcat:3opzv24ju5a7ppjgahpfmsvde4
Paragraph: Thwarting Signature Learning by Training Maliciously
[chapter]
2006
Lecture Notes in Computer Science
A learner typically generates a classifier automatically by analyzing two labeled training pools: one of innocuous samples, and one of samples that fall in the malicious target class. ...
In both cases, a stream of samples arrives, and a classifier must automatically determine whether each sample falls into a malicious target class (e.g., worm network traffic, or spam email). ...
Hamsa generates conjunction signatures, with improved performance and noise-tolerance over Polygraph. ...
doi:10.1007/11856214_5
fatcat:mff3e7cdsbak5cc64thh4pxx4i
Sample-efficient strategies for learning in the presence of noise
1999
Journal of the ACM
{0, 1}-valued functions of VC dimension d, where ε is the desired accuracy and η = ε/(1 + ε) − Δ the malicious noise rate (it is well known that any nontrivial target class cannot be PAC learned with accuracy ...
In this paper, we prove various results about PAC learning in the presence of malicious noise. Our main interest is the sample size behavior of learning algorithms. ...
Table I shows the known results on learning in the malicious and classification noise models. ...
doi:10.1145/324133.324221
fatcat:5futbfrcgzewjafepxkk3sqjeu
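The noise rate in this entry, η = ε/(1 + ε) − Δ, sits just below the information-theoretic barrier the snippet mentions: no nontrivial class is PAC learnable with accuracy ε once the malicious noise rate reaches ε/(1 + ε). A one-line helper (hypothetical name, for illustration only) makes the bound concrete:

```python
def malicious_noise_bound(epsilon):
    """Barrier on the malicious noise rate: PAC learning with
    accuracy epsilon is impossible once eta >= epsilon / (1 + epsilon)."""
    return epsilon / (1 + epsilon)

# For a 10% target error the barrier is about 0.0909,
# i.e. only roughly 9% malicious noise can ever be tolerated.
bound = malicious_noise_bound(0.1)
```

Note the bound is information-theoretic: it holds irrespective of sample complexity or running time.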
Securing IoT Devices: A Robust and Efficient Deep Learning with a Mixed Batch Adversarial Generation Process for CAPTCHA Security Verification
2021
Electronics
Therefore, this study proposed computation-efficient deep learning with a mixed batch adversarial generation process model, which attempted to break the transferability attack, and mitigate the problem ...
against malicious attacks from computer hackers, to protect Internet of Things devices and the end user's privacy. ...
We further accelerate the convergence rate of the CNN solver model with an automatic learning rate finder (LRF) algorithm to find optimal learning rates in conjunction with the cyclical learning rate ( ...
doi:10.3390/electronics10151798
fatcat:e4khz6abfvglteflhgnvzwrbiy
Learning in the presence of malicious errors
1988
Proceedings of the twentieth annual ACM symposium on Theory of computing - STOC '88
of learning with errors and standard combinatorial optimization problems. ...
In this paper we study an extension of the distribution-free model of learning introduced by Valiant [23], also known as the probably approximately correct (PAC) model, that allows the presence of malicious ...
Absolute limits on learning with errors: In this section we prove theorems bounding the achievable error rate for both the malicious error and classification noise models. ...
doi:10.1145/62212.62238
dblp:conf/stoc/KearnsL88
fatcat:plqjn2irpnavhmi6yuvranzkaa
A Word-Level Analytical Approach for Identifying Malicious Domain Names Caused by Dictionary-Based DGA Malware
2021
Electronics
Computer networks are facing serious threats from the emergence of malware with sophisticated DGAs (Domain Generation Algorithms). ...
Taken together, these results suggest that malware-infected machines can be identified and removed from networks using DNS queries for detected malicious domain names as triggers. ...
Note that the weights of edges are updated with the conjunction process G i:j . ...
doi:10.3390/electronics10091039
doaj:6ffbaa98c18c4cb7956dcaadedbdadf6
fatcat:s4bxpyagyncglhjor4hnymkoie
Packet2Vec: Utilizing Word2Vec for Feature Extraction in Packet Data
[article]
2020
arXiv
pre-print
While deep learning has shown success in fields such as image classification and natural language processing, its application for feature extraction on raw network packet data for intrusion detection is ...
For the classification task of benign versus malicious traffic on a 2009 DARPA network data set, we obtain an area under the curve (AUC) of the receiver operating characteristic (ROC) between 0.988 and 0.996 ...
What warm start means differs depending on the classifier used. For example, with neural networks we would initialize the model with the weights learned from training on previous files. ...
arXiv:2004.14477v1
fatcat:oicnmal6mjaslcs4q3eiajdj6m
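The warm-start idea described in this entry's snippet — initializing a neural network with the weights learned from training on previous files rather than starting from scratch — can be illustrated with a toy layer. The class and names below are hypothetical, not the paper's code:

```python
import numpy as np

class TinyNet:
    """Toy single dense layer; just enough to show warm starting."""
    def __init__(self, n_in, n_out, weights=None, seed=0):
        rng = np.random.default_rng(seed)
        # Cold start: fresh random weights. Warm start: reuse the given weights.
        self.W = np.array(weights) if weights is not None else rng.normal(size=(n_in, n_out))

prev = TinyNet(4, 2)                    # stands in for a model trained on earlier files
warm = TinyNet(4, 2, weights=prev.W)    # warm start from the previous model's weights
cold = TinyNet(4, 2, seed=1)            # cold start with fresh random weights
```

As the snippet notes, what "warm start" means depends on the classifier: for neural networks it is weight reuse as above; for other learners it might mean reusing trees, support vectors, or sufficient statistics.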
Sample-Optimal PAC Learning of Halfspaces with Malicious Noise
[article]
2021
arXiv
pre-print
We study efficient PAC learning of homogeneous halfspaces in ℝ^d in the presence of malicious noise of Valiant (1985). ...
Our main ingredient is a novel incorporation of a matrix Chernoff-type inequality to bound the spectrum of an empirical covariance matrix for well-behaved distributions, in conjunction with a careful exploration ...
Learning with Malicious Noise: We elaborate on our analytic tools used to obtain the near-optimal sample complexity bound in this section. ...
arXiv:2102.06247v3
fatcat:ehabe4wof5f45cne4sgj7odmj4
ALOHA: Auxiliary Loss Optimization for Hypothesis Augmentation
[article]
2019
arXiv
pre-print
Our auxiliary loss architecture yields a significant reduction in detection error rate (false negatives) of 42.6% at a false positive rate (FPR) of 10^-3 when compared to a similar model with only one target, and ...
/benign loss, a count loss on multi-source detections, and a semantic malware attribute tag loss. ...
Experimental Evaluation: In this section, we apply the auxiliary losses presented in Section 3, first individually, each loss in conjunction with a main malicious/benign loss, and then simultaneously ...
arXiv:1903.05700v1
fatcat:oahsl4xy6jg3bmt2flm5dgb3lq
An Efficient Spectrum Sensing Framework and Attack Detection in Cognitive Radio Networks using Hybrid ANFIS
2015
Indian Journal of Science and Technology
With more malicious nodes, the proposed schemes are more effective at restraining false alarms. ...
It is found that spectrum sensing alone cannot prevent the malicious behavior without any information on users' reputation. ...
In particular, ANFIS employs a hybrid learning (training) algorithm that integrates a least-squares estimator with the gradient descent technique, and finally with the Runge–Kutta learning method ( ...
doi:10.17485/ijst/2015/v8i28/71246
fatcat:so6bsdzovzc4xdoudtiwkhqc6i
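The hybrid training loop mentioned in this entry — least squares for the linear (consequent) parameters combined with gradient descent on the nonlinear (premise) parameters — can be sketched on a one-membership-function toy problem. All names and the numeric-gradient shortcut are illustrative, not the paper's ANFIS implementation:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 50)
y = 0.5 * x + 0.2                  # toy target to fit

def features(a):
    """Gaussian membership weight times the consequent basis (p*x + q)."""
    w = np.exp(-(x - a) ** 2)
    return np.column_stack([w * x, w])

def sse(a, theta):
    return float(((features(a) @ theta - y) ** 2).sum())

a = 0.8                            # premise parameter, trained by gradient descent
theta, *_ = np.linalg.lstsq(features(a), y, rcond=None)
err0 = sse(a, theta)

for _ in range(20):
    theta, *_ = np.linalg.lstsq(features(a), y, rcond=None)   # least-squares step
    h = 1e-5                                                  # numeric gradient in a
    grad = (sse(a + h, theta) - sse(a - h, theta)) / (2 * h)
    a -= 0.01 * grad                                          # gradient-descent step

theta, *_ = np.linalg.lstsq(features(a), y, rcond=None)
err_final = sse(a, theta)          # should not exceed the initial error
```

Each iteration re-solves the linear parameters exactly and then nudges the membership parameter downhill, which is the alternation the ANFIS hybrid rule is built on.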
Showing results 1 — 15 out of 4,248 results