
Smooth Boosting and Learning with Malicious Noise [chapter]

Rocco A. Servedio
2001 Lecture Notes in Computer Science  
In particular, we use the new smooth boosting algorithm to construct malicious noise tolerant versions of the PAC-model p-norm linear threshold learning algorithms described by Servedio (2002).  ...  We show that this new boosting algorithm can be used to construct efficient PAC learning algorithms which tolerate relatively high rates of malicious noise.  ...  Acknowledgments We thank Avrim Blum for a helpful discussion concerning the malicious noise tolerance of the Perceptron algorithm.  ... 
doi:10.1007/3-540-44581-1_31 fatcat:vgnxe3ab4vd55a3hcatkijudke
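For context, the key idea behind smooth boosting is that every example's unnormalized weight is capped at 1, so the induced distribution can never concentrate too much mass on any single (possibly adversarially corrupted) example. A minimal pure-Python sketch of this weighting scheme, with illustrative parameter names (this is a simplification, not the paper's exact algorithm):

```python
def smooth_weights(margins, gamma=0.1):
    # SmoothBoost-style weighting (simplified sketch): weights decay
    # exponentially for examples with large positive cumulative margin,
    # but are capped at 1. The cap is what makes the normalized
    # distribution "smooth" -- no example can dominate training.
    raw = [min(1.0, (1.0 - gamma) ** (m / 2.0)) if m > 0 else 1.0
           for m in margins]
    z = sum(raw)
    return [w / z for w in raw]

# Well-classified examples (large margin) get down-weighted; hard or
# misclassified examples keep weight 1 before normalization.
dist = smooth_weights([4.0, 2.0, 0.0, -1.0])
```

Because no raw weight exceeds 1, the normalized weight of any one example is at most 1/Z, which is what limits the influence malicious examples can accumulate.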

Learning Halfspaces with Malicious Noise [chapter]

Adam R. Klivans, Philip M. Long, Rocco A. Servedio
2009 Lecture Notes in Computer Science  
Our algorithms combine an iterative outlier removal procedure using Principal Component Analysis together with "smooth" boosting.  ...  We give new algorithms for learning halfspaces in the challenging malicious noise model, where an adversary may corrupt both the labels and the underlying distribution of examples.  ...  Isotropic Log-concave Distributions and Malicious Noise Our algorithm A mlc that works for arbitrary isotropic log-concave distributions uses smooth boosting.  ... 
doi:10.1007/978-3-642-02927-1_51 fatcat:hk66wrjug5ebtjfsxpczwqaqmy
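As a hedged sketch of the general idea (not the authors' exact procedure), PCA-based outlier removal finds the top principal direction of the sample, here by power iteration, and discards points with an outsized squared projection along it; boosting then runs on the cleaned sample. The function names and threshold below are illustrative:

```python
def top_direction(points, iters=100):
    # Power iteration on the (uncentered) second-moment matrix X^T X to
    # approximate the top principal direction; assumes roughly centered
    # data, as in the isotropic setting.
    d = len(points[0])
    v = [1.0] * d
    for _ in range(iters):
        proj = [sum(p[j] * v[j] for j in range(d)) for p in points]
        w = [sum(pr * p[j] for pr, p in zip(proj, points)) for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

def remove_outliers(points, threshold):
    # Drop points whose squared projection onto the top direction exceeds
    # the threshold -- adversarial outliers tend to create exactly such a
    # high-variance direction.
    v = top_direction(points)
    return [p for p in points
            if sum(pi * vi for pi, vi in zip(p, v)) ** 2 <= threshold]
```

In practice this step would be iterated, since removing one batch of outliers can expose a new high-variance direction.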

Efficient, Noise-Tolerant, and Private Learning via Boosting [article]

Mark Bun, Marco Leandro Carmosino, Jessica Sorrell
2020 arXiv   pre-print
We give natural conditions under which these algorithms are differentially private, efficient, and noise-tolerant PAC learners.  ...  We introduce a simple framework for designing private boosting algorithms.  ...  Toward the first of these applications, Servedio [2003] designed a smooth boosting algorithm (SmoothBoost) suitable for PAC learning in spite of malicious noise.  ... 
arXiv:2002.01100v1 fatcat:7vk2lhirg5ahriqfpozshbyak4

Federated Learning in Adversarial Settings [article]

Raouf Kerkouche, Gergely Ács, Claude Castelluccia
2020 arXiv   pre-print
We show that this extension performs as efficiently as the non-private but robust scheme, even under stringent privacy requirements, but is less robust against model degradation and backdoor attacks.  ...  This paper presents a new federated learning scheme that provides different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy.  ...  Malicious clients still omit to add any noise to their own model updates. Instead, they boost their sign updates with DP-SignFed ( Adv = 5000).  ... 
arXiv:2010.07808v1 fatcat:6grxgyh6ubhh7dcvvue4sgtvvm

Malicious Domain Detection Based on K-means and SMOTE [chapter]

Qing Wang, Linyu Li, Bo Jiang, Zhigang Lu, Junrong Liu, Shijie Jian
2020 Lecture Notes in Computer Science  
Finally, KSDom uses the Categorical Boosting (CatBoost) algorithm to identify malicious domains.  ...  However, existing detection methods usually use classification-based and association-based representations, which are not capable of dealing with the class imbalance between malicious and benign domains  ...  The classification model generated by using ensemble learning combined with undersampling is prone to noise, and the Decision Tree algorithm ignores the correlation between features, resulting in poor  ... 
doi:10.1007/978-3-030-50417-5_35 fatcat:onxdki2dzbbo7ekcp2sls3g6ye
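For context, SMOTE balances an imbalanced dataset by synthesizing new minority-class samples as interpolations between a minority example and one of its nearest minority neighbours; the rebalanced data then feeds a boosted classifier such as CatBoost. A self-contained pure-Python sketch (parameter names are illustrative):

```python
import random

def smote(minority, k=2, n_new=4, rng=None):
    # SMOTE-style oversampling: each synthetic sample lies on the line
    # segment between a minority point and one of its k nearest minority
    # neighbours, so new points stay inside the minority region.
    rng = rng or random.Random(0)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        neighbours = sorted(
            (x for x in minority if x is not a),
            key=lambda x: sum((xi - ai) ** 2 for xi, ai in zip(x, a)),
        )[:k]
        b = rng.choice(neighbours)
        lam = rng.random()  # interpolation coefficient in [0, 1)
        synthetic.append([ai + lam * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic
```

Because each synthetic point is a convex combination of two minority points, the oversampled set never leaves the minority class's convex hull.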

Improved Quantum Boosting [article]

Adam Izdebski, Ronald de Wolf
2020 arXiv   pre-print
Recently, Arunachalam and Maity gave the first quantum improvement for boosting, by combining Freund and Schapire's AdaBoost algorithm with a quantum algorithm for approximate counting.  ...  In this paper we give a substantially faster and simpler quantum boosting algorithm, based on Servedio's SmoothBoost algorithm.  ...  Acknowledgements We thank Srinivasan Arunachalam for many helpful comments, Min-Hsiu Hsieh for sending us an updated version of [WMHY19] and answering some questions about this paper, and Yassine Hamoudi  ... 
arXiv:2009.08360v1 fatcat:3wl4ohstebczpgkguew5j5f5ue

Enabling online learning in lithography hotspot detection with information-theoretic feature optimization

Hang Zhang, Bei Yu, Evangeline F. Y. Young
2016 Proceedings of the 35th International Conference on Computer-Aided Design - ICCAD '16  
More importantly, equipped with online learning, our framework can further improve both accuracy and ODST.  ...  With the continuous shrinking of technology nodes, lithography hotspot detection and elimination in the physical verification phase is of great value.  ...  Besides the proposed MCMI scheme, Smooth Boosting can efficiently eliminate malicious noise, and modified Naive Bayes allows us to correctly measure the dependency of different sampling points on a same or  ... 
doi:10.1145/2966986.2967032 dblp:conf/iccad/ZhangYY16 fatcat:agopwzurpvghdk6sd4fk7xvaha

Malicious Encryption Traffic Detection Based on NLP

Hao Yang, Qin He, Zhenyan Liu, Qian Zhang, Yuan Tian
2021 Security and Communication Networks  
In recent years, the rise of artificial intelligence allows us to use machine learning and deep learning methods to detect encrypted malicious traffic without decryption, and the detection results are  ...  At present, the research on malicious encrypted traffic detection mainly focuses on the characteristics' analysis of encrypted traffic and the selection of machine learning algorithms.  ...  Acknowledgments This work was sponsored by the Sichuan Science and Technology Program (2020YFS0355 and 2020YFG0479).  ... 
doi:10.1155/2021/9960822 fatcat:heq2an2hg5hbrft3wbttl36day

Purifying data by machine learning with certainty levels

Shlomi Dolev, Guy Leshem, Reuven Yagel
2010 Proceedings of the Third International Workshop on Reliability, Availability, and Security - WRAS '10  
A fundamental paradigm used for autonomic computing, self-managing systems, and decision-making under uncertainty and faults is machine learning.  ...  Occasionally these data sets include misleading data items that were either introduced by input device malfunctions, or were maliciously inserted to lead the machine learning to wrong conclusions.  ...  In Servedio (2003) [15] , a PAC boosting algorithm is developed using smooth distributions.  ... 
doi:10.1145/1953563.1953567 dblp:conf/podc/DolevLY10 fatcat:5tsho3pw6vcdddhkvxp2ykupjm

Smooth Boosting Using an Information-Based Criterion [chapter]

Kohei Hatano
2006 Lecture Notes in Computer Science  
They are proved to be noise-tolerant and can be used in the "boosting by filtering" scheme, which is suitable for learning over huge data.  ...  In this paper, we propose a new smooth boosting algorithm with another information-based criterion based on the Gini index. We show that it inherits the advantages of two approaches, smooth boosting and information-based  ...  Osamu Watanabe and Prof. Eiji Takimoto for their discussion. I also thank anonymous referees for their helpful comments.  ... 
doi:10.1007/11894841_25 fatcat:fyyipaz35zdn7crydokesdioyy
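The Gini index mentioned here is the standard impurity measure 1 − Σ_c p_c², which an information-based booster can use as its weighting criterion in place of exponential loss. A minimal illustration:

```python
def gini_index(labels):
    # Gini impurity of a label multiset: 1 - sum over classes of p_c^2.
    # It is 0 for a pure set and maximal, 1 - 1/k, when k classes are
    # equally frequent.
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))
```

For example, a pure set scores 0, while a perfectly balanced binary set scores 0.5.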

On the Security & Privacy in Federated Learning [article]

Gorka Abad, Stjepan Picek, Víctor Julio Ramírez-Durán, Aitor Urbieta
2022 arXiv   pre-print
Recent privacy awareness initiatives such as the EU General Data Protection Regulation subdued Machine Learning (ML) to privacy and security assessments.  ...  Federated Learning (FL) grants a privacy-driven, decentralized training scheme that improves ML models' security.  ...  Furthermore, for preventing Evasion attacks, the authors [20] smoothed the training data by applying Gaussian noise and including adversarial data in the training dataset.  ... 
arXiv:2112.05423v2 fatcat:qcovp2cz2rfgbcvx6mtx5xighe

Adversarial Fine-tune with Dynamically Regulated Adversary [article]

Pengyue Hou, Ming Zhou, Jie Han, Petr Musilek, Xingyu Li
2022 arXiv   pre-print
Adversarial training is an effective method to boost model robustness to malicious, adversarial attacks.  ...  This work tackles this problem and proposes a simple yet effective transfer learning-based adversarial training strategy that disentangles the negative effects of adversarial samples on model's standard  ...  We claim that the lower noise level in the image background makes the learning more complex.  ... 
arXiv:2204.13232v1 fatcat:pvpalyktfrcf7cgif2h4cp5vs4

MEADE: Towards a Malicious Email Attachment Detection Engine [article]

Ethan M. Rudd, Richard Harang, Joshua Saxe
2018 arXiv   pre-print
Using deep neural networks and gradient boosted decision trees, we are able to obtain ROC curves with > 0.99 AUC on both Microsoft Office document and Zip archive datasets.  ...  In this paper, we explore the feasibility of applying machine learning as a static countermeasure to detect several types of malicious email attachments including Microsoft Office documents and Zip archives  ...  spaces that offer smooth statistical support.  ... 
arXiv:1804.08162v1 fatcat:q2mtp3sidzfmvhzqtylpsph65e

Large-Scale Malicious Software Classification with Fuzzified Features and Boosted Fuzzy Random Forest

Fangqi Li, Shilin Wang, Alan Wee-Chung Liew, Wei-Ping Ding, Gong Shen Liu
2020 IEEE transactions on fuzzy systems  
Fuzzification was used to reduce the ubiquitous impact of noise and outliers in a very large dataset.  ...  Although deep learning-based methods have reported good classification performance, the deep models usually lack interpretability and are fragile under adversarial attacks.  ...  The fuzzification of words performs an aggregation (here based on Hamming distance); such aggregation, which is akin to kernel smoothing, has the effect of smoothing noise and statistical  ... 
doi:10.1109/tfuzz.2020.3016023 fatcat:76yi3csr6bbn7oe35ofeknbdii

FLAME: Taming Backdoors in Federated Learning [article]

Thien Duc Nguyen, Phillip Rieger, Huili Chen, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Shaza Zeitouni, Farinaz Koushanfar, Ahmad-Reza Sadeghi (+1 others)
2022 arXiv   pre-print
Federated Learning (FL) is a collaborative machine learning approach allowing participants to jointly train a model without having to share their private, potentially sensitive local datasets with others  ...  To minimize the required amount of noise, FLAME uses a model clustering and weight clipping approach.  ...  Introduction Federated learning (FL) is an emerging collaborative machine learning trend with many applications, such as next word prediction for mobile keyboards [39] , medical imaging [49] , and intrusion  ... 
arXiv:2101.02281v3 fatcat:jsd6uz5h5fgglmmfdecva3227y
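The combination of weight clipping and noising mentioned above can be sketched as a generic DP-style step (not FLAME's exact mechanism; `clip_norm` and `sigma` are illustrative parameters):

```python
import math
import random

def clip_and_noise(update, clip_norm, sigma, rng=None):
    # Scale a model update so its L2 norm is at most clip_norm, then add
    # Gaussian noise calibrated to the clipping bound. Clipping limits how
    # far a backdoored update can pull the global model; the noise masks
    # whatever adversarial signal survives clipping.
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [u * scale + rng.gauss(0.0, sigma * clip_norm) for u in update]
```

Choosing the clipping bound adaptively (FLAME pairs it with model clustering) keeps the amount of noise, and hence the accuracy loss, as small as possible.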