
Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems [article]

Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh, Furong Huang
2022 arXiv   pre-print
Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.  ...  Theoretical analysis shows that this message-ensemble policy can utilize benign communication while being certifiably robust to adversarial communication, regardless of the attacking algorithm.  ...  In a multi-agent system, especially in a cooperative game, communication usually plays an important role.  ... 
arXiv:2206.10158v2 fatcat:2toegmkccfhqlf7qfarubmkqqm
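
The snippet above describes a message-ensemble policy that remains robust regardless of the attacking algorithm. As a rough, hedged illustration of that general idea (not the authors' code), the sketch below evaluates a base policy on every small subset of received messages and majority-votes the resulting discrete actions; `base_policy` and `ablation_size` are placeholder names introduced here for illustration.

```python
import itertools
from collections import Counter

def ensemble_action(base_policy, observation, messages, ablation_size):
    """Majority vote over actions computed on message subsets.

    Minimal sketch of an ablation-style message ensemble: if only a few of
    the incoming messages are adversarial, most subsets are benign and the
    majority action is unaffected. Assumes a discrete, hashable action space.
    base_policy(observation, message_subset) -> action is a hypothetical callable.
    """
    votes = Counter()
    for subset in itertools.combinations(messages, ablation_size):
        votes[base_policy(observation, subset)] += 1
    return votes.most_common(1)[0][0]
```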

Policy Smoothing for Provably Robust Reinforcement Learning [article]

Aounon Kumar, Alexander Levine, Soheil Feizi
2022 arXiv   pre-print
Prior works in provable robustness in RL seek to certify the behaviour of the victim policy at every time-step against a non-adaptive adversary using methods developed for the static setting.  ...  We present an efficient procedure, designed specifically to defend against an adaptive RL adversary, that can directly certify the total reward without requiring the policy to be robust at each time-step  ...  ACKNOWLEDGEMENTS This project was supported in part by NSF CAREER AWARD 1942230, a grant from NIST 60NANB20D134, HR001119S0026-GARD-FP-052, HR00112090132, ONR YIP award N00014-22-1-2271, Army Grant W911NF2120076  ... 
arXiv:2106.11420v3 fatcat:toalxmperncqbi4sswsrmkkpqu
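
Policy smoothing carries the randomized-smoothing idea from static classifiers over to sequential decision making: the observation is randomized at every time step and the certificate is stated on the total reward rather than on per-step actions. A minimal, hedged sketch of such a smoothed-policy wrapper follows; `base_policy` and `sigma` are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def smoothed_policy_action(base_policy, observation, sigma, seed=None):
    """Evaluate the base policy on a Gaussian-perturbed observation.

    Sketch only: isotropic Gaussian noise of scale `sigma` is added to the
    observation before it reaches the policy, which is the randomization
    that smoothing-style reward certificates are derived from.
    """
    rng = np.random.default_rng(seed)
    noisy_obs = np.asarray(observation) + rng.normal(scale=sigma, size=np.shape(observation))
    return base_policy(noisy_obs)
```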

Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS

Felix O. Olowononi, Danda B. Rawat, Chunmei Liu
2020 IEEE Communications Surveys and Tutorials  
However, in a world of increasing adversaries, it is becoming more difficult to completely prevent adversarial attacks on CPS, hence the need to focus on making CPS resilient.  ...  Cyber concerns in CPS arise in part from the process of sending information from sensors to actuators over a wireless communication medium, which widens the attack surface.  ...  Furthermore, adversarially robust policy learning (ARPL) was proposed in [180].  ... 
doi:10.1109/comst.2020.3036778 fatcat:tyrz76ofxfejha5kwhoptv2hwu

Adversarial Machine Learning in Wireless Communications using RF Data: A Review [article]

Damilola Adesina, Chung-Chu Hsieh, Yalin E. Sagduyu, Lijun Qian
2021 arXiv   pre-print
Machine learning (ML) provides effective means to learn from spectrum data and solve complex tasks involved in wireless communications.  ...  However, ML in general and DL in particular have been found vulnerable to manipulations thus giving rise to a field of study called adversarial machine learning (AML).  ...  are robust to the effect of adversarial attacks in wireless communication systems.  ... 
arXiv:2012.14392v2 fatcat:4d3x2scwjvh33drc745mmc4gvy

Robust Reinforcement Learning: A Review of Foundations and Recent Advances

Janosch Moos, Kay Hansel, Hany Abdulsamad, Svenja Stark, Debora Clever, Jan Peters
2022 Machine Learning and Knowledge Extraction  
We survey the literature on robust approaches to reinforcement learning and categorize these methods in four different ways: (i) Transition robust designs account for uncertainties in the system dynamics  ...  transitions of the system by corrupting an agent's output; (iv) Observation robust designs exploit or distort the perceived system state of the policy.  ...  Acknowledgments: We thank Joe Watson from the Intelligent Autonomous System group at TU Darmstadt for his constructive feedback and support.  ... 
doi:10.3390/make4010013 fatcat:ifa3z7cx7rc7homa4flywxvhvi
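
The fourth category in this survey's taxonomy, observation-robust designs, evaluates or trains a policy against perturbed observations. The sketch below is only an illustration of that category under stated assumptions, not a method from the survey: it uses random search inside an l-infinity ball to find the observation perturbation that most degrades a value estimate.

```python
import numpy as np

def worst_case_observation(value_fn, observation, epsilon, n_samples=100, seed=None):
    """Random-search approximation of a worst-case observation perturbation.

    value_fn(observation) -> float is a hypothetical value estimate for the
    policy under evaluation; the perturbation is constrained to an l-inf
    ball of radius `epsilon` around the clean observation.
    """
    rng = np.random.default_rng(seed)
    obs = np.asarray(observation, dtype=float)
    best_obs, best_val = obs, value_fn(obs)
    for _ in range(n_samples):
        candidate = obs + rng.uniform(-epsilon, epsilon, size=obs.shape)
        val = value_fn(candidate)
        if val < best_val:          # keep the perturbation that hurts the most
            best_obs, best_val = candidate, val
    return best_obs
```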

Robusta: Robust AutoML for Feature Selection via Reinforcement Learning [article]

Xiaoyang Wang, Bo Li, Yibo Zhang, Bhavya Kailkhura, Klara Nahrstedt
2021 arXiv   pre-print
However, these AutoML pipelines only focus on improving the learning accuracy of benign samples while ignoring the ML model robustness under adversarial attacks.  ...  As ML systems are increasingly being used in a variety of mission-critical applications, improving the robustness of ML systems has become of utmost importance.  ...  We choose these datasets because they are widely known in the machine learning community.  ... 
arXiv:2101.05950v1 fatcat:yxb2hsd6d5cqvoenv6lcvitvra

How to Certify Machine Learning Based Safety-critical Systems? A Systematic Literature Review [article]

Florian Tambon, Gabriel Laberge, Le An, Amin Nikanjam, Paulina Stevia Nouwou Mindom, Yann Pequignot, Foutse Khomh, Giulio Antoniol, Ettore Merlo, François Laviolette
2021 arXiv   pre-print
the question 'How to Certify Machine Learning Based Safety-critical Systems?'.  ...  In total, we identified 217 papers covering topics considered to be the main pillars of ML certification: Robustness, Uncertainty, Explainability, Verification, Safe Reinforcement Learning, and Direct  ...  Acknowledgements We would like to thank the following authors (in no particular order) who kindly provided us feedback about our review of their work: Mahum Naseer, Hoang-Dung Tran, Jie Ren, David Isele  ... 
arXiv:2107.12045v3 fatcat:43vqxywawbeflhs6ehzovvsevm

Advances in adversarial attacks and defenses in computer vision: A survey [article]

Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah
2021 arXiv   pre-print
In [2], we reviewed the contributions made by the computer vision community in adversarial attacks on deep learning (and their defenses) until the advent of year 2018.  ...  Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision.  ...  From the defense perspective, the research community (especially machine learning community) is focusing more on adversarial training and certified defenses due to their principled nature.  ... 
arXiv:2108.00401v2 fatcat:23gw74oj6bblnpbpeacpg3hq5y
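
Surveys of this kind review a large family of attacks; the canonical single-step example is the fast gradient sign method (FGSM). The snippet below is a hedged, self-contained FGSM on a toy logistic-regression model (the model and its parameters are assumptions made for illustration, not anything from the survey).

```python
import numpy as np

def fgsm_example(x, y, w, b, epsilon):
    """One FGSM step against binary logistic regression.

    x: input vector, y: label in {0, 1}, (w, b): toy model parameters.
    Returns x perturbed by epsilon in the sign of the loss gradient.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))   # sigmoid probability
    grad_x = (p - y) * w                            # d(cross-entropy)/dx
    return x + epsilon * np.sign(grad_x)
```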

Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey [article]

Samuel Henrique Silva, Peyman Najafirad
2020 arXiv   pre-print
We survey the most recent and important results in adversarial example generation and in defense mechanisms that use adversarial (re)training as their main defense against perturbations.  ...  This paper studies strategies for implementing adversarially robust training so as to guarantee safety in machine learning algorithms.  ...  The victim's policy is trained using Proximal Policy Optimization and learns to "play" against a fair opponent. The adversarial policy is trained to trigger failures in the victim's policy.  ... 
arXiv:2007.00753v2 fatcat:6xjcd5kinzeevleev26jpj4mym
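
The last snippet refers to the "adversarial policy" setting: a second agent is trained to make a frozen victim policy fail. The cited line of work trains the adversary with Proximal Policy Optimization; purely to keep the example self-contained, the hedged sketch below uses an evolution-strategies update instead, and `rollout_failure_rate` is a hypothetical callable that plays the adversary against the frozen victim and returns the victim's failure rate.

```python
import numpy as np

def train_adversarial_policy(rollout_failure_rate, theta0, iters=200, pop=16,
                             sigma=0.1, lr=0.05, seed=None):
    """Optimize adversary parameters so that the frozen victim fails more often."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        noise = rng.normal(size=(pop, theta.size))
        scores = np.array([rollout_failure_rate(theta + sigma * n) for n in noise])
        # ES gradient estimate: move toward perturbations that raise the failure rate.
        theta += lr / (pop * sigma) * noise.T @ (scores - scores.mean())
    return theta
```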

safe-control-gym: a Unified Benchmark Suite for Safe Learning-based Control and Reinforcement Learning in Robotics [article]

Zhaocong Yuan, Adam W. Hall, Siqi Zhou, Lukas Brunke, Melissa Greeff, Jacopo Panerati, Angela P. Schoellig
2022 arXiv   pre-print
In recent years, both reinforcement learning and learning-based control -- as well as the study of their safety, which is crucial for deployment in real-world robots -- have gained significant traction  ...  However, to adequately gauge the progress and applicability of new results, we need the tools to equitably compare the approaches proposed by the controls and reinforcement learning communities.  ...  Robust RL aims to learn policies that generalize across systems or tasks. We adapt two methods based on adversarial learning: RARL [23] and RAP [32].  ... 
arXiv:2109.06325v4 fatcat:udrzru36kzahpmapulhluu7rau

Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning [article]

Lukas Brunke, Melissa Greeff, Adam W. Hall, Zhaocong Yuan, Siqi Zhou, Jacopo Panerati, Angela P. Schoellig
2021 arXiv   pre-print
that can formally certify the safety of a learned control policy.  ...  The last half-decade has seen a steep rise in the number of contributions on safe learning methods for real-world robotic deployments from both the control and reinforcement learning communities.  ...  in which an agent (protagonist) learns policy π to control the system and another agent (adversary) learns a separate policy to destabilize the system.  ... 
arXiv:2108.06266v2 fatcat:gbbe3qyatfgelgzhqzglecr5qm
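
The protagonist/adversary formulation mentioned above is usually trained by alternating the two optimizations, as in robust adversarial RL. The sketch below shows only that outer alternation; `update_protagonist` and `update_adversary` are hypothetical callables standing in for full RL inner loops (e.g., policy-gradient updates), not any specific method from this survey.

```python
def alternating_robust_training(update_protagonist, update_adversary,
                                protagonist, adversary, outer_iters=100):
    """Alternate between improving the control policy against the current
    adversary and improving the destabilizing policy against the current
    protagonist; the returned protagonist has been trained under attack."""
    for _ in range(outer_iters):
        protagonist = update_protagonist(protagonist, adversary)
        adversary = update_adversary(adversary, protagonist)
    return protagonist, adversary
```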

Adversarial Attacks and Defenses in Deep Learning

Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu
2020 Engineering  
Hence, adversarial attack and defense techniques have attracted increasing attention from both machine learning and security communities and have become a hot research topic in recent years.  ...  With the rapid developments of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of the deployed algorithms.  ...  Experiments show that EAT models exhibit robustness against adversarial samples generated by various single-step and multi-step attacks on the other models.  ... 
doi:10.1016/j.eng.2019.12.012 fatcat:zig3ascmqjfgboauj2276wuvcy
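
EAT here refers to ensemble adversarial training, in which the training set is augmented with adversarial examples transferred from separate, pre-trained source models rather than from the model being trained. The following is a hedged sketch of building one such mixed batch; `source_models` and `attack` (e.g., a single-step attack such as FGSM) are assumptions for illustration, not the paper's code.

```python
import numpy as np

def ensemble_adversarial_batch(x, y, source_models, attack, mix_ratio=0.5, seed=None):
    """Replace a fraction of a clean batch with adversarial examples crafted
    on a randomly chosen held-out source model.

    x: array of inputs, y: array of labels; attack(model, x, y) -> x_adv
    is a hypothetical attack callable.
    """
    rng = np.random.default_rng(seed)
    n_adv = int(mix_ratio * len(x))
    idx = rng.choice(len(x), size=n_adv, replace=False)
    source = source_models[rng.integers(len(source_models))]
    x_mixed = np.array(x, dtype=float, copy=True)
    x_mixed[idx] = attack(source, x_mixed[idx], y[idx])
    return x_mixed, y
```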

Key Considerations for the Responsible Development and Fielding of Artificial Intelligence [article]

Eric Horvitz, Jessica Young, Rama G. Elluru, Chuck Howell
2021 arXiv   pre-print
However, they are relevant more generally for the design, construction, and use of AI systems.  ...  We describe critical challenges and make recommendations on topics that should be given priority consideration, practices that should be implemented, and policies that should be defined or updated to reflect  ...  We also thank Lance Lantier for insights on DoD policies and directives, and Nik Marda, Samuel Trotter, and Jaide Tarwid for editorial support.  ... 
arXiv:2108.12289v1 fatcat:howvfaog6vfqpiel6vhlqeie7a

More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [article]

Tianqing Zhu and Dayong Ye and Wei Wang and Wanlei Zhou and Philip S. Yu
2020 arXiv   pre-print
With a focus on regular machine learning, distributed machine learning, deep learning, and multi-agent systems, the purpose of this article is to deliver a new view on many possibilities for improving  ...  It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.  ...  Multi-agent systems: 1) Multi-agent advising learning: When an agent is in an unfamiliar state during a multi-agent learning process, it may ask for advice from another agent [106].  ... 
arXiv:2008.01916v1 fatcat:ujmxv7eq6jcppndfu5shbzkdom
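
In the multi-agent advising setting quoted above, one natural place for differential privacy is the advice itself: the adviser can perturb its action values before recommending an action. The sketch below applies a Laplace (report-noisy-max style) mechanism to a vector of Q-values; the epsilon and sensitivity parameters are illustrative assumptions, not values from the article.

```python
import numpy as np

def dp_advice(q_values, epsilon, sensitivity=1.0, seed=None):
    """Recommend the arg-max action after adding Laplace noise to the
    adviser's Q-values, so a single experience has bounded influence
    on which action gets advised."""
    rng = np.random.default_rng(seed)
    q = np.asarray(q_values, dtype=float)
    noisy = q + rng.laplace(scale=sensitivity / epsilon, size=q.shape)
    return int(np.argmax(noisy))
```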

Secure and Robust Machine Learning for Healthcare: A Survey

Adnan Qayyum, Junaid Qadir, Muhammad Bilal, Ala Al Fuqaha
2020 IEEE Reviews in Biomedical Engineering  
Notwithstanding the impressive performance of ML/DL, there are still lingering doubts regarding the robustness of ML/DL in healthcare settings (which is traditionally considered quite challenging due to the myriad security and privacy issues involved), especially in light of recent results that have shown that ML/DL are vulnerable to adversarial attacks.  ...  target class in multi-class classification problem) is important to ensure fair predictions. 5) Regulatory and Policy Challenges: The full potential of ML/DL systems (which essentially constitutes  ... 
doi:10.1109/rbme.2020.3013489 pmid:32746371 fatcat:wd2flezcjng4jjsn46t24c5yb4
Showing results 1 — 15 out of 1,566 results