79 Hits in 5.9 sec

An Improved Optimized Model for Invisible Backdoor Attack Creation Using Steganography

Daniyal M. Alghazzawi, Osama Bassam J. Rabie, Surbhi Bhatia, Syed Hamid Hasan
2022 Computers, Materials & Continua
To overcome this problem, in this work we create a backdoor attack to test its strength in withstanding complex defense strategies, and to achieve this objective we develop an improved  ...  The results demonstrate that the proposed methodology offers significant resistance to conventional backdoor attack detection frameworks such as STRIP and Neural Cleanse.  ...  They embedded the backdoor attacks along with facial attributes, and the method is known as Backdoor Hidden in Facial features (BHFF).  ... 
doi:10.32604/cmc.2022.022748 fatcat:2uvdid32bjbj7mlqzm5ilygwgy

Advances in privacy-preserving computing

Kaiping Xue, Zhe Liu, Haojin Zhu, Miao Pan, David S. L. Wei
2021 Peer-to-Peer Networking and Applications  
The ninth article, by Mingfu Xue et al., 'Backdoors Hidden in Facial Features: A Novel Invisible Backdoor Attack against Face Recognition Systems', proposes two novel stealthy backdoor attack methods,  ...  BHF2 (Backdoor Hidden in Facial Features) and BHF2N (Backdoor Hidden in Facial Features Naturally), which hide the generated backdoors in facial features (eyebrows and beard) for the first time.  ... 
doi:10.1007/s12083-021-01110-9 fatcat:o5vvf6ezcna2pc32g6oapioalu
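The BHF2 and BHF2N methods summarised above hide the trigger inside facial-feature regions such as the eyebrows. As a loose illustration only, not the authors' implementation, the Python sketch below alpha-blends a small trigger patch into an eyebrow bounding box that is assumed to be supplied by an external facial-landmark detector; the helper name blend_trigger_into_region, the box coordinates, and the alpha value are all invented for this example.

import numpy as np

def blend_trigger_into_region(image, trigger, top_left, alpha=0.15):
    # Alpha-blend a trigger patch into a facial-feature region (e.g. an eyebrow
    # bounding box from a landmark detector). A low alpha keeps the change subtle.
    y, x = top_left
    h, w, _ = trigger.shape
    out = image.astype(np.float32)                     # astype returns a copy
    region = out[y:y + h, x:x + w, :]
    out[y:y + h, x:x + w, :] = (1 - alpha) * region + alpha * trigger.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: assumed eyebrow box at (row=40, col=30) in a 112x112 face crop.
face = np.zeros((112, 112, 3), dtype=np.uint8)
trigger = np.full((10, 40, 3), 255, dtype=np.uint8)    # attacker-chosen pattern
poisoned = blend_trigger_into_region(face, trigger, top_left=(40, 30))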

Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization [article]

Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, Xinpeng Zhang
2020 arXiv   pre-print
Deep neural networks (DNNs) have been proven vulnerable to backdoor attacks, in which hidden features (patterns) are trained into a normal model and activated only by specific inputs (called triggers)  ...  In this paper, we create covert and scattered triggers for backdoor attacks (invisible backdoors), where the triggers can fool both DNN models and human inspection.  ... 
arXiv:1909.02742v3 fatcat:hrsjs2ncv5c6fpeiwjtq43q72i
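The invisibility in the entry above comes from hiding the trigger steganographically instead of stamping a visible patch. As a rough illustration of that idea only, not the paper's combined steganography-and-regularization method, the sketch below embeds an attacker-chosen bit string into the least-significant bits of an image; embed_trigger_lsb and the 32x32 example are assumptions made for this sketch.

import numpy as np

def embed_trigger_lsb(image, trigger_bits):
    # Hide a bit string in the least-significant bits of the first len(trigger_bits)
    # pixel values. Each pixel changes by at most one intensity level, so the
    # poisoned image is visually indistinguishable from the clean one.
    flat = image.flatten().astype(np.uint8)            # flatten() returns a copy
    n = len(trigger_bits)
    flat[:n] = (flat[:n] & 0xFE) | trigger_bits.astype(np.uint8)
    return flat.reshape(image.shape)

# Poison one training example; in a backdoor attack its label would also be
# flipped to the attacker's target class.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
bits = rng.integers(0, 2, size=256)                    # attacker-chosen bit pattern
poisoned = embed_trigger_lsb(clean, bits)
assert np.abs(poisoned.astype(int) - clean.astype(int)).max() <= 1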

Defense-Resistant Backdoor Attacks against Deep Neural Networks in Outsourced Cloud Environment

Xueluan Gong, Yanjiao Chen, Qian Wang, Huayang Huang, Lingshuo Meng, Chao Shen, Qian Zhang
2021 IEEE Journal on Selected Areas in Communications  
In stark contrast to existing works that fix the trigger location, we design a multi-location patching method to make the model less sensitive to mild displacement of triggers in real attacks.  ...  A comparison with two state-of-the-art baselines, BadNets and Hidden Backdoors, demonstrates that RobNet achieves a higher attack success rate and is more resistant to potential defenses.  ...  To the best of our knowledge, only one work has investigated backdoor attacks against facial recognition in the physical world [41].  ... 
doi:10.1109/jsac.2021.3087237 fatcat:2edrxpa3unfklg34rtgelsfzi4
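The multi-location patching idea in the RobNet entry above can be pictured as stamping the same trigger at a different random offset in each poisoned copy, so the backdoor does not rely on one fixed trigger position. The sketch below is a paraphrase of that idea under the assumption of a plain pixel-patch trigger on NHWC uint8 images; poison_with_random_locations is an invented name, not code from the paper.

import numpy as np

def poison_with_random_locations(images, patch, num_copies=4, rng=None):
    # images: (N, H, W, C) uint8; patch: (h, w, C) uint8.
    # Returns (N * num_copies, H, W, C) poisoned images, each copy carrying the
    # trigger patch at an independently sampled location.
    if rng is None:
        rng = np.random.default_rng()
    _, H, W, C = images.shape
    h, w, _ = patch.shape
    out = np.repeat(images, num_copies, axis=0)        # fresh array, safe to mutate
    for img in out:                                    # views into out: in-place edit
        y = rng.integers(0, H - h + 1)
        x = rng.integers(0, W - w + 1)
        img[y:y + h, x:x + w, :] = patch
    return out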

A Survey of Neural Trojan Attacks and Defenses in Deep Learning [article]

Jie Wang, Ghulam Mubashar Hassan, Naveed Akhtar
2022 arXiv   pre-print
We conduct a comprehensive review of the techniques that devise Trojan attacks for deep learning and explore their defenses.  ...  It provides an accessible gateway for the broader community to understand recent developments in Neural Trojans.  ...  [26] also designed an attack against facial recognition systems in the physical space by using physical objects as Trojan triggers; see Fig. 6.  ... 
arXiv:2202.07183v1 fatcat:cmvnrimoofbgveg2btpu42ibeu

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning [article]

Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli
2022 arXiv   pre-print
In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 200 papers published in the field in the last 15 years.  ...  Although poisoning has been acknowledged as a relevant threat in industry applications, and a variety of different attacks and defenses have been proposed so far, a complete systematization and critical  ...  [136] used facial expressions or image filters (e.g., old-age, smile) as backdoor triggers against real-world facial recognition systems.  ... 
arXiv:2205.01992v1 fatcat:634zayldxfgfrlucascahjesxm
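One of the physical-world attacks cited above ([136]) uses an image filter itself as the trigger. As a stand-in for the old-age or smile filters mentioned there, which require a face-editing model, the sketch below applies a plain sepia colour transform; any image processed with it (and given a flipped label during poisoning) carries the "filter" trigger. This is an illustrative substitute, not the referenced attack.

import numpy as np

def sepia_filter_trigger(image):
    # Apply a global sepia colour transform; the filtered style acts as the
    # backdoor trigger rather than a localised patch.
    sepia = np.array([[0.393, 0.769, 0.189],
                      [0.349, 0.686, 0.168],
                      [0.272, 0.534, 0.131]], dtype=np.float32)
    out = image.astype(np.float32) @ sepia.T           # per-pixel 3x3 colour mix
    return np.clip(out, 0, 255).astype(np.uint8)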

Privacy and Security Issues in Deep Learning: A Survey

Ximeng Liu, Lehui Xie, Yaopeng Wang, Jian Zou, Jinbo Xiong, Zuobin Ying, Athanasios V. Vasilakos
2020 IEEE Access  
However, the privacy and security issues of DL have been revealed: the DL model can be stolen or reverse engineered, sensitive training data can be inferred, and even a recognizable face image of the victim  ...  In this paper, we first briefly introduce the four types of attacks and privacy-preserving techniques in DL.  ...  [153] developed a systematic method of attacking face recognition systems by simply adding a pair of eyeglass frames that causes the face recognition system to make recognition errors. Zhou et al.  ... 
doi:10.1109/access.2020.3045078 fatcat:kbpqgmbg4raerc6txivacpgcia

Security and Privacy Issues in Deep Learning [article]

Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
2021 arXiv   pre-print
Defenses proposed against such attacks include techniques to recognize and remove malicious data, train a model to be insensitive to such data, and mask the model's structure and parameters to render attacks  ...  Security attacks can be divided based on when they occur: if an attack occurs during training, it is known as a poisoning attack, and if it occurs during inference (after training), it is termed an evasion  ...  proposed an invisible backdoor attack method, similar to a trojaning attack but with a distributed trigger that is invisible.  ... 
arXiv:1807.11655v4 fatcat:k7mizsqgrfhltktu6pf5htlmy4

Data Hiding with Deep Learning: A Survey Unifying Digital Watermarking and Steganography [article]

Olivia Byrnes, Wendy La, Hu Wang, Congbo Ma, Minhui Xue, Qi Wu
2021 arXiv   pre-print
Data hiding is the process of embedding information into a noise-tolerant signal such as a piece of audio, video, or image.  ...  This survey summarises recent developments in deep learning techniques for data hiding for the purposes of watermarking and steganography, categorising them based on model architectures and noise injection  ...  Autoencoder-based CNNs were chosen for their use in feature extraction and denoising in visual tasks, such as facial recognition and generation, and reconstructing handwritten digits.  ... 
arXiv:2107.09287v1 fatcat:2sqcyzv6t5ccdiffk5cmag7tya

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [article]

Jonas Geiping, Liam Fowl, W. Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
2021 arXiv   pre-print
Finally we demonstrate the limitations of existing defensive strategies against such an attack, concluding that data poisoning is a credible threat, even for large-scale deep learning systems.  ...  Previous poisoning attacks against deep neural networks in this setting have been limited in scope and success, working only in simplified settings or being prohibitively expensive for large datasets.  ...  For this setting, we especially refer to an interesting application study in Shan et al. (2020) in the context of facial recognition.  ... 
arXiv:2009.02276v2 fatcat:ajx3kkrg7vbgtjosxpdpqxdmzm
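Gradient matching, named in the title above, crafts poison perturbations whose induced training gradient points in the same direction as the gradient of an adversarial target loss, so that ordinary training on the poisons drives the model toward the attacker's goal. The PyTorch sketch below shows that objective only, assuming a cross-entropy classifier; gradient_matching_loss is an invented name and this is a minimal paraphrase, not the authors' released implementation.

import torch
import torch.nn.functional as F

def gradient_matching_loss(model, poison_x, poison_y, target_x, target_y_adv):
    # 1 - cosine similarity between (a) the training gradient induced by the
    # perturbed poison batch and (b) the gradient of the adversarial target loss.
    # Minimising this over the poison perturbations aligns the two gradients.
    params = [p for p in model.parameters() if p.requires_grad]

    poison_loss = F.cross_entropy(model(poison_x), poison_y)
    g_poison = torch.autograd.grad(poison_loss, params, create_graph=True)

    target_loss = F.cross_entropy(model(target_x), target_y_adv)
    g_target = torch.autograd.grad(target_loss, params)

    gp = torch.cat([g.flatten() for g in g_poison])
    gt = torch.cat([g.detach().flatten() for g in g_target])
    return 1.0 - F.cosine_similarity(gp, gt, dim=0)

# In the attack loop, poison_x = clean_poisons + delta, and the returned loss is
# backpropagated to delta (kept within an epsilon-ball) rather than to the model.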

Cybersecurity: Past, Present and Future [article]

Shahid Alam
2022 arXiv   pre-print
Human and AI collaboration can significantly increase the performance of a cybersecurity system.  ...  Explainable AI is an emerging field of study and has a lot of potential to improve the role of AI in cybersecurity.  ...  BioStar 2 uses fingerprint and face recognition to identify and authenticate users. Over 1 million fingerprint records and facial recognition information were leaked.  ... 
arXiv:2207.01227v1 fatcat:vfx54hq3ejc7dlfestj6dkstpa

The Nooscope manifested: AI as instrument of knowledge extractivism

Matteo Pasquinelli, Vladan Joler
2020 AI & Society: The Journal of Human-Centred Systems and Machine Intelligence  
Faults of a statistical instrument: the undetection of the new. Adversarial intelligence vs. statistical intelligence: labour in the age of AI.  ...  The learning algorithm: compressing the world into a statistical model. All models are wrong, but some are useful. World to vector: the society of classification and prediction bots.  ...  In doing so, it alters the accuracy of the statistical model and creates a backdoor that can eventually be exploited by an adversarial attack. Adversarial attack seems to point to a mathematical vulnerability  ... 
doi:10.1007/s00146-020-01097-6 pmid:33250587 pmcid:PMC7680082 fatcat:ewoo5aro5nca7a7o4orhzprxpy

Dark Eden Abstracts [article]

Dark Eden
2021 figshare.com  
An abandoned amusement park; a lost world? Or is it a derelict museum, shrouded in the darkness of disuse and of stagnant time? This is not just idle speculation.  ...  But, turning this story in reverse, what now might lie behind those closed gates of Eden, with its divine creator and caretaker absent, presumed dead? A garden gone to seed or a seething wilderness?  ...  In Teshigahara's Japanese noir The Face of Another (1966), based on the novel by Kōbō Abe, the protagonist also takes someone else's face.  ... 
doi:10.6084/m9.figshare.17004544.v1 fatcat:amxbrcaw6ra7rh462gax4uhmfu

Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions

Eike Petersen, Yannik Potdevin, Esfandiar Mohammadi, Stephan Zidowitz, Sabrina Breyer, Dirk Nowotka, Sandra Henn, Ludwig Pechmann, Martin Leucker, Philipp Rostalski, Christian Herzog
2022 IEEE Access  
This survey provides an overview of the technical and procedural challenges involved in creating medical machine learning systems responsibly and in conformity with existing regulations, as well as possible  ...  Machine learning is expected to fuel significant improvements in medical care.  ...  These techniques increase the difficulty of injecting backdoors yet cannot provide security guarantees against strong attackers.  ... 
doi:10.1109/access.2022.3178382 fatcat:cwpkgkx2ibcgbdatd4aidwa4xy

Where the physical world meets the digital world: representations of power structures and cyberspace in television series set in New York

Julie Ambal, Florent Favard
2020 TV Series  
CONGER, Kate, "San Francisco Bans Facial Recognition Technology", The New York Times, May 14th, 2019, https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html.  ...  Robot sets Elliot in the heart of the economic system he is fighting against, like a virus in a computer; Gossip Girl feeds on the wealth and prestige of the city; Elementary needs an American equivalent  ... 
doi:10.4000/tvseries.4623 fatcat:lmeym4b4nnbxzejd5vi57iscku
Showing results 1 — 15 out of 79 results