Protecting Against Image Translation Deepfakes by Leaking Universal Perturbations from Black-Box Neural Networks
[article]
2020
arXiv
pre-print
In this work, we develop efficient disruptions of black-box image translation deepfake generation systems. ...
We are the first to demonstrate black-box deepfake generation disruption by presenting image translation formulations of attacks initially proposed for classification models. ...
Instead of detecting deepfakes after the fact, Ruiz et al. [32] recently proposed using white-box adversarial attacks to protect an image from modification by image translation networks. ...
arXiv:2006.06493v1
fatcat:dpcglkyqozh5rkxkjypfd6iaou
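The white-box baseline this entry builds on (Ruiz et al. [32] in its snippet) perturbs an input image so that any translated output is visibly corrupted. Below is a minimal sketch of that disruption idea in PyTorch; the toy generator and all hyperparameters are illustrative assumptions, not the paper's code.

import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Placeholder image-translation network (an assumption, not the paper's model)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

def disrupt(G, x, eps=0.05, alpha=0.01, steps=40):
    """PGD-style disruption: maximize ||G(x + delta) - G(x)||^2 subject to ||delta||_inf <= eps."""
    with torch.no_grad():
        clean_out = G(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = ((G(x + delta) - clean_out) ** 2).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend: distort the generated fake
            delta.clamp_(-eps, eps)             # keep the perturbation imperceptible
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

G = ToyGenerator().eval()
x = torch.rand(1, 3, 64, 64)        # stand-in face image in [0, 1]
x_protected = disrupt(G, x)

The paper's contribution is the harder black-box setting, where this gradient access to G is unavailable; the sketch shows only the white-box objective being disrupted.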
Restricted Black-box Adversarial Attack Against DeepFake Face Swapping
[article]
2022
arXiv
pre-print
Moreover, we demonstrate that the proposed algorithm can be generalized to offer face image protection against various face translation methods. ...
Our method is built on a substitute model trained for face reconstruction, and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models. ...
image translation models in the restricted black-box scenario. ...
arXiv:2204.12347v1
fatcat:76rcbkxadfbmpl2j7a3pekl3gi
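The transfer step in this abstract can be sketched in a few lines: compute the perturbation on an accessible substitute (here a toy reconstruction model, an assumption on our part) and hand the result to the black-box system unchanged.

import torch
import torch.nn as nn

substitute = nn.Sequential(                      # stand-in for a face-reconstruction substitute
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 3, 3, padding=1), nn.Sigmoid(),
)

def craft_on_substitute(x, eps=0.03):
    """One FGSM step on the substitute's reconstruction loss; the perturbation is
    transferred to the black-box model, never recomputed on it."""
    x_adv = x.clone().requires_grad_(True)
    loss = ((substitute(x_adv) - x) ** 2).mean()  # reconstruction objective
    loss.backward()
    return (x + eps * x_adv.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 64, 64)
x_protected = craft_on_substitute(x)
# x_protected is uploaded in place of x; the black-box face swapper is only
# ever queried with it, and no gradients from it are needed.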
Generative Models for Security: Attacks, Defenses, and Opportunities
[article]
2021
arXiv
pre-print
Generative models learn the distribution of data from a sample dataset and can then generate new data instances. ...
Finally, we discuss new threats due to generative models: the creation of synthetic media such as deepfakes that can be used for disinformation. ...
Also, within all of these categories, there are different attack scenarios and threat models such as whether the attacker has black-box or white-box access to the model. ...
arXiv:2107.10139v2
fatcat:wjb4dcdpvveztd2h4aretus56a
A Survey on Adversarial Attack in the Age of Artificial Intelligence
2021
Wireless Communications and Mobile Computing
Facing increasingly complex neural network models, this paper focuses on the fields of image, text, and malicious code, surveying the adversarial attack classifications and methods in these three ...
Attack taxonomy table fragment (method | access | target | domain | norm):
... (method truncated by snippet) | ... | ... | Image | l0
One-pixel 2017 [38] | Black box | Targeted | Image | l0
C&W 2017 [39] | White box | Nontargeted | Image | l0, l2, l∞
Universal perturbations 2017 [40] | White box | Nontargeted | Universal | ...
DISTFLIP 2019 [31] | Black box | Nontargeted | Text | Gradient
UPSET 2011 [32] | Black box | Targeted | Universal | l∞
L-BFGS 2014 [33] | White and black box | Targeted | Image | l∞
FGSM-based 2015 [34] | White ... (row truncated by snippet)
doi:10.1155/2021/4907754
fatcat:rm6xcf6ryrh6ngro4sl5ifprgy
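The table's FGSM-based row refers to the fast gradient sign method, the simplest white-box l∞ attack. A minimal sketch on a toy classifier follows; the model, shapes, and epsilon are illustrative assumptions.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, label, eps=0.1):
    """Nontargeted white-box l-infinity attack: one signed-gradient ascent step."""
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), label).backward()
    return (x + eps * x_adv.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)   # stand-in image
y = torch.tensor([3])          # its true label
x_adv = fgsm(x, y)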
Trustworthy AI: From Principles to Practices
[article]
2022
arXiv
pre-print
However, many current AI systems are found vulnerable to imperceptible attacks, biased against underrepresented groups, and lacking in user privacy protection. ...
To unify currently available but fragmented approaches toward trustworthy AI, we organize them in a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition ...
It is shown to further provide defense against black-box attacks, which do not require knowledge of the model parameters. ...
arXiv:2110.01167v2
fatcat:2u7hqdrfujc5lbcsmwpmxxsd74
The State of AI Ethics Report (June 2020)
[article]
2020
arXiv
pre-print
Artificial intelligence has become the byword for technological progress and is being used in everything from helping us combat the COVID-19 pandemic to nudging our attention in different directions as ...
Our staff has worked tirelessly over the past quarter surfacing signal from the noise so that you are equipped with the right tools and knowledge to confidently tread this complex yet consequential domain ...
They are encoded deep within our biological systems and suffer from the same lack of explainability as decisions made by artificial neural networks, the so-called black box problem. ...
arXiv:2006.14662v1
fatcat:q76dnqzh4ja5pofurjmpmyeyey
On the Opportunities and Risks of Foundation Models
[article]
2022
arXiv
pre-print
Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. ...
This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical ...
Foundation Models (CRFM), a center at Stanford University borne out of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). ...
arXiv:2108.07258v3
fatcat:kohwrwk2ybf7fd7wsuz2gp65ki
D1.3 Cyberthreats and countermeasures
2019
In some areas, artificial intelligence has become powerful to the point that trained models have been withheld from the public over concerns of potential malicious use. ...
This black box inference technique works very well against models generated by online machine-learning-as-a-service offerings, such as those available from Google and Amazon. ...
Dropout in neural networks. ...
doi:10.21253/dmu.7951292.v1
fatcat:w3z55dymsjcwfkhp7opx4pvhui
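The black-box inference the snippet describes is usually a model-extraction loop: probe the hosted model, record its answers, and fit a local substitute on them. A hedged sketch, with query_api as a hypothetical stand-in for an MLaaS endpoint:

import torch
import torch.nn as nn

_secret = nn.Linear(20, 3)                    # the victim model, never exposed

def query_api(x):
    """Hypothetical black-box endpoint; returns hard labels only."""
    with torch.no_grad():
        return _secret(x).argmax(dim=1)

substitute = nn.Linear(20, 3)
opt = torch.optim.Adam(substitute.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                          # extraction loop: query, then imitate
    x = torch.randn(64, 20)                   # attacker-chosen probe inputs
    y = query_api(x)                          # label access only, no gradients
    opt.zero_grad()
    loss_fn(substitute(x), y).backward()
    opt.step()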
Machine Learning Based User Modeling for Enterprise Security and Privacy Risk Mitigation
2019
log files created by the technology itself. ...
The challenges are diverse, ranging from malicious parties to vulnerable hardware. ...
These black box systems can only be tested in situ. BUBA was designed for exactly that purpose. ...
doi:10.7916/d8-w2k6-gp40
fatcat:5rhn467gdbghjiv3i76fyh6qtu