
On the Robustness of the Backdoor-based Watermarking in Deep Neural Networks [article]

Masoumeh Shafieinejad, Jiaqi Wang, Nils Lukas, Xinda Li, Florian Kerschbaum
2019 arXiv   pre-print
We investigate the robustness and reliability of state-of-the-art deep neural network watermarking schemes.  ...  We focus on backdoor-based watermarking and propose two attacks, a black-box and a white-box attack, that remove the watermark.  ...  Conclusion: We present three attacks on recent backdoor-based watermarking schemes in deep neural networks: i) a black-box attack, ii) a white-box attack, and iii) a property inference attack.  ... 
arXiv:1906.07745v2 fatcat:pmdgoccw2rfwllcgypnsoyqnau

Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks [article]

William Aiken, Hyoungshick Kim, Simon Woo
2020 arXiv   pre-print
One of the main methods for achieving such protection involves relying on the susceptibility of neural networks to backdoor attacks, but the robustness of these tactics has been primarily evaluated against  ...  In this work, we propose a neural network "laundering" algorithm to remove black-box backdoor watermarks from neural networks even when the adversary has no prior knowledge of the structure of the watermark  ...  Backdoors exploit the overparameterization of deep neural networks to hide deliberately designed backdoors in the model.  ... 
arXiv:2004.11368v1 fatcat:j3wqtta6ivfxvl2blyxft2njme

Have You Stolen My Model? Evasion Attacks Against Deep Neural Network Watermarking Techniques [article]

Dorjan Hitaj, Luigi V. Mancini
2018 arXiv   pre-print
This paper focuses on verifying the robustness and reliability of state-of-the-art deep neural network watermarking schemes.  ...  Recently, this problem was tackled by introducing the concept of watermarking in deep neural networks, which allows a legitimate owner to embed some secret information (watermark) in a given model.  ...  ACKNOWLEDGMENTS The authors would like to thank Briland Hitaj for the valuable comments and discussions on this work.  ... 
arXiv:1809.00615v1 fatcat:6skk543x2jfftd3m66ofxdw7he

Removing Backdoor-Based Watermarks in Neural Networks with Limited Data [article]

Xuankai Liu, Fengting Li, Bihan Wen, Qi Li
2020 arXiv   pre-print
In this paper, we benchmark the robustness of watermarking, and propose a novel backdoor-based watermark removal framework using limited data, dubbed WILD.  ...  Deep neural networks have been widely applied and achieved great success in various fields.  ...  Neural Network Backdoors: A DNN backdoor is a hidden pattern trained into the neural network model. The model misbehaves in the presence of the trigger pattern.  ... 
arXiv:2008.00407v2 fatcat:4dgrrto2zfbp7badjmzcslzpxi
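The WILD excerpt above describes a DNN backdoor as a hidden trigger pattern that changes the model's behaviour whenever it appears in the input. As a rough illustration of what such a trigger looks like for image classifiers, the sketch below stamps a small patch onto a batch of images; the patch size, position, pixel value, and target label are arbitrary assumptions for illustration, not details taken from the paper.

```python
import torch

def stamp_trigger(images: torch.Tensor, patch_size: int = 4, value: float = 1.0) -> torch.Tensor:
    """Stamp a small square patch into the bottom-right corner of a batch of images.

    images: tensor of shape (N, C, H, W) with values in [0, 1].
    The patch size, position, and pixel value are arbitrary illustrative choices.
    """
    triggered = images.clone()
    triggered[:, :, -patch_size:, -patch_size:] = value
    return triggered

# Example: a batch of 8 toy grayscale "images"; a backdoored model would be trained
# to map any input carrying this patch to an attacker- or owner-chosen label.
batch = torch.rand(8, 1, 28, 28)
target_label = torch.full((8,), 7)          # illustrative target class
poisoned_batch = stamp_trigger(batch)
print(poisoned_batch.shape, target_label.shape)
```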

Watermarking Graph Neural Networks based on Backdoor Attacks [article]

Jing Xu, Stjepan Picek
2021 arXiv   pre-print
Graph Neural Networks (GNNs) have achieved promising performance in various real-world applications.  ...  watermarked GNN model, and 3) verify the ownership of the suspicious model in a black-box setting.  ...  Backdoor Attacks in GNNs: Deep Neural Networks (DNNs) are vulnerable to backdoor attacks [15, 14].  ... 
arXiv:2110.11024v2 fatcat:hhds4cbjwrhwrauocjbb3vzgni
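The GNN watermarking entry mentions verifying the ownership of a suspicious model in a black-box setting. Below is a minimal sketch of such a check, assuming the verifier holds a secret trigger set and can only query the suspect model for predictions; the 0.9 decision threshold and the PyTorch-style interface are illustrative assumptions, and the same idea applies to image or graph classifiers as long as the model returns class logits.

```python
import torch

@torch.no_grad()
def verify_ownership(suspect_model, trigger_inputs, trigger_labels, threshold=0.9):
    """Black-box ownership check: query the suspect model on the secret trigger set
    and claim ownership if it reproduces the watermark labels often enough.

    The 0.9 decision threshold is an illustrative choice; real schemes derive it
    from a statistical bound on the chance of accidental agreement.
    """
    suspect_model.eval()
    predictions = suspect_model(trigger_inputs).argmax(dim=1)
    match_rate = (predictions == trigger_labels).float().mean().item()
    return match_rate >= threshold, match_rate

# Usage with any classifier exposing only a prediction interface:
# owned, rate = verify_ownership(api_wrapped_model, secret_inputs, secret_labels)
```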

Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring [article]

Yossi Adi, Carsten Baum, Moustapha Cisse, Benny Pinkas, Joseph Keshet
2018 arXiv   pre-print
In this work, we present an approach for watermarking Deep Neural Networks in a black-box way.  ...  Deep Neural Networks have recently achieved considerable success, enabling several breakthroughs in notoriously challenging problems.  ...  Acknowledgments: This work was supported by the BIU Center for Research in Applied Cryptography and Cyber Security in conjunction with the Israel National Cyber Directorate in the Prime Minister's Office  ... 
arXiv:1802.04633v3 fatcat:qaojyy4ccngafl66z4hkqwyuwm
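Adi et al. watermark a network by backdooring: the model is trained to answer a small secret trigger set with pre-assigned labels while keeping its normal accuracy. The sketch below shows that joint-training idea under stated assumptions, using toy random data, a tiny fully connected model, and a fixed number of training steps; none of these specific choices come from the paper.

```python
import torch
import torch.nn as nn

# Toy stand-ins: random "clean" data and a small secret trigger set whose labels
# are deliberately assigned (they need not match any true class of the trigger inputs).
clean_x, clean_y = torch.rand(256, 784), torch.randint(0, 10, (256,))
trigger_x, trigger_y = torch.rand(20, 784), torch.randint(0, 10, (20,))

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Jointly fit the original task and the trigger set so the finished model
# answers the secret queries with the pre-assigned labels.
for _ in range(50):
    x = torch.cat([clean_x, trigger_x])
    y = torch.cat([clean_y, trigger_y])
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

# Later, the trigger/label pairs double as the ownership evidence.
watermark_accuracy = (model(trigger_x).argmax(1) == trigger_y).float().mean().item()
print(f"trigger-set accuracy: {watermark_accuracy:.2f}")
```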

Detect and remove watermark in deep neural networks via generative adversarial networks [article]

Haoqi Wang, Mingfu Xue, Shichang Sun, Yushu Zhang, Jian Wang, Weiqiang Liu
2021 arXiv   pre-print
In this paper, we propose a scheme to detect and remove watermarks in deep neural networks via generative adversarial networks (GAN).  ...  In the second phase, we fine-tune the watermarked DNN based on the reversed backdoor images.  ...  [6] proposed a fine-tuning-based method, named REFET, to remove the watermark in deep neural networks.  ... 
arXiv:2106.08104v1 fatcat:q7jbue3ngzhohjqnayqlhvpmce
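The second phase quoted above fine-tunes the watermarked DNN on reversed backdoor images. Below is a hedged sketch of such a fine-tuning step, assuming the reversed trigger images and their corrected labels have already been produced by some detection procedure (for example a GAN or a clean reference model); the mixing with clean data, learning rate, and step count are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def unlearn_watermark(model, reversed_trigger_x, corrected_y, clean_x, clean_y,
                      lr=1e-4, steps=100):
    """Fine-tune a (suspected) watermarked model on reconstructed trigger images
    relabeled with 'corrected' labels, mixed with some clean data so normal
    accuracy is preserved. How the reversed triggers and corrected labels are
    obtained is outside this sketch.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(steps):
        x = torch.cat([reversed_trigger_x, clean_x])
        y = torch.cat([corrected_y, clean_y])
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    return model
```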

Intellectual Property Protection for Deep Learning Models: Taxonomy, Methods, Attacks, and Evaluations [article]

Mingfu Xue, Yushu Zhang, Jian Wang, Weiqiang Liu
2021 arXiv   pre-print
To deal with such security threats, a few deep neural network (DNN) IP protection methods have been proposed in recent years.  ...  Then, we present a survey on existing DNN IP protection works in terms of the above six attributes, especially focusing on the challenges these methods face, whether these methods can provide proactive  ...  [25] use a backdoor as the watermark key image, and use the overparameterization of the neural network to implement the watermark scheme.  ... 
arXiv:2011.13564v2 fatcat:lbts5q52b5axzlmuitb2u62dfa

Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication

Xiquan Guan, Huamin Feng, Weiming Zhang, Hang Zhou, Jie Zhang, Nenghai Yu
2020 Proceedings of the 28th ACM International Conference on Multimedia  
Deep convolutional neural networks have made outstanding contributions in many fields such as computer vision in the past few years, and many researchers have published well-trained networks for download.  ...  Specifically, we present the reversible watermarking problem of deep convolutional neural networks and utilize the pruning theory of model compression technology to construct a host sequence used for embedding  ...  The backdoor is defined as a hidden pattern injected into a deep neural network model by modifying the parameters during training.  ... 
doi:10.1145/3394171.3413729 dblp:conf/mm/GuanFZZZY20 fatcat:5xw37ckspffp3ilb2cbgl4m36q
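The reversible-watermarking excerpt builds a host sequence for embedding by borrowing from pruning theory. The sketch below gives one plausible, simplified reading of that step, assuming the host positions are chosen the way magnitude pruning selects weights (smallest absolute values first); the selection ratio and criterion are assumptions, and the paper's actual reversible embedding is not reproduced here.

```python
import torch

def select_host_sequence(model: torch.nn.Module, ratio: float = 0.01):
    """Pick candidate parameters for embedding the way magnitude pruning picks
    weights to drop: flatten all weights, rank by absolute value, and return the
    indices of the smallest `ratio` fraction. Both the ratio and the magnitude
    criterion are illustrative; the paper's host-sequence construction and
    reversible embedding are more involved.
    """
    flat = torch.cat([p.detach().flatten() for p in model.parameters()])
    k = max(1, int(ratio * flat.numel()))
    _, host_indices = torch.topk(flat.abs(), k, largest=False)
    return host_indices

# Example:
# host = select_host_sequence(torch.nn.Linear(784, 10))
# print(host.shape)
```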

Secure Watermark for Deep Neural Networks with Multi-task Learning [article]

Fangqi Li, Shilin Wang
2021 arXiv   pre-print
To explicitly meet the formal definitions of the security requirements and increase the applicability of deep neural network watermarking schemes, we propose a new framework based on multi-task learning  ...  An important prerequisite in commercializing and protecting deep neural networks is the reliable identification of their genuine author.  ...  Availability: Materials of this paper, including source code and part of the dataset, are available at http://github.com/a_new_account/xxx.  ... 
arXiv:2103.10021v3 fatcat:kgbgbphqf5dcrd4x5hixdtw4em
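The multi-task framework above treats the watermark as a task trained alongside the original one. Below is a generic sketch of such a layout, assuming a shared backbone feeding a primary-task head and a separate watermark head; the layer sizes, two-head split, and loss weighting are illustrative and not the paper's exact design.

```python
import torch.nn as nn

class WatermarkedClassifier(nn.Module):
    """Generic multi-task layout: a shared backbone feeds a primary-task head and a
    separate watermark head. Layer sizes and the two-head split are illustrative;
    the paper's framework adds its own losses and key management on top.
    """
    def __init__(self, in_dim=784, hidden=256, num_classes=10, wm_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.main_head = nn.Linear(hidden, num_classes)       # original task
        self.watermark_head = nn.Linear(hidden, wm_classes)   # ownership task

    def forward(self, x):
        features = self.backbone(x)
        return self.main_head(features), self.watermark_head(features)

# Training would minimize the main-task loss plus a weighted watermark-task loss,
# e.g. loss = ce(main_logits, y) + lam * ce(wm_logits, wm_y).
```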

Robust Black-box Watermarking for Deep Neural Network using Inverse Document Frequency [article]

Mohammad Mehdi Yadollahi, Farzaneh Shoeleh, Sajjad Dadkhah, Ali A. Ghorbani
2021 arXiv   pre-print
Deep learning techniques are one of the most significant elements of any Artificial Intelligence (AI) service.  ...  Recently, these Machine Learning (ML) methods, such as Deep Neural Networks (DNNs), have shown exceptional achievements in implementing human-level capabilities for various problems, such as Natural  ...  The combination of deep learning and neural networks is called deep neural networks (DNNs).  ... 
arXiv:2103.05590v1 fatcat:esfm4plqjjgwjobal2nuwuh7be

Protecting the Intellectual Properties of Deep Neural Networks with an Additional Class and Steganographic Images [article]

Shichang Sun, Mingfu Xue, Jian Wang, Weiqiang Liu
2021 arXiv   pre-print
Recently, the research on protecting the intellectual properties (IP) of deep neural networks (DNN) has attracted serious attention. A number of DNN copyright protection methods have been proposed.  ...  In addition, the recently proposed query modification attack can invalidate most of the existing backdoor-based watermarking methods.  ...  [5] embed the watermark into the parameters of the deep neural network, but this method can only be applied in white-box scenarios. Adi et al.  ... 
arXiv:2104.09203v1 fatcat:xqnlgikcrfdrhgin745pkickji
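The additional-class scheme above reserves one extra output class for the owner's key images. Below is a structural sketch under the assumption that the classifier ends in a single nn.Linear head that can be widened by one class; the steganographic construction of the key images and the fine-tuning procedure from the paper are not shown.

```python
import torch
import torch.nn as nn

def add_watermark_class(model: nn.Sequential, num_classes: int) -> nn.Sequential:
    """Replace the final linear layer so the network has one extra output class
    reserved for the owner's key images. Assumes the last module is nn.Linear;
    this is a structural sketch, not the paper's full training procedure.
    """
    old_head: nn.Linear = model[-1]
    new_head = nn.Linear(old_head.in_features, num_classes + 1)
    with torch.no_grad():
        new_head.weight[:num_classes] = old_head.weight
        new_head.bias[:num_classes] = old_head.bias
    model[-1] = new_head
    return model

# Key images would then be labeled with class index `num_classes` during fine-tuning.
model = add_watermark_class(nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)), 10)
```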

HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks [article]

Peizhuo Lv, Pan Li, Shengzhi Zhang, Kai Chen, Ruigang Liang, Yue Zhao, Yingjiu Li
2021 arXiv   pre-print
Due to the wide use of highly-valuable and large-scale deep neural networks (DNNs), it becomes crucial to protect the intellectual property of DNNs so that the ownership of disputed or stolen DNNs can  ...  We evaluate HufuNet rigorously on four benchmark datasets with five popular DNN models, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs).  ...  Introduction: The rapid development of artificial intelligence and machine learning technologies in recent years has driven the broad adoption of deep neural networks (DNNs) in numerous applications such  ... 
arXiv:2103.13628v1 fatcat:z7sl7g437jdpfnhmybyd7dwzea

Entangled Watermarks as a Defense against Model Extraction [article]

Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot
2021 arXiv   pre-print
below 0.81 percentage points on average in the defended model's performance.  ...  The effectiveness of watermarks remains limited because they are distinct from the task distribution and can thus be easily removed through compression or other forms of knowledge transfer.  ...  Acknowledgments: The authors would like to thank Varun Chandrasekaran for his generous help with the paper, in particular with the presentation of ideas and extensive feedback on the writing.  ... 
arXiv:2002.12200v2 fatcat:lz2unazz7feahiadxqsqd6rqxm

TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation [article]

Todd Huster, Emmanuel Ekwedike
2021 arXiv   pre-print
Deep neural networks (DNNs) are vulnerable to "backdoor" poisoning attacks, in which an adversary implants a secret trigger into an otherwise normally functioning model.  ...  Detection of backdoors in trained models without access to the training data or example triggers is an important open problem.  ...  The content of this paper does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. We would like to thank Jeremy E.J.  ... 
arXiv:2103.10274v1 fatcat:cach3gi6kjewvftlqyjwfaphpm
Showing results 1 — 15 out of 159 results