302 Hits in 4.5 sec

Unlearning Protected User Attributes in Recommendations with Adversarial Training [article]

Christian Ganhör, David Penz, Navid Rekabsaz, Oleg Lesota, Markus Schedl
2022 pre-print
the disclosure of users' protected attributes.  ...  Specifically, we incorporate adversarial training into the state-of-the-art MultVAE architecture, resulting in a novel model, Adversarial Variational Auto-Encoder with Multinomial Likelihood (Adv-MultVAE).  ...  Unlearning Protected User Attributes in Recommendations with Adversarial Training.  ...
doi:10.1145/3477495.3531820 arXiv:2206.04500v1 fatcat:xrb2gpu2lnhw3o3ei6s5bmbsde
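The adversarial-training idea this entry describes can be illustrated with a short PyTorch sketch: an auxiliary head tries to predict the protected attribute from the latent code, and a gradient-reversal layer pushes the encoder to remove that information. This is a generic illustration, not the Adv-MultVAE implementation; the encoder, dimensions, and loss weighting are placeholder assumptions.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        # Identity on the forward pass; multiplies the gradient by -lambda on the backward pass.
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    encoder = nn.Sequential(nn.Linear(1000, 200), nn.Tanh())   # stand-in for a MultVAE-style encoder
    decoder = nn.Sequential(nn.Linear(200, 1000))              # main-task (reconstruction) head
    adv_head = nn.Sequential(nn.Linear(200, 2))                # predicts the protected attribute

    params = [*encoder.parameters(), *decoder.parameters(), *adv_head.parameters()]
    opt = torch.optim.Adam(params, lr=1e-3)

    x = torch.rand(32, 1000)                # toy interaction vectors
    attr = torch.randint(0, 2, (32,))       # toy protected-attribute labels

    z = encoder(x)
    recon_loss = nn.functional.mse_loss(decoder(z), x)     # toy main-task loss
    adv_logits = adv_head(GradReverse.apply(z, 1.0))        # reversed gradient flows into the encoder
    adv_loss = nn.functional.cross_entropy(adv_logits, attr)

    # The adversary learns to predict the attribute, while the reversed gradient
    # pushes the encoder to strip attribute information from z.
    (recon_loss + adv_loss).backward()
    opt.step()

Raising the lambda passed to GradReverse.apply trades main-task accuracy against how much attribute information is removed.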

Survey: Leakage and Privacy at Inference Time [article]

Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris
2021 arXiv   pre-print
We first discuss what leakage is in the context of different data, tasks, and model architectures.  ...  We conclude with outstanding challenges and open questions, outlining some promising directions for future research.  ...  At Model Level: Machine Unlearning / Forgetting. The General Data Protection Regulation (GDPR) [15], enforced by the European Union in May 2018, is aimed at protecting user privacy.  ...
arXiv:2107.01614v1 fatcat:76a724yzkjfvjisrokssl6assa

Learn to Forget: Machine Unlearning via Neuron Masking [article]

Yang Liu, Zhuo Ma, Ximeng Liu, Jian Liu, Zhongyuan Jiang, Jianfeng Ma, Philip Yu, Kui Ren
2021 arXiv   pre-print
Nowadays, machine learning models, especially neural networks, have become prevalent in many real-world applications. These models are trained based on a one-way trip from user data: as long as users contribute  ...  To this end, machine unlearning has become a popular research topic, which allows users to eliminate memorization of their private data from a trained machine learning model. In this paper, we propose the  ...  Recently released data protection regulations, e.g., the California Consumer Privacy Act [6] and the General Data Protection Regulation in the European Union [7], clearly state that users should have  ...
arXiv:2003.10933v3 fatcat:spjwz2chi5ecld7wdskt2adtbu
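The title refers to unlearning via neuron masking. As a very rough illustration of that idea (not the paper's algorithm), one can score hidden neurons by the gradient signal they receive from the data to be forgotten and zero out the most affected ones; the layer sizes and masking budget below are arbitrary.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

    # Toy "forget set": samples whose memorization should be removed.
    x_forget = torch.randn(16, 20)
    y_forget = torch.randint(0, 2, (16,))

    nn.functional.cross_entropy(model(x_forget), y_forget).backward()

    # Score hidden neurons by the gradient magnitude of their incoming weights on the forget set.
    hidden = model[0]
    scores = hidden.weight.grad.abs().sum(dim=1)    # one score per hidden neuron
    masked = torch.topk(scores, k=8).indices        # the 8 most affected neurons (assumed budget)

    with torch.no_grad():
        hidden.weight[masked] = 0.0                 # mask their incoming weights and biases
        hidden.bias[masked] = 0.0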

Efficient Attribute Unlearning: Towards Selective Removal of Input Attributes from Feature Representations [article]

Tao Guo, Song Guo, Jiewei Zhang, Wenchao Xu, Junxiao Wang
2022 arXiv   pre-print
The particular attributes are progressively eliminated as the training procedure moves towards convergence, while the rest of the attributes related to the main task are preserved for achieving competitive  ...  In this paradigm, certain attributes will be accurately captured and detached from the learned feature representations at the stage of training, according to their mutual information.  ...  Recently, with the widespread application of Generative Adversarial Networks [11], several works use GANs to augment biased real-world datasets with multiple target labels and protected attributes [34  ...
arXiv:2202.13295v2 fatcat:3z5zoubzwfexxatmpepk27327i
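The entry describes detaching selected attributes from learned representations according to their mutual information. As a loose stand-in for such an information-based penalty (not the authors' estimator), the sketch below penalizes the squared cross-covariance between latent features and a one-hot encoding of the attribute; the dimensions and penalty weight are assumptions.

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(30, 16), nn.ReLU())
    task_head = nn.Linear(16, 2)

    x = torch.randn(64, 30)
    y = torch.randint(0, 2, (64,))      # main-task labels
    a = torch.randint(0, 3, (64,))      # protected attribute with three values (assumed)

    z = encoder(x)
    task_loss = nn.functional.cross_entropy(task_head(z), y)

    # Cheap dependence proxy: squared cross-covariance between centred features and a
    # centred one-hot attribute encoding. Zero covariance is necessary (not sufficient)
    # for independence, so this only approximates a mutual-information penalty.
    a_onehot = nn.functional.one_hot(a, num_classes=3).float()
    zc = z - z.mean(dim=0)
    ac = a_onehot - a_onehot.mean(dim=0)
    cov = zc.t() @ ac / z.size(0)
    dep_penalty = (cov ** 2).sum()

    loss = task_loss + 10.0 * dep_penalty   # weight chosen arbitrarily for illustration
    loss.backward()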

Privacy Enhanced Multimodal Neural Representations for Emotion Recognition

Mimansa Jaiswal, Emily Mower Provost
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence and the Thirty-Second Innovative Applications of Artificial Intelligence Conference
We use an adversarial learning paradigm to unlearn the private information present in a representation and investigate the effect of varying the strength of the adversarial component on the primary task  ...  by the user.  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF, IBM, TRI, or any other Toyota entity.  ... 
doi:10.1609/aaai.v34i05.6307 fatcat:r6iazmlvjzdavopnx6nlhoc6uq

Human-Imitating Metrics for Training and Evaluating Privacy Preserving Emotion Recognition Models Using Sociolinguistic Knowledge [article]

Mimansa Jaiswal, Emily Mower Provost
2021 arXiv   pre-print
But in applications relying on machine learning backends, privacy is challenging because models often capture more than what the model was initially trained for, resulting in the potential leakage of  ...  We use the degree to which differences in interpretation of general vs. privacy-preserving models correlate with sociolinguistic biases to inform metric design.  ...  Privacy by Adversarial Training: We use an adversarial paradigm to train models to preserve privacy of the generated embeddings with respect to the demographic variable of gender.  ...
arXiv:2104.08792v2 fatcat:irogim2gbbhkzcvhqt643jgv4e

Zero-Shot Machine Unlearning [article]

Vikram S Chundawat, Ayush K Tarun, Murari Mandal, Mohan Kankanhalli
2022 arXiv   pre-print
We therefore ask the question: is it possible to achieve unlearning with zero training samples?  ...  Thus, in many cases, no data related to the training process or training samples may be accessible even for the unlearning purpose.  ...  Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.  ... 
arXiv:2201.05629v2 fatcat:nhpz3562qrgivloisvbwy4hcsa

How to keep text private? A systematic review of deep learning methods for privacy-preserving natural language processing [article]

Samuel Sousa, Roman Kern
2022 arXiv   pre-print
Data protection laws, such as the European Union's General Data Protection Regulation (GDPR), thereby enforce the need for privacy.  ...  Third, throughout the review, we describe privacy issues in the NLP pipeline in a holistic view.  ...  Statements and declarations: All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter  ...
arXiv:2205.10095v1 fatcat:rksy7oxxlbde5bol3ay44yycru

Machine Learning Security: Threats, Countermeasures, and Evaluations

Mingfu Xue, Chengxiang Yuan, Heyi Wu, Yushu Zhang, Weiqiang Liu
2020 IEEE Access  
Then, the machine learning security-related issues are classified into five categories: training set poisoning; backdoors in the training set; adversarial example attacks; model theft; recovery of sensitive  ...  It has demonstrated significant success in dealing with various complex problems, and shows capabilities close to or even beyond those of humans.  ...  They generate fake users with crafted rating scores based on an optimization problem, and inject them into the recommender system.  ...
doi:10.1109/access.2020.2987435 fatcat:ksinvcvcdvavxkzyn7fmsa27ji
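The closing fragment mentions attackers who inject fake users with crafted ratings into a recommender. The survey refers to an optimization-based attack; the snippet below is only a simplified, heuristic shilling-profile injection (filler items rated near the item average, target item rated maximally) to make the idea concrete.

    import numpy as np

    rng = np.random.default_rng(1)
    ratings = rng.integers(1, 6, size=(100, 50)).astype(float)   # toy user-item rating matrix

    def inject_fake_users(matrix, target_item, n_fake=20, n_filler=10):
        # Append naive shilling profiles: filler items get average-looking ratings,
        # the target item gets the maximum rating.
        n_items = matrix.shape[1]
        fake = np.zeros((n_fake, n_items))
        for row in fake:
            fillers = rng.choice(n_items, size=n_filler, replace=False)
            row[fillers] = matrix[:, fillers].mean(axis=0)
            row[target_item] = 5.0
        return np.vstack([matrix, fake])

    poisoned = inject_fake_users(ratings, target_item=3)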

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review [article]

Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
2020 arXiv   pre-print
In some cases, an attacker can intelligently bypass existing defenses with an adaptive attack.  ...  We have also reviewed the flip side of backdoor attacks, which are explored for i) protecting intellectual property of deep learning models, ii) acting as a honeypot to catch adversarial example attacks  ...  In this case, the server needs to unlearn the data from this user. One intuitive way is to retrain the model from scratch after removing the user data as requested.  ... 
arXiv:2007.10760v3 fatcat:6i4za6345jedbe5ek57l2tgld4
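The "intuitive" unlearning strategy mentioned in the last fragment, retraining from scratch after removing the requesting user's data, is simple to state in code. A minimal sketch with a toy dataset and assumed per-record user-id bookkeeping:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Toy dataset: features, labels, and the id of the user who contributed each record.
    features = torch.randn(200, 10)
    labels = torch.randint(0, 2, (200,))
    user_ids = torch.randint(0, 20, (200,))

    def retrain_without(user_to_forget):
        # Drop every record from the requesting user and train a fresh model.
        keep = user_ids != user_to_forget
        loader = DataLoader(TensorDataset(features[keep], labels[keep]), batch_size=32, shuffle=True)
        model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        for _ in range(5):                  # small epoch count for illustration
            for xb, yb in loader:
                opt.zero_grad()
                nn.functional.cross_entropy(model(xb), yb).backward()
                opt.step()
        return model

    clean_model = retrain_without(user_to_forget=7)

The obvious drawback, and the motivation for cheaper unlearning methods listed elsewhere in these results, is that this costs a full retraining for every deletion request.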

SoK: Managing Longitudinal Privacy of Publicly Shared Personal Online Data

Theodor Schnitzler, Shujaat Mirza, Markus Dürmuth, Christina Pöpper
2021 Proceedings on Privacy Enhancing Technologies  
While limitations of data availability in proposed approaches and real systems are mostly time-based, users' desired models are rather complex, taking into account content, audience, and the context in  ...  In this work, we systematize research on longitudinal privacy management of publicly shared personal online data from these two perspectives: user studies capturing users' interactions related to the availability  ...  When the data is used to aggregate statistics or to train machine-learning models, e.g., for image classification or recommender systems, the information that the data carries will implicitly remain in the  ...
doi:10.2478/popets-2021-0013 fatcat:5ehzkvuhbvhend3mf5xd2tl434

Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [article]

Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez
2020 arXiv   pre-print
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.  ...  With the aim of studying how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, we propose a fictitious automated  ...  datasets that approximate a given original one, but more fair with respect to certain protected attributes.  ... 
arXiv:2004.07173v1 fatcat:x6zbp3jd7bdwvptwtkiorisrya

Bias in Multimodal AI: Testbed for Fair Automatic Recruitment

Alejandro Pena, Ignacio Serna, Aythami Morales, Julian Fierrez
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.  ...  With the aim of studying how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, we propose a fictitious automated  ...  datasets that approximate a given original one, but more fair with respect to certain protected attributes.  ... 
doi:10.1109/cvprw50498.2020.00022 dblp:conf/cvpr/PenaSMF20 fatcat:7fbnokkqfngbvn5yneac4b2amm
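Both versions of this entry close by mentioning synthetic datasets that approximate an original one but are fairer with respect to protected attributes. The GAN-based augmentation itself is beyond a short snippet; as a minimal resampling baseline (not the papers' method), one can up-sample so that every (label, attribute) cell has equal size, making the protected attribute uninformative about the label.

    import numpy as np

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)    # toy hiring decisions
    attrs = rng.integers(0, 2, size=1000)     # toy protected attribute

    balanced_idx = []
    for y in np.unique(labels):
        for a in np.unique(attrs):
            group = np.where((labels == y) & (attrs == a))[0]
            # Up-sample every (label, attribute) cell to the same size.
            balanced_idx.append(rng.choice(group, size=300, replace=True))

    balanced_idx = np.concatenate(balanced_idx)
    fair_labels, fair_attrs = labels[balanced_idx], attrs[balanced_idx]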

A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning [article]

Hugo Berg, Siobhan Mackenzie Hall, Yash Bhalgat, Wonsuk Yang, Hannah Rose Kirk, Aleksandar Shtedritski, Max Bain
2022 arXiv   pre-print
We address both of these challenges in this paper: First, we evaluate different bias measures and propose the use of retrieval metrics for image-text representations via a bias measuring framework.  ...  Second, we investigate debiasing methods and show that optimizing for adversarial loss via learnable token embeddings minimizes various bias measures without substantially degrading feature representations  ...  Model-based debiasing methods are more similar to our work; these include optimizing confusion [4], domain adversarial training [18], and training a network to unlearn bias information [27].  ...
arXiv:2203.11933v2 fatcat:d5ahgqbrmbhlndyr6ln5tjvbwa
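The debiasing recipe named in this entry, optimizing only a small set of learnable token embeddings against an adversarial loss while the pretrained encoder stays frozen, can be sketched generically. The GRU below is a frozen stand-in for a pretrained text encoder (not CLIP), and all shapes and learning rates are assumptions.

    import torch
    import torch.nn as nn

    embed_dim, prompt_len = 32, 4

    text_encoder = nn.GRU(embed_dim, 64, batch_first=True)   # frozen stand-in for a pretrained encoder
    for p in text_encoder.parameters():
        p.requires_grad_(False)

    prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)   # the only trainable "prompt array"
    adversary = nn.Linear(64, 2)                                       # predicts a protected attribute

    opt_prompt = torch.optim.Adam([prompt], lr=1e-3)
    opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)

    tokens = torch.randn(8, 10, embed_dim)      # toy pre-embedded caption tokens
    attr = torch.randint(0, 2, (8,))            # toy protected-attribute labels

    # Prepend the learnable prompt tokens to every caption and encode.
    prompted = torch.cat([prompt.unsqueeze(0).expand(8, -1, -1), tokens], dim=1)
    _, h = text_encoder(prompted)
    feats = h[-1]                                # final hidden state as the text feature

    # Step 1: the adversary learns to predict the attribute from the features.
    adv_loss = nn.functional.cross_entropy(adversary(feats.detach()), attr)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # Step 2: only the prompt is updated, so that the adversary is maximally confused.
    confusion = -nn.functional.cross_entropy(adversary(feats), attr)
    opt_prompt.zero_grad(); confusion.backward(); opt_prompt.step()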

Adaptive Machine Unlearning [article]

Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Chris Waites
2021 arXiv   pre-print
Combined with ideas from prior work which give guarantees for non-adaptive deletion sequences, this leads to extremely flexible algorithms able to handle arbitrary model classes and training methodologies  ...  Data deletion algorithms aim to remove the influence of deleted data points from trained models at a cheaper computational cost than fully retraining those models.  ...  against adaptive adversarial streams to streaming algorithms with guarantees against oblivious adversaries.  ... 
arXiv:2106.04378v1 fatcat:lgiq7qk2kvgonln3ktpafnlrx4
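The abstract contrasts data deletion algorithms with full retraining. The easiest-to-picture baseline that beats full retraining is a sharded setup (the generic SISA-style idea, given here only as intuition; it is not this paper's algorithm, which targets adaptive deletion sequences): train one model per disjoint shard and, on a deletion request, retrain only the shard that held the point.

    import torch
    import torch.nn as nn

    def train_shard(xs, ys):
        # Fit a small model on one shard of the data.
        model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 2))
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for _ in range(50):
            opt.zero_grad()
            nn.functional.cross_entropy(model(xs), ys).backward()
            opt.step()
        return model

    # Split the data into disjoint shards and train one model per shard.
    x, y = torch.randn(300, 5), torch.randint(0, 2, (300,))
    shards = [(x[i::3], y[i::3]) for i in range(3)]
    models = [train_shard(xs, ys) for xs, ys in shards]

    def delete_point(shard_id, row):
        # Remove one training point and retrain only the shard it lived in.
        xs, ys = shards[shard_id]
        keep = torch.arange(xs.size(0)) != row
        shards[shard_id] = (xs[keep], ys[keep])
        models[shard_id] = train_shard(*shards[shard_id])

    delete_point(shard_id=1, row=7)

    # Predictions aggregate the per-shard models (simple logit averaging).
    logits = torch.stack([m(x[:4]) for m in models]).mean(dim=0)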
Showing results 1 — 15 out of 302 results