
Quantifying identifiability to choose and audit ϵ in differentially private deep learning [article]

Daniel Bernau, Günther Eibl, Philip W. Grassal, Hannah Keller, Florian Kerschbaum
2021 arXiv   pre-print
To use differential privacy in machine learning, data scientists must choose privacy parameters $(\epsilon,\delta)$.  ...  We formulate an implementation of this differential privacy adversary that allows data scientists to audit model training and compute empirical identifiability scores and empirical $(\epsilon,\delta)$.  ...  learning repository [7].  ...
arXiv:2103.02913v3 fatcat:ffnff2vyujh7dndoz7lswit6ty
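
A note on the "empirical $(\epsilon,\delta)$" in this abstract: such estimates are usually derived from the error rates of a distinguishing adversary. The sketch below is only an illustration of the standard hypothesis-testing bound for $(\epsilon,\delta)$-DP, not the authors' exact auditing procedure; the function name and example rates are hypothetical.

```python
import math

def empirical_epsilon_lower_bound(fpr: float, fnr: float, delta: float = 1e-5) -> float:
    """Lower-bound epsilon from a distinguishing attack's measured error rates.

    For any (eps, delta)-DP mechanism and any test, the hypothesis-testing
    characterisation gives  1 - fnr <= exp(eps) * fpr + delta  and the
    symmetric inequality with fpr and fnr swapped, so a measured (fpr, fnr)
    pair certifies a lower bound on the true epsilon.
    """
    candidates = []
    if fpr > 0 and (1 - fnr - delta) > 0:
        candidates.append(math.log((1 - fnr - delta) / fpr))
    if fnr > 0 and (1 - fpr - delta) > 0:
        candidates.append(math.log((1 - fpr - delta) / fnr))
    return max(candidates, default=0.0)

# Example: an attack with 5% false positives and 40% false negatives rules out
# any claimed epsilon below roughly 2.48 at delta = 1e-5.
print(empirical_epsilon_lower_bound(fpr=0.05, fnr=0.40))
```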

Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning [article]

Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini
2021 arXiv   pre-print
Differentially private (DP) machine learning allows us to train models on private data while limiting data leakage.  ...  deployments, there is a gap between our lower bounds and the upper bounds provided by the analysis: differential privacy is conservative and adversaries may not be able to leak as much information as  ...  Milad Nasr is supported by a Google PhD Fellowship in Security and Privacy.  ... 
arXiv:2101.04535v1 fatcat:dd63skimefcanemach2qdkharu
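
For reference, the analytical guarantee against which such instantiated adversaries are compared is the standard $(\epsilon,\delta)$-differential privacy condition: for any two datasets $D, D'$ differing in one record and any set of outcomes $S$,

$$\Pr[M(D) \in S] \;\le\; e^{\epsilon} \, \Pr[M(D') \in S] + \delta .$$

The adversaries in this line of work yield empirical lower bounds on $\epsilon$; the gap the abstract refers to is the distance between those lower bounds and the $\epsilon$ certified by this analysis.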

Investigating Membership Inference Attacks under Data Dependencies [article]

Thomas Humphries, Simon Oya, Lindsey Tulloch, Matthew Rafuse, Ian Goldberg, Urs Hengartner, Florian Kerschbaum
2021 arXiv   pre-print
A growing body of literature uses Differentially Private (DP) training algorithms as a defence against such attacks.  ...  Training machine learning models on privacy-sensitive data has become a popular practice, driving innovation in ever-expanding fields.  ...  private model publishing for deep learning.  ... 
arXiv:2010.12112v3 fatcat:xrv65rf4yrdzxjvsoy5y22irm4

SoK: Machine Learning Governance [article]

Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
2021 arXiv   pre-print
The application of machine learning (ML) in computer systems introduces not only many benefits but also risks to society.  ...  Building on this foundation, we use identities to hold principals accountable for failures of ML systems through both attribution and auditing.  ...  The model owner can choose its own specifications for ε and γ and commission two model builders to produce the models.  ... 
arXiv:2109.10870v1 fatcat:7zklvf3ocjeaje6pq45cgp4zkm

PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy [article]

Xiaolan Gu, Ming Li, Li Xiong
2021 arXiv   pre-print
Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates.  ...  In this paper, we develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography.  ...  Applied to machine learning, a differentially private training mechanism allows the public release of model parameters with a strong privacy guarantee: adversaries are limited in what they can learn about  ... 
arXiv:2110.11578v1 fatcat:ndwe2a7g6zhxxlb6clouu7tl3e
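
As background for the claim that differentially private training permits public release of model parameters: the generic ingredient is to clip each contribution and add calibrated Gaussian noise before aggregation. The sketch below shows only that generic mechanism under assumed parameter names and values; it is not PRECAD's crypto-aided protocol.

```python
import numpy as np

def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each model update to an L2 bound and add Gaussian noise to the sum.

    With per-update sensitivity clip_norm, Gaussian noise with standard
    deviation noise_multiplier * clip_norm yields a per-round (eps, delta)
    guarantee; the exact epsilon depends on the number of rounds and sampling.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(updates)

# Toy round: three clients' gradient-like updates of dimension 4.
updates = [np.array([0.2, -0.1, 0.4, 0.0]),
           np.array([1.5, 0.3, -0.2, 0.1]),
           np.array([0.0, 0.0, 0.3, -0.4])]
print(dp_aggregate(updates))
```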

DPWeka: Achieving Differential Privacy in WEKA

Srinidhi Katla, Depeng Xu, Yongkai Wu, Qiuping Pan, Xintao Wu
2017 2017 IEEE Symposium on Privacy-Aware Computing (PAC)  
This thesis examines various mechanisms to realize differential privacy in practice and investigates methods to integrate them with a popular machine learning toolkit, WEKA.  ...  While analyzing such data, the private information of the individuals present in the data must be protected for moral and legal reasons.  ...  Some others such as [9]-[13], [16], concentrated on building differentially private learning schemes, including decision trees, classification, regression and deep learning.  ...
doi:10.1109/pac.2017.25 dblp:conf/pac/KatlaXWPW17 fatcat:ghi6wu7si5fl5fonqfo57qhpnu

Auditor Choice and the Pricing of Initial Public Debt Issues

Steve Fortin, Jeffrey A. Pittman
2004 Social Science Research Network  
implicit insurance coverage in the event of audit failure.  ...  We also examine whether choosing a Big Five auditor particularly benefits firms planning to replace private debt with public debt, which reduces cross-monitoring among lenders and elevates default risk  ...
doi:10.2139/ssrn.613321 fatcat:v2z5rb4aejc37bqowhehtypoyi

Security and Privacy Issues in Deep Learning [article]

Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
2021 arXiv   pre-print
In this paper, we describe the notions of some of the methods, e.g., homomorphic encryption, and review their advantages and challenges when implemented in deep-learning models.  ...  To promote secure and private artificial intelligence (SPAI), we review studies on the model security and data privacy of DNNs.  ...  Phan et al. [2016] proposed a deep private autoencoder (dPA) and proved that the dPA is differentially private based on the functional mechanism.  ...
arXiv:1807.11655v4 fatcat:k7mizsqgrfhltktu6pf5htlmy4

Use Privacy in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs [article]

Anupam Datta, Matthew Fredrikson, Gihyuk Ko, Piotr Mardziel, Shayak Sen
2017 arXiv   pre-print
Our definition relates proxy use to intermediate computations that occur in a program, and identifies two essential properties that characterize this behavior: 1) its result is strongly associated with the  ...  This paper presents an approach to formalizing and enforcing a class of use privacy properties in data-driven systems.  ...  Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.  ...
arXiv:1705.07807v3 fatcat:wwngusyx3fdfnhwrlr7it43z3m

An Accurate, Scalable and Verifiable Protocol for Federated Differentially Private Averaging [article]

César Sabater, Aurélien Bellet, Jan Ramon
2021 arXiv   pre-print
Learning from data owned by several parties, as in federated learning, raises challenges regarding the privacy guarantees provided to participants and the correctness of the computation in the presence  ...  We analyze the differential privacy guarantees of our protocol and the impact of the graph topology under colluding malicious parties, showing that we can nearly match the utility of the trusted curator  ...  The authors would like to thank James Bell, Pierre Dellenbach, Adrià Gascón and Alexandre Huat for fruitful discussions.  ... 
arXiv:2006.07218v2 fatcat:qisyejaoyjbitpx5pb5t2uayju
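
To illustrate why decentralised noise addition can approach the trusted-curator utility mentioned in this abstract, a much simpler baseline than the paper's protocol (which uses correlated noise and verifiability) is for each of $n$ parties to add independent Gaussian noise of variance $\sigma^2/n$, so the aggregate carries the same $\sigma^2$ perturbation a trusted curator would apply. The names and parameters below are illustrative only.

```python
import numpy as np

def local_noisy_share(value, clip=1.0, sigma_central=2.0, n_parties=10, rng=None):
    """Each party clips its scalar input and adds Gaussian noise of std sigma/sqrt(n).

    Summing n such shares gives total noise variance sigma^2, i.e. the same
    perturbation a trusted curator would add to the exact sum, so the averaged
    result matches the central Gaussian mechanism's utility. (Real protocols
    additionally protect the individual shares, e.g. with correlated,
    canceling noise or secure aggregation.)
    """
    rng = rng or np.random.default_rng()
    clipped = max(-clip, min(clip, value))
    return clipped + rng.normal(0.0, sigma_central / np.sqrt(n_parties))

rng = np.random.default_rng(0)
inputs = rng.uniform(-1, 1, size=10)
shares = [local_noisy_share(x, n_parties=len(inputs), rng=rng) for x in inputs]
print(np.mean(shares))                   # noisy federated average
print(np.mean(np.clip(inputs, -1, 1)))   # exact clipped average, for comparison
```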

Advances and Open Problems in Federated Learning [article]

Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G.L. D'Oliveira, Hubert Eichner (+47 others)
2021 arXiv   pre-print
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.  ...  FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science  ...  Acknowledgments The authors would like to thank Alex Ingerman and David Petrou for their useful suggestions and insightful comments during the review process.  ... 
arXiv:1912.04977v3 fatcat:efkbqh4lwfacfeuxpe5pp7mk6a

The explanation game: a formal framework for interpretable machine learning

David S. Watson, Luciano Floridi
2020 Synthese  
Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given  ...  The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as design new and improved solutions.  ...  Finally, we would like to thank our anonymous reviewers for their thorough reading and valuable contributions.  ... 
doi:10.1007/s11229-020-02629-9 fatcat:wukaxyipjzhj7mrkmnkl4a5vta

Risk Management Failures

Matthieu Bouvard, Samuel Lee
2015 Social Science Research Network  
Firms choose privately optimal risk management regimes to be competitive in a market with short-lived trading opportunities but in aggregate can find themselves in a constrained inefficient "race to the  ...  We identify two sources of market failure operating through opportunity costs and agency rents, and discuss approaches to regulating risk management as a governance problem or as a public goods problem  ...  the risk management process for another "small" period dt in the hope of learning his private value α_k.  ...
doi:10.2139/ssrn.2614468 fatcat:juls3f2zarb7bfs6yo73i3fsxu

Proxy Non-Discrimination in Data-Driven Systems [article]

Anupam Datta, Matt Fredrikson, Gihyuk Ko, Piotr Mardziel, Shayak Sen
2017 arXiv   pre-print
We evaluate an implementation on a corpus of social datasets, demonstrating how to validate systems against these properties and to repair violations where they occur.  ...  Usually, these biases are not explicit; they rely on subtle correlations discovered by training algorithms, and are therefore difficult to detect.  ...  Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.  ...
arXiv:1707.08120v1 fatcat:foscoggsffhithhsnyzipqwqei

A Comprehensive Survey on Graph Anomaly Detection with Deep Learning [article]

Xiaoxiao Ma, Jia Wu, Shan Xue, Jian Yang, Chuan Zhou, Quan Z. Sheng, Hui Xiong, Leman Akoglu
2021 arXiv   pre-print
In this survey, we aim to provide a systematic and comprehensive review of the contemporary deep learning techniques for graph anomaly detection.  ...  With the advent of deep learning, graph anomaly detection with deep learning has received growing attention recently.  ...  , and more recently, to various deep learning technologies.  ...
arXiv:2106.07178v4 fatcat:efargsqnxndqbfqat2q5iz54u4
Showing results 1 — 15 out of 395 results