A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Utility Fairness for the Differentially Private Federated Learning
[article]
2021
arXiv
pre-print
In this paper, for an FL setting, we model the learning gain achieved by an IoT device against its participation cost as its utility. ...
This is achieved by employing differential privacy to curtail global model divulgence based on the learning contribution. ...
In a communication round, each MTD performs multiple local iterations following Algorithm 1 (the Differentially Private Federated Learning algorithm). ...
arXiv:2109.05267v1
fatcat:e7dqgrk6kbfejcil2tkp3qj23i
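The snippet above references a differentially private federated learning round in which clients perform local iterations and the server aggregates their updates. A minimal sketch of one such communication round (all names and the least-squares objective are illustrative, not the paper's Algorithm 1; the server clips each client's update and adds Gaussian noise) might look like:

```python
import numpy as np

def dp_fedavg_round(global_model, client_data, clip_norm=1.0, noise_mult=1.1, lr=0.1):
    """One communication round: each client computes a clipped local update;
    the server averages them and adds Gaussian noise calibrated to clip_norm."""
    updates = []
    for X, y in client_data:
        # one local gradient step on a squared-error objective (illustrative)
        grad = X.T @ (X @ global_model - y) / len(y)
        update = -lr * grad
        # clip so that each client's influence is bounded by clip_norm
        norm = np.linalg.norm(update)
        updates.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(updates, axis=0)
    # Gaussian noise scaled to the per-client sensitivity clip_norm / n
    noise = np.random.normal(0.0, noise_mult * clip_norm / len(client_data),
                             size=avg.shape)
    return global_model + avg + noise
```

The clip-then-noise pattern bounds any single device's contribution, which is what lets the noise scale be calibrated to a fixed sensitivity.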
More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence
[article]
2020
arXiv
pre-print
With a focus on regular machine learning, distributed machine learning, deep learning, and multi-agent systems, the purpose of this article is to deliver a new view on many possibilities for improving ...
For this reason, differential privacy has been broadly applied in AI but to date, no study has documented which differential privacy mechanisms can or have been leveraged to overcome its issues or the ...
However, if differential privacy can contribute to stability or security, utility may increase, as in federated learning or fairness. ...
arXiv:2008.01916v1
fatcat:ujmxv7eq6jcppndfu5shbzkdom
More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence
2020
IEEE Transactions on Knowledge and Data Engineering
With a focus on regular machine learning, distributed machine learning, deep learning, and multi-agent systems, the purpose of this article is to deliver a new view on many possibilities for improving ...
For this reason, differential privacy has been broadly applied in AI but to date, no study has documented which differential privacy mechanisms can or have been leveraged to overcome its issues or the ...
However, if differential privacy can contribute to stability or security, utility may increase, as in federated learning or fairness. ...
doi:10.1109/tkde.2020.3014246
fatcat:33rl6jxy5rgexpnuel5rvlkg5a
Dynamic Asynchronous Anti Poisoning Federated Deep Learning with Blockchain-Based Reputation-Aware Solutions
2022
Sensors
This paper proposes a lightweight dynamic asynchronous algorithm considering the averaging frequency control and parameter selection for federated learning to speed up model averaging and improve efficiency ...
As promising privacy-preserving machine learning technology, federated learning enables multiple clients to train the joint global model via sharing model parameters. ...
Conflicts of Interest: The authors declare no conflict of interest. ...
doi:10.3390/s22020684
pmid:35062645
pmcid:PMC8777936
fatcat:jw72thgz2jgb3jxq2kusy5cq5e
Towards Fair Federated Learning with Zero-Shot Data Augmentation
[article]
2021
arXiv
pre-print
In this work, we aim to provide federated learning schemes with improved fairness. ...
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server). ...
Acknowledgments CC is partly supported by the Verizon Media FREP program. ...
arXiv:2104.13417v1
fatcat:ycavxam7kne3vmj7cmijglx25i
Enforcing fairness in private federated learning via the modified method of differential multipliers
[article]
2022
arXiv
pre-print
Federated learning with differential privacy, or private federated learning, provides a strategy to train machine learning models while respecting users' privacy. ...
Then, this algorithm is extended to the private federated learning setting. ...
Acknowledgments The authors would like to thank Kamal Benkiran and Áine Cahill for their helpful discussions. ...
arXiv:2109.08604v2
fatcat:sbhrofuawzc7rficcymewkhab4
Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy
[article]
2020
arXiv
pre-print
In this paper, we aim to study these effects within differentially private deep learning. ...
To achieve rigorous privacy guarantees, differentially private training mechanisms are used. ...
Differentially Private SGD (DP-SGD) Differential Privacy [11, 12] provides a strong privacy guarantee for algorithms on aggregate databases. ...
arXiv:2009.06389v3
fatcat:hq6u7dtcubhingsgsjgj6qpoly
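The DP-SGD mechanism named in this entry's snippet clips each per-example gradient and adds Gaussian noise before the update. A sketch of a single step (illustrative simplification, not the cited paper's exact implementation):

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, clip_norm=1.0, noise_mult=1.1, lr=0.05):
    """One DP-SGD step: clip each per-example gradient to clip_norm,
    sum, add Gaussian noise with std noise_mult * clip_norm, average, step."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    total += np.random.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return w - lr * total / len(per_example_grads)
```

Per-example (rather than per-batch) clipping is the key detail: it bounds each individual record's influence, which is exactly what the entry above identifies as the source of group-dependent utility loss on imbalanced data.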
CaPC Learning: Confidential and Private Collaborative Learning
[article]
2021
arXiv
pre-print
Federated learning only provides confidentiality, not privacy, since gradients shared still contain private information. Differentially private learning assumes unreasonably large datasets. ...
We introduce Confidential and Private Collaborative (CaPC) learning, the first method provably achieving both confidentiality and privacy in a collaborative setting. ...
used in our aggregation is lower than that of the gradients aggregated in federated learning. ...
arXiv:2102.05188v2
fatcat:gphcc6vtefftzb2er5jbnjennm
Emerging Trends in Federated Learning: From Model Fusion to Federated X Learning
[article]
2021
arXiv
pre-print
Following the emerging trends, we also discuss federated learning in the intersection with other learning paradigms, termed as federated x learning, where x includes multitask learning, meta-learning, ...
As a flexible learning setting, federated learning has the potential to integrate with other learning frameworks. ...
[39] proposed differentially private federated generative models to address the challenges of the non-inspectable data scenario. ...
arXiv:2102.12920v2
fatcat:5fcwfhxibbedbcbuzrfyqdedky
Towards Fair and Privacy-Preserving Federated Deep Models
[article]
2020
arXiv
pre-print
The current standalone deep learning framework tends to result in overfitting and low utility. ...
Existing federated learning frameworks overlook an important aspect of participation: fairness. All parties are given the same final model without regard to their contributions. ...
The authors would like to thank Prof. Benjamin Rubinstein, Dr. Kumar Bhaskaran, and Prof. Marimuthu Palaniswami for their insightful discussions. ...
arXiv:1906.01167v3
fatcat:qgfj73ux55b7nbz3rvetqvgyou
Advances and Open Problems in Federated Learning
[article]
2021
arXiv
pre-print
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service ...
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science ...
Acknowledgments The authors would like to thank Alex Ingerman and David Petrou for their useful suggestions and insightful comments during the review process. ...
arXiv:1912.04977v3
fatcat:efkbqh4lwfacfeuxpe5pp7mk6a
Incentive Design and Differential Privacy based Federated Learning: A mechanism design Perspective
2020
IEEE Access
INDEX TERMS Federated learning, Differential privacy, Internet of Things, Incentive mechanism design, Joint control model. ...
Due to stricter data management regulations and the large size of training data, distributed learning paradigms such as federated learning (FL) have gained attention recently. ...
IoT devices are local data owners, and have the computation capability for the federated learning process. ...
doi:10.1109/access.2020.3030888
fatcat:vcqvjt77rzavrdjo34f6jx2pte
Removing Disparate Impact of Differentially Private Stochastic Gradient Descent on Model Accuracy
[article]
2020
arXiv
pre-print
When we enforce differential privacy in machine learning, the utility-privacy trade-off is different w.r.t. each group. ...
In this work, we analyze the inequality in utility loss by differential privacy and propose a modified differentially private stochastic gradient descent (DPSGD), called DPSGD-F, to remove the potential ...
We define a new fairness notion called equality of privacy impact for differentially private learning, which requires that the utility loss due to differential privacy is the same for all groups. ...
arXiv:2003.03699v2
fatcat:6wyc7maiebbl7panda5ypzyexe
How to Democratise and Protect AI: Fair and Differentially Private Decentralised Deep Learning
2020
IEEE Transactions on Dependable and Secure Computing
In particular, we build a fair and differentially private decentralised deep learning framework called FDPDDL, which enables parties to derive more accurate local models in a fair and private manner by ...
This paper firstly considers the research problem of fairness in collaborative deep learning, while ensuring privacy. ...
Composition for (ε, δ)-differential privacy (the epsilons and the deltas add up): the composition of k differentially private mechanisms is (∑ᵢ εᵢ, ∑ᵢ δᵢ)-differentially private, where for any 1 ≤ i ≤ k ...
doi:10.1109/tdsc.2020.3006287
fatcat:kwhzr3qj5vbtlehkh4areejsoa
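The basic composition property quoted in this entry's snippet amounts to a trivial privacy accountant (a sketch; real accountants use tighter bounds such as advanced composition or RDP):

```python
def basic_composition(mechanisms):
    """Basic composition: a sequence of (eps_i, delta_i)-DP mechanisms is
    (sum of eps_i, sum of delta_i)-differentially private overall."""
    eps = sum(e for e, _ in mechanisms)
    delta = sum(d for _, d in mechanisms)
    return eps, delta
```

For example, composing two (0.5, 1e-6)-DP mechanisms yields a (1.0, 2e-6)-DP guarantee under basic composition.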
FedSel: Federated SGD under Local Differential Privacy with Top-k Dimension Selection
[article]
2020
arXiv
pre-print
In the federated setting, Stochastic Gradient Descent (SGD) has been widely used in federated learning for various machine learning models. ...
Specifically, we propose three private dimension selection mechanisms and adapt the gradient accumulation technique to stabilize the learning process with noisy updates. ...
[33] design LDP mechanisms against reconstruction attacks with a large privacy budget to avoid the utility limitations of standard locally differentially private learning. ...
arXiv:2003.10637v1
fatcat:q5q3g4cvv5fh5puo2aj4jlhgf4
Showing results 1 — 15 out of 38,060 results