A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Debiasing Model Updates for Improving Personalized Federated Training
2021
International Conference on Machine Learning
The trained global meta-model is then personalized locally by each device to meet its specific objective. ...
Different from the conventional federated learning setting, training customized models for each device is hindered by both the inherent data biases of the various devices, as well as the requirements imposed ...
We can infer that PFL is capable of debiasing meta-model updates at the server allowing for superior device personalization. ...
dblp:conf/icml/AcarZZNMWS21
fatcat:73ku5gefqzeizdbuk6uh6r6anu
Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets
[article]
2022
arXiv
pre-print
Results show that models trained on our debiased datasets generalise better than those trained on the original datasets in all settings. ...
We propose to tackle this problem by generating a debiased version of a dataset, which can then be used to train a debiased, off-the-shelf model, by simply replacing its training data. ...
Acknowledgments The authors would like to thank Max Bartolo, Alexis Ross, Doug Downey, Jesse Dodge, Pasquale Minervini, and Sebastian Riedel for their helpful discussion and feedback. ...
arXiv:2203.12942v1
fatcat:2rcxn3wwmfav5jzyg3tcuz3ie4
Algorithmic Fairness and Bias Mitigation for Clinical Machine Learning: Insights from Rapid COVID-19 Diagnosis by Adversarial Learning
[article]
2022
medRxiv
pre-print
For example, if one class is over-represented, or errors/inconsistencies in practice are reflected in the training data, then a model can be biased by these. ...
We trained our framework on a large, real-world COVID-19 dataset and demonstrated that adversarial training demonstrably improves outcome fairness (with respect to equalized odds), while still achieving ...
gratitude to Jingyi Wang & Dr Jolene Atia at University Hospitals Birmingham NHS Foundation Trust, Phillip Dickson at Bedfordshire Hospitals, and Paul Meredith at Portsmouth Hospitals University NHS Trust for ...
doi:10.1101/2022.01.13.22268948
fatcat:g65bmzfly5db7mrxuqgep4dhki
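The snippet above evaluates fairness with respect to equalized odds. As an illustration only (not code from the paper), a minimal two-group equalized-odds gap metric might look like this; the toy labels and group assignments are invented for the example:

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in TPR or FPR between two demographic groups (0/1).

    Equalized odds is satisfied when both the true-positive rate and the
    false-positive rate are equal across groups (i.e., the gap is 0).
    """
    gaps = []
    for label in (1, 0):                      # label 1 -> TPR, label 0 -> FPR
        rates = []
        for g in (0, 1):
            mask = (group == g) & (y_true == label)
            rates.append(y_pred[mask].mean())
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy example: the classifier behaves differently on group 0 than on group 1
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = equalized_odds_gap(y_true, y_pred, group)
print(gap)  # 0.5
```

Adversarial debiasing of the kind the entry describes would train the classifier so that this gap shrinks while predictive accuracy is preserved.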
Federated Learning of Molecular Properties with Graph Neural Networks in a Heterogeneous Setting
[article]
2022
arXiv
pre-print
FLIT(+) can align the local training across heterogeneous clients by improving the performance for uncertain samples. ...
Federated learning allows end-users to build a global model collaboratively while keeping the training data distributed over isolated clients. ...
Client-side Updates For completeness, we describe typical training steps to update the GNN model for client side training. ...
arXiv:2109.07258v3
fatcat:e4dntvhf7jcsdmnbn67wum23xa
Real-Time Decentralized knowledge Transfer at the Edge
[article]
2021
arXiv
pre-print
We propose a method based on knowledge distillation for pairwise knowledge transfer pipelines from models trained on non-i.i.d. data and compare it to other popular knowledge transfer methods. ...
Transferring knowledge in a selective decentralized approach enables models to retain their local insights, allowing for local flavors of a machine learning model. ...
Additionally, while federated solutions allow for some consolidation of models, federation will retract the benefit of private local models or personalized individual models. ...
arXiv:2011.05961v4
fatcat:zjcpc5ipqzbkxpzyfrcegny7su
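The entry above proposes knowledge distillation for pairwise knowledge transfer between edge models. As a sketch of the general technique (not the paper's implementation), a temperature-softened distillation loss between a teacher's and a student's logits can be written as:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened logits, scaled by T^2
    (the usual convention in Hinton-style knowledge distillation)."""
    p = softmax(teacher_logits, T)    # soft targets from the peer/teacher model
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

# A student whose logits already match the teacher incurs zero loss
loss = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
print(loss)  # 0.0
```

In a pairwise pipeline, each model would alternately play teacher and student against a chosen peer, which lets models keep their "local flavors" while still exchanging knowledge.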
Personalized News Recommendation: Methods and Challenges
[article]
2022
arXiv
pre-print
Next, we introduce the public datasets and evaluation methods for personalized news recommendation. ...
We first review the techniques for tackling each core problem in a personalized news recommender system and the challenges they face. ...
The local model updates are uploaded to a central server that coordinates a number of user clients for model training. ...
arXiv:2106.08934v3
fatcat:iagqsw73hrehxaxpvpydvtr26m
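Several entries above describe the same federated pattern: local model updates are uploaded to a central server that aggregates them. A minimal FedAvg-style weighted aggregation, shown here purely as an illustration (the client updates and sizes are invented), might look like:

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """Weighted average of client model updates (FedAvg-style aggregation).

    client_updates: list of 1-D numpy arrays (flattened local updates)
    client_sizes:   number of training examples held by each client,
                    used to weight clients proportionally to their data
    """
    weights = np.array(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_updates)          # shape: (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Example: three clients with different amounts of local data
updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
sizes = [10, 30, 60]
global_update = fedavg(updates, sizes)
print(global_update)  # [0.7 0.9]
```

The heterogeneity, debiasing, and calibration papers in this listing all modify some stage of this basic aggregate-and-redistribute loop.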
Algorithm Fairness in AI for Medicine and Healthcare
[article]
2022
arXiv
pre-print
Lastly, we also review emerging technology for mitigating bias via federated learning, disentanglement, and model explainability, and their role in AI-SaMD development. ...
Recent evaluation of AI models stratified across race sub-populations have revealed inequalities in how patients are diagnosed, given treatments, and billed for healthcare costs. ...
site in updating the global model and the varying frequencies at which different sites participate in training [209]. ...
arXiv:2110.00603v2
fatcat:pspb6bqqxjh45an5mhqohysswu
A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability
[article]
2022
arXiv
pre-print
For example, existing works demonstrate that attackers can fool the GNNs to give the outcome they desire with unnoticeable perturbation on training graph. ...
For each aspect, we give the taxonomy of the related methods and formulate the general frameworks for the multiple categories of trustworthy GNNs. ...
Then, federated learning methods are leveraged to further update the global model. ...
arXiv:2204.08570v1
fatcat:7c3pkxitrbhgxj6fytn6f3r644
Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring
[article]
2022
arXiv
pre-print
To cope with these, we proposed disentangled federated learning (DFL) to disentangle the domain-specific and cross-invariant attributes into two complementary branches, which are trained by the proposed ...
Importantly, convergence analysis proves that the FL system can be stably converged even if incomplete client models participate in the global aggregation, which greatly expands the application scope of ...
During each process, the parameters of some model parts are frozen for more targeted training. ...
arXiv:2206.06818v1
fatcat:4d75lcz45ved5aqmtjjsg34ofu
No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data
[article]
2021
arXiv
pre-print
after federated training. ...
A central challenge in training classification models in the real-world federated system is learning with non-IID data. ...
Acknowledgement We would like to thank the anonymous reviewers for their insightful comments and suggestions. ...
arXiv:2106.05001v2
fatcat:fmbnjqpmdbc6dj3mlc6usg3nhe
Personalized News Recommendation: Methods and Challenges
2022
ACM Transactions on Information Systems
Next, we introduce the public datasets and evaluation methods for personalized news recommendation. ...
Personalized news recommendation is important for users to find news of interest and alleviate information overload. ...
The local model updates are uploaded to a central server that coordinates a number of user clients for model training. ...
doi:10.1145/3530257
fatcat:xzghh6cut5ahhgxz4mkzgy74ja
Generative Models for Effective ML on Private, Decentralized Datasets
[article]
2020
arXiv
pre-print
To improve real-world applications of machine learning, experienced modelers develop intuition about their datasets, their models, and how the two interact. ...
This paper demonstrates that generative models - trained using federated methods and with formal differential privacy guarantees - can be used effectively to debug many commonly occurring data issues even ...
job is to develop and improve the machine learned models. ...
arXiv:1911.06679v2
fatcat:qdupc7zyh5gwpgu5yj2fim2kdu
Intrinsic Gradient Compression for Federated Learning
[article]
2021
arXiv
pre-print
Federated learning is a rapidly-growing area of research which enables a large number of clients to jointly train a machine learning model on privately-held data. ...
One of the largest barriers to wider adoption of federated learning is the communication cost of sending model updates from and to the clients, which is accentuated by the fact that many of these devices ...
in model training. ...
arXiv:2112.02656v1
fatcat:bmkxosl22rgnln5ikdbaayzofi
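The entry above motivates compressing model updates to cut communication cost. As a generic illustration of the idea (top-k sparsification, not the paper's intrinsic-dimensionality method), a client could send only the largest-magnitude gradient entries:

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries; transmit (indices, values)."""
    idx = np.argsort(np.abs(grad))[-k:]
    return idx, grad[idx]

def topk_decompress(idx, vals, n):
    """Server-side reconstruction of the sparse update as a dense vector."""
    out = np.zeros(n)
    out[idx] = vals
    return out

g = np.array([0.1, -3.0, 0.02, 2.5, -0.4])
idx, vals = topk_compress(g, 2)
print(sorted(idx.tolist()))                    # [1, 3]
print(topk_decompress(idx, vals, g.size))      # [ 0.  -3.   0.   2.5  0. ]
```

Sending 2 of 5 entries here cuts the payload by more than half at the cost of dropping the small-magnitude coordinates; practical schemes typically accumulate the dropped residual locally for later rounds.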
PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning
[article]
2022
arXiv
pre-print
In doing so, we propose a method for training group-fair ML models in cross-device FL under complete and formal privacy guarantees, without requiring the clients to disclose their sensitive attribute values ...
Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients, while FL is aimed precisely at protecting ...
Research Award for Privacy Enhancing Technologies, and the Google Cloud Research Credits Program. ...
arXiv:2205.11584v1
fatcat:ii4dzz6qtvdjtgxsmjbq7drgai
Worker overconfidence: Field evidence and implications for employee turnover and firm profits
2020
Quantitative Economics
To study the implications of overconfidence for worker welfare and firm profits, we estimate a structural learning model with biased beliefs that accounts for many key features of the data. ...
Combining weekly productivity data with weekly productivity beliefs for a large sample of truckers over 2 years, we show that workers tend to systematically and persistently overpredict their productivity ...
For work on overoptimism and stock options for nonexecutive workers see, for example, Oyer and Schaefer (2005). ...
doi:10.3982/qe834
fatcat:e65kk25asvaslbjprtyf2bpawu
Showing results 1–15 out of 222 results