
FairFed: Enabling Group Fairness in Federated Learning [article]

Yahya H. Ezzeldin, Shen Yan, Chaoyang He, Emilio Ferrara, Salman Avestimehr
2022 arXiv   pre-print
Motivated by the importance and challenges of group fairness in federated learning, in this work, we propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method  ...  We build our FairFed algorithm around the secure aggregation protocol of federated learning.  ...  Fig. 1. FairFed: group fairness-aware federated learning framework.  ... 
arXiv:2110.00857v2 fatcat:4cxk6cco3fclboyq76fylmsmee
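The fairness-aware aggregation idea in this entry can be sketched as follows. This is an illustrative sketch only, not FairFed's published update rule: the weighting scheme, the `beta` parameter, and the function name are assumptions; FairFed's actual method reweights clients by the gap between local and global fairness metrics under secure aggregation.

```python
import numpy as np

def fairness_aware_aggregate(client_params, client_sizes,
                             local_fairness, global_fairness, beta=1.0):
    """Size-weighted model averaging, down-weighting clients whose local
    fairness metric deviates from the global metric (illustrative sketch)."""
    sizes = np.asarray(client_sizes, dtype=float)
    deviation = np.abs(np.asarray(local_fairness, dtype=float) - global_fairness)
    # Start from FedAvg weights, then penalize fairness deviation.
    weights = sizes / sizes.sum() - beta * (deviation - deviation.mean())
    weights = np.clip(weights, 0.0, None)   # keep weights non-negative
    weights /= weights.sum()                # renormalize to sum to 1
    return sum(w * p for w, p in zip(weights, client_params))
```

When every client's local metric matches the global one, this reduces to plain size-weighted FedAvg.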

Improving Fairness via Federated Learning [article]

Yuchen Zeng, Hongxu Chen, Kangwook Lee
2022 arXiv   pre-print
In this work, we first propose a new theoretical framework, with which we demonstrate that federated learning can strictly boost model fairness compared with such non-federated algorithms.  ...  First, is federated learning necessary, i.e., can we simply train locally fair classifiers and aggregate them?  ...  group fairness under federated learning.  ... 
arXiv:2110.15545v2 fatcat:47yx6ijuazannjrqz7toho3cry
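The baseline this entry questions — training locally fair classifiers and simply aggregating them — is plain FedAvg. A minimal sketch of that baseline (function name is illustrative):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Plain FedAvg: size-weighted average of client model parameters.
    Averaging locally fair classifiers this way does not by itself
    guarantee fairness of the global model, which is the gap the
    paper's theoretical framework addresses."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()
    return sum(w * p for w, p in zip(weights, client_params))
```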

E2FL: Equal and Equitable Federated Learning [article]

Hamid Mozaffari, Amir Houmansadr
2022 arXiv   pre-print
In this work, we present Equal and Equitable Federated Learning (E2FL) to produce fair federated learning models by preserving two main fairness properties, equity and equality, concurrently.  ...  Federated Learning (FL) enables data owners to train a shared global model without sharing their private data.  ...  Our ranking-based FL training enables attractive fairness properties, as shown through our experiments; intuitively, this is because in rank-based federated learning each client computes  ... 
arXiv:2205.10454v1 fatcat:wpxc3qoc4zbcxi73uo5pyojx5u
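The rank-based training mentioned in the snippet can be illustrated with a toy server-side step. This is a generic sketch of rank aggregation by summed ranks (Borda-style), not E2FL's exact protocol; the function name and the choice of consensus rule are assumptions:

```python
import numpy as np

def aggregate_rankings(client_rankings):
    """Each client submits a ranking of parameter indices (most important
    first); the server sums per-index ranks and returns the consensus
    ranking, lowest total rank first (illustrative sketch)."""
    scores = np.zeros(len(client_rankings[0]))
    for ranking in client_rankings:
        for rank, idx in enumerate(ranking):
            scores[idx] += rank
    return list(np.argsort(scores))
```

Because every client contributes a ranking rather than raw magnitudes, no single client's update scale can dominate the aggregate, which is the intuition behind the fairness properties claimed above.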

PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning [article]

Sikha Pentyala, Nicola Neophytou, Anderson Nascimento, Martine De Cock, Golnoosh Farnadi
2022 arXiv   pre-print
Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients, while FL is aimed precisely at protecting  ...  Group fairness ensures that the outcomes of machine learning (ML)-based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity  ...  In this paper, we proposed PrivFairFL, an MPC-based framework for training group-fair models in Federated Learning (FL).  ... 
arXiv:2205.11584v1 fatcat:ii4dzz6qtvdjtgxsmjbq7drgai
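The group-fairness notion these entries target can be made concrete with one standard metric, statistical parity difference (the gap in positive-prediction rates across groups defined by a binary sensitive attribute). This is a plain-text computation for illustration; PrivFairFL's contribution is computing such quantities under MPC without revealing the attribute values:

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """Absolute gap in positive prediction rates between the two groups
    defined by a binary sensitive attribute (0/1). A value of 0 means
    the classifier satisfies demographic parity on this sample."""
    y_pred = np.asarray(y_pred, dtype=float)
    s = np.asarray(sensitive)
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())
```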