Fairness Without Demographics in Human-Centered Federated Learning
CoRR (2024)
Abstract
Federated learning (FL) enables collaborative model training while preserving
data privacy, making it suitable for decentralized human-centered AI
applications. However, a significant research gap remains in ensuring fairness
in these systems. Current fairness strategies in FL require knowledge of
bias-creating/sensitive attributes, clashing with FL's privacy principles.
Moreover, in human-centered datasets, sensitive attributes may remain latent.
To tackle these challenges, we present a novel bias mitigation approach
inspired by "Fairness without Demographics" in machine learning. The presented
approach achieves fairness without needing knowledge of sensitive attributes by
minimizing the top eigenvalue of the Hessian matrix during training, ensuring
equitable loss landscapes across FL participants. Notably, we introduce a novel
FL aggregation scheme that weights participating models by their error rates
and loss-landscape curvature, fostering fairness across the FL
system. This work represents the first approach to attaining "Fairness without
Demographics" in human-centered FL. Through comprehensive evaluation, our
approach demonstrates effectiveness in balancing fairness and efficacy across
various real-world applications, FL setups, and scenarios involving single and
multiple bias-inducing factors, representing a significant advancement in
human-centered FL.
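The abstract does not give the exact formulas, but the two core ideas — estimating the top Hessian eigenvalue as a curvature penalty, and aggregating client models by error and curvature — can be illustrated with a minimal NumPy sketch. The power-iteration estimator is standard; the `aggregation_weights` scoring rule (inverse of error rate plus curvature) is a hypothetical stand-in for the paper's actual scheme.

```python
import numpy as np

def top_eigenvalue(hessian, iters=100):
    """Estimate the largest eigenvalue of a symmetric matrix via power iteration.

    In the paper's setting this quantity is minimized during local training so
    that clients converge to flatter, more equitable loss landscapes.
    """
    v = np.ones(hessian.shape[0]) / np.sqrt(hessian.shape[0])
    for _ in range(iters):
        w = hessian @ v
        v = w / np.linalg.norm(w)
    return float(v @ hessian @ v)  # Rayleigh quotient at the converged vector

def aggregation_weights(error_rates, top_eigs, eps=1e-8):
    """Hypothetical aggregation rule: favor clients with low error rates AND
    flat loss landscapes (small top Hessian eigenvalue). The exact weighting
    used in the paper is not specified in the abstract."""
    scores = 1.0 / (np.asarray(error_rates) + np.asarray(top_eigs) + eps)
    return scores / scores.sum()  # normalized weights for federated averaging

# Example: a client with lower error and flatter curvature gets more weight.
h_flat = np.diag([0.5, 0.2])   # flat loss landscape (small top eigenvalue)
h_sharp = np.diag([2.0, 0.2])  # sharp loss landscape
weights = aggregation_weights(
    error_rates=[0.10, 0.40],
    top_eigs=[top_eigenvalue(h_flat), top_eigenvalue(h_sharp)],
)
```

In a full FL loop, the server would compute the global update as the weighted sum of client model deltas using these weights in place of the uniform (or sample-count) weights of vanilla FedAvg.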