AIM: Attributing, Interpreting, Mitigating Data Unfairness

KDD 2024

Abstract
Data collected in the real world often encapsulates historical discrimination against disadvantaged groups and individuals. Existing fair machine learning (FairML) research has predominantly focused on mitigating discriminative bias in the model prediction, with far less effort dedicated towards exploring how to trace biases present in the data, despite its importance for the transparency and interpretability of FairML. To fill this gap, we investigate a novel research problem: discovering samples that reflect biases/prejudices from the training data. Grounding on the existing fairness notions, we lay out a sample bias criterion and propose practical algorithms for measuring and countering sample bias. The derived bias score provides intuitive sample-level attribution and explanation of historical bias in data. On this basis, we further design two FairML strategies via sample-bias-informed minimal data editing. They can mitigate both group and individual unfairness at the cost of minimal or zero predictive utility loss. Extensive experiments and analyses on multiple real-world datasets demonstrate the effectiveness of our methods in explaining and mitigating unfairness. Code is available at https://github.com/ZhiningLiu1998/AIM.
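The abstract describes the approach only at a high level. As a purely illustrative aid (not the authors' AIM algorithm; see the linked repository for the actual implementation), the hypothetical Python sketch below shows what sample-level bias scoring and bias-informed minimal data editing could look like in the simplest case: each sample is scored by how often its label disagrees with those of its nearest neighbors in the non-sensitive feature space, and the highest-scored samples are then edited. The function names, the k-NN disagreement proxy, and the binary-label assumption are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of sample-level bias attribution and minimal data editing.
# This is NOT the paper's AIM method; it only illustrates the general idea:
# score each training sample by how much its label disagrees with similar
# samples, then edit the highest-scored samples under a small budget.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def sample_bias_scores(X_nonsensitive, y, k=10):
    """Score each sample by label disagreement with its k nearest neighbors,
    measured on non-sensitive features (an individual-fairness-style proxy)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_nonsensitive)
    _, idx = nn.kneighbors(X_nonsensitive)   # idx[:, 0] is the sample itself
    neighbor_labels = y[idx[:, 1:]]          # labels of the k true neighbors
    # Fraction of neighbors whose label differs from the sample's own label.
    return (neighbor_labels != y[:, None]).mean(axis=1)

def edit_most_biased(y, scores, budget=0.05):
    """Minimal data editing: flip the labels of the top `budget` fraction of
    samples by bias score (removal would be an alternative editing strategy)."""
    n_edit = int(budget * len(y))
    worst = np.argsort(scores)[-n_edit:]
    y_edited = y.copy()
    y_edited[worst] = 1 - y_edited[worst]    # binary labels assumed
    return y_edited
```

In this toy setup, the scores play the role of a sample-level attribution of potential label bias, and editing only the few highest-scored samples is what keeps the intervention "minimal" with respect to predictive utility.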
Keywords
Fair Machine Learning, Data Bias, Sample-level Attribution, Mitigating Unfairness, Transparency and Interpretability