Making Fair ML Software using Trustworthy Explanation

Joymallya Chakraborty, Kewen Peng, Tim Menzies
2020 Figshare  
Machine learning software is being used in many applications with large social impact, such as finance, hiring, admissions, and criminal justice. Sometimes, however, the behavior of this software is biased, and it discriminates based on sensitive attributes such as sex and race. Prior work concentrated on finding and mitigating bias in ML models. A recent trend is to use instance-based, model-agnostic explanation methods such as LIME to find bias in model predictions. Our work concentrates on finding shortcomings of current bias measures and explanation methods. We show how our proposed method, based on K nearest neighbors, can overcome those shortcomings and find the underlying bias of black-box models. Our results are more trustworthy and helpful for practitioners. Finally, we describe our future framework, which combines explanation and planning to build fair software.
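For illustration only, a K-nearest-neighbor probe for bias in a black-box model could look something like the sketch below. This is an assumption about the general idea, not the authors' actual method: for each instance, it compares the model's prediction with the predictions given to that instance's nearest neighbors drawn from the *other* sensitive group, on the intuition that similar individuals should receive similar predictions regardless of the sensitive attribute. The function name `knn_bias_score` and its exact scoring rule are hypothetical.

```python
# Hypothetical sketch of a KNN-based bias probe for a black-box model.
# NOT the paper's exact algorithm; a situation-testing-style illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_bias_score(model, X, sensitive, k=5):
    """For each row of X, find its k nearest rows belonging to the
    opposite sensitive group and measure how often the black-box
    model's predictions disagree. Returns the mean disagreement rate;
    higher values suggest similar individuals are treated differently
    depending on the sensitive attribute."""
    X = np.asarray(X, dtype=float)
    sensitive = np.asarray(sensitive)
    preds = model.predict(X)          # black box: only predictions used
    disagreements = []
    for g in np.unique(sensitive):
        own = np.where(sensitive == g)[0]
        other = np.where(sensitive != g)[0]
        if len(other) < k:
            continue                  # not enough cross-group neighbors
        nn = NearestNeighbors(n_neighbors=k).fit(X[other])
        _, idx = nn.kneighbors(X[own])          # (n_own, k) neighbor indices
        neigh_preds = preds[other][idx]         # neighbors' predictions
        rate = (neigh_preds != preds[own][:, None]).mean(axis=1)
        disagreements.extend(rate)
    return float(np.mean(disagreements))
```

A model that predicts directly from the sensitive attribute would score near 1.0 under this probe, while a model driven only by the non-sensitive features would score much lower, since nearby points tend to share the same prediction.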
doi:10.6084/m9.figshare.12612449.v3