PCIV method for Indirect Bias Quantification in AI and ML Models

Ashish Garg, Dr. Rajesh SL
2021 International Journal of Scientific Research in Computer Science Engineering and Information Technology  
Data scientists today make extensive use of black-box AI models (such as neural networks and various ensemble techniques) to solve business problems. Although these models often achieve higher accuracy, they are also less explainable and hence more prone to undetected bias. Further, AI systems rely on the available training data and therefore remain prone to data bias as well. Many sensitive attributes, such as race, religion, gender, and ethnicity, can form the basis of unethical bias in the data or the algorithm. As the world becomes increasingly dependent on AI algorithms for a wide range of decisions, such as determining access to services like credit, insurance, and employment, the fairness and ethical aspects of these models are becoming increasingly important. Many bias detection and mitigation algorithms have evolved, and many of them handle indirect attributes without requiring them to be explicitly identified. However, these algorithms have gaps and do not quantify indirect bias. This paper discusses various bias detection methodologies and the tools/libraries available to detect and mitigate bias. Thereafter, it presents a new methodical approach to detect and quantify indirect bias in AI/ML models.
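As background for the bias-quantification discussion in the abstract, a widely used direct-bias metric is the disparate impact ratio: the rate of favorable outcomes for a protected group divided by the rate for a reference group. The sketch below is an illustration of that standard metric only, not the paper's PCIV method; the data, group labels, and the 0.8 ("four-fifths rule") threshold mentioned in the comment are illustrative assumptions.

```python
# Illustrative sketch of the disparate impact ratio, a standard direct-bias
# metric. This is NOT the PCIV method; the sample data are hypothetical.

def disparate_impact(outcomes, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    def favorable_rate(g):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(group_outcomes) / len(group_outcomes)
    return favorable_rate(protected) / favorable_rate(reference)

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
outcomes = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

di = disparate_impact(outcomes, groups, protected="A", reference="B")
print(di)  # prints 0.5: group A approved at 0.4 vs group B at 0.8,
           # below the common 0.8 "four-fifths" threshold
```

A metric like this requires the sensitive attribute to be known explicitly; quantifying *indirect* bias carried by correlated proxy attributes, as the abstract notes, is the gap the paper's PCIV approach targets.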
doi:10.32628/cseit217251