1,008 Hits in 3.4 sec

Auditing ML Models for Individual Bias and Unfairness [article]

Songkai Xue, Mikhail Yurochkin, Yuekai Sun
2020 arXiv   pre-print
We consider the task of auditing ML models for individual bias/unfairness. We formalize the task as an optimization problem and develop a suite of inferential tools for the optimal value.  ...  To demonstrate the utility of our tools, we use them to reveal the gender and racial biases in Northpointe's COMPAS recidivism prediction instrument.  ...  Acknowledgements This work was supported by the National Science Foundation under grants DMS-1830247 and DMS-1916271.  ... 
arXiv:2003.05048v1 fatcat:5zj3bbbot5e4rl45nv6ro7z6eq

Auditing Fairness and Imputation Impact in Predictive Analytics for Higher Education [article]

Hadis Anahideh, Nazanin Nezami, Denisa Gándara
2021 arXiv   pre-print
In this paper, we set out to first assess the disparities in predictive modeling outcomes for college-student success, then investigate the impact of imputation techniques on the model performance and fairness  ...  These challenges are present in different steps of modeling, including data preparation, model development, and evaluation.  ...  To the best of our knowledge, there is no work in ML for higher education that has transparently audited ML performance and unfairness in education using a real dataset.  ... 
arXiv:2109.07908v1 fatcat:6bwborwgk5dhxnpl2dm5kfxxiy

Improving Fairness in Machine Learning Systems

Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miro Dudik, Hanna Wallach
2019 Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19  
The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention.  ...  Based on these findings, we highlight directions for future ML and HCI research that will better address industry practitioners' needs.  ...  We thank all interviewees and survey respondents for their participation. In addition, we thank Michael Veale, Ben Shneiderman, and our anonymous reviewers for their insightful feedback.  ... 
doi:10.1145/3290605.3300830 dblp:conf/chi/HolsteinVDDW19 fatcat:dtzrp6hx3rce5awd3tvt7ml25q

Software Fairness: An Analysis and Survey [article]

Ezekiel Soremekun, Mike Papadakis, Maxime Cordy, Yves Le Traon
2022 arXiv   pre-print
machine learning (ML) analysis methods.  ...  In summary, we observed several open challenges including the need to study intersectional/sequential bias, policy-based bias handling, and human-in-the-loop, socio-technical bias mitigation.  ... 
arXiv:2205.08809v1 fatcat:63whhiyjvvaida4kjvpsyd7gh4

A Framework for Fairer Machine Learning in Organizations [article]

Lily Morse, Mike H.M. Teodorescu, Yazeed Awwad, Gerald Kane
2020 arXiv   pre-print
model, but also to avoid the common situation that as an algorithm learns with more data it can become unfair over time.  ...  We advance the research by introducing an organizing framework for selecting and implementing fair algorithms in organizations.  ...  individuals to constantly monitor and revise the quality of the ML model.  ... 
arXiv:2009.04661v1 fatcat:5bmipd5awrey3izobqskekxjmq

Aequitas: A Bias and Fairness Audit Toolkit [article]

Pedro Saleiro, Benedict Kuester, Loren Hinkson, Jesse London, Abby Stevens, Ari Anisfeld, Kit T. Rodolfa, Rayid Ghani
2019 arXiv   pre-print
We present Aequitas, an open source bias and fairness audit toolkit that is an intuitive and easy to use addition to the machine learning workflow, enabling users to seamlessly test models for several  ...  Therefore, despite recent awareness, auditing for bias and fairness when developing and deploying AI systems is not yet a standard practice.  ...  We now audit the selected model that is ready for deployment for bias. We find that there is indeed unfairness in the model.  ... 
arXiv:1811.05577v2 fatcat:azzzplwjh5gwtgp6iwulqvmghe

The Right Tool for the Job: Open-Source Auditing Tools in Machine Learning [article]

Cherie M Poland
2022 arXiv   pre-print
Model auditing and evaluation are not frequently emphasized skills in machine learning.  ...  Many open-source auditing tools are available, but users aren't always aware of the tools, what they are useful for, or how to access them.  ...  Different methods of auditing are required in order to search for and test both the data and the algorithms for underlying or hidden bias and attributes.  ... 
arXiv:2206.10613v1 fatcat:cesbmgmdpzc6xpzkjwgb7aka3q

Mitigating Bias in Algorithmic Systems - A Fish-Eye View

Kalia Orphanou, Jahna Otterbacher, Styliani Kleanthous, Khuyagbaatar Batsuren, Fausto Giunchiglia, Veronika Bogina, Avital Shulner-Tal, Alan Hartman, Tsvi Kuflik
2021 Zenodo  
Mitigating bias in algorithmic systems is a critical issue drawing attention across communities within the information and computer sciences.  ...  Given the complexity of the problem and the involvement of multiple stakeholders – including developers, end-users and third-parties – there is a need to understand the landscape of the sources of bias  ...  It should also be noted that within ML, beyond involving the model, inputs and outputs, auditing can also involve the generation of biased datasets for conducting a black-box audit.  ... 
doi:10.5281/zenodo.6240582 fatcat:vftoi4woebhrrp5tlmkclabgf4

Accountability in AI: From Principles to Industry-specific Accreditation [article]

Chris Percy, Simo Dragicevic, Sanjoy Sarkar, Artur S. d'Avila Garcez
2021 arXiv   pre-print
We define and evaluate critically the implementation of key accountability principles in the gambling industry, namely addressing algorithmic bias and model explainability, before concluding and discussing  ...  We argue that the present ecosystem is unbalanced, with a need for improved transparency via AI explainability and adequate documentation and process formalisation to support internal audit, leading up  ...  Acknowledgements Thanks to Playtech Plc for its R&D programme to improve responsible gambling and AI accountability, and for making data, domain expertise and software implementations available to support  ... 
arXiv:2110.09232v1 fatcat:53urolg5e5hxzjomzqzikhw7yi

AI-Enabled Underwriting Brings New Challenges for Life Insurance: Policy and Regulatory Considerations

Azish Filabi, Sophia Duffy
2021 Journal of Insurance Regulation  
Part II discusses the unfair discrimination that can occur due to factors that reflect societal biases, and the unfair discrimination that could occur in artificially intelligent systems if facially neutral  ...  The current industry standards and regulatory scheme for unfair discrimination in underwriting are also discussed in Part II.  ...  Submissions must be original work and not being considered for publication elsewhere; papers from presentations should note the meeting.  ... 
doi:10.52227/25114.2021 fatcat:6d3hiiknnvdmdha73sh5uk5eqi

Mitigating Bias in Algorithmic Systems - A Fish-Eye View

Kalia Orphanou, Jahna Otterbacher, Styliani Kleanthous, Khuyagbaatar Batsuren, Fausto Giunchiglia, Veronika Bogina, Avital Shulner Tal, Alan Hartman, Tsvi Kuflik
2022 Zenodo  
Mitigating bias in algorithmic systems is a critical issue drawing attention across communities within the information and computer sciences.  ...  Given the complexity of the problem and the involvement of multiple stakeholders – including developers, end users and third-parties – there is a need to understand the landscape of the sources of bias  ...  It should also be noted that within ML, beyond involving the model, inputs and outputs, auditing can also involve the generation of biased datasets for conducting a black-box audit.  ... 
doi:10.5281/zenodo.6782985 fatcat:oc6qovumv5eszl5ukns4l3t6d4

Bias and unfairness in machine learning models: a systematic literature review [article]

Tiago Palma Pagano, Rafael Bessa Loureiro, Maira Matos Araujo, Fernanda Vitoria Nascimento Lisboa, Rodrigo Matos Peixoto, Guilherme Aragao de Sousa Guimaraes, Lucas Lisboa dos Santos, Gustavo Oliveira Ramos Cruz, Ewerton Lopes Silva de Oliveira, Marco Cruz, Ingrid Winkler, Erick Giovani Sperandio Nascimento
2022 arXiv   pre-print
The results show numerous bias and unfairness detection and mitigation approaches for ML technologies, with clearly defined metrics in the literature, and varied metrics can be highlighted.  ...  This study aims to examine existing knowledge on bias and unfairness in Machine Learning models, identifying mitigation methods, fairness metrics, and supporting tools.  ...  Q2: What are the challenges and opportunities for identifying and mitigating bias and unfairness in ML models?  ... 
arXiv:2202.08176v2 fatcat:m7rexaf26ra3zadl2dsqaqmjgq

Visual Auditor: Interactive Visualization for Detection and Summarization of Model Biases [article]

David Munechika, Zijie J. Wang, Jack Reidy, Josh Rubin, Krishna Gade, Krishnaram Kenthapadi, Duen Horng Chau
2022 arXiv   pre-print
We propose Visual Auditor, an interactive visualization tool for auditing and summarizing model biases.  ...  As machine learning (ML) systems become increasingly widespread, it is necessary to audit these systems for biases prior to their deployment.  ...  unfairness. • Visual Auditor, an interactive visualization tool for auditing and summarizing ML model biases.  ... 
arXiv:2206.12540v1 fatcat:vybiv4foyjcuxl5qrdvedtl7jm

End-To-End Bias Mitigation: Removing Gender Bias in Deep Learning [article]

Tal Feldman, Ashley Peake
2021 arXiv   pre-print
Although these models offer streamlined solutions to large problems, they may contain biases and treat groups or individuals unfairly based on protected attributes such as gender.  ...  To provide readers with the tools to assess the fairness of machine learning models and mitigate the biases present in them, we discuss multiple open source packages for fairness in AI.  ...  We detail several examples of ML models that are gender biased to motivate further research in this area, formalize notions of fairness in ML, and survey a number of algorithms for mitigating gender bias  ... 
arXiv:2104.02532v3 fatcat:vlkprfcylzeu5j5lzmwmgyamny

Proposing an Interactive Audit Pipeline for Visual Privacy Research [article]

Jasmine DeHart, Chenguang Xu, Lisa Egede, Christan Grant
2021 arXiv   pre-print
Our goal is to systematically analyze the machine learning pipeline for visual privacy and bias issues.  ...  The continued use of biased datasets and processes will adversely damage communities and increase the cost of fixing the problem later.  ...  Acknowledgments The researchers are partially supported by awards from the Department of Defense SMART Scholarship and the National Science Foundation under Grant No. #1952181.  ... 
arXiv:2111.03984v2 fatcat:chrnfyevfbc5ljzxdxvlpuxody
Showing results 1 — 15 out of 1,008 results