247 Hits in 14.5 sec

Making Better Job Hiring Decisions using "Human in the Loop" Techniques

Christopher G. Harris
2018 International Semantic Web Conference  
Using machine learning techniques to filter and sort job candidates has been done for more than two decades; however, there are always humans involved in the final hiring decision.  ...  One primary reason is that rarely are two hiring decisions made with the same information and in the same context.  ...  In this paper, we conduct an empirical evaluation of how humans in the loop can be used to better train AI systems.  ... 
dblp:conf/semweb/Harris18 fatcat:zz7noifysvfwtbdgf5zmilia7y

D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [article]

Bhavya Ghai, Klaus Mueller
2022 arXiv   pre-print
Moreover, our human-in-the-loop based approach significantly outperforms an automated approach on trust, interpretability and accountability.  ...  To alleviate this problem, we propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases from tabular datasets.  ...  Involving a human in the loop for identifying and debiasing data is a double edged sword.  ... 
arXiv:2208.05126v1 fatcat:xxeqm7aj2bfzdnowpxfdyzmcjm
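The snippet above describes auditing and mitigating social bias in tabular datasets. As a rough illustration of the kind of check such an audit starts from (not the D-BIAS tool itself, which is causality-based and interactive), the Python sketch below computes a disparate-impact ratio on a hypothetical hiring table; the column names, protected value, and threshold are assumptions for illustration only.

    # Minimal group-fairness audit on a tabular dataset (illustrative only).
    # Column names ("gender", "hired") and the protected value are hypothetical.
    import pandas as pd

    def disparate_impact(df, group_col, outcome_col, protected_value, favorable=1):
        """Ratio of favorable-outcome rates: protected group vs. everyone else."""
        protected = df[df[group_col] == protected_value]
        others = df[df[group_col] != protected_value]
        p_rate = (protected[outcome_col] == favorable).mean()
        o_rate = (others[outcome_col] == favorable).mean()
        return float(p_rate / o_rate) if o_rate > 0 else float("inf")

    data = pd.DataFrame({
        "gender": ["F", "F", "M", "M", "F", "M"],
        "hired":  [0,   1,   1,   1,   0,   1],
    })
    ratio = disparate_impact(data, "gender", "hired", protected_value="F")
    print(f"disparate impact ratio: {ratio:.2f}")  # values below ~0.8 are a common red flag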

Augmenting the algorithm: Emerging human-in-the-loop work configurations

Tor Grønsund, Margunn Aanestad
2020 Journal of strategic information systems  
Our analysis suggests that the new configuration resembled a human-in-the-loop pattern, comprised of both the augmentation work of auditing (i.e. the generation of a ground truth and assessment of the  ...  Our research points to the strategic importance of a human-in-the-loop pattern for organizational reflexivity to ensure that the performance of the algorithm meets the organization's requirements and changes  ...  The human-in-the-loop configuration emerges as a strategic capability.  ... 
doi:10.1016/j.jsis.2020.101614 fatcat:2pvlzgvhxbb2nptqsblekwi4ua

Novel Human-in-the-Loop (HIL) Simulation Method to Study Synthetic Agents and Standardize Human–Machine Teams (HMT)

Praveen Damacharla, Parashar Dhakal, Jyothi Priyanka Bandreddi, Ahmad Y. Javaid, Jennie J. Gallimore, Colin Elkin, Vijay K. Devabhaktuni
2020 Applied Sciences  
Followed by the identification of processes and metrics to test and validate the proposed model, we present a novel human-in-the-loop (HIL) simulation method.  ...  The effectiveness of this method is demonstrated using two controlled HMT scenarios: Emergency care provider (ECP) training and patient treatment by an experienced medic.  ...  This is followed by software-driven results that are used to further assess point-to-point human-in-the-loop connections.  ... 
doi:10.3390/app10238390 fatcat:arbfx4le55fv3jmbqse2xdxua4

Calendar.help

Justin Cranshaw, Emad Elwany, Todd Newman, Rafal Kocielnik, Bowen Yu, Sandeep Soni, Jaime Teevan, Andrés Monroy-Hernández
2017 Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI '17  
We describe the iterative approach we used to develop Calendar.help, and share the lessons learned from scheduling thousands of meetings during a year of real-world deployments.  ...  Common scheduling scenarios are broken down using well-defined workflows and completed as a series of microtasks that are automated when possible and executed by a human otherwise.  ...  We may find that some users in the automated future wish to keep these human-in-the-loop elements, perhaps at a premium.  ... 
doi:10.1145/3025453.3025780 dblp:conf/chi/CranshawENKYSTM17 fatcat:sgcddmvjz5awllmvgvolfg577m
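The snippet above describes scheduling workflows decomposed into microtasks that are automated when possible and handed to a human otherwise. A minimal, hypothetical sketch of that routing pattern follows; the function names, toy extractor, and confidence threshold are illustrative assumptions, not Calendar.help's actual implementation.

    # Hypothetical "automate when possible, hand off to a human otherwise" microtask.
    # Names and the confidence threshold are illustrative, not Calendar.help internals.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MicrotaskResult:
        value: Optional[str]
        confidence: float

    def automated_extractor(email_text: str) -> MicrotaskResult:
        """Toy automated step: look for an explicit time mention like '3pm'."""
        for raw in email_text.lower().split():
            token = raw.strip("?!.,")
            if token.endswith("am") or token.endswith("pm"):
                return MicrotaskResult(value=token, confidence=0.9)
        return MicrotaskResult(value=None, confidence=0.0)

    def human_worker(email_text: str) -> MicrotaskResult:
        """Stand-in for routing the microtask to a human worker queue."""
        return MicrotaskResult(value="<human-provided answer>", confidence=1.0)

    def run_microtask(email_text: str, threshold: float = 0.8) -> str:
        result = automated_extractor(email_text)
        if result.confidence < threshold:      # automation not confident enough
            result = human_worker(email_text)  # fall back to the human in the loop
        return result.value

    print(run_microtask("Can we meet Tuesday at 3pm?"))    # handled by automation
    print(run_microtask("Let's meet sometime next week"))  # routed to a human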

Designing Closed Human-in-the-loop Deferral Pipelines [article]

Vijay Keswani, Matthew Lease, Krishnaram Kenthapadi
2022 arXiv   pre-print
In contrast, we consider a "closed" decision-making pipeline in which the same fallible human decision-makers used in deferral also provide training labels.  ...  How can imperfect and biased human expert labels be used to train a fair and accurate deferral framework?  ...  Figure 1: A closed human-in-the-loop deferral pipeline.  ... 
doi:10.48550/arxiv.2202.04718 fatcat:ao63korywrfufgelqqiaghn56m
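For readers unfamiliar with deferral frameworks, the sketch below shows the basic decision being discussed: a model trained on (possibly imperfect) human labels answers when it is confident and defers to the human expert otherwise. It uses synthetic data and a generic classifier, and is not the paper's closed-pipeline training procedure.

    # Basic learning-to-defer decision on synthetic data: a model trained on
    # imperfect human labels answers when confident and defers otherwise.
    # Generic illustration only, not the paper's closed training pipeline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y_true = (X[:, 0] > 0).astype(int)                                   # unobserved ground truth
    human_labels = np.where(rng.random(200) < 0.9, y_true, 1 - y_true)   # 10% expert error

    model = LogisticRegression().fit(X, human_labels)                    # trained only on human labels

    def predict_or_defer(x, threshold=0.75):
        """Return (prediction, deferred) for one example."""
        proba = model.predict_proba(x.reshape(1, -1))[0]
        if proba.max() >= threshold:
            return int(proba.argmax()), False    # confident: keep the model output
        return None, True                        # uncertain: defer to the human expert

    decisions = [predict_or_defer(x) for x in X[:10]]
    print(decisions)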

TopGuNN: Fast NLP Training Data Augmentation using Large Corpora

Rebecca Iglesias-Flores, Megha Mishra, Ajay Patel, Akanksha Malhotra, Reno Kriz, Martha Palmer, Chris Callison-Burch
2021 Proceedings of the Second Workshop on Data Science with Human in the Loop: Language Advances   unpublished
TopGuNN is demonstrated for a semantic role labeling training data augmentation use case over the Gigaword corpus.  ...  Using approximate k-NN and an efficient architecture, TopGuNN performs queries over an embedding space of 4.63TB (approximately 1.5B embeddings) in less than a day.  ...  We have open sourced our efficient, scalable system that makes the most efficient use of human-in-the-loop annotation.  ... 
doi:10.18653/v1/2021.dash-1.14 fatcat:qm6ucsiyfva4hengklxhg7fxhy
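The snippet above mentions approximate k-NN queries over a very large embedding space. The sketch below illustrates only the retrieval step, using scikit-learn's exact NearestNeighbors on random placeholder embeddings as a stand-in for the approximate index and real contextual embeddings the paper relies on.

    # k-NN retrieval over an embedding matrix; exact NearestNeighbors stands in
    # for the approximate index used at TopGuNN's scale, and the embeddings
    # here are random placeholders rather than real contextual embeddings.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    corpus_embeddings = rng.normal(size=(10_000, 768)).astype(np.float32)  # placeholder corpus
    query_embedding = rng.normal(size=(1, 768)).astype(np.float32)         # placeholder query

    index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(corpus_embeddings)
    distances, neighbor_ids = index.kneighbors(query_embedding)

    # neighbor_ids map back to candidate corpus sentences, which a human
    # annotator then vets; that vetting is where the human-in-the-loop effort goes.
    print(neighbor_ids[0], distances[0])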

What You See Is What You Get? The Impact of Representation Criteria on Human Bias in Hiring [article]

Andi Peng, Besmira Nushi, Emre Kiciman, Kori Inkpen, Siddharth Suri, Ece Kamar
2019 arXiv   pre-print
By decoupling sources of bias, we can better isolate strategies for bias mitigation in human-in-the-loop systems.  ...  decision-making.  ...  Acknowledgements We would like to thank Adam Kalai for his wisdom on word embeddings, Krishnaram Kenthapadi for feedback on algorithmic hiring tools, Mary Gray and the MSR ethics and privacy team for IRB  ... 
arXiv:1909.03567v1 fatcat:4igwpsx72rcyvluopjhsvzu4qi

Think About the Stakeholders First! Towards an Algorithmic Transparency Playbook for Regulatory Compliance [article]

Andrew Bell, Oded Nov, Julia Stoyanovich
2022 arXiv   pre-print
Many of these regulations address the transparency of AI systems, and related citizen-aware issues like allowing individuals to have the right to an explanation about how an AI system makes a decision  ...  We also describe a real-world case-study that illustrates how this approach can be used in practice.  ...  This playbook would be useful to a number of audiences including technologists, humans-in-the-loop, and policymakers.  ... 
arXiv:2207.01482v1 fatcat:zpk27klf5vdpznmk76dzczslmq

The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems

Kathleen Creel, Deborah Hellman
2022 Canadian Journal of Philosophy  
This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions.  ...  However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities.  ...  In fact, adding a second classification method can be accomplished without a human in the loop, and this strategy will be part of our positive proposal.  ... 
doi:10.1017/can.2022.3 fatcat:3oxpghknjfafnc6sgkoclkgsfu
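The final snippet suggests that adding a second classification method can reduce systematic, arbitrary exclusion without a human in the loop. The sketch below illustrates one reading of that idea on synthetic data: two differently trained models are combined so an applicant rejected at one model's arbitrary decision boundary can still be accepted by the other. It is an illustration of the general idea, not the article's concrete proposal.

    # Two differently trained classifiers combined with a disjunction, so an
    # applicant rejected at one arbitrary decision boundary can still pass the
    # other. Data and models are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    clf_a = LogisticRegression().fit(X, y)
    clf_b = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    applicants = rng.normal(size=(5, 4))
    accept = clf_a.predict(applicants).astype(bool) | clf_b.predict(applicants).astype(bool)
    print(accept)  # True where at least one of the two methods accepts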

What does it mean to solve the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems [article]

Javier Sanchez-Monedero, Lina Dencik, Lilian Edwards
2020 arXiv   pre-print
Yet the way decisions are made on who is eligible for jobs, and why, are rapidly changing with the advent and growth in uptake of automated hiring systems (AHSs) powered by data-driven tools.  ...  In this paper, we introduce a perspective outside the US by critically examining how three prominent automated hiring systems (AHSs) in regular use in the UK, HireVue, Pymetrics and Applied, understand  ...  The latter provision caused great academic stir in 2016 when it was claimed somewhat controversially it could be interpreted to provide data subjects, not just with a right to a "human in the loop" as  ... 
arXiv:1910.06144v2 fatcat:uv5bnixnyjdmvhap3sdstcnxsi

The impact of the Artificial Intelligence on the accounting profession, a literature's assessment

Mirela Simina Stancu, Adriana Duţescu
2021 Proceedings of the International Conference on Business Excellence  
This paper highlights the potential changes Artificial Intelligence can bring to accounting jobs and the necessary steps to be taken in order to prepare for the new jobs, in which Artificial Intelligence  ...  the capability of experts to adapt faster to the new status quo and to acquire the necessary skills to be able to work with Artificial Intelligence solutions and to overcome the fear of losing their jobs  ...  Cognitive automation solutions are the intermediary step in the evolution from robotic process automation to AI and are represented by machine learning, unstructured data processing and human-in-the-loop  ... 
doi:10.2478/picbe-2021-0070 fatcat:sru4f56ykjeppldaynzapiijmi

Designing Ethical Algorithms

Kirsten Martin
2019 MIS Quarterly Executive  
Such algorithmic decisions, like all decisions, are biased and make mistakes. Yet, who is responsible for managing those mistakes?  ...  Second, by creating inscrutable algorithms, which are difficult to understand or govern in use, developers may voluntarily take on accountability for the role of the algorithm in a decision. 1,2  ...  ," Journal of Business Ethics (127:4), April 2014, pp. 707-715. 29 Meg Jones refers to this concept (the need for an individual to have a larger role in a decision) as "a right to a human in the loop that  ... 
doi:10.17705/2msqe.00012 fatcat:a25wnlrjy5eutp5tkmgv23ykva

Survey on Fair Reinforcement Learning: Theory and Practice [article]

Pratik Gajane, Akrati Saxena, Maryam Tavakol, George Fletcher, Mykola Pechenizkiy
2022 arXiv   pre-print
However, many dynamic real-world applications can be better modeled using sequential decision-making problems and fair reinforcement learning provides a more suitable alternative for addressing these problems  ...  Fairness-aware learning aims at satisfying various fairness constraints in addition to the usual performance criteria via data-driven machine learning techniques.  ...  Furthermore, human-in-the-loop IoT systems are popular means to provide a personalized experience.  ... 
arXiv:2205.10032v1 fatcat:rrc7a5aumnbe3dmptkeh5ohapa

aiSTROM – A roadmap for developing a successful AI strategy [article]

Dorien Herremans
2021 arXiv   pre-print
Looking at new technologies, we have to consider challenges such as bias, legality of black-box-models, and keeping humans in the loop.  ...  Finally, we should make sure that our strategy includes continuous education of employees to enable a culture of adoption.  ...  Human-in-the-loop To evaluate and measure a model's performance, we typically use ground truth labels.  ... 
arXiv:2107.06071v1 fatcat:5jh2grv5rffutjidiqhj23b6nq
Showing results 1 — 15 out of 247 results