
Stakeholders in Explainable AI [article]

Alun Preece, Dan Harborne, Dave Braines, Richard Tomsett, Supriyo Chakraborty
2018 arXiv   pre-print
In this paper, we argue that this lack of consensus is due to there being several distinct stakeholder communities.  ...  of an AI.  ...  Conclusion In this paper we have attempted to 'tease apart' some of the issues in explainable AI by focusing on the various stakeholder communities and arguing that their motives and requirements for explainable  ... 
arXiv:1810.00184v1 fatcat:izkonor3urg3recwz4gdvoxnu4

Exploring Explainable AI in the Financial Sector: Perspectives of Banks and Supervisory Authorities [article]

Ouren Kuiper, Martin van den Berg, Joost van den Burgt, Stefan Leijnen
2021 arXiv   pre-print
We argue that the financial sector could benefit from clear differentiation between technical AI (model) explainability requirements and explainability requirements of the broader AI system in relation  ...  Explainable artificial intelligence (xAI) is seen as a solution to making AI systems less of a black box.  ...  The requirements regarding explainable AI reported in the interviews varied widely per use case and stakeholder.  ... 
arXiv:2111.02244v1 fatcat:b6rhwxrz2ncopiqluxdt3l7ttq

A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms [article]

Ana Lucic, Madhulika Srikumar, Umang Bhatt, Alice Xiang, Ankur Taly, Q. Vera Liao, Maarten de Rijke
2021 arXiv   pre-print
Given that there are a variety of stakeholders involved in, and affected by, decisions from machine learning (ML) models, it is important to consider that different stakeholders have different transparency  ...  In our work, we want to investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale, mixed-methods user study across a range of  ...  a user's trust in an AI system?  ... 
arXiv:2103.14976v2 fatcat:3q3hyt53onh77hqpwqmyt33fzi

On Two XAI Cultures: A Case Study of Non-technical Explanations in Deployed AI System [article]

Helen Jiang, Erwen Senge
2021 arXiv   pre-print
Not much of XAI is comprehensible to non-AI experts, who nonetheless, are the primary audience and major stakeholders of deployed AI systems in practice.  ...  The gap is glaring: what is considered "explained" to AI-experts versus non-experts are very different in practical scenarios.  ...  decisions to non-technical stakeholders who try to understand deployed AI systems in real-life.  ... 
arXiv:2112.01016v1 fatcat:yzstvwmhnve3llzybeam7xlshy

Human-Centered Explainable AI (HCXAI): Beyond Opening the Black-Box of AI

Upol Ehsan, Philipp Wintersberger, Q. Vera Liao, Elizabeth Anne Watkins, Carina Manger, Hal Daumé III, Andreas Riener, Mark O Riedl
2022 CHI Conference on Human Factors in Computing Systems Extended Abstracts  
Explainability of AI systems is crucial to hold them accountable because they are increasingly becoming consequential in our lives by powering high-stakes decisions in domains like healthcare and law.  ...  When it comes to Explainable AI (XAI), understanding who interacts with the black-box of AI is just as important as "opening" it, if not more.  ...  Explainability is as much a human factors problem as it is a technical one. Implicit in Explainable AI is the question: "explainable to whom?" [8].  ... 
doi:10.1145/3491101.3503727 fatcat:gbix2u5a2faihdkfkra4egu3fy

How AI Developers Overcome Communication Challenges in a Multidisciplinary Team: A Case Study [article]

David Piorkowski, Soya Park, April Yi Wang, Dakuo Wang, Michael Muller, Felix Portnoy
2021 arXiv   pre-print
During these collaborations, there is a knowledge mismatch between AI developers, who are skilled in data science, and external stakeholders who are typically not.  ...  This difference leads to communication gaps, and the onus falls on AI developers to explain data science concepts to their collaborators.  ...  In such situations, the AI team has to be quick on their feet as further explained in Section 4.2.  ... 
arXiv:2101.06098v1 fatcat:ezvgtc6zrja2pco4ymeqrwjp3a

Evidence-based explanation to promote fairness in AI systems [article]

Juliana Jansen Ferreira, Mateus de Souza Monteiro
2020 arXiv   pre-print
In order to explain their decisions with AI support, people need to understand how AI is part of that decision.  ...  People make decisions and usually, they need to explain their decision to others or in some matter. It is particularly critical in contexts where human expertise is central to decision-making.  ...  Reasons for eXplainable AI (XAI) in courts are abundant, such as ( [1] , pp. 1845-1846).  ... 
arXiv:2003.01525v1 fatcat:ocuh673whrh6nlqlz45pd2vljq

Managing the tension between opposing effects of explainability of artificial intelligence: a contingency theory perspective

Babak Abedin
2021 Internet Research  
explainability. Findings: The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (5Cs).  ...  The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus. Originality/value: This study addresses polarized beliefs amongst  ...  to particular stakeholders (Preece, 2018). 4.1.5 Confidence in AI.  ... 
doi:10.1108/intr-05-2020-0300 fatcat:7cb7vum7xzhurfz7sgewrhlcq4

Towards a Global Taxonomy of Interpretable AI Workshop Ai4Media [article]

Vijay Arya, Lode Lawaert, Tobias Blanke, Mor Vered, Jean Gabriel Piguet, Valeria Pulignano
2021 Zenodo  
. • Support and advance research efforts in explainability. • Contribute efforts to engender trust in AI.  ...  AI Explainability 360: explainability algorithms, 10 ways to explain data and AI models. One explanation does not fit all: different stakeholders require  ...  explainability taxonomy & guidance  ... 
doi:10.5281/zenodo.4733477 fatcat:ugdqxokgargbhbo2jyfsgm2eye

Bridging AI and HCI: Incorporating Human Values into the Development of AI Technologies - Wikimedia Research Showcase - June 2021 [article]

Haiyi Zhu
2021 figshare.com  
Slides from the June 2021 Wikimedia Research showcase by Haiyi Zhu presenting "Bridging AI and HCI: Incorporating Human Values into the Development of AI Technologies"  ...  Contribute to a broader understanding of human values related to AI-supported governance in online communities. 2.  ...  Design novel approach (visualizations and deliberation workshops) that facilitates greater community control and community agency in AI design. 3.  ... 
doi:10.6084/m9.figshare.15022422.v1 fatcat:6avzqq7qangcbixx3wtdq5dbfe

AI in Africa : Framing AI through an African Lens

Angeline Wairegi, Melissa Omino, Isaac Rutenberg
2021 Communication technologies et développement  
 ...  as stakeholders, (ii) explain the effects of management decisions on different stakeholders, (iii) identify which groups have valid claims on the firm, (iv) explain how stakeholder analysis can help the  ...  technology and AI stakeholders present in Africa.  ... 
doi:10.4000/ctd.4775 fatcat:j2v6emfacbdnzc7wid5j3zx3iq

On Explainable AI Solutions for Targeting in Cyber Military Operations

Clara Maathuis
2022 International Conference on Cyber Warfare and Security (ICIW)  
Hence, this article starts by discussing the meaning of explainable AI in the context of targeting in military cyber operations, continues by analyzing the challenges of embedding AI solutions (e.g., intelligent  ...  However, planning and conducting AI-based cyber military operations are actions still in the beginning of development.  ...  digital twin solutions that facilitate and strengthen the responsibility, transparency, and fairness of i) the stakeholders involved in developing XAI models in the military cyber domain, and ii) the XAI  ... 
doi:10.34190/iccws.17.1.38 fatcat:vvv2lia6dzeldnb67nwj2i2efy

Ethical AI-Powered Regression Test Selection [article]

Per Erik Strandberg, Mirgita Frasheri, Eduard Paul Enoiu
2021 arXiv   pre-print
Additionally, we provide a checklist for ethical AI-RTS to help guide the decision-making of the stakeholders involved in the process.  ...  While such challenges in AI are in general well studied, there is a gap with respect to ethical AI-RTS.  ...  8) Have all relevant stakeholders had the opportunity to participate in the design of the AI-RTS system? 9) How are stakeholders made aware of RTS decisions?  ... 
arXiv:2106.16050v1 fatcat:udbeuhhvtjc6ppcbx2abmhffoa

Reviewing the Need for Explainable Artificial Intelligence (xAI) [article]

Julie Gerlings, Arisa Shollo, Ioanna Constantiou
2021 arXiv   pre-print
The diffusion of artificial intelligence (AI) applications in organizations and society has fueled research on explaining AI decisions.  ...  Yet, we have a limited understanding of how xAI research addresses the need for explainable AI.  ...  We used the terms 'xAI', 'explainable artificial intelligence' or 'explainable AI' to search for articles in the above databases.  ... 
arXiv:2012.01007v2 fatcat:s5r2d2ovgfdy5oeyedb3md45oe

Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach

Steven Umbrello
2019 Big Data and Cognitive Computing  
This empirical investigation shows that the different and often disparate stakeholder groups that are implicated in AI design and use share some common values that can be used to further strengthen design  ...  VSD is shown to be both able to distill these common values as well as provide a framework for stakeholder coordination.  ...  The views in the paper are the authors' alone and not the views of the Institute for Ethics and Emerging Technologies. Conflicts of Interest: The author declares no conflict of interest.  ... 
doi:10.3390/bdcc3010005 fatcat:6meaqdivhzbdxk7vlq5bwc73uy