
Can we open the black box of AI?

Davide Castelvecchi
2016 Nature  
Dean Pomerleau can still remember his first tussle with the black-box problem.  ...  "At some point, it's like explaining Shakespeare to a dog." Faced with such challenges, AI researchers are responding just as Pomerleau did: by opening up the black box and doing the equivalent of neuroscience  ... 
doi:10.1038/538020a pmid:27708329 fatcat:drrgxkczgbdmhjdgqz4e6jku34

The AI detectives

Paul Voosen
2017 Science  
The drive to open the black box of AI is also coming from Silicon Valley itself, says Maya Gupta, a machine-learning researcher at Google in Mountain View, California.  ...  Opening up the black box Loosely modeled after the brain, deep neural networks are spurring innovation across science. But the mechanics of the models are mysterious: They are black boxes.  ...  The AI detectives  ... 
doi:10.1126/science.357.6346.22 pmid:28684483 fatcat:jeszwd5mife7pfyr5slhdfreum

Trustworthy AI [article]

Richa Singh, Mayank Vatsa, Nalini Ratha
2020 arXiv   pre-print
Modern AI systems are reaping the advantage of novel learning methods. With their increasing usage, we are realizing the limitations and shortfalls of these systems.  ...  We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems, namely: (i) bias and fairness, (ii) explainability, (iii) robust mitigation of  ...  By that we mean that a decent AI system can have the properties of being fair, dependable, and trusted additionally.  ... 
arXiv:2011.02272v1 fatcat:ccqs2ysklfbtrbnnqkouk5qlh4

From "Explainable AI" to "Graspable AI"

Maliheh Ghajargar, Jeffrey Bardzell, Alison Smith Renner, Peter Gall Krogh, Kristina Höök, David Cuartielles, Laurens Boer, Mikael Wiberg
2021 Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction  
, leading to debates around issues of biased AI systems, ML black-box, user trust, user's perception of control over the system, and system's transparency, to name a few.  ...  We note that the affordances of physical forms and their behaviors potentially can not only contribute to the explainability of ML systems, but also can contribute to an open environment for criticism.  ...  experience, and the sociocultural context in which it was produced/consumed, we believe the Graspable AI can provide opportunities for an open and more democratic environment for criticism.  ... 
doi:10.1145/3430524.3442704 fatcat:esszzs6adnax3al2fto3cmy2mq

AI Opaqueness: What Makes AI Systems More Transparent?

Victoria Rubin
2020 Proceedings of the Annual Conference of CAIS / Actes du congrès annuel de l'ACSI  
We offer insights from interviews with AI system users about their perceptions and developers' lessons learned.  ...  What does AI transparency mean? What explanations do AI system users desire?  ...  As these algorithms become more advanced, there is a concern for the ethical implications of subjecting people to these 'black boxes.'  ... 
doi:10.29173/cais1139 fatcat:utydhhohhrhn5i63wszdzkft2q

Human-Centered Explainable AI (HCXAI): Beyond Opening the Black-Box of AI

Upol Ehsan, Philipp Wintersberger, Q. Vera Liao, Elizabeth Anne Watkins, Carina Manger, Hal Daumé III, Andreas Riener, Mark O Riedl
2022 CHI Conference on Human Factors in Computing Systems Extended Abstracts  
When it comes to Explainable AI (XAI), understanding who interacts with the black-box of AI is just as important as "opening" it, if not more.  ...  The goal of the second installment is to go beyond the black box and examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels.  ...  to "open" the black-box of AI [13].  ... 
doi:10.1145/3491101.3503727 fatcat:gbix2u5a2faihdkfkra4egu3fy

Sustainable AI: AI for sustainability and the sustainability of AI

Aimee van Wynsberghe
2021 AI and Ethics  
This paper is not meant to engage with each of the three pillars of sustainability (i.e. social, economic, environment), and as such the pillars of sustainable AI.  ...  As such, Sustainable AI is focused on more than AI applications; rather, it addresses the whole sociotechnical system of AI.  ...  The second wave of AI ethics addressed the practical concerns of machine learning (ML) techniques: the black-box algorithm and the problem of explainability [9, 16], the lack of equal representation  ... 
doi:10.1007/s43681-021-00043-6 fatcat:soqjltv735crtg7vp3gbilukqe

Experiential AI

Drew Hemment, Ruth Aylett, Vaishak Belle, Dave Murray-Rust, Ewa Luger, Jane Hillston, Michael Rovatsos, Frank Broz
2019 AI Matters  
Artists can make the boundaries of systems visible and offer novel ways to make the reasoning of AI transparent and decipherable.  ...  The hypothesis is that art can mediate between computer code and human comprehension to overcome the limitations of explanations in and for AI systems.  ...  This has led to a call to limit the use of "black box" systems in such settings (Campolo, Sanfilippo, Whittaker, & Crawford, 2017).  ... 
doi:10.1145/3320254.3320264 fatcat:rdkafk7vbbc7tjp5nktuzlspem

Explainable AI without Interpretable Model [article]

Kary Främling
2020 arXiv   pre-print
Therefore, CIU explanations map accurately to the black-box model itself. CIU is completely model-agnostic and can be used with any black-box system.  ...  However, the interpretable model does not necessarily map accurately to the original black-box model. Furthermore, the understandability of interpretable models for an end-user remains questionable.  ...  We will begin the formal definition of CIU by providing a set of definitions. Definition 1 (Black-box model).  ... 
arXiv:2009.13996v1 fatcat:nf54mg3jizdqtkhvnqs7prmghe

Automated Testing of AI Models [article]

Swagatam Haldar, Deepak Vijaykeerthy, Diptikalyan Saha
2021 arXiv   pre-print
The last decade has seen tremendous progress in AI technology and applications. With such widespread adoption, ensuring the reliability of the AI models is crucial.  ...  In this paper, we extend the capability of the AITEST tool to include the testing techniques for Image and Speech-to-text models along with interpretability testing for tabular models.  ...  The current implementation only supports black-box testing, which when configured, can be applied to a large number of other similar models of the same type without much change.  ... 
arXiv:2110.03320v1 fatcat:pj54fu3mbne4np55schxd5ul6m

Self-explaining AI as an alternative to interpretable AI [article]

Daniel C. Elton
2020 arXiv   pre-print
To show how we might be able to trust AI despite these problems we introduce the concept of self-explaining AI.  ...  Self-explaining AIs are capable of providing a human-understandable explanation of each decision along with confidence levels for both the decision and explanation.  ...  Acknowledgements The author appreciates the helpful feedback from Robert Kirk, Kyle Vedder, Jacob Reinhold, Dr. W. James Murdoch, and Dr. Ulas Bagci.  ... 
arXiv:2002.05149v6 fatcat:zo2uq7sfgbak7j63vktctn4yaq

AI Nuclear Winter or AI That Saves Humanity? AI and Nuclear Deterrence [chapter]

Nobumasa Akiyama
2021 Robotics, AI, and Humanity  
Nuclear deterrence is an integral aspect of the current security architecture and the question has arisen whether adoption of AI will enhance the stability of this architecture or weaken it.  ...  Conversely, judgments about what does or does not suit the "national interest" are not well suited to AI (at least in its current state of development).  ...  AI as Black Box For decision makers who are responsible for the consequences of their decisions, a critical question is how and to what extent they can trust AI.  ... 
doi:10.1007/978-3-030-54173-6_13 fatcat:rgmtfzjr6rhihoo4mi6ffivyca

The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations [article]

Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I-Hsiang Lee, Michael Muller, Mark O. Riedl
2021 arXiv   pre-print
While "opening the opaque box" is important, understanding who opens the box can govern if the Human-AI interaction is effective.  ...  These groups were chosen to look at how disparities in AI backgrounds can exacerbate the creator-consumer gap.  ...  While opening the opaque box is important, who opens the box also matters. Implicit in Explainable AI is the question: "explainable to whom?" [41] .  ... 
arXiv:2107.13509v1 fatcat:si6dwi57njg27fkk6hqyqpunce

Explaining Explanations in AI

Brent Mittelstadt, Chris Russell, Sandra Wachter
2019 Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* '19  
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions.  ...  We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.  ...  contest decisions made by black-box algorithmic models.  ... 
doi:10.1145/3287560.3287574 dblp:conf/fat/MittelstadtRW19 fatcat:n7novurzcvatfkxg27o55mybmm

Trustworthy AI: A Computational Perspective [article]

Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, Jiliang Tang
2021 arXiv   pre-print
In this survey, we present a comprehensive survey of trustworthy AI from a computational perspective, to help readers understand the latest technologies for achieving trustworthy AI.  ...  In this work, we focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability  ...  collaborative, and agile governance. • Explainable AI [17]: The basic aim of explainable AI is to open up the "black box" of AI, to offer a trustworthy explanation of AI to users.  ... 
arXiv:2107.06641v3 fatcat:ymqaxvzsoncqrcosj5mxcvgsuy