The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations
[article]
2021
arXiv
pre-print
In this paper, we conduct a mixed-methods study of how two different groups of whos--people with and without a background in AI--perceive different types of AI explanations. ...
By bringing conscious awareness to how and why AI backgrounds shape perceptions of potential creators and consumers in XAI, our work takes a formative step in advancing a pluralistic Human-centered Explainable ...
CONCLUSIONS In this paper, we focus on the who of XAI by investigating how two different groups of whos--people with and without a background in AI--perceive different types of AI explanations. ...
arXiv:2107.13509v1
fatcat:si6dwi57njg27fkk6hqyqpunce
Human-Centered Explainable AI (HCXAI): Beyond Opening the Black-Box of AI
2022
CHI Conference on Human Factors in Computing Systems Extended Abstracts
When it comes to Explainable AI (XAI), understanding who interacts with the black-box of AI is just as important as "opening" it, if not more. ...
Explainability of AI systems is crucial to hold them accountable because they are increasingly becoming consequential in our lives by powering high-stakes decisions in domains like healthcare and law. ...
• Given the contextual nature of explanations, what are the potential pitfalls of evaluation metrics standardization? How might we take into account the who, why, and where in the evaluation methods? ...
doi:10.1145/3491101.3503727
fatcat:gbix2u5a2faihdkfkra4egu3fy
eXplainable AI: Take one Step Back, Move two Steps forward
2020
Mensch & Computer
To shape appropriate mental models of AI-based systems, explainable AI has been suggested by AI practitioners. ...
In 1991 the researchers at the center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question of "where is AI" from the users, who were interacting with AI but ...
Hence, we cannot design for explainable AI when we do not know how people perceive AI and their theories of how it works in the first place. ...
doi:10.18420/muc2020-ws111-369
dblp:conf/mc/AlizadehESC20
fatcat:y2u3wieexrh4hiucswzjwfcwe4
Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach
[article]
2020
arXiv
pre-print
In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. ...
It develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems. ...
Acknowledgements Sincerest thanks to all past and present teammates of the Human-centered XAI group at the Entertainment Intelligence Lab whose hard work made the case ...
arXiv:2002.01092v2
fatcat:ty2un4l3rve7vjdli6qgclijia
May AI? Design Ideation with Cooperative Contextual Bandits
2019
CHI Conference on Human Factors in Computing Systems
In a controlled study, 14 of 16 professional designers preferred the CCB-augmented tool. ...
The application case of digital mood board design is presented, wherein visual inspirational materials are collected and curated in collages. ...
Further, to support interpretability, our algorithm can explain how its suggestions are related to the features of the visual collage created. ...
doi:10.1145/3290605.3300863
dblp:conf/chi/KochLHO19
fatcat:4lswl2ykpvdrzeql7sk3jjfpk4
Artificial Intelligence (AI) and IT identity: Antecedents Identifying with AI Applications
[article]
2020
arXiv
pre-print
In the age of Artificial Intelligence and automation, machines have taken over many key managerial tasks. Replacing managers with AI systems may have a negative impact on workers' outcomes. ...
We draw on IT identity to understand the influence of identification with AI systems on job performance. ...
The present work examines the role of IT identity as an explanation for how individuals interact with AI systems, answering the call of Vodanovich, Sundaram, and Myers (2010). ...
arXiv:2005.12196v1
fatcat:rh6z2r4runh5ldba326jls7hnm
Montreal AI Ethics Institute's Response to Scotland's AI Strategy
[article]
2020
arXiv
pre-print
In addition to examining the points above, MAIEI suggests that the strategy be extended to include considerations on biometric data and how that will be processed and used in the context of AI. ...
overarching vision; Scotland's AI ecosystem; the proposed strategic themes; and how to grow public confidence in AI by building responsible and ethical systems. ...
As presented in the scoping document, the AI ecosystem captures the large elements that form and shape the space. ...
arXiv:2006.06300v1
fatcat:t36lsj4rr5c7thjwjyuoutbgjm
Truthful AI: Developing and governing AI that does not lie
[article]
2021
arXiv
pre-print
This raises the question of how we should limit the harm caused by AI "lies" (i.e. falsehoods that are actively selected for). ...
In many contexts, lying -- the use of verbal falsehoods to deceive -- is harmful. ...
We especially want to thank: Toby Ord, who provided some of the early momentum to get the project started; David Dalrymple and Paul Christiano, conversations with whom deepened our understanding of the ...
arXiv:2110.06674v1
fatcat:yttuvho7wrd6bgbi6fijh7wy4e
How to Improve Fairness Perceptions of AI in Hiring: The Crucial Role of Positioning and Sensitization
2021
AI Ethics Journal
bias in the selection process have a significant effect on people's perceptions of fairness. ...
The findings may help organizations to optimize their deployment of AI in selection processes to improve people's perceptions of fairness and thus attract top talent. ...
The increasing incorporation of AI in the hiring process raises new questions about how applicants' perceptions are shaped in this AI-enabled process. ...
doi:10.47289/aiej20210716-3
fatcat:6ihmlfd3krdbhezxdklj3maz5u
Human-AI Collaboration in Data Science
2019
Proceedings of the ACM on Human-Computer Interaction
New techniques in automating the creation of AI, known as AutoAI or AutoML, aim to automate the work practices of data scientists. ...
Though not yet widely adopted, we are interested in understanding how AutoAI will impact the practice of data science. ...
They reported mixed perceptions of automated AI (AutoAI) technology. ...
doi:10.1145/3359313
fatcat:slieqtiphjbxnlrz5h2dq3gcau
Uncommon voices of AI
2017
AI & Society: The Journal of Human-Centred Systems and Machine Intelligence
The new wave of artificial super intelligence raises a number of serious societal concerns: what are the crises and shocks of the AI machine that will trigger fundamental change and how should we cope ...
Living through dramatic technological change, we may feel trapped and disrupted, being left behind in the myth and reality of AI, and miss what is really at stake. ...
Nath argues that the causal explanation of the 'how' and 'what' of consciousness fails to explain the 'why' of consciousness. ...
doi:10.1007/s00146-017-0755-y
fatcat:vhp2wmyrhfbeper35u6c2uybgi
In AI We Trust? Factors That Influence Trustworthiness of AI-infused Decision-Making Processes
[article]
2019
arXiv
pre-print
We aim to understand how different factors about a decision-making process, and an AI model that supports that process, influence people's perceptions of the trustworthiness of that process. ...
In all of these cases, AI models are being trained to help human decision makers reach accurate and fair judgments, but little is known about what factors influence the extent to which people consider ...
differences that exist around people's perceptions of the use of AI in decision making. ...
arXiv:1912.02675v1
fatcat:y66gq6t6gfaj3imhbcgzvbbkta
On the Importance of User Backgrounds and Impressions: Lessons Learned from Interactive AI Applications
2022
ACM transactions on interactive intelligent systems (TiiS)
In this paper, we first showcase a user study on how anchoring bias can potentially affect mental model formations when users initially interact with an intelligent system and the role of explanations ...
We controlled the order of the policies and the presence of explanations to test our hypotheses. ...
The authors of this paper would like to thank the reviewers for their constructive feedback on the earlier manuscript from this paper. ...
doi:10.1145/3531066
fatcat:lo2ol7ymvfc2zaqpk7kqanfmii
Paradox in AI – AI 2.0: The Way to Machine Consciousness
[chapter]
2009
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Artificial Intelligence, the big promise of the last millennium, has apparently made its way into our daily lives. ...
The original expectation of true intelligence and thinking machines still lies ahead of us. Researchers are, however, more optimistic than ever. ...
important part in the brain's function and could form the basis of an explanation of consciousness. ...
doi:10.1007/978-3-642-03978-2_18
fatcat:uhuqmimofbg45j35qpy57cqm7q
Toward the Clinic: Understanding Patient Perspectives on AI and Data-Sharing for AI-Driven Oncology Drug Development
[chapter]
2020
Artificial Intelligence in Oncology Drug Discovery and Development
Using an in-depth, semi-structured interview protocol, this qualitative study examines cancer patients' perceptions of the burgeoning development of AI-led systems for oncology as well as their perspectives ...
The increasing application of AI-led systems for oncology drug development and patient care holds the potential to usher in pronounced impacts on patients' well-being. ...
color who have received cancer diagnoses, although the organization remains open to patients of all backgrounds. ...
doi:10.5772/intechopen.92787
fatcat:joextvcnovht5ncjwcz4e7s4r4
Showing results 1 — 15 out of 17,903 results