
Self-explaining AI as an alternative to interpretable AI [article]

Daniel C. Elton
2020 arXiv   pre-print
To show how we might be able to trust AI despite these problems, we introduce the concept of self-explaining AI.  ...  As a result, neural networks trained on complex real-world data are inherently hard to interpret and prone to failure if asked to extrapolate.  ...  Motivated by how trust works between humans, in this work we explore the idea of self-explaining AIs. Self-explaining AIs yield two outputs: the decision and an explanation of that decision.  ... 
arXiv:2002.05149v6 fatcat:zo2uq7sfgbak7j63vktctn4yaq

Does Explainable Artificial Intelligence Improve Human Decision-Making? [article]

Yasmeen Alufaisan, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, Murat Kantarcioglu
2020 arXiv   pre-print
Prior research on human and explainable AI interactions has focused on measures such as interpretability, trust, and usability of the explanation.  ...  Explainable AI provides insight into the "why" for model predictions, offering potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect  ...  Acknowledgments and Disclosure of Funding The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed  ... 
arXiv:2006.11194v1 fatcat:ht6gmmcl7jff7cmykykoktylfa

The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies

Aniek F. Markus, Jan A. Kors, Peter R. Rijnbeek
2020 Journal of Biomedical Informatics  
We argue that the reason for demanding explainability determines what should be explained, as this determines the relative importance of the properties of explainability (i.e., interpretability and fidelity).  ...  Lack of transparency is identified as one of the main barriers to implementation, as clinicians should be confident that the AI system can be trusted.  ...  Acknowledgements The authors would like to thank Dr. Jenna Reps for her valuable feedback on this manuscript.  ... 
doi:10.1016/j.jbi.2020.103655 pmid:33309898 fatcat:plvpe7itmjhalenryuu7f2eyd4

The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies [article]

Aniek F. Markus, Jan A. Kors, Peter R. Rijnbeek
2020 arXiv   pre-print
We argue that the reason for demanding explainability determines what should be explained, as this determines the relative importance of the properties of explainability (i.e., interpretability and fidelity).  ...  Lack of transparency is identified as one of the main barriers to implementation, as clinicians should be confident that the AI system can be trusted.  ...  Acknowledgements The authors would like to thank Dr. Jenna Reps for her valuable feedback on this manuscript.  ... 
arXiv:2007.15911v1 fatcat:hcupk4qssbaslf2wj7rmxj7r5e

The Transformation of Patient-Clinician Relationships With AI-Based Medical Advice: A "Bring Your Own Algorithm" Era in Healthcare [article]

Oded Nov, Yindalon Aphinyanaphongs, Yvonne W. Lui, Devin Mann, Maurizio Porfiri, Mark Riedl, John-Ross Rizzo, Batia Wiesenfeld
2020 arXiv   pre-print
One of the dramatic trends at the intersection of computing and healthcare has been patients' increased access to medical information, ranging from self-tracked physiological data to genetic data, tests  ...  Consequently, just as organizations have had to deal with a "Bring Your Own Device" (BYOD) reality in which employees use their personal devices (phones and tablets) for some aspects of their work, a similar  ...  Lui is a practicing neuro-radiologist and an associate professor and Associate Chair for AI at the Radiology Department, NYU Grossman School of Medicine.  ... 
arXiv:2008.05855v1 fatcat:j55mf6jm6zgetdds7pi7hbbmqa

Artificial intelligence: Explainability, ethical issues and bias

Alaa Marshan
2021 Annals of Robotics and Automation  
Despite this overlap, however, AI refers to a wider range of intelligent tasks, such as resembling human cognitive abilities to support learning, reasoning, and self-correction [12] .  ...  that fit all stakeholders involved in the implementation and interpretation of the results of an AI system [13] .  ... 
doi:10.17352/ara.000011 fatcat:dhxlagfh2vhkpeeda5qromnzxq

Human-centered artificial intelligence in education: seeing the invisible through the visible

Stephen J.H. Yang, Hiroaki Ogata, Tatsunori Matsui, Nian-Shing Chen
2021 Computers and Education: Artificial Intelligence  
However, we explore how AI can also inhibit the human condition, and advocate for an in-depth dialog between technology- and humanity-based researchers to improve understanding of HAI from various perspectives  ...  The use of AI can enhance human welfare in numerous respects, such as through improving the productivity of food, health, water, education, and energy services.  ...  Acknowledgments We would like to thank Academician Chao-Han Liu and Academician Wing-Huen Ip of the Academia Sinica, Taiwan, for their inspiration and leadership toward the achievement of human-centered  ... 
doi:10.1016/j.caeai.2021.100008 fatcat:cw6pt6ip4rea3fuqd45puft7xm

Explainable AI: A Neurally-Inspired Decision Stack Framework [article]

J.L. Olds, M.S. Khan, M. Nayebpour, N. Koizumi
2019 arXiv   pre-print
European Law now requires AI to be explainable in the context of adverse decisions affecting European Union (EU) citizens.  ...  AI decision came to its conclusion.  ...  Reading out the machine code to derive the explanation for a program failure is an alternative to current software debugging routines.  ... 
arXiv:1908.10300v1 fatcat:cn2n2dajejgv7a5bavotx4zcby

The Legalhood of Artificial Intelligence: AI Applications as Energy Services

Lambrini Seremeti (School of Education, Frederick University, Nicosia, Cyprus), Ioannis Kougias (Laboratory of Interdisciplinary Semantic Interconnected Symbiotic Education Environments, Department of Electrical and Computer Engineering, University of Peloponnese, Greece)
2021 Journal of Artificial Intelligence and Systems  
The importance of data has increased in the last century, and today it is an essential resource for any human activity as well as a vital component of our society.  ...  It is our belief that unforeseeable and ground-breaking AI applications can be regulatorily tackled with respect to energy law.  ...  Section 2 of the paper constitutes an effort to explain the need for granting a legal hypostasis to AI entities. Section 3 touches on the ways in which an AI instrument is identified by law.  ... 
doi:10.33969/ais.2021.31006 fatcat:lo2agmyzjfdyzpbt5araycst2i

An Artificial Intelligence Life Cycle: From Conception to Production [article]

Daswin De Silva, Damminda Alahakoon
2021 arXiv   pre-print
An ontological mapping of AI algorithms to applications, followed by an organisational context for the AI life cycle, are further contributions of this article.  ...  The 'Develop' phase is technique-oriented, as it transforms data and algorithms into AI models that are benchmarked, evaluated and explained.  ...  fine-tuned parameter settings; investigate the bias–variance trade-off across all models. AI model explainability (or XAI): apply intrinsic methods (methods endemic to the algorithm) for interpreting  ... 
arXiv:2108.13861v1 fatcat:mcxhfytpcfhxvgj5u4okhy2rei

Algorithms in future insurance markets

Małgorzata Śmietanka, Adriano Koshiyama, Philip Treleaven
2021 International Journal of Data Science and Big Data Analytics  
These technologies are important since they underpin the automation of insurance markets and risk analysis, and provide the context for algorithms such as AI machine learning and computational  ...  Although these modes of learning have been in the AI/ML field for more than a decade, they are now more applicable due to the availability of data, computing power and infrastructure.  ...  Applications in insurance markets Interpretability/Explainability of AI algorithms In the context of AI and ML, explainability and interpretability are often used interchangeably.  ... 
doi:10.51483/ijdsbda.1.1.2021.1-19 fatcat:gty5qdugnbhm3mophojqyxmkja

Human-Centric AI: The Symbiosis of Human and Artificial Intelligence

Davor Horvatić, Tomislav Lipic
2021 Entropy  
Most of the recent success in AI comes from the utilization of representation learning with end-to-end trained deep neural network models in tasks such as image, text, and speech recognition or strategic  ...  Well-evidenced advances of data-driven complex machine learning approaches emerging within the so-called second wave of artificial intelligence (AI) fostered the exploration of possible AI applications  ...  Acknowledgments: The articles presented in this Special Issue provide insights into the field of newly emerging human-centric explainable AI, enabling us to control and improve performance, robustness,  ... 
doi:10.3390/e23030332 pmid:33799841 pmcid:PMC7998306 fatcat:weok427uv5hyvgh225mtz5yh3a

Pretrained AI Models: Performativity, Mobility, and Change [article]

Lav R. Varshney, Nitish Shirish Keskar, Richard Socher
2019 arXiv   pre-print
We discuss how pretrained models are developed and compared under the common task framework, but that this may make self-regulation inadequate.  ...  We close by discussing how this sociological understanding of pretrained models can inform AI governance frameworks for fairness, accountability, and transparency.  ...  Self-Governance As detailed in the responsible innovation literature [86, 93] , contrary to self-governance by innovators, an alternative is deliberative and inclusive governance with broad stakeholder  ... 
arXiv:1909.03290v1 fatcat:7doni7tc3rginpokkow2wtiqmy

Levels of Explainable Artificial Intelligence for Human-Aligned Conversational Explanations

Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal, Francisco Cruz
2021 Artificial Intelligence  
While important, these explanations rarely provide insights into an agent's: beliefs and motivations; hypotheses of other (human, animal or AI) agents' intentions; interpretation of external cultural expectations  ...  Over the last few years there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML).  ...  Alternatively, the system will provide an interpretation of a single instance's output, such as an image, text or graph, illustrating how an individual data instance was processed [93, 94] .  ... 
doi:10.1016/j.artint.2021.103525 fatcat:3u5hkk3tmrhx7kdlbgx5vxncxi

What Kind of Artificial Intelligence Should We Want for Use in Healthcare Decision-Making Applications?

Jordan Joseph Wadden
2021 Canadian Journal of Bioethics  
For example, Eric Topol compares AI in healthcare to AI in self-driving cars (4). Topol draws on an analysis of self-driving cars by Steven E.  ...  In other words, these patients may accept AI as a diagnostic tool but refuse to consent to treatment if an AI were making the decision.  ... 
doi:10.7202/1077636ar fatcat:4yll3awzgbcypbuqwmacdo6uda
Showing results 1 — 15 out of 66,429 results