1,981 Hits in 4.1 sec

Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI

Richard Tomsett, Alun Preece, Dave Braines, Federico Cerutti, Supriyo Chakraborty, Mani Srivastava, Gavin Pearson, Lance Kaplan
2020 Patterns  
We propose that AI services can achieve this by being both interpretable and uncertainty-aware. Creating such AI systems poses various technical and human factors challenges.  ...  We suggest that these problems can be mitigated by taking steps that allow rapid trust calibration so that decision makers understand the AI system's limitations and likely failures and can calibrate their  ...  The views and conclusions contained in this document are those of the authors  ... 
doi:10.1016/j.patter.2020.100049 pmid:33205113 pmcid:PMC7660448 fatcat:znum6ebievgg5aqchwhszjflyy

Scientific AI in materials science: a path to a sustainable and scalable paradigm [article]

Brian DeCost, Jason Hattrick-Simpers, Zachary Trautt, Aaron Kusne, Eva Campo, Martin Green
2020 arXiv   pre-print
We provide a brief introduction to AI in materials science and engineering, followed by detailed discussions of each of the opportunities and paths forward.  ...  Recently there has been an ever-increasing trend in the use of machine learning (ML) and artificial intelligence (AI) methods by the materials science, condensed matter physics, and chemistry communities  ...  for quantifying interpretability and trust.  ... 
arXiv:2003.08471v1 fatcat:5bskrkxhsvfxbhot5dh3z5ynea

Certifiable Artificial Intelligence Through Data Fusion [article]

Erik Blasch, Junchi Bin, Zheng Liu
2021 arXiv   pre-print
While the AI community has made rapid progress, there are challenges in certifying AI systems.  ...  This paper reviews and proposes concerns in adopting, fielding, and maintaining artificial intelligence (AI) systems.  ...  Acknowledgments The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or  ... 
arXiv:2111.02001v1 fatcat:kqaz2e5kxbairnr7povat6uci4

An Experimentation Platform for Explainable Coalition Situational Understanding [article]

Katie Barrett-Powell, Jack Furby, Liam Hiley, Marc Roig Vilamala, Harrison Taylor, Federico Cerutti, Alun Preece, Tianwei Xing, Luis Garcia, Mani Srivastava, Dave Braines
2020 arXiv   pre-print
and subsymbolic AI/ML approaches for event processing.  ...  We present an experimentation platform for coalition situational understanding research that highlights capabilities in explainable artificial intelligence/machine learning (AI/ML) and integration of symbolic  ...  Uncertainty management A key element of our work in trust calibration is the management of uncertainty, including distinctions between aleatoric and epistemic uncertainty (Tomsett et al. 2020) .  ... 
arXiv:2010.14388v2 fatcat:clysndaa6fgdbfclnfrippl2uy

Machine Learning Meets Visualization to Make Artificial Intelligence Interpretable (Dagstuhl Seminar 19452)

Enrico Bertini, Peer-Timo Bremer, Daniela Oelke, Jayaraman Thiagarajan
2020 Dagstuhl Reports  
This report documents the program and the outcomes of Dagstuhl Seminar 19452 "Machine Learning Meets Visualization to Make Artificial Intelligence Interpretable".  ...  Presenting system uncertainty in automotive UIs for supporting trust calibration in autonomous driving. Proc. 5th Int.  ...  acceptance and trust.  ... 
doi:10.4230/dagrep.9.11.24 dblp:journals/dagstuhl-reports/BertiniBOT19 fatcat:z2ykpioo3jbcvkcpegdemaef34

"I can assure you [...] that it's going to be all right" -- A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships [article]

Brett W Israelsen
2017 arXiv   pre-print
Trust between humans and autonomy is reviewed, and the implications for the design of assurances are highlighted.  ...  In essence they want to be able to trust the systems that are being designed. In this survey we present assurances that are the method by which users can understand how to trust this technology.  ...  amenable to calibrated trust.  ... 
arXiv:1708.00495v2 fatcat:bx42oqcijfhchna3xhl7nktnqa

Towards an Equitable Digital Society: Artificial Intelligence (AI) and Corporate Digital Responsibility (CDR)

Karen Elliott, Rob Price, Patricia Shaw, Tasos Spiliotopoulos, Magdalene Ng, Kovila Coopamootoo, Aad van Moorsel
2021 Society (New Brunswick)  
The paper seeks to harmonise and align approaches, illustrating the opportunities and threats of AI, while raising awareness of Corporate Digital Responsibility (CDR) as a potential collaborative mechanism  ...  We need to think carefully about how we implement these algorithms, the delegation of decisions and data usage, in the absence of human oversight and AI governance.  ...  On the other hand, we observe substantial uncertainty and anxiety that the rapid adoption of AI across the digital "space" will impact society in negative ways: widespread job loss, income inequality,  ... 
doi:10.1007/s12115-021-00594-8 pmid:34149122 pmcid:PMC8202049 fatcat:t42gu6clyncv5pfh5nqoed4oq4

Improving Reproducibility in Research: The Role of Measurement Science

Robert J. Hanisch, Ian S. Gilmore, Anne L. Plant
2019 Journal of Research of the National Institute of Standards and Technology  
The workshop brought together experts from the measurement and wider research communities (Physical-, Data- and Life-sciences, Engineering, and Geology) to understand the issues and to explore how good  ...  The participants came from UK, US, Korea, France, Germany, Australia, Bosnia and Herzegovina, Canada, Turkey, and Singapore.  ...  Due to the rapid development of digital manufacturing (Industry 4.0, etc.) and AI, machine-readable methods and protocols, as well as the transfer of digital calibration certificates, should become standard  ... 
doi:10.6028/jres.124.024 pmid:34877174 pmcid:PMC7340550 fatcat:chx7xl6fffffbc3wzohb7xq244

"Dave...I can assure you ...that it's going to be all right ..." A Definition, Case for, and Survey of Algorithmic Assurances in Human-Autonomy Trust Relationships

Brett W. Israelsen, Nisar R. Ahmed
2019 ACM Computing Surveys  
This paper presents a survey of algorithmic assurances, i.e. programmed components of agent operation that are expressly designed to calibrate user trust in artificially intelligent agents.  ...  Many techniques have been devised to assess and influence human trust in artificially intelligent agents.  ...  calibrating 'trust', as opposed to calibrating the TRBs.  ... 
doi:10.1145/3267338 fatcat:7wzpdawvwbdmvavttqfsu2fly4

URREF Self-Confidence in Information Fusion Trust

Erik Blasch, Audun Jøsang, Jean Dezert, Paulo C. G. Costa, Anne-Laure Jousselme
2015 Zenodo  
Evaluation of Techniques for Uncertainty Representation Working Group (ETURWG) on self-confidence and trust in information fusion systems design.  ...  We argue that uncertainty can be minimized through confidence (information evidence) and self-confidence (source agent) processing. The results here seek to enrich the ongoing discussion at the ISIF's  ...  The user interface was shown to have a strong impact on trust, cooperation, and situation awareness [38] .  ... 
doi:10.5281/zenodo.22665 fatcat:44mcnwln4zh3dmwjmb6erx5tfi

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [article]

Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, Klaus Mueller
2020 arXiv   pre-print
Our study shows benefits of AI explanation as interfaces for machine teaching--supporting trust calibration and enabling rich forms of teaching feedback, and potential drawbacks--anchoring effect with  ...  When the model matures, the machine teacher should be able to recognize its progress in order to trust and feel confident about their teaching outcome.  ...  This work was done as an internship project at IBM Research AI, and partially supported by NSF grants IIS 1527200 and IIS 1941613.  ... 
arXiv:2001.09219v4 fatcat:xfxzvroakvelhiwbjhdmrpy6my

What can AI do for me: Evaluating Machine Learning Interpretations in Cooperative Play [article]

Shi Feng, Jordan Boyd-Graber
2019 arXiv   pre-print
Machine learning is an important tool for decision making, but its ethical and responsible application requires rigorous vetting of its interpretability and utility: an understudied problem, particularly  ...  We recruit both trivia experts and novices to play this game with a computer as their teammate, which communicates its prediction via three different interpretations.  ...  Efforts including the Explainable AI (XAI) initiative [17] led to the conceptualization of a series of human-AI cooperation paradigms, including human-aware AI [7] , and human-robot teaming [62] .  ... 
arXiv:1810.09648v3 fatcat:h6bgh4srkbdyrhsdp7ikuvrlni

The global landscape of AI ethics guidelines

Anna Jobin, Marcello Ienca, Effy Vayena
2019 Nature Machine Intelligence  
"ethical AI" and which ethical requirements, technical standards and best practices are needed for its realization.  ...  to how these principles are interpreted; why they are deemed important; what issue, domain or actors they pertain to; and how they should be implemented.  ...  The protocol was pilot-tested and calibrated prior to data collection.  ... 
doi:10.1038/s42256-019-0088-2 fatcat:ok3sbwujwzegpaijtyvj4emhnm

Bringing AI to BI

Darren Edge, Jonathan Larson, Christopher White
2018 Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18  
Through our creation of multiple end-to-end data applications, we learned that representing the varying quality of inferred data structures was crucial for making the use and limitations of AI transparent  ...  We conclude with reflections on BI in the age of AI, big data, and democratized access to data analytics.  ...  Acknowledgements We would like to thank our collaborators in Microsoft Research, Uncharted Software, Microsoft Power BI, and the Power BI Solution Templates team for their substantial contributions to  ... 
doi:10.1145/3170427.3174367 dblp:conf/chi/EdgeLW18 fatcat:z2dxzqpeyrbgzpsmh2hqw4r23a

Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [article]

Gagan Bansal, Besmira Nushi, Ece Kamar, Eric Horvitz, Daniel S. Weld
2021 arXiv   pre-print
training through improvements in expected team utility across datasets, considering parameters such as human skill and the cost of mistakes.  ...  In such AI-advised decision making, humans and machines form a team, where the human is responsible for making final decisions. But is the most accurate AI the best teammate?  ...  Acknowledgments This material is based upon research initially performed during Gagan Bansal's summer internship at Microsoft Research, with continuing support by ONR grant N00014-18-1-2193, NSF RAPID  ... 
arXiv:2004.13102v3 fatcat:2ikxhc7hl5erzadgkzsygtztze