
Impact of Agent Reliability and Predictability on Trust in Real Time Human-Agent Collaboration

Sylvain Daronnat, Leif Azzopardi, Martin Halvey, Mateusz Dubiel
2020 Proceedings of the 8th International Conference on Human-Agent Interaction  
In addition, we modelled the human-agent trust relationship and demonstrated that it is possible to reliably predict users' trust ratings using real-time interaction data.  ...  While past work has studied how trust relates to an agent's reliability, it has been mainly carried out in turn-based scenarios, rather than during real-time ones.  ...  ACKNOWLEDGMENTS We thank the Thales company for their ongoing support in the project.  ...
doi:10.1145/3406499.3415063 dblp:conf/hai/DaronnatAHD20 fatcat:2dooj7jq5bcjfegde3qquqteda

Human-Agent Trust Relationships in a Real-Time Collaborative Game

Sylvain Daronnat
2020 Extended Abstracts of the 2020 Annual Symposium on Computer-Human Interaction in Play  
We then study the impact that different agents have on reliance, performance, cognitive load and trust. We seek to understand which aspects of an agent influence the development of trust the most.  ...  We hope to pave the way for trust-aware agents, capable of adapting their behaviours to users in real time.  ...  These findings further highlight the importance of predictability and consistency in the design of potentially error-prone agents, and how they impact human-agent collaboration in real time.  ...
doi:10.1145/3383668.3419953 dblp:conf/chiplay/Daronnat20 fatcat:f3i2ld2rnnfxlm5malhs7gzov4

Inferring Trust From Users' Behaviours; Agents' Predictability Positively Affects Trust, Task Performance and Cognitive Load in Human-Agent Real-Time Collaboration

Sylvain Daronnat, Leif Azzopardi, Martin Halvey, Mateusz Dubiel
2021 Frontiers in Robotics and AI  
Our work focuses on how agents' predictability affects cognitive load, performance and users' trust in a real-time human-agent collaborative task.  ...  Collaborative virtual agents help human operators to perform tasks in real time.  ...  ACKNOWLEDGMENTS We thank the Thales company for their ongoing support in the project as well as all the participants who took part in this study.  ...
doi:10.3389/frobt.2021.642201 fatcat:2pwbewc35jhubagqhxqadw4pqi

Using fNIRS to Identify Transparency- and Reliability-Sensitive Markers of Trust Across Multiple Timescales in Collaborative Human-Human-Agent Triads

Lucca Eloy, Emily J. Doherty, Cara A. Spencer, Philip Bobko, Leanne Hirshfield
2022 Frontiers in Neuroergonomics  
real time.  ...  Transparency and reliability levels are found to significantly affect trust in the agent, while transparency explanations do not impact mental demand.  ...  In other words, can fNIRS data be used in real-time machine learning models to predict whether or not a human is likely to rely on an AI suggestion?  ... 
doi:10.3389/fnrgo.2022.838625 fatcat:cs3regoy3ncfzijm4g5dji7ymq

Exploring Trust in Human-Agent Collaboration

Isabel Schwaninger, Geraldine Fitzpatrick, Astrid Weiss
2019 European Conference on Computer Supported Cooperative Work  
trust in human-agent collaboration.  ...  real-life settings, while concepts embraced in CSCW can lead to a more thorough understanding of the situatedness and dynamics of trust going beyond the attributes of the agent itself.  ...  Different collaboration formations are also likely to impact issues of trust in human-agent collaboration studies, but to date this has not been well explored, especially in complex real-world settings  ...
doi:10.18420/ecscw2019_ep08 dblp:conf/ecscw/SchwaningerFW19 fatcat:yhpm75kod5bz7ksn35ulofe5ay

Towards Modeling Real-Time Trust in Asymmetric Human–Robot Collaborations [chapter]

Anqi Xu, Gregory Dudek
2016 Springer Tracts in Advanced Robotics  
We further construct and optimize a predictive model of users' trust responses to discrete events, which provides both insights on this fundamental aspect of real-time human-machine interaction, and also  ...  Our analyses quantify key correlations between real-time human-robot trust assessments and diverse factors, including properties of failure events reflecting causal trust attribution, as well as strong  ...  We would also like to thank all of the participants who contributed to our user study.  ... 
doi:10.1007/978-3-319-28872-7_7 fatcat:5ul6z22xqvhpteimqwpkm2g6nq

How do people incorporate advice from artificial agents when making physical judgments? [article]

Erik Brockbank, Haoliang Wang, Justin Yang, Suvir Mirchandani, Erdem Bıyık, Dorsa Sadigh, Judith E. Fan
2022 arXiv   pre-print
Prior work has largely focused on appraisal of simple, static skills; in contrast, we probe competence evaluations in a rich setting with agents that learn over time.  ...  Results provide a quantitative measure of how people integrate a partner's competence into their own decisions and may help facilitate better coordination between humans and artificial agents.  ...  Acknowledgments EB, JEF, and DS are supported by an ONR Science of Autonomy award. JEF is additionally supported by NSF CAREER #2047191 and a Stanford Hoffman-Yee grant.  ...
arXiv:2205.11613v1 fatcat:jawgkjy4dvfixc6xcodygnkt2e

Neural Correlates of Trust in Automation: Considerations and Generalizability Between Technology Domains

Sarah K. Hopko, Ranjana K. Mehta
2021 Frontiers in Neuroergonomics  
of trustworthiness or pain points of technology, or for human-in-the-loop cyber intrusion detection.  ...  As such, this manuscript discusses the current state of knowledge in trust perceptions, factors that influence trust, and corresponding neural correlates of trust as generalizable between domains.  ...  In collaborative human-automation teaming, the operators can choose when and how to rely on or utilize automation features, highly dependent on how well calibrated their trust is.  ...
doi:10.3389/fnrgo.2021.731327 fatcat:kaz2vf6tjvbw7azxcivmdtoixe

Domain-Level Explainability – A Challenge for Creating Trust in Superhuman AI Strategies [article]

Jonas Andrulis, Ole Meyer, Grégory Schott, Samuel Weinbach, Volker Gruhn
2020 arXiv   pre-print
and therefore requires trust in their transparency and reliability.  ...  With superhuman strategies being non-intuitive and complex by definition and real-world scenarios prohibiting a reliable performance evaluation, the key components for trust in these systems are difficult  ...  The main questions and drivers for scenarios are predictions of the future (E1) and the impact of changes in hypothetical scenarios (E2).  ... 
arXiv:2011.06665v1 fatcat:qwqpfzjz7nhmjg2ngx2bkip57i

Engineering Human–Machine Teams for Trusted Collaboration

Basel Alhaji, Janine Beecken, Rüdiger Ehlers, Jan Gertheiss, Felix Merz, Jörg P. Müller, Michael Prilla, Andreas Rausch, Andreas Reinhardt, Delphine Reinhardt, Christian Rembe, Niels-Ole Rohweder (+3 others)
2020 Big Data and Cognitive Computing  
Based on our analysis, we propose and outline three important areas of future research on engineering and operating human–machine teams for trusted collaboration.  ...  In terms of methods, we focus on how reciprocal trust between humans and intelligent machines is defined, built, measured, and maintained from a systems engineering and planning perspective in literature  ...  Acknowledgments: We gratefully acknowledge Alexander Herzog, Carsten Hesselmann and Marc Schlegel for providing administrative support of the HERMES project.  ... 
doi:10.3390/bdcc4040035 fatcat:uoanpxph5fbglo7sl3pytbkykm

Investigating Adjustable Social Autonomy in Human Robot Interaction

Filippo Cantucci, Rino Falcone, Cristiano Castelfranchi
2021 Workshop From Objects to Agents  
The experiment has been designed in order to demonstrate how the robot's capability to learn its own level of self-trust on its predictive abilities in perceiving the user and building a model of her/him  ...  the mental states and the features of its human interlocutor, in order to adapt its social autonomy every time humans require the robot's help.  ...  We call these collaborative conflicts, as they are based on the desire to collaborate beyond what is required, but in doing so errors and discrepancies occur.  ...
dblp:conf/woa/CantucciFC21 fatcat:hcmlbt7aubd4noyuzkjsqotiem

Would a robot trust you? Developmental robotics model of trust and theory of mind

Samuele Vinanzi, Massimiliano Patacchiola, Antonio Chella, Angelo Cangelosi
2019 Philosophical Transactions of the Royal Society of London. Biological Sciences  
The ability for an agent to evaluate the trustworthiness of its sources of information is particularly useful in joint task situations where people and robots must collaborate to reach shared goals.  ...  Trust is a critical issue in human-robot interactions: as robotic systems gain complexity, it becomes crucial for them to be able to blend into our society by maximizing their acceptability and reliability  ...  This means that collaborative scenarios between humans and robots will become more frequent and will have a deeper impact on everyday life.  ... 
doi:10.1098/rstb.2018.0032 pmid:30852993 pmcid:PMC6452250 fatcat:6mzca62qzzf65bpwv3uwyk56ke

The influence of agent reliability on trust in human-agent collaboration

Xiaocong Fan, Sooyoung Oh, Michael McNeese, John Yen, Haydee Cuevas, Laura Strater, Mica R. Endsley
2008 Proceedings of the 15th European conference on Cognitive ergonomics: the ergonomics of cool interaction - ECCE '08
More importantly, the knowledge of the agent's reliability and the ratio of unreliable tasks have significant effects on humans' trust, as manifested in both team performance and human operators' rectification  ...  Originality/Value - It represents an important step toward uncovering the nature of human trust in human-agent collaboration.  ...  of human-agent trust: What factors might have impacts on a human's trust (and use) of his/her decision aids?  ...
doi:10.1145/1473018.1473028 dblp:conf/ecce/FanOMYCSE08 fatcat:b6x25j5drfcjlpav5igue3ao3a

Adaptive trust calibration for human-AI collaboration

Kazuo Okamura, Seiji Yamada, Chen Lv
2020 PLoS ONE  
Safety and efficiency of human-AI collaboration often depend on how humans could appropriately calibrate their trust towards the AI agents.  ...  Although many studies focused on the importance of system transparency in keeping proper trust calibration, the research in detecting and mitigating improper trust calibration remains very limited.  ...  Collaboration between human users and autonomous AI agents is always essential as such technologies are never perfect. One key aspect of such collaborations is that users trust the agents.  ... 
doi:10.1371/journal.pone.0229132 pmid:32084201 fatcat:yvtrg653cfatfalt6e7uflpr5i

Robot's self-trust as precondition for being a good collaborator

Filippo Cantucci, Rino Falcone, Cristiano Castelfranchi
2021 International Joint Conference on Autonomous Agents & Multiagent Systems  
The experiment has been designed in order to demonstrate how the robot's capability to learn its own level of self-trust on its predictive abilities in perceiving the user and building a model of her/him  ...  states and the features of its human interlocutor, in order to adapt its behavior every time she/he requires the robot's help.  ...  Figure 1c shows a statistical description of the impact of the self-trust building process on the level of the user's satisfaction with the robot's smart collaboration.  ...
dblp:conf/atal/CantucciFC21 fatcat:df6r27sxkzbfpfx25i7nxwgsdq