Exploring the Dynamic Nature of Trust Using Interventions in a Human-AI Collaborative Task
by Sachini Weerawardhana, Michael Akintunde, and Luc Moreau
Abstract
People are increasingly interacting with machines embedded with intelligent decision aids, sometimes in high-stakes environments. When a human user interacts with a decision-making agent for the first time, it is likely that the agent's behaviour or decisions do not precisely align with the human user's goals. This phenomenon, known as goal misalignment, has been recognised as a critical concern for human-machine teams. Prior work has focused on the effect of automation's behavioural properties, such as predictability and reliability, on trust in human-machine interaction scenarios. However, little is known about situations in which automation's capabilities are misaligned with humans' expectations, or about how such misalignment affects trust. Even less is known about the effect of environmental factors on trust. We study the relationship between intervention behaviours and trust in a simulated navigation task in which the human user collaborates with an agent whose goals are misaligned with their own. We evaluate trust quantitatively, using intervention frequency as a behavioural measure, and qualitatively, using self-reports. By advancing the understanding and measurement of trust in collaborative settings, this research contributes to the development of trustworthy and symbiotic human-AI systems.
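As a rough illustration of the behavioural measure named in the abstract, the sketch below (Python, with hypothetical names and data, not drawn from the paper) computes intervention frequency as the share of agent decisions that a participant overrides; the paper additionally pairs such a measure with qualitative self-reports.

```python
# Hypothetical sketch: intervention frequency as a behavioural trust proxy.
# Class names, fields, and numbers are illustrative assumptions, not the
# paper's actual task or logging format.

from dataclasses import dataclass


@dataclass
class TrialLog:
    """One simulated navigation trial: how many decisions the agent made
    and how many of them the participant overrode (intervened on)."""
    agent_decisions: int
    interventions: int


def intervention_frequency(trials: list[TrialLog]) -> float:
    """Fraction of agent decisions the participant overrode across trials.
    Lower values are commonly read as higher behavioural trust."""
    decisions = sum(t.agent_decisions for t in trials)
    overrides = sum(t.interventions for t in trials)
    return overrides / decisions if decisions else 0.0


# Example: a participant intervenes on 8 of 40 agent decisions.
logs = [TrialLog(agent_decisions=20, interventions=3),
        TrialLog(agent_decisions=20, interventions=5)]
print(f"intervention frequency = {intervention_frequency(logs):.2f}")  # 0.20
```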
Type: chapter
Stage: published
Date: 2024-06-05