Formal Specification of Actual Trust in Multiagent Systems
by
Michael Akintunde,
Vahid Yazdanpanah,
Asieh Salehi Fathabadi,
Corina Cirstea,
Mehdi Dastani,
Luc Moreau
Abstract
This research focuses on establishing trust in multiagent systems where human and AI agents collaborate. We propose a computational notion of actual trust, emphasising the modelling of an agent's capacity to deliver tasks. Unlike reputation-based notions of trust or statistical analyses of past behaviour, our approach considers the specific setting in which agents interact. We integrate non-deterministic semantics to capture the inherent uncertainties in the behaviour of a multiagent system, while stressing the importance of verifying an agent's actual capabilities. We provide a conceptual analysis of the characteristics of actual trust and highlight relevant trust verification tools. By advancing the understanding and verification of trust in collaborative systems, this research contributes to responsible and trustworthy human-AI interactions, enhancing reliability across a range of domains.
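As a rough illustration of the abstract's central point (verifying an agent's actual capacity to deliver a task when the system behaves non-deterministically), the sketch below computes, in plain Python, the set of states from which an agent can force a delivery goal however the environment resolves the non-determinism. The toy state space, the action tables, and the attractor-style fixpoint check are illustrative assumptions for this page, not the formalism defined in the chapter.

# Hypothetical toy model, not taken from the chapter: states, the agent's
# available actions, and the set of successor states the environment may
# non-deterministically choose after each (state, action) pair.
ACTIONS = {
    "s0": ["deliver", "wait"],
    "s1": ["deliver"],
    "s2": [],            # dead end: the agent has no capability here
}
STEP = {
    ("s0", "deliver"): {"s1", "s2"},
    ("s0", "wait"):    {"s0"},
    ("s1", "deliver"): {"goal"},
}

def can_enforce(goal_states, states):
    """Least-fixpoint 'attractor' computation: a state is winning if the
    agent has some action all of whose non-deterministic outcomes are
    already winning. This is the standard capability check for
    reachability in a two-player agent-versus-environment game."""
    win = set(goal_states)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in win:
                continue
            for a in ACTIONS.get(s, []):
                successors = STEP.get((s, a), set())
                if successors and successors <= win:
                    win.add(s)
                    changed = True
                    break
    return win

winning = can_enforce({"goal"}, {"s0", "s1", "s2", "goal"})
print(sorted(winning))   # ['goal', 's1']

On this toy model the agent is actually capable of delivery from s1 but not from s0, since from s0 the environment may divert the delivery action into the dead end s2. A reputation score averaged over past runs could not distinguish these two situations, which is the contrast the abstract draws.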
Type: chapter
Stage: published
Date: 2024-06-05