Formal Specification of Actual Trust in Multiagent Systems

by Michael Akintunde, Vahid Yazdanpanah, Asieh Salehi Fathabadi, Corina Cirstea, Mehdi Dastani, Luc Moreau

Published in Frontiers in Artificial Intelligence and Applications by IOS Press, 2024.

Abstract

This research focuses on establishing trust in multiagent systems in which human and AI agents collaborate. We propose a computational notion of actual trust, emphasising the modelling of an agent's capacity to deliver tasks. Unlike reputation-based trust or approaches that statistically analyse past behaviour, our approach considers the specific setting in which agents interact. We integrate non-deterministic semantics to capture the inherent uncertainties in the behaviour of a multiagent system, but stress the importance of verifying an agent's actual capabilities. We provide a conceptual analysis of the characteristics of actual trust and highlight relevant trust verification tools. By advancing the understanding and verification of trust in collaborative systems, this research contributes to responsible and trustworthy human-AI interactions, enhancing reliability in various domains.
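The paper itself is a formal, conceptual contribution rather than an implementation, but the distinction the abstract draws can be sketched concretely. Below is a minimal, hypothetical Python illustration (the names `Agent`, `Task`, `reputation_trust`, and `actual_trust` are assumptions for this sketch, not the paper's formalism): a reputation-style score aggregates past outcomes statistically, while a capacity-style check asks whether the agent can actually deliver the task in the specific setting at hand.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Task:
    required: frozenset  # capabilities this task demands (illustrative)

@dataclass
class Agent:
    name: str
    history: list = field(default_factory=list)  # 1 = delivered, 0 = failed
    capabilities: frozenset = frozenset()        # what the agent can do

def reputation_trust(agent: Agent) -> float:
    """Reputation-style trust: a statistic over past behaviour,
    blind to the specific setting of the current interaction."""
    return sum(agent.history) / len(agent.history) if agent.history else 0.0

def actual_trust(agent: Agent, task: Task, setting: frozenset) -> bool:
    """Capacity-style trust: can the agent actually deliver *this* task,
    given the capabilities the current setting makes usable?"""
    usable = agent.capabilities & setting
    return task.required <= usable

# An agent with a strong track record that nonetheless lacks a
# capability this particular task requires.
alice = Agent("alice", history=[1, 1, 1, 1, 0],
              capabilities=frozenset({"plan", "negotiate"}))
task = Task(required=frozenset({"plan", "verify"}))
setting = frozenset({"plan", "negotiate", "verify"})

print(reputation_trust(alice))             # 0.8: statistically trustworthy
print(actual_trust(alice, task, setting))  # False: cannot deliver this task
```

The point of the contrast, per the abstract, is that a high statistical score over past behaviour says nothing about whether the agent can deliver the current task in the current setting; the capacity check, however crudely modelled here, is what the paper's notion of actual trust makes verifiable.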

Archived Content

There are no accessible files associated with this release. You can check other releases of this work for an accessible version.

Not Preserved

Type: chapter
Stage: published
Date: 2024-06-05
Container Metadata
Not in DOAJ
Not in Keepers Registry
ISSN-L: 0922-6389
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: fb9fc5bb-110c-4107-aab9-fd48944f50c3