Modelling trust in artificial agents, a first step toward the analysis of e-trust
Mariarosaria Taddeo
Information Ethics Group, University of Oxford, UK,
mariarosariataddeo@gmail.com

Abstract
This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve.
The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. The analysis first focuses on an agent's trustworthiness, which is presented as the necessary requirement for e-trust to occur. Then, a new definition of e-trust as a second-order property of first-order relations is presented. It is shown that this second-order property has the effect of minimising an agent's effort and commitment in the achievement of a given goal. On this basis, a method is provided for the objective assessment of the levels of e-trust occurring among the artificial agents of a distributed artificial system.
Keywords: artificial agent, artificial distributed system, e-trust, trust, trustworthiness.
1. Introduction
Trust is a widespread phenomenon, affecting many of our daily practices. It is the key to effective communication, interaction and cooperation in any kind of distributed system, including our society (Lagenspetz 1992). As Luhmann (1979) suggests, "a complete absence of trust would prevent even getting up in the morning" (p. 4). It is because we trust the other parts of the system to work properly that we can delegate some of our tasks and focus only on the activities that we prefer. As in Plato's Republic, for example, one can trust other agents to defend the city and dedicate one's time to studying philosophy.
Although trust is largely recognised as an important issue in many fields of research, we still lack a satisfactory analysis of it. Moreover, in recent years, with the emergence of trust in digital contexts, known as e-trust, new theoretical problems have arisen.

The debate focuses on two main issues: the definition and the management of trust and e-trust. While the definition of trust and e-trust is left to sociologists and philosophers, the management of their occurrences is a topic of research for ICT.
In digital contexts, such as eBay or an artificial distributed system, all interactions are mediated by digital devices and there is no room for direct, physical contact. In the digital sphere, moral and social pressures are also perceived differently than in real environments. These differences from the traditional contexts in which trust occurs give rise to a major problem, namely "whether trust is affected by environmental features in such a way that it can only occur in non-digital environments, or it is mainly affected by features of the agents and their abilities, so that trust is viable even in digital contexts" (Taddeo 2009, p. 29). Those who argue against the case for e-trust, see for example Nissenbaum (2001), suggest that the absence of physical interactions and of moral and social pressure in digital contexts constitutes an obstacle to the occurrence of trust, and that for this reason e-trust should not be considered an occurrence of trust in digital contexts but a different phenomenon. Despite their plausibility, the objections against e-trust can be rebutted (Taddeo 2009). Several accounts of e-trust consider this phenomenon an occurrence of trust in digital environments; see for example (Weckert 2005), (Vries 2006), (Seamons, Winslett et al. 2003), and (Papadopoulou 2007).
Trust and e-trust have been defined in different ways: as a probabilistic evaluation of trustworthiness in (Gambetta 1998) and (Castelfranchi and Falcone 1998), as a relationship based on ethical norms (Tuomela and Hofmann 2003), and as an agent's attitude in (Weckert 2005). Unfortunately, all these analyses focus only on the trustor's beliefs, and so provide a partial explanation of the phenomenon, leaving many questions unanswered. These include, for example, what the effects of trust and e-trust on the involved agents' behaviours are, for what reason an agent decides to trust, and the role of trust and e-trust in the development of social systems.
Research on the management of trust and e-trust seeks to identify the parameters necessary for their emergence and to define a method for the objective assessment of the level of trust and e-trust occurring between two or more agents. This research often rests uncritically on theoretical analyses of trust and e-trust, thus inheriting their limits. It would therefore benefit from an analysis of trust and e-trust able to overcome these limits and to answer the questions described above.
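To make the idea of an objective assessment concrete before the formal treatment in section 4.1, the following is a minimal sketch, in Python, of one way a trust-management system might estimate an agent's trustworthiness from a record of its past performances. The Chronicle class, the ratio-based estimator and the 0.8 threshold are illustrative assumptions of this sketch, not the paper's definitions.

```python
from dataclasses import dataclass


@dataclass
class Chronicle:
    """Illustrative record of an agent's past performances.

    Trustworthiness is estimated here as the fraction of delegated
    actions the agent completed successfully (an assumption of this
    sketch, not the paper's formal definition).
    """
    successes: int = 0
    failures: int = 0

    def record(self, succeeded: bool) -> None:
        if succeeded:
            self.successes += 1
        else:
            self.failures += 1

    def trustworthiness(self) -> float:
        total = self.successes + self.failures
        if total == 0:
            return 0.5  # no evidence yet: a neutral prior (assumption)
        return self.successes / total


def decide_to_trust(chronicle: Chronicle, threshold: float = 0.8) -> bool:
    # A trustor delegates only to agents whose estimated
    # trustworthiness clears a task-specific threshold.
    return chronicle.trustworthiness() >= threshold


if __name__ == "__main__":
    c = Chronicle()
    for outcome in (True, True, True, False, True):
        c.record(outcome)
    print(c.trustworthiness())  # 0.8
    print(decide_to_trust(c))   # True
```

A success ratio is only one of many possible estimators; the point of the sketch is merely that trustworthiness can be read off an agent's record of actions and compared objectively across agents.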

This paper seeks to contribute to the debate by presenting a new analysis of e-trust. This analysis rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. E-trust is considered the result of a rational choice that is expected to be convenient for the agent. This approach does not reduce the entire phenomenon of trust to a matter of pure rational choice; rather, in this paper, we shall be concerned with the specific occurrences of e-trust among artificial agents (AAs) of distributed systems. Given this scenario, it is simply more realistic to consider these AAs to be designed (or at least designable) as fully rational. One might object that trust involves more than rational choice, and that a model unable to consider anything but rational aspects cannot provide a satisfactory analysis of this phenomenon. This is correct, and I will therefore show how the provided analysis can be extended to consider non-rational factors as well.
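The rational-choice reading sketched above can be illustrated with a toy decision rule: a fully rational AA trusts, that is, delegates, exactly when the expected cost of delegation, weighted by the trustee's estimated trustworthiness, is lower than the cost of achieving the goal by itself. The cost model and the numbers below are assumptions made for illustration only; they are not part of the paper's formal analysis.

```python
def expected_cost_of_delegation(p_success: float,
                                delegation_cost: float,
                                recovery_cost: float) -> float:
    """Expected cost if the agent delegates the task.

    p_success:       estimated trustworthiness of the trustee
    delegation_cost: effort spent delegating and monitoring
    recovery_cost:   extra effort incurred if the trustee fails
    (All quantities are illustrative assumptions.)
    """
    return delegation_cost + (1.0 - p_success) * recovery_cost


def rational_agent_trusts(p_success: float,
                          delegation_cost: float,
                          recovery_cost: float,
                          own_cost: float) -> bool:
    # A fully rational AA simply picks the cheaper option for itself.
    cost_if_trusting = expected_cost_of_delegation(
        p_success, delegation_cost, recovery_cost)
    return cost_if_trusting < own_cost


# Example: delegating costs 1 unit, recovering from a failure 10,
# doing the task oneself 4. Trusting pays off once the trustee's
# estimated trustworthiness exceeds 0.7.
print(rational_agent_trusts(0.9, 1.0, 10.0, 4.0))  # True
print(rational_agent_trusts(0.5, 1.0, 10.0, 4.0))  # False
```

On this toy model, trusting minimises the trustor's effort and commitment whenever the trustee is sufficiently trustworthy, which anticipates the role the paper assigns to e-trust as a second-order property of first-order relations.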
Here is a summary of the paper. In section 2, I describe e-trust in general. In
section 3, I analyse its foundation. In section 4, I present and defend a new definition
of e-trust. In section 4.1, I describe a method for the objective assessment of the
levels of e-trust by looking at the implementation of the new definition in a
distributed artificial system. In section 5, I show how the new model can be used to
explain more complex occurrences of e-trust, such as those involving human agents
(HAs). Section 6 concludes the paper.
2. E-trust
Trusting AAs to perform actions that are usually performed by human agents (HAs) is not science fiction but a matter of daily experience. There are simple cases, such as that of refrigerators able to shop online autonomously for our food,[1] and complex ones, such as that of Chicago's video surveillance network,[2] one of the most advanced in the US. In the latter case, cameras record what happens and, by matching that information with patterns of events such as murders or terrorist attacks, seek to recognise dangerous situations. The crucial difference between this system and traditional ones is that it does not require a monitoring HA. In this case, HAs, the entire Chicago police department, trust an artificial system to discern dangerous situations from non-dangerous ones.

[1] http://www.lginternetfamily.co.uk/fridge.asp
[2] http://www.nytimes.com/2004/09/21/national/21cameras.html
In the same way, HAs trust AAs to act in the right way in military contexts. The US army used the 'Warrior' and 'Sword' robots in Iraq and Afghanistan.[3] More sophisticated machines are now used at the borders between Israel and Palestine in the so-called 'automatic kill zone'.[4] The robots are trusted to detect the presence of potential enemies and to mediate the action of the HAs. It is to be hoped that, in a not too distant future, we shall trust these robots to distinguish military enemies from civilians, and not to fire upon the latter.[5]
Even more futuristic are the cases of AAs that trust each other without the involvement of HAs. Consider, for example, unmanned aerial vehicles like the Predator RQ-1 / MQ-1 / MQ-9 Reaper. These vehicles are "long-endurance, medium-altitude unmanned aircraft system for surveillance and reconnaissance missions".[6] Surveillance imagery from synthetic aperture radar, video cameras and infrared can be distributed in real time to the front-line soldier and the operational commander, and worldwide, via satellite communication links. The system in the vehicle receives and records video signals in the ground control station and can pass them to another system, the Trojan Spirit van, for worldwide distribution or directly to operational users via a commercial global broadcast system. In this case, there are two occurrences of e-trust: that between the artificial system and HAs, and that between the artificial systems that manage the data flow. The Predator's data-collecting system trusts the broadcast systems to acquire the data and transmit them to the users without modifying or damaging them or disclosing them to the wrong users.
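The expectation at stake between the two artificial systems, namely that data are passed on unmodified, can be pictured with a short sketch: the sending system fingerprints each payload before hand-off and checks the fingerprint on arrival. The relay function and the SHA-256 check below are hypothetical illustrations, not a description of the Predator's actual software.

```python
import hashlib


def digest(data: bytes) -> str:
    # SHA-256 fingerprint of the transmitted payload.
    return hashlib.sha256(data).hexdigest()


def relay(data: bytes, tamper: bool = False) -> bytes:
    """Stand-in for the trusted broadcast system (hypothetical)."""
    return data + b"[corrupted]" if tamper else data


def transmit_and_verify(frame: bytes, tamper: bool = False) -> bool:
    """The sender trusts the relay to pass the data on unmodified;
    the receiver can check whether that trust was honoured."""
    fingerprint = digest(frame)       # computed before hand-off
    received = relay(frame, tamper)
    return digest(received) == fingerprint


print(transmit_and_verify(b"video frame 0421"))               # True
print(transmit_and_verify(b"video frame 0421", tamper=True))  # False
```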
As the previous examples illustrate, there are at least two kinds of e-trust. In the first, HAs trust (possibly a combination of) computational artefacts, digital devices or services, such as a particular website, to achieve a given goal. The users of eBay, for example, trust its rating system. In the second, only AAs are involved. This is the case of trust in distributed artificial systems.
This paper is concerned with e-trust between AAs in a distributed system. This occurrence of e-trust is simpler to describe and model than those in which HAs are involved.

[3] http://blog.wired.com/defense/2007/08/httpwwwnational.html
[4] http://blog.wired.com/defense/2007/06/for_years_and_y.html
[5] http://blog.wired.com/defense/2007/06/for_years_and_y.html
[6] http://www.airforce-technology.com/projects/predator/

References
Gambetta, D. "Can We Trust Trust?" In Trust: Making and Breaking Cooperative Relations.
Gambetta, D. (ed.). Trust: Making and Breaking Cooperative Relations.
Luhmann, N. Trust and Power.
Wooldridge, M. An Introduction to MultiAgent Systems.