Dissertation

Designing and trusting multi-agent systems for B2B applications

01 Jan 2008
TL;DR: A trust model is proposed that allows agents to evaluate the credibility of other peers in the environment; this multi-factor model applies a number of measurements when evaluating trust in another party's likely behavior.
Abstract: This thesis makes two main contributions. The first is the design and implementation of Business-to-Business (B2B) applications using multi-agent systems and computational argumentation theory. The second is trust management in such multi-agent systems based on agents' credibility.

The first contribution is a framework for modeling and deploying B2B applications, with autonomous agents exposing the individual components that implement these applications. The framework consists of three levels, namely strategic, application, and resource, with the focus here on the first two. The strategic level concerns the common vision that independent businesses define as part of their decision to form a partnership. The application level concerns the business processes that are virtually integrated as a result of this common vision. Since conflicts are bound to arise among the independent applications/agents, the framework uses a formal model based on computational argumentation theory, in the form of a persuasion protocol, to detect and resolve these conflicts. Termination, soundness, and completeness properties of this protocol are presented. The framework also supports distributed and centralized coordination strategies, and it is illustrated with an online purchasing case study followed by its implementation in Jadex, a Java-based platform for multi-agent systems.

An important issue in such open multi-agent systems is how much agents trust each other. Given the size of these systems, agents acting as service providers or customers in a B2B setting cannot avoid interacting with others that are unknown or only partially known from past experience. Because agents are self-interested, they may jeopardize mutual trust by not performing the actions they are supposed to perform. To this end, the second contribution is a trust model that allows agents to evaluate the credibility of other peers in the environment. This multi-factor model applies a number of measurements when evaluating trust in another party's likely behavior. After a period of time, the actual performance of the evaluated agent is compared against the information provided by the contributing agents. This comparison both adjusts the credibility of the contributing agents in trust evaluation and improves the system's trust evaluation by minimizing the estimation error.
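
To make the credibility-adjustment loop concrete, here is a minimal sketch in Python; the class name, the credibility-weighted aggregation, and the learning rate are illustrative assumptions, not the thesis's actual multi-factor model.

```python
class TrustEvaluator:
    """Minimal sketch of credibility-weighted trust evaluation.

    Each reporting agent's opinion about a provider is weighted by the
    reporter's credibility; after the interaction, credibilities are
    adjusted according to each reporter's estimation error.
    """

    def __init__(self, learning_rate: float = 0.1):
        self.credibility: dict[str, float] = {}   # reporter -> credibility in [0, 1]
        self.learning_rate = learning_rate

    def estimate_trust(self, reports: dict[str, float]) -> float:
        """Aggregate reported trust values, weighted by reporter credibility."""
        weights = {a: self.credibility.get(a, 0.5) for a in reports}
        total = sum(weights.values())
        if total == 0:
            return 0.5                              # no usable information
        return sum(weights[a] * reports[a] for a in reports) / total

    def update_credibility(self, reports: dict[str, float], observed: float) -> None:
        """Compare each report with the observed performance and adjust credibility."""
        for agent, reported in reports.items():
            error = abs(reported - observed)        # estimation error in [0, 1]
            old = self.credibility.get(agent, 0.5)
            self.credibility[agent] = old + self.learning_rate * ((1 - error) - old)


# Example: two reporters, one accurate and one misleading.
evaluator = TrustEvaluator()
reports = {"agent_a": 0.9, "agent_b": 0.2}
print(evaluator.estimate_trust(reports))            # ≈ 0.55 with equal initial credibility
evaluator.update_credibility(reports, observed=0.85)
print(evaluator.estimate_trust(reports))             # shifts toward agent_a's estimate
```

The feedback step is the point: reporters whose estimates match the observed performance gain credibility, so their future reports carry more weight and the estimation error shrinks over time.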


Citations
Journal Article
Henry Prakken
TL;DR: In this paper, the author investigates to what extent protocols for dynamic disputes, i.e., disputes in which the information base can vary at different stages, can be justified in terms of logics for defeasible argumentation.
Abstract: This article investigates to what extent protocols for dynamic disputes, i.e., disputes in which the information base can vary at different stages, can be justified in terms of logics for defeasible argumentation. First a general framework is formulated for dialectical proof theories for such logics. Then this framework is adapted to serve as a framework for protocols for dynamic disputes, after which soundness and fairness properties are formulated for such protocols relative to dialectical proof theories. It then turns out that certain types of protocols that are perfectly fine with a static information base are not sound or fair in a dynamic setting. Finally, a natural dynamic protocol is defined for which soundness and fairness can be established.
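
As background for the soundness and fairness results, the sketch below computes the grounded extension of an abstract argumentation framework, a typical sceptical semantics against which dispute protocols are proven sound; it is an illustrative aid, not Prakken's protocol, and the tiny example framework is made up.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework.

    `arguments` is a set of argument names; `attacks` a set of
    (attacker, target) pairs.  An argument is defended by a set S if
    every attacker of it is itself attacked by some member of S.
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension) for b in attackers[a])
        }
        if defended == extension:
            return extension
        extension = defended


# a attacks b, b attacks c: a and c are sceptically justified, b is not.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))   # ['a', 'c']
```

Soundness and fairness then relate winning disputes to membership in such an extension; the article's point is that this correspondence can break once the information base may change during the dispute.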

10 citations

Dissertation
01 Jan 2009
TL;DR: The design and specification of a trust and contextual-information aggregation model, intended as a reliable alternative to existing trust aggregation models, which sets itself apart by including rules that emulate human common sense about trust building and mechanisms for obtaining a recommendation grade indicating how likely a potential partner is to perform as desired in fulfilling a given contract.
Abstract: The study of trust aggregation mechanisms to assist the selection of companies in Business-to-Business systems is becoming increasingly important to researchers in the areas of Multi-Agent Systems and Electronic Business, because such mechanisms have been shown to increase the performance and reliability of existing electronic business communities by endowing them with human-like social defence mechanisms. The study we present in this document concerns the design and specification of a trust and contextual-information aggregation model, intended to be a reliable alternative to existing trust aggregation models. It sets itself apart from those by including rules that emulate human common sense regarding trust building, and mechanisms to obtain a recommendation grade concerning how likely a potential partner is to perform as we desire in the fulfilment of a given contract.

This dissertation has three main parts. In the first, we present the trust and contextual-information model, showing how we use an S-shaped curve to aggregate the past contract results of a given entity. From there we can retrieve a degree of trust which represents, in an abstract and simplified way, how likely a given entity is to fulfil the next contract, given how well it fulfilled the previous ones. The aggregation of contextual information can act as a disambiguation tool, because the information about past contracts is treated according to the context in which they were concluded, providing a means to assess whether a given company is the best suited to do business with, given the specificities of the contract, and independently of how much trust we place in it.

In the second part we specify the application that we developed to simulate the process of company selection. This application implements the models that we propose together with a third one, developed by another research group, so that we can compare the performance and utility of our model. We simulate a fabric market in which a group of buyers needs to buy certain quantities from sellers. In this process, each buyer needs the degree of trust and the degree of recommendation for each candidate seller, deciding which one(s) to buy from based on that information.

In the third part we present and analyze the results obtained from the simulations. We developed three kinds of validation tests for the models: how fast they identify the companies violating fewer contracts, how well they react to an abrupt change in a company's behaviour, and how much the use of a recommendation grade affects the process of selecting a business partner. The results show that our system for measuring trust and contextual information is a reliable trust aggregation option: when compared to the other model, it selects the best business partner more often, which understandably results in fewer contracts violated by the selected seller and higher business utility for the buyer.
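
A possible reading of the S-shaped aggregation described in the first part is sketched below; the recency weighting, steepness, and midpoint parameters are assumptions for illustration, not the dissertation's exact formulation.

```python
import math

def trust_from_contracts(outcomes, steepness=6.0, midpoint=0.5, decay=0.9):
    """Map a history of contract outcomes (1 = fulfilled, 0 = violated,
    most recent last) to a trust degree in [0, 1] through an S-shaped curve.

    Recent contracts are weighted more heavily via exponential decay; the
    logistic curve makes trust grow slowly at first, quickly around the
    midpoint, and saturate near 1.
    """
    if not outcomes:
        return 0.5                                    # no evidence: neutral trust
    weights = [decay ** (len(outcomes) - 1 - i) for i in range(len(outcomes))]
    fulfilment = sum(w * o for w, o in zip(weights, outcomes)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-steepness * (fulfilment - midpoint)))


print(trust_from_contracts([1, 1, 1, 0, 1]))   # mostly fulfilled: high trust (≈ 0.84)
print(trust_from_contracts([0, 0, 1, 0, 0]))   # mostly violated: low trust (≈ 0.14)
```

Contextual information, which the dissertation treats separately as a disambiguation tool, is left out of the sketch.
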
References
Journal Article
TL;DR: Two dialectic procedures for the sceptical ideal semantics for argumentation, defined in terms of dispute trees, are presented: one for abstract argumentation frameworks and one that is a variant of the procedure of [P.M. Dung, R.A. Kowalski, F. Toni, Dialectic proof procedures for assumption-based, admissible argumentation, Artificial Intelligence 170 (2006) 114-159].

430 citations

Journal Article
TL;DR: The focus of this review will be on regulating the interaction between agents rather than on the design and behaviour of individual agents within a dialogue, taking a game-theoretic view on dialogue systems.
Abstract: This article reviews formal systems that regulate persuasion dialogues. In such dialogues two or more participants aim to resolve a difference of opinion, each trying to persuade the other participants to adopt their point of view. Systems for persuasion dialogue have found application in various fields of computer science, such as non-monotonic logic, artificial intelligence and law, multi-agent systems, intelligent tutoring and computer-supported collaborative argumentation. Taking a game-theoretic view on dialogue systems, this review proposes a formal specification of the main elements of dialogue systems for persuasion and then uses it to critically review some of the main formal systems for persuasion. The focus of this review will be on regulating the interaction between agents rather than on the design and behaviour of individual agents within a dialogue.
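
In the spirit of the review's game-theoretic specification, a dialogue system can be reduced to a set of speech acts, a legality function over move sequences, and commitment stores. The sketch below is a hypothetical minimal persuasion protocol; its locution names and rules are assumptions, not any of the reviewed systems.

```python
from dataclasses import dataclass, field

@dataclass
class Move:
    speaker: str          # "proponent" or "opponent"
    act: str              # "claim", "why", "argue", "concede", "retract"
    content: str

@dataclass
class Dialogue:
    topic: str
    moves: list = field(default_factory=list)
    commitments: dict = field(default_factory=lambda: {"proponent": set(), "opponent": set()})

    def legal(self, move: Move) -> bool:
        """Turn-taking plus a few relevance rules."""
        if self.moves and self.moves[-1].speaker == move.speaker:
            return False                               # strict alternation
        if not self.moves:
            return move.act == "claim" and move.content == self.topic
        last = self.moves[-1]
        if move.act == "why":
            return last.act in ("claim", "argue")      # challenge the last assertion
        if move.act == "argue":
            return last.act == "why"                   # ground the challenged claim
        return move.act in ("concede", "retract")

    def play(self, move: Move) -> None:
        if not self.legal(move):
            raise ValueError(f"illegal move: {move}")
        self.moves.append(move)
        if move.act in ("claim", "argue", "concede"):
            self.commitments[move.speaker].add(move.content)
        elif move.act == "retract":
            self.commitments[move.speaker].discard(move.content)


d = Dialogue(topic="delivery was late")
d.play(Move("proponent", "claim", "delivery was late"))
d.play(Move("opponent", "why", "delivery was late"))
d.play(Move("proponent", "argue", "tracking log shows arrival on day 12"))
print(d.commitments["proponent"])
```

A winning condition and termination rules would complete the specification; the sketch only covers locutions, turn-taking, and commitment updates.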

318 citations

Journal Article
TL;DR: A logic-based formalism for modeling dialogues between intelligent and autonomous software agents, which enables the representation of complex dialogues as sequences of moves in a combination of dialogue games and allows dialogues to be embedded inside one another.
Abstract: We present a logic-based formalism for modeling dialogues between intelligent and autonomous software agents, building on a theory of abstract dialogue games which we present. The formalism enables the representation of complex dialogues as sequences of moves in a combination of dialogue games, and allows dialogues to be embedded inside one another. The formalism is computational, and its modular nature enables different types of dialogues to be represented.
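
The embedding of dialogues inside one another can be pictured with a small data structure in which a dialogue item is either an atomic move or a whole sub-dialogue; the sketch below is an illustrative reading of that idea, not the paper's formalism, and the game names are made up.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Move:
    speaker: str
    locution: str

@dataclass
class Dialogue:
    game: str                                   # e.g. "persuasion", "negotiation"
    items: List[Union["Move", "Dialogue"]] = field(default_factory=list)

    def flatten(self) -> List[Move]:
        """All moves in order, regardless of nesting depth."""
        out: List[Move] = []
        for item in self.items:
            out.extend(item.flatten() if isinstance(item, Dialogue) else [item])
        return out


# A negotiation that embeds a persuasion sub-dialogue about one disputed claim.
inner = Dialogue("persuasion", [Move("seller", "claim: price is fair"),
                                Move("buyer", "why: price is fair")])
outer = Dialogue("negotiation", [Move("buyer", "offer: 90"), inner, Move("seller", "accept: 90")])
print([m.locution for m in outer.flatten()])
```

A full formalism would also specify when a sub-dialogue may legally open and close; the sketch only shows the nesting itself.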

285 citations

Proceedings Article
06 Jan 2007
TL;DR: Despite a more subtle definition than previous approaches, this paper establishes a bijection between evidence and trust spaces, enabling robust combination of trust reports, and provides an efficient algorithm for computing this bijection.
Abstract: Trust should be substantially based on evidence. Further, a key challenge for multiagent systems is how to determine trust based on reports from multiple sources, who might themselves be trusted to varying degrees. Hence an ability to combine evidence-based trust reports in a manner that discounts for imperfect trust in the reporting agents is crucial for multiagent systems. This paper understands trust in terms of belief and certainty: A's trust in B is reflected in the strength of A's belief that B is trustworthy. This paper formulates certainty in terms of evidence based on a statistical measure defined over a probability distribution of the probability of positive outcomes. This novel definition supports important mathematical properties, including (1) certainty increases as conflict increases provided the amount of evidence is unchanged, and (2) certainty increases as the amount of evidence increases provided conflict is unchanged. Moreover, despite a more subtle definition than previous approaches, this paper (3) establishes a bijection between evidence and trust spaces, enabling robust combination of trust reports and (4) provides an efficient algorithm for computing this bijection.
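
One way to realise a certainty measure defined over the distribution of the probability of a positive outcome is sketched below: the probability of success given r positive and s negative outcomes is modelled with a Beta(r+1, s+1) density, and certainty is taken as half its L1 distance from the uniform density. The exact normalisation may differ from the paper's definition, so treat this as an assumption-laden illustration.

```python
import math

def certainty(r: int, s: int, steps: int = 100_000) -> float:
    """Certainty from r positive and s negative outcomes.

    The probability of a positive outcome is modelled by a Beta(r+1, s+1)
    density f; certainty is taken here as half the L1 distance between f
    and the uniform density, so no evidence gives 0 and sharply peaked
    evidence approaches 1.
    """
    norm = math.gamma(r + 1) * math.gamma(s + 1) / math.gamma(r + s + 2)
    def f(p: float) -> float:
        return (p ** r) * ((1.0 - p) ** s) / norm
    total = 0.0
    for i in range(steps):          # midpoint rule on (0, 1)
        p = (i + 0.5) / steps
        total += abs(f(p) - 1.0) / steps
    return 0.5 * total


print(certainty(0, 0))    # no evidence: 0.0
print(certainty(4, 1))    # some evidence: moderate certainty
print(certainty(40, 10))  # same ratio, more evidence: higher certainty
```

Running it illustrates the second property from the abstract: with the ratio of positive to negative outcomes held fixed, more evidence yields higher certainty.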

271 citations

Book Chapter
19 Aug 1995
TL;DR: This chapter discusses the desirable features of languages and protocols for communication among intelligent information agents and KQML is described and evaluated as an agent communication language relative to the desiderata.
Abstract: This chapter discusses the desirable features of languages and protocols for communication among intelligent information agents. These desiderata are divided into seven categories: form, content, semantics, implementation, networking, environment, and reliability. The Knowledge Query and Manipulation Language (KQML) is a new language and protocol for exchanging information and knowledge. This work is part of a larger effort, the ARPA Knowledge Sharing Effort, which is aimed at developing techniques and methodologies for building large-scale knowledge bases that are sharable and reusable. KQML is both a message format and a message-handling protocol to support run-time knowledge sharing among agents. KQML is described and evaluated as an agent communication language relative to the desiderata.
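
A KQML message is a performative with keyword parameters, conventionally written as an s-expression; the helper below sketches that format. The sender, receiver, ontology, and KIF content in the example are made-up values.

```python
def kqml_message(performative: str, **params: str) -> str:
    """Render a KQML-style performative as an s-expression string."""
    fields = " ".join(f":{key.replace('_', '-')} {value}" for key, value in params.items())
    return f"({performative} {fields})"


# An 'ask-one' query in the usual KQML style.
msg = kqml_message(
    "ask-one",
    sender="buyer-agent",
    receiver="catalogue-agent",
    language="KIF",
    ontology="b2b-catalogue",
    reply_with="q1",
    content='"(price widget-42 ?p)"',
)
print(msg)
# (ask-one :sender buyer-agent :receiver catalogue-agent :language KIF :ontology b2b-catalogue :reply-with q1 :content "(price widget-42 ?p)")
```

A matching message-handling protocol would parse such strings back into a performative and its parameters on the receiving side.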

259 citations