Journal Article

A New Quantitative Trust Model for Negotiating Agents using Argumentation

TL;DR: A new quantitative trust model for argumentation-based negotiating agents is proposed to provide a secure environment for agent negotiation within multi-agent systems.
Abstract: In this paper, we propose a new quantitative trust model for argumentation-based negotiating agents. The purpose of such a model is to provide a secure environment for agent negotiation within multi-agent systems. The problem of securing agent negotiation in a distributed setting is core to a number of applications, particularly emerging semantic grid computing applications such as e-business. Current approaches to trust fail to adequately address the challenges posed by these emerging applications: they either rely on centralized mechanisms such as digital certificates, and are thus particularly vulnerable to attacks, or are not suitable for argumentation-based negotiation, in which agents use arguments to reason about trust.
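
The abstract does not spell out the model's actual formulas, so the following is only a minimal, generic sketch of what a quantitative, probability-style trust estimate over past interaction outcomes could look like; the function name, the smoothing constants and the threshold idea are assumptions for illustration, not the authors' method.

    # Illustrative only: a generic beta-style point estimate of the probability that a
    # negotiation partner behaves as agreed, built from counted interaction outcomes.
    # The names and constants below are assumptions, not the paper's model.

    def trust_estimate(successes: int, failures: int,
                       prior: float = 0.5, prior_weight: float = 2.0) -> float:
        """Smoothed success rate in [0, 1]; higher means more trustworthy."""
        return (successes + prior_weight * prior) / (successes + failures + prior_weight)

    # e.g. 8 kept commitments out of 10 interactions -> trust of 0.75 under the default prior;
    # an agent might only accept offers or arguments from partners above some threshold.
    print(trust_estimate(successes=8, failures=2))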


Citations
Journal ArticleDOI
TL;DR: A wide-ranging look at basic and advanced biostatistical concepts and methods in a format calibrated to individual interests and levels of proficiency can be found in this paper, where the authors examine the design of medical studies, descriptive statistics, and introductory ideas of probability theory and statistical inference.
Abstract: This versatile reference provides a wide-ranging look at basic and advanced biostatistical concepts and methods in a format calibrated to individual interests and levels of proficiency. Written with an eye toward the use of computer applications, the book examines the design of medical studies, descriptive statistics, and introductory ideas of probability theory and statistical inference; explores more advanced statistical methods; and illustrates important current uses of biostatistics.

355 citations

Journal ArticleDOI
TL;DR: A comprehensive trust framework is proposed as a multi-factor model that applies a number of measurements to evaluate the trust of interacting agents; its novelty lies in after-interaction investigation and performance analysis, which demonstrate the applicability of the model in distributed multi-agent systems.
Abstract: In open multi-agent systems, agents engage in interactions to share and exchange information. Because these agents are self-interested, they may jeopardize mutual trust by not performing actions as they are expected to. To this end, different models of trust have been proposed to assess the credibility of peers in the environment. These frameworks fail to consider and analyze the multiple factors impacting trust. In this paper, we overcome this limitation by proposing a comprehensive trust framework as a multi-factor model, which applies a number of measurements to evaluate the trust of interacting agents. First, this framework considers direct interactions among agents; this part of the framework is called online trust estimation. Furthermore, after a variable interval of time, the actual performance of the evaluated agent is compared against the information provided by some other agents (consulting agents). This off-line comparison both adjusts the credibility of the agents contributing to the trust evaluation and improves the system's trust evaluation by minimizing the estimation error. What specifically distinguishes this work from previous proposals in the same domain is its after-interaction investigation and performance analysis, which demonstrate the applicability of the proposed model in distributed multi-agent systems. In this paper, the agent structure and interaction mechanism of the proposed framework are described. A theoretical analysis of trust assessment and the system implementation, along with simulations, are also discussed. Finally, a comparison of our trust framework with other well-known frameworks from the literature is provided.
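
As a rough illustration of the two-stage idea summarized above (online estimation from direct experience and witness reports, followed by off-line adjustment of the consulting agents' credibility against observed performance), here is a minimal sketch; it is not the authors' algorithm, and every name, weight and update rule in it is an assumption.

    # Hypothetical two-stage sketch: online trust from direct experience plus
    # credibility-weighted witness reports, then an off-line credibility update
    # once the evaluated agent's actual performance has been observed.

    def online_trust(direct: float, reports: dict[str, float],
                     credibility: dict[str, float], alpha: float = 0.6) -> float:
        """Blend direct experience with credibility-weighted reports from consulting agents."""
        total = sum(credibility[w] for w in reports) or 1.0
        witness = sum(credibility[w] * r for w, r in reports.items()) / total
        return alpha * direct + (1 - alpha) * witness

    def offline_adjust(credibility: dict[str, float], reports: dict[str, float],
                       observed: float, tolerance: float = 0.2, lr: float = 0.1) -> None:
        """Raise the credibility of consultants whose report was close to the observed
        performance, lower it otherwise (the estimation-error feedback loop)."""
        for w, r in reports.items():
            error = abs(r - observed)
            credibility[w] = max(0.0, min(1.0, credibility[w] + lr * (tolerance - error)))

    cred = {"w1": 0.8, "w2": 0.5}
    reports = {"w1": 0.9, "w2": 0.4}
    t = online_trust(direct=0.7, reports=reports, credibility=cred)   # ~0.70
    offline_adjust(cred, reports, observed=0.85)                      # w1 gains, w2 loses credibility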

70 citations



Journal ArticleDOI
TL;DR: A novel fuzzy, argumentation-based trust model is proposed that is integrated within the practical reasoning of agents in multi-agent recommender systems, allowing an agent to make trustworthy decisions and to reason about them as well.
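
The TL;DR gives no formal details, so the sketch below only illustrates the "fuzzy" ingredient under assumed definitions: a numeric trust score is mapped to overlapping linguistic labels that a recommender agent could then reason over. The membership functions and labels are invented for this example.

    # Toy fuzzification of a numeric trust score into linguistic labels; the triangular
    # membership functions and the label set are invented for this illustration.

    def triangular(x: float, a: float, b: float, c: float) -> float:
        """Triangular membership function rising from a to a peak at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzify_trust(score: float) -> dict[str, float]:
        return {
            "low": triangular(score, -0.01, 0.0, 0.5),
            "medium": triangular(score, 0.2, 0.5, 0.8),
            "high": triangular(score, 0.5, 1.0, 1.01),
        }

    print(fuzzify_trust(0.65))   # partly "medium" (0.5), partly "high" (0.3)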

52 citations


Cites background from "A New Quantitative Trust Model for ..."

  • ...In recent years, several models of trust have been developed [5,6,12,38,42]....

    [...]

  • ...The trust model in [5] is meant to handle the interactions only between negotiating agents and unlike our work it does not handle the situations when the two parties disagree over an issue....

    [...]

  • ...agents can communicate with each other and assist each other in achieving larger and more complex tasks [4,5,15,29,45]....

    [...]

01 Jan 2015

34 citations


Cites background from "A New Quantitative Trust Model for ..."

  • ...In multi-agent systems, the authors in [12] presented a quantitative and probabilistic-based trust model for argumentation-based negotiating agents....

    [...]

Journal ArticleDOI
TL;DR: A new classification of arguments based on their certainty of being accepted by the addressee is introduced, and a new set of uncertainty measures is defined for negotiation dialogue games from an external agent's point of view.
Abstract: Nowadays, multiagent systems have become a widely used technology in everyday life, and many authors have adopted the view of communication or interaction between agents as a joint activity regulated by means of dialogue games. Dialogue games are a set of communication rules that agents can combine in their complex interactions. In these games, uncertainty is an important problem that each agent faces when making decisions, especially in the absence of enough information. This paper focuses on the uncertainty in a particular type of dialogue game, namely argumentation-based negotiation. There exist several proposals on this type of dialogue game in the literature, and most of them are concerned with proposing protocols to show how agents can communicate with each other and how arguments and offers can be generated, evaluated and exchanged. Nevertheless, none of them directly targets the agents' uncertainty about the exchanged arguments and how this uncertainty could be measured at each dialogue step to help those agents make better decisions. The aim of this paper is to tackle this problem by defining a new set of uncertainty measures for negotiation dialogue games from an external agent's point of view. In particular, we introduce two types of uncertainty: Type I and Type II. Type I is about the uncertainty index of playing the right move. For this, we use Shannon entropy to measure: (i) the agent's uncertainty index that it is selecting the right move at each dialogue step; and (ii) the participating agents' uncertainty index about the whole dialogue. This is done in two different ways: the first is by taking the average of the uncertainty indices of all moves, and the second is by determining all possible dialogues and applying the general formula of Shannon entropy. Type II is about the agent's degree of uncertainty that the move will be accepted by the addressee. In this context, we introduce a new classification for the arguments based on their certainty of being accepted by the addressee.
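
A worked numeric example of the Type I measure described above, under the assumption that an agent assigns a probability distribution over its legal moves at a dialogue step; the move names and probabilities are invented, and the dialogue-level figure simply averages the per-step entropies, which is one of the two aggregation routes mentioned in the abstract.

    # Shannon entropy over the probabilities an agent assigns to its legal moves at one
    # dialogue step; move names and probabilities are invented for this example.

    import math

    def shannon_entropy(probs: list[float]) -> float:
        """H(p) = -sum(p_i * log2(p_i)), in bits; zero-probability moves contribute nothing."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    step = {"Offer": 0.5, "Argue": 0.25, "Accept": 0.125, "Withdraw": 0.125}
    print(shannon_entropy(list(step.values())))      # 1.75 bits of uncertainty at this step

    # One dialogue-level index: the average of the per-step uncertainty indices.
    per_step = [1.75, 1.0, 0.0]
    print(sum(per_step) / len(per_step))             # ~0.92 bits over the whole dialogue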

25 citations


Cites background from "A New Quantitative Trust Model for ..."

  • ...It is known from Bentahar et al. (2007) that, given a Horn knowledge base C, a subset H ⊆ C, and a formula h, checking whether (H, h) is an argument is polynomial....

    [...]
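
The quoted complexity claim rests on the fact that entailment from a Horn knowledge base is decidable in polynomial time. The sketch below illustrates only that entailment check (forward chaining over definite Horn clauses, encoded as body/head pairs); it is not the cited paper's procedure, and a full argument check would additionally require consistency and minimality of H.

    # Forward chaining over definite Horn clauses, encoded as (body, head) pairs;
    # checks only whether H together with the given facts entails h. Names are invented.

    def horn_entails(rules: list[tuple[frozenset[str], str]],
                     facts: set[str], goal: str) -> bool:
        derived = set(facts)
        changed = True
        while changed:                               # at most |rules| useful passes: polynomial
            changed = False
            for body, head in rules:
                if head not in derived and body <= derived:
                    derived.add(head)
                    changed = True
        return goal in derived

    H = [(frozenset({"offer_made", "price_fair"}), "accept_offer")]
    print(horn_entails(H, facts={"offer_made", "price_fair"}, goal="accept_offer"))   # True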

References
Book
01 Jan 1991
TL;DR: The authors examine the role of entropy, inequalities, and randomness in the design and construction of codes.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 
11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cramér-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov Complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.

45,034 citations

Journal ArticleDOI
04 Jun 1998-Nature
TL;DR: Simple models of networks that can be tuned through the middle ground between regular and random connectivity are explored: regular networks 'rewired' to introduce increasing amounts of disorder. These systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs.
Abstract: Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.
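
The rewiring experiment described above is easy to reproduce numerically; the sketch below uses the networkx implementation of the Watts-Strogatz model, with arbitrary parameter values, to show clustering staying high while the characteristic path length collapses once a few shortcuts appear.

    # Reproduces the qualitative effect with networkx's Watts-Strogatz generator;
    # n, k and the probed rewiring probabilities p are arbitrary choices.

    import networkx as nx

    for p in (0.0, 0.01, 1.0):                       # regular ring, "small world", random
        G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=p, seed=42)
        C = nx.average_clustering(G)                 # stays high while p is small
        L = nx.average_shortest_path_length(G)       # drops sharply once a few shortcuts exist
        print(f"p={p:5.2f}  clustering={C:.3f}  path length={L:.2f}")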

39,297 citations

Journal ArticleDOI
TL;DR: A new graphical display is proposed for partitioning techniques, where each cluster is represented by a so-called silhouette, which is based on the comparison of its tightness and separation, and provides an evaluation of clustering validity.
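
For concreteness, the silhouette of a point i compares its tightness a(i), the mean distance to its own cluster, with its separation b(i), the smallest mean distance to another cluster: s(i) = (b(i) - a(i)) / max(a(i), b(i)), averaged over all points to score a partition. The small sketch below uses scikit-learn; the toy data and the choice of two clusters are arbitrary.

    # Two well-separated toy clusters score close to 1; overlapping clusters would
    # drive the average silhouette toward 0 or below.

    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    X = [[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],         # one tight group
         [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]]         # another tight group, far away
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(silhouette_score(X, labels))               # close to 1.0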

14,144 citations

Journal ArticleDOI
TL;DR: This paper describes a mechanism for defining ontologies that are portable over representation systems, basing Ontolingua itself on an ontology of domain-independent, representational idioms.

12,962 citations

Book
01 Jan 1983

12,059 citations