
Showing papers by "Giovanni Pezzulo published in 2004"


Proceedings ArticleDOI
19 Jul 2004
TL;DR: This work uses a Contract Net protocol for comparing various strategies for trusting other agents and introduces three classes of trustiers: a random trustier, a statistical trustier, and a cognitive trustier.
Abstract: We use a Contract Net protocol for comparing various strategies for trusting other agents. We introduce three classes of trustiers: a random trustier, a statistical trustier, and a cognitive trustier.
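The paper gives no code for these strategies; the Python sketch below only illustrates, under simplified assumptions, how the three classes of trustiers could differ in choosing a contractor. The class names, the `choose_contractor` interface, and the competence/willingness model are illustrative, not taken from the paper.

```python
import random
from collections import defaultdict

class RandomTrustier:
    """Delegates the task to a randomly chosen bidder."""
    def choose_contractor(self, bidders):
        return random.choice(bidders)

class StatisticalTrustier:
    """Delegates to the bidder with the best observed success rate."""
    def __init__(self):
        self.attempts = defaultdict(int)
        self.successes = defaultdict(int)

    def record(self, bidder, succeeded):
        # Update the running statistics after each delegated task.
        self.attempts[bidder] += 1
        self.successes[bidder] += int(succeeded)

    def choose_contractor(self, bidders):
        def success_rate(b):
            n = self.attempts[b]
            return self.successes[b] / n if n else 0.5  # neutral prior for unknown bidders
        return max(bidders, key=success_rate)

class CognitiveTrustier:
    """Delegates using an explicit model of each bidder's competence and willingness."""
    def __init__(self, competence, willingness):
        self.competence = competence    # bidder -> estimated ability to do the task
        self.willingness = willingness  # bidder -> estimated reliability/intention

    def choose_contractor(self, bidders):
        return max(bidders, key=lambda b: self.competence[b] * self.willingness[b])
```

The random trustier ignores evidence, the statistical trustier relies only on past outcomes, and the cognitive trustier combines separate estimates of ability and intention; this mirrors the comparison set up in the abstract, though the actual decision rules in the paper may differ.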

28 citations


01 Jan 2004
TL;DR: In this article, Pezzulo et al. proposed a cognitive approach for reasoning and deciding in uncertain scenarios with possibly infinite sources of information (open world). This involves representing ignorance, uncertainty and contradiction; they present and analyze those concepts, integrating them in the notion of lack of confidence, or perplexity.
Abstract: How do I Know how much I don’t Know? A cognitive approach about Uncertainty and Ignorance. Giovanni Pezzulo (pezzulo@ip.rm.cnr.it), Istituto di Scienze e Tecnologie della Cognizione - CNR, viale Marx 15 – 00137 Roma, Italy, and Universita degli Studi di Roma “La Sapienza” – piazzale Aldo Moro, 9 – 00185 Roma, Italy; Emiliano Lorini (e.lorini@istc.cnr.it), Universita degli Studi di Siena – via Banchi di Sotto, 5 – 53100 Siena, Italy; Gianguglielmo Calvi (calvi@noze.it), Istituto di Scienze e Tecnologie della Cognizione - CNR, viale Marx 15 – 00137 Roma, Italy.

We propose a general framework for reasoning and deciding in uncertain scenarios with possibly infinite sources of information (open world). This involves representing ignorance, uncertainty and contradiction; we present and analyze those concepts, integrating them in the notion of lack of confidence, or perplexity. We introduce and quantify the strength of the beliefs of an agent and investigate how he can perform explicit epistemic actions in order to compensate for lacks of information. Next we introduce a simple distributed game (RBG) and use it as a testbed for comparing the performance of agents using the (classical) “expected utility maximization” strategy and the “perplexity minimization” strategy.

Introduction: In an “open world”, uncertainty and ignorance are difficult categories to deal with: how much can I be certain of a belief of mine? How much information is there that I have not considered but should? The first aim of the present work is to provide an analysis of epistemic dimensions such as strength of belief, uncertainty, contradiction and ignorance (or ambiguity). A special focus will be given to the third dimension. In the economic literature the notion of ignorance has been extensively investigated (Shackle, 1972) and ways to quantify it have been proposed (Shafer, 1976). In those approaches “lack of information” has been shown to affect the decision process, and ambiguity aversion in subjects has been identified; see Camerer & Weber (1992) for a review of the literature on decisions under ambiguity. We will argue in the following analysis that ignorance is a subjective evaluation of an actual lack of information on the basis of cognitive evidential models. The agent has a model (script) of his sources that allows him to evaluate that a certain type and a certain number of sources can provide sufficient information for reducing ignorance close to zero. In this way the strength of the belief and the (perceived) ignorance are two different measures, the second belonging to the meta-level. The second aim of the present work is to investigate the decision dynamics in an open world, with conflicting beliefs and multiple sources of information. We will formalize the process that leads the agent to acquire new information from the world (from witnesses) and that leads the agent to be “ready” to decide. We will claim that this process involves the strength of the beliefs that are relevant for deciding, as well as uncertainty and ignorance. The results of the current work are suitable e.g. for MAS environments, where an agent has to take decisions in open worlds.

The Red-or-Blue Card Game (RBG): We introduce a simple distributed game that is suitable for multi-agent system simulations as well as for human experiments: the agents (players) have to bid on the color of a card (red or blue), and they have many sources of information (their own perception and potentially infinite witnesses); the game can last an indefinite number of turns. The bidding game is the following: a card is shown (very quickly) to the player; it can be either red or blue, and the player has to bid on the right color (he starts with 1000 Credits). We assume that he cannot be totally sure of his own perception (e.g. the card is shown very quickly, or the lights in the room are low), but he is able to provide a degree of certainty about the color. Before bidding he can ask for help from a (potentially) infinite number of witnesses who have observed the scene and provide the answer “red” or “blue” (without degrees of certainty); this new information can lead the agent to confirm or revise his beliefs. Asking a witness costs nothing in Credits but costs 1 Time; Credits and Time can be aggregated in different ways. When he decides that he is “ready”, he can bid from 0 to 10 Credits on the color he wants. The true card color is then shown: if he was right, he gains twice the bid; otherwise he loses the bid. The game lasts an indefinite number of turns; between turns, depending on the result, the agent can revise the reliability he attributes to his sources, i.e. his perception and the witnesses (depending for example on the number of correct answers they provided), as well as his SCAI. Besides, his perception and the witnesses have true reliability values that determine the average correctness of their answers. The true values are not known by the player; they are initialized at the first game round and do not change during the game. At the end of the game the agent will have collected a certain amount of Credits, a set of reliability values for his sources, and a SCAI, and he will have spent a
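As a reading aid, here is a minimal Python sketch of a single RBG turn under the rules described above. The majority-style aggregation of the player's perception and the witnesses' answers, the function name `play_round`, and the example parameter values are assumptions for illustration only; the paper's agents use richer belief, uncertainty and perplexity measures rather than this simple vote.

```python
import random

def play_round(true_color, perception_reliability, witness_reliabilities,
               n_witnesses_to_ask, bid, credits, time_spent):
    """Simulate one RBG turn and return (guess, credits, time_spent)."""
    colors = ("red", "blue")

    def noisy_report(reliability):
        # Report the true color with probability `reliability`, the wrong one otherwise.
        if random.random() < reliability:
            return true_color
        return colors[1 - colors.index(true_color)]

    # The player's own uncertain perception, weighted by his degree of certainty.
    votes = {"red": 0.0, "blue": 0.0}
    votes[noisy_report(perception_reliability)] += perception_reliability

    # Each witness answers only "red" or "blue"; asking costs no Credits but 1 Time.
    for reliability in witness_reliabilities[:n_witnesses_to_ask]:
        votes[noisy_report(reliability)] += 1.0
        time_spent += 1

    guess = max(votes, key=votes.get)

    # Payoff rule from the game description: gain twice the bid if right, lose it otherwise.
    credits += 2 * bid if guess == true_color else -bid
    return guess, credits, time_spent

# Example turn: a red card, 70%-reliable perception, two of three witnesses consulted.
guess, credits, time_spent = play_round(
    true_color="red", perception_reliability=0.7,
    witness_reliabilities=[0.9, 0.6, 0.8],
    n_witnesses_to_ask=2, bid=5, credits=1000, time_spent=0)
```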

8 citations


Book ChapterDOI
19 Jul 2004
TL;DR: AKIRA, as presented in this paper, is a framework for agent-based cognitive and social simulations; it is an open-source project, currently developed mainly at ISTC-CNR, that exploits state-of-the-art techniques and tools.
Abstract: Here we present AKIRA, a framework for agent-based cognitive and social simulations. AKIRA is an open-source project, currently developed mainly at ISTC-CNR, that exploits state-of-the-art techniques and tools. It gives the programmer a number of facilities for building agents at different levels of complexity (e.g. reactive, deliberative, layered). Here we describe the main architectural features (i.e. the hybridism of the agents and the Energy Model) and the theoretical assumptions that motivate them. We also present some simulations.
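The abstract only names the Energy Model; the sketch below shows, under assumed semantics, one way a bounded shared energy pool with homeostasis could be organized. The class names (`EnergyPool`, `HybridAgent`) and the request/release/decay rules are illustrative and are not AKIRA's actual API.

```python
class EnergyPool:
    """A bounded pool: the energy held by all agents plus the free energy stays constant."""
    def __init__(self, total_energy):
        self.free = total_energy  # energy not currently held by any agent

    def request(self, amount):
        # Grant at most what is still free, so the global total never grows.
        granted = min(amount, self.free)
        self.free -= granted
        return granted

    def release(self, amount):
        self.free += amount

class HybridAgent:
    """An agent whose influence on the computation is proportional to its energy."""
    def __init__(self, name, pool):
        self.name = name
        self.pool = pool
        self.energy = 0.0

    def activate(self, demand):
        # Agents that are currently useful ask for more energy; the pool may grant only part of it.
        self.energy += self.pool.request(demand)

    def decay(self, rate=0.1):
        # A fraction of the energy flows back to the pool each cycle (homeostasis).
        returned = self.energy * rate
        self.energy -= returned
        self.pool.release(returned)
```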

5 citations


Proceedings Article
01 Jan 2004
TL;DR: This introductory paper explains the underlying theoretical assumptions that lead to AKIRA's central features, e.g. hybridism of the components, access to a common pool of resources, and homeostasis of the system, and shows the potentiality of AKIRA for cognitive modeling.
Abstract: AKIRA is an open-source framework for agent-based cognitive and socio-cognitive modeling and simulation. In this introductory paper we explain the underlying theoretical assumptions that lead to its central features, e.g. hybridism of the components, access to a common pool of resources, and homeostasis of the system. In order to show the potentiality of AKIRA for cognitive modeling, we provide an example of a Goal-Directed Agent. However, we are not focused on a single agent model, mechanism or computational tool; we use AKIRA as an “experimental laboratory” for modeling and implementing many cognitive functions (e.g. belief and goal dynamics, epistemic actions, anticipation, attention), exploring how higher-order cognition emerges from the interplay of many specialized agents and coalitions that compete, cooperate and learn how to exploit each other.
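To make the "interplay of specialized agents" concrete, here is a toy Python sketch in which activation-based competition selects which specialized agent acts on each cycle. The agent names, the activation update rule, and the winner-takes-the-cycle selection are illustrative assumptions and do not reproduce AKIRA's actual mechanism.

```python
import random

class SpecializedAgent:
    """A small, single-purpose agent whose activation decides how much it gets heard."""
    def __init__(self, name, relevance):
        self.name = name
        self.relevance = relevance   # how useful this agent is for the current goal
        self.activation = 1.0

    def compete(self):
        # Activation drifts toward relevance, with some noise: agents (and coalitions)
        # that are relevant to the active goal tend to accumulate activation over time.
        self.activation += 0.5 * (self.relevance - self.activation) + random.gauss(0.0, 0.05)

def step(agents):
    for a in agents:
        a.compete()
    winner = max(agents, key=lambda a: a.activation)
    return winner.name  # the most activated agent contributes its behaviour this cycle

agents = [SpecializedAgent("perceive", 0.3),
          SpecializedAgent("deliberate", 0.8),
          SpecializedAgent("act", 0.6)]
for _ in range(5):
    print(step(agents))
```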