scispace - formally typeset

Showing papers by "Cristiano Castelfranchi published in 1996"


BookDOI
01 Jan 1996
TL;DR: A collected volume spanning agent theories, agent development tools and platforms, models of agent communication and coordination, agent languages, and planning, decision making, and learning, together with panel discussions on agent autonomy.
Abstract (table of contents):
- Agent Theories I
- Optimistic and Disjunctive Agent Design Problems
- Updating Mental States from Communication
- Sensing Actions, Time, and Concurrency in the Situation Calculus
- Agent Development Tools and Platforms
- Developing Multiagent Systems with agentTool
- Layered Disclosure: Revealing Agents' Internals
- Architectures and Idioms: Making Progress in Agent Design
- Developing Multi-agent Systems with JADE
- Agent Theories II
- High-Level Robot Control through Logic
- Determining the Envelope of Emergent Agent Behaviour via Architectural Transformation
- Models of Agent Communication and Coordination
- Delegation and Responsibility
- Agent Theory for Team Formation by Dialogue
- Task Coordination Paradigms for Information Agents
- Autonomy and Models of Agent Coordination
- Plan Analysis for Autonomous Sociological Agents
- Multiagent Bidding Mechanisms for Robot Qualitative Navigation
- Performance of Coordinating Concurrent Hierarchical Planning Agents Using Summary Information
- Agent Languages
- Agent Programming with Declarative Goals
- Modeling Multiagent Systems with CASL - A Feature Interaction Resolution Application
- Generalised Object-Oriented Concepts for Inter-agent Communication
- Specification of Heterogeneous Agent Architectures
- Planning, Decision Making, and Learning
- Improving Choice Mechanisms within the BVG Architecture
- Planning-Task Transformations for Soft Deadlines
- An Architectural Framework for Integrated Multiagent Planning, Reacting, and Learning
- Panel Summary: Agent Development Tools
- Panel Summary: Autonomy - Theory, Dimensions, and Regulation
- Again on Agents' Autonomy: A Homage to Alan Turing - Panel Chair's Statement
- Autonomy as Decision-Making Control
- Autonomy: Theory, Dimensions, and Regulation
- Situated Autonomy
- Autonomy: A Nice Idea in Theory
- Adjustable Autonomy: A Response

373 citations


Proceedings Article
01 Jan 1996
TL;DR: Analyses the role of beliefs in the Processing of goals, from their firing to their satisfaction or giving up, showing how beliefs determine such a process step by step.
Abstract: The paper is devoted to the structural relation between beliefs and goals. I discuss its importance in modelling cognitive agents; its origin in cognitive processing; its structure (belief structure relative to a goal); its crucial role in rationality, mediating between epistemic and pragmatic rationality; and its role in goal Dynamics. I stress the crucial contribution of the supporting beliefs to the Processing of goals; to the Revision of goals (or Dynamics in a narrow sense), i.e. the change of goals either on the basis of the change of a dynamic external environment or of internal cycles of the agent; and to the Typology of goals, which may be partially characterized just on the basis of their typical belief structure. In particular, I will analyse in this paper the role of beliefs in the Processing of goals, from their firing to their satisfaction or giving up: how beliefs determine such a process step by step. The paper will not give a complete or formal account of any of these aspects. It is more an exploratory paper, which tries to identify basic ontological categories and principles, and fruitful directions of analysis for modelling the relation between beliefs and goals.
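The goal life cycle the abstract describes, where supporting beliefs fire a goal and later determine its satisfaction or abandonment, can be sketched as a small state machine. This is an illustrative toy, not the paper's model; all names and states are assumptions of the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A goal whose life cycle is gated, step by step, by beliefs."""
    name: str
    trigger: str                                 # belief that fires the goal
    blockers: set = field(default_factory=set)   # beliefs that force giving up
    state: str = "sleeping"

    def step(self, beliefs: set) -> str:
        """Advance the goal's state on the basis of the agent's current beliefs."""
        if self.state == "sleeping" and self.trigger in beliefs:
            self.state = "active"              # the firing belief activates the goal
        if self.state == "active":
            if f"achieved({self.name})" in beliefs:
                self.state = "satisfied"       # belief of achievement -> satisfaction
            elif self.blockers & beliefs:
                self.state = "abandoned"       # belief of impossibility -> giving up
        return self.state

# A goal fires on its triggering belief and is later satisfied or abandoned.
g = Goal("drink", trigger="thirsty", blockers={"impossible(drink)"})
g.step({"thirsty"})                  # -> "active"
g.step({"achieved(drink)"})          # -> "satisfied"
```

The point of the sketch is only that every transition is conditioned on a belief, which is the structural relation the paper explores.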

57 citations


Book ChapterDOI
01 Jan 1996
TL;DR: A DAI system, conceived for managing problems of communication and coordination in a distributed environment, is applied to some issues of social theory, showing how complex structures of interdependencies emerge from agents endowed with simple architectures and situated in a common world.
Abstract: In this paper, a DAI system called DEPNET, conceived for managing problems of communication and coordination in a distributed environment, will be applied to examine some issues of social theory. In particular, it will be applied to show how complex structures of interdependencies emerge from agents endowed with simple architectures and situated in a common world, and how in turn these structures determine other properties of the system at both the individual level (agents’ inequalities and negotiation powers) and the collective level (the emergence of coalitions and organisational structures, etc.).
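The emergence of interdependence structures can be illustrated with a minimal sketch (my own naming, not DEPNET's actual representation): an agent depends on another for a goal when it cannot achieve that goal with its own actions while the other can, and counting incoming dependencies gives a crude measure of negotiation power.

```python
# Each agent has goals and a set of goals its own actions can achieve
# (a deliberately simplified stand-in for plans and action repertoires).
agents = {
    "a1": {"goals": {"g1", "g2"}, "actions": {"g2"}},
    "a2": {"goals": {"g2"},       "actions": {"g1"}},
    "a3": {"goals": {"g1"},       "actions": {"g1", "g2"}},
}

def dependence_network(agents):
    """Return triples (a, b, g): agent a depends on agent b for goal g."""
    deps = set()
    for a, ad in agents.items():
        for g in ad["goals"]:
            if g not in ad["actions"]:                 # a cannot achieve g alone
                for b, bd in agents.items():
                    if b != a and g in bd["actions"]:  # ...but b can
                        deps.add((a, b, g))
    return deps

net = dependence_network(agents)
# Negotiation power as the number of others depending on each agent.
power = {a: sum(1 for (_, b, _) in net if b == a) for a in agents}
```

Here a3, being self-sufficient and needed by both others, ends up with the highest power, which is the kind of individual-level inequality the abstract mentions emerging from the collective structure.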

39 citations



Book ChapterDOI
12 Aug 1996

15 citations


Proceedings Article
01 Jan 1996
TL;DR: It is claimed that any support system for cooperation and any theory of cooperation require an analytic theory of delegation and adoption, and that each level of task delegation requires specific beliefs (modeling) about both the delegating agent and the delegee.
Abstract: Cristiano Castelfranchi, Rino Falcone
IP-CNR, Group of "Artificial Intelligence, Cognitive Modeling and Interaction"
Viale Marx, 15 - 00137 ROMA - Italy
E-mail: {cris, falcone}@pscs2.irmkant.rm.cnr.it

Introduction
The huge majority of DAI and MA, CSCW and negotiation systems, communication protocols, cooperative software agents, etc. are based on the idea that cooperation works through the allocation of some task (or sub-task) of a given agent (individual or complex) to another agent, via some "request" (offer, proposal, announcement, etc.) meeting some "commitment" (bid, contract, adoption, etc.). This core constituent of any interactive, negotial, cooperative system is not so clear, well founded and systematically studied as it could seem. Our claim is that any support system for cooperation and any theory of cooperation require an analytic theory of delegation and adoption. We will contribute to an important aspect of this theory with a plan-based analysis of delegation.
In this paper we try to propose a foundation of the various levels of delegation and adoption (help), characterizing their basic principles and representations. We try also to identify different agent modeling requirements in relation to the different levels of delegation and/or adoption.
We characterize the various levels of the delegation-adoption relation (executive or open; implicit or explicit; on the domain or on the planning; etc.) on the basis of a theory of plans, actions and agents.
Our claim is that each level of task delegation requires specific beliefs (modeling) about both the delegating agent and the delegee.

Delegation, adoption and their meeting
In this section we supply a general definition of delegation, adoption, and contract, before entering into a more formal and detailed analysis of these concepts.
Let A and B be two agents. There are two main forms of delegation:
- A delegates to B a result g (goal state): i.e. A delegates B to "bring it about that g", where "to bring it about that g" means to find and execute an action that has g among its relevant results/effects. Sub-delegation is not excluded. Delegation from A does not require that A knows which is the action that B has to carry out: A has only to guess that there is such an action.
- A delegates to B an action α, i.e. A delegates B to perform (or sub-delegate) α.
We assume that to delegate an action necessarily implies to delegate some result of that action [postulate I]. Conversely, to delegate a result always implies the delegation of at least one action that produces such a result [postulate II]. Thus, in the following we will consider as the object of the delegation the couple action/goal τ=(α,g) that we call task. With τ, we will refer to the action, to its resulting world state, or to both: this is because α or g might be implicit or not specified in the request.
By definition, a task is a piece/part of a plan (possibly the entire plan); therefore the task has the same hierarchical structure of composition and of abstraction as plans.

Weak Delegation ("to rely on", "to exploit")
Given two agents A and B, and a task τ, to assert that A weakly-delegates τ to B means that:
1a) (A believes that) τ is a goal or subgoal of A; that implies [1] that:
- A believes that (to perform) τ is possible;
- A believes that (to perform) τ is preferable;
- A believes that Not (performed) τ;
1b) A believes that B is able to perform/bring it about that τ;
1c) (A believes that) A has the goal that B performs/brings it about that τ;
1d) A believes that B will perform τ in time (or A believes that B is internally committed to perform τ in time);
1e) A has the goal ('relativized' to 1d) of not performing τ by itself.
In Weak Delegation A exploits B's activities while B might be unaware of this.

Weak Adoption ("to take care of")
Given two agents A and B, and a task τ, to assert that B weakly-adopts τ for A means that:
2a) B believes that τ is a goal or subgoal of A;
2b) B believes that B is able to perform/bring it about that τ;
2c) (B believes that) B has the goal to perform τ for A; that implies that:
- B believes that (to perform) τ is possible;
- B believes that (to perform) τ is preferable for A;
- B believes that Not (performed) τ;
2d) B believes that B will perform τ in time (or B believes that B is internally committed to perform τ in time);
2e) B believes that A will not perform τ by itself.
Notice that this help can be completely unilateral and spontaneous from B (without any request of A), and/or even ignored by A.

Delegation-Adoption (Contract)
In Strict Delegation, the delegee knows that the delegating agent is relying on him and accepts the task; in Strict Adoption, the helped agent knows about the adoption and accepts it. In other words, Strict Delegation requires Strict Adoption, and vice versa: they are two facets of a unitary social relation that we will call "delegation-adoption" or "contract".
Given two agents A (the client) and B (the contractor), and a task τ, to assert that there is a delegation-adoption relationship between A and B for τ means that: (1a) (2a) (1b) (2b) (1c) (2c) (1d) (2d) (1e) (2e), and:
3a) A and B believe that the other agent believes that τ is a goal or subgoal of A;
3b) A and B believe that the other agent believes that B is able to perform/bring it about that τ;
3f) A and B believe that A's goal is that B performs τ for A;
3g) A and B believe that B is socially committed with A to perform τ for A [2];
3h) A is socially committed with B to not performing τ by himself;
3i) A and B mutually believe in their reciprocal commitments.
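The Weak Delegation conditions 1a-1e can be encoded as a single checkable predicate over an agent's beliefs and goals. This is a schematic sketch under my own naming, not the authors' formalism; condition 1e is simplified to "A does not have the goal of performing the task itself":

```python
def weak_delegation(A_beliefs: set, A_goals: set, tau, B: str) -> bool:
    """True when agent A weakly delegates task tau to agent B (conditions 1a-1e)."""
    return (
        tau in A_goals                               # 1a: tau is a goal of A...
        and ("possible", tau) in A_beliefs           #     ...believed possible,
        and ("preferable", tau) in A_beliefs         #     ...preferable,
        and ("not_performed", tau) in A_beliefs      #     ...and not yet performed
        and ("able", B, tau) in A_beliefs            # 1b: B is able to do tau
        and ("B_performs", B, tau) in A_goals        # 1c: A wants B to do tau
        and ("will_perform", B, tau) in A_beliefs    # 1d: B will do tau in time
        and ("do_it_myself", tau) not in A_goals     # 1e: A forgoes doing tau itself
    )

# Example: A relies on B for the task tau = (action, goal).
tau = ("fetch_report", "report_available")
beliefs = {("possible", tau), ("preferable", tau), ("not_performed", tau),
           ("able", "B", tau), ("will_perform", "B", tau)}
goals = {tau, ("B_performs", "B", tau)}
assert weak_delegation(beliefs, goals, tau, "B")
```

Because the relation is just a conjunction of belief and goal conditions, dropping any one of them (e.g. the belief that B is able) makes the delegation fail, which mirrors the paper's claim that each level of delegation requires specific beliefs about the delegee.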

7 citations