
Showing papers on "Multi-agent system published in 1993"


Journal ArticleDOI
TL;DR: Here agent communities are modelled using a distributed goal search formalism and it is argued that commitments (pledges to undertake a specified course of action) and conventions are the foundation of coordination in multi-agent systems.
Abstract: Distributed Artificial Intelligence systems, in which multiple agents interact to improve their individual performance and to enhance the systems' overall utility, are becoming an increasingly pervasive means of conceptualising a diverse range of applications. As the discipline matures, researchers are beginning to strive for the underlying theories and principles which guide the central processes of coordination and cooperation. Here agent communities are modelled using a distributed goal search formalism, and it is argued that commitments (pledges to undertake a specific course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in multi-agent systems. An analysis of existing coordination models which use concepts akin to commitments and conventions is undertaken before a new unifying framework is presented. Finally, a number of prominent coordination techniques which do not explicitly involve commitments or conventions are reformulated in these terms to demonstrate their compliance with the central hypothesis of this paper.

426 citations
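To make the paper's two central notions concrete, here is a minimal, hypothetical sketch (not taken from the paper; the class names and the re-assessment rule are illustrative assumptions) of a commitment as a pledged course of action paired with a convention that monitors it as circumstances change.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch: a commitment is a pledge to a course of action;
# a convention is a set of rules for re-assessing it when circumstances change.

@dataclass
class Commitment:
    agent: str
    goal: str
    plan: List[str]          # the pledged course of action
    active: bool = True

@dataclass
class Convention:
    # each rule looks at the commitment and the current world state and
    # answers "retain", "revise", or "abandon"
    rules: List[Callable[[Commitment, Dict], str]] = field(default_factory=list)

    def monitor(self, c: Commitment, world: Dict) -> str:
        for rule in self.rules:
            verdict = rule(c, world)
            if verdict != "retain":
                c.active = (verdict == "revise")
                return verdict
        return "retain"

# example convention rule: abandon the commitment once its goal is satisfied
def goal_satisfied(c: Commitment, world: Dict) -> str:
    return "abandon" if world.get(c.goal, False) else "retain"
```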


Journal ArticleDOI
TL;DR: A synergistic review of existing models of cooperation is presented, their weaknesses are highlighted and a new model (called joint responsibility) is introduced, which is used to specify a novel high-level agent architecture for cooperative problem solving.
Abstract: Systems composed of multiple interacting problem solvers are becoming increasingly pervasive and have been championed in some quarters as the basis of the next generation of intelligent information systems. If this technology is to fulfill its true potential, then it is important that the systems which are developed have a sound theoretical grounding. One aspect of this foundation, namely the model of collaborative problem solving, is examined in this paper. A synergistic review of existing models of cooperation is presented, their weaknesses are highlighted and a new model (called joint responsibility) is introduced. Joint responsibility is then used to specify a novel high-level agent architecture for cooperative problem solving in which the mentalistic notions of belief, desire, intention and joint intention play a central role in guiding an individual's and the group's problem solving behaviour. An implementation of this high-level architecture is then discussed and its utility is illustrated for the real-world domain of electricity transportation management.

125 citations


Proceedings Article
28 Aug 1993
TL;DR: Two reinforcement learning algorithms, ACE and AGE, are proposed for learning appropriate sequences of action sets in multi-agent systems, and experimental results illustrate the learning abilities of these algorithms.
Abstract: This paper deals with learning in reactive multi-agent systems. The central problem addressed is how several agents can collectively learn to coordinate their actions such that they solve a given environmental task together. In approaching this problem, two important constraints have to be taken into consideration: the incompatibility constraint, that is, the fact that different actions may be mutually exclusive; and the local information constraint, that is, the fact that each agent typically knows only a fraction of its environment. The content of the paper is as follows. First, the topic of learning in multi-agent systems is motivated (section 1). Then, two algorithms called ACE and AGE (standing for "ACtion Estimation" and "Action Group Estimation", respectively) for the reinforcement learning of appropriate sequences of action sets in multi-agent systems are described (section 2). Next, experimental results illustrating the learning abilities of these algorithms are presented (section 3). Finally, the algorithms are discussed and an outlook on future research is provided (section 4).

106 citations
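The abstract names the two constraints but not the algorithmic details. As a hedged illustration only (the class, the delta-rule update and the greedy compatibility filter are assumptions, not the published ACE or AGE algorithms), action-estimate learning under an incompatibility constraint might look like this:

```python
import random
from collections import defaultdict

# Hedged sketch of action-estimate learning in a multi-agent setting
# (illustrative only; not the published ACE/AGE algorithms).

class EstimatingAgent:
    def __init__(self, actions, lr=0.1, eps=0.1):
        self.estimates = defaultdict(float)   # action -> estimated usefulness
        self.actions, self.lr, self.eps = actions, lr, eps

    def propose(self):
        # local information constraint: the agent uses only its own estimates
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.estimates[a])

    def reinforce(self, action, reward):
        # simple delta-rule update of the action estimate
        self.estimates[action] += self.lr * (reward - self.estimates[action])

def select_action_set(proposals, compatible):
    """Greedily keep proposals that are mutually compatible
    (a stand-in for the incompatibility constraint from the abstract)."""
    chosen = []
    for agent, action in proposals:
        if all(compatible(action, other) for _, other in chosen):
            chosen.append((agent, action))
    return chosen
```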


Journal ArticleDOI
TL;DR: This paper reports on an experiment undertaken at the CERN laboratories in which two pre-existing and standalone expert systems for diagnosing faults in a particle accelerator were transformed into a community of cooperating agents.

82 citations


Book ChapterDOI
25 Aug 1993
TL;DR: A unified and general mechanism for developing cooperation protocols in multi-agent systems that are generic in the sense that a protocol execution algorithm can treat the domain independent parts separately from the application dependent reasoning and deciding processes involved.
Abstract: In this paper we propose a unified and general mechanism for developing cooperation protocols in multi-agent systems. The protocols are essentially speech act based but have considerable advantages compared to previous approaches: first, they are generic in the sense that a protocol execution algorithm can treat the domain-independent parts separately from the application-dependent reasoning and deciding processes involved; and second, they are recursively defined from primitives, which allows a designer (or eventually the agents themselves) to configure the appropriate general or domain-specific cooperation protocols.

59 citations
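A hedged sketch of the separation the abstract describes: a generic protocol-execution routine enforces the domain-independent structure of a speech-act-based protocol, while the application-dependent reasoning is supplied as a callback. The primitive names and the example transition table are illustrative assumptions.

```python
from typing import Callable, Dict, List, Tuple

# Hedged sketch: the engine knows only which message types may follow which;
# the agent designer supplies the application-dependent `decide` function.
# Primitive names and the transition table are illustrative assumptions.

ProtocolSpec = Dict[str, List[str]]   # message type -> allowed replies

CONTRACTING: ProtocolSpec = {
    "propose": ["accept", "reject", "counter-propose"],
    "counter-propose": ["accept", "reject", "counter-propose"],
    "accept": [],
    "reject": [],
}

def run_protocol(spec: ProtocolSpec,
                 decide: Callable[[str, dict], Tuple[str, dict]],
                 opening: str, content: dict, max_turns: int = 10) -> str:
    """Execute a protocol: the engine enforces the spec, `decide`
    supplies the application-dependent reply and updated content."""
    msg = opening
    for _ in range(max_turns):
        allowed = spec.get(msg, [])
        if not allowed:
            return msg                 # terminal message ends the protocol
        reply, content = decide(msg, content)
        if reply not in allowed:
            raise ValueError(f"{reply!r} not allowed after {msg!r}")
        msg = reply
    return msg
```

An agent designer would plug in a `decide` function that embodies the domain-specific reasoning about whether to accept, reject, or counter-propose; the same engine can then run general or domain-specific protocols built from the same primitives.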


Journal ArticleDOI
Shawn D. Bird
TL;DR: This paper presents a taxonomy for multi-agent systems that defines alternative architectures based on fundamental distributed, intelligent system characteristics and presents a step toward the development of general principles for their integration.
Abstract: As intelligent systems become more pervasive and capture more expert and organizational knowledge, the expectation that they be integrated into larger problem-solving systems is heightened. To capitalize on these investments and more fully exploit their potential as knowledge repositories, general principles for their integration must be developed. Although simulated and prototype systems described in the literature provide solutions to some practical problems, most are empirical (or often simply intuitive) in design, emerging from implementation strategy instead of general principles. As a step toward the development of such principles, this paper presents a taxonomy for multi-agent systems that defines alternative architectures based on fundamental distributed, intelligent system characteristics.

52 citations


Proceedings ArticleDOI
02 May 1993
TL;DR: A strategy for motion planning of multiple robots as a multi-agent system is proposed, in which each robot determines its motion selfishly, planning while considering the known environment and using empirical knowledge.
Abstract: A strategy for motion planning of multiple robots as a multi-agent system is proposed. The robots cannot communicate globally, but some can communicate locally and coordinate to avoid competition for public resources. In such systems, it is difficult for each robot to plan its motion effectively while taking the other robots into account. Therefore, each robot determines its motion selfishly, planning its motion with respect to the known environment and using empirical knowledge. The robot also accounts for its unknown environment, which includes the other robots, through this empirical knowledge. A genetic algorithm is used to optimize the planned motion. Each robot iteratively acquires knowledge of its unknown environment, expressed by fuzzy logic, and the system behaves efficiently as an evolutionary process. As an illustration, path planning by multiple mobile robots is considered.

46 citations
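The abstract states that a genetic algorithm optimizes each robot's planned motion. The sketch below shows only that GA step for a single robot on a grid with known obstacles; the fuzzy empirical knowledge about other robots is omitted, and the encoding, fitness function, and parameters are illustrative assumptions rather than the authors' method.

```python
import random

# Hedged sketch of the GA step only: evolve a sequence of moves for one robot
# toward a goal on a grid with known obstacles (illustrative assumptions).

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def simulate(path, start, obstacles):
    x, y = start
    for dx, dy in path:
        if (x + dx, y + dy) not in obstacles:
            x, y = x + dx, y + dy
    return (x, y)

def fitness(path, start, goal, obstacles):
    x, y = simulate(path, start, obstacles)
    return -(abs(x - goal[0]) + abs(y - goal[1]))   # closer to the goal is better

def evolve(start, goal, obstacles, pop=30, length=20, gens=100):
    population = [[random.choice(MOVES) for _ in range(length)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(p, start, goal, obstacles), reverse=True)
        parents = population[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(length)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.2:                 # mutation
                child[random.randrange(length)] = random.choice(MOVES)
            children.append(child)
        population = parents + children
    return max(population, key=lambda p: fitness(p, start, goal, obstacles))
```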


Journal ArticleDOI
TL;DR: This article outlines, through a number of examples, a method that can be used by autonomous agents to decide among potential messages to send to other agents, without having to assume that a message must be truthful and that it must be believed by the hearer.
Abstract: This article outlines, through a number of examples, a method that can be used by autonomous agents to decide among potential messages to send to other agents, without having to assume that a message must be truthful and that it must be believed by the hearer. The main idea is that communicative behavior of autonomous agents is guided by the principle of economic rationality, whereby agents transmit messages to increase the effectiveness of interaction measured by their expected utilities. We are using a recursive, decision-theoretic formalism that allows agents to model each other and to infer the impact of a message on its recipient. The recursion can be continued into deeper levels, and agents can model the recipient modeling the sender in an effort to assess the truthfulness of the received message. We show how our method often allows the agents to decide to communicate in spite of the possibility that the messages will not be believed. In certain situations, on the other hand, our method shows that the possibility of the hearer not believing what it hears makes communication useless. Our method thus provides the rudiments of a theory of how honesty and trust could emerge through rational, selfish behavior.

35 citations
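A one-level, hedged stand-in for the decision procedure the abstract outlines: each candidate message is scored by its expected utility, weighted by the estimated probability that the hearer believes it, and compared against saying nothing. The message names and numbers are purely illustrative; the paper's recursive modelling of nested beliefs is not reproduced here.

```python
# Hedged sketch: choose which message (if any) to send by comparing expected
# utilities, discounting each message by the estimated probability that the
# hearer believes it. Illustrative names and values only.

def expected_utility(candidates, baseline_utility):
    """candidates: list of (message, p_believed, utility_if_believed,
    utility_if_not_believed). Returns (best_message, expected_utility);
    best_message is None when staying silent is best."""
    best_msg, best_eu = None, baseline_utility        # sending nothing
    for msg, p, u_yes, u_no in candidates:
        eu = p * u_yes + (1 - p) * u_no
        if eu > best_eu:
            best_msg, best_eu = msg, eu
    return best_msg, best_eu

# usage: if the hearer is unlikely to believe a message and disbelief is
# costly, the function can return None, i.e. communication is useless here.
choice = expected_utility(
    [("threat", 0.3, 5.0, -1.0), ("honest report", 0.8, 3.0, 0.0)],
    baseline_utility=2.0,
)   # -> ("honest report", 2.4)
```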


Proceedings Article
28 Aug 1993
TL;DR: Concepts from fields such as decision theory and game theory can provide standards to be used in the design of appropriate negotiation and interaction environments are considered.
Abstract: As distributed systems of computers play an increasingly important role in society, it will be necessary to consider ways in which these machines can be made to interact effectively. Especially when the interacting machines have been independently designed, it is essential that the interaction environment be conducive to the aims of their designers. These designers might, for example, wish their machines to behave efficiently, and with a minimum of overhead required by the coordination mechanism itself. The rules of interaction should satisfy these needs, and others. Formal tools and analysis can help in the appropriate design of these rules. We here consider how concepts from fields such as decision theory and game theory can provide standards to be used in the design of appropriate negotiation and interaction environments. This design is highly sensitive to the domain in which the interaction is taking place. Different interaction mechanisms are suitable for different domains, if attributes like efficiency and stability are to be maintained.

34 citations


Journal ArticleDOI
TL;DR: This paper explores the specification and semantics of multiagent problem-solving systems, focusing on the representations that agents have of each other, and provides a declarative representation for such systems.
Abstract: This paper explores the specification and semantics of multiagent problem-solving systems, focusing on the representations that agents have of each other. It provides a declarative representation for such systems. Several procedural solutions to a well-known test-bed problem are considered, and the requirements they impose on different agents are identified. A study of these requirements yields a representational scheme based on temporal logic for specifying the acting, perceiving, communicating, and reasoning abilities of computational agents. A formal semantics is provided for this scheme. The resulting representation is highly declarative, and useful for describing systems of agents solving problems reactively.

27 citations


Proceedings ArticleDOI
08 Nov 1993
TL;DR: A model of a cognitive agent currently being developed at LIFIA is detailed and, taking this model as reference, the authors examine alternatives for supporting cognitive agents in distributed and heterogeneous environments.
Abstract: The authors discuss the implementation and run-time support for multi-agent systems (MAS). They start by presenting MAS in the context of open distributed processing (ODP). Next, a model of a cognitive agent currently being developed at LIFIA is detailed. Taking this model as reference, the authors examine alternatives for supporting cognitive agents in distributed and heterogeneous environments. Finally, a distributed processing tool developed by the authors is presented. This tool follows the active object model, and it is shown that active objects and agents are strongly related concepts.

Proceedings ArticleDOI
30 Mar 1993
TL;DR: UPShell, a tool for building up coarse grain semiautonomous cooperating agents (which are expert systems) is presented and negotiation and conflict resolution protocols have been integrated into the agents, whose architecture is presented.
Abstract: UPShell, a tool for building up coarse grain semiautonomous cooperating agents (which are expert systems) is presented. Negotiation and conflict resolution protocols have been integrated into the agents, whose architecture is also presented. Several basic functionalities dealing with task scheduling as well as cooperation policies are briefly specified. A realistic scenario, involving a sophisticated electricity distribution network management application, is presented to illustrate the concepts and tool features.

Journal ArticleDOI
TL;DR: The analytical results hold promise for explaining in general terms many experimental observations made in specific distributed AI systems, and the analysis is empirically validated by showing that it correctly predicts performance for the Towers of Hanoi problem.
Abstract: Knoblock and Korf have determined that abstraction can reduce search at a single agent from exponential to linear complexity (Knoblock 1991; Korf 1987). We extend their results by showing how concurrent problem solving among multiple agents using abstraction can further reduce search to logarithmic complexity. We empirically validate our formal analysis by showing that it correctly predicts performance for the Towers of Hanoi problem (which meets all of the assumptions of the analysis). Furthermore, a powerful form of abstraction for large multiagent systems is to group agents into teams, and teams of agents into larger teams, to form an organizational pyramid. We apply our analysis to such an organization of agents and demonstrate the results in a delivery task domain. Our predictions about abstraction's benefits can also be met in this more realistic domain, even though assumptions made in our analysis are violated. Our analytical results thus hold the promise for explaining in general terms many experimental observations made in specific distributed AI systems, and we demonstrate this ability with examples from prior research.
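As a hedged back-of-the-envelope reconstruction of the complexity progression (not the paper's own derivation), the claim can be summarized as follows, where $b$ is the branching factor and $n$ the solution length, assuming bounded work per abstraction level and negligible coordination overhead:

```latex
% Hedged reconstruction; not the paper's derivation.
\begin{align*}
\text{blind single-agent search:}                   &\quad O(b^{\,n}) \\
\text{single agent with an abstraction hierarchy:}  &\quad O(n)
      \quad\text{(Korf 1987; Knoblock 1991)} \\
\text{agents working concurrently on the levels of} & \\
\text{a hierarchy of depth } \log n:                &\quad O(\log n)\ \text{elapsed time}
\end{align*}
```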

Proceedings ArticleDOI
30 Mar 1993
TL;DR: Two stages of higher decentralization are developed: the decentralized task decomposition and the completely decentralized model, which provide greater flexibility in cooperative problem-solving processes and more adequate forms of communication and coordination between agents having equal rights.
Abstract: A framework for task decomposition in dynamic societies of autonomous agents is presented. A general model of task decomposition in multiagent systems is introduced. Then, starting from the contract net model for task decomposition and allocation, two stages of higher decentralization are developed: the decentralized task decomposition and the completely decentralized model. These models provide greater flexibility in cooperative problem-solving processes and more adequate forms of communication and coordination between agents having equal rights. It is indicated how negotiation can be applied for task decomposition problems in a concrete scenario, the MARS scenario of shipping companies.
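The abstract takes the contract net as its starting point before decentralizing it further. For readers unfamiliar with that baseline, here is a minimal sketch of the basic announce/bid/award cycle; the agent names, the cost-based bid measure, and the data shapes are illustrative assumptions.

```python
# Minimal sketch of the basic contract-net cycle (announce / bid / award).
# Names, the bid measure, and the data shapes are illustrative assumptions.

def contract_net(manager_task, contractors):
    """contractors: dict name -> cost_function(task) returning a cost,
    or None if the contractor cannot take the task."""
    # 1. task announcement
    announcements = {name: manager_task for name in contractors}
    # 2. bidding: each contractor evaluates the announcement locally
    bids = {}
    for name, cost_fn in contractors.items():
        cost = cost_fn(announcements[name])
        if cost is not None:
            bids[name] = cost
    # 3. awarding: the manager picks the cheapest bid
    if not bids:
        return None
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

# usage
contractors = {
    "truck_A": lambda task: 12.0,
    "truck_B": lambda task: 9.5,
    "truck_C": lambda task: None,     # cannot carry out this task
}
print(contract_net({"deliver": "parcel 17"}, contractors))   # ('truck_B', 9.5)
```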

Proceedings Article
28 Aug 1993
TL;DR: It turns out that the properties of a multi-agent system need not correspond to separately definable properties of the agents (e.g. a community of fair agents need not constitute a fair multi-agent system).
Abstract: Problems of liveness and fairness are considered in multi-agent systems by means of abstract languages. Different approaches to defining such properties for the agents and for a multi-agent system as a whole are discussed. It turns out that the properties of a multi-agent system need not correspond to separately definable properties of the agents (e.g. a community of fair agents need not constitute a fair multi-agent system). In general, analysis and verification require consideration of the whole system, and the agents, too, have to be considered in the context of the system. The results are not uniform: there are different results for deadlock freedom, liveness and fairness, respectively.

Journal ArticleDOI
TL;DR: A system for cooperating intelligent agents which openly supports multi-agent conflict detection and resolution, based on the insight that each agent has its own conflict knowledge which is separated from its domain-level knowledge.
Abstract: Distributed artificial intelligence attempts to integrate and coordinate the activities of multiple, intelligent problem solvers that come together to solve complex tasks in domains such as design, medical diagnosis, business management, and so on. Due to the different goals, knowledge, and viewpoints of the agents, conflicts might arise at any phase of the problem-solving process. Managing diverse knowledge requires well-organized models of conflict resolution. In this paper, a system for cooperating intelligent agents which openly supports multi-agent conflict detection and resolution is described. The system is based on the insights, first, that each agent has its own conflict knowledge, which is separated from its domain-level knowledge; and, second, that each agent has its own conflict management knowledge, which is not accessible to or known by others. Furthermore, there are no globally-known conflict-resolution strategies. Each agent involved in a conflict chooses a resolution scheme according to its ...

Proceedings ArticleDOI
26 Jul 1993
TL;DR: The fundamental software of an agent, called an action interpreter, is discussed; it determines the agent's activities, which include control, monitoring, and interaction with the environment outside the agent.
Abstract: A multi-agent system is introduced as an information-processing model for a robot control mechanism. This model represents the real world in an object-oriented way and models each physical object in the world as a logical object. An agent is an active, independent logical entity that can control these logical objects. The agents work concurrently and communicate with each other to cooperatively perform a job assigned to the system. An agent can be regarded as a personification of a robot control process, and the robot programmer can imagine that there are personified entities in charge of executing the subjobs making up the robot's job. This paper discusses the fundamental software of an agent, called an action interpreter, that determines its activities, which include control, monitoring, and interaction with the environment outside the agent. An experimental implementation of the system is described.
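A hedged sketch of an action-interpreter loop covering the three activities the abstract lists (control, monitoring, and interaction with the environment outside the agent); the class layout and method names are illustrative assumptions, not the paper's implementation.

```python
import queue

# Hedged sketch of an action-interpreter loop: interaction, control, monitoring.
# The structure and method names are illustrative assumptions.

class ActionInterpreter:
    def __init__(self, controlled_objects):
        self.objects = controlled_objects        # logical objects this agent controls
        self.inbox = queue.Queue()               # messages from other agents
        self.subjobs = []                        # subjobs assigned to this agent

    def step(self):
        # interaction: react to messages from the environment outside the agent
        while not self.inbox.empty():
            self.handle_message(self.inbox.get())
        # control: advance the current subjob on the controlled objects
        if self.subjobs:
            current = self.subjobs[0]
            # `advance` and `finished` are a hypothetical subjob interface
            current.advance(self.objects)
            # monitoring: check progress and drop the subjob when done
            if current.finished(self.objects):
                self.subjobs.pop(0)

    def handle_message(self, msg):
        if msg.get("type") == "assign":
            self.subjobs.append(msg["subjob"])
```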

01 Jan 1993
TL;DR: In this article, the authors present a paradigm for cooperative search and conflict resolution among loosely-coupled expert agents, which is realized in TEAM, a framework that provides a flexible environment for agent integration.
Abstract: We present negotiated search, a paradigm for cooperative search and conflict resolution among loosely-coupled expert agents. The paradigm is realized in TEAM, a framework that provides a flexible environment for agent integration. TEAM enables agents with heterogeneous characteristics and capabilities to work together cooperatively. Experimental results from a design application program implemented in TEAM are presented. These results indicate that system performance is correlated with the organization of the agent set, based on the ability of agents to communicate, the interaction capabilities instantiated at each agent, and the texture of agents' local solution spaces. The experiments show that heterogeneous agents can work together without tightly coordinated organization. However, they also demonstrate that some agent organizations have more potential for effective cooperation than others. We analyze agent characteristics that affect this potential and discuss the use of negotiated-search strategies that take advantage of the particular strengths of the agents in an agent set. Both agent characteristics and group dynamics have far-reaching implications for the development of multi-agent systems and for the design of agents that are intended to work within agent sets.

Proceedings ArticleDOI
26 Jul 1993
TL;DR: An approach to control of multiagent flexible manufacturing work cells is described, based on a strategic architectural model for hierarchical manufacturing control, which focuses on process sequence coordination in the bottom hierarchical layers.
Abstract: An approach to control of multiagent flexible manufacturing work cells is described. The concept is based on a strategic architectural model for hierarchical manufacturing control, which focuses on process sequence coordination in the bottom hierarchical layers. A work cell control system with universal applicability and free programmability is introduced. By utilization of user-defined device drivers the system can be adapted to different applications. The process sequence in the work cell depends on the individual product type and is described by process plans. A process planning system which supports user programming of the work cell controller is introduced.

Journal ArticleDOI
TL;DR: This dissertation develops and implements a new model of multiagent coordination, called JOINT RESPONSIBILITY (Jennings 1992b), based on the notion of joint intentions, which was devised specifically for coordinating behavior in complex, unpredictable, and dynamic environments, such as industrial control.
Abstract: My Ph.D. dissertation develops and implements a new model of multiagent coordination, called JOINT RESPONSIBILITY (Jennings 1992b), based on the notion of joint intentions. The responsibility framework was devised specifically for coordinating behavior in complex, unpredictable, and dynamic environments, such as industrial control. The need for such a principled model became apparent during the development and the application of a general-purpose cooperation framework (GRATE) to two real-world industrial applications.

Proceedings ArticleDOI
06 Sep 1993
TL;DR: A multi-agent system based on Distributed Artificial Intelligence concepts is proposed to manage the flow control and congestion control parameters inside the network.
Abstract: High-speed communication networks are expected soon. They should support the transmission requirements of a wide range of applications. To support these integrated services, high-speed networks are absolutely necessary. The ATM (Asynchronous Transfer Mode) technique seems well suited to handling all of these constraints. However, a large number of problems have to be dealt with. In particular, the flow control schemes to be applied and the congestion control methods should be well adapted to the resources of the network. Moreover, flow control decisions have to be made in a distributed manner. This leads to a very difficult problem: how to manage the flow control and congestion control parameters? In this paper a new flow control scheme is proposed, based on a distributed algorithm. Due to the complexity of the problem, a multi-agent system using Distributed Artificial Intelligence concepts is proposed to control the parameters inside the network.

Journal ArticleDOI
TL;DR: A rule-based agent-integration approach for the construction of autonomous subsystems and systems useful in integrated manufacturing is presented; it formalizes the integration process that a system analyst otherwise solves in an ad hoc manner.
Abstract: A three-stage approach to the development of multi-agent systems is proposed, involving functional decomposition, agent representation, and information flow integration. The information flow integration stage is discussed in detail. A rule-based agent-integration approach for the construction of autonomous subsystems and systems useful in integrated manufacturing is presented. The approach formalizes the integration process that a system analyst otherwise solves in an ad hoc manner. The approach presented facilitates the discovery of new integrated manufacturing solutions. It involves generation of a set of autonomous agents for building an overall system. Hierarchically structured subsystems are developed at different levels of abstraction, paralleling the approach used by an expert system analyst. The implementation of the integration process in an object-oriented programming environment is also suggested.

Book ChapterDOI
26 Oct 1993
TL;DR: The paper shows how a large set of communication primitives, defined in the spirit of Speech Act theory, are suitable to model agent interactions and can be specialised to implement specific communication protocols.
Abstract: This paper presents a distributed object-oriented language, called Multi-Agent Programming language, whose features are suitable for developing multi-agent systems. The language is based on an object, called an agent, that (i) performs private actions, (ii) communicates with other agents, and (iii) re-configures the system structure by creating other agents and changing its behaviour. The main feature of this language is its use of a large set of communication primitives, defined in the spirit of Speech Act theory, which are suitable to model agent interactions and which can be specialised to implement specific communication protocols. In particular, the paper shows how these primitives are suitable to model negotiation protocols.

Proceedings ArticleDOI
01 Dec 1993
TL;DR: A formalization of the concept of ranking functions is developed, together with mechanisms that establish the comparability of different rankings.
Abstract: A central problem in the study of autonomous cooperating systems is how to establish mechanisms for controlling the interactions between the different parts (called agents) of the system. For heterogeneous agents this amounts to modeling the basis for their decisions. The mechanisms discussed in this paper rest on the assumption that the agents can estimate the effects of being attached to a certain set of goals. In the simplest case this is expressed by a single value, e.g. the cost that will arise for the accomplishment of these goals, but in general the estimation may be arbitrarily complicated. In addition, we assume that the agents have a function available to rank their goals according to the estimated values and that they pursue the goals they rank best. These values can then be used to resolve various kinds of conflicts in such systems. For example, in the task allocation phase, the case of multiple applications for the allocation of a goal can be decided by allocating the goal to the agent with the "best estimation". Another way of using these values is to establish collaborative actions between a pair (or a set) of agents: if one agent wants support in the accomplishment of a particular goal, it will try to persuade another agent to modify its ranking of that goal so that both rank this "common" goal best. The use of ranking functions thus provides a general framework for considering cooperative aspects within the study of multi-agent systems. An essential question in this context is how the rankings of different agents can be compared. We therefore develop in this paper a formalization of the concept of ranking functions and discuss mechanisms that establish the comparability of different rankings.
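A hedged sketch of the two uses of estimation values described above: allocating a contested goal to the agent with the best estimate, and persuading a peer to re-rank a goal so that it becomes a common best goal. The agent class, the cost-based ranking, and the discount used to model persuasion are illustrative assumptions.

```python
# Hedged sketch of ranking-function-based conflict resolution.
# Agent and goal names, and the cost-based ranking, are illustrative assumptions.

class RankingAgent:
    def __init__(self, name, cost_estimates):
        self.name = name
        self.costs = dict(cost_estimates)       # goal -> estimated cost

    def ranking(self):
        # lower estimated cost ranks better
        return sorted(self.costs, key=self.costs.get)

    def best_goal(self):
        return self.ranking()[0]

def allocate(goal, applicants):
    """Resolve multiple applications for a goal: give it to the agent
    with the best (here, lowest-cost) estimation."""
    return min(applicants, key=lambda a: a.costs[goal])

def persuade(helper, goal, discount=0.5):
    """One agent asks `helper` to re-rank `goal`; modelled here as the helper
    discounting its cost estimate so the goal may become its best goal."""
    helper.costs[goal] *= discount
    return helper.best_goal() == goal
```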

Proceedings ArticleDOI
03 Nov 1993
TL;DR: A model consisting of many agents with limited communication ability, an extension of the ant algorithm, is introduced, and the conditions for cooperation between agents in a limited-communication environment and the effect of the communication methods are evaluated.
Abstract: In systems that complete a goal through agent cooperation, i.e. multi-agent systems, communication between agents is a very important problem: to solve a problem through agent cooperation, communication between agents is necessary and indispensable. We introduce a model consisting of many agents with limited communication ability that complete a goal in cooperation with each other. This model is an extension of the ant algorithm, and in it the communication between agents may fail. The agents therefore have to communicate by means different from traditional methods; they have certain communication abilities, called sensing communication abilities. We evaluate the conditions for cooperation between agents in the limited-communication environment and the effect of the communication methods.
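A hedged sketch of the kind of model described: agents communicate only through markers laid in the environment (ant-algorithm style), and each attempt to read or deposit a marker can fail. The grid neighbourhood, failure probability, and weighting rule are illustrative assumptions, not the authors' model.

```python
import random
from collections import defaultdict

# Hedged sketch of an ant-style model with unreliable, environment-mediated
# communication. Parameters and the update rule are illustrative assumptions.

class MarkerField:
    def __init__(self, fail_prob=0.2):
        self.markers = defaultdict(float)        # cell -> marker strength
        self.fail_prob = fail_prob

    def deposit(self, cell, amount=1.0):
        if random.random() >= self.fail_prob:    # a deposit may fail
            self.markers[cell] += amount

    def read(self, cell):
        if random.random() < self.fail_prob:     # a reading may fail
            return 0.0
        return self.markers[cell]

def choose_next(cell, field):
    """Prefer neighbouring cells with stronger (successfully read) markers."""
    x, y = cell
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    weights = [field.read(n) + 0.1 for n in neighbours]   # 0.1 keeps exploration
    return random.choices(neighbours, weights=weights)[0]
```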


Journal ArticleDOI
TL;DR: An overview of the distributed blackboard paradigm for which DKRL was developed is given, along with the design considerations, the current status of DKRL, and preliminary experience in using it for the development of a gate assignment application.
Abstract: For distributed problem solving systems, there is a need to define knowledge at two levels, one external to the agents and the other internal to the agents. External knowledge is required to achieve cooperation among agents and global convergence of the problem solving process, whereas internal knowledge is required to solve the sub-problems assigned to the agents. External knowledge specifies the role of each agent and its relationship with other agents in the system. Internal knowledge specifies knowledge structure and the problem solving process within each agent. DKRL is an object-oriented language for describing distributed blackboard systems. In DKRL a problem solving system is described as a collection of distributed intelligent, autonomous agents (modelled as objects), cooperating to solve the problem. An agent consists of a knowledge base, a behaviour part, a local controller, a monitor, and a communication controller. DKRL has characteristics of a dynamic nature, i.e. the agents can be created dynamically and the relationship among them also changes dynamically. An agent in DKRL’s computational model cannot be activated by more than one message at the same time and uses a virtual synchrony environment for message transmission among agents. This model combines the advantages of remote procedure calls with those of asynchronous message passing. DKRL allows object-oriented programming techniques to be used for system development and facilitates the development by allowing one-to-one mapping between the objects in the knowledge model and the knowledge base of the agent. In this paper, we give an overview of the distributed blackboard paradigm for which DKRL was developed as well as the design considerations. We also propose and formally describe the underlying models of DKRL and explain how concurrency is exploited by DKRL. We conclude with the current status of and preliminary experience with DKRL in using it for the development of a gate assignment problem.
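The abstract lists the five components of a DKRL agent and the one-message-at-a-time activation rule. The following Python shape is a hedged illustration of that structure only; DKRL itself is an object-oriented description language, and none of these names come from it.

```python
import queue

# Hedged illustration of the agent structure the abstract lists for DKRL:
# knowledge base, behaviour part, local controller, monitor, and communication
# controller, with at most one message processed at a time. Not DKRL syntax.

class DKRLStyleAgent:
    def __init__(self, knowledge_base, behaviours):
        self.knowledge_base = knowledge_base     # internal problem-solving knowledge
        self.behaviours = behaviours             # behaviour part: message type -> handler
        self.mailbox = queue.Queue()             # communication controller's buffer
        self.busy = False                        # local controller: one message at a time
        self.log = []                            # monitor: record of activations

    def receive(self, message):
        self.mailbox.put(message)

    def run_once(self):
        if self.busy or self.mailbox.empty():
            return
        self.busy = True                         # no second activation while handling
        message = self.mailbox.get()
        handler = self.behaviours.get(message["type"])
        if handler:
            handler(self.knowledge_base, message)
        self.log.append(message["type"])
        self.busy = False
```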

Journal Article
TL;DR: This article considers two main approaches: the emergent behaviour of autonomous agents in a multiagent environment and the cooperation between agents.
Abstract: This article deals with distributed artificial intelligence, decentralized artificial intelligence and multiagent systems. It considers two main approaches: the emergent behaviour of autonomous agents in a multiagent environment and the cooperation between agents.

Proceedings ArticleDOI
14 Sep 1993
TL;DR: The aim of the paper is to describe an Agent Formalism using an object-oriented approach based on High Level Petri Nets (HLPNs), which permit a concise and adequate way to formalize complex parallel or distributed systems and allow very subtle formal validation.
Abstract: The emergence of multi-agent systems in distributed artificial intelligence (DAI) is concerned with coordinating intelligent behaviour among a collection of autonomous intelligent agents. Methodologically speaking, the conception of multi-agent systems can be viewed as a specialization of the object-oriented paradigm. The use of Petri nets permits a concise and adequate way to formalize complex parallel or distributed systems, allowing very subtle formal validation. Hence, merging the agent concept, the object paradigm and Petri nets is a powerful way to model, specify and analyse complex distributed or parallel systems and applications. This is the aim of the paper, which describes an Agent Formalism using an object-oriented approach based on High Level Petri Nets (HLPNs).

Proceedings ArticleDOI
A. Haddadi, K. Sundermeyer
30 Mar 1993
TL;DR: Interactions in multiagent systems, a class of autonomous distributed systems, are considered, and the factors influencing the search for and discovery of relevant systems are specified.
Abstract: Interactions in multiagent systems, a class of autonomous distributed systems, are considered. Autonomy in such settings implies rational action in unforeseen surroundings, where one system may have to interact with a varying number and various types of systems at different times. In order to adapt adequately and effectively to new situations, a system should dynamically become knowledgeable about a selected number of systems in its surroundings. For this purpose, it has to find those systems that are relevant to it, that is, those that may support or hinder it in achieving its goals, and those that may require its support. The factors influencing the search for and discovery of relevant systems are specified, and the reasoning and decision processes involved are described.