
Showing papers on "Multi-agent system published in 1996"


Book ChapterDOI
12 Aug 1996
TL;DR: This work proposes a formal definition of an autonomous agent which clearly distinguishes a software agent from just any program, and offers the beginnings of a natural kinds taxonomy of autonomous agents.
Abstract: The advent of software agents gave rise to much discussion of just what such an agent is, and of how they differ from programs in general. Here we propose a formal definition of an autonomous agent which clearly distinguishes a software agent from just any program. We also offer the beginnings of a natural kinds taxonomy of autonomous agents, and discuss possibilities for further classification. Finally, we discuss subagents and multiagent systems.

2,504 citations


Proceedings Article
17 Mar 1996
TL;DR: This work investigates the extent to which methods from single-agent planning and learning can be applied in multiagent settings, focusing on the decomposition of sequential decision processes so that coordination can be learned locally, at the level of individual states.
Abstract: There has been a growing interest in AI in the design of multiagent systems, especially in multiagent cooperative planning. In this paper, we investigate the extent to which methods from single-agent planning and learning can be applied in multiagent settings. We survey a number of different techniques from decision-theoretic planning and reinforcement learning and describe a number of interesting issues that arise with regard to coordinating the policies of individual agents. To this end, we describe multiagent Markov decision processes as a general model in which to frame this discussion. These are special n-person cooperative games in which agents share the same utility function. We discuss coordination mechanisms based on imposed conventions (or social laws) as well as learning methods for coordination. Our focus is on the decomposition of sequential decision processes so that coordination can be learned (or imposed) locally, at the level of individual states. We also discuss the use of structured problem representations and their role in the generalization of learned conventions and in approximation.
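
As a rough, hypothetical illustration of the coordination problem framed above (not taken from the paper), the following Python sketch shows two agents that share a single utility function in a small cooperative game; a lexicographic tie-breaking convention, standing in for an imposed social law, lets both agents independently select the same optimal joint action. The payoff values and action names are invented.

```python
# Hedged sketch: coordination in a fully cooperative (identical-payoff) game.
# The payoff matrix, action names, and the lexicographic convention are
# illustrative assumptions, not taken from the paper.

actions = ["left", "right"]

# Shared utility for each joint action (agent1_action, agent2_action).
# Two optimal equilibria exist, so uncoordinated choice can miscoordinate.
shared_utility = {
    ("left", "left"): 10,
    ("right", "right"): 10,
    ("left", "right"): 0,
    ("right", "left"): 0,
}

def best_joint_actions(utility):
    """Return all joint actions that maximize the shared utility."""
    best = max(utility.values())
    return [ja for ja, u in utility.items() if u == best]

def convention_pick(joint_actions):
    """Imposed convention (a 'social law'): break ties lexicographically,
    so every agent independently selects the same optimal joint action."""
    return min(joint_actions)

optima = best_joint_actions(shared_utility)
chosen = convention_pick(optima)
print("optimal joint actions:", optima)
print("joint action selected by convention:", chosen)
```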

496 citations


01 Jan 1996
TL;DR: This dissertation analyses negotiations among agents that try to maximize payoff without concern for the global good, where computational limitations restrict each agent's rationality: in combinatorial negotiation domains computational complexity precludes enumerating and evaluating all possible outcomes.
Abstract: In multiagent systems, computational agents search for and make contracts on behalf of the real world parties that they represent. This dissertation analyses negotiations among agents that try to maximize payoff without concern for the global good. Such a self-interested agent will choose the best negotiation strategy for itself. Accordingly, the interaction protocols need to be designed normatively so that the desired local strategies are the best ones for the agents (and thus the agents will use them), from which certain desirable social outcomes follow. The normative approach allows the agents to be constructed by separate designers and/or to represent different parties. Game theory also takes a normative approach, but full rationality of the agents is usually assumed. This dissertation focuses on situations where computational limitations restrict each agent's rationality: in combinatorial negotiation domains computational complexity precludes enumerating and evaluating all possible outcomes. The dissertation contributes to three areas: automated contracting, coalition formation, and contract execution. The contract net framework is extended to work among self-interested, computationally limited agents. The original contract net lacked a formal model for making bidding and awarding decisions, while in this work these decisions are based on marginal approximations of cost calculations. Agents pay each other for handling tasks. An iterative scheme for anytime task reallocation is presented. Next it is proven that a leveled commitment contracting protocol enables contracts that are impossible via classical full commitment contracts. Three new contract types are presented: clustering, swaps and multiagent contracts. These can be combined into a new type, the CSM-contract, which is provably necessary and sufficient for reaching a globally optimal task allocation. Next, contracting implications of limited computation are discussed, including the necessity of local deliberation scheduling, and tradeoffs between computational complexity and monetary risk when an agent can participate in multiple simultaneous negotiations. Finally, issues in distributed asynchronous implementation are discussed. A normative theory of coalitions among self-interested, computationally limited agents is developed. It states which agents should form coalitions and which coalition structures are stable. These analytical prescriptions depend on the performance profiles of the agents' problem solving algorithms and the unit cost of computation. The prescriptions differ significantly from those for fully rational agents. The developed theory includes a formal, application-independent domain classification for bounded rational agents, and relates it precisely to two traditional domain classifications of fully rational agents. Experimental results are presented. Unenforced exchange methods are particularly desirable among computational agents because litigation is difficult. A method for carrying out exchanges without enforcement is presented. It is based on splitting the exchange into chunks that are delivered one at a time. Two chunking algorithms are developed, as well as a nontrivial sound and complete quadratic chunk sequencing algorithm. Optimal stable strategies for carrying out the exchange are derived. The role of real time is also analyzed, and deadline methods are developed that do not themselves require enforcement.
All of these analyses are carried out for isolated exchanges as well as for exchanges where reputation effects prevail. Finally, it is argued that the unenforced exchange method hinders unfair renegotiation. The developed methods in all three subareas are domain independent. The possibility of scaling to large problem instances was shown experimentally on an NP-complete distributed vehicle routing problem. The large-scale problem instance was collected from five real-world dispatch centers. (Abstract shortened by UMI.)
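
The following Python sketch is a hypothetical, much-simplified rendering of marginal-cost-based bidding and awarding in a contract net among self-interested agents, in the spirit of the anytime task reallocation described above; the cost function, task data, and greedy transfer loop are invented for illustration and are not the dissertation's actual algorithms.

```python
# Hedged sketch of marginal-cost based contracting, in the spirit of the
# extended contract net described above.  Cost model, task data and the
# greedy announce/bid/award loop are illustrative assumptions only.

def handling_cost(tasks):
    """Stand-in cost function: an agent's handling cost grows
    super-linearly with the number of tasks it holds."""
    return len(tasks) ** 2

def marginal_removal_gain(agent_tasks, task):
    """Cost saved by the contractor if it gives `task` away."""
    remaining = [t for t in agent_tasks if t != task]
    return handling_cost(agent_tasks) - handling_cost(remaining)

def marginal_addition_cost(agent_tasks, task):
    """Extra cost incurred by a contractee if it takes on `task`."""
    return handling_cost(agent_tasks + [task]) - handling_cost(agent_tasks)

def one_contracting_round(allocation):
    """Try a single beneficial task transfer (anytime reallocation step)."""
    for seller, tasks in allocation.items():
        for task in list(tasks):
            saving = marginal_removal_gain(tasks, task)
            # Buyers bid their marginal cost; the seller awards the task only
            # if some bid is below its own marginal saving (both sides profit).
            bids = {b: marginal_addition_cost(bt, task)
                    for b, bt in allocation.items() if b != seller}
            buyer, bid = min(bids.items(), key=lambda kv: kv[1])
            if bid < saving:
                tasks.remove(task)
                allocation[buyer].append(task)
                return (seller, buyer, task, saving, bid)
    return None

allocation = {"A": ["t1", "t2", "t3", "t4"], "B": ["t5"]}
while (deal := one_contracting_round(allocation)) is not None:
    print("transfer:", deal)
print("final allocation:", allocation)
```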

182 citations


BookDOI
01 Jan 1996

155 citations


Book ChapterDOI
TL;DR: Fault handling in Multi-Agent Systems is not much addressed in current research and is often assumed to be well covered by traditional methods; it is shown that this assumption is not necessarily true, at least not under the application assumptions made here.
Abstract: Fault handling in Multi-Agent Systems (MAS) is not much addressed in current research. Normally, it is considered difficult to address in detail and often well covered by traditional methods, relying on the underlying communication and operating system. In this paper it is shown that this is not necessarily true, at least not with the assumptions on applications we have made. These assumptions are a massive distribution of computing components, a heterogeneous underlying infrastructure (in terms of hardware, software and communication methods), an emerging configuration, possibly different parties in control of sub-systems, and real-time demands in parts of the system.

120 citations


Journal ArticleDOI
TL;DR: This article examines how the Clarke tax could be used as an effective consensus mechanism in domains consisting of automated agents, and considers how agents can come to a consensus without needing to reveal full information about their preferences and without needing to generate alternatives prior to the voting process.
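
A minimal sketch of the Clarke (pivotal) tax may help make the idea concrete: the alternative maximizing reported total utility is chosen, and each agent is taxed by the loss its participation imposes on the others. The alternatives and valuations below are invented; the article's own treatment covers aspects (such as limited preference revelation) not shown here.

```python
# Hedged sketch of the Clarke tax (pivotal mechanism) as a consensus device.
# The alternatives and valuations below are invented for illustration.

def choose(prefs):
    """Pick the alternative with the highest sum of reported utilities."""
    alternatives = next(iter(prefs.values())).keys()
    return max(alternatives, key=lambda a: sum(p[a] for p in prefs.values()))

def clarke_tax(prefs, agent):
    """Tax on `agent`: others' best achievable welfare without the agent,
    minus others' welfare under the alternative actually chosen."""
    others = {name: p for name, p in prefs.items() if name != agent}
    chosen_with = choose(prefs)
    chosen_without = choose(others)

    def welfare(alt):
        return sum(p[alt] for p in others.values())

    return welfare(chosen_without) - welfare(chosen_with)

# Reported utilities of three agents over two joint plans.
prefs = {
    "agent1": {"planA": 60, "planB": 10},
    "agent2": {"planA": 0,  "planB": 40},
    "agent3": {"planA": 20, "planB": 25},
}

decision = choose(prefs)
print("chosen alternative:", decision)
for name in prefs:
    # Only pivotal agents (those who change the outcome) pay a nonzero tax.
    print(name, "pays Clarke tax", clarke_tax(prefs, name))
```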

100 citations


Proceedings ArticleDOI
19 Jun 1996
TL;DR: A conceptualization of the coordination task around the notion of structured "conversation" amongst agents is proposed and a complete multiagent programming language and system for explicitly representing, applying and capturing coordination knowledge is built.
Abstract: The agent view provides a level of abstraction at which we envisage computational systems carrying out cooperative work by interoperating across networked people, organizations and machines. A major challenge in building such systems is coordinating the behavior of the individual agents to achieve the individual and shared goals of the participants. We propose a conceptualization of the coordination task around the notion of structured "conversation" amongst agents. Based on this notion we build a complete multiagent programming language and system for explicitly representing, applying and capturing coordination knowledge. The language provides KQML-based communication, an agent definition and execution environment, support for describing interactions as multiple structured conversations among agents and rule-based approaches to conversation selection, conversation execution and event handling. The major application of the system is the construction and integration of multiagent supply chain systems for manufacturing enterprises. This application is used throughout the paper to illustrate the introduced concepts and language constructs.
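
A hypothetical sketch of the central idea, a structured "conversation" driven by KQML-style performatives, is given below as a small finite-state machine. The states, performatives, and rule table are invented for illustration and do not reproduce the paper's actual language or conversation rules.

```python
# Hedged sketch: one structured "conversation" modelled as a finite-state
# machine whose transitions are triggered by KQML-style performatives.
# The state names, performatives and rule table are illustrative only.

class Conversation:
    # (current_state, incoming_performative) -> next_state
    RULES = {
        ("start",     "propose"): "proposed",
        ("proposed",  "accept"):  "committed",
        ("proposed",  "reject"):  "failed",
        ("committed", "tell"):    "done",
    }

    def __init__(self, name):
        self.name = name
        self.state = "start"

    def handle(self, performative, content):
        """Apply the conversation rule matching the incoming message."""
        key = (self.state, performative)
        if key not in self.RULES:
            raise ValueError(f"{self.name}: no rule for {key}")
        self.state = self.RULES[key]
        print(f"{self.name}: ({performative} :content {content!r}) -> {self.state}")

# A supply-chain flavoured exchange between two agents sharing one conversation.
conv = Conversation("order-123")
conv.handle("propose", "supply 40 units by Friday")
conv.handle("accept", "ok")
conv.handle("tell", "units shipped")
print("final state:", conv.state)
```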

79 citations


Book ChapterDOI
12 Aug 1996
TL;DR: The construction of an agent simulation environment that is strongly based on a formal theory of agent systems, yet is intended to serve as a basis for practical development, is described.
Abstract: There is a growing body of work that concentrates on theoretical aspects of agents and multi-agent systems, and a complementary body of work concerned with building practical systems. However, the two have typically been unrelated. This gap between the theory and practice of intelligent agents has only relatively recently begun to be addressed. In this paper we describe the construction of an agent simulation environment that is based strongly on a formal theory of agent systems, but which is intended to serve in exactly this way as a basis for practical development. The paper briefly introduces the theory, then describes the system and the simple reactive agents built with it, but most importantly shows how it reflects the theoretical framework and how it facilitates incremental agent design and implementation. Using this example as a case-study, some possibilities for a methodology for the development of agent systems are discussed.

79 citations


Book ChapterDOI
12 Aug 1996
TL;DR: Concurrent Descriptive Dynamic Logic (an extension of Peleg's Concurrent Dynamic Logic) is introduced as the specification language to account for the computational interpretation of multi-agent systems.
Abstract: We present a formalism for the specification of agents within a multi-agent system in which we characterize agents through a layered architecture with bridge rules between formal theories, and multi-agent systems through dialogical frameworks. Concurrent Descriptive Dynamic Logic (an extension of Peleg's Concurrent Dynamic Logic) is introduced as the specification language to account for the computational interpretation of such multi-agent systems.

72 citations


01 Jan 1996
TL;DR: An approach to the interleaving of execution and planning which is based on the RPN semantics is provided and it is shown how this approach can be used to coordinate agents' plans in a shared and dynamic environment.
Abstract: Distributed planning is fundamental to the generation of cooperative activities in Multi-Agent Systems. It requires both an adequate plan representation and efficient interaction methods allowing agents to coordinate their plans. This paper proposes a recursive model for the representation and the handling of plans by means of Recursive Petri Nets (RPN), which support the specification of concurrent activities, reasoning about simultaneous actions and continuous processes, a theory of verification and mechanisms of transformation (e.g. abstraction, refinement, merging). The main features of the RPN formalism are domain independence, broad coverage of interacting situations and operational coordination. This paper also provides an approach to the interleaving of execution and planning which is based on the RPN semantics and gives some significant methods allowing plan management in distributed planning. It goes on to show how this approach can be used to coordinate agents' plans in a shared and dynamic environment.
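
As a toy, hypothetical illustration of one idea above, interleaving plan refinement with execution, the sketch below expands abstract steps into sub-plans on demand. It is not the Recursive Petri Net formalism itself (no places, tokens, or concurrency are modelled); the step names and refinements are invented.

```python
# Hedged sketch of the *idea* behind recursive plan refinement: abstract steps
# in a plan expand into sub-plans when reached, so planning and execution can
# be interleaved.  This is a toy stand-in, not the RPN formalism; all step
# names and refinements are invented.

refinements = {
    "deliver-package": ["plan-route", "drive", "hand-over"],
    "plan-route":      ["query-map", "choose-shortest-path"],
}

def execute(plan, depth=0):
    """Depth-first interleaving of refinement and execution:
    abstract steps are expanded on demand, elementary steps are executed."""
    for step in plan:
        if step in refinements:                      # abstract transition
            print("  " * depth + f"refine {step}")
            execute(refinements[step], depth + 1)    # enter the sub-plan
        else:                                        # elementary transition
            print("  " * depth + f"execute {step}")

execute(["pick-up-package", "deliver-package", "report-done"])
```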

70 citations


01 Jan 1996
TL;DR: In this paper, the authors present an approach to the problem of how an agent, within an economic multi-agent system, can determine when it should behave strategically (i.e. model the other agents) and when it should act as a simple price-taker.
Abstract: We present our approach to the problem of how an agent, within an economic Multi-Agent System, can determine when it should behave strategically (i.e. model the other agents), and when it should act as a simple price-taker. We provide a framework for the incremental implementation of modeling capabilities in agents. These agents were implemented and different populations simulated in order to learn more about their behavior and the merits of using agent models. Our results show, among other lessons, how savvy buyers can avoid being "cheated" by sellers, how price volatility can be used to quantitatively predict the benefits of deeper models, and how specific types of agent populations influence system behavior.

Journal ArticleDOI
TL;DR: In this article, the authors distinguish between two types of agents within a multi-agent system: controllable agents which are directly controlled by the system's designer, and uncontrollable agents, which are not under the designer's direct control.
Abstract: Motivated by the control theoretic distinction between controllable and uncontrollable events, we distinguish between two types of agents within a multi-agent system: controllable agents, which are directly controlled by the system's designer, and uncontrollable agents, which are not under the designer's direct control. We refer to such systems as partially controlled multi-agent systems, and we investigate how one might influence the behavior of the uncontrolled agents through appropriate design of the controlled agents. In particular, we wish to understand which problems are naturally described in these terms, what methods can be applied to influence the uncontrollable agents, the effectiveness of such methods, and whether similar methods work across different domains. Using a game-theoretic framework, this paper studies the design of partially controlled multi-agent systems in two contexts: in one context, the uncontrollable agents are expected utility maximizers, while in the other they are reinforcement learners. We suggest different techniques for controlling agents' behavior in each domain, assess their success, and examine their relationship.
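
A hypothetical sketch of the reinforcement-learning case may help: a designer-controlled agent punishes one action of an uncontrollable Q-learning agent, steering it toward the behaviour the designer prefers over repeated interaction. The payoffs, learning rate, and punishment rule are invented and are not the paper's exact setup.

```python
# Hedged sketch of a partially controlled setting with a reinforcement
# learning adversary: the controlled agent punishes one action of an
# uncontrollable Q-learner so that, over repeated play, the learner drifts
# toward the behaviour the designer wants.  Payoffs, learning rate and the
# punishment rule are illustrative assumptions.

import random

base_reward = {"cooperate": 3.0, "defect": 5.0}   # defecting looks better alone
PUNISHMENT = 6.0                                   # applied by the controlled agent

def controlled_agent_response(uncontrollable_action):
    """Designer's strategy: punish defection, otherwise do nothing."""
    return -PUNISHMENT if uncontrollable_action == "defect" else 0.0

q = {"cooperate": 0.0, "defect": 0.0}
alpha, epsilon = 0.1, 0.1
random.seed(0)

for step in range(2000):
    if random.random() < epsilon:                  # epsilon-greedy exploration
        action = random.choice(list(q))
    else:
        action = max(q, key=q.get)
    reward = base_reward[action] + controlled_agent_response(action)
    q[action] += alpha * (reward - q[action])      # stateless Q-update

print("learned values:", {a: round(v, 2) for a, v in q.items()})
print("preferred action:", max(q, key=q.get))
```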

Proceedings ArticleDOI
22 Apr 1996
TL;DR: A new architecture and negotiation protocol for the dynamic scheduling of manufacturing systems is presented, based on two paradigms, multi-agent systems and holonic systems; the protocol dynamically assigns operations to the resources of the manufacturing system in order to accomplish the proposed tasks.
Abstract: This paper deals with a new architecture and negotiation protocol for the dynamic scheduling of manufacturing systems. The architecture is based on two paradigms: multi-agent systems and holonic systems. The main contribution in the architecture is the existence of holons representing tasks together with holons representing resources. The well known contract net protocol has been adapted to handle temporal constraints and to deal with conflicts. The purpose of this protocol is to dynamically assign operations to the resources of the manufacturing system in order to accomplish the proposed tasks. This protocol involves a renegotiation phase whenever exceptions appear. It also deals with conflict situations, namely with the case of the "indecision problem". The approach we use assumes that deadlines are the most important constraints to consider; thus the acceptance or refusal by a resource of a specific operation depends on its capability to execute the operation within the specified deadline.
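
The sketch below gives a hypothetical, deadline-driven version of the announce-bid-award cycle described above: a resource refuses an operation it cannot finish by the deadline, and the task side awards to the earliest feasible finisher. Processing times, loads, and the deadline are invented; the paper's protocol (renegotiation, conflict handling) is richer than this.

```python
# Hedged sketch of deadline-driven bidding by resource holons: a resource
# refuses an announced operation if it cannot finish before the deadline,
# and the task holon awards the operation to the earliest finisher.
# Processing times, loads and the deadline are invented for illustration.

def bid(resource, operation):
    """Return the finish time offered by a resource, or None to refuse."""
    start = resource["busy_until"]
    finish = start + resource["proc_time"][operation["name"]]
    return finish if finish <= operation["deadline"] else None

def award(resources, operation):
    """Task holon: collect bids and award to the earliest feasible finisher."""
    bids = {r["id"]: bid(r, operation) for r in resources}
    feasible = {rid: f for rid, f in bids.items() if f is not None}
    if not feasible:
        return None                  # would trigger renegotiation / exception handling
    return min(feasible.items(), key=lambda kv: kv[1])

resources = [
    {"id": "mill-1", "busy_until": 4, "proc_time": {"drill": 3, "mill": 2}},
    {"id": "mill-2", "busy_until": 1, "proc_time": {"drill": 5, "mill": 4}},
]
operation = {"name": "drill", "deadline": 7}

print("award:", award(resources, operation))
```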

Journal ArticleDOI
TL;DR: The paper shows how the coordination language is used in the Agent Building Shell to manage content-based information distribution scenarios among agents and the coordination aspects of conflict management processes that occur when agents encounter inconsistencies.
Abstract: The agent view provides a level of abstraction at which we envisage computational systems carrying out cooperative work by interoperating globally across networks connecting people, organizations and machines. A major challenge in building such systems is coordinating the behavior of the individual agents to achieve the individual and shared goals of the participants. As part of a larger project targeted at developing an Agent Building Shell for multiagent applications, we have designed and implemented a coordination language aimed at explicitly representing, applying and capturing coordination knowledge for multiagent systems. The language provides KQML-based communication, an agent definition and execution environment, support for modeling interactions as multiple structured conversations among agents, rule-based approaches to conversation selection and execution, as well as an interactive tool for in context acquisition and debugging of cooperation knowledge. The paper presents these components in detail and then shows how the coordination language is used in the Agent Building Shell to manage content-based information distribution scenarios among agents and the coordination aspects of conflict management processes that occur when agents encounter inconsistencies. The major application of the system is the construction and integration of multiagent supply chain systems for manufacturing enterprises. This application is used throughout the paper to illustrate the introduced concepts and language constructs.

Proceedings Article
04 Aug 1996
TL;DR: The integration of a learning module into a communication-intensive negotiating agent architecture gives the agents the ability to learn about other agents' preferences via past interactions, which allows them to make better coordinated decisions.
Abstract: In multiagent systems, an agent does not usually have complete information about the preferences and decision making processes of other agents. This might prevent the agents from making coordinated choices, purely due to their ignorance of what others want. This paper describes the integration of a learning module into a communication-intensive negotiating agent architecture. The learning module gives the agents the ability to learn about other agents' preferences via past interactions. Over time, the agents can incrementally update their models of other agents' preferences and use them to make better coordinated decisions. Combining both communication and learning, as two complementary knowledge acquisition methods, helps to reduce the amount of communication needed on average, and is justified in situations where communication is computationally costly or simply not desirable (e.g. to preserve individual privacy).
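
As a hypothetical illustration of learning other agents' preferences from past interactions, the sketch below keeps a simple smoothed frequency model of a partner's accept/reject decisions and uses it to rank new proposals. The issues, values, and update rule are invented; the paper's learning module may differ substantially.

```python
# Hedged sketch of the learning idea above: an agent keeps a frequency model
# of a partner's past accept/reject decisions per proposal and uses it to
# rank new proposals.  Proposals and the update rule are invented.

from collections import defaultdict

class PartnerModel:
    def __init__(self):
        self.accepts = defaultdict(int)
        self.offers = defaultdict(int)

    def record(self, proposal, accepted):
        """Update the model from one past interaction."""
        self.offers[proposal] += 1
        if accepted:
            self.accepts[proposal] += 1

    def acceptance_estimate(self, proposal):
        """Laplace-smoothed estimate of P(partner accepts proposal)."""
        return (self.accepts[proposal] + 1) / (self.offers[proposal] + 2)

model = PartnerModel()
history = [("meet-morning", False), ("meet-afternoon", True),
           ("meet-afternoon", True), ("meet-morning", False)]
for proposal, accepted in history:
    model.record(proposal, accepted)

candidates = ["meet-morning", "meet-afternoon", "meet-evening"]
ranked = sorted(candidates, key=model.acceptance_estimate, reverse=True)
print({c: round(model.acceptance_estimate(c), 2) for c in candidates})
print("best proposal to make next:", ranked[0])
```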

Journal ArticleDOI
TL;DR: A heterogeneous multi-agent concurrent engineering system consisting of multiple feature-based design sub-systems, multiple simulated shop-floor resource groups, a supervisory control interface and the coordination mechanisms for multi-agent cooperation has been developed.
Abstract: The centralized planning and control that has defined the traditional information processing structure of manufacturing systems is no longer suited to the current rapidly changing manufacturing environment. For efficient use of manufacturing resources and increased flexibility, it is necessary to migrate to a distributed information processing system in which individual entities can work cooperatively towards overall system goals. The next generation of manufacturing systems requires such an information technology framework to integrate the system components and activities into a larger collaborative enterprise. This paper describes a multi-agent approach to concurrent design, manufacturability analysis, process planning, routing and scheduling. A heterogeneous multi-agent concurrent engineering system consisting of multiple feature-based design sub-systems, multiple simulated shop-floor resource groups, a supervisory control interface and the coordination mechanisms for multi-agent cooperation, has been developed. The architecture of this distributed system and the associated implementation issues are discussed.

Book ChapterDOI
12 Aug 1996
TL;DR: A conceptualization of the coordination task around the notion of structured “conversation” amongst agents is proposed and a complete multiagent programming language and system for explicitly representing, applying and capturing coordination knowledge is built.
Abstract: The agent view provides a level of abstraction at which we envisage computational systems carrying out cooperative work by interoperating across networked people, organizations and machines. A major challenge in building such systems is coordinating the behavior of the individual agents to achieve the individual and shared goals of the participants. In this paper we propose a conceptualization of the coordination task around the notion of structured “conversation” amongst agents. Based on this notion we build a complete multiagent programming language and system for explicitly representing, applying and capturing coordination knowledge. The language provides KQML-based communication, an agent definition and execution environment, support for describing interactions as multiple structured conversations among agents, rule-based approaches to conversation selection, conversation execution and event handling, a model and an interactive graphical tool for in context acquisition and debugging of coordination knowledge. The major application of the system is the construction and integration of multiagent supply chain systems for manufacturing enterprises. This application is used throughout the paper to illustrate the introduced concepts and language constructs.

Book ChapterDOI
27 Aug 1996
TL;DR: This paper presents the CoMoMAS methodology and environment for the development of multi-agent systems, which takes several agent-specific aspects into consideration, in particular, the use of additional knowledge structures and their flexible representation to guarantee an agent's autonomy at execution time.
Abstract: This paper presents the CoMoMAS methodology and environment for the development of multi-agent systems. We use a conceptual model set to describe a multi-agent system under different views. These models are derived from the knowledge-engineering methodology CommonKADS. In contrast to CommonKADS, our approach takes several agent-specific aspects into consideration, in particular, the use of additional knowledge structures and their flexible representation to guarantee an agent's autonomy at execution time. A knowledge engineering environment has been conceived to demonstrate the feasibility of this conceptual approach. Conceptual models are represented in an extended version of the Conceptual Modeling Language (CML).

01 Jan 1996
TL;DR: A novel, general, sound method is presented for multiple reinforcement learning agents living a single life with limited computational resources in an unrestricted environment, based on an efficient, stack-based backtracking procedure called "environment-independent reinforcement acceleration" (EIRA), which is guaranteed to make each agent's learning history a history of performance improvements (long term reinforcement accelerations).
Abstract: Previous approaches to multi-agent reinforcement learning are either very limited or heuristic by nature. The main reason is: each agent's environment continually changes because the other agents keep changing. Traditional reinforcement learning algorithms cannot properly deal with this. This paper, however, introduces a novel, general, sound method for multiple reinforcement learning agents living a single life with limited computational resources in an unrestricted environment. The method properly takes into account that whatever some agent learns at some point may affect learning conditions for other agents or for itself at any later point. It is based on an efficient, stack-based backtracking procedure called "environment-independent reinforcement acceleration" (EIRA), which is guaranteed to make each agent's learning history a history of performance improvements (long term reinforcement accelerations). The principles have been implemented in an illustrative multi-agent system, where each agent is in fact just a connection in a fully recurrent reinforcement learning neural net.
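
The following is a deliberately simplified, hypothetical rendering of the stack-based backtracking idea: each self-modification is pushed together with the lifelong reward/time ratio at its creation, and at checkpoints modifications whose ratio has not improved are undone, so the surviving history is one of accelerations. The environment, policy representation, and constants are invented; the actual EIRA bookkeeping differs in detail.

```python
# Hedged, much-simplified rendering of the stack-based backtracking idea.
# Each policy modification is pushed with the agent's lifelong reward/time
# ratio at creation time; at a checkpoint, modifications whose ratio has not
# improved since they were made are undone (popped).  The toy environment,
# policy representation and numbers are invented; real EIRA differs.

import random

random.seed(1)
policy = {"explore_rate": 0.5}          # single tunable policy parameter
stack = []                               # entries: (saved_policy, ratio_at_creation)
total_reward, total_time = 0.0, 0.0

def act_and_observe(p):
    """Toy environment: lower exploration tends to pay off here."""
    return random.random() + (1.0 - p["explore_rate"])

for step in range(1, 301):
    total_reward += act_and_observe(policy)
    total_time += 1.0

    if step % 20 == 0:                   # self-modification attempt
        stack.append((dict(policy), total_reward / total_time))
        policy["explore_rate"] = random.uniform(0.0, 1.0)

    if step % 60 == 0:                   # checkpoint: enforce acceleration
        current_ratio = total_reward / total_time
        while stack and current_ratio <= stack[-1][1]:
            saved_policy, _ = stack.pop()
            policy = saved_policy        # undo the unprofitable modification

print("surviving modifications:", len(stack))
print("final policy:", policy, "reward/time:", round(total_reward / total_time, 3))
```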

Journal ArticleDOI
TL;DR: To permit agents to rely on the lower levels, agents are provided with social laws that guarantee coordination between agents and minimize the need to call a central coordinator or to engage in negotiation, which requires intense communication.
Abstract: A framework for designing a Multiagent System (MAS) in which agents are capable of coordinating their activities in routine, familiar, and unfamiliar situations is proposed. This framework is based on the Skills, Rules and Knowledge (S-R-K) taxonomy of Rasmussen. Thus, the proposed framework should allow agents to prefer the lower skill-based and rule-based levels rather than the higher knowledge-based level, because it is generally easier to obtain and maintain coordination between agents in routine and familiar situations than in unfamiliar situations. The framework should also support each of the three levels, because complex tasks combined with complex interactions require all levels. To permit agents to rely on the lower levels, agents are provided with social laws that guarantee coordination between agents and minimize the need to call a central coordinator or to engage in negotiation, which requires intense communication. Finally, an implementation and experiments on urban traffic scenarios demonstrate the applicability of the major concepts developed in this article.

Journal ArticleDOI
TL;DR: A high-performance software system provides the necessary facilities to experiment with various multiagent-coordination choices at different abstraction levels and empirically evaluate their usefulness for the construction of distributed systems.
Abstract: A high-performance software system provides the necessary facilities to experiment with various multiagent-coordination choices at different abstraction levels. The authors discuss these abstractions and empirically evaluate their usefulness for the construction of distributed systems.

Proceedings ArticleDOI
11 Dec 1996
TL;DR: In this paper, a methodology for designing hybrid controllers for large scale, multi-agent systems is presented based on optimal control and game theory, where two players compete over cost functions that encode properties that the closed loop hybrid system needs to satisfy (e.g., safety).
Abstract: A methodology for designing hybrid controllers for large scale, multiagent systems is presented. Our approach is based on optimal control and game theory. The hybrid design is seen as a game between two players: the control, which is to be chosen by the designer and the disturbances that encode the actions of other agents (in a multi-agent setting), the actions of high level controllers or the usual unmodeled environmental disturbances. The two players compete over cost functions that encode properties that the closed loop hybrid system needs to satisfy (e.g., safety). The control "wins" the game if it can keep the system "safe" for any allowable disturbance. The solution to the game theory problem provides the designer with continuous controllers as well as sets of safe states where the control "wins" the game. The sets of safe states are used to construct an interface to the discrete domain that guarantees the safe operation of the combined hybrid system. The design methodology is illustrated by means of examples.
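
A discrete, hypothetical analogue of the safety game may make the idea concrete: on a small set of states, we iterate an "exists control, for all disturbances" predecessor operator to find the largest set of states the control can keep safe. The toy one-dimensional dynamics (including an "icy" region where control authority is lost) and all sets are invented; the paper treats continuous dynamics with optimal control.

```python
# Hedged, discrete analogue of the safety game above: find the largest set of
# states from which some control action keeps the system out of the unsafe set
# for every disturbance, by iterating an "exists control, forall disturbance"
# predecessor operator.  The toy 1-D dynamics and all sets are invented.

STATES = range(11)            # positions 0..10
UNSAFE = {10}                 # crashing into the wall at 10
DISTURBANCES = [-1, 0, 1]     # moves chosen by the other agents / environment

def controls(x):
    """Control authority depends on the state: none in the icy region."""
    return [0] if x >= 6 else [-1, 0, 1]

def step(x, u, d):
    """Clamped next position under control u and disturbance d."""
    return max(0, min(10, x + u + d))

def maximal_safe_set():
    safe = set(STATES) - UNSAFE
    while True:
        # Keep states where some control works against every disturbance.
        kept = {x for x in safe
                if any(all(step(x, u, d) in safe for d in DISTURBANCES)
                       for u in controls(x))}
        if kept == safe:       # fixed point reached
            return safe
        safe = kept

print("maximal safe set:", sorted(maximal_safe_set()))
# Prints [0, 1, 2, 3, 4, 5]: the discrete layer should switch controllers
# before position 6, since beyond it the disturbance can force a crash.
```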

Journal ArticleDOI
TL;DR: A learning mechanism that allows a multiagent system to cooperate to achieve a gathering task efficiently in unknown and changing environments is presented, and simulation results show that the multiagent system always achieves near-optimal performance.
Abstract: In this article, we present a learning mechanism that allows a multiagent system to cooperate to achieve a gathering task efficiently in unknown and changing environments. The multiagent system is a team of autonomous behavior-based agents with limited communication capabilities. Cooperation is based on the acquisition of signaling behaviors and on the specialization of the agents into different types. Every agent has the same collection of built-in reactive behaviors. Some of the built-in behaviors are fixed, whereas others can be modified through reinforcement learning. The reinforcement signal is delayed until a trial is completed and assesses the collective performance of the team. Each agent uses this common signal to learn which individual behaviors are more suitable for the team. Simulation results, and the corresponding statistical analysis, show that the multiagent system always achieves near-optimal performance.
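
As a hypothetical sketch of learning from a delayed, team-level reinforcement signal, the code below has each agent log which modifiable behaviour it used during a trial and then update that behaviour's selection weight with the shared end-of-trial score. The behaviours, toy scoring, and constants are invented and do not reproduce the paper's agents or task.

```python
# Hedged sketch of learning from a delayed, team-level reinforcement signal:
# during a trial each agent picks one modifiable behaviour; when the trial
# ends, the shared team score updates those behaviours' selection weights.
# The behaviours, toy "gathering" score and constants are invented.

import random

random.seed(2)
BEHAVIOURS = ["forage-alone", "signal-and-recruit"]

def run_trial(weights):
    """One gathering trial for a 4-agent team; returns choices and team score."""
    choices = [random.choices(BEHAVIOURS, weights=[weights[b] for b in BEHAVIOURS])[0]
               for _ in range(4)]
    # Toy environment: recruiting helps the team collect more food.
    team_score = sum(2.0 if c == "signal-and-recruit" else 1.0 for c in choices)
    team_score += random.uniform(-0.5, 0.5)
    return choices, team_score

weights = {b: 1.0 for b in BEHAVIOURS}
baseline = 6.0                      # rough expected score, used for centering
for trial in range(500):
    choices, score = run_trial(weights)
    for agent_choice in choices:    # every agent uses the same delayed signal
        weights[agent_choice] = max(0.05, weights[agent_choice]
                                    + 0.01 * (score - baseline))

print({b: round(w, 2) for b, w in weights.items()})
```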

Book ChapterDOI
16 Nov 1996
TL;DR: In this article, the authors describe a diagnosis agent using logic and logic programming to specify and implement the agent: the knowledge base uses extended logic programming, and the inference machine provides algorithms to compute diagnoses, as well as the reactive layer that realises a meta interpreter for the agent behaviour.
Abstract: We briefly overview the architecture of a diagnosis agent. We employ logic and logic programming to specify and implement the agent: the knowledge base uses extended logic programming to specify the agent's behaviour and its knowledge about the system to be diagnosed. The inference machine, which provides algorithms to compute diagnoses, as well as the reactive layer, which realises a meta-interpreter for the agent behaviour, are implemented in PVM-Prolog, which enhances standard Prolog with message-passing facilities.

Proceedings ArticleDOI
01 Jan 1996
TL;DR: This work states that principled negotiation can be quickly introduced into air traffic operations and allows the agents to achieve a solution as good as that achieved by a centralized controller with perfect knowledge.
Abstract: Principled negotiation coordinates the actions of agents with different interests, allowing distributed optimization. In principled negotiation, agents search for and propose options for mutual gain. If the other agents agree to the proposal, it is implemented. Under certain conditions, an agent can search for options for individual gain without impacting other agents. In these cases, the agent can negotiate with a coordinator, rather than obtain agreement from all other agents. The tenets of principled negotiation are outlined and stated mathematically. Two problems representing air traffic operations are formulated to test the performance of principled negotiation. The first, based on keeping separation between aircraft, has no coupling between the agent actions if certain requirements are met. Principled negotiation allows the agents to achieve a solution as good as that achieved by a centralized controller with perfect knowledge. The second problem, based on negotiating arrival slots, is highly coupled, constraining each agent's available set of actions. Principled negotiation allows agents to search options that would not be available otherwise, improving the utility function of all agents. Principled negotiation can be quickly introduced into air traffic operations. INTRODUCTION The ground-based air traffic control (ATC) system was created to ensure the safety of flights operating in controlled airspace. Aircraft are separated by a combination of procedures and tactical maneuvering instructions. As air traffic has grown, the ATC system has increasingly depended on computer systems. Computers now not only process radar and flight plan data, but also help controllers to manage flow, avert conflicts, and maneuver traffic in terminal areas [1,2]. Today's ATC system has many problems that are characteristic of traditional control systems for large-scale industrial systems [3]. To manage the growing amount of air traffic, ATC computer systems are becoming more complex, increasing expense and making new systems more difficult to introduce. In addition, the aircraft/airspace system (AAS) is not responsive to the desires of users (aircraft and operators). The procedures and actions of the ATC system prevent users from dynamically optimizing their operations and cause many hours of delays. Distributed artificial intelligence (DAI) deals with small, simple systems working cooperatively to better control large-scale systems. In multi-agent systems (MAS), each agent has its own goals, and it must anticipate the actions of other agents and coordinate actions to meet these goals. The AAS is a MAS. It is a collection of agents, each with its own goals and interests. Agents include aircraft, operators, and traffic management agents (TrMAs, a generic term for any air traffic control unit). Each agent makes decisions and takes actions that affect the air traffic process. Their actions interact because aircraft must stay safely separated. The ATC system coordinates the actions of agents because, until recently, only the ATC system had sufficient data (on traffic, flight plans, and the weather) and sufficient computing power to analyze the situation. Now, airlines and aircraft also have powerful computer systems, and they can access large amounts of data from their own sensors and through high-bandwidth communications.
They are also capable of making declarative decisions regarding the traffic situation. Steeb et al. [4] studied whether aircraft alone could resolve conflicts. When a conflict arose, the affected aircraft used a variety of criteria to determine which aircraft was best-suited to formulate a resolution plan. The chosen aircraft then calculated the plan and transmitted it to the other aircraft. This was a centralized control system, but the air traffic process was broken down into distributed conflict areas, each with a controlling aircraft. Davis and Smith used a contract net approach to assign surveillance tasks for particular areas to individual aircraft [5]. A manager divided the task and issued a request for bids. The agents then sent in bids, and the manager selected the successful bidders. Levy and Rosenschein distributed the coordination function using game theory [6]. In the Pursuit Problem, each pursuer first evaluated the solution of the local game to calculate the total payoff received by all the agents from their combined actions. Each agent then solved the global game to establish its share of the

01 Jan 1996
TL;DR: This work utilizes a testbed problem from the distributed AI literature to show that simultaneous learning by group members can lead to significant improvement in group performance and efficiency over groups following static behavioral rules.
Abstract: Groups of agents following fixed behavioral rules can be limited in performance and efficiency. Adaptability and flexibility are key components of intelligent behavior which allow agent groups to improve performance in a given domain using prior problem solving experience. We motivate the usefulness of individual learning by group members in the context of overall group behavior. We propose a framework in which individual group members learn cases to improve their model of other group members. We utilize a testbed problem from the distributed AI literature to show that simultaneous learning by group members can lead to significant improvement in group performance and efficiency over groups following static behavioral rules.

Book ChapterDOI
17 Jun 1996
TL;DR: This paper presents UNL's approach to extending the HOLOS agile scheduling system towards the extended enterprise, and emphasizes the necessity of a high-level decision support system to manage the scheduling of a business process during its production in the network of enterprises.
Abstract: This paper presents UNL's approach to extending the HOLOS agile scheduling system towards the extended enterprise. The concept of the extended enterprise and related problems are brought up, as well as the notion of agile scheduling. The HOLOS system architecture is briefly explained. The necessity of developing a high-level decision support system (DSS) to manage the scheduling of a business process during its production in the network of enterprises is emphasized, and its main functions are presented. A general framework to integrate the DSS with the scheduling system is also shown. Finally, a flexible communication infrastructure to support the integration of an enterprise in the network is introduced.

Journal ArticleDOI
TL;DR: A concurrent engineering system has been developed using a multiagent architecture to address the issues of design, manufacturability analysis, incremental process planning, dynamic routing, and scheduling simultaneously.
Abstract: A concurrent engineering system has been developed using a multiagent architecture to address the issues of design, manufacturability analysis, incremental process planning, dynamic routing, and scheduling simultaneously. The system includes a feature-based intelligent design subsystem for prismatic components, a shop-floor subsystem to represent available resources, and a supervisory control interface to manage the shop-floor resources. The evaluation of the system used a simulated shop-floor environment with four production machines for the design of a prismatic component. As the design progressed, manufacturability was evaluated and shop-floor planning was carried out concurrently. Valid process plans, routing, and scheduling were generated. The system is now being extended to incorporate additional design systems and shop-floor environments.

Journal ArticleDOI
TL;DR: The articles in this special issue discuss intelligent agents, with an emphasis on the relationship between artificial intelligence and information technology (as discussed in the accompanying article).
Abstract: The articles in this special issue discuss intelligent agents, with an emphasis on the relationship between artificial intelligence and information technology (as discussed in the accompanying article). As guest editor, I wanted to include an overview and technical material, speculation and implementation, controversial claims and evaluated techniques, as well as research articles and articles of interest to application developers. Although no finite number of articles could actually do all this, I'm very happy to have collected the material included here. There are four main articles, as well as a collection of short pieces by young scientists, making the content of this issue quite diverse and covering a wide spectrum of the work in this field. In the first article, Charles Petrie writes about agents and their relevance to engineering applications. He also discusses many issues in terms of what agents are and presents the important issue of differing views about the communication between agents: peer-to-peer versus client/server. This article includes numerous Web pointers and can serve as an excellent starting place for those interested in learning more about agents using the World Wide Web. The next article, by David King and Dan O'Leary, focuses on executive information systems. These authors point out that the combination of AI and information technology leads to a new way of looking at information systems for corporate use, and new approaches to accessing information external to the corporation's own data resources. This article also includes many Web pointers, which we hope you will enjoy exploring. The third article, by Katia Sycara, Keith Decker, Anandeep Pannu, Mike Williamson, and Dajun Zeng, presents examples of the use of distributed intelligent agents for helping users to retrieve, filter, and fuse information relevant to their tasks. The authors show how such agents are helping in a variety of applications. This article dramatically demonstrates why agent-based computing has become such an important idea in recent years. The final feature article, by Moises Lejter and Thomas Dean, focuses on agent-related research, particularly in the area of multiagent architectures. This technical article concentrates on developing and evaluating a framework that addresses a number of issues in understanding multiagent systems. While some might argue that this article is more fitting to Artificial Intelligence or other such journals, I felt it was important to have an article that reflected on some of the exciting research issues in the agents field. I'm grateful to Moises and Tom for letting us publish it here. Last, but definitely not least, is a special section for this issue. In an effort to display some of the excitement in the agents field, I asked several young scientists who are doing the leading work in the field to write short pieces speculating on some of the exciting new directions for agents technology. The short pieces by Jim Firby, Ken Haase, Hiroaki Kitano, Jose Ambite and Craig Knoblock, Lynn Stein, Lee Spector, and Pattie Maes indicate many of the new directions being taken by some of the best and brightest of AI's up and coming generation.

Proceedings Article
01 Jan 1996
TL;DR: A generic framework for developing network fault managers using an object-oriented multiagent architecture, made up of largely autonomous and decentralised components (CNFM agents) cooperating to carry out knowledge-based fault management in networks.
Abstract: The ever increasing complexity of network management demands tools to aid human operators in performing their tasks. Many expert systems have been developed to automatically perform fault correlation and diagnosis in telecommunications networks, but most of them are monolithic systems, oriented to a specific domain of application, and only a few are in accordance with present ISO standards for OSI network management. This paper presents a generic framework, the CNFM (Cooperative Network-Fault Manager), for developing network fault managers using an object-oriented multiagent architecture. The system is made up of largely autonomous and decentralised components, CNFM agents, cooperating to carry out knowledge-based fault management in networks. CNFM agents are dynamically created with only generic functional knowledge when faults appear. During operation, every agent acquires from the network itself the structural and behavioral knowledge it needs about faulty elements. A high-level interaction language is used by CNFM agents for cooperation tasks, but all communication with network element agents is made through primitives of a standard interaction protocol, allowing the integration of CNFM into distributed management platforms.