scispace - formally typeset
Topic

Multi-agent system

About: Multi-agent system is a research topic. Over its lifetime, 27,978 publications have been published within this topic, receiving 465,191 citations. The topic is also known as: multi-agent systems & multiagent system.


Papers
Journal ArticleDOI
TL;DR: An event-triggered formation protocol is proposed that uses only locally triggered sampled data in a distributed manner, and the state formation control problem is cast into an asymptotic stability problem of a reduced-order closed-loop system.
Abstract: This paper addresses the distributed formation control problem of a networked multi-agent system (MAS) subject to limited communication resources. First, a dynamic event-triggered communication mechanism (DECM) is developed to schedule inter-agent communication so that unnecessary data exchanges among agents are reduced and better resource efficiency is achieved. Unlike most existing event-triggered communication mechanisms, in which the threshold parameter is fixed, the threshold parameter in the developed event-triggering condition is dynamically adjusted according to a dynamic rule. It is numerically shown that the proposed DECM achieves a better tradeoff between reducing inter-agent communication frequency and preserving an expected formation than some existing mechanisms. Second, an event-triggered formation protocol is proposed that uses only locally triggered sampled data in a distributed manner. Based on this protocol, the state formation control problem is cast into an asymptotic stability problem of a reduced-order closed-loop system. Criteria for designing the desired formation protocol and communication mechanism are then derived. Finally, the effectiveness and advantages of the proposed approach are demonstrated through a comparative study in multirobot formation control.
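As a rough illustration of the dynamic-triggering idea described in this abstract, the sketch below implements an agent-side transmitter whose broadcast threshold tightens while the agent stays silent and relaxes after each broadcast. The specific adjustment rule and all names (`sigma`, `eta`, geometric decay) are assumptions for illustration, not the paper's actual DECM.

```python
import numpy as np

class DynamicEventTrigger:
    """Illustrative dynamic event-triggered transmitter (the adjustment
    rule here is an assumption, not the paper's DECM).  An agent
    re-broadcasts its state only when the deviation from the last
    broadcast exceeds a threshold that is itself dynamically adjusted."""

    def __init__(self, sigma=0.3, eta0=1.0, decay=0.95):
        self.sigma = sigma    # base threshold parameter
        self.eta0 = eta0      # initial value of the dynamic multiplier
        self.eta = eta0       # current dynamic multiplier
        self.decay = decay    # geometric tightening while silent
        self.last_sent = None # most recently broadcast state

    def step(self, x):
        """Return True (and record the broadcast) if a transmission triggers."""
        x = np.asarray(x, dtype=float)
        if self.last_sent is None:           # always broadcast the first sample
            self.last_sent = x.copy()
            return True
        err = float(np.linalg.norm(x - self.last_sent))
        if err > self.sigma * self.eta:      # dynamic triggering condition
            self.last_sent = x.copy()
            self.eta = self.eta0             # relax threshold after a broadcast
            return True
        self.eta *= self.decay               # tighten threshold while silent
        return False
```

Run against a slowly varying trajectory, such a transmitter broadcasts only a fraction of the sampled states, which is the communication saving the mechanism is after.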

448 citations

Journal ArticleDOI
TL;DR: This paper presents the development and evaluation of MARLIN-ATSC, a novel multiagent reinforcement learning system for an integrated network of adaptive traffic signal controllers, which achieves an unprecedented reduction in average intersection delay.
Abstract: Population is steadily increasing worldwide, resulting in intractable traffic congestion in dense urban areas. Adaptive traffic signal control (ATSC) has shown strong potential to effectively alleviate urban traffic congestion by adjusting signal timing plans in real time in response to traffic fluctuations to achieve desirable objectives (e.g., minimizing delay). Efficient and robust ATSC can be designed using a multiagent reinforcement learning (MARL) approach in which each controller (agent) is responsible for the control of traffic lights around a single traffic junction. Applying MARL approaches to the ATSC problem poses challenges because agents typically react to changes in the environment at the individual level, so the overall behavior of all agents may not be optimal. This paper presents the development and evaluation of a novel multiagent reinforcement learning system for an integrated network of adaptive traffic signal controllers (MARLIN-ATSC). MARLIN-ATSC offers two possible modes: 1) independent mode, where each intersection controller works independently of other agents; and 2) integrated mode, where each controller coordinates signal control actions with neighboring intersections. MARLIN-ATSC is tested on a large-scale simulated network of 59 intersections in the lower downtown core of the City of Toronto, ON, Canada, for the morning rush hour. The results show an unprecedented reduction in average intersection delay, ranging from 27% in mode 1 to 39% in mode 2 at the network level, and travel-time savings of 15% in mode 1 and 26% in mode 2 along the busiest routes in Downtown Toronto.
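The independent mode described above can be sketched as a generic tabular Q-learning controller per intersection. This is a minimal illustration of independent-mode MARL, not the MARLIN-ATSC algorithm; the state encoding, actions, and rewards below are assumptions.

```python
import random
from collections import defaultdict

class SignalAgent:
    """Independent-mode tabular Q-learning controller for one intersection.
    A generic sketch of the MARL idea, not the MARLIN-ATSC algorithm:
    states, actions, and rewards are assumed for illustration."""

    def __init__(self, actions=(0, 1), alpha=0.1, gamma=0.95, eps=0.1):
        self.q = defaultdict(float)         # (state, action) -> estimated value
        self.actions = actions              # e.g. 0 = extend N-S green, 1 = E-W
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        """Epsilon-greedy phase selection from the local queue state."""
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, r, s_next):
        """Standard Q-learning update; r could be negative vehicle delay."""
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])
```

In integrated mode, each agent would additionally condition its choice on the policies or states of neighboring intersections; the sketch above captures only the independent case.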

437 citations

Journal ArticleDOI
TL;DR: This study proposes a secondary voltage and frequency control scheme based on the distributed cooperative control of multi-agent systems that is fully distributed such that each distributed generator only requires its own information and the information of its neighbours on the communication digraph.
Abstract: This study proposes a secondary voltage and frequency control scheme based on the distributed cooperative control of multi-agent systems. The proposed secondary control is implemented through a communication network with one-way communication links. The required communication network is modelled by a directed graph (digraph). The proposed secondary control is fully distributed such that each distributed generator only requires its own information and the information of its neighbours on the communication digraph. Thus, the requirements for a central controller and complex communication network are obviated, and the system reliability is improved. The simulation results verify the effectiveness of the proposed secondary control for a microgrid test system.
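The fully distributed structure described above, where each generator uses only its own state and its in-neighbours' states on the communication digraph, with a pinning term for units that observe the reference, can be sketched as a consensus-style frequency update. The gains, time step, and pinning formulation are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def secondary_frequency_step(omega, omega_ref, adj, pin_gain, dt=0.05, k=1.0):
    """One step of a consensus-style secondary frequency update (an
    illustration of distributed cooperative control, not the paper's
    exact scheme).  adj[i][j] = 1 if DG i receives DG j's state over
    a one-way communication link; pin_gain[i] > 0 only for DGs that
    directly observe the reference frequency."""
    n = len(omega)
    u = np.zeros(n)
    for i in range(n):
        # relative terms from in-neighbours on the digraph (local info only)
        u[i] = sum(adj[i][j] * (omega[j] - omega[i]) for j in range(n))
        # pinning term toward the reference for the pinned DG(s)
        u[i] += pin_gain[i] * (omega_ref - omega[i])
    return omega + dt * k * u
```

With a digraph containing a spanning tree rooted at a pinned generator, repeated application drives every generator's frequency to the reference without any central controller, which is the reliability argument the abstract makes.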

432 citations

Journal ArticleDOI
TL;DR: Here agent communities are modelled using a distributed goal search formalism, and it is argued that commitments (pledges to undertake a specific course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in multi-agent systems.
Abstract: Distributed Artificial Intelligence systems, in which multiple agents interact to improve their individual performance and to enhance the systems' overall utility, are becoming an increasingly pervasive means of conceptualising a diverse range of applications. As the discipline matures, researchers are beginning to strive for the underlying theories and principles which guide the central processes of coordination and cooperation. Here agent communities are modelled using a distributed goal search formalism, and it is argued that commitments (pledges to undertake a specific course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in multi-agent systems. An analysis of existing coordination models which use concepts akin to commitments and conventions is undertaken before a new unifying framework is presented. Finally, a number of prominent coordination techniques which do not explicitly involve commitments or conventions are reformulated in these terms to demonstrate their compliance with the central hypothesis of this paper.
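A minimal encoding of the commitment/convention vocabulary the abstract introduces might look as follows. This assumes a simple rule-based reassessment; the paper's formal model is richer, and all names here are illustrative.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Commitment:
    """A pledge by an agent to pursue a course of action (an illustrative
    encoding of the paper's vocabulary, not its formal model)."""
    agent: str
    goal: str
    active: bool = True

# A reassessment rule inspects a commitment against the current world
# state and returns a verdict: "retain" or "abandon".
ReassessRule = Callable[[Commitment, Dict[str, str]], str]

@dataclass
class Convention:
    """A convention: a means of monitoring commitments as circumstances
    change, here reduced to an ordered list of reassessment rules."""
    rules: List[ReassessRule]

    def reassess(self, c: Commitment, world: Dict[str, str]) -> str:
        for rule in self.rules:
            if rule(c, world) == "abandon":
                c.active = False
                return "abandon"
        return "retain"
```

Coordination models can then be compared by which rules their conventions contain, e.g. "abandon a commitment once its goal is achieved or becomes unachievable", which is the unifying move the paper makes.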

426 citations

ReportDOI
01 Jan 1994
TL;DR: A novel formulation of reinforcement learning is proposed that makes behavior selection learnable in noisy, uncertain multi-agent environments with stochastic dynamics, and enables and accelerates learning in complex multi-robot domains.
Abstract: This thesis addresses situated, embodied agents interacting in complex domains. It focuses on two problems: (1) synthesis and analysis of intelligent group behavior, and (2) learning in complex group environments. Behaviors are proposed as the appropriate level for control and learning. Basic behaviors are introduced as building blocks for synthesizing and analyzing system behavior. The thesis describes the process of selecting such basic behaviors, formally specifying them, algorithmically implementing them, and empirically evaluating them. All of the proposed ideas are validated with a group of up to 20 mobile robots using a basic behavior set consisting of: avoidance, following, aggregation, dispersion, and homing. The set of basic behaviors acts as a substrate for achieving more complex high-level goals and tasks. Two behavior combination operators are introduced, and verified by combining subsets of the above basic behavior set to implement collective flocking and foraging. A methodology is introduced for automatically constructing higher-level behaviors by learning to select among the basic behavior set. A novel formulation of reinforcement learning is proposed that makes behavior selection learnable in noisy, uncertain multi-agent environments with stochastic dynamics. It consists of using conditions and behaviors for more robust control and minimized state-spaces, and a reinforcement shaping methodology that enables principled embedding of domain knowledge with two types of shaping functions: heterogeneous reward functions and progress estimators. The methodology outperforms two alternatives when tested on a collection of robots learning to forage. The proposed formulation enables and accelerates learning in complex multi-robot domains. The generality of the approach makes it compatible with the existing reinforcement learning algorithms, allowing it to accelerate learning in a variety of domains and applications. 
The presented methodologies and results are aimed at extending our understanding of synthesis, analysis, and learning of group behavior. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
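The condition-based behavior selection with shaped reinforcement (heterogeneous rewards plus progress estimators) described in this thesis abstract can be sketched as follows. The value update, the shaping weight, and the condition names are illustrative assumptions, not the thesis's exact formulation; only the basic behavior set is taken from the abstract.

```python
import random
from collections import defaultdict

# The basic behavior set named in the abstract.
BEHAVIORS = ("avoidance", "following", "aggregation", "dispersion", "homing")

class BehaviorSelector:
    """Sketch of learning to select among basic behaviors with shaped
    reinforcement (an illustration of the idea, not the thesis's exact
    algorithm).  Values are kept per (condition, behavior) pair, and the
    update combines an event-based reward with a dense progress estimate."""

    def __init__(self, alpha=0.2, eps=0.1, progress_weight=0.5):
        self.v = defaultdict(float)       # (condition, behavior) -> value
        self.alpha, self.eps = alpha, eps
        self.progress_weight = progress_weight

    def select(self, condition):
        """Epsilon-greedy selection of a basic behavior for this condition."""
        if random.random() < self.eps:
            return random.choice(BEHAVIORS)
        return max(BEHAVIORS, key=lambda b: self.v[(condition, b)])

    def reinforce(self, condition, behavior, event_reward, progress):
        """Shaped update: heterogeneous event reward plus progress estimator."""
        shaped = event_reward + self.progress_weight * progress
        key = (condition, behavior)
        self.v[key] += self.alpha * (shaped - self.v[key])
```

The condition/behavior factoring keeps the learned table small, and the progress term gives credit during long behaviors instead of only at terminal events, which is the shaping argument the abstract makes.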

425 citations


Network Information
Related Topics (5)
Control theory: 299.6K papers, 3.1M citations (90% related)
Optimization problem: 96.4K papers, 2.1M citations (87% related)
Fuzzy logic: 151.2K papers, 2.3M citations (86% related)
Artificial neural network: 207K papers, 4.5M citations (85% related)
Wireless sensor network: 142K papers, 2.4M citations (85% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    536
2022    1,212
2021    849
2020    1,098
2019    1,079
2018    1,105