scispace - formally typeset

Multi-agent system

About: Multi-agent system is a research topic. Over the lifetime, 27,978 publications have been published within this topic, receiving 465,191 citations. The topic is also known as: multi-agent systems & multiagent system.


Papers
Journal ArticleDOI
TL;DR: By combining tools from switching systems and Lyapunov stability theory, some sufficient conditions are established for consensus of multi-agent systems without any external disturbances under a fixed strongly connected topology.
Abstract: This article addresses the consensus problem for cooperative multiple agents with nonlinear dynamics on a fixed directed information network, where each agent can only communicate with its neighbours intermittently. A class of control algorithms is first introduced, using only intermittent relative local information. By combining tools from switching systems and Lyapunov stability theory, some sufficient conditions are established for consensus of multi-agent systems without any external disturbances under a fixed strongly connected topology. Theoretical analyses are further provided for consensus of multi-agent systems in the presence of external disturbances. It is shown that a finite ℒ2-gain performance index for the closed-loop multi-agent systems can be guaranteed if the coupling strength of the network is larger than a threshold value determined by the average communication rate and the generalised algebraic connectivity of the strongly connected topology. The results are then extended to consensus ...
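The intermittent-communication mechanism can be illustrated with a minimal sketch: single-integrator (linear) agents on a directed ring run the consensus update only while a periodic communication window is open, and hold their states otherwise. The dynamics, graph, gain, and duty cycle below are illustrative simplifications, not the paper's nonlinear model or conditions.

```python
import numpy as np

def simulate_intermittent_consensus(L, x0, c=1.0, duty=0.6, period=1.0,
                                    dt=0.01, T=30.0):
    """Euler simulation of single-integrator consensus x' = -c*L*x in
    which the update runs only during the first `duty` fraction of
    each communication period (intermittent relative information)."""
    x = np.array(x0, dtype=float)
    for k in range(int(T / dt)):
        if (k * dt) % period < duty * period:  # communication window open
            x += -c * dt * (L @ x)
        # otherwise agents hold their states (no neighbour information)
    return x

# Graph Laplacian of a directed 4-cycle (strongly connected).
L = np.array([[ 1, -1,  0,  0],
              [ 0,  1, -1,  0],
              [ 0,  0,  1, -1],
              [-1,  0,  0,  1]], dtype=float)

x = simulate_intermittent_consensus(L, [4.0, -1.0, 2.0, 0.0])
```

Because this cycle is weight-balanced, the agents agree on the initial average; in the spirit of the abstract's threshold condition, a larger coupling gain c can compensate for a lower communication rate (smaller duty).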

132 citations

Journal ArticleDOI
TL;DR: In this article, a multi-agent system consisting of $N$ agents is considered and the problem of steering each agent from its initial position to a desired goal while avoiding collisions with obstacles and other agents is studied.
Abstract: A multi-agent system consisting of $N$ agents is considered. The problem of steering each agent from its initial position to a desired goal while avoiding collisions with obstacles and other agents is studied. This problem, referred to as the multi-agent collision avoidance problem, is formulated as a differential game. Dynamic feedback strategies that approximate the feedback Nash equilibrium solutions of the differential game are constructed and it is shown that, provided certain assumptions are satisfied, these guarantee that the agents reach their targets while avoiding collisions.
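To make the problem setup concrete, here is a heuristic potential-field stand-in for a dynamic feedback strategy: each agent is attracted to its goal and repelled from agents inside a safety radius. This is not the paper's Nash-approximating construction, and all gains are made-up illustrative values.

```python
import numpy as np

def step(pos, goals, dt=0.05, k_goal=1.0, k_rep=0.5, d_safe=1.0):
    """One step of a heuristic feedback strategy: each agent moves
    toward its goal and is pushed away from any other agent closer
    than d_safe (gains are illustrative, not tuned)."""
    new = pos.copy()
    for i in range(len(pos)):
        u = k_goal * (goals[i] - pos[i])            # attraction to goal
        for j in range(len(pos)):
            if i == j:
                continue
            diff = pos[i] - pos[j]
            d = np.linalg.norm(diff)
            if d < d_safe:                          # repulsion when close
                u += k_rep * diff / (d * d)
        new[i] = pos[i] + dt * u
    return new

# Two agents swapping positions almost head-on.
pos = np.array([[0.0, 0.0], [4.0, 0.1]])
goals = np.array([[4.0, 0.0], [0.0, 0.0]])
min_dist = np.inf
for _ in range(600):
    pos = step(pos, goals)
    min_dist = min(min_dist, np.linalg.norm(pos[0] - pos[1]))
```

The small lateral offset breaks the head-on symmetry, so the repulsion steers the agents around each other before the goal attraction takes over again.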

132 citations

Journal ArticleDOI
TL;DR: It is shown how evolutionary dynamics from Evolutionary Game Theory can help the developer of a MAS make good choices of parameter settings for the RL algorithms used, and how the improved results for MAS RL in COIN, and a developed extension, are predicted by the ED.
Abstract: In this paper, we investigate reinforcement learning (RL) in multi-agent systems (MAS) from an evolutionary dynamical perspective. Typical for a MAS is that the environment is not stationary and the Markov property does not hold. This requires agents to be adaptive. RL is a natural approach to model the learning of individual agents. These learning algorithms are, however, known to be sensitive to the correct choice of parameter settings even for single-agent systems. This issue is more prevalent in the MAS case due to the changing interactions amongst the agents. It is largely an open question for a developer of a MAS how to design the individual agents such that, through learning, the agents as a collective arrive at good solutions. We will show that modeling RL in a MAS from an evolutionary game theoretic point of view is a new and potentially successful way to guide learning agents to the most suitable solution for the task at hand. We show how evolutionary dynamics (ED) from Evolutionary Game Theory can help the developer of a MAS make good choices of parameter settings for the RL algorithms used. The ED essentially predict the equilibrium outcomes of the MAS where the agents use individual RL algorithms. More specifically, we show how the ED predict the learning trajectories of Q-learners for iterated games. Moreover, we apply our results to (an extension of) the COllective INtelligence framework (COIN). COIN is a proven engineering approach for learning cooperative tasks in MASs. The utilities of the agents are re-engineered to contribute to the global utility. We show how the improved results for MAS RL in COIN, and a developed extension, are predicted by the ED.
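The core prediction tool here, the replicator dynamics, is easy to sketch. For a symmetric 2x2 game the dynamics evolve the probability of each action in proportion to how much its fitness exceeds the population average; in the Prisoner's Dilemma (payoffs below are the standard illustrative values, not taken from the paper) they predict that self-playing learners drift to mutual defection.

```python
import numpy as np

# Row player's payoffs in the Prisoner's Dilemma, actions (C, D).
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics
    x_i' = x_i * ((A x)_i - x.A.x), the evolutionary-game-theory
    model used to predict the trajectories of simple Q-learners
    in self-play."""
    f = A @ x            # fitness of each pure action
    avg = x @ f          # population-average fitness
    return x + dt * x * (f - avg)

x = np.array([0.9, 0.1])          # start with 90% cooperators
for _ in range(5000):
    x = replicator_step(x, A)
```

Even from a cooperative start, the defect share grows toward 1, matching the game's unique equilibrium; re-engineering utilities as in COIN amounts to changing A so that the predicted rest point becomes cooperative.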

132 citations

Journal ArticleDOI
TL;DR: One feature of the proposed control laws is that they guarantee that the spatial ordering of the agents is preserved throughout the system's evolution, and thus no collision can take place during the process of forming circle formations.
Abstract: We propose distributed control laws for a group of anonymous mobile agents to form desired circle formations when the agents move in the one-dimensional space of a circle. The agents are modeled by kinematic points. They share common knowledge of the orientation of the circle, but are oblivious and anonymous. Moreover, each agent can only sense the relative positions of the two neighboring agents immediately in front of or behind itself. Distributed control strategies are designed for the agents using only the relative positions of their two neighbors and the given desired distances to those neighbors. To make the control strategies more practical, we discuss the corresponding sampled-data control laws and, by adopting time-varying gains, we obtain control laws that guide the agents to the desired circle formation within any given finite time. One feature of the proposed control laws is that they guarantee that the spatial ordering of the agents is preserved throughout the system's evolution, and thus no collision can take place during the process of forming circle formations. Both theoretical analysis and numerical simulations are given to show the effectiveness of the proposed formation control strategies.
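A constant-gain sketch of such a law: each kinematic agent compares the arc gaps to its front and rear neighbours with the desired gaps and moves to balance the two errors. This reproduces only the basic (asymptotic) behaviour; the paper's sampled-data and finite-time time-varying-gain versions are not modelled here.

```python
import numpy as np

def circle_formation(theta0, d_des, k=1.0, dt=0.01, T=30.0):
    """Distributed circle-formation sketch on angles theta (rad):
    agent i senses only the gaps to its front and rear neighbours
    and steers to equalise the gap errors (constant gain k)."""
    theta = np.array(theta0, dtype=float)
    for _ in range(int(T / dt)):
        # gap_i = arc from agent i to the agent in front of it
        gaps = np.diff(np.r_[theta, theta[0] + 2 * np.pi])
        err = gaps - d_des
        # move forward if own front-gap error exceeds rear-gap error
        u = k * (err - np.roll(err, 1))
        theta += dt * u
    return np.diff(np.r_[theta, theta[0] + 2 * np.pi])

d_des = np.full(4, np.pi / 2)               # evenly spaced 4 agents
gaps = circle_formation([0.0, 0.2, 1.0, 3.0], d_des)
```

The gap errors obey a diffusion (Laplacian) equation on the cycle, and since they sum to zero they all decay to zero; gaps staying positive throughout is the ordering-preservation property highlighted in the abstract.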

132 citations

Journal ArticleDOI
TL;DR: It is shown that the algorithm is robust to arbitrarily bounded communication delays and arbitrarily switching communication graphs, provided that the union of the graphs contains a directed spanning tree over each time interval of a certain bounded length.
Abstract: In this technical note, a distributed velocity-constrained consensus problem is studied for discrete-time multi-agent systems, where each agent's velocity is constrained to lie in a nonconvex set. A distributed constrained control algorithm is proposed to enable all agents to converge to a common point using only local information. The gains of the algorithm need not be the same for all agents or predesigned, and can be adjusted by each agent itself based on its own and its neighbors' information. It is shown that the algorithm is robust to arbitrarily bounded communication delays and arbitrarily switching communication graphs, provided that the union of the graphs contains a directed spanning tree over each time interval of a certain bounded length. The analysis approach is based on multiple novel model transformations, proper control parameter selections, boundedness analysis of state-dependent stochastic matrices, exploitation of the convexity of stochastic matrices, and the joint connectivity of the communication graphs. Numerical examples are included to illustrate the theoretical results.

132 citations


Network Information
Related Topics (5)
Control theory: 299.6K papers, 3.1M citations (90% related)
Optimization problem: 96.4K papers, 2.1M citations (87% related)
Fuzzy logic: 151.2K papers, 2.3M citations (86% related)
Artificial neural network: 207K papers, 4.5M citations (85% related)
Wireless sensor network: 142K papers, 2.4M citations (85% related)
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    536
2022    1,212
2021    849
2020    1,098
2019    1,079
2018    1,105