Topic

Multi-agent system

About: Multi-agent system is a research topic. Over the lifetime, 27,978 publications have been published within this topic, receiving 465,191 citations. The topic is also known as: multi-agent systems & multiagent system.


Papers
Journal ArticleDOI
TL;DR: This work uses a high-gain methodology to construct linear decentralized controllers for consensus in networks with identical but general multi-input linear time-invariant (LTI) agents and quite general time-invariant and time-varying observation topologies.
Abstract: We use a high-gain methodology to construct linear decentralized controllers for consensus in networks with identical but general multi-input linear time-invariant (LTI) agents and quite general time-invariant and time-varying observation topologies.

131 citations
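
For intuition, the sketch below shows a minimal single-integrator consensus update on a fixed undirected graph; it is not the paper's high-gain construction for general LTI agents, and the ring topology, step size, and initial states are illustrative assumptions.

```python
import numpy as np

# Hypothetical example: a ring of 5 single-integrator agents.
A = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)

eps = 0.1                                   # step size; small enough for convergence
x = np.array([3.0, -1.0, 4.0, 0.5, 2.0])    # arbitrary initial states

for _ in range(200):
    # Each agent moves toward its neighbours: x_i += eps * sum_j a_ij (x_j - x_i)
    x = x + eps * (A @ x - A.sum(axis=1) * x)

print(x)   # all entries approach the average of the initial states (1.7)
```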

Proceedings ArticleDOI
14 Jun 2006
TL;DR: This paper establishes an existence theorem for the connectivity maintenance problem by introducing a novel state-dependent graph, called the double-integrator disk graph, and designs a distributed "flow-control" algorithm to compute optimal connectivity-maintaining controls.
Abstract: In this paper we consider ad-hoc networks of robotic agents with double integrator dynamics. For such networks, the connectivity maintenance problems are: (i) do there exist control inputs for each agent to maintain network connectivity, and (ii) given desired controls for each agent, can one compute the closest connectivity-maintaining controls in a distributed fashion. The proposed solution is based on three contributions. First, we define and characterize admissible sets for double integrators to remain inside disks. Second, we establish an existence theorem for the connectivity maintenance problem by introducing a novel state-dependent graph, called the double-integrator disk graph. Finally, we design a distributed "flow-control" algorithm to compute optimal connectivity-maintaining controls.

130 citations
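
The state-dependent graph introduced in the paper builds on the familiar r-disk proximity graph. Below is a minimal sketch of that simpler disk graph plus a connectivity check; the radius, agent positions, and function names are illustrative assumptions, and the sketch does not implement the paper's double-integrator admissible sets or its distributed "flow-control" algorithm.

```python
import numpy as np
from collections import deque

def disk_graph(positions, r):
    """Adjacency of the r-disk proximity graph: agents i and j are
    neighbours when their distance is at most r."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return (d <= r) & ~np.eye(len(positions), dtype=bool)

def is_connected(adj):
    """Breadth-first search from agent 0; the network is connected
    if every agent is reachable."""
    seen, queue = {0}, deque([0])
    while queue:
        i = queue.popleft()
        for j in np.flatnonzero(adj[i]):
            if j not in seen:
                seen.add(j)
                queue.append(j)
    return len(seen) == len(adj)

# Hypothetical example: four planar agents, communication radius 1.5.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5], [3.5, 0.5]])
print(is_connected(disk_graph(pos, r=1.5)))   # True: the chain 0-1-2-3 is connected
```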

Proceedings ArticleDOI
28 May 2001
TL;DR: This paper applies this hierarchical multi-agent reinforcement learning algorithm to a complex AGV scheduling task and compares its performance and speed with other learning approaches, including flat multi-agent, single agent using MAXQ, selfish multiple agents using MAXQ, as well as several well-known AGV heuristics like "first come first serve", "highest queue first" and "nearest station first".
Abstract: In this paper we investigate the use of hierarchical reinforcement learning to speed up the acquisition of cooperative multi-agent tasks. We extend the MAXQ framework to the multi-agent case. Each agent uses the same MAXQ hierarchy to decompose a task into sub-tasks. Learning is decentralized, with each agent learning three interrelated skills: how to perform subtasks, which order to do them in, and how to coordinate with other agents. Coordination skills among agents are learned by using joint actions at the highest level(s) of the hierarchy. The Q nodes at the highest level(s) of the hierarchy are configured to represent the joint task-action space among multiple agents. In this approach, each agent only knows what other agents are doing at the level of sub-tasks, and is unaware of lower level (primitive) actions. This hierarchical approach allows agents to learn coordination faster by sharing information at the level of sub-tasks, rather than attempting to learn coordination taking into account primitive joint state-action values. We apply this hierarchical multi-agent reinforcement learning algorithm to a complex AGV scheduling task and compare its performance and speed with other learning approaches, including flat multi-agent, single agent using MAXQ, selfish multiple agents using MAXQ (where each agent acts independently without communicating with the other agents), as well as several well-known AGV heuristics like "first come first serve", "highest queue first" and "nearest station first". We also compare the tradeoffs in learning speed vs. performance of modeling joint action values at multiple levels in the MAXQ hierarchy.

130 citations
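
A minimal sketch of the coordination idea follows: at the top of the hierarchy each agent learns Q-values over joint sub-task choices (its own sub-task together with the sub-tasks the other agents are executing) rather than over primitive joint actions. The sub-task names, learning parameters, and helper functions are illustrative assumptions and do not reproduce the paper's MAXQ implementation.

```python
import random
from collections import defaultdict

SUBTASKS = ["load", "deliver", "unload", "idle"]   # illustrative AGV sub-tasks
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

# Q-values keyed by (own state, other agents' sub-tasks, own sub-task choice).
q = defaultdict(float)

def choose_subtask(state, others):
    """Epsilon-greedy selection over sub-tasks given what the others are doing."""
    if random.random() < EPS:
        return random.choice(SUBTASKS)
    return max(SUBTASKS, key=lambda a: q[(state, others, a)])

def update(state, others, subtask, reward, next_state, next_others):
    """One Q-learning step at the joint sub-task level."""
    best_next = max(q[(next_state, next_others, a)] for a in SUBTASKS)
    key = (state, others, subtask)
    q[key] += ALPHA * (reward + GAMMA * best_next - q[key])

# Illustrative single update for one agent:
s, others = "at_dock", ("deliver",)       # own state and the other agent's sub-task
a = choose_subtask(s, others)
update(s, others, a, reward=1.0, next_state="loaded", next_others=("unload",))
```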

Book ChapterDOI
TL;DR: The author proposes a conceptual shift from individual agent representations to social interaction and looks at the underlying reasons why agents from different vendors--or even different research projects--cannot communicate with each other.
Abstract: Agent communication languages have been used for years in proprietary multiagent systems. Yet agents from different vendors--or even different research projects--cannot communicate with each other. The author looks at the underlying reasons and proposes a conceptual shift from individual agent representations to social interaction.

130 citations
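
For context, an agent communication language message, FIPA-ACL style, might look like the sketch below; the field set and values are illustrative assumptions, not taken from the chapter.

```python
from dataclasses import dataclass

# A minimal FIPA-ACL-style message: a speech act plus the metadata both
# agents need to interpret the content.  Fields and values are illustrative.
@dataclass
class AclMessage:
    performative: str          # speech act, e.g. "inform", "request", "agree"
    sender: str
    receiver: str
    content: str               # payload in the agreed content language
    language: str = "fipa-sl"  # content language the receiver must understand
    ontology: str = ""         # shared vocabulary both agents commit to

msg = AclMessage(
    performative="request",
    sender="buyer-agent",
    receiver="seller-agent",
    content="(price item-42)",
    ontology="e-commerce",
)
print(msg.performative, "->", msg.receiver)
```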


Network Information
Related Topics (5)
Control theory: 299.6K papers, 3.1M citations, 90% related
Optimization problem: 96.4K papers, 2.1M citations, 87% related
Fuzzy logic: 151.2K papers, 2.3M citations, 86% related
Artificial neural network: 207K papers, 4.5M citations, 85% related
Wireless sensor network: 142K papers, 2.4M citations, 85% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    536
2022    1,212
2021    849
2020    1,098
2019    1,079
2018    1,105