scispace - formally typeset
Topic

Multi-agent system

About: Multi-agent system is a research topic. Over the lifetime, 27,978 publications have been published within this topic, receiving 465,191 citations. The topic is also known as: multi-agent systems & multiagent system.


Papers
Journal ArticleDOI
TL;DR: Simulation results show that, for most of the performance measures, a MAS integrated with well-designed ant-inspired coordination performs well compared to a MAS using dispatching rules.

210 citations
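The TL;DR above contrasts ant-inspired coordination with classical dispatching rules. As a rough illustration of the pheromone mechanism such a MAS might use (the class, machine names, and parameters below are hypothetical, not taken from the paper), job agents deposit "pheromone" on machines that serve them well while all trails slowly evaporate:

```python
class PheromoneBoard:
    """Stigmergic coordination sketch: job agents reinforce the machines that
    served them well, and trails evaporate so stale information fades."""

    def __init__(self, machines, evaporation=0.1):
        self.tau = {m: 1.0 for m in machines}   # pheromone trail per machine
        self.evaporation = evaporation

    def best_machine(self):
        # Dispatch along the strongest trail
        return max(self.tau, key=self.tau.get)

    def deposit(self, machine, quality):
        # Evaporate every trail, then reinforce the machine that handled the job
        for m in self.tau:
            self.tau[m] *= 1 - self.evaporation
        self.tau[machine] += quality

# Hypothetical shop: the service quality each machine delivers (1.0 = fastest)
quality = {"M1": 0.2, "M2": 1.0, "M3": 0.4}
board = PheromoneBoard(quality)

for m in quality:                 # exploration: try each machine once
    board.deposit(m, quality[m])
for _ in range(200):              # exploitation: follow the trail, reinforce it
    best = board.best_machine()
    board.deposit(best, quality[best])

print(board.best_machine())       # trails concentrate on the fastest machine: M2
```

The positive-feedback loop (reinforcement plus evaporation) settles on the best server without any agent holding global state, which is the appeal of this style of coordination over fixed dispatching rules.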

Journal ArticleDOI
01 Mar 2007
TL;DR: This architecture is a peer-to-peer (P2P) model based on multiple OSGi platforms, in which service-oriented mechanisms are used for system components to interact with one another, and MA technology is applied to augment the interaction mechanisms.
Abstract: The architecture of a conventional smart home is usually server-centric, which causes many problems. Mobile devices and dynamic services create a constantly changing environment, which can make interaction very difficult. In addition, how to provide services efficiently and appropriately is always an important issue for a smart home. To solve the problems caused by traditional architectures, to deal with the dynamic environment, and to provide appropriate services, we propose a service-oriented architecture (SOA) for smart-home environments, based on the Open Services Gateway Initiative (OSGi) and mobile-agent (MA) technology. This architecture is a peer-to-peer (P2P) model based on multiple OSGi platforms, in which service-oriented mechanisms are used for system components to interact with one another, and MA technology is applied to augment the interaction mechanisms.

208 citations
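The abstract above combines three ideas: a P2P overlay of platforms, service-oriented lookup, and mobile agents that run where a service lives. A minimal sketch of how those pieces fit together (all class and service names here are illustrative, not the OSGi API):

```python
# Each "platform" keeps a local service registry; a lookup falls back to peers,
# and a mobile agent is modeled as a closure that executes on whichever
# platform hosts the requested service.

class Platform:
    def __init__(self, name):
        self.name = name
        self.services = {}   # service name -> callable implementation
        self.peers = []      # other platforms in the P2P overlay

    def register(self, service, impl):
        self.services[service] = impl

    def lookup(self, service):
        if service in self.services:
            return self, self.services[service]
        for peer in self.peers:          # ask peers when not found locally
            if service in peer.services:
                return peer, peer.services[service]
        raise KeyError(service)

    def dispatch_agent(self, service, agent):
        host, impl = self.lookup(service)
        return agent(host, impl)         # the agent "migrates" and runs there

living_room = Platform("living-room")
kitchen = Platform("kitchen")
living_room.peers.append(kitchen)
kitchen.register("light.dim", lambda level: f"kitchen lights at {level}%")

# An agent sent from the living-room gateway to wherever the service lives
result = living_room.dispatch_agent(
    "light.dim", lambda host, impl: (host.name, impl(30)))
print(result)  # ('kitchen', 'kitchen lights at 30%')
```

Because lookup is decentralized, no single server has to know every device, which is the point of replacing the server-centric design with a P2P model.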

Proceedings ArticleDOI
Tom Schaul
17 Oct 2013
TL;DR: It is shown how to learn competent behaviors when a model of the game dynamics is available or when it is not, when full state information is given to the agent or just subjective observations, when learning is interactive or in batch-mode, and for a number of different learning algorithms, including reinforcement learning and evolutionary search.
Abstract: We propose a powerful new tool for conducting research on computational intelligence and games. "PyVGDL" is a simple, high-level description language for 2D video games, and the accompanying software library permits parsing and instantly playing those games. The streamlined design of the language is based on defining locations and dynamics for simple building blocks, and the interaction effects when such objects collide, all of which are provided in a rich ontology. It can be used to quickly design games, without needing to deal with control structures, and the concise language is also accessible to generative approaches. We show how the dynamics of many classical games can be generated from a few lines of PyVGDL. The main objective of these generated games is to serve as diverse benchmark problems for learning and planning algorithms; so we provide a collection of interfaces for different types of learning agents, with visual or abstract observations, from a global or first-person viewpoint. To demonstrate the library's usefulness in a broad range of learning scenarios, we show how to learn competent behaviors when a model of the game dynamics is available or when it is not, when full state information is given to the agent or just subjective observations, when learning is interactive or in batch-mode, and for a number of different learning algorithms, including reinforcement learning and evolutionary search.

208 citations
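The abstract describes defining games declaratively: building blocks plus collision-interaction effects plus termination conditions. A toy sketch of that idea in plain Python (the rule format and names below are illustrative, not actual PyVGDL syntax):

```python
# A toy declarative game in the spirit described above: sprites, interaction
# rules keyed by the colliding types, and a win condition, all given as data.

def kill_sprite(world, a, b):
    world["sprites"].remove(b)          # e.g. the avatar collects a goal

GAME = {
    # 1D level string: avatar 'A' walks right, 'G' are collectable goals
    "level": "A..G.G",
    "interactions": {("avatar", "goal"): kill_sprite},
    "won": lambda world: not any(s["type"] == "goal" for s in world["sprites"]),
}

def load(game):
    types = {"A": "avatar", "G": "goal"}
    sprites = [{"type": types[c], "x": x}
               for x, c in enumerate(game["level"]) if c in types]
    return {"sprites": sprites}

def step(game, world):
    avatar = next(s for s in world["sprites"] if s["type"] == "avatar")
    avatar["x"] += 1                                 # fixed policy: move right
    for other in [s for s in world["sprites"] if s is not avatar]:
        if other["x"] == avatar["x"]:                # a collision fires a rule
            effect = game["interactions"].get((avatar["type"], other["type"]))
            if effect:
                effect(world, avatar, other)

world = load(GAME)
for _ in range(len(GAME["level"])):
    step(GAME, world)
print(GAME["won"](world))  # True: both goals collected
```

The game logic lives entirely in the `GAME` data structure rather than in control flow, which is what makes such descriptions short and amenable to generative approaches.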

Journal ArticleDOI
TL;DR: This paper describes Gala, an implemented system that allows the specification and efficient solution of large imperfect-information games and provides a new declarative language for compactly and naturally representing games by their rules.

208 citations

Proceedings Article
Gerald Tesauro
09 Dec 2003
TL;DR: This paper proposes a fundamentally different approach to Q-Learning, dubbed Hyper-Q, in which values of mixed strategies rather than base actions are learned, and in which other agents' strategies are estimated from observed actions via Bayesian inference.
Abstract: Recent multi-agent extensions of Q-Learning require knowledge of other agents' payoffs and Q-functions, and assume game-theoretic play at all times by all other agents. This paper proposes a fundamentally different approach, dubbed "Hyper-Q" Learning, in which values of mixed strategies rather than base actions are learned, and in which other agents' strategies are estimated from observed actions via Bayesian inference. Hyper-Q may be effective against many different types of adaptive agents, even if they are persistently dynamic. Against certain broad categories of adaptation, it is argued that Hyper-Q may converge to exact optimal time-varying policies. In tests using Rock-Paper-Scissors, Hyper-Q learns to significantly exploit an Infinitesimal Gradient Ascent (IGA) player, as well as a Policy Hill Climber (PHC) player. Preliminary analysis of Hyper-Q against itself is also presented.

206 citations
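Two ingredients of the abstract above can be sketched compactly: estimating the opponent's mixed strategy from observed actions via Bayesian inference, and evaluating one's own mixed strategies (rather than base actions) over a discretized simplex. The sketch below is a simplification for Rock-Paper-Scissors, not the paper's Hyper-Q algorithm itself (it greedily maximizes one-step expected payoff instead of learning Q-values over time):

```python
# Rock-Paper-Scissors payoff for the row player
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # rows: our action, cols: theirs

def estimate_opponent(counts):
    """Bayesian estimate: posterior mean under a Dirichlet(1,1,1) prior
    given observed action counts (Rock, Paper, Scissors)."""
    total = sum(counts) + 3
    return [(c + 1) / total for c in counts]

def grid_strategies(k=10):
    """Discretize the simplex of mixed strategies in steps of 1/k,
    analogous to working with mixed strategies rather than base actions."""
    return [(i / k, j / k, (k - i - j) / k)
            for i in range(k + 1) for j in range(k + 1 - i)]

def expected_payoff(ours, theirs):
    return sum(ours[a] * theirs[b] * PAYOFF[a][b]
               for a in range(3) for b in range(3))

def best_response(counts):
    y_hat = estimate_opponent(counts)
    return max(grid_strategies(), key=lambda x: expected_payoff(x, y_hat))

# Opponent has drifted toward Rock: 14 rocks, 3 papers, 3 scissors observed
print(best_response([14, 3, 3]))  # mass concentrates on Paper: (0.0, 1.0, 0.0)
```

Hyper-Q goes further by learning Q-values indexed by the (estimated opponent strategy, own mixed strategy) pair, which is what lets it track persistently dynamic opponents such as the IGA and PHC players mentioned above.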


Network Information
Related Topics (5)
Control theory: 299.6K papers, 3.1M citations (90% related)
Optimization problem: 96.4K papers, 2.1M citations (87% related)
Fuzzy logic: 151.2K papers, 2.3M citations (86% related)
Artificial neural network: 207K papers, 4.5M citations (85% related)
Wireless sensor network: 142K papers, 2.4M citations (85% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    536
2022    1,212
2021    849
2020    1,098
2019    1,079
2018    1,105