Topic
Multi-agent system
About: Multi-agent system is a research topic. Over its lifetime, 27,978 publications have appeared within this topic, receiving 465,191 citations. The topic is also known as: multi-agent systems & multiagent system.
Papers
TL;DR: Simulation results show that, for most of the performance measures, a MAS integrated with well-designed ant-inspired coordination performs well compared to a MAS using dispatching rules.
210 citations
••
01 Mar 2007
TL;DR: This architecture is a peer-to-peer (P2P) model based on multiple OSGi platforms, in which service-oriented mechanisms are used for system components to interact with one another, and MA technology is applied to augment the interaction mechanisms.
Abstract: The architecture of a conventional smart home is usually server-centric and thus causes many problems. Mobile devices and dynamic services create a constantly changing environment in which interaction can be difficult. In addition, how to provide services efficiently and appropriately is always an important issue for a smart home. To solve the problems caused by traditional architectures, to deal with the dynamic environment, and to provide appropriate services, we propose a service-oriented architecture (SOA) for smart-home environments, based on Open Services Gateway Initiative (OSGi) and mobile-agent (MA) technology. This architecture is a peer-to-peer (P2P) model based on multiple OSGi platforms, in which service-oriented mechanisms are used for system components to interact with one another, and MA technology is applied to augment the interaction mechanisms.
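The P2P model the abstract describes can be sketched minimally. This is an illustrative Python sketch under assumed names (`Platform`, `register`, `invoke`), not the paper's actual OSGi/Java implementation: each platform keeps a local service registry and resolves missing services by asking its peers, so no central server is involved.

```python
# Minimal sketch of a peer-to-peer service model (hypothetical names, not the
# paper's OSGi implementation): each platform publishes services locally and
# resolves unknown services by querying its peers, one hop away.

class Platform:
    def __init__(self, name):
        self.name = name
        self.services = {}   # local service registry: name -> callable
        self.peers = []      # other Platform instances (P2P, no central server)

    def register(self, service_name, handler):
        self.services[service_name] = handler

    def connect(self, peer):
        # Symmetric peering between two platforms.
        self.peers.append(peer)
        peer.peers.append(self)

    def invoke(self, service_name, *args):
        # Try the local registry first, then ask each peer.
        if service_name in self.services:
            return self.services[service_name](*args)
        for peer in self.peers:
            if service_name in peer.services:
                return peer.services[service_name](*args)
        raise LookupError(f"service {service_name!r} not found")

living_room = Platform("living-room")
kitchen = Platform("kitchen")
living_room.connect(kitchen)
kitchen.register("brew_coffee", lambda: "brewing")
print(living_room.invoke("brew_coffee"))  # resolved via a peer, not a server
```

In the paper's architecture, mobile agents would additionally migrate between platforms to carry out such interactions; here the lookup is a plain method call for brevity.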
208 citations
17 Oct 2013
TL;DR: It is shown how to learn competent behaviors when a model of the game dynamics is available or when it is not, when full state information is given to the agent or just subjective observations, when learning is interactive or in batch-mode, and for a number of different learning algorithms, including reinforcement learning and evolutionary search.
Abstract: We propose a powerful new tool for conducting research on computational intelligence and games. PyVGDL is a simple, high-level description language for 2D video games, and the accompanying software library permits parsing and instantly playing those games. The streamlined design of the language is based on defining locations and dynamics for simple building blocks, and the interaction effects when such objects collide, all of which are provided in a rich ontology. It can be used to quickly design games, without needing to deal with control structures, and the concise language is also accessible to generative approaches. We show how the dynamics of many classical games can be generated from a few lines of PyVGDL. The main objective of these generated games is to serve as diverse benchmark problems for learning and planning algorithms; so we provide a collection of interfaces for different types of learning agents, with visual or abstract observations, from a global or first-person viewpoint. To demonstrate the library's usefulness in a broad range of learning scenarios, we show how to learn competent behaviors when a model of the game dynamics is available or when it is not, when full state information is given to the agent or just subjective observations, when learning is interactive or in batch-mode, and for a number of different learning algorithms, including reinforcement learning and evolutionary search.
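The core VGDL idea from the abstract, building blocks plus declarative collision effects, can be illustrated in a few lines. This Python sketch uses hypothetical names and is not the actual PyVGDL API: a game is reduced to sprite classes plus an interaction table mapping class pairs to effect functions, and the engine simply dispatches on colliding pairs.

```python
# Illustrative sketch (hypothetical names, not the real PyVGDL API): a game is
# declared as sprite classes plus an interaction table of collision effects,
# and the engine dispatches on the class pair of any two co-located sprites.

def kill(obj, other, state):
    # Effect: remove the first sprite of the colliding pair.
    state["sprites"].remove(obj)

# Declarative "game description": which effect fires for which class pair.
INTERACTIONS = {
    ("avatar", "enemy"): kill,    # the avatar dies on contact with an enemy
    ("enemy", "missile"): kill,   # missiles destroy enemies
}

def step_collisions(state):
    # Dispatch effects for every pair of co-located sprites.
    for obj in list(state["sprites"]):
        if obj not in state["sprites"]:
            continue  # already removed by an earlier effect this step
        for other in list(state["sprites"]):
            if obj is other:
                continue
            if obj["pos"] == other["pos"]:
                effect = INTERACTIONS.get((obj["class"], other["class"]))
                if effect:
                    effect(obj, other, state)
                    break

state = {
    "sprites": [
        {"class": "enemy", "pos": (2, 3)},
        {"class": "missile", "pos": (2, 3)},
        {"class": "avatar", "pos": (0, 0)},
    ],
}
step_collisions(state)
# Only the enemy at (2, 3) is removed; the missile and avatar remain.
```

The point of this separation is the one the abstract makes: new games come from new declarations (sprite classes and interaction entries), not from new control structures, which also makes descriptions amenable to generative approaches.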
208 citations
TL;DR: The paper describes Gala, an implemented system that allows the specification and efficient solution of large imperfect-information games and provides a new declarative language for compactly and naturally representing games by their rules.
208 citations
IBM
TL;DR: This paper proposes a fundamentally different approach to Q-Learning, dubbed Hyper-Q, in which values of mixed strategies rather than base actions are learned, and in which other agents' strategies are estimated from observed actions via Bayesian inference.
Abstract: Recent multi-agent extensions of Q-Learning require knowledge of other agents' payoffs and Q-functions, and assume game-theoretic play at all times by all other agents. This paper proposes a fundamentally different approach, dubbed "Hyper-Q" Learning, in which values of mixed strategies rather than base actions are learned, and in which other agents' strategies are estimated from observed actions via Bayesian inference. Hyper-Q may be effective against many different types of adaptive agents, even if they are persistently dynamic. Against certain broad categories of adaptation, it is argued that Hyper-Q may converge to exact optimal time-varying policies. In tests using Rock-Paper-Scissors, Hyper-Q learns to significantly exploit an Infinitesimal Gradient Ascent (IGA) player, as well as a Policy Hill Climber (PHC) player. Preliminary analysis of Hyper-Q against itself is also presented.
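A minimal sketch of the Hyper-Q idea on Rock-Paper-Scissors, with two deliberate simplifications relative to the paper: mixed strategies are discretized on a coarse simplex grid, and the opponent model is an empirical frequency count standing in for the paper's Bayesian inference. The grid step, learning rate, and the biased opponent below are all illustrative assumptions, not the paper's settings.

```python
# Hyper-Q-flavored sketch on Rock-Paper-Scissors (simplified, illustrative).
# State = discretized estimate of the opponent's mixed strategy;
# "action" = our own mixed strategy, chosen from a coarse simplex grid.
import itertools
import random

PAYOFF = [[0, -1, 1],   # reward to us: rows = our action (R, P, S),
          [1, 0, -1],   # columns = the opponent's action
          [-1, 1, 0]]

# All mixed strategies with probabilities in steps of 1/4 (the "hyper" actions).
GRID = [(i / 4, j / 4, k / 4)
        for i, j, k in itertools.product(range(5), repeat=3) if i + j + k == 4]

def discretize(p):
    # Snap the opponent-frequency estimate onto the same coarse grid.
    return min(GRID, key=lambda g: sum((gi - pi) ** 2 for gi, pi in zip(g, p)))

random.seed(0)
counts = [1, 1, 1]   # pseudo-counts for the opponent-frequency model
Q = {}               # (opponent estimate, own mixed strategy) -> value
alpha, eps = 0.05, 0.1

def opponent_action():
    # A fixed, rock-heavy opponent for the learner to exploit (illustrative).
    return random.choices(range(3), weights=[0.6, 0.2, 0.2])[0]

for t in range(5000):
    est = discretize([c / sum(counts) for c in counts])
    # Epsilon-greedy over mixed strategies, then sample a base action from it.
    if random.random() < eps:
        s = random.choice(GRID)
    else:
        s = max(GRID, key=lambda g: Q.get((est, g), 0.0))
    a = random.choices(range(3), weights=s)[0]
    b = opponent_action()
    counts[b] += 1   # update the opponent model from the observed action
    key = (est, s)
    Q[key] = Q.get(key, 0.0) + alpha * (PAYOFF[a][b] - Q.get(key, 0.0))

est = discretize([c / sum(counts) for c in counts])
best = max(GRID, key=lambda g: Q.get((est, g), 0.0))
# Against a rock-heavy opponent, the learned strategy is likely to put
# substantial weight on paper (index 1).
```

Because the opponent estimate is part of the state, the learned policy can, in principle, track a drifting opponent, which is the property the paper exploits against IGA and PHC players; this sketch only exercises the stationary case.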
206 citations