
Showing papers in "Autonomous Agents and Multi-Agent Systems in 2011"


Journal ArticleDOI
TL;DR: A concrete computational and programming model based on the artifact abstraction and implemented by the CArtAgO framework is described, which is meant to improve the modularity, extensibility and reusability of the MAS as a software system.
Abstract: This article introduces the notion of environment programming in software multi-agent systems (MAS) and describes a concrete computational and programming model based on the artifact abstraction and implemented by the CArtAgO framework. Environment programming accounts for conceiving the computational environment where agents are situated as a first-class abstraction for programming MAS, namely a part of the system that can be designed and programmed, alongside the agents, to encapsulate functionalities that will be exploited by agents at runtime. From a programming and software engineering perspective, this is meant to improve the modularity, extensibility and reusability of the MAS as a software system. By adopting the A&A meta-model, we consider environments populated by a dynamic set of computational entities called artifacts, collected in workspaces. From the agent viewpoint, artifacts are first-class entities of their environment, representing resources and tools that they can dynamically instantiate, share and use to support individual and collective activities. From the MAS programmer viewpoint, artifacts are a first-class abstraction to shape and program functional environments that agents will exploit at runtime, including functionalities that concern agent interaction, coordination, organisation, and the interaction with the external environment. The article includes a description of the main concepts concerning artifact-based environments and related CArtAgO technology, as well as an overview of their application in MAS programming.

202 citations


Journal ArticleDOI
TL;DR: It is claimed that collective irrationality should not be the only worry of judgment aggregation; three aggregation operators under which every member can defend the collective decision are introduced, and two definitions of compatibility are offered.
Abstract: Judgment aggregation is a field in which individuals are required to vote for or against a certain decision (the conclusion) while providing reasons for their choice. The reasons and the conclusion are logically connected propositions. The problem is how a collective judgment on logically interconnected propositions can be defined from individual judgments on the same propositions. It turns out that, despite the fact that the individuals are logically consistent, the aggregation of their judgments may lead to an inconsistent group outcome, where the reasons do not support the conclusion. However, in this paper we claim that collective irrationality should not be the only worry of judgment aggregation. For example, judgment aggregation would not reject a consistent combination of reasons and conclusion that no member voted for. In our view this may not be a desirable solution. This motivates our research about when a social outcome is 'compatible' with the individuals' judgments. The key notion that we want to capture is that any individual member has to be able to defend the collective decision. This is guaranteed when the group outcome is compatible with its members' views. Judgment aggregation problems are usually studied using classical propositional logic. However, for our analysis we use an argumentation approach to judgment aggregation problems. Indeed the question of how individual evaluations can be combined into a collective one can also be addressed in abstract argumentation. We introduce three aggregation operators that satisfy the condition above, and we offer two definitions of compatibility. Not only does our proposal satisfy a good number of standard judgment aggregation postulates, but it also avoids the problem of individual members of a group having to become committed to a group judgment that is in conflict with their own individual positions.

133 citations
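The inconsistency described in the abstract above (the discursive paradox) is easy to reproduce concretely. A minimal Python sketch, with a hypothetical three-judge profile over premises p, q and the conclusion p AND q:

```python
# Doctrinal paradox: premise-wise majority conflicts with the conclusion.
# Each individual judge holds a logically consistent judgment set.
judges = [
    {"p": True,  "q": True,  "p&q": True},   # consistent
    {"p": True,  "q": False, "p&q": False},  # consistent
    {"p": False, "q": True,  "p&q": False},  # consistent
]

def majority(issue):
    """True iff a strict majority of judges accepts the issue."""
    return sum(j[issue] for j in judges) > len(judges) / 2

collective = {issue: majority(issue) for issue in ("p", "q", "p&q")}
print(collective)  # {'p': True, 'q': True, 'p&q': False}
```

The premise-wise outcome accepts both p and q yet rejects p AND q: a logically inconsistent group judgment despite every judge being consistent, which is exactly the collective irrationality discussed above.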


Journal ArticleDOI
TL;DR: Petri Net Plans (PNPs), a language based on Petri Nets (PNs), is presented, which allows for intuitive and effective robot and multi-robot behavior design.
Abstract: Programming the behavior of multi-robot systems is a challenging task which has a key role in developing effective systems in many application domains. In this paper, we present Petri Net Plans (PNPs), a language based on Petri Nets (PNs), which allows for intuitive and effective robot and multi-robot behavior design. PNPs are very expressive and support a rich set of features that are critical to develop robotic applications, including sensing, interrupts and concurrency. As a central feature, PNPs allow for a formal analysis of plans based on standard PN tools. Moreover, PNPs are suitable for modeling multi-robot systems and the developed behaviors can be executed in a distributed setting, while preserving the properties of the modeled system. PNPs have been deployed in several robotic platforms in different application domains. In this paper, we report three case studies, which address complex single robot plans, coordination and collaboration.

127 citations


Journal ArticleDOI
TL;DR: A typical BDI-style agent-oriented programming language that enhances usual BDI programming style with three distinguished features: declarative goals, look-ahead planning, and failure handling.
Abstract: Agents are an important technology that have the potential to take over contemporary methods for analysing, designing, and implementing complex software. The Belief-Desire-Intention (BDI) agent paradigm has proven to be one of the major approaches to intelligent agent systems, both in academia and in industry. Typical BDI agent-oriented programming languages rely on user-provided "plan libraries" to achieve goals, and online context sensitive subgoal selection and expansion. These allow for the development of systems that are extremely flexible and responsive to the environment, and as a result, well suited for complex applications with (soft) real-time reasoning and control requirements. Nonetheless, complex decision making that goes beyond, but is compatible with, run-time context-dependent plan selection is one of the most natural and important next steps within this technology. In this paper we develop a typical BDI-style agent-oriented programming language that enhances usual BDI programming style with three distinguished features: declarative goals, look-ahead planning, and failure handling. First, an account that mixes both procedural and declarative aspects of goals is necessary in order to reason about important properties of goals and to decouple plans from what these plans are meant to achieve. Second, lookahead deliberation about the effects of one choice of expansion over another is clearly desirable or even mandatory in many circumstances so as to guarantee goal achievability and to avoid undesired situations. Finally, a failure handling mechanism, suitably integrated with both declarative goals and planning, is required in order to model an adequate level of commitment to goals, as well as to be consistent with most real BDI implemented systems.

101 citations


Journal ArticleDOI
TL;DR: This paper focuses on coalition formation for task allocation in both multi-agent and multi-robot domains, and presents two different algorithms that provide guarantees on utility, complementing Shehory and Kraus' algorithm, which provides guarantees on solution cost.
Abstract: This paper focuses on coalition formation for task allocation in both multi-agent and multi-robot domains. Two different problem formalizations are considered, one for multi-agent domains where agent resources are transferable and one for multi-robot domains. We demonstrate complexity theoretic differences between both models and show that, under both, the coalition formation problem, with m tasks, is NP-hard to solve exactly and NP-hard to approximate within a factor of O(m^{1-ε}) for all ε > 0. Two natural restrictions of the coalition formation problem are considered. In the first situation agents are drawn from a set of j types, and agents of each type are indistinguishable from one another. For this situation a dynamic programming based approach is presented which, for fixed j, finds the optimal coalition structure in polynomial time and is applicable in both multi-agent and multi-robot domains. We then consider situations where coalitions are restricted to k or fewer agents. We present two different algorithms. Each guarantees the generated solution to be within a constant factor, for fixed k, of the optimal in terms of utility. Our algorithms complement Shehory and Kraus' algorithm (Artif Intell 101(1–2):165–200, 1998), which provides guarantees on solution cost, as ours provide guarantees on utility. Our algorithm for general multi-agent domains is a modification of, and has the same running time as, Shehory and Kraus' algorithm, while our approach for multi-robot domains runs in time O(n^{3/2} m), much faster than Vig and Adams' (J Intell Robot Syst 50(1):85–118, 2007) modifications to Shehory and Kraus' algorithm for multi-robot domains, which ran in time O(n^k m), for n agents and m tasks.

100 citations


Journal ArticleDOI
TL;DR: It is empirically show how Action-GDL using a novel distributed post-processing heuristic can outperform DCPOP, and by extension DPOP, even when the latter uses the best arrangement provided by multiple state-of-the-art heuristics.
Abstract: In this paper we propose a novel message-passing algorithm, the so-called Action-GDL, as an extension to the generalized distributive law (GDL) to efficiently solve DCOPs. Action-GDL provides a unifying perspective of several dynamic programming DCOP algorithms that are based on GDL, such as DPOP and DCPOP algorithms. We empirically show how Action-GDL using a novel distributed post-processing heuristic can outperform DCPOP, and by extension DPOP, even when the latter uses the best arrangement provided by multiple state-of-the-art heuristics.

80 citations


Journal ArticleDOI
TL;DR: The core notions of the Interaction-Oriented Design of Agent simulations (IODA) approach to simulation design are presented, which includes a design methodology, a model, an architecture and also JEDI, a simple implementation of IODA concepts for reactive agents.
Abstract: Multi-Agent Systems (MAS) design methodologies and Integrated Development Environments exhibit many interesting properties that also support simulation design. Yet, in their current form, they are not appropriate enough to model Multi-Agent Based Simulations (MABS). Indeed, their design is focused on the functionalities to be achieved by the MAS and the allocation of these functionalities among software agents. In that context, the most important point of design is the organization of the agents and how they communicate with each other. By contrast, MABS aim at studying emergent phenomena, the origin of which lies in the interactions between entities and their interaction with the environment. In that context, the interactions are not limited to exchanging messages but can also be fundamental physical interactions or any other actions involving simultaneously the environment and one or several agents. To deal with this issue, this paper presents the core notions of the Interaction-Oriented Design of Agent simulations (IODA) approach to simulation design. It includes a design methodology, a model, an architecture and also JEDI, a simple implementation of IODA concepts for reactive agents. First of all, our approach focuses on the design of an agent-independent specification of behaviors, called interactions. These interactions are not limited to the analysis phase of simulation: they are made concrete both in the model and at the implementation stage. In addition, no distinction is made between agents and objects: all entities of the simulation are agents. Owing to this principle, designing which interactions occur between agents, as well as how agents act, is achieved by means of an intuitive plug-and-play process, where interaction abilities are distributed among the agents. Besides, the guidelines provided by IODA are not limited to the specification of the model as they help the designer from the very beginning towards a concrete implementation of the simulation.

80 citations


Journal ArticleDOI
TL;DR: This article compares computational models of the role allocation problem, presents the notion of explicitly versus implicitly defined roles, gives a survey of the methods used to approach role allocation problems, and concludes with a list of open research questions related to roles in multi-agent systems.
Abstract: In cooperative multi-agent systems, roles are used as a design concept when creating large systems, they are known to facilitate specialization of agents, and they can help to reduce interference in multi-robot domains. The types of tasks that the agents are asked to solve and the communicative capabilities of the agents significantly affect the way roles are used in cooperative multi-agent systems. Along with a discussion of these issues about roles in multi-agent systems, this article compares computational models of the role allocation problem, presents the notion of explicitly versus implicitly defined roles, gives a survey of the methods used to approach role allocation problems, and concludes with a list of open research questions related to roles in multi-agent systems.

77 citations


Journal ArticleDOI
TL;DR: Negotiation agents in this paper are designed to adjust the number of tentative agreements for each resource and the amount of concession they are willing to make in response to changing market conditions and negotiation situations.
Abstract: In electronic commerce markets where selfish agents behave individually, agents often have to acquire multiple resources in order to accomplish a high level task with each resource acquisition requiring negotiations with multiple resource providers. Thus, it is crucial to efficiently coordinate these interrelated negotiations. This paper presents the design and implementation of agents that concurrently negotiate with other entities for acquiring multiple resources. Negotiation agents in this paper are designed to adjust (1) the number of tentative agreements for each resource and (2) the amount of concession they are willing to make in response to changing market conditions and negotiation situations. In our approach, agents utilize a time-dependent negotiation strategy in which the reserve price of each resource is dynamically determined by (1) the likelihood that negotiation will not be successfully completed (conflict probability), (2) the expected agreement price of the resource, and (3) the expected number of final agreements. The negotiation deadline of each resource is determined by its relative scarcity. Agents are permitted to decommit from agreements by paying a time-dependent penalty, and a buyer can make more than one tentative agreement for each resource. The maximum number of tentative agreements for each resource made by an agent is constrained by the market situation. Experimental results show that our negotiation strategy achieved significantly more utilities than simpler strategies.

75 citations
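The abstract above describes concessions that depend on time and deadlines. The paper's strategy additionally adjusts reserve prices and tentative agreements dynamically; as a simpler illustration of the time-dependent core, here is a sketch of a standard polynomial concession tactic (function name and parameters are illustrative, not the paper's):

```python
def proposal(t, deadline, initial, reserve, beta=1.0):
    """Time-dependent concession tactic: offer `initial` at t = 0 and
    concede polynomially toward `reserve` as the deadline approaches.
    beta < 1 concedes late (Boulware); beta > 1 concedes early (Conceder)."""
    alpha = (min(t, deadline) / deadline) ** (1.0 / beta)
    return initial + alpha * (reserve - initial)

# A buyer conceding upward from 10 to a reserve price of 20 over 100 steps.
print(proposal(0, 100, 10, 20))    # 10.0
print(proposal(50, 100, 10, 20))   # 15.0 (beta = 1 gives linear concession)
print(proposal(100, 100, 10, 20))  # 20.0
```

In the paper the reserve price itself is recomputed from conflict probability, expected agreement price and expected number of final agreements; the sketch keeps it fixed for clarity.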


Journal ArticleDOI
TL;DR: An iterative algorithm is proposed that allows the agents to send only part of their preferences, incrementally, and results in an average of 35% savings in communications, while guaranteeing that the actual winning candidate is revealed.
Abstract: Voting is an essential mechanism that allows multiple agents to reach a joint decision. The joint decision, representing a function over the preferences of all agents, is the winner among all possible (candidate) decisions. To compute the winning candidate, previous work has typically assumed that voters send their complete set of preferences for computation, and in fact this has been shown to be required in the worst case. However, in practice, it may be infeasible for all agents to send a complete set of preferences due to communication limitations and willingness to keep as much information private as possible. The goal of this paper is to empirically evaluate algorithms to reduce communication on various sets of experiments. Accordingly, we propose an iterative algorithm that allows the agents to send only part of their preferences, incrementally. Experiments with simulated and real-world data show that this algorithm results in an average of 35% savings in communications, while guaranteeing that the actual winning candidate is revealed. A second algorithm applies a greedy heuristic to save up to 90% of communications. While this heuristic algorithm cannot guarantee that a true winning candidate is found, we show that in practice, close approximations are obtained.

66 citations
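The paper's incremental algorithm is evaluated empirically; as an illustration of the underlying idea, the sketch below elicits Borda rankings one position at a time and stops as soon as score bounds certify the winner. This is a simplified stand-in under the Borda rule, not the authors' algorithm, and the profile is hypothetical:

```python
def iterative_borda_winner(profile):
    """Elicit rankings top-down, stopping once the Borda winner is certain.

    profile: list of full rankings (lists of candidate ids); the elicitor
    only ever inspects the first k entries of each ranking.
    """
    m = len(profile[0])  # number of candidates; position j scores m-1-j
    for k in range(1, m + 1):
        lo = {c: 0 for c in range(m)}  # guaranteed Borda score so far
        hi = {c: 0 for c in range(m)}  # best possible Borda score
        for ranking in profile:
            revealed = ranking[:k]
            for pos, c in enumerate(revealed):
                lo[c] += m - 1 - pos
                hi[c] += m - 1 - pos
            for c in range(m):
                if c not in revealed:
                    hi[c] += m - 1 - k  # best still-unrevealed position
        for c in range(m):
            if all(lo[c] > hi[d] for d in range(m) if d != c):
                return c, k  # winner certain after k revealed ranks
    return max(lo, key=lo.get), m  # full revelation needed (e.g. ties)

# Hypothetical 3-voter, 4-candidate profile: winner certain after 2 ranks.
profile = [[0, 1, 2, 3], [0, 2, 1, 3], [1, 0, 2, 3]]
print(iterative_borda_winner(profile))  # (0, 2)
```

Because each candidate's guaranteed score eventually exceeds every rival's best possible score, the true winner is revealed without the full profiles, mirroring the communication savings the paper reports.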


Journal ArticleDOI
TL;DR: An empirical study is presented that demonstrates (through simulation) the advantages of this interest-based negotiation approach over the more classic monotonic concession approach to negotiation.
Abstract: While argumentation-based negotiation has been accepted as a promising alternative to game-theoretic or heuristic-based negotiation, no evidence has been provided to confirm this theoretical advantage. We propose a model of bilateral negotiation extending a simple monotonic concession protocol by allowing the agents to exchange information about their underlying interests and possible alternatives to achieve them during the negotiation. We present an empirical study that demonstrates (through simulation) the advantages of this interest-based negotiation approach over the more classic monotonic concession approach to negotiation.

Journal ArticleDOI
TL;DR: This paper presents the first design-to-time constant factor approximation algorithms for coalition structure generation, which guarantee high quality solutions quickly, and derives from them an anytime algorithm with the same worst case time complexity as the current best design-to-time algorithms.
Abstract: Coalition structure generation is a central problem in characteristic function games. Most algorithmic work to date can be classified into one of three broad categories: anytime algorithms, design-to-time algorithms and heuristic algorithms [5]. This paper focuses on the former two approaches. Both design-to-time and anytime algorithms have pros and cons. While design-to-time algorithms guarantee finding an optimal solution, they must be run to completion in order to generate any solution. Anytime algorithms, however, permit premature termination while providing solutions of ever increasing quality along with quality guarantees. Design-to-time algorithms have a better worst case runtime (O(3^n) for n agents) compared to the current anytime algorithms (O(n^n) for n agents), but do not provide the flexibility of anytime algorithms. In this paper we present the first design-to-time constant factor approximation algorithms for coalition structure generation that guarantee high quality solutions quickly. We show how our approach can be used as an anytime algorithm, which combines both the worst case runtime of the design-to-time algorithms and the flexibility of the anytime algorithms. This results in the first anytime algorithm for coalition structure generation which has the same worst case time complexity as the current best design-to-time algorithms.
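The O(3^n) design-to-time bound cited above corresponds to the standard dynamic program over agent subsets: the best partition value of a coalition S is either v(S) itself or the best two-way split of S. A minimal sketch of that classic DP (not the paper's approximation algorithm; the characteristic function below is a made-up example):

```python
def optimal_csg_value(n, v):
    """Value of an optimal coalition structure via DP over subsets, O(3^n).

    n: number of agents; coalitions are bitmasks over agents 0..n-1.
    v: characteristic function mapping a bitmask to its coalition value.
    """
    full = (1 << n) - 1
    best = [0.0] * (1 << n)
    for S in range(1, full + 1):
        best[S] = v(S)
        T = (S - 1) & S  # enumerate proper non-empty submasks of S
        while T:
            best[S] = max(best[S], best[T] + best[S ^ T])
            T = (T - 1) & S
    return best[full]

# Hypothetical 3-agent game: pairs are worth more than the grand coalition.
values = {0b001: 1, 0b010: 1, 0b100: 1,
          0b011: 3, 0b101: 3, 0b110: 3, 0b111: 3.5}
print(optimal_csg_value(3, lambda S: values.get(S, 0)))  # 4: a pair plus a singleton
```

Iterating over all submasks of all subsets costs sum over S of 2^|S| = 3^n steps, which is where the O(3^n) figure in the abstract comes from.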

Journal ArticleDOI
TL;DR: This work presents a modal logic intended to support reasoning about judgment aggregation scenarios (and hence, as a special case, about preference aggregation): the logical language is interpreted directly in judgment aggregation rules, and it is shown that the logic can express aggregation rules such as majority voting; rule properties such as independence; and results such as the discursive paradox, Arrow’s theorem and Condorcet's paradox—which are derivable as formal theorems of the logic.
Abstract: Agents that must reach agreements with other agents need to reason about how their preferences, judgments, and beliefs might be aggregated with those of others by the social choice mechanisms that govern their interactions. The emerging field of judgment aggregation studies aggregation from a logical perspective, and considers how multiple sets of logical formulae can be aggregated to a single consistent set. As a special case, judgment aggregation can be seen to subsume classical preference aggregation. We present a modal logic that is intended to support reasoning about judgment aggregation scenarios (and hence, as a special case, about preference aggregation): the logical language is interpreted directly in judgment aggregation rules. We present a sound and complete axiomatisation. We show that the logic can express aggregation rules such as majority voting; rule properties such as independence; and results such as the discursive paradox, Arrow's theorem and Condorcet's paradox, which are derivable as formal theorems of the logic. The logic is parameterised in such a way that it can be used as a general framework for comparing the logical properties of different types of aggregation, including classical preference aggregation. As a case study we present a logical study of the neutrality lemma, including a formal proof; the lemma is the main ingredient in a well-known proof of Arrow's theorem.

Journal ArticleDOI
TL;DR: This paper addresses team formation centered on task allocation in the RoboCup Rescue with a swarm intelligence based approach that addresses all four characteristics of extreme teams, and compares it to two other GAP-based algorithms.
Abstract: This paper addresses team formation in the RoboCup Rescue centered on task allocation. We follow a previous approach that is based on so-called extreme teams, which have four key characteristics: agents act in domains that are dynamic; agents may perform multiple tasks; agents have overlapping functionality regarding the execution of each task but differing levels of capability; and some tasks may depict constraints such as simultaneous execution. So far these four characteristics have not been fully tested in domains such as the RoboCup Rescue. We use a swarm intelligence based approach, address all characteristics, and compare it to two other GAP-based algorithms. Experiments where computational effort, communication load, and the score obtained in the RoboCup Rescue are measured, show that our approach outperforms the others.

Journal ArticleDOI
TL;DR: A novel, agent-oriented, approach that works by repairing violations of desired consistency rules in a design model, using the Object Constraint Language (OCL) and the Unified Modelling Language (UML) metamodel.
Abstract: Software maintenance and evolution is a lengthy and expensive phase in the life cycle of a software system. In this paper we focus on the change propagation problem: given a primary change that is made in order to meet a new or changed requirement, what additional, secondary, changes are needed? We propose a novel, agent-oriented, approach that works by repairing violations of desired consistency rules in a design model. Such consistency constraints are specified using the Object Constraint Language (OCL) and the Unified Modelling Language (UML) metamodel, which form the key inputs to our change propagation framework. The underlying change propagation mechanism of our framework is based on the well-known Belief-Desire-Intention (BDI) agent architecture. Our approach represents change options for repairing inconsistencies using event-triggered plans, as is done in BDI agent platforms. This naturally reflects the cascading nature of change propagation, where each change (primary or secondary) can require further changes to be made. We also propose a new method for generating repair plans from OCL consistency constraints. Furthermore, a given inconsistency will typically have a number of repair plans that could be used to restore consistency, and we propose a mechanism for semi-automatically selecting between alternative repair plans. This mechanism, which is based on a notion of cost, takes into account cascades (where fixing the violation of a constraint breaks another constraint), and synergies between constraints (where fixing the violation of a constraint also fixes another violated constraint). Finally, we report on an evaluation of the approach, covering effectiveness, efficiency and scalability.

Journal ArticleDOI
TL;DR: This paper presents a novel computational model, human-inspired computational fairness, in which decentralized human sanctioning mechanisms are computationally modelled such that fair and optimal solutions emerge from agents confronted with social dilemmas.
Abstract: In many common tasks for multi-agent systems, assuming individually rational agents leads to inferior solutions. Numerous researchers found that fairness needs to be considered in addition to individual reward, and proposed valuable computational models of fairness. In this paper, we argue that there are two opportunities for improvement. First, existing models are not specifically tailored to addressing a class of tasks named social dilemmas, even though such tasks are quite common in the context of multi-agent systems. Second, the models generally rely on the assumption that all agents will and can adhere to these models, which is not always the case. We therefore present a novel computational model, i.e., human-inspired computational fairness. Upon being confronted with social dilemmas, humans may apply a number of fully decentralized sanctioning mechanisms to ensure that optimal, fair solutions emerge, even though some participants may be deciding purely on the basis of individual reward. In this paper, we show how these human mechanisms may be computationally modelled, such that fair and optimal solutions emerge from agents being confronted with social dilemmas.

Journal ArticleDOI
TL;DR: The fact that multiple votes by each voter are known is used to demonstrate, both analytically and empirically, that a method based on maximum likelihood estimation is superior to the simple majority rule for arriving at true collective judgments.
Abstract: Given the judgments of multiple voters regarding some issue, it is generally assumed that the best way to arrive at some collective judgment is by following the majority. We consider here the now common case in which each voter expresses some (binary) judgment regarding each of a multiplicity of independent issues and assume that each voter has some fixed (unknown) probability of making a correct judgment for any given issue. We leverage the fact that multiple votes by each voter are known in order to demonstrate, both analytically and empirically, that a method based on maximum likelihood estimation is superior to the simple majority rule for arriving at true collective judgments.
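The maximum-likelihood method described above has a simple closed form in the binary case: if voter i is correct with probability p_i, the MLE of the true judgment weights vote i by log(p_i / (1 - p_i)) instead of counting votes equally. In the paper the competences are unknown and estimated from each voter's multiple votes; in this sketch they are simply given:

```python
from math import log

def mle_judgment(votes, competences):
    """Weighted vote: each voter's weight is the log-odds of being correct."""
    score = sum((1 if v else -1) * log(p / (1 - p))
                for v, p in zip(votes, competences))
    return score > 0

# One highly competent voter can outweigh two weak dissenters.
votes = [True, False, False]
competences = [0.9, 0.6, 0.6]
print("majority:", sum(votes) > len(votes) / 2)  # majority: False
print("MLE:", mle_judgment(votes, competences))  # MLE: True
```

Here log(9) from the 0.9-competent voter exceeds the two log(1.5) weights combined, so the MLE overturns the simple majority, which is the kind of case where the paper shows the MLE-based method is superior.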

Journal ArticleDOI
TL;DR: The Focal Point Learning approach results in classifiers with a 40–80% higher correct classification rate, and shorter training time, than when using regular classifiers, and a 35% higher correct classification rate than classical focal point techniques without learning.
Abstract: We consider an automated agent that needs to coordinate with a human partner when communication between them is not possible or is undesirable (tacit coordination games). Specifically, we examine situations where an agent and human attempt to coordinate their choices among several alternatives with equivalent utilities. We use machine learning algorithms to help the agent predict human choices in these tacit coordination domains. Experiments have shown that humans are often able to coordinate with one another in communication-free games, by using focal points, "prominent" solutions to coordination problems. We integrate focal point rules into the machine learning process, by transforming raw domain data into a new hypothesis space. We present extensive empirical results from three different tacit coordination domains. The Focal Point Learning approach results in classifiers with a 40–80% higher correct classification rate, and shorter training time, than when using regular classifiers, and a 35% higher correct classification rate than classical focal point techniques without learning. In addition, the integration of focal points into learning algorithms results in agents that are more robust to changes in the environment. We also present several results describing various biases that might arise in Focal Point based coordination.

Journal ArticleDOI
TL;DR: In this paper, a family of sound and complete logics for reasoning about deliberation strategies for SimpleAPL programs is presented; the logics allow safety and liveness properties of SimpleAPL programs to be proved under different deliberation strategies.
Abstract: We present a family of sound and complete logics for reasoning about deliberation strategies for SimpleAPL programs. SimpleAPL is a fragment of the agent programming language 3APL designed for the implementation of cognitive agents with beliefs, goals and plans. The logics are variants of PDL, and allow us to prove safety and liveness properties of SimpleAPL agent programs under different deliberation strategies. We show how to axiomatise different deliberation strategies for SimpleAPL programs, and, for each strategy we prove a correspondence between the operational semantics of SimpleAPL and the models of the corresponding logic. We illustrate the utility of our approach with an example in which we show how to verify correctness properties for a simple agent program under different deliberation strategies.

Journal ArticleDOI
TL;DR: A practical methodology is described through which a problem of this class can be encoded as a Markov Decision Process, together with the MDP REUSE method, which reuses the lower level MDP to allocate resources to the parallel subproblems.
Abstract: We consider a problem domain where coalitions of agents are formed in order to execute tasks. Each task is assigned at most one coalition of agents, and the coalition can be reorganized during execution. Executing a task means bringing it to one of the desired terminal states, which might take several time steps. The state of the task evolves even if no coalition is assigned to its execution and depends nondeterministically on the cumulative actions of the agents in the coalition. Furthermore, we assume that the reward obtained for executing a task evolves in time: the more the execution of the task is delayed, the lesser the reward. A representative example of this class of problems is the allocation of firefighters to fires in a disaster rescue environment. We describe a practical methodology through which a problem of this class can be encoded as a Markov Decision Process. Due to the three levels of factoring in the resulting MDP (the states, actions and rewards are composites of the original features of the problem) the resulting MDP can be directly solved only for small problem instances. We describe two methods for parallel decomposition of the MDP: the MDP RSUA approach for random sampling and uniform allocation and the MDP REUSE method which reuses the lower level MDP to allocate resources to the parallel subproblems. Through an experimental study which models the problem domain using the fire simulation components of the RoboCup Rescue simulator, we show that both methods significantly outperform heuristic approaches and MDP REUSE provides an overall higher performance than MDP RSUA.

Journal ArticleDOI
TL;DR: In this paper, the authors consider algorithms for distributed constraint optimisation problems (DCOPs), using a potential game characterisation of DCOPs, and decompose eight DCOP algorithms, taken from the game theory and computer science literatures, into their salient components.
Abstract: In this paper, we consider algorithms for distributed constraint optimisation problems (DCOPs). Using a potential game characterisation of DCOPs, we decompose eight DCOP algorithms, taken from the game theory and computer science literatures, into their salient components. We then use these components to construct three novel hybrid algorithms. Finally, we empirically evaluate all eleven algorithms, in terms of solution quality, timeliness and communication resources used, in a series of graph colouring experiments. Our experimental results show the existence of several performance trade-offs (such as quick convergence to a solution, but with a cost of high communication needs), which may be exploited by a system designer to tailor a DCOP algorithm to suit their mix of requirements.
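For a concrete feel of the experimental setting, here is a minimal best-response loop on a graph-colouring game, one of the simplest algorithms of the kind the abstract compares. Because graph colouring is a potential game (the potential being the number of conflicting edges), sequential strict best responses must terminate at a Nash equilibrium. The graph, colour set, and seed are illustrative and not taken from the paper.

```python
import random

# Best-response dynamics for graph colouring: each vertex is an agent
# whose cost is the number of neighbours sharing its colour.

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 0)]
n, colours = 5, [0, 1, 2]
neigh = {v: set() for v in range(n)}
for a, b in edges:
    neigh[a].add(b)
    neigh[b].add(a)

def conflicts(v, c, assign):
    """Number of neighbours of v that would share colour c."""
    return sum(1 for u in neigh[v] if assign[u] == c)

random.seed(0)
assign = {v: random.choice(colours) for v in range(n)}

changed = True
while changed:                      # terminates: each strict improvement
    changed = False                 # lowers the global conflict count
    for v in range(n):
        best = min(colours, key=lambda c: conflicts(v, c, assign))
        if conflicts(v, best, assign) < conflicts(v, assign[v], assign):
            assign[v] = best
            changed = True

total = sum(assign[a] == assign[b] for a, b in edges)
```

At termination no agent can reduce its own conflict count, i.e. the colouring is a Nash equilibrium, though not necessarily conflict-free.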

Journal ArticleDOI
TL;DR: This paper demonstrates that the characterization proposed in [1] is incorrect, gives a correct characterization, and provides an example showing that the corrected characterization differs from that of Auletta et al.
Abstract: In the case of mechanism design with partial verification, where agents have restrictions on misreporting, the Revelation Principle does not always hold. Auletta et al. (J Auton Agent Multi-Agent Syst, to appear) proposed a characterization of correspondences for which the Revelation Principle holds, i.e., they described restrictions on misreporting under which a social choice function is implementable if and only if it is truthfully implementable. In this paper, we demonstrate that the characterization proposed in [1] is incorrect, and, building on their work, give a correct characterization. We also provide an example that demonstrates that our characterization is different from that of Auletta et al.

Journal ArticleDOI
TL;DR: This work shows that, although non-truthful implementations may exist, they are hard to find: it is NP-complete to decide whether a given social choice function can be implemented in a non-truthful manner, or even whether it can be implemented at all.
Abstract: The central question in mechanism design is how to implement a given social choice function. One of the most studied concepts is that of truthful implementations in which truth-telling is always the best response of the players. The Revelation Principle says that one can focus on truthful implementations without loss of generality (if there is no truthful implementation then there is no implementation at all). Green and Laffont (Rev Econ Stud 53:447---456, 1986) showed that, in the scenario in which players' responses can be partially verified, the revelation principle holds only in some particular cases. When the Revelation Principle does not hold, non-truthful implementations become interesting since they might be the only way to implement a social choice function of interest. In this work we show that, although non-truthful implementations may exist, they are hard to find. Namely, it is NP-complete to decide if a given social choice function can be implemented in a non-truthful manner, or even if it can be implemented at all. This is in contrast to the fact that truthful implementability can be efficiently recognized, even when partial verification of the agents is allowed. Our results also show that there is no "simple" characterization of those social choice functions for which it is worth looking for non-truthful implementations.

Journal ArticleDOI
TL;DR: This work shows how B-Tropos extends the Tropos methodology by means of declarative business constraints, inspired by the ConDec graphical language, and describes how B-Tropos models can be automatically formalized in computational logic.
Abstract: We propose $${\mathcal{B}}$$ -Tropos as a modeling framework to support agent-oriented systems engineering, from high-level requirements elicitation down to execution-level tasks. In particular, we show how $${\mathcal{B}}$$ -Tropos extends the Tropos methodology by means of declarative business constraints, inspired by the ConDec graphical language. We demonstrate the functioning of $${\mathcal{B}}$$ -Tropos using a running example inspired by a real-world industrial scenario, and we describe how $${\mathcal{B}}$$ -Tropos models can be automatically formalized in computational logic, discussing formal properties of the resulting framework and its verification capabilities.

Journal ArticleDOI
TL;DR: The theory of single-peaked preferences is extended from points to ranges to obtain a rule that is strategy-proof under a condition on preferences, and a natural class of algorithms for approximately eliciting a median range from multiple agents is introduced and analyzed.
Abstract: We study the case where agents have preferences over ranges (intervals) of values, and we wish to elicit and aggregate these preferences. For example, consider a set of climatologist agents who are asked for their predictions for the increase in temperature between 2009 and 2100. Each climatologist submits a range, and from these ranges we must construct an aggregate range. What rule should we use for constructing the aggregate range? One issue in such settings is that an agent (climatologist) may misreport her range to make the aggregate range coincide more closely with her own (true) most-preferred range. We extend the theory of single-peaked preferences from points to ranges to obtain a rule (the median-of-ranges rule) that is strategy-proof under a condition on preferences. We then introduce and analyze a natural class of algorithms for approximately eliciting a median range from multiple agents. We also show sufficient conditions under which such an approximate elicitation algorithm still incentivizes agents to answer truthfully. Finally, we consider the possibility that ranges can be refined when the topic is more completely specified (for example, the increase in temperature on the North Pole given the failure of future climate pacts). We give a framework and algorithms for selectively specifying the topic further based on queries to agents.
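One plausible reading of a median-of-ranges rule is to aggregate intervals endpoint-wise: the median of the reported lower endpoints paired with the median of the upper endpoints. The sketch below implements that reading on made-up climate ranges; the condition on preferences under which the paper proves strategy-proofness is not modelled here.

```python
import statistics

def median_of_ranges(ranges):
    """Aggregate intervals endpoint-wise: median of lows, median of highs."""
    lows = [lo for lo, _ in ranges]
    highs = [hi for _, hi in ranges]
    return (statistics.median(lows), statistics.median(highs))

# Hypothetical temperature-increase predictions (degrees C) from three agents.
reports = [(1.5, 4.0), (2.0, 5.5), (1.0, 3.0)]
aggregate = median_of_ranges(reports)  # (1.5, 4.0)
```

With an odd number of agents, no single agent can pull either median endpoint toward her own report by misreporting past it, which is the usual intuition behind strategy-proofness of median rules for single-peaked preferences.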

Journal ArticleDOI
TL;DR: This paper introduces the execution model of a declarative programming language intended for agent applications, which includes functional and logic programming idioms, higher-order functions, modal computation, probabilistic computation, and some theorem-proving capabilities.
Abstract: This paper introduces the execution model of a declarative programming language intended for agent applications. Features supported by the language include functional and logic programming idioms, higher-order functions, modal computation, probabilistic computation, and some theorem-proving capabilities. The need for these features is motivated and examples are given to illustrate the central ideas.

Journal Article
TL;DR: In this paper, an approximate measure of behavioral equivalence is introduced and used to group models, further compressing the model space for decision making and game play in multiagent settings.
Abstract: Decision making and game play in multiagent settings must often contend with behavioral models of other agents in order to predict their actions. One approach that reduces the complexity of the unconstrained model space is to group models that tend to be behaviorally equivalent. In this paper, we seek to further compress the model space by introducing an approximate measure of behavioral equivalence and using it to group models.

Journal ArticleDOI
TL;DR: It is demonstrated that an optimal solution to the problem of repeatedly choosing actions so as to be fairest to the multiple beneficiaries of those actions is intractable, and two good approximation algorithms are presented.
Abstract: How does one repeatedly choose actions so as to be fairest to the multiple beneficiaries of those actions? We examine approaches to discovering sequences of actions for which the worst-off beneficiaries are treated maximally well, then secondarily the second-worst-off, and so on. We formulate the problem for the situation where the sequence of action choices continues forever; this problem may be reduced to a set of linear programs. We then extend the problem to situations where the game ends at some unknown finite time in the future. We demonstrate that an optimal solution is intractable, and present two good approximation algorithms.
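The paper reduces the infinite-horizon version of this problem to a set of linear programs. As a toy stand-in, the sketch below finds a randomized mix over two one-shot actions that maximizes the expected utility of the worst-off beneficiary, using a grid search in place of an LP solver. The payoff numbers are invented for illustration.

```python
# Maximin sketch: pick a probability p of taking action "a" so that the
# worst-off of two beneficiaries is treated as well as possible.

# benefit[action][beneficiary]
benefit = {
    "a": [3.0, 0.0],  # action "a" favours beneficiary 0
    "b": [0.0, 2.0],  # action "b" favours beneficiary 1
}

def worst_off(p):
    """Expected utility of the worst-off beneficiary under mix p."""
    u = [p * benefit["a"][i] + (1 - p) * benefit["b"][i] for i in range(2)]
    return min(u)

# A fine grid search stands in for solving the maximin LP.
best_p = max((i / 1000 for i in range(1001)), key=worst_off)
```

The analytic optimum balances the two beneficiaries: 3p = 2(1 - p) gives p = 0.4 with maximin value 1.2, which the grid search recovers exactly.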

Journal ArticleDOI
TL;DR: A new model, the OC-DEC-MDP, is presented that allows large multi-agent decision problems with temporal and precedence constraints to be represented, and polynomial algorithms are proposed to efficiently solve problems formalized by OC-DEC-MDPs.
Abstract: Optimizing the operation of cooperative multi-agent systems that can deal with large and realistic problems has become an important focal area of research in the multi-agent community. In this paper, we first present a new model, the OC-DEC-MDP (Opportunity Cost Decentralized Markov Decision Process), that allows us to represent large multi-agent decision problems with temporal and precedence constraints. Then, we propose polynomial algorithms to efficiently solve problems formalized by OC-DEC-MDPs. The problems we deal with consist of a set of agents that have to execute a set of tasks in a cooperative way. The agents cannot communicate during task execution and they must respect resource and temporal constraints. Our approach is based on Decentralized Markov Decision Processes (DEC-MDPs) and uses the concept of opportunity cost borrowed from economics to obtain approximate control policies. Experimental results show that our approach produces good quality solutions for complex problems which are out of reach of existing approaches.

Journal ArticleDOI
TL;DR: A formal semantics of this decomposition is given, along with operators that introduce various ways of recursively defining agents and proof schemas that allow the correctness of a recursive agent to be proved.
Abstract: The purpose of this article is to formalise the notion of recursive agent by extending the Goal Decomposition Tree formalism (GDT). A formal semantics of this decomposition is given, as well as the definition of operators to introduce various ways of recursively defining agents. Design patterns, that show various use cases for recursive agents, are also presented. Finally, to preserve the essential GDT characteristics (that is to allow the verification of agents behaviours), we give proof schemas that allow a proof of the correctness of a recursive agent.