
Showing papers by "Ya'akov Gal" published in 2009


Proceedings Article
10 May 2009
TL;DR: A novel probabilistic representation of other agents' beliefs about the recipes selected for their own or for the group activity is proposed, which is compact, and thus makes reasoning about helpful behavior tractable.
Abstract: This paper considers the design of agent strategies for deciding whether to help other members of a group with whom an agent is engaged in a collaborative activity. Three characteristics of collaborative planning must be addressed by these decision-making strategies: agents may have only partial information about their partners' plans for sub-tasks of the collaborative activity; the effectiveness of helping may not be known a priori; and helping actions have some associated cost. The paper proposes a novel probabilistic representation of other agents' beliefs about the recipes selected for their own or for the group activity, given partial information. This representation is compact, and thus makes reasoning about helpful behavior tractable. The paper presents a decision-theoretic mechanism that uses this representation to make decisions about two kinds of helpful actions: communicating information relevant to a partner's plans for some sub-action, and adding domain actions that are helpful to other agent(s) into the collaborative plan. This mechanism includes a set of rules for reasoning about the utility of helpful actions and the cost incurred by doing them. It was tested using a multi-agent test-bed with configurations that varied agents' uncertainty about the world, their uncertainty about each other's capabilities or resources, and the cost of helpful behavior. In all cases, agents using the decision-theoretic mechanism to decide whether to help outperformed agents using purely axiomatic rules.
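A minimal sketch of the expected-utility reasoning the abstract describes, assuming a discrete probabilistic belief over which recipe the partner has selected; all function names and numbers are illustrative, not the paper's implementation:

```python
def expected_help_utility(recipe_beliefs, benefit_if_recipe, cost):
    """recipe_beliefs: recipe -> probability. benefit_if_recipe:
    recipe -> benefit of helping if the partner follows that recipe.
    Returns the net expected utility of the helpful action."""
    expected_benefit = sum(p * benefit_if_recipe.get(recipe, 0.0)
                           for recipe, p in recipe_beliefs.items())
    return expected_benefit - cost

def should_help(recipe_beliefs, benefit_if_recipe, cost):
    # Help only when the expected benefit to the collaboration
    # outweighs the cost of the helpful action.
    return expected_help_utility(recipe_beliefs, benefit_if_recipe, cost) > 0

# The agent is 70% sure its partner chose recipe A, under which sending
# the information is worth 10; under recipe B it is worthless.
beliefs = {"A": 0.7, "B": 0.3}
benefits = {"A": 10.0, "B": 0.0}
print(should_help(beliefs, benefits, cost=5.0))  # True: 7.0 - 5.0 > 0
```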

50 citations


Proceedings ArticleDOI
29 Aug 2009
TL;DR: Results show that the magnitude of the benefit of interruption to the collaboration is a major factor influencing the likelihood that people will accept interruption requests and imply that system designers need to consider not only the possible benefits of interruptions to collaborative human-computer teams but also the way that such benefits are perceived by people.
Abstract: This paper presents a model of collaborative decision-making for groups that involve people and computer agents. The model distinguishes between actions relating to participants' commitment to the group and actions relating to their individual tasks, and uses this distinction to decompose group decision making into smaller problems that can be solved efficiently. It allows computer agents to reason about the benefits of their actions to a collaboration and the ways in which human participants perceive these benefits. The model was tested in a setting in which computer agents need to decide whether to interrupt people to obtain potentially valuable information. Results show that the magnitude of the benefit of an interruption to the collaboration is a major factor influencing the likelihood that people will accept interruption requests. They further establish that people's perception of their partner's type (human or computer) significantly affected how useful they judged interruptions to be when the benefit of the interruption was not clear-cut. These results imply that system designers need to consider not only the possible benefits of interruptions to collaborative human-computer teams but also the way that such benefits are perceived by people.
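One way to picture the interruption decision: interrupt only when the expected collaboration-level gain, discounted by how likely people are to accept, exceeds the disruption cost. The logistic acceptance curve below is an assumption that merely echoes the finding that larger, clearer benefits are accepted more often; it is not the paper's model:

```python
import math

def acceptance_probability(benefit, threshold=5.0, slope=1.0):
    # Assumed logistic acceptance curve: people accept clearly
    # beneficial interruptions and decline marginal ones.
    return 1.0 / (1.0 + math.exp(-slope * (benefit - threshold)))

def should_interrupt(benefit, interruption_cost):
    # Expected gain = benefit realized only if the request is accepted.
    expected_gain = acceptance_probability(benefit) * benefit
    return expected_gain > interruption_cost

print(should_interrupt(benefit=9.0, interruption_cost=3.0))  # True
print(should_interrupt(benefit=4.0, interruption_cost=3.0))  # False
```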

15 citations


Book ChapterDOI
01 Sep 2009
TL;DR: The recognition algorithms presented use constraint satisfaction techniques to compare user interaction histories to a set of ideal solutions, and it is found that these algorithms identified users' activities with 93% accuracy.
Abstract: Ideally designed software allows users to explore and pursue interleaving plans, which makes it challenging to automatically recognize their activities. The recognition algorithms presented here use constraint satisfaction techniques to compare user interaction histories to a set of ideal solutions. We evaluate these algorithms on data obtained from user interactions with commercially available pedagogical software, and find that they identify users' activities with 93% accuracy.
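A toy version of matching an interaction history against ideal solutions, using an in-order subsequence constraint as a stand-in for the paper's richer constraint satisfaction formulation; all names and steps are illustrative:

```python
def explains(solution_steps, history):
    """Greedy subsequence check: can the solution's steps be matched,
    in order, to actions in the interaction history?"""
    it = iter(history)
    return all(step in it for step in solution_steps)

def recognize(history, ideal_solutions):
    # Return every candidate solution consistent with the log;
    # interleaved plans may leave several candidates viable.
    return [name for name, steps in ideal_solutions.items()
            if explains(steps, history)]

ideal_solutions = {
    "factor-first": ["factor", "cancel", "simplify"],
    "expand-first": ["expand", "collect", "simplify"],
}
log = ["factor", "expand", "cancel", "simplify"]
print(recognize(log, ideal_solutions))  # ['factor-first']
```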

10 citations


01 Jan 2009
TL;DR: Results show that those who choose to reveal their underlying goals outperform negotiators in the same setting who use a protocol that forbids revelation, and that goal revelation has a positive effect on the aggregate performance of negotiators and on the likelihood of reaching agreement.
The Effects of Goal Revelation on Computer-Mediated Negotiation

Abstract: This paper studies a novel negotiation protocol in settings in which players need to exchange resources in order to achieve their own objectives, but are uncertain about the objectives of other participants. The protocol allows participants to request each other to disclose their interests at given points in the negotiation. Revealing information about participants' needs may facilitate agreement, but it also exposes their negotiation strategy to the exploitation of others. Empirical studies were conducted using computer-mediated negotiation scenarios that provided an analogue to the way goals and resources interact in the world. The scenarios varied in the individual positions and interests of participants, as well as the dependency relationships that hold between participants. Results show that those who choose to reveal their underlying goals outperform negotiators in the same setting who use a protocol that forbids revelation. In addition, goal revelation has a positive effect on the aggregate performance of negotiators and on the likelihood of reaching agreement. Further analysis shows goal revelation to be a cooperation mechanism by which negotiators are able to identify acceptable agreements in scenarios characterized by few socially (Pareto) beneficial outcomes.

Introduction: Goals and incentives are key determinants of human behavior, but in many negotiation scenarios there is a lack of information about the underlying interests of participants. Often, this prevents the parties from reaching a beneficial agreement. Consider a bank that is offering to purchase the majority of shares of a struggling company in return for potential job cuts. The unions may not allow the company to accept the offer because they refuse to agree to layoffs. However, if the bank discloses that it is committed to keeping the company afloat, the unions may agree to the buy-out if layoffs are minimal. On the other hand, revealing goals is often associated with a cost. Having realized that the bank does not intend to liquidate the company, the unions may demand no job cuts.

This work studies the trade-offs associated with different negotiation protocols in settings where self-interested parties lack information about each other's aims. We consider strategic settings which require an agreement on the allocation of scarce resources among self-interested parties. Participants take turns proposing take-it-or-leave-it deals to each other under time constraints, and communication is associated with a cost. With no certain knowledge of each other's goals, the offers of participants serve as a "noisy signal" of their true objectives. It is difficult to locate efficient trades for both parties in such conditions, either because participants may request more than they need, or because there are simply too many combinations of possible agreements to try out under time constraints. In these conditions, revealing the objectives of one or more of the participants may facilitate agreement, because the additional information narrows the "search space" of possible offers, and may reveal new avenues of negotiation that were not known before. However, it is not obvious that the revelation of information by either party will necessarily improve the result of the negotiation. Goal revelation is potentially costly, because it exposes the revealing party's position and negotiation strategies. For example, a participant that revealed its goals could be exposed as having been very selfish in its past offers, and this may adversely affect the quality of the deals it is offered in the future.

This paper proposes a novel interest-based negotiation protocol for task settings, in which parties can prompt each other to reveal their goals at fixed points within the negotiation process. This protocol is inspired by recent negotiation protocols that allow participants to exchange information about beliefs, goals, or social aspects (Rahwan et al., 2003). We compare this interest-based protocol with an alternative position-based protocol in which goal revelation is not allowed. We measured people's behavior under each of these protocols using a computer-mediated testbed comprising a conceptually simple game in which players negotiate and exchange resources to enable them to achieve their individual goals. This testbed has been used previously to analyze the decision-making strategies people deploy when interacting with computers, and to compare these strategies with those that people deploy when interacting with other people (Gal, Pfeffer, Marzo, & Grosz, 2004). The advantage of this testbed for studying interest-based negotiation is two-fold: First, it presents a realistic analogue to the ways in which goals, tasks and resources interact in real-world settings, but it abstracts away the complexities of specific domains. Second, it supports transparent, anonymous communication between subjects who are interacting together in laboratory conditions, avoiding experimenter effects and face-to-face communication.

We conducted experiments in which different subjects interacted with each other using either interest- or position-based protocols on the same set of negotiation scenarios. These scenarios varied in the dependency relationships that hold between players (i.e., who needs whom), as well as the number of integrative (mutually beneficial) exchanges. Results show that goal revelation using interest-based negotiation protocols leads to a higher likelihood of agreement, and a significant increase in benefit to the revealing player, as compared to the benefit obtained using the position-based protocol. In addition, using the interest-based protocol itself has a positive effect on the social benefit to both parties, which is significantly higher than the social benefit obtained using the position-based protocol. Further analysis revealed that interest-based negotiation is essentially a cooperation mechanism by which negotiators are able to identify acceptable agreements in scenarios characterized by few socially (Pareto) beneficial outcomes.
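A rough sketch of the turn structure such a protocol implies: offers alternate, and a reveal request is legal only immediately after an offer. The message names and legality table below are assumptions for illustration, not the testbed's actual rules; the position-based protocol would be the same table with ASK_GOAL and REVEAL removed:

```python
# Which message types may legally follow each message type.
LEGAL_FOLLOWERS = {
    "START":    {"OFFER"},
    "OFFER":    {"ACCEPT", "REJECT", "ASK_GOAL"},
    "ASK_GOAL": {"REVEAL", "DECLINE"},   # revelation stays voluntary
    "REVEAL":   {"ACCEPT", "REJECT"},
    "DECLINE":  {"ACCEPT", "REJECT"},
    "REJECT":   {"OFFER"},               # rejection passes the turn
}

def valid_trace(trace):
    """True iff every message is a legal follower of the previous one."""
    prev = "START"
    for _sender, msg_type, _payload in trace:
        if msg_type not in LEGAL_FOLLOWERS.get(prev, set()):
            return False
        prev = msg_type
    return True

trace = [
    ("P1", "OFFER", {"give": ["stone"], "take": ["wood"]}),
    ("P2", "ASK_GOAL", None),
    ("P1", "REVEAL", "needs wood to reach its goal square"),
    ("P2", "ACCEPT", None),
]
print(valid_trace(trace))  # True
```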

10 citations


Proceedings ArticleDOI
29 Aug 2009
TL;DR: This work model how knowledge arises from observing different types of agents, each of whom reacts differently to the behaviors of others in an unfamiliar context, and shows how a set of observations guide agents' knowledge and behavior given different states of the world.
Abstract: Natural Intelligence is based not only on conscious procedural and declarative knowledge, but also on knowledge that is inferred from observing the actions of others. This knowledge is tacit, in that the process of its acquisition remains unspecified. However, tacit knowledge is an accepted guide of behavior, especially in unfamiliar contexts. In situations where knowledge is lacking, animals act on these beliefs without explicitly reasoning about the world or fully considering the consequences of their actions. This paper provides a computational model of behavior in which tacit knowledge plays a crucial role. We model how knowledge arises from observing different types of agents, each of whom reacts differently to the behaviors of others in an unfamiliar context. Agents' interaction in this context is described using directed graphs. We show how a set of observations guides agents' knowledge and behavior given different states of the world.
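A small illustration of the directed-graph setup: an edge u -> v means v observes u's actions, and each agent's type fixes how it reacts to what it sees. The types and reaction rules below are invented for the example, not taken from the paper:

```python
edges = {          # observer -> list of observed agents
    "B": ["A"],
    "C": ["A", "B"],
}
types = {"A": "witness", "B": "trusting", "C": "cautious"}

def act(agent, world_danger, actions):
    kind = types[agent]
    observed = [actions[u] for u in edges.get(agent, []) if u in actions]
    if kind == "witness":    # reacts to the world directly
        return "flee" if world_danger else "stay"
    if kind == "trusting":   # copies any observed fleeing
        return "flee" if "flee" in observed else "stay"
    # "cautious": acts only on unanimous observed evidence
    return "flee" if observed and all(a == "flee" for a in observed) else "stay"

def propagate(world_danger):
    actions = {}
    for agent in ["A", "B", "C"]:   # topological order of the graph
        actions[agent] = act(agent, world_danger, actions)
    return actions

print(propagate(world_danger=True))   # all agents flee
print(propagate(world_danger=False))  # all agents stay
```

Only agent A sees the world; B and C act on tacit knowledge inferred from observed behavior, which is the kind of propagation the abstract describes.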

8 citations


Proceedings Article
10 May 2009
TL;DR: This abstract shows that graphical models can be used to simultaneously learn agents' complex reporting policies and their capabilities, weigh the benefits of different combinations of information providers, and optimally choose a combination of information providers to minimize error.
Abstract: In many multi-agent systems, information is distributed among potential providers that vary in their capability to report useful information and in the extent to which their reports may be biased. This abstract shows that graphical models can be used to simultaneously learn agents' complex reporting policies and their capabilities, weigh the benefits of different combinations of information providers, and optimally choose a combination of information providers to minimize error. An agent's policy refers to the way in which the agent reports information. We show that these models are able to capture agents that vary in their capabilities and reporting policies. Agents using these graphical models outperformed the top contestants of the recent international Agent Reputation and Trust testbed competition. Further experiments show that graphical models can accurately model agents that use complex policies to decide how to report information, and determine how to combine these reports to minimize error.
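One concrete, assumed reading of the provider-selection problem: model each provider by a learned bias, noise variance, and query cost; de-bias and fuse reports by inverse-variance weighting; then pick the cheapest combination whose fused variance meets an error target. This is a simplified stand-in for the paper's graphical models, with invented numbers:

```python
from itertools import combinations

# Learned provider models: (bias, noise variance, query cost).
providers = {
    "honest":   (0.0, 1.0, 3.0),
    "inflater": (2.0, 1.0, 1.0),   # systematically over-reports
    "sloppy":   (0.0, 4.0, 1.0),   # unbiased but noisy
}

def fused_variance(chosen):
    # After subtracting each provider's learned bias, fuse reports by
    # inverse-variance weighting; this is the fused estimate's variance.
    return 1.0 / sum(1.0 / providers[p][1] for p in chosen)

def best_combination(max_variance):
    """Cheapest provider subset whose fused variance meets the target."""
    feasible = []
    names = list(providers)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            if fused_variance(combo) <= max_variance:
                feasible.append((sum(providers[p][2] for p in combo), combo))
    return min(feasible) if feasible else None

# The biased provider is still useful once its bias is modeled away:
print(best_combination(max_variance=0.8))  # (2.0, ('inflater', 'sloppy'))
```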

4 citations


Proceedings ArticleDOI
29 Aug 2009
TL;DR: The paper shows that hierarchical Bayesian models offer a unified approach for inferring the reliability of information providers, and for learning the competencies of individual agents as well as the general population.
Abstract: This paper addresses the problem of learning with whom to interact in situations where obtaining information about others is associated with a cost, and this information is potentially unreliable. It considers settings in which agents need to decide whether to engage in a series of interactions with partners of unknown competencies, and can purchase reports about partners' competencies from others. The paper shows that hierarchical Bayesian models offer a unified approach for (1) inferring the reliability of information providers, and (2) learning the competencies of individual agents as well as the general population. The performance of this model was tested in experiments of varying complexity, measuring agents' performance as well as their error in estimating others' competencies. Results show that agents using the hierarchical model to make decisions outperformed agents using other probabilistic models from the recent literature, even when the ratio of unreliable information providers was high.
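A minimal sketch of the hierarchical intuition using standard conjugate Beta-Binomial updates: individual competencies share a population-level prior, so observing one agent also sharpens the estimate used for strangers, and purchased reports are discounted by provider reliability. The pseudo-count weighting of reports is an assumption for illustration:

```python
# Population-level prior over competence (Beta pseudo-counts).
pop_alpha, pop_beta = 1.0, 1.0

def update_population(successes, failures):
    """Fold one agent's observed outcomes into the shared prior."""
    global pop_alpha, pop_beta
    pop_alpha += successes
    pop_beta += failures

def competence_estimate(successes, failures, reports=()):
    """Posterior-mean competence for one agent, starting from the
    population prior; each purchased report is a pair
    (claimed_success_rate, reliability) discounted by its reliability."""
    a, b = pop_alpha + successes, pop_beta + failures
    for claimed_rate, reliability in reports:
        # Treat a report as `reliability` pseudo-observations.
        a += reliability * claimed_rate
        b += reliability * (1.0 - claimed_rate)
    return a / (a + b)

update_population(successes=8, failures=2)   # one well-observed agent
# A stranger with no direct data inherits the (now optimistic) prior:
print(round(competence_estimate(0, 0), 3))                    # 0.75
# A report from a half-reliable provider nudges the estimate down:
print(round(competence_estimate(0, 0, [(0.2, 0.5)]), 3))      # 0.728
```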

2 citations