scispace - formally typeset

Showing papers by "Luke Zettlemoyer published in 2008"


Proceedings Article
13 Jul 2008
TL;DR: This paper presents a new lifted inference algorithm, C-FOVE, that not only handles counting formulas in its input, but also creates counting formulas for use in intermediate potentials, and achieves asymptotic speed improvements compared to FOVE.
Abstract: Lifted inference algorithms exploit repeated structure in probabilistic models to answer queries efficiently. Previous work such as de Salvo Braz et al.'s first-order variable elimination (FOVE) has focused on the sharing of potentials across interchangeable random variables. In this paper, we also exploit interchangeability within individual potentials by introducing counting formulas, which indicate how many of the random variables in a set have each possible value. We present a new lifted inference algorithm, C-FOVE, that not only handles counting formulas in its input, but also creates counting formulas for use in intermediate potentials. C-FOVE can be described succinctly in terms of six operators, along with heuristics for when to apply them. Because counting formulas capture dependencies among large numbers of variables compactly, C-FOVE achieves asymptotic speed improvements compared to FOVE.
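The counting-formula idea can be illustrated with a toy sketch (my own example, not the C-FOVE implementation): when a potential over n interchangeable binary variables depends only on how many of them are true, summing over the n+1 possible counts with binomial weights replaces enumerating all 2**n joint assignments.

```python
from math import comb

# Toy illustration of why counting formulas help: a potential over n
# interchangeable binary variables that depends only on the number set to True.

def partition_naive(n, phi):
    """Sum phi(count) over all 2**n joint assignments (exponential in n)."""
    total = 0.0
    for bits in range(2 ** n):
        k = bin(bits).count("1")  # number of variables set to True
        total += phi(k)
    return total

def partition_counted(n, phi):
    """Sum over counts k, weighting each by the number of assignments
    with exactly k True values (linear in n)."""
    return sum(comb(n, k) * phi(k) for k in range(n + 1))

phi = lambda k: 2.0 ** k  # any potential that depends only on the count
assert abs(partition_naive(12, phi) - partition_counted(12, phi)) < 1e-6
```

The counted version touches n + 1 terms instead of 2**n, which is the kind of asymptotic saving the abstract describes.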

228 citations


Proceedings ArticleDOI
25 Oct 2008
TL;DR: The generative model is applied to the task of mapping sentences to hierarchical representations of their underlying meaning and achieves state-of-the-art performance when tested on two publicly available corpora.
Abstract: In this paper, we present an algorithm for learning a generative model of natural language sentences together with their formal meaning representations with hierarchical structures. The model is applied to the task of mapping sentences to hierarchical representations of their underlying meaning. We introduce dynamic programming techniques for efficient training and decoding. In experiments, we demonstrate that the model, when coupled with a discriminative reranking technique, achieves state-of-the-art performance when tested on two publicly available corpora. The generative model degrades robustly when presented with instances that are different from those seen in training. This allows a notable improvement in recall compared to previous models.

171 citations


Proceedings Article
08 Dec 2008
TL;DR: This paper formally defines an infinite sequence of nested beliefs about the state of the world at the current time t, and presents a filtering algorithm that maintains a finite representation which can be used to generate these beliefs.
Abstract: In partially observable worlds with many agents, nested beliefs are formed when agents simultaneously reason about the unknown state of the world and the beliefs of the other agents. The multi-agent filtering problem is to efficiently represent and update these beliefs through time as the agents act in the world. In this paper, we formally define an infinite sequence of nested beliefs about the state of the world at the current time t, and present a filtering algorithm that maintains a finite representation which can be used to generate these beliefs. In some cases, this representation can be updated exactly in constant time; we also present a simple approximation scheme to compact beliefs if they become too complex. In experiments, we demonstrate efficient filtering in a range of multi-agent domains.
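A minimal sketch of nested beliefs (my own toy example, not the paper's filtering algorithm): agent A holds a belief over a binary world state and a nested belief over agent B's belief, where B observes the state through a known noisy sensor, so A can marginalize over B's possible observations.

```python
# Two agents, world state s in {0, 1}. B sees s through a sensor that
# reports the true state with probability `hit`. A does not see s, but
# knows B's prior and sensor model, so A can form a distribution over
# B's possible posteriors: a level-1 nested belief.

def b_posterior(prior_s1, obs, hit=0.8):
    """B's posterior P(s=1 | obs) under the noisy sensor."""
    like1 = hit if obs == 1 else 1 - hit
    like0 = 1 - hit if obs == 1 else hit
    p1 = like1 * prior_s1
    p0 = like0 * (1 - prior_s1)
    return p1 / (p1 + p0)

def nested_belief(a_prior_s1, b_prior_s1=0.5, hit=0.8):
    """A's distribution over B's posterior: for each state, B's observation
    is stochastic, so marginalize P(obs | s) under A's belief over s."""
    dist = {}  # maps B's possible posterior -> A's probability of it
    for s, p_s in ((1, a_prior_s1), (0, 1 - a_prior_s1)):
        for obs in (0, 1):
            p_obs = hit if obs == s else 1 - hit
            post = round(b_posterior(b_prior_s1, obs, hit), 6)
            dist[post] = dist.get(post, 0.0) + p_s * p_obs
    return dist

d = nested_belief(a_prior_s1=0.9)  # a finite representation of one nesting level
```

Each additional nesting level repeats this marginalization, which is why maintaining a finite representation that can generate the whole sequence of nested beliefs is the key difficulty the paper addresses.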

25 citations


Proceedings Article
01 Jan 2008
TL;DR: In this article, a compact representation for relational hidden Markov models and an associated logical particle filtering algorithm are presented. The algorithm updates each particle's formula as new observations are received, and because a single particle tracks many states, the filter can be more accurate than a traditional particle filter in high-dimensional state spaces.
Abstract: In this paper, we consider the problem of filtering in relational hidden Markov models. We present a compact representation for such models and an associated logical particle filtering algorithm. Each particle contains a logical formula that describes a set of states. The algorithm updates the formulae as new observations are received. Since a single particle tracks many states, this filter can be more accurate than a traditional particle filter in high dimensional state spaces, as we demonstrate in experiments.
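The core idea of a particle that denotes a set of states can be sketched as follows (an illustrative simplification of my own, not the paper's algorithm, with the formula represented extensionally as a state set rather than symbolically):

```python
# Each "logical particle" tracks a *set* of states rather than a single
# state, so one particle can cover many states in a large state space.

class LogicalParticle:
    def __init__(self, states, weight=1.0):
        self.states = set(states)  # the set of states the formula denotes
        self.weight = weight

    def observe(self, predicate):
        """Condition on an observation: keep only consistent states and
        down-weight by the fraction eliminated (uniform within the set)."""
        consistent = {s for s in self.states if predicate(s)}
        if self.states:
            self.weight *= len(consistent) / len(self.states)
        self.states = consistent

# Track a hidden integer in {0, ..., 99} with one particle covering all states.
p = LogicalParticle(range(100))
p.observe(lambda s: s % 2 == 0)  # observation: state is even
p.observe(lambda s: s < 20)      # observation: state is below 20
assert p.states == {0, 2, 4, 6, 8, 10, 12, 14, 16, 18}
```

In the paper's setting the set is described intensionally by a logical formula, so it never has to be enumerated; the sketch only shows why one such particle carries more information than a single-state sample.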

10 citations


Proceedings Article
01 Jan 2008
TL;DR: In this article, a hierarchical Bayesian approach is used to learn a prior distribution over rule sets for multiple related tasks, and a coordinate ascent algorithm is proposed to iteratively optimize the task-specific rule sets and the prior distribution.
Abstract: The ways in which an agent's actions affect the world can often be modeled compactly using a set of relational probabilistic planning rules. This extended abstract addresses the problem of learning such rule sets for multiple related tasks. We take a hierarchical Bayesian approach, in which the system learns a prior distribution over rule sets. We present a class of prior distributions parameterized by a rule set prototype that is stochastically modified to produce a task-specific rule set. We also describe a coordinate ascent algorithm that iteratively optimizes the task-specific rule sets and the prior distribution. Experiments using this algorithm show that transferring information from related tasks significantly reduces the amount of training data required to predict action effects in blocks-world domains.
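The alternating optimization described above can be sketched in a toy form (my own illustration with scalar parameters, not the paper's rule-learning system): alternate between fitting each task's parameter given a shared prototype, and refitting the prototype to the task estimates.

```python
# Toy coordinate ascent for hierarchical transfer: each task has a Bernoulli
# parameter; a shared "prototype" acts as a prior mean via pseudo-counts,
# standing in for the rule-set prototype in the abstract.

def coordinate_ascent(task_data, prior_strength=5.0, iters=20):
    """task_data: list of (successes, trials) per task.
    Returns (prototype, per-task estimates)."""
    prototype = 0.5  # shared prior mean, initial guess
    for _ in range(iters):
        # Step 1: per-task MAP estimates, shrunk toward the prototype.
        thetas = [(s + prior_strength * prototype) / (n + prior_strength)
                  for s, n in task_data]
        # Step 2: refit the prototype as the mean of the task estimates.
        prototype = sum(thetas) / len(thetas)
    return prototype, thetas

# The third task has almost no data; its estimate is pulled toward the
# prototype learned from the other tasks.
proto, thetas = coordinate_ascent([(9, 10), (1, 10), (0, 2)])
```

The data-poor task borrows strength from the related tasks through the shared prototype, mirroring the abstract's finding that transfer reduces the training data needed per task.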

2 citations