
Showing papers by "Andrew S. Gordon published in 2011"


Proceedings Article
20 Mar 2011
TL;DR: The Choice Of Plausible Alternatives (COPA) evaluation, as discussed by the authors, uses a forced-choice format: each question gives a premise and two plausible causes or effects, and the correct choice is the alternative that is more plausible than the other.
Abstract: Research in open-domain commonsense reasoning has been hindered by the lack of evaluation metrics for judging progress and comparing alternative approaches. Taking inspiration from large-scale question sets used in natural language processing research, we authored one thousand English-language questions that directly assess commonsense causal reasoning, called the Choice Of Plausible Alternatives (COPA) evaluation. Using a forced-choice format, each question gives a premise and two plausible causes or effects, where the correct choice is the alternative that is more plausible than the other. This paper describes the authoring methodology that we used to develop a validated question set with sufficient breadth to advance open-domain commonsense reasoning research. We discuss the design decisions made during the authoring process, and explain how these decisions will affect the design of high-scoring systems. We also present the performance of multiple baseline approaches that use statistical natural language processing techniques, establishing initial benchmarks for future systems.
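As an illustration of the forced-choice format described above, the following sketch shows a hypothetical COPA-style item (invented here, not drawn from the actual question set) and the decision rule a scoring system would apply:

```python
# A hypothetical COPA-style item (illustrative only; not from the real
# question set). Each item pairs a premise with two alternatives and
# asks for the more plausible cause or effect.
item = {
    "premise": "The pavement was wet in the morning.",
    "ask_for": "cause",
    "alternatives": ["It rained overnight.", "The sun rose early."],
    "correct": 0,
}

def choose(item, plausibility):
    """Pick the alternative the scoring function finds more plausible."""
    scores = [plausibility(item["premise"], alt) for alt in item["alternatives"]]
    return 0 if scores[0] >= scores[1] else 1

# With a toy scorer based on overlap with an invented weather vocabulary:
weather = {"rained", "wet", "rain", "overnight"}
toy = lambda p, a: len(weather & set(a.lower().rstrip(".").split()))
print(choose(item, toy))  # -> 0
```

Any real system substitutes a corpus-derived plausibility function for the toy scorer; the forced-choice format means only the relative ordering of the two scores matters.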

255 citations


Proceedings Article
07 Aug 2011
TL;DR: Casting the commonsense causal reasoning problem as a Choice of Plausible Alternatives, four experiments that compare various statistical and information retrieval approaches to exploit causal information in story corpora are described.
Abstract: The personal stories that people write in their Internet weblogs include a substantial amount of information about the causal relationships between everyday events. In this paper we describe our efforts to use millions of these stories for automated commonsense causal reasoning. Casting the commonsense causal reasoning problem as a Choice of Plausible Alternatives, we describe four experiments that compare various statistical and information retrieval approaches to exploit causal information in story corpora. The top performing system in these experiments uses a simple co-occurrence statistic between words in the causal antecedent and consequent, calculated as the Pointwise Mutual Information between words in a corpus of millions of personal stories.
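The top-performing approach described above can be sketched as follows. The tiny corpus and all word choices here are invented for illustration; the actual system computed Pointwise Mutual Information over millions of weblog stories:

```python
import math
from collections import Counter
from itertools import combinations

# Minimal sketch: score each COPA alternative by the average PMI between
# its words and the premise's words, with co-occurrence counted within
# documents. The three-"story" corpus below is invented for illustration.
corpus = [
    "rain fell all night and the streets were wet",
    "the rain made the pavement wet and slippery",
    "the sun rose and the sky was clear",
]

docs = [set(doc.split()) for doc in corpus]
word_count = Counter(w for d in docs for w in d)
pair_count = Counter(frozenset(p) for d in docs for p in combinations(sorted(d), 2))
n = len(docs)

def pmi(x, y):
    joint = pair_count[frozenset((x, y))]
    if joint == 0 or x == y:
        return 0.0
    return math.log((joint / n) / ((word_count[x] / n) * (word_count[y] / n)))

def causal_score(premise, alternative):
    pw, aw = premise.split(), alternative.split()
    return sum(pmi(x, y) for x in pw for y in aw) / (len(pw) * len(aw))

# The alternative sharing more causal co-occurrence scores higher:
print(causal_score("streets wet", "rain fell") >
      causal_score("streets wet", "sun rose"))  # -> True
```

Document-level co-occurrence is one of several plausible counting windows; the paper compared multiple statistical and retrieval variants before settling on a PMI statistic of this general shape.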

80 citations


Book ChapterDOI
01 Jan 2011
TL;DR: A formalization of people’s implicit theory of how emotions mediate between what they experience and what they do is described and rules that link the theory with words and phrases in the emotional lexicon are sketched out.
Abstract: The research described here is part of a larger effort, first, to construct formal theories of a broad range of aspects of commonsense psychology, including knowledge management, the envisionment of possible courses of events, and goal-directed behavior, and, second, to link them to the English lexicon. We have identified the most common words and phrases for describing emotions in English. In this paper we describe a formalization of people’s implicit theory of how emotions mediate between what they experience and what they do. We then sketch out our effort to write rules that link the theory with words and phrases in the emotional lexicon.

23 citations


Journal ArticleDOI
TL;DR: A DNA-based XOR logic gate is designed that allows bacterial colonies arranged in a series on an agar plate to perform hash function calculations, and is expected to have utility in other synthetic biology applications.
Abstract: Introduction: Hash functions are computer algorithms that protect information and secure transactions. In response to the NIST's "International Call for Hash Function", we developed a biological hash function using the computing capabilities of bacteria. We designed a DNA-based XOR logic gate that allows bacterial colonies arranged in a series on an agar plate to perform hash function calculations. Results and Discussion: In order to provide each colony with adequate time to process inputs and perform XOR logic, we designed and successfully demonstrated a system for time-delayed bacterial growth. Our system is based on the diffusion of β-lactamase, resulting in the destruction of ampicillin. Our DNA-based XOR logic gate design is based on the opposition of two promoters. Our results showed that the two promoters functioned as expected individually, but did not behave as expected in the XOR construct. Our data showed that, contrary to literature reports, the promoter is bidirectional. In the absence of the 3OC6 inducer, the LuxR activator can bind to the promoter and induce backwards transcription. Conclusion and Prospects: Our system of time-delayed bacterial growth allows for the successive processing of a bacterial hash function, and is expected to have utility in other synthetic biology applications. While testing our DNA-based XOR logic gate, we uncovered a novel function of this promoter. In the absence of autoinducer 3OC6, LuxR binds to the promoter and activates backwards transcription. This result advances basic research and has important implications for the widespread use of the promoter.
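As a toy software analogue of the serial scheme described above (purely illustrative; the actual system is biological), colonies arranged in series can be modeled as XOR stages that reduce an input bit string to a single running-XOR state:

```python
# Toy analogue of the paper's serial design: each stage plays the role
# of one colony's XOR gate, folding the next input bit into the running
# state, so the chain's final state is a simple running-XOR "hash".
def xor_chain_hash(bits, seed=0):
    state = seed
    for b in bits:
        state ^= b  # one colony: XOR of incoming state with its input bit
    return state

print(xor_chain_hash([1, 0, 1, 1]))  # -> 1  (odd number of 1s)
```

A running XOR is of course far weaker than a cryptographic hash; the sketch only illustrates how serial XOR stages compose, which is the property the time-delayed growth system was built to realize.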

12 citations


Proceedings ArticleDOI
26 Jun 2011
TL;DR: It is found that causal markers, especially causatives (causal verbs), are extremely domain dependent and moderately genre dependent.
Abstract: This paper is a study of causation as it occurs in different domains and genres of discourse. There have been various initiatives to extract causality from discourse using causal markers. However, to our knowledge, none of these approaches has displayed similar results when applied to other styles of discourse. In this study we evaluate the nature of causal markers, specifically causatives, between corpora in different domains and genres of discourse, and measure the overlap of causal markers using two metrics: Term Similarity and Causal Precision. We find that causal markers, especially causatives (causal verbs), are extremely domain dependent and moderately genre dependent.
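The abstract does not give precise definitions of the two metrics; as a hedged sketch, the Term Similarity of two corpora can be approximated as a Jaccard-style overlap between their extracted causal-marker sets (the marker lists below are invented for illustration):

```python
# Sketch only: treating "Term Similarity" as Jaccard overlap between the
# sets of causal markers extracted from two corpora. The actual metric
# in the paper may be defined differently.
def term_similarity(markers_a, markers_b):
    a, b = set(markers_a), set(markers_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented marker sets for two hypothetical domains:
news_markers = {"cause", "lead to", "result in", "trigger"}
medical_markers = {"induce", "cause", "provoke", "result in"}
print(round(term_similarity(news_markers, medical_markers), 2))  # -> 0.33
```

A low overlap under a measure like this is exactly what "extremely domain dependent" means operationally: markers mined in one domain transfer poorly to another.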

8 citations


01 Jan 2011
TL;DR: This paper outlines an approach involving the formulation of anthropomorphic self-models, where the representations used for metareasoning are based on formalizations of commonsense psychology, supported by two research activities: the formalization of broad-coverage commonsense psychology theories and the use of these representations in the monitoring and control of object-level reasoning.
Abstract: Representations of an AI agent’s mental states and processes are necessary to enable metareasoning, i.e. thinking about thinking. However, the formulation of suitable representations remains an outstanding AI research challenge, with no clear consensus on how to proceed. This paper outlines an approach involving the formulation of anthropomorphic self-models, where the representations that are used for metareasoning are based on formalizations of commonsense psychology. We describe two research activities that support this approach, the formalization of broad-coverage commonsense psychology theories and the use of representations in the monitoring and control of object-level reasoning. We focus specifically on metareasoning about memory, but argue that anthropomorphic self-models support the development of integrated, reusable, broad-coverage representations for use in metareasoning systems.

Self-models in Metareasoning

Cox and Raja (2007) define reasoning as a decision cycle within an action-perception loop between the ground level (doing) and the object level (reasoning). Metareasoning is further defined as a second loop, where this reasoning is itself monitored and controlled in order to improve the quality of the reasoning decisions that are made (Figure 1).

[Figure 1. Multi-level model of reasoning]

It has long been recognized (e.g., McCarthy, 1958) that to better understand and act upon the environment, an agent should have an explicit, declarative representation of the states and actions occurring in that environment. Thus the task at the object level is to create a declarative model of the world and to use such a representation to facilitate the selection of actions at the ground level. It follows also that to reason about other agents in the world (e.g., to anticipate what they may do in the future), it helps to have a representation of the agents in the world, what they know, and how they think. Likewise an explicit representation of the self supports reasoning about oneself and hence facilitates metareasoning.

Representations provide structure and enable inference. They package together related assertions so that knowledge is organized and brought to bear effectively and efficiently. One of the central concerns in the model of metareasoning as shown in Figure 1 is the character of the information that is passed between the object level and the meta-level reasoning modules to enable monitoring and control. Cast as a representation problem, the question becomes: How should an agent’s own reasoning be represented to itself as it monitors and controls this reasoning? Cox and Raja (2007) describe these representations as models of self, which serve to control an agent’s reasoning choices, represent the product of monitoring, and coordinate the self in social contexts.

Self-models have been periodically explored in previous AI research since Minsky (1968), and explicit self-models have been articulated for a diverse set of reasoning processes that include threat detection (Birnbaum et al., 1990), case retrieval (Fox & Leake, 1995), and expectation management (Cox, 1997). Typically built to demonstrate a limited metareasoning capacity, these self-models have lacked several qualities that should be sought in future research in this area, including:

1. Broad coverage: Self-models should allow an agent to reason about and control the full breadth of their object-level reasoning processes.
2. Integrated: Self-models of different reasoning processes should be compatible with one another, allowing an agent to reason about and control the interaction between different reasoning subsystems.
3. Reusable: The formulation of self-models across different agents and agent architectures should have some commonalities that allow developers to apply previous research findings when building new systems.

Despite continuing interest in metareasoning over the last two decades (see Anderson & Oates, 2007; Cox, 2005), there has been only modest progress toward the development of self-models that achieve these desirable qualities. We speculate that this is due, in part, to an

Copyright © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
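The ground/object/meta decomposition from Figure 1 can be sketched in code; all class names, strategy names, and the control rule below are invented for illustration, not taken from the paper:

```python
# Minimal sketch of the two-loop model: the object level selects actions
# and records a trace of its own decisions (a rudimentary self-model),
# and the meta level monitors that trace to control the object level's
# reasoning strategy. Names and the control rule are invented.
class ObjectLevel:
    def __init__(self):
        self.strategy = "greedy"
        self.trace = []  # self-model: a record of the agent's own reasoning

    def decide(self, percept):
        action = f"{self.strategy}:{percept}"
        self.trace.append(action)  # expose reasoning for meta-level monitoring
        return action

class MetaLevel:
    def monitor_and_control(self, obj):
        # Toy control rule: after observing two decisions, switch strategy.
        if len(obj.trace) >= 2:
            obj.strategy = "deliberative"

obj, meta = ObjectLevel(), MetaLevel()
for percept in ["wall", "door", "key"]:
    obj.decide(percept)
    meta.monitor_and_control(obj)
print(obj.trace[-1])  # -> deliberative:key
```

The explicit `trace` is what distinguishes this from a plain control loop: it is the declarative representation of the agent's own reasoning that the paper argues should be grounded in commonsense psychology rather than improvised per system.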

8 citations


Proceedings Article
20 Mar 2011
TL;DR: A logical formalization of a commonsense theory of mind-body interaction is proposed as a step toward a deep lexical semantics for words and phrases related to this topic.
Abstract: We propose a logical formalization of a commonsense theory of mind-body interaction as a step toward a deep lexical semantics for words and phrases related to this topic.

6 citations


Proceedings Article
01 Jan 2011
TL;DR: The causal relationships between events in the two video clips were identified, and the role that causality plays in determining whether subjects will mention a particular story event and the likelihood that these events will be told in the order that they occurred in the original videos was investigated.
Abstract: Empirical research supporting computational models of narrative is often constrained by the lack of large-scale corpora with deep annotation. In this paper, we report on our annotation and analysis of a dataset of 283 individual narrations of the events in two short video clips. The utterances in the narrative transcripts were annotated to align with known events in the source videos, offering a unique opportunity to study the regularities and variations in the way that different people describe the exact same set of events. We identified the causal relationships between events in the two video clips, and investigated the role that causality plays in determining whether subjects will mention a particular story event and the likelihood that these events will be told in the order that they occurred in the original videos.

2 citations