Showing papers on "Natural language" published in 2004


Patent
17 Jun 2004
TL;DR: In this paper, a method and system are presented for searching a variety of types of documents for material related to concepts expressed in a natural language text; the method analyzes the text to determine an underlying concept and identifies one or more categories of searchable material in a knowledge base that are related to that concept.
Abstract: A method and system to search a variety of types of documents for material related to concepts expressed in a natural language text. The invention includes analyzing the natural language text to determine an underlying concept (420) and identifying one or more categories (430) of searchable material in a knowledge base that are related to the underlying concept. The invention includes intelligently providing documents from the knowledge base (480) in these categories, both automatically and with the assistance of a customer service agent.

1,156 citations


Journal ArticleDOI
TL;DR: This article shows that the density of subjectivity clues in the surrounding context strongly affects how likely it is that a word is subjective, and it provides the results of an annotation study assessing the subjectivity of sentences with high-density features.
Abstract: Subjectivity in natural language refers to aspects of language used to express opinions, evaluations, and speculations. There are numerous natural language processing applications for which subjectivity analysis is relevant, including information extraction and text categorization. The goal of this work is learning subjective language from corpora. Clues of subjectivity are generated and tested, including low-frequency words, collocations, and adjectives and verbs identified using distributional similarity. The features are also examined working together in concert. The features, generated from different data sets using different procedures, exhibit consistency in performance in that they all do better and worse on the same data sets. In addition, this article shows that the density of subjectivity clues in the surrounding context strongly affects how likely it is that a word is subjective, and it provides the results of an annotation study assessing the subjectivity of sentences with high-density features. Finally, the clues are used to perform opinion piece recognition (a type of text categorization and genre detection) to demonstrate the utility of the knowledge acquired in this article.
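
The paper's density finding lends itself to a toy illustration: score each token by how many known subjectivity clues appear in a small window around it. A minimal Python sketch, where the clue lexicon and window size are invented stand-ins for the learned clues:

```python
# Toy illustration of the density idea: the more subjectivity clues
# appear near a word, the more likely that word's instance is subjective.
# The clue lexicon and window size here are invented for illustration.

CLUES = {"outrageous", "terrible", "wonderful", "claims", "apparently"}

def clue_density(tokens, index, window=4):
    """Fraction of tokens within `window` of position `index` that are clues."""
    lo, hi = max(0, index - window), min(len(tokens), index + window + 1)
    context = tokens[lo:index] + tokens[index + 1:hi]
    if not context:
        return 0.0
    return sum(t in CLUES for t in context) / len(context)

sentence = "the plan apparently rests on outrageous and terrible claims".split()
for i, tok in enumerate(sentence):
    print(f"{tok:12s} density={clue_density(sentence, i):.2f}")
```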

734 citations


Book
01 Jan 2004
TL;DR: The article describes in detail the methods that have been adopted in some well-known dialogue systems, explores different system architectures, considers issues of specification, design, and evaluation, reviews some currently available dialogue development toolkits, and outlines prospects for future development.
Abstract: Spoken dialogue systems allow users to interact with computer-based applications such as databases and expert systems by using natural spoken language. The origins of spoken dialogue systems can be traced back to Artificial Intelligence research in the 1950s concerned with developing conversational interfaces. However, it is only within the last decade or so, with major advances in speech technology, that large-scale working systems have been developed and, in some cases, introduced into commercial environments. As a result many major telecommunications and software companies have become aware of the potential for spoken dialogue technology to provide solutions in newly developing areas such as computer-telephony integration. Voice portals, which provide a speech-based interface between a telephone user and Web-based services, are the most recent application of spoken dialogue technology. This article describes the main components of the technology (speech recognition, language understanding, dialogue management, communication with an external source such as a database, language generation, and speech synthesis) and shows how these component technologies can be integrated into a spoken dialogue system. The article describes in detail the methods that have been adopted in some well-known dialogue systems, explores different system architectures, considers issues of specification, design, and evaluation, reviews some currently available dialogue development toolkits, and outlines prospects for future development.
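
As a rough illustration of the component chain the article describes, here is a minimal Python sketch with each stage stubbed out; all function names and the toy timetable logic are invented, not taken from any particular system:

```python
# Sketch of the spoken-dialogue pipeline, with each stage stubbed out.
# Function names and the toy logic are illustrative only.

def recognize_speech(audio):          # speech recognition (stubbed)
    return "when does the next train leave"

def understand(text):                 # language understanding
    return {"intent": "timetable_query", "entity": "next train"}

def manage_dialogue(frame, database): # dialogue management + external source
    departure = database.get(frame["entity"], "unknown")
    return {"act": "inform", "departure": departure}

def generate(act):                    # language generation
    return f"The next train leaves at {act['departure']}."

def synthesize(text):                 # speech synthesis (stubbed as print)
    print(text)

timetable = {"next train": "14:05"}
frame = understand(recognize_speech(b"..."))
synthesize(generate(manage_dialogue(frame, timetable)))
```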

542 citations


Proceedings Article
01 Jan 2004
TL;DR: This work develops a linear programming formulation for this problem and evaluates it in the context of simultaneously learning named entities and relations, efficiently incorporating domain and task specific constraints at decision time and resulting in significant improvements in the accuracy and the "human-like" quality of the inferences.
Abstract: Given a collection of discrete random variables representing outcomes of learned local predictors in natural language, e.g., named entities and relations, we seek an optimal global assignment to the variables in the presence of general (non-sequential) constraints. Examples of these constraints include the type of arguments a relation can take, the mutual activity of different relations, etc. We develop a linear programming formulation for this problem and evaluate it in the context of simultaneously learning named entities and relations. Our approach allows us to efficiently incorporate domain and task specific constraints at decision time, resulting in significant improvements in the accuracy and the "human-like" quality of the inferences.
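
The decision-time idea can be illustrated without an LP solver: enumerate joint label assignments and keep the highest-scoring one that satisfies the constraints. A hedged Python sketch with invented scores and constraints (the paper itself uses a linear programming formulation rather than brute force):

```python
# Hedged toy version of decision-time constrained inference: instead of an
# LP solver, enumerate joint assignments and keep the best one that
# satisfies the constraints. Scores and constraints are invented.
from itertools import product

entity_labels = ["person", "location"]
relation_labels = ["born_in", "works_for", "none"]

# Local scores from hypothetical learned predictors.
e1_scores = {"person": 0.7, "location": 0.3}
e2_scores = {"person": 0.2, "location": 0.8}
rel_scores = {"born_in": 0.6, "works_for": 0.3, "none": 0.1}

def satisfies_constraints(e1, e2, rel):
    # born_in requires (person, location) arguments; works_for requires
    # (person, person). Both rules are purely illustrative.
    if rel == "born_in":
        return e1 == "person" and e2 == "location"
    if rel == "works_for":
        return e1 == "person" and e2 == "person"
    return True

best = max(
    (a for a in product(entity_labels, entity_labels, relation_labels)
     if satisfies_constraints(*a)),
    key=lambda a: e1_scores[a[0]] + e2_scores[a[1]] + rel_scores[a[2]],
)
print(best)  # ('person', 'location', 'born_in')
```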

481 citations


Proceedings ArticleDOI
01 Jan 2004
TL;DR: PEGs address frequently felt expressiveness limitations of CFGs and REs, simplifying syntax definitions and making it unnecessary to separate their lexical and hierarchical components, and are here proven equivalent in effective recognition power.
Abstract: For decades we have been using Chomsky's generative system of grammars, particularly context-free grammars (CFGs) and regular expressions (REs), to express the syntax of programming languages and protocols. The power of generative grammars to express ambiguity is crucial to their original purpose of modelling natural languages, but this very power makes it unnecessarily difficult both to express and to parse machine-oriented languages using CFGs. Parsing Expression Grammars (PEGs) provide an alternative, recognition-based formal foundation for describing machine-oriented syntax, which solves the ambiguity problem by not introducing ambiguity in the first place. Where CFGs express nondeterministic choice between alternatives, PEGs instead use prioritized choice. PEGs address frequently felt expressiveness limitations of CFGs and REs, simplifying syntax definitions and making it unnecessary to separate their lexical and hierarchical components. A linear-time parser can be built for any PEG, avoiding both the complexity and fickleness of LR parsers and the inefficiency of generalized CFG parsing. While PEGs provide a rich set of operators for constructing grammars, they are reducible to two minimal recognition schemas developed around 1970, TS/TDPL and gTS/GTDPL, which are here proven equivalent in effective recognition power.
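
The prioritized choice that distinguishes PEGs from CFGs is easy to demonstrate with a few parsing combinators. A minimal Python sketch (illustrative names, not the paper's formalism):

```python
# Minimal sketch of PEG-style parsing combinators. The ordered choice
# below commits to the first alternative that succeeds, which is exactly
# what removes ambiguity. All names here are illustrative.

def lit(s):
    def parse(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return parse

def seq(*parsers):
    def parse(text, pos):
        for p in parsers:
            pos = p(text, pos)
            if pos is None:
                return None
        return pos
    return parse

def choice(*parsers):          # prioritized: first success wins
    def parse(text, pos):
        for p in parsers:
            result = p(text, pos)
            if result is not None:
                return result
        return None
    return parse

# "ab" vs "a", ordered so the longer alternative is tried first;
# a CFG would treat the two alternatives as nondeterministic.
g = choice(seq(lit("a"), lit("b")), lit("a"))
print(g("ab", 0))  # 2 -- commits to the "ab" branch
print(g("ac", 0))  # 1 -- falls back to the "a" branch
```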

467 citations



Journal ArticleDOI
19 Sep 2004
TL;DR: Finite functions over hereditarily finite algebraic datatypes are used to implement natural language morphology in the functional language Haskell to make it easy for linguists, who are not trained as functional programmers, to apply the ideas to new languages.
Abstract: This paper presents a methodology for implementing natural language morphology in the functional language Haskell. The main idea behind it is simple: instead of working with untyped regular expressions, which is the state of the art of morphology in computational linguistics, we use finite functions over hereditarily finite algebraic datatypes. The definitions of these datatypes and functions are the language-dependent part of the morphology. The language-independent part consists of an untyped dictionary format which is used for synthesis of word forms, and a decorated trie, which is used for analysis. Functional Morphology builds on ideas introduced by Huet in his computational linguistics toolkit Zen, which he has used to implement the morphology of Sanskrit. The goal has been to make it easy for linguists, who are not trained as functional programmers, to apply the ideas to new languages. As a proof of the productivity of the method, morphologies for Swedish, Italian, Russian, Spanish, and Latin have already been implemented using the library. The Latin morphology is used as a running example in this article.
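
The core idea, finite functions over finite datatypes instead of regular expressions, can be mimicked outside Haskell. A Python sketch with a deliberately simplified Latin first-declension fragment:

```python
# Sketch of the "finite functions over finite datatypes" idea in Python
# (the paper uses Haskell). The inflection table is a total function from
# a small feature datatype to word forms; the Latin fragment is simplified.
from enum import Enum
from itertools import product

class Number(Enum):
    SG = "sg"
    PL = "pl"

class Case(Enum):
    NOM = "nom"
    ACC = "acc"

def first_declension(stem):
    """Total function (Number, Case) -> form for a first-declension noun."""
    endings = {
        (Number.SG, Case.NOM): "a",   (Number.SG, Case.ACC): "am",
        (Number.PL, Case.NOM): "ae",  (Number.PL, Case.ACC): "as",
    }
    return {feat: stem + end for feat, end in endings.items()}

rosa = first_declension("ros")
for num, case in product(Number, Case):
    print(num.value, case.value, rosa[(num, case)])
```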

380 citations


ReportDOI
01 Jan 2004
TL;DR: A new corpus of over 180,000 hand-annotated dialog act tags and accompanying adjacency pair annotations for roughly 72 hours of speech from 75 naturally-occurring meetings is described.
Abstract: We describe a new corpus of over 180,000 hand-annotated dialog act tags and accompanying adjacency pair annotations for roughly 72 hours of speech from 75 naturally-occurring meetings. We provide a brief summary of the annotation system and labeling procedure, inter-annotator reliability statistics, overall distributional statistics, a description of auxiliary files distributed with the corpus, and information on how to obtain the data.

359 citations


Patent
02 Sep 2004
TL;DR: In this article, a method is presented for resolving ambiguities in natural language by organizing the task into multiple iterations of analysis at successive levels of depth; the processing is adaptive to the user's need for accuracy and efficiency.
Abstract: A method for resolving ambiguities in natural language by organizing the task into multiple iterations of analysis done in successive levels of depth. The processing is adaptive to the users' need for accuracy and efficiency. At each level of processing the most accurate disambiguation is made based on the available information. As more analysis is done, additional knowledge is incorporated in a systematic manner to improve disambiguation accuracy. Associated with each level of processing is a measure of confidence, used to gauge the confidence of a process in its disambiguation accuracy. An overall confidence measure is also used to reflect the level of the analysis done.
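
The patent's control flow, successively deeper analysis levels gated by a confidence threshold, can be sketched as follows; the levels and confidence values are placeholders, not the patent's actual methods:

```python
# Hedged sketch of multi-level disambiguation: run successively deeper
# analysis levels until the required confidence is met. The levels and
# confidences below are invented placeholders.

def level_1(text):   # e.g., a shallow most-frequent-sense lookup
    return "sense_a", 0.55

def level_2(text):   # e.g., local context matching
    return "sense_b", 0.72

def level_3(text):   # e.g., a full, expensive discourse analysis
    return "sense_b", 0.91

def disambiguate(text, required_confidence=0.8):
    best = None
    for level in (level_1, level_2, level_3):
        best = level(text)
        if best[1] >= required_confidence:
            break    # stop as soon as the user's accuracy need is met
    return best

print(disambiguate("bank near the river"))  # ('sense_b', 0.91)
```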

341 citations


PatentDOI
TL;DR: In this article, sentence-based queries from a user are analyzed using a natural language engine to determine appropriate answers from an electronic database; the approach is useful for Internet-based search engines as well as distributed speech recognition systems such as a client-server system.
Abstract: Sentence-based queries from a user are analyzed using a natural language engine to determine appropriate answers from an electronic database. The system and methods are useful for Internet-based search engines, as well as distributed speech recognition systems such as a client-server system. The latter are typically implemented on an intranet or over the Internet, based on user queries entered at a computer, a PDA, or a workstation using a speech input interface.

319 citations


Patent
29 Jun 2004
TL;DR: In this article, methods and systems are provided for processing natural language queries; these leverage an interpretation module to process and analyze retrieved information in order to determine an intention associated with the natural language query.
Abstract: Methods and systems are provided for processing natural language queries. Such methods and systems may receive a natural language query from a user and generate corresponding semantic tokens. Information may be retrieved from a knowledge base using the semantic tokens. Methods and systems may leverage an interpretation module to process and analyze the retrieved information in order to determine an intention associated with the natural language query. Methods and systems may leverage an actuation module to provide results to the user, which may be based on the determined intention.

Proceedings ArticleDOI
21 Jul 2004
TL;DR: This work examines the problem of distinguishing among seven relation types that can occur between the entities "treatment" and "disease" in bioscience text, comparing five generative graphical models and a neural network with lexical, syntactic, and semantic features, and finds that the semantic features help achieve high classification accuracy.
Abstract: A crucial step toward the goal of automatic extraction of propositional information from natural language text is the identification of semantic relations between constituents in sentences. We examine the problem of distinguishing among seven relation types that can occur between the entities "treatment" and "disease" in bioscience text, and the problem of identifying such entities. We compare five generative graphical models and a neural network, using lexical, syntactic, and semantic features, finding that the latter help achieve high classification accuracy.

Proceedings ArticleDOI
23 Aug 2004
TL;DR: A new open text word sense disambiguation method that combines the use of logical inferences with PageRank-style algorithms applied on graphs extracted from natural language documents is presented.
Abstract: This paper presents a new open text word sense disambiguation method that combines the use of logical inferences with PageRank-style algorithms applied on graphs extracted from natural language documents. We evaluate the accuracy of the proposed algorithm on several sense-annotated texts, and show that it consistently outperforms the accuracy of other previously proposed knowledge-based word sense disambiguation methods. We also explore and evaluate methods that combine several open-text word sense disambiguation algorithms.
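
The PageRank-style step is the standard iteration over a graph of candidate senses. A self-contained Python sketch in that spirit, with an invented toy graph and default damping:

```python
# Generic PageRank iteration over a small sense graph, in the spirit of
# the paper's graph-based disambiguation. The graph, damping factor, and
# iteration count are illustrative defaults, not the paper's settings.

def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {}
        for n in nodes:
            incoming = sum(rank[m] / len(graph[m]) for m in nodes if n in graph[m])
            new_rank[n] = (1 - damping) / len(nodes) + damping * incoming
        rank = new_rank
    return rank

# Tiny graph: candidate senses linked when a knowledge base relates them.
sense_graph = {
    "bank#1": {"money#1", "deposit#1", "loan#1"},
    "bank#2": {"river#1"},
    "money#1": {"bank#1", "loan#1"},
    "deposit#1": {"bank#1"},
    "loan#1": {"bank#1", "money#1"},
    "river#1": {"bank#2"},
}
ranks = pagerank(sense_graph)
print(max(["bank#1", "bank#2"], key=ranks.get))  # bank#1 in this toy graph
```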

01 Jan 2004
TL;DR: Based on major advances in statistical modeling of speech in the 1980s, automatic speech recognition systems today find widespread application in tasks that require a human-machine interface, such as automatic call processing in the telephone network and query-based information systems that do things like provide updated travel information, stock price quotations, weather reports, etc.
Abstract: Designing a machine that mimics human behavior, particularly the capability of speaking naturally and responding properly to spoken language, has intrigued engineers and scientists for centuries. Since the 1930s, when Homer Dudley of Bell Laboratories proposed a system model for speech analysis and synthesis [1, 2], the problem of automatic speech recognition has been approached progressively, from a simple machine that responds to a small set of sounds to a sophisticated system that responds to fluently spoken natural language and takes into account the varying statistics of the language in which the speech is produced. Based on major advances in statistical modeling of speech in the 1980s, automatic speech recognition systems today find widespread application in tasks that require a human-machine interface, such as automatic call processing in the telephone network and query-based information systems that do things like provide updated travel information, stock price quotations, weather reports, etc. In this article, we review some major highlights in the research and development of automatic speech recognition during the last few decades so as to provide a technological perspective and an appreciation of the fundamental progress that has been made in this important area of information and communication technology.

Journal ArticleDOI
TL;DR: The results show that Simon is capable of acquiring a regular and orderly morphological rule system for which his input provides only highly inconsistent and noisy data, and provides some insight into the mechanisms by which such learning may occur.

Book
14 Dec 2004
TL;DR: The authors explored the teaching of speaking in greater depth, in both ESL and EFL contexts, with a broad array of activities, all with emphasis on practical application; however, the organization of the material is confusing.
Abstract: This book is part of a series entitled Practical English Language Teaching. While the main volume in the series offers a broad overview of various aspects of language teaching methodology, this volume explores the teaching of speaking in greater depth, in both ESL and EFL contexts. The book offers a broad array of activities, all with emphasis on practical application; however, the organization of the material is confusing.

Journal ArticleDOI
01 May 2004
TL;DR: In this article, the authors investigate the use of spatial relationships to establish a natural communication mechanism between people and robots, in particular, for novice users, and show how linguistic spatial descriptions and other spatial information can be extracted from an evidence grid map and how they can be used in a natural human-robot dialog.
Abstract: In conversation, people often use spatial relationships to describe their environment, e.g., "There is a desk in front of me and a doorway behind it," and to issue directives, e.g., "go around the desk and through the doorway." In our research, we have been investigating the use of spatial relationships to establish a natural communication mechanism between people and robots, in particular, for novice users. In this paper, the work on robot spatial relationships is combined with a multimodal robot interface. We show how linguistic spatial descriptions and other spatial information can be extracted from an evidence grid map and how this information can be used in a natural, human-robot dialog. Examples using spatial language are included for both robot-to-human feedback and also human-to-robot commands. We also discuss some linguistic consequences in the semantic representations of spatial and locative information based on this work.
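
The mapping from geometry to spatial language can be caricatured by classifying an object's bearing relative to the robot's pose; the angle thresholds and phrasings below are invented (the paper derives such relations from evidence grid maps):

```python
# Toy sketch of turning geometry into spatial language: classify where an
# object lies relative to a robot's pose. Thresholds and phrasing are
# invented for illustration.
import math

def spatial_relation(robot_xy, robot_heading_deg, object_xy):
    dx = object_xy[0] - robot_xy[0]
    dy = object_xy[1] - robot_xy[1]
    bearing = math.degrees(math.atan2(dy, dx)) - robot_heading_deg
    bearing = (bearing + 180) % 360 - 180     # normalize to (-180, 180]
    if -45 <= bearing <= 45:
        return "in front of you"
    if 45 < bearing <= 135:
        return "to your left"
    if -135 <= bearing < -45:
        return "to your right"
    return "behind you"

# Robot at the origin, facing the +y direction (heading 90 degrees).
print("The desk is", spatial_relation((0, 0), 90, (0, 2)))     # in front of you
print("The doorway is", spatial_relation((0, 0), 90, (2, 0)))  # to your right
```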

Journal ArticleDOI
TL;DR: This review addresses two main issues: whether grammar teaching makes any difference to language learning; and what kinds of grammar teaching have been suggested to facilitate second language learning.
Abstract: With the rise of communicative methodology in the late 1970s, the role of grammar instruction in second language learning was downplayed, and it was even suggested that teaching grammar was not only unhelpful but might actually be detrimental. However, recent research has demonstrated the need for formal instruction for learners to attain high levels of accuracy. This has led to a resurgence of grammar teaching, and its role in second language acquisition has become the focus of much current investigation. In this chapter we briefly review the major developments in the research on the teaching of grammar over the past few decades. This review addresses two main issues: (1) whether grammar teaching makes any difference to language learning; and (2) what kinds of grammar teaching have been suggested to facilitate second language learning. To this end, the chapter examines research on the different ways in which formal instruction can be integrated with communicative activities. Continuing in the tradition of more than 2000 years of debate regarding whether grammar should be a primary focus of language instruction, should be eliminated entirely, or should be subordinated to meaning-focused use of the target language (for historical reviews see Howatt, 1984; Kelly, 1969), the need for grammar instruction is once again attracting the attention of second language acquisition (SLA) researchers and teachers. We briefly review arguments against and in support of grammar teaching before examining the approaches to grammatical instruction.

Journal ArticleDOI
TL;DR: The results of online market research are presented to assess the economic advantages of developing a CASE (computer-aided software engineering) tool that integrates linguistic analysis techniques for documents written in natural language, and to verify the existence of the potential demand for such a tool.
Abstract: Numerous studies in recent months have proposed the use of linguistic instruments to support requirements analysis. There are two main reasons for this: (i) the progress made in natural language processing and (ii) the need to provide the developers of software systems with support in the early phases of requirements definition and conceptual modelling. This paper presents the results of an online market research study intended (a) to assess the economic advantages of developing a CASE (computer-aided software engineering) tool that integrates linguistic analysis techniques for documents written in natural language, and (b) to verify the existence of the potential demand for such a tool. The research included a study of the language – ranging from completely natural to highly restricted – used in documents available for requirements analysis, an important factor given that on a technological level there is a trade-off between the language used and the performance of the linguistic instruments. To determine the potential demand for such a tool, some of the survey questions dealt with the adoption of development methodologies and consequently with models and support tools; other questions referred to activities deemed critical by the companies involved. Through statistical correspondence analysis of the responses, we were able to outline two "profiles" of companies that correspond to two potential market niches, which are characterised by their very different approaches to software development.

Journal ArticleDOI
TL;DR: This paper starts with a gradual introduction to GF, going through a sequence of simpler formalisms till the full power is reached, followed by a systematic presentation of the GF formalism and outlines of the main algorithms: partial evaluation and parser generation.
Abstract: Grammatical Framework (GF) is a special-purpose functional language for defining grammars. It uses a Logical Framework (LF) for a description of abstract syntax, and adds to this a notation for defining concrete syntax. GF grammars themselves are purely declarative, but can be used both for linearizing syntax trees and parsing strings. GF can describe both formal and natural languages. The key notion of this description is a grammatical object, which is not just a string, but a record that contains all information on inflection and inherent grammatical features such as number and gender in natural languages, or precedence in formal languages. Grammatical objects have a type system, which helps to eliminate run-time errors in language processing. In the same way as a LF, GF uses dependent types in abstract syntax to express semantic conditions, such as well-typedness and proof obligations. Multilingual grammars, where one abstract syntax has many parallel concrete syntaxes, can be used for reliable and meaning-preserving translation. They can also be used in authoring systems, where syntax trees are constructed in an interactive editor similar to proof editors based on LF. While being edited, the trees can simultaneously be viewed in different languages. This paper starts with a gradual introduction to GF, going through a sequence of simpler formalisms till the full power is reached. The introduction is followed by a systematic presentation of the GF formalism and outlines of the main algorithms: partial evaluation and parser generation. The paper concludes by brief discussions of the Haskell implementation of GF, existing applications, and related work.
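
GF's central split, one abstract syntax linearized by several concrete syntaxes, can be caricatured in Python; real GF grammars carry the inflection records and dependent types described above, which this sketch omits:

```python
# Caricature of GF's central idea: one abstract syntax tree, several
# concrete linearizations. Real GF grammars carry inflection records and
# dependent types; this sketch only shows the abstract/concrete split.
from dataclasses import dataclass

@dataclass
class Pred:            # abstract syntax: Pred(subject-concept, verb-concept)
    subj: str
    verb: str

LEXICON = {
    "english": {"cat": "the cat", "sleep": "sleeps"},
    "french":  {"cat": "le chat", "sleep": "dort"},
}

def linearize(tree, lang):
    """Concrete syntax: map one abstract tree to a string in `lang`."""
    lex = LEXICON[lang]
    return f"{lex[tree.subj]} {lex[tree.verb]}"

tree = Pred("cat", "sleep")              # language-independent meaning
print(linearize(tree, "english"))        # the cat sleeps
print(linearize(tree, "french"))         # le chat dort
```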

Book
01 Jan 2004
TL;DR: This chapter discusses the development of Embodied Presentation Agents and Their Application Fields, as well as some issues in the Design of Character Scripting and Specification Languages.
Abstract: Contents:
I. Introduction
- Introducing the Cast for Social Computing: Life-Like Characters
II. Languages and Tools for Life-Like Characters
- Representing and Parameterizing Agent Behaviors
- Toward a Unified Scripting Language: Lessons Learned from Developing CML and AML
- APML, a Markup Language for Believable Behavior Generation
- STEP: a Scripting Language for Embodied Agents
- gUI: Specifying Complete User Interaction
- A Behavior Language: Joint Action and Behavioral Idioms
- BEAT: the Behavior Expression Animation Toolkit
- Galatea: Open-Source Software for Developing Anthropomorphic Spoken Dialog Agents
- MPML and SCREAM: Scripting the Bodies and Minds of Life-Like Characters
III. Systems and Applications
- Great Expectations: Prediction in Entertainment Applications
- Shallow and Inner Forms of Emotional Intelligence in Advisory Dialog Simulation
- Web Information Integration Using Multiple Character Agents
- Expressive Behaviors for Virtual Worlds
- Playing with Agents: Agents in Social and Dramatic Games
- A Review of the Development of Embodied Presentation Agents and Their Application Fields
- Interface Agents That Facilitate Knowledge Interactions Between Community Members
- Animated Agents Capable of Understanding Natural Language and Performing Actions
IV. Synopsis
- What Makes Characters Seem Life-Like?
- Some Issues in the Design of Character Scripting and Specification Languages: a Personal View
- Online Material

Journal ArticleDOI
TL;DR: Using demonstrably available positive data, simple learning procedures can be formulated for each of the syntactic structures that have traditionally motivated invocation of the logical problem.
Abstract: Many researchers believe that there is a logical problem at the centre of language acquisition theory. According to this analysis, the input to the learner is too inconsistent and incomplete to determine the acquisition of grammar. Moreover, when corrective feedback is provided, children tend to ignore it. As a result, language learning must rely on additional constraints from universal grammar. To solve this logical problem, theorists have proposed a series of constraints and parameterizations on the form of universal grammar. Plausible alternatives to these constraints include: conservatism, item-based learning, indirect negative evidence, competition, cue construction, and monitoring. Careful analysis of child language corpora has cast doubt on claims regarding the absence of positive exemplars. Using demonstrably available positive data, simple learning procedures can be formulated for each of the syntactic structures that have traditionally motivated invocation of the logical problem. Within the perspective of emergentist theory (MacWhinney, 2001), the operation of a set of mutually supportive processes is viewed as providing multiple buffering for developmental outcomes. However, the fact that some syntactic structures are more difficult to learn than others can be used to highlight areas of intense grammatical competition and processing load.

Patent
11 Feb 2004
TL;DR: In this article, a concept recognition process is applied to automatically derive a representation of concepts embodied in the communication, which is used to provide to a human agent information useful in responding to the natural language communication.
Abstract: In one aspect, an arbitrary natural language communication is received from a user. A concept recognition process is applied to automatically derive a representation of concepts embodied in the communication. The concept representation is used to provide to a human agent information useful in responding to the natural language communication.

Krista Bennett
01 Jan 2004
TL;DR: This paper provides a basic introduction to steganography and steganalysis, with a particular focus on text steganography, and highlights some of the problems inherent in text steganography as well as issues with existing solutions, and describes linguistic problems with character-based, lexical, and syntactic approaches.
Abstract: Steganography is an ancient art. With the advent of computers, we have vast accessible bodies of data in which to hide information, and increasingly sophisticated techniques with which to analyze and recover that information. While much of the recent research in steganography has been centered on hiding data in images, many of the solutions that work for images are more complicated when applied to natural language text as a cover medium. Many approaches to steganalysis attempt to detect statistical anomalies in cover data which predict the presence of hidden information. Natural language cover texts must not only pass the statistical muster of automatic analysis, but also the minds of human readers. Linguistically naive approaches to the problem use statistical frequency of letter combinations or random dictionary words to encode information. More sophisticated approaches use context-free grammars to generate syntactically correct cover text which mimics the syntax of natural text. None of these uses meaning as a basis for generation, and little attention is paid to the semantic cohesiveness of a whole text as a data point for statistical attack. This paper provides a basic introduction to steganography and steganalysis, with a particular focus on text steganography. Text-based information hiding techniques are discussed, providing motivation for moving toward linguistic steganography and steganalysis. We highlight some of the problems inherent in text steganography as well as issues with existing solutions, and describe linguistic problems with character-based, lexical, and syntactic approaches. Finally, the paper explores how a semantic and rhetorical generation approach suggests solutions for creating more believable cover texts, presenting some current and future issues in analysis and generation. The paper is intended to be both general enough that linguists without training in information security and computer science can understand the material, and specific enough that the linguistic and computational problems are described in adequate detail to justify the conclusions suggested.
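
The "linguistically naive" lexical approach the paper critiques is easy to make concrete: hide one bit per synonym pair by which member is chosen. A toy Python sketch with invented synonym pairs:

```python
# Illustration of the naive lexical steganography the paper critiques:
# each synonym pair hides one bit, chosen by the secret message. Synonym
# pairs are invented; real systems must also survive statistical analysis
# and, as the paper argues, human judgments of semantic cohesiveness.

SYNONYM_PAIRS = [("big", "large"), ("quick", "fast"), ("begin", "start")]

def embed(bits):
    """Pick one word per pair according to the bit to hide."""
    return [pair[bit] for pair, bit in zip(SYNONYM_PAIRS, bits)]

def extract(words):
    """Recover the bits from which synonym was used."""
    return [pair.index(w) for pair, w in zip(SYNONYM_PAIRS, words)]

cover = embed([1, 0, 1])
print(cover)           # ['large', 'quick', 'start']
print(extract(cover))  # [1, 0, 1]
```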

Proceedings ArticleDOI
02 May 2004
TL;DR: ITSPOKE is a spoken dialogue system that uses the Why2-Atlas text-based tutoring system as its "back-end" and generates an empirically-based understanding of the ramifications of adding spoken language capabilities to text- based dialogue tutors.
Abstract: ITSPOKE is a spoken dialogue system that uses the Why2-Atlas text-based tutoring system as its "back-end". A student first types a natural language answer to a qualitative physics problem. ITSPOKE then engages the student in a spoken dialogue to provide feedback and correct misconceptions, and to elicit more complete explanations. We are using ITSPOKE to generate an empirically-based understanding of the ramifications of adding spoken language capabilities to text-based dialogue tutors.

ReportDOI
01 Jan 2004
TL;DR: This paper presents a statistical language-independent framework for identifying and tracking named, nominal and pronominal references to entities within unrestricted text documents, and chaining them into clusters corresponding to each logical entity present in the text.
Abstract: Entity detection and tracking is a relatively new addition to the repertoire of natural language tasks. In this paper, we present a statistical language-independent framework for identifying and tracking named, nominal and pronominal references to entities within unrestricted text documents, and chaining them into clusters corresponding to each logical entity present in the text. Both the mention detection model and the novel entity tracking model can use arbitrary feature types, being able to integrate a wide array of lexical, syntactic and semantic features. In addition, the mention detection model crucially uses feature streams derived from different named entity classifiers. The proposed framework is evaluated with several experiments run in Arabic, Chinese and English texts; a system based on the approach described here and submitted to the latest Automatic Content Extraction (ACE) evaluation achieved top-tier results in all three evaluation languages.
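
The chaining step can be caricatured as greedy clustering: link each new mention to an existing entity when a compatibility test fires, otherwise start a new entity. A toy Python sketch whose string-matching rule is invented (the paper's model scores rich lexical, syntactic, and semantic features instead):

```python
# Toy version of mention chaining: attach each mention to the first
# compatible entity cluster, else start a new one. The matching rule is
# invented and would not handle pronominal mentions.

def compatible(mention, cluster):
    """Hypothetical link test: exact match or substring containment."""
    return any(mention == m or mention in m or m in mention for m in cluster)

def chain_mentions(mentions):
    clusters = []
    for mention in mentions:
        for cluster in clusters:
            if compatible(mention, cluster):
                cluster.append(mention)
                break
        else:
            clusters.append([mention])   # no antecedent found: new entity
    return clusters

doc = ["John Smith", "Smith", "the CEO", "Mary Jones", "Jones", "John Smith"]
print(chain_mentions(doc))
# [['John Smith', 'Smith', 'John Smith'], ['the CEO'], ['Mary Jones', 'Jones']]
```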

Journal ArticleDOI
TL;DR: This paper argues that the language-as-code idea, although prima facie endowed with the attractiveness of common sense, is untenable, and should not figure, at least in the role usually assigned to it, in any inquiry into either language or human cognition.

Book
01 Jan 2004
TL;DR: This chapter discusses the grammatical construction of scientific knowledge, the framing of the English clause, and the power of language in the reshaping of human experience.
Abstract: Contents:
- Introduction: On the power of language
- Language and the reshaping of human experience
- Language and knowledge: the 'unpacking' of text
- Things and relations: regrammaticizing experience as technical knowledge
- The grammatical construction of scientific knowledge: the framing of the English clause
- On the language of physical science
- Some grammatical problems in scientific English
- On the grammar of scientific English
- Writing Science: literacy and discursive power

Journal ArticleDOI
TL;DR: This paper introduces, classifies, and surveys Arabic morphological analysis techniques, and summarizes and organizes the information available in the literature in an attempt to motivate researchers to look into these techniques and try to develop more advanced ones.
Abstract: After several decades of heavy research activity on English stemmers, Arabic morphological analysis techniques have become a popular area of research. The Arabic language is one of the Semitic languages; it exhibits a very systematic but complex morphological structure based on root-pattern schemes. As a consequence, a survey of such techniques is all the more necessary. The aim of this paper is to summarize and organize the information available in the literature in an attempt to motivate researchers to look into these techniques and try to develop more advanced ones. This paper introduces, classifies, and surveys Arabic morphological analysis techniques. Furthermore, conclusions, open areas, and future directions are provided at the end.
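
The root-pattern scheme at the heart of Arabic morphology can be shown by interleaving a triliteral root into vowel patterns. A short Python sketch with simplified transliterations:

```python
# Sketch of the root-and-pattern idea behind Arabic morphology: a
# triliteral root is interleaved into vowel patterns ("C" marks a root
# consonant slot). Transliterations are simplified for illustration.

def apply_pattern(root, pattern):
    """Replace successive 'C' slots in the pattern with root consonants."""
    consonants = iter(root)
    return "".join(next(consonants) if ch == "C" else ch for ch in pattern)

root = "ktb"                     # the root associated with 'writing'
patterns = {
    "CaCaCa": "he wrote",        # kataba
    "CaaCiC": "writer",          # kaatib
    "maCCuuC": "written",        # maktuub
    "CiCaaC": "book",            # kitaab
}
for pattern, gloss in patterns.items():
    print(f"{apply_pattern(root, pattern):10s} {gloss}")
```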

Proceedings ArticleDOI
23 Aug 2004
TL;DR: The ability of the implemented system to perform several forms of probabilistic and temporal inferences to extract answers to complex questions is reported on, indicating enhanced accuracy over current state-of-the-art Q/A systems.
Abstract: The ability to answer complex questions posed in Natural Language depends on (1) the depth of the available semantic representations and (2) the inferential mechanisms they support. In this paper we describe a QA architecture where questions are analyzed and candidate answers generated by 1) identifying predicate argument structures and semantic frames from the input and 2) performing structured probabilistic inference using the extracted relations in the context of a domain and scenario model. A novel aspect of our system is a scalable and expressive representation of actions and events based on Coordinated Probabilistic Relational Models (CPRM). In this paper we report on the ability of the implemented system to perform several forms of probabilistic and temporal inferences to extract answers to complex questions. The results indicate enhanced accuracy over current state-of-the-art Q/A systems.