
Showing papers by "Research Institute for Advanced Computer Science published in 2001"


Journal ArticleDOI
01 Oct 2001
TL;DR: Java PathExplorer (JPaX), a tool for monitoring the execution of Java programs, can be used during program testing to gain increased information about program executions, and can potentially also be applied during operation to survey safety-critical systems.
Abstract: We present recent work on the development of Java PathExplorer (JPaX), a tool for monitoring the execution of Java programs. JPaX can be used during program testing to gain increased information about program executions, and can potentially also be applied during operation to survey safety-critical systems. The tool facilitates automated instrumentation of a program's bytecode, which will then emit events to an observer during its execution. The observer checks the events against user-provided high-level requirement specifications, for example temporal logic formulae, and against lower-level error detection procedures, usually concurrency related, such as deadlock and data race algorithms. High-level requirement specifications together with their underlying logics are defined in rewriting logic using Maude, and can then either be checked directly using the Maude rewriting engine or first be translated to efficient data structures and then checked in Java.
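A minimal sketch of the observer idea follows, assuming an invented (thread, action, lock) event format and a simple lock-discipline property; it is not the actual JPaX interface, which instruments bytecode and supports temporal logic specifications and data race checks.

```python
# Minimal sketch of a runtime observer, loosely in the spirit of JPaX.
# The event format and the checked property are illustrative assumptions,
# not the actual JPaX interfaces.

def check_lock_discipline(events):
    """Flag threads that release a lock they do not hold, or finish holding locks.

    `events` is an iterable of (thread, action, lock) tuples, where action
    is "acquire" or "release".
    """
    held = {}        # thread -> set of locks currently held
    errors = []
    for thread, action, lock in events:
        locks = held.setdefault(thread, set())
        if action == "acquire":
            locks.add(lock)
        elif action == "release":
            if lock not in locks:
                errors.append(f"{thread} released {lock} without holding it")
            locks.discard(lock)
    for thread, locks in held.items():
        if locks:
            errors.append(f"{thread} finished still holding {sorted(locks)}")
    return errors

trace = [("T1", "acquire", "A"), ("T1", "release", "A"),
         ("T2", "acquire", "B"), ("T2", "release", "C")]
print(check_lock_discipline(trace))
# -> ["T2 released C without holding it", "T2 finished still holding ['B']"]
```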

289 citations


Proceedings Article
21 Jan 2001
TL;DR: This paper presents a constraint-based approach to solving the Earth Observing Satellites (EOS) scheduling problem, and proposes a stochastic heuristic search method for solving it.
Abstract: We address the problem of scheduling observations for a collection of earth observing satellites. This scheduling task is a difficult optimization problem, potentially involving many satellites, hundreds of requests, constraints on when and how to service each request, and resources such as instruments, recording devices, transmitters, and ground stations. High-fidelity models are required to ensure the validity of schedules; at the same time, the size and complexity of the problem makes it unlikely that systematic optimization search methods will be able to solve them in a reasonable time. This paper presents a constraint-based approach to solving the Earth Observing Satellites (EOS) scheduling problem, and proposes a stochastic heuristic search method for solving it.
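The following toy sketch, with invented requests, slots, and priorities, illustrates what a stochastic heuristic search over an oversubscribed scheduling problem can look like; it is not the constraint model or search method proposed in the paper.

```python
import random

# Toy oversubscribed scheduling problem: each request has a priority and a set
# of feasible time slots; a slot can serve at most one request.  This is only a
# loose illustration of stochastic heuristic search, not the paper's EOS method.
requests = {f"r{i}": {"priority": random.randint(1, 10),
                      "slots": random.sample(range(20), k=3)} for i in range(30)}

def value(schedule):
    return sum(requests[r]["priority"] for r in schedule.values())

def stochastic_search(iters=5000):
    schedule = {}                                  # slot -> request id
    best, best_val = dict(schedule), value(schedule)
    for _ in range(iters):
        r = random.choice(list(requests))
        slot = random.choice(requests[r]["slots"])
        if schedule.get(slot) == r:
            continue
        # Un-schedule r from any slot it already occupies, then move it here.
        candidate = {s: q for s, q in schedule.items() if q != r}
        candidate[slot] = r
        # Greedy acceptance with occasional random escapes from local optima.
        if value(candidate) >= value(schedule) or random.random() < 0.05:
            schedule = candidate
        if value(schedule) > best_val:
            best, best_val = dict(schedule), value(schedule)
    return best, best_val

print("best schedule value found:", stochastic_search()[1])
```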

168 citations


01 Sep 2001
TL;DR: Java PathExplorer (JPaX), as presented in this paper, is a tool for monitoring the execution of Java programs; it can be used during program testing to gain increased information about program executions, and can potentially also be applied during operation to survey safety-critical systems.
Abstract: We present recent work on the development of Java PathExplorer (JPaX), a tool for monitoring the execution of Java programs. JPaX can be used during program testing to gain increased information about program executions, and can potentially also be applied during operation to survey safety-critical systems. The tool facilitates automated instrumentation of a program's bytecode, which will then emit events to an observer during its execution. The observer checks the events against user-provided high-level requirement specifications, for example temporal logic formulae, and against lower-level error detection procedures, usually concurrency related, such as deadlock and data race algorithms. High-level requirement specifications together with their underlying logics are defined in rewriting logic using Maude, and can then either be checked directly using the Maude rewriting engine or first be translated to efficient data structures and then checked in Java.

114 citations



01 Jan 2001
TL;DR: In this paper, hybrid diagnosis using particle filters is presented as an alternative, its strengths and weaknesses are examined, and some extensions to particle filters are designed to make them more suitable for use in diagnosis problems.
Abstract: Planetary rovers provide a considerable challenge for robotic systems in that they must operate for long periods autonomously, or with relatively little intervention. To achieve this, they need to have on-board fault detection and diagnosis capabilities in order to determine the actual state of the vehicle, and decide what actions are safe to perform. Traditional model-based diagnosis techniques are not suitable for rovers due to the tight coupling between the vehicle's performance and its environment. Hybrid diagnosis using particle filters is presented as an alternative, and its strengths and weaknesses are examined. We also present some extensions to particle filters that are designed to make them more suitable for use in diagnosis problems.
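A minimal particle-filter sketch for a single component with a binary fault mode is shown below; the transition and sensor models, and all numbers, are invented for illustration and are far simpler than the rover models discussed in the paper.

```python
import random, math

# Toy hybrid-diagnosis sketch: a component is either "ok" or "faulty"; when
# faulty, its sensor reading is biased.  A simple particle filter tracks the
# posterior over the fault mode from noisy readings.  The model and numbers are
# illustrative assumptions, not the rover models used in the paper.

P_FAIL = 0.02           # per-step probability of breaking
BIAS, SIGMA = 2.0, 0.5  # sensor bias when faulty, sensor noise

def likelihood(reading, mode):
    mean = BIAS if mode == "faulty" else 0.0
    return math.exp(-0.5 * ((reading - mean) / SIGMA) ** 2)

def particle_filter(readings, n=1000):
    particles = ["ok"] * n
    for z in readings:
        # Propagate: a working component may fail; a failed one stays failed.
        particles = [("faulty" if p == "faulty" or random.random() < P_FAIL else "ok")
                     for p in particles]
        # Weight by the sensor likelihood and resample.
        weights = [likelihood(z, p) for p in particles]
        particles = random.choices(particles, weights=weights, k=n)
        yield sum(p == "faulty" for p in particles) / n

readings = [0.1, -0.2, 0.0, 1.9, 2.2, 2.0]   # the fault appears around step 4
for t, p_fault in enumerate(particle_filter(readings)):
    print(f"step {t}: P(faulty) ~ {p_fault:.2f}")
```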

96 citations


01 Jan 2001
TL;DR: This paper describes recent work on designing an environment, called Java PathExplorer, for monitoring the execution of Java programs, which contains algorithms for detecting classical error patterns in concurrent programs, such as deadlocks and data races.
Abstract: We describe recent work on designing an environment called Java PathExplorer for monitoring the execution of Java programs. This environment facilitates the testing of execution traces against high level specifications, including temporal logic formulae. In addition, it contains algorithms for detecting classical error patterns in concurrent programs, such as deadlocks and data races. An initial prototype of the tool has been applied to the executive module of the planetary Rover K9, developed at NASA Ames. In this paper we describe the background and motivation for the development of this tool, including comments on how it relates to formal methods tools as well as to traditional testing, and we then present the tool itself.

84 citations


01 Jan 2001
TL;DR: In this paper, non-orthogonal wavelet basis functions are used to detect non-smooth behavior and to control the amount of numerical dissipation to be added by the scheme.
Abstract: The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly physical-problem dependent. To minimize the tuning of parameters and physical-problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from utilizing appropriate non-orthogonal wavelet basis functions and they can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is the modification of the multiresolution method of Harten (1995) by converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these sensors are scheme independent and can be stand-alone options for numerical algorithms other than the Yee et al. scheme.
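As a loose illustration of the idea of sensing local regularity to decide where dissipation is needed, the sketch below estimates a Lipschitz-like exponent from the ratio of differences at two grid scales and flags cells near a jump; it is a toy stand-in, not the paper's B-spline or Harten multiresolution wavelet sensors.

```python
import numpy as np

# Toy smoothness sensor: estimate a local regularity exponent from the ratio of
# first differences at two grid spacings and flag cells where the estimate is
# small (shock-like), so extra dissipation would be applied only there.  This is
# a simplified stand-in for the wavelet/multiresolution sensors in the paper.

def regularity_sensor(u, alpha_threshold=0.5):
    d1 = np.abs(np.diff(u))          # differences at spacing h
    d2 = np.abs(u[2:] - u[:-2])      # differences at spacing 2h
    eps = 1e-12
    # For locally Lipschitz-alpha behavior, |d(h)| ~ h**alpha, so the exponent
    # can be estimated as alpha ~ log2(|d(2h)| / |d(h)|).
    alpha = np.log2((d2 + eps) / (d1[:-1] + eps))
    significant = d1[:-1] > 0.1 * d1.max()       # ignore tiny, noise-level details
    return (alpha < alpha_threshold) & significant

x = np.linspace(0.0, 1.0, 201)
u = np.sin(2 * np.pi * x) + (x > 0.5)            # smooth wave plus a jump at x = 0.5
flags = regularity_sensor(u)
print("cells flagged as non-smooth:", np.nonzero(flags)[0])   # -> [100], at the jump
```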

57 citations


Proceedings Article
01 Jan 2001
TL;DR: Java PathExplorer (JPaX), a tool for monitoring the execution of Java programs, can be used during program testing to gain increased information about program executions, and can potentially also be applied during operation to survey safety-critical systems.
Abstract: We present recent work on the development of Java PathExplorer (JPaX), a tool for monitoring the execution of Java programs. JPaX can be used during program testing to gain increased information about program executions, and can potentially also be applied during operation to survey safety-critical systems. The tool facilitates automated instrumentation of a program's bytecode, which will then emit events to an observer during its execution. The observer checks the events against user-provided high-level requirement specifications, for example temporal logic formulae, and against lower-level error detection procedures, usually concurrency related, such as deadlock and data race algorithms. High-level requirement specifications together with their underlying logics are defined in rewriting logic using Maude, and can then either be checked directly using the Maude rewriting engine or first be translated to efficient data structures and then checked in Java.

53 citations


01 May 2001
TL;DR: An algorithm is presented which takes an LTL formula and generates an efficient dynamic programming algorithm that tests whether the LTL formula is satisfied by a finite trace of events given as input.
Abstract: The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
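A small sketch of the dynamic-programming idea is given below: the trace is scanned backwards while only the truth values of the sub-formulas at the current and next positions are kept, giving time linear in the trace and memory depending only on the formula. The formula encoding and the finite-trace conventions used here are illustrative assumptions, not the paper's construction.

```python
# Finite-trace LTL evaluation by backward dynamic programming.  Conventions for
# this sketch: at the last state, X f is false, and F/G/U behave as on a trace
# that simply ends.

def subformulas(f):
    """Post-order list of sub-formulas (children before parents)."""
    if isinstance(f, str):
        return [f]
    out = []
    for child in f[1:]:
        for s in subformulas(child):
            if s not in out:
                out.append(s)
    out.append(f)
    return out

def evaluate(formula, trace):
    subs = subformulas(formula)
    nxt = {}                                   # sub-formula values at position i+1
    for state in reversed(trace):              # backward dynamic programming
        now = {}
        for f in subs:
            if isinstance(f, str):             # atomic proposition
                now[f] = f in state
            elif f[0] == "not":
                now[f] = not now[f[1]]
            elif f[0] == "and":
                now[f] = now[f[1]] and now[f[2]]
            elif f[0] == "X":                  # next (false at the last state)
                now[f] = nxt.get(f[1], False)
            elif f[0] == "U":                  # f1 U f2
                now[f] = now[f[2]] or (now[f[1]] and nxt.get(f, False))
            elif f[0] == "F":                  # eventually
                now[f] = now[f[1]] or nxt.get(f, False)
            elif f[0] == "G":                  # always
                now[f] = now[f[1]] and nxt.get(f, True)
        nxt = now
    return nxt[formula]

# G(request -> F ack), written with not/and: G( not(request and not(F ack)) )
prop = ("G", ("not", ("and", "request", ("not", ("F", "ack")))))
trace = [{"request"}, set(), {"ack"}, {"request"}, {"ack"}]
print(evaluate(prop, trace))   # -> True
```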

43 citations


Proceedings ArticleDOI
15 Nov 2001
TL;DR: This paper describes a system currently under development that monitors eye movements during interaction to improve the performance of a spoken dialogue system on referring expressions that are underspecified according to the "literary" model of reference resolution.
Abstract: Most computational spoken dialogue systems take a "literary" approach to reference resolution. With this type of approach, entities that are mentioned by a human interactor are unified with elements in the world state based on the same principles that guide the process during text interpretation. In human-to-human interaction, however, referring is a much more collaborative process. Participants often under-specify their referents, relying on their discourse partners for feedback if more information is needed to uniquely identify a particular referent. By monitoring eye-movements during this interaction, it is possible to improve the performance of a spoken dialogue system on referring expressions that are underspecified according to the literary model. This paper describes a system currently under development that employs such a strategy.
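The sketch below illustrates the general strategy with an invented object representation: when a referring expression matches several objects in the world state, the most recently fixated candidate is preferred. It is only an illustration, not the system described in the paper.

```python
# Toy gaze-aided reference resolution.  The object representation, the matching
# rule, and the "most recent fixation" heuristic are assumptions made for this
# sketch.

world = [
    {"id": "block1", "type": "block", "color": "red"},
    {"id": "block2", "type": "block", "color": "blue"},
    {"id": "cup1",   "type": "cup",   "color": "red"},
]
# Recent fixations as (timestamp, object id), most recent last.
fixations = [(10.1, "block1"), (11.4, "cup1"), (12.0, "block2")]

def resolve(description, world, fixations):
    candidates = [o for o in world
                  if all(o.get(k) == v for k, v in description.items())]
    if len(candidates) <= 1:                     # unambiguous, or no referent at all
        return candidates[0]["id"] if candidates else None
    last_seen = {obj_id: t for t, obj_id in fixations}
    # Underspecified expression: fall back to the most recently fixated candidate.
    return max(candidates, key=lambda o: last_seen.get(o["id"], float("-inf")))["id"]

print(resolve({"type": "block"}, world, fixations))   # -> block2 (just looked at)
print(resolve({"type": "cup"}, world, fixations))     # -> cup1 (unambiguous)
```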

38 citations


Proceedings Article
01 Jan 2001
TL;DR: This paper presents a minimal enumerative approach to the problem of compiling typed unification grammars into CFG language models, a prototype implementation, and results of experiments in which it was used to compile some non-trivial unification grammars.
Abstract: This paper presents a minimal enumerative approach to the problem of compiling typed unification grammars into CFG language models, a prototype implementation and results of experiments in which it was used to compile some non-trivial unification grammars. We argue that enumerative methods are considerably more useful than has been previously believed. Also, the simplicity of enumerative methods makes them a natural baseline against which to compare alternative approaches.

01 Jan 2001
TL;DR: In this paper, a general framework for the design of adaptive low-dissipative high order schemes is presented, based on four integrated design criteria: (1) for stability, condition the governing equations before applying the numerical scheme whenever possible; (2) for consistency, prefer compatible schemes whose stability properties, including physical and numerical boundary condition treatments, mirror those of the discrete analogue of the continuum; (3) to minimize numerical dissipation contamination, use efficient and adaptive numerical dissipation control; and (4) for practical use, make the approach efficient, applicable to general geometries, and supported by reliable dynamic grid adaptation where necessary.
Abstract: A general framework for the design of adaptive low-dissipative high order schemes is presented. It encompasses a rather complete treatment of the numerical approach based on four integrated design criteria: (1) For stability considerations, condition the governing equations before the application of the appropriate numerical scheme whenever it is possible; (2) For consistency, compatible schemes that possess stability properties, including physical and numerical boundary condition treatments, similar to those of the discrete analogue of the continuum are preferred; (3) For the minimization of numerical dissipation contamination, efficient and adaptive numerical dissipation control to further improve nonlinear stability and accuracy should be used; and (4) For practical considerations, the numerical approach should be efficient and applicable to general geometries, and an efficient and reliable dynamic grid adaptation should be used if necessary. These design criteria are, in general, very useful to a wide spectrum of flow simulations. However, the demand on the overall numerical approach for nonlinear stability and accuracy is much more stringent for long-time integration of complex multiscale viscous shock/shear/turbulence/acoustics interactions and numerical combustion. Robust classical numerical methods for less complex flow physics are not suitable or practical for such applications. The present approach is designed expressly to address such flow problems, especially unsteady flows. The minimization of employing very fine grids to overcome the production of spurious numerical solutions and/or instability due to under-resolved grids is also sought. The incremental studies to illustrate the performance of the approach are summarized. Extensive testing and full implementation of the approach is forthcoming. The results shown so far are very encouraging.

Proceedings Article
01 Aug 2001
TL;DR: An approach to checking a running program against its Linear Temporal Logic specifications by translating LTL formulae to finite-state automata, which are used as observers of the program behavior.
Abstract: This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Buchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
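To illustrate what "an automaton used as an observer of finite traces" means in practice, here is a hand-written two-state observer for a simple response property; the paper generates such automata from LTL automatically, so this hard-coded machine is only an illustration.

```python
# Sketch of an automaton observer checked at the end of a finite run: a
# two-state machine for "every request is eventually followed by ack".
# The transitions are hand-written here, whereas the paper derives them
# from an LTL formula.

class ResponseObserver:
    def __init__(self):
        self.state = "idle"          # "idle": nothing pending, "waiting": ack owed

    def step(self, event):
        if self.state == "idle" and event == "request":
            self.state = "waiting"
        elif self.state == "waiting" and event == "ack":
            self.state = "idle"

    def accepts(self):
        # On a finite trace the property holds only if no request is left pending.
        return self.state == "idle"

def monitor(events):
    obs = ResponseObserver()
    for e in events:
        obs.step(e)
    return obs.accepts()

print(monitor(["request", "work", "ack", "request", "ack"]))  # -> True
print(monitor(["request", "work"]))                           # -> False
```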

01 Jan 2001
TL;DR: The Java PathExplorer (JPAX) as mentioned in this paper is a tool developed at NASA Ames for monitoring the execution of Java programs, which can be used not only during program testing to reveal subtle errors, but also can be applied during operation to survey safety critical systems.
Abstract: We briefly present Java PathExplorer (JPAX), a tool developed at NASA Ames for monitoring the execution of Java programs. JPAX can be used not only during program testing to reveal subtle errors, but can also be applied during operation to survey safety critical systems. The tool facilitates automated instrumentation of a program in order to properly observe its execution. The instrumentation can be either at the bytecode level or at the source level when the source code is available. JPaX is an instance of a more general project, called PathExplorer (PAX), which is a basis for experiments rather than a fixed system, capable of monitoring various programming languages and experimenting with other logics and analysis techniques.

01 Jan 2001
TL;DR: The complete VIPER system points out the power of combining plan execution, simulation, and visualization for envisioning rover behavior; it also demonstrates the utility of developing generic technologies, which can be combined in novel and useful ways.
Abstract: Simulation and visualization of rover behavior are critical capabilities for scientists and rover operators to construct, test, and validate plans for commanding a remote rover. The VIPER system links these capabilities using a high-fidelity virtual-reality (VR) environment, a kinematically accurate simulator, and a flexible plan executive to allow users to simulate and visualize possible execution outcomes of a plan under development. This work is part of a larger vision of a science-centered rover control environment, where a scientist may inspect and explore the environment via VR tools, specify science goals, and visualize the expected and actual behavior of the remote rover. The VIPER system is constructed from three generic systems, linked together via a minimal amount of customization into the integrated system. The complete system points out the power of combining plan execution, simulation, and visualization for envisioning rover behavior; it also demonstrates the utility of developing generic technologies which can be combined in novel and useful ways.

Journal ArticleDOI
01 May 2001
TL;DR: A tool is developed that automatically checks an abstract interpretation against a set of user-defined properties and can be used to select an appropriate abstract interpretation, to characterize the specific loss of information during abstraction, and to compare different abstractions with each other.
Abstract: We present a logical framework in which abstract interpretations can be naturally specified and then verified. Our approach is based on membership equational logic which extends equational logics by membership axioms, asserting that a term has a certain sort. We represent an abstract interpretation as a membership equational logic specification, usually as an overloaded order-sorted signature with membership axioms. It turns out that, for any term, its least sort over this specification corresponds to its most concrete abstract value. Maude implements membership equational logic and provides mechanisms to calculate the least sort of a term efficiently. We first show how Maude can be used to get prototyping of abstract interpretations "for free." Building on the meta-logic facilities of Maude, we further develop a tool that automatically checks an abstract interpretation against a set of user-defined properties. This can be used to select an appropriate abstract interpretation, to characterize the specific loss of information during abstraction, and to compare different abstractions with each other.
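The paper performs these checks in Maude over membership equational logic; the Python stand-in below only illustrates the kind of user-defined property being checked, here the soundness of a small sign abstraction tested exhaustively over a finite concrete range.

```python
from itertools import product

# Python stand-in for checking an abstract interpretation against user-defined
# properties.  The sign domain, the candidate abstract addition, and the
# exhaustive soundness test are assumptions made for this sketch, not Maude code.

NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def alpha(n):                      # abstraction of a concrete integer
    return NEG if n < 0 else POS if n > 0 else ZERO

ABS_ADD = {                        # candidate abstract addition on signs
    (NEG, NEG): NEG, (POS, POS): POS,
    (ZERO, NEG): NEG, (NEG, ZERO): NEG,
    (ZERO, POS): POS, (POS, ZERO): POS,
    (ZERO, ZERO): ZERO,
}
def abs_add(a, b):
    return ABS_ADD.get((a, b), TOP)

def leq(a, b):                     # abstract ordering: everything is below TOP
    return a == b or b == TOP

# User-defined property: soundness, alpha(x + y) <= abs_add(alpha(x), alpha(y)),
# checked exhaustively over a small concrete range.
violations = [(x, y) for x, y in product(range(-5, 6), repeat=2)
              if not leq(alpha(x + y), abs_add(alpha(x), alpha(y)))]
print("soundness violations:", violations)   # -> []
```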

01 Jan 2001
TL;DR: AutoBayes synthesizes code by a schema-guided deductive process; this paper focuses on the interaction between the symbolic computations and the deductive synthesis process.
Abstract: AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain. Its input is a concise description of a data analysis problem in the form of a statistical model; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. AutoBayes synthesizes code by a schema-guided deductive process. Schemas (i.e., code templates with associated semantic constraints) are applied to the original problem and recursively to emerging subproblems. AutoBayes complements this approach by symbolic computation to derive closed-form solutions whenever possible. In this paper, we concentrate on the interaction between the symbolic computations and the deductive synthesis process.
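The sketch below, which is not AutoBayes code, illustrates the kind of symbolic step a synthesis schema can invoke: derive a closed-form maximum-likelihood estimate with a computer algebra system and emit straight-line code only when such a form exists.

```python
import sympy as sp

# Toy stand-in for the symbolic side of schema-guided synthesis: try to derive a
# closed-form estimator; if that succeeds, emit simple code for it, otherwise a
# schema would fall back to an iterative numerical solver.

mu = sp.Symbol("mu")
xs = sp.symbols("x0:4")                       # a small symbolic data sample

# Negative log-likelihood of a Gaussian mean (variance treated as a constant).
nll = sum((x - mu) ** 2 for x in xs)
solutions = sp.solve(sp.diff(nll, mu), mu)

if solutions:                                 # closed form found: emit code
    estimate = sp.simplify(solutions[0])
    print("closed form:", estimate)           # -> (x0 + x1 + x2 + x3)/4
    print("generated C-like code:", sp.ccode(estimate))
else:
    print("no closed form; fall back to a numerical optimization schema")
```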

01 Sep 2001
TL;DR: This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player; this recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of simulated annealing.
Abstract: The game-theoretic field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved "as a side-effect". Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed game-theory-motivated algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting improves simulated annealing by several orders of magnitude for spin glass relaxation and bin-packing.
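For orientation only, the sketch below runs ordinary simulated annealing on a toy spin-glass relaxation with per-variable ("per-player") moves; it makes the variable-as-player recasting concrete but omits the private-utility machinery that distinguishes the COIN algorithm.

```python
import random, math

# Toy spin-glass relaxation: each step, one player (spin) proposes flipping its
# value, and the move is accepted with a Metropolis rule on the change it causes.
# This is plain simulated annealing with per-player moves, shown only to make
# the recasting concrete; it is not the COIN algorithm from the paper.

N = 40
J = {(i, j): random.choice([-1.0, 1.0]) for i in range(N) for j in range(i + 1, N)}

def energy(s):
    return sum(J[i, j] * s[i] * s[j] for (i, j) in J)

def anneal(steps=20000, t0=2.0):
    s = [random.choice([-1, 1]) for _ in range(N)]
    e = energy(s)
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-3
        player = random.randrange(N)                 # which player moves
        # Energy change caused by this player's flip.
        delta = -2 * s[player] * sum(J[min(player, j), max(player, j)] * s[j]
                                     for j in range(N) if j != player)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            s[player] = -s[player]
            e += delta
    return e

print("final energy:", anneal())
```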

Proceedings ArticleDOI
01 Jan 2001
TL;DR: The Object Infrastructure Framework is described, an Aspect Oriented Programming (AOP) environment for developing distributed systems that provides policy insertion and can be used to enforce policies such as maintaining the annotations of artifacts, particularly the provenance and access control rules of artifacts.
Abstract: NASA's Intelligent Synthesis Environment (ISE) program is a grand attempt to develop a system to transform the way complex artifacts are engineered. This paper discusses a "middleware" architecture for enabling the development of ISE. Desirable elements of such an Intelligent Synthesis Architecture (ISA) include remote invocation; plug-and-play applications; scripting of applications; management of design artifacts, tools, and artifact and tool attributes; common system services; system management; and systematic enforcement of policies. A typical middleware foundation for an ISA is a distributed object technology such as CORBA (Common Object Request Broker Architecture). I argue that such an architecture can be profitably extended by enabling "plug-and-play" insertion of new policies into the system. I describe the Object Infrastructure Framework, an Aspect Oriented Programming (AOP) environment for developing distributed systems that provides policy insertion. This technology can be used to enforce policies such as maintaining the annotations of artifacts, particularly the provenance and access control rules of artifacts; performing automatic datatype transformations between representations; supplying alternative servers of the same service; reporting on the status of jobs and of the system; conveying privileges throughout an application; supporting long-lived transactions; maintaining version consistency; and providing software redundancy and mobility.
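A small single-process analogue of policy insertion is sketched below using Python decorators to wrap a service call with access-control and provenance policies; the Object Infrastructure Framework itself targets distributed (CORBA-style) invocations, so this is only an illustration of the idea.

```python
import functools, datetime

# Policies as wrappers composed around a service call without changing the
# service itself.  The roles, log format, and functions are assumptions made
# for this sketch.

def access_control(allowed_roles):
    def policy(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            if user["role"] not in allowed_roles:
                raise PermissionError(f"{user['name']} may not call {fn.__name__}")
            return fn(user, *args, **kwargs)
        return wrapper
    return policy

def provenance(log):
    def policy(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            result = fn(user, *args, **kwargs)
            log.append((datetime.datetime.now(), user["name"], fn.__name__, args))
            return result
        return wrapper
    return policy

audit_log = []

@access_control(allowed_roles={"engineer"})
@provenance(audit_log)
def update_artifact(user, artifact_id, new_value):
    return f"artifact {artifact_id} set to {new_value}"

print(update_artifact({"name": "ada", "role": "engineer"}, "wing-42", 3.14))
print(audit_log)
```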

01 Jan 2001
TL;DR: This position paper describes ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking.
Abstract: When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.
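As a toy illustration of using a coverage metric as heuristic guidance, the sketch below explores a small invented transition system best-first, preferring states whose outgoing branches have not yet been covered; it is not the group's model checker or its coverage instrumentation.

```python
import heapq, itertools

# Coverage-guided exploration sketch: states of a small, invented transition
# system are expanded best-first, scoring successors by how many of their
# outgoing branches are still uncovered.

def successors(state):
    """Successors as (branch_id, next_state); branches mimic guarded transitions."""
    x, y = state
    succs = []
    if x < 3:
        succs.append(("x_small", (x + 1, y)))
    else:
        succs.append(("x_large", (0, y)))
    if y % 2 == 0:
        succs.append(("y_even", (x, y + 1)))
    else:
        succs.append(("y_odd", (x, (y + 3) % 10)))
    return succs

def explore(start, budget=50):
    covered, seen = set(), {start}
    tie = itertools.count()                        # tie-breaker for the heap
    frontier = [(0, next(tie), start)]
    while frontier and budget:
        _, _, state = heapq.heappop(frontier)
        budget -= 1
        for branch, nxt in successors(state):
            covered.add(branch)
            if nxt not in seen:
                seen.add(nxt)
                # More uncovered outgoing branches -> higher priority (lower score).
                score = -sum(b not in covered for b, _ in successors(nxt))
                heapq.heappush(frontier, (score, next(tie), nxt))
    return covered, len(seen)

branches, visited = explore((0, 0))
print(f"branches covered: {sorted(branches)}, states visited: {visited}")
```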

01 Jan 2001
TL;DR: D3, the third generation of DARWIN, is currently under development, extending DARWIN to handle other data domains and to be a distributed system in which a single login allows a user transparent access to test results from multiple servers and authority domains.
Abstract: DARWIN is a NASA-developed, Internet-based system for enabling aerospace researchers to securely and remotely access and collaborate on the analysis of aerospace vehicle design data, primarily the results of wind-tunnel testing and numeric (e.g., computational fluid-dynamics) model executions. DARWIN captures, stores and indexes data; manages derived knowledge (such as visualizations across multiple datasets); and provides an environment for designers to collaborate in the analysis of test results. DARWIN is an interesting application because it supports high volumes of data, integrates multiple modalities of data display (e.g., images and data visualizations), and provides non-trivial access control mechanisms. DARWIN enables collaboration by allowing not only sharing visualizations of data, but also commentary about and views of data. We are currently developing D3, the third generation of DARWIN. Earlier versions of DARWIN were characterized by browser-based interfaces and a hodge-podge of server technologies: CGI scripts, applets, PERL, and so forth. But browsers proved difficult to control, and a proliferation of computational mechanisms proved inefficient and difficult to maintain. D3 substitutes a pure-Java approach for that medley: a Java client communicates (through RMI over HTTPS) with a Java-based application server. Code on the server accesses information from JDBC databases, distributed LDAP security services, and a collaborative information system (CORE, a successor of PostDoc). D3 is a three-tier architecture, but unlike "E-commerce" applications, the data usage pattern suggests different strategies than traditional Enterprise Java Beans: we need to move volumes of related data together, considerable processing happens on the client, and the "business logic" on the server side is primarily data integration and collaboration. With D3, we are extending DARWIN to handle other data domains and to be a distributed system, where a single login allows a user transparent access to test results from multiple servers and authority domains.

Proceedings Article
01 Jan 2001
TL;DR: In this paper, the authors extend the notions of reachability and mutual exclusion to more general notions of time and action, and show that the general rules for mutual exclusion reasoning take on a remarkably clean and simple form.
Abstract: In recent years, Graphplan style reachability analysis and mutual exclusion reasoning have been used in many high performance planning systems. While numerous refinements and extensions have been developed, the basic plan graph structure and reasoning mechanisms used in these systems are tied to the very simple STRIPS model of action. In 1999, Smith and Weld generalized the Graphplan methods for reachability and mutex reasoning to allow actions to have differing durations. However, the representation of actions still has some severe limitations that prevent the use of these techniques for many real-world planning systems. In this paper, we 1) separate the logic of reachability from the particular representation and inference methods used in Graphplan, and 2) extend the notions of reachability and mutual exclusion to more general notions of time and action. As it turns out, the general rules for mutual exclusion reasoning take on a remarkably clean and simple form. However, practical instantiations of them turn out to be messy, and require that we make representation and reasoning choices.
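For reference, the classical STRIPS-level mutex rules that the paper generalizes can be stated compactly; the sketch below computes them for a tiny invented action set (inconsistent effects, interference, and competing needs over given proposition mutexes).

```python
from itertools import combinations

# Classical STRIPS-level Graphplan mutex rules at a single action level, as a
# baseline for the generalizations discussed in the paper.  Each action is
# (name, preconditions, add effects, delete effects); prop_mutexes would carry
# mutually exclusive proposition pairs over from the previous level.

actions = [
    ("load(c,truck)", {"at(c,depot)", "at(truck,depot)"}, {"in(c,truck)"}, {"at(c,depot)"}),
    ("drive(truck)",  {"at(truck,depot)"},                {"at(truck,dest)"}, {"at(truck,depot)"}),
    ("noop_at(c)",    {"at(c,depot)"},                    {"at(c,depot)"},    set()),
]
prop_mutexes = set()        # empty at the first level of this tiny example

def mutex(a1, a2, prop_mutexes):
    _, pre1, add1, del1 = a1
    _, pre2, add2, del2 = a2
    inconsistent_effects = (add1 & del2) or (add2 & del1)
    interference = (del1 & pre2) or (del2 & pre1)
    competing_needs = any(frozenset((p, q)) in prop_mutexes
                          for p in pre1 for q in pre2)
    return bool(inconsistent_effects or interference or competing_needs)

for a1, a2 in combinations(actions, 2):
    if mutex(a1, a2, prop_mutexes):
        print("mutex:", a1[0], "<->", a2[0])
# Prints the two pairs that interfere: load/drive and load/noop_at(c).
```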

01 Oct 2001
TL;DR: In this paper, the camera parameters are estimated by minimizing the error between the observed and rendered images as a function of camera parameters, and the surface model parameters are then estimated using a very small set of matched features.
Abstract: The conventional approach to shape from stereo is via feature extraction and correspondences. This results in estimates of the camera parameters and a typically sparse estimate of the surface. Given a set of calibrated images, a dense surface reconstruction is possible by minimizing the error between the observed image and the image rendered from the estimated surface with respect to the surface model parameters. Given an uncalibrated image and an estimated surface, the camera parameters can be estimated by minimizing the error between the observed and rendered images as a function of the camera parameters. We use a very small dense set of matched features to provide camera parameter estimates for the initial dense surface estimate. We then re-estimate the camera parameters as described above, and then re-estimate the surface. This process is iterated. While it cannot be proven to converge, we have found that around three iterations results in excellent surface and camera parameter estimates.

07 Jan 2001
TL;DR: This paper discusses NASA applications that we are currently investigating from these perspectives, including the reuse and adaptation of software architectures and components that must be incorporated in software development within and across missions.
Abstract: Software development for NASA missions is a particularly challenging task. Missions are extremely ambitious scientifically, have very strict time frames, and must be accomplished with a maximum degree of reliability. Verification technologies must therefore be pushed far beyond their current capabilities. Moreover, reuse and adaptation of software architectures and components must be incorporated in software development within and across missions. This paper discusses NASA applications that we are currently investigating from these perspectives.

01 Jan 2001
TL;DR: A set of algorithms is presented which perform reasonable synthesis and transformations between different UML notations (sequence diagrams, Object Constraint Language (OCL) constraints, statecharts) and the following transformations are discussed: Statechart synthesis, introduction of hierarchy, consistency of modifications, and "design-debugging".
Abstract: The Unified Modeling Language (UML) is gaining wide popularity for the design of object-oriented systems. UML combines various object-oriented graphical design notations under one common framework. A major factor for the broad acceptance of UML is that it can be conveniently used in a highly iterative, Use Case (or scenario-based) process (although the process is not a part of UML). Here, the (pre-) requirements for the software are specified rather informally as Use Cases and a set of scenarios. A scenario can be seen as an individual trace of a software artifact. Besides first sketches of a class diagram to illustrate the static system breakdown, scenarios are a favorite way of communication with the customer, because scenarios describe concrete interactions between entities and are thus easy to understand. Scenarios with a high level of detail are often expressed as sequence diagrams. Later in the design and implementation stage (elaboration and implementation phases), a design of the system's behavior is often developed as a set of statecharts. From there (and the full-fledged class diagram), actual code development is started. Current commercial UML tools support this phase by providing code generators for class diagrams and statecharts. In practice, it can be observed that the transition from requirements to design to code is a highly iterative process. In this talk, a set of algorithms is presented which perform reasonable synthesis and transformations between different UML notations (sequence diagrams, Object Constraint Language (OCL) constraints, statecharts). More specifically, we will discuss the following transformations: Statechart synthesis, introduction of hierarchy, consistency of modifications, and "design-debugging".
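A much-simplified sketch of scenario merging is shown below: scenarios read off sequence diagrams are folded into a prefix-tree state machine. The paper's algorithms go well beyond this (hierarchy introduction, OCL constraints, consistency of modifications), so this only conveys the flavor.

```python
# Much-simplified statechart-synthesis sketch: each scenario is a sequence of
# events, and the scenarios are merged into a prefix-tree automaton.  The event
# names and the merging rule are assumptions made for this illustration.

def synthesize(scenarios):
    transitions = {}                 # (state, event) -> state
    next_state = [1]                 # state 0 is the initial state

    def target(state, event):
        key = (state, event)
        if key not in transitions:   # new behavior: introduce a fresh state
            transitions[key] = next_state[0]
            next_state[0] += 1
        return transitions[key]

    for scenario in scenarios:
        state = 0
        for event in scenario:
            state = target(state, event)
    return transitions

scenarios = [
    ["insertCard", "enterPIN", "withdraw", "ejectCard"],
    ["insertCard", "enterPIN", "checkBalance", "ejectCard"],
    ["insertCard", "cancel", "ejectCard"],
]
for (state, event), nxt in sorted(synthesize(scenarios).items()):
    print(f"s{state} --{event}--> s{nxt}")
```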

01 Dec 2001
TL;DR: The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions, and collaborates with NASA scientists to apply IT research to a variety of NASA application domains.
Abstract: The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions:
1. Automated Reasoning for Autonomous Systems - Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth.
2. Human-Centered Computing - Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities.
3. High Performance Computing and Networking - Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution.
In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

01 Jan 2001
TL;DR: It is suggested that AOP systems can be analyzed with respect to three critical dimensions: the kinds of quantifications allowed, the nature of the interactions that can be asserted, and the mechanism for combining base-level actions with asserted actions.
Abstract: We propose that the distinguishing characteristic of Aspect-Oriented Programming (AOP) languages is that they allow programming by making quantified programmatic assertions over programs that lack local notation indicating the invocation of these assertions. This suggests that AOP systems can be analyzed with respect to three critical dimensions: the kinds of quantifications allowed, the nature of the interactions that can be asserted, and the mechanism for combining base-level actions with asserted actions. Consequences of this perspective are the recognition that certain systems are not AOP and that some mechanisms are metabolism: they are sufficiently expressive to allow straightforwardly programming an AOP system within them.

Proceedings Article
01 Jan 2001
TL;DR: Experiments conducted on the Wall Street Journal corpus with a version of the AT&T 1995 FST LVSR system with part-of-speech (POS) trigram sequences show that using only POS leads to a loss in performance.
Abstract: The word graphs produced by a large vocabulary speech recognition system usually contain a path labelled with the correct utterance, but this is not always the highest scoring path. Boosting increases the probability of words which occur often in the word graph, which are in some sense robust. Adding syntactic information allows rescoring of arc probabilities with the possibility that more grammatical word sequences will also be the correct ones. A theory is developed which allows general probabilistic syntactic models to be used to rescore word lattices. Experiments conducted on the Wall Street Journal (WSJ) corpus with a version of the AT&T 1995 FST LVSR system with part-of-speech (POS) trigram sequences show that using only POS leads to a loss in performance. Boosting alone provides an improvement in performance which is not statistically significant. Cascading the two methods, boosting first and then using syntactic information, improves performance 4.5% relative on a large portion of the 1995 DARPA test set.
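The miniature sketch below, with an invented lattice, tags, and probabilities (and bigrams in place of trigrams), illustrates the mechanics of boosting arc scores by in-lattice word frequency and then rescoring complete paths with a syntactic model; it is not the AT&T system or the experimental setup used in the paper.

```python
# Toy word-lattice rescoring: arc scores are boosted by how often a word occurs
# anywhere in the lattice, and complete paths are then rescored with a tiny
# part-of-speech model.  All data and weights here are invented for this sketch.

# Lattice as a DAG: node -> list of (word, acoustic_log_score, next_node).
lattice = {
    0: [("recognize", -1.0, 1), ("wreck a nice", -0.8, 1)],
    1: [("speech", -0.5, 2), ("beach", -0.6, 2)],
    2: [],
}
POS = {"recognize": "VB", "wreck a nice": "UH", "speech": "NN", "beach": "NN"}
POS_LM = {("<s>", "VB"): -0.2, ("VB", "NN"): -0.1, ("NN", "</s>"): -0.1}

# Boosting: count how often each word occurs in the whole word graph.
counts = {}
for arcs in lattice.values():
    for word, _, _ in arcs:
        counts[word] = counts.get(word, 0) + 1

def paths(node, prefix=(), score=0.0):
    if not lattice[node]:
        yield list(prefix), score
        return
    for word, acoustic, nxt in lattice[node]:
        boosted = acoustic + 0.1 * counts[word]          # boosting term
        yield from paths(nxt, prefix + (word,), score + boosted)

def syntactic_score(words):
    tags = ["<s>"] + [POS[w] for w in words] + ["</s>"]
    return sum(POS_LM.get(bigram, -2.0) for bigram in zip(tags, tags[1:]))

best = max(paths(0), key=lambda p: p[1] + syntactic_score(p[0]))
print("best path:", best[0])   # -> ['recognize', 'speech']: syntax overrides acoustics
```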