scispace - formally typeset
Institution

Research Institute for Advanced Computer Science

FacilityMountain View, California, United States
About: Research Institute for Advanced Computer Science is a facility organization based in Mountain View, California, United States. It is known for research contributions in the topics: Model checking & Parallel algorithm. The organization has 180 authors who have published 418 publications receiving 17072 citations.


Papers
Book ChapterDOI
13 Jul 1991
TL;DR: In this paper, the authors present a plausible reasoning system based on the belief calculus of subjective Bayesian probability, which allows reasoning about defaults, likelihood, necessity and possibility in a manner similar to the earlier work of Adams.
Abstract: This paper presents a plausible reasoning system to illustrate some broad issues in knowledge representation: dualities between different reasoning forms, the difficulty of unifying complementary reasoning styles, and the approximate nature of plausible reasoning. These issues have a common underlying theme: there should be an underlying belief calculus of which the many different reasoning forms are special cases, sometimes approximate. The system presented allows reasoning about defaults, likelihood, necessity and possibility in a manner similar to the earlier work of Adams. The system is based on the belief calculus of subjective Bayesian probability which itself is based on a few simple assumptions about how belief should be manipulated. Approximations, semantics, consistency and consequence results are presented for the system. While this puts these often discussed plausible reasoning forms on a probabilistic footing, useful application to practical problems remains an issue.

3 citations
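The paper's idea of treating defaults as statements of high conditional probability (in the spirit of Adams' earlier work) can be illustrated with a tiny sketch. This is not the paper's system; the threshold, the `believes` helper, and the numeric probabilities are all illustrative assumptions.

```python
# Hypothetical sketch: a default like "birds fly" is read as
# P(fly | bird) being close to 1, and a more specific reference
# class (penguins) can carry its own, overriding probability.

def believes(cond_prob, threshold=0.9):
    """Accept a default when its conditional probability clears a threshold."""
    return cond_prob >= threshold

# Illustrative numbers, not from the paper.
p_fly_given_bird = 0.95      # default: birds (generally) fly
p_fly_given_penguin = 0.01   # specific subclass overrides the default

print(believes(p_fly_given_bird))     # default accepted for a generic bird
print(believes(p_fly_given_penguin))  # default retracted for a penguin
```

The point of the sketch is the duality the abstract mentions: the same belief calculus (conditional probability) underlies both the default and its exception, with the plausible-reasoning forms appearing as threshold approximations.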

01 Mar 1989
TL;DR: In November 1988, a worm program was reported to have infected several thousand UNIX-operated Sun workstations and VAX computers attached to the Research Internet, seriously disrupting service for several days but damaging no files.
Abstract: In November 1988 a worm program invaded several thousand UNIX-operated Sun workstations and VAX computers attached to the Research Internet, seriously disrupting service for several days but damaging no files. An analysis of the worm's decompiled code revealed a battery of attacks by a knowledgeable insider, and demonstrated a number of security weaknesses. The attack occurred in an open network, and little can be inferred about the vulnerabilities of closed networks used for critical operations. The attack showed that password protection procedures need review and strengthening. It showed that sets of mutually trusting computers need to be carefully controlled. Sharp public reaction crystallized into a demand for user awareness and accountability in a networked world.

3 citations

Book
31 Jul 2013
TL;DR: In this paper, the state of practices of design reviews at NASA and research into what can be done to improve peer review practices is described, with the goal of identifying best practices and lessons learned from NASA's experience, supported by academic research and methodologies to ultimately improve the process.
Abstract: This report describes the state of practices of design reviews at NASA and research into what can be done to improve peer review practices. There are many types of reviews at NASA: required and not, formalized and informal, programmatic and technical. Standing project formal reviews such as the Preliminary Design Review and Critical Design Review are a required part of every project and mission development. However, the technical, engineering peer reviews that support teams' work on such projects are informal, sometimes ad hoc, and inconsistent across the organization. The goal of this work is to identify best practices and lessons learned from NASA's experience, supported by academic research and methodologies, to ultimately improve the process. This research has determined that the organization, composition, scope, and approach of the reviews impact their success. Failure Modes and Effects Analysis (FMEA) can identify key areas of concern before or in the reviews. Product definition tools like the Project Priority Matrix, engineering-focused Customer Value Chain Analysis (CVCA), and project or system-based Quality Function Deployment (QFD) help prioritize resources in reviews. The use of information technology and structured design methodologies can strengthen the engineering peer review process to help NASA work towards error-proofing the design process.

3 citations
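The FMEA step the abstract mentions is conventionally a Risk Priority Number ranking, which can be sketched in a few lines. The failure modes and scores below are made-up illustrations, not data from the NASA report.

```python
# Hypothetical FMEA sketch: Risk Priority Number (RPN) =
# severity x occurrence x detection, used to rank which failure
# modes deserve attention first in a peer review.

def rpn(severity, occurrence, detection):
    """Each factor is conventionally scored 1-10; higher RPN = higher priority."""
    return severity * occurrence * detection

# Illustrative failure modes and scores (not from the report).
failure_modes = {
    "thruster valve sticks": rpn(9, 3, 6),
    "telemetry dropout":     rpn(5, 4, 2),
    "connector corrosion":   rpn(7, 2, 8),
}

# Review the highest-risk items first.
for mode, score in sorted(failure_modes.items(), key=lambda kv: -kv[1]):
    print(mode, score)
```

Ranking by RPN is what lets a review team "identify key areas of concern before or in the reviews," as the abstract puts it, rather than spreading reviewer attention evenly.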

Proceedings Article
01 Jan 2006
TL;DR: The rules of the Cooper-Harper rating scheme are formulated as fuzzy rules with performance, control, and compensation as the antecedents, and pilot rating as the consequent and used to analyze the effectiveness of the aircraft controller.
Abstract: The Cooper-Harper rating of Aircraft Handling Qualities has been adopted as a standard for measuring the performance of aircraft since it was introduced in 1966. Aircraft performance, ability to control the aircraft, and the degree of pilot compensation needed are three major key factors used in deciding the aircraft handling qualities in the Cooper-Harper rating. We formulate the Cooper-Harper rating scheme as a fuzzy rule-based system and use it to analyze the effectiveness of the aircraft controller. The automatic estimate of the system-level handling quality provides valuable up-to-date information for diagnostics and vehicle health management. Analyzing the performance of a controller requires a set of concise design requirements and performance criteria. In the case of control systems for a piloted aircraft, generally applicable quantitative design criteria are difficult to obtain. The reason for this is that the ultimate evaluation of a human-operated control system is necessarily subjective and, with aircraft, the pilot evaluates the aircraft in different ways depending on the type of the aircraft and the phase of flight. In most aerospace applications (e.g., for flight control systems), performance assessment is carried out in terms of handling qualities. Handling qualities may be defined as those dynamic and static properties of a vehicle that permit the pilot to fully exploit its performance in a variety of missions and roles. Traditionally, handling quality is measured using the Cooper-Harper rating and done subjectively by the human pilot. In this work, we have formulated the rules of the Cooper-Harper rating scheme as fuzzy rules with performance, control, and compensation as the antecedents, and pilot rating as the consequent.
Appropriate direct measurements on the controller are related to the fuzzy Cooper-Harper rating system: a stability measurement like the rate of change of the cost function can be used as an indicator if the aircraft is under control; the tracking error is a good measurement for performance needed in the rating scheme. Finally, the change of the control amount or the output of a confidence tool, which has been developed by the authors, can be used as an indication of pilot compensation. We use a number of known aircraft flight scenarios with known pilot ratings to calibrate our fuzzy membership functions. These include normal flight conditions and situations in which partial or complete failure of tail, aileron, engine, or throttle occurs.

3 citations
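The fuzzy-rule formulation described above can be sketched minimally: triangular membership functions over measurements (tracking error as the performance antecedent, a compensation measure), rule strengths combined with min as fuzzy AND, and a weighted-average defuzzification onto the 1-10 Cooper-Harper scale. The membership shapes, rule set, and rating anchors here are illustrative assumptions, not the paper's calibrated system.

```python
# Hypothetical two-rule sketch of a fuzzy Cooper-Harper estimator.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pilot_rating(tracking_error, compensation):
    # Antecedent memberships (illustrative shapes): small values are good.
    good_perf = tri(tracking_error, -0.1, 0.0, 0.5)
    low_comp = tri(compensation, -0.1, 0.0, 0.5)
    # Rule firing strengths, with min as fuzzy AND:
    satisfactory = min(good_perf, low_comp)          # maps near CH 1-3
    unacceptable = min(1 - good_perf, 1 - low_comp)  # maps near CH 7-9
    # Weighted-average defuzzification over two anchor ratings (2 and 8).
    total = satisfactory + unacceptable
    return (satisfactory * 2 + unacceptable * 8) / max(total, 1e-9)

# Low tracking error and low compensation -> a good (low) rating;
# large error with heavy compensation -> a poor (high) rating.
print(pilot_rating(0.05, 0.05))
print(pilot_rating(0.45, 0.45))
```

In the paper's terms, the calibration step would tune the membership functions against flight scenarios with known pilot ratings; here the shapes are simply asserted.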

Proceedings ArticleDOI
23 May 1994
TL;DR: An experimental demonstration of the effectiveness of certain routing heuristics for adaptive, offline communication routing for a SIMD processor grid using large data sets drawn from supercomputing applications instead of an analytic model of communication load.
Abstract: Unstructured grids lead to unstructured communication on distributed memory parallel computers, a problem that has been considered difficult. We consider adaptive, offline communication routing for a SIMD processor grid. Our approach is empirical. We use large data sets drawn from supercomputing applications instead of an analytic model of communication load. The chief contribution of this paper is an experimental demonstration of the effectiveness of certain routing heuristics. Our routing algorithm is adaptive, nonminimal, and is generally designed to exploit locality. We have a parallel implementation of the router, and we report on its performance.

3 citations
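The adaptivity idea in the abstract (choosing routes based on observed load rather than a fixed dimension order) can be illustrated with a small sketch. This is a simplified, minimal-path variant, whereas the paper's router is nonminimal; the grid, traffic, and tie-breaking are all made up for illustration.

```python
# Hypothetical sketch: route a message on a 2D processor grid one hop
# at a time, picking whichever productive direction (toward the
# destination) currently has the lighter link load.

from collections import defaultdict

def route(src, dst, load):
    """Greedy load-adaptive routing among productive neighbors."""
    x, y = src
    path = [src]
    while (x, y) != dst:
        steps = []
        if x != dst[0]:
            steps.append((x + (1 if dst[0] > x else -1), y))
        if y != dst[1]:
            steps.append((x, y + (1 if dst[1] > y else -1)))
        # Prefer the productive neighbor whose link is least loaded.
        nxt = min(steps, key=lambda n: load[((x, y), n)])
        load[((x, y), nxt)] += 1   # record the traffic on that link
        x, y = nxt
        path.append(nxt)
    return path

load = defaultdict(int)
p = route((0, 0), (2, 2), load)
print(len(p) - 1)  # hop count of the chosen route
```

An offline router in the paper's setting would run this kind of heuristic over a whole recorded communication pattern before execution; a nonminimal version would also allow detour steps away from the destination to escape congested regions.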


Authors

Showing 15 of 180 authors

Name                 H-index   Papers   Citations
Tony F. Chan         82        437      48083
Hanan Samet          75        369      25388
Michael Fisher       73        636      18535
Mikhail J. Atallah   63        330      14019
Peter J. Denning     57        397      21740
Grigore Rosu         54        291      10222
Robert Schreiber     49        182      12755
Ronen I. Brafman     48        180      9995
John R. Gilbert      47        130      8609
Neil D. Sandham      47        263      8112
Willem Visser        42        133      7978
Michael J. Flynn     41        250      9754
Rupak Biswas         41        173      9962
Matt Bishop          40        262      7251
Wray Buntine         40        207      8302
Network Information
Related Institutions (5)
Hewlett-Packard
59.8K papers, 1.4M citations

82% related

Google
39.8K papers, 2.1M citations

82% related

Microsoft
86.9K papers, 4.1M citations

81% related

Carnegie Mellon University
104.3K papers, 5.9M citations

81% related

Facebook
10.9K papers, 570.1K citations

80% related

Performance Metrics
No. of papers from the Institution in previous years
Year   Papers
2021   14
2020   11
2019   6
2018   4
2017   2
2015   2