scispace - formally typeset
Institution

Computer Resources International

About: Computer Resources International is known for research contributions in the topics Expert system and Decision support system. The organization has 39 authors who have published 59 publications receiving 2,651 citations.

Papers published on a yearly basis

Papers
Book
01 Jan 1993
TL;DR: This book covers partial evaluation of flow chart languages, first-order functional languages, the lambda calculus, Prolog, Scheme (including Similix, a partial evaluator for a subset of Scheme, and a self-applicable Scheme specializer), and C, together with applications and termination of partial evaluation and a guide to the literature.
Abstract: Contents: functions, types and expressions; programming languages and their operational semantics; compilation; partial evaluation of a flow chart language; partial evaluation of a first-order functional language; the view from Olympus; partial evaluation of the lambda calculus; partial evaluation of Prolog; aspects of Similix, a partial evaluator for a subset of Scheme; partial evaluation of C; applications of partial evaluation; termination of partial evaluation; program analysis; more general program transformation; a guide to the literature; the self-applicable Scheme specializer.
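The core idea of partial evaluation can be illustrated with a small Python sketch (this example and the name `specialize_power` are illustrative, not from the book): given a program with one input known ("static") and one unknown ("dynamic"), a specializer consumes the static input now and emits a residual program over the dynamic input alone, here by unfolding a loop.

```python
def power(base, exp):
    """General two-argument power function: both inputs are dynamic."""
    result = 1
    for _ in range(exp):
        result *= base
    return result

def specialize_power(exp):
    """A toy specializer: the static input `exp` is consumed at
    specialization time, and the loop is unfolded into residual source
    code that depends only on the dynamic input `base`."""
    body = " * ".join(["base"] * exp) if exp > 0 else "1"
    src = f"def residual(base):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)  # compile the residual program
    return namespace["residual"]

# specialize_power(3) builds: def residual(base): return base * base * base
power_3 = specialize_power(3)
```

A self-applicable specializer, as in the book's Scheme chapters, is one capable of specializing its own text, which is what enables the automatic generation of compilers from interpreters.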

1,549 citations

Journal ArticleDOI
01 Feb 1995
TL;DR: The notion of aggregation is made primitive to the logic, and strength mappings from sets of arguments to one of a number of possible dictionaries are defined, providing a uniform framework for reasoning under uncertainty.
Abstract: We present the syntax and proof theory of a logic of argumentation, LA. We also outline the development of a category theoretic semantics for LA. LA is the core of a proof theoretic model for reasoning under uncertainty. In this logic, propositions are labelled with a representation of the arguments which support their validity. Arguments may then be aggregated to collect more information about the potential validity of the propositions of interest. We make the notion of aggregation primitive to the logic, and then define strength mappings from sets of arguments to one of a number of possible dictionaries. This provides a uniform framework which incorporates a number of numerical and symbolic techniques for assigning subjective confidences to propositions on the basis of their supporting arguments. These aggregation techniques are also described, with examples.
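The aggregation idea can be sketched in a few lines of Python (a hypothetical illustration: the names and the particular strength mappings below are not taken from the paper's LA formalism): a proposition is labelled with its supporting arguments, and a strength mapping collapses their subjective confidences into a single value from a chosen dictionary.

```python
from typing import Callable, List, Tuple

# An argument here is (grounds, subjective confidence) - an assumption
# of this sketch, not the paper's representation.
Argument = Tuple[str, float]

def probabilistic_sum(strengths: List[float]) -> float:
    """One candidate numerical dictionary: combine confidences as if
    the arguments were independent items of evidence."""
    combined = 0.0
    for s in strengths:
        combined = combined + s - combined * s
    return combined

def aggregate(arguments: List[Argument],
              mapping: Callable[[List[float]], float]) -> float:
    """Collapse a proposition's supporting arguments through a strength
    mapping into a single confidence value."""
    return mapping([strength for _, strength in arguments])

support = [("sensor reading", 0.6), ("expert judgement", 0.5)]
combined = aggregate(support, probabilistic_sum)   # 0.6 + 0.5 - 0.3 = 0.8
strongest = aggregate(support, max)                # take the single best argument
```

Swapping the mapping (probabilistic sum, maximum, or a symbolic lattice of confidence labels) changes the dictionary without changing the underlying argument structure, which is the uniformity the abstract describes.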

293 citations

Journal ArticleDOI
TL;DR: A logical set of phenotypes is developed and compared with established "human error" taxonomies as well as with the operational categories developed in the field of human reliability analysis, and the trade-off between precision and meaningfulness is discussed.
Abstract: The study of human actions with unwanted consequences, in this paper referred to as human erroneous actions, generally suffers from inadequate operational taxonomies. The main reason for this is the lack of a clear distinction between manifestations and causes. The failure to make this distinction is due to the reliance on subjective evidence which unavoidably mixes manifestations and causes. The paper proposes a clear distinction between the phenotypes (manifestations) and the genotypes (causes) of erroneous actions. A logical set of phenotypes is developed and compared with the established "human error" taxonomies as well as with the operational categories which have been developed in the field of human reliability analysis. The principles for applying the set of phenotypes as practical classification criteria are developed and described. A further illustration is given by the report of an action monitoring system (RESQ) which has been implemented as part of a larger set of operator support systems and which shows the viability of the concepts. The paper concludes by discussing the principal issues of error detection, in particular the trade-off between precision and meaningfulness.
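To make the phenotype idea concrete, here is a hypothetical sketch (the detection rules are a deliberate simplification; only the labels follow the erroneous-action literature): a phenotype is classified purely from the observable mismatch between the planned and the performed action sequence, with no reference to causes.

```python
def classify_phenotype(planned, performed):
    """Classify an erroneous action by its manifestation alone.
    A toy rule set over action sequences; real taxonomies are richer."""
    if performed == planned:
        return "no error"
    # The same step executed twice in succession.
    if any(a == b for a, b in zip(performed, performed[1:])):
        return "repetition"
    # Same steps, wrong order.
    if len(performed) == len(planned) and sorted(performed) == sorted(planned):
        return "reversal"
    # A planned step was skipped.
    if len(performed) < len(planned) and all(s in planned for s in performed):
        return "omission"
    # A step foreign to the plan appeared.
    if any(s not in planned for s in performed):
        return "intrusion"
    return "unclassified"
```

Note that nothing here asks *why* the action went wrong; a genotype analysis would need a separate model of the operator and context, which is exactly the manifestation/cause distinction the paper argues for.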

188 citations

Journal ArticleDOI
TL;DR: This paper describes one approach to providing an independent cognitive description of complex situations that can be used to understand the sources of both good and poor performance, i.e. the cognitive problems to be solved or challenges to be met.
Abstract: Tool builders have focused, not improperly, on tool building--how to build better performing machine problem-solvers, where the implicit model is a human expert solving a problem in isolation. A critical task for the designer working in this paradigm is then to collect human knowledge for computerization in the stand-alone machine problem-solver. But tool use involves more. Building systems that are "good" problem-solvers in isolation does not guarantee high performance in actual work contexts, where the performance of the joint person-machine system is the relevant criterion. The key to the effective application of computational technology is to conceive, model, design, and evaluate the joint human-machine cognitive system (Hollnagel & Woods, 1983). As with Gestalt principles in perception, a decision system is not merely the sum of its parts, human and machine: the configuration or organization of the human and machine components is a critical determinant of the performance of the system as a whole (e.g. Sorkin & Woods, 1985). The joint cognitive system paradigm (Woods, 1986; Woods, Roth & Bennett, in press) demands a problem-driven, rather than technology-driven, approach, where the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem-solver. In this paper we describe an approach to understanding the cognitive activities performed by joint human-machine cognitive systems. The real impediment to effective knowledge acquisition is the lack of an adequate language to describe cognitive activities in particular domains: what are the cognitive implications of an application's task demands and of the aids and interfaces available to the people in the system, and how do people behave and perform in the cognitive situations defined by these demands and tools?
Because this independent cognitive description has been missing, an uneasy mixture of other types of description of a complex situation has been substituted: descriptions in terms of the application itself, of the implementation technology of the interfaces/aids, of the user's physical activities, or of user psychometrics. We describe one approach to providing an independent cognitive description of complex situations that can be used to understand the sources of both good and poor performance, i.e. the cognitive problems to be solved or challenges to be met.

138 citations

Journal ArticleDOI
TL;DR: A catalogue of "things the authors do not know" about Intelligent Decision Support Systems is described, covering the design of artificial reasoning mechanisms, the structure and representation of knowledge, and the use of information across the man-machine interface.
Abstract: There are many formal theories of decision making, both for decision making as a whole and for its separate aspects. Few of these, however, are sufficiently developed to serve as a basis for actually designing decision support systems, because they generally consider decision making under idealised rather than real circumstances and hence cope with only part of the complexity. Some of the unsolved problems concern the design of artificial reasoning mechanisms, the structure and representation of knowledge, and the use of information across the man-machine interface. This catalogue of “things we do not know” about Intelligent Decision Support Systems is described in the three main sections of this paper. The final section discusses the problems of validating the function of an artificial reasoning system, since validation is an important factor in determining both the applicability and the acceptability of such systems.

62 citations


Network Information
Related Institutions (5)
Tampere University of Technology
19.7K papers, 431.7K citations

71% related

Technical University of Madrid
34.7K papers, 634K citations

71% related

Edith Cowan University
13.5K papers, 339.5K citations

70% related

Mitre Corporation
6K papers, 124.8K citations

70% related

University of Canberra
10K papers, 246.2K citations

70% related

Performance Metrics
No. of papers from the Institution in previous years
Year    Papers
2008    1
1995    1
1994    5
1993    8
1992    6
1991    11