
Showing papers by "Computer Resources International published in 1987"


Journal ArticleDOI
TL;DR: This paper describes one approach to providing an independent cognitive description of complex situations that can be used to understand the sources of both good and poor performance, i.e., the cognitive problems to be solved or challenges to be met.
Abstract: Tool builders have focused, not improperly, on tool building: how to build better-performing machine problem-solvers, where the implicit model is a human expert solving a problem in isolation. A critical task for the designer working in this paradigm is then to collect human knowledge for computerization in the stand-alone machine problem-solver. But tool use involves more. Building systems that are "good" problem-solvers in isolation does not guarantee high performance in actual work contexts, where the performance of the joint person-machine system is the relevant criterion. The key to the effective application of computational technology is to conceive, model, design, and evaluate the joint human-machine cognitive system (Hollnagel & Woods, 1983). Like Gestalt principles in perception, a decision system is not merely the sum of its parts, human and machine. The configuration or organization of the human and machine components is a critical determinant of the performance of the system as a whole (e.g. Sorkin & Woods, 1985). The joint cognitive system paradigm (Woods, 1986; Woods, Roth & Bennett, in press) demands a problem-driven, rather than technology-driven, approach in which the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem-solver. In this paper we describe an approach to understanding the cognitive activities performed by joint human-machine cognitive systems. The real impediment to effective knowledge acquisition is the lack of an adequate language for describing cognitive activities in particular domains: what are the cognitive implications of an application's task demands and of the aids and interfaces available to the people in the system, and how do people perform in the cognitive situations defined by these demands and tools? Because this independent cognitive description has been missing, an uneasy mixture of other types of description of a complex situation has been substituted: descriptions in terms of the application itself, of the implementation technology of the interfaces and aids, of the user's physical activities, or of user psychometrics. We describe one approach to providing an independent cognitive description of complex situations that can be used to understand the sources of both good and poor performance, i.e., the cognitive problems to be solved or challenges to be met.

138 citations


Journal ArticleDOI
TL;DR: A catalogue of "things the authors do not know" about Intelligent Decision Support Systems is described, covering the design of artificial reasoning mechanisms, the structure and representation of knowledge, and the use of information across the man-machine interface.
Abstract: There are many formal theories of decision making, both for decision making seen as a whole and for its separate aspects. Few of these, however, are sufficiently developed to serve as a basis for actually designing decision support systems, because they generally consider decision making under idealised rather than real circumstances and hence cope with only part of the complexity. Some of the unsolved problems concern the design of artificial reasoning mechanisms, the structure and representation of knowledge, and the use of information across the man-machine interface. This catalogue of “things we do not know” about Intelligent Decision Support Systems is described in the three main sections of this paper. The final section discusses the problems of validating the function of artificial reasoning systems, since this is an important factor in determining both their applicability and their acceptability.

62 citations


Journal ArticleDOI
TL;DR: A study of the design and development of knowledge-based decision support aids.
Abstract: A study of the design and development of knowledge-based decision support aids.

15 citations


Journal ArticleDOI
TL;DR: An ESPRIT project researching intelligent help systems has implemented an easy-to-use user support system, called EuroHelp, which aims to help both the inexperienced and more experienced user by prompts, suggestions and advice.
Abstract: User support is important. Many systems are still too complicated to use easily. Manuals, instructors and computer-aided instruction are only part of the answer. An ESPRIT project researching intelligent help systems (IHS) has implemented an easy-to-use user support system, called EuroHelp, which aims to help both the inexperienced and more experienced user by prompts, suggestions and advice. It works in a similar way to a teacher-pupil situation.

12 citations


Book ChapterDOI
01 Jan 1987
TL;DR: The present version of the DEMON model is aimed at off-line analysis of specific event sequences, and focuses on resource monitoring rather than strategy selection.
Abstract: This paper describes the basic principles of a decision monitoring model (DEMON). The background is the growing interest in the metacognitive functions in decision making that control strategy selection and resource monitoring. The purpose of the DEMON model is to monitor decision making externally in order to determine the cognitive load. This is done by modelling the metacognitive control, in particular the changes in the goal network of the decision-making system. The paper describes the rationale of the model and gives an outline of its basic functional modules. The present version of the model is aimed at off-line analysis of specific event sequences, and focuses on resource monitoring rather than strategy selection.
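The abstract gives no implementation details; purely as an illustration of what off-line analysis of an event sequence against a changing goal network might look like, the minimal sketch below replays recorded goal activations and resolutions and reports a crude load estimate (the number of concurrently active goals). All names, event types and the load measure are hypothetical and are not taken from the DEMON model itself.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical event record: each event in a recorded sequence either
# activates or resolves a goal in the decision maker's goal network.
@dataclass
class Event:
    time: float
    goal: str
    kind: str  # "activate" or "resolve" (assumed event types)

@dataclass
class LoadSample:
    time: float
    active_goals: List[str]
    load: int  # here simply the count of concurrently active goals

def monitor(events: List[Event]) -> List[LoadSample]:
    """Off-line pass over an event sequence: track changes in the goal
    network and report a crude cognitive-load estimate after each event."""
    active = set()
    trace: List[LoadSample] = []
    for ev in sorted(events, key=lambda e: e.time):
        if ev.kind == "activate":
            active.add(ev.goal)
        elif ev.kind == "resolve":
            active.discard(ev.goal)
        trace.append(LoadSample(ev.time, sorted(active), len(active)))
    return trace

if __name__ == "__main__":
    # Toy event sequence for illustration only.
    events = [
        Event(0.0, "diagnose alarm", "activate"),
        Event(1.5, "check sensor reading", "activate"),
        Event(2.0, "check sensor reading", "resolve"),
        Event(3.0, "diagnose alarm", "resolve"),
    ]
    for s in monitor(events):
        print(f"t={s.time:4.1f}  load={s.load}  active={s.active_goals}")
```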

Book ChapterDOI
01 Jan 1987
TL;DR: It is necessary to consider the implications of the increased use of expert systems for the general approach to failure analysis of information systems.
Abstract: One of the specific products of modern information technology is the expert system. Expert systems have been developed to support humans in handling the large amounts of information on which more and more tasks depend for their success. Expert systems do this by applying techniques of artificial intelligence, mainly various types of reasoning, to cases where human intelligence is insufficient, either because it is too slow or error-prone or because it is a scarce resource (cf. Wiener, 1986). Expert systems thereby more and more often become part of large information systems, in addition to being information systems themselves. It is therefore necessary to consider the implications of the increased use of expert systems for the general approach to failure analysis of information systems. There are two completely different aspects of this:
• The expert system as an information system that may itself fail, i.e., as a target.
• The expert system as an aid for the prevention and diagnosis of failures in information systems, i.e., as a tool.