
Showing papers in "International Journal of Human-computer Studies / International Journal of Man-machine Studies in 1989"


Journal ArticleDOI
TL;DR: A theoretical framework for interface design is proposed that attempts to develop a meaningful representation of the process which is not just optimised for one particular level of cognitive control, but that supports all three levels simultaneously and suggests that reliable human-system interaction will be achieved by designing interfaces which tend to minimize the potential for control interference and support recovery from errors.
Abstract: Research during recent years has revealed that human errors are not stochastic events which can be removed through improved training programs or optimal interface design. Rather, errors tend to reflect either systematic interference between various models, rules, and schemata, or the effects of the adaptive mechanisms involved in learning. In terms of design implications, these findings suggest that reliable human-system interaction will be achieved by designing interfaces which tend to minimize the potential for control interference and support recovery from errors. In other words, the focus should be on control of the effects of errors rather than on the elimination of errors per se. In this paper, we propose a theoretical framework for interface design that attempts to satisfy these objectives. The goal of our framework, called ecological interface design, is to develop a meaningful representation of the process which is not just optimised for one particular level of cognitive control, but that supports all three levels simultaneously. The paper discusses the necessary requirements for a mapping between the process and the combined action/observation surface, and analyses the resulting influence both on the interferences causing errors and on the opportunity for error recovery left to the operator. There has been a rapidly growing interest in the analysis of human error caused by technological development. The growing complexity of technical installations makes it increasingly difficult for operators to understand the system’s internal functions. At the same time, the large scale of operations necessary for competitive production makes the effects of human errors increasingly unacceptable. Naturally enough, human error analysis has become an essential part of systems design. In order to conduct such an analysis, a taxonomy suited to describing human errors is essential. The structure and dimensions of the error taxonomy, however, will depend on the aim of the analysis. Therefore, different categorisations of human errors are useful during the various stages of systems design. At least two different perspectives can be identified, each with its own unique set of requirements. One point of view is useful for predicting the effects of human error on system performance, i.e. a failure-mode-and-effect analysis. For this purpose, a taxonomy based on a model of human error mechanisms should be adopted. A second perspective for error analysis is required for identifying possible improvements in system design. In order to meet the requirements of such an analysis, an error taxonomy based on cognitive control mechanisms (Rasmussen, 1983) is more appropriate. Both types of analyses are essential to system design. The failure-mode-and-effect analysis allows the designer to identify plausible human

409 citations


Journal ArticleDOI
TL;DR: This paper shows how a browsing capability can be integrated into an intelligent text retrieval system and provides facilities for controlling the browsing and for using the information derived during browsing in more formal search strategies.
Abstract: Browsing is potentially an extremely important technique for retrieving text documents from large knowledge bases. The advantages of this technique are that users get immediate feedback from the structure of the knowledge base and exert complete control over the outcome of the search. The primary disadvantages are that it is easy to get lost in a complex network of nodes representing documents and concepts, and there is no guarantee that a browsing search will be as effective as a more conventional search. In this paper, we show how a browsing capability can be integrated into an intelligent text retrieval system. The disadvantages mentioned above are avoided by providing facilities for controlling the browsing and for using the information derived during browsing in more formal search strategies. The architecture of the text retrieval system is described and the browsing techniques are illustrated using an example session.

160 citations


Journal ArticleDOI
TL;DR: Monte Carlo experiments illustrate the effectiveness of the .632 bootstrap as an alternative technique for tree selection and error estimation, and a new incremental learning extension to CART is described.

Abstract: The CART concept induction algorithm recursively partitions the measurement space, displaying the resulting partitions as decision trees. Care, however, must be taken not to overfit the trees to the data, and CART employs cross-validation (cv) as the means by which an appropriately sized tree is selected. Although unbiased, cv estimates exhibit high variance, a troublesome characteristic, particularly for small learning sets. This paper describes Monte Carlo experiments which illustrate the effectiveness of the .632 bootstrap as an alternative technique for tree selection and error estimation. In addition, a new incremental learning extension to CART is described.
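
For readers unfamiliar with the technique, the .632 bootstrap referred to here is Efron's weighted combination of the apparent (resubstitution) error and the average out-of-bag bootstrap error. The Python sketch below is only an illustration of that general estimator under assumed settings, not the paper's experimental code; scikit-learn's DecisionTreeClassifier stands in for CART and the parameter choices are invented.

```python
# Illustrative sketch of the .632 bootstrap error estimate (Efron's weighting),
# with scikit-learn's DecisionTreeClassifier standing in for CART.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bootstrap_632_error(X, y, n_boot=200, seed=0):
    """X: 2-D numpy array of features, y: 1-D numpy array of class labels."""
    rng = np.random.default_rng(seed)
    n = len(y)

    # Apparent (resubstitution) error: train and test on the full learning set.
    full_tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    err_app = np.mean(full_tree.predict(X) != y)

    # Out-of-bag error: average error on cases left out of each bootstrap sample.
    oob_errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # sample n cases with replacement
        oob = np.setdiff1d(np.arange(n), idx)     # cases not drawn this round
        if len(oob) == 0:
            continue
        tree = DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx])
        oob_errors.append(np.mean(tree.predict(X[oob]) != y[oob]))
    err_oob = np.mean(oob_errors)

    # The .632 estimate down-weights the optimistic apparent error.
    return 0.368 * err_app + 0.632 * err_oob
```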

155 citations


Journal ArticleDOI
TL;DR: A Bayesian approximation of a belief function is defined and it is shown that combining the Bayesian approximations of belief functions is computationally less demanding than combining the belief functions themselves.

Abstract: An often mentioned obstacle for the use of Dempster-Shafer theory for the handling of uncertainty in expert systems is the computational complexity of the theory. One cause of this complexity is the fact that in Dempster-Shafer theory the evidence is represented by a belief function which is induced by a basic probability assignment, i.e. a probability measure on the powerset of possible answers to a question, and not by a probability measure on the set of possible answers to a question, as in a Bayesian approach. In this paper, we define a Bayesian approximation of a belief function and show that combining the Bayesian approximations of belief functions is computationally less demanding than combining the belief functions themselves, while in many practical applications replacing the belief functions by their Bayesian approximations will not essentially affect the result.
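
As a rough illustration of the idea (an assumed formulation, not necessarily the paper's exact definition), such a Bayesian approximation can be computed by crediting each singleton with the mass of every focal element that contains it and then renormalising:

```python
# Illustrative sketch: approximate a basic probability assignment (mass function)
# over subsets of a frame by a probability measure on the singletons.
def bayesian_approximation(masses):
    """masses: dict mapping frozenset (focal element) -> mass, masses sum to 1."""
    scores = {}
    for focal, m in masses.items():
        for element in focal:                 # credit every member of the focal set
            scores[element] = scores.get(element, 0.0) + m
    total = sum(scores.values())              # equals the sum over A of m(A) * |A|
    return {x: s / total for x, s in scores.items()}

# Example: evidence on the frame {a, b, c}
m = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.3, frozenset({"a", "b", "c"}): 0.2}
print(bayesian_approximation(m))   # combining such approximations reduces to pointwise
                                   # multiplication and renormalisation (Bayes-style)
```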

125 citations


Journal ArticleDOI
TL;DR: This paper suggests that speech input will be more beneficial when users are engaged in multiple tasks mapped onto multiple user-response modalities, and when speech is used in tasks characterized by short transactions of a highly interactive nature.
Abstract: This paper focuses on two commonly-made claims about the utility of speech input: (1) It is faster than typed input; and (2) it also increases user productivity by providing an additional response channel. These claims are investigated, both through a review of research, and through an empirical evaluation of speech input. The research review supports both claims. Further, it suggests that speech input will be more beneficial when users are engaged in multiple tasks mapped onto multiple user-response modalities, and when speech is used in tasks characterized by short transactions of a highly interactive nature. The empirical study evaluated the utility of speech input in the context of a VLSI chip design package, and compared speech to typed, full-word input, single keypresses, and mouse clicks. Results supported the benefits of speech input over typed, full-word commands, and to a lesser extent, single keypresses. For the restricted set of commands that could be accomplished with mouse clicks, speech input and mouse clicks were equally efficient. These results are interpreted in terms of a general “ease vs expressiveness” guideline for assigning modalities to tasks in a user interface.

114 citations


Journal ArticleDOI
H. R. Hartson, D. Hix
TL;DR: The term UIMS (user interface management system), unknown only a few years ago, now conjures up images of icons and objects, windows and words that comprise the human-computer interface.

Abstract: The term UIMS, unknown only a few years ago, now conjures up images of icons and objects, windows and words that comprise the human-computer interface. A UIMS is an interactive system composed of high-level tools that support production and execution of human-computer interfaces. UIMS have become a major topic of both academic and trade journal articles, conference technical presentations, demonstrations, and special interest sessions. Many commercial software packages and research products even tangentially related to the area of human-computer interaction now claim to be UIMS. As young and exciting as the field is, there are already signs of promises unfulfilled, due to a lack of both functionality and usability factors that can make the difference between whether UIMS are a passing fad or a viable tool. But what does the future hold for UIMS? All indications are that they are here to stay. We perceive a trend in UIMS evolution that we have divided into generations based primarily on common characteristics and only loosely on chronology.

109 citations


Journal ArticleDOI
TL;DR: In this paper, the issues involved in demonstrating a rule-based system to be free from error are addressed, and a holistic perspective is adopted, wherein sources, manifestations, and effects of errors are identified.

Abstract: As expert system technology spreads, the need for verification of system knowledge assumes greater importance. This paper addresses the issues involved in demonstrating a rule-based system to be free from error. A holistic perspective is adopted, wherein sources, manifestations, and effects of errors are identified. A general taxonomy is created, and the implications for system performance and development outlined. Existing strategies for knowledge verification are surveyed, their applicability assessed, and some directions for systematic verification suggested.
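
To make the flavour of such verification concrete, the sketch below shows two of the simplest static checks one might run over a rule base: detecting redundant rules and directly conflicting rules. The rule representation and the checks are illustrative assumptions, not the taxonomy or strategies surveyed in the paper.

```python
# Hypothetical sketch of two simple static checks on a rule base:
# redundant rules (same conditions, same conclusion) and conflicting rules
# (same conditions, contradictory conclusions). The rule format is assumed.
def check_rules(rules):
    """rules: list of (conditions, conclusion), conditions a frozenset of literals."""
    redundant, conflicting = [], []
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            cond_i, concl_i = rules[i]
            cond_j, concl_j = rules[j]
            if cond_i == cond_j:
                if concl_i == concl_j:
                    redundant.append((i, j))
                elif concl_i == "not " + concl_j or concl_j == "not " + concl_i:
                    conflicting.append((i, j))
    return redundant, conflicting

rules = [
    (frozenset({"fever", "rash"}), "measles"),
    (frozenset({"fever", "rash"}), "measles"),        # redundant with rule 0
    (frozenset({"fever", "rash"}), "not measles"),    # conflicts with rules 0 and 1
]
print(check_rules(rules))   # ([(0, 1)], [(0, 2), (1, 2)])
```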

90 citations


Journal ArticleDOI
TL;DR: A study of how high school students use an encyclopaedia in both print and electronic form was conducted from a mental models perspective to analyse subjects' development of mental models for the electronic system, and evaluate the human-computer interface effects.
Abstract: A study of how high school students use an encyclopaedia in both print and electronic form was conducted from a mental models perspective. Over three sessions and prompted by a set of protocols administered by a participant observer, 16 subjects each conducted three searches, one a verbal simulation, and one each with the print and electronic versions of a general purpose encyclopaedia. Observer notes, audio tapes of all sessions, captured keystrokes of the electronic searches, and responses to a final interview were used to compare print and electronic versions, analyse subjects' development of mental models for the electronic system, and evaluate the human-computer interface effects. Encyclopaedias seemed to be default sources of information for these subjects. Subjects demonstrated satisfactory use of them by sequentially reading articles rather than scanning and without using the index. Some subjects simply applied print models to their electronic searches, not taking advantage of full-text searching or hypertext capabilities. Most were able to use some of the electronic system's features and a few took good advantage of these features and thus appeared to develop distinct mental models for the electronic encyclopaedia by adapting their existing mental models. Subjects took almost twice as much time, posed more queries, and examined more articles in the electronic searches. Designers and instructors are encouraged to guide adaptive transitions by focusing attention on the interactive features of electronic systems and the unique features for browsing, querying and filtering information. Recommendations about display effects, navigational aids, and query formulation aids are also made.

80 citations


Journal ArticleDOI
TL;DR: The results suggest that when given structured tasks, novices are able to learn rather quickly how to produce small, good quality models using either of the tools, and this has implications for data modeling training and for the development of data modeling tools.
Abstract: The purpose of the current study was to seek insight into the ease of learning logical data modeling among novices. The two tools examined in the three learning experiments were the Logical Data Structure (LDS), which is based on the entity-relationship concept, and the Relational Data Model (RDM). For a series of trials, naive analysts were asked to generate data models using one of the tools. Feedback regarding the correct model was provided after each trial. The results suggest that when given structured tasks, novices are able to learn rather quickly how to produce small, good quality models using either of the tools. Comparatively, the LDS tool promoted significantly more top-down directed analysis and resulted in more accurate data models than the RDM tool. The significant differences are explained in terms of the visual appearance of the notation associated with the tools. The results have implications for data modeling training and for the development of data modeling tools.

66 citations


Journal ArticleDOI
TL;DR: Testing four hypotheses concerning the relative effectiveness of icon construction showed mixed modality icons were rated as distinctively more meaningful than alternatives and ratings were occasionally bolstered by population stereotypes acquired through experience.
Abstract: Computer systems often use icons to represent objects of interest within the system. In this study we tested four hypotheses concerning the relative effectiveness of icon construction: (1) Pictorial icons would be rated as more meaningful than verbal icons for concrete objects. (2) Ratings of meaningfulness would be dependent upon qualities of icons such as long versus short abbreviations, and industry standard versus enhanced pictogram. (3) Ratings would be dependent upon experience with the content domains. (4) Icons composed of both verbal and pictorial elements would be rated as more meaningful than icons composed of verbal or pictorial elements only. Hypotheses were developed from literature on text formatting, command names, human memory functionality, population stereotypes, and brain lateralization. Two experiments were conducted. The first involved icons for objects found in a building automation system (BAS) environment which were rated by 187 system operators. The second experiment involved icons from BAS, engineering, computer systems and finance environments as rated by 139 undergraduates with varying experience in those content domains. Results overall showed that: (1) Mixed modality icons were rated as distinctively more meaningful than alternatives. (2) Ratings were occasionally bolstered by population stereotypes acquired through experience. (3) Long abbreviations are preferable to short ones. (4) It is possible to construct pictograms that are more meaningful than industry standards, and (5) verbal icons are sometimes preferred over pictorial icons when mixed modes are not available.

66 citations


Journal ArticleDOI
TL;DR: Neither the relational nor the entity-relationship data model was clearly superior when used as the interface between a database system and the end user.

Abstract: In database systems the end user interacts with the database at the external schema level. At this level the user sees only the logical structure of the database that is relevant to his/her work. Both the relational and the entity-relationship model have proponents arguing that one data model is superior to the other when used in the end user environment. However, a literature review indicated that these arguments have not been based on empirical results from a systematic inquiry. The study reported here examined this issue through a controlled experiment using query writing as the task. Our basic assumption was that if one data model was superior to the other, then the superiority of the model would be reflected in the user's query writing performance. In addition, this superiority would be demonstrated on both simple and complex tasks. Query writing performance was measured by three variables: number of syntax errors, number of semantic errors, and amount of time to complete queries. The results indicated that subjects using the relational model made fewer syntax errors, but required more time to complete a query. No significant differences in the number of semantic errors were found between the two data models. Based on these results, neither the relational nor the entity-relationship data model was clearly superior when used as the interface between a database system and the end user. As expected, the more complex tasks caused more syntax and semantic errors, and required more time to complete.

Journal ArticleDOI
TL;DR: The XTRA access system to expert systems is presented which is aimed at rendering the interaction with expert systems easier for inexperienced users and in its first application the access to an expert system in the income tax domain is being realized.
Abstract: The XTRA access system to expert systems is presented which is aimed at rendering the interaction with expert systems easier for inexperienced users. XTRA communicates with the user in a natural language (German), extracts data relevant to the expert system from his/her natural-language input, answers user queries as to terminology and provides user-accommodated natural-language verbalizations of results and explanations provided by the expert system. A number of novel artificial intelligence techniques have been employed in the development of the system, including the combination of natural-language user input and user gestures on the terminal screen, referent identification with the aid of four different knowledge sources, simultaneous communication of the access system with the user and the expert system, fusion of two complementary knowledge bases into a single one, and the design of a natural-language generation component which allows for a controlled interaction between the “what-to-say” and the “how-to-say” parts to yield a more natural output. XTRA is being developed independently of any specific expert system. In its first application the access to an expert system in the income tax domain is being realized.

Journal ArticleDOI
TL;DR: It is proposed that, when no proper mathematical model is obtainable, human experts' inference models be used in computer control algorithms, with PROLOG as the language of the model implementation.

Abstract: It is proposed that, when no proper mathematical model is obtainable, human experts' inference models be used in computer control algorithms. The notion of an inference model is introduced and it is demonstrated that the formal apparatus of rough set theory can be used to identify, to analyse and to evaluate this model. A method of computer implementation of such inference models is presented. The method is based on the analysis of dependencies among decision, measurable and observable attributes. PROLOG is proposed as the language of the model implementation. Formal considerations, the proposed approach and the notions introduced are illustrated with a real-life example concerning a computer implementation of the inference model of a rotary clinker kiln stoker. The model was used to control the process and an analysis of the control results is presented.
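
As an illustration of the kind of dependency analysis described (written in Python rather than the PROLOG proposed in the paper), the sketch below computes the rough-set degree of dependency of a decision attribute on a set of condition attributes for a toy decision table; the attribute names and data are invented, not taken from the kiln study.

```python
# Illustrative sketch: degree of dependency of a decision attribute on a set of
# condition attributes, in the rough-set sense. A table is a list of dicts.
from collections import defaultdict

def dependency_degree(table, condition_attrs, decision_attr):
    # Group objects into indiscernibility classes w.r.t. the condition attributes.
    classes = defaultdict(list)
    for row in table:
        key = tuple(row[a] for a in condition_attrs)
        classes[key].append(row[decision_attr])
    # Positive region: objects whose class determines the decision uniquely.
    positive = sum(len(ds) for ds in classes.values() if len(set(ds)) == 1)
    return positive / len(table)

# Toy kiln-like decision table with assumed attributes.
table = [
    {"flame_colour": "bright", "gas_temp": "high", "action": "reduce_fuel"},
    {"flame_colour": "bright", "gas_temp": "high", "action": "reduce_fuel"},
    {"flame_colour": "dark",   "gas_temp": "high", "action": "increase_fuel"},
    {"flame_colour": "dark",   "gas_temp": "low",  "action": "increase_fuel"},
    {"flame_colour": "bright", "gas_temp": "low",  "action": "reduce_fuel"},
]
print(dependency_degree(table, ["flame_colour"], "action"))  # 1.0: decision fully determined
```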

Journal ArticleDOI
TL;DR: A syntax induction experiment in which subjects learn to operate a toy “lost property” computer system by a lexical command language is tested, finding that the results of the experiment are consistent with the predictions of task-action grammar.
Abstract: Task-action grammar (TAG), a formal model of the mental representation of task languages, makes predictions about the relative learnability of different command language structures. In particular TAG predicts that consistent structuring of task-action mappings across semantic domains of the task world will facilitate learning, but that consistent structuring within domains that are orthogonal to the semantic organisation of the task world cannot be accommodated within users' mental representations, and so will not help learners. Other models of human-computer interaction either fail to address this distinction, or make quite different predictions. The prediction is tested by a syntax induction experiment in which subjects learn to operate a toy “lost property” computer system by a lexical command language. The results of the experiment are consistent with the predictions of task-action grammar.

Journal ArticleDOI
TL;DR: The results are interpreted in light of the need to formulate a mental model of correct program functioning and to determine the location of the program bug in terms of the functioning of that model.
Abstract: To develop a theory of computer program bugs and of debugging, we need to classify on an abstract basis the nature of the bug and to relate the nature of the bug to the difficulty of debugging. Atwood and Ramsey (1978) report the only attempt of this nature in a study based on the theory of propositional hierarchies (see Kintsch, 1974) from the text comprehension literature. Propositional hierarchies are a conceptualization of the way in which sentences are stored in memory for the purpose of recall, etc. Atwood and Ramsey's studies did not distinguish between the difficulty of debugging as a function of the location of the bug in the propositional hierarchy and the difficulty as a function of the location of the bug in the program structure. The objective of the series of three studies reported here is to differentiate between bug difficulty based on location in the propositional hierarchy of the sentence structure of the programming language and its location in the serial structure of the program. Little support was found for the effect of the location of the bug in the program structure on debugging difficulty. The effect of the location of the bug in the propositional hierarchy warrants further investigation. The results are interpreted in light of the need to formulate a mental model of correct program functioning and to determine the location of the program bug in terms of the functioning of that model.

Journal ArticleDOI
TL;DR: The “broader-than” relationships of both a medical and a computer science thesaurus when coupled with a simple average path length algorithm are able to simulate the decisions of people regarding the conceptual similarity of documents and queries.
Abstract: Information retrieval systems often rely on thesauri or semantic nets in indexing documents and in helping users search for documents. Reasoning with these thesauri resembles traversing a graph. Several algorithms for matching documents to queries based on the distances between nodes on the graph (terms in the thesaurus) are compared to the evaluations of people. The “broader-than” relationships of both a medical and a computer science thesaurus when coupled with a simple average path length algorithm are able to simulate the decisions of people regarding the conceptual similarity of documents and queries. A graphical presentation of a thesaurus is connected to a multi-window document retrieval system and its ease of use is compared to a more traditional thesaurus-based information retrieval system. While substantial evidence exists that the graphics and multiple windows can be useful, our experiments have shown, as have many other human-computer interface experiments, that a multitude of factors come into play in determining the value of a particular interface.
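
The sketch below illustrates the general idea of such a path-based match (an assumed reading of "simple average path length", not the system's actual algorithm): score a document against a query by the mean shortest-path distance between their terms over the thesaurus's "broader-than" links, treated here as an undirected graph.

```python
# Hypothetical sketch of an average path length match over a thesaurus graph.
from collections import deque

def shortest_path(graph, start, goal):
    """BFS distance between two terms; graph maps a term to its neighbours."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        term, dist = frontier.popleft()
        for nxt in graph.get(term, ()):
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return float("inf")   # no connection through the thesaurus

def average_distance(graph, query_terms, doc_terms):
    dists = [shortest_path(graph, q, d) for q in query_terms for d in doc_terms]
    return sum(dists) / len(dists)   # smaller means conceptually closer

# Tiny assumed fragment of a broader-than hierarchy.
graph = {
    "disease": ["infection", "cancer"],
    "infection": ["disease", "hepatitis"],
    "hepatitis": ["infection"],
    "cancer": ["disease"],
}
print(average_distance(graph, ["hepatitis"], ["infection", "cancer"]))  # (1 + 3) / 2 = 2.0
```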

Journal ArticleDOI
TL;DR: This study presents a framework by combining concepts from probability and fuzzy set theories, arguing that since in most realistic situations these two types exist simultaneously, it is necessary to combine them in a formal framework to yield realistic solutions.
Abstract: Two major sources of imprecision in human knowledge, linguistic inexactness and stochastic uncertainty, are identified in this study. It is argued that since in most realistic situations these two types exist simultaneously, it is necessary to combine them in a formal framework to yield realistic solutions. This study presents such a framework by combining concepts from probability and fuzzy set theories. In this framework four models (Kwakernaak, 1978; Yager, 1979, 1984; Zadeh, 1968, 1975) that attempt to account for the numeric or linguistic responses in various probability elicitation tasks were tested. The linguistic models were relatively effective in predicting subjects' responses compared to a random choice model. The numeric model (Zadeh, 1968) proved to be insufficient. These results and others suggest that subjects are unable to represent the full complexity of a problem. Instead they adopt a simplified view of the problem by representing vague linguistic concepts by multiple crisp representations (the α-level sets). All of the mental computation is done at these surrogate levels.
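
Two of the ingredients mentioned here can be illustrated with a few lines of Python and made-up numbers: Zadeh's (1968) probability of a fuzzy event, and the α-level sets (crisp surrogates) of a fuzzy concept such as "high".

```python
# Sketch with invented numbers: Zadeh's (1968) probability of a fuzzy event,
# P(A) = sum_x mu_A(x) * p(x), and the alpha-level (crisp) sets of "high".
p = {1: 0.1, 2: 0.2, 3: 0.4, 4: 0.2, 5: 0.1}          # probability over outcomes
mu_high = {1: 0.0, 2: 0.1, 3: 0.5, 4: 0.8, 5: 1.0}    # membership in the fuzzy event "high"

prob_high = sum(mu_high[x] * p[x] for x in p)          # Zadeh's numeric model
alpha_cut = lambda mu, a: {x for x, m in mu.items() if m >= a}

print(round(prob_high, 2))       # 0.0*0.1 + 0.1*0.2 + 0.5*0.4 + 0.8*0.2 + 1.0*0.1 = 0.48
print(alpha_cut(mu_high, 0.5))   # {3, 4, 5}: one of the crisp surrogates subjects may use
```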

Journal ArticleDOI
TL;DR: The experiments showed that the possibility of visual information chunking substantially decreases the memory load caused by spreadsheet calculation.

Abstract: Spreadsheet calculation causes a heavy memory load, since it is necessary to remember complex cell and calculation systems. A series of experiments were carried out to study the role of visual information chunking in spreadsheet calculation. The experiments showed that the possibility of visual information chunking substantially decreases the memory load caused by spreadsheet calculation. If subjects are able to induce the structure of a formula or a network of connected formulas, they usually learn it fast. The surface structure of a formula may cause subjects considerable difficulties in chunking. Badly ordered formula networks, in which cell layers are embedded within each other and references cross each other, are difficult to learn and remember. Subjects are not able to abstract the deep structure and encode such formula networks.

Journal ArticleDOI
TL;DR: Three experiments were carried out on learning iteration and recursion, suggesting that subjects are quite able to induce a computational procedure for both iterative and recursive functions.

Abstract: Recursion is basic to computer science, whether it is conceived of abstractly as a mathematical concept or concretely as a programming technique. Three experiments were carried out on learning iteration and recursion. The first involved learning to compute mathematical functions, such as the factorial, from worked-out examples. The results suggest that subjects are quite able to induce a computational procedure for both iterative and recursive functions. Furthermore, prior work with iterative examples does not seem to facilitate subsequent learning of recursive procedures, nor does prior work with recursive examples facilitate subsequent learning of iterative procedures. The second experiment studied the extent to which people trained only with recursive examples are able to transfer their knowledge to compute other similar recursive mathematical functions stated in an abstract form. It turned out that subjects who transferred to abstractly stated problems performed somewhat worse than they had performed previously when given examples. However, they did far better than a control group trained only with an abstract description of recursion. The third experiment involved comprehension of iterative and recursive Pascal programs. Comprehension of the iterative program was not affected by prior experience with the recursive version of the same program. Comprehension of the recursive version was only weakly affected by prior experience with the iterative version.
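
For concreteness, here is the factorial function mentioned in the abstract written both ways (the experiments used Pascal; Python is used here purely for illustration):

```python
# The factorial function written iteratively and recursively.
def factorial_iterative(n):
    result = 1
    for i in range(2, n + 1):   # multiply up the sequence 2, 3, ..., n
        result *= i
    return result

def factorial_recursive(n):
    if n <= 1:                  # base case
        return 1
    return n * factorial_recursive(n - 1)   # the function calls itself

print(factorial_iterative(5), factorial_recursive(5))   # 120 120
```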

Journal ArticleDOI
TL;DR: An empirical analysis of blast furnace conductor's strategies in a simulation of the process is presented, and its implications for the design of intelligent computer support are discussed.

Abstract: An empirical analysis of blast furnace conductor's strategies in a simulation of the process is presented, and its implications for the design of intelligent computer support are discussed. The rationale for the choice of this situation is the need for cognitive analysis of process control situations which are far from the discrete state transformation situations for which information processing psychological models or artificial systems have been designed. The simulation method is justified by the results of the previous steps of the study (behavioral observations in the control room and interviews on tool use and knowledge representation). The strategies are described in terms of the representations used and the processing performed, their efficiency is evaluated, and correlations between strategic features and efficiency are examined. A number of hypotheses are put forward on the types of computer support best suited to satisfying the conditions of implementation of the most efficient strategic features. The computer is seen as an instrument that operates as a colleague, rather than as a prosthesis capable of replacing the human.

Journal ArticleDOI
TL;DR: An abstract architecture for the design of user-computer interfaces intended to serve the user-oriented principles of learnability and usability is described, which was organized around the definition of a multi-phase interaction event flowchart.
Abstract: An abstract architecture for the design of user-computer interfaces is described. It is intended to serve the user-oriented principles of learnability and usability. The primary interface features selected as responsive to these principles are task-specific context presented to the user, reinforced by system adaptability to user needs. Both of these features are undergirded by interface system modularity. Context is defined to include not only high-level direction and step-by-step guidance toward task completion but also intelligent advice on the different user actions and commands. A prototype interface system was implemented, using a Xerox 1108 LISP workstation and a VAX 11/780 UNIX system as the target computer. It was organized around the definition of a multi-phase interaction event flowchart and is very dependent on object-oriented and rule-based paradigms. Limited test results indicate a favorable performance pattern.

Journal ArticleDOI
TL;DR: This paper describes the development of a model of users' interaction with an auditory interface based on the approach applied by Card, Moran & Newell (1980; 1983) to modelling visual interfaces, and represents a first step towards expanding models of human-computer interactions to include auditory interactions.
Abstract: Modern window, icon, menu and pointer (WIMP) systems represent a significant new obstacle to access to computers for people with visual disabilities. A project was carried out which demonstrated the possibility of adapting such highly visual interfaces into an auditory form so that even totally blind people could use them. This paper describes the development of a model of users' interaction with such an auditory interface. It is based on the approach applied by Card, Moran & Newell (1980; 1983) to modelling visual interfaces. The model concerns the time taken to locate an object within a screen which is defined by sounds. It states that T_position = T_think + d * T_move, where T_think is a constant representing the time component during which the mouse is not moved, d is the distance to the target and T_move is the time to cross one object. Measurements taken yielded values of T_think = 3.99 s and T_move = 0.80 s. The model does provide a good description of the behavior of most of the test subjects. This work represents a first step towards expanding models of human-computer interactions to include auditory interactions. This should be of benefit not only to the development of interfaces for blind users, but also in the enhancement of interfaces for sighted users by the addition of an auditory component.
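
A direct reading of the reported model and its fitted constants, for illustration (the example distance is invented):

```python
# Sketch of the reported auditory positioning-time model with its fitted constants.
T_THINK = 3.99   # seconds; component during which the mouse is not moved
T_MOVE = 0.80    # seconds; time to cross one object

def predicted_position_time(distance_in_objects):
    """T_position = T_think + d * T_move, with d measured in objects to cross."""
    return T_THINK + distance_in_objects * T_MOVE

print(round(predicted_position_time(4), 2))   # 3.99 + 4 * 0.80 = 7.19 seconds
```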

Journal ArticleDOI
TL;DR: A need for a help system which can provide appropriate types of help for two different styles of learning, field-dependent and field-independent, is indicated.
Abstract: Previous studies have highlighted the existence of differences in the cognitive style adopted by different individuals. One dimension of cognitive style has the extremes “Field-Dependency” and “Field-Independency”. These styles affect the way a person structures and processes information, which may in turn have a profound effect on the way a person learns to use a computer system. This study investigated the effects of these styles on learning to use the UNIX operating system. Subjects were required to work through a number of tasks using UNIX and to ask for help when it was required. The results indicated that field-dependent subjects were less likely to know the command and more likely to ask for help without making any attempt at the task than field-independent subjects, whereas the latter were more likely to attempt the task and make errors than ask for help. These results indicate a need for a help system which can provide appropriate types of help for these two different styles of learning.

Journal ArticleDOI
TL;DR: In this paper, the Dialog Manager subsystem of the AQUINAS knowledge acquisition workbench provides automated assistance to a knowledge engineer or domain expert in analysing the problem domain, classifying the problem tasks and sub-tasks, identifying problem-solving methods, proposing knowledge acquisition tools, and suggesting the use of specific strategies for knowledge acquisition provided in selected tools.
Abstract: One of the most troublesome and time-consuming activities in constructing a knowledge-based system is the elicitation and modelling of knowledge from the human expert about the problem domain. A major obstacle is that little guidance is available to the domain expert or knowledge engineer to help with (1) classifying the application task and identifying a problem-solving method, and (2) given the application task characteristics, selecting knowledge acquisition tools and strategies to be applied in creating and refining the knowledge base. Our objective is to provide automated assistance to a knowledge engineer or domain expert in analysing the problem domain, classifying the problem tasks and sub-tasks, identifying problem-solving methods, proposing knowledge acquisition tools, and suggesting the use of specific strategies for knowledge acquisition provided in selected tools. We describe such an implementation in the Dialog Manager subsystem of the AQUINAS knowledge acquisition workbench. The Dialog Manager provides advice to potential AQUINAS users as well as continuing guidance to users who select AQUINAS for knowledge base development.

Journal ArticleDOI
TL;DR: An analysis and comparison of the underlying measures of uncertainty will lead to a discussion of the role, in expert systems, of the concepts of support, refutation, belief and possibility.

Abstract: The aim of this paper is to analyse a wide variety of alternative approaches to the modelling of uncertainty, and to discuss their relevance to the handling of uncertain inference in Expert Systems. Some of the approaches that I shall examine have been developed specifically for use in expert systems, while others have arisen from more theoretical work on the notions of support and belief. Particular emphasis will be placed on an analysis and comparison of the underlying measures of uncertainty; this analysis will lead to a discussion of the role, in expert systems, of the concepts of support, refutation, belief and possibility. The criticism of current approaches to the modelling of uncertainty will be used as a basis for formulating some requirements for the processing of evidence in an expert system.

Journal ArticleDOI
TL;DR: This taxonomy was developed in a three-step process: (1) review existing taxonomies; (2) add independent variables used in Human; and (3) remove redundancy and ambiguity.
Abstract: As part of an ongoing program to develop a Computer Aided Engineering (CAE) system for human factors engineers, a Human Performance Expert System, Human, was designed. The system contains a taxonomy of independent variables which affect human performance. This taxonomy was developed in a three-step process: (1) review existing taxonomies; (2) add independent variables used in Human; and (3) remove redundancy and ambiguity. This process and the resultant taxonomy are described in this paper.

Journal ArticleDOI
TL;DR: This paper describes and discusses some of the work relating to the use of pictorial dialogue methods to support: (1) end-user interaction with electronic books; (2) mixed-mode consultations with expert systems; and (3) multi-media instruction through theUse of computer assisted learning techniques.
Abstract: Human-computer communication provides the basic mechanisms by which computer users are able to express their requirements and influence the mode of operation of sophisticated information processing machines. In the past, textual dialogue has been the primary mode of facilitating such communicative encounters. Increasingly, pictorial dialogue methods are being employed in order to overcome some of the limitations and inefficiencies of textual exchange. This paper describes and discusses some of our work relating to the use of pictorial dialogue methods to support: (1) end-user interaction with electronic books; (2) mixed-mode consultations with expert systems; and (3) multi-media instruction through the use of computer assisted learning techniques.

Journal ArticleDOI
TL;DR: In this article, the authors explore the fundamental issues of plan knowledge acquisition from domain experts, and construct a framework for task recall, in which the representation of a recallable activity is called an act, and acts can be decomposed and put into sequences.
Abstract: This article explores the fundamental issues of plan knowledge acquisition from domain experts. The general question is: Are humans with their knowledge of a domain and its procedures able to provide a planner with the necessary information for automatic planning? To answer this question we first review the requirements of the plan library of a situation calculus based planner. Then we review existing frameworks for the representation of human activity knowledge and investigate to what extent these frameworks address the requirements. A major factor in evaluating the frameworks is the psychological reality the framework has to the individual. From this review and interviews we conducted in a pilot study, we construct a framework for task recall. In this framework, the representation of a recallable activity is called an act. An act consists of a goal, a pre-situation, an operations-list and a post-situation. Acts can be decomposed and put into sequences. In experiments with the framework, we find support for all our hypotheses except the one dealing with effects. Further investigation of this issue is discussed.
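
A hypothetical rendering of the proposed "act" structure as a small Python data class may help fix the idea; the field names follow the abstract (goal, pre-situation, operations-list, post-situation, decomposition into sub-acts) while the example content is invented.

```python
# Hypothetical sketch of the article's "act" representation as a data class.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Act:
    goal: str
    pre_situation: List[str]          # conditions assumed to hold before acting
    operations: List[str]             # the operations-list; steps may name sub-acts
    post_situation: List[str]         # conditions expected to hold afterwards
    sub_acts: List["Act"] = field(default_factory=list)   # decomposition into sequences

brew = Act(
    goal="coffee is ready",
    pre_situation=["machine plugged in", "water available"],
    operations=["fill reservoir", "insert filter", "start machine"],
    post_situation=["pot contains coffee"],
)
```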

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the differences between experts and novices in problem representation (abstract vs concrete) provided a criterion for evaluating performance on an expert system used for diagnostic problem-solving.
Abstract: Conceptual differences between experts and novices in problem representation (abstract vs concrete) provided a criterion for evaluating performance on an expert system used for diagnostic problem-solving. In a field study, employee skill level (high vs low), system usage (use of system vs no usage), and question type (requiring abstract vs concrete information organization) were studied with respect to employee performance (speed and accuracy). The findings showed that high-skill employees answered abstract as well as concrete questions faster and more accurately than did low-skill employees. Also, high-skill employees performed significantly faster on questions requiring abstract information organization than concrete information organization. In contrast, low-skill employees performed significantly faster and more accurately on questions requiring concrete information organization as compared to abstract information organization. The data also showed that problem solution time for low-skill employees decreased a greater amount than for the high-skill employees, using the system as compared to not using it. The findings suggest that high and low-skill employees organized their conceptual knowledge about the problem differently. The presentation of information in a manner that is conducive to employees' conceptual representations of a problem is discussed along with directions for future research.

Journal ArticleDOI
TL;DR: Based on the review, this paper suggests guidelines for development of a methodology suitable for knowledge elicitation of the programming process, and lays a groundwork for developing such procedures by discussing important methodological issues.
Abstract: The current information age has brought about radical changes in workforce requirements, just as did the industrial revolution of the 1800s. With the presence of new technology, jobs are requiring less manual effort and becoming more cognitive-oriented. With this shift, new techniques in job design and task analysis are required. One area which will greatly benefit from effective task analysis procedures is software development. This paper attempts to lay a groundwork for developing such procedures by discussing important methodological issues, and examining current theories and research findings for their potential to identify the cognitive tasks of computer programming. Based on the review, this paper suggests guidelines for development of a methodology suitable for knowledge elicitation of the programming process.