
Showing papers in "International Journal of Human-computer Studies / International Journal of Man-machine Studies in 1988"


Journal ArticleDOI
TL;DR: Gaines and Boose, editors, Machine Learning and Uncertain Reasoning 3, pages 227-242, 1990; see also: International Journal of Man Machine Studies 29 (1988) 81-85
Abstract: In: B. Gaines and J. Boose, editors, Machine Learning and Uncertain Reasoning 3, pages 227-242. Academic Press, New York, NY, 1990. See also: International Journal of Man Machine Studies 29 (1988) 81-85.

431 citations


Journal ArticleDOI
TL;DR: Basic data are presented and discussed that characterize the class of keystroke digraph latencies that are found to have good potential as static as well as dynamic identity verifiers.
Abstract: This paper reports on an experiment that was conducted to assess the viability of using keystroke digraph latencies (time between two successive keystrokes) as an identity verifier. Basic data are presented and discussed that characterize the class of keystroke digraph latencies that are found to have good potential as static identity verifiers as well as dynamic identity verifiers. Keystroke digraph latencies would be used in conjunction with other security measures to provide a total security package.
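
To make the digraph-latency idea concrete, here is a minimal sketch of how a static verifier of this kind could work: an enrollment profile stores the mean and spread of each digraph's latency, and a test session is accepted if few of its latencies deviate strongly. The thresholds and data layout are illustrative assumptions, not the paper's method.

```python
from collections import defaultdict
from statistics import mean, stdev

def build_profile(samples):
    """samples: iterable of (digraph, latency_ms) pairs from enrollment typing."""
    by_digraph = defaultdict(list)
    for digraph, latency in samples:
        by_digraph[digraph].append(latency)
    # Keep only digraphs observed often enough to estimate their spread.
    return {d: (mean(v), stdev(v)) for d, v in by_digraph.items() if len(v) >= 2}

def verify(profile, session, z_cutoff=2.5, max_outlier_rate=0.2):
    """Accept the claimed identity if few session latencies are far from the profile."""
    scored = [(lat, profile[d]) for d, lat in session if d in profile]
    if not scored:
        return False
    outliers = sum(1 for lat, (m, s) in scored
                   if s > 0 and abs(lat - m) / s > z_cutoff)
    return outliers / len(scored) <= max_outlier_rate
```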

158 citations


Journal ArticleDOI
TL;DR: The performance of users with high and low spatial abilities on the old verbal interface and the new graphical interface was compared, and the graphical interface resulted in changes in command usage that were consistent with the predictions of the visual momentum analysis.
Abstract: Individual differences among users of a hierarchical file system were investigated. The results of a previous experiment revealed that subjects with low spatial ability were getting lost in the hierarchical file structure. Based on the concept of visual momentum, two changes to the old interface were proposed in an attempt to accommodate the individual differences in task performance. The changes consisted of a partial map of the hierarchy and an analogue indicator of current file position. This experiment compared the performance of users with high and low spatial abilities on the old verbal interface and the new graphical interface. The graphical interface resulted in changes in command usage that were consistent with the predictions of the visual momentum analysis. Although these changes in strategy resulted in a performance advantage for the graphical interface, the relative performance difference between high and low spatial groups remained constant across interfaces. However, the new interface did result in a decrease in the within-group variability in performance.

132 citations



Journal ArticleDOI
TL;DR: A computerized fuzzy graphic rating scale, an extension of the semantic differential, is described; it allows respondents to provide an imprecise rating and lends itself to analysis using fuzzy set theory.
Abstract: This paper aims to outline and evaluate a new approach to measurement within psychology. A computerized fuzzy graphic rating scale, which is an extension of a semantic differential, is described. The scale allows respondents to provide an imprecise rating and lends itself to analysis using fuzzy set theory. Respondents rated nine occupational stimuli, carefully chosen to represent three levels of prestige (Daniel, 1983) and three levels of sex-type (Shinar, 1975), on eight fuzzy graphic rating scales (five for prestige and three for sex-type). A single expected value was calculated for the fuzzy ratings of the occupations to permit correlations with the a priori values for the nine occupations. Various combinations of scales were obtained by forming the union of individual fuzzy ratings. Expected values based on combined scales were calculated, and the results were also correlated with the a priori Daniel and Shinar scale values. Potential applications of the fuzzy graphic rating scale are outlined.
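
As a rough illustration of the mechanics described above, the sketch below represents an imprecise rating as a triangular membership function, combines two ratings with the standard max-union, and defuzzifies with a centroid-style expected value. The triangular shape and centroid formula are common choices assumed here, not necessarily the paper's exact definitions.

```python
def triangular(peak, spread, xs):
    """Membership function of an imprecise rating centred at `peak`."""
    return [max(0.0, 1.0 - abs(x - peak) / spread) for x in xs]

def fuzzy_union(a, b):
    """Standard max-union of two fuzzy ratings over the same axis."""
    return [max(ua, ub) for ua, ub in zip(a, b)]

def expected_value(membership, xs):
    """Centroid-style single expected value of a fuzzy rating."""
    total = sum(membership)
    return sum(u * x for u, x in zip(membership, xs)) / total if total else None

xs = list(range(101))                        # discretized 0-100 rating axis
r1 = triangular(peak=70, spread=15, xs=xs)   # one imprecise prestige rating
r2 = triangular(peak=80, spread=10, xs=xs)   # a second rating of the same stimulus
print(round(expected_value(fuzzy_union(r1, r2), xs), 1))
```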

97 citations


Journal ArticleDOI
TL;DR: The rationale for relegating inductive learning and deductive problem solving to minor roles in support of retaining, indexing, and matching exemplars is described, and an example of Protos in the domain of clinical audiology is discussed.
Abstract: Building Protos, a learning apprentice system for heuristic classification, has forced us to scrutinize the usefulness of inductive learning and deductive problem solving. While these inference methods have been widely studied in machine learning, their seductive elegance in artificial domains (e.g. mathematics) does not carry over to natural domains (e.g. medicine). This paper briefly describes our rationale in the Protos system for relegating inductive learning and deductive problem solving to minor roles in support of retaining, indexing, and matching exemplars. The problems that arise from “lazy generalization” are described along with their solutions in Protos. Finally, an example of Protos in the domain of clinical audiology is discussed.

95 citations


Journal ArticleDOI
TL;DR: The cognitive organization of a set of abstract programming concepts was investigated in subjects who varied in degree of computer programming experience, revealing that the four groups differed in the way concepts were represented.
Abstract: The cognitive organization of a set of abstract programming concepts was investigated in subjects who varied in degree of computer programming experience. Relatedness ratings on pairs of the concepts were collected from naive, novice, intermediate, and advanced programmers. Both individual and group network representations of memory structure were derived using the Pathfinder network scaling algorithm. Not only did the four group networks differ, but they varied systematically with experience, providing support for the psychological meaningfulness of the structures. Additionally, an analysis at the conceptual level revealed that the four groups differed in the way concepts were represented. Furthermore, this analysis was used to classify concepts in the naive, novice, and intermediate networks as well-defined or misdefined. The identification of semantic relations corresponding to some of the links in the networks provided further information concerning differences in programmer knowledge at different levels of experience. Applications of this work to programmer education and knowledge engineering are discussed.
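
For readers unfamiliar with the scaling step, the following is a compact sketch of the Pathfinder idea for the common parameter choice PFNET(q = n-1, r = infinity), where a link between two concepts survives only if no indirect path offers a smaller maximum-weight step. It is a simplification of the full algorithm, applied to hypothetical data.

```python
import math

def pathfinder_inf(dist):
    """dist: symmetric matrix of rated distances (lower = more related)."""
    n = len(dist)
    best = [row[:] for row in dist]
    # Floyd-Warshall variant: with r = infinity the cost of a path is the
    # largest single link on it, so relax using max instead of addition.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                best[i][j] = min(best[i][j], max(best[i][k], best[k][j]))
    # Retain a direct link only if no indirect path beats it.
    return [[0 if i == j
             else dist[i][j] if dist[i][j] <= best[i][j]
             else math.inf
             for j in range(n)] for i in range(n)]

# Hypothetical relatedness-derived distances among three programming concepts.
d = [[0, 1, 4],
     [1, 0, 2],
     [4, 2, 0]]
print(pathfinder_inf(d))   # the 0-2 link (weight 4) is pruned: max(1, 2) < 4
```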

93 citations


Journal ArticleDOI
TL;DR: It will be argued that the ability to apply more than one perspective is valuable to designers of computer applications, researchers dealing with human-computer interactions, as well as to users of a particular computer application.
Abstract: This paper will stress the value of a multi-perspective view on the use of computers. It will argue that the ability to apply more than one perspective is valuable to designers of computer applications, to researchers dealing with human-computer interactions, as well as to users of a particular computer application. To that end, the paper will present the systems perspective, the dialogue partner perspective, the tool perspective, and the media perspective. All four perspectives will primarily be characterized in relation to human-computer interaction, and the characterizations will be based on a common set of concepts presented in the beginning of the paper. The last section of the paper will, with the help of a few examples, illustrate the value of applying multiple perspectives.

92 citations


Journal ArticleDOI
TL;DR: No significant difference was found in either reading speed or comprehension between screen and paper, or between dark and light character displays; some preference differences were found, however.
Abstract: This paper considers the effect of presentation medium on reading speed and comprehension. By directly comparing performance using screen and paper presentations, it examines the argument that it takes longer to read from a screen-based display than from paper, and that comprehension will be lower. The hypothesis is also tested that it takes longer to read light characters on a dark background compared with dark characters on a light background, and that comprehension will be lower with light-character displays. Altogether four conditions were used, with two passages read in each condition: screen with dark characters, screen with light characters, paper with dark characters, and paper with light characters. Subjects also ranked the four conditions for preference. No significant difference was found in either reading speed or comprehension between screen and paper, or between dark and light character displays. Some preference differences were found, however. Reasons for the lack of reading and comprehension differences are discussed, and it is argued that this reflects the close attention to experimental detail paid in the present experiment, which has often been missing in past studies.

91 citations


Journal ArticleDOI
TL;DR: This paper proposes that communication between humans and computers should likewise be regarded as a series of layered protocols, and that interfaces should be designed to take advantage of the natural tendency of humans to process communication in a layered manner, using protocols learned in other interactions.
Abstract: A consistent trend in the development of computer systems has been the attempt to separate considerations of how to use the computer from considerations of how to solve the problems for which the computer is used. The concept of layering was introduced early, first assemblers and then compilers providing higher levels of abstraction with which programmers could work. The recent development of User Interface Management Systems has extended to interactive systems the separation of problem and technique by layering. Psychologists have long recognized the likelihood that humans behave as if they used layers of abstraction in both perception and performance. In communication between two partners, both must use the same forms and signals, or communication fails. Together, the forms of messages and the signals that indicate alternations of message direction can be considered to be a protocol. Protocols at several layers of abstraction form the basis of current models for communication between computers. This paper proposes that communication between humans and computers should likewise be regarded as a series of layered protocols, and that interfaces should be designed to take advantage of the natural tendency of humans to process communication in a layered manner, using protocols learned in other interactions.
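
A toy sketch of the layered-protocol view may help: each layer translates between a more abstract and a more concrete form of a message, so two partners can communicate only if they share conventions at every layer. The three layers and their encodings below are entirely hypothetical, my own construction rather than the paper's model.

```python
class Layer:
    """One protocol layer: a matched encode/decode pair."""
    def __init__(self, name, encode, decode):
        self.name, self.encode, self.decode = name, encode, decode

stack = [
    Layer("task",     lambda m: f"INTENT({m})", lambda m: m[7:-1]),
    Layer("dialogue", lambda m: f"TURN[{m}]",   lambda m: m[5:-1]),
    Layer("lexical",  lambda m: m.encode(),     lambda m: m.decode()),
]

def send(message):
    for layer in stack:            # abstract -> concrete
        message = layer.encode(message)
    return message

def receive(signal):
    for layer in reversed(stack):  # concrete -> abstract
        signal = layer.decode(signal)
    return signal

wire = send("delete paragraph 2")
assert receive(wire) == "delete paragraph 2"   # both partners share all layers
```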

74 citations


Journal ArticleDOI
TL;DR: This paper discusses the use of repertory grid-centred knowledge acquisition tools such as the Expertise Transfer System (ETS), AQUINAS, KITTEN, and KSS0, and presents dimensions of use along with specific applications.
Abstract: Repertory grid-centred knowledge acquisition tools are useful as knowledge engineering aids when building many kinds of complex knowledge-based systems. These systems help in rapid prototyping and knowledge base analysis, refinement, testing, and delivery. These tools, however, are also being used as more general knowledge-based decision aids. Such features as the ability to very rapidly prototype knowledge bases for one-shot decisions and to quickly combine and weigh various sources of knowledge make these tools valuable outside of the traditional knowledge engineering process. This paper discusses the use of repertory grid-centred tools such as the Expertise Transfer System (ETS), AQUINAS, KITTEN, and KSS0. Dimensions of use are presented along with specific applications. Many of these dimensions are discussed within the context of ETS and AQUINAS applications at Boeing.

Journal ArticleDOI
TL;DR: Petri nets are identified as possible candidates for a modelling technique for dialogues on the basis of their applicability to concurrent, asynchronous systems and extended to nested Petri nets, allowing transitions to invoke subnets.
Abstract: The requirements of man-machine dialogue-specification techniques are examined. Petri nets are identified as possible candidates for a modelling technique for dialogues on the basis of their applicability to concurrent, asynchronous systems. Labelled Petri nets are extended to nested Petri nets, allowing transitions to invoke subnets. It is shown that this extension allows nested Petri nets to generate at least the set of context-free languages. Further extensions are made to simplify the modelling of input and output in the user interface, resulting in input-output nets. Transitions labelled by error conditions and meta functions on nets are introduced to increase the usability of the model. Finally, the use of the model is demonstrated by modelling a small hypothetical command language.
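
To ground the modelling idea, here is a minimal Petri-net interpreter of my own construction: places hold tokens, and a labelled transition fires only when all its input places are marked, which is what lets the formalism track concurrent, asynchronous dialogue states. The nesting extension (transitions invoking subnets) is omitted for brevity.

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # label -> (input places, output places)

    def add_transition(self, label, inputs, outputs):
        self.transitions[label] = (inputs, outputs)

    def enabled(self, label):
        inputs, _ = self.transitions[label]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, label):
        if not self.enabled(label):
            raise ValueError(f"transition {label!r} not enabled")
        inputs, outputs = self.transitions[label]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical dialogue fragment: a prompt must appear before input is accepted.
net = PetriNet({"idle": 1})
net.add_transition("show_prompt", ["idle"], ["awaiting_input"])
net.add_transition("accept_input", ["awaiting_input"], ["processing"])
net.fire("show_prompt")
net.fire("accept_input")
print(net.marking)   # {'idle': 0, 'awaiting_input': 0, 'processing': 1}
```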

Journal ArticleDOI
TL;DR: Concept learning methods divide into similarity-based, hierarchical, function induction, and explanation-based knowledge-intensive techniques, which are related to knowledge acquisition for expert systems.
Abstract: Although experts have difficulty formulating their knowledge explicitly as rules, they find it easy to demonstrate their expertise in specific situations. Schemes for learning concepts from examples offer the potential for domain experts to interact directly with machines to transfer knowledge. Concept learning methods divide into similarity-based, hierarchical, function induction, and explanation-based knowledge-intensive techniques. These are described, classified according to input and output representations, and related to knowledge acquisition for expert systems. Systems discussed include candidate elimination, version space, ID3, PRISM, MARVIN, NODDY, BACON, COPER, and LEX-II. Teaching requirements are also analysed.
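
As a concrete taste of the similarity-based family the survey covers, the sketch below implements the Find-S style of generalization, a simplification of the version-space/candidate-elimination approach: the most specific conjunctive hypothesis is relaxed just enough to cover each positive example. The weather-style attributes are a stock illustration, not from the paper.

```python
def find_s(examples):
    """examples: list of (attribute_tuple, is_positive)."""
    hypothesis = None
    for attrs, positive in examples:
        if not positive:
            continue                      # Find-S ignores negative examples
        if hypothesis is None:
            hypothesis = list(attrs)      # start maximally specific
        else:
            # Replace any attribute that disagrees with a wildcard.
            hypothesis = [h if h == a else "?" for h, a in zip(hypothesis, attrs)]
    return hypothesis

examples = [
    (("sunny", "warm", "strong"), True),
    (("sunny", "warm", "weak"),   True),
    (("rainy", "cold", "strong"), False),
]
print(find_s(examples))   # ['sunny', 'warm', '?']
```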

Journal ArticleDOI
Yiya Yang
TL;DR: In this article, the authors present a review of undo support in user interface management systems and present a new undo model that addresses the requirements for a more general undo support facility, and its more powerful functionality is demonstrated.
Abstract: One of the important features for error handling and recovery provided by a user interface management system is undo support. Undo support allows a user to reverse the effects of commands that have already been executed. In this paper, characteristics of undo support are reviewed. Two classic kinds of undo support, history undo and linear undo/redo, are respectively specified by two models, the primitive undo model and the meta undo model. Their properties are carefully analysed in terms of formal specifications. Requirements for a more general undo support facility are discussed in terms of these models. A new undo model that addresses these requirements is formally specified and its more powerful functionality is demonstrated.
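
For concreteness, here is a minimal sketch of the linear undo/redo behaviour reviewed above, in my own formulation rather than the paper's formal specification: executed commands go on a history stack, undo moves them to a redo stack, and any fresh command discards pending redos.

```python
class LinearUndo:
    def __init__(self, state):
        self.state, self.history, self.redo_stack = state, [], []

    def execute(self, do, undo):
        self.state = do(self.state)
        self.history.append((do, undo))
        self.redo_stack.clear()           # a new action invalidates redos

    def undo(self):
        do, undo = self.history.pop()
        self.state = undo(self.state)
        self.redo_stack.append((do, undo))

    def redo(self):
        do, undo = self.redo_stack.pop()
        self.state = do(self.state)
        self.history.append((do, undo))

ed = LinearUndo("")
ed.execute(lambda s: s + "a", lambda s: s[:-1])
ed.execute(lambda s: s + "b", lambda s: s[:-1])
ed.undo()
assert ed.state == "a"
ed.redo()
assert ed.state == "ab"
```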

Journal ArticleDOI
TL;DR: Four scenario machines were designed to embody different approaches to prompting, feedback, and automatic error correction for a “learning-by-doing” training simulator for a commercial, menu-based word processor.
Abstract: A scenario machine limits the user to a single action path through system functions and procedures. Four scenario machines were designed to embody different approaches to prompting, feedback, and automatic error correction for a “learning-by-doing” training simulator for a commercial, menu-based word processor. Compared with users trained directly on the commercial system, scenario machine users demonstrated an overall advantage in the “getting started” stage of learning. Initial training on a “prompting + automatic correction” system was particularly efficient, encouraging a DWIM (or “do what I mean”) approach to training system design. Curiously, training on a “prompting + feedback” system led to relatively impaired performance on a set of transfer of learning tasks. It was suggested that too much training information support may obscure the task coherence of the action scenario itself relative to a design that provides less explicit direction.

Journal ArticleDOI
TL;DR: This paper presents a survey of formal tools, methodologies, and models which have been proposed for developing user interfaces for interactive information systems, and suggests future research directions based on the results of this survey.
Abstract: This paper presents a survey of formal tools, methodologies, and models which have been proposed for developing user interfaces for interactive information systems. The treatment examines issues related to human engineering, human-computer interfacing, behavioural experiments, and user interface design aids. Particular emphasis is placed on user research studies, specification techniques for interactive language modeling, analytical studies of user-system interaction, user models (including cognitive models, conceptual models, and mental models), and user interface management systems. The paper concludes with a brief list of suggested future research directions based on the results of this survey.

Journal ArticleDOI
TL;DR: The results show that speech to a computer is not as ill-formed as one would expect, and people speaking to a computer are more disciplined than when speaking to each other.
Abstract: This paper describes an empirical study of man-computer speech interaction. The goals of the experiment were to find out how people would communicate with a real-time, speaker-independent continuous speech understanding system. The experimental design compared three communication modes: natural language typing, speaking directly to a computer and speaking to a computer through a human interpreter. The results show that speech to a computer is not as ill-formed as one would expect. People speaking to a computer are more disciplined than when speaking to each other. There are significant differences in the usage of spoken language compared to typed language, and several phenomena which are unique to spoken or typed input respectively. Implications for future work on speech understanding systems are considered.

Journal ArticleDOI
TL;DR: This study investigates a framework for knowledge acquisition evaluation and validation; a preliminary version of KITTEN (Knowledge Initiation and Transfer Tools for Experts and Novices) had earlier been described and demonstrated on Apollo workstations.
Abstract: At the previous workshop on Knowledge Acquisition for Knowledge-Based Systems in 1986, criteria for a knowledge support system were discussed, and a preliminary version of KITTEN (Knowledge Initiation and Transfer Tools for Experts and Novices) was described and demonstrated on Apollo workstations. This study is a continuation of the validation studies done by Shaw & Gaines (1983), and investigates a framework for knowledge acquisition evaluation and validation. KITTEN has been evaluated against the first stage of the model, and the results are reported in two domains: spatial interpolation techniques to produce contour maps, and trouble-shooting and maintenance of valves for oil and gas pipelines. Some preliminary results are described on validation experiments to show the extent to which experts agree with each other, with themselves at a later date, and with the results of the processing of their knowledge. Some of the questions asked were: (1) To what extent does an expert find the generated rules meaningful? (2) Do experts agree on their terminology in talking about a topic? (3) To what extent do experts agree among themselves about the topic? (4) Does an expert always use the same terminology? (5) To what extent does each expert agree with the knowledge at a different time?

Journal ArticleDOI
TL;DR: In this paper, the effects of computer-generated realism cues (hidden surfaces removed, multiple light sources, surface shading) on the speed and accuracy with which subjects performed a standard cognitive task (mental rotation) were investigated.
Abstract: Two experiments were performed, one to investigate the effects of computer-generated realism cues (hidden surfaces removed, multiple light sources, surface shading) on the speed and accuracy with which subjects performed a standard cognitive task (mental rotation), the other to study the subjective perceived realism of computer-generated images. In the mental rotation experiment, four angles of rotation, two levels of object complexity, and five combinations of realism cues were varied as subjects performed “same-different” discriminations for pairs of rotated three-dimensional images. Results indicated that mean reaction times were faster for shaded images than for hidden-edge-removed images. In terms of speed of response and response accuracy, significant effects for object complexity and angle of rotation were shown. In the second experiment, subjective ratings of image realism revealed that wireframe images were viewed as less realistic than shaded images and that number of light sources was more important in conveying realism than type of surface shading. Implications of the results for analogue and propositional models of memory organization and integral and non-integral characteristics of realism cues are discussed.

Journal ArticleDOI
TL;DR: The results of a controlled study comparing various menu designs show that the types of tasks to be performed by users must be considered in organizing items in menus and that there may be sustained effects of menu organization with some tasks.
Abstract: Menus are an increasingly popular style of user-system interface. Although many aspects of menu design can affect user performance (e.g. item names and selection methods), the organization of items in menus is a particularly salient aspect of their design. Unfortunately, empirical studies of menu layout have yet to resolve the basic question of how menus should be organized to produce optimal performance. Furthermore, a disturbingly common finding has been that any initial effects of menu layout disappear with practice. Thus it is tempting to conclude that menu organization is not important or that it only affects performance during learning. In this paper we present some reasons to doubt this conclusion. In particular, we have found persistent effects of layout with multiple-item selection tasks, in contrast with studies employing a single-item selection paradigm. The results of a controlled study comparing various menu designs (fast-food keyboards) show that the types of tasks to be performed by users must be considered in organizing items in menus and that there may be sustained effects of menu organization with some tasks. In addition, the results of this study support the use of a formal methodology based on user knowledge for menu design. By comparing the performance of subjects using menus designed using our methodology with the performance of subjects using “personalized” menus, we were able to demonstrate the general superiority of our method for designing menus, and for tailoring menus to meet task requirements as well.

Journal ArticleDOI
TL;DR: This paper discusses the role of certain ICAI components in generating and maintaining the genetic graph, and describes, in detail, one approach to student modelling which is based on Goldstein's genetic graph.
Abstract: In this paper we examine the student model component of an intelligent computer-assisted instruction (ICAI) system. First, we briefly discuss the desirable capabilities of the student model and then describe, in detail, one approach to student modelling which is based on Goldstein's genetic graph. We expand Goldstein's definition and test its feasibility in new domains, since his original domain was a limited, straightforward adventure game. In addition to modelling two diverse domains, subtraction and ballet, we also discuss the role of certain ICAI components in generating and maintaining the genetic graph.

Journal ArticleDOI
TL;DR: The concepts of the Layered Protocol reference model of user interaction are developed through consideration of the design process, using a multimodal spatial interaction as an example, and the layered model is considered in the light of published guidelines for user interfaces.
Abstract: The concepts of the Layered Protocol reference model of user interaction are developed through consideration of the design process, using a multimodal spatial interaction as an example. Specific issues are addressed: multiplexing (which is seen as a way of describing interface modes); feedback, with special consideration of voice recognition systems; type-ahead and asynchronous interaction; embedded help and the development of autonomous means for the computer to assist the user; the tension among robustness, modularity, and efficiency; learning and transfer of training; standardization issues; and evaluation of interfaces, with examples from the Apple Macintosh and the Adagio workstation. Finally, the layered model is considered in the light of published guidelines for user interfaces.

Journal ArticleDOI
TL;DR: In this article, a knowledge acquisition tool, called Aquinas, uses knowledge elicitation and representation techniques and consultation review mechanisms to help alleviate the problem when modifying knowledge bases is that changes may degrade system performance.
Abstract: A general problem when modifying knowledge bases is that changes may degrade system performance. This is especially a problem when the knowledge base is large; it may be unclear how changing one item in a knowledge base containing thousands of items will affect overall system performance. Aquinas, a knowledge acquisition tool, uses knowledge elicitation and representation techniques and consultation review mechanisms to help alleviate this problem. The consultation review mechanisms are discussed here. We are experimenting with ways to use consultations and test cases to refine the information in an Aquinas knowledge base. The domain expert can use interactive graphics to specify the expected results. Modifications to the knowledge base may be tested against previous consultations; adjustments are suggested that make the results of all previous consultations as well as the current consultation correlate better with the expert's expectations. New traits are synthesized that would improve the performance of all previous consultations. New test cases are suggested that cover aspects missed by previous test cases. While we are just beginning to experiment with these techniques, they promise to provide help in improving problem-solving performance and gaining problem-solving insight.

Journal ArticleDOI
TL;DR: In this paper, a discussion of a many-valued fuzzy logic and its impact on fuzzy set theory, namely on the operations with fuzzy sets, is presented; it is shown that general t-norms are not suitable as a basis for the operations with fuzzy sets, and some general classes of operations with membership grades are presented.
Abstract: The paper is a discussion of a many-valued fuzzy logic which is syntactico-semantically complete, and its impact on fuzzy set theory, namely on the operations with fuzzy sets. Arguments are given that all the operations with membership grades must fulfil the so-called fitting condition. It follows that general t-norms are not suitable as a basis for the operations with fuzzy sets. Some general classes of operations with membership grades are presented.
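
For reference, the sketch below shows three standard t-norms applied pointwise as candidate fuzzy-set intersections; the paper's argument is that such general t-norms need not satisfy its fitting condition, so the membership values here are purely illustrative.

```python
def t_min(a, b):          return min(a, b)                 # Godel / minimum
def t_product(a, b):      return a * b                     # product
def t_lukasiewicz(a, b):  return max(0.0, a + b - 1.0)     # Lukasiewicz

# Two hypothetical fuzzy sets over the same universe.
A = {"x1": 0.8, "x2": 0.4, "x3": 1.0}
B = {"x1": 0.5, "x2": 0.9, "x3": 0.2}
for name, t in [("min", t_min), ("product", t_product), ("lukasiewicz", t_lukasiewicz)]:
    print(name, {k: round(t(A[k], B[k]), 2) for k in A})
```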

Journal ArticleDOI
Ronald R. Yager1
TL;DR: The problem of logical inference is solved via mathematical programming, and a way of using this programming approach to reason in the face of default rules is investigated.
Abstract: We suggest solving the problem of logical inference via the use of mathematical programming. We investigate how this programming approach can be used to reason in the face of default rules.
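
A small worked example of the inference-as-programming idea, under assumptions of mine: propositional clauses become linear constraints over truth variables in [0, 1], and a query is entailed when its minimum feasible value is 1. The use of scipy's linprog is my choice of solver, not something the paper prescribes.

```python
from scipy.optimize import linprog

# Variables: x0 = p, x1 = q.
# Knowledge base: p, and (not p or q), i.e. p -> q.
#   clause "p":          x0 >= 1            ->  -x0      <= -1
#   clause "not p or q": (1 - x0) + x1 >= 1 ->   x0 - x1 <=  0
A_ub = [[-1, 0],
        [1, -1]]
b_ub = [-1, 0]

# Entailment test for q: minimize x1 subject to the KB constraints.
res = linprog(c=[0, 1], A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
print("q entailed:", res.x[1] >= 1 - 1e-9)   # True: the minimum of x1 is 1
```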

Journal ArticleDOI
TL;DR: This paper describes a methodology for the creation of knowledge-based computer-aided learning lessons that has lower developmental and operational overheads than alternatives and is also able to perform far more flexible evaluations of the student's performance.
Abstract: This paper describes a methodology for the creation of knowledge-based computer-aided learning lessons. Unlike previous approaches, the knowledge base is utilized only for restricted aspects of the lesson: the management of flow of control through a body of instructional materials, and the evaluation of the student’s understanding of the subject matter. This has many advantages. While the approach has lower developmental and operational overheads than alternatives, it is also able to perform far more flexible evaluations of the student’s performance. As flow of control is managed by a knowledge-based component with reference to a detailed analysis of the student’s understanding of the subject matter, lessons adapt to each student’s individual understanding and aptitude within a domain.

Journal ArticleDOI
TL;DR: This paper describes a system, KnAc, that modifies an existing knowledge base through a discourse with a domain expert and anticipates modifications to existing entity descriptions.
Abstract: The assimilation of information obtained from domain experts into an existing knowledge base is an important facet of the knowledge acquisition process. Knowledge assimilation requires an understanding of how the new information corresponds to that already contained in the knowledge base and how this existing information must be modified so as to reflect the expert's view of the domain. This paper describes a system, KnAc, that modifies an existing knowledge base through a discourse with a domain expert. Using heuristic knowledge about the knowledge acquisition process, KnAc anticipates modifications to existing entity descriptions. These anticipated modifications, or expectations, provide a context in which to assimilate new domain information.

Journal ArticleDOI
TL;DR: SALT is a knowledge acquisition framework for the development of expert systems that use propose-and-revise as their problem-solving method: incrementally constructing a tentative design, identifying constraints on the design, and revising design decisions in response to constraint violations.
Abstract: SALT provides a knowledge acquisition framework for the development of expert systems that use propose-and-revise as their problem-solving method. These systems incrementally construct a tentative design, identify constraints on the design and revise design decisions in response to constraint violations. By having an understanding of the specific problem-solving method used to integrate the knowledge it acquires, SALT has previously been shown to possess a number of advantages over less restrictive programming languages. We have applied SALT to a new type of propose-and-revise task, and have identified areas where SALT was too restrictive to adequately permit acquisition of domain knowledge or efficient utilization of that knowledge. Addressing these problems has led to a more “general” SALT and to a better understanding of when it is an appropriate tool.
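
To illustrate the problem-solving method SALT assumes, here is a toy propose-and-revise loop with hypothetical domain values and fixes: a tentative design is checked against constraints, and revision knowledge repairs each violation until the design is consistent.

```python
def propose_and_revise(design, constraints, fixes, max_rounds=20):
    """Iteratively repair a tentative design until no constraint is violated."""
    for _ in range(max_rounds):
        violated = [name for name, ok in constraints.items() if not ok(design)]
        if not violated:
            return design
        for name in violated:
            design = fixes[name](design)   # revise the offending decision
    raise RuntimeError("no consistent design found within the round limit")

design = {"motor_hp": 5, "load": 60}
constraints = {"enough_power": lambda d: d["motor_hp"] * 10 >= d["load"]}
fixes = {"enough_power": lambda d: {**d, "motor_hp": d["motor_hp"] + 1}}
print(propose_and_revise(design, constraints, fixes))  # {'motor_hp': 6, 'load': 60}
```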

Journal ArticleDOI
TL;DR: In this paper, the authors classified models of fault diagnosis by expert human operators into two types: macro and micro models, which describe general problem-solving rules or strategies that are abstracted from observations of expert fault diagnostic behaviour.
Abstract: Models of fault diagnosis by expert human operators are classified into two types: macro and micro. Macro models describe general problem-solving rules or strategies that are abstracted from observations of expert fault diagnostic behaviour. Micro models are concerned with the detailed knowledge and the mechanisms underlying the diagnostic actions. This paper proposes a micro model developed from observations of fault diagnosis performance on a marine powerplant simulator. Based on experimental data, including protocols and operator action sequences, two types of knowledge are identified: rule-based symptom knowledge and hierarchical system knowledge. The diagnostic process seems to proceed with frequent reference to these two types of knowledge. Characteristics of the diagnostic process are discussed. A conceptual entity called a hypothesis frame is employed to account for observed characteristics. The diagnostic process involves choosing an appropriate frame that matches the known symptoms and evaluating the frame against the system state. This model of fault diagnosis performance is employed to explain protocol data and operator actions.
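
A small sketch of the hypothesis-frame idea, with invented faults and symptoms: each frame pairs a fault hypothesis with the symptoms it explains, diagnosis selects the frame that best matches the observed symptoms, and whatever the frame leaves unexplained prompts further evaluation.

```python
FRAMES = [
    {"fault": "fuel pump failure", "symptoms": {"low_pressure", "rpm_drop"}},
    {"fault": "cooling blockage",  "symptoms": {"high_temp", "low_flow"}},
    {"fault": "injector clogging", "symptoms": {"rpm_drop", "high_temp"}},
]

def diagnose(observed):
    """Pick the frame with the best symptom overlap; report what it misses."""
    best = max(FRAMES, key=lambda f: len(f["symptoms"] & observed))
    return best["fault"], observed - best["symptoms"]

fault, unexplained = diagnose({"low_pressure", "rpm_drop", "high_temp"})
print(fault, "| unexplained:", unexplained or "none")
```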

Journal ArticleDOI
TL;DR: Number of responses was the most important factor affecting execution time; the highest accuracy was found with the Upcoming Selections menu, but that menu also resulted in the slowest execution time.
Abstract: Several menu configurations were designed to provide an independent assessment of the influence of breadth, depth and number of responses on computer menu search performance. The menu hierarchy consisted of a binary tree of category descriptor terms with 64 terminal options. Standard menus tested were 2 options on each of 6 sequential frames (2^6) and 4 options on 3 frames (4^3). Another menu (Upcoming Selections) was developed with 6 frames in which the binary choice on each frame was shown in the presence of options at the next menu level. Menus developed for separating the effects of number of frames and responses were configured with two menu levels per frame, and responses were required to either one or both levels. Number of responses was the most important factor affecting execution time. The highest accuracy was found with the Upcoming Selections menu, but that menu also resulted in the slowest execution time. A modified Upcoming Selections menu was developed which allowed participants to respond to each level or to bypass the higher level on each frame. Considering both speed and accuracy, that configuration yielded the best performance of all menus tested.