
Showing papers in "International Journal of Human-Computer Studies / International Journal of Man-Machine Studies" in 1992


Journal ArticleDOI
TL;DR: The cognitive walkthrough methodology, described in detail, is an adaptation of the design walkthrough techniques that have been used for many years in the software engineering community and is based on a theory of learning by exploration presented in the paper.
Abstract: This paper presents a new methodology for performing theory-based evaluations of user interface designs early in the design cycle. The methodology is an adaptation of the design walkthrough techniques that have been used for many years in the software engineering community. Traditional walkthroughs involve hand simulation of sections of code to ensure that they implement specified functionality. The method we present involves hand simulation of the cognitive activities of a user, to ensure that the user can easily learn to perform tasks that the system is intended to support. The cognitive walkthrough methodology, described in detail, is based on a theory of learning by exploration presented in this paper. A summary of preliminary results on the method's effectiveness and comparisons with other design methods is also provided.

778 citations


Journal ArticleDOI
TL;DR: This paper shows that if a given concept is approximated by one set, the same result as given by the α-cut in fuzzy set theory is obtained; if the concept is approximated by two sets, both the algebraic and probabilistic rough set approximations can be derived.
Abstract: This paper explores the implications of approximating a concept based on the Bayesian decision procedure, which provides a plausible unification of the fuzzy set and rough set approaches for approximating a concept. We show that if a given concept is approximated by one set, the same result as given by the α-cut in fuzzy set theory is obtained. On the other hand, if a given concept is approximated by two sets, we can derive both the algebraic and probabilistic rough set approximations. Moreover, based on the well known principle of maximum (minimum) entropy, we give a useful interpretation of fuzzy intersection and union. Our results enhance the understanding and broaden the applications of both fuzzy and rough sets.
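
As a hedged aside for readers unfamiliar with the notation involved (the symbols below are generic, not necessarily the paper's), the α-cut and the probabilistic rough-set approximations referred to above are conventionally defined as:

```latex
% Standard definitions for a fuzzy set A and a concept C over a universe U;
% [x] denotes the equivalence class of x and 0 <= \beta < \alpha <= 1 are thresholds.
A_\alpha = \{\, x \in U : \mu_A(x) \ge \alpha \,\}                         % \alpha-cut of the fuzzy set A
\underline{apr}_\alpha(C) = \{\, x \in U : P(C \mid [x]) \ge \alpha \,\}   % probabilistic lower approximation
\overline{apr}_\beta(C) = \{\, x \in U : P(C \mid [x]) > \beta \,\}        % probabilistic upper approximation
```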

572 citations


Journal ArticleDOI
TL;DR: This paper presents a comprehensive sequence of three incremental, edited nearest neighbor algorithms that tolerate attribute noise, determine relative attribute relevances, and accept instances described by novel attributes.
Abstract: Incremental variants of the nearest neighbor algorithm are a potentially suitable choice for incremental learning tasks. They have fast learning rates, low updating costs, and have recorded comparatively high classification accuracies in several applications. Although the nearest neighbor algorithm suffers from high storage requirements, modifications exist that significantly reduce this problem. Unfortunately, its applicability is limited by several other serious problems. First, storage reduction variants of this algorithm are highly sensitive to noise. Second, these algorithms are sensitive to irrelevant attributes. Finally, the nearest neighbor algorithm assumes that all instances are described by the same set of attributes. This inflexibility causes problems when subsequently processed instances introduce novel attributes that are relevant to the learning task. In this paper, we present a comprehensive sequence of three incremental, edited nearest neighbor algorithms that tolerate attribute noise, determine relative attribute relevances, and accept instances described by novel attributes. We outline evidence indicating that these instance-based algorithms are robust incremental learners.
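
For concreteness, here is a minimal, hypothetical sketch of the plain incremental nearest-neighbor scheme the abstract starts from (classify each incoming instance against the stored ones, then store it); it does not reproduce the paper's noise-tolerant or attribute-weighted variants, and all names are illustrative.

```python
import math

class IncrementalNN:
    """Toy incremental nearest-neighbor learner: stores labelled instances
    as they arrive and predicts the label of the closest stored instance."""

    def __init__(self):
        self.instances = []  # list of (feature_vector, label)

    def _distance(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def predict(self, x):
        if not self.instances:
            return None
        _, label = min(self.instances, key=lambda inst: self._distance(inst[0], x))
        return label

    def update(self, x, label):
        # Incremental step: classify first, then store the new instance.
        prediction = self.predict(x)
        self.instances.append((x, label))
        return prediction

# Usage: feed instances one at a time, as an incremental learner would.
learner = IncrementalNN()
stream = [((0.1, 0.2), "A"), ((0.9, 0.8), "B"), ((0.15, 0.25), "A")]
for features, label in stream:
    print(learner.update(features, label), "->", label)
```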

418 citations


Journal ArticleDOI
Ronald R. Yager1
TL;DR: Two possible semantics associated with the OWA operator are introduced, the first being a kind of generalized logical connective and the second being a new type of probabilistic expected value.
Abstract: We discuss the idea of ordered weighted averaging (OWA) operators. These operators provide a family of aggregation operators lying between the “and” and the “or”. We introduce two possible semantics associated with the OWA operator, the first being a kind of generalized logical connective and the second being a new type of probabilistic expected value. We suggest some applications of these operators. Among the applications we discuss are those involving multicriteria decision making under uncertainty and search procedures in games. We provide a formulation of OWA operators that can be used in environments in which the underlying scale is simply an ordinal one.
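
As background, the textbook OWA definition applies a fixed weight vector to the arguments after sorting them in descending order; the small sketch below (generic, not specific to this paper) shows how the weights move the operator between the "or" (max) and the "and" (min).

```python
def owa(values, weights):
    """Ordered weighted averaging: sort the arguments in descending order,
    then take the weighted sum with the fixed weight vector.
    Assumes the weights are non-negative and sum to 1."""
    assert len(values) == len(weights)
    ordered = sorted(values, reverse=True)
    return sum(w * b for w, b in zip(weights, ordered))

scores = [0.7, 0.2, 0.9]
print(owa(scores, [1.0, 0.0, 0.0]))   # all weight on the largest value: behaves like "or" (max) -> 0.9
print(owa(scores, [0.0, 0.0, 1.0]))   # all weight on the smallest value: behaves like "and" (min) -> 0.2
print(owa(scores, [1/3, 1/3, 1/3]))   # uniform weights: plain average
```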

204 citations


Journal ArticleDOI
TL;DR: The study found that the expert and novice behavior was similar in terms of modelling facets like entity, identifier, descriptor and binary relationship, somewhat different in modelling ternary relationship, but quite different in the modelling of unary relationship and category.
Abstract: This paper explores the similarities and differences between experts and novices engaged in a conceptual data modelling task, a critical part of overall database design, using data gathered in the form of think-aloud protocols. It develops a three-level process model of the subjects' behavior and shows how experts and novices differ in applying it. The study found that the experts focused on generating a holistic understanding of the problem before developing the conceptual model. They were able to categorize problem descriptions into standard abstractions. The novices tended to have more errors in their solutions largely due to their inability to integrate the various parts of the problem description and map them into appropriate knowledge structures. The study also found that the expert and novice behavior was similar in terms of modelling facets like entity, identifier, descriptor and binary relationship, somewhat different in modelling ternary relationship, but quite different in the modelling of unary relationship and category. These findings are discussed in relation to the results of previous expert-novice studies in other domains.

137 citations


Journal ArticleDOI
TL;DR: A baseline description of a cognitive model that has been successfully implemented on high-speed, low-altitude navigation fighter plane missions illustrates designs for an intelligent assistance system for future French combat aircraft.
Abstract: A baseline description of a cognitive model that has been successfully implemented for high-speed, low-altitude navigation fighter plane missions illustrates designs for an intelligent assistance system for future French combat aircraft. The outcomes are based on several empirical studies. Task complexity (risk, uncertainty, time pressure) is extreme and provides a prototypical example of a rapid process control situation that poses specific assistance problems. The paper is divided into three sections. (1) A general review discusses implications of the specific requirements for coupling an intelligent assistance system to pilots. Special attention is paid to understanding and coherence of the aid, both of which directly influence the nature of the system. (2) An empirical analysis of missions carried out by novice and experienced pilots forms the basis for a cognitive model of in-flight navigation problem solving. Because of time pressure and risk, pilots have as much difficulty applying solutions as diagnosing problems. Pilots tend to develop a sophisticated model of the situation in order to anticipate problems and actively avoid or minimize problem difficulty. In contrast, poor solutions tend to be found for unexpected problems and generally result in abandonment of the mission and/or a crash. (3) The cognitive model described above serves as the basis for a computer cognitive model for flying high-speed, low-altitude navigation missions. The model splits functional knowledge into two levels: the local level deals with sub-goals and short-term activities; the global level deals with mission objectives and handles medium- and long-term activities. A resource manager coordinates the two levels. The program uses an AI actor programming style. This computer cognitive model serves to develop an intelligent navigation assistance system which can function as an automaton or as a tactical support system.

125 citations


Journal ArticleDOI
TL;DR: Cognitive problem-solving by novice systems analysts during a requirements analysis task was investigated by protocol analysis; good performance was associated with well-formed conceptual models and good reasoning/testing abilities.
Abstract: Cognitive problem-solving by novice systems analysts during a requirements analysis task was investigated by protocol analysis. Protocols were collected from 13 subjects who analysed a scheduling problem. Reasoning, planning, conceptual modelling and information gathering behaviours were recorded and subjects' solutions were evaluated for completeness and accuracy. The protocols showed an initial problem scoping phase followed by more detailed reasoning. Performance in analysis was not linked to any one factor, although reasoning was correlated with success. Poor performance could be ascribed to failure to scope the problem, poor formation of a conceptual model of the problem domain, or insufficient testing of hypotheses. Good performance was associated with well-formed conceptual models and good reasoning/testing abilities. The implications of these results for structured systems development methods and Computer-Aided Software Engineering (CASE) tools are discussed.

108 citations


Journal ArticleDOI
TL;DR: The paper contributes a domain-independent taxonomy of abstract explanatory utterances, a taxonomy of multisentence explanations based on these utterance classes, a classification of reactions readers may have to explanations, and an illustration of how these classifications can be applied computationally.
Abstract: Knowledge-based systems that interact with humans often need to define their terminology, elucidate their behavior or support their recommendations or conclusions. In general, they need to explain themselves. Unfortunately, current computer systems, if they can explain themselves at all, often generate explanations that are unnatural, ill-connected or simply incoherent. They typically have only one method of explanation which does not allow them to recover from failed communication. At a minimum, this can irritate an end-user and potentially decrease their productivity. More dangerous, poorly conveyed information may result in misconceptions on the part of the user which can lead to bad decisions or invalid conclusions, which may have costly or even dangerous implications. To address this problem, we analyse human-produced explanations with the aim of transferring explanation expertise to machines. Guided by this analysis, we present a classification of explanatory utterances based on their content and communicative function. We then use these utterance classes and additional text analysis to construct a taxonomy of text types. This text taxonomy characterizes multisentence explanations according to the content they convey, the communicative acts they perform, and their intended effect on the addressee's knowledge, beliefs, goals and plans. We then argue that the act of explanation presentation is an action-based endeavor and introduce and define an integrated theory of communicative acts (rhetorical, illocutionary, and locutionary acts). To illustrate this theory we formalize several of these communicative acts as plan operators and then show their use by a hierarchical text planner (TEXPLAN—Textual EXplanation PLANner) that composes natural language explanations. Finally, we classify a range of reactions readers may have to explanations and illustrate how a system can respond to these given a plan-based approach. Our research thus contributes (1) a domain-independent taxonomy of abstract explanatory utterances, (2) a taxonomy of multisentence explanations based on these utterance classes and (3) a classification of reactions readers may have to explanations as well as (4) an illustration of how these classifications can be applied computationally.

76 citations


Journal ArticleDOI
TL;DR: The approach models the rule-base as a Petri-Net and uses the structural properties of the net for verification and procedures for integrity checks at both local and chained inference levels are described.
Abstract: The production rule formalization has become a popular method for knowledge representation in expert systems. Current development environments for rule-based systems provide few automated mechanisms for verifying the consistency and completeness of rule bases as they are developed in an evolutionary manner. We describe an approach to verifying the integrity of a rule-based system. The approach models the rule-base as a Petri-Net and uses the structural properties of the net for verification. Procedures for integrity checks at both local and chained inference levels are described.
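
As a rough illustration of what a chained-inference integrity check inspects (a simple graph traversal here, not the paper's Petri-net formulation; all names and rules are hypothetical), the sketch below flags circular rule chains in which a fact transitively depends on itself.

```python
def find_circular_chains(rules):
    """Detect circular inference chains in a rule base.
    `rules` maps a consequent fact to the set of antecedent facts of the
    rule that derives it. A cycle means a fact (transitively) depends on itself."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {fact: WHITE for fact in rules}
    cycles = []

    def visit(fact, path):
        colour[fact] = GREY
        for antecedent in rules.get(fact, set()):
            if colour.get(antecedent, WHITE) == GREY:
                cycles.append(path + [antecedent])       # chain loops back on itself
            elif colour.get(antecedent, WHITE) == WHITE and antecedent in rules:
                visit(antecedent, path + [antecedent])
        colour[fact] = BLACK

    for fact in rules:
        if colour[fact] == WHITE:
            visit(fact, [fact])
    return cycles

# Example rule base: a -> b, b -> c, c -> a forms a circular chain.
rules = {"b": {"a"}, "c": {"b"}, "a": {"c"}}
print(find_circular_chains(rules))
```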

73 citations


Journal ArticleDOI
Ronald R. Yager1
TL;DR: The concept of higher order criteria in the decision making problem is introduced; these criteria capture situations in which we desire to satisfy a criterion if it is possible without sacrificing the satisfaction of other primary criteria.
Abstract: We introduce the concept of higher order criteria in the decision making problem. These types of criteria are manifested by situations in which we desire to satisfy a criterion if it is possible without sacrificing the satisfaction of other primary criteria.

70 citations


Journal ArticleDOI
TL;DR: A framework for designing an organizational decision support system based on a network of knowledge-based systems is proposed; the network is also utilized to provide effective support for formal multi-participant decision making.
Abstract: Decision support systems have traditionally been discussed within the context of individual or group decision making. In this paper we study decision support systems from an organizational perspective. We propose a framework for designing an organizational decision support system that is based on a network of knowledge-based systems. Nodes of this network interact with each other, as well as various other organizational systems, to provide comprehensive decision support. This network is also utilized to provide effective support for formal multi-participant decision making.

Journal ArticleDOI
TL;DR: The discussion focuses on the differences in cognitive models as a function of the amount and type of HCI design experience and the role of cognitive models in H CI design and in communications within a multidisciplinary design team.
Abstract: A two-part experiment investigated human-computer interface (HCI) experts' organization of declarative knowledge about the HCI. In Part 1, two groups of experts in HCI design—human factors experts and software development experts—and a control group of non-experts sorted 50 HCI concepts concerned with display, control, interaction, data manipulation and user knowledge into categories. In the second part of the experiment, the three groups judged the similarity of two sets of HCI concepts related to display and interaction, respectively. The data were transformed into measures of psychological distance and were analyzed using Pathfinder, which generates network representations of the data, and multidimensional scaling (MDS), which fits the concepts in a multidimensional space. The Pathfinder networks from the first part of the experiment differed in organization between the two expert groups, with human factors experts' networks consisting of highly interrelated subnetworks and software experts' networks consisting of central nodes and fewer, less interconnected sub-networks. The networks also differed across groups in concepts linked with such concepts as graphics, natural language, function keys and speech recognition. The networks of both expert groups showed much greater organization than did the non-experts' network. The network and MDS representations of the concepts for the two expert groups showed somewhat greater agreement in Part 2 than in Part 1. However, the MDS representations from Part 2 suggested that software experts organized their concepts on dimensions related to technology, implementation and user characteristics, whereas the human factors experts organized their concepts more uniformly according to user characteristics. The discussion focuses on (1) the differences in cognitive models as a function of the amount and type of HCI design experience and (2) the role of cognitive models in HCI design and in communications within a multidisciplinary design team.

Journal ArticleDOI
TL;DR: This paper discusses the theoretical issues and methodological procedures pertaining to the analysis of verbal protocols collected from physicians engaged in medical problem solving and proposes how this type of analysis plays an important role in the development of medical artificial intelligence systems and educational efforts directed toward the development of expertise in medical problem solving.
Abstract: One of the most common methods of codifying and interpreting human knowledge is through the use of verbal protocol analysis. Although the application of this methodology has increased in recent years, few detailed examples are readily available in the literature. This paper discusses the theoretical issues and methodological procedures pertaining to the analysis of verbal protocols collected from physicians engaged in medical problem solving. We first present a brief historical perspective on verbal protocol methodology. We then discuss how we have come to view the task of medical diagnosis both in general and in particular with respect to a specific specialty—congenital heart disease. Next, we describe and provide examples of our methodology for coding verbal protocols of physicians into abstract, but meaningful objects which are elements of a theory of diagnostic reasoning. In particular, we demonstrate how the coding scheme can represent an important aspect of medical problem solving behavior called a line of reasoning. We conclude by proposing how such analysis is important to understanding the psychology of medical problem solving and how this type of analysis plays an important role in the development of medical artificial intelligence systems and educational efforts directed toward the development of expertise in medical problem solving.

Journal ArticleDOI
TL;DR: This paper reviews various errors that have been described by comparing human behavior to the norms of probability, causal connection and logical deduction and cautions researchers and practitioners in referring to well known biases and errors.
Abstract: This paper reviews various errors that have been described by comparing human behavior to the norms of probability, causal connection and logical deduction. For each error we review evidence on whether the error has been demonstrated to occur. For many errors, the occurrence of a bias has not been demonstrated; for others, a bias does occur, but arguments can be made that the bias is not always an error. Based on the conclusions of this review, we caution researchers and practitioners in referring to well known biases and errors.

Journal ArticleDOI
TL;DR: It is found that the expert's commentary on the decisions and activities required to solve a programming problem helped students gain integrated understanding of programming, and the contention that explicit explanations can help students learn complex problem-solving skills is supported.
Abstract: This paper reports an experimental investigation of the effectiveness of case studies for teaching programming. A case study provides an “expert commentary” on the complex problem-solving skills used in constructing a solution to a computer programming problem as well as one or more worked-out solutions to the problem. To conduct the investigation, we created case studies of programming problems and evaluated what high school students in ten Pascal programming courses learned from them. We found that the expert's commentary on the decisions and activities required to solve a programming problem helped students gain integrated understanding of programming. Furthermore, the expert's commentary imparted more integrated understanding of programming than did the worked-out solution to the problem without the expert's explanation. These results support the contention that explicit explanations can help students learn complex problem-solving skills. We developed case studies for teaching students to solve programming problems for the same reasons that they have been developed in other disciplines. The case method for teaching complex problem solving was first used at Harvard College in 1870 and has permeated curricula for business, law and medicine across the country. These disciplines turned to the case method to communicate the complexity of real problems, to illustrate the process of dealing with this complexity and to teach analysis and decision making skills appropriate for these problems.

Journal ArticleDOI
TL;DR: A model relating the average number of corrections to the recognition rate has been developed and provides a good fit to the data; the repetition-with-elimination strategy required fewer trials to correct recognition errors.
Abstract: In a noisy environment speech recognizers make mistakes. In order that these errors can be detected, the system can synthesize the word recognized and the user can respond by saying “correction” when the word was not recognized correctly. The mistake can then be corrected. Two error-correcting strategies have been investigated. In one, repetition-with-elimination, when a mistake has been detected the system eliminates its last response from the active vocabulary and then the user repeats the word that has been misrecognized. In the other, elimination-without-repetition, the system suggests the next-most-likely word based on the output of its pattern-matching algorithm. It was found that the former strategy, with the user repeating the word, required fewer trials to correct the recognition errors. A model which relates the average number of corrections to the recognition rate has been developed which provides a good fit to the data.
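
The abstract does not state the model's form, so purely as an illustrative assumption, the sketch below estimates the average number of correction attempts per error by treating each repetition as an independent recognition event with the device's recognition rate, ignoring the benefit of eliminating the misrecognized word from the vocabulary.

```python
def expected_corrections(recognition_rate, max_attempts=10):
    """Expected number of correction attempts per misrecognized word,
    assuming each repetition is recognized independently with probability
    `recognition_rate` (a simplification; the paper's strategies also
    eliminate the wrongly recognized word from the active vocabulary)."""
    p = recognition_rate
    # Truncated geometric expectation: attempts until first success, capped at max_attempts.
    expected = sum(k * p * (1 - p) ** (k - 1) for k in range(1, max_attempts + 1))
    expected += max_attempts * (1 - p) ** max_attempts  # remaining tail mass lumped at the cap
    return expected

for rate in (0.80, 0.90, 0.95, 0.99):
    print(f"recognition rate {rate:.2f}: ~{expected_corrections(rate):.2f} attempts per error")
```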


Journal ArticleDOI
TL;DR: It will be shown that connectionist systems benefit from the explicit coding of relations and the use of highly structured networks in order to allow explanation and explanation components (ECs) in connectionist semantic networks.
Abstract: Explanation is an important function in symbolic artificial intelligence (AI). For instance, explanation is used in machine learning, in case-based reasoning and, most important, the explanation of the results of a reasoning process to a user must be a component of any inference system. Experience with expert systems has shown that the ability to generate explanations is absolutely crucial for the user acceptance of AI systems. In contrast to symbolic systems, neural networks have no explicit, declarative knowledge representation and therefore have considerable difficulties in generating explanation structures. In neural networks, knowledge is encoded in numeric parameters (weights) and distributed all over the system. It is the intention of this paper to discuss the ability of neural networks to generate explanations. It will be shown that connectionist systems benefit from the explicit coding of relations and the use of highly structured networks in order to allow explanation and explanation components (ECs). Connectionist semantic networks (CSNs), i.e. connectionist systems with an explicit conceptual hierarchy, belong to a class of artificial neural networks which can be extended by an explanation component which gives meaningful responses to a limited class of “How” questions. An explanation component of this kind is described in detail.

Journal ArticleDOI
TL;DR: Program design methodologies which claim to improve the design process by providing strategies to programmers for structuring solutions to computer problems are examined to develop profiles of the solutions produced by different methodologies and to develop comparisons among the various methodologies.
Abstract: This research examined program design methodologies which claim to improve the design process by providing strategies to programmers for structuring solutions to computer problems. In this experiment, professional programmers were provided with the specifications for each of three non-trivial problems and asked to produce pseudo-code for each specification according to the principles of a particular design methodology. The measures collected were the time to design and code, percent complete, and complexity, as measured by several metrics. These data were used to develop profiles of the solutions produced by different methodologies and to develop comparisons among the various methodologies. These differences are discussed in light of their impact on the comprehensibility, reliability, and maintainability of the programs produced.

Journal ArticleDOI
TL;DR: Tools and techniques can be used to meet the need for a legitimate, integrated approach to validating and testing KBSs as they are developed, in order to determine system reliability.
Abstract: Symbolic problem solving, specifically with knowledge-based systems (KBSs), in new and uncertain problem domains is a difficult task. An essential part of developing systems for these environments is determining whether the system is adequately and reliably solving the problem. KBSs that utilize heuristics have a development cycle not conducive to formal control and have high potential for error or incorrect characterizations of the problem they are meant to solve. A method of validating and testing such systems to increase and quantify their reliability is needed. Software engineering strategies for assessing and projecting the reliability of traditional software have been developed after years of experience with the cause and effect of errors. Since KBSs are new, methods for assessing and projecting their reliability are not as well understood. However, validation techniques from traditional software development can be applied to KBSs. Validation and testing techniques unique to KBSs can also be used to determine system reliability. In essence, tools and techniques can be used to meet the need for a legitimate, integrated approach to validation and testing of KBSs as they are developed.

Journal ArticleDOI
TL;DR: The results of the study highlight that users' domain-related expertise, system experience, gender, intelligence, and cognitive style have an important influence on one or more dimensions of DSS effectiveness; however, their relative importance varies with the outcome measure of choice.
Abstract: Despite extensive research on various factors affecting the acceptance and effectiveness of decision support systems (DSS), considerable ambiguity still exists regarding the role and influence of user characteristics. Although researchers have advocated DSS effectiveness as a multi-dimensional construct, specific guidelines regarding its dimensions or the approach to derive it are lacking. The study reported here attempts to contribute to the existing body of knowledge by proposing a multi-dimensional construct for DSS effectiveness and identifying a comprehensive set of user characteristics that influences DSS effectiveness. It critically examines the relationship between these two sets through the canonical correlation analysis technique. Thirty-seven students, taking a graduate level course in financial management, in a large university located in the northeastern part of the United States participated in the study, acting as surrogates for real-world managers. The results of the study highlight that users' domain-related expertise, system experience, gender, intelligence, and cognitive style have an important influence on one or more dimensions of DSS effectiveness. However, their relative importance varies with the outcome measure of choice.

Journal ArticleDOI
TL;DR: Three experiments on the cognitive aspects of computer graphics displays found that displays containing a topologically complete diagram, presenting task-relevant state information at the corresponding point on the diagram, appear to be superior to displays that violate these principles.
Abstract: Computer graphics displays make it possible to display both the topological structure of a system in the form of a schematic diagram and information about its current state using color-coding and animation. Such displays should be especially valuable as user interfaces for decision support systems and expert systems for managing complex systems. This report describes three experiments on the cognitive aspects of such displays. Two experiments involved both fault diagnosis and system operation using a very simple artificial system; one involved a complex real system in a fault diagnosis task. The major factors of interest concerned the topological content of the display—principally, the extent to which the system structural relationships were visually explicit, and the availability and visual presentation of state information. Displays containing a topologically complete diagram presenting task-relevant state information at the corresponding point on the diagram appear to be superior to displays that violate these principles. A short set of guidelines for the design of such displays is listed.

Journal ArticleDOI
TL;DR: Students and professional programmers were asked to make either simple or complex modifications to programs that had been generated using each of three different program structures, suggesting that problem structure, problem content, complexity of modification, and programmer experience all play a crucial role in determining performance and the representation formed.
Abstract: A number of claims have been made by the developers of program design methodologies, including the claim that the code produced by following the methodologies will be more understandable and more easily maintained than code produced in other ways. However, there has been little empirical research to test these claims. In this study, student and professional programmers were asked to make either simple or complex modifications to programs that had been generated using each of three different program structures. Data on the programmers' modification performance, cognitive representations formed of the programs and subjective reactions to the programs suggested that problem structure (as created by the different methodologies), problem content, complexity of modification, and programmer experience all play a crucial role in determining performance and the representation formed.

Journal ArticleDOI
Sten Minör1
TL;DR: A deeper understanding of how structure-oriented editors can be improved to suit both naive and expert users is obtained, and some directions for future research are outlined.
Abstract: Why have structure-oriented editors failed to attract a wider audience? Despite their obviously good qualities, they have almost exclusively been used for education and for experimental purposes in universities and research labs. In this paper a number of common objections raised against structure-oriented editors are quoted and commented upon. Many objections concern the interaction of such editors. Therefore the aspect of interaction in structure-oriented editors is analysed in more detail. We pin down the differences between interacting with text and structure-oriented editors, thus obtaining a deeper understanding of how structure-oriented editors can be improved to suit both naive and expert users. An analysis based on Norman's model for user activities is presented both for text editing and structure-oriented editing of programming languages. The analysis illustrates the trade-offs between structure-oriented editing and text editing of programs. It is also used to suggest some improvements to structure-oriented editor interaction in order to minimize the mental and physical effort required. The interaction problems have earlier been dealt with in hybrid editors, which combine structure-oriented editing and text editing in one system. This approach is also commented upon and discussed. Conceptual models are presented and compared for text editors, structure-oriented editors and hybrid editors. An interaction model for structure-oriented editors based on direct manipulation is suggested. The model is examined in terms of semantic distance, articulatory distance, and engagement as suggested by Hutchins et al. It is also related to the analysis of user activities and the discussion of conceptual models. The direct manipulation model aims at obtaining a simple but powerful interaction model for “pure” structure-oriented editors that may be appreciated by different user categories. Finally, some objections against structure-oriented editors not concerning interaction issues are commented upon, and some directions for future research are outlined.

Journal ArticleDOI
TL;DR: The results proved that graph grammars are a software-engineering method of their own.
Abstract: This paper reports on the latest developments in ongoing work which started in 1981 and is aimed at a general method which would help to reduce considerably the time necessary to develop a syntax-directed editor for any given diagram technique. In joint projects between the University of Erlangen-Nurnberg and software companies it has been shown that the ideas and the implemented tools can also be used for the design of CAD-systems. Several editors for diagram techniques in the field of software engineering have been implemented (e.g. SDL and SADT). In addition, 3-D-modelling packages for interior design and furnishing or lighting systems have been developed. The main idea behind the approach is to represent diagrams by (formal) graphs whose nodes are enriched with attributes. Then, any manipulation of a diagram (typically the insertion of an arrow, a box, text, coloring etc.) can be expressed in terms of the manipulation of its underlying attributed representation graph. The formal description of the manipulation is done by programmed attributed graph grammars. The main advantage of using graph grammars is the unified approach for the design of the data structures and the representation of the algorithms as graphs and graph productions, respectively. The results proved that graph grammars are a software-engineering method of their own.
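
To make the underlying idea concrete, here is a small hypothetical sketch (not the authors' tool or notation): a diagram edit such as inserting an arrow between two boxes is expressed as a production whose applicability is checked on, and whose effect is applied to, the attributed representation graph.

```python
class AttributedGraph:
    """Minimal attributed graph: nodes and edges carry attribute dictionaries."""

    def __init__(self):
        self.nodes = {}   # node id -> attribute dict
        self.edges = []   # (source id, target id, attribute dict)

    def add_node(self, node_id, **attributes):
        self.nodes[node_id] = attributes

    def add_edge(self, source, target, **attributes):
        self.edges.append((source, target, attributes))

def insert_arrow(graph, source, target, label=""):
    """A 'production' in the spirit of a programmed graph grammar: its
    applicability condition is that both boxes exist, and its effect is to
    add an attributed 'arrow' edge to the representation graph."""
    if source in graph.nodes and target in graph.nodes:
        graph.add_edge(source, target, kind="arrow", label=label)
        return True
    return False

# A tiny SADT/SDL-style diagram: two boxes connected by a labelled arrow.
diagram = AttributedGraph()
diagram.add_node("box1", kind="box", text="Read input", x=10, y=20)
diagram.add_node("box2", kind="box", text="Validate", x=120, y=20)
print(insert_arrow(diagram, "box1", "box2", label="data"))
print(diagram.edges)
```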

Journal ArticleDOI
TL;DR: Prelog, a tool for the Presentation and REndering of LOGic specifications, has been developed and its implementation is described.
Abstract: Accidents at Flixborough, Seveso, Bhopal, Three Mile Island, Windscale and Chernobyl have led to increasing concern over the safety and reliability of control systems. Human factors specialists have responded to this concern and have proposed a number of techniques which support the operator of such applications. Unfortunately, this work has not been accompanied by the provision of adequate tools which might enable a designer to carry it beyond the “laboratory bench” and on to the “shop floor”. The following paper exploits formal, mathematically based specification techniques to provide such a tool. Previous weaknesses of abstract specifications are identified and resolved. In particular, they have failed to capture the temporal properties which human factors specialists identify as crucial to the success or failure of interactive control systems. They also provide the non-formalist with an extremely poor impression of what it would be like to interact with potential implementations. Temporal logic avoids these deficiencies. It can make explicit the sequential information which may be implicit within a design. Executable subsets of this formalization support prototyping and this provides a means of assessing the qualitative “look and feel” of potential implementations. A variety of presentation strategies, including structural decomposition and dialogue cycles, have been specified and incorporated directly into prototypes using temporal logic. Prelog, a tool for the Presentation and REndering of LOGic specifications, has been developed and its implementation is described.

Journal ArticleDOI
TL;DR: A new algorithm based on the formalization of rough set theory, a formalization which describes the case of incomplete information, is presented, which is always applicable.
Abstract: The experts often cannot explain why they choose this or that decision in terms of formalized “if-then” rules; in these cases we have a set of examples of their real decisions, and it is necessary to reveal the rules from these examples. The existing methods of discovering rules from examples either demand that the set of examples be in some sense complete (and it is often not complete) or they are too complicated. We present a new algorithm which is always applicable. This algorithm is based on the formalization of rough set theory, a formalization which describes the case of incomplete information.
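
For context, here is a brief sketch of the standard rough-set approximations that such an algorithm builds on, computed from a table of example decisions; the data and names are hypothetical, and this is not the paper's algorithm itself.

```python
from collections import defaultdict

def approximations(examples, condition_attrs, decision_value):
    """Standard rough-set lower/upper approximations of the concept
    {examples whose decision equals `decision_value`}, using the
    indiscernibility classes induced by `condition_attrs`.
    `examples` is a list of dicts with condition attributes and a 'decision' key."""
    # Group examples into indiscernibility classes (same condition-attribute values).
    classes = defaultdict(list)
    for example in examples:
        key = tuple(example[a] for a in condition_attrs)
        classes[key].append(example)

    lower, upper = [], []
    for members in classes.values():
        decisions = {m["decision"] for m in members}
        if decisions == {decision_value}:
            lower.extend(members)          # class lies entirely inside the concept
        if decision_value in decisions:
            upper.extend(members)          # class overlaps the concept
    return lower, upper

examples = [
    {"fever": "yes", "cough": "yes", "decision": "flu"},
    {"fever": "yes", "cough": "no",  "decision": "flu"},
    {"fever": "yes", "cough": "no",  "decision": "cold"},   # conflicts with the previous example
    {"fever": "no",  "cough": "no",  "decision": "cold"},
]
lower, upper = approximations(examples, ["fever", "cough"], "flu")
print(len(lower), len(upper))   # 1 certain example, 3 possible examples
```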

Journal ArticleDOI
TL;DR: The outcomes of this paper include a delineation of what constitutes an appropriate conceptualization of this area and a specification of research issues that tend to dominate the design of a research agenda.
Abstract: Our objective in this paper is to provide a thorough understanding of the usability of data management environments with an end to conducting research in this area. We do this by synthesizing the existing literature that pertains to (i) data modelling as a representation medium and (ii) query interface evaluation in the context of data management. We were motivated by several trends that are prevalent in the current computing context. First, while there seems to be a proliferation of new modelling ideas that have been proposed in the literature, commensurate experimental evaluation of these ideas is lacking. Second, there appears to exist a significant user population that is quite adept at working in certain computing environments (e.g. spreadsheets) with a limited amount of computing skills. Finally, the choices in terms of technological platforms that are now available to implement new software designs allow us to deal with the implementation issue more effectively. The outcomes of this paper include a delineation of what constitutes an appropriate conceptualization of this area and a specification of research issues that tend to dominate the design of a research agenda.

Journal ArticleDOI
TL;DR: Four different user interfaces supporting scheduling two-state (ON/OFF) devices over time periods ranging from minutes to days are described and compared on a feature by feature basis.
Abstract: This article describes four different user interfaces supporting the scheduling of two-state (ON/OFF) devices over time periods ranging from minutes to days. The touchscreen-based user interfaces, including digital 12-h clock, 24-h linear and 24-h dial prototypes, are described and compared on a feature-by-feature basis. A formative usability test with 14 subjects, feedback from more than 30 reviewers, and the flexibility to add functions favour the 24-h linear version.

Journal ArticleDOI
TL;DR: The aim of the present study was to provide some indication of the magnitude and time course of this decline in performance, and to clarify the nature of underlying changes in speech behaviour.
Abstract: Recognition accuracy of speech recognition devices tends to decline during an extended period of continuous use. Although this deterioration in performance is commonly acknowledged, there has been little systematic observation of the phenomenon, and no clear account of its causes is available. The aim of the present study was to provide some indication of the magnitude and time course of this decline in performance, and to clarify the nature of underlying changes in speech behaviour. Three experiments are described. Experiment 1 confirmed that there is a fall-off in recognition accuracy during a half-hour session of a data entry task, and that this occurs for both naive and practised subjects. In Experiment 2, no recovery was observed in recognition performance when short rest breaks were scheduled, indicating that vocal fatigue was not a major factor. The effects of template retraining in mid-session were investigated in Experiment 3. This procedure was found to be effective in restoring recognition accuracy, and the retrained templates were relatively robust. The implications of these findings for operational use of speech recognition devices are briefly discussed. For most applications, one-off template retraining is seen as a more appropriate solution to the problem of voice drift than more complex solutions based on adaptive templates.