
Showing papers in "International Journal of Human-computer Studies \/ International Journal of Man-machine Studies in 1994"


Journal ArticleDOI
TL;DR: This paper examines the relationship between trust in automatic controllers, self-confidence in manual control abilities, and the use of automatic controllers in operating a simulated semi-automatic pasteurization plant, and finds that trust, combined with self-confidence, predicted the operators' allocation strategy.
Abstract: The increasing use of automation to supplant human intervention in controlling complex systems changes the operators' role from active controllers (directly involved with the system) to supervisory controllers (managing the use of different degrees of automatic and manual control). This paper examines the relationship between trust in automatic controllers, self-confidence in manual control abilities, and the use of automatic controllers in operating a simulated semi-automatic pasteurization plant. Trust, combined with self-confidence, predicted the operators' allocation strategy. A Multitrait-multimethod matrix and logit functions showed how trust and self-confidence relate to the use of automation. An ARMAV time series model of the dynamic interaction of trust and self-confidence, combined with individual biases, accounted for 60.9-86.5% of the variance in the use of the three automatic controllers. In general, automation is used when trust exceeds self-confidence, and manual control when the opposite is true. Since trust and self-confidence are two factors that guide operators' interactions with automation, the design of supervisory control systems should include provisions to ensure that operators' trust reflects the capabilities of the automation and operators' self-confidence reflects their abilities to control the system manually.
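
The abstract's logit result suggests a simple functional form. Below is a minimal sketch, not the authors' fitted model: the probability of choosing automatic control as a logistic function of the trust/self-confidence difference, with an individual bias term. The coefficients are illustrative, not estimated from the study's data.

```python
import math

def p_use_automation(trust: float, self_confidence: float,
                     bias: float = 0.0, slope: float = 4.0) -> float:
    """Logit-style model: P(automation) as a function of trust - self-confidence.

    `bias` stands in for the per-operator individual bias the paper reports;
    `slope` controls how sharply allocation switches at trust == self_confidence.
    """
    x = slope * (trust - self_confidence) + bias
    return 1.0 / (1.0 + math.exp(-x))

# Trust well above self-confidence -> automation; the reverse -> manual control.
print(p_use_automation(trust=0.8, self_confidence=0.4))  # ~0.83
print(p_use_automation(trust=0.3, self_confidence=0.7))  # ~0.17
```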

881 citations


Journal ArticleDOI
TL;DR: The purpose of this review is to identify knowledge elicitation techniques and the associated bibliographic information, organize the techniques into categories on the basis of methodological similarity, and summarize for each category of techniques strengths, weaknesses, and recommended applications.
Abstract: Information on knowledge elicitation methods is widely scattered across the fields of psychology, business management, education, counseling, cognitive science, linguistics, philosophy, knowledge engineering and anthropology. The purpose of this review is to (1) identify knowledge elicitation techniques and the associated bibliographic information, (2) organize the techniques into categories on the basis of methodological similarity, and (3) summarize for each category of techniques strengths, weaknesses, and recommended applications. The review is intended to provide a starting point for those interested in applying or developing knowledge elicitation techniques, as well as for those more generally interested in exploring the scope of the available methodology.

592 citations


Journal ArticleDOI
TL;DR: Two studies of using the thinking aloud method for user interface testing showed that experimenters who were not usability specialists could use the method, but they found only 28-30% of known usability problems when running a single test subject.
Abstract: Two studies of using the thinking aloud method for user interface testing showed that experimenters who were not usability specialists could use the method. However, they found only 28-30% of known usability problems when running a single test subject. Running more test subjects increased the number of problems found, but with progressively diminishing returns; after five test subjects 77-85% of the problems had been found.
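
These figures are consistent with the standard problem-discovery model P(found) = 1 - (1 - p)^n, where p is the per-subject detection rate. A quick check, assuming that model rather than the authors' exact computation:

```python
def proportion_found(p: float, n: int) -> float:
    """Expected proportion of usability problems found with n test subjects."""
    return 1.0 - (1.0 - p) ** n

# With p = 0.28-0.30 (the single-subject rate reported above), five subjects
# give roughly the 77-85% range quoted in the abstract.
for p in (0.28, 0.30):
    print([round(proportion_found(p, n), 2) for n in range(1, 6)])
# p=0.28 -> [0.28, 0.48, 0.63, 0.73, 0.81]
# p=0.30 -> [0.30, 0.51, 0.66, 0.76, 0.83]
```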

438 citations


Journal ArticleDOI
TL;DR: This analysis of argumentation research sets an agenda for future work driven by a concern to support the designer in the whole process of externalizing and structuring DR, from initially ill-formed ideas to more rigorous, coherent argumentation.
Abstract: A design rationale (DR) is a representation of the reasoning behind the design of an artifact. In recent years, the use of semiformal notations for structuring arguments about design decisions has attracted much interest within the human-computer interaction and software engineering communities, leading to a number of DR notations and support environments. This paper examines two foundational claims made by argumentation-based DR approaches: that expressing DR as argumentation is useful, and that designers can use such notations. The conceptual and empirical basis for these claims is examined, firstly by surveying relevant literature on the use of argumentation in non-design contexts (from which current DR efforts draw much inspiration), and secondly, by surveying DR work. Evidence is classified according to the research contribution it makes, the kind of data on which claims are based (anecdotal or experimental), the extent to which the claims made are substantiated, and whether or not the users of the approach were also the researchers. In the survey, a trend towards tightly integrating DR with other design representations is noted, but it is argued that taken too far, this may result in the loss of the original vision of argumentative design. In examining the evidence for each claim, it is demonstrated firstly, that research into semiformal argumentation outside the design context has failed to substantiate convincingly either of the two claims implicitly attributed to it in current DR research, and secondly, that there are also significant gaps in the DR literature. There are emerging indications, however, that argumentation-based DR can assist certain kinds of design reasoning by turning the representational effort to the designer's advantage, and that such DRs can be useful at a later date. This analysis of argumentation research sets an agenda for future work driven by a concern to support the designer in the whole process of externalizing and structuring DR, from initially ill-formed ideas to more rigorous, coherent argumentation. The paper concludes by clarifying implications for the design of DR training, notations, and tools.

300 citations


Journal ArticleDOI
TL;DR: Although handwriting (with recognition) is touted as the entry method of choice for pen-based computers, the much simpler technique of tapping on a soft keyboard is faster and more accurate.
Abstract: Two experiments were conducted to compare several methods of numeric and text entry for pen-based computers. For numeric entry, the conditions were hand printing, tapping on a soft keypad, stroking a moving pie menu, and stroking a pie pad. For the pie conditions, strokes are made in the direction that numbers appear on a clock face. For the moving pie menu, strokes were made directly in the application, as with hand printing. For the pie pad, strokes were made on top of one another on a separate pie pad, with the results sent to the application. Based on speed and accuracy, the entry methods from best to worst were soft keypad (30 wpm, 1.2% errors), hand printing (18.5 wpm, 10.4% errors), pie pad (15.1 wpm, 14.6% errors), and moving pie menu (12.4 wpm, 16.4% errors). For text entry, the conditions were hand printing, tapping on a soft keyboard with a QWERTY layout, and tapping on a soft keyboard with an ABC layout (two rows of sequential characters). Tapping on the soft QWERTY keyboard was the quickest (23 wpm) and most accurate (1.1% errors) entry method. Hand printing was slower (16 wpm) and more error prone (8.1% errors). Tapping on the soft ABC keyboard was very accurate (0.6% errors) but was slower (13 wpm) than the other methods. These results represent the first empirical tests of entry speed and accuracy using a stylus to tap on a soft keyboard. Although handwriting (with recognition) is touted as the entry method of choice for pen-based computers, the much simpler technique of tapping on a soft keyboard is faster and more accurate.

181 citations


Journal ArticleDOI
TL;DR: Some of the problems associated with observational data analysis for complex domains are discussed, and the term "exploratory sequential data analysis" (ESDA) is introduced to describe the different kinds of observational data analysis currently being performed in many areas of the behavioral and social sciences.
Abstract: This paper discusses some of the problems associated with observational data analysis for complex domains, and introduces the term "exploratory sequential data analysis" (ESDA) to describe the different kinds of observational data analysis currently being performed in many areas of the behavioral and social sciences. The development and functionality of a software tool, MacSHAPA, for certain kinds of ESDA is described. MacSHAPA is designed to bring investigators into closer contact with their data and to help them achieve greater research productivity and quality. MacSHAPA allows investigators to see their data in various ways, to enter it, edit it and encode it, and to carry out statistical analyses and make reports. MacSHAPA's relation to other ESDA software tools is indicated throughout the paper.

175 citations


Journal ArticleDOI
TL;DR: Detailed examples illustrate the various benefits of adopting the AH as a knowledge representation framework, namely: providing sufficient representations to allow reasoning about unanticipated fault and control situations, allowing the use of reasoning mechanisms that are independent of domain information, and having psychological relevance.
Abstract: The abstraction hierarchy (AH) is a multileveled representation framework, consisting of physical and functional system models, which has been proposed as a useful framework for developing representations of complex work environments. Despite the fact that the AH is well known and widely cited in the cognitive engineering community, there are surprisingly few examples of its application. Accordingly, the intent of this paper is to provide a concrete example of how the AH can be applied as a knowledge representation framework. A formal instantiation of the AH as the basis for a computer program is presented in the context of a thermal-hydraulic process. This model of the system is complemented by a relatively simple reasoning mechanism which is independent of the information contained in the knowledge representation. This reasoning mechanism uses the AH model, along with qualitative user input about system states, to generate reasoning trajectories for different types of events and problems. Simulation outputs showing how the AH model can provide an effective basis for reasoning under different classes of situations, including challenging faults of various types, are presented. These detailed examples illustrate the various benefits of adopting the AH as a knowledge representation framework, namely: providing sufficient representations to allow reasoning about unanticipated fault and control situations, allowing the use of reasoning mechanisms that are independent of domain information, and having psychological relevance.
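
A minimal sketch of the general idea: levels of an AH-style model are linked by means-ends relations, and a generic traversal (independent of the domain content) walks down from a threatened purpose to candidate physical causes. The node names below are invented, not the paper's thermal-hydraulic instantiation.

```python
# means-ends links: each function maps to the lower-level means realizing it
ACHIEVED_BY = {
    "maintain outflow temperature": ["transfer heat to water"],   # purpose
    "transfer heat to water": ["heat exchange in reservoir"],     # abstract fn
    "heat exchange in reservoir": ["heater HTR1", "pump PA"],     # generalized fn
}

def candidate_causes(goal: str, abnormal: set) -> list:
    """Generic traversal: collect abnormal physical nodes the goal depends on,
    using only the structure of the means-ends links, not domain rules."""
    means = ACHIEVED_BY.get(goal)
    if means is None:                        # bottom level: physical component
        return [goal] if goal in abnormal else []
    found = []
    for m in means:
        found.extend(candidate_causes(m, abnormal))
    return found

print(candidate_causes("maintain outflow temperature", {"pump PA"}))
# ['pump PA']
```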

158 citations


Journal ArticleDOI
TL;DR: The paper shows how the user's own adaptations can be supported by the system by initial adaptive suggestions showing the rationale of adaptations and the way to perform them.
Abstract: This paper presents an adaptive and adaptable system and its evaluation. The system is based on a commercial spreadsheet application and provides adaptation opportunities for defining a user- and task-specific user interface (new menu entries and key shortcuts for subroutine names and parameters, changing default parameters). The development following a design-evaluation-redesign approach has shown that adaptations are accepted if the user has the opportunity to control their timing and content. This does not necessarily mean that the adaptation is initiated and performed by the user alone (adaptability). On the contrary, strictly user-controlled adaptation is too demanding for the user. The paper shows how the user's own adaptations can be supported by the system by initial adaptive suggestions showing the rationale of adaptations and the way to perform them.
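
A minimal sketch, under an assumed usage threshold, of the kind of system-initiated suggestion described above: the system notices repeated use of a nested menu command and proposes an adaptation together with its rationale, leaving acceptance, timing and content under the user's control. All names and the threshold are invented for illustration.

```python
from collections import Counter

SUGGEST_AFTER = 5          # hypothetical threshold, not from the study
usage = Counter()

def record_command(menu_path: str):
    """Count command invocations; return a suggestion when the threshold is hit."""
    usage[menu_path] += 1
    if usage[menu_path] == SUGGEST_AFTER:
        return (f"You have used '{menu_path}' {SUGGEST_AFTER} times.\n"
                f"Rationale: a key shortcut would save the menu navigation.\n"
                f"Accept, modify, or decline this adaptation.")
    return None

for _ in range(SUGGEST_AFTER):
    suggestion = record_command("Macros > Subroutines > NPV")
print(suggestion)
```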

151 citations


Journal ArticleDOI
TL;DR: The results of the study indicate that the misunderstanding of Plan Composition and semantic misinterpretation of Language Constructs are the two major causes of errors.
Abstract: Why do novice programmers have difficulties in programming, and what are the probable causes of these errors? This study analyses the role of Language Constructs comprehension, Plan Composition, and their relationship to each other as applied to novice programming errors. The experiment was conducted with 80 novice programmers who were divided into four groups of 20. Each of the groups enrolled in one of the following programming language courses: Pascal, C, FORTRAN, or LISP. The results of the study indicate that the misunderstanding of Plan Composition and semantic misinterpretation of Language Constructs are the two major causes of errors. In addition, the study has concluded that these errors are highly correlated.

145 citations


Journal ArticleDOI
TL;DR: It is argued that, to reuse methods and knowledge bases, method knowledge must be isolated, as much as possible, from domain knowledge; declarative mapping relations are defined to connect methods and domains, and the classes of mappings are enumerated.
Abstract: In this paper, we characterize the relationship between abstract problem-solving methods and the domain-oriented knowledge bases that they use. We argue that, to reuse methods and knowledge bases, we must isolate, as much as possible, method knowledge from domain knowledge. To connect methods and domains, we define declarative mapping relations, and enumerate the classes of mappings. We illustrate our approach to reuse with the PROTEGE-II architecture and a pair of configuration tasks. Our goal is to show that the use of mapping relations leads to reuse with high payoff of saved effort.
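
A minimal sketch of the idea of declarative mapping relations: the problem-solving method is written against abstract roles, the knowledge base uses domain terms, and an explicit mapping (here a simple renaming of domain relations to method roles) connects the two. The elevator example and names are illustrative, not PROTEGE-II's actual representation.

```python
method_roles = ["component", "constraint"]        # what the method expects

domain_kb = {                                     # domain-oriented knowledge base
    "elevator-parts": ["motor", "cable", "car"],
    "elevator-limits": ["max-load <= 1500 kg"],
}

# declarative mapping: method role -> domain relation
mapping = {"component": "elevator-parts", "constraint": "elevator-limits"}

def method_input(role: str):
    """Resolve a method role through the mapping, so the method never
    mentions domain terms and both sides stay independently reusable."""
    return domain_kb[mapping[role]]

print(method_input("component"))    # ['motor', 'cable', 'car']
```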

139 citations


Journal ArticleDOI
TL;DR: It is shown that individuals use inappropriate social rules in assessing machine behavior, and that their responses to technology are inconsistent with their espoused beliefs.
Abstract: We show that individuals use inappropriate social rules in assessing machine behavior. Explanations of ignorance and individuals' views of machines as proxies for humans are shown to be inadequate; instead, individuals' responses to technology are shown to be inconsistent with their espoused beliefs. In two laboratory studies, computer-literate college students used computers for tutoring and testing. The first study (n = 22) demonstrates that subjects using a computer that praised itself believed that it was more helpful, contributed more to the subject's test score, and was more responsive than did subjects using a computer that criticized itself, although the tutoring and testing sessions were identical. In the second study (n = 44), the praise or criticism came from either the computer that did the tutoring or a different computer. Subjects responded as if they attributed a "self" and self-focused attributions (termed "ethopoeia") to the computers. Specifically, subjects' responses followed the rules "other-praise is more valid and friendlier than self-praise", "self-criticism is friendlier than other-criticism", and "criticizers are smarter than praisers" to evaluate the computers, although the subjects claimed to believe that these rules should not be applied to computers.

Journal ArticleDOI
TL;DR: The strengths of the spreadsheet model allow quick gratification of immediate needs, while its weaknesses make subsequent debugging and interpretation difficult, suggesting a situated view of spreadsheet usage in which present needs outweigh future needs.
Abstract: Ten discretionary users were asked to recount their experiences with spreadsheets and to explain how one of their own sheets worked. The transcripts of the interviews are summarized to reveal the key strengths and weaknesses of the spreadsheet model. There are significant discrepancies between these findings and the opinions of experts expressed in the HCI literature, which have tended to emphasize the strengths of spreadsheets and to overlook the weaknesses. In general, the strengths are those that allow quick gratification of immediate needs, while the weaknesses are those that make subsequent debugging and interpretation difficult, suggesting a situated view of spreadsheet usage in which present needs outweigh future needs. We conclude with an attempt to characterize three extreme positions in the design space of information systems: the incremental addition system, the explanation system and the transcription system. The spreadsheet partakes of the first two. We discuss how to improve its explanation facilities.

Journal ArticleDOI
TL;DR: A flexible general-purpose shell, called UMT (User Modeling Tool), which supports the development of user modeling applications and utilizes a modeling approach called assumption-based user modeling, which exploits a truth maintenance mechanism for maintaining the consistency of the user model.
Abstract: This paper first presents a general structured framework for user modeling, which includes a set of basic user modeling purposes exploited by a user modeling system when providing a set of services to other components of an application. At a higher level of abstraction such an application may perform a generic user modeling task, which results from an appropriate combination of some basic user modeling purposes. The central aim of the paper is to present, within the proposed framework, a flexible general-purpose shell, called UMT (User Modeling Tool), which supports the development of user modeling applications. UMT features a non-monotonic approach for performing the modeling activity: more specifically, it utilizes a modeling approach called assumption-based user modeling, which exploits a truth maintenance mechanism for maintaining the consistency of the user model. The modeling task is divided into two separate activities, one devoted to user classification and user model management, and the other devoted to consistency maintenance of the model. The modeling knowledge exploited by UMT is represented by means of stereotypes and production rules. UMT is capable of identifying, at any given moment during an interaction, all the possible alternative models which adequately describe the user and are internally consistent. The choice of the most plausible one among them is then performed using an explicit programmable preference criterion. UMT is also characterized by a very well defined and simple interface with the hosting application, and by a specialized development interface which supports the developer during the construction of specific applications. This paper includes an example application in the field of information-providing systems. UMT has been developed in Common LISP.
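
A minimal sketch of assumption-based user modeling in the spirit of UMT: stereotypes supply default assumptions about the user, and a (much simplified) consistency check discards a stereotype when observed behavior contradicts it. The stereotypes and observations below are invented; UMT's actual machinery (production rules, truth maintenance) is far richer.

```python
STEREOTYPES = {
    "novice": {"wants-explanations": True,  "knows-shortcuts": False},
    "expert": {"wants-explanations": False, "knows-shortcuts": True},
}

def consistent_models(observations: dict) -> list:
    """Return the stereotypes whose assumptions survive the observations;
    UMT would then pick the most plausible one via an explicit,
    programmable preference criterion."""
    keep = []
    for name, assumptions in STEREOTYPES.items():
        # an observation overrides a default; any contradiction retracts the model
        if all(observations.get(k, v) == v for k, v in assumptions.items()):
            keep.append(name)
    return keep

# The user is seen using shortcuts: the 'novice' assumptions are retracted.
print(consistent_models({"knows-shortcuts": True}))  # ['expert']
```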

Journal ArticleDOI
TL;DR: In an empirical evaluation using a target selection task, the addition of tactile and force feedback shortened the response time and widened the effective area of targets.
Abstract: We have developed a mouse with tactile and force feedback. Tactile information is provided to the operator by a small pin which projects slightly through the mouse button when pulsed. Force information is provided by an electromagnet inside the mouse in conjunction with an iron mouse pad. Tactile and force feedback are controlled by software linked to the visual information of targets on the visual display. In an empirical evaluation using a target selection task, the addition of tactile and force feedback shortened the response time and widened the effective area of targets. Design issues for interactive systems are discussed.

Journal ArticleDOI
TL;DR: Differences were found in the way in which the two groups of novice programmers represented and organized programming concepts, although the performance tasks did not show parallel effects.
Abstract: Programming is a cognitive activity that requires the learning of new reasoning skills and the understanding of new technical information. Since novices lack domain-specific knowledge, many instructional techniques attempt to provide them with a framework or mental model that can be used for incorporating new information. A major research question concerns how to encourage the acquisition of good mental models and how these models influence the learning process. One possible technique for providing an effective mental model is to use dynamic cues that make transparent to the user all the changes in the variable values, source codes, output, etc., as the program runs. Two groups of novice programmers were used in the experiment. All subjects learned some programming notions in the C language (MIXC). The MIXC version of the programming language provides a debugging facility (C trace) designed to show through a system window all the program components. Subjects were either allowed to use this facility or not allowed to do so. Performance measures of programming and debugging were taken as well as measures directed to assess subjects' mental models. Results showed differences in the way in which the two groups represented and organized programming concepts, although the performance tasks did not show parallel effects.

Journal ArticleDOI
TL;DR: DASH is a metalevel tool that allows developers to generate domain-specific knowledge-acquisition tools from domain ontologies and allows the developer to custom tailor the layout of the knowledge-acquisition tool for its users.
Abstract: Metalevel tools can support the software development process by automating the design of task- and application-specific tools. DASH is a metalevel tool that allows developers to generate domain-specific knowledge-acquisition tools from domain ontologies. Domain specialists use the knowledge-acquisition tools generated by DASH to instantiate the concepts and relationships defined in the domain ontologies. The output of the knowledge-acquisition tools is a collection of instances that constitute the knowledge base for a knowledge-based system. To automate the generation of appropriate tools, the DASH architecture uses a dialog-design module to produce a dialog structure that defines the target tool at the editor and window level. Given the dialog structure, a layout-design module completes the window layouts. DASH allows the developer to custom tailor the layout of the knowledge-acquisition tool for its users, and to store such modifications persistently so that they can be reapplied when the target tool is regenerated. The DASH implementation is based on a mapping problem-solving method that defines the tool-design steps. The DASH Development Environment (DDE) is an application-specific environment that supports the configuration of the mapping method and the maintenance of DASH. We have used DASH to generate several knowledge-acquisition tools for a broad range of application tasks.
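
A minimal sketch of the metalevel idea behind DASH: from a declarative domain ontology, mechanically derive an editor layout (here reduced to labeled form fields) that a domain specialist fills in to create knowledge-base instances. The ontology and field types are hypothetical examples, not DASH's dialog-structure or layout algorithms.

```python
ontology = {
    "Drug": {"name": "string", "dose-range": "number-pair",
             "contraindications": "list-of: Condition"},
    "Condition": {"name": "string", "severity": "string"},
}

def generate_editor(concept: str) -> list:
    """Map each slot in the ontology to a form field of the generated
    knowledge-acquisition tool (one editor window per concept)."""
    return [f"[{concept}] field '{slot}': widget for {slot_type}"
            for slot, slot_type in ontology[concept].items()]

for line in generate_editor("Drug"):
    print(line)
# [Drug] field 'name': widget for string
# [Drug] field 'dose-range': widget for number-pair
# [Drug] field 'contraindications': widget for list-of: Condition
```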

Journal ArticleDOI
TL;DR: Experimental evaluation through 300 hours of classroom usage indicates that CLARE does support meaningful learning and shows that a major bottleneck to computer-mediated knowledge construction is summarization.
Abstract: Current collaborative learning systems focus on maximizing shared information. However, "meaningful learning" is not simply information sharing but also knowledge construction. CLARE is a computer-supported learning environment that facilitates meaningful learning through collaborative knowledge construction. It provides a semi-formal representation language called RESRA and an explicit process model called SECAI. Experimental evaluation through 300 hours of classroom usage indicates that CLARE does support meaningful learning. It also shows that a major bottleneck to computer-mediated knowledge construction is summarization. Lessons learned through the design and evaluation of CLARE provide new insights into both collaborative learning systems and collaborative learning theories.

Journal ArticleDOI
TL;DR: The work reported in this paper suggests that a fine-grained restructuring of individual schemata takes place during the later stages of skill development and attempts to show how specific forms of training can give rise to this knowledge restructuring process.
Abstract: This paper explores the relationship between knowledge structure and organization and the development of expertise in a complex problem-solving task. An empirical study of skill acquisition in computer programming is reported, providing support for a model of knowledge organization that stresses the importance of knowledge restructuring processes in the development of expertise. This is contrasted with existing models which have tended to place emphasis upon schemata acquisition and generalization as the fundamental modes of learning associated with skill development. The work reported in this paper suggests that a fine-grained restructuring of individual schemata takes place during the later stages of skill development. It is argued that those mechanisms currently thought to be associated with the development of expertise may not fully account for the strategic changes and the types of error typically found in the transition between intermediate and expert problem solvers. This work has a number of implications. Firstly, it suggests important limitations of existing theories of skill acquisition. This is particularly evident in terms of the ability of such theories to account for subtle changes in the various manifestations of skilled performance that are associated with increasing expertise. Secondly, the work reported in this paper attempts to show how specific forms of training can give rise to this knowledge restructuring process. It is argued that the effects of particular forms of training are of primary importance, but these effects are often given little attention in theoretical accounts of skill acquisition. Finally, the work presented here has practical relevance in a number of applied areas including the design of intelligent tutoring systems and programming environments.

Journal ArticleDOI
TL;DR: This work demonstrates how scenario questioning can be used to systematically interrogate the knowledge and practices of potential users, and thereby to create object-oriented analysis models that are psychologically valid.
Abstract: Scenarios are a natural and effective medium for thinking in general and for design in particular. Our work seeks to develop a potential unification between recent scenario-oriented work in object-oriented analysis/design methods and scenario-oriented work in the analysis/design of human-computer interaction. We illustrate this perspective by showing: (1) how scenario questioning can be used to systematically interrogate the knowledge and practices of potential users, and thereby to create object-oriented analysis models that are psychologically valid; (2) how depicting an individual object's point-of-view can serve as a pedagogical scaffold to help students of object-oriented analysis see how to identify and assign object responsibilities in creating a problem domain model; and (3) how usage scenarios can be employed to motivate and coordinate the design implementation, refactoring and reuse of object-oriented software.

Journal ArticleDOI
TL;DR: An interactive grammar/parser workbench is presented, a graphical development environment with various types of browsers, tracers, inspectors and debuggers, that has been adapted to the requirements of large-scale grammar engineering in a distributed, object-oriented specification and programming framework.
Abstract: The ParseTalk model of concurrent, object-oriented natural language parsing is introduced. It builds upon the complete lexical distribution of grammatical knowledge and incorporates inheritance mechanisms in order to express generalizations over sets of lexical items. The grammar model integrates declarative well-formedness criteria constraining linguistic relations between heads and modifiers, and procedural specifications of the communication protocol for establishing these relations. The parser's computation model relies upon the actor paradigm, with concurrency entering through asynchronous message passing. We consider various extensions of the basic actor model as required for distributed natural language understanding and elaborate on the semantics of the actor computation model in terms of event type networks (a graph representation for actor grammar specifications) and event networks (graphs which represent the actor parser's behavior). Besides theoretical claims, we present an interactive grammar/parser workbench, a graphical development environment with various types of browsers, tracers, inspectors and debuggers, that has been adapted to the requirements of large-scale grammar engineering in a distributed, object-oriented specification and programming framework.
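
A minimal sketch (invented, far simpler than ParseTalk) of lexically distributed, message-passing parsing: each word is an actor holding its own grammatical knowledge, and head-modifier relations are established by asynchronous messages rather than by a central parser.

```python
import asyncio

class WordActor:
    """A lexical item as an actor: it reacts to asynchronous attachment
    offers from other words using only its own grammatical knowledge."""
    def __init__(self, word: str, category: str, takes: set):
        self.word, self.category, self.takes = word, category, takes
        self.inbox = asyncio.Queue()
        self.modifiers = []

    async def run(self, n_messages: int):
        for _ in range(n_messages):
            sender = await self.inbox.get()      # another word offers itself
            if sender.category in self.takes:    # well-formedness criterion
                self.modifiers.append(sender.word)

async def main():
    verb = WordActor("sleeps", "V", takes={"N"})   # a head expecting a noun
    noun = WordActor("dog", "N", takes=set())
    head_task = asyncio.create_task(verb.run(n_messages=1))
    await verb.inbox.put(noun)        # 'dog' asynchronously seeks a head
    await head_task
    print(verb.word, "<-", verb.modifiers)         # sleeps <- ['dog']

asyncio.run(main())
```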

Journal ArticleDOI
TL;DR: Differences in cognitive activities and final designs were examined among expert designers using object-oriented and procedural design methodologies, and among expert and novice object-oriented designers (the novices having extensive procedural experience); a closer alliance of domain and solution spaces was observed in object-oriented design than in procedural design.
Abstract: This research examines differences in cognitive activities and final designs among expert designers using object-oriented and procedural design methodologies, and among expert and novice object-oriented designers, when novices have extensive procedural experience. We observed, as predicted by others, a closer alliance of domain and solution spaces in object-oriented design compared to procedural design. Procedural programmers spent a large proportion of their time analysing the problem domain. In contrast, object-oriented designers defined objects and methods much more quickly and spent more time evaluating their designs through simulation processes. Novices resembled object-oriented experts in some ways and procedural experts in others. Their designs had the general shape of the object-oriented experts' designs, but retained some procedural features. Novices were very inefficient at defining objects, going through an extensive situation analysis first, in a manner similar to the procedural experts. Some suggestions for instruction are made on the basis of novice object-oriented designers' difficulties.

Journal ArticleDOI
TL;DR: The QUERY procedure, as discussed by the authors, is designed to systematically question an expert and construct the unique knowledge space consistent with the expert's responses; such a knowledge space can then serve as the core of a knowledge assessment system.
Abstract: The QUERY procedure is designed to systematically question an expert, and construct the unique knowledge space consistent with the expert's responses. Such a knowledge space can then serve as the core of a knowledge assessment system. The essentials of the theory of knowledge spaces are given here, together with the theoretical underpinnings of the QUERY procedure. A full scale application of the procedure is then described, which consists in constructing the knowledge spaces of five expert-teachers, pertaining to 50 mathematics items of the standard high school curriculum. The results show that the technique is applicable in a realistic setting. However, the analysis of the data indicates that, despite a good agreement across experts concerning item difficulty and other coarse measures, the constructed knowledge spaces obtained for the different experts are not as consistent as one might expect or hope. Some experts appear to be considerably more skillful than others at generating a usable knowledge space, at least by this technique.
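
A minimal sketch of the QUERY idea: start from all subsets of the item domain as candidate knowledge states, then prune with expert answers of the form "a student failing every item in A would also fail item q" (a positive answer rules out every state containing q but disjoint from A). The three-item domain and the answers below are invented for illustration; the paper's application used 50 items.

```python
from itertools import chain, combinations

ITEMS = {"a", "b", "c"}

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def apply_positive_answer(states, A, q):
    """Keep only states consistent with: failing all of A implies failing q."""
    return [K for K in states if not (q in K and K.isdisjoint(A))]

states = powerset(ITEMS)                             # 8 candidate states
states = apply_positive_answer(states, {"a"}, "b")   # mastering b requires a
states = apply_positive_answer(states, {"b"}, "c")   # mastering c requires b
print(sorted(sorted(K) for K in states))
# [[], ['a'], ['a', 'b'], ['a', 'b', 'c']]
```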

Journal ArticleDOI
TL;DR: This article reports an empirical enquiry into how people use natural language and drawing to achieve a shared understanding of a problem (the redesign of a kitchen) and its solution.
Abstract: The design of interfaces which support a user's natural cognitive processes and structures depends on an understanding of communicational codes as well as task structures etc. Research in human-computer interaction has, however, tended to neglect the former in favour of the latter. This paper seeks to redress this imbalance by reporting in detail the results of an empirical enquiry into how people use two communicational codes, natural language and drawing, to achieve a shared understanding of a problem (the redesign of a kitchen) and its solution. This enquiry clearly indicates the complex interdependency of these forms of communication when used in combination. While a graphical depiction may provide a context for linguistic interpretation, especially in respect of the disambiguation of spatial expressions, graphical expressions (pictures and drawings) themselves require a context-dependent interpretation which, itself, can derive from an accompanying natural language expression. Often, however, neither form of expression can be independently interpreted. Rather the meaning of the situation is dependent on the synergistic combination of both forms of expression and is heavily dependent on the common background knowledge of participants in the interaction. While natural language expressions may be explicitly linked to graphical depictions through pointing actions, such actions are not mandatory for effective communication. The implications of these observations for the design of natural language/graphics interfaces are discussed. Among the questions raised by the paper are: how to characterize the difference between representation or modelling and communication in graphics; how to apply current object-oriented theories of knowledge representation to the highly fluid yet knowledge-rich use of pictures that was observed in our study; and finally what differences might emerge between dialogues of this type in different domains.

Journal ArticleDOI
TL;DR: This paper describes an experimental trackball device that provides the user with the less common E-feedback in addition to the conventional layered I- feedback in the form of the momentary cursor position on the screen and the kinetic forces from the ball.
Abstract: It has been argued by Engel and Haakma (1993, Expectations and feedback in user-system communication, International Journal of Man-Machine Studies, 39, 427-452) that for user-system communication to become more efficient, machine interfaces should present both early layered I-feedback on the current partial message interpretation as well as layered expectations (E-feedback) concerning the message components still to be communicated. As a clear example of our claim, this paper describes an experimental trackball device that provides the user with the less common E-feedback in addition to the conventional layered I-feedback in the form of the momentary cursor position on the screen and the kinetic forces from the ball. In particular, the machine expresses its expectation concerning the goal position of the cursor by exerting an extra force on the trackball. Two optical sensors and two servo motors are used in the described trackball device with contextual force feedback. One combination of position sensor and servo motor handles the cursor position and tactile feedback along the x-axis; the other combination controls that along the y-axis. By supplying supportive force feedback as a function of the current display contents and the momentary cursor position, the user's movements are guided towards the cursor target position expected by the machine. The force feedback diminishes the visual processing load of the user and combines increased ease of use with robustness of manipulation. Experiments with a laboratory version of this new device have shown that the force feedback significantly enhances speed and accuracy of pointing and dragging, while the effort needed to master the trackball is minimal compared with that for the conventional trackball without force feedback.
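
A minimal sketch, with invented gains, of the contextual force feedback described above: the machine expresses its expectation as an extra force pulling the cursor toward the expected goal position along each axis; the servo motors would realize fx and fy. This is an illustration of the control idea, not the device's actual control law.

```python
def expectation_force(cursor, expected_target, gain=0.5, max_force=1.0):
    """Force (fx, fy) proportional to the offset between the cursor and the
    target the machine expects, clipped to the motors' assumed limit."""
    def clip(f):
        return max(-max_force, min(max_force, f))
    fx = gain * (expected_target[0] - cursor[0])
    fy = gain * (expected_target[1] - cursor[1])
    return clip(fx), clip(fy)

print(expectation_force(cursor=(100, 100), expected_target=(103, 99)))
# (1.0, -0.5): pulled right toward the expected goal, and slightly up
```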

Journal ArticleDOI
TL;DR: An objective test for computer literacy was developed and an existing self-appraisal test was extended for use in a computer literacy assessment experiment, and it was found that the self-appraisal test is a more lenient performance indicator than the objective test.
Abstract: Whenever decisions are made based upon a person's level of computer literacy, it is important that such expertise is accurately assessed. This paper takes a thorough methodological approach to the measurement of computer literacy using both objective and self-appraisal tests. While objective tests have been used on many occasions to measure computer literacy, they suffer from generalizability problems. Self-appraisal tests, on the other hand, are subject to leniency bias by the respondents. Taken together, though, the potential exists for the establishment of a computer literacy assessment instrument with high levels of generalizability and accuracy. For this research, an objective test for computer literacy was developed and an existing self-appraisal test was extended for use in a computer literacy assessment experiment. It was found that the self-appraisal test is a more lenient performance indicator than the objective test. Both male and female subjects exhibited substantial self-leniency in their self-appraisals, but both self-leniency and gender-based differences in self-appraisal decreased as the subjects' level of computer expertise increased. Finally, the low level of convergence between the self-appraisal test and the objective test found in this study casts doubt on the ability of any self-appraisal test to accurately assess computer literacy by itself. A combination of different measures may be more appropriate when it is important to determine computer literacy levels accurately.

Journal ArticleDOI
TL;DR: The need for a common measurement taxonomy and framework is argued, derived from analyses of software engineering and human-computer interaction, and it is shown that the two disciplines have many important similarities as well as differences and that there is some evidence to suggest that they are growing closer.
Abstract: The rapid development of any field of knowledge brings with it unavoidable fragmentation and proliferation of new disciplines. The development of computer science is no exception. Software engineering (SE) and human-computer interaction (HCI) are both relatively new disciplines of computer science. Furthermore, as both names suggest, they each have strong connections with other subjects. SE is concerned with methods and tools for general software development based on engineering principles. This discipline has its roots not only in computer science but also in a number of traditional engineering disciplines. HCI is concerned with methods and tools for the development of human-computer interfaces, assessing the usability of computer systems and with broader issues about how people interact with computers. It is based on theories about how humans process information and interact with computers, other objects and other people in the organizational and social contexts in which computers are used. HCI draws on knowledge and skills from psychology, anthropology and sociology in addition to computer science. Both disciplines need ways of measuring how well their products and development processes fulfil their intended requirements. Traditionally, SE has been concerned with "how software is constructed" and HCI with "how people use software". Given the different histories of the disciplines and their different objectives, it is not surprising that they take different approaches to measurement. Thus, each has its own distinct "measurement culture". In this paper we analyse the differences and the commonalities of the two cultures by examining the measurement approaches used by each. We then argue the need for a common measurement taxonomy and framework, which is derived from our analyses of the two disciplines. Next we demonstrate the usefulness of the taxonomy and framework via specific example studies drawn from our own work and that of others and show that, in fact, the two disciplines have many important similarities as well as differences and that there is some evidence to suggest that they are growing closer. Finally, we discuss the role of the taxonomy as a framework to support: reuse, planning future studies, guiding practice and facilitating communication between the two disciplines.

Journal ArticleDOI
TL;DR: A design methodology for iconic languages, based upon the theory of icon algebra for deriving the meaning of iconic sentences, is described, along with an interactive design environment based upon this methodology.
Abstract: We describe a design methodology for iconic languages based upon the theory of icon algebra to derive the meaning of iconic sentences. The design methodology serves two purposes. First of all, it is a descriptive model for the design process of the iconic languages used in the Minspeak systems for augmentative communication. Second, it is also a prescriptive model for the design of other iconic languages for human-machine interface. An interactive design environment based upon this methodology is described. This investigation raises a number of interesting issues regarding iconic languages and iconic communications.
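
A minimal sketch of the icon-algebra idea (the combination rule below is invented, not the paper's formal operator set): each icon carries several candidate meanings, and combining two icons derives the meaning of the iconic sentence from semantically linked pairs, echoing Minspeak-style multi-meaning icons such as APPLE + SUN for "breakfast".

```python
ICON_MEANINGS = {
    "apple": {"food", "fruit", "red", "eat"},
    "sun":   {"morning", "hot", "yellow", "day"},
}

# hypothetical semantic links used by the combination operator
RELATED = {("eat", "morning"): "breakfast"}

def combine(icon1: str, icon2: str) -> set:
    """Derive candidate meanings of the two-icon sentence icon1 icon2
    by pairing their candidate meanings through the semantic links."""
    return {RELATED[(a, b)]
            for a in ICON_MEANINGS[icon1]
            for b in ICON_MEANINGS[icon2]
            if (a, b) in RELATED}

print(combine("apple", "sun"))   # {'breakfast'}
```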

Journal ArticleDOI
TL;DR: The study demonstrates that, depending on the characteristics of the human, the computerized aid, and the problem to be solved, the joint human-computer system performance can be better or worse than the performance of the individual human or computer system working alone.
Abstract: In recent years, there has been a growing interdisciplinary interest in designing intelligent human-computer systems for problem-solving. Although progress has been made, we are far from building intelligent human-computer systems that fully exploit the natural synergies of the combination of human and intelligent machine. One of the significant paradigms of intelligent decision support is the cognitive systems engineering approach. This approach considers the human and the intelligent machine as components of a joint cognitive system and focuses on the need to maximize the overall performance of the joint system. Factors influencing the performance of the joint cognitive system include the cognitive characteristics of the human, the computer system, and the task. An important relationship between the cognitive characteristics of the human and those of the system is cognitive coupling, which has a number of dimensions. The study described in this paper explores the style dimension of cognitive coupling by presenting a laboratory experiment that examines the interactions among human cognitive style (analytic vs. heuristic), problem type (analysis-inducing vs. heuristic-inducing), and nature of decision aid (analytic vs. heuristic). The study demonstrates that, depending on the characteristics of the human, the computerized aid, and the problem to be solved, the joint human-computer system performance can be better or worse than the performance of the individual human or computer system working alone. Furthermore, the results suggest that the impact of cognitive style on decision-making performance may depend upon the characteristics of the problem, the nature of the decision-aid, and the measures used to evaluate performance. Inadequate recognition of these factors and their interactions may have led to conflicting results in prior decision-making studies using cognitive style.

Journal ArticleDOI
TL;DR: The proposed Specify-Construct-Assess (SCA) problem-solving method for decomposition of the modelling task leads to a particular approach to modelling that is called evolutionary modelling, which is supported by a knowledge-based system called QuBA.
Abstract: Constructing models of physical systems is a recurring activity in engineering problem solving. This paper presents a generic knowledge-level analysis of the task of engineering modelling. Starting from the premise that modelling is a design-like activity, it proposes the Specify-Construct-Assess (SCA) problem-solving method for decomposition of the modelling task. A second structuring principle is found in the distinction between and separation of different ontological viewpoints. Here, we introduce three engineering ontologies that have their own specific roles and methods in the modelling task: functional components, physical processes, mathematical constraints. The combination of the proposed task and ontology decompositions leads to a particular approach to modelling that we call evolutionary modelling. This approach is supported by a knowledge-based system called QuBA. The implications of evolutionary modelling for structuring the modelling process, the content of produced models, as well as for the organization of reusable model fragment libraries are discussed.

Journal ArticleDOI
TL;DR: Two experiments based on Pennington's model of programmer comprehension found an expertise effect, as well as evidence that knowledge of program function is independent of other sorts of knowledge; however, neither novices nor experts exhibited strong evidence of bottom-up comprehension.
Abstract: The question of whether the use of good naming style in programs improves program comprehension has important implications for both programming practice and theories of program comprehension. Two experiments were done based on Pennington's (Stimulus structures and mental representations in expert comprehension of computer programs, Cognitive Psychology, 19, 295-341, 1987) model of programmer comprehension. According to her model, different levels of knowledge, ranging from operational to functional, are extracted during comprehension in a bottom-up fashion. It was hypothesized that poor naming style would affect comprehension of function, but would not affect the other sorts of knowledge. An expertise effect was found, as well as evidence that knowledge of program function is independent of other sorts of knowledge. However, neither novices nor experts exhibited strong evidence of bottom-up comprehension. The results are discussed in terms of emerging theories of program comprehension which include knowledge representation, comprehension strategies, and the effects of ecological factors such as task demands and the role-expressiveness of the language.