
Showing papers in "International Journal of Human-Computer Studies / International Journal of Man-Machine Studies in 1993"


Journal ArticleDOI
TL;DR: Overall, TAM provides an informative representation of the mechanisms by which design choices influence user acceptance, and should therefore be helpful in applied contexts for forecasting and evaluating user acceptance of information technology.
Abstract: Lack of user acceptance has long been an impediment to the success of new information systems. The present research addresses why users accept or reject information systems and how user acceptance is affected by system design features. The technology acceptance model (TAM) specifies the causal relationships between system design features, perceived usefulness, perceived ease of use, attitude toward using, and actual usage behavior. Attitude theory from psychology provides the rationale for hypothesized model relationships, and validated measures were used to operationalize model variables. A field study of 112 users regarding two end-user systems was conducted to test the hypothesized model. TAM fully mediated the effects of system characteristics on usage behavior, accounting for 36% of the variance in usage. Perhaps the most striking finding was that perceived usefulness was 50% more influential than ease of use in determining usage, underscoring the importance of incorporating the appropriate functional capabilities in new systems. Overall, TAM provides an informative representation of the mechanisms by which design choices influence user acceptance, and should therefore be helpful in applied contexts for forecasting and evaluating user acceptance of information technology. Implications for future research and practice are discussed.

4,241 citations


Journal ArticleDOI
TL;DR: This paper explores what mental models are, whether they are always formed, what their characteristics are, and the functional consequences of having no model, an immature model, or a mature model.
Abstract: In interacting with the world, people form internal representations or mental models of themselves and the objects with which they interact (Norman, 1983a). According to Norman, mental models provide predictive and explanatory powers for understanding the interaction. More abstractly, Gentner and Stevens (1983) propose that mental models focus on the way people understand a specific knowledge domain. More concretely, Carroll (1984) views mental models as information that is input into cognitive structures and processes. What are mental models? Are they always formed? When formed, what are their characteristics? What are the functional consequences of having no model (if that is possible), an immature model, or a mature model? This paper intends to explore these questions.

277 citations


Journal ArticleDOI
TL;DR: A method for identifying computer users by analysing keystroking patterns with neural networks and a simple geometric distance is presented; preliminary results demonstrate complete exclusion of imposters and a reasonably low false alarm rate.
Abstract: A method for identifying computer users by analysing keystroking patterns with neural networks and a simple geometric distance is presented. A model of each user's normal typing style was created and compared with later typing samples. Preliminary results demonstrate complete exclusion of imposters and a reasonably low false alarm rate when the sample text was limited to the user's name.
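
As a rough illustration of the geometric-distance idea described above, the sketch below builds a profile of mean inter-key latencies from a few enrolment samples and accepts or rejects a new sample by its Euclidean distance to that profile. The latency values, feature layout and threshold are illustrative assumptions, not the paper's data or model (which also used neural networks).

```python
import numpy as np

def build_profile(samples):
    """Mean inter-key latency vector (ms) over the enrolment typing samples."""
    return np.asarray(samples, dtype=float).mean(axis=0)

def is_genuine(profile, attempt, threshold=80.0):
    """Accept the attempt if its Euclidean distance to the profile is small.
    The threshold is an illustrative value, not taken from the paper."""
    return float(np.linalg.norm(np.asarray(attempt, dtype=float) - profile)) < threshold

# Latencies between successive keystrokes while typing a fixed string (the user's name)
enrolment = [[120, 95, 140, 110],
             [125, 100, 135, 105],
             [118, 92, 150, 112]]
profile = build_profile(enrolment)
print(is_genuine(profile, [122, 97, 142, 108]))   # similar rhythm -> accepted
print(is_genuine(profile, [260, 40, 300, 200]))   # different rhythm -> rejected
```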

190 citations


Journal ArticleDOI
TL;DR: A logical set of phenotypes is developed and compared with the established "human error" taxonomies as well as with the operational categories which have been developed in the field of human reliability analysis and the trade-off between precision and meaningfulness is discussed.
Abstract: The study of human actions with unwanted consequences, in this paper referred to as human erroneous actions, generally suffers from inadequate operational taxonomies. The main reason for this is the lack of a clear distinction between manifestations and causes. The failure to make this distinction is due to the reliance on subjective evidence which unavoidably mixes manifestations and causes. The paper proposes a clear distinction between the phenotypes (manifestations) and the genotypes (causes) of erroneous actions. A logical set of phenotypes is developed and compared with the established "human error" taxonomies as well as with the operational categories which have been developed in the field of human reliability analysis. The principles for applying the set of phenotypes as practical classification criteria are developed and described. A further illustration is given by the report of an action monitoring system (RESQ) which has been implemented as part of a larger set of operator support systems and which shows the viability of the concepts. The paper concludes by discussing the principal issues of error detection, in particular the trade-off between precision and meaningfulness.
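
To make the phenotype/genotype distinction concrete, here is a small hypothetical sketch of how an action monitoring component might label the manifestation of a deviation by comparing a planned action sequence with the observed one. The phenotype names follow common "human error" categories; the matching heuristics are an assumption for illustration, not the RESQ system's actual logic.

```python
def phenotype(planned, observed):
    """Label the manifestation (phenotype) of a deviation between a planned and
    an observed action sequence. Purely illustrative matching heuristics."""
    if observed == planned:
        return "correct"
    if any(observed.count(a) > planned.count(a) for a in set(observed) & set(planned)):
        return "repetition"        # a planned action performed more often than planned
    if set(observed) < set(planned):
        return "omission"          # a planned action was skipped
    if set(planned) < set(observed):
        return "intrusion"         # an unplanned action was inserted
    if sorted(observed) == sorted(planned):
        return "reversal"          # same actions, wrong order
    return "unclassified"

print(phenotype(["open", "fill", "close"], ["open", "close"]))                  # omission
print(phenotype(["open", "fill", "close"], ["open", "close", "fill"]))          # reversal
print(phenotype(["open", "fill", "close"], ["open", "fill", "fill", "close"]))  # repetition
```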

188 citations


Journal ArticleDOI
TL;DR: A hypertext system is implemented in which the user, instead of writing a formal query, just selects relevant terms and/or documents, and Bayesian networks are used to manage the indexing spaces and to store the user's information need.
Abstract: In proposing a searching strategy in a hypertext environment, we have considered three criteria: (1) the retrieval process should use multiple sources of evidence, including multiple indexing schemes and multiple search mechanisms; (2) the hypertext links should be exploited in order to find more relevant material and "good" starting points for browsing; and (3) the user's information need must be easily expressed. To satisfy these three criteria, we have implemented a hypertext system where the user, instead of writing a formal query, just selects relevant terms and/or documents. Based on multiple indexing schemes, Bayesian networks are used to manage the indexing spaces and to store the user's information need. To combine multiple search techniques, the hypergraph is also composed of implicit links (bibliographic coupling, co-citation, etc.), and computed links storing the nearest neighbors of each node. Using link semantics, a constrained spreading activation retrieves relevant nodes for browsing.
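
The sketch below illustrates constrained spreading activation over a small weighted link graph: activation flows from user-selected nodes along explicit and computed links, attenuated by a decay factor, and only nodes above a firing threshold propagate further. The graph, weights, decay and threshold are illustrative assumptions; the paper's system additionally manages the indexing spaces with Bayesian networks and uses link semantics to constrain propagation.

```python
def spread_activation(graph, seeds, decay=0.5, threshold=0.1, max_hops=3):
    """graph: {node: [(neighbour, link_weight), ...]}; seeds: {node: activation}.
    Activation propagates along links, attenuated by decay * weight, and a node
    only fires (propagates further) if its activation exceeds the threshold."""
    activation = dict(seeds)
    frontier = dict(seeds)
    for _ in range(max_hops):
        next_frontier = {}
        for node, act in frontier.items():
            if act < threshold:
                continue  # constraint: weakly activated nodes do not fire
            for neighbour, weight in graph.get(node, []):
                gain = act * decay * weight
                activation[neighbour] = activation.get(neighbour, 0.0) + gain
                next_frontier[neighbour] = next_frontier.get(neighbour, 0.0) + gain
        frontier = next_frontier
    return sorted(activation.items(), key=lambda kv: -kv[1])

# Toy hypergraph mixing explicit index links and computed (co-citation) links
graph = {"query-term": [("doc1", 0.9), ("doc2", 0.4)],
         "doc1": [("doc3", 0.8)],   # citation link
         "doc2": [("doc3", 0.6)]}   # co-citation link
print(spread_activation(graph, {"query-term": 1.0}))
```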

180 citations


Journal ArticleDOI
TL;DR: In an attempt to give theoretical support to the elaboration of user interface languages, Eco's Theory of Sign Production is explored and a semiotic framework within which many design issues can be explained and predicted is built.
Abstract: Semiotic approaches to design have recently shown that systems are messages sent from designers to users. In this paper we examine the nature of such messages and show that systems are messages that can send and receive other messages—they are metacommunication artefacts that should be engineered according to explicit semiotic principles. User interface languages are the primary expressive resource for such complex communication environments. Existing cognitively-based research has provided results which set the target interface designers should hit, but little is said about how to make successful decisions during the process of design itself. In an attempt to give theoretical support to the elaboration of user interface languages, we explore Eco's Theory of Sign Production (U. Eco, A Theory of Semiotics, Bloomington, IN: Indiana University Press, 1976) and build a semiotic framework within which many design issues can be explained and predicted.

167 citations


Journal ArticleDOI
TL;DR: An explanation of programming skill is suggested that integrates ideas about knowledge representation with a strategic model, enabling one to make predictions about how changes in knowledge representation might give rise to particular strategies and to the strategy changes associated with developing expertise.
Abstract: Much of the literature concerned with understanding the nature of programming skill has focused explicitly upon the declarative aspects of programmers' knowledge. This literature has sought to describe the nature of stereotypical programming knowledge structures and their organization. However, one major limitation of many of these knowledge-based theories is that they often fail to consider the way in which knowledge is used or applied. Another strand of literature is less well represented. This literature deals with the strategic elements of programming skill and is directed towards an analysis of the strategies commonly employed by programmers in the generation and the comprehension of programs. In this paper an attempt is made to unify various analyses of programming strategy. This paper presents a review of the literature in this area, highlighting common themes and concerns, and proposes a model of strategy development which attempts to encompass the central findings of previous research in this area. It is suggested that many studies of programming strategy are descriptive and fail to explain why strategies take the form they do or to explain the typical strategy shifts which are observed during the transitions between different levels of skill. This paper suggests that what is needed is an explanation of programming skill that integrates ideas about knowledge representation with a strategic model, enabling one to make predictions about how changes in knowledge representation might give rise to particular strategies and to the strategy changes associated with developing expertise. This paper concludes by making a number of brief suggestions about the possible nature of this model and its implications for theories of programming expertise.

159 citations


Journal ArticleDOI
TL;DR: It is suggested that, for demanding text-based tasks, or for complex graphical tasks, there are overall benefits in adding a visual channel in the form of a Workspace, despite the costs involved in attempting to coordinate activity with this unfamiliar form of communication.
Abstract: We investigated the effect on synchronous communication of adding a Shared Workspace to audio, for three tasks possessing key representative features of workplace activity. We examined the content and effectiveness of remote audio communication between pairs of participants, who worked with and without the addition of the Workspace. For an undemanding task requiring the joint production of brief textual summaries, we found no benefits associated with adding the Workspace. For a more demanding text editing task, the Workspace initially hampered performance but, with task practice, participants performed more efficiently than with audio alone. When the task was graphical design, the Workspace was associated with greater communication efficiency and also changed the nature of communication. The Workspace permits the straightforward expression of spatial relations and locations, gesturing, and the monitoring and coordination of activity by direct visual inspection. The results suggest that, for demanding text-based tasks, or for complex graphical tasks, there are overall benefits in adding a visual channel in the form of a Workspace. These benefits occur despite the costs involved in attempting to coordinate activity with this unfamiliar form of communication. Our findings provide evidence for early claims about putative Workspace benefits. We also interpret these results in the context of a theory of mediated communication.

159 citations


Journal ArticleDOI
TL;DR: The paper describes the relationship between these three framework dimensions, relates the methods of data capture, measurements and criteria which may be appropriately applied in various evaluation contexts, and considers the need to perform evaluations more effectively in the design of products and systems in the commercial world.
Abstract: A framework is described, which classifies usability evaluations in terms of three dimensions: the approach to evaluation, the type of evaluation and the time of evaluation in the context of the product life cycle. The approaches described are user-based, theory-based and expert-based. The approach to evaluation reflects the source of the data which forms the basis of the evaluation. The types of evaluation are diagnostic, summative and metrication. These reflect the purpose of the evaluation and therefore the nature of the data and likely use of the results. The time of testing reflects the temporal location in the product life cycle at which the evaluation is conducted. This dictates the representation of the product which is available for evaluation. The paper describes the relationship between these three framework dimensions. It also relates the methods of data capture, measurements and criteria which may be appropriately applied in various evaluation contexts. The latter part of the paper focuses on a more detailed review of methods which are associated with the most commonly applied and often most effective approach, i.e. the user-centred diagnostic evaluation. Finally, the paper considers the need to perform evaluations more effectively in the design of products and systems in the commercial world. The discussion addresses the need for computer support tools to facilitate the handling of resulting data from user trials.

152 citations


Journal ArticleDOI
TL;DR: This experiment suggests that retrieval using a Galois lattice structure may be an attractive alternative since it combines a good performance for subject searching along with browsing potential.
Abstract: A controlled experiment was conducted comparing information retrieval using a Galois lattice structure with two more conventional retrieval methods: navigating in a manually built hierarchical classification and Boolean querying with index terms. No significant performance difference was found between Boolean querying and the Galois lattice retrieval method for subject searching with the three measures used for the experiment: user searching time, recall and precision. However, hierarchical classification retrieval did show significantly lower recall compared to the other two methods. This experiment suggests that retrieval using a Galois lattice structure may be an attractive alternative since it combines a good performance for subject searching along with browsing potential.
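
For readers unfamiliar with the retrieval structure being compared, the toy sketch below enumerates the formal concepts (the nodes of a Galois lattice) of a small document-term incidence relation by brute force. The documents and terms are invented for illustration, and a real system would use an incremental lattice-construction algorithm rather than this exponential enumeration.

```python
from itertools import combinations

# Toy document-term incidence (the index): document -> set of index terms
index = {"d1": {"retrieval", "lattice"},
         "d2": {"retrieval", "boolean"},
         "d3": {"retrieval", "lattice", "browsing"}}

def concepts(index):
    """Enumerate formal concepts (extent, intent) of the incidence relation.
    Brute force over all document subsets -- acceptable only for a toy example."""
    docs = list(index)
    found = set()
    for r in range(len(docs) + 1):
        for subset in combinations(docs, r):
            if subset:
                # intent: terms shared by every document in the subset
                intent = set.intersection(*(index[d] for d in subset))
            else:
                intent = set.union(*index.values())
            # extent: every document containing all terms of the intent
            extent = frozenset(d for d in docs if intent <= index[d])
            found.add((extent, frozenset(intent)))
    return found

for extent, intent in sorted(concepts(index), key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
```

Navigating from one concept to a neighbouring concept in the resulting lattice is what gives the method its combination of subject searching and browsing.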

150 citations


Journal ArticleDOI
TL;DR: There is no consistent significant difference between the two modes in mastery of material by students, as measured by grades, although in one computer science course grades were better in the on-line section.
Abstract: The Virtual Classroom® consists of software enhancements to the basic capabilities of a computer-mediated communication system in order to support collaborative learning. Results of quasi-experimental field trials which included matched sections of college courses delivered in the traditional and virtual classrooms indicate that there is no consistent significant difference between the two modes in mastery of material by students, as measured by grades; in one computer science course, grades were better in the on-line section. Subjectively, most students report that the Virtual Classroom improves access to educational activities and is, overall, a "better" mode of learning. However, these favorable outcomes are contingent upon a number of variables, including student characteristics, adequate equipment access, and instructor-generated collaborative learning processes.

Journal ArticleDOI
TL;DR: Results indicate that there were no advantages associated with iconic representations compared to text-based representations of actions and objects and that the benefits of direct manipulation might diminish after a learning period.
Abstract: This paper reports on two experiments which examine the effects of iconic and direct manipulation interfaces on the performance of casual users using an electronic mail system. There are two key aspects to these experiments. First, they have been carefully designed to separate the effect of iconic representation from that of direct manipulation in order to examine the independent effect of each as well as their joint effect. Second, subjects performed the same experimental task three different times over 1 week, thus allowing for the effects of icons and direct manipulation interfaces to be assessed over repeated trials. Each experiment measured time taken and errors made in task completion as dependent variables. Results indicate that there were no advantages associated with iconic representations compared to text-based representations of actions and objects. Subjects working with direct manipulation interfaces completed the task faster than those with menu-based interfaces. However, this difference in time was not significant when the task was repeated for a third time, indicating that the benefits of direct manipulation might diminish after a learning period. No interface was better than others in terms of reducing error rates when interacting with the computer system.

Journal ArticleDOI
TL;DR: Some results confirm those of the literature, namely that subjects in the computer group tend to control and simplify their use of language more than those in the operator group, but most observations are either new or in contradiction with previous results.
Abstract: We report an experiment designed to study whether models of human-human voice dialogues can be applied successfully to human-computer communication using natural spoken language. Two groups of six subjects were asked to obtain information about air travel via dialogue with a remote "travel agent". Subjects in the computer group were led to believe they were talking to a computer whereas subjects in the operator group were told they were talking to a human. Both groups of subjects actually talked to the same human experimenter. The study focuses on subjects' representations of interlocutor skill and knowledge, and differs from previous analogous studies in several respects: the task is more complex, giving rise to structured exchanges in natural language rather than to question/answer pairs in simplified language; specific attention has been paid to the design, which attempts to avoid biases that have flawed other studies (in particular, conditions are identical for both groups); the time factor has been taken into account (subjects take part in three sessions, at 1-week intervals). Some results confirm those of the literature, namely that subjects of the computer group tend to control and simplify their use of language more than those in the operator group. However, most observations are either new or in contradiction with previous results: subjects in the computer group produce more utterances but no significant differences were observed with respect to most structural and pragmatic features of language; the time factor plays a dual role. Subjects in both groups tend to become more concise. Operator group strategies differ significantly across sessions as regards scenario processing (problem solving) whereas computer group strategies remain stable. These differences in behavior between groups are ascribed to differences in representations of interlocutor ability.

Journal ArticleDOI
TL;DR: The collected data show a surprising degree of commonality among subjects in the use of gestures as well as speech; these results can be applied to develop a gesture-based or gesture/speech-based system which enables computer users to manipulate graphic objects using easily learned and intuitive gestures to perform spatial tasks.
Abstract: This paper reports on the utility of gestures and speech to manipulate graphic objects. In the experiment described herein, three different populations of subjects were asked to communicate with a computer using either speech alone, gestures alone, or both. The task was the manipulation of a three-dimensional cube on the screen. They were asked to assume that the computer could see their hands, hear their voices, and understand their gestures and speech as well as a human could. A gesture classification scheme was developed to analyse the gestures of the subjects. A primary objective of the classification scheme was to determine whether common features would be found among the gestures of different users and classes of users. The collected data show a surprising degree of commonality among subjects in the use of gestures as well as speech. In addition to the uniformity of the observed manipulations, subjects expressed a preference for a combined gesture/speech interface. Furthermore, all subjects easily completed the simulated object manipulation tasks. The results of this research, and of future experiments of this type, can be applied to develop a gesture-based or gesture/speech-based system which enables computer users to manipulate graphic objects using easily learned and intuitive gestures to perform spatial tasks. Such tasks might include editing a three-dimensional rendering, controlling the operation of vehicles or operating virtual tools in three dimensions, or assembling an object from components. Knowledge about how people intuitively use gestures to communicate with computers provides the basis for future development of gesture-based input devices.

Journal ArticleDOI
TL;DR: Five abstract characteristics of the mental representation of computer programs are presented: hierarchical structure, explicit mapping of code to goals, foundation on recognition of recurring patterns, connection of knowledge, and grounding in the program text.
Abstract: This paper presents five abstract characteristics of the mental representation of computer programs: hierarchical structure, explicit mapping of code to goals, foundation on recognition of recurring patterns, connection of knowledge, and grounding in the program text. An experiment is reported in which expert and novice programmers studied a Pascal program for comprehension and then answered a series of questions about it, designed to show these characteristics if they existed in the mental representations formed. Evidence for all of the abstract characteristics was found in the mental representations of expert programmers. Novices' representations generally lacked the characteristics, but there was evidence that they had the beginnings, although poorly developed, of such characteristics.

Journal ArticleDOI
TL;DR: Visualization ability is found to be a strong predictor of user learning success, and subjects with lower visualization ability can be helped, through appropriate training methods and direct manipulation interfaces, to narrow the performance gap between themselves and subjects with higher visualization ability, and in some cases to equal or surpass their performance.
Abstract: A novice user's cognitive abilities can influence how difficult he/she finds learning to use a software package. To ensure effective use, it is important to identify specific abilities that can influence learning and use, and then develop training methods or design interfaces to accommodate individuals who are lower in those abilities. This paper reports the integrated findings of five studies that examined a specific cognitive variable, visualization ability, for different systems (electronic mail, modeling software and operating systems), applying different training methods (analogical or abstract conceptual models) and computer interfaces (command-based or direct manipulation). Consistent with past results in other domains, we found that visualization ability is a strong predictor of user learning success. More importantly, we also found that subjects with lower visualization ability can be helped, through appropriate training methods and direct manipulation interfaces, to narrow the performance gap between themselves and subjects with higher visualization ability, and in some cases to equal or surpass their performance. Based on our findings, we discuss implications for practitioners and designers and suggest possible avenues for future research.

Journal ArticleDOI
TL;DR: A new framework is developed and presented that highlights a fundamental symmetry between icons and symbols, and is used to raise a number of basic questions about the kinds of representational issues and challenges designers will need to consider as they create the next generation of icons for user interfaces.
Abstract: Icons are now routinely used in human-computer interactions. Despite their widespread use, however, we argue that icons are far more diverse and complex than normally realized. This article examines some of the history behind the evolution of icons from simple pictures to much richer and more complex representational devices. Then we develop and present a new framework that distinguishes: (1) different kinds of sign relations; (2) different kinds of referent relations; and (3) differences between sign and referent relations. In addition, we highlight a fundamental symmetry between icons and symbols, and use this framework to raise a number of basic questions about the kinds of representational issues and challenges designers will need to consider as they create the next generation of icons for user interfaces.

Journal ArticleDOI
Ronald R. Yager
TL;DR: This work shows how a number of the classic aggregation methods fall out as special cases of a very general formulation that uses fuzzy subsets to model the criteria, fuzzy measures to capture the interrelationship between criteria, and a form of the fuzzy integral to connect the two into an overall decision function.
Abstract: The central focus of this work is to provide a general formulation for the aggregation of multi-criteria. This formulation is based upon the use of fuzzy subsets to model the criteria and the use of fuzzy measures to capture the interrelationship between criteria. A form of the fuzzy integral is used to connect these two to obtain the overall decision function. We are particularly interested here in the formulations obtained under different assumptions about the nature of the underlying fuzzy measure. We show how a number of the classic aggregation methods fall out as special cases of this very general formulation.
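
A minimal sketch of the kind of aggregation the abstract describes: a Sugeno-style fuzzy integral of criteria satisfactions taken with respect to a fuzzy measure. The criteria, scores and the cardinality-based measures are illustrative assumptions, chosen so that the maximum, the minimum and a "most of the criteria" quantifier fall out as special cases of the same formulation.

```python
def sugeno_integral(scores, measure):
    """Sugeno fuzzy integral of criterion satisfactions w.r.t. a fuzzy measure.
    scores:  {criterion: satisfaction in [0, 1]}
    measure: function from a frozenset of criteria to a weight in [0, 1],
             monotone, with measure(empty) = 0 and measure(all criteria) = 1."""
    items = sorted(scores.items(), key=lambda kv: -kv[1])  # descending satisfaction
    best, chosen = 0.0, set()
    for criterion, value in items:
        chosen.add(criterion)
        best = max(best, min(value, measure(frozenset(chosen))))
    return best

criteria = {"usefulness": 0.9, "ease_of_use": 0.6, "cost": 0.3}

# Illustrative cardinality-based fuzzy measures (not from the paper):
any_one = lambda s: 1.0 if s else 0.0                          # yields max of the scores
all_of  = lambda s: 1.0 if len(s) == len(criteria) else 0.0    # yields min of the scores
most    = lambda s: len(s) / len(criteria)                     # a "most criteria" quantifier

print(sugeno_integral(criteria, any_one))  # 0.9
print(sugeno_integral(criteria, all_of))   # 0.3
print(sugeno_integral(criteria, most))     # 0.6
```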

Journal ArticleDOI
TL;DR: Evidence is provided for the utility of speech input for command activation in application programs when the keyboard is used for text entry and the mouse for direct manipulation.
Abstract: Despite advances in speech technology, human factors research since the late 1970s has provided only weak evidence that automatic speech recognition devices are superior to conventional input devices such as keyboards and mice. However, recent studies indicate that there may be advantages to providing an additional input channel based on speech input to supplement the more common input modes. Recently the authors conducted an experiment to demonstrate the advantages of using speech-activated commands over mouse-activated commands for word processing applications when, in both cases, the keyboard is used for text entry and the mouse for direct manipulation. Sixteen experimental subjects, all professionals and all but one novice users of speech input, performed four simple word processing tasks using both input groups in this counterbalanced experiment. Performance times for all tasks were significantly faster when using speech to activate commands as opposed to using the mouse. On average, the reduction in task time due to using speech was 18·7%. The error rates due to subject mistakes were roughly the same for both input groups, and recognition errors, averaged over all the tasks, occurred for 6·3% of the speech-activated commands. Subjects made significantly more memorization errors when using speech as compared with the mouse for command activation. Overall, the subjects reacted positively to using speech input and preferred it over the mouse for command activation; however, they also voiced concerns about recognition accuracy, the interference of background noise, inadequate feedback and slow response time. The authors believe that the results of the experiment provide evidence for the utility of speech input for command activation in application programs.

Journal ArticleDOI
TL;DR: Co-operation is presented as a technique for radically improving human-computer interaction with complex knowledge bases during problem-identifying and problem-solving tasks and a machine architecture and software tools and techniques developed can form the foundation for building future co-operative systems.
Abstract: Co-operation is presented as a technique for radically improving human-computer interaction with complex knowledge bases during problem-identifying and problem-solving tasks. A study of human-human co-operation literature indicated the importance of creating an environment where the refinement of solutions can be based on argument and the resolution of differing viewpoints, as it is through this interaction that the nature of the problem is revealed. To bring about such an environment, the work identified and created three mechanisms now considered to be central to human-computer co-operation: goal-oriented working (GOW), an agreed definition knowledge base (ADKB), and a model which, using problem-domain rules, stimulates the interaction between the user and the machine: the partner model (PM). To identify the requirements of the co-operative machine more completely, a software exemplar was constructed, using the task metaphor of spatial design. The result of the work is the implementation of a machine software architecture which demonstrates the functioning of co-operation. This co-operative computer, its evaluators believe, supports a user-machine interaction having a totally new and different quality. The machine architecture and software tools and techniques developed in the work can form the foundation for building future co-operative systems.

Journal ArticleDOI
TL;DR: The EDGE system is able to plan complex, extended explanations which allow interaction with the user as the explanation proceeds; it updates assumptions about the user's knowledge on the basis of these interactions and uses this information to influence the detailed further planning of the explanation, and when the user appears confused it can attempt to fill in missing knowledge or to explain things another way.
Abstract: Human verbal explanations are essentially interactive. If someone is giving a complex explanation, the hearer will be given the opportunity to indicate whether they are following as the explanation proceeds, and if necessary interrupt with clarification questions. These interactions allow the speaker to both clear up the hearer's immediate difficulties as they arise, and to update assumptions about their level of understanding. Better models of the hearer's level of understanding in turn allow the speaker to continue the explanation in a more appropriate manner, lessening the risk of continuing confusion. Despite its apparent importance, existing explanation and text generation systems fail to allow for this sort of interaction. Although some systems allow follow-up questions at the end of an explanation, they assume that a complete explanation has been planned and generated before such interactions are allowed. However, for complex explanations interactions with the user should take place as the explanation progresses, and should influence how that explanation continues. This paper describes the EDGE system, which is able to plan complex, extended explanations which allow such interactions with the user. The system can update assumptions about the user's knowledge on the basis of these interactions, and uses this information to influence the detailed further planning of the explanation. When the user appears confused, the system can attempt to fill in missing knowledge or to explain things another way.
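
A hypothetical sketch of the interactive planning loop the abstract describes: the explanation is generated step by step, the hearer is given an opportunity to react after each step, assumptions about the hearer's knowledge are updated from that reaction, and the remainder of the explanation is adapted accordingly. The data structures and the feedback callback are assumptions for illustration, not EDGE's planner.

```python
def explain_interactively(plan, user_knows, ask):
    """plan: list of (concept, prerequisite, text) explanation steps.
    user_knows: set of concepts assumed known.
    ask: callback returning 'ok' or 'confused' after each step.
    Illustrative loop only, not the EDGE system itself."""
    for concept, prerequisite, text in plan:
        if prerequisite and prerequisite not in user_knows:
            print(f"(filling in background on {prerequisite!r})")
            user_knows.add(prerequisite)
        print(text)
        if ask(concept) == "confused":
            print(f"(re-explaining {concept!r} another way)")
            user_knows.discard(concept)
        else:
            user_knows.add(concept)   # update the user model from the interaction
    return user_knows

plan = [("circuit", None, "A circuit connects components in a loop."),
        ("current", "circuit", "Current is the flow of charge around the circuit.")]
print(explain_interactively(plan, set(), ask=lambda concept: "ok"))
```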

Journal ArticleDOI
TL;DR: In this paper, the authors tried to replicate and extend the original study of Carroll et al. and found that minimalist users learned faster and better than non-minimalist users.
Abstract: Carroll, Smith-Kerker, Ford and Mazur-Rimetz (The minimal manual, Human-Computer Interaction, 3, 123-153, 1987) have introduced the minimal manual as an alternative to standard self-instruction manuals. While their research indicates strong gains, only a few attempts have been made to validate their findings. This study attempts to replicate and extend the original study of Carroll et al. Sixty-four first-year Dutch university students were randomly assigned to a minimal manual or a standard self-instruction manual for introducing the use of a word processor. During training, all students read the manual and worked training tasks on the computer. Learning outcomes were assessed with a performance test and a motivation questionnaire. The results closely resembled those of the original study: minimalist users learned faster and better. The students' computer experience affected performance as well. Experienced subjects performed better on retention and transfer items than subjects with little or no computer experience. Manual type did not interact with prior computer experience. The minimal manual is therefore considered an effective and efficient means for teaching people with divergent computer experience the basics of word processing. Expansions of the minimalist approach are proposed.

Journal ArticleDOI
Shawn D. Bird
TL;DR: This paper presents a taxonomy for multi-agent systems that defines alternative architectures based on fundamental distributed, intelligent system characteristics and presents a step toward the development of general principles for their integration.
Abstract: As intelligent systems become more pervasive and capture more expert and organizational knowledge, the expectation that they be integrated into larger problem-solving systems is heightened. To capitalize on these investments and more fully exploit their potential as knowledge repositories, general principles for their integration must be developed. Although simulated and prototype systems described in the literature provide solutions to some practical problems, most are empirical (or often simply intuitive) in design, emerging from implementation strategy instead of general principles. As a step toward the development of such principles, this paper presents a taxonomy for multi-agent systems that defines alternative architectures based on fundamental distributed, intelligent system characteristics.

Journal ArticleDOI
TL;DR: This paper reports an approach, based on task-network modelling, which could be used to develop design specifications for appropriate error correction in automatic speech recognition (ASR), and describes some means of defining the requirements of an error correction dialogue.
Abstract: While automatic speech recognition (ASR) has achieved some level of success, it often fails to live up to its hype. One of the principal reasons for this apparent failure is the prevalence of "recognition errors". This makes error correction a topic of increasing importance to ASR system development, with a growing awareness that, by designing for error, a number of problems can be overcome. Currently, there is a wide range of possible techniques which could be used for correcting recognition errors, and it is often difficult to compare the techniques objectively because their performance is closely related to their implementation. Furthermore, different techniques may be more suited to different applications and domains. It would be useful to have some means of defining the requirements of an error correction dialogue, based on characteristics of the dialogue and ASR system in which it is to be used, in order to develop design specifications for appropriate error correction. This paper reports an approach, based on task-network modelling, which could be used to this end.
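
As a loose illustration of why a model-based comparison is useful, the sketch below computes the expected time per dictated item for two hypothetical error-correction dialogues, treating each entry as a geometric sequence of recognition attempts. This is only a degenerate, two-parameter stand-in for a full task-network model; the accuracies and timings are invented.

```python
def expected_time(p_correct, t_attempt, t_overhead):
    """Expected time per item: a geometric expected number of attempts (1 / p),
    with each failed attempt adding the overhead of noticing the error and
    initiating a correction. All parameters are illustrative assumptions."""
    attempts = 1.0 / p_correct
    return attempts * t_attempt + (attempts - 1.0) * t_overhead

# Compare re-speaking the word (fast, error-prone) with spelling it out (slow, reliable)
print(round(expected_time(0.90, t_attempt=1.2, t_overhead=1.5), 2))  # respeak strategy
print(round(expected_time(0.99, t_attempt=3.0, t_overhead=1.5), 2))  # spelling strategy
```

A task-network model generalizes this kind of calculation to whole dialogues, so alternative correction designs can be compared before any implementation exists.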

Journal ArticleDOI
TL;DR: This paper elaborates taxonomies of three sets of its dimensions—spatial audio ("throwing sound"), timbre ("pitching sound"), and gain—establishing matrices of variability for each, drawing similes, and citing applications.
Abstract: After surveying the concepts of audio windowing, this paper elaborates taxonomies of three sets of its dimensions—spatial audio ("throwing sound"), timbre ("pitching sound"), and gain ("catching sound")—establishing matrices of variability for each, drawing similes, and citing applications. Two audio windowing systems are examined across these three operations: repositioning, distortion/blending, and gain control (i.e. state transitions in virtual space, timbre space, and volume space). Handy Sound is a purely auditory system with gestural control, while MAW exploits exocentric graphical control. These two systems motivated the development of special user interface features. (Sonic) piggyback-channels are introduced as filtear manifestations of changing cursors, used to track control state. A variable control/response ratio can be used to map a near-field work envelope into perceptual space. Clusters can be used to hierarchically collapse groups of spatial sound objects. WIMP idioms are reinterpreted for audio windowing functions. Reflexive operations are cast as an instance of general manipulation when all the modified entities, including an iconification of the user, are projected into an egalitarian control/response system. Other taxonomies include a spectrum of directness of manipulation, and sensitivity to current position crossed with dependency on some target position.

Journal ArticleDOI
TL;DR: This paper describes how COVER uses heuristics about the nature of likely deficiencies to improve its performance and clarify reporting of deficiencies to the user.
Abstract: Two of the most important and difficult tasks in building expert systems are knowledge acquisition (KA) and quality assurance (QA). QA involves verification and validation (V&V). The paper describes COVER, a verification tool which focuses the search for meaningful deficiencies; integrates closely with checks for redundancy, conflicts and circularity; maximizes user control over deficiency detection; and overcomes the combinatorial explosion traditionally associated with the deficiency check. The paper describes how COVER uses heuristics about the nature of likely deficiencies to improve its performance and clarify reporting of deficiencies to the user. COVER performance is analysed in detail, both theoretically and on real-world expert system knowledge bases.
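
A small hypothetical sketch of one static deficiency check a V&V tool of this kind might perform on a rule base: flagging rules that can never fire because one of their conditions is neither a declared input nor the conclusion of any other rule. The rule representation and the check itself are illustrative assumptions, not COVER's implementation.

```python
def unreachable_rules(rules, inputs):
    """rules: list of (conditions, conclusion) where conditions is a set of facts.
    A rule is flagged if some condition is neither a known input nor the
    conclusion of any rule -- a meaningful deficiency worth reporting."""
    derivable = set(inputs) | {conclusion for _, conclusion in rules}
    return [(conditions, conclusion) for conditions, conclusion in rules
            if not conditions <= derivable]

rules = [({"fever", "rash"}, "measles_suspected"),
         ({"measles_suspected", "lab_confirmed"}, "measles")]  # 'lab_confirmed' is never supplied
print(unreachable_rules(rules, inputs={"fever", "rash"}))
```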

Journal ArticleDOI
TL;DR: This research compared composing on a word processor with writing in longhand to explore whether the computer-based tool amplifies performance and restructures attentional allocation to writing processes.
Abstract: This research compared composing on a word processor with writing in longhand to explore whether the computer-based tool amplifies performance and restructures attentional allocation to writing processes. Performance was assessed in terms of the quality of the resulting documents, based on subjective ratings and text analysis, and the fluency of language production. The allocation of attentional resources was monitored in terms of the degree of cognitive effort (secondary task reaction times) and processing time (directed retrospective reports) devoted to planning ideas, translating ideas into text, and reviewing ideas and text. In Experiment 1, word processing increased the attentional investment in planning and reviewing and changed their nature, without improving either the quality or fluency of writing. In Experiment 2 these restructuring effects were again observed both for writers who reported modest experience composing on a computer and to an even greater degree for those who reported extensive experience. Only participants with extensive word processing experience matched the quality and fluency of those who wrote in longhand.

Journal ArticleDOI
TL;DR: The hybrid tiled and overlapped approach to window layout; an algorithm for determining the importance of a window based on its contents, relation to the ongoing dialogue, time of creation, frequency of use, and recency of use; and an approach to determining window size based on clutter and object resolution requirements are discussed.
Abstract: The CUBRICON Intelligent Window Manager (CIWM) is a knowledge-based system that automates windowing operations. The CIWM is a component of CUBRICON, a prototype knowledge-based multi-media human-computer interface. CUBRICON accepts inputs and generates outputs using integrated multiple media/modalities including speech, printed/typed natural language, tables, forms, maps, graphics, and pointing gestures. The CIWM automatically performs window management functions on CUBRICON's color and monochrome screens. These functions include window creation, sizing, placement, removal, and organization. These operations are accomplished by the CIWM without direct human inputs, although the system provides for user override of the CIWM decisions. The motivation for automated window management is based on the premise that, by freeing the user's cognitive and temporal resources from the task of managing the human-computer interface, more of these resources are available for the user's application domain activities. As the problems and tasks confronting computer users become more complex and information intensive, the potential of this approach for improving overall performance is enhanced. Recent research discussed in this paper indicates that, for some database management tasks, a significant portion of the user's time is spent in managing the window-based interface. If these findings are representative of the larger range of computer-based tasks that use windowing systems, the concept of automated window management offers great potential for enhancing human performance on these computer-based tasks. This paper provides a brief overview of the CUBRICON system and describes the CIWM and its underlying design principles and premises. The following important CIWM features are discussed: the hybrid tiled and overlapped approach to window layout; an algorithm for determining the importance of a window based on its contents, relation to the ongoing dialogue, time of creation, frequency of use, and recency of use; and an approach to determining window size based on clutter and object resolution requirements. Actual interactive examples are provided to illustrate the CIWM functionality. Results of an evaluation of CUBRICON support the design. Those results which pertain specifically to the CIWM are presented. Limitations and applicability of this research are also discussed.
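
The abstract mentions an algorithm that scores window importance from contents, relation to the ongoing dialogue, time of creation, frequency of use and recency of use. The sketch below combines those same factors into a single weighted score; the weights, scalings and field names are illustrative assumptions rather than the CIWM's actual algorithm.

```python
import time

def window_importance(win, weights=None, now=None):
    """Score a window from the factors named in the abstract.
    win: dict with keys
      content_relevance  - 0..1, relevance of the contents to the current task
      dialogue_relation  - 0..1, strength of relation to the ongoing dialogue
      created_at         - creation timestamp (seconds)
      use_count          - how often the window has been used
      last_used_at       - timestamp of most recent use (seconds)
    All weights and scalings are illustrative, not from the paper."""
    w = weights or {"content": 0.35, "dialogue": 0.30, "age": 0.05,
                    "frequency": 0.15, "recency": 0.15}
    now = now or time.time()
    age_h = (now - win["created_at"]) / 3600.0
    idle_h = (now - win["last_used_at"]) / 3600.0
    return (w["content"] * win["content_relevance"]
            + w["dialogue"] * win["dialogue_relation"]
            + w["age"] * 1.0 / (1.0 + age_h)          # newer windows score higher
            + w["frequency"] * min(win["use_count"], 10) / 10.0
            + w["recency"] * 1.0 / (1.0 + idle_h))    # recently used windows score higher

now = time.time()
map_win = {"content_relevance": 0.9, "dialogue_relation": 0.8,
           "created_at": now - 7200, "use_count": 12, "last_used_at": now - 60}
print(round(window_importance(map_win, now=now), 3))
```

Such a score could then drive the sizing, placement and removal decisions the CIWM makes without direct human input.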

Journal ArticleDOI
TL;DR: An on-line network of Pascal programming templates called a template library is devised, and tested with subjects both as a stand alone resource and in conjunction with programming case studies, suggesting that the template representations helped subjects remember and reuse information.
Abstract: We propose a template library as a good representation of programming knowledge, and programming case studies as part of an effective context for illustrating design skills and strategies for utilizing this knowledge. In this project, we devised an on-line network of Pascal programming templates called a template library, and tested it with subjects (classified as novice, intermediate, and expert Pascal programmers) both as a stand-alone resource and in conjunction with programming case studies. We investigated three questions using these tools: 1) How do subjects organize templates? 2) How well can subjects understand and locate templates in the template library? 3) Does the template library help subjects reuse templates to solve new problems? Results suggest that the template representations helped subjects remember and reuse information, and that subjects gained deeper understandings if the representation was introduced in the context of a programming case study.

Journal ArticleDOI
TL;DR: In this paper, a canonical indexing model is expounded whose relevance measures and combination mechanisms are shown to be isomorphic to Shafer's belief functions and to Dempster's rule, respectively.
Abstract: The Dempster-Shafer theory of evidence concerns the elicitation and manipulation of degrees of belief rendered by multiple sources of evidence to a common set of propositions. Information indexing and retrieval applications use a variety of quantitative means—both probabilistic and quasi-probabilistic—to represent and manipulate relevance numbers and index vectors. Recently, several proposals were made to use the Dempster-Shafer model as a relevance calculus in such applications. This paper provides a critical review of these proposals, pointing at several theoretical caveats and suggesting ways to resolve them. The methodology is based on expounding a canonical indexing model whose relevance measures and combination mechanisms are shown to be isomorphic to Shafer's belief functions and to Dempster's rule, respectively. Hence, the paper has two objectives: (i) to describe and resolve some caveats in the way the Dempster-Shafer theory is applied to information indexing and retrieval, and (ii) to provide an intuitive interpretation of the Dempster-Shafer theory, as it unfolds in the simple context of a canonical indexing model.
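
For readers unfamiliar with the machinery under discussion, the sketch below implements Dempster's rule of combination for two mass functions over a tiny frame of discernment, in the spirit of combining evidence from multiple indexing sources about a document's relevance. The frame, the sources and the mass values are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset of hypotheses -> mass) with
    Dempster's rule: intersect focal elements, renormalize by 1 - conflict."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        intersection = a & b
        if intersection:
            combined[intersection] = combined.get(intersection, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sources of evidence about whether a document is relevant to a query
REL, NOT = frozenset({"relevant"}), frozenset({"not-relevant"})
FRAME = REL | NOT                           # the whole frame: ignorance
m_terms = {REL: 0.6, FRAME: 0.4}            # evidence from index terms
m_cites = {REL: 0.5, NOT: 0.2, FRAME: 0.3}  # evidence from citation links
print(dempster_combine(m_terms, m_cites))
```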