scispace - formally typeset

Showing papers on "User interface published in 1992"


Journal ArticleDOI
01 Jul 1992
TL;DR: A theoretical framework for designing interfaces for complex human-machine systems, based on the skills, rules, and knowledge taxonomy of cognitive control, is proposed, and three prescriptive design principles are suggested to achieve this objective.
Abstract: A theoretical framework for designing interfaces for complex human-machine systems is proposed. The framework, called ecological interface design (EID), is based on the skills, rules, and knowledge taxonomy of cognitive control. The basic goals of EID are not to force processing to a higher level than the demands of the task require, and to support each of the three levels of cognitive control. Thus, an EID interface should not contribute to the difficulty of the task, and at the same time, it should support the entire range of activities that operators will be faced with. Three prescriptive design principles are suggested to achieve this objective, each directed at supporting a particular level of cognitive control. Particular attention is paid to presenting a coherent deductive argument justifying the principles of EID. Support for the EID framework is discussed. Some issues for future research are outlined.

1,072 citations


Journal ArticleDOI
TL;DR: In this paper, attention has been given to making user interface design and testing less costly so that it might be more easily incorporated into the product development life cycle, and three experiments are reported.
Abstract: Attention has been given to making user interface design and testing less costly so that it might be more easily incorporated into the product development life cycle. Three experiments are reported...

839 citations


Patent
12 May 1992
TL;DR: A user interface element system (2) having a plurality of user interface elements (12) for marking, finding, organizing, and processing data within documents stored in an associated computer system is described in this article.
Abstract: A user interface element system (2) having a plurality of user interface elements (12) for marking, finding, organizing, and processing data within documents stored in an associated computer system is described. Each element (12) typically has an appearance which is uniquely related to the data or the function the element is designed to represent or perform, respectively. In their simplest form, these elements are only used to mark data within a document. Each element (12), however, can also be programmed to cause the computer (2) to perform some function in association with the marked data, such as printing the data or mailing the data to someone. A user can select particular data within a document using an element and have that data associated with the element in memory (8). Data marked with common elements can be found by searching for a particular representative element in memory (8). Users can create their own elements, program elements with their own desired functionality, and modify existing elements. Elements (12) can also be compounded together so as to cause a combination of tasks to be performed by simply activating one element.

820 citations


Journal ArticleDOI
TL;DR: The cognitive walkthrough methodology, described in detail, is an adaptation of the design walkthrough techniques that have been used for many years in the software engineering community and is based on a theory of learning by exploration presented.
Abstract: This paper presents a new methodology for performing theory-based evaluations of user interface designs early in the design cycle. The methodology is an adaptation of the design walkthrough techniques that have been used for many years in the software engineering community. Traditional walkthroughs involve hand simulation of sections of code to ensure that they implement specified functionality. The method we present involves hand simulation of the cognitive activities of a user, to ensure that the user can easily learn to perform tasks that the system is intended to support. The cognitive walkthrough methodology, described in detail, is based on a theory of learning by exploration presented in this paper. There is a summary of preliminary results of effectiveness and comparisons with other design methods.

778 citations


Patent
22 Sep 1992
TL;DR: In this paper, an information and communication system for gaming machines is described, comprising a central data processor, a control unit for each gaming machine within the system, and a user interface which includes a keypad, a card reader, and a display.
Abstract: An information and communication system permits communication between gaming machines and a central control system and between a player or operator and a central control system. The system includes a central data processor, a control unit for each gaming machine within the system which is in communication with the central data processor and a user interface which includes a keypad, a card reader and a display. A user interface is secured to each gaming machine and operatively connected to the control unit. The keypad can be used by a player or operator to transmit information to the central data processor. The control unit can be used to identify special players and transmits messages, including promotional messages, for display. The control unit includes memory which contains personality data for the gaming machine and can be used to transmit the personality data from the user interface to its memory. The control unit can accept personality data from a card inserted into the card reader and can be enabled by a personal identification number entered on the keypad. The system provides multiple features including automated maintenance, game accounting, security, player tracking, event tracking, employee/player interaction from the game to the central data processor, cashless operation of gaming machines, reserving gaming machines and other functions.

751 citations


Proceedings ArticleDOI
01 Jun 1992
TL;DR: The video presents a two-phase interaction technique that combines gesture and direct manipulation, yielding a powerful interaction with the advantages of both.
Abstract: A gesture, as the term is used here, is a handmade mark used to give a command to a computer. The attributes of the gesture (its location, size, extent, orientation, and dynamic properties) can be mapped to parameters of the command. An operation, operands, and parameters can all be communicated simultaneously with a single, intuitive, easily drawn gesture. This makes gesturing an attractive interaction technique. Typically, a gestural interaction is completed (e.g. the stylus is lifted) before the gesture is classified, its attributes computed, and the intended command performed. There is no opportunity for the interactive manipulation of parameters in the presence of application feedback that is typical of drag operations in direct manipulation interfaces. This lack of continuous feedback during the interaction makes the use of gestures awkward for tasks that require such feedback. The video presents a two-phase interaction technique that combines gesture and direct manipulation. A two-phase interaction begins with a gesture, which is recognized during the interaction (e.g. while the stylus is still touching the writing surface). After recognition, the application is informed and the interaction continues, allowing the user to manipulate parameters interactively. The result is a powerful interaction which combines the advantages of gesturing and direct manipulation.
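The two-phase technique can be sketched as a tiny state machine: pen points accumulate during the gesture phase, and once the stroke is classified mid-interaction, subsequent input is routed to interactive parameter manipulation. The class and method names below are hypothetical illustrations, not the implementation described in the video.

```python
# Minimal sketch of a two-phase gesture interaction: the gesture is
# recognized mid-stroke (phase 1), after which further pen input
# manipulates the command's parameters interactively (phase 2).
# All names here are hypothetical illustrations.

class TwoPhaseInteraction:
    def __init__(self, recognizer, min_points=3):
        self.recognizer = recognizer   # maps a point list to a command object
        self.min_points = min_points   # points needed before classification
        self.points = []
        self.command = None            # set once phase 1 completes

    def pen_move(self, x, y):
        self.points.append((x, y))
        if self.command is None and len(self.points) >= self.min_points:
            # Phase 1: classify the partial stroke into a command.
            self.command = self.recognizer(self.points)
        elif self.command is not None:
            # Phase 2: remaining motion adjusts the command's parameters,
            # with the application providing continuous feedback.
            self.command.update(x, y)

    def pen_up(self):
        if self.command is not None:
            self.command.commit()
        self.points.clear()
```

The key design point the abstract makes is visible here: classification happens inside `pen_move`, not at `pen_up`, so the interaction can continue with feedback after recognition.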

700 citations


Journal ArticleDOI
TL;DR: It is shown that the most basic elements in the usability engineering model are empirical user testing and prototyping, combined with iterative design.
Abstract: A practical usability engineering process that can be incorporated into the software product development process to ensure the usability of interactive computer products is presented. It is shown that the most basic elements in the usability engineering model are empirical user testing and prototyping, combined with iterative design. Usability activities are presented for three main phases of a software project: before, during, and after product design and implementation. Some of the recommended methods are not really single steps but should be used throughout the development process.

564 citations


01 Jan 1992
TL;DR: This third edition of Ben Shneiderman's introduction to user- interface design contains expanded and earlier coverage of development methodologies, evaluation techniques, and user-interface building tools.
Abstract: Ben Shneiderman again provides a complete, current, and authoritative introduction to user-interface design. Students will learn practical techniques and guidelines needed to develop good systems designs - systems with interfaces the typical user can understand, predict, and control. This third edition features new chapters on the World Wide Web, information visualization, and computer-supported cooperative work. It contains expanded and earlier coverage of development methodologies, evaluation techniques, and user-interface building tools. The author provides provocative discussion of speech input/output, natural-language interaction, anthropomorphic design, virtual environments, and intelligent (software) agents.

514 citations


Proceedings ArticleDOI
01 Jun 1992
TL;DR: With nearly one million pixels and an accurate, multi-state, cordless pen, the Liveboard provides a basis for research on user interfaces for group meetings, presentations and remote collaboration.
Abstract: This paper describes the Liveboard, a large interactive display system. With nearly one million pixels and an accurate, multi-state, cordless pen, the Liveboard provides a basis for research on user interfaces for group meetings, presentations and remote collaboration. We describe the underlying hardware and software of the Liveboard, along with several software applications that have been developed. In describing the system, we point out the design rationale that was used to make various choices. We present the results of an informal survey of Liveboard users, and describe some of the improvements that have been made in response to user feedback. We conclude with several general observations about the use of large public interactive displays.

475 citations



Patent
09 Sep 1992
TL;DR: A method for invoking a user interface for use with an application operating in a computer system which involves providing in the computer system a generic object class that corresponds to a class of function that is to be performed using the user interface is described in this article.
Abstract: A method for invoking a user interface for use with an application operating in a computer system which involves providing in the computer system a generic object class that corresponds to a class of function that is to be performed using the user interface; specifying in the application instance data in the form of a generic object specification that corresponds to the generic object class, the instance data including attribute criteria and hint criteria; providing in the computer system at least one specific user interface toolbox and controller that operates in the computer system to provide a selection of possible specific user interface implementations for use in performing the class of function; and providing in the computer system at least one interpreter that corresponds to the at least one specific user interface toolbox and controller.

Proceedings ArticleDOI
01 Jun 1992
TL;DR: The most common problems people reported when developing a user interface included getting users' requirements, writing help text, achieving consistency, learning how to use the tools, getting acceptable performance, and communicating among various parts of the program.
Abstract: This paper reports on the results of a survey of user interface programming. The survey was widely distributed, and we received 74 responses. The results show that in today's applications, an average of 48% of the code is devoted to the user interface portion. The average time spent on the user interface portion is 45% during the design phase, 50% during the implementation phase, and 37% during the maintenance phase. 34% of the systems were implemented using a toolkit, 27% used a UIMS, 14% used an interface builder, and 26% used no tools. This appears to be because the toolkit systems had more sophisticated user interfaces. The projects using UIMSs or interface builders spent the least percent of time and code on the user interface (around 41%) suggesting that these tools are effective. In general, people were happy with the tools they used, especially the graphical interface builders. The most common problems people reported when developing a user interface included getting users' requirements, writing help text, achieving consistency, learning how to use the tools, getting acceptable performance, and communicating among various parts of the program.

Patent
30 Apr 1992
TL;DR: In this paper, a virtual school user interface running on networked personal computers for providing administrative and instructional functions to users in a scholastic environment is presented, with the networked virtual reality presenting the user as a real-time entity within the virtual school so that the user can interact with other users and system elements.
Abstract: A virtual school user interface running on networked personal computers for providing administrative and instructional functions to users in a scholastic environment. A user selects among grouped system functions by accessing one of a plurality of rooms within a school representation displayed on a video screen, with the networked virtual reality presenting the user as a real-time entity within the virtual school so that the user can interact with other users and system elements. A learning path editor is also provided for allowing users to author student curriculum sequences using graphical icons. A guidance tutor is further provided for coaching a student by displaying a guidance message on the video screen when so indicated by an instructional context. A courseware scheduler is further provided for delivering specific courseware to specific computers during specific time periods. A system monitor is further provided for gathering information in real-time on the state of each computer.

Proceedings ArticleDOI
01 Jun 1992
TL;DR: This work presents a system which allows experimentation with 3D widgets, encapsulated 3D geometry and behavior, and hopes to allow user-interface designers to build highly interactive 3D environments more easily than is possible with today's tools.
Abstract: The 3D components of today’s user interfaces are still underdeveloped. Direct interaction with 3D objects has been limited thus far to gestural picking, manipulation with linear transformations, and simple camera motion. Further, there are no toolkits for building 3D user interfaces. We present a system which allows experimentation with 3D widgets, encapsulated 3D geometry and behavior. Our widgets are first-class objects in the same 3D environment used to develop the application. This integration of widgets and application objects provides a higher bandwidth between interface and application than exists in more traditional UI toolkit-based interfaces. We hope to allow user-interface designers to build highly interactive 3D environments more easily than is possible with today’s tools.

Proceedings ArticleDOI
01 Jun 1992
TL;DR: The findings were replicated across the two systems and show that the empirical testing condition identified the largest number of problems, and identified a significant number of relatively severe problems that were missed by the walkthrough conditions.
Abstract: We investigated the relative effectiveness of empirical usability testing and individual and team walkthrough methods in identifying usability problems in two graphical user interface office systems. The findings were replicated across the two systems and show that the empirical testing condition identified the largest number of problems, and identified a significant number of relatively severe problems that were missed by the walkthrough conditions. Team walkthroughs achieved better results than individual walkthroughs in some areas. About a third of the significant usability problems identified were common across all methods. Cost-effectiveness data show that empirical testing required the same or less time to identify each problem when compared to walkthroughs.

Proceedings ArticleDOI
01 Jul 1992
TL;DR: New techniques to design physically based, goal directed motion of synthetic creatures are described and an interactive framework for specifying constraints and objectives for the motion, and for guiding the numeric solution of the optimization problem is developed.
Abstract: This paper describes new techniques to design physically based, goal directed motion of synthetic creatures. More specifically, it concentrates on developing an interactive framework for specifying constraints and objectives for the motion, and for guiding the numerical solution of the optimization problem thus defined. The ability to define, modify and guide constrained spacetime problems is provided through an interactive user interface. Innovations introduced include: (1) the subdivision of spacetime into discrete pieces, or Spacetime Windows, over which subproblems can be formulated and solved; (2) the use of cubic B-spline approximation techniques to define a C2 function for the creature's time dependent degrees of freedom; (3) the use of both symbolic and numerical processes to construct and solve the constrained optimization problem; and (4) the ability to specify inequality and conditional constraints. Creatures, in the context of this work, consist of rigid links connected by joints defining a set of generalized degrees of freedom. Hybrid symbolic and numeric techniques to solve the resulting complex constrained optimization problems are made possible by the special structure of physically based models of such creatures, and by the recent development of symbolic algebraic languages. A graphical user interface process handles communication between the user and two other processes: one devoted to symbolic differentiation and manipulation of the constraints and objectives, and one that performs the iterative numerical solution of the optimization problem. The user interface itself provides both high and low level definition of, interaction with, and inspection of the optimization process and the resulting animation. Implementation issues and experiments with the Spacetime Windows system are discussed.

Patent
22 Dec 1992
TL;DR: In this article, an artificial intelligence software shell for plant operation simulation includes a blackboard module including a database having objects representing plant elements and concepts, and a control module, in communication with the blackboard and the knowledge source modules, receives input data and controls operation of the knowledge sources in accordance with a predetermined knowledge source priority scheme.
Abstract: An artificial intelligence software shell for plant operation simulation includes a blackboard module including a database having objects representing plant elements and concepts. A rule-based knowledge source module and a case-based knowledge source module, in communication with the blackboard module, operate on specific predefined blackboard objects. A user interface module, in communication with the blackboard module, enables a user to view blackboard status information. A control module, in communication with the blackboard module and the knowledge source modules, receives input data and controls operation of the knowledge source modules in accordance with a predetermined knowledge source priority scheme.

Proceedings ArticleDOI
01 Jun 1992
TL;DR: This paper describes a desk with a computer-controlled projector and camera above it that enables people to interact with ordinary paper documents in ways normally possible only with electronic documents on workstation screens.
Abstract: Before the advent of the personal workstation, office work practice revolved around the paper document. Today the electronic medium offers a number of advantages over paper, but it has not eradicated paper from the office. A growing problem for those who work primarily with paper is lack of direct access to the wide variety of interactive functions available on personal workstations. This paper describes a desk with a computer-controlled projector and camera above it. The result is a system that enables people to interact with ordinary paper documents in ways normally possible only with electronic documents on workstation screens. After discussing the motivation for this work, this paper describes the system and two sample applications that can benefit from this style of interaction: a desk calculator and a French to English translation system. We describe the design and implementation of the system, report on some user tests, and conclude with some general reflections on interacting with computers in this way.

Book
01 Jan 1992
TL;DR: An introduction to Human Memory and Interaction and User Modelling in HCI and Applied to Interactive Informal and Formal Specifications of User Interaction Task Scenarios.
Abstract: Introducing Human-Computer Interaction. An Introduction to Human Memory. Memory Structures. Knowledge and Representation. Expertise. Skill and Skill Acquisition. Organisation and Early Attempts at Modelling Human-Computer Interaction. Interaction and User Modelling in HCI. Task Analysis and Task Modelling. Developing Interface Designs. Evaluations of Interactive Systems. User Interface Design Environments. Management Systems and Toolkits. Task Analysis. Knowledge Analysis of Tasks. Design Applied to Interactive Informal and Formal Specifications of User Interaction Task Scenarios. Appendix.

01 Jan 1992
TL;DR: A preview of what IRAF might look like to the user by the end of the decade is presented, with emphasis on the work being done on the image data structures, graphics and image display interfaces, and user interfaces.
Abstract: The Interactive Data Reduction and Analysis Facility (IRAF) data reduction and analysis system has been around since 1981. Today it is a mature system with hundreds of applications, and is supported on all the major platforms. Many institutions, projects, and individuals around the US and around the world have developed software for IRAF. Some of these packages are comparable in size to the IRAF core system itself. IRAF is both a data analysis system and a programming environment. As a data analysis system it can be easily installed by a user at a remote site and immediately used to view and process data. As a programming environment IRAF contains a wealth of high and low level facilities for developing new applications for interactive and automated processing of astronomical or other data. As important as the applications programs and user interfaces are to the scientist using IRAF, the heart of the IRAF system is the programming environment. The programming environment determines to a large extent the types of applications which can be built within IRAF, what they will look like, and how they will interact with one another and with the user. While applications can be easily added to or removed from a software system, the programming environment must remain fairly stable, with carefully planned evolution and growth, over the lifetime of a system. The IRAF programming environment is the framework on which the rest of the IRAF system is built. The IRAF programming environment as it exists in 1992, and the work currently underway to enhance the environment, are discussed. The structure of the programming environment as a class hierarchy is discussed, with emphasis on the work being done on the image data structures, graphics and image display interfaces, and user interfaces. The new technologies which we feel IRAF must deal with successfully over the coming years are discussed. Finally, a preview of what IRAF might look like to the user by the end of the decade is presented.

Patent
24 Mar 1992
TL;DR: In this article, an interactive intelligent interface is presented for a system that performs interactive processing, recognizes the capacity of its user, and provides a processing method conformable to that recognized capacity.
Abstract: An interactive intelligent interface in a system which performs interactive processing that makes the system recognize the capacity of a user of the system. The system has a function of providing a processing method conformable to the recognized capacity of the system user, whereby the processing method performed by the system is changed in accordance with the operating or processing capacity of the system user, so that a procedure desired by the system user can be performed irrespective of that capacity.

Proceedings Article
01 Sep 1992
TL;DR: In this article, the prediction of human-computer interfaces using Fitts' law is reviewed and techniques for model building are summarized and three refinements to improve the theoretical and empirical accuracy of the law are presented.
Abstract: The prediction of movement time in human-computer interfaces as undertaken using Fitts' law is reviewed. Techniques for model building are summarized and three refinements to improve the theoretical and empirical accuracy of the law are presented. Refinements include (1) the Shannon formulation for the index of task difficulty, (2) new interpretations of "target width" for two- and three-dimensional tasks, and (3) a technique for normalizing error rates across experimental factors. Finally, a detailed application example is developed showing the potential of Fitts' law to predict and compare the performance of user interfaces before designs are finalized.
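As a worked illustration of the Shannon formulation the abstract mentions, MT = a + b * log2(D/W + 1), the sketch below predicts movement time from target distance D and width W. The constants a and b here are illustrative assumptions, not values fitted in the paper.

```python
import math

# Sketch of the Shannon formulation of Fitts' law:
#   MT = a + b * log2(D / W + 1)
# D: distance to the target, W: target width, a and b: empirically
# fitted constants. The defaults below are illustrative assumptions.

def index_of_difficulty(distance, width):
    """Index of difficulty in bits (Shannon formulation)."""
    return math.log2(distance / width + 1)

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds for fitted constants a, b."""
    return a + b * index_of_difficulty(distance, width)

# A wider target has a lower index of difficulty, hence a lower
# predicted movement time: designs can be compared before testing.
wide = movement_time(distance=256, width=64)    # ID = log2(5) bits
narrow = movement_time(distance=256, width=16)  # ID = log2(17) bits
print(wide < narrow)
```

This kind of calculation is what lets Fitts' law compare candidate interface layouts before a design is finalized, as the abstract describes.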

Journal ArticleDOI
TL;DR: Two research communities exist in the USA, one focused on information system functionality and organizational impact, the other on human-computer dialogues or ‘user interfaces’ to systems and applications.

Proceedings ArticleDOI
01 Jun 1992
TL;DR: A conventional approach to multiparty videoconferencing is to support a four-way meeting using a Picture-in-a-Picture (PIP) device; in this approach, each remote participant's image is placed in one quadrant of the screen of a single monitor.

Proceedings ArticleDOI
01 Jun 1992
TL;DR: It is believed that if the Cognitive Walkthrough is ultimately to be successful in industrial settings, the method must be refined and augmented in a variety of ways.
Abstract: The Cognitive Walkthrough methodology was developed in an effort to bring cognitive theory closer to practice: to enhance the design and evaluation of user interfaces in industrial settings. For the first time, small teams of professional developers have used this method to critique three complex software systems. In this paper we report evidence about how the methodology worked for these evaluations. We focus on five core issues: (1) task selection, coverage, and evaluation, (2) the process of doing a Cognitive Walkthrough, (3) requisite knowledge for the evaluators, (4) group walkthroughs, and (5) the interpretation of results. Our findings show that many variables can affect the success of the technique; we believe that if the Cognitive Walkthrough is ultimately to be successful in industrial settings, the method must be refined and augmented in a variety of ways.

Proceedings ArticleDOI
01 Jun 1992
TL;DR: Computer-based toggle switches can be very confusing because the design needs to signal to the user the appropriate activity necessary to perform the desired action.
Abstract: Computer-based toggle switches can be very confusing. The most common problem encountered is the confusion between state indication and possible action label: does the label ON indicate the state of the device, or does it indicate the resulting state when the toggle is activated? Another common problem comes from the difficulty of deciding what to do to change the state of the device. The design needs to signal to the user the appropriate activity necessary to perform the desired action. For example, Valk showed that users were confused by a design which showed a slider switch where only touches on the end of the slider were permitted, but "sliding" was not possible [5].
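The state-versus-action ambiguity the abstract describes can be made concrete with a small sketch: the same boolean device state can be labeled either with its current state or with the action that activating the toggle will perform, and the two conventions produce opposite labels. The function names are illustrative, not from the paper.

```python
# Illustration of the state-vs-action labeling ambiguity for a toggle.
# A label "ON" can mean "the device is on" (state indication) or
# "activating this turns the device on" (action label).

def state_label(is_on):
    """Label showing the current state of the device."""
    return "ON" if is_on else "OFF"

def action_label(is_on):
    """Label showing the resulting state when the toggle is activated."""
    return "OFF" if is_on else "ON"

# For a device that is currently on, the two conventions disagree,
# which is exactly the confusion the paper identifies.
print(state_label(True), action_label(True))  # ON OFF
```

A user who reads "ON" cannot tell which convention the designer intended without additional cues, which is why the paper argues the design itself must signal the appropriate activity.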

Journal ArticleDOI
01 Oct 1992
TL;DR: A high-level and flexible framework for supporting the construction of multiuser interfaces based on a generalized editing interaction model that allows users to view programs as active data that can be concurrently edited by multiple users is developed.
Abstract: We have developed a high-level and flexible framework for supporting the construction of multiuser interfaces. The framework is based on a generalized editing interaction model, which allows users to view programs as active data that can be concurrently edited by multiple users. It consists of several novel components including a refinement of both the Seeheim UIMS architecture and the distributed graphics architecture that explicitly addresses multiuser interaction; the abstractions of shared active variables and interaction variables, which allow users and applications to exchange information; a set of default collaboration rules designed to keep the collaboration-awareness low in multiuser programs; and a small but powerful set of primitives for overriding these rules. The framework allows users to be dynamically added and removed from a multiuser session, different users to use different user interfaces to interact with an application, the modules interacting with a particular user to execute on the local workstation, and programmers to incrementally trade automation for flexibility. We have implemented the framework as part of a system called Suite. This paper motivates, describes, and illustrates the framework using the concrete example of Suite, discusses how it can be implemented in other kinds of systems, compares it with related work, discusses its shortcomings, and suggests directions for future work.

ReportDOI
01 Jun 1992
TL;DR: The tutorial style is meant to provide the reader with the ability to run PRODIGY and make use of all the basic features, as well as gradually learning the more esoteric aspects of PRODIGY 4.0.
Abstract: PRODIGY is a general-purpose problem-solving architecture that serves as a basis for research in planning, machine learning, apprentice-type knowledge-refinement interfaces, and expert systems. This document is a manual for the latest version of the PRODIGY system, PRODIGY 4.0, and includes descriptions of the PRODIGY representation language, control structure, user interface, abstraction module, and other features. The tutorial style is meant to provide the reader with the ability to run PRODIGY and make use of all the basic features, as well as gradually learning the more esoteric aspects of PRODIGY 4.0.

Proceedings ArticleDOI
Yin Yin Wong1
03 May 1992
TL;DR: This paper argues that interface design can be made more effective by borrowing techniques from graphic design, since coded prototypes do not facilitate quick turnaround and require a complete interface definition.
Abstract: This paper argues that interface design can be made more effective by borrowing techniques from graphic design. User interface designers often explore interface ideas through coded prototypes, which do not facilitate quick turnaround and require a complete interface definition. This method of prototyping is too detailed and laborious to appropriately facilitate early design decisions, such as brainstorming about the task the interface will support.

Journal ArticleDOI
TL;DR: Recommendations for research on interfaces to support end-user information seeking include: attention to multimedia information sources, development of interfaces that integrate information-seeking functions, support for collaborative information seeking, use of multiple input/output devices in parallel, integration of advanced information retrieval techniques in systems for end users, and development of adaptable interfaces to meet individual difference and multicultural needs.
Abstract: Essential features of interfaces to support end-user information seeking are discussed and illustrated. Examples of interfaces to support the following basic information-seeking functions are presented: problem definition, source selection, problem articulation, examination of results, and information extraction. It is argued that present interfaces focus on problem articulation and examination of results functions, and research and development are needed to support the problem definition and information extraction functions. General recommendations for research on interfaces to support end-user information seeking include: attention to multimedia information sources, development of interfaces that integrate information-seeking functions, support for collaborative information seeking, use of multiple input/output devices in parallel, integration of advanced information retrieval techniques in systems for end users, and development of adaptable interfaces to meet individual difference and multicultural needs.