
Showing papers in "ACM Transactions on Computer-Human Interaction in 2001"


Journal ArticleDOI
TL;DR: A study of mobile workers that highlights different facets of access to remote people and information, and different facets of anytime, anywhere; four key factors in mobile work are identified.
Abstract: The rapid and accelerating move towards use of mobile technologies has increasingly provided people and organizations with the ability to work away from the office and on the move. The new ways of working afforded by these technologies are often characterized in terms of access to information and people anytime, anywhere. This article presents a study of mobile workers that highlights different facets of access to remote people and information, and different facets of anytime, anywhere. Four key factors in mobile work are identified: the role of planning, working in "dead time," accessing remote technological and informational resources, and monitoring the activities of remote colleagues. By reflecting on these issues, we can better understand the role of technology and artifacts in mobile work and identify the opportunities for the development of appropriate technological solutions to support mobile workers.

599 citations


Journal ArticleDOI
TL;DR: It is argued that filers may engage in premature filing: to clear their workspace, they archive information that later turns out to be of low value; given the effort involved in organizing data, they are also loath to discard filed information, even when its value is uncertain.
Abstract: We explored general issues concerning personal information management by investigating the characteristics of office workers' paper-based information in an industrial research environment. We examined the reasons people collect paper, the types of data they collect, the problems encountered in handling paper, and the strategies used for processing it. We tested three specific hypotheses in the course of an office move. The greater availability of public digital data, along with changes in people's jobs or interests, should have led to wholesale discarding of paper data while preparing for the move. Instead, we found workers kept large, highly valued paper archives. We also expected that the major part of people's personal archives would be unique documents. However, only 49% of people's archives were unique documents, the remainder being copies of publicly available data and unread information, and we explore the reasons for this. We examined the effects of paper-processing strategies on archive structure. We discovered different paper-processing strategies (filing and piling) that were relatively independent of job type. We predicted that filers' attempts to evaluate and categorize incoming documents would produce smaller archives that were accessed frequently. Contrary to our predictions, filers amassed more information, and accessed it less frequently, than pilers. We argue that filers may engage in premature filing: to clear their workspace, they archive information that later turns out to be of low value. Given the effort involved in organizing data, they are also loath to discard filed information, even when its value is uncertain. We discuss the implications of this research for digital personal information management.

230 citations


Journal Article
TL;DR: A novel method for real-time finger tracking on an augmented desk system is developed by introducing an infrared camera, pattern matching with normalized correlation, and a pan-tilt camera.
Abstract: This article describes the design and implementation of an augmented desk system, named EnhancedDesk, which smoothly integrates paper and digital information on a desk. The system provides users with an intelligent environment that automatically retrieves and displays digital information corresponding to the real objects (e.g., books) on the desk by using computer vision. The system also lets users directly manipulate digital information with their own hands and fingers for more natural and more intuitive interaction. Based on experiments with our first prototype system, some critical issues in augmented desk systems were identified when pursuing rapid and fine recognition of hands and fingers. To overcome these issues, we developed a novel method for real-time finger tracking on an augmented desk system by introducing an infrared camera, pattern matching with normalized correlation, and a pan-tilt camera. We then show an interface prototype on EnhancedDesk: an application to a computer-supported learning environment, named Interactive Textbook. The system shows how effective the integration of paper and digital information is, and how natural and intuitive direct manipulation of digital information with users' hands and fingers is.
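The normalized-correlation matching step named above can be sketched in miniature. The example below is illustrative only: plain Python lists stand in for grayscale image data, and the infrared and pan-tilt camera pipeline is omitted. It does show why the method is robust: the score is invariant to brightness and contrast scaling.

```python
# Sketch of template matching with normalized cross-correlation (NCC).
# Toy 1-D "image row" data; a real system would scan 2-D image patches.
import math

def ncc(patch, template):
    """Normalized cross-correlation of two equal-sized patches, in [-1, 1]."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    return num / (dp * dt)

def best_match(row, template):
    """Slide the template along the row; return the offset with the best score."""
    w = len(template)
    scores = [ncc(row[i:i + w], template) for i in range(len(row) - w + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

# A bright fingertip-like bump at offset 5; the template is a dimmer copy
# of the same shape, so NCC still scores it as a perfect match.
row = [10, 11, 12, 13, 11, 40, 90, 40, 11, 10, 12, 14]
template = [20, 45, 20]
print(best_match(row, template))  # 5
```

Because the score normalizes out mean intensity and scale, a fingertip remains matchable as lighting varies, which is one reason correlation-based matching pairs well with an infrared camera's relatively stable imagery.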

225 citations


Journal ArticleDOI
TL;DR: A performance model of (recognition-based) multimodal interaction that predicts input speed including time needed for error correction is introduced, which suggests that recognition accuracy determines user choice between modalities: while users initially prefer speech, they learn to avoid ineffective correction modalities with experience.
Abstract: Although commercial dictation systems and speech-enabled telephone voice user interfaces have become readily available, speech recognition errors remain a serious problem in the design and implementation of speech user interfaces. Previous work hypothesized that switching modality could speed up interactive correction of recognition errors. This article presents multimodal error correction methods that allow the user to correct recognition errors efficiently without keyboard input. Correction accuracy is maximized by novel recognition algorithms that use context information for recognizing correction input. Multimodal error correction is evaluated in the context of a prototype multimodal dictation system. The study shows that unimodal repair is less accurate than multimodal error correction. On a dictation task, multimodal correction is faster than unimodal correction by respeaking. The study also provides empirical evidence that system-initiated error correction (based on confidence measures) may not expedite error correction. Furthermore, the study suggests that recognition accuracy determines user choice between modalities: while users initially prefer speech, they learn to avoid ineffective correction modalities with experience. To extrapolate results from this user study, the article introduces a performance model of (recognition-based) multimodal interaction that predicts input speed including time needed for error correction. Applied to interactive error correction, the model predicts the impact of improvements in recognition technology on correction speeds, and the influence of recognition accuracy and correction method on the productivity of dictation systems. This model is a first step toward formalizing multimodal interaction.
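The shape of such a performance model can be sketched as follows. This is a simplified illustration of predicting input speed including correction time, not the paper's actual model; all numbers and the retry assumption are invented.

```python
# Hedged sketch: expected time per word = entry time + P(error) * expected
# correction cost, where a correction attempt may itself fail and be retried
# (a geometric series, hence the 1 / (1 - p_correct_fails) factor).

def expected_time_per_word(t_input, p_error, t_correct, p_correct_fails=0.0):
    """Expected seconds to enter one word, including error correction."""
    expected_corrections = 1.0 / (1.0 - p_correct_fails)
    return t_input + p_error * t_correct * expected_corrections

# Unimodal repair by respeaking: corrections often fail and must be retried
# (illustrative numbers, not the paper's measurements).
speech = expected_time_per_word(t_input=0.4, p_error=0.10,
                                t_correct=2.0, p_correct_fails=0.25)

# Multimodal correction (e.g., switching to pen input): each correction is
# slightly cheaper and rarely fails.
multimodal = expected_time_per_word(t_input=0.4, p_error=0.10,
                                    t_correct=1.5, p_correct_fails=0.05)
print(round(speech, 3), round(multimodal, 3))
```

Even this toy version reproduces the qualitative prediction: improving correction accuracy (lowering the retry probability) can matter more for throughput than raising raw entry speed.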

211 citations


Journal ArticleDOI
TL;DR: Cassowary is described---an incremental algorithm based on the dual simplex method, which can solve systems of constraints efficiently and is implemented as part of a constraint-solving toolkit.
Abstract: Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that one window be to the left of another, requiring that a pane occupy the leftmost third of a window, or preferring that an object be contained within a rectangle if possible. Previous constraint solvers designed for user interface applications cannot handle simultaneous linear equations and inequalities efficiently. This is a major limitation, as such systems of constraints arise often in natural declarative specifications. We describe Cassowary---an incremental algorithm based on the dual simplex method, which can solve such systems of constraints efficiently. We have implemented the algorithm as part of a constraint-solving toolkit. We discuss the implementation of the toolkit, its application programming interface, and its performance.
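The kind of specification Cassowary accepts can be illustrated with the equality-only case. The sketch below is emphatically not the Cassowary algorithm (which handles inequalities and incremental re-solving via the dual simplex method); it only shows how layout relations become a linear system, solved here with textbook Gaussian elimination. Variable names are invented.

```python
# Solve A x = b by Gaussian elimination with partial pivoting.
def solve_linear(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# "A pane occupies the leftmost third of a 900-px window", over the unknowns
# [pane_left, pane_right]:
#   pane_left == window_left (= 0)        ->  [ 1, 0 | 0   ]
#   pane_right - pane_left == 900 / 3     ->  [-1, 1 | 300 ]
A = [[1.0, 0.0],
     [-1.0, 1.0]]
b = [0.0, 900.0 / 3.0]
pane_left, pane_right = solve_linear(A, b)
print(pane_left, pane_right)  # 0.0 300.0
```

Cassowary's contribution is precisely what this sketch lacks: mixing inequalities ("one window left of another") with equalities, honoring constraint strengths, and re-solving incrementally as the user drags objects.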

200 citations


Journal ArticleDOI
Kori Inkpen
TL;DR: Children's use of two common mouse interaction styles, drag-and-drop and point-and-click, is investigated to determine whether the choice of interaction style impacts children's performance in interactive learning environments and whether either method is superior to the other in terms of speed, error rate, or user preference.
Abstract: This research investigates children's use of two common mouse interaction styles, drag-and-drop and point-and-click, to determine whether the choice of interaction style impacts children's performance in interactive learning environments. The interaction styles were experimentally compared to determine if either method was superior to the other in terms of speed, error rate, or user preference, for children. The two interaction styles were also compared based on children's achievement and motivation, within a commercial software environment. Experiment I used an interactive learning environment as children played two versions of an educational puzzle-solving game, each version utilizing a different mouse interaction style; experiment II used a mouse-controlled software environment modeled after the educational game. The results were similar to previous results reported for adults: the point-and-click interaction style was faster; fewer errors were committed using it; and it was preferred over the drag-and-drop interaction style. Within the context of the puzzle-solving game, the children solved significantly fewer puzzles, and they were less motivated using the version that utilized a drag-and-drop interaction style as compared to the version that utilized a point-and-click interaction style. These results were also explored through the use of state-transition diagrams and GOMS models, both of which supported the experimental data gathered.

119 citations


Journal ArticleDOI
TL;DR: This research suggests that some of the educational deficiencies of Direct Manipulation (DM) interfaces are not necessarily caused by their “directness,” but by what they are directed at—in this case directness toward objects rather than embedded educational concepts being learned.
Abstract: This research investigates the role of interface manipulation style on reflective cognition and concept learning through a comparison of the effectiveness of three versions of a software application for learning two-dimensional transformation geometry. The three versions respectively utilize a Direct Object Manipulation (DOM) interface, in which the user manipulates the visual representation of the objects being transformed; a Direct Concept Manipulation (DCM) interface, in which the user manipulates the visual representation of the transformation being applied to the object; and a Reflective Direct Concept Manipulation (RDCM) interface, in which the DCM approach is extended with scaffolding. Empirical results of a study showed that grade-6 students using the RDCM version learned significantly more than those using the DCM version, who in turn learned significantly more than those using the DOM version. Students using the RDCM version had to process information consciously and think harder than those using the DCM and DOM versions. Despite the relative difficulty of the RDCM interface style, all three groups expressed a similar (positive) level of liking for the software. This research suggests that some of the educational deficiencies of Direct Manipulation (DM) interfaces are not necessarily caused by their “directness,” but by what they are directed at—in this case, directness toward objects rather than the embedded educational concepts being learned. This paper furthers our understanding of how the DM metaphor can be used in learning- and knowledge-centered software (i.e., learnware) by proposing a new DM metaphor (i.e., DCM) and incorporating scaffolding to enhance the DCM approach to promote reflective cognition and deep learning.

97 citations


Journal ArticleDOI
TL;DR: A software system, called Tuneserver, which recognizes a musical tune whistled by the user, finds it in a database, and returns its name, composer, and other information; the service is useful for track retrieval at radio stations, music stores, etc., and is a step toward the long-term goal of communicating with a computer much as one would with a human being.
Abstract: We present a software system, called Tuneserver, which recognizes a musical tune whistled by the user, finds it in a database, and returns its name, composer, and other information. Such a service is useful for track retrieval at radio stations, music stores, etc., and is also a step toward the long-term goal of communicating with a computer much as one would with a human being. Tuneserver is implemented as a public Java-based WWW service with a database of approximately 10,000 motifs. Tune recognition is based on a highly error-resistant encoding, proposed by Parsons, that uses only the direction of the melody, ignoring the size of intervals as well as rhythm. We present the design and implementation of the tune recognition core, outline the design of the Web service, and describe the results obtained in an empirical evaluation of the new interface, including the derivation of suitable system parameters, resulting performance figures, and an error analysis.
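The Parsons encoding the system relies on is easy to demonstrate: each note is reduced to its direction relative to the previous one ("U" up, "D" down, "R" repeat, "*" for the first note). Below is a minimal sketch with a two-entry toy database and plain edit-distance matching; Tuneserver's actual matching procedure and 10,000-motif database are not reproduced here.

```python
def parsons(pitches):
    """Encode a pitch sequence as a Parsons code: intervals become U/D/R."""
    code = "*"
    for prev, cur in zip(pitches, pitches[1:]):
        code += "U" if cur > prev else "D" if cur < prev else "R"
    return code

def edit_distance(a, b):
    """Levenshtein distance with a rolling row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

# Toy database. "Ode to Joy" opens E E F G G F E D, i.e. *RUURDDD.
DATABASE = {
    "*RUURDDD": "Beethoven, Ode to Joy",
    "*UUDUUDR": "(made-up motif)",
}

def lookup(whistled_pitches):
    """Return the database key closest to the whistled query."""
    query = parsons(whistled_pitches)
    return min(DATABASE, key=lambda k: edit_distance(query, k))

# MIDI-like pitches for a mis-whistled Ode to Joy (last note off by a step):
best = lookup([64, 64, 65, 67, 67, 65, 64, 64])
print(DATABASE[best])  # Beethoven, Ode to Joy
```

Discarding interval sizes and rhythm is what makes the code "highly error-resistant": an off-key whistle usually still moves in the right direction, so the query stays within a small edit distance of the stored motif.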

97 citations


Journal ArticleDOI
TL;DR: A “knowledge/usability graph” is introduced, which shows the impact of even a small amount of user knowledge, and the extent to which designers' knowledge may bias their views of usability.
Abstract: How hard do users find interactive devices to use to achieve their goals, and how can we get this information early enough to influence design? We show that Markov modeling can obtain suitable measures, and we provide formulas that can be used for a large class of systems. We analyze and consider alternative designs for various real examples. We introduce a “knowledge/usability graph,” which shows the impact of even a small amount of user knowledge, and the extent to which designers' knowledge may bias their views of usability. Markov models can be built into design tools, and can therefore be made very convenient for designers to utilize. One would hope that in the future, design tools would include such mathematical analysis, and no new design skills would be required to evaluate devices. A particular concern of this paper is to make the approach accessible. Complete program code and all the underlying mathematics are provided in appendices to enable others to replicate and test all results shown.
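The core calculation is straightforward to illustrate. The sketch below uses an invented three-state device, not one of the paper's examples: an uninformed user pressing buttons is modeled as a Markov chain over device states, and the expected number of presses to reach the goal satisfies t_i = 1 + sum_j P[i][j] * t_j (with t_goal = 0), computed here by fixed-point iteration.

```python
# States: 0 = standby, 1 = menu, 2 = goal. P[i][j] is the chance that the
# user's next press moves the device from state i to state j (invented numbers).
P = [[0.6, 0.4, 0.0],   # from standby: presses often do nothing useful
     [0.3, 0.4, 0.3],   # from menu: the goal is one lucky press away
     [0.0, 0.0, 1.0]]   # goal is absorbing
GOAL = 2

def expected_presses(P, goal, iters=10_000):
    """Expected number of presses to reach `goal` from each state."""
    t = [0.0] * len(P)
    for _ in range(iters):
        t = [0.0 if i == goal else
             1.0 + sum(P[i][j] * t[j] for j in range(len(P)))
             for i in range(len(P))]
    return t

t = expected_presses(P, GOAL)
print(round(t[0], 2))  # 8.33 expected presses from standby
```

Solving the same equations for a redesigned transition matrix gives a direct, quantitative comparison of alternative designs, which is exactly the kind of measure the article argues a design tool could compute automatically.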

88 citations


Journal ArticleDOI
TL;DR: This article describes how direct manipulation human-computer interfaces can be augmented with techniques borrowed from cartoon animators, aiming to improve the visual feedback of a direct manipulation interface by smoothing the changes of an interface, giving manipulated objects a feeling of substance, and providing cues that anticipate the result of a manipulation.
Abstract: If judiciously applied, animation techniques can enhance the look and feel of computer applications that present a graphical human interface. Such techniques can smooth the rough edges and abrupt transitions common in many current graphical interfaces, and strengthen the illusion of direct manipulation that many interfaces strive to present. To date, few applications include such animation techniques. One possible reason is that animated interfaces are difficult to implement: they are difficult to design, place great burdens on programmers, and demand high performance from underlying graphics systems. This article describes how direct manipulation human-computer interfaces can be augmented with techniques borrowed from cartoon animators. In particular, we wish to improve the visual feedback of a direct manipulation interface by smoothing the changes of an interface, giving manipulated objects a feeling of substance, and providing cues that anticipate the result of a manipulation. Our approach is to add support for animation techniques such as object distortion and keyframe interpolation, and to provide prepackaged animation effects such as animated widgets for common user interface interactions. To determine if these tools and techniques are practical and effective, we built a prototype direct manipulation drawing editor with an animated interface and used the prototype editor to carry out a set of human factors experiments. The experiments show that the techniques are practical even on standard workstation hardware, and that the effects can indeed enhance direct manipulation interfaces.
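The keyframe-interpolation idea with a cartoon-style slow-in/slow-out ease is simple to sketch: replace linear interpolation between keyframes with an easing curve that has zero velocity at both ends, so motion accelerates and decelerates instead of starting and stopping abruptly. The "smoothstep" polynomial below is a standard choice, not necessarily the curve used in the article's prototype.

```python
def smoothstep(u):
    """Slow-in/slow-out easing: zero velocity at u = 0 and u = 1."""
    return u * u * (3.0 - 2.0 * u)

def interpolate(start, end, u, ease=smoothstep):
    """Position of an animated object at normalized time u in [0, 1]."""
    e = ease(u)
    return tuple(s + (t - s) * e for s, t in zip(start, end))

# Animate a window from (0, 0) to (200, 100) over 5 frames: steps are small
# at the ends and large in the middle, giving the motion a feeling of weight.
frames = [interpolate((0, 0), (200, 100), i / 4) for i in range(5)]
print(frames[0], frames[2], frames[4])
```

The same interpolation hook also accommodates the article's other effects: object distortion, for instance, can be driven by the instantaneous "velocity" of the eased parameter (stretching along the direction of motion).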

81 citations


Journal ArticleDOI
TL;DR: The observed mouse-pointing times suggest that people use a slower and more accurate speed-accuracy operating characteristic to select a target with a mouse when visual distractors are present, which suggests that Fitts' law coefficients derived from standard mouse-pointing experiments may under-predict mouse-pointing times for typical human-computer interactions.
Abstract: An experiment investigates (1) how the physical structure of a computer screen layout affects visual search and (2) how people select a found target object with a mouse. Two structures are examined---labeled visual hierarchies (groups of objects with one label per group) and unlabeled visual hierarchies (groups without labels). Search and selection times were separated by imposing a point-completion deadline that discouraged participants from moving the mouse until they found the target. The observed search times indicate that labeled visual hierarchies can be searched much more efficiently than unlabeled visual hierarchies, and suggest that people use a fundamentally different strategy for each of the two structures. The results have implications for screen layout design and cognitive modeling of visual search. The observed mouse-pointing times suggest that people use a slower and more accurate speed-accuracy operating characteristic to select a target with a mouse when visual distractors are present, which suggests that Fitts' law coefficients derived from standard mouse-pointing experiments may under-predict mouse-pointing times for typical human-computer interactions. The observed mouse-pointing times also demonstrate that mouse movement times for a two-dimensional pointing task can be most accurately predicted by setting the w in Fitts' law to the width of the target along the line of approach.
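The Fitts' law point can be made concrete. In the Shannon formulation, MT = a + b * log2(D/W + 1); the finding above is that for two-dimensional pointing, W should be the target's width measured along the line of approach. The coefficients and target geometry below are illustrative, not the paper's fitted values.

```python
import math

def fitts_mt(a, b, distance, width):
    """Predicted movement time (Shannon form of Fitts' law)."""
    return a + b * math.log2(distance / width + 1.0)

a, b = 0.1, 0.15   # intercept (s) and slope (s/bit): illustrative only
D = 300.0          # distance to target center, pixels

# A wide-but-short button (120 x 20 px): the effective width along the
# approach line differs 6x depending on the direction of approach.
horizontal = fitts_mt(a, b, D, width=120.0)  # approach along the long side
vertical = fitts_mt(a, b, D, width=20.0)     # approach along the short side
print(round(horizontal, 3), round(vertical, 3))
```

Using the nominal width for both approaches would predict identical times; using the width along the line of approach correctly predicts that the vertical approach to this button is markedly slower.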

Journal ArticleDOI
TL;DR: Model dependencies provide a useful new mechanism for improving the storage efficiency of dataflow constraint systems, especially when a large number of constrained objects must be managed.
Abstract: Dataflow constraints allow programmers to easily specify relationships among application objects in a natural, declarative manner. Most constraint solvers represent these dataflow relationships as directed edges in a dataflow graph. Unfortunately, dataflow graphs require a great deal of storage. Consequently, an application with a large number of constraints can get pushed into virtual memory, and performance degrades in interactive applications. Our solution is based on the observation that objects derived from the same class use the same constraints, and thus have the same dataflow graphs. We represent the common dataflow patterns in a model dataflow graph that is stored with the class. Instance objects may derive explicit dependencies from this graph when the dependencies are needed. Model dependencies provide a useful new mechanism for improving the storage efficiency of dataflow constraint systems, especially when a large number of constrained objects must be managed.
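The storage-saving idea can be sketched as follows. This is an invented miniature API, not the authors' system: the dataflow pattern shared by every instance of a class lives once on the class as a "model" graph, and an instance derives the explicit dependency only when a value is actually demanded, rather than storing a per-object edge list.

```python
class ModelGraph:
    """Class-level dataflow: maps each slot to its upstream slots and formula."""
    def __init__(self):
        self.deps = {}      # slot -> tuple of upstream slot names
        self.formulas = {}  # slot -> function of upstream values

    def constrain(self, slot, upstream, formula):
        self.deps[slot] = upstream
        self.formulas[slot] = formula

class Rect:
    model = ModelGraph()    # one graph shared by every Rect, not one per Rect

    def __init__(self, left, width):
        self.slots = {"left": left, "width": width}

    def value(self, slot):
        # Derive the dependency lazily from the class-level model graph
        # instead of materializing a dataflow edge on each instance.
        if slot in Rect.model.deps:
            args = [self.value(s) for s in Rect.model.deps[slot]]
            return Rect.model.formulas[slot](*args)
        return self.slots[slot]

Rect.model.constrain("right", ("left", "width"), lambda l, w: l + w)

rects = [Rect(left=i, width=10) for i in range(1000)]
print(rects[3].value("right"))  # 13: computed from the shared graph
```

The thousand rectangles here share one dependency description; in a per-instance representation, each would carry its own copy of the same edges, which is exactly the overhead that pushes large constraint systems into virtual memory.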