
Showing papers on "User interface published in 2000"


Journal ArticleDOI
TL;DR: The MCRpd interaction model for tangible interfaces as discussed by the authors is a conceptual framework for tangible user interfaces, which relates the role of physical and digital representations, physical control, and underlying digital models.
Abstract: We present steps toward a conceptual framework for tangible user interfaces. We introduce the MCRpd interaction model for tangible interfaces, which relates the role of physical and digital representations, physical control, and underlying digital models. This model serves as a foundation for identifying and discussing several key characteristics of tangible user interfaces. We identify a number of systems exhibiting these characteristics, and situate these within 12 application domains. Finally, we discuss tangible interfaces in the context of related research themes, both within and outside of the human-computer interaction domain.

1,200 citations


Proceedings ArticleDOI
Barry Brumitt1, Brian R. Meyers1, John Krumm1, Amanda Kern1, Steven A. N. Shafer1 
25 Sep 2000
TL;DR: The current research in middleware, world modelling, perception, and service description is described, highlighting some common requirements for any intelligent environment.
Abstract: The EasyLiving project is concerned with development of an architecture and technologies for intelligent environments which allow the dynamic aggregation of diverse I/O devices into a single coherent user experience. Components of such a system include middleware (to facilitate distributed computing), world modelling (to provide location-based context), perception (to collect information about world state), and service description (to support decomposition of device control, internal logic, and user interface). This paper describes the current research in each of these areas, highlighting some common requirements for any intelligent environment.

959 citations


Proceedings ArticleDOI
01 May 2000
TL;DR: Based on a workshop discussion of multiple views, and based on the authors' own design and implementation experience with these systems, eight guidelines for the design of multiple view systems are presented.
Abstract: A multiple view system uses two or more distinct views to support the investigation of a single conceptual entity. Many such systems exist, ranging from computer-aided design (CAD) systems for chip design that display both the logical structure and the actual geometry of the integrated circuit to overview-plus-detail systems that show both an overview for context and a zoomed-in-view for detail. Designers of these systems must make a variety of design decisions, ranging from determining layout to constructing sophisticated coordination mechanisms. Surprisingly, little work has been done to characterize these systems or to express guidelines for their design. Based on a workshop discussion of multiple views, and based on our own design and implementation experience with these systems, we present eight guidelines for the design of multiple view systems.

794 citations


Journal ArticleDOI
TL;DR: This article considers cases of both success and failure in past user interface tools, and extracts a set of themes which can serve as lessons for future work.
Abstract: A user interface software tool helps developers design and implement the user interface. Research on past tools has had enormous impact on today's developers—virtually all applications today are built using some form of user interface tool. In this article, we consider cases of both success and failure in past user interface tools. From these cases we extract a set of themes which can serve as lessons for future work. Using these themes, past tools can be characterized by what aspects of the user interface they addressed, their threshold and ceiling, what path of least resistance they offer, how predictable they are to use, and whether they addressed a target that became irrelevant. We believe the lessons of these past themes are particularly important now, because increasingly rapid technological changes are likely to significantly change user interfaces. We are at the dawn of an era where user interfaces are about to break out of the “desktop” box where they have been stuck for the past 15 years. The next millennium will open with an increasing diversity of user interfaces on an increasing diversity of computerized devices. These devices include hand-held personal digital assistants (PDAs), cell phones, pagers, computerized pens, computerized notepads, and various kinds of desk- and wall-size computers, as well as devices in everyday objects (such as those mounted on refrigerators, or even embedded in truck tires). The increased connectivity of computers, initially evidenced by the World Wide Web, but spreading also with technologies such as personal-area networks, will also have a profound effect on the user interface to computers. Another important force will be recognition-based user interfaces, especially speech, and camera-based vision systems. Other changes we see are an increasing need for 3D and end-user customization, programming, and scripting. All of these changes will require significant support from the underlying user interface software tools.

761 citations


Proceedings ArticleDOI
01 Oct 2000
TL;DR: This work describes an accurate vision-based tracking method for table-top AR environments and tangible user interface (TUI) techniques based on this method that allow users to manipulate virtual objects in a natural and intuitive manner.
Abstract: We address the problems of virtual object interaction and user tracking in a table-top augmented reality (AR) interface. In this setting there is a need for very accurate tracking and registration techniques and an intuitive and useful interface. This is especially true in AR interfaces for supporting face to face collaboration where users need to be able to easily cooperate with each other. We describe an accurate vision-based tracking method for table-top AR environments and tangible user interface (TUI) techniques based on this method that allow users to manipulate virtual objects in a natural and intuitive manner. Our approach is robust, allowing users to cover some of the tracking markers while still returning camera viewpoint information, overcoming one of the limitations of traditional computer vision based systems. After describing this technique we describe its use in prototype AR applications.

733 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to describe the development effort of JUPITER in terms of the underlying human language technologies as well as other system-related issues such as utterance rejection and content harvesting.
Abstract: In early 1997, our group initiated a project to develop JUPITER, a conversational interface that allows users to obtain worldwide weather forecast information over the telephone using spoken dialogue. It has served as the primary research platform for our group on many issues related to human language technology, including telephone-based speech recognition, robust language understanding, language generation, dialogue modeling, and multilingual interfaces. Over a two year period since coming online in May 1997, JUPITER has received, via a toll-free number in North America, over 30000 calls (totaling over 180000 utterances), mostly from naive users. The purpose of this paper is to describe our development effort in terms of the underlying human language technologies as well as other system-related issues such as utterance rejection and content harvesting. We also present some evaluation results on the system and its components.

697 citations


01 Jan 2000
TL;DR: In this paper, a formal, domain-independent definition of design patterns allows for computer support without sacrificing readability, and pattern use is integrated into the usability engineering life cycle; the resulting pattern language was then used to inform follow-up projects and support HCI education.
Abstract: To create successful interactive systems, user interface designers need to cooperate with developers and application domain experts in an interdisciplinary team. These groups, however, usually lack a common terminology to exchange ideas, opinions, and values. This paper presents an approach that uses pattern languages to capture this knowledge in software development, HCI, and the application domain. A formal, domain-independent definition of design patterns allows for computer support without sacrificing readability, and pattern use is integrated into the usability engineering life cycle. As an example, experience from building an award-winning interactive music exhibit was turned into a pattern language, which was then used to inform follow-up projects and support HCI education.

680 citations


Proceedings ArticleDOI
01 Apr 2000
TL;DR: A user interface that organizes Web search results into hierarchical categories, allowing users to focus on items in categories of interest rather than having to browse through all the results sequentially.
Abstract: We developed a user interface that organizes Web search results into hierarchical categories. Text classification algorithms were used to automatically classify arbitrary search results into an existing category structure on-the-fly. A user study compared our new category interface with the typical ranked list interface of search results. The study showed that the category interface is superior both in objective and subjective measures. Subjects liked the category interface much better than the list interface, and they were 50% faster at finding information that was organized into categories. Organizing search results allows users to focus on items in categories of interest rather than having to browse through all the results sequentially.
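The on-the-fly classification step can be sketched as follows. This is an illustrative keyword-overlap stand-in, not the paper's method: the actual system used trained text classification algorithms against an existing category hierarchy, and the categories and keywords below are hypothetical.

```python
# Hypothetical category hierarchy: category name -> indicative keywords.
CATEGORY_KEYWORDS = {
    "Health": {"medicine", "diet", "exercise", "clinic"},
    "Travel": {"flight", "hotel", "itinerary", "airline"},
    "Finance": {"stock", "loan", "mortgage", "invest"},
}

def categorize(results):
    """Group (title, snippet) search results under the best-matching category."""
    grouped = {name: [] for name in CATEGORY_KEYWORDS}
    grouped["Other"] = []
    for title, snippet in results:
        words = set(snippet.lower().split())
        # Score each category by keyword overlap with the result snippet.
        best, best_score = "Other", 0
        for name, keywords in CATEGORY_KEYWORDS.items():
            score = len(words & keywords)
            if score > best_score:
                best, best_score = name, score
        grouped[best].append(title)
    return grouped

results = [
    ("Cheap flights", "compare flight and airline fares for your hotel stay"),
    ("Stock tips", "how to invest in a stock or mortgage fund"),
]
groups = categorize(results)   # {"Travel": ["Cheap flights"], "Finance": ["Stock tips"], ...}
```

A real implementation would replace the overlap score with a trained classifier's confidence, but the grouping logic around it stays the same.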

661 citations


Proceedings ArticleDOI
01 Apr 2000
TL;DR: The article presents the model, applies it to describe and compare a number of interaction techniques, and shows how it was used to create a new interface for searching and replacing text.
Abstract: This article introduces a new interaction model called Instrumental Interaction that extends and generalizes the principles of direct manipulation. It covers existing interaction styles, including traditional WIMP interfaces, as well as new interaction styles such as two-handed input and augmented reality. It defines a design space for new interaction techniques and a set of properties for comparing them. Instrumental Interaction describes graphical user interfaces in terms of domain objects and interaction instruments. Interaction between users and domain objects is mediated by interaction instruments, similar to the tools and instruments we use in the real world to interact with physical objects. The article presents the model, applies it to describe and compare a number of interaction techniques, and shows how it was used to create a new interface for searching and replacing text.

588 citations


Proceedings ArticleDOI
25 Sep 2000
TL;DR: ComMotion as mentioned in this paper is a location-aware computing environment which links personal information to locations in its user's life; for example, comMotion reminds one of her shopping list when she nears a grocery store.
Abstract: comMotion is a location-aware computing environment which links personal information to locations in its user's life; for example, comMotion reminds one of her shopping list when she nears a grocery store. Using satellite-based GPS position sensing, comMotion gradually learns about the locations in its user's daily life based on travel patterns. The full set of comMotion functionality, including map display, requires a graphical user interface. However, because it is intended primarily for mobile use, including driving, the core set of reminder creation and retrieval can be managed completely by speech.

572 citations


BookDOI
01 Jan 2000
Abstract: Adaptive Hypermedia and Adaptive Web-Based Systems (Lecture Notes in Computer Science).

Journal ArticleDOI
TL;DR: The induction of hybrid user models that consist of separate models for short-term and long-term interests are proposed, and it is suggested that effective personalization can be achieved without requiring any extra effort from the user.
Abstract: We present a framework for adaptive news access, based on machine learning techniques specifically designed for this task. First, we focus on the system's general functionality and system architecture. We then describe the interface and design of two deployed news agents that are part of the described architecture. While the first agent provides personalized news through a web-based interface, the second system is geared towards wireless information devices such as PDAs (personal digital assistants) and cell phones. Based on implicit and explicit user feedback, our agents use a machine learning algorithm to induce individual user models. Motivated by general shortcomings of other user modeling systems for Information Retrieval applications, as well as the specific requirements of news classification, we propose the induction of hybrid user models that consist of separate models for short-term and long-term interests. Furthermore, we illustrate how the described algorithm can be used to address an important issue that has thus far received little attention in the Information Retrieval community: a user's information need changes as a direct result of interaction with information. We empirically evaluate the system's performance based on data collected from regular system users. The goal of the evaluation is not only to understand the performance contributions of the algorithm's individual components, but also to assess the overall utility of the proposed user modeling techniques from a user perspective. Our results provide empirical evidence for the utility of the hybrid user model, and suggest that effective personalization can be achieved without requiring any extra effort from the user.
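The hybrid short-term/long-term idea can be sketched as below. This is a simplified, hypothetical stand-in: it blends keyword overlap with recent stories (short-term) against keyword familiarity from the whole reading history (long-term), rather than the machine learning models the deployed agents actually induced.

```python
from collections import Counter

class HybridUserModel:
    """Toy hybrid interest model: a short-term window of recently read
    stories plus a long-term keyword profile. All parameters are
    illustrative, not taken from the paper."""

    def __init__(self, window=5):
        self.recent = []          # short-term: keyword sets of recent stories
        self.profile = Counter()  # long-term: cumulative keyword counts
        self.window = window

    def observe(self, keywords):
        """Record a story the user chose to read (implicit feedback)."""
        self.recent = (self.recent + [set(keywords)])[-self.window:]
        self.profile.update(keywords)

    def score(self, keywords, alpha=0.7):
        """Blend short-term similarity with long-term keyword familiarity."""
        kw = set(keywords)
        # Short-term: best overlap with any recently read story.
        short = max((len(kw & r) / len(kw) for r in self.recent), default=0.0)
        # Long-term: fraction of this story's keywords seen before.
        long_ = sum(1 for k in kw if self.profile[k] > 0) / len(kw)
        return alpha * short + (1 - alpha) * long_

model = HybridUserModel()
model.observe(["nba", "basketball", "playoffs"])
model.observe(["election", "senate"])
sports_score = model.score(["nba", "finals"])
cooking_score = model.score(["cooking", "recipes"])
```

The short-term component reacts quickly to the current session, while the long-term component changes slowly, which is the division of labor the paper argues for.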

Patent
27 May 2000
TL;DR: In this article, a phrase-based modeling of generic structures of verbal interaction is proposed for the purpose of automating part of the design of grammar networks, which can regulate, control, and define the content and scope of human-machine interaction in natural language voice user interfaces.
Abstract: The invention enables creation of grammar networks that can regulate, control, and define the content and scope of human-machine interaction in natural language voice user interfaces (NLVUI). More specifically, the invention concerns a phrase-based modeling of generic structures of verbal interaction and use of these models for the purpose of automating part of the design of such grammar networks.

Patent
03 Mar 2000
TL;DR: In this article, a distributed electronic entertainment method and apparatus are described, where a central management resource is coupled to multiple out-of-home venues through a wide area network (WAN).
Abstract: A distributed electronic entertainment method and apparatus are described. In one embodiment, a central management resource is coupled to multiple out-of-home venues through a wide area network (WAN). The central management resource stores content and performs management, monitoring and entertainment content delivery functions. At each venue at least one entertainment unit is coupled to the WAN. Multiple entertainment units in a venue are coupled to each other through a local area network (LAN). In one embodiment, an entertainment unit includes a user interface that comprises at least one graphical user interface (GUI). The entertainment unit further comprises a local memory device that stores entertainment content comprising music, a peripheral interface, and a user input device. A plurality of peripheral devices are coupled to the at least one entertainment unit via the peripheral interface, wherein a user, through the user input device and the user interface, performs at least one activity from a group comprising, playing music, playing electronic games, viewing television content, and browsing the Internet.

Patent
29 Dec 2000
TL;DR: In this article, a system for dynamic distribution of audio signals at a site based on defined zones within the site is presented, where a plurality of addressable audio devices are coupled to a local network for the site which are configured to receive a designated digital audio stream over the local network and to output the received audio stream to audio equipment located at the site.
Abstract: Systems and methods are provided for dynamic distribution of audio signals at a site based on defined zones within the site. A plurality of addressable audio devices are coupled to a local network for the site which are configured to receive a designated digital audio stream over the local network and to output the received digital audio stream to audio equipment located at the site. A zone manager defines a plurality of zones for the site which may include a plurality of the addressable audio devices. The zone manager defines a relationship between a characteristic of the audio signal for a reference audio device and for the addressable audio devices in the zones. An audio interface receives digital audio streams and outputs the digital audio streams on the local network addressed to selected ones of the audio devices based on the defined zones, the defined relationship between a characteristic of the audio signal for a reference audio device and for the addressable audio devices and a control input associated with the characteristic. A user interface is provided which is configured to receive a user designation of the control input. Systems and methods for dynamic aggregation of audio equipment in zones are also provided.

Journal ArticleDOI
TL;DR: This survey examines computer-aided techniques used by HCI practitioners and researchers to extract usability-related information from user interface events and provides a conceptual evaluation to help identify some of the relative merits and drawbacks of the various classes of approaches.
Abstract: Modern window-based user interface systems generate user interface events as natural products of their normal operation. Because such events can be automatically captured and because they indicate user behavior with respect to an application's user interface, they have long been regarded as a potentially fruitful source of information regarding application usage and usability. However, because user interface events are typically voluminous and rich in detail, automated support is generally required to extract information at a level of abstraction that is useful to investigators interested in analyzing application usage or evaluating usability. This survey examines computer-aided techniques used by HCI practitioners and researchers to extract usability-related information from user interface events. A framework is presented to help HCI practitioners and researchers categorize and compare the approaches that have been, or might fruitfully be, applied to this problem. Because many of the techniques in the research literature have not been evaluated in practice, this survey provides a conceptual evaluation to help identify some of the relative merits and drawbacks of the various classes of approaches. Ideas for future research in this area are also presented. This survey addresses the following questions: How might user interface events be used in evaluating usability? How are user interface events related to other forms of usability data? What are the key challenges faced by investigators wishing to exploit this data? What approaches have been brought to bear on this problem and how do they compare to one another? What are some of the important open research questions in this area?
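As a minimal illustration of extracting usability information from a UI event stream, the sketch below computes one hypothetical indicator: how often a dialog is opened and then immediately cancelled, a possible sign that users reach the dialog by mistake. The event names are invented for the example and do not come from the survey.

```python
def cancel_rate(events, open_evt="dialog.open", cancel_evt="dialog.cancel"):
    """Fraction of dialog opens that are immediately followed by a cancel."""
    opens = cancels = 0
    # Walk the log as (current, next) pairs; the final event has no successor.
    for cur, nxt in zip(events, events[1:] + [None]):
        if cur == open_evt:
            opens += 1
            if nxt == cancel_evt:
                cancels += 1
    return cancels / opens if opens else 0.0

log = ["app.start", "dialog.open", "dialog.cancel",
       "dialog.open", "dialog.ok", "dialog.open", "dialog.cancel"]
rate = cancel_rate(log)   # 2 of 3 opens immediately cancelled
```

Real analyses in the surveyed literature operate at higher abstraction levels (task models, sequence grammars), but they start from automatically captured streams like this one.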

Patent
13 Dec 2000
TL;DR: In this paper, a userbar is established which includes a plurality of item representations and a magnification function is provided which magnifies items within the userbar when they are proximate the cursor associated with the graphical user interface.
Abstract: Methods and systems for providing graphical user interfaces are described. To provide greater access and consolidation to frequently used items in the graphical user interface, a userbar is established which includes a plurality of item representations. To permit a greater number of items to reside in the userbar, a magnification function can be provided which magnifies items within the userbar when they are proximate the cursor associated with the graphical user interface.
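The magnification function described in the claim can be sketched as a distance-based scaling curve: items near the cursor grow, and the effect falls off with distance. The cosine falloff and all parameter values below are illustrative assumptions, not taken from the patent.

```python
import math

def magnified_size(item_x, cursor_x, base=32.0, max_scale=2.0, radius=100.0):
    """Return an item's displayed size given its distance to the cursor:
    full magnification directly under the cursor, decaying smoothly to
    the base size at `radius` pixels away."""
    d = abs(item_x - cursor_x)
    if d >= radius:
        return base
    # Cosine falloff: t is 1 at d = 0 and 0 at d = radius.
    t = 0.5 * (1 + math.cos(math.pi * d / radius))
    return base * (1 + (max_scale - 1) * t)

under_cursor = magnified_size(100, 100)   # fully magnified
far_away = magnified_size(300, 100)       # unmagnified base size
halfway = magnified_size(150, 100)        # partially magnified
```

Any smooth monotone falloff would do; a smooth curve simply avoids visible size jumps as the cursor sweeps along the userbar.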

Patent
19 Dec 2000
TL;DR: In this article, a method of informing a first network user of activity by other network users includes receiving information identifying television programming viewed by at least one other network user and displaying the information to the first user on a user interface.
Abstract: A method of informing a first network user of activity by other network users includes receiving information identifying television programming viewed by at least one other network user and displaying the information to the first network user on a user interface.

Journal ArticleDOI
TL;DR: A four-phase framework for creativity that might assist designers in providing effective tools for users is offered, which proposes eight activities that require human-computer interaction research and advanced user interface design.
Abstract: A challenge for human-computer interaction researchers and user interface designers is to construct information technologies that support creativity. This ambitious goal can be attained by building on an adequate understanding of creative processes. This article offers a four-phase framework for creativity that might assist designers in providing effective tools for users: (1) Collect: learn from previous works stored in libraries, the Web, etc.; (2) Relate: consult with peers and mentors at early, middle, and late stages; (3) Create: explore, compose, and evaluate possible solutions; and (4) Donate: disseminate the results and contribute to the libraries. Within this integrated framework, this article proposes eight activities that require human-computer interaction research and advanced user interface design. A scenario about an architect illustrates the process of creative work within such an environment.

Patent
04 May 2000
TL;DR: The Cooperative Help Assistance (CHA) system as discussed by the authors provides real-time user assistance for one or more windows-based GUI applications or a single application's different subsections such as web pages, running concurrently in any operating system.
Abstract: A Cooperative Help Assistance (CHA) system and method provide real-time user assistance for one or more windows-based Graphic User Interface (GUI) applications or a single application's different subsections such as web pages, running concurrently in any operating system. The CHA System enables the development of an informative assistance object independently from the original source code or development environment of the target Host Application. The assistance object can be selected by any number of user interfaces from sophisticated inference driven interactive interface search tools or categorized lists. By intercepting and monitoring user actions on a Host Application, the CHA system performs intelligent assistance in the context of the target host application program. Utilizing a Host Application Model, the CHA System and method dynamically assemble many elements in real-time or just-in-time to produce assistance sequences or elements very efficiently without having to code every interface path permutation. Paths can be dynamically generated from the Host Application Model, which enables a real-time module to offer intelligent, contextual assistance as well as real-time construction of automated, accelerated CHA Sequences or Procedures that require little or no user interaction. All assistance and information are processed and expressed by an extensive multitasking, multimedia subsystem for two dimensional (2D) and real-time three-dimensional (3D) application interfaces, which greatly enhances and extends the effectiveness of any explanation or material expression. The production of Assistant Sequences is facilitated by the Host Application Model and 2D and 3D GUI “drag and drop” interface tools.

Journal ArticleDOI
TL;DR: The paper examines the mathematical tools that have proven successful, provides a taxonomy of the problem domain, and then examines the state of the art: person identification, surveillance/monitoring, 3D methods, and smart rooms/perceptual user interfaces.
Abstract: The research topic of looking at people, that is, giving machines the ability to detect, track, and identify people and more generally, to interpret human behavior, has become a central topic in machine vision research. Initially thought to be the research problem that would be hardest to solve, it has proven remarkably tractable and has even spawned several thriving commercial enterprises. The principal driving application for this technology is "fourth generation" embedded computing: "smart" environments and portable or wearable devices. The key technical goals are to determine the computer's context with respect to nearby humans (e.g., who, what, when, where, and why) so that the computer can act or respond appropriately without detailed instructions. The paper examines the mathematical tools that have proven successful, provides a taxonomy of the problem domain, and then examines the state of the art. Four areas receive particular attention: person identification, surveillance/monitoring, 3D methods, and smart rooms/perceptual user interfaces. Finally, the paper discusses some of the research challenges and opportunities.

Proceedings ArticleDOI
09 Oct 2000
TL;DR: Polaris is presented, an interface for exploring large multi-dimensional databases that extends the well-known Pivot Table interface; it includes an interface for constructing visual specifications of table-based graphical displays and the ability to generate a precise set of relational queries from the visual specifications.
Abstract: In the last several years, large multi-dimensional databases have become common in a variety of applications such as data warehousing and scientific computing. Analysis and exploration tasks place significant demands on the interfaces to these databases. Because of the size of the data sets, dense graphical representations are more effective for exploration than spreadsheets and charts. Furthermore, because of the exploratory nature of the analysis, it must be possible for the analysts to change visualizations rapidly as they pursue a cycle involving first hypothesis and then experimentation. The authors present Polaris, an interface for exploring large multi-dimensional databases that extends the well-known Pivot Table interface. The novel features of Polaris include an interface for constructing visual specifications of table-based graphical displays and the ability to generate a precise set of relational queries from the visual specifications. The visual specifications can be rapidly and incrementally developed, giving the analyst visual feedback as they construct complex queries and visualizations.
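The translation from a visual specification to a relational query can be sketched roughly as follows. This is a simplified stand-in: the field and table names are hypothetical, and the real system compiles a much richer table algebra than a single GROUP BY.

```python
def spec_to_sql(table, rows, cols, measure, agg="SUM"):
    """Compile a toy visual spec (row shelf, column shelf, one aggregated
    measure) into a single relational query string."""
    dims = list(rows) + list(cols)   # every shelf field becomes a grouping column
    select = ", ".join(dims + [f"{agg}({measure}) AS {measure}_{agg.lower()}"])
    group = ", ".join(dims)
    return f"SELECT {select} FROM {table} GROUP BY {group}"

sql = spec_to_sql("sales", rows=["region"], cols=["quarter"], measure="profit")
# SELECT region, quarter, SUM(profit) AS profit_sum FROM sales GROUP BY region, quarter
```

Each cell of the resulting table visualization then plots the aggregate for one (region, quarter) combination, which is why the spec can be refined incrementally: every shelf change maps to a new, precise query.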

Proceedings ArticleDOI
25 Sep 2000
TL;DR: Cybre-Minder is described, a prototype context-aware tool that supports users in sending and receiving reminders that can be associated to richly described situations involving time, place and more sophisticated pieces of context.
Abstract: Current tools do not provide adequate support to users for handling reminders. The main reason for this is the lack of use of rich context that specifies when a reminder should be presented to its recipient. We describe Cybre-Minder, a prototype context-aware tool that supports users in sending and receiving reminders that can be associated to richly described situations involving time, place and more sophisticated pieces of context. These situations better define when reminders should be delivered, enhancing our ability to deal with them more effectively. We describe how the tool is used and how it was developed using our previously developed Context Toolkit infrastructure for context-aware computing.
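Context-matched delivery of this kind can be sketched as a predicate over the current context: a reminder fires only when every condition attached to it holds. The context keys and reminder format below are hypothetical; the actual tool is built on the authors' Context Toolkit infrastructure.

```python
def due_reminders(reminders, context):
    """Return the text of every reminder whose attached situation is
    fully satisfied by the current context."""
    return [r["text"] for r in reminders
            if all(context.get(k) == v for k, v in r["when"].items())]

reminders = [
    {"text": "Talk to Anind", "when": {"location": "office", "colleague_present": True}},
    {"text": "Buy milk", "when": {"location": "store"}},
]
ctx = {"location": "office", "colleague_present": True, "time": "14:00"}
fired = due_reminders(reminders, ctx)   # ["Talk to Anind"]
```

The richness the paper emphasizes comes from what can appear in the situation description (time, place, co-location, activity), not from the matching logic itself, which stays this simple.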

Patent
29 Nov 2000
TL;DR: In this article, a method of navigating within a plurality of bit-maps through a client user interface, comprising the steps of displaying at least a portion of a first one of the bitmaps on the client interface, receiving a gesture at the client UI, and in response to the gesture, altering the display by substituting a different bitmap for the first bit-map.
Abstract: A method of navigating within a plurality of bit-maps through a client user interface, comprising the steps of displaying at least a portion of a first one of the bit-maps on the client user interface, receiving a gesture at the client user interface, and in response to the gesture, altering the display by substituting at least a portion of a different one of the bit-maps for at least a portion of the first bit-map.

Patent
26 Jan 2000
TL;DR: In this paper, the authors describe a new device bridging the gap between the virtual multimedia-based Internet world and the real world, best exemplified by print media, which relates to communicating multimedia information using a scanner for machine-readable code containing a link information corresponding to a provider information depicted on the printed medium.
Abstract: The present invention describes a revolutionary new device bridging the gap between the virtual multimedia-based Internet world and the real world, best exemplified by print media. More particularly, the invention relates to communicating multimedia information using a scanner for machine-readable code containing a link information corresponding to a provider information depicted on the printed medium, a user interface for obtaining user input information corresponding to the provider information, a communications bridge for sending the link information and the user input information via the network, a receiver in communication with the scanner, capable of receiving the link information and user input information, and further capable of receiving and playing a multimedia information sequence, and a portal server in communication with the scanner via the network capable of selecting a multimedia information sequence corresponding to the link information and the user input information.

Journal ArticleDOI
TL;DR: Iterative design and a preliminary user evaluation suggest that audio is an appropriate medium for mobile messaging, but that care must be taken to minimally intrude on the wearer's social and physical environment.
Abstract: Mobile workers need seamless access to communication and information services while on the move. However, current solutions overwhelm users with intrusive interfaces and ambiguous notifications. This article discusses the interaction techniques developed for Nomadic Radio, a wearable computing platform for managing voice and text-based messages in a nomadic environment. Nomadic Radio employs an auditory user interface, which synchronizes speech recognition, speech synthesis, nonspeech audio, and spatial presentation of digital audio, for navigating among messages as well as asynchronous notification of newly arrived messages. Emphasis is placed on an auditory modality as Nomadic Radio is designed to be used while performing other tasks in a user's everyday environment; a range of auditory cues provides peripheral awareness of incoming messages. Notification is adaptive and context sensitive; messages are presented as more or less obtrusive based on importance inferred from content filtering, whether the user is engaged in conversation, and his or her own recent responses to prior messages. Auditory notifications are dynamically scaled from ambient sound through recorded voice cues up to message summaries. Iterative design and a preliminary user evaluation suggest that audio is an appropriate medium for mobile messaging, but that care must be taken to minimally intrude on the wearer's social and physical environment.
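The dynamic scaling of notifications can be sketched as a mapping from inferred message priority and listener context to a presentation level. The level names, weights, and thresholds below are illustrative assumptions, not the paper's actual policy.

```python
# Presentation levels from least to most obtrusive.
LEVELS = ["silent", "ambient_cue", "auditory_cue", "summary", "full_audio"]

def notification_level(priority, in_conversation, recent_dismissals):
    """Map an inferred priority (0..1) and simple context signals to one
    of the presentation levels above."""
    score = priority
    if in_conversation:
        score -= 0.3                     # be less intrusive while the user talks
    score -= 0.1 * recent_dismissals     # back off if recent messages were ignored
    idx = max(0, min(len(LEVELS) - 1, int(score * len(LEVELS))))
    return LEVELS[idx]

urgent_alone = notification_level(0.9, False, 0)   # play the full message
urgent_busy = notification_level(0.9, True, 2)     # scale down to a short cue
junk_busy = notification_level(0.0, True, 5)       # suppress entirely
```

The point of the design is that obtrusiveness is a continuous resource: the same message is rendered differently depending on what the wearer is doing and how they have been responding.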

Journal ArticleDOI
TL;DR: The emerging architectural approaches for interpreting speech and pen-based gestural input in a robust manner are summarized, including early and late fusion approaches and the new hybrid symbolic-statistical approach.
Abstract: The growing interest in multimodal interface design is inspired in large part by the goals of supporting more transparent, flexible, efficient, and powerfully expressive means of human-computer interaction than in the past. Multimodal interfaces are expected to support a wider range of diverse applications, be usable by a broader spectrum of the average population, and function more reliably under realistic and challenging usage conditions. In this article, we summarize the emerging architectural approaches for interpreting speech and pen-based gestural input in a robust manner, including early and late fusion approaches, and the new hybrid symbolic-statistical approach. We also describe a diverse collection of state-of-the-art multimodal systems that process users' spoken and gestural input. These applications range from map-based and virtual reality systems for engaging in simulations and training, to field medic systems for mobile use in noisy environments, to web-based transactions and standard text-editing applications that will reshape daily computing and have a significant commercial impact. To realize successful multimodal systems of the future, many key research challenges remain to be addressed. Among these challenges are the development of cognitive theories to guide multimodal system design, and the development of effective natural language processing, dialogue processing, and error-handling techniques. In addition, new multimodal systems will be needed that can function more robustly and adaptively, and with support for collaborative multiperson use. Before this new class of systems can proliferate, toolkits also will be needed to promote software development for both simulated and functioning systems.
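In the late fusion architecture the abstract mentions, each recognizer produces its own ranked hypotheses, and a fusion step scores compatible pairs to pick a joint interpretation. A minimal sketch under those assumptions; the weighted-sum scoring and the `compatible` predicate are illustrative simplifications, not the specific algorithm surveyed:

```python
# Toy sketch of late (decision-level) fusion: combine independently
# recognized speech and gesture hypotheses into one joint interpretation.
# The linear weighting scheme is an assumption for illustration.

def late_fusion(speech_hyps, gesture_hyps, compatible,
                w_speech=0.6, w_gesture=0.4):
    """speech_hyps, gesture_hyps: lists of (interpretation, confidence).

    compatible(s, g): domain predicate saying the two interpretations
    can form a coherent multimodal command (e.g. a verb plus a target).
    Returns the best-scoring compatible pair and its combined score.
    """
    best, best_score = None, float("-inf")
    for s, p_s in speech_hyps:
        for g, p_g in gesture_hyps:
            if not compatible(s, g):
                continue  # prune pairs that cannot co-refer
            score = w_speech * p_s + w_gesture * p_g
            if score > best_score:
                best, best_score = (s, g), score
    return best, best_score
```

Because fusion happens over each recognizer's hypothesis list rather than a single best guess, a lower-ranked speech hypothesis can still win when the gesture strongly supports it, which is one source of the robustness the article emphasizes.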

Patent
21 Aug 2000
TL;DR: In this article, a wireless communication system that utilizes a remote voice recognition server system to translate voice input received from serviced mobile devices into a symbolic data file (e.g., alpha-numeric or control characters) that can be processed by the mobile devices is presented.
Abstract: A wireless communication system that utilizes a remote voice recognition server system to translate voice input received from serviced mobile devices into a symbolic data file (e.g. alpha-numeric or control characters) that can be processed by the mobile devices. The translation process begins by establishing a voice communication channel between the serviced mobile device and the voice recognition server. A user of the mobile device then begins speaking in a fashion that may be detected by the voice recognition server system. Upon detecting the user's speech, the voice recognition server system translates the speech into a symbolic data file, which is then forwarded to the user through a separate data communication channel. The user, upon receiving the symbolic data file at the mobile device, reviews and edits the content and further utilizes the file as desired.
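The patent's flow separates the two channels: speech travels to the server over a voice channel, and the resulting symbolic data file comes back over a distinct data channel. A schematic sketch of that sequence, with every function name invented as a placeholder (the patent describes no API):

```python
# Hypothetical sketch of the two-channel translation flow described in
# the abstract. The three callables stand in for the voice channel, the
# server's recognizer, and the data channel; all names are assumptions.

def serve_request(open_voice_channel, recognize, send_over_data_channel):
    audio = open_voice_channel()       # 1. user speaks on the voice channel
    symbolic = recognize(audio)        # 2. server translates speech to characters
    send_over_data_channel(symbolic)   # 3. result returned on the data channel
    return symbolic
```

The point of the split is that the bandwidth-heavy recognition stays server-side, while the mobile device only ever handles the compact symbolic result it can review and edit.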

Journal ArticleDOI
TL;DR: The resulting ideas about interface design for fieldworkers are formulated into two general principles: Minimal Attention User Interfaces (MAUIs) and context awareness.
Abstract: “Using while moving” is the basic ability fieldwork users require of a mobile computer system. These users come from a wide range of backgrounds but have in common an extremely mobile and dynamic workplace. We identify four specific characteristics of this class of users: dynamic user configuration, limited attention capacity, high-speed interaction, and context dependency. A prototype is then presented that was designed to assist fieldworkers in data collection tasks and to explore the HCI design issues involved. The prototype was used in an extensive field trial by a group of ecologists observing giraffe behavior in Kenya. Following this trial, improvements were made to the prototype interface which in turn was tested in a subsequent field trial with another group of ecologists. From this experience, we have formulated our resulting ideas about interface design for fieldworkers into two general principles: Minimal Attention User Interfaces (MAUIs) and context awareness. The MAUI seeks to minimize the attention, though not necessarily the number of interactions, required from the user in operating a device. Context awareness enables the mobile device to provide assistance based on a knowledge of its environment.

Patent
18 Aug 2000
TL;DR: A user interface device and system for providing a shared GTM and CDN (collectively Universal Distribution Network) for a service fee, where the customer or user does not need to purchase significant hardware and/or software features is presented in this article.
Abstract: A user interface device and system for providing a shared GTM and CDN (collectively a Universal Distribution Network) for a service fee, where the customer or user does not need to purchase significant hardware and/or software features. The present interface device and system allows a customer to scale up its Web site without a need for expensive and difficult-to-use hardware and/or software. In a preferred embodiment, the customer merely pays a service fee, which can be fixed, variable, lump-sum, or based upon a subscription model using the present system. The present device and system are preferably implemented on a system including a novel combination of global traffic management and content distribution.