Author

Jeffrey S. Pierce

Bio: Jeffrey S. Pierce is an academic researcher from Samsung. The author has contributed to research in topics including mobile devices and user interfaces. The author has an h-index of 26 and has co-authored 74 publications receiving 3,746 citations. Previous affiliations of Jeffrey S. Pierce include the Georgia Institute of Technology and the University of Virginia.


Papers
Proceedings ArticleDOI
01 Nov 2000
TL;DR: This work introduces and integrates a set of sensors into a handheld device, and demonstrates several new functionalities engendered by the sensors, such as recording memos when the device is held like a cell phone, switching between portrait and landscape display modes by holding the device in the desired orientation.
Abstract: We describe sensing techniques motivated by unique aspects of human-computer interaction with handheld devices in mobile settings. Special features of mobile interaction include changing orientation and position, changing venues, the use of computing as auxiliary to ongoing, real-world activities like talking to a colleague, and the general intimacy of use for such devices. We introduce and integrate a set of sensors into a handheld device, and demonstrate several new functionalities engendered by the sensors, such as recording memos when the device is held like a cell phone, switching between portrait and landscape display modes by holding the device in the desired orientation, automatically powering up the device when the user picks it up to start using it, and scrolling the display using tilt. We present an informal experiment, initial usability testing results, and user reactions to these techniques.
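The tilt-driven behaviors this abstract describes (portrait/landscape switching and tilt scrolling) can be sketched roughly as below. The thresholds, gains, and function names are illustrative assumptions for this sketch, not parameters from the paper.

```python
import math

def classify_orientation(ax, ay):
    """Classify device orientation from the accelerometer's x/y
    components (the gravity vector in the screen plane). The 45-degree
    bands are an assumed, illustrative choice."""
    angle = math.degrees(math.atan2(ax, ay))  # 0 degrees = upright portrait
    if -45 <= angle < 45:
        return "portrait"
    elif 45 <= angle < 135:
        return "landscape-left"
    elif angle >= 135 or angle < -135:
        return "portrait-upside-down"
    else:
        return "landscape-right"

def tilt_scroll_rate(pitch_deg, dead_zone=5.0, gain=2.0, max_rate=40.0):
    """Map forward/back tilt to a scroll rate, with a dead zone so small
    hand tremors do not scroll the display and a cap on the rate."""
    if abs(pitch_deg) < dead_zone:
        return 0.0
    rate = gain * (abs(pitch_deg) - dead_zone)
    return math.copysign(min(rate, max_rate), pitch_deg)
```

Holding the device still in a new orientation would then trigger a display-mode switch once `classify_orientation` reports a stable value for some debounce interval.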

729 citations

Patent
06 Jun 2001
TL;DR: In this article, the orientation of a device is determined by detecting movement followed by an end of movement of the device and the orientation is then determined and used to set the orientation on the display.
Abstract: In a device having a display, a change in focus for an application is used with a requested usage of a context attribute to change the amount of information regarding the context attribute that is sent to another application. A method of changing the orientation of images on a device's display detects movement followed by an end of movement of the device. The orientation of the device is then determined and is used to set the orientation of images on the display. A method of setting the orientation of a display also includes storing information regarding an item displayed in a first orientation before changing the orientation. When the orientation is returned to the first orientation, the stored information is retrieved and is used to display the item in the first orientation. The stored information can include whether the item is to appear in the particular orientation.

565 citations

Proceedings ArticleDOI
30 Apr 1997
TL;DR: This paper presents a set of interaction techniques for use in head-tracked immersive virtual environments that can be used for object selection, object manipulation, and user navigation in virtual environments.
Abstract: This paper presents a set of interaction techniques for use in head-tracked immersive virtual environments. With these techniques, the user interacts with the 2D projections that 3D objects in the scene make on his image plane. The desktop analog is the use of a mouse to interact with objects in a 3D scene based on their projections on the monitor screen. Participants in an immersive environment can use the techniques we discuss for object selection, object manipulation, and user navigation in virtual environments. CR Categories and Subject Descriptors: I.3.6 [Computer Graphics]: Methodology and Techniques - Interaction Techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Virtual Reality. Additional Keywords: virtual worlds, virtual environments, navigation, selection, manipulation.
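The image-plane idea above (pick the 3D object whose 2D projection lies closest to the cursor) can be sketched with a simplified pinhole projection. The camera model and function names are assumptions for illustration; the paper's setup uses the head-tracked eye position.

```python
import math

def project(point, eye, focal=1.0):
    """Perspective-project a 3D point onto the image plane of a viewer
    at `eye` looking down the -z axis (simplified pinhole model)."""
    x, y, z = (p - e for p, e in zip(point, eye))
    if z >= 0:
        return None  # behind the viewer in this -z convention
    return (focal * x / -z, focal * y / -z)

def pick(objects, cursor2d, eye):
    """Select the object whose 2D projection is nearest the 2D cursor,
    so all interaction happens on the image plane rather than in 3D."""
    best, best_dist = None, float("inf")
    for name, position in objects.items():
        proj = project(position, eye)
        if proj is None:
            continue
        d = math.dist(proj, cursor2d)
        if d < best_dist:
            best, best_dist = name, d
    return best
```

Once an object is selected this way, manipulation can likewise be expressed as motion of its projection on the image plane.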

449 citations

Proceedings ArticleDOI
26 Apr 1999
TL;DR: The Voodoo Dolls technique is a two-handed interaction technique for manipulating objects at a distance in immersive virtual environments that allows both visible and occluded objects to be selected, and provides a stationary frame of reference for working relative to moving objects.
Abstract: The Voodoo Dolls technique is a two-handed interaction technique for manipulating objects at a distance in immersive virtual environments. This technique addresses some limitations of existing techniques: they do not provide a lightweight method of interacting with objects of widely varying sizes, and many limit the objects that can be selected and the manipulations possible after making a selection. With the Voodoo Dolls technique, the user dynamically creates dolls: transient, hand held copies of objects whose effects on the objects they represent are determined by the hand holding them. For simplicity, we assume a right-handed user in the following discussion. When a user holds a doll in his right hand and moves it relative to a doll in his left hand, the object represented by the doll in his right hand moves to the same position and orientation relative to the object represented by the doll in his left hand. The system scales the dolls so that the doll in the left hand is half a meter along its longest dimension and the other dolls maintain the same relative size; this allows the user to work seamlessly at multiple scales. The Voodoo Dolls technique also allows both visible and occluded objects to be selected, and provides a stationary frame of reference for working relative to moving objects.
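The core arithmetic the abstract spells out (scale the dolls so the left-hand doll is half a meter along its longest dimension, then map the right-hand doll's offset from the left-hand doll back onto the represented objects) can be sketched as follows. Function names and the position-only treatment are illustrative assumptions; the technique also handles orientation.

```python
def doll_scale(reference_bbox):
    """Scale factor that makes the left-hand (reference) doll measure
    0.5 m along its longest dimension; all dolls share this factor so
    their relative sizes are preserved (0.5 m is the paper's value)."""
    longest = max(reference_bbox)
    return 0.5 / longest

def apply_doll_motion(ref_obj_pos, ref_doll_pos, moved_doll_pos, scale):
    """Place the manipulated object so its offset from the reference
    object equals the right-hand doll's offset from the left-hand doll,
    with the doll scale undone (positions only, for brevity)."""
    offset = tuple((m - r) / scale for m, r in zip(moved_doll_pos, ref_doll_pos))
    return tuple(o + p for o, p in zip(offset, ref_obj_pos))
```

Because the scale is recomputed whenever a new reference doll is created, the user can manipulate a building and a doorknob with the same hand motions, just at different effective scales.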

229 citations

Proceedings ArticleDOI
06 Apr 2008
TL;DR: Opportunities to improve the user experience are suggested by focusing on the user rather than the applications and devices; making devices aware of their roles; and providing lighter-weight methods for transferring information, including synchronization services that engender more trust from users.
Abstract: The number of computing devices that people use is growing. To gain a better understanding of why and how people use multiple devices, we interviewed 27 people from academia and industry. From these interviews we distill four primary findings. First, associating a user's activities with a particular device is problematic for multiple device users because many activities span multiple devices. Second, device use varies by user and circumstance; users assign different roles to devices both by choice and by constraint. Third, users in industry want to separate work and personal activities across work and personal devices, but they have difficulty doing so in practice. Finally, users employ a variety of techniques for accessing information across devices, but there is room for improvement: participants reported managing information across their devices as the most challenging aspect of using multiple devices. We suggest opportunities to improve the user experience by focusing on the user rather than the applications and devices; making devices aware of their roles; and providing lighter-weight methods for transferring information, including synchronization services that engender more trust from users.

227 citations


Cited by
Journal ArticleDOI
TL;DR: A conceptual framework is presented that separates the acquisition and representation of context from the delivery and reaction to context by a context-aware application, and a toolkit is built that instantiates this conceptual framework and supports the rapid development of a rich space of context- aware applications.
Abstract: Computing devices and applications are now used beyond the desktop, in diverse environments, and this trend toward ubiquitous computing is accelerating. One challenge that remains in this emerging research field is the ability to enhance the behavior of any application by informing it of the context of its use. By context, we refer to any information that characterizes a situation related to the interaction between humans, applications, and the surrounding environment. Context-aware applications promise richer and easier interaction, but the current state of research in this field is still far removed from that vision. This is due to 3 main problems: (a) the notion of context is still ill defined, (b) there is a lack of conceptual models and methods to help drive the design of context-aware applications, and (c) no tools are available to jump-start the development of context-aware applications. In this anchor article, we address these 3 problems in turn. We first define context, identify categories of contextual information, and characterize context-aware application behavior. Though the full impact of context-aware computing requires understanding very subtle and high-level notions of context, we are focusing our efforts on the pieces of context that can be inferred automatically from sensors in a physical environment. We then present a conceptual framework that separates the acquisition and representation of context from the delivery and reaction to context by a context-aware application. We have built a toolkit, the Context Toolkit, that instantiates this conceptual framework and supports the rapid development of a rich space of context-aware applications. We illustrate the usefulness of the conceptual framework by describing a number of context-aware applications that have been prototyped using the Context Toolkit. 
We also demonstrate how such a framework can support the investigation of important research challenges in the area of context-aware computing.
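The separation the framework proposes (acquiring and representing context in one component, delivering it to reacting applications in another) can be sketched as a minimal widget-and-subscriber pattern. The class and method names here are illustrative assumptions, not the actual Context Toolkit API.

```python
from typing import Any, Callable, Dict, List

class ContextWidget:
    """Minimal sketch of the conceptual separation: the widget acquires
    and represents context; applications only subscribe to deliveries
    and never touch the underlying sensors."""

    def __init__(self, attribute: str):
        self.attribute = attribute
        self.subscribers: List[Callable[[Dict[str, Any]], None]] = []

    def subscribe(self, callback: Callable[[Dict[str, Any]], None]) -> None:
        self.subscribers.append(callback)

    def sensor_update(self, raw_value: Any) -> None:
        # Acquisition and representation: normalize the raw reading into
        # a typed context event before delivering it.
        event = {"attribute": self.attribute, "value": raw_value}
        for callback in self.subscribers:
            callback(event)  # delivery; the reaction lives in the application

# An application reacts to location context without knowing which
# sensor produced it.
seen = []
location = ContextWidget("location")
location.subscribe(lambda event: seen.append(event["value"]))
location.sensor_update("room 3231")
```

Swapping the sensor behind `sensor_update` would not change the application code at all, which is the point of the separation.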

3,095 citations

Patent
11 Jan 2011
TL;DR: In this article, an intelligent automated assistant system engages with the user in an integrated, conversational manner using natural language dialog, and invokes external services when appropriate to obtain information or perform various actions.
Abstract: An intelligent automated assistant system engages with the user in an integrated, conversational manner using natural language dialog, and invokes external services when appropriate to obtain information or perform various actions. The system can be implemented using any of a number of different platforms, such as the web, email, smartphone, and the like, or any combination thereof. In one embodiment, the system is based on sets of interrelated domains and tasks, and employs additional functionally powered by external services with which the system can interact.

1,462 citations

Journal ArticleDOI
TL;DR: An analysis of comparative surveys done in the field of gesture based HCI and an analysis of existing literature related to gesture recognition systems for human computer interaction by categorizing it under different key parameters are provided.
Abstract: As computers become more pervasive in society, facilitating natural human---computer interaction (HCI) will have a positive impact on their use. Hence, there has been growing interest in the development of new approaches and technologies for bridging the human---computer barrier. The ultimate aim is to bring HCI to a regime where interactions with computers will be as natural as an interaction between humans, and to this end, incorporating gestures in HCI is an important research area. Gestures have long been considered as an interaction technique that can potentially deliver more natural, creative and intuitive methods for communicating with our computers. This paper provides an analysis of comparative surveys done in this area. The use of hand gestures as a natural interface serves as a motivating force for research in gesture taxonomies, representations and recognition techniques, and software platforms and frameworks, which are discussed briefly in this paper. It focuses on the three main phases of hand gesture recognition, i.e. detection, tracking and recognition. Different applications that employ hand gestures for efficient interaction are discussed under core and advanced application domains. This paper also provides an analysis of existing literature related to gesture recognition systems for human computer interaction by categorizing it under different key parameters. It further discusses the advances that are needed to improve present hand gesture recognition systems so that they can be widely used for efficient human computer interaction. The main goal of this survey is to provide researchers in the field of gesture based HCI with a summary of progress achieved to date and to help identify areas where further research is needed.

1,338 citations

Dissertation
01 Jan 2000
TL;DR: This dissertation shows how the Context Toolkit has been used as a research testbed, supporting the investigation of difficult problems in context-aware computing such as the building of high-level programming abstractions, dealing with ambiguous or inaccurate context data and controlling access to personal context.
Abstract: Traditional interactive applications are limited to using only the input that users explicitly provide. As users move away from traditional desktop computing environments and move towards mobile and ubiquitous computing environments, there is a greater need for applications to leverage implicit information, or context. These types of environments are rich in context, with users and devices moving around and computational services becoming available or disappearing over time. This information is usually not available to applications but can be useful in adapting the way in which an application performs its services and in changing the available services. Applications that use context are known as context-aware applications. This research in context-aware computing has focused on the development of a software architecture to support the building of context-aware applications. While developers have been able to build context-aware applications, they have been limited to using a small variety of sensors that provide only simple context such as identity and location. This dissertation presents a set of requirements and component abstractions for a conceptual supporting framework. The framework, along with an identified design process, makes it easier to acquire and deliver context to applications, and in turn, build more complex context-aware applications. In addition, an implementation of the framework called the Context Toolkit is discussed, along with a number of context-aware applications that have been built with it. The applications illustrate how the toolkit is used in practice and allow an exploration of the design space of context-aware computing.
This dissertation also shows how the Context Toolkit has been used as a research testbed, supporting the investigation of difficult problems in context-aware computing such as the building of high-level programming abstractions, dealing with ambiguous or inaccurate context data and controlling access to personal context.

1,152 citations

Patent
08 Jun 2007
TL;DR: In this paper, liquid-crystal display (LCD) touch screens that integrate the touch sensing elements with the display circuitry are discussed. But the integration may take a variety of forms.
Abstract: Disclosed herein are liquid-crystal display (LCD) touch screens that integrate the touch sensing elements with the display circuitry. The integration may take a variety of forms. Touch sensing elements can be completely implemented within the LCD stackup but not between the color filter plate and the array plate. Alternatively, some touch sensing elements can be between the color filter and array plates with other touch sensing elements not between the plates. In another alternative, all touch sensing elements can be between the color filter and array plates. The latter alternative can include both conventional and in-plane-switching (IPS) LCDs. In some forms, one or more display structures can also have a touch sensing function. Techniques for manufacturing and operating such displays, as well as various devices embodying such displays, are also disclosed.

1,083 citations