Proceedings ArticleDOI

Bricks: laying the foundations for graspable user interfaces

01 May 1995-pp 442-449
TL;DR: This work introduces the concept of Graspable User Interfaces, which allow direct control of electronic or virtual objects through physical handles, and presents a design space for Bricks that lays the foundation for further exploring and developing Graspable User Interfaces.
Abstract: We introduce the concept of Graspable User Interfaces that allow direct control of electronic or virtual objects through physical handles for control. These physical artifacts, which we call "bricks," are essentially new input devices that can be tightly coupled or "attached" to virtual objects for manipulation or for expressing action (e.g., to set parameters or for initiating processes). Our bricks operate on top of a large horizontal display surface known as the "ActiveDesk." We present four stages in the development of Graspable UIs: (1) a series of exploratory studies on hand gestures and grasping; (2) interaction simulations using mock-ups and rapid prototyping tools; (3) a working prototype and sample application called GraspDraw; and (4) the initial integration of the Graspable UI concepts into a commercial application. Finally, we conclude by presenting a design space for Bricks which lays the foundation for further exploring and developing Graspable User Interfaces.


Citations
Proceedings ArticleDOI
27 Mar 1997
TL;DR: Tangible Bits allows users to "grasp & manipulate" bits in the center of users' attention by coupling the bits with everyday physical objects and architectural surfaces, and uses ambient media for background awareness.
Abstract: This paper presents our vision of Human Computer Interaction (HCI): "Tangible Bits." Tangible Bits allows users to "grasp & manipulate" bits in the center of users’ attention by coupling the bits with everyday physical objects and architectural surfaces. Tangible Bits also enables users to be aware of background bits at the periphery of human perception using ambient display media such as light, sound, airflow, and water movement in an augmented space. The goal of Tangible Bits is to bridge the gaps between both cyberspace and the physical environment, as well as the foreground and background of human activities. This paper describes three key concepts of Tangible Bits: interactive surfaces; the coupling of bits with graspable physical objects; and ambient media for background awareness. We illustrate these concepts with three prototype systems ‐ the metaDESK, transBOARD and ambientROOM ‐ to identify underlying research issues.

3,885 citations


Cites background or methods from "Bricks: laying the foundations for ..."

  • ...Tangible Bits is also directly grounded on the previous works of ClearBoard [12] and Graspable User Interfaces [8]....

  • ...Bits flowing through the wires of a computer network become tangible through motion, sound, and even... (Figure 4: ClearBoard [12]; Figure 5: Bricks [8])...

  • ...Graspable User Interfaces [8] (Fitzmaurice, Ishii & Buxton) allow direct control of virtual objects through physical handles called "bricks." Bricks can be "attached" to virtual objects, thus making virtual objects physically graspable....

Journal ArticleDOI
TL;DR: Everyday computing, a new area of applications research focused on scaling interaction with respect to time, is proposed, just as pushing the availability of computing away from the traditional desktop fundamentally changes the relationship between humans and computers.
Abstract: The proliferation of computing into the physical world promises more than the ubiquitous availability of computing infrastructure; it suggests new paradigms of interaction inspired by constant access to information and computational capabilities. For the past decade, application-driven research on ubiquitous computing (ubicomp) has pushed three interaction themes: natural interfaces, context-aware applications, and automated capture and access. To chart a course for future research in ubiquitous computing, we review the accomplishments of these efforts and point to remaining research challenges. Research in ubiquitous computing implicitly requires addressing some notion of scale, whether in the number and type of devices, the physical space of distributed computing, or the number of people using a system. We posit a new area of applications research, everyday computing, focused on scaling interaction with respect to time. Just as pushing the availability of computing away from the traditional desktop fundamentally changes the relationship between humans and computers, providing continuous interaction moves computing from a localized tool to a constant companion. Designing for continuous interaction requires addressing interruption and resumption of interaction, representing passages of time, and providing associative storage models. Inherent in all of these interaction themes are difficult issues in the social implications of ubiquitous computing and the challenges of evaluating ubiquitous computing research. Although cumulative experience points to lessons in privacy, security, visibility, and control, there are no simple guidelines for steering research efforts. Akin to any efforts involving new technologies, evaluation strategies form a spectrum from technology feasibility efforts to long-term use studies, but a user-centric perspective is always possible and necessary.

1,541 citations

Journal ArticleDOI
TL;DR: This work introduces the MCRpd interaction model for tangible interfaces, part of a conceptual framework for tangible user interfaces that relates the roles of physical and digital representations, physical control, and underlying digital models.
Abstract: We present steps toward a conceptual framework for tangible user interfaces. We introduce the MCRpd interaction model for tangible interfaces, which relates the role of physical and digital representations, physical control, and underlying digital models. This model serves as a foundation for identifying and discussing several key characteristics of tangible user interfaces. We identify a number of systems exhibiting these characteristics, and situate these within 12 application domains. Finally, we discuss tangible interfaces in the context of related research themes, both within and outside of the human-computer interaction domain.

1,200 citations

Proceedings ArticleDOI
20 Apr 2002
TL;DR: This paper introduces a new sensor architecture for making interactive surfaces that are sensitive to human hand and finger gestures; its sensing elements can be integrated within the surface, and it does not suffer from lighting and occlusion problems.
Abstract: This paper introduces a new sensor architecture for making interactive surfaces that are sensitive to human hand and finger gestures. This sensor recognizes multiple hand positions and shapes and calculates the distance between the hand and the surface by using capacitive sensing and a mesh-shaped antenna. In contrast to camera-based gesture recognition systems, all sensing elements can be integrated within the surface, and this method does not suffer from lighting and occlusion problems. This paper describes the sensor architecture, as well as two working prototype systems: a table-size system and a tablet-size system. It also describes several interaction techniques that would be difficult to perform without using this architecture.

1,076 citations

Journal ArticleDOI
TL;DR: This paper reviews the major approaches to multimodal human-computer interaction, giving an overview of the field from a computer vision perspective, and focuses on body, gesture, gaze, and affective interaction.

948 citations


Cites background from "Bricks: laying the foundations for ..."

  • ...Glove mounted devices [19] and graspable user interfaces [48], for example, seem now ripe for exploration....


References
Journal ArticleDOI
TL;DR: Consider writing, perhaps the first information technology: The ability to capture a symbolic representation of spoken language for long-term storage freed information from the limits of individual memory.
Abstract: Specialized elements of hardware and software, connected by wires, radio waves and infrared, will soon be so ubiquitous that no-one will notice their presence.

9,073 citations

Journal ArticleDOI
Pierre Wellner1
TL;DR: The DigitalDesk is built around an ordinary physical desk and can be used as such, but it has extra capabilities, including a video camera mounted above the desk that can detect where the user is pointing, and it can read documents that are placed on the desk.

1,127 citations


"Bricks: laying the foundations for ..." refers methods in this paper

  • ...Finally, the DigitalDesk [15] merges our everyday physical desktop with paper documents and electronic documents....

  • ...Interacting with paper on the DigitalDesk....

  • ...The DigitalDesk is a great example of how well we can merge physical and electronic artifacts, taking advantage of the strengths of both mediums....

Journal ArticleDOI
TL;DR: This article presents a tentative theoretical framework for the study of asymmetry in the context of human bimanual action and suggests that the kinematic chain model may help in understanding the adaptive advantage of human manual specialization.
Abstract: This article presents a tentative theoretical framework for the study of asymmetry in the context of human bimanual action. It is emphasized that in man most skilled manual activities involve two hands playing different roles, a fact that has often been overlooked in the experimental study of human manual lateralization. As an alternative to the current concepts of manual preference and manual superiority, whose relevance is limited to the particular case of unimanual actions, the more general concept of lateral preference is proposed to denote preference for one of the two possible ways of assigning two roles to two hands. A simple model describing man's favored intermanual division of labor is proposed. The main points of the model are the following. 1) The two hands represent two motors, that is, two systems serving to create motion; their internal complexity is ignored in the suggested approach. 2) In man, the two manual motors cooperate with one another as if they were assembled in series, thereby forming a kinematic chain: in a right-hander allowed to follow his or her lateral preferences, motion produced by the right hand tends to articulate with motion produced by the left. It is suggested that the kinematic chain model may help in understanding the adaptive advantage of human manual specialization.

967 citations


"Bricks: laying the foundations for ..." refers background in this paper

  • ...In general, the Graspable UI design philosophy has several advantages: it encourages two-handed interactions [3, 7]; it shifts to more specialized, context-sensitive input devices;...

Journal ArticleDOI
TL;DR: The goal is to go a step further by grounding and situating the information in a physical context to provide additional understanding of the organization of the space and to improve user orientation.
Abstract: ...article in this issue) will further these abilities and cause the generation of short-range and global electronic information spaces to appear throughout our everyday environments. How will this information be organized, and how will we interact with it? Wherever possible, we should look for ways of associating electronic information with physical objects in our environment. This means that our information spaces will be 3D. The SemNet system [4] is an example of a tool that offers users access to large, complicated 3D information spaces. Our goal is to go a step further by grounding and situating the information in a physical context to provide additional understanding of the organization of the space and to improve user orientation. As an example of ubiquitous computing and situated information spaces, consider a fax machine. The electronic data associated with a fax machine should be collected, associated, and colocated with the physical device (see Figure 1). This means that your personal electronic phone book, a log of your incoming and outgoing calls, and fax messages could be accessible by browsing a situated 3D electronic information space surrounding the fax machine. The information would be organized by the layout of the physical device. Incoming calls would be located near the earpiece of the hand receiver while outgoing calls would be situated near the mouthpiece. The phone book could be found near the keypad. A log of the outgoing fax messages would be found near the fax paper feeder while a log of the incoming faxes would be located at the paper dispenser tray. These logical information hot spots on the physical device can be moved and customized by users according to their personal organizations. The key idea is that the physical object anchors the information, provides a logical means of partitioning and organizing the associated information space, and serves as a retrieval cue for users.
A major design requirement of situated information spaces is the ability for users to visualize, browse, and manipulate the 3D space using a portable, palmtop computer. That is, instead of a large fixed display on a desk, we want a small, mobile display to act as a window onto the information space. Since the information spaces will consist of multimedia data, the display of the palmtop should be able to handle all forms of data including text, graphics, video, and audio. Moreover, the desire to merge the physical and …

563 citations

Proceedings ArticleDOI
24 Apr 1994
TL;DR: In this paper, the authors present a 3D user interface for neurosurgical planning using a head viewing prop, a cutting-plane selection prop, and a trajectory selection prop.
Abstract: We claim that physical manipulation of familiar real-world objects in the user’s real environment is an important technique for the design of three-dimensional user interfaces. These real-world passive interface props are manipulated by the user to specify spatial relationships between interface objects. By unobtrusively embedding free-space position and orientation trackers within the props, we enable the computer to passively observe a natural user dialog in the real world, rather than forcing the user to engage in a contrived dialog in the computer-generated world. We present neurosurgical planning as a driving application and demonstrate the utility of a head viewing prop, a cutting-plane selection prop, and a trajectory selection prop in this domain. Using passive props in this interface exploits the surgeon’s existing skills, provides direct action-task correspondence, eliminates explicit modes for separate tools, facilitates natural two-handed interaction, and provides tactile and kinesthetic feedback for the user. Our informal evaluation sessions have shown that with a cursory introduction, neurosurgeons who have never seen the interface can understand and use it without training.

484 citations


Additional excerpts

  • ...has developed passive real-world interface props [5]....