Proceedings ArticleDOI

Augment-able reality: situated communication through physical and digital spaces

TL;DR: This paper describes a system that allows users to dynamically attach newly created digital information such as voice notes or photographs to the physical environment, through wearable computers as well as normal computers.
Abstract: Most existing augmented reality systems only provide a method for browsing information that is situated in the real world context. This paper describes a system that allows users to dynamically attach newly created digital information such as voice notes or photographs to the physical environment, through wearable computers as well as normal computers. Attached data is stored with contextual tags such as location IDs and object IDs that are obtained by wearable sensors, so the same or other wearable users can notice them when they come to the same context. Similar to the role that Post-it notes play in community messaging, we expect our proposed method to be a fundamental communication platform when wearable computers become commonplace.

Summary

1 Introduction

  • Augmented Reality (AR) systems are designed to provide an enhanced view of the real world through see-through head-mounted displays[16] or hand-held devices[11].
  • Various kinds of context sensing technologies, such as position sensors[5, 4], or ID readers[11], are used to determine digital information according to the user’s current physical context.
  • Taking into account recent advances in wearable devices, the authors can expect that people will soon carry their wearables at all times, and that their lives will be constantly supported by context-sensitive information from those wearables.
  • In that sense, current AR systems are essentially ‘‘context-sensitive browsers’’ for the real world, and information is limited to a one-way flow.
  • The user of traditional computers can also add data to the physical environment through the Web and E-Mail interfaces.

2 Augment-able Environments

  • Let us start with a scenario when augment-able reality becomes commonplace.
  • You create a voice-note with a still picture of the damaged tape and attach it to the VCR for other users.
  • When you go outside for lunch, you find several virtual messages are floating in front of restaurants.
  • From the system’s point of view, this is achieved by storing a created data item with contextual information obtained from several wearable sensors.
  • There are several possibilities for the data types that can be attached to the environment.

Context-sensitive information notification

  • Similar to other AR systems, the attached data can be browsed by (the same or other) users who are wearing computers.
  • Conversely, normal computer users can attach data to the physical context through the Web or E-Mail interfaces, and wearable users will notice the data when they come to the corresponding situation.
  • Instead of sending messages to people, the authors can place messages in "physical contexts" to indirectly communicate with other people.
  • After having dinner at a restaurant, for example, people might leave their impressions or ratings of that restaurant at that location.
  • Even when messages are directed to a particular person, the authors can select among several contextual attributes for delivery.

3 System Design

  • To achieve the proposed concept described in the previous section, the authors are currently developing a set of prototype systems based on wearable and normal desktop computers.
  • Figure 2 shows the overall system design of the prototype.
  • Systems share the same database server on the network through wired or wireless communication.
  • If one user attaches a data item to a particular physical context (e.g., location), this effect immediately becomes visible from other computers.
  • The following subsections present details of each subsystem as well as their user interfaces.

3.1 Environmental Support

  • To make it easier for wearable computers to capture surrounding contexts, the authors have deployed two kinds of ID systems in the environment.
  • One noticeable difference between these two ID systems is emission range.
  • While IR beacons can cover room-size areas and are relatively robust regarding orientation of the sensors, printed IDs are more sensitive to distance and the camera orientation.
  • Since 2D codes are virtually costless and printable as well, there are some usages that could not be achieved by other ID systems.
  • This card can convey digital data such as voice notes or photographs by attaching the data to the card's ID.

Hardware

  • The head-worn part of the wearable system consists of a monocular see-through head-up display (based on Sony Glasstron), a CCD camera, and an infrared sensor (based on a remote commander chip for consumer electronics).
  • These devices are connected to a sub-note PC (Mitsubishi AMiTY) communicating to the network through a spread-spectrum wireless LAN (Netwave Air- Surfer).
  • The user controls the system with a miniature pointing device.
  • The head-worn camera is primarily used to detect 2D codes in the environment, but is also used to take still image pictures.
  • Figure 5 shows a user creating a voice note with the wearable unit.

User interface

  • As a context-aware browser for the real world, this wearable system acts mostly the same as their previous AR system NaviCam[11].
  • The system recognizes the surrounding environment by recognizing attached visual tags on physical objects, or by detecting infrared beacons installed at locations in the environment.
  • These panes represent location-level and object-level contexts.
  • An icon representing this newly created voice note then appears on the personal tray.
  • Afterwards, other users (wearing a computer) who try to use the VCR will find your warning.

Time Machine Mode

  • The context-aware panes are not always directly linked to the current physical situation.
  • When the user taps on the left side of the window, the system switches to what the authors call the ‘‘Time Machine Mode’’ .
  • Combining these navigation techniques allows a user to attach data to any context at any time.
  • Suppose that you want to attach a meeting agenda to the conference room but it should not become visible until next Monday.
  • To encourage this technique, the personal tray is separated into two parts (left and right) during the Time Machine Mode.

Web (Java Applet) Interface

  • Attached information can also be accessed from normal computing environments.
  • The authors have developed a Java applet for retrieving information from, and adding information to, the physical environment.
  • The user can display or playback attached data simply by clicking a corresponding icon on the floor map.
  • The user can also attach data to a physical location or an object through the map window of the applet.
  • This attachment immediately becomes visible to other users.

E-Mail interface

  • When users wish to attach a message to the meeting room, they can simply send mail to that room, such as: To: room3@ar.csl.sony.co.jp.
  • The message itself is carried in the subject line, e.g., Subject: Today's meeting is cancelled; a sketch of this mapping appears after this list.
  • This capability encourages a user to attach information from a remote and mobile environment (typically using a PDA with a limited communication bandwidth).
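
One plausible reading of the mechanics, with only the address form taken from the paper: the local-part of the To: address names the target context, and the subject line carries the note. The Python sketch below is our guess at that mapping, not a documented protocol.

    from email.message import EmailMessage

    def attach_from_mail(msg, store):
        # The local-part of the address ("room3") names the target context.
        context_id = msg["To"].split("@", 1)[0]
        store.setdefault(context_id, []).append(
            {"kind": "text", "body": msg["Subject"], "author": msg["From"]})

    msg = EmailMessage()
    msg["To"] = "room3@ar.csl.sony.co.jp"
    msg["From"] = "someone@example.com"
    msg["Subject"] = "Today's meeting is cancelled"

    store = {}
    attach_from_mail(msg, store)
    print(store["room3"])  # the note is now attached to room3's context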

Graceful notification

  • Based on their initial experience with the prototype system, the authors feel that the key design issue in augment-able reality is how the system can gracefully notify users of situated information.
  • When a user notices the presence of situated data, they can browse it through a palmtop or wristtop display.
  • The authors believe that there are several features that become available only when data is attached digitally.
  • For example, the authors can apply several information retrieval techniques to filter attached information, as sketched after this list.
  • While Post-it notes are visible to anybody, digital data could be disclosed to selected people only.
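
The two bullets above suggest per-item filtering and disclosure rules. Unlike a Post-it, a digital attachment can carry an audience list that is checked per viewer; the audience field in this Python sketch is our assumption, as the paper only names the possibility.

    def disclosed_to(items, viewer):
        """Keep public items plus those whose audience includes the viewer."""
        return [i for i in items
                if "audience" not in i or viewer in i["audience"]]

    items = [{"body": "restaurant ratings"},                     # public
             {"body": "team memo", "audience": {"aya", "rekimoto"}}]
    print(disclosed_to(items, "k-hayasi"))  # only the public item survives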

6 Conclusion and Future Directions

  • The authors have described ‘‘augment-able reality’’ where people can dynamically create digital data and attach it to the physical context.
  • The authors have developed working prototypes to explore this concept and have created user interfaces that support easy data transfer between personal and context-aware information spaces.
  • The authors are also currently working on a system using GPS, based on the same concept.
  • The user will be able to digitally leave a hand-written note or other data type on the current geographic location, where others can observe the attached data from wearable and traditional computing environments.


Augment-able Reality:
Situated Communication through Physical and Digital Spaces
Jun Rekimoto, Sony Computer Science Laboratory, 3-14-13 Higashigotanda, Shinagawa-ku, Tokyo 141-0022 Japan, rekimoto@csl.sony.co.jp
Yuji Ayatsuka, Information Technology Laboratories, Sony Corporation, Kitashinagawa, Shinagawa-ku, Tokyo 141-0001 Japan, aya@csl.sony.co.jp
Kazuteru Hayashi, Dept. of Information Science, Tokyo Institute of Technology, Ookayama, Meguro-ku, Tokyo, Japan, k-hayasi@is.titech.ac.jp
Abstract
Most existing augmented reality systems only provide
a method for browsing information that is situated in
the real world context. This paper describes a system
that allows users to dynamically attach newly created
digital information such as voice notes or photographs to
the physical environment, through wearable computers as
well as normal computers. Attached data is stored with
contextual tags such as location IDs and object IDs that
are obtained by wearable sensors, so the same or other
wearable users can notice them when they come to the
same context. Similar to the role that Post-it notes play in
community messaging, we expect our proposed method to
be a fundamental communication platform when wearable
computers become commonplace.
1 Introduction
Augmented Reality (AR) systems are designed to
provide an enhanced view of the real world through
see-through head-mounted displays[16] or hand-held
devices[11]. Various kinds of context sensing technolo-
gies, such as position sensors[5, 4], or ID readers[11],
are used to determine digital information according to the
user’s current physical context. Compared with traditional
information retrieval systems, users of AR systems can get
context-aware information without being bothered by
cumbersome operations that might impede the user's
real-world tasks.
The concept of wearable computing is pushing this
direction even further. Wearable computers are always
‘‘on’’, always acting, and always sensing the surrounding
environment to offer a better interface to the real world.
Taking into account recent advances in wearable devices,
we can expect that people will soon carry their wearables
at all times, and that our lives will be constantly supported
by context-sensitive information from those wearables.
While most of the existing augmented reality systems
are mainly focusing on context-aware information presen-
tation, information registration interfaces for AR are yet
to be investigated. For example, KARMA[4], which is
a well-known AR system, displays information about the
laser printer based on the current physical position of a
head-worn display. Such information is assumed to be
prepared beforehand. To add new information during a
task, the user has to return to the normal computer envi-
ronment. In that sense, current AR systems are essentially
‘‘context-sensitive browsers’’ for the real world, and in-
formation is limited to a one-way flow. Since two-way
communication tools (e.g., E-mail) have fundamentally dif-
ferent functionality and applications from one-way tools
(e.g., radio), we can imagine several new usages of AR if
it allows bidirectional communications.
This paper presents an environment that supports in-
formation registration in the real world contexts through
wearable and traditional computers. Using this technique,
a user of wearable computers can dynamically create a
data item such as a voice or a photograph and can attach
it to nearby physical objects or locations. To enable this,
the system supports drag-and-drop operations between per-
sonal and context-sensitive spaces. The user of traditional
computers can also add data to the physical environment
through the Web and E-Mail interfaces. By combining
these information flows, we can freely transfer data from
physical space to digital space, or vice versa. We call such
an environment "augment-able reality", because users of
our system are not just browsing information situated in
the real world, but can also create, attach, or carry the data
with them.

[Figure 1: The Augment-able Reality Concept. (a) Traditional augmented reality systems: a wearable computer with sensors retrieves context-sensitive information from a database according to the physical context of the real world environment. (b) Augment-able reality environments: wearable and normal computers share the database, supporting dynamic creation of augmenting information, information registration and browsing through WWW and E-Mail interfaces, and communication through situated digital information.]
2 Augment-able Environments
Let us start with a scenario when augment-able reality
becomes commonplace.
When you enter the video studio, your eyeglass notifies
you that a new video effector has been installed. You
approach the effector and find a movie that was attached
by your colleague, describing typical usage of equipment.
Meanwhile, you find the VCR is broken and the tape has
been damaged. You create a voice-note with a still picture
of the damaged tape and attach it to the VCR for other
users.
When you go outside for lunch, you find several virtual
messages are floating in front of restaurants. Some are
commercials, but many are messages that were created
and attached by previous visitors, giving information and
ratings on the restaurant. You select a restaurant by
looking at these messages. While you are at lunch, you
remember that you will have visitors at noon. You select an
icon representing your office in the eyeglass, and attach a
voice memo to it ....
As we can see in this example, the concept of augment-
able reality can be summarized in the features outlined in
the following sections.
Dynamic creation of augmenting information
People wearing computers and having network facilities
can dynamically create information and attach it to the
user’s surrounding physical environment. From the sys-
tem’s point of view, this is achieved by storing a created
data item with contextual information obtained from sev-
eral wearable sensors. Our current prototype supports
infrared beacons and visual ID markers as contextual infor-
mation. Another possibility is to use geographical locations
based on GPS (global positioning system) or PHS (personal
handy phone) as contextual information.
There are several possibilities for the data types that can
be attached to the environment. Our current prototype
supports voice notes and photograph snapshots. Other pos-
sible data types are text (created by a wearable keyboard)
or hand-drawn strokes (created by a miniature touch panel).
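
To make the storage model concrete: a created item can be recorded together with whatever context the wearable sensors report at creation time. The Python sketch below is an assumed schema only; the paper does not publish one, and field names such as location_id and object_id are our own.

    import time
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class AttachedItem:
        kind: str                   # e.g. "voice" or "photo"
        payload: bytes              # the recorded media itself
        location_id: Optional[str]  # room-level context from an IR beacon
        object_id: Optional[str]    # object-level context from a 2D code
        author: str
        created_at: float = field(default_factory=time.time)

    # A voice note attached to a broken VCR in the video studio:
    note = AttachedItem(kind="voice", payload=b"...",
                        location_id="studio", object_id="vcr-01",
                        author="rekimoto")
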
Context-sensitive information notification
Similar to other AR systems, the attached data can be
browsed by (the same or other) users who are wearing
computers. The concept of ‘‘wearing’’ is essential; other-
wise users would not notice digitally attached information.
Information sharing among wearable and normal
computer environments
Attached information can also be accessed by users in
normal computing environments. For example, when a
wearable user attaches a voice note to the meeting room, it
becomes visible on the floor map page of the Web browser.
Conversely, normal computer users can attach data to the
physical context through the Web or E-Mail interfaces, and
wearable users will notice the data when they come to the
corresponding situation.
Communication through situated information
The combination of these features creates a new way of
communication. Instead of sending messages to people,
we can place messages in "physical contexts" to indirectly
communicate with other people. After having dinner
at a restaurant, for example, people might leave their
impressions or ratings of that restaurant at that location.
Afterwards, other people would be able to check the
messages from the front of the restaurant, or when they are
browsing a map on the Web.

[Figure 2: Schematic of the prototype system. The wearable unit (sub-note PC with PCMCIA camera interface and wireless LAN, CCD camera, miniature mouse, microphone, earphone, IR sensor, interface box, and monocular see-through HUD) and normal computers (Web Java applets and E-Mail clients, via an E-mail interface) share a common database.]
Even when messages are directed to a particular person, we can
select several contextual attributes to do that. For example,
you may want to attach an e-mail message to a person’s
office door (locally or remotely through the net), to get
their attention when they return to the office.
3 System Design
To achieve the proposed concept described in the previ-
ous section, we are currently developing a set of prototype
systems based on wearable and normal desktop computers.
Figure 2 shows the overall system design of the prototype.
Systems share the same database server on the network
through wired or wireless communication. If one user
attaches a data item to a particular physical context (e.g.,
location), this effect immediately becomes visible from
other computers.
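
In outline, this behaviour is a single store keyed by context and shared by every client, so an attachment made from one computer is visible to any other computer's next lookup. The in-memory Python sketch below illustrates that assumption; the actual server and its protocol are not detailed in the paper.

    from collections import defaultdict

    class SharedDatabase:
        """Illustrative in-memory stand-in for the shared database server."""
        def __init__(self):
            self._by_context = defaultdict(list)  # context ID -> items

        def attach(self, context_id, item):
            self._by_context[context_id].append(item)

        def lookup(self, context_id):
            return list(self._by_context[context_id])

    db = SharedDatabase()
    db.attach("meeting-room-3", {"kind": "text", "body": "Agenda ..."})
    # Any other computer querying the same context now sees the item:
    assert db.lookup("meeting-room-3")
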
The following subsections present details of each sub-
system as well as their user interfaces.
Figure 3: Two ID systems deployed in the environment
(left: infrared beacons, right: printed 2D matrix codes)
3.1 Environmental Support
To make it easier for wearable computers to capture
surrounding contexts, we have deployed two kinds of ID
systems in the environment. One is an infrared (IR) beacon
that periodically emits a unique number to the environment
(Figure 3, left); the other is a printed 2D matrix code that
can be recognized by a head-worn camera (Figure 3, right).
One noticeable difference between these two ID systems
is emission range. While IR beacons can cover room-size
areas and are relatively robust regarding orientation of the
sensors, printed IDs are more sensitive to distance and
the camera orientation. We are thus using IR beacons to
detect current rooms (e.g., meeting room, office, etc.) and
printed 2D codes for object level identification (e.g., VCR,
bookshelf, etc.).
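
This division of labour can be read as a small resolution step: the most recent IR beacon fixes the room, and the most recent 2D code fixes the object. The Python sketch below assumes that reading; the tables and IDs are invented for illustration.

    ROOMS = {0x21: "meeting-room", 0x22: "cafeteria"}        # IR beacon IDs
    OBJECTS = {"2d:0107": "vcr-01", "2d:0042": "bookshelf"}  # printed 2D codes

    def current_context(last_ir_id, last_visual_tag):
        """Map the most recently sensed IDs to (location, object)."""
        return ROOMS.get(last_ir_id), OBJECTS.get(last_visual_tag)

    print(current_context(0x21, "2d:0107"))  # ('meeting-room', 'vcr-01')
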
Since 2D codes are virtually costless and printable as
well, there are some usages that could not be achieved
by other ID systems. For example, we can use small
Post-it cards with a 2D code. This (physical) card can
convey digital data such as voice notes or photographs
by attaching the data to the card's ID.
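
Mechanically, such a card need not store any data itself: its printed ID can simply key into the shared database, so whoever scans the card retrieves whatever was attached to that ID. A hypothetical Python sketch of this indirection:

    attachments = {}  # printed card ID -> list of attached items

    def write_to_card(card_id, item):
        """'Writing' to the card only records data under its printed ID."""
        attachments.setdefault(card_id, []).append(item)

    def read_card(card_id):
        """What a head-worn camera recognizing the card would retrieve."""
        return attachments.get(card_id, [])

    write_to_card("2d:0815", {"kind": "voice", "payload": b"..."})
    print(read_card("2d:0815"))
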
3.2 Wearable System
Hardware
The head-worn part of the wearable system (Figure 4,
above) consists of a monocular see-through head-up dis-
play (based on Sony Glasstron), a CCD camera, and an
infrared sensor (based on a remote commander chip for
consumer electronics). These devices are connected to a sub-note
PC (Mitsubishi AMiTY) communicating to the network
through a spread-spectrum wireless LAN (Netwave Air-
Surfer).

[Figure 4: The head-up and hand-held parts of the wearable system: see-through monocular display, infrared sensor, and CCD camera.]

The user controls the system with a miniature
pointing device. A microphone attached to the pointing
device is used to make voice notes (Figure 4, below). The
head-worn camera is primarily used to detect 2D codes
in the environment, but is also used to take still image
pictures. Figure 5 shows a user creating a voice note with
the wearable unit.

[Figure 5: The wearable system used to attach a voice note to a physical object.]

[Figure 6: Screen layout of the wearable display. The user looks at this image on their monocular see-through display. The upper two areas (the context-aware panes) correspond to the current physical contexts (location and object), while the bottom area (the personal information tray) stores personal data carried by the user.]
User interface
As a context-aware browser for the real world, this wear-
able system acts mostly the same as our previous AR
system NaviCam[11]. The system recognizes the sur-
rounding environment by recognizing attached visual tags
on physical objects, or by detecting infrared beacons in-
stalled at locations in the environment. For example, when
the system recognizes a visual tag attached to the door of
the meeting room, the meeting schedule would appear on
the see-through head-mounted display. In the same way,
when the user is walking into the cafeteria, an infrared
sensor automatically detects the location by receiving an
IR ID from that location. Since the system is equipped with
a wireless LAN, information can dynamically be retrieved
from the (shared) database on the network.
The unique feature of our new system is its ability to
add new data to the physical environment. To enable
this feature, the user switches the system to ‘‘authoring
mode.’’ A window consisting of a personal information
area and context sensitive area then appears on the see-
through display (Figure 6). The personal information area
(also called "the personal information tray") stores the data
carried by the user. The system currently supports voice
notes and still images (captured by a head-worn camera)
as such data.
The other area, called the ‘‘context-aware area,’’ shows
information related to the current physical context. The

area is further split into two panes, the location pane and
the object pane. These panes represent location-level and
object-level contexts. The location-level context is detected
by the IR beacons, while object-level context is determined
by visual tag recognition.
[Figure 7: Creating and attaching data to the physical context. (a) The user first presses the ‘‘CAPTURE’’ button to take a snapshot and adds a voice note. (b) The newly created data object then appears on the user's personal information tray. (c) The user drags it to the context-aware pane to attach the data to the current location (the studio).]

These context-aware panes change when the user moves
to a new situation. For example, when a user enters a
meeting room, IR beacons from the room tell the wearable
system the meeting room ID; the location pane then switches
to the meeting room, and icons attached to that location
appear. A gentle sound also rings to announce this context
switch. The user can then open or playback these icons by
pressing one of them with the right button of the hand-held
mouse.
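
The pane-switching behaviour amounts to an event handler: when a new location ID arrives, the location pane refreshes from the shared database and the switch is announced. A minimal Python sketch under that reading; the class and callback names are ours, not the system's.

    class LocationPane:
        def __init__(self, lookup):
            self.lookup = lookup   # callable: location ID -> attached items
            self.location = None
            self.icons = []

        def on_ir_beacon(self, location_id):
            if location_id == self.location:
                return             # still in the same room; nothing changes
            self.location = location_id
            self.icons = self.lookup(location_id)
            print("(gentle chime)")  # stand-in for the context-switch sound

    pane = LocationPane(lookup=lambda loc: ["note attached to " + loc])
    pane.on_ir_beacon("meeting-room")  # the pane switches and the chime plays
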
To support intuitive data transfer between personal and
context-aware spaces, the system provides drag and drop
operations between these areas. That is, the user can
create a voice memo (which will appear on the user's
personal area as a voice icon), then drag and drop it to the
location pane to attach it to the current recognized location
(Figure 7). Since the operation is bidirectional, the user
can also drag an object from the context-aware panes to
the personal tray to copy information that is attached (by
someone or by the user himself) on the current physical
context.
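
The two drag directions differ in effect: dropping onto a context pane attaches the item (sketched here as a move out of the tray, though the paper does not say whether the tray keeps a copy), while dragging from a pane to the tray explicitly copies, leaving the attachment in place. A Python sketch with invented names:

    def drop_on_context_pane(tray, context_items, item):
        """Attach a personal item to the current location or object."""
        tray.remove(item)
        context_items.append(item)

    def drag_to_personal_tray(context_items, tray, item):
        """Copy a situated item; the original stays attached in place."""
        if item in context_items:
            tray.append(item)

    tray, vcr_items = ["voice-note-1"], []
    drop_on_context_pane(tray, vcr_items, "voice-note-1")
    assert vcr_items == ["voice-note-1"] and tray == []
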
For example, suppose that you find that the VCR in front
of you has a problem and may damage video tapes. You
create a voice note such as ‘‘This VCR is broken, do not
insert tapes, or they may be damaged,’’ with a microphone
while pressing a record button on the miniature pointing
device. An icon representing this newly created voice note
then appears on the personal tray. You also see that the
object-sensitive pane is showing the VCR name and picture
to indicate the system is recognizing it. You then drag the
voice note icon from the personal tray to the VCR pane;
the voice note is now attached to the VCR. Afterwards,
other users (wearing a computer) who try to use the VCR
will find your warning.
Time Machine Mode
The context-aware panes are not always directly linked to
the current physical situation. When the user taps on the
left side of the window, the system switches to what we
call the ‘‘Time Machine Mode’’ (Figure 8).
In this mode, the user can navigate through previously
visited rooms or objects without physically visiting those
contexts, by clicking arrow buttons on the context
panes. This interface is similar to the forward and backward
buttons on web browsers. The user can also visit the future
(or the past) by clicking buttons around the time indicator.
Combining these navigation techniques allows a user
to attach data to any context at any time. For example,
suppose that you want to attach a meeting agenda to the
conference room but it should not become visible until next
Monday. To do this, you first (virtually) visit the meeting
room in the Time Machine Mode, change the time to the following Monday, and attach the agenda to the room's context pane.
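
The agenda example implies that an attachment can carry a time from which it becomes visible, and that browsing filters on that time. A minimal Python sketch of such filtering; the visible_from field is our assumption.

    import time

    def visible_items(items, at=None):
        """Items whose visibility window has started by time `at`."""
        now = time.time() if at is None else at
        return [i for i in items if i.get("visible_from", 0.0) <= now]

    next_monday = time.time() + 3 * 24 * 3600  # stand-in future timestamp
    items = [{"name": "agenda", "visible_from": next_monday},
             {"name": "old note"}]
    print(visible_items(items))  # only "old note" is visible today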

Citations
Proceedings ArticleDOI
01 Apr 2000
TL;DR: Examples of augmented reality applications based on CyberCode are described, and some key characteristics of tagging technologies that must be taken into account when designing augmented reality environments are discussed.
Abstract: The CyberCode is a visual tagging system based on a 2D-barcode technology and provides several features not provided by other tagging systems. CyberCode tags can be recognized by the low-cost CMOS or CCD cameras found in more and more mobile devices, and it can also be used to determine the 3D position of the tagged object as well as its ID number. This paper describes examples of augmented reality applications based on CyberCode, and discusses some key characteristics of tagging technologies that must be taken into account when designing augmented reality environments.

626 citations


Cites background from "Augment-able reality: situated comm..."

  • ...When these tags are part of our physical environment, devices with a tag reader can retrieve digital information from them [18, 10], activate associated actions, or attach information to them [17]....


Proceedings ArticleDOI
30 Sep 2001
TL;DR: This work proposes context proximity for selective artefact communication, using the context of artefacts for matchmaking, and suggests to empower users with simple but effective means to impose the same context on a number of artefacts.
Abstract: Ubiquitous computing is associated with a vision of everything being connected to everything. However, for successful applications to emerge, it will not be the quantity but the quality and usefulness of connections that will matter. Our concern is how qualitative relations and more selective connections can be established between smart artefacts, and how users can retain control over artefact interconnection. We propose context proximity for selective artefact communication, using the context of artefacts for matchmaking. We further suggest to empower users with simple but effective means to impose the same context on a number of artefacts. To prove our point we have implemented Smart-Its Friends, small embedded devices that become connected when a user holds them together and shakes them.

578 citations

Proceedings ArticleDOI
25 Sep 2000
TL;DR: ComMotion as mentioned in this paper is a location-aware computing environment which links personal information to locations in its user's life; for example, comMotion reminds one of her shopping list when she nears a grocery store.
Abstract: comMotion is a location-aware computing environment which links personal information to locations in its user's life; for example, comMotion reminds one of her shopping list when she nears a grocery store. Using satellite-based GPS position sensing, comMotion gradually learns about the locations in its user's daily life based on travel patterns. The full set of comMotion functionality, including map display, requires a graphical user interface. However, because it is intended primarily for mobile use, including driving, the core set of reminder creation and retrieval can be managed completely by speech.

572 citations


Cites methods from "Augment-able reality: situated comm..."

  • ...The Olivetti Active Badge [5] was used in several systems, for example to aid a telephone receptionist by dynamically updating the telephone extension a user was closest to. Augmentable Reality [6] allows users to dynamically attach digital information such as voice notes or photographs to the physical environment....


Patent
William J. Johnson1
18 Aug 2005
TL;DR: In this paper, a fully automated web service with location-based services is presented, which is involved in transmission of situational location dependent information to automatically located mobile receiving data processing systems.
Abstract: Provided is a fully automated web service with location based services generally involved in transmission of situational location dependent information to automatically located mobile receiving data processing systems. The web service communicates with a receiving data processing system in a manner by delivering information to the device when appropriate without the device requesting it at the time of delivery. There are varieties of configurations made by different user types of the web service for configuring information to be delivered, and for receiving the information. The web service maximizes anonymity of users, provides granular privacy control with a default of complete privacy, and supports user configurable privileges and features for desired web service behavior and interoperability. The web service is fully automated to eliminate human resources required to operate services. Integrated with the web service are enhanced location based services providing map solutions, alerts, sharing of novel services between users, and complete user control for managing heterogeneous device interoperability through the web service.

542 citations

Journal ArticleDOI
TL;DR: This paper reviews the user interface of the initial Studierstube system, in particular the implementation of collaborative augmented reality, and the Personal Interaction Panel, a two-handed interface for interaction with the system.
Abstract: Our starting point for developing the Studierstube system was the belief that augmented reality, the less obtrusive cousin of virtual reality, has a better chance of becoming a viable user interface for applications requiring manipulation of complex three-dimensional information as a daily routine. In essence, we are searching for a 3-D user interface metaphor as powerful as the desktop metaphor for 2-D. At the heart of the Studierstube system, collaborative augmented reality is used to embed computer-generated images into the real work environment. In the first part of this paper, we review the user interface of the initial Studierstube system, in particular the implementation of collaborative augmented reality, and the Personal Interaction Panel, a two-handed interface for interaction with the system. In the second part, an extended Studierstube system based on a heterogeneous distributed architecture is presented. This system allows the user to combine multiple approaches-augmented reality, projection displays, and ubiquitous computing--to the interface as needed. The environment is controlled by the Personal Iteraction Panel, a two-handed, pen-and-pad interface that has versatile uses for interacting with the virtual environment. Studierstube also borrows elements from the desktop, such as multi-tasking and multi-windowing. The resulting software architecture is a user interface management system for complex augmented reality applications. The presentation is complemented by selected application examples.

471 citations


Cites background from "Augment-able reality: situated comm..."


  • ...This concept can be implemented on separate desktop displays (Smith & Mariani, 1997), handheld displays (Rekimoto et al., 1998), HMDs (Butz et al., 1999) or time-interlacing displays such as the two-user workbench (Agrawala et al., 1997)....


References
Proceedings ArticleDOI
09 Dec 1968
TL;DR: The fundamental idea behind the three-dimensional display is to present the user with a perspective image which changes as he moves, and this display depends heavily on this "kinetic depth effect".
Abstract: The fundamental idea behind the three-dimensional display is to present the user with a perspective image which changes as he moves. The retinal image of the real objects which we see is, after all, only two-dimensional. Thus if we can place suitable two-dimensional images on the observer's retinas, we can create the illusion that he is seeing a three-dimensional object. Although stereo presentation is important to the three-dimensional illusion, it is less important than the change that takes place in the image when the observer moves his head. The image presented by the three-dimensional display must change in exactly the way that the image of a real object would change for similar motions of the user's head. Psychologists have long known that moving perspective images appear strikingly three-dimensional even without stereo presentation; the three-dimensional display described in this paper depends heavily on this "kinetic depth effect."

1,825 citations


"Augment-able reality: situated comm..." refers background in this paper

  • ...Augmented Reality (AR) systems are designed to provide an enhanced view of the real world through see-through head-mounted displays[16] or hand-held devices[11]....


Journal ArticleDOI

1,032 citations


"Augment-able reality: situated comm..." refers background in this paper

  • ...Various kinds of context sensing technologies, such as position sensors[5, 4], or ID readers[11], are used to determine digital information according to the user’s current physical context....


  • ...For example, KARMA[4], which is a well-known AR system, displays information about the laser printer based on the current physical position of a head-worn display....


Journal ArticleDOI
13 Oct 1997
TL;DR: A prototype system that combines the overlaid 3D graphics of augmented reality with the untethered freedom of mobile computing is described, to explore how these two technologies might together make possible wearable computer systems that can support users in their everyday interactions with the world.
Abstract: We describe a prototype system that combines the overlaid 3D graphics of augmented reality with the untethered freedom of mobile computing. The goal is to explore how these two technologies might together make possible wearable computer systems that can support users in their everyday interactions with the world. We introduce an application that presents information about our university's campus, using a head-tracked, see-through, head-worn, 3D display, and an untracked, opaque, handheld, 2D display with stylus and trackpad. We provide an illustrated explanation of how our prototype is used, and describe our rationale behind designing its software infrastructure and selecting the hardware on which it runs.

916 citations

Proceedings ArticleDOI
01 Oct 1997
TL;DR: This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a pen-based direct manipulation technique that can be used for data transfer between different computers as well as within the same computer.
Abstract: This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a pen-based direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-and-Drop allows a user to pick up an object on a display and drop it on another display as if he/she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desk-top, and wall-sized pen computers. We also considered the importance of physical artifacts in designing user interfaces in a future computing environment.

753 citations


"Augment-able reality: situated comm..." refers methods in this paper

  • ...‘‘Pick and Drop’’[10] provides a method for carrying digital data within a physical space using a stylus....


Journal ArticleDOI
13 Oct 1997
TL;DR: The wearable remembrance agent is described, a continuously running proactive memory aid that uses the physical context of a wearable computer to provide notes that might be relevant in that context.
Abstract: This paper describes the wearable remembrance agent, a continuously running proactive memory aid that uses the physical context of a wearable computer to provide notes that might be relevant in that context. A currently running prototype is described, along with future directions for research inspired by using the prototype.

573 citations

Frequently Asked Questions
Q1. What have the authors contributed in "Augment-able reality: situated communication through physical and digital spaces"?

This paper describes a system that allows users to dynamically attach newly created digital information such as voice notes or photographs to the physical environment, through wearable computers as well as normal computers. Similar to the role that Post-it notes play in community messaging, the authors expect their proposed method to be a fundamental communication platform when wearable computers become commonplace. 

Since humans rely on artificial markers such as traffic signals or indication panels, the authors believe that having appropriate artificial supports in the physical environment is the key to successful AR and wearable systems.

Since this applet can be started from Java-enabled web browsers (such as Netscape or Internet Explorer), the user can access the attached data from virtually anywhere. 

This paper presents an environment that supports information registration in the real world contexts through wearable and traditional computers. 

Various kinds of context sensing technologies, such as position sensors[5, 4], or ID readers[11], are used to determine digital information according to the user’s current physical context. 

Their interface design for attaching data is partially inspired by ‘‘Fix and Float’’[13], an interface technique for carrying data within a virtual 3D environment. 

Based on their initial experience with the prototype system, the authors feel that the key design issue in augment-able reality is how the system can gracefully notify users of situated information.

The head-worn part of the wearable system (Figure 4, above) consists of a monocular see-through head-up display (based on Sony Glasstron), a CCD camera, an infrared sensor (based on a remote commander chip for consumer electronics). 

In this paper, the authors have described ‘‘augment-able reality’’ where people can dynamically create digital data and attach it to the physical context. 

Their current approach is to overlay information on a see-through heads-up display; the authors expect this approach to be less obtrusive when the display is embedded in eyeglasses [14].