
A touring machine: prototyping 3D mobile augmented reality systems for exploring the urban environment

13 Oct 1997-Vol. 1, Iss: 4, pp 74-81
TL;DR: A prototype system that combines the overlaid 3D graphics of augmented reality with the untethered freedom of mobile computing is described, to explore how these two technologies might together make possible wearable computer systems that can support users in their everyday interactions with the world.



Abstract
We describe a prototype system that combines the
overlaid 3D graphics of augmented reality with the
untethered freedom of mobile computing. The goal is to
explore how these two technologies might together make
possible wearable computer systems that can support users
in their everyday interactions with the world. We introduce
an application that presents information about our univer-
sity’s campus, using a head-tracked, see-through, head-
worn, 3D display, and an untracked, opaque, handheld, 2D
display with stylus and trackpad. We provide an illustrated
explanation of how our prototype is used, and describe our
rationale behind designing its software infrastructure and
selecting the hardware on which it runs.
Keywords: Augmented Reality, Virtual Environments,
Mobile Computing, Wearable Computing, GPS.
1. Introduction
Recent years have seen significant advances in two
promising fields of user interface research: virtual environ-
ments, in which 3D displays and interaction devices
immerse the user in a synthesized world, and mobile com-
puting, in which increasingly small and inexpensive com-
puters and wireless networking allow users to roam the real
world without being tethered to stationary machines. We
are interested in how virtual environments can be com-
bined with mobile computing, with the ultimate goal of
supporting ordinary users in their interactions with the
world.
To experiment with these ideas, we have been building
the system described in this paper. The kind of virtual envi-
ronment technology with which we have been working is
augmented reality. Unlike most virtual environments, in
which a virtual world replaces the real world, in aug-
mented reality a virtual world supplements the real world
with additional information. This concept was pioneered
by Ivan Sutherland [28], and is accomplished through the
use of tracked “see-through” displays that enrich the user's
view of the world by overlaying visual, auditory, and even
haptic material on what she experiences.
The application that we are addressing is that of provid-
ing users with information about their surroundings, creat-
ing a personal “touring machine.” There are several themes
that we have stressed in this work:
  • Presenting information about a real environment that is integrated into the 3D space of that environment.
  • Supporting outdoor users as they move about a relatively large space on foot.
  • Combining multiple display and interaction technologies to take advantage of their complementary capabilities.
Our prototype assists users who are interested in our
university’s campus, overlaying information about items of
interest in their vicinity. As a user moves about, she is
tracked through a combination of satellite-based, differen-
tial GPS (Global Positioning System) position tracking and
magnetometer/inclinometer orientation tracking. Informa-
tion is presented and manipulated on a combination of a
head-tracked, see-through, headworn, 3D display, and an
untracked, opaque, handheld, 2D display with stylus and
trackpad.
Our emphasis in this project has been on developing
experimental user interface software, not on designing
hardware. Therefore, we have used commercially available
hardware throughout. As we describe later, this has neces-
sitated a number of compromises, especially in the accu-
racy with which the user's 3D position and orientation is
tracked. These have in turn affected the design of our user
interface, which relies on approaches that require only
approximate, rather than precise, registration of virtual and
real objects.
In Section 2 we present related work. Section 3
describes a scenario in our application domain, including
pictures generated by a running testbed implementation. In
Section 4, we describe both our high-level approach in
designing our system and the specific hardware and soft-
ware used. Finally, Section 5 presents our conclusions and
the directions that we will be taking as we continue to
develop the system.

A Touring Machine: Prototyping 3D Mobile Augmented Reality Systems for Exploring the Urban Environment
Steven Feiner, Blair MacIntyre, Tobias Höllerer
Department of Computer Science, Columbia University, New York, NY 10027
{feiner,bm,htobias}@cs.columbia.edu
http://www.cs.columbia.edu/graphics/
Anthony Webster
Graduate School of Architecture, Planning and Preservation, Columbia University, New York, NY 10027
acw18@columbia.edu
http://www.cc.columbia.edu/~archpub/BT/
In: Personal Technologies, 1(4), 1997, pp. 208-217
2. Related Work
Previous research in augmented reality has addressed a
variety of application areas including aircraft cockpit con-
trol [12], assistance in surgery [27], viewing hidden build-
ing infrastructure [10], maintenance and repair [8], and
parts assembly [5, 30]. In contrast to these systems, which
use see-through headworn displays, Rekimoto [24] has
used handheld displays to overlay information on color-
coded objects. Much effort has also been directed towards
developing techniques for precise tracking using tethered
trackers (e.g., [16, 2, 29, 26]).
Work in mobile user interfaces has included several
projects that allow users to explore large spaces. Loomis
and his colleagues have developed an application that
makes it possible for blind users to navigate a university
campus by tracking their position with differential GPS
and orientation with a magnetometer to present spatialized
sonic location cues [18]. Petrie et al. have field-tested a
GPS-based navigation aid for blind users that uses a speech
synthesizer to describe city routes [23]. The CMU Wear-
able Computer Project has developed several generations
of mobile user interfaces using a single handheld or
untracked headworn display with GPS, including a campus
tour [25]. Long et al. have explored the use of infrared
tracking in conjunction with handheld displays [17]. Mann
[21] has developed a family of wearable systems with
headworn displays, the most recent of which uses optical
flow to overlay textual information on automatically recog-
nized objects.
Our work emphasizes the combination of these two
streams of research: augmented reality and mobile user
interfaces. We describe a prototype application that uses
tracked see-through displays and 3D graphics without
assuming precise registration, and explore how a combina-
tion of displays and interaction devices can be used
together to take advantage of their individual strengths.
Prior to the development of VRML, several researchers
experimented with integrating hypertext and virtual envi-
ronments [7, 9, 1]. All investigated the advantages of pre-
senting hypertext on the same 3D display as all other
material, be it headworn or desktop. In contrast, our current
work exploits the different capabilities of our displays by
presenting hypertext documents on the relatively high-res-
olution 2D handheld display, which is itself embedded
within the 3D space viewed through the lower-resolution
headworn display.
3. Application Scenario
Consider the following scenario, whose figures were
created using our system. The user is standing in the mid-
dle of our campus, wearing our prototype system, as shown
in Figure 1. His tracked see-through headworn display is
driven by a computer contained in his backpack. He is
holding a handheld computer and stylus.
As the user looks around the campus, his see-through
headworn display overlays textual labels on campus build-
ings, as shown in Figures 2 and 3. (These images were shot
through the headworn display, as described in Section 4.3,
and are somewhat difficult to read because of the low
brightness of the display and limitations of the recording
technology.) Because we label buildings, and not specific
building features, the relative inaccuracy of the trackers we
are using is not a significant problem for this application.
At the top of the display is a menu of choices: “Colum-
bia:”, “Where am I?”, “Depts?”, “Buildings?”, and
“Blank”. When selected, each of the first four choices
sends a URL to a web browser running on the handheld
computer. The browser then presents information about the
campus, the user’s current location, a list of departments,
and a list of buildings, respectively. The URL points to a
custom HTTP server on the handheld computer that gener-
ates a page on the fly containing the relevant information.
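The request flow just described can be sketched as a minimal on-the-fly page generator. Everything below is illustrative: the building data, URL scheme, and handler names are invented, and nothing here is the original implementation (which was not necessarily written in Python).

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Invented sample data standing in for the campus database.
BUILDINGS = {"philosophy": ["Philosophy", "English and Comparative Literature"]}

def department_page(building):
    """Generate an HTML page on the fly for one building's departments."""
    items = "".join(f"<li>{d}</li>" for d in BUILDINGS.get(building, []))
    return f"<html><body><h1>{building.title()}</h1><ul>{items}</ul></body></html>"

class CampusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /buildings/philosophy -> generated department list
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "buildings" and parts[1] in BUILDINGS:
            body, status = department_page(parts[1]).encode(), 200
        else:
            body, status = b"<html><body>Unknown page</body></html>", 404
        self.send_response(status)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("127.0.0.1", 8000), CampusHandler).serve_forever()
```

Because the pages are generated per request, the server can also fold in live state such as the user's current position, as the "Where am I?" entry requires.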
Figure 1. Prototype campus information system. The user wears a backpack and headworn display, and holds a handheld display and its stylus.

The generated pages contain links back to the server itself
and to pages anywhere on the world wide web to which we
are connected via radio modems talking to base stations on
the campus. The last menu item, “Blank”, allows the head-
worn display to be blanked when the user wants to view the
unaugmented campus. Menu entries are selected using a
trackpad mounted on the back of the handheld computer.
The trackpad’s x coordinates are inverted to preserve intui-
tive control of the menus.
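The inversion amounts to mirroring the pad's x axis; a hypothetical one-line sketch (the paper does not show the original code):

```python
# Hypothetical sketch: the trackpad faces away from the user on the back of
# the handheld, so its x axis is mirrored before driving the menus.

def trackpad_to_menu(x, y, pad_width):
    """Map raw trackpad coordinates to menu coordinates, inverting x."""
    return (pad_width - x, y)
```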
Labels seen through the headworn display are grey,
increasing in intensity as they approach the center of the
display. The one label closest to the center is highlighted
yellow. If it remains highlighted for more than a second, it
changes to green, indicating that it has been selected, and a
second menu bar is added below the first, containing the
name of the selected building and entries for obtaining
information specific to it. A selected building remains
selected until the user's head orientation dwells on another
building for more than a second as indicated by the color
change. This approximation of gaze-directed selection can
be disabled or enabled via a trackpad button.
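The intensity ramp and the one-second dwell rule can be sketched as follows. This is an assumed reconstruction of the behavior described above, with invented names, not the system's actual code:

```python
import math

DWELL_SECONDS = 1.0  # dwell time before a highlighted label becomes selected

def label_intensity(x, y, max_dist):
    """Grey level in [0, 1] that rises as a label nears the display center."""
    return max(0.0, 1.0 - math.hypot(x, y) / max_dist)

class GazeSelector:
    """Approximates gaze-directed selection from head orientation alone."""
    def __init__(self):
        self.candidate = None       # label currently closest to the center
        self.candidate_since = 0.0  # time at which it became the closest
        self.selected = None        # label that has dwelled long enough

    def update(self, labels, now):
        """labels: (name, x, y) tuples in screen coordinates, center (0, 0)."""
        name = min(labels, key=lambda l: math.hypot(l[1], l[2]))[0]
        if name != self.candidate:
            self.candidate, self.candidate_since = name, now
        elif now - self.candidate_since >= DWELL_SECONDS:
            self.selected = name    # highlighted yellow -> green (selected)
        return self.selected
```

Selection depends only on which label stays nearest the display center, which is why only approximate head tracking is needed.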
When a building is selected, a conical green compass
pointer appears at the bottom of the headworn display, ori-
ented in the building’s direction. The pointer turns red if
the building is more than 90 degrees away from the user's
head orientation (i.e., behind the user). This allows the user
to find the building more easily if they turn away from it.
The pointer is especially useful for finding buildings
selected from the handheld computer. To do this, the user
turns off gaze-directed selection, displays a list of all build-
ings via the “Buildings?” top level menu entry, selects with
a stylus the building she is looking for on the handheld
computer, and then follows the direction of the arrow
pointer to locate that building. When the building’s link is
selected on the handheld computer the system immediately
reflects the selection on the headworn display. This is made
possible by our custom HTTP server, which on URL selec-
tion can interact with the backpack computer.
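The compass-pointer rule reduces to a signed angular difference between the building's bearing and the head's yaw. An illustrative sketch (assumed conventions, not the authors' code):

```python
# The pointer aims at the selected building and turns red when the building
# lies more than 90 degrees from the user's head yaw, i.e. behind the user.

def pointer_state(building_bearing_deg, head_yaw_deg):
    # signed angular difference folded into [-180, 180)
    diff = (building_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    color = "red" if abs(diff) > 90.0 else "green"
    return diff, color
```

Folding the difference into [-180, 180) avoids the wrap-around error at due north (e.g. a building at 350 degrees seen by a user facing 10 degrees is only 20 degrees away).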
The building’s menu bar contains the name of the build-
ing plus additional items: “Architecture”, “Departments”,
and “Miscellaneous”. Selecting the name of the building
from the menu using the trackpad sends a relevant URL to
the handheld computer’s browser. Selecting any of the
remaining menu entries also sends a URL to the browser
and additionally creates a collection of items that are posi-
tioned near the building on the headworn display. These
items represent the information that was requested by the
second level menu entry selection and they stay in the same
relative position to the building (and its label) until this
menu level is left via a “Dismiss” entry.
To call the user's attention to the new material on the
handheld computer, when menu items that send URLs are
selected, a copy of the menu item is translated down to and
off the bottom of the headworn display. For example,
Figure 3 shows the Philosophy Building with the “Depart-
ments” menu item highlighted prior to selection. When the
item is selected, the building is surrounded with the names
of the departments that it contains, as shown in Figure 4.
The automatically-generated web page displayed on the
handheld is shown in Figure 5(a).
There are two ways to access information about the
selected building. On the headworn display, the user can
cycle through the surrounding items with the trackpad and
select any to present relevant information about it on the
handheld display. Alternatively, the user can select a corre-
sponding item from the automatically-generated web page.
For example, Figure 5(b) shows the regular web page for
one of the departments in the Philosophy Building,
accessed by the URL selection shown in Figure 5(a).
Another way of accessing information about a specific
department is through the global list of departments that is
produced on the handheld by selecting the top-level
Figure 2. View shot through the see-through head-
worn display, showing campus buildings with over-
laid names. Labels increase in brightness as they
near the center of the display.
Figure 3. A view of the Philosophy Building with the “Departments” menu item highlighted.

“Departments?” menu item on the headworn display. In
this case the associated building does not have to be
selected beforehand.
4. System Design
While we wanted our system to be as lightweight and
comfortable as possible, we also decided to use only off-
the-shelf hardware to avoid the expense, effort, and time
involved in building our own. Consequently we often set-
tled for items that were far bulkier than we would like them
to be, in return for the increased flexibility that they
offered. The combined weight of the system is just under
40 pounds.
Figure 4. After the “Departments” menu item is selected, the department list for the Philosophy Building is added to the world, arrayed about the building. The three figures show the label animation sequence: (a) a fraction of a second after selection, (b) approximately half a second later, and (c) after the animation has finished.
Figure 5. (a) Selecting the “Departments” menu item causes an automatically-generated URL to be sent to the web browser on the handheld computer, containing the department list for the Philosophy Building. (b) Actual home page for the English and Comparative Literature department, as selected from either the generated browser page or the department list of Figure 4.

The following subsections describe some of the hard-
ware and software choices that we made in designing our
system, whose hardware design is diagrammed in Figure 6.
4.1. Hardware
Backpack computer. It was important to us that our
main computer not only be portable, but also capable of
working with readily available peripherals, including high-
performance 3D graphics cards. We chose a Fieldworks
7600, which includes a 133MHz Pentium, 64Mbyte mem-
ory, 512K cache, 2GB disk, and a card cage that can hold 3
ISA and 3 PCI cards. While this system is our biggest com-
promise in terms of weight and size, it has significantly
simplified our development effort.
Graphics card. We use an Omnicomp 3Demon card,
which is based on the Glint 500DTX chipset, including
hardware support for 3D transformations and rendering
using OpenGL.
Handheld computer. Our handheld computer is a Mit-
subishi Amity, which has a 75MHz DX4, 640x480 color
display, 340MB disk, 16MB main memory, PCMCIA slot,
and integral stylus. Control of the headworn display menu
is accomplished through a Cirque GlidePoint trackpad that
we mounted on the back of the handheld computer. (We
originally considered having the handheld computer stylus
control the headworn display’s menu when it was within a
designated physical area of the handheld computer’s dis-
play. We decided against this, however, because it would
be difficult to remain in that area when the user was not
looking at the handheld display.)
Headworn display. Video see-through displays currently
provide a number of advantages over optical see-through
displays, particularly with regard to registration and proper
occlusion effects [27]. However, video-based systems
restrict the resolution of the real world to that of the virtual
world. While we believe that this is a good trade-off in
many applications, we feel that augmented reality systems
will become commonplace only when they truly add to
reality, rather than subtract from it. In our work we have
selected the relatively lightweight Virtual I/O i-glasses
head-worn display. This is a 60,000 triad color display. We
are also experimenting with a Virtual I/O 640x480 resolu-
tion greyscale display.
Orientation tracker. We use the built-in tracking pro-
vided with our headworn display. This includes a magne-
tometer, which senses the earth’s magnetic field to
determine head yaw, and a two-axis inclinometer that uses
gravity to detect head pitch and roll.
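A back-of-the-envelope sketch of how such sensor readings become orientation angles: the inclinometer's gravity vector gives pitch and roll, and the magnetometer's horizontal field components give yaw. Axis and sign conventions here are assumed for illustration, and a real implementation must also tilt-compensate the magnetometer using the measured pitch and roll:

```python
import math

def orientation(gx, gy, gz, mx, my):
    """Assumed formulas: gravity (gx, gy, gz) -> pitch/roll; horizontal
    magnetic field (mx, my) -> yaw relative to magnetic north. Degrees."""
    pitch = math.degrees(math.atan2(-gx, math.hypot(gy, gz)))
    roll = math.degrees(math.atan2(gy, gz))
    yaw = math.degrees(math.atan2(my, mx)) % 360.0
    return yaw, pitch, roll
```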
Position tracking. We use a Trimble DSM GPS receiver
to obtain position information for its antenna, which is
located on the backpack above the user's head. While normal
GPS generates readings that are accurate only to within
about 100 meters, it can be routinely coupled with correction
information broadcast from another receiver at a known
location, which reports how far off its own readings are. We
subscribe to a differential correction service provided by
Differential Corrections Inc., which allows us to achieve
about one-meter accuracy.
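The principle behind differential correction can be shown with a toy calculation (this is an illustration of the idea, not the Trimble or Differential Corrections Inc. protocol): a base receiver at a precisely known location measures the error shared by nearby receivers and broadcasts it, and the rover subtracts that error from its own reading.

```python
def corrected_position(rover_reading, base_reading, base_known):
    """All arguments are (easting, northing) pairs in meters."""
    # error common to both receivers, estimated at the base station
    err_e = base_reading[0] - base_known[0]
    err_n = base_reading[1] - base_known[1]
    return (rover_reading[0] - err_e, rover_reading[1] - err_n)
```

The correction works because atmospheric and satellite-clock errors are nearly identical for receivers tens of kilometers apart, which is what lets roughly 100-meter raw accuracy improve to about one meter.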
Network. To provide communication with the rest of our
infrastructure we use NCR WaveLan spread-spectrum
2Mbit/sec radio modems in both the backpack and hand-
held PCs, which operate with a network of base stations on
campus.
Power. With the exception of the computers, each of the
other hardware components has relatively modest power
requirements of under 10 watts each. We run them all using
an NRG Power-MAX NiCad rechargeable battery belt. It
has the added advantage of allowing a fully charged
replacement powerpack to be plugged in prior to unplug-
ging the depleted powerpack, without interrupting power.
4.2. Software
Infrastructure. We use COTERIE [19], a system that
provides language-level support for distributed virtual
environments. COTERIE is based on the distributed data-
Figure 6. Hardware design of our prototype campus information system. (The diagram shows the backpack PC with 3D graphics card, spread-spectrum radio, GPS receiver, and differential-correction FM receiver; the see-through headworn display with its orientation tracker and interface; the power belt; and the handheld PC with its radio and GlidePoint trackpad.)

Citations
Journal ArticleDOI
TL;DR: This work refers one to the original survey for descriptions of potential applications, summaries of AR system characteristics, and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies.
Abstract: In 1997, Azuma published a survey on augmented reality (AR). Our goal is to complement, rather than replace, the original survey by presenting representative examples of the new advances. We refer one to the original survey for descriptions of potential applications (such as medical visualization, maintenance and repair of complex equipment, annotation, and path planning); summaries of AR system characteristics (such as the advantages and disadvantages of optical and video approaches to blending virtual and real, problems in display focus and contrast, and system portability); and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies.

3,624 citations


Cites background or methods from "A touring machine: prototyping 3D m..."

  • ...The first outdoor system was the Touring Machine.(46) Developed at Columbia University, this self-contained system includes tracking (a compass, inclinometer, and differential GPS), a mobile computer with a 3D graphics board, and a see-through HMD....


  • ...(Courtesy T. Höllerer, S. Feiner, J. Pavlik, Columbia Univ.) 17 Battlefield Augmented Reality System, a descendent of the Touring Machine....



  • ...For example, a handheld tablet interacts well with a text document.(46) In the Augmented Surfaces system(47) (Figure 9), users manipulate data through a variety of real and virtual mechanisms and can interact with data through projective and handheld displays....


Journal ArticleDOI
01 Jul 2006
TL;DR: This work presents a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface that consists of an image-based modeling front end that automatically computes the viewpoint of each photograph and a sparse 3D model of the scene and image to model correspondences.
Abstract: We present a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end that automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo explorer uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, and to annotate image details, which are automatically transferred to other relevant images. We demonstrate our system on several large personal photo collections as well as images gathered from Internet photo sharing sites.

3,398 citations

Journal ArticleDOI
TL;DR: A conceptual framework is presented that separates the acquisition and representation of context from the delivery and reaction to context by a context-aware application, and a toolkit is built that instantiates this conceptual framework and supports the rapid development of a rich space of context- aware applications.
Abstract: Computing devices and applications are now used beyond the desktop, in diverse environments, and this trend toward ubiquitous computing is accelerating. One challenge that remains in this emerging research field is the ability to enhance the behavior of any application by informing it of the context of its use. By context, we refer to any information that characterizes a situation related to the interaction between humans, applications, and the surrounding environment. Context-aware applications promise richer and easier interaction, but the current state of research in this field is still far removed from that vision. This is due to 3 main problems: (a) the notion of context is still ill defined, (b) there is a lack of conceptual models and methods to help drive the design of context-aware applications, and (c) no tools are available to jump-start the development of context-aware applications. In this anchor article, we address these 3 problems in turn. We first define context, identify categories of contextual information, and characterize context-aware application behavior. Though the full impact of context-aware computing requires understanding very subtle and high-level notions of context, we are focusing our efforts on the pieces of context that can be inferred automatically from sensors in a physical environment. We then present a conceptual framework that separates the acquisition and representation of context from the delivery and reaction to context by a context-aware application. We have built a toolkit, the Context Toolkit, that instantiates this conceptual framework and supports the rapid development of a rich space of context-aware applications. We illustrate the usefulness of the conceptual framework by describing a number of context-aware applications that have been prototyped using the Context Toolkit. 
We also demonstrate how such a framework can support the investigation of important research challenges in the area of context-aware computing.

3,095 citations

Journal ArticleDOI
TL;DR: This paper presents structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like “Notre Dame” or “Trevi Fountain,” and presents these algorithms and results as a first step towards 3D modeled sites, cities, and landscapes from Internet imagery.
Abstract: There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like "Notre Dame" or "Trevi Fountain." This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world's well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.

2,207 citations

Journal ArticleDOI
TL;DR: The field of AR is described, including a brief definition and development history, the enabling technologies and their characteristics, and some known limitations regarding human factors in the use of AR systems that developers will need to overcome.
Abstract: We are on the verge of ubiquitously adopting Augmented Reality (AR) technologies to enhance our percep- tion and help us see, hear, and feel our environments in new and enriched ways. AR will support us in fields such as education, maintenance, design and reconnaissance, to name but a few. This paper describes the field of AR, including a brief definition and development history, the enabling technologies and their characteristics. It surveys the state of the art by reviewing some recent applications of AR technology as well as some known limitations regarding human factors in the use of AR systems that developers will need to overcome.

1,526 citations

References
Proceedings ArticleDOI
09 Dec 1968
TL;DR: The fundamental idea behind the three-dimensional display is to present the user with a perspective image which changes as he moves; the display depends heavily on the "kinetic depth effect".
Abstract: The fundamental idea behind the three-dimensional display is to present the user with a perspective image which changes as he moves. The retinal image of the real objects which we see is, after all, only two-dimensional. Thus if we can place suitable two-dimensional images on the observer's retinas, we can create the illusion that he is seeing a three-dimensional object. Although stereo presentation is important to the three-dimensional illusion, it is less important than the change that takes place in the image when the observer moves his head. The image presented by the three-dimensional display must change in exactly the way that the image of a real object would change for similar motions of the user's head. Psychologists have long known that moving perspective images appear strikingly three-dimensional even without stereo presentation; the three-dimensional display described in this paper depends heavily on this "kinetic depth effect."

1,825 citations


"A touring machine: prototyping 3D m..." refers background in this paper

  • ...This concept was pioneered by Ivan Sutherland [27], and is accomplished through the use of tracked “see-through” displays that enrich the user’s view of the world by overlaying visual, auditory, and even haptic, material on what she experiences....


Proceedings ArticleDOI
07 Jan 1992
TL;DR: The authors describe the design and prototyping steps they have taken toward the implementation of a heads-up, see-through, head-mounted display (HUDset), combined with head position sensing and a real world registration system, that will enable cost reductions and efficiency improvements in many of the human-involved operations in aircraft manufacturing.
Abstract: The authors describe the design and prototyping steps they have taken toward the implementation of a heads-up, see-through, head-mounted display (HUDset). Combined with head position sensing and a real world registration system, this technology allows a computer-produced diagram to be superimposed and stabilized on a specific position on a real-world object. Successful development of the HUDset technology will enable cost reductions and efficiency improvements in many of the human-involved operations in aircraft manufacturing, by eliminating templates, formboard diagrams, and other masking devices. >

1,257 citations


"A touring machine: prototyping 3D m..." refers background in this paper

  • ...Previous research in augmented reality has addressed a variety of application areas including aircraft cockpit control [12], assistance in surgery [26], viewing hidden building infrastructure [10], maintenance and repair [9], and parts assembly [5, 29]....


Journal ArticleDOI

1,032 citations


"A touring machine: prototyping 3D m..." refers background in this paper

  • ...Previous research in augmented reality has addressed a variety of application areas including aircraft cockpit control [12], assistance in surgery [26], viewing hidden building infrastructure [10], maintenance and repair [9], and parts assembly [5, 29]....


Journal ArticleDOI
TL;DR: The wearable personal imaging system described in this paper is based on the idea of keeping an eye on the screen while walking around and doing other things, which distinguishes it from laptops and personal digital assistants.
Abstract: Miniaturization of components has enabled systems that are wearable and nearly invisible, so that individuals can move about and interact freely, supported by their personal information domain. To explore such new concepts in imaging and lighting, I designed and built the wearable personal imaging system. My invention differed from present-day laptops and personal digital assistants in that I could keep an eye on the screen while walking around and doing other things. Just as computers have come to serve as organizational and personal information repositories, computer clothing, when worn regularly, could become a visual memory prosthetic and perception enhancer.

568 citations

Proceedings ArticleDOI
01 Dec 1995
TL;DR: The combination of ID-awareness and a portable video-see-through display solves several problems with current ubiquitous computing systems and augmented reality systems.
Abstract: Current user interface techniques such as WIMP or the desktop metaphor do not support real world tasks, because the focus of these user interfaces is only on human–computer interactions, not on human–real world interactions. In this paper, we propose a method of building computer augmented environments using a situation-aware portable device. This device, called NaviCam, has the ability to recognize the user’s situation by detecting color-code IDs in real world environments. It displays situation sensitive information by superimposing messages on its video see-through screen. The combination of ID-awareness and a portable video-see-through display solves several problems with current ubiquitous computing systems and augmented reality systems.

566 citations


Additional excerpts

  • ...In contrast to these systems, which use see-through headworn displays, Rekimoto [23] has used handheld displays to overlay information on colorcoded objects....


Frequently Asked Questions (14)
Q1. What are the contributions in this paper?

The authors describe a prototype system that combines together the overlaid 3D graphics of augmented reality with the untethered freedom of mobile computing. The authors introduce an application that presents information about their university's campus, using a head-tracked, see-through, headworn, 3D display, and an untracked, opaque, handheld, 2D display with stylus and trackpad. The authors provide an illustrated explanation of how their prototype is used, and describe their rationale behind designing its software infrastructure and selecting the hardware on which it runs.

The authors are in the process of replacing the magnetometer/inclinometer contained in the headworn display with a higher-quality unit, and are considering obtaining a gyroscopic system for hybrid tracking. 

A selected building remains selected until the user's head orientation dwells on another building for more than a second, as indicated by the color change.
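The dwell-based selection described above can be sketched as a small state machine: the current selection only changes once gaze has rested on a different building for longer than a threshold. This is a minimal illustration, not the authors' Repo implementation; the class and the one-second constant are assumptions drawn from the description.

```python
import time

DWELL_SECONDS = 1.0  # dwell threshold ("more than a second" in the paper)

class DwellSelector:
    """Keeps a building selected until the user's head orientation
    dwells on a different building for more than DWELL_SECONDS."""

    def __init__(self):
        self.selected = None    # currently selected building
        self._candidate = None  # building the gaze is currently resting on
        self._since = None      # time the candidate was first gazed at

    def update(self, gazed_building, now=None):
        """Call once per tracking update with the building under the
        user's gaze (or None); returns the current selection."""
        now = time.monotonic() if now is None else now
        if gazed_building is None or gazed_building == self.selected:
            self._candidate = None          # dwell timer resets
            return self.selected
        if gazed_building != self._candidate:
            self._candidate = gazed_building
            self._since = now               # start timing the new candidate
        elif now - self._since > DWELL_SECONDS:
            self.selected = gazed_building  # dwell long enough: switch
            self._candidate = None
        return self.selected
```

In a real renderer, the return value would drive the color change that gives the user feedback about the current selection.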

Better outdoor position tracking can be addressed through real-time kinematic GPS systems, which can achieve centimeter-level accuracy.

The application running on the handheld PC is a custom HTTP server in charge of generating web pages on the fly and also accessing and caching external web pages by means of a proxy component. 

One of the main reasons that the authors run their own HTTP server on the handheld display is that it gives us the opportunity to react freely to user input from the web browser. 
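The handheld server described above has two jobs: generating tour pages on the fly and proxying/caching external web pages. The authors' system was written in Repo; the following is a hedged Python sketch of the same architecture, with hypothetical helper names (`fetch_cached`, `render_page`) and a hypothetical `/proxy?url=` route.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

cache = {}  # URL -> cached response body (the "proxy component" cache)

def fetch_cached(url, fetch=lambda u: urlopen(u).read()):
    """Return the body for an external URL, fetching and caching it
    on first access; `fetch` is injectable for testing."""
    if url not in cache:
        cache[url] = fetch(url)
    return cache[url]

def render_page(path):
    """Generate a tour page on the fly for a local path."""
    return ("<html><body>Tour information for %s</body></html>" % path).encode()

class TourHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/proxy?url="):
            body = fetch_cached(self.path[len("/proxy?url="):])
        else:
            body = render_page(self.path)
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("", 8000), TourHandler).serve_forever()
```

Running the server in-process is what lets the application react freely to user input from the web browser, since every request passes through its own handler.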

Since tracked sites can be predicted based on satellite ephemeris information broadcast to the GPS receiver, combined with known campus topology, the authors could also direct the user toward other reliably tracked sites either on the headworn display or on the 2D absolute space of a map viewed on the handheld computer. 

It has the added advantage of allowing a fully charged replacement powerpack to be plugged in prior to unplugging the depleted powerpack, without interrupting power. 

Eventually GPS techniques may be used with spread-spectrum radio transmitters to support precise tracking in large indoor spaces [3].

The prototype comprises two applications, one running on each machine, implemented in approximately 3600 lines of commented Repo code. 

This includes a magnetometer, which senses the earth’s magnetic field to determine head yaw, and a two-axis inclinometer that uses gravity to detect head pitch and roll. 
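The magnetometer/inclinometer combination described above can be sketched numerically: pitch and roll come from the gravity vector, and yaw (heading) comes from the horizontal components of the earth's magnetic field after tilt compensation. This is a textbook-style sketch, not the device's firmware; the body-axis convention (x forward, y right, z down) and sign conventions are assumptions.

```python
import math

def orientation_from_sensors(ax, ay, az, mx, my, mz):
    """Estimate head orientation: pitch and roll from the gravity
    vector (a*), yaw from the magnetic field (m*), tilt-compensated.
    Assumes body axes x forward, y right, z down; angles in radians."""
    pitch = math.atan2(-ax, math.hypot(ay, az))  # nose up/down from gravity
    roll = math.atan2(ay, az)                    # head tilt from gravity
    # Rotate the magnetic field into the horizontal plane, then take
    # the heading from its horizontal components.
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch)
           + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    yaw = math.atan2(-myh, mxh)
    return yaw, pitch, roll
```

With the head level (gravity straight down, field straight ahead) all three angles are zero; tilting the head changes the raw magnetometer reading, which is why the tilt compensation step is needed before computing yaw.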

While the authors wanted their system to be as lightweight and comfortable as possible, they also decided to use only off-the-shelf hardware to avoid the expense, effort, and time involved in building their own.

Mann [21] has developed a family of wearable systems with headworn displays, the most recent of which uses optical flow to overlay textual information on automatically recognized objects. 

The CMU Wearable Computer Project has developed several generations of mobile user interfaces using a single handheld or untracked headworn display with GPS, including a campus tour [25].