
Showing papers on "Augmented reality published in 1995"


Proceedings ArticleDOI
21 Dec 1995
TL;DR: In this article, the authors discuss augmented reality displays in a general sense, within the context of a reality-virtuality (RV) continuum, encompassing a large class of "mixed reality" displays, which also includes augmented virtuality (AV).
Abstract: In this paper we discuss augmented reality (AR) displays in a general sense, within the context of a reality-virtuality (RV) continuum, encompassing a large class of "mixed reality" (MR) displays, which also includes augmented virtuality (AV). MR displays are defined by means of seven examples of existing display concepts in which real objects and virtual objects are juxtaposed. Essential factors which distinguish different MR display systems from each other are presented, first by means of a table in which the nature of the underlying scene, how it is viewed, and the observer's reference to it are compared, and then by means of a three-dimensional taxonomic framework comprising: extent of world knowledge, reproduction fidelity, and extent of presence metaphor. A principal objective of the taxonomy is to clarify terminology issues and to provide a framework for classifying research across different disciplines.
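To make the taxonomy concrete, here is a minimal sketch of how a display could be described as a point along the three proposed axes. The class name, 0-to-1 scales, and example placements are all invented for illustration; the paper defines the axes qualitatively.

```python
from dataclasses import dataclass

# Hypothetical encoding of the paper's three taxonomy axes; scales and
# example values are illustrative, not taken from the paper.
@dataclass
class MixedRealityDisplay:
    name: str
    extent_of_world_knowledge: float    # EWK: 0 = world unmodelled, 1 = fully modelled
    reproduction_fidelity: float        # RF: 0 = crude wireframe, 1 = direct-view quality
    extent_of_presence_metaphor: float  # EPM: 0 = monitor-based, 1 = fully immersive

displays = [
    MixedRealityDisplay("monitor-based video AR", 0.3, 0.8, 0.2),
    MixedRealityDisplay("optical see-through HMD", 0.4, 1.0, 0.9),
    MixedRealityDisplay("graphic AV world", 0.9, 0.5, 0.9),
]

for d in displays:
    print(f"{d.name}: EWK={d.extent_of_world_knowledge}, "
          f"RF={d.reproduction_fidelity}, EPM={d.extent_of_presence_metaphor}")
```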

1,684 citations


Proceedings ArticleDOI
01 Dec 1995
TL;DR: The combination of ID-awareness and a portable video see-through display solves several problems with current ubiquitous computer systems and augmented reality systems.
Abstract: Current user interface techniques such as WIMP or the desktop metaphor do not support real world tasks, because the focus of these user interfaces is only on human–computer interactions, not on human–real world interactions. In this paper, we propose a method of building computer augmented environments using a situation-aware portable device. This device, called NaviCam, has the ability to recognize the user's situation by detecting color-code IDs in real world environments. It displays situation-sensitive information by superimposing messages on its video see-through screen. The combination of ID-awareness and a portable video see-through display solves several problems with current ubiquitous computer systems and augmented reality systems.
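As an illustration of the ID-awareness idea, the following sketch decodes a color-bar marker into an integer ID and looks up a situation-sensitive message to superimpose. The bar-to-bit scheme and the ID table are invented stand-ins; the paper does not specify this exact format.

```python
import numpy as np

MESSAGES = {12: "Seminar at 3pm in this lecture hall."}   # hypothetical ID table

def decode_color_bars(strip: np.ndarray) -> int | None:
    """strip: (n_bars, 3) mean RGB per bar; red bar = 1, blue bar = 0."""
    bits = []
    for r, g, b in strip:
        if r > 150 and b < 100:
            bits.append("1")
        elif b > 150 and r < 100:
            bits.append("0")
        else:
            return None            # not a valid marker
    return int("".join(bits), 2)

def message_for(strip: np.ndarray) -> str:
    code = decode_color_bars(strip)
    return MESSAGES.get(code, "") if code is not None else ""

# Bars red, red, blue, blue -> 0b1100 = ID 12 -> its message is overlaid.
print(message_for(np.array([[200, 0, 0], [200, 0, 0], [0, 0, 200], [0, 0, 200]])))
```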

566 citations


Patent
04 Aug 1995
TL;DR: In this paper, a virtual reality hybrid of virtual and real environments is provided which permits the user to perform significant physical exertion by applying forces to the machine while viewing images on a head mounted display.
Abstract: This invention relates to computer controlled exercise machines and provides the user with a wide variety of interactive exercise options controlled by software. A virtual reality hybrid of virtual and real environments is provided which permits the user to perform significant physical exertion by applying forces to the machine while viewing images on a head mounted display. The invention permits the user to view his own hands and body superimposed over a computer generated image of objects that are not actually present while maintaining parts of the exercise machine that the user physically contacts, such as a handle, superimposed over the computer generated image. As the user exerts forces against the machine (such as the handle) he perceives that he is exerting forces against the objects the images represent. The invention includes a video camera and computer adapted to record images from the real world which may be combined with computer generated images while retaining the proper spatial orientation to produce a composite virtual reality environment. Virtual reality exercise regimens adapted to the user's individual capabilities, virtual reality exercise games, virtual reality competitive sports, and virtual reality team sports are disclosed.

500 citations


Proceedings ArticleDOI
07 May 1995
TL;DR: A prototype audio augmented reality-based tour guide that superimposes audio on the world based on where a user is located is proposed for use as an automated tour guide in museums and is expected to enhance the social aspects of museum visits, compared to taped tour guides.
Abstract: Augmented reality (or computer augmented environments as it is sometimes called) uses computers to enhance the richness of the real world. It differs from virtual reality in that it doesn't attempt to replace the real world. Our prototype automated tour guide superimposes audio on the world based on where a user is located. We propose this technique for use as an automated tour guide in museums and expect it will enhance the social aspects of museum visits, compared to taped tour guides. INTRODUCTION For many types of information retrieval, social interaction is critical to the experience. For instance, why do people go to live music concerts instead of listening to compact discs at home? Partly because of the different quality of sound, but it is also largely due to the social experience of being at the live show, being with your friends, and being part of the audience. Many people believe that computation enhances our everyday lives. But many of the forms in which computers aid us also work to isolate us. Perhaps the most glaring example of this is virtual reality, where the main point is to take us out of the physical world by replacing our senses with computer-generated ones. Augmented reality, on the other hand, is an attempt to combine our real world interactions with the richness of computational information without isolating people from each other. The basic idea is to superimpose computer generated data on top of the real world, as the person moves within it. This idea was originally described by Myron Krueger [4], but now a number of groups are beginning to experiment with it (see [2] for a collection of several research articles, or [1] for a survey). MUSEUM TOURS One place a low-tech version of augmented reality has long been in the marketplace is museums. It is quite common for museums to rent audio-tape tour guides that viewers carry around with them as they tour the exhibits. While this technology works reasonably well, many people using it become frustrated because it seems to obstruct some of their social purposes in attending the museum. As with music, part of the reason many people go to museums is to socialize, to be with friends and to discuss the exhibit as they experience it. Taped tour guides conflict with these goals because the tapes are linear, preplanned, and go at their own pace. This makes it hard to stay with friends because if one person turns off their tape temporarily, it is very difficult to get synchronized again. In addition, the taped tours typically describe only a relatively small subset of the pieces on exhibit. The pieces described on tape may not be the ones a particular viewer is interested in hearing. But because the tape must be accessed linearly, it is impossible to skip over or access descriptions out of order. AUDIO AUGMENTED REALITY PROTOTYPE A more technologically sophisticated tour guide, on the other hand, can offer the benefits of automation without the social conflicts caused by the taped tour guide. We have built a prototype audio augmented reality-based tour guide. This system replaces analog audio tapes with random access digital audio. In addition, it adds a microcomputer and an invisible spatial locating device that allow much more freedom for the participant.
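A minimal sketch of the location-triggered playback idea, assuming the locating device reports (x, y) in meters; exhibit names, positions, and radii are invented for illustration.

```python
import math

EXHIBITS = [
    {"clip": "monet_haystacks.wav", "pos": (2.0, 5.0), "radius": 1.5},
    {"clip": "rodin_thinker.wav",   "pos": (8.0, 1.0), "radius": 2.0},
]

def clip_for_position(x: float, y: float) -> str | None:
    """Return the clip for the nearest in-range exhibit, or None.

    Unlike a linear tape, any clip can be triggered in any order, so
    visitors can wander, skip pieces, or stay synchronized with friends."""
    best = None
    for e in EXHIBITS:
        d = math.dist((x, y), e["pos"])
        if d <= e["radius"] and (best is None or d < best[0]):
            best = (d, e["clip"])
    return best[1] if best else None

print(clip_for_position(2.4, 5.8))   # near the first exhibit -> its clip
```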

213 citations


Journal ArticleDOI
TL;DR: Applications of this powerful visualization tool include previewing proposed buildings in their natural settings, interacting with complex machinery for purposes of construction or maintenance training, and visualizing in-patient medical data such as ultrasound.
Abstract: Augmented reality systems allow users to interact with real and computer-generated objects by displaying 3D virtual objects registered in a user's natural environment. Applications of this powerful visualization tool include previewing proposed buildings in their natural settings, interacting with complex machinery for purposes of construction or maintenance training, and visualizing in-patient medical data such as ultrasound. In all these applications, computer-generated objects must be visually registered with respect to real-world objects in every image the user sees. If the application does not maintain accurate registration, the computer-generated objects appear to float around in the user's natural environment without having a specific 3D spatial position. Registration error is the observed displacement in the image between the actual and intended positions of virtual objects.
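The definition of registration error can be made concrete with a small pinhole-projection example; the camera parameters below are illustrative, not from the paper.

```python
import numpy as np

def project(K: np.ndarray, p_cam: np.ndarray) -> np.ndarray:
    """Project a 3D point in camera coordinates to pixel coordinates."""
    p = K @ p_cam
    return p[:2] / p[2]

K = np.array([[800.0,   0.0, 320.0],    # focal lengths and principal point,
              [  0.0, 800.0, 240.0],    # in pixels
              [  0.0,   0.0,   1.0]])

intended = project(K, np.array([0.10, 0.05, 2.0]))  # where it should appear
actual   = project(K, np.array([0.11, 0.05, 2.0]))  # where it was drawn
print(f"registration error: {np.linalg.norm(actual - intended):.1f} px")  # 4.0 px
```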

196 citations


Journal ArticleDOI
TL;DR: This work identifies the calibration steps necessary to build a computer model of the real world and describes each of the calibration processes that determine the internal parameters of the imaging devices (scan converter, frame grabber, and video camera), as well as the geometric transformations that relate all of the physical objects of the system to a known world coordinate system.
Abstract: Augmented reality entails the use of models and their associated renderings to supplement information in a real scene. In order for this information to be relevant or meaningful, the models must be positioned and displayed in such a way that they blend into the real world in terms of alignments, perspectives, illuminations, etc. For practical reasons the information necessary to obtain this realistic blending cannot be known a priori, and cannot be hard wired into a system. Instead a number of calibration procedures are necessary so that the location and parameters of each of the system components are known. We identify the calibration steps necessary to build a computer model of the real world and then, using the monitor based augmented reality system developed at ECRC (GRASP) as an example, we describe each of the calibration processes. These processes determine the internal parameters of our imaging devices (scan converter, frame grabber, and video camera), as well as the geometric transformations that relate all of the physical objects of the system to a known world coordinate system.
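The geometric half of such a calibration reduces to composing rigid transforms between coordinate systems. A minimal sketch, with identity rotations and made-up translations standing in for calibrated values:

```python
import numpy as np

def rigid(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

world_T_object = rigid(np.eye(3), np.array([1.0, 0.0, 0.0]))   # object pose in world
world_T_camera = rigid(np.eye(3), np.array([0.0, 0.0, -2.0]))  # camera pose in world
camera_T_world = np.linalg.inv(world_T_camera)

# Express an object-frame point in camera coordinates, ready for rendering:
p_obj = np.array([0.0, 0.1, 0.0, 1.0])
p_cam = camera_T_world @ world_T_object @ p_obj
print(p_cam[:3])   # -> [1.  0.1 2. ]
```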

184 citations


01 Jan 1995
TL;DR: This paper describes experiments in the use of cross-correlation as a means of tracking pointing devices for a digital desk using a reference template and a method to detect when to initiate tracking as well as when tracking has failed.
Abstract: This paper describes experiments in the use of cross-correlation as a means of tracking pointing devices for a digital desk. The first section introduces the motivation by describing the potential for applying real-time computer vision to man-machine interaction. The problem of tracking is then formulated and addressed as a problem of optimal signal detection. Signal detection leads to a formulation of tracking using cross-correlation with a reference template, as shown in Section 2. The problems of normalisation and of choosing the size of the reference template and search region are addressed. A method is provided to detect when to initiate tracking as well as when tracking has failed. The problem of updating the reference mask is also addressed.
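A minimal normalized cross-correlation tracker in the spirit of the paper: slide the reference template over a search region around the last known position and keep the best-scoring offset. The search radius and failure threshold are invented parameters.

```python
import numpy as np

def ncc(patch: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation of two equal-size grayscale patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def track(image, template, last_xy, search=8, min_score=0.7):
    """Return ((x, y), score); score < min_score signals tracking failure."""
    h, w = template.shape
    best_score, best_xy = -1.0, last_xy
    x0, y0 = last_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = x0 + dx, y0 + dy
            if x < 0 or y < 0 or y + h > image.shape[0] or x + w > image.shape[1]:
                continue                       # search window off the image
            s = ncc(image[y:y + h, x:x + w], template)
            if s > best_score:
                best_score, best_xy = s, (x, y)
    return best_xy, best_score
```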

166 citations


22 Feb 1995
TL;DR: This dissertation demonstrates that predicting future head locations is an effective approach for significantly reducing dynamic errors; the analytical framework presented can also estimate the maximum possible time-domain error and the maximum tolerable system delay given a specified maximum time-domain error.
Abstract: In Augmented Reality systems, see-through Head-Mounted Displays (HMDs) superimpose virtual three-dimensional objects on the real world. This technology has the potential to enhance a user's perception of and interaction with the real world. However, many Augmented Reality applications will not be accepted unless virtual objects are accurately registered with their real counterparts. Good registration is difficult, because of the high resolution of the human visual system and its sensitivity to small differences. Registration errors fall into two categories: static errors, which occur even when the user remains still, and dynamic errors caused by system delays when the user moves. Dynamic errors are usually the largest errors. This dissertation demonstrates that predicting future head locations is an effective approach for significantly reducing dynamic errors. This demonstration is performed in real time with an operational Augmented Reality system. First, evaluating the effect of prediction requires robust static registration. Therefore, this system uses a custom optoelectronic head-tracking system and three calibration procedures developed to measure the viewing parameters. Second, the system predicts future head positions and orientations with the aid of inertial sensors. Effective use of these sensors requires accurate estimation of the varying prediction intervals, optimization techniques for determining parameters, and a system built to support real-time processes. On average, prediction with inertial sensors is 2 to 3 times more accurate than prediction without inertial sensors and 5 to 10 times more accurate than not doing any prediction at all. Prediction is most effective at short prediction intervals, empirically determined to be about 80 milliseconds or less. An analysis of the predictor in the frequency domain shows the predictor magnifies the signal by roughly the square of the angular frequency and the prediction interval. For specified head-motion sequences and prediction intervals, this analytical framework can also estimate the maximum possible time-domain error and the maximum tolerable system delay given a specified maximum time-domain error. Future steps that may further improve registration are discussed.
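A toy version of the extrapolation step: advance each pose component over the prediction interval using rate and acceleration from the sensors, instead of rendering at the stale measured pose. Real predictors, including this dissertation's, use filtering and proper rotation handling, which this sketch omits.

```python
import numpy as np

def predict(pose: float, rate: float, accel: float, dt: float) -> float:
    """Second-order Taylor extrapolation of one pose component."""
    return pose + rate * dt + 0.5 * accel * dt * dt

# With an 80 ms interval (about the empirical useful limit reported above)
# and the head yawing at 90 deg/s:
yaw = predict(np.radians(10.0), np.radians(90.0), 0.0, dt=0.08)
print(np.degrees(yaw))   # -> 17.2 degrees, vs. a stale 10 with no prediction
```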

159 citations


Proceedings ArticleDOI
15 Apr 1995
TL;DR: A video see-through augmented reality system capable of resolving occlusion between real and computer-generated objects and a new algorithm that assigns depth values to each pixel in a pair of stereo video images in near-real-time.
Abstract: Current state-of-the-art augmented reality systems simply overlay computer-generated visuals on the real-world imagery, for example via video or optical see-through displays. However, overlays are not effective when displaying data in three dimensions, since occlusion between the real and computer-generated objects is not addressed. We present a video see-through augmented reality system capable of resolving occlusion between real and computer-generated objects. The heart of our system is a new algorithm that assigns depth values to each pixel in a pair of stereo video images in near-real-time. The algorithm belongs to the class of stereo matching algorithms and thus works in fully dynamic environments. We describe our system in general and the stereo matching algorithm in particular.
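Once stereo matching yields a depth per pixel of the real scene, the occlusion step itself is a per-pixel depth test during compositing, as this sketch with made-up arrays illustrates.

```python
import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Keep the virtual pixel only where it is nearer than the real surface."""
    virt_in_front = virt_depth < real_depth       # HxW boolean mask
    out = real_rgb.copy()
    out[virt_in_front] = virt_rgb[virt_in_front]
    return out

H, W = 4, 4
real_rgb   = np.zeros((H, W, 3), dtype=np.uint8)
real_depth = np.full((H, W), 2.0)                 # real wall 2 m away
virt_rgb   = np.full((H, W, 3), 255, dtype=np.uint8)
virt_depth = np.full((H, W), np.inf)
virt_depth[1:3, 1:3] = 1.0                        # virtual cube at 1 m
print(composite(real_rgb, real_depth, virt_rgb, virt_depth)[:, :, 0])
# Only the 2x2 region where the cube is nearer shows virtual pixels.
```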

132 citations


Proceedings ArticleDOI
05 Jul 1995
TL;DR: A taxonomy for classifying human-mediated control of remote manipulation systems is proposed, based on three dimensions: degree of machine autonomy, level of structure of the remote environment, and extent of knowledge, or modellability, of the remote world.
Abstract: A taxonomy for classifying human-mediated control of remote manipulation systems is proposed, based on three dimensions: degree of machine autonomy, level of structure of the remote environment, and extent of knowledge, or modellability, of the remote world. For certain unstructured, and thus difficult to model, environments, a case is made for remote manipulation by means of director/agent control, rather than telepresence. The ARGOS augmented reality toolkit is presented as a means for gathering quantitative spatial information about (i.e., for interactively creating a partial model of) a remotely viewed 3D worksite. This information is used for off-line local programming of the remote manipulator (i.e., virtual telerobotic control), and when ready the final commands are transmitted for execution to the manipulator control system.

114 citations


Proceedings ArticleDOI
11 Mar 1995
TL;DR: The idea of dynamically measuring registration error in combined images (2D error) and using that information to correct 3D coordinate system registration error which in turn improves registration in the combined images is introduced.
Abstract: This paper addresses the problem of correcting visual registration errors in video-based augmented-reality systems. Accurate visual registration between real and computer-generated objects in combined images is critically important for conveying the perception that both types of object occupy the same 3-dimensional (3D) space. To date, augmented-reality systems have concentrated on simply improving 3D coordinate system registration in order to improve apparent (image) registration error. This paper introduces the idea of dynamically measuring registration error in combined images (2D error) and using that information to correct 3D coordinate system registration error which in turn improves registration in the combined images. Registration can be made exact in every combined image if a small video delay can be tolerated. Our experimental augmented-reality system achieves improved image registration, stability, and error tolerance from tracking system drift and jitter over current augmented-reality systems. No additional tracking hardware or other devices are needed on the user's head-mounted display. Computer-generated objects can be "nailed" to real-world reference points in every image the user sees with an easily-implemented algorithm. Dynamic error correction as demonstrated here will likely be a key component of future augmented-reality systems.
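The closed-loop idea can be sketched as measuring the 2D offset between a reference point detected in the video frame and its rendered position, then shifting the overlay by that offset before compositing. The simple proportional update below is illustrative, not the paper's exact correction scheme.

```python
import numpy as np

def corrected_overlay_pos(rendered_xy: np.ndarray,
                          observed_xy: np.ndarray,
                          gain: float = 1.0) -> np.ndarray:
    """Shift the overlay by the measured 2D registration error."""
    error_2d = observed_xy - rendered_xy     # measured in the combined image
    return rendered_xy + gain * error_2d     # gain = 1 "nails" the object

rendered = np.array([363.0, 262.0])   # where 3D tracking said to draw it
observed = np.array([360.0, 260.0])   # where the real reference point is
print(corrected_overlay_pos(rendered, observed))   # -> [360. 260.]
```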

Journal ArticleDOI
TL;DR: A system for constructing collaborative design applications based on distributed augmented reality that enables users at remote sites to collaborate on design tasks and interactively control their local view, try out design options, and communicate design proposals.
Abstract: This paper presents a system for constructing collaborative design applications based on distributed augmented reality. Augmented reality interfaces are a natural method for presenting computer-based design by merging graphics with a view of the real world. Distribution enables users at remote sites to collaborate on design tasks. The users interactively control their local view, try out design options, and communicate design proposals. They share virtual graphical objects that substitute for real objects which are not yet physically created or are not yet placed into the real design environment.

Book ChapterDOI
TL;DR: This chapter describes a system for annotating real-world objects using augmented reality, and studies the fundamental features of the augmented reality system, as well as those capabilities developed specifically for the application.
Abstract: This chapter describes a system for annotating real-world objects using augmented reality. It also studies the fundamental features of the augmented reality system, as well as those capabilities developed specifically for the application. It provides an overview of the software and hardware architecture of our augmented reality system, and describes the general calibration procedures required for this and most other augmented reality applications. The issues of tracking and user interaction, and the details of generating and displaying the annotations, are also discussed. The augmented reality system uses a standard video camera and combines the video signal with computer-generated graphics, the result of which is presented on a normal video display. The core of the augmented reality system is an interactive 3D computer graphics system, providing methods for representing and viewing 3D geometric models. The mixing of the computer graphics and video input is done by a Folsom Research Otto 9500 Scan-Converter. Calibration is an essential component of augmented reality that provides the information that allows for the registration and overlaying of geometric and viewing models onto objects and cameras in the real world.

Journal ArticleDOI
TL;DR: The combination of frameless stereotactic localization technology with real-time video processing permits the visualization of medical imaging data as a video overlay during the actual surgical procedure, resulting in surgical navigation assistance without limiting the judgement of the physician based on the continuous observation of the operating field.
Abstract: We present a new visualization system for image-guided stereotactic navigation in tumor surgery. The combination of frameless stereotactic localization technology with real-time video processing permits the visualization of medical imaging data as a video overlay during the actual surgical procedure. Virtual computer-generated anatomical structures are displayed intraoperatively in a semi-immersive head-up display. This results in surgical navigation assistance without limiting the judgement of the physician based on the continuous observation of the operating field. The case presented documents the potential of augmented reality visualization concepts in tumor surgery of the head.

Proceedings ArticleDOI
01 Dec 1995
TL;DR: A new user-interface metaphor for immersive virtual reality — the virtual tricorder, which visually duplicates a six-degrees-of-freedom input device in the virtual environment and unifies many existing interaction techniques for immersive virtual reality.
Abstract: We describe a new user-interface metaphor for immersive virtual reality — the virtual tricorder. The virtual tricorder visually duplicates a six-degrees-of-freedom input device in the virtual environment. Since we map the input device to the tricorder one-to-one at all times, the user identifies the two. Thus, the resulting interface is visual as well as tactile, multipurpose, and based on a tool metaphor. It unifies many existing interaction techniques for immersive virtual reality.

Patent
27 Mar 1995
TL;DR: In this paper, a vision system which collects information from similar vision systems having a different perspective of a scene is arranged to produce a composite image, which can then include features impossible to otherwise show.
Abstract: A vision system collects information from similar vision systems having a different perspective of a scene; the systems are arranged to produce a composite image. The composite image, having information from both perspectives, can then include features impossible to show otherwise. Objects otherwise "hidden" from a first perspective are displayed, since information from a second perspective may contain imagery relating to those objects. A translation of spatial coordinates conditions the image from the second perspective such that it will fit into a composite image and match the first perspective.
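For a planar scene, the patent's "translation of spatial coordinates" can be sketched as a homography mapping pixels from the second perspective into the first so the views can be composited; the matrix here is a toy example.

```python
import numpy as np

def warp_point(H: np.ndarray, xy: tuple) -> tuple:
    """Apply a 3x3 homography to one pixel coordinate."""
    x, y, w = H @ np.array([xy[0], xy[1], 1.0])
    return (x / w, y / w)

# Toy homography: the second view maps 40 px right and 10 px down.
H = np.array([[1.0, 0.0, 40.0],
              [0.0, 1.0, 10.0],
              [0.0, 0.0,  1.0]])

# An object hidden in the first view but seen at (100, 80) in the second
# lands here in the composite:
print(warp_point(H, (100.0, 80.0)))   # -> (140.0, 90.0)
```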

Proceedings ArticleDOI
01 Dec 1995
TL;DR: A new method is presented, as an example of reality awareness, to retrieve electronic documents with real objects such as paper documents or folders, so that the user can unify the arrangement of electronic documents into the arrangement of real objects.
Abstract: We are developing a computerized desk which we have named InteractiveDESK [1]. One of the major features of the InteractiveDESK is reality awareness; that is, the ability to respond to situational changes in the real world in order to reduce users' workloads. In this paper, we present a new method, as an example of reality awareness, to retrieve electronic documents with real objects such as paper documents or folders. Users of the InteractiveDESK can retrieve electronic documents by just showing real objects which have links to the electronic documents. The links are made by the users through interactions with the InteractiveDESK. The advantage of this method is that the user can unify the arrangement of electronic documents into the arrangement of real objects.
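The linking mechanism reduces to a user-editable mapping from recognized real-object IDs to electronic documents; a minimal sketch with invented IDs and file paths:

```python
# Object IDs and paths are illustrative, not from the paper.
links: dict[str, list[str]] = {}

def make_link(object_id: str, document: str) -> None:
    """User binds a recognized real object to an electronic document."""
    links.setdefault(object_id, []).append(document)

def show_object(object_id: str) -> list[str]:
    """Called when the desk's camera recognizes an object: open its links."""
    return links.get(object_id, [])

make_link("folder-A", "/docs/budget-1995.doc")
make_link("folder-A", "/docs/budget-notes.txt")
print(show_object("folder-A"))   # both documents retrieved together
```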

Proceedings ArticleDOI
Wendy E. Mackay1, D. S. Pagani1, L. Faber1, B. Inwood1, P. Launiainen1, L. Brenta1, V. Pouzol1 
07 May 1995
TL;DR: The design of Ariel is based on studies of users in a distributed cooperative work setting combined with a scenario-based design approach in which users contribute to the development of design scenarios.
Abstract: Ariel is an example of a new approach to user interfaces called Augmented Reality (see Wellner et al., 1993, Mackay et al., 1993). The goal is to allow users to continue to use the ordinary, everyday objects they encounter in their daily work, and then to enhance or augment them with functionality from the computer. Ariel is designed to augment the use of a particular type of paper document: engineering drawings. Computer information (menus, multimedia annotations, access to a media space) is projected onto a drawing and users can interact with both the projected information and the paper drawing. The design of Ariel is based on studies of users in a distributed cooperative work setting (the construction of a bridge) combined with a scenario-based design approach in which users contribute to the development of design scenarios. This video shows the third Ariel prototype. Future versions will continue to evolve, based on input from users when the system is installed at the work site.

Journal ArticleDOI
TL;DR: An overview of the early stages of three related research projects whose goals are to exploit augmented reality, virtual worlds, and artificial intelligence to explore relationships between perceived architectural space and the structural systems that support it is provided.
Abstract: We provide an overview of the early stages of three related research projects whose goals are to exploit augmented reality, virtual worlds, and artificial intelligence to explore relationships between perceived architectural space and the structural systems that support it. In one project, we use a see-through head-mounted display to overlay a graphic representation of a building's structural systems on the user's view of a room within the building. This overlaid virtual world shows the outlines of the concrete joists, beams, and columns surrounding the room, as well as the reinforcing steel inside them, and includes displays from a commercially available structural analysis program. In a related project, the structural view is exposed by varying the opacity of room finishes and concrete in a 3D model of the room and surrounding structure rendered on a conventional CRT. We also describe a hypermedia database, currently under construction, depicting major twentieth-century American buildings. The interactive, multidisciplinary elements of the database (including structural and thermal analyses, free body diagrams which show how forces are resisted by portions of a structure under various loading conditions, facsimiles of construction documents, and critical essays) are bound together and made available over the World-Wide Web. Finally, we discuss the relationships among all these projects, and their potential applications to teaching architecture students and to construction, assembly, and repair of complex structures.

Posted Content
TL;DR: The Ubiquitous Talker introduces robust natural language processing into a system of augmented reality with situation awareness; the user feels as if he/she is talking with the recognized object itself through the system.
Abstract: Augmented reality is a research area that tries to embody an electronic information space within the real world, through computational devices. A crucial issue within this area, is the recognition of real world objects or situations. In natural language processing, it is much easier to determine interpretations of utterances, even if they are ill-formed, when the context or situation is fixed. We therefore introduce robust, natural language processing into a system of augmented reality with situation awareness. Based on this idea, we have developed a portable system, called the Ubiquitous Talker. This consists of an LCD display that reflects the scene at which a user is looking as if it is a transparent glass, a CCD camera for recognizing real world objects with color-bar ID codes, a microphone for recognizing a human voice and a speaker which outputs a synthesized voice. The Ubiquitous Talker provides its user with some information related to a recognized object, by using the display and voice. It also accepts requests or questions as voice inputs. The user feels as if he/she is talking with the object itself through the system.

Journal ArticleDOI
TL;DR: The authors believe that virtual environments should convey the widest possible range of perceptual and symbolic information and discuss virtual reality as a user interface with two virtual athletic venues.
Abstract: The problem of integrating symbolic and perceptual information becomes more difficult in a fully immersive virtual reality, in which the computer must draw and render the whole visible apparatus in real time. The authors believe that virtual environments should convey the widest possible range of perceptual and symbolic information. They discuss virtual reality as a user interface. To explore the possibilities they create two virtual athletic venues.

Journal ArticleDOI
TL;DR: It is shown that in many clinical cases, registration is only possible through the use of internal patient structures, and the state of the art in this domain is presented.

Journal ArticleDOI
TL;DR: In virtual reality the user becomes part of an interactive world of data objects created by a computer, which may be used to recreate images of the real world or to construct a new and totally imaginary world.
Abstract: pioneering company in virtual reality development. Other terminology used to describe this technology includes artificial reality, virtual environments, cyberspace, visually coupled systems, telepresence and virtual presence. The multiplicity of different terms to describe this concept suggests that a single concise definition is difficult to obtain. Definitions which have been proposed include: virtual reality is an advanced human–computer interface that simulates a realistic environment and allows participants to interact with it [13]; virtual environments are three-dimensional, computer-generated worlds which accurately model and simulate an actual environment, whether it be a physical structure or an aggregation of different types of data [7]; and cyberspace—a graphic representation of data abstracted from the banks of every computer in the human system [10]. In essence, virtual reality is simply a more imaginative way of providing a human–computer interface than the standard keyboard, mouse and screen system with which we are familiar (fig. 1). Virtual reality systems have been developed using high definition monitor screens with specialized tools for manipulating screen images, and the more popularly familiar fully immersive helmet-based systems seen in some games arcades. In virtual reality the user becomes part of an interactive world of data objects created by a computer. These objects may be used to recreate images of the real world or to construct a new and totally imaginary world. Objects may represent physical structures or such

Journal Article
TL;DR: The presented method for the first time links preoperatively acquired radiologic data, planned implant location and intraoperative navigation assistance for orthotopic positioning of dental implants with an image-to-tissue interface.
Abstract: In this paper, a recently developed computer-based dental implant positioning system with an image-to-tissue interface is presented. On a computer monitor or in a head-up display, planned implant positions and the implant drill are graphically superimposed on the patient's anatomy. Electromagnetic 3D sensors track all skull and jaw movements; their signal feedback to the workstation induces permanent real-time updating of the virtual graphics' position. An experimental study and a clinical case demonstrate the concept of the augmented reality environment: the physician can see the operating field and superimposed virtual structures, such as dental implants and surgical instruments, without losing visual control of the operating field. Therefore, the operation system allows visualization of the CT-planned implant position and the implementation of important anatomical structures. The presented method for the first time links preoperatively acquired radiologic data, planned implant location and intraoperative navigation assistance for orthotopic positioning of dental implants.

Book
01 Jun 1995
TL;DR: Chapters include Y. Wu, D. Thalmann, and N.M. Thalmann on deformable surfaces using physically based particle systems, and M. Margaliot and C. Gotsman on approximation of smooth surfaces and adaptive sampling by piecewise-linear interpolants.
Abstract: Techniques in Modeling Virtual Environments: N. Tsingos, E. Bittar, and M.P. Gascuel, Implicit Surfaces for Semi-Automatic Medical Organs Reconstruction; M. Margaliot and C. Gotsman, Approximation of Smooth Surfaces and Adaptive Sampling by Piecewise-Linear Interpolants; G. Elber, Metamorphosis of Free-Form Curves and Surfaces. Rendering Virtual Environments: G. Jay and D. McRobie, Pixel-Independent Ray Tracing; S.N. Pattanaik and K. Bouatouch, Interactive Walk Through Using Particle Tracing; D. Stuttard, A. Worrall, D. Paddon, and C. Willis, A Radiosity System for Real-Time Photo-Realism; S. Merzouk, B. Salque, and J.C. Paul, A Domain Decomposition Method for Radiosity; D. Tadamura and E. Nakamae, Modeling the Colour of Water in Lighting Design; C. Chevrier, S. Belblidia, and J.C. Paul, Compositing Computer-Generated Images and Video Films: An Application for Visual Assessment in Urban Environments; M. Inakage, Volume-Tracing Mirage Effects; A. Takaghi, T. Ozeki, T. Oshima, Y. Ogata, and S. Minaso, A Colour Printing System Enabling Faithful Reproduction of the Desired Colour: Widened Colour Gamut Realized by a Multiple Ink Method with the Aid of an Inverse Problem Solution. Animating and Visualizing Virtual Environments: M. Melkemi and L. Melkemi, An Algorithm for Detecting Dot Patterns; K.W. Nam and M.S. Kim, Hermite Interpolation of Solid Orientations Based on Smooth Blending of Two Great Circular Arcs on SO(3); R. Turner, Leman: A System for Constructing and Animating Layered Elastic Characters; Y. Wu, D. Thalmann, and N.M. Thalmann, Deformable Surfaces Using Physically Based Particle Systems; C. Beardon and V. Ye, Using Behavioural Rules in Animation; Z. Huang, R. Boulic, N.M. Thalmann, and D. Thalmann, A Multi-Sensor Approach for Grasping and 3D Interaction; M. Cooper and I. Benjamin, Actors, Performance, and Drama in Virtual Worlds; S. Takai, T. Yasuda, S. Yokoi, and J. Toriwaki, A Method of Expression of Space Phenomena with CG Animation: The Collision of the Comet Shoemaker-Levy 9 with Jupiter in July 1994; N.M. Thalmann, I.S. Pandzic, and J.C. Moussaly, The Making of the Xian Terra-Cotta Soldiers; F. Van Reeth, K. Coninx, and E. Flerackers, A Modular System for Manipulating Image Sequences; E.V. Anoshkina, A.G. Belyaev, R. Huang, and T.L. Kunji, Ridges and Ravines on a Surface and Related Geometry of Skeletons, Caustics, and Wavefronts; S.T. Tuohy, J.W. Yoon, and N.M. Patrikalakis, Reliable Interrogation of 3D Non-linear and Geophysical Databases; M. Jern, Visualization Market Trends: An Industrial Briefing. Virtual Reality Systems: E. Rose, D. Breen, K. Ahlers, C. Crampton, M. Tuceryan, R. Whitaker, and D. Greer, Annotating Real-World Objects Using Augmented Reality; P.J. Erard, C. Fuhrer, and L. Iff, Virtual Sensors and Mobile Robotics; S. Miyazaki, M. Ishiguro, T. Yasuda, S. Yokoi, and J. Toriwaki, A Study of Virtual Manipulation of Elastic Objects; A. Johnson and F. Fotouhi, Data Retrieval through

Book ChapterDOI
01 Aug 1995
TL;DR: This paper describes experiments with techniques for watching the hands and recognizing gestures and describes an “augmented reality” that makes it possible for a human to use any convenient object, including fingers, as digital input devices.
Abstract: Computer vision provides a powerful tool for the interaction between man and machine. The barrier between physical objects (paper, pencils, calculators) and their electronic counterparts limits both the integration of computing into human tasks, and the population willing to adapt to the required input devices. Computer vision, coupled with video projection using low cost devices, makes it possible for a human to use any convenient object, including fingers, as digital input devices. In such an “augmented reality”, information is projected onto ordinary objects and acquired by watching the way objects are manipulated. In the first part of this paper we describe experiments with techniques for watching the hands and recognizing gestures.

Journal ArticleDOI
TL;DR: This work describes the design and implementation of a simple, reliable alignment or calibration process so that computer models can be accurately registered with their real‐life counterparts and shows how it can be used to create convincing interactions between real and virtual objects.
Abstract: Augmented reality involves the use of models and their associated renderings to supplement information in a real scene. In order for this information to be relevant or meaningful, the models must be positioned and displayed in such a way that they align with their corresponding real objects. For practical reasons this alignment cannot be known a priori, and cannot be hard-wired into a system. Instead a simple, reliable alignment or calibration process is performed so that computer models can be accurately registered with their real-life counterparts. We describe the design and implementation of such a process and we show how it can be used to create convincing interactions between real and virtual objects.

Book ChapterDOI
03 Apr 1995
TL;DR: Augmented Reality, a technology that allows the overlay of graphically generated objects on real scenes, is proposed as a means of visualization for such an application as radiographic positioning instruction for dynamic three-dimensional anatomy teaching.
Abstract: A Virtual-Reality-based tool for teaching dynamic three-dimensional anatomy may impart better understanding of bone dynamics during body movement. One application of this teaching tool is radiographic positioning instruction. We propose Augmented Reality, a technology that allows the overlay of graphically generated objects (bones in this case) on real scenes (the body in this case), as a means of visualization for such an application. In this paper we describe the design and the three stages of development of a prototype unit which demonstrates elbow movement. Preliminary results, and problems encountered while developing the first stage, are also presented.

Proceedings Article
20 Aug 1995
TL;DR: The Ubiquitous Talker, as discussed by the authors, is a portable system that consists of an LCD display that reflects the scene at which a user is looking as if it were transparent glass, a CCD camera for recognizing real world objects with color-bar ID codes, a microphone for recognizing a human voice, and a speaker which outputs a synthesized voice.
Abstract: Augmented reality is a research area that tries to embody an electronic information space within the real world through computational devices. A crucial issue within this area is the recognition of real world objects or situations. In natural language processing, it is much easier to determine interpretations of utterances, even if they are ill-formed, when the context or situation is fixed. We therefore introduce robust natural language processing into a system of augmented reality with situation awareness. Based on this idea we have developed a portable system called the Ubiquitous Talker. This consists of an LCD display that reflects the scene at which a user is looking as if it is a transparent glass, a CCD camera for recognizing real world objects with color-bar ID codes, a microphone for recognizing a human voice, and a speaker which outputs a synthesized voice. The Ubiquitous Talker provides its user with some information related to a recognized object by using the display and voice. It also accepts requests or questions as voice inputs. The user feels as if he/she is talking with the object itself through the system.

Patent
05 Sep 1995
TL;DR: In this article, an enhanced reality maintenance system operates one or more remotely operated vehicles (ROVs) in a hazardous or inaccessible environment, and a computer model of the environment is created from spatial parameters provided to the system.
Abstract: An enhanced reality maintenance system operates one or more remotely operated vehicles (ROVs) in a hazardous or inaccessible environment. A computer model of the environment is created from spatial parameters provided to the system. Positions and orientations of moving objects are monitored. The projected course of the moving objects is extrapolated and constantly updated. An automated flight planner receives desired destinations from an operator, analyzes the environment, the projected courses of moving objects, and the planned trajectories of other ROVs, and selects a planned trajectory of a selected ROV through the environment without collision.
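The course extrapolation can be sketched as constant-velocity prediction of each monitored object, tested against a timed, planned trajectory with a clearance check; all values below are illustrative.

```python
import numpy as np

def extrapolate(pos: np.ndarray, vel: np.ndarray, t: float) -> np.ndarray:
    """Constant-velocity projection of an object's position at time t."""
    return pos + vel * t

def path_is_clear(waypoints, times, obj_pos, obj_vel, clearance=1.0) -> bool:
    """Check each timed waypoint against the object's projected course."""
    for wp, t in zip(waypoints, times):
        if np.linalg.norm(wp - extrapolate(obj_pos, obj_vel, t)) < clearance:
            return False
    return True

waypoints = [np.array([0.0, 0.0, 0.0]), np.array([5.0, 0.0, 0.0])]
times = [0.0, 10.0]
print(path_is_clear(waypoints, times,
                    obj_pos=np.array([5.0, 3.0, 0.0]),
                    obj_vel=np.array([0.0, -0.3, 0.0])))
# -> False: the object reaches (5, 0, 0) at t = 10, violating clearance,
# so the planner would select a different trajectory.
```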