
Showing papers on "Augmented reality" published in 1993



Proceedings ArticleDOI
01 Dec 1993
TL;DR: A prototype heads-up window system runs a full X server on a see-through head-mounted display, overlaying a selected portion of the X bitmap on the user's view of the world to create an X-based augmented reality; a small hypermedia system demonstrates links between windows and windows attached to physical objects.
Abstract: We describe the design and implementation of a prototype heads-up window system intended for use in a 3D environment. Our system includes a see-through head-mounted display that runs a full X server whose image is overlaid on the user's view of the physical world. The user's head is tracked so that the display indexes into a large X bitmap, effectively placing the user inside a display space that is mapped onto part of a surrounding virtual sphere. By tracking the user's body, and interpreting head motion relative to it, we create a portable information surround that envelops the user as they move about. We support three kinds of windows implemented on top of the X server: windows fixed to the head-mounted display, windows fixed to the information surround, and windows fixed to locations and objects in the 3D world. Objects can also be tracked, allowing windows to move with them. To demonstrate the utility of this model, we describe a small hypermedia system that allows links to be made between windows and windows to be attached to objects. Thus, our hypermedia system can forge links between any combination of physical objects and virtual windows.

INTRODUCTION: When we think of the use of head-mounted displays and 3D interaction devices to present virtual worlds, it is often in terms of environments populated solely by 3D objects. There are many situations, however, in which 2D text and graphics of the sort supported by current window systems can be useful components of these environments. This is especially true in the case of the many applications that run under an industry standard window system such as X [13]. While we might imagine porting or enhancing a significant X application to take advantage of the 3D capabilities of a virtual world, the effort and cost may not be worth the return, especially if the application is inherently 2D. Therefore, we have been exploring how we can incorporate an existing 2D window system within a 3D virtual world. We are building an experimental system that supports a full X11 server on a see-through head-mounted display. Our display overlays a selected portion of the X bitmap on the user's view of the world, creating an X-based augmented reality. Depending on the situation and application, the user may wish to treat a window as a stand-alone entity or to take advantage of the potential relationships that can be made between it and the visible physical world. To make this possible, we have developed facilities that allow X …
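
As a rough illustration of the mapping this abstract describes, here is a minimal Python sketch of how tracked head orientation relative to the body could index into a large X bitmap wrapped onto part of a surrounding virtual sphere; the bitmap and surround dimensions are assumed values, not taken from the paper.

    # Hypothetical sketch (not from the paper): map tracked head yaw/pitch,
    # measured relative to the user's body, to a pixel offset into a large X
    # bitmap that has been wrapped onto part of a surrounding virtual sphere.
    # The HMD then shows the sub-rectangle of the bitmap at that offset.

    BITMAP_W, BITMAP_H = 8192, 2048      # assumed size of the full X bitmap
    SURROUND_YAW_DEG = 270.0             # assumed horizontal extent of the surround
    SURROUND_PITCH_DEG = 90.0            # assumed vertical extent of the surround

    def surround_offset(yaw_deg, pitch_deg):
        """Return the top-left pixel of the displayed region for a head pose.

        yaw_deg, pitch_deg: head orientation relative to the body, in degrees,
        with (0, 0) at the centre of the information surround.
        """
        u = (yaw_deg + SURROUND_YAW_DEG / 2) / SURROUND_YAW_DEG
        v = (pitch_deg + SURROUND_PITCH_DEG / 2) / SURROUND_PITCH_DEG
        u = min(max(u, 0.0), 1.0)        # clamp to the surround's extent
        v = min(max(v, 0.0), 1.0)
        return int(u * BITMAP_W), int((1.0 - v) * BITMAP_H)

    # Example: looking 30 degrees right of and 10 degrees above body-forward.
    print(surround_offset(30.0, 10.0))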

319 citations


Journal ArticleDOI
TL;DR: In this issue, Fitzmaurice and Feiner describe two different augmented-reality systems that require highly capable head and object trackers to create an effective illusion of virtual objects coexisting with the real world.
Abstract: In this issue, Fitzmaurice and Feiner describe two different augmented-reality systems. Such systems require highly capable head and object trackers to create an effective illusion of virtual objects coexisting with the real world. For ordinary virtual environments that completely replace the real world with a virtual world, it suffices to know the approximate position and orientation of the user's head. Small errors are not easily discernible because the user's visual sense tends to override the conflicting signals from his or her vestibular and proprioceptive systems. But in augmented reality, virtual objects supplement rather than supplant the real world. Preserving the illusion that the two coexist requires proper alignment and registration of the virtual objects to the real world. Even tiny errors in registration are easily detectable by the human visual system. What does augmented reality require from trackers to avoid such errors? First, a tracker must be accurate to a small fraction of a degree in orientation and a few millimeters (mm) in position. [Figure 1: Conceptual drawing of sensors viewing beacons in the ceiling.] Errors in measured head orientation usually cause larger registration offsets than object orientation errors do, making this requirement more critical for systems based on Head-Mounted Displays (HMDs). Try the following simple demonstration. Take out a dime and hold it at arm's length. The diameter of the dime covers approximately 1.5 degrees of arc. In comparison, a full moon covers 1/2 degree of arc. Now imagine a virtual coffee cup sitting on the corner of a real table two meters away from you. An angular error of 1.5 degrees in head orientation moves the cup by about 52 mm. Clearly, small orientation errors could result in a cup suspended in midair or interpenetrating the table. Similarly, if we want the cup to stay within 1 to 2 mm of its true position, then we cannot tolerate tracker positional errors of more than 1 to 2 mm. Second, the combined latency of the tracker and the graphics engine must be very low. Combined latency is the delay from the time the tracker subsystem takes its measurements to the time the corresponding images appear in the display devices. Many HMD-based systems have a combined latency over 100 ms. At a moderate head or object rotation rate of 50 degrees per second, 100 milliseconds (ms) of latency causes 5 degrees of angular error. At a rapid rate …
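
The figures in this abstract can be checked with a short calculation; the following Python sketch (an illustration only, not code from the article) reproduces the 52 mm coffee-cup offset and the 5-degree latency error.

    import math

    # Back-of-the-envelope check of the figures in the abstract (a sketch, not
    # code from the article): the registration offset caused by an orientation
    # error, and the angular error caused by end-to-end latency during rotation.

    def registration_offset_mm(angular_error_deg, distance_m):
        """Lateral offset of a virtual object at the given viewing distance."""
        return math.tan(math.radians(angular_error_deg)) * distance_m * 1000.0

    def latency_error_deg(latency_ms, rotation_rate_deg_per_s):
        """Angular error accumulated while the displayed image is latency_ms old."""
        return rotation_rate_deg_per_s * latency_ms / 1000.0

    print(registration_offset_mm(1.5, 2.0))  # ~52 mm: the coffee-cup example
    print(latency_error_deg(100, 50))        # 5 degrees at 50 deg/s, 100 ms latency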

309 citations


Patent
10 Sep 1993
TL;DR: In this paper, the authors present a vision system including devices and methods of augmented reality wherein an image of some real scene is altered by a computer processor to include information from a data base having stored information of that scene in a storage location that is identified by the real time position and attitude of the vision system.
Abstract: The present invention is generally concerned with electronic vision devices and methods, and is specifically concerned with image augmentation in combination with navigation, position, and attitude devices. In the simplest form, devices of the invention can be envisioned to include six major components: A 1) camera to collect optical information about a real scene and present that information as an electronic signal to; a 2) computer processor; a 3) device to measure the position of the camera; and a 4) device to measure the attitude of the camera (direction of the optic axis), thus uniquely identifying the scene being viewed, and thus identifying a location in; a 5) data base where information associated with various scenes is stored, the computer processor combines the data from the camera and the data base and perfects a single image to be presented at; a 6) display whose image is continuously aligned to the real scene as it is viewed by the user. The present invention is a vision system including devices and methods of augmented reality wherein an image of some real scene is altered by a computer processor to include information from a data base having stored information of that scene in a storage location that is identified by the real time position and attitude of the vision system. It is a primary function of the vision system of the invention, and a contrast to the prior art, to present augmented real images and data that is continuously aligned with the real scene as that scene is naturally viewed by the user of the vision system. An augmented image is one that represents a real scene but has deletions, additions and supplements.
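
To make the six-component pipeline easier to follow, here is a minimal Python sketch of one iteration of it; every class, field, and record format below is a hypothetical stand-in for illustration, not anything specified in the patent.

    from dataclasses import dataclass

    # A minimal sketch of the six-component pipeline enumerated in the abstract
    # (camera, processor, position sensor, attitude sensor, database, display).
    # Every class, field, and record format here is a hypothetical stand-in,
    # not the patent's implementation.

    @dataclass
    class Pose:
        lat: float
        lon: float
        heading_deg: float
        pitch_deg: float

    class SceneDatabase:
        """Maps a viewing pose to annotations stored for that scene."""
        def __init__(self, records):
            self.records = records  # list of (Pose, annotation text) pairs

        def lookup(self, pose, heading_tol_deg=10.0):
            # Simplified: matches on heading only; a real system would use
            # position and attitude together to identify the scene.
            return [text for stored, text in self.records
                    if abs(stored.heading_deg - pose.heading_deg) <= heading_tol_deg]

    def augment(frame, annotations):
        """Combine the camera frame and database annotations into one image."""
        return {"frame": frame, "overlays": annotations}

    # One iteration: sense pose, identify the scene, overlay, hand to the display.
    db = SceneDatabase([(Pose(37.0, -122.0, 90.0, 0.0), "Building A")])
    pose = Pose(37.0, -122.0, 92.0, 1.0)  # from the position/attitude devices
    print(augment("camera frame", db.lookup(pose)))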

270 citations


Patent
29 Sep 1993
TL;DR: In this paper, a virtual reality generator (4) having an input module (8) that receives financial information as input is disclosed; it outputs to a display device (not shown) a virtual reality world generated from the financial information.
Abstract: A virtual reality generator (4) having an input module (8) that receives as input financial information is disclosed. The virtual reality generator (4) outputs to a display device (not shown) a virtual reality world generated from the financial information. The financial information can be pre-processed by a financial analytic system (not shown) prior to input to the virtual reality generator (4). The financial information can be received from a data file. The virtual reality generator (4) can dynamically display and continuously update the virtual reality world. Further, movement through the virtual reality world can be simulated.

224 citations


Proceedings ArticleDOI
18 Sep 1993
TL;DR: The authors have developed "augmented reality" technology, consisting of a see-through head-mounted display, a robust, accurate position/orientation sensor, and their supporting electronics and software, enabling a factory worker to view index markings or instructions as if they were painted on the surface of a workpiece.
Abstract: The authors have developed "augmented reality" technology, consisting of a see-through head-mounted display, a robust, accurate position/orientation sensor, and their supporting electronics and software. Their primary goal is to apply this technology to touch labor manufacturing processes, enabling a factory worker to view index markings or instructions as if they were painted on the surface of a workpiece. In order to accurately project graphics onto specific points of a workpiece, it is necessary to have the coordinates of the workpiece, the display's virtual screen, the position sensor, and the user's eyes in the same coordinate system. The linear transformation and projection of each point to be displayed from world coordinates to virtual screen coordinates are described, and the experimental procedures for determining the correct values of the calibration parameters are characterized.
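
A generic version of the world-to-screen transformation the abstract mentions can be sketched as follows; the pinhole-style projection, parameter names, and example values are assumptions for illustration, not the paper's actual calibration model.

    # Generic world-to-screen sketch: a world point is transformed into the
    # virtual screen's coordinate frame and projected through the eye point.
    # The pinhole-style model and the example values are assumptions for
    # illustration, not the paper's calibration parameters.

    def transform(point, rotation, translation):
        """Apply a 3x3 rotation (row-major nested lists) plus a translation."""
        x, y, z = point
        return tuple(rotation[i][0] * x + rotation[i][1] * y + rotation[i][2] * z
                     + translation[i] for i in range(3))

    def project_to_screen(point_screen_frame, eye_to_screen_dist):
        """Perspective projection onto the screen plane at z = eye_to_screen_dist."""
        x, y, z = point_screen_frame
        if z <= 0:
            return None  # behind the eye; nothing to draw
        scale = eye_to_screen_dist / z
        return (x * scale, y * scale)

    # Example: identity rotation, a point 2 m ahead, assumed 0.07 m eye-to-screen.
    identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    p = transform((0.1, 0.05, 2.0), identity, (0.0, 0.0, 0.0))
    print(project_to_screen(p, 0.07))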

213 citations


Proceedings ArticleDOI
26 Jul 1993
TL;DR: The director/agent (D/A) metaphor of telerobotic interaction is discussed as a potential means for achieving human-robot synergy and the medium of augmented reality through overlaid virtual stereographics is proposed, leading to what is referred to as virtual control.
Abstract: The director/agent (D/A) metaphor of telerobotic interaction is discussed as a potential means for achieving human-robot synergy. In order for the human operator to communicate spatial information to the robot during D/A operations, the medium of augmented reality through overlaid virtual stereographics is proposed, leading to what is referred to as virtual control. An overview is given of the ARGOS (Augmented Reality through Graphic Overlays on Stereovideo) system. In particular, the uses of overlaid virtual pointers for enhancing absolute depth judgement tasks, virtual tape measures for real-world quantification, virtual tethers for perceptual enhancement in manual teleoperation, virtual landmarks for enhancing depth scaling, and virtual object overlays for on-object edge enhancement and display superposition are all presented and discussed.
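
Two of the overlay tools listed above can be illustrated with a short Python sketch; the stereo-camera parameters below are assumed values, not ones reported for ARGOS.

    import math

    # Sketch of two of the overlay tools mentioned above: a virtual pointer
    # placed at a chosen depth in a stereo pair, and a virtual tape measure
    # between two picked 3D points. The camera parameters are assumed values,
    # not ones reported for ARGOS.

    BASELINE_M = 0.1    # assumed separation of the stereo cameras
    FOCAL_PX = 800.0    # assumed focal length in pixels

    def pointer_disparity_px(depth_m):
        """Horizontal disparity that makes an overlaid pointer appear at depth_m."""
        return BASELINE_M * FOCAL_PX / depth_m

    def tape_measure_m(p1, p2):
        """Euclidean distance between two points picked in the 3D scene."""
        return math.dist(p1, p2)

    print(pointer_disparity_px(2.0))                   # disparity for a pointer 2 m away
    print(tape_measure_m((0, 0, 2.0), (0.5, 0, 2.0)))  # 0.5 m between the two picks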

180 citations


Book
01 Jan 1993
TL;DR: An edited collection introducing virtual reality definitions, history, applications, and enabling technologies, with further parts on systems, interaction techniques, application domains, theory and modelling, and ethical and societal implications.
Abstract: Part 1 Introduction: virtual reality - definitions, history, applications, M.A. Gigante virtual reality - enabling technologies, M.A. Gigante. Part 2 Systems: supervision - a parallel architecture for virtual reality, C. Grimsdale virtual reality products, T.W. Rowley a computational model for the stereoscopic optics of a head-mounted display, W. Robinett and J.P. Rolland a comprehensive virtual reality environment laboratory facility, R.S. Kalawsky. Part 3 Techniques: gesture driven interaction as a human factor in virtual environments - an approach with neural networks, K. Vaananen and K. Bohm using physical constraints in a virtual environment, M.J. Papper and M.A. Gigante device synchronization using an optimal linear filter, M. Friedmann, et al. Part 4 Applications: virtual reality techniques in flight simulation, J.A. Vince using virtual reality techniques in the animation process, D. Thalmann dynamic fisheye information visualization, K.M. Fairchild, et al virtual reality - a tool for telepresence and human factors research, R.J. Stone critical aspects of visually coupled systems, R.S. Kalawsky AVIARY - a generic virtual reality interface for real applications, A.J. West, et al using gestures to control a virtual arm, M.J. Papper and M.A. Gigante. Part 5 Theory and modelling: toward three-dimensional graphical models of reality, P. Quarendon. Part 6 Ethics and societal implications: back to the cave - cultural perspectives on virtual reality to the treatment of mental disorders, L.J. Whalley. Appendix: virtual reality software systems, C. McNaughton.

129 citations


Proceedings ArticleDOI
01 May 1993
TL;DR: This video describes the development of the ARGOS (Augmented Reality through Graphic Overlays on Stereovideo) system, as a tool for enhancing human-telerobot interaction and as a more general tool with applications in a variety of areas, including image enhancement, simulation, sensor fusion, and virtual reality.
Abstract: This video describes the development of the ARGOS (Augmented Reality through Graphic Overlays on Stereovideo) system, as a tool for enhancing human-telerobot interaction, and as a more general tool with applications in a variety of areas, including image enhancement, simulation, sensor fusion, and virtual reality.

51 citations


Proceedings ArticleDOI
M.F. Deering1
18 Sep 1993
TL;DR: Three alternatives to the traditional head mounted virtual reality display are described, one of which is the Virtual Holographic Workstation, an external stereo CRT viewed by a user wearing head tracking stereo shutter glasses.
Abstract: Three alternatives to the traditional head-mounted virtual reality display are described. One configuration is the Virtual Holographic Workstation, an external stereo CRT viewed by a user wearing head-tracking stereo shutter glasses. Another is the Computer Augmented Reality Camcorder, where virtual objects are composited onto live video using six-axis tracking information about the location of the video camera. The third system is the Virtual Portal, where an entire room is turned into a high-resolution inclusive display by replacing three walls with floor-to-ceiling rear-projection stereo displays. Details of the systems and experiences and limitations in their use are discussed.

34 citations


Proceedings ArticleDOI
J.-H. Youn1, Kwangyun Wohn1
18 Sep 1993
TL;DR: A collision detection scheme for virtual reality applications is proposed that exploits a hierarchical object representation to facilitate the detection of colliding segments.
Abstract: Virtual reality technology aims at the expansion of the communication bandwidth by providing users with 3D immersive environments. For true direct manipulation of these environments, fast collision detection must be provided to increase the sense of reality. A collision detection scheme for virtual reality applications is proposed. The method exploits a hierarchical object representation to facilitate the detection of colliding segments.
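
The hierarchical scheme can be illustrated with a small bounding-sphere tree; the structure and traversal below are a generic Python sketch, not the paper's specific representation or refinement strategy.

    # Generic sketch of collision detection over a hierarchical object
    # representation (a bounding-sphere tree here; the paper's actual hierarchy
    # is not reproduced). Only subtrees whose bounding volumes overlap are
    # examined further, so most segment pairs are rejected early.

    class Node:
        def __init__(self, center, radius, children=None):
            self.center = center            # (x, y, z) of the bounding sphere
            self.radius = radius
            self.children = children or []  # empty list => leaf segment

    def spheres_overlap(a, b):
        dx, dy, dz = (a.center[i] - b.center[i] for i in range(3))
        return dx * dx + dy * dy + dz * dz <= (a.radius + b.radius) ** 2

    def colliding_leaf_pairs(a, b):
        """Return pairs of leaf segments whose bounding spheres overlap."""
        if not spheres_overlap(a, b):
            return []
        if not a.children and not b.children:
            return [(a, b)]
        if a.children:
            return [p for c in a.children for p in colliding_leaf_pairs(c, b)]
        return [p for c in b.children for p in colliding_leaf_pairs(a, c)]

    leaf1 = Node((0.0, 0.0, 0.0), 0.1)
    leaf2 = Node((0.15, 0.0, 0.0), 0.1)
    root1 = Node((0.0, 0.0, 0.0), 0.5, [leaf1])
    root2 = Node((0.1, 0.0, 0.0), 0.5, [leaf2])
    print(len(colliding_leaf_pairs(root1, root2)))  # 1 colliding segment pair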


Journal ArticleDOI
TL;DR: A VR system's head-mounted display, stereo graphics, and direct manipulation in three dimensions are outlined, and research directions and potential applications of VR are discussed.
Abstract: Virtual reality (VR) presents a synthetically generated environment to the user through visual, auditory, and other stimuli. A VR system's head-mounted display, stereo graphics, and direct manipulation in three dimensions are outlined. Early work in VR and major technological hurdles in the areas of tracking, display, image generation, and software support are reviewed. Research directions and potential applications of VR are discussed.

Proceedings ArticleDOI
03 Nov 1993
TL;DR: This paper describes a head mounted display (HMD) system which can present realistic visual images of the environment by superimposing small, high resolution images on wide, low resolution images.
Abstract: This paper describes a head-mounted display (HMD) system which can present realistic visual images of the environment. There are several types of commercial HMDs for virtual reality systems. One problem with these systems is that the screen size is kept small in order to keep the image resolution high; as a result, the screen frame and displayed images overlap and impair the sense of reality. To improve on this, a new type of HMD is proposed. The idea is based on superimposing small, high-resolution images on wide, low-resolution images. The high-resolution images are presented only near the center of the human retina, according to the detected motion of the eyeball. The design of the optical system and image control system is explained, and some preliminary evaluation experiments using a prototype system are presented to demonstrate the effectiveness of the proposed idea.
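
The placement logic implied by the abstract can be sketched as follows; the field-of-view and resolution figures are assumed values, not the prototype's specifications.

    # Sketch of the placement logic implied by the abstract: the detected gaze
    # direction decides where, inside the wide low-resolution field, the small
    # high-resolution image is superimposed. Field sizes and resolutions are
    # assumed values, not the prototype's specifications.

    WIDE_FOV_DEG = (100.0, 60.0)   # assumed field of view of the wide display
    WIDE_RES_PX = (1280, 800)      # assumed resolution of the wide display
    INSET_FOV_DEG = (20.0, 15.0)   # assumed field covered by the high-res inset

    def inset_top_left(gaze_yaw_deg, gaze_pitch_deg):
        """Top-left pixel of the high-resolution inset, centred on the gaze."""
        px_per_deg_x = WIDE_RES_PX[0] / WIDE_FOV_DEG[0]
        px_per_deg_y = WIDE_RES_PX[1] / WIDE_FOV_DEG[1]
        cx = WIDE_RES_PX[0] / 2 + gaze_yaw_deg * px_per_deg_x
        cy = WIDE_RES_PX[1] / 2 - gaze_pitch_deg * px_per_deg_y
        return (int(cx - INSET_FOV_DEG[0] * px_per_deg_x / 2),
                int(cy - INSET_FOV_DEG[1] * px_per_deg_y / 2))

    print(inset_top_left(10.0, -5.0))  # gaze 10 degrees right, 5 degrees down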

Journal ArticleDOI
TL;DR: The trends of virtual-reality research in Japan are reviewed and a prototype of a plant monitoring system that uses camera images to provide a sense of being there, and experiments on the tactile senses of touch and pressure, are described.
Abstract: The trends of virtual-reality research in Japan are reviewed. Research on the application of virtual reality systems in three-dimensional imaging of software structures, remote control of construction robots, and molecular model design is discussed. A prototype of a plant monitoring system that uses camera images to provide a sense of being there, and experiments on the tactile senses of touch and pressure, are described. The space interface device for artificial reality (SPIDAR) is also discussed.

Proceedings ArticleDOI
K. Zinser1
17 Oct 1993
TL;DR: It is shown how new forms of process visualisation and the integration of different information media can greatly help the operators of complex plants in supervisory control (S&C) and how the well-designed integration of video can lead to an 'augmented reality' that helps to narrow the gap between central control rooms and the actual plant.
Abstract: In this contribution it will be shown how new forms of process visualisation and the integration of different information media can greatly help the operators of complex plants in supervisory control (S&C). The process visualisation aspects encompass process overview as well as detailed and context-specific yet interactive presentation of process information. The multimedia aspects show how the well-designed integration of video in particular can lead to an 'augmented reality' that helps to narrow the gap between central control rooms and the actual plant. An example from the application domain of power plant control illustrates the benefits.

02 Jan 1993
TL;DR: This thesis defines a visual language and a methodology for its use to design and automatically generate 3D graphics that fulfill a specified communicative intent; IBIS was implemented as a proof of concept.
Abstract: The objective of this work is to devise a scheme for effective visual communication. A visual presentation can be designed to satisfy a communicative intent; that is, both to convey what we want to communicate and to elicit, from our audience, a desired action or interpretation. In this thesis we define a visual language and a methodology for its use to both design and automatically generate 3D graphics that fulfill a specified communicative intent. The visual language we have defined is used to capture the full cycle of communication: given specific communicative goals, the system begins to create an illustration by choosing sets of visual effects that, in turn, specify the parameters that define a computer-generated picture. The system then examines the picture to determine if each effect is achieved, and thus evaluates how well each communicative goal is achieved. Our visual language represents an illustration on three levels: what is being conveyed, what visual effects are used, and how these visual effects are achieved. Thus, we separate design, or the set of visual effects selected to satisfy communicative goals, from style, or how each visual effect is accomplished. Computer graphics need not be static; we have exploited the dynamism of the system to support four different types of interactivity: user navigation, changing goals, changing worlds, and self-evaluation. During all phases of interaction, the illustration remains bound to the (possibly changing) communicative intent, modifying the illustration to achieve violated goals. We have implemented IBIS as a proof of concept for our ideas. IBIS generates illustrations that explain the use, maintenance and repair of objects. IBIS generates fully shaded 3D graphics that are displayed on a high-resolution display for a multimedia explanation system, and IBIS also generates overlaid 3D bi-level illustrations that are projected on a see-through head-mounted display and appear over the user's view of the world for an augmented reality system.
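
The generate-and-evaluate cycle described here can be caricatured in a few lines of Python; the goal-to-effect rules and evaluators below are hypothetical placeholders, not IBIS's actual knowledge base.

    # A highly simplified sketch of the generate-and-evaluate cycle described
    # above: communicative goals select visual effects, effects set picture
    # parameters, and the picture is then checked to see whether each effect
    # (and so each goal) is achieved. The rules and evaluators are hypothetical
    # placeholders, not IBIS's actual knowledge base.

    GOAL_TO_EFFECTS = {                 # design level: goal -> candidate effects
        "show_location": ["highlight", "cutaway"],
        "show_state": ["label"],
    }

    EFFECT_TO_PARAMS = {                # style level: effect -> picture parameters
        "highlight": {"object_color": "red"},
        "cutaway": {"occluders_visible": False},
        "label": {"label_visible": True},
    }

    def render(params):
        """Stand-in renderer: the 'picture' is just its parameter set."""
        return dict(params)

    def effect_achieved(effect, picture):
        """Stand-in evaluator: check that the effect's parameters hold in the picture."""
        return all(picture.get(k) == v for k, v in EFFECT_TO_PARAMS[effect].items())

    def illustrate(goals):
        params, chosen = {}, []
        for goal in goals:
            effect = GOAL_TO_EFFECTS[goal][0]   # pick the first applicable effect
            params.update(EFFECT_TO_PARAMS[effect])
            chosen.append((goal, effect))
        picture = render(params)
        return {goal: effect_achieved(effect, picture) for goal, effect in chosen}

    print(illustrate(["show_location", "show_state"]))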