
Showing papers on "Augmented reality published in 2000"


Proceedings ArticleDOI
01 Oct 2000
TL;DR: This work describes an accurate vision-based tracking method for table-top AR environments and tangible user interface (TUI) techniques based on this method that allow users to manipulate virtual objects in a natural and intuitive manner.
Abstract: We address the problems of virtual object interaction and user tracking in a table-top augmented reality (AR) interface. In this setting there is a need for very accurate tracking and registration techniques and an intuitive and useful interface. This is especially true in AR interfaces for supporting face-to-face collaboration, where users need to be able to cooperate with each other easily. We describe an accurate vision-based tracking method for table-top AR environments and tangible user interface (TUI) techniques based on this method that allow users to manipulate virtual objects in a natural and intuitive manner. Our approach is robust, allowing users to cover some of the tracking markers while still returning camera viewpoint information, overcoming one of the limitations of traditional computer-vision-based systems. After describing this technique we describe its use in prototype AR applications.
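
The core loop the abstract describes - detect square fiducials in the camera image, then recover the camera viewpoint from whichever markers remain visible - can be sketched as follows. This is an illustrative reduction, using OpenCV's ArUco module (from the contrib build) as a stand-in for the paper's own markers; camera_matrix, dist_coeffs, and the marker size are assumed inputs from a prior calibration.

```python
import cv2
import numpy as np

MARKER_SIZE = 0.05  # marker edge length in metres (assumed)

def estimate_camera_pose(frame, camera_matrix, dist_coeffs):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None  # no marker visible in this frame
    # 3D corner positions of one marker in its own coordinate frame.
    s = MARKER_SIZE / 2
    object_pts = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]],
                          dtype=np.float32)
    # Any single visible marker yields the camera viewpoint, which is why
    # covering some markers still leaves the pose recoverable.
    ok, rvec, tvec = cv2.solvePnP(object_pts, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```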

733 citations


Proceedings ArticleDOI
01 Apr 2000
TL;DR: Examples of augmented reality applications based on CyberCode are described, and some key characteristics of tagging technologies that must be taken into account when designing augmented reality environments are discussed.
Abstract: CyberCode is a visual tagging system based on 2D-barcode technology that provides several features not found in other tagging systems. CyberCode tags can be recognized by the low-cost CMOS or CCD cameras found in more and more mobile devices, and they can be used to determine the 3D position of the tagged object as well as its ID number. This paper describes examples of augmented reality applications based on CyberCode, and discusses some key characteristics of tagging technologies that must be taken into account when designing augmented reality environments.
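
A hedged sketch of the two outputs the abstract attributes to a tag - an ID number and a 3D pose - given four tag corners detected upstream. The grid size, bit layout, and tag size below are placeholders for illustration, not the actual CyberCode format.

```python
import cv2
import numpy as np

GRID = 8         # assumed code-grid resolution (not the real CyberCode layout)
TAG_SIZE = 0.04  # tag edge length in metres (assumed)

def decode_and_locate(gray, corners, camera_matrix, dist_coeffs):
    # 1) ID: warp the tag to a canonical square and threshold its cells.
    side = GRID * 8
    canon = np.array([[0, 0], [side, 0], [side, side], [0, side]],
                     dtype=np.float32)
    H, _ = cv2.findHomography(corners.astype(np.float32), canon)
    flat = cv2.warpPerspective(gray, H, (side, side))
    cells = cv2.resize(flat, (GRID, GRID), interpolation=cv2.INTER_AREA)
    bits = (cells < 128).astype(np.uint8).flatten()  # dark cell -> bit 1
    tag_id = int(''.join(map(str, bits)), 2)
    # 2) Pose: the same four corners pin down the tag's 3D position.
    s = TAG_SIZE / 2
    obj = np.array([[-s, -s, 0], [s, -s, 0], [s, s, 0], [-s, s, 0]],
                   dtype=np.float32)
    _, rvec, tvec = cv2.solvePnP(obj, corners.astype(np.float32),
                                 camera_matrix, dist_coeffs)
    return tag_id, rvec, tvec
```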

626 citations


Proceedings ArticleDOI
01 Apr 2000
TL;DR: The article presents the model, applies it to describe and compare a number of interaction techniques, and shows how it was used to create a new interface for searching and replacing text.
Abstract: This article introduces a new interaction model called Instrumental Interaction that extends and generalizes the principles of direct manipulation. It covers existing interaction styles, including traditional WIMP interfaces, as well as new interaction styles such as two-handed input and augmented reality. It defines a design space for new interaction techniques and a set of properties for comparing them. Instrumental Interaction describes graphical user interfaces in terms of domain objects and interaction instruments. Interaction between users and domain objects is mediated by interaction instruments, similar to the tools and instruments we use in the real world to interact with physical objects. The article presents the model, applies it to describe and compare a number of interaction techniques, and shows how it was used to create a new interface for searching and replacing text.

588 citations


BookDOI
01 Nov 2000
TL;DR: A trusted reference for almost 15 years, Fundamentals of Wearable Computers and Augmented Reality goes beyond smart clothing to explore user interface design issues specific to wearable tech and areas in which it can be applied.
Abstract: Data will not help you if you can't see it where you need it. Or can't collect it where you need it. Upon these principles, wearable technology was born. And although smart watches and fitness trackers have become almost ubiquitous, with in-body sensors on the horizon, the future applications of wearable computers hold so much more. A trusted reference for almost 15 years, Fundamentals of Wearable Computers and Augmented Reality goes beyond smart clothing to explore user interface design issues specific to wearable tech and areas in which it can be applied. Upon its initial publication, the first edition almost instantly became a trusted reference, setting the stage for the coming decade, in which the explosion in research and applications of wearable computers and augmented reality occurred. Written by expert researchers and teachers, each chapter in the second edition has been revised and updated to reflect advances in the field and provide fundamental knowledge on each topic, solidifying the book's reputation as a valuable technical resource as well as a textbook for augmented reality and ubiquitous computing courses. New chapters in the second edition explore: haptics; visual displays; use of augmented reality for surgery and manufacturing; technical issues of image registration and tracking; augmenting the environment with wearable audio interfaces; use of augmented reality in preserving cultural heritage; human-computer interaction and augmented reality technology; spatialized sound and augmented reality; augmented reality and robotics; and computational clothing. From a technology perspective, much of what is happening now with wearables and augmented reality would not have been possible even five years ago. In the fourteen years since the first edition burst on the scene, the capabilities and applications of both technologies have become orders of magnitude faster, smaller, and cheaper. Yet the book's overarching mission remains the same: to supply the fundamental information and basic knowledge about the design and use of wearable computers and augmented reality with the goal of enhancing people's lives.

394 citations


Proceedings ArticleDOI
18 Oct 2000
TL;DR: This paper presents ARQuake, an outdoor/indoor augmented reality first-person application, and an architecture for a low-cost, moderately accurate six-degrees-of-freedom tracking system based on GPS, a digital compass, and fiducial vision-based tracking.
Abstract: This paper presents ARQuake, an outdoor/indoor augmented reality first-person application we have developed. ARQuake is an extension of the desktop game Quake; as such, we are investigating how to convert a desktop first-person application into an outdoor/indoor mobile augmented reality application. We present an architecture for a low-cost, moderately accurate six-degrees-of-freedom tracking system based on GPS, a digital compass, and fiducial vision-based tracking. Usability issues such as monster selection, colour, and input devices are discussed. A second application, for AR architectural design visualisation, is also presented.
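
A minimal sketch of the low-cost outdoor half of that tracking architecture: GPS supplies position, the digital compass supplies heading, and together they give a coarse six-degrees-of-freedom pose. The fiducial vision-based component is omitted, and the device is assumed held level (no roll/pitch sensing), which is a simplification.

```python
import math
import numpy as np

EARTH_R = 6_378_137.0  # WGS-84 equatorial radius in metres

def gps_to_local_metres(lat, lon, lat0, lon0):
    """Equirectangular approximation around a reference point (lat0, lon0)."""
    x = math.radians(lon - lon0) * EARTH_R * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * EARTH_R
    return x, y

def pose_from_gps_compass(lat, lon, alt, heading_deg, lat0, lon0):
    x, y = gps_to_local_metres(lat, lon, lat0, lon0)
    h = math.radians(heading_deg)
    # Rotation about the vertical axis only (device assumed held level).
    R = np.array([[ math.cos(h), math.sin(h), 0.0],
                  [-math.sin(h), math.cos(h), 0.0],
                  [ 0.0,         0.0,         1.0]])
    t = np.array([x, y, alt])
    return R, t  # feed into the renderer as the (coarse) camera pose
```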

334 citations


Journal ArticleDOI
TL;DR: This work compares two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices, as well as hybrid optical/video technology.
Abstract: We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and a human-factors point of view. Finally, we point to potentially promising future developments of such devices, including eye tracking and multifocus-plane capabilities, as well as hybrid optical/video technology.

333 citations


Proceedings ArticleDOI
05 Oct 2000
TL;DR: Describes a markerless camera tracking system for augmented reality that operates in environments containing one or more planes - a common special case which, it is shown, significantly simplifies tracking.
Abstract: We describe a markerless camera tracking system for augmented reality that operates in environments which contain one or more planes. This is a common special case, which we show significantly simplifies tracking. The result is a practical, reliable, vision-based tracker. Furthermore, the tracked plane imposes a natural reference frame, so that the alignment of the real and virtual coordinate systems is rather simpler than would be the case with a general structure-and-motion system. Multiple planes can be tracked, and additional data such as 2D point tracks are easily incorporated.
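
The core of such a tracker can be sketched as follows: features on the plane are tracked from frame to frame, and a homography is fitted robustly, since two views of a plane are related exactly by a homography. An illustrative reduction, not the authors' full system.

```python
import cv2

def track_plane(prev_gray, cur_gray, prev_pts):
    """prev_pts: Nx1x2 float32 features on the plane, e.g. from
    cv2.goodFeaturesToTrack. Returns the 3x3 inter-frame homography."""
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  prev_pts, None)
    good = status.ravel() == 1
    src, dst = prev_pts[good], cur_pts[good]
    if len(src) < 4:
        return None  # too few surviving tracks to constrain the plane
    # RANSAC discards tracks that left the plane or were mismatched.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```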

330 citations


01 Jan 2000
TL;DR: Several techniques for interactively performing occlusion and collision detection between static real objects and dynamic virtual objects in augmented reality are presented.
Abstract: We present several techniques for interactively performing occlusion and collision detection between static real objects and dynamic virtual objects in augmented reality. Computer vision algorithms are used to acquire data that model aspects of the real world: either geometric models may be registered to real objects, or a depth map of the real scene may be extracted. The computer-vision-derived data are mapped into algorithms that exploit the power of graphics workstations in order to interactively produce new effects in augmented reality. By combining live video from a calibrated camera with real-time renderings of the real-world data from graphics hardware, dynamic virtual objects occlude and are occluded by static real objects. As a virtual object is interactively manipulated, collisions with real objects are detected and the motion of the virtual object is constrained. Simulated gravity may then be produced by automatically moving the virtual object in the direction of a gravity vector until it encounters a collision with a real object.
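
A minimal sketch of the occlusion resolve the abstract describes, assuming the vision-derived depth map of the real scene and the rendered virtual layer are already registered to the same camera. (The authors do this on graphics hardware via the Z-buffer; NumPy stands in here.) A collision can be flagged the same way: the virtual object has reached a real surface wherever its depth meets the real depth.

```python
import numpy as np

def composite_with_occlusion(video_rgb, real_depth, virt_rgb, virt_depth):
    """video_rgb/virt_rgb: HxWx3; depths: HxW in metres, np.inf where empty."""
    virtual_wins = virt_depth < real_depth      # per-pixel depth test
    out = video_rgb.copy()
    out[virtual_wins] = virt_rgb[virtual_wins]  # virtual occludes real here;
    return out                                  # elsewhere real occludes virtual
```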

293 citations


Journal ArticleDOI
TL;DR: Anecdotal evidence supports the claim that the use of Construct3D is easy to learn and encourages experimentation with geometric constructions.
Abstract: Construct3D is a three-dimensional geometric construction tool based on the collaborative augmented reality system 'Studierstube'. Our setup uses a stereoscopic head-mounted display (HMD) and the Personal Interaction Panel (PIP) - a two-handed 3D interaction tool that simplifies 3D model interaction. Applications in mathematics and geometry education at both high-school and university level are discussed. A pilot study summarizes the strengths and possible extensions of our system. Anecdotal evidence supports our claim that the use of Construct3D is easy to learn and that it encourages experimentation with geometric constructions.

280 citations


Proceedings ArticleDOI
05 Oct 2000
TL;DR: Describes a region-based information filtering algorithm that can dynamically respond to changes in the environment and the user's state, and shows how simple temporal, distance and angle cues can refine the transitions between different information sets.
Abstract: Augmented reality is a potentially powerful paradigm for annotating the (real) environment with computer-generated material. Its benefits will be even greater when augmented reality systems become mobile and wearable. However, to minimize clutter and to maximize the effectiveness of the display, algorithms must be developed to select only the most important information for the user. In this paper, we describe a region-based information filtering algorithm. The algorithm takes account of the state of the user (location and intent) and the state of the individual objects about which information can be presented, and it can dynamically respond to changes in the environment and the user's state. We also describe how simple temporal, distance and angle cues can be used to refine the transitions between different information sets.
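
A toy sketch of such region-based filtering with the distance and angle cues the paper mentions, plus a hysteresis band so labels do not flicker at the boundary. The thresholds and the object structure are assumptions for illustration, not the authors' values.

```python
import math

SHOW_DIST, HIDE_DIST = 30.0, 35.0  # metres; the gap provides hysteresis
MAX_ANGLE = math.radians(30)       # only annotate what the user roughly faces

def visible_labels(user_pos, user_heading, objects, currently_shown):
    shown = set()
    for obj in objects:  # each obj assumed to have .id and .pos = (x, y)
        dx, dy = obj.pos[0] - user_pos[0], obj.pos[1] - user_pos[1]
        dist = math.hypot(dx, dy)
        bearing = math.atan2(dy, dx)
        angle = abs((bearing - user_heading + math.pi) % (2 * math.pi) - math.pi)
        # Hysteresis: it is harder to enter the shown set than to stay in it.
        limit = HIDE_DIST if obj.id in currently_shown else SHOW_DIST
        if dist <= limit and angle <= MAX_ANGLE:
            shown.add(obj.id)
    return shown
```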

243 citations


Journal ArticleDOI
TL;DR: Emerging second-generation technologies can be grouped into three domains: 3D documentation (everything from site surveys to epigraphy), 3D representation (from historic reconstruction to visualization), and 3D dissemination (from immersive networked worlds to "in situ" augmented reality).
Abstract: From the pyramids at Giza to Kakadu National Park in Australia, the world's cultural and natural heritage has stood the test of time. Yet progress now threatens these landmarks of our past at an ever-increasing pace. Rapid advances in digital technologies in recent years, from new media to virtual reality (VR) and high-speed networks, have offered heritage some hope. The first wave of VR worlds failed to live up to the promise. Today, the forward march of technology has quietly enabled a second wave of VR applications. Digital tools and techniques offer new hope for the often painstakingly complex tasks of archaeology, surveying, historic research, conservation, and education. These emerging second-generation technologies can be grouped into three domains: 3D documentation (everything from site surveys to epigraphy), 3D representation (from historic reconstruction to visualization), and 3D dissemination (from immersive networked worlds to "in situ" augmented reality). The author reviews these emerging trends.

Proceedings ArticleDOI
26 Mar 2000
TL;DR: A fast and robust method for tracking positions of the centers and the fingertips of both right and left hands, which makes use of infrared camera images for reliable detection of a user's hands, and uses a template matching strategy for finding fingertips.
Abstract: We introduce a fast and robust method for tracking the positions of the centers and the fingertips of both the right and left hands. Our method makes use of infrared camera images for reliable detection of a user's hands, and uses a template-matching strategy for finding fingertips. This method is an essential part of our augmented desk interface, in which a user can, with natural hand gestures, simultaneously manipulate both physical objects and electronically projected objects on a desk, e.g., a textbook and related WWW pages. Previous tracking methods, which are typically based on color segmentation or background subtraction, simply do not perform well in this type of application because the observed color of human skin and image backgrounds may change significantly due to the projection of various objects onto the desk. In contrast, our proposed method was shown to be effective even in such a challenging situation through demonstration in our augmented desk interface. This paper describes the details of our tracking method as well as typical applications in our augmented desk interface.
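
The pipeline the abstract outlines - segment hands by their infrared brightness, then locate fingertips by template matching - might look like the following. The threshold, template, and match-score cutoff are assumptions for illustration.

```python
import cv2

def find_fingertips(ir_image, tip_template, max_tips=5):
    # 1) Body heat makes hands bright in IR, regardless of what is
    #    projected onto the desk, so a fixed threshold segments them.
    _, hand_mask = cv2.threshold(ir_image, 200, 255, cv2.THRESH_BINARY)
    hands = cv2.bitwise_and(ir_image, ir_image, mask=hand_mask)
    # 2) Normalized cross-correlation against a fingertip-shaped template.
    response = cv2.matchTemplate(hands, tip_template, cv2.TM_CCOEFF_NORMED)
    tips = []
    for _ in range(max_tips):
        _, score, _, loc = cv2.minMaxLoc(response)
        if score < 0.7:  # assumed acceptance threshold
            break
        tips.append(loc)
        # Suppress this peak so the next-best fingertip can be found.
        cv2.circle(response, loc, tip_template.shape[0], -1.0, -1)
    return tips
```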

Proceedings ArticleDOI
05 Oct 2000
TL;DR: The calibration is based on the alignment of image points with a single 3D point in the world coordinate system from various viewpoints, and the user interaction required is extremely easy compared to prior methods.
Abstract: Augmented reality (AR) is a technology in which a user's view of the real world is enhanced or augmented with additional information generated from a computer model. In order to have a working AR system, the see-through display system must be calibrated so that the graphics are properly rendered. Optical see-through systems present an additional challenge because, unlike in video see-through systems, we do not have direct access to the image data. This paper reports on a calibration method we developed for optical see-through head-mounted displays. The method integrates measurements from the camera and from the magnetic tracker attached to it. The calibration is based on the alignment of image points with a single 3D point in the world coordinate system from various viewpoints. The user interaction required is extremely easy compared to prior methods, and there is no need to keep the head static during calibration.
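
The estimation at the heart of such a calibration can be sketched as a direct linear transform (DLT): each alignment of the on-screen point with the single world point contributes one 2D-3D correspondence, and stacking the correspondences yields the projection matrix. This is an idealized reduction; the paper's full method also folds in the magnetic-tracker-to-display relationship.

```python
import numpy as np

def calibrate_projection(world_pts, image_pts):
    """world_pts: Nx3 (the single 3D point expressed in the head frame at
    each viewpoint); image_pts: Nx2 aligned screen points; needs N >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)   # least-squares null vector of A
    return Vt[-1].reshape(3, 4)   # the 3x4 projection matrix
```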

Proceedings ArticleDOI
18 Mar 2000
TL;DR: A method for augmented reality with a stereo vision sensor and a video see-through head-mounted display (HMD) that can synchronize the display timing between the virtual and real worlds so that the alignment error is reduced.
Abstract: In an augmented reality system, the position and orientation of the user's viewpoint must be obtained in order to display the composed image while maintaining correct registration between the real and virtual worlds, and all procedures must run in real time. This paper proposes a method for augmented reality with a stereo vision sensor and a video see-through head-mounted display (HMD). It can synchronize the display timing between the virtual and real worlds so that the alignment error is reduced. The method calculates camera parameters from three markers in image sequences captured by a pair of stereo cameras mounted on the HMD. In addition, it estimates the real-world depth from a pair of stereo images in order to generate a composed image that maintains consistent occlusions between real and virtual objects. The depth-estimation region is efficiently limited by calculating the position of the virtual object using the camera parameters. Finally, we have developed a video see-through augmented reality system which mainly consists of a pair of stereo cameras mounted on the HMD and a standard graphics workstation. The feasibility of the system has been successfully demonstrated in experiments.
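
A hedged sketch of the depth-estimation step, using OpenCV block matching as a stand-in for the paper's stereo method and restricting computation to the virtual object's screen region, as the abstract describes; the resulting depths are then compared against the virtual object's depth to resolve occlusion.

```python
import cv2
import numpy as np

def depth_in_region(left_gray, right_gray, region, focal_px, baseline_m):
    x, y, w, h = region  # bounding box of the virtual object on screen
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    roi = disparity[y:y+h, x:x+w]
    depth = np.full_like(roi, np.inf)   # no depth where matching failed
    valid = roi > 0
    depth[valid] = focal_px * baseline_m / roi[valid]  # Z = f * B / d
    return depth
```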

Proceedings ArticleDOI
01 Apr 2000
TL;DR: It is argued that, to achieve user-friendly products, working with user video should be an integral part of the activities of the design team, not a specialised task of experts.
Abstract: In user-centred design, the integration of knowledge of users' work practices, preferences, etc. into the design process is crucial to success. For this reason, video recording has become a widespread tool for documenting user activities observed in field studies, usability tests and user workshops. Making sense of video recordings - though a rewarding experience - is time-consuming and mostly left to experts. Even though developers may ask for expert advice on usability matters, chances are that they will not follow it, given the technical and commercial trade-offs in every project. In this paper we argue that, to achieve user-friendly products, working with user video should be an integral part of the activities of the design team, not a specialised task of experts. To support this, video must be made available as a resource in design discussions, and developers must be allowed to form their own understanding and conclusions. This paper presents a technique for turning video into tangible arguments to support design teams' work. Furthermore, it discusses how this technique can be improved with augmented reality, and presents an augmented prototype session.

Proceedings ArticleDOI
18 Mar 2000
TL;DR: A novel visuo-haptic display using a head-mounted projector (HMP) with X'tal Vision (Crystal Vision) optics is proposed, which enables an observer to touch a virtual object just as it is seen.
Abstract: We propose a novel visuo-haptic display using a head-mounted projector (HMP) with X'tal Vision (Crystal Vision) optics. Our goal is to develop a device which enables an observer to touch a virtual object just as it is seen. We describe in detail the design of an HMP with X'tal Vision, which is very well suited to augmented reality. For instance, the HMP renders the occlusion relationship between the virtual and the real environments nearly correctly, so the user can observe his or her real hand together with the virtual objects. Furthermore, the HMP reduces eye fatigue because of the low inconsistency between accommodation and convergence. We therefore applied HMP-model 2 to a visuo-haptic display using a camouflage technique. This technique, called optical camouflage, makes an obstacle, such as a haptic display, appear translucent. With this method, a user can observe a stereoscopic virtual object with a nearly correct occlusion relationship between the virtual and the real environments, and can actually feel the object.

Proceedings ArticleDOI
30 Jul 2000
TL;DR: This work investigates how augmented reality enhanced by physical and spatial 3D user interfaces can be used to develop effective face-to-face collaborative computing environments.
Abstract: In the Shared Space project, we explore, innovate, design and evaluate future computing environments that will radically enhance interaction between human and computers as well as interaction between humans mediated by computers. In particular, we investigate how augmented reality enhanced by physical and spatial 3D user interfaces can be used to develop effective face-to-face collaborative computing environments. How will we interact in such collaborative spaces? How will we interact with each other? What new applications can be developed using this technology? These are the questions that we are trying to answer in research on Shared Space. The paper provides a short overview of Shared Space, its directions, technologies and applications.

Journal ArticleDOI
TL;DR: A method is developed to analyze the accuracy of the relative head-to-object position and orientation (pose) in augmented reality systems with head-mounted displays, and produces a pose estimate that is significantly more accurate than that produced by either sensor acting alone.
Abstract: A method is developed to analyze the accuracy of the relative head-to-object position and orientation (pose) in augmented reality systems with head-mounted displays. From probabilistic estimates of the errors in optical tracking sensors, the uncertainty in head-to-object pose can be computed in the form of a covariance matrix. The positional uncertainty can be visualized as a 3D ellipsoid. One useful benefit of having an explicit representation of uncertainty is that we can fuse sensor data from a combination of fixed and head-mounted sensors in order to improve the overall registration accuracy. The method was applied to the analysis of an experimental augmented reality system incorporating an optical see-through head-mounted display, a head-mounted CCD camera, and a fixed optical tracking sensor. The uncertainty of the pose of a movable object with respect to the head-mounted display was analyzed. By using both fixed and head-mounted sensors, we produced a pose estimate that is significantly more accurate than that produced by either sensor acting alone.
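
The fusion step can be sketched as standard inverse-covariance weighting of the two sensors' pose estimates; the positional 3x3 block of the fused covariance is what the paper visualizes as a 3D error ellipsoid (axes from its eigenvectors, radii proportional to the square roots of its eigenvalues). A minimal sketch, assuming both estimates are expressed in a common frame.

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """x1, x2: pose vectors; P1, P2: their covariance matrices."""
    W1, W2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(W1 + W2)   # fused covariance: never worse than either
    x = P @ (W1 @ x1 + W2 @ x2)  # fused pose estimate
    return x, P
```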

Patent
22 Dec 2000
TL;DR: In this patent, a teleportal system provides remote communication between users: a projective display and video-capture system generates stereoscopic images that are projected onto a retro-reflective screen and reflected back to the user's eyes.
Abstract: A teleportal system provides remote communication between at least two users. A projective display and video-capture system provides video images to the users, obtaining and transmitting stereoscopic 3D images to remote users. The projective display unit provides an augmented reality environment to each user and gives users an unobstructed view of the other local users and of the local site in which they are located. A screen transmits to the user the images generated by the projective display via a retro-reflective fabric upon which images are projected and reflected back to the user's eyes.

Journal ArticleDOI
TL;DR: The goal is to provide interactive tools to perform common illumination, i.e., light interactions between real and virtual objects, including shadows and relighting (real and virtual light source modification); the work focuses on virtually modifying real light source intensities and inserting virtual lights and objects into a real scene.
Abstract: Computer augmented reality (CAR) is a rapidly emerging field which enables users to mix real and virtual worlds. Our goal is to provide interactive tools to perform common illumination, i.e., light interactions between real and virtual objects, including shadows and relighting (real and virtual light source modification). In particular, we concentrate on virtually modifying real light source intensities and inserting virtual lights and objects into a real scene; such changes can be very useful for virtual lighting design and prototyping. To achieve this, we present a three-step method. We first reconstruct a simplified representation of real scene geometry using semiautomatic vision-based techniques. With the simplified geometry, and by adapting recent hierarchical radiosity algorithms, we construct an approximation of real scene light exchanges. We next perform a preprocessing step, based on the radiosity system, to create unoccluded illumination textures. These replace the original scene textures which contained real light effects such as shadows from real lights. This texture is then modulated by a ratio of the radiosity (which can be changed) over a display factor which corresponds to the radiosity for which occlusion has been ignored. Since our goal is to achieve a convincing relighting effect, rather than an accurate solution, we present a heuristic correction process which results in visually plausible renderings. Finally, we perform an interactive process to compute new illumination with modified real and virtual light intensities.
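
The texture-modulation step at the center of the method reduces to one per-texel equation: the unoccluded illumination texture is scaled by the ratio of the (possibly modified) radiosity to the display factor. A minimal NumPy rendering of that equation, with array shapes assumed.

```python
import numpy as np

def relit_texture(unoccluded_tex, new_radiosity, display_factor):
    """new_radiosity/display_factor: HxW; unoccluded_tex: HxW or HxWx3,
    linear-light floats. display_factor is the radiosity computed with
    occlusion ignored."""
    ratio = new_radiosity / display_factor
    if unoccluded_tex.ndim == 3:   # broadcast the ratio over RGB channels
        ratio = ratio[..., None]
    return np.clip(unoccluded_tex * ratio, 0.0, 1.0)
```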

Dissertation
01 Apr 2000
TL;DR: This paper presents an augmented reality system using audio as the primary interface, whereby individuals can leave "audio imprints," consisting of several layers of music, sound effects, or recorded voice, at a location outdoors.
Abstract: This paper presents an augmented reality system using audio as the primary interface. Using the authoring component of this system, individuals can leave "audio imprints," consisting of several layers of music, sound effects, or recorded voice, at a location outdoors. Using the navigation component, individuals can hear imprints by walking into the area that the imprint occupies. Furthermore, imprints can be linked together, whereby an individual is directed from one imprint to related imprints in the area.
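
A hypothetical sketch of the navigation component: an imprint's layers play when the listener's position falls inside the area the imprint occupies, and its links point onward to related imprints. The data layout and the play_layers callback are assumptions, not the system's actual interfaces.

```python
import math

EARTH_R = 6_378_137.0  # metres

def metres_between(lat1, lon1, lat2, lon2):
    x = math.radians(lon2 - lon1) * EARTH_R * math.cos(math.radians(lat1))
    y = math.radians(lat2 - lat1) * EARTH_R
    return math.hypot(x, y)

def update(listener, imprints, play_layers):
    """imprints: objects with .lat, .lon, .radius_m, .layers (music, sound
    effects, recorded voice) and .links (related imprints nearby)."""
    for imp in imprints:
        if metres_between(listener.lat, listener.lon,
                          imp.lat, imp.lon) <= imp.radius_m:
            play_layers(imp.layers)  # mix the imprint's audio layers
            return imp.links         # direct the listener onward
    return []
```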

Proceedings ArticleDOI
05 Oct 2000
TL;DR: Studierstube is an experimental user interface system which uses collaborative augmented reality to incorporate true 3D interaction into a productivity environment; the paper reports on its design philosophy, centered around the notion of contexts and locales, as well as the underlying software and hardware architecture.
Abstract: Studierstube is an experimental user interface system which uses collaborative augmented reality to incorporate true 3D interaction into a productivity environment. This concept is extended to bridge multiple user interface dimensions by including multiple users, multiple host platforms, multiple display types, multiple concurrent applications and a multi-context (i.e. 3D document) interface into a heterogeneous distributed environment. With this architecture, we can explore the user interface design space between pure augmented reality and the popular ubiquitous computing paradigm. We report on our design philosophy, which is centered around the notion of contexts and locales, as well as the underlying software and hardware architecture. Contexts encapsulate a live application together with 3D (visual) and other data, while locales are used to organize geometric reference systems. By separating geometric relationships (locales) from semantic relationships (contexts), we achieve a great amount of flexibility in the configuration of displays. To illustrate our claims, we present several applications, including a cinematographic design tool which showcases many features of our system.
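
The contexts/locales separation can be sketched as two independent mappings - what is shown (contexts) versus where it is shown (locales) - so the same application can appear on displays with different geometric reference systems. The names below are illustrative, not Studierstube's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Context:      # a live application together with its 3D document data
    app: str
    data: dict = field(default_factory=dict)

@dataclass
class Locale:       # a geometric reference system
    name: str
    origin: tuple = (0.0, 0.0, 0.0)

@dataclass
class Display:
    locale: Locale                                # where content appears
    contexts: list = field(default_factory=list)  # which content appears

# The same context attached to displays in different locales: semantic
# relationships stay put while the geometry is reconfigured freely.
tool = Context("cinematographic_design_tool")
hmd = Display(Locale("user1_head"), [tool])
wall = Display(Locale("projection_wall"), [tool])
```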

Proceedings ArticleDOI
18 Mar 2000
TL;DR: The Perceptive Workbench is used for augmented reality gaming and terrain navigation applications, which demonstrate the utility and capability of the interface.
Abstract: The Perceptive Workbench enables a spontaneous, natural and unimpeded interface between the physical and virtual worlds. It uses vision-based methods for interaction that eliminate the need for wired input devices and wired tracking. Objects are recognized and tracked when placed on the display surface. Through the use of multiple light sources, the object's 3D shape can be captured and inserted into the virtual interface. This ability permits spontaneity since either preloaded objects or those objects selected on-the-spot by the user can become physical icons. Integrated into the same vision-based interface is the ability to identify 3D hand position, pointing direction and sweeping arm gestures. Such gestures can enhance selection, manipulation and navigation tasks. In this paper, the Perceptive Workbench is used for augmented reality gaming and terrain navigation applications, which demonstrate the utility and capability of the interface.

Journal ArticleDOI
M. Weidenbach1, C. Wick, S. Pieper, K.-J. Quast, T. Fox, G. Grunst, D. A. Redel1 
TL;DR: In two-dimensional echocardiography, the sonographer must synthesize multiple tomographic slices into a mental three-dimensional model of the heart to establish spatial and temporal congruence; in augmented reality (AR) applications, real and virtual image data are linked to increase the information content.

Proceedings ArticleDOI
09 Jan 2000
TL;DR: This paper presents a developing multi-player augmented reality game, patterned as a cross between a martial arts fighting game and an agent controller, as implemented using the Wearable Augmented Reality for Personal, Intelligent, and Networked Gaming (WARPING) system.
Abstract: Computer gaming offers a unique test-bed and market for advanced concepts in computer science, such as Human Computer Interaction (HCI), computer-supported collaborative work (CSCW), intelligent agents, graphics, and sensing technology. In addition, computer gaming is especially well-suited for explorations in the relatively young fields of wearable computing and augmented reality (AR). This paper presents a developing multi-player augmented reality game, patterned as a cross between a martial arts fighting game and an agent controller, as implemented using the Wearable Augmented Reality for Personal, Intelligent, and Networked Gaming (WARPING) system. Through interactions based on gesture, voice, and head movement input and audio and graphical output, the WARPING system demonstrates how computer vision techniques can be exploited for advanced, intelligent interfaces.

Proceedings ArticleDOI
01 Oct 2000
TL;DR: By assembling the best of available hardware and software technologies in static scene acquisition, modeling algorithms, rendering, tracking and stereo projective display, this work is able to demonstrate a portal to a real office, occupied today by a mannequin, and in the future by a real remote collaborator.
Abstract: In 1998 we introduced the idea for a project we call the Office of the Future. Our long-term vision is to provide a better every-day working environment, with high-fidelity scene reconstruction for life-sized 3D tele-collaboration. In particular, we want a true sense of presence with our remote collaborator and their real surroundings. The challenges related to this vision are enormous and involve many technical tradeoffs. This is true in particular for scene reconstruction. Researchers have been striving to achieve real-time approaches, and while they have made respectable progress, the limitations of conventional technologies relegate them to relatively low resolution in a restricted volume. We present a significant step toward our ultimate goal, via a slightly different path. In lieu of low-fidelity dynamic scene modeling we present an exceedingly high fidelity reconstruction of a real but static office. By assembling the best of available hardware and software technologies in static scene acquisition, modeling algorithms, rendering, tracking and stereo projective display, we are able to demonstrate a portal to a real office, occupied today by a mannequin, and in the future by a real remote collaborator. We now have both a compelling sense of just how good it could be, and a framework into which we will later incorporate dynamic scene modeling, as we continue to head toward our ultimate goal of 3D collaborative telepresence.

Proceedings ArticleDOI
01 Apr 2000
TL;DR: Describes how the augmented reality and product design communities might learn from each other, aiming to inspire those involved in augmented reality design and to help them avoid the pitfalls that the product design community is now trying to crawl out of.
Abstract: In this article we describe how the augmented reality and product design communities, which share the common interest of combining the real and the virtual, might learn from each other. From our side, we would like to share with you some of our ideas about product design which we consider highly relevant for the augmented reality community. In a pamphlet we list 10 sloganesque points for action which challenge the status quo in product design. Finally, we present some projects which show how these points could be implemented. We hope this approach will inspire those involved in augmented reality design and help them to avoid the pitfalls that the product design community is now trying to crawl out of.

Proceedings ArticleDOI
01 Apr 2000
TL;DR: Describes two interface prototypes developed on the authors' augmented desk interface system, EnhancedDesk, aimed at providing an effective learning environment and supporting effective information retrieval.
Abstract: This paper describes two interface prototypes which we have developed on our augmented desk interface system, EnhancedDesk. The first application is Interactive Textbook, which is aimed at providing an effective learning environment. When a student opens a page which describes experiments or simulations, Interactive Textbook automatically retrieves digital contents from its database and projects them onto the desk. Interactive Textbook also gives the student the hands-on ability to interact with the digital contents. The second application is the Interactive Venn Diagram, which is aimed at supporting effective information retrieval. Instead of keywords, the system uses real objects such as books or CDs as keys for retrieval. The system projects a circle around each book; data corresponding to the book are then retrieved and projected inside the circle. By moving two or more circles so that they intersect, the user can compose a Venn diagram interactively on the desk. We also describe the new technologies introduced in EnhancedDesk which enable us to implement these applications.

Proceedings ArticleDOI
05 Oct 2000
TL;DR: A parallax-free stereo video see-through HMD with a wide field of view was developed, and it was confirmed experimentally that a small displacement of the entrance pupils of the imaging system along the axes to the exit pupils of the display system does not cause serious spatial perception errors.
Abstract: Geometrical studies and experiments were carried out in order to specify requirements for the imaging and display systems of a stereo video see-through head-mounted display (HMD) for augmented reality (AR) systems. It was found that it was necessary to set the left and right optical axes of the stereoscopic imaging system to be parallel to one another, and likewise for the display system. It was also found that, for AR applications handling real spaces at short distances, it was necessary to align the optical axes of the imaging system with the optical axes of the display system in order to avoid spatial perception errors. We refer to this as a 'parallax-free' system. Based on these studies and experiments, a parallax-free stereo video see-through HMD with a wide field of view was developed. It was also confirmed experimentally that a small displacement of the entrance pupils of the imaging system along the axes to the exit pupils of the display system does not cause serious spatial perception errors. By displacing the position of the entrance pupils of the imaging system by 30 mm, we have downsized the HMD to a thickness of 36 mm and lightened it to 340 g. The HMD was then applied to an actual AR system and its usefulness was verified.

Patent
08 Sep 2000
TL;DR: A game state manager manages the state of an AR game (information that pertains to rendering of each virtual object, the score of a player, the AR game round count, and the like) as discussed by the authors.
Abstract: A game state manager (201) manages the state of an AR game (information that pertains to the rendering of each virtual object (102), the score of a player (101), the AR game round count, and the like). An objective viewpoint video generator (202) generates a video of each virtual object (102) viewed from a camera (103). An objective viewpoint video composition unit (203) generates a composite video of the video of the virtual object (102) and an actually sensed video, and outputs it to a display (106). A subjective viewpoint video generator (212) generates a video of the virtual object (102) viewed from an HMD (107). A subjective viewpoint video composition unit (213) generates a composite video of the video of the virtual object (102) and an actually sensed video, and outputs it to the HMD (107).