
Showing papers on "Augmented reality published in 1998"


Proceedings ArticleDOI
24 Jul 1998
TL;DR: The authors envision an office of the future in which ceiling-mounted, computer-controlled cameras and "smart" projectors capture dynamic image-based models of the office with imperceptible structured light and project head-tracked, high-resolution imagery onto designated everyday display surfaces.
Abstract: We introduce ideas, proposed technologies, and initial results for an office of the future that is based on a unified application of computer vision and computer graphics in a system that combines and builds upon the notions of the CAVE™, tiled display systems, and image-based modeling. The basic idea is to use real-time computer vision techniques to dynamically extract per-pixel depth and reflectance information for the visible surfaces in the office including walls, furniture, objects, and people, and then to either project images on the surfaces, render images of the surfaces, or interpret changes in the surfaces. In the first case, one could designate every-day (potentially irregular) real surfaces in the office to be used as spatially immersive display surfaces, and then project high-resolution graphics and text onto those surfaces. In the second case, one could transmit the dynamic image-based models over a network for display at a remote site. Finally, one could interpret dynamic changes in the surfaces for the purposes of tracking, interaction, or augmented reality applications. To accomplish the simultaneous capture and display we envision an office of the future where the ceiling lights are replaced by computer controlled cameras and "smart" projectors that are used to capture dynamic image-based models with imperceptible structured light techniques, and to display high-resolution images on designated display surfaces. By doing both simultaneously on the designated display surfaces, one can dynamically adjust or autocalibrate for geometric, intensity, and resolution variations resulting from irregular or changing display surfaces, or overlapped projector images. Our current approach to dynamic image-based modeling is to use an optimized structured light scheme that can capture per-pixel depth and reflectance at interactive rates. Our system implementation is not yet imperceptible, but we can demonstrate the approach in the laboratory. Our approach to rendering on the designated (potentially irregular) display surfaces is to employ a two-pass projective texture scheme to generate images that, when projected onto the surfaces, appear correct to a moving head-tracked observer. We present here an initial implementation of the overall vision, in an office-like setting, and preliminary demonstrations of our dynamic modeling and display techniques.
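The per-pixel depth capture described above rests on structured-light triangulation: a projector ray and a camera ray that observe the same surface point intersect at a known depth. A minimal sketch of that geometry (the function name and the simplified coplanar camera/projector model are illustrative assumptions, not the authors' implementation):

```python
import math

def depth_from_stripe(theta_cam, theta_proj, baseline):
    """Triangulate depth for one pixel.

    Camera at the origin, projector offset by `baseline` along x; both
    angles are measured from the optical (z) axis toward +x.  A surface
    point (x, z) satisfies x = z*tan(theta_cam) for the camera ray and
    x - baseline = z*tan(theta_proj) for the projector ray.
    """
    denom = math.tan(theta_cam) - math.tan(theta_proj)
    if abs(denom) < 1e-9:
        raise ValueError("rays are parallel; no intersection")
    return baseline / denom
```

Per-pixel depth follows by running this over every camera pixel once the structured-light pattern identifies which projector stripe illuminated it.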

947 citations


Book ChapterDOI
11 Oct 1998
TL;DR: The system uses 3D visualization, depth extraction from laparoscopic images, and six degree-of-freedom head and laparoscope tracking to display a merged real and synthetic image in the surgeon’s video-see-through head-mounted display.
Abstract: We present the design and a prototype implementation of a three-dimensional visualization system to assist with laparoscopic surgical procedures. The system uses 3D visualization, depth extraction from laparoscopic images, and six degree-of-freedom head and laparoscope tracking to display a merged real and synthetic image in the surgeon’s video-see-through head-mounted display. We also introduce a custom design for this display. A digital light projector, a camera, and a conventional laparoscope create a prototype 3D laparoscope that can extract depth and video imagery.

424 citations


Proceedings ArticleDOI
19 Oct 1998
TL;DR: This paper describes a system that allows users to dynamically attach newly created digital information, such as voice notes and photographs, to the physical environment, through wearable computers as well as normal computers.

Abstract: Most existing augmented reality systems only provide a method for browsing information that is situated in the real world context. This paper describes a system that allows users to dynamically attach newly created digital information, such as voice notes and photographs, to the physical environment, through wearable computers as well as normal computers. Attached data is stored with contextual tags, such as location IDs and object IDs, that are obtained by wearable sensors, so the same or other wearable users can notice them when they come to the same context. Similar to the role that Post-it notes play in community messaging, we expect our proposed method to be a fundamental communication platform when wearable computers become commonplace.
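The contextual-tag storage scheme can be sketched as a simple keyed store: notes are attached under a sensed context ID and retrieved by anyone who later enters the same context. The class and method names below are hypothetical, not from the paper:

```python
from collections import defaultdict

class ContextNoteStore:
    """Minimal sketch: digital notes keyed by contextual tags
    (location IDs / object IDs) obtained from wearable sensors."""

    def __init__(self):
        self._notes = defaultdict(list)

    def attach(self, context_id, note):
        # Attach a newly created note (voice note, photo, ...) to a context.
        self._notes[context_id].append(note)

    def notes_for(self, context_id):
        # Another wearable user entering the same context sees the notes.
        return list(self._notes[context_id])

store = ContextNoteStore()
store.attach("room-342", "voice note: projector remote is in the drawer")
print(store.notes_for("room-342"))
```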

331 citations


Proceedings ArticleDOI
15 Jul 1998
TL;DR: A novel technique for producing augmented reality systems that simultaneously identify real world objects and estimate their coordinate systems is introduced, using a 2D matrix marker, a square-shaped barcode, which can identify a large number of objects.

Abstract: The paper introduces a novel technique for producing augmented reality systems that simultaneously identify real world objects and estimate their coordinate systems. This method utilizes a 2D matrix marker, a square-shaped barcode, which can identify a large number of objects. It also acts as a landmark to register information on real world images. As a result, it costs virtually nothing to produce and attach codes to various kinds of real world objects, because the matrix code is printable. We have developed an augmented reality system based on this method and demonstrated several potential applications.
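Once the four corners of the square marker are detected, registering information on the image amounts to estimating the homography from the marker's own square frame to the image plane. A hedged sketch of that step via the direct linear transform (the function name is an assumption; the paper's actual decoding and pose pipeline is not reproduced here):

```python
import numpy as np

def homography_from_marker(img_corners):
    """Estimate the 3x3 homography mapping the marker's unit square
    (0,0),(1,0),(1,1),(0,1) to its four detected image corners.

    Standard DLT with h33 fixed to 1: each correspondence contributes
    two linear equations in the remaining eight unknowns.
    """
    src = [(0, 0), (1, 0), (1, 1), (0, 1)]
    A, b = [], []
    for (x, y), (u, v) in zip(src, img_corners):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)
```

With the homography in hand, any annotation defined in marker coordinates can be warped onto the live image.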

308 citations


Proceedings ArticleDOI
14 Mar 1998
TL;DR: A specific set of applications is described in detail, as well as a prototype system and the software library it is built upon, along with a proposed AR media language (ARML) that could provide interoperability between various AR implementations.
Abstract: This paper presents cognitive studies and analyses relating to how augmented reality (AR) interacts with human abilities in order to benefit manufacturing and maintenance tasks. A specific set of applications is described in detail, as well as a prototype system and the software library that it is built upon. An integrated view of information flow to support AR is also presented, along with a proposal for an AR media language (ARML) that could provide interoperability between various AR implementations.

296 citations


Proceedings ArticleDOI
24 Jul 1998
TL;DR: A new tracking system for augmented reality and virtual set applications, based on an inertial navigation system aided by ultrasonic time-of-flight range measurements to a constellation of wireless transponder beacons, is presented.
Abstract: We present a new tracking system for augmented reality and virtual set applications, based on an inertial navigation system aided by ultrasonic time-of-flight range measurements to a constellation of wireless transponder beacons. An extended Kalman filter operating on 1-D range measurements allows the inertial sensors to filter out corrupt range measurements and perform optimal smoothing and prediction, while at the same time using the pre-screened range measurements to correct the drift of the inertial system. The use of inside-out ultrasonic tracking allows for tetherless tracking over a building-wide range with no acoustic propagation latency. We have created a simulation to account for error sources in the ultrasonic ranging system. The fully implemented tracking system is tested and found to have accuracy consistent with the simulation results. The simulation also predicts that with some further compensation of transducer misalignment, accuracies better than 2 mm can be achieved.
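The core of the range-aided filter is an EKF measurement update driven by a single 1-D range to a beacon at a known position. A simplified, position-only sketch (the state here is reduced to a 3-D position; the real system also carries orientation, velocity, and inertial bias states, and the function name is an assumption):

```python
import numpy as np

def range_update(x, P, beacon, r_meas, r_var):
    """One EKF measurement update with a single 1-D ultrasonic range.

    x: position estimate (3,), P: covariance (3,3), beacon: beacon
    position (3,), r_meas: measured range, r_var: range noise variance.
    """
    diff = x - beacon
    r_pred = np.linalg.norm(diff)               # predicted range h(x)
    H = (diff / r_pred).reshape(1, 3)           # Jacobian of h(x) = ||x - b||
    S = H @ P @ H.T + r_var                     # innovation covariance (1x1)
    K = P @ H.T / S                             # Kalman gain (3x1)
    x_new = x + (K * (r_meas - r_pred)).ravel() # correct with the innovation
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```

Gating corrupt ranges, as the paper describes, amounts to rejecting measurements whose innovation is large relative to `S` before applying this update.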

277 citations


Journal ArticleDOI
TL;DR: A taxonomy that classifies current approaches to shared spaces according to the three dimensions of transportation, artificiality, and spatiality is introduced and the technique of mixed-reality boundaries is introduced as a way of joining real and virtual spaces together in order to address some of these problems.
Abstract: We propose an approach to creating shared mixed realities based on the construction of transparent boundaries between real and virtual spaces. First, we introduce a taxonomy that classifies current approaches to shared spaces according to the three dimensions of transportation, artificiality, and spatiality. Second, we discuss our experience of staging a poetry performance simultaneously within real and virtual theaters. This demonstrates the complexities involved in establishing social interaction between real and virtual spaces and motivates the development of a systematic approach to mixing realities. Third, we introduce and demonstrate the technique of mixed-reality boundaries as a way of joining real and virtual spaces together in order to address some of these problems.

277 citations


01 Jan 1998
TL;DR: This dissertation introduces a new approach to vision-based tracking using structured light to generate landmarks and believes the system specified here contributes to tracking in AR applications in two key ways: it takes advantage of equipment already used for AR, and it has the potential to provide sufficient registration for demanding AR applications without imposing the limitations of current vision- based tracking systems.
Abstract: Tracking has proven a difficult problem to solve accurately without limiting the user or the application. Vision-based systems have shown promise, but are limited by occlusion of the landmarks. We introduce a new approach to vision-based tracking using structured light to generate landmarks. The novel aspect of this approach is that the system need not know the 3D locations of landmarks. This implies that motion within the field of view of the camera does not disturb tracking as long as landmarks are reflected off any surface into the camera. This dissertation specifies an algorithm which tracks a camera using structured light. A simulator demonstrates excellent performance on user motion data from an application currently limited by inaccurate tracking. Further analysis reveals directions for implementation of the system, theoretical limitations, and potential extensions to the algorithm. The term augmented reality (AR) has been given to applications that merge computer graphics with images of the user's surroundings. AR could give a doctor "X-ray vision" with which to examine the patient before or during surgery. At this point in time, AR systems have not been used in place of the traditional methods of performing medical or other tasks. One important problem that limits acceptance of AR systems is lack of precise registration (alignment) between real and synthetic objects. There are many components of an AR system that contribute to registration. One of the most important is the tracking system. The tracking data must be accurate, so that the real and synthetic objects are aligned properly. Our work in augmented reality focuses on medical applications. These require precise alignment of medical imagery with the physician's view of the patient. Although many technologies have been applied (mechanical, magnetic, optical, et al.), we have yet to find a system sufficiently accurate and robust to provide correct and reliable registration. We believe the system specified here contributes to tracking in AR applications in two key ways: it takes advantage of equipment already used for AR, and it has the potential to provide sufficient registration for demanding AR applications without imposing the limitations of current vision-based tracking systems.

227 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an architecture for multi-user augmented reality with applications in visualisation, presentation and education, which they call "Studierstube", which presents three-dimensional stereoscopic graphics simultaneously to a group of users wearing lightweight see-through head-mounted displays.

Abstract: We propose an architecture for multi-user augmented reality with applications in visualisation, presentation and education, which we call "Studierstube". Our system presents three-dimensional stereoscopic graphics simultaneously to a group of users wearing lightweight see-through head-mounted displays. The displays do not affect natural communication and interaction, making working together very effective. Users see the same spatially aligned model, but can independently control their viewpoint and different layers of the data to be displayed. The setup serves computer supported cooperative work and enhances cooperation of visualisation experts. This paper presents the client-server software architecture underlying this system and details that must be addressed to create a high-quality augmented reality setup.

222 citations


Proceedings ArticleDOI
19 Oct 1998
TL;DR: This paper reports the outcomes of a set of trials using an off-the-shelf wearable computer, equipped with a custom-built navigation software package, "map-in-the-hat", to provide visual navigation aids to users.

Abstract: To date, augmented reality systems have typically operated in only a small defined area, on the order of a large room. This paper reports on our investigation into expanding augmented realities to outdoor environments. The project entails providing visual navigation aids to users. A wearable computer system with a see-through display, digital compass, and a differential GPS are used to provide visual cues while performing a standard orienteering task. This paper reports the outcomes of a set of trials using an off-the-shelf wearable computer, equipped with a custom-built navigation software package, "map-in-the-hat".
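The visual navigation cue in such a system reduces to comparing the GPS-derived bearing to the next waypoint against the digital-compass heading. A hedged flat-earth sketch (function name assumed; valid only over the small areas an orienteering task covers, not a general geodesic solution):

```python
import math

def cue_direction(lat, lon, wp_lat, wp_lon, heading_deg):
    """Relative bearing (degrees, -180..180) from the user's compass
    heading to a GPS waypoint, using a local flat-earth approximation.

    Negative means "turn left", positive means "turn right".
    """
    d_north = wp_lat - lat
    # Longitude degrees shrink with latitude; scale by cos(lat).
    d_east = (wp_lon - lon) * math.cos(math.radians(lat))
    bearing = math.degrees(math.atan2(d_east, d_north)) % 360
    return (bearing - heading_deg + 180) % 360 - 180
```

A see-through display would then draw the cue arrow rotated by this relative bearing.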

191 citations


Proceedings ArticleDOI
14 Mar 1998
TL;DR: This paper describes the AR²Hockey (Augmented Reality AiR Hockey) system, where players can share a physical game field and mallets, and a virtual puck, to play an air-hockey game, as a case study of a collaborative AR system.

Abstract: Introduces a collaborative augmented reality (AR) system for real-time interactive operations. AR enables us to enhance physical space with computer-generated virtual space. In addition, collaborative AR allows multiple participants to simultaneously share the physical space surrounding them and a virtual space that is visually registered with the physical one. They can also communicate with each other through the mixed space. This paper describes the AR²Hockey (Augmented Reality AiR Hockey) system, where players can share a physical game field and mallets, and a virtual puck, to play an air-hockey game, as a case study of a collaborative AR system. Since real-time accurate registration between both spaces and players is crucial for the collaboration, a video-rate registration algorithm is implemented with magnetic head-trackers and video cameras attached to optical see-through head-mounted displays (HMDs). The configuration of the system and the details of the registration are described. Our experimental collaborative AR system achieves higher interactivity than a totally immersive collaborative VR system.

Proceedings ArticleDOI
19 Oct 1998
TL;DR: This paper introduces an assistant for playing the real-space game Patrol, and continues augmented reality research, started in 1995, for binding virtual data to physical locations.
Abstract: Small, body-mounted video cameras enable a different style of wearable computing interface. As processing power increases, a wearable computer can spend more time observing its user to provide serendipitous information, manage interruptions and tasks, and predict future needs without being directly commanded by the user. This paper introduces an assistant for playing the real-space game Patrol. This assistant tracks the wearer's location and current task through computer vision techniques and without off-body infrastructure. In addition, this paper continues augmented reality research, started in 1995, for binding virtual data to physical locations.

Proceedings ArticleDOI
24 May 1998
TL;DR: Instead of immersing people in an artificially-created virtual world, the goal is to augment objects in the physical world by enhancing them with a wealth of digital information and communication capabilities.
Abstract: A revolution in computer interface design is changing the way we think about computers. Rather than typing on a keyboard and watching a television monitor, Augmented Reality lets people use familiar, everyday objects in ordinary ways. The difference is that these objects also provide a link into a computer network. Doctors can examine patients while viewing superimposed medical images; children can program their own LEGO constructions; construction engineers can use ordinary paper engineering drawings to communicate with distant colleagues. Rather than immersing people in an artificially-created virtual world, the goal is to augment objects in the physical world by enhancing them with a wealth of digital information and communication capabilities.

Proceedings ArticleDOI
01 Jan 1998
TL;DR: It is argued that augmented reality is more promising than the current strategies that seek to replace flight strips with keyboard/monitor interfaces and an exploration of the design space, with active participation from the controllers, is essential not only for designing particular artifacts, but also for understanding the strengths and limitations of augmented reality in general.
Abstract: This paper describes our exploration of a design space for an augmented reality prototype. We began by observing air traffic controllers and their interactions with paper flight strips. We then worked with a multi-disciplinary team of researchers and controllers over a period of a year to brainstorm and prototype ideas for enhancing paper flight strips. We argue that augmented reality is more promising (and simpler to implement) than the current strategies that seek to replace flight strips with keyboard/monitor interfaces. We also argue that an exploration of the design space, with active participation from the controllers, is essential not only for designing particular artifacts, but also for understanding the strengths and limitations of augmented reality in general.

Journal ArticleDOI
TL;DR: A new approach to video-based augmented reality that avoids both camera calibration and Euclidean 3D measurements is described, which is readily implementable, imposes minimal computational and hardware requirements, and generates real-time and accurate video overlays even when the camera parameters vary dynamically.
Abstract: Camera calibration and the acquisition of Euclidean 3D measurements have so far been considered necessary requirements for overlaying three-dimensional graphical objects with live video. We describe a new approach to video-based augmented reality that avoids both requirements: it does not use any metric information about the calibration parameters of the camera or the 3D locations and dimensions of the environment's objects. The only requirement is the ability to track across frames at least four fiducial points that are specified by the user during system initialization and whose world coordinates are unknown. Our approach is based on the following observation: given a set of four or more noncoplanar 3D points, the projection of all points in the set can be computed as a linear combination of the projections of just four of the points. We exploit this observation by: tracking regions and color fiducial points at frame rate; and representing virtual objects in a non-Euclidean, affine frame of reference that allows their projection to be computed as a linear combination of the projection of the fiducial points. Experimental results on two augmented reality systems, one monitor-based and one head-mounted, demonstrate that the approach is readily implementable, imposes minimal computational and hardware requirements, and generates real-time and accurate video overlays even when the camera parameters vary dynamically.
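The paper's key observation, that under an affine camera the projection of any point in the affine frame spanned by four non-coplanar fiducials is a linear combination of the fiducials' own projections, can be written directly in code (function name assumed):

```python
import numpy as np

def project_affine(fiducial_proj, affine_coords):
    """Project a virtual point given only the image projections of four
    non-coplanar fiducials q0..q3 and the point's affine coordinates
    (a1, a2, a3) in the frame they span.

    Under an affine camera, projecting p0 + a1(p1-p0) + a2(p2-p0) +
    a3(p3-p0) yields q0 + a1(q1-q0) + a2(q2-q0) + a3(q3-q0), so no
    camera calibration or metric 3D coordinates are needed.
    """
    q = np.asarray(fiducial_proj, float)   # shape (4, 2): tracked projections
    a = np.asarray(affine_coords, float)   # shape (3,): affine coordinates
    return q[0] + a @ (q[1:] - q[0])
```

Tracking the four fiducials each frame and re-evaluating this combination is what lets the overlay stay registered even as the camera parameters vary.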

Proceedings ArticleDOI
19 Oct 1998
TL;DR: This paper describes how the computing power of wearables can be used to provide spatialized 3D graphics and audio cues to aid communication; the result is a wearable augmented reality communication space with audio-enabled avatars of the remote collaborators surrounding the user.

Abstract: Wearable computers provide constant access to computing and communications resources. In this paper we describe how the computing power of wearables can be used to provide spatialized 3D graphics and audio cues to aid communication. The result is a wearable augmented reality communication space with audio-enabled avatars of the remote collaborators surrounding the user. The user can use natural head motions to attend to the remote collaborators, can communicate freely while being aware of other side conversations, and can move through the communication space. In this way the conferencing space can support dozens of simultaneous users. Informal user studies suggest that wearable communication spaces may offer several advantages, both through the increased amount of accessible information and through the naturalness of the interface.

Proceedings ArticleDOI
02 Nov 1998
TL;DR: An augmented reality setup for multiple users with see-through head-mounted displays, allowing dedicated stereoscopic views and individualized interaction for each user, and a layering concept allowing individual views onto the common data structure is introduced.

Abstract: We introduce a local collaborative environment for gaming. In our setup, multiple users can interact with the virtual game and the real surroundings at the same time. They are able to communicate with other players during the game. We describe an augmented reality setup for multiple users with see-through head-mounted displays, allowing dedicated stereoscopic views and individualized interaction for each user. We use face-snapping for fast and precise direct object manipulation. With face-snapping and the subdivision of the gaming space into spatial regions, the semantics of actions can be derived from the geometric actions of the user. Further, we introduce a layering concept allowing individual views onto the common data structure. The layering concept makes privacy management easy: one simply manipulates the common data structure. Moreover, when layers are carefully assigned to spatial regions, explicit privacy management is often unnecessary. Moving objects from one region into another will automatically change their visibility and privacy for each participant. We demonstrate our system with two example board games, Virtual Roulette and MahJongg, both relying heavily on social communication and the need for a private space.
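The region-to-layer visibility idea can be sketched as a small data structure: moving an object between spatial regions changes which layer, and hence which players, can see it. The names below are illustrative, not from the paper:

```python
class GameSpace:
    """Sketch of the layering concept: each spatial region is assigned a
    layer, and a layer has a set of players allowed to view it."""

    def __init__(self):
        self.region_layer = {}    # region name -> layer name
        self.layer_viewers = {}   # layer name  -> set of players who see it
        self.object_region = {}   # object name -> region it currently occupies

    def can_see(self, player, obj):
        layer = self.region_layer[self.object_region[obj]]
        return player in self.layer_viewers[layer]

space = GameSpace()
space.region_layer = {"table": "shared", "rack-A": "private-A"}
space.layer_viewers = {"shared": {"A", "B"}, "private-A": {"A"}}
space.object_region = {"tile-1": "rack-A"}
print(space.can_see("B", "tile-1"))   # B cannot see A's private tile
space.object_region["tile-1"] = "table"
print(space.can_see("B", "tile-1"))   # now on the shared table: visible
```

Note how no explicit privacy check is written per object; visibility falls out of which region the object occupies, mirroring the paper's observation.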

Proceedings ArticleDOI
01 Apr 1998
TL;DR: Thinking Tags, small, name-tag sized computers that communicate with each other via infrared, are used to add a thin layer of computation to participants' social interactions, transforming a group of people into participants in a dynamic simulation.

Abstract: New technology developed at the MIT Media Laboratory enables students to become active participants in life-sized, computational simulations of dynamic systems. These Participatory Simulations provide an individual, "first-person" perspective on the system, just as acting in Hamlet provides such a perspective on Shakespeare. Using our Thinking Tags, small, name-tag sized computers that communicate with each other via infrared, we add a thin layer of computation to participants' social interactions, transforming a group of people into participants in a dynamic simulation. Participants in these simulations get highly engaged in the activities and collaboratively study the underlying systemic model.

01 Jan 1998
TL;DR: Techniques that can free the user from restrictive requirements such as working in calibrated environments, results with haptic interface technology incorporated into augmented reality domains, and systems considerations that underlie the practical realization of these interactive augmented reality techniques are presented.

Abstract: Augmented reality is the merging of synthetic sensory information into a user's perception of a real environment. Until recently, it has presented a passive interface to its human users, who were merely viewers of the scene augmented only with visual information. In contrast, practically since its inception, computer graphics (and its outgrowth into virtual reality) has presented an interactive environment. It is our thesis that the augmented reality interface can be made interactive. We present: techniques that can free the user from restrictive requirements such as working in calibrated environments; results with haptic interface technology incorporated into augmented reality domains; and systems considerations that underlie the practical realization of these interactive augmented reality techniques.

Journal ArticleDOI
TL;DR: The basic technologies of augmented reality are discussed, augmented reality systems currently being used in the medical domain are examined, and some future uses of these systems in orthopaedic applications are explored.
Abstract: Augmented reality is a display technique that combines supplemental information with the real world environment. Augmented reality systems are on the verge of everyday use in medical training, preoperative planning, preoperative and intraoperative data visualization, and intraoperative tool guidance. The basic technologies of augmented reality are discussed, augmented reality systems currently being used in the medical domain are examined, and some future uses of these systems in orthopaedic applications are explored.

Proceedings ArticleDOI
01 Apr 1998
TL;DR: Criteria useful for the design of such interpersonal augmentation, experiences that inform the principles, and initial evidence of their success are put forward.
Abstract: We have built a set of computationally-augmented nametags capable of providing information about the relationship between two people engaged in a face-to-face conversation. This paper puts forward criteria useful for the design of such interpersonal augmentation, experiences that inform the principles, and initial evidence of their success.

Proceedings ArticleDOI
Steve Mann1
19 Oct 1998
TL;DR: The operational principles of 'WearCam', the basis for wearable tetherless computer-mediated reality, are disclosed, both in its idealized form and in some practical embodiments of the invention, including one of its latest embodiments.
Abstract: The purpose of this paper is to disclose the operational principles of 'WearCam', the basis for wearable tetherless computer-mediated reality, both in its idealized form and in some practical embodiments of the invention, including one of its latest embodiments. The specific inner workings of WearCam, in particular the details of its optical arrangement, have not previously been disclosed, other than by allowing a small number of individuals to look inside the glasses. General considerations, background, and relevant findings in the area of long-term use of wearable, tetherless computer-mediated reality are also presented. Some general insight (arising from having designed and built more than 100 different kinds of personal imaging systems over the last 20 years) is also provided. Unlike the artificiality of many controlled laboratory experiments, much of the insight gained from these experiences relates to the natural complexity of real-life situations.

Journal ArticleDOI
TL;DR: The authors review some of the research involving AR systems, basic system configurations, image-registration approaches, and technical problems involved with AR technology, and touch upon the requirements for an interventive AR system, which can help guide surgeons in executing a surgical plan.
Abstract: Augmented reality (AR) is a technology in which a computer-generated image is superimposed onto the user's vision of the real world, giving the user additional information generated from the computer model. This technology is different from virtual reality, in which the user is immersed in a virtual world generated by the computer. Rather, the AR system brings the computer into the "world" of the user by augmenting the real environment with virtual objects. Using an AR system, the user's view of the real world is enhanced. This enhancement may be in the form of labels, 3D rendered models, or shaded modifications. In this article, the authors review some of the research involving AR systems, basic system configurations, image-registration approaches, and technical problems involved with AR technology. They also touch upon the requirements for an interventive AR system, which can help guide surgeons in executing a surgical plan.

Journal ArticleDOI
TL;DR: This paper describes the Shared Space concept—the application of Augmented Reality for three-dimensional CSCW, and presents preliminary results which show that this approach may be better for some applications.
Abstract: Virtual Reality (VR) appears a natural medium for three-dimensional computer supported collaborative work (CSCW). However the current trend in CSCW is to adapt the computer interface to work with the user's traditional tools, rather than separating the user from the real world as does immersive VR. One solution is through Augmented Reality, the overlaying of virtual objects on the real world. In this paper we describe the Shared Space concept--the application of Augmented Reality for three-dimensional CSCW. This combines the advantages of Virtual Reality with current CSCW approaches. We describe a collaborative experiment based on this concept and present preliminary results which show that this approach may be better for some applications.


Proceedings ArticleDOI
01 Apr 1998
TL;DR: Following the fundamental constraints of a natural way of interacting, a set of recommendations for the next generation of user interfaces, the Natural User Interface (NUI), is derived.

Abstract: It is time to go beyond the established approaches in human-computer interaction. With the Augmented Reality (AR) design strategy, humans are able to behave as naturally as possible: as they do in the real world with other humans and/or real-world objects. Following the fundamental constraints of a natural way of interacting, we derive a set of recommendations for the next generation of user interfaces: the Natural User Interface (NUI). The concept of NUI is presented in the form of a runnable demonstrator: a computer vision-based interaction technique for a planning tool for construction and design tasks.

Book ChapterDOI
11 Oct 1998
TL;DR: In this article, the authors describe prototype Image Overlay systems, which superimpose 3D medical images such as CT reconstructions over the user's direct view of the real world, and present initial experimental results from those systems.
Abstract: Image Overlay is a computer display technique which superimposes computer images over the user’s direct view of the real world. The images are transformed in real-time so they appear to the user to be an integral part of the surrounding environment. By using Image Overlay with three-dimensional medical images such as CT reconstructions, a surgeon can visualize the data ‘in-vivo’, exactly positioned within the patient’s anatomy, and potentially enhance the surgeon’s ability to perform a complex procedure. This paper describes prototype Image Overlay systems and initial experimental results from those systems.

Patent
12 Feb 1998
TL;DR: An ophthalmic augmented reality environment is developed in order to allow for more precise laser treatment for ocular diseases, teaching, telemedicine, and real-time image measurement, analysis, and comparison as discussed by the authors.
Abstract: An ophthalmic augmented reality environment is developed in order to allow for (a) more precise laser treatment for ophthalmic diseases, (b) teaching, (c) telemedicine, and (d) real-time image measurement, analysis, and comparison. A preferred embodiment of the system is designed around a standard slit-lamp biomicroscope. The microscope is interfaced to a CCD camera, and the image is sent to a video capture board. A single computer workstation coordinates image capture, registration, and display. The captured image is registered with previously stored, montaged photographic and/or angiographic data, with superposition facilitated by fundus-landmark-based fast registration algorithms. The computer then drives a high intensity, VGA resolution video display with adjustable brightness and contrast attached to one of the oculars of the slit-lamp biomicroscope.

Book
01 Jan 1998
TL;DR: CommunityWare - concept and practice; bridging humans via agent networks; freeWalk - supporting casual meetings in a network; market-based QoS control for incorporating human preferences.

Abstract: CommunityWare - concept and practice; bridging humans via agent networks; freeWalk - supporting casual meetings in a network; market-based QoS control for incorporating human preferences; the knowledgeable community - knowledge sharing among humans; agent augmented reality - agents integrate the real world with cyberspace; ICMAS '96 mobile assistance project - massive mobile computing for communities.

Journal ArticleDOI
TL;DR: The authors discuss Studierstube, a low-cost augmented reality system that features true stereoscopy, 3D interaction, individual viewpoints and customized views for multiple users, and unhindered natural collaboration.
Abstract: The authors discuss Studierstube, a low-cost augmented reality system. The system features true stereoscopy, 3D interaction, individual viewpoints and customized views for multiple users, and unhindered natural collaboration.