
Showing papers on "Augmented reality published in 2005"


Journal ArticleDOI
TL;DR: The NavShoe device provides not only robust approximate position, but also an extremely accurate orientation tracker on the foot, which can greatly reduce the database search space for computer vision, making it much simpler and more robust.
Abstract: A navigation system that tracks the location of a person on foot is useful for finding and rescuing firefighters or other emergency first responders, or for location-aware computing, personal navigation assistance, mobile 3D audio, and mixed or augmented reality applications. One of the main obstacles to the real-world deployment of location-sensitive wearable computing, including mixed reality (MR), is that current position-tracking technologies require an instrumented, marked, or premapped environment. At InterSense, we've developed a system called NavShoe, which uses a new approach to position tracking based on inertial sensing. Our wireless inertial sensor is small enough to easily tuck into the shoelaces, and sufficiently low power to run all day on a small battery. Although it can't be used alone for precise registration of close-range objects, in outdoor applications augmenting distant objects, a user would barely notice the NavShoe's meter-level error combined with any error in the head's assumed location relative to the foot. NavShoe can greatly reduce the database search space for computer vision, making it much simpler and more robust. The NavShoe device provides not only robust approximate position, but also an extremely accurate orientation tracker on the foot.

1,432 citations
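The abstract does not spell out NavShoe's algorithm, but shoe-mounted inertial trackers of this kind commonly rely on zero-velocity updates (ZUPTs): whenever the foot is detected to be flat on the ground, the tracker knows its true velocity is zero and resets it, which keeps double-integration drift from growing without bound. A minimal one-dimensional sketch of that idea, with synthetic signal values and an invented stance threshold (not taken from the paper):

```python
# Zero-velocity-update (ZUPT) dead reckoning, 1-D sketch.
# During stance (|accel| near zero for a sample), velocity is reset to 0,
# preventing the double-integration drift from growing without bound.

DT = 0.01             # sample period in seconds (assumed 100 Hz IMU)
STANCE_THRESH = 0.05  # |accel| below this counts as "foot at rest" (m/s^2)

def zupt_track(accels):
    """Integrate acceleration to position, zeroing velocity at stance."""
    vel, pos = 0.0, 0.0
    for a in accels:
        if abs(a) < STANCE_THRESH:   # stance phase detected
            vel = 0.0                # zero-velocity update
        else:
            vel += a * DT            # integrate acceleration -> velocity
        pos += vel * DT              # integrate velocity -> position
    return pos

# Synthetic step: accelerate forward, decelerate, then stand still.
step = [1.0] * 50 + [-1.0] * 50 + [0.0] * 100
print(round(zupt_track(step), 3))
```

Without the stance-phase reset, the small bias in every acceleration sample would be integrated twice, so the position error would grow quadratically with time instead of being cancelled at each footfall.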


Journal ArticleDOI
TL;DR: Virtual reality (VR) for improved performance of MIS is now a reality; however, VR is only a training tool that must be thoughtfully introduced into a surgical training curriculum for it to successfully improve surgical technical skills.
Abstract: Summary Background Data: To inform surgeons about the practical issues to be considered for successful integration of virtual reality simulation into a surgical training program. The learning and practice of minimally invasive surgery (MIS) makes unique demands on surgical training programs. A decade ago Satava proposed virtual reality (VR) surgical simulation as a solution for this problem. Only recently have robust scientific studies supported that vision.

950 citations


Book
31 Aug 2005
TL;DR: This survey reviews the different techniques and approaches that have been developed by industry and research on 3D tracking and includes a comprehensive study of the massive literature on the subject.
Abstract: Many applications require tracking of complex 3D objects. These include visual servoing of robotic arms on specific target objects, Augmented Reality systems that require real-time registration of the object to be augmented, and head tracking systems that sophisticated interfaces can use. Computer Vision offers solutions that are cheap, practical and non-invasive. This survey reviews the different techniques and approaches that have been developed by industry and research. First, important mathematical tools are introduced: camera representation, robust estimation and uncertainty estimation. Then a comprehensive study is given of the numerous approaches developed by the Augmented Reality and Robotics communities, beginning with those that are based on point or planar fiducial marks and moving on to those that avoid the need to engineer the environment by relying on natural features such as edges, texture or interest points. Recent advances that avoid manual initialization and failures due to fast motion are also presented. The survey concludes with the different possible choices that should be made when implementing a 3D tracking system and a discussion of the future of vision-based 3D tracking. Because it encompasses many computer vision techniques from low-level vision to 3D geometry and includes a comprehensive study of the massive literature on the subject, this survey should be the handbook of the student, the researcher, or the engineer who wants to implement a 3D tracking system.

741 citations
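Among the "important mathematical tools" the survey introduces is robust estimation. The hypothesize-and-verify strategy behind RANSAC-style estimators can be sketched on a toy problem (the correspondences and tolerance below are invented, not from the survey): each point match proposes a 2-D translation, and the hypothesis supported by the largest consensus set wins, so gross outliers are voted down:

```python
# Hypothesize-and-verify robust estimation (the idea behind RANSAC),
# applied to a toy problem: recover a 2-D translation from point matches
# that include gross outliers. Each match hypothesizes a translation; the
# hypothesis with the largest consensus (inlier count) wins.

TOL = 0.1  # inlier tolerance in scene units

def robust_translation(matches):
    """matches: list of ((x, y), (x', y')) correspondences."""
    best, best_inliers = None, -1
    for (ax, ay), (bx, by) in matches:
        tx, ty = bx - ax, by - ay          # hypothesis from one match
        inliers = sum(
            1 for (px, py), (qx, qy) in matches
            if abs(qx - px - tx) < TOL and abs(qy - py - ty) < TOL
        )
        if inliers > best_inliers:
            best, best_inliers = (tx, ty), inliers
    return best, best_inliers

# Three consistent matches (translation (2, 1)) and one gross outlier.
matches = [((0, 0), (2, 1)), ((1, 0), (3, 1)),
           ((0, 1), (2, 2)), ((5, 5), (9, 0))]
t, n = robust_translation(matches)
print(t, n)   # the outlier's hypothesis gathers only its own vote
```

A real tracker would estimate a full 6-DoF camera pose from 2D-3D correspondences and sample hypotheses randomly rather than exhaustively, but the consensus-voting principle is the same.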


Book
01 Jan 2005
TL;DR: This book discusses spatial augmented reality approaches that exploit optical elements, video projectors, holograms, radio frequency tags, and tracking technology, as well as interactive rendering algorithms and calibration techniques in order to embed synthetic supplements into the real environment or into a live video of thereal environment.
Abstract: Like virtual reality, augmented reality is becoming an emerging platform in new application areas for museums, edutainment, home entertainment, research, industry, and the art communities using novel approaches which have taken augmented reality beyond traditional eye-worn or hand-held displays. In this book, the authors discuss spatial augmented reality approaches that exploit optical elements, video projectors, holograms, radio frequency tags, and tracking technology, as well as interactive rendering algorithms and calibration techniques in order to embed synthetic supplements into the real environment or into a live video of the real environment. Special Features: - Comprehensive overview - Detailed mathematical equations - Code fragments - Implementation instructions - Examples of Spatial AR displays

717 citations


Journal ArticleDOI
01 Jul 2005
TL;DR: This article gives an introduction and overview of the field of pervasive gaming, an emerging genre in which traditional, real-world games are augmented with computing functionality, or, depending on the perspective, purely virtual computer entertainment is brought back to the real world.
Abstract: This article gives an introduction and overview of the field of pervasive gaming, an emerging genre in which traditional, real-world games are augmented with computing functionality, or, depending on the perspective, purely virtual computer entertainment is brought back to the real world. The field of pervasive games is diverse in the approaches and technologies used to create new and exciting gaming experiences that profit by the blend of real and virtual game elements. We explicitly look at the pervasive gaming sub-genres of smart toys, affective games, tabletop games, location-aware games, and augmented reality games, and discuss them in terms of their benefits and critical issues, as well as the relevant technology base.

386 citations


Journal ArticleDOI
TL;DR: A number of automatic and semi-automatic reconstruction methods are reviewed in more detail in order to reveal their underlying principles and some general properties of reconstruction approaches which have evolved.

312 citations


Proceedings ArticleDOI
05 Oct 2005
TL;DR: A custom port of the ARToolKit library to the Symbian mobile phone operating system is created, and a sample collaborative AR game is developed based on it; the game and feedback from users who played it are described in detail.
Abstract: Mobile phones are an ideal platform for augmented reality. In this paper we describe how they also can be used to support face to face collaborative AR applications. We have created a custom port of the ARToolKit library to the Symbian mobile phone operating system and then developed a sample collaborative AR game based on this. We describe the game in detail and user feedback from people who have played it. We also provide general design guidelines that could be useful for others who are developing mobile phone collaborative AR applications.

289 citations


Book ChapterDOI
08 May 2005
TL;DR: A system architecture for interactive, infrastructure-independent multi-user AR applications running on off-the-shelf handheld devices is presented and a four-user interactive game installation is implemented as an evaluation setup to encourage playful engagement of participants in a cooperative task.
Abstract: Augmented Reality (AR) can naturally complement mobile computing on wearable devices by providing an intuitive interface to a three-dimensional information space embedded within physical reality. Unfortunately, current wearable AR systems are relatively complex, expensive, fragile and heavy, rendering them unfit for large-scale deployment involving untrained users outside constrained laboratory environments. Consequently, the scale of collaborative multi-user experiments has not yet exceeded a handful of participants. In this paper, we present a system architecture for interactive, infrastructure-independent multi-user AR applications running on off-the-shelf handheld devices. We implemented a four-user interactive game installation as an evaluation setup to encourage playful engagement of participants in a cooperative task. Over the course of five weeks, more than five thousand visitors from a wide range of professional and socio-demographic backgrounds interacted with our system at four different locations.

284 citations


Journal ArticleDOI
TL;DR: This article describes how a multi-disciplinary research team transformed core MR technology and methods into diverse urban terrain applications that are used for military training and situational awareness, as well as for community learning to significantly increase the entertainment, educational, and satisfaction levels of existing experiences in public venues.
Abstract: Transferring research from the laboratory to mainstream applications requires the convergence of people, knowledge, and conventions from divergent disciplines. Solutions involve more than combining functional requirements and creative novelty. To transform technical capabilities of emerging mixed reality (MR) technology into the mainstream involves the integration and evolution of unproven systems. For example, real-world applications require complex scenarios (a content issue) involving an efficient iterative pipeline (a production issue) and driving the design of a story engine (a technical issue) that provides an adaptive experience with an after-action review process (a business issue). This article describes how a multi-disciplinary research team transformed core MR technology and methods into diverse urban terrain applications. These applications are used for military training and situational awareness, as well as for community learning to significantly increase the entertainment, educational, and satisfaction levels of existing experiences in public venues.

269 citations


Patent
02 Aug 2005
TL;DR: In this paper, a portable device configured to provide an augmented reality experience is provided, which includes an image capture device associated with the display screen, and an image generation logic is configured to incorporate an additional image into the real world scene.
Abstract: A portable device configured to provide an augmented reality experience is provided. The portable device has a display screen configured to display a real world scene. The device includes an image capture device associated with the display screen. The image capture device is configured to capture image data representing the real world scene. The device includes image recognition logic configured to analyze the image data representing the real world scene. Image generation logic responsive to the image recognition logic is included. The image generation logic is configured to incorporate an additional image into the real world scene. A computer readable medium and a system providing an augmented reality environment are also provided.

248 citations


Journal ArticleDOI
TL;DR: The authors select 10 AR projects from those they have managed during more than five years of research, development, and deployment of AR systems in the automotive, aviation, and astronautics industries, examining the main challenges faced and sharing some of the lessons learned.
Abstract: The 2003 International Symposium on Mixed and Augmented Reality was accompanied by a workshop on potential industrial applications. The organizers wisely called it potential because the real use of augmented reality (AR) in an industrial context is still in its infancy. Our own experience in this field clearly supports this viewpoint. We have been actively involved in the research, development, and deployment of AR systems in the automotive, aviation, and astronautics industries for more than five years and have developed and implemented AR systems in a wide variety of environments while working at DaimlerChrysler in Germany. In this article we have selected 10 AR projects from those we have managed and implemented in the past to examine the main challenges we faced and to share some of the lessons we learned.

Patent
16 Nov 2005
TL;DR: In this paper, a landscape detector is provided that can obtain information about the user's landscape, in addition to the user's location, in order to provide overlaying information to an AR head-mounted display and control information to non-user controlled video game characters.
Abstract: Handheld location based games are provided in which a user's physical location correlates to the virtual location of a virtual character on a virtual playfield. Augmented Reality (AR) systems are provided in which video game indicia are overlaid onto a user's physical environment. A landscape detector is provided that may obtain information about the user's landscape, in addition to the user's location, in order to provide overlaying information to an AR head-mounted display and control information to non-user controlled video game characters.

Journal ArticleDOI
TL;DR: It is proposed that this approach can help to build a bridge between the analytic and inspirational approaches to design and can help designers meet the challenges raised by a diversification of sensing technologies and interface forms, increased mobility, and an emerging focus on technologies for everyday life.
Abstract: Movements of interfaces can be analyzed in terms of whether they are expected, sensed, and desired. Expected movements are those that users naturally perform; sensed are those that can be measured by a computer; and desired movements are those that are required by a given application. We show how a systematic comparison of expected, sensed, and desired movements, especially with regard to how they do not precisely overlap, can reveal potential problems with an interface and also inspire new features. We describe how this approach has been applied to the design of three interfaces: pointing flashlights at walls and posters in order to play sounds; the Augurscope II, a mobile augmented reality interface for outdoors; and the Drift Table, an item of furniture that uses load sensing to control the display of aerial photographs. We propose that this approach can help to build a bridge between the analytic and inspirational approaches to design and can help designers meet the challenges raised by a diversification of sensing technologies and interface forms, increased mobility, and an emerging focus on technologies for everyday life.
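The expected/sensed/desired analysis lends itself to a set-based caricature: movements falling in one set but not another flag design problems or opportunities. The movement names below are invented for illustration, not taken from the article:

```python
# The expected/sensed/desired movement analysis, caricatured as set
# comparison: the non-overlaps reveal interface problems and features.
# All movement names here are illustrative, not from the article.

expected = {"point", "sweep", "shake"}      # what users naturally perform
sensed   = {"point", "sweep", "tilt"}       # what the hardware can measure
desired  = {"point", "tilt", "nod"}         # what the application requires

# Desired but not sensed: the application cannot work as designed.
impossible = desired - sensed
# Expected and sensed but not desired: candidate for a new feature.
opportunity = (expected & sensed) - desired
# Desired and sensed but not expected: users must learn an unnatural move.
unnatural = (desired & sensed) - expected

print(sorted(impossible), sorted(opportunity), sorted(unnatural))
```

Running this on a real design would mean enumerating the three sets through user observation, sensor specs, and application requirements; the value of the exercise lies in forcing those three inventories to be written down and compared.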

Proceedings ArticleDOI
20 Jun 2005
TL;DR: The algorithm moves beyond the limits of a static environment to make real-time color compensation in a dynamic environment possible and can be applied broadly to assist HCI, visualization, shape recovery, and entertainment applications.
Abstract: Projection systems can be used to implement augmented reality, as well as to create both displays and interfaces on ordinary surfaces. Ordinary surfaces have varying reflectance, color, and geometry. Current methods use a camera to account for these variations, but are fundamentally limited since they assume the camera, projector, and scene are static. In this article, we describe a technique for photometrically adaptive projection that makes it possible to handle a dynamic environment. We begin by presenting a co-axial projector-camera system. It consists of a camera and beam splitter, which attaches to an off-the-shelf projector. The co-axial design makes geometric calibration scene-independent. To handle photometric changes, our method uses the errors between the desired and measured appearance of the projected image. A key novel aspect of our algorithm is that we combine a physics-based model with dynamic feedback to achieve real time adaptation to the changing environment. We verify our algorithm through a wide variety of experiments. We show that it is accurate and runs in real-time. Our algorithm moves beyond the limits of a static environment to make real-time color compensation in a dynamic environment possible. It can be applied broadly to assist HCI, visualization, shape recovery, and entertainment applications.
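The article's combination of a physics-based model with dynamic feedback can be caricatured in a few lines: model each pixel's measured appearance as an (unknown) surface reflectance times the projected intensity, and let the error between desired and measured appearance drive the next projector frame. The gain and reflectance values below are illustrative, not from the article:

```python
# Closed-loop photometric compensation, toy version: the camera measures
# the projected result, and the error between desired and measured
# appearance drives the next projector frame. The true surface
# reflectance is hidden from the controller.

def compensate(desired, reflectance, gain=0.5, steps=50):
    projected = desired[:]                     # initial guess
    for _ in range(steps):
        # camera measurement: reflectance attenuates the projected light
        measured = [r * p for r, p in zip(reflectance, projected)]
        # feedback: push the projected image against the observed error
        projected = [p + gain * (d - m)
                     for p, d, m in zip(projected, desired, measured)]
    return [r * p for r, p in zip(reflectance, projected)]

desired     = [0.8, 0.5, 0.3]   # target appearance per pixel
reflectance = [0.9, 0.6, 0.5]   # unknown, non-uniform surface

result = compensate(desired, reflectance)
print([round(x, 3) for x in result])   # converges toward the target
```

Each iteration multiplies the per-pixel error by (1 - gain * reflectance), so the loop converges whenever that factor has magnitude below one; a dynamic scene simply means the loop keeps re-converging as the reflectance changes under it.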

01 Jan 2005
TL;DR: The development of an interactive visualization system based on Augmented Reality technologies and its integration into a tourist application; the real scene is enhanced with personalized interactive multimedia information, which the user retrieves through a user-friendly interface to enrich the tourist experience.
Abstract: This paper describes the development of an interactive visualization system based on Augmented Reality technologies and its integration into a tourist application. The basic idea is the combination of the commonly known concept of tourist binoculars with Augmented Reality. By means of Augmented Reality, the real scene is enhanced with personalized interactive multimedia information that enriches the tourist experience; the user retrieves this information through a user-friendly interface.

Journal ArticleDOI
TL;DR: This paper presents an innovative 3D reconstruction of ancient fresco paintings through the real‐time revival of their fauna and flora, featuring groups of virtual animated characters with artificial‐life dramaturgical behaviours in an immersive, fully mobile augmented reality (AR) environment.
Abstract: This paper presents an innovative 3D reconstruction of ancient fresco paintings through the real-time revival of their fauna and flora, featuring groups of virtual animated characters with artificial-life dramaturgical behaviours in an immersive, fully mobile augmented reality (AR) environment. The main goal is to push the limits of current AR and virtual storytelling technologies and to explore the processes of mixed narrative design of fictional spaces (e.g. fresco paintings) where visitors can experience a high degree of realistic immersion. Based on a captured/real-time video sequence of the real scene in a video-see-through HMD set-up, these scenes are enhanced by the seamless accurate real-time registration and 3D rendering of realistic complete simulations of virtual flora and fauna (virtual humans and plants) in a real-time storytelling scenario-based environment. Thus the visitor of the ancient site is presented with an immersive and innovative multi-sensory interactive trip to the past.

Journal ArticleDOI
TL;DR: The challenge has been to demonstrate that remote programming combined with an advanced multimedia user interface for remote control is very suitable, flexible, and profitable for the design of a telelaboratory.
Abstract: In this paper, we present the user interface and the system architecture of an Internet-based telelaboratory, which allows researchers and students to remotely control and program two educational online robots. In fact, the challenge has been to demonstrate that remote programming combined with an advanced multimedia user interface for remote control is very suitable, flexible, and profitable for the design of a telelaboratory. The user interface has been designed by using techniques based on augmented reality and nonimmersive virtual reality, which enhance the way operators get/put information from/to the robotic scenario. Moreover, the user interface provides the possibility of letting the operator manipulate the remote environment by using multiple ways of interaction (i.e., from the simplification of the natural language to low-level remote programming). In fact, the paper focuses on the lowest level of interaction between the operator and the robot, which is remote programming. As explained in the paper, the system architecture permits any external program (i.e., remote experiment, speech-recognition module, etc.) to have access to almost every feature of the telelaboratory (e.g., cameras, object recognition, robot control, etc.). The system validation was performed by letting 40 Ph.D. students within the "European Robotics Research Network Summer School on Internet and Online Robots for Telemanipulation" workshop (Benicàssim, Spain, 2003) program several telemanipulation experiments with the telelaboratory. Some of these experiments are shown and explained in detail. Finally, the paper focuses on the analysis of the network performance for the proposed architecture (i.e., time delay). In fact, several configurations are tested through various networking protocols (i.e., Remote Method Invocation, Transmission Control Protocol/IP, User Datagram Protocol/IP).
Results show the real possibilities offered by these remote-programming techniques, in order to design experiments intended to be performed from both home and the campus.
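The paper's network-performance analysis compares time delay across RMI, TCP/IP and UDP/IP. The simplest such measurement, timing a UDP echo round trip, can be sketched as follows; both endpoints live in one process on the loopback interface purely to keep the example self-contained, which is not how the paper's experiments were run:

```python
# Round-trip-time measurement over UDP on the loopback interface, a toy
# version of the kind of latency benchmarking the paper describes.
import socket
import time

def udp_roundtrip(payload=b"ping"):
    """Time one UDP echo over loopback; returns (echoed bytes, RTT secs)."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))           # let the OS pick a free port
    server.settimeout(2.0)
    addr = server.getsockname()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(2.0)

    t0 = time.perf_counter()
    client.sendto(payload, addr)            # client -> "robot" side
    data, peer = server.recvfrom(1024)
    server.sendto(data, peer)               # echo straight back
    echoed, _ = client.recvfrom(1024)
    rtt = time.perf_counter() - t0

    server.close()
    client.close()
    return echoed, rtt

echoed, rtt = udp_roundtrip()
print(echoed, f"{rtt * 1000:.3f} ms")
```

A TCP variant would add connection setup and in-order delivery guarantees to the measured delay, which is exactly the kind of protocol trade-off the paper's time-delay analysis quantifies.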

01 Jan 2005
TL;DR: This survey finds that the work is progressing along three complementary lines of effort: those that study low-level tasks, with the goal of understanding how human perception and cognition operate in AR contexts, those that examine user task performance within specific AR applications or application domains, and those that examine user interaction and communication between collaborating users.
Abstract: Although augmented reality (AR) was first conceptualized over 35 years ago (Sutherland, 1968), until recently the field was primarily concerned with the engineering challenges associated with developing AR hardware and software. Because AR is such a compelling medium with many potential uses, there is a need to further develop AR systems from a technology-centric medium to a user-centric medium. This transformation will not be realized without systematic user-based experimentation. This paper surveys and categorizes the user-based studies that have been conducted using AR to date. Our survey finds that the work is progressing along three complementary lines of effort: (1) those that study low-level tasks, with the goal of understanding how human perception and cognition operate in AR contexts, (2) those that examine user task performance within specific AR applications or application domains, in order to gain an understanding of how AR technology could impact underlying tasks, and (3) those that examine user interaction and communication between collaborating users.

Journal ArticleDOI
TL;DR: This study showed that training with a computer simulator, just as with the CVT, resulted in a reproducible training effect, and showed that skills learned in virtual reality are transferable to the physical reality of a CVT.
Abstract: Indications for laparoscopic interventions are constantly expanding, and currently the standard for an increasing number of operations is minimally invasive surgery (MIS). This means that surgeons in training have to learn laparoscopic techniques, having never performed the procedure in the conventional way. MIS demands psychomotor skills that are not required in conventional surgery, such as hand–eye coordination within a 3-dimensional scene seen on a 2-dimensional monitor. Moreover, traditional surgical training in which a learning surgeon performs parts of an operation guided by an experienced surgeon is hardly applicable in MIS. The introduction of MIS was initially associated with a high complication rate.1,2 The term “learning curve” was introduced to surgery to refer to the number of operations a surgeon has to perform to reach an experience level with a low complication rate.3 Depending on the type of operation, 15 to 100 procedures are required to reach the plateau of this learning curve.4–6 Further studies showed that even experienced laparoscopic surgeons had to go through a learning curve again when they learned new laparoscopic techniques or used new instruments.7 This led to the development of special training programs, which are associated with certain problems. More realistic training usually involves training on animals, which is elaborate, expensive, and not available to many surgeons. Moreover, this training does not include pathologic situations and anatomic variations, and thus, does not allow specific training for difficult situations. Basic training is carried out on conventional video training devices (CVT) using mechanical models. This training is inexpensive and readily available but not realistic. To overcome these problems, virtual reality (VR) trainers were developed that offer several advantages: permanent availability, training of specific skills, more or less realistic surgical scenarios, assessment of trainees, etc. 
Recent improvements in computer technology made the construction of advanced simulators possible, and a number of companies are now offering VR-trainers. The increasing economic orientation of medicine in conjunction with shorter training schedules underscores the importance of specific training programs. This is especially true for MIS because of its special requirements. Despite the importance of training for MIS, there is little scientific knowledge about the learning mechanisms involved in acquiring laparoscopic skills and the methods suitable for training.8 Conventional laparoscopic training aims to overcome these problems by providing realistic training conditions using laparoscopic instruments in mechanical models. Basic and advanced tasks are chosen based on empirical considerations. As soon as the trainee has reached a certain level of expertise, the training is continued in animal models, which are assumed to provide the most realistic training conditions. VR-trainers are likely to be an integral part of MIS training in the near future. It is therefore essential to know whether a simulator can provide a training environment that is suitable to improve surgical skills. Several studies have recently been conducted that document learning curves and training improvement with simulators.9–14 However, the important question remains whether skills acquired during simulator training can be transferred to a real situation. There are few studies on this topic, and they do not provide uniform results.15–18 Virtual laparoscopy simulators aim to resemble real instrument handling and object interactions. The degree of physical realism of the simulation varies depending on the hardware and software capabilities of a simulator. Because of limited computing power, a simulator can only represent a part of physical reality. 
This means that certain limitations have to be accepted for any simulation, eg, simplified surfaces of objects or organs, simplified or no haptic feedback, limited visual details, etc. The decision as to which part of reality should be simulated is based on assumptions about how laparoscopic skills are acquired and which parts of physical reality are important for successful training. Currently, there is little scientific evidence to prove that these assumptions are correct. The study of Seymour et al impressively showed that the MIST VR simulator could improve operating room performance.19 Although this study examined the training success of a VR-simulator in general, it has not yet been thoroughly validated whether a specific simulator design and specific tasks represent the reality that is needed to train laparoscopic skills. The rapid technical development and increasing availability of simulators necessitates methods to validate whether a VR-trainer provides a suitable environment for MIS training and whether the acquired skills can be transferred to real situations. The aim of this study was to compare a VR-trainer with a physically realistic environment. For the purpose of this study, a CVT served as the counterpart. The study design included specific training tasks that were identically constructed for the VR-trainer and the CVT. In doing so, we were able to directly compare the computer trainer, which is based on virtual reality, with the conventional trainer, which is based on physical reality. The chosen tasks were part of the basic skills training program and included instrument and camera manipulation. We postulated 3 hypotheses for the current study: 1) Training results in conventional and VR-training are comparable. 2) Skills acquired on the VR-simulator are transferable to the physically realistic environment of a conventional video trainer. 
3) Laparoscopically experienced surgeons perform better than novices in conventional and VR-simulator training.

Journal ArticleDOI
TL;DR: In this paper, the authors used AR to treat phobias such as fear of flying, agoraphobia, claustrophobia, and phobia to insects and small animals.
Abstract: Virtual reality (VR) is useful for treating several psychological problems, including phobias such as fear of flying, agoraphobia, claustrophobia, and phobia to insects and small animals. We believe that augmented reality (AR) could also be used to treat some psychological disorders. AR and VR share some advantages over traditional treatments. However, AR gives a greater feeling of presence (the sensation of being there) and reality judgment (judging an experience as real) than VR because the environment and the elements the patient uses to interact with the application are real. Moreover, in AR users see their own hands, feet, and so on, whereas VR only simulates this experience. With these differences in mind, the question arises as to the kinds of psychological treatments AR and VR are most suited for. In our system, patients see their own hands, feet, and so on. They can touch the table the animals are crossing or see their feet while the animals are running on the floor. They can also hold a marker with a dead spider or cockroach or pick up a flyswatter, a can of insecticide, or a dustpan.

Journal ArticleDOI
TL;DR: The architecture of a system is described which uses augmented and virtual reality technologies to support the planning process of complex manufacturing systems; it assists the user in modeling, in validating the simulation model, and in the subsequent optimization of the production system.

Patent
03 May 2005
TL;DR: An augmented reality system comprises means for gathering image data of a real environment, means for generating virtual image data from said image data, and means for identifying a predefined marker object of the real environment based on the image data.
Abstract: An augmented reality system comprises means for gathering image data of a real environment, means for generating virtual image data from said image data, means for identifying a predefined marker object of the real environment based on the image data, and means for superimposing a set of object image data with the virtual image data at a virtual image position corresponding to the predefined marker object.
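The claim describes a recognizable pipeline: locate a predefined marker in the captured image data, then superimpose object image data at the corresponding position. A toy sketch of that pipeline with character "pixels" (the marker symbol, scene size, and overlay contents are all invented for illustration):

```python
# Minimal sketch of the patent's pipeline: find a predefined marker in
# the image data and superimpose object image data at the corresponding
# position. Images are 2-D lists of single-character "pixels"; the
# marker is the character 'M'.

def find_marker(image, marker="M"):
    """Return (row, col) of the first marker pixel, or None."""
    for r, row in enumerate(image):
        for c, px in enumerate(row):
            if px == marker:
                return r, c
    return None

def superimpose(image, overlay, pos):
    """Paste overlay into image with its top-left at pos (clipped)."""
    r0, c0 = pos
    out = [row[:] for row in image]            # work on a copy
    for r, row in enumerate(overlay):
        for c, px in enumerate(row):
            if 0 <= r0 + r < len(out) and 0 <= c0 + c < len(out[0]):
                out[r0 + r][c0 + c] = px
    return out

scene = [["." for _ in range(6)] for _ in range(4)]
scene[2][3] = "M"                              # the real-world marker
ghost = [["g", "g"], ["g", "g"]]               # virtual object image data

pos = find_marker(scene)
augmented = superimpose(scene, ghost, pos)
print("\n".join("".join(row) for row in augmented))
```

A real system would of course detect the marker's pose (not just a pixel position) from camera frames and warp the virtual image accordingly; the sketch only mirrors the claim's identify-then-superimpose structure.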

Journal ArticleDOI
TL;DR: This work has extended the molecular modeling environment, PMV, to support the fabrication of a wide variety of physical molecular models, and has adapted an augmented reality system to allow virtual 3D representations to be overlaid onto the tangible molecular models.

Journal ArticleDOI
TL;DR: The first case study in which AR has been used for the treatment of a specific phobia, cockroach phobia, is presented, describing an AR system that permits exposure to virtual cockroaches superimposed on the real world.
Abstract: Augmented reality (AR) refers to the introduction of virtual elements in the real world. That is, the person is seeing an image composed of a visualization of the real world, and a series of virtual elements that, at that same moment, are super-imposed on the real world. The most important aspect of AR is that the virtual elements supply to the person relevant and useful information that is not contained in the real world. AR has notable potential, and has already been used in diverse fields, such as medicine, the army, coaching, engineering, design, and robotics. Until now, AR has never been used in the scope of psychological treatment. Nevertheless, AR presents various advantages. Just like in the classical systems of virtual reality, it is possible to have total control over the virtual elements that are super-imposed on the real world, and how one interacts with those elements. AR could involve additional advantages; on one side it could be less expensive since it also uses the real world (this does n

Journal ArticleDOI
01 Jul 2005
TL;DR: DART allows designers to specify complex relationships between the physical and virtual worlds, and supports 3D animatic actors (informal, sketch-based content) in addition to more polished content.
Abstract: In this paper we describe The Designer's Augmented Reality Toolkit (DART) [MacIntyre et al. 2004]. DART is built on top of Macromedia Director, a widely used multimedia development environment. We summarize the most significant problems faced by designers working with AR in the real world, and discuss how DART addresses them. Most of DART is implemented in an interpreted scripting language, and can be modified by designers to suit their needs. Our work focuses on supporting early design activities, especially a rapid transition from storyboards to working experience, so that the experiential part of a design can be tested early and often. DART allows designers to specify complex relationships between the physical and virtual worlds, and supports 3D animatic actors (informal, sketch-based content) in addition to more polished content. Designers can capture and replay synchronized video and sensor data, allowing them to work off-site and to test specific parts of their experience more effectively.
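The capture-and-replay idea described above reduces to replaying timestamped sensor samples against a playback clock. A minimal sketch, with all names and the log format hypothetical (DART itself is scripted inside Macromedia Director):

```python
def due_samples(log, playback_time):
    """Return every recorded reading whose capture-relative timestamp
    (seconds from the start of the recording) has been reached at the
    given playback time."""
    return [reading for ts, reading in log if ts <= playback_time]

# Hypothetical capture: (seconds-from-start, tracker pose) pairs recorded
# alongside video, replayed later so a designer can test part of an
# experience off-site without live sensors.
log = [(0.0, "pose0"), (0.5, "pose1"), (1.2, "pose2")]
replayed = due_samples(log, 0.6)
```

A real replayer would interpolate between samples and keep video and sensor streams synchronized, but the core operation is this timestamp filter.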

Book ChapterDOI
05 Dec 2005
TL;DR: This work presents a new approach to determining and tracking areas of low visual interest based on feature density, and to automatically computing label layout from this information; the algorithm runs in under 5 ms per frame and gives the application designer flexible constraints for controlling label placement behaviour.
Abstract: Augmented reality (AR) provides an intuitive user interface to present information in the context of the real world. A common application is to overlay screen-aligned annotations for real world objects to create in-situ information displays for users. While the referenced object’s location is fixed in the view the annotating labels should be placed in such a way as to not interfere with other content of interest such as other labels or objects in the real world. We present a new approach to determine and track areas with less visual interest based on feature density and to automatically compute label layout from this information. The algorithm works in under 5ms per frame, which is fast enough that it can be used with existing AR systems. Moreover, it provides flexible constraints for controlling label placement behaviour to the application designer. The resulting overlays are demonstrated with a simple hand-held augmented reality system for information display in a lab environment.
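The core idea, placing each label in a nearby region with the fewest image features, can be sketched as a greedy search over candidate offsets. This is an illustrative simplification under assumed names (`feature_density`, `place_label`, fixed candidate offsets), not the paper's algorithm:

```python
def feature_density(features, cell, cell_size=10):
    """Count tracked image features falling inside a square cell
    anchored at the cell's top-left corner."""
    cx, cy = cell
    return sum(
        1 for (fx, fy) in features
        if cx <= fx < cx + cell_size and cy <= fy < cy + cell_size
    )

def place_label(anchor, features,
                offsets=((10, 0), (-10, 0), (0, 10), (0, -10)),
                cell_size=10):
    """Choose the candidate position whose cell contains the fewest
    features, i.e. the least visually interesting area near the
    annotated object."""
    ax, ay = anchor
    candidates = [(ax + dx, ay + dy) for dx, dy in offsets]
    return min(candidates, key=lambda c: feature_density(features, c, cell_size))

# Features cluster to the right of the anchor, so the label goes left.
feats = [(32, 5), (35, 7), (38, 3)]
best = place_label((20, 0), feats)
```

The paper additionally tracks these regions over time and exposes placement constraints; a production version would also penalize overlap with other labels.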

Patent
11 Feb 2005
TL;DR: In this paper, a method for augmented reality navigation of a medical intervention includes providing a stereoscopic head mounted display, the display including a pair of stereo viewing cameras, at least one tracking camera, and a stereoscopic guidance display.
Abstract: A method for augmented reality navigation of a medical intervention includes providing a stereoscopic head mounted display, the display including a pair of stereo viewing cameras, at least one tracking camera, and a stereoscopic guidance display. During a medical intervention on a patient, the patient's body pose is determined from a rigid body transformation between the tracking camera and frame markers on the scanning table, and the pose of an intervention instrument with respect to the table is determined. A visual representation of the patient overlaid with an image of the intervention target, the instrument, and a path for guiding the instrument to perform said medical intervention is displayed in the stereoscopic guidance display.
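The pose chain in this abstract (camera-to-table from the tracked frame markers, table-to-instrument from tracking) composes by multiplying rigid-body transforms. A minimal sketch with hypothetical numbers, using 4x4 homogeneous matrices; for brevity only translations are shown, though the same multiplication handles rotations:

```python
def matmul4(a, b):
    """Multiply two 4x4 homogeneous transforms (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Pure-translation rigid transform."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Hypothetical values: the scanning table sits 50 units in front of the
# tracking camera; the instrument is offset (5, -3, 10) from the table
# frame. Composing the two gives the instrument pose in the camera frame.
T_cam_table = translation(0, 0, 50)
T_table_instr = translation(5, -3, 10)
T_cam_instr = matmul4(T_cam_table, T_table_instr)
```

In practice each transform comes from marker tracking and includes a rotation block; libraries such as NumPy or SciPy's `Rotation` are used rather than hand-rolled multiplies.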

Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper looks at user interface issues where an AR enabled mobile phone acts as an interaction device and describes AR manipulation techniques implemented on a mobile phone and presents a small pilot study evaluating these methods.
Abstract: Augmented Reality (AR) on mobile phones has reached a level of maturity where it can be used as a tool for 3D object manipulation. In this paper we look at user interface issues where an AR enabled mobile phone acts as an interaction device. We discuss how traditional 3D manipulation techniques apply to this new platform. The high tangibility of the device and its button interface make it interesting to compare manipulation techniques. We describe AR manipulation techniques we have implemented on a mobile phone and present a small pilot study evaluating these methods.

Journal ArticleDOI
TL;DR: This paper details the deployment and evaluation of ec(h)o – an augmented audio reality system for museums, and explores the possibility of supporting a context-aware adaptive system by linking environment, interaction objects and users at an abstract semantic level instead of at the content level.
Abstract: Ubiquitous computing is a challenging area that allows us to further our understanding and techniques of context-aware and adaptive systems. Among the challenges is the general problem of capturing the larger context in interaction from the perspective of user modeling and human-computer interaction (HCI). The imperative to address this issue is great considering the emergence of ubiquitous and mobile computing environments. This paper describes how we addressed the specific problem of supporting functionality, as well as the experience design issues, related to museum visits through user modeling combined with an audio augmented reality and tangible user interface system. This paper details our deployment and evaluation of ec(h)o, an augmented audio reality system for museums. We explore the possibility of supporting a context-aware adaptive system by linking environment, interaction objects and users at an abstract semantic level instead of at the content level. From the user modeling perspective, ec(h)o is a knowledge-based recommender system. In this paper we present our findings from user testing and how our approach works well with an audio and tangible user interface within a ubiquitous computing system. We conclude by showing where further research is needed.
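Linking users and interaction objects "at an abstract semantic level instead of at the content level" can be illustrated as matching concept sets rather than content items. A toy sketch of a knowledge-based recommender in that spirit; the exhibit names, concept tags, and scoring are invented for illustration and are not ec(h)o's actual model:

```python
def recommend(user_concepts, objects, top_k=2):
    """Rank museum objects by overlap between the user's interest
    concepts and each object's concepts (semantic level, not content
    level); return up to top_k names with nonzero overlap."""
    scored = [
        (len(user_concepts & concepts), name)
        for name, concepts in objects.items()
    ]
    scored.sort(key=lambda s: (-s[0], s[1]))  # best overlap first, name ties
    return [name for score, name in scored[:top_k] if score > 0]

# Hypothetical exhibits tagged with abstract concepts.
exhibits = {
    "canoe":   {"transport", "craft", "water"},
    "mask":    {"ritual", "craft"},
    "harpoon": {"hunting", "water"},
}
picks = recommend({"water", "craft"}, exhibits)
```

ec(h)o itself builds the user model from observed interaction with tangible objects and delivers the recommendations as audio, but the semantic-matching principle is the same.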

Journal ArticleDOI
01 Feb 2005
TL;DR: In augmented reality (AR) interfaces, three-dimensional virtual images appear superimposed over real objects in head-mounted or handheld displays to make computer graphics appear in the user's environment.
Abstract: Most interactive computer graphics appear on a screen separate from the real world and the user's surroundings. However, this does not always have to be the case. In augmented reality (AR) interfaces, three-dimensional virtual images appear superimposed over real objects. AR applications typically use head-mounted or handheld displays to make computer graphics appear in the user's environment.