
Showing papers on "Augmented reality published in 2004"


Book
23 Jul 2004
TL;DR: This book discusses 3D user interfaces, the history and roadmap of 3D UIs, and strategies for designing and developing 3D user interfaces.
Abstract: From video games to mobile augmented reality, 3D interaction is everywhere. But simply choosing to use 3D input or 3D displays isn't enough: 3D user interfaces (3D UIs) must be carefully designed for optimal user experience. 3D User Interfaces: Theory and Practice, Second Edition is today's most comprehensive primary reference to building outstanding 3D UIs. Four pioneers in 3D user interface research and practice have extensively expanded and updated this book, making it today's definitive source for all things related to state-of-the-art 3D interaction.

1,806 citations


Proceedings ArticleDOI
05 Apr 2004
TL;DR: A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented, and the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.
Abstract: A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed in galleries, and simultaneously on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.

357 citations


Proceedings ArticleDOI
02 Nov 2004
TL;DR: A complete system architecture for fully automated markerless augmented reality that constructs a sparse metric model of the real-world environment, provides interactive means for specifying the pose of a virtual object, and performs model-based camera tracking with visually pleasing augmentation results is presented.
Abstract: We present a complete system architecture for fully automated markerless augmented reality (AR). The system constructs a sparse metric model of the real-world environment, provides interactive means for specifying the pose of a virtual object, and performs model-based camera tracking with visually pleasing augmentation results. Our approach does not require camera pre-calibration, prior knowledge of scene geometry, manual initialization of the tracker or placement of special markers. Robust tracking in the presence of occlusions and scene changes is achieved by using highly distinctive natural features to establish image correspondences.

330 citations
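
The natural-feature correspondence step described above can be sketched compactly. Below is a minimal illustration using OpenCV's ORB detector and RANSAC outlier rejection; the library choice, file names, and thresholds are our assumptions for illustration, not tools the paper specifies:

    import cv2
    import numpy as np

    # Load a keyframe of the modeled scene and the current camera frame
    # (file names are placeholders).
    ref = cv2.imread("keyframe.png", cv2.IMREAD_GRAYSCALE)
    cur = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # Detect highly distinctive natural features in both images.
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    kp_cur, des_cur = orb.detectAndCompute(cur, None)

    # Establish image correspondences by descriptor matching.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_cur), key=lambda m: m.distance)

    src = np.float32([kp_ref[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_cur[m.trainIdx].pt for m in matches])

    # RANSAC discards mismatches, which is what gives this style of
    # tracking its robustness to occlusions and scene changes.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)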


Proceedings ArticleDOI
15 Jun 2004
TL;DR: This paper uses image processing software and finger- and hand-based fiducial markers to track gestures from the user, stencil buffering to enable the user to see their fingers at all times, and fingertip-based haptic feedback devices to enable the user to feel virtual objects.
Abstract: This paper presents a technique for natural, fingertip-based interaction with virtual objects in Augmented Reality (AR) environments. We use image processing software and finger- and hand-based fiducial markers to track gestures from the user, stencil buffering to enable the user to see their fingers at all times, and fingertip-based haptic feedback devices to enable the user to feel virtual objects. Unlike previous AR interfaces, this approach allows users to interact with virtual content using natural hand gestures. The paper describes how these techniques were applied in an urban planning interface, and also presents preliminary informal usability results.

310 citations
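
The stencil-buffering trick above, which keeps the user's real fingers visible in front of virtual content, amounts to a per-pixel mask test. Here is a software sketch of the same idea in numpy, assuming a binary finger mask from the hand tracker; the system itself does this with stencil buffering on the graphics hardware:

    import numpy as np

    def composite(video_frame, virtual_layer, finger_mask):
        """Overlay virtual content everywhere except where fingers appear.

        video_frame, virtual_layer: HxWx3 uint8 arrays; finger_mask: HxW
        bool, True where the tracked fingers are. Masked pixels always
        show the camera image, mimicking a stencil test.
        """
        out = video_frame.copy()
        draw = ~finger_mask & (virtual_layer.sum(axis=2) > 0)
        out[draw] = virtual_layer[draw]
        return out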


Book
31 Aug 2004
TL;DR: This paper reviews AR applications in common manufacturing activities such as product design, robotics, facilities layout planning, maintenance, CNC machining simulation, and assembly planning, and addresses some of the issues and future trends of AR technology.
Abstract: Augmented Reality (AR) is a fast-rising technology that has been applied in many fields such as gaming, learning, entertainment, medicine, the military, and sports. This paper reviews some of the academic studies of AR applications in manufacturing operations. Manufacturing is comparatively less addressed, owing to stringent requirements for high accuracy, fast response, and alignment with industrial standards and practices, so that users do not face a drastic transition when adopting this new technology. This paper looks into common manufacturing activities such as product design, robotics, facilities layout planning, maintenance, CNC machining simulation and assembly planning. Some of the issues and future trends of AR technology are also addressed.

256 citations


Journal ArticleDOI
TL;DR: Augmented reality is an effective tool in executing surgical procedures requiring low-performance surgical dexterity; it remains a science determined mainly by stereotactic registration and ergonomics.
Abstract: Objective: To evaluate the history and current knowledge of computer-augmented reality in the field of surgery and its potential goals in education, surgeon training, and patient treatment. Data Sources: National Library of Medicine's database and additional library searches. Study Selection: Only articles suited to surgical sciences with a well-defined aim of study, methodology, and precise description of outcome were included. Data Synthesis: Augmented reality is an effective tool in executing surgical procedures requiring low-performance surgical dexterity; it remains a science determined mainly by stereotactic registration and ergonomics. Strong evidence was found that it is an effective teaching tool for training residents. Weaker evidence was found to suggest a significant influence on surgical outcome, both morbidity and mortality. No evidence of cost-effectiveness was found. Conclusions: Augmented reality is a new approach to executing detailed surgical operations. Although its application is in a preliminary stage, further research is needed to evaluate its long-term clinical impact on patients, surgeons, and hospital administrators. Its widespread use and the universal transfer of such technology remain limited until there is a better understanding of registration and ergonomics.

248 citations


Proceedings ArticleDOI
24 Oct 2004
TL;DR: DART allows designers to specify complex relationships between the physical and virtual worlds, and supports 3D animatic actors (informal, sketch-based content) in addition to more polished content.
Abstract: In this paper, we describe The Designer's Augmented Reality Toolkit (DART). DART is built on top of Macromedia Director, a widely used multimedia development environment. We summarize the most significant problems faced by designers working with AR in the real world, and discuss how DART addresses them. Most of DART is implemented in an interpreted scripting language, and can be modified by designers to suit their needs. Our work focuses on supporting early design activities, especially a rapid transition from story-boards to working experience, so that the experiential part of a design can be tested early and often. DART allows designers to specify complex relationships between the physical and virtual worlds, and supports 3D animatic actors (informal, sketch-based content) in addition to more polished content. Designers can capture and replay synchronized video and sensor data, allowing them to work off-site and to test specific parts of their experience more effectively.

242 citations
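
One building block behind DART's synchronized capture and replay is looking up the recorded sensor sample nearest to the current video playback time. A tiny sketch of that step, in Python for illustration only (DART itself is scripted inside Macromedia Director):

    import bisect

    def sample_at(timestamps, values, t):
        """Return the recorded sensor value nearest to playback time t,
        so replayed sensor data stays synchronized with replayed video."""
        i = bisect.bisect_left(timestamps, t)
        if i == 0:
            return values[0]
        if i == len(values):
            return values[-1]
        before, after = timestamps[i - 1], timestamps[i]
        return values[i] if after - t < t - before else values[i - 1]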


Proceedings ArticleDOI
25 Apr 2004
TL;DR: Papier-Mâché introduces a high-level event model for working with computer vision, electronic tags, and barcodes that facilitates technology portability; the input abstractions, technology portability, and monitoring window were found to be highly effective.
Abstract: Tangible user interfaces (TUIs) augment the physical world by integrating digital information with everyday physical objects. Currently, building these UIs requires "getting down and dirty" with input technologies such as computer vision. Consequently, only a small cadre of technology experts can currently build these UIs. Based on a literature review and structured interviews with nine TUI researchers, we created Papier-Mâché, a toolkit for building tangible interfaces using computer vision, electronic tags, and barcodes. Papier-Mâché introduces a high-level event model for working with these technologies that facilitates technology portability. For example, an application can be prototyped with computer vision and deployed with RFID. We present an evaluation of our toolkit with six class projects and a user study with seven programmers, finding the input abstractions, technology portability, and monitoring window to be highly effective.

242 citations
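
The heart of the toolkit is its technology-independent event model: applications subscribe to events about physical objects, and the sensing backend (vision, RFID, barcodes) can be swapped without touching application code. The Python sketch below is our illustration of that idea, not the toolkit's actual API; "Phob" (physical object) is, as we understand it, the paper's own term:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class PhobEvent:
        """An event about a physical object ('Phob')."""
        object_id: str
        kind: str  # "added" or "removed"

    class EventBus:
        def __init__(self) -> None:
            self.handlers: List[Callable[[PhobEvent], None]] = []

        def subscribe(self, handler: Callable[[PhobEvent], None]) -> None:
            self.handlers.append(handler)

        def publish(self, event: PhobEvent) -> None:
            for handler in self.handlers:
                handler(event)

    bus = EventBus()
    bus.subscribe(lambda e: print(e.object_id, e.kind))
    # A vision backend and an RFID backend would both call publish(),
    # so swapping technologies leaves application code untouched.
    bus.publish(PhobEvent("mug-42", "added"))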


Proceedings ArticleDOI
02 Nov 2004
TL;DR: This work presents a first running video see-through augmented reality system on a consumer cell-phone that supports the detection and differentiation of different markers, and correct integration of rendered 3D graphics into the live video stream via a weak perspective projection camera model and an OpenGL rendering pipeline.
Abstract: We present a first running video see-through augmented reality system on a consumer cell-phone. It supports the detection and differentiation of different markers, and correct integration of rendered 3D graphics into the live video stream via a weak perspective projection camera model and an OpenGL rendering pipeline.

209 citations
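
A weak perspective camera model, as used above, replaces the per-point perspective divide with a single global scale at the average scene depth, which is cheap enough for 2004-era phone CPUs. A small numpy sketch of that projection follows; the symbols are the standard textbook ones, not taken from the paper:

    import numpy as np

    def weak_perspective(points, R, t2, focal, z_avg):
        """Project Nx3 object points with a weak perspective model:
        rotate, orthographically drop z, then apply one global scale
        s = focal / z_avg instead of a per-point perspective divide."""
        s = focal / z_avg
        cam = points @ R.T          # rotate into camera coordinates
        return s * cam[:, :2] + t2  # scaled orthographic projection

    cube = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    print(weak_perspective(cube, np.eye(3), np.array([88., 72.]), 300.0, 50.0))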


Journal ArticleDOI
TL;DR: In this paper, an augmented reality system capable of nanolithography and manipulation of nano-particles is proposed, enabling the operator to perform several operations without the need for a new image scan.
Abstract: Using atomic force microscopy (AFM) as a nanomanipulation tool has been discussed for more than a decade. However, its lack of real-time visual feedback during manipulation has hindered its wide application. Fortunately, this problem has been overcome by our recently developed augmented reality system. By locally updating the AFM image based on real-time force information during manipulation, this new system provides not only real-time force feedback but also real-time visual feedback. The real-time visual display combined with the real-time force feedback provides an augmented reality environment, in which the operator can both feel the interaction forces and observe the real-time changes of the nano-environment. This augmented reality system, capable of nanolithography and manipulation of nano-particles, helps the operator perform several operations without the need for a new image scan, which makes AFM-based nano-assembly feasible and applicable.

208 citations
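
The key idea, updating only the manipulated region of the AFM image from real-time force data instead of rescanning, can be sketched schematically. The circular-patch geometry and constant-height update below are our simplification for illustration:

    import numpy as np

    def update_patch(afm_image, tip_xy, radius, new_height):
        """Refresh only the image region touched by the manipulation,
        so the display stays current without a full AFM rescan."""
        h, w = afm_image.shape
        yy, xx = np.ogrid[:h, :w]
        mask = (xx - tip_xy[0]) ** 2 + (yy - tip_xy[1]) ** 2 <= radius ** 2
        afm_image[mask] = new_height
        return afm_image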


Proceedings ArticleDOI
02 Nov 2004
TL;DR: The functionalities and the interaction design for the proposed authoring system that are specifically targeted for intuitive specification of scenes and various object behaviors are described.
Abstract: In this paper, we suggest a new approach for authoring tangible augmented reality applications, called 'immersive authoring.' The approach allows the user to carry out the authoring tasks within the AR application being built, so that the development and testing of the application can be done concurrently throughout the development process. We describe the functionalities and the interaction design for the proposed authoring system that are specifically targeted for intuitive specification of scenes and various object behaviors. Several cases of applications developed using the authoring system are presented. A small pilot user study was conducted to compare the proposed method to a non-immersive approach, and the results have shown that the users generally found it easier and faster to carry out authoring tasks in the immersive environment.

Journal ArticleDOI
TL;DR: This work uses a novel way of personalizing the head related transfer functions (HRTFs) from a database, based on anatomical measurements, to create virtual auditory spaces by rendering cues that arise from anatomical scattering, environmental scattering, and dynamical effects.
Abstract: High-quality virtual audio scene rendering is required for emerging virtual and augmented reality applications, perceptual user interfaces, and sonification of data. We describe algorithms for creation of virtual auditory spaces by rendering cues that arise from anatomical scattering, environmental scattering, and dynamical effects. We use a novel way of personalizing the head related transfer functions (HRTFs) from a database, based on anatomical measurements. Details of algorithms for HRTF interpolation, room impulse response creation, HRTF selection from a database, and audio scene presentation are presented. Our system runs in real time on an office PC without specialized DSP hardware.
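
The anatomical-scattering cue is delivered by filtering each source with the left- and right-ear head-related impulse responses selected (and interpolated) for its direction. A minimal sketch of that rendering step, assuming the HRIRs have already been chosen from the personalized database:

    import numpy as np

    def render_binaural(mono, hrir_left, hrir_right):
        """Convolve a mono source with the per-ear head-related impulse
        responses for its direction; the two channels together carry the
        anatomical scattering cues the listener uses for localization."""
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        out = np.zeros((max(len(left), len(right)), 2))
        out[: len(left), 0] = left
        out[: len(right), 1] = right
        return out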

01 Jan 2004
TL;DR: An educational application that allows users to interact with 3D web content using virtual and augmented reality (AR) enables the potential benefits of Web3D and AR technologies in engineering education and learning to be explored.
Abstract: We present an educational application that allows users to interact with 3D web content using virtual and augmented reality (AR). This enables us to explore the potential benefits of Web3D and AR technologies in engineering education and learning. A lecturer's traditional delivery can be enriched by viewing multimedia content locally or over the Internet, as well as in a table-top AR environment. The implemented framework is composed of an XML data repository, an XML-based communications server, and an XML-based client visualisation application. In this paper we illustrate the architecture by configuring it to deliver multimedia content related to the teaching of mechanical engineering. We illustrate four mechanical engineering themes (machines, vehicles, Platonic solids and tools) to demonstrate use of the system to support learning through Web3D.

Proceedings ArticleDOI
02 Nov 2004
TL;DR: VITA (visual interaction tool for archaeology), an experimental collaborative mixed reality system for offsite visualization of an archaeological dig, focuses on augmenting existing archaeological analysis methods with new ways to organize, visualize, and combine the standard 2D information available from an excavation with textured, laser range-scanned 3D models of objects and the site itself.
Abstract: We present VITA (visual interaction tool for archaeology), an experimental collaborative mixed reality system for offsite visualization of an archaeological dig. Our system allows multiple users to visualize the dig site in a mixed reality environment in which tracked, see-through, head-worn displays are combined with a multi-user, multi-touch, projected table surface, a large screen display, and tracked hand-held displays. We focus on augmenting existing archaeological analysis methods with new ways to organize, visualize, and combine the standard 2D information available from an excavation (drawings, pictures, and notes) with textured, laser range-scanned 3D models of objects and the site itself. Users can combine speech, touch, and 3D hand gestures to interact multimodally with the environment. Preliminary user tests were conducted with archaeology researchers and students, and their feedback is presented here.

Proceedings ArticleDOI
02 Nov 2004
TL;DR: This paper presents a breakthrough in display hardware judged on mobility (i.e. compactness), resolution, and switching-speed criteria, and focuses on research related to virtual objects being able to occlude real objects.
Abstract: We are proposing a novel optical see-through head-worn display that is capable of mutual occlusions. Mutual occlusion is an attribute of an augmented reality display where real objects can occlude virtual objects and virtual objects can occlude real objects. For a user to achieve the perception of indifference between the real and the virtual images superimposed on the real environment, mutual occlusion is a strongly desired attribute for certain applications. This paper presents a breakthrough in display hardware in terms of mobility (i.e. compactness), resolution, and switching speed. Specifically, we focus on the research that is related to virtual objects being able to occlude real objects. The core of the system is a spatial light modulator (SLM) and polarization-based optics which allow us to block or pass certain parts of a scene viewed through the head-worn display. An objective lens images the scene onto the SLM, and the modulated image is mapped back to the original scene via an eyepiece. We combine computer-generated imagery with the modulated version of the scene to form the final image a user sees.
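
For the virtual-occludes-real case the paper focuses on, the SLM must block real light exactly where the virtual object will be drawn. Here is a schematic sketch of deriving a blocking pattern from a rendered virtual layer; this is a simplification of the optical pipeline, and the threshold is our assumption:

    import numpy as np

    def slm_blocking_pattern(virtual_rgba, threshold=0.5):
        """Wherever the rendered virtual layer is opaque, set the SLM
        pixel to block the real scene so the virtual pixel can occlude
        it; elsewhere the SLM passes the real scene through."""
        alpha = virtual_rgba[..., 3].astype(float) / 255.0
        return alpha > threshold  # True = block real light at this pixel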

Journal Article
TL;DR: A framework is introduced for mobile augmented reality audio (MARA) based on a specific headset configuration where binaural microphone elements are integrated into stereo earphones and listening test results show that the proposed system has interesting properties.
Abstract: The concept of augmented reality audio characterizes techniques where a real sound environment is extended with virtual auditory environments and communications scenarios. A framework is introduced for mobile augmented reality audio (MARA) based on a specific headset configuration where binaural microphone elements are integrated into stereo earphones. When microphone signals are routed directly to the earphones, a user is exposed to a pseudoacoustic representation of the real environment. Virtual sound events are then mixed with microphone signals to produce a hybrid, an augmented reality audio representation, for the user. An overview of related technology, literature, and application scenarios is provided. Listening test results with a prototype system show that the proposed system has interesting properties. For example, in some cases listeners found it very difficult to determine which sound sources in an augmented reality audio representation are real and which are virtual.
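
At its core, the pseudoacoustic pipeline is a mix of the binaural microphone feed with binaurally rendered virtual sound events. A minimal sketch follows; the gain and normalization strategy are our assumptions:

    import numpy as np

    def augment(mic_l, mic_r, virt_l, virt_r, virt_gain=1.0):
        """Mix the pseudoacoustic environment (binaural mic signals routed
        to the earphones) with virtual sound events, then normalize to
        avoid clipping on playback."""
        left = mic_l + virt_gain * virt_l
        right = mic_r + virt_gain * virt_r
        peak = max(np.abs(left).max(), np.abs(right).max(), 1.0)
        return left / peak, right / peak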

Proceedings ArticleDOI
15 Jun 2004
TL;DR: Based on an extension of 2D MagicLenses, techniques involving 3D lenses, information filtering and semantic zooming are developed that provide users with a natural, tangible interface for selectively zooming in and out of specific areas of interest in an Augmented Reality scene.
Abstract: In this paper we present new interaction techniques for virtual environments. Based on an extension of 2D MagicLenses, we have developed techniques involving 3D lenses, information filtering and semantic zooming. These techniques provide users with a natural, tangible interface for selectively zooming in and out of specific areas of interest in an Augmented Reality scene. They use rapid and fluid animation to help users assimilate the relationship between views of detailed focus and global context. As well as supporting zooming, the technique is readily applied to semantic information filtering, in which only the pertinent information subtypes within a filtered region are shown. We describe our implementations, preliminary user feedback and future directions for this research.

Proceedings ArticleDOI
15 Jun 2004
TL;DR: Five projects developed at the Human Interface Technology Laboratory in New Zealand that have explored different techniques for applying AR to educational exhibits appear to have educational benefits involving spatial, temporal and contextual conceptualisation and provide kinaesthetic, explorative and knowledge-challenging stimulus.
Abstract: Recent advances in computer graphics and interactive techniques have increased the visual quality and flexibility of Augmented Reality (AR) applications. This, in turn, has increased the viability of applying AR to educational exhibits for use in science centres, museums, libraries and other education centres. This article outlines a selection of five projects developed at the Human Interface Technology Laboratory in New Zealand (HIT Lab NZ) that have explored different techniques for applying AR to educational exhibits. These exhibits have received very positive feedback and appear to have educational benefits involving spatial, temporal and contextual conceptualisation, and they provide kinaesthetic, explorative and knowledge-challenging stimulus. The controls available to a user of turning a page, moving an AR marker, moving their head and moving a slider provide sufficient freedom to create many interaction scenarios that can serve educative outcomes. While the use of virtual media provides many advantages, creating new content is still quite difficult, requiring specialist software and skills. Usability observations are shared.

Journal ArticleDOI
TL;DR: SmartTouch uses optical sensors to gather information and electrical stimulation to translate it into tactile display, which makes physical contact with an object and touches the surface information of any modality, even those that are typically untouchable.
Abstract: Augmented haptics lets users touch surface information of any modality. SmartTouch uses optical sensors to gather information and electrical stimulation to translate it into tactile display. Augmented reality is an engineer's approach to this dream. In AR, sensors capture artificial information from the world, and existing sensing channels display it. Hence, we virtually acquire the sensor's physical ability as our own. Augmented haptics, the result of applying AR to haptics, would allow a person to touch the untouchable. Our system, SmartTouch, uses a tactile display and a sensor. When the sensor contacts an object, an electrical stimulation translates the acquired information into a tactile sensation, such as a vibration or pressure, through the tactile display. Thus, an individual not only makes physical contact with an object, but also touches the surface information of any modality, even those that are typically untouchable.

Proceedings ArticleDOI
02 Nov 2004
TL;DR: This paper applies miniature MEMS sensors to cockpit helmet-tracking for enhanced/synthetic vision by implementing algorithms for differential inertial tracking between helmet-mounted and aircraft-mounted inertial sensors, and novel optical drift correction techniques.
Abstract: One of the earliest fielded augmented reality applications was enhanced vision for pilots, in which a display projected on the pilot's visor provides geo-spatially registered information to help the pilot navigate, avoid obstacles, maintain situational awareness in reduced visibility, and interact with avionics instruments without looking down. This requires exceptionally robust and accurate head-tracking, for which no sufficient solution is yet available. In this paper, we apply miniature MEMS sensors to cockpit helmet-tracking for enhanced/synthetic vision by implementing algorithms for differential inertial tracking between helmet-mounted and aircraft-mounted inertial sensors, and novel optical drift correction techniques. By fusing low-rate inside-out and outside-in optical measurements with high-rate inertial data, we achieve millimeter position accuracy, milliradian angular accuracy, low latency, and high robustness using small and inexpensive sensors.
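
The fusion principle, integrating high-rate inertial data and correcting its drift with low-rate optical measurements, can be shown for a single axis with a complementary filter. This is a textbook sketch of the idea, not the paper's actual differential inertial algorithm:

    def fuse(angle_prev, gyro_rate, dt, optical_angle, k=0.02):
        """One-axis complementary filter: integrate the gyro at high rate,
        then pull the estimate gently toward the (slower, drift-free)
        optical measurement whenever one is available."""
        predicted = angle_prev + gyro_rate * dt             # inertial prediction
        return predicted + k * (optical_angle - predicted)  # drift correction

The small gain k keeps the optical correction from injecting jitter while still bounding the integrated gyro drift over time.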

Journal Article
TL;DR: In this paper, the authors present some important issues in choosing the set of gestures for an interface from a user-centred view, such as the learning rate, ergonomics, and intuition.
Abstract: Many disciplines of multimedia and communication are moving towards ubiquitous computing and hands-free, no-touch interaction with computers. Application domains in this direction involve virtual reality, augmented reality, wearable computing, and smart spaces, where gesturing is a possible method of interaction. This paper presents some important issues in choosing the set of gestures for the interface from a user-centred view, such as the learning rate, ergonomics, and intuition. A procedure is proposed that incorporates these issues into the selection of gestures and tests the resulting gesture set. The procedure is tested and demonstrated on an example application with a small test group. The procedure is concluded to be useful for finding a basis for the choice of gestures. The importance of tailoring the gesture vocabulary for the user group was also shown.


Proceedings ArticleDOI
27 Sep 2004
TL;DR: A novel approach for performing explosive ordnance disposal by use of a bimanual haptic telepresence system that enables an operator to perceive multimodal feedback from a remote environment for proper task execution is outlined.
Abstract: The paper outlines a novel approach to performing explosive ordnance disposal by use of a bimanual haptic telepresence system. This system enables an operator to perceive multimodal feedback from a remote environment for proper task execution. The developed experimental setup, comprising a bimanual human-system interface and the corresponding bimanual teleoperator for use of both hands, is presented in detail. The teleoperation control architecture is discussed, as well as a local model-based impedance control algorithm for manipulator control. Human-system performance is improved by means of stereo visualization of the tele-environment together with overlaid Augmented Reality assistance, and an algorithm for avoidance of dangerous manipulator configurations supported by augmented force feedback. Furthermore, a recently developed UDP communication library is presented for system interconnection, taking into account compression of haptic data; thus efficient, low-delay data transfer is ensured. The usability and effectiveness of the developed bimanual telepresence system are demonstrated by focusing on a relevant task scenario, such as demining operations in a remote environment.

Proceedings ArticleDOI
27 Oct 2004
TL;DR: This paper presents UMAR, a conceptual framework for developing Ubiquitous Mobile Augmented Reality applications, which consists of research areas identified as relevant for successfully bridging the physical world and the digital domain using Mobile Augmented Reality.
Abstract: In this paper we discuss the prospects of using marker-based Augmented Reality for context-aware applications on mobile phones. We also present UMAR, a conceptual framework for developing Ubiquitous Mobile Augmented Reality applications, which consists of research areas identified as relevant for successfully bridging the physical world and the digital domain using Mobile Augmented Reality. As a step towards this, we have successfully ported the ARToolkit to consumer mobile phones running on the Symbian platform and present results around this. We also present three sample applications based on UMAR and planned future case study work.

Proceedings ArticleDOI
02 Nov 2004
TL;DR: Four interactive tools are presented: the tunnel tool and room selector tool directly augment the user's view of the environment, allowing them to explore the scene in direct, first person view, and the room in miniature tool allows the user to select and interact with a room from a third person perspective.
Abstract: This paper presents a set of interactive tools designed to give users virtual x-ray vision. These tools address a common problem in depicting occluded infrastructure: either too much information is displayed, confusing users, or too little information is displayed, depriving users of important depth cues. Four tools are presented: the tunnel tool and room selector tool directly augment the user's view of the environment, allowing them to explore the scene in direct, first person view. The room in miniature tool allows the user to select and interact with a room from a third person perspective, allowing users to view the contents of the room from points of view that would normally be difficult or impossible to achieve. The room slicer tool aids users in exploring volumetric data displayed within the room in miniature tool. Used together, the tools presented in this paper can be used to achieve the virtual x-ray vision effect. We test our prototype system in a far-field mobile augmented reality setup, visualizing the interiors of a small set of buildings on the UCSB campus.

Journal ArticleDOI
TL;DR: This paper discusses industrial augmented reality, which lets users reconstruct virtual models of their area of interest and visualize models within their static views of a real scene.
Abstract: We discuss industrial augmented reality. Each industrial process imposes its own peculiar requirements. This creates the need for specialized technical solutions, which in turn poses new sets of challenges. Because most industries must concern themselves with at least some of these industrial procedures, we consider design, commissioning, manufacturing, quality control, training, monitoring and control, and service and maintenance. AR lets users reconstruct virtual models of their area of interest and visualize those models within their static views of a real scene.

Proceedings ArticleDOI
02 Nov 2004
TL;DR: The results show a specific distribution of tracking accuracy dependent on distance as well as angle between camera and marker, applicable for designing the set-up of AR applications in general that rely on optical tracking.
Abstract: Optical tracking with fiducial markers is commonly used in Augmented Reality (AR) systems. AR systems that rely on the ARToolKit [Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System] are prominent examples. The information obtained by the tracking subsystem is widely used in AR, e.g. in order to calculate how virtual objects should be located and oriented. The results of extensive accuracy experiments with single markers are reported and made operational by the definition of an accuracy function. The results show a specific distribution of tracking accuracy dependent on distance as well as angle between camera and marker. This insight is applicable to designing the set-up of AR applications in general that rely on optical tracking.
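
One plausible way to make such measurements "operational" as an accuracy function is to fit a smooth error model over the sampled distance/angle grid. The quadratic-surface form below is our assumption, not necessarily the paper's definition:

    import numpy as np

    def fit_accuracy(dist, angle, err):
        """Fit err ~ f(distance, angle) as a quadratic surface to the
        measured tracking errors, so expected accuracy can be queried
        when planning a marker set-up."""
        A = np.column_stack([np.ones_like(dist), dist, angle,
                             dist ** 2, angle ** 2, dist * angle])
        coeffs, *_ = np.linalg.lstsq(A, err, rcond=None)
        return coeffs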

Journal ArticleDOI
TL;DR: The Visual Interaction Platform (VIP) as discussed by the authors is a Natural User Interface (NUI) that builds on human skills of real-world object manipulation and enables unhindered human-human communication in a collaborative situation.
Abstract: The Visual Interaction Platform (VIP) is a Natural User Interface (NUI) that builds on human skills of real-world object manipulation and enables unhindered human-human communication in a collaborative situation. The existing VIP is being extended towards the VIP-3 to allow support for new kinds of interactions. An example of a natural augmented reality interface to be realized via the VIP-3 is a pen-and-paper interface that combines properties of real pen-and-paper with typical computer functionality for flexible re-use of information. Two ongoing VIP-3 research projects are also briefly discussed: the first project aims to develop supporting tools for early architectural design, while the second project aims at supporting 3D interaction for navigating and browsing through multidimensional data sets.

Proceedings ArticleDOI
27 Mar 2004
TL;DR: A real-time hybrid tracking system that integrates gyroscopes and line-based vision tracking technology that achieves robust, accurate, and real- time performance for outdoor augmented reality is presented.
Abstract: We present a real-time hybrid tracking system that integrates gyroscopes and line-based vision tracking technology. Gyroscope measurements are used to predict orientation and image line positions. Gyroscope drift is corrected by vision tracking. System robustness is achieved by using a heuristic control system to evaluate measurement quality and select measurements accordingly. Experiments show that the system achieves robust, accurate, and real-time performance for outdoor augmented reality.
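
Using gyroscope measurements to predict image line positions rests on the fact that, for a pure camera rotation, pixels map through the homography H = K R K^-1. A small sketch of that prediction for line endpoints, using a small-angle rotation approximation (standard symbols, not the paper's notation):

    import numpy as np

    def predict_pixels(pts, K, omega, dt):
        """Predict where image points (e.g. line endpoints) move after a
        small rotation measured by the gyroscopes: for pure rotation the
        pixel mapping is the homography H = K R K^-1."""
        wx, wy, wz = omega * dt
        R = np.array([[1.0, -wz,  wy],      # first-order (small-angle)
                      [ wz, 1.0, -wx],      # rotation matrix
                      [-wy,  wx, 1.0]])
        H = K @ R @ np.linalg.inv(K)
        ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
        return ph[:, :2] / ph[:, 2:3]

The vision tracker then searches for each line near its predicted position, and the measured offset is what corrects the accumulated gyroscope drift.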