
Showing papers on "Augmented reality published in 2006"


Journal ArticleDOI
TL;DR: Educational uses of virtual learning environments (VLEs) concerned with learning, training, and entertainment show that VLEs can be a means of enhancing, motivating, and stimulating learners' understanding of certain events, especially those for which traditional instructional approaches have proven inappropriate or difficult.

542 citations


Journal ArticleDOI
TL;DR: In this paper, nonlinear pose estimation is formulated by means of a virtual visual servoing approach and has been validated on several complex image sequences including outdoor environments.
Abstract: Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and low latency at reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. In this paper, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curves interaction matrices is given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
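The robustness step above — an M-estimator folded into the control law via iteratively reweighted least squares (IRLS) — can be illustrated in isolation. The sketch below is a generic Huber-weighted IRLS line fit on toy data, not the paper's visual servoing implementation; the function names, the Huber constant, and the data are illustrative choices.

```python
def weighted_line_fit(xs, ys, ws):
    """Closed-form weighted least squares for the line y = a*x + c."""
    sw = sum(ws)
    swx = sum(w * x for w, x in zip(ws, xs))
    swy = sum(w * y for w, y in zip(ws, ys))
    swxx = sum(w * x * x for w, x in zip(ws, xs))
    swxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    a = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    c = (swy - a * swx) / sw
    return a, c

def irls_line_fit(xs, ys, k=1.345, iters=20):
    """Robust fit: refit, then downweight large residuals (Huber M-estimator)."""
    ws = [1.0] * len(xs)
    a = c = 0.0
    for _ in range(iters):
        a, c = weighted_line_fit(xs, ys, ws)
        res = [y - (a * x + c) for x, y in zip(xs, ys)]
        med = sorted(abs(r) for r in res)[len(res) // 2]
        scale = 1.4826 * med + 1e-9   # robust scale estimate (MAD)
        ws = [1.0 if abs(r) <= k * scale else k * scale / abs(r) for r in res]
    return a, c

# Inliers on y = 2x + 1 plus one gross outlier (e.g. a mistracked edge point).
xs = [0.5 * i for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]
ys[5] += 50.0
a, c = irls_line_fit(xs, ys)  # recovers a close to 2, c close to 1
```

An ordinary least-squares fit on the same data would be dragged substantially off the true line by the single outlier; the Huber reweighting drives that point's influence toward zero over the iterations.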

490 citations


Journal ArticleDOI
TL;DR: Analysis of teacher–child dialogue in a comparative study between use of an AR virtual mirror interface and more traditional science teaching methods for 10-year-old children revealed that the children using AR were less engaged than those using traditional resources.
Abstract: The use of augmented reality (AR) in formal education could prove a key component in future learning environments that are richly populated with a blend of hardware and software applications. However, relatively little is known about the potential of this technology to support teaching and learning with groups of young children in the classroom. Analysis of teacher–child dialogue in a comparative study between use of an AR virtual mirror interface and more traditional science teaching methods for 10-year-old children revealed that the children using AR were less engaged than those using traditional resources. We suggest four design requirements that need to be considered if AR is to be successfully adopted into classroom practice. These requirements are: flexible content that teachers can adapt to the needs of their children, guided exploration so learning opportunities can be maximised in a limited time, and attention to the needs of institutional and curricular requirements.

465 citations


Proceedings ArticleDOI
22 Oct 2006
TL;DR: A model-based hybrid tracking system for outdoor augmented reality in urban environments enabling accurate, realtime overlays for a handheld device and the accuracy and robustness of the resulting system is demonstrated with comparisons to map-based ground truth data.
Abstract: This paper presents a model-based hybrid tracking system for outdoor augmented reality in urban environments enabling accurate, realtime overlays for a handheld device. The system combines several well-known approaches to provide a robust experience that surpasses each of the individual components alone: an edge-based tracker for accurate localisation, gyroscope measurements to deal with fast motions, measurements of gravity and magnetic field to avoid drift, and a back store of reference frames with online frame selection to re-initialize automatically after dynamic occlusions or failures. A novel edge-based tracker dispenses with the conventional edge model, and uses instead a coarse, but textured, 3D model. This yields several advantages: scale-based detail culling is automatic, appearance-based edge signatures can be used to improve matching and the models needed are more commonly available. The accuracy and robustness of the resulting system is demonstrated with comparisons to map-based ground truth data.
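The drift-avoidance role of the gravity and magnetic-field measurements can be illustrated with a one-dimensional complementary filter: integrate the fast-but-drifting gyroscope, and continuously pull the estimate a small amount toward the slow-but-absolute magnetometer heading. This is a generic sketch of the fusion idea (the 1-D state, constants, and synthetic bias are invented), not the tracker described in the paper.

```python
def complementary_heading(gyro_rates, mag_headings, dt=0.01, alpha=0.98):
    """Fuse gyro rates (rad/s) with magnetometer headings (rad):
    integrate the gyro for high-frequency response, blend in the
    magnetometer to cancel the drift the integration accumulates."""
    heading = mag_headings[0]
    trace = []
    for rate, mag in zip(gyro_rates, mag_headings):
        heading = alpha * (heading + rate * dt) + (1.0 - alpha) * mag
        trace.append(heading)
    return trace

# A stationary user: true heading 1.0 rad, gyro has a constant 0.5 rad/s bias.
n = 1000
biased_gyro = [0.5] * n
magnetometer = [1.0] * n
fused = complementary_heading(biased_gyro, magnetometer)
drift_only = 1.0 + sum(r * 0.01 for r in biased_gyro)  # pure integration drifts to 6.0
```

Pure gyro integration drifts without bound (here to 6.0 rad after 10 s), while the fused estimate settles at a small, bounded offset from the true heading — the essential reason absolute references are combined with inertial sensing.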

442 citations


Journal ArticleDOI
TL;DR: A competency-based training curriculum for novice laparoscopic surgeons has been defined to ensure that junior trainees have acquired prerequisite levels of skill prior to entering the operating room, where they can put those skills directly into practice.
Abstract: The implementation of a competency-based surgical skills curriculum necessitates the development of tools to enable structured training, with in-built objective measures of assessment.1 Simulation in the form of virtual reality and synthetic models has been proposed for technical skills training at the early part of the learning curve.2–4 To be efficacious, these tools must convey a sense of realism and a degree of standardization to enable graded acquisition of technical skills. Progression along the curriculum is charted by passing predefined expert benchmark criteria, which lead to more technically demanding tasks. In the laparoscopic era, training on inanimate video trainers, and more recently on virtual reality simulators, has been shown to improve skills performance in the operating room.5,6 Nevertheless, structured training programs utilizing these tools do not exist and have not been validated in terms of which tasks should be performed, at which level, for how long, how often, and to which set of benchmark criteria. The aim of this paper was to develop an evidence-based virtual reality training program for the initial acquisition of technical skill, leading to a basic level of proficiency prior to entering the operating theater. Basic and procedural tasks can be simulated in a high-fidelity virtual environment that closely resembles the operative field. Virtual tissues can be manipulated, clipped and cut, and incorporated into a recognizable simulation of Calot triangle dissection, which bleeds and can respond to diathermy (Figs. 1, 2). At the end of each task, performance can be measured using parameters such as time taken, number of errors made, and path length for each hand. This makes it possible to chart the performance of a trainee surgeon along the curriculum and define the attainment of proficiency. FIGURE 1. The “Cutting” task on the LapSim virtual reality simulator. FIGURE 2. The “Dissection” task on the LapSim virtual reality laparoscopic simulator.
The structured curriculum can enable trainees to be confident in their skills prior to assisting in and performing the initial laparoscopic procedures, safe in the knowledge that they have achieved preset expert criteria. The ultimate aim is to reduce their learning curve on real patients, leading to acquisition of proficiency at an earlier stage than training on patients alone. Airline pilots become proficient at flying an aeroplane before even leaving the ground, acquiring skills on a high-fidelity flight simulator. The analogous situation should now be possible for the early part of the learning curve in laparoscopic surgery. This may lead to a reduction in the number of unnecessary complications occurring due to a failure of technical skills,7 and in the time and expense spent acquiring basic laparoscopic skills in the operating room.8

319 citations


Patent
Juha Henrik Arrasvuori1
19 Sep 2006
TL;DR: In this paper, the authors present a system that facilitates shopping for a tangible object via a network using a mobile device: a graphical representation of a scene of the local environment is obtained using a sensor of the mobile device, and graphical object data for rendering the tangible object is obtained via the network.
Abstract: Facilitating shopping for a tangible object via a network using a mobile device involves obtaining a graphical representation of a scene of a local environment using a sensor of the mobile device. Graphical object data that enables a three-dimensional representation of the tangible object to be rendered on the mobile device is obtained via the network, in response to a shopping selection. The three-dimensional representation of the tangible object is displayed with the graphical representation of the scene via the mobile device so that the appearance of the tangible object in the scene is simulated.
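Simulating the appearance of a 3-D object in a captured scene ultimately reduces to projecting the object's points into the camera image. The following is a minimal pinhole-projection sketch of that step; the focal length and principal point are made-up example intrinsics, not values from the patent.

```python
def project_points(points, f=800.0, cx=320.0, cy=240.0):
    """Project 3-D points given in camera coordinates (z pointing forward,
    in front of the camera) to 2-D pixel coordinates with an ideal
    pinhole camera model: u = cx + f*x/z, v = cy + f*y/z."""
    pixels = []
    for x, y, z in points:
        pixels.append((cx + f * x / z, cy + f * y / z))
    return pixels

# A point on the optical axis lands on the principal point; halving the
# depth of an off-axis point doubles its distance from the image centre.
corners = [(0.0, 0.0, 2.0), (0.5, 0.0, 2.0), (0.5, 0.0, 1.0)]
pixels = project_points(corners)
```

Once the virtual object's pose relative to the camera is known, every vertex is transformed into camera coordinates and projected this way, so the rendered object lines up with the captured background.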

289 citations


Book
10 Nov 2006
TL;DR: "Emerging Technologies of Augmented Reality: Interfaces and Design" provides a foundation of the main concepts of augmented reality, with a particular emphasis on user interfaces, design, and practical AR techniques, from tracking algorithms to design principles for AR interfaces.
Abstract: Although the field of mixed reality has grown significantly over the last decade, there have been few published books about augmented reality, particularly the interface design aspects. "Emerging Technologies of Augmented Reality: Interfaces and Design" provides a foundation of the main concepts of augmented reality (AR), with a particular emphasis on user interfaces, design, and practical AR techniques, from tracking algorithms to design principles for AR interfaces. "Emerging Technologies of Augmented Reality: Interfaces and Design" contains comprehensive information focusing on the following topics: technologies that support AR, development environments, interface design and evaluation of applications, and case studies of AR applications.

198 citations


Book ChapterDOI
TL;DR: This chapter describes a system for constructing 3D metric models from multiple images taken with an uncalibrated handheld camera, recognizing these models in new images, and precisely solving for object pose.
Abstract: Many applications of 3D object recognition, such as augmented reality or robotic manipulation, require an accurate solution for the 3D pose of the recognized objects. This is best accomplished by building a metrically accurate 3D model of the object and all its feature locations, and then fitting this model to features detected in new images. In this chapter, we describe a system for constructing 3D metric models from multiple images taken with an uncalibrated handheld camera, recognizing these models in new images, and precisely solving for object pose. This is demonstrated in an augmented reality application where objects must be recognized, tracked, and superimposed on new images taken from arbitrary viewpoints without perceptible jitter. This approach not only provides for accurate pose, but also allows for integration of features from multiple training images into a single model that provides for more reliable recognition.

196 citations


Journal ArticleDOI
TL;DR: This paper proposes the car as an AR apparatus and presents an innovative visualization paradigm for navigation systems that is anticipated to enhance user interaction.
Abstract: The augmented reality (AR) research community has developed a wealth of ideas and concepts to improve the depiction of virtual objects in a real scene. In contrast, current AR applications require unwieldy equipment, which discourages their adoption. In order to essentially ease the perception of digital information and to naturally interact with the pervasive computing landscape, the required AR equipment has to be seamlessly integrated into the user’s natural environment. Considering this basic principle, this paper proposes the car as an AR apparatus and presents an innovative visualization paradigm for navigation systems that is anticipated to enhance user interaction.

170 citations


Proceedings ArticleDOI
06 Jul 2006
TL;DR: This paper first reviews studies that used VR technologies to study different aspects of spatial ability, then presents results and findings from one of the first large-scale studies to investigate the potential of an AR application to train spatial ability.

Abstract: Virtual reality (VR) and augmented reality (AR -- overlaying virtual objects onto the real world) offer interesting and widespread possibilities to study different components of human behaviour and cognitive processes. One aspect of human cognition that has been frequently studied using VR technology is spatial ability. Research ranges from training studies that investigate whether and/or how spatial ability can be improved by using these new technologies to studies that focus on specific aspects of spatial ability for which VR is an efficient investigational tool. In this paper we first review studies that used VR technologies to study different aspects of spatial ability. We then present results and findings from one of the first large-scale studies (215 students) to investigate the potential of an AR application to train spatial ability.

168 citations


Patent
15 Aug 2006
TL;DR: In this paper, a system, apparatus, and method is provided for augmented reality glasses that enable an end-user programmer to visualize an Ambient Intelligence environment having a physical dimension such that virtual interaction mechanisms / patterns of the environment are superimposed over real locations, surfaces, objects and devices.
Abstract: A system, apparatus, and method are provided for augmented reality (AR) glasses (131) that enable an end-user programmer to visualize an Ambient Intelligence environment having a physical dimension such that virtual interaction mechanisms/patterns of the Ambient Intelligence environment are superimposed over real locations, surfaces, objects, and devices. Further, the end-user can program virtual interaction mechanisms/patterns and superimpose them over corresponding real objects and devices in the Ambient Intelligence environment.

Proceedings ArticleDOI
22 May 2006
TL;DR: An initial experiment with inexpensive body-worn gyroscopes and acceleration sensors for the chum kiu motion sequence in wing tsun (a popular form of kung fu) confirms the feasibility of this vision to add ambient intelligence and context awareness to gaming applications in general and games of martial arts in particular.
Abstract: Besides their stunning graphics, modern entertainment systems feature ever-higher levels of immersive user-interaction. Today, this is mostly achieved by virtual reality (VR) and augmented reality (AR) setups. On top of these, we envision adding ambient intelligence and context awareness to gaming applications in general and games of martial arts in particular. To this end, we conducted an initial experiment with inexpensive body-worn gyroscopes and acceleration sensors for the chum kiu motion sequence in wing tsun (a popular form of kung fu). The resulting data confirm the feasibility of our vision. Fine-tuned adaptations of various thresholding and pattern-matching techniques known from the fields of computational intelligence and signal processing should suffice to automate the analysis and recognition of important wing tsun movements in real time. Moreover, the data also seem to allow for the possibility of automatically distinguishing between certain levels of expertise and quality in executing the movements.
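The thresholding approach the authors expect to suffice can be illustrated with a minimal event detector on an acceleration-magnitude signal. This is a generic sketch — the threshold, refractory period, and synthetic signal are invented for illustration, not taken from the paper's recogniser.

```python
def detect_strikes(accel_mag, threshold=2.5, refractory=10):
    """Pick out high-acceleration events (e.g. sharp strikes) from a
    magnitude signal: report a sample index when it exceeds the threshold,
    then suppress further detections for `refractory` samples so one
    movement is not counted twice."""
    events = []
    last = -refractory
    for i, a in enumerate(accel_mag):
        if a > threshold and i - last >= refractory:
            events.append(i)
            last = i
    return events

# Synthetic signal: baseline around 1 g with two sharp spikes.
signal = [1.0] * 50
signal[10] = 4.0
signal[11] = 3.5   # same event; suppressed by the refractory period
signal[40] = 5.0
events = detect_strikes(signal)
```

Distinguishing *which* movement occurred, or its quality, would require the pattern-matching layer the abstract mentions on top of this kind of event segmentation.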

Proceedings ArticleDOI
14 Jun 2006
TL;DR: Investigating how students interact with AR and physical models and evaluate their perceptions regarding these two representations in learning about amino acids shows that some students liked to manipulate AR by rotating the markers to see different orientations of the virtual objects, but some students preferred to interact with physical models in order to get a feeling of physical contact.
Abstract: Augmented reality (AR) is an emerging technology, which renders three-dimensional (3-D) virtual objects and allows people to interact with virtual and real objects at the same time. The purpose of this study is to investigate how students interact with AR and physical models and evaluate their perceptions regarding these two representations in learning about amino acids. The results show that some students liked to manipulate AR by rotating the markers to see different orientations of the virtual objects. However, some students preferred to interact with physical models in order to get a feeling of physical contact. Their interactions with AR demonstrated that they tended to treat AR as real objects. Based on the findings, some AR design issues are elicited and the possibility to use AR in the chemistry classroom is discussed.

Journal ArticleDOI
TL;DR: This paper presents a pilot study and a follow-on user-based study that examined the effects on user performance of outdoor background textures, changing outdoor illuminance values, and text drawing styles in a text identification task using an optical, see-through AR system.
Abstract: A challenge in presenting augmenting information in outdoor augmented reality (AR) settings lies in the broad range of uncontrollable environmental conditions that may be present, specifically large-scale fluctuations in natural lighting and wide variations in likely backgrounds or objects in the scene. In this paper, we motivate the need for research on the effects of text drawing styles, outdoor background textures, and natural lighting on user performance in outdoor AR. We present a pilot study and a follow-on user-based study that examined the effects on user performance of outdoor background textures, changing outdoor illuminance values, and text drawing styles in a text identification task using an optical, see-through AR system. We report significant effects for all these variables, and discuss user interface design guidelines and ideas for future work.

Proceedings ArticleDOI
22 Apr 2006
TL;DR: The attention funnel has potential applicability as a general 3D cursor or cue in a wide array of spatially enabled mobile and AR systems, and for applications where systems can support users in visual search, object awareness, and emergency warning in indoor and outdoor spaces.
Abstract: The attention funnel is a general purpose AR interface technique that interactively guides the attention of a user to any object, person, or place in space. The technique utilizes dynamic perceptual affordances to draw user attention "down" the funnel to the target location. The attention funnel can be used to cue objects completely out of sight, including objects behind the user or occluded by other objects or walls. An experiment evaluating user performance with the attention funnel and other conventional AR attention directing techniques found that the attention funnel increased the consistency of the user's search by 65%, increased search speed by 22%, and decreased mental workload by 18%. The attention funnel has potential applicability as a general 3D cursor or cue in a wide array of spatially enabled mobile and AR systems, and for applications where systems can support users in visual search, object awareness, and emergency warning in indoor and outdoor spaces.

Book ChapterDOI
29 Nov 2006
TL;DR: Experimental results reveal that integrating GPS, RFID, and dead-reckoning improves positioning accuracy in both indoor and outdoor environments.

Abstract: This paper describes an embedded pedestrian navigation system composed of self-contained sensors, the Global Positioning System (GPS), and an active Radio Frequency Identification (RFID) tag system. We use self-contained sensors (accelerometers, gyrosensors, and magnetometers) to estimate relative displacement by analyzing human walking locomotion. The GPS is used outdoors to adjust errors in position and direction accumulated by the dead-reckoning. In indoor environments, we use an active RFID tag system sparsely placed in key spot areas. The tag system obviously has limited availability, and thus dead-reckoning is used to cover the environment. We propose a complementary compensation algorithm for GPS/RFID localization and self-contained navigation, represented by simple equations in a Kalman filter framework. Experimental results using the proposed method reveal that integration of GPS/RFID/dead-reckoning improves positioning accuracy in both indoor and outdoor environments. The pedestrian positioning is realized as a software module with web-based APIs so that cross-platform development can easily be achieved. A pedestrian navigation system is implemented on an embedded wearable system and is proven to be useful even for inexperienced users.
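The complementary compensation idea — dead-reckoning drives the prediction, occasional absolute fixes (GPS outdoors, RFID indoors) drive the correction — can be reduced to a one-dimensional Kalman filter sketch. The noise values, the 10% step bias, and the fix schedule below are invented for illustration; this is not the paper's filter formulation.

```python
def fuse(steps, q=0.05, r=1.0):
    """1-D Kalman filter. Each element of `steps` is (u, z): u is the
    dead-reckoned displacement for that step, z is an absolute position
    fix, or None when no GPS/RFID measurement is available."""
    x, p = 0.0, 1.0             # position estimate and its variance
    for u, z in steps:
        x, p = x + u, p + q     # predict: apply dead-reckoning, grow uncertainty
        if z is not None:
            k = p / (p + r)     # Kalman gain
            x, p = x + k * (z - x), (1.0 - k) * p  # correct with the fix
    return x

# A walker covers 1.0 m per step for 20 steps; dead-reckoning over-reads
# by 10% (1.1 m per step), and an absolute fix arrives every 5th step.
steps = [(1.1, float(i) if i % 5 == 0 else None) for i in range(1, 21)]
estimate = fuse(steps)          # raw dead-reckoning alone would give 22.0 m
```

Between fixes the estimate drifts with the biased dead-reckoning; each fix pulls it back toward truth, so the error stays bounded instead of growing with distance walked.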

Patent
03 Feb 2006
TL;DR: In this paper, an augmented reality device combines a real-world view with an object image, and an optical combiner combines the object image with a real-world view of the object and conveys the combined image to a user.
Abstract: An augmented reality device to combine a real-world view with an object image (112). An optical combiner (102) combines the object image (112) with a real-world view of the object and conveys the combined image to a user. A tracking system tracks one or more objects. At least a part of the tracking system (108) is at a fixed location with respect to the display (104). An eyepiece (110) is used to view the combined object and real-world images, and fixes the user location with respect to the display and optical combiner location.

Patent
24 Jan 2006
TL;DR: In this article, a compact haptic and augmented virtual reality system that produces an augmented reality environment is presented, equipped with software and devices that provide users with stereoscopic visualization and force feedback simultaneously in real time.
Abstract: The invention provides a compact haptic (18) and augmented virtual reality system that produces an augmented reality environment. The system is equipped with software and devices that provide users with stereoscopic visualization and force feedback simultaneously in real time. High resolution, high pixel density, and head and hand tracking ability are provided. Well-matched haptics and graphics volumes are realized. Systems of the invention are compact, making use of a standard personal display device, e.g., a computer monitor, as the display driver. Systems of the invention may therefore be inexpensive compared to many conventional virtual reality systems.

Patent
11 Aug 2006
TL;DR: In this article, a method of operation for use with an augmented reality spatial interaction and navigational system includes receiving initialization information, including a target location corresponding to a point of interest in space, and a source location correspond to a spatially enabled display.
Abstract: A method of operation for use with an augmented reality spatial interaction and navigational system includes receiving initialization information, including a target location corresponding to a point of interest in space, and a source location corresponding to a spatially enabled display. It further includes computing a curve in a screen space of the spatially enabled display between the source location and the target location, and placing a set of patterns along the curve, including illustrating the patterns in the screen space.
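The "curve in screen space with a set of patterns placed along it" can be sketched by sampling a quadratic Bézier curve between the source and target locations. The lifted control point, the sample count, and the function names are illustrative choices, not the patent's construction.

```python
def bezier(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u * u * a + 2.0 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

def place_patterns(source, target, n=5, lift=2.0):
    """Place n pattern anchors along a curve from the source (the spatially
    enabled display) to the target (the point of interest). The curve bows
    upward via a control point lifted above the straight-line midpoint."""
    mid = [(s + t) / 2.0 for s, t in zip(source, target)]
    mid[1] += lift                 # lift the y component of the midpoint
    return [bezier(source, tuple(mid), target, i / (n - 1.0))
            for i in range(n)]

# Five anchors from the display at the origin to a target 10 units ahead.
anchors = place_patterns((0.0, 0.0, 0.0), (10.0, 0.0, 0.0))
```

The endpoints coincide with source and target, and the interior anchors trace the bowed path where the guiding patterns would be drawn.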

Journal ArticleDOI
TL;DR: In this paper, the feasibility of augmenting human abilities via MR applications in construction tasks from the perspective of cognitive engineering was analyzed and validated through an experiment comparing a head mounted display versus a desktop monitor in performing an orientation task.

Book ChapterDOI
01 Oct 2006
TL;DR: Novel visualization techniques that are designed to overcome misleading depth perception of trivially superimposed virtual images on the real view are described and evaluated to guide future research and development on medical augmented reality.
Abstract: The idea of in-situ visualization for surgical procedures has been widely discussed in the community [1,2,3,4]. While tracking technology nowadays offers sufficient accuracy, and visualization devices have been developed that fit seamlessly into the operational workflow [1,3], one crucial problem remains, which was discussed already in the first paper on medical augmented reality [4]: even though the data is presented at the correct place, the physician often perceives the spatial position of the visualization to be closer or further than it is because of the virtual/real overlay. This paper describes and evaluates novel visualization techniques that are designed to overcome the misleading depth perception of trivially superimposed virtual images on the real view. We invited 20 surgeons to evaluate seven different visualization techniques using a head mounted display (HMD). The evaluation was divided into two parts. In the first part, the depth perception of each kind of visualization is evaluated quantitatively. In the second part, the visualizations are evaluated qualitatively with regard to user friendliness and intuitiveness. This evaluation, with a relevant number of surgeons using a state-of-the-art system, is meant to guide future research and development on medical augmented reality.

Proceedings ArticleDOI
30 Jul 2006
TL;DR: Results of initial trials of RtR suggest that AR games, when properly designed for pedagogical purposes, can motivate the authentic practice of 21st century skills.
Abstract: Augmented Reality (AR) games can potentially teach 21st century skills, such as interpretation, multimodal thinking, problem-solving, information management, teamwork, flexibility, civic engagement, and the acceptance of diverse perspectives. To explore this, I designed Reliving the Revolution (RtR) as a novel model for evaluating educational AR games. RtR takes place in Lexington, Massachusetts, the site of the Battle of Lexington. Participants interact with virtual historic figures and items, which are triggered by GPS to appear on their PDA (personal digital assistant) depending on where they are standing in Lexington. Game participants receive differing evidence, as appropriate for their role in the game (Minuteman soldier, Loyalist, African American soldier, or British soldier), and use this information to decide who fired the first shot at the Battle. Results of initial trials of RtR suggest that AR games, when properly designed for pedagogical purposes, can motivate the authentic practice of 21st century skills.

Journal ArticleDOI
TL;DR: The current apprenticeship system has served the art of surgery for over 100 years; the authors foresee virtual reality working synergistically with current curriculum modalities to streamline and enhance the resident's learning experience.

Journal ArticleDOI
01 Mar 2006
TL;DR: A classification of illumination methods for MR applications that aim at generating a merged environment in which illumination and shadows are consistent is proposed, with four categories of methods that vary depending on the type of geometric model used for representing the real scene, and the different radiance information available for each point of the realscene.
Abstract: A mixed reality (MR) represents an environment composed both of real and virtual objects. MR applications are used more and more, for instance in surgery, architecture, cultural heritage, entertainment, etc. For some of these applications it is important to merge the real and virtual elements using consistent illumination. This paper proposes a classification of illumination methods for MR applications that aim at generating a merged environment in which illumination and shadows are consistent. Three different illumination methods can be identified: common illumination, relighting, and methods based on inverse illumination. In this paper a classification of the illumination methods for MR is given based on their input requirements: the amount of geometry and radiance known of the real environment. This led us to define four categories of methods that vary depending on the type of geometric model used for representing the real scene, and the different radiance information available for each point of the real scene. Various methods are described within their category. The classification points out that in general the quality of the illumination interactions increases with the amount of input information available. On the other hand, the accessibility of the method decreases, since its pre-processing time increases to gather the extra information. Recently developed techniques manage to compensate for unknown data using an iterative algorithm, hardware illumination, or recent progress in stereovision. Finally, a review of illumination techniques for MR is given with a discussion on important properties such as the possibility of interactivity or the amount of complexity in the simulated illumination.

Proceedings ArticleDOI
01 Nov 2006
TL;DR: This work proposes giving people the option of using either the first- or the third-person perspective, as in video games, and verifies whether this behaviour extends to simulations in augmented and virtual reality.
Abstract: Unlike reality, where you can see your own limbs, virtual reality simulations can be disturbing because you cannot see your own body. This seems to create an issue in the proprioception of users, who do not feel completely integrated in the environment; being able to see a representation of one's own body should therefore be beneficial. We propose giving people the possibility of using either the first- or the third-person perspective, as in video games (e.g. GTA). As gamers prefer the third-person perspective for movement and the first-person view for fine operations, we verify whether this behaviour extends to simulations in augmented and virtual reality.

Proceedings ArticleDOI
22 Oct 2006
TL;DR: A system that implements the god-like interaction metaphor as well as a series of novel applications to facilitate collaboration between indoor and outdoor users are constructed and a well-known video based rendering algorithm is extended to make it suitable for use on outdoor wireless networks of limited bandwidth.
Abstract: This paper presents a new interaction metaphor we have termed "god-like interaction". This is a metaphor for improved communication of situational and navigational information between outdoor users, equipped with mobile augmented reality systems, and indoor users, equipped with tabletop projector display systems. Physical objects are captured by a series of cameras viewing a table surface indoors, the data is sent over a wireless network, and is then reconstructed at a real-world location for outdoor augmented reality users. Our novel god-like interaction metaphor allows users to communicate information using physical props as well as natural gestures. We have constructed a system that implements our god-like interaction metaphor as well as a series of novel applications to facilitate collaboration between indoor and outdoor users. We have extended a well-known video based rendering algorithm to make it suitable for use on outdoor wireless networks of limited bandwidth. This paper also describes the limitations and lessons learned during the design and construction of the hardware that supports this research.

Proceedings ArticleDOI
25 Mar 2006
TL;DR: A perceptual matching task and experimental design for measuring egocentric AR depth judgments at medium- and far-field distances of 5 to 45 meters, and a quantification of how much more difficult the x-ray vision condition makes the task, and ideas for improving the experimental methodology.
Abstract: A fundamental problem in optical, see-through augmented reality (AR) is characterizing how it affects the perception of spatial layout and depth. This problem is important because AR system developers need to both place graphics in arbitrary spatial relationships with real-world objects, and to know that users will perceive them in the same relationships. Furthermore, AR makes possible enhanced perceptual techniques that have no real-world equivalent, such as x-ray vision, where AR users are supposed to perceive graphics as being located behind opaque surfaces. This paper reviews and discusses techniques for measuring egocentric depth judgments in both virtual and augmented environments. It then describes a perceptual matching task and experimental design for measuring egocentric AR depth judgments at medium- and far-field distances of 5 to 45 meters. The experiment studied the effect of field of view, the x-ray vision condition, multiple distances, and practice on the task. The paper relates some of the findings to the well-known problem of depth underestimation in virtual environments, and further reports evidence for a switch in bias, from underestimating to overestimating the distance of AR-presented graphics, at 23 meters. It also gives a quantification of how much more difficult the x-ray vision condition makes the task, and then concludes with ideas for improving the experimental methodology.

Patent
05 Apr 2006
TL;DR: In this paper, orientation parameters indicative of an orientation of the mobile device (1) relative to the visual background (2) are determined, and the captured visual background is displayed on the mobile device overlaid with visual objects based on these orientation parameters.
Abstract: For executing an application (13) in a mobile device (1) comprising a camera (14), a visual background (2) is captured through the camera (14). A selected application (13) is associated with the visual background (2). The selected application (13) is executed in the mobile device (1). Orientation parameters indicative of an orientation of the mobile device (1) relative to the visual background (2) are determined. Based on the orientation parameters, application-specific output signals are generated in the mobile device (1). Particularly, the captured visual background (2) is displayed on the mobile device (1) overlaid with visual objects based on the orientation parameters. Displaying the captured visual background with overlaid visual objects, selected and/or positioned dependent on the relative orientation of the mobile device (1), makes possible interactive augmented reality applications, e.g. interactive augmented reality games, controlled by the orientation of the mobile device (1) relative to the visual background (2).

Patent
11 Jul 2006
TL;DR: In this article, a system for displaying different views of a sporting event is presented that, in particular, uses the spectator's GPS position to assist in displaying a view from the spectator's position on a personal device.
Abstract: A spectator sport system and method that displays different views of a sporting event and, in particular, uses the spectator's GPS position to assist in displaying a view from the spectator's position. Using a personal device, the spectator can zoom, pan, tilt, and change the view, as well as change the view to another position, such as a finish line, goal, or a participant position. Vital information on the sporting event or a participant can be appended to the view. In some forms, augmented reality can be used, such as a finish line or goal overlay, to enhance the experience. Additional service requests can be made from the personal device.

Journal ArticleDOI
TL;DR: This paper discusses the realization of a tool for early architectural design on an existing augmented reality system, called the Visual Interaction Platform, and describes the development process, the resulting tool and its performance for elementary tasks such as positioning and overdrawing.