scispace - formally typeset
Topic

Augmented reality

About: Augmented reality is a research topic. Over the topic's lifetime, 36,039 publications have been published within it, receiving 479,617 citations. The topic is also known as: AR.


Papers
Journal Article
TL;DR: Key performance indicators (KPIs) are identified that benchmark the impact of ready-for-market AR tools on automotive maintenance performance; novice users were identified as a potential target group.

124 citations

01 Jan 2001
TL;DR: This thesis describes a system that completely automatically builds a three-dimensional model of a scene given a sequence of images of the scene, and estimates the internal parameters of the camera and the poses from where the original images were taken.
Abstract: This thesis describes a system that completely automatically builds a three-dimensional model of a scene given a sequence of images of the scene. The system also estimates the internal parameters of the camera and the poses from where the original images were taken. Results that have been produced from real-world sequences acquired with a handheld video camera are presented.

The main contribution of the thesis is in building a complete system and applying it to full-scale real-world problems, thereby facing the practical difficulties of far-from-ideal imagery. Contributions are also made to several system components, most notably in dealing with variable amounts of motion between frames, auto-calibration and dense reconstruction from a large number of images. These contributions are presented as appended papers to enable the experienced reader to easily study the novelty of the thesis. The main text gives a detailed, coherent account of the theoretical foundation for the system and its components.

There are several motivations for constructing systems of the proposed type. One motivation is to make it possible for any amateur photographer to produce graphical models of the world with the use of a computer. The viewer of the material can then navigate through the model and view it from any point. Another application is the insertion of synthetic objects into an existing video sequence. This task is frequently carried out in movie making but is then performed with a great deal of expensive manual work. A quite futuristic but highly interesting application is augmented reality, where the user's view of the world is augmented by the insertion of synthetic objects.

124 citations
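A core building block of the reconstruction pipeline this abstract describes is triangulating 3D points from corresponding observations in two calibrated views. The following is a minimal illustrative sketch of linear (DLT) triangulation in NumPy, not the thesis's actual system; the camera matrices, focal length, and test point are made up for demonstration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : 2D image observations (x, y) in each view
    Returns the 3D point in Euclidean coordinates.
    """
    # Each observation x ~ P X yields two linear constraints on X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 via SVD; the solution is the last right singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: two cameras one unit apart observing a known point.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])  # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # camera at (1, 0, 0)
X_true = np.array([0.5, 0.2, 4.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)  # recovers (0.5, 0.2, 4.0)
```

With noise-free correspondences the DLT solution is exact; a full pipeline such as the one in the thesis would additionally estimate the projection matrices themselves (auto-calibration) and refine points via bundle adjustment.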

Proceedings Article
29 Apr 2007
TL;DR: It is found that immersive AR can create an increased sense of presence, confirming generally held expectations; however, it is demonstrated that increased presence does not necessarily lead to more engagement.
Abstract: In this paper we present the results of a qualitative, empirical study exploring the impact of immersive technologies on presence and engagement, using the interactive drama Facade as the object of study. In this drama, players are situated in a married couple's apartment, and interact primarily through conversation with the characters and manipulation of objects in the space. We present participants' experiences across three different versions of Facade -- augmented reality (AR) and two desktop computing based implementations, one where players communicate using speech and the other using typed keyboard input. Through interviews and observations of players, we find that immersive AR can create an increased sense of presence, confirming generally held expectations. However, we demonstrate that increased presence does not necessarily lead to more engagement. Rather, mediation may be necessary for some players to fully engage with certain interactive media experiences.

124 citations

Proceedings Article
07 Oct 2003
TL;DR: In this article, a user study was conducted to determine which representations best express occlusion relationships among far-field objects; a drawing style and opacity settings were identified that enable users to accurately interpret three layers of occluded objects, even in the absence of perspective constraints.
Abstract: A useful function of augmented reality (AR) systems is their ability to visualize occluded infrastructure directly in a user's view of the environment. This is especially important for our application context, which utilizes mobile AR for navigation and other operations in an urban environment. A key problem in the AR field is how to best depict occluded objects in such a way that the viewer can correctly infer the depth relationships between different physical and virtual objects. Showing a single occluded object with no depth context presents an ambiguous picture to the user. But showing all occluded objects in the environment leads to the "Superman's X-ray vision" problem, in which the user sees too much information to make sense of the depth relationships of objects. Our efforts differ qualitatively from previous work in AR occlusion, because our application domain involves far-field occluded objects, which are tens of meters distant from the user. Previous work has focused on near-field occluded objects, which are within or just beyond arm's reach, and which use different perceptual cues. We designed and evaluated a number of sets of display attributes. We then conducted a user study to determine which representations best express occlusion relationships among far-field objects. We identify a drawing style and opacity settings that enable the user to accurately interpret three layers of occluded objects, even in the absence of perspective constraints.

123 citations
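The opacity settings this paper evaluates govern how several semi-transparent occluded layers combine into what the viewer actually sees. As a generic illustration of that mechanism, a sketch of standard front-to-back alpha ("over") compositing on a single greyscale pixel; the layer colors and the 0.3 opacity are made-up values, not the settings the paper identified:

```python
def composite_over(layers):
    """Composite an ordered list of (color, alpha) layers, nearest first,
    using the standard alpha 'over' operator on one pixel.

    color : grey value in [0, 1]
    alpha : layer opacity in [0, 1]
    Returns the resulting (color, alpha) reaching the viewer.
    """
    out_c, out_a = 0.0, 0.0
    for c, a in layers:
        # Each deeper layer contributes only through the transparency
        # (1 - out_a) left over by the layers in front of it.
        out_c += (1.0 - out_a) * a * c
        out_a += (1.0 - out_a) * a
    return out_c, out_a

# Three occluded layers at 30% opacity each, nearest first:
color, alpha = composite_over([(1.0, 0.3), (0.5, 0.3), (0.2, 0.3)])
```

Note how each successive layer's visible contribution shrinks geometrically (0.3, 0.21, 0.147 here), which is why choosing per-layer opacities that keep a third layer legible is a genuine design problem.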

Patent
26 May 2015
TL;DR: In this paper, an architecture is presented for a multimodal, multiplatform-switching, unmanned vehicle (UV) swarm system that can execute missions in diverse environments; it includes onboard and ground processors to handle and integrate multiple sensor inputs.
Abstract: Architecture for a multimodal, multiplatform-switching, unmanned vehicle (UV) swarm system which can execute missions in diverse environments. The architecture includes onboard and ground processors to handle and integrate multiple sensor inputs, generating a unique UV pilot experience for a remote drone pilot (RDP) via a virtual augmented reality cockpit (VARC). The RDP is monitored by an operational control system and an experienced control pilot. A ground processor handles real-time localization, forwarding of commands, and generation and delivery of augmented content to users, along with safety features and overrides. The UVs' onboard processors and autopilot execute the commands and provide a redundant source of safety features and overrides in the case of loss of signal. The UVs perform customizable missions, with adjustable rules for differing skill levels. RDPs experience real-time virtual piloting of the UV with augmented interactive and actionable visual and audio content delivered to them via VARC systems.

123 citations


Network Information
Related Topics (5)
User interface
85.4K papers, 1.7M citations
86% related
Feature (computer vision)
128.2K papers, 1.7M citations
82% related
Object detection
46.1K papers, 1.3M citations
82% related
Segmentation
63.2K papers, 1.2M citations
82% related
Image segmentation
79.6K papers, 1.8M citations
81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024    2
2023    1,885
2022    4,115
2021    2,941
2020    4,123
2019    4,549