Book ChapterDOI

Computer Vision for Mobile Augmented Reality

TLDR
This chapter discusses the use of computer vision for mobile augmented reality and presents work on a vision-based AR application (mobile sign detection and translation), a vision-supplied AR resource (indoor localization and pose estimation), and a low-level correspondence tracking and model estimation approach to increase the accuracy and efficiency of computer vision methods in augmented reality.
Abstract
Mobile augmented reality (AR) employs computer vision capabilities in order to properly integrate the real and the virtual, whether that integration involves the user’s location, object-based interaction, 2D or 3D annotations, or precise alignment of image overlays. Real-time vision technologies vital for the AR context include tracking, object and scene recognition, localization, and scene model construction. For mobile AR, which has limited computational resources compared with static computing environments, efficient processing is critical, as are considerations of power consumption (i.e., battery life), processing and memory limitations, lag, and the processing and display requirements of the foreground application. On the other hand, additional sensors (such as gyroscopes, accelerometers, and magnetometers) are typically available in the mobile context, and, unlike many traditional computer vision applications, user interaction is often available for user feedback and disambiguation. In this chapter, we discuss the use of computer vision for mobile augmented reality and present work on a vision-based AR application (mobile sign detection and translation), a vision-supplied AR resource (indoor localization and pose estimation), and a low-level correspondence tracking and model estimation approach to increase accuracy and efficiency of computer vision methods in augmented reality.


Citations
Proceedings ArticleDOI

Virtual, Augmented, and Mixed Reality for Human-Robot Interaction

TL;DR: The 1st International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI), as discussed by the authors, brings together HRI, robotics, artificial intelligence, and mixed reality researchers to identify challenges in mixed reality interactions between humans and robots.
Proceedings ArticleDOI

Projection-Aware Task Planning and Execution for Human-in-the-Loop Operation of Robots in a Mixed-Reality Workspace

TL;DR: A new planning paradigm, projection-aware planning, is proposed, whereby a robot can trade off its plan cost against its ability to reveal its intentions using its projection actions.
Proceedings ArticleDOI

Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI)

TL;DR: This paper describes the 2nd International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI); the first edition, held at HRI 2018, was the first workshop of its kind at an academic AI or robotics conference and served as a timely call to arms to the academic community in response to the growing promise of this emerging field.
Journal ArticleDOI

Advancing pharmacy and healthcare with virtual digital technologies

TL;DR: This article reviews the benefits and challenges of virtual health interventions and offers an outlook on how such technologies can be transitioned from research-focused prototypes toward real-world healthcare and pharmaceutical applications, transforming treatment pathways for patients worldwide.

Foundations of Human-Aware Planning -- A Tale of Three Models

TL;DR: This research explores how the AI agent can leverage the human task model to generate symbiotic behavior, and how the introduction of the human mental model into the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired.
References
Journal ArticleDOI

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
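The reliable matching this paper describes hinges on Lowe's nearest-neighbour ratio test: a keypoint match is accepted only when its closest descriptor is much closer than the second-closest, rejecting ambiguous correspondences. A minimal NumPy sketch of that test (an illustrative reimplementation on synthetic descriptors, not the paper's code):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match two descriptor sets with Lowe's ratio test.

    A descriptor in desc_a is matched to its nearest neighbour in
    desc_b only if that neighbour is clearly closer than the
    second-nearest one, which filters out ambiguous matches.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        nn = np.argsort(dists)[:2]
        if dists[nn[0]] < ratio * dists[nn[1]]:
            matches.append((i, int(nn[0])))
    return matches

# Synthetic 128-D descriptors: desc_b is a slightly noisy copy of desc_a,
# standing in for the same keypoints seen from a second view.
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(20, 128))
desc_b = desc_a + rng.normal(scale=0.01, size=(20, 128))
matches = ratio_test_match(desc_a, desc_b)
```

With this near-duplicate data every descriptor matches its counterpart; on real images the ratio threshold (0.8 here) is what discards clutter and occlusion-induced false matches.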
Journal ArticleDOI

A Computational Approach to Edge Detection

TL;DR: There is a natural uncertainty principle between detection and localization performance, which are the two main goals, and with this principle a single operator shape is derived which is optimal at any scale.
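The detection/localization tradeoff the paper derives is governed by the smoothing scale of its (derivative-of-Gaussian) operator: more smoothing suppresses noise but blurs the edge position. A 1-D NumPy sketch of that operator (an illustrative toy, with the function name and signal invented for the example):

```python
import numpy as np

def gaussian_derivative_edge(signal, sigma):
    """Locate the strongest step edge in a 1-D signal by filtering with
    a derivative-of-Gaussian kernel. Larger sigma improves detection in
    noise but worsens localization of the edge position."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    # Derivative of a Gaussian: -x/sigma^2 * exp(-x^2 / (2 sigma^2)).
    kernel = -x / sigma**2 * np.exp(-(x**2) / (2 * sigma**2))
    response = np.convolve(signal, kernel, mode="same")
    # Ignore the borders, where implicit zero padding fakes a step.
    interior = np.abs(response[radius:-radius])
    return int(np.argmax(interior)) + radius

# A noisy step edge at index 50.
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), np.ones(50)])
signal += rng.normal(0, 0.05, 100)
edge = gaussian_derivative_edge(signal, sigma=2.0)
```

Rerunning with heavier noise and a larger sigma shows the tradeoff directly: detection stays reliable while the recovered position drifts a sample or two.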
Journal ArticleDOI

Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography

TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form; together these provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
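The random sample consensus (RANSAC) paradigm this paper introduces fits a model to minimal random samples and keeps the hypothesis with the largest consensus set, which tolerates gross outliers that break least squares. A minimal NumPy sketch on 2-D line fitting (an illustrative toy problem, not the paper's landmark application):

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.1, rng=None):
    """Fit a line y = m*x + b by random sample consensus.

    Each iteration fits a candidate line to a minimal sample (two
    points), counts the points within inlier_tol of it, and keeps the
    model with the largest consensus set; the final line is refit by
    least squares on that set.
    """
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        # Perpendicular distance of every point to the candidate line.
        dist = np.abs(m * points[:, 0] - points[:, 1] + b) / np.hypot(m, 1.0)
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    x, y = points[best_inliers, 0], points[best_inliers, 1]
    m, b = np.polyfit(x, y, 1)
    return m, b, best_inliers

# Points on y = 2x + 1 with small noise, plus 10% gross outliers.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 100)
y[::10] += rng.uniform(5, 10, 10)
m, b, inliers = ransac_line(np.column_stack([x, y]), rng=1)
```

A direct least-squares fit on the same data is pulled visibly off the true line by the corrupted points; the consensus-set fit recovers the slope and intercept to within the noise level.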
Book

Multiple view geometry in computer vision

TL;DR: In this article, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, including geometric principles and how to represent objects algebraically so they can be computed and applied.
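A workhorse method from this material is the direct linear transform (DLT): representing a planar mapping algebraically as a 3x3 homography and estimating it from point correspondences via a null-space (SVD) computation. A compact NumPy sketch (an illustrative reimplementation with invented names, following the standard DLT formulation rather than the book's code):

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a 3x3 homography H with the direct linear transform,
    so that dst ~ H @ src in homogeneous coordinates.

    Each correspondence (x, y) -> (u, v) contributes two linear
    constraints on the 9 entries of H; the solution is the right
    singular vector for the smallest singular value. Needs >= 4 points.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale

# Corners of a unit square mapped by a known homography.
H_true = np.array([[1.0, 0.2, 3.0],
                   [0.1, 0.9, -1.0],
                   [0.001, 0.002, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
src_h = np.column_stack([src, np.ones(4)])
dst_h = src_h @ H_true.T
dst = dst_h[:, :2] / dst_h[:, 2:]
H_est = dlt_homography(src, dst)
```

In practice the correspondences are noisy, so this minimal solver is typically wrapped in RANSAC and followed by a point-normalization step for numerical conditioning.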
Journal ArticleDOI

Speeded-Up Robust Features (SURF)

TL;DR: A novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features), which approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster.
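Much of SURF's speed comes from integral images: once a summed-area table is built, any box-filter response costs four array lookups regardless of filter size, which makes the scale pyramid cheap. A NumPy sketch of that building block (an illustrative helper, not SURF itself):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[r, c] = img[:r, :c].sum()."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in four lookups, independent of box size."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

rng = np.random.default_rng(0)
img = rng.random((64, 64))
ii = integral_image(img)
s = box_sum(ii, 10, 20, 30, 50)  # 20x30 box in constant time
```

SURF exploits this by approximating Gaussian second derivatives with box filters evaluated this way, so growing the filter for coarser scales adds no cost.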