Topic

Augmented reality

About: Augmented reality is a research topic. Over its lifetime, 36,039 publications have been published on this topic, receiving 479,617 citations. The topic is also known as: AR.


Papers
Patent
13 Jun 2015
TL;DR: In this patent, the authors provide methods and systems for creating virtual and augmented reality experiences for users. The systems include an image-capturing device to capture one or more images and a processor, communicatively coupled to the device, that extracts a set of map points from those images.
Abstract: To provide methods and systems for creating virtual and augmented reality. SOLUTION: Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The systems may comprise an image capturing device to capture one or more images, the one or more images corresponding to a field of view of a user of a head-mounted augmented reality device, and a processor communicatively coupled to the image capturing device to extract a set of map points from the set of images, to identify a set of sparse points and a set of dense points from the extracted set of map points, and to perform normalization on the set of map points. SELECTED DRAWING: Figure 1

995 citations
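
The pipeline the patent above claims (capture images, extract map points, split them into sparse and dense sets, normalize) can be sketched with off-the-shelf tools. Below is a minimal Python sketch: OpenCV's ORB detector stands in for the patent's unspecified map-point extractor, and the response-based sparse/dense split and zero-mean, unit-RMS normalization are illustrative assumptions, not the claimed method.

```python
# Minimal sketch of the map-point pipeline described in the patent above.
# ORB keypoints stand in for the unspecified "map point" extractor; the
# sparse/dense split by response strength and the zero-mean, unit-RMS
# normalization are illustrative assumptions, not the patented method.
import cv2
import numpy as np

def extract_map_points(image_bgr, max_points=2000):
    """Detect candidate map points in one frame of the user's field of view."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=max_points)
    keypoints = orb.detect(gray, None)
    points = np.array([kp.pt for kp in keypoints], dtype=np.float32)
    responses = np.array([kp.response for kp in keypoints], dtype=np.float32)
    return points, responses

def split_sparse_dense(points, responses, sparse_fraction=0.1):
    """Treat the strongest responses as the sparse set, the rest as dense."""
    order = np.argsort(-responses)
    n_sparse = max(1, int(len(points) * sparse_fraction))
    return points[order[:n_sparse]], points[order[n_sparse:]]

def normalize_points(points):
    """Zero-mean, unit-RMS normalization of 2D map points."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / max(scale, 1e-9)
```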

Posted Content
TL;DR: A comprehensive review of recent pioneering efforts in semantic and instance segmentation is provided, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based approaches, recurrent networks, visual attention models, and generative models in adversarial settings.
Abstract: Image segmentation is a key topic in image processing and computer vision with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial body of work aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarities, strengths, and challenges of these deep learning models, examine the most widely used datasets, report performance, and discuss promising future research directions in this area.

950 citations
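
Of the architecture families the survey above covers, the encoder-decoder design is the simplest to condense into code. The following is a minimal PyTorch sketch of an encoder-decoder pixel-labeling network; the layer widths, depth, and absence of skip connections are arbitrary illustrative choices, not any particular model from the survey.

```python
# Minimal encoder-decoder segmentation sketch in PyTorch. Layer widths and
# depth are arbitrary illustrative choices, not a model from the survey.
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, in_channels=3, num_classes=21):
        super().__init__()
        # Encoder: downsample spatially while widening channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # Output: per-pixel class logits, shape (N, num_classes, H, W).
        return self.decoder(self.encoder(x))

logits = TinyEncoderDecoder()(torch.randn(1, 3, 128, 128))
assert logits.shape == (1, 21, 128, 128)
```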

Journal ArticleDOI
TL;DR: Virtual reality (VR) for improved performance of MIS is now a reality; however, VR is only a training tool that must be thoughtfully introduced into a surgical training curriculum for it to successfully improve surgical technical skills.
Abstract: Objective: To inform surgeons about the practical issues to be considered for successful integration of virtual reality simulation into a surgical training program. Summary Background Data: The learning and practice of minimally invasive surgery (MIS) makes unique demands on surgical training programs. A decade ago Satava proposed virtual reality (VR) surgical simulation as a solution for this problem. Only recently have robust scientific studies supported that vision.

950 citations

Proceedings ArticleDOI
24 Jul 1998
TL;DR: The paper envisions an office of the future in which ceiling-mounted, computer-controlled cameras and "smart" projectors capture per-pixel depth and reflectance of the visible surfaces in real time using imperceptible structured light, and project high-resolution images onto designated, potentially irregular display surfaces so they appear correct to a moving head-tracked observer.
Abstract: We introduce ideas, proposed technologies, and initial results for an office of the future that is based on a unified application of computer vision and computer graphics in a system that combines and builds upon the notions of the CAVE™, tiled display systems, and image-based modeling. The basic idea is to use real-time computer vision techniques to dynamically extract per-pixel depth and reflectance information for the visible surfaces in the office including walls, furniture, objects, and people, and then to either project images on the surfaces, render images of the surfaces, or interpret changes in the surfaces. In the first case, one could designate everyday (potentially irregular) real surfaces in the office to be used as spatially immersive display surfaces, and then project high-resolution graphics and text onto those surfaces. In the second case, one could transmit the dynamic image-based models over a network for display at a remote site. Finally, one could interpret dynamic changes in the surfaces for the purposes of tracking, interaction, or augmented reality applications. To accomplish the simultaneous capture and display we envision an office of the future where the ceiling lights are replaced by computer controlled cameras and "smart" projectors that are used to capture dynamic image-based models with imperceptible structured light techniques, and to display high-resolution images on designated display surfaces. By doing both simultaneously on the designated display surfaces, one can dynamically adjust or autocalibrate for geometric, intensity, and resolution variations resulting from irregular or changing display surfaces, or overlapped projector images. Our current approach to dynamic image-based modeling is to use an optimized structured light scheme that can capture per-pixel depth and reflectance at interactive rates. Our system implementation is not yet imperceptible, but we can demonstrate the approach in the laboratory. Our approach to rendering on the designated (potentially irregular) display surfaces is to employ a two-pass projective texture scheme to generate images that when projected onto the surfaces appear correct to a moving head-tracked observer. We present here an initial implementation of the overall vision, in an office-like setting, and preliminary demonstrations of our dynamic modeling and display techniques.

947 citations
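
The two-pass projective-texture scheme the abstract above mentions can be sketched numerically: pass 1 renders the desired image from the head-tracked viewer's viewpoint, and pass 2 maps each display-surface point into that image to determine what the projector must emit so the surface appears correct to the viewer. A minimal NumPy sketch under standard graphics conventions follows; the 4x4 view-projection matrices are assumed inputs, and this is not the paper's exact implementation.

```python
# Sketch of the two-pass projective-texture idea: pass 1 renders the desired
# image from the tracked viewer's viewpoint; pass 2 maps each display-surface
# point into that image to find what the projector must emit. The 4x4 matrix
# conventions are standard graphics math, not the paper's implementation.
import numpy as np

def project(view_proj, points_xyz):
    """Apply a 4x4 view-projection matrix; return normalized device coords."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])  # (n, 4) homogeneous
    clip = homo @ view_proj.T                        # (n, 4) clip space
    return clip[:, :3] / clip[:, 3:4]                # perspective divide

def viewer_texcoords(viewer_view_proj, surface_points):
    """Pass-2 lookup: where each surface point falls in the viewer's image.

    Returns (u, v) in [0, 1]; these sample the pass-1 rendering so that,
    once projected onto the (possibly irregular) surface, the image appears
    undistorted from the head-tracked viewer's position.
    """
    ndc = project(viewer_view_proj, surface_points)
    return (ndc[:, :2] + 1.0) * 0.5  # NDC [-1, 1] -> texture [0, 1]

# Example with a trivial (identity) viewer matrix, for illustration only.
vp = np.eye(4)
pts = np.array([[0.25, -0.5, 1.0], [0.0, 0.0, 2.0]])
print(viewer_texcoords(vp, pts))
```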

Journal ArticleDOI
13 Oct 1997
TL;DR: A prototype system that combines the overlaid 3D graphics of augmented reality with the untethered freedom of mobile computing is described, to explore how these two technologies might together make possible wearable computer systems that can support users in their everyday interactions with the world.
Abstract: We describe a prototype system that combines the overlaid 3D graphics of augmented reality with the untethered freedom of mobile computing. The goal is to explore how these two technologies might together make possible wearable computer systems that can support users in their everyday interactions with the world. We introduce an application that presents information about our university's campus, using a head-tracked, see-through, head-worn, 3D display, and an untracked, opaque, handheld, 2D display with stylus and trackpad. We provide an illustrated explanation of how our prototype is used, and describe our rationale behind designing its software infrastructure and selecting the hardware on which it runs.

916 citations
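
Registering overlays like this prototype's campus labels comes down to projecting a world-anchored point through the tracked head pose. A minimal NumPy sketch using textbook pinhole projection follows; the intrinsics K and the pose (R, t) below are hypothetical values, not the authors' calibration or rendering pipeline.

```python
# Minimal sketch of AR overlay registration: project a world-anchored label
# position into display pixels using the tracked head pose. K, R, and t are
# hypothetical values; this is textbook pinhole projection, not the
# prototype's actual rendering pipeline.
import numpy as np

def world_to_pixel(point_world, K, R, t):
    """Map a 3D world point to 2D display pixels for the current head pose."""
    p_cam = R @ point_world + t      # world -> head/camera coordinates
    if p_cam[2] <= 0:                # behind the viewer: not visible
        return None
    uvw = K @ p_cam                  # pinhole projection
    return uvw[:2] / uvw[2]

K = np.array([[800.0, 0.0, 320.0],   # hypothetical display intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)        # hypothetical head pose (identity)
label_anchor = np.array([2.0, 0.5, 10.0])  # a building 10 m ahead
print(world_to_pixel(label_anchor, K, R, t))  # pixel where the label draws
```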


Network Information
Related Topics (5)
User interface: 85.4K papers, 1.7M citations, 86% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Object detection: 46.1K papers, 1.3M citations, 82% related
Segmentation: 63.2K papers, 1.2M citations, 82% related
Image segmentation: 79.6K papers, 1.8M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2024    2
2023    1,885
2022    4,115
2021    2,941
2020    4,123
2019    4,549