Topic

Augmented reality

About: Augmented reality is a research topic. Over its lifetime, 36,039 publications have been published within this topic, receiving 479,617 citations. The topic is also known as: AR.


Papers
Proceedings ArticleDOI
30 Oct 2008
TL;DR: An outdoors augmented reality system for mobile phones that matches camera-phone images against a large database of location-tagged images using a robust image retrieval algorithm and shows a smart-phone implementation that achieves a high image matching rate while operating in near real-time.
Abstract: We have built an outdoors augmented reality system for mobile phones that matches camera-phone images against a large database of location-tagged images using a robust image retrieval algorithm. We avoid network latency by implementing the algorithm on the phone and deliver excellent performance by adapting a state-of-the-art image retrieval algorithm based on robust local descriptors. Matching is performed against a database of highly relevant features, which is continuously updated to reflect changes in the environment. We achieve fast updates and scalability by pruning irrelevant features based on proximity to the user. By compressing and incrementally updating the features stored on the phone we make the system amenable to low-bandwidth wireless connections. We demonstrate system robustness on a dataset of location-tagged images and show a smart-phone implementation that achieves a high image matching rate while operating in near real-time.

406 citations
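
The matching stage this paper describes can be illustrated compactly. The sketch below is a simplified stand-in rather than the authors' pipeline: it substitutes OpenCV's ORB descriptors for the adapted state-of-the-art descriptors in the paper, matches a query photo against a small in-memory list of location-tagged images using a ratio test, and omits the proximity-based feature pruning and compressed incremental updates that make the real system scalable.

```python
# Simplified local-feature matching against location-tagged images.
# ORB stands in for the paper's descriptors; database entries and
# thresholds are illustrative assumptions.
import cv2

orb = cv2.ORB_create(nfeatures=1000)

def load_database(entries):
    """entries: list of (image_path, (lat, lon)) location-tagged images."""
    db = []
    for path, location in entries:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        if desc is not None:
            db.append((desc, location))
    return db

def match_query(query_path, db, ratio=0.75):
    """Return (inlier count, location) of the best-matching database image."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, qdesc = orb.detectAndCompute(query, None)
    if qdesc is None:
        return (0, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    best = (0, None)
    for desc, location in db:
        pairs = matcher.knnMatch(qdesc, desc, k=2)
        good = [p for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) > best[0]:
            best = (len(good), location)
    return best
```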

Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive overview of mobile edge computing (MEC) and its potential use cases and applications, as well as discuss challenges and potential future directions for MEC research.
Abstract: Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacities and finite processing capabilities, so how to run compute-intensive applications on resource-constrained devices has recently become a natural concern. Mobile edge computing (MEC), a key technology in the emerging fifth-generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, process large data before sending it to the cloud, provide cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. MEC therefore enables a wide variety of applications where real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research as well as discuss challenges and potential future directions for MEC research.

402 citations
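
The basic trade-off MEC exploits, executing a task on a resource-constrained device versus shipping it over the RAN to a nearby edge server, can be stated in a few lines. The toy model below is illustrative only: the cycle counts, data sizes, and rates are invented, and real offloading decisions also account for energy, queueing, and channel dynamics.

```python
# Toy offloading decision: is transfer + edge compute faster than
# local compute? All parameter values are illustrative assumptions.

def should_offload(cycles, input_bits, local_hz, edge_hz, uplink_bps):
    """True if edge execution beats local execution on latency alone."""
    local_time = cycles / local_hz                          # seconds on device
    edge_time = input_bits / uplink_bps + cycles / edge_hz  # upload + compute
    return edge_time < local_time

# Example: a 2-gigacycle AR recognition task on a 1 MB camera frame.
print(should_offload(cycles=2e9, input_bits=8e6,
                     local_hz=1e9, edge_hz=10e9, uplink_bps=50e6))  # True
```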

Proceedings ArticleDOI
16 Dec 2009
TL;DR: This prototype comprises a pocket projector, a mirror, and a camera contained in a pendant-like wearable device that recognizes and tracks the user's hand gestures and physical objects using computer-vision techniques.
Abstract: In this note, we present SixthSense, a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information. By using a tiny projector and a camera coupled in a pendant like mobile wearable device, SixthSense sees what the user sees and visually augments surfaces, walls or physical objects the user is interacting with; turning them into just-in-time information interfaces. SixthSense attempts to free information from its confines by seamlessly integrating it with the physical world.

402 citations
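
SixthSense tracks the user's fingertips, which wear colored markers, with computer-vision techniques. The sketch below shows one conventional way to implement that step in OpenCV, HSV color thresholding followed by a contour centroid; the color range and filter sizes are assumptions for illustration, not values from the paper.

```python
# Marker-based fingertip tracking via HSV color segmentation.
# The green HSV range below is an illustrative assumption.
import cv2
import numpy as np

def track_fingertip(frame_bgr, lo=(40, 80, 80), hi=(80, 255, 255)):
    """Return (x, y) of the largest green blob in the frame, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
```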

Journal ArticleDOI
TL;DR: An algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields.
Abstract: We present a practical and robust deep learning solution for capturing and rendering novel views of complex real world scenes for virtual exploration. Previous approaches either require intractably dense view sampling or provide little to no guidance for how users should sample views of a scene to reliably render high-quality novel views. Instead, we propose an algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we apply this bound to capture and render views of real world scenes that achieve the perceptual quality of Nyquist rate view sampling while using up to 4000X fewer views. We demonstrate our approach's practicality with an augmented reality smart-phone app that guides users to capture input images of a scene and viewers that enable real-time virtual exploration on desktop and mobile platforms.

400 citations
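
The final rendering step, blending the RGBA planes of a multiplane image into an output view, is standard back-to-front "over" compositing. The NumPy sketch below shows only that step; the per-plane homography warp into the novel viewpoint, which the actual method applies before blending, is omitted for brevity.

```python
# Back-to-front "over" compositing of MPI planes. The reprojection of
# each plane into the novel view is assumed to have happened already.
import numpy as np

def composite_mpi(planes):
    """planes: list of (H, W, 4) float arrays, ordered back to front,
    with RGB color and alpha in [0, 1]."""
    h, w, _ = planes[0].shape
    out = np.zeros((h, w, 3))
    for plane in planes:
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)  # "over" operator
    return out
```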

Proceedings ArticleDOI
03 Oct 2010
TL;DR: The interactions and algorithms unique to LightSpace are detailed, some initial observations of use are discussed, and future directions are suggested.
Abstract: Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may "pick up" the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and "drop" the object onto the wall by touching it with their other hand. We detail the interactions and algorithms unique to LightSpace, discuss some initial observations of use and suggest future directions.

398 citations
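
At render time, the calibration LightSpace depends on reduces to a standard pinhole projection: a 3D point in room coordinates, observed by a depth camera, is mapped to the pixel of the projector that can illuminate it. The sketch below illustrates that mapping with invented intrinsics and an identity pose, not LightSpace's actual calibration data.

```python
# Project a 3D room-coordinate point into projector pixel coordinates,
# modeling the projector as an inverse pinhole camera. K, R, t below
# are illustrative placeholders.
import numpy as np

def world_to_projector_pixel(X_world, K, R, t):
    """Return (u, v) projector pixel for a 3D point, or None if behind."""
    X_proj = R @ X_world + t            # room frame -> projector frame
    if X_proj[2] <= 0:
        return None                     # behind the projector
    u, v, _ = (K @ X_proj) / X_proj[2]  # perspective divide
    return (u, v)

# Example: a 1280x800 projector with ~1400 px focal length, identity pose.
K = np.array([[1400.0, 0.0, 640.0],
              [0.0, 1400.0, 400.0],
              [0.0, 0.0, 1.0]])
print(world_to_projector_pixel(np.array([0.1, 0.05, 2.0]),
                               K, np.eye(3), np.zeros(3)))  # (710.0, 435.0)
```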


Network Information
Related Topics (5)

Topic                      Papers  Citations  Related
User interface             85.4K   1.7M       86%
Feature (computer vision)  128.2K  1.7M       82%
Object detection           46.1K   1.3M       82%
Segmentation               63.2K   1.2M       82%
Image segmentation         79.6K   1.8M       81%
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2024  2
2023  1,885
2022  4,115
2021  2,941
2020  4,123
2019  4,549