
Showing papers on "Augmented reality published in 2018"


Proceedings ArticleDOI
18 Jun 2018
TL;DR: This paper introduces the first benchmark datasets specifically designed for analyzing the impact of day-night changes and weather and seasonal variations on visual localization, and proposes promising avenues for future work, including sequence-based localization approaches and better local features.
Abstract: Visual localization enables autonomous vehicles to navigate in their surroundings and augmented reality applications to link virtual to real worlds. Practical visual localization approaches need to be robust to a wide variety of viewing conditions, including day-night changes, as well as weather and seasonal variations, while providing highly accurate 6 degree-of-freedom (6DOF) camera pose estimates. In this paper, we introduce the first benchmark datasets specifically designed for analyzing the impact of such factors on visual localization. Using carefully created ground truth poses for query images taken under a wide variety of conditions, we evaluate the impact of various factors on 6DOF camera pose estimation accuracy through extensive experiments with state-of-the-art localization approaches. Based on our results, we draw conclusions about the difficulty of different conditions, showing that long-term localization is far from solved, and propose promising avenues for future work, including sequence-based localization approaches and the need for better local features. Our benchmark is available at visuallocalization.net.
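
The benchmark's headline numbers are the fraction of queries localized within paired position/orientation error bounds. A minimal sketch of how such 6DOF pose errors can be computed and binned, assuming estimated and ground-truth poses given as rotation matrices and translation vectors (the threshold values below are illustrative, in the spirit of the paper's fine/medium/coarse regimes):

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Translation error (m) and rotation error (deg) between two camera poses."""
    t_err = np.linalg.norm(t_est - t_gt)
    # Rotation error is the angle of the relative rotation R_est @ R_gt.T.
    cos_angle = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    r_err = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return t_err, r_err

# Illustrative (meters, degrees) bounds, from strict to permissive.
THRESHOLDS = [(0.25, 2.0), (0.5, 5.0), (5.0, 10.0)]

def recall_at_thresholds(errors):
    """errors: list of (t_err, r_err) pairs; returns one recall value per bound."""
    return [float(np.mean([t <= mt and r <= mr for t, r in errors]))
            for mt, mr in THRESHOLDS]
```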

595 citations


Journal ArticleDOI
TL;DR: A generic taxonomy consisting of VR/AR technology characteristics, application domains, safety scenarios and evaluation methods is proposed to assist both researchers and industrial practitioners with appreciating the research and practice frontier of VR/AR-CS and soliciting the latest VR/AR applications.

532 citations


Journal ArticleDOI
TL;DR: The present work discusses the evolution and changes over time in the use of VR in its main areas of application, with an emphasis on VR's expected future capacities and challenges.
Abstract: The recent appearance of low cost Virtual Reality (VR) technologies – like the Oculus Rift, the HTC Vive and the Sony PlayStation VR – and Mixed Reality Interfaces (MRITF) – like the Hololens – is attracting the attention of users and researchers, suggesting it may be the next largest stepping stone in technological innovation. However, the history of VR technology is longer than it may seem: the concept of VR was formulated in the 1960s and the first commercial VR tools appeared in the late 1980s. For this reason, during the last twenty years, hundreds of researchers explored the processes, effects and applications of this technology, producing thousands of scientific papers. What is the outcome of this significant research work? This paper aims to provide an answer to this question by exploring, using advanced scientometric techniques, the existing research corpus in the field. We collected all the existing articles about VR in the Web of Science Core Collection scientific database, and the resultant dataset contained 21,667 records for VR and 9,944 for AR. The bibliographic record contained various fields, such as author, title, abstract, country, and all the references (needed for the citation analysis). The network and cluster analysis of the literature showed a composite panorama characterized by evolution over time. Indeed, while until five years ago the main publication media on VR were both conference proceedings and journals, more recently journals constitute the main medium. Similarly, while computer science was at first the leading research field, nowadays clinical areas have grown, as has the number of countries involved in virtual reality research. The present work discusses the evolution of the use of virtual reality in the main areas of application, with an emphasis on virtual reality's expected future capacities and challenges. We conclude by considering the disruptive contribution that VR/AR/MRITF will be able to make in scientific fields, as well as in human communication and interaction, as already happened with the advent of mobile phones, by increasing the use and development of scientific applications (e.g. in clinical areas) and by modifying social communication and interaction among people.

479 citations


Journal ArticleDOI
TL;DR: The results indicate a high fragmentation among hardware, software and AR solutions, which leads to high complexity in selecting and developing AR systems.
Abstract: Augmented Reality (AR) technologies for supporting maintenance operations have been an academic research topic for around 50 years now. In the last decade, major progress has been made and AR technology is getting closer to being implemented in industry. In this paper, the advantages and disadvantages of AR have been explored and quantified in terms of Key Performance Indicators (KPI) for industrial maintenance. Unfortunately, some technical issues still prevent AR from being suitable for industrial applications. This paper aims to show, through the results of a systematic literature review, the current state of the art of AR in maintenance and the most relevant technical limitations. The analysis included filtering from a large number of publications down to 30 primary studies published between 1997 and 2017. The results indicate a high fragmentation among hardware, software and AR solutions, which leads to high complexity in selecting and developing AR systems. The results of the study show the areas where AR technology still lacks maturity. Future research directions are also proposed, encompassing hardware, tracking and user-AR interaction in industrial maintenance.

479 citations


Journal ArticleDOI
TL;DR: The article surveys the state-of-the-art in augmented-, virtual-, and mixed-reality systems as a whole and from a cultural heritage perspective and identifies specific application areas in digital cultural heritage and makes suggestions as to which technology is most appropriate in each case.
Abstract: A multimedia approach to the diffusion, communication, and exploitation of Cultural Heritage (CH) is a well-established trend worldwide. Several studies demonstrate that the use of new and combined media enhances how culture is experienced. The benefit is in terms of both number of people who can have access to knowledge and the quality of the diffusion of the knowledge itself. In this regard, CH uses augmented-, virtual-, and mixed-reality technologies for different purposes, including education, exhibition enhancement, exploration, reconstruction, and virtual museums. These technologies enable user-centred presentation and make cultural heritage digitally accessible, especially when physical access is constrained. A number of surveys of these emerging technologies have been conducted; however, they are either not domain specific or lack a holistic perspective in that they do not cover all the aspects of the technology. A review of these technologies from a cultural heritage perspective is therefore warranted. Accordingly, our article surveys the state-of-the-art in augmented-, virtual-, and mixed-reality systems as a whole and from a cultural heritage perspective. In addition, we identify specific application areas in digital cultural heritage and make suggestions as to which technology is most appropriate in each case. Finally, the article predicts future research directions for augmented and virtual reality, with a particular focus on interaction interfaces and explores the implications for the cultural heritage domain.

473 citations


Journal ArticleDOI
TL;DR: An experimental evaluation of edge computing and its enabling technologies in a selected use case represented by mobile gaming shows that edge computing is necessary to meet the latency requirements of applications involving virtual and augmented reality.
Abstract: The amount of data generated by sensors, actuators, and other devices in the Internet of Things (IoT) has substantially increased in the last few years. IoT data are currently processed in the cloud, mostly through computing resources located in distant data centers. As a consequence, network bandwidth and communication latency become serious bottlenecks. This paper advocates edge computing for emerging IoT applications that leverage sensor streams to augment interactive applications. First, we classify and survey current edge computing architectures and platforms, then describe key IoT application scenarios that benefit from edge computing. Second, we carry out an experimental evaluation of edge computing and its enabling technologies in a selected use case represented by mobile gaming. To this end, we consider a resource-intensive 3-D application as a paradigmatic example and evaluate the response delay in different deployment scenarios. Our experimental results show that edge computing is necessary to meet the latency requirements of applications involving virtual and augmented reality. We conclude by discussing what can be achieved with current edge computing platforms and how emerging technologies will impact the deployment of future IoT applications.
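
The deployment comparison at the heart of this evaluation boils down to measuring response delay against differently placed servers. A minimal sketch of such a probe, assuming hypothetical edge and cloud endpoints that echo whatever they receive (addresses below are placeholders, not from the paper):

```python
import socket
import time

# Hypothetical endpoints: an edge node on the local network, a distant cloud VM.
ENDPOINTS = {"edge": ("192.168.1.50", 9000), "cloud": ("cloud.example.com", 9000)}

def round_trip_ms(host, port, payload=b"frame", timeout=2.0):
    """Time one request/response round trip over TCP, in milliseconds."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        start = time.perf_counter()
        s.sendall(payload)
        s.recv(len(payload))  # assumes the server echoes the payload back
        return (time.perf_counter() - start) * 1000.0

for name, (host, port) in ENDPOINTS.items():
    print(f"{name}: {round_trip_ms(host, port):.1f} ms")
```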

448 citations


Journal ArticleDOI
TL;DR: This paper explores an intriguing scenario for view synthesis: extrapolating views from imagery captured by narrow-baseline stereo cameras, including VR cameras and now-widespread dual-lens camera phones, and proposes a learning framework that leverages a new layered representation that is called multiplane images (MPIs).
Abstract: The view synthesis problem---generating novel views of a scene from known imagery---has garnered recent attention due in part to compelling applications in virtual and augmented reality. In this paper, we explore an intriguing scenario for view synthesis: extrapolating views from imagery captured by narrow-baseline stereo cameras, including VR cameras and now-widespread dual-lens camera phones. We call this problem stereo magnification, and propose a learning framework that leverages a new layered representation that we call multiplane images (MPIs). Our method also uses a massive new data source for learning view extrapolation: online videos on YouTube. Using data mined from such videos, we train a deep network that predicts an MPI from an input stereo image pair. This inferred MPI can then be used to synthesize a range of novel views of the scene, including views that extrapolate significantly beyond the input baseline. We show that our method compares favorably with several recent view synthesis methods, and demonstrate applications in magnifying narrow-baseline stereo images.
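
Once predicted, an MPI is rendered by warping each RGBA plane into the target view (one homography per depth plane) and alpha-compositing the planes from back to front with the standard over operator. A minimal sketch of the compositing step, assuming the planes have already been warped into the target view:

```python
import numpy as np

def composite_mpi(planes):
    """planes: list of (H, W, 4) float RGBA layers, ordered back to front."""
    out = np.zeros_like(planes[0][..., :3])
    for rgba in planes:
        rgb, alpha = rgba[..., :3], rgba[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)  # "over" compositing
    return out
```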

410 citations


Journal ArticleDOI
TL;DR: In this paper, a systematic review of the literature on the use of augmented reality technology to support science, technology, engineering and mathematics (STEM) learning is presented, where a qualitative content analysis is used to investigate the general characteristics of AR applications in STEM education, instructional strategies and techniques deployed in the studies reviewed, and the evaluation approaches followed in the interventions.
Abstract: This study presents a systematic review of the literature on the use of augmented reality technology to support science, technology, engineering and mathematics (STEM) learning. It synthesizes a set of 28 publications from 2010 to 2017. A qualitative content analysis is used to investigate the general characteristics of augmented reality applications in STEM education, the instructional strategies and techniques deployed in the studies reviewed, and the evaluation approaches followed in the interventions. This review found that most augmented reality applications for STEM learning offered exploration or simulation activities. The applications reviewed offered a number of similar design features based on digital knowledge discovery mechanisms to consume information through interaction with digital elements. However, few studies provided students with assistance in carrying out learning activities. Most of the studies reviewed evaluated the effects of augmented reality technology in fostering students' conceptual understanding, followed by those that investigated affective learning outcomes. A number of suggestions for future research arose from this review. Researchers need to design features that allow students to acquire basic competences related to STEM disciplines, and future applications need to include metacognitive scaffolding and experimental support for inquiry-based learning activities. Finally, it would be useful to explore how augmented reality learning activities can be part of blended instructional strategies such as the flipped classroom.

395 citations


Journal ArticleDOI
TL;DR: It is found that the VR technologies adopted for CEET evolve over time, from desktop-based VR, immersive VR and 3D game-based VR to Building Information Modelling (BIM)-enabled VR.
Abstract: Virtual Reality (VR) has been rapidly recognized and implemented in construction engineering education and training (CEET) in recent years due to its benefits of providing an engaging and immersive environment. The objective of this review is to critically collect and analyze the VR applications in CEET, covering all VR-related journal papers published from 1997 to 2017. The review follows a three-stage systematic analysis of VR technologies, applications and future directions. It is found that the VR technologies adopted for CEET evolve over time, from desktop-based VR, immersive VR and 3D game-based VR to Building Information Modelling (BIM)-enabled VR. A sibling technology, Augmented Reality (AR), has also emerged in CEET adoptions in recent years. These technologies have been applied in architecture and design visualization, construction health and safety training, equipment and operational task training, as well as structural analysis. Future research directions, including the integration of VR with emerging education paradigms and visualization technologies, are also provided. The findings are useful for both researchers and educators seeking to integrate VR into their education and training programs to improve training performance.

375 citations


Proceedings ArticleDOI
16 Apr 2018
TL;DR: This work designs a framework that ties together front-end devices with more powerful backend “helpers” to allow deep learning to be executed locally or remotely in the cloud/edge, and designs an Android application that performs real-time object detection for AR applications.
Abstract: Deep learning shows great promise in providing more intelligence to augmented reality (AR) devices, but few AR apps use deep learning due to lack of infrastructure support. Deep learning algorithms are computationally intensive, and front-end devices cannot deliver sufficient compute power for real-time processing. In this work, we design a framework that ties together front-end devices with more powerful backend “helpers” (e.g., home servers) to allow deep learning to be executed locally or remotely in the cloud/edge. We consider the complex interaction between model accuracy, video quality, battery constraints, network data usage, and network conditions to determine an optimal offloading strategy. Our contributions are: (1) extensive measurements to understand the tradeoffs between video quality, network conditions, battery consumption, processing delay, and model accuracy; (2) a measurement-driven mathematical framework that efficiently solves the resulting combinatorial optimization problem; (3) an Android application that performs real-time object detection for AR applications, with experimental results that demonstrate the superiority of our approach.
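
The offloading decision itself can be framed as a small constrained search: enumerate candidate (execution target, model, input quality) configurations and pick the best one that fits the current budgets. A toy sketch of that logic; every option name and number below is invented for illustration, not measured:

```python
# Hypothetical configurations with made-up per-frame costs.
CONFIGS = [
    {"target": "device", "model": "tiny",  "latency_ms": 90,  "accuracy": 0.55, "energy_mj": 40},
    {"target": "edge",   "model": "small", "latency_ms": 60,  "accuracy": 0.70, "energy_mj": 15},
    {"target": "edge",   "model": "large", "latency_ms": 85,  "accuracy": 0.82, "energy_mj": 15},
    {"target": "cloud",  "model": "large", "latency_ms": 140, "accuracy": 0.85, "energy_mj": 20},
]

def best_config(latency_budget_ms, energy_budget_mj):
    """Maximize accuracy subject to latency and energy budgets."""
    feasible = [c for c in CONFIGS
                if c["latency_ms"] <= latency_budget_ms
                and c["energy_mj"] <= energy_budget_mj]
    return max(feasible, key=lambda c: c["accuracy"]) if feasible else None

print(best_config(latency_budget_ms=100, energy_budget_mj=30))
```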

346 citations


Journal ArticleDOI
TL;DR: A systematic literature review of immersive technology research in diverse settings, including education, marketing, business, and healthcare, identifies existing gaps in the current literature and suggests future research directions with specific research agendas.


Journal ArticleDOI
TL;DR: This work proposes an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models, and introduces a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects.
Abstract: The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. Through an extensive set of experiments, we conclude the right set of parameters to produce augmented data which can maximally enhance the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that the models trained on augmented imagery generalize better than those trained on fully synthetic data or models trained on limited amounts of annotated real data.
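
The augmentation pipeline hinges on two steps the abstract describes: choosing a semantically plausible location and blending the rendered object into the real photo. A toy sketch of both, assuming a per-pixel semantic map with a known road label (all names here are hypothetical, and the paper's lighting/reflection modeling is omitted):

```python
import numpy as np

def sample_road_placement(semantic_map, road_id, obj_h, obj_w, rng):
    """Anchor the object's bottom-center on a randomly sampled road pixel."""
    H, W = semantic_map.shape
    ys, xs = np.where(semantic_map == road_id)
    i = rng.integers(len(ys))
    top = int(np.clip(ys[i] - obj_h, 0, H - obj_h))
    left = int(np.clip(xs[i] - obj_w // 2, 0, W - obj_w))
    return top, left

def paste_object(background, render_rgb, render_alpha, top, left):
    """Alpha-blend a rendered object crop onto the real image at (top, left)."""
    h, w = render_alpha.shape
    alpha = render_alpha[..., None]  # (h, w, 1), values in [0, 1]
    roi = background[top:top + h, left:left + w].astype(np.float32)
    blended = alpha * render_rgb + (1.0 - alpha) * roi
    background[top:top + h, left:left + w] = blended.astype(background.dtype)
    return background
```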

Journal ArticleDOI
TL;DR: In this paper, the authors proposed an acceptance model for augmented reality in the context of urban heritage tourism, and five focus groups with young British female tourists visiting Dublin and experiencing a mobile AR application were conducted.
Abstract: The latest mobile technologies have revolutionized the way people experience their environment. Recent research explored the opportunities of using augmented reality (AR) to enhance user experience; however, there is only limited research on users' acceptance of AR in the tourism context. The technology acceptance model is the predominant theory for researching technology acceptance. Previous researchers proposed external dimensions based on the secondary literature; however, they missed the opportunity to integrate context-specific dimensions. This paper therefore aims to propose an AR acceptance model in the context of urban heritage tourism. Five focus groups, with young British female tourists visiting Dublin and experiencing a mobile AR application, were conducted. The data were analysed using thematic analysis and revealed seven dimensions that should be incorporated into AR acceptance research, including information quality, system quality, costs of use, recommendations, ...

Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this paper, a fully convolutional neural network (FCN) densely regresses scene coordinates defining the correspondence between a single RGB image and a given 3D environment, from which the 6D camera pose is estimated; the network is prepended to a new end-to-end trainable pipeline.
Abstract: Popular research areas like autonomous driving and augmented reality have renewed the interest in image-based camera localization. In this work, we address the task of predicting the 6D camera pose from a single RGB image in a given 3D environment. With the advent of neural networks, previous works have either learned the entire camera localization process, or multiple components of a camera localization pipeline. Our key contribution is to demonstrate and explain that learning a single component of this pipeline is sufficient. This component is a fully convolutional neural network for densely regressing so-called scene coordinates, defining the correspondence between the input image and the 3D scene space. The neural network is prepended to a new end-to-end trainable pipeline. Our system is efficient, highly accurate, robust in training, and exhibits outstanding generalization capabilities. It exceeds state-of-the-art consistently on indoor and outdoor datasets. Interestingly, our approach surpasses existing techniques even without utilizing a 3D model of the scene during training, since the network is able to discover 3D scene geometry automatically, solely from single-view constraints.
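
Scene-coordinate regression reduces camera localization to a classical geometry problem: every pixel carries a predicted 3D scene point, so the pose follows from 2D-3D correspondences via PnP inside a RANSAC loop. A minimal sketch of that final step using OpenCV, assuming the dense predictions come from the network (this illustrates the principle, not the paper's differentiable pipeline):

```python
import cv2
import numpy as np

def pose_from_scene_coords(pixels_2d, scene_coords_3d, K):
    """Estimate a 6D pose from pixels (N, 2) and scene coordinates (N, 3).

    K is the 3x3 camera intrinsics matrix; arrays should be float32/float64.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        scene_coords_3d, pixels_2d, K, distCoeffs=None,
        reprojectionError=3.0, iterationsCount=1000)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    return R, tvec, inliers
```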

Journal ArticleDOI
TL;DR: This article presents for the first time a survey of visual SLAM and SfM techniques that are targeted toward operation in dynamic environments and identifies three main problems: how to perform reconstruction, how to segment and track dynamic objects, and how to achieve joint motion segmentation and reconstruction.
Abstract: In the last few decades, Structure from Motion (SfM) and visual Simultaneous Localization and Mapping (visual SLAM) techniques have gained significant interest from both the computer vision and robotics communities. Many variants of these techniques have started to make an impact in a wide range of applications, including robot navigation and augmented reality. However, despite some remarkable results in these areas, most SfM and visual SLAM techniques operate based on the assumption that the observed environment is static. When faced with moving objects, however, overall system accuracy can be jeopardized. In this article, we present for the first time a survey of visual SLAM and SfM techniques that are targeted toward operation in dynamic environments. We identify three main problems: how to perform reconstruction (robust visual SLAM), how to segment and track dynamic objects, and how to achieve joint motion segmentation and reconstruction. Based on this categorization, we provide a comprehensive taxonomy of existing approaches. Finally, the advantages and disadvantages of each solution class are critically discussed from the perspective of practicality and robustness.

Journal ArticleDOI
TL;DR: An IAR system architecture is proposed that combines cloudlets and fog computing, reducing response latency and accelerating rendering tasks while offloading compute-intensive tasks from the cloud.
Abstract: Shipbuilding companies are upgrading their inner workings in order to create Shipyards 4.0, where the principles of Industry 4.0 are paving the way to further digitalized and optimized processes in an integrated network. Among the different Industry 4.0 technologies, this paper focuses on augmented reality, whose application in the industrial field has led to the concept of industrial augmented reality (IAR). This paper first describes the basics of IAR and then carries out a thorough analysis of the latest IAR systems for industrial and shipbuilding applications. Then, in order to build a practical IAR system for shipyard workers, the main hardware and software solutions are compared. Finally, as a conclusion after reviewing all the aspects related to IAR for shipbuilding, this paper proposes an IAR system architecture that combines cloudlets and fog computing, which reduce response latency and accelerate rendering tasks while offloading compute-intensive tasks from the cloud.

Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this paper, a joint 3D geometric and semantic understanding of the world is used for robust visual localization under a wide range of viewing conditions, enabling it to succeed under conditions where previous approaches failed.
Abstract: Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.

Book ChapterDOI
08 Sep 2018
TL;DR: Complex-YOLO, a state-of-the-art real-time 3D object detection network operating on point clouds only, is introduced, together with a specific Euler-Region-Proposal Network (E-RPN) that estimates object pose by adding an imaginary and a real fraction to the regression network.
Abstract: Lidar-based 3D object detection is inevitable for autonomous driving, because it directly links to environmental understanding and therefore builds the base for prediction and motion planning. Inferring from highly sparse 3D data in real time is an ill-posed problem for many other application areas besides automated vehicles, e.g. augmented reality, personal robotics or industrial automation. We introduce Complex-YOLO, a state-of-the-art real-time 3D object detection network operating on point clouds only. In this work, we describe a network that expands YOLOv2, a fast 2D standard object detector for RGB images, with a specific complex regression strategy to estimate multi-class 3D boxes in Cartesian space. To this end, we propose a specific Euler-Region-Proposal Network (E-RPN) that estimates the pose of the object by adding an imaginary and a real fraction to the regression network. This yields a closed complex space and avoids the singularities that occur with single-angle estimation. The E-RPN helps the network generalize well during training. Our experiments on the KITTI benchmark suite show that we outperform current leading methods for 3D object detection, specifically in terms of efficiency. We achieve state-of-the-art results for cars, pedestrians and cyclists while being more than five times faster than the fastest competitor. Further, our model is capable of estimating all eight KITTI classes, including vans, trucks and sitting pedestrians, simultaneously with high accuracy.
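
The E-RPN's angle trick is easy to state concretely: instead of regressing a raw yaw angle, whose loss is discontinuous at +/- pi, the network predicts a real and an imaginary part, and the angle is decoded with atan2. A minimal sketch of the decode step and a wrap-around-free loss:

```python
import numpy as np

def decode_yaw(re, im):
    """Recover the yaw angle (radians) from predicted complex components."""
    return np.arctan2(im, re)

def angle_loss(re, im, yaw_gt):
    """Compare the predicted (re, im) against the ground-truth unit vector.
    Unlike a direct loss on the angle, this is smooth across the +/- pi wrap."""
    return (re - np.cos(yaw_gt)) ** 2 + (im - np.sin(yaw_gt)) ** 2
```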

Journal ArticleDOI
17 Apr 2018
TL;DR: It is found that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing.
Abstract: Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.

Proceedings ArticleDOI
19 Apr 2018
TL;DR: Mini-Me, an adaptive avatar for enhancing Mixed Reality (MR) remote collaboration between a local Augmented Reality (AR) user and a remote Virtual Reality (VR) user is presented.
Abstract: We present Mini-Me, an adaptive avatar for enhancing Mixed Reality (MR) remote collaboration between a local Augmented Reality (AR) user and a remote Virtual Reality (VR) user. The Mini-Me avatar represents the VR user's gaze direction and body gestures while it transforms in size and orientation to stay within the AR user's field of view. A user study was conducted to evaluate Mini-Me in two collaborative scenarios: an asymmetric remote expert in VR assisting a local worker in AR, and a symmetric collaboration in urban planning. We found that the presence of the Mini-Me significantly improved Social Presence and the overall experience of MR collaboration.

Proceedings ArticleDOI
01 Oct 2018
TL;DR: MaskFusion is a real-time, object-aware, semantic and dynamic RGB-D SLAM system that goes beyond traditional systems, which output a purely geometric map of a static scene.
Abstract: We present MaskFusion, a real-time, object-aware, semantic and dynamic RGB-D SLAM system that goes beyond traditional systems which output a purely geometric map of a static scene. MaskFusion recognizes, segments and assigns semantic class labels to different objects in the scene, while tracking and reconstructing them even when they move independently from the camera. As an RGB-D camera scans a cluttered scene, image-based instance-level semantic segmentation creates semantic object masks that enable real-time object recognition and the creation of an object-level representation for the world map. Unlike previous recognition-based SLAM systems, MaskFusion does not require known models of the objects it can recognize, and can deal with multiple independent motions. MaskFusion takes full advantage of using instance-level semantic segmentation to enable semantic labels to be fused into an object-aware map, unlike recent semantics-enabled SLAM systems that perform voxel-level semantic segmentation. We show augmented-reality applications that demonstrate the unique features of the map output by MaskFusion: instance-aware, semantic and dynamic. Code will be made available.
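
A key bookkeeping step in any such object-level SLAM system is associating each new frame's instance masks with the object models already in the map. A toy sketch of greedy IoU-based association, offered only to make the idea concrete (it is not MaskFusion's actual data-association code):

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks with the same shape."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def associate_masks(frame_masks, object_masks, iou_thresh=0.3):
    """Match each new mask to its best existing object, or None to spawn one."""
    matches = []
    for i, m in enumerate(frame_masks):
        ious = [mask_iou(m, o) for o in object_masks]
        j = int(np.argmax(ious)) if ious else -1
        matches.append((i, j if j >= 0 and ious[j] >= iou_thresh else None))
    return matches
```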

Book ChapterDOI
01 Jan 2018
TL;DR: In this article, a comparative chronological analysis of AR and VR research and applications in a retail context is presented to provide an up-to-date perspective, incorporating issues relating to motives, applications and implementation of AR and VR by retailers, as well as consumer acceptance.
Abstract: Augmented reality (AR) and virtual reality (VR) have emerged as rapidly developing technologies used in both physical and online retailing to enhance the selling environment and shopping experience. However, academic research on, and practical applications of, AR and VR in retail are still fragmented, and this state of affairs is arguably attributable to the interdisciplinary origins of the topic. Undertaking a comparative chronological analysis of AR and VR research and applications in a retail context, this paper synthesises current debates to provide an up-to-date perspective—incorporating issues relating to motives, applications and implementation of AR and VR by retailers, as well as consumer acceptance—and to frame the basis for a future research agenda.

Journal ArticleDOI
TL;DR: In this article, the authors examined the impact of information type (dynamic verbal vs. dynamic visual cues) and augmenting immersive scenes (high vs. low virtual presence) on visitors' evaluation of the AR-facilitated museum experience and their subsequent purchase intentions.

Proceedings ArticleDOI
26 Feb 2018
TL;DR: A new design space for communicating robot motion intent is explored by investigating how augmented reality (AR) might mediate human-robot interactions and developing a series of explicit and implicit designs for visually signaling robot motion intent using AR.
Abstract: Humans coordinate teamwork by conveying intent through social cues, such as gestures and gaze behaviors. However, these methods may not be possible for appearance-constrained robots that lack anthropomorphic or zoomorphic features, such as aerial robots. We explore a new design space for communicating robot motion intent by investigating how augmented reality (AR) might mediate human-robot interactions. We develop a series of explicit and implicit designs for visually signaling robot motion intent using AR, which we evaluate in a user study. We found that several of our AR designs significantly improved objective task efficiency over a baseline in which users only received physically-embodied orientation cues. In addition, our designs offer several trade-offs in terms of intent clarity and user perceptions of the robot as a teammate.

Journal ArticleDOI
TL;DR: In cultural heritage sites around the globe, augmented reality is being utilized as a tool to provide visitors with better experiences while preserving the integrity of the sites.
Abstract: In cultural heritage sites around the globe, augmented reality (AR) is being utilized as a tool to provide visitors with better experiences while preserving the integrity of the sites. However, lit...

Journal ArticleDOI
TL;DR: In this article, the authors provide a theoretical reflection on the phenomenon of embodiment relation in technological mediation and assess the embodiment of wearable augmented reality technology in a tourism attraction, concluding that technology embodiment is a multidimensional construct consisting of ownership, location and agency.
Abstract: The increasing use of wearable devices for tourism purposes sets the stage for a critical discussion on technological mediation in tourism experiences. This article provides a theoretical reflection on the phenomenon of embodiment relation in technological mediation and then assesses the embodiment of wearable augmented reality technology in a tourism attraction. The findings suggest that technology embodiment is a multidimensional construct consisting of ownership, location, and agency. These support the concept of technology withdrawal, where technology disappears as it becomes part of human actions, and contest the interplay of subjectivity and intentionality between humans and technology in situated experiences such as tourism. It was also found that technology embodiment affects enjoyment and enhances experience with tourism attractions.

Journal ArticleDOI
TL;DR: Core findings are that expected utilitarian, hedonic, and symbolic benefits drive consumers' reactions to ARSGs and that the extent to which ARSGs threaten other people's, but not one's own, privacy can strongly influence users' decision making.

Proceedings ArticleDOI
19 Apr 2018
TL;DR: This work investigates precise, multimodal selection techniques using head motion and eye gaze for augmented reality applications, including compact menus with deep structure, and a proof-of-concept method for on-line correction of calibration drift.
Abstract: Head and eye movement can be leveraged to improve the user's interaction repertoire for wearable displays. Head movements are deliberate and accurate, and provide the current state-of-the-art pointing technique. Eye gaze can potentially be faster and more ergonomic, but suffers from low accuracy due to calibration errors and drift of wearable eye-tracking sensors. This work investigates precise, multimodal selection techniques using head motion and eye gaze. A comparison of speed and pointing accuracy reveals the relative merits of each method, including the achievable target size for robust selection. We demonstrate and discuss example applications for augmented reality, including compact menus with deep structure, and a proof-of-concept method for on-line correction of calibration drift.
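
The paper's on-line drift-correction idea can be made concrete with a simple rule: whenever a selection is confirmed (for example, after a head-pointing refinement), the discrepancy between the raw gaze estimate and the confirmed target is folded into a running offset. A toy sketch, with the smoothing factor an invented parameter rather than a value from the paper:

```python
import numpy as np

class GazeDriftCorrector:
    """Maintains a slowly adapting 2D offset applied to raw gaze estimates."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha          # smoothing factor (illustrative value)
        self.offset = np.zeros(2)   # current correction, display coordinates

    def correct(self, raw_gaze):
        return np.asarray(raw_gaze) + self.offset

    def on_confirmed_selection(self, raw_gaze, target):
        # The confirmed target reveals what the gaze estimate should have been;
        # blend that observed error into the running offset.
        error = np.asarray(target) - np.asarray(raw_gaze)
        self.offset = (1.0 - self.alpha) * self.offset + self.alpha * error
```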

Journal ArticleDOI
TL;DR: This paper gives an overview of the research performed since redirected walking was first practically demonstrated 15 years ago.
Abstract: Virtual reality users wearing head-mounted displays can experience the illusion of walking in any direction for infinite distance while, in reality, they are walking a curvilinear path in physical space. This is accomplished by introducing unnoticeable rotations to the virtual environment, a technique called redirected walking. This paper gives an overview of the research that has been performed since redirected walking was first practically demonstrated 15 years ago.
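
The mechanism is commonly implemented as a rotation gain: each frame, the user's real head rotation is mapped to a slightly scaled virtual rotation, with the gain kept close enough to 1.0 that the manipulation stays below the perceptual detection threshold. A minimal sketch; the gain value is illustrative, while actual thresholds come from the studies this survey covers:

```python
def redirect_yaw(virtual_yaw, real_yaw_delta, gain=1.1):
    """Map a real per-frame head rotation to an amplified virtual rotation.

    With gain > 1.0 the user physically turns less than they do virtually,
    which steers their real-world path onto a curve; 1.1 is illustrative."""
    return virtual_yaw + gain * real_yaw_delta

# Example: accumulate per-frame head yaw changes (degrees).
virtual_yaw = 0.0
for real_delta in [1.0, 2.0, -0.5]:
    virtual_yaw = redirect_yaw(virtual_yaw, real_delta)
print(virtual_yaw)  # 2.75 virtual degrees for 2.5 real degrees
```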