
Showing papers on "Mixed reality published in 2019"


Journal ArticleDOI
TL;DR: A new taxonomy of technologies is proposed, namely the “EPI Cube”, which allows academics and managers to classify all technologies, current and potential, that might support or empower customer experiences, but can also produce new experiences along the customer journey.

531 citations


Proceedings ArticleDOI
05 Aug 2019
TL;DR: This work designs a system that enables high-accuracy object detection for commodity AR/MR systems running at 60 fps; it employs low-latency offloading techniques, decouples the rendering pipeline from the offloading pipeline, and uses a fast object tracking method to maintain detection accuracy.
Abstract: Most existing Augmented Reality (AR) and Mixed Reality (MR) systems are able to understand the 3D geometry of the surroundings but lack the ability to detect and classify complex objects in the real world. Such capabilities can be enabled with deep Convolutional Neural Networks (CNN), but it remains difficult to execute large networks on mobile devices. Offloading object detection to the edge or cloud is also very challenging due to the stringent requirements on high detection accuracy and low end-to-end latency. The long latency of existing offloading techniques can significantly reduce the detection accuracy due to changes in the user's view. To address the problem, we design a system that enables high accuracy object detection for commodity AR/MR systems running at 60 fps. The system employs low latency offloading techniques, decouples the rendering pipeline from the offloading pipeline, and uses a fast object tracking method to maintain detection accuracy. The results show that the system can improve the detection accuracy by 20.2%-34.8% for the object detection and human keypoint detection tasks, and only requires 2.24 ms latency for object tracking on the AR device. Thus, the system leaves more time and computational resources to render virtual elements for the next frame and enables higher quality AR/MR experiences.
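The decoupling this abstract describes — rendering proceeds every frame while detection results arrive asynchronously from the edge, and a cheap tracker carries stale detections forward to the current view — can be sketched roughly as follows. All names are illustrative; this is not the paper's implementation:

```python
import queue
import threading
import time

class DecoupledDetectionPipeline:
    """Sketch: the render loop never blocks on detection. Frames are
    offloaded in a background thread; between results, a fast tracker
    updates the last known boxes so they stay aligned with the view."""

    def __init__(self, offload_fn, track_fn):
        self.offload_fn = offload_fn   # slow: frame -> boxes (edge server)
        self.track_fn = track_fn       # fast: (boxes, frame) -> shifted boxes
        self.latest_boxes = []
        self.pending = queue.Queue(maxsize=1)
        threading.Thread(target=self._offload_loop, daemon=True).start()

    def _offload_loop(self):
        while True:
            frame = self.pending.get()
            self.latest_boxes = self.offload_fn(frame)  # arrives late

    def on_frame(self, frame):
        # Submit the newest frame for offloading, dropping older ones
        # so the queue itself never adds latency.
        try:
            self.pending.put_nowait(frame)
        except queue.Full:
            pass
        # Track the last known boxes onto the current frame (cheap).
        self.latest_boxes = self.track_fn(self.latest_boxes, frame)
        return self.latest_boxes  # consumed by the renderer this frame
```

Dropping all but the newest pending frame is what lets the renderer hold 60 fps regardless of network delay: the offloading path can be arbitrarily slow without ever stalling a frame.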

371 citations


Proceedings ArticleDOI
02 May 2019
TL;DR: The goal of this paper is to support classification and discussion of MR applications' design and to give researchers a better means to contextualize their work within the increasingly fragmented MR landscape.
Abstract: What is Mixed Reality (MR)? To revisit this question given the many recent developments, we conducted interviews with ten AR/VR experts from academia and industry, as well as a literature survey of 68 papers. We find that, while there are prominent examples, there is no universally agreed-upon, one-size-fits-all definition of MR. Rather, we identified six partially competing notions from the literature and experts' responses. We then started to isolate the different aspects of reality relevant for MR experiences, going beyond the primarily visual notions and extending to audio, motion, haptics, taste, and smell. We distill our findings into a conceptual framework with seven dimensions to characterize MR applications in terms of the number of environments, number of users, level of immersion, level of virtuality, degree of interaction, input, and output. Our goal with this paper is to support classification and discussion of MR applications' design and to give researchers a better means to contextualize their work within the increasingly fragmented MR landscape.

210 citations


Proceedings ArticleDOI
02 May 2019
TL;DR: A hybrid prototype that mixes 360 video and 3D reconstruction for remote collaboration is developed, preserving the benefits of both systems while reducing the drawbacks of each.
Abstract: Remote Collaboration using Virtual Reality (VR) and Augmented Reality (AR) has recently become a popular way for people from different places to work together. Local workers can collaborate with remote helpers by sharing 360-degree live video or a 3D virtual reconstruction of their surroundings. However, each of these techniques has benefits and drawbacks. In this paper we explore mixing 360 video and 3D reconstruction together for remote collaboration, preserving the benefits of both systems while reducing the drawbacks of each. We developed a hybrid prototype and conducted a user study to compare the benefits and problems of using 360 or 3D alone, to clarify the need for mixing the two, and to evaluate the prototype system. We found participants performed significantly better on collaborative search tasks in 360 and felt higher social presence, yet 3D also showed potential to complement it. Participant feedback collected after trying our hybrid system provided directions for improvement.

117 citations


Proceedings ArticleDOI
17 Oct 2019
TL;DR: An optimization-based approach for Mixed Reality (MR) systems that automatically controls when and where applications are shown, and how much information they display, and that can be solved efficiently in real time.
Abstract: We present an optimization-based approach for Mixed Reality (MR) systems to automatically control when and where applications are shown, and how much information they display. Currently, content creators design applications, and users then manually adjust which applications are visible and how much information they show. This choice has to be adjusted every time users switch context, i.e., whenever they switch their task or environment. Since context switches happen many times a day, we believe that MR interfaces require automation to alleviate this problem. We propose a real-time approach to automate this process based on users' current cognitive load and knowledge about their task and environment. Our system adapts which applications are displayed, how much information they show, and where they are placed. We formulate this problem as a mix of rule-based decision making and combinatorial optimization which can be solved efficiently in real-time. We present a set of proof-of-concept applications showing that our approach is applicable in a wide range of scenarios. Finally, we show in a dual-task evaluation that our approach decreased secondary task interactions by 36%.
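The paper formulates adaptation as rule-based decisions plus combinatorial optimization; the selection step can be pictured as a small utility-maximization problem: pick a level of detail (LoD) per application that maximizes task utility under a cognitive-load budget. The utilities, costs, and application names below are invented for illustration, and brute force stands in for whatever solver the authors actually use:

```python
from itertools import product

def adapt_interface(apps, budget):
    """Pick one level of detail (LoD) per app, maximizing total utility
    for the current task subject to a cognitive-load budget.
    apps maps name -> list of (utility, cost) pairs, one per LoD;
    LoD 0 means 'hidden' with (0, 0)."""
    names = list(apps)
    best, best_choice = -1.0, None
    for choice in product(*(range(len(apps[n])) for n in names)):
        utility = sum(apps[n][lod][0] for n, lod in zip(names, choice))
        cost = sum(apps[n][lod][1] for n, lod in zip(names, choice))
        if cost <= budget and utility > best:
            best, best_choice = utility, dict(zip(names, choice))
    return best_choice

apps = {
    "navigation": [(0, 0), (3, 2), (5, 4)],   # hidden / minimal / full
    "messages":   [(0, 0), (2, 2), (4, 5)],
    "music":      [(0, 0), (1, 1), (2, 3)],
}
print(adapt_interface(apps, budget=7))
# → {'navigation': 2, 'messages': 1, 'music': 1}
```

With a lower budget (higher measured cognitive load) the same solver hides or shrinks applications automatically, which is the adaptation behavior the abstract describes; a real-time system would replace the exhaustive search with a faster combinatorial method.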

103 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluated the effectiveness of training using traditional tools and computer-aided technologies (e.g., serious games, computer-generated simulations, virtual reality, augmented reality, and mixed reality) on the well-being of individuals.
Abstract: For workers, the exposure to on-site hazards can result in fatalities and serious injuries. To improve safety outcomes, different approaches have been implemented for health and safety training in the construction sector, such as traditional tools and computer-aided technologies (e.g., serious games, computer-generated simulations, virtual reality, augmented reality, and mixed reality). However, the effectiveness of these approaches has barely been explored. In order to bridge this gap, a systematic review of existing studies was conducted. Unlike previous review studies in this field that focused on uncovering the characteristics and challenges of the technologies, this study mainly evaluated the effectiveness of training using traditional tools and computer-aided technologies on the well-being of individuals. Measures of the effectiveness included knowledge acquisition, unsafe behaviour alteration, and injury rate reduction. Results indicated that: (1) the effectiveness of traditional tools is sufficiently supported by statistical evidence; and (2) the use of computer-aided technologies has evidence to support its effectiveness, but more solid evidence is required to support this statement. The systematic review also revealed that the overall performance of computer-aided technologies is superior in several technical aspects compared to traditional tools, namely, representing the actual workplace situations, providing text-free interfaces, and having better user engagement.

99 citations


Journal ArticleDOI
02 Mar 2019-Sensors
TL;DR: This paper questions whether these devices could substitute for more expensive sensors in industry or on the market; while in general the answer is yes, it is not as easy as it seems.
Abstract: As the need for sensors increases with the inception of virtual reality, augmented reality and mixed reality, the purpose of this paper is to evaluate the suitability of the two Kinect devices and the Leap Motion Controller. When evaluating the suitability, the authors’ focus was on the state of the art, device comparison, accuracy, precision, existing gesture recognition algorithms and on the price of the devices. The aim of this study is to give an insight into whether these devices could substitute for more expensive sensors in the industry or on the market. While in general the answer is yes, it is not as easy as it seems: There are significant differences between the devices, even between the two Kinects, such as different measurement ranges, error distributions on each axis and changing depth precision relative to distance.

97 citations


Journal ArticleDOI
22 Feb 2019-Sensors
TL;DR: This work verifies the state-of-the-art as it currently applies to eight digital domains: Autonomous vehicles and robotics; artificial intelligence; big data; virtual reality, augmented and mixed reality; internet of things; the cloud and edge computing; digital security; and 3D printing and additive engineering.
Abstract: Although maritime transport is the backbone of world commerce, its digitalization lags significantly behind when we consider some basic facts. This work verifies the state-of-the-art as it currently applies to eight digital domains: Autonomous vehicles and robotics; artificial intelligence; big data; virtual reality, augmented and mixed reality; internet of things; the cloud and edge computing; digital security; and 3D printing and additive engineering. It also provides insight into each of the three sectors into which this industry has been divided: Ship design and shipbuilding; shipping; and ports. The work, based on a systematic literature review, demonstrates that there are domains on which almost no formal study has been done thus far and concludes that there are major areas that require attention in terms of research. It also illustrates the increasing interest on the subject, arising from the necessity of raising the maritime transport industry to the same level of digitalization as other industries.

93 citations


Journal ArticleDOI
08 Feb 2019
TL;DR: This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues, with the combination of the FoV frustum and the Head-gaze ray performing best.
Abstract: Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator's attention and actions. In a user study (n = 32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray against a baseline condition showing only virtual representations of each collaborator's head and hands. Through a collaborative object finding and placing task, the results showed that awareness cues significantly improved user performance, usability, and subjective preferences, with the combination of the FoV frustum and the Head-gaze ray being best. This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues.

93 citations


Proceedings ArticleDOI
02 May 2019
TL;DR: The results show users prefer a shoulder-mounted camera view, while a view frustum with a complementary avatar is a good visualization for the Miniature virtual representation.
Abstract: We propose a multi-scale Mixed Reality (MR) collaboration between the Giant, a local Augmented Reality user, and the Miniature, a remote Virtual Reality user, in Giant-Miniature Collaboration (GMC). The Miniature is immersed in a 360-video shared by the Giant, who can physically manipulate the Miniature through a tangible interface, a combined 360-camera with a 6 DOF tracker. We implemented a prototype system as a proof of concept and conducted a user study (n=24) comprising four parts comparing: A) two types of virtual representations, B) three levels of Miniature control, C) three levels of 360-video view dependencies, and D) four 360-camera placement positions on the Giant. The results show users prefer a shoulder-mounted camera view, while a view frustum with a complementary avatar is a good visualization for the Miniature virtual representation. From the results, we give design recommendations and demonstrate an example Giant-Miniature Interaction.

84 citations


Proceedings ArticleDOI
11 Mar 2019
TL;DR: This work explores how advances in augmented reality (AR) may enable the design of novel teleoperation interfaces that increase operation effectiveness, support the user in conducting concurrent work, and decrease stress, and presents two AR interfaces using such a surrogate: one focused on real-time control and one inspired by waypoint delegation.
Abstract: Teleoperation remains a dominant control paradigm for human interaction with robotic systems. However, teleoperation can be quite challenging, especially for novice users. Even experienced users may face difficulties or inefficiencies when operating a robot with unfamiliar and/or complex dynamics, such as industrial manipulators or aerial robots, as teleoperation forces users to focus on low-level aspects of robot control, rather than higher level goals regarding task completion, data analysis, and problem solving. We explore how advances in augmented reality (AR) may enable the design of novel teleoperation interfaces that increase operation effectiveness, support the user in conducting concurrent work, and decrease stress. Our key insight is that AR may be used in conjunction with prior work on predictive graphical interfaces such that a teleoperator controls a virtual robot surrogate, rather than directly operating the robot itself, providing the user with foresight regarding where the physical robot will end up and how it will get there. We present the design of two AR interfaces using such a surrogate: one focused on real-time control and one inspired by waypoint delegation. We compare these designs against a baseline teleoperation system in a laboratory experiment in which novice and expert users piloted an aerial robot to inspect an environment and analyze data. Our results revealed that the augmented reality prototypes provided several objective and subjective improvements, demonstrating the promise of leveraging AR to improve human-robot interactions.
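The virtual-surrogate idea — the user steers a virtual robot that previews the motion while the physical robot follows behind at bounded speed — can be sketched in a single control-tick function. This is a hypothetical sketch, not the paper's system:

```python
def step_follower(physical, surrogate, max_step=0.05):
    """One control tick: move the physical robot's pose toward the
    surrogate's pose, capped at max_step per tick. The gap between the
    two is exactly the 'foresight' the AR surrogate gives the operator:
    the surrogate shows where the robot will end up and its path there."""
    delta = [s - p for s, p in zip(surrogate, physical)]
    dist = sum(d * d for d in delta) ** 0.5
    if dist <= max_step:
        return list(surrogate)          # caught up with the surrogate
    # Step along the straight line toward the surrogate pose.
    return [p + d / dist * max_step for p, d in zip(physical, delta)]
```

The waypoint-delegation interface in the paper would amount to feeding this follower a queue of surrogate poses instead of one continuously updated pose; a real system would use the robot's motion planner rather than straight-line steps.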

Journal ArticleDOI
TL;DR: It is argued that it is necessary to define a rigorous methodological framework for the use of XRs in marketing, and a conceptual framework is provided to organize this disparate body of work.
Abstract: Marketing scholars and practitioners are showing increasing interest in Extended Reality (XR) technologies (XRs), such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), as very promising technological tools for producing satisfactory consumer experiences that mirror those experienced in physical stores. However, most of the studies published to date lack a certain measure of methodological rigor in their characterization of XR technologies and in the assessment techniques used to characterize the consumer experience, which limits the generalization of the results. We argue that it is necessary to define a rigorous methodological framework for the use of XRs in marketing. This article reviews the literature on XRs in marketing, and provides a conceptual framework to organize this disparate body of work.

Journal ArticleDOI
01 Feb 2019
TL;DR: The purpose of this review article is to introduce the application of MR technology in the medical field and prospect its trend in the future.
Abstract: Mixed reality (MR) technology is a new digital holographic image technology that emerged in the field of graphics after virtual reality (VR) and augmented reality (AR), forming a new interdisciplinary frontier. As a new generation of technology, MR has attracted great attention from clinicians in recent years. The emergence of MR will bring about revolutionary changes in medical education training, medical research, medical communication, and clinical treatment. At present, MR technology has become a popular frontline information technology for medical applications. With the popularization of digital technology in the medical field, the development prospects of MR are inestimable. The purpose of this review article is to introduce the application of MR technology in the medical field and prospect its trend in the future.

Proceedings ArticleDOI
17 Oct 2019
TL;DR: Loki leverages video, audio and spatial capture along with mixed-reality presentation methods to allow users to explore and annotate the local and remote environments, and record and review their own performance as well as their peer's.
Abstract: Remotely instructing and guiding users in physical tasks has offered promise across a wide variety of domains. While it has been the subject of many research projects, current approaches are often limited in the communication bandwidth (lacking context, spatial information) or interactivity (unidirectional, asynchronous) between the expert and the learner. Systems that use Mixed-Reality systems for this purpose have rigid configurations for the expert and the learner. We explore the design space of bi-directional mixed-reality telepresence systems for teaching physical tasks, and present Loki, a novel system which explores the various dimensions of this space. Loki leverages video, audio and spatial capture along with mixed-reality presentation methods to allow users to explore and annotate the local and remote environments, and record and review their own performance as well as their peer's. The system design of Loki also enables easy transitions between different configurations within the explored design space. We validate its utility through a varied set of scenarios and a qualitative user study.

Journal ArticleDOI
TL;DR: The proposed framework integrates multisource facilities information, BIM models, and feature-based tracking in an MR-based setting to retrieve information based on time and support remote collaboration and visual communication between the field worker and the manager at the office.

Journal ArticleDOI
TL;DR: This work proposes a mixed-reality head-mounted display (HMD) visualization of the intended robot motion over the wearer's real-world view of the robot and its environment, and describes its implementation, which connects a ROS-enabled robot to the HoloLens using ROS Reality, using MoveIt for motion planning, and using Unity to render the visualization.
Abstract: Efficient motion intent communication is necessary for safe and collaborative work environments with co-located humans and robots. Humans efficiently communicate their motion intent to other humans...

Proceedings ArticleDOI
Bernard C. Kress1
16 Jul 2019
TL;DR: This paper is a review of the main waveguide combiner architectures that have been implemented in industry and academia to produce Augmented Reality (AR) and Optical See-Through (OST) Mixed Reality (MR) Head Mounted Displays (HMDs).
Abstract: This paper is a review of the main waveguide combiner architectures that have been implemented in industry and academia to produce Augmented Reality (AR) and Optical See-Through (OST) Mixed Reality (MR) Head Mounted Displays (HMDs). We review their features and limitations and show product examples. We also review the design constraints and the main optical simulation techniques used to model and predict the overall performance of such optical waveguide combiners.

Proceedings ArticleDOI
02 May 2019
TL;DR: The study results showed that the participants completed the task significantly faster and felt a significantly higher level of usability when the sketch cue is added to the hand gesture cue, but not with adding the pointer cue.
Abstract: Many researchers have studied various visual communication cues (e.g. pointer, sketching, and hand gesture) in Mixed Reality remote collaboration systems for real-world tasks. However, the effect of combining them has not been well explored. We studied the effect of these cues in four combinations: hand only, hand + pointer, hand + sketch, and hand + pointer + sketch, with three problem tasks: Lego, Tangram, and Origami. The study results showed that participants completed the task significantly faster and felt a significantly higher level of usability when the sketch cue was added to the hand gesture cue, but not when the pointer cue was added. Participants also preferred the combinations including hand and sketch cues over the other combinations. However, using additional cues (pointer or sketch) increased the perceived mental effort and did not improve the feeling of co-presence. We discuss the implications of these results and future research directions.

Journal ArticleDOI
20 Mar 2019
TL;DR: It is found that there are three complementary factors to support and enhance collaboration in MR environments: annotation techniques, which provide non-verbal communication cues to users, cooperative object manipulation techniques, and user perception and cognition studies, which aim to lessen cognitive workload for task understanding and completion.
Abstract: Over the last few decades, Mixed Reality (MR) interfaces have received great attention from academia and industry. Although a considerable amount of research has already been done to support collaboration between users in MR, there is still no systematic review to determine the current state of collaborative MR applications. In this paper, collaborative MR studies published from 2013 to 2018 were reviewed. A total of 259 papers have been categorised based on their application areas, types of display devices used, collaboration setups, and user interaction and experience aspects. The primary contribution of this paper is to present a high-level overview of collaborative MR influence across several research disciplines. The achievements from each application area are summarised. In addition, remarkable papers in their respective areas are highlighted. Among other things, our study finds that there are three complementary factors to support and enhance collaboration in MR environments: (i) annotation techniques, which provide non-verbal communication cues to users, (ii) cooperative object manipulation techniques, which divide complex 3D object manipulation process into simpler tasks between different users, and (iii) user perception and cognition studies, which aim to lessen cognitive workload for task understanding and completion, and to increase users’ perceptual awareness and presence. Finally, this paper identifies research gaps and future directions that can be useful for researchers who want to explore ways on how to foster collaboration between users and to develop collaborative applications in MR.

Journal ArticleDOI
01 Jan 2019
TL;DR: This paper attempts to compare the existing immersive reality technologies and interaction methods against their potential to enhance cultural learning in VH applications, and proposes a specific integration of collaborative and multimodal interaction methods into a Mixed Reality (MxR) scenario that can be applied to VH applications aiming to enhance cultural learning in situ.
Abstract: In recent years, Augmented Reality (AR), Virtual Reality (VR), Augmented Virtuality (AV), and Mixed Reality (MxR) have become popular immersive reality technologies for cultural knowledge dissemination in Virtual Heritage (VH). These technologies have been utilized for enriching museums with a personalized visiting experience and digital content tailored to the historical and cultural context of the museums and heritage sites. Various interaction methods, such as sensor-based, device-based, tangible, collaborative, multimodal, and hybrid interaction methods, have also been employed by these immersive reality technologies to enable interaction with the virtual environments. However, the utilization of these technologies and interaction methods is not often supported by a guideline that can assist Cultural Heritage Professionals (CHP) to predetermine their relevance to attain the intended objectives of the VH applications. In this regard, our paper attempts to compare the existing immersive reality technologies and interaction methods against their potential to enhance cultural learning in VH applications. To objectify the comparison, three factors have been borrowed from existing scholarly arguments in the Cultural Heritage (CH) domain. These factors are the technology's or the interaction method's potential and/or demonstrated capability to: (1) establish a contextual relationship between users, virtual content, and cultural context, (2) allow collaboration between users, and (3) enable engagement with the cultural context in the virtual environments and the virtual environment itself. Following the comparison, we have also proposed a specific integration of collaborative and multimodal interaction methods into a Mixed Reality (MxR) scenario that can be applied to VH applications that aim at enhancing cultural learning in situ.

Journal ArticleDOI
TL;DR: The main objective of this study is to survey the recently conducted studies on depth perception in VR, augmented reality (AR), and mixed reality (MR).
Abstract: Depth perception is one of the important elements in virtual reality (VR). The perceived depth is influenced by head-mounted displays that inevitably decrease the virtual content's depth perception. While several questions within this area are still under research, the main objective of this study is to survey the recently conducted studies on depth perception in VR, augmented reality (AR), and mixed reality (MR). First, depth perception in the human visual system is discussed, including the different visual cues involved in depth perception. Second, research performed to understand and confirm the depth perception issue is examined. The contributions made to improve depth perception, and specifically distance perception, are discussed with their key design ideas, advantages, and limitations. Most of the contributions were based on using one or two depth cues to improve depth perception in VR, AR, and MR.

Journal ArticleDOI
TL;DR: A collection of open access and proprietary software and services is identified and combined into a practical workflow that can be used for the 3D reconstruction and MxR visualisation of cultural heritage assets.

Journal ArticleDOI
TL;DR: The authors devised a method that can align the surgical field and holograms precisely within a short time using a simple manual operation, expanding the clinical usefulness of the mixed reality device HoloLens.
Abstract: The technology used to add information to a real visual field is defined as augmented reality technology. Augmented reality technology that can interactively manipulate displayed information is called mixed reality technology. HoloLens from Microsoft, which is a head-mounted mixed reality device released in 2016, can display a precise three-dimensional model stably on the real visual field as a hologram. If it were possible to accurately superimpose the position and direction of the hologram on the surgical field, surgical-navigation-like use could be expected. However, HoloLens had no such function. The authors devised a method that can align the surgical field and holograms precisely within a short time using a simple manual operation. The mechanism is to match three points on the hologram to the corresponding marking points on the body surface. By making it possible to arbitrarily select any of the three points as a pivot/axis of the rotational movement of the hologram, alignment by manual operation becomes very easy. The alignment between the surgical field and the hologram was good and thus contributed to intraoperative objective judgment. By using the method of this study, the clinical usefulness of the mixed reality device HoloLens will be expanded.
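The three-point alignment the authors describe corresponds to a standard rigid registration: build an orthonormal frame from each point triple, then compose the rotation and translation that map one frame onto the other. A minimal pure-Python sketch, assuming noise-free correspondences (with measurement error, a least-squares method such as Kabsch would be preferable):

```python
def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def norm(a):
    n = dot(a, a) ** 0.5
    return [x / n for x in a]

def frame(p1, p2, p3):
    """Orthonormal frame (three basis vectors) built from three points."""
    e1 = norm(sub(p2, p1))
    v = sub(p3, p1)
    e2 = norm([v[i] - dot(v, e1) * e1[i] for i in range(3)])  # Gram-Schmidt
    return [e1, e2, cross(e1, e2)]

def align(src, dst):
    """Rotation R (3x3) and translation t mapping the three src points
    (hologram markers) onto dst (body-surface markers)."""
    Fs, Fd = frame(*src), frame(*dst)
    # R = Md * Ms^T, with frames stored as lists of basis vectors
    # (basis vectors are the matrix columns).
    R = [[sum(Fd[k][r] * Fs[k][c] for k in range(3)) for c in range(3)]
         for r in range(3)]
    t = [dst[0][r] - sum(R[r][c] * src[0][c] for c in range(3)) for r in range(3)]
    return R, t
```

The pivot-based manual operation in the paper achieves the same transform interactively; this sketch only shows the geometric relationship the three markers pin down.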

Proceedings ArticleDOI
02 May 2019
TL;DR: Methods for improving the acceptability of VR in-flight, including using mixed reality to help users transition between virtual and physical environments and supporting interruption from other co-located people are discussed.
Abstract: Virtual reality (VR) headsets allow wearers to escape their physical surroundings, immersing themselves in a virtual world. Although escape may not be realistic or acceptable in many everyday situations, air travel is one context where early adoption of VR could be very attractive. While travelling, passengers are seated in restricted spaces for long durations, reliant on limited seat-back displays or mobile devices. This paper explores the social acceptability and usability of VR for in-flight entertainment. In an initial survey, we captured respondents' attitudes towards the social acceptability of VR headsets during air travel. Based on the survey results, we developed a VR in-flight entertainment prototype and evaluated this in a focus group study. Our results discuss methods for improving the acceptability of VR in-flight, including using mixed reality to help users transition between virtual and physical environments and supporting interruption from other co-located people.

Proceedings ArticleDOI
02 May 2019
TL;DR: The results of a study that probed how users perform document-intensive analytical tasks when both physical and digital versions of documents were available informed the design of HoloDoc, a mixed reality system that augments physical artifacts with rich interaction and dynamic virtual content.
Abstract: Prior research identified that physical paper documents have many positive attributes, for example natural tangibility and inherent physical flexibility. When documents are presented on digital devices, however, they can provide unique functionality to users, such as the ability to search, view dynamic multimedia content, and make use of indexing. This work explores the fusion of physical and digital paper documents. It first presents the results of a study that probed how users perform document-intensive analytical tasks when both physical and digital versions of documents were available. The study findings then informed the design of HoloDoc, a mixed reality system that augments physical artifacts with rich interaction and dynamic virtual content. Finally, we present the interaction techniques that HoloDoc affords, and the results of a second study that assessed HoloDoc's utility when working with digital and physical copies of academic articles.

Proceedings ArticleDOI
15 Jun 2019
TL;DR: The authors' method runs at interactive frame rates on a mobile device, enabling realistic rendering of virtual objects into real scenes for mobile mixed reality, and improves the realism of rendered objects compared to state-of-the-art methods for both indoor and outdoor scenes.
Abstract: We present a learning-based method to infer plausible high dynamic range (HDR), omnidirectional illumination given an unconstrained, low dynamic range (LDR) image from a mobile phone camera with a limited field of view (FOV). For training data, we collect videos of various reflective spheres placed within the camera's FOV, leaving most of the background unoccluded, leveraging that materials with diverse reflectance functions reveal different lighting cues in a single exposure. We train a deep neural network to regress from the LDR background image to HDR lighting by matching the LDR ground truth sphere images to those rendered with the predicted illumination using image-based relighting, which is differentiable. Our inference runs at interactive frame rates on a mobile device, enabling realistic rendering of virtual objects into real scenes for mobile mixed reality. Training on automatically exposed and white-balanced videos, we improve the realism of rendered objects compared to the state-of-the-art methods for both indoor and outdoor scenes.
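The differentiable image-based relighting used for supervision exploits the linearity of light transport: a sphere's image under any illumination is a pixel-wise weighted sum of its images under individual basis lights, so a rendering loss against the ground-truth sphere photo is differentiable in the predicted lighting coefficients. A toy sketch with tiny flat "images" (all names invented):

```python
def relight(basis_images, weights):
    """Render: pixel-wise weighted sum of one-light-at-a-time basis images.
    Linearity of light transport makes this exact for any lighting that is
    a combination of the basis lights."""
    n_pix = len(basis_images[0])
    return [sum(w * img[p] for w, img in zip(weights, basis_images))
            for p in range(n_pix)]

def loss_and_grad(basis_images, weights, target):
    """L2 rendering loss and its gradient w.r.t. the lighting weights;
    in the paper's setup this gradient would flow back into the
    lighting-prediction network."""
    rendered = relight(basis_images, weights)
    residual = [r - t for r, t in zip(rendered, target)]
    loss = sum(e * e for e in residual)
    grad = [2.0 * sum(e * img[p] for p, e in enumerate(residual))
            for img in basis_images]
    return loss, grad
```

Because rendering is linear in the weights, the gradient is a simple dot product of the residual with each basis image, which is what makes the supervision cheap enough to train end to end.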

Book ChapterDOI
01 Jan 2019
TL;DR: This paper reviews the body of literature published at the intersections between each two of these fields, and discusses a vision for the nexus of all three technologies.
Abstract: In recent years we are beginning to see the convergence of three distinct research fields: augmented reality (AR), intelligent virtual agents (IVAs), and the Internet of things (IoT). Each of these has been classified as a disruptive technology for our society. Since their emergence, the advancement of knowledge and development of technologies and systems in these fields were traditionally performed with limited input from each other. However, over recent years, we have seen research prototypes and commercial products being developed that cross the boundaries between these distinct fields to leverage their collective strengths. In this paper, we review the body of literature published at the intersections between each two of these fields, and we discuss a vision for the nexus of all three technologies.

Proceedings ArticleDOI
20 May 2019
TL;DR: A Mixed Reality Head-Mounted Display (MR-HMD) interface that enables end-users to easily create and edit robot motions using waypoints is presented; more users completed both tasks in significantly less time, and reported lower cognitive workload, higher usability, and greater naturalness with the MR-HMD interface.
Abstract: Mixed Reality (MR) is a promising interface for robot programming because it can project an immersive 3D visualization of a robot’s intended movement onto the real world. MR can also support hand gestures, which provide an intuitive way for users to construct and modify robot motions. We present a Mixed Reality Head-Mounted Display (MR-HMD) interface that enables end-users to easily create and edit robot motions using waypoints. We describe a user study where 20 participants were asked to program a robot arm using 2D and MR interfaces to perform two pick-and-place tasks. In the primitive task, participants created typical pick-and-place programs. In the adapted task, participants adapted their primitive programs to address a more complex pick-and-place scenario, which included obstacles and conditional reasoning. Compared to the 2D interface, more users were able to complete both tasks in significantly less time, and reported experiencing lower cognitive workload, higher usability, and higher naturalness with the MR-HMD interface.
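The abstract's waypoint-based programs can be pictured as an ordered list of poses plus gripper states, which the adapted task then edits (e.g. to clear an obstacle). This is a hypothetical sketch of such a representation; the `Waypoint` fields, the `adapt` helper, and the clearance rule are all illustrative assumptions, not the study's system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Waypoint:
    """One step of a hypothetical waypoint program: a position plus
    the commanded gripper state after reaching it."""
    x: float
    y: float
    z: float
    gripper_open: bool

def adapt(program: List[Waypoint], clearance: float) -> List[Waypoint]:
    """Adapt a primitive program to an obstacle by lifting every transit
    waypoint by `clearance`, while leaving grasp/release waypoints
    (where the gripper state changes) at their original height."""
    adapted = []
    for i, wp in enumerate(program):
        changes = i > 0 and wp.gripper_open != program[i - 1].gripper_open
        z = wp.z if changes else wp.z + clearance
        adapted.append(Waypoint(wp.x, wp.y, z, wp.gripper_open))
    return adapted

# A primitive pick-and-place program, then its obstacle-aware adaptation.
pick_place = [
    Waypoint(0.0, 0.0, 0.1, True),   # approach, gripper open
    Waypoint(0.0, 0.0, 0.0, False),  # grasp
    Waypoint(0.5, 0.0, 0.1, False),  # transit
    Waypoint(0.5, 0.0, 0.0, True),   # release
]
lifted = adapt(pick_place, clearance=0.2)
```

In an MR-HMD interface, each `Waypoint` would be rendered as a grabbable 3D marker along the previewed trajectory; the data structure itself is deliberately minimal.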

Proceedings ArticleDOI
02 May 2019
TL;DR: This workshop will explore key challenges of HMD usage in shared, social contexts; methods for tackling the virtual isolation of the VR/AR user and the exclusion of collocated others; the design of shared experiences in shared spaces; and the ethical implications of appropriating the environment and those within it.
Abstract: Everyday mobile usage of AR and VR Head-Mounted Displays (HMDs) is becoming a feasible consumer reality. The current research agenda for HMDs has a strong focus on technological impediments (e.g. latency, field of view, locomotion, tracking, input) as well as perceptual aspects (e.g. distance compression, vergence-accommodation). However, this ignores significant challenges in the usage and acceptability of HMDs in shared, social and public spaces. This workshop will explore these key challenges of HMD usage in shared, social contexts; methods for tackling the virtual isolation of the VR/AR user and the exclusion of collocated others; the design of shared experiences in shared spaces; and the ethical implications of appropriating the environment and those within it.

Journal ArticleDOI
10 Apr 2019
TL;DR: The increasing need for, and benefits of, extended reality (virtual, augmented, or mixed) throughout the continuum of medical education, from anatomy for medical students to procedures for residents, is discussed.
Abstract: Simulation is a widely used technique for medical education. Due to decreased training opportunities with real patients, and increased emphasis on both patient outcomes and remote access, demand has increased for more advanced, realistic simulation methods. Here, we discuss the increasing need for, and benefits of, extended (virtual, augmented, or mixed) reality throughout the continuum of medical education, from anatomy for medical students to procedures for residents. We discuss how to drive the adoption of mixed reality tools into medical schools' anatomy and procedural curricula.