
Showing papers on "Augmented reality published in 2019"


Journal ArticleDOI
21 Nov 2019-Nature
TL;DR: A wireless, battery-free platform of electronic systems and haptic interfaces capable of softly laminating onto the curved surfaces of the skin to communicate information via spatio-temporally programmable patterns of localized mechanical vibrations is presented.
Abstract: Traditional technologies for virtual reality (VR) and augmented reality (AR) create human experiences through visual and auditory stimuli that replicate sensations associated with the physical world. The most widespread VR and AR systems use head-mounted displays, accelerometers and loudspeakers as the basis for three-dimensional, computer-generated environments that can exist in isolation or as overlays on actual scenery. In comparison to the eyes and the ears, the skin is a relatively underexplored sensory interface for VR and AR technology that could, nevertheless, greatly enhance experiences at a qualitative level, with direct relevance in areas such as communications, entertainment and medicine1,2. Here we present a wireless, battery-free platform of electronic systems and haptic (that is, touch-based) interfaces capable of softly laminating onto the curved surfaces of the skin to communicate information via spatio-temporally programmable patterns of localized mechanical vibrations. We describe the materials, device structures, power delivery strategies and communication schemes that serve as the foundations for such platforms. The resulting technology creates many opportunities for use where the skin provides an electronically programmable communication and sensory input channel to the body, as demonstrated through applications in social media and personal engagement, prosthetic control and feedback, and gaming and entertainment. Interfaces for epidermal virtual reality technology are demonstrated that can communicate by programmable patterns of localized mechanical vibrations.

500 citations


Journal ArticleDOI
TL;DR: Despite the growing interest and discussions on Virtual Reality (VR) and Augmented Reality (AR) in tourism, we do not yet know systematically the knowledge that has been built from academic papers as discussed by the authors.
Abstract: Despite the growing interest and discussions on Virtual Reality (VR) and Augmented Reality (AR) in tourism, we do not yet know systematically the knowledge that has been built from academic papers ...

471 citations


Journal ArticleDOI
TL;DR: An algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields.
Abstract: We present a practical and robust deep learning solution for capturing and rendering novel views of complex real world scenes for virtual exploration. Previous approaches either require intractably dense view sampling or provide little to no guidance for how users should sample views of a scene to reliably render high-quality novel views. Instead, we propose an algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we apply this bound to capture and render views of real world scenes that achieve the perceptual quality of Nyquist rate view sampling while using up to 4000X fewer views. We demonstrate our approach's practicality with an augmented reality smart-phone app that guides users to capture input images of a scene and viewers that enable realtime virtual exploration on desktop and mobile platforms.
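A minimal numerical sketch of the final blending step described above, in which the novel view is rendered by combining the re-rendered adjacent local light fields. It assumes the per-neighbor RGB renderings and accumulated alphas have already been produced by the MPI stage; the weighting scheme is illustrative, not the paper's exact formulation:

```python
import numpy as np

def blend_local_light_fields(rgbs, alphas, view_dists, eps=1e-6):
    """Blend novel-view renderings produced from adjacent local light fields (MPIs).

    rgbs:       (k, H, W, 3) re-renderings of the novel view from k nearby MPIs
    alphas:     (k, H, W)    accumulated alpha (coverage) of each re-rendering
    view_dists: (k,)         distance from each sampled view to the novel viewpoint
    Returns the blended (H, W, 3) novel view.
    """
    rgbs = np.asarray(rgbs, dtype=np.float64)
    alphas = np.asarray(alphas, dtype=np.float64)
    dists = np.asarray(view_dists, dtype=np.float64)

    # Favor well-covered pixels coming from the closest sampled views.
    w = alphas / (dists[:, None, None] + eps)
    w /= w.sum(axis=0, keepdims=True) + eps
    return (w[..., None] * rgbs).sum(axis=0)

# Toy usage: two synthetic neighbor renderings of a 4x4 novel view.
rgbs = np.random.rand(2, 4, 4, 3)
alphas = np.ones((2, 4, 4))
print(blend_local_light_fields(rgbs, alphas, view_dists=[0.2, 0.5]).shape)  # (4, 4, 3)
```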

400 citations


Proceedings ArticleDOI
05 Aug 2019
TL;DR: This work designs a system that enables high accuracy object detection for commodity AR/MR systems running at 60fps; the system employs low latency offloading techniques, decouples the rendering pipeline from the offloading pipeline, and uses a fast object tracking method to maintain detection accuracy.
Abstract: Most existing Augmented Reality (AR) and Mixed Reality (MR) systems are able to understand the 3D geometry of the surroundings but lack the ability to detect and classify complex objects in the real world. Such capabilities can be enabled with deep Convolutional Neural Networks (CNN), but it remains difficult to execute large networks on mobile devices. Offloading object detection to the edge or cloud is also very challenging due to the stringent requirements on high detection accuracy and low end-to-end latency. The long latency of existing offloading techniques can significantly reduce the detection accuracy due to changes in the user's view. To address the problem, we design a system that enables high accuracy object detection for commodity AR/MR systems running at 60fps. The system employs low latency offloading techniques, decouples the rendering pipeline from the offloading pipeline, and uses a fast object tracking method to maintain detection accuracy. The results show that the system can improve the detection accuracy by 20.2%-34.8% for the object detection and human keypoint detection tasks, and only requires 2.24ms latency for object tracking on the AR device. Thus, the system leaves more time and computational resources to render virtual elements for the next frame and enables higher quality AR/MR experiences.
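The key architectural idea in the abstract is to keep the 60fps rendering loop decoupled from the slower offloaded detector and bridge the gap with fast local tracking. The sketch below pictures that idea; offload_detect and track_forward are stubs standing in for the paper's actual components:

```python
import queue
import threading
import time

def offload_detect(frame):
    """Stub for the offloaded CNN detector: accurate but slow (~100 ms round trip)."""
    time.sleep(0.1)
    return [{"label": "person", "box": [10, 10, 50, 80]}]

def track_forward(detections, frame):
    """Stub for fast on-device tracking that keeps stale boxes aligned to the frame."""
    return [{**d, "box": [c + 1 for c in d["box"]]} for d in detections]

frame_q, result_q = queue.Queue(maxsize=1), queue.Queue()

def offload_loop():
    while True:
        frame_id, frame = frame_q.get()
        result_q.put((frame_id, offload_detect(frame)))

threading.Thread(target=offload_loop, daemon=True).start()

detections = []
for frame_id in range(30):                  # stands in for the 60fps render loop
    frame = f"frame-{frame_id}"
    if frame_q.empty():
        frame_q.put((frame_id, frame))      # offload only when the slow path is idle
    while not result_q.empty():
        _, detections = result_q.get()      # pick up fresh detections from the edge
    detections = track_forward(detections, frame)  # local tracking between results
    time.sleep(1 / 60)                      # render(frame, detections) would go here
print(detections)
```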

371 citations


Posted Content
TL;DR: An algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields.
Abstract: We present a practical and robust deep learning solution for capturing and rendering novel views of complex real world scenes for virtual exploration. Previous approaches either require intractably dense view sampling or provide little to no guidance for how users should sample views of a scene to reliably render high-quality novel views. Instead, we propose an algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we apply this bound to capture and render views of real world scenes that achieve the perceptual quality of Nyquist rate view sampling while using up to 4000x fewer views. We demonstrate our approach's practicality with an augmented reality smartphone app that guides users to capture input images of a scene and viewers that enable realtime virtual exploration on desktop and mobile platforms.

338 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present and empirically test a framework that theorizes how consumers perceive and evaluate the benefits and augmentation quality of AR apps, and how this evaluation drives subsequent changes in brand attitude.

293 citations


Posted Content
TL;DR: This survey provides a holistic overview of MEC technology and its potential use cases and applications, and outlines up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond.
Abstract: Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacities and finite processing capabilities, thus how to run compute-intensive applications on resource-constrained users has recently become a natural concern. Mobile edge computing (MEC), a key technology in the emerging fifth generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, process large data before sending to the cloud, provide the cloud computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. Therefore, MEC enables a wide variety of applications, where the real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open source activities for edge computing. We further summarize lessons learned from state-of-the-art research works as well as discuss challenges and potential future directions for MEC research.

279 citations


Journal ArticleDOI
TL;DR: It is found that, while technological aspects are of importance, organisational issues are more relevant for industry, which has not been reflected to the same extent in the literature.
Abstract: Industrial augmented reality (AR) is an integral part of Industry 4.0 concepts, as it enables workers to access digital information and overlay that information with the physical world. While not yet broadly adopted in some application areas, the industrial AR market is projected to grow rapidly in terms of compound annual growth rate. Hence, it is important to understand the issues arising from the implementation of AR in industry. This study identifies critical success factors and challenges for industrial AR implementation projects, based on an industry survey. The broadly used technology, organisation, environment (TOE) framework is used as a theoretical basis for the quantitative part of the questionnaire. A complementary qualitative part is used to underpin and extend the findings. It is found that, while technological aspects are of importance, organisational issues are more relevant for industry, which has not been reflected to the same extent in the literature.

237 citations


Journal ArticleDOI
TL;DR: A literature review that covers 61 studies published between 2012 and 2018 in scientific journals and conference proceedings identifies the status and tendencies in the usage of AR in education, the impact of this technology on learning processes, open questions as well as opportunities and challenges for developers and practitioners.
Abstract: Augmented reality (AR) is an important technology to enhance learning experiences. Many studies have been conducted to establish the tendencies, affordances and challenges of this technology in educational settings. However, these studies have given little attention to important issues such as the special needs of specific users or the quantitative analysis of AR's impact on education. This paper presents a literature review that covers 61 studies published between 2012 and 2018 in scientific journals and conference proceedings. As a result, it identifies the status and tendencies in the usage of AR in education, the impact of this technology on learning processes, open questions as well as opportunities and challenges for developers and practitioners. The results indicate that AR has a medium effect on learning effectiveness (d = .64, p < .001). The most reported advantages of AR systems in education are “learning gains” and “motivation.” However, it is also important to mention that only one of the AR systems in the reviewed studies includes accessibility features, which represents a setback in terms of social inclusion. Therefore, given the apparent multiple benefits of using AR systems in educational settings, stakeholders have great opportunities to develop new and better systems that benefit all learners. This technology covers a wide range of topics, target groups, academic levels and more. This could be an indicator that AR is achieving maturity and has successfully taken root in educational settings.
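For reference, the reported medium effect (d = .64) is a standardized mean difference. A common way to compute it for a two-group comparison is shown below; the review does not state its exact pooling choices, so this is only the textbook Cohen's d:

```latex
d = \frac{\bar{x}_{\mathrm{AR}} - \bar{x}_{\mathrm{control}}}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```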

235 citations


Journal ArticleDOI
TL;DR: The results showed that using an augmented reality mobile application increased the learning motivation of students, and the attention, satisfaction, and confidence factors of motivation were increased, and these results were found to be significant.
Abstract: The research on augmented reality applications in education is still in an early stage, and there is a lack of research on the effects and implications of augmented reality in the field of education. The purpose of this research was to measure and understand the impact of an augmented reality mobile application on the learning motivation of undergraduate health science students at the University of Cape Town. We extend previous research that looked specifically at the impact of augmented reality technology on student learning motivation. The intrinsic motivation theory was used to explain motivation in the context of learning. The attention, relevance, confidence, and satisfaction (ARCS) model guided the understanding of the impact of augmented reality on student motivation, and the Instructional Materials Motivation Survey was used to design the research instrument. The research examined the differences in student learning motivation before and after using the augmented reality mobile application. A total of 78 participants used the augmented reality mobile application and completed the pre-usage and post-usage questionnaires. The results showed that using an augmented reality mobile application increased the learning motivation of students. The attention, satisfaction, and confidence factors of motivation were increased, and these results were found to be significant. Although the relevance factor showed a decrease, it was not statistically significant.

230 citations


Journal ArticleDOI
22 Feb 2019
TL;DR: The purpose of this review is to classify the literature on AR published from 2006 to early 2017, to identify the main areas and sectors where AR is currently deployed, to describe the technological solutions adopted, and to outline the main benefits achievable with this kind of technology.
Abstract: The aim of this article is to analyze and review the scientific literature relating to the application of Augmented Reality (AR) technology in industry. AR technology is becoming increasingly diffu...

Journal ArticleDOI
TL;DR: The research introduces a new set of augmented reality attributes, namely AR novelty, AR interactivity and AR vividness, and establishes their influence on the technology acceptance attributes of perceived ease of use, usefulness, enjoyment and subjective norms.

Proceedings ArticleDOI
02 May 2019
TL;DR: The goal with this paper is to support classification and discussion of MR applications' design and to provide researchers with a better means to contextualize their work within the increasingly fragmented MR landscape.
Abstract: What is Mixed Reality (MR)? To revisit this question given the many recent developments, we conducted interviews with ten AR/VR experts from academia and industry, as well as a literature survey of 68 papers. We find that, while there are prominent examples, there is no universally agreed-on, one-size-fits-all definition of MR. Rather, we identified six partially competing notions from the literature and experts' responses. We then started to isolate the different aspects of reality relevant for MR experiences, going beyond the primarily visual notions and extending to audio, motion, haptics, taste, and smell. We distill our findings into a conceptual framework with seven dimensions to characterize MR applications in terms of the number of environments, number of users, level of immersion, level of virtuality, degree of interaction, input, and output. Our goal with this paper is to support classification and discussion of MR applications' design and to provide researchers with a better means to contextualize their work within the increasingly fragmented MR landscape.
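The seven-dimension framework summarized above maps naturally onto a simple record type. The sketch below uses illustrative field values; the paper defines its own levels and coding scheme:

```python
from dataclasses import dataclass

@dataclass
class MRApplicationProfile:
    """Characterizes an MR application along the paper's seven dimensions.

    The value ranges shown in the comments are illustrative, not the
    paper's own coding scheme.
    """
    num_environments: int        # how many environments the experience spans
    num_users: int               # single-user vs. multi-user
    level_of_immersion: str      # e.g. "none" | "partial" | "full"
    level_of_virtuality: str     # how much of the scene is virtual
    degree_of_interaction: str   # e.g. "implicit" | "explicit"
    input: list                  # e.g. ["gesture", "voice", "controller"]
    output: list                 # e.g. ["visual", "audio", "haptic"]

# Example: a hypothetical two-user AR annotation tool, coded with these values.
profile = MRApplicationProfile(
    num_environments=1, num_users=2,
    level_of_immersion="partial", level_of_virtuality="overlay",
    degree_of_interaction="explicit",
    input=["gesture", "touch"], output=["visual", "audio"],
)
print(profile)
```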

Proceedings ArticleDOI
01 Oct 2019
TL;DR: The first extensive fisheye automotive dataset, WoodScape, named after Robert Wood, which comprises four surround view cameras and nine tasks including segmentation, depth estimation, 3D bounding box detection and soiling detection, is released.
Abstract: Fisheye cameras are commonly employed for obtaining a large field of view in surveillance, augmented reality and in particular automotive applications. In spite of their prevalence, there are few public datasets for detailed evaluation of computer vision algorithms on fisheye images. We release the first extensive fisheye automotive dataset, WoodScape, named after Robert Wood, who invented the fisheye camera in 1906. WoodScape comprises four surround view cameras and nine tasks including segmentation, depth estimation, 3D bounding box detection and soiling detection. Semantic annotation of 40 classes at the instance level is provided for over 10,000 images and annotations for other tasks are provided for over 100,000 images. With WoodScape, we would like to encourage the community to adapt computer vision models for fisheye cameras instead of using naive rectification.
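The dataset description above (four surround-view fisheye cameras, per-image annotations for several tasks) can be pictured as a per-frame record. The field names below are hypothetical and only mirror the tasks named in the abstract; they are not WoodScape's actual file layout or API:

```python
from dataclasses import dataclass

CAMERAS = ["front", "rear", "left", "right"]   # the four surround-view fisheye cameras

@dataclass
class FisheyeFrame:
    """Illustrative per-frame record for a multi-task surround-view dataset."""
    images: dict          # camera name -> fisheye image path
    instance_masks: dict  # camera name -> instance-level masks (40 semantic classes)
    depth_maps: dict      # camera name -> per-pixel depth
    boxes_3d: list        # 3D bounding boxes in the vehicle frame
    soiling: dict         # camera name -> lens-soiling annotation

frame = FisheyeFrame(
    images={c: f"{c}_00001.png" for c in CAMERAS},
    instance_masks={}, depth_maps={}, boxes_3d=[], soiling={},
)
print(sorted(frame.images))
```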

Journal ArticleDOI
TL;DR: In this paper, the authors conducted a meta-analysis of 64 quantitative research papers published between 2010 and 2018 in major journals to analyze the impact of AR on students' learning gains and analyzed the influence of moderating variables such as control treatment, learning environment, learner type, and domain subject on the learning gains.

Journal ArticleDOI
TL;DR: The augmented reality microscope (ARM) overlays AI-based information onto the current view of the sample in real time, enabling seamless integration of AI into routine workflows and will remove barriers towards the use of AI designed to improve the accuracy and efficiency of cancer diagnosis.
Abstract: The microscopic assessment of tissue samples is instrumental for the diagnosis and staging of cancer, and thus guides therapy. However, these assessments demonstrate considerable variability and many regions of the world lack access to trained pathologists. Though artificial intelligence (AI) promises to improve the access and quality of healthcare, the costs of image digitization in pathology and difficulties in deploying AI solutions remain as barriers to real-world use. Here we propose a cost-effective solution: the augmented reality microscope (ARM). The ARM overlays AI-based information onto the current view of the sample in real time, enabling seamless integration of AI into routine workflows. We demonstrate the utility of ARM in the detection of metastatic breast cancer and the identification of prostate cancer, with latency compatible with real-time use. We anticipate that the ARM will remove barriers towards the use of AI designed to improve the accuracy and efficiency of cancer diagnosis.
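Functionally, the ARM described above is a capture, infer, overlay loop that must fit within a real-time latency budget. The sketch below is a generic stub of such a loop; none of these functions correspond to the authors' implementation, and the budget value is illustrative:

```python
import time

def capture_field_of_view():
    """Stub: grab the current camera frame from the microscope's optical path."""
    return "frame"

def run_model(frame):
    """Stub: AI model returning, e.g., a tumor-probability heatmap or outline."""
    return {"outline": [(0, 0), (10, 0), (10, 10)]}

def project_overlay(result):
    """Stub: draw the result into the eyepiece display, registered to the view."""
    return None

BUDGET_S = 0.1  # illustrative per-frame latency budget for real-time viewing

for _ in range(3):  # a real system would loop continuously
    t0 = time.time()
    project_overlay(run_model(capture_field_of_view()))
    if time.time() - t0 > BUDGET_S:
        print("warning: frame exceeded the real-time budget")
```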

Journal ArticleDOI
TL;DR: This work consists of explaining the reasons behind the new rise of AR and VR and why their actual adoption in education will be a reality in the near future.
Abstract: Augmented Reality and Virtual Reality are not new technologies, but several constraints prevented their actual adoption. Recent technological progress, added to the proliferation of affordable hardware and software, has made AR and VR more viable and desirable in many domains, including education; they have been relaunched with new promises previously unimaginable. The nature of AR and VR promises new teaching and learning models that better meet the needs of the 21st century learner. We’re now on a path to re-invent education. This work consists of explaining the reasons behind the new rise of AR and VR and why their actual adoption in education will be a reality in the near future.

Journal ArticleDOI
TL;DR: This state‐of‐the‐art report investigates the background theory of perception and vision as well as the latest advancements in display engineering and tracking technologies involved in near‐eye displays.
Abstract: Virtual and augmented reality (VR/AR) are expected to revolutionise entertainment, healthcare, communication and the manufacturing industries among many others. Near-eye displays are an enabling vessel for VR/AR applications, which have to tackle many challenges related to ergonomics, comfort, visual quality and natural interaction. These challenges are related to the core elements of these near-eye display hardware and tracking technologies. In this state-of-the-art report, we investigate the background theory of perception and vision as well as the latest advancements in display engineering and tracking technologies. We begin our discussion by describing the basics of light and image formation. Later, we recount principles of visual perception by relating to the human visual system. We provide two structured overviews on state-of-the-art near-eye display and tracking technologies involved in such near-eye displays. We conclude by outlining unresolved research questions to inspire the next generation of researchers.

Journal ArticleDOI
TL;DR: Bifunctional electronic skins equipped with a compliant magnetic microelectromechanical system able to transduce both tactile—via mechanical pressure—and touchless—via magnetic fields—stimulations simultaneously are realized.
Abstract: The emergence of smart electronics, human friendly robotics and supplemented or virtual reality demands electronic skins with both tactile and touchless perceptions for the manipulation of real and virtual objects. Here, we realize bifunctional electronic skins equipped with a compliant magnetic microelectromechanical system able to transduce both tactile-via mechanical pressure-and touchless-via magnetic fields-stimulations simultaneously. The magnetic microelectromechanical system separates electric signals from tactile and touchless interactions into two different regions, allowing the electronic skins to unambiguously distinguish the two modes in real time. Besides, its inherent magnetic specificity overcomes the interference from non-relevant objects and enables signal-programmable interactions. Ultimately, the magnetic microelectromechanical system enables complex interplay with physical objects enhanced with virtual content data in augmented reality, robotics, and medical applications.

Journal ArticleDOI
TL;DR: This work proposes a novel methodology for the conversion of existing “traditional” documentation, and for the authoring of new manuals in AR in compliance with Industry 4.0 principles.
Abstract: Augmented Reality (AR) is one of the most promising technologies for technical manuals in the context of Industry 4.0. However, the implementation of AR documentation in industry is still challenging because specific standards and guidelines are missing. In this work, we propose a novel methodology for the conversion of existing “traditional” documentation, and for the authoring of new manuals in AR in compliance with Industry 4.0 principles. The methodology is based on the optimization of text usage with ASD Simplified Technical English, the conversion of text instructions into 2D graphic symbols, and the structuring of the content through the combination of Darwin Information Typing Architecture (DITA) and Information Mapping (IM). We tested the proposed approach with a case study of a maintenance manual for hydraulic breakers. We validated it with a user test collecting subjective feedback from 22 users. The results of this experiment confirm that the manual obtained using our methodology is clearer than other templates.
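The methodology above combines a controlled language (ASD Simplified Technical English), 2D symbol substitution, and topic-based structuring (DITA plus Information Mapping). Below is a toy sketch of what one converted instruction might look like as a record; the field names and the symbol code are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ARInstructionStep:
    """One converted step of an AR maintenance manual (illustrative schema only)."""
    topic_type: str   # DITA topic type, e.g. "task" vs. "concept"/"reference"
    block_type: str   # Information Mapping block, e.g. "procedure"
    ste_text: str     # instruction rewritten in Simplified Technical English
    symbol_id: str    # invented reference to the 2D graphic symbol shown in AR
    anchor: str       # physical part the overlay is registered to

step = ARInstructionStep(
    topic_type="task", block_type="procedure",
    ste_text="Remove the four bolts.", symbol_id="SYM-UNSCREW-04",
    anchor="breaker_front_plate",
)
print(step.ste_text, "->", step.symbol_id)
```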

Journal ArticleDOI
18 Feb 2019
TL;DR: This paper reviews the state-of-the-art technology and existing implementations of Mobile AR, as well as enabling technologies and challenges when AR meets the Web, and elaborate on the different potential Web AR provisioning approaches, especially the adaptive and scalable collaborative distributed solution which adopts the osmotic computing paradigm to provide Web AR services.
Abstract: Mobile augmented reality (Mobile AR) is gaining increasing attention from both academia and industry. Hardware-based Mobile AR and App-based Mobile AR are the two dominant platforms for Mobile AR applications. However, hardware-based Mobile AR implementation is known to be costly and lacks flexibility, while the App-based one requires additional downloading and installation in advance and is inconvenient for cross-platform deployment. In comparison, Web-based AR (Web AR) implementation can provide a pervasive Mobile AR experience to users thanks to the many successful deployments of the Web as a lightweight and cross-platform service provisioning platform. Furthermore, the emergence of 5G mobile communication networks has the potential to enhance the communication efficiency of Mobile AR dense computing in the Web-based approach. We conjecture that Web AR will deliver an innovative technology to enrich our ways of interacting with the physical (and cyber) world around us. This paper reviews the state-of-the-art technology and existing implementations of Mobile AR, as well as enabling technologies and challenges when AR meets the Web. Furthermore, we elaborate on the different potential Web AR provisioning approaches, especially the adaptive and scalable collaborative distributed solution which adopts the osmotic computing paradigm to provide Web AR services. We conclude this paper with the discussions of open challenges and research directions under current 3G/4G networks and the future 5G networks. We hope that this paper will help researchers and developers to gain a better understanding of the state of the research and development in Web AR and at the same time stimulate more research interest and effort on delivering life-enriching Web AR experiences to the fast-growing mobile and wireless business and consumer industry of the 21st century.

Proceedings ArticleDOI
15 Jun 2019
TL;DR: In this article, a deep neural architecture, PlaneRCNN, is proposed to detect and reconstruct piecewise planar regions from a single RGB image, which employs a variant of Mask R-CNN to detect planes with their plane parameters and segmentation masks.
Abstract: This paper proposes a deep neural architecture, PlaneRCNN, that detects and reconstructs piecewise planar regions from a single RGB image. PlaneRCNN employs a variant of Mask R-CNN to detect planes with their plane parameters and segmentation masks. PlaneRCNN then refines an arbitrary number of segmentation masks with a novel loss enforcing the consistency with a nearby view during training. The paper also presents a new benchmark with more fine-grained plane segmentations in the ground-truth, in which, PlaneRCNN outperforms existing state-of-the-art methods with significant margins in the plane detection, segmentation, and reconstruction metrics. PlaneRCNN makes an important step towards robust plane extraction method, which would have immediate impact on a wide range of applications including Robotics, Augmented Reality, and Virtual Reality.
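At inference time, the pipeline described above boils down to detecting plane instances (a mask plus plane parameters per instance) and then refining the masks. The stubs below sketch that flow and are not the released PlaneRCNN code:

```python
import numpy as np

def detect_planes(image):
    """Stub for the Mask R-CNN-style stage: per-instance masks and plane parameters."""
    h, w = image.shape[:2]
    masks = [np.zeros((h, w), dtype=bool)]
    params = [np.array([0.0, 0.0, 1.0, 2.5])]  # e.g. plane normal n and offset d
    return masks, params

def refine_masks(masks):
    """Stub for the refinement module (trained with a nearby-view consistency loss)."""
    return masks

def reconstruct_planes(image):
    masks, params = detect_planes(image)
    masks = refine_masks(masks)
    return list(zip(masks, params))  # one (mask, plane parameters) pair per plane

planes = reconstruct_planes(np.zeros((480, 640, 3)))
print(len(planes), planes[0][1])
```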

Journal ArticleDOI
TL;DR: The intention is to demonstrate that Augmented Reality and Additive Manufacturing are viable tools in aviation maintenance and that, although a strong effort is still necessary to develop the appropriate regulatory framework required before these technologies can be widely introduced into the aerospace maintenance process, there has been great interest and pull from the industry sector.

Proceedings ArticleDOI
20 May 2019
TL;DR: Visual-inertial navigation systems (VINS) have become ubiquitous in a wide range of applications from mobile augmented reality to aerial navigation to autonomous driving, in part because of the complementary sensing capabilities and the decreasing costs and size of the sensors as discussed by the authors.
Abstract: As inertial and visual sensors are becoming ubiquitous, visual-inertial navigation systems (VINS) have prevailed in a wide range of applications from mobile augmented reality to aerial navigation to autonomous driving, in part because of the complementary sensing capabilities and the decreasing costs and size of the sensors. In this paper, we survey thoroughly the research efforts taken in this field and strive to provide a concise but complete review of the related work – which is unfortunately missing in the literature while being greatly demanded by researchers and engineers – in the hope to accelerate the VINS research and beyond in our society as a whole.

Journal ArticleDOI
TL;DR: Comparing AR and VR technologies with regard to their impact on learning outcomes, such as retention of science information, suggests that VR is more immersive and engaging through the mechanism of spatial presence; however, AR seems to be a more effective medium for conveying auditory information through the pathway of spatial presence.
Abstract: The propagation of augmented reality (AR) and virtual reality (VR) applications that leverage smartphone technology has increased along with the ubiquity of smartphone adoption. Although A...

Posted Content
TL;DR: This paper surveys thoroughly the research efforts taken in visual-inertial navigation research and strives to provide a concise but complete review of the related work in the hope to accelerate the VINS research and beyond in the authors' society as a whole.
Abstract: As inertial and visual sensors are becoming ubiquitous, visual-inertial navigation systems (VINS) have prevailed in a wide range of applications from mobile augmented reality to aerial navigation to autonomous driving, in part because of the complementary sensing capabilities and the decreasing costs and size of the sensors. In this paper, we survey thoroughly the research efforts taken in this field and strive to provide a concise but complete review of the related work -- which is unfortunately missing in the literature while being greatly demanded by researchers and engineers -- in the hope to accelerate the VINS research and beyond in our society as a whole.

Journal ArticleDOI
TL;DR: In this article, a four-stage conceptual model of heritage preservation for managing heritage into digital tourism experiences is proposed, where the four stages include the presentation of historical facts, contested heritage, integration of historical fact and contested heritage, and/or an alternate scenario.

Journal ArticleDOI
TL;DR: The aim of this survey is to present current state-of-the-art research on edge caching and computing with a focus on AR/VR applications and the tactile internet, and to discuss applications, opportunities and challenges in this emerging field.
Abstract: As a result of increasing popularity of augmented reality and virtual reality (AR/VR) applications, there are significant efforts to bring AR/VR to mobile users. Parallel to the advances in AR/VR technologies, tactile internet is gaining interest from the research community. Both AR/VR and tactile internet applications require massive computational capability, high communication bandwidth, and ultra-low latency that cannot be provided with the current wireless mobile networks. By 2020, long term evolution (LTE) networks will start to be replaced by fifth generation (5G) networks. Edge caching and mobile edge computing are among the potential 5G technologies that bring content and computing resources close to the users, reducing latency and load on the backhaul. The aim of this survey is to present current state-of-the-art research on edge caching and computing with a focus on AR/VR applications and tactile internet and to discuss applications, opportunities and challenges in this emerging field.

Journal ArticleDOI
TL;DR: A comprehensive conceptual framework for the viewing and manipulation of medical images in virtual and augmented reality is introduced, outlining considerations for placing these methods directly into a radiology-based workflow and showing how it can be applied to a variety of clinical scenarios.
Abstract: Recent technological innovations have created new opportunities for the increased adoption of virtual reality (VR) and augmented reality (AR) applications in medicine. While medical applications of VR have historically seen greater adoption from patient-as-user applications, the new era of VR/AR technology has created the conditions for wider adoption of clinician-as-user applications. Historically, adoption to clinical use has been limited in part by the ability of the technology to achieve a sufficient quality of experience. This article reviews the definitions of virtual and augmented reality and briefly covers the history of their development. Currently available options for consumer-level virtual and augmented reality systems are presented, along with a discussion of technical considerations for their adoption in the clinical environment. Finally, a brief review of the literature of medical VR/AR applications is presented prior to introducing a comprehensive conceptual framework for the viewing and manipulation of medical images in virtual and augmented reality. Using this framework, we outline considerations for placing these methods directly into a radiology-based workflow and show how it can be applied to a variety of clinical scenarios.

Journal ArticleDOI
TL;DR: The clinical and non-laboratory utility of the Microsoft Kinect devices holds great promise for physical function assessment, and recent developments could strengthen their ability to provide important and impactful health-related data.