
Showing papers on "Mixed reality published in 2018"


Journal ArticleDOI
TL;DR: The present work discusses the evolution of the use of VR in its main areas of application over time, with an emphasis on VR's expected future capabilities, growth, and challenges.
Abstract: The recent appearance of low cost Virtual Reality (VR) technologies – like the Oculus Rift, the HTC Vive and the Sony PlayStation VR – and Mixed Reality Interfaces (MRITF) – like the Hololens – is attracting the attention of users and researchers, suggesting it may be the next largest stepping stone in technological innovation. However, the history of VR technology is longer than it may seem: the concept of VR was formulated in the 1960s and the first commercial VR tools appeared in the late 1980s. For this reason, during the last twenty years, hundreds of researchers explored the processes, effects and applications of this technology, producing thousands of scientific papers. What is the outcome of this significant research work? This paper provides an answer to this question by exploring, using advanced scientometric techniques, the existing research corpus in the field. We collected all the existing articles about VR in the Web of Science Core Collection scientific database, and the resulting dataset contained 21,667 records for VR and 9,944 for AR. The bibliographic record contained various fields, such as author, title, abstract, country, and all the references (needed for the citation analysis). The network and cluster analysis of the literature showed a composite panorama characterized by evolution over time. Indeed, whereas until five years ago the main publication media on VR were both conference proceedings and journals, more recently journals constitute the main medium. Similarly, while computer science was at first the leading research field, clinical areas have since grown, as has the number of countries involved in virtual reality research. The present work discusses the evolution of the use of virtual reality in the main areas of application, with an emphasis on virtual reality's expected future capabilities, growth and challenges. We conclude by considering the disruptive contribution that VR/AR/MRITF will be able to make in scientific fields, as well as in human communication and interaction, as already happened with the advent of mobile phones, by increasing the use and development of scientific applications (e.g. in clinical areas) and by modifying social communication and interaction among people.

479 citations


Journal ArticleDOI
TL;DR: The article surveys the state-of-the-art in augmented-, virtual-, and mixed-reality systems as a whole and from a cultural heritage perspective and identifies specific application areas in digital cultural heritage and makes suggestions as to which technology is most appropriate in each case.
Abstract: A multimedia approach to the diffusion, communication, and exploitation of Cultural Heritage (CH) is a well-established trend worldwide. Several studies demonstrate that the use of new and combined media enhances how culture is experienced. The benefit is in terms of both the number of people who can have access to knowledge and the quality of the diffusion of the knowledge itself. In this regard, CH uses augmented-, virtual-, and mixed-reality technologies for different purposes, including education, exhibition enhancement, exploration, reconstruction, and virtual museums. These technologies enable user-centred presentation and make cultural heritage digitally accessible, especially when physical access is constrained. A number of surveys of these emerging technologies have been conducted; however, they are either not domain specific or lack a holistic perspective in that they do not cover all the aspects of the technology. A review of these technologies from a cultural heritage perspective is therefore warranted. Accordingly, our article surveys the state-of-the-art in augmented-, virtual-, and mixed-reality systems as a whole and from a cultural heritage perspective. In addition, we identify specific application areas in digital cultural heritage and make suggestions as to which technology is most appropriate in each case. Finally, the article predicts future research directions for augmented and virtual reality, with a particular focus on interaction interfaces, and explores the implications for the cultural heritage domain.

473 citations


Proceedings ArticleDOI
19 Apr 2018
TL;DR: Mini-Me, an adaptive avatar for enhancing Mixed Reality (MR) remote collaboration between a local Augmented Reality (AR) user and a remote Virtual Reality (VR) user is presented.
Abstract: We present Mini-Me, an adaptive avatar for enhancing Mixed Reality (MR) remote collaboration between a local Augmented Reality (AR) user and a remote Virtual Reality (VR) user. The Mini-Me avatar represents the VR user's gaze direction and body gestures while it transforms in size and orientation to stay within the AR user's field of view. A user study was conducted to evaluate Mini-Me in two collaborative scenarios: an asymmetric remote expert in VR assisting a local worker in AR, and a symmetric collaboration in urban planning. We found that the presence of the Mini-Me significantly improved Social Presence and the overall experience of MR collaboration.

255 citations


Journal ArticleDOI
TL;DR: This study found that participants in the traditional and VR conditions had improved overall performance (i.e. learning, including knowledge acquisition and understanding) compared to those in the video condition.
Abstract: Recent advances in virtual reality (VR) technology allow for potential learning and education applications. For this study, 99 participants were assigned to one of three learning conditions: traditional (textbook style), VR and video (a passive control). The learning materials used the same text and 3D model for all conditions. Each participant was given a knowledge test before and after learning. Participants in the traditional and VR conditions had improved overall performance (i.e. learning, including knowledge acquisition and understanding) compared to those in the video condition. Participants in the VR condition also showed better performance for ‘remembering’ than those in the traditional and the video conditions. Emotion self-ratings before and after the learning phase showed an increase in positive emotions and a decrease in negative emotions for the VR condition. Conversely, there was a decrease in positive emotions in both the traditional and video conditions. The Web-based learning tools evaluation scale also found that participants in the VR condition reported higher engagement than those in the other conditions. Overall, VR provided an improved learning experience when compared to traditional and video learning methods. Published: 27 November 2018. This paper is part of the special collection Mobile Mixed Reality Enhanced Learning, edited by Thom Cochrane, Fiona Smart, Helen Farley and Vickel Narayan. Citation: Research in Learning Technology 2018, 26: 2140 - http://dx.doi.org/10.25304/rlt.v26.2140

213 citations


Book ChapterDOI
01 Jan 2018
TL;DR: In this article, a comparative chronological analysis of AR and VR research and applications in a retail context is presented to provide an up-to-date perspective, incorporating issues relating to motives, applications and implementation of AR by retailers, as well as consumer acceptance.
Abstract: Augmented reality (AR) and virtual reality (VR) have emerged as rapidly developing technologies used in both physical and online retailing to enhance the selling environment and shopping experience. However, academic research on, and practical applications of, AR and VR in retail are still fragmented, and this state of affairs is arguably attributable to the interdisciplinary origins of the topic. Undertaking a comparative chronological analysis of AR and VR research and applications in a retail context, this paper synthesises current debates to provide an up-to-date perspective—incorporating issues relating to motives, applications and implementation of AR and VR by retailers, as well as consumer acceptance—and to frame the basis for a future research agenda.

211 citations


Proceedings ArticleDOI
26 Feb 2018
TL;DR: A new design space for communicating robot motion intent is explored by investigating how augmented reality (AR) might mediate human-robot interactions and by developing a series of explicit and implicit designs for visually signaling robot motion intent using AR.
Abstract: Humans coordinate teamwork by conveying intent through social cues, such as gestures and gaze behaviors. However, these methods may not be possible for appearance-constrained robots that lack anthropomorphic or zoomorphic features, such as aerial robots. We explore a new design space for communicating robot motion intent by investigating how augmented reality (AR) might mediate human-robot interactions. We develop a series of explicit and implicit designs for visually signaling robot motion intent using AR, which we evaluate in a user study. We found that several of our AR designs significantly improved objective task efficiency over a baseline in which users only received physically-embodied orientation cues. In addition, our designs offer several trade-offs in terms of intent clarity and user perceptions of the robot as a teammate.

194 citations


Journal ArticleDOI
TL;DR: Core findings are that expected utilitarian, hedonic, and symbolic benefits drive consumers' reactions to augmented reality smart glasses (ARSGs), and that the extent to which ARSGs threaten other people's, but not one's own, privacy can strongly influence users' decision making.

190 citations


Journal ArticleDOI
TL;DR: It is not surprising that managers find it hard to distinguish similar-sounding, IT-based concepts such as augmented reality and virtual reality.

166 citations


Journal ArticleDOI
01 Jan 2018
TL;DR: The primary focus of this article is on the embodiment afforded by gesture in 3D for learning; the new generation of hand controllers induces embodiment and agency via meaningful movements congruent with the content to be learned.
Abstract: This article explores relevant applications of educational theory for the design of immersive virtual reality (VR). Two unique attributes associated with VR position the technology to positively affect education: 1) the sensation of presence, and 2) the embodied affordances of gesture and manipulation in the 3rd dimension. These are referred to as the two profound affordances of VR. The primary focus of this article is on the embodiment afforded by gesture and 3D for learning. The new generation of hand controllers induces embodiment and agency via meaningful movements that are congruent with the content to be learned. Several examples of gesture-rich lessons are presented. The final section includes an extensive set of design principles for immersive VR in education and finishes with the Necessary Nine, which are hypothesized to optimize the pedagogy within a lesson.

164 citations


Journal ArticleDOI
TL;DR: A novel approach is presented that automatically determines the locations for soil samples based on a soil map created from drone imaging after ploughing, together with a wearable augmented reality technology that guides the user to the generated sample points.

138 citations



Journal ArticleDOI
TL;DR: This article explores the potential of mixed reality using the HoloLens to develop a hybrid training system for orthopaedic open surgery; the perceived overall workload was low, and the self-assessed performance was considered satisfactory.
Abstract: Orthopaedic simulators are popular in innovative surgical training programs, where trainees gain procedural experience in a safe and controlled environment. Recent studies suggest that an ideal simulator should combine haptic, visual, and audio technology to create an immersive training environment. This article explores the potential of mixed reality using the HoloLens to develop a hybrid training system for orthopaedic open surgery. Hip arthroplasty, one of the most common orthopaedic procedures, was chosen as a benchmark to evaluate the proposed system. Patient-specific anatomical 3D models were extracted from a patient's computed tomography scan to implement the virtual content and to fabricate the physical components of the simulator. Rapid prototyping was used to create synthetic bones. The Vuforia SDK was utilized to register the virtual and physical contents. The Unity3D game engine was employed to develop the software allowing interactions with the virtual content using head movements, gestures, and voice commands. Quantitative tests were performed to estimate the accuracy of the system by evaluating the perceived position of augmented reality targets. Mean and maximum errors matched the requirements of the target application. Qualitative tests were carried out to evaluate the workload and usability of the HoloLens for our orthopaedic simulator, considering visual and audio perception, interaction, and ergonomics issues. The perceived overall workload was low, and the self-assessed performance was considered satisfactory. Visual and audio perception and gesture and voice interactions obtained positive feedback. Postural discomfort and visual fatigue obtained a non-negative evaluation for a simulation session of 40 minutes. These results encourage using mixed reality to implement a hybrid simulator for orthopaedic open surgery. An optimal design of the simulation tasks and equipment setup is required to minimize user discomfort. Future works will include Face Validity, Content Validity, and Construct Validity to complete the assessment of the hip arthroplasty simulator.
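The quantitative accuracy test described above boils down to comparing perceived and ground-truth positions of the augmented reality targets. A minimal Python sketch of that comparison, assuming the perceived positions have been collected as N x 3 arrays in metres (the function name and units are illustrative, not taken from the paper):

import numpy as np

def registration_error_mm(perceived, truth):
    # Euclidean error per target between where the AR overlay appears
    # and where it should be, reported in millimetres.
    err = np.linalg.norm(np.asarray(perceived) - np.asarray(truth), axis=1)
    return 1000.0 * err.mean(), 1000.0 * err.max()

The mean and maximum of this error distribution are the two figures the abstract says were checked against the requirements of the target application.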

Journal ArticleDOI
TL;DR: It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture geometric distortion, and that visual saliency can be leveraged to improve the proposed blind quality metric by a sizable margin.
Abstract: New challenges have emerged along with 3D-related technologies such as virtual reality, augmented reality (AR), and mixed reality. Free viewpoint video (FVV), due to its applications in remote surveillance, remote education, and so on, based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn a wide range of researchers’ attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in the “blind” environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. However, existing assessment metrics do not reflect human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method as compared with prevailing full-, reduced-, and no-reference models.
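The core idea, as described, is that a natural image patch is well predicted by a linear autoregressive model of its neighbours, so DIBR geometry errors show up as large prediction residuals. A rough Python sketch of that pipeline, assuming a grayscale image and a precomputed saliency map; the patch size, neighbourhood, pooling rule, and function names are illustrative simplifications, not the paper's exact model:

import numpy as np

def ar_residual_map(img, patch=8):
    # Fit, per patch, least-squares AR coefficients predicting each pixel
    # from its 8 neighbours; the absolute prediction error is the residual.
    img = img.astype(np.float64)
    h, w = img.shape
    res = np.zeros_like(img)
    offs = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]
    for y0 in range(1, h - 1 - patch, patch):
        for x0 in range(1, w - 1 - patch, patch):
            ys, xs = np.mgrid[y0:y0+patch, x0:x0+patch]
            X = np.stack([img[ys+dy, xs+dx].ravel() for dy, dx in offs], axis=1)
            t = img[ys, xs].ravel()
            coef, *_ = np.linalg.lstsq(X, t, rcond=None)
            res[ys, xs] = np.abs(t - X @ coef).reshape(patch, patch)
    return res

def blind_quality_score(img, saliency):
    # Saliency-weighted pooling of the residual map (higher = more distorted).
    r = ar_residual_map(img)
    return float((r * saliency).sum() / (saliency.sum() + 1e-12))

Distortion-free regions predict themselves almost perfectly, so the score is dominated by the salient, geometrically damaged areas that human viewers notice most.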

Proceedings ArticleDOI
26 Feb 2018
TL;DR: This work explores how advances in augmented reality (AR) technologies are creating a new design space for mediating robot teleoperation by enabling novel forms of intuitive, visual feedback, and demonstrates several objective and subjective performance benefits over existing systems.
Abstract: Robot teleoperation can be a challenging task, often requiring a great deal of user training and expertise, especially for platforms with high degrees-of-freedom (e.g., industrial manipulators and aerial robots). Users often struggle to synthesize information robots collect (e.g., a camera stream) with contextual knowledge of how the robot is moving in the environment. We explore how advances in augmented reality (AR) technologies are creating a new design space for mediating robot teleoperation by enabling novel forms of intuitive, visual feedback. We prototype several aerial robot teleoperation interfaces using AR, which we evaluate in a 48-participant user study where participants completed an environmental inspection task. Our new interface designs provided several objective and subjective performance benefits over existing systems, which often force users into an undesirable paradigm that divides user attention between monitoring the robot and monitoring the robot’s camera feed(s).

Journal ArticleDOI
01 Feb 2018-Sensors
TL;DR: A brief study of the various approaches and techniques of emotion recognition is presented, including a succinct review of the databases used as data sets for algorithms that detect emotions from facial expressions.
Abstract: Extensive possibilities of applications have made emotion recognition ineluctable and challenging in the field of computer science. The use of non-verbal cues such as gestures, body movement, and facial expressions conveys the feeling and the feedback to the user. This discipline of Human–Computer Interaction places reliance on algorithmic robustness and the sensitivity of the sensor to ameliorate recognition. Sensors play a significant role in accurate detection by providing very high-quality input, hence increasing the efficiency and reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases used as data sets for algorithms that detect emotions from facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition, and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing the results of emotion recognition by the MHL and a regular webcam.

Journal ArticleDOI
TL;DR: Outcomes are presented from post-intervention student interviews and discipline academic observation, which highlight improvements in learner motivation and skills but also demonstrate pedagogical challenges to overcome with mobile mixed reality learning.
Abstract: New accessible learning methods delivered through mobile mixed reality are becoming possible in education, shifting pedagogy from the use of two-dimensional images and videos to facilitating learning via interactive mobile environments. This is especially important in medical and health education, where the required knowledge acquisition is typically much more experiential, self-directed, and hands-on than in many other disciplines. Presented are insights obtained from the implementation and testing of two mobile mixed reality interventions across two Australian higher education classrooms in medicine and health sciences, concentrating on student perceptions of mobile mixed reality for learning physiology and anatomy in a face-to-face medical and health science classroom, and on skills acquisition in airways management, focusing on direct laryngoscopy with foreign body removal, in a distance paramedic science classroom. This is unique because most studies focus on a single discipline, on either skills or the learner experience, and on a single delivery modality, rather than linking cross-discipline knowledge acquisition and the development of a student’s tangible skills across multimodal classrooms. Outcomes are presented from post-intervention student interviews and discipline academic observation, which highlight improvements in learner motivation and skills but also demonstrate pedagogical challenges to overcome with mobile mixed reality learning.

Posted Content
TL;DR: Multimediated Reality is proposed as a multidimensional multisensory mediated reality that includes not just interactive multimedia-based reality for our five senses, but also additional senses (like sensory sonar, sensory radar, etc.), as well as our human actions/actuators.
Abstract: The contributions of this paper are: (1) a taxonomy of the "Realities" (Virtual, Augmented, Mixed, Mediated, etc.), and (2) some new kinds of "reality" that come from nature itself, i.e. that expand our notion beyond synthetic realities to include also phenomenological realities. VR (Virtual Reality) replaces the real world with a simulated experience (virtual world). AR (Augmented Reality) allows a virtual world to be experienced while also experiencing the real world at the same time. Mixed Reality provides blends that interpolate between real and virtual worlds in various proportions, along a "Virtuality" axis, and extrapolate to an "X-axis". Mediated Reality goes a step further by mixing/blending and also modifying reality. This modifying of reality introduces a second axis. Mediated Reality is useful as a seeing aid (e.g. modifying reality to make it easier to understand), and for psychology experiments like Stratton's 1896 upside-down eyeglasses experiment. We propose Multimediated Reality as a multidimensional multisensory mediated reality that includes not just interactive multimedia-based reality for our five senses, but also includes additional senses (like sensory sonar, sensory radar, etc.), as well as our human actions/actuators. These extra senses are mapped to our human senses using synthetic synesthesia. This allows us to directly experience real (but otherwise invisible) phenomena, such as wave propagation and wave interference patterns, so that we can see radio waves and sound waves and how they interact with objects and each other. Multimediated reality is multidimensional, multimodal, multisensory, and multiscale. It is also multidisciplinary, in that we must consider not just the user, but also how the technology affects others, e.g. how its physical appearance affects social situations.

Proceedings ArticleDOI
19 Apr 2018
TL;DR: A method is contributed that uses an underlying voxel grid holding information such as visibility and transformations, applied to live geometry in real time; this enables changing the environment as easily as geometry can be changed in virtual reality.
Abstract: We present Remixed Reality, a novel form of mixed reality. In contrast to classical mixed reality approaches where users see a direct view or video feed of their environment, with Remixed Reality they see a live 3D reconstruction, gathered from multiple external depth cameras. This approach enables changing the environment as easily as geometry can be changed in virtual reality, while allowing users to view and interact with the actual physical world as they would in augmented reality. We characterize a taxonomy of manipulations that are possible with Remixed Reality: spatial changes such as erasing objects; appearance changes such as changing textures; temporal changes such as pausing time; and viewpoint changes that allow users to see the world from different points without changing their physical location. We contribute a method that uses an underlying voxel grid holding information like visibility and transformations, which is applied to live geometry in real time.
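The contributed voxel-grid method can be pictured as a per-voxel table of visibility flags and rigid transforms that is consulted for every incoming point each frame. A minimal numpy sketch under that reading; the class name, grid resolution, and edit operations are assumptions for illustration, not the paper's implementation:

import numpy as np

class RemixVoxelGrid:
    # Each voxel stores a visibility flag and a 4x4 transform; live
    # geometry from the depth cameras is filtered and moved per voxel.
    def __init__(self, dims=(64, 64, 64), voxel_size=0.1):
        self.voxel_size = voxel_size
        self.visible = np.ones(dims, dtype=bool)
        self.transform = np.tile(np.eye(4), dims + (1, 1))

    def erase(self, ix, iy, iz):
        # Spatial change: the object occupying this voxel disappears.
        self.visible[ix, iy, iz] = False

    def move(self, ix, iy, iz, offset):
        # Spatial change: translate geometry inside this voxel.
        T = np.eye(4)
        T[:3, 3] = offset
        self.transform[ix, iy, iz] = T

    def apply(self, points):
        # Filter and transform an N x 3 live point cloud (metres).
        idx = np.clip(np.floor(points / self.voxel_size).astype(int),
                      0, np.array(self.visible.shape) - 1)
        keep = self.visible[idx[:, 0], idx[:, 1], idx[:, 2]]
        pts = points[keep]
        T = self.transform[idx[keep, 0], idx[keep, 1], idx[keep, 2]]
        homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
        return np.einsum('nij,nj->ni', T, homo)[:, :3]

Appearance, temporal, and viewpoint changes fit the same pattern: extend the per-voxel record with texture handles or buffered geometry and change what apply() returns.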

Journal ArticleDOI
Robert Xiao, Julia Schwarz, Nick Throm, Andrew D. Wilson, Hrvoje Benko
TL;DR: This work presents MRTouch, a novel multitouch input solution for head-mounted mixed reality systems that enables users to reach out and directly manipulate virtual interfaces affixed to surfaces in their environment, as though they were touchscreens.
Abstract: We present MRTouch, a novel multitouch input solution for head-mounted mixed reality systems. Our system enables users to reach out and directly manipulate virtual interfaces affixed to surfaces in their environment, as though they were touchscreens. Touch input offers precise, tactile and comfortable user input, and naturally complements existing popular modalities, such as voice and hand gesture. Our research prototype combines both depth and infrared camera streams together with real-time detection and tracking of surface planes to enable robust finger-tracking even when both the hand and head are in motion. Our technique is implemented on a commercial Microsoft HoloLens without requiring any additional hardware or any user or environmental calibration. Through our performance evaluation, we demonstrate high input accuracy with an average positional error of 5.4 mm and a 95% button size of 16 mm, across 17 participants, 2 surface orientations and 4 surface materials. Finally, we demonstrate the potential of our technique to enable on-world touch interactions through 5 example applications.
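A surface-plane-plus-fingertip pipeline of the kind described can be sketched compactly: fit a plane to the depth points of a detected surface, then classify the tracked fingertip by its distance to that plane. The thresholds and function names below are illustrative guesses, not MRTouch's calibrated values:

import numpy as np

def fit_plane(points):
    # Least-squares plane through N x 3 surface points: the unit normal is
    # the direction of least variance; d places the plane at the centroid.
    centroid = points.mean(axis=0)
    _, _, vh = np.linalg.svd(points - centroid)
    n = vh[-1]
    return n, -n @ centroid

def touch_state(fingertip, n, d, touch_mm=10.0, hover_mm=50.0):
    # Signed distance of the fingertip (x, y, z in metres) to the plane
    # decides between touch, hover, and no contact.
    dist_mm = abs(n @ fingertip + d) * 1000.0
    if dist_mm < touch_mm:
        return "touch"
    return "hover" if dist_mm < hover_mm else "none"

Keeping the plane persistent per surface, as the prototype does with its tracked surface planes, lets the classifier stay stable while both the hand and head are in motion.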

Journal ArticleDOI
TL;DR: A concept is presented for evaluating the potential of mixed reality systems for inspection and maintenance processes in the aviation industry.

Proceedings ArticleDOI
27 Dec 2018
TL;DR: An augmented reality robotic interface with four interactive functions to ease the robot programming task and an industrial case study that illustrates the AR manufacturing paradigm by interacting with a 7-DOF robot arm to reduce wrinkles during the pleating step of the carbon-fiber-reinforcement-polymer vacuum bagging process in a simulated scenario.
Abstract: This paper presents a future-focused approach for robot programming based on augmented trajectories. Using a mixed reality head-mounted display (Microsoft HoloLens) and a 7-DOF robot arm, we designed an augmented reality (AR) robotic interface with four interactive functions to ease the robot programming task: 1) Trajectory specification. 2) Virtual previews of robot motion. 3) Visualization of robot parameters. 4) Online reprogramming during simulation and execution. We validate our AR-robot teaching interface by comparing it with a kinesthetic teaching interface in two different scenarios as part of a pilot study: creation of a contact surface path and a free space path. Furthermore, we present an industrial case study that illustrates our AR manufacturing paradigm by interacting with a 7-DOF robot arm to reduce wrinkles during the pleating step of the carbon-fiber-reinforced-polymer vacuum bagging process in a simulated scenario.
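The four interactive functions map naturally onto a single trajectory object that is specified, previewed, inspected, and edited before execution. A hypothetical Python sketch of that structure (class, method, and field names are assumptions for illustration; the actual system runs on the HoloLens and robot middleware):

from dataclasses import dataclass, field

@dataclass
class Waypoint:
    position: tuple          # (x, y, z) in the robot base frame, metres
    gripper_open: bool = True

@dataclass
class AugmentedTrajectory:
    waypoints: list = field(default_factory=list)

    def specify(self, wp):              # 1) trajectory specification in AR
        self.waypoints.append(wp)

    def preview(self):                  # 2) virtual preview of robot motion
        return [wp.position for wp in self.waypoints]

    def parameters(self, speed=0.1):    # 3) visualization of robot parameters
        return {"n_waypoints": len(self.waypoints), "speed_m_s": speed}

    def reprogram(self, i, wp):         # 4) online reprogramming of a point
        self.waypoints[i] = wp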

Journal ArticleDOI
TL;DR: This study provides a comprehensive overview of the available literature regarding the use of AR in open surgery, both in clinical and simulated settings, to help developers and end users discuss and understand the benefits and shortcomings of these systems in open surgery.
Abstract: Augmented reality (AR) has been successfully providing surgeons extensive visual information on surgical anatomy to assist them throughout the procedure. AR allows surgeons to view the surgical field through a superimposed 3D virtual model of anatomical details. However, open surgery presents new challenges. This study provides a comprehensive overview of the available literature regarding the use of AR in open surgery, both in clinical and simulated settings. In this way, we aim to analyze the current trends and solutions to help developers and end users discuss and understand the benefits and shortcomings of these systems in open surgery. We performed a PubMed search of the available literature updated to January 2018 using the terms (1) "augmented reality" AND "open surgery", (2) "augmented reality" AND "surgery" NOT "laparoscopic" NOT "laparoscope" NOT "robotic", (3) "mixed reality" AND "open surgery", (4) "mixed reality" AND "surgery" NOT "laparoscopic" NOT "laparoscope" NOT "robotic". The aspects evaluated were the following: real data source, virtual data source, visualization processing modality, tracking modality, registration technique, and AR display type. The initial search yielded 502 studies. After removing duplicates and reading abstracts, a total of 13 relevant studies were chosen. In 1 out of 13 studies, in vitro experiments were performed, while the rest of the studies were carried out in a clinical setting, including pancreatic, hepatobiliary, and urogenital surgeries. The AR system in open surgery appears to be a versatile and reliable tool in the operating room. However, some technological limitations need to be addressed before implementing it into routine practice.

Proceedings ArticleDOI
21 Apr 2018
TL;DR: A mobile system that enhances mixed reality experiences and games with force feedback by means of electrical muscle stimulation (EMS) while keeping the users' hands free to interact unencumbered, and demonstrates how this supports three classes of applications along the mixed-reality continuum.
Abstract: We present a mobile system that enhances mixed reality experiences and games with force feedback by means of electrical muscle stimulation (EMS). The benefit of our approach is that it adds physical forces while keeping the users' hands free to interact unencumbered: not only with virtual objects, but also with physical objects, such as props and appliances. We demonstrate how this supports three classes of applications along the mixed-reality continuum: (1) entirely virtual objects, such as furniture with EMS friction when pushed or an EMS-based catapult game. (2) Virtual objects augmented via passive props with EMS-constraints, such as a light control panel made tangible by means of a physical cup or a balance-the-marble game with an actuated tray. (3) Augmented appliances with virtual behaviors, such as a physical thermostat dial with EMS-detents or an escape-room that repurposes lamps as levers with detents. We present a user study in which participants rated the EMS feedback as significantly more realistic than a no-EMS baseline.

Journal ArticleDOI
TL;DR: In this paper, the authors used grounded theory to obtain a definition for virtual world that is directly applicable to technology, compared the obtained definition with related work, and used it to classify advanced technologies such as a pseudo-persistent video game, a MANet, virtual and mixed reality, and the Metaverse.
Abstract: There is no generally accepted definition for a virtual world, with many complementary terms and acronyms having emerged implying a virtual world. Advances in networking techniques, such as host migration of instances, mobile ad hoc networking, and distributed computing, bring into question whether architectures can actually support a virtual world. Without a concrete definition, controversy ensues and it is problematic to design an architecture for a virtual world. Several researchers have provided a definition, but aspects of each definition are still problematic and simply cannot be applied to contemporary technologies. The approach of this article is to sample technologies using grounded theory and to obtain a definition for a “virtual world” that is directly applicable to technology. The obtained definition is compared with related work and used to classify advanced technologies such as a pseudo-persistent video game, a MANet, virtual and mixed reality, and the Metaverse. The results of this article include a breakdown of which properties set apart the various technologies; a definition that is validated by comparing it with other definitions; an ontology showing the relation of the different complementary terms and acronyms; and the usage of pseudo-persistence to categorize those technologies which only mimic persistence.


Proceedings ArticleDOI
01 Oct 2018
TL;DR: An investigation of how to further improve live 360 panorama based remote collaborative experiences by adding Mixed Reality (MR) cues showed that providing view independence through sharing live panorama enhances co-presence in collaboration, and that the MR cues help users understand each other.
Abstract: Sharing and watching live 360 panorama video is available on modern social networking platforms, yet the communication is often a passive, one-directional experience. This research investigates how to further improve live 360 panorama based remote collaborative experiences by adding Mixed Reality (MR) cues. SharedSphere is a wearable MR remote collaboration system that enriches live captured immersive panorama based collaboration through MR visualisation of non-verbal communication cues (e.g., view awareness and gesture cues). We describe the design and implementation details of the prototype system, and report on a user study investigating how MR live panorama sharing affects the user's collaborative experience. The results showed that providing view independence through sharing live panorama enhances co-presence in collaboration, and that the MR cues help users understand each other. Based on the study results we discuss design implications and future research directions.

Journal ArticleDOI
TL;DR: This study introduces VR, AR, and MR applications in medical practice and education, and aims to help health professionals learn more about these applications and become interested in improving the quality of medical care via the technology.
Abstract: As technology advances, mobile devices have gradually turned into wearable devices, and Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) have been applied more and more widely. For example, VR, AR and MR are applied in medical fields such as medical education and training, surgical simulation, neurological rehabilitation, psychotherapy, and telemedicine. Related research results have shown that VR, AR and MR ameliorate the inconveniences of traditional medical care, reduce medical malpractice caused by unskilled operation, and lower the cost of medical education and training. Moreover, these applications have enhanced the effectiveness of medical education and training, raised the level of diagnosis and treatment, improved the doctor-patient relationship, and boosted the efficiency of medical execution. This study introduces VR, AR, and MR applications in medical practice and education, and aims to help health professionals learn more about these applications and become interested in improving the quality of medical care via the technology.

Book ChapterDOI
01 Jan 2018
TL;DR: This paper focuses on drones with four sets of rotor blades, known as quadcopters, and how they are applied in the field of entertainment and AVR, as their usage is expanding across scientific, commercial, and entertainment domains.
Abstract: This paper explores the use of drones for entertainment with the emerging technology of AVR (Augmented and Virtual Reality) over the past 10 years, from 2006 to 2016. Drones, known as UAVs (Unmanned Aerial Vehicles) or UASs (Unmanned Aircraft Systems), are aircraft without a pilot on board. This paper focuses on drones with four sets of rotor blades, known as quadcopters, and how they are applied in the field of entertainment and AVR, as their usage is expanding across scientific, commercial, and entertainment domains. Industries and individuals have begun to see the opportunities of drone technology, and it is now expanding into the field of creating aerial immersive mixed reality. This paper introduces an overview of drones and the characteristics of their usage in the entertainment and AVR areas.

Journal ArticleDOI
TL;DR: Computer-based solutions such as 3-D anatomical reconstruction and computer-based procedure planning have helped improve both patient outcome and safety in surgical innovation.
Abstract: Surgical innovation aims to improve both patient outcome and safety. Among many others, computer-based solutions such as 3-D anatomical reconstruction and computer-based procedures planning have sh...

Proceedings ArticleDOI
06 Feb 2018
TL;DR: This work proposes a decentralized blockchain-based peer-to-peer model of distribution, with virtual spaces represented as blocks so that they can be archived, mapped, shared, and reused among different applications.
Abstract: Mixed reality telepresence is becoming an increasingly popular form of interaction in social and collaborative applications. We are interested in how created virtual spaces can be archived, mapped, shared, and reused among different applications. Therefore, we propose a decentralized blockchain-based peer-to-peer model of distribution, with virtual spaces represented as blocks. We demonstrate the integration of our system in a collaborative mixed reality application and discuss the benefits and limitations of our approach.
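Representing virtual spaces as blocks suggests hashing each serialized space and chaining it to its predecessor, so that peers can verify and reuse archived spaces without a central server. A minimal Python sketch under that assumption; the block fields and naming are illustrative, not the paper's protocol:

import hashlib, json, time

class SpaceBlock:
    # A serialized virtual space (e.g. meshes, anchors) hashed and
    # chained to the previous block's digest.
    def __init__(self, space, prev_hash):
        self.space = space
        self.prev_hash = prev_hash
        self.timestamp = time.time()
        self.hash = self._digest()

    def _digest(self):
        payload = json.dumps({"space": self.space, "prev": self.prev_hash,
                              "ts": self.timestamp}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(blocks):
    # Valid if every digest is intact and every block links to its parent.
    return (all(b.hash == b._digest() for b in blocks) and
            all(b.prev_hash == a.hash for a, b in zip(blocks, blocks[1:])))

Because each application can walk and verify the chain independently, an archived space can be mapped into a new collaborative session and reused without trusting a single host.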