
Showing papers on "Augmented virtuality published in 2019"


Journal ArticleDOI
TL;DR: The proposed framework integrates multisource facilities information, BIM models, and feature-based tracking in an MR-based setting to retrieve information based on time and support remote collaboration and visual communication between the field worker and the manager at the office.

74 citations


Journal ArticleDOI
01 Jan 2019
TL;DR: This paper attempts to compare the existing immersive reality technologies and interaction methods against their potential to enhance cultural learning in VH applications, and proposes a specific integration of collaborative and multimodal interaction methods into a Mixed Reality (MxR) scenario that can be applied to VH applications that aim at enhancing cultural learning in situ.
Abstract: In recent years, Augmented Reality (AR), Virtual Reality (VR), Augmented Virtuality (AV), and Mixed Reality (MxR) have become popular immersive reality technologies for cultural knowledge dissemination in Virtual Heritage (VH). These technologies have been utilized for enriching museums with a personalized visiting experience and digital content tailored to the historical and cultural context of the museums and heritage sites. Various interaction methods, such as sensor-based, device-based, tangible, collaborative, multimodal, and hybrid interaction methods, have also been employed by these immersive reality technologies to enable interaction with the virtual environments. However, the utilization of these technologies and interaction methods is not often supported by a guideline that can assist Cultural Heritage Professionals (CHP) in predetermining their relevance to attain the intended objectives of the VH applications. In this regard, our paper attempts to compare the existing immersive reality technologies and interaction methods against their potential to enhance cultural learning in VH applications. To objectify the comparison, three factors have been borrowed from existing scholarly arguments in the Cultural Heritage (CH) domain. These factors are the technology's or the interaction method's potential and/or demonstrated capability to: (1) establish a contextual relationship between users, virtual content, and cultural context, (2) allow collaboration between users, and (3) enable engagement with the cultural context in the virtual environment and with the virtual environment itself. Following the comparison, we have also proposed a specific integration of collaborative and multimodal interaction methods into a Mixed Reality (MxR) scenario that can be applied to VH applications that aim at enhancing cultural learning in situ.

62 citations


Proceedings ArticleDOI
18 Jun 2019
TL;DR: Three different ways to represent physical passersby in a Virtual Environment using Augmented Virtuality are proposed and evaluated, and it is shown that while an Image Overlay and a 3D-Model are the fastest representations to spot passersby, the 3D-Model and the Pointcloud representations were the most accurate.
Abstract: With the proliferation of room-scale Virtual Reality (VR), more and more users install a VR system in their homes. When users are in VR, they are usually completely immersed in their application. However, sometimes passersby invade these tracking spaces and walk up to users that are currently immersed in VR to try and interact with them. As this either scares the user in VR or breaks the user's immersion, research has yet to find a way to seamlessly represent physical passersby in virtual worlds. In this paper, we propose and evaluate three different ways to represent physical passersby in a Virtual Environment using Augmented Virtuality. The representations encompass showing a Pointcloud, showing a 3D-Model, and showing an Image Overlay of the passerby. Our results show that while an Image Overlay and a 3D-Model are the fastest representations to spot passersby, the 3D-Model and the Pointcloud representations were the most accurate.

33 citations
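As a rough illustration of the Pointcloud representation evaluated above, a depth frame from a camera watching the tracking space can be back-projected into 3D points. This is a minimal sketch; the camera intrinsics (fx, fy, cx, cy) and resolution below are placeholder values, not taken from the paper.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx            # pinhole camera model
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Placeholder 640x480 frame with everything at 2 m
depth = np.full((480, 640), 2.0)
cloud = depth_to_pointcloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```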


Journal ArticleDOI
TL;DR: An extensive review of the literature from 2007 to 2017 and an analysis of the works on mixed reality technologies and their subcategories applied to collaborative education environments highlighted the lack of research across the mixed reality spectrum, especially in the augmented virtuality subcategory.
Abstract: In this paper, we report findings from a systematic mapping study, conducted to review the existing literature on collaborative educational environments incorporating mixed reality technologies. There is increasing interest in mixed reality technologies in education, especially with the introduction of new over-head-mounted displays (OHMDs), such as the HoloLens, Oculus Rift, and HTC Vive. With the consideration of areas such as education, dynamic technology, and complex environments, a research area is identified. We carried out an extensive review of the literature from 2007 to 2017 and conducted an analysis of the works on mixed reality technologies and their subcategories applied to collaborative education environments. Results highlighted the lack of research across the mixed reality spectrum, especially in the augmented virtuality subcategory, as well as technical limitations, such as response time, in the development of mixed reality technologies for collaborative environments. Furthermore, the difficulty teaching professionals face in replicating mixed reality experiments in real environments, due to the technical skills required, was identified. The main contribution of this paper is the discussion of the current works, with a visualization of the present state of the area, which aims to encourage educators to develop mixed reality artefacts and conduct further research to support collaborative educational environments.

29 citations


Proceedings ArticleDOI
23 Mar 2019
TL;DR: An immersive gastronomic experience which consists of tasting small pieces of food, while being immersed in a remote place designed to pair with the food, thus creating an innovative concept with potential impact in hospitality and tourism industries.
Abstract: We have developed an immersive gastronomic experience as a proof of concept of Distributed Reality, a type of Augmented Virtuality which combines a reality transmitted from a remote place, using 360° video, with a local reality, using video see-through. In order to achieve a fully immersive experience, local objects of interest such as hands and local food are segmented using red chrominance keying. Only those segmented objects are merged with the remote reality, which increases self-presence and allows user interaction. More concretely, the gastronomic experience consists of tasting small pieces of food while being immersed in a remote place designed to pair with the food, thus creating an innovative concept with potential impact in the hospitality and tourism industries. An evaluation performed with 66 users shows that it provides good levels of immersion, local interactivity, and general user satisfaction. The application achieves real-time performance and has been developed for a smartphone mounted on a consumer headset, thus being easy to deploy and to reuse in other use cases.

24 citations
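The red-chrominance keying step described above can be approximated with a threshold on the Cr channel of the YCrCb colour space, followed by compositing onto the remote 360° frame. The threshold values below are illustrative guesses; the paper does not publish its keying parameters.

```python
import cv2
import numpy as np

def red_chrominance_mask(local_bgr, cr_min=150, cr_max=200):
    """Mask pixels by red chrominance (Cr channel of YCrCb).

    cr_min/cr_max are illustrative, not the paper's tuned values.
    """
    ycrcb = cv2.cvtColor(local_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb[:, :, 1], cr_min, cr_max)
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle

def merge_into_remote(local_bgr, remote_bgr, mask):
    """Composite the keyed local pixels (hands, food) over the remote frame."""
    keep = cv2.merge([mask] * 3) // 255       # 0/1 per channel
    return local_bgr * keep + remote_bgr * (1 - keep)
```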


Proceedings ArticleDOI
10 Sep 2019
TL;DR: An augmented virtuality application was developed in which a physical cake buffet and a virtual living room coexisted and interacted in real time and was used to facilitate remote social eating for solitary older adults.
Abstract: This study used augmented virtuality with an overall aim to prevent undernourishment among older adults. The purpose was to facilitate remote social eating for solitary older adults, as people tend to eat more when socializing. In this study, an augmented virtuality application was developed in which a physical cake buffet and a virtual living room coexisted and interacted in real time. This was possible using an Oculus Rift CV1 HMD with an Intel RealSense SR300 depth sensor mounted on top of the HMD. The study included initial workshops with 30 experts and 16 older adults, prototyping with 7 mobility-restricted older adults, and a final user study with 27 older adults. In the user study, we evaluated the user experience of a system designed to establish a meal conversation between three older friends while eating a solitary meal in augmented virtuality. Within three overall factors (user, context, and system), we outlined sub-elements that constituted both opportunities and limitations, which included interactions, perceptions, and behavior in the augmented virtuality. The virtual living room was described very positively by all participants. However, there were also some technological limitations in terms of fidelity, HMD comfort, and quality. When developing virtual environments, we found it very important to include very specific elements within the user, context, and system categories, as well as to include qualitative methods throughout the entire design process.

19 citations


Journal ArticleDOI
09 Oct 2019 - Sensors
TL;DR: A rigorous overview of, and introduction to, the challenges and methodologies used to study and control large-scale and complex socio-technical systems using agent-based methods is given.
Abstract: Modelling and simulation of social interaction and networks are of high interest in multiple disciplines and fields of application, ranging from fundamental social sciences to smart city management. Future smart city infrastructures and management are characterised by adaptive and self-organising control using real-world sensor data. In this work, humans are considered as sensors. Virtual worlds, e.g., simulations and games, are commonly closed and rely on artificial social behaviour and synthetic sensor information generated by the simulator program or using data collected off-line by surveys. In contrast, real worlds have a higher diversity. Agent-based modelling relies on parameterised models. The selection of suitable parameter sets is crucial to match real-world behaviour. In this work, a framework combining agent-based simulation with crowd sensing and social data mining using mobile agents is introduced. The crowd sensing via chat bots creates augmented virtuality and reality by augmenting the simulated worlds with real-world interaction and vice versa. The simulated world interacts with real-world environments, humans, machines, and other virtual worlds in real time. Besides mining the physical sensors (e.g., temperature, motion, position, and light) of mobile devices like smartphones, mobile agents can perform crowd sensing by participating in question–answer dialogues via a chat bot (provided by smartphone apps or integrated into Web pages and social media). Additionally, mobile agents can act as virtual sensors (offering data exchanged with other agents) and create a bridge between virtual and real worlds. The ubiquitous usage of digital social media has a relevant impact on social interaction, mobility, and opinion-making, which has to be considered. Three different use cases demonstrate the suitability of augmented agent-based simulation for social network analysis using parameterised behavioural models and mobile agent-based crowd sensing. This paper gives a rigorous overview of, and introduction to, the challenges and methodologies used to study and control large-scale and complex socio-technical systems using agent-based methods.

17 citations
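The core loop the abstract describes — a parameterised agent model whose parameters are (re)estimated from crowd-sensed question-answer data — can be sketched in a few lines. The survey question, the yes/no encoding, and the one-parameter model are illustrative assumptions, not the paper's actual framework.

```python
import random

class Agent:
    """Minimal parameterised agent: interacts with probability p each step."""
    def __init__(self, p_interact):
        self.p_interact = p_interact
        self.interactions = 0

    def step(self):
        if random.random() < self.p_interact:
            self.interactions += 1

def calibrate(answers):
    """Estimate the interaction probability from crowd-sensed yes/no
    answers (e.g., 'did you chat with a neighbour today?')."""
    return sum(answers) / len(answers)

# Answers collected via a chat bot: 1 = yes, 0 = no
answers = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
agents = [Agent(calibrate(answers)) for _ in range(100)]
for _ in range(50):                       # run 50 simulation steps
    for agent in agents:
        agent.step()
print(sum(a.interactions for a in agents) / len(agents))  # mean interactions
```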


Proceedings ArticleDOI
02 May 2019
TL;DR: The effects of using the augmented virtuality impairment simulation system--Empath-D--to support experienced designer-developers in redesigning a mockup of a commonly used mobile application for cataract-impaired users are examined, showing that the use of augmented virtuality for assessing usability supports enhanced usability-challenge identification, finding more defects and doing so more accurately than with existing methods.
Abstract: With mobile apps rapidly permeating all aspects of daily living, with use by all segments of the population, it is crucial to support the evaluation of app usability for specific impaired users to improve app accessibility. In this work, we examine the effects of using our augmented virtuality impairment simulation system--Empath-D--to support experienced designer-developers in redesigning a mockup of a commonly used mobile application for cataract-impaired users, comparing this with existing tools that aid designing for accessibility. We show that the use of augmented virtuality for assessing usability supports enhanced usability-challenge identification, finding more defects and doing so more accurately than with existing methods. Through our user interviews, we also show that augmented virtuality impairment simulation supports realistic interaction and evaluation to provide a concrete understanding of the usability challenges that impaired users face, and complements the existing guidelines-based approaches meant for general accessibility.

9 citations
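For a sense of what an impairment simulation layer does, the common visual symptoms of cataracts (blur, contrast loss, yellowing) can be approximated as image filters over an app mockup. This sketch is not Empath-D's actual pipeline; the filter chain and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def simulate_cataract(bgr, blur_sigma=6.0, contrast=0.6, yellowing=0.25):
    """Approximate a cataract-impaired view of a UI screenshot.

    Parameter values are illustrative, not clinically calibrated.
    """
    out = cv2.GaussianBlur(bgr, (0, 0), blur_sigma)   # lens clouding
    # Compress contrast towards mid-grey
    out = cv2.convertScaleAbs(out, alpha=contrast, beta=128 * (1 - contrast))
    b, g, r = cv2.split(out.astype(np.float32))
    b *= 1.0 - yellowing                              # yellowed lens absorbs blue
    return cv2.merge([b, g, r]).clip(0, 255).astype(np.uint8)

# Usage: impaired = simulate_cataract(cv2.imread("app_mockup.png"))
```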


01 Jan 2019
TL;DR: A redefinition of Mixed Reality is presented, taking into consideration the experiential symbiotic relationship and interaction between users, reality, and current immersive reality technologies, and its contextual applicability to the Virtual Heritage (VH) domain is suggested.
Abstract: The primary objective of this paper is to present a redefinition of Mixed Reality from a perspective emphasizing the relationship between users, virtuality, and reality as a fundamental component. The redefinition is motivated by three primary reasons. Firstly, current literature in which Augmented Reality is the focus appears to approach Augmented Reality as an alternative to Mixed Reality. Secondly, Mixed Reality is often considered to encompass Augmented Reality and Virtual Reality rather than being specified as a segment along the reality-virtuality continuum. Thirdly, the most common definitions of Augmented Reality (AR), Augmented Virtuality (AV), Virtual Reality (VR), and Mixed Reality (MxR) in current literature are based on outdated display technologies and on a relationship between virtuality and reality, neglecting the importance of the user's necessarily complicit sense of immersion arising from that relationship. The focus of existing definitions is thus currently technological, rather than experiential. We resolve this by redefining the continuum and MxR, taking into consideration the experiential symbiotic relationship and interaction between users, reality, and current immersive reality technologies. In addition, the paper suggests a high-level overview of the redefinition's contextual applicability to the Virtual Heritage (VH) domain.

8 citations


Proceedings ArticleDOI
20 May 2019
TL;DR: The results show that, for the cutting task, the augmented virtuality visualization can improve operator performance compared to the conventional visualization, but that operators are more proficient with the conventional control interface than with the da Vinci master console.
Abstract: On-orbit servicing of satellites is complicated by the fact that almost all existing satellites were not designed to be serviced. This creates a number of challenges, one of which is to cut and partially remove the protective thermal blanketing that encases a satellite prior to performing the servicing operation. A human operator on Earth can perform this task telerobotically, but must overcome difficulties presented by the multi-second round-trip telemetry delay between the satellite and the operator and the limited, or even obstructed, views from the available cameras. This paper reports the results of ground-based experiments with trained NASA robot teleoperators to compare our recently reported augmented virtuality visualization to the conventional camera-based visualization. We also compare the master console of a da Vinci surgical robot to the conventional teleoperation interface. The results show that, for the cutting task, the augmented virtuality visualization can improve operator performance compared to the conventional visualization, but that operators are more proficient with the conventional control interface than with the da Vinci master console.

6 citations


Proceedings ArticleDOI
01 Dec 2019
TL;DR: In this article, a convolutional neural network is used for real-time semantic segmentation of users' bodies in the stereoscopic RGB video streams acquired from the perspective of the user.
Abstract: In this paper, we present preliminary results on the use of deep learning techniques to integrate the user's self-body and other participants into a head-mounted video see-through augmented virtuality scenario. It has been previously shown that seeing the user's body in such simulations may improve the feeling of both self and social presence in the virtual environment, as well as user performance. We propose to use a convolutional neural network for real-time semantic segmentation of users' bodies in the stereoscopic RGB video streams acquired from the perspective of the user. We describe design issues as well as implementation details of the system and demonstrate the feasibility of using such neural networks for merging users' bodies in an augmented virtuality simulation.
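As a feasibility sketch of the per-frame body segmentation the paper describes, a pretrained semantic segmentation network can be run on each eye's camera stream and its 'person' mask used to composite the real body into the virtual scene. The model below (torchvision's DeepLabV3) is a stand-in assumption; the paper's own network and training setup are not given in this abstract.

```python
import torch
import torchvision
from torchvision import transforms

# Pretrained DeepLabV3 as a stand-in for the paper's segmentation network
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

PERSON = 15  # 'person' class index in the PASCAL VOC label set

def body_mask(rgb_frame):
    """Boolean mask of body pixels for one eye's RGB frame.

    In a stereoscopic see-through setup this would run on both streams.
    """
    x = preprocess(rgb_frame).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)["out"][0]   # (classes, H, W)
    return logits.argmax(0) == PERSON
```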

Journal ArticleDOI
24 May 2019
TL;DR: The choreographic is discussed as extended sculptural (plastic and virtual) scenographies and as augmented virtuality, drawing on contemporary associations between choreography and extended sculpture.
Abstract: Deriving its impetus from contemporary associations between the choreographic as extended sculptural (plastic and virtual) scenographies and augmented virtuality, and thus also reimagining ...

Book ChapterDOI
08 Jan 2019
TL;DR: Through the engaging experience of Space Wars, this work aims to demonstrate how digital games, as forerunners of innovative technology, are perfectly suited as an application area to embrace the underlying low-cost technology, and thus pave the way for other adopters to follow suit.
Abstract: Over the past couple of years, Virtual and Augmented Reality have been at the forefront of the Mixed Reality development scene, whereas Augmented Virtuality has significantly lagged behind. Widespread adoption, however, requires efficient low-cost platforms and minimalistic interference design. In this work we present Space Wars, an end-to-end proof of concept for an elegant and rapid-deployment Augmented VR platform. Through the engaging experience of Space Wars, we aim to demonstrate how digital games, as forerunners of innovative technology, are perfectly suited as an application area to embrace the underlying low-cost technology, and thus pave the way for other adopters (such as healthcare, education, tourism and e-commerce) to follow suit.

Posted Content
TL;DR: This work proposes a new virtual locomotion technique, Combined Walking in Place (CWIP), which is the first approach to combine different locomotion modalities in a safe manner, and evaluates it in a user study to validate users' ability to navigate a virtual world while walking in a confined and cluttered real space.
Abstract: New technologies allow ordinary people to access Virtual Reality at affordable prices in their homes. One of the most important tasks when interacting with immersive Virtual Reality is to navigate the virtual environments (VEs). Arguably, the best methods to accomplish this use direct control interfaces. Among those, natural walking (NW) makes for an enjoyable user experience. However, common techniques to support direct control interfaces in VEs feature constraints that make it difficult to use those methods in cramped home environments. Indeed, NW requires unobstructed and open space. To approach this problem, we propose a new virtual locomotion technique, Combined Walking in Place (CWIP). CWIP allows people to take advantage of the available physical space and empowers them to use NW to navigate in the virtual world. For longer distances, we adopt Walking in Place (WIP) to enable them to move in the virtual world beyond the confines of a cramped real room. However, roaming in immersive alternate reality while moving in the confines of a cluttered environment can lead people to stumble and fall. To approach these problems, we developed Augmented Virtual Reality (AVR) to inform users about real-world hazards, such as chairs, drawers, and walls, via proxies and signs placed in the virtual world. We thus propose CWIP-AVR as a way to safely explore VR in the cramped confines of your own home. To our knowledge, this is the first approach to combine different locomotion modalities in a safe manner. We evaluated it in a user study with 20 participants to validate their ability to navigate a virtual world while walking in a confined and cluttered real space. Our results show that CWIP-AVR allows people to navigate VR safely, switching between locomotion modes flexibly while maintaining good immersion.
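Walking in Place is typically detected from the vertical bobbing of the tracked HMD. The sketch below counts steps from a head-height trace; the sampling rate, amplitude, and timing thresholds are illustrative assumptions, since the abstract does not specify CWIP's detector.

```python
import math

def detect_wip_steps(head_heights, dt=1.0 / 90, min_drop=0.02, min_gap=0.25):
    """Count walking-in-place steps from the HMD's vertical position trace.

    A step fires when the head has dropped at least min_drop metres below
    its recent peak and starts rising again. Thresholds are illustrative.
    """
    steps, peak, last_t = [], head_heights[0], -min_gap
    for i in range(1, len(head_heights) - 1):
        h, h_next = head_heights[i], head_heights[i + 1]
        peak = max(peak, h)
        t = i * dt
        if peak - h >= min_drop and h_next > h and t - last_t >= min_gap:
            steps.append(t)
            last_t = t
            peak = h                   # re-arm the detector for the next step
    return steps

# Synthetic 2 Hz head bob (3 cm amplitude) around 1.70 m, 5 s at 90 Hz
trace = [1.70 + 0.03 * math.sin(2 * math.pi * 2 * i / 90) for i in range(450)]
print(len(detect_wip_steps(trace)))    # 10 steps
```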

Proceedings ArticleDOI
01 Jun 2019
TL;DR: The augmented virtuality system provides a remote telepresence for operators with real-time interaction capabilities, and focuses on providing situation awareness while maintaining operator capabilities.
Abstract: With an increased orientation towards fleet operations, new technologies to improve autonomy and connectivity are needed. Most vehicles already come with mission planning and execution capabilities, and all state data is available remotely. With more autonomy, operators take on the supervision role for multiple vehicles. However, managing multiple vehicles in real time requires supervisory interfaces that will reduce sensory overload. This paper presents a concept for an immersive user interface for the supervision of marine systems. The augmented virtuality system provides remote telepresence for operators with real-time interaction capabilities. The system focuses on providing situation awareness while maintaining operator capabilities. Data are streamed into the Unity3D interface through dedicated plugins, allowing two-way communication with LabVIEW. The data are distilled into visual cues, while sound and speech recognition are employed to lessen the visual strain and improve interaction.
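The data path the abstract sketches — streaming vehicle state into the visualization and distilling it into cues — might look like the following outside of Unity3D/LabVIEW. The UDP/JSON transport and the field names ('id', 'battery') are illustrative stand-ins for whatever the dedicated plugins actually carry.

```python
import json
import socket

def telemetry_cues(port=9000):
    """Yield (vehicle id, cue) pairs from JSON state packets received on UDP.

    Field names and the low-battery rule are illustrative assumptions.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        packet, _ = sock.recvfrom(4096)
        state = json.loads(packet)
        if state["battery"] < 0.2:        # distil raw state into a visual cue
            yield state["id"], "LOW_BATTERY"

# Usage: for vehicle, cue in telemetry_cues(): pass the cue to the renderer
```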

Proceedings ArticleDOI
26 Nov 2019
TL;DR: An explorative study with 15 participants, using a Mixed Reality board game that offered different combinations of real and virtual game components (such as the board, the pieces, and the dice), indicates that virtual interaction elements work better on a real background than vice versa.
Abstract: The concept of Mixed Reality has existed in research for decades but has experienced rapid growth in recent years, mainly due to technological advances and peripherals such as the Microsoft HoloLens reaching the market. Despite this, certain design aspects of Mixed Reality experiences, such as the different nuances of real and virtual elements, remain largely unexplored. This paper presents an explorative study with 15 participants which aims to investigate and gain a better understanding of the different qualities of real and virtual objects. To that end, we developed a Mixed Reality board game that offered different combinations of real and virtual game components, such as the board, the pieces and the dice. Our analysis shows that the participants generally preferred the completely virtual variant but appreciated different qualities of real and virtual elements. The results also indicate that virtual interaction elements work better on a real background than vice versa. However, this conflicts with some participants' preference of using physical pieces for the haptic experience, creating a design trade-off. This study represents a first step in exploring how the experience changes when swapping elements of differing realities for one another and identifying these trade-offs.

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed method is a convenient alternative to conventional marker-based methods for calibrating multiple Kinect devices and can be incorporated in various applications of augmented reality and augmented virtuality that require 3D environment reconstruction by employing multiple Kinect devices.
Abstract: Reconstruction of the three-dimensional (3D) environment is a key aspect of augmented reality and augmented virtuality, which utilize and incorporate a user's surroundings. Such reconstruction can be easily realized by employing a Kinect device. However, multiple Kinect devices are required for enhancing the reconstruction density and for spatial expansion. While employing multiple Kinect devices, they must be calibrated with respect to each other in advance, and a marker is often used for this purpose. However, a marker needs to be placed at each calibration, and the result of marker detection significantly affects the calibration accuracy. Therefore, a user-friendly, efficient, accurate, and marker-less method for calibrating multiple Kinect devices is proposed in this study. The proposed method includes a joint tracking algorithm for approximate calibration, and the obtained result is further refined by applying the iterative closest point algorithm. Experimental results indicate that the proposed method is a convenient alternative to conventional marker-based methods for calibrating multiple Kinect devices. Hence, the proposed method can be incorporated in various applications of augmented reality and augmented virtuality that require 3D environment reconstruction by employing multiple Kinect devices.
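The refinement stage the abstract names — the iterative closest point algorithm applied after a rough joint-tracking alignment — is available off the shelf. Here is a minimal sketch using Open3D, an assumed library choice rather than the authors' implementation.

```python
import numpy as np
import open3d as o3d

def refine_kinect_extrinsics(source_pcd, target_pcd, init_T,
                             max_corr_dist=0.05):
    """Refine a rough Kinect-to-Kinect 4x4 transform with point-to-point ICP.

    init_T would come from the joint-tracking stage described in the paper;
    here it is whatever initial guess the caller provides.
    """
    result = o3d.pipelines.registration.registration_icp(
        source_pcd, target_pcd, max_corr_dist, init_T,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation

# Usage sketch with clouds captured simultaneously by two Kinects:
# src = o3d.io.read_point_cloud("kinect_a.ply")
# dst = o3d.io.read_point_cloud("kinect_b.ply")
# T = refine_kinect_extrinsics(src, dst, np.eye(4))
```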

Proceedings ArticleDOI
20 Sep 2019
TL;DR: A system based on augmented virtuality technology, in which real data from the drone (video stream, 3D structures, location information) are integrated into a virtual 3D environment model to help the pilot maintain orientation during the mission.
Abstract: Since remote drone control is mentally very demanding, supporting the pilot with both a first-person view (FPV) and a third-person view (TPV) of the drone may help with orientation during the mission. We therefore present a system based on augmented virtuality technology, in which real data from the drone (video stream, 3D structures, location information) are integrated into a virtual 3D environment model. In our system, the pilot mostly flies the drone using FPV but can switch to TPV at any time in order to look around freely in situations of poor orientation. The proposed system also enables efficient mission planning: the pilot can define 3D areas with different potential security risks or set navigation waypoints, which are then used during the mission to navigate within the defined zones and to visualize the overall situation in the virtual scene, augmented by live data from the drone.
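A concrete piece of the mission-planning feature described above is testing whether the drone is inside a pilot-defined 3D risk zone. The sketch below models a zone as a 2D polygon with an altitude band — an illustrative data layout, not the paper's actual representation.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # edge crosses scan line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def zone_alerts(drone_pos, zones):
    """Risk labels of all zones containing the drone's current position."""
    x, y, z = drone_pos
    return [label for label, poly, z_min, z_max in zones
            if z_min <= z <= z_max and point_in_polygon(x, y, poly)]

# A hypothetical high-risk box from ground level up to 50 m
zones = [("high-risk", [(0, 0), (100, 0), (100, 100), (0, 100)], 0.0, 50.0)]
print(zone_alerts((40.0, 60.0, 20.0), zones))   # ['high-risk']
```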

Proceedings ArticleDOI
23 Mar 2019
TL;DR: An augmented virtuality system using low-cost or outdated devices as sensors, attached to tangible objects and tied together by a puzzling narrative, is presented, featuring rotation, pressing, pulling, pushing, and insertion movements, including passive mechanical resistance.
Abstract: We present an augmented virtuality system using low-cost or outdated devices as sensors, attached to tangible objects. Five interactions were implemented, featuring rotation, pressing, pulling, pushing, and insertion movements, including passive mechanical resistance. The interactions were tied together by a puzzling narrative. Concerning storytelling, in contrast to AR, pure virtual reality offers unlimited visual possibilities and higher flexibility. Augmenting VR with physical objects allows ignoring most details of the physical environment and exploring ‘magical’ interactions. Precise registration was accomplished using a wireless tracking system. The main challenge was the absence of physical counterparts in the scenario, which was tackled by limiting the user's reach to the available interactions. Despite Leap Motion's erratic hand tracking, the mouse and tablet sensors were precise enough, and the augmented environment allowed a high sense of presence.
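To make the 'outdated devices as sensors' idea concrete: an ordinary mouse can stand in as a rotation sensor for a tangible knob by accumulating its horizontal motion. The mapping below (pynput, 0.5 degrees per pixel) is an illustrative assumption; the paper's own sensor wiring is not described in this abstract.

```python
from pynput import mouse

class RotationDial:
    """Treat an ordinary mouse as a rotation sensor for a tangible dial."""
    def __init__(self, degrees_per_pixel=0.5):
        self.angle = 0.0
        self.scale = degrees_per_pixel
        self._last_x = None

    def on_move(self, x, y):
        # Accumulate horizontal motion as rotation, wrapped to [0, 360)
        if self._last_x is not None:
            self.angle = (self.angle + (x - self._last_x) * self.scale) % 360
        self._last_x = x

dial = RotationDial()
listener = mouse.Listener(on_move=dial.on_move)
listener.start()   # dial.angle now tracks the physical knob's rotation
```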