
Showing papers on "Mixed reality published in 2014"


Journal ArticleDOI
TL;DR: This paper argues the merit of having students design Augmented Reality experiences in order to develop their higher order thinking capabilities, establishes a future outlook for Augmented Reality, and sets a research agenda going forward.
Abstract: Augmented Reality is poised to profoundly transform Education as we know it. The capacity to overlay rich media onto the real world for viewing through web-enabled devices such as phones and tablet devices means that information can be made available to students at the exact time and place of need. This has the potential to reduce cognitive overload by providing students with “perfectly situated scaffolding”, as well as enable learning in a range of other ways. This paper will review uses of Augmented Reality both in mainstream society and in education, and discuss the pedagogical potentials afforded by the technology. Based on the prevalence of information delivery uses of Augmented Reality in Education, we argue the merit of having students design Augmented Reality experiences in order to develop their higher order thinking capabilities. A case study of “learning by design” using Augmented Reality in high school Visual Art is presented, with samples of student work and their feedback indicating that the...

367 citations


Dissertation
01 Jan 2014
TL;DR: This thesis explores flat displays, deformable displays, flexible materials, and static and mobile projection displays in dynamic environments, and serves as an example application for merging the digital and the physical through flexible materials, embodied computation, and actuation.
Abstract: In 1965 Ivan E. Sutherland envisioned the Ultimate Display, a room in which a computer can directly control the existence of matter. This type of display would merge the digital and the physical world, dramatically changing how people interact with computers. This thesis explores flat displays, deformable displays, flexible materials, and static and mobile projection displays in dynamic environments. Dynamic environments are inherent to human behavior, but they pose significant problems for Human-Computer Interaction, since computing devices rely on many assumptions about the interaction. Two aspects of the dynamic environment are considered. One is mobile human nature -- a person moving through or inside an environment. The other is the change or movement of the environment itself. The initial study consisted of a mixed reality application, based on recent motor learning research. It tested whether a performer's attentional focus on markers external to the body improves the accuracy and duration of acquiring a motor skill, as compared with the performer focusing on their own body accompanied by verbal instructions. This experiment showed the need for displays that resemble physical reality. Deformable displays and Organic User Interfaces (OUIs) leverage shape, material, and the inherent properties of matter in order to create natural, intuitive forms of interaction. We suggested designing OUIs employing depth sensors as 3D input, deformable displays as 3D output, and identifying attributes that couple matter to human perception and motor skills. Flexible materials were explored by developing a soft gripper able to hold everyday objects of various shapes and sizes. It did not use complex hardware or control algorithms, but rather combined sheets of flexible plastic materials and a single servo motor. The gripper showed how a simple design with a minimal control mechanism can solve a complex problem in a dynamic environment. It serves as an example application for merging the digital and the physical through flexible materials, embodied computation, and actuation. The next two experiments merge digital information with the physical dynamic environment by using mobile and static projectors. The mobile projector experiment consisted of GPS navigation using a bike-mounted projector, displaying a map on the pavement in front of the bike. We found that, compared with a bike-mounted smartphone, the mobile projector yields a lower cognitive load for the map navigation task. A dynamic space emerges from the navigation task requirements, and the projected display becomes a part of the physical environment. In the final experiment, a person interacts with a changing, growing environment, onto which digital information is projected from above using a static projector. The interactive space consists of cardboard building blocks, the arrangement of which is limited by the area of projection. The user adds cardboard blocks to the cluster based upon feedback projected from above. Concepts from artificial intelligence and architecture were applied to understand the interaction between the environment, the user, the morphology, and the material of the physical building system.

319 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the extent to which an embodied mixed reality learning environment (EMRELE) can enhance science learning compared to regular classroom instruction, and they hypothesize that the positive results are due to the embodiment designed into the lessons and the high degree of collaboration engendered by the co-located EMRELE.
Abstract: These 2 studies investigate the extent to which an Embodied Mixed Reality Learning Environment (EMRELE) can enhance science learning compared to regular classroom instruction. Mixed reality means that both physical tangible and digital components were present. The content for the EMRELE required that students map abstract concepts and relations onto their gestures and movements so that the concepts would become grounded in embodied action. The studies compare an immersive, highly interactive learning platform, which uses a motion-capture system to track students’ gestures and locomotion as they learn kinesthetically, with a quality classroom experience (teacher and content were held constant). Two science studies are presented: chemistry titration and disease transmission. In the counterbalanced design, one group received the EMRELE intervention while the other group received regular instruction; after 3 days and a midtest, the interventions switched. Each study lasted for 6 days total, with 3 test points: pretest, midtest, and posttest. Analyses revealed that placement in the embodied EMRELE condition consistently led to greater learning gains (effect sizes ranged from 0.53 to 1.93) compared to regular instruction (effect sizes ranged from 0.09 to 0.37). Order of intervention did not affect the final outcomes at posttest. These results are discussed in relation to a new taxonomy of embodiment in educational settings. We hypothesize that the positive results are due to the embodiment designed into the lessons and the high degree of collaboration engendered by the co-located EMRELE.

220 citations


Patent
19 Feb 2014
TL;DR: In this article, a mobile device captures an image of a real-world object where the image has content information that can be used to control a mixed reality object through an offered command set.
Abstract: Methods of interacting with a mixed reality are presented. A mobile device captures an image of a real-world object where the image has content information that can be used to control a mixed reality object through an offered command set. The mixed reality object can be real, virtual, or a mixture of both real and virtual.

215 citations


Journal ArticleDOI
TL;DR: A basic VR system is described, along with how it may be used for this purpose; this system is then extended with the introduction of real-time motion capture, a simple haptics system, and the integration of physiological and brain electrical activity recordings.
Abstract: Advances in computer graphics algorithms and virtual reality (VR) systems, together with the reduction in cost of associated equipment, have led scientists to consider VR as a useful tool for conducting experimental studies in fields such as neuroscience and experimental psychology. In particular virtual body ownership, where the feeling of ownership over a virtual body is elicited in the participant, has become a useful tool in the study of body representation, in cognitive neuroscience and psychology, concerned with how the brain represents the body. Although VR has been shown to be a useful tool for exploring body ownership illusions, integrating the various technologies necessary for such a system can be daunting. In this paper we discuss the technical infrastructure necessary to achieve virtual embodiment. We describe a basic VR system and how it may be used for this purpose, and then extend this system with the introduction of real-time motion capture, a simple haptics system and the integration of physiological and brain electrical activity recordings.

195 citations


Proceedings ArticleDOI
26 Apr 2014
TL;DR: This paper describes the design and implementation of MixFab, a mixed-reality environment that lowers the barrier for users to engage in personal fabrication, and a user study evaluating the system's prototype.
Abstract: Personal fabrication machines, such as 3D printers and laser cutters, are becoming increasingly ubiquitous. However, designing objects for fabrication still requires 3D modeling skills, thereby rendering such technologies inaccessible to a wide user-group. In this paper, we introduce MixFab, a mixed-reality environment for personal fabrication that lowers the barrier for users to engage in personal fabrication. Users design objects in an immersive augmented reality environment, interact with virtual objects in a direct gestural manner and can introduce existing physical objects effortlessly into their designs. We describe the design and implementation of MixFab, a user-defined gesture study that informed this design, show artifacts designed with the system and describe a user study evaluating the system's prototype.

151 citations


Proceedings ArticleDOI
11 Nov 2014
TL;DR: This paper presents a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live remote collaboration.
Abstract: Augmented reality annotations and virtual scene navigation add new dimensions to remote collaboration. In this paper, we present a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live remote collaboration. Two main focuses of this work are (1) automatically inferring depth for 2D drawings in 3D space, for which we evaluate four possible alternatives, and (2) gesture-based virtual navigation designed specifically to incorporate constraints arising from partially modeled remote scenes. We evaluate these elements via qualitative user studies, which in addition provide insights regarding the design of individual visual feedback elements and the need to visualize the direction of drawings.

112 citations


Patent
29 Apr 2014
TL;DR: In this paper, a mixed reality interaction program identifies an object based on an image captured by the display, and a profile for the physical object is queried to determine interaction modes for the object.
Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.

94 citations
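As a rough illustration of the pipeline this abstract describes (identify object, determine context, query profile for modes, select a mode programmatically), here is a minimal Python sketch; all class and function names are hypothetical, since the patent specifies no API.

```python
# Hypothetical sketch of the interaction-mode selection flow described above.
from dataclasses import dataclass, field

@dataclass
class ObjectProfile:
    """Maps interaction contexts to the interaction modes an object supports."""
    modes_by_context: dict = field(default_factory=dict)

    def modes_for(self, context: str) -> list:
        return self.modes_by_context.get(context, ["default"])

def select_interaction_mode(profile: ObjectProfile, context: str) -> str:
    # Programmatic selection: here, simply the first mode listed for the
    # detected context (the patent leaves the selection policy open).
    return profile.modes_for(context)[0]

# Example: a wall display identified while the environment suggests a meeting.
profile = ObjectProfile({"meeting": ["annotate", "present"], "idle": ["inspect"]})
print(select_interaction_mode(profile, "meeting"))  # -> "annotate"
```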


Journal ArticleDOI
TL;DR: This experiment suggested that embodying light-skinned people in a dark-skinned virtual body led to a reduction in their implicit racial bias.
Abstract: Cognitive neuroscientists have discovered through various experiments that our body representation is surprisingly flexible. Multisensory body illusions work well in immersive virtual reality, and recent findings suggest that they offer both a powerful tool for neuroscience and a new path for future exploration. The first Web extra at http://youtu.be/rf39t1iV0Ao is a video showing how embodying adults in a virtual child's body can cause overestimation of object sizes and implicit attitude changes. The second Web extra at http://youtu.be/H9-il4cx2mA is a video showing how embodying adults in virtual reality with different avatars can affect their drumming performance. The third Web extra at http://youtu.be/DEofSgdv3Nc is a video showing embodiment in a virtual body that "substitutes" the person's own body. This experiment suggested that embodying light-skinned people in a dark-skinned virtual body led to a reduction in their implicit racial bias.

85 citations


Proceedings ArticleDOI
04 Oct 2014
TL;DR: Ethereal Planes is a design framework that ties together many existing variations of 2D information spaces. It is aimed at assisting the design of user interfaces for next-generation technologies such as head-worn displays, and can be methodically applied to help inspire new designs.
Abstract: Information spaces are virtual workspaces that help us manage information by mapping it to the physical environment. This widely influential concept has been interpreted in a variety of forms, often in conjunction with mixed reality. We present Ethereal Planes, a design framework that ties together many existing variations of 2D information spaces. Ethereal Planes is aimed at assisting the design of user interfaces for next-generation technologies such as head-worn displays. From an extensive literature review, we encapsulated the common attributes of existing novel designs in seven design dimensions. Mapping the reviewed designs to the framework dimensions reveals a set of common usage patterns. We discuss how the Ethereal Planes framework can be methodically applied to help inspire new designs. We provide a concrete example of the framework's utility during the design of the Personal Cockpit, a window management system for head-worn displays.

79 citations


Proceedings ArticleDOI
08 Aug 2014
TL;DR: This paper compares two virtual reality systems in a variety of tasks: distance estimation, virtual object interaction, a complex search task, and a simple viewing experiment, and finds that the low-cost system consistently outperforms the high-cost system, but there is some qualitative evidence that some people are more subject to simulator sickness in the low-cost system.
Abstract: Recent advances in technology and the opportunity to obtain commodity-level components have made the development and use of three-dimensional virtual environments more available than ever before. How well such components work to generate realistic virtual environments, particularly environments suitable for perception and action studies, is an open question. In this paper we compare two virtual reality systems in a variety of tasks: distance estimation, virtual object interaction, a complex search task, and a simple viewing experiment. The virtual reality systems center around two different head-mounted displays, a low-cost Oculus Rift and a high-cost Nvis SX60, which differ in resolution, field-of-view, and inertial properties, among other factors. We measure outcomes of the individual tasks as well as assessing simulator sickness and presence. We find that the low-cost system consistently outperforms the high-cost system, but there is some qualitative evidence that some people are more subject to simulator sickness in the low-cost system.

Patent
15 Apr 2014
TL;DR: In this article, the authors propose a system that allows users of mobile computing devices to generate augmented reality scenarios by overlaying augmented reality content onto frames on a video when a trigger item is detected.
Abstract: The systems and methods allow users of mobile computing devices to generate augmented reality scenarios. Augmented reality content is paired with a real-world trigger item to generate the augmented reality scenario. The augmented reality content is overlaid onto frames of a video when a trigger item is detected. Each mobile computing device may have an augmented reality application resident on the mobile computing device to allow a user to generate the augmented reality scenarios.
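The trigger/content pairing described here reduces to a simple per-frame loop: detect a known trigger item, then composite its paired content onto the frame. A hedged Python sketch, with frames modeled as plain dicts and detection as a lookup (a real application would run image recognition on camera frames):

```python
# Minimal sketch of trigger-based AR overlay; the data model is assumed.
def run_ar_scenario(frames, pairings):
    """pairings maps a trigger item name to its augmented reality content."""
    for frame in frames:
        for item in frame["visible_items"]:
            if item in pairings:                          # trigger item detected
                frame["overlays"].append(pairings[item])  # overlay paired content
        yield frame

frames = [{"visible_items": ["poster"], "overlays": []},
          {"visible_items": ["mug"], "overlays": []}]
for f in run_ar_scenario(frames, {"poster": "3D movie trailer"}):
    print(f["overlays"])  # ['3D movie trailer'] then []
```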

Journal ArticleDOI
TL;DR: This work incorporates the latest AR and CV algorithms into a Virtual English Classroom, called VECAR, to promote immersive and interactive language learning, and design three cultural learning activities that introduce students to authentic cultural products and new cultural practices, and allow them to examine various cultural perspectives.
Abstract: The physical-virtual immersion and real-time interaction play an essential role in cultural and language learning. Augmented reality (AR) technology can be used to seamlessly merge virtual objects with real-world images to realize immersions. Additionally, computer vision (CV) technology can recognize free-hand gestures from live images to enable intuitive interactions. Therefore, we incorporate the latest AR and CV algorithms into a Virtual English Classroom, called VECAR, to promote immersive and interactive language learning. By wearing a pair of mobile computing glasses, users can interact with virtual contents in a three-dimensional space by using intuitive free-hand gestures. We design three cultural learning activities that introduce students to authentic cultural products and new cultural practices, and allow them to examine various cultural perspectives. The objectives of the VECAR are to make cultural and language learning appealing, improve cultural learning effectiveness, and enhance interpersonal communication between teachers and students.

Journal ArticleDOI
20 Jan 2014
TL;DR: This paper presents the recent effort to create rich, seamless, and adaptive AR browsers and discusses major challenges in the area and presents an agenda on future research directions for an everyday augmented world.
Abstract: As low-level hardware will soon allow us to visualize virtual content anywhere in the real world, managing it in a more structured manner still needs to be addressed. Augmented reality (AR) browser technology is the gateway to such a structured software platform and an anywhere AR user experience. AR browsers are the substitute of Web browsers in the real world, permitting overlay of interactive multimedia content on the physical world or the objects it refers to. As the current generation barely allows us to see floating virtual items in the physical world, a tighter coupling with our reality has not yet been explored. This paper presents our recent effort to create rich, seamless, and adaptive AR browsers. We discuss major challenges in the area and present an agenda on future research directions for an everyday augmented world.

Journal ArticleDOI
TL;DR: By creating a spatial link between images appearing in mid-air and physical objects, the MARIO system extends video games into the real world and enables images to be displayed in 3D spaces beyond screens.

Patent
13 Nov 2014
TL;DR: In this article, the augmented reality component of an augmented reality service is described and a path to the at least one object within the AR view is calculated to create a mapped augmented reality view.
Abstract: Techniques for an augmented reality component are described. An apparatus may comprise an augmented reality component to execute an augmented reality service in a data system. The augmented reality service operative to generate an augmented reality view of one or more objects within a target location. The augmented reality service operative to receive spatial awareness information for at least one object. The augmented reality service operative to calculate a path to the at least one object within the augmented reality view. The augmented reality service operative to add a digital representation of the path to the augmented reality view to create a mapped augmented reality view. The augmented reality service operative to present the mapped augmented reality view on an electronic device.
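The claimed flow (receive spatial awareness information, calculate a path, add its digital representation to the view) can be sketched as follows; the straight-line interpolation stands in for whatever path planning an implementation would use, and all names are hypothetical:

```python
# Hedged sketch: compute a path to an object and add it to the AR view.
def straight_line_path(start, goal, steps=5):
    """Interpolate waypoints between the viewer and the target object."""
    return [tuple(s + (g - s) * t / steps for s, g in zip(start, goal))
            for t in range(steps + 1)]

def map_augmented_view(ar_view, viewer_pos, object_pos):
    path = straight_line_path(viewer_pos, object_pos)
    ar_view["paths"].append(path)   # digital representation of the path
    return ar_view                  # the "mapped" AR view to be presented

view = {"objects": ["chair"], "paths": []}
print(map_augmented_view(view, (0.0, 0.0), (4.0, 2.0))["paths"][0][:3])
# -> [(0.0, 0.0), (0.8, 0.4), (1.6, 0.8)]
```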

Journal ArticleDOI
TL;DR: A mixed reality simulator that contains all the physical components encountered for the ventriculostomy procedure with superimposed 3-D virtual elements for the neuroanatomical structures will be an instrumental tool in training the next generation of neurosurgeons.
Abstract: Background: Medicine and surgery are turning toward simulation to improve on limited patient interaction during residency training. Many simulators today use virtual reality with augmented haptic feedback and little to no physical elements. In a collaborative effort, the University of Florida Department of Neurosurgery and the Center for Safety, Simulation & Advanced Learning Technologies created a novel "mixed" physical and virtual simulator to mimic the ventriculostomy procedure. The simulator contains all the physical components encountered for the procedure with superimposed 3-D virtual elements for the neuroanatomical structures. Objective: To introduce the ventriculostomy simulator and its validation as a necessary training tool in neurosurgical residency. Methods: We tested the simulator in more than 260 residents. An algorithm combining time and accuracy was used to grade performance. Voluntary postperformance surveys were used to evaluate the experience. Results: More experienced residents had statistically significantly better scores and completed the procedure in less time than inexperienced residents. Survey results revealed that most residents agreed that practice on the simulator would help with future ventriculostomies. Conclusion: This mixed reality simulator provides a real-life experience and will be an instrumental tool in training the next generation of neurosurgeons. We have now implemented a standard whereby incoming residents must prove efficiency and skill on the simulator before their first interaction with a patient.

Book ChapterDOI
01 Jan 2014
TL;DR: Philip Brey argues that behavior in virtual environments can be evaluated according to how it affects users of the environments, and the way in which the designs of virtual environments are value laden — they express values, structure choices, and encourage evaluative attitudes.
Abstract: Virtual reality and computer simulation are becoming increasingly immersive and interactive, and people are spending more and more time and money in virtual and simulated environments. In this chapter, Philip Brey begins with an overview of what virtual reality is — including the senses in which it is “virtual” and the senses in which it is “real.” He then discusses a set of issues that are connected to the representational nature of virtuality — i.e. the possibility of misrepresentation, biased representation, and indecent representation — before examining behavior in virtual environments. Brey argues that behavior in virtual environments can be evaluated according to how it affects users of the environments — for example, property can be stolen in virtual environments and too much time and emotion spent in virtual environments can be detrimental to living well. Finally, Brey discusses several issues associated with video games, including their impact on children and gender representation and bias in them. A theme that runs throughout the chapter is the way in which the designs of virtual environments are value laden — they express values, structure choices, and encourage evaluative attitudes.

Journal ArticleDOI
TL;DR: In this article, a calibration procedure for depth and color cameras was proposed to improve the accuracy of the 3D measurements of a RGB-D camera and to co-register different calibrated devices.

Journal ArticleDOI
TL;DR: Results of a research study involving 61 children from a local summer camp that shows a large increase in recorded and observed activity, alongside observational evidence that the virtual pet was responsible for that change, demonstrate the practical potential to impact the exercise behaviors of children with mixed reality.
Abstract: Novel approaches are needed to reduce the high rates of childhood obesity in the developed world. While multifactorial in cause, a major factor is an increasingly sedentary lifestyle of children. Our research shows that a mixed reality system that is of interest to children can be a powerful motivator of healthy activity. We designed and constructed a mixed reality system that allowed children to exercise, play with, and train a virtual pet using their own physical activity as input. The health, happiness, and intelligence of each virtual pet grew as its associated child owner exercised more, reached goals, and interacted with their pet. We report results of a research study involving 61 children from a local summer camp that shows a large increase in recorded and observed activity, alongside observational evidence that the virtual pet was responsible for that change. These results, and the ease at which the system integrated into the camp environment, demonstrate the practical potential to impact the exercise behaviors of children with mixed reality.

Patent
24 Feb 2014
TL;DR: In this article, a mixed reality augmentation system receives motion data from a head-mounted display device that corresponds to motion of a user in a physical environment, which is then used to provide motion amplification to a virtual environment.
Abstract: Embodiments that relate to providing motion amplification to a virtual environment are disclosed. For example, in one disclosed embodiment a mixed reality augmentation program receives from a head-mounted display device motion data that corresponds to motion of a user in a physical environment. The program presents via the display device the virtual environment in motion in a principal direction, with the principal direction motion being amplified by a first multiplier as compared to the motion of the user in a corresponding principal direction. The program also presents the virtual environment in motion in a secondary direction, where the secondary direction motion is amplified by a second multiplier as compared to the motion of the user in a corresponding secondary direction, and the second multiplier is less than the first multiplier.
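The core of the claim is plain arithmetic: displacement along the principal direction is scaled by a larger multiplier than displacement along the secondary direction. A minimal sketch with illustrative multiplier values (the patent does not give concrete numbers):

```python
# Two-multiplier motion amplification; the values here are assumptions.
PRINCIPAL_MULTIPLIER = 4.0   # e.g., forward movement amplified 4x
SECONDARY_MULTIPLIER = 1.5   # lateral movement amplified less

def amplify(user_motion):
    """user_motion: (principal, secondary) displacement in the physical room."""
    principal, secondary = user_motion
    return (principal * PRINCIPAL_MULTIPLIER,
            secondary * SECONDARY_MULTIPLIER)

# Walking 1 m forward and 1 m sideways moves the virtual viewpoint
# 4 m forward but only 1.5 m sideways.
print(amplify((1.0, 1.0)))   # -> (4.0, 1.5)
```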

Patent
11 Jun 2014
TL;DR: In this article, a system and method for displaying virtual objects in a mixed reality environment including shared virtual objects and private virtual objects is described, where multiple users can collaborate together in interacting with the shared virtual object.
Abstract: A system and method are disclosed for displaying virtual objects in a mixed reality environment including shared virtual objects and private virtual objects. Multiple users can collaborate together in interacting with the shared virtual objects. A private virtual object may be visible to a single user. In examples, private virtual objects of respective users may facilitate the users' collaborative interaction with one or more shared virtual objects.
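The visibility rule reduces to: shared objects render for every collaborator, private objects only for their owner. A minimal sketch under an assumed object model:

```python
# Hypothetical shared/private visibility check for a mixed reality scene.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualObject:
    name: str
    owner: Optional[str] = None   # None means the object is shared

def visible_objects(scene, user):
    return [o for o in scene if o.owner is None or o.owner == user]

scene = [VirtualObject("blueprint"),                # shared workspace object
         VirtualObject("notes", owner="alice")]     # alice's private object
print([o.name for o in visible_objects(scene, "alice")])  # both objects
print([o.name for o in visible_objects(scene, "bob")])    # only the blueprint
```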

Proceedings ArticleDOI
24 Apr 2014
TL;DR: This paper presents Omegalib, a software framework that facilitates application development on HREs, and shows how a Hybrid Reality Environment proved effective in supporting work for a co-located research group in the environmental sciences.
Abstract: In the domain of large-scale visualization instruments, hybrid reality environments (HREs) are a recent innovation that combines the best-in-class capabilities of immersive environments, with the best-in-class capabilities of ultra-high-resolution display walls. HREs create a seamless 2D/3D environment that supports both information-rich analysis as well as virtual reality simulation exploration at a resolution matching human visual acuity. Co-located research groups in HREs tend to work on a variety of tasks during a research session (sometimes in parallel), and these tasks require 2D data views, 3D views, linking between them and the ability to bring in (or hide) data quickly as needed. In this paper we present Omegalib, a software framework that facilitates application development on HREs. Omegalib is designed to support dynamic reconfigurability of the display environment, so that areas of the display can be interactively allocated to 2D or 3D workspaces as needed. Compared to existing frameworks and toolkits, Omegalib makes it possible to have multiple immersive applications running on a cluster-controlled display system, have different input sources dynamically routed to applications, and have rendering results optionally redirected to a distributed compositing manager. Omegalib supports pluggable front-ends, to simplify the integration of third-party libraries like OpenGL, OpenSceneGraph, and the Visualization Toolkit (VTK). We present examples of applications developed with Omegalib for the 74-megapixel, 72-tile CAVE2™ system, and show how a Hybrid Reality Environment proved effective in supporting work for a co-located research group in the environmental sciences.

Patent
15 Jan 2014
TL;DR: In this article, a mixed reality filtering program receives a plurality of geo-located data items and selectively filters the data items based on one or more modes, such as social mode, popular mode, recent mode, work mode, play mode, and user interest mode.
Abstract: Embodiments that relate to selectively filtering geo-located data items in a mixed reality environment are disclosed. For example, in one disclosed embodiment a mixed reality filtering program receives a plurality of geo-located data items and selectively filters the data items based on one or more modes. The modes comprise one or more of a social mode, a popular mode, a recent mode, a work mode, a play mode, and a user interest mode. Such filtering yields a filtered collection of the geo-located data items. The filtered collection of data items is then provided to a mixed reality display program for display by a display device.
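Since the claim enumerates the modes (social, popular, recent, work, play, user interest), the filtering step can be sketched directly; the tag-based data model below is an assumption:

```python
# Hedged sketch of mode-based filtering of geo-located data items.
def filter_geo_items(items, active_modes):
    """Return the filtered collection handed to the mixed reality display."""
    return [item for item in items
            if item["modes"] & active_modes]   # any overlapping mode keeps it

items = [
    {"label": "friend's review", "modes": {"social", "recent"}},
    {"label": "trending cafe",   "modes": {"popular"}},
    {"label": "client office",   "modes": {"work"}},
]
print([i["label"] for i in filter_geo_items(items, {"social", "work"})])
# -> ["friend's review", "client office"]
```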

Patent
22 May 2014
TL;DR: In this paper, a system and method are disclosed for displaying virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects.
Abstract: A system and method are disclosed for displaying virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects. When a user is moving through the mixed reality environment, the virtual objects may remain world-locked, so that the user can move around and explore the virtual objects from different perspectives. When the user is motionless in the mixed reality environment, the virtual objects may rotate to face the user so that the user can easily view and interact with the virtual objects.
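The described behavior is a state switch: world-locked orientation while the user moves, rotate-toward-user once they stop. A simplified 2D sketch (headings in radians; names are hypothetical):

```python
# Minimal world-locked vs. face-the-user orientation rule in 2D.
import math

def object_heading(obj_pos, user_pos, user_is_moving, current_heading):
    if user_is_moving:
        return current_heading        # world-locked: keep the orientation
    dx = user_pos[0] - obj_pos[0]
    dy = user_pos[1] - obj_pos[1]
    return math.atan2(dy, dx)         # rotate to face the stationary user

# A panel at the origin keeps its heading while the user walks...
print(object_heading((0, 0), (3, 4), user_is_moving=True, current_heading=0.0))
# ...then turns toward the user once they stop (atan2(4, 3) ~ 0.927 rad).
print(object_heading((0, 0), (3, 4), user_is_moving=False, current_heading=0.0))
```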

Proceedings ArticleDOI
11 Nov 2014
TL;DR: The hypothesis is that by presenting to the users an egocentric view of the virtual environment "populated" by their own bodies, a very strong feeling of presence is developed as well.
Abstract: This paper presents a novel fully immersive Mixed Reality system that we have recently developed, in which the user freely walks in a life-size virtual scenario wearing an HMD and can see and use her/his own body when interacting with objects. This form of natural interaction is made possible in our system because the user's hands are captured in real time by means of an RGBD camera on the HMD. This allows the system to have, in real time, a texturized geometric mesh of the hands and body (as seen from her/his own perspective) that can be rendered like any other polygonal model in the scene. Our hypothesis is that by presenting to the users an egocentric view of the virtual environment "populated" by their own bodies, a very strong feeling of presence is developed as well.

Proceedings ArticleDOI
06 Nov 2014
TL;DR: A differential illumination method is introduced that allows for a consistent illumination of the inserted virtual objects on mobile devices, avoiding a delay, and allows for an interactive illumination of virtual objects with a consistent appearance under both temporally and spatially varying real illumination conditions.
Abstract: Mobile devices are becoming more and more important today, especially for augmented reality (AR) applications in which the camera of the mobile device acts like a window into the mixed reality world. Up to now, no photorealistic augmentation has been possible, since the computational power of mobile devices is still too weak. Even a streaming solution from a stationary PC would cause a latency that affects user interactions considerably. Therefore, we introduce a differential illumination method that allows for a consistent illumination of the inserted virtual objects on mobile devices, avoiding such a delay. The necessary computation effort is shared between a stationary PC and the mobile devices to make use of the capacities available on both sides. The method is designed such that only a minimum amount of data has to be transferred asynchronously between the stationary PC and one or multiple mobile devices. This allows for an interactive illumination of virtual objects with a consistent appearance under both temporally and spatially varying real illumination conditions. To describe the complex near-field illumination in an indoor scenario, multiple HDR video cameras are used to capture the illumination from multiple directions. In this way, sources of illumination can be considered that are not directly visible to the mobile device because of occlusions and the limited field of view of built-in cameras.
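A "differential" method of this kind typically composites the camera image with the per-pixel difference between a rendering of the scene with and without the virtual object, so that shadows and reflections carry over. The sketch below shows only that generic compositing step on plain gray values; the paper's PC/mobile work split and HDR light capture are not modeled, and the function name is hypothetical:

```python
# Differential compositing step (a common basis for such methods).
def differential_composite(camera, with_virtual, without_virtual):
    """Per-pixel: camera + (synthetic-with-object - synthetic-without-object)."""
    return [max(0.0, min(1.0, c + (w - wo)))          # clamp to a valid range
            for c, w, wo in zip(camera, with_virtual, without_virtual)]

camera          = [0.50, 0.50, 0.50]   # real photo pixels
with_virtual    = [0.20, 0.80, 0.50]   # rendering incl. object and its shadow
without_virtual = [0.50, 0.50, 0.50]   # rendering of the empty scene
print(differential_composite(camera, with_virtual, without_virtual))
# -> [0.2, 0.8, 0.5]: shadow darkens one pixel, the object brightens another
```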

Journal ArticleDOI
TL;DR: The article describes how features of large-scale, projector-based augmented reality affect the design of spatial user interfaces for these environments and explores promising research directions and application domains.
Abstract: Spatial augmented reality applies the concepts of spatial user interfaces to large-scale, projector-based augmented reality. Such virtual environments have interesting characteristics. They deal with large physical objects, the projection surfaces are nonplanar, the physical objects provide natural passive haptic feedback, and the systems naturally support collaboration between users. The article describes how these features affect the design of spatial user interfaces for these environments and explores promising research directions and application domains.

Patent
10 Jan 2014
TL;DR: In this paper, a mixed reality accommodation system and related methods are provided for displaying holographic objects in mixed reality environments, where a mixed-reality safety program is configured to receive a holographic object and associated content provider ID from a source.
Abstract: A mixed reality accommodation system and related methods are provided. In one example, a head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A mixed reality safety program is configured to receive a holographic object and associated content provider ID from a source. The program assigns a trust level to the object based on the content provider ID. If the trust level is less than a threshold, the object is displayed according to a first set of safety rules that provide a protective level of display restrictions. If the trust level is greater than or equal to the threshold, the object is displayed according to a second set of safety rules that provide a permissive level of display restrictions that are less than the protective level of display restrictions.
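The trust-threshold logic is a straightforward branch; a minimal sketch, with the trust table and threshold as illustrative assumptions:

```python
# Hypothetical trust-level gate for holographic content display rules.
TRUST_BY_PROVIDER = {"verified-store": 0.9, "unknown-ad-network": 0.2}
TRUST_THRESHOLD = 0.5

def safety_rules(provider_id):
    trust = TRUST_BY_PROVIDER.get(provider_id, 0.0)   # unknown IDs: no trust
    if trust >= TRUST_THRESHOLD:
        return "permissive"   # fewer display restrictions
    return "protective"       # e.g., placement limits, no full-view occlusion

print(safety_rules("verified-store"))      # -> permissive
print(safety_rules("unknown-ad-network"))  # -> protective
```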