
Showing papers on "Mixed reality published in 2010"


Patent
Avi Bar-Zeev1, J. Andrew Goossen1, John Tardif1, Mark S. Grossman1, Harjit Singh1 
27 Oct 2010
TL;DR: A system that includes a head mounted display device and a connected processing unit is used to fuse virtual content into real content, and state data is extrapolated to predict the user's field of view at the future time when the mixed reality is to be displayed.
Abstract: A system that includes a head mounted display device and a processing unit connected to the head mounted display device is used to fuse virtual content into real content. In one embodiment, the processing unit is in communication with a hub computing device. The processing unit and hub may collaboratively determine a map of the mixed reality environment. Further, state data may be extrapolated to predict a field of view for a user in the future at a time when the mixed reality is to be displayed to the user. This extrapolation can remove latency from the system.
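
The prediction step described above can be illustrated with a short sketch. This assumes a simple constant-velocity model and hypothetical names (`predict_head_pose`, `render_latency_s`); the patent does not specify a particular prediction model.

```python
import numpy as np

def predict_head_pose(position, velocity, yaw, yaw_rate, render_latency_s):
    """Extrapolate a tracked head pose forward to the moment the frame
    will actually be shown, hiding render-to-display latency."""
    predicted_position = position + velocity * render_latency_s
    predicted_yaw = yaw + yaw_rate * render_latency_s
    return predicted_position, predicted_yaw

# Example: 20 ms of pipeline latency between sensor sample and display.
pos, yaw = predict_head_pose(np.array([0.0, 1.7, 0.0]),   # metres
                             np.array([0.1, 0.0, 0.0]),   # metres/second
                             np.deg2rad(15.0),            # current yaw
                             np.deg2rad(30.0),            # yaw rate, rad/s
                             0.020)
print(pos, np.rad2deg(yaw))
```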

238 citations


Journal ArticleDOI
TL;DR: RoboStage significantly improved the sense of authenticity of the task and also positively affected learning motivation, and learning performance was conditionally affected by RoboStage.
Abstract: Authentic learning, a popular modern teaching method, aims to provide students with everyday-life challenges that develop knowledge and skills through problem solving in different situations. Many emerging information technologies have been used to present authentic environments for pedagogical purposes. However, few studies have discussed the sense of authenticity, the characters in the scene, and how students interact with the characters involved in the task. We designed a system, RoboStage, that builds authentic scenes using mixed-reality technology and robots, to investigate differences in learning with physical versus virtual characters and to study learning behaviors and performance through the system. The robots were designed to play real, interactive characters in the task. The experiment was conducted with 36 junior high school students. The results indicated that RoboStage significantly improved the sense of authenticity of the task and also positively affected learning motivation. Learning performance was conditionally affected by RoboStage.

141 citations


Journal ArticleDOI
01 Dec 2010
TL;DR: The authors present and explain the use of AR technology in what they name the Augmented Reality Student Card (ARSC) for serving the education field, using single static markers combined on one card to assign different objects, leaving the choice to the computer application and so minimizing the tracking process.
Abstract: Augmented Reality (AR) is the technology of adding virtual objects to real scenes, enabling the addition of information that is missing in real life. As the lack of resources is a problem that AR can address, this paper presents and explains the use of AR technology in what we name the Augmented Reality Student Card (ARSC), in service of education. ARSC combines single static markers on one card for assigning different objects, leaving the choice to the computer application and thereby minimizing the tracking process. ARSC is designed to be a useful, low-cost solution for the education field. It represents any lesson in a 3D format that helps students visualize facts, interact with theories, and deal with information in an effective and interactive new way. ARSC can be used in offline, online, and game applications with seven markers, four of which serve as a joystick-style game controller. One novelty of this paper is a full experimental evaluation of the ARTag marker set, sorting the markers according to their efficiency; the results are used here to choose the most efficient markers for ARSC and can inform further research. The experimental work also shows the constraints on marker creation for an AR application. Because the system must support both online and offline applications, several toolkits and libraries were merged, as presented in this paper. ARSC was tried by a number of students of both genders aged 10–17 years and found wide acceptance among them.
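
As a toy illustration of the card's control scheme, four marker IDs can be mapped to joystick directions; whichever control marker the application detects drives the game. The IDs and function name below are hypothetical, not taken from the paper.

```python
# Hypothetical assignment of four ARSC control markers to joystick actions.
JOYSTICK = {10: "up", 11: "down", 12: "left", 13: "right"}

def control_from_ids(visible_marker_ids):
    """Return the joystick action for the first visible control marker."""
    for marker_id in visible_marker_ids:
        if marker_id in JOYSTICK:
            return JOYSTICK[marker_id]
    return None  # no control marker in view this frame

print(control_from_ids([3, 12]))  # -> "left"
```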

140 citations


Proceedings ArticleDOI
25 Mar 2010
TL;DR: In this article, the authors present early work in experimenting with desktop augmented reality (AR) for rehabilitation and discuss the development of rehabilitation prototypes using available AR libraries and express their thoughts on the potential of AR technology.
Abstract: Stroke is the number one cause of severe physical disability in the UK. Recent studies have shown that technologies such as virtual reality and imaging can provide an engaging and motivating tool for physical rehabilitation. In this paper we summarize previous work in our group using virtual reality technology and webcam-based games. We then present early work we are conducting in experimenting with desktop augmented reality (AR) for rehabilitation. AR allows the user to use real objects to interact with computer-generated environments. Markers attached to the real objects enable the system (via a webcam) to track the position and orientation of each object as it is moved. The system can then augment the captured image of the real environment with computer-generated graphics to present a variety of game or task-driven scenarios to the user. We discuss the development of rehabilitation prototypes using available AR libraries and express our thoughts on the potential of AR technology.
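
As a rough sketch of the marker-tracking pipeline described above: detect a square fiducial in a webcam frame, then recover its position and orientation. The paper's prototypes used available AR libraries; here OpenCV's ArUco module (contrib build, OpenCV 4.7+) stands in, and the intrinsics, marker size, and input image are placeholders.

```python
import cv2
import numpy as np

# Placeholder intrinsics and marker size; a real system would calibrate these.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
side = 0.05  # marker edge length in metres
object_pts = np.array([[-side / 2,  side / 2, 0.0], [ side / 2,  side / 2, 0.0],
                       [ side / 2, -side / 2, 0.0], [-side / 2, -side / 2, 0.0]])

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("webcam_frame.png")  # stand-in for one live webcam frame
corners, ids, _ = detector.detectMarkers(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
if ids is not None:
    ok, rvec, tvec = cv2.solvePnP(object_pts, corners[0][0], K, dist)
    # rvec/tvec give the tracked object's orientation and position: the
    # anchor at which a game or task scenario would draw virtual graphics.
    print("marker", ids[0][0], "pose:", tvec.ravel())
```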

138 citations


Proceedings ArticleDOI
03 Dec 2010
TL;DR: Several techniques to improve efficiency are presented and it is shown that the AIM protocol can still outperform traffic signals and stop signs even if the cars are not as precisely controllable as has been assumed in previous studies.
Abstract: Fully autonomous vehicles are technologically feasible with the current generation of hardware, as demonstrated by recent robot car competitions. Dresner and Stone proposed a new intersection control protocol called Autonomous Intersection Management (AIM) and showed that with autonomous vehicles it is possible to make intersection control much more efficient than the traditional control mechanisms such as traffic signals and stop signs. The protocol, however, has only been tested in simulation and has not been evaluated with real autonomous vehicles. To realistically test the protocol, we implemented a mixed reality platform on which an autonomous vehicle can interact with multiple virtual vehicles in a simulation at a real intersection in real time. From this platform we validated realistic parameters for our autonomous vehicle to safely traverse an intersection in AIM. We present several techniques to improve efficiency and show that the AIM protocol can still outperform traffic signals and stop signs even if the cars are not as precisely controllable as has been assumed in previous studies.
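
At the heart of AIM is a reservation handshake between approaching vehicles and an intersection manager. The toy sketch below shows only the grant/reject bookkeeping over discretized space-time tiles; the real protocol also models vehicle dynamics, communication timing, and the control imprecision this paper studies.

```python
# Toy AIM-style reservation logic (illustrative names, not the authors' code).
class IntersectionManager:
    def __init__(self):
        self.reserved = {}  # (tick, tile) -> vehicle_id

    def request(self, vehicle_id, tiles_by_tick):
        """Grant a trajectory iff none of its space-time tiles is taken."""
        cells = [(t, tile) for t, tiles in tiles_by_tick.items() for tile in tiles]
        if any(c in self.reserved for c in cells):
            return False  # rejected: the vehicle must slow down and retry
        for c in cells:
            self.reserved[c] = vehicle_id
        return True

im = IntersectionManager()
print(im.request(1, {0: {4, 5}, 1: {5, 6}}))  # True: the slot is free
print(im.request(2, {1: {6, 7}}))             # False: tile 6 clashes at tick 1
```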

132 citations


Proceedings ArticleDOI
22 Nov 2010
TL;DR: The results show that the presented method highly improves the illusion in mixed reality applications and significantly diminishes the artificial look of virtual objects superimposed onto real scenes.
Abstract: In this paper we present a novel plausible realistic rendering method for mixed reality systems, which is useful for many real life application scenarios, like architecture, product visualization or edutainment. To allow virtual objects to seamlessly blend into the real environment, the real lighting conditions and the mutual illumination effects between real and virtual objects must be considered, while maintaining interactive frame rates (20–30fps). The most important such effects are indirect illumination and shadows cast between real and virtual objects. Our approach combines Instant Radiosity and Differential Rendering. In contrast to some previous solutions, we only need to render the scene once in order to find the mutual effects of virtual and real scenes. The dynamic real illumination is derived from the image stream of a fish-eye lens camera. We describe a new method to assign virtual point lights to multiple primary light sources, which can be real or virtual. We use imperfect shadow maps for calculating illumination from virtual point lights and have significantly improved their accuracy by taking the surface normal of a shadow caster into account. Temporal coherence is exploited to reduce flickering artifacts. Our results show that the presented method highly improves the illusion in mixed reality applications and significantly diminishes the artificial look of virtual objects superimposed onto real scenes.
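
For reference, the differential-rendering composite that the method builds on can be written in a few lines. This sketch uses the classic two-render formulation; the paper's contribution is obtaining the same mutual-illumination difference in a single rendering pass. All names are illustrative.

```python
import numpy as np

def differential_composite(camera, full, real_only, virtual_mask):
    """Classic differential rendering: the photographed background keeps its
    appearance but picks up the *change* in illumination (shadows, colour
    bleeding) the virtual objects cause; virtual pixels come from the render.

    camera:       captured photo of the real scene (uint8)
    full:         synthetic render of real + virtual geometry (uint8)
    real_only:    synthetic render of the real geometry alone (uint8)
    virtual_mask: nonzero where a virtual object is directly visible
    """
    delta = full.astype(np.float32) - real_only.astype(np.float32)
    out = camera.astype(np.float32) + delta
    out = np.where(virtual_mask[..., None] > 0, full.astype(np.float32), out)
    return np.clip(out, 0, 255).astype(np.uint8)
```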

90 citations


Book ChapterDOI
01 Jan 2010
TL;DR: This work looks at how the combination of multi-sensory visualization and interactivity makes VR ideally suited for effective learning and tries to explain this effectiveness in terms of the advantages afforded by active learning from experiences.
Abstract: Virtual Reality is produced by a combination of technologies that are used to visualize and provide interaction with a virtual environment. These environments often depict three-dimensional space which may be realistic or imaginary, macroscopic or microscopic, and based on realistic physical laws of dynamics or on imaginary dynamics. The multitude of scenarios that VR may depict makes it broadly applicable to many areas in education. A key feature of VR is that it allows multi-sensory interaction with the space visualized. Here we look at how this combination of multi-sensory visualization and interactivity makes VR ideally suited for effective learning and try to explain this effectiveness in terms of the advantages afforded by active learning from experiences. We also consider some of the applications of VR in education and the drawbacks associated with this technology.

81 citations


Proceedings ArticleDOI
02 Apr 2010
TL;DR: The idea behind FlexTorque is to reproduce human muscle structure, which allows us to perform dexterous manipulation and interact safely with the environment in daily life, and suggests new possibilities for highly realistic, very natural physical interaction in virtual environments.
Abstract: We developed novel haptic interfaces, FlexTorque and FlexTensor, that enable realistic physical interaction with real and virtual environments. The idea behind FlexTorque is to reproduce human muscle structure, which allows us to perform dexterous manipulation and interact safely with the environment in daily life. FlexTorque suggests new possibilities for highly realistic, very natural physical interaction in virtual environments. There are no restrictions on arm movement, and it is not necessary to hold a physical object during interaction with objects in virtual reality. Because the system can generate strong forces even though it is lightweight, easily wearable, and intuitive, users experience a new level of realism as they interact with virtual environments.

66 citations


Journal ArticleDOI
TL;DR: This paper explores the possibilities of using a Virtual Reality Environment (VRE) for cooperative idea generation and then attempts to assess the relationship between students' cooperation and the design process, learning experiences, and the pedagogy employed by the teacher.
Abstract: This paper explores the possibilities of using a Virtual Reality Environment (VRE) for cooperative idea generation and then attempts to assess the relationship between a student's cooperation and the design process, learning experiences and the pedagogy employed by the teacher. The researchers based their work around the following questions:
1. How could collaborative idea generation be incorporated within the VRE?
2. How does this relate to teaching and learning within the lesson?
3. How do communications during the lesson support students' work?

63 citations


01 Jan 2010
TL;DR: The proposed AR4BC (Augmented Reality for Building and Construction) software system can also be used for pre-construction architectural AR visualization, where in-house developed methods are employed to achieve photorealistic rendering quality.
Abstract: This article presents the AR4BC (Augmented Reality for Building and Construction) software system, which consists of the following modules. The 4DStudio module is used to read in building information models (BIMs) and link them to a project timetable; it is also used to view photos and other information attached to the model from mobile devices. The MapStudio module is used to position the building model on a map using geographic coordinates from arbitrary map formats, e.g. Google Earth or more accurate ones. The OnSitePlayer module is the mobile application used to visualize the model data on top of the real-world view using augmented reality; it also provides the ability to insert annotations on the virtual model. OnSitePlayer may be operated either stand-alone or, in the case of large models, as a client-server solution. The system is compatible with laptop PCs and handheld PCs, and even scales down to mobile phones. Data glasses provide another display option, with a novel user interface provided by a Wiimote controller. A technical discussion of methods for feature-based tracking and tracking initialization is also presented. The proposed system can also be used for pre-construction architectural AR visualization, where in-house developed methods are employed to achieve photorealistic rendering quality.

59 citations


Proceedings ArticleDOI
20 Mar 2010
TL;DR: This paper presents a Mixed Reality (MR) teleconferencing application based on Second Life (SL) and the OpenSim virtual world, implemented using the open-source Second Life viewer and the ARToolKit and OpenCV libraries.
Abstract: In this paper we present a Mixed Reality (MR) teleconferencing application based on Second Life (SL) and the OpenSim virtual world. Augmented Reality (AR) techniques are used for displaying virtual avatars of remote meeting participants in real physical spaces, while Augmented Virtuality (AV), in the form of video-based gesture detection, enables capturing human expressions to control avatars and to manipulate virtual objects in virtual worlds. Using Second Life to create a shared augmented space representing different physical locations allows us to incorporate the application into existing infrastructure. The application is implemented using the open-source Second Life viewer and the ARToolKit and OpenCV libraries.
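
The paper does not detail its gesture detector, but a video-based pipeline of this kind typically starts from a motion cue such as background subtraction. A minimal OpenCV sketch follows; the threshold and the idea of gating on overall motion energy are ad hoc, not the authors' method.

```python
import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()  # running background model
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                 # foreground = moving pixels
    motion = cv2.countNonZero(mask) / mask.size    # fraction of moving pixels
    if motion > 0.05:  # ad-hoc threshold: enough movement to flag a gesture
        print("gesture activity:", round(motion, 3))
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```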

Journal ArticleDOI
TL;DR: A tangible interaction method is proposed that provides an alternative instruction guideline based on analysis of previous interactions with virtual objects, enabling the user to minimize manipulation errors during the interaction phase and offering a learning process that guides the user in the right direction.
Abstract: In mixed reality (MR) and augmented reality (AR) environments, several previous works have dealt with user interfaces for manipulating and interacting with virtual objects, aiming to improve the feeling of immersion and the naturalness of interaction. However, much progress is still needed in supporting human-behavior-like interactions that provide control efficiency and a natural feel in MR/AR environments. This paper proposes a tangible interaction method that combines the advantages of soft interactions, such as hand gestures in MR, with hard interactions, such as vibro-tactile feedback. One of the main goals is to provide interaction interfaces closer to manipulation tasks in the real world by utilizing hand-gesture-based tangible interactions. The method also provides multimodal interfaces by adopting vibro-tactile feedback and tangible interaction for virtual object manipulation, giving users a more immersive and natural feeling when manipulating and interacting with virtual objects. Furthermore, it provides an alternative instruction guideline based on analysis of previous interactions while manipulating virtual objects, which makes it possible for the user to minimize manipulation errors during the interaction phase, along with a learning process that guides the user in the right direction. We show the effectiveness and advantages of the proposed approach through several implementation results.

Proceedings ArticleDOI
22 Nov 2010
TL;DR: An evaluation of depth perception at far-field distances in a handheld outdoor mixed reality environment, using two photorealistic visualizations of occluded objects (X-ray and Melt) in the presence and absence of a depth cue.
Abstract: Enabling users to accurately perceive the correct depth of occluded objects is one of the major challenges in user interfaces for Mixed Reality (MR). Therefore, several visualization techniques and user evaluations for this area have been published. Our research is focused on photorealistic X-ray type visualizations in outdoor environments. In this paper, we present an evaluation of depth perception in far-field distances through two photorealistic visualizations of occluded objects (X-ray and Melt) in the presence and absence of a depth cue. Our results show that the distance to occluded objects was underestimated in all tested conditions. This finding is curious, as it contradicts previously published results of other researchers. The Melt visualization coupled with a depth cue was the most accurate among all the experimental conditions.

Book ChapterDOI
01 Jan 2010
TL;DR: This chapter provides a review of the state of the art within presence research related to auditory environments, and various sound parameters such as externalization and spaciousness and consistency within and across modalities are discussed in relation to their presence-inducing effects.
Abstract: Presence, the “perceptual illusion of non-mediation,” is often a central goal in mediated and mixed environments, and sound is believed to be crucial for inducing high-presence experiences. This chapter provides a review of the state of the art within presence research related to auditory environments. Various sound parameters such as externalization, spaciousness, and consistency within and across modalities are discussed in relation to their presence-inducing effects. Moreover, these parameters are related to the use of audio in mixed realities, and example applications are discussed. Finally, we give an account of the technological possibilities and challenges within the area of presence-inducing sound rendering and presentation for mixed realities and outline future research aims.

Journal ArticleDOI
TL;DR: Two experiments quantify to what extent a gradual transition to a virtual world via a transitional environment improves a person's level of presence and ability to estimate distances in the VE; the subjects' self-reported sense of presence scored significantly higher, and their distance-estimation skills in the VE improved significantly, when they entered the VE via a transitional environment.

Patent
25 Aug 2010
TL;DR: In this patent, marketing materials are integrated with virtual reality environments while introducing imperfections and/or other cues of realism, such as misaligned marketing materials, product label blemishes, and packages placed slightly askew, to create a virtual reality environment representation.
Abstract: Effective virtual reality environments, including in-store virtual reality environments such as supermarket aisles, store shelves, cooler displays, etc., are generated using frameworks and customer layout information. Marketing materials are integrated with the virtual reality environment while introducing imperfections and/or other cues of realism to create a virtual reality environment representation. Imperfections may include misaligned marketing materials, product label blemishes, packages placed slightly askew, etc. Sensory experiences output to the user via the virtual reality environment representation elicit interactivity, and user movements, motions, and responses are used to evaluate the effectiveness of the marketing materials and/or the virtual reality environment representations.

Proceedings ArticleDOI
22 Nov 2010
TL;DR: The Westwood Experience is a location-based narrative using Mixed Reality effects to connect participants to unique and evocative real locations, bridging the gap between the real and story worlds and general guidelines for this type of experience are described.
Abstract: The Westwood Experience is a location-based narrative using Mixed Reality effects to connect participants to unique and evocative real locations, bridging the gap between the real and story worlds. This paper describes the experience and a detailed evaluation of it. The experience itself centers around a narrative told by the “mayor” of Westwood. He tells a love story from his youth when he first came to Westwood, and intermixes the story with historical information. Most of this story is told on a mobile computer, using Mixed Reality and video for illustration. We evaluate the experience both quantitatively and qualitatively to find lessons learned about the experience itself and general guidelines for this type of experience. The analysis and guidelines from our evaluation are grouped into three categories: narration in mobile environments, social dynamics, and Mixed Reality effects.

Journal ArticleDOI
TL;DR: This article presents two studies from the earth science domain that address questions regarding the feasibility and efficacy of SMALLab in a classroom context, together with data demonstrating that students learn more during a recent SMALLab intervention than through regular classroom instruction.

Journal ArticleDOI
TL;DR: The design space of augmented paper maps, in which maps are augmented with additional functionality through a mobile device, is explored to achieve a meaningful integration between device and map that combines their respective strengths.
Abstract: Paper maps and mobile electronic devices have complementary strengths and shortcomings in outdoor use. In many scenarios, like small craft sailing or cross-country trekking, a complete replacement of maps is neither useful nor desirable. Paper maps are fail-safe, relatively cheap, offer superior resolution and provide large scale overview. In uses like open-water sailing it is therefore mandatory to carry adequate maps/charts. GPS based mobile devices, on the other hand, offer useful features like automatic positioning and plotting, real-time information update and dynamic adaptation to user requirements. While paper maps are now commonly used in combination with mobile GPS devices, there is no meaningful integration between the two, and the combined use leads to a number of interaction problems and potential safety issues. In this paper we explore the design space of augmented paper maps in which maps are augmented with additional functionality through a mobile device to achieve a meaningful integration between device and map that combines their respective strengths.
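
One concrete ingredient of such an integration is georeferencing: converting a GPS fix into paper-map coordinates. A minimal sketch, assuming the map is locally rectilinear with known corner coordinates (all names and values illustrative):

```python
def gps_to_map_px(lat, lon, corners, size_px):
    """Map a GPS fix onto pixel coordinates of a scanned paper map.

    corners: ((lat_top, lon_left), (lat_bottom, lon_right))
    size_px: (width, height) of the scanned map image
    """
    (lat_top, lon_left), (lat_bottom, lon_right) = corners
    w, h = size_px
    x = (lon - lon_left) / (lon_right - lon_left) * w
    y = (lat_top - lat) / (lat_top - lat_bottom) * h
    return x, y

# Example: a 2000x1500 px chart of a small sailing area near Stockholm.
print(gps_to_map_px(59.32, 18.07, ((59.40, 18.00), (59.30, 18.20)), (2000, 1500)))
```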

01 Dec 2010
TL;DR: Issues and potentials related to blending virtual worlds and face-to-face environments for the purposes of learning and teaching are described and a framework for evaluation based on an Activity Theory perspective is proposed.
Abstract: This paper describes issues and potentials related to blending virtual worlds and face-to-face environments for the purposes of learning and teaching. By streaming a live video feed of a face-to-face classroom into a virtual world space at the same time as projecting the virtual world space onto a screen in the face-to-face classroom, it is possible to merge participation in the two environments. In this way students in remote locations can be offered improved access to and involvement in face-to-face classes, and face-to-face students can capitalise upon the affordances of the virtual world to extend the range of possible learning experiences. A pilot of this technique revealed several potentials for learning and teaching, including enhanced remote access to face-to-face classes, increased possibilities for online interaction, and the capacity to leverage the affordances of both worlds within one learning environment depending on needs. However, there were several implementation issues, including latency and resolution of the video stream into the virtual world, the quality of the audio feed, and distorted orientation between face-to-face and virtual world participants. A framework for evaluation based on an Activity Theory perspective is proposed. An invitation to participate in an Australian Learning and Teaching Council grant application is also extended.

Patent
30 Sep 2010
TL;DR: An apparatus and method for mixed reality content operation based on indoor and outdoor context awareness: a mixed reality visualization processing unit superposes a virtual object and/or text on an actual image acquired through the camera to generate a mixed reality image, and a context awareness processing unit receives sensed data peripheral to the mobile device and location and posture data of the camera to perceive the peripheral context of the mobile device on the basis of the received data.
Abstract: Provided are an apparatus and method for mixed reality content operation based on indoor and outdoor context awareness. The apparatus for mixed reality content operation includes a mixed reality visualization processing unit superposing at least one of a virtual object and a text on an actual image acquired through the camera to generate a mixed reality image; a context awareness processing unit receiving at least one of sensed data peripheral to the mobile device and location and posture data of the camera, to perceive a peripheral context of the mobile device on the basis of the received data; and a mixed reality application content driving unit adding content to the mixed reality image to generate an application service image, the content being provided in a context-linked manner according to the peripheral context.
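
A rough sketch of the indoor/outdoor decision such a context awareness unit has to make appears below; the patent does not disclose a concrete rule, so all names and thresholds here are hypothetical.

```python
def perceive_context(gps_accuracy_m, visible_wifi_aps, ambient_lux):
    """Classify the device's surroundings from whatever sensed data exists."""
    if gps_accuracy_m is not None and gps_accuracy_m < 10.0:
        return "outdoor"  # a strong GPS fix usually means open sky
    if visible_wifi_aps >= 3 or ambient_lux < 200:
        return "indoor"   # dense Wi-Fi or dim light suggests indoors
    return "unknown"

# The content driving unit would then pick context-linked content.
context = perceive_context(gps_accuracy_m=None, visible_wifi_aps=5, ambient_lux=150)
content = {"outdoor": "navigation overlay", "indoor": "room guide"}.get(context, "default")
print(context, "->", content)
```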

Proceedings ArticleDOI
20 Mar 2010
TL;DR: This work designed and implemented space-distorting visualizations to address off-screen or occluded points of interest in augmented or mixed reality, and the authors hope the initial results can inspire other researchers to also investigate space-distorting visualizations for mixed and augmented reality.
Abstract: Most of today's mobile internet devices contain facilities to display maps of the user's surroundings with points of interest embedded into the map. Other researchers have already explored complementary, egocentric visualizations of these points of interest using mobile mixed reality. Being able to perceive the point of interest in detail within the user's current context is desirable, however, it is challenging to display off-screen or occluded points of interest. We have designed and implemented space-distorting visualizations to address these situations. While this class of visualizations has been extensively studied in information visualization, we are not aware of any attempts to apply them to augmented or mixed reality. Based on the informal user feedback that we have gathered, we have performed several iterations on our visualizations. We hope that our initial results can inspire other researchers to also investigate space-distorting visualizations for mixed and augmented reality.
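
As one simple instance of the idea, an off-screen point of interest can be compressed back onto the screen border along its bearing from the screen centre. The mapping below is illustrative, not the visualization the authors built.

```python
import math

def to_screen_edge(x, y, w, h, margin=20):
    """Leave on-screen points alone; pull off-screen points to the border."""
    cx, cy = w / 2, h / 2
    if 0 <= x < w and 0 <= y < h:
        return x, y  # already visible: no distortion needed
    dx, dy = x - cx, y - cy
    scale = min((cx - margin) / abs(dx) if dx else math.inf,
                (cy - margin) / abs(dy) if dy else math.inf)
    return cx + dx * scale, cy + dy * scale

# A point far off to the upper right lands on the screen border.
print(to_screen_edge(1600, -200, 800, 480))
```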

Journal ArticleDOI
TL;DR: This research presents new ways for humans to relate to the natural world through augmented-reality applications, which combine computer vision, object recognition, and related technologies with real-time interaction with the physical world.
Abstract: Advancements in computer vision, object recognition, and related technologies are leading to new levels of sophistication in augmented-reality applications and presenting new ways for humans to relate to the natural world.

Proceedings ArticleDOI
19 Jul 2010
TL;DR: The authors report their experiences in designing complex mixed-reality collaboration, control, and display systems for a real-world factory, delivering real-time factory information to multiple types of users.
Abstract: Virtual, mobile, and mixed reality systems have diverse uses for data visualization and remote collaboration in industrial settings, especially factories. We report our experiences in designing complex mixed-reality collaboration, control, and display systems for a real-world factory, for delivering real-time factory information to multiple types of users. In collaboration with TCHO, a chocolate maker in San Francisco, our research group is building a virtual "mirror" world of a real-world chocolate factory and its processes. Sensor data is imported into the multi-user 3D environment from hundreds of sensors on the factory floor. The resulting "virtual factory" is designed for simulation, visualization, and collaboration, using a set of interlinked, real-time layers of information about the factory and its processes. We are also looking at appropriate industrial uses for mobile devices such as cell phones and tablet computers, and how they intersect with virtual worlds and mixed realities. For example, an experimental iPhone web app provides mobile laboratory monitoring and control. The mobile system is integrated with the database underlying the virtual factory world. These systems were deployed at the real-world factory and lab in 2009, and are now in beta development. Through this mashup of mobile, social, mixed and virtual technologies, we hope to create industrial systems for enhanced collaboration between physically remote people and places — for example, factories in China with managers in Japan or the US.
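
The sensor-to-mirror-world flow described above can be caricatured in a few lines: poll each factory sensor and push its reading onto the matching machine node in the 3D scene. `fetch_reading` and `VirtualMachineNode` are hypothetical stand-ins; the actual system reads hundreds of sensors through the factory database.

```python
import time
import random

def fetch_reading(sensor_id):
    """Stand-in for a query against the factory's sensor database."""
    return {"sensor": sensor_id, "temp_c": 45 + random.random() * 5, "ts": time.time()}

class VirtualMachineNode:
    """A machine's avatar in the mirror world; update() would drive the 3D view."""
    def __init__(self, name):
        self.name = name

    def update(self, reading):
        print(f"{self.name}: {reading['temp_c']:.1f} C at {reading['ts']:.0f}")

nodes = {"melter_1": VirtualMachineNode("melter_1")}
for sensor_id, node in nodes.items():  # one polling sweep over all sensors
    node.update(fetch_reading(sensor_id))
```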

Proceedings ArticleDOI
24 Jun 2010
TL;DR: A technique for dynamically directing a viewer's attention to a focus object by analyzing and modulating bottom-up salient features of a video feed by inspecting the original image's saliency map, and modifying the image automatically to favor the focus object.
Abstract: We present a technique for dynamically directing a viewer's attention to a focus object by analyzing and modulating bottom-up salient features of a video feed. Rather than applying a static modulation strategy, we inspect the original image's saliency map, and modify the image automatically to favor the focus object. Image fragments are adaptively darkened, lightened and manipulated in hue according to local contrast information rather than global parameters. The goal is to suggest rather than force the attention of the user towards a specific location. The technique's goal is to apply only minimal changes to an image, while achieving a desired difference of saliency between focus and context regions of the image. Our technique exhibits temporal and spatial coherence and runs at interactive frame rates using GPU shaders. We present several application examples from the field of Mixed Reality, or more precisely Mediated Reality.
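
A stripped-down version of this idea can be sketched with OpenCV's spectral-residual saliency detector (opencv-contrib). Unlike the paper, which adaptively modulates image fragments on the GPU using local contrast, the sketch below simply dims the whole context region until the focus region wins; the dimming factor and loop bound are ad hoc.

```python
import cv2
import numpy as np

def emphasize(frame, focus_mask, max_steps=5):
    """Darken context pixels until the focus region is the most salient."""
    model = cv2.saliency.StaticSaliencySpectralResidual_create()
    out = frame.astype(np.float32)
    for _ in range(max_steps):
        ok, sal = model.computeSaliency(out.astype(np.uint8))
        if not ok:
            break
        if sal[focus_mask > 0].mean() > sal[focus_mask == 0].mean():
            break                      # focus already wins: minimal change
        out[focus_mask == 0] *= 0.85   # gently dim the context only
    return np.clip(out, 0, 255).astype(np.uint8)
```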

Book
01 Apr 2010
TL;DR: Law and Order in Virtual Worlds: Exploring Avatars, Their Ownership and Rights as discussed by the authors provides an understanding of the interface between the laws of the real world and the law of the virtual worlds.
Abstract: Law and Order in Virtual Worlds: Exploring Avatars, Their Ownership and Rights provides an understanding of the interface between the laws of the real world and the laws of the virtual worlds. Written for anyone who has ventured into a virtual reality and wondered what, if any, real world consequences would follow their actions, this book raises and answers compelling legal questions about such issues as owning virtual assets, intellectual property right infringements and virtual liabilities in the real world.

Proceedings ArticleDOI
27 Jul 2010
TL;DR: This paper studies how power users and small businesses can bring their content, advertising, and general data to an Augmented Reality view with minimal effort, and presents three prototyped approaches based on the Image Space mirror world service.
Abstract: As smartphones become powerful multimedia devices with a plethora of sensors, they are the perfect enablers for Augmented Reality, allowing users to see the real world through a magic lens. Augmented Reality applications and services have typically been utilized in a limited number of domains, while adding new content is typically a privilege of developers, as programming skills are required for linking to existing clients or systems. In this paper we study how power users and small businesses can bring their content, advertising, and general data to an Augmented Reality view with minimal effort. We present three prototyped approaches based on the Image Space mirror world service.

Proceedings ArticleDOI
05 Jul 2010
TL;DR: An AR application to support the teaching of the digestive and circulatory systems in primary school is presented, along with the authors' own AR library, HUMANAR, developed to ensure the integration of AR into their game engine and to overcome drawbacks present in some public libraries.
Abstract: Augmented Reality (AR) appears to be a promising technology for improving students' motivation and interest and supporting the learning and teaching process in educational contexts. We present the collaborative development of an AR application to support the teaching of the digestive and circulatory systems. We developed this system with the support of a private Spanish school. The main objective of the application is to show primary school students the digestive and circulatory systems in the most accurate way. We also developed our own AR library, HUMANAR, to ensure the integration of AR into our game engine and to overcome some drawbacks present in some public libraries. Moreover, our system provides several advantages over traditional learning media such as books, videos, or practice with animal organs.

Journal ArticleDOI
TL;DR: This particular study addresses the sequential arousal and interdependencies of two drives, boredom and curiosity; it introduces general design guidelines for arousing boredom and explains how boredom can result in curiosity.

Book ChapterDOI
01 Jan 2010
TL;DR: The development of a tele-robotic rock breaker deployed at a mine over 1,000 km from the remote operations centre is described; a production field trial demonstrated that the system is safe, productive (sometimes faster), and integrates seamlessly with mine operations.
Abstract: This paper describes the development of a tele-robotic rock breaker deployed at a mine over 1,000 km from the remote operations centre. This distance introduces a number of technical and cognitive challenges to the design of the system, which have been addressed with the development of shared autonomy in the control system and a mixed reality user interface. A number of trials were conducted, culminating in a production field trial, which demonstrated that the system is safe, productive (sometimes faster) and integrates seamlessly with mine operations.