
Showing papers on "Mixed reality published in 2011"


Journal ArticleDOI
TL;DR: This paper also discusses the challenges augmented reality faces in each of these applications in moving from the laboratory to industry, as well as the future challenges the authors forecast.
Abstract: This paper surveys the current state of the art of technology, systems and applications in Augmented Reality. It describes work performed by many different research groups, the purpose behind each new Augmented Reality system, and the difficulties and problems encountered when building some Augmented Reality applications. It surveys the challenges facing mobile augmented reality systems and the requirements for successful mobile systems. This paper summarizes the current applications of Augmented Reality and speculates on future applications and where current research will lead Augmented Reality's development. The challenges augmented reality faces in each of these applications in moving from the laboratory to industry, as well as the future challenges we can forecast, are also discussed. Section 1 gives an introduction to what Augmented Reality is and the motivations for developing this technology. Section 2 discusses Augmented Reality technologies, covering computer vision methods, AR devices, interfaces and systems, and visualization tools. The mobile and wireless systems for Augmented Reality are discussed in Section 3. Four classes of current applications that have been explored are described in Section 4; these were chosen because they are the most prominent types of applications encountered when researching AR. The future of augmented reality and the challenges it will be facing are discussed in Section 5.

1,012 citations


Journal ArticleDOI
TL;DR: Augmented Reality (AR) is an emerging form of experience in which the Real World (RW) is enhanced by computer-generated content tied to specific locations and/or activities, as mentioned in this paper.
Abstract: Augmented Reality (AR) is an emerging form of experience in which the Real World (RW) is enhanced by computer-generated content tied to specific locations and/or activities. Over the last several years, AR applications have become portable and widely available on mobile devices. AR is becoming visible in our audio-visual media (e.g., news, entertainment, sports) and is beginning to enter other aspects of our lives (e.g., e-commerce, travel, marketing) in tangible and exciting ways. Facilitating ubiquitous learning, AR will give learners instant access to location-specific information compiled and provided by numerous sources (2009). Both the 2010 and 2011 Horizon Reports predict that AR will soon see widespread use on US college campuses. In preparation, this paper offers an overview of AR, examines recent AR developments, explores the impact of AR on society, and evaluates the implications of AR for learning and education.

646 citations


BookDOI
28 Sep 2011
TL;DR: This book is intended for a wide variety of readers including academicians, designers, developers, educators, engineers, practitioners, researchers, and graduate students.
Abstract: Augmented Reality (AR) refers to the merging of a live view of the physical, real world with context-sensitive, computer-generated images to create a mixed reality. Through this augmented vision, a user can digitally interact with and adjust information about their surrounding environment on-the-fly. Handbook of Augmented Reality provides an extensive overview of the current and future trends in Augmented Reality, and chronicles the dramatic growth in this field. The book includes contributions from world experts in the field of AR from academia, research laboratories and private industry. Case studies and examples throughout the handbook help introduce the basic concepts of AR, as well as outline the Computer Vision and Multimedia techniques most commonly used today. The book is intended for a wide variety of readers including academicians, designers, developers, educators, engineers, practitioners, researchers, and graduate students. This book can also be beneficial for business managers, entrepreneurs, and investors.

342 citations


Book ChapterDOI
01 Jan 2011
TL;DR: Augmented Reality is interactive, registered in 3D, and combines real and virtual objects; Milgram’s Reality-Virtuality Continuum spans between the real environment and the virtual environment, with Augmented Reality and Augmented Virtuality lying in between.
Abstract: We define Augmented Reality (AR) as a real-time direct or indirect view of a physical real-world environment that has been enhanced/augmented by adding virtual computer-generated information to it [1]. AR is interactive, registered in 3D, and combines real and virtual objects. Milgram’s Reality-Virtuality Continuum, defined by Paul Milgram and Fumio Kishino, spans between the real environment and the virtual environment, comprising Augmented Reality and Augmented Virtuality (AV) in between, where AR is closer to the real world and AV is closer to a pure virtual environment, as seen in Fig. 1.1 [2].
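The continuum described in this abstract can be sketched as a simple classifier. This is an illustrative reading, assuming a single scalar "virtuality" in [0, 1] (0 = real environment, 1 = virtual environment); the function name and thresholds are our own, not Milgram and Kishino's formalism.

```python
def classify_rv(virtuality: float) -> str:
    """Place an experience on Milgram's Reality-Virtuality Continuum.

    0.0 is the real environment, 1.0 the virtual environment; AR lies
    closer to the real world, AV closer to a pure virtual environment.
    """
    if not 0.0 <= virtuality <= 1.0:
        raise ValueError("virtuality must lie in [0, 1]")
    if virtuality == 0.0:
        return "Real Environment"
    if virtuality < 0.5:
        return "Augmented Reality"     # closer to the real world
    if virtuality < 1.0:
        return "Augmented Virtuality"  # closer to a pure virtual environment
    return "Virtual Environment"
```

The 0.5 cutoff is arbitrary; the continuum itself has no sharp boundary between AR and AV.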

320 citations


Patent
22 Aug 2011
TL;DR: In this article, the authors present interfaces and methods for producing input for software applications based on the absolute pose of an item manipulated or worn by a user in a 3D environment.
Abstract: The present invention relates to interfaces and methods for producing input for software applications based on the absolute pose of an item manipulated or worn by a user in a three-dimensional environment. Absolute pose in the sense of the present invention means both the position and the orientation of the item as described in a stable frame defined in that three-dimensional environment. The invention describes how to recover the absolute pose with optical hardware and methods, and how to map at least one of the recovered absolute pose parameters to the three translational and three rotational degrees of freedom available to the item to generate useful input. The applications that can most benefit from the interfaces and methods of the invention involve 3D virtual spaces including augmented reality and mixed reality environments.
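The abstract's notion of absolute pose (position plus orientation in a stable frame, with three translational and three rotational degrees of freedom mapped to input) can be illustrated roughly as follows. The data layout and parameter names are our assumptions for illustration, not the patent's.

```python
def absolute_pose(x, y, z, yaw, pitch, roll):
    """Return a 6-DOF pose: position and orientation in a stable world frame."""
    return {"position": (x, y, z), "orientation": (yaw, pitch, roll)}

def map_pose_to_input(pose, selected=("x", "yaw")):
    """Map at least one recovered pose parameter to application input.

    `selected` picks which of the six degrees of freedom feed the software.
    """
    params = dict(zip(("x", "y", "z"), pose["position"]))
    params.update(zip(("yaw", "pitch", "roll"), pose["orientation"]))
    return {name: params[name] for name in selected}
```

In practice the pose would be recovered optically, as the patent describes; here it is simply passed in.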

241 citations


Patent
30 Aug 2011
TL;DR: In this article, a system for enhancing the experience of a user wearing a see-through, near-eye mixed reality display device is described, based on an arrangement of gaze detection elements on each display optical system for each eye of the display.
Abstract: Technology is disclosed for enhancing the experience of a user wearing a see-through, near eye mixed reality display device. Based on an arrangement of gaze detection elements on each display optical system for each eye of the display device, a respective gaze vector is determined and a current user focal region is determined based on the gaze vectors. Virtual objects are displayed at their respective focal regions in a user field of view for a natural sight view. Additionally, one or more objects of interest to a user may be identified. The identification may be based on a user intent to interact with the object. For example, the intent may be determined based on a gaze duration. Augmented content may be projected over or next to an object, real or virtual. Additionally, a real or virtual object intended for interaction may be zoomed in or out.
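One standard way to turn two gaze vectors into a focal region, in the spirit of the abstract above, is to find where the gaze rays from each eye most nearly intersect. The sketch below computes the midpoint of closest approach between two rays; it is a generic geometric technique, not the patent's specific method, and it assumes the gaze rays are not parallel.

```python
def closest_point_on_rays(p1, d1, p2, d2):
    """Midpoint of closest approach between rays p1 + t*d1 and p2 + s*d2.

    p1/p2 are eye positions, d1/d2 gaze direction vectors (3-tuples).
    """
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays (not handled)
    t1 = (b * e - c * d) / denom   # parameter along ray 1
    t2 = (a * e - b * d) / denom   # parameter along ray 2
    q1 = tuple(p + t1 * v for p, v in zip(p1, d1))
    q2 = tuple(p + t2 * v for p, v in zip(p2, d2))
    return tuple((x + y) / 2 for x, y in zip(q1, q2))  # focal point estimate
```

With eyes 6 cm apart both converging on a point 1 m ahead, the estimate lands on that point.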

236 citations


Book
05 Aug 2011
TL;DR: An overarching theory is developed to guide the study and design of mixed reality performances based on the approach of interleaved trajectories through hybrid structures of space, time, interfaces, and roles.
Abstract: Working at the cutting edge of live performance, an emerging generation of artists is employing digital technologies to create distinctive forms of interactive, distributed, and often deeply subjective theatrical performance. The work of these artists is not only fundamentally transforming the experience of theater, it is also reshaping the nature of human interaction with computers. In this book, Steve Benford and Gabriella Giannachi offer a new theoretical framework for understanding these experiences--which they term mixed reality performances--and document a series of landmark performances and installations that mix the real and the virtual, live performance and interactivity. Benford and Giannachi draw on a number of works that have been developed at the University of Nottingham's Mixed Reality Laboratory, describing collaborations with artists (most notably the group Blast Theory) that have gradually evolved a distinctive interdisciplinary approach to combining practice with research. They offer detailed and extended accounts of these works from different perspectives, including interviews with the artists and Mixed Reality Laboratory researchers. The authors develop an overarching theory to guide the study and design of mixed reality performances based on the approach of interleaved trajectories through hybrid structures of space, time, interfaces, and roles. Combinations of canonical, participant, and historic trajectories show how such performances establish complex configurations of real and virtual, local and global, factual and fictional, and personal and social.

222 citations


Patent
30 Sep 2011
TL;DR: In this paper, the authors describe a see-through, near-eye, mixed reality display device for providing customized experiences for a user, which can be used in various entertainment, sports, shopping and theme-park situations to provide a mixed reality experience.
Abstract: The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The system can be used in various entertainment, sports, shopping and theme-park situations to provide a mixed reality experience.

217 citations


Book
05 Apr 2011
TL;DR: This talk describes a research program exploring what William Gibson referred to as “the infinite plasticity” of digital identity, and examines how watching one's own self behave in novel manners affects memory, health behavior, and persuasion.
Abstract: How do The Matrix, Avatar, and Tron reveal the future of existence? Can our brains recognize where "reality" ends and "virtual" begins? What would it mean to live eternally in a digital universe? Where will technology lead us in five, fifty, and five hundred years? Two innovative scientists explore the mystery and reality of the virtual and examine the profound potential of emerging digital technologies. Welcome to the future . . . The coming explosion of immersive digital technology, combined with recent progress in unlocking how the mind works, will soon revolutionize our lives in ways only science fiction has imagined. In Infinite Reality, Jeremy Bailenson (Stanford University) and Jim Blascovich (University of California, Santa Barbara), two of virtual reality's pioneering authorities whose pathbreaking research has mapped how our brain behaves in digital worlds, take us on a mind-bending journey through the virtual universe. Infinite Reality explores what emerging computer technologies and their radical applications will mean for the future of human life and society. Along the way, Bailenson and Blascovich examine the timeless philosophical questions of the self and "reality" that arise through the digital experience; explain how virtual reality's latest and future forms, including immersive video games and social-networking sites, will soon be seamlessly integrated into our lives; show the many surprising practical applications of virtual reality, from education and medicine to sex and warfare; and probe further-off possibilities like "total personality downloads" that would allow your great-great-grandchildren to have a conversation with "you" a century or more after your death. Equally fascinating, farsighted, and profound, Infinite Reality is an essential guide to our virtual future, where the experience of being human will be deeply transformed.

190 citations


Patent
14 Oct 2011
TL;DR: In this paper, a real object in the field of view of a see-through, mixed reality display device system is made to disappear from the display, based on user disappearance criteria, by tracking image data to the real object.
Abstract: The technology causes disappearance of a real object in a field of view of a see-through, mixed reality display device system based on user disappearance criteria. Image data is tracked to the real object in the field of view of the see-through display for implementing an alteration technique on the real object causing its disappearance from the display. A real object may satisfy user disappearance criteria by being associated with subject matter that the user does not wish to see or by not satisfying relevance criteria for a current subject matter of interest to the user. In some embodiments, based on a 3D model of a location of the display device system, an alteration technique may be selected for a real object based on a visibility level associated with the position within the location. Image data for alteration may be prefetched based on a location of the display device system.
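The disappearance criteria described above (blocked subject matter, or failing a relevance test against the user's current interest) can be illustrated with a small predicate. The field names and set-based representation are our assumptions, not the patent's.

```python
def should_disappear(object_subjects, blocked_subjects, current_interest):
    """Decide whether a real object satisfies user disappearance criteria.

    object_subjects: set of subject-matter tags associated with the object.
    blocked_subjects: set of subjects the user does not wish to see.
    current_interest: the user's current subject of interest.
    """
    if blocked_subjects & object_subjects:
        return True  # associated with subject matter the user doesn't want
    # Otherwise, disappear if not relevant to the current subject of interest.
    return current_interest not in object_subjects
```

A matching object would then be hidden by an alteration technique chosen, per the abstract, according to its visibility level within the location.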

174 citations


Book
01 Jan 2011
TL;DR: In this paper, the authors present a taxonomy of technologies and features of augmented environments, and present a survey of use cases for Mobile Augmented Reality Browsers and applications.
Abstract: PART I TECHNOLOGIES.- Augmented Reality: An Overview.- New Augmented Reality Taxonomy: Technologies and Features of Augmented Environment.- Visualization Techniques for Augmented Reality.- Mobile Augmented Reality Game Engine.- Head-Mounted Projection Display Technology and Applications.- Wireless Displays in Educational Augmented Reality Applications.- Mobile Projection Interfaces for Augmented Reality Applications.- Interactive Volume Segmentation and Visualization in Augmented Reality.- Virtual Roommates: Sampling and Reconstructing Presence in Multiple Shared Spaces.- Large Scale Spatial Augmented Reality for Design and Prototyping.- Markerless Tracking for Augmented Reality.- Enhancing Interactivity in Handheld AR Environments.- Evaluating Augmented Reality Systems.- Situated Simulations Between Virtual Reality and Mobile Augmented Reality: Designing a Narrative Space.- Referencing Patterns in Collaborative Augmented Reality.- QR Code Based Augmented Reality and Its Applications.- Evolution of a Tracking System.- Navigation Techniques in Augmented and Mixed Reality: Crossing the Virtuality Continuum.- Survey of Use Cases for Mobile Augmented Reality Browsers.- PART II APPLICATIONS.- Augmented Reality for Nano Manipulation.- Augmented Reality in Psychology.- Environmental Planning Using Augmented Reality.- Mixed Reality Manikins for Medical Education.- Augmented Reality Applied to Edutainment.- Designing Mobile Augmented Reality Games.- Network Middleware for Mobile and Pervasive Large Scale Augmented Reality Games.- 3D Medical Imaging and Augmented Reality for Image-Guided Surgery.- Augmented Reality in Assistive Technology and Rehabilitation Engineering.- Using Augmentation Techniques for Performance Evaluation in Automotive Safety.- Augmented Reality in Product Development and Manufacturing.- Military Applications of Augmented Reality.- Augmented Reality in Exhibition and Entertainment for the Public.- GIS and Augmented Reality: State of the Art and Issues.

Journal ArticleDOI
Jason Wither, Yun-Ta Tsai, Ronald Azuma
TL;DR: This paper replaces the live camera view used in video see-through AR with a previously captured panoramic image to improve the perceived quality of the tracking while still maintaining a similar overall experience, and evaluates this technique on both a performance and experiential basis.

Journal ArticleDOI
TL;DR: A study into users' acceptance of a mixed reality prototype, named Mixed Reality Regenerative Concept (MRRC), which was developed using mixed reality technology to provide Biomedical Science students with exposure to regenerative concepts and tissue engineering processes is discussed.
Abstract: This study investigates users' perception and acceptance of mixed reality (MR) technology. Acceptance of new information technologies has been an important research area since the 1990s. It is important to understand the reasons why people accept information technologies, as this can help to improve design and evaluation and to predict how users will respond to a new technology. MR is one of the potential technologies that has gained attention in recent times, offering a unique environment as it combines real and virtual objects, is interactive in real time, and is registered in three dimensions. This paper discusses a study into users' acceptance of a mixed reality prototype, named Mixed Reality Regenerative Concept (MRRC). MRRC was developed using mixed reality technology to provide Biomedical Science students with exposure to regenerative concepts and tissue engineering processes. MRRC integrates situated learning as the model of instruction, emphasising authentic context and activities. Volunteer sampling was used in this study to obtain 63 participants comprising 2nd, 3rd and 4th year Biomedical Science students in two public universities in Malaysia, who had not previously experienced mixed reality technology. In this study, the constructs used to determine acceptance of mixed reality technology were personal innovativeness (PI), perceived enjoyment (PE), perceived ease of use (PEOU), perceived usefulness (PU), and intention to use (ITU). Results from simple correlation analyses showed positive linear correlations between the constructs. However, findings from regression analysis suggested that perceived usefulness was the most important factor determining users' intention to use this technology in the future. Findings from this study also suggested that tertiary level science students showed a high willingness to use mixed reality technology in the future.
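The "simple correlation analyses" between constructs such as perceived usefulness (PU) and intention to use (ITU) amount to computing Pearson's r between per-participant scores. A minimal sketch, using illustrative scores rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A positive r (up to +1) between, say, PU and ITU scores would indicate the kind of positive linear correlation the study reports; the regression analysis then asks which construct best predicts ITU.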

Patent
10 Nov 2011
TL;DR: In this paper, a reconfigurable platform management apparatus for a virtual reality-based training simulator is presented, which enables a device platform to be reconfigured to suit various work environments and to fulfill various work scenario requirements of users.
Abstract: Disclosed herein is a reconfigurable platform management apparatus for a virtual reality-based training simulator, which enables a device platform to be reconfigured to suit various work environments and to fulfill various work scenario requirements of users. The reconfigurable platform management apparatus for a virtual reality-based training simulator includes an image output unit for outputting a stereoscopic image of mixed reality content that is used for work training of a user. A user working tool unit generates virtual sensation feedback, corresponding to the sensation feedback generated when working with an actual working tool, based on a user's motion relative to the outputted stereoscopic image. A tracking unit transmits a sensing signal, obtained by sensing the motion of the user working tool unit, to the image output unit and the user working tool unit.

Patent
19 Aug 2011
TL;DR: In this paper, location-based skin for a see-through, mixed-reality display device system is described, where user data is uploaded and displayed in a skin in accordance with user settings.
Abstract: The technology provides embodiments for providing a location-based skin for a see-through, mixed reality display device system. In many embodiments, a location-based skin includes a virtual object viewable by a see-through, mixed reality display device system which has been detected in a specific location. Some location-based skins implement an ambient effect. The see-through, mixed reality display device system is detected to be present in a location and receives and displays a skin while in the location in accordance with user settings. User data may be uploaded and displayed in a skin in accordance with user settings. A location may be a physical space at a fixed position and may also be a space defined relative to a position of a real object, for example, another see-through, mixed reality display device system. Furthermore, a location may be a location within another location.

Book ChapterDOI
29 Aug 2011
TL;DR: A new taxonomy is proposed enabling these environments to be classified by their purpose, i.e., to enable someone to create sensory-motor and cognitive activities in a new space combining the real environment and a virtual environment.
Abstract: This article has a dual aim: firstly to define augmented reality (AR) environments and secondly, based on this definition, to propose a new taxonomy enabling these environments to be classified. After briefly reviewing existing classifications, we define AR by its purpose, i.e., to enable someone to create sensory-motor and cognitive activities in a new space combining the real environment and a virtual environment. Below we present our functional taxonomy of AR environments. We divide these environments into two distinct groups. The first concerns the different functionalities enabling us to discover and understand our environment, an augmented perception of reality. The second corresponds to applications whose aim is to create an artificial environment. Finally, more than a functional difference, we demonstrate that it is possible to consider that both types of AR have a pragmatic purpose. The difference therefore seems to lie in the ability of both types of AR to free themselves or not of location in time and space.

Book ChapterDOI
01 Dec 2011
TL;DR: In this article, the authors highlight some of the most exciting potential applications associated with engaging more of a user's senses while in a simulated environment and review the key technical challenges associated with stimulating multiple senses in a VR setting.
Abstract: Perception in the real world is inherently multisensory, often involving visual, auditory, tactile, olfactory, gustatory, and, on occasion, nociceptive (i.e., painful) stimulation. In fact, the vast majority of life’s most enjoyable experiences involve the stimulation of several senses simultaneously. Outside of the entertainment industry, however, the majority of virtual reality (VR) applications thus far have involved the stimulation of only one, or at most two, senses, typically vision, audition, and, on occasion, touch/haptics. That said, the research that has been conducted to date has convincingly shown that increasing the number of senses stimulated in a VR simulator can dramatically enhance a user’s ‘sense of presence’, their enjoyment, and even their memory for the encounter/experience. What is more, given that the technology has been improving rapidly, and the costs associated with VR systems are continuing to come down, it seems increasingly likely that truly multisensory VR should be with us soon (albeit some 50 years after Heilig (1962) originally introduced the Sensorama). However, it is important to note that there are both theoretical and practical limitations to the stimulation of certain senses in VR. In this chapter, after having defined the concept of ‘neurally-inspired VR’, we highlight some of the most exciting potential applications associated with engaging more of a user’s senses while in a simulated environment. We then review the key technical challenges associated with stimulating multiple senses in a VR setting. We focus on the particular problems associated with the stimulation of the senses of touch, smell, and taste.

Patent
08 Apr 2011
TL;DR: In this article, an interactive mixed reality simulator is provided that includes a virtual 3D model of internal or hidden features of an object; a physical model or object being interacted with; and a tracked instrument used to interact with the physical object.
Abstract: An interactive mixed reality simulator is provided that includes a virtual 3D model of internal or hidden features of an object; a physical model or object being interacted with; and a tracked instrument used to interact with the physical object. The tracked instrument can be used to simulate or visualize interactions with internal features of the physical object represented by the physical model. In certain embodiments, one or more of the internal features can be present in the physical model. In another embodiment, some internal features do not have a physical presence within the physical model.

Patent
11 Nov 2011
TL;DR: In this paper, the authors propose a system for recalibration of a set of outward facing cameras supported by a see-through, head mounted, mixed reality display system having a flexible portion between seethrough displays for the eyes.
Abstract: The technology provides embodiments for recalibration of outward facing cameras supported by a see-through, head mounted, mixed reality display system having a flexible portion between see-through displays for the eyes. Each outward facing camera has a fixed spatial relationship with a respective or corresponding see-through display positioned to be seen through by a respective eye. For front facing cameras, the fixed spatial relationship allows a predetermined mapping between positions on an image sensor of each camera and positions on the respective display. The mapping may be used to register a position of a virtual object to a position of a real object. A change in a first flexible spatial relationship between the outward facing cameras can be automatically detected. A second spatial relationship between the cameras is determined. A registration of a virtual object to a real object may be updated based on the second spatial relationship.
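The "predetermined mapping between positions on an image sensor of each camera and positions on the respective display" can be pictured as a simple coordinate transform that is rebuilt when a change in the cameras' spatial relationship is detected. The affine form and parameter names below are illustrative assumptions, not the patent's calibration model.

```python
def make_sensor_to_display(scale_x, scale_y, offset_x, offset_y):
    """Return a mapping from camera sensor (u, v) to display (x, y) coordinates.

    A real system would derive these coefficients from calibration; here
    they are supplied directly as placeholders.
    """
    def mapping(u, v):
        return (scale_x * u + offset_x, scale_y * v + offset_y)
    return mapping

# When a change in the flexible spatial relationship between the outward
# facing cameras is detected, registration is updated by rebuilding the
# mapping (and hence virtual-to-real object registration) with the newly
# determined spatial relationship.
```

For example, a virtual object registered to a real object seen at sensor pixel (u, v) would be drawn at mapping(u, v) on the corresponding see-through display.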

Journal ArticleDOI
TL;DR: Two pioneering field trials are reviewed in which MapLens, a magic lens that augments paper-based city maps, was used in small-group collaborative tasks; the trials found place-making and the use of artefacts to communicate and establish common ground to be predominant modes of interaction in AR-mediated collaboration, with users working on tasks together despite not needing to.

Book ChapterDOI
01 Jan 2011
TL;DR: This chapter reviews military benefits and requirements that have led to a series of research efforts in augmented reality and related systems for the military over the past few decades, beginning with the earliest specific application of AR.
Abstract: This chapter reviews military benefits and requirements that have led to a series of research efforts in augmented reality (AR) and related systems for the military over the past few decades, beginning with the earliest specific application of AR. While by no means a complete list, we note some themes from the various projects and discuss ongoing research at the Naval Research Laboratory. Two of the most important thrusts within these applications are the user interface and human factors. We summarize our research and place it in the context of the field.

Journal ArticleDOI
TL;DR: A virtual crane training system has been developed which can be controlled using commands extracted from facial gestures and is capable of lifting loads/materials in virtual construction sites; it integrates the concept of affective computing into the conventional VR training platform to measure cognitive load and level of satisfaction during performance.

Patent
01 Dec 2011
TL;DR: In this paper, a video show is recorded in a tangible medium for distribution and presentation to an audience, which consists of video portions of a player based on a real person wearing a head mounted display through which the real person sees a virtual reality object with which he or she attempts to interact while moving within a real environment, at least a portion of the virtual reality environment corresponding to the real environment.
Abstract: A video show is recorded in a tangible medium for distribution and presentation to an audience. The video show comprises video portions of a player based on a real person wearing a head mounted display through which the real person sees a virtual reality object with which the real person attempts to interact while moving within a real environment. The video show also comprises generated virtual video portions of the virtual reality object within a virtual reality environment and depicting any action of the virtual reality object in response to attempts by the real person to interact with the virtual reality object, at least a portion of the virtual reality environment corresponding to the real environment.

Book ChapterDOI
05 Sep 2011
TL;DR: A virtual character controlled by an actor in real time, who talks with an audience through an augmented mirror, using a mixture of technologies: two Kinect systems for motion capture, depth map and real images, and control algorithms to manage avatar emotions.
Abstract: In this paper we present a virtual character controlled by an actor in real time, who talks with an audience through an augmented mirror. The application, which integrates video images, the avatar and other virtual objects within an Augmented Reality system, has been implemented using a mixture of technologies: two Kinect systems for motion capture, depth map and real images, a gyroscope to detect head movements, and control algorithms to manage avatar emotions.

Journal ArticleDOI
TL;DR: This special section contains seven papers on mobile AR, covering a range of topics from tracking, user studies, visualization, and collaborative applications.

Journal ArticleDOI
TL;DR: By defining Mixed Reality Agents as a formal field, establishing a common taxonomy, and retrospectively placing existing MiRA projects within it, future researchers can effectively position their research within this landscape, thereby avoiding duplication and fostering reuse and interoperability.
Abstract: In recent years, an increasing number of Mixed Reality (MR) applications have been developed using agent technology, both for the underlying software and as an interface metaphor. However, no unifying field or theory currently exists that can act as a common frame of reference for these varied works. As a result, much duplication of research is evidenced in the literature. This paper seeks to fill this important gap by outlining, for the first time, a formal field of research that has hitherto gone unacknowledged, namely the field of Mixed Reality Agents (MiRAs), which are defined as agents embodied in a Mixed Reality environment. Based on this definition, a taxonomy is offered that classifies MiRAs along three axes: agency, based on the weak and strong notions outlined by Wooldridge and Jennings (1995); corporeal presence, which describes the degree of virtual or physical representation (body) of a MiRA; and interactive capacity, which characterises its ability to sense and act on the virtual and physical environment. Furthermore, this paper offers the first comprehensive survey of the state-of-the-art of MiRA research and places each project within the proposed taxonomy. Finally, common trends and future directions for MiRA research are discussed. By defining Mixed Reality Agents as a formal field, establishing a common taxonomy, and retrospectively placing existing MiRA projects within it, future researchers can effectively position their research within this landscape, thereby avoiding duplication and fostering reuse and interoperability.

Patent
07 Dec 2011
TL;DR: In this article, the authors proposed a system for making static printed content being viewed through a see-through, mixed reality display device system more dynamic with display of virtual data, where a printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a print content selection within the content item is identified based on physical action user input, such as eye gaze or gesture.
Abstract: The technology provides embodiments for making static printed content being viewed through a see-through, mixed reality display device system more dynamic with display of virtual data. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. A task in relation to the printed content selection can also be determined based on physical action user input. Virtual data for the printed content selection is displayed in accordance with the task. Additionally, virtual data can be linked to a work embodied in a printed content item. Furthermore, a virtual version of the printed material may be displayed at a more comfortable reading position and with improved visibility of the content.

Patent
20 Jun 2011
TL;DR: In this article, the authors present a system that facilitates interaction between two entities located away from each other, including a virtual reality system, an augmented reality system and an object-state-maintaining mechanism.
Abstract: One embodiment of the present invention provides a system that facilitates interaction between two entities located away from each other. The system includes a virtual reality system (114), an augmented reality system (120), and an object-state-maintaining mechanism. During operation, the virtual reality system (114) displays an object associated with a real-world object. The augmented reality system (120) displays the object based on a change to the state of the object. The object-state-maintaining mechanism determines the state of the object and communicates a state change to the virtual reality system, the augmented reality system, or both. A respective state change of the object can be based on one or more of: a state change of the real-world object; a user input to the virtual reality system or the augmented reality system; and an analysis of an image of the real-world object.
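The object-state-maintaining mechanism described above is essentially an observer pattern: any state change, whatever its origin, is propagated to both the VR and AR displays. The sketch below is a minimal illustration under that reading; all class and method names are assumptions, not the patent's terminology.

```python
class Display:
    """Stand-in for either the VR or the AR display system."""
    def __init__(self, name: str):
        self.name = name
        self.last_state = None

    def render(self, state: dict):
        # A real system would redraw the shared object here.
        self.last_state = state

class ObjectStateMaintainer:
    """Tracks the shared object's state and notifies subscribed displays."""
    def __init__(self):
        self.subscribers = []
        self.state = {}

    def subscribe(self, display: Display):
        self.subscribers.append(display)

    def update(self, **changes):
        # A change may originate from the real-world object, from user
        # input to either system, or from image analysis; all subscribers
        # receive a snapshot of the merged state.
        self.state.update(changes)
        for display in self.subscribers:
            display.render(dict(self.state))

vr, ar = Display("virtual reality"), Display("augmented reality")
maintainer = ObjectStateMaintainer()
maintainer.subscribe(vr)
maintainer.subscribe(ar)
maintainer.update(position=(1, 2, 3), colour="red")
print(vr.last_state == ar.last_state)  # True
```

Keeping the state in one maintainer, rather than in either display, is what lets the two remotely located systems stay consistent.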

Journal Article
TL;DR: An AR game concept, “Locatory”, is presented that combines game logic with collaborative game play and personalized mobile augmented reality visualization for contextualization of learning experiences.
Abstract: This article discusses technological developments and applications of mobile augmented reality (AR) and their application in learning. Augmented reality interaction design patterns are introduced, and educational patterns for supporting certain learning objectives with AR approaches are discussed. The article then identifies several dimensions of a user context, identified with sensors contained in mobile devices and used for the contextualization of learning experiences. Finally, an AR game concept, “Locatory”, is presented that combines game logic with collaborative game play and personalized mobile augmented reality visualization.

Journal ArticleDOI
TL;DR: The base idea behind using VR and AR techniques is to offer archaeologists and the general public new insights into the reconstructed archaeological sites, allowing archaeologists to study directly from within the virtual site and the general public to immersively explore a realistic reconstruction of the sites.
Abstract: The paper presents different issues concerning the preservation of cultural heritage using virtual reality (VR) and augmented reality (AR) technologies in a cultural context. While VR/AR technologies in general are mentioned, particular attention is paid to 3D visualisation and 3D interaction modalities, illustrated through three different demonstrators: two VR demonstrators (one immersive, one semi-immersive) and an AR demonstrator including tangible user interfaces. To show the benefits of VR and AR technologies for studying and preserving cultural heritage, we investigated the visualisation of, and interaction with, reconstructed underwater archaeological sites. The base idea behind using VR and AR techniques is to offer archaeologists and the general public new insights into the reconstructed archaeological sites, allowing archaeologists to study directly from within the virtual site and the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine but differ drastically in the way they present information and exploit interaction modalities. The visualisation and interaction techniques developed through these demonstrators are the results of the ongoing dialogue between archaeological requirements and the technological solutions developed.