
Showing papers on "Augmented reality" published in 2012


Journal ArticleDOI
TL;DR: This literature review describes Augmented Reality (AR), how it applies to education and training, and its potential impact on the future of education.
Abstract: There are many different ways for people to be educated and trained with regard to specific information and skills they need. These methods include classroom lectures with textbooks, computers, handheld devices, and other electronic appliances. The choice of learning innovation depends on an individual's access to various technologies and the infrastructure of the person's surroundings. In a rapidly changing society where a great deal of information and knowledge is available, adopting and applying information at the right time and right place is needed to maintain efficiency in both school and business settings. Augmented Reality (AR) is one technology that dramatically shifts the location and timing of education and training. This literature review describes Augmented Reality (AR), how it applies to education and training, and its potential impact on the future of education.

700 citations


Patent
21 Aug 2012
TL;DR: In this article, a system and method are presented for providing informational labels with perceived depth in the field of view of a user of a head mounted display device; the method includes determining the physical location of the user and the head mounted display device, and identifying and determining the distance from the user to one or more objects of interest in the user's field of view.
Abstract: A system and method for providing informational labels with perceived depth in the field of view of a user of a head mounted display device. In one embodiment, the method includes determining a physical location of the user and the head mounted display device, and identifying and determining a distance from the user to one or more objects of interest in the user's field of view. Using the distance from the user for each object, one can calculate a disparity value for viewing each object. The processor of the head mounted device may gather information concerning each of the objects in which the user is interested. The head mounted display device then provides a label for each of the objects and for each eye of the user, and, using the disparity values, places the labels within the field of view of the user.
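
The patent does not spell out the disparity computation, but a minimal sketch of the usual stereo relationship, assuming an average interpupillary distance and a display focal length expressed in pixels (both assumed values, not from the patent), might look like this:

```python
# Illustrative sketch, not the patent's method: derive a pixel disparity
# for a label from the viewer-to-object distance using a simple stereo
# model. IPD_METERS and FOCAL_PIXELS are assumed placeholder values.

IPD_METERS = 0.063      # assumed average interpupillary distance
FOCAL_PIXELS = 1200.0   # assumed display focal length in pixels

def label_disparity(distance_m: float) -> float:
    """Horizontal pixel offset between the left- and right-eye copies of
    a label so that it appears at the object's distance."""
    return IPD_METERS * FOCAL_PIXELS / distance_m

# Labels for nearer objects get larger disparities, so they appear closer.
for d in (3.0, 10.0, 30.0):
    print(f"{d:5.1f} m -> {label_disparity(d):5.1f} px")
```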

624 citations


Journal ArticleDOI
TL;DR: The research and development of augmented reality (AR) applications in design and manufacturing is reviewed in this paper, which is organized into seven main sections covering, among other topics, the background of manufacturing simulation applications, the initial AR developments, and the current hardware and software tools associated with AR.

580 citations


Patent
29 Mar 2012
TL;DR: In this article, a head mounted device provides an immersive virtual or augmented reality experience for viewing data and enabling collaboration among multiple users; rendering includes capturing an image and spatial data with a body mounted camera and sensor array and displaying a virtual object so that it appears anchored to a selected anchor surface.
Abstract: A head mounted device provides an immersive virtual or augmented reality experience for viewing data and enabling collaboration among multiple users. Rendering images in a virtual or augmented reality system may include capturing an image and spatial data with a body mounted camera and sensor array, receiving an input indicating a first anchor surface, calculating parameters with respect to the body mounted camera and displaying a virtual object such that the virtual object appears anchored to the selected first anchor surface. Further operations may include receiving a second input indicating a second anchor surface within the captured image that is different from the first anchor surface, calculating parameters with respect to the second anchor surface and displaying the virtual object such that the virtual object appears anchored to the selected second anchor surface and moved from the first anchor surface.
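
As a rough illustration of the anchoring idea (the names and transforms below are hypothetical, not taken from the patent), the virtual object's pose can be stored relative to the anchor surface, so that re-anchoring amounts to swapping one transform:

```python
# Hypothetical sketch of surface anchoring: a virtual object's pose is
# stored relative to an anchor surface, so switching anchors only swaps
# the anchor's world transform. Values are illustrative placeholders.
import numpy as np

def make_anchor(translation):
    """4x4 world-from-anchor transform (rotation omitted for brevity)."""
    T = np.eye(4)
    T[:3, 3] = translation
    return T

object_in_anchor = make_anchor([0.0, 0.1, 0.0])   # 10 cm above the surface

first_anchor = make_anchor([1.0, 0.0, 2.0])       # e.g. a desktop
second_anchor = make_anchor([-0.5, 1.2, 3.0])     # e.g. a wall

# Rendering uses world-from-object = world-from-anchor @ anchor-from-object,
# so selecting a new anchor surface moves the object with no other changes.
world_pose = first_anchor @ object_in_anchor
world_pose = second_anchor @ object_in_anchor     # after the second input
```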

487 citations


Journal ArticleDOI
TL;DR: Evaluations of AR experiences in an educational setting provide insights into how this technology can enhance traditional learning models and what obstacles stand in the way of its broader use.
Abstract: Evaluations of AR experiences in an educational setting provide insights into how this technology can enhance traditional learning models and what obstacles stand in the way of its broader use. A related video can be seen here: http://youtu.be/ndUjLwcBIOw. It shows examples of augmented reality experiences in an educational setting.

440 citations


Proceedings ArticleDOI
25 Jun 2012
TL;DR: A more fine-grained cloudlet concept that manages applications at the component level is proposed; such cloudlets can be formed dynamically from any device on the LAN with available resources.
Abstract: Although mobile devices are gaining more and more capabilities (e.g. CPU power, memory, connectivity, ...), they still fall short of executing complex rich media and data analysis applications. Offloading to the cloud is not always a solution, because of the high WAN latencies, especially for applications with real-time constraints such as augmented reality. Therefore the cloud has to be moved closer to the mobile user in the form of cloudlets. Instead of moving a complete virtual machine from the cloud to the cloudlet, we propose a more fine-grained cloudlet concept that manages applications at the component level. Cloudlets do not have to be fixed infrastructure close to the wireless access point, but can be formed dynamically from any device on the LAN with available resources. We present a cloudlet architecture together with a prototype implementation, showing the advantages and capabilities for a mobile real-time augmented reality application.
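
As a sketch of the kind of component-level placement decision such a framework might make (the cost model, node names, and numbers are our assumptions, not the paper's algorithm): run each component where its estimated completion time, including network transfer, is lowest.

```python
# Assumed cost model: completion time = transfer time + compute time.
# Illustrates why a LAN cloudlet can beat both the phone and a WAN cloud
# for latency-sensitive AR components.

def best_placement(cycles, input_bytes, nodes):
    """nodes: list of (name, cycles_per_sec, rtt_sec, bandwidth_Bps);
    rtt and bandwidth are zero for local execution."""
    def est_time(node):
        name, cps, rtt, bw = node
        transfer = rtt + (input_bytes / bw if bw else 0.0)
        return transfer + cycles / cps
    return min(nodes, key=est_time)[0]

nodes = [
    ("phone",    1e9,  0.0,   0.0),      # local: no transfer, slow CPU
    ("cloudlet", 8e9,  0.005, 12.5e6),   # LAN: ~5 ms RTT, 100 Mbps
    ("cloud",    32e9, 0.080, 12.5e6),   # WAN: ~80 ms RTT hurts real time
]
print(best_placement(cycles=2e8, input_bytes=200_000, nodes=nodes))
```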

369 citations


Journal ArticleDOI
TL;DR: This paper provides an introduction to the technology of augmented reality (AR) and its possibilities for education; key technologies and methods are discussed within the context of education.

336 citations


Journal ArticleDOI
TL;DR: A model for developing AR mobile applications for the field of tourism is proposed, aiming to release AR's full potential within the field.
Abstract: This paper discusses the use of Augmented Reality (AR) applications for the needs of tourism. It describes the technology's evolution from pilot applications into commercial mobile applications. We address the technical aspects of mobile AR application development, emphasizing the technologies that render the delivery of augmented reality content possible and experientially superior. We examine the state of the art, providing an analysis concerning the development and the objectives of each application. Acknowledging the various technological limitations hindering AR's substantial end-user adoption, the paper proposes a model for developing AR mobile applications for the field of tourism, aiming to release AR's full potential within the field.

323 citations


Patent
31 Oct 2012
TL;DR: In this article, apparatus and methods are described for providing a user augmented reality (UAR) service for a camera-enabled mobile device, so that the user can obtain metadata regarding images or video captured with the device.
Abstract: Apparatus and methods are described for providing a user augmented reality (UAR) service for a camera-enabled mobile device, so that the user of the device can obtain metadata regarding one or more images or videos captured with it. The metadata is interactive and allows the user to obtain additional information or specific types of information, such as information that will aid the user in making a decision regarding the identified objects, or selectable action options that can be used to initiate actions with respect to the identified objects.

306 citations


Proceedings Article
01 Aug 2012
TL;DR: An overview of Metaio, a company that provides products and services in the field of augmented reality and computer vision, is presented; some of the specific topics discussed include corporate structure, product launches, markets served, and major areas of technological innovation in the field.
Abstract: This article consists of a collection of slides from the author's conference presentation. It presents an overview of Metaio, a company that provides products and services in the field of augmented reality and computer vision. Some of the specific topics discussed include: corporate structure, product launches, markets served, and major areas of technological innovation in the field.

275 citations


Patent
10 Apr 2012
TL;DR: In this paper, the authors describe a system for providing realistic 3D spatial occlusion between a virtual object displayed by a head mounted, augmented reality display system and a real object visible to the user through the display.
Abstract: Technology is described for providing realistic occlusion between a virtual object displayed by a head mounted, augmented reality display system and a real object visible to the user's eyes through the display. A spatial occlusion in a user field of view of the display is typically a three dimensional occlusion determined based on a three dimensional space mapping of real and virtual objects. An occlusion interface between a real object and a virtual object can be modeled at a level of detail determined based on criteria such as distance within the field of view, display size or position with respect to a point of gaze. Technology is also described for providing three dimensional audio occlusion based on an occlusion between a real object and a virtual object in the user environment.
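
A highly simplified sketch of the spatial occlusion idea, assuming per-pixel depth maps for both the mapped real scene and the rendered virtual content (our simplification, not the patent's wording): a virtual pixel is shown only where the virtual surface is closer to the eye than the real one.

```python
# Illustrative per-pixel occlusion test between real and virtual depth.
# Depth maps hold per-pixel distances in meters, np.inf where empty.
import numpy as np

def occlude(virtual_rgba, virtual_depth, real_depth):
    """Zero out the alpha of virtual pixels hidden behind real geometry."""
    visible = virtual_depth < real_depth
    out = virtual_rgba.copy()
    out[..., 3] = np.where(visible, virtual_rgba[..., 3], 0)
    return out
```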

Proceedings ArticleDOI
05 May 2012
TL;DR: In this article, a single depth camera, a stereoscopic projector, and a curved screen are used to merge real and virtual worlds into a single spatially registered experience on top of a table.
Abstract: Instrumented with a single depth camera, a stereoscopic projector, and a curved screen, MirageTable is an interactive system designed to merge real and virtual worlds into a single spatially registered experience on top of a table. Our depth camera tracks the user's eyes and performs a real-time capture of both the shape and the appearance of any object placed in front of the camera (including the user's body and hands). This real-time capture enables perspective stereoscopic 3D visualizations for a single user that account for deformations caused by physical objects on the table. In addition, the user can interact with virtual objects through physically realistic freehand actions without any gloves, trackers, or instruments. We illustrate these unique capabilities through three application examples: virtual 3D model creation, interactive gaming with real and virtual objects, and a 3D teleconferencing experience that not only presents a 3D view of a remote person, but also a seamless 3D shared task space. We also evaluated the user's perception of projected 3D objects in our system, which confirmed that users can correctly perceive such objects even when they are projected over different background colors and geometries (e.g., gaps, drops).

Proceedings ArticleDOI
22 Aug 2012
TL;DR: The basic principles for beginning to develop applications using Kinect are shown, some projects developed at the VISGRAF Lab are presented, and the new possibilities, challenges, and trends raised by Kinect are discussed.
Abstract: Kinect is a device introduced in November 2010 as an accessory for the Xbox 360. The acquired data has different and complementary natures, combining geometry with visual attributes. For this reason, Kinect is a flexible tool that can be used in applications from several areas, such as Computer Graphics, Image Processing, Computer Vision, and Human-Machine Interaction. Kinect is thus widely used both in industry (games, robotics, theater performances, natural interfaces, etc.) and in research. We initially present some concepts about the device: the architecture and the sensor. We then discuss the data acquisition process: capturing, representation, and filtering. The capture process consists of obtaining a colored image (RGB) and performing a depth measurement (D) with a structured light technique. This data is represented by a structure called an RGBD Image. We also cover the main tools available for developing applications on various platforms. Furthermore, we discuss some recent projects based on RGBD Images, in particular those related to Object Recognition, 3D Reconstruction, Augmented Reality, Image Processing, Robotics, and Interaction. In this survey, we show some research developed by the academic community and some projects developed for industry. We intend to show the basic principles for beginning to develop applications using Kinect, and to present some projects developed at the VISGRAF Lab. Finally, we discuss the new possibilities, challenges and trends raised by Kinect.
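
A minimal sketch of the standard pinhole back-projection that turns an RGBD depth map into camera-space 3D points. The intrinsic values below are commonly cited Kinect depth-camera calibration numbers, used here only as placeholders:

```python
# Standard pinhole back-projection: pixel (u, v) with depth Z maps to
# X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy. Intrinsics are assumed.
import numpy as np

FX, FY, CX, CY = 594.2, 591.0, 339.5, 242.7   # assumed depth intrinsics

def depth_to_points(depth_m):
    """depth_m: HxW array of depths in meters (0 where invalid).
    Returns an HxWx3 array of camera-space points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.dstack((x, y, depth_m))
```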

Book
03 Dec 2012
TL;DR: This book gives a comprehensive understanding of what augmented reality is, what it can do, what is in store for the future, and, most importantly, how to benefit from using AR in our lives and careers.
Abstract: With the explosive growth in mobile phone usage and the rapid rise in search engine technologies over the last decade, augmented reality (AR) is poised to be one of this decade's most disruptive technologies, as the information that is constantly flowing around us is brought into view, in real time, through augmented reality. In this cutting-edge book, the authors outline and discuss never-before-published information about augmented reality and its capabilities. With coverage of mobile, desktop, developers, security, challenges, and gaming, this book gives you a comprehensive understanding of what augmented reality is, what it can do, what is in store for the future and, most importantly, how to benefit from using AR in our lives and careers. It educates readers on how best to use augmented reality regardless of industry; provides an in-depth understanding of AR and ideas ranging from new business applications to new crime-fighting methods; and includes actual examples and case studies from both private and government applications.

Patent
28 Nov 2012
TL;DR: In this paper, a distributed approach for providing an augmented reality environment in which the environmental mapping process is decoupled from the localization processes performed by one or more mobile devices is described.
Abstract: A system and method for providing an augmented reality environment in which the environmental mapping process is decoupled from the localization processes performed by one or more mobile devices is described. In some embodiments, an augmented reality system includes a mapping system with independent sensing devices for mapping a particular real-world environment and one or more mobile devices. Each of the one or more mobile devices utilizes a separate asynchronous computing pipeline for localizing the mobile device and rendering virtual objects from a point of view of the mobile device. This distributed approach provides an efficient way for supporting mapping and localization processes for a large number of mobile devices, which are typically constrained by form factor and battery life limitations.

Patent
04 May 2012
TL;DR: In this paper, various methods and apparatus are described for enabling one or more users to interface with virtual or augmented reality environments, including a computing network having computer servers interconnected through high bandwidth interfaces to gateways for processing data and/or for enabling communication of data between the servers and local user interface devices.
Abstract: Various methods and apparatus are described herein for enabling one or more users to interface with virtual or augmented reality environments. An example system includes a computing network having computer servers interconnected through high bandwidth interfaces to gateways for processing data and/or for enabling communication of data between the servers and local user interface devices. The servers include memory, processing circuitry, and software for designing and/or controlling virtual worlds, as well as for storing and processing user data and data provided by other components of the system. One or more virtual worlds may be presented to a user through a user device for the user to experience and interact with. A large number of users may each use a device to interface simultaneously with one or more digital worlds, observing and interacting with each other and with objects produced within the digital worlds.

Journal ArticleDOI
TL;DR: It is argued that the digital and physical enmesh to form an "augmented reality," which helps explain the current flammable atmosphere of augmented revolution.
Abstract: The rise of mobile phones and social media may come to be historically coupled with a growing atmosphere of dissent that is enveloping much of the globe. The Arab Spring, UK Riots, Occupy and many other protests and so-called “flash-mobs” are all massive gatherings of digitally-connected individuals in physical space; and they have recently become the new normal. The primary role of technology in producing this atmosphere has, in part, been to effectively link the on and the offline. The trend to view these as separate spaces, what I call “digital dualism”, is faulty. Instead, I argue that the digital and physical enmesh to form an “augmented reality”. Linking the power of the digital–creating and disseminating networked information–with the power of the physical–occupying geographic space with flesh-and-blood bodies–is an important part of why we have this current flammable atmosphere of augmented revolution.

Proceedings ArticleDOI
05 Nov 2012
TL;DR: The results show that both perceived usefulness and perceived enjoyment have a direct impact on the intention to use mobile augmented reality applications with historical pictures and information.
Abstract: We have developed a mobile augmented reality application with historical photographs and information about a historical street. We follow a design science research methodology and use an extended version of the technology acceptance model (TAM) to study the acceptance of this application. A prototype was developed in accordance with general principles for usability design, and two surveys were conducted: a web survey with 200 participants who watched a short video demonstration of the application, to validate the adapted acceptance model, and a street survey, in which 42 participants got the opportunity to try the application in a live setting before answering a similar questionnaire and providing more concrete feedback. The results show that both perceived usefulness and perceived enjoyment have a direct impact on the intention to use mobile augmented reality applications with historical pictures and information. Further, a number of practical recommendations for the development and deployment of such systems are provided.
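
As a toy illustration of the kind of analysis behind such TAM findings, one can regress intention to use on perceived usefulness (PU) and perceived enjoyment (PE); the scores below are fabricated placeholders purely to show the computation, not the paper's survey data:

```python
# Ordinary least squares of intention on PU and PE (toy Likert scores).
import numpy as np

pu = np.array([6.0, 4.5, 5.5, 3.0, 6.5])          # perceived usefulness
pe = np.array([5.5, 5.0, 6.0, 2.5, 6.0])          # perceived enjoyment
intention = np.array([6.0, 4.0, 6.0, 2.0, 6.5])   # intention to use

X = np.column_stack([np.ones_like(pu), pu, pe])   # intercept, PU, PE
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(dict(zip(["intercept", "PU", "PE"], beta.round(2))))
```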

Journal ArticleDOI
TL;DR: This paper employs a sensorized handheld tool to capture the feel of a given texture, reduces the three-dimensional acceleration signals to a perceptually equivalent one-dimensional signal, and uses linear predictive coding to distill this raw haptic information into a database of frequency-domain texture models.
Abstract: Modern haptic interfaces are adept at conveying the large-scale shape of virtual objects, but they often provide unrealistic or no feedback when it comes to the microscopic details of surface texture. Direct texture-rendering challenges the state of the art in haptics because it requires a finely detailed model of the surface's properties, real-time dynamic simulation of complex interactions, and high-bandwidth haptic output to enable the user to feel the resulting contacts. This paper presents a new, fully realized solution for creating realistic virtual textures. Our system employs a sensorized handheld tool to capture the feel of a given texture, recording three-dimensional tool acceleration, tool position, and contact force over time. We reduce the three-dimensional acceleration signals to a perceptually equivalent one-dimensional signal, and then we use linear predictive coding to distill this raw haptic information into a database of frequency-domain texture models. Finally, we render these texture models in real time on a Wacom tablet using a stylus augmented with small voice coil actuators. The resulting virtual textures provide a compelling simulation of contact with the real surfaces, which we verify through a human subject study.
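
A compact sketch of the LPC modeling step described above, using the generic autocorrelation method rather than the authors' actual pipeline; the model order and input signal are placeholders:

```python
# Generic LPC fit: find coefficients a[1..p] minimizing the residual of
# s[n] ~= sum_k a[k] * s[n-k], via the Toeplitz normal equations.
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(signal, order=8):
    s = signal - signal.mean()
    r = np.correlate(s, s, mode="full")[len(s) - 1:]   # autocorrelation
    # Solve R a = r[1:order+1], with R the symmetric Toeplitz matrix
    # whose first column is r[:order].
    return solve_toeplitz(r[:order], r[1:order + 1])

rng = np.random.default_rng(0)
texture_sweep = rng.standard_normal(2048)   # stand-in for recorded accel
coeffs = lpc(texture_sweep, order=8)
# Driving the corresponding all-pole filter with noise scaled by contact
# force and speed then resynthesizes a texture-like vibration in real time.
```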

Proceedings ArticleDOI
01 Jan 2012
TL;DR: This work reports moving volume KinectFusion, with additional algorithms that allow the camera to roam freely and the system to handle a volume that moves arbitrarily online.
Abstract: Newcombe and Izadi et al’s KinectFusion [5] is an impressive new algorithm for real-time dense 3D mapping using the Kinect. It is geared towards games and augmented reality, but could also be of great use for robot perception. However, the algorithm is currently limited to a relatively small volume fixed in the world at start up (typically a ∼ 3m cube). This limits applications for perception. Here we report moving volume KinectFusion with additional algorithms that allow the camera to roam freely. We are interested in perception in rough terrain, but the system would also be useful in other applications including free-roaming games and awareness aids for hazardous environments or the visually impaired. Our approach allows the algorithm to handle a volume that moves arbitrarily on-line (Figure 1).
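
A schematic sketch of the moving-volume idea, in our paraphrase rather than the paper's code: recenter the reconstruction volume whenever the camera strays too far from its center. The threshold and handling are assumptions:

```python
# Assumed recentering policy for a camera-following TSDF volume.
import numpy as np

VOLUME_SIZE_M = 3.0
RECENTER_THRESHOLD_M = 0.75   # assumed trigger distance

volume_center = np.zeros(3)

def maybe_recenter(camera_pos):
    """Shift the volume so the camera stays near its center."""
    global volume_center
    offset = camera_pos - volume_center
    if np.linalg.norm(offset) > RECENTER_THRESHOLD_M:
        volume_center = volume_center + offset
        # In a real system the voxel grid contents would be shifted (or
        # streamed out to a global map) by the same offset here.
```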

Proceedings ArticleDOI
04 Mar 2012
TL;DR: An augmented reality magic mirror for teaching anatomy uses a depth camera to track the pose of a user standing in front of a large display; a volume visualization of a CT dataset is augmented onto the user, creating the illusion that the user can look into his body.
Abstract: We present an augmented reality magic mirror for teaching anatomy. The system uses a depth camera to track the pose of a user standing in front of a large display. A volume visualization of a CT dataset is augmented onto the user, creating the illusion that the user can look into his body. Using gestures, different slices from the CT and a photographic dataset can be selected for visualization. In addition, the system can show 3D models of organs, text information and images about anatomy. For interaction with this data we present a new interaction metaphor that makes use of the depth camera. The visibility of hands and body is modified based on the distance to a virtual interaction plane. This helps the user to understand the spatial relations between his body and the virtual interaction plane.
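
An illustrative guess at the visibility mapping (the linear falloff and distances are our assumptions, not the paper's exact function): hands fade in as they approach the virtual interaction plane, so the user can judge how far away it is.

```python
# Assumed distance-to-opacity mapping for the interaction-plane metaphor.
def hand_opacity(hand_depth_m, plane_depth_m, falloff_m=0.3):
    """Opacity 1.0 at the plane, fading linearly to 0 at falloff_m away."""
    distance = abs(hand_depth_m - plane_depth_m)
    return max(0.0, 1.0 - distance / falloff_m)

print(hand_opacity(1.05, 1.0))   # near the plane -> mostly opaque
print(hand_opacity(1.60, 1.0))   # far from the plane -> invisible
```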

Journal ArticleDOI
06 Jul 2012
TL;DR: The two design principles behind the LPP curriculum are outlined: the use of socio-dramatic, embodied play in the form of participatory modeling to support inquiry, and progressive symbolization within rich semiotic ecologies to help students construct meaning.
Abstract: The Learning Physics through Play Project (LPP) engaged 6–8 year old students (n = 43) in a series of scientific investigations of Newtonian force and motion including a series of augmented reality activities. We outline the two design principles behind the LPP curriculum: 1) the use of socio-dramatic, embodied play in the form of participatory modeling to support inquiry; and 2) progressive symbolization within rich semiotic ecologies to help students construct meaning. We then present pre- and post-test results to show that young students were able to develop a conceptual understanding of force, net force, friction and two-dimensional motion after participating in the LPP curriculum. Finally, we present two case studies that illustrate the design principles in action. Taken together the cases show some of the strengths and challenges associated with using augmented reality, embodied play, and a student invented semiotic ecology for scientific inquiry.

Proceedings ArticleDOI
05 May 2012
TL;DR: The results of this study demonstrate that MOSOCO facilitates practicing and learning social skills, increases both quantity and quality of social interactions, reduces social and behavioral missteps, and enables the integration of children with autism in social groups of neurotypical children.
Abstract: MOSOCO is a mobile assistive application that uses augmented reality and the visual supports of a validated curriculum, the Social Compass, to help children with autism practice social skills in real-life situations. In this paper, we present the results of a seven-week deployment study of MOSOCO in a public school in Southern California with both students with autism and neurotypical students. The results of our study demonstrate that MOSOCO facilitates practicing and learning social skills, increases both quantity and quality of social interactions, reduces social and behavioral missteps, and enables the integration of children with autism in social groups of neurotypical children. The findings from this study reveal emergent practices of the uses of mobile assistive technologies in real-life situations.

Patent
29 Jun 2012
TL;DR: In this article, the authors describe a system for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display, where a position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinates system at a second location remote from where the event is occurring.
Abstract: Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects which are positioned within the display field of view are displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user.
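
A hedged sketch of the coordinate-mapping step, using an arbitrary rigid transform between the venue's and the remote room's coordinate systems; the transform and positions are illustrative, not from the patent:

```python
# Map an event object's position from the venue frame into the remote
# room frame with a rigid (yaw-only) transform, in homogeneous coords.
import numpy as np

def rigid_transform(yaw_rad, translation):
    """Room-from-venue transform as a 4x4 matrix (rotation about z)."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = translation
    return T

venue_to_room = rigid_transform(np.pi / 2, [0.0, 0.0, 0.0])
player_in_venue = np.array([10.0, 5.0, 0.0, 1.0])
player_in_room = venue_to_room @ player_in_venue
# The renderer then checks whether player_in_room falls inside the display
# field of view before drawing the virtual player.
```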

Journal ArticleDOI
25 Aug 2012
TL;DR: This study compared four conditions for learning science in a science museum using augmented reality and knowledge-building scaffolds known to be successful in formal classrooms and indicated that students demonstrated greater cognitive gains when scaffolds were used.
Abstract: Although learning science in informal non-school environments has shown great promise in terms of increasing interest and engagement, few studies have systematically investigated and produced evidence of improved conceptual knowledge and cognitive skills. Furthermore, little is known about how digital technologies that are increasingly being used in these informal environments can enhance learning. Through a quasi-experimental design, this study compared four conditions for learning science in a science museum using augmented reality and knowledge-building scaffolds known to be successful in formal classrooms. Results indicated that students demonstrated greater cognitive gains when scaffolds were used. Through the use of digital augmentations, the study also provided information about how such technologies impact learning in informal environments.

Journal ArticleDOI
01 Jul 2012
TL;DR: This paper expands tactile interfaces based on electrovibration beyond touch surfaces and brings them into the real world, with a broad range of application scenarios where the technology can be used to enhance AR interaction with dynamic and unobtrusive tactile feedback.
Abstract: REVEL is an augmented reality (AR) tactile technology that allows the tactile feeling of real objects to be changed by augmenting them with virtual tactile textures using a device worn by the user. Unlike previous attempts to enhance AR environments with haptics, we neither physically actuate objects nor use any force- or tactile-feedback devices, nor require users to wear tactile gloves or other apparatus on their hands. Instead, we employ the principle of reverse electrovibration, where we inject a weak electrical signal anywhere on the user's body, creating an oscillating electrical field around the user's fingers. When sliding his or her fingers on the surface of an object, the user perceives highly distinctive tactile textures augmenting the physical object. By tracking the objects and the location of the touch, we associate dynamic tactile sensations with the interaction context. REVEL is built upon our previous work on designing electrovibration-based tactile feedback for touch surfaces [Bau, et al. 2010]. In this paper we expand tactile interfaces based on electrovibration beyond touch surfaces and bring them into the real world. We demonstrate a broad range of application scenarios where our technology can be used to enhance AR interaction with dynamic and unobtrusive tactile feedback.

Journal ArticleDOI
TL;DR: An overview of augmented reality is presented, discussing what it is, how it works, its current implementations, and its potential impact on libraries.
Abstract: Augmented reality is a technology that overlays digital information on objects or places in the real world for the purpose of enhancing the user experience. It is not virtual reality, that is, the technology that creates a totally digital or computer created environment. Augmented reality, with its ability to combine reality and digital information, is being studied and implemented in medicine, marketing, museums, fashion, and numerous other areas. This article presents an overview of augmented reality, discussing what it is, how it works, its current implementations, and its potential impact on libraries.

Patent
12 Jun 2012
TL;DR: In this article, a real controller device is used to control a virtual object displayed by a near-eye, augmented reality display; user input data requesting an action to be performed by the virtual object is received from the controller device and applied based on the user's perspective.
Abstract: Technology is described for controlling a virtual object displayed by a near-eye, augmented reality display with a real controller device. User input data is received from a real controller device requesting an action to be performed by the virtual object. A user perspective of the virtual object being displayed by the near-eye, augmented reality display is determined. The user input data requesting the action to be performed by the virtual object is applied based on the user perspective, and the action is displayed from the user perspective. The virtual object to be controlled by the real controller device may be identified based on user input data which may be from a natural user interface (NUI). A user selected force feedback object may also be identified, and the identification may also be based on NUI input data.

Patent
21 Mar 2012
TL;DR: In this article, a portable device is used to display data and other information in an augmented reality view along with a visual display of the actual datacenter environment, allowing installers and technicians to view instructions or other data that are visually correlated to the environment in which they are working.
Abstract: Datacenter datasets and other information are visually displayed in an augmented reality view using a portable device. The visual display of this information is presented along with a visual display of the actual datacenter environment. The combination of these two displays allows installers and technicians to view instructions or other data that are visually correlated to the environment in which they are working.

Journal ArticleDOI
TL;DR: The overall project design and architecture of the NAVIG system are presented, along with details of a new type of detection and localization device that combines a bio-inspired vision system able to recognize and locate objects very quickly with a 3D sound rendering system able to perceptually position a sound at the location of the recognized object.
Abstract: Navigating complex routes and finding objects of interest are challenging tasks for the visually impaired. The project NAVIG (Navigation Assisted by artificial VIsion and GNSS) is directed toward increasing personal autonomy via a virtual augmented reality system. The system integrates an adapted geographic information system with different classes of objects useful for improving route selection and guidance. The database also includes models of important geolocated objects that may be detected by real-time embedded vision algorithms. Object localization (relative to the user) may serve both global positioning and sensorimotor actions such as heading, grasping, or piloting. The user is guided to his desired destination through spatialized semantic audio rendering, always maintained in the head-centered reference frame. This paper presents the overall project design and architecture of the NAVIG system. In addition, details of a new type of detection and localization device are presented. This approach combines a bio-inspired vision system that can recognize and locate objects very quickly and a 3D sound rendering system that is able to perceptually position a sound at the location of the recognized object. This system was developed in relation to guidance directives developed through participative design with potential users and educators for the visually impaired.
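
A small sketch of the final spatialization step, converting an object's position in the head-centered frame into azimuth and elevation angles for a 3D audio renderer; the axis convention (x forward, y left, z up) is an assumption of ours, not taken from the paper:

```python
# Convert a head-frame position into (azimuth, elevation) in degrees,
# assuming x forward, y left, z up.
import math

def head_frame_to_az_el(x, y, z):
    azimuth = math.degrees(math.atan2(y, x))          # 0 = straight ahead
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))
    return azimuth, elevation

# An object 2 m ahead and 1 m to the left renders about 27 degrees left.
print(head_frame_to_az_el(2.0, 1.0, 0.0))
```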
Abstract: Navigating complex routes and finding objects of interest are challenging tasks for the visually impaired The project NAVIG (Navigation Assisted by artificial VIsion and GNSS) is directed toward increasing personal autonomy via a virtual augmented reality system The system integrates an adapted geographic information system with different classes of objects useful for improving route selection and guidance The database also includes models of important geolocated objects that may be detected by real-time embedded vision algorithms Object localization (relative to the user) may serve both global positioning and sensorimotor actions such as heading, grasping, or piloting The user is guided to his desired destination through spatialized semantic audio rendering, always maintained in the head-centered reference frame This paper presents the overall project design and architecture of the NAVIG system In addition, details of a new type of detection and localization device are presented This approach combines a bio-inspired vision system that can recognize and locate objects very quickly and a 3D sound rendering system that is able to perceptually position a sound at the location of the recognized object This system was developed in relation to guidance directives developed through participative design with potential users and educators for the visually impaired