
Showing papers on "Augmented reality published in 2015"


Patent
13 Jun 2015
TL;DR: In this article, the authors provide methods and systems for creating virtual and augmented reality experiences for users, comprising an image-capturing device to capture one or more images and a processor communicatively coupled to the image-capturing device to extract a set of map points from the images.
Abstract: To provide methods and systems for creating virtual and augmented reality. SOLUTION: Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The systems may comprise an image capturing device to capture one or more images, the one or more images corresponding to a field of view of a user of a head-mounted augmented reality device, and a processor communicatively coupled to the image capturing device to extract a set of map points from the set of images, to identify a set of sparse points and a set of dense points from the extracted set of map points, and to perform normalization on the set of map points. SELECTED DRAWING: Figure 1
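The normalization step the abstract mentions is not specified further; a common convention is to translate the map points to zero mean and rescale them to unit average distance. The sketch below is an illustration under that assumption — the function name and the normalization scheme are not taken from the patent:

```python
import numpy as np

def normalize_map_points(points):
    """Translate 3D map points to zero mean and scale them to unit
    average distance from the origin. The patent does not specify
    its exact normalization scheme; this is one common convention."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    scale = np.linalg.norm(centered, axis=1).mean()
    return centered / scale, centroid, scale

# Hypothetical map points extracted from one frame
points = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0], [2.0, 4.0, 2.0]]
normalized, centroid, scale = normalize_map_points(points)
```

Returning the centroid and scale lets a caller undo the normalization later, which matters when the points feed back into pose estimation.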

995 citations


Book
31 Mar 2015
TL;DR: This survey summarizes almost 50 years of research and development in the field of Augmented Reality (AR), provides an overview of the common definitions of AR, and shows how AR fits into taxonomies of other related technologies.
Abstract: This survey summarizes almost 50 years of research and development in the field of Augmented Reality (AR). From early research in the 1960s until widespread availability by the 2010s there has been steady progress towards the goal of being able to seamlessly combine real and virtual worlds. We provide an overview of the common definitions of AR, and show how AR fits into taxonomies of other related technologies. A history of important milestones in Augmented Reality is followed by sections on the key enabling technologies of tracking, display and input devices. We also review design guidelines and provide some examples of successful AR applications. Finally, we conclude with a summary of directions for future work and a review of some of the areas that are currently being researched.

573 citations


Journal ArticleDOI
TL;DR: The results suggest that use of the AR platform for training IMA tasks should be encouraged and that use of the VR platform for that purpose should be further evaluated.
Abstract: The current study evaluated the use of virtual reality (VR) and augmented reality (AR) platforms, developed within the scope of the SKILLS Integrated Project, for industrial maintenance and assembly (IMA) tasks training. VR and AR systems are now widely regarded as promising training platforms for complex and highly demanding IMA tasks. However, there is a need to empirically evaluate their efficiency and effectiveness compared to traditional training methods. Forty expert technicians were randomly assigned to four training groups in an electronic actuator assembly task: VR (training with the VR platform twice), Control-VR (watching a filmed demonstration twice), AR (training with the AR platform once), and Control-AR (training with the real actuator and the aid of a filmed demonstration once). A post-training test evaluated performance in the real task. Results demonstrate that, in general, the VR and AR training groups required longer training time compared to the Control-VR and Control-AR groups, respe...

467 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used a quality model to test users' satisfaction and intention to recommend marker-based augmented reality applications and investigated the differences in these constructs between high and low-innovativeness groups visiting a theme park.

341 citations


Journal ArticleDOI
TL;DR: This work covers the entire learning process of the electrical machines course in the electrical engineering degree, allowing interactive and autonomous study as well as collaborative performance of laboratory practices with other students without a teacher's assistance.

283 citations


Journal ArticleDOI
TL;DR: This paper reviews the use of virtual reality environments for research and teaching in the context of three disciplines: architecture, landscape architecture and environmental planning, and describes current VR research opportunities and challenges in each discipline.

259 citations


Journal ArticleDOI
TL;DR: In this paper, the authors address the design of assembly stations where human-robot collaborative tasks are carried out. Based on the assembly process specifications, different control, safety and operator support strategies have to be implemented in order to ensure human safety and the overall system's productivity.

256 citations


Journal ArticleDOI
TL;DR: A classification of existing data types, analytical methods, visualization techniques and tools, with a particular emphasis placed on surveying the evolution of visualization methodology over the past years is provided, and disadvantages of existing visualization methods are revealed.
Abstract: This paper provides a multi-disciplinary overview of the research issues and achievements in the field of Big Data and its visualization techniques and tools. The main aim is to summarize challenges in visualization methods for existing Big Data, as well as to offer novel solutions for issues related to the current state of Big Data Visualization. This paper provides a classification of existing data types, analytical methods, visualization techniques and tools, with a particular emphasis placed on surveying the evolution of visualization methodology over the past years. Based on the results, we reveal disadvantages of existing visualization methods. Despite the technological development of the modern world, human involvement (interaction), judgment and logical thinking are necessary while working with Big Data. Therefore, the role of human perceptual limitations in processing large amounts of information is evaluated. Based on the results, a non-traditional approach is proposed: we discuss how the capabilities of Augmented Reality and Virtual Reality could be applied to the field of Big Data Visualization. We discuss the promising utility of integrating Mixed Reality technology with applications in Big Data Visualization. Placing the most essential data in the central area of the human visual field in Mixed Reality would allow one to take in the presented information in a short period of time without significant data losses due to human perceptual issues. Furthermore, we discuss the impacts of new technologies, such as Virtual Reality displays and Augmented Reality helmets, on Big Data visualization, as well as a classification of the main challenges of integrating these technologies.

254 citations


Journal ArticleDOI
01 Jul 2015
TL;DR: In this article, a touch-less motion interaction technology is designed and evaluated in order to develop touch-less, interactive augmented reality games on a vision-based wearable device, and three primitive AR games with eleven dynamic gestures are developed based on the proposed touch-less interaction technology as proof of concept.
Abstract: There is an increasing interest in creating pervasive games based on emerging interaction technologies. In order to develop touch-less, interactive augmented reality games on a vision-based wearable device, a touch-less motion interaction technology is designed and evaluated in this work. Users interact with the augmented reality games through dynamic hand/foot gestures in front of the camera, which trigger interaction events that manipulate the virtual objects in the scene. Three primitive augmented reality games with eleven dynamic gestures are developed based on the proposed touch-less interaction technology as proof of concept. Finally, a comparative evaluation is presented to demonstrate the social acceptability and usability of the touch-less approach, running on a hybrid wearable framework or on Google Glass, together with an assessment of workload, user emotions and satisfaction.

244 citations


Journal ArticleDOI
TL;DR: It is demonstrated that an IVE can be an effective tool in the design phase of AEC projects in order to acquire end-user performance feedback, which might lead to higher performing infrastructure design and end- user satisfaction.

240 citations


Patent
29 May 2015
TL;DR: In this paper, the authors provide methods and systems for creating focal planes in virtual and augmented reality environments, which may include a spatial light modulator operatively coupled to an image source to project light associated with one or more frames of image data.
Abstract: To provide methods and systems for creating focal planes in virtual and augmented reality. SOLUTION: Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise a spatial light modulator operatively coupled to an image source to project light associated with one or more frames of image data, and a variable focus element (VFE) for varying a focus of the projected light such that a first frame of the image data is focused at a first depth plane, and a second frame of the image data is focused at a second depth plane, where a distance between the first depth plane and the second depth plane is fixed. SELECTED DRAWING: Figure 5

Posted Content
TL;DR: In this paper, a touch-less motion interaction technology is designed and evaluated in order to develop touch-less, interactive augmented reality games on a vision-based wearable device, and three primitive AR games with eleven dynamic gestures are developed based on the proposed touch-less interaction technology as proof of concept.
Abstract: This is the preprint version of our paper in Personal and Ubiquitous Computing. There is an increasing interest in creating pervasive games based on emerging interaction technologies. In order to develop touch-less, interactive augmented reality games on a vision-based wearable device, a touch-less motion interaction technology is designed and evaluated in this work. Users interact with the augmented reality games through dynamic hand/foot gestures in front of the camera, which trigger interaction events that manipulate the virtual objects in the scene. Three primitive augmented reality games with eleven dynamic gestures are developed based on the proposed touch-less interaction technology as proof of concept. Finally, a comparative evaluation is presented to demonstrate the social acceptability and usability of the touch-less approach, running on a hybrid wearable framework or on Google Glass, together with an assessment of workload, user emotions and satisfaction.

Journal ArticleDOI
TL;DR: This work develops a modular software framework for intelligent AR training systems, and a prototype based on this framework teaches novice users how to assemble a computer motherboard.
Abstract: We investigate the combination of Augmented Reality (AR) with Intelligent Tutoring Systems (ITS) to assist with training for manual assembly tasks. Our approach combines AR graphics with adaptive guidance from the ITS to provide a more effective learning experience. We have developed a modular software framework for intelligent AR training systems, and a prototype based on this framework that teaches novice users how to assemble a computer motherboard. An evaluation found that our intelligent AR system improved test scores by 25% and that task performance was 30% faster compared to the same AR training system without intelligent support. We conclude that using an intelligent AR tutor can significantly improve learning compared to more traditional AR training.

Journal ArticleDOI
TL;DR: This research presents a mobile augmented reality travel guide, named CorfuAR, which supports personalized recommendations, and empirically validates the relation between functional system properties, user emotions, and adoption behavior.

Journal ArticleDOI
TL;DR: In this study, an AR-based surgical navigation system (AR-SNS) is developed using an optical see-through HMD (head-mounted display), aiming at improving the safety and reliability of the surgery.

Journal ArticleDOI
TL;DR: In this paper, the authors present their experience with a new tool based on augmented reality focusing on the anatomy of the lower limb, constructed and developed from CT and MRI images, dissections and drawings.
Abstract: Technologies and tools developed for educational purposes are evolving rapidly. This work presents the experience of a new tool based on augmented reality (AR) focusing on the anatomy of the lower limb. ARBOOK was constructed and developed based on CT and MRI images, dissections and drawings. For ARBOOK evaluation, a specific questionnaire of three blocks was designed and validated according to the Delphi method. The questionnaire included motivation and attention tasks, autonomous work and three-dimensional interpretation tasks. A total of 211 students from 7 public and private Spanish universities were divided into two groups. The control group received standard teaching sessions supported by books and video. The ARBOOK group received the same standard sessions but additionally used the ARBOOK tool. At the end of the training, a written test on lower limb anatomy was taken by the students. Statistically significantly better scores for the ARBOOK group were found on attention-motivation, autonomous work and three-dimensional comprehension tasks. Additionally, significantly better scores were obtained by the ARBOOK group in the written test. The results strongly suggest that the use of AR is suitable for anatomical teaching. Specifically, the results indicate how this technology is helpful for student motivation, autonomous work and spatial interpretation. Technologies of this type deserve even more attention at the present moment, when new technologies are being naturally incorporated into our daily lives.

Journal ArticleDOI
TL;DR: This concept paper reviews the research that has been conducted on AR and describes the application of AR in a number of fields of learning including Medicine, Chemistry, Mathematics, Physics, Geography, Biology, Astronomy and History. The paper also discusses the advantages of AR compared to traditional technology (such as e-learning and courseware) and traditional teaching methods (chalk and talk and traditional books).
Abstract: Technology in education can influence students to learn actively and can motivate them, leading to an effective process of learning. Previous research has identified the problem that technology will create a passive learning process if the technology used does not promote critical thinking, meaning-making or metacognition. Since its introduction, augmented reality (AR) has been shown to have good potential in making the learning process more active, effective and meaningful. This is because its advanced technology enables users to interact with virtual and real-time applications and brings natural experiences to the user. In addition, the merging of AR with education has recently attracted research attention because of its ability to allow students to be immersed in realistic experiences. Therefore, this concept paper reviews the research that has been conducted on AR. The review describes the application of AR in a number of fields of learning including Medicine, Chemistry, Mathematics, Physics, Geography, Biology, Astronomy and History. This paper also discusses the advantages of AR compared to traditional technology (such as e-learning and courseware) and traditional teaching methods (chalk and talk and traditional books). The review of the results of the research shows that, overall, AR technologies have positive potential and advantages that can be adapted in education. The review also indicates the limitations of AR which could be addressed in future research.


01 Sep 2015
TL;DR: The focus in this position paper is on bringing attention to the higher-level usability and design issues in creating effective user interfaces for data analytics in immersive environments.
Abstract: Immersive Analytics is an emerging research thrust investigating how new interaction and display technologies can be used to support analytical reasoning and decision making. The aim is to provide multi-sensory interfaces that support collaboration and allow users to immerse themselves in their data in a way that supports real-world analytics tasks. Immersive Analytics builds on technologies such as large touch surfaces, immersive virtual and augmented reality environments, sensor devices and other, rapidly evolving, natural user interface devices. While there is a great deal of past and current work on improving the display technologies themselves, our focus in this position paper is on bringing attention to the higher-level usability and design issues in creating effective user interfaces for data analytics in immersive environments.

Journal Article
TL;DR: The results indicated that visitors who used AR guidance showed significant learning and sense of place effects, and a majority of the visitors who participated in the study demonstrated positive attitudes toward the use of the AR-guidance system.
Abstract: Based on the sense of place theory and the design principles of guidance and interpretation, this study developed an augmented reality mobile guidance system that used a historical geo-context-embedded visiting strategy. This tool for heritage guidance and educational activities enhanced visitor sense of place. This study consisted of 3 visitor groups (i.e., AR-guidance, audio-guidance, and no-guidance) composed of 87 university students. A quasi-experimental design was adopted to evaluate whether augmented reality guidance more effectively promoted sense of place and learning performance than the other groups. The results indicated that visitors who used AR guidance showed significant learning and sense of place effects. Interviews were also employed to determine the possible factors that contribute to the formation of sense of place. Finally, a majority of the visitors who participated in the study demonstrated positive attitudes toward the use of the AR-guidance system.

Journal ArticleDOI
TL;DR: The augmented reality system is accurate and reliable for the intraoperative projection of images to the head, skull, and brain surface and enables the surgeon to use direct visualization for image-guided neurosurgery.
Abstract: OBJECT An augmented reality system has been developed for image-guided neurosurgery to project images with regions of interest onto the patient's head, skull, or brain surface in real time. The aim of this study was to evaluate system accuracy and to perform the first intraoperative application. METHODS Images of segmented brain tumors in different localizations and sizes were created in 10 cases and were projected to a head phantom using a video projector. Registration was performed using 5 fiducial markers. After each registration, the distance of the 5 fiducial markers from the visualized tumor borders was measured on the virtual image and on the phantom. The difference was considered a projection error. Moreover, the image projection technique was intraoperatively applied in 5 patients and was compared with a standard navigation system. RESULTS Augmented reality visualization of the tumors succeeded in all cases. The mean time for registration was 3.8 minutes (range 2–7 minutes). The mean projection e...
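The projection error measured in this study — the difference between a fiducial marker's distance to the visualized tumor border on the virtual image and the same distance on the phantom — reduces to a per-marker comparison. A minimal sketch with hypothetical measurements (the distance values and function name are illustrative, not from the paper):

```python
import numpy as np

def projection_errors(virtual_dists_mm, phantom_dists_mm):
    """Per-fiducial projection error: absolute difference between a
    marker's distance to the tumor border measured on the virtual
    image and the same distance measured on the phantom (mm)."""
    v = np.asarray(virtual_dists_mm, dtype=float)
    p = np.asarray(phantom_dists_mm, dtype=float)
    return np.abs(v - p)

# Hypothetical distances (mm) for the 5 fiducial markers
virtual = [12.1, 15.4, 9.8, 11.0, 14.2]
phantom = [12.5, 15.0, 10.1, 11.4, 13.9]
errors = projection_errors(virtual, phantom)
mean_error = errors.mean()
```

Reporting the mean over all markers, as the study does, summarizes registration quality for one projection in a single number.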

Proceedings ArticleDOI
Ohan Oda1, Carmine Elvezio1, Mengu Sukan1, Steven Feiner1, Barbara Tversky 
05 Nov 2015
TL;DR: Two approaches that use Virtual Reality (VR) or Augmented Reality (AR) for the remote expert, and AR for the local user, each wearing a stereo head-worn display are introduced.
Abstract: In many complex tasks, a remote subject-matter expert may need to assist a local user to guide actions on objects in the local user's environment. However, effective spatial referencing and action demonstration in a remote physical environment can be challenging. We introduce two approaches that use Virtual Reality (VR) or Augmented Reality (AR) for the remote expert, and AR for the local user, each wearing a stereo head-worn display. Both approaches allow the expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those physical objects and to indicate actions on them. This can be especially useful for parts that are occluded or difficult to access. In one approach, the expert points in 3D to portions of virtual replicas to annotate them. In another approach, the expert demonstrates actions in 3D by manipulating virtual replicas, supported by constraints and annotations. We performed a user study of a 6DOF alignment task, a key operation in many physical task domains, comparing both approaches to an approach in which the expert uses a 2D tablet-based drawing system similar to ones developed for prior work on remote assistance. The study showed the 3D demonstration approach to be faster than the others. In addition, the 3D pointing approach was faster than the 2D tablet in the case of a highly trained expert.

Journal ArticleDOI
TL;DR: This research investigates different visual features for augmented reality (AR)–based assembly instructions and suggests that in order to gain an advantage from AR, the visual features used to explain a particular assembly operation must correspond to its relative difficulty level.
Abstract: This research investigates different visual features for augmented reality (AR)–based assembly instructions. Since the beginning of AR research, one of its most popular application areas has been manual assembly assistance. A typical AR assembly application indicates the necessary manual assembly operations by generating visual representations of parts that are spatially registered with, and superimposed on, a video representation of the physical product to be assembled. Research in this area indicates the advantages of this type of assembly instruction presentation. This research investigates different types of visual features for different assembly operations. The hypothesis is that in order to gain an advantage from AR, the visual features used to explain a particular assembly operation must correspond to its relative difficulty level. The final goal is to associate different types of visual features to different levels of task complexity. A user study has been conducted in order to compare different v...

Proceedings ArticleDOI
18 May 2015
TL;DR: RollingLight is designed, implements, and evaluates, a line-of-sight light-to-camera communication system that enables a light to talk to diverse off-the-shelf rolling shutter cameras and incorporates a number of designs to resolve the issues caused by inherently unsynchronized light- to-camera channels.
Abstract: Recent literature has demonstrated the feasibility and applicability of light-to-camera communications. Existing work either uses this new technology to realize specific applications, e.g., localization, by sending repetitive signal patterns, or considers non-line-of-sight scenarios. We note, however, that line-of-sight light-to-camera communication has great potential because it provides a natural way to enable visual association, i.e., visually associating the received information with the transmitter's identity. Such capability benefits broader applications, such as augmented reality, advertising, and driver assistance systems. Hence, this paper designs, implements, and evaluates RollingLight, a line-of-sight light-to-camera communication system that enables a light to talk to diverse off-the-shelf rolling shutter cameras. To boost the data rate and enhance reliability, RollingLight addresses the following practical challenges. First, its demodulation algorithm allows cameras with heterogeneous sampling rates to accurately decode high-order frequency modulation in real time. Second, it incorporates a number of designs to resolve the issues caused by inherently unsynchronized light-to-camera channels. We have built a prototype of RollingLight with a USRP-N200, and also implemented a real system with an Arduino Mega 2560, both tested with a range of different camera receivers. We also implemented a real iOS application to examine our real-time decoding capability. The experimental results show that, even when serving commodity cameras with a large variety of frame rates, RollingLight can still deliver a throughput of 11.32 bytes per second.
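The rolling-shutter effect RollingLight exploits works because each image row is exposed at a slightly different time, so a single frame samples the light at the sensor's row-readout rate. A minimal frequency demodulator over per-row intensities might look like the sketch below; the row rate, signal parameters, and function name are assumptions for illustration, not values from the paper:

```python
import numpy as np

def demodulate_rows(row_intensities, row_rate_hz):
    """Estimate the dominant flicker frequency from the mean intensity
    of each image row; rows act as time samples taken at the sensor's
    row-readout rate."""
    rows = np.asarray(row_intensities, dtype=float)
    rows = rows - rows.mean()              # remove the DC offset
    spectrum = np.abs(np.fft.rfft(rows))
    freqs = np.fft.rfftfreq(len(rows), d=1.0 / row_rate_hz)
    spectrum[0] = 0.0                      # ignore any residual DC bin
    return freqs[np.argmax(spectrum)]

# Simulate a light flickering at 1 kHz, sampled by a sensor whose
# rows read out at 30 kHz (both values are illustrative)
row_rate = 30_000.0
t = np.arange(1024) / row_rate
rows = 0.5 + 0.5 * np.sin(2 * np.pi * 1000.0 * t)
estimated_hz = demodulate_rows(rows, row_rate)
```

Real receivers must additionally handle heterogeneous frame rates and frame gaps — the unsynchronized-channel issues the paper addresses — which this single-frame sketch ignores.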

Journal ArticleDOI
TL;DR: Augmented reality is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy.
Abstract: Background Augmented reality (AR) in surgery consists of the fusion of synthetic computer-generated images (3D virtual model), obtained from the preoperative medical imaging workup, with real-time patient images in order to visualize unapparent anatomical details. The 3D model can be used for preoperative planning of the procedure. The potential of AR navigation as a tool to improve the safety of surgical dissection is outlined for robotic hepatectomy.

Journal ArticleDOI
TL;DR: In this paper, a mobile learning tool, Explorez, was created for first-year university French students in order to bridge the gap between gaming and education through quest-based learning and augmented reality.

Proceedings ArticleDOI
18 May 2015
TL;DR: To build a ready-to-use mobile AR system, this work adopts a top-down approach cutting across smartphone sensing, computer vision, cloud offloading, and linear optimization, and uses a novel location-free geometric representation of the environment from smartphone sensors to prune down the visual search space.
Abstract: The idea of augmented reality - the ability to look at a physical object through a camera and view annotations about the object - is certainly not new. Yet, this apparently feasible vision has not yet materialized into a precise, fast, and comprehensively usable system. This paper asks: What does it take to enable augmented reality (AR) on smartphones today? To build a ready-to-use mobile AR system, we adopt a top-down approach cutting across smartphone sensing, computer vision, cloud offloading, and linear optimization. Our core contribution is a novel location-free geometric representation of the environment - derived from smartphone sensors - and the use of this geometry to prune down the visual search space. Metrics of success include both accuracy and latency of object identification, coupled with ease of use and scalability in uncontrolled environments. Our converged system, OverLay, is currently deployed in the engineering building and open for use by the general public; ongoing work is focused on a campus-wide deployment to serve as a "historical tour guide" of UIUC. Performance results and user responses thus far have been promising, to say the least.
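The pruning idea in this abstract — discard annotated objects that cannot be in the camera's view before any visual matching runs — can be illustrated with a simple bearing test. The field-of-view value and data layout below are assumptions; OverLay's actual location-free representation is more elaborate:

```python
def prune_candidates(user_heading_deg, candidates, fov_deg=60.0):
    """Keep only objects whose bearing from the user falls inside the
    camera's horizontal field of view, shrinking the visual search
    space before any image matching is attempted."""
    half_fov = fov_deg / 2.0
    visible = []
    for name, bearing_deg in candidates:
        # Signed angular difference wrapped into [-180, 180)
        diff = (bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= half_fov:
            visible.append(name)
    return visible

# Hypothetical annotated objects with bearings (degrees from north)
objects = [("printer", 10.0), ("door", 95.0), ("poster", 350.0)]
visible = prune_candidates(0.0, objects)
```

Cutting the candidate set this way reduces both the latency and the false-match rate of the subsequent visual search, which is the trade-off the paper's metrics target.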

Journal ArticleDOI
TL;DR: Based on the results of the pilot test, it is concluded that students were satisfied with HuMAR in terms of its usability and features; which in turn could have a positive impact in their learning process.

Patent
29 Jun 2015
TL;DR: In this article, the authors present a real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon, which employs real-time three-dimensional surface reconstruction for preoperative and intraoperative image registration.
Abstract: Embodiments disclose a real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon, which employs real-time three-dimensional surface reconstruction for preoperative and intraoperative image registration. Stereoscopic cameras provide real-time images of the scene including the patient. A stereoscopic video display is used by the surgeon, who sees a graphical representation of the preoperative or intraoperative images blended with the video images in a stereoscopic manner through a see-through display.

Patent
30 Dec 2015
TL;DR: In this article, a user authentication system includes an augmented reality device with a gesture analyzer configured for recognizing a user's gestures and an object renderer in communication with the gesture analyzers.
Abstract: A user authentication system includes an augmented reality device with a gesture analyzer configured for recognizing a user's gestures. The augmented reality device also includes an object renderer in communication with the gesture analyzer. The object renderer is configured for (i) rendering a virtual three-dimensional object for display to the user and (ii) modifying the shape of the virtual three-dimensional object based upon the recognized gestures.