Author

Michael R. Marner

Bio: Michael R. Marner is an academic researcher at the University of South Australia. His research focuses on topics including augmented reality and user interfaces. He has an h-index of 13 and has co-authored 27 publications receiving 394 citations.

Papers
Proceedings ArticleDOI
23 Dec 2013
TL;DR: Results of a study measuring user performance in a procedural task using Spatial Augmented Reality show that augmented annotations lead to significantly faster task completion speed, fewer errors, and reduced head movement, when compared to monitor based instructions.
Abstract: This paper presents results of a study measuring user performance in a procedural task using Spatial Augmented Reality (SAR). The task required participants to press sequences of buttons on two control panel designs in the correct order. Instructions for the task were shown either on a computer monitor, or projected directly onto the control panels. This work was motivated by discrepancies between the expectations from AR proponents and experimental findings. AR is often promoted as a way of improving user performance and understanding. With notable exceptions however, experimental results do not confirm these expectations. Reasons cited for results include limitations of current display technologies and misregistration caused by tracking and calibration errors. Our experiment utilizes SAR to remove these effects. Our results show that augmented annotations lead to significantly faster task completion speed, fewer errors, and reduced head movement, when compared to monitor based instructions. Subjectively, our results show augmented annotations are preferred by users.

63 citations

Proceedings ArticleDOI
19 Oct 2009
TL;DR: This paper presents a new user interface methodology for Spatial Augmented Reality systems, allowing an industrial designer to digitally airbrush onto an augmented physical model, masking the paint using a virtualized stencil.
Abstract: This paper presents a new user interface methodology for Spatial Augmented Reality systems. The methodology is based on a set of physical tools that are overloaded with logical functions. Visual feedback presents the logical mode of the tool to the user by projecting graphics onto the physical tools. This approach makes the tools malleable in their functionality, with this change conveyed to the user by changing the projected information. Our prototype application implements a two handed technique allowing an industrial designer to digitally airbrush onto an augmented physical model, masking the paint using a virtualized stencil.
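The stencil-masked airbrush idea can be illustrated with a small sketch (not the authors' implementation; the array layout, the Gaussian spray model, and the stencil representation are illustrative assumptions): paint is deposited with a soft falloff, and any texels covered by the virtual stencil receive none.

```python
import numpy as np

def airbrush_stroke(canvas, cx, cy, radius, color, stencil_mask):
    """Deposit a soft circular spray at (cx, cy), masked by a stencil.

    canvas: (H, W, 3) float array in [0, 1], the texture projected onto the model
    stencil_mask: (H, W) bool array; True where the virtual stencil blocks paint
    """
    h, w, _ = canvas.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = (xs - cx) ** 2 + (ys - cy) ** 2
    # Gaussian falloff approximates an airbrush spray cone.
    alpha = np.exp(-dist2 / (2 * (radius / 2) ** 2))
    alpha[dist2 > radius ** 2] = 0.0      # clip the spray footprint
    alpha[stencil_mask] = 0.0             # the stencil occludes the spray
    canvas[:] = canvas * (1 - alpha[..., None]) + np.asarray(color) * alpha[..., None]
    return canvas
```

Masking before compositing is what gives the hard stencil edge against the soft spray falloff, mirroring how a physical stencil blocks physical paint.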

43 citations

Journal ArticleDOI
TL;DR: The article describes how features of large-scale, projector-based augmented reality affect the design of spatial user interfaces for these environments and explores promising research directions and application domains.
Abstract: Spatial augmented reality applies the concepts of spatial user interfaces to large-scale, projector-based augmented reality. Such virtual environments have interesting characteristics. They deal with large physical objects, the projection surfaces are nonplanar, the physical objects provide natural passive haptic feedback, and the systems naturally support collaboration between users. The article describes how these features affect the design of spatial user interfaces for these environments and explores promising research directions and application domains.

42 citations

Proceedings ArticleDOI
22 Nov 2010
TL;DR: The results indicate that while slower, users can interact naturally with projected control panels and validate the use of spatial augmented reality for rapid iterative interface prototyping.
Abstract: This paper investigates the use of Spatial Augmented Reality in the prototyping of new human-machine interfaces, such as control panels or car dashboards. The prototyping system uses projectors to present the visual appearance of controls onto a mock-up of a product. Finger tracking is employed to allow real-time interactions with the controls. This technology can be used to quickly and inexpensively create and evaluate interface prototypes for devices. In the past, evaluating a prototype involved constructing a physical model of the device with working components such as buttons. We have conducted a user study to compare these two methods of prototyping and to validate the use of spatial augmented reality for rapid iterative interface prototyping. Participants of the study were required to press pairs of buttons in sequence and interaction times were measured. The results indicate that while slower, users can interact naturally with projected control panels.
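A minimal sketch of the finger-to-control hit test such a system needs (the class names and the 2D rectangle model are assumptions for illustration; the actual system tracks fingers in 3D against a projector-calibrated mock-up):

```python
class ProjectedButton:
    """An axis-aligned rectangle in the projected image space (a
    simplification; a real SAR system maps tracked fingertip positions
    through a calibrated projector/model transform first)."""

    def __init__(self, name, x, y, w, h):
        self.name, self.x, self.y, self.w, self.h = name, x, y, w, h

    def contains(self, fx, fy):
        return self.x <= fx <= self.x + self.w and self.y <= fy <= self.y + self.h

def hit_test(buttons, fx, fy):
    """Return the name of the button under the tracked fingertip, or None."""
    for b in buttons:
        if b.contains(fx, fy):
            return b.name
    return None
```

Running this test each frame against the tracked fingertip, and timestamping transitions from None to a button name, yields the interaction times the study measured.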

41 citations

Proceedings ArticleDOI
20 Mar 2010
TL;DR: This work designed and implemented space-distorting visualizations to address off-screen or occluded points of interest in augmented or mixed reality, and hopes that the initial results can inspire other researchers to also investigate space-distorting visualizations for mixed and augmented reality.
Abstract: Most of today's mobile internet devices contain facilities to display maps of the user's surroundings with points of interest embedded into the map. Other researchers have already explored complementary, egocentric visualizations of these points of interest using mobile mixed reality. Being able to perceive the point of interest in detail within the user's current context is desirable, however, it is challenging to display off-screen or occluded points of interest. We have designed and implemented space-distorting visualizations to address these situations. While this class of visualizations has been extensively studied in information visualization, we are not aware of any attempts to apply them to augmented or mixed reality. Based on the informal user feedback that we have gathered, we have performed several iterations on our visualizations. We hope that our initial results can inspire other researchers to also investigate space-distorting visualizations for mixed and augmented reality.
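One simple form of space distortion consistent with the idea above is a fisheye-style radial compression (the function, its parameter names, and the bounded border-band falloff are illustrative assumptions, not the paper's visualization): nearby points map linearly, while off-screen points are squeezed into a band near the display edge so they stay visible.

```python
import math

def distort_poi(dx, dy, view_radius=100.0, screen_radius=200.0, margin=40.0):
    """Radially compress a point-of-interest offset so off-screen POIs
    remain visible near the display border.

    Offsets within view_radius map linearly onto the screen; offsets
    beyond it are squeezed into a border band of width `margin`, so
    even very distant POIs stay on screen.
    """
    r = math.hypot(dx, dy)
    if r == 0.0:
        return 0.0, 0.0
    if r <= view_radius:
        s = (r / view_radius) * screen_radius        # linear, undistorted region
    else:
        # Compress everything beyond the view into a bounded border band.
        s = screen_radius + margin * (1.0 - view_radius / r)
    scale = s / r
    return dx * scale, dy * scale
```

Because the mapping preserves direction and only rescales distance, the distorted POI still points the way the user would have to turn or move, which is what makes this family of visualizations usable in an egocentric view.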

34 citations


Cited by
Patent
31 Aug 2011
TL;DR: In this patent, a method for modifying an image is presented: displaying an image comprising a portion of an object; determining whether an edge of the object lies within that portion; and detecting movement, in a member direction, of an operating member with respect to the edge.
Abstract: A method is provided for modifying an image. The method comprises displaying an image, the image comprising a portion of an object; and determining if an edge of the object is in a location within the portion. The method further comprises detecting movement, in a member direction, of an operating member with respect to the edge. The method still further comprises moving, if the edge is not in the location, the object in an object direction corresponding to the detected movement; and modifying, if the edge is in the location, the image in response to the detected movement, the modified image comprising the edge in the location.

434 citations

Proceedings ArticleDOI
22 Nov 2010
TL;DR: This paper provides a classification of perceptual issues in augmented reality into ones related to the environment, capturing, augmentation, display, and individual user differences, and describes current approaches to addressing these problems.
Abstract: This paper provides a classification of perceptual issues in augmented reality, created with a visual processing and interpretation pipeline in mind. We organize issues into ones related to the environment, capturing, augmentation, display, and individual user differences. We also illuminate issues associated with more recent platforms such as handhelds or projector-camera systems. Throughout, we describe current approaches to addressing these problems, and suggest directions for future research.

426 citations

Journal ArticleDOI
17 Apr 2018
TL;DR: It is found that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing.
Abstract: Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.

258 citations

Proceedings ArticleDOI
12 Sep 2016
TL;DR: This paper closes that gap by comparing HMD, tablet, and baseline paper instructions against in-situ projected instructions using an abstract Lego Duplo assembly task, showing that assembling parts is significantly faster with in-situ projection while locating positions is significantly slower with HMDs.
Abstract: With increasing complexity of assembly tasks and an increasing number of product variants, instruction systems providing cognitive support at the workplace are becoming more important. Different instruction systems for the workplace provide instructions on phones, tablets, and head-mounted displays (HMDs). Recently, many systems using in-situ projection for providing assembly instructions at the workplace have been proposed and became commercially available. Although comprehensive studies comparing HMD and tablet-based systems have been presented, in-situ projection has not been scientifically compared against state-of-the-art approaches yet. In this paper, we aim to close this gap by comparing HMD instructions, tablet instructions, and baseline paper instructions to in-situ projected instructions using an abstract Lego Duplo assembly task. Our results show that assembling parts is significantly faster using in-situ projection and locating positions is significantly slower using HMDs. Further, participants make less errors and have less perceived cognitive load using in-situ instructions compared to HMD instructions.

151 citations

Proceedings ArticleDOI
21 Jun 2017
TL;DR: This work provides results of a long-term study in an industrial workplace with an overall runtime of 11 full workdays and shows a decrease in performance for expert workers and a learning success for untrained workers.
Abstract: Due to increasing complexity of products and the demographic change at manual assembly workplaces, interactive and context-aware instructions for assembling products are becoming more and more important. Over the last years, many systems using head-mounted displays (HMDs) and in-situ projection have been proposed. We are observing a trend in assistive systems using in-situ projection for supporting workers during work tasks. Recent advances in technology enable robust detection of almost every work step, which is done at workplaces. With this improvement in robustness, a continuous usage of assistive systems at the workplace becomes possible. In this work, we provide results of a long-term study in an industrial workplace with an overall runtime of 11 full workdays. In our study, each participant assembled at least three full workdays using in-situ projected instructions. We separately considered two different user groups comprising expert and untrained workers. Our results show a decrease in performance for expert workers and a learning success for untrained workers.

130 citations