scispace - formally typeset
Author

Patrick Maier

Bio: Patrick Maier is an academic researcher from Technische Universität München. The author has contributed to research in topics: Augmented reality & User interface. The author has an h-index of 9, has co-authored 17 publications, and has received 267 citations.

Papers
Journal ArticleDOI
TL;DR: This work presents the architecture of the first release of the Neurorobotics Platform, a new web-based environment offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation.
Abstract: Combined efforts in the fields of neuroscience, computer science, and biology have enabled the design of biologically realistic brain models based on spiking neural networks. Proper validation of these models requires an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task. Because these brain models are, at the current stage, too complex to meet real-time constraints, they cannot be embedded in a real-world task; the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, so far no tool makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that lets them connect brain models to detailed simulations of robot bodies and environments and use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the level of required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). In its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models.
We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as neuroscientific experiments.
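The brain-body connector idea behind the Braitenberg experiment can be illustrated with a minimal sketch. Everything below (function names, the constant-gain dynamics) is a hypothetical illustration, not the platform's actual API: a transfer function maps simulated light-sensor readings to wheel-motor commands with contralateral wiring, so the vehicle steers toward a light source.

```python
# Hypothetical sketch of a Braitenberg-style brain-body transfer function.
# Not the Neurorobotics Platform's real API -- just the underlying idea of
# mapping sensor readings to motor commands through crossed connections.

def braitenberg_transfer(left_light: float, right_light: float,
                         gain: float = 1.0, bias: float = 0.1):
    """Map two light-sensor intensities (0..1) to (left, right) wheel speeds.

    Contralateral excitatory wiring: the left sensor drives the right
    wheel, so the vehicle turns toward the brighter side.
    """
    left_motor = bias + gain * right_light
    right_motor = bias + gain * left_light
    return left_motor, right_motor

# Light source on the left: left sensor reads 0.8, right reads 0.2.
# The right wheel spins faster, turning the vehicle toward the light.
left_speed, right_speed = braitenberg_transfer(0.8, 0.2)
```

In the platform described above, such a mapping would be mediated by a spiking network rather than a closed-form function, but the sensor-to-motor data flow is the same.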

104 citations

01 Jan 2009
TL;DR: Augmented Chemical Reactions visualizes models of molecules rendered into a camera picture at the position of special markers held in the users' hands, and shows the dynamic deformation of molecules when they come close to each other.
Abstract: This paper describes an approach for increasing the understanding and easing the learning of chemistry for students by visualizing and controlling virtual models of molecules in an intuitive way. With the help of Augmented Reality, we developed a tool named Augmented Chemical Reactions. This program visualizes models of molecules rendered into a camera picture at the position of special markers held in the hands of the users. The position and orientation of the molecules are controlled intuitively by moving and rotating the markers in front of a camera, so the virtual objects behave as if they were being manipulated directly. For a better understanding of chemistry, Augmented Chemical Reactions also shows the dynamic deformation of molecules when they come close to each other, giving users a better view of certain behaviors between molecules. The program has the potential to increase understanding and ease the learning of chemistry because of its intuitive control over the 3D structure of molecules. In the same way that Augmented Chemical Reactions can help students, it also has the potential to speed up the process of designing new molecules: scientists can inspect the created molecules and see whether they meet the spatial requirements for specific reactions. This approach avoids time-consuming laboratory reactions to create those molecules and test them for their desired attributes.
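The core mechanism described above, a virtual molecule rigidly following a tracked marker, amounts to applying the marker's pose (rotation plus translation) to the molecule's atom coordinates every frame. The sketch below is an illustration under simplified assumptions (rotation about a single axis, hypothetical function names), not the paper's actual code:

```python
import math

# Illustrative sketch: apply a tracked marker's pose (here a z-axis
# rotation followed by a translation) to a virtual molecule's atom
# coordinates, so the molecule follows the marker rigidly.

def rotate_z(point, angle_rad):
    """Rotate a 3D point around the z-axis by angle_rad."""
    x, y, z = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x - s * y, s * x + c * y, z)

def apply_marker_pose(atoms, angle_rad, translation):
    """Transform every atom by the marker's pose: rotate, then translate."""
    tx, ty, tz = translation
    transformed = []
    for p in atoms:
        x, y, z = rotate_z(p, angle_rad)
        transformed.append((x + tx, y + ty, z + tz))
    return transformed

# A two-atom "molecule", rotated 90 degrees and moved 1 unit along x,
# as if the user had twisted and shifted the marker in front of the camera.
molecule = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
moved = apply_marker_pose(molecule, math.pi / 2, (1.0, 0.0, 0.0))
```

A full system would use the complete 3x3 rotation matrix recovered from the marker detector, but the per-frame update is this same rigid transform.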

52 citations

Proceedings ArticleDOI
01 Oct 2009
TL;DR: A novel prototypical Underwater Augmented Reality (UWAR) system that provides visual aids to increase commercial divers' capability to detect, perceive, and understand elements in underwater environments is described.
Abstract: This paper describes the implementation of a novel prototypical Underwater Augmented Reality (UWAR) system that provides visual aids to increase commercial divers' capability to detect, perceive, and understand elements in underwater environments. During underwater operations, a great amount of stress is imposed on divers by environmental and working conditions such as pressure, visibility, weightlessness, and current. Those factors restrict divers' sensory inputs, cognition, and memory, which are essential for locating themselves within their surroundings and performing their tasks effectively. The focus of this research was to improve some of those conditions by adding elements to divers' views in order to increase awareness and safety in commercial diving operations. We accomplished this goal by assisting divers in locating the work site, keeping them constantly informed about orientation and position, and providing a 3D virtual model for an assembly task. The system consisted of a video see-through head-mounted display (HMD) with a webcam in front of it, protected by a custom waterproof housing placed over the diving mask. As a very first step, optical square markers were used for position and orientation (pose) tracking. The tracking was implemented with the ubiquitous-tracking software Ubitrack. Finally, we discuss the possible implications of a diver-machine synergy.

41 citations

Journal ArticleDOI
TL;DR: Augmented Chemical Reactions is an application that uses Augmented Reality to visualize and interact with the virtual molecules in a direct way to help chemistry students and researchers in developing and understanding new chemical molecules.
Abstract: Supporting chemistry students in learning, and researchers in developing and understanding new chemical molecules, is not an easy task. Computer applications try to support users by visualizing chemical properties and spatial relations. Thus far, most existing applications are controlled using ordinary input devices such as mice and keyboards. But these input devices have one problem: they must map a lower number of degrees of freedom onto the six-degree-of-freedom movements that determine the location and orientation of the virtual molecules. Augmented Chemical Reactions is an application that uses Augmented Reality to visualize and interact with the virtual molecules in a direct way. With the introduced 3D interaction methods, the work of students and researchers is simplified so that they can concentrate on the actual task.

26 citations

Proceedings ArticleDOI
01 Sep 2013
TL;DR: This demonstration shows an Augmented Reality tool to support teaching chemistry: a direct-manipulation user interface based on the augmented reality technique that enables users to better understand the spatial structure of the shown geometries.
Abstract: This demonstration shows an Augmented Reality tool to support teaching chemistry. Understanding spatial relations in and between molecules is an essential part of learning chemistry. As the techniques to show and simulate molecular behavior become faster and better, 3D applications that show molecules are becoming more and more popular, also in schools. Augmented Chemical Reactions is an application that shows the 3D spatial structure of molecules as well as the dynamics of the atoms in and between molecules. Instead of the commonly used mouse-and-keyboard interface for moving and rotating the virtual objects, the application offers an intuitive 3D user interface: a direct-manipulation interface based on the augmented reality technique. This user interface enables users to better understand the spatial structure of the shown geometries.

25 citations


Cited by
Journal ArticleDOI
TL;DR: The usability study showed that although this technology is not mature enough to be used massively in education, enthusiasm of middle-school students diminished most of the barriers found.
Abstract: In this paper, the authors show that augmented reality technology has a positive impact on the motivation of middle-school students. The Instructional Materials Motivation Survey (IMMS) (Keller, 2010) based on the ARCS motivation model (Keller, 1987a) was used to gather information; it considers four motivational factors: attention, relevance, confidence, and satisfaction. Motivational factors of attention and satisfaction in an augmented-reality-based learning environment were better rated than those obtained in a slides-based learning environment. When the impact of the augmented reality system was analyzed in isolation, the attention and confidence factors were the best rated. The usability study showed that although this technology is not mature enough to be used massively in education, enthusiasm of middle-school students diminished most of the barriers found.

780 citations

Journal ArticleDOI
17 Apr 2018
TL;DR: It is found that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing.
Abstract: Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.

258 citations

Journal ArticleDOI
TL;DR: The IoUT is introduced and its main differences with respect to the Internet of Things (IoT) are outlined and the proposed IoUT architecture is described.

252 citations

Journal ArticleDOI
06 Apr 2021
TL;DR: Loihi is a neuromorphic research processor designed to support a broad range of spiking neural networks with sufficient scale, performance, and features to deliver competitive results compared to state-of-the-art contemporary computing architectures.
Abstract: Deep artificial neural networks apply principles of the brain’s information processing that led to breakthroughs in machine learning spanning many problem domains. Neuromorphic computing aims to take this a step further, to chips more directly inspired by the form and function of biological neural circuits, so they can process new knowledge, adapt, behave, and learn in real time at low power levels. Despite several decades of research, until recently very few published results have shown that today’s neuromorphic chips can demonstrate quantitative computational value. This is now changing with the advent of Intel’s Loihi, a neuromorphic research processor designed to support a broad range of spiking neural networks with sufficient scale, performance, and features to deliver competitive results compared to state-of-the-art contemporary computing architectures. This survey reviews results obtained to date with Loihi across the major algorithmic domains under study, including deep learning approaches and novel approaches that aim to more directly harness the key features of spike-based neuromorphic hardware. While conventional feedforward deep neural networks show modest if any benefit on Loihi, more brain-inspired networks using recurrence, precise spike-timing relationships, synaptic plasticity, stochasticity, and sparsity perform certain computations with orders of magnitude lower latency and energy compared to state-of-the-art conventional approaches. These compelling neuromorphic networks solve a diverse range of problems representative of brain-like computation, such as event-based data processing, adaptive control, constrained optimization, sparse feature regression, and graph search.
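The spike-based computation this survey discusses can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the basic unit of most spiking networks. This is a generic textbook model, not Loihi's actual neuron model or API; the parameter values are arbitrary:

```python
# Minimal leaky integrate-and-fire neuron in discrete time. Illustrative
# only -- Loihi's neuron model, state precision, and programming interface
# differ from this sketch.

def lif_run(input_current, threshold=1.0, leak=0.9, v0=0.0):
    """Simulate one LIF neuron and return the time steps at which it spikes.

    Each step, the membrane potential decays by the `leak` factor,
    integrates the input, and resets to zero after crossing `threshold`.
    """
    v = v0
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + i
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

# A constant drive of 0.3 per step: the potential builds up, the neuron
# spikes roughly every few steps, and the potential resets each time.
spike_times = lif_run([0.3] * 20)
```

Information in such networks is carried by the timing and sparsity of these events rather than dense activations, which is where the latency and energy advantages cited above come from.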

237 citations

Proceedings ArticleDOI
29 Mar 2014
TL;DR: This work presents a method that utilizes dynamic 3D eye position measurements from an eye tracker in combination with pre-computed, static display calibration parameters and shows that the new calibration with eye tracking is more stable than repeated SPAAM calibrations.
Abstract: It is a common problem of AR applications that optical see-through head-mounted displays (OST-HMD) move on users' heads or are even temporarily taken off, thus requiring frequent (re)calibrations. If such calibrations involve user interactions, they are time-consuming and distract users from their applications. Furthermore, they inject user-dependent errors into the system setup and reduce users' acceptance of OST-HMDs. To overcome these problems, we present a method that utilizes dynamic 3D eye position measurements from an eye tracker in combination with pre-computed, static display calibration parameters. Our experiments provide a comparison of our calibration with SPAAM (Single Point Active Alignment Method) for several head-display conditions: in the first condition, repeated calibrations are conducted while keeping the display position on the user's head fixed. In the second condition, users take the HMD off and put it back on in between calibrations. The results show that our new calibration with eye tracking is more stable than repeated SPAAM calibrations. We close with a discussion of potential error sources which should be removed to achieve higher calibration quality.
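The geometric intuition behind combining a tracked eye position with a static display calibration can be sketched as follows. This is a simplified illustration under assumed geometry (display modeled as a fixed plane, axis-aligned coordinates, hypothetical function names), not the paper's actual method:

```python
# Illustrative sketch: with an eye tracker reporting the 3D eye position
# and a static, pre-calibrated display plane, the on-screen location of a
# world point is the intersection of the eye-to-point ray with that plane.
# Geometry and names are assumptions, not the paper's implementation.

def project_to_display(eye, point, screen_z):
    """Intersect the ray from `eye` through `point` with the plane z=screen_z.

    Returns the (x, y) point where the graphic should be drawn so that,
    seen from the current eye position, it overlays the world point.
    """
    ex, ey, ez = eye
    px, py, pz = point
    if pz == ez:
        raise ValueError("point lies in the eye plane; ray never hits screen")
    t = (screen_z - ez) / (pz - ez)
    return (ex + t * (px - ex), ey + t * (py - ey))

# When the eye shifts sideways (e.g. the HMD slips on the head), the
# required drawing position shifts too -- which is why a one-time static
# calibration drifts, and why feeding in live eye positions helps.
centered = project_to_display((0.0, 0.0, 0.0), (0.0, 0.0, 2.0), 1.0)
shifted = project_to_display((0.01, 0.0, 0.0), (0.0, 0.0, 2.0), 1.0)
```

In the paper's setup, the static part (display pose and intrinsics) is measured once per device, while the eye term is updated continuously by the tracker, removing the per-user alignment step that SPAAM requires.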

138 citations