Author

Daniel Bachmann

Bio: Daniel Bachmann is an academic researcher from the Technical University of Dortmund. The author has contributed to research on the topics Flood myth and Controller (computing). The author has an h-index of 6 and has co-authored 8 publications receiving 968 citations.

Papers
Journal ArticleDOI
14 May 2013-Sensors
TL;DR: The conclusions of this analysis can improve the development of applications for the Leap Motion Controller in the field of Human-Computer Interaction.
Abstract: The Leap Motion Controller is a new device for hand gesture controlled user interfaces with declared sub-millimeter accuracy. However, up to this point its capabilities in real environments have not been analyzed. Therefore, this paper presents a first study of a Leap Motion Controller. The main focus of attention is on the evaluation of the accuracy and repeatability. For an appropriate evaluation, a novel experimental setup was developed making use of an industrial robot with a reference pen allowing a position accuracy of 0.2 mm. Thereby, a deviation between a desired 3D position and the average measured positions below 0.2 mm has been obtained for static setups and of 1.2 mm for dynamic setups. The conclusions of this analysis can improve the development of applications for the Leap Motion Controller in the field of Human-Computer Interaction.
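The accuracy metric described above, the deviation between a commanded reference position and the average of repeated measurements, is straightforward to compute. Below is a minimal sketch of that calculation plus an RMS repeatability measure; it is not the paper's code, and all sample values are invented for illustration.

```python
import numpy as np

def accuracy_deviation(reference, samples):
    """Euclidean distance between a known 3D reference position
    and the mean of repeated position measurements (accuracy)."""
    mean_measured = np.mean(samples, axis=0)
    return np.linalg.norm(mean_measured - reference)

def repeatability(samples):
    """Spread of repeated measurements around their own mean
    (repeatability), reported as the RMS distance to the mean."""
    mean_measured = np.mean(samples, axis=0)
    return np.sqrt(np.mean(np.sum((samples - mean_measured) ** 2, axis=1)))

# Hypothetical example: reference pen held at (100, 150, 80) mm,
# five noisy controller readings of that position.
reference = np.array([100.0, 150.0, 80.0])
samples = np.array([
    [100.1, 150.0, 80.1],
    [ 99.9, 150.2, 79.9],
    [100.0, 149.9, 80.0],
    [100.2, 150.1, 80.1],
    [ 99.8, 150.0, 79.9],
])
print(f"accuracy deviation: {accuracy_deviation(reference, samples):.3f} mm")
print(f"repeatability (RMS): {repeatability(samples):.3f} mm")
```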

863 citations

Journal ArticleDOI
24 Dec 2014-Sensors
TL;DR: A Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller, compared with a standard mouse device, is presented; the LMC's pointing performance proves rather limited, at least with regard to the selection recognition it provides.
Abstract: This paper presents a Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. Up to this point, there has hardly been any systematic evaluation of this new system available. With an error rate of 7.8% for the LMC and 2.8% for the mouse device, movement times twice as large as those for the mouse, and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC.
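For context, a Fitts' law analysis of this kind rests on the Shannon index of difficulty ID = log2(D/W + 1) and the linear movement-time model MT = a + b·ID. The sketch below illustrates these standard formulas; the coefficients are hypothetical and are not the paper's fitted values.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return math.log2(distance / width + 1.0)

def predicted_movement_time(distance, width, a, b):
    """Fitts' law: MT = a + b * ID, with device-specific
    intercept a (s) and slope b (s/bit) fitted from trial data."""
    return a + b * index_of_difficulty(distance, width)

def throughput(distance, width, movement_time):
    """Throughput in bits/s, a common device comparison metric."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical coefficients for two devices (not the paper's values):
# a slower device shows a larger slope b, i.e. more time per bit.
for device, (a, b) in {"mouse": (0.1, 0.15), "LMC": (0.2, 0.35)}.items():
    mt = predicted_movement_time(distance=256, width=32, a=a, b=b)
    print(f"{device}: predicted MT = {mt:.2f} s, "
          f"throughput = {throughput(256, 32, mt):.2f} bits/s")
```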

130 citations

Journal ArticleDOI
07 Jul 2018-Sensors
TL;DR: The purpose of this paper is to survey state-of-the-art Human-Computer Interaction techniques with a focus on the special field of three-dimensional interaction, including an overview of currently available interaction devices, their fields of application, and the underlying methods for gesture design and recognition.
Abstract: Modern hardware and software development has led to an evolution of user interfaces from command-line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. Thus, the purpose of this paper is to survey state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their fields of application, and the underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.

104 citations

Posted Content
TL;DR: An agent-based numerical simulation of pedestrian dynamics is proposed in order to assess the behavior of pedestrians in public places in the context of contact transmission of infectious diseases like COVID-19, and to gather insights about exposure times and the overall effectiveness of distancing measures.
Abstract: With the Corona Virus Disease 2019 (COVID-19) pandemic spreading across the world, protective measures for containing the virus are essential, especially as long as no vaccine or effective treatment is available. One important measure is the so-called physical distancing or social distancing. In this paper, we propose an agent-based numerical simulation of pedestrian dynamics in order to assess the behaviour of pedestrians in public places in the context of contact transmission of infectious diseases like COVID-19, and to gather insights about exposure times and the overall effectiveness of distancing measures. To abide by the minimum distance of 1.5 m stipulated by the German government at an infection rate of 2%, our simulation results suggest that a density of one person per 16 m² or below is sufficient. The results of this study give insight about how physical distancing as a protective measure can be carried out more efficiently to help reduce the spread of COVID-19.
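The basic mechanics of such a simulation can be conveyed with a toy sketch: agents random-walk in a square room, and contact time is accumulated for every pair closer than the 1.5 m threshold. This is a deliberately crude simplification under invented parameters, not the authors' calibrated pedestrian-dynamics model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative, not the paper's calibrated values):
# 16 agents in a 16 m x 16 m room gives one person per 16 m^2.
N_AGENTS, ROOM_SIZE = 16, 16.0
MIN_DIST, STEP_STD, N_STEPS, DT = 1.5, 0.3, 1000, 0.1  # m, m, steps, s

positions = rng.uniform(0.0, ROOM_SIZE, size=(N_AGENTS, 2))
exposure = np.zeros((N_AGENTS, N_AGENTS))  # pairwise contact time (s)

for _ in range(N_STEPS):
    # Random-walk update, clipped to the room boundaries.
    positions = np.clip(positions + rng.normal(0.0, STEP_STD, positions.shape),
                        0.0, ROOM_SIZE)
    # Pairwise distances; accumulate time for pairs below the threshold.
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    close = (dist < MIN_DIST) & ~np.eye(N_AGENTS, dtype=bool)
    exposure[close] += DT

# Each pair is counted twice in the symmetric matrix, hence the / 2.
print(f"total pairwise exposure time: {exposure.sum() / 2:.1f} s")
```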

19 citations

Journal ArticleDOI
TL;DR: In this article, an agent-based numerical simulation of pedestrian dynamics is proposed in order to assess the behavior of pedestrians in public places in the context of contact transmission of infectious diseases like COVID-19, and to gather insights about exposure times and the overall effectiveness of distancing measures.
Abstract: With the coronavirus disease 2019 (COVID-19) pandemic spreading across the world, protective measures for containing the virus are essential, especially as long as no vaccine or effective treatment is available. One important measure is the so-called physical distancing or social distancing. In this paper, we propose an agent-based numerical simulation of pedestrian dynamics in order to assess the behavior of pedestrians in public places in the context of contact transmission of infectious diseases like COVID-19, and to gather insights about exposure times and the overall effectiveness of distancing measures. To abide by the minimum distance of 1.5 m stipulated by the German government at an infection rate of 2%, our simulation results suggest that a density of one person per 16 m² or below is sufficient. The results of this study give insight into how physical distancing as a protective measure can be carried out more efficiently to help reduce the spread of COVID-19.

18 citations


Cited by
Journal ArticleDOI
11 Jul 2016
TL;DR: It is demonstrated that Soli can be used for robust gesture recognition and can track gestures with sub-millimeter accuracy, running at over 10,000 frames per second on embedded hardware.
Abstract: This paper presents Soli, a new, robust, high-resolution, low-power, miniature gesture sensing technology for human-computer interaction based on millimeter-wave radar. We describe a new approach to developing a radar-based sensor optimized for human-computer interaction, building the sensor architecture from the ground up with the inclusion of radar design principles, high temporal resolution gesture tracking, a hardware abstraction layer (HAL), a solid-state radar chip and system architecture, interaction models and gesture vocabularies, and gesture recognition. We demonstrate that Soli can be used for robust gesture recognition and can track gestures with sub-millimeter accuracy, running at over 10,000 frames per second on embedded hardware.

667 citations

Proceedings ArticleDOI
01 Oct 2014
TL;DR: Experimental results present a comparison between the accuracy that can be obtained from the two devices on a subset of the American Manual Alphabet and show how, by combining the two feature sets, it is possible to achieve very high accuracy in real time.
Abstract: The recent introduction of novel acquisition devices like the Leap Motion and the Kinect makes it possible to obtain a very informative description of the hand pose that can be exploited for accurate gesture recognition. This paper proposes a novel hand gesture recognition scheme explicitly targeted to Leap Motion data. An ad-hoc feature set based on the positions and orientation of the fingertips is computed and fed into a multi-class SVM classifier in order to recognize the performed gestures. A set of features is also extracted from the depth data computed from the Kinect and combined with the Leap Motion ones in order to improve the recognition performance. Experimental results present a comparison between the accuracy that can be obtained from the two devices on a subset of the American Manual Alphabet and show how, by combining the two feature sets, it is possible to achieve very high accuracy in real time.
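A minimal sketch of this kind of recognition pipeline is shown below: per-frame fingertip coordinates are flattened into feature vectors and classified with a multi-class SVM (here via scikit-learn). The feature layout and data are invented stand-ins; the paper's actual feature set, kernel choice, and Kinect fusion step are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical data: 200 frames, 5 fingertips x (x, y, z) = 15 features
# per frame, with 4 gesture classes. Real features would come from the
# Leap Motion API (fingertip positions/orientations), not random noise.
n_frames, n_features, n_classes = 200, 15, 4
labels = rng.integers(0, n_classes, size=n_frames)
# Give each class a distinct offset so the toy problem is learnable.
features = rng.normal(0.0, 1.0, size=(n_frames, n_features)) + labels[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

# Multi-class SVM with an RBF kernel (one-vs-one under the hood).
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print(f"toy gesture accuracy: {clf.score(X_test, y_test):.2f}")
```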

391 citations

Journal ArticleDOI
21 Feb 2014-Sensors
TL;DR: The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot be used as a professional tracking system.
Abstract: We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot be used as a professional tracking system.
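The reported trend, measurement spread growing with distance from the controller, corresponds to a simple analysis: compute the standard deviation of repeated samples at each reference height and correlate it with height. The sketch below illustrates this on synthetic data whose noise is deliberately made to grow with height; none of the numbers are the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical measurements: 10 reference heights above the controller,
# 50 repeated 3D position samples each. Noise is made to grow with
# height to mimic the reported precision drop far from the device.
heights = np.linspace(50, 500, 10)        # mm above the controller
stds = []
for h in heights:
    noise_scale = 0.1 + 0.002 * h         # mm, grows with height
    samples = rng.normal(0.0, noise_scale, size=(50, 3))
    # Per-location precision: std of distances to the sample mean.
    dists = np.linalg.norm(samples - samples.mean(axis=0), axis=1)
    stds.append(dists.std())

# Linear (Pearson) correlation between height and measurement spread.
r = np.corrcoef(heights, np.array(stds))[0, 1]
print(f"correlation between height and std: r = {r:.2f}")
```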

379 citations

Journal ArticleDOI
TL;DR: This paper presents a survey of some recent works on hand gesture recognition using 3D depth sensors, and reviews the commercial depth sensors and public data sets that are widely used in this field.
Abstract: Three-dimensional hand gesture recognition has attracted increasing research interest in computer vision, pattern recognition, and human-computer interaction. The emerging depth sensors greatly inspired various hand gesture recognition approaches and applications, which were severely limited in the 2D domain with conventional cameras. This paper presents a survey of some recent works on hand gesture recognition using 3D depth sensors. We first review the commercial depth sensors and public data sets that are widely used in this field. Then, we review the state-of-the-art research for 3D hand gesture recognition in four aspects: 1) 3D hand modeling; 2) static hand gesture recognition; 3) hand trajectory gesture recognition; and 4) continuous hand gesture recognition. While the emphasis is on 3D hand gesture recognition approaches, the related applications and typical systems are also briefly summarized for practitioners.

291 citations

Journal ArticleDOI
TL;DR: A novel feature vector with depth information is computed and fed into the Hidden Conditional Neural Field (HCNF) classifier to recognize dynamic hand gestures; experimental results show that the proposed method is suitable for certain dynamic hand gesture recognition tasks.
Abstract: Dynamic hand gesture recognition is a crucial but challenging task in the pattern recognition and computer vision communities. In this paper, we propose a novel feature vector which is suitable for representing dynamic hand gestures, and present a satisfactory solution to recognizing dynamic hand gestures with a Leap Motion Controller (LMC) only; neither has been reported in other papers. The feature vector with depth information is computed and fed into the Hidden Conditional Neural Field (HCNF) classifier to recognize dynamic hand gestures. The systematic framework of the proposed method includes two main steps: feature extraction and classification with the HCNF classifier. The proposed method is evaluated on two dynamic hand gesture datasets with frames acquired with an LMC. The recognition accuracy is 89.5% for the LeapMotion-Gesture3D dataset and 95.0% for the Handicraft-Gesture dataset. Experimental results show that the proposed method is suitable for certain dynamic hand gesture recognition tasks.

201 citations