Dadhichi Shukla
Researcher at University of Innsbruck
Publications - 13
Citations - 147
Dadhichi Shukla is an academic researcher at the University of Innsbruck. The author has contributed to research on topics including Robot and Gesture recognition, has an h-index of 6, and has co-authored 12 publications receiving 100 citations.
Papers
Book Chapter
The Effects of Social Gaze in Human-Robot Collaborative Assembly
Kerstin Fischer, Lars Christian Jensen, Franziska Kirstein, Sebastian Stabinger, Ozgur Erkent, Dadhichi Shukla, Justus Piater, +6 more
TL;DR: It is concluded that social gaze in assembly scenarios fulfills floor-management functions and provides an indicator of the robot's affordances, but does not influence the likability, mutual interest, or suspected competence of the robot.
Proceedings Article
Probabilistic Detection of Pointing Directions for Human-Robot Interaction
TL;DR: This work proposes a functional model that incorporates two types of pointing, finger pointing and tool pointing (using an object in hand), and presents a probabilistic, appearance-based object detection framework to detect pointing gestures and robustly estimate the pointing direction.
Proceedings Article
A multi-view hand gesture RGB-D dataset for human-robot interaction scenarios
TL;DR: A baseline evaluation of the Innsbruck Multi-View Hand Gesture (IMHG) dataset, recorded with two RGB-D cameras, is presented, adopting a probabilistic, appearance-based framework to detect hand gestures and estimate their poses using the two cameras.
Book Chapter
Integration of Probabilistic Pose Estimates from Multiple Views
TL;DR: An approach to multi-view object detection and pose estimation is proposed that considers combinations of single-view estimates and can produce improved results even when the individual pose estimates are incoherent.
Journal Article
Learning Semantics of Gestural Instructions for Human-Robot Collaboration
TL;DR: This work presents the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and intended robotic manipulation actions, and investigates how the accuracy of gesture detection affects the number of interactions required to complete the task.