Tim Welschehold
Researcher at University of Freiburg
Publications - 27
Citations - 365
Tim Welschehold is an academic researcher at the University of Freiburg. His work spans computer science and robotics. He has an h-index of 7 and has co-authored 18 publications that have received 218 citations.
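For readers unfamiliar with the metric mentioned above: a researcher's h-index is the largest number h such that h of their papers each have at least h citations. A minimal sketch of the computation (the citation counts below are illustrative, not this researcher's actual per-paper counts):

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the paper at this rank still clears the threshold
        else:
            break
    return h

# Example: five papers with these citation counts yield an h-index of 4,
# since four papers have at least 4 citations but not five with >= 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```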
Papers
Proceedings ArticleDOI
3D Human Pose Estimation in RGBD Images for Robotic Task Learning
TL;DR: The authors propose an approach to estimate 3D human pose in real-world units from a single RGBD image and show that it outperforms monocular 3D pose estimation from color alone as well as pose estimation from depth alone.
Journal ArticleDOI
Microcracks in Silicon Wafers I: Inline Detection and Implications of Crack Morphology on Wafer Strength
Matthias Demant, Tim Welschehold, Marcus Oswald, Sebastian Bartsch, Thomas Brox, Stephan Schoenfelder, Stefan Rein, and 6 more
TL;DR: A pattern recognition approach based on local descriptors and support vector classification is proposed to detect microcracks in photoluminescence (PL) and infrared (IR) images of as-cut wafers.
Proceedings ArticleDOI
Learning mobile manipulation actions from human demonstrations
TL;DR: A novel approach is presented for learning joint robot base and gripper action models by observing demonstrations carried out by a human teacher; the robot is shown to learn complex mobile manipulation tasks such as opening a door and driving through it.
Journal ArticleDOI
Learning Kinematic Feasibility for Mobile Manipulation Through Deep Reinforcement Learning
TL;DR: A deep reinforcement learning approach is proposed that learns feasible dynamic motions for a mobile base while the end-effector follows a task-space trajectory generated by an arbitrary external system.