Author

Heinz-Bodo Schmiedmayer

Bio: Heinz-Bodo Schmiedmayer is an academic researcher from Vienna University of Technology. The author has contributed to research on the topics of Kinematics and Hinge joint. The author has an h-index of 9 and has co-authored 13 publications receiving 655 citations.

Papers
Journal ArticleDOI
TL;DR: The resulting taxonomy incorporates all grasps found in the reviewed taxonomies that complied with the grasp definition, and it is shown that, due to the nature of the classification, the 33 grasp types might be reduced to a set of 17 more general grasps if only the hand configuration is considered without the object shape/size.
Abstract: In this paper, we analyze and compare existing human grasp taxonomies and synthesize them into a single new taxonomy (dubbed “The GRASP Taxonomy” after the GRASP project funded by the European Commission). We consider only static and stable grasps performed by one hand. The goal is to extract the largest set of different grasps that were referenced in the literature and arrange them in a systematic way. The taxonomy provides a common terminology to define human hand configurations and is important in many domains such as human–computer interaction and tangible user interfaces, where an understanding of the human is the basis for a proper interface. Overall, 33 different grasp types are found and arranged into the GRASP taxonomy. Within the taxonomy, grasps are arranged according to 1) opposition type, 2) the virtual finger assignments, 3) type in terms of power, precision, or intermediate grasp, and 4) the position of the thumb. The resulting taxonomy incorporates all grasps found in the reviewed taxonomies that complied with the grasp definition. We also show that due to the nature of the classification, the 33 grasp types might be reduced to a set of 17 more general grasps if only the hand configuration is considered without the object shape/size.
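As a rough illustration of how the taxonomy's four ordering dimensions can be encoded, the Python sketch below groups grasp entries by their hand configuration alone, which is the mechanism behind the reduction from 33 types toward 17 general grasps. The field names and the three sample entries are illustrative stand-ins, not the paper's data.

```python
from dataclasses import dataclass

# Illustrative encoding of the four ordering dimensions of the GRASP
# taxonomy; the concrete grasp entries below are hypothetical examples.
@dataclass(frozen=True)
class Grasp:
    name: str
    opposition: str       # 'palm', 'pad', or 'side'
    virtual_fingers: str  # e.g. '2', '2-5'
    grasp_class: str      # 'power', 'precision', or 'intermediate'
    thumb: str            # 'abducted' or 'adducted'

    def hand_configuration(self):
        # Ignoring object shape/size: grasps sharing this key merge,
        # which is how 33 types collapse toward 17 general grasps.
        return (self.opposition, self.virtual_fingers,
                self.grasp_class, self.thumb)

grasps = [
    Grasp('large diameter', 'palm', '2-5', 'power', 'abducted'),
    Grasp('small diameter', 'palm', '2-5', 'power', 'abducted'),
    Grasp('precision disk', 'pad', '2-5', 'precision', 'abducted'),
]

general = {g.hand_configuration() for g in grasps}
print(f'{len(grasps)} grasp types -> {len(general)} hand configurations')
```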

636 citations

Journal ArticleDOI
TL;DR: A metric for comparing the anthropomorphic motion capability of robotic and prosthetic hands is proposed based on the evaluation of how many different postures or configurations a hand can perform by studying the reachable set of fingertip poses.
Abstract: We propose a metric for comparing the anthropomorphic motion capability of robotic and prosthetic hands. The metric is based on the evaluation of how many different postures or configurations a hand can perform by studying the reachable set of fingertip poses. To define a benchmark for comparison, we first generate data with human subjects based on an extensive grasp taxonomy. We then develop a methodology for comparison using generative, nonlinear dimensionality reduction techniques. We assess the performance of different hands with respect to the human hand and with respect to each other. The method can be used to compare other types of kinematic structures.
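A minimal sketch of the comparison idea: embed recorded fingertip poses with a dimensionality-reduction model and score how much of the pose space each hand covers. The paper uses generative nonlinear techniques; PCA below is a simple linear stand-in, the pose data are random placeholders, and the capability score is a crude invention for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for recorded fingertip poses: rows are hand postures, columns
# stack the 6-DOF pose of each of five fingertips. Real data would come
# from motion capture of grasps drawn from a grasp taxonomy.
rng = np.random.default_rng(0)
human_poses = rng.normal(size=(500, 30))
robot_poses = 0.5 * rng.normal(size=(500, 30))  # a less dexterous hand

def capability(poses, var_kept=0.95):
    """Crude score: geometric-mean spread of the principal components
    needed to retain `var_kept` of the pose variance."""
    pca = PCA().fit(poses)
    k = np.searchsorted(np.cumsum(pca.explained_variance_ratio_), var_kept) + 1
    return np.prod(np.sqrt(pca.explained_variance_[:k])) ** (1.0 / k)

ratio = capability(robot_poses) / capability(human_poses)
print(f'robot hand reaches ~{ratio:.2f} of the human score')
```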

76 citations

Journal ArticleDOI
TL;DR: By comparing the error functions derived with the beam model to the correction formulas given in the literature, an improved algorithm is proposed; it should help biomechanists understand the basic concepts of determining the point of force application with force plates and construct custom-made force plates for specific applications.

46 citations

Journal ArticleDOI
TL;DR: It is shown that the errors in the COP depend on the load distribution; two examples are presented: a simulated balance study and different pressure patterns during walking.
Abstract: Errors of up to ±30 mm in determining the center of pressure (COP) with piezoelectric force plates have been reported in the literature. To compensate for these errors, correction formulas based on measurements with single-point loads have been proposed. In this paper, it is shown that the errors in the COP depend on the load distribution. Two examples are presented: (1) a simulated balance study, and (2) different pressure patterns during walking. Accurate corrections can only be made for forces distributed over a small area. Errors are expected to be overcompensated if there are only a few pressure peaks separated by large distances; after compensation, these errors can be as large as the statistical errors (5.8 ± 3.7 mm). For certain situations, it is probably better not to use correction formulas.
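For context, the sketch below shows how the COP is obtained from force-plate outputs and how a point-load-style correction would be applied. The relations COP_x = -M_y/F_z and COP_y = M_x/F_z are the standard ones for a plate surface at z = 0; the polynomial correction and its coefficients are a hypothetical stand-in, not the published formulas the paper evaluates.

```python
import numpy as np

# Standard force-plate relations (plate surface at z = 0):
#   COP_x = -M_y / F_z,   COP_y = M_x / F_z

def cop(Fz, Mx, My):
    return np.array([-My / Fz, Mx / Fz])

def corrected_cop(Fz, Mx, My, a=1e-4, b=1e-4):
    # Point-load-style polynomial correction; coefficients here are
    # illustrative, not those fitted in the literature.
    x, y = cop(Fz, Mx, My)
    return np.array([x - a * x * (x**2 + y**2),
                     y - b * y * (x**2 + y**2)])

# A distributed load (e.g. heel and forefoot peaks acting together)
# yields a combined COP between the peaks, where a correction fitted
# to single-point loads can overcompensate, as the paper argues.
print(cop(Fz=700.0, Mx=35.0, My=-70.0))           # -> [0.1, 0.05] m
print(corrected_cop(Fz=700.0, Mx=35.0, My=-70.0))
```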

27 citations


Cited by
Journal ArticleDOI
TL;DR: This work uses reinforcement learning (RL) to learn dexterous in-hand manipulation policies that can perform vision-based object reorientation on a physical Shadow Dexterous Hand, and these policies transfer to the physical robot despite being trained entirely in simulation.
Abstract: We use reinforcement learning (RL) to learn dexterous in-hand manipulation policies that can perform vision-based object reorientation on a physical Shadow Dexterous Hand. The training is performed...
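The sim-to-real recipe rests on randomizing the simulated physics so the policy cannot overfit one simulator instance. The toy loop below illustrates only that idea: the one-step "environment" and the finite-difference update are placeholders, whereas the actual system trains a vision-based policy on a full Shadow Hand simulation.

```python
import numpy as np

# Domain randomization, minimally: physics parameters change every
# episode, so the learned controller must work across all of them.
rng = np.random.default_rng(0)

def rollout(friction, mass, gain):
    """One-step stand-in for an in-hand reorientation episode."""
    target, state = 1.0, 0.0
    state += gain * (target - state) * friction / mass  # fake dynamics
    return -(target - state) ** 2                       # reward: -error^2

gain, lr, eps = 0.2, 0.1, 0.05
for episode in range(300):
    friction = rng.uniform(0.7, 1.3)   # randomized every episode
    mass = rng.uniform(0.8, 1.2)
    grad = (rollout(friction, mass, gain + eps)
            - rollout(friction, mass, gain - eps)) / (2 * eps)
    gain += lr * grad                  # crude policy improvement
print(f'gain after randomized training: {gain:.2f}')  # robust, near 1.0
```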

1,428 citations

Journal ArticleDOI
01 May 2019 - Nature
TL;DR: Tactile patterns obtained from a scalable sensor-embedded glove and deep convolutional neural networks help to explain how the human hand can identify and grasp individual objects and estimate their weights.
Abstract: Humans can feel, weigh and grasp diverse objects, and simultaneously infer their material properties while applying the right amount of force—a challenging set of tasks for a modern robot [1]. Mechanoreceptor networks that provide sensory feedback and enable the dexterity of the human grasp [2] remain difficult to replicate in robots. Whereas computer-vision-based robot grasping strategies [3–5] have progressed substantially with the abundance of visual data and emerging machine-learning tools, there are as yet no equivalent sensing platforms and large-scale datasets with which to probe the use of the tactile information that humans rely on when grasping objects. Studying the mechanics of how humans grasp objects will complement vision-based robotic object handling. Importantly, the inability to record and analyse tactile signals currently limits our understanding of the role of tactile information in the human grasp itself—for example, how tactile maps are used to identify objects and infer their properties is unknown [6]. Here we use a scalable tactile glove and deep convolutional neural networks to show that sensors uniformly distributed over the hand can be used to identify individual objects, estimate their weight and explore the typical tactile patterns that emerge while grasping objects. The sensor array (548 sensors) is assembled on a knitted glove, and consists of a piezoresistive film connected by a network of conductive thread electrodes that are passively probed. Using a low-cost (about US$10) scalable tactile glove sensor array, we record a large-scale tactile dataset with 135,000 frames, each covering the full hand, while interacting with 26 different objects. This set of interactions with different objects reveals the key correspondences between different regions of a human hand while it is manipulating objects. Insights from the tactile signatures of the human grasp—through the lens of an artificial analogue of the natural mechanoreceptor network—can thus aid the future design of prosthetics [7], robot grasping tools and human–robot interactions [1,8–10].
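A minimal sketch of a tactile-frame classifier in the spirit of this work: pressure maps in, one of 26 object labels out. The 32×32 frame layout and the tiny architecture are assumptions for illustration; the published network and training setup differ.

```python
import torch
import torch.nn as nn

# Toy classifier for single-channel tactile pressure maps. The glove's
# 548 sensor readings are assumed to be laid out on a 32x32 grid.
class TactileCNN(nn.Module):
    def __init__(self, n_objects=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_objects)

    def forward(self, x):           # x: (batch, 1, 32, 32) pressure map
        return self.classifier(self.features(x).flatten(1))

model = TactileCNN()
frames = torch.rand(4, 1, 32, 32)   # placeholder tactile frames
print(model(frames).shape)          # -> torch.Size([4, 26])
```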

623 citations

Journal ArticleDOI
TL;DR: A model of hands and bodies interacting together is formulated and fit to full-body 4D sequences, yielding results that move naturally with detailed hand motions and a realism not seen before in full-body performance capture.
Abstract: Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low-resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural, activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full body models that move naturally with detailed hand motions and a realism not seen before in full body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.
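The additive structure described here can be sketched compactly: vertices are a template plus linear shape blend shapes plus pose-dependent corrective blend shapes, prior to linear blend skinning. The arrays below are random placeholders (the learned parameters ship with the model at http://mano.is.tue.mpg.de), and the 135 pose features reflect a SMPL-style reading of the formulation (15 joint rotation matrices minus identity).

```python
import numpy as np

# Skeleton of the MANO/SMPL-style additive model; linear blend skinning
# is omitted. All arrays are random placeholders, not learned values.
rng = np.random.default_rng(0)
V, n_shape, n_pose = 778, 10, 135   # 778 vertices, 10 shape components

template = rng.normal(size=(V, 3))             # mean hand mesh
shape_dirs = rng.normal(size=(V, 3, n_shape))  # shape blend shapes
pose_dirs = rng.normal(size=(V, 3, n_pose))    # pose corrective shapes

beta = rng.normal(size=n_shape)      # subject identity coefficients
pose_feat = rng.normal(size=n_pose)  # flattened (R(theta) - I) per joint

vertices = template + shape_dirs @ beta + pose_dirs @ pose_feat
print(vertices.shape)  # -> (778, 3), ready for linear blend skinning
```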

536 citations

Journal ArticleDOI
TL;DR: In this paper, a low-cost open-source consumer 3D printer and a commercially available printing material were identified for printing soft pneumatic actuators with complex inner geometry and high degree of freedom.
Abstract: This work presents a novel technique for direct 3D printing of soft pneumatic actuators using 3D printers based on fused deposition modeling (FDM) technology. Existing fabrication techniques for soft pneumatic actuators with complex inner geometry are normally time-consuming and involve multistep processes. A low-cost open-source consumer 3D printer and a commercially available printing material were identified for printing soft pneumatic actuators with complex inner geometry and high degree of freedom. We investigated the material properties of the printing material, simulated the mechanical behavior of the printed actuators, characterized the performances of the actuators in terms of their bending capability, output forces, as well as durability, and demonstrated the potential soft robotic applications of the 3D printed actuators. Using the 3D printed actuators, we developed a soft gripper that was able to grasp and lift heavy objects with high payload-to-weight ratio, which demonstrated that the ...

413 citations

Journal ArticleDOI
21 Jun 2019 - Science
TL;DR: The progress made in robotics to emulate humans’ ability to grab, hold, and manipulate objects is reviewed, with a focus on designing humanlike hands capable of using tools.
Abstract:

BACKGROUND: Humans have a fantastic ability to manipulate objects of various shapes, sizes, and materials and can control the objects’ position in confined spaces with the advanced dexterity capabilities of our hands. Building machines inspired by human hands, with the functionality to autonomously pick up and manipulate objects, has always been an essential component of robotics. The first robot manipulators date back to the 1960s and are some of the first robotic devices ever constructed. In these early days, robotic manipulation consisted of carefully prescribed movement sequences that a robot would execute with no ability to adapt to a changing environment. As time passed, robots gradually gained the ability to automatically generate movement sequences, drawing on artificial intelligence and automated reasoning. Robots would stack boxes according to size, weight, and so forth, extending beyond geometric reasoning. This task also required robots to handle errors and uncertainty in sensing at run time, given that the slightest imprecision in the position and orientation of stacked boxes might cause the entire tower to topple. Methods from control theory also became instrumental for enabling robots to comply with the environment’s natural uncertainty by empowering them to adapt exerted forces upon contact. The ability to stably vary forces upon contact expanded robots’ manipulation repertoire to more-complex tasks, such as inserting pegs in holes or hammering. However, none of these actions truly demonstrated fine or in-hand manipulation capabilities, and they were commonly performed using simple two-fingered grippers. To enable multipurpose fine manipulation, roboticists focused their efforts on designing humanlike hands capable of using tools. Wielding a tool in-hand became a problem of its own, and a variety of advanced algorithms were developed to facilitate stable holding of objects and provide optimality guarantees. Because optimality was difficult to achieve in a stochastic environment, from the 1990s onward researchers aimed to increase the robustness of object manipulation at all levels. These efforts initiated the design of sensors and hardware for improved control of hand–object contacts. Studies that followed were focused on robust perception for coping with object occlusion and noisy measurements, as well as on adaptive control approaches to infer an object’s physical properties, so as to handle objects whose properties are unknown or change as a result of manipulation.

ADVANCES: Roboticists are still working to develop robots capable of sorting and packaging objects, chopping vegetables, and folding clothes in unstructured and dynamic environments. Robots used for modern manufacturing have accomplished some of these tasks in structured settings that still require fences between the robots and human operators to ensure safety. Ideally, robots should be able to work side by side with humans, offering their strength to carry heavy loads while presenting no danger. Over the past decade, robots have gained new levels of dexterity. This enhancement is due to breakthroughs in mechanics with sensors for perceiving touch along a robot’s body and new mechanics for soft actuation to offer natural compliance. Most notably, this development leverages the immense progress in machine learning to encapsulate models of uncertainty and support further advances in adaptive and robust control. Learning to manipulate in real-world settings is costly in terms of both time and hardware. To further elaborate on data-driven methods but avoid generating examples with real, physical systems, many researchers use simulation environments. Still, grasping and dexterous manipulation require a level of reality that existing simulators are not yet able to deliver—for example, in the case of modeling contacts for soft and deformable objects. Two roads are hence pursued: The first draws inspiration from the way humans acquire interaction skills and prompts robots to learn skills from observing humans performing complex manipulation. This allows robots to acquire manipulation capabilities in only a few trials. However, generalizing the acquired knowledge to apply to actions that differ from those previously demonstrated remains difficult. The second road constructs databases of real object manipulation, with the goal to better inform the simulators and generate examples that are as realistic as possible. Yet achieving realistic simulation of friction, material deformation, and other physical properties may not be possible anytime soon, and real experimental evaluation will be unavoidable for learning to manipulate highly deformable objects.

OUTLOOK: Despite many years of software and hardware development, achieving dexterous manipulation capabilities in robots remains an open problem—albeit an interesting one, given that it necessitates improved understanding of human grasping and manipulation techniques. We build robots to automate tasks but also to provide tools for humans to easily perform repetitive and dangerous tasks while avoiding harm. Achieving robust and flexible collaboration between humans and robots is hence the next major challenge. Fences that currently separate humans from robots will gradually disappear, and robots will start manipulating objects jointly with humans. To achieve this objective, robots must become smooth and trustable partners that interpret humans’ intentions and respond accordingly. Furthermore, robots must acquire a better understanding of how humans interact and must attain real-time adaptation capabilities. There is also a need to develop robots that are safe by design, with an emphasis on soft and lightweight structures as well as control and planning methodologies based on multisensory feedback.

371 citations