
Showing papers by "Willow Garage published in 2014"


Book ChapterDOI
01 Jan 2014
TL;DR: This work combines aspects such as scene interpretation from 3D range data, grasp planning, motion planning, and grasp failure identification and recovery using tactile sensors, aiming to address the uncertainty due to sensor and execution errors.
Abstract: We present a complete software architecture for reliable grasping of household objects. Our work combines aspects such as scene interpretation from 3D range data, grasp planning, motion planning, and grasp failure identification and recovery using tactile sensors. We build upon, and add several new contributions to the significant prior work in these areas. A salient feature of our work is the tight coupling between perception (both visual and tactile) and manipulation, aiming to address the uncertainty due to sensor and execution errors. This integration effort has revealed new challenges, some of which can be addressed through system and software engineering, and some of which present opportunities for future research. Our approach is aimed at typical indoor environments, and is validated by long running experiments where the PR2 robotic platform was able to consistently grasp a large variety of known and unknown objects. The set of tools and algorithms for object grasping presented here have been integrated into the open-source Robot Operating System (ROS).
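
As an illustration of how the stages described above fit together, here is a minimal Python sketch of the control flow: scene interpretation, grasp planning, motion planning, and tactile failure recovery composed into one retry loop. All class and function names are hypothetical placeholders, not the actual ROS or PR2 interfaces; the stubs stand in for the real perception and planning components.

```python
import random
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class GraspCandidate:
    pose: tuple      # 6-DOF gripper pose (x, y, z, roll, pitch, yaw)
    quality: float   # planner-assigned quality score


# --- placeholder components (the real system wires these to ROS nodes) ------
def segment_scene(cloud) -> list:                  # scene interpretation from 3D range data
    return ["object_0"]

def plan_grasps(obj) -> List[GraspCandidate]:      # grasp planning
    return [GraspCandidate((0.5, 0.0, 0.8, 0.0, 1.57, 0.0), 0.9)]

def plan_motion(pose) -> Optional[list]:           # collision-free arm trajectory
    return [pose]

def execute(trajectory): pass
def close_gripper(): pass
def open_gripper(): pass

def tactile_contact_ok() -> bool:                  # grasp failure identification
    return random.random() > 0.2


# --- the pipeline ------------------------------------------------------------
def grasp_object(scene_cloud, max_attempts: int = 3) -> bool:
    """Try successive grasp candidates, recovering on tactile failure."""
    objects = segment_scene(scene_cloud)
    if not objects:
        return False
    grasps = sorted(plan_grasps(objects[0]), key=lambda g: -g.quality)
    for grasp in grasps[:max_attempts]:
        trajectory = plan_motion(grasp.pose)
        if trajectory is None:
            continue                 # unreachable grasp, try the next one
        execute(trajectory)
        close_gripper()
        if tactile_contact_ok():     # tactile sensors confirm the object is held
            return True
        open_gripper()               # recover and try the next candidate
    return False


if __name__ == "__main__":
    print("grasp succeeded:", grasp_object(scene_cloud=None))
```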

204 citations


Proceedings ArticleDOI
01 Sep 2014
TL;DR: A new method called layered costmaps is created and implemented, which works by separating the processing of costmap data into semantically-separated layers; this results in faster path planning in practical use and exhibits a cleaner separation of concerns than the original architecture.
Abstract: Many navigation systems, including the ubiquitous ROS navigation stack, perform path-planning on a single costmap, in which the majority of information is stored in a single grid. This approach is quite successful at generating collision-free paths of minimal length, but it can struggle in dynamic, people-filled environments when the values in the costmap expand beyond occupied or free space. We have created and implemented a new method called layered costmaps, which work by separating the processing of costmap data into semantically-separated layers. Each layer tracks one type of obstacle or constraint, and then modifies a master costmap which is used for the path planning. We show how the algorithm can be integrated with the open-source ROS navigation stack, and how our approach is easier to fine-tune to specific environmental contexts than the existing monolithic one. Our design also results in faster path planning in practical use, and exhibits a cleaner separation of concerns than the original architecture. The new algorithm also makes it possible to represent complex cost values in order to create navigation behavior for a wide range of contexts.
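
To make the layered idea concrete, the sketch below shows one way such layers could be structured: each layer tracks one kind of obstacle or constraint and writes into a shared master grid that the planner reads. The class names and the update protocol are simplified assumptions, not the actual costmap_2d plugin API.

```python
import numpy as np

FREE, LETHAL = 0, 254


class Layer:
    def update(self, master: np.ndarray) -> None:
        raise NotImplementedError


class StaticMapLayer(Layer):
    """Occupancy from the static map."""
    def __init__(self, occupancy: np.ndarray):
        self.occupancy = occupancy

    def update(self, master: np.ndarray) -> None:
        master[self.occupancy > 0] = LETHAL


class ObstacleLayer(Layer):
    """Obstacles observed by range sensors."""
    def __init__(self):
        self.cells = set()

    def mark(self, i: int, j: int) -> None:
        self.cells.add((i, j))

    def update(self, master: np.ndarray) -> None:
        for i, j in self.cells:
            master[i, j] = LETHAL


class InflationLayer(Layer):
    """Spread cost outward from lethal cells so plans keep clearance."""
    def __init__(self, radius: int = 2, step: int = 60):
        self.radius, self.step = radius, step

    def update(self, master: np.ndarray) -> None:
        for i, j in np.argwhere(master == LETHAL):
            for di in range(-self.radius, self.radius + 1):
                for dj in range(-self.radius, self.radius + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < master.shape[0] and 0 <= jj < master.shape[1]:
                        cost = max(0, LETHAL - self.step * (abs(di) + abs(dj)))
                        master[ii, jj] = max(master[ii, jj], cost)


def build_master_costmap(shape, layers) -> np.ndarray:
    """Recompute the master costmap by applying each layer in order."""
    master = np.full(shape, FREE, dtype=np.int32)
    for layer in layers:
        layer.update(master)
    return master


if __name__ == "__main__":
    static = np.zeros((10, 10), dtype=np.int32)
    static[4, 4] = 1
    obstacles = ObstacleLayer()
    obstacles.mark(7, 2)
    master = build_master_costmap(
        (10, 10), [StaticMapLayer(static), obstacles, InflationLayer()])
    print(master)
```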

188 citations


Journal ArticleDOI
TL;DR: A customizable human kinematic model that extracts skeletons from RGB-D sensor data that adapts on-line to difficult unstructured scenes taken from a moving camera and benefits from using both color and depth data is presented.

119 citations


Journal ArticleDOI
TL;DR: This work designs, optimizes, and demonstrates the behavior of a tendon-driven robotic gripper performing parallel, enveloping and fingertip grasps, and introduces a method for optimizing the dimensions of the links in order to achieve enveloping grasps of a large range of objects.
Abstract: We design, optimize and demonstrate the behavior of a tendon-driven robotic gripper performing parallel, enveloping and fingertip grasps. The gripper consists of two fingers, each with two links, and is actuated using a single active tendon. During unobstructed closing, the distal links remain parallel, for parallel grasps. If the proximal links are stopped by contact with an object, the distal links start flexing, creating a stable enveloping grasp. We optimize the route of the active flexor tendon and the route and stiffness of a passive extensor tendon in order to achieve this behavior. We show how the resulting gripper can also execute fingertip grasps for picking up small objects off a flat surface, using contact with the surface to its advantage through passive adaptation. Finally, we introduce a method for optimizing the dimensions of the links in order to achieve enveloping grasps of a large range of objects, and apply it to a set of common household objects.
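
A toy model of the closing behavior described above is sketched below, assuming made-up moment arms and stiffnesses: pulling the single flexor tendon rotates the proximal joint while the stiffer passive extensor keeps the distal link parallel, and once the proximal link is blocked by contact, the remaining tendon travel flexes the distal joint into an enveloping grasp.

```python
def simulate_closing(tendon_travel,
                     proximal_contact_at=None,
                     r_prox=0.010,    # flexor moment arm at the proximal joint [m]
                     r_dist=0.005):   # flexor moment arm at the distal joint [m]
    """Return (proximal_angle, distal_angle) in radians for a given tendon pull [m]."""
    # Unobstructed: the stiff passive extensor keeps the distal joint near zero,
    # so all tendon travel goes into the proximal joint (parallel-jaw behavior).
    proximal = tendon_travel / r_prox
    distal = 0.0
    if proximal_contact_at is not None and proximal > proximal_contact_at:
        # The proximal link is stopped by the object: the remaining travel now
        # flexes the distal joint, wrapping the finger around the object.
        proximal = proximal_contact_at
        remaining = tendon_travel - proximal_contact_at * r_prox
        distal = remaining / r_dist
    return proximal, distal


if __name__ == "__main__":
    print("unobstructed closing:", simulate_closing(0.006))                          # parallel grasp
    print("blocked at 0.4 rad: ", simulate_closing(0.006, proximal_contact_at=0.4))  # enveloping grasp
```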

111 citations


Book ChapterDOI
01 Jan 2014
TL;DR: An extendable framework that combines measurements from the robot's various sensors (proprioceptive and external) to calibrate the robot’s joint offsets and external sensor locations is proposed, allowing sensors with very different error characteristics to be used side by side in the calibration.
Abstract: Complex robots with multiple arms and sensors need good calibration to perform precise tasks in unstructured environments. The sensors must be calibrated both to the manipulators and to each other, since fused sensor data is often needed. We propose an extendable framework that combines measurements from the robot’s various sensors (proprioceptive and external) to calibrate the robot’s joint offsets and external sensor locations. Our approach is unique in that it accounts for sensor measurement uncertainties, thereby allowing sensors with very different error characteristics to be used side by side in the calibration. The framework is general enough to handle complex robots with kinematic components, including external sensors on kinematic chains. We validate the framework by implementing it on the Willow Garage PR2 robot, providing a significant improvement in the robot’s calibration.
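
One common way to realize this kind of calibration, sketched below under simplifying assumptions, is to pose all sensor measurements as residuals of a single nonlinear least-squares problem and weight each residual by the inverse standard deviation of its sensor, so that noisier sensors count less. The toy model (a planar two-link arm observed by a "laser" and a less accurate "camera" with an unknown mounting offset) is illustrative only, not the PR2 formulation.

```python
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 0.4, 0.3                          # link lengths [m]
SIGMA = {"camera": 0.010, "laser": 0.002}  # per-sensor measurement noise [m]


def fk(q, offsets):
    """Planar 2-link forward kinematics with unknown joint offsets."""
    a1, a2 = q[0] + offsets[0], q[1] + offsets[1]
    x = L1 * np.cos(a1) + L2 * np.cos(a1 + a2)
    y = L1 * np.sin(a1) + L2 * np.sin(a1 + a2)
    return np.array([x, y])


def residuals(params, samples):
    """Stack weighted residuals from all sensors into one vector."""
    offsets, sensor_shift = params[:2], params[2:4]
    res = []
    for q, sensor, measurement in samples:
        predicted = fk(q, offsets)
        if sensor == "camera":
            predicted = predicted + sensor_shift     # unknown camera mounting offset
        res.append((measurement - predicted) / SIGMA[sensor])
    return np.concatenate(res)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_offsets, true_shift = np.array([0.03, -0.02]), np.array([0.01, 0.005])
    samples = []
    for _ in range(40):
        q = rng.uniform(-1.0, 1.0, size=2)
        tip = fk(q, true_offsets)
        samples.append((q, "laser", tip + rng.normal(0, SIGMA["laser"], 2)))
        samples.append((q, "camera", tip + true_shift + rng.normal(0, SIGMA["camera"], 2)))
    sol = least_squares(residuals, x0=np.zeros(4), args=(samples,))
    print("estimated joint offsets :", sol.x[:2])
    print("estimated camera shift  :", sol.x[2:])
```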

106 citations


Proceedings ArticleDOI
12 Jul 2014
TL;DR: This work proposes an alternative PbD framework that involves demonstrating the task once and then providing additional task information explicitly, through interactions with a visualization of the action, and presents a simple action representation that supports this framework.
Abstract: Existing approaches to Robot Programming by Demonstration (PbD) require multiple demonstrations to capture task information that lets robots generalize to unseen situations. However, providing these demonstrations is cumbersome for end-users. In addition, users who are not familiar with the system often fail to provide sufficiently varied demonstrations. We propose an alternative PbD framework that involves demonstrating the task once and then providing additional task information explicitly, through interactions with a visualization of the action. We present a simple action representation that supports this framework and describe a system that implements the framework on a two-armed mobile manipulator. We demonstrate the power of this system by evaluating it on a diverse task benchmark that involves manipulation of everyday objects. We then demonstrate that the system is easy to learn and use for novice users through a user study in which participants program a subset of the benchmark. We characterize the limitations of our system in task generalization and end-user interactions and present extensions that could address some of the limitations.
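
The sketch below illustrates the kind of simple action representation the abstract alludes to: a sequence of gripper poses, each expressed relative to either the robot base or a perceived landmark, so that re-detecting the landmark at execution time adapts the action to new object positions. The field names and the position-only poses are illustrative assumptions, not the authors' exact data structures.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

Pose = Tuple[float, float, float]   # position only, for brevity


@dataclass
class Step:
    pose: Pose                      # gripper pose in the chosen reference frame
    frame: Optional[str] = None     # landmark name, or None for the robot base
    gripper_open: bool = True


@dataclass
class Action:
    steps: List[Step]

    def ground(self, landmarks: Dict[str, Pose]) -> List[Pose]:
        """Turn relative steps into absolute poses for the current scene."""
        absolute = []
        for step in self.steps:
            if step.frame is None:
                absolute.append(step.pose)
            else:
                lx, ly, lz = landmarks[step.frame]    # current landmark pose
                dx, dy, dz = step.pose
                absolute.append((lx + dx, ly + dy, lz + dz))
        return absolute


if __name__ == "__main__":
    # One demonstration: approach a cup, close the gripper, lift.
    action = Action([
        Step((0.0, 0.0, 0.10), frame="cup"),
        Step((0.0, 0.0, 0.02), frame="cup", gripper_open=False),
        Step((0.0, 0.0, 0.20), frame="cup", gripper_open=False),
    ])
    # At execution time the cup is detected somewhere else; the action adapts.
    print(action.ground({"cup": (0.6, -0.1, 0.75)}))
```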

90 citations


Proceedings ArticleDOI
26 Apr 2014
TL;DR: The results showed that mobility significantly increased the remote user's feelings of presence, particularly in tasks with high mobility requirements, but decreased task performance, illustrating the need to design support for effective use of mobility in high-mobility tasks.
Abstract: Robotic telepresence systems - videoconferencing systems that allow a remote user to drive around in another location - provide an alternative to video-mediated communications as a way of interacting over distances. These systems, which are seeing increasing use in business and medical settings, are unique in their ability to grant the remote user the ability to maneuver in a distant location. While this mobility promises increased feelings of "being there" for remote users and thus greater support for task collaboration, whether these promises are borne out, providing benefits in task performance, is unknown. To better understand the role that mobility plays in shaping the remote user's sense of presence and its potential benefits, we conducted a two-by-two (system mobility: stationary vs. mobile; task demands for mobility: low vs. high) controlled laboratory experiment. We asked participants (N=40) to collaborate in a construction task with a confederate via a robotic telepresence system. Our results showed that mobility significantly increased the remote user's feelings of presence, particularly in tasks with high mobility requirements, but decreased task performance. Our findings highlight the positive effects of mobility on feelings of "being there," while illustrating the need to design support for effective use of mobility in high-mobility tasks.

87 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a heuristic search-based approach to motion planning for manipulation that deals effectively with the high dimensionality of the problem and achieves the necessary efficiency by exploiting the following three key principles: (a) representation of the planning problem with what they call a manipulation lattice graph; (b) use of the ARA* search, an anytime heuristic search with provable bounds on solution suboptimality; and (c) use of informative yet fast-to-compute heuristics.
Abstract: Heuristic searches such as the A* search are a popular means of finding least-cost plans due to their generality, strong theoretical guarantees on completeness and optimality, simplicity in implementation and consistent behavior. In planning for robotic manipulation, however, these techniques are commonly thought of as impractical due to the high dimensionality of the planning problem. In this paper, we present a heuristic search-based approach to motion planning for manipulation that does deal effectively with the high dimensionality of the problem. Our approach achieves the necessary efficiency by exploiting the following three key principles: (a) representation of the planning problem with what we call a manipulation lattice graph; (b) use of the ARA* search which is an anytime heuristic search with provable bounds on solution suboptimality; and (c) use of informative yet fast-to-compute heuristics. The paper presents the approach together with its theoretical properties and shows how to apply it to single-arm and dual-arm motion planning with upright constraints on a PR2 robot operating in non-trivial cluttered spaces. An extensive experimental analysis in both simulation and on a physical PR2 shows that, in terms of runtime, our approach is on a par with the most common sampling-based approaches despite the high dimensionality of the problems. In addition, the experimental analysis shows that due to its deterministic cost minimization, the approach generates motions that are of good quality and are consistent; in other words, the resulting plans tend to be similar for similar tasks. For many problems, the consistency of the generated motions is important as it helps make the actions of the robot more predictable for a human controlling or interacting with the robot.
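
The sketch below gives a compact feel for the search machinery: states form a lattice (here a 2-D grid standing in for the manipulation lattice), and an anytime loop reruns weighted A* with a shrinking heuristic inflation epsilon, so each returned path is provably within a factor epsilon of optimal. Real ARA* additionally reuses search effort between iterations; this simplified version restarts the search each time for clarity.

```python
import heapq


def weighted_astar(start, goal, neighbors, heuristic, epsilon):
    """Weighted A*: f = g + epsilon * h. Returns (cost, path) or (None, None)."""
    open_heap = [(epsilon * heuristic(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_heap:
        f, g, state, path = heapq.heappop(open_heap)
        if state == goal:
            return g, path
        for nxt, step_cost in neighbors(state):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(open_heap,
                               (new_g + epsilon * heuristic(nxt, goal),
                                new_g, nxt, path + [nxt]))
    return None, None


def anytime_plan(start, goal, neighbors, heuristic, epsilons=(5.0, 2.0, 1.0)):
    """Yield successively better plans; the last one (epsilon=1) is optimal."""
    for eps in epsilons:
        cost, path = weighted_astar(start, goal, neighbors, heuristic, eps)
        if path is not None:
            yield eps, cost, path


if __name__ == "__main__":
    obstacles = {(2, y) for y in range(0, 5)}   # a wall the path must go around

    def neighbors(state):
        x, y = state
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if 0 <= nxt[0] < 6 and 0 <= nxt[1] < 6 and nxt not in obstacles:
                yield nxt, 1.0

    manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    for eps, cost, path in anytime_plan((0, 0), (5, 0), neighbors, manhattan):
        print(f"epsilon={eps}: cost={cost}, path length={len(path)}")
```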

81 citations


Proceedings Article
21 Jun 2014
TL;DR: The paper presents the approach together with its theoretical properties and shows how to apply it to single-arm and dual-arm motion planning with upright constraints on a PR2 robot operating in non-trivial cluttered spaces.
Abstract: Heuristic searches such as A* search are a popular means of finding least-cost plans due to their generality, strong theoretical guarantees on completeness and optimality, simplicity in implementation and consistent behavior. In planning for robotic manipulation, however, these techniques are commonly thought of as impractical due to the high-dimensionality of the planning problem. In this paper, we present a heuristic search-based approach to motion planning for manipulation that does deal effectively with the high-dimensionality of the problem. The paper presents a summary of the approach along with applications to single-arm and dual-arm motion planning with upright constraints on a PR2 robot operating in non-trivial cluttered spaces. An extensive experimental analysis in both simulation and on a physical PR2 shows that, in terms of runtime, our approach is on par with the most common sampling-based approaches, and due to its deterministic cost-minimization, the computed motions are of good quality and are consistent, i.e. the resulting plans tend to be similar for similar tasks. For complete details of our approach, please refer to (Cohen, Chitta, and Likhachev 2013).

61 citations


Proceedings ArticleDOI
03 Mar 2014
TL;DR: This paper develops a system that allows users to program complex manipulation skills on a two-armed robot through a spoken dialog interface and by physically moving the robot’s arms, and investigates the effect of providing users with an additional written tutorial or an instructional video.
Abstract: Allowing end-users to harness the full capability of general purpose robots requires giving them powerful tools. As the functionality of these tools increases, learning how to use them becomes more challenging. In this paper we investigate the use of instructional materials to support the learnability of a Programming by Demonstration tool. We develop a system that allows users to program complex manipulation skills on a two-armed robot through a spoken dialog interface and by physically moving the robot’s arms. We present a user study (N=30) in which participants are left alone with the robot and a user manual, without any prior instructions on how to program the robot. Instead, they are asked to figure it out on their own. We investigate the effect of providing users with an additional written tutorial or an instructional video. We find that videos are most effective in training the user; however, this effect might be superficial and ultimately trial-and-error plays an important role in learning to program the robot. We also find that tutorials can be problematic when the interaction has uncertainty due to speech recognition errors. Overall, the user study demonstrates the effectiveness and learnability of our system, while providing useful feedback about the dialog design.

37 citations


Book ChapterDOI
01 Jan 2014
TL;DR: A simple but robust approach to both pre-touch grasp adjustment and grasp planning for unknown objects in clutter, using a small-baseline stereo camera attached to the gripper of the robot and a feature-based cost function on local 3D data.
Abstract: Robotic grasping in unstructured environments requires the ability to adjust and recover when a pre-planned grasp faces imminent failure. Even for a single object, modeling uncertainties due to occluded surfaces, sensor noise and calibration errors can cause grasp failure; cluttered environments exacerbate the problem. In this work, we propose a simple but robust approach to both pre-touch grasp adjustment and grasp planning for unknown objects in clutter, using a small-baseline stereo camera attached to the gripper of the robot. By employing a 3D sensor from the perspective of the gripper we gain information about the object and nearby obstacles immediately prior to grasping that is not available during head-sensor-based grasp planning. We use a feature-based cost function on local 3D data to evaluate the feasibility of a proposed grasp. In cases where only minor adjustments are needed, our algorithm uses gradient descent on a cost function based on local features to find optimal grasps near the original grasp. In cases where no suitable grasp is found, the robot can search for a significantly different grasp pose rather than blindly attempting a doomed grasp. We present experimental results to validate our approach by grasping a wide range of unknown objects in cluttered scenes. Our results show that reactive pre-touch adjustment can correct for a fair amount of uncertainty in the measured position and shape of the objects, or the presence of nearby obstacles.
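
A hedged sketch of the adjustment step is given below: a candidate gripper pose is scored with a feature-based cost over the local point cloud, and numerical gradient descent slides the pose downhill from the originally planned grasp. The two features used here (object points centered between the fingertips, no points colliding with the palm region) are illustrative stand-ins for the paper's actual feature set.

```python
import numpy as np

GRIPPER_WIDTH = 0.08   # opening between the fingertips [m]
PALM_DEPTH = 0.04      # region behind the grasp point that must stay clear [m]


def grasp_cost(pose_xy, cloud):
    """Lower is better. pose_xy = (x, y) of the grasp center in a top-down view."""
    rel = cloud[:, :2] - pose_xy
    between = rel[np.abs(rel[:, 0]) < GRIPPER_WIDTH / 2]
    # Feature 1: object points should be centered between the fingertips.
    centering = np.abs(between[:, 0]).mean() if len(between) else 1.0
    # Feature 2: penalize points that would collide with the palm.
    palm_hits = np.sum((np.abs(rel[:, 0]) < GRIPPER_WIDTH / 2) & (rel[:, 1] < -PALM_DEPTH))
    return centering + 0.05 * palm_hits


def adjust_grasp(initial_xy, cloud, step=0.005, iters=50, eps=1e-3):
    """Numerical gradient descent from the originally planned grasp pose."""
    pose = np.array(initial_xy, dtype=float)
    for _ in range(iters):
        grad = np.zeros(2)
        for k in range(2):
            d = np.zeros(2)
            d[k] = eps
            grad[k] = (grasp_cost(pose + d, cloud) - grasp_cost(pose - d, cloud)) / (2 * eps)
        pose -= step * grad
    return pose, grasp_cost(pose, cloud)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Fake local cloud: a small object centered at x=0.02, i.e. the planned
    # grasp at x=0 is slightly off to one side.
    cloud = np.column_stack([rng.normal(0.02, 0.01, 300),
                             rng.normal(0.00, 0.01, 300),
                             rng.normal(0.75, 0.01, 300)])
    adjusted, cost = adjust_grasp((0.0, 0.0), cloud)
    print("adjusted grasp center:", adjusted, "cost:", round(cost, 4))
```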

Book ChapterDOI
01 Jan 2014
TL;DR: This chapter describes the method for grasp synthesis using a low-dimensional posture subspace, and applies it to a set of hand models with different kinematics and numbers of degrees of freedom.
Abstract: Recent advances in neuroscience research have shown that posture variation of the human hand during grasping is dominated by movement in a configuration space of highly reduced dimensionality. In this chapter we explore how robot and artificial hands may take advantage of similar subspaces to reduce the complexity of dexterous grasping. We first describe our method for grasp synthesis using a low-dimensional posture subspace, and apply it to a set of hand models with different kinematics and numbers of degrees of freedom. We then discuss two applications of the method: online interactive grasp planning and data-driven grasp planning using a pre-computed database of stable grasps.
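
The core of the low-dimensional posture subspace idea can be written in a few lines, as sketched below with synthetic data: PCA over recorded joint-angle vectors yields a mean posture and a small set of basis directions, and new hand postures are synthesized from just a couple of amplitudes along those directions. The joint count and the "recorded" data are made up for illustration.

```python
import numpy as np

N_JOINTS, N_BASIS = 20, 2     # e.g. a 20-DOF hand controlled by 2 amplitudes


def learn_subspace(postures: np.ndarray, n_basis: int = N_BASIS):
    """PCA over recorded joint-angle vectors -> (mean, basis) of the subspace."""
    mean = postures.mean(axis=0)
    _, _, vt = np.linalg.svd(postures - mean, full_matrices=False)
    return mean, vt[:n_basis]               # rows are the basis directions


def synthesize(mean: np.ndarray, basis: np.ndarray, amplitudes: np.ndarray) -> np.ndarray:
    """Map low-dimensional amplitudes back to a full joint-angle posture."""
    return mean + amplitudes @ basis


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "recorded" postures that actually live near a 2-D subspace.
    true_basis = rng.normal(size=(N_BASIS, N_JOINTS))
    data = (rng.normal(size=(200, N_BASIS)) @ true_basis
            + 0.5 + rng.normal(0, 0.01, (200, N_JOINTS)))
    mean, basis = learn_subspace(data)
    posture = synthesize(mean, basis, np.array([1.0, -0.5]))
    print("posture shape:", posture.shape)   # (20,) joint angles from 2 numbers
```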

Book ChapterDOI
01 Jan 2014
TL;DR: This paper studies the impact that the user's visual access to the robot, or lack thereof, has on teaching performance and addresses how a robot can provide additional information to an instructor during the LfD process, to optimize the two-way process of teaching and learning.
Abstract: Learning from demonstration utilizes human expertise to program a robot. We believe this approach to robot programming will facilitate the development and deployment of general purpose personal robots that can adapt to specific user preferences. Demonstrations can potentially take place across a wide variety of environmental conditions. In this paper we study the impact that the user's visual access to the robot, or lack thereof, has on teaching performance. Based on the obtained results, we then address how a robot can provide additional information to an instructor during the LfD process, to optimize the two-way process of teaching and learning. Finally, we describe a novel Bayesian approach to generating task policies from demonstration data.

Proceedings ArticleDOI
29 Sep 2014
TL;DR: A multi-level architecture for complex manipulation planning of rigid bodies which uses communication between the two levels, one for planning the manipulated object motion, the other to plan for the arms, to improve the efficiency of the method.
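
A much-simplified sketch of the two-level idea and of grasp zones follows: a path is planned for the object alone, each grasp's zone is recorded as the set of object configurations where it appears usable, and grasp switches are placed greedily along the path. The feasibility predicate is a placeholder for the arm-level planner's reachability checks; a real planner would also require zones to overlap at a switch so the object can be handed over or set down.

```python
from typing import Callable, Dict, List, Optional, Tuple


def build_grasp_zones(object_path: List[float],
                      grasps: List[str],
                      feasible: Callable[[str, float], bool]) -> Dict[str, set]:
    """Record, for each grasp, the set of path indices where it seems usable."""
    return {g: {i for i, q in enumerate(object_path) if feasible(g, q)} for g in grasps}


def assign_grasps(object_path: List[float],
                  zones: Dict[str, set]) -> Optional[List[Tuple[int, str]]]:
    """Greedy covering of the object path by grasps; returns (switch index, grasp) pairs."""
    plan, i = [], 0
    while i < len(object_path):
        # Pick the grasp whose zone, starting at i, extends furthest along the path.
        best, best_end = None, i
        for g, zone in zones.items():
            j = i
            while j < len(object_path) and j in zone:
                j += 1
            if j > best_end:
                best, best_end = g, j
        if best is None:
            return None                      # no grasp covers this configuration
        plan.append((i, best))               # switch to `best` at index i
        i = best_end
    return plan


if __name__ == "__main__":
    path = [x / 10 for x in range(11)]       # object configuration from 0.0 to 1.0
    grasps = ["top", "side"]
    # Hypothetical feasibility: "top" works early in the motion, "side" later.
    feasible = lambda g, q: (g == "top" and q <= 0.6) or (g == "side" and q >= 0.4)
    zones = build_grasp_zones(path, grasps, feasible)
    print(assign_grasps(path, zones))        # e.g. [(0, 'top'), (7, 'side')]
```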
Abstract: One intuitive approach for planning robotic manipulation tasks is to first compute a plan for the manipulated object, as if capable to move on its own, then use the obtained plan to find a sequence of arm maneuvers to take the object along the computed plan. Motion planning queries would be performed in relatively low-dimensional configuration spaces, rather than the full configuration space of a robot's arms and grippers, resulting in a more efficient planning method. However, having a plan for the manipulated object does not guarantee there also exists a feasible sequence of maneuvers for the robot to do the manipulation. Knowing different possible grasps on the object increases the robots chance to find a sequence of grasp switches and maneuvers, but makes the search more time consuming as the grasps need to be tested for usefulness. In this paper, we develop a multi-level architecture for complex manipulation planning of rigid bodies which uses communication between the two levels, one for planning the manipulated object motion, the other to plan for the arms, to improve the efficiency of the method. We store grasp zones in the configuration space of the manipulated object as regions where a given grasp seems promising. We use grasp zones to guide searching for grasp switching maneuvers, and to avoid regions of configuration space where few good grasps exist.