
Showing papers on "GRASP" published in 2016


Proceedings ArticleDOI
16 May 2016
TL;DR: This paper takes the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts, which allows us to train a Convolutional Neural Network for the task of predicting grasp locations without severe overfitting.
Abstract: Current model free learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.

1,147 citations
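
The 18-way reformulation described in the abstract above lends itself to a compact sketch: instead of regressing a grasp angle, the network emits one binary success score per 10-degree angle bin for each image patch. The architecture and training snippet below are illustrative assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn

# Illustrative sketch: grasp prediction as 18-way binary classification.
# Each output unit answers "would a grasp centered on this patch, with the
# gripper at angle bin i (i * 10 degrees), succeed?" -- so the loss is a
# per-bin binary cross-entropy, not a softmax over the 18 angles.
class PatchGraspNet(nn.Module):
    def __init__(self, num_angle_bins=18):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_angle_bins)

    def forward(self, patch):                    # patch: (B, 3, H, W)
        return self.head(self.features(patch))   # (B, 18) logits

net = PatchGraspNet()
patches = torch.randn(8, 3, 227, 227)   # image patches around grasp points
tried = torch.randint(0, 18, (8,))      # angle bin attempted per patch
success = torch.rand(8) > 0.5           # robot-reported grasp outcome

# In each trial only the attempted angle bin is labeled, so the loss is
# computed on that bin alone.
logits = net(patches)[torch.arange(8), tried]
loss = nn.functional.binary_cross_entropy_with_logits(logits, success.float())
```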


Journal ArticleDOI
TL;DR: The resulting taxonomy incorporates all grasps found in the reviewed taxonomies that complied with the grasp definition, and it is shown that due to the nature of the classification, the 33 grasp types might be reduced to a set of 17 more general grasps if only the hand configuration is considered without the object shape/size.
Abstract: In this paper, we analyze and compare existing human grasp taxonomies and synthesize them into a single new taxonomy (dubbed “The GRASP Taxonomy” after the GRASP project funded by the European Commission). We consider only static and stable grasps performed by one hand. The goal is to extract the largest set of different grasps that were referenced in the literature and arrange them in a systematic way. The taxonomy provides a common terminology to define human hand configurations and is important in many domains such as human–computer interaction and tangible user interfaces where an understanding of the human is the basis for a proper interface. Overall, 33 different grasp types are found and arranged into the GRASP taxonomy. Within the taxonomy, grasps are arranged according to 1) opposition type, 2) the virtual finger assignments, 3) type in terms of power, precision, or intermediate grasp, and 4) the position of the thumb. The resulting taxonomy incorporates all grasps found in the reviewed taxonomies that complied with the grasp definition. We also show that due to the nature of the classification, the 33 grasp types might be reduced to a set of 17 more general grasps if only the hand configuration is considered without the object shape/size.

636 citations


Proceedings ArticleDOI
16 May 2016
TL;DR: This paper presents the Dexterity Network (Dex-Net) 1.0, a dataset of 3D object models and a sampling-based planning algorithm for exploring how Cloud Robotics can be used for robust grasp planning, and reports on system sensitivity to variations in similarity metrics and in uncertainty in pose and friction.
Abstract: This paper presents the Dexterity Network (Dex-Net) 1.0, a dataset of 3D object models and a sampling-based planning algorithm to explore how Cloud Robotics can be used for robust grasp planning. The algorithm uses a Multi-Armed Bandit model with correlated rewards to leverage prior grasps and 3D object models in a growing dataset that currently includes over 10,000 unique 3D object models and 2.5 million parallel-jaw grasps. Each grasp includes an estimate of the probability of force closure under uncertainty in object and gripper pose and friction. Dex-Net 1.0 uses Multi-View Convolutional Neural Networks (MV-CNNs), a new deep learning method for 3D object classification, to provide a similarity metric between objects, and the Google Cloud Platform to simultaneously run up to 1,500 virtual cores, reducing experiment runtime by up to three orders of magnitude. Experiments suggest that correlated bandit techniques can use a cloud-based network of object models to significantly reduce the number of samples required for robust grasp planning. We report on system sensitivity to variations in similarity metrics and in uncertainty in pose and friction. Code and updated information are available at http://berkeleyautomation.github.io/dex-net/.

354 citations
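
The correlated multi-armed bandit idea in the abstract above can be sketched with Thompson sampling whose Beta priors are warm-started from outcomes of similar prior grasps (with similarity coming from, e.g., an MV-CNN object embedding). The weighting scheme and all numbers below are assumptions for illustration, not Dex-Net's exact update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each candidate grasp keeps a Beta(alpha, beta) belief over its probability
# of force closure; beliefs are warm-started from similar prior grasps.
def init_priors(similarity, prior_successes, prior_failures):
    # similarity: (n_grasps, n_prior) weights in [0, 1]
    alpha = 1.0 + similarity @ prior_successes
    beta = 1.0 + similarity @ prior_failures
    return alpha, beta

def thompson_step(alpha, beta, true_p):
    sample = rng.beta(alpha, beta)       # one posterior draw per grasp
    g = int(np.argmax(sample))           # evaluate the most promising grasp
    reward = rng.random() < true_p[g]    # simulated force-closure outcome
    alpha[g] += reward                   # conjugate posterior update
    beta[g] += 1 - reward
    return g

n_grasps, n_prior = 50, 200
similarity = 0.05 * rng.random((n_grasps, n_prior))
alpha, beta = init_priors(similarity,
                          rng.integers(0, 5, n_prior).astype(float),
                          rng.integers(0, 5, n_prior).astype(float))
true_p = rng.random(n_grasps)
for _ in range(100):
    thompson_step(alpha, beta, true_p)
best_grasp = int(np.argmax(alpha / (alpha + beta)))
```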


Book ChapterDOI
03 Oct 2016
TL;DR: A large convolutional neural network is trained to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose.
Abstract: We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.

266 citations
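
The servoing step in the abstract above repeatedly queries the success-prediction network to choose the next gripper motion; the published system does this with a cross-entropy method over candidate motion commands. A minimal sketch follows, with `predict_success` standing in for the trained network and all sample counts chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_success(image, motions):
    # Placeholder for the learned CNN g(image, motion) -> P(grasp success).
    return -np.linalg.norm(motions, axis=1)

def choose_motion(image, dim=3, n_samples=64, n_elite=6, iters=3):
    # Cross-entropy method: iteratively refit a Gaussian over task-space
    # motions to the highest-scoring candidates.
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        motions = rng.normal(mean, std, size=(n_samples, dim))
        scores = predict_success(image, motions)
        elite = motions[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean   # motion command sent to the arm, re-evaluated next frame

motion = choose_motion(image=None)
```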


Proceedings ArticleDOI
04 Mar 2016
TL;DR: In this paper, the authors proposed two new representations of grasp candidates and quantified the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models.
Abstract: This paper considers the problem of grasp pose detection in point clouds. We follow a general algorithmic structure that first generates a large set of 6-DOF grasp candidates and then classifies each of them as a good or a bad grasp. Our focus in this paper is on improving the second step by using depth sensor scans from large online datasets to train a convolutional neural network. We propose two new representations of grasp candidates, and we quantify the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models. Our analysis shows that a more informative grasp candidate representation as well as pretraining and prior knowledge significantly improve grasp detection. We evaluate our approach on a Baxter Research Robot and demonstrate an average grasp success rate of 93% in dense clutter. This is a 20% improvement compared to our prior work.

243 citations
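
The two-step structure in the abstract above (candidate generation, then CNN classification) reduces to a short skeleton. Every helper here is a hypothetical placeholder; in the paper the candidate encoding is an image-like representation of the points inside the gripper's closing region.

```python
# Skeleton of the grasp pose detection pipeline: sample many 6-DOF grasp
# candidates from the point cloud, encode each as an image-like input,
# and keep those the CNN classifies as good grasps.
def detect_grasps(cloud, sample_candidates, encode_candidate, cnn, thresh=0.5):
    scored = []
    for hand_pose in sample_candidates(cloud):   # 6-DOF candidate poses
        x = encode_candidate(cloud, hand_pose)   # grasp-candidate representation
        score = cnn(x)                           # P(good grasp)
        if score > thresh:
            scored.append((score, hand_pose))
    scored.sort(key=lambda g: g[0], reverse=True)
    return scored                                # best candidates first
```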


Posted Content
TL;DR: In this article, a large convolutional neural network is trained to predict the probability that task-space motion of the gripper will result in successful grasps using only monocular camera images and independently of camera calibration or the current robot pose.
Abstract: We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.

221 citations


Proceedings ArticleDOI
07 Aug 2016
TL;DR: A new method for parallel-jaw grasping of isolated objects from depth images, under large gripper pose uncertainty, which trains a Convolutional Neural Network that takes as input a single depth image of an object and outputs a score for each grasp pose across the image.
Abstract: This paper presents a new method for parallel-jaw grasping of isolated objects from depth images, under large gripper pose uncertainty. Whilst most approaches aim to predict the single best grasp pose from an image, our method first predicts a score for every possible grasp pose, which we denote the grasp function. With this, it is possible to achieve grasping robust to the gripper's pose uncertainty, by smoothing the grasp function with the pose uncertainty function. Therefore, if the single best pose is adjacent to a region of poor grasp quality, that pose will no longer be chosen, and instead a pose will be chosen which is surrounded by a region of high grasp quality. To learn this function, we train a Convolutional Neural Network which takes as input a single depth image of an object, and outputs a score for each grasp pose across the image. Training data for this is generated by use of physics simulation and depth image simulation with 3D object meshes, to enable acquisition of sufficient data without requiring exhaustive real-world experiments. We evaluate with both synthetic and real experiments, and show that the learned grasp score is more robust to gripper pose uncertainty than when this uncertainty is not accounted for.

210 citations
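
The robustness step in the abstract above is a convolution: the learned grasp function is smoothed by the gripper's pose-uncertainty distribution before the best pose is selected. A numpy sketch, assuming a Gaussian uncertainty model with made-up sigmas and a score volume indexed by (x, y, angle bin):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Learned grasp function: one score per gripper pose (x, y, angle bin),
# here filled with random values as a stand-in for the CNN output.
grasp_fn = np.random.rand(100, 100, 18)

# Smooth by the pose uncertainty: a few pixels of translational noise and
# about one bin of rotational noise; the angle axis wraps around.
robust_fn = gaussian_filter(grasp_fn, sigma=(3.0, 3.0, 1.0),
                            mode=("nearest", "nearest", "wrap"))

# The best smoothed pose avoids peaks adjacent to regions of poor quality.
x, y, angle_bin = np.unravel_index(np.argmax(robust_fn), robust_fn.shape)
```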


Posted Content
TL;DR: In this paper, a grasp score is learned for parallel-jaw grasping under large gripper pose uncertainty by smoothing the grasp function with the pose uncertainty function, which is shown to be more robust to the gripper's pose uncertainty than when this uncertainty is not accounted for.
Abstract: This paper presents a new method for parallel-jaw grasping of isolated objects from depth images, under large gripper pose uncertainty. Whilst most approaches aim to predict the single best grasp pose from an image, our method first predicts a score for every possible grasp pose, which we denote the grasp function. With this, it is possible to achieve grasping robust to the gripper's pose uncertainty, by smoothing the grasp function with the pose uncertainty function. Therefore, if the single best pose is adjacent to a region of poor grasp quality, that pose will no longer be chosen, and instead a pose will be chosen which is surrounded by a region of high grasp quality. To learn this function, we train a Convolutional Neural Network which takes as input a single depth image of an object, and outputs a score for each grasp pose across the image. Training data for this is generated by use of physics simulation and depth image simulation with 3D object meshes, to enable acquisition of sufficient data without requiring exhaustive real-world experiments. We evaluate with both synthetic and real experiments, and show that the learned grasp score is more robust to gripper pose uncertainty than when this uncertainty is not accounted for.

161 citations


Journal ArticleDOI
TL;DR: A method for one-shot learning of dexterous grasps and grasp generation for novel objects using an incomplete point cloud from a depth camera and a product of experts, in which experts are of two types.
Abstract: This paper presents a method for one-shot learning of dexterous grasps and grasp generation for novel objects. A model of each grasp type is learned from a single kinesthetic demonstration and several types are taught. These models are used to select and generate grasps for unfamiliar objects. Both the learning and generation stages use an incomplete point cloud from a depth camera, so no prior model of an object shape is used. The learned model is a product of experts, in which experts are of two types. The first type is a contact model and is a density over the pose of a single hand link relative to the local object surface. The second type is the hand-configuration model and is a density over the whole-hand configuration. Grasp generation for an unfamiliar object optimizes the product of these two model types, generating thousands of grasp candidates in under 30 seconds. The method is robust to incomplete data at both training and testing stages. When several grasp types are considered the method selects the highest-likelihood grasp across all the types. In an experiment, the training set consisted of five different grasps and the test set of 45 previously unseen objects. The success rate of the first-choice grasp is 84.4% or 77.7% if seven views or a single view of the test object are taken, respectively.

155 citations
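
Schematically, the product-of-experts model in the abstract above scores a candidate grasp g by multiplying one contact-model density per hand link with the hand-configuration density; the notation below is ours, introduced only to make that structure explicit.

```latex
% Candidate grasp g: u_i(g) is the pose of hand link i relative to the
% local object surface, q(g) is the whole-hand configuration.
\[
  p(g) \;\propto\;
  \underbrace{\prod_{i=1}^{L} p_i\big(u_i(g)\big)}_{\text{contact-model experts}}
  \;\cdot\;
  \underbrace{p_c\big(q(g)\big)}_{\text{hand-configuration expert}},
  \qquad
  g^{\ast} = \operatorname*{arg\,max}_{g}\; p(g).
\]
```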


Journal ArticleDOI
TL;DR: A novel robot grasp detection system that maps a pair of RGB-D images of novel objects to the best grasping pose of a robotic gripper, together with a two-stage closed-loop grasping-candidate estimator that improves the search efficiency of grasp-candidate generation.
Abstract: Autonomous manipulation has enabled a wide range of exciting robot tasks. However, perceiving the outside environment is still a challenging problem in the field of intelligent robotics research due to ...

136 citations


Journal ArticleDOI
TL;DR: The proposed approach does not require object exploration, re-grasping, grasp-release, or force modulation and works for arbitrary object start positions and orientations, so the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.
Abstract: Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a ‘haptic glance’). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data is limited to actuator positions (one per two-link finger) and force sensor values (eight per finger). The schemes are able to work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp-release, or force modulation and works for arbitrary object start positions and orientations. Due to these factors, the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.
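
The machine-learning half of the approach above is straightforward to sketch: a random forest over the raw snapshot of actuator positions and pressure readings from a single grasp. The sketch assumes a two-finger hand for illustration (2 actuator positions + 16 pressure values = 18 features per grasp); the data below is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# One feature vector per unplanned grasp: actuator positions plus
# barometric pressure readings, no exploratory procedure required.
n_grasps, n_objects, n_features = 200, 10, 18
X = rng.normal(size=(n_grasps, n_features))     # sensor snapshot per grasp
y = rng.integers(0, n_objects, size=n_grasps)   # object identity labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
predicted_object = clf.predict(X[:1])           # identify from a single grasp
```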

Journal ArticleDOI
TL;DR: An approach for addressing the performance of dexterous grasping under shape uncertainty is presented: the uncertainty in object shape is parametrized and incorporated as a constraint into grasp planning, and the grasp planning approach is hand-interchangeable.

Journal ArticleDOI
TL;DR: The Hierarchical Fingertip Space is introduced as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting; the main objective of the proposed framework is to enable fingertip grasping by addressing problems of changed object weight, slippage, and external disturbances.
Abstract: We present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile, and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing problems of changed weight of the object, slippage, and external disturbances. For this purpose we introduce the Hierarchical Fingertip Space as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient. Experimental evaluation is conducted on an Allegro hand mounted on a Kuka LWR arm.

Posted Content
TL;DR: A novel robotic grasp detection system that predicts the best grasping pose of a parallel-plate robotic gripper for novel objects using the RGB-D image of the scene and then uses a shallow convolutional neural network to predict the grasp configuration for the object of interest.
Abstract: Deep learning has significantly advanced computer vision and natural language processing. While there have been some successes in robotics using deep learning, it has not been widely adopted. In this paper, we present a novel robotic grasp detection system that predicts the best grasping pose of a parallel-plate robotic gripper for novel objects using the RGB-D image of the scene. The proposed model uses a deep convolutional neural network to extract features from the scene and then uses a shallow convolutional neural network to predict the grasp configuration for the object of interest. Our multi-modal model achieved an accuracy of 89.21% on the standard Cornell Grasp Dataset and runs at real-time speeds. This redefines the state-of-the-art for robotic grasp detection.
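
The division of labor in the abstract above (a deep feature extractor feeding a shallow grasp predictor) can be sketched as follows; the backbone choice, layer sizes, and the 5-parameter grasp-rectangle output are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Deep network: a pretrained CNN extracts scene features and stays frozen.
backbone = models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()          # expose the 2048-D feature vector
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

# Shallow network: a small head predicts the grasp configuration
# (x, y, angle, width, height) for the object of interest.
grasp_head = nn.Sequential(
    nn.Linear(2048, 512), nn.ReLU(),
    nn.Linear(512, 5),
)

rgb = torch.randn(1, 3, 224, 224)    # RGB input; the paper's model fuses RGB-D
grasp = grasp_head(backbone(rgb))    # (1, 5) grasp parameters
```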

Proceedings ArticleDOI
01 Aug 2016
TL;DR: This work introduces a version of the grasping in clutter problem where a yellow cylinder must be grasped by a planar robot arm amid extruded objects in a variety of shapes and positions, and proposes using a hierarchy of three levels of supervisors.
Abstract: For applications such as Amazon warehouse order fulfillment, robots must grasp a desired object amid clutter: other objects that block direct access. This can be difficult to program explicitly due to uncertainty in friction and push mechanics and the variety of objects that can be encountered. Deep Learning networks combined with Online Learning from Demonstration (LfD) algorithms such as DAgger and SHIV have potential to learn robot control policies for such tasks where the input is a camera image and system dynamics and the cost function are unknown. To explore this idea, we introduce a version of the grasping in clutter problem where a yellow cylinder must be grasped by a planar robot arm amid extruded objects in a variety of shapes and positions. To reduce the burden on human experts to provide demonstrations, we propose using a hierarchy of three levels of supervisors: a fast motion planner that ignores obstacles, crowd-sourced human workers who provide appropriate robot control values remotely via online videos, and a local human expert. Physical experiments suggest that with 160 expert demonstrations, using the hierarchy of supervisors can increase the probability of a successful grasp (reliability) from 55% to 90%.

Proceedings ArticleDOI
18 Jun 2016
TL;DR: The hypothesis is that it is necessary to model the grasp types of hands and the attributes of manipulated objects in order to accurately recognize manipulation actions, and the proposed action model outperforms traditional appearance-based models which are not designed to take into account semantic constraints such as grasp types or object attributes.
Abstract: Our goal is to automate the understanding of natural hand-object manipulation by developing computer vision-based techniques. Our hypothesis is that it is necessary to model the grasp types of hands and the attributes of manipulated objects in order to accurately recognize manipulation actions. Specifically, we focus on recognizing hand grasp types, object attributes and actions from a single image within a unified model. First, we explore the contextual relationship between grasp types and object attributes, and show how that context can be used to boost the recognition of both grasp types and object attributes. Second, we propose to model actions with grasp types and object attributes based on the hypothesis that grasp types and object attributes contain complementary information for characterizing different actions. Our proposed action model outperforms traditional appearance-based models which are not designed to take into account semantic constraints such as grasp types or object attributes. Experimental results on public egocentric activities datasets strongly support our hypothesis.


Proceedings ArticleDOI
16 May 2016
TL;DR: A shared convolutional neural network (CNN) that can simultaneously perform object discovery and grasp detection in real time is implemented on a real robotic platform, and it is shown that the robot can accurately discover a target object from the stack and successfully grasp it.
Abstract: Grasping an object from a stack of objects in real time is still a challenge in robotics. This requires the robot to have the ability of both fast object discovery and grasp detection: a target object should be picked out from the stack first and then a proper grasp configuration is applied to grasp the object. In this paper, we propose a shared convolutional neural network (CNN) which can simultaneously implement these two tasks in real time. The processing speed of the model is about 100 frames per second on a GPU, which largely satisfies the requirement. Meanwhile, we also establish a labeled RGB-D dataset which contains scenes of stacked objects for robotic grasping. Finally, we demonstrate the implementation of our shared CNN model on a real robotic platform and show that the robot can accurately discover a target object from the stack and successfully grasp it.

Journal ArticleDOI
TL;DR: The problem of using real-time feedback from contact sensors to create closed-loop pushing actions is considered as a partially observable Markov decision process (POMDP) with a transition model based on a physics simulator and a reward function that drives the robot towards a successful grasp.
Abstract: We consider the problem of using real-time feedback from contact sensors to create closed-loop pushing actions. To do so, we formulate the problem as a partially observable Markov decision process (POMDP) with a transition model based on a physics simulator and a reward function that drives the robot towards a successful grasp. We demonstrate that it is intractable to solve the full POMDP with traditional techniques and introduce a novel decomposition of the policy into pre- and post-contact stages to reduce the computational complexity. Our method uses an offline point-based solver on a variable-resolution discretization of the state space to solve for a post-contact policy as a pre-computation step. Then, at runtime, we use an A* search to compute a pre-contact trajectory. We prove that the value of the resulting policy is within a bound of the value of the optimal policy and give intuition about when it performs well. Additionally, we show the policy produced by our algorithm achieves a successful grasp more quickly and with higher probability than a baseline QMDP policy on two different objects in simulation. Finally, we validate our simulation results on a real robot using commercially available tactile sensors.
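
The pre-/post-contact decomposition in the abstract above can be summarized as a small control skeleton: an A*-planned open-loop trajectory runs until the tactile sensors first fire, then control hands over to the policy pre-computed by the offline point-based solver. Every helper below is a hypothetical placeholder, not the authors' implementation.

```python
# Pre-contact: follow an A*-planned pushing trajectory open loop.
# Post-contact: switch to the pre-computed closed-loop POMDP policy.
def run_episode(belief, astar_plan, post_contact_policy, step, in_contact,
                grasped, max_steps=200):
    for action in astar_plan(belief):        # pre-contact stage
        obs, belief = step(action, belief)
        if in_contact(obs):                  # contact sensors fire
            break
    for _ in range(max_steps):               # post-contact stage
        if grasped(belief):
            return belief                    # successful grasp
        action = post_contact_policy(belief) # offline point-based solution
        obs, belief = step(action, belief)
    return belief
```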

Journal ArticleDOI
20 Jan 2016
TL;DR: This letter targets the next step in autonomous robot commissioning: automatizing the currently manual order picking procedure, and investigates the use case of autonomous picking and palletizing with a dedicated research platform and discusses lessons learned during testing in simplified warehouse settings.
Abstract: So far, autonomous order picking (commissioning) systems have not been able to meet the stringent demands regarding speed, safety, and accuracy of real-world warehouse automation, resulting in reliance on human workers. In this letter, we target the next step in autonomous robot commissioning: automatizing the currently manual order picking procedure. To this end, we investigate the use case of autonomous picking and palletizing with a dedicated research platform and discuss lessons learned during testing in simplified warehouse settings. The main theoretical contribution is a novel grasp representation scheme which allows for redundancy in the gripper pose placement. This redundancy is exploited by a local, prioritized kinematic controller which generates reactive manipulator motions on-the-fly. We validated our grasping approach by means of a large set of experiments, which yielded an average grasp acquisition time of 23.5 s at a success rate of 94.7%. Our system is able to autonomously carry out simple order picking tasks in a human-safe manner, and as such serves as an initial step toward future commercial-scale in-house logistics automation solutions.

Book ChapterDOI
01 Jan 2016
TL;DR: A new approach to localizing handle-like grasp affordances in 3-D point clouds by identifying a set of sufficient geometric conditions for the existence of a grasp affordance and searching the point cloud for neighborhoods that satisfy these conditions.
Abstract: We propose a new approach to localizing handle-like grasp affordances in 3-D point clouds. The main idea is to identify a set of sufficient geometric conditions for the existence of a grasp affordance and to search the point cloud for neighborhoods that satisfy these conditions. Our goal is not to find all possible grasp affordances, but instead to develop a method of localizing important types of grasp affordances quickly and reliably. The strength of this method relative to other current approaches is that it is very practical: it can have good precision/recall for the types of affordances under consideration, it runs in real-time, and it is easy to adapt to different robots and operating scenarios. We validate with a set of experiments where the approach is used to enable the Rethink Baxter robot to localize and grasp unmodelled objects.

Journal ArticleDOI
TL;DR: This work proposes a greedy randomized adaptive search procedure (GRASP) enhanced with heuristic concentration (HC) that uses a set of randomized route-first, cluster-second heuristics to generate starting solutions and a variable-neighborhood descent procedure for the local search phase.
Abstract: The vehicle routing problem with stochastic demands (VRPSD) consists in designing optimal routes to serve a set of customers with random demands following known probability distributions. Because of demand uncertainty, a vehicle may arrive at a customer without enough capacity to satisfy its demand and may need to apply a recourse to recover the route's feasibility. Although travel times are assumed to be deterministic, because of eventual recourses the total duration of a route is a random variable. We present two strategies to deal with route-duration constraints in the VRPSD. In the first, the duration constraints are handled as chance constraints, meaning that for each route, the probability of exceeding the maximum duration must be lower than a given threshold. In the second, violations to the duration constraint are penalized in the objective function. To solve the resulting problem, we propose a greedy randomized adaptive search procedure (GRASP) enhanced with heuristic concentration (HC). The GRASP component uses a set of randomized route-first, cluster-second heuristics to generate starting solutions and a variable-neighborhood descent procedure for the local search phase. The HC component assembles the final solution from the set of all routes found in the local optima reached by the GRASP. For each strategy, we discuss extensive computational experiments that analyze the impact of route-duration constraints on the VRPSD. In addition, we report state-of-the-art solutions for an established set of benchmarks for the classical VRPSD.
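
For readers unfamiliar with the metaheuristic, the GRASP-with-HC scheme described above follows a standard template: many restarts of greedy randomized construction plus local search, with all locally optimal routes pooled for the final heuristic-concentration step. The sketch below is generic; for the VRPSD, `construct` would be a randomized route-first, cluster-second heuristic and `local_search` a variable-neighborhood descent.

```python
import random

def grasp_with_hc(construct, local_search, cost, iters=100, alpha=0.3, seed=0):
    rng = random.Random(seed)
    best, pool = None, []
    for _ in range(iters):
        s = construct(rng, alpha)   # alpha sets the restricted candidate list:
                                    # 0 = purely greedy, 1 = purely random
        s = local_search(s)         # descend to a local optimum
        pool.append(s)              # keep routes for heuristic concentration
        if best is None or cost(s) < cost(best):
            best = s
    # Heuristic concentration would assemble the final solution from the
    # routes collected in `pool` (e.g., via set partitioning); omitted here.
    return best, pool
```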

Book
27 Oct 2016
TL;DR: This is the first book to cover GRASP (Greedy Randomized Adaptive Search Procedures), a metaheuristic that has enjoyed wide success in practice with a broad range of applications to real-world combinatorial optimization problems.
Abstract: This is the first book to cover GRASP (Greedy Randomized Adaptive Search Procedures), a metaheuristic that has enjoyed wide success in practice with a broad range of applications to real-world combinatorial optimization problems. The state-of-the-art coverage and carefully crafted pedagogical style make this book highly accessible as an introductory text not only to GRASP, but also to combinatorial optimization, greedy algorithms, local search, and path-relinking, as well as to heuristics and metaheuristics in general. The focus is on algorithmic and computational aspects of applied optimization with GRASP, with emphasis given to the end-user, providing sufficient information on the broad spectrum of advances in applied optimization with GRASP. For the more advanced reader, chapters on hybridization with path-relinking and parallel and continuous GRASP present these topics in a clear and concise fashion. Additionally, the book offers a very complete annotated bibliography of GRASP and combinatorial optimization. For the practitioner who needs to solve combinatorial optimization problems, the book provides a chapter with four case studies and implementable templates for all algorithms covered in the text. This book, with its excellent overview of GRASP, will appeal to researchers and practitioners of combinatorial optimization who have a need to find optimal or near-optimal solutions to hard combinatorial optimization problems.

Journal ArticleDOI
TL;DR: A method for controlling extra robotic fingers, termed “Supernumerary Robotic Fingers,” in coordination with human fingers to grasp diverse objects through grasp synergy of the hybrid human-robotic hand is presented.
Abstract: Functionality of a human hand can be augmented with wearable robotic fingers to enable grasping and manipulation of objects with a single hand. Such technology will have applications in manufacturing and construction, as well as health care. This paper presents a method for controlling extra robotic fingers, termed “Supernumerary Robotic Fingers (SR Fingers),” in coordination with human fingers to grasp diverse objects. Two hypotheses are proposed and verified through experiments. One is that humans prefer grasp posture of their fingers and that of the SR Fingers to be highly correlated when working together, which is represented with a few principal components, resembling grasp synergy in neuromotor control. The other hypothesis is that SR Finger posture can be controlled to coordinate with human finger posture via grasp synergy of the hybrid human–robotic hand. Partial least squares regression is used for predicting a desired posture of the SR Fingers from the measurement of human fingers. This method is implemented on a pair of wrist-mounted SR Fingers. Experiments demonstrate that the prototype SR Fingers can assist the human user in performing single-handed grasping tasks without requiring explicit commands.
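
The synergy mapping in the abstract above is a direct application of partial least squares regression: human finger posture in, desired SR Finger posture out, with a small number of latent components playing the role of grasp synergies. The dimensions below are assumed for illustration and the data is synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)

# Paired postures recorded while human and SR Fingers grasp together:
# 10 human joint angles -> 6 robot joint angles (dimensions assumed).
human = rng.normal(size=(500, 10))
robot = human @ rng.normal(size=(10, 6)) + 0.1 * rng.normal(size=(500, 6))

# A few latent components capture the correlated posture variation,
# mirroring the small number of grasp synergies reported in the paper.
pls = PLSRegression(n_components=3)
pls.fit(human, robot)

desired_sr_posture = pls.predict(human[:1])   # command for the SR Fingers
```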

Journal ArticleDOI
TL;DR: A novel dataset to evaluate visual grasp affordance estimation is provided and it is shown that a fused method outperforms either local or global methods alone, and that continuous pose estimation improves over discrete output models.
Abstract: Appearance-based estimation of grasp affordances is desirable when 3-D scans become unreliable due to clutter or material properties. We develop a general framework for estimating grasp affordances from 2-D sources, including local texture-like measures as well as object-category measures that capture previously learned grasp strategies. Local approaches to estimating grasp positions have been shown to be effective in real-world scenarios, but are unable to impart object-level biases and can be prone to false positives. We describe how global cues can be used to compute continuous pose estimates and corresponding grasp point locations, using a max-margin optimization for category-level continuous pose regression. We provide a novel dataset to evaluate visual grasp affordance estimation; on this dataset we show that a fused method outperforms either local or global methods alone, and that continuous pose estimation improves over discrete output models. Finally, we demonstrate our autonomous object detection and grasping system on the Willow Garage PR2 robot.

Proceedings ArticleDOI
01 Nov 2016
TL;DR: This work presents a part-based grasp planning approach that is capable of generating grasps that are applicable to multiple familiar objects and evaluates the approach in simulation, by applying it to multiple object categories and determining how successfully the planned grasps can be transferred to novel, but familiar objects.
Abstract: In this work, we present a part-based grasp planning approach that is capable of generating grasps that are applicable to multiple familiar objects. We show how object models can be decomposed according to their shape and local volumetric information. The resulting object parts are labeled with semantic information and used for generating robotic grasping information. We investigate how the transfer of such grasping information to familiar objects can be achieved and how the transferability of grasps can be measured. We show that the grasp transferability measure provides valuable information about how successfully planned grasps can be applied to novel object instances of the same object category. We evaluate the approach in simulation, by applying it to multiple object categories, and determine how successfully the planned grasps can be transferred to novel, but familiar objects. In addition, we present a use case on the humanoid robot ARMAR-III.

Patent
13 Dec 2016
TL;DR: In this article, the authors used a deep neural network to predict a measure that candidate motion data for an end effector of a robot will result in a successful grasp of one or more objects by the end-effector.
Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a deep neural network to predict a measure that candidate motion data for an end effector of a robot will result in a successful grasp of one or more objects by the end effector. Some implementations are directed to utilization of the trained deep neural network to servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector. For example, the trained deep neural network may be utilized in the iterative updating of motion control commands for one or more actuators of a robot that control the pose of a grasping end effector of the robot, and to determine when to generate grasping control commands to effectuate an attempted grasp by the grasping end effector.

Journal ArticleDOI
TL;DR: It is shown that the integrated recovery of flights and passengers can decrease both the recovery cost and the number of disrupted passengers.
Abstract: This paper considers the integrated recovery of both aircraft routing and passengers. A mathematical model is proposed based on both the flight connection network and the passenger reassignment relationship. A heuristic based on a GRASP algorithm is adopted to solve the problem. A passenger reassignment solution is demonstrated to be optimal in each iteration for a special case. The effectiveness of the heuristic is illustrated through experiments based on synthetic and real-world datasets. It is shown that the integrated recovery of flights and passengers can decrease both the recovery cost and the number of disrupted passengers.

Proceedings ArticleDOI
16 May 2016
TL;DR: This paper considers the problem of detecting robotic grasps using only the raw point cloud depth data of a scene containing unknown objects, and applies a geometric approach that categorizes objects into geometric shape primitives based on an analysis of local surface properties.
Abstract: In this paper, we present a novel grasp detection algorithm targeted towards assistive robotic manipulation systems. We consider the problem of detecting robotic grasps using only the raw point cloud depth data of a scene containing unknown objects, and apply a geometric approach that categorizes objects into geometric shape primitives based on an analysis of local surface properties. Grasps are detected without a priori models, and the approach can generalize to any number of novel objects that fall within the shape primitive categories. Our approach generates multiple candidate object grasps, which moreover are semantically meaningful and similar to what a human would generate when teleoperating the robot—and thus should be suitable manipulation goals for assistive robotic systems. An evaluation of our algorithm on 30 household objects, which includes a pilot user study and was conducted in real-world experiments using an assistive robotic arm, confirms the robustness of the detected grasps.

Journal ArticleDOI
TL;DR: In this article, a greedy randomized adaptive search procedure (GRASP) is proposed to solve the Slot Planning Problem (SPP) in the context of container stowage planning.
Abstract: This work presents a generalization of the Slot Planning Problem which arises when the liner shipping industry needs to plan the placement of containers within a vessel (stowage planning). State-of-the-art stowage planning relies on a heuristic decomposition where containers are first distributed in clusters along the vessel. For each of those clusters, a specific position for each container must be found. Compared to previous studies, we have introduced two new features: the explicit handling of rolled-out containers and the inclusion of separation rules for dangerous cargo. We present a novel integer programming formulation and a Greedy Randomized Adaptive Search Procedure (GRASP) to solve the problem. The approach is able to find high-quality solutions within 1 s. We also provide a comparison with the state of the art on an existing and a new set of benchmark instances.