Institution
Willow Garage
About: Willow Garage is a robotics research lab based in Menlo Park, California. It is known for research contributions in the topics of Robot and Mobile robot. The organization has 76 authors who have published 191 publications receiving 28,617 citations.
Topics: Robot, Mobile robot, Motion planning, Robotics, Personal robot
Papers
TL;DR: Results show that auditory cues provide important knowledge about the robot's internal state, while visual observation of a robot can hinder an instructor due to incorrect mental models of the robot and distractions from the robot's movements.
36 citations
09 Sep 2013
TL;DR: This paper proposes to use human detections as cues to more accurately estimate the vanishing points of highly cluttered indoor scenes, and shows that this approach improves 3D interpretation of scenes.
Abstract: Recovering the spatial layout of cluttered indoor scenes is a challenging problem. Current methods generate layout hypotheses from vanishing point estimates produced using 2D image features. This method fails in highly cluttered scenes in which most of the image features come from clutter instead of the room’s geometric structure. In this paper, we propose to use human detections as cues to more accurately estimate the vanishing points. Our method is built on top of the fact that people are often the focus of indoor scenes, and that the scene and the people within the scene should have consistent geometric configurations in 3D space. We contribute a new data set of highly cluttered indoor scenes containing people, on which we provide baselines and evaluate our method. This evaluation shows that our approach improves 3D interpretation of scenes.
35 citations
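A minimal geometric sketch (not code from the paper) of how upright-person detections can constrain a vanishing point: the line through each detected person's head and foot points passes near the vertical vanishing point, which can then be recovered as a least-squares intersection. The `feet` and `heads` arrays below are hypothetical detector outputs.

```python
import numpy as np

def vertical_vanishing_point(feet, heads):
    """Estimate the vertical vanishing point from upright-person detections.

    feet, heads: (N, 2) arrays of image points at the bottom and top of
    each person bounding box. Each head-foot line passes near the vertical
    vanishing point, so we stack the homogeneous line equations and take
    the singular vector with the smallest singular value.
    """
    feet_h = np.hstack([feet, np.ones((len(feet), 1))])
    heads_h = np.hstack([heads, np.ones((len(heads), 1))])
    lines = np.cross(feet_h, heads_h)   # homogeneous line l = p x q
    _, _, vt = np.linalg.svd(lines)
    v = vt[-1]                          # minimizes sum_i (l_i . v)^2
    return v[:2] / v[2]                 # assumes the VP is finite

# hypothetical detections: three people standing on a level floor
feet = np.array([[100.0, 400.0], [300.0, 420.0], [500.0, 390.0]])
heads = np.array([[105.0, 200.0], [295.0, 210.0], [505.0, 195.0]])
print(vertical_vanishing_point(feet, heads))
```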
24 Dec 2012
TL;DR: This work combines visual features and RGB-D data in a simple and effective way to segment objects from robot sensory data and uses a Dirichlet process to cluster and recognize objects.
Abstract: A useful capability for a mobile robot is the ability to recognize the objects in its environment that move and change (as distinct from background objects, which are largely stationary). This ability can improve the accuracy and reliability of localization and mapping, enhance the ability of the robot to interact with its environment, and facilitate applications such as inventory management and theft detection. Rather than viewing this task as a difficult application of object recognition methods from computer vision, this work is in line with a recent trend in the community towards unsupervised object discovery and tracking that exploits the fundamentally temporal nature of the data acquired by a robot. Unlike earlier approaches, which relied heavily upon computationally intensive techniques from mapping and computer vision, our approach combines visual features and RGB-D data in a simple and effective way to segment objects from robot sensory data. We then use a Dirichlet process to cluster and recognize objects. The performance of our approach is demonstrated in several test domains.
35 citations
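As a rough illustration of the clustering step, scikit-learn's `BayesianGaussianMixture` with a Dirichlet-process prior can group per-object feature vectors without fixing the number of objects in advance. The paper's actual model and features may differ; the `features` array here is synthetic.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# synthetic stand-in for per-segment descriptors (visual + RGB-D features)
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(loc=c, scale=0.3, size=(40, 8))
                      for c in (0.0, 2.0, 4.0)])

# truncated Dirichlet-process mixture: unused components get ~zero weight,
# so the effective number of object classes is inferred from the data
dpgmm = BayesianGaussianMixture(
    n_components=15,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="diag",
    random_state=0,
)
labels = dpgmm.fit_predict(features)
print(np.unique(labels))   # typically ~3 clusters survive for this data
```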
01 Jan 2013
TL;DR: This work presents fast and novel algorithms to perform k-NN (k-nearest neighbor) queries in high-dimensional configuration spaces based on locality-sensitive hashing and derives tight bounds on their accuracy.
Abstract: We present a novel approach to improve the performance of sample-based motion planners by learning from prior instances. Our formulation stores the results of prior collision and local planning queries. This information is used to accelerate the performance of planners based on probabilistic collision checking, select new local paths in free space, and compute an efficient order to perform queries along a search path in a graph. We present fast and novel algorithms to perform k-NN (k-nearest neighbor) queries in high-dimensional configuration spaces based on locality-sensitive hashing and derive tight bounds on their accuracy. The k-NN queries are used to perform instance-based learning and have a sub-linear time complexity. Our approach is general, makes no assumption about the sampling scheme, and can be used with various sample-based motion planners, including PRM, Lazy-PRM, RRT and RRT*, by making small changes to these planners. We observe up to 100% improvement in the performance of various planners on rigid and articulated robots.
34 citations
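The paper's k-NN structure is specialized to configuration spaces, but the basic locality-sensitive hashing idea can be sketched with random hyperplanes: configurations whose projections share a sign pattern land in the same bucket, so a query ranks only a small candidate set rather than the whole database (sub-linear on average). Everything below is a generic illustration, not the authors' algorithm.

```python
import numpy as np

class HyperplaneLSH:
    """Random-hyperplane LSH for approximate k-NN over configurations."""

    def __init__(self, dim, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}

    def _key(self, q):
        # sign pattern of the projections serves as the hash key
        return tuple(int(s) for s in (self.planes @ q > 0))

    def insert(self, q):
        self.buckets.setdefault(self._key(q), []).append(q)

    def query(self, q, k=5):
        # rank only configurations that hashed to the same bucket;
        # an empty bucket simply returns no neighbors in this toy version
        cands = self.buckets.get(self._key(q), [])
        return sorted(cands, key=lambda c: np.linalg.norm(c - q))[:k]

# e.g. sampled configurations of a 7-DOF arm (hypothetical data)
lsh = HyperplaneLSH(dim=7)
rng = np.random.default_rng(1)
for _ in range(1000):
    lsh.insert(rng.uniform(-np.pi, np.pi, 7))
neighbors = lsh.query(rng.uniform(-np.pi, np.pi, 7), k=5)
```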
23 Jun 2013
TL;DR: In this paper, the authors propose a framework of planning with experience graphs, which encode and reuse previous experiences, for constrained manipulation tasks, e.g., door opening and drawer opening, where the motion of the object itself involves only a single degree of freedom.
Abstract: Motion planning in high-dimensional state spaces, such as for mobile manipulation, is a challenging problem. Constrained manipulation, e.g., opening articulated objects like doors or drawers, is also hard since sampling states on the constrained manifold is expensive. Further, planning for such tasks requires a combination of planning in free space for reaching a desired grasp or contact location followed by planning for the constrained manipulation motion, often necessitating a slow two-step process in traditional approaches. In this work, we show that combined planning for such tasks can be dramatically accelerated by providing user demonstrations of the constrained manipulation motions. In particular, we show how such demonstrations can be incorporated into a recently developed framework of planning with experience graphs, which encode and reuse previous experiences. We focus on tasks involving articulation constraints, e.g., door opening or drawer opening, where the motion of the object itself involves only a single degree of freedom. We provide experimental results with the PR2 robot opening a variety of such articulated objects with our approach, using full-body manipulation after receiving kinesthetic demonstrations. We also provide simulated results highlighting the benefits of our approach for constrained manipulation tasks.
31 citations
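Because the articulated object has a single degree of freedom, end-effector goals along the constrained motion can be parameterized by one joint variable. A toy sketch of that idea, with a hypothetical hinge location and radius (this is not the paper's experience-graph machinery):

```python
import numpy as np

def door_handle_pose(hinge_xy, radius, theta):
    """Planar end-effector goal for a door handle swinging about a hinge.

    The door's single degree of freedom is the opening angle `theta`;
    the handle traces a circular arc and the gripper stays tangent to it.
    """
    x = hinge_xy[0] + radius * np.cos(theta)
    y = hinge_xy[1] + radius * np.sin(theta)
    yaw = theta + np.pi / 2.0   # tangent direction of the arc
    return np.array([x, y, yaw])

# waypoints along the 1-DOF constraint manifold for a 60-degree opening
waypoints = [door_handle_pose((0.0, 0.0), 0.8, t)
             for t in np.linspace(0.0, np.pi / 3.0, 10)]
```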
Authors
Showing 15 of 76 authors
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Ian Goodfellow | 85 | 137 | 135390 |
| Kurt Konolige | 64 | 171 | 24749 |
| Andreas Paepcke | 50 | 140 | 9405 |
| Gunter Niemeyer | 47 | 153 | 17135 |
| Radu Bogdan Rusu | 43 | 97 | 15008 |
| Mike J. Dixon | 42 | 182 | 8272 |
| Gary Bradski | 41 | 82 | 23763 |
| Leila Takayama | 34 | 90 | 4549 |
| Sachin Chitta | 34 | 56 | 4589 |
| Wendy Ju | 34 | 184 | 3861 |
| Maya Cakmak | 34 | 111 | 4452 |
| Brian P. Gerkey | 32 | 51 | 7923 |
| Caroline Pantofaru | 26 | 65 | 4116 |
| Matei Ciocarlie | 25 | 91 | 3176 |
| Kaijen Hsiao | 24 | 29 | 2366 |