Institution
Willow Garage
About: Willow Garage is a robotics research lab based in Menlo Park, California. It is known for its research contributions in the topics: Robot & Mobile robot. The organization has 76 authors who have published 191 publications receiving 28617 citations.
Topics: Robot, Mobile robot, Motion planning, Robotics, Personal robot
Papers
01 Jan 2014
TL;DR: This paper studies the impact that the user's visual access to the robot, or lack thereof, has on teaching performance, and addresses how a robot can provide additional information to an instructor during the LfD process, to optimize the two-way process of teaching and learning.
Abstract: Learning from demonstration utilizes human expertise to program a robot. We believe this approach to robot programming will facilitate the development and deployment of general-purpose personal robots that can adapt to specific user preferences. Demonstrations can potentially take place across a wide variety of environmental conditions. In this paper we study the impact that the user's visual access to the robot, or lack thereof, has on teaching performance. Based on the obtained results, we then address how a robot can provide additional information to an instructor during the LfD process, to optimize the two-way process of teaching and learning. Finally, we describe a novel Bayesian approach to generating task policies from demonstration data.
5 citations
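The Bayesian policy-generation step mentioned in the abstract above could be illustrated with a toy sketch. This is not the paper's actual model: the function name, the Dirichlet-multinomial posterior, and the demonstration data below are all assumptions chosen for illustration.

```python
from collections import Counter

def posterior_action_probs(demos, actions, alpha=1.0):
    """Posterior mean over a discrete action set for one task state.

    Uses a symmetric Dirichlet prior (pseudo-count ``alpha``) over the
    action distribution, updated with the actions seen in demonstrations.

    demos  : list of actions observed in demonstrations for this state
    actions: the full discrete action set
    alpha  : Dirichlet prior pseudo-count per action
    """
    counts = Counter(demos)
    total = len(demos) + alpha * len(actions)
    return {a: (counts[a] + alpha) / total for a in actions}

# Hypothetical demonstrations: the teacher chose "grasp" twice, "push" once.
probs = posterior_action_probs(["grasp", "grasp", "push"],
                               ["grasp", "push", "wait"])
```

Because of the prior, the unobserved action `wait` keeps nonzero probability, which is the usual motivation for a Bayesian treatment of sparse demonstration data.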
18 Jun 2013
TL;DR: In this article, the feet of a subject may be analyzed using a 3D camera, and a point cloud may be created based upon the captured images, and extraction procedures may be conducted to create individual, or discrete, point clouds for each of the feet from the overall superset point cloud created using the 3D imaging device.
Abstract: One embodiment is directed to system for analyzing the feet of a subject, wherein the subject may position and orient his feet in a capture configuration relative to a 3-dimensional camera, and the 3-dimensional camera may be utilized to capture a plurality of images about the subject's feet from a plurality of perspectives. A point cloud may be created based upon the captured images, and extraction procedures may be conducted to create individual, or discrete, point clouds for each of the feet from the overall superset point cloud created using the 3-dimensional imaging device. The discrete point clouds may be utilized to conduct various measurements of the feet, which may be utilized in various configurations, such as for shoe fitment or manufacturing.
4 citations
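The extraction step described above — splitting the superset cloud into one discrete cloud per foot — could, under simplifying assumptions, be sketched as a 1-D 2-means split along the left-right axis. The function name and the synthetic points are hypothetical; the patent's actual extraction procedure is not reproduced here.

```python
def split_feet(points, iters=10):
    """Partition (x, y, z) points into two clusters via 1-D 2-means on x.

    Assumes x is the left-right axis, so the two densest groups of
    x-coordinates correspond to the two feet.
    """
    xs = [p[0] for p in points]
    c_left, c_right = min(xs), max(xs)   # initialize centroids at the extremes
    for _ in range(iters):
        # Assign each point to the nearer centroid along x.
        left = [p for p in points if abs(p[0] - c_left) < abs(p[0] - c_right)]
        right = [p for p in points if abs(p[0] - c_left) >= abs(p[0] - c_right)]
        # Recompute centroids from the current assignment.
        if left:
            c_left = sum(p[0] for p in left) / len(left)
        if right:
            c_right = sum(p[0] for p in right) / len(right)
    return left, right

# Synthetic cloud: three points near each foot, ~20 cm apart in x.
cloud = [(-0.12, 0.0, 0.0), (-0.10, 0.10, 0.0), (-0.11, 0.0, 0.05),
         (0.10, 0.0, 0.0), (0.12, 0.10, 0.0), (0.11, 0.0, 0.05)]
left_foot, right_foot = split_feet(cloud)
```

The per-foot clouds could then feed the measurements the abstract mentions, e.g. foot length as the extent along the heel-toe axis of each cluster.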
01 Jan 2010
TL;DR: In this article, the authors present results from interviews and surveys regarding personal experiences with tools that became invisible-in-use, shedding light upon ways that robots might do the same.
Abstract: A major challenge facing human-robot interaction is understanding how people will interact and cope with increasingly agentic objects in their everyday lives. As more robotic technologies enter human environments, it is critical to consider other models of human-robot interaction that do not always require focused attention from people. Ubiquitous computing put forth the perspective that computers should not always be the focus of our attention, but that computing should weave itself into the fabric of our everyday lives. Similarly, robots might be the center of attention in some interactions, but might be even more effective when they fade into one's attentional background. In this line of thought, the current study presents results from interviews (N=19) and surveys (N=46) regarding personal experiences with tools that became invisible-in-use, shedding light upon ways that robots might do the same. We present the lessons learned from these open-ended interviews and surveys in the context of larger theories of making tools invisible-in-use (9), functional (16), ready-at-hand (8), proximal (14), and/or in the periphery of one's experience (24).
4 citations
01 Jan 2016
TL;DR: In this article, a generative model for object shapes can achieve state-of-the-art results on challenging scene text recognition tasks, and with orders of magnitude fewer training images than required for competing discriminative methods.
Abstract: We demonstrate that a generative model for object shapes can achieve state-of-the-art results on challenging scene text recognition tasks, and with orders of magnitude fewer training images than required for competing discriminative methods. In addition to transcribing text from challenging images, our method performs fine-grained instance segmentation of characters. We show that our model is more robust to both affine transformations and non-affine deformations compared to previous approaches.
4 citations
Authors
Showing all 76 results
Name | H-index | Papers | Citations |
---|---|---|---|
Ian Goodfellow | 85 | 137 | 135390 |
Kurt Konolige | 64 | 171 | 24749 |
Andreas Paepcke | 50 | 140 | 9405 |
Gunter Niemeyer | 47 | 153 | 17135 |
Radu Bogdan Rusu | 43 | 97 | 15008 |
Mike J. Dixon | 42 | 182 | 8272 |
Gary Bradski | 41 | 82 | 23763 |
Leila Takayama | 34 | 90 | 4549 |
Sachin Chitta | 34 | 56 | 4589 |
Wendy Ju | 34 | 184 | 3861 |
Maya Cakmak | 34 | 111 | 4452 |
Brian P. Gerkey | 32 | 51 | 7923 |
Caroline Pantofaru | 26 | 65 | 4116 |
Matei Ciocarlie | 25 | 91 | 3176 |
Kaijen Hsiao | 24 | 29 | 2366 |