
Willow Garage

About: Willow Garage was a robotics research lab based in Menlo Park, California. It is known for research contributions in the topics: Robot and Mobile robot. The organization has 76 authors who have published 191 publications receiving 28,617 citations.

Papers published on a yearly basis

Papers
Book ChapterDOI
01 Jan 2014
TL;DR: This paper studies the impact that the user's visual access to the robot, or lack thereof, has on teaching performance, addresses how a robot can provide additional information to an instructor during the LfD process to optimize the two-way process of teaching and learning, and describes a novel Bayesian approach to generating task policies from demonstration data.
Abstract: Learning from demonstration utilizes human expertise to program a robot. We believe this approach to robot programming will facilitate the development and deployment of general purpose personal robots that can adapt to specific user preferences. Demonstrations can potentially take place across a wide variety of environmental conditions. In this paper we study the impact that the user's visual access to the robot, or lack thereof, has on teaching performance. Based on the obtained results, we then address how a robot can provide additional information to an instructor during the LfD process, to optimize the two-way process of teaching and learning. Finally, we describe a novel Bayesian approach to generating task policies from demonstration data.

5 citations

Patent
18 Jun 2013
TL;DR: In this article, the feet of a subject may be analyzed using a 3D camera, and a point cloud may be created based upon the captured images, and extraction procedures may be conducted to create individual, or discrete, point clouds for each of the feet from the overall superset point cloud created using the 3D imaging device.
Abstract: One embodiment is directed to system for analyzing the feet of a subject, wherein the subject may position and orient his feet in a capture configuration relative to a 3-dimensional camera, and the 3-dimensional camera may be utilized to capture a plurality of images about the subject's feet from a plurality of perspectives. A point cloud may be created based upon the captured images, and extraction procedures may be conducted to create individual, or discrete, point clouds for each of the feet from the overall superset point cloud created using the 3-dimensional imaging device. The discrete point clouds may be utilized to conduct various measurements of the feet, which may be utilized in various configurations, such as for shoe fitment or manufacturing.

4 citations
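The extraction step described above, separating the combined point cloud into a discrete cloud per foot, could be approximated with a simple lateral split. The sketch below is illustrative only (the `split_feet` helper is hypothetical, not the patent's method) and assumes the two feet are separated along the x axis; a real pipeline would use proper Euclidean clustering:

```python
def split_feet(points):
    """Partition a combined (x, y, z) point cloud into two discrete
    clouds (left and right foot) via a 1-D split on the lateral axis.

    Illustration only: assumes the feet are laterally separated, so
    the widest gap between consecutive x values marks the boundary.
    """
    xs = sorted(p[0] for p in points)
    # Midpoint of the largest gap between consecutive x values.
    gaps = [(xs[i + 1] - xs[i], (xs[i] + xs[i + 1]) / 2)
            for i in range(len(xs) - 1)]
    _, boundary = max(gaps)
    left = [p for p in points if p[0] < boundary]
    right = [p for p in points if p[0] >= boundary]
    return left, right
```

With two tight clusters of points around x = 0 and x = 1, the function returns one cloud per cluster; measurements (length, width, arch height) would then be taken on each discrete cloud.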

Leila Takayama
01 Jan 2010
TL;DR: In this article, the authors present results from interviews and surveys regarding personal experiences with tools that became invisible-in-use, shedding light upon ways that robots might do the same.
Abstract: A major challenge facing human-robot interaction is understanding how people will interact and cope with increasingly agentic objects in their everyday lives. As more robotic technologies enter human environments, it is critical to consider other models of human-robot interaction that do not always require focused attention from people. Ubiquitous computing put forth the perspective that computers should not always be the focus of our attention, but that computing should weave itself into the fabric of our everyday lives. Similarly, robots might be the center of attention in some interactions, but might be even more effective when they fade into one's attentional background. In this line of thought, the current study presents results from interviews (N=19) and surveys (N=46) regarding personal experiences with tools that became invisible-in-use, shedding light upon ways that robots might do the same. We present the lessons learned from these open-ended interviews and surveys in the context of larger theories of making tools invisible-in-use (9), functional (16), ready-at-hand (8), proximal (14), and/or in the periphery of one's experience (24).

4 citations

Proceedings Article
01 Jan 2016
TL;DR: In this article, a generative model for object shapes can achieve state-of-the-art results on challenging scene text recognition tasks, and with orders of magnitude fewer training images than required for competing discriminative methods.
Abstract: We demonstrate that a generative model for object shapes can achieve state of the art results on challenging scene text recognition tasks, and with orders of magnitude fewer training images than required for competing discriminative methods. In addition to transcribing text from challenging images, our method performs fine-grained instance segmentation of characters. We show that our model is more robust to both affine transformations and non-affine deformations compared to previous approaches.

4 citations


Authors

Showing 15 of 76 authors

Name                   H-index   Papers   Citations
Ian Goodfellow         85        137      135390
Kurt Konolige          64        171      24749
Andreas Paepcke        50        140      9405
Gunter Niemeyer        47        153      17135
Radu Bogdan Rusu       43        97       15008
Mike J. Dixon          42        182      8272
Gary Bradski           41        82       23763
Leila Takayama         34        90       4549
Sachin Chitta          34        56       4589
Wendy Ju               34        184      3861
Maya Cakmak            34        111      4452
Brian P. Gerkey        32        51       7923
Caroline Pantofaru     26        65       4116
Matei Ciocarlie        25        91       3176
Kaijen Hsiao           24        29       2366
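The H-index values in the table above follow the standard definition: the largest h such that the author has at least h papers with at least h citations each. A minimal sketch of the computation from a raw per-paper citation list:

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break  # sorted descending, so no later paper can
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that the table's Papers column counts all publications, while only the most-cited ones contribute to the h-index, which is why authors with similar paper counts can have very different h-indices.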
Network Information

Related Institutions (5)

Adobe Systems: 8K papers, 214.7K citations (85% related)
Mitsubishi Electric Research Laboratories: 3.8K papers, 131.6K citations (84% related)
Google: 39.8K papers, 2.1M citations (84% related)
Facebook: 10.9K papers, 570.1K citations (83% related)

Performance Metrics

Number of papers from the institution in previous years:

Year    Papers
2017    2
2016    4
2015    2
2014    14
2013    36
2012    39