Institution
Willow Garage
About: Willow Garage is a robotics research lab based in Menlo Park, California. It is known for research contributions in the topics: Robot & Mobile robot. The organization has 76 authors who have published 191 publications receiving 28,617 citations.
Topics: Robot, Mobile robot, Motion planning, Robotics, Personal robot
Papers
27 Feb 2013
TL;DR: A coordination structure for human-robot handovers is proposed that considers the physical and social-cognitive aspects of the interaction separately and describes how people approach, reach out their hands, and transfer objects while simultaneously coordinating the what, when, and where of handovers.
Abstract: A handover is a complex collaboration, where actors coordinate in time and space to transfer control of an object. This coordination comprises two processes: the physical process of moving to get close enough to transfer the object, and the cognitive process of exchanging information to guide the transfer. Despite this complexity, we humans are capable of performing handovers seamlessly in a wide variety of situations, even when unexpected. This suggests a common procedure that guides all handover interactions. Our goal is to codify that procedure. To that end, we first study how people hand over objects to each other in order to understand their coordination process and the signals and cues that they use and observe with their partners. Based on these studies, we propose a coordination structure for human-robot handovers that considers the physical and social-cognitive aspects of the interaction separately. This handover structure describes how people approach, reach out their hands, and transfer objects while simultaneously coordinating the what, when, and where of handovers: to agree that the handover will happen (and with what object), to establish the timing of the handover, and to decide the configuration at which the handover will occur. We experimentally evaluate human-robot handover behaviors that exploit this structure and offer design implications for seamless human-robot handover interactions.
258 citations
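The coordination structure described in the abstract pairs each physical phase with a social-cognitive agreement (what, when, where). A minimal sketch of that pairing, with phase names and agreement flags invented here for illustration (they are not the paper's terminology):

```python
# Hypothetical sketch of the handover coordination structure: the physical
# process (approach, reach, transfer) advances only as the matching
# social-cognitive agreement is established. All names are illustrative.

PHYSICAL_PHASES = ["approach", "reach", "transfer"]

def coordinate_handover(object_agreed, timing_established, config_decided):
    """Advance through the physical phases, stopping at the first phase
    whose corresponding agreement (what / when / where) is missing."""
    agreements = {
        "approach": object_agreed,       # WHAT: agree the handover happens, and with which object
        "reach": timing_established,     # WHEN: establish the timing of the handover
        "transfer": config_decided,      # WHERE: decide the handover configuration
    }
    completed = []
    for phase in PHYSICAL_PHASES:
        if not agreements[phase]:
            break
        completed.append(phase)
    return completed

# If the timing is agreed but the configuration is not, the robot reaches
# out but does not yet transfer:
print(coordinate_handover(True, True, False))  # ['approach', 'reach']
```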
06 Mar 2011
TL;DR: Support is found for the hypothesis that perceptions of robots are influenced by robots showing forethought, the task outcome (success or failure), and showing goal-oriented reactions to those task outcomes.
Abstract: The animation techniques of anticipation and reaction can help create robot behaviors that are human readable such that people can figure out what the robot is doing, reasonably predict what the robot will do next, and ultimately interact with the robot in an effective way. By showing forethought before action and expressing a reaction to the task outcome (success or failure), we prototyped a set of human-robot interaction behaviors. In a 2 (forethought vs. none: between) x 2 (reaction to outcome vs. none: between) x 2 (success vs. failure task outcome: within) experiment, we tested the influences of forethought and reaction upon people's perceptions of the robot and the robot's readability. In this online video prototype experiment (N=273), we have found support for the hypothesis that perceptions of robots are influenced by robots showing forethought, the task outcome (success or failure), and showing goal-oriented reactions to those task outcomes. Implications for theory and design are discussed.
256 citations
07 May 2011
TL;DR: This work found that the mobile embodiment of the remote worker evoked orientations toward the MRP both as a person and as a machine, leading to the formation of new usage norms among remote and local coworkers.
Abstract: As geographically distributed teams become increasingly common, there are more pressing demands for communication work practices and technologies that support distributed collaboration. One set of technologies emerging on the commercial market is mobile remote presence (MRP) systems: physically embodied videoconferencing systems that remote workers use to drive through a workplace, communicating with locals there. Our interviews, observations, and survey results from people who had 2-18 months of MRP use showed how remotely controlled mobility enabled remote workers to live and work with local coworkers almost as if they were physically there. The MRP supported informal communications and connections between distributed coworkers. We also found that the mobile embodiment of the remote worker evoked orientations toward the MRP both as a person and as a machine, leading to the formation of new usage norms among remote and local coworkers.
243 citations
03 Dec 2010
TL;DR: The results show that reactive grasping can correct for a fair amount of uncertainty in the measured position or shape of the objects, and that the grasp selection approach is successful in grasping objects with a variety of shapes.
Abstract: Robotic grasping in unstructured environments requires the ability to select grasps for unknown objects and execute them while dealing with uncertainty due to sensor noise or calibration errors. In this work, we propose a simple but robust approach to grasp selection for unknown objects, and a reactive adjustment approach to deal with uncertainty in object location and shape. The grasp selection method uses 3D sensor data directly to determine a ranked set of grasps for objects in a scene, using heuristics based on both the overall shape of the object and its local features. The reactive grasping approach uses tactile feedback from fingertip sensors to execute a compliant robust grasp. We present experimental results to validate our approach by grasping a wide range of unknown objects. Our results show that reactive grasping can correct for a fair amount of uncertainty in the measured position or shape of the objects, and that our grasp selection approach is successful in grasping objects with a variety of shapes.
232 citations
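The pipeline described above has two stages: rank candidate grasps using heuristics over the object's overall shape and local features, then adjust the grasp reactively from fingertip tactile feedback. A toy sketch of both stages, where the scoring weights, field names, and adjustment step are invented for illustration and do not reproduce the paper's actual heuristics:

```python
# Illustrative sketch of heuristic grasp ranking plus a reactive tactile
# adjustment, loosely following the two-stage approach described above.
# Weights, thresholds, and data fields are assumptions, not the paper's values.

def score_grasp(grasp):
    """Combine an overall-shape heuristic and a local-feature heuristic
    into a single ranking score (weights chosen arbitrarily)."""
    return 0.6 * grasp["shape_fit"] + 0.4 * grasp["local_fit"]

def rank_grasps(candidates):
    """Return candidate grasps sorted best-first by heuristic score."""
    return sorted(candidates, key=score_grasp, reverse=True)

def reactive_adjust(grasp, left_contact, right_contact):
    """Toy reactive step: if only one fingertip reports contact, shift the
    grasp toward the fingertip that lost contact before closing."""
    if left_contact and not right_contact:
        grasp["offset"] -= 0.01  # shift toward the right finger
    elif right_contact and not left_contact:
        grasp["offset"] += 0.01  # shift toward the left finger
    return grasp

candidates = [
    {"name": "top", "shape_fit": 0.9, "local_fit": 0.4, "offset": 0.0},
    {"name": "side", "shape_fit": 0.5, "local_fit": 0.9, "offset": 0.0},
]
best = rank_grasps(candidates)[0]
print(best["name"])  # 'top' (score 0.70 vs. 0.66)
```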
24 Jun 2013
TL;DR: A novel approach to tightly integrate visual measurements with readings from an Inertial Measurement Unit (IMU) in SLAM using the powerful concept of ‘keyframes’ to maintain a bounded-sized optimization window, ensuring real-time operation.
Abstract: The fusion of visual and inertial cues has become popular in robotics due to the complementary nature of the two sensing modalities. While most fusion strategies to date rely on filtering schemes, the visual robotics community has recently turned to non-linear optimization approaches for tasks such as visual Simultaneous Localization And Mapping (SLAM), following the discovery that this comes with significant advantages in quality of performance and computational complexity. Following this trend, we present a novel approach to tightly integrate visual measurements with readings from an Inertial Measurement Unit (IMU) in SLAM. An IMU error term is integrated with the landmark reprojection error in a fully probabilistic manner, resulting in a joint non-linear cost function to be optimized. Employing the powerful concept of 'keyframes', we partially marginalize old states to maintain a bounded-sized optimization window, ensuring real-time operation. Comparing against both vision-only and loosely-coupled visual-inertial algorithms, our experiments confirm the benefits of tight fusion in terms of accuracy and robustness.
225 citations
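The joint cost described above, combining landmark reprojection errors with IMU error terms, has the following general shape; the symbols here are chosen for illustration (visual residuals $\mathbf{e}_{r}$ over keyframes $k$ and landmarks $j$, inertial residuals $\mathbf{e}_{s}$ between consecutive keyframes, each weighted by an information matrix $\mathbf{W}$) and are not taken verbatim from the paper:

```latex
J(\mathbf{x}) =
\underbrace{\sum_{k}\sum_{j}
  \mathbf{e}_{r}^{k,j\,\top}\,\mathbf{W}_{r}^{k,j}\,\mathbf{e}_{r}^{k,j}}_{\text{visual (reprojection) errors}}
\;+\;
\underbrace{\sum_{k}
  \mathbf{e}_{s}^{k\,\top}\,\mathbf{W}_{s}^{k}\,\mathbf{e}_{s}^{k}}_{\text{IMU (inertial) error terms}}
```

Minimizing $J(\mathbf{x})$ over the keyframe states and landmarks, while marginalizing states that drop out of the bounded window, is what keeps the optimization real-time.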
Authors
Showing all 76 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Ian Goodfellow | 85 | 137 | 135390 |
| Kurt Konolige | 64 | 171 | 24749 |
| Andreas Paepcke | 50 | 140 | 9405 |
| Gunter Niemeyer | 47 | 153 | 17135 |
| Radu Bogdan Rusu | 43 | 97 | 15008 |
| Mike J. Dixon | 42 | 182 | 8272 |
| Gary Bradski | 41 | 82 | 23763 |
| Leila Takayama | 34 | 90 | 4549 |
| Sachin Chitta | 34 | 56 | 4589 |
| Wendy Ju | 34 | 184 | 3861 |
| Maya Cakmak | 34 | 111 | 4452 |
| Brian P. Gerkey | 32 | 51 | 7923 |
| Caroline Pantofaru | 26 | 65 | 4116 |
| Matei Ciocarlie | 25 | 91 | 3176 |
| Kaijen Hsiao | 24 | 29 | 2366 |