
Showing papers by "Willow Garage" published in 2012


Proceedings ArticleDOI
14 May 2012
TL;DR: A new collision and proximity library that integrates several techniques for fast and accurate collision checking and proximity computation; it is based on hierarchical representations and designed to perform multiple proximity queries on different model representations.
Abstract: We present a new collision and proximity library that integrates several techniques for fast and accurate collision checking and proximity computation. Our library is based on hierarchical representations and designed to perform multiple proximity queries on different model representations. The set of queries includes discrete collision detection, continuous collision detection, separation distance computation and penetration depth estimation. The input models may correspond to triangulated rigid or deformable models and articulated models. Moreover, FCL can perform probabilistic collision checking between noisy point clouds that are captured using cameras or LIDAR sensors. The main benefit of FCL lies in the fact that it provides a unified interface that can be used by various applications. Furthermore, its flexible architecture makes it easier to implement new algorithms within this framework. The runtime performance of the library is comparable to state of the art collision and proximity algorithms. We demonstrate its performance on synthetic datasets as well as motion planning and grasping computations performed using a two-armed mobile manipulation robot.
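The broad-phase primitive underlying hierarchical collision libraries such as FCL can be illustrated with axis-aligned bounding boxes (AABBs), the simplest bounding volume used in such hierarchies. The sketch below is illustrative Python with hypothetical names, not FCL's actual C++ API:

```python
def aabb_overlap(a, b):
    """Discrete collision test for two AABBs.

    Each box is ((min_x, min_y, min_z), (max_x, max_y, max_z)).
    Boxes are disjoint iff they are separated along some single axis.
    """
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def separation_distance(a, b):
    """Euclidean separation distance between two AABBs (0 if they overlap)."""
    (amin, amax), (bmin, bmax) = a, b
    # Per-axis gap between the boxes, clamped at zero when they overlap.
    gap = [max(bmin[i] - amax[i], amin[i] - bmax[i], 0.0) for i in range(3)]
    return sum(g * g for g in gap) ** 0.5
```

A bounding-volume hierarchy applies exactly these tests recursively, descending into child volumes only when the parents overlap, which is what makes the queries fast on large triangulated models.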

445 citations


Journal ArticleDOI
TL;DR: A rapidly growing group of people can acquire 3-D data cheaply and in real time, as these sensors are commodity hardware and sold at low cost.
Abstract: With the advent of new-generation depth sensors, the use of three-dimensional (3-D) data is becoming increasingly popular. As these sensors are commodity hardware and sold at low cost, a rapidly growing group of people can acquire 3-D data cheaply and in real time.

368 citations


Proceedings ArticleDOI
14 May 2012
TL;DR: This work presents a novel lossy compression approach for point cloud streams which exploits spatial and temporal redundancy within the point data and presents a technique for comparing the octree data structures of consecutive point clouds.
Abstract: We present a novel lossy compression approach for point cloud streams which exploits spatial and temporal redundancy within the point data. Our proposed compression framework can handle general point cloud streams of arbitrary and varying size, point order and point density. Furthermore, it allows for controlling coding complexity and coding precision. To compress the point clouds, we perform a spatial decomposition based on octree data structures. Additionally, we present a technique for comparing the octree data structures of consecutive point clouds. By encoding their structural differences, we can successively extend the point clouds at the decoder. In this way, we are able to detect and remove temporal redundancy from the point cloud data stream. Our experimental results show a strong compression performance of a ratio of 14 at 1 mm coordinate precision and up to 40 at a coordinate precision of 9 mm.
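The temporal part of the scheme, encoding only the structural difference between consecutive spatial decompositions, can be illustrated at a single level with a flat voxel set. This is a simplified Python sketch of the idea, not the paper's codec; the non-hierarchical voxelization and the names are illustrative:

```python
def voxelize(points, resolution):
    """Map 3-D points to the set of occupied voxel indices (a crude
    stand-in for one level of an octree decomposition)."""
    return {tuple(int(c // resolution) for c in p) for p in points}

def encode_diff(prev_voxels, curr_voxels):
    """Encoder side: transmit only voxels that appeared and vanished
    between consecutive frames, removing temporal redundancy."""
    return curr_voxels - prev_voxels, prev_voxels - curr_voxels

def apply_diff(prev_voxels, added, removed):
    """Decoder side: extend the previous frame by the structural
    difference to reconstruct the current frame."""
    return (prev_voxels | added) - removed
```

When consecutive clouds overlap heavily, the two difference sets are small, which is the source of the compression gain; coarsening the voxel resolution trades coordinate precision for a higher ratio, as in the reported 14x at 1 mm versus 40x at 9 mm.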

341 citations


Book ChapterDOI
18 Jun 2012
TL;DR: A system for acquiring and processing 3D (semantic) information at frame rates of up to 30Hz that allows a mobile robot to reliably detect obstacles and segment graspable objects and supporting surfaces as well as the overall scene geometry.
Abstract: Real-time 3D perception of the surrounding environment is a crucial precondition for the reliable and safe application of mobile service robots in domestic environments. Using an RGB-D camera, we present a system for acquiring and processing 3D (semantic) information at frame rates of up to 30 Hz that allows a mobile robot to reliably detect obstacles and segment graspable objects and supporting surfaces as well as the overall scene geometry. Using integral images, we compute local surface normals. The points are then clustered, segmented, and classified in both normal space and spherical coordinates. The system is tested in different setups in a real household environment. The results show that the system is capable of reliably detecting obstacles at high frame rates, even in case of obstacles that move fast or do not considerably stick out of the ground. The segmentation of all planes in the 3D data even allows for correcting characteristic measurement errors and for reconstructing the original scene geometry in far ranges.

324 citations


Proceedings ArticleDOI
24 Dec 2012
TL;DR: In this article, the authors present two real-time methods for estimating surface normals from organized point cloud data using integral images to perform highly efficient border and depth-dependent smoothing and covariance estimation.
Abstract: In this paper we present two real-time methods for estimating surface normals from organized point cloud data. The proposed algorithms use integral images to perform highly efficient border- and depth-dependent smoothing and covariance estimation. We show that this approach makes it possible to obtain robust surface normals from large point clouds at high frame rates and therefore, can be used in real-time computer vision algorithms that make use of Kinect-like data.
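The integral image (summed-area table) is what makes the smoothing and covariance estimation O(1) per pixel. The sketch below illustrates the data structure only, not the paper's full normal-estimation pipeline:

```python
def integral_image(img):
    """Summed-area table with a one-cell border of zeros:
    I[y][x] = sum of img over all rows < y and columns < x."""
    h, w = len(img), len(img[0])
    I = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0.0
        for x in range(w):
            row_sum += img[y][x]
            I[y + 1][x + 1] = I[y][x + 1] + row_sum
    return I

def box_sum(I, x0, y0, x1, y1):
    """Sum of img over the rectangle [x0, x1) x [y0, y1), in O(1)
    via four table lookups regardless of the window size."""
    return I[y1][x1] - I[y0][x1] - I[y1][x0] + I[y0][x0]
```

With such tables over the depth channel (and over per-pixel products, for covariance matrices), the average over any smoothing window costs four lookups, so tangent vectors and hence normals can be estimated for every pixel of an organized cloud at frame rate.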

179 citations


Proceedings ArticleDOI
05 Mar 2012
TL;DR: Four different strategies for performing grasping tasks were implemented and analyzed, ranging from direct, real-time operator control of the end-effector pose to autonomous motion and grasp planning that is simply adjusted or confirmed by the operator.
Abstract: Human-in-the-loop robotic systems have the potential to handle complex tasks in unstructured environments, by combining the cognitive skills of a human operator with autonomous tools and behaviors. Along these lines, we present a system for remote human-in-the-loop grasp execution. An operator uses a computer interface to visualize a physical robot and its surroundings, and a point-and-click mouse interface to command the robot. We implemented and analyzed four different strategies for performing grasping tasks, ranging from direct, real-time operator control of the end-effector pose to autonomous motion and grasp planning that is simply adjusted or confirmed by the operator. Our controlled experiment (N=48) results indicate that people were able to successfully grasp more objects and caused fewer unwanted collisions when using the strategies with more autonomous assistance. We used an untethered robot over wireless communications, making our strategies applicable for remote, human-in-the-loop robotic applications.

178 citations


Proceedings ArticleDOI
14 May 2012
TL;DR: This work introduces a methodology for learning 3D descriptors from synthetic CAD models and classifying never-before-seen objects at first glance, with classification rates and speed suited to robotics tasks.
Abstract: 3D object and object class recognition gained momentum with the arrival of low-cost RGB-D sensors and enables robotics tasks not feasible years ago. Scaling object class recognition to hundreds of classes still requires extensive time and many objects for learning. To overcome the training issue, we introduce a methodology for learning 3D descriptors from synthetic CAD models and classifying never-before-seen objects at first glance, where classification rates and speed are suited for robotics tasks. We provide this in 3DNet (3d-net.org), a free resource for object class recognition and 6DOF pose estimation from point cloud data. 3DNet provides a large-scale hierarchical CAD-model database with increasing numbers of classes and difficulty, with 10, 50, 100 and 200 object classes, together with evaluation datasets that contain thousands of scenes captured with an RGB-D sensor. 3DNet further provides an open-source framework based on the Point Cloud Library (PCL) for testing new descriptors and benchmarking state-of-the-art descriptors, together with pose estimation procedures to enable robotics tasks such as search and grasping.

150 citations


BookDOI
09 Jul 2012
TL;DR: Validation on a real robot shows that the grasp evaluation method accurately predicts the outcome of a grasp, and that the approach, in conjunction with state-of-the-art object recognition tools, is applicable in real-life scenes that are highly cluttered and constrained.
Abstract: We propose a planning method for grasping in cluttered environments, where the robot can make simultaneous contact with multiple objects, manipulating them in a deliberate and controlled fashion. This enables the robot to reach for and grasp the target while simultaneously contacting and moving aside obstacles in order to clear a desired path. We use a physics-based analysis of pushing to compute the motion of each object in the scene in response to a set of possible robot motions. In order to make the problem computationally tractable, we enable multiple simultaneous robot-object interactions, which we pre-compute and cache, but avoid object-object interactions. Tests on large sets of simulated scenes show that our planner produces more successful grasps in more complex scenes than versions that avoid any interaction with surrounding clutter. Validation on a real robot shows that our grasp evaluation method accurately predicts the outcome of a grasp, and that our approach, in conjunction with state-of-the-art object recognition tools, is applicable in real-life scenes that are highly cluttered and constrained.

132 citations


Proceedings ArticleDOI
04 Jun 2012
TL;DR: A multi-robot collision avoidance system based on the velocity obstacle paradigm that alleviates the strong requirement for perfect sensing using Adaptive Monte-Carlo Localization on a per-agent level and combines the computation for collision-free motion with localization uncertainty.
Abstract: This paper describes a multi-robot collision avoidance system based on the velocity obstacle paradigm. In contrast to previous approaches, we alleviate the strong requirement for perfect sensing (i.e. global positioning) using Adaptive Monte-Carlo Localization on a per-agent level. While such methods as Optimal Reciprocal Collision Avoidance guarantee local collision-free motion for a large number of robots, given perfect knowledge of positions and speeds, a realistic implementation requires further extensions to deal with inaccurate localization and message passing delays. The presented algorithm bounds the error introduced by localization and combines the computation for collision-free motion with localization uncertainty. We provide an open source implementation using the Robot Operating System (ROS). The system is tested and evaluated with up to eight robots in simulation and on four differential drive robots in a real-world situation.
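The velocity obstacle test itself reduces to a ray-disc intersection in relative coordinates: a candidate velocity is unsafe if, holding velocities constant, the relative position ray enters the combined footprint disc. The sketch below uses hypothetical names; localization uncertainty can be folded in by inflating the combined radius, in the spirit of the paper's error bound:

```python
import math

def in_velocity_obstacle(rel_pos, rel_vel, combined_radius, horizon=float("inf")):
    """True if two discs, moving with constant velocities, collide within
    the time horizon. rel_pos / rel_vel are 2-D vectors of robot B
    relative to robot A; combined_radius is the sum of the two footprint
    radii, optionally inflated by a localization error bound."""
    px, py = rel_pos
    vx, vy = rel_vel
    # Solve |rel_pos + t * rel_vel| = combined_radius for the earliest t >= 0.
    a = vx * vx + vy * vy
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py - combined_radius ** 2
    if c <= 0:                # already overlapping
        return True
    if a == 0:                # no relative motion
        return False
    disc = b * b - 4 * a * c
    if disc < 0:              # the ray misses the disc entirely
        return False
    t = (-b - math.sqrt(disc)) / (2 * a)
    return 0 <= t <= horizon
```

A reciprocal scheme such as ORCA then chooses, among the velocities for which this test is false, the one closest to the robot's preferred velocity.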

103 citations


Proceedings ArticleDOI
14 May 2012
TL;DR: This work presents a fast, integrated approach to solve path planning in 3D using a combination of an efficient octree-based representation of the 3D world and an anytime search-based motion planner to improve planning speed.
Abstract: Collision-free navigation in cluttered environments is essential for any mobile manipulation system. Traditional navigation systems have relied on a 2D grid map projected from a 3D representation for efficiency. This approach, however, prevents navigation close to objects in situations where projected 3D configurations are in collision within the 2D grid map even if actually no collision occurs in the 3D environment. Accordingly, when using such a 2D representation for planning paths of a mobile manipulation robot, the number of planning problems which can be solved is limited and suboptimal robot paths may result. We present a fast, integrated approach to solve path planning in 3D using a combination of an efficient octree-based representation of the 3D world and an anytime search-based motion planner. Our approach utilizes a combination of multi-layered 2D and 3D representations to improve planning speed, allowing the generation of almost real-time plans with bounded sub-optimality. We present extensive experimental results with the two-armed mobile manipulation robot PR2 carrying large objects in a highly cluttered environment. Using our approach, the robot is able to efficiently plan and execute trajectories while transporting objects, thereby often moving through demanding, narrow passageways.

93 citations


Proceedings ArticleDOI
Leila Takayama1, Caroline Pantofaru1, David Robson1, Bianca Soto1, Michael Barry 
05 Sep 2012
TL;DR: This work conducts need-finding interviews in the field among people who have already introduced automation into their homes and kept it there (home automators), and presents the lessons learned as frameworks and implications for the values that domestic technology should support.
Abstract: Home and automation are not natural partners--one homey and the other cold. Most current automation in the home is packaged in the form of appliances. To better understand the current reality and possible future of living with other types of domestic technology, we went out into the field to conduct need finding interviews among people who have already introduced automation into their homes and kept it there--home automators. We present the lessons learned from these home automators as frameworks and implications for the values that domestic technology should support. In particular, we focus on the satisfaction and meaning that the home automators derived from their projects, especially in connecting to their homes (rather than simply controlling their homes). These results point the way toward other technologies designed for our everyday lives at home.

Journal ArticleDOI
TL;DR: This work presents an approach to mobile pick and place in human environments using a combination of two-dimensional and three-dimensional visual processing, tactile and proprioceptive sensor data, fast motion planning, reactive control and monitoring, and reactive grasping.
Abstract: Unstructured human environments present a substantial challenge to effective robotic operation. Mobile manipulation in human environments requires dealing with novel unknown objects, cluttered workspaces, and noisy sensor data. We present an approach to mobile pick and place in such environments using a combination of two-dimensional (2-D) and three-dimensional (3-D) visual processing, tactile and proprioceptive sensor data, fast motion planning, reactive control and monitoring, and reactive grasping. We demonstrate our approach by using a two-arm mobile manipulation system to pick and place objects. Reactive components allow our system to account for uncertainty arising from noisy sensors, inaccurate perception (e.g., object detection or registration), or dynamic changes in the environment. We also present a set of tools that allows our system to be easily configured within a short time for a new robotic system.

BookDOI
09 Jul 2012
TL;DR: This paper develops an online motion planning approach which learns from its planning episodes (experiences) a graph, an Experience Graph, which represents the underlying connectivity of the space required for the execution of the mundane tasks performed by the robot.
Abstract: Human environments possess a significant amount of underlying structure that is under-utilized in motion planning and mobile manipulation. In domestic environments, for example, walls and shelves are static, large objects such as furniture and kitchen appliances most of the time do not move and do not change, and objects are typically placed on a limited number of support surfaces such as tables, countertops or shelves. Motion planning for robots operating in such environments should be able to exploit this structure to improve its performance with each execution of a task. In this paper, we develop an online motion planning approach which learns from its planning episodes (experiences) a graph, an Experience Graph. This graph represents the underlying connectivity of the space required for the execution of the mundane tasks performed by the robot. The planner uses the Experience Graph to accelerate its planning efforts whenever possible. On the theoretical side, we show that planning with Experience Graphs is complete and provides bounds on sub-optimality with respect to the graph that represents the original planning problem. On the experimental side, we show in simulations and on a physical robot that our approach is particularly suitable for higher-dimensional motion planning tasks such as planning for single-arm manipulation and two-armed mobile manipulation. The approach provides significant speedups over planning from scratch and generates predictable motion plans: motions planned from start positions that are close to each other to goal positions that are also close to each other tend to be similar. In addition, we show how Experience Graphs can incorporate solutions from other approaches such as human demonstrations, providing an easy way of bootstrapping motion planning for complex tasks.
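The reuse idea can be caricatured in a few lines: remember solved paths by discounting their edges, so that later searches are drawn back to previously traveled routes, which also makes plans consistent across similar queries. This toy Python sketch is far simpler than the paper's Experience Graph heuristic; all names and the discounting scheme are illustrative:

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: dict node -> {neighbor: cost}. Returns (cost, path) or None."""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
    return None

class ExperienceGraph:
    """Toy sketch: edges along previously found solutions get a
    discounted cost, biasing later searches to reuse these 'highways'."""
    def __init__(self, base_graph, reuse_discount=0.5):
        self.graph = {n: dict(e) for n, e in base_graph.items()}
        self.discount = reuse_discount

    def plan(self, start, goal):
        result = dijkstra(self.graph, start, goal)
        if result:
            _, path = result
            for a, b in zip(path, path[1:]):   # cache the experience
                self.graph[a][b] *= self.discount
        return result
```

In the paper the speedup comes from a heuristic that guides an anytime search along the experience edges while preserving bounded sub-optimality; the discounting above merely conveys the "prefer what worked before" intuition.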

BookDOI
09 Jul 2012
TL;DR: In this article, a method for segmentation, pose estimation and recognition of transparent objects from a single RGB-D image from a Kinect sensor is proposed: the sensor's weakness in perceiving transparent objects is exploited for their segmentation, and edge fitting is used for recognition and pose estimation.
Abstract: Recognizing and determining the 6DOF pose of transparent objects is necessary in order for robots to manipulate such objects. However, it is a challenging problem for computer vision. We propose new algorithms for segmentation, pose estimation and recognition of transparent objects from a single RGB-D image from a Kinect sensor. Kinect's weakness in the perception of transparent objects is exploited in their segmentation. Following segmentation, edge fitting is used for recognition and pose estimation. A 3D model of the object is created automatically during training and it is required for pose estimation and recognition. The algorithm is evaluated in different conditions of a domestic environment within the framework of a robotic grasping pipeline, where it demonstrates high grasping success rates compared to the state-of-the-art results. The method does not currently handle occlusions or overlapping transparent objects, but it is robust against non-transparent clutter.
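The segmentation step exploits the fact that a Kinect returns no depth on many transparent surfaces, so candidate regions are connected blobs of missing readings. A minimal sketch of that idea, assuming missing readings are encoded as None in a small depth grid (the real system operates on full RGB-D frames and filters candidates further):

```python
def segment_invalid_depth(depth, min_size=1):
    """Connected components (4-neighborhood) of pixels with no depth
    reading, which on a Kinect often correspond to transparent surfaces.
    depth is a row-major grid; missing readings are None."""
    h, w = len(depth), len(depth[0])
    seen, regions = set(), []
    for sy in range(h):
        for sx in range(w):
            if depth[sy][sx] is not None or (sx, sy) in seen:
                continue
            # Flood-fill one blob of invalid pixels.
            stack, region = [(sx, sy)], []
            seen.add((sx, sy))
            while stack:
                x, y = stack.pop()
                region.append((x, y))
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nx < w and 0 <= ny < h
                            and (nx, ny) not in seen
                            and depth[ny][nx] is None):
                        seen.add((nx, ny))
                        stack.append((nx, ny))
            if len(region) >= min_size:   # drop isolated noise pixels
                regions.append(region)
    return regions
```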

Proceedings ArticleDOI
24 Dec 2012
TL;DR: This work describes and experimentally verifies a semantic querying system aboard a mobile robot equipped with a Microsoft Kinect RGB-D sensor, which allows the system to operate in large, dynamic, and unconstrained environments, where modeling every object that occurs or might occur is impractical.
Abstract: Recent years have seen rising interest in robotic mapping algorithms that operate at the level of objects, rather than two- or three-dimensional occupancy. Such “semantic maps” permit higher-level reasoning than occupancy maps, and are useful for any application that involves dealing with objects, including grasping, change detection, and object search. We describe and experimentally verify such a system aboard a mobile robot equipped with a Microsoft Kinect RGB-D sensor. Our representation is object-based, and makes uniquely weak assumptions about the quality of the perceptual data available; in particular, we perform no explicit object recognition. This allows our system to operate in large, dynamic, and unconstrained environments, where modeling every object that occurs (or might occur) is impractical. Our dataset, which is publicly available, consists of 67 autonomous runs of our robot over a six-week period in a roughly 1600 m2 office environment. We demonstrate two applications built on our system: semantic querying and change detection.

Journal ArticleDOI
TL;DR: Model predictive control (MPC) relies heavily on predictive capabilities: good control simultaneously requires predictions that provide consistent, strong filtering of sensor noise and fast adaptation to disturbances.

Proceedings ArticleDOI
24 Dec 2012
TL;DR: A close and error-bounded convex approximation of the localization density distribution results in collision-free paths under uncertainty; while many algorithms approximate robots by circumscribed radii, the authors use the convex hull to minimize overestimation of the footprint.
Abstract: We present a multi-mobile robot collision avoidance system based on the velocity obstacle paradigm. Current positions and velocities of surrounding robots are translated to an efficient geometric representation to determine safe motions. Each robot uses on-board localization and local communication to build the velocity obstacle representation of its surroundings. Our close and error-bounded convex approximation of the localization density distribution results in collision-free paths under uncertainty. While in many algorithms the robots are approximated by circumscribed radii, we use the convex hull to minimize the overestimation in the footprint. Results show that our approach allows for safe navigation even in densely packed environments.

Proceedings ArticleDOI
24 Dec 2012
TL;DR: A mobile manipulation platform operated by a motor-impaired person using input from a head-tracker, single-button mouse is presented, and how the use of autonomous sub-modules improves performance in complex, cluttered environments is shown.
Abstract: We present a mobile manipulation platform operated by a motor-impaired person using input from a head-tracker, single-button mouse. The platform is used to perform varied and unscripted manipulation tasks in a real home, combining navigation, perception and manipulation. The operator can make use of a wide range of interaction methods and tools, from direct tele-operation of the gripper or mobile base to autonomous sub-modules performing collision-free base navigation or arm motion planning. We describe the complete set of tools that enable the execution of complex tasks, and share the lessons learned from testing them in a real user's home. In the context of grasping, we show how the use of autonomous sub-modules improves performance in complex, cluttered environments, and compare the results to those obtained by novice, able-bodied users operating the same system.

Proceedings ArticleDOI
24 Dec 2012
TL;DR: A set of benchmarks and the associated infrastructure for comparing different types of motion planning approaches and algorithms is presented and results comparing the performance of several motion planning algorithms are presented to validate the use of these benchmarks.
Abstract: Randomized planners, search-based planners, potential-field approaches and trajectory optimization based motion planners are just some of the types of approaches that have been developed for motion planning. Given a motion planning problem, choosing the appropriate algorithm to use is a daunting task even for experts since there has been relatively little effort in comparing the plans generated by the different approaches, for different problems. In this paper, we present a set of benchmarks and the associated infrastructure for comparing different types of motion planning approaches and algorithms. The benchmarks are specifically designed for robotics and include typical indoor human environments. We present example motion planning problems for single arm tasks. Our infrastructure is designed to be easily extensible to allow for the addition of new planning approaches, new robots, new environments and new metrics. We present results comparing the performance of several motion planning algorithms to validate the use of these benchmarks.

Proceedings ArticleDOI
11 Feb 2012
TL;DR: There is a serious risk of creating interpersonal conflict when the metaphors are mismatched between people; the implications for understanding remote pilots' rights and responsibilities are explored, and design guidelines are presented for MRP systems that support geographically distributed groups.
Abstract: Metaphors for making sense of new communication technologies are important for setting user expectations about appropriate use of the technologies. When users do not share a common metaphorical model for using these technologies, interpersonal communication breakdowns can occur. Through a set of three 8-week-long field deployments and one ongoing observation in-house, we conducted contextual inquiries around the uses of a relatively new communication technology, a mobile remote presence (MRP) system. We observed many nonhuman-like metaphors (e.g., orienting toward the system as a robot, an object) and human-like metaphors (e.g., a person, or a person with disabilities). These metaphors influence people's expectations about social norms in using the systems. We found that there is a serious risk of creating interpersonal conflict when the metaphors are mismatched between people (e.g., locals use nonhuman-like metaphors when remote pilots use human-like metaphors). We explore the implications for understanding remote pilots' rights and responsibilities and present design guidelines for MRP systems that support geographically distributed groups.

Proceedings ArticleDOI
05 May 2012
TL;DR: It is found that the interdependent framing was successful in producing more in-group oriented behaviors and, contrary to the authors' predictions, visual framing of the MRP system weakened team cohesion.
Abstract: As an emerging technology that enables geographically distributed work teams, mobile remote presence (MRP) systems present new opportunities for supporting effective team building and collaboration. MRP systems are physically embodied mobile videoconferencing systems that remote co-workers control. These systems allow remote users, pilots, to actively initiate conversations and to navigate throughout the local environment. To investigate ways of encouraging team-like behavior among local and remote co-workers, we conducted a 2 (visual framing: decoration vs. no decoration) x 2 (verbal framing: interdependent vs. independent performance scoring) between-participants study (n=40). We hypothesized that verbally framing the situation as interdependent and visually framing the MRP system to create a sense of self-extension would enhance group cohesion between the local and the pilot. We found that the interdependent framing was successful in producing more in-group oriented behaviors and, contrary to our predictions, visual framing of the MRP system weakened team cohesion.

Proceedings ArticleDOI
05 Mar 2012
TL;DR: Using the technique of need finding, a group of people are interviewed regarding the reality of organization in their home; the successes, failures, family dynamics and practicalities surrounding organization are abstracted into a set of frameworks and design implications for home robotics.
Abstract: Technologists have long wanted to put robots in the home, making robots truly personal and present in every aspect of our lives. It has not been clear, however, exactly what these robots should do in the home. The difficulty of tasking robots with home chores comes not only from the significant technical challenges, but also from the strong emotions and expectations people have about their home lives. In this paper, we explore one possible set of tasks a robot could perform, home organization and storage tasks. Using the technique of need finding, we interviewed a group of people regarding the reality of organization in their home; the successes, failures, family dynamics and practicalities surrounding organization. These interviews are abstracted into a set of frameworks and design implications for home robotics, which we contribute to the community as inspiration and hypotheses for future robot prototypes to test.

Proceedings ArticleDOI
14 May 2012
TL;DR: This paper presents a search-based approach that is capable of planning dual-arm motions in cluttered environments while adhering to the orientation constraints and generates motions that are consistent across runs with similar start/goal configurations and are low-cost.
Abstract: Dual-arm manipulation is an increasingly important skill for robots operating in home, retail and industrial environments. Dual-arm manipulation is especially essential for tasks involving large objects which are harder to grasp and manipulate using a single arm. In this work, we address dual-arm manipulation of objects in indoor environments. We are particularly focused on tasks that involve an upright orientation constraint on the grasped object. Such constraints are often present in human environments, e.g. when manipulating a tray of food or a container with fluids. In this paper, we present a search-based approach that is capable of planning dual-arm motions, often within one second, in cluttered environments while adhering to the orientation constraints. Our approach systematically constructs a graph in task space and generates motions that are consistent across runs with similar start/goal configurations and are low-cost. These motions come with guarantees on completeness and bounds on the suboptimality with respect to the graph that encodes the planning problem. For many problems, the consistency of the generated motions is important as it helps make the actions of the robot more predictable for a human interacting with the robot.

Proceedings ArticleDOI
24 Dec 2012
TL;DR: This work combines visual features and RGB-D data in a simple and effective way to segment objects from robot sensory data and uses a Dirichlet process to cluster and recognize objects.
Abstract: A useful capability for a mobile robot is the ability to recognize the objects in its environment that move and change (as distinct from background objects, which are largely stationary). This ability can improve the accuracy and reliability of localization and mapping, enhance the ability of the robot to interact with its environment, and facilitate applications such as inventory management and theft detection. Rather than viewing this task as a difficult application of object recognition methods from computer vision, this work is in line with a recent trend in the community towards unsupervised object discovery and tracking that exploits the fundamentally temporal nature of the data acquired by a robot. Unlike earlier approaches, which relied heavily upon computationally intensive techniques from mapping and computer vision, our approach combines visual features and RGB-D data in a simple and effective way to segment objects from robot sensory data. We then use a Dirichlet process to cluster and recognize objects. The performance of our approach is demonstrated in several test domains.
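To give the flavor of nonparametric clustering, where the number of discovered objects is not fixed in advance, here is DP-means, a hard-clustering limit of the Dirichlet-process mixture model. It is a stand-in illustration, not the paper's inference procedure:

```python
def dp_means(points, lam, iters=10):
    """DP-means clustering: a point farther than lam from every existing
    centroid spawns a new cluster, so the cluster count grows with the
    data instead of being fixed up front.

    points: list of equal-length coordinate tuples; lam: distance penalty.
    Returns (assignments, centroids)."""
    centroids = [points[0]]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            # Squared distance to every current centroid.
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            j = min(range(len(d)), key=d.__getitem__)
            if d[j] > lam * lam:          # too far from everything: new cluster
                centroids.append(p)
                assign[i] = len(centroids) - 1
            else:
                assign[i] = j
        for j in range(len(centroids)):   # recompute centroids as means
            members = [points[i] for i in range(len(points)) if assign[i] == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return assign, centroids
```

In the object-discovery setting, each "point" would be a feature descriptor of a segmented object candidate, and clusters correspond to recurring objects observed across the robot's runs.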

Book ChapterDOI
Leila Takayama1
01 Jan 2012
TL;DR: Personal robots present opportunities for understanding the ways that people perceive agency, both in-the-moment and reflectively; robotics can thus inform the understanding of both robotic agency and human agency.
Abstract: Personal robots present opportunities for understanding the ways that people perceive agency, both in-the-moment and reflectively. Autonomous and interactive personal robots allow us to explore how people come to perceive agency of non-human agents. Remote presence and tele-operation systems are expanding our understandings of how people interact through robots, incorporating these systems into their own sense of agency. As such, robotics can inform our understanding of both robotic agency and human agency.

Proceedings ArticleDOI
24 Dec 2012
TL;DR: This work presents an approach to handling certain types of constraints in a manner that significantly increases the efficiency of existing methods, and implements this step as the drawing of samples from a set that has been computed in advance instead of the direct sampling of constraints.
Abstract: Robots executing practical tasks in real environments are often subject to multiple constraints. These include orientation constraints (e.g., keeping a glass of water upright), torque constraints (e.g., not exceeding the torque limits of an arm lifting heavy objects), and visibility constraints (e.g., keeping an object in view while moving a robot arm). Rejection sampling, Jacobian projection techniques and optimization-based approaches are just some of the methods that have been used to address such constraints while computing motion plans for robots performing manipulation tasks. In this work, we present an approach to handling certain types of constraints in a manner that significantly increases the efficiency of existing methods. Our approach focuses on the sampling step of a motion planner: we implement this step as drawing samples from a set that has been computed in advance, instead of sampling the constraints directly. We show how our approach can be applied to different constraints: orientation constraints on the end-effector of an arm, visibility constraints and dual-arm constraints. We present simulated results to validate our method, comparing it to approaches that use direct sampling of constraints.
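The precompute-then-sample idea can be sketched on a hypothetical planar 2-link arm with an end-effector orientation constraint: satisfying configurations are tabulated offline, and the planner's sampling step becomes a constant-time draw from the table rather than a rejection loop. The grid resolution, tolerance, and arm model are assumptions for illustration, not the paper's implementation.

```python
import math
import random

def ee_orientation(q1, q2):
    # planar 2-link arm: end-effector orientation is the sum of joint angles
    return q1 + q2

def precompute_samples(target=0.0, tol=0.1, grid=100):
    """Offline step: enumerate a joint-space grid and keep every
    configuration whose end-effector orientation is within tol of target."""
    sat = []
    for i in range(grid):
        for j in range(grid):
            q1 = -math.pi + 2 * math.pi * i / grid
            q2 = -math.pi + 2 * math.pi * j / grid
            # wrap the orientation error into [-pi, pi]
            err = (ee_orientation(q1, q2) - target + math.pi) % (2 * math.pi) - math.pi
            if abs(err) <= tol:
                sat.append((q1, q2))
    return sat

table = precompute_samples()
q1, q2 = random.choice(table)  # online step: O(1) draw, no rejection loop
```

Direct rejection sampling would draw uniformly over the whole joint space and discard most samples when the constraint region is thin; the table moves that cost offline.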

Proceedings ArticleDOI
06 Dec 2012
TL;DR: This paper uses a system with position, force, and vision sensors to explore an environment geometry in two degrees of freedom, addressing some of the challenges that arise as model-mediated teleoperation is applied to systems with multiple degrees of freedom and multiple sensors.
Abstract: In this paper, we address some of the challenges that arise as model-mediated teleoperation is applied to systems with multiple degrees of freedom and multiple sensors. Specifically we use a system with position, force, and vision sensors to explore an environment geometry in two degrees of freedom. The inclusion of vision is proposed to alleviate the difficulties of estimating an increasing number of environment properties. Vision can furthermore increase the predictive nature of model-mediated teleoperation, by effectively predicting touch feedback before the slave is even in contact with the environment. We focus on the case of estimating the location and orientation of a local surface patch at the contact point between the slave and the environment. We describe the various information sources with their respective limitations and create a combined model estimator as part of a multi-d.o.f. model-mediated controller. An experiment demonstrates the feasibility and benefits of utilizing vision sensors in teleoperation.
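One minimal reading of the "combined model estimator" is inverse-variance fusion of a coarse, early vision estimate with a precise, local contact estimate of the same surface parameter. The numbers below are illustrative; the paper estimates a full local surface patch, not a scalar.

```python
def fuse(z_vision, var_vision, z_contact, var_contact):
    """Inverse-variance (minimum-variance) fusion of two independent
    estimates of the same surface parameter, e.g. wall position along
    one axis. The more certain source dominates the fused estimate."""
    w_v = 1.0 / var_vision
    w_c = 1.0 / var_contact
    z = (w_v * z_vision + w_c * z_contact) / (w_v + w_c)
    var = 1.0 / (w_v + w_c)  # fused variance is below both inputs
    return z, var

# vision sees the wall before contact but coarsely; touch is precise but local
z, var = fuse(0.52, 0.04, 0.50, 0.01)
```

Because vision delivers its estimate before the slave touches the surface, the local model (and hence force feedback) can be initialized pre-contact, which is the predictive benefit the abstract describes.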

Book ChapterDOI
13 Jun 2012
TL;DR: A first stability discussion is presented, examining the continuous behavior using general control principles and discussing how the model structure and its predictive power affects system lag and stability.
Abstract: The design of a bilateral teleoperation system remains challenging in cases with high-impedance slave robots or substantial communication delays. Especially for these scenarios, model-mediated teleoperation offers a promising new approach. In this paper, we present a first stability discussion. We examine the continuous behavior using general control principles and discuss how the model structure and its predictive power affects system lag and stability. We also recognize the unavoidability of discrete model jumps and discuss measures to isolate events and prevent limit cycles. The discussions are illustrated in a single degree of freedom case and supported by single degree of freedom experiments.
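Why discrete model jumps need isolation can be seen in a toy single-degree-of-freedom sketch: when the wall position in a local spring model updates abruptly, the force rendered to the operator steps discontinuously, while interpolating the parameter over several ticks bounds each force step. Stiffness, positions, and tick counts below are invented for illustration.

```python
def rendered_force(k, x_wall, x):
    # spring-wall local model: push back only when the master penetrates
    return k * max(0.0, x - x_wall)

def simulate(x_wall_updates, smooth_steps=0):
    """Hold the master at x = 0.1 m while the wall estimate updates.
    With smoothing, each update is interpolated over smooth_steps ticks."""
    x = 0.1
    k = 100.0
    forces = []
    current = x_wall_updates[0]
    for target in x_wall_updates[1:]:
        if smooth_steps:
            for s in range(1, smooth_steps + 1):
                w = current + (target - current) * s / smooth_steps
                forces.append(rendered_force(k, w, x))
        else:
            forces.append(rendered_force(k, target, x))
        current = target
    return forces

abrupt = simulate([0.10, 0.05])        # wall estimate jumps 5 cm at once
smooth = simulate([0.10, 0.05], 10)    # same jump spread over 10 ticks
```

Both runs converge to the same 5 N steady-state force, but the smoothed run reaches it in 0.5 N increments instead of one discontinuous step, which is the kind of event isolation the paper discusses for preventing limit cycles.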

Proceedings ArticleDOI
04 Mar 2012
TL;DR: This work presents an efficient 6-DOF haptic algorithm for rendering interaction forces between a rigid proxy object and a set of unordered point data, and explores the use of haptic feedback for remotely supervised robots performing grasping tasks.
Abstract: We present an efficient 6-DOF haptic algorithm for rendering interaction forces between a rigid proxy object and a set of unordered point data. We further explore the use of haptic feedback for remotely supervised robots performing grasping tasks. The robot captures the geometry of a remote environment (as a cloud of 3D points) at run-time using a depth camera or laser scanner. An operator then uses a haptic device to position a virtual model of the robot gripper (the haptic proxy), specifying a desired grasp pose to be executed by the robot. The haptic algorithm enforces a proxy pose that is non-colliding with the observable environment, and provides both force and torque feedback to the operator. Once the operator confirms the desired gripper pose, the robot computes a collision-free arm trajectory and executes the specified grasp. We apply this method for grasping a wide range of objects, previously unseen by the robot, from highly cluttered scenes typical of human environments. Our user experiment (N=20) shows that people with no prior experience using the visualization system on which our interfaces are based are able to successfully grasp more objects with a haptic device providing force-feedback than with just a mouse.
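For a spherical proxy, the non-colliding-proxy constraint reduces to pushing the proxy out of the cloud along the direction away from the deepest penetrating point; the force fed back to the operator is then proportional to the displacement between the commanded and resolved poses. The paper handles full 6-DOF rigid proxies with torque feedback, so this sphere version is only a sketch with invented dimensions.

```python
import math

def resolve_proxy(proxy, radius, points):
    """Relax a spherical proxy out of a point cloud: repeatedly find the
    deepest penetrating point and push the proxy centre away from it."""
    px, py, pz = proxy
    for _ in range(100):  # a few relaxation passes are enough here
        worst, depth = None, 0.0
        for p in points:
            d = math.dist((px, py, pz), p)
            pen = radius - d  # positive when the point is inside the sphere
            if pen > depth:
                worst, depth = p, pen
        if worst is None:
            break  # proxy is free of the cloud
        d = math.dist((px, py, pz), worst)
        if d > 1e-9:
            # push out along the line from the point to the proxy centre
            px += (px - worst[0]) / d * depth
            py += (py - worst[1]) / d * depth
            pz += (pz - worst[2]) / d * depth
    return (px, py, pz)

cloud = [(0.0, 0.0, 0.0)]
new = resolve_proxy((0.05, 0.0, 0.0), 0.1, cloud)  # starts 5 cm inside
```

A real implementation would use a spatial index (k-d tree or the hierarchies in a library like FCL) rather than the linear scan above.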

BookDOI
Bhaskara Marthi1
09 Jul 2012
TL;DR: In this article, the authors consider robot navigation in environments given a known static map, but where dynamic obstacles of varying and unknown lifespans appear and disappear over time, and describe a roadmap-based formulation of the problem that takes the sensing and transition uncertainty into account.
Abstract: We consider robot navigation in environments given a known static map, but where dynamic obstacles of varying and unknown lifespans appear and disappear over time. We describe a roadmap-based formulation of the problem that takes the sensing and transition uncertainty into account, and an efficient online planner for this problem. The planner displays behaviors such as persistence and obstacle timeouts that would normally be hardcoded into an executive. It is also able to make inferences about obstacle types even with impoverished sensors. We present empirical results on simulated domains and on a PR2 robot.
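The persistence/timeout behavior can be sketched on a toy roadmap: a blocked edge costs its length plus the expected wait for the obstacle to disappear, so a short expected lifespan makes the planner wait at the obstacle while a long one makes it detour. The graph, lengths, and lifespans below are invented; the paper derives the waits from a learned obstacle model rather than a fixed constant.

```python
import heapq

def plan(graph, start, goal, blocked, expected_wait):
    """Dijkstra over a roadmap where a blocked edge costs its length
    plus the expected time until the blocking obstacle disappears."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, length in graph[u]:
            wait = expected_wait if (u, v) in blocked or (v, u) in blocked else 0.0
            nd = d + length + wait
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# triangle roadmap: direct edge A-B (length 1) vs detour A-C-B (length 3)
g = {'A': [('B', 1.0), ('C', 1.5)],
     'B': [('A', 1.0), ('C', 1.5)],
     'C': [('A', 1.5), ('B', 1.5)]}
patient, _ = plan(g, 'A', 'B', {('A', 'B')}, expected_wait=0.5)  # waits it out
detour, _ = plan(g, 'A', 'B', {('A', 'B')}, expected_wait=10.0)  # routes around
```

Replanning as the expected wait grows (the obstacle persists longer than predicted) reproduces the "obstacle timeout" behavior that would otherwise be hardcoded into an executive.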