
Showing papers in "The International Journal of Robotics Research in 2012"


Journal ArticleDOI
TL;DR: This paper presents RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment to achieve globally consistent maps.
Abstract: RGB-D cameras (such as the Microsoft Kinect) are novel sensing systems that capture RGB images along with per-pixel depth information. In this paper we investigate how such cameras can be used for building dense 3D maps of indoor environments. Such maps have applications in robot navigation, manipulation, semantic mapping, and telepresence. We present RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment. Visual and depth information are also combined for view-based loop-closure detection, followed by pose optimization to achieve globally consistent maps. We evaluate RGB-D Mapping on two large indoor environments, and show that it effectively combines the visual and shape information available from RGB-D cameras.

1,223 citations


Journal ArticleDOI
TL;DR: The Bayes tree is applied to obtain a completely novel algorithm for sparse nonlinear incremental optimization, named iSAM2, which achieves improvements in efficiency through incremental variable re-ordering and fluid relinearization, eliminating the need for periodic batch steps.
Abstract: We present a novel data structure, the Bayes tree, that provides an algorithmic foundation enabling a better understanding of existing graphical model inference algorithms and their connection to sparse matrix factorization methods. Similar to a clique tree, a Bayes tree encodes a factored probability density, but unlike the clique tree it is directed and maps more naturally to the square root information matrix of the simultaneous localization and mapping (SLAM) problem. In this paper, we highlight three insights provided by our new data structure. First, the Bayes tree provides a better understanding of the matrix factorization in terms of probability densities. Second, we show how the fairly abstract updates to a matrix factorization translate to a simple editing of the Bayes tree and its conditional densities. Third, we apply the Bayes tree to obtain a completely novel algorithm for sparse nonlinear incremental optimization, named iSAM2, which achieves improvements in efficiency through incremental variable re-ordering and fluid relinearization, eliminating the need for periodic batch steps. We analyze various properties of iSAM2 in detail, and show on a range of real and simulated datasets that our algorithm compares favorably with other recent mapping algorithms in both quality and efficiency.

1,085 citations


Journal ArticleDOI
TL;DR: It is shown that this approach permits the development of trajectories and controllers enabling such aggressive maneuvers as flying through narrow, vertical gaps and perching on inverted surfaces with high precision and repeatability.
Abstract: We study the problem of designing dynamically feasible trajectories and controllers that drive a quadrotor to a desired state in state space. We focus on the development of a family of trajectories defined as a sequence of segments, each with a controller parameterized by a goal state or region in state space. Each controller is developed from the dynamic model of the robot and then iteratively refined through successive experimental trials in an automated fashion to account for errors in the dynamic model and noise in the actuators and sensors. We show that this approach permits the development of trajectories and controllers enabling such aggressive maneuvers as flying through narrow, vertical gaps and perching on inverted surfaces with high precision and repeatability.

838 citations


Journal ArticleDOI
TL;DR: This two-part paper discusses the analysis and control of legged locomotion in terms of N-step capturability: the ability of a legged system to come to a stop without falling by taking N or fewer steps, and introduces a theoretical framework for assessing N-step capturability.
Abstract: This two-part paper discusses the analysis and control of legged locomotion in terms of N-step capturability: the ability of a legged system to come to a stop without falling by taking N or fewer steps. We consider this ability to be crucial to legged locomotion and a useful, yet not overly restrictive criterion for stability. In this part (Part 1), we introduce a theoretical framework for assessing N-step capturability. This framework is used to analyze three simple models of legged locomotion. All three models are based on the 3D Linear Inverted Pendulum Model. The first model relies solely on a point foot step location to maintain balance, the second model adds a finite-sized foot, and the third model enables the use of centroidal angular momentum by adding a reaction mass. We analyze how these mechanisms influence N-step capturability, for any N > 0. Part 2 will show that these results can be used to control a humanoid robot.
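
The instantaneous capture point at the heart of this framework has a simple closed form for the Linear Inverted Pendulum Model. As a minimal sketch (not the paper's implementation; the function name and the example numbers are illustrative), with constant pendulum height z0:

```python
import math

def capture_point(x, xdot, z0, g=9.81):
    """Instantaneous capture point of the Linear Inverted Pendulum Model.

    For a point-foot model, stepping to this point brings the pendulum
    to rest: x_cp = x + xdot / omega0, where omega0 = sqrt(g / z0).
    """
    omega0 = math.sqrt(g / z0)
    return x + xdot / omega0

# CoM 0.1 m ahead of the stance foot, moving forward at 0.5 m/s,
# pendulum height 1.0 m: capture point is roughly 0.26 m ahead.
print(capture_point(0.1, 0.5, 1.0))
```

The finite-sized-foot and reaction-mass models of the paper enlarge the capturable region around this point rather than changing the basic formula.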

428 citations


Journal ArticleDOI
TL;DR: The recent work on micro unmanned aerial vehicles (UAVs), a fast-growing field in robotics, is surveyed, outlining the opportunities for research and applications, along with the scientific and technological challenges.
Abstract: We survey the recent work on micro unmanned aerial vehicles (UAVs), a fast-growing field in robotics, outlining the opportunities for research and applications, along with the scientific and technological challenges. Micro-UAVs can operate in three-dimensional environments, explore and map multi-story buildings, manipulate and transport objects, and even perform such tasks as assembly. While fixed-base industrial robots were the main focus in the first two decades of robotics, and mobile robots enabled most of the significant advances during the next two decades, it is likely that UAVs, and particularly micro-UAVs, will provide a major impetus for the next phase of education, research, and development.

409 citations


Journal ArticleDOI
TL;DR: It is shown that CST can be used to acquire skills from human demonstration in a dynamic continuous domain, and from both expert demonstration and learned control sequences on the uBot-5 mobile manipulator.
Abstract: We describe CST, an online algorithm for constructing skill trees from demonstration trajectories. CST segments a demonstration trajectory into a chain of component skills, where each skill has a goal and is assigned a suitable abstraction from an abstraction library. These properties permit skills to be improved efficiently using a policy learning algorithm. Chains from multiple demonstration trajectories are merged into a skill tree. We show that CST can be used to acquire skills from human demonstration in a dynamic continuous domain, and from both expert demonstration and learned control sequences on the uBot-5 mobile manipulator.

335 citations


Journal ArticleDOI
TL;DR: The approach represents beliefs (the distributions of the robot’s state estimate) by Gaussian distributions and is applicable to robot systems with non-linear dynamics and observation models and in simulation for holonomic and non-holonomic robots maneuvering through environments with obstacles with noisy and partial sensing.
Abstract: We present a new approach to motion planning under sensing and motion uncertainty by computing a locally optimal solution to a continuous partially observable Markov decision process (POMDP). Our approach represents beliefs (the distributions of the robot's state estimate) by Gaussian distributions and is applicable to robot systems with non-linear dynamics and observation models. The method follows the general POMDP solution framework in which we approximate the belief dynamics using an extended Kalman filter and represent the value function by a quadratic function that is valid in the vicinity of a nominal trajectory through belief space. Using a belief space variant of iterative LQG (iLQG), our approach iterates with second-order convergence towards a linear control policy over the belief space that is locally optimal with respect to a user-defined cost function. Unlike previous work, our approach does not assume maximum-likelihood observations, does not assume fixed estimator or control gains, takes into account obstacles in the environment, and does not require discretization of the state and action spaces. The running time of the algorithm is polynomial (O(n^6)) in the dimension n of the state space. We demonstrate the potential of our approach in simulation for holonomic and non-holonomic robots maneuvering through environments with obstacles with noisy and partial sensing and with non-linear dynamics and observation models.
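
The belief dynamics approximated with an extended Kalman filter amount to one Gaussian-belief update per step. A generic EKF step can be sketched as follows (function names and the linear test system below are illustrative, not the paper's code):

```python
import numpy as np

def ekf_belief_update(mu, Sigma, u, z, f, h, F, H, Q, R):
    """One EKF step propagating a Gaussian belief (mu, Sigma).

    f, h: nonlinear dynamics / observation functions; F, H: their
    Jacobians evaluated at the current estimate; Q, R: process and
    measurement noise covariances. This is the belief-dynamics
    approximation a belief-space planner rolls out along a trajectory.
    """
    # Predict: push the mean through the dynamics, grow the covariance.
    mu_bar = f(mu, u)
    Sigma_bar = F @ Sigma @ F.T + Q
    # Update: fuse the measurement via the Kalman gain.
    S = H @ Sigma_bar @ H.T + R
    K = Sigma_bar @ H.T @ np.linalg.inv(S)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```

Because the posterior covariance depends on where measurements are taken, planning over these beliefs (rather than over states alone) is what lets the method steer toward informative regions.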

325 citations


Journal ArticleDOI
TL;DR: This paper develops methods for maximizing the throughput of a mobility-on-demand urban transportation system and develops a rebalancing policy that lets every station reach an equilibrium in which there are excess vehicles and no waiting customers and that minimizes the number of robotic vehicles performing rebalancing trips.
Abstract: In this paper we develop methods for maximizing the throughput of a mobility-on-demand urban transportation system. We consider a finite group of shared vehicles, located at a set of stations. Users arrive at the stations, pickup vehicles, and drive (or are driven) to their destination station where they drop-off the vehicle. When some origins and destinations are more popular than others, the system will inevitably become out of balance: vehicles will build up at some stations, and become depleted at others. We propose a robotic solution to this rebalancing problem that involves empty robotic vehicles autonomously driving between stations. Specifically, we utilize a fluid model for the customers and vehicles in the system. Then, we develop a rebalancing policy that lets every station reach an equilibrium in which there are excess vehicles and no waiting customers and that minimizes the number of robotic vehicles performing rebalancing trips. We show that the optimal rebalancing policy can be found as the solution to a linear program. We use this solution to develop a real-time rebalancing policy which can operate in highly variable environments. Finally, we verify policy performance in a simulated mobility-on-demand environment and in hardware experiments.
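
The flow-balance linear program behind the optimal rebalancing policy can be sketched as a transportation problem; this minimal version (the station excesses and cost matrix are hypothetical inputs, and the formulation is a simplification of the paper's fluid model) uses `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def rebalance(excess, cost):
    """Minimal rebalancing LP: move empty vehicles from surplus to
    deficit stations.

    excess[i] > 0 means station i has spare vehicles, < 0 a shortage
    (entries sum to zero). cost[i][j] is the cost of one empty trip
    i -> j. Minimizes total rebalancing cost subject to flow balance
    at every station; returns the trip matrix x[i, j].
    """
    n = len(excess)
    c = np.asarray(cost, dtype=float).ravel()   # x[i, j] flattened row-major
    A_eq, b_eq = [], []
    for i in range(n):
        row = np.zeros(n * n)
        row[i * n:(i + 1) * n] += 1.0   # outflow of station i
        row[i::n] -= 1.0                # inflow of station i
        A_eq.append(row)
        b_eq.append(excess[i])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
    return res.x.reshape(n, n)
```

With two stations and a unit excess at the first, the solution is a single empty trip from station 0 to station 1, as expected.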

293 citations


Journal ArticleDOI
TL;DR: An algorithm is described that uses the N-step capturability results of Part 1, together with novel instantaneous capture point control strategies, as approximations to control a humanoid robot.
Abstract: This two-part paper discusses the analysis and control of legged locomotion in terms of N-step capturability: the ability of a legged system to come to a stop without falling by taking N or fewer steps. We consider this ability to be crucial to legged locomotion and a useful, yet not overly restrictive criterion for stability. Part 1 introduced the N-step capturability framework and showed how to obtain capture regions and control sequences for simplified gait models. In Part 2, we describe an algorithm that uses these results as approximations to control a humanoid robot. The main contributions of this part are (1) step location adjustment using the 1-step capture region, (2) novel instantaneous capture point control strategies, and (3) an experimental evaluation of the 1-step capturability margin. The presented algorithm was tested using M2V2, a 3D force-controlled bipedal robot with 12 actuated degrees of freedom in the legs, both in simulation and in physical experiments. The physical robot was able to recover from forward and sideways pushes of up to 21 Ns while balancing on one leg and stepping to regain balance. The simulated robot was able to recover from sideways pushes of up to 15 Ns while walking, and walked across randomly placed stepping stones.

289 citations


Journal ArticleDOI
TL;DR: An algorithm is presented which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers.
Abstract: We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. At the core of our approach is a quasi-static cloth model which allows us to neglect the complex dynamics of cloth under significant parts of the state space, allowing us to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as polygons. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.

272 citations


Journal ArticleDOI
TL;DR: In this paper, a dynamic role exchange mechanism is proposed to adjust the robot's urge to complete the task based on the human feedback, and three different possibilities for the assignment of task effort are proposed.
Abstract: Since the strict separation of working spaces of humans and robots has experienced a softening due to recent robotics research achievements, close interaction of humans and robots is rapidly coming within reach. In this context, physical human-robot interaction raises a number of questions regarding a desired intuitive robot behavior. The continuous bilateral information and energy exchange requires an appropriate continuous robot feedback. Investigating a cooperative manipulation task, the desired behavior is a combination of an urge to fulfill the task, a smooth instant reactive behavior to human force inputs and an assignment of the task effort to the cooperating agents. In this paper, a formal analysis of human-robot cooperative load transport is presented. Three different possibilities for the assignment of task effort are proposed. Two proposed dynamic role exchange mechanisms adjust the robot's urge to complete the task based on the human feedback. For comparison, a static role allocation strategy not relying on the human agreement feedback is investigated as well. All three role allocation mechanisms are evaluated in a user study that involves large-scale kinesthetic interaction and full-body human motion. Results show tradeoffs between subjective and objective performance measures, with a clear objective advantage for the proposed dynamic role allocation scheme.

Journal ArticleDOI
TL;DR: The technique can handle noisy data, potentially from multiple sources, and fuse it into a robust common probabilistic representation of the robot’s surroundings, and provides inferences with associated variances into occluded regions and between sensor beams, even with relatively few observations.
Abstract: We introduce a new statistical modelling technique for building occupancy maps. The problem of mapping is addressed as a classification task where the robot's environment is classified into regions of occupancy and free space. This is obtained by employing a modified Gaussian process as a non-parametric Bayesian learning technique to exploit the fact that real-world environments inherently possess structure. This structure introduces dependencies between points on the map which are not accounted for by many common mapping techniques such as occupancy grids. Our approach is an 'anytime' algorithm that is capable of generating accurate representations of large environments at arbitrary resolutions to suit many applications. It also provides inferences with associated variances into occluded regions and between sensor beams, even with relatively few observations. Crucially, the technique can handle noisy data, potentially from multiple sources, and fuse it into a robust common probabilistic representation of the robot's surroundings. We demonstrate the benefits of our approach on simulated datasets with known ground truth and in outdoor urban environments.
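
As a rough sketch of the idea — plain GP regression over ±1 occupancy labels rather than the paper's modified GP classifier, with an illustrative RBF kernel and hyperparameters — the predictive variance grows away from observed points, which is exactly the behavior that supports inference into occluded regions:

```python
import numpy as np

def gp_occupancy(X_train, y_train, X_query, ls=1.0, noise=0.1):
    """GP regression over occupancy labels (+1 occupied, -1 free).

    Returns a predictive mean and variance at each query point. The
    RBF kernel encodes the spatial structure (nearby points tend to
    share occupancy); variance approaches the prior (1.0) far from data.
    """
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls**2)

    K = rbf(X_train, X_train) + noise**2 * np.eye(len(X_train))
    Ks = rbf(X_query, X_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha                       # predictive mean
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - (v ** 2).sum(axis=0)        # predictive variance
    return mean, var
```

A single occupied observation at the origin yields a confident prediction there and a near-prior variance ten length-scales away.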

Journal ArticleDOI
TL;DR: This paper deals with the trajectory generation problem faced by an autonomous vehicle in moving traffic and proposes a semi-reactive planning strategy that realizes all required long-term maneuver tasks while providing short-term collision avoidance.
Abstract: This paper deals with the trajectory generation problem faced by an autonomous vehicle in moving traffic. Being given the predicted motion of the traffic flow, the proposed semi-reactive planning strategy realizes all required long-term maneuver tasks (lane-changing, merging, distance-keeping, velocity-keeping, precise stopping, etc.) while providing short-term collision avoidance. The key to comfortable, human-like as well as physically feasible trajectories is the combined optimization of the lateral and longitudinal movements in street-relative coordinates with carefully chosen cost functionals and terminal state sets (manifolds). The performance of the approach is demonstrated in simulated traffic scenarios.
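
The optimized lateral and longitudinal movements in street-relative coordinates are typically built from polynomial primitives. A common building block — sketched here with boundary conditions as hypothetical inputs — is the quintic connecting two states (position, velocity, acceleration) over a horizon T:

```python
import numpy as np

def quintic_coeffs(x0, v0, a0, xT, vT, aT, T):
    """Coefficients of a quintic x(t) = c0 + c1 t + ... + c5 t^5
    matching position, velocity and acceleration at t = 0 and t = T,
    the jerk-optimal primitive for one movement segment."""
    c0, c1, c2 = x0, v0, a0 / 2.0
    # Remaining three coefficients from the terminal constraints.
    A = np.array([[T**3,    T**4,     T**5],
                  [3*T**2,  4*T**3,   5*T**4],
                  [6*T,     12*T**2,  20*T**3]])
    b = np.array([xT - (c0 + c1*T + c2*T**2),
                  vT - (c1 + 2*c2*T),
                  aT - 2*c2])
    c3, c4, c5 = np.linalg.solve(A, b)
    return np.array([c0, c1, c2, c3, c4, c5])
```

Candidate trajectories are generated by sampling terminal states and horizons, scored with the cost functionals, and checked for collisions — the quintic itself is just the cheap, smooth connecting piece.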

Journal ArticleDOI
TL;DR: On the open hull, acoustic and visual mapping processes are integrated to achieve closed-loop control relative to features such as weld-lines and biofouling, while in the complex areas new large-scale planning routines achieve full imaging coverage of all the structures at high resolution.
Abstract: Inspection of ship hulls and marine structures using autonomous underwater vehicles has emerged as a unique and challenging application of robotics. The problem poses rich questions in physical design and operation, perception and navigation, and planning, driven by difficulties arising from the acoustic environment, poor water quality and the highly complex structures to be inspected. In this paper, we develop and apply algorithms for the central navigation and planning problems on ship hulls. These divide into two classes, suitable for the open, forward parts of a typical monohull, and for the complex areas around the shafting, propellers and rudders. On the open hull, we have integrated acoustic and visual mapping processes to achieve closed-loop control relative to features such as weld-lines and biofouling. In the complex area, we implemented new large-scale planning routines so as to achieve full imaging coverage of all the structures, at a high resolution. We demonstrate our approaches in recent op...

Journal ArticleDOI
TL;DR: An approach for on-line, incremental learning of full body motion primitives from observation of human motion using hidden Markov models, so that the same model can be used for both motion recognition and motion generation.
Abstract: In this paper we describe an approach for on-line, incremental learning of full body motion primitives from observation of human motion. The continuous observation sequence is first partitioned into motion segments, using stochastic segmentation. Next, motion segments are incrementally clustered and organized into a hierarchical tree structure representing the known motion primitives. Motion primitives are encoded using hidden Markov models, so that the same model can be used for both motion recognition and motion generation. At the same time, the temporal relationship between motion primitives is learned via the construction of a motion primitive graph. The motion primitive graph can then be used to construct motions, consisting of sequences of motion primitives. The approach is implemented and tested during on-line observation and on the IRT humanoid robot.

Journal ArticleDOI
TL;DR: This work proposes a novel algorithm that achieves accurate point cloud registration an order of magnitude faster than the current state of the art through the use of a compact spatial representation: the Three-Dimensional Normal Distributions Transform (3D-NDT).
Abstract: Registration of range sensor measurements is an important task in mobile robotics and has received a lot of attention. Several iterative optimization schemes have been proposed in order to align th ...
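
The 3D-NDT representation named in the TL;DR can be sketched as a voxel grid of Gaussians; this minimal version (cell size and the minimum point count per cell are illustrative choices) builds the representation but omits the registration optimization itself:

```python
import numpy as np
from collections import defaultdict

def ndt_cells(points, cell_size):
    """Build 3D-NDT cells from an (N, 3) point cloud.

    Each occupied voxel stores the mean and covariance of the points
    falling inside it, giving a compact piecewise-Gaussian model of
    the surface that a registration cost can be evaluated against.
    """
    buckets = defaultdict(list)
    for p in np.asarray(points, dtype=float):
        key = tuple(np.floor(p / cell_size).astype(int))
        buckets[key].append(p)
    cells = {}
    for key, pts in buckets.items():
        if len(pts) < 3:          # too few points for a covariance
            continue
        P = np.array(pts)
        cells[key] = (P.mean(axis=0), np.cov(P.T))
    return cells
```

Registration then scores a candidate transform by the likelihood of the transformed scan points under the Gaussians of the cells they land in, which is far cheaper than nearest-neighbor search over raw points.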

Journal ArticleDOI
TL;DR: The concept of pregrasping cages, caging configurations from which an object can be grasped without first breaking the cage, is introduced and an analogy between the role of grasping functions in grasping and that of Lyapunov functions in stability theory is established.
Abstract: This paper digs into the relationship between cages and grasps of a rigid body. In particular, it considers the use of cages as waypoints to grasp an object. We introduce the concept of pregrasping cages, caging configurations from which an object can be grasped without first breaking the cage. For two-fingered manipulators, all cages are pregrasping cages and, consequently, useful waypoints to grasp an object. A contribution of this paper is to show that the same does not hold for more than two fingers. A second contribution is to show how to overcome that limitation. We explore the natural generalization of the well-understood squeezing/stretching characterization of two-finger cages to arbitrary workspace dimension, arbitrary object shapes without holes, and arbitrary number of point fingers, and exploit it to give sufficient conditions for a cage to be a pregrasping cage. As a product of that generalization, we introduce grasping functions: scalar functions defined on the finger formation that control the process of going from a cage to a grasp. We finish the paper by establishing an analogy between the role of grasping functions in grasping and that of Lyapunov functions in stability theory.

Journal ArticleDOI
TL;DR: The system described in this article was constructed specifically for the generation of model data for object recognition, localization and manipulation tasks and it allows 2D image and 3D geometric data of everyday objects to be obtained semi-automatically.
Abstract: For the execution of object recognition, localization and manipulation tasks, most algorithms use object models. Most models are derived from, or consist of two-dimensional (2D) images and/or three-dimensional (3D) geometric data. The system described in this article was constructed specifically for the generation of such model data. It allows 2D image and 3D geometric data of everyday objects to be obtained semi-automatically. The calibration provided allows 2D data to be related to 3D data. Through the use of high-quality sensors, high-accuracy data is achieved. So far over 100 objects have been digitized using this system and the data has been successfully used in several international research projects. All of the models are freely available on the web via a front-end that allows preview and filtering of the data.

Journal ArticleDOI
TL;DR: The proposed BMI exploits a novel algorithm to decouple the estimates of force and stiffness of the human arm while performing the task, and derives the reference command from a novel body–machine interface (BMI) applied to the master operator’s arm.
Abstract: This work presents the concept of tele-impedance as a method for remotely controlling a robotic arm in interaction with uncertain environments. As an alternative to bilateral force-reflecting teleoperation control, in tele-impedance a compound reference command is sent to the slave robot including both the desired motion trajectory and impedance profile, which are then realized by the remote controller without explicit feedback to the operator. We derive the reference command from a novel body-machine interface (BMI) applied to the master operator's arm, using only non-intrusive position and electromyography (EMG) measurements, and excluding any feedback from the remote site except for looking at the task. The proposed BMI exploits a novel algorithm to decouple the estimates of force and stiffness of the human arm while performing the task. The endpoint (wrist) position of the human arm is monitored by an optical tracking system and used for the closed-loop position control of the robot's end-effector. The concept is demonstrated in two experiments, namely a peg-in-the-hole and a ball-catching task, which illustrate complementary aspects of the method. The performance of tele-impedance control is assessed by comparing the results obtained with the slave arm under either constantly low or high stiffness.

Journal ArticleDOI
TL;DR: The hyper-actuated hand, very close to its human archetype in terms of size, weight, and, in particular, grasping performance, robustness, and dynamics, is presented and will be the basis of a simplified hand that would still perform daily manipulation tasks.
Abstract: Physical human-robot interaction implies the intersection of human and robot workspaces and intrinsically favors collision. The robustness of the most exposed parts, such as the hands, is crucial for effective and complete task execution of a robot. Considering the scales, we think that the robustness can only be achieved by the use of energy storage mechanisms, e.g. in elastic elements. The use of variable stiffness drives provides a low-pass filtering of impacts and allows stiffness adjustments depending on the task. However, using these drive principles does not guarantee the safety of the human due to the dramatically increased dynamics of such systems. The design methodology of an antagonistically tendon-driven hand is explained. The resulting hand, very close to its human archetype in terms of size, weight, and, in particular, grasping performance, robustness, and dynamics, is presented. The hyper-actuated hand is a research platform that will also be used to investigate the importance of mechanical couplings and, in future projects, be the basis of a simplified hand that would still perform daily manipulation tasks.

Journal ArticleDOI
TL;DR: This work proposes eight point cloud sequences acquired in locations covering the diversity of environments that modern robots are likely to encounter, ranging from inside an apartment to a woodland area, with a special effort made to ensure global positioning of the scanner within mm-range precision, independent of environmental conditions.
Abstract: The number of registration solutions in the literature has bloomed recently. The iterative closest point, for example, could be considered as the backbone of many laser-based localization and mapping systems. Although they are widely used, it is a common challenge to compare registration solutions on a fair basis. The main limitation is to overcome the lack of accurate ground truth in current data sets, which usually cover environments only over a small range of organization levels. In computer vision, the Stanford 3D Scanning Repository pushed forward point cloud registration algorithms and object modeling fields by providing high-quality scanned objects with precise localization. We aim to provide similar high-caliber working material to the robotic and computer vision communities but with sceneries instead of objects. We propose eight point cloud sequences acquired in locations covering the diversity of environments that modern robots are likely to encounter, ranging from inside an apartment to a woodland area. The core of the data sets consists of 3D laser point clouds for which supporting data (Gravity, Magnetic North and GPS) are given for each pose. A special effort has been made to ensure global positioning of the scanner within mm-range precision, independent of environmental conditions. This will allow for the development of improved registration algorithms when mapping challenging environments, such as those found in real-world situations.

Journal ArticleDOI
TL;DR: This work presents a practical vision-based robotic bin-picking system that performs detection and three-dimensional pose estimation of objects in an unstructured bin using a novel camera design, picks up parts from the bin, and performs error detection and pose correction while the part is in the gripper.
Abstract: We present a practical vision-based robotic bin-picking system that performs detection and three-dimensional pose estimation of objects in an unstructured bin using a novel camera design, picks up parts from the bin, and performs error detection and pose correction while the part is in the gripper. Two main innovations enable our system to achieve real-time robust and accurate operation. First, we use a multi-flash camera that extracts robust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliably detect objects and estimate their poses. FDCM improves the accuracy of chamfer matching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges, a three-dimensional distance transform, and directional integral images. We empirically show that these speedups, combined with the use of bounds in the spatial and hypothesis domains, give the algorithm sub...
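
Ordinary chamfer matching, which FDCM extends with edge orientation and line-segment approximations, reduces to a lookup in a precomputed distance transform; a minimal sketch (the function and its inputs are illustrative, not the paper's implementation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_cost(template_pts, edge_map):
    """Plain (non-directional) chamfer cost of template edge points
    placed on a binary edge map.

    The cost is the mean distance from each template point to the
    nearest image edge, read off a distance transform computed once
    per image — so evaluating many candidate poses is cheap.
    """
    # Distance from every pixel to the nearest edge pixel.
    dist = distance_transform_edt(~edge_map.astype(bool))
    rows, cols = template_pts[:, 0], template_pts[:, 1]
    return dist[rows, cols].mean()
```

FDCM's directional variant additionally penalizes orientation mismatch between template and image edges, which is what makes the matching robust in cluttered bins.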

Journal ArticleDOI
TL;DR: This application demonstrates how the integration of computer vision and soft-robotics leads to a robotic system capable of acting in unstructured and occluded environments.
Abstract: In this paper, we present an efficient 3D object recognition and pose estimation approach for grasping procedures in cluttered and occluded environments. In contrast to common appearance-based approaches, we rely solely on 3D geometry information. Our method is based on a robust geometric descriptor, a hashing technique and an efficient, localized RANSAC-like sampling strategy. We assume that each object is represented by a model consisting of a set of points with corresponding surface normals. Our method simultaneously recognizes multiple model instances and estimates their pose in the scene. A variety of tests shows that the proposed method performs well on noisy, cluttered and unsegmented range scans in which only small parts of the objects are visible. The main procedure of the algorithm has a linear time complexity resulting in a high recognition speed which allows a direct integration of the method into a continuous manipulation task. The experimental validation with a seven-degree-of-freedom Cartesian impedance controlled robot shows how the method can be used for grasping objects from a complex random stack. This application demonstrates how the integration of computer vision and soft-robotics leads to a robotic system capable of acting in unstructured and occluded environments.

Journal ArticleDOI
TL;DR: An information-theoretic approach to distributively control multiple robots equipped with sensors to infer the state of an environment using a sequential Bayesian filter and a novel consensus-based algorithm to approximate the robots’ joint measurement probabilities.
Abstract: In this paper we present an information-theoretic approach to distributively control multiple robots equipped with sensors to infer the state of an environment. The robots iteratively estimate the environment state using a sequential Bayesian filter, while continuously moving along the gradient of mutual information to maximize the informativeness of the observations provided by their sensors. The gradient-based controller is proven to be convergent between observations and, in its most general form, locally optimal. However, the computational complexity of the general form is shown to be intractable, and thus non-parametric methods are incorporated to allow the controller to scale with respect to the number of robots. For decentralized operation, both the sequential Bayesian filter and the gradient-based controller use a novel consensus-based algorithm to approximate the robots' joint measurement probabilities, even when the network diameter, the maximum in/out degree, and the number of robots are unknown. The approach is validated in two separate hardware experiments each using five quadrotor flying robots, and scalability is emphasized in simulations using 100 robots.

Journal ArticleDOI
TL;DR: It is shown how the belief roadmap algorithm of Prentice and Newman (2009), a belief space extension of the probabilistic roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera.
Abstract: RGB-D cameras provide both color images and per-pixel depth estimates. The richness of this data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on an unreliable wireless link to a ground station. However, even with accurate 3D sensing and position estimation, some parts of the environment have more perceptual structure than others, leading to state estimates that vary in accuracy across the environment. If the vehicle plans a path without regard to how well it can localize itself along that path, it runs the risk of becoming lost or worse. We show how the belief roadmap algorithm of Prentice and Newman (2009), a belief space extension of the probabilistic roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera. We evaluate the effectiveness of our system for controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.
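The planning idea, propagating the state-estimate uncertainty along candidate paths and preferring paths where the sensor is informative, can be caricatured in one dimension. Scalar variances and the hypothetical per-waypoint information values are illustrative; the belief roadmap itself operates on full covariance matrices over a roadmap graph.

```python
def propagate_belief(P0, path_info, Q=0.1):
    """Propagate a scalar position variance along a path: motion noise Q grows it,
    and each waypoint's sensing information (0 where the scene is featureless)
    shrinks it via an EKF-style information-form update."""
    P = P0
    for info in path_info:            # info plays the role of H^2 / R at this waypoint
        P = P + Q                     # predict
        P = 1.0 / (1.0 / P + info)    # update
    return P

def best_path(P0, paths):
    """Pick the candidate path with the smallest terminal variance."""
    return min(range(len(paths)), key=lambda k: propagate_belief(P0, paths[k]))
```

A path through perceptually rich regions keeps the terminal variance small even if it is longer, which is exactly the trade-off the belief-space planner exploits.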

Journal ArticleDOI
TL;DR: This paper reports the formulation and evaluation of a centralized extended Kalman filter designed for a novel navigation system for underwater vehicles that employs Doppler sonar, depth sensors, synchronous clocks, and acoustic modems to achieve simultaneous acoustic communication and navigation.
Abstract: This paper reports the formulation and evaluation of a centralized extended Kalman filter designed for a novel navigation system for underwater vehicles. The navigation system employs Doppler sonar, depth sensors, synchronous clocks, and acoustic modems to achieve simultaneous acoustic communication and navigation. The use of a single moving reference beacon eliminates the requirement for the underwater vehicle to remain in a bounded navigable area; the use of underwater modems and synchronous clocks enables range measurements based on one-way time-of-flight information from acoustic data-packet broadcasts. The acoustic data packets are broadcast from a single, moving reference beacon and can be received simultaneously by multiple vehicles within acoustic range. We report results from a simulated deep-water survey and real field data collected from an autonomous underwater vehicle survey in 4000 m of water on the southern Mid-Atlantic Ridge with an independent long-baseline navigation system for ground truth.
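Two pieces of this system are easy to sketch: the one-way time-of-flight range enabled by synchronized clocks, and the EKF update that fuses such a range into a vehicle position estimate. The numbers and function names here are illustrative, not the paper's filter.

```python
import numpy as np

SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s

def one_way_range(t_broadcast, t_receive):
    """Synchronized clocks let one-way time of flight give range directly,
    so a single broadcast can be used by every vehicle in acoustic range."""
    return SOUND_SPEED * (t_receive - t_broadcast)

def ekf_range_update(x, P, beacon, z, R=1.0):
    """EKF measurement update of a 2D vehicle position from a beacon range."""
    dx = x - beacon
    r_pred = np.linalg.norm(dx)
    H = (dx / r_pred)[None, :]          # Jacobian of range w.r.t. position
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T / S                     # Kalman gain
    x_new = x + (K * (z - r_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new
```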

Journal ArticleDOI
TL;DR: This paper forms a basis for generating truly safe velocity bounds that explicitly consider the dynamic properties of the manipulator and human injury and proposes a motion supervisor that utilizes injury knowledge for generating safe robot motions.
Abstract: Enabling robots to safely interact with humans is an essential goal of robotics research. The developments achieved over recent years in mechanical design and control made it possible to have active cooperation between humans and robots in rather complex situations. For this, safe robot behavior even under worst-case situations is crucial and forms also a basis for higher-level decisional aspects. For quantifying what safe behavior really means, the definition of injury, as well as understanding its general dynamics, are essential. This insight can then be applied to design and control robots such that injury due to robot-human impacts is explicitly taken into account. In this paper we approach the problem from a medical injury analysis point of view in order to formulate the relation between robot mass, velocity, impact geometry and resulting injury qualified in medical terms. We transform these insights into processable representations and propose a motion supervisor that utilizes injury knowledge for generating safe robot motions. The algorithm takes into account the reflected inertia, velocity, and geometry at possible impact locations. The proposed framework forms a basis for generating truly safe velocity bounds that explicitly consider the dynamic properties of the manipulator and human injury.
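The kind of velocity bound such a motion supervisor enforces can be sketched with an energy-based criterion: cap the kinetic energy that the reflected inertia could deliver at the impact point. The per-body-part limits below are made-up placeholders; the paper derives its bounds from medical injury data and also accounts for impact geometry.

```python
import math

# Hypothetical per-body-part energy limits in joules (placeholders, not medical data).
ENERGY_LIMIT = {"head": 2.0, "chest": 5.0, "arm": 10.0}

def safe_velocity_bound(reflected_mass, body_part):
    """Largest speed keeping the impact-point kinetic energy within the limit:
    (1/2) * m_reflected * v^2 <= E_max  =>  v <= sqrt(2 * E_max / m_reflected)."""
    return math.sqrt(2.0 * ENERGY_LIMIT[body_part] / reflected_mass)

def supervise(v_commanded, reflected_mass, body_part):
    """Motion supervisor: clamp the commanded speed to the safe bound."""
    return min(v_commanded, safe_velocity_bound(reflected_mass, body_part))
```

Note that the bound depends on configuration: the reflected mass changes with the arm posture, so the supervisor must re-evaluate it along the trajectory.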

Journal ArticleDOI
TL;DR: This paper divides the problem of estimating the intrinsic parameters of a 3D LIDAR while at the same time computing its extrinsic calibration with respect to a rigidly connected camera into two least-squares sub-problems, and analytically solves each one to determine a precise initial estimate for the unknown parameters.
Abstract: In this paper we address the problem of estimating the intrinsic parameters of a 3D LIDAR while at the same time computing its extrinsic calibration with respect to a rigidly connected camera. Existing approaches to solve this nonlinear estimation problem are based on iterative minimization of nonlinear cost functions. In such cases, the accuracy of the resulting solution hinges on the availability of a precise initial estimate, which is often not available. In order to address this issue, we divide the problem into two least-squares sub-problems, and analytically solve each one to determine a precise initial estimate for the unknown parameters. We further increase the accuracy of these initial estimates by iteratively minimizing a batch nonlinear least-squares cost function. In addition, we provide the minimal identifiability conditions, under which it is possible to accurately estimate the unknown parameters. Experimental results consisting of photorealistic 3D reconstruction of indoor and outdoor scenes, as well as standard metrics of the calibration errors, are used to assess the validity of our approach.
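The estimation pattern, an analytic linear least-squares solution used to seed an iterative batch refinement, looks like this in miniature. A toy one-dimensional sensor model (true = a * raw + b) stands in for the actual LIDAR intrinsics and extrinsics.

```python
import numpy as np

def linear_init(raw, truth):
    """Analytic least-squares initialization: solve the linear sub-problem in
    closed form to get a precise initial estimate for the refinement stage."""
    A = np.column_stack([raw, np.ones_like(raw)])
    params, *_ = np.linalg.lstsq(A, truth, rcond=None)
    return params  # (a, b)

def gauss_newton_refine(params, raw, truth, iters=10):
    """Batch nonlinear least-squares refinement via Gauss-Newton. Here the model
    is linear so it converges in one step; the real calibration model is not."""
    a, b = params
    for _ in range(iters):
        r = truth - (a * raw + b)                    # residuals
        J = np.column_stack([raw, np.ones_like(raw)])  # Jacobian of the model
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        a, b = a + delta[0], b + delta[1]
    return np.array([a, b])
```

The point of the two-stage structure is that the iterative stage inherits a good basin of attraction from the analytic stage, which is exactly the motivation the abstract gives.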

Journal ArticleDOI
TL;DR: Analysis of several innovative designs for a new kind of robot that uses a continuous wave of peristalsis for locomotion, the same method that earthworms use, shows that with smooth, constant velocity waves, the forces that cause accelerations within the body sum to zero.
Abstract: We have developed several innovative designs for a new kind of robot that uses a continuous wave of peristalsis for locomotion, the same method that earthworms use, and report on the first completed prototypes. This form of locomotion is particularly effective in constrained spaces, and although the motion has been understood for some time, it has rarely been effectively or accurately implemented in a robotic platform. As an alternative to robots with long segments, we present a technique using a braided mesh exterior to produce smooth waves of motion along the body of a worm-like robot. We also present a new analytical model of this motion and compare predicted robot velocity to a 2D simulation and a working prototype. Because constant-velocity peristaltic waves form due to accelerating and decelerating segments, it has often been assumed that this motion requires strong anisotropic ground friction. However, our analysis shows that with smooth, constant-velocity waves, the forces that cause accelerations within the body sum to zero. Instead, transition timing between aerial and ground phases plays a critical role in the amount of slippage and the final robot speed. The concept is highly scalable, and we present methods of construction at two different scales.
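The claim that internal forces sum to zero for a smooth constant-velocity wave can be checked numerically: with one full wavelength distributed over the segments, the segment velocities sum to a constant, so the summed m*a vanishes. The sinusoidal wave shape and segment count below are illustrative, not the paper's model.

```python
import math

def segment_velocity(phase):
    """Longitudinal velocity of one segment as a smooth periodic wave function."""
    return math.sin(phase)

def net_internal_force(n_segments, t, wave_speed=1.0, mass=1.0, dt=1e-6):
    """Sum of m*a over all segments for a traveling wave with one full
    wavelength along the body, evaluated by central differencing."""
    def total_v(tt):
        return sum(segment_velocity(2 * math.pi * i / n_segments - wave_speed * tt)
                   for i in range(n_segments))
    return mass * (total_v(t + dt) - total_v(t - dt)) / (2 * dt)
```

With a single segment the cancellation disappears and the net force is non-zero, which is why slippage behavior must instead come from the timing of the aerial and ground phases.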

Journal ArticleDOI
TL;DR: CAT-SLAM, as discussed by the authors, augments sequential appearance-based place recognition with local metric pose filtering to improve the frequency and reliability of appearance-based loop closure, which is performed without calculating global feature geometry or performing 3D map construction.
Abstract: This paper describes a new system, dubbed Continuous Appearance-based Trajectory Simultaneous Localisation and Mapping (CAT-SLAM), which augments sequential appearance-based place recognition with local metric pose filtering to improve the frequency and reliability of appearance-based loop closure. As in other approaches to appearance-based mapping, loop closure is performed without calculating global feature geometry or performing 3D map construction. Loop-closure filtering uses a probabilistic distribution of possible loop closures along the robot's previous trajectory, which is represented by a linked list of previously visited locations linked by odometric information. Sequential appearance-based place recognition and local metric pose filtering are evaluated simultaneously using a Rao-Blackwellised particle filter, which weights particles based on appearance matching over sequential frames and the similarity of robot motion along the trajectory. The particle filter explicitly models both the likelihood of revisiting previous locations and exploring new locations. A modified resampling scheme counters particle deprivation and allows loop-closure updates to be performed in constant time for a given environment. We compare the performance of CAT-SLAM with FAB-MAP (a state-of-the-art appearance-only SLAM algorithm) using multiple real-world datasets, demonstrating an increase in the number of correct loop closures detected by CAT-SLAM.
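The particle-filter machinery this abstract describes, appearance-based weighting followed by a resampling scheme that counters particle deprivation, can be sketched as follows. The function names and the low-variance scheme shown are generic stand-ins for CAT-SLAM's modified resampling, not its actual implementation.

```python
import random

def reweight(particles, appearance_likelihood):
    """Weight each particle (a location on the prior trajectory) by how well
    the current camera frame matches the appearance stored at that location."""
    weights = [appearance_likelihood(p) for p in particles]
    total = sum(weights)
    return [w / total for w in weights]

def systematic_resample(particles, weights, rng=None):
    """Low-variance (systematic) resampling: a single random offset and evenly
    spaced picks, which counters particle deprivation better than i.i.d. draws."""
    rng = rng or random.Random(0)
    n = len(particles)
    u = rng.random()
    out, cum, j = [], weights[0], 0
    for i in range(n):
        pos = (i + u) / n
        while pos > cum:
            j += 1
            cum += weights[j]
        out.append(particles[j])
    return out
```

In a Rao-Blackwellised setting, each particle would additionally carry an analytically filtered metric pose along the trajectory; only the discrete loop-closure hypothesis is sampled.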