
Showing papers in "Autonomous Robots in 2015"


Journal ArticleDOI
TL;DR: The quality measures proposed in the literature are reviewed according to the main aspect they evaluate: location of contact points on the object and hand configuration and some measures related to human hand studies and grasp performance are presented.
Abstract: Grasping an object correctly is a key requirement for the successful completion of a given task. Obtaining a good grasp requires algorithms to automatically determine proper contact points on the object as well as proper hand configurations, especially when dexterous manipulation is desired, and quantifying how good a grasp is requires the definition of suitable grasp quality measures. This article reviews the quality measures proposed in the literature to evaluate grasp quality. The quality measures are classified into two groups according to the main aspect they evaluate: location of contact points on the object and hand configuration. The approaches that combine different measures from the two previous groups to obtain a global quality measure are also reviewed, as well as some measures related to human hand studies and grasp performance. Several examples are presented to illustrate and compare the performance of the reviewed measures.
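As an illustration of the first group (contact-point-based measures), the sketch below computes one classical metric, the largest-ball (epsilon) quality: the radius of the largest wrench ball centred at the origin that fits inside the convex hull of friction-cone edge wrenches. This is a generic sketch, not code from the article; the friction coefficient, cone discretization, and toy contact layout are all assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def contact_wrenches(points, normals, mu=0.5, n_edges=8):
    """Approximate each friction cone by n_edges unit forces and map them to wrenches
    (force plus torque about the object centre, with contact points given relative to it)."""
    wrenches = []
    for p, n in zip(points, normals):
        n = n / np.linalg.norm(n)
        t1 = np.cross(n, [1.0, 0.0, 0.0])
        if np.linalg.norm(t1) < 1e-6:            # normal parallel to x-axis: pick another helper
            t1 = np.cross(n, [0.0, 1.0, 0.0])
        t1 /= np.linalg.norm(t1)
        t2 = np.cross(n, t1)
        for k in range(n_edges):
            a = 2.0 * np.pi * k / n_edges
            f = n + mu * (np.cos(a) * t1 + np.sin(a) * t2)
            f /= np.linalg.norm(f)
            wrenches.append(np.hstack([f, np.cross(p, f)]))
    return np.array(wrenches)

def epsilon_quality(points, normals, mu=0.5):
    W = contact_wrenches(points, normals, mu)
    hull = ConvexHull(W, qhull_options="QJ")      # facets satisfy A x + b <= 0 inside
    return float(np.min(-hull.equations[:, -1]))  # distance from the origin to the nearest facet

# toy example: four frictional contacts around the equator of a unit sphere
pts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], dtype=float)
print("epsilon quality:", epsilon_quality(pts, -pts))   # normals point inward
```

A positive value indicates force closure; larger values indicate a grasp that can resist larger disturbance wrenches in its weakest direction.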

383 citations


Journal ArticleDOI
TL;DR: This work identifies five robotic priors and explains how they can be used to learn pertinent state representations, and shows that the state representations learned by the method greatly improve generalization in reinforcement learning.
Abstract: Robot learning is critically enabled by the availability of appropriate state representations. We propose a robotics-specific approach to learning such state representations. As robots accomplish tasks by interacting with the physical world, we can facilitate representation learning by considering the structure imposed by physics; this structure is reflected in the changes that occur in the world and in the way a robot can effect them. By exploiting this structure in learning, robots can obtain state representations consistent with the aspects of physics relevant to the learning task. We name this prior knowledge about the structure of interactions with the physical world robotic priors. We identify five robotic priors and explain how they can be used to learn pertinent state representations. We demonstrate the effectiveness of this approach in simulated and real robotic experiments with distracting moving objects. We show that our method extracts task-relevant state representations from high-dimensional observations, even in the presence of task-irrelevant distractions. We also show that the state representations learned by our method greatly improve generalization in reinforcement learning.
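As a loose illustration of how such priors can act as learning signals, the toy sketch below evaluates two plausible loss terms (temporal coherence and proportionality) on a batch of learned states; the exact prior formulations, names, and data are assumptions made for this sketch, not the paper's definitions.

```python
import numpy as np

def temporal_coherence_loss(states):
    """States should change only gradually between consecutive time steps."""
    diffs = np.diff(states, axis=0)
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

def proportionality_loss(states, actions):
    """Repeating the same action should produce state changes of similar magnitude."""
    step_sizes = np.linalg.norm(np.diff(states, axis=0), axis=1)
    acts = actions[:-1]
    loss, pairs = 0.0, 0
    for i in range(len(acts)):
        for j in range(i + 1, len(acts)):
            if np.array_equal(acts[i], acts[j]):
                loss += (step_sizes[i] - step_sizes[j]) ** 2
                pairs += 1
    return loss / max(pairs, 1)

# random stand-ins for a learned 2-D state sequence and the discrete actions taken
rng = np.random.default_rng(0)
s = rng.normal(size=(50, 2))
a = rng.integers(0, 3, size=(50, 1))
print(temporal_coherence_loss(s), proportionality_loss(s, a))
```

In a full system, terms like these would be minimized jointly with respect to the parameters of the observation-to-state mapping.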

195 citations


Journal ArticleDOI
TL;DR: It is found that an autonomous robot can outperform a human worker in the execution of part or all of the process of task allocation, and that people preferred to cede their control authority to the robot.
Abstract: In manufacturing, advanced robotic technology has opened up the possibility of integrating highly autonomous mobile robots into human teams. However, with this capability comes the issue of how to maximize both team efficiency and the desire of human team members to work with these robotic counterparts. To address this concern, we conducted a set of experiments studying the effects of shared decision-making authority in human-robot and human-only teams. We found that an autonomous robot can outperform a human worker in the execution of part or all of the process of task allocation (p < 0.001 for both), and that people preferred to cede their control authority to the robot (p < 0.001). We also established that people value human teammates more than robotic teammates; however, providing robots authority over team coordination more strongly improved the perceived value of these agents than giving similar authority to another human teammate (p < 0.001). In post hoc analysis, we found that people were more likely to assign a disproportionate amount of the work to themselves when working with a robot (p < 0.01) rather than human teammates only. Based upon our findings, we provide design guidance for roboticists and industry practitioners to design robotic assistants for better integration into the human workplace.

148 citations


Journal ArticleDOI
TL;DR: This article describes an investigation of local motion planning, or collision avoidance, for a set of decision-making agents navigating in 3D space, which builds on the concept of velocity obstacles, which characterizes the set of trajectories that lead to a collision between interacting agents.
Abstract: This article describes an investigation of local motion planning, or collision avoidance, for a set of decision-making agents navigating in 3D space. The method is applicable to agents which are heterogeneous in size, dynamics and aggressiveness. It builds on the concept of velocity obstacles (VO), which characterizes the set of trajectories that lead to a collision between interacting agents. Motion continuity constraints are satisfied by using a trajectory tracking controller and constraining the set of available local trajectories in an optimization. Collision-free motion is obtained by selecting a feasible trajectory from the VO's complement, where reciprocity can also be encoded. Three algorithms for local motion planning are presented: (1) a centralized convex optimization in which a joint quadratic cost function is minimized subject to linear and quadratic constraints, (2) a distributed convex optimization derived from (1), and (3) a centralized non-convex optimization with binary variables in which the global optimum can be found, albeit at higher computational cost. A complete system integration is described and results are presented in experiments with up to four physical quadrotors flying in close proximity, and in experiments with two quadrotors avoiding a human.
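The core test behind a velocity obstacle is whether a candidate relative velocity drives the relative position into the combined collision disc (or sphere) at some future time. A minimal sketch of that test, under assumed disc-shaped agents and without the paper's motion-continuity or reciprocity machinery:

```python
import numpy as np

def in_velocity_obstacle(p_rel, v_rel, r_combined, horizon=np.inf):
    """True if the relative velocity v_rel leads to a collision within `horizon`,
    given relative position p_rel (other minus self) and summed radii r_combined."""
    vv = float(np.dot(v_rel, v_rel))
    if vv < 1e-12:                                   # not moving relative to each other
        return float(np.linalg.norm(p_rel)) < r_combined
    t_star = -float(np.dot(p_rel, v_rel)) / vv       # time of closest approach
    t_star = min(max(t_star, 0.0), horizon)
    closest = p_rel + t_star * v_rel
    return float(np.linalg.norm(closest)) < r_combined

p = np.array([5.0, 0.0])                             # other agent 5 m ahead
print(in_velocity_obstacle(p, np.array([-2.0, 0.0]), 1.0))   # True: head-on approach
print(in_velocity_obstacle(p, np.array([-2.0, 1.0]), 1.0))   # False: passes clear
```

Collision-free planning then amounts to restricting the optimization to velocities (or local trajectories) for which this test is false for every neighbour.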

139 citations


Journal ArticleDOI
TL;DR: This paper presents a novel algorithmic approach to reformulate a joint chance constraint as a constraint on the expectation of a summation of indicator random variables, which can be incorporated into the cost function by considering a dual formulation of the optimization problem.
Abstract: Existing approaches to constrained dynamic programming are limited to formulations where the constraints share the same additive structure of the objective function (that is, they can be represented as an expectation of the summation of one-stage costs). As such, these formulations cannot handle joint probabilistic (chance) constraints, whose structure is not additive. To bridge this gap, this paper presents a novel algorithmic approach for joint chance-constrained dynamic programming problems, where the probability of failure to satisfy given state constraints is explicitly bounded. Our approach is to (conservatively) reformulate a joint chance constraint as a constraint on the expectation of a summation of indicator random variables, which can be incorporated into the cost function by considering a dual formulation of the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate algorithm effectiveness on three optimal control problems, namely a path planning problem, a Mars entry, descent and landing problem, and a Lunar landing problem. All Mars simulations are conducted using real terrain data of Mars, with four million discrete states at each time step. The numerical experiments are used to validate our theoretical and heuristic arguments that the proposed algorithm is both (i) computationally efficient, i.e., capable of handling real-world problems, and (ii) near-optimal, i.e., its degree of conservatism is very low.
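As a brief worked sketch of the reformulation (notation assumed here): write $I_t$ for the indicator that the state constraint is violated at time $t$. Boole's inequality gives

$$\Pr\left(\bigcup_{t=0}^{T}\{x_t \notin \mathcal{X}_t\}\right) \le \sum_{t=0}^{T}\Pr\left(x_t \notin \mathcal{X}_t\right) = \mathbb{E}\left[\sum_{t=0}^{T} I_t\right],$$

so bounding the right-hand side by the allowed risk $\Delta$ conservatively enforces the joint chance constraint. Dualizing this expectation constraint with a multiplier $\lambda \ge 0$ yields

$$\max_{\lambda \ge 0}\; \min_{\pi}\; \mathbb{E}\left[\sum_{t=0}^{T} c_t(x_t,u_t) + \lambda I_t\right] - \lambda \Delta,$$

where the inner minimization is an unconstrained dynamic program with stage cost $c_t + \lambda I_t$, and the outer maximization is the one-dimensional search over $\lambda$ solved by the root-finding step described above.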

125 citations


Journal ArticleDOI
TL;DR: A new methodology for learning and adaptation of manipulation skills that involve physical contact with the environment, based on dynamic movement primitives and quaternion representation of orientation, which provide a mathematical machinery for efficient and stable adaptation.
Abstract: We propose a new methodology for learning and adaptation of manipulation skills that involve physical contact with the environment. Pure position control is unsuitable for such tasks because even small errors in the desired trajectory can cause significant deviations from the desired forces and torques. The proposed algorithm takes a reference Cartesian trajectory and force/torque profile as input and adapts the movement so that the resulting forces and torques match the reference profiles. The learning algorithm is based on dynamic movement primitives and quaternion representation of orientation, which provide a mathematical machinery for efficient and stable adaptation. Experimentally we show that the robot's performance can be significantly improved within a few iteration steps, compensating for vision and other errors that might arise during the execution of the task. We also show that our methodology is suitable both for robots with admittance and for robots with impedance control.
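To make the adaptation idea concrete, here is a toy one-dimensional sketch of a dynamic movement primitive whose acceleration is modulated by a force-tracking error. The gains, the placement of the coupling term, and the simple Euler integration are illustrative assumptions; the actual method also handles orientation through quaternion DMPs and updates the adaptation iteratively across repetitions.

```python
import numpy as np

def run_dmp(y0, goal, forcing, f_ref, f_meas, k_c=0.05,
            alpha=25.0, beta=6.25, tau=1.0, dt=0.002):
    """Integrate one DMP rollout; `forcing` is the learned shape term and
    f_ref/f_meas are the desired and measured contact forces at each step."""
    y, z = y0, 0.0
    traj = np.empty(len(forcing))
    for i in range(len(forcing)):
        coupling = k_c * (f_ref[i] - f_meas[i])        # nudge the motion toward the reference force
        dz = alpha * (beta * (goal - y) - z) + forcing[i] + coupling
        z += dz * dt / tau
        y += z * dt / tau
        traj[i] = y
    return traj

# toy rollout: zero shape term, constant force error pushing the motion off its nominal goal
n = 500
traj = run_dmp(y0=0.0, goal=0.1, forcing=np.zeros(n),
               f_ref=np.full(n, 5.0), f_meas=np.full(n, 7.0))
print("final position:", round(traj[-1], 4))
```

Repeating such rollouts and folding the accumulated offsets back into the primitive is the kind of iterative improvement the abstract refers to.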

124 citations


Journal ArticleDOI
TL;DR: A novel minimal and linear 3-point algorithm that uses relative rotation angle measurements from a 3-axis gyroscope together with 2D-2D feature correspondences to recover the relative motion of the MAV with metric scale, without scene point triangulation.
Abstract: The use of a multi-camera system enables a robot to obtain a surround view, and thus, maximize its perceptual awareness of its environment. If vision-based simultaneous localization and mapping (vSLAM) is expected to provide reliable pose estimates for a micro aerial vehicle (MAV) with a multi-camera system, an accurate calibration of the multi-camera system is a necessary prerequisite. We propose a novel vSLAM-based self-calibration method for a multi-camera system that includes at least one calibrated stereo camera, and an arbitrary number of monocular cameras. We assume overlapping fields of view to only exist within stereo cameras. Our self-calibration estimates the inter-camera transforms with metric scale; metric scale is inferred from calibrated stereo. On our MAV, we set up each camera pair in a stereo configuration which facilitates the estimation of the MAV's pose with metric scale. Once the MAV is calibrated, the MAV is able to estimate its global pose via a multi-camera vSLAM implementation based on the generalized camera model. We propose a novel minimal and linear 3-point algorithm that uses relative rotation angle measurements from a 3-axis gyroscope together with 2D-2D feature correspondences to recover the relative motion of the MAV with metric scale. This relative motion estimation does not involve scene point triangulation. Our constant-time vSLAM implementation with loop closures runs on-board the MAV in real-time. To the best of our knowledge, no published work has demonstrated real-time on-board vSLAM with loop closures. We show experimental results from simulation experiments, and real-world experiments in both indoor and outdoor environments.

84 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider continuous-discrete estimation problems wherein a trajectory is viewed as a one-dimensional Gaussian process, with time as the independent variable, and they show that this class of prior results in an inverse kernel matrix (i.e., covariance matrix between all pairs of measurement times) that is exactly sparse (block-tridiagonal).
Abstract: In this paper, we revisit batch state estimation through the lens of Gaussian process (GP) regression. We consider continuous-discrete estimation problems wherein a trajectory is viewed as a one-dimensional GP, with time as the independent variable. Our continuous-time prior can be defined by any nonlinear, time-varying stochastic differential equation driven by white noise; this allows the possibility of smoothing our trajectory estimates using a variety of vehicle dynamics models (e.g. 'constant-velocity'). We show that this class of prior results in an inverse kernel matrix (i.e., covariance matrix between all pairs of measurement times) that is exactly sparse (block-tridiagonal) and that this can be exploited to carry out GP regression (and interpolation) very efficiently. When the prior is based on a linear, time-varying stochastic differential equation and the measurement model is also linear, this GP approach is equivalent to classical, discrete-time smoothing (at the measurement times); when a nonlinearity is present, we iterate over the whole trajectory to maximize accuracy. We test the approach experimentally on a simultaneous trajectory estimation and mapping problem using a mobile robot dataset.
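The sparsity claim is easy to verify numerically in the simplest Markovian case. The sketch below uses a Wiener-process (Brownian motion) prior on a scalar state, an assumption made purely for illustration, and shows that the inverse of the kernel matrix at the measurement times is tridiagonal:

```python
import numpy as np

t = np.array([0.5, 1.0, 1.7, 2.3, 3.1, 4.0])    # measurement times
Qc = 2.0                                         # white-noise power spectral density
K = Qc * np.minimum.outer(t, t)                  # Wiener-process kernel: Qc * min(t_i, t_j)
P = np.linalg.inv(K)                             # inverse kernel (precision) matrix

off_band = np.abs(np.subtract.outer(np.arange(len(t)), np.arange(len(t)))) > 1
print("largest entry outside the tridiagonal band:", np.max(np.abs(P[off_band])))
# prints a value at numerical-noise level, i.e. the precision matrix is tridiagonal
```

For the vector-valued, time-varying priors considered in the paper, the same Markov structure makes the precision block-tridiagonal, which is what enables the efficient smoothing and interpolation.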

73 citations


Journal ArticleDOI
TL;DR: This work demonstrates an approach for enabling a robot to recover from failures by communicating its need for specific help to a human partner using natural language, and presents a novel inverse semantics algorithm for generating effective help requests.
Abstract: Robots inevitably fail, often without the ability to recover autonomously. We demonstrate an approach for enabling a robot to recover from failures by communicating its need for specific help to a human partner using natural language. Our approach automatically detects failures, then generates targeted spoken-language requests for help such as "Please give me the white table leg that is on the black table." Once the human partner has repaired the failure condition, the system resumes full autonomy. We present a novel inverse semantics algorithm for generating effective help requests. In contrast to forward semantic models that interpret natural language in terms of robot actions and perception, our inverse semantics algorithm generates requests by emulating the human's ability to interpret a request using the Generalized Grounding Graph (G³) framework. To assess the effectiveness of our approach, we present a corpus-based online evaluation, as well as an end-to-end user study, demonstrating that our approach increases the effectiveness of human interventions compared to static requests for help.

70 citations


Journal ArticleDOI
TL;DR: DART is introduced, a general framework for tracking articulated objects composed of rigid bodies connected through a kinematic tree that extends the signed distance function representation to articulated objects and takes full advantage of highly parallel GPU algorithms for data association and pose optimization.
Abstract: This paper introduces DART, a general framework for tracking articulated objects composed of rigid bodies connected through a kinematic tree. DART covers a broad set of objects encountered in indoor environments, including furniture and tools, and human and robot bodies, hands and manipulators. To achieve efficient and robust tracking, DART extends the signed distance function representation to articulated objects and takes full advantage of highly parallel GPU algorithms for data association and pose optimization. We demonstrate the capabilities of DART on different types of objects that have each required dedicated tracking techniques in the past.

62 citations


Journal ArticleDOI
TL;DR: This work presents an experience-based push-manipulation approach that enables the robot to acquire experimental models regarding how pushable real world objects with complex 3D structures move in response to various pushing actions and demonstrates the superiority of the achievable planning and execution concept through safe and successful push-manipulation of a variety of passively mobile pushable objects.
Abstract: In a realistic mobile push-manipulation scenario, it quickly becomes infeasible to build analytical models that capture the complexity of the interactions between the environment, each of the objects, and the robot as the variety of objects to be manipulated increases. We present an experience-based push-manipulation approach that enables the robot to acquire experimental models regarding how pushable real world objects with complex 3D structures move in response to various pushing actions. These experimentally acquired models can then be used either (1) for trying to track a collision-free guideline path generated for the object by reiterating pushing actions that result in the best locally-matching object trajectories until the goal is reached, or (2) as building blocks for constructing achievable push plans via a Rapidly-exploring Random Trees (RRT) variant planning algorithm that we contribute, and executing these plans by reiterating the corresponding trajectories. We extensively experiment with these two methods in a 3D simulation environment and demonstrate the superiority of the achievable planning and execution concept through safe and successful push-manipulation of a variety of passively mobile pushable objects. Additionally, our preliminary tests in a real world scenario, where the robot is asked to arrange a set of chairs around a table through achievable push-manipulation, also show promising results despite the increased perception and action uncertainty, and verify the validity of our contributed method.

Journal ArticleDOI
TL;DR: This paper generalizes the previous results on density upper bound constraints and captures a general class of linear safety constraints that bound the flow of agents.
Abstract: This paper presents a Markov chain based approach for the probabilistic density control of a large number (swarm) of autonomous agents. The proposed approach specifies the time evolution of the probabilistic density distribution by using a Markov chain, which guides the swarm to a desired steady-state distribution while satisfying the prescribed ergodicity, motion, and safety constraints. This paper generalizes our previous results on density upper bound constraints and captures a general class of linear safety constraints that bound the flow of agents. The safety constraints are formulated as equivalent linear inequality conditions on the Markov chain matrices by using the duality theory of convex optimization, which is our first contribution. With the safety constraints, we can facilitate proper low-level conflict avoidance policies to compute and execute the detailed agent state trajectories. Our second contribution is to develop (i) linear matrix inequality based offline methods, and (ii) quadratic programming based online methods that can incorporate these constraints into the Markov chain synthesis. The offline method provides a feasible solution for the Markov matrix when there is no density feedback. The online method utilizes real-time estimates of the swarm density distribution to continuously update the Markov matrices to maximize the convergence rates within the problem constraints. The paper also introduces a decentralized method to compute the density estimates needed for the online synthesis method.
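A minimal sketch of the underlying density-control idea, stripped of the paper's LMI/QP syntheses and safety constraints: build a column-stochastic Markov matrix whose stationary distribution equals a desired swarm density while respecting motion constraints (here, a toy line graph and a Metropolis-Hastings construction, both assumptions for illustration).

```python
import numpy as np

def metropolis_chain(desired, adjacency):
    """Column-stochastic M with M @ desired == desired, moving only along graph edges."""
    n = len(desired)
    M = np.zeros((n, n))
    deg = adjacency.sum(axis=1)
    for j in range(n):
        for i in range(n):
            if i != j and adjacency[i, j]:
                M[i, j] = min(1.0, desired[i] * deg[j] / (desired[j] * deg[i])) / deg[j]
        M[j, j] = 1.0 - M[:, j].sum()              # remaining probability: stay put
    return M

n = 5
adj = np.zeros((n, n), dtype=bool)
for k in range(n - 1):
    adj[k, k + 1] = adj[k + 1, k] = True           # agents may only hop to neighbouring bins
target = np.array([0.1, 0.2, 0.4, 0.2, 0.1])       # desired steady-state density

M = metropolis_chain(target, adj)
x = np.full(n, 1.0 / n)                            # initial uniform swarm density
for _ in range(200):
    x = M @ x                                      # each agent applies M independently
print(np.round(x, 3))                              # converges to the target density
```

The paper's contribution is to synthesize such matrices while additionally enforcing ergodicity, density-flow safety bounds, and (online) feedback from estimated densities.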

Journal ArticleDOI
TL;DR: A novel contact-sensing algorithm for a robotic fingertip which is equipped with a 6-axis force/torque sensor and covered with a deformable rubber skin is introduced.
Abstract: In this paper we introduce a novel contact-sensing algorithm for a robotic fingertip which is equipped with a 6-axis force/torque sensor and covered with a deformable rubber skin. The design and the sensing algorithm of the fingertip for effective contact information identification are introduced. Validation tests show that the contact-sensing fingertip can estimate contact information, including the contact location on the fingertip, the direction and the magnitude of the friction and normal forces, and the local torque generated at the contact surface, at high speed (158-242 Hz) and with high precision. Experiments show that the proposed algorithm is robust and accurate when the friction coefficient is at most 1. Obtaining such contact information in real time is essential for fine object manipulation. The use of the contact-sensing fingertip for surface exploration is also demonstrated, indicating the advantage gained from the contact information identified by the proposed contact-sensing method.
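For intuition, the sketch below shows the classical "intrinsic tactile sensing" geometry that such fingertips rely on, under simplifying assumptions: a spherical tip of radius R, the force/torque sensor at its centre, a single point contact, and no local spin torque. In that idealized case the measured torque satisfies tau = c x f, which pins down the contact point c on the sphere; the paper's algorithm additionally recovers the local torque and works with the real skin geometry.

```python
import numpy as np

def contact_point_sphere(f, tau, R):
    """Solve tau = c x f for c with |c| = R, choosing the root where f pushes inward."""
    f, tau = np.asarray(f, float), np.asarray(tau, float)
    f2 = float(np.dot(f, f))
    c0 = np.cross(f, tau) / f2                 # minimum-norm solution, perpendicular to f
    disc = R ** 2 - float(np.dot(c0, c0))
    if disc < 0:
        raise ValueError("measurements inconsistent with a contact on the sphere")
    t = -np.sqrt(disc) / np.sqrt(f2)           # sign choice: contact force points inward
    return c0 + t * f

# toy check: synthesize a contact, then recover it from the simulated force/torque reading
R = 0.02                                        # 2 cm fingertip
c_true = R * np.array([0.0, 0.6, 0.8])
f = np.array([0.5, -1.0, -3.0])                 # pushes roughly toward the sphere centre
tau = np.cross(c_true, f)
print(np.allclose(contact_point_sphere(f, tau, R), c_true))   # True
```

Friction and normal force components then follow by projecting f onto the surface normal c/R at the recovered contact point.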

Journal ArticleDOI
TL;DR: A core framework that autonomously segments motion trajectories to support the learning of motion primitives is proposed, in which the segmentation points are estimated autonomously from a Gaussian Mixture Model learned in a reduced dimensional space.
Abstract: In manipulation tasks, motion trajectories are characterized by a set of key phases (i.e., motion primitives). It is therefore important to learn the motion primitives embedded in such tasks from a complete demonstration. In this paper, we propose a core framework that autonomously segments motion trajectories to support the learning of motion primitives. For this purpose, a set of segmentation points is estimated using a Gaussian Mixture Model (GMM) learned after investigating the dimensional subspaces reduced by Principal Component Analysis. The segmentation points can be acquired by two alternative approaches: (1) using a geometrical interpretation of the Gaussians obtained from the learned GMM, and (2) using the weights estimated along the time component of the learned GMM. The main contribution of this paper is the autonomous estimation of the segmentation points based on the GMM learned in a reduced dimensional space. The advantages of such an estimation are as follows: (1) segmentation points can be estimated from a single set of training data, without any internal parameters that must be manually predefined or pretuned (according to the types of given tasks and/or motion trajectories), (2) segmentation points can be estimated such that non-linear motion trajectories are characterized better than when using the original motion trajectories, and (3) natural motion trajectories can be retrieved by temporally rearranging the motion segments. The capability of this autonomous segmentation framework is validated by four experiments. In the first experiment, motion segments are evaluated through a comparison with a human expert using a publicly available kitchen dataset. In the second experiment, motion segments are evaluated through a comparison with an existing approach using an open hand-writing database. In the third experiment, the segmentation performance is evaluated by retrieving motion trajectories from the reorganization of motion segments. In the fourth experiment, the segmentation performance is evaluated by clustering motion segments.
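A rough sketch of this kind of pipeline with assumed details: reduce a demonstrated trajectory with PCA, fit a time-augmented GMM, and place segmentation points where the most responsible Gaussian changes. The paper's geometric and weight-based criteria are more principled than this toy version, and the data below are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def segment(trajectory, n_components=4, n_primitives=3):
    T = len(trajectory)
    low_dim = PCA(n_components=n_components).fit_transform(trajectory)
    time = np.linspace(0.0, 1.0, T)[:, None]
    feats = np.hstack([time, low_dim])                  # time-augmented reduced features
    labels = GaussianMixture(n_primitives, random_state=0).fit_predict(feats)
    # segmentation points = indices where the dominant Gaussian switches
    return [t for t in range(1, T) if labels[t] != labels[t - 1]]

# toy demonstration: three noisy linear pieces in a 6-D "joint space"
rng = np.random.default_rng(1)
pieces = [np.linspace(a, b, 60)[:, None] * rng.normal(size=6)
          for a, b in [(0, 1), (1, 3), (3, 2)]]
demo = np.vstack(pieces) + 0.01 * rng.normal(size=(180, 6))
print("estimated segmentation points:", segment(demo))
```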

Journal ArticleDOI
TL;DR: Simulation and experimental results demonstrate the feasibility of this approach for autonomous outdoor coordinated landing, and a joint decentralized controller is developed to coordinate a rendezvous for the two vehicles.
Abstract: This work presents a control technique to autonomously coordinate a landing between a quadrotor UAV and a skid-steered UGV. Local controllers to feedback-linearize the models are presented, and a joint decentralized controller is developed to coordinate a rendezvous for the two vehicles. The effects of time delays on closed-loop stability are examined using a Retarded Functional Differential Equation formulation of the problem, and delay margins are determined for particular closed-loop setups. Both simulation and experimental results are presented, which demonstrate the feasibility of this approach for autonomous outdoor coordinated landing.

Journal ArticleDOI
TL;DR: A framework is introduced, wherein the robot simultaneously learns an action policy and a model of the reward function by actively querying a human expert for ratings and demonstrates results of the method for a robot grasping task and shows that the learned reward function generalizes to a similar task.
Abstract: Reward functions are an essential component of many robot learning methods. Defining such functions, however, remains hard in many practical applications. For tasks such as grasping, there are no reliable success measures available. Defining reward functions by hand requires extensive task knowledge and often leads to undesired emergent behavior. We introduce a framework, wherein the robot simultaneously learns an action policy and a model of the reward function by actively querying a human expert for ratings. We represent the reward model using a Gaussian process and evaluate several classical acquisition functions (AFs) from the Bayesian optimization literature in this context. Furthermore, we present a novel AF, expected policy divergence. We demonstrate results of our method for a robot grasping task and show that the learned reward function generalizes to a similar task. Additionally, we evaluate the proposed novel AF on a real robot pendulum swing-up task.
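A condensed sketch of such an active reward-learning loop, using a GP over a one-dimensional outcome space and the classical expected-improvement acquisition function; the outcome space, rating function, and hyperparameters are toy assumptions, and the paper additionally learns a policy and proposes its own acquisition function (expected policy divergence).

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def expected_improvement(mu, sigma, best):
    z = (mu - best) / np.maximum(sigma, 1e-9)
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def human_rating(x):                            # stand-in for querying the human expert
    return float(np.exp(-(x - 0.7) ** 2 / 0.05) + 0.05 * np.random.randn())

X = np.array([[0.1], [0.5], [0.9]])             # outcomes already rated by the expert
y = np.array([human_rating(x[0]) for x in X])
gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(0.01), normalize_y=True)

candidates = np.linspace(0.0, 1.0, 200)[:, None]
for _ in range(10):                             # active querying loop
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    query = candidates[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, query[None, :]])
    y = np.append(y, human_rating(query[0]))

print("outcome rated best under the learned reward model:", X[np.argmax(y)])
```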

Journal ArticleDOI
TL;DR: An integrated system for generating, troubleshooting, and executing correct-by-construction controllers for autonomous robots using natural language input, allowing non-expert users to command robots to perform high-level tasks.
Abstract: This paper presents an integrated system for generating, troubleshooting, and executing correct-by-construction controllers for autonomous robots using natural language input, allowing non-expert users to command robots to perform high-level tasks. This system unites the power of formal methods with the accessibility of natural language, providing controllers for implementable high-level task specifications, easy-to-understand feedback on those that cannot be achieved, and natural language explanation of the reason for the robot's actions during execution. The natural language system uses domain-general components that can easily be adapted to cover the vocabulary of new applications. Generation of a linear temporal logic specification from the user's natural language input uses a novel data structure that allows for subsequent mapping of logical propositions back to natural language, enabling natural language feedback about problems with the specification that are only identifiable in the logical form. We demonstrate the robustness of the natural language understanding system through a user study where participants interacted with a simulated robot in a search and rescue scenario. Automated analysis and user feedback on unimplementable specifications is demonstrated using an example involving a robot assistant in a hospital.

Journal ArticleDOI
TL;DR: An extension to the well-known Yoshikawa manipulability ellipsoid measure is proposed, which incorporates constraining factors, such as joint limits or the self-distance between the manipulator and other parts of the robot, to support online queries like grasp selection or inverse kinematics solving.
Abstract: Quantifying the robot's performance in terms of dexterity and maneuverability is essential for the analysis and design of novel robot mechanisms and for the selection of appropriate robot configurations in the context of grasping and manipulation. It can also be used for monitoring and evaluating the current robot state and to support planning and decision making tasks, such as grasp selection or inverse kinematics (IK) computation. To this end, we propose an extension to the well-known Yoshikawa manipulability ellipsoid measure (Yoshikawa, Int J Robotics Res 4(2):3-9, 1985), which incorporates constraining factors, such as joint limits or the self-distance between the manipulator and other parts of the robot. Based on this measure we show how an extended capability representation of the robot's workspace can be built in order to support online queries like grasp selection or inverse kinematics solving. In addition to single-handed grasping tasks, we discuss how the approach can be extended to bimanual grasping tasks. The proposed approaches are evaluated in simulation and we show how the extended manipulability measure is used within the grasping and manipulation pipeline of the humanoid robot ARMAR-III.
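As a small sketch of the kind of extension described here, the code below scales the Yoshikawa index w = sqrt(det(J J^T)) by a joint-limit penalization factor; the specific penalization form and the toy two-link arm are assumptions, and the paper's measure also accounts for self-distance and other constraining factors.

```python
import numpy as np

def yoshikawa(J):
    return float(np.sqrt(np.linalg.det(J @ J.T)))

def joint_limit_penalty(q, q_min, q_max):
    """1 at the centre of each joint range, approaching 0 near a limit."""
    rel = (q - q_min) * (q_max - q) / ((q_max - q_min) ** 2 / 4.0)
    return float(np.prod(np.clip(rel, 0.0, 1.0)))

def extended_manipulability(J, q, q_min, q_max):
    return yoshikawa(J) * joint_limit_penalty(q, q_min, q_max)

# toy two-link planar arm with unit link lengths
def jacobian_2link(q):
    q1, q2 = q
    return np.array([[-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
                     [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)]])

q = np.array([0.3, 1.2])
q_min, q_max = np.array([-np.pi, 0.05]), np.array([np.pi, np.pi - 0.05])
print(extended_manipulability(jacobian_2link(q), q, q_min, q_max))
```

Precomputing such values over a sampled workspace is one way to build the kind of capability representation used for fast online queries.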

Journal ArticleDOI
TL;DR: This study proposes a high-precision navigation technique using dead-reckoning sensors and lidars, which enables building a parameterized map of artificial bridge structures and estimating the vehicle's position relative to the parameterized map simultaneously.
Abstract: Navigation relative to the surrounding physical structures and obstacles is an important capability for safe vehicle operation. This capability is particularly useful for unmanned surface vehicles (USVs) operating near large structures such as bridges, waterside buildings, towers and cranes, where global positioning system signals are restricted or unavailable due to the line-of-sight restrictions. This study proposes a high-precision navigation technique using dead-reckoning sensors and lidars, which enables building a parameterized map of artificial bridge structures and estimating the vehicle's position relative to the parameterized map simultaneously. Also, three-dimensional reconstruction of the surrounding structures is carried out by fusing camera and lidar measurements for realistic 3D visual mapping which may facilitate automated surveys and inspection of structural safety. Field experiments using a newly developed USV system in a real-world bridge environment were performed to verify and demonstrate the performance of the proposed navigation and mapping algorithms. The field test results are presented and discussed in this paper.

Journal ArticleDOI
TL;DR: This paper presents an alternative approach to the problem of outdoor, persistent visual localisation against a known map that leverages prior experiences of a place to learn place-dependent feature detectors, features that are unique to each place in the authors' map and used for localisation.
Abstract: This paper presents an alternative approach to the problem of outdoor, persistent visual localisation against a known map. Instead of blindly applying a feature detector/descriptor combination over all images of all places, we leverage prior experiences of a place to learn place-dependent feature detectors (i.e., features that are unique to each place in our map and used for localisation). Furthermore, as these features do not represent low-level structure, like edges or corners, but are in fact mid-level patches representing distinctive visual elements (e.g., windows, buildings, or silhouettes), we are able to localise across extreme appearance changes. Note that there is no requirement that the features possess semantic meaning, only that they are optimal for the task of localisation. This work is an extension of previous work (McManus et al. in Proceedings of Robotics: Science and Systems, 2014b) in the following ways: (i) we have included a landmark refinement and outlier rejection step during the learning phase, (ii) we have implemented an asynchronous pipeline design, (iii) we have tested on data collected in an urban environment, and (iv) we have implemented a purely monocular system. Using over 100 km worth of data for training, we present localisation results from Begbroke Science Park and central Oxford.

Journal ArticleDOI
TL;DR: A two-stage control strategy and a selection algorithm are proposed for the trajectory tracking of a class of underactuated mechanical systems, together with two new acceleration profiles for capsubot motion generation and control.
Abstract: Trajectory tracking control of underactuated systems remains a challenging problem. This paper proposes a two-stage control strategy for the trajectory tracking of a class of underactuated mechanical systems. Two new acceleration profiles are proposed for generating and controlling the motion of the capsubot. The optimum selection of the parameters of the acceleration profile is investigated. To track the trajectory of the capsubot, a selection algorithm is proposed. Simulations and experiments are performed to demonstrate the feasibility of the control strategy and selection algorithm along with the newly proposed acceleration profiles.

Journal ArticleDOI
TL;DR: Universal reconfiguration of 2-dimensional lattice-based modular robots is proved by means of a distributed algorithm that applies in a general setting to a wide variety of particular modular robotic systems, and holds for both square and hexagonal lattice-based 2-dimensional systems.
Abstract: We prove universal reconfiguration (i.e., reconfiguration between any two robotic systems with the same number of modules) of 2-dimensional lattice-based modular robots by means of a distributed algorithm. To the best of our knowledge, this is the first known reconfiguration algorithm that applies in a general setting to a wide variety of particular modular robotic systems, and holds for both square and hexagonal lattice-based 2-dimensional systems. All modules apply the same set of local rules (in a manner similar to cellular automata), and move relative to each other akin to the sliding-cube model. Reconfiguration is carried out while keeping the robot connected at all times. If executed in a synchronous way, any reconfiguration of a robotic system of n modules is done in O(n) time steps with O(n) basic moves per module, using O(1) force per module, O(1)-size memory and computation per module (except for one module, which needs O(n)-size memory to store the information of the goal shape), and O(n) communication per module.

Journal ArticleDOI
TL;DR: This work pursues an active perception strategy that enables MAVs with limited onboard sensing and processing capabilities to concurrently assess feasible rooftop landing sites with a vision-based perception system while generating trajectories that balance continued landing site assessment and the requirement to provide visual monitoring of an interest point.
Abstract: Autonomous landing is an essential function for micro air vehicles (MAVs) for many scenarios. We pursue an active perception strategy that enables MAVs with limited onboard sensing and processing capabilities to concurrently assess feasible rooftop landing sites with a vision-based perception system while generating trajectories that balance continued landing site assessment and the requirement to provide visual monitoring of an interest point. The contributions of the work are twofold: (1) a perception system that employs a dense motion stereo approach that determines the 3D model of the captured scene without the need of geo-referenced images, scene geometry constraints, or external navigation aids; and (2) an online trajectory generation approach that balances the need to concurrently explore available rooftop vantages of an interest point while ensuring confidence in the landing site suitability by considering the impact of landing site uncertainty as assessed by the perception system. Simulation and experimental evaluation of the performance of the perception and trajectory generation methodologies are analyzed independently and jointly in order to establish the efficacy and robustness of the proposed approach.

Journal ArticleDOI
TL;DR: This work introduces an incremental approach to create topological segmentation for semi-structured environments in 2D based on spectral clustering of an incremental generalized Voronoi decomposition of discretized metric maps, and builds an environment model which aims at simplifying the navigation task for mobile robots.
Abstract: Over the past few decades, topological segmentation has been much studied, especially for structured environments. In this work, we first propose a set of criteria to assess the quality of topological segmentation, especially for semi-structured environments in 2D. These criteria provide a general benchmark for different segmentation algorithms. Then we introduce an incremental approach to create topological segmentation for semi-structured environments. Our novel approach is based on spectral clustering of an incremental generalized Voronoi decomposition of discretized metric maps. It extracts sparse spatial information from the maps, and builds an environment model which aims at simplifying the navigation task for mobile robots. Experimental results in real environments show the robustness and the quality of the topological map created by the proposed method. Extended experiments for urban search and rescue missions are performed to show the global consistency of the proposed incremental segmentation method using six different trails, during which the test robot traveled 1.8 km in total.
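A small sketch of the clustering step with assumed inputs: spectral clustering over the adjacency graph of decomposition nodes, here a toy "two rooms joined by a doorway" graph built by hand and clustered with scikit-learn's precomputed-affinity mode.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# toy graph: two 3x3 rooms of nodes joined by a single doorway edge
coords = [(x, y, r) for r in (0, 1) for x in range(3) for y in range(3)]
n = len(coords)
A = np.zeros((n, n))
for i, (xi, yi, ri) in enumerate(coords):
    for j, (xj, yj, rj) in enumerate(coords):
        if ri == rj and abs(xi - xj) + abs(yi - yj) == 1:
            A[i, j] = 1.0                              # 4-connected within a room
door = (coords.index((2, 1, 0)), coords.index((0, 1, 1)))
A[door[0], door[1]] = A[door[1], door[0]] = 1.0        # the narrow doorway between rooms

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
print(labels.reshape(2, 9))                            # the two rooms come out as two segments
```

In the proposed method the affinities come from an incrementally maintained generalized Voronoi decomposition of the metric map rather than from a hand-built graph.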

Journal ArticleDOI
TL;DR: This paper presents a contract-based, decentralized planning approach for a team of autonomous unmanned surface vehicles (USV) to patrol and guard an asset in an environment with hostile boats and civilian traffic and demonstrates the planner using two mission scenarios.
Abstract: In this paper, we present a contract-based, decentralized planning approach for a team of autonomous unmanned surface vehicles (USV) to patrol and guard an asset in an environment with hostile boats and civilian traffic. The USVs in the team have to cooperatively deal with the uncertainty about which boats pose an actual threat and distribute themselves around the asset to optimize their guarding opportunities. The developed planner incorporates a contract-based algorithm for allocating tasks to the USVs through forward simulating the mission and assigning estimated utilities to candidate task allocation plans. The task allocation process uses a form of marginal cost-based contracting that allows decentralized, cooperative task negotiation among neighboring agents. The task allocation plans are realized through a corresponding set of low-level behaviors. In this paper, we demonstrate the planner using two mission scenarios. However, the planner is general enough to be used for a variety of scenarios with mission-specific tasks and behaviors. We provide detailed analysis of simulation results and discuss the impact of communication interruptions, unreliable sensor data, and simulation inaccuracies on the performance of the planner.
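A compact sketch of marginal-cost-based contracting in the spirit described above: each new task is awarded to the vehicle whose estimated plan cost increases the least by accepting it. The cost model (a greedy nearest-neighbour tour) and the task representation are toy assumptions; the paper instead estimates utilities by forward-simulating the mission and negotiates contracts among neighbouring agents.

```python
import numpy as np

def route_cost(position, tasks):
    """Toy cost: greedy nearest-neighbour tour length through the task locations."""
    cost, here = 0.0, np.asarray(position, float)
    remaining = [np.asarray(t, float) for t in tasks]
    while remaining:
        dists = [np.linalg.norm(t - here) for t in remaining]
        k = int(np.argmin(dists))
        cost += dists[k]
        here = remaining.pop(k)
    return cost

def contract_allocation(vehicle_positions, tasks):
    assignment = {v: [] for v in range(len(vehicle_positions))}
    for task in tasks:
        # marginal cost of adding this task to each vehicle's current plan
        marginal = [route_cost(p, assignment[v] + [task]) - route_cost(p, assignment[v])
                    for v, p in enumerate(vehicle_positions)]
        assignment[int(np.argmin(marginal))].append(task)
    return assignment

usvs = [(0.0, 0.0), (10.0, 0.0)]                       # current USV positions
suspect_boats = [(1.0, 2.0), (9.0, 1.0), (5.0, 5.0), (11.0, 3.0)]
print(contract_allocation(usvs, suspect_boats))
```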

Journal ArticleDOI
TL;DR: The authors here propose a hybrid docking system composed of a magnetic alignment unit, which passively aligns and guides the underwater vehicle, and a mechanical connection that it facilitates.
Abstract: This article presents a novel docking system developed for miniature underwater robots. Recent years have seen an increased diffusion of robots for ocean monitoring, exploration and maintenance of underwater infrastructures. The versatility of these vehicles is severely limited by energy constraints and by the difficulty of updating their mission parameters. Submerged docking stations are a promising solution for providing energy sources and data exchange, thus extending the autonomy and mission duration of underwater robots. Furthermore, the docking capability is a novel but promising approach to enable modularity and reconfigurability in underwater robotics. The authors here propose a hybrid docking system composed of a magnetic alignment unit and a mechanical connection. The former passively aligns and guides the underwater vehicle, facilitating the subsequent mechanical connection. The reliability of the system is both analytically investigated and experimentally validated. Finally, the mechanical design of the docking system of two miniature underwater robots is described in detail.

Journal ArticleDOI
TL;DR: A dense monocular mapping algorithm that improves the accuracy of the state-of-the-art variational and multiview stereo methods by incorporating scene priors into its formulation.
Abstract: This paper presents a dense monocular mapping algorithm that improves the accuracy of the state-of-the-art variational and multiview stereo methods by incorporating scene priors into its formulation. Most of the improvement of our proposal is in low-textured image regions and for low-parallax camera motions, two typical failure cases of multiview mapping. The specific priors we model are the planarity of homogeneous color regions, the repeating geometric primitives of the scene (which can be learned from data), and the Manhattan structure of indoor rooms. We evaluate the performance of our method on our own sequences and on the publicly available NYU dataset, emphasizing its strengths and weaknesses in different cases.

Journal ArticleDOI
TL;DR: An analysis of deceptive motion is presented, starting with how humans would deceive, moving to a mathematical model that enables the robot to autonomously generate deceptive motion, and ending with studies on the implications of deceptive motion for human-robot interactions and the effects of iterated deception.
Abstract: Much robotics research explores how robots can clearly communicate true information. Here, we focus on the counterpart: communicating false information, or hiding information altogether; in one word, deception. Robot deception is useful in conveying intentionality, and in making games against the robot more engaging. We study robot deception in goal-directed motion, in which the robot is concealing its actual goal. We present an analysis of deceptive motion, starting with how humans would deceive, moving to a mathematical model that enables the robot to autonomously generate deceptive motion, and ending with studies on the implications of deceptive motion for human-robot interactions and the effects of iterated deception.

Journal ArticleDOI
TL;DR: A WiFi-based solution to the localization and mapping problem for teams of heterogeneous robots operating in unknown environments, allowing the robots to operate in completely unknown environments where a priori information such as a blue-print or the access points’ location is unavailable.
Abstract: In this paper we present a WiFi-based solution to the localization and mapping problem for teams of heterogeneous robots operating in unknown environments. By exploiting wireless signal strengths broadcast from access points, a robot with a large sensor payload creates a WiFi signal map that can then be shared and utilized for localization by sensor-deprived robots. In our approach, WiFi localization is cast as a classification problem. An online clustering algorithm processes incoming WiFi signals that are then incorporated into an online random forest (ORF). The algorithm's robustness is increased by a Monte Carlo localization algorithm whose sensor model exploits the results of the ORF classification. The proposed algorithm is shown to run in real-time, allowing the robots to operate in completely unknown environments, where a priori information such as a blue-print or the access points' location is unavailable. A comprehensive set of experiments not only compares our approach with other algorithms, but also validates the results across different scenarios covering both indoor and outdoor environments.
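A stripped-down sketch of the two ingredients described above, with toy data and assumed details: WiFi localization cast as classification (an ordinary random forest here, rather than the online variant), and its class probabilities reused as the sensor model that reweights particles in Monte Carlo localization.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_aps = 6
places = {0: (0.0, 0.0), 1: (10.0, 0.0), 2: (0.0, 10.0)}   # labelled place centres

# training data gathered by the well-equipped robot: signal strengths per place
X = np.vstack([rng.normal(loc=-50 - 10 * p, scale=4, size=(200, n_aps)) for p in places])
y = np.repeat(list(places), 200)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def reweight_particles(particles, weights, wifi_scan):
    """Weight each particle by the classifier's belief in the place it lies closest to."""
    proba = clf.predict_proba(wifi_scan[None, :])[0]        # P(place | scan)
    centers = np.array([places[c] for c in clf.classes_])
    nearest = np.argmin(np.linalg.norm(particles[:, None, :] - centers[None], axis=2), axis=1)
    new_w = weights * proba[nearest]
    return new_w / new_w.sum()

particles = rng.uniform(-2, 12, size=(500, 2))              # particles of a sensor-deprived robot
weights = np.full(500, 1 / 500)
scan = rng.normal(loc=-60, scale=4, size=n_aps)             # a scan taken near place 1
print("weighted mean position:", reweight_particles(particles, weights, scan) @ particles)
```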

Journal ArticleDOI
TL;DR: A decentralized control algorithm is developed, based on the concept of skills for decoupling the mission design from its deployment, which combines task assignment and execution through a consensus-based approach.
Abstract: In this paper, we propose a decentralized model and control framework for the assignment and execution of tasks, i.e. the dynamic task planning, for a network of heterogeneous robots. The proposed modeling framework allows the design of missions, defined as sets of tasks, in order to achieve global objectives regardless of the actual characteristics of the robotic network. The concept of skills, defined by the mission designer and considered as constraints for the mission execution, is exploited to distribute tasks across the robotic network. In addition, we develop a decentralized control algorithm, based on the concept of skills for decoupling the mission design from its deployment, which combines task assignment and execution through a consensus-based approach. Finally, conditions upon which the proposed decentralized formulation is equivalent to a centralized one are discussed. Experimental results are provided to validate the effectiveness of the proposed framework in a real-world scenario.