
Showing papers in "Autonomous Robots in 2020"


Journal ArticleDOI
TL;DR: It is found that G-Prim and its repeated variant perform relatively well when communication is poor, and that re-sending winner data in later rounds is an easy way to improve the performance of multi-round auctions, in general.
Abstract: We consider the problem of multi-robot task allocation using auctions, and study how lossy communication between the auctioneer and bidders affects solution quality. We demonstrate both analytically and experimentally that even though many auction algorithms have similar performance when communication is perfect, different auctions degrade in different ways as communication quality decreases from perfect to nonexistent. Thus, if a multi-robot system is expected to encounter lossy communication, then the auction algorithm that it uses for task allocation must be chosen carefully. We compare six auction algorithms, including standard implementations of the Sequential Auction, Parallel Auction, and Combinatorial Auction; a generalization of the Prim Allocation Auction called G-Prim; and two multi-round variants of a Repeated Parallel Auction. Variants of these auctions are also considered in which award information from previous rounds is rebroadcast by the auctioneer during later rounds. We consider a variety of valuation functions used by the bidders, including the total and maximum distance traveled (for distance-based cost functions) and the expected profit or cost to a robot (assuming robots’ task values are drawn from a random distribution). Different auctioneer objectives are also evaluated, including maximizing profit (max sum), minimizing cost (min sum), and minimizing the maximum distance traveled by any particular robot (min max). In addition to the cost value functions that are used, we are also interested in fleet performance statistics such as the expected robot utilization rate and the expected number of items won by each robot. Experiments are performed both in simulation and on real AscTec Pelican quad-rotor aircraft. In simulation, each algorithm is considered across communication qualities ranging from perfect to nonexistent. For the case of the distance-based cost functions, the performance of the auctions is compared using two different communication models: (1) a Bernoulli model and (2) the Gilbert–Elliot model. The particular auction that performs best changes based on the reliability of the communication between the bidders and the auctioneer. We find that G-Prim and its repeated variant perform relatively well when communication is poor, and that re-sending winner data in later rounds is an easy way to improve the performance of multi-round auctions in general.
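
As a toy illustration of how a Bernoulli loss model interacts with a single-round parallel auction, the sketch below drops each bid message independently with a fixed probability before the auctioneer awards tasks. The names and structure are hypothetical and this is not the paper's implementation; it only shows why lossy bid delivery can leave tasks unassigned or awarded to lower-value bidders.

```python
import random

def parallel_auction(robots, tasks, value, loss_prob, rng=None):
    """One-round parallel auction under a Bernoulli packet-loss model.

    robots, tasks : lists of IDs
    value(r, t)   : bidder r's value for task t (higher is better)
    loss_prob     : probability that any single bid message is lost
    Returns a dict mapping each task to its winning robot; tasks whose bids
    were all lost remain unassigned.
    """
    rng = rng or random.Random(0)
    received = {t: [] for t in tasks}
    for r in robots:
        for t in tasks:
            # Each bid is an independent message; the auctioneer only sees it
            # if the Bernoulli channel does not drop it.
            if rng.random() >= loss_prob:
                received[t].append((value(r, t), r))
    allocation = {}
    for t, bids in received.items():
        if bids:                      # award the task to the highest received bid
            allocation[t] = max(bids)[1]
    return allocation

# Toy usage: 3 robots bid on 4 tasks with random values and 30% message loss.
robots, tasks = ["r1", "r2", "r3"], ["t1", "t2", "t3", "t4"]
vals_rng = random.Random(42)
vals = {(r, t): vals_rng.random() for r in robots for t in tasks}
print(parallel_auction(robots, tasks, lambda r, t: vals[(r, t)], loss_prob=0.3))
```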

92 citations


Journal ArticleDOI
TL;DR: This article introduces a general informative path planning framework for monitoring scenarios using an aerial robot, focusing on problems in which the value of sensor information is unevenly distributed in a target area and unknown a priori.
Abstract: Unmanned aerial vehicles represent a new frontier in a wide range of monitoring and research applications. To fully leverage their potential, a key challenge is planning missions for efficient data acquisition in complex environments. To address this issue, this article introduces a general informative path planning framework for monitoring scenarios using an aerial robot, focusing on problems in which the value of sensor information is unevenly distributed in a target area and unknown a priori. The approach is capable of learning and focusing on regions of interest via adaptation to map either discrete or continuous variables on the terrain using variable-resolution data received from probabilistic sensors. During a mission, the terrain maps built online are used to plan information-rich trajectories in continuous 3-D space by optimizing initial solutions obtained by a coarse grid search. Extensive simulations show that our approach is more efficient than existing methods. We also demonstrate its real-time application on a photorealistic mapping scenario using a publicly available dataset and a proof of concept for an agricultural monitoring task.
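
The following minimal sketch illustrates the "coarse grid search followed by local optimization" idea mentioned in the abstract, reduced to picking a single informative 2D measurement point from a known information surface. It is an assumption-laden stand-in (hypothetical function names, unit-square workspace), not the paper's planner.

```python
import numpy as np
from itertools import product

def plan_informative_point(info, cell_size, refine_step=0.1, iters=50):
    """Pick a measurement point by coarse grid search, then refine it locally.

    info      : callable (x, y) -> scalar information value (higher is better)
    cell_size : spacing of the coarse grid over the unit square
    Returns the refined (x, y) point.
    """
    # Coarse grid search over the unit square.
    grid = np.arange(0.0, 1.0 + 1e-9, cell_size)
    x, y = max(product(grid, grid), key=lambda p: info(*p))
    # Simple local refinement: hill-climb with a shrinking step.
    step = refine_step
    for _ in range(iters):
        neighbors = [(x + dx, y + dy) for dx in (-step, 0, step)
                                      for dy in (-step, 0, step)]
        x, y = max(neighbors, key=lambda p: info(*p))
        step *= 0.9
    return x, y

# Toy information surface with a peak at (0.7, 0.3).
info = lambda x, y: np.exp(-20 * ((x - 0.7) ** 2 + (y - 0.3) ** 2))
print(plan_informative_point(info, cell_size=0.25))
```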

89 citations


Journal ArticleDOI
TL;DR: A system for online learning of human classifiers by mobile service robots using 3D LiDAR sensors, and its experimental evaluation in a large indoor public space, is presented, and a new feature to improve human classification in sparse, long-range point clouds is introduced.
Abstract: This paper presents a system for online learning of human classifiers by mobile service robots using 3D LiDAR sensors, and its experimental evaluation in a large indoor public space. The learning framework requires a minimal set of labelled samples (e.g. one or several samples) to initialise a classifier. The classifier is then retrained iteratively during operation of the robot. New training samples are generated automatically using multi-target tracking and a pair of “experts” to estimate false negatives and false positives. Both classification and tracking utilise an efficient real-time clustering algorithm for segmentation of 3D point cloud data. We also introduce a new feature to improve human classification in sparse, long-range point clouds. We provide an extensive evaluation of our framework using a 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments demonstrate the influence of the system components and improved classification of humans compared to the state-of-the-art.

62 citations


Journal ArticleDOI
TL;DR: This work identifies the conditions for convergence to optimal paths in multi-robot problems, which the prior method was not achieving, and demonstrates the planner’s capability to solve problems involving multiple real-world robotic arms.
Abstract: Many exciting robotic applications require multiple robots with many degrees of freedom, such as manipulators, to coordinate their motion in a shared workspace. Discovering high-quality paths in such scenarios can be achieved, in principle, by exploring the composite space of all robots. Sampling-based planners do so by building a roadmap or a tree data structure in the corresponding configuration space and can achieve asymptotic optimality. The hardness of motion planning, however, renders the explicit construction of such structures in the composite space of multiple robots impractical. This work proposes a scalable solution for such coupled multi-robot problems, which provides desirable path-quality guarantees and is also computationally efficient. In particular, the proposed $$\mathtt{dRRT^*}$$ is an informed, asymptotically-optimal extension of a prior sampling-based multi-robot motion planner, $$\mathtt{dRRT}$$. The prior approach introduced the idea of building roadmaps for each robot and implicitly searching the tensor product of these structures in the composite space. This work identifies the conditions for convergence to optimal paths in multi-robot problems, which the prior method was not achieving. Building on this analysis, $$\mathtt{dRRT}$$ is first properly adapted so as to achieve the theoretical guarantees and then further extended so as to make use of effective heuristics when searching the composite space of all robots. The case where the various robots share some degrees of freedom is also studied. Evaluation in simulation indicates that the new algorithm, $$\mathtt{dRRT^*}$$ converges to high-quality paths quickly and scales to a higher number of robots where various alternatives fail. This work also demonstrates the planner’s capability to solve problems involving multiple real-world robotic arms.
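
The sketch below illustrates the core idea of implicitly searching the tensor product of per-robot roadmaps, as used by dRRT and dRRT*: a composite vertex is a tuple of per-robot roadmap nodes, and its neighbors are generated on the fly from each robot's adjacency list rather than by constructing the composite graph. Inter-robot collision checking, the tree expansion toward random samples, and the informed heuristics are omitted; the function is illustrative only.

```python
from itertools import product

def composite_neighbors(vertex, roadmaps):
    """Neighbors of a composite vertex in the implicit tensor-product roadmap.

    vertex   : tuple (v1, ..., vk), one roadmap node per robot
    roadmaps : list of adjacency dicts, roadmaps[i][v] -> neighbors of v
    Each robot may either stay at its node or move to an adjacent one; the
    composite graph itself is never built explicitly.
    """
    per_robot_moves = [[v] + list(roadmaps[i][v]) for i, v in enumerate(vertex)]
    for nxt in product(*per_robot_moves):
        if nxt != vertex:          # skip the trivial "everyone stays" move
            yield nxt              # (inter-robot collision checks omitted)

# Two robots, each on a tiny 3-node path roadmap.
path = {0: [1], 1: [0, 2], 2: [1]}
print(list(composite_neighbors((0, 2), [path, path])))
```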

58 citations


Journal ArticleDOI
TL;DR: A novel spherical underwater robot (SUR IV) with hybrid propulsion devices, including vectored water-jet and propeller thrusters, is proposed in this paper, together with a variety of movement modes for different operational targets such as remote or hovering operation and general or silent operation.
Abstract: Underwater robots have attracted significant interest for monitoring the marine environment. In complex situations, robots sometimes need to move fast and sometimes need to maintain low speed and low noise. To address this issue, a novel spherical underwater robot (SUR IV) with hybrid propulsion devices, including vectored water-jet and propeller thrusters, is proposed in this paper. A variety of movement modes is also proposed for different operational targets, such as remote or hovering operation and general or silent operation. To analyze the hydrodynamic characteristics of the hybrid thruster, a computational fluid dynamics simulation is performed in ANSYS CFX using the multi-reference frame method. The simulation results show the interaction between the propeller and the water-jet thruster. A thrust experiment is also conducted to evaluate the performance of the improved hybrid thruster; the maximum thrust of the hybrid thruster is 2.27 times greater than before. In addition, a noise comparison experiment is conducted to verify the low noise of the water-jet thruster. Finally, 3-DoF motions, including surge, heave, and yaw, were carried out with the SUR IV in a swimming pool. The improvement of the overall robot is assessed by the experimental results.

50 citations


Journal ArticleDOI
TL;DR: It is illustrated that more tasks can be performed in the given mission time by efficient incorporation of communication in the path design, and that the quality of the resultant paths improves in terms of connectivity.
Abstract: We incorporate communication into the multi-UAV path planning problem for search and rescue missions to enable dynamic task allocation via information dissemination. Communication is not treated as a constraint but as a mission goal. While achieving this goal, our aim is to avoid compromising the area coverage goal and the overall mission time. We define the mission tasks as: search, inform, and monitor at the best possible link quality. Building on our centralized simultaneous inform and connect (SIC) path planning strategy, we propose two adaptive strategies: (1) SIC with QoS (SICQ), which optimizes search, inform, and monitor tasks simultaneously, and (2) SIC following QoS (SIC+), which first optimizes search and inform tasks together and then finds the optimum positions for monitoring. Both strategies utilize information as soon as it becomes available to determine UAV tasks. The strategies can be tuned to prioritize certain tasks in relation to others. We illustrate that more tasks can be performed in the given mission time by efficient incorporation of communication in the path design. We also observe that the quality of the resultant paths improves in terms of connectivity.

42 citations


Journal ArticleDOI
Philip Dames1
TL;DR: In this paper, a distributed estimation and control algorithm that enables a team of mobile robots to search for and track an unknown number of targets is proposed, where the robots are equipped with sensors that have a finite field of view and may experience false negative and false positive detections.
Abstract: This paper proposes a distributed estimation and control algorithm that enables a team of mobile robots to search for and track an unknown number of targets. These targets may be stationary or moving, and the number of targets may vary over time as targets enter and leave the area of interest. The robots are equipped with sensors that have a finite field of view and may experience false negative and false positive detections. The robots use a novel, distributed formulation of the Probability Hypothesis Density (PHD) filter, which accounts for the limitations of the sensors, to estimate the number of targets and the positions of the targets. The robots then use Lloyd’s algorithm, a distributed control algorithm that has been shown to be effective for coverage and search tasks, to drive their motion within the environment. We utilize the output of the PHD filter as the importance weighting function within Lloyd’s algorithm. This causes the robots to be drawn towards areas that are likely to contain targets. We demonstrate the efficacy of our proposed algorithm, including comparisons to a coverage-based controller with a uniform importance weighting function, through an extensive series of simulated experiments. These experiments show teams of 10–100 robots successfully tracking 10–50 targets in both 2D and 3D environments.
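
A minimal sketch of one weighted-Lloyd update is shown below: each robot moves toward the weighted centroid of its Voronoi cell, where the weights stand in for the PHD filter's target-density estimate. The discretized environment and the names are assumptions for illustration, not the paper's distributed implementation.

```python
import numpy as np

def lloyd_step(positions, grid_pts, weights, gain=1.0):
    """One weighted-Lloyd update: move robots toward the weighted centroids
    of their Voronoi cells.

    positions : (n, 2) robot positions
    grid_pts  : (m, 2) discretization of the environment
    weights   : (m,) importance weights (e.g. an estimated target density)
    """
    positions = np.asarray(positions, dtype=float)
    # Assign each grid point to its nearest robot (Voronoi partition).
    d2 = ((grid_pts[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    owner = d2.argmin(axis=1)
    new_pos = positions.copy()
    for i in range(len(positions)):
        w = weights[owner == i]
        if w.sum() > 0:
            c = (w[:, None] * grid_pts[owner == i]).sum(0) / w.sum()
            new_pos[i] += gain * (c - positions[i])   # move toward the centroid
    return new_pos

# Toy example: 3 robots, importance weights concentrated near (0.8, 0.8).
xs, ys = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30))
grid = np.column_stack([xs.ravel(), ys.ravel()])
w = np.exp(-30 * ((grid[:, 0] - 0.8) ** 2 + (grid[:, 1] - 0.8) ** 2))
robots = np.array([[0.1, 0.1], [0.5, 0.2], [0.2, 0.6]])
for _ in range(20):
    robots = lloyd_step(robots, grid, w)
print(robots.round(2))
```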

39 citations


Journal ArticleDOI
TL;DR: This work removed the dependency on a common heading measurement by the MAVs, making the relative localization accuracy independent of magnetometer readings, and presented a range-based solution for indoor relative localization by micro air vehicles, achieving sufficient accuracy for leader–follower flight.
Abstract: We present a range-based solution for indoor relative localization by micro air vehicles (MAVs), achieving sufficient accuracy for leader–follower flight. Moving forward from previous work, we removed the dependency on a common heading measurement by the MAVs, making the relative localization accuracy independent of magnetometer readings. We found that this restricts the relative maneuvers that guarantee observability, and also that higher-accuracy range measurements are required to rectify the missing heading information, yet both disadvantages can be tackled. Our implementation uses ultra-wideband both for range measurements between MAVs and for sharing their velocities, accelerations, yaw rates, and heights with each other. We showcased our implementation on a total of three Parrot Bebop 2.0 MAVs and performed leader–follower flight in a real-world indoor environment. The follower MAVs were autonomous and used only on-board sensors to track the same trajectory as the leader. They could follow the leader MAV in close proximity for the entire duration of the flights.

33 citations


Journal ArticleDOI
TL;DR: This paper presents a framework for direct visual-LiDAR SLAM that combines the sparse depth measurement of light detection and ranging (LiDAR) with a monocular camera, and shows that the presented approach significantly outperforms existing methods in terms of accuracy and robustness under sparse depth measurements.
Abstract: This paper presents a framework for direct visual-LiDAR SLAM that combines the sparse depth measurement of light detection and ranging (LiDAR) with a monocular camera. The exploitation of the depth measurement between two sensor modalities has been reported in the literature, but mostly by a keyframe-based approach or by using a dense depth map. When the sparsity becomes severe, the existing methods reveal limitations. The key finding of this paper is that the direct method is more robust under sparse depth with a narrow field of view. The direct exploitation of sparse depth is achieved by implementing a joint optimization of each measurement under multiple keyframes. To ensure real-time performance, the keyframes of the sliding window are kept constant through rigorous marginalization. Through cross-validation, loop closure achieves robustness even in large-scale mapping. We intensively evaluated the proposed method using our own portable camera-LiDAR sensor system as well as the KITTI dataset. For the evaluation, the performance under varying LiDAR sparsity was simulated by downsampling the laser beams from 64 to 16 and 8 channels. The experiments show that the presented approach significantly outperforms existing methods in terms of accuracy and robustness under sparse depth measurements.

32 citations


Journal ArticleDOI
Soohwan Song1, Daekyum Kim1, Sungho Jo1
TL;DR: The proposed algorithm first computes a global plan to cover unexplored regions and complete the target model sequentially, and then plans local inspection paths that comprehensively scan local frontiers, which improves the completeness of surface coverage.
Abstract: In this study, we address an exploration problem when constructing complete 3D models in an unknown environment using a Micro-Aerial Vehicle. Most previous exploration methods were based on Next-Best-View (NBV) approaches, which iteratively determine the most informative view, i.e., the one that exposes the greatest unknown area from the current partial model. However, these approaches sometimes miss minor unreconstructed regions such as holes or sparse surfaces (even though these can be important features). Furthermore, because the NBV methods iterate the next-best path from a current partial view, they sometimes produce unnecessarily long trajectories by revisiting known regions. To address these problems, we propose a novel exploration algorithm that integrates coverage and inspection strategies. The suggested algorithm first computes a global plan to cover unexplored regions and complete the target model sequentially. It then plans local inspection paths that comprehensively scan local frontiers. This approach reduces the total exploration time and improves the completeness of the reconstructed models. We evaluate the proposed algorithm in comparison with other state-of-the-art approaches through simulated and real-world experiments. The results show that our algorithm outperforms the other approaches and, in particular, improves the completeness of surface coverage.
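
To make the two-level structure concrete, the sketch below plans a global visiting order over unexplored regions and then a local inspection order over each region's frontier viewpoints, using greedy nearest-neighbor ordering as a stand-in for the paper's coverage and inspection planners; all names and the toy scene are hypothetical.

```python
import math

def greedy_tour(start, points):
    """Greedy nearest-neighbor visiting order over a set of 3D points."""
    order, current, remaining = [], start, list(points)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    return order

def plan_exploration(start, region_centroids, frontiers_by_region):
    """Two-level plan: a global order over unexplored regions, then a local
    inspection order over each region's frontier viewpoints."""
    plan, pos = [], start
    for region in greedy_tour(start, region_centroids):
        local = greedy_tour(pos, frontiers_by_region[region])
        plan.extend(local)
        pos = local[-1] if local else pos
    return plan

# Toy scene: two unexplored regions, each with a few frontier viewpoints.
regions = [(5.0, 0.0, 2.0), (0.0, 6.0, 2.0)]
frontiers = {
    regions[0]: [(5.0, 1.0, 2.0), (6.0, 0.0, 2.0)],
    regions[1]: [(0.0, 6.5, 2.0), (1.0, 6.0, 1.5)],
}
print(plan_exploration((0.0, 0.0, 1.0), regions, frontiers))
```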

28 citations


Journal ArticleDOI
TL;DR: This framework introduces a novel attention mechanism that substantially improves the grasp success rate in clutter and demonstrates inter-robot generality by achieving over $$92\%$$ real-world grasp success rates in cluttered scenes with novel objects using two multi-fingered robotic hand-arm systems with different degrees of freedom.
Abstract: Generative Attention Learning (GenerAL) is a framework for high-DOF multi-fingered grasping that is not only robust to dense clutter and novel objects but also effective with a variety of different parallel-jaw and multi-fingered robot hands. This framework introduces a novel attention mechanism that substantially improves the grasp success rate in clutter. Its generative nature allows the learning of full-DOF grasps with flexible end-effector positions and orientations, as well as all finger joint angles of the hand. Trained purely in simulation, this framework skillfully closes the sim-to-real gap. To close the visual sim-to-real gap, this framework uses a single depth image as input. To close the dynamics sim-to-real gap, this framework circumvents continuous motor control with a direct mapping from pixel to Cartesian space inferred from the same depth image. Finally, this framework demonstrates inter-robot generality by achieving over $$92\%$$ real-world grasp success rates in cluttered scenes with novel objects using two multi-fingered robotic hand-arm systems with different degrees of freedom.

Journal ArticleDOI
TL;DR: A Nonlinear Moving Horizon Estimator that identifies key terrain parameters using onboard robot sensors and a learning-based Nonlinear Model Predictive Control that ensures high-precision path tracking in the presence of unknown wheel-terrain interaction are formulated.
Abstract: This paper presents high precision control and deep learning-based corn stand counting algorithms for a low-cost, ultra-compact 3D printed and autonomous field robot for agricultural operations. Currently, plant traits, such as emergence rate, biomass, vigor, and stand counting, are measured manually. This is highly labor-intensive and prone to errors. The robot, termed TerraSentia, is designed to automate the measurement of plant traits for efficient phenotyping as an alternative to manual measurements. In this paper, we formulate a Nonlinear Moving Horizon Estimator that identifies key terrain parameters using onboard robot sensors and a learning-based Nonlinear Model Predictive Control that ensures high precision path tracking in the presence of unknown wheel-terrain interaction. Moreover, we develop a machine vision algorithm designed to enable an ultra-compact ground robot to count corn stands by driving through the fields autonomously. The algorithm leverages a deep network to detect corn plants in images, and a visual tracking model to re-identify detected objects at different time steps. We collected data from 53 corn plots in various fields for corn plants around 14 days after emergence (stage V3 - V4). The robot predictions have agreed well with the ground truth with $$C_{robot}=1.02 \times C_{human}-0.86$$ and a correlation coefficient $$R=0.96$$ . The mean relative error given by the algorithm is $$-3.78\%$$ , and the standard deviation is $$6.76\%$$ . These results indicate a first and significant step towards autonomous robot-based real-time phenotyping using low-cost, ultra-compact ground robots for corn and potentially other crops.

Journal ArticleDOI
TL;DR: In this paper, a Gaussian process decentralized data fusion algorithm exploiting the notion of agent-centric support sets for distributed cooperative perception of large-scale environmental phenomena is proposed, which allows every mobile sensing agent to utilize a different support set and dynamically switch to another during execution for encapsulating its own data into a local summary that can still be assimilated with the other agents' local summaries.
Abstract: This paper presents novel Gaussian process decentralized data fusion algorithms exploiting the notion of agent-centric support sets for distributed cooperative perception of large-scale environmental phenomena. To overcome the limitations of scale in existing works, our proposed algorithms allow every mobile sensing agent to utilize a different support set and dynamically switch to another during execution for encapsulating its own data into a local summary that, perhaps surprisingly, can still be assimilated with the other agents’ local summaries (i.e., based on their current support sets) into a globally consistent summary to be used for predicting the phenomenon. To achieve this, we propose a novel transfer learning mechanism for a team of agents capable of sharing and transferring information encapsulated in a summary based on a support set to that utilizing a different support set with some loss that can be theoretically bounded and analyzed. To alleviate the issue of information loss accumulating over multiple instances of transfer learning, we propose a new information sharing mechanism to be incorporated into our algorithms in order to achieve memory-efficient lazy transfer learning. Empirical evaluation on three real-world datasets for up to 128 agents shows that our algorithms outperform the state-of-the-art methods.

Journal ArticleDOI
TL;DR: The results indicate that the use of visual attention significantly improves search, but the degree of improvement depends on the nature of the task and the complexity of the environment.
Abstract: We present an active visual search model for finding objects in unknown environments. The proposed algorithm guides the robot towards the sought object using the relevant stimuli provided by the visual sensors. Existing search strategies are either purely reactive or use simplified sensor models that do not exploit all the visual information available. In this paper, we propose a new model that actively extracts visual information via visual attention techniques and, in conjunction with a non-myopic decision-making algorithm, leads the robot to search more relevant areas of the environment. The attention module couples both top-down and bottom-up attention models enabling the robot to search regions with higher importance first. The proposed algorithm is evaluated on a mobile robot platform in a 3D simulated environment. The results indicate that the use of visual attention significantly improves search, but the degree of improvement depends on the nature of the task and the complexity of the environment. In our experiments, we found that performance enhancements of up to 42% in structured and 38% in highly unstructured cluttered environments can be achieved using visual attention mechanisms.

Journal ArticleDOI
TL;DR: In this paper, a distributed algorithm, called Cooperative Autonomy for Resilience and Efficiency (CARE), is proposed to provide resilience to the robot team against failures of individual robots and improve the overall efficiency of operation via event-driven replanning.
Abstract: This paper addresses the problem of Multi-robot Coverage Path Planning for unknown environments in the presence of robot failures. Unexpected robot failures can seriously degrade the performance of a robot team and in extreme cases jeopardize the overall operation. Therefore, this paper presents a distributed algorithm, called Cooperative Autonomy for Resilience and Efficiency, which not only provides resilience to the robot team against failures of individual robots, but also improves the overall efficiency of operation via event-driven replanning. The algorithm uses distributed Discrete Event Supervisors, which trigger games between a set of feasible players in the event of a robot failure or idling, to make collaborative decisions for task reallocations. The game-theoretic structure is built using Potential Games, where the utility of each player is aligned with a shared objective function for all players. The algorithm has been validated in various complex scenarios on a high-fidelity robotic simulator, and the results demonstrate that the team achieves complete coverage under failures, reduced coverage time, and faster target discovery as compared to three alternative methods.
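
The sketch below shows best-response dynamics in an identical-interest potential game, the game-theoretic structure the abstract refers to: each robot in turn switches to the task that most increases a shared objective until no robot can improve. It is an illustrative stand-in, not the CARE algorithm or its Discrete Event Supervisors.

```python
def best_response_allocation(robots, tasks, utility, max_rounds=50):
    """Best-response dynamics on a potential game: each robot in turn switches
    to the task that maximizes the shared objective, until no robot can improve.

    utility : callable allocation_dict -> scalar shared objective (the potential)
    """
    alloc = {r: tasks[0] for r in robots}          # arbitrary initial assignment
    for _ in range(max_rounds):
        changed = False
        for r in robots:
            best_t = max(tasks, key=lambda t: utility({**alloc, r: t}))
            if utility({**alloc, r: best_t}) > utility(alloc):
                alloc[r] = best_t
                changed = True
        if not changed:                            # a Nash equilibrium of the game
            break
    return alloc

# Toy example: the shared objective counts distinct covered regions, so the
# robots spread themselves over the three regions instead of crowding one.
robots, tasks = ["r1", "r2", "r3"], ["A", "B", "C"]
utility = lambda a: len(set(a.values()))
print(best_response_allocation(robots, tasks, utility))
```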

Journal ArticleDOI
TL;DR: An enhanced tightly-coupled sensor fusion scheme using a monocular camera and ultra-wideband ranging sensors for the task of simultaneous localization and mapping that can achieve metric-scale, drift-reduced odometry and a map consisting of visual landmarks and UWB anchors without knowing the anchor positions is proposed.
Abstract: This paper proposes an enhanced tightly-coupled sensor fusion scheme using a monocular camera and ultra-wideband (UWB) ranging sensors for the task of simultaneous localization and mapping. By leveraging UWB data, the method can achieve metric-scale, drift-reduced odometry and a map consisting of visual landmarks and UWB anchors without knowing the anchor positions. Firstly, the UWB configuration accommodates any degenerate cases with an insufficient number of anchors for 3D triangulation ( $$N\le 3$$ and no height data). Secondly, a practical model for UWB measurement is used, ensuring more accurate estimates for all the states. Thirdly, selected prior range measurements including the anchor-world origin and anchor–anchor ranges are utilized to alleviate the requirement of good initial guesses for anchor position. Lastly, a monitoring scheme is introduced to appropriately fix the scale factor to maintain a smooth trajectory as well as the UWB anchor position to fuse camera and UWB measurement in the bundle adjustment. Extensive experiments are carried out to showcase the effectiveness of the proposed system.

Journal ArticleDOI
TL;DR: OCBC, an algorithm for Optimizing Communication under Bandwidth Constraints, uses forward simulation to evaluate communications and applies a bandit-based combinatorial optimization algorithm to select what to include in a message.
Abstract: Robots working collaboratively can share observations with others to improve team performance, but communication bandwidth is limited. Recognizing this, an agent must decide which observations to communicate to best serve the team. Accurately estimating the value of a single communication is expensive; finding an optimal combination of observations to put in the message is intractable. In this paper, we present OCBC, an algorithm for Optimizing Communication under Bandwidth Constraints. OCBC uses forward simulation to evaluate communications and applies a bandit-based combinatorial optimization algorithm to select what to include in a message. We evaluate OCBC’s performance in a simulated multi-robot navigation task. We show that OCBC achieves better task performance than a state-of-the-art method while communicating up to an order of magnitude less.
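
As a rough illustration of combining bandit-style value estimation with a bandwidth budget, the sketch below uses UCB to decide which candidate observations deserve more (noisy) evaluation rollouts and then greedily packs the message by estimated value per unit bandwidth. This is not OCBC itself; the rollout function and parameters are hypothetical.

```python
import math, random

def select_message(candidates, noisy_value, budget, sizes, rounds=200):
    """Pick a subset of observations to send under a bandwidth budget.

    candidates  : list of observation IDs
    noisy_value : callable obs_id -> noisy estimate of its value to the team
                  (stand-in for one forward-simulation rollout)
    budget      : total bandwidth available
    sizes       : dict obs_id -> bandwidth cost of sending the observation
    """
    counts = {c: 0 for c in candidates}
    means = {c: 0.0 for c in candidates}
    for t in range(1, rounds + 1):
        # UCB1 choice: unexplored arms first, then mean + exploration bonus.
        arm = min((c for c in candidates if counts[c] == 0), default=None)
        if arm is None:
            arm = max(candidates,
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        means[arm] += (noisy_value(arm) - means[arm]) / counts[arm]   # running mean
    # Greedy knapsack fill by estimated value per unit bandwidth.
    chosen, used = [], 0
    for c in sorted(candidates, key=lambda a: means[a] / sizes[a], reverse=True):
        if used + sizes[c] <= budget:
            chosen.append(c)
            used += sizes[c]
    return chosen

# Toy example: 5 observations with hidden true values, budget of 3 units.
true_val = {f"obs{i}": v for i, v in enumerate([0.9, 0.1, 0.6, 0.3, 0.8])}
rng = random.Random(1)
noisy = lambda c: true_val[c] + rng.gauss(0, 0.2)
print(select_message(list(true_val), noisy, budget=3, sizes={c: 1 for c in true_val}))
```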

Journal ArticleDOI
TL;DR: The proposed application-layer rate-adaptive multicast video streaming over an aerial ad-hoc network that uses IEEE 802.11 outperforms legacy multicast in terms of goodput, delay, and packet loss.
Abstract: We present and evaluate a multicast framework for point-to-multipoint and multipoint-to-point-to-multipoint video streaming that is applicable if both source and receiver nodes are mobile. Receiver nodes can join a multicast group by selecting a particular video stream and are dynamically elected as designated nodes based on their signal quality to provide feedback about packet reception. We evaluate the proposed application-layer rate-adaptive multicast video streaming over an aerial ad-hoc network that uses IEEE 802.11, a desirable protocol that, however, does not support a reliable multicast mechanism due to its inability to provide feedback from the receivers. Our rate-adaptive approach outperforms legacy multicast in terms of goodput, delay, and packet loss. Moreover, we obtain a gain in video quality (PSNR) of $$30\%$$ for point-to-multipoint and of $$20\%$$ for multipoint-to-point-to-multipoint streaming.

Journal ArticleDOI
TL;DR: This paper thoroughly outlines the RN framework and demonstrates its practicality with several long flight tests in unknown GPS-denied and GPS-degraded environments and is shown to produce globally-consistent, metric, and localized maps by incorporating loop closures and intermittent GPS measurements.
Abstract: Unlike many current navigation approaches for micro air vehicles, the relative navigation (RN) framework presented in this paper ensures that the filter state remains observable in GPS-denied environments by working with respect to a local reference frame. By subtly restructuring the problem, RN ensures that the filter uncertainty remains bounded, consistent, and normally-distributed, and insulates flight-critical estimation and control processes from large global updates. This paper thoroughly outlines the RN framework and demonstrates its practicality with several long flight tests in unknown GPS-denied and GPS-degraded environments. The relative front end is shown to produce low-drift estimates and smooth, stable control while leveraging off-the-shelf algorithms. The system runs in real time with onboard processing, fuses a variety of vision sensors, and works indoors and outdoors without requiring special tuning for particular sensors or environments. RN is shown to produce globally-consistent, metric, and localized maps by incorporating loop closures and intermittent GPS measurements.

Journal ArticleDOI
TL;DR: This paper proposes Gaussian Process-based online methods to efficiently build a communication map with multiple robots and provides two leader-follower online sensing strategies to coordinate and guide the robots while collecting data.
Abstract: This paper tackles the problem of constructing a communication map of a known environment using multiple robots. A communication map encodes information on whether two robots can communicate when they are at two arbitrary locations and plays a fundamental role for a multi-robot system deployment to reliably and effectively achieve a variety of tasks, such as environmental monitoring and exploration. Previous work on communication map building typically considered only scenarios with a fixed base station and designed offline methods, which did not exploit data collected online by the robots. This paper proposes Gaussian Process-based online methods to efficiently build a communication map with multiple robots. Such robots form a mesh network, where there is no fixed base station. Specifically, we provide two leader-follower online sensing strategies to coordinate and guide the robots while collecting data. Furthermore, we improve the performance and computational efficiency by exploiting prior communication models that can be built from the physical map of the environment. Extensive experimental results in simulation and with a team of TurtleBot 2 platforms validate the approach.
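
A minimal numpy sketch of the underlying idea, GP regression used as an online communication-quality map with uncertainty-driven sampling, is given below. The kernel, hyperparameters, and single-robot sampling loop are assumptions standing in for the paper's leader-follower strategies and mesh-network setting.

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, sig_f=1.0, sig_n=0.1):
    """Plain GP regression with an RBF kernel (illustrative, not optimized)."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sig_f ** 2 * np.exp(-0.5 * d2 / length ** 2)
    K = rbf(X_train, X_train) + sig_n ** 2 * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = sig_f ** 2 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# Online loop: keep sampling where the predicted link quality is most uncertain
# (a simple stand-in for a coordinated leader-follower sensing strategy).
rng = np.random.default_rng(0)
true_quality = lambda p: np.exp(-0.5 * np.linalg.norm(p - np.array([2.0, 2.0])) ** 2)
grid = np.array([[x, y] for x in np.linspace(0, 4, 15) for y in np.linspace(0, 4, 15)])
X = [np.array([0.0, 0.0])]
y = [true_quality(X[0])]
for _ in range(15):
    mean, var = gp_predict(np.array(X), np.array(y), grid)
    nxt = grid[var.argmax()]                 # go where the map is least certain
    X.append(nxt)
    y.append(true_quality(nxt) + rng.normal(0, 0.05))
mean, _ = gp_predict(np.array(X), np.array(y), grid)
closest = np.linalg.norm(grid - [2, 2], axis=1).argmin()
print("predicted link quality near (2, 2):", round(float(mean[closest]), 2))
```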

Journal ArticleDOI
TL;DR: In this article, the authors study the problem of tracking multiple moving targets using a team of mobile robots, where each robot has a set of motion primitives to choose from in order to collectively maximize the number of targets tracked or the total quality of tracking.
Abstract: We study the problem of tracking multiple moving targets using a team of mobile robots. Each robot has a set of motion primitives to choose from in order to collectively maximize the number of targets tracked or the total quality of tracking. Our focus is on scenarios where communication is limited and the robots have limited time to share information with their neighbors. As a result, we seek distributed algorithms that can find solutions in a bounded amount of time. We present two algorithms: (1) a greedy algorithm that is guaranteed to find a 2-approximation to the optimal (centralized) solution but requires |R| communication rounds in the worst case, where |R| denotes the number of robots, and (2) a local algorithm that finds a $$\mathcal {O}\left( (1+\epsilon )(1+1/h)\right) $$-approximation in $$\mathcal {O}(h\log 1/\epsilon )$$ communication rounds. Here, h and $$\epsilon $$ are parameters that allow the user to trade-off the solution quality with communication time. In addition to theoretical results, we present empirical evaluation including comparisons with centralized optimal solutions.
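
The sequential greedy algorithm referenced above (the 2-approximation requiring up to |R| rounds) can be sketched as follows, with a set-coverage objective standing in for the paper's tracking-quality functions; the local algorithm and its trade-off parameters are not shown.

```python
def greedy_assignment(robots, primitives, covered_targets):
    """Sequential greedy: each robot in turn picks the motion primitive with the
    largest marginal gain in newly tracked targets (one communication round per
    robot in the worst case).

    primitives      : dict robot -> list of primitive IDs
    covered_targets : callable (robot, primitive) -> set of target IDs it would track
    """
    tracked, choice = set(), {}
    for r in robots:                      # one round per robot
        best_p, best_gain = None, -1
        for p in primitives[r]:
            gain = len(covered_targets(r, p) - tracked)
            if gain > best_gain:
                best_p, best_gain = p, gain
        choice[r] = best_p
        tracked |= covered_targets(r, best_p)
    return choice, tracked

# Toy example: 2 robots, 2 motion primitives each, 4 targets.
cover = {
    ("r1", "left"): {1, 2}, ("r1", "right"): {3},
    ("r2", "left"): {1},    ("r2", "right"): {3, 4},
}
choice, tracked = greedy_assignment(
    ["r1", "r2"], {"r1": ["left", "right"], "r2": ["left", "right"]},
    lambda r, p: cover[(r, p)])
print(choice, tracked)
```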

Journal ArticleDOI
TL;DR: This paper presents iterative residual tuning (IRT), a deep learning system identification technique that modifies a simulator’s parameters to better match reality using minimal real-world observations, and develops and analyzes IRT in depth.
Abstract: Robots are increasingly learning complex skills in simulation, increasing the need for realistic simulation environments. Existing techniques for approximating real-world physics with a simulation require extensive observation data and/or thousands of simulation samples. This paper presents iterative residual tuning (IRT), a deep learning system identification technique that modifies a simulator’s parameters to better match reality using minimal real-world observations. IRT learns to estimate the parameter difference between two parameterized models, allowing repeated iterations to converge on the true parameters similarly to gradient descent. In this paper, we develop and analyze IRT in depth, including its similarities and differences with gradient descent. Our IRT implementation, TuneNet, is pre-trained via supervised learning over an auto-generated simulated dataset. We show that TuneNet can perform rapid, efficient system identification even when the true parameter values lie well outside those in the network’s training data, and can also learn real-world parameter values from visual data. We apply TuneNet to a sim-to-real task transfer experiment, allowing a robot to perform a dynamic manipulation task with a new object after a single observation.
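
A toy version of the IRT update loop is sketched below: a residual estimator predicts the parameter difference between simulated and real observations, and the simulator parameter is repeatedly corrected by that estimate. Here a hand-made proportional rule stands in for the learned TuneNet network, and the "simulator" is a one-line bounce model; both are assumptions for illustration.

```python
def iterative_residual_tuning(theta, simulate, real_obs, estimate_delta,
                              iters=20, step=1.0):
    """Iteratively update a simulator parameter toward reality.

    theta          : initial simulator parameter estimate
    simulate       : callable theta -> simulated observation
    real_obs       : the (single) real-world observation
    estimate_delta : callable (sim_obs, real_obs) -> estimated parameter residual
                     (stands in for the learned TuneNet-style network)
    """
    for _ in range(iters):
        delta = estimate_delta(simulate(theta), real_obs)
        theta = theta + step * delta          # apply the predicted residual
    return theta

# Toy example: the 'observation' is a ball's bounce height, which depends on an
# unknown restitution-like parameter; the residual estimator is a crude
# proportional rule standing in for a trained network.
simulate = lambda restitution: 2.0 * restitution ** 2     # drop from 2 m
real_obs = simulate(0.8)                                  # true parameter = 0.8
estimate_delta = lambda sim_obs, obs: 0.5 * (obs - sim_obs)
print(round(iterative_residual_tuning(0.3, simulate, real_obs, estimate_delta), 3))
```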

Journal ArticleDOI
TL;DR: This work focuses on developing autonomous high-level planning, where low-level controls are leveraged from previous work in distributed motion, target tracking, localization, and communication, and introduces a hierarchical algorithm, Dynamic Domain Reduction for Multi-Agent Planning, that enables multi-agent planning for large multi-objective environments.
Abstract: We consider scenarios where a swarm of unmanned vehicles (UxVs) seek to satisfy a number of diverse, spatially distributed objectives. The UxVs strive to determine an efficient plan to service the objectives while operating in a coordinated fashion. We focus on developing autonomous high-level planning, where low-level controls are leveraged from previous work in distributed motion, target tracking, localization, and communication. We rely on the use of state and action abstractions in a Markov decision processes framework to introduce a hierarchical algorithm, Dynamic Domain Reduction for Multi-Agent Planning, that enables multi-agent planning for large multi-objective environments. Our analysis establishes the correctness of our search procedure within specific subsets of the environments, termed ‘sub-environment’ and characterizes the algorithm performance with respect to the optimal trajectories in single-agent and sequential multi-agent deployment scenarios using tools from submodularity. Simulated results show significant improvement over using a standard Monte Carlo tree search in an environment with large state and action spaces.

Journal ArticleDOI
TL;DR: This work presents a novel pipeline resulting from integrating Maiettini et al. (2017) and Maiettini et al. (2018), and justifies that the proposed hybrid architecture is key in leveraging powerful deep representations while maintaining the fast training time of large-scale kernel methods.
Abstract: Object detection is a fundamental ability for robots interacting within an environment. While stunningly effective, state-of-the-art deep learning methods require huge amounts of labeled images and hours of training, which does not favour such scenarios. This work presents a novel pipeline resulting from integrating (Maiettini et al. in 2017 IEEE-RAS 17th international conference on humanoid robotics (Humanoids), 2017) and (Maiettini et al. in 2018 IEEE/RSJ international conference on intelligent robots and systems (IROS), 2018), which naturally trains a robot to detect novel objects in few seconds. Moreover, we report on an extended empirical evaluation of the learning method, justifying that the proposed hybrid architecture is key in leveraging powerful deep representations while maintaining the fast training time of large-scale kernel methods. We validate our approach on the Pascal VOC benchmark (Everingham et al. in Int J Comput Vis 88(2): 303–338, 2010) and on a challenging robotic scenario (iCubWorld Transformations, Pasquale et al. in Rob Auton Syst 112:260–281, 2019). We address real-world use-cases and show how to tune the method for different speed/accuracy trade-offs. Lastly, we discuss limitations and directions for future development.
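
The sketch below illustrates the general "kernel method on precomputed deep features" idea in its simplest form, kernel regularized least squares trained with a single linear solve; it is not the Maiettini et al. pipeline, and the toy feature vectors merely stand in for CNN region descriptors.

```python
import numpy as np

def train_kernel_classifier(features, labels, lam=1e-3, gamma=0.5):
    """Kernel regularized least squares on precomputed (deep) features."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    K = rbf(features, features)
    alpha = np.linalg.solve(K + lam * np.eye(len(features)), labels)
    return lambda X: rbf(X, features) @ alpha    # decision function

# Toy "features": pretend these are region descriptors for positive and negative
# examples of a novel object; training is a single linear solve, which is what
# keeps adaptation fast.
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 0.3, size=(20, 8))
neg = rng.normal(-1.0, 0.3, size=(20, 8))
X = np.vstack([pos, neg])
y = np.array([1.0] * 20 + [-1.0] * 20)
clf = train_kernel_classifier(X, y)
print("training accuracy on toy data:", (np.sign(clf(X)) == y).mean())
```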

Journal ArticleDOI
Zhijie Tang1, Jiaqi Lu1, Zheng Wang1, Weiwei Chen1, Hao Feng1 
TL;DR: The internal pressure of each cavity is detected and adjusted in real time to change the moving and bending state of the earthworm-like soft robot, realizing self-adaption to unstructured environments.
Abstract: This paper presents a soft robot that can imitate the crawling locomotion of an earthworm. Locomotion of the robot can be achieved by expanding and contracting the body, which is made of flexible material. The earthworm-like soft robot can agilely get through cramped spaces and has a strong ability to adapt to an unstructured environment. A link of the earthworm-like robot is composed of three modules, and the robot is composed of multiple links. Experiments on a single module, two links, and three links show that the soft robot can move and bend by extending and contracting its modules in a specified gait. During movement, the internal pressure of each cavity is detected and adjusted in real time to change the moving and bending state of the earthworm-like soft robot, realizing self-adaption to unstructured environments. The air-pressure-sensing earthworm-like soft robot shows great promise in complicated environments such as pipeline inspection.

Journal ArticleDOI
TL;DR: This paper presents continuous classification of left and right hand-gestures using muscle signals, time-locked classification of error-related potentials using brain signals, and a framework that combines these pipelines to detect and correct robot mistakes during multiple-choice tasks.
Abstract: Effective human supervision of robots can be key for ensuring correct robot operation in a variety of potentially safety-critical scenarios. This paper takes a step towards fast and reliable human intervention in supervisory control tasks by combining two streams of human biosignals: muscle and brain activity acquired via EMG and EEG, respectively. It presents continuous classification of left and right hand-gestures using muscle signals, time-locked classification of error-related potentials using brain signals (unconsciously produced when observing an error), and a framework that combines these pipelines to detect and correct robot mistakes during multiple-choice tasks. The resulting hybrid system is evaluated in a “plug-and-play” fashion with 7 untrained subjects supervising an autonomous robot performing a target selection task. Offline analysis further explores the EMG classification performance, and investigates methods to select subsets of training data that may facilitate generalizable plug-and-play classifiers.

Journal ArticleDOI
TL;DR: This paper proposes a formal specification language for the high-level description of swarm behaviors on both the swarm and individual levels and presents algorithms for automated synthesis of decentralized controllers and synchronization skeletons that describe how groups of robots must coordinate to satisfy the specification.
Abstract: The majority of work in the field of swarm robotics focuses on the bottom-up design of local rules for individual robots that create emergent swarm behaviors. In this paper, we take a top-down approach and consider the following problem: how can we specify a desired collective behavior and automatically synthesize decentralized controllers that can be distributed over robots to achieve the collective objective in a provably correct way? We propose a formal specification language for the high-level description of swarm behaviors on both the swarm and individual levels. We present algorithms for automated synthesis of decentralized controllers and synchronization skeletons that describe how groups of robots must coordinate to satisfy the specification. We demonstrate our proposed approach through an example in simulation.

Journal ArticleDOI
Bo Li1, Yingqiang Wang1, Yu Zhang1, Wenjie Zhao1, Jianyuan Ruan1, Ping Li1 
TL;DR: This study develops a new laser-based SLAM algorithm by redesigning the two core elements common to all SLAM systems, namely the state estimation and map construction, and proposes a new type of map representation based on the regionalized GP map reconstruction algorithm.
Abstract: Existing laser-based 2D simultaneous localization and mapping (SLAM) methods exhibit limitations with regard to either efficiency or map representation. An ideal method should estimate the map of the environment and the state of the robot quickly and accurately while providing a compact and dense map representation. In this study, we develop a new laser-based SLAM algorithm by redesigning the two core elements common to all SLAM systems, namely the state estimation and map construction. Utilizing Gaussian process (GP) regression, we propose a new type of map representation based on the regionalized GP map reconstruction algorithm. With this new map representation, both the state estimation method and the map update method can be completed with the use of concise mathematics. For small- or medium-scale scenarios, our method, consisting of only state estimation and map construction, demonstrates outstanding performance relative to traditional occupancy-grid-map-based approaches in both accuracy and especially efficiency. For large-scale scenarios, we extend our approach to a graph-based version.

Journal ArticleDOI
TL;DR: The goal of this work is to provide a unified robotic architecture that produces the leader and follower roles, and a human-guidance detection algorithm to switch between the two roles.
Abstract: A seamless interaction requires two robotic behaviors: the leader role, where the robot rejects external perturbations and focuses on the autonomous execution of the task, and the follower role, where the robot ignores the task and complies with human intentional forces. The goal of this work is to provide (1) a unified robotic architecture to produce these two roles, and (2) a human-guidance detection algorithm to switch across the two roles. In the absence of human guidance, the robot performs its task autonomously, and upon detection of such guidance the robot passively follows the human motions. We employ dynamical systems to generate task-specific motion and admittance control to generate reactive motions toward the human guidance. This structure enables the robot to reject undesirable perturbations, track the motions precisely, react to human guidance by providing proper compliant behavior, and re-plan the motion reactively. We provide an analytical investigation of our method in terms of tracking and compliant behavior. Finally, we evaluate our method experimentally using a 6-DoF manipulator.
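
A 1-D toy of the leader/follower switching idea is sketched below: a dynamical-system attractor drives the robot toward its task goal when no guidance is detected, and an admittance model lets it comply when the measured external force exceeds a threshold (a crude stand-in for the paper's human-guidance detection algorithm). The gains and threshold are arbitrary illustrative values.

```python
import numpy as np

def simulate_leader_follower(goal, forces, dt=0.01, force_thresh=5.0,
                             M=2.0, D=20.0, ds_gain=1.0):
    """Toy leader/follower switching with admittance control (1-D).

    goal         : task attractor for the leader-role dynamical system
    forces       : sequence of measured external (human) forces per time step
    force_thresh : crude human-guidance detector: above it, follow the human
    M, D         : virtual mass and damping of the admittance model
    """
    x, v_adm = 0.0, 0.0
    trajectory = []
    for f in forces:
        # Admittance dynamics: M * dv/dt + D * v = f_ext.
        v_adm += dt * (f - D * v_adm) / M
        if abs(f) > force_thresh:          # follower role: comply with the human
            v = v_adm
        else:                              # leader role: track the task DS
            v = -ds_gain * (x - goal)
        x += dt * v
        trajectory.append(x)
    return np.array(trajectory)

# 4 s with no contact (robot converges to the goal), then 2 s of a steady 10 N push.
forces = [0.0] * 400 + [10.0] * 200
traj = simulate_leader_follower(goal=1.0, forces=forces)
print("end of autonomous phase:", traj[399].round(3), "after guidance:", traj[-1].round(3))
```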

Journal ArticleDOI
TL;DR: A novel approach for control and motion planning of formations of multiple unmanned micro aerial vehicles (multi-rotor helicopters) in cluttered GPS-denied or straitened environments, achieved by migrating the virtual leader along with the hull surrounding the formation.
Abstract: This paper presents a novel approach for control and motion planning of formations of multiple unmanned micro aerial vehicles (multi-rotor helicopters, in the literature also often called unmanned aerial vehicles—UAVs—or unmanned aerial systems—UAS) in cluttered GPS-denied or straitened environments. The proposed method enables us to autonomously design complex maneuvers of a compact micro aerial vehicle (MAV) team in a virtual-leader-follower scheme. The results of the motion planning approach and the required stability of the formation are achieved by migrating the virtual leader along with the hull surrounding the formation. This enables us to suddenly change the formation motion in all directions, independently of the current orientation of the formation, and therefore to fully exploit the maneuverability of small multi-rotor helicopters. The proposed method was verified and its performance statistically evaluated in numerous simulations and experiments with a fleet of MAVs.