
Showing papers on "Robot control published in 2017"


Journal ArticleDOI
TL;DR: This paper presents safety barrier certificates that ensure scalable and provably collision-free behaviors in multirobot systems by modifying the nominal controllers to formally satisfy safety constraints.
Abstract: This paper presents safety barrier certificates that ensure scalable and provably collision-free behaviors in multirobot systems by modifying the nominal controllers to formally satisfy safety constraints. This is achieved by minimizing the difference between the actual and the nominal controllers subject to safety constraints. The resulting computation of the safety controllers is done through a quadratic programming problem that can be solved in real-time and in this paper, we describe a series of problems of increasing complexity. Starting with a centralized formulation, where the safety controller is computed across all agents simultaneously, we show how one can achieve a natural decentralization whereby individual robots only have to remain safe relative to nearby robots. Conservativeness and existence of solutions as well as deadlock-avoidance are then addressed using a mixture of relaxed control barrier functions, hybrid braking controllers, and consistent perturbations. The resulting control strategy is verified experimentally on a collection of wheeled mobile robots whose nominal controllers are explicitly designed to make the robots collide.
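For a single linear barrier constraint, the minimal-modification idea above reduces to a closed-form projection. The sketch below is illustrative only (a hand-built two-robot constraint, not the paper's multi-agent formulation), but it shows the core of the QP-based safety filter: keep the nominal control when it is already safe, otherwise project it onto the constraint boundary.

```python
import numpy as np

def safe_controller(u_nom, a, b):
    """Minimally modify u_nom so the linear safety constraint a.u >= b
    (a barrier-certificate condition) holds.

    Solves  min ||u - u_nom||^2  s.t.  a.u >= b  in closed form:
    if the nominal input already satisfies the constraint it is kept,
    otherwise it is projected onto the hyperplane a.u = b."""
    slack = a @ u_nom - b
    if slack >= 0.0:
        return u_nom                       # nominal control is already safe
    return u_nom - (slack / (a @ a)) * a   # projection onto a.u = b

# Toy scenario: two robots on a line whose nominal controllers drive them
# together; the constraint bounds their closing speed.
u_nom = np.array([1.0, -1.0])   # nominal controls (collision-seeking)
a = np.array([-1.0, 1.0])       # relative-velocity direction
b = -0.5                        # barrier condition: closing speed <= 0.5
u = safe_controller(u_nom, a, b)
```

Because the filter minimizes the deviation from the nominal controller, it is only active when safety is actually at risk, which is what makes the approach minimally invasive.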

504 citations


Journal ArticleDOI
TL;DR: This survey paper reviews, extends, compares, and experimentally evaluates model-based algorithms for real-time collision detection, isolation, and identification that use only proprioceptive sensors, covering the context-independent phases of the collision event pipeline for robots interacting with the environment.
Abstract: Robot assistants and professional coworkers are becoming a commodity in domestic and industrial settings. In order to enable robots to share their workspace with humans and physically interact with them, fast and reliable handling of possible collisions on the entire robot structure is needed, along with control strategies for safe robot reaction. The primary motivation is the prevention or limitation of possible human injury due to physical contacts. In this survey paper, based on our early work on the subject, we review, extend, compare, and evaluate experimentally model-based algorithms for real-time collision detection, isolation, and identification that use only proprioceptive sensors. This covers the context-independent phases of the collision event pipeline for robots interacting with the environment, as in physical human–robot interaction or manipulation tasks. The problem is addressed for rigid robots first and then extended to the presence of joint/transmission flexibility. The basic physically motivated solution has already been applied to numerous robotic systems worldwide, ranging from manipulators and humanoids to flying robots, and even to commercial products.
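A widely used proprioceptive scheme of this kind is the momentum-observer residual, which needs only motor torques and joint velocities. The 1-DOF sketch below is a simplified illustration (gravity and Coriolis terms omitted, all parameters invented): the residual stays near zero in free motion and tracks the external torque once a collision occurs.

```python
# 1-DOF sketch of a momentum-based residual for collision detection.
# With momentum p = m * qdot, the residual
#     r(t) = K * ( p(t) - integral(tau_motor + r) dt )
# obeys r_dot = K * (tau_ext - r): near zero in free motion, and a
# first-order estimate of the external (collision) torque on contact.
def run_observer(m, K, dt, tau_motor, tau_ext):
    qdot, integ, r = 0.0, 0.0, 0.0
    residuals = []
    for tm, te in zip(tau_motor, tau_ext):
        qdot += (tm + te) / m * dt    # simulated joint dynamics
        integ += (tm + r) * dt        # observer integral term
        r = K * (m * qdot - integ)    # residual estimate
        residuals.append(r)
    return residuals

# Constant motor torque; an external "collision" torque appears at t = 0.5 s.
dt, steps = 0.001, 1000
tau_motor = [0.2] * steps
tau_ext = [0.0 if k * dt < 0.5 else -1.0 for k in range(steps)]
res = run_observer(m=1.0, K=50.0, dt=dt, tau_motor=tau_motor, tau_ext=tau_ext)
```

Thresholding such a residual gives detection; on a multi-joint robot, the pattern of per-joint residuals additionally isolates the contact link and identifies the contact torque.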

467 citations


Journal ArticleDOI
TL;DR: To extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is integrated into the control design; the effectiveness of the proposed design is demonstrated through experiments on the Baxter robot.
Abstract: Robots with coordinated dual arms are able to perform more complicated tasks that a single manipulator could hardly achieve. However, more rigorous motion precision is required to guarantee effective cooperation between the dual arms, especially when they grasp a common object. In this case, the internal forces applied on the object must also be considered in addition to the external forces. Therefore, a prescribed tracking performance at both transient and steady states is first specified, and then, a controller is synthesized to rigorously guarantee the specified motion performance. In the presence of unknown dynamics of both the robot arms and the manipulated object, the neural network approximation technique is employed to compensate for uncertainties. In order to extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is integrated into the control design. Effectiveness of the proposed control design has been shown through experiments carried out on the Baxter Robot.

342 citations


Journal ArticleDOI
TL;DR: The capability of the robot and the performance of its individual motion control and perception modules were validated during the DARPA Robotics Challenge (DRC), in which the robot demonstrated exceptional physical resilience and executed some of the competition tasks.
Abstract: In this work, we present WALK-MAN, a humanoid platform developed to operate in realistic unstructured environments and to demonstrate new skills, including powerful manipulation, robust balanced locomotion, high-strength capabilities, and physical sturdiness. To enable these capabilities, WALK-MAN's design and actuation are based on the most recent advancements in series elastic actuator drives, with unique performance features that differentiate the robot from previous state-of-the-art compliant actuated robots. Physical interaction performance benefits from both active and passive adaptation, thanks to WALK-MAN's actuation, which combines customized high-performance modules with tuned torque/velocity curves and transmission elasticity for high-speed adaptation response and motion reactions to disturbances. The WALK-MAN design also includes innovative design optimization features that consider the selection of the kinematic structure and the placement of the actuators within the body structure to maximize robot performance. Physical robustness is ensured through the integration of elastic transmission, proprioceptive sensing, and control. The WALK-MAN hardware was designed and built in 11 months, and the prototype of the robot was ready four months before the DARPA Robotics Challenge (DRC) Finals. The motion generation of WALK-MAN is based on a unified motion-generation framework of whole-body locomotion and manipulation (termed loco-manipulation). WALK-MAN is able to execute simple loco-manipulation behaviors synthesized by combining different primitives defining the behavior of the center of gravity, the motion of the hands, legs, and head, the body attitude and posture, and the constrained body parts such as joint limits and contacts. The motion-generation framework, including the specific motion modules and software architecture, is discussed in detail.
A rich perception system allows the robot to perceive and generate 3D representations of the environment as well as detect contacts and sense physical interaction forces and moments. The operator station that pilots use to control the robot provides a rich interface with different control modes and a number of teleoperated or semiautonomous command features. The capability of the robot and the performance of the individual motion control and perception modules were validated during the DRC, in which the robot demonstrated exceptional physical resilience and executed some of the competition tasks.

211 citations


Journal ArticleDOI
TL;DR: It is demonstrated that ANYmal can execute various climbing maneuvers, walking gaits, as well as a dynamic trot and jump, and has a low energy consumption during locomotion, which results in an autonomy of more than 2 h.
Abstract: This paper provides a system overview of ANYmal, a quadrupedal robot developed for operation in harsh environments. The 30 kg, 0.5 m tall robotic dog was built in a modular way for simple maintenance ...

211 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present the design of the hydraulically actuated quadruped robot HyQ2Max, an evolution of the 80 kg agile and versatile robot HyQ. Compared to HyQ, the new robot needs to be more rugged and more powerful, and to extend the existing locomotion skills with a self-righting capability.
Abstract: This paper presents the design of the hydraulically actuated quadruped robot HyQ2Max. HyQ2Max is an evolution of the 80 kg agile and versatile robot HyQ. Compared to HyQ, the new robot needs to be more rugged and more powerful, and to extend the existing locomotion skills with a self-righting capability. Since the robot's actuation system has an impact on many aspects of the overall design and specifications of the robot (e.g., payload, speed, torque, overall mass, and compactness), this paper pays special attention to the selection and sizing of the joint actuators. To obtain meaningful joint requirements for the new machine, we simulated seven characteristic motions that cover a wide range of required behaviors of an agile rough-terrain robot, including trotting on rough terrain, stair climbing, push recovery, and self-righting. We describe how to use the obtained joint requirements for the selection of the hydraulic actuator types, four-bar linkage parameters, and valve size; poorly sized actuators may lead to limited robot capabilities or higher cost, weight, energy consumption, and cooling requirements. The main contributions of this paper are: 1) a novel design of an agile quadruped robot capable of trotting/crawling over flat/uneven terrain, balancing, and self-righting; 2) a detailed method to find suitable hydraulic cylinder/valve properties and linkage parameters, with a specific focus on optimizing the actuator areas; and 3) to the best of the authors' knowledge, the most complete review of hydraulic quadruped robots.

187 citations


07 Jul 2017
TL;DR: In this paper, a CNN is trained to map observed images to velocities, using domain randomisation to enable generalisation to real-world images without having seen a single real image.
Abstract: End-to-end control for robot manipulation and grasping is emerging as an attractive alternative to traditional pipelined approaches. However, end-to-end methods tend to either be slow to train, exhibit little or no generalisability, or lack the ability to accomplish long-horizon or multi-stage tasks. In this paper, we show how two simple techniques can lead to end-to-end (image to velocity) execution of a multi-stage task, which is analogous to a simple tidying routine, without having seen a single real image. This involves locating, reaching for, and grasping a cube, then locating a basket and dropping the cube inside. To achieve this, robot trajectories are computed in a simulator, to collect a series of control velocities which accomplish the task. Then, a CNN is trained to map observed images to velocities, using domain randomisation to enable generalisation to real world images. Results show that we are able to successfully accomplish the task in the real world with the ability to generalise to novel environments, including those with dynamic lighting conditions, distractor objects, and moving objects, including the basket itself. We believe our approach to be simple, highly scalable, and capable of learning long-horizon tasks that have until now not been shown with the state-of-the-art in end-to-end robot control.

183 citations


Proceedings ArticleDOI
12 Jul 2017
TL;DR: Using the developed concept of discrete control barrier functions, a novel control method is presented to address the problem of navigation of a high-dimensional bipedal robot through environments with moving obstacles that present time-varying safety-critical constraints.
Abstract: In this paper, we extend the concept of control barrier functions, developed initially for continuous-time systems, to the discrete-time domain. We demonstrate safety-critical control for nonlinear discrete-time systems with applications to 3D bipedal robot navigation. In particular, we mathematically analyze two different formulations of control barrier functions, based on their continuous-time counterparts, and demonstrate how these can be applied to discrete-time systems. We show that the resulting formulation is a nonlinear program, in contrast to the quadratic program for continuous-time systems, and that under certain conditions the nonlinear program can be formulated as a quadratically constrained quadratic program. Furthermore, using the developed concept of discrete control barrier functions, we present a novel control method to address the problem of navigation of a high-dimensional bipedal robot through environments with moving obstacles that present time-varying safety-critical constraints.
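The discrete-time condition at the heart of the paper is h(x_{k+1}) >= (1 - γ) h(x_k), enforced at every step. The sketch below illustrates it for a single integrator avoiding a disk obstacle; because h is evaluated at the next state, the exact safety filter is in general a nonlinear program, which this toy version replaces with a grid search over candidate inputs. All dynamics and numbers are invented for illustration.

```python
def h(x):
    """Barrier function: positive outside the unit-disk obstacle (safe set)."""
    return x[0] ** 2 + x[1] ** 2 - 1.0

def step(x, u, dt=0.1):
    """Single-integrator dynamics x_{k+1} = x_k + u * dt."""
    return (x[0] + u[0] * dt, x[1] + u[1] * dt)

def dcbf_filter(x, u_nom, gamma=0.5, dt=0.1):
    """Pick the admissible control closest to u_nom satisfying the
    discrete CBF condition  h(x_{k+1}) >= (1 - gamma) * h(x_k).
    A grid search stands in for the nonlinear program."""
    best, best_cost = None, float("inf")
    grid = [i * 0.25 for i in range(-8, 9)]   # candidate inputs in [-2, 2]
    for ux in grid:
        for uy in grid:
            if h(step(x, (ux, uy), dt)) >= (1 - gamma) * h(x):
                cost = (ux - u_nom[0]) ** 2 + (uy - u_nom[1]) ** 2
                if cost < best_cost:
                    best, best_cost = (ux, uy), cost
    return best

x = (2.0, 0.0)        # current state, outside the obstacle
u_nom = (-2.0, 0.0)   # nominal control heading straight at it
u = dcbf_filter(x, u_nom)
```

Here the nominal input still satisfies the barrier decrease condition, so it passes through unchanged; closer to the obstacle the filter would select a deflected input instead.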

183 citations


Journal ArticleDOI
TL;DR: The control framework is shown to provide stable bounding on the hardware at speeds of up to 6.4 m/s and with a minimum total cost of transport of 0.47, unprecedented accomplishments in terms of efficiency and speed for untethered experimental quadruped machines.
Abstract: This paper presents the design and implementation of a bounding controller for the MIT Cheetah 2 and its experimental results. The paper introduces the architecture of the controller along with the...

169 citations



Journal ArticleDOI
TL;DR: Experimental results demonstrate that the 75-kg quadruped robot is able to walk inside two high-slope V-shaped walls; an achievement that to the authors’ best knowledge has never been presented before.
Abstract: Research into legged robotics is primarily motivated by the prospect of building machines that are able to navigate in challenging and complex environments that are predominantly non-flat. In this context, control of contact forces is fundamental to ensure stable contacts and equilibrium of the robot. In this paper we propose a planning/control framework for quasi-static walking of quadrupedal robots, implemented for a demanding application in which regulation of ground reaction forces is crucial. Experimental results demonstrate that our 75-kg quadruped robot is able to walk inside two high-slope (50°) V-shaped walls; an achievement that to the authors' best knowledge has never been presented before. The robot distributes its weight among the stance legs so as to optimize user-defined criteria. We compute joint torques that result in no foot slippage, fulfillment of the unilateral constraints of the contact forces, and minimization of the actuators' effort. The presented study is an experimental validation of the effectiveness and robustness of QP-based force distribution methods for quasi-static locomotion on challenging terrain.
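The core of such QP-based force distribution fits in a few lines: with only the equality constraint that the stance-foot forces sum to a desired net wrench, the minimum-effort solution is the pseudoinverse; adding the paper's unilateral and friction-cone inequalities is what turns it into a genuine QP. The planar setup and numbers below are illustrative only.

```python
import numpy as np

# Distribute a desired net force among stance feet with minimum effort:
#     min ||f||^2   s.t.   A f = w_des
# where A stacks the per-foot force maps. Without inequality (friction
# cone) constraints, the closed-form solution is f = pinv(A) @ w_des.
A = np.hstack([np.eye(2)] * 3)          # 3 stance feet, planar (fx, fz)
w_des = np.array([0.0, 9.81 * 75.0])    # support a 75 kg robot's weight
f = np.linalg.pinv(A) @ w_des           # minimum-norm force distribution
forces = f.reshape(3, 2)                # per-foot (fx, fz)
```

The minimum-norm solution splits the weight evenly across identical feet; user-defined weighting and contact constraints would bias the distribution, as in the paper.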

Journal ArticleDOI
TL;DR: The `ros_control` framework provides the capability to implement and manage robot controllers with a focus on both realtime performance and sharing of controllers in a robot-agnostic way and implements solutions for controller-lifecycle and hardware resource management.
Abstract: In recent years the Robot Operating System (ROS) has become the 'de facto' standard framework for robotics software development. The `ros_control` framework provides the capability to implement and manage robot controllers with a focus on both _real-time performance_ and _sharing of controllers_ in a robot-agnostic way. The primary motivation for a separate robot-control framework is the lack of a realtime-safe communication layer in ROS. Furthermore, the framework implements solutions for controller lifecycle and hardware resource management, as well as abstractions over hardware interfaces with minimal assumptions about the hardware or operating system. The clear, modular design of `ros_control` makes it ideal for both research and industrial use, and it has indeed seen many such applications to date. The idea of `ros_control` originates from the `pr2_controller_manager` framework specific to the PR2 robot, but `ros_control` is fully robot-agnostic. Controllers expose standard ROS interfaces for out-of-the-box third-party solutions to robotics problems such as manipulation path planning (`MoveIt!`) and autonomous navigation (the ROS navigation stack). Hence, for a robot made up of a mobile base and an arm that supports `ros_control`, no additional code needs to be written; with only a few controller configuration files it is ready to navigate autonomously and plan paths for the arm. `ros_control` also provides several libraries to support writing custom controllers.
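The hardware resource management mentioned above can be pictured as controllers claiming exclusive joint interfaces from a manager. The toy Python model below is not the actual `ros_control` C++ API (all names are invented); it only sketches the conflict check that decides whether a controller may start.

```python
class ControllerManager:
    """Toy model of exclusive hardware-resource management: a controller
    may start only if none of its claimed resources (e.g. joint handles)
    is already held by a running controller."""

    def __init__(self):
        self.running = {}   # controller name -> set of claimed resources

    def start(self, name, resources):
        claimed = set().union(*self.running.values()) if self.running else set()
        if claimed & set(resources):
            return False    # resource conflict: refuse to start
        self.running[name] = set(resources)
        return True

    def stop(self, name):
        self.running.pop(name, None)

cm = ControllerManager()
ok_arm = cm.start("arm_trajectory_controller", ["arm_joint_1", "arm_joint_2"])
ok_base = cm.start("diff_drive_controller", ["wheel_left", "wheel_right"])
conflict = cm.start("arm_gravity_comp", ["arm_joint_1"])   # rejected
```

Two controllers driving disjoint hardware run side by side, while a second claim on `arm_joint_1` is rejected until the first arm controller stops; this is the lifecycle/resource discipline the framework enforces.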

Journal ArticleDOI
18 Jan 2017
TL;DR: This work presents a method that enables resilient formation control for mobile robot teams in the presence of noncooperative (defective or malicious) robots, demonstrates the use of the framework for resilient flocking, and shows simulation results with groups of holonomic mobile robots.
Abstract: We present a method that enables resilient formation control for mobile robot teams in the presence of noncooperative (defective or malicious) robots. Recent results in network science define graph topological properties that guarantee resilience against faults and attacks on individual nodes in static networks. We build on these results to propose a control policy that allows a team of mobile robots to achieve resilient consensus on the direction of motion. Our strategy relies on dynamic connectivity management that makes use of a metric that characterizes the robustness of the communication network topology. Our method distinguishes itself from prior work in that our connectivity management strategy ensures that the network lies above a critical resilience threshold , guaranteeing that the consensus algorithm always converges to a value within the range of the cooperative agents’ initial values. We demonstrate the use of our framework for resilient flocking, and show simulation results with groups of holonomic mobile robots.
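Resilient consensus rules of the kind this paper builds on typically discard extreme neighbor values before averaging (the W-MSR family). The sketch below is an illustrative trimmed-mean update on a complete graph with one malicious node; given sufficient network robustness, the cooperative values provably stay within their initial range.

```python
def wmsr_update(own, neighbors, F=1):
    """W-MSR-style resilient consensus step: discard up to F neighbor
    values strictly above the agent's own value and up to F strictly
    below, then average the remaining values with its own."""
    drop = sorted((v for v in neighbors if v > own), reverse=True)[:F]
    drop += sorted(v for v in neighbors if v < own)[:F]
    kept = list(neighbors)
    for v in drop:
        kept.remove(v)
    vals = kept + [own]
    return sum(vals) / len(vals)

# Complete graph: 4 cooperative agents plus 1 malicious node stuck at 100.
values = [0.0, 1.0, 2.0, 3.0]
for _ in range(60):
    broadcast = values + [100.0]
    values = [wmsr_update(values[i],
                          [v for j, v in enumerate(broadcast) if j != i],
                          F=1)
              for i in range(4)]
```

The cooperative agents converge to a common value inside [0, 3] even though the malicious node broadcasts 100 forever; without the trimming step, plain averaging would be dragged arbitrarily far by the attacker.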

Journal ArticleDOI
TL;DR: A finite-time cooperative controller is explicitly constructed which guarantees that state consensus is achieved in finite time, solving the consensus problem of multiple nonholonomic mobile robots.
Abstract: The consensus problem of multiple nonholonomic mobile robots in the form of a high-order chained structure is considered in this paper. Based on the model features and the finite-time control technique, a finite-time cooperative controller is explicitly constructed which guarantees that state consensus is achieved in finite time. As an application of the proposed results, finite-time formation control of multiple wheeled mobile robots is studied and a finite-time formation control algorithm is proposed. To show the effectiveness of the proposed approach, a simulation example is given.
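The standard ingredient behind finite-time (rather than merely asymptotic) convergence is fractional-power feedback u = -k · sign(e) · |e|^a with 0 < a < 1, which settles in T* = |e0|^(1-a) / (k(1-a)). The scalar sketch below is illustrative only (not the paper's chained-form controller) and contrasts it with plain linear feedback.

```python
def simulate(e0, control, dt=1e-3, steps=2100):
    """Euler-integrate the scalar error dynamics e' = control(e)."""
    e = e0
    for _ in range(steps):
        e += control(e) * dt
    return e

k, a = 1.0, 0.5
sign = lambda e: 1.0 if e > 0 else -1.0 if e < 0 else 0.0

# Fractional-power feedback: settling time T* = |e0|^(1-a)/(k(1-a)) = 2 s here.
finite_time = simulate(1.0, lambda e: -k * abs(e) ** a * sign(e))

# Linear feedback only converges asymptotically (e(t) = e0 * exp(-k t)).
linear = simulate(1.0, lambda e: -k * e)
```

After 2.1 s of simulated time the fractional-power law has already driven the error to (numerically) zero, while the linear law still carries roughly e^{-2.1} ≈ 0.12 of the initial error.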

Proceedings ArticleDOI
15 Nov 2017
TL;DR: A new humanoid robot is introduced that is capable of interacting with a human environment and targets industrial applications; it is equipped with torque sensors to measure joint effort and high-resolution encoders to measure both motor and joint positions.
Abstract: The upcoming generation of humanoid robots will have to be equipped with state-of-the-art technical features along with high industrial quality, but they should also offer the prospect of effective physical human interaction. In this paper we introduce a new humanoid robot capable of interacting with a human environment and targeting industrial applications. Limitations of existing platforms are outlined, drawing on feedback from the DARPA Robotics Challenge and from other teams leading the field in creating new humanoid robots. The resulting robot is able to handle weights of 6 kg with an outstretched arm and has powerful motors to carry out fast movements. Its kinematics have been specially designed for screwing and drilling motions. In order to make interaction with human operators possible, this robot is equipped with torque sensors to measure joint effort and high-resolution encoders to measure both motor and joint positions. The humanoid robotics field has reached a stage where robustness and repeatability are the next watershed. We believe that this robot has the potential to become a powerful tool for the research community to successfully navigate this turning point, as the humanoid robot HRP-2 was in its own time.

Journal ArticleDOI
TL;DR: With the proposed control, the trajectories of the closed-loop system are semiglobally uniformly bounded, which is proved via the Lyapunov stability theorem.
Abstract: In this paper, neural network control strategies based on radial basis functions are designed for biped robots, including balancing and posture control. To deal with system uncertainties, neural networks are used to approximate the unknown model of the robot. Both full state feedback control and output feedback control are considered. With the proposed control, the trajectories of the closed-loop system are semiglobally uniformly bounded, which is proved via the Lyapunov stability theorem. Simulations are also carried out to illustrate the effectiveness of the proposed control.
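The approximation step relies on a radial-basis-function network W^T φ(x) being able to represent the unknown dynamics over a compact set. In the paper the weights are adapted online via Lyapunov-based laws; the sketch below instead fits them offline by least squares on a stand-in scalar function, purely to show the representation. Function, widths, and grid are all invented.

```python
import numpy as np

def rbf(x, centers, width=0.5):
    """Gaussian radial-basis feature matrix phi(x), shape (len(x), len(centers))."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Stand-in for the unknown robot dynamics to be approximated.
f = lambda x: np.sin(x)

centers = np.linspace(-3, 3, 15)            # basis centers over the compact set
x_train = np.linspace(-3, 3, 200)
Phi = rbf(x_train, centers)
W, *_ = np.linalg.lstsq(Phi, f(x_train), rcond=None)   # least-squares weights

x_test = np.linspace(-2.5, 2.5, 50)
err = np.max(np.abs(rbf(x_test, centers) @ W - f(x_test)))
```

Fifteen Gaussians suffice to approximate the stand-in function to well below 1% over the interior of the training interval, which is the property the adaptive controller exploits when compensating uncertainties.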

Journal ArticleDOI
01 Jan 2017
TL;DR: The particle swarm optimization method is employed to optimize the trajectory of each joint so that satisfactory parameter estimates can be obtained, and the estimated inertia parameters are taken as initial values for the RNE-based adaptive control design to achieve improved tracking performance.
Abstract: In this paper, model identification and adaptive control design are performed on the Denavit-Hartenberg model of a humanoid robot. We focus on modeling the 6-degree-of-freedom upper limb of the robot using the recursive Newton-Euler (RNE) formula for the coordinate frame of each joint. To obtain sufficient excitation for modeling of the robot, the particle swarm optimization method has been employed to optimize the trajectory of each joint, such that satisfactory parameter estimates can be obtained. In addition, the estimated inertia parameters are taken as the initial values for the RNE-based adaptive control design to achieve improved tracking performance. Simulation studies have been carried out to verify the result of the identification algorithm and to illustrate the effectiveness of the control design.
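In the paper, particle swarm optimization shapes the excitation trajectories; the sketch below is a generic minimal PSO (standard inertia/cognitive/social form) showing only the mechanics of the method. The test objective, bounds, and coefficients are invented stand-ins, not the paper's trajectory-excitation cost.

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimization: each particle tracks its own
    best position, the swarm tracks a global best, and velocity updates
    blend inertia with random pulls toward both."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
best, best_val = pso(lambda p: sum(x * x for x in p), dim=3)
```

For trajectory excitation, the objective would instead score a candidate joint trajectory's parameters (e.g., by the conditioning of the resulting regressor), but the swarm mechanics are unchanged.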

Journal ArticleDOI
TL;DR: A novel approach is presented for effective online collision avoidance in an augmented environment, where virtual three-dimensional (3D) models of robots and real images of human operators from depth cameras are used for monitoring and collision detection.
Abstract: Establishing safe human–robot collaboration is an essential factor for improving efficiency and flexibility in today's manufacturing environment. Targeting safety in human–robot collaboration, this paper reports a novel approach for effective online collision avoidance in an augmented environment, where virtual three-dimensional (3D) models of robots and real images of human operators from depth cameras are used for monitoring and collision detection. A prototype system is developed and linked to industrial robot controllers for adaptive robot control, without the need for programming by the operators. The result of collision detection triggers one of four safety strategies: the system can alert an operator, stop a robot, move the robot away, or modify the robot's trajectory away from an approaching operator. These strategies can be activated based on the operator's existence and location with respect to the robot. The case study of the research further discusses the possibility of implementing the developed method in realistic applications, for example, collaboration between robots and humans in an assembly line.
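The four reactive strategies can be keyed to operator presence and distance, as in the sketch below. The distance thresholds and ordering here are illustrative placeholders, not the paper's calibrated values.

```python
def safety_strategy(operator_present, distance_m):
    """Select a reactive strategy from operator presence and distance.
    Thresholds are invented for illustration."""
    if not operator_present:
        return "normal"
    if distance_m > 2.0:
        return "alert"               # warn the approaching operator
    if distance_m > 1.0:
        return "modify_trajectory"   # replan the path away from the operator
    if distance_m > 0.5:
        return "move_away"           # retreat while keeping the task alive
    return "stop"                    # operator too close: halt immediately
```

In a deployed system the distance would come from the depth-camera/virtual-model proximity query rather than a scalar argument, but the escalation logic has this shape.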

Journal ArticleDOI
TL;DR: A set of experiments tasked individuals with navigating a virtual maze, using different methods to simulate an evacuation; the results show that a mistake made by a robot causes a person to place significantly less trust in it in later interactions.
Abstract: Robots have the potential to save lives in high-risk situations, such as emergency evacuations. To realize this potential, we must understand how factors such as the robot's performance, the riskiness of the situation, and the evacuee's motivation influence his or her decision to follow a robot. In this paper, we developed a set of experiments that tasked individuals with navigating a virtual maze using different methods to simulate an evacuation. Participants chose whether or not to use the robot for guidance in each of two separate navigation rounds. The robot performed poorly in two of the three conditions. The participant's decision to use the robot and self-reported trust in the robot served as dependent measures. A 53% drop in self-reported trust was found when the robot performed poorly. Self-reports of trust were strongly correlated with the decision to use the robot for guidance (φ(90) = +0.745). We conclude that a mistake made by a robot will cause a person to have a significantly lower level of trust in it in later interactions.
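The reported association is a phi coefficient between two binary variables (self-reported trust, decision to follow the robot). The sketch below computes φ from a 2×2 contingency table; the table entries are invented for illustration and are not the study's data.

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table
        [[a, b],    rows: trusted the robot (yes / no)
         [c, d]]    cols: used the robot for guidance (yes / no)"""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Illustrative counts (not the study's data): 40 trust+use, 5 trust+skip,
# 8 distrust+use, 37 distrust+skip.
phi = phi_coefficient(40, 5, 8, 37)
```

Like a Pearson correlation for binary data, φ ranges from -1 to +1, with +1 meaning trust and the decision to follow coincide perfectly.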

Journal ArticleDOI
TL;DR: The results show that the developed socially aware navigation framework allows a mobile robot to navigate safely, socially, and proactively while guaranteeing human safety and comfort in crowded and dynamic environments.
Abstract: Safe and social navigation is the key to deploying a mobile service robot in a human-centered environment. Widespread acceptability of mobile service robots in daily life is hindered by robot’s inability to navigate in crowded and dynamic human environments in a socially acceptable way that would guarantee human safety and comfort. In this paper, we propose an effective proactive social motion model (PSMM) that enables a mobile service robot to navigate safely and socially in crowded and dynamic environments. The proposed method considers not only human states (position, orientation, motion, field of view, and hand poses) relative to the robot but also social interactive information about human–object and human group interactions. This allows development of the PSMM that consists of elements of an extended social force model and a hybrid reciprocal velocity obstacle technique. The PSMM is then combined with a path planning technique to generate a motion planning system that drives a mobile robot in a socially acceptable manner and produces respectful and polite behaviors akin to human movements. Note to Practitioners —In this paper, we validated the effectiveness and feasibility of the proposed proactive social motion model (PSMM) through both simulation and real-world experiments under the newly proposed human comfortable safety indices. To do that, we first implemented the entire navigation system using the open-source robot operating system. We then installed it in a simulated robot model and conducted experiments in a simulated shopping mall-like environment to verify its effectiveness. We also installed the proposed algorithm on our mobile robot platform and conducted experiments in our office-like laboratory environment. Our results show that the developed socially aware navigation framework allows a mobile robot to navigate safely, socially, and proactively while guaranteeing human safety and comfort in crowded and dynamic environments. 
In this paper, we examined the proposed PSMM with a set of predefined parameters selected based on our empirical experience with the robot mechanism and the selected social environment. In practice, however, a mobile robot might need to adapt to various contextual and cultural situations in different social environments. Thus, it should be equipped with an online adaptive interactive learning mechanism allowing the robot to learn to adjust its parameters automatically to the environment in which it is embedded. Using machine learning techniques, e.g., inverse reinforcement learning [1], to optimize the parameter set for the PSMM could be a promising research direction to improve the adaptability of mobile service robots in different social environments. In the future, we will evaluate the proposed framework on a wider variety of scenarios, particularly those with different social interaction situations and dynamic environments. Furthermore, the various kinds of social cues and signals introduced in [2] and [3] will be applied to extend the proposed framework to more complicated social situations and contexts. Last but not least, we will investigate different machine learning techniques and incorporate them into the PSMM in order to allow the robot to automatically adapt to diverse social environments.

Book ChapterDOI
01 Jan 2017
TL;DR: A probabilistic model of the environment is formulated, under which recursive Bayesian filters are used to estimate the environment's events and hazards online; mutual information is shown to combine the competing incentives of avoiding failure and collecting informative measurements under a common objective.
Abstract: This paper addresses the problem of deploying a network of robots to gather information in an environment, where the environment is hazardous to the robots. This may mean that there are adversarial agents in the environment trying to disable the robots, or that some regions of the environment tend to make the robots fail, for example due to radiation, fire, adverse weather, or caustic chemicals. A probabilistic model of the environment is formulated, under which recursive Bayesian filters are used to estimate the environment events and hazards online. The robots must control their positions both to avoid sensor failures and to provide useful sensor information by following the analytical gradient of mutual information computed using these online estimates. Mutual information is shown to combine the competing incentives of avoiding failure and collecting informative measurements under a common objective. Simulations demonstrate the performance of the algorithm.
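The two ingredients, a recursive Bayesian estimate of a hazard and the mutual information between that state and a noisy sensor, can be sketched for the simplest binary case. The sensor rates below are invented; note how mutual information is largest when the prior is uncertain, which is what lets it trade informativeness against risk in a single objective.

```python
import math

def bayes_update(prior, lik_true, lik_false):
    """One recursive Bayes step for a binary event given one observation."""
    num = lik_true * prior
    return num / (num + lik_false * (1 - prior))

def mutual_information(p, tpr, fpr):
    """I(X; Z) in bits for binary state X ~ Bernoulli(p) and a binary sensor Z
    with hit rate tpr = P(z=1|x=1) and false-alarm rate fpr = P(z=1|x=0).
    Assumes non-degenerate rates so every P(z) is positive."""
    pz1 = tpr * p + fpr * (1 - p)
    mi = 0.0
    for px, pz1x in ((p, tpr), (1 - p, fpr)):
        for pzx, pz in ((pz1x, pz1), (1 - pz1x, 1 - pz1)):
            joint = px * pzx
            if joint > 0:
                mi += joint * math.log2(pzx / pz)
    return mi

# A detection (z=1) from a sensor with a 90% hit rate and 10% false alarms
# sharpens a 50/50 hazard belief to 0.9.
post = bayes_update(0.5, 0.9, 0.1)
```

A robot maximizing this quantity therefore prefers measurements about uncertain regions and gains little from re-sensing regions it already believes are (un)hazardous.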

Journal ArticleDOI
01 Jan 2017
TL;DR: This letter shows an extension of the pattern generator that directly considers the avoidance of convex obstacles and uses the whole-body dynamics to correct the center of mass trajectory of the underlying simplified model.
Abstract: The contribution of this work is to show that real-time nonlinear model predictive control (NMPC) can be implemented on position-controlled humanoid robots. Following the idea of "walking without thinking," we propose a walking pattern generator that simultaneously takes into account the position and orientation of the feet. A requirement for application in real-world scenarios is the avoidance of obstacles; therefore, this letter shows an extension of the pattern generator that directly considers the avoidance of convex obstacles. The algorithm uses the whole-body dynamics to correct the center-of-mass trajectory of the underlying simplified model. The pattern generator runs in real time on the embedded hardware of the humanoid robot HRP-2, and experiments demonstrate the increase in performance with the correction.

Journal ArticleDOI
TL;DR: The proposed adaptive controller only requires image information from an uncalibrated perspective camera mounted at any position and orientation (attitude) on the follower robot, and does not depend on relative position measurements or communication between the leader and the follower.
Abstract: This paper focuses on the problem of vision-based leader–follower formation control of mobile robots. The proposed adaptive controller requires only the image information from an uncalibrated perspective camera mounted at any position and orientation (attitude) on the follower robot. Furthermore, the approach does not depend on relative position measurements or communication between the leader and the follower. First, a new real-time observer is developed to estimate the unknown intrinsic and extrinsic camera parameters as well as the unknown coefficients of the plane on which the feature point moves relative to the camera frame. Second, the Lyapunov method is employed to prove the stability of the closed-loop system, where it is shown that convergence of the image error is guaranteed. Finally, the performance of the approach is demonstrated through physical experiments.
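For contrast with the uncalibrated setting above, the classical calibrated case can be sketched with the standard image-based visual servoing law for a single point feature. The known feature depth Z and calibrated normalized image coordinates assumed here are exactly what the paper's adaptive observer avoids requiring; gains and values are illustrative.

```python
import numpy as np

# Classical image-based visual servoing (IBVS) for one point feature,
# assuming calibrated normalized coordinates and known depth Z.

def interaction_matrix(x, y, Z):
    """2x6 matrix L such that s_dot = L @ v for camera twist v."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_twist(s, s_star, Z, gain=0.5):
    """Proportional law v = -gain * pinv(L) @ (s - s_star)."""
    L = interaction_matrix(s[0], s[1], Z)
    return -gain * np.linalg.pinv(L) @ (s - s_star)

# Kinematic simulation: integrate s_dot = L v. The image error contracts
# geometrically because L @ pinv(L) is the identity for this full-row-rank L.
s, s_star, Z, dt = np.array([0.3, -0.2]), np.array([0.0, 0.0]), 1.0, 0.1
for _ in range(100):
    v = ibvs_twist(s, s_star, Z)
    s = s + dt * interaction_matrix(s[0], s[1], Z) @ v
err = np.linalg.norm(s - s_star)
```

In the paper, the entries of this interaction matrix depend on the unknown camera parameters and plane coefficients, which the real-time observer estimates online.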

Journal ArticleDOI
10 Jan 2017
TL;DR: This work proposes active exploration of the environment, where the flying robot chooses regions to map in a way that optimizes the overall response time of the system, which is the combined time for the air and ground robots to execute their missions.
Abstract: We address the problem of planning a path for a ground robot through unknown terrain, using observations from a flying robot. In search and rescue missions, which are our target scenarios, the time from arrival at the disaster site to the delivery of aid is critically important. Previous works required exhaustive exploration before path planning, which is time-consuming but eventually leads to an optimal path for the ground robot. Instead, we propose active exploration of the environment, where the flying robot chooses regions to map in a way that optimizes the overall response time of the system, which is the combined time for the air and ground robots to execute their missions. In our approach, we estimate terrain classes throughout our terrain map, and we also add elevation information in areas where the active exploration algorithm has chosen to perform 3-D reconstruction. This terrain information is used to estimate feasible and efficient paths for the ground robot. By exploring the environment actively, we achieve superior response times compared to both exhaustive and greedy exploration strategies. We demonstrate the performance and capabilities of the proposed system in simulated and real-world outdoor experiments. To the best of our knowledge, this is the first work to address ground robot path planning using active aerial exploration.

Journal ArticleDOI
07 Feb 2017
TL;DR: This letter presents a trajectory optimization framework for whole-body motion planning through contacts, and demonstrates how the proposed approach can be applied to automatically discover different gaits and dynamic motions on a quadruped robot.
Abstract: In this letter, we present a trajectory optimization framework for whole-body motion planning through contacts. We demonstrate how the proposed approach can be applied to automatically discover different gaits and dynamic motions on a quadruped robot. In contrast to most previous methods, we do not prespecify contact switches, timings, points, or gait patterns; instead, they are a direct outcome of the optimization. Furthermore, we optimize over the entire dynamics of the robot, which enables the optimizer to fully leverage the capabilities of the robot. To illustrate the spectrum of achievable motions, we show eight different tasks, which would require very different control structures when solved with state-of-the-art methods. Using our trajectory optimization approach, we solve each task with a simple, high-level cost function and without any changes in the control structure. Furthermore, we fully integrate our approach with the robot's control and estimation framework, such that we are able to run the optimization online. Through several hardware experiments, we show that the optimized trajectories and control inputs can be directly applied to physical systems.

Posted Content
TL;DR: This chapter provides an overview of robot autonomy in commercial use and in research, and presents some of the challenges faced in developing autonomous surgical robots.
Abstract: Autonomous surgery involves having surgical tasks performed by a robot operating under its own control, with partial or no human involvement. Automation in surgery offers several important advantages, including increased precision of care due to sub-millimeter robot control, real-time use of biosignals for interventional care, improved surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also offer new capabilities in interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

Journal ArticleDOI
17 Apr 2017
TL;DR: A novel approach integrates online information about the human motor function and manipulability properties into the hybrid controller of the assistive robot; through this human-in-the-loop framework, the robot can achieve enhanced physical human–robot interaction performance and deliver an appropriate level of assistance to the human operator.
Abstract: This paper aims to improve the interaction and coordination between the human and the robot in cooperative execution of complex, powerful, and dynamic tasks. We propose a novel approach that integrates online information about the human motor function and manipulability properties into the hybrid controller of the assistive robot. Through this human-in-the-loop framework, the robot can adapt to the human motor behavior and provide the appropriate assistive response in different phases of the cooperative task. We experimentally evaluate the proposed approach in two human–robot co-manipulation tasks that require specific complementary behavior from the two agents. Results suggest that the proposed technique, which relies on a minimal degree of task-level pre-programming, can achieve enhanced physical human–robot interaction performance and deliver an appropriate level of assistance to the human operator.

Journal ArticleDOI
17 Jan 2017
TL;DR: This paper addresses the development of formations that enable resilience, that is, the ability to achieve consensus and to cooperate in the presence of malicious or malfunctioning robots, using the notion of robust graphs to build resilient teams.
Abstract: Cooperative control algorithms for robot teams typically assume the ability to communicate reliably and do not consider the possibility of a malicious or malfunctioning robot that may communicate false information or take wrong actions. This paper addresses the development of formations that enable resilience, that is, the ability to achieve consensus and to cooperate in the presence of malicious or malfunctioning robots. Specifically, we use the notion of robust graphs to build resilient teams, and focus on the problem of designing robot formations with communication graphs (each edge models a bidirectional communication link) that are robust. We present algorithms to build robust graphs. Given a set of robots and the maximum number of malicious or malfunctioning robots, we are able to 1) state whether it is possible to build a resilient team; 2) say what the proximity relationships that enable communication ought to be; 3) construct elemental resilient graphs; and 4) develop a framework for composing resilient teams into larger resilient teams. We illustrate these algorithms by constructing resilient robot formations in the plane.
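The graph-robustness property at the center of this work can be checked directly for the small elemental graphs the paper composes. A minimal sketch, using the standard definition from the resilient-consensus literature: a graph is r-robust if for every pair of nonempty disjoint vertex subsets, at least one subset contains a node with at least r neighbors outside that subset. The brute-force enumeration below is exponential, so it is only practical for small graphs.

```python
from itertools import combinations

def is_r_robust(adj, r):
    """adj: dict mapping node -> set of neighbors (undirected graph)."""
    nodes = list(adj)
    n = len(nodes)
    # Enumerate every pair of nonempty, disjoint subsets S1, S2.
    for size1 in range(1, n):
        for s1 in combinations(nodes, size1):
            s1 = set(s1)
            rest = [v for v in nodes if v not in s1]
            for size2 in range(1, len(rest) + 1):
                for s2 in combinations(rest, size2):
                    s2 = set(s2)
                    # Some node in S1 or S2 must have >= r neighbors
                    # outside its own subset.
                    ok1 = any(len(adj[v] - s1) >= r for v in s1)
                    ok2 = any(len(adj[v] - s2) >= r for v in s2)
                    if not (ok1 or ok2):
                        return False
    return True

def robustness(adj):
    """Largest r for which the graph is r-robust."""
    r = 0
    while is_r_robust(adj, r + 1):
        r += 1
    return r

# The complete graph K4 is 2-robust; the 4-cycle is only 1-robust.
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
```

Higher robustness tolerates more adversaries: under W-MSR-style resilient consensus, a (2F+1)-robust communication graph can withstand up to F malicious robots, which is why the formation-design problem targets robust graphs rather than merely connected ones.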

Journal ArticleDOI
10 Sep 2017 - Sensors
TL;DR: Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.
Abstract: In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers to simplify assembly and reconfiguration. Each modularized wheel is mounted on a vertical suspension mechanism, which ensures stable motion and keeps the distances between the four wheels fixed. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. To enable the mobile robot to navigate in an unknown semi-structured indoor environment, data from a Kinect visual sensor and four wheel encoders are fused to localize the robot using an extended Kalman filter. Finally, the mobile robot was integrated into an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in indoor environments and industrial fields.
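The kinematics that make a four-Mecanum-wheel base omni-directional can be sketched as follows. The wheel radius and geometry are illustrative assumptions, not the paper's robot parameters; the roller angle is the common 45-degree "X" configuration.

```python
import numpy as np

R = 0.05            # wheel radius [m] (assumed)
L, W = 0.20, 0.15   # half wheelbase and half track [m] (assumed)

def wheel_speeds(vx, vy, wz):
    """Inverse kinematics: body twist (vx forward, vy left, wz yaw)
    to wheel angular rates, ordered FL, FR, RL, RR."""
    k = L + W
    return np.array([
        vx - vy - k * wz,   # front-left
        vx + vy + k * wz,   # front-right
        vx + vy - k * wz,   # rear-left
        vx - vy + k * wz,   # rear-right
    ]) / R

def body_twist(w):
    """Forward kinematics: recover the body twist from wheel rates."""
    fl, fr, rl, rr = w
    vx = R * (fl + fr + rl + rr) / 4
    vy = R * (-fl + fr + rl - rr) / 4
    wz = R * (-fl + fr - rl + rr) / (4 * (L + W))
    return vx, vy, wz

# Round trip: body twist -> wheel speeds -> body twist.
vx, vy, wz = body_twist(wheel_speeds(0.3, 0.1, 0.2))
```

A pure sideways command (vx = 0, vy > 0) spins the left and right wheel pairs in opposite senses, which is the behavior the rollers convert into lateral motion; the forward-kinematics map is also what wheel odometry feeds into the extended Kalman filter alongside the Kinect measurements.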

Journal ArticleDOI
TL;DR: It is demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction.
Abstract: Reinforcement learning (RL) enables a robot to learn an optimal behavioral strategy in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used an error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as intrinsically generated implicit feedback (reward) for RL. Initially, we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). We demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human–robot interaction. We call our approach intrinsic interactive RL.
</abstract>
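A toy sketch of why such noisy implicit feedback suffices: a bandit-style learner can recover a gesture-to-action mapping from binary rewards that are only 90% reliable, mimicking the paper's single-trial ErrP detection accuracy. The counting learner, epsilon-greedy exploration, and problem sizes below are illustrative assumptions, not the authors' algorithm.

```python
import random

random.seed(0)
N_GESTURES = N_ACTIONS = 5
true_map = {g: g for g in range(N_GESTURES)}   # ground-truth mapping
BACC = 0.90                                     # feedback (ErrP) accuracy

# value[g][a]: running score that action a is correct for gesture g.
value = [[0.0] * N_ACTIONS for _ in range(N_GESTURES)]

def noisy_feedback(g, a):
    """+1 if the action looks correct to the decoder, -1 otherwise;
    the sign is flipped with probability 1 - BACC."""
    reward = 1 if true_map[g] == a else -1
    return -reward if random.random() > BACC else reward

for trial in range(2000):
    g = random.randrange(N_GESTURES)
    # Epsilon-greedy action selection.
    if random.random() < 0.2:
        a = random.randrange(N_ACTIONS)
    else:
        a = max(range(N_ACTIONS), key=lambda k: value[g][k])
    value[g][a] += noisy_feedback(g, a)

learned = {g: max(range(N_ACTIONS), key=lambda k: value[g][k])
           for g in range(N_GESTURES)}
```

Because wrong actions drift negative in expectation (-0.8 per trial at 90% accuracy) and the correct action drifts positive, the noise averages out and the learned mapping matches the ground truth well before the trial budget is exhausted.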