
Showing papers on "Robot published in 2016"


Journal ArticleDOI
25 Aug 2016-Nature
TL;DR: The untethered operation of a robot composed solely of soft materials is reported; the robot autonomously regulates fluid flow and, hence, the catalytic decomposition of an on-board monopropellant fuel supply.
Abstract: An untethered, entirely soft robot is designed to operate autonomously by combining microfluidic logic and hydrogen peroxide as an on-board fuel supply. Soft robots have so far necessarily included some 'hard' or metallic elements, in particular batteries or wiring connecting them to an external power source, and such tethering places limits on their autonomy. Now Jennifer Lewis and colleagues have combined a 3D-printed soft polymeric robot with microfluidic logic and hydrogen peroxide as an on-board fuel to produce an eight-armed robot — an 'octobot' — that actuates its arms without the incorporation of any hard structures. The hydrogen peroxide decomposes in the presence of a platinum catalyst to produce oxygen and a volumetric expansion that fills bladders embedded within the arms of the octobot. The design of the fuel reservoirs, microfluidic channels and vents to release the gas means that two sets of arms actuate cyclically. Soft robots possess many attributes that are difficult, if not impossible, to achieve with conventional robots composed of rigid materials1,2. Yet, despite recent advances, soft robots must still be tethered to hard robotic control systems and power sources3,4,5,6,7,8,9,10. New strategies for creating completely soft robots, including soft analogues of these crucial components, are needed to realize their full potential. Here we report the untethered operation of a robot composed solely of soft materials. The robot is controlled with microfluidic logic11 that autonomously regulates fluid flow and, hence, catalytic decomposition of an on-board monopropellant fuel supply. Gas generated from the fuel decomposition inflates fluidic networks downstream of the reaction sites, resulting in actuation12.
The body and microfluidic logic of the robot are fabricated using moulding and soft lithography, respectively, and the pneumatic actuator networks, on-board fuel reservoirs and catalytic reaction chambers needed for movement are patterned within the body via a multi-material, embedded 3D printing technique13,14. The fluidic and elastomeric architectures required for function span several orders of magnitude from the microscale to the macroscale. Our integrated design and rapid fabrication approach enables the programmable assembly of multiple materials within this architecture, laying the foundation for completely soft, autonomous robots.

1,491 citations


Journal ArticleDOI
06 Dec 2016
TL;DR: The challenge ahead for soft robotics is to further develop the abilities for robots to grow, evolve, self-heal, develop, and biodegrade, which are the ways that robots can adapt their morphology to the environment.
Abstract: The proliferation of soft robotics research worldwide has brought substantial achievements in terms of principles, models, technologies, techniques, and prototypes of soft robots. Such achievements are reviewed here in terms of the abilities that they provide robots that were not possible before. An analysis of the evolution of this field shows how, after a few pioneering works in the years 2009 to 2012, breakthrough results were obtained by taking seminal technological and scientific challenges related to soft robotics from actuation and sensing to modeling and control. Further progress in soft robotics research has produced achievements that are important in terms of robot abilities-that is, from the viewpoint of what robots can do today thanks to the soft robotics approach. Abilities such as squeezing, stretching, climbing, growing, and morphing would not be possible with an approach based only on rigid links. The challenge ahead for soft robotics is to further develop the abilities for robots to grow, evolve, self-heal, develop, and biodegrade, which are the ways that robots can adapt their morphology to the environment.

831 citations


Journal ArticleDOI
TL;DR: This paper describes a collection of optimization algorithms for achieving dynamic planning, control, and state estimation for a bipedal robot designed to operate reliably in complex environments and presents a state estimator formulation that permits highly precise execution of extended walking plans over non-flat terrain.
Abstract: This paper describes a collection of optimization algorithms for achieving dynamic planning, control, and state estimation for a bipedal robot designed to operate reliably in complex environments. To make challenging locomotion tasks tractable, we describe several novel applications of convex, mixed-integer, and sparse nonlinear optimization to problems ranging from footstep placement to whole-body planning and control. We also present a state estimator formulation that, when combined with our walking controller, permits highly precise execution of extended walking plans over non-flat terrain. We describe our complete system integration and experiments carried out on Atlas, a full-size hydraulic humanoid robot built by Boston Dynamics, Inc.

715 citations


Journal ArticleDOI
TL;DR: This survey concentrates on heuristic-based algorithms in robot path planning, comprising neural network, fuzzy logic, nature-inspired, and hybrid algorithms.

450 citations


Journal ArticleDOI
Abstract: Although the concept of industrial cobots dates back to 1999, most present-day hybrid human-machine assembly systems are merely weight compensators. Here, we present results on the development of a collaborative human-robot manufacturing cell for homokinetic joint assembly. The robot alternates between active and passive behaviours during assembly, to lighten the burden on the operator in the former case, and to comply with his/her needs in the latter. Our approach can successfully manage direct physical contact between robot and human, and between robot and environment. Furthermore, it can be applied to standard position-controlled (rather than torque-controlled) robots, which are common in industry. The approach is validated in a series of assembly experiments. The human workload is reduced, diminishing the risk of strain injuries. Moreover, a complete risk analysis indicates that the proposed setup is compatible with the safety standards and could be certified.

449 citations


Proceedings ArticleDOI
16 May 2016
TL;DR: This work presents an approach that automates state-space construction by learning a state representation directly from camera images, using a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects.
Abstract: Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera images. Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. We demonstrate our method with a PR2 robot on tasks that include pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. In each task, our method automatically learns to track task-relevant objects and manipulate their configuration with the robot's arm.
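The feature points in this kind of approach come from a spatial soft-argmax operation, which turns each convolutional feature map into a softmax-weighted expected image coordinate. The sketch below is a minimal pure-Python illustration of that operation only; the function name, normalization to [-1, 1], and temperature parameter are illustrative choices, not taken from the paper.

```python
import math

def spatial_soft_argmax(fmap, temperature=1.0):
    """Soft-argmax over a 2D feature map: softmax-weighted expected (x, y).

    Coordinates are normalized to [-1, 1] in each axis, so the output is a
    differentiable "feature point" locating the map's dominant activation.
    """
    h, w = len(fmap), len(fmap[0])
    # Softmax over all activations (subtract the max for numerical stability).
    m = max(max(row) for row in fmap)
    exps = [[math.exp((v - m) / temperature) for v in row] for row in fmap]
    z = sum(sum(row) for row in exps)
    ex = ey = 0.0
    for i in range(h):
        for j in range(w):
            p = exps[i][j] / z
            # Map pixel indices to [-1, 1].
            x = 2.0 * j / (w - 1) - 1.0
            y = 2.0 * i / (h - 1) - 1.0
            ex += p * x
            ey += p * y
    return ex, ey
```

A low temperature makes the output approach the hard argmax; a higher one averages over nearby activations, which is what lets the learned feature points move smoothly as objects move.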

440 citations


Journal ArticleDOI
TL;DR: An extensive set of experiments suggests that the technique outperforms state-of-the-art methods to model the behavior of pedestrians, which also makes it applicable to fields such as behavioral science or computer graphics.
Abstract: Mobile robots are increasingly populating our human environments. To interact with humans in a socially compliant way, these robots need to understand and comply with mutually accepted rules. In this paper, we present a novel approach to model the cooperative navigation behavior of humans. We model their behavior in terms of a mixture distribution that captures both the discrete navigation decisions, such as going left or going right, as well as the natural variance of human trajectories. Our approach learns the model parameters of this distribution that match, in expectation, the observed behavior in terms of user-defined features. To compute the feature expectations over the resulting high-dimensional continuous distributions, we use Hamiltonian Markov chain Monte Carlo sampling. Furthermore, we rely on a Voronoi graph of the environment to efficiently explore the space of trajectories from the robot's current position to its target position. Using the proposed model, our method is able to imitate the behavior of pedestrians or, alternatively, to replicate a specific behavior that was taught by tele-operation in the target environment of the robot. We implemented our approach on a real mobile robot and demonstrated that it is able to successfully navigate in an office environment in the presence of humans. An extensive set of experiments suggests that our technique outperforms state-of-the-art methods to model the behavior of pedestrians, which also makes it applicable to fields such as behavioral science or computer graphics.

420 citations


Posted Content
TL;DR: This work proposes using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world, and presents an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap.
Abstract: Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.

418 citations


Journal ArticleDOI
TL;DR: The core challenge of soft robotics research is, in fact, the variability and controllability of such deformability and compliance.
Abstract: The need for building robots with soft materials emerged recently from considerations of the limitations of service robots in negotiating natural environments, from observation of the role of compliance in animals and plants [1], and even from the role attributed to the physical body in movement control and intelligence, in the so-called embodied intelligence or morphological computation paradigm [2]-[4]. The widespread adoption of soft robotics relies on numerous investigations of diverse materials and technologies for actuation and sensing, and on research of control techniques, all of which can serve the purpose of building robots with high deformability and compliance. But the core challenge of soft robotics research is, in fact, the variability and controllability of such deformability and compliance.

417 citations


Journal ArticleDOI
TL;DR: The achievements and shortcomings of recent technology in these key areas are evaluated, and this paper concludes with a discussion on the potential impacts of soft manipulators on industry and society.
Abstract: Soft robotics is a growing area of research which utilises the compliance and adaptability of soft structures to develop highly adaptive robotics for soft interactions. One area in which soft robotics has the ability to make significant impact is in the development of soft grippers and manipulators. With an increased requirement for automation, robotic systems are required to perform tasks in unstructured and poorly defined environments, conditions to which conventional rigid robots are not well suited. This requires a paradigm shift in the methods and materials used to develop robots, such that they can adapt to and work safely in human environments. One solution to this is soft robotics, which enables soft interactions with the surroundings whilst maintaining the ability to apply significant force. This review paper assesses the current materials and methods, actuation methods and sensors which are used in the development of soft manipulators. The achievements and shortcomings of recent technology in these key areas are evaluated, and the paper concludes with a discussion on the potential impacts of soft manipulators on industry and society.

388 citations


Journal ArticleDOI
TL;DR: This article reviews the research effort, developments and innovation in agricultural robots for field operations, along with the associated concepts, principles, limitations and gaps. The authors focus on: fusing complementary sensors for adequate localisation and sensing abilities; developing simple manipulators for each agricultural task; developing path planning, navigation and guidance algorithms suited to environments other than open fields known a priori; and integrating human operators into this complex and highly dynamic situation.

Journal ArticleDOI
TL;DR: The main focus is on studies characterized by distributed control, simplicity of individual robots and locality of sensing and communication, with distributed algorithms shown to enable cooperation among agents.

Journal ArticleDOI
TL;DR: This tutorial aims at reviewing existing approaches for task-adaptive motion encoding and narrows down the scope to the special case of task parameters that take the form of frames of reference, coordinate systems or basis functions, which are most commonly encountered in service robotics.
Abstract: Task-parameterized models of movements aim at automatically adapting movements to new situations encountered by a robot. The task parameters can, for example, take the form of positions of objects in the environment or landmark points that the robot should pass through. This tutorial aims at reviewing existing approaches for task-adaptive motion encoding. It then narrows down the scope to the special case of task parameters that take the form of frames of reference, coordinate systems or basis functions, which are most commonly encountered in service robotics. Each section of the paper is accompanied by source codes designed as simple didactic examples implemented in Matlab with full compatibility with GNU Octave, closely following the notation and equations of the article. It also presents ongoing work and further challenges that remain to be addressed, with examples provided in simulation and on a real robot (transfer of manipulation behaviors to the Baxter bimanual robot). The repository for the accompanying source codes is available at http://www.idiap.ch/software/pbdlib/.
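When the task parameters are frames of reference, reproduction in such models typically combines each frame's local Gaussian prediction (mapped into the world frame) by a precision-weighted product, so that confident frames dominate the result. The following is a hedged 1-D sketch of that product; the function name and numbers are illustrative, and the actual PbDlib code operates on full means and covariance matrices rather than scalars.

```python
def gaussian_product(means, variances):
    """Precision-weighted product of 1-D Gaussians (one per task frame).

    The resulting mean trades off each frame's prediction according to its
    inverse variance, which is the core operation behind task-parameterized
    model reproduction: precise frames pull the result toward themselves.
    """
    prec = [1.0 / v for v in variances]
    var = 1.0 / sum(prec)
    mean = var * sum(p * m for p, m in zip(prec, means))
    return mean, var

# Two frames: a confident prediction near 1.0 and an uncertain one near 3.0.
# The product lands close to the confident frame's mean.
mu, var = gaussian_product([1.0, 3.0], [0.1, 1.0])
```

Moving an object changes the frame transforms, which changes the per-frame world-space Gaussians, and the product above then yields an adapted trajectory point without relearning the model.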

Proceedings ArticleDOI
TL;DR: In this paper, the authors presented a robust sound source localization method in three-dimensional space using an array of 8 microphones, which can localize in real time different types of sound sources over a range of 3 meters and with a precision of 3 degrees.
Abstract: The hearing sense on a mobile robot is important because it is omnidirectional and it does not require direct line-of-sight with the sound source. Such capabilities can nicely complement vision to help localize a person or an interesting event in the environment. To do so, the robot auditory system must be able to work in noisy, unknown and diverse environmental conditions. In this paper, we present a robust sound source localization method in three-dimensional space using an array of 8 microphones. The method is based on time delay of arrival estimation. Results show that a mobile robot can localize in real time different types of sound sources over a range of 3 meters and with a precision of 3 degrees.
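Time-delay-of-arrival estimation can be illustrated for a single microphone pair: find the lag that maximizes the cross-correlation between the two signals, then convert that delay to a far-field bearing. The sketch below is a simplified pure-Python version of that idea; the 8-microphone system in the paper combines many such pairs in 3D, and the function names, sample rate and spacing here are illustrative assumptions.

```python
import math

def estimate_delay(x, y, max_lag):
    """Estimate the delay (in samples) of signal y relative to x by
    maximizing the cross-correlation over integer lags in [-max_lag, max_lag]."""
    best_lag, best_val = 0, float("-inf")
    n = len(x)
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += x[i] * y[j]
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

def bearing_deg(delay_samples, fs, mic_spacing, c=343.0):
    """Far-field bearing (degrees) from the delay between one mic pair,
    given sample rate fs (Hz), spacing (m) and speed of sound c (m/s)."""
    tdoa = delay_samples / fs
    # Clamp to the physically valid range before taking the arcsine.
    arg = max(-1.0, min(1.0, c * tdoa / mic_spacing))
    return math.degrees(math.asin(arg))
```

A single pair only resolves a cone of directions; using several non-collinear pairs, as the paper's array does, disambiguates the full 3D direction.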

Journal ArticleDOI
TL;DR: This paper presents the basics of swarm robotics and introduces HSI from the perspective of a human operator by discussing the cognitive complexity of solving tasks with swarm systems and identifies the core concepts needed to design a human-swarm system.
Abstract: Recent advances in technology are delivering robots of reduced size and cost. A natural outgrowth of these advances are systems comprised of large numbers of robots that collaborate autonomously in diverse applications. Research on effective autonomous control of such systems, commonly called swarms, has increased dramatically in recent years and received attention from many domains, such as bioinspired robotics and control theory. These kinds of distributed systems present novel challenges for the effective integration of human supervisors, operators, and teammates that are only beginning to be addressed. This paper is the first survey of human–swarm interaction (HSI) and identifies the core concepts needed to design a human–swarm system. We first present the basics of swarm robotics. Then, we introduce HSI from the perspective of a human operator by discussing the cognitive complexity of solving tasks with swarm systems. Next, we introduce the interface between swarm and operator and identify challenges and solutions relating to human–swarm communication, state estimation and visualization, and human control of swarms. For the latter, we develop a taxonomy of control methods that enable operators to control swarms effectively. Finally, we synthesize the results to highlight remaining challenges, unanswered questions, and open problems for HSI, as well as how to address them in future works.

Journal ArticleDOI
TL;DR: This paper proposes a kinematic control strategy which enforces safety, while maintaining the maximum level of productivity of the robot.
Abstract: New paradigms in industrial robotics no longer require physical separation between robotic manipulators and humans. Moreover, in order to optimize production, humans and robots are expected to collaborate to some extent. In this scenario, involving a shared environment between humans and robots, common motion generation algorithms might turn out to be inadequate for this purpose.

Book ChapterDOI
01 Jan 2016
TL;DR: A software package, robot_localization, for the robot operating system (ROS), which can support an unlimited number of inputs from multiple sensor types, and allows users to customize which sensor data fields are fused with the current state estimate.
Abstract: Accurate state estimation for a mobile robot often requires the fusion of data from multiple sensors. Software that performs sensor fusion should therefore support the inclusion of a wide array of heterogeneous sensors. This paper presents a software package, robot_localization, for the robot operating system (ROS). The package currently contains an implementation of an extended Kalman filter (EKF). It can support an unlimited number of inputs from multiple sensor types, and allows users to customize which sensor data fields are fused with the current state estimate. In this work, we motivate our design decisions, discuss implementation details, and provide results from real-world tests.
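The predict/update cycle at the core of such a filter can be illustrated with a toy 1-D constant-velocity Kalman filter that fuses position measurements from two sensors with different noise levels. This is only an analogue: robot_localization's actual EKF tracks a 15-dimensional 3D state with configurable per-sensor field fusion, and the class below is a hypothetical simplification.

```python
class SimpleKF:
    """Toy 1-D constant-velocity Kalman filter: state is [position, velocity].

    Illustrates fusing heterogeneous sensors (different measurement noise r)
    into one state estimate via repeated predict/update steps.
    """
    def __init__(self):
        self.x = [0.0, 0.0]                # state estimate [pos, vel]
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q = 0.01                      # process noise (per axis)

    def predict(self, dt):
        px, v = self.x
        self.x = [px + v * dt, v]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P <- F P F^T + Q with F = [[1, dt], [0, 1]], Q = diag(q, q)
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]

    def update_position(self, z, r):
        # H = [1, 0]: this sensor measures position with variance r.
        y = z - self.x[0]
        s = self.P[0][0] + r
        k0 = self.P[0][0] / s
        k1 = self.P[1][0] / s
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P <- (I - K H) P
        self.P = [
            [(1 - k0) * p00, (1 - k0) * p01],
            [p10 - k1 * p00, p11 - k1 * p01],
        ]
```

Each sensor contributes through its own `update_position` call weighted by its noise `r`; this mirrors, in miniature, how the package lets users choose which sensor data fields are fused with the current state estimate.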

Journal ArticleDOI
TL;DR: It is shown how a relatively small set of skills is derived from current factory worker instructions, how these can be transferred to industrial mobile manipulators, and how this approach can enable non-experts to utilize advanced robotic systems.
Abstract: Due to a general shift in manufacturing paradigm from mass production towards mass customization, reconfigurable automation technologies, such as robots, are required. However, current industrial robot solutions are notoriously difficult to program, leading to high changeover times when new products are introduced by manufacturers. In order to compete on global markets, the factories of tomorrow need complete production lines, including automation technologies that can effortlessly be reconfigured or repurposed, when the need arises. In this paper we present the concept of general, self-asserting robot skills for manufacturing. We show how a relatively small set of skills is derived from current factory worker instructions, and how these can be transferred to industrial mobile manipulators. General robot skills can not only be implemented on these robots, but also be intuitively concatenated to program the robots to perform a variety of tasks, through the use of simple task-level programming methods. We demonstrate various approaches to this, extensively tested with several people inexperienced in robotics. We validate our findings through several deployments of the complete robot system in running production facilities at an industrial partner. It follows from these experiments that the use of robot skills, and the associated task-level programming framework, is a viable solution to introducing robots that can intuitively and on the fly be programmed to perform new tasks by factory workers.
Highlights: We propose a conceptual model of robot skills and show how this differs from macros. We show how this approach can enable non-experts to utilize advanced robotic systems. Concrete industrial applications of the approach are presented on advanced robot systems.

Book ChapterDOI
01 Jan 2016
TL;DR: Within the context of multiple mobile and networked robot systems, this chapter surveys the current state of the art in architectures for multirobot cooperation, exploring the alternative approaches that have been developed.
Abstract: Within the context of multiple mobile, and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

Journal ArticleDOI
TL;DR: A review of the application of robotics in the onshore oil and gas industry is presented, including semi-autonomous robots, where actions are performed by the robot but cognitive decisions are still taken by a skilled operator.

Journal ArticleDOI
TL;DR: The principles and system components for navigation and manipulation in domestic environments, the interaction paradigm and its implementation in a multimodal user interface, the core robot tasks, as well as the results from the user studies are described.

Journal ArticleDOI
TL;DR: This work proposes a framework for a user to teach a robot collaborative skills from demonstrations, and presents an approach that combines probabilistic learning, dynamical systems, and stiffness estimation to encode the robot behavior along the task.
Abstract: Robots are becoming safe and smart enough to work alongside people not only on manufacturing production lines, but also in spaces such as houses, museums, or hospitals. This can be significantly exploited in situations in which a human needs the help of another person to perform a task, because a robot may take the role of the helper. In this sense, a human and the robotic assistant may cooperatively carry out a variety of tasks, therefore requiring the robot to communicate with the person, understand his/her needs, and behave accordingly. To achieve this, we propose a framework for a user to teach a robot collaborative skills from demonstrations. We mainly focus on tasks involving physical contact with the user, in which not only position, but also force sensing and compliance become highly relevant. Specifically, we present an approach that combines probabilistic learning, dynamical systems, and stiffness estimation to encode the robot behavior along the task. Our method allows a robot to learn not only trajectory following skills, but also impedance behaviors. To show the functionality and flexibility of our approach, two different testbeds are used: a transportation task and a collaborative table assembly.

Journal ArticleDOI
TL;DR: The future of robotic surgery involves cost reduction, development of new platforms and technologies, creation and validation of curriculum and virtual simulators, and conduction of randomized clinical trials to determine the best applications of robotics.
Abstract: The idea of reproducing himself with the use of a mechanical robot structure has been in man's imagination for the last 3000 years. However, the use of robots in medicine has only 30 years of history. The application of robots in surgery originates from the need of modern man to achieve two goals: telepresence and the performance of repetitive and accurate tasks. The first "robot surgeon" used on a human patient was the PUMA 200 in 1985. In the 1990s, scientists developed the concept of the "master-slave" robot, which consisted of a robot with remote manipulators controlled by a surgeon at a surgical workstation. Despite the lack of force and tactile feedback, technical advantages of robotic surgery, such as 3D vision, a stable and magnified image, EndoWrist instruments, physiologic tremor filtering, and motion scaling, have been considered fundamental to overcoming many of the limitations of laparoscopic surgery. Since the approval of the da Vinci® robot by international agencies, American, European, and Asian surgeons have proved its feasibility and safety for the performance of many different robot-assisted surgeries. Comparative studies of robotic and laparoscopic surgical procedures in general surgery have shown similar results with regard to perioperative, oncological, and functional outcomes. However, higher costs and the lack of haptic feedback represent the major limitations of current robotic technology to becoming the standard technique of minimally invasive surgery worldwide. Therefore, the future of robotic surgery involves cost reduction, development of new platforms and technologies, creation and validation of curricula and virtual simulators, and the conduction of randomized clinical trials to determine the best applications of robotics.

Journal ArticleDOI
TL;DR: This paper discusses the fundamentals of the most successful robot 3D path planning algorithms developed in recent years, concentrating on universally applicable algorithms which can be implemented in aerial robots, ground robots, and underwater robots.
Abstract: Robot 3D (three-dimensional) path planning targets finding an optimal and collision-free path in a 3D workspace while taking into account kinematic constraints, including geometric, physical, and temporal constraints. The purpose of path planning, unlike motion planning, which must take dynamics into consideration, is to find a kinematically optimal path in the least time as well as to model the environment completely. We discuss the fundamentals of the most successful robot 3D path planning algorithms developed in recent years and concentrate on universally applicable algorithms which can be implemented in aerial robots, ground robots, and underwater robots. This paper classifies all the methods into five categories based on their exploring mechanisms and proposes a new category, called multifusion based algorithms. All of these algorithms are analyzed from a time-efficiency and implementable-area perspective. Furthermore, a comprehensive applicability analysis for each kind of method is presented after considering their merits and weaknesses.
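Among the node-based methods such surveys cover, A* generalizes directly from 2D to 3D workspaces. Below is a self-contained sketch on a 6-connected 3D occupancy grid with a Manhattan-distance heuristic; the function name and grid encoding (nested lists, 1 = obstacle) are illustrative, not from the paper.

```python
import heapq
import itertools

def astar_3d(grid, start, goal):
    """A* search on a 3D occupancy grid (grid[x][y][z]; 1 = obstacle).

    Uses 6-connected neighbours and the Manhattan-distance heuristic,
    which is admissible for unit step costs. Returns the path as a list
    of (x, y, z) cells, or None if the goal is unreachable.
    """
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])

    def h(c):
        return sum(abs(a - b) for a, b in zip(c, goal))

    counter = itertools.count()           # tie-breaker for the heap
    open_set = [(h(start), 0, next(counter), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:              # already expanded via a better path
            continue
        came_from[cur] = parent
        if cur == goal:                   # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y, z = cur
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (x + dx, y + dy, z + dz)
            if (0 <= n[0] < nx and 0 <= n[1] < ny and 0 <= n[2] < nz
                    and grid[n[0]][n[1]][n[2]] == 0
                    and g + 1 < g_cost.get(n, float("inf"))):
                g_cost[n] = g + 1
                heapq.heappush(open_set,
                               (g + 1 + h(n), g + 1, next(counter), n, cur))
    return None
```

Grid-based planners like this trade completeness at the chosen resolution for memory that grows cubically with workspace size, which is one of the efficiency trade-offs such a survey analyzes.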

Journal ArticleDOI
TL;DR: A momentum-based control framework for floating-base robots and its application to the humanoid robot “Atlas” is presented and results for walking across rough terrain, basic manipulation, and multi-contact balancing on sloped surfaces are presented.
Abstract: This paper presents a momentum-based control framework for floating-base robots and its application to the humanoid robot “Atlas”. At the heart of the control framework lies a quadratic program that reconciles motion tasks expressed as constraints on the joint acceleration vector with the limitations due to unilateral ground contact and force-limited grasping. We elaborate on necessary adaptations required to move from simulation to real hardware and present results for walking across rough terrain, basic manipulation, and multi-contact balancing on sloped surfaces (the latter in simulation only). The presented control framework was used to secure second place in both the DARPA Robotics Challenge Trials in December 2013 and the Finals in June 2015.
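Stripped of contact and grasp constraints, the heart of such a controller is a least-squares trade-off: find joint accelerations that achieve the commanded task acceleration while staying small. The toy sketch below solves that regularized problem for a 2-joint arm with a 1-D task via the normal equations; all names and the unconstrained formulation are illustrative simplifications of the paper's quadratic program, which additionally enforces unilateral ground contact and force limits.

```python
def solve_2x2(A, b):
    """Solve a 2x2 linear system A x = b by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def task_space_accel(J, xdd_des, reg=1e-6):
    """Joint accelerations minimizing ||J qdd - xdd_des||^2 + reg * ||qdd||^2.

    J is a 1x2 task Jacobian (given as [J0, J1]) and xdd_des the desired
    scalar task acceleration; reg keeps the problem well-posed near
    singularities. Solved via the normal equations
    (J^T J + reg * I) qdd = J^T xdd_des.
    """
    A = [[J[0] * J[0] + reg, J[0] * J[1]],
         [J[1] * J[0], J[1] * J[1] + reg]]
    b = [J[0] * xdd_des, J[1] * xdd_des]
    return solve_2x2(A, b)
```

A full whole-body QP stacks many such task objectives and adds inequality constraints (friction cones, torque limits), which is why a QP solver replaces this closed-form solve in practice.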

Journal ArticleDOI
TL;DR: Tactile sensors provide robots with the ability to interact with humans and the environment with great accuracy, yet technical challenges remain for electronic-skin systems to reach human-level performance.
Abstract: Tactile sensors provide robots with the ability to interact with humans and the environment with great accuracy, yet technical challenges remain for electronic-skin systems to reach human-level performance.

Journal ArticleDOI
05 Mar 2016-Sensors
TL;DR: A comparative review of different machine vision techniques for robot guidance is presented, analyzing accuracy, range and weight of the sensors, safety, processing time and environmental influences.
Abstract: In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future works.

Journal ArticleDOI
TL;DR: The goal is to use learning to generate low-uncertainty, non-parametric models in situ, providing safe, conservative control during initial trials when model uncertainty is high and converging to high-performance, optimal control during later trials when model uncertainty is reduced with experience.
Abstract: This paper presents a Robust Constrained Learning-based Nonlinear Model Predictive Control (RC-LB-NMPC) algorithm for path-tracking in off-road terrain. For mobile robots, constraints may represent solid obstacles or localization limits. As a result, constraint satisfaction is required for safety. Constraint satisfaction is typically guaranteed through the use of accurate, a priori models or robust control. However, accurate models are generally not available for off-road operation. Furthermore, robust controllers are often conservative, since model uncertainty is not updated online. In this work, our goal is to use learning to generate low-uncertainty, non-parametric models in situ. Based on these models, the predictive controller computes both linear and angular velocities in real-time, such that the robot drives at or near its capabilities while respecting path and localization constraints. Localization for the controller is provided by an on-board, vision-based mapping and navigation system enabling operation in large-scale, off-road environments. The paper presents experimental results, including over 5 km of travel by a 900 kg skid-steered robot at speeds of up to 2.0 m/s. The result is a robust, learning controller that provides safe, conservative control during initial trials when model uncertainty is high and converges to high-performance, optimal control during later trials when model uncertainty is reduced with experience.
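A common choice for the "low-uncertainty, non-parametric models" described above is Gaussian-process regression, which returns both a prediction and a variance that shrinks as experience accumulates; the controller can then act conservatively where variance is high. The sketch below is a generic 1-D GP with a squared-exponential kernel, under assumed hyperparameters, not the paper's model.

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, sig_f=1.0, sig_n=0.1):
    """GP regression: predictive mean and variance at test inputs Xs,
    given 1-D training inputs X and targets y."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return sig_f**2 * np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + sig_n**2 * np.eye(X.size)  # noisy training covariance
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = sig_f**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# Model a disturbance observed at three operating points.
X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)
mean, var = gp_predict(X, y, np.array([1.0]))
```

Near visited states the variance is small, so the predictive controller can tighten its uncertainty bounds there, which is the mechanism behind "conservative early, optimal later".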

Proceedings ArticleDOI
01 Sep 2016
TL;DR: The Unmanned Underwater Vehicle Simulator is described, an extension of the open-source robotics simulator Gazebo to underwater scenarios, which can simulate multiple underwater robots and intervention tasks using robotic manipulators.
Abstract: This paper describes the Unmanned Underwater Vehicle (UUV) Simulator, an extension of the open-source robotics simulator Gazebo to underwater scenarios, which can simulate multiple underwater robots and intervention tasks using robotic manipulators. This is achieved mainly through a set of newly implemented plugins that model underwater hydrostatic and hydrodynamic effects, thrusters, sensors, and external disturbances. In contrast to existing solutions, it reuses and extends a general-purpose robotics simulation platform for underwater environments.
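The hydrostatic and hydrodynamic effects such plugins model reduce, in the simplest case, to Archimedes' buoyancy plus a quadratic drag term applied each simulation step. The sketch below is a generic per-axis version under assumed constants, not the simulator's actual plugin code.

```python
RHO_WATER = 1025.0  # kg/m^3, typical seawater density (assumed)
G = 9.81            # m/s^2

def hydrostatic_and_drag(volume_m3, velocity_mps, drag_coeff, area_m2):
    """Net buoyant force (N, upward positive) and quadratic drag force
    opposing motion along one axis, as an underwater physics plugin
    might compute each step."""
    buoyancy = RHO_WATER * G * volume_m3            # Archimedes
    drag = -0.5 * RHO_WATER * drag_coeff * area_m2 \
           * velocity_mps * abs(velocity_mps)       # opposes motion
    return buoyancy, drag

# A 50-litre hull moving forward at 1 m/s:
b, d = hydrostatic_and_drag(0.05, 1.0, 0.8, 0.3)
```

Real hydrodynamic models (e.g. Fossen-style equations) add added-mass and Coriolis terms and full 6-DoF coupling; the point here is only the shape of the forces involved.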

Journal ArticleDOI
TL;DR: This work proposes a framework for socially adaptive path planning in dynamic environments that generates human-like path trajectories, and evaluates the approach by deploying it on a real robotic wheelchair platform and comparing the robot trajectories to human trajectories.
Abstract: A key skill for mobile robots is the ability to navigate efficiently through their environment. In the case of social or assistive robots, this involves navigating through human crowds. Typical performance criteria, such as reaching the goal using the shortest path, are not appropriate in such environments, where it is more important for the robot to move in a socially adaptive manner such as respecting comfort zones of the pedestrians. We propose a framework for socially adaptive path planning in dynamic environments that generates human-like path trajectories. Our framework consists of three modules: a feature extraction module, an inverse reinforcement learning (IRL) module, and a path planning module. The feature extraction module extracts features necessary to characterize the state information, such as density and velocity of surrounding obstacles, from an RGB-D sensor. The inverse reinforcement learning module uses a set of demonstration trajectories generated by an expert to learn the expert’s behaviour when faced with different state features, and represents it as a cost function that respects social variables. Finally, the planning module integrates a three-layer architecture, where a global path is optimized according to a classical shortest-path objective using a global map known a priori, a local path is planned over a shorter distance using the features extracted from an RGB-D sensor and the cost function inferred from the IRL module, and a low-level system handles avoidance of immediate obstacles. We evaluate our approach by deploying it on a real robotic wheelchair platform in various scenarios, and comparing the robot trajectories to human trajectories.
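Once an IRL-learned cost function is available, the planning layer can fold it into an ordinary shortest-path search: cells near pedestrians simply become expensive, so the cheapest path detours around comfort zones. The toy Dijkstra search below illustrates this on a hand-made cost grid; the grid values stand in for a learned social cost and are not from the paper.

```python
import heapq

def dijkstra(cost, start, goal):
    """Cheapest path on a 2-D grid where each cell carries a traversal
    cost (e.g. base cost plus a learned social-comfort penalty)."""
    h, w = len(cost), len(cost[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        r, c = u
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (r + dr, c + dc)
            if 0 <= v[0] < h and 0 <= v[1] < w:
                nd = d + cost[v[0]][v[1]]
                if nd < dist.get(v, float('inf')):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# 3x3 grid: the centre cell is 'near a pedestrian', so heavily penalised.
grid = [[1, 1, 1],
        [1, 9, 1],
        [1, 1, 1]]
path = dijkstra(grid, (0, 0), (2, 2))
```

The resulting path skirts the penalised centre cell, which is the essence of socially adaptive planning: the geometry of the search is unchanged, only the cost landscape encodes the social behaviour.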