
Showing papers on "Robot" published in 2015


Journal ArticleDOI
28 May 2015-Nature
TL;DR: This work identifies scientific and technological advances that are expected to translate, within appropriate regulatory frameworks, into pervasive use of autonomous drones for civilian applications.
Abstract: We are witnessing the advent of a new era of robots - drones - that can autonomously fly in natural and man-made environments. These robots, often associated with defence applications, could have a major impact on civilian tasks, including transportation, communication, agriculture, disaster mitigation and environment preservation. Autonomous flight in confined spaces presents great scientific and technical challenges owing to the energetic cost of staying airborne and to the perceptual intelligence required to negotiate complex environments. We identify scientific and technological advances that are expected to translate, within appropriate regulatory frameworks, into pervasive use of autonomous drones for civilian applications.

956 citations


Journal ArticleDOI
28 May 2015-Nature
TL;DR: An intelligent trial-and-error algorithm is introduced that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans, and may shed light on the principles that animals use to adapt to injury.
Abstract: An intelligent trial-and-error learning algorithm is presented that allows robots to adapt in minutes to compensate for a wide variety of types of damage. Autonomous mobile robots would be extremely useful in remote or hostile environments such as space, deep oceans or disaster areas. An outstanding challenge is to make such robots able to recover after damage. Jean-Baptiste Mouret and colleagues have developed a machine learning algorithm that enables damaged robots to quickly regain their ability to perform tasks. When they sustain damage — such as broken or even missing legs — the robots adopt an intelligent trial-and-error approach, trying out possible behaviours that they calculate to be potentially high-performing. After a handful of such experiments they discover, in less than two minutes, a compensatory behaviour that works in spite of the damage. Robots have transformed many industries, most notably manufacturing1, and have the power to deliver tremendous benefits to society, such as in search and rescue2, disaster response3, health care4 and transportation5. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets6 to deep oceans7. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility6,8. Whereas animals can quickly adapt to injuries, current robots cannot ‘think outside the box’ to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes9, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots6,8. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage10,11, but current techniques are slow even with small, constrained search spaces12. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot’s prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury.
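The adaptation step can be pictured as Bayesian optimization over a precomputed behavior-performance map. The sketch below is a minimal, hypothetical illustration of that idea (not the authors' code): a map of candidate behaviors with prior performance estimates is corrected online by a small Gaussian process fitted to the trials executed on the damaged robot, and an upper-confidence-bound rule picks the next behavior to try. The map, the robot-evaluation stub, and all constants are stand-ins.

```python
# Minimal sketch (not the authors' code) of map-based Bayesian adaptation:
# a precomputed behavior map supplies prior performance estimates, and a
# Gaussian process over tested behaviors corrects those estimates online.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical precomputed map: N candidate behaviors described by 2-D
# features, each with a predicted (pre-damage) performance.
behaviors = rng.uniform(0, 1, size=(200, 2))
prior_perf = 1.0 - np.linalg.norm(behaviors - 0.5, axis=1)

def rbf(A, B, ell=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def evaluate_on_robot(x):
    # Stand-in for a real trial on the damaged robot.
    return 1.0 - np.linalg.norm(x - np.array([0.7, 0.3])) + 0.01 * rng.normal()

tried_x, tried_y, noise = [], [], 1e-3
for trial in range(10):
    if tried_x:
        X = np.array(tried_x)
        K = rbf(X, X) + noise * np.eye(len(X))
        k_star = rbf(behaviors, X)
        resid = np.array(tried_y) - prior_perf[[np.argmin(
            np.linalg.norm(behaviors - x, axis=1)) for x in tried_x]]
        mu = prior_perf + k_star @ np.linalg.solve(K, resid)
        var = 1.0 - np.sum(k_star * np.linalg.solve(K, k_star.T).T, axis=1)
    else:
        mu, var = prior_perf, np.ones(len(behaviors))
    pick = int(np.argmax(mu + 0.2 * np.sqrt(np.maximum(var, 0))))  # UCB rule
    y = evaluate_on_robot(behaviors[pick])
    tried_x.append(behaviors[pick]); tried_y.append(y)
    if y > 0.9 * mu.max():   # stop once a good compensatory behavior is found
        break
```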

928 citations


Journal ArticleDOI
10 Jul 2015-Science
TL;DR: Multimaterial three-dimensional (3D) printing is used to manufacture a combustion-powered robot whose body transitions from a rigid core to a soft exterior; the stiffness gradient enables reliable interfacing between rigid and soft components, enhances performance, and allows untethered jumping.
Abstract: Roboticists have begun to design biologically inspired robots with soft or partially soft bodies, which have the potential to be more robust and adaptable, and safer for human interaction, than traditional rigid robots. However, key challenges in the design and manufacture of soft robots include the complex fabrication processes and the interfacing of soft and rigid components. We used multimaterial three-dimensional (3D) printing to manufacture a combustion-powered robot whose body transitions from a rigid core to a soft exterior. This stiffness gradient, spanning three orders of magnitude in modulus, enables reliable interfacing between rigid driving components (controller, battery, etc.) and the primarily soft body, and also enhances performance. Powered by the combustion of butane and oxygen, this robot is able to perform untethered jumping.

767 citations


Journal ArticleDOI
TL;DR: A probabilistic, non-parametric Gaussian process transition model of the system is learned and applied to autonomous learning in real robot and control tasks, achieving an unprecedented speed of learning.
Abstract: Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning makes it possible to reduce the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
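As a rough illustration of the core ingredient, the sketch below fits a Gaussian process forward model to state-action transition data and queries its predictive mean and uncertainty. It assumes scikit-learn and a toy pendulum-like system, and it omits the paper's key step of propagating model uncertainty through long-term predictions for policy learning.

```python
# Minimal sketch (assumes scikit-learn) of learning a probabilistic forward
# model Delta_x = f(x, u) from interaction data, as in model-based RL; the
# paper's method additionally propagates model uncertainty through
# multi-step predictions for policy learning, which is omitted here.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical pendulum-like system: state x = (angle, velocity), action u.
def step(x, u, dt=0.05):
    th, om = x
    om = om + dt * (-9.81 * np.sin(th) + u)
    return np.array([th + dt * om, om])

# Collect a small batch of random interactions.
X, Y = [], []
x = np.zeros(2)
for _ in range(200):
    u = rng.uniform(-2, 2)
    x_next = step(x, u)
    X.append(np.r_[x, u])
    Y.append(x_next - x)          # learn the state difference
    x = x_next if np.abs(x_next).max() < 10 else np.zeros(2)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0, 1.0])
                              + WhiteKernel(1e-4), normalize_y=True)
gp.fit(np.array(X), np.array(Y))

# Query the model: predictive mean and uncertainty for a new state-action pair.
mean, std = gp.predict(np.array([[0.1, 0.0, 1.0]]), return_std=True)
print("predicted state change:", mean, "uncertainty:", std)
```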

575 citations


Journal ArticleDOI
Jamy Li1
TL;DR: Qualitative assessment of the direction of quantitative effects demonstrated that robots were more persuasive and perceived more positively when physically present in a user's environment than when digitally-displayed on a screen either as a video feed of the same robot or as a virtual character analog.
Abstract: The effects of physical embodiment and physical presence were explored through a survey of 33 experimental works comparing how people interacted with physical robots and virtual agents. A qualitative assessment of the direction of quantitative effects demonstrated that robots were more persuasive and perceived more positively when physically present in a user's environment than when digitally-displayed on a screen either as a video feed of the same robot or as a virtual character analog; robots also led to better user performance when they were collocated as opposed to shown via video on a screen. However, participants did not respond differently to physical robots and virtual agents when both were displayed digitally on a screen - suggesting that physical presence, rather than physical embodiment, characterizes people's responses to social robots. Implications for understanding psychological response to physical and virtual agents and for methodological design are discussed. Survey identified 33 works exploring user responses to physical robots and virtual agents. Robot agents had greater influence when physically present than telepresent. No differences were found between physical robots displayed on a screen and virtual agents that looked similar. Physical presence, but not physical embodiment alone, resulted in more favorable responses from participants.

389 citations


Book ChapterDOI
01 Jan 2015
TL;DR: This chapter provides a comprehensive review of the challenging aspects of the MRTA problem, recent approaches to tackling it, and future directions.
Abstract: Multi-robot systems (MRS) are groups of robots designed to perform some collective behavior. By this collective behavior, some goals that are impossible for a single robot to achieve become feasible and attainable. There are several foreseen benefits of MRS compared to single-robot systems, such as an increased ability to resolve task complexity, and improved performance, reliability, and simplicity in design. These benefits have attracted many researchers from academia and industry to investigate how to design and develop robust, versatile MRS by solving a number of challenging problems such as complex task allocation, group formation, cooperative object detection and tracking, communication relaying and self-organization, to name just a few. One of the most challenging problems of MRS is how to optimally assign a set of robots to a set of tasks in a way that optimizes the overall system performance subject to a set of constraints. This problem is known as the Multi-robot Task Allocation (MRTA) problem. MRTA is a complex problem, especially when it comes to heterogeneous, unreliable robots equipped with different capabilities that are required to perform various tasks with different requirements and constraints in an optimal way. This chapter provides a comprehensive review of the challenging aspects of the MRTA problem, recent approaches to tackling it, and future directions.
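For the simplest MRTA setting (single-task robots, single-robot tasks, instantaneous assignment), the allocation reduces to a linear assignment problem. The sketch below, with hypothetical robot and task positions, solves that reduction with SciPy's Hungarian-algorithm solver; richer MRTA variants discussed in the chapter (heterogeneity, task constraints, online re-allocation) are not covered.

```python
# Minimal sketch of the simplest MRTA instance (single-task robots,
# single-robot tasks, instantaneous assignment), which reduces to a linear
# assignment problem solvable with the Hungarian algorithm. Costs here are
# hypothetical travel distances.
import numpy as np
from scipy.optimize import linear_sum_assignment

robot_pos = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 4.0]])
task_pos = np.array([[1.0, 1.0], [4.0, 4.0], [5.0, 0.0], [0.0, 3.0]])

# cost[i, j] = distance for robot i to reach task j (rectangular is allowed:
# with more tasks than robots, some tasks stay unassigned this round).
cost = np.linalg.norm(robot_pos[:, None, :] - task_pos[None, :, :], axis=2)

rows, cols = linear_sum_assignment(cost)
for r, t in zip(rows, cols):
    print(f"robot {r} -> task {t}, cost {cost[r, t]:.2f}")
print("total cost:", cost[rows, cols].sum())
```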

346 citations


Proceedings ArticleDOI
26 May 2015
TL;DR: This paper extends a recently developed policy search method and uses it to learn a range of dynamic manipulation behaviors with highly general policy representations, without using known models or example demonstrations, and shows that this method can acquire fast, fluent behaviors after only minutes of interaction time.
Abstract: Autonomous learning of object manipulation skills can enable robots to acquire rich behavioral repertoires that scale to the variety of objects found in the real world. However, current motion skill learning methods typically restrict the behavior to a compact, low-dimensional representation, limiting its expressiveness and generality. In this paper, we extend a recently developed policy search method [1] and use it to learn a range of dynamic manipulation behaviors with highly general policy representations, without using known models or example demonstrations. Our approach learns a set of trajectories for the desired motion skill by using iteratively refitted time-varying linear models, and then unifies these trajectories into a single control policy that can generalize to new situations. To enable this method to run on a real robot, we introduce several improvements that reduce the sample count and automate parameter selection. We show that our method can acquire fast, fluent behaviors after only minutes of interaction time, and can learn robust controllers for complex tasks, including putting together a toy airplane, stacking tight-fitting lego blocks, placing wooden rings onto tight-fitting pegs, inserting a shoe tree into a shoe, and screwing bottle caps onto bottles.
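One ingredient the abstract mentions, iteratively refitted time-varying linear models, can be illustrated by fitting x_{t+1} ~ A_t x_t + B_t u_t + c_t per time step with least squares over a batch of rollouts. The sketch below uses random stand-in data and is not the paper's guided policy search pipeline, which alternates this fit with trajectory optimization and policy learning.

```python
# Minimal sketch of one ingredient of the approach: fitting time-varying
# linear dynamics x_{t+1} ~ A_t x_t + B_t u_t + c_t to a batch of rollouts by
# per-time-step least squares.
import numpy as np

rng = np.random.default_rng(0)
N, T, dx, du = 20, 30, 4, 2          # rollouts, horizon, state/action dims

# Hypothetical rollout data; in practice these come from robot executions.
X = rng.normal(size=(N, T + 1, dx))
U = rng.normal(size=(N, T, du))

dynamics = []
for t in range(T):
    # Regress next state on [x_t, u_t, 1] across the N rollouts.
    Z = np.hstack([X[:, t, :], U[:, t, :], np.ones((N, 1))])   # (N, dx+du+1)
    W, *_ = np.linalg.lstsq(Z, X[:, t + 1, :], rcond=None)     # (dx+du+1, dx)
    A_t, B_t, c_t = W[:dx].T, W[dx:dx + du].T, W[-1]
    dynamics.append((A_t, B_t, c_t))

# One-step prediction with the fitted model at t = 0.
A0, B0, c0 = dynamics[0]
print(A0 @ X[0, 0] + B0 @ U[0, 0] + c0)
```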

313 citations


Book
09 Jan 2015
TL;DR: Developmental robotics as discussed by the authors is a collaborative and interdisciplinary approach to robotics that is directly inspired by the developmental principles and mechanisms observed in children's cognitive development, and it builds on the idea that the robot can autonomously acquire an increasingly complex set of sensorimotor and mental capabilities.
Abstract: Developmental robotics is a collaborative and interdisciplinary approach to robotics that is directly inspired by the developmental principles and mechanisms observed in children's cognitive development. It builds on the idea that the robot, using a set of intrinsic developmental principles regulating the real-time interaction of its body, brain, and environment, can autonomously acquire an increasingly complex set of sensorimotor and mental capabilities. This volume, drawing on insights from psychology, computer science, linguistics, neuroscience, and robotics, offers the first comprehensive overview of a rapidly growing field. After providing some essential background information on robotics and developmental psychology, the book looks in detail at how developmental robotics models and experiments have attempted to realize a range of behavioral and cognitive capabilities. The examples in these chapters were chosen because of their direct correspondence with specific issues in child psychology research; each chapter begins with a concise and accessible overview of relevant empirical and theoretical findings in developmental psychology. The chapters cover intrinsic motivation and curiosity; motor development, examining both manipulation and locomotion; perceptual development, including face recognition and perception of space; social learning, emphasizing such phenomena as joint attention and cooperation; language, from phonetic babbling to syntactic processing; and abstract knowledge, including models of number learning and reasoning strategies. Boxed text offers technical and methodological details for both psychology and robotics experiments.

303 citations


Journal ArticleDOI
TL;DR: The potential benefits and challenges of building anthropomorphic robots are discussed, from both a philosophical perspective and from the viewpoint of empirical research in the fields of human–robot interaction and social psychology.
Abstract: Anthropomorphism is a phenomenon that describes the human tendency to see human-like shapes in the environment. It has considerable consequences for people’s choices and beliefs. With the increased presence of robots, it is important to investigate the optimal design for this technology. In this paper we discuss the potential benefits and challenges of building anthropomorphic robots, from both a philosophical perspective and from the viewpoint of empirical research in the fields of human–robot interaction and social psychology. We believe that this broad investigation of anthropomorphism will not only help us to understand the phenomenon better, but can also indicate solutions for facilitating the integration of human-like machines in the real world.

298 citations


Proceedings ArticleDOI
26 May 2015
TL;DR: New quantitative measures of simulation performance are introduced, focusing on the numerical challenges that are typical for robotics as opposed to multi-body dynamics and gaming, and it is found that each engine performs best on the type of system it was designed and optimized for.
Abstract: There is growing need for software tools that can accurately simulate the complex dynamics of modern robots. While a number of candidates exist, the field is fragmented. It is difficult to select the best tool for a given project, or to predict how much effort will be needed and what the ultimate simulation performance will be. Here we introduce new quantitative measures of simulation performance, focusing on the numerical challenges that are typical for robotics as opposed to multi-body dynamics and gaming. We then present extensive simulation results, obtained within a new software framework for instantiating the same model in multiple engines and running side-by-side comparisons. Overall we find that each engine performs best on the type of system it was designed and optimized for: MuJoCo wins the robotics-related tests, while the gaming engines win the gaming-related tests without a clear leader among them. The simulations are illustrated in the accompanying movie.

262 citations


Journal ArticleDOI
Haoyong Yu1, Sunan Huang1, Gong Chen1, Yongping Pan1, Zhao Guo1 
TL;DR: An interaction control strategy for a gait rehabilitation robot driven by a novel compact series elastic actuator, which provides intrinsic compliance and backdrivability for safe human-robot interaction.
Abstract: Rehabilitation robots, by necessity, have direct physical interaction with humans. Physical interaction affects the controlled variables and may even cause system instability. Thus, human–robot interaction control design is critical in rehabilitation robotics research. This paper presents an interaction control strategy for a gait rehabilitation robot. The robot is driven by a novel compact series elastic actuator, which provides intrinsic compliance and backdrivability for safe human–robot interaction. The control design is based on the actuator model with consideration of interaction dynamics. It consists mainly of human interaction compensation and friction compensation, and is enhanced with a disturbance observer. Such a control scheme enables the robot to achieve low output impedance when operating in human-in-charge mode and accurate force tracking when operating in force control mode. Due to the direct physical interaction with humans, the controller design must also meet the stability requirement. A theoretical proof is provided to show the guaranteed stability of the closed-loop system under the proposed controller. The proposed design is verified with an ankle robot in walking experiments. The results can be readily extended to other rehabilitation and assistive robots driven with compliant actuators without much difficulty.

Journal ArticleDOI
TL;DR: Based on an empirical study of robot navigation in dense human crowds, it is concluded that a cooperation model is critical for safe and efficient robot navigation, and that the non-cooperative and reactive baseline planners capture the salient characteristics of nearly any dynamic navigation algorithm.
Abstract: We consider the problem of navigating a mobile robot through dense human crowds. We begin by exploring a fundamental impediment to classical motion planning algorithms called the “freezing robot problem”: once the environment surpasses a certain level of dynamic complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place or performs unnecessary maneuvers to avoid collisions. We argue that this problem can be avoided if the robot anticipates human cooperation, and accordingly we develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a “multiple goal” extension that models the goal-driven nature of human decision making. We validate this model with an empirical study of robot navigation in dense human crowds (488 runs), specifically testing how cooperation models affect navigation performance. The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 0.8 humans/m2, while a state-of-the-art non-cooperative planner exhibits unsafe behavior more than three times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m2. We also show that our non-cooperative planner or our reactive planner captures the salient characteristics of nearly any dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.
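A heavily simplified way to picture the cooperative model: sample candidate robot paths and predicted pedestrian paths, down-weight joint samples in which the two come close, and keep the robot path with the best weighted score. The sketch below implements only that caricature with plain random path samples; the paper instead defines a joint density over Gaussian-process trajectories, and all numbers here are hypothetical.

```python
# Heavily simplified sketch of the cooperative-navigation idea: sample
# candidate robot paths and predicted pedestrian paths, down-weight joint
# samples in which the two come close (the interaction potential), and keep
# the robot path with the best weighted score.
import numpy as np

rng = np.random.default_rng(1)
T = 15
robot_start, robot_goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
ped_start, ped_goal = np.array([5.0, 1.0]), np.array([0.0, -1.0])

def sample_paths(start, goal, n, noise=0.3):
    # Straight lines from start to goal plus random perturbations pinned
    # back to the goal at the final step.
    base = np.linspace(start, goal, T)                       # (T, 2)
    wiggle = np.cumsum(rng.normal(0, noise, (n, T, 2)), axis=1)
    wiggle -= np.linspace(0, 1, T)[None, :, None] * wiggle[:, -1:, :]
    return base[None] + wiggle                               # (n, T, 2)

robot_paths = sample_paths(robot_start, robot_goal, 50)
ped_paths = sample_paths(ped_start, ped_goal, 50)

def interaction_weight(rp, pp, safe=0.8):
    d = np.linalg.norm(rp - pp, axis=1)                      # distance per step
    return np.prod(1.0 - np.exp(-(d / safe) ** 2))           # near 0 if too close

# Score each robot path by its mean weight over sampled pedestrian futures,
# minus a small penalty on path length.
scores = []
for rp in robot_paths:
    w = np.mean([interaction_weight(rp, pp) for pp in ped_paths])
    length = np.sum(np.linalg.norm(np.diff(rp, axis=0), axis=1))
    scores.append(w - 0.02 * length)
best = robot_paths[int(np.argmax(scores))]
print("chosen path length:", np.sum(np.linalg.norm(np.diff(best, axis=0), axis=1)))
```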

Proceedings ArticleDOI
02 Mar 2015
TL;DR: This paper analyzes the benefit of planning motion that explicitly enables the collaborator's inferences on the success of physical collaboration, as measured by both objective and subjective metrics, and suggests that legible motion, planned to clearly express the robot's intent, leads to more fluent collaborations.
Abstract: Most motion in robotics is purely functional, planned to achieve the goal and avoid collisions. Such motion is great in isolation, but collaboration affords a human who is watching the motion and making inferences about it, trying to coordinate with the robot to achieve the task. This paper analyzes the benefit of planning motion that explicitly enables the collaborator's inferences on the success of physical collaboration, as measured by both objective and subjective metrics. Results suggest that legible motion, planned to clearly express the robot's intent, leads to more fluent collaborations than predictable motion, planned to match the collaborator's expectations. Furthermore, purely functional motion can harm coordination, which negatively affects both task efficiency and the participants' perception of the collaboration.

Proceedings ArticleDOI
17 Dec 2015
TL;DR: A complete model-predictive controller is implemented and applied in real time on the physical HRP-2 robot, the first time that such a whole-body model-predictive controller has been applied in real time on a complex dynamic robot.
Abstract: Controlling the robot with a permanently-updated optimal trajectory, also known as model predictive control, is the Holy Grail of whole-body motion generation. Before obtaining it, several challenges should be faced: computation cost, non-linear local minima, algorithm stability, etc. In this paper, we address the problem of applying the updated optimal control in real-time on the physical robot. In particular, we focus on the problems raised by the delays due to computation and by the differences between the real robot and the simulated model. Based on the optimal-control solver MuJoCo, we implemented a complete model-predictive controller and we applied it in real-time on the physical HRP-2 robot. It is the first time that such a whole-body model predictive controller is applied in real-time on a complex dynamic robot. Aside from the technical contributions cited above, the main contribution of this paper is to report the experimental results of this premiere implementation.
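The receding-horizon structure the abstract refers to can be sketched generically: re-optimize a short action sequence against a model at every control step, apply only the first action, and warm-start the next optimization with the shifted solution (one simple way to cope with computation delay). The example below is not the paper's MuJoCo-based solver; the double-integrator model and random-search optimizer are stand-ins.

```python
# Generic receding-horizon (model-predictive control) sketch: re-optimize a
# short action sequence at every step against a model, apply only the first
# action, and warm-start the next optimization with the shifted solution.
import numpy as np

rng = np.random.default_rng(0)
dt, H = 0.05, 20                       # control period, horizon length

def model_step(x, u):
    # Double-integrator stand-in for the simulated robot model.
    pos, vel = x
    return np.array([pos + dt * vel, vel + dt * u])

def rollout_cost(x0, plan, target=1.0):
    x, cost = x0, 0.0
    for u in plan:
        x = model_step(x, u)
        cost += (x[0] - target) ** 2 + 0.01 * u ** 2
    return cost

x = np.array([0.0, 0.0])
plan = np.zeros(H)                     # warm start
for step in range(100):
    # Cheap random-search refinement of the warm-started plan.
    for _ in range(50):
        candidate = plan + rng.normal(0, 0.5, H)
        if rollout_cost(x, candidate) < rollout_cost(x, plan):
            plan = candidate
    u0 = plan[0]                       # apply only the first action
    x = model_step(x, u0)              # "real" system (same model here)
    plan = np.r_[plan[1:], 0.0]        # shift solution for the next cycle
print("final state:", x)
```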

Proceedings Article
25 Jul 2015
TL;DR: This work proposes to formulate the problem of sequential robot manipulation holistically as a first-order logic extension of a mathematical program: a non-linear constrained program over the full world trajectory where the symbolic state-action sequence defines the (in-)equality constraints.
Abstract: We consider problems of sequential robot manipulation (aka. combined task and motion planning) where the objective is primarily given in terms of a cost function over the final geometric state, rather than a symbolic goal description. In this case we should leverage optimization methods to inform search over potential action sequences. We propose to formulate the problem holistically as a first-order logic extension of a mathematical program: a non-linear constrained program over the full world trajectory where the symbolic state-action sequence defines the (in-)equality constraints. We tackle the challenge of solving such programs by proposing three levels of approximation: The coarsest level introduces the concept of the effective end state kinematics, parametrically describing all possible end state configurations conditional to a given symbolic action sequence. Optimization on this level is fast and can inform symbolic search. The other two levels optimize over interaction keyframes and eventually over the full world trajectory across interactions. We demonstrate the approach on a problem of maximizing the height of a physically stable construction from an assortment of boards, cylinders and blocks.

Journal ArticleDOI
TL;DR: Rapyuta as mentioned in this paper is an open-source cloud robotics platform that helps robots to offload heavy computation by providing secured customizable computing environments in the cloud and allows robots to easily access the RoboEarth knowledge repository.
Abstract: In this paper, we present the design and implementation of Rapyuta, an open-source cloud robotics platform. Rapyuta helps robots to offload heavy computation by providing secured customizable computing environments in the cloud. The computing environments also allow the robots to easily access the RoboEarth knowledge repository. Furthermore, these computing environments are tightly interconnected, paving the way for deployment of robotic teams. We also describe three typical use cases, some benchmarking and performance results, and two proof-of-concept demonstrations.

Journal ArticleDOI
TL;DR: A brief system overview is presented, detailing Valkyrie's mechatronic subsystems, followed by a summary of the inverse kinematics-based walking algorithm employed at the Trials, and some closing remarks are given about the competition.
Abstract: In December 2013, 16 teams from around the world gathered at Homestead Speedway near Miami, FL to participate in the DARPA Robotics Challenge (DRC) Trials, an aggressive robotics competition partly inspired by the aftermath of the Fukushima Daiichi reactor incident. While the focus of the DRC Trials is to advance robotics for use in austere and inhospitable environments, the objectives of the DRC are to progress the areas of supervised autonomy and mobile manipulation for everyday robotics. NASA's Johnson Space Center led a team composed of numerous partners to develop Valkyrie, NASA's first bipedal humanoid robot. Valkyrie is a 44 degree-of-freedom, series elastic actuator-based robot that draws upon over 18 years of humanoid robotics design heritage. Valkyrie is intended not only for responding to events like Fukushima, but also for advancing human spaceflight endeavors in extraterrestrial planetary settings. This paper presents a brief system overview, detailing Valkyrie's mechatronic subsystems, followed by a summary of the inverse kinematics-based walking algorithm employed at the Trials. Next, the software and control architectures are highlighted along with a description of the operator interface tools. Finally, some closing remarks are given about the competition, and a vision of future work is provided.

Proceedings ArticleDOI
26 May 2015
TL;DR: This work presents a sheet that can self-fold into a functional 3D robot, actuate immediately for untethered walking and swimming, and subsequently dissolve in liquid, including an acetone-degradable version, which allows the entire robot's body to vanish in a liquid.
Abstract: A miniature robotic device that can fold up on the spot, accomplish tasks, and disappear by degradation into the environment promises a range of medical applications but has so far been a challenge in engineering. This work presents a sheet that can self-fold into a functional 3D robot, actuate immediately for untethered walking and swimming, and subsequently dissolve in liquid. The developed sheet weighs 0.31 g, spans 1.7 cm square in size, features a cubic neodymium magnet, and can be thermally activated to self-fold. Since the robot has asymmetric body balance along the sagittal axis, the robot can walk at a speed of 3.8 body-lengths/s when remotely controlled by an alternating external magnetic field. We further show that the robot is capable of conducting basic tasks and behaviors, including swimming, delivering/carrying blocks, climbing a slope, and digging. The developed models include an acetone-degradable version, which allows the entire robot's body to vanish in a liquid. We thus experimentally demonstrate the complete life cycle of our robot: self-folding, actuation, and degrading.

Journal ArticleDOI
13 Nov 2015
TL;DR: The fundamentals of robot navigation requirements are discussed, and the state-of-the-art techniques that form the basis of established solutions for mobile robot localization and mapping are reviewed.
Abstract: This paper is intended to pave the way for new researchers in the field of robotics and autonomous systems, particularly those who are interested in robot localization and mapping. We discuss the fundamentals of robot navigation requirements and provide a review of the state-of-the-art techniques that form the basis of established solutions for mobile robot localization and mapping. The topics we discuss range from basic localization techniques such as wheel odometry and dead reckoning, to the more advanced Visual Odometry (VO) and Simultaneous Localization and Mapping (SLAM) techniques. We discuss VO in both monocular and stereo vision systems using feature matching/tracking and optical flow techniques. We discuss and compare the basics of the most common SLAM methods such as Extended Kalman Filter SLAM (EKF-SLAM), Particle Filter SLAM and the more recent RGB-D SLAM. We also provide techniques that form the building blocks of those methods, such as feature extraction (e.g. SIFT, SURF, FAST), feature matching, outlier removal and data association techniques.
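As a concrete example of the feature-based visual odometry pipeline reviewed here, the sketch below chains OpenCV's ORB detection, brute-force matching, RANSAC essential-matrix estimation, and relative pose recovery between two frames. The camera intrinsics and image filenames are hypothetical, and, as in any monocular VO system, translation is recovered only up to scale.

```python
# Minimal frame-to-frame monocular visual odometry sketch with OpenCV:
# ORB feature detection, brute-force matching, essential-matrix estimation
# with RANSAC, and relative pose recovery. The camera matrix K and the image
# filenames are assumptions for illustration.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],      # assumed camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance brute-force matching with cross-check for binary ORB
# descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC outlier rejection, then relative pose.
E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("relative rotation:\n", R, "\nunit-scale translation:\n", t.ravel())
```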

Journal ArticleDOI
TL;DR: This work describes the full-body humanoid control approach developed for the simulation phase of the DARPA Robotics Challenge (DRC), as well as the modifications made for the DARPA Robotics Challenge Trials.
Abstract: We describe our full body humanoid control approach developed for the simulation phase of the DARPA Robotics Challenge (DRC), as well as the modifications made for the DARPA Robotics Challenge Trials. We worked with the Boston Dynamics Atlas robot. Our approach was initially targeted at walking, and it consisted of two levels of optimization: a high-level trajectory optimizer that reasons about center of mass and swing foot trajectories, and a low-level controller that tracks those trajectories by solving floating base full body inverse dynamics using quadratic programming. This controller is capable of walking on rough terrain, and it also achieves long footsteps, fast walking speeds, and heel-strike and toe-off in simulation. During development of these and other whole body tasks on the physical robot, we introduced an additional optimization component in the low-level controller, namely an inverse kinematics controller. Modeling and torque measurement errors and hardware features of the Atlas robot led us to this three-part approach, which was applied to three tasks in the DRC Trials in December 2013.

Book ChapterDOI
01 Jan 2015
TL;DR: This chapter reviews path planning and trajectory planning algorithms for robots, focusing on generating trajectories that can be executed at high speed but are at the same time harmless for the robot, in terms of avoiding excessive accelerations of the actuators and vibrations of the mechanical structure.
Abstract: Path planning and trajectory planning are crucial issues in the field of Robotics and, more generally, in the field of Automation. Indeed, the trend for robots and automatic machines is to operate at increasingly high speed, in order to achieve shorter production times. The high operating speed may hinder the accuracy and repeatability of the robot motion, since extreme performance is required from the actuators and the control system. Therefore, particular care should be taken in generating a trajectory that can be executed at high speed, but is at the same time harmless for the robot, in terms of avoiding excessive accelerations of the actuators and vibrations of the mechanical structure. Such a trajectory is defined as smooth. For such reasons, path planning and trajectory planning algorithms assume an increasing significance in robotics. Path planning algorithms generate a geometric path, from an initial to a final point, passing through pre-defined via-points, either in the joint space or in the operating space of the robot, while trajectory planning algorithms take a given geometric path and endow it with the time information. Trajectory planning algorithms are crucial in Robotics, because defining the times of passage at the via-points influences not only the kinematic properties of the motion, but also the dynamic ones. Namely, the inertial forces (and torques), to which the robot is subjected, depend on the accelerations along the trajectory, while the vibrations of its mechanical structure are basically determined by the values of the jerk (i.e. the derivative of the acceleration). Path planning algorithms are usually divided according to the methodologies used to generate the geometric path, namely: roadmap techniques, cell decomposition algorithms, and artificial potential methods.
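A minimal example of the smoothness requirement discussed above: a quintic (fifth-order) polynomial between two joint positions with zero boundary velocity and acceleration keeps acceleration continuous and jerk bounded. The sketch below evaluates such a rest-to-rest profile; via-point timing and multi-axis synchronization are left out.

```python
# Minimal smooth point-to-point trajectory sketch: a quintic polynomial
# between two joint positions with zero boundary velocity and acceleration,
# which keeps acceleration continuous and jerk bounded.
import numpy as np

def quintic(q0, qf, T, t):
    """Position, velocity, acceleration at time t for a rest-to-rest move."""
    s = t / T
    # Standard quintic shape function: s(0)=0, s(1)=1, zero vel/acc at ends.
    pos = q0 + (qf - q0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    vel = (qf - q0) / T * (30 * s**2 - 60 * s**3 + 30 * s**4)
    acc = (qf - q0) / T**2 * (60 * s - 180 * s**2 + 120 * s**3)
    return pos, vel, acc

T = 2.0                                   # motion time in seconds
ts = np.linspace(0.0, T, 201)
pos, vel, acc = quintic(0.0, 1.2, T, ts)  # move a joint from 0 to 1.2 rad
print("peak velocity %.3f rad/s, peak acceleration %.3f rad/s^2"
      % (np.abs(vel).max(), np.abs(acc).max()))
```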

Journal ArticleDOI
TL;DR: The main achievements of evolutionary robotics are considered, focusing particularly on its contributions to both engineering and biology, and some of the most interesting findings are reviewed.
Abstract: Evolutionary robotics applies the selection, variation, and heredity principles of natural evolution to the design of robots with embodied intelligence. It can be considered as a subfield of robotics that aims to create more robust and adaptive robots. A pivotal feature of the evolutionary approach is that it considers the whole robot at once, and enables the exploitation of robot features in a holistic manner. Evolutionary robotics can also be seen as an innovative approach to the study of evolution based on a new kind of experimentalism. The use of robots as a substrate can help address questions that are difficult, if not impossible, to investigate through computer simulations or biological studies. In this paper we consider the main achievements of evolutionary robotics, focusing particularly on its contributions to both engineering and biology. We briefly elaborate on methodological issues, review some of the most interesting findings, and discuss important open issues and promising avenues for future work.

Journal ArticleDOI
TL;DR: The ways that socially assistive robotics (SAR) has already been used in mental health services and research are reviewed, and ways that these applications can be expanded are discussed.

Proceedings ArticleDOI
02 Mar 2015
TL;DR: It was found that children interacting with a robot using social and adaptive behaviours in addition to the teaching strategy did not learn a significant amount, indicating that while the presence of a physical robot leads to improved learning, caution is required when applying social behaviour to a robot in a tutoring context.
Abstract: Social robots are finding increasing application in the domain of education, particularly for children, to support and augment learning opportunities. With an implicit assumption that social and adaptive behaviour is desirable, it is therefore of interest to determine precisely how these aspects of behaviour may be exploited in robots to support children in their learning. In this paper, we explore this issue by evaluating the effect of a social robot tutoring strategy with children learning about prime numbers. It is shown that the tutoring strategy itself leads to improvement, but that the presence of a robot employing this strategy amplifies this effect, resulting in significant learning. However, it was also found that children interacting with a robot using social and adaptive behaviours in addition to the teaching strategy did not learn a significant amount. These results indicate that while the presence of a physical robot leads to improved learning, caution is required when applying social behaviour to a robot in a tutoring context.

Patent
23 Apr 2015
TL;DR: A wireless coverage characterization platform uses an autonomous vehicle or robot, such as an unmanned aerial vehicle or other small robot, to autonomously collect key wireless coverage parameters for an indoor environment.
Abstract: A wireless coverage characterization platform uses an autonomous vehicle or robot, such as an unmanned aerial vehicle or other small robot, to autonomously collect key wireless coverage parameters for an indoor environment. One or more vehicles or robots are equipped with integrated simultaneous localization and mapping sensors as well as wireless signal measurement sensors. As a vehicle traverses the indoor environment, on-board processing components process the sensor measurement data to simultaneously build an indoor map of the environment and to learn the wireless coverage characteristics of the environment incrementally. The vehicle's navigation system guides the vehicle through the environment based on the sensor measurements and the learned indoor map until a complete map of the wireless signal strength at all locations throughout the environment is obtained. The system can identify areas of weak wireless coverage or interference sources and recommend access point device locations based on results of the survey.

Journal ArticleDOI
TL;DR: With the proposed control, uniform ultimate boundedness of the closed loop system is achieved in the context of Lyapunov’s stability theory and its associated techniques.
Abstract: In this paper, neural network control is presented for a rehabilitation robot with unknown system dynamics. To deal with the system uncertainties and improve the system robustness, adaptive neural networks are used to approximate the unknown model of the robot and adapt interactions between the robot and the patient. Both full state feedback control and output feedback control are considered in this paper. With the proposed control, uniform ultimate boundedness of the closed loop system is achieved in the context of Lyapunov's stability theory and its associated techniques. The state of the system is proven to converge to a small neighborhood of zero by appropriately choosing design parameters. Extensive simulations for a rehabilitation robot with constraints are carried out to illustrate the effectiveness of the proposed control.
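To make the approximation-plus-adaptation idea concrete, the sketch below simulates a one-degree-of-freedom system with unknown dynamics, an RBF network standing in for the adaptive neural network, and a sigma-modification term keeping the weights bounded. The plant, gains, and basis centers are hypothetical, and no stability guarantee is implied by this toy simulation; it only illustrates the structure of the adaptive law.

```python
# Minimal sketch of the adaptive neural-network control idea on a one-DOF
# system x1_dot = x2, x2_dot = f(x) + u with f unknown: a radial-basis-function
# network approximates f online while a sigma-modification term keeps the
# weights bounded. Gains, basis centers, and the plant are hypothetical.
import numpy as np

dt, steps = 0.001, 20000
lam, k, sigma = 2.0, 5.0, 0.01
centers = np.linspace(-2, 2, 9)                 # RBF centers over the state range
Gamma = 10.0 * np.eye(2 * len(centers))

def phi(x):
    # RBF features of the full state (x1 and x2 share the same centers).
    return np.exp(-np.concatenate([(x[0] - centers) ** 2,
                                   (x[1] - centers) ** 2]))

def f_true(x):                                  # unknown dynamics (simulation only)
    return -0.5 * x[1] * abs(x[1]) + np.sin(x[0])

x = np.array([0.5, 0.0])
W = np.zeros(2 * len(centers))
for i in range(steps):
    t = i * dt
    xd, xd_d, xd_dd = np.sin(t), np.cos(t), -np.sin(t)        # desired trajectory
    e, e_d = x[0] - xd, x[1] - xd_d
    r = e_d + lam * e                                         # filtered tracking error
    u = xd_dd - lam * e_d - k * r - W @ phi(x)                # feedback + NN compensation
    W = W + dt * (Gamma @ (phi(x) * r) - sigma * Gamma @ W)   # adaptive law
    x = x + dt * np.array([x[1], f_true(x) + u])              # Euler step of the plant
print("final tracking error:", x[0] - np.sin(steps * dt))
```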

Journal ArticleDOI
TL;DR: The actuator-level control of Valkyrie, a new humanoid robot designed by NASA's Johnson Space Center in collaboration with several external partners, is discussed and a decentralized approach is taken in controlling Valkyrie's many series elastic degrees of freedom.
Abstract: This paper discusses the actuator-level control of Valkyrie, a new humanoid robot designed by NASA's Johnson Space Center in collaboration with several external partners. Several topics pertaining to Valkyrie's series elastic actuators are presented including control architecture, controller design, and implementation in hardware. A decentralized approach is taken in controlling Valkyrie's many series elastic degrees of freedom. By conceptually decoupling actuator dynamics from robot limb dynamics, the problem of controlling a highly complex system is simplified and the controller development process is streamlined compared to other approaches. This hierarchical control abstraction is realized by leveraging disturbance observers in the robot's joint-level torque controllers. A novel analysis technique is applied to understand the ability of a disturbance observer to attenuate the effects of unmodeled dynamics. The performance of this control approach is demonstrated in two ways. First, torque tracking performance of a single Valkyrie actuator is characterized in terms of controllable torque resolution, tracking error, bandwidth, and power consumption. Second, tests are performed on Valkyrie's arm, a serial chain of actuators, to demonstrate the robot's ability to accurately track torques with the presented decentralized control approach.
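The disturbance-observer idea can be sketched in discrete time: push the measured output through the inverse of a nominal plant model, compare with the commanded input, low-pass filter the discrepancy, and subtract the estimate from the command. The example below uses a hypothetical first-order plant and filter bandwidth, not Valkyrie's actuator parameters.

```python
# Minimal discrete-time sketch of a disturbance observer on a first-order
# nominal plant tau * y_dot = -y + K * (u + d): the measured output goes
# through the inverse nominal model, is compared with the commanded input,
# and the low-pass-filtered difference is treated as the disturbance estimate
# and subtracted from the command. All parameters are stand-ins.
import numpy as np

dt, steps = 0.001, 5000
tau, K = 0.05, 1.0                      # nominal plant parameters
wc = 2 * np.pi * 20.0                   # observer low-pass cutoff (rad/s)
alpha = dt * wc / (1.0 + dt * wc)       # first-order filter coefficient

y, y_prev, d_hat = 0.0, 0.0, 0.0
log = []
for i in range(steps):
    t = i * dt
    r = 1.0                             # desired output (e.g. joint torque)
    d = 0.3 * np.sin(2 * np.pi * 2 * t) # unmodeled disturbance
    u = 2.0 * (r - y) - d_hat           # simple P loop minus disturbance estimate

    # True plant (disturbance enters at the input).
    y_prev, y = y, y + dt * (-y + K * (u + d)) / tau

    # Inverse nominal model applied to the measurement, then low-pass filter.
    y_dot = (y - y_prev) / dt
    d_raw = (tau * y_dot + y) / K - u
    d_hat = (1 - alpha) * d_hat + alpha * d_raw
    log.append((y, d, d_hat))

y, d, d_hat = log[-1]
print("output %.3f, true disturbance %.3f, estimate %.3f" % (y, d, d_hat))
```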

Proceedings Article
25 Jan 2015
TL;DR: A system that learns manipulation action plans by processing unconstrained videos from the World Wide Web, robustly generating the sequence of atomic actions of longer actions seen in video in order to acquire knowledge for robots.
Abstract: In order to advance action generation and creation in robots beyond simple learned schemas we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions of seen longer actions in video in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation. Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by "watching" unconstrained videos with high accuracy.

Journal ArticleDOI
TL;DR: A series of algorithms are presented that draw from recent advances in Bayesian non-parametric statistics and control theory to automatically detect and leverage repeated structure at multiple levels of abstraction in demonstration data, providing robust generalization and transfer in complex, multi-step robotic tasks.
Abstract: Robots exhibit flexible behavior largely in proportion to their degree of knowledge about the world. Such knowledge is often meticulously hand-coded for a narrow class of tasks, limiting the scope of possible robot competencies. Thus, the primary limiting factor of robot capabilities is often not the physical attributes of the robot, but the limited time and skill of expert programmers. One way to deal with the vast number of situations and environments that robots face outside the laboratory is to provide users with simple methods for programming robots that do not require the skill of an expert. For this reason, learning from demonstration (LfD) has become a popular alternative to traditional robot programming methods, aiming to provide a natural mechanism for quickly teaching robots. By simply showing a robot how to perform a task, users can easily demonstrate new tasks as needed, without any special knowledge about the robot. Unfortunately, LfD often yields little knowledge about the world, and thus lacks robust generalization capabilities, especially for complex, multi-step tasks. We present a series of algorithms that draw from recent advances in Bayesian non-parametric statistics and control theory to automatically detect and leverage repeated structure at multiple levels of abstraction in demonstration data. The discovery of repeated structure provides critical insights into task invariants, features of importance, high-level task structure, and appropriate skills for the task. This culminates in the discovery of a finite-state representation of the task, composed of grounded skills that are flexible and reusable, providing robust generalization and transfer in complex, multi-step robotic tasks. These algorithms are tested and evaluated using a PR2 mobile manipulator, showing success on several complex real-world tasks, such as furniture assembly.
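One step in such pipelines, segmenting demonstrations into reusable pieces, can be caricatured with a simple change-point score; the article's algorithms use Bayesian nonparametric models for this, so the sketch below is only a stand-in. The synthetic signal, window size, and threshold are all hypothetical.

```python
# Simplified stand-in for the segmentation step: a sliding-window change-point
# score over a synthetic 1-D demonstration marks candidate segment boundaries.
# The paper's methods instead use Bayesian nonparametric models over
# multi-dimensional demonstration data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstration: three movement phases with different mean velocity.
demo = np.concatenate([rng.normal(0.0, 0.05, 100),
                       rng.normal(0.6, 0.05, 80),
                       rng.normal(0.2, 0.05, 120)])

def changepoint_score(x, w=20):
    """|mean of next w samples - mean of previous w samples| at each index."""
    score = np.zeros(len(x))
    for i in range(w, len(x) - w):
        score[i] = abs(x[i:i + w].mean() - x[i - w:i].mean())
    return score

score = changepoint_score(demo)
threshold = 0.3
# Keep local maxima above the threshold as segment boundaries.
boundaries = [i for i in range(1, len(score) - 1)
              if score[i] > threshold
              and score[i] >= score[i - 1] and score[i] >= score[i + 1]]
print("detected segment boundaries near samples:", boundaries)
```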

Journal ArticleDOI
TL;DR: Lyapunov analysis shows that the proposed algorithms can guarantee asymptotic stability and tracking of the linear and angular motion of a quadrotor vehicle.
Abstract: This paper addresses the stability and tracking control problem of a quadrotor unmanned flying robot vehicle in the presence of modeling error and disturbance uncertainty. The input algorithms are designed for autonomous flight control with the help of an energy function. Adaptation laws are designed to learn and compensate for the modeling error and external disturbance uncertainties. Lyapunov analysis shows that the proposed algorithms can guarantee asymptotic stability and tracking of the linear and angular motion of a quadrotor vehicle. Compared with the existing results, the proposed adaptive algorithm does not require an a priori known bound of the modeling errors and disturbance uncertainty. To illustrate the theoretical argument, experimental results on a commercial quadrotor vehicle are presented.