scispace - formally typeset

Showing papers on "Robot published in 2018"


Journal ArticleDOI
24 Jan 2018-Nature
TL;DR: In this paper, the authors demonstrate magneto-elastic soft millimetre-scale robots that can swim inside and on the surface of liquids, climb liquid menisci, roll and walk on solid surfaces, jump over obstacles, and crawl within narrow tunnels.
Abstract: Untethered small-scale (from several millimetres down to a few micrometres in all dimensions) robots that can non-invasively access confined, enclosed spaces may enable applications in microfactories such as the construction of tissue scaffolds by robotic assembly, in bioengineering such as single-cell manipulation and biosensing, and in healthcare such as targeted drug delivery and minimally invasive surgery. Existing small-scale robots, however, have very limited mobility because they are unable to negotiate obstacles and changes in texture or material in unstructured environments. Of these small-scale robots, soft robots have greater potential to realize high mobility via multimodal locomotion, because such machines have higher degrees of freedom than their rigid counterparts. Here we demonstrate magneto-elastic soft millimetre-scale robots that can swim inside and on the surface of liquids, climb liquid menisci, roll and walk on solid surfaces, jump over obstacles, and crawl within narrow tunnels. These robots can transit reversibly between different liquid and solid terrains, as well as switch between locomotive modes. They can additionally execute pick-and-place and cargo-release tasks. We also present theoretical models to explain how the robots move. Like the large-scale robots that can be used to study locomotion, these soft small-scale robots could be used to study soft-bodied locomotion produced by small organisms.

1,326 citations


Journal ArticleDOI
TL;DR: An extensive review of human–robot collaboration in industrial environments is provided, with specific focus on issues related to physical and cognitive interaction, and commercially available solutions are presented.

632 citations


Proceedings ArticleDOI
26 Jun 2018
TL;DR: This system can learn quadruped locomotion from scratch using simple reward signals and users can provide an open loop reference to guide the learning process when more control over the learned gait is needed.
Abstract: Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.
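The sim-to-real recipe described above (randomized physics, added perturbations) can be sketched as follows; the parameter names and ranges are illustrative assumptions, not the authors' identified values:

```python
import random

# Illustrative per-episode randomization ranges (assumed values; the
# paper derives its actual actuator model and latency via system
# identification).
PARAM_RANGES = {
    "mass_scale": (0.8, 1.2),      # body mass multiplier
    "friction": (0.5, 1.25),       # ground friction coefficient
    "latency_s": (0.0, 0.04),      # simulated actuation latency (seconds)
    "motor_strength": (0.8, 1.2),  # actuator torque multiplier
}

def sample_physics(rng=random):
    """Draw one randomized physics configuration for a training episode."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def perturb(state, rng=random, push=0.5):
    """Occasionally apply a random push to the robot base (robustness)."""
    if rng.random() < 0.01:  # perturb on roughly 1% of control steps
        state["base_velocity"] += rng.uniform(-push, push)
    return state
```

A policy trained across many such sampled configurations cannot overfit any single simulator setting, which is what narrows the reality gap.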

520 citations


Journal ArticleDOI
TL;DR: With the proposed control, the stability of the closed-loop system is achieved via Lyapunov’s stability theory, and the tracking performance is guaranteed under the condition of state constraints and uncertainty.
Abstract: This paper investigates adaptive fuzzy neural network (NN) control using impedance learning for a constrained robot, subject to unknown system dynamics, the effect of state constraints, and the uncertain compliant environment with which the robot comes into contact. A fuzzy NN learning algorithm is developed to identify the uncertain plant model. The prominent feature of the fuzzy NN is that it requires neither prior knowledge of the uncertainty nor a large amount of observed data. Also, impedance learning is introduced to tackle the interaction between the robot and its environment, so that the robot follows a desired destination generated by impedance learning. A barrier Lyapunov function is used to address the effect of state constraints. With the proposed control, the stability of the closed-loop system is achieved via Lyapunov's stability theory, and the tracking performance is guaranteed under the condition of state constraints and uncertainty. Some simulation studies are carried out to illustrate the effectiveness of the proposed scheme.
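To illustrate how a barrier Lyapunov function enforces a state constraint, consider the common log-type candidate V(e) = ½·log(k_b²/(k_b² − e²)): it is zero at zero tracking error and grows without bound as |e| approaches the constraint k_b. A minimal numeric sketch of this generic form (not the paper's exact construction):

```python
import math

def barrier_lyapunov(e, k_b):
    """Log-type barrier Lyapunov candidate, defined only for |e| < k_b."""
    assert abs(e) < k_b, "tracking error must stay inside the constraint"
    return 0.5 * math.log(k_b**2 / (k_b**2 - e**2))

# V vanishes at zero error and blows up near the constraint boundary,
# so keeping V bounded keeps the error inside the constraint.
k_b = 1.0
values = [barrier_lyapunov(e, k_b) for e in (0.0, 0.5, 0.9, 0.99)]
```

A controller that makes the time derivative of V negative thus guarantees the error never reaches the bound.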

498 citations


Proceedings ArticleDOI
21 May 2018
TL;DR: In this article, the authors describe how consumer-grade Virtual Reality headsets and hand tracking hardware can be used to naturally teleoperate robots to perform complex tasks, and how imitation learning can learn deep neural network policies (mapping from pixels to actions) that can acquire the demonstrated skills.
Abstract: Imitation learning is a powerful paradigm for robot skill acquisition. However, obtaining demonstrations suitable for learning a policy that maps from raw pixels to actions can be challenging. In this paper we describe how consumer-grade Virtual Reality headsets and hand tracking hardware can be used to naturally teleoperate robots to perform complex tasks. We also describe how imitation learning can learn deep neural network policies (mapping from pixels to actions) that can acquire the demonstrated skills. Our experiments showcase the effectiveness of our approach for learning visuomotor skills.
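The imitation-learning step, fitting a policy to observation-action pairs collected via VR teleoperation, reduces in its simplest form to supervised regression (behavior cloning). A minimal sketch with NumPy, where a linear policy stands in for the paper's deep network and all shapes and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake demonstration data: 200 pairs of 16-dim observations and
# 4-dim actions (real data would be image pixels and robot commands).
obs = rng.normal(size=(200, 16))
true_w = rng.normal(size=(16, 4))
actions = obs @ true_w  # pretend the expert is linear in the features

# Behavior cloning: least-squares fit of a policy to the demonstrations.
w_hat, *_ = np.linalg.lstsq(obs, actions, rcond=None)

def policy(o):
    """Map an observation to an action with the cloned policy."""
    return o @ w_hat

mse = float(np.mean((policy(obs) - actions) ** 2))
```

The deep-network version replaces the least-squares solve with gradient descent on the same regression loss.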

480 citations


Journal ArticleDOI
TL;DR: The knowledge gap and promising solutions toward perceptive soft robots are discussed and analyzed to provide a perspective in this field and challenges and trends in developing multimodal sensors, stretchable conductive materials and electronic interfaces, modeling techniques, and data interpretation for soft robotic sensing are highlighted.
Abstract: In the past few years, soft robotics has rapidly become an emerging research topic, opening new possibilities for addressing real-world tasks. Perception can enable robots to effectively explore the unknown world, and interact safely with humans and the environment. Among all extero- and proprioception modalities, the detection of mechanical cues is vital, as with living beings. A variety of soft sensing technologies are available today, but there is still a gap to effectively utilize them in soft robots for practical applications. Here, the developments in soft robots with mechanical sensing are summarized to provide a comprehensive understanding of the state of the art in this field. Promising sensing technologies for mechanically perceptive soft robots are described, categorized, and their pros and cons are discussed. Strategies for designing soft sensors and criteria to evaluate their performance are outlined from the perspective of soft robotic applications. Challenges and trends in developing multimodal sensors, stretchable conductive materials and electronic interfaces, modeling techniques, and data interpretation for soft robotic sensing are highlighted. The knowledge gap and promising solutions toward perceptive soft robots are discussed and analyzed to provide a perspective in this field.

416 citations


Journal ArticleDOI
TL;DR: An overview of the inaugural Amazon Picking Challenge is presented along with a summary of a survey conducted among the 26 participating teams, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task.
Abstract: This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge. Note to Practitioners —Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.

407 citations


Journal ArticleDOI
TL;DR: This review article attempts to provide an insight into various controllers developed for continuum/soft robots as a guideline for future applications in the soft robotics field.
Abstract: With the rise of soft robotics technology and applications, there have been increasing interests in the development of controllers appropriate for their particular design. Being fundamentally different from traditional rigid robots, there is still not a unified framework for the design, analysis, and control of these high-dimensional robots. This review article attempts to provide an insight into various controllers developed for continuum/soft robots as a guideline for future applications in the soft robotics field. A comprehensive assessment of various control strategies and an insight into the future areas of research in this field are presented.

403 citations


Proceedings ArticleDOI
21 May 2018
TL;DR: This work presents a decentralized sensor-level collision avoidance policy for multi-robot systems, which directly maps raw sensor measurements to an agent's steering commands in terms of movement velocity and demonstrates that the learned policy can be well generalized to new scenarios that do not appear in the entire training period.
Abstract: Developing a safe and efficient collision avoidance policy for multiple robots is challenging in the decentralized scenarios where each robot generates its paths without observing other robots' states and intents. While other distributed multi-robot collision avoidance systems exist, they often require extracting agent-level features to plan a local collision-free action, which can be computationally prohibitive and not robust. More importantly, in practice the performance of these methods is often much lower than that of their centralized counterparts. We present a decentralized sensor-level collision avoidance policy for multi-robot systems, which directly maps raw sensor measurements to an agent's steering commands in terms of movement velocity. As a first step toward reducing the performance gap between decentralized and centralized methods, we present a multi-scenario multi-stage training framework to learn an optimal policy. The policy is trained over a large number of robots on rich, complex environments simultaneously using a policy gradient based reinforcement learning algorithm. We validate the learned sensor-level collision avoidance policy in a variety of simulated scenarios with thorough performance evaluations and show that the final learned policy is able to find time efficient, collision-free paths for a large-scale robot system. We also demonstrate that the learned policy can be well generalized to new scenarios that do not appear in the entire training period, including navigating a heterogeneous group of robots and a large-scale scenario with 100 robots. Videos are available at https://sites.google.com/view/drlmaca.
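The sensor-level policy's interface, raw range scan in, velocity command out, can be sketched as a tiny feedforward network. The dimensions, layer sizes, and random weights below are illustrative assumptions; the actual policy is a deeper network trained with policy-gradient RL:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative dimensions: a 180-beam laser scan plus 2-dim relative goal
# and 2-dim current velocity, mapped to a (linear, angular) command.
W1 = rng.normal(scale=0.1, size=(184, 64))
W2 = rng.normal(scale=0.1, size=(64, 2))

def policy(scan, goal, vel):
    """Map raw sensor measurements directly to a steering command."""
    x = np.concatenate([scan, goal, vel])  # 180 + 2 + 2 = 184 inputs
    h = np.tanh(x @ W1)                    # one hidden layer
    v, w = np.tanh(h @ W2)                 # bounded velocity command
    return float(v), float(w)

cmd = policy(np.ones(180), np.array([1.0, 0.0]), np.zeros(2))
```

Because the policy never needs agent-level features of neighbors, each robot can run it fully decentralized from its own sensors.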

398 citations


Journal ArticleDOI
TL;DR: The use and demand for robotic medical and surgical platforms is increasing and new technologies are continually being developed to improve on the capabilities of previously established systems.
Abstract: The use of laparoscopic and robotic procedures has increased in general surgery. Minimally invasive robotic surgery has made tremendous progress in a relatively short period of time, realizing improvements for both the patient and surgeon. This has led to an increase in the use and development of robotic devices and platforms for general surgery. The purpose of this review is to explore current and emerging surgical robotic technologies in a growing and dynamic environment of research and development. This review explores medical and surgical robotic endoscopic surgery and peripheral technologies currently available or in development. The devices discussed here are specific to general surgery, including laparoscopy, colonoscopy, esophagogastroduodenoscopy, and thoracoscopy. Benefits and limitations of each technology were identified and applicable future directions were described. A number of FDA-approved devices and platforms for robotic surgery were reviewed, including the da Vinci Surgical System, Sensei X Robotic Catheter System, FreeHand 1.2, invendoscopy E200 system, Flex® Robotic System, Senhance, ARES, the Single-Port Instrument Delivery Extended Research (SPIDER), and the NeoGuide Colonoscope. Additionally, platforms were reviewed which have not yet obtained FDA approval including MiroSurge, ViaCath System, SPORT™ Surgical System, SurgiBot, Versius Robotic System, Master and Slave Transluminal Endoscopic Robot, Verb Surgical, Miniature In Vivo Robot, and the Einstein Surgical Robot. The use and demand for robotic medical and surgical platforms is increasing and new technologies are continually being developed. New technologies are increasingly implemented to improve on the capabilities of previously established systems. Future studies are needed to further evaluate the strengths and weaknesses of each robotic surgical device and platform in the operating suite.

398 citations


Journal ArticleDOI
19 Dec 2018
TL;DR: A tethered soft robot capable of climbing walls made of wood, paper, and glass at 90° with a speed of up to 0.75 body length per second and multimodal locomotion, including climbing, crawling, and turning is reported.
Abstract: Existing robots capable of climbing walls mostly rely on rigid actuators such as electric motors, but soft wall-climbing robots based on muscle-like actuators have not yet been achieved. Here, we report a tethered soft robot capable of climbing walls made of wood, paper, and glass at 90° with a speed of up to 0.75 body length per second and multimodal locomotion, including climbing, crawling, and turning. This soft wall-climbing robot is enabled by (i) dielectric-elastomer artificial muscles that generate fast periodic deformation of the soft robotic body, (ii) electroadhesive feet that give spatiotemporally controlled adhesion of different parts of the robot on the wall, and (iii) a control strategy that synchronizes the body deformation and feet electroadhesion for stable climbing. We further demonstrate that our soft robot could carry a camera to take videos in a vertical tunnel, change its body height to navigate through a confined space, and follow a labyrinth-like planar trajectory. Our soft robot mimicked the vertical climbing capability and the agile adaptive motions exhibited by soft organisms.
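The control strategy above, synchronizing body deformation with alternating foot electroadhesion, can be caricatured as a two-phase cycle: anchor the rear foot while the dielectric-elastomer body extends, then anchor the front foot while it contracts. A toy sketch in which the phase structure and step size are invented for illustration:

```python
def climb(cycles, step=1.0):
    """Advance a two-footed climber by alternating adhesion and deformation.

    Each cycle: rear foot adhered + body extends, moving the front foot up;
    then front foot adhered + body contracts, dragging the rear foot up.
    Returns the final (front, rear) foot positions along the wall.
    """
    front, rear = step, 0.0  # initial foot positions
    for _ in range(cycles):
        # Phase 1: rear electroadhesion ON, artificial muscle extends.
        front += step
        # Phase 2: front electroadhesion ON, body contracts, rear catches up.
        rear += step
    return front, rear

pos = climb(5)
```

The real controller's job is the timing: adhesion must switch exactly when the body deformation reverses, or the robot slips instead of climbing.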

Proceedings ArticleDOI
21 May 2018
TL;DR: This paper evaluates an array of publicly-available VIO pipelines on different hardware configurations, including several single-board computer systems that are typically found on flying robots, and considers the pose estimation accuracy, per-frame processing time, and CPU and memory load while processing the EuRoC datasets.
Abstract: Flying robots require a combination of accuracy and low latency in their state estimation in order to achieve stable and robust flight. However, due to the power and payload constraints of aerial platforms, state estimation algorithms must provide these qualities under the computational constraints of embedded hardware. Cameras and inertial measurement units (IMUs) satisfy these power and payload constraints, so visual-inertial odometry (VIO) algorithms are popular choices for state estimation in these scenarios, in addition to their ability to operate without external localization from motion capture or global positioning systems. It is not clear from existing results in the literature, however, which VIO algorithms perform well under the accuracy, latency, and computational constraints of a flying robot with onboard state estimation. This paper evaluates an array of publicly-available VIO pipelines (MSCKF, OKVIS, ROVIO, VINS-Mono, SVO+MSF, and SVO+GTSAM) on different hardware configurations, including several single-board computer systems that are typically found on flying robots. The evaluation considers the pose estimation accuracy, per-frame processing time, and CPU and memory load while processing the EuRoC datasets, which contain six degree of freedom (6DoF) trajectories typical of flying robots. We present our complete results as a benchmark for the research community.
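The pose-accuracy part of such an evaluation is typically an absolute trajectory error: the RMSE over time-associated positions. A minimal sketch that omits the SE(3)/Sim(3) alignment step real benchmarks perform first:

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square translational error between two (N, 3) trajectories.

    Assumes the trajectories are already time-associated and aligned;
    real VIO benchmarks first solve for the best rigid or similarity
    transform between estimate and ground truth.
    """
    err = np.asarray(estimated) - np.asarray(ground_truth)
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))

gt = np.zeros((100, 3))
est = gt + np.array([0.03, 0.04, 0.0])  # constant 5 cm offset
rmse = ate_rmse(est, gt)
```

Per-frame processing time and CPU/memory load are then measured alongside this metric to expose the accuracy-versus-compute trade-off on embedded hardware.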

Journal ArticleDOI
TL;DR: It is posited that robots will play key roles in everyday life and will soon coexist with us, leading all people to a smarter, safer, healthier, and happier existence.
Abstract: As robotics technology evolves, we believe that personal social robots will be one of the next big expansions in the robotics sector. Based on the accelerated advances in this multidisciplinary domain and the growing number of use cases, we can posit that robots will play key roles in everyday life and will soon coexist with us, leading all people to a smarter, safer, healthier, and happier existence.

Journal ArticleDOI
TL;DR: The main sections of this paper focus on major results covering trajectory generation, task allocation, adversarial control, distributed sensing, monitoring, and mapping, and dynamic modeling and conditions for stability and controllability that are essential in order to achieve cooperative flight and distributed sensing.
Abstract: The use of aerial swarms to solve real-world problems has been increasing steadily, accompanied by falling prices and improving performance of communication, sensing, and processing hardware. The commoditization of hardware has reduced unit costs, thereby lowering the barriers to entry to the field of aerial swarm robotics. A key enabling technology for swarms is the family of algorithms that allow the individual members of the swarm to communicate and allocate tasks amongst themselves, plan their trajectories, and coordinate their flight in such a way that the overall objectives of the swarm are achieved efficiently. These algorithms, often organized in a hierarchical fashion, endow the swarm with autonomy at every level, and the role of a human operator can be reduced, in principle, to interactions at a higher level without direct intervention. This technology depends on the clever and innovative application of theoretical tools from control and estimation. This paper reviews the state of the art of these theoretical tools, specifically focusing on how they have been developed for, and applied to, aerial swarms. Aerial swarms differ from swarms of ground-based vehicles in two respects: they operate in a three-dimensional space and the dynamics of individual vehicles adds an extra layer of complexity. We review dynamic modeling and conditions for stability and controllability that are essential in order to achieve cooperative flight and distributed sensing. The main sections of this paper focus on major results covering trajectory generation, task allocation, adversarial control, distributed sensing, monitoring, and mapping. Wherever possible, we indicate how the physics and subsystem technologies of aerial robots are brought to bear on these individual areas.
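One of the simplest instances of the task-allocation algorithms surveyed above is greedy nearest-assignment, where each task goes to the closest still-unassigned robot. A toy centralized sketch; practical swarm allocators are distributed (e.g. auction- or consensus-based), which this does not capture:

```python
import math

def greedy_allocate(robots, tasks):
    """Assign each task to the nearest still-unassigned robot.

    robots, tasks: lists of (x, y) positions.
    Returns {task_index: robot_index}.
    """
    free = set(range(len(robots)))
    assignment = {}
    for t, target in enumerate(tasks):
        best = min(free, key=lambda r: math.dist(robots[r], target))
        assignment[t] = best
        free.remove(best)
    return assignment

pairs = greedy_allocate([(0, 0), (10, 0)], [(9, 1), (1, 1)])
```

Greedy assignment is fast but globally suboptimal; the surveyed literature covers allocators with better guarantees and fully decentralized execution.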

Journal ArticleDOI
TL;DR: A robot control/identification scheme to identify the unknown robot kinematic and dynamic parameters with enhanced convergence rate was developed, and the information of parameter estimation error was properly integrated into the proposed identification algorithm, such that enhanced estimation performance was achieved.
Abstract: For parameter identification of robot systems, most existing works have focused on estimation accuracy, but few have addressed convergence speed. In this paper, we developed a robot control/identification scheme to identify the unknown robot kinematic and dynamic parameters with an enhanced convergence rate. Unlike traditional methods, the information of the parameter estimation error was properly integrated into the proposed identification algorithm, so that enhanced estimation performance was achieved. Besides, the Newton–Euler (NE) method was used to build the robot dynamic model, where a singular value decomposition-based model reduction method was designed to remedy the potential singularity problems of the NE regressor. Moreover, an interval excitation condition was employed to relax the requirement of a persistent excitation condition for the kinematic estimation. By using the Lyapunov synthesis, explicit analysis of the convergence rate of the tracking errors and the estimated parameters was performed. Simulation studies were conducted to show the accurate and fast convergence of the proposed finite-time (FT) identification algorithm based on a 7-DOF arm of the Baxter robot.
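The SVD-based model reduction mentioned for the NE regressor amounts to discarding parameter directions with (near-)zero singular values, so that only identifiable parameter combinations are estimated. A generic sketch of that idea with NumPy, not the authors' exact procedure:

```python
import numpy as np

def reduce_regressor(Y, tol=1e-8):
    """Project a stacked regressor onto its identifiable parameter subspace.

    Y: (samples x parameters) regressor matrix. Returns the reduced
    regressor and the basis V_r of identifiable parameter directions.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))  # drop near-zero singular values
    V_r = Vt[:rank].T                   # basis of identifiable directions
    return Y @ V_r, V_r

# Third column duplicates the first, so only 2 directions are identifiable.
Y = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [2.0, 1.0, 2.0]])
Y_red, V_r = reduce_regressor(Y)
```

Estimating in the reduced space avoids the singular normal equations that the full, rank-deficient regressor would produce.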

Journal ArticleDOI
TL;DR: This article presents for the first time a survey of visual SLAM and SfM techniques that are targeted toward operation in dynamic environments and identifies three main problems: how to perform reconstruction, how to segment and track dynamic objects, and how to achieve joint motion segmentation and reconstruction.
Abstract: In the last few decades, Structure from Motion (SfM) and visual Simultaneous Localization and Mapping (visual SLAM) techniques have gained significant interest from both the computer vision and robotic communities. Many variants of these techniques have started to make an impact in a wide range of applications, including robot navigation and augmented reality. However, despite some remarkable results in these areas, most SfM and visual SLAM techniques operate based on the assumption that the observed environment is static. However, when faced with moving objects, overall system accuracy can be jeopardized. In this article, we present for the first time a survey of visual SLAM and SfM techniques that are targeted toward operation in dynamic environments. We identify three main problems: how to perform reconstruction (robust visual SLAM), how to segment and track dynamic objects, and how to achieve joint motion segmentation and reconstruction. Based on this categorization, we provide a comprehensive taxonomy of existing approaches. Finally, the advantages and disadvantages of each solution class are critically discussed from the perspective of practicality and robustness.

Journal ArticleDOI
TL;DR: In this paper, compliant ultrathin sensing and actuating electronics innervated fully soft robots that can sense the environment and perform soft bodied crawling adaptively, mimicking an inchworm, are reported.
Abstract: Soft robots outperform conventional hard robots through significantly enhanced safety, adaptability, and complex motions. The development of fully soft robots, especially ones built entirely from smart soft materials to mimic soft animals, is still nascent. In addition, to date, existing soft robots cannot adapt themselves to the surrounding environment, i.e., sensing and adaptive motion or response, like animals. Here, fully soft robots innervated with compliant ultrathin sensing and actuating electronics, which can sense the environment and perform soft-bodied crawling adaptively, mimicking an inchworm, are reported. The soft robots are constructed with actuators of open-mesh shaped ultrathin deformable heaters, sensors of single-crystal Si optoelectronic photodetectors, and a thermally responsive artificial muscle of carbon-black-doped liquid-crystal elastomer (LCE-CB) nanocomposite. The results demonstrate that adaptive crawling locomotion can be realized through the conjugation of sensing and actuation, where the sensors sense the environment and the actuators respond correspondingly to control the locomotion autonomously through regulating the deformation of the LCE-CB bimorphs and the locomotion of the robots. The strategy of innervating soft sensing and actuating electronics with artificial muscles paves the way for the development of smart autonomous soft robots.


Journal ArticleDOI
TL;DR: The development of key 3D printing technologies and new materials along with composites for soft robotic applications is investigated and a brief summary of 3D-printed soft devices suitable for medical to industrial applications is included.

Journal ArticleDOI
TL;DR: It was shown that one of the trends and research focuses in agricultural field robotics is towards building a swarm of small scale robots and drones that collaborate together to optimize farming inputs and reveal denied or concealed information.
Abstract: Digital farming is the practice of applying modern technologies such as sensors, robotics, and data analysis to shift from tedious operations to continuously automated processes. This paper reviews some of the latest achievements in agricultural robotics, specifically those used for autonomous weed control, field scouting, and harvesting. Object identification, task planning algorithms, digitalization, and optimization of sensors are highlighted as some of the challenges facing digital farming. The concepts of multi-robots, human-robot collaboration, and environment reconstruction from aerial images and ground-based sensors for the creation of virtual farms are highlighted as some of the gateways of digital farming. It was shown that one of the trends and research focuses in agricultural field robotics is towards building swarms of small-scale robots and drones that collaborate to optimize farming inputs and reveal denied or concealed information. For the case of robotic harvesting, an autonomous framework with several simple-axis manipulators can be faster and more efficient than the currently adopted professional, expensive manipulators. While robots are becoming an inseparable part of modern farms, our conclusion is that it is not realistic to expect an entirely automated farming system in the future.
Keywords: agricultural robotics, precision agriculture, virtual orchards, digital agriculture, simulation software, multi-robots
DOI: 10.25165/j.ijabe.20181104.4278
Citation: Shamshiri R R, Weltzien C, Hameed I A, Yule I J, Grift T E, Balasundram S K, et al. Research and development in agricultural robotics: A perspective of digital farming. Int J Agric & Biol Eng, 2018; 11(4): 1–14.

Proceedings ArticleDOI
23 Apr 2018
TL;DR: Imitating expert demonstration is a powerful mechanism for learning to perform tasks from raw sensory observations as discussed by the authors, where the expert typically provides multiple demonstrations of a task at training time, and this generates data in the form of observation-action pairs from the agent's point of view.
Abstract: Imitating expert demonstration is a powerful mechanism for learning to perform tasks from raw sensory observations. The current dominant paradigm in learning from demonstration (LfD) [3,16,19,20] requires the expert to either manually move the robot joints (i.e., kinesthetic teaching) or teleoperate the robot to execute the desired task. The expert typically provides multiple demonstrations of a task at training time, and this generates data in the form of observation-action pairs from the agent's point of view. The agent then distills this data into a policy for performing the task of interest. Such a heavily supervised approach, where it is necessary to provide demonstrations by controlling the robot, is incredibly tedious for the human expert. Moreover, for every new task that the robot needs to execute, the expert is required to provide a new set of demonstrations.

Proceedings ArticleDOI
21 May 2018
TL;DR: A generalized computation graph is proposed that subsumes value-based model-free methods and model-based methods, and is instantiated to form a navigation model that learns from raw images, is sample efficient, and outperforms single-step and N-step double Q-learning.
Abstract: Enabling robots to autonomously navigate complex environments is essential for real-world deployment. Prior methods approach this problem by having the robot maintain an internal map of the world, and then use a localization and planning method to navigate through the internal map. However, these approaches often include a variety of assumptions, are computationally intensive, and do not learn from failures. In contrast, learning-based methods improve as the robot acts in the environment, but are difficult to deploy in the real world due to their high sample complexity. To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based. We then instantiate this graph to form a navigation model that learns from raw images and is sample efficient. Our simulated car experiments explore the design decisions of our navigation model, and show our approach outperforms single-step and N-step double Q-learning. We also evaluate our approach on a real-world RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, self-supervised training. Videos of the experiments and code can be found at github.com/gkahn13/gcg
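The N-step double Q-learning baseline referenced above bootstraps an N-step return: the sum of N discounted rewards plus a discounted value estimate at the state N steps ahead. A sketch of the target computation in its generic form, not the paper's exact computation graph:

```python
def n_step_target(rewards, bootstrap_value, gamma=0.99):
    """N-step return: discounted reward sum plus a discounted bootstrap.

    rewards: the N rewards r_0..r_{N-1} along a sampled segment.
    bootstrap_value: value estimate at the state N steps ahead (taken
    from the target network in double Q-learning).
    """
    target = bootstrap_value
    for r in reversed(rewards):
        target = r + gamma * target  # fold rewards in back to front
    return target

# Three unit rewards with gamma=0.5 and a zero bootstrap.
t = n_step_target([1.0, 1.0, 1.0], 0.0, gamma=0.5)
```

Larger N propagates reward information faster at the cost of higher variance; the paper's generalized graph treats N as one axis interpolating between model-free and model-based learning.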

Journal ArticleDOI
TL;DR: This review provides a unifying view of human and robot sharing task execution in scenarios where collaboration and cooperation between the two entities are necessary, and where the physical coupling ofhuman and robot is a vital aspect.
Abstract: As robotic devices are applied to problems beyond traditional manufacturing and industrial settings, we find that interaction between robots and humans, especially physical interaction, has become a fast developing field. Consider the application of robotics in healthcare, where we find telerobotic devices in the operating room facilitating dexterous surgical procedures, exoskeletons in the rehabilitation domain as walking aids and upper-limb movement assist devices, and even robotic limbs that are physically integrated with amputees who seek to restore their independence and mobility. In each of these scenarios, the physical coupling between human and robot, often termed physical human robot interaction (pHRI), facilitates new human performance capabilities and creates an opportunity to explore the sharing of task execution and control between humans and robots. In this review, we provide a unifying view of human and robot sharing task execution in scenarios where collaboration and cooperation between the two entities are necessary, and where the physical coupling of human and robot is a vital aspect. We define three key themes that emerge in these shared control scenarios, namely, intent detection, arbitration, and feedback. First, we explore methods for how the coupled pHRI system can detect what the human is trying to do, and how the physical coupling itself can be leveraged to detect intent. Second, once the human intent is known, we explore techniques for sharing and modulating control of the coupled system between robot and human operator. Finally, we survey methods for informing the human operator of the state of the coupled system, or the characteristics of the environment with which the pHRI system is interacting. 
At the conclusion of the survey, we present two case studies that exemplify shared control in pHRI systems, and specifically highlight the approaches used for the three key themes of intent detection, arbitration, and feedback for applications of upper limb robotic rehabilitation and haptic feedback from a robotic prosthesis for the upper limb. [DOI: 10.1115/1.4039145]
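Of the survey's three themes, arbitration is the most directly algorithmic: once intent is detected, control authority must be shared between human and robot. A common baseline is a linear blend of the two command signals weighted by an arbitration factor. A minimal sketch (this is a generic illustration, not a method from the survey):

```python
def arbitrate(u_human, u_robot, alpha):
    """Linear arbitration of human and robot commands.
    alpha in [0, 1] is the control authority granted to the human:
    alpha=1 is pure teleoperation, alpha=0 is full autonomy."""
    alpha = min(max(alpha, 0.0), 1.0)  # clamp to a valid authority level
    return [alpha * h + (1.0 - alpha) * r for h, r in zip(u_human, u_robot)]
```

In a real pHRI system, alpha would typically be modulated online — for example by the confidence of the intent detector or the proximity to a rehabilitation target.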

Proceedings ArticleDOI
26 Feb 2018
TL;DR: A new design space for communicating robot motion intent is explored by investigating how augmented reality (AR) might mediate human-robot interactions and developing a series of explicit and implicit designs for visually signaling robot motion intent using AR.
Abstract: Humans coordinate teamwork by conveying intent through social cues, such as gestures and gaze behaviors. However, these methods may not be possible for appearance-constrained robots that lack anthropomorphic or zoomorphic features, such as aerial robots. We explore a new design space for communicating robot motion intent by investigating how augmented reality (AR) might mediate human-robot interactions. We develop a series of explicit and implicit designs for visually signaling robot motion intent using AR, which we evaluate in a user study. We found that several of our AR designs significantly improved objective task efficiency over a baseline in which users only received physically-embodied orientation cues. In addition, our designs offer several trade-offs in terms of intent clarity and user perceptions of the robot as a teammate.

Journal ArticleDOI
TL;DR: Here, three key elements of bioinspired soft robots from a mechanics vantage point are reviewed, namely, materials selection, actuation, and design.

Posted Content
TL;DR: In this paper, the authors explore the reality gap in the context of 6-DoF pose estimation of known objects from a single RGB image, and show that for this problem the reality gap can be successfully spanned by a simple combination of domain randomized and photorealistic data.
Abstract: Using synthetic data for training deep neural networks for robotic manipulation holds the promise of an almost unlimited amount of pre-labeled training data, generated safely out of harm's way. One of the key challenges of synthetic data, to date, has been to bridge the so-called reality gap, so that networks trained on synthetic data operate correctly when exposed to real-world data. We explore the reality gap in the context of 6-DoF pose estimation of known objects from a single RGB image. We show that for this problem the reality gap can be successfully spanned by a simple combination of domain randomized and photorealistic data. Using synthetic data generated in this manner, we introduce a one-shot deep neural network that is able to perform competitively against a state-of-the-art network trained on a combination of real and synthetic data. To our knowledge, this is the first deep network trained only on synthetic data that is able to achieve state-of-the-art performance on 6-DoF object pose estimation. Our network also generalizes better to novel environments including extreme lighting conditions, for which we show qualitative results. Using this network we demonstrate a real-time system estimating object poses with sufficient accuracy for real-world semantic grasping of known household objects in clutter by a real robot.
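The core training recipe above — mixing domain-randomized and photorealistic synthetic images — amounts to sampling each batch from two data pools at a chosen ratio. A minimal sketch of such a sampler (names and the fixed 50/50 default are illustrative assumptions, not the paper's exact pipeline):

```python
import random

def sample_batch(dr_pool, pr_pool, batch_size, dr_fraction=0.5, rng=None):
    """Draw a training batch mixing domain-randomized (dr_pool) and
    photorealistic (pr_pool) examples at the given fraction."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    n_dr = round(batch_size * dr_fraction)
    batch = [rng.choice(dr_pool) for _ in range(n_dr)]
    batch += [rng.choice(pr_pool) for _ in range(batch_size - n_dr)]
    rng.shuffle(batch)  # avoid ordering the two sources within the batch
    return batch
```

The mixing fraction becomes a hyperparameter: heavier domain randomization encourages invariance to appearance, while photorealistic samples anchor the network to plausible real-world statistics.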

Journal ArticleDOI
19 Sep 2018-Sensors
TL;DR: The aim of this paper is to succinctly summarize and review the path smoothing techniques in robot navigation and discuss the challenges and future trends.
Abstract: Robot navigation is an indispensable component of any mobile service robot. Many path planning algorithms generate a path with many sharp or angular turns. Such paths are not fit for a mobile robot, as it has to slow down at these sharp turns; the robot could be carrying delicate, dangerous, or precious items, and executing these sharp turns may not be kinematically feasible. In contrast, smooth trajectories are often desired for robot motion and must be generated while considering the static and dynamic obstacles and other constraints like feasible curvature, robot and lane dimensions, and speed. The aim of this paper is to succinctly summarize and review the path smoothing techniques in robot navigation and discuss the challenges and future trends. Both autonomous mobile robots and autonomous vehicles (outdoor robots or self-driving cars) are discussed. The state-of-the-art algorithms are broadly classified into different categories, and each approach is introduced briefly with the necessary background, merits, and drawbacks. Finally, the paper discusses the current and future challenges in optimal trajectory generation and smoothing research.
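One of the simplest path-smoothing baselines the surveyed literature builds on is iterative gradient smoothing: pull each interior waypoint toward the average of its neighbors while penalizing drift from the original path. A minimal sketch (weights alpha and beta are illustrative defaults; obstacle and curvature constraints from the survey are omitted):

```python
def smooth_path(path, alpha=0.5, beta=0.1, tol=1e-6):
    """Iteratively smooth a waypoint path. alpha weights fidelity to
    the original waypoints, beta weights smoothness (pull toward the
    neighbor average). Endpoints are held fixed."""
    new = [list(p) for p in path]
    change = tol
    while change >= tol:
        change = 0.0
        for i in range(1, len(path) - 1):      # interior points only
            for d in range(len(path[0])):      # each coordinate
                old = new[i][d]
                new[i][d] += alpha * (path[i][d] - new[i][d]) \
                           + beta * (new[i - 1][d] + new[i + 1][d]
                                     - 2.0 * new[i][d])
                change += abs(new[i][d] - old)
    return new
```

Raising beta relative to alpha yields rounder corners at the cost of larger deviation from the planned path — exactly the trade-off the survey's curvature constraints formalize.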

Journal ArticleDOI
01 Jan 2018
TL;DR: This review puts in light the elementary components that can be used to develop soft actuators, whether they use fluids, shape memory alloys, electro-active polymers or stimuli-responsive materials, and the manufacturing methods used to build complete soft structures.
Abstract: The growing interest in soft robots comes from the new possibilities offered by these systems to cope with problems that cannot be addressed by robots built from rigid bodies. Many innovative solutions have been developed in recent years to design soft components and systems. They all demonstrate how soft robotics development is closely dependent on advanced manufacturing processes. This review aims at giving an insight on the current state of the art in soft robotics manufacturing. It first puts in light the elementary components that can be used to develop soft actuators, whether they use fluids, shape memory alloys, electro-active polymers or stimuli-responsive materials. Other types of elementary components, such as soft smart structures or soft-rigid hybrid systems, are then presented. The second part of this review deals with the manufacturing methods used to build complete soft structures. It includes molding, with possibly reinforcements and inclusions, additive manufacturing, thin-film manufacturing, shape deposition manufacturing, and bonding. The paper's conclusion sums up the pros and cons of the presented techniques, and opens to developing topics such as design methods for soft robotics and sensing technologies.

Journal ArticleDOI
TL;DR: A physical haptic feedback mechanism is introduced to elicit muscle activity that generates EMG signals in a natural manner, in order to achieve intuitive human impedance transfer through a designed coupling interface.
Abstract: It has been established that the transfer of human adaptive impedance is of great significance for physical human–robot interaction (pHRI). By processing the electromyography (EMG) signals collected from human muscles, the limb impedance can be extracted and transferred to robots. The existing impedance transfer interfaces rely only on visual feedback and, thus, may be insufficient for skill transfer in a sophisticated environment. In this paper, a physical haptic feedback mechanism is introduced to elicit muscle activity that generates EMG signals in a natural manner, in order to achieve intuitive human impedance transfer through a designed coupling interface. Relevant processing methods are integrated into the system, including a spectral collaborative representation-based classification method used for hand motion recognition, and fast smooth envelope extraction and dimensionality reduction algorithms for arm endpoint stiffness estimation. The tutor’s arm endpoint motion trajectory is directly transferred to the robot by the designed coupling module without the restriction of hands. Haptic feedback is provided to the human tutor according to skill learning performance to enhance the teaching experience. The interface has been experimentally tested by a plugging-in task and a cutting task. Compared with the existing interfaces, the developed one has shown a better performance. Note to Practitioners —This paper is motivated by the limited performance of skill transfer in the existing human–robot interfaces. Conventional robots perform tasks independently without interaction with humans. However, the new generation of robots with characteristics such as flexibility and compliance become more involved in interacting with humans. Thus, advanced human–robot interfaces are required to enable robots to learn human manipulation skills. In this paper, we propose a novel interface for human impedance adaptive skill transfer in a natural and intuitive manner.
The developed interface has the following functionalities: 1) it transfers human arm impedance-adaptive motion to the robot intuitively; 2) it senses human motion signals that are decoded into hand gestures and arm endpoint stiffness, which are employed for natural human–robot interaction; and 3) it provides the human tutor with haptic feedback for an enhanced teaching experience. The interface can be potentially used in pHRI, teleoperation, human motor training systems, etc.
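The smooth-envelope step used for stiffness estimation typically means rectifying the raw EMG signal and low-pass filtering it into an amplitude envelope. A minimal sketch with a moving-average filter (the window size and function name are illustrative; the paper's actual algorithm may differ):

```python
def emg_envelope(signal, window=5):
    """Rectify a raw EMG signal and smooth it with a centered moving
    average to obtain an amplitude envelope — a common proxy for
    muscle activation level used in stiffness estimation."""
    rect = [abs(x) for x in signal]  # full-wave rectification
    half = window // 2
    env = []
    for i in range(len(rect)):
        lo, hi = max(0, i - half), min(len(rect), i + half + 1)
        env.append(sum(rect[lo:hi]) / (hi - lo))  # shrinking window at edges
    return env
```

The resulting envelope can then be mapped (after dimensionality reduction) to an endpoint stiffness estimate for transfer to the robot.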

Proceedings ArticleDOI
26 Jun 2018
TL;DR: The authors exploit the full state observability in the simulator to train better policies which take as input only partial observations (RGBD images) by employing an actor-critic training algorithm in which the critic is trained on full states while the actor (or policy) gets rendered images as input.
Abstract: Deep reinforcement learning (RL) has proven a powerful technique in many sequential decision making domains. However, Robotics poses many challenges for RL, most notably training on a physical system can be expensive and dangerous, which has sparked significant interest in learning control policies using a physics simulator. While several recent works have shown promising results in transferring policies trained in simulation to the real world, they often do not fully utilize the advantage of working with a simulator. In this work, we exploit the full state observability in the simulator to train better policies which take as input only partial observations (RGBD images). We do this by employing an actor-critic training algorithm in which the critic is trained on full states while the actor (or policy) gets rendered images as input. We show experimentally on a range of simulated tasks that using these asymmetric inputs significantly improves performance. Finally, we combine this method with domain randomization and show real robot experiments for several tasks like picking, pushing, and moving a block. We achieve this simulation to real world transfer without training on any real world data.
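The asymmetric-input idea above can be captured in a small structural sketch: the critic consumes the full simulator state, the actor only the partial observation, so only the actor needs to run on the real robot. Linear function approximators stand in for the deep networks here, and the update rule is a generic TD-style step rather than the paper's exact algorithm (all names are illustrative):

```python
import numpy as np

class AsymmetricActorCritic:
    """Sketch of asymmetric actor-critic: critic sees the full state
    (privileged, simulator-only), actor sees the partial observation."""

    def __init__(self, state_dim, obs_dim, act_dim, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w_critic = rng.normal(scale=0.1, size=state_dim)        # V(s)
        self.w_actor = rng.normal(scale=0.1, size=(act_dim, obs_dim))
        self.lr = lr

    def value(self, state):          # critic input: full state
        return float(self.w_critic @ np.asarray(state, dtype=float))

    def act(self, obs):              # actor input: partial observation only
        return self.w_actor @ np.asarray(obs, dtype=float)

    def update(self, state, obs, action, reward, next_state):
        # TD error computed from the privileged critic drives both updates.
        td = reward + 0.99 * self.value(next_state) - self.value(state)
        self.w_critic += self.lr * td * np.asarray(state, dtype=float)
        # Nudge the actor's output toward the taken action, scaled by td.
        self.w_actor += self.lr * td * np.outer(
            np.asarray(action, dtype=float) - self.act(obs),
            np.asarray(obs, dtype=float))
        return td
```

At deployment time only `act` is needed, which is why the privileged critic can exploit full state without compromising sim-to-real transfer.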