scispace - formally typeset

Showing papers on "Robot kinematics published in 2019"


Journal ArticleDOI
TL;DR: This paper investigates fuzzy neural network (FNN) control using impedance learning for coordinated multiple constrained robots carrying a common object in the presence of the unknown robotic dynamics and the unknown environment with which the robot comes into contact.
Abstract: In this paper, we investigate fuzzy neural network (FNN) control using impedance learning for coordinated multiple constrained robots carrying a common object in the presence of the unknown robotic dynamics and the unknown environment with which the robot comes into contact. First, an FNN learning algorithm is developed to identify the unknown plant model. Second, impedance learning is introduced to regulate the control input in order to improve the environment–robot interaction, and the robot can track the desired trajectory generated by impedance learning. Third, in light of the condition requiring the robot to move in a finite space or to move at a limited velocity in a finite space, algorithms based on the position constraint and the velocity constraint are proposed, respectively. To guarantee the position constraint and the velocity constraint, an integral barrier Lyapunov function is introduced to avoid violation of the constraints. According to Lyapunov’s stability theory, it can be proved that the tracking errors are uniformly ultimately bounded. Finally, some simulation examples are carried out to verify the effectiveness of the designed control.

199 citations
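Barrier Lyapunov functions of this kind grow unbounded as the constrained quantity approaches its bound, which is how boundedness of the Lyapunov function along closed-loop trajectories enforces the constraint. A representative construction from the BLF literature (the paper's exact definition may differ):

```latex
% A log-type barrier Lyapunov function on the error z = x - x_d, with |z| < k_b:
\[ V_{\log}(z) = \tfrac{1}{2}\,\ln\!\frac{k_b^{2}}{k_b^{2}-z^{2}} \]
% An integral BLF instead constrains the state x = z + x_d directly:
\[ V_{\mathrm{iBLF}}(z,x_d) = \int_{0}^{z}\frac{\sigma\,k_b^{2}}{k_b^{2}-(\sigma+x_d)^{2}}\,\mathrm{d}\sigma \]
% Both tend to infinity as the constrained quantity approaches +/- k_b, so keeping
% V bounded guarantees the constraint is never violated.
```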


Journal ArticleDOI
16 Jan 2019
TL;DR: A tight bound for approximation of collision probability is developed, which makes the CCNMPC formulation tractable and solvable in real time.
Abstract: Safe autonomous navigation of micro air vehicles in cluttered dynamic environments is challenging due to the uncertainties arising from robot localization, sensing, and motion disturbances. This letter presents a probabilistic collision avoidance method for navigation among other robots and moving obstacles, such as humans. The approach explicitly considers the collision probability between each robot and obstacle and formulates a chance-constrained nonlinear model predictive control (CCNMPC) problem. A tight bound for approximation of collision probability is developed, which makes the CCNMPC formulation tractable and solvable in real time. For multirobot coordination, we describe three approaches, one distributed without communication (constant velocity assumption), one distributed with communication (of previous plans), and one centralized (sequential planning). We evaluate the proposed method in experiments with two quadrotors sharing the space with two humans and verify the multirobot coordination strategy in simulation with up to sixteen quadrotors.

179 citations
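The chance-constraint idea can be illustrated with a minimal sketch: if the robot-obstacle relative position is Gaussian, a linearized bound on the collision probability follows from the normal CDF evaluated along the line between the means. This is a generic textbook-style bound, not the paper's tighter one; the function names and the isotropic-covariance assumption are illustrative.

```python
import math

def gaussian_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def collision_chance_bound(rel_mean, sigma, radius):
    """Bound the probability that robot and obstacle, whose relative position
    is ~ N(rel_mean, sigma^2 I), come within `radius` of each other.
    Projects onto the line through the means (a common linearization)."""
    dist = math.hypot(rel_mean[0], rel_mean[1])
    # linearized chance constraint: P(a^T p <= radius), a = unit vector along mean
    return gaussian_cdf((radius - dist) / sigma)
```

A planner would reject any candidate trajectory whose bound exceeds the allowed collision risk at some predicted time step.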


Journal ArticleDOI
TL;DR: The design, modeling, and control of a soft continuum robot with a tip extension degree of freedom is described that enables extremely simple navigation of the robot through decoupled steering and forward movement.
Abstract: Soft continuum robots exhibit access and manipulation capabilities in constrained and cluttered environments not achievable by traditional robots. However, navigation of these robots can be difficult due to the kinematics of these devices. Here we describe the design, modeling, and control of a soft continuum robot with a tip extension degree of freedom. This design enables extremely simple navigation of the robot through decoupled steering and forward movement. To navigate to a destination, the robot is steered to point at the destination and the extension degree of freedom is used to reach it. Movement of the tip is always in the direction tangent to the end of the robot's backbone, independent of the shape of the rest of the backbone. Steering occurs by inflating multiple series pneumatic artificial muscles arranged radially around the backbone and extending along the robot's whole length, while extension is implemented using pneumatically driven tip eversion. We present models and experimentally verify the growing robot kinematics. Control of the growing robot is demonstrated using an eye-in-hand visual servo control law that enables growth and steering of the robot to designated locations.

138 citations


Journal ArticleDOI
28 Jan 2019
TL;DR: This letter proposes a method to collect data from robot-terrain interaction and associate it with images, and shows that the data collected can be used to train a convolutional network for terrain property prediction as well as weakly supervised semantic segmentation.
Abstract: Legged robots have the potential to traverse diverse and rugged terrain. To find a safe and efficient navigation path and to carefully select individual footholds, it is useful to be able to predict properties of the terrain ahead of the robot. In this letter, we propose a method to collect data from robot-terrain interaction and associate it with images. Using sparse data acquired in teleoperation experiments with a quadrupedal robot, we train a neural network to generate a dense prediction of the terrain properties in front of the robot. To generate training data, we project the foothold positions from the robot trajectory into on-board camera images. We then attach labels to these footholds by identifying the dominant features of the force–torque signal measured with sensorized feet. We show that data collected in this fashion can be used to train a convolutional network for terrain property prediction as well as weakly supervised semantic segmentation. Finally, we show that the predicted terrain properties can be used for autonomous navigation of the ANYmal quadruped robot.

131 citations
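The training-data step described above (projecting foothold positions from the robot trajectory into on-board camera images) is standard pinhole-camera geometry; a minimal sketch, with hypothetical intrinsics and extrinsics:

```python
import numpy as np

def project_footholds(points_world, T_cam_world, K):
    """Project 3-D foothold positions (N, 3) into pixel coordinates (N, 2).
    T_cam_world: 4x4 world-to-camera transform; K: 3x3 camera intrinsics."""
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (T_cam_world @ pts_h.T)[:3]   # camera-frame coordinates
    uv = K @ pts_cam                        # homogeneous pixel coordinates
    return (uv[:2] / uv[2]).T               # perspective divide
```

Each projected pixel would then be labeled with the terrain property extracted from the force-torque signal at that foothold.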


Journal ArticleDOI
15 Feb 2019
TL;DR: The superiority of wheeled-legged robots compared to their legged counterparts is proved with a speed of 4 m/s and a reduction of the cost of transport by 83%, and dynamic locomotion strategies for wheeled quadrupedal robots that combine the advantages of both walking and driving are shown.
Abstract: We show dynamic locomotion strategies for wheeled quadrupedal robots that combine the advantages of both walking and driving. The developed optimization framework tightly integrates the additional degrees of freedom introduced by the wheels. Our approach relies on a zero-moment point-based motion optimization which continuously updates reference trajectories. The reference motions are tracked by a hierarchical whole-body controller which computes optimal generalized accelerations and contact forces by solving a sequence of prioritized tasks including the nonholonomic rolling constraints. Our approach has been tested on ANYmal, a quadrupedal robot that is fully torque-controlled including the nonsteerable wheels attached to its legs. We conducted experiments on flat and inclined terrains as well as over steps, whereby we show that integrating the wheels into the motion control and planning framework results in intuitive motion trajectories, which enable more robust and dynamic locomotion compared to other wheeled-legged robots. Moreover, with a speed of 4 m/s and a reduction of the cost of transport by 83%, we prove the superiority of wheeled-legged robots compared to their legged counterparts.

101 citations
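The cost of transport quoted above is the standard dimensionless efficiency measure CoT = P / (m g v); a one-line sketch (the numeric values used in any example here are illustrative, not the paper's):

```python
def cost_of_transport(power_w, mass_kg, speed_mps, g=9.81):
    """Dimensionless cost of transport: CoT = P / (m * g * v).
    Lower is more efficient; an 83% reduction means the wheeled-legged
    CoT is roughly 0.17 times the purely legged value at the same mass."""
    return power_w / (mass_kg * g * speed_mps)
```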


Proceedings ArticleDOI
11 Mar 2019
TL;DR: This work explores how advances in augmented reality (AR) may enable the design of novel teleoperation interfaces that increase operation effectiveness, support the user in conducting concurrent work, and decrease stress, and presents two AR interfaces using such a surrogate: one focused on real-time control and one inspired by waypoint delegation.
Abstract: Teleoperation remains a dominant control paradigm for human interaction with robotic systems. However, teleoperation can be quite challenging, especially for novice users. Even experienced users may face difficulties or inefficiencies when operating a robot with unfamiliar and/or complex dynamics, such as industrial manipulators or aerial robots, as teleoperation forces users to focus on low-level aspects of robot control, rather than higher level goals regarding task completion, data analysis, and problem solving. We explore how advances in augmented reality (AR) may enable the design of novel teleoperation interfaces that increase operation effectiveness, support the user in conducting concurrent work, and decrease stress. Our key insight is that AR may be used in conjunction with prior work on predictive graphical interfaces such that a teleoperator controls a virtual robot surrogate, rather than directly operating the robot itself, providing the user with foresight regarding where the physical robot will end up and how it will get there. We present the design of two AR interfaces using such a surrogate: one focused on real-time control and one inspired by waypoint delegation. We compare these designs against a baseline teleoperation system in a laboratory experiment in which novice and expert users piloted an aerial robot to inspect an environment and analyze data. Our results revealed that the augmented reality prototypes provided several objective and subjective improvements, demonstrating the promise of leveraging AR to improve human-robot interactions.

84 citations


Journal ArticleDOI
17 Jan 2019
TL;DR: This work proposes an eye-in-hand visual servo that incorporates a learning-based controller to accomplish more precise robotic tasks and demonstrates that the hyperelastic robot can compensate for an external variable load during trajectory tracking.
Abstract: Soft robots, owing to their elastomeric material, ensure safe interaction with their surroundings. These compliance properties inevitably impose a tradeoff against precise motion control, for which conventional model-based methods were proposed to approximate the robot kinematics. However, many parameters regarding robot deformation and external disturbance are difficult, if not impossible, to obtain, and the underlying relationships can be highly nonlinear. Sensors self-contained in the robot are required to compensate for modeling uncertainties and external disturbances. A camera (eye) integrated at the robot end-effector (hand) is a common setting. To this end, we propose an eye-in-hand visual servo that incorporates a learning-based controller to accomplish more precise robotic tasks. Local Gaussian process regression is used to initialize and refine the inverse mappings online, without prior knowledge of robot and camera parameters. Experimental validation is also conducted to demonstrate that the hyperelastic robot can compensate for an external variable load during trajectory tracking.

83 citations


Journal ArticleDOI
01 Apr 2019
TL;DR: In this article, a passive whole-body control approach for quadruped robots that achieves dynamic locomotion while compliantly balancing the robot's trunk is presented; the approach is superior to the current state of the art for accurate execution of highly dynamic motions.
Abstract: We present experimental results using a passive whole-body control approach for quadruped robots that achieves dynamic locomotion while compliantly balancing the robot's trunk. We formulate the motion tracking as a quadratic program that takes into account the full robot rigid body dynamics, the actuation limits, the joint limits, and the contact interaction. We analyze the controller's robustness against inaccurate friction coefficient estimates and unstable footholds, as well as its capability to redistribute the load as a consequence of enforcing actuation limits. Additionally, we present practical implementation details gained from the experience with the real platform. Extensive experimental trials on the 90 kg hydraulically actuated quadruped robot validate the capabilities of this controller under various terrain conditions and gaits. The proposed approach is superior for accurate execution of highly dynamic motions with respect to the current state of the art.

80 citations


Journal ArticleDOI
25 Jul 2019
TL;DR: This letter presents a learning architecture for navigation in cloud robotic systems, Lifelong Federated Reinforcement Learning (LFRL), which proposes a knowledge fusion algorithm for upgrading a shared model deployed on the cloud and introduces effective transfer learning methods.
Abstract: This letter was motivated by the problem of how to make robots fuse and transfer their experience so that they can effectively use prior knowledge and quickly adapt to new environments. To address the problem, we present a learning architecture for navigation in cloud robotic systems: Lifelong Federated Reinforcement Learning (LFRL). In the letter, we propose a knowledge fusion algorithm for upgrading a shared model deployed on the cloud. Then, effective transfer learning methods in LFRL are introduced. LFRL is consistent with human cognitive science and fits well in cloud robotic systems. Experiments show that LFRL greatly improves the efficiency of reinforcement learning for robot navigation. The cloud robotic system deployment also shows that LFRL is capable of fusing prior knowledge. In addition, we release a cloud robotic navigation-learning website to provide the service based on LFRL: www.shared-robotics.com.

79 citations


Proceedings ArticleDOI
20 May 2019
TL;DR: A new Model Predictive Control (MPC) framework for controlling various dynamic movements of a quadrupedal robot is presented, which linearizes rotation matrices without resorting to parameterizations like Euler angles and quaternions, avoiding issues of singularity and unwinding phenomenon.
Abstract: This paper presents a new Model Predictive Control (MPC) framework for controlling various dynamic movements of a quadrupedal robot. System dynamics are represented by linearizing single rigid body dynamics in three-dimensional (3D) space. Our formulation linearizes rotation matrices without resorting to parameterizations like Euler angles and quaternions, avoiding issues of singularity and unwinding phenomenon, respectively. With a carefully chosen configuration error function, the MPC control law is transcribed into a Quadratic Program (QP) which can be solved efficiently in realtime. Our formulation can stabilize a wide range of periodic quadrupedal gaits and acrobatic maneuvers. We show various simulation as well as experimental results to validate our control strategy. Experiments prove the application of this framework with a custom QP solver could reach execution rates of 160 Hz on embedded platforms.

73 citations
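The rotation-matrix linearization mentioned above can be sketched as a first-order variation on SO(3): R is approximated by R0(I + hat(phi)) for a small rotation vector phi, which keeps the predicted dynamics linear in phi without Euler angles or quaternions. A minimal sketch of that approximation (not the paper's exact configuration error function):

```python
import numpy as np

def hat(w):
    """Skew-symmetric (hat) map for w in R^3."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Exponential map R = exp(hat(w)) via Rodrigues' formula."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    W = hat(w / th)
    return np.eye(3) + np.sin(th) * W + (1.0 - np.cos(th)) * (W @ W)

# First-order variation about a nominal rotation R0:
#   R0 @ exp_so3(phi)  ~  R0 @ (I + hat(phi))   for small phi,
# which is what lets the MPC constraints stay linear in phi.
```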


Proceedings ArticleDOI
20 May 2019
TL;DR: ALMA, a motion planning and control framework for a torque-controlled quadrupedal robot equipped with a six degrees of freedom robotic arm capable of performing dynamic locomotion while executing manipulation tasks, is presented.
Abstract: The task of robotic mobile manipulation poses several scientific challenges that need to be addressed to execute complex manipulation tasks in unstructured environments, in which collaboration with humans might be required. Therefore, we present ALMA, a motion planning and control framework for a torque-controlled quadrupedal robot equipped with a six degrees of freedom robotic arm capable of performing dynamic locomotion while executing manipulation tasks. The online motion planning framework, together with a whole-body controller based on a hierarchical optimization algorithm, enables the system to walk, trot and pace while executing operational space end-effector control, reactive human-robot collaboration and torso posture optimization to increase the arm’s workspace. The torque control of the whole system enables the implementation of compliant behavior, allowing a user to safely interact with the robot. We verify our framework on the real robot by performing tasks such as opening a door and carrying a payload together with a human.

Journal ArticleDOI
TL;DR: This paper proposes a brain–computer interface (BCI)-based teleoperation strategy for a dual-arm robot carrying a common object by multifingered hands based on motor imagery of the human brain, which utilizes common spatial pattern method to analyze the filtered electroencephalograph signals.
Abstract: This paper proposes a brain–computer interface (BCI)-based teleoperation strategy for a dual-arm robot carrying a common object by multifingered hands. The BCI is based on motor imagery of the human brain, which utilizes the common spatial pattern method to analyze the filtered electroencephalograph signals. Human intentions can be recognized and classified into the corresponding reference commands in task space for the robot according to phenomena of event-related synchronization/desynchronization, such that the object manipulation tasks guided by the human user’s mind can be achieved. Subsequently, a concise dynamics consisting of the dynamics of the robotic arms and the geometrical constraints between the end-effectors and the object is formulated for the coordinated dual arm. To achieve optimal motion in the task space, a redundancy resolution at velocity level has been implemented through neural-dynamics optimization. Extensive experiments have been conducted with a number of subjects, and the results are provided to demonstrate the effectiveness of the proposed control strategy.
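Velocity-level redundancy resolution can be illustrated by the classical pseudoinverse-plus-nullspace formula q_dot = J+ x_dot + (I - J+ J) z; the paper solves this through neural-dynamics optimization, which this simplified closed-form sketch stands in for:

```python
import numpy as np

def redundancy_resolution(J, x_dot, z=None):
    """Velocity-level redundancy resolution for a redundant arm:
    q_dot = J^+ x_dot + (I - J^+ J) z, where z is an arbitrary joint
    velocity projected into the Jacobian null space (secondary objective)."""
    J_pinv = np.linalg.pinv(J)
    n = J.shape[1]
    q_dot = J_pinv @ x_dot
    if z is not None:
        q_dot = q_dot + (np.eye(n) - J_pinv @ J) @ z
    return q_dot
```

The null-space term changes the joint motion without disturbing the task-space velocity, which is what allows a secondary objective to be optimized.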

Proceedings ArticleDOI
20 May 2019
TL;DR: Ascento is introduced, a compact wheeled bipedal robot that is able to move quickly on flat terrain, and to overcome obstacles by jumping, as well as the development of various controllers for different scenarios.
Abstract: Applications of mobile ground robots demand high speed and agility while navigating in complex indoor environments. These present an ongoing challenge in mobile robotics. A system with these specifications would be of great use for a wide range of indoor inspection tasks. This paper introduces Ascento, a compact wheeled bipedal robot that is able to move quickly on flat terrain, and to overcome obstacles by jumping. The mechanical design and overall architecture of the system is presented, as well as the development of various controllers for different scenarios. A series of experiments with the final prototype system validates these behaviors in realistic scenarios. (Video accompanying the paper: https://youtu.be/U8bIsUPX1ZU)

Proceedings ArticleDOI
20 May 2019
TL;DR: This paper proposes Parameterized Action Partially Observable Monte-Carlo Planning (PA-POMCP), an algorithm that evaluates manipulation actions by taking into account the effect of the robot’s current belief on the success of the action execution.
Abstract: The problem of finding and grasping a target object in a cluttered, uncertain environment, target object search, is a common and important problem in robotics. One key challenge is the uncertainty of locating and recognizing each object in a cluttered environment due to noisy perception and occlusions. Furthermore, the uncertainty in localization makes manipulation difficult and uncertain. To cope with these challenges, we formulate the target object search task as a partially observable Markov decision process (POMDP), enabling the robot to reason about perceptual and manipulation uncertainty while searching. To further address the manipulation difficulty, we propose Parameterized Action Partially Observable Monte-Carlo Planning (PA-POMCP), an algorithm that evaluates manipulation actions by taking into account the effect of the robot’s current belief on the success of the action execution. In addition, a novel run-time initial belief generator and a state value estimator are introduced in this paper to facilitate the PA-POMCP algorithm. Our experiments show that our methods solve the target object search task in settings where simpler methods either require more object movements or fail.

Proceedings ArticleDOI
20 May 2019
TL;DR: This paper proposes and implements end-to-end transfer learning for a Convolutional Neural Network-based object detection architecture that outperforms or is on par with state-of-the-art methods on a benchmark dataset.
Abstract: In this paper, we focus on the robot grasping problem with parallel grippers using image data. For this task, we propose and implement an end-to-end approach. In order to detect good grasping poses for a parallel gripper from RGB images, we have employed transfer learning for a Convolutional Neural Network (CNN)-based object detection architecture. Our results show that the adapted network either outperforms or is on par with state-of-the-art methods on a benchmark dataset. We also performed grasping experiments on a real robot platform to evaluate our method’s real-world performance.

Journal ArticleDOI
TL;DR: A new method for cooperative autonomous localization among air-ground robots in a wide-ranging outdoor industrial environment that outperforms most consumer sensors in accuracy and also achieves an outstanding running time.
Abstract: Multiple mobile robots have gradually played a key role in many industrial systems, such as factory freight logistics systems, patrol security in the factory environment, and multirobot collaborative service and work. As a key issue in industrial environment perception, accurate robot localization can enhance their autonomous ability and is an important branch of robotic studies in artificial intelligence. In this paper, we propose a new method for cooperative autonomous localization among air-ground robots in a wide-ranging outdoor industrial environment. The aerial robot first maps an area of interest and achieves self-localization. Then the aerial robot transfers a simplified orthogonal perspective 2.5-D map to the ground robots for collaboration. Within the collaboration, the ground robot achieves pose estimation with respect to the unmanned aerial vehicle pose by instantaneously registering a single panorama with respect to the 2.5-D map. The 2.5-D map is used as the spatial association among air-ground robots. The ground robots estimate the orientation using automatically detected geometric information and generate the translation by aligning the 2.5-D map with a semantic segmentation of the panorama. Our method effectively overcomes the dramatic differences between the air-level view and the ground-level view. A set of experiments is performed in the outdoor industrial environment to demonstrate the applicability of our localization method. The proposed robotic collaborative localization outperforms most consumer sensors in accuracy and also achieves an outstanding running time.

Journal ArticleDOI
Weitian Wang1, Rui Li1, Zachary Max Diekel1, Yi Chen1, Zhujun Zhang1, Yunyi Jia1 
TL;DR: This paper develops a practical approach using a wearable sensory system that could make a robot recognize a human's hand-over intentions and enable the human to effectively and naturally control the hand- over process.
Abstract: With the deployment of collaborative robots in intelligent manufacturing, object hand-over between humans and robots plays a significant role in human–robot collaborations. In most collaboration studies, human hand-over intentions were usually assumed to be known by the robot, and the research mainly focused on robot motion planning and control during the hand-over process. Several approaches have been developed to control the human–robot hand-over, such as vision-based and physical contact-based approaches, but their applications in manufacturing environments are limited due to various constraints, such as limited human working ranges and safety concerns. In this paper, we develop a practical approach using a wearable sensory system, which has a natural and simple configuration and can be easily utilized by humans. This approach could make a robot recognize a human's hand-over intentions and enable the human to effectively and naturally control the hand-over process. In addition, the approach could recognize the attribute classes of the objects in the human's hand using the wearable sensing and enable the robot to actively make decisions to ensure that graspable objects are handed over from the human to the robot. Results and evaluations illustrate the effectiveness and advantages of the proposed approach in human–robot hand-over control.

Journal ArticleDOI
17 Jan 2019
TL;DR: In this letter, kinematic modeling for a tendon-actuated continuum robot with three extensible segments is investigated, with a focus on comparing two of the most widely used modeling approaches for both free-space and loaded configurations.
Abstract: Continuum robots actuated by tendons are a widely researched robot design offering high dexterity and large workspaces relative to their volume. Their flexible and compliant structure can be easily miniaturized, making them predestined for applications in difficult-to-reach and confined spaces. Adaptation of this specific robot design includes extensible segments, leading to even higher manipulability and enabling so-called follow-the-leader motions of the manipulator. In this letter, kinematic modeling for a tendon-actuated continuum robot with three extensible segments is investigated. The focus is on the comparison of two of the most widely used modeling approaches for both free-space and loaded configurations. Through extensive experimental validation, the modeling performances are assessed qualitatively and quantitatively in terms of shape deviation, Euclidean error at segment ends, and computation time. While Cosserat rod modeling is slightly more accurate than beam mechanics modeling, the latter presents significantly lower computation time.
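Neither Cosserat rod nor beam mechanics models fit in a few lines, but the closed-form constant-curvature kinematics that such models refine is compact; a planar single-segment sketch (frame conventions assumed):

```python
import math

def pcc_tip_pose(kappa, length):
    """Planar tip pose (x, y, theta) of a single constant-curvature segment
    of arc length `length` and curvature `kappa`, starting at the origin
    and initially pointing along the x-axis."""
    if abs(kappa) < 1e-12:
        return (length, 0.0, 0.0)        # straight segment
    theta = kappa * length               # total bending angle of the arc
    return (math.sin(theta) / kappa,
            (1.0 - math.cos(theta)) / kappa,
            theta)
```

For an extensible segment, `length` itself becomes an actuated variable alongside the curvature.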

Journal ArticleDOI
09 Jan 2019
TL;DR: In this paper, a reinforcement learning-based local navigation policy is proposed to handle the robot freezing and the navigation lost problems simultaneously in dense crowds, where the robot dynamically chooses to approach a set of recovery positions with rich features.
Abstract: Our goal is to enable a mobile robot to navigate through environments with dense crowds, e.g., shopping malls, canteens, train stations, or airport terminals. In these challenging environments, existing approaches suffer from two common problems: the robot may get frozen and cannot make any progress toward its goal, or it may get lost due to severe occlusions inside a crowd. Here, we propose a navigation framework that handles the robot freezing and the navigation lost problems simultaneously. First, we enhance the robot's mobility and unfreeze the robot in the crowd using a reinforcement learning-based local navigation policy developed in our previous work, which naturally takes into account the coordination between robots and humans. Second, the robot takes advantage of its excellent local mobility to recover from its localization failure. In particular, it dynamically chooses to approach a set of recovery positions with rich features. To the best of our knowledge, our method is the first approach that simultaneously solves the freezing problem and the navigation lost problem in dense crowds. We evaluate our method in both simulated and real-world environments and demonstrate that it outperforms state-of-the-art approaches. Videos are available at https://sites.google.com/view/rlslam.

Journal ArticleDOI
04 Feb 2019
TL;DR: This letter presents a framework for planning and perception for multi-robot exploration in large and unstructured three-dimensional environments and demonstrates that the proposed system is able to maintain efficiency and completeness in exploration while only requiring a low rate of communication.
Abstract: This letter presents a framework for planning and perception for multi-robot exploration in large and unstructured three-dimensional environments. We employ a Gaussian mixture model for global mapping to model complex environment geometries while maintaining a small memory footprint which enables distributed operation with a low volume of communication. We then generate a local occupancy grid for use in planning from the Gaussian mixture model using Monte Carlo ray tracing. Then, a finite-horizon, information-based planner uses this local map and optimizes sequences of observations locally while accounting for the global distribution of information in the robot state space which we model using a library of informative views. Simulation results demonstrate that the proposed system is able to maintain efficiency and completeness in exploration while only requiring a low rate of communication.
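The memory advantage of a Gaussian mixture map comes from storing only K weights, means, and covariances instead of a dense voxel grid; occupancy-style queries then reduce to evaluating the mixture density. A minimal sketch (interfaces hypothetical):

```python
import numpy as np

def gmm_density(points, weights, means, covs):
    """Evaluate a Gaussian-mixture density at query points (N, d).
    weights: (K,), means: (K, d), covs: (K, d, d)."""
    n, d = points.shape
    out = np.zeros(n)
    for w, mu, S in zip(weights, means, covs):
        diff = points - mu
        S_inv = np.linalg.inv(S)
        norm = w / np.sqrt(((2.0 * np.pi) ** d) * np.linalg.det(S))
        # quadratic form (x - mu)^T S^-1 (x - mu) for every query point
        quad = np.einsum('nd,de,ne->n', diff, S_inv, diff)
        out += norm * np.exp(-0.5 * quad)
    return out
```

Monte Carlo ray tracing against such a model samples points along each ray and thresholds this density to build a local occupancy grid for planning.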

Journal ArticleDOI
TL;DR: In this paper, the vector-field-inequalities method is extended to provide dynamic active-constraints to any number of robots and moving objects sharing the same workspace, and the method is evaluated with experiments and simulations in which robot tools have to avoid collisions autonomously and in real time, in a constrained endonasal surgical environment.
Abstract: Robotic assistance allows surgeons to perform dexterous and tremor-free procedures, but robotic aid is still underrepresented in procedures with constrained workspaces, such as deep brain neurosurgery and endonasal surgery. In these procedures, surgeons have restricted vision to areas near the surgical tooltips, which increases the risk of unexpected collisions between the shafts of the instruments and their surroundings. In this paper, our vector-field-inequalities method is extended to provide dynamic active-constraints to any number of robots and moving objects sharing the same workspace. The method is evaluated with experiments and simulations in which robot tools have to avoid collisions autonomously and in real time, in a constrained endonasal surgical environment. Simulations show that with our method the combined trajectory error of two robotic systems is optimal. Experiments using a real robotic system show that the method can autonomously prevent collisions between the moving robots themselves and between the robots and the environment. Moreover, the framework is also successfully verified under teleoperation with tool–tissue interactions.

Journal ArticleDOI
01 Feb 2019
TL;DR: This paper discusses the Tactile Robot, connected with human operators via smart wearables as an essential multimodal embodiment of the coming Tactile Internet, addresses major challenges, and hypothesizes about potential solutions.
Abstract: In this paper, we discuss and speculate about the concept of the Tactile Robot connected with human operators via smart wearables as an essential multimodal embodiment of the coming Tactile Internet. The Tactile Robot, succeeding the recently introduced kinesthetic soft robot, is the upcoming next step in the evolution of rapidly developing robotic platforms that are capable of sensitive physical interaction with their environment. From the combination of rich tactile feedback with state-of-the-art robotics, technology, and algorithms emerge the potential of a meaningful and immersive connection to human operators via the vastly progressing smart wearables and virtual reality/augmented reality devices, effectively creating real-world avatars. Moreover, the Tactile Internet is believed to make it possible to create avatar collectives spanning different application domains and, therefore, cover heterogeneous robotic platforms. We hypothesize that this development will enable us to seamlessly interact with heterogeneous systems such as industrial assembly lines, service robots, automated medical units, or even deep sea and space exploration units. This new paradigm of an immersive coexistence between humans and robots builds on numerous technological advances in robotics, multimodal teleoperation, wearable technology, distributed computing, or network technology, for example. However, such a vision obviously poses major challenges in multiple areas that are still to be overcome. In this paper, we discuss the potentials and enabling technologies together with foreseeable application domains in the framework of the Tactile Internet. Furthermore, we address major challenges and hypothesize about potential solutions.

Journal ArticleDOI
24 Jan 2019
TL;DR: This letter revisits a classic problem by formulating a collision-avoidance framework and composing it with a nominal controller; experimental results show the efficacy of this framework on a light detection and ranging (LIDAR)-equipped differential-drive robot in a real-time obstacle-avoidance scenario.
Abstract: Robots are entering an age of ubiquity, and to operate effectively, these systems must typically satisfy a series of constraints (e.g., collision avoidance, obeying speed limits, maintaining connectivity). In addition, modern applications hinge on the completion of particular tasks, such as driving to a certain location or monitoring a crop patch. The dichotomy between satisfying constraints and completing objectives creates a need for constraint-satisfaction frameworks that are composable with a pre-existing primary objective. Barrier functions have recently emerged as a practical and composable method for constraint satisfaction, and prior results demonstrate a system of Boolean logic for nonsmooth barrier functions as well as a composable controller-synthesis framework; however, this prior work does not consider dynamically changing constraints (e.g., a robot sensing and avoiding an obstacle). Consequently, the main theoretical contribution of this letter extends nonsmooth barrier functions to time-varying barrier functions with jumps. In a practical instantiation of the theoretical main results, this letter revisits a classic problem by formulating a collision-avoidance framework and composing it with a nominal controller. Experimental results show the efficacy of this framework on a light detection and ranging (LIDAR)-equipped differential-drive robot in a real-time obstacle-avoidance scenario.
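The letter's contribution is a time-varying nonsmooth barrier-function framework; as a rough sketch of the underlying safety-filter idea it builds on, the following minimal smooth control-barrier-function filter keeps a single-integrator robot away from one moving obstacle. The function name, gains, and closed-form single-constraint QP solution are illustrative assumptions, not the letter's formulation:

```python
import numpy as np

def cbf_filter(x, u_nom, p_obs, v_obs, d_safe=0.5, gamma=1.0):
    """Minimally modify u_nom so a single-integrator robot (x_dot = u)
    keeps the barrier h = ||x - p_obs||^2 - d_safe^2 >= 0 against a
    moving obstacle with velocity v_obs (illustrative sketch)."""
    diff = x - p_obs
    h = diff @ diff - d_safe**2
    a = 2.0 * diff                       # gradient of h w.r.t. x
    c = 2.0 * diff @ v_obs - gamma * h   # safety constraint: a @ u >= c
    if a @ u_nom >= c:
        return u_nom                     # nominal input is already safe
    # closed-form solution of the QP  min ||u - u_nom||^2  s.t.  a @ u >= c
    return u_nom + (c - a @ u_nom) / (a @ a) * a
```

For the unsafe case the filter returns the closest input to the nominal one that satisfies the constraint, which is what QP-based barrier-function synthesis computes when a single constraint is active.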

Journal ArticleDOI
TL;DR: In this paper, a piezoelectric robot with four parallel legs operating in a rowing mechanism was presented and tested, and the robot achieved maximum output speeds of 584 and 614 μm/s along the X and Y axes, respectively, under a voltage of 400 Vp-p and a frequency of 100 Hz.
Abstract: A piezoelectric robot with four parallel legs operating in a rowing mechanism was presented and tested. The driving feet of the legs moved in a triangular trajectory, and a pair of legs on the same axis operated like rowing oars to move the robot step-by-step through the static friction forces between the driving feet and the platform. A kinematic model of the motion of the piezoelectric legs was established based on the Timoshenko beam theory and the Galerkin procedure, and was used to design and optimize the structure of the robot. A prototype was fabricated and its experimental system was established. The measured motion trajectory of one driving foot agreed well with the results calculated using the kinematic model in terms of both magnitude and trajectory. Nanopositioning capability was achieved with a resolution of 16 nm. The experimental results showed that the output speed was linearly related to the voltages of the excitation signals, and motion along any direction could be achieved by changing the applied voltages. The prototype achieved maximum output speeds of 584 and 614 μm/s along the X and Y axes, respectively, under a voltage of 400 Vp-p and a frequency of 100 Hz. Furthermore, a carrying capacity of 25 kg was achieved. The proposed piezoelectric robot could be used to transport heavy devices with nanopositioning accuracy.
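The reported linear voltage-speed relation suggests a simple open-loop mapping from the two axis excitation voltages to a planar velocity. The sketch below assumes linear gains fitted from the reported maxima (584 and 614 μm/s at 400 Vp-p); the helper and its gains are hypothetical, not part of the paper:

```python
import math

# Assumed linear gains derived from the reported maximum speeds.
K_X = 584.0 / 400.0   # um/s per volt (peak-to-peak), X axis
K_Y = 614.0 / 400.0   # um/s per volt (peak-to-peak), Y axis

def platform_velocity(v_x, v_y):
    """Planar speed (um/s) and heading (rad) for given axis excitation
    voltages, assuming the reported linear voltage-speed relation."""
    s_x, s_y = K_X * v_x, K_Y * v_y
    return math.hypot(s_x, s_y), math.atan2(s_y, s_x)
```

Under this assumption, arbitrary headings follow from the ratio of the two voltages, consistent with the observation that motion along any direction is obtained by changing the applied voltages.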

Journal ArticleDOI
Jianya Yuan1, Hongjian Wang1, Changjian Lin1, Dawei Liu, Dan Yu1 
TL;DR: The proposed dynamic path planning method, based on a gated recurrent unit-recurrent neural network model, can plan a reasonable path in an unknown environment and is more robust to differences in robot structure than traditional path planning algorithms.
Abstract: A dynamic path planning method based on a gated recurrent unit-recurrent neural network model is proposed for the problem of path planning of a mobile robot in an unknown space. A deep neural network with sensor input is used to generate a new control strategy output to the physical model to control the movement of the robot and thus achieve collision avoidance behavior. Inputs and labels are derived from sample sets generated by an improved artificial potential field and an improved ant colony optimization algorithm. In order to make the ant colony algorithm converge quickly, the pheromone trail and the state transition probability are improved, and the field function of the artificial potential field method is modified. Using the end-to-end network model to learn the mapping between input and output in the sample data, the direction and speed of the mobile robot are obtained. Simulation experiments show that the network model can plan a reasonable path in an unknown environment. Compared with traditional path planning algorithms, the proposed method is more robust to differences in the robot structure.
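The training samples are labeled with an improved artificial potential field; for orientation, here is the classic (unimproved) potential-field step that such a planner builds on. Gains, ranges, and the function name are illustrative, and the paper's modified field function differs:

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0, step=0.05):
    """One unit step of the classic artificial potential field: attraction
    toward the goal plus repulsion from obstacles within range rho0."""
    force = k_att * (goal - q)                       # attractive force
    for p in obstacles:
        d = np.linalg.norm(q - p)
        if 0 < d < rho0:                             # repulsion only inside rho0
            force += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (q - p) / d
    return q + step * force / np.linalg.norm(force)  # move along the net force
```

Iterating this step from the start position yields the path; the known local-minimum and oscillation problems of this classic form are part of what the paper's improved field function addresses.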

Proceedings ArticleDOI
11 Mar 2019
TL;DR: This paper presents a video of the ORCA Hub simulator, a framework that unifies three types of autonomous systems (Husky, ANYmal, and UAVs) on an offshore platform digital twin for training and testing human-robot collaboration scenarios, such as inspection and emergency response.
Abstract: To avoid putting humans at risk, there is an imminent need to pursue autonomous robotized facilities with maintenance capabilities in the energy industry. This paper presents a video of the ORCA Hub simulator, a framework that unifies three types of autonomous systems (Husky, ANYmal and UAVs) on an offshore platform digital twin for training and testing human-robot collaboration scenarios, such as inspection and emergency response.

Journal ArticleDOI
TL;DR: This paper proposes a novel adaptive control methodology based on the admittance model for multiple manipulators transporting a rigid object cooperatively along a predefined desired trajectory; a switching function is presented to guarantee the global stability of the closed loop.
Abstract: This paper proposes a novel adaptive control methodology based on the admittance model for multiple manipulators transporting a rigid object cooperatively along a predefined desired trajectory. First, an admittance model is applied to generate a reference trajectory online for each manipulator according to the desired path of the rigid object; this trajectory serves as the reference input of the controller. Then, an innovative integral barrier Lyapunov function is utilized to tackle the constraints due to the physical and environmental limits. Adaptive neural networks (NNs) are also employed to approximate the uncertainties of the manipulator dynamics. Different from the conventional NN approximation method, which is usually semiglobally uniformly ultimately bounded, a switching function is presented to guarantee the global stability of the closed loop. Finally, simulation studies are conducted on planar two-link robot manipulators to validate the efficacy of the proposed approach.
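The admittance model that generates each manipulator's online reference can be sketched as the second-order relation m·ë + d·ė + k·e = f_ext, where e is the offset of the reference from the desired path and f_ext is the measured interaction force. The scalar per-axis gains and Euler integration below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def admittance_reference(x_d, f_ext, m=1.0, d=20.0, k=100.0, dt=0.001):
    """Generate a reference trajectory x_ref from desired samples x_d under
    external forces f_ext via the admittance dynamics m*e'' + d*e' + k*e = f,
    with e = x_ref - x_d (illustrative gains, semi-implicit Euler)."""
    e = np.zeros_like(x_d[0])
    e_dot = np.zeros_like(x_d[0])
    x_ref = []
    for xd, f in zip(x_d, f_ext):
        e_ddot = (f - d * e_dot - k * e) / m   # admittance dynamics
        e_dot = e_dot + dt * e_ddot            # integrate velocity first
        e = e + dt * e_dot                     # then position (semi-implicit)
        x_ref.append(xd + e)
    return np.array(x_ref)
```

With zero interaction force the reference coincides with the desired path, and a sustained force shifts the reference by roughly f/k at steady state, which is the compliant behavior an admittance model provides.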

Journal ArticleDOI
TL;DR: A novel deep convolutional neural network (DCNN) structure is proposed for reconstruction enhancement and reduced online prediction time, and is validated in redundancy control of a 7-DoF anthropomorphic robot arm (LWR4+, KUKA, Germany).
Abstract: Human-like behavior has emerged in the robotics area as a way to improve the quality of Human-Robot Interaction (HRI). For human-like behavior imitation, kinematic mapping between a human arm and a robot manipulator is one of the popular solutions. To fulfill this requirement, a reconstruction method called swivel motion was adopted to achieve human-like imitation. This approach models the regression relationship between the robot pose and the swivel motion angle, and then reproduces human-like swivel motion using the redundant degrees of freedom of the manipulator. This characteristic holds for most redundant anthropomorphic robots. Although artificial neural network (ANN)-based approaches show moderate robustness, their predictive performance is limited. In this paper, we propose a novel deep convolutional neural network (DCNN) structure to enhance reconstruction accuracy and reduce online prediction time. Finally, we utilized the trained DCNN model for redundancy control of a 7-DoF anthropomorphic robot arm (LWR4+, KUKA, Germany) for validation. A demonstration is presented to show the human-like behavior of the anthropomorphic manipulator. The proposed approach can also be applied to control other anthropomorphic robot manipulators in industrial or biomedical engineering applications.
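The swivel motion angle that the DCNN regresses has a common geometric definition: the rotation of the elbow about the shoulder-wrist axis, measured from a projected reference direction. A minimal sketch of that definition follows; the function name and the gravity reference vector are assumptions, not taken from the paper:

```python
import numpy as np

def swivel_angle(shoulder, elbow, wrist, ref=np.array([0.0, 0.0, -1.0])):
    """Signed swivel angle of the elbow about the shoulder-wrist axis,
    measured from the projection of a reference direction (here gravity)."""
    n = wrist - shoulder
    n = n / np.linalg.norm(n)        # unit shoulder->wrist axis
    v = elbow - shoulder
    v = v - (v @ n) * n              # elbow offset, projected off the axis
    u = ref - (ref @ n) * n          # reference direction, same projection
    return np.arctan2(n @ np.cross(u, v), u @ v)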

Proceedings ArticleDOI
11 Mar 2019
TL;DR: It is suggested that voice design should be considered more thoroughly when planning spoken human-robot interactions, because people associate voices with robot pictures, even when the content of spoken utterances was unintelligible.
Abstract: It is well established that a robot's visual appearance plays a significant role in how it is perceived. Considerable time and resources are usually dedicated to helping ensure that the visual aesthetics of social robots are pleasing to users and facilitate clear communication. However, relatively little consideration is given to how the voice of the robot should sound, which may have adverse effects on acceptance and clarity of communication. In this study, we explore the mental images people form when they hear robots speaking. In our experiment, participants listened to several voices, and for each voice they were asked to choose a robot, from a selection of eight commonly used social robot platforms, that was best suited to have that voice. The voices were manipulated in terms of naturalness, gender, and accent. Results showed that a) participants seldom matched robots with the voices that were used in previous HRI studies, b) the gender and naturalness vocal manipulations strongly affected participants' selection, and c) the linguistic content of the utterances spoken by the voices does not affect people's selection. This finding suggests that people associate voices with robot pictures even when the content of the spoken utterances is unintelligible. Our findings indicate that both a robot's voice and its appearance contribute to robot perception. Thus, giving a mismatched voice to a robot might introduce a confounding effect in HRI studies. We therefore suggest that voice design should be considered more thoroughly when planning spoken human-robot interactions.

Journal ArticleDOI
TL;DR: This paper proposes a singularity-free trajectory planning method to simultaneously keep the attitude and centroid position of the base stabilized in inertial space; the balance arms are also designed.
Abstract: In a multiarm space robotic system, one or more manipulators can be used to stabilize the base by counteracting the disturbance caused by other manipulators performing on-orbital tasks. However, singularities are inevitably present in the traditional methods based on differential kinematics solutions. In this paper, we propose a singularity-free trajectory planning method to simultaneously keep the attitude and centroid position of the base stabilized in inertial space; the balance arms are also designed. First, we derive the coupled motion equations of a free-floating multiarm space robotic system. Then, the singularity problems are theoretically analyzed, and the theoretical basis for singularity-free trajectory planning is established. Second, we decompose the six-degree-of-freedom (6-DOF) pose (attitude and position) stabilization problem into two 3-DOF subproblems related to attitude and position balancing. We then design two robotic arms: 1) a position balance arm and 2) an attitude balance arm, to maintain the base centroid position and attitude, respectively. Third, we plan the coordinated trajectories of the two balance arms according to holonomic and nonholonomic constraints. As long as the desired motion is not beyond the balance ability, reasonable joint variables can always be determined without encountering a singularity problem. Finally, the proposed methods are verified using simulations of typical on-orbital missions, including joint trajectory tracking and target capturing.
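The balance-arm idea follows from conservation of momentum of the free-floating system: the momentum contributions of the mission arm and the balance arm must cancel for the base to stay fixed. A minimal least-squares sketch is shown below; the matrix partitioning and function name are illustrative, and the paper instead plans coordinated trajectories under holonomic and nonholonomic constraints to avoid singularities:

```python
import numpy as np

def balance_arm_rates(H_bal, H_mis, qdot_mis):
    """Balance-arm joint rates that cancel the base momentum disturbance
    of the mission arm, from conservation of momentum:
        H_bal @ qdot_bal + H_mis @ qdot_mis = 0
    where H_bal, H_mis are the arms' momentum coupling matrices
    (illustrative least-squares form via the pseudoinverse)."""
    return -np.linalg.pinv(H_bal) @ (H_mis @ qdot_mis)
```

A pseudoinverse-based rate solution of this kind is exactly where the singularity problems analyzed in the paper arise when H_bal loses rank, which motivates the singularity-free trajectory planning proposed instead.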