
Showing papers on "Kinematics published in 2020"


Journal ArticleDOI
TL;DR: An improved recurrent neural network (RNN) scheme is proposed to perform the trajectory control of redundant robot manipulators using remote center of motion (RCM) constraints to facilitate accurate task tracking based on the general quadratic performance index.

177 citations


Journal ArticleDOI
TL;DR: The swivel motion reconstruction approach was applied to imitate human-like behavior using the kinematic mapping in robot redundancy and showed that the architecture could not only enhance the regression accuracy but also significantly reduce the processing time of learning human motion data.
Abstract: Recently, human-like behavior on anthropomorphic robot manipulators has increasingly been accomplished by kinematic models establishing the relationship between an anthropomorphic manipulator and human arm motions. Notably, the growth and broad availability of advanced techniques in data science facilitate the imitation learning process in anthropomorphic robotics. However, enormous data sets impose a labeling and prediction burden. In this paper, a swivel motion reconstruction approach was applied to imitate human-like behavior using the kinematic mapping in robot redundancy. For the sake of efficient computing, a novel incremental learning framework that combines an incremental learning approach with a deep convolutional neural network (IN-DCNN) is proposed for fast and efficient learning. The algorithm exploits a novel approach to detect changes in streaming human motion data and then evolve its hierarchical representation of features. The incremental learning process fine-tunes the deep network only when the model-drift detection mechanism is triggered. Finally, we experimentally demonstrated this neural network's learning procedure and translated the trained human-like model to manage the redundancy optimization control of an anthropomorphic robot manipulator (LWR4+, KUKA, Germany). Redundant robots with an anthropomorphic kinematic structure can be handled by this approach. The experimental results showed that our architecture could not only enhance the regression accuracy but also significantly reduce the processing time of learning human motion data.

106 citations


Journal ArticleDOI
10 Mar 2020-PLOS ONE
TL;DR: Both TRACAB’s tracking systems can be considered as valid technologies for football-specific performance analyses in the settings tested as long as players are tracked correctly.
Abstract: The present study aimed to validate and compare the football-specific measurement accuracy of two optical tracking systems engineered by TRACAB. The "Gen4" system consists of two multi-camera units (a stereo pair) in two locations either side of the halfway line, whereas the distributed "Gen5" system combines two stereo pairs on each side of the field with two monocular systems behind the goal areas. Data were collected from 20 male football players in two different exercises (a football-specific running course and small-sided games) in a professional football stadium. To evaluate the accuracy of the systems, measures were compared against simultaneously recorded measures from a reference system (a VICON motion capture system). The statistical analysis used RMSE for kinematic variables (position, speed and acceleration) and percentage differences for performance indicators (e.g. distance covered, peak speed) per run compared to the reference system. Frames in which players were obviously not tracked were excluded. Gen5 had marginally better position accuracy (0.08 m RMSE) than Gen4 (0.09 m RMSE) relative to the reference. Differences in instantaneous speed (Gen4: 0.09 m⋅s-1 RMSE; Gen5: 0.08 m⋅s-1 RMSE) and acceleration (Gen4: 0.26 m⋅s-2 RMSE; Gen5: 0.21 m⋅s-2 RMSE) accuracy were statistically significant, but trivial in terms of effect size. For total distance travelled, both Gen4 (0.42 ± 0.60%) and Gen5 (0.27 ± 0.35%) showed only trivial deviations from the reference. Gen4 showed moderate differences in the low-speed distance category (-19.41 ± 13.24%) and small differences in the high-speed distance category (8.94 ± 9.49%). Differences in peak speed, acceleration and deceleration were trivial (<0.5%) for both Gen4 and Gen5. These findings suggest that Gen5's distributed camera architecture has minor accuracy benefits over Gen4's single-view camera architecture.
We assume that the main benefit of Gen5 over Gen4 lies in the increased robustness of tracking under optical overlapping of players. Since differences from the reference system were very low, both of TRACAB's tracking systems can be considered valid technologies for football-specific performance analyses in the settings tested, as long as players are tracked correctly.
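The per-variable accuracy figures above reduce to a root-mean-square error between synchronized tracked and reference traces. A minimal sketch of that computation (NumPy, with illustrative synthetic traces rather than TRACAB or VICON data):

```python
import numpy as np

def rmse(tracked: np.ndarray, reference: np.ndarray) -> float:
    """Root-mean-square error between a tracked and a reference 2D
    position trace of shape (n_frames, 2), sampled at the same times."""
    per_frame_error = np.linalg.norm(tracked - reference, axis=1)
    return float(np.sqrt(np.mean(per_frame_error ** 2)))

# Illustrative traces: a reference path plus small tracking noise.
rng = np.random.default_rng(0)
reference = np.cumsum(rng.normal(0.0, 0.1, size=(500, 2)), axis=0)
tracked = reference + rng.normal(0.0, 0.06, size=reference.shape)

print(f"position RMSE: {rmse(tracked, reference):.3f} m")
```

The same formula applies to speed and acceleration traces after numerical differentiation of position.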

92 citations


Journal ArticleDOI
TL;DR: A model-free reinforcement learning strategy is proposed for training a policy for online trajectory planning without establishing the dynamic and kinematic models of the space robot.

86 citations


Journal ArticleDOI
TL;DR: The coupled planning method uses stochastic, derivative-free search to plan both foothold locations and horizontal motions, coping with the local minima produced by the terrain model, and shows remarkable capability to deal with a wide range of non-coplanar terrains.
Abstract: Planning whole-body motions while taking the terrain conditions into account is a challenging problem for legged robots, since the terrain model might produce many local minima. Because of these local minima, our coupled planning method uses stochastic, derivative-free search to plan both foothold locations and horizontal motions. It jointly optimizes body motion, step duration and foothold selection, modeling the terrain as a cost map. Thanks to a novel attitude planning method, the horizontal motion plans can be applied to various terrain conditions. The attitude planner ensures robot stability by imposing limits on the angular acceleration. Our whole-body controller compliantly tracks trunk motions while avoiding slippage and respecting kinematic and torque limits. Despite the use of a simplified model, which is restricted to flat terrain, our approach shows remarkable capability to deal with a wide range of non-coplanar terrains. The results are validated by experimental trials and comparative evaluations on a series of terrains of progressively increasing complexity.
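The paper's planner is a full trajectory optimizer; as a toy illustration of the idea of stochastic, derivative-free foothold selection over a terrain cost map, one might sketch the following (all names, weights and the cost map are hypothetical, not taken from the paper):

```python
import numpy as np

def select_foothold(cost_map, nominal, n_samples=200, radius=0.15,
                    w_cost=1.0, w_dev=2.0, rng=None):
    """Derivative-free foothold selection: sample candidate footholds
    around a nominal location and keep the one minimising terrain cost
    plus deviation from the nominal step. cost_map(x, y) -> scalar."""
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_score = nominal, w_cost * cost_map(*nominal)
    for _ in range(n_samples):
        cand = nominal + rng.uniform(-radius, radius, size=2)
        score = w_cost * cost_map(*cand) + w_dev * np.linalg.norm(cand - nominal)
        if score < best_score:
            best, best_score = cand, score
    return best

# Toy cost map: a high-cost region (e.g. a rock edge) left of x = 0.
cost_map = lambda x, y: 5.0 if x < 0.0 else 0.1
foothold = select_foothold(cost_map, nominal=np.array([-0.05, 0.0]))
print(foothold)  # pushed toward the cheap side (x >= 0) of the map
```

Because the search only evaluates the cost map, it needs no gradients and is unaffected by the cost map's local minima in the way a gradient method would be.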

81 citations


Journal ArticleDOI
TL;DR: This work develops an efficient reliability analysis method to account for random dimensions and joint angles of robotic mechanisms to proficiently predict the kinematic reliability of robotic manipulators.
Abstract: Kinematic reliability of robotic manipulators is the linchpin for restraining positional errors within acceptable limits. This work develops an efficient reliability analysis method that accounts for random dimensions and joint angles of robotic mechanisms, aiming to predict the kinematic reliability of robotic manipulators proficiently. The kinematic reliability is defined as the probability that the actual position of the end-effector falls into a specified tolerance sphere centered at the target position. The motion error is expressed as a compound function of independent standard normal variables constructed from the three co-dependent coordinates of the end-effector. The saddlepoint approximation is then applied to compute the kinematic reliability. Numerical examples demonstrate the satisfactory accuracy and efficiency of the proposed method: thanks to this construction and the saddlepoint approximation, random simulation is spared.
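For intuition, the probability being approximated can also be estimated by brute-force Monte Carlo, which is exactly what the saddlepoint method is designed to avoid. A sketch for a planar 2R arm with Gaussian dimension and joint-angle errors (all nominal values, standard deviations and the tolerance are illustrative assumptions, not the paper's example):

```python
import numpy as np

def fk_planar_2r(l1, l2, q1, q2):
    """Forward kinematics of a planar 2R arm: end-effector position."""
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.stack([x, y], axis=-1)

def kinematic_reliability_mc(n=200_000, tol=0.01, seed=0):
    """P(actual end-effector position lies inside a tolerance sphere
    centred at the nominal target), estimated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    # Nominal dimensions / joint angles, perturbed by small Gaussian errors.
    l1 = rng.normal(0.40, 0.0005, n)   # link lengths [m]
    l2 = rng.normal(0.30, 0.0005, n)
    q1 = rng.normal(0.50, 0.0050, n)   # joint angles [rad]
    q2 = rng.normal(1.00, 0.0050, n)
    target = fk_planar_2r(0.40, 0.30, 0.50, 1.00)
    err = np.linalg.norm(fk_planar_2r(l1, l2, q1, q2) - target, axis=1)
    return float(np.mean(err <= tol))

print(f"estimated kinematic reliability: {kinematic_reliability_mc():.4f}")
```

The cost of such simulation grows with the required confidence, which is why a closed-form approximation such as the saddlepoint method is attractive.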

77 citations


Journal ArticleDOI
26 Jan 2020-Sensors
TL;DR: This review shows that methods for lower limb joint kinematics are inherently application dependent, and future research should focus on alternative validation methods, subject-specific IMU-based biomechanical joint models and disturbed movement patterns in real-world settings.
Abstract: The use of inertial measurement units (IMUs) has gained popularity for the estimation of lower limb kinematics. However, implementations in clinical practice are still lacking. The aim of this review is twofold: to evaluate the methodological requirements for IMU-based joint kinematic estimation to be applicable in a clinical setting, and to suggest future research directions. Studies within the PubMed, Web of Science and EMBASE databases were screened for eligibility, based on the following inclusion criteria: (1) studies must include a methodological description of how kinematic variables were obtained for the lower limb, (2) kinematic data must have been acquired by means of IMUs, (3) studies must have validated the implemented method against a gold standard reference system. Information on study characteristics, signal processing characteristics and study results was assessed and discussed. This review shows that methods for lower limb joint kinematics are inherently application dependent. Sensor restrictions are generally compensated for with biomechanically inspired assumptions and prior information. Awareness of the possible adaptations in IMU-based kinematic estimates introduced by incorporating such prior information and assumptions is necessary before drawing clinical decisions. Future research should focus on alternative validation methods, subject-specific IMU-based biomechanical joint models and disturbed movement patterns in real-world settings.

76 citations


Journal ArticleDOI
TL;DR: This paper validates a two-camera OpenPose-based markerless system for gait analysis, considering its accuracy relative to three factors: the cameras' relative distance, gait direction and video resolution, and confirms the feasibility of tracking the kinematics and gait parameters of a single subject in 3D space using two low-cost webcams and the OpenPose engine.
Abstract: The design of markerless systems to reconstruct human motion in a timely, unobtrusive and externally valid manner is still an open challenge. Artificial intelligence algorithms based on automatic landmark identification in video images have opened up a new approach, potentially viable with low-cost hardware. OpenPose is a library that uses a two-branch convolutional neural network to recognize skeletons in the scene. Although OpenPose-based solutions are spreading, their metrological performance relative to the video setup is still largely unexplored. This paper aimed at validating a two-camera OpenPose-based markerless system for gait analysis, considering its accuracy relative to three factors: the cameras' relative distance, gait direction and video resolution. Two volunteers performed a walking test within a gait analysis laboratory. A marker-based optical motion capture system was taken as a reference. The procedure involved: calibration of the stereoscopic system; acquisition of video recordings, simultaneously with the reference marker-based system; video processing within OpenPose to extract the subject's skeleton; video synchronization; and triangulation of the skeletons in the two videos to obtain the 3D coordinates of the joints. Two sets of parameters were considered for the accuracy assessment: errors in trajectory reconstruction and errors in selected spatio-temporal gait parameters (step length, swing and stance time). The lowest trajectory error (~20 mm) was obtained with cameras 1.8 m apart, the highest resolution and straight gait, and the highest (~60 mm) with the 1.0 m, low-resolution and diagonal-gait configuration. The OpenPose-based system tended to underestimate step length by about 1.5 cm, while no systematic biases were found for swing/stance time. Step length changed significantly according to gait direction (p = 0.008), camera distance (p = 0.020) and resolution (p < 0.001).
For stance and swing times, the lowest errors (0.02 and 0.05 s, respectively) were obtained with the 1.0 m, highest-resolution and straight-gait configuration. These findings confirm the feasibility of tracking the kinematics and gait parameters of a single subject in 3D space using two low-cost webcams and the OpenPose engine. In particular, maximizing camera distance and video resolution yielded the highest metrological performance.
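The triangulation step described above is typically the classical direct linear transform (DLT); a minimal sketch for one landmark seen by two calibrated cameras (the projection matrices here are toy values, not the paper's calibration):

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one landmark from two calibrated
    views. P1, P2: 3x4 camera projection matrices; uv1, uv2: pixel
    coordinates of the same joint in each image. Returns the 3D point."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A = homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity view, and a 1 m baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 2.0])
project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_hat = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_hat)  # recovers [0.3, -0.2, 2.0] for noise-free observations
```

With noisy 2D detections (as from OpenPose), the SVD solution minimises the algebraic error, which is what makes camera distance and resolution decisive for accuracy.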

72 citations


Journal ArticleDOI
TL;DR: A study of an under-actuated resilient robot with closed loops and passive joints is presented, showing that the desired resilient behavior of R-Robot II can be exhibited.
Abstract: A resilient robot is a robot that can recover its function after being partially damaged. In this paper, a study of an under-actuated resilient robot with closed loops and passive joints is presented. First, a prototype system was built, which serves as a study vehicle and is called R-Robot II for short. Second, the kinematics of the prototype robot R-Robot II, necessary to accommodate changes in the robot structure, was developed. Finally, experiments on R-Robot II were carried out. The results show that the desired resilient behavior of R-Robot II can be exhibited. The architecture of R-Robot II, along with the design of the mechanical modules and simulation, was reported elsewhere. This paper focuses on the physical realization of R-Robot II and on the experimentation.

71 citations


Journal ArticleDOI
TL;DR: Inertial sensor data—linear acceleration and angular rate—were simulated from a database of optical motion tracking data and used as input for feedforward and long short-term memory neural networks to predict the joint angles and moments of the lower limbs during gait.
Abstract: In recent years, gait analysis outside the laboratory has attracted more and more attention in clinical applications as well as in the life sciences. Wearable sensors such as inertial sensors show high potential in these applications. Unfortunately, they can only measure kinematic motion patterns indirectly, and the outcome is currently jeopardized by measurement discrepancies compared with the gold standard of optical motion tracking. The aim of this study was to overcome the limitations of measurement discrepancies and missing kinetic motion parameters using a machine learning application based on artificial neural networks. For this purpose, inertial sensor data—linear acceleration and angular rate—were simulated from a database of optical motion tracking data and used as input for a feedforward and a long short-term memory neural network to predict the joint angles and moments of the lower limbs during gait. Both networks achieved mean correlation coefficients higher than 0.80 in the minor motion planes, and higher than 0.98 in the sagittal plane. These results encourage further applications of artificial intelligence to support gait analysis.
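The reported correlation coefficients can be computed for any predicted-versus-reference joint-angle trace in a few lines; a sketch with a synthetic knee-angle curve (the curve shape and noise level are illustrative, not the study's data):

```python
import numpy as np

def pearson_r(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Pearson correlation coefficient between a predicted and a
    reference joint-angle trace over one gait cycle."""
    p = predicted - predicted.mean()
    r = reference - reference.mean()
    return float((p @ r) / (np.linalg.norm(p) * np.linalg.norm(r)))

# Toy sagittal-plane knee-angle curve and a noisy "prediction" of it.
t = np.linspace(0.0, 1.0, 101)                       # fraction of gait cycle
reference = 30.0 * np.sin(2 * np.pi * t) ** 2 + 5.0  # degrees
rng = np.random.default_rng(1)
predicted = reference + rng.normal(0.0, 2.0, t.size)

print(f"r = {pearson_r(predicted, reference):.3f}")
```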

68 citations


Posted Content
TL;DR: This work proposes a novel method for monocular video-based 3D object detection which carefully leverages kinematic motion to improve precision of 3D localization and achieves state-of-the-art performance on monocular 3Dobject detection and the Bird's Eye View tasks within the KITTI self-driving dataset.
Abstract: Perceiving the physical world in 3D is fundamental for self-driving applications. Although temporal motion is an invaluable resource to human vision for detection, tracking, and depth perception, such features have not been thoroughly utilized in modern 3D object detectors. In this work, we propose a novel method for monocular video-based 3D object detection which carefully leverages kinematic motion to improve precision of 3D localization. Specifically, we first propose a novel decomposition of object orientation as well as a self-balancing 3D confidence. We show that both components are critical to enable our kinematic model to work effectively. Collectively, using only a single model, we efficiently leverage 3D kinematics from monocular videos to improve the overall localization precision in 3D object detection while also producing useful by-products of scene dynamics (ego-motion and per-object velocity). We achieve state-of-the-art performance on monocular 3D object detection and the Bird's Eye View tasks within the KITTI self-driving dataset.

Journal ArticleDOI
TL;DR: Different controllers for the redundancy resolution of redundant manipulators are compared to highlight the superiority and advantages of the proposed NRNN, which greatly improves on existing RMG solutions by theoretically eliminating the position error and joint drift.
Abstract: For the existing repetitive motion generation (RMG) schemes for kinematic control of redundant manipulators, the position error always exists and fluctuates. This article gives an answer to this phenomenon and presents theoretical analyses revealing that the existing RMG schemes suffer from a theoretical position error related to the joint angle error. To remedy this weakness, an orthogonal projection RMG (OPRMG) scheme is proposed in this article, introducing an orthogonal projection method that theoretically eliminates the position error and decouples the joint-space and Cartesian-space errors while considering joint constraints. The corresponding new recurrent neural networks (NRNNs) are structured by exploiting the gradient descent method with the assistance of velocity compensation, with theoretical analyses provided to establish stability and feasibility. In addition, simulation results on a fixed-base redundant manipulator, a mobile manipulator, and a multirobot system, synthesized by the existing RMG schemes and the proposed one, are presented to verify the superiority and precise performance of the OPRMG scheme for kinematic control of redundant manipulators. Moreover, by adjusting the coefficient, simulations on the position error and joint drift of the redundant manipulator are conducted for comparison to prove the high performance of the OPRMG scheme. To bring out the crucial point, different controllers for the redundancy resolution of redundant manipulators are compared to highlight the superiority and advantages of the proposed NRNN. This work greatly improves on existing RMG solutions by theoretically eliminating the position error and joint drift, contributing significantly to the accuracy and efficiency of high-precision instruments in manufacturing.
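For context, the classical baseline that RMG-style schemes build on is pseudoinverse redundancy resolution with a null-space projection. A minimal sketch of that baseline (not of the OPRMG neural network itself; the Jacobian and velocities are toy values):

```python
import numpy as np

def redundancy_resolution(J, xdot, qdot_ns=None):
    """Classical pseudoinverse redundancy resolution:
    qdot = J+ xdot + (I - J+ J) qdot_ns, where the second term projects
    a secondary joint velocity into the Jacobian's null space, so it
    does not disturb the Cartesian task."""
    J_pinv = np.linalg.pinv(J)
    qdot = J_pinv @ xdot
    if qdot_ns is not None:
        n = J.shape[1]
        qdot += (np.eye(n) - J_pinv @ J) @ qdot_ns
    return qdot

# A 2x3 Jacobian: planar task with a 3-DOF arm, one redundant DOF.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.7]])
xdot = np.array([0.1, -0.05])                  # desired task velocity
qdot = redundancy_resolution(J, xdot, qdot_ns=np.array([0.0, 0.0, 0.3]))
print(J @ qdot)  # equals xdot: null-space motion leaves the task untouched
```

The joint drift that repetitive motion generation addresses arises precisely because repeated closed Cartesian paths resolved this way need not return the joints to their initial configuration.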

Book ChapterDOI
23 Aug 2020
TL;DR: A physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input and produces motions that are significantly more realistic than those from purely kinematic methods, substantially improving quantitative measures of both kinematic and dynamic plausibility.
Abstract: Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors that violate physical constraints, such as feet penetrating the ground and bodies leaning at extreme angles. In this paper, we present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input. We first estimate ground contact timings with a novel prediction network which is trained without hand-labeled data. A physics-based trajectory optimization then solves for a physically-plausible motion, based on the inputs. We show this process produces motions that are significantly more realistic than those from purely kinematic methods, substantially improving quantitative measures of both kinematic and dynamic plausibility. We demonstrate our method on character animation and pose estimation tasks on dynamic motions of dancing and sports with complex contact patterns.

Posted Content
TL;DR: PhysCap is the first algorithm for physically plausible, real-time and marker-less human 3D motion capture with a single colour camera at 25 fps, and employs a combination of ground reaction force and residual force for plausible root control, and uses a trained neural network to detect foot contact events in images.
Abstract: Marker-less 3D human motion capture from a single colour camera has seen significant progress. However, it is a very challenging and severely ill-posed problem. In consequence, even the most accurate state-of-the-art approaches have significant limitations. Purely kinematic formulations on the basis of individual joints or skeletons, and the frequent frame-wise reconstruction in state-of-the-art methods, greatly limit 3D accuracy and temporal stability compared to multi-view or marker-based motion capture. Further, captured 3D poses are often physically incorrect and biomechanically implausible, or exhibit implausible environment interactions (floor penetration, foot skating, unnatural body leaning and strong shifting in depth), which is problematic for any use case in computer graphics. We, therefore, present PhysCap, the first algorithm for physically plausible, real-time and marker-less human 3D motion capture with a single colour camera at 25 fps. Our algorithm first captures 3D human poses purely kinematically. To this end, a CNN infers 2D and 3D joint positions, and subsequently, an inverse kinematics step finds space-time coherent joint angles and global 3D pose. Next, these kinematic reconstructions are used as constraints in a real-time physics-based pose optimiser that accounts for environment constraints (e.g., collision handling and floor placement), gravity, and biophysical plausibility of human postures. Our approach employs a combination of ground reaction force and residual force for plausible root control, and uses a trained neural network to detect foot contact events in images. Our method captures physically plausible and temporally stable global 3D human motion, without physically implausible postures, floor penetrations or foot skating, from video in real time and in general scenes. The video is available at this http URL

Posted ContentDOI
11 Dec 2020-bioRxiv
TL;DR: This work compared the kinematics of human gait measured using a deep learning algorithm-based markerless motion capture system to those of a common marker-based motion capture system, demonstrating markerless motion capture can measure 3D kinematic data similar to those from marker-based systems.
Abstract: Kinematic analysis is a useful and widespread tool used in research and clinical biomechanics for the estimation of human pose and the quantification of human movement. Common marker-based optical motion capture systems are expensive, time intensive, and require highly trained operators to obtain kinematic data. Markerless motion capture systems offer an alternative method for the measurement of kinematic data with several practical benefits. This work compared the kinematics of human gait measured using a deep learning algorithm-based markerless motion capture system to those of a common marker-based motion capture system. Thirty healthy adult participants walked on a treadmill while data were simultaneously recorded using eight video cameras (markerless) and seven infrared optical motion capture cameras (marker-based). Video data were processed using markerless motion capture software, marker-based data were processed using marker-based capture software, and both sets of data were compared. The average root mean square distance (RMSD) between corresponding joints was less than 2.5 cm for all joints except the hip, which was 3.6 cm. Lower limb segment angles indicated pose estimates from both systems were very similar, with RMSD of less than 5.5° for all segment angles except those that represent rotations about the long axis of the segment. Lower limb joint angles captured similar patterns for flexion/extension at all joints, ab/adduction at the knee and hip, and toe-in/toe-out at the ankle. These findings demonstrate markerless motion capture can measure similar 3D kinematics to those from marker-based systems.

Journal ArticleDOI
TL;DR: In this article, a CNN infers 2D and 3D joint positions, and subsequently, an inverse kinematics step finds space-time coherent joint angles and global 3D pose.
Abstract: Marker-less 3D human motion capture from a single colour camera has seen significant progress. However, it is a very challenging and severely ill-posed problem. In consequence, even the most accurate state-of-the-art approaches have significant limitations. Purely kinematic formulations on the basis of individual joints or skeletons, and the frequent frame-wise reconstruction in state-of-the-art methods greatly limit 3D accuracy and temporal stability compared to multi-view or marker-based motion capture. Further, captured 3D poses are often physically incorrect and biomechanically implausible, or exhibit implausible environment interactions (floor penetration, foot skating, unnatural body leaning and strong shifting in depth), which is problematic for any use case in computer graphics. We, therefore, present PhysCap, the first algorithm for physically plausible, real-time and marker-less human 3D motion capture with a single colour camera at 25 fps. Our algorithm first captures 3D human poses purely kinematically. To this end, a CNN infers 2D and 3D joint positions, and subsequently, an inverse kinematics step finds space-time coherent joint angles and global 3D pose. Next, these kinematic reconstructions are used as constraints in a real-time physics-based pose optimiser that accounts for environment constraints (e.g., collision handling and floor placement), gravity, and biophysical plausibility of human postures. Our approach employs a combination of ground reaction force and residual force for plausible root control, and uses a trained neural network to detect foot contact events in images. Our method captures physically plausible and temporally stable global 3D human motion, without physically implausible postures, floor penetrations or foot skating, from video in real time and in general scenes. 
PhysCap achieves state-of-the-art accuracy on established pose benchmarks, and we propose new metrics to demonstrate the improved physical plausibility and temporal stability.

Journal ArticleDOI
TL;DR: A human-cooperative control strategy is proposed for locomotion assistance along a physiologically meaningful path through a wearable walking exoskeleton, and an adaptive controller is designed to simultaneously incorporate robot motion and the human's capabilities.
Abstract: In this paper, a human-cooperative control strategy is proposed for locomotion assistance along a physiologically meaningful path through a wearable walking exoskeleton. First, a kinematics model for climbing stairs with the step length, walking speed, and direction is developed. Then, the trajectory of the center of gravity (CoG) of the human–robot system in both the sagittal plane (SP) and the coronal plane (CP) is designed using the virtual slope method (VSM) and the dual-length linear inverted pendulum model (DLLIPM), generated in parallel to the gradient vector of the virtual slope and to the horizontal plane, respectively. The task workspace of climbing stairs is then divided into a human-domination region and a robot-assistance region through a novel barrier energy function, such that the motion of the human's legs can be constrained within a compliant region around the desired trajectories. Moreover, based on a smooth transition between the human and robot regions, an adaptive controller is designed to simultaneously incorporate robot motion and the human's capabilities. Experiments involving several human subjects have been conducted, and the results demonstrate the performance of the control strategy.

Journal ArticleDOI
TL;DR: A geometric interpretation of color-kinematics duality between tree-level scattering amplitudes of gauge and gravity theories is given, in which the kinematic Jacobi identity between each triple of numerators is a residue theorem in disguise.
Abstract: We give a geometric interpretation of color-kinematics duality between tree-level scattering amplitudes of gauge and gravity theories. Using their representation as intersection numbers we show how to obtain Bern-Carrasco-Johansson numerators in a constructive way as residues around boundaries of the moduli space. In this language the kinematic Jacobi identity between each triple of numerators is a residue theorem in disguise.
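For reference, the standard statement of BCJ colour-kinematics duality that the abstract builds on (not this paper's intersection-number construction): a tree amplitude is written as a sum over cubic graphs with colour factors $c_i$, kinematic numerators $n_i$ and propagator denominators $D_i$, and the duality demands that the numerators satisfy the same Jacobi identities as the colour factors:

```latex
\mathcal{A}_n^{\text{tree}}
  = \sum_{i \,\in\, \text{cubic graphs}} \frac{c_i\, n_i}{D_i},
\qquad
c_s + c_t + c_u = 0
\quad\text{and, dually,}\quad
n_s + n_t + n_u = 0 .
```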

Book ChapterDOI
23 Aug 2020
TL;DR: In this article, the authors propose a method for monocular video-based 3D object detection which leverages kinematic motion to extract scene dynamics and improve localization accuracy by decomposing object orientation and a self balancing 3D confidence.
Abstract: Perceiving the physical world in 3D is fundamental for self-driving applications. Although temporal motion is an invaluable resource to human vision for detection, tracking, and depth perception, such features have not been thoroughly utilized in modern 3D object detectors. In this work, we propose a novel method for monocular video-based 3D object detection which leverages kinematic motion to extract scene dynamics and improve localization accuracy. We first propose a novel decomposition of object orientation and a self-balancing 3D confidence. We show that both components are critical to enable our kinematic model to work effectively. Collectively, using only a single model, we efficiently leverage 3D kinematics from monocular videos to improve the overall localization precision in 3D object detection while also producing useful by-products of scene dynamics (ego-motion and per-object velocity). We achieve state-of-the-art performance on monocular 3D object detection and the Bird’s Eye View tasks within the KITTI self-driving dataset.

Journal ArticleDOI
TL;DR: The relative attitude kinematic and dynamic models of a spacecraft are presented and a sliding mode surface and predefined-time stability theory are applied to ensure that both the tracking errors of the attitude and the angular velocity converge to zero within a prescribed time.
Abstract: This paper investigates the attitude tracking problem of a rigid spacecraft using contemporary predefined-time stability theory. To this end, the relative attitude kinematic and dynamic models of a spacecraft are presented. Then, a sliding mode surface and predefined-time stability theory are applied to ensure that both the tracking errors of the attitude, expressed by the quaternion and the angular velocity, converge to zero within a prescribed time. Simulation results demonstrate the performance of the proposed scheme.
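The relative attitude model the abstract refers to starts from a quaternion tracking error; a minimal sketch of computing that error with standard quaternion algebra (not the paper's controller; the target and body attitudes are toy values):

```python
import numpy as np

def quat_mul(q, p):
    """Hamilton product of quaternions in [w, x, y, z] convention."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    """Conjugate; the inverse for a unit quaternion."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def attitude_error(q_body, q_target):
    """Relative attitude quaternion q_e = q_target^{-1} ⊗ q_body;
    the tracking goal is to drive q_e to [±1, 0, 0, 0]."""
    return quat_mul(quat_conj(q_target), q_body)

# Target attitude, and a body attitude rotated 10° about z from it.
theta = np.deg2rad(10.0)
q_target = np.array([1.0, 0.0, 0.0, 0.0])
q_body = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
print(attitude_error(q_body, q_target))  # small residual rotation about z
```

A sliding-mode controller of the kind described would then combine the vector part of this error with the angular velocity error to form its sliding surface.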

Journal ArticleDOI
TL;DR: The simulation results of tracking control in three-dimensional underwater environment are given, which illustrates that the proposed control strategy can not only meet the hardware requirements (drive saturation) but also achieve a stable and efficient tracking control performance because of its constraint to speed and speed increment.
Abstract: In this article, in order to solve the trajectory tracking control problem under drive saturation (thrust overrun) for the 4500-m human occupied vehicle named "Deep-sea Warrior," a model predictive adaptive constraint control strategy is put forward. The proposed control strategy mainly consists of two controllers. The first is a kinematics controller designed by a quantum-behaved particle swarm optimization model predictive control method. The second is a dynamics controller designed by an adaptive algorithm. In order to study the effect of ocean current disturbances on the tracking controller, the ocean current is incorporated into the kinematics and dynamics models of the vehicle. The thrusts for four degrees of freedom under the ocean current are calculated from the designed controllers. The thrusts are then assigned to the six thrusters on the vehicle according to its thruster arrangement. An ocean current observer based on artificial fish proportional-integral control is designed for unknown currents. Simulation results of tracking control in a three-dimensional underwater environment are given, illustrating that the proposed control strategy can not only meet the hardware requirements (drive saturation) but also achieve stable and efficient tracking control performance, thanks to its constraints on speed and speed increment, the inclusion of the ocean current's effect in the kinematics and dynamics models, and the dual feedback mechanism.

Journal ArticleDOI
TL;DR: A new computational method is proposed to comprehensively evaluate the positional-accuracy reliability of industrial robots for single-coordinate, single-point, multipoint, and trajectory accuracy, using the sparse grid numerical integration method and the saddlepoint approximation method.

Proceedings ArticleDOI
25 May 2020
TL;DR: An evaluation of a low-cost markerless system for 3D human motion detection and tracking, consisting of the open-source library OpenPose, two webcams and a linear triangulation algorithm, showed that the system was generally able to track lower-limb motion, producing angular traces representative of normal gait and similar to those computed by IMUs.
Abstract: The paper reports the performance of a low-cost markerless system for 3D human motion detection and tracking, consisting of the open-source library OpenPose, two webcams and a linear triangulation algorithm. OpenPose is able to identify anatomical landmarks with a commercial webcam, using Convolutional Neural Networks trained on data obtained from monocular images. When images from at least two different points of view are processed by OpenPose, 3D kinematic and spatiotemporal data of human gait can also be computed and assessed. Despite its potential, the accuracy of such a system in the estimation of kinematic parameters of human gait is currently unknown. With the aim of estimating OpenPose accuracy in 3D lower-limb joint angle measurement during gait, two synchronized videos of a healthy subject were acquired, with two webcams, during a walking session on a treadmill at comfortable speed. Two-dimensional joint-center coordinates were estimated by OpenPose and reconstructed in 3D by the triangulation algorithm. The resulting angular kinematics was then compared with inertial sensor outputs. Results showed that the system was generally able to track lower-limb motion, producing angular traces representative of normal gait and similar to those computed by IMUs. However, the OpenPose-based approach showed inaccuracies, mostly in the estimation of maximum and minimum joint angles, with errors of up to 9.9°.
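The linear triangulation step can be sketched with the standard DLT formulation, solving the homogeneous system A X = 0 by SVD. The projection matrices are assumed to come from a prior stereo calibration; this is a generic sketch, not the authors' code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices (from calibration)
    x1, x2 : (u, v) pixel coordinates of the same landmark in each view
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 via SVD; the solution is the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

Applied frame by frame to each OpenPose joint center seen in both webcams, this yields the 3D joint trajectories from which the joint angles are computed.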

Journal ArticleDOI
TL;DR: Continuous low-frequency EEG-based movement decoding for the online control of a robotic arm is achieved for the first time.
Abstract: Objective Continuous decoding of voluntary movement is desirable for closed-loop, natural control of neuroprostheses. Recent studies showed the possibility of reconstructing hand trajectories from low-frequency (LF) electroencephalographic (EEG) signals. So far this has only been performed offline. Here, we attempt for the first time continuous online control of a robotic arm with LF-EEG-based decoded movements. Approach The study involved ten healthy participants, asked to track a moving target by controlling a robotic arm. At the beginning of the experiment, the robot was fully controlled by the participant's hand trajectories. After calibrating the decoding model, that control was gradually replaced by LF-EEG-based decoded trajectories, first with 33%, then 66%, and finally 100% EEG control. As in previous offline studies, we regressed the movement parameters (two-dimensional positions, velocities, and accelerations) from the EEG with partial least squares (PLS) regression. To integrate the information from the different movement parameters, we introduced a combined PLS and Kalman filtering approach (named PLSKF). Main results We obtained moderate yet overall significant (α = 0.05) online correlations between hand kinematics and PLSKF-decoded trajectories of 0.32 on average. With respect to PLS regression alone, the PLSKF had a stable correlation increase of Δr = 0.049 on average, demonstrating the successful integration of different models. Parieto-occipital activations were highlighted for the velocity and acceleration decoder patterns. The level of robot control was above chance in all conditions. Participants finally reported feeling enough control to be able to improve with training, even in the 100% EEG condition. Significance Continuous LF-EEG-based movement decoding for the online control of a robotic arm was achieved for the first time. The potential bottlenecks arising when switching from offline to online decoding, and possible solutions, were described. The effect of the PLSKF and its extensibility to different experimental designs were discussed.
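The PLSKF idea of letting one decoded kinematic drive the prediction step while another serves as the measurement can be illustrated with a toy one-dimensional Kalman filter. The paper fuses 2-D positions, velocities and accelerations from PLS decoders; the class below is only a structural sketch with assumed noise parameters:

```python
class KinematicsFuser:
    """Toy 1-D analogue of the PLSKF structure: the decoded velocity
    drives the prediction step and the decoded position is the
    measurement. Noise parameters q and r are illustrative."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.1):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def step(self, decoded_vel, decoded_pos, dt):
        # Predict with the velocity decoder's output
        self.x += decoded_vel * dt
        self.p += self.q
        # Correct with the position decoder's output
        k = self.p / (self.p + self.r)
        self.x += k * (decoded_pos - self.x)
        self.p *= (1.0 - k)
        return self.x
```

When the two decoders agree, the fused estimate simply tracks them; when they disagree, the gain weighs them by their assumed reliabilities, which is the mechanism behind the reported Δr improvement over PLS alone.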

Journal ArticleDOI
Chong Zenghui, Fugui Xie, Xin-Jun Liu, Jinsong Wang, Huifeng Niu
TL;DR: The design of the 1T2R parallel mechanism lays the foundation for the development of the hybrid mobile robot, and the configuration design and parameter optimization in this paper can be further applied to the design of other parallel mechanisms.
Abstract: Efficient and precise processing of large-scale parts is a pressing problem in industry. In this paper, a method to polish large-scale wind turbine blades using hybrid mobile robots is proposed. The robot combines an automated guided vehicle (AGV), a 2-DoF 3-parallelogram hybrid module and a 1T2R parallel module. This paper focuses on the design of the 1T2R parallel mechanism. In order to realize flexible A/B-axis rotational capacity and efficient transmission of the driving units, two 1T2R parallel mechanisms actuated by ball screw drives are derived under the guidance of a type synthesis approach based on Grassmann line geometry and a line-graph method. Using the Tilt-and-Torsion (T&T) angle description, the kinematics (especially the parasitic motion) of the proposed mechanisms is investigated. To carry out the dimension synthesis, performance evaluation indices including the good transmission and constraint workspace, maximum orientation capacity, global transmission and constraint index, and global average parasitic motion are defined. The performance atlases are plotted in the parameter design space. By comparing the parasitic motion and the motion/force transmission and constraint performance of the two mechanisms, the 3-RCU mechanism is selected to develop the parallel module, and its kinematic optimization is carried out. The CAD model with a set of optimal parameters is presented. The work in this paper lays the foundation for the development of the hybrid mobile robot, and the configuration design and parameter optimization can be further applied to the design of other parallel mechanisms.
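The T&T description mentioned above parameterizes an orientation by azimuth, tilt and torsion angles. A minimal sketch of the convention R = Rz(azimuth) Ry(tilt) Rz(torsion - azimuth); for a 1T2R mechanism the torsion is constrained, which is what gives rise to the parasitic-motion analysis:

```python
import math

def Rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def Ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def tilt_torsion(azimuth, tilt, torsion):
    # Tilt-and-Torsion convention: R = Rz(azimuth) Ry(tilt) Rz(torsion - azimuth)
    return matmul(matmul(Rz(azimuth), Ry(tilt)), Rz(torsion - azimuth))
```

With zero torsion, the third column of R gives the tool axis (sin t cos a, sin t sin a, cos t), directly exposing the A/B-axis rotational capacity the mechanism is designed for.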

Journal ArticleDOI
TL;DR: The results show that the proposed method can solve the three-dimensional pose-determination problem and the configuration-planning problem of a spatial hyper-redundant manipulator, and that the computation of the inverse kinematics is simplified for real-time control.
Abstract: With many degrees of freedom (DOFs), a hyper-redundant manipulator has superior dexterity and flexible manipulation ability. However, its inverse kinematics and configuration planning are very challenging. As the number of DOFs increases, the corresponding computational load or training set becomes much larger for traditional methods (such as the generalized inverse method and the artificial neural network method). In this paper, a segmented geometry method is proposed for a spatial hyper-redundant manipulator to solve these problems. Similar to the human arm, the hyper-redundant manipulator is segmented into three sections by geometry, i.e., shoulder, elbow, and wrist. Its kinematics can then be solved separately according to the segmentation, which reduces the complexity of the solution and simplifies the computation of the inverse kinematics. Furthermore, the configuration is parameterized by several parameters, i.e., the arm angle, space arc parameters, and desired direction vector. The shoulder comprises the proximal four DOFs, which are redundant for positioning the elbow and avoiding the joint limits; the arm-angle parameter is defined to resolve this redundancy. The wrist consists of the distal two DOFs, and its joint angles are determined to match the desired direction vector of the end-effector. All the other joints (except those belonging to the shoulder and wrist) compose the elbow; these joint angles are solved using a space-arc-based method. Configuration planning for avoiding joint limits and obstacles and for inspecting narrow pipelines is detailed for practical applications. Finally, circular trajectory tracking and pipeline inspection are, respectively, simulated and experimented on a 20-DOF hyper-redundant manipulator. The results show that the proposed method can solve the three-dimensional pose-determination problem and the configuration-planning problem. The computation of the inverse kinematics is simplified for real-time control. The method can also be applied to other spatial hyper-redundant manipulators with similar serial configurations.
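The wrist solution described above, matching the distal two DOFs to a desired direction vector, can be sketched for an assumed z-then-y wrist axis arrangement (the manipulator's actual joint axes may differ):

```python
import math

def wrist_angles(d):
    """Solve an assumed 2-DoF wrist (rotation about z, then y) so that
    the end-effector axis aligns with the unit direction d = (dx, dy, dz)."""
    dx, dy, dz = d
    q2 = math.acos(max(-1.0, min(1.0, dz)))   # tilt away from the z axis
    q1 = math.atan2(dy, dx) if q2 > 1e-9 else 0.0  # singular when d = z
    return q1, q2

def forward_axis(q1, q2):
    # Direction reached by the wrist: Rz(q1) * Ry(q2) * [0, 0, 1]
    return (math.sin(q2) * math.cos(q1),
            math.sin(q2) * math.sin(q1),
            math.cos(q2))
```

This closed-form, per-segment solve is what avoids the large generalized-inverse computations or training sets of the traditional methods.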

Journal ArticleDOI
Yi Fang, Jin Qi, Jie Hu, Weiming Wang, Yinghong Peng
TL;DR: The proposed approach yields better performance than existing techniques in terms of efficiency and jerk suppression, and enables a higher degree of regularity for achieving desirable motion behaviours in light of the operation requirements and joint characteristics.

Journal ArticleDOI
TL;DR: A novel bounded saturation function is developed to describe the unknown asymmetric actuator saturation, and a neural network (NN) is adopted to approximate the complex AUV hydrodynamics and the derivatives of the desired tracking velocities.

Journal ArticleDOI
TL;DR: A novel path tracking control method is proposed, which is designed using a kinematic MPC to handle the disturbances of road curvature, a PID feedback control of yaw rate to reject uncertainties and modeling errors, and a vehicle sideslip angle compensator to correct the kinematic model prediction.
Abstract: Kinematic model predictive control (MPC) is well known for its simplicity and computational efficiency in path tracking for autonomous vehicles; however, it works well only at low speed. In addition, earlier studies have demonstrated that tracking accuracy is improved by feedback of the yaw rate, as it improves the system transients. With this in mind, it is expected that path-tracking performance can be improved by a cascaded controller that uses kinematic MPC to determine the desired yaw rate rather than the steering angle, and uses proportional-integral-derivative (PID) control to follow the reference yaw rate. However, directly combining MPC with PID feedback control of the yaw rate results in a controller with poor tracking accuracy. The simulation results show that the cascaded MPC-PID controller has relatively stable but larger error compared to classic kinematic and dynamic MPC. Based on an analysis of the vehicle sideslip angle, a novel path tracking control method is proposed, which is designed using a kinematic MPC to handle the disturbances of road curvature, a PID feedback control of yaw rate to reject uncertainties and modeling errors, and a vehicle sideslip angle compensator to correct the kinematic model prediction. The proposed controller's performance, including steady-state and transient response, robustness, and computational efficiency, was evaluated in a CarSim/MATLAB joint simulation environment. Furthermore, field experiments were conducted to validate robustness against sensor disturbances and time lag. The results demonstrate that the developed vehicle sideslip compensator is sufficient to capture the steering dynamics, and the developed controller significantly improves path-tracking performance and follows the desired path very well, from low speed to high speed, even at the limits of handling.
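The sideslip compensation idea, letting the velocity act along heading plus sideslip in the kinematic prediction, and the yaw-rate PID loop can be sketched as follows (function names, gains, and signatures are illustrative, not the paper's implementation):

```python
import math

def predict_pose(x, y, yaw, v, yaw_rate, beta, dt):
    """One-step kinematic prediction corrected by sideslip angle beta:
    the velocity acts along yaw + beta rather than yaw alone."""
    x += v * math.cos(yaw + beta) * dt
    y += v * math.sin(yaw + beta) * dt
    yaw += yaw_rate * dt
    return x, y, yaw

def pid_yaw_rate(err, err_int, err_prev, dt, kp=1.0, ki=0.1, kd=0.05):
    # PID loop tracking the yaw rate commanded by the kinematic MPC;
    # returns the steering effort and the updated error integral.
    err_int += err * dt
    u = kp * err + ki * err_int + kd * (err - err_prev) / dt
    return u, err_int
```

With beta = 0 this reduces to the plain kinematic model; a nonzero beta shifts the predicted path laterally, which is precisely the error the uncompensated kinematic MPC accumulates at higher speeds.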

Journal ArticleDOI
TL;DR: Three methods to design the WMR bilateral teleoperation system’s controller that can cope with the slippage-induced nonpassivity and constant time delays are proposed.
Abstract: The increasing application of wheeled mobile robots (WMRs) in many fields has brought new challenges to their control and teleoperation, two of which are induced by the contact slippage phenomenon between wheel and terrain and by time delays in the master-slave communication channel. In the WMR bilateral tele-driving system considered in this paper, the linear velocity of the slave mobile robot follows the position command from the haptic master robot, while the slippage-induced velocity error is fed back as a haptic force felt by the human operator. To cope with the slippage-induced nonpassivity and constant time delays, this paper proposes three methods to design the controller of the WMR bilateral teleoperation system. An experimental system is set up with a Phantom Premium 1.5A haptic device as the master robot and a simulation platform of a WMR as the slave robot. Experiments with the proposed methods demonstrate that they result in a stable WMR bilateral tele-driving system under wheel slippage and constant time delays.
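The position-velocity mapping and the slippage-based force feedback described above can be sketched in a few lines (the scale and stiffness gains are illustrative; the paper's three controller designs add the passivity and delay handling on top of this basic loop):

```python
def slave_velocity_command(master_pos, scale=0.5):
    # Position-velocity mapping: master displacement commands
    # the WMR's linear velocity.
    return scale * master_pos

def haptic_feedback(v_cmd, v_actual, k_f=2.0):
    # The slippage-induced velocity error (commanded minus achieved)
    # is reflected as a force on the master device.
    return k_f * (v_cmd - v_actual)
```

When the wheels slip, v_actual lags v_cmd and the operator feels a resisting force proportional to the error, which is the haptic cue the teleoperation system is built around.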