Author
Yi-Yun Cho
Bio: Yi-Yun Cho is an academic researcher from National Sun Yat-sen University. The author has contributed to research in the topics of visual servoing and inverse kinematics, has an h-index of 1, and has co-authored 1 publication receiving 11 citations.
Papers
TL;DR: This paper introduces a visual servoing system for a manipulator with redundant joints in which the trajectory for approaching the target is determined spontaneously by the visual control law, so the manipulator can always maintain a safe distance from obstacles while approaching the target smoothly.
Abstract: To tackle the problem of trajectory planning and control-law design, this paper introduces a visual servoing system for a manipulator with redundant joints in which the trajectory for approaching the target is determined spontaneously by the visual control law. The proposed method resolves the joint solution for both visual servoing and obstacle avoidance. The work comprises two procedures: feature extraction for position-based visual servoing (PBVS) and collision avoidance within the working envelope. In PBVS control, the target pose must be reconstructed with respect to the robot, which results in a Cartesian motion-planning problem. Once the geometric relationship between the target and the end effector is determined, a secure inverse kinematics method incorporating trajectory planning solves for the joint configuration of the redundant manipulator using the virtual repulsive torque method. The links of the manipulator can therefore always maintain a safe distance from obstacles while approaching the target smoothly. The applicability of the proposed method is verified in experiments using an eye-in-hand manipulator with seven joints. For reusability and extensibility, the system has been coded and constructed in the framework of the Robot Operating System so that the developed algorithms can be disseminated to different platforms.
23 citations
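The abstract above does not spell out the redundancy-resolution step, so the following is a minimal sketch, assuming a standard velocity-level formulation in which an obstacle-repelling joint motion (standing in for the paper's virtual repulsive torque) is projected into the null space of the task Jacobian; all shapes, gains, and helper names are illustrative assumptions, not the paper's implementation.

```python
# Sketch: redundant-arm IK step with null-space obstacle avoidance (assumed form).
import numpy as np

def damped_pinv(J, damping=1e-2):
    """Damped least-squares pseudoinverse, robust near singularities."""
    JJt = J @ J.T
    return J.T @ np.linalg.inv(JJt + (damping ** 2) * np.eye(JJt.shape[0]))

def redundant_ik_step(J, ee_error, repulsive_qdot, gain=1.0):
    """One velocity-level IK step for a redundant arm.

    J              : 6 x n task Jacobian (n = 7 for a seven-joint arm)
    ee_error       : 6-vector end-effector pose error from PBVS
    repulsive_qdot : n-vector joint motion pushing the links away from obstacles
    """
    J_pinv = damped_pinv(J)
    primary = J_pinv @ (gain * ee_error)    # track the visually reconstructed target
    N = np.eye(J.shape[1]) - J_pinv @ J     # null-space projector of the task Jacobian
    secondary = N @ repulsive_qdot          # avoid obstacles without disturbing the task
    return primary + secondary

# Example with placeholder values for a seven-joint arm
J = np.random.randn(6, 7)
q_dot = redundant_ik_step(J, ee_error=0.01 * np.ones(6), repulsive_qdot=np.random.randn(7))
```

Because the secondary term lives in the null space of J, the obstacle-avoidance motion does not change the commanded end-effector velocity, which matches the behaviour the abstract describes (safe distance from obstacles while the target approach stays smooth).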
Cited by
TL;DR: A fuzzy adaptive method for decoupled IBVS is proposed that allows efficient control of a wheeled mobile robot (WMR) and performs better than other methods in terms of convergence.
Abstract: To address the performance bottleneck for image-based visual servoing (IBVS), it is necessary to have appropriate servoing control laws, increased accuracy for image feature detection, and minimal approximation errors. This article proposes a fuzzy adaptive method for decoupled IBVS that allows the efficient control of a wheeled mobile robot (WMR). To address the under-actuated dynamics of the WMR, a decoupled controller is used: translation and rotation are decoupled by using two independent servoing gains instead of the single servoing gain used in traditional IBVS. To reduce the effect of image noise, this article develops an improved bagging method for the decoupled controller that calculates the inverse kinematics without using the Moore–Penrose pseudoinverse method. To improve convergence, improved Q-learning is used to adaptively adjust the mixture parameter for the image Jacobian matrix (IQ-IBVS), which allows the mixture parameter to be adjusted while the robot moves under servo control. A fuzzy method is used to tune the learning rate for the IQ-IBVS method, which ensures effective learning. The results of simulations and experiments show that the proposed method performs better than other methods in terms of convergence.
33 citations
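The decoupling described above is not detailed in the abstract, so here is a minimal sketch of one plausible reading, assuming a unicycle-type WMR whose linear and angular velocities are each driven from the image-feature error through their own Jacobian column and their own servoing gain; the feature layout and the use of a pseudoinverse are assumptions (the article itself replaces the Moore–Penrose pseudoinverse with a bagging method).

```python
# Sketch: decoupled IBVS law with two independent servoing gains (assumed form).
import numpy as np

def decoupled_ibvs_control(L_v, L_w, error, gain_v=0.5, gain_w=1.0):
    """Return (v, w) for a unicycle-type WMR from a stacked image-feature error.

    L_v, L_w : (m, 1) image-Jacobian columns tied to linear / angular velocity
    error    : (m,)  image-feature error (current minus desired features)
    """
    # Each degree of freedom gets its own gain instead of one shared servoing gain.
    v = (-gain_v * np.linalg.pinv(L_v) @ error).item()
    w = (-gain_w * np.linalg.pinv(L_w) @ error).item()
    return v, w

# Example with placeholder Jacobian columns for four point features (m = 8)
L_v, L_w = np.random.randn(8, 1), np.random.randn(8, 1)
v, w = decoupled_ibvs_control(L_v, L_w, error=np.random.randn(8))
```

Splitting the gains this way lets rotation converge at a different rate from translation, which is the practical point of decoupling for an under-actuated WMR.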
TL;DR: A method is proposed that uses fuzzy state coding to accelerate learning during the training phase and to produce a smooth output when the learned experience is applied; it performs better than other methods in terms of learning speed, movement trajectory, and convergence time.
Abstract: Image-based visual servoing (IBVS) allows precise control of positioning and motion for relatively stationary targets using visual feedback. For IBVS, a mixture parameter $\beta$ allows better approximation of the image Jacobian matrix, which has a significant effect on the performance of IBVS. However, the setting for the mixture parameter depends on the camera's real-time posture, and there is no clear way to define its change rules for most IBVS applications. Using simple model-free reinforcement learning, Q-learning, this article proposes a method to adaptively adjust the image Jacobian matrix for IBVS. When the state space is discretized, traditional Q-learning encounters resolution problems that can cause sudden changes in the action, so the visual servoing system performs poorly. Moreover, a robot in a real-world environment cannot learn on as large a scale as virtual agents, so the efficiency with which agents learn must be increased. This article proposes a method that uses fuzzy state coding to accelerate learning during the training phase and to produce a smooth output when the learned experience is applied. A method that compensates for delay also allows more accurate extraction of features in a real environment. The results of simulations and experiments demonstrate that the proposed method performs better than other methods in terms of learning speed, movement trajectory, and convergence time.
11 citations
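The mixture-parameter adaptation above can be pictured with a small tabular sketch, assuming the common IBVS approximation L = beta * L_current + (1 - beta) * L_desired; the state discretization, action set, reward, and learning rates below are placeholders, and the paper's fuzzy state coding (which smooths the hard bins shown here) and delay compensation are not reproduced.

```python
# Sketch: tabular Q-learning that adapts the Jacobian mixture parameter beta (assumed form).
import numpy as np

ACTIONS = np.array([-0.1, 0.0, 0.1])        # candidate adjustments to beta
N_STATES = 10                               # hard bins on the image-error norm
Q = np.zeros((N_STATES, len(ACTIONS)))      # tabular value estimates

def state_from_error(error_norm, max_norm=100.0):
    """Crude discretization of the feature-error norm into N_STATES bins."""
    return min(int(error_norm / max_norm * N_STATES), N_STATES - 1)

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy selection of a beta adjustment."""
    if np.random.rand() < epsilon:
        return np.random.randint(len(ACTIONS))
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning update."""
    target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])

def mixed_jacobian(beta, L_current, L_desired):
    """Blend the current and desired interaction matrices with beta."""
    return beta * L_current + (1.0 - beta) * L_desired

# Inside a servoing loop one would do, roughly:
#   s = state_from_error(np.linalg.norm(e)); a = choose_action(s)
#   beta = float(np.clip(beta + ACTIONS[a], 0.0, 1.0))
#   ... apply the control law with mixed_jacobian(beta, Lc, Ld), observe reward, q_update(...)
```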
TL;DR: In this paper, a survey of state-of-the-art learning-based algorithms for visual servoing is presented, especially those algorithms that are combined with model predictive control (MPC).
Abstract: Major difficulties and challenges in modern robotic systems center on how to give robots self-learning and self-decision-making abilities. Visual servoing is an important control strategy that allows robotic systems to perceive the environment through vision. Vision can guide new robotic systems to complete more complicated tasks in complex working environments. This survey describes state-of-the-art learning-based algorithms used in visual servoing systems, especially those combined with model predictive control (MPC), and provides pioneering and advanced references together with several numerical simulations. The general modeling methods of visual servoing and the influence of traditional control strategies on robotic visual servoing systems are introduced. The advantages of introducing neural-network-based algorithms and reinforcement-learning-based algorithms into these systems are discussed. Finally, according to the existing research progress and references, the future directions of robotic visual servoing systems are summarized and prospected.
10 citations
TL;DR: This approach learns a mapping from image feature errors to each joint's velocity instead of using classical kinematics, thereby reducing computational complexity and improving the self-regulation ability of the control system.
Abstract: This study presents a fuzzy robotic joint controller using a cerebellar model articulation controller (CMAC) that integrates a Takagi-Sugeno (T-S) framework with an online compensator for an articulated manipulator. The proposed controller is applied to image-based visual servoing (IBVS), including closed-loop feedback control and the kinematic Jacobian calculation. This approach learns a mapping from image feature errors to each joint's velocity instead of using classical kinematics, thereby reducing computational complexity and improving the self-regulation ability of the control system. The connection weights of the cerebellar model are learned offline, and an online compensator that uses reinforcement learning is developed to handle system noise and uncertainties in an unknown environment. Compared with the classical inverse kinematics model, this approach avoids excessive computational expense, so the proportional controller can be implemented in general scenarios with an eye-in-hand configuration. Experimental results show the proposed method can outperform the classical IBVS controller.
8 citations
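The learned mapping described above can be illustrated with a CMAC-style tile coder trained by an LMS rule, assuming a direct regression from the image-feature error to joint velocities; the Takagi-Sugeno fuzzy blending and the reinforcement-learning compensator of the study are omitted here, and all sizes, ranges, and rates are assumptions.

```python
# Sketch: tiny CMAC-style tile coder from feature errors to joint velocities (assumed form).
import numpy as np

class TinyCMAC:
    """Hashed tile coder: image-feature-error vector -> joint-velocity vector."""

    def __init__(self, n_outputs, n_tilings=4, bins=8, lo=-1.0, hi=1.0,
                 lr=0.1, table_size=4096):
        self.n_tilings, self.bins, self.lo, self.hi, self.lr = n_tilings, bins, lo, hi, lr
        self.table_size = table_size
        # One hashed weight table per tiling keeps memory bounded for high-dimensional inputs.
        self.W = np.zeros((n_tilings, table_size, n_outputs))

    def _cells(self, x):
        """Index of the active cell in each (offset) tiling for input x."""
        x = np.clip(np.asarray(x, dtype=float), self.lo, self.hi)
        width = (self.hi - self.lo) / self.bins
        cells = []
        for t in range(self.n_tilings):
            offset = t * width / self.n_tilings
            idx = np.floor((x - self.lo + offset) / width).astype(int)
            cells.append(hash(tuple(idx)) % self.table_size)
        return cells

    def predict(self, x):
        """Average the weights of the active cell in every tiling."""
        return sum(self.W[t, c] for t, c in enumerate(self._cells(x))) / self.n_tilings

    def train(self, x, target):
        """LMS update toward a recorded joint-velocity target."""
        err = np.asarray(target, dtype=float) - self.predict(x)
        for t, c in enumerate(self._cells(x)):
            self.W[t, c] += self.lr * err / self.n_tilings

# Offline training pairs (feature error -> joint velocity) would come from logged
# servoing runs; random placeholders stand in for them here.
cmac = TinyCMAC(n_outputs=6)
feat_err, joint_vel = np.random.randn(8), np.random.randn(6)
cmac.train(feat_err, joint_vel)
print(cmac.predict(feat_err))
```

Replacing the kinematic Jacobian with a learned lookup of this kind is what keeps the per-step cost low enough for a simple proportional eye-in-hand controller, which is the trade-off the abstract highlights.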