scispace - formally typeset
Author

Sungho Jo

Bio: Sungho Jo is an academic researcher from KAIST. The author has contributed to research in topics: Adaptive control & Robotic arm. The author has an h-index of 1, and has co-authored 1 publication receiving 8 citations.

Papers
Journal ArticleDOI
Sungho Jo
TL;DR: A biologically inspired robotic model combines modified feedback-error learning, unsupervised learning, and a viscoelastic actuator system to drive adaptive arm motions, demonstrating the potential usefulness of a biomimetic design of robot skill.

8 citations


Cited by
Journal ArticleDOI
01 Jul 2009
TL;DR: In this article, a two-link planar mechanical manipulator that emulates a human arm is modeled and controlled using an active force control strategy to compensate for the vibration effect.
Abstract: This article focuses on the modelling and control of a two-link planar mechanical manipulator that emulates a human arm. The simplicity of the control algorithm and its ease of computation are particularly highlighted in this study. The arm is subjected to a vibratory excitation at a specific location on the arm while performing trajectory tracking tasks in two-dimensional space, taking into account the presence of 'muscle' elements that are mathematically modelled. A closed-loop control system is applied using an active force control strategy to accommodate the disturbances based on a predefined set of loading and operating conditions to observe the system responses. Results of the study demonstrate the effectiveness of the proposed method in compensating for the vibration effect to produce robust and accurate tracking performance of the system. The results may serve as a useful tool in aiding the design and development of a tooling device for use in a mechatronic robot arm or even a human arm (smart glove) where precise and/or robust performance is a critical factor and of considerable importance.

19 citations
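The core of the active force control (AFC) scheme described above is to estimate the disturbance torque from measured acceleration and applied torque, then feed its negative back into the control command. A minimal single-joint sketch (the paper uses a two-link arm; the inertia, gains, and sinusoidal "vibratory excitation" below are all illustrative assumptions, not the paper's values):

```python
import math

def simulate_afc(steps=4000, dt=0.001, use_afc=True):
    """Single-joint sketch of active force control (AFC).

    The disturbance estimate IN*alpha - tau (estimated inertia times
    measured acceleration minus applied torque) is fed back with a
    one-step delay to cancel an external vibration. Returns the
    worst-case tracking error after transients settle."""
    I = 0.5             # true link inertia (kg m^2), illustrative
    IN = 0.5            # inertia estimate used by the AFC loop
    kp, kd = 100.0, 20.0  # PD tracking gains, illustrative
    theta, omega = 0.0, 0.0
    theta_ref = 1.0       # step reference position (rad)
    dist_comp = 0.0       # AFC compensation torque from previous step
    max_err = 0.0
    for k in range(steps):
        t = k * dt
        disturbance = 2.0 * math.sin(40.0 * t)  # vibratory excitation
        tau = kp * (theta_ref - theta) - kd * omega
        if use_afc:
            tau += dist_comp
        alpha = (tau + disturbance) / I          # measured acceleration
        dist_comp = -(IN * alpha - tau)          # AFC disturbance estimate
        omega += alpha * dt                      # explicit Euler integration
        theta += omega * dt
        if t > 2.0:                              # ignore the PD transient
            max_err = max(max_err, abs(theta_ref - theta))
    return max_err
```

Running the loop with and without the AFC term shows the residual vibration in the tracking error shrinking when the disturbance estimate is fed back, which is the qualitative effect the abstract reports.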

Journal ArticleDOI
TL;DR: A Particle Swarm Optimization technique is proposed to reduce computation time and bring the output closer to the true (experimentally obtained) value.
Abstract: Robotics plays an important role in modern automation, allowing humans to carry out hazardous tasks from a safe distance. To obtain effective results, the system that eases the human task must be designed carefully and its bottlenecks eliminated. Existing work considered only static parameters, which are not sufficient to obtain an optimized value. In our previous work we therefore considered both static and dynamic parameters of the robotic arm gearbox model and applied a genetic algorithm, obtaining better results than the existing work. However, the genetic algorithm alone is not sufficient: its computation process takes considerable time, and its output is not close enough to the true value. To address these issues, a more suitable algorithm is needed to achieve results more efficient than both the existing and our previous works. In this paper, we propose a Particle Swarm Optimization technique that reduces computation time and brings the output closer to the true (i.e., experimentally obtained) value.

5 citations
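Particle Swarm Optimization, as used above, iteratively moves a population of candidate solutions toward each particle's personal best and the swarm's global best. A minimal sketch of the standard algorithm (the quadratic objective below is a hypothetical stand-in for the paper's gearbox error function, whose details are not given here):

```python
import random

def pso(objective, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5):
    """Minimise `objective` over `dim` variables clipped to `bounds`.

    w is the inertia weight; c1/c2 weight the pulls toward each
    particle's personal best and the swarm's global best."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest_val, gbest = min(zip(pbest_val, (p[:] for p in pbest)))
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:           # update personal best
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val < gbest_val:          # update global best
                    gbest_val, gbest = val, pos[i][:]
    return gbest, gbest_val

random.seed(0)
# Hypothetical objective: squared error between simulated and measured
# response, minimised at x = (1, 1, 1).
best, err = pso(lambda x: sum((xi - 1.0) ** 2 for xi in x),
                dim=3, bounds=(-5.0, 5.0))
```

Compared with a genetic algorithm, each PSO iteration needs only vector updates rather than selection, crossover, and mutation, which is the computation-time advantage the abstract claims.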

Journal ArticleDOI
TL;DR: A newly developed learning strategy ('learning by averaging') shows consistent success when the motor task is constrained by special requirements; the results also indicate a general superiority of DIM when combined with abstract recurrent neural networks.
Abstract: This paper focuses on adaptive motor control in the kinematic domain. Several motor-learning strategies from the literature are adopted to kinematic problems: ‘feedback-error learning’, ‘distal supervised learning’, and ‘direct inverse modelling’ (DIM). One of these learning strategies, DIM, is significantly enhanced by combining it with abstract recurrent neural networks. Moreover, a newly developed learning strategy (‘learning by averaging’) is presented in detail. The performance of these learning strategies is compared with different learning tasks on two simulated robot setups (a robot-camera-head and a planar arm). The results indicate a general superiority of DIM if combined with abstract recurrent neural networks. Learning by averaging shows consistent success if the motor task is constrained by special requirements.

2 citations
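Direct inverse modelling (DIM), one of the strategies compared above, learns an inverse kinematic map by "motor babbling": issue random joint commands, observe the resulting positions, and train a model from position back to command. A minimal sketch on a planar two-link arm, using nearest-neighbour lookup as a hypothetical stand-in for the paper's recurrent-network learner (link lengths, sample counts, and the restricted joint range are illustrative assumptions; the range restriction sidesteps DIM's well-known difficulty with averaging over multiple valid joint solutions):

```python
import math
import random

def forward(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link arm (joint angles in rad)."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Motor babbling: record (observed position -> issued command) pairs.
random.seed(1)
table = []
for _ in range(5000):
    q1 = random.uniform(0.0, math.pi / 2)  # one elbow configuration only,
    q2 = random.uniform(0.0, math.pi / 2)  # keeping the inverse single-valued
    table.append((forward(q1, q2), (q1, q2)))

def inverse(target):
    """DIM query: return the command whose observed position was closest."""
    return min(table,
               key=lambda rec: (rec[0][0] - target[0]) ** 2
                             + (rec[0][1] - target[1]) ** 2)[1]

target = (1.0, 1.2)                 # a reachable point in the sampled range
q = inverse(target)
reached = forward(*q)
err = math.hypot(reached[0] - target[0], reached[1] - target[1])
```

Replaying the recovered command through the forward model lands near the target, which is the basic competence DIM acquires without ever being given the analytic inverse.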