scispace - formally typeset
Author

Hsien-Chung Lin

Bio: Hsien-Chung Lin is an academic researcher from the University of California, Berkeley. The author has contributed to research in the topics Robot and GRASP. The author has an h-index of 8 and has co-authored 23 publications receiving 227 citations.

Papers
Proceedings ArticleDOI
01 Aug 2016
TL;DR: An autonomous alignment method based on force/torque measurement before the insertion phase is proposed, allowing the robot to correct misalignment autonomously before applying traditional assembly methods to perform insertions.
Abstract: In the past years, many methods have been developed for robotic peg-hole-insertion to automate the assembly process. However, many of them are based on the assumption that the peg and hole are well aligned before insertion starts. In practice, if there is a large pose (position/orientation) misalignment, the peg and hole may suffer from a three-point contact condition where traditional assembly methods cannot work. To deal with this problem, this paper proposes an autonomous alignment method based on force/torque measurement before the insertion phase. A three-point contact model is built, and the pose misalignment between the peg and hole is estimated by force and geometric analysis. With the estimated values, the robot can autonomously correct the misalignment before applying traditional assembly methods to perform insertions. A series of experiments on a FANUC industrial robot and an H7h7 tolerance peg-hole testbed validate the effectiveness of the proposed method. Experimental results show that the robot is able to perform peg-hole-insertion from three-point contact conditions with a 96% success rate.

67 citations
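The core geometric idea in the paper above, recovering pose information from force/torque readings, can be illustrated with a much-simplified single-point-contact sketch. The paper's actual three-point contact model and geometric analysis are more involved; the function below is a hypothetical simplification only.

```python
import numpy as np

def contact_lever_arm(force, torque):
    """Distance from the F/T sensor frame to a single point contact.

    For a pure point contact, torque = r x force, so the perpendicular
    distance from the sensor origin to the contact line of action is
    ||torque|| / ||force||. This is a simplification of the paper's
    three-point contact analysis, shown only to illustrate how pose
    information can be recovered from wrench measurements.
    """
    f = np.linalg.norm(force)
    if f < 1e-9:
        raise ValueError("force magnitude too small to estimate contact")
    return np.linalg.norm(torque) / f

# Example: 10 N lateral force producing 0.5 N*m of torque
r = contact_lever_arm(np.array([10.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.5]))
# r == 0.05 m
```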

Proceedings ArticleDOI
29 Sep 2016
TL;DR: The original LfD approach is modified to suit industrial robots, and a new demonstration tool is designed to acquire human demonstration data; peg-hole-insertion experiments validate the performance of the proposed learning method.
Abstract: Programming robotic assembly tasks usually requires delicate force tuning. In contrast, humans can accomplish assembly tasks with much less time and fewer trials. It will be a great benefit if robots can learn the inherent human skill of force control and apply it autonomously. Recent works on Learning from Demonstration (LfD) have shown the possibility of teaching robots by human demonstration. The basic idea is to collect the force and corrective velocity that the human applies during assembly, and then use them to regress a proper gain for the robot admittance controller. However, many of the LfD methods are tested on collaborative robots with compliant joints and relatively large assembly clearance. For industrial robots, the non-backdrivable mechanism and strict tolerance requirements make the assembly tasks more challenging. This paper modifies the original LfD to be suitable for industrial robots. A new demonstration tool is designed to acquire the human demonstration data. The force control gains are learned by Gaussian Mixture Regression (GMR) and the closed-loop stability is analysed. A series of peg-hole-insertion experiments with H7h7 tolerance on a FANUC manipulator validate the performance of the proposed learning method.

62 citations
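The gain-learning step described above can be sketched with a minimal Gaussian Mixture Regression: fit a GMM on joint (input, output) samples, then condition on the input to regress the output. The demonstration data below is synthetic, and the linear mapping from contact force to controller gain is a made-up stand-in for the paper's learned admittance gains.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def gmr_predict(gmm, x, in_dim):
    """Gaussian Mixture Regression: E[y | x] under a GMM fitted on [x, y]."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    # E-step: responsibility of each component for the query input x
    resp = np.array([
        w * multivariate_normal.pdf(x, m[:in_dim], c[:in_dim, :in_dim])
        for w, m, c in zip(weights, means, covs)
    ])
    resp /= resp.sum()
    # Blend the per-component conditional means
    y = 0.0
    for r, m, c in zip(resp, means, covs):
        reg = c[in_dim:, :in_dim] @ np.linalg.solve(c[:in_dim, :in_dim], x - m[:in_dim])
        y += r * (m[in_dim:] + reg)
    return y

# Hypothetical demonstration data: contact force -> corrective gain
rng = np.random.default_rng(0)
force = rng.uniform(0, 20, size=(200, 1))
gain = 0.5 + 0.02 * force + rng.normal(0, 0.01, size=(200, 1))
data = np.hstack([force, gain])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(data)
pred = gmr_predict(gmm, np.array([10.0]), in_dim=1)
print(pred)
```

The same conditioning works for vector-valued inputs and outputs by adjusting `in_dim`; the paper additionally analyses closed-loop stability of the resulting admittance controller, which this sketch does not cover.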

Proceedings ArticleDOI
28 Oct 2015
TL;DR: A framework for teaching robot peg-hole-insertion from human demonstration is introduced and a Dimension Reduction and Recovery method is proposed to simplify control policy learning.
Abstract: Peg-hole-insertion is a common operation in industrial production, but autonomous execution by robots has been a big challenge for many years. Current robot programming for this kind of contact problem requires tremendous effort, including delicate trajectory and force tuning. However, humans can accomplish this task with much less time and fewer trials. It will be a great benefit if robots can learn the human skill and apply it autonomously. This paper introduces a framework for teaching robot peg-hole-insertion from human demonstration. A Dimension Reduction and Recovery method is proposed to simplify control policy learning. Gaussian Mixture Regression is utilized to imitate the human skill, and a Dual Stage Force Control strategy is designed for autonomous execution by robots. The effectiveness of the teaching framework is demonstrated by a series of experiments. Copyright © 2015 by ASME.

43 citations
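The abstract does not specify the Dimension Reduction and Recovery method; one common realization of a reduce-then-recover pipeline is PCA via SVD, sketched below on synthetic demonstration data. The dimensions and data here are hypothetical, chosen only to illustrate the round trip.

```python
import numpy as np

# Hypothetical demonstration trajectories: 100 samples of a 6-D wrench
# that actually live near a 2-D subspace (plus small noise).
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 6))
X = latent @ mixing + 0.01 * rng.normal(size=(100, 6))

mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 2  # number of principal directions to keep

def project(x):
    """Dimension reduction: project onto the top-k principal directions."""
    return (x - mean) @ Vt[:k].T

def lift(z):
    """Recovery: map the low-dimensional coordinates back to 6-D."""
    return z @ Vt[:k] + mean

Z = project(X)
X_rec = lift(Z)
err = np.abs(X_rec - X).max()
print(err)  # reconstruction error near the noise floor
```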

Proceedings ArticleDOI
01 Sep 2017
TL;DR: It is shown that the object can be tracked robustly under sensor noise, outliers and massive occlusion; the state estimate is further refined by running a dynamic simulation in parallel, which guarantees that the estimates satisfy the object's physical constraints.
Abstract: To enhance the robotic manipulation of deformable objects, a robust state estimator is proposed to track the object configuration in real time. A Gaussian mixture model (GMM) is constructed to register the object nodes to the noisy point cloud. To deal with occlusion, the coherent point drift (CPD) regularization is applied to the mixture model, so as to maintain the topological structure from previous sequences of data and to infer the object states in occluded areas. The state estimation is further refined by running a dynamic simulation in parallel, which guarantees that the estimates satisfy the object's physical constraints. A series of rope tracking experiments are performed to evaluate the proposed state estimator. It is shown that the object can be tracked robustly under sensor noise, outliers and massive occlusion.

35 citations
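The registration step above can be illustrated with one heavily simplified EM iteration: soft-assign cloud points to rope nodes with a Gaussian kernel, then move each node toward its weighted centroid while staying close to its prior position. This is a toy stand-in for the paper's CPD-regularized GMM, not the published algorithm; the kernel width, outlier weight, and regularization below are arbitrary.

```python
import numpy as np

def em_step(nodes, cloud, sigma=0.05, w_outlier=0.1, reg=0.5):
    """One simplified EM step registering rope nodes to a point cloud.

    E-step: soft-assign each cloud point to the nodes with a Gaussian
    kernel plus a uniform outlier term. M-step: move each node toward
    its weighted centroid, blended (reg) with its previous position so
    occluded nodes keep their prior configuration. A toy stand-in for
    the paper's CPD-regularized GMM.
    """
    d2 = ((cloud[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)  # (N, M)
    k = np.exp(-d2 / (2 * sigma ** 2))
    p = k / (k.sum(1, keepdims=True) + w_outlier)  # responsibilities
    weights = p.sum(0)                             # support per node
    target = (p.T @ cloud) / np.maximum(weights, 1e-9)[:, None]
    # Nodes with little support (occluded) barely move from their prior.
    alpha = (1 - reg) * weights / (weights + 1.0)
    return nodes + alpha[:, None] * (target - nodes)

# Toy example: 5 rope nodes, 100 noisy observations clustered at the nodes
nodes = np.stack([np.linspace(0, 1, 5), np.zeros(5)], axis=1)
rng = np.random.default_rng(2)
cloud = nodes.repeat(20, axis=0) + rng.normal(0, 0.01, (100, 2))
updated = em_step(nodes, cloud)
```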

Proceedings ArticleDOI
01 Aug 2017
TL;DR: A novel real-time velocity based collision avoidance planner is presented to deal with both collision avoidance and reference tracking simultaneously, and an invariant safe set is introduced to exclude the dangerous states that may lead to collision.
Abstract: Safety is a fundamental issue in robotics, especially in the growing application of human-robot interaction (HRI), where collision avoidance is an important consideration. In this paper, a novel real-time velocity-based collision avoidance planner is presented to address this problem. The proposed algorithm provides a solution that deals with both collision avoidance and reference tracking simultaneously. An invariant safe set is introduced to exclude the dangerous states that may lead to collision, and a smoothing function is introduced to adapt to different reference commands and to preserve the invariant property of the safe set. A real-time experiment with a moving obstacle is conducted on a FANUC LR Mate 200iD/7L.

27 citations
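The safe-set idea above can be sketched as a velocity projection: define a safety index, and near the obstacle remove any velocity component that decreases it, so the reference is tracked whenever it is safe. This is a toy illustration of the invariant-set concept, not the paper's planner; the safety index and margin below are made up.

```python
import numpy as np

def safe_velocity(x, v_ref, x_obs, d_min=0.2, margin=0.1):
    """Project a reference velocity onto a safe half-space near an obstacle.

    Safety index: phi(x) = ||x - x_obs||^2 - d_min^2. Inside the margin,
    any velocity component that decreases phi (i.e. drives the robot
    toward the obstacle) is removed, keeping the safe set invariant,
    while the tangential part of the reference is tracked unchanged.
    """
    grad = 2 * (x - x_obs)                       # gradient of phi
    phi = (x - x_obs) @ (x - x_obs) - d_min ** 2
    inward = grad @ v_ref
    if phi < margin and inward < 0:
        v_ref = v_ref - (inward / (grad @ grad)) * grad
    return v_ref

# Robot at (0.25, 0) near an obstacle at the origin, commanded toward it
v = safe_velocity(np.array([0.25, 0.0]), np.array([-1.0, 0.5]), x_obs=np.zeros(2))
# the component toward the obstacle is removed: v == (0.0, 0.5)
```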


Cited by
Proceedings ArticleDOI
01 Sep 2017
TL;DR: In this article, a recurrent neural network trained with reinforcement learning performs a tight-clearance peg-in-hole task with robustness against positional and angular errors in part-mating.
Abstract: The high precision assembly of mechanical parts requires precision that exceeds that of robots. Conventional part-mating methods used in the current manufacturing require numerous parameters to be tediously tuned before deployment. We show how a robot can successfully perform a peg-in-hole task with a tight clearance through training a recurrent neural network with reinforcement learning. In addition to reducing manual effort, the proposed method also shows a better fitting performance with a tighter clearance and robustness against positional and angular errors for the peg-in-hole task. The neural network learns to take the optimal action by observing the sensors of a robot to estimate the system state. The advantages of our proposed method are validated experimentally on a 7-axis articulated robot arm.

230 citations
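The recurrent-policy idea above, a hidden state that summarizes the history of sensor observations and feeds an action readout, can be sketched in a few lines. The sizes and random weights below are arbitrary, and no reinforcement-learning training loop is shown; this is only an illustration of the architecture, not the paper's network.

```python
import numpy as np

class TinyRecurrentPolicy:
    """Minimal recurrent policy: a hidden state accumulates the history
    of force/position observations, and actions are read out linearly.

    A toy illustration of using recurrence to estimate the system state
    from sensor observations; weights here are random, not trained.
    """
    def __init__(self, obs_dim=6, hidden=16, act_dim=3, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0, 0.1, (hidden, obs_dim))
        self.Wh = rng.normal(0, 0.1, (hidden, hidden))
        self.Wa = rng.normal(0, 0.1, (act_dim, hidden))
        self.h = np.zeros(hidden)

    def act(self, obs):
        self.h = np.tanh(self.Wx @ obs + self.Wh @ self.h)  # state estimate
        return self.Wa @ self.h                              # continuous action

policy = TinyRecurrentPolicy()
a = policy.act(np.zeros(6))  # zero observation, zero state -> zero action
```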

Journal ArticleDOI
16 Apr 2018-Robotics
TL;DR: The main focus is placed on how to demonstrate the example behaviors to the robot in assembly operations, how to extract the manipulation features for robot learning, and how to generate imitative behaviors.

158 citations

Journal ArticleDOI
28 Jan 2020
TL;DR: In this article, a state-space representation of the physical system the robot aims to control is used for visual manipulation of deformable linear objects; the high-dimensional state of the object is estimated from raw images via self-supervised training, avoiding expensive annotations on real data, and a dynamics model that is accurate, generalizable, and efficient to compute is learned.
Abstract: We demonstrate model-based, visual robot manipulation of deformable linear objects. Our approach is based on a state-space representation of the physical system that the robot aims to control. This choice has multiple advantages, including the ease of incorporating physics priors in the dynamics model and perception model, and the ease of planning manipulation actions. In addition, physical states can naturally represent object instances of different appearances. Therefore, dynamics in the state space can be learned in one setting and directly used in other visually different settings. This is in contrast to dynamics learned in pixel space or latent space, where generalization to visual differences is not guaranteed. Challenges in taking the state-space approach are the estimation of the high-dimensional state of a deformable object from raw images, where annotations are very expensive on real data, and the search for a dynamics model that is accurate, generalizable, and efficient to compute. We are the first to demonstrate self-supervised training of rope state estimation on real images, without requiring expensive annotations. This is achieved by our novel self-supervised learning objective, which is generalizable across a wide range of visual appearances. With estimated rope states, we train a fast and differentiable neural network dynamics model that encodes the physics of mass-spring systems. Our method has a higher accuracy in predicting future states compared to models that do not involve explicit state estimation and do not use any physics prior, while using only 3% of the training data. We also show that our approach achieves more efficient manipulation, both in simulation and on a real robot, when used within a model predictive controller.

110 citations
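The mass-spring physics prior mentioned above can be sketched as one explicit-Euler step of a spring chain under gravity. This is a minimal, differentiable stand-in for the kind of dynamics model the paper describes, not the authors' trained network; the stiffness, damping, and time step below are arbitrary.

```python
import numpy as np

def rope_step(pos, vel, k=50.0, rest=0.1, damping=0.9, dt=0.01, g=-9.8):
    """One explicit-Euler step of a 2-D mass-spring chain (unit masses).

    Each consecutive pair of nodes is joined by a linear spring with
    rest length `rest`; gravity acts on every node in the y direction.
    """
    seg = pos[1:] - pos[:-1]                       # (M-1, 2) segment vectors
    length = np.linalg.norm(seg, axis=1, keepdims=True)
    direction = seg / np.maximum(length, 1e-9)
    f_spring = k * (length - rest) * direction     # pulls stretched pairs together
    force = np.zeros_like(pos)
    force[:-1] += f_spring                         # on node i: toward node i+1
    force[1:] -= f_spring                          # on node i+1: toward node i
    force[:, 1] += g                               # gravity
    vel = damping * (vel + dt * force)
    return pos + dt * vel, vel

# A horizontal 6-node rope at rest length starts to fall under gravity
pos = np.stack([np.linspace(0, 0.5, 6), np.zeros(6)], axis=1)
vel = np.zeros_like(pos)
pos, vel = rope_step(pos, vel)
```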

Journal ArticleDOI
TL;DR: A model-driven deep deterministic policy gradient algorithm is proposed to accomplish the assembly task through the learned policy without analyzing the contact states, and a fuzzy reward system is utilized for the complex assembly process to improve the learning efficiency.
Abstract: The automatic completion of multiple peg-in-hole assembly tasks by robots remains a formidable challenge because the traditional control strategies require a complex analysis of the contact model. In this paper, the assembly task is formulated as a Markov decision process, and a model-driven deep deterministic policy gradient algorithm is proposed to accomplish the assembly task through the learned policy without analyzing the contact states. In our algorithm, the learning process is driven by a simple traditional force controller. In addition, a feedback exploration strategy is proposed to ensure that our algorithm can efficiently explore the optimal assembly policy and avoid risky actions, which can address the data efficiency and guarantee stability in realistic assembly scenarios. To improve the learning efficiency, we utilize a fuzzy reward system for the complex assembly process. Then, simulations and realistic experiments of a dual peg-in-hole assembly demonstrate the effectiveness of the proposed algorithm. The advantages of the fuzzy reward system and feedback exploration strategy are validated by comparing the performances of different cases in simulations and experiments.

105 citations
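The fuzzy reward idea above can be sketched by blending smooth membership functions, for example one for "deep enough" insertion and one for "gentle contact", with a fuzzy AND (minimum). The membership shapes and parameters below are hypothetical; the paper's actual membership functions and rule base are not given in the abstract.

```python
import numpy as np

def fuzzy_reward(depth, depth_goal, force, force_limit):
    """Fuzzy-style shaped reward for an insertion task.

    Combines a ramp membership for insertion progress with a Gaussian
    membership for low contact force using the min operator (fuzzy AND).
    A hypothetical illustration of a fuzzy reward system for assembly.
    """
    progress = np.clip(depth / depth_goal, 0.0, 1.0)   # ramp membership
    gentleness = np.exp(-(force / force_limit) ** 2)   # Gaussian membership
    return min(progress, gentleness)                   # fuzzy AND

# Fully inserted with zero contact force -> maximal reward
r = fuzzy_reward(depth=0.02, depth_goal=0.02, force=0.0, force_limit=20.0)
# r == 1.0
```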