Journal ArticleDOI

Multi-Sensor Perception Strategy to Enhance Autonomy of Robotic Operation for Uncertain Peg-in-Hole Task

31 May 2021-Sensors (Multidisciplinary Digital Publishing Institute)-Vol. 21, Iss: 11, pp 3818
TL;DR: In this article, a Bayesian-network-based strategy is presented to seamlessly combine multiple heterogeneous sensory data, as humans do; an interactive exploration method implemented with hybrid Monte Carlo sampling algorithms and particle filtering identifies initial estimates of the object features, and a memory-adjustment method and an inertial-thinking method correct the target position and shape features of the object, respectively.
Abstract: The peg-in-hole task with uncertain object features is a typical case of robotic operation in real-world unstructured environments. It is nontrivial to realize object perception and operational decisions autonomously under the visual occlusion and real-time constraints common to such tasks. In this paper, a Bayesian-network-based strategy is presented to seamlessly combine multiple heterogeneous sensory data, as humans do. In the proposed strategy, an interactive exploration method implemented by hybrid Monte Carlo sampling algorithms and particle filtering is designed to identify initial estimates of the object features, and a memory-adjustment method and an inertial-thinking method are introduced to correct the target position and shape features of the object, respectively. Based on Dempster-Shafer evidence theory (D-S theory), a fusion decision strategy is designed using probabilistic models of forces and positions, which guides the robot motion after each acquisition of the estimated features of the object. It also enables the robot to judge whether the desired operation target has been achieved or the feature estimate needs to be updated. Meanwhile, a pliability model is introduced to repeatedly perform exploration, planning, and execution steps, reducing the interaction forces and the number of explorations. The effectiveness of the strategy is validated in simulations and in a physical robot task.
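
As an illustration of the fusion step described above, the sketch below shows how Dempster's rule of combination could merge force-based and position-based evidence about whether the peg is aligned with the hole. The hypothesis set, mass values, and function names are illustrative assumptions, not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses
    to belief mass) with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (h1, v1), (h2, v2) in product(m1.items(), m2.items()):
        inter = h1 & h2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Evidence sources are in complete conflict")
    # Normalise by the non-conflicting mass
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Hypotheses (illustrative): the peg is aligned with the hole vs. the feature estimate is wrong.
ALIGNED, MISALIGNED = frozenset({"aligned"}), frozenset({"misaligned"})
EITHER = ALIGNED | MISALIGNED  # ignorance

# Hypothetical evidence from a force model and a position model.
m_force = {ALIGNED: 0.6, MISALIGNED: 0.1, EITHER: 0.3}
m_position = {ALIGNED: 0.5, MISALIGNED: 0.2, EITHER: 0.3}

fused = dempster_combine(m_force, m_position)
print(fused)  # fused belief used to decide: continue insertion or re-estimate features
```
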
Citations
Journal ArticleDOI
01 Feb 2022-Sensors
TL;DR: The system model is established from data and used for online planning of the robot motion, and an online learning algorithm is introduced to tune the pretrained model with real-time data from the peeling-process experiments to cover the uncertainties of the real process.
Abstract: Autonomous planning of robotic contact-rich manipulation has long been a challenging problem. Automatic peeling of glass substrates of LCD flat-panel displays is a typical contact-rich manipulation task, which requires extremely safe handling throughout the manipulation process. To peel glass substrates automatically, the system model is established from data and used for the online planning of the robot motion in this paper. A simulation environment is designed to pretrain the process model with a deep learning-based neural network structure, avoiding expensive and time-consuming collection of real-time data. Then, an online learning algorithm is introduced to tune the pretrained model according to the real-time data from the peeling process experiments, covering the uncertainties of the real process. Finally, an Online Learning Model Predictive Path Integral (OL-MPPI) algorithm is proposed for the optimal trajectory planning of the robot. The performance of the algorithm was validated through glass-substrate peeling experiments.
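
For orientation, the sketch below shows the core update of a generic Model Predictive Path Integral (MPPI) planner of the kind OL-MPPI builds on; the `dynamics` and `cost` callables stand in for the learned process model and task cost and are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def mppi_plan(dynamics, cost, x0, u_nominal, n_samples=256, noise_std=0.1, temperature=1.0):
    """One MPPI update of a nominal control sequence.
    dynamics(x, u) -> next state; cost(x, u) -> scalar running cost."""
    horizon, u_dim = u_nominal.shape
    noise = np.random.normal(0.0, noise_std, size=(n_samples, horizon, u_dim))
    total_cost = np.zeros(n_samples)
    for k in range(n_samples):
        x = x0.copy()
        for t in range(horizon):
            u = u_nominal[t] + noise[k, t]
            total_cost[k] += cost(x, u)
            x = dynamics(x, u)  # roll out the (learned) process model
    # Exponentially weight low-cost rollouts (path-integral weighting)
    beta = total_cost.min()
    weights = np.exp(-(total_cost - beta) / temperature)
    weights /= weights.sum()
    # Weighted average of the sampled perturbations updates the plan
    return u_nominal + np.einsum("k,ktd->td", weights, noise)
```

In OL-MPPI as described above, the pretrained model standing behind `dynamics` would additionally be tuned online from the real peeling data before each replanning step.
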

1 citation

References
Journal ArticleDOI
21 Jun 2019-Science
TL;DR: The progress made in robotics to emulate humans’ ability to grab, hold, and manipulate objects is reviewed, with a focus on designing humanlike hands capable of using tools.
Abstract: BACKGROUND Humans have a fantastic ability to manipulate objects of various shapes, sizes, and materials and can control the objects' position in confined spaces with the advanced dexterity capabilities of our hands. Building machines inspired by human hands, with the functionality to autonomously pick up and manipulate objects, has always been an essential component of robotics. The first robot manipulators date back to the 1960s and are some of the first robotic devices ever constructed. In these early days, robotic manipulation consisted of carefully prescribed movement sequences that a robot would execute with no ability to adapt to a changing environment. As time passed, robots gradually gained the ability to automatically generate movement sequences, drawing on artificial intelligence and automated reasoning. Robots would stack boxes according to size, weight, and so forth, extending beyond geometric reasoning. This task also required robots to handle errors and uncertainty in sensing at run time, given that the slightest imprecision in the position and orientation of stacked boxes might cause the entire tower to topple. Methods from control theory also became instrumental for enabling robots to comply with the environment's natural uncertainty by empowering them to adapt exerted forces upon contact. The ability to stably vary forces upon contact expanded robots' manipulation repertoire to more-complex tasks, such as inserting pegs in holes or hammering. However, none of these actions truly demonstrated fine or in-hand manipulation capabilities, and they were commonly performed using simple two-fingered grippers. To enable multipurpose fine manipulation, roboticists focused their efforts on designing humanlike hands capable of using tools. Wielding a tool in-hand became a problem of its own, and a variety of advanced algorithms were developed to facilitate stable holding of objects and provide optimality guarantees. Because optimality was difficult to achieve in a stochastic environment, from the 1990s onward researchers aimed to increase the robustness of object manipulation at all levels. These efforts initiated the design of sensors and hardware for improved control of hand–object contacts. Studies that followed were focused on robust perception for coping with object occlusion and noisy measurements, as well as on adaptive control approaches to infer an object's physical properties, so as to handle objects whose properties are unknown or change as a result of manipulation.

ADVANCES Roboticists are still working to develop robots capable of sorting and packaging objects, chopping vegetables, and folding clothes in unstructured and dynamic environments. Robots used for modern manufacturing have accomplished some of these tasks in structured settings that still require fences between the robots and human operators to ensure safety. Ideally, robots should be able to work side by side with humans, offering their strength to carry heavy loads while presenting no danger. Over the past decade, robots have gained new levels of dexterity. This enhancement is due to breakthroughs in mechanics with sensors for perceiving touch along a robot's body and new mechanics for soft actuation to offer natural compliance. Most notably, this development leverages the immense progress in machine learning to encapsulate models of uncertainty and support further advances in adaptive and robust control. Learning to manipulate in real-world settings is costly in terms of both time and hardware. To further elaborate on data-driven methods but avoid generating examples with real, physical systems, many researchers use simulation environments. Still, grasping and dexterous manipulation require a level of reality that existing simulators are not yet able to deliver; for example, in the case of modeling contacts for soft and deformable objects. Two roads are hence pursued: The first draws inspiration from the way humans acquire interaction skills and prompts robots to learn skills from observing humans performing complex manipulation. This allows robots to acquire manipulation capabilities in only a few trials. However, generalizing the acquired knowledge to apply to actions that differ from those previously demonstrated remains difficult. The second road constructs databases of real object manipulation, with the goal to better inform the simulators and generate examples that are as realistic as possible. Yet achieving realistic simulation of friction, material deformation, and other physical properties may not be possible anytime soon, and real experimental evaluation will be unavoidable for learning to manipulate highly deformable objects.

OUTLOOK Despite many years of software and hardware development, achieving dexterous manipulation capabilities in robots remains an open problem, albeit an interesting one, given that it necessitates improved understanding of human grasping and manipulation techniques. We build robots to automate tasks but also to provide tools for humans to easily perform repetitive and dangerous tasks while avoiding harm. Achieving robust and flexible collaboration between humans and robots is hence the next major challenge. Fences that currently separate humans from robots will gradually disappear, and robots will start manipulating objects jointly with humans. To achieve this objective, robots must become smooth and trustable partners that interpret humans' intentions and respond accordingly. Furthermore, robots must acquire a better understanding of how humans interact and must attain real-time adaptation capabilities. There is also a need to develop robots that are safe by design, with an emphasis on soft and lightweight structures as well as control and planning methodologies based on multisensory feedback.

371 citations

Journal ArticleDOI
04 Jul 2018
TL;DR: In this article, an end-to-end action-conditional model is proposed to learn regrasping policies from raw visuo-tactile data, and a deep, multimodal convolutional network is used to predict the outcome of a candidate grasp adjustment and then execute a grasp by iteratively selecting the most promising actions.
Abstract: For humans, the process of grasping an object relies heavily on rich tactile feedback. Most recent robotic grasping work, however, has been based only on visual input, and thus cannot easily benefit from feedback after initiating contact. In this letter, we investigate how a robot can learn to use tactile information to iteratively and efficiently adjust its grasp. To this end, we propose an end-to-end action-conditional model that learns regrasping policies from raw visuo-tactile data. This model - a deep, multimodal convolutional network - predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions. Our approach requires neither calibration of the tactile sensors nor any analytical modeling of contact forces, thus reducing the engineering effort required to obtain efficient grasping policies. We train our model with data from about 6450 grasping trials on a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger. Across extensive experiments, our approach outperforms a variety of baselines at 1) estimating grasp adjustment outcomes, 2) selecting efficient grasp adjustments for quick grasping, and 3) reducing the amount of force applied at the fingers, while maintaining competitive performance. Finally, we study the choices made by our model and show that it has successfully acquired useful and interpretable grasping behaviors.
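
The sketch below illustrates the general shape of an action-conditional outcome predictor with greedy candidate selection, as described above; the network sizes, action dimensionality, and function names are illustrative assumptions and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class GraspOutcomePredictor(nn.Module):
    """Hypothetical action-conditional model: encodes an RGB image and a tactile
    image, concatenates a candidate grasp adjustment, and predicts success probability."""
    def __init__(self, action_dim=4):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.vision_enc = encoder()
        self.tactile_enc = encoder()
        self.head = nn.Sequential(
            nn.Linear(32 + 32 + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, rgb, tactile, action):
        feat = torch.cat([self.vision_enc(rgb), self.tactile_enc(tactile), action], dim=-1)
        return torch.sigmoid(self.head(feat))  # predicted grasp success probability

def best_adjustment(model, rgb, tactile, candidate_actions):
    """Greedy regrasping step: score candidate adjustments and pick the most promising one."""
    with torch.no_grad():
        scores = model(rgb.expand(len(candidate_actions), -1, -1, -1),
                       tactile.expand(len(candidate_actions), -1, -1, -1),
                       candidate_actions)
    return candidate_actions[scores.argmax()]
```
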

222 citations

Journal ArticleDOI
TL;DR: In this article, self-supervision is used to learn a compact and multimodal representation of sensory inputs, which can then be used to improve the sample efficiency of policy learning.
Abstract: Contact-rich manipulation tasks in unstructured environments often require both haptic and visual feedback. It is nontrivial to manually design a robot controller that combines these modalities, which have very different characteristics. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally intractable to train directly on real robots due to sample complexity. In this article, we use self-supervision to learn a compact and multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning. Evaluating our method on a peg insertion task, we show that it generalizes over varying geometries, configurations, and clearances, while being robust to external perturbations. We also systematically study different self-supervised learning objectives and representation learning architectures. Results are presented in simulation and on a physical robot.
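
As a rough illustration of self-supervised multimodal representation learning, the sketch below pairs a small fusion encoder with a time-alignment ("pairing") objective whose labels come for free from the data; the paper studies several objectives and architectures, and all names and dimensions here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Hypothetical encoder: compresses visual features and a force/torque window
    into one compact latent vector usable as a policy input."""
    def __init__(self, vis_dim=128, force_dim=32, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vis_dim + force_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim))

    def forward(self, vis_feat, force_feat):
        return self.net(torch.cat([vis_feat, force_feat], dim=-1))

# Self-supervised "pairing" objective: classify whether the visual and haptic
# inputs were recorded at the same timestep (no manual labels required).
encoder = MultimodalEncoder()
pair_head = nn.Linear(32, 1)
bce = nn.BCEWithLogitsLoss()

def pairing_loss(vis_feat, force_feat, aligned):
    """aligned is a (batch, 1) float tensor: 1 for matching pairs, 0 for shuffled ones."""
    z = encoder(vis_feat, force_feat)
    return bce(pair_head(z), aligned)
```
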

73 citations

Journal ArticleDOI
TL;DR: In this paper, the combined and interactive effects of the bolt-hole fit conditions and the preloads of the fasteners on the load carrying capacity of single-lap composite-to-titanium bolted joints have been investigated both experimentally and numerically.

49 citations

Journal ArticleDOI
04 Mar 2021
TL;DR: In this paper, the authors introduce a novel approach for simulating a GelSight tactile sensor in the commonly used Gazebo simulator, which can produce high-resolution images from depth-maps captured by a simulated optical sensor, and reconstruct the interaction between the touched object and an opaque soft membrane.
Abstract: Most current works in Sim2Real learning for robotic manipulation tasks leverage camera vision that may be significantly occluded by robot hands during the manipulation. Tactile sensing offers complementary information to vision and can compensate for the information loss caused by the occlusions. However, the use of tactile sensing in Sim2Real research has been restricted because no simulated tactile sensors were available. To mitigate the gap, we introduce a novel approach for simulating a GelSight tactile sensor in the commonly used Gazebo simulator. Similar to the real GelSight sensor, the simulated sensor can produce high-resolution images from depth maps captured by a simulated optical sensor, and reconstruct the interaction between the touched object and an opaque soft membrane. It can indirectly sense forces, geometry, texture, and other properties of the object and enables Sim2Real learning with tactile sensing. Preliminary experimental results have shown that the simulated sensor could generate realistic outputs similar to the ones captured by a real GelSight sensor. All the materials used in this letter are available at https://danfergo.github.io/gelsight-simulation.
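
For intuition, the sketch below renders a GelSight-like RGB image from a contact depth map by shading the deformed membrane's surface normals under coloured directional lights; this is a generic depth-to-tactile shading scheme with illustrative light positions and colours, not the paper's Gazebo pipeline.

```python
import numpy as np

def tactile_image_from_depth(depth, light_dirs, light_colors):
    """Render a GelSight-like RGB image from a contact depth map by shading
    per-pixel surface normals under coloured directional lights."""
    # Surface gradients -> per-pixel normals of the elastomer membrane
    gy, gx = np.gradient(depth)
    normals = np.dstack([-gx, -gy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    img = np.zeros(depth.shape + (3,))
    for direction, color in zip(light_dirs, light_colors):
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        diffuse = np.clip(normals @ d, 0.0, 1.0)           # Lambertian term
        img += diffuse[..., None] * np.asarray(color)      # tint by light colour
    return np.clip(img, 0.0, 1.0)

# Example with three lights roughly mimicking a GelSight illumination layout (values are illustrative).
depth = np.zeros((240, 320))  # flat membrane, no contact
lights = [(1, 0, 1), (-0.5, 0.87, 1), (-0.5, -0.87, 1)]
colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
rgb = tactile_image_from_depth(depth, lights, colors)
```
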

47 citations