Author

Zilin Si

Bio: Zilin Si is an academic researcher. The author has contributed to research in the topics Tactile sensor and Angular displacement, has an h-index of 1, and has co-authored 3 publications receiving 2 citations.

Papers
Posted Content
TL;DR: Taxim is a realistic and high-speed simulation model for the vision-based tactile sensor GelSight, which uses a piece of soft elastomer as the contact medium and embeds optical structures to capture its deformation.
Abstract: Simulation is widely used in robotics for system verification and large-scale data collection. However, simulating sensors, including tactile sensors, has been a long-standing challenge. In this paper, we propose Taxim, a realistic and high-speed simulation model for a vision-based tactile sensor, GelSight. A GelSight sensor uses a piece of soft elastomer as the medium of contact and embeds optical structures to capture the deformation of the elastomer, which infers the geometry and forces applied at the contact surface. We propose an example-based method for simulating GelSight: we simulate the optical response to the deformation with a polynomial look-up table. This table maps the deformed geometries to pixel intensity sampled by the embedded camera. In order to simulate the surface markers' motion that is caused by the surface stretch of the elastomer, we apply the linear elastic deformation theory and the superposition principle. The simulation model is calibrated with less than 100 data points from a real sensor. The example-based approach enables the model to easily migrate to other GelSight sensors or its variations. To the best of our knowledge, our simulation framework is the first to incorporate marker motion field simulation that derives from elastomer deformation together with the optical simulation, creating a comprehensive and computationally efficient tactile simulation framework. Experiments reveal that our optical simulation has the lowest pixel-wise intensity errors compared to prior work and can run online with CPU computing.

15 citations
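
The polynomial look-up idea can be sketched in a few lines. The following is a minimal illustration, not Taxim's actual implementation: it assumes per-pixel surface gradients (gx, gy) are regressed against observed pixel intensities with a hypothetical second-order basis, and the function names and calibration data are placeholders.

```python
import numpy as np

def poly_features(gx, gy):
    # Second-order polynomial basis over the surface gradients.
    return np.stack([np.ones_like(gx), gx, gy, gx * gy, gx**2, gy**2], axis=-1)

def calibrate(gx, gy, intensity):
    # Fit per-channel polynomial coefficients from calibration samples:
    # gx, gy are (N,) surface gradients, intensity is (N, 3) observed RGB.
    A = poly_features(gx, gy)                        # (N, 6) design matrix
    coeffs, *_ = np.linalg.lstsq(A, intensity, rcond=None)
    return coeffs                                    # (6, 3)

def render(gx, gy, coeffs):
    # Look up pixel intensities for the gradients of a deformed height map.
    return poly_features(gx, gy) @ coeffs

# Placeholder calibration set: fewer than 100 points, as in the paper.
rng = np.random.default_rng(0)
gx, gy = rng.normal(size=(2, 80))
observed = rng.uniform(0, 255, size=(80, 3))         # stand-in intensities
coeffs = calibrate(gx, gy, observed)
simulated = render(gx, gy, coeffs)
```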

Posted Content
TL;DR: In this paper, the authors propose a model-based algorithm that detects rotational patterns on the contact surface and measures rotational displacement with a GelSight sensor; integrated into a closed-loop regrasping framework, it detects rotational grasp failure at an early stage and drives the robot to a stable grasp pose.
Abstract: Rotational displacement about the grasping point is a common grasp failure when an object is grasped at a location away from its center of gravity. Tactile sensors with soft surfaces, such as GelSight sensors, can detect the rotation patterns on the contacting surfaces when the object rotates. In this work, we propose a model-based algorithm that detects those rotational patterns and measures rotational displacement using the GelSight sensor. We also integrate the rotation detection feedback into a closed-loop regrasping framework, which detects the rotational failure of grasp in an early stage and drives the robot to a stable grasp pose. We validate our proposed rotation detection algorithm and grasp-regrasp system on self-collected dataset and online experiments to show how our approach accurately detects the rotation and increases grasp stability.

4 citations
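
The core measurement, an in-plane rotation angle recovered from tracked marker positions, can be illustrated with a least-squares (Kabsch/Procrustes) fit. This is only a sketch under that assumption; the paper's model-based detection and regrasp control are more involved, and the function name is hypothetical.

```python
import numpy as np

def estimate_rotation(markers_before, markers_after):
    # Least-squares in-plane rotation (radians) between two (N, 2) marker sets.
    p = markers_before - markers_before.mean(axis=0)
    q = markers_after - markers_after.mean(axis=0)
    h = p.T @ q                          # 2x2 cross-covariance
    u, _, vt = np.linalg.svd(h)
    if np.linalg.det(vt.T @ u.T) < 0:    # guard against reflections
        vt[-1] *= -1
    r = vt.T @ u.T                       # optimal rotation matrix
    return np.arctan2(r[1, 0], r[0, 0])
```

A sustained growth of this angle over consecutive tactile frames would then be the kind of signal that triggers the regrasp behavior described in the abstract.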

Posted Content
TL;DR: In this paper, the authors propose an incremental shape mapping method that uses a GelSight tactile sensor and a depth camera to build 3-D reconstructions of household objects, recovering local shape from tactile images via a learned model trained in simulation.
Abstract: Knowledge of 3-D object shape is of great importance to robot manipulation tasks, but may not be readily available in unstructured environments. While vision is often occluded during robot-object interaction, high-resolution tactile sensors can give a dense local perspective of the object. However, tactile sensors have limited sensing area and the shape representation must faithfully approximate non-contact areas. In addition, a key challenge is efficiently incorporating these dense tactile measurements into a 3-D mapping framework. In this work, we propose an incremental shape mapping method using a GelSight tactile sensor and a depth camera. Local shape is recovered from tactile images via a learned model trained in simulation. Through efficient inference on a spatial factor graph informed by a Gaussian process, we build an implicit surface representation of the object. We demonstrate visuo-tactile mapping in both simulated and real-world experiments, to incrementally build 3-D reconstructions of household objects.
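
The implicit-surface idea can be sketched with plain Gaussian-process regression: sparse contact and depth points become observations of a field whose zero level set approximates the object surface. The kernel, targets, and hyperparameters below are illustrative assumptions; the paper's spatial factor graph, which makes this scale incrementally, is not reproduced.

```python
import numpy as np

def rbf(a, b, length=0.05):
    # Squared-exponential kernel between point sets a (N, 3) and b (M, 3).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def implicit_surface(surface_pts, exterior_pts, query, noise=1e-4):
    # Posterior mean of a field that is 0 on the surface and +1 outside it;
    # query points where the mean crosses zero approximate the surface.
    x = np.vstack([surface_pts, exterior_pts])
    y = np.concatenate([np.zeros(len(surface_pts)), np.ones(len(exterior_pts))])
    k = rbf(x, x) + noise * np.eye(len(x))
    alpha = np.linalg.solve(k, y)
    return rbf(query, x) @ alpha
```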

Cited by
Journal Article
TL;DR: Due to the bidirectional generators of CycleGAN, the proposed method can not only generate more realistic simulated tactile images, but also improve the deformation measurement accuracy of real sensors by transferring their images to the simulation domain.
Abstract: GelSight optical tactile sensors have high-resolution and low-cost advantages and have witnessed growing adoption in various contact-rich robotic applications. Sim2Real for GelSight sensors can reduce the time cost and sensor damage during data collection and is crucial for learning-based tactile perception and control. However, it remains difficult for existing simulation methods to resemble the complex and non-ideal light transmission of real sensors. In this letter, we propose to narrow the gap between simulation and the real world using CycleGAN. Due to the bidirectional generators of CycleGAN, the proposed method can not only generate more realistic simulated tactile images, but also improve the deformation measurement accuracy of real sensors by transferring them to the simulation domain. Experiments on a public dataset and our own GelSight sensors have validated the effectiveness of our method. The materials related to this letter are available at https://github.com/RVSATHU/GelSight-Sim2Real.

8 citations
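
The training objective behind this approach can be sketched as the standard CycleGAN loss: two generators translate between the simulation and real domains, and a cycle term forces round trips back to the input. The module names below are placeholders, not the authors' code, and the least-squares adversarial form is one common choice.

```python
import torch
import torch.nn.functional as F

def cyclegan_loss(g_sim2real, g_real2sim, d_real, d_sim,
                  sim_img, real_img, lam=10.0):
    fake_real = g_sim2real(sim_img)      # simulated -> realistic image
    fake_sim = g_real2sim(real_img)      # real -> simulation domain
    # Adversarial terms (least-squares GAN form): fool both discriminators.
    pred_real, pred_sim = d_real(fake_real), d_sim(fake_sim)
    adv = (F.mse_loss(pred_real, torch.ones_like(pred_real))
           + F.mse_loss(pred_sim, torch.ones_like(pred_sim)))
    # Cycle consistency: translating there and back should be the identity.
    cyc = (F.l1_loss(g_real2sim(fake_real), sim_img)
           + F.l1_loss(g_sim2real(fake_sim), real_img))
    return adv + lam * cyc
```

The real-to-sim generator is what enables the second benefit claimed above: real tactile images can be mapped into the cleaner simulation domain before deformation is measured.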

Journal Article
TL;DR: In this paper, the authors focus on hardware technologies, reviewing the literature of the past five years to provide a valuable guideline for developers to improve the performance of vision-based tactile sensors.
Abstract: A vision-based tactile sensor (VBTS) is an innovative optical sensor widely applied in robotic perception. A VBTS consists of a contact module, an illumination module, and a camera module. Its hardware technologies cover the preparation, performance, functions, materials, and optimization design of these components. However, the current literature lacks a review of hardware technologies that would formulate a complete manufacturing scheme. This article therefore focuses on hardware technologies, reviewing the literature of the past five years. We analyze the core components of each module and trace a technical route. We discuss the current challenges and problems of hardware technologies and propose some feasible solutions. Considering cross-disciplinary applications, we expect multidisciplinary hardware technologies to drive the development of next-generation VBTSs. In addition, we aim to provide a valuable guideline for developers to improve the performance of VBTSs.

6 citations

Journal Article
TL;DR: In this paper, the authors propose a tactile-motor policy learning method that generalizes tubular-object manipulation skills from simulation to reality, applied to a human-robot collaborative tube-placing scenario and a robotic pipetting scenario.

3 citations

Posted Content
TL;DR: In this paper, a two-stage approach is proposed to track small objects using vision-based tactile sensors that provide high-dimensional tactile image measurements at the point of contact; unlike prior work, it requires no a-priori knowledge of the object being localized.
Abstract: We address the problem of tracking 3D object poses from touch during in-hand manipulations. Specifically, we look at tracking small objects using vision-based tactile sensors that provide high-dimensional tactile image measurements at the point of contact. While prior work has relied on a-priori information about the object being localized, we remove this requirement. Our key insight is that an object is composed of several local surface patches, each informative enough to achieve reliable object tracking. Moreover, we can recover the geometry of this local patch online by extracting local surface normal information embedded in each tactile image. We propose a novel two-stage approach. First, we learn a mapping from tactile images to surface normals using an image translation network. Second, we use these surface normals within a factor graph to both reconstruct a local patch map and use it to infer 3D object poses. We demonstrate reliable object tracking for over 100 contact sequences across unique shapes with four objects in simulation and two objects in the real-world. Supplementary video: https://youtu.be/JwNTC9_nh8M

3 citations
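
Stage one produces per-pixel surface normals; turning those into local patch geometry is a standard normals-to-depth integration, sketched below with a frequency-domain (Frankot-Chellappa-style) Poisson solve. This stands in for the patch reconstruction step only, under the assumption of a regular image grid; the factor-graph pose inference is not shown.

```python
import numpy as np

def integrate_normals(normals):
    # normals: (H, W, 3) unit surface normals -> (H, W) relative height map.
    nz = np.clip(normals[..., 2], 1e-6, None)     # avoid division by zero
    gx, gy = -normals[..., 0] / nz, -normals[..., 1] / nz
    h, w = gx.shape
    fx = np.fft.fftfreq(w)[None, :] * 2 * np.pi   # frequencies along x
    fy = np.fft.fftfreq(h)[:, None] * 2 * np.pi   # frequencies along y
    denom = fx**2 + fy**2
    denom[0, 0] = 1.0                             # placeholder at the DC term
    z_hat = (-1j * fx * np.fft.fft2(gx) - 1j * fy * np.fft.fft2(gy)) / denom
    z_hat[0, 0] = 0.0                             # fix the mean height to zero
    return np.real(np.fft.ifft2(z_hat))
```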

Journal Article
TL;DR: In this paper, the authors extend the Tactile Gym simulator to include three new optical tactile sensors (TacTip, DIGIT, and DigiTac) of the two most popular types: GelSight-style (image-shading based) and TacTip-style (marker based).
Abstract: High-resolution optical tactile sensors are increasingly used in robotic learning environments due to their ability to capture large amounts of data directly relating to agent-environment interaction. However, there is a high barrier of entry to research in this area due to the high cost of tactile robot platforms, specialised simulation software, and sim-to-real methods that lack generality across different sensors. In this letter we extend the Tactile Gym simulator to include three new optical tactile sensors (TacTip, DIGIT and DigiTac) of the two most popular types, GelSight-style (image-shading based) and TacTip-style (marker based). We demonstrate that a single sim-to-real approach can be used with these three different sensors to achieve strong real-world performance despite the significant differences between real tactile images. Additionally, we lower the barrier of entry to the proposed tasks by adapting them to an inexpensive 4-DoF robot arm, further enabling the dissemination of this benchmark. We validate the extended environment on three physically-interactive tasks requiring a sense of touch: object pushing, edge following and surface following. The results of our experimental validation highlight some differences between these sensors, which may help future researchers select and customize the physical characteristics of tactile sensors for different manipulation scenarios.

3 citations