Author

Alex Church

Bio: Alex Church is an academic researcher from the University of Bristol. The author has contributed to research in the topics of tactile sensing and reinforcement learning, has an h-index of 5, and has co-authored 9 publications receiving 124 citations.

Papers
Journal ArticleDOI
01 Apr 2019
TL;DR: This letter applies deep learning to an optical biomimetic tactile sensor, the TacTip, which images an array of papillae (pins) inside its sensing surface analogous to structures within human skin, and shows that a deep convolutional neural network can give reliable edge perception and thus a robust policy for planning contact points to move around object contours.
Abstract: Deep learning has the potential to have the same impact on robot touch as it has had on robot vision. Optical tactile sensors act as a bridge between the subjects by allowing techniques from vision to be applied to touch. In this letter, we apply deep learning to an optical biomimetic tactile sensor, the TacTip, which images an array of papillae (pins) inside its sensing surface analogous to structures within human skin. Our main result is that the application of a deep convolutional neural network can give reliable edge perception and, thus, a robust policy for planning contact points to move around object contours. Robustness is demonstrated over several irregular and compliant objects with both tapping and continuous sliding, using a model trained only by tapping onto a disk. These results relied on using techniques to encourage generalization to tasks beyond those on which the model was trained. We expect this is a generic problem in practical applications of tactile sensing that deep learning will solve.
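A minimal sketch of this kind of model (not the authors' released code): a small convolutional network in PyTorch that regresses an edge pose from a single tactile image. The 128x128 input size, the layer widths, and the two-dimensional (displacement, angle) output are assumptions for illustration.

import torch
import torch.nn as nn

class EdgePoseCNN(nn.Module):
    # Illustrative CNN mapping a grayscale tactile image to an edge pose.
    # The 128x128 input and (displacement, angle) output are assumptions.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 2),  # predicted edge displacement and orientation
        )

    def forward(self, x):
        return self.head(self.features(x))

model = EdgePoseCNN()
tap_image = torch.rand(1, 1, 128, 128)  # dummy tactile frame from a single tap
edge_pose = model(tap_image)            # used to plan the next contact point

A contour-following policy would then repeatedly tap, predict the local edge pose, and step the sensor along the predicted edge direction to move around the object contour.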

81 citations

Journal ArticleDOI
TL;DR: In this paper, the authors apply deep learning to an optical biomimetic tactile sensor, the TacTip, which images an array of papillae (pins) inside its sensing surface analogous to structures within human skin.
Abstract: Deep learning has the potential to have the impact on robot touch that it has had on robot vision. Optical tactile sensors act as a bridge between the subjects by allowing techniques from vision to be applied to touch. In this paper, we apply deep learning to an optical biomimetic tactile sensor, the TacTip, which images an array of papillae (pins) inside its sensing surface analogous to structures within human skin. Our main result is that the application of a deep CNN can give reliable edge perception and thus a robust policy for planning contact points to move around object contours. Robustness is demonstrated over several irregular and compliant objects with both tapping and continuous sliding, using a model trained only by tapping onto a disk. These results relied on using techniques to encourage generalization to tasks beyond those on which the model was trained. We expect this is a generic problem in practical applications of tactile sensing that deep learning will solve. A video demonstrating the approach can be found at this https URL

48 citations

Journal ArticleDOI
TL;DR: In this article, a three-dimensional-printed, three-fingered tactile robotic hand is presented to allow for more effective grasping, along with a wide range of benefits of human-like touch.
Abstract: Bringing tactile sensation to robotic hands will allow for more effective grasping, along with a wide range of benefits of human-like touch. Here, we present a three-dimensional-printed, three-fingered tactile robot hand comprising an OpenHand Model O customized to house a TacTip soft biomimetic tactile sensor in the distal phalanx of each finger.

31 citations

Posted Content
TL;DR: A three-dimensional-printed, three-fingered tactile robot hand comprising an OpenHand Model O customized to house a TacTip soft biomimetic tactile sensor in the distal phalanx of each finger is presented.
Abstract: Bringing tactile sensation to robotic hands will allow for more effective grasping, along with the wide range of benefits of human-like touch. Here we present a 3D-printed, three-fingered tactile robot hand comprising an OpenHand Model O customized to house a TacTip soft biomimetic tactile sensor in the distal phalanx of each finger. We expect that combining the grasping capabilities of this underactuated hand with sophisticated tactile sensing will result in an effective platform for robot hand research -- the Tactile Model O (T-MO). The design uses three JeVois machine vision systems, each comprising a miniature camera in the tactile fingertip with a processing module in the base of the hand. To evaluate the capabilities of the T-MO, we benchmark its grasping performance using the Gripper Assessment Benchmark on the YCB object set. Tactile sensing capabilities are evaluated by performing tactile object classification on 26 objects and predicting whether a grasp will successfully lift each object. Results are consistent with the state of the art, taking advantage of advances in deep learning applied to tactile image outputs. Overall, this work demonstrates that the T-MO is an effective platform for robot hand research and we expect it to open up a range of applications in autonomous object handling. Supplemental video: this https URL.
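As a hedged sketch of the tactile object classification experiment (the architecture, image size, and the choice to stack the three fingertip images as input channels are assumptions, not the T-MO code), a classifier over the 26 objects could look like this:

import torch
import torch.nn as nn

class TactileObjectClassifier(nn.Module):
    # Illustrative classifier over 26 object classes from three fingertip images,
    # stacked as input channels; all sizes here are assumptions.
    def __init__(self, num_classes=26):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, num_classes),
        )

    def forward(self, x):
        return self.net(x)

classifier = TactileObjectClassifier()
fingertip_images = torch.rand(1, 3, 128, 128)  # one tactile image per fingertip
logits = classifier(fingertip_images)
predicted_object = logits.argmax(dim=1)

A second head of the same kind of network could instead predict grasp success from the same tactile images, mirroring the lift-prediction experiment described above.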

23 citations

Journal ArticleDOI
20 Jul 2020
TL;DR: In this paper, the authors proposed a new environment and set of tasks to encourage development of tactile reinforcement learning: learning to type on a braille keyboard, progressing in difficulty from arrow to alphabet keys and from discrete to continuous actions.
Abstract: Artificial touch would seem well-suited for Reinforcement Learning (RL), since both paradigms rely on interaction with an environment. Here we propose a new environment and set of tasks to encourage development of tactile reinforcement learning: learning to type on a braille keyboard. Four tasks are proposed, progressing in difficulty from arrow to alphabet keys and from discrete to continuous actions. A simulated counterpart is also constructed by sampling tactile data from the physical environment. Using state-of-the-art deep RL algorithms, we show that all of these tasks can be successfully learnt in simulation, and 3 out of 4 tasks can be learned on the real robot. A lack of sample efficiency currently makes the continuous alphabet task impractical on the robot. To the best of our knowledge, this work presents the first demonstration of successfully training deep RL agents in the real world using observations that exclusively consist of tactile images. To aid future research utilising this environment, the code for this project has been released along with designs of the braille keycaps for 3D printing and a guide for recreating the experiments.
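The released environment defines its own interface; the sketch below is only schematic, assuming a gym-style reset/step API with tactile-image observations and discrete key-press actions for the arrow-key task. The class name, observation size, and reward scheme are invented for illustration.

import numpy as np

class BrailleArrowEnvSketch:
    # Schematic discrete-action task: press the requested arrow key.
    # Observations stand in for tactile images; all details are illustrative.
    ACTIONS = ["UP", "DOWN", "LEFT", "RIGHT"]

    def reset(self):
        self.target = np.random.randint(len(self.ACTIONS))
        return self._observe()

    def step(self, action):
        reward = 1.0 if action == self.target else 0.0
        done = True  # one key press per episode in this sketch
        return self._observe(), reward, done, {}

    def _observe(self):
        # Stand-in for a tactile image captured above the target key.
        return np.random.rand(64, 64).astype(np.float32)

env = BrailleArrowEnvSketch()
obs = env.reset()
obs, reward, done, info = env.step(np.random.randint(4))  # one random-policy step

In the paper, deep RL agents are trained on tactile-image observations of this kind, and a simulated counterpart built from pre-collected tactile data allows far cheaper training than running every episode on the physical robot.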

14 citations


Cited by
Journal ArticleDOI
TL;DR: An overview of tactile information and its applications in robotics is provided, presenting a hierarchy consisting of raw, contact, object, and action levels to structure the tactile information, with higher-level information often building upon lower-level information.
Abstract: Tactile sensing is a key sensor modality for robots interacting with their surroundings. These sensors provide a rich and diverse set of data signals that contain detailed information collected from contacts between the robot and its environment. The data are, however, not limited to individual contacts and can be used to extract a wide range of information about the objects in the environment as well as the actions of the robot during the interactions. In this article, we provide an overview of tactile information and its applications in robotics. We present a hierarchy consisting of raw, contact, object, and action levels to structure the tactile information, with higher-level information often building upon lower-level information. We discuss different types of information that can be extracted at each level of the hierarchy. The article also includes an overview of different types of robot applications and the types of tactile information that they employ. Finally, we end the article with a discussion of future tactile applications that are still beyond the current capabilities of robots.

137 citations

Journal ArticleDOI
12 Sep 2019-Sensors
TL;DR: An overview of tactile image sensors employing a camera is provided with a focus on the sensing principle, typical design, and variation in the sensor configuration.
Abstract: A tactile image sensor employing a camera is capable of obtaining rich tactile information through image sequences with high spatial resolution. There have been many studies on tactile image sensors over more than 30 years, and, recently, they have been applied in the field of robotics. Tactile image sensors can be classified into three typical categories according to the method of conversion from physical contact to light signals: light conductive plate-based, marker displacement-based, and reflective membrane-based sensors. Other important elements of the sensor, such as the optical system, image sensor, and post-image analysis algorithm, have been developed. In this work, the literature is surveyed, and an overview of tactile image sensors employing a camera is provided with a focus on the sensing principle, typical design, and variation in the sensor configuration.

105 citations

Proceedings ArticleDOI
24 Oct 2020
TL;DR: SwingBot, as discussed by the authors, is able to learn the physical features of an object through tactile exploration and can predict the swing angle achieved by a robot performing dynamic swing-up manipulations on a previously unseen object.
Abstract: Several robot manipulation tasks are extremely sensitive to variations of the physical properties of the manipulated objects. One such task is manipulating objects by using gravity or arm accelerations, increasing the importance of mass, center of mass, and friction information. We present SwingBot, a robot that is able to learn the physical features of a held object through tactile exploration. Two exploration actions (tilting and shaking) provide the tactile information used to create a physical feature embedding space. With this embedding, SwingBot is able to predict the swing angle achieved by a robot performing dynamic swing-up manipulations on a previously unseen object. Using these predictions, it is able to search for the optimal control parameters for a desired swing-up angle. We show that with the learned physical features our end-to-end self-supervised learning pipeline is able to substantially improve the accuracy of swinging up unseen objects. We also show that objects with similar dynamics are closer to each other on the embedding space and that the embedding can be disentangled into values of specific physical properties.
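A minimal sketch of the idea, with assumed feature sizes and layer widths (not the authors' implementation): tactile readings from the tilting and shaking explorations are encoded into a shared physical-feature embedding, and a second network maps the embedding plus candidate control parameters to a predicted swing-up angle.

import torch
import torch.nn as nn

# Assumed feature sizes for the two exploratory actions and the embedding.
TILT_DIM, SHAKE_DIM, EMBED_DIM, CTRL_DIM = 64, 64, 16, 4

encoder = nn.Sequential(            # tactile features -> physical embedding
    nn.Linear(TILT_DIM + SHAKE_DIM, 128), nn.ReLU(),
    nn.Linear(128, EMBED_DIM),
)
predictor = nn.Sequential(          # embedding + control params -> swing angle
    nn.Linear(EMBED_DIM + CTRL_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

tilt_feat = torch.rand(1, TILT_DIM)     # features from the tilting exploration
shake_feat = torch.rand(1, SHAKE_DIM)   # features from the shaking exploration
controls = torch.rand(1, CTRL_DIM)      # candidate swing-up control parameters

embedding = encoder(torch.cat([tilt_feat, shake_feat], dim=1))
swing_angle = predictor(torch.cat([embedding, controls], dim=1))

Searching over candidate control parameters and choosing the one whose predicted angle is closest to the target then gives the control-selection step described in the abstract.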

63 citations

Journal ArticleDOI
TL;DR: This study demonstrates the slip detection capabilities of the recently developed Tactile Model O (T-MO) robotic hand, using support vector machines to detect slip and testing multiple slip scenarios, including responding to the onset of slip in real time, with 11 different objects in various grasps.
Abstract: Tactile sensing is used by humans when grasping to prevent us dropping objects. One key facet of tactile sensing is slip detection, which allows a gripper to know when a grasp is failing and to take action to prevent an object being dropped. This study demonstrates the slip detection capabilities of the recently developed Tactile Model O (T-MO) robotic hand, using support vector machines to detect slip and testing multiple slip scenarios, including responding to the onset of slip in real time, with 11 different objects in various grasps. In this article, we demonstrate the benefits of slip detection in grasping by testing two real-world scenarios: adding weight to destabilize a grasp and using slip detection to lift up objects on the first attempt. The T-MO is able to detect when an object is slipping, react to stabilize the grasp, and be deployed in real-world scenarios. This shows that the T-MO is a suitable platform for autonomous grasping, using reliable slip detection to ensure a stable grasp in unstructured environments.
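A hedged sketch of the classification step only, assuming the tactile frames have already been summarized into fixed-length feature vectors (the feature extraction and the SVM settings below are illustrative, not the study's pipeline):

import numpy as np
from sklearn.svm import SVC

# Illustrative training data: each row is a feature vector summarizing pin
# displacements over a short tactile window; labels mark slip events.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 30))      # 200 windows, 30 features each
y_train = rng.integers(0, 2, size=200)    # 1 = slip, 0 = stable grasp

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

# At run time, each new tactile window is classified; a "slip" prediction
# would trigger the hand to tighten its grasp.
new_window = rng.normal(size=(1, 30))
if clf.predict(new_window)[0] == 1:
    print("slip detected - increase grip force")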

47 citations

Journal ArticleDOI
TL;DR: This article illustrates the application of deep learning to robot touch by considering a basic yet fundamental capability: estimating the relative pose of part of an object in contact with a tactile sensor.
Abstract: This article illustrates the application of deep learning to robot touch by considering a basic yet fundamental capability: estimating the relative pose of part of an object in contact with a tactile sensor. We begin by surveying deep learning applied to tactile robotics, focusing on optical tactile sensors, which help to link touch and deep learning for vision. We then show how deep learning can be used to train accurate pose models of 3D surfaces and edges that are insensitive to nuisance variables, such as motion-dependent shear. This involves including representative motions as unlabeled perturbations of the training data and using Bayesian optimization of the network and training hyperparameters to find the most accurate models. Accurate estimation of the pose from touch will enable robots to safely and precisely control their physical interactions, facilitating a wide range of object exploration and manipulation tasks.
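The article mentions Bayesian optimization of the network and training hyperparameters; the sketch below uses Optuna as a stand-in optimizer on toy data, so the library choice, search space, and model are assumptions rather than the authors' setup.

import optuna
import torch
import torch.nn as nn

# Toy stand-in data: tactile feature vectors and 2-D pose targets.
X = torch.rand(256, 64)
y = torch.rand(256, 2)

def objective(trial):
    # Hyperparameters proposed by the optimizer on each trial.
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    hidden = trial.suggest_int("hidden_units", 32, 256)
    model = nn.Sequential(nn.Linear(64, hidden), nn.ReLU(), nn.Linear(hidden, 2))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(50):                  # short training run per trial
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()                   # held-out validation loss would be used in practice

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_params)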

47 citations