Proceedings ArticleDOI

Use of tactile sensors in enhancing the efficiency of vision-based object localization

02 Oct 1994, pp. 243-250
TL;DR: A technique to localize polyhedral objects by integrating visual and tactile data, which is useful in tasks such as localizing an object in a robot hand.
Abstract: We present a technique to localize polyhedral objects by integrating visual and tactile data. This technique is useful in tasks such as localizing an object in a robot hand. It is assumed that visual data are provided by a monocular visual sensor, while tactile data are provided by a planar-array tactile sensor in contact with the object. Visual data are used to generate a set of hypotheses about the 3D object's pose, while tactile data assist in verifying the visually-generated pose hypotheses. We specifically focus on using tactile data in hypothesis verification. A set of indexed bounds on the object's six transformation parameters is constructed from the tactile data. These indexed bounds are constructed off-line by expressing them with respect to a tactile-array frame. At run-time, each visually-generated hypothesis is efficiently compared with the touch-based bounds to determine whether to eliminate the hypothesis or to consider it for further verification. The proposed technique is tested using simulated and real data.
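
The run-time pruning step described above can be sketched compactly. The following Python fragment is a minimal illustration, assuming a hypothetical data layout (a table of per-face parameter bounds keyed by the hypothesized contact face); the paper's actual indexing scheme is richer than this:

```python
from dataclasses import dataclass

@dataclass
class PoseHypothesis:
    # Six transformation parameters in the tactile-array frame:
    # translation (tx, ty, tz) and rotation (rx, ry, rz).
    params: tuple
    face_id: int  # model face hypothesized to be in contact with the array

def consistent(hypothesis, bounds):
    """Touch-based test: True if every pose parameter lies inside the
    off-line bounds indexed by the hypothesized contact face."""
    face_bounds = bounds.get(hypothesis.face_id)
    if face_bounds is None:
        return False  # no indexed contact configuration for this face
    return all(lo <= p <= hi
               for p, (lo, hi) in zip(hypothesis.params, face_bounds))

def prune(hypotheses, bounds):
    """Keep only the visually-generated hypotheses consistent with touch."""
    return [h for h in hypotheses if consistent(h, bounds)]
```

A hypothesis that fails any of the six interval tests is discarded immediately, so only a small fraction of the visually-generated hypotheses reaches the more expensive verification stage.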
Citations
Proceedings ArticleDOI
13 Mar 2001
TL;DR: The subjects were given the task of deciding which of two virtual springs is the stiffer, the springs being simulated with a PHANToM™ force feedback device and displayed on a monoscopic computer screen; the visual point of subjective equality between the springs represents the boundary of the sensory illusion.
Abstract: Describes a psychophysical experiment designed to study the phenomenon of illusion which occurs with pseudo-haptic feedback, and to identify the moment when this illusion occurs: the "boundary of illusion". The subjects were given the task of deciding which of two virtual springs is the stiffer, these springs being simulated with a PHANToM™ force feedback device and displayed on a monoscopic computer screen. The first spring had a realistic behavior, since its visual and haptic displacements were identical. The second spring, the pseudo-haptic one, was stiffer on a haptic basis, but sometimes less stiff on a visual basis. The data collected allowed us to calculate the visual point of subjective equality (PSE) between the two springs, which represents the boundary of the sensory illusion. On average, the PSE turned out to be -24%. This value increased monotonically when the haptic difference between the springs increased. This implies that more visual deformation is necessary to compensate for large haptic differences, and qualifies the notion of visual dominance. However, this boundary varies greatly depending on the subjects and their sensory integration strategy. The subjects were sensitive to this illusion to varying degrees. They were divided into different populations, from those who were "haptically oriented" to those who were "visually oriented".
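
As a rough illustration of how such a PSE can be extracted from two-alternative forced-choice data, the sketch below linearly interpolates the 50% crossing of the psychometric data. The study's actual analysis may differ, and the example numbers are invented to land near the reported -24% average:

```python
def pse(levels, p_stiffer):
    """Interpolate the visual stiffness difference at which the comparison
    spring is judged stiffer on 50% of trials (point of subjective equality).

    levels    -- visual stiffness differences tested, in ascending order (%)
    p_stiffer -- proportion of 'comparison is stiffer' responses per level
    """
    pts = list(zip(levels, p_stiffer))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if min(y0, y1) <= 0.5 <= max(y0, y1) and y0 != y1:
            # linear interpolation to the 50% crossing
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    return None  # 50% point not bracketed by the data

# Invented data for illustration:
# pse([-40, -30, -20, -10, 0], [0.15, 0.38, 0.58, 0.85, 0.97]) returns -24.0
```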

80 citations


Cites background from "Use of tactile sensors in enhancing..."

  • ...But when receiving multi-sensory information, people may become better [9] [5], or ....

    [...]

Journal ArticleDOI
TL;DR: This paper proposes adaptive space warping as an extension; it shows how this technique can be employed in a mixed-reality surgical training simulator in order to map different virtual patients onto one physical anatomical model, and develops methods to warp different organ geometries onto one physical mock-up.
Abstract: Passive haptics, also known as tactile augmentation, denotes the use of a physical counterpart to a virtual environment to provide tactile feedback. Employing passive haptics can result in more realistic touch sensations than those from active force feedback, especially for rigid contacts. However, changes in the virtual environment would necessitate modifications of the physical counterparts. In recent work space warping has been proposed as one solution to overcome this limitation. In this technique virtual space is distorted such that a variety of virtual models can be mapped onto one single physical object. In this paper, we propose as an extension adaptive space warping; we show how this technique can be employed in a mixed-reality surgical training simulator in order to map different virtual patients onto one physical anatomical model. We developed methods to warp different organ geometries onto one physical mock-up, to handle different mechanical behaviors of the virtual patients, and to allow interactive modifications of the virtual structures, while the physical counterparts remain unchanged. Various practical examples underline the wide applicability of our approach. To the best of our knowledge this is the first practical usage of such a technique in the specific context of interactive medical training.
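
A heavily simplified sketch of the underlying space-warping idea follows; it is not the authors' formulation, and all names and parameters are illustrative. The tracked physical-space point is displaced toward a corresponding virtual anchor, with the displacement fading out over a chosen radius so that distant space stays rigid:

```python
import numpy as np

def warp(p_physical, phys_anchor, virt_anchor, radius):
    """Map a tracked physical-space point into the warped virtual space.

    Near the anchor pair the full displacement (virt_anchor - phys_anchor)
    is applied; the warp falls off linearly to zero beyond `radius`.
    Smoother falloff kernels work the same way.
    """
    p = np.asarray(p_physical, dtype=float)
    anchor = np.asarray(phys_anchor, dtype=float)
    offset = np.asarray(virt_anchor, dtype=float) - anchor
    d = np.linalg.norm(p - anchor)
    weight = max(0.0, 1.0 - d / radius)
    return p + weight * offset

# Touching the physical anchor lands exactly on the virtual anchor:
# warp([0, 0, 0], [0, 0, 0], [0.01, 0.0, 0.02], radius=0.3)
# -> array([0.01, 0.  , 0.02])
```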

37 citations

Journal ArticleDOI
TL;DR: Experimental work demonstrates that a single physical surface can be made to 'feel' both softer and harder than it really is, depending on the accompanying visual information presented, indicating that haptic accuracy may not be essential for a realistic virtual experience.
Abstract: This paper considers tactile augmentation, the addition of a physical object within a virtual environment (VE) to provide haptic feedback. The resulting mixed reality environment is limited in terms of the ease with which changes can be made to the haptic properties of objects within it. Therefore, sensory enhancements or illusions that make use of visual cues to alter the perceived hardness of a physical object, allowing variation in haptic properties, are considered. Experimental work demonstrates that a single physical surface can be made to 'feel' both softer and harder than it is in reality by the accompanying visual information presented. The strong impact visual cues have on the overall perception of object hardness indicates that haptic accuracy may not be essential for a realistic virtual experience. The experimental results are related specifically to the development of a VE for surgical training; however, the conclusions drawn are broadly applicable to the simulation of touch and the understanding of haptic perception within VEs.

26 citations


Cites background from "Use of tactile sensors in enhancing..."

  • ...Furthermore research shows that multi-sensory information improves the quality of perception and the sense of presence offered by a VE (Klatzky and Lederman 2002; Schultz and Petersik 1994; Boshra and Zhang 1994)....

    [...]

Journal ArticleDOI
TL;DR: A novel technique for localizing a polyhedral object in a robot hand by integrating visual and tactile data is presented; it is shown to be superior to vision-based localization in the following aspects: capability of determining the object pose under heavy occlusion, number of generated pose hypotheses, and accuracy of estimating the object depth.

8 citations


Cites methods from "Use of tactile sensors in enhancing..."

  • ...Touch-based verification is first performed using a data-driven indexing scheme presented in [20]....

    [...]

Journal ArticleDOI
TL;DR: This work presents a framework for recognition of 3D objects by integrating 2D and 3D sensory data in the early stages of recognition in order to reduce the computational requirements of the recognition process.

3 citations

References
Journal ArticleDOI
TL;DR: It is argued that similar mechanisms and constraints form the basis for recognition in human vision.

1,444 citations

Journal ArticleDOI
TL;DR: The approach operates by examining all hypotheses about pairings between sensed data and object surfaces and efficiently discarding inconsistent ones by using local constraints on distances between faces, angles between face normals, and angles of vectors between sensed points.
Abstract: This paper discusses how local measurements of positions and surface normals may be used to identify and locate overlapping objects. The objects are modeled as polyhedra (or polygons) having up to six degrees of positional freedom relative to the sensors. The approach operates by examining all hypotheses about pairings between sensed data and object surfaces and efficiently discarding inconsistent ones by using local constraints on: distances between faces, angles between face normals, and angles (relative to the surface normals) of vectors between sensed points. The method described here is an extension of a method for recognition and localization of nonoverlapping parts previously described in [18] and [15].
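
The pruning logic behind one of these local constraints, the pairwise-distance test, can be sketched as follows (names and table layout are assumptions; the two angle constraints are checked analogously):

```python
import itertools
import math

def distance_consistent(sensed_pts, assignment, face_dist_ranges, tol=1e-6):
    """True if every pair of sensed points is separated by a distance that
    the assigned pair of model faces can actually realize.

    sensed_pts       -- list of sensed 3-D points (x, y, z)
    assignment       -- assignment[i] = model face paired with point i
    face_dist_ranges -- dict mapping an ordered face pair to the (min, max)
                        achievable point-to-point distance; assumed to
                        contain both orderings of each pair
    """
    for i, j in itertools.combinations(range(len(sensed_pts)), 2):
        d = math.dist(sensed_pts[i], sensed_pts[j])
        lo, hi = face_dist_ranges[(assignment[i], assignment[j])]
        if not (lo - tol <= d <= hi + tol):
            return False  # inconsistent pairing: prune this interpretation
    return True
```

In an interpretation-tree search, this test is applied every time a new sensed point is paired with a face, so inconsistent branches are discarded long before a full pose is ever computed.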

539 citations

Journal ArticleDOI
TL;DR: It is shown on examples related to polyhedra that this approach leads to results useful for both location and recognition of 3D objects because few admissible hypotheses are retained from the interpolation of the three line segments.
Abstract: A method for finding analytical solutions to the problem of determining the attitude of a 3D object in space from a single perspective image is presented. Its principle is based on the interpretation of a triplet of any image lines as the perspective projection of a triplet of linear ridges of the object model, and on the search for the model attitude consistent with these projections. The geometrical transformations to be applied to the model to bring it into the corresponding location are obtained by the resolution of an eighth-degree equation in the general case. Using simple logical rules, it is shown on examples related to polyhedra that this approach leads to results useful for both location and recognition of 3D objects, because few admissible hypotheses are retained from the interpretation of the three line segments. Line matching by the prediction-verification procedure is thus less complex.
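
The verification half of such a prediction-verification procedure can be illustrated with a short sketch: project the hypothesized model edges into the image and count how many land near their matched segments. All names here are illustrative, and the edge-to-segment matching is assumed given:

```python
import numpy as np

def project(p, pose, f=1.0):
    """Pinhole projection of 3-D model point p under hypothesized pose (R, t)."""
    R, t = pose
    x, y, z = R @ np.asarray(p, dtype=float) + t
    return np.array([f * x / z, f * y / z])

def support(pose, matches, tol=2.0):
    """Count matched edge/segment pairs whose predicted projections fall
    within `tol` (image units) of the observed segment endpoints.

    matches -- list of ((P0, P1), (q0, q1)) pairs: 3-D endpoints of a model
               edge and the 2-D endpoints of its matched image segment
    """
    hits = 0
    for (P0, P1), (q0, q1) in matches:
        if (np.linalg.norm(project(P0, pose) - np.asarray(q0)) < tol and
                np.linalg.norm(project(P1, pose) - np.asarray(q1)) < tol):
            hits += 1
    return hits
```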

493 citations

Book
02 Jan 1995
TL;DR: In this paper, a robotic system for object recognition that uses passive stereo vision and active exploratory tactile sensing is described, where the complementary nature of these sensing modalities allows the system to discover the underlying 3D structure of the objects to be recognized.
Abstract: A robotic system for object recognition is described that uses passive stereo vision and active exploratory tactile sensing. The complementary nature of these sensing modalities allows the system to discover the underlying 3-D structure of the objects to be recognized. This structure is embodied in rich, hierarchical, viewpoint-independent 3-D models of the objects which include curved surfaces, concavities and holes. The vision processing provides sparse 3-D data about regions of interest that are then actively explored by the tactile sensor mounted on the end of a six-degree-of-freedom manipulator. A robust, hierarchical procedure has been developed to integrate the visual and tactile data into accurate 3-D surface and feature primitives. This integration of vision and touch provides geometric measures of the surfaces and features that are used in a matching phase to find model objects that are consistent with the sensory data. Methods for verification of the hypothesis are presented, including the sen...

148 citations

Journal ArticleDOI
TL;DR: The approach here is first to transform images of line segments to the center of the image plane as if the camera were rotated to aim at them, and the 3D information extracted in this canonical position is transformed back to the original configuration.
Abstract: Given a perspective projection of line segments on the image plane, the constraints on their 3D positions and orientations are derived on the assumption that their true lengths or the true angles they make are known. The approach here is first to transform images of line segments to the center of the image plane as if the camera were rotated to aim at them. The 3D information extracted in this canonical position is then transformed back to the original configuration. Examples are given, by using real images, for analytical 3D recovery of a rectangular corner and a corner with two right angles.
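
The "canonical position" transform described above can be sketched under the usual pinhole assumptions (all names are illustrative, and the paper's actual derivation is omitted): compute the rotation that carries a segment's midpoint viewing ray onto the optical axis, then re-image the endpoints:

```python
import numpy as np

def aim_rotation(u):
    """Rotation taking unit viewing ray u onto the optical axis (0, 0, 1)."""
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(u, z)                  # rotation axis, scaled by sin(angle)
    s, c = np.linalg.norm(v), float(np.dot(u, z))
    if s < 1e-12:
        return np.eye(3)                # camera already aims at the segment
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])   # cross-product matrix of v
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)  # Rodrigues form

def to_canonical(p0, p1, f=1.0):
    """Re-image segment endpoints (image coordinates) after rotating the
    camera to aim at the segment's midpoint ray."""
    rays = [np.array([x, y, f]) for x, y in (p0, p1)]
    mid = (rays[0] + rays[1]) / 2.0
    R = aim_rotation(mid / np.linalg.norm(mid))
    return [(f * q[0] / q[2], f * q[1] / q[2]) for q in (R @ r for r in rays)]
```

Constraints derived in this centered configuration can then be rotated back to the original camera orientation, as the abstract describes.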

73 citations