Topic
Distance transform
About: Distance transform is a research topic. Over the lifetime, 2886 publications have been published within this topic receiving 59481 citations.
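The core operation behind all of the papers below is the distance transform itself: given a binary image, it assigns each pixel its distance to the nearest feature pixel. A minimal sketch using SciPy's `distance_transform_edt` (an exact Euclidean distance transform; the image and coordinates here are illustrative):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary image: nonzero = background, zero = feature pixels.
# distance_transform_edt returns, for each nonzero pixel, the
# Euclidean distance to the nearest zero pixel.
img = np.ones((5, 5), dtype=np.uint8)
img[2, 2] = 0  # a single feature pixel at the center

dist = distance_transform_edt(img)
print(dist[2, 2])  # 0.0 at the feature pixel itself
print(dist[0, 0])  # sqrt(8) from the corner to the center
```

The result is a smooth scalar field over the image, which is what makes it useful as a surface representation, an edge indicator, or a loss-weighting term in the papers that follow.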
Papers published on a yearly basis
Papers
21 Oct 2001 · TL;DR: This work proposes a novel complete distance field representation (CDFR) that does not rely on Nyquist's sampling theory and constructs a volume where each voxel has a complete description of all portions of surface that affect the local distance field.
Abstract: Distance fields are an important volume representation. A high-quality distance field facilitates accurate surface characterization and gradient estimation. However, due to Nyquist's Law, no existing volumetric methods based on linear sampling theory can fully capture surface details, such as corners and edges, in 3D space. We propose a novel complete distance field representation (CDFR) that does not rely on Nyquist's sampling theory. To accomplish this, we construct a volume where each voxel has a complete description of all portions of surface that affect the local distance field. For any desired distance, we are able to extract a surface contour in true Euclidean distance, at any level of accuracy, from the same CDFR representation. Such point-based iso-distance contours have faithful per-point gradients and can be interactively visualized using splatting, providing per-point shaded image quality. We also demonstrate applying CDFR to a cutting-edge design-for-manufacturing application involving high-complexity parts at unprecedented accuracy using only commonly available computational resources.
67 citations
TL;DR: The inverted distance transform of the edge map is used as an edge indicator function for contour detection and the problem of background clutter can be relaxed by taking the object motion into account.
Abstract: We propose a new method for contour tracking in video. The inverted distance transform of the edge map is used as an edge indicator function for contour detection. Using the concept of topographical distance, the watershed segmentation can be formulated as a minimization. This new viewpoint gives a way to combine the results of the watershed algorithm on different surfaces. In particular, our algorithm determines the contour as a combination of the current edge map and the contour predicted from the tracking result in the previous frame. We also show that the problem of background clutter can be relaxed by taking the object motion into account. Compensating with object motion allows us to detect and remove spurious edges in the background. The experimental results confirm the expected advantages of the proposed method over the existing approaches.
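The inverted distance transform used here turns a binary edge map into a surface with ridges along the edges, which watershed-style methods can then minimize over. A sketch of the idea (the paper's exact normalization is not specified here; `edge_indicator` is a hypothetical helper name):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_indicator(edge_map):
    """Inverted distance transform of a binary edge map.

    Produces high values on edges, decaying with distance from them,
    so edges become ridges (maxima) of the indicator surface.
    """
    # Distance from each pixel to the nearest edge pixel
    dist = distance_transform_edt(edge_map == 0)
    # Invert so that edge pixels are maxima rather than minima
    return dist.max() - dist

edges = np.zeros((5, 5), dtype=np.uint8)
edges[2, :] = 1  # a horizontal edge through the middle row
ind = edge_indicator(edges)
# ind peaks along row 2 and falls off toward the top and bottom rows
```

In the paper this indicator surface is combined with the contour predicted from the previous frame before the watershed minimization is run.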
67 citations
22 Apr 2013 · TL;DR: A new method for skin region segmentation which consists of spatial analysis of skin probability maps obtained using pixel-wise detectors, using the distance transform to propagate the “skinness” across the image in a combined domain of luminance, hue, and skin probability.
Abstract: This paper introduces a new method for skin region segmentation which consists of spatial analysis of skin probability maps obtained using pixel-wise detectors. There are a number of methods which use various techniques of skin color modeling to classify every individual pixel or transform input color images into skin probability maps, but their performance is limited due to the high variance and low specificity of skin color. Detection precision can be enhanced based on spatial analysis of skin pixels; however, this direction has been little explored so far. Our contribution lies in using the distance transform to propagate the “skinness” across the image in a combined domain of luminance, hue, and skin probability. In the paper we explain the theoretical advantages of the proposed method over alternative skin detectors that also perform spatial analysis. Finally, we present results of an extensive experimental study which clearly indicate the high competitiveness of the proposed method and its relevance to gesture recognition.
66 citations
01 Oct 2018 · TL;DR: This paper proposes an automatic crack detection method based on a deep convolutional neural network, U-Net, that can process an image as a whole without patchifying, thanks to the encoder-decoder structure of U-Net.
Abstract: In this paper, we propose an automatic crack detection method based on a deep convolutional neural network, U-Net [4]. Unlike existing machine-learning-based crack detection methods, we can process an image as a whole without patchifying, thanks to the encoder-decoder structure of U-Net. The segmentation result is output from the network as a whole, instead of being aggregated from neighborhood patches. In addition, a new cost function based on the distance transform is introduced to assign a pixel-level weight according to the minimal distance to the ground truth segmentation. In experiments, we test the proposed method on two datasets of road crack images. The pixel-level segmentation accuracy is above 92%, which significantly outperforms other state-of-the-art methods.
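The distance-transform-based cost function described here amounts to weighting each pixel's loss by its distance to the ground-truth crack. A minimal sketch of such a weight map (the paper's exact weighting function is not reproduced; `w0` and `sigma` are illustrative parameters, and the Gaussian decay is an assumption):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_weight_map(gt_mask, w0=1.0, sigma=2.0):
    """Pixel-wise loss weights emphasizing pixels near the ground-truth
    crack, derived from the distance transform of the mask.
    """
    # Distance of every background pixel to the nearest crack pixel
    dist = distance_transform_edt(gt_mask == 0)
    # Weight decays with distance from the crack; far pixels get ~1.0
    return 1.0 + w0 * np.exp(-(dist ** 2) / (2 * sigma ** 2))

gt = np.zeros((5, 5), dtype=np.uint8)
gt[2, 2] = 1  # a single ground-truth crack pixel
w = distance_weight_map(gt)
# w is largest at the crack pixel and approaches 1.0 far from it
```

Multiplying a per-pixel cross-entropy loss by such a map concentrates the training signal on thin structures like cracks, which would otherwise be dominated by the much larger background.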
66 citations
TL;DR: A novel approach for creating a three-dimensional (3-D) face structure from multiple image views of a human face taken at a priori unknown poses by appropriately morphing a generic 3-D face into the specific face structure is described.
Abstract: We describe a novel approach for creating a three-dimensional (3-D) face structure from multiple image views of a human face taken at a priori unknown poses by appropriately morphing a generic 3-D face. A cubic explicit polynomial in 3-D is used to morph a generic face into the specific face structure. The 3-D face structure allows for accurate pose estimation as well as the synthesis of virtual images to be matched with a test image for face identification. The estimation of a person's 3-D face structure and pose is achieved through the use of a distance map metric. This distance map residual error (a geometric-based face classifier) and the image intensity residual error are fused in identifying a person in the database from one or more arbitrary image views. Experimental results are shown on simulated data in the presence of noise, as well as for images of real faces, and promising results are obtained.
66 citations