Topic

Distance transform

About: A distance transform labels each point of an image or volume with its distance to the nearest object or boundary point; it is a widely studied research topic. Over the lifetime of the topic, 2886 publications have been published, receiving 59481 citations.


Papers
Proceedings Article
01 Jan 2006
TL;DR: This paper introduces and experiments with a framework for learning local perceptual distance functions for visual recognition as combinations of elementary distances between patch-based visual features, and applies the framework to the tasks of image retrieval and classification of novel images.
Abstract: In this paper we introduce and experiment with a framework for learning local perceptual distance functions for visual recognition. We learn a distance function for each training image as a combination of elementary distances between patch-based visual features. We apply these combined local distance functions to the tasks of image retrieval and classification of novel images. On the Caltech 101 object recognition benchmark, we achieve 60.3% mean recognition across classes using 15 training images per class, which is better than the best published performance by Zhang, et al.
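The core idea of a per-image distance function built as a weighted combination of elementary patch-based distances can be sketched in a few lines. The elementary distances, weights, and feature names below are illustrative placeholders, not the paper's learned quantities or training procedure.

import numpy as np

def combined_distance(elementary_distances, weights):
    # Weighted sum of elementary patch-based distances; in the framework
    # above, the weights are learned separately for each training image.
    return float(np.dot(weights, elementary_distances))

# Toy usage with three made-up elementary distances (e.g. shape, colour,
# texture patches) and hypothetical learned weights for one training image.
d_elem = np.array([0.8, 0.2, 0.5])
w_img = np.array([0.5, 0.3, 0.2])
print(combined_distance(d_elem, w_img))  # score used to rank this training image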

236 citations

Journal ArticleDOI
TL;DR: A surface manipulation technique that uses distance fields (scalar fields derived geometrically from surface models) to combine, modify, and analyze surfaces is presented; it is intended for application to complex models arising in scientific visualization.
Abstract: A surface manipulation technique that uses distance fields (scalar fields derived geometrically from surface models) to combine, modify, and analyze surfaces is presented. It is intended for application to complex models arising in scientific visualization. Computing distance from single triangles is discussed, and an optimized algorithm for computing the distance field from an entire closed surface is built. The use of the fields for surface removal, interpolation and blending is examined. The strength of the approach is that it lets simple 3D algorithms substitute for potentially very complex 2D methods.
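To illustrate why combining and blending surfaces becomes simple once they are represented as distance fields, the sketch below samples two signed distance fields on a grid and combines them with pointwise operations. The sphere and box fields and the blend weight are stand-in assumptions; the paper's triangle-mesh distance-field construction is not reproduced here.

import numpy as np

# Sample two signed distance fields (negative inside, positive outside)
# on a common grid; a sphere and an approximate box stand in for the
# distance fields that would be built from closed triangle meshes.
xs = np.linspace(-1.0, 1.0, 64)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")

sphere = np.sqrt(X**2 + Y**2 + Z**2) - 0.6              # exact sphere SDF
box = np.maximum.reduce([np.abs(X) - 0.5,
                         np.abs(Y) - 0.5,
                         np.abs(Z) - 0.5])               # approximate box SDF

union = np.minimum(sphere, box)            # combine: keep the closer surface
intersection = np.maximum(sphere, box)     # keep only the overlapping region
t = 0.5                                    # hypothetical blend weight
blend = (1.0 - t) * sphere + t * box       # interpolate between the two shapes

# The zero level set of each result is the combined/blended surface,
# e.g. extractable with a marching-cubes routine.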

236 citations

Journal ArticleDOI
William J. Rucklidge
TL;DR: This paper develops a rasterised approach to the search and a number of techniques that allow it to locate quickly all transformations of the model that satisfy two quality criteria; it can also efficiently locate only the best transformation.
Abstract: The Hausdorff distance is a measure defined between two point sets, here representing a model and an image. The Hausdorff distance is reliable even when the image contains multiple objects, noise, spurious features, and occlusions. In the past, it has been used to search images for instances of a model that has been translated, or translated and scaled, by finding transformations that bring a large number of model features close to image features, and vice versa. In this paper, we apply it to the task of locating an affine transformation of a model in an image; this corresponds to determining the pose of a planar object that has undergone weak-perspective projection. We develop a rasterised approach to the search and a number of techniques that allow us to locate quickly all transformations of the model that satisfy two quality criteria; we can also efficiently locate only the best transformation. We discuss an implementation of this approach, and present some examples of its use.
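Since the search is driven by the Hausdorff distance between transformed model points and image points, a short sketch of the distance itself may help. The point sets below are made up, and the paper's rasterised search over affine transformations (and its partial, rank-based variants for handling occlusion) is not reproduced.

import numpy as np

def directed_hausdorff(A, B):
    # max over a in A of min over b in B of ||a - b||, for 2-D point sets.
    diffs = A[:, None, :] - B[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return dists.min(axis=1).max()

def hausdorff(A, B):
    # Symmetric Hausdorff distance between two point sets.
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])            # toy model features
image = np.array([[0.1, 0.0], [1.1, 0.1], [0.0, 0.9], [3.0, 3.0]])  # toy image features
print(directed_hausdorff(model, image))  # small: every model point is matched
print(hausdorff(model, image))           # large: the spurious (3, 3) point dominates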

235 citations

Journal ArticleDOI
TL;DR: Several evaluation studies conducted by the authors, involving patient computed tomography and magnetic resonance data as well as mathematical phantoms, indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation.
Abstract: Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. Here, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n+1)-dimensional [(n+1)-D] space. The binary shape-based method is then applied to this image to create an (n+1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation.
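The binary shape-based interpolation step described in the abstract can be sketched directly: distance-transform each binary slice into a signed distance map (positive inside, negative outside, following the abstract's convention), interpolate the maps, and threshold at zero. The code below is a minimal 2-D illustration using SciPy's Euclidean distance transform, not the authors' grey-level extension via lifting.

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(binary_slice):
    # Positive distances inside the object, negative outside.
    inside = distance_transform_edt(binary_slice)
    outside = distance_transform_edt(~binary_slice)
    return inside - outside

def shape_based_interpolate(slice_a, slice_b, t):
    # Interpolate between two binary slices at fraction t in [0, 1].
    da, db = signed_distance(slice_a), signed_distance(slice_b)
    d = (1.0 - t) * da + t * db   # linear interpolation of the distance fields
    return d > 0                  # threshold at a distance value of zero

# Toy example: a small disc morphing into a larger one.
yy, xx = np.mgrid[0:64, 0:64]
disc_small = (xx - 32) ** 2 + (yy - 32) ** 2 < 8 ** 2
disc_large = (xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2
middle = shape_based_interpolate(disc_small, disc_large, 0.5)  # roughly a radius-14 disc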

231 citations

Proceedings ArticleDOI
10 Nov 2014
TL;DR: A novel image representation method learns kernel classifiers with the one-against-all rule, uses their classification responses as the new image representation, and takes the Euclidean distance between classification response vectors as the new similarity measure.
Abstract: The learning of image representations is one of the most important problems in the computer vision community. In this paper, we propose a novel image representation method based on learning and using kernel classifiers. We first train classifiers using the one-against-all rule, then use them to classify the candidate images, and finally use the classification responses as the new representations. The Euclidean distance between the classification response vectors is used as the new similarity measure. Experimental results on a large-scale image database show that the proposed algorithm can outperform the original feature on the image retrieval problem.
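A hedged sketch of the pipeline described above, using scikit-learn as a stand-in: train one-against-all classifiers on labelled images, take each image's vector of classifier responses as its new representation, and compare images by the Euclidean distance between those vectors. The random features, label counts, and RBF-kernel SVM are placeholder assumptions, not the paper's setup.

import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)

# Placeholder features and labels standing in for real image descriptors.
X_train = rng.normal(size=(200, 64))
y_train = rng.integers(0, 10, size=200)
X_query = rng.normal(size=(5, 64))

# One-against-all kernel classifiers (RBF kernel as an assumed choice).
ova = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale"))
ova.fit(X_train, y_train)

# The vector of per-class responses becomes the new image representation.
rep_train = ova.decision_function(X_train)
rep_query = ova.decision_function(X_query)

# Retrieval: rank training images by Euclidean distance between response vectors.
dists = np.linalg.norm(rep_train[None, :, :] - rep_query[:, None, :], axis=2)
ranking = np.argsort(dists, axis=1)  # nearest training images for each query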

230 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 91% related
Image processing: 229.9K papers, 3.5M citations, 91% related
Feature (computer vision): 128.2K papers, 1.7M citations, 90% related
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 88% related
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 5
2022: 17
2021: 61
2020: 99
2019: 112
2018: 81