Topic

Distance transform

About: Distance transform is a research topic. Over its lifetime, 2886 publications have been published within this topic, receiving 59481 citations.


Papers
Proceedings ArticleDOI
01 Jun 2016
TL;DR: This paper proposes an exact and iteration-free solution on a minimum spanning tree that largely reduces the search space of shortest paths, resulting in an efficient, high-quality distance transform algorithm, and introduces a boundary dissimilarity measure to complement the shortcomings of the distance transform for salient object detection.
Abstract: In this paper, we present a real-time salient object detection system based on the minimum spanning tree. Because background regions are typically connected to the image boundaries, salient objects can be extracted by computing distances to the boundaries. However, measuring image boundary connectivity efficiently is a challenging problem. Existing methods either rely on a superpixel representation to reduce the number of processing units or approximate the distance transform. Instead, we propose an exact and iteration-free solution on a minimum spanning tree. The minimum spanning tree representation of an image inherently reveals the object geometry information in a scene. Meanwhile, it largely reduces the search space of shortest paths, resulting in an efficient and high-quality distance transform algorithm. We further introduce a boundary dissimilarity measure to complement the shortcomings of the distance transform for salient object detection. Extensive evaluations show that the proposed algorithm achieves leading performance compared to state-of-the-art methods in terms of efficiency and accuracy.
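The core step the abstract describes, measuring each pixel's connectivity to the image boundary along a minimum spanning tree, can be sketched as follows. This is a simplified illustration rather than the authors' algorithm: it builds a 4-connected pixel graph with intensity-difference edge weights, extracts an MST with SciPy, and computes a multi-source shortest-path (geodesic) distance from the boundary pixels along the tree, whereas the paper uses an exact two-pass tree traversal and a different path-cost measure.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, dijkstra

def boundary_distance_on_mst(gray):
    """Distance of every pixel to the image boundary, measured along the
    minimum spanning tree of a 4-connected pixel graph (illustrative sketch)."""
    h, w = gray.shape
    idx = np.arange(h * w).reshape(h, w)

    # 4-connected grid graph; edge weight = absolute intensity difference
    # (a small epsilon keeps zero-weight edges from vanishing in the sparse matrix)
    rows, cols, weights = [], [], []
    rows += list(idx[:, :-1].ravel()); cols += list(idx[:, 1:].ravel())
    weights += list(np.abs(gray[:, :-1] - gray[:, 1:]).ravel() + 1e-6)
    rows += list(idx[:-1, :].ravel()); cols += list(idx[1:, :].ravel())
    weights += list(np.abs(gray[:-1, :] - gray[1:, :]).ravel() + 1e-6)
    graph = coo_matrix((weights, (rows, cols)), shape=(h * w, h * w))

    mst = minimum_spanning_tree(graph)
    # multi-source shortest paths from all boundary pixels, restricted to MST edges
    boundary = np.concatenate([idx[0, :], idx[-1, :], idx[1:-1, 0], idx[1:-1, -1]])
    dist = dijkstra(mst, directed=False, indices=boundary, min_only=True)
    return dist.reshape(h, w)

if __name__ == "__main__":
    img = np.random.rand(32, 32)
    print(boundary_distance_on_mst(img).shape)
```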

285 citations

Proceedings ArticleDOI
19 Jul 2004
TL;DR: An algorithm that inverts the image formation process to recover a good-visibility image of the object is presented; it greatly improves scene contrast and color correction and nearly doubles the underwater visibility range.
Abstract: Underwater imaging is important for scientific research and technology, as well as for popular activities. We present a computer vision approach which easily removes degradation effects in underwater vision. We analyze the physical effects of visibility degradation. We show that the main degradation effects can be associated with partial polarization of light. We therefore present an algorithm which inverts the image formation process to recover a good-visibility image of the object. The algorithm is based on a pair of images taken through a polarizer at different orientations. As a by-product, a distance map of the scene is derived as well. We successfully used our approach when experimenting in the sea with a system we built. We obtained a significant improvement in scene contrast and color correction, and nearly doubled the underwater visibility range.
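The inversion the abstract describes can be sketched under the standard two-component model in which the acquired image is the attenuated object signal plus partially polarized backscatter. The function below is a hedged illustration, not the paper's implementation: it assumes the degree of polarization p of the backscatter and the backscatter value at infinity b_inf have already been estimated from an object-free image region, and the variable names and normalization are assumptions of this sketch.

```python
import numpy as np

def recover_underwater(i_max, i_min, p, b_inf):
    """Recover a contrast-restored image and a relative distance map from two
    images taken through a polarizer at the orientations of maximal and
    minimal backscatter (illustrative sketch of the inversion)."""
    i_max = np.asarray(i_max, dtype=float)
    i_min = np.asarray(i_min, dtype=float)

    i_total = i_max + i_min                      # total intensity image
    backscatter = (i_max - i_min) / p            # estimated backscatter component
    transmission = np.clip(1.0 - backscatter / b_inf, 1e-3, 1.0)
    recovered = (i_total - backscatter) / transmission
    # distance is proportional to -log(transmission), up to the unknown
    # attenuation coefficient of the water
    rel_distance = -np.log(transmission)
    return recovered, rel_distance
```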

283 citations

Journal ArticleDOI
TL;DR: A practical method for automatic image correlation in three dimensions (3D) based on chamfer matching is described; it requires no user interaction and has already been introduced in clinical practice.
Abstract: Image correlation is often required to utilize the complementary information in CT, MRI, and SPECT. A practical method for automatic image correlation in three dimensions (3D) based on chamfer matching is described. The method starts with automatic extraction of contour points in one modality and automatic segmentation of the corresponding feature in the other modality. A distance transform is applied to the segmented volume and a cost function is defined that operates between the contour points and the distance transform. Matching is performed by iteratively optimizing the cost function for 3D translation, rotation, and scaling of the contour points. The complete matching process including segmentation requires no user interaction and takes about 100 s on an HP715/50 workstation. Perturbation tests on clinical data with cost functions based on mean, rms, and maximum distances in combination with two general-purpose optimization procedures have been performed. The performance of the methods has been quantified in terms of accuracy, capture range, and reliability. The best results on clinical data are obtained with the cost function based on the mean distance and the simplex optimization method. The accuracy is 0.3 mm for CT-CT, 1.0 mm for CT-MRI, and 0.7 mm for CT-SPECT correlation of the head. The accuracy is usually at subpixel level but is limited by global geometric distortions, e.g., for CT-MRI correlation. Both for CT-CT and CT-MRI correlation the capture range is about 6 cm, which is higher than normal differences in patient setup found on the scanners (less than 4 cm). This means that the correlation procedure seldom fails (better than 98% reliability) and user interaction is unnecessary. For CT-SPECT matching the capture range is about 3 cm (80% reliability), and must be further improved. The method has already been introduced in clinical practice.
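The matching loop described above, a distance transform of the segmented volume, a cost function evaluated at the transformed contour points, and simplex optimization of that cost, can be sketched in a few lines. The toy example below uses SciPy's Euclidean distance transform, the mean-distance cost, and Nelder-Mead (simplex), but it restricts the transformation to a 3D translation for brevity, whereas the paper also optimizes rotation and scaling; all names and the synthetic data are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates
from scipy.optimize import minimize

def chamfer_cost(shift, contour_pts, dist_map):
    """Mean distance between shifted contour points and the segmented feature,
    looked up (trilinearly) in the precomputed distance transform."""
    pts = contour_pts + np.asarray(shift)
    return map_coordinates(dist_map, pts.T, order=1, mode='nearest').mean()

# Toy data: a segmented cube in a 40^3 volume, and "contour points" taken from
# a copy of the same cube translated by 3 voxels along each axis.
seg = np.zeros((40, 40, 40), dtype=bool)
seg[10:20, 10:20, 10:20] = True
dist_map = distance_transform_edt(~seg)          # zero inside the feature

zz, yy, xx = np.nonzero(seg)
contour_pts = np.stack([zz, yy, xx], axis=1).astype(float) + 3.0

# Simplex (Nelder-Mead) optimization of the mean-distance cost
res = minimize(chamfer_cost, x0=[0.0, 0.0, 0.0],
               args=(contour_pts, dist_map), method='Nelder-Mead')
print(res.x)  # should move toward (-3, -3, -3)
```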

281 citations

Proceedings ArticleDOI
26 Mar 2000
TL;DR: This paper investigates both graph-theoretic methods and ad hoc heuristics for instrumenting the Internet to obtain distance maps, and evaluates the efficacy of the resulting maps by comparing closest-replica determinations made with known topologies against those obtained using the distance maps.
Abstract: The IDMaps project aims to provide a distance map of the Internet from which relative distances between hosts on the Internet can be gauged. Many distributed systems and applications can benefit from such a distance map service. For example, a common method to improve the user-perceived performance of the Internet is to place data and server mirrors closer to clients; when a client tries to access a mirrored server, which mirror should it access? With IDMaps, the closest mirror can be determined based on distance estimates between the client and the mirrors. In this paper we investigate both graph-theoretic methods and ad hoc heuristics for instrumenting the Internet to obtain distance maps. We evaluate the efficacy of the resulting distance maps by comparing the determinations of the closest replica using known topologies against those obtained using the distance maps.
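Once such a distance map is available, closest-replica selection reduces to taking the minimum over estimated client-to-mirror distances. The fragment below is purely illustrative, with a hypothetical lookup table standing in for the IDMaps estimates (which are composed from client-to-tracer and tracer-to-tracer measurements rather than stored end to end).

```python
def closest_mirror(client, mirrors, est_distance):
    """Return the mirror with the smallest estimated distance to the client."""
    return min(mirrors, key=lambda m: est_distance(client, m))

# Hypothetical distance estimates (e.g., milliseconds) from a distance map service
table = {("client1", "mirror-us"): 42.0,
         ("client1", "mirror-eu"): 110.0,
         ("client1", "mirror-asia"): 180.0}

print(closest_mirror("client1", ["mirror-us", "mirror-eu", "mirror-asia"],
                     lambda c, m: table[(c, m)]))   # -> mirror-us
```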

267 citations

Book ChapterDOI
01 Jan 2000
TL;DR: A new general algorithm for computing distance transforms of digital images is presented, which can be used for the computation of the exact Euclidean, Manhattan, and chessboard distance transforms.
Abstract: A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the computation per row (column) is independent of the computation of other rows (columns), the algorithm can be easily parallelized on shared memory computers. The algorithm can be used for the computation of the exact Euclidean, Manhattan (L1 norm), and chessboard (L∞ norm) distance transforms.
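The two-phase structure described here, a forward and a backward scan over each column followed by a forward and a backward scan over each row, can be illustrated directly for the Manhattan (L1) case, where both phases really are plain two-scan minimizations. The sketch below follows that structure under that assumption; the exact Euclidean version in the chapter keeps the same column-wise first phase but replaces the simple row-wise scans with a lower-envelope computation, which is omitted here.

```python
import numpy as np

def manhattan_distance_transform(binary):
    """Separable two-phase L1 distance transform of a 2D binary image.
    `binary` marks feature pixels with True; the result gives, for every
    pixel, the L1 distance to the nearest feature pixel."""
    rows, cols = binary.shape
    inf = rows + cols                       # exceeds any possible L1 distance here

    # Phase 1: column-wise scans -> distance to the nearest feature in the same column
    g = np.full((rows, cols), inf, dtype=np.int64)
    g[binary] = 0
    for y in range(1, rows):                # forward (top-to-bottom) scan
        g[y] = np.minimum(g[y], g[y - 1] + 1)
    for y in range(rows - 2, -1, -1):       # backward (bottom-to-top) scan
        g[y] = np.minimum(g[y], g[y + 1] + 1)

    # Phase 2: row-wise scans combine the column distances across columns
    dt = g.copy()
    for x in range(1, cols):                # forward (left-to-right) scan
        dt[:, x] = np.minimum(dt[:, x], dt[:, x - 1] + 1)
    for x in range(cols - 2, -1, -1):       # backward (right-to-left) scan
        dt[:, x] = np.minimum(dt[:, x], dt[:, x + 1] + 1)
    return dt

if __name__ == "__main__":
    img = np.zeros((5, 7), dtype=bool)
    img[2, 3] = True                        # a single feature pixel
    print(manhattan_distance_transform(img))
```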

263 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (91% related)
Image processing: 229.9K papers, 3.5M citations (91% related)
Feature (computer vision): 128.2K papers, 1.7M citations (90% related)
Convolutional neural network: 74.7K papers, 2M citations (89% related)
Feature extraction: 111.8K papers, 2.1M citations (88% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    5
2022    17
2021    61
2020    99
2019    112
2018    81