scispace - formally typeset
Topic

Distance transform

About: Distance transform is a research topic. Over the lifetime, 2886 publications have been published within this topic receiving 59481 citations.
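For background, a distance transform maps each pixel of a binary image to its distance from the nearest feature pixel. A minimal sketch using SciPy (illustrative only, not taken from any of the papers below):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary image: distance_transform_edt returns, for each non-zero
# element, the Euclidean distance to the nearest zero element.
img = np.ones((5, 5), dtype=np.uint8)
img[2, 2] = 0  # single feature pixel in the centre

dist = distance_transform_edt(img)
print(dist[2, 2])  # 0.0 at the feature pixel itself
print(dist[2, 4])  # 2.0, two pixels to the right
print(dist[0, 0])  # sqrt(8) ~ 2.828 at the corner
```

Chamfer and city-block variants replace the exact Euclidean metric with cheaper approximations, but the interface is the same: a binary image in, a per-pixel distance map out.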


Papers
More filters
Journal ArticleDOI
TL;DR: This paper shows how to apply a fiber detection process to minimize the calibration time and improve the quality of the recovered image.
Abstract: Image transmission by incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence needed to reconstruct the image captured by the pseudocamera. This information is recorded in a lookup table (LUT), which is later used for reordering the fiber positions and reconstructing the original image. This paper shows how to apply a fiber detection process to minimize the calibration time and improve the quality of the recovered image. Two fiber detection methods were developed: the first uses the circular Hough transform algorithm based on the image gradient; the second combines a number of morphological transformations with the distance transform. The results demonstrate that this technique provides a remarkable reduction in processing time while improving fiber detection accuracy.
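The second method's core idea — locating roughly circular fibers via a distance transform — can be sketched as follows: fiber centres appear as local maxima of the distance transform of a binarised bundle image. This is only an illustration of that idea; the paper's actual pipeline also applies morphological transformations beforehand, and the `min_radius` parameter here is an assumption, not from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def fiber_centres(binary_fibers, min_radius=1.0):
    """Approximate fiber centres as local maxima of the distance
    transform of a binary fiber mask (1 = fiber, 0 = gap)."""
    # Distance of every fiber pixel to the nearest gap pixel:
    # deep inside a fiber, the distance peaks at the fiber centre.
    dist = distance_transform_edt(binary_fibers)
    # A pixel is a centre candidate if it equals the maximum of its
    # 3x3 neighbourhood and is at least min_radius from any gap.
    local_max = maximum_filter(dist, size=3)
    peaks = (dist == local_max) & (dist >= min_radius)
    return np.argwhere(peaks)

# Two well-separated circular "fibers" on a small grid.
yy, xx = np.mgrid[0:15, 0:15]
mask = ((yy - 4) ** 2 + (xx - 4) ** 2 <= 4) | \
       ((yy - 10) ** 2 + (xx - 10) ** 2 <= 4)
centres = fiber_centres(mask.astype(np.uint8))
# centres contains the two disk centres, (4, 4) and (10, 10)
```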

15 citations

Patent
19 May 2006
TL;DR: In this article, a method to extend the display range of 2D image recordings of an object region, particularly in medical applications, was proposed, in which first 2D or 3D image data are acquired from a larger object region and at least one additional set of 2D image data is acquired from a smaller object region that lies within the larger one.

Abstract: In a method to extend the display range of 2D image recordings of an object region, particularly in medical applications, first 2D or 3D image data are acquired from a larger object region, and at least one additional set of 2D image data is acquired from a smaller object region that lies within the larger object region. The first 2D or 3D image data are brought into registration with the additional 2D image data via a projection geometry. From the first 2D or 3D image data, an image data set is generated for an image display of the first object region that is suitable for combination with the additional 2D image data. In the image display of the larger object region, at least temporarily, at least one display of the additional 2D image data is integrated by replacing image data in the first image data set with image data from the additional 2D image data representing the smaller region. This enables an overview of the larger object region while the smaller object region of interest is displayed within the image in a more up-to-date fashion, as well as with higher resolution and/or higher contrast.

15 citations

Journal ArticleDOI
TL;DR: This paper uses a fast, GPU-based method to approximate the true geometric distance between the source and the target by rendering the source object into a distance field which was built around the target.
Abstract: In this paper, we propose an efficient method for partial 3D shape matching based on minimizing the geometric distance between the source and the target geometry. Unlike existing methods, our method does not use a feature-based distance in order to obtain a matching score. Instead, we use a fast, GPU-based method to approximate the true geometric distance between the source and the target by rendering the source object into a distance field built around the target. This function behaves smoothly in the space of transformations and allows for an efficient gradient-based local optimization. To overcome local minima, we use single-point correspondences between surface points on the source and the target, respectively, employing simple yet efficient local features based on the distribution of normal vectors around a reference point. The best correspondences define starting positions for a local optimization. The high efficiency of the distance computation allows for robust determination of the global minimum in less than a second, which makes our method usable in interactive applications. Our method works for any kind of input data since it only requires point data with normal information at each point. We also demonstrate the capability of our algorithm to perform global alignment of similar 3D objects.
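The key trick above — scoring a candidate alignment by looking up source points in a precomputed distance field around the target — can be sketched on the CPU with a voxelized field. This is a 2D toy version under stated assumptions (the paper builds the field on the GPU and works in 3D; the grid size and point sets here are illustrative):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates

def build_distance_field(target_pts, shape):
    """Distance field around the target: each grid cell stores the
    Euclidean distance to the nearest target point."""
    grid = np.ones(shape)
    idx = np.round(target_pts).astype(int)
    grid[idx[:, 0], idx[:, 1]] = 0  # mark target cells as zeros
    return distance_transform_edt(grid)

def matching_score(field, source_pts):
    """Mean geometric distance of the source samples, read from the
    field with bilinear interpolation ('rendering' the source into
    the field instead of doing nearest-neighbour queries)."""
    vals = map_coordinates(field, source_pts.T, order=1)
    return float(vals.mean())

target = np.array([[10.0, 10.0], [10.0, 20.0], [20.0, 15.0]])
field = build_distance_field(target, (32, 32))

# A perfectly aligned source scores zero; a translated copy scores
# roughly the magnitude of the offset.
aligned = matching_score(field, target)                       # 0.0
shifted = matching_score(field, target + np.array([3.0, 0.0]))  # 3.0
```

Because the score varies smoothly as the source transform changes, it can drive a gradient-based local optimizer, which is the property the paper exploits.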

15 citations

Proceedings ArticleDOI
Ziheng Zhang1, Anpei Chen1, Ling Xie1, Jingyi Yu1, Shenghua Gao1 
15 Oct 2019
TL;DR: This work introduces a new representation, namely a semantics-aware distance map (sem-dist map), to serve as a target for amodal segmentation instead of the commonly used masks and heatmaps, and introduces a novel convolutional neural network architecture, which is referred to as semantic layering network, to estimate sem-dist maps layer by layer.
Abstract: In this work, we demonstrate yet another approach to tackle the amodal segmentation problem. Specifically, we first introduce a new representation, namely a semantics-aware distance map (sem-dist map), to serve as our target for amodal segmentation instead of the commonly used masks and heatmaps. The sem-dist map is a kind of level-set representation, in which the different regions of an object are placed into different levels on the map according to their visibility. It is a natural extension of masks and heatmaps, in which modal and amodal segmentation, as well as depth-order information, are all well described. We also introduce a novel convolutional neural network (CNN) architecture, which we refer to as the semantic layering network, to estimate sem-dist maps layer by layer, from the global level to the instance level, for all objects in an image. Extensive experiments on the COCOA and D2SA datasets demonstrate that our framework can predict amodal segmentation, occlusion, and depth order with state-of-the-art performance.
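The level-set idea underlying the sem-dist map is related to the classic signed distance map of a binary mask: interior pixels carry positive depth, exterior pixels negative distance, and the zero level set is the object boundary. The sketch below shows only this basic construction, not the paper's visibility-layered definition:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Level-set style encoding of a binary mask: positive depth
    inside the object, negative distance outside. The paper's
    sem-dist map additionally assigns levels by visibility; this
    shows only the underlying level-set idea."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)    # depth inside the object
    outside = distance_transform_edt(~mask)  # distance outside it
    return inside - outside

mask = np.zeros((7, 7), dtype=np.uint8)
mask[2:5, 2:5] = 1  # a 3x3 square object
sdm = signed_distance_map(mask)
# The object interior is a positive level, the background negative,
# and thresholding sdm > 0 recovers the original mask exactly.
```

This is what makes such maps a "natural extension" of masks: the mask is recoverable by thresholding, while the map also encodes how deep each pixel sits within its region.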

15 citations

Book ChapterDOI
23 Sep 2009
TL;DR: The proposed method can segment regions of an object with a stepwise process from global to local segmentation by iterating the graph-cuts process with mean shift clustering using a different bandwidth.
Abstract: We present a novel approach to segmenting video using iterated graph cuts based on spatio-temporal volumes. We use the mean shift clustering algorithm to build the spatio-temporal volumes with different bandwidths from the input video. We compute a prior probability from the likelihood given by a color histogram and a distance transform of the segmentation results from the previous graph-cuts iteration, and set this probability as the t-link weights of the graph for the next iteration. The proposed method can segment regions of an object in a stepwise process from global to local segmentation by iterating the graph-cuts process with mean shift clustering at a different bandwidth each time. It reduces the number of nodes and edges to about 1/25 of the conventional method while maintaining the same segmentation rate.
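The distance-transform part of that prior can be sketched as follows: the previous iteration's mask is turned into a soft foreground probability that is 1 inside the mask and decays with distance outside it. This is an illustration of the general idea only; the decay model and the `sigma` parameter are assumptions, not taken from the paper, and the paper additionally combines this with a color-histogram likelihood.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def foreground_prior(prev_mask, sigma=2.0):
    """Soft foreground prior from the previous iteration's
    segmentation: probability 1 inside the previous mask, decaying
    with the distance transform outside it. The Gaussian decay and
    its width sigma are illustrative choices."""
    # Distance of each background pixel to the previous foreground.
    dist_outside = distance_transform_edt(prev_mask == 0)
    return np.exp(-(dist_outside ** 2) / (2.0 * sigma ** 2))

prev = np.zeros((9, 9), dtype=np.uint8)
prev[3:6, 3:6] = 1  # previous segmentation result
prior = foreground_prior(prev)
# prior is 1.0 inside the previous mask and falls off smoothly
# with distance from it.
```

In a graph-cuts setup, t-link capacities are then typically set from the negative log of such probabilities, so pixels near the previous segmentation are cheap to label as foreground again.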

15 citations


Network Information
Related Topics (5)
- Image segmentation: 79.6K papers, 1.8M citations (91% related)
- Image processing: 229.9K papers, 3.5M citations (91% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (90% related)
- Convolutional neural network: 74.7K papers, 2M citations (89% related)
- Feature extraction: 111.8K papers, 2.1M citations (88% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023         5
2022        17
2021        61
2020        99
2019       112
2018        81