
Showing papers on "Range segmentation" published in 2009


Journal ArticleDOI
TL;DR: It is shown that local combination strategies outperform global methods in segmenting high-contrast structures, while global techniques are less sensitive to noise when contrast between neighboring structures is low.
Abstract: It has been shown that employing multiple atlas images improves segmentation accuracy in atlas-based medical image segmentation. Each atlas image is registered to the target image independently and the calculated transformation is applied to the segmentation of the atlas image to obtain a segmented version of the target image. Several independent candidate segmentations result from the process, which must be somehow combined into a single final segmentation. Majority voting is the generally used rule to fuse the segmentations, but more sophisticated methods have also been proposed. In this paper, we show that the use of global weights to ponderate candidate segmentations has a major limitation. As a means to improve segmentation accuracy, we propose the generalized local weighting voting method. Namely, the fusion weights adapt voxel-by-voxel according to a local estimation of segmentation performance. Using digital phantoms and MR images of the human brain, we demonstrate that the performance of each combination technique depends on the gray level contrast characteristics of the segmented region, and that no fusion method yields better results than the others for all the regions. In particular, we show that local combination strategies outperform global methods in segmenting high-contrast structures, while global techniques are less sensitive to noise when contrast between neighboring structures is low. We conclude that, in order to achieve the highest overall segmentation accuracy, the best combination method for each particular structure must be selected.
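The local weighting idea can be sketched in a few lines: every atlas contributes a vote for its label at each voxel, scaled by a voxel-wise estimate of how well that atlas fits the target there. A minimal numpy sketch, assuming the registered candidate label maps and a per-atlas local performance map are already available; names and shapes are illustrative, not the authors' code.

```python
import numpy as np

def locally_weighted_fusion(candidate_labels, local_weights):
    """Fuse candidate segmentations with voxel-wise weights.

    candidate_labels : (K, X, Y, Z) int array, one label map per atlas
    local_weights    : (K, X, Y, Z) float array, local performance estimate
                       per atlas (e.g., local intensity similarity)
    Returns the label with the largest accumulated weight at every voxel.
    """
    labels = np.unique(candidate_labels)
    # Accumulate weighted votes per label, then take the arg-max label.
    votes = np.zeros((labels.size,) + candidate_labels.shape[1:])
    for i, lab in enumerate(labels):
        votes[i] = np.sum(local_weights * (candidate_labels == lab), axis=0)
    return labels[np.argmax(votes, axis=0)]

# Toy example: 5 atlases, a 32^3 target volume, random data.
rng = np.random.default_rng(0)
cands = rng.integers(0, 3, size=(5, 32, 32, 32))
w = rng.random((5, 32, 32, 32))
fused = locally_weighted_fusion(cands, w)
```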

546 citations


Proceedings ArticleDOI
01 Sep 2009
TL;DR: This paper introduces an unsupervised color segmentation method to segment the input image several times, each time focussing on a different salient part of the image and to subsequently merge all obtained results into one composite segmentation.
Abstract: This paper introduces an unsupervised color segmentation method. The underlying idea is to segment the input image several times, each time focussing on a different salient part of the image, and to subsequently merge all obtained results into one composite segmentation. We identify salient parts of the image by applying affinity propagation clustering to efficiently calculated local color and texture models. Each salient region then serves as an independent initialization for a figure/ground segmentation. Segmentation is done by minimizing a convex energy functional based on weighted total variation, leading to a globally optimal solution. Each salient region provides an accurate figure/ground segmentation highlighting different parts of the image. These highly redundant results are combined into one composite segmentation by analyzing local segmentation certainty. Our formulation is quite general, and other salient region detection algorithms in combination with any semi-supervised figure/ground segmentation approach can be used. We demonstrate the high quality of our method on the well-known Berkeley segmentation database. Furthermore, we show that our method can be used to provide good spatial support for recognition frameworks.
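The seeding step can be illustrated with scikit-learn's affinity propagation applied to crude local color models (mean Lab color of image tiles). The tiling, the features, and the file name are assumptions, and the weighted-total-variation figure/ground step is not reproduced here.

```python
import numpy as np
from skimage import io, color
from sklearn.cluster import AffinityPropagation

img = io.imread('input.png')[..., :3]          # assumed RGB input
lab = color.rgb2lab(img)

# Local color models: mean Lab color of non-overlapping 16x16 tiles.
tile = 16
h = lab.shape[0] // tile * tile
w = lab.shape[1] // tile * tile
feats = (lab[:h, :w]
         .reshape(h // tile, tile, w // tile, tile, 3)
         .mean(axis=(1, 3))
         .reshape(-1, 3))

# Affinity propagation picks exemplar tiles, used here as candidate
# salient regions; each would seed one figure/ground segmentation.
ap = AffinityPropagation(damping=0.9, random_state=0).fit(feats)
print('number of salient-region seeds:', len(ap.cluster_centers_indices_))
```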

262 citations


Proceedings ArticleDOI
01 Sep 2009
TL;DR: This work presents a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation, in a cue independent manner and evaluates the performance of the proposed algorithm on challenging videos and stereo pairs.
Abstract: The human visual system observes and understands a scene/image by making a series of fixations. Every “fixation point” lies inside a particular region of arbitrary shape and size in the scene which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting that region containing the “fixation point”. Segmenting this region is equivalent to finding the enclosing contour - a connected set of boundary edge fragments in the edge map of the scene - around the fixation. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue independent manner. We evaluate the performance of the proposed algorithm on challenging videos and stereo pairs. Although the proposed algorithm is more suitable for an active observer capable of fixating at different locations in the scene, it applies to a single image as well. In fact, we show that even with monocular cues alone, the introduced algorithm performs as well or better than a number of image segmentation algorithms, when applied to challenging inputs.

137 citations


Book ChapterDOI
23 Sep 2009
TL;DR: A novel algorithm for unsupervised segmentation of natural images that harnesses the principle of minimum description length (MDL), based on observations that a homogeneously textured region of a natural image can be well modeled by a Gaussian distribution and the region boundary can be effectively coded by an adaptive chain code.
Abstract: We present a novel algorithm for unsupervised segmentation of natural images that harnesses the principle of minimum description length (MDL). Our method is based on observations that a homogeneously textured region of a natural image can be well modeled by a Gaussian distribution and the region boundary can be effectively coded by an adaptive chain code. The optimal segmentation of an image is the one that gives the shortest coding length for encoding all textures and boundaries in the image, and is obtained via an agglomerative clustering process applied to a hierarchy of decreasing window sizes. The optimal segmentation also provides an accurate estimate of the overall coding length and hence the true entropy of the image. Our algorithm achieves state-of-the-art results on the Berkeley Segmentation Dataset compared to other popular methods.
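A rough sketch of the kind of criterion such MDL-based merging uses: two adjacent regions are merged when coding their features with a single Gaussian is cheaper than coding them separately. The Gaussian coding-length expression below is a generic lossy-coding approximation with an assumed distortion parameter eps, not necessarily the paper's exact formula, and boundary coding is omitted.

```python
import numpy as np

def gaussian_coding_length(X, eps=0.1):
    """Approximate bits to code N d-dim samples X as one Gaussian,
    up to distortion eps (a standard lossy-coding bound)."""
    N, d = X.shape
    cov = np.cov(X, rowvar=False) + 1e-8 * np.eye(d)
    return 0.5 * (N + d) * np.log2(np.linalg.det(
        np.eye(d) + (d / (eps ** 2 * N)) * cov))

def merge_gain(Xa, Xb, eps=0.1):
    """Bits saved by coding regions a and b jointly instead of separately
    (boundary coding cost omitted in this sketch)."""
    joint = gaussian_coding_length(np.vstack([Xa, Xb]), eps)
    return (gaussian_coding_length(Xa, eps)
            + gaussian_coding_length(Xb, eps) - joint)

# Toy example: two texture "regions" as Gaussian feature clouds.
rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(500, 8))
b = rng.normal(0.2, 1.0, size=(500, 8))   # similar texture, so merging pays off
print('bits saved by merging:', merge_gain(a, b))
```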

117 citations


Proceedings ArticleDOI
07 Nov 2009
TL;DR: This paper presents a simple yet effective approach for estimating a defocus blur map based on the relationship of the contrast to the image gradient in a local image region, called the local contrast prior.
Abstract: Image defocus estimation is useful for several applications including deblurring, blur magnification, measuring image quality, and depth of field segmentation. In this paper, we present a simple yet effective approach for estimating a defocus blur map based on the relationship of the contrast to the image gradient in a local image region. We call this relationship the local contrast prior. The advantage of our approach is that it does not require filter banks or frequency decomposition of the input image; instead we only need to compare local gradient profiles with the local contrast. We discuss the idea behind the local contrast prior and demonstrate its effectiveness on a variety of experiments.
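One plausible reading of the local contrast prior, sketched with numpy and SciPy: within each window, compare the local contrast (maximum minus minimum intensity) with the strongest local gradient; a sharp edge makes the two comparable, while defocus leaves the contrast intact but flattens the gradient. Window size, the ratio used as the blur score, and the input file are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from skimage import io, color

gray = color.rgb2gray(io.imread('photo.jpg')[..., :3])   # assumed RGB input

# Gradient magnitude and local statistics over a 15x15 window.
gy, gx = np.gradient(gray)
grad_mag = np.hypot(gx, gy)
win = 15
local_contrast = maximum_filter(gray, win) - minimum_filter(gray, win)
local_max_grad = maximum_filter(grad_mag, win)

# Blur score: how much smaller the strongest local gradient is than the
# local contrast; larger values suggest stronger defocus.
eps = 1e-6
blur_map = local_contrast / (local_max_grad + eps)
blur_map[local_contrast < 0.05] = 0.0   # ignore flat, textureless windows
```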

107 citations


Patent
04 Mar 2009
TL;DR: In this paper, a system and method of identifying anatomical structures in a patient was proposed, which includes the acts of acquiring an image of the patient, the image including a set of image elements, segmenting the image to categorize each image elements according to its substance, computing the probability that the categorization of each image element is correct, resegmenting the segmented image starting with image elements that have a high probability and progressing to image elements with lower probabilities.
Abstract: A system and method of identifying anatomical structures in a patient. The method includes the acts of acquiring an image of the patient, the image including a set of image elements; segmenting the image to categorize each image elements according to its substance; computing the probability that the categorization of each image element is correct; resegmenting the image starting with image elements that have a high probability and progressing to image elements with lower probabilities; aligning at least one of the image elements with an anatomical atlas; and fitting the anatomical atlas to the segmented image.

106 citations


Journal ArticleDOI
TL;DR: The multistage watershed segmentation algorithm detects flaws such as slag inclusions and wormhole-type weld flaws with reasonable accuracy, producing closed contours; small cavities are also highlighted successfully.
Abstract: In this paper, the concept of applying morphological multistage watershed segmentation to the detection of flaws in radiographic weld images is discussed. It is simple and intuitive and always produces a complete division of the image. The multistage watershed segmentation used here reduces the problem of over-segmentation, besides generating boundaries with very little deviation from their original position. Two-stage watershed segmentation is implemented here. At the first stage, the watershed transform is applied to an X-ray image, and the resultant mosaic image pattern is then thresholded by Otsu's thresholding method and converted into a binary image. Then, morphology and a top-hat transformation are applied to the binary image to separate partially overlapping objects. A Euclidean distance map is calculated for each basin to label the resultant segments uniquely and to separate ridges. This is followed by the second stage of watershed segmentation, which obtains better-defined boundaries while removing over-segmented regions. The watershed segmentation algorithm has been able to detect flaws such as slag inclusions and wormhole-type weld flaws. It shows all defects with reasonable accuracy, producing closed contours. Similarly, small cavities are also highlighted successfully.
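The described pipeline maps naturally onto scikit-image and SciPy primitives. The sketch below follows the stated order (watershed mosaic, Otsu thresholding, top-hat morphology, distance transform, second watershed); structuring-element sizes, the assumption that flaws appear bright, and the file name are placeholders.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, morphology, segmentation, feature

xray = io.imread('weld_radiograph.png', as_gray=True)   # assumed input file

# Stage 1: watershed on the gradient; with no markers given, local minima
# are used, which yields a fine mosaic of small basins.
gradient = filters.sobel(xray)
mosaic = segmentation.watershed(gradient)

# Replace each basin by its mean gray level, then threshold with Otsu
# (assumes flaws appear as small bright structures; invert otherwise).
ids = np.unique(mosaic)
means = ndi.mean(xray, labels=mosaic, index=ids)
smoothed = means[np.searchsorted(ids, mosaic)]
binary = smoothed > filters.threshold_otsu(smoothed)

# Top-hat keeps only structures narrower than the structuring element,
# i.e. flaw-sized regions; the disk radius is a placeholder.
binary = morphology.white_tophat(binary.astype(np.uint8),
                                 morphology.disk(15)).astype(bool)

# Stage 2: Euclidean distance map + marker-based watershed separates
# touching or partially overlapping flaw candidates.
distance = ndi.distance_transform_edt(binary)
peaks = feature.peak_local_max(distance, min_distance=5, labels=binary)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
flaws = segmentation.watershed(-distance, markers, mask=binary)
```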

70 citations


Journal ArticleDOI
TL;DR: This paper presents a new approach for the segmentation of color textured images, which is based on a novel energy function, derived by exploiting an intermediate step of modal analysis that is utilized in order to describe and analyze the deformations of a 3-D deformable surface model.
Abstract: This paper presents a new approach for the segmentation of color textured images, which is based on a novel energy function. The proposed energy function, which expresses the local smoothness of an image area, is derived by exploiting an intermediate step of modal analysis that is utilized in order to describe and analyze the deformations of a 3-D deformable surface model. The external forces that attract the 3-D deformable surface model combine the intensity of the image pixels with the spatial information of local image regions. The proposed image segmentation algorithm has two steps. First, a color quantization scheme, which is based on the node displacements of the deformable surface model, is utilized in order to decrease the number of colors in the image. Then, the proposed energy function is used as a criterion for a region growing algorithm. The final segmentation of the image is derived by a region merge approach. The proposed method was applied to the Berkeley segmentation database. The obtained results show good segmentation robustness, when compared to other state of the art image segmentation algorithms.

67 citations


Patent
30 Jun 2009
TL;DR: In this paper, a method is described comprising the steps of capturing a first image illuminated by natural light and capturing a second image illuminated by infrared light, where the intensity for each pixel in the first image may be calculated.
Abstract: Systems and methods are disclosed for creating image maps. Some embodiments include a method comprising the steps of: capturing a first image illuminated by natural light and capturing a second image illuminated by infrared light. The second image may be captured at the same time as the first image. The R, G, and B values for each pixel in the first image may be determined. The intensity for each pixel in the first image may be calculated. An IR intensity for each pixel in the second image may be calculated. A depth value may then be estimated for each pixel using the ratio of the IR intensity and the intensity of corresponding pixels in the first and second images.
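The depth cue itself is easy to sketch: active IR illumination falls off with distance while ambient visible light largely does not, so the IR-to-visible intensity ratio gives a coarse relative depth. The file names, the plain channel average used as intensity, and the normalisation are assumptions.

```python
import numpy as np
from skimage import io, img_as_float

rgb = img_as_float(io.imread('scene_visible.png')[..., :3])   # assumed aligned
ir = img_as_float(io.imread('scene_ir.png', as_gray=True))    # captures

# Visible-light intensity from the R, G, B values (plain average here;
# a luma weighting would work as well).
intensity = rgb.mean(axis=-1)

# Active IR illumination falls off with distance while ambient visible
# light does not, so the IR/visible ratio is a coarse relative depth cue.
eps = 1e-6
ratio = ir / (intensity + eps)
depth_map = 1.0 - (ratio - ratio.min()) / (np.ptp(ratio) + eps)  # near ~ 0, far ~ 1
```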

53 citations


Journal ArticleDOI
TL;DR: This paper presents work on accurate image segmentation utilizing local image characteristics and determines the number of models that best suits the natural number of clusters present in the image, based on the Schwarz criterion.

51 citations


Journal ArticleDOI
TL;DR: An image segmentation method based on the modified edge-following scheme where different thresholds are automatically determined according to areas with varied contents in a picture, thus yielding suitable segmentation results in different areas is developed.
Abstract: Image segmentation has become an indispensable task in many image and video applications. This work develops an image segmentation method based on the modified edge-following scheme where different thresholds are automatically determined according to areas with varied contents in a picture, thus yielding suitable segmentation results in different areas. First, the iterative threshold selection technique is modified to calculate the initial-point threshold of the whole image or a particular block. Second, the quad-tree decomposition that starts from the whole image employs gray-level gradient characteristics of the currently-processed block to decide further decomposition or not. After the quad-tree decomposition, the initial-point threshold in each decomposed block is adopted to determine initial points. Additionally, the contour threshold is determined based on the histogram of gradients in each decomposed block. Particularly, contour thresholds could eliminate inappropriate contours to increase the accuracy of the search and minimize the required searching time. Finally, the edge-following method is modified and then conducted based on initial points and contour thresholds to find contours precisely and rapidly. By using the Berkeley segmentation data set with realistic images, the proposed method is demonstrated to take the least computational time for achieving fairly good segmentation performance in various image types.
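The iterative threshold selection that the method modifies is, in its basic form, a short fixed-point loop: start from the mean, split the pixels at the current threshold, and move the threshold to the midpoint of the two group means until it stops changing. A sketch, with the convergence tolerance as an assumption; in the paper's scheme this runs per quad-tree block rather than once for the whole image.

```python
import numpy as np

def iterative_threshold(gray, tol=0.5, max_iter=100):
    """Basic iterative threshold selection on a grayscale block."""
    t = gray.mean()
    for _ in range(max_iter):
        low = gray[gray <= t]
        high = gray[gray > t]
        if low.size == 0 or high.size == 0:
            break
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Toy bimodal block: two gray-level populations.
rng = np.random.default_rng(2)
block = np.concatenate([rng.normal(60, 10, 2000), rng.normal(170, 15, 2000)])
print('initial-point threshold:', iterative_threshold(block))
```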

Journal ArticleDOI
TL;DR: In this article, a preprocessing step using the Random Walk method is performed on input images to reduce the deficiencies of the watershed algorithm; this probabilistic approach improves the contrast of degraded images.
Abstract: With the rapid advancement of computer technology, the use of computer-based technologies is increasing in different fields of life. Image segmentation is an important problem in different fields of image processing and computer vision. Image segmentation is the process of dividing images according to their characteristics, e.g., color and the objects present in the images. Different methods have been presented for image segmentation. The focus of this study is watershed segmentation. The tool used in this study is MATLAB. Good results from watershed segmentation rely entirely on the image contrast. Image contrast may be degraded during image acquisition. The watershed algorithm can generate over-segmentation or under-segmentation on badly contrasted images. In order to reduce these deficiencies of the watershed algorithm, a preprocessing step using the Random Walk method is performed on input images. The Random Walk method is a probabilistic approach, which improves the image contrast where the image is degraded.

Patent
30 Sep 2009
TL;DR: In this article, the authors present methods and systems for creating depth and volume in a 2D planar image to create an associated 3D image by utilizing a plurality of layers of the 2D image, where each layer comprises one or more portions of the two-dimensional image.
Abstract: Implementations of the present invention involve methods and systems for creating depth and volume in a 2-D planar image to create an associated 3-D image by utilizing a plurality of layers of the 2-D image, where each layer comprises one or more portions of the 2-D image. Each layer may be reproduced into a corresponding left eye and right eye layers, with one or both layers including a pixel offset corresponding to a perceived depth. Further, a depth model may be created for one or more objects of the 2-D image to provide a template upon which the pixel offset for one or more pixels of the 2-D image may be adjusted to provide the 2-D image with a more nuanced 3-D effect. In this manner, the 2-D image may be converted to a corresponding 3-D image with a perceived depth.
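The layered pixel-offset idea reduces to shifting each layer horizontally by an amount tied to its intended depth and compositing the shifted stacks into left-eye and right-eye views. A toy numpy sketch, with layer masks and offsets as placeholders; np.roll wraps at the border, which a real pipeline would avoid.

```python
import numpy as np

def shift_layer(layer_rgba, offset):
    """Shift an RGBA float layer horizontally by `offset` pixels.
    np.roll wraps at the border; a real pipeline would pad instead."""
    return np.roll(layer_rgba, offset, axis=1)

def compose(layers):
    """Back-to-front alpha compositing of (H, W, 4) float layers."""
    out = np.zeros_like(layers[0][..., :3])
    for layer in layers:
        a = layer[..., 3:4]
        out = layer[..., :3] * a + out * (1.0 - a)
    return out

def stereo_pair(layers, offsets):
    """layers ordered far -> near; larger offsets give nearer layers
    a stronger perceived depth."""
    left = compose([shift_layer(l, +o) for l, o in zip(layers, offsets)])
    right = compose([shift_layer(l, -o) for l, o in zip(layers, offsets)])
    return left, right

# left_img, right_img = stereo_pair(layers, offsets=[0, 2, 5])  # placeholders
```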

Patent
17 Mar 2009
TL;DR: In this article, a method is proposed to synthesize virtual images from a sequence of texture images and corresponding depth images, where each depth image I(x, y) stores a depth d at each pixel location (x, y).
Abstract: A method synthesizes virtual images from a sequence of texture images and a sequence of corresponding depth images, wherein each depth image I(x, y) stores depths d at pixel locations (x, y). Each depth image is preprocessed to produce a corresponding preprocessed depth image. A first reference image and a second reference image are selected from the sequence of texture images. Then, depth-based 3D warping, depth-based histogram matching, base plus assistant image blending, and depth-based in-painting are applied in order to synthesize a virtual image.
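Of the listed steps, the depth-based 3-D warping is the most mechanical and can be sketched on its own: each reference pixel is shifted by a disparity derived from its depth, with a z-buffer resolving collisions and holes left for the in-painting stage. The focal length and baseline below are placeholders, and histogram matching, blending, and in-painting are not shown.

```python
import numpy as np

def warp_to_virtual_view(texture, depth, focal=525.0, baseline=0.05):
    """Forward-warp a reference view into a horizontally shifted virtual view.

    texture : (H, W, 3) array; depth : (H, W) array of metric depths d.
    Pinhole model with a horizontal camera shift: disparity = focal * baseline / d.
    Plain Python loops are used for clarity, not speed.
    """
    h, w = depth.shape
    virtual = np.zeros_like(texture)
    zbuf = np.full((h, w), np.inf)
    disparity = focal * baseline / np.maximum(depth, 1e-6)
    xs = np.arange(w)
    for y in range(h):
        xv = np.round(xs - disparity[y]).astype(int)
        ok = (xv >= 0) & (xv < w)
        for x_src, x_dst in zip(xs[ok], xv[ok]):
            if depth[y, x_src] < zbuf[y, x_dst]:   # nearer pixel wins
                zbuf[y, x_dst] = depth[y, x_src]
                virtual[y, x_dst] = texture[y, x_src]
    return virtual   # remaining zeros are disocclusion holes for in-painting
```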

Journal ArticleDOI
TL;DR: The number of regions is equivalent to the number of code words, the mean of a region provides canonical representation of respective group members, and the distortion function is the mean-square error assuring a good evaluation method for image segmentation.
Abstract: The ill-defined nature of the segmentation problem makes the selection of the optimal image partition difficult. One can characterize image segmentation as an attempt to find the best possible representation of a data set using a certain number of “objects.” This can be regarded as data information compression, resulting in the distortion of the original values. Data sets are well represented when the correct number of regions is chosen. The concept behind this approach is similar to the main problem of rate distortion theory: A finite set of code words is chosen to approximate the numbers or source symbols as well as possible. In our approach, the number of regions is equivalent to the number of code words. The mean of a region provides canonical representation of respective group members, and the distortion function is the mean-square error, assuring a good evaluation method for image segmentation.
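The distortion side of the analogy is easy to make concrete: replace every pixel by the mean of its region and measure the mean-square error, then sweep the number of regions to trace a rate-distortion-like curve. In the sketch below, intensity k-means stands in for the actual partition (region connectivity is ignored), and the file name is a placeholder.

```python
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

gray = io.imread('image.png', as_gray=True)   # assumed input, float in [0, 1]
pixels = gray.reshape(-1, 1)

for k in (2, 4, 8, 16):                 # number of "code words" / regions
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
    recon = km.cluster_centers_[km.labels_].reshape(gray.shape)
    mse = np.mean((gray - recon) ** 2)  # distortion at this "rate"
    print(f'{k:3d} regions -> MSE {mse:.5f}')
```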

Proceedings ArticleDOI
07 Nov 2009
TL;DR: An algorithm for tree-based representation of single images and its applications to segmentation and filtering with depth is presented, and a depth-oriented filter is proposed, which allows foreground regions to be removed and replaced with a plausible background.
Abstract: This paper presents an algorithm for tree-based representation of single images and its applications to segmentation and filtering with depth. In our recent work, we have addressed the problem of segmentation with depth by incorporating depth ordering information into a region merging algorithm and by reasoning about depth relations through a graph model. In this paper, we extend this previous work with a two-fold contribution. First, we propose to model each pixel statistically by its probability distribution instead of deterministically by its color value. Second, we propose a depth-oriented filter, which allows foreground regions to be removed and replaced with a plausible background. Experimental results are satisfactory.

Patent
17 Mar 2009
TL;DR: In this article, a moving window is applied to the pixels in the depth image, wherein a size of the window covers a set of pixels centered at each pixel to produce a processed depth image.
Abstract: A method filters a depth image, wherein each depth image includes an array of pixels at locations (x, y), and wherein each pixel has a depth. A moving window is applied to the pixels in the depth image, wherein the size of the window covers a set of pixels centered at each pixel. A single representative depth from the set of pixels in the window is assigned to the pixel to produce a processed depth image. Then, each pixel in the processed depth image is filtered to correct outlier depths without blurring depth discontinuities, producing a filtered depth image.
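One plausible instantiation of the two passes, with the window median as the "single representative depth" and a small median filter as the edge-preserving outlier correction; both choices and the window sizes are assumptions, since the claim does not fix them.

```python
import numpy as np
from scipy.ndimage import median_filter

def filter_depth(depth, window=5):
    """Two-pass depth filtering sketch.

    1) assign each pixel a representative (median) depth of its window;
    2) correct remaining outliers with a small median filter, which,
       unlike a box or Gaussian blur, does not smear depth discontinuities.
    """
    processed = median_filter(depth, size=window)
    return median_filter(processed, size=3)

# depth = np.load('depth.npy')     # assumed (H, W) depth map
# clean = filter_depth(depth)
```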

Patent
17 Mar 2009
TL;DR: In this paper, a method for up-sampling images in a reduced resolution video is presented, where each image stores depths at pixel locations (x, y) and each depth image is scaled up to produce a corresponding up-scaled image.
Abstract: A method up-samples images in a reduced-resolution video, wherein each image I(x, y) stores depths d at pixel locations (x, y). Each depth image is scaled up to produce a corresponding up-scaled image. Then, image dilation, a median filter, image erosion, and a min-max filter are applied in order to produce a corresponding up-sampled image.
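The stated chain maps directly onto SciPy's grey-scale morphology and rank filters; the kernel sizes, the nearest-neighbour scale-up, and the exact form of the min-max step are assumptions.

```python
import numpy as np
from scipy.ndimage import (zoom, grey_dilation, grey_erosion,
                           median_filter, minimum_filter, maximum_filter)

def upsample_depth(depth_lowres, factor=2, k=3):
    """Up-sample a reduced-resolution depth image and repair its edges."""
    d = zoom(depth_lowres.astype(float), factor, order=0)  # nearest-neighbour scale-up
    d = grey_dilation(d, size=(k, k))   # close pinholes along depth edges
    d = median_filter(d, size=k)        # remove speckle outliers
    d = grey_erosion(d, size=(k, k))    # undo the growth added by dilation
    # Min-max filter (assumed form): near strong discontinuities, snap each
    # pixel to whichever local extreme it is closer to, keeping edges sharp.
    lo, hi = minimum_filter(d, k), maximum_filter(d, k)
    edge = (hi - lo) > 0.05 * (d.max() - d.min() + 1e-6)
    d = np.where(edge, np.where(hi - d < d - lo, hi, lo), d)
    return d

# up = upsample_depth(np.load('depth_lowres.npy'), factor=2)  # assumed file
```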

Journal ArticleDOI
TL;DR: This work introduces a segmentation-based detection and top-down figure-ground delineation algorithm that can accurately detect and segment objects with complex shapes and allows it to simultaneously detect multiple instances of class objects in images and to cope with challenging types of occlusions.
Abstract: We introduce a segmentation-based detection and top-down figure-ground delineation algorithm. Unlike common methods which use appearance for detection, our method relies primarily on the shape of objects as is reflected by their bottom-up segmentation. Our algorithm receives as input an image, along with its bottom-up hierarchical segmentation. The shape of each segment is then described both by its significant boundary sections and by regional, dense orientation information derived from the segment's shape using the Poisson equation. Our method then examines multiple, overlapping segmentation hypotheses, using their shape and color, in an attempt to find a "coherent whole," i.e., a collection of segments that consistently vote for an object at a single location in the image. Once an object is detected, we propose a novel pixel-level top-down figure-ground segmentation by "competitive coverage" process to accurately delineate the boundaries of the object. In this process, given a particular detection hypothesis, we let the voting segments compete for interpreting (covering) each of the semantic parts of an object. Incorporating competition in the process allows us to resolve ambiguities that arise when two different regions are matched to the same object part and to discard nearby false regions that participated in the voting process. We provide quantitative and qualitative experimental results on challenging datasets. These experiments demonstrate that our method can accurately detect and segment objects with complex shapes, obtaining results comparable to those of existing state of the art methods. Moreover, our method allows us to simultaneously detect multiple instances of class objects in images and to cope with challenging types of occlusions such as occlusions by a bar of varying size or by another object of the same class, that are difficult to handle with other existing class-specific top-down segmentation methods.

Book ChapterDOI
29 Aug 2009
TL;DR: This work presents a pixel coverage segmentation method which assigns pixel values corresponding to the area of a pixel that is covered by the imaged object(s) and concludes that for reasonable noise levels the presented method outperforms the achievable results of a perfect crisp segmentation.
Abstract: By utilizing intensity information available in images, partial coverage of pixels at object borders can be estimated. Such information can, in turn, provide more precise feature estimates. We present a pixel coverage segmentation method which assigns pixel values corresponding to the area of a pixel that is covered by the imaged object(s). Starting from any suitable crisp segmentation, we extract a one-pixel thin 4-connected boundary between the observed image components where a local linear mixture model is used for estimating fractional pixel coverage values. We evaluate the presented segmentation method, as well as its usefulness for subsequent precise feature estimation, on synthetic test objects with increasing levels of noise added. We conclude that for reasonable noise levels the presented method outperforms the achievable results of a perfect crisp segmentation. Finally, we illustrate the application of the suggested method on a real histological colour image.
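The local linear mixture model in its simplest form: for a boundary pixel with intensity I between a locally estimated pure foreground value I_f and pure background value I_b, the coverage fraction is alpha = (I - I_b) / (I_f - I_b), clipped to [0, 1]. The sketch below assumes a bright object on a dark background and estimates the pure values with local percentiles; neighbourhood sizes are placeholders.

```python
import numpy as np
from scipy.ndimage import binary_dilation, percentile_filter

def pixel_coverage(gray, crisp, k=5):
    """Turn a crisp segmentation into a pixel-coverage (fuzzy) one.

    gray  : (H, W) float image, bright object on dark background (assumed)
    crisp : (H, W) bool crisp segmentation of the object
    """
    # Local estimates of the pure foreground / background intensities.
    fg = percentile_filter(gray, 90, size=k)
    bg = percentile_filter(gray, 10, size=k)
    alpha = np.clip((gray - bg) / (fg - bg + 1e-6), 0.0, 1.0)

    # Keep the crisp labels everywhere except in a thin band around the
    # boundary, where the linear-mixture estimate gives fractional coverage.
    band = binary_dilation(crisp) & binary_dilation(~crisp)
    coverage = crisp.astype(float)
    coverage[band] = alpha[band]
    return coverage
```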

Patent
30 Sep 2009
TL;DR: In this paper, the authors present methods and systems for converting a 2D image to a stereoscopic 3D image by segmenting one or more portions of the image based on pixel color ranges.
Abstract: Implementations of the present invention involve methods and systems for converting a 2-D image to a stereoscopic 3-D image by segmenting one or more portions of the 2-D image based on one or more pixel color ranges. Further, a matte may be created that takes the shape of the segmented region such that several stereoscopic effects may be applied to the segmented region. In addition, ink lines that are contained within the segmented region may be removed to further define the corresponding matte. Implementations of the present disclosure also include an interface that provides the above functionality to a user for ease of segmentation and region selection. By utilizing the segmentation process, a 2-D image may be converted to a corresponding stereoscopic 3-D image with a perceived depth. Further, this process may be applied to each image of an animated feature film to convert the film from 2-D to 3-D.
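The color-range segmentation itself amounts to a per-channel range test that yields a boolean matte; the ranges below are purely illustrative placeholders, as is the file name.

```python
import numpy as np
from skimage import io

frame = io.imread('cel_frame.png')[..., :3]     # assumed 8-bit RGB frame

# Per-channel (low, high) ranges defining the region to segment
# (purely illustrative values, e.g. a saturated red costume).
low = np.array([150, 20, 20])
high = np.array([255, 110, 110])

matte = np.all((frame >= low) & (frame <= high), axis=-1)
# The boolean matte can then be cleaned (ink-line removal, hole filling)
# and used as the region to which per-pixel stereoscopic offsets are applied.
```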

Journal ArticleDOI
TL;DR: An object density-based image segmentation methodology is developed, which incorporates intensity-based, edge-based and texture-based segmentation techniques and is 98% accurate in segmenting synthetic images.

Patent
13 Jul 2009
TL;DR: In this article, a watershed transform sub-process is performed upon an edge strength map of the image and is then used to segment the image into tuned multi-scale regions, each comprising pixels that are similar to one another.
Abstract: Systems and methods for segmentation of an image into tuned multi-scale regions that comprise similarity in the pixels contained in each respective region. A watershed transform sub-process is performed upon an edge strength map of the image. A process for deriving an edge strength map may comprise preprocessing the image, extracting channels from the image, applying an edge operator to each channel, enhancing edge signal, normalizing the edge channels, combining the edge channels, and enhancing the signal to noise ratio for the channel. Once the watershed transform is complete, decisions on which neighboring regions to agglomerate may occur based on the cost effectiveness of the mergers. As desired, the boundaries for the regions created are resolved.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed approach is promising for applications such as object segmentation and video tracking with cluttered backgrounds.
Abstract: A novel preferential image segmentation method is proposed that performs image segmentation and object recognition using mathematical morphologies. The method preferentially segments objects that have intensities and boundaries similar to those of objects in a database of prior images. A tree of shapes is utilized to represent the content distributions in images, and curve matching is applied to compare the boundaries. The algorithm is invariant to contrast change and similarity transformations of translation, rotation and scale. A performance evaluation of the proposed method using a large image dataset is provided. Experimental results show that the proposed approach is promising for applications such as object segmentation and video tracking with cluttered backgrounds.

Patent
02 Jul 2009
TL;DR: In this paper, an advection vector field is computed based on image influences and user input, and the image is segmented into one or more regions based on the determined dye concentration for the corresponding dye.
Abstract: A method for segmenting image data within a data processing system includes acquiring an image. One or more seed points are established within the image. An advection vector field is computed based on image influences and user input. A dye concentration is determined at each of a plurality of portions of the image that results from a diffusion of dye within the computed advection field. The image is segmented into one or more regions based on the determined dye concentration for the corresponding dye.

Patent
06 Feb 2009
TL;DR: In this paper, an apparatus for segmenting an object comprising sub-objects shown in an object image is presented. The apparatus comprises a feature image generation unit (2) for generating feature image showing features related to intermediate regions between the subobjects and a segmentation unit (3) for segmentation by using the object image and the feature image.
Abstract: The present invention relates to an apparatus (1) for segmenting an object comprising sub-objects shown in an object image. The apparatus comprises a feature image generation unit (2) for generating a feature image showing features related to intermediate regions between the sub-objects and a segmentation unit (3) for segmenting the sub-objects by using the object image and the feature image. Preferentially, the feature image generation unit (2) is adapted for generating a feature image from the object image. In a further embodiment, the feature image generation unit (2) comprises a feature enhancing unit for enhancing features related to intermediate regions between the sub-objects in the object image.

Proceedings ArticleDOI
TL;DR: In this paper, a prior shape segmentation method is proposed to create a constant-width ribbon-like zone that runs along the boundary to be extracted, and the image data corresponding to that zone is transformed into a rectangular image subspace where the boundary is roughly straightened.
Abstract: This paper proposes a prior shape segmentation method to create a constant-width ribbon-like zone that runs along the boundary to be extracted. The image data corresponding to that zone is transformed into a rectangular image subspace where the boundary is roughly straightened. Every step of the segmentation process is then applied to that straightened subspace image where the final extracted boundary is transformed back into the original image space. This approach has the advantage of producing very efficient filtering and edge detection using conventional techniques. The final boundary is continuous even over image regions where partial information is missing. The technique was applied to the femoral head segmentation where we show that the final segmented boundary is very similar to the one obtained manually by a trained orthopedist and has low sensitivity to the initial positioning of the prior shape.

Proceedings ArticleDOI
24 Nov 2009
TL;DR: This paper presents a method to automatically determine how many thresholds should be set and what the best range of each threshold is for different images, by observing the change of the variance values and the mean values of each threshold range in the image histogram.
Abstract: Thresholding is an important technology for image segmentation. Before obtaining the segmentation thresholds, most segmentation technologies need to set many parameters. This paper presents a method to automatically determine how many thresholds should be set and what the best range of each threshold is for different images. It finds the segmentation thresholds by observing the change of the variance values and the mean values of each threshold range in the image histogram. The proposed method needs only simple calculation, so it has low time complexity.
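One simple way to let the histogram decide how many thresholds to use is to smooth it and take its significant valleys as thresholds, so the count is not fixed in advance. This valley-seeking variant is only an illustration of the idea, not the paper's variance/mean criterion; the smoothing width, the prominence cutoff, and the file name are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks
from skimage import io

gray = io.imread('input.png', as_gray=True)          # assumed input, float in [0, 1]
hist, _ = np.histogram((gray * 255).astype(np.uint8), bins=256, range=(0, 256))

# Smooth the histogram, then detect valleys as peaks of the negated curve.
kernel = np.ones(9) / 9.0
smooth = np.convolve(hist, kernel, mode='same')
valleys, _ = find_peaks(-smooth, prominence=0.01 * smooth.max())

print('number of thresholds found:', len(valleys))
print('threshold gray levels     :', valleys.tolist())
# Pixels are then assigned to the range between consecutive thresholds,
# e.g. with np.digitize(gray * 255, valleys).
```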

Proceedings ArticleDOI
14 Jun 2009
TL;DR: The proposed fuzzy edge detection method, which detects only connected edges, is used together with fuzzy image pixel similarity to automatically select the initial seeds, and the resulting method outperforms other existing segmentation methods.
Abstract: This study proposes a novel seeded region growing based image segmentation method for both color and gray-level images. The proposed fuzzy edge detection method, which detects only connected edges, is used together with fuzzy image pixel similarity to automatically select the initial seeds. The fuzzy distance is used to determine the difference between a pixel and a region in the subsequent region growing, in which conventional region growing is modified to ensure that pixels on edges are processed later than other pixels, and to determine the difference between two regions in the region merging. In the simulations, the proposed method outperforms other existing segmentation methods.
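The backbone the method builds on is plain seeded region growing: a breadth-first flood from each seed that absorbs 4-connected neighbours whose intensity stays close to the running region mean. The fuzzy edge detection, fuzzy similarity, and fuzzy distances are not reproduced here, and the threshold is an assumption.

```python
import numpy as np
from collections import deque

def seeded_region_growing(gray, seeds, thresh=0.08):
    """Grow one region per seed; unassigned pixels keep label 0.

    gray  : (H, W) float image in [0, 1]
    seeds : list of (row, col) seed coordinates
    """
    h, w = gray.shape
    labels = np.zeros((h, w), dtype=int)
    for lab, (r, c) in enumerate(seeds, start=1):
        labels[r, c] = lab
        mean, count = float(gray[r, c]), 1
        queue = deque([(r, c)])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0 \
                        and abs(gray[ny, nx] - mean) < thresh:
                    labels[ny, nx] = lab
                    # Update the running region mean as the region grows.
                    mean = (mean * count + gray[ny, nx]) / (count + 1)
                    count += 1
                    queue.append((ny, nx))
    return labels
```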

Proceedings ArticleDOI
30 Jun 2009
TL;DR: This paper selects an appropriate distance measure in the composite feature space of color and texture; the distance measure is then incorporated in a clustering method that utilizes the spatial information of each feature vector.
Abstract: Image segmentation is a classical problem in areas such as image processing and motion estimation. Although there exist a lot of clustering-based approaches to image segmentation, few of them study how to obtain more accurate segmentation results by designing a suitable clustering method. In this paper, we select an appropriate distance measure in the composite feature space of color and texture. Then the distance measure is incorporated in a clustering method that utilizes the spatial information of each feature vector. Finally, the proposed scheme performs morphological filtering to obtain the final segmented regions. Experimental results show that the proposed scheme can consistently achieve higher segmentation accuracy compared to some state-of-the-art image segmentation algorithms.
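A sketch of the named ingredients (a composite per-pixel feature built from Lab color, a local texture measure, and spatial coordinates, clustered and then cleaned morphologically), with k-means and a median filter standing in for the paper's own distance measure, clustering method, and morphology filtering; the weights, cluster count, and file name are assumptions.

```python
import numpy as np
from scipy.ndimage import generic_filter, median_filter
from skimage import io, color, img_as_float
from sklearn.cluster import KMeans

img = img_as_float(io.imread('scene.jpg')[..., :3])     # assumed RGB input
lab = color.rgb2lab(img)
gray = color.rgb2gray(img)

# Texture: local standard deviation in a 7x7 window (simple but slow).
texture = generic_filter(gray, np.std, size=7)

# Spatial coordinates, scaled so that color dominates the distance.
h, w = gray.shape
yy, xx = np.mgrid[0:h, 0:w]
spatial = np.stack([yy / h, xx / w], axis=-1) * 20.0

feats = np.concatenate([lab, texture[..., None] * 50.0, spatial], axis=-1)
labels = (KMeans(n_clusters=6, n_init=4, random_state=0)
          .fit_predict(feats.reshape(-1, feats.shape[-1]))
          .reshape(h, w))

# Morphological (median) filtering removes small isolated fragments.
segmentation_map = median_filter(labels, size=5)
```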