
Showing papers on "Range segmentation published in 2011"


Journal ArticleDOI
TL;DR: A novel region-based method for image segmentation, which is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction).
Abstract: Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.
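
Written out from the description above, a plausible form of the criterion (using the usual notation for this family of models; the exact expressions in the paper may differ) is:

\varepsilon_x = \sum_{i=1}^{N} \int_{\Omega_i} K(y - x)\, \bigl| I(y) - b(x)\, c_i \bigr|^2 \, dy, \qquad E = \int_{\Omega} \varepsilon_x \, dx

where K is a window kernel centered at the neighborhood center x, b is the bias field, the c_i are cluster constants, and {\Omega_i} is the partition represented by the level set functions. Minimizing E jointly over the level set functions, the constants c_i, and the bias field b yields the segmentation and the estimated bias field used for correction.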

1,201 citations


Patent
14 Nov 2011
TL;DR: In this paper, a method and system operative to process monochrome image data are disclosed, which can comprise the steps of receiving the image data, segmenting the input pixel values into pixel value ranges, assigning pixel positions in the lowest pixel value range an output pixel value of a first binary value, and assigning pixel positions in the highest pixel value range an output pixel value of a second binary value, wherein the first and second binary values are different.
Abstract: A method and system operative to process monochrome image data are disclosed. In one embodiment, the method can comprise the steps of receiving monochrome image data, segmenting the input pixel values into pixel value ranges, assigning pixel positions in the lowest pixel value range an output pixel value of a first binary value, assigning pixel positions in the highest pixel value range an output pixel value of a second binary value, wherein the first and second binary values are different, and assigning pixel positions in intermediate pixel value ranges output pixel values that correspond to a spatial binary pattern. The resulting binary image data can be written to a file for subsequent storage, transmission, processing, or retrieval and rendering. In further embodiments, a system can be made operative to accomplish the same.
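
A minimal sketch of the kind of mapping the claims describe, with three value ranges and a checkerboard standing in for the spatial binary pattern (the thresholds and the pattern are illustrative choices, not taken from the patent):

import numpy as np

def range_binarize(gray, low_thresh=85, high_thresh=170):
    """Map a monochrome image to binary output by pixel-value range.

    Lowest range -> first binary value (0), highest range -> second binary
    value (1), intermediate range -> a checkerboard used here as a stand-in
    for the spatial binary pattern."""
    gray = np.asarray(gray)
    out = np.zeros(gray.shape, dtype=np.uint8)
    rows, cols = np.indices(gray.shape)
    checker = ((rows + cols) % 2).astype(np.uint8)   # illustrative spatial pattern
    out[gray <= low_thresh] = 0
    out[gray >= high_thresh] = 1
    mid = (gray > low_thresh) & (gray < high_thresh)
    out[mid] = checker[mid]
    return out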

333 citations


Journal ArticleDOI
TL;DR: Comparison of single- and multi-scale segmentations shows that identifying and refining under- and over-segmented regions using local statistics can improve global segmentation results.
Abstract: In this study, a multi-scale approach is used to improve the segmentation of a high spatial resolution (30 cm) color infrared image of a residential area. First, a series of 25 image segmentations are performed in Definiens Professional 5 using different scale parameters. The optimal image segmentation is identified using an unsupervised evaluation method of segmentation quality that takes into account global intra-segment and inter-segment heterogeneity measures (weighted variance and Moran’s I, respectively). Once the optimal segmentation is determined, under-segmented and over-segmented regions in this segmentation are identified using local heterogeneity measures (variance and Local Moran’s I). The under- and over-segmented regions are refined by (1) further segmenting under-segmented regions at finer scales, and (2) merging over-segmented regions with spectrally similar neighbors. This process leads to the creation of several segmentations consisting of segments generated at three different segmentation scales. Comparison of single- and multi-scale segmentations shows that identifying and refining under- and over-segmented regions using local statistics can improve global segmentation results.
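
The evaluation side of this pipeline can be sketched as follows, assuming the segmentation is given as an integer label image; intra-segment heterogeneity is the area-weighted variance and inter-segment heterogeneity is global Moran's I computed on per-segment mean intensities over a rook-adjacency graph (adjacency extraction and normalization are simplified relative to the paper):

import numpy as np

def segmentation_quality(image, labels):
    """Area-weighted variance (intra-segment) and Moran's I (inter-segment)."""
    ids = np.unique(labels)
    areas = np.array([(labels == i).sum() for i in ids], dtype=float)
    means = np.array([image[labels == i].mean() for i in ids])
    varis = np.array([image[labels == i].var() for i in ids])

    # Intra-segment heterogeneity: lower weighted variance is better.
    weighted_var = (areas * varis).sum() / areas.sum()

    # Binary rook adjacency between segments.
    n = len(ids)
    index = {v: k for k, v in enumerate(ids)}
    W = np.zeros((n, n))
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        pairs = np.stack([a.ravel(), b.ravel()], axis=1)
        for p, q in pairs[pairs[:, 0] != pairs[:, 1]]:
            W[index[p], index[q]] = W[index[q], index[p]] = 1

    # Inter-segment separability: lower spatial autocorrelation of segment
    # means (Moran's I) indicates more distinct neighbouring segments.
    z = means - means.mean()
    moran = (n / W.sum()) * (W * np.outer(z, z)).sum() / (z ** 2).sum()
    return weighted_var, moran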

302 citations


Journal ArticleDOI
TL;DR: An integrated framework consisting of a novel supervised cell-image segmentation algorithm and a new touching-cell splitting method for quantitative analysis of histopathological images, which achieves better results than the other compared methods.
Abstract: For quantitative analysis of histopathological images, such as the lymphoma grading systems, quantification of features is usually carried out on single cells before categorizing them by classification algorithms. To this end, we propose an integrated framework consisting of a novel supervised cell-image segmentation algorithm and a new touching-cell splitting method. For the segmentation part, we segment the cell regions from the other areas by classifying the image pixels into either cell or extra-cellular category. Instead of using pixel color intensities, the color-texture extracted at the local neighborhood of each pixel is utilized as the input to our classification algorithm. The color-texture at each pixel is extracted by local Fourier transform (LFT) from a new color space, the most discriminant color space (MDC). The MDC color space is optimized to be a linear combination of the original RGB color space so that the extracted LFT texture features in the MDC color space can achieve most discrimination in terms of classification (segmentation) performance. To speed up the texture feature extraction process, we develop an efficient LFT extraction algorithm based on image shifting and image integral. For the splitting part, given a connected component of the segmentation map, we initially differentiate whether it is a touching-cell clump or a single nontouching cell. The differentiation is mainly based on the distance between the most likely radial-symmetry center and the geometrical center of the connected component. The boundaries of touching-cell clumps are smoothed out by Fourier shape descriptor before carrying out an iterative, concave-point and radial-symmetry based splitting algorithm. To test the validity, effectiveness and efficiency of the framework, it is applied to follicular lymphoma pathological images, which exhibit complex background and extracellular texture with nonuniform illumination condition. For comparison purposes, the results of the proposed segmentation algorithm are evaluated against the outputs of superpixel, graph-cut, mean-shift, and two state-of-the-art pathological image segmentation methods using ground-truth that was established by manual segmentation of cells in the original images. Our segmentation algorithm achieves better results than the other compared methods. The results of splitting are evaluated in terms of under-splitting, over-splitting, and encroachment errors. By summing up the three types of errors, we achieve a total error rate of 5.25% per image.

212 citations


Journal ArticleDOI
TL;DR: The proposed dynamic region-merging algorithm formulates the image segmentation as an inference problem, where the final segmentation is established based on the observed image and it is proved that the produced segmentation satisfies certain global properties.
Abstract: This paper addresses the automatic image segmentation problem in a region merging style. With an initially oversegmented image, in which many regions (or superpixels) with homogeneous color are detected, an image segmentation is performed by iteratively merging the regions according to a statistical test. There are two essential issues in a region-merging algorithm: order of merging and the stopping criterion. In the proposed algorithm, these two issues are solved by a novel predicate, which is defined by the sequential probability ratio test and the minimal cost criterion. Starting from an oversegmented image, neighboring regions are progressively merged if there is evidence for merging according to this predicate. We show that the merging order follows the principle of dynamic programming. This formulates the image segmentation as an inference problem, where the final segmentation is established based on the observed image. We also prove that the produced segmentation satisfies certain global properties. In addition, a faster algorithm is developed to accelerate the region-merging process, which maintains a nearest neighbor graph in each iteration. Experiments on real natural images are conducted to demonstrate the performance of the proposed dynamic region-merging algorithm.
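
A much-simplified sketch of the merging loop: in place of the paper's SPRT predicate and dynamic-programming merge order, this toy version greedily merges the most similar adjacent pair of regions whenever their mean intensities differ by less than a threshold, just to show the iterate-and-merge structure:

import numpy as np

def merge_regions(image, labels, tau=10.0):
    """Greedy region merging on a label image (toy stand-in for the paper's
    SPRT-based predicate and minimal-cost merge order)."""
    labels = labels.copy()
    changed = True
    while changed:
        changed = False
        means = {r: image[labels == r].mean() for r in np.unique(labels)}
        # Collect 4-connected adjacent label pairs.
        pairs = set()
        for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
            diff = a != b
            pairs.update(map(tuple, np.stack([a[diff], b[diff]], axis=1)))
        # Examine the most similar adjacent pair; merge it if it passes the test.
        for p, q in sorted(pairs, key=lambda pq: abs(means[pq[0]] - means[pq[1]])):
            if abs(means[p] - means[q]) < tau:
                labels[labels == q] = p
                changed = True
            break   # after one decision, recompute statistics and adjacency
    return labels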

199 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a fully automatic new approach for color texture image segmentation based on neutrosophic set (NS) and multiresolution wavelet transformation, which aims to segment the natural scene images, in which the color and texture of each region does not have uniform statistical characteristics.

148 citations


Journal ArticleDOI
TL;DR: In this paper, a Gaussian distribution is used to model a homogeneously textured region of a natural image and the region boundary can be effectively coded by an adaptive chain code.
Abstract: We present a novel algorithm for segmentation of natural images that harnesses the principle of minimum description length (MDL). Our method is based on observations that a homogeneously textured region of a natural image can be well modeled by a Gaussian distribution and the region boundary can be effectively coded by an adaptive chain code. The optimal segmentation of an image is the one that gives the shortest coding length for encoding all textures and boundaries in the image, and is obtained via an agglomerative clustering process applied to a hierarchy of decreasing window sizes as multi-scale texture features. The optimal segmentation also provides an accurate estimate of the overall coding length and hence the true entropy of the image. We test our algorithm on the publicly available Berkeley Segmentation Dataset. It achieves state-of-the-art segmentation results compared to other existing methods.

142 citations


Journal ArticleDOI
TL;DR: McNemar's test shows that, with the optimal segmentation, object-based classification achieved accuracy significantly higher than that of the pixel-based classification at the 99% significance level.
Abstract: Image segmentation is a preliminary and critical step in object-based image classification. Its proper evaluation ensures that the best segmentation is used in image classification. In this article, image segmentations with nine different parameter settings were carried out with multi-spectral Landsat imagery and the segmentation results were evaluated with an objective function that aims at maximizing homogeneity within segments and separability between neighbouring segments. The segmented images were classified into eight land-cover classes and the classifications were evaluated with independent ground data comprising 600 randomly distributed points. The accuracy assessment results presented a similar distribution to that of the objective function values; that is, segmentations with the highest objective function values also resulted in the highest classification accuracies. This result shows that image segmentation has a direct effect on the classification accuracy; the objective function not only worked on a single-band image, as shown by Espindola, G.M., Camara, G., Reis, I.A., Bins, L.S. and Monteiro, A.M. (2006, Parameter selection for region-growing image segmentation algorithms using spatial autocorrelation. International Journal of Remote Sensing, 27, pp. 3035-3040), but also on multi-spectral imagery as tested in this study, and is indeed an effective way to determine the optimal segmentation parameters. McNemar's test (z2 = 10.27) shows that, with the optimal segmentation, object-based classification achieved accuracy significantly higher than that of the pixel-based classification at the 99% significance level.

115 citations


Patent
31 Jan 2011
TL;DR: In this article, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera using an iterative closest point algorithm, which includes a determination of a set of points that correspond between the current depth image and the previous depth image.
Abstract: Moving object segmentation using depth images is described. In an example, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera. A previous depth image of the scene is retrieved, and compared to the current depth image using an iterative closest point algorithm. The iterative closest point algorithm includes a determination of a set of points that correspond between the current depth image and the previous depth image. During the determination of the set of points, one or more outlying points are detected that do not correspond between the two depth images, and the image elements at these outlying points are labeled as belonging to the moving object. In examples, the iterative closest point algorithm is executed as part of an algorithm for tracking the mobile depth camera, and hence the segmentation does not add substantial additional computational complexity.
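
The outlier-labelling idea can be sketched with a single nearest-neighbour correspondence search (SciPy's cKDTree) in place of a full ICP and camera-tracking pipeline; points in the current frame with no close counterpart in the previous frame are labelled as moving (the distance threshold and the assumption that the frames are already roughly aligned are simplifications):

import numpy as np
from scipy.spatial import cKDTree

def label_moving_points(prev_points, curr_points, max_dist=0.05):
    """Label current-frame 3D points (N, 3) whose nearest neighbour in the
    previous frame is farther than max_dist as belonging to a moving object.
    Both point sets are assumed to be in a common, roughly aligned frame."""
    tree = cKDTree(prev_points)
    dists, _ = tree.query(curr_points, k=1)
    return dists > max_dist      # True where the correspondence is an outlier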

112 citations


Journal ArticleDOI
TL;DR: This work first summarizes the protocol followed for the resolution of two examples of kidney calculi, taken as representations of images with major and minor compounds, respectively, and proposes the use of MCR scores (concentration profiles) for segmentation purposes.

105 citations


Patent
22 Jun 2011
TL;DR: In this article, a method of image segmentation using intensity and depth value information is disclosed, which comprises steps of segmenting an image data into intensity-based segmented regions based on intensity value of each pixel in the image data.
Abstract: A method of image segmentation using intensity and depth value information is disclosed. The method of the present invention comprises the steps of: a) segmenting image data into intensity-based segmented regions based on the intensity value of each pixel in the image data; b) obtaining depth value and confidence level information for each pixel in the image data to form a depth map; c) comparing the intensity-based segmented regions against the depth map at the corresponding regions; d) determining if the respective intensity-based segmented regions consist of more than one depth value; e) further segmenting the respective intensity-based segmented regions when the respective intensity-based segmented regions contain depth values with no confidence; and f) splitting the intensity-based segmented region to extract an object therefrom, when the depth values with confidence levels within the intensity-based segmented regions are determined.
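
A compact sketch of the refinement step the claims describe, assuming an intensity-based label image, a registered depth map, and a boolean per-pixel confidence mask; a region whose confident depth values spread over more than one apparent level is split at its mean depth (the patent's actual confidence handling and splitting rule are not specified here):

import numpy as np

def refine_with_depth(labels, depth, confidence, spread_thresh=0.2):
    """Split intensity-based regions that span more than one depth level
    (illustrative rule, not the patent's)."""
    out = labels.copy()
    next_label = labels.max() + 1
    for region in np.unique(labels):
        mask = (labels == region) & confidence        # confident pixels only
        d = depth[mask]
        if d.size < 2:
            continue
        if d.max() - d.min() > spread_thresh:         # more than one depth value?
            far = mask & (depth > d.mean())           # crude two-way split
            out[far] = next_label
            next_label += 1
    return out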

Patent
Gabriel G. Marcu, Steve Swen
08 Dec 2011
TL;DR: In this article, the authors proposed a method to generate a low dynamic range image from a high-dynamic range image by determining one or more regions of the image containing pixels having values that are outside a first range and inside a second range.
Abstract: Methods and apparatuses for generating a low dynamic range image for a high dynamic range scene. In one aspect, a method to generate a low dynamic range image from a high dynamic range image, includes: determining one or more regions of the high dynamic range image containing pixels having values that are outside a first range and inside a second range; computing a weight distribution from the one or more regions; and generating the low dynamic range image from the high dynamic range image using the weight distribution. In another aspect, a method of image processing, includes: detecting one or more regions in a first image of a high dynamic range scene according to a threshold to generate a mask; and blending the first image and a second image of the scene to generate a third image using the mask.

Proceedings ArticleDOI
03 Jun 2011
TL;DR: A single seeded region growing technique for image segmentation is proposed, which starts from the center pixel of the image as the initial seed, grows the region according to the grow formula, and selects the next seed from the connected pixels of the region.
Abstract: In this paper, we present a region growing technique for color image segmentation. Conventional image segmentation techniques using region growing require initial seed selection, which increases computational cost and execution time. To overcome this problem, a single seeded region growing technique for image segmentation is proposed, which starts from the center pixel of the image as the initial seed. It grows the region according to the grow formula and selects the next seed from the connected pixels of the region. We use an intensity-based similarity index for the grow formula, and Otsu's adaptive thresholding is used to calculate the stopping criterion for the grow formula. We apply the proposed method to the Berkeley segmentation database images and discuss results based on Liu's F-factor, which indicates efficient segmentation.
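
A bare-bones sketch of the single-seed idea: grow from the image's center pixel to 4-connected neighbours whose intensity stays close to the running region mean, with Otsu's threshold supplying the tolerance (the paper's exact grow formula and next-seed selection are not reproduced; the 0.5 scaling is an illustrative choice):

import numpy as np
from collections import deque
from skimage.filters import threshold_otsu

def seeded_region_grow(gray):
    """Grow a single region from the center pixel of a grayscale image."""
    h, w = gray.shape
    seed = (h // 2, w // 2)
    tol = 0.5 * threshold_otsu(gray)      # illustrative use of Otsu's value
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    total, count = float(gray[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not region[nr, nc]:
                if abs(float(gray[nr, nc]) - total / count) <= tol:
                    region[nr, nc] = True
                    total += float(gray[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return region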

Journal ArticleDOI
TL;DR: It is observed that shadowed clustering can efficiently handle overlapping among segments while modeling uncertainty among the boundaries, and the superiority of the system is demonstrated in segmenting a synthetic image, along with land cover types from the Indian Remote Sensing (IRS) images of the cities of Mumbai and Kolkata and the SPOT image around Kolkata.

Patent
08 Apr 2011
TL;DR: In this article, image quality is assessed for a digital image that is a composite of tiles or other image segments, especially focus accuracy for a microscopic pathology sample, where pixel data at margins where adjacent image segments overlap and thus contain the same content in separately acquired images.
Abstract: Image quality is assessed for a digital image that is a composite of tiles or other image segments, especially focus accuracy for a microscopic pathology sample. An algorithm or combination of algorithms correlated to image quality is applied to pixel data at margins where adjacent image segments overlap and thus contain the same content in separately acquired images. The margins may be edges merged to join the image segments smoothly into a composite image, and typically occur on four sides of the image segments. The two versions of the same image content at each margin are processed by the quality algorithm, producing two assessment values. A sign and difference value are compared with other image segments, including by subsets selected for the orientation of the margins on sides of the image segments. The differences are mapped to displays. Selection criteria determine segments to be reacquired.
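
A sketch of the margin comparison described above, using variance of the Laplacian as a stand-in focus metric (the patent does not commit to a specific algorithm); the two independently acquired copies of the overlapping margin are scored separately and the signed difference indicates which tile imaged that area more sharply:

import numpy as np
from scipy import ndimage

def focus_score(patch):
    """Variance of the Laplacian: an illustrative sharpness measure."""
    return float(ndimage.laplace(patch.astype(float)).var())

def compare_margin(tile_a, tile_b, overlap):
    """Compare the same scene content as captured in two adjacent tiles.

    tile_a's right margin and tile_b's left margin are assumed to cover the
    same physical area, `overlap` pixels wide."""
    score_a = focus_score(tile_a[:, -overlap:])
    score_b = focus_score(tile_b[:, :overlap])
    return score_a - score_b      # sign tells which acquisition is sharper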

Patent
Jaume Civit, Oscar Divorra
11 Aug 2011
TL;DR: In this article, a set of cost functions for foreground, background and shadow segmentation classes or models is generated, where the background and shadow segmentation models are a function of chromatic distortion and brightness and colour distortion, and where said cost functions are related to probability measures of a given pixel or region belonging to each of said segmentation classes.
Abstract: The method comprises: - generating a set of cost functions for foreground, background and shadow segmentation classes or models, where the background and shadow segmentation models are a function of chromatic distortion and brightness and colour distortion, and where said cost functions are related to probability measures of a given pixel or region belonging to each of said segmentation classes; and - applying said set of generated cost functions to the pixel data of an image. The method further comprises defining said background and shadow segmentation cost functionals by introducing depth information of the scene from which said image has been acquired. The system comprises camera means intended for acquiring, from a scene, colour and depth information, and processing means intended for carrying out said foreground segmentation by hardware and/or software elements implementing the method.

Proceedings ArticleDOI
05 Jun 2011
TL;DR: In this article, a two-step approach combining a patch-based segmentation with additional boundary detection is proposed, which leads to improved appearance descriptors for road and non-road parts on patch level.
Abstract: In this paper, a novel approach for road detection with a monocular camera is introduced. We propose a two-step approach, combining a patch-based segmentation with additional boundary detection. We use Slow Feature Analysis (SFA), which leads to improved appearance descriptors for road and non-road parts on the patch level. From the slow features, a low-order feature set is formed, which is used together with color and Walsh-Hadamard texture features to train a patch-based GentleBoost classifier. This allows extracting areas from the image that correspond to the road with a certain confidence. Typically, the border regions between road and non-road have the highest classification error rates, because the appearance is hard to distinguish on the patch level. Therefore, we propose a post-processing step with a specialized classifier applied to the boundary region of the image to improve the segmentation results. In order to evaluate the quality of road segmentation, we propose an application-based quality measurement applying an inverse perspective mapping to the image to obtain a Bird's Eye View (BEV). The advantage of this approach is that the important distant parts and boundaries of the road in the real world, which are only a low fraction of the perspective image, can be assessed in this metric measure significantly better than on the pixel level. In addition, we estimate the driving corridor width and boundary error, because for Advanced Driver Assistant Systems (ADAS) metric information is needed. For all evaluations in different road and weather conditions, our system shows improved performance of the two-step approach compared to the basic segmentation.
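
The BEV evaluation step can be sketched with a standard inverse perspective mapping through a planar homography (OpenCV calls shown for illustration); the four point correspondences come from the camera calibration, and the values below are placeholders, not numbers from the paper:

import cv2
import numpy as np

def to_birds_eye_view(image, src_pts, dst_pts, out_size=(400, 600)):
    """Warp a perspective road image to a bird's eye view (BEV).

    src_pts: four image points on the road plane; dst_pts: their target
    positions in the metric BEV image. Both are derived from calibration."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(image, H, out_size)

# Placeholder correspondences (would come from the camera setup):
# src = [(300, 400), (340, 400), (600, 720), (40, 720)]
# dst = [(100, 0), (300, 0), (300, 600), (100, 600)]
# bev = to_birds_eye_view(frame, src, dst)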

Journal ArticleDOI
TL;DR: This work presents a new image segmentation method based on colour features with an unsupervised Fuzzy c-means clustering algorithm, which makes it possible to reduce the computational cost by avoiding feature calculation for every pixel in the image.
Abstract: Mostly due to the progress in the spatial resolution of satellite imagery, methods of segment-based image analysis for generating and updating geographical information are becoming more and more important. This work presents a new image segmentation method based on colour features with an unsupervised Fuzzy c-means clustering algorithm. The entire work is divided into two stages. First, enhancement of the color separation of the satellite image using decorrelation stretching is carried out, and then the regions are grouped into a set of five classes using the Fuzzy c-means clustering algorithm. Using this two-step process, it is possible to reduce the computational cost by avoiding feature calculation for every pixel in the image. Although colour is not frequently used for image segmentation, it gives a high discriminative power for the regions present in the image. In remote sensing, the process of image segmentation is defined as "the search for homogenous regions in an image and later the classification of these regions". It also means the partitioning of an image into meaningful regions based on homogeneity or heterogeneity criteria. Image segmentation techniques can be differentiated into the following basic concepts: pixel-oriented, contour-oriented, region-oriented, model-oriented, color-oriented and hybrid. Color segmentation of images is a crucial operation in image analysis and in many computer vision, image interpretation, and pattern recognition systems, with applications in scientific and industrial fields such as medicine, remote sensing, microscopy, content-based image and video retrieval, document analysis, industrial automation and quality control. The performance of color segmentation may significantly affect the quality of an image understanding system. The most common features used in image segmentation include texture, shape, grey level intensity, and color. The constitution of the right data space is a common problem in connection with segmentation/classification. In order to construct realistic classifiers, features that are sufficiently representative of the physical process must be searched for. In the literature, it is observed that different transforms are used to extract desired information from remote-sensing images or biomedical images. Segmentation evaluation techniques can generally be divided into two categories (supervised and unsupervised). The first category is not applicable to remote sensing because an optimum segmentation (ground truth segmentation) is difficult to obtain. Moreover, available segmentation evaluation techniques have not been thoroughly tested for remotely sensed data. Therefore, for comparison purposes, it is possible to proceed with the classification process and then indirectly assess the segmentation process through the produced classification accuracies. For image segment based classification, the images that need to be classified are first segmented into many homogeneous areas with similar spectral information, and the image segments' features are extracted based on the specific requirements of ground feature classification. The color
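
The two-stage pipeline can be sketched as follows; the decorrelation stretch is approximated here by whitening the channel covariance, and the fuzzy c-means step is written out directly with five clusters as in the abstract (function names, the stretching shortcut, and parameter values are illustrative):

import numpy as np

def decorrelation_stretch(img):
    """Rough decorrelation stretch: whiten the channel covariance and rescale
    (a simplified stand-in for the full decorrelation-stretch procedure)."""
    flat = img.reshape(-1, img.shape[-1]).astype(float)
    mean = flat.mean(axis=0)
    cov = np.cov(flat - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    transform = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-9)) @ evecs.T
    stretched = (flat - mean) @ transform * flat.std() + mean
    return stretched.reshape(img.shape)

def fuzzy_cmeans(data, c=5, m=2.0, iters=50):
    """Plain fuzzy c-means on (N, d) feature vectors; returns hard labels."""
    n = data.shape[0]
    u = np.random.dirichlet(np.ones(c), size=n)          # memberships (N, c)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        inv = dist ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)          # standard FCM update
    return np.argmax(u, axis=1)

# labels = fuzzy_cmeans(decorrelation_stretch(rgb).reshape(-1, 3)).reshape(rgb.shape[:2])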

Patent
01 Dec 2011
TL;DR: In this article, image and range data associated with an image can be processed to estimate planes within the 3D environment in the image by utilizing image segmentation techniques, image data can identify regions of visible pixels having common features, these regions can be used to candidate regions for fitting planes to the range data based on a RANSAC technique.
Abstract: Image and range data associated with an image can be processed to estimate planes within the 3D environment in the image. By utilizing image segmentation techniques, image data can identify regions of visible pixels having common features. These regions can be used as candidate regions for fitting planes to the range data based on a RANSAC technique.
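
A minimal RANSAC plane fit of the kind the abstract refers to, applied to the 3D range points that fall inside one candidate region (iteration count and inlier threshold are illustrative):

import numpy as np

def ransac_plane(points, iters=200, thresh=0.02, seed=None):
    """Fit a plane n.x + d = 0 to (N, 3) points with a basic RANSAC loop."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-12:
            continue                                  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < thresh     # point-to-plane distance test
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers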

Journal ArticleDOI
TL;DR: The results demonstrate that the constrained segmentation not only stitches solutions seamlessly along overlapping patch borders but also refines the segmentation in the patch interiors.

Patent
18 Nov 2011
TL;DR: This article used a lexicon of smallest semantic units to segment the received text into coarse-grained and fine-grained segmentation results based on the respective search elements.
Abstract: Text processing includes: segmenting received text based on a lexicon of smallest semantic units to obtain medium-grained segmentation results; merging the medium-grained segmentation results to obtain coarse-grained segmentation results, the coarse-grained segmentation results having coarser granularity than the medium-grained segmentation results; looking up in the lexicon of smallest semantic units respective search elements that correspond to segments in the medium-grained segmentation results; and forming fine-grained segmentation results based on the respective search elements, the fine-grained segmentation results having finer granularity than the medium-grained segmentation results.

Proceedings ArticleDOI
11 Jul 2011
TL;DR: A novel compression scheme that exploits a segmentation of the color data to predict the shape of the different surfaces in the depth map, allowing it to outperform the standard H.264/AVC Intra codec on depth data.
Abstract: 3D video representations usually associate with each view a depth map containing the corresponding geometric information. Many compression schemes have been proposed for multi-view video and for depth data, but the exploitation of the correlation between the two representations to enhance compression performance is still an open research issue. This paper presents a novel compression scheme that exploits a segmentation of the color data to predict the shape of the different surfaces in the depth map. Then each segment is approximated with a parameterized plane. If the approximation is sufficiently accurate for the target bit rate, the surface coefficients are compressed and transmitted. Otherwise, the region is coded using a standard H.264/AVC Intra coder. Experimental results show that the proposed scheme outperforms the standard H.264/AVC Intra codec on depth data and can be effectively included into multi-view plus depth compression schemes.

Proceedings ArticleDOI
20 Jun 2011
TL;DR: Novel algorithms for segmentation of objects and parts from range images, with extensions based on semantic cues to yield robust part detection, and a Grasping by Components (GBC) scheme are presented, providing a scalable framework for grasping of objects.
Abstract: Recognition by Components (RBC) has been one of the most conceptually significant frameworks for modeling human visual object recognition. Extension of the model to practical robotic applications has traditionally been limited by the lack of good response in textureless areas in the case of conventional inexpensive stereo cameras, as well as by the need for expensive laser-based sensor systems to compensate for this deficiency. The recent availability of RGB-D sensors such as the PrimeSense sensor has opened new avenues for practical usage of these sensors for robotic applications such as grasping. In this paper, we present novel algorithms for segmentation of objects and parts from range images with extensions based on semantic cues to yield robust part detection. The detected parts are then parameterized using a superquadric-based fitting framework and classified into one of several generic shapes. The categorization of the parts enables rules for grasping the object. This Grasping by Components (GBC) scheme is a natural extension of the RBC framework and provides a scalable framework for grasping of objects. This scheme also permits the grasping of novel objects in the scene, with at least one known grasp affordance. Results of the range segmentation are compared with another scene-agnostic, graph-based segmentation approach.

01 Jan 2011
TL;DR: The objective of this paper is to segment the medical image using marker-controlled watershed segmentation and to compare the results of directly applying the watershed transformation with those of the marker-controlled watershed transformation.
Abstract: Segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is used to cluster pixels into salient image regions, i.e., regions corresponding to individual surfaces, objects, or natural parts of objects. The watershed transform has interesting properties that make it useful for many different image segmentation applications: it is simple and intuitive, can be parallelized, and always produces a complete division of the image. However, when applied to medical image analysis, it has important drawbacks (oversegmentation, sensitivity to noise). In this paper, medical image segmentation using marker-controlled watershed segmentation is presented. The objective of this paper is to segment the medical image using marker-controlled watershed segmentation and to compare the results of directly applying the watershed transformation with those of the marker-controlled watershed transformation.
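
A short marker-controlled watershed sketch in the spirit of the paper, with markers taken from local maxima of the distance transform of a binary foreground mask (scikit-image and SciPy are assumed here purely for illustration; the paper does not tie itself to an implementation):

import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def marker_controlled_watershed(binary_mask, min_distance=10):
    """Split touching foreground objects with a marker-controlled watershed."""
    distance = ndi.distance_transform_edt(binary_mask)
    # Markers: local maxima of the distance map, ideally one per object.
    coords = peak_local_max(distance, min_distance=min_distance, labels=binary_mask)
    marker_mask = np.zeros(distance.shape, dtype=bool)
    marker_mask[tuple(coords.T)] = True
    markers, _ = ndi.label(marker_mask)
    # Flood from the markers over the inverted distance map, restricted to the mask.
    return watershed(-distance, markers, mask=binary_mask)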

Journal Article
TL;DR: Thin areas are segmented continuously by using the proposed image segmentation method, which segments more accurately than existing methods in images that have a narrow elongated object, shading and blurring.
Abstract: Image segmentation is an important basic process in image analysis. It is used in several processes that provide the input to more advanced processes, and its result significantly affects the accuracy of the overall results. Edge detection is the basic process of image segmentation. The Sobel filter, Derivative of Gaussian and Laplacian of Gaussian are well-known methods of edge detection. However, these methods cannot detect edge details, and it remains difficult to segment thin areas because the edge pixels obliterate details. For example, in a text image, the details of the characters are obliterated by these methods. We propose an image segmentation method using boundary codes to solve this problem. Thin areas are segmented continuously by our proposed segmentation method. In an experiment, we compare the proposed method with existing methods: binarization with a discriminant analysis method and a Sobel filter. According to the experimental results, the proposed method segments more accurately than the existing methods in images that have a narrow elongated object, shading and blurring.

Patent
Sung-Chan Park, Won-Hee Choe, Byung-kwan Park, Lee Seong Deok, Jae-guyn Lim
14 Jan 2011
TL;DR: In this article, a first multi-view image may be generated using patterned infrared light, and a second multi-view image may be obtained using non-patterned visible light.
Abstract: An apparatus and method for obtaining a three-dimensional image. A first multi-view image may be generated using patterned light of infrared light, and a second multi-view image may be generated using non-patterned light of visible light. A first depth image may be obtained from the first multi-view image, and a second depth image may be obtained from the second multi-view image. Then, stereo matching may be performed on the first depth image and the second depth image to generate a final depth image.

Journal ArticleDOI
TL;DR: This paper discusses the use of graph-cuts to merge the regions of the watershed transform optimally and introduces two methods based on region histograms and dissimilarity measures between adjacent regions.
Abstract: In this paper, we discuss the use of graph-cuts to merge the regions of the watershed transform optimally. Watershed is a simple, intuitive and efficient way of segmenting an image. Unfortunately, it presents a few limitations such as over-segmentation and poor detection of low boundaries. Our segmentation process merges regions of the watershed over-segmentation by minimizing a specific criterion using graph-cuts optimization. Two methods will be introduced in this paper. The first is based on region histograms and dissimilarity measures between adjacent regions. The second method deals with efficient approximation of minimal surfaces and geodesics. Experimental results show that these techniques can efficiently be used for large image segmentation when a pre-computed low-level segmentation is available. We will present these methods in the context of interactive medical image segmentation.

Patent
Bhaven Dedhia, Tommer Leyvand
29 Nov 2011
TL;DR: In this paper, the primary image and the depth image are cooperatively used to identify whether a primary pixel images a foreground subject or a background subject in a digital image, and a depth image from one or more depth sensors is also received.
Abstract: Classifying pixels in a digital image includes receiving a primary image from one or more image sensors. The primary image includes a plurality of primary pixels. A depth image from one or more depth sensors is also received. The depth image includes a plurality of depth pixels, each depth pixel registered to one or more primary pixels. The depth image and the primary image are cooperatively used to identify whether a primary pixel images a foreground subject or a background subject.

Journal ArticleDOI
TL;DR: Experimental evidence shows that the proposed method has very effective segmentation results and computational behavior, and decreases the time and increases the quality of color image segmentation compared with state-of-the-art segmentation methods recently proposed in the literature.

Journal ArticleDOI
TL;DR: It is observed that SOM-K and SOM-KS, being unsupervised methods, can achieve better segmentation results with less computational load and no human intervention.
Abstract: Natural image segmentation is an important topic in digital image processing, and it could be solved by clustering methods. We present in this paper an SOM-based k-means method (SOM-K) and a further saliency map-enhanced SOM-K method (SOM-KS). In SOM-K, pixel features of intensity and L*u*v* color space are trained with SOM and followed by a k-means method to cluster the prototype vectors, which are filtered with a hits map. A variant of the proposed method, SOM-KS, adds a modified saliency map to improve the segmentation performance. Both SOM-K and SOM-KS segment the image with the guidance of an entropy evaluation index. Compared to SOM-K, SOM-KS makes a more precise segmentation in most cases by segmenting an image into a smaller number of regions. At the same time, the salient object of an image stands out, while other minor parts are restrained. The computational load of the proposed methods SOM-K and SOM-KS is compared to that of J-image-based segmentation (JSEG) and k-means. Segmentation evaluations of SOM-K and SOM-KS with the entropy index are compared with JSEG and k-means. It is observed that SOM-K and SOM-KS, being unsupervised methods, can achieve better segmentation results with less computational load and no human intervention.