
Showing papers on "Range segmentation" published in 2008


Book ChapterDOI
12 May 2008
TL;DR: A novel method to determine salient regions in images using low-level features of luminance and color is presented, which is fast, easy to implement and generates high quality saliency maps of the same size and resolution as the input image.
Abstract: Detection of salient image regions is useful for applications like image segmentation, adaptive compression, and region-based image retrieval. In this paper we present a novel method to determine salient regions in images using low-level features of luminance and color. The method is fast, easy to implement and generates high quality saliency maps of the same size and resolution as the input image. We demonstrate the use of the algorithm in the segmentation of semantically meaningful whole objects from digital images.
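As a rough illustration of luminance-and-color saliency, the sketch below scores each pixel by its Lab-space distance from the image's mean color after a light blur. This is a simplified stand-in (closer to a frequency-tuned formulation) rather than the paper's exact multi-scale method, and the sigma value is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import color

def saliency_map(rgb):
    """Score each pixel by its Lab-space distance from the image's mean color.

    A simplified luminance/color-contrast stand-in, not the paper's exact
    multi-scale center-surround method; sigma is an illustrative choice.
    """
    lab = color.rgb2lab(rgb)                           # perceptual color space
    mean = lab.reshape(-1, 3).mean(axis=0)             # global mean Lab color
    blurred = gaussian_filter(lab, sigma=(3, 3, 0))    # suppress high-frequency noise
    sal = np.linalg.norm(blurred - mean, axis=2)       # per-pixel color contrast
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)   # normalize to [0, 1]
```

The output is a full-resolution saliency map, which can then be thresholded or handed to a segmentation step, as the abstract describes.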

895 citations


Journal ArticleDOI
Shutao Li1, Bin Yang1
TL;DR: The proposed region-based multifocus image fusion method is more robust to misregistration or slight motion of the object than the pixel-based method, and experimental results show that it gives good results.

407 citations


Journal ArticleDOI
TL;DR: A novel algorithm is proposed for segmenting an image into multiple levels using its mean and variance, making use of the fact that a number of distributions tend towards Dirac delta function, peaking at the mean, in the limiting condition of vanishing variance.

249 citations


Journal ArticleDOI
TL;DR: Watershed segmentation is shown to be a powerful and fast technique for both contour detection and region-based segmentation, and a simple algorithm implemented in MATLAB is proposed.
Abstract: A new segmentation method for color images, gray-scale MR medical images, and aerial images is proposed. The method is based on gray-scale morphology. The edge detection algorithm includes the function edge and marker-controlled watershed segmentation. It features a simple algorithm implemented in MATLAB. Watershed segmentation has proved to be a powerful and fast technique for both contour detection and region-based segmentation. In principle, watershed segmentation depends on ridges to perform a proper segmentation, a property that is often fulfilled in contour detection where the boundaries of the objects are expressed as ridges. For region-based segmentation, it is possible to convert the edges of the objects into ridges by calculating an edge map of the image. Watershed is normally implemented by region growing, based on a set of markers to avoid oversegmentation.
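A minimal marker-controlled watershed along the lines described above can be sketched with scikit-image (the abstract's implementation is in MATLAB); the marker threshold on the gradient magnitude is an illustrative assumption, not taken from the paper.

```python
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def marker_watershed(gray, marker_thresh=0.05):
    """Marker-controlled watershed on a gradient-magnitude edge map.

    The marker threshold is illustrative; flooding from markers avoids the
    oversegmentation mentioned in the abstract.
    """
    edges = sobel(gray)                             # gradient magnitude acts as the ridge map
    markers = ndi.label(edges < marker_thresh)[0]   # flat regions become seed markers
    return watershed(edges, markers)                # flood from markers, stop at ridges
```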

186 citations


Journal ArticleDOI
TL;DR: This work merges the benefits of multiview geometry with automated registration of 3D range scans to produce photorealistic models with minimal human interaction, and introduces a novel algorithm for automatically recovering the rotation, scale, and translation that best aligns the dense and sparse models.
Abstract: The photorealistic modeling of large-scale scenes, such as urban structures, requires a fusion of range sensing technology and traditional digital photography. This paper presents a system that integrates automated 3D-to-3D and 2D-to-3D registration techniques, with multiview geometry for the photorealistic modeling of urban scenes. The 3D range scans are registered using our automated 3D-to-3D registration method that matches 3D features (linear or circular) in the range images. A subset of the 2D photographs are then aligned with the 3D model using our automated 2D-to-3D registration algorithm that matches linear features between the range scans and the photographs. Finally, the 2D photographs are used to generate a second 3D model of the scene that consists of a sparse 3D point cloud, produced by applying a multiview geometry (structure-from-motion) algorithm directly on a sequence of 2D photographs. The last part of this paper introduces a novel algorithm for automatically recovering the rotation, scale, and translation that best aligns the dense and sparse models. This alignment is necessary to enable the photographs to be optimally texture mapped onto the dense model. The contribution of this work is that it merges the benefits of multiview geometry with automated registration of 3D range scans to produce photorealistic models with minimal human interaction. We present results from experiments in large-scale urban scenes.
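Recovering the rotation, scale, and translation that best align two 3D point sets has a classical closed-form solution (Umeyama's method) once correspondences are available. The sketch below assumes corresponding points are given, whereas the paper recovers the dense-to-sparse alignment automatically; it is a generic stand-in, not the authors' algorithm.

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Closed-form similarity transform (scale s, rotation R, translation t)
    minimizing ||dst - (s * R @ src.T).T - t|| for corresponding Nx3 points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)                   # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:       # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)            # mean squared deviation of src
    s = np.trace(np.diag(D) @ S) / var_src             # optimal isotropic scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```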

120 citations


Book ChapterDOI
12 Oct 2008
TL;DR: This paper defines a good image segment as one which can be easily composed using its own pieces, but is difficult to compose using pieces from other parts of the image, and develops a segment extraction algorithm which induces a figure-ground image segmentation.
Abstract: There is a huge diversity of definitions of "visually meaningful" image segments, ranging from simple uniformly colored segments, textured segments, through symmetric patterns, and up to complex semantically meaningful objects. This diversity has led to a wide range of different approaches for image segmentation. In this paper we present a single unified framework for addressing this problem --- "Segmentation by Composition". We define a good image segment as one which can be easily composed using its own pieces, but is difficult to compose using pieces from other parts of the image. This non-parametric approach captures a large diversity of segment types, yet requires no pre-definition or modelling of segment types, nor prior training. Based on this definition, we develop a segment extraction algorithm --- i.e., given a single point-of-interest, provide the "best" image segment containing that point. This induces a figure-ground image segmentation, which applies to a range of different segmentation tasks: single image segmentation, simultaneous co-segmentation of several images, and class-based segmentations.

112 citations


Patent
30 Oct 2008
TL;DR: In this article, a method for producing an image with depth by using 2D images includes obtaining a set of internal parameters of a camera and several sets of external parameters of the camera corresponding to the 2D images.
Abstract: A method for producing an image with depth by using 2D images includes obtaining a set of internal parameters of a camera. The camera takes at least a first and a second 2D image with a small shift between them. The first 2D image has N depths, where N≧2. Several sets of external parameters of the camera corresponding to the 2D images are estimated. 3D information corresponding to each of the N depths of the first 2D image is calculated at each pixel or block. A proper depth of each pixel or image block is then determined: using the internal parameters, the external parameters, and the N depths, each pixel or image block of the first 2D image is projected onto N positions of the second 2D image, and a matching comparison analysis with the second 2D image determines the proper depth from among the N depths.
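A generic plane-sweep sketch of this depth-hypothesis matching idea follows: each candidate depth induces a homography between the two views, the second image is warped accordingly, and the depth with the lowest windowed matching cost is kept per pixel. It assumes known intrinsics K and relative pose (R, t), grayscale images, and fronto-parallel depth planes; it is not the patented procedure itself.

```python
import numpy as np
import cv2

def plane_sweep_depth(img1, img2, K, R, t, depths, win=7):
    """For each candidate depth d, warp img2 into img1's frame with the
    plane-induced homography H(d) = K (R + t n^T / d) K^-1 (n = [0, 0, 1])
    and keep, per pixel, the depth with the lowest windowed matching cost.
    Assumes grayscale images and known camera parameters K, R, t."""
    h, w = img1.shape[:2]
    n = np.array([[0.0, 0.0, 1.0]])                    # fronto-parallel plane normal
    Kinv = np.linalg.inv(K)
    best_cost = np.full((h, w), np.inf, dtype=np.float32)
    best_depth = np.zeros((h, w), dtype=np.float32)
    g1 = img1.astype(np.float32)
    for d in depths:
        H = K @ (R + t.reshape(3, 1) @ n / d) @ Kinv   # maps img1 pixels to img2 pixels
        warped = cv2.warpPerspective(img2, np.linalg.inv(H), (w, h)).astype(np.float32)
        cost = cv2.boxFilter(np.abs(g1 - warped), -1, (win, win))  # windowed SAD
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = d
    return best_depth
```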

112 citations


Patent
Jue Wang1, Xue Bai1
20 Nov 2008
TL;DR: In this article, a segmentation shape prediction and a color model are determined for a current image of a video sequence based on existing segmentation information for at least one previous image of the video sequence.
Abstract: A method, system, and computer-readable storage medium for automatic segmentation of a video sequence. A segmentation shape prediction and a segmentation color model are determined for a current image of a video sequence based on existing segmentation information for at least one previous image of the video sequence. A segmentation of the current image is automatically generated based on a weighted combination of the segmentation shape prediction and the segmentation color model. The segmentation of the current image is stored in a memory medium.
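The weighted combination of a shape prediction and a color model can be pictured as a per-pixel fusion of two probability maps; the array names, fixed weight, and threshold below are assumptions for illustration, since the patent describes the weighting only in general terms.

```python
import numpy as np

def combine_shape_and_color(shape_prob, color_prob, alpha=0.5, thresh=0.5):
    """Fuse a shape-prediction probability map with a color-model probability
    map by a weighted combination, then threshold to get the foreground mask.
    The fixed weight alpha and the threshold are illustrative only."""
    fused = alpha * shape_prob + (1.0 - alpha) * color_prob   # per-pixel weighted fusion
    return fused >= thresh                                    # boolean segmentation mask
```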

67 citations


Patent
18 Jun 2008
TL;DR: In this paper, a method for processing an image to determine whether segments of the image belong to an object class is described; the focus is on classifying segments rather than on the segmentation itself.
Abstract: Systems and methods for processing an image to determine whether segments of the image belong to an object class are disclosed. In one embodiment, the method comprises receiving digitized data representing an image, the image data comprising a plurality of pixels, segmenting the pixel data into segments at a plurality of scale levels, determining feature vectors of the segments at the plurality of scale levels, the feature vectors comprising one or more measures of visual perception of the segments, determining one or more similarities, each similarity determined by comparing two or more feature vectors, determining, for each of a first subset of the segments, a first measure of probability that the segment is a member of an object class, determining probability factors based on the determined first measures of probability and similarity factors based on the determined similarities, and performing factor graph analysis to determine a second measure of probability for each of a second subset of the segments based on the probability factors and similarity factors.

58 citations


Patent
02 Oct 2008
TL;DR: In this article, the authors describe a selective compression unit that compresses image data to produce intermediate image data based upon a segmentation of the input image according to a parameter, such as spatial segments, chromatic segments or temporal segments.
Abstract: Display systems and methods for selectively reducing or compressing image data values within an image are described. Display systems transform input image data from one input gamut hull or space to another target gamut hull or space that is substantially defined by different subpixel repeating groups comprising the display. Display systems described herein comprise a selective compression unit, said unit surveying said input image data to produce intermediate image data based upon a segmentation of the input image according to a parameter. Suitable parameters for segmenting the image include one or more of the following: spatial segments, chromatic segments or temporal segments. A selective compression amount may be determined so as to substantially maintain local contrast of the image data within a given segment.

49 citations


Proceedings ArticleDOI
14 Oct 2008
TL;DR: The proposed watershed algorithm is able to merge more than 80% of the initial partitions, which indicates that a large amount of over-segmentation has been reduced.
Abstract: The use of the watershed algorithm for image segmentation is widespread because it is able to produce a complete division of the image. However, it is susceptible to over-segmentation, which in medical image segmentation means that we do not obtain good representations of the anatomy. We address this issue by thresholding the gradient magnitude image and performing post-segmentation merging on the initial segmentation map. The automated thresholding technique is based on the histogram of the gradient magnitude map, while the post-segmentation merging is based on the similarity of textural features (namely angular second moment, contrast, entropy and inverse difference moment) between two neighboring partitions. When applied to the segmentation of various facial anatomical structures from magnetic resonance (MR) images, the proposed method achieved an overlap index of 92.6% compared to manual contour tracings. It is able to merge more than 80% of the initial partitions, which indicates that a large amount of over-segmentation has been reduced. Results produced using the watershed algorithm with and without the proposed post-segmentation merging are presented for comparison.
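The textural features used for post-segmentation merging can be computed from a gray level co-occurrence matrix; a sketch using scikit-image follows. Note that scikit-image's "homogeneity" property only approximates the inverse difference moment, the merge tolerance is an assumption rather than the paper's criterion, and older scikit-image releases spell these functions greycomatrix/greycoprops.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(patch):
    """ASM, contrast, entropy and (approximate) inverse difference moment of an
    8-bit grayscale region crop, computed from its co-occurrence matrix."""
    glcm = graycomatrix(patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    asm = graycoprops(glcm, 'ASM')[0, 0]
    contrast = graycoprops(glcm, 'contrast')[0, 0]
    idm = graycoprops(glcm, 'homogeneity')[0, 0]       # close to, not exactly, IDM
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([asm, contrast, entropy, idm])

def should_merge(patch_a, patch_b, tol=0.2):
    """Merge two neighboring partitions when their texture features are close;
    the relative tolerance is an illustrative stand-in for the paper's test."""
    fa, fb = texture_features(patch_a), texture_features(patch_b)
    return np.linalg.norm(fa - fb) / (np.linalg.norm(fa) + 1e-12) < tol
```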

Patent
12 Mar 2008
TL;DR: In this article, the authors present a system and method for controlling 2D to 3D image conversion, which includes receiving an image and masking the objects in the image using segmentation layers; each segmentation layer can have weighted values for static and dynamic features.
Abstract: The present invention is directed to systems and methods for controlling 2-D to 3-D image conversion. The system and method include receiving an image and masking the objects in the image using segmentation layers. Each segmentation layer can have weighted values for static and dynamic features. Iterations are used to form the final image which, if desired, can be formed using less than all of the segmentation layers. A final iteration can be run with the weighted values equal for static and dynamic features.

Proceedings ArticleDOI
28 May 2008
TL;DR: This paper proposes a multiple object tracking algorithm in the three-dimensional (3D) domain based on a state-of-the-art adaptive range segmentation method with significantly high preprocessing efficiency.
Abstract: In this paper, we propose a multiple object tracking algorithm in the three-dimensional (3D) domain based on a state-of-the-art adaptive range segmentation method. The performance of segmentation processes has an important impact on the achieved tracking results. Furthermore, segmentation methods which perform best on intensity images will not necessarily achieve promising results when applied to depth images from a time-of-flight sensor. Here, the employed segmentation enables real-time tracking analysis with significantly high preprocessing efficiency. Our experiments confirm the robustness as well as the efficiency of the proposed approach.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: A new framework for image segmentation is proposed that combines edge- and region-based information with spectral techniques through the morphological watershed algorithm; experiments demonstrate its effectiveness in producing simpler segmentations that compare favourably with state-of-the-art methods.
Abstract: This paper proposes a new framework for image segmentation which combines edge- and region-based information with spectral techniques through the morphological algorithm of watersheds. A pre-processing step is used to reduce the spatial resolution without losing important image information. An initial partitioning of the image into primitive regions is set by applying a rainfalling watershed algorithm on the image gradient magnitude. This initial partition is the input to a computationally efficient region segmentation process which produces the final segmentation. The latter process uses a region-based similarity graph representation of the image regions. The experimental results clearly demonstrate the effectiveness of the proposed approach in producing simpler segmentations that compare favourably with state-of-the-art methods.
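A structurally similar pipeline (watershed over-segmentation followed by merging on a region-adjacency graph) can be sketched with scikit-image. The mean-color RAG used here is only a stand-in for the paper's region-based similarity graph and spectral criterion, the thresholds are illustrative, and in older scikit-image releases the graph utilities live in skimage.future.graph.

```python
from scipy import ndimage as ndi
from skimage import graph
from skimage.color import rgb2gray
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_then_merge(rgb, marker_thresh=0.05, cut_thresh=30):
    """Over-segment with a watershed on the gradient magnitude, then merge the
    primitive regions through a region-adjacency graph. Mean-color similarity
    stands in for the paper's similarity graph; thresholds are illustrative."""
    gradient = sobel(rgb2gray(rgb))
    markers = ndi.label(gradient < marker_thresh)[0]       # primitive seed regions
    initial = watershed(gradient, markers)                 # initial partition
    rag = graph.rag_mean_color(rgb, initial)               # region-adjacency graph
    return graph.cut_threshold(initial, rag, cut_thresh)   # merge similar neighbors
```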

Journal ArticleDOI
TL;DR: Experimental results show that the proposed unsupervised algorithm for the segmentation of salient regions in color images achieves excellent segmentation performance and the computation is very efficient.
Abstract: In this paper, we propose a novel unsupervised algorithm for the segmentation of salient regions in color images. There are three phases in this algorithm. In the first phase, we use nonparametric density estimation to extract candidates of dominant colors in an image, which are then used for the quantization of the image. The label map of the quantized image forms the initial regions of the segmentation. In the second phase, we define a salient region by two properties: it is conspicuous, and it is compact and complete. According to this definition, two new parameters are proposed. One is called the "Importance index", which is used to measure the importance of a region, and the other is called the "Merging likelihood", which is utilized to measure the suitability of region merging. Initial regions are merged based on the two new parameters. In the third phase, a similarity check is performed to further merge the surviving regions. Experimental results show that the proposed method achieves excellent segmentation performance for most of our test images. In addition, the computation is very efficient.
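Phase one (dominant-color extraction and quantization) can be approximated with a coarse 3-D color histogram as a crude nonparametric density estimate. The bin count and number of dominant colors below are assumptions for illustration, and the Importance index and Merging likelihood of the later phases are not reproduced.

```python
import numpy as np

def quantize_by_dominant_colors(rgb, bins=16, n_colors=8):
    """Extract dominant colors from a coarse 3-D color histogram (a crude
    nonparametric density estimate), then quantize every pixel to its nearest
    dominant color; the resulting label map gives the initial regions."""
    pixels = rgb.reshape(-1, 3).astype(float)
    hist, _ = np.histogramdd(pixels, bins=bins, range=[(0, 256)] * 3)
    peaks = np.argsort(hist, axis=None)[-n_colors:]            # most populated bins
    idx = np.stack(np.unravel_index(peaks, hist.shape), axis=1)
    centers = (idx + 0.5) * (256.0 / bins)                     # bin centers = dominant colors
    dists = np.linalg.norm(pixels[:, None, :] - centers, axis=2)
    labels = dists.argmin(axis=1).reshape(rgb.shape[:2])       # label map of quantized image
    return labels, centers
```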

Journal ArticleDOI
TL;DR: A novel top-down region-dividing approach is developed for image segmentation, which combines the advantages of both histogram-based and region-based approaches; experimental results show that the algorithm can efficiently perform image segmentation without distorting the spatial structure of an image.

Patent
Christiaan Varekamp1
15 Dec 2008
TL;DR: In this article, a method of processing image data comprises receiving image data, segmenting image data using a first criteria and a first threshold to create a first segmented view of the image data.
Abstract: A method of processing image data comprises receiving image data, segmenting the image data using a first criteria and a first threshold to create a first segmented view of the image data, segmenting the image data using the first criteria and a second threshold to create a second segmented view of the image data, displaying the first segmented view of the image data, receiving one or more selection user inputs selecting one or more segments of the image data, as displayed in the first segmented view, receiving a defined user input, displaying the second segmented view of the image data, and receiving one or more further selection user inputs selecting one or more segments of the image data, as displayed in the second segmented view. This method can be used in the creation of a depth map. In this case, the process further comprises receiving one or more depth user inputs, the or each depth user input relating to a respective selection user input, and creating a depth map for the image data accordingly.

Proceedings ArticleDOI
12 Dec 2008
TL;DR: A new uncertainty theory, the Cloud Model, is introduced to realize automatic and adaptive selection of the segmentation threshold; it considers the uncertainty of the image and extracts concepts from the characteristics of the region to be segmented, much as a human observer would.
Abstract: In this paper, we make two improvements to region-growing image segmentation. The first concerns seed selection: we use Harris corner detection to find growing seeds automatically, which improves segmentation speed. The second concerns the growing rule. The homogeneity criterion usually depends on image formation properties that are not known to the user. We introduce a new uncertainty theory, the Cloud Model, to realize automatic and adaptive selection of the segmentation threshold; it considers the uncertainty of the image and extracts concepts from the characteristics of the region to be segmented, much as a human observer would. Parameters of the homogeneity criterion are estimated from sample locations in the region. The method was tested on general photographs and high-resolution remote sensing images. We found that it works reliably across homogeneity and region characteristics. Furthermore, the method is simple but robust; it can extract objects and boundaries smoothly.
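Seed selection with Harris corners followed by simple region growing can be sketched as below; the fixed intensity tolerance is an assumed stand-in for the paper's Cloud-Model-based adaptive threshold.

```python
import numpy as np
from skimage.feature import corner_harris, corner_peaks
from skimage.segmentation import flood

def grow_from_harris_seeds(gray, tolerance=0.1):
    """Select seeds with Harris corner detection, then grow a region around
    each seed by flood filling with an intensity tolerance. The fixed tolerance
    stands in for the paper's Cloud-Model-based adaptive threshold."""
    seeds = corner_peaks(corner_harris(gray), min_distance=10)   # (row, col) seed points
    labels = np.zeros(gray.shape, dtype=int)
    for i, (r, c) in enumerate(seeds, start=1):
        region = flood(gray, (r, c), tolerance=tolerance)        # grow a homogeneous region
        labels[region & (labels == 0)] = i                       # first label wins per pixel
    return labels
```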

Journal Article
TL;DR: Comparisons with the multi-phase Chan-Vese method show that the proposed multi-layer level set method requires less computation and converges much faster.
Abstract: In this paper, a new multi-layer level set method is proposed for multi-phase image segmentation. The proposed method is based on the concept of image layers and an improved numerical solution of the bimodal Chan-Vese model. One level set function is employed for curve evolution in a hierarchical form over sequential image layers. In addition, a new initialization method and a more efficient computational method for the signed distance function are introduced. Moreover, the evolving curve can automatically stop on true boundaries in a single image layer according to a termination criterion based on the change in length of the evolving curve. In particular, an adaptive improvement scheme is designed to speed up the curve evolution process over the queue of sequential image layers, and detection of the background image layer is used to confirm the termination of the whole multi-layer level set evolution procedure. Finally, numerical experiments on synthetic and real images demonstrate the efficiency and robustness of our method. Comparisons with the multi-phase Chan-Vese method also show that our method requires less computation and converges much faster.
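The bimodal Chan-Vese model that the multi-layer scheme builds on is available in scikit-image; the sketch below shows only that building block, not the hierarchical image-layer evolution or the adaptive termination criterion, and it assumes a recent scikit-image (max_num_iter keyword) and a grayscale float image.

```python
from skimage.segmentation import chan_vese

def bimodal_chan_vese(gray):
    """Run the bimodal Chan-Vese model on a grayscale float image in [0, 1].
    Parameters are illustrative defaults; the paper's hierarchical image-layer
    evolution and termination criterion are not reproduced here."""
    return chan_vese(gray, mu=0.25, lambda1=1.0, lambda2=1.0, max_num_iter=200)
```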

Proceedings ArticleDOI
07 Jan 2008
TL;DR: A new saliency definition for 3-D point clouds is proposed and it is incorporated with saliency features from color information and it also provides valuable information for other high-level tasks in the form of optimal segments and their ranked saliency.
Abstract: This paper describes a segmentation method for extracting salient regions in outdoor scenes using both 3-D laser scans and imagery information. Our approach is a bottom- up attentive process without any high-level priors, models, or learning. As a mid-level vision task, it is not only robust against noise and outliers but it also provides valuable information for other high-level tasks in the form of optimal segments and their ranked saliency. In this paper, we propose a new saliency definition for 3-D point clouds and we incorporate it with saliency features from color information.

Patent
22 Jan 2008
TL;DR: An apparatus usable in an image encoding and decoding system includes a segmentation unit (230) that divides an image into one or more blocks, segments the blocks into a binary mask layer of foreground and background according to a cost-optimized function and a feature vector, and generates a segmentation image from the segmented blocks.
Abstract: An apparatus usable in an image encoding and decoding system includes a segmentation unit (230) that divides an image into one or more blocks, segments the blocks into a binary mask layer of foreground and background according to a cost-optimized function and a feature vector, and generates a segmentation image from the segmented blocks.

Proceedings ArticleDOI
21 Apr 2008
TL;DR: This paper focuses on errors that can occur in the depth measurements of range cameras; a simple method for the correction of flying pixel errors is presented and its limitations are shown.
Abstract: This paper focuses on errors that can occur in the depth measurements of range cameras. Range cameras can capture 3D information of a scene by sending out infrared light and then measuring the reflections. Wrong measurements occur at the edges of objects where the depth level changes: a depth value between the foreground and background levels is measured, which creates a so-called "flying pixel" when the 3D points are displayed. In this paper, different methods for the identification of flying pixels are presented and compared, and the advantages and drawbacks of each method are discussed. A simple method for the correction of flying pixel errors is then presented and its limitations are shown. The final correction method is presented, which is based on segmenting the pixel matrix into horizontal and vertical scanlines. After segmentation, linear segments can be identified to which the pixels can be mapped. The paper concludes with an evaluation of the presented methods to show their effectiveness.
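A simple flying-pixel test along horizontal scanlines flags pixels whose depth lies between two clearly different neighbors at a depth discontinuity; the thresholds below are assumptions, and the paper's scanline-segmentation correction step is not reproduced.

```python
import numpy as np

def flag_flying_pixels(depth, jump=0.15):
    """Flag pixels whose depth lies strictly between clearly different left and
    right neighbors along a horizontal scanline, the typical symptom of a
    flying pixel at an object edge. Thresholds (in the depth unit) are
    illustrative."""
    left, right, centre = depth[:, :-2], depth[:, 2:], depth[:, 1:-1]
    between = (centre > np.minimum(left, right)) & (centre < np.maximum(left, right))
    big_edge = np.abs(left - right) > jump                 # a real depth discontinuity
    off_both = (np.abs(centre - left) > jump / 4) & (np.abs(centre - right) > jump / 4)
    flags = np.zeros_like(depth, dtype=bool)
    flags[:, 1:-1] = between & big_edge & off_both
    return flags
```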

Journal ArticleDOI
TL;DR: A novel variational approach for simultaneous segmentation of two images of the same object taken from different viewpoints, with a unified level-set framework for region and edge based segmentation associated with a shape similarity term.
Abstract: We present a novel variational approach for simultaneous segmentation of two images of the same object taken from different viewpoints. Due to noise, clutter and occlusions, neither of the images contains sufficient information for correct object-background partitioning. The evolving object contour in each image provides a dynamic prior for the segmentation of the other object view. We call this process mutual segmentation. The foundation of the proposed method is a unified level-set framework for region and edge based segmentation, associated with a shape similarity term. The suggested shape term incorporates the semantic knowledge gained in the segmentation process of the image pair, accounting for excess or deficient parts in the estimated object shape. Transformations, including planar projectivities, between the object views are accommodated by a registration process held concurrently with the segmentation. The proposed segmentation algorithm is demonstrated on a variety of image pairs. The homography between each of the image pairs is estimated and its accuracy is evaluated.

Book ChapterDOI
07 Jul 2008
TL;DR: This paper proposes a scheme for constructing a correspondence relation in adjacent regions of two arbitrary surfaces and shows correspondence relations for regions on a femoral head and acetabulum and other adjacent structures, as well as preliminary segmentation results obtained by a graph cut algorithm.
Abstract: For biomechanical simulations, the segmentation of multiple adjacent anatomical structures from medical image data is often required. If adjacent structures are hardly distinguishable in image data, automatic segmentation methods for single structures in general do not yield sufficiently accurate results. To improve segmentation accuracy in these cases, knowledge about adjacent structures must be exploited. Optimal graph searching based on deformable surface models allows for a simultaneous segmentation of multiple adjacent objects. However, this method requires a correspondence relation between vertices of adjacent surface meshes. Line segments, each containing two corresponding vertices, may then serve as shared displacement directions in the segmentation process. The problem is how to define suitable correspondences on arbitrary surfaces. In this paper we propose a scheme for constructing a correspondence relation in adjacent regions of two arbitrary surfaces. When applying the thus generated shared displacement directions in segmentation with deformable surfaces, overlap of the surfaces is guaranteed not to occur. We show correspondence relations for regions on a femoral head and acetabulum and other adjacent structures, as well as preliminary segmentation results obtained by a graph cut algorithm.

Proceedings ArticleDOI
12 Dec 2008
TL;DR: Experimental results show that an image segmentation method using the spectral graph theoretic framework of Normalized Cuts to find partitions of an image based on a similarity matrix is effective for segmenting medical ultrasound images.
Abstract: Image partitioning plays an important role in both qualitative and quantitative analysis of medical ultrasound images. However, medical ultrasound images have poor contrast and strong speckle noise, and segmentation results from traditional image segmentation methods may not be satisfactory. In this paper, medical ultrasound images are segmented using a method based on texture features and graph cuts. The texture feature parameters are obtained from the gray level co-occurrence matrix. A similarity matrix is built from the texture feature parameters and the gray intensity of each pixel. We use the spectral graph theoretic framework of Normalized Cuts to find partitions of the image based on this similarity matrix. Experimental results show that the method is effective for segmenting medical ultrasound images.

Patent
21 Jul 2008
TL;DR: In this article, a system and method for segmenting an image is proposed, which includes imposing a grid having a plurality of grid cells on the image, each grid cell including a respective subimage; extracting image features from each of the plurality of grids; classifying the subimages in each grid cells without using geometric information about the image using a trained classification routine; generating classified image segments from the classified subimages; and generating a class map from the classification image segments.
Abstract: A system and method for segmenting an image. The system and method include: imposing a grid having a plurality of grid cells on the image, each grid cell including a respective subimage; extracting image features from each of the plurality of grid cells; classifying the subimages in each grid cell without using geometric information about the image using a trained classification routine; generating classified image segments from the classified subimages; and generating a class map from the classified image segments. Selected features in the generated class map may then be compared with a database of existing images to determine a potential match. The image may be an iris image, a facial image, a fingerprint image, a medical image, a satellite image
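The grid-based classification step can be sketched as follows: impose a regular grid, extract simple per-cell features, and classify each subimage with a pre-trained classifier. The mean/standard-deviation features and the scikit-learn-style predict interface are assumptions; the patent leaves the feature extraction and classification routine open.

```python
import numpy as np

def classify_grid_cells(image, classifier, cell=32):
    """Impose a regular grid, extract simple per-cell features (mean and
    standard deviation of intensity), classify each subimage with a
    pre-trained classifier exposing a scikit-learn-style predict(), and return
    the class map with one label per grid cell."""
    h, w = image.shape[:2]
    rows, cols = h // cell, w // cell
    feats = []
    for r in range(rows):
        for c in range(cols):
            sub = image[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            feats.append([sub.mean(), sub.std()])           # no geometric information used
    labels = classifier.predict(np.asarray(feats))
    return labels.reshape(rows, cols)                       # class map over the grid
```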

01 Jan 2008
TL;DR: A novel watershed segmentation algorithm based on Hill Climbing technique is proposed for the segmentation of nucleus from the surrounding cytoplasm of cervical cancer images, which is superior to most other segmentation techniques.
Abstract: In this paper, a novel watershed segmentation algorithm is proposed for the segmentation of the nucleus from the surrounding cytoplasm in cervical cancer images. The proposed method converts the input RGB image into an HSI image, which contains three components: hue, saturation and intensity. The saturation component is thresholded to obtain a binary image, and each pixel in the binary image is multiplied with the hue component to obtain a product image. The intensity image is complemented, thresholded, merged with the product image and smoothed. The local minima are reduced using the extended-minima function, and the multiscale gradient of the resulting gray-scale image is segmented using a watershed algorithm based on the Hill Climbing technique. Experimental results attest to the computational efficiency of the algorithm and its shape-maintaining, edge-preserving and scale-calibrating features. Its performance is also superior to most other segmentation techniques.

Patent
06 Oct 2008
TL;DR: In this paper, an image segmentation subsystem partitions the image into image segments and a text recognition subsystem transforms the restored image data into computer readable text data based on the determined parameters.
Abstract: A system that extracts text from an image includes a capture device that captures the image having a low resolution. An image segmentation subsystem partitions the image into image segments. An image restoration subsystem generates a resolution-expanded image from the image segments and negates degradation effects of the low-resolution image by transforming the image segments from a first domain to a second domain and deconvolving the transformed image segments in the second domain to determine parameters of the low-resolution image. A text recognition subsystem transforms the restored image data into computer readable text data based on the determined parameters.

Patent
Makoto Terao1, Takafumi Koshinaka1
25 Dec 2008
TL;DR: In this paper, a model-based topic segmentation unit is used to segment a text using a topic model representing semantic coherence, and a parameter estimation section that estimates a control parameter used in segmenting the text based on detection of a change point of word distribution in the text.
Abstract: There is provided an apparatus including a model-based topic segmentation section that segments a text using a topic model representing semantic coherence, a parameter estimation section that estimates a control parameter used in segmenting the text based on detection of a change point of word distribution in the text, using the result of segmentation by the model-based topic segmentation section as training data, and a change-point-detection topic segmentation section that segments the text, based on detection of the change point of word distribution in the text, using the parameter estimated by the parameter estimation section.

Patent
01 Sep 2008
TL;DR: In this article, the authors propose an image capturing apparatus capable of capturing a still image in which the reduction in image quality caused by the presence of focus detecting pixels is suppressed while high sharpness is maintained.
Abstract: PROBLEM TO BE SOLVED: To provide an image capturing apparatus capable of capturing a still image in which the reduction in image quality caused by the presence of focus detecting pixels is suppressed while high sharpness is maintained. SOLUTION: The image capturing apparatus 100 includes: image sensors 104 with an image capturing pixel group 105 and a focus detecting pixel group 106; a detection means 117 which detects variations of image signals of pixels existing around phase difference sensors of the focus detecting pixel group 106; an adjustment means 11 which adjusts a gain of the focus detecting pixel group 106; an interpolation means 112 which interpolates image data corresponding to the position of the focus detecting pixel group 106 based on the image signal of the image capturing pixel group 105; and a decision means 117 which decides the ratio of the gain adjustment amount to the interpolation correction amount based on the variations of the image signals detected by the detection means 117.