
Showing papers on "Segmentation-based object categorization published in 2011"


Journal ArticleDOI
TL;DR: This paper investigates two fundamental problems in computer vision, contour detection and image segmentation, and presents state-of-the-art algorithms for both tasks.
Abstract: This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.

5,068 citations
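
As a rough illustration of the globalization step described above, the sketch below builds a pixel affinity matrix from any local boundary map and reads a globalized contour signal off the gradients of the leading eigenvectors of the normalized affinity. It is not the authors' gPb/UCM implementation; the 4-connected graph, the exponential affinity, and all parameter values are simplifying assumptions.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

def spectral_contours(boundary_prob, n_vecs=8, sigma=0.1):
    """Globalize a local boundary map via spectral clustering (illustrative only).

    boundary_prob: H x W array in [0, 1] from any local contour detector.
    Returns an H x W globalized contour signal.
    """
    h, w = boundary_prob.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)

    rows, cols, vals = [], [], []
    # 4-connected pixel graph; affinity drops when an edge crosses a likely boundary
    for di, dj in [(0, 1), (1, 0)]:
        a = idx[:h - di, :w - dj].ravel()
        b = idx[di:, dj:].ravel()
        crossing = np.maximum(boundary_prob[:h - di, :w - dj],
                              boundary_prob[di:, dj:]).ravel()
        wgt = np.exp(-crossing / sigma)
        rows += [a, b]; cols += [b, a]; vals += [wgt, wgt]
    W = sparse.csr_matrix((np.concatenate(vals),
                           (np.concatenate(rows), np.concatenate(cols))), shape=(n, n))

    # leading eigenvectors of the normalized affinity (equivalently, the
    # trailing eigenvectors of the normalized Laplacian)
    d = np.asarray(W.sum(axis=1)).ravel()
    d_inv_sqrt = sparse.diags(1.0 / np.sqrt(d + 1e-12))
    _, vecs = eigsh(d_inv_sqrt @ W @ d_inv_sqrt, k=n_vecs + 1, which='LA')

    contour = np.zeros((h, w))
    for k in range(n_vecs):                      # skip the trivial top eigenvector
        gy, gx = np.gradient(vecs[:, k].reshape(h, w))
        contour += np.hypot(gx, gy)
    return contour / (contour.max() + 1e-12)
```

In the paper, such a globalized contour signal is then converted into a hierarchical region tree, reducing segmentation to contour detection.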


Journal ArticleDOI
TL;DR: A novel region-based method for image segmentation that simultaneously segments the image and estimates the bias field; the estimated bias field can be used for intensity inhomogeneity correction (or bias correction).
Abstract: Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.

1,201 citations
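
The following is a minimal sketch of the local-intensity-clustering idea: the image is assumed to be a smooth multiplicative bias field times a piecewise-constant signal, and the class intensities, hard memberships, and bias field are updated in turn from Gaussian-weighted local statistics. It strips out the level-set machinery of the paper, and the kernel width, class count, and hard assignments are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def segment_with_bias_correction(img, n_classes=3, sigma=4.0, n_iter=20):
    """Simplified sketch of joint clustering and bias-field estimation,
    assuming img ~ b(x) * c_i inside each class."""
    img = img.astype(float)
    b = np.ones_like(img)                                  # bias field estimate
    c = np.linspace(img.min(), img.max(), n_classes)       # class "true" intensities
    ones_blur = gaussian_filter(np.ones_like(img), sigma)  # K * 1

    for _ in range(n_iter):
        kb = gaussian_filter(b, sigma)                     # K * b
        kb2 = gaussian_filter(b * b, sigma)                # K * b^2
        # local clustering criterion e_i(x) for each class, hard assignment
        e = np.stack([img**2 * ones_blur - 2 * img * ci * kb + ci**2 * kb2
                      for ci in c])
        labels = np.argmin(e, axis=0)

        # update class intensities, then the smooth bias field
        J1 = np.zeros_like(img); J2 = np.zeros_like(img)
        for i in range(n_classes):
            m = (labels == i)
            c[i] = (kb * img * m).sum() / ((kb2 * m).sum() + 1e-12)
            J1 += c[i] * m
            J2 += c[i]**2 * m
        b = gaussian_filter(img * J1, sigma) / (gaussian_filter(J2, sigma) + 1e-12)
    return labels, b, c
```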


Book
14 Dec 2011
TL;DR: This book covers level set and deformable model methods, including fast methods for implicit active contour models and topology preserving geometric deformable models for brain reconstruction, with applications ranging from segmentation, tracking and stereo to medical image analysis.
Abstract:
* Level set methods
* Deformable models
* Fast methods for implicit active contour models
* Fast edge integration
* Variational snake theory
* Multiplicative denoising and deblurring
* Total variation minimization for scalar/vector regularization
* Morphological global reconstruction and levelings
* Fast marching techniques for visual grouping and segmentation
* Multiphase object detection and image segmentation
* Adaptive segmentation of vector-valued images
* Mumford-Shah for segmentation and stereo
* Shape analysis toward model-based segmentation
* Joint image registration and segmentation
* Image alignment
* Variational principles in optical flow estimation and tracking
* Region matching and tracking under deformations or occlusions
* Computational stereo
* Visualization, analysis and shape reconstruction of sparse data
* Variational problems and partial differential equations on implicit surfaces
* Knowledge-based segmentation of medical images
* Topology preserving geometric deformable models for brain reconstruction
* Editing geometric models
* Simulating natural phenomena

899 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: This work adapts segmentation as a selective search, reconsidering segmentation to generate many approximate locations rather than few precise object delineations, because an object whose location is never generated can not be recognised and because appearance and immediate nearby context are most effective for object recognition.
Abstract: For object recognition, the current state-of-the-art is based on exhaustive search. However, to enable the use of more expensive features and classifiers and thereby progress beyond the state-of-the-art, a selective search strategy is needed. Therefore, we adapt segmentation as a selective search by reconsidering segmentation: We propose to generate many approximate locations over few and precise object delineations because (1) an object whose location is never generated can not be recognised and (2) appearance and immediate nearby context are most effective for object recognition. Our method is class-independent and is shown to cover 96.7% of all objects in the Pascal VOC 2007 test set using only 1,536 locations per image. Our selective search enables the use of the more expensive bag-of-words method which we use to substantially improve the state-of-the-art by up to 8.5% for 8 out of 20 classes on the Pascal VOC 2010 detection challenge.

815 citations
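
A toy sketch of the selective-search idea follows: start from an over-segmentation, repeatedly merge the most similar neighboring regions, and emit one bounding box for every region ever created, so that many approximate locations are generated. It uses scikit-image's Felzenszwalb segmentation for the initial regions and only a colour-histogram similarity, whereas the paper combines several complementary grouping cues; all parameter values are assumptions.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def color_hist(pixels, bins=8):
    """Normalized per-channel histogram of an (N, 3) float pixel array in [0, 1]."""
    h = np.concatenate([np.histogram(pixels[:, c], bins=bins, range=(0, 1))[0]
                        for c in range(3)]).astype(float)
    return h / (h.sum() + 1e-12)

def selective_search_boxes(img, scale=100, min_size=50):
    """Sketch: propose object locations by greedy merging of an over-segmentation.

    img: H x W x 3 float image in [0, 1]. Every region created while merging
    contributes one bounding box, so many approximate locations are generated.
    """
    labels = felzenszwalb(img, scale=scale, min_size=min_size)
    regions = {}
    for r in np.unique(labels):
        ys, xs = np.nonzero(labels == r)
        regions[int(r)] = {"pix": img[ys, xs],
                           "box": (xs.min(), ys.min(), xs.max(), ys.max())}
    adj = set()
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b: adj.add((int(min(a, b)), int(max(a, b))))
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b: adj.add((int(min(a, b)), int(max(a, b))))

    boxes = [r["box"] for r in regions.values()]
    next_id = int(labels.max()) + 1
    while adj:
        # merge the most similar adjacent pair (colour-histogram intersection)
        a, b = max(adj, key=lambda e: np.minimum(color_hist(regions[e[0]]["pix"]),
                                                 color_hist(regions[e[1]]["pix"])).sum())
        ra, rb = regions.pop(a), regions.pop(b)
        box = (min(ra["box"][0], rb["box"][0]), min(ra["box"][1], rb["box"][1]),
               max(ra["box"][2], rb["box"][2]), max(ra["box"][3], rb["box"][3]))
        regions[next_id] = {"pix": np.vstack([ra["pix"], rb["pix"]]), "box": box}
        boxes.append(box)
        # remap surviving edges onto the merged region id
        new_adj = set()
        for u, v in adj:
            u2 = next_id if u in (a, b) else u
            v2 = next_id if v in (a, b) else v
            if u2 != v2:
                new_adj.add((min(u2, v2), max(u2, v2)))
        adj = new_adj
        next_id += 1
    return boxes
```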


Journal ArticleDOI
TL;DR: Inspired by recent work in image denoising, the proposed nonlocal patch-based label fusion produces accurate and robust segmentation in quantitative magnetic resonance analysis.

709 citations
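
Since only the summary is shown here, the sketch below illustrates the general nonlocal patch-based label fusion scheme in 2D: each target pixel receives label votes from atlas patches in a search window, weighted by patch similarity. The patch size, search radius, and Gaussian weighting are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def patch_label_fusion(target, atlases, atlas_labels, patch=2, search=3, h=0.1):
    """Toy 2D sketch of nonlocal patch-based label fusion.

    target:        H x W intensity image to segment
    atlases:       list of H x W registered atlas images
    atlas_labels:  list of H x W binary label maps for the atlases
    Each target pixel gets a weighted vote from atlas patches within a
    (2*search+1)^2 neighborhood, weighted by exp(-SSD / h).
    """
    H, W = target.shape
    pad = patch + search
    tgt = np.pad(target, pad, mode='reflect')
    out = np.zeros((H, W))
    for a_img, a_lab in zip(atlases, atlas_labels):
        ai = np.pad(a_img, pad, mode='reflect')
        al = np.pad(a_lab.astype(float), pad, mode='reflect')
        for i in range(H):
            for j in range(W):
                ci, cj = i + pad, j + pad
                tp = tgt[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
                wsum = vote = 0.0
                for di in range(-search, search + 1):
                    for dj in range(-search, search + 1):
                        ap = ai[ci + di - patch:ci + di + patch + 1,
                                cj + dj - patch:cj + dj + patch + 1]
                        w = np.exp(-np.sum((tp - ap) ** 2) / h)
                        wsum += w
                        vote += w * al[ci + di, cj + dj]
                out[i, j] += vote / (wsum + 1e-12)
    return (out / len(atlases)) > 0.5            # fused binary segmentation
```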


Proceedings ArticleDOI
09 May 2011
TL;DR: This paper presents a set of segmentation methods for various types of 3D point clouds; sparse data is addressed using ground models of non-constant resolution, either a continuous probabilistic surface or a terrain mesh built from the structure of a range image, both representations providing close to real-time performance.
Abstract: This paper presents a set of segmentation methods for various types of 3D point clouds. Segmentation of dense 3D data (e.g. Riegl scans) is optimised via a simple yet efficient voxelisation of the space. Prior ground extraction is empirically shown to significantly improve segmentation performance. Segmentation of sparse 3D data (e.g. Velodyne scans) is addressed using ground models of non-constant resolution either providing a continuous probabilistic surface or a terrain mesh built from the structure of a range image, both representations providing close to real-time performance. All the algorithms are tested on several hand labeled data sets using two novel metrics for segmentation evaluation.

502 citations
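
A simplified sketch of the dense-data pipeline described above: remove ground points, voxelize the remaining points, and group occupied voxels by flood fill. The flat ground threshold stands in for the paper's ground models, and the voxel size and 26-connectivity are assumptions.

```python
import numpy as np
from collections import deque

def segment_point_cloud(points, voxel=0.2, ground_z=0.3):
    """Sketch: voxel-based segmentation of a 3D point cloud after naive ground removal.

    points: (N, 3) array of x, y, z coordinates. Points below ground_z are treated
    as ground (a flat threshold stands in for a proper ground model). Remaining
    occupied voxels are grouped by 26-connected flood fill.
    Returns per-point labels; -1 marks ground points.
    """
    keep = points[:, 2] > ground_z
    idx = np.floor(points[keep] / voxel).astype(int)
    vox_ids, inverse = np.unique(idx, axis=0, return_inverse=True)
    occupied = {tuple(v): k for k, v in enumerate(vox_ids)}

    comp = -np.ones(len(vox_ids), dtype=int)
    offsets = [(i, j, k) for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1)
               if (i, j, k) != (0, 0, 0)]
    n_comp = 0
    for start in range(len(vox_ids)):
        if comp[start] != -1:
            continue
        comp[start] = n_comp
        queue = deque([tuple(vox_ids[start])])
        while queue:                              # flood fill over occupied voxels
            v = queue.popleft()
            for o in offsets:
                nb = (v[0] + o[0], v[1] + o[1], v[2] + o[2])
                k = occupied.get(nb)
                if k is not None and comp[k] == -1:
                    comp[k] = n_comp
                    queue.append(nb)
        n_comp += 1

    labels = -np.ones(len(points), dtype=int)
    labels[keep] = comp[inverse]
    return labels
```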


Journal ArticleDOI
TL;DR: In this paper the performance of the most commonly used edge detection techniques for image segmentation is studied, and these techniques are compared experimentally using MATLAB software.
Abstract: Interpretation of image contents is one of the objectives in computer vision, specifically in image processing, and it has received much attention from researchers in recent years. In image interpretation, the partition of the image into object and background is a critical step. Segmentation separates an image into its component regions or objects; the object must be segmented from the background so that the image can be read properly and its content identified. In this context, edge detection is a fundamental tool for image segmentation. In this paper an attempt is made to study the performance of the most commonly used edge detection techniques for image segmentation, and the comparison of these techniques is carried out experimentally using MATLAB software.

420 citations
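
The comparison in the paper is carried out in MATLAB; the sketch below reproduces the same kind of experiment with scikit-image instead, running several classical edge detectors on one image and reporting a crude edge-density statistic. The choice of detectors and the thresholds are assumptions.

```python
import numpy as np
from skimage import data, filters, feature
from skimage.color import rgb2gray

def compare_edge_detectors(gray):
    """Run several classical edge detectors on a grayscale image and report the
    fraction of pixels marked as edges (a stand-in for a fuller comparison)."""
    results = {
        "Sobel":   filters.sobel(gray),
        "Prewitt": filters.prewitt(gray),
        "Roberts": filters.roberts(gray),
        "LoG":     np.abs(filters.laplace(filters.gaussian(gray, sigma=2))),
        "Canny":   feature.canny(gray, sigma=2).astype(float),
    }
    for name, resp in results.items():
        edges = resp > (0.5 if name == "Canny" else 0.1)   # crude, illustrative thresholds
        print(f"{name:8s}: {edges.mean():.3%} of pixels marked as edge")
    return results

compare_edge_detectors(rgb2gray(data.astronaut()))
```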


Journal ArticleDOI
TL;DR: A new fuzzy level set algorithm is proposed in this paper to facilitate medical image segmentation; it is able to evolve directly from an initial segmentation obtained by spatial fuzzy clustering and is enhanced with locally regularized evolution.

417 citations


Journal ArticleDOI
TL;DR: Comparison of single- and multi-scale segmentations shows that identifying and refining under- and over-segmented regions using local statistics can improve global segmentation results.
Abstract: In this study, a multi-scale approach is used to improve the segmentation of a high spatial resolution (30 cm) color infrared image of a residential area. First, a series of 25 image segmentations are performed in Definiens Professional 5 using different scale parameters. The optimal image segmentation is identified using an unsupervised evaluation method of segmentation quality that takes into account global intra-segment and inter-segment heterogeneity measures (weighted variance and Moran’s I, respectively). Once the optimal segmentation is determined, under-segmented and over-segmented regions in this segmentation are identified using local heterogeneity measures (variance and Local Moran’s I). The under- and over-segmented regions are refined by (1) further segmenting under-segmented regions at finer scales, and (2) merging over-segmented regions with spectrally similar neighbors. This process leads to the creation of several segmentations consisting of segments generated at three different segmentation scales. Comparison of single- and multi-scale segmentations shows that identifying and refining under- and over-segmented regions using local statistics can improve global segmentation results.

302 citations
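
A sketch of the global evaluation terms used in the study (area-weighted intra-segment variance and Moran's I over segment means) is given below for a single band; the 4-connected adjacency definition and the absence of the study's weighting between the two terms are simplifying assumptions.

```python
import numpy as np

def segmentation_quality(values, labels):
    """Unsupervised segmentation-quality terms: area-weighted intra-segment
    variance and Moran's I of segment means.

    values: H x W single-band image (e.g., one spectral band)
    labels: H x W integer segment ids
    Lower weighted variance and lower spatial autocorrelation of segment means
    indicate a better segmentation.
    """
    ids = np.unique(labels)
    areas = np.array([(labels == i).sum() for i in ids], dtype=float)
    means = np.array([values[labels == i].mean() for i in ids])
    variances = np.array([values[labels == i].var() for i in ids])
    weighted_var = (areas * variances).sum() / areas.sum()

    # adjacency weights: 1 if two segments share a 4-connected boundary
    n = len(ids)
    pos = {v: k for k, v in enumerate(ids)}
    W = np.zeros((n, n))
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b: W[pos[a], pos[b]] = W[pos[b], pos[a]] = 1
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b: W[pos[a], pos[b]] = W[pos[b], pos[a]] = 1

    dev = means - means.mean()
    morans_I = (n / W.sum()) * (W * np.outer(dev, dev)).sum() / (dev ** 2).sum()
    return weighted_var, morans_I
```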


Proceedings ArticleDOI
20 Jun 2011
TL;DR: This method is based on a discriminative temporal extension of the spatial bag-of-words model that has been very popular in object recognition; classification is performed robustly within a multi-class SVM framework, whereas the inference over the segments is done efficiently with dynamic programming.
Abstract: Automatic video segmentation and action recognition has been a long-standing problem in computer vision. Much work in the literature treats video segmentation and action recognition as two independent problems; while segmentation is often done without a temporal model of the activity, action recognition is usually performed on pre-segmented clips. In this paper we propose a novel method that avoids the limitations of the above approaches by jointly performing video segmentation and action recognition. Unlike standard approaches based on extensions of dynamic Bayesian networks, our method is based on a discriminative temporal extension of the spatial bag-of-words model that has been very popular in object recognition. The classification is performed robustly within a multi-class SVM framework whereas the inference over the segments is done efficiently with dynamic programming. Experimental results on honeybee, Weizmann, and Hollywood datasets illustrate the benefits of our approach compared to state-of-the-art methods.

290 citations
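
The sketch below shows only the dynamic-programming inference over temporal segments: given per-frame features and any segment scoring function, it finds the best partition of the video into labeled contiguous segments. The scoring function (in the paper, a discriminative temporal bag-of-words model inside a multi-class SVM) is left abstract, and the segment-length bounds are assumptions.

```python
import numpy as np

def segment_video(frame_feats, score_fn, n_classes, min_len=5, max_len=60):
    """Sketch of dynamic-programming inference over temporal segments.

    frame_feats: (T, D) per-frame features
    score_fn(feat_sum, length, cls): score for labeling a segment as class cls,
                                     given the sum of its frame features
    Returns the best list of (start, end, class) segments.
    """
    T, D = frame_feats.shape
    csum = np.vstack([np.zeros(D), np.cumsum(frame_feats, axis=0)])
    best = np.full(T + 1, -np.inf); best[0] = 0.0
    back = [None] * (T + 1)
    for t in range(1, T + 1):
        for length in range(min_len, min(max_len, t) + 1):
            s = t - length
            if best[s] == -np.inf:
                continue
            seg_sum = csum[t] - csum[s]
            for c in range(n_classes):
                val = best[s] + score_fn(seg_sum, length, c)
                if val > best[t]:
                    best[t], back[t] = val, (s, c)
    segments, t = [], T
    while t > 0 and back[t] is not None:          # backtrack the optimal labeling
        s, c = back[t]
        segments.append((s, t, c))
        t = s
    return segments[::-1]
```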


Proceedings ArticleDOI
06 Nov 2011
TL;DR: Compared to previous methods, which are usually based on a single type of features, the proposed method seamlessly integrates multiple types of features to jointly produce the affinity matrix within a single inference step, and produces more accurate and reliable segmentation results.
Abstract: This paper investigates how to boost region-based image segmentation by pursuing a new solution to fuse multiple types of image features. A collaborative image segmentation framework, called multi-task low-rank affinity pursuit, is presented for such a purpose. Given an image described with multiple types of features, we aim at inferring a unified affinity matrix that implicitly encodes the segmentation of the image. This is achieved by seeking the sparsity-consistent low-rank affinities from the joint decompositions of multiple feature matrices into pairs of sparse and low-rank matrices, the latter of which is expressed as the product of the image feature matrix and its corresponding image affinity matrix. The inference process is formulated as a constrained nuclear norm and l2,1-norm minimization problem, which is convex and can be solved efficiently with the Augmented Lagrange Multiplier method. Compared to previous methods, which are usually based on a single type of features, the proposed method seamlessly integrates multiple types of features to jointly produce the affinity matrix within a single inference step, and produces more accurate and reliable segmentation results. Experiments on the MSRC dataset and the Berkeley segmentation dataset validate the superiority of using multiple features over a single feature and also the superiority of our method over conventional methods for feature fusion. Moreover, our method is shown to be very competitive when compared to other state-of-the-art methods.
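
Two building blocks of the Augmented Lagrange Multiplier solver mentioned above are the proximal (shrinkage) operators of the nuclear norm and of the l2,1-norm. A minimal sketch of both follows; the full multi-task affinity pursuit with its constraints is not reproduced here.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of tau * (nuclear norm) at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def l21_shrink(M, tau):
    """Column-wise shrinkage: proximal operator of tau * (l2,1 norm), i.e. the
    sum of column l2 norms; columns with small norm are zeroed out."""
    norms = np.linalg.norm(M, axis=0)
    scale = np.maximum(1 - tau / np.maximum(norms, 1e-12), 0)
    return M * scale
```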

Journal ArticleDOI
TL;DR: An integrated framework consisting of a novel supervised cell-image segmentation algorithm and a new touching-cell splitting method is proposed for quantitative analysis of histopathological images, and achieves better results than the other compared methods.
Abstract: For quantitative analysis of histopathological images, such as the lymphoma grading systems, quantification of features is usually carried out on single cells before categorizing them by classification algorithms. To this end, we propose an integrated framework consisting of a novel supervised cell-image segmentation algorithm and a new touching-cell splitting method. For the segmentation part, we segment the cell regions from the other areas by classifying the image pixels into either cell or extra-cellular category. Instead of using pixel color intensities, the color-texture extracted at the local neighborhood of each pixel is utilized as the input to our classification algorithm. The color-texture at each pixel is extracted by local Fourier transform (LFT) from a new color space, the most discriminant color space (MDC). The MDC color space is optimized to be a linear combination of the original RGB color space so that the extracted LFT texture features in the MDC color space can achieve most discrimination in terms of classification (segmentation) performance. To speed up the texture feature extraction process, we develop an efficient LFT extraction algorithm based on image shifting and image integral. For the splitting part, given a connected component of the segmentation map, we initially differentiate whether it is a touching-cell clump or a single nontouching cell. The differentiation is mainly based on the distance between the most likely radial-symmetry center and the geometrical center of the connected component. The boundaries of touching-cell clumps are smoothed out by Fourier shape descriptor before carrying out an iterative, concave-point and radial-symmetry based splitting algorithm. To test the validity, effectiveness and efficiency of the framework, it is applied to follicular lymphoma pathological images, which exhibit complex background and extracellular texture with nonuniform illumination condition. For comparison purposes, the results of the proposed segmentation algorithm are evaluated against the outputs of superpixel, graph-cut, mean-shift, and two state-of-the-art pathological image segmentation methods using ground-truth that was established by manual segmentation of cells in the original images. Our segmentation algorithm achieves better results than the other compared methods. The results of splitting are evaluated in terms of under-splitting, over-splitting, and encroachment errors. By summing up the three types of errors, we achieve a total error rate of 5.25% per image.

Proceedings ArticleDOI
12 Dec 2011
TL;DR: The presented approach to segmenting shapes in a heterogeneous shape database is evaluated on the Princeton segmentation benchmark, and it is shown that joint shape segmentation significantly outperforms single-shape segmentation techniques.
Abstract: We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques.

Journal ArticleDOI
TL;DR: The proposed dynamic region-merging algorithm formulates image segmentation as an inference problem, where the final segmentation is established based on the observed image, and it is proved that the produced segmentation satisfies certain global properties.
Abstract: This paper addresses the automatic image segmentation problem in a region merging style. With an initially oversegmented image, in which many regions (or superpixels) with homogeneous color are detected, an image segmentation is performed by iteratively merging the regions according to a statistical test. There are two essential issues in a region-merging algorithm: the order of merging and the stopping criterion. In the proposed algorithm, these two issues are solved by a novel predicate, which is defined by the sequential probability ratio test and the minimal cost criterion. Starting from an oversegmented image, neighboring regions are progressively merged if there is evidence for merging according to this predicate. We show that the merging order follows the principle of dynamic programming. This formulates the image segmentation as an inference problem, where the final segmentation is established based on the observed image. We also prove that the produced segmentation satisfies certain global properties. In addition, a faster algorithm is developed to accelerate the region-merging process, which maintains a nearest neighbor graph in each iteration. Experiments on real natural images are conducted to demonstrate the performance of the proposed dynamic region-merging algorithm.
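
The sketch below shows the overall merge loop on an over-segmented image: adjacent regions are merged cheapest-first while a merging predicate holds. The predicate used here is a simple mean-difference test rather than the paper's sequential probability ratio test, and the nearest-neighbor-graph acceleration is omitted; region statistics are kept as sufficient statistics so merges stay cheap.

```python
import numpy as np

def dynamic_region_merge(values, labels, alpha=1.0):
    """Sketch of iterative region merging on an over-segmented image.

    values: H x W intensity image; labels: H x W initial superpixel ids.
    Adjacent regions are merged cheapest-first while a simple predicate holds
    (mean difference below alpha standard errors); alpha is illustrative.
    """
    stats = {}
    for r in np.unique(labels):
        v = values[labels == r]
        stats[r] = [v.sum(), (v ** 2).sum(), v.size]        # sufficient statistics
    edges = set()
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b: edges.add(tuple(sorted((a, b))))
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b: edges.add(tuple(sorted((a, b))))

    def cost(a, b):
        sa, qa, na = stats[a]; sb, qb, nb = stats[b]
        ma, mb = sa / na, sb / nb
        var = (qa + qb) / (na + nb) - ((sa + sb) / (na + nb)) ** 2
        return abs(ma - mb) / (np.sqrt(max(var, 1e-12)) * np.sqrt(1 / na + 1 / nb))

    parent = {r: r for r in stats}
    def find(r):
        while parent[r] != r:
            r = parent[r]
        return r

    changed = True
    while changed and edges:
        a, b = min(edges, key=lambda e: cost(e[0], e[1]))    # cheapest merge first
        changed = cost(a, b) < alpha                          # stopping criterion
        if changed:
            sa, qa, na = stats[a]; sb, qb, nb = stats[b]
            stats[a] = [sa + sb, qa + qb, na + nb]
            parent[b] = a
            edges = {tuple(sorted((find(u), find(v)))) for (u, v) in edges}
            edges = {e for e in edges if e[0] != e[1]}
    return np.vectorize(find)(labels)
```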

Journal ArticleDOI
TL;DR: The aim of this article is to thoroughly evaluate and categorise the most relevant algorithms with respect to the modality behind the integration of these two fundamental image attributes.

Journal ArticleDOI
TL;DR: A new mouth-structure segmentation methodology uses pixel color classification, segmentation refinement, and fitted region-of-interest clipping to improve the speed and accuracy of mouth-structure segmentation using standard database images.
Abstract: Lip-contour extraction has great potential for human-machine interface and communication systems, but most existing techniques are inappropriate for changing poses, malformations, or whole-mouth descriptions. A new mouth-structure segmentation methodology uses pixel color classification, segmentation refinement, and fitted region-of-interest clipping to improve the speed and accuracy of mouth-structure segmentation using standard database images.

Journal ArticleDOI
TL;DR: This paper proposes a hierarchical segmentation process, based on agglomerative merging, that re-estimates boundary strength as the segmentation progresses, and applies Gestalt grouping principles using a conditional random field (CRF) model.
Abstract: Occlusion reasoning is a fundamental problem in computer vision. In this paper, we propose an algorithm to recover the occlusion boundaries and depth ordering of free-standing structures in the scene. Rather than viewing the problem as one of pure image processing, our approach employs cues from an estimated surface layout and applies Gestalt grouping principles using a conditional random field (CRF) model. We propose a hierarchical segmentation process, based on agglomerative merging, that re-estimates boundary strength as the segmentation progresses. Our experiments on the Geometric Context dataset validate our choices for features, our iterative refinement of classifiers, and our CRF model. In experiments on the Berkeley Segmentation Dataset, PASCAL VOC 2008, and LabelMe, we also show that the trained algorithm generalizes to other datasets and can be used as an object boundary predictor with figure/ground labels.

Proceedings ArticleDOI
06 Nov 2011
TL;DR: A new scalable, alternation-based algorithm for co-segmentation, BiCoS, is introduced, which is simpler than many of its predecessors, and yet has superior performance on standard benchmark image datasets.
Abstract: The objective of this paper is the unsupervised segmentation of image training sets into foreground and background in order to improve image classification performance. To this end we introduce a new scalable, alternation-based algorithm for co-segmentation, BiCoS, which is simpler than many of its predecessors, and yet has superior performance on standard benchmark image datasets.

Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper presents a method for joint stereo matching and object segmentation that is able to recover the depth of regions that are fully occluded in one input view, which to the authors' knowledge is new for stereo matching.
Abstract: This paper presents a method for joint stereo matching and object segmentation. In our approach a 3D scene is represented as a collection of visually distinct and spatially coherent objects. Each object is characterized by three different aspects: a color model, a 3D plane that approximates the object's disparity distribution, and a novel 3D connectivity property. Inspired by Markov Random Field models of image segmentation, we employ object-level color models as a soft constraint, which can aid depth estimation in powerful ways. In particular, our method is able to recover the depth of regions that are fully occluded in one input view, which to our knowledge is new for stereo matching. Our model is formulated as an energy function that is optimized via fusion moves. We show high-quality disparity and object segmentation results on challenging image pairs as well as standard benchmarks. We believe our work not only demonstrates a novel synergy between the areas of image segmentation and stereo matching, but may also inspire new work in the domain of automatic and interactive object-level scene manipulation.

Journal ArticleDOI
TL;DR: The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available, and then attempts to correct such errors in segmentations produced by the host method on new images.

Journal ArticleDOI
Miao Ma, Jianhui Liang, Min Guo, Yi Fan, Yilong Yin
01 Dec 2011
TL;DR: Experimental results indicate that the proposed fast SAR image segmentation method is superior to Genetic Algorithm based and Artificial Fish Swarm based segmentation methods in terms of segmentation accuracy and segmentation time.
Abstract: Due to the presence of speckle noise, segmentation of Synthetic Aperture Radar (SAR) images is still a challenging problem. This paper proposes a fast SAR image segmentation method based on Artificial Bee Colony (ABC) algorithm. In this method, threshold estimation is regarded as a search procedure that searches for an appropriate value in a continuous grayscale interval. Hence, ABC algorithm is introduced to search for the optimal threshold. In order to get an efficient fitness function for ABC algorithm, after the definition of grey number in Grey theory, the original image is decomposed by discrete wavelet transform. Then, a filtered image is produced by performing a noise reduction to the approximation image reconstructed with low-frequency coefficients. At the same time, a gradient image is reconstructed with some high-frequency coefficients. A co-occurrence matrix based on the filtered image and the gradient image is therefore constructed, and an improved two-dimensional grey entropy is defined to serve as the fitness function of ABC algorithm. Finally, by the swarm intelligence of employed bees, onlookers and scouts in honey bee colony, the optimal threshold is rapidly discovered. Experimental results indicate that the proposed method is superior to Genetic Algorithm (GA) based and Artificial Fish Swarm (AFS) based segmentation methods in terms of segmentation accuracy and segmentation time.
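
A much-simplified sketch of the thresholding search follows: a Gaussian-filtered image stands in for the wavelet-denoised approximation, a Sobel gradient stands in for the high-frequency reconstruction, the fitness is a plain two-dimensional entropy of the co-occurrence histogram rather than the paper's improved grey entropy, and the bee-colony loop is reduced to its essentials. The 8-bit intensity assumption and all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def fitness(t, hist2d):
    """Toy 2D-entropy fitness for threshold t on a (filtered, gradient)
    co-occurrence histogram (stand-in for the improved grey entropy)."""
    def ent(h):
        p = h / (h.sum() + 1e-12)
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    return ent(hist2d[int(t):, :]) + ent(hist2d[:int(t), :])

def abc_threshold(img, n_bees=10, n_iter=30, limit=5, seed=0):
    """Sketch of an Artificial-Bee-Colony search for a segmentation threshold.
    img is assumed to be an 8-bit grayscale image (values 0..255)."""
    rng = np.random.default_rng(seed)
    smooth = gaussian_filter(img.astype(float), 1.5)     # stand-in for wavelet denoising
    grad = np.hypot(sobel(smooth, axis=0), sobel(smooth, axis=1))
    hist2d, _, _ = np.histogram2d(smooth.ravel(), grad.ravel(), bins=256,
                                  range=[[0, 256], [0, grad.max() + 1e-9]])
    lo, hi = 1, 255
    pos = rng.uniform(lo, hi, n_bees)                    # food sources = thresholds
    fit = np.array([fitness(t, hist2d) for t in pos])
    trials = np.zeros(n_bees, dtype=int)
    for _ in range(n_iter):
        for phase in ("employed", "onlooker"):
            if phase == "employed":
                idx = np.arange(n_bees)
            else:                                        # onlookers pick sources by fitness
                idx = rng.choice(n_bees, size=n_bees, p=fit / fit.sum())
            for i in idx:
                k = rng.integers(n_bees)
                cand = np.clip(pos[i] + rng.uniform(-1, 1) * (pos[i] - pos[k]), lo, hi)
                f = fitness(cand, hist2d)
                if f > fit[i]:
                    pos[i], fit[i], trials[i] = cand, f, 0
                else:
                    trials[i] += 1
        scouts = trials > limit                          # abandon stale food sources
        pos[scouts] = rng.uniform(lo, hi, scouts.sum())
        fit[scouts] = [fitness(t, hist2d) for t in pos[scouts]]
        trials[scouts] = 0
    best = pos[np.argmax(fit)]
    return best, smooth >= best
```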

Proceedings Article
12 Dec 2011
TL;DR: This paper shows that if, instead of a flat partitioning, the image is represented by a hierarchical segmentation tree, then the resulting energy combining unary and boundary terms can still be optimized using graph cut (with all the corresponding benefits of global optimality and efficiency).
Abstract: Graph cut optimization is one of the standard workhorses of image segmentation since for binary random field representations of the image, it gives globally optimal results and there are efficient polynomial time implementations. Often, the random field is applied over a flat partitioning of the image into non-intersecting elements, such as pixels or super-pixels. In the paper we show that if, instead of a flat partitioning, the image is represented by a hierarchical segmentation tree, then the resulting energy combining unary and boundary terms can still be optimized using graph cut (with all the corresponding benefits of global optimality and efficiency). As a result of such inference, the image gets partitioned into a set of segments that may come from different layers of the tree. We apply this formulation, which we call the pylon model, to the task of semantic segmentation where the goal is to separate an image into areas belonging to different semantic classes. The experiments highlight the advantage of inference on a segmentation tree (over a flat partitioning) and demonstrate that the optimization in the pylon model is able to flexibly choose the level of segmentation across the image. Overall, the proposed system has superior segmentation accuracy on several datasets (Graz-02, Stanford background) compared to previously suggested approaches.
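
To make the hierarchical-inference idea concrete, the sketch below selects segments from a segmentation tree so that every leaf is covered exactly once, minimizing unary costs only, with a simple bottom-up dynamic program. The pylon model itself also includes boundary terms and is optimized with graph cut; this unary-only special case is just an illustration, and the tree and cost dictionaries are assumed inputs.

```python
def best_tree_labeling(tree, unary_cost):
    """Sketch of inference over a segmentation tree with unary costs only.

    tree:       dict mapping node id -> list of child ids ([] for leaves)
    unary_cost: dict mapping node id -> cost of selecting that segment
                (e.g., the best per-class classifier cost for the segment)
    Returns (total_cost, selected): the chosen segments cover every leaf exactly once.
    """
    def solve(node):
        if not tree[node]:                        # leaf: it must be covered by itself
            return unary_cost[node], [node]
        child_cost, child_sel = 0.0, []
        for c in tree[node]:
            cc, cs = solve(c)
            child_cost += cc
            child_sel += cs
        if unary_cost[node] <= child_cost:        # take the whole segment...
            return unary_cost[node], [node]
        return child_cost, child_sel              # ...or its decomposition into children

    children = {c for ch in tree.values() for c in ch}
    root = next(n for n in tree if n not in children)
    return solve(root)
```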

Journal ArticleDOI
TL;DR: Experimental evidence shows that the proposed method achieves very effective segmentation results and computational behavior, decreasing the time and increasing the quality of color image segmentation in comparison with state-of-the-art segmentation methods recently proposed in the literature.

Proceedings Article
12 Dec 2011
TL;DR: Experimental results on various datasets show that the proposed higher-order correlation clustering outperforms other state-of-the-art image segmentation algorithms.
Abstract: For many of the state-of-the-art computer vision algorithms, image segmentation is an important preprocessing step. As such, several image segmentation algorithms have been proposed, however, with certain reservation due to high computational load and many hand-tuning parameters. Correlation clustering, a graph-partitioning algorithm often used in natural language processing and document clustering, has the potential to perform better than previously proposed image segmentation algorithms. We improve the basic correlation clustering formulation by taking into account higher-order cluster relationships. This improves clustering in the presence of local boundary ambiguities. We first apply the pairwise correlation clustering to image segmentation over a pairwise superpixel graph and then develop higher-order correlation clustering over a hypergraph that considers higher-order relations among superpixels. Fast inference is possible by linear programming relaxation, and also effective parameter learning framework by structured support vector machine is possible. Experimental results on various datasets show that the proposed higher-order correlation clustering outperforms other state-of-the-art image segmentation algorithms.

Journal ArticleDOI
TL;DR: Wang et al. propose a fully automatic new approach for color texture image segmentation based on the neutrosophic set (NS) and multiresolution wavelet transformation, which aims to segment natural scene images in which the color and texture of each region do not have uniform statistical characteristics.

Journal ArticleDOI
TL;DR: In this paper, a Gaussian distribution is used to model a homogeneously textured region of a natural image and the region boundary can be effectively coded by an adaptive chain code.
Abstract: We present a novel algorithm for segmentation of natural images that harnesses the principle of minimum description length (MDL). Our method is based on observations that a homogeneously textured region of a natural image can be well modeled by a Gaussian distribution and the region boundary can be effectively coded by an adaptive chain code. The optimal segmentation of an image is the one that gives the shortest coding length for encoding all textures and boundaries in the image, and is obtained via an agglomerative clustering process applied to a hierarchy of decreasing window sizes as multi-scale texture features. The optimal segmentation also provides an accurate estimate of the overall coding length and hence the true entropy of the image. We test our algorithm on the publicly available Berkeley Segmentation Dataset. It achieves state-of-the-art segmentation results compared to other existing methods.
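
A sketch of the per-region coding length that drives such an MDL criterion is given below: a lossy (rate-distortion) Gaussian term for the windowed texture features plus log2(8) bits per boundary pixel for an 8-direction chain code. The constants and exact terms of the authors' coding length are not reproduced here; in the agglomerative pass, two adjacent regions are merged whenever the merged coding length is shorter than the sum of their individual lengths.

```python
import numpy as np

def region_coding_length(features, boundary_len, eps=0.1):
    """Approximate number of bits to encode one region: a simplified lossy
    Gaussian (rate-distortion) term for the texture features plus a Freeman
    chain-code term for the boundary.

    features:     (N, D) windowed texture features of the region's pixels
    boundary_len: number of pixels on the region boundary
    eps:          allowed distortion (illustrative value)
    """
    N, D = features.shape
    X = features - features.mean(axis=0)
    cov = X.T @ X / N
    # bits for the mean-subtracted samples at squared distortion eps^2
    _, logdet = np.linalg.slogdet(np.eye(D) + (D / eps**2) * cov)
    texture_bits = (N + D) / 2.0 * logdet / np.log(2)
    boundary_bits = boundary_len * np.log2(8)     # 8-direction chain code
    return texture_bits + boundary_bits
```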

Book ChapterDOI
18 Sep 2011
TL;DR: A novel method for applying active learning strategies to interactive 3D image segmentation by constructing an "uncertainty field" over the image domain based on boundary, regional, smoothness and entropy terms.
Abstract: We propose a novel method for applying active learning strategies to interactive 3D image segmentation. Active learning has been recently introduced to the field of image segmentation. However, so far discussions have focused on 2D images only. Here, we frame interactive 3D image segmentation as a classification problem and incorporate active learning in order to alleviate the user from choosing where to provide interactive input. Specifically, we evaluate a given segmentation by constructing an "uncertainty field" over the image domain based on boundary, regional, smoothness and entropy terms. We then calculate and highlight the plane of maximal uncertainty in a batch query step. The user can proceed to guide the labeling of the data on the query plane, hence actively providing additional training data where the classifier has the least confidence. We validate our method against random plane selection showing an average DSC improvement of 10% in the first five plane suggestions (batch queries). Furthermore, our user study shows that our method saves the user 64% of their time, on average.
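
The sketch below illustrates the batch query step with a stripped-down uncertainty field: only the entropy term of the paper's boundary/regional/smoothness/entropy combination is used, and the query plane is restricted to axis-aligned slices rather than arbitrary planes.

```python
import numpy as np

def query_plane(prob):
    """Sketch: build a per-voxel uncertainty field from foreground probabilities
    and return the axis-aligned slice with maximal mean uncertainty.

    prob: 3D array of foreground probabilities in (0, 1).
    Returns (axis, slice_index, uncertainty_volume).
    """
    p = np.clip(prob, 1e-6, 1 - 1e-6)
    uncertainty = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # binary entropy
    best = (None, None, -np.inf)
    for ax in range(3):
        # mean uncertainty of every slice perpendicular to this axis
        means = uncertainty.mean(axis=tuple(a for a in range(3) if a != ax))
        k = int(np.argmax(means))
        if means[k] > best[2]:
            best = (ax, k, means[k])
    axis, idx, _ = best
    return axis, idx, uncertainty
```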

Journal ArticleDOI
TL;DR: This paper proposes a novel neonatal image segmentation method by combining local intensity information, atlas spatial prior, and cortical thickness constraint in a single level-set framework and provides a robust and reliable tissue surface initialization for the proposed method by using a convex optimization technique.

Journal ArticleDOI
TL;DR: This paper replaces a tedious manual process that is often based on guess-work and luck by a principled approach that systematically explores the parameter space and relies on existing ground-truth images in order to evaluate the "goodness" of an image segmentation technique.
Abstract: In this paper we address the difficult problem of parameter-finding in image segmentation. We replace a tedious manual process that is often based on guess-work and luck by a principled approach that systematically explores the parameter space. Our core idea is the following two-stage technique: We start with a sparse sampling of the parameter space and apply a statistical model to estimate the response of the segmentation algorithm. The statistical model incorporates a model of uncertainty of the estimation which we use in conjunction with the actual estimate in (visually) guiding the user towards areas that need refinement by placing additional sample points. In the second stage the user navigates through the parameter space in order to determine areas where the response value (goodness of segmentation) is high. In our exploration we rely on existing ground-truth images in order to evaluate the "goodness" of an image segmentation technique. We evaluate its usefulness by demonstrating this technique on two image segmentation algorithms: a three parameter model to detect microtubules in electron tomograms and an eight parameter model to identify functional regions in dynamic Positron Emission Tomography scans.
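
A sketch of the first stage follows: sparsely sample the parameter space, score each segmentation against ground truth, fit a statistical model of the response, and use its predictive uncertainty to suggest where additional samples should be placed. A Gaussian process from scikit-learn stands in for the statistical model here, which is an assumption; the paper does not necessarily use this exact model, and the sampling scheme and counts are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def explore_parameters(run_segmentation, score_vs_ground_truth, bounds,
                       n_initial=20, n_candidates=2000, seed=0):
    """Sketch of parameter-space exploration for a segmentation algorithm.

    run_segmentation(theta)      -> segmentation for parameter vector theta
    score_vs_ground_truth(seg)   -> scalar goodness against ground truth
    bounds: (n_params, 2) array of per-parameter lower/upper bounds
    Returns the fitted model, the most uncertain candidate points (refinement
    suggestions), and the current best parameter estimate.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_initial, dim))
    y = np.array([score_vs_ground_truth(run_segmentation(theta)) for theta in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)

    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_candidates, dim))
    mean, std = gp.predict(cand, return_std=True)
    refine = cand[np.argsort(-std)[:10]]          # most uncertain: sample these next
    best = cand[np.argmax(mean)]                  # current estimate of best parameters
    return gp, refine, best
```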

13 Oct 2011
TL;DR: Results show that the Gaussian blur technique is to be used in images with high noise and with a Gaussian function of small variance, whereas a larger-variance Gaussian function is more relevant in the segmentation of images.
Abstract: The present work investigates the qualitative and quantitative effects of the convolution of a Gaussian function with an image. Besides the evaluation of the commonly called "Gaussian blur" in the filtering of images, this work also investigates a methodology of segmentation using Gaussian blurring. Noise is inherent to the physical process of acquisition. Therefore, it is fundamental to know the effects of a filtering technique in order to choose the right technique to filter the image properly, since the segmentation process can be very expensive and time-consuming. An automated method for segmentation that saves time and human labor is always desirable. To evaluate the filtering characteristics, we chose a quality index in order to analyze in a quantitative way the effects of the convolution. Results show that the Gaussian blur technique is to be used in images with high noise and with a Gaussian function of small variance, whereas a larger-variance Gaussian function is more relevant in the segmentation of images.
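
The sketch below mirrors the filtering part of the study: blur a noisy image with Gaussian kernels of several variances and score each result against the clean reference with a quality index. Wang and Bovik's universal image quality index, computed globally, is used here, which may differ from the index chosen in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def universal_quality_index(x, y):
    """Wang-Bovik universal image quality index between two images,
    computed globally (the paper's exact index may differ)."""
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2) + 1e-12)

def blur_study(clean, noisy, sigmas=(0.5, 1, 2, 4)):
    """Evaluate Gaussian blur of a noisy image against the clean reference
    for several kernel widths (sigmas are illustrative values)."""
    for s in sigmas:
        blurred = gaussian_filter(noisy.astype(float), sigma=s)
        print(f"sigma={s:>4}: quality index vs clean = "
              f"{universal_quality_index(clean, blurred):.4f}")
```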