Proceedings ArticleDOI

A new method for image segmentation

01 Nov 2009 - Vol. 2, pp 123-125



Citations
Journal ArticleDOI


TL;DR: In this paper, the applicability of various thresholding and locally adaptive segmentation techniques for industrial and synchrotron X-ray CT images of natural and artificial porous media was investigated.
Abstract: Nondestructive imaging methods such as X-ray computed tomography (CT) yield high-resolution, three-dimensional representations of pore space and fluid distribution within porous materials. Steadily increasing computational capabilities and easier access to X-ray CT facilities have contributed to a recent surge in microporous media research with objectives ranging from theoretical aspects of fluid and interfacial dynamics at the pore scale to practical applications such as dense nonaqueous phase liquid transport and dissolution. In recent years, significant efforts and resources have been devoted to improve CT technology, microscale analysis, and fluid dynamics simulations. However, the development of adequate image segmentation methods for conversion of gray scale CT volumes into a discrete form that permits quantitative characterization of pore space features and subsequent modeling of liquid distribution and flow processes seems to lag. In this paper we investigated the applicability of various thresholding and locally adaptive segmentation techniques for industrial and synchrotron X-ray CT images of natural and artificial porous media. A comparison between directly measured and image-derived porosities clearly demonstrates that the application of different segmentation methods as well as associated operator biases yield vastly differing results. This illustrates the importance of the segmentation step for quantitative pore space analysis and fluid dynamics modeling. Only a few of the tested methods showed promise for both industrial and synchrotron tomography. Utilization of local image information such as spatial correlation as well as the application of locally adaptive techniques yielded significantly better results.
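
To make the comparison concrete, here is a minimal sketch (not the authors' implementation) of a global Otsu threshold versus a locally adaptive threshold applied to a single grayscale CT slice, with an image-derived porosity computed for each; the file name, parameter values, and the convention that pores are the darker phase are illustrative assumptions.

```python
# Minimal sketch comparing a global threshold with a locally adaptive one
# and reporting the image-derived porosity for each result.
from skimage import io
from skimage.filters import threshold_otsu, threshold_local

gray = io.imread("ct_slice.tif", as_gray=True)   # hypothetical CT slice

# Global (Otsu) threshold: one cutoff for the whole image.
t_global = threshold_otsu(gray)
pores_global = gray < t_global

# Locally adaptive threshold: a cutoff estimated per neighborhood, which
# tolerates the intensity drift typical of CT reconstructions.
t_local = threshold_local(gray, block_size=51, method="gaussian", offset=0.01)
pores_local = gray < t_local

# Image-derived porosity, to be compared against a directly measured value.
print("porosity (global):", pores_global.mean())
print("porosity (local): ", pores_local.mean())
```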

426 citations

Journal ArticleDOI


TL;DR: This review focuses on multiclass segmentation and provides detailed descriptions of why a specific method may fail, together with strategies for preventing failure by applying suitable image enhancement prior to segmentation.
Abstract: Easier access to X-ray microtomography (μCT) facilities has provided much new insight from high-resolution imaging for various problems in porous media research. Pore space analysis with respect to functional properties usually requires segmentation of the intensity data into different classes. Image segmentation is a nontrivial problem that may have a profound impact on all subsequent image analyses. This review deals with two issues that are neglected in most of the recent studies on image segmentation: (i) focus on multiclass segmentation and (ii) detailed descriptions as to why a specific method may fail together with strategies for preventing the failure by applying suitable image enhancement prior to segmentation. In this way, the presented algorithms become very robust and are less prone to operator bias. Three different test images are examined: a synthetic image with ground-truth information, a synchrotron image of precision beads with three different fluids residing in the pore space, and a μCT image of a soil sample containing macropores, rocks, organic matter, and the soil matrix. Image blur is identified as the major cause for poor segmentation results. Other impairments of the raw data like noise, ring artifacts, and intensity variation can be removed with current image enhancement methods. Bayesian Markov random field segmentation, watershed segmentation, and converging active contours are well suited for multiclass segmentation, yet with different success to correct for partial volume effects and conserve small image features simultaneously.
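
As an illustration of one of the multiclass approaches the review evaluates, the sketch below runs marker-based watershed segmentation on a gradient image with scikit-image; the input file, intensity cutoffs, and class meanings are assumptions for illustration, not the review's settings.

```python
# Minimal sketch of marker-based watershed for multiclass segmentation.
import numpy as np
from skimage import io
from skimage.filters import sobel
from skimage.segmentation import watershed

gray = io.imread("soil_ct.tif", as_gray=True)    # hypothetical soil CT slice

# Seed markers only where class membership is unambiguous; voxels affected by
# partial volume effects stay 0 and are filled in by the watershed flooding.
markers = np.zeros(gray.shape, dtype=np.int32)
markers[gray < 0.2] = 1                          # e.g. pores / air
markers[(gray > 0.45) & (gray < 0.6)] = 2        # e.g. soil matrix
markers[gray > 0.85] = 3                         # e.g. rocks / dense material

# Flood the gradient image from the markers so class boundaries land on edges.
elevation = sobel(gray)
labels = watershed(elevation, markers)
```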

384 citations

Journal ArticleDOI


TL;DR: This article presents an overview of the literature on the automatic analysis of images of printed and handwritten musical scores, together with a reference scheme for any researcher wanting to compare new OMR algorithms against well-known ones.
Abstract: For centuries, music has been shared and remembered by two traditions: aural transmission and in the form of written documents normally called musical scores. Many of these scores exist in the form of unpublished manuscripts and hence they are in danger of being lost through the normal ravages of time. To preserve the music some form of typesetting or, ideally, a computer system that can automatically decode the symbolic images and create new scores is required. Programs analogous to optical character recognition systems called optical music recognition (OMR) systems have been under intensive development for many years. However, the results to date are far from ideal. Each of the proposed methods emphasizes different properties and therefore makes it difficult to effectively evaluate its competitive advantages. This article provides an overview of the literature concerning the automatic analysis of images of printed and handwritten musical scores. For self-containment and for the benefit of the reader, an introduction to OMR processing systems precedes the literature overview. The following study presents a reference scheme for any researcher wanting to compare new OMR algorithms against well-known ones.

216 citations


Cites background from "A new method for image segmentation..."


Posted Content


TL;DR: This paper describes a locally adaptive thresholding technique that removes the background using the local mean and mean deviation, and uses an integral sum image as a preprocessing step to calculate the local mean.
Abstract: Image binarization is the process of separating pixel values into two groups, white as background and black as foreground. Thresholding plays a major role in the binarization of images. Thresholding can be categorized into global thresholding and local thresholding. In images with a uniform contrast distribution of background and foreground, such as document images, global thresholding is more appropriate. In degraded document images, where considerable background noise or variation in contrast and illumination exists, many pixels cannot be easily classified as foreground or background. In such cases, binarization with local thresholding is more appropriate. This paper describes a locally adaptive thresholding technique that removes the background by using the local mean and mean deviation. Normally the local mean computation time depends on the window size. Our technique uses an integral sum image as a preprocessing step to calculate the local mean. It does not involve calculations of standard deviations as in other locally adaptive techniques. This, together with the fact that the calculation of the mean is independent of window size, speeds up the process compared with other local thresholding techniques.
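
A rough sketch of the integral-image idea follows: the local mean over any window is obtained from four lookups into a summed-area table, so its cost does not grow with the window size. The window half-width `w`, the weight `k`, and the exact threshold rule below are illustrative assumptions rather than the paper's formula.

```python
# Minimal sketch of local-mean binarization driven by an integral image.
import numpy as np

def local_mean(img, w):
    """Mean over a (2w+1)x(2w+1) window at every pixel via an integral image."""
    H, W = img.shape
    ii = np.zeros((H + 1, W + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)   # summed-area table
    rows, cols = np.arange(H)[:, None], np.arange(W)[None, :]
    r0, r1 = np.clip(rows - w, 0, H), np.clip(rows + w + 1, 0, H)
    c0, c1 = np.clip(cols - w, 0, W), np.clip(cols + w + 1, 0, W)
    # Four table lookups per pixel, independent of the window size.
    sums = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
    return sums / ((r1 - r0) * (c1 - c0))

def binarize(img, w=15, k=0.2):
    m = local_mean(img, w)                        # local mean
    dev = local_mean(np.abs(img - m), w)          # local mean (absolute) deviation
    return img < m - k * dev                      # True = foreground (dark text)
```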

176 citations

Journal ArticleDOI


TL;DR: A novel binarization method for document images produced by cameras divides an image into several regions and decides how to binarize each region, using rules derived from a learning process that takes training images as input.
Abstract: In this paper, we propose a novel binarization method for document images produced by cameras. Such images often have varying degrees of brightness and require more careful treatment than merely applying a statistical method to obtain a threshold value. To resolve the problem, the proposed method divides an image into several regions and decides how to binarize each region. The decision rules are derived from a learning process that takes training images as input. Tests on images produced under normal and inadequate illumination conditions show that our method yields better visual quality and better OCR performance than three global binarization methods and four locally adaptive binarization methods.

100 citations


Cites methods from "A new method for image segmentation..."



References
Journal ArticleDOI


TL;DR: There is a natural uncertainty principle between detection and localization performance, which are the two main goals, and with this principle a single operator shape is derived which is optimal at any scale.
Abstract: This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
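
For reference, the sketch below shows the detector's core step (gradient magnitude of a Gaussian-smoothed image) and then calls scikit-image's canny(), which adds the non-maximum suppression and hysteresis described above; the input file name and sigma are placeholder assumptions.

```python
# Minimal sketch: maxima of the gradient magnitude of a Gaussian-smoothed
# image are edge candidates; canny() applies the complete detector.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import io
from skimage.feature import canny

gray = io.imread("photo.png", as_gray=True)      # hypothetical input image

sigma = 2.0                                      # operator scale
smoothed = gaussian_filter(gray, sigma)
gy, gx = np.gradient(smoothed)                   # derivatives along rows, cols
grad_mag = np.hypot(gx, gy)                      # candidate edge strength

edges = canny(gray, sigma=sigma)                 # full Canny edge map (boolean)
```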

26,639 citations


"A new method for image segmentation..." refers methods in this paper


Journal ArticleDOI


TL;DR: This work presents a simple and efficient implementation of Lloyd's k-means clustering algorithm, which it calls the filtering algorithm, and establishes the algorithm's practical efficiency both analytically and empirically.
Abstract: In k-means clustering, we are given a set of n data points in d-dimensional space R/sup d/ and an integer k and the problem is to determine a set of k points in Rd, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's (1982) algorithm. We present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation.
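
The sketch below shows plain Lloyd iterations on gray values as a crude intensity-based segmentation; it is not the kd-tree filtering variant the paper describes, and the number of clusters, iteration count, and seed are arbitrary illustrative choices.

```python
# Minimal sketch of Lloyd's k-means on 1-D pixel intensities.
import numpy as np

def kmeans_1d(values, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assignment step: each value goes to its nearest center.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Update step: each center moves to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Usage on an image (hypothetical input):
#   img = skimage.io.imread("slice.png", as_gray=True)
#   labels, centers = kmeans_1d(img.ravel(), k=3)
#   segmented = labels.reshape(img.shape)
```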

4,757 citations


"A new method for image segmentation..." refers methods in this paper


Book


15 Sep 1994
TL;DR: The fundamental principles of digital image processing are explained, along with practical suggestions for improving the quality and efficiency of image processing.
Abstract: What Is Image Processing?. Fundamentals of Digital Image Processing. The Digital Image. PROCESSING CONCEPTS. Image Enhancement and Restoration. Image Analysis. Image Compression. Image Synthesis. PROCESSING SYSTEMS. Image Origination and Display. Image Data Handling. Image Data Processing. PROCESSING IN ACTION. Image Operation Studies. Appendices. Glossary. Index.

444 citations

Proceedings ArticleDOI


12 May 1998
TL;DR: A novel method for measuring the orientation of an edge is introduced and shown to be without error in the noise-free case, and the edge detection performance of the wreath product transform is shown to be superior to that of many standard edge detectors.
Abstract: Wreath product group based spectral analysis has led to the development of the wreath product transform, a new multiresolution transform closely related to the wavelet transform. We derive the filter bank implementation of a simple wreath product transform and show that it is, in fact, a multiresolution Roberts (1965) Cross edge detector. We also derive the relationship between this transform and the two-dimensional Haar wavelet transform. We prove that using a non-traditional metric for measuring edge amplitude with the wreath product transform yields a rotation- and translation-invariant edge detector. We introduce a novel method for measuring the orientation of an edge and show that it is without error in the noise-free case. The edge detection performance of the wreath product transform is shown to be superior to that of many standard edge detectors.
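
For context, here is a minimal sketch of the classical Roberts Cross operator that the transform is shown to generalize; it is not the wreath product filter bank or the non-traditional amplitude metric from the paper, and the input file name is a placeholder.

```python
# Minimal sketch of the Roberts Cross edge operator with the standard L2
# edge amplitude.
import numpy as np
from scipy.ndimage import convolve
from skimage import io

gray = io.imread("frame.png", as_gray=True)      # hypothetical input image

k1 = np.array([[1.0, 0.0], [0.0, -1.0]])         # diagonal difference
k2 = np.array([[0.0, 1.0], [-1.0, 0.0]])         # anti-diagonal difference

g1 = convolve(gray, k1)
g2 = convolve(gray, k2)
edge_magnitude = np.hypot(g1, g2)                # standard L2 edge amplitude
```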

17 citations


