
Statistical Measurement of Ultrasound Placenta Images Complicated by Gestational Diabetes Mellitus Using Segmentation Approach.

01 Jan 2011 - Vol. 2, pp. 332-343
TL;DR: Ultrasound screening of the placenta in the initial stages of gestation helps identify the complications GDM induces in placental development, which in turn governs fetal growth.
Abstract: Medical diagnosis is a major challenge faced by medical experts, and highly specialized tools are necessary to assist them in diagnosing diseases. Gestational Diabetes Mellitus (GDM) is a condition in pregnant women that increases blood sugar levels and complicates pregnancy by affecting placental growth. Ultrasound screening of the placenta in the initial stages of gestation helps identify the complications GDM induces in placental development, which in turn governs fetal growth. This work focuses on classifying ultrasound placenta images as normal or abnormal based on statistical measurements. Ultrasound images are usually low in resolution, which may lead to loss of characteristic features. The placenta images obtained in an ultrasound examination are stereo mapped to reconstruct the placenta structure, and dimensionality reduction is performed on the stereo-mapped images using wavelet decomposition. The ultrasound placenta image is then segmented using a watershed approach to obtain statistical measurements of the stereo-mapped placenta images. Using these statistical measurements, the ultrasound placenta images are classified as normal or abnormal by a Back Propagation neural network.
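The pipeline the abstract describes (wavelet-based dimensionality reduction, segmentation, then statistical measurements) can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the decomposition here is a single-level Haar transform with block-average normalisation, and a simple intensity threshold stands in for the watershed step.

```python
import numpy as np

def haar2d_level1(img):
    """One level of 2D Haar wavelet decomposition (block-average normalisation).

    Returns the approximation (LL) and detail (LH, HL, HH) subbands,
    halving each dimension -- the dimensionality-reduction step the
    abstract attributes to wavelet decomposition.
    """
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def region_statistics(img, mask):
    """Statistical measurements over a segmented region (boolean mask)."""
    vals = img[mask]
    return {
        "area": int(mask.sum()),
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "min": float(vals.min()),
        "max": float(vals.max()),
    }

# Toy 8x8 image: bright square region on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 100.0

ll, lh, hl, hh = haar2d_level1(img)   # 8x8 -> 4x4 approximation
mask = ll > ll.mean()                 # crude stand-in for watershed segmentation
stats = region_statistics(ll, mask)   # features a classifier could consume
```

In the paper such per-region statistics would be the feature vector fed to the Back Propagation network.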
Citations
Journal Article•DOI•
Changyang Li, Xiuying Wang, Stefan Eberl, Michael J. Fulham, David Dagan Feng
TL;DR: This work proposes a novel segmentation energy function with two distribution descriptors to model the background and the target, which outperforms other level set models for accuracy and immunity to noise.
Abstract: Segmentation of the target object(s) from images that have multiple complicated regions, mixture intensity distributions or are corrupted by noise poses a challenge for the level set models. In addition, the conventional piecewise smooth level set models normally require prior knowledge about the number of image segments. To address these problems, we propose a novel segmentation energy function with two distribution descriptors to model the background and the target. The single background descriptor models the heterogeneous background with multiple regions. Then, the target descriptor takes into account the intensity distribution and incorporates local spatial constraint. Our descriptors, which have more complete distribution information, construct the unique energy function to differentiate the target from the background and are more tolerant of image noise. We compare our approach to three other level set models: 1) the Chan-Vese; 2) the multiphase level set; and 3) the geodesic level set. This comparison using 260 synthetic images with varying levels and types of image noise and medical images with more complicated backgrounds showed that our method outperforms these models for accuracy and immunity to noise. On an additional set of 300 synthetic images, our model is also less sensitive to the contour initialization as well as to different types and levels of noise.
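The two-region idea this model builds on can be shown with a minimal sketch of the classical Chan-Vese data term it is compared against (curvature regularisation omitted for brevity; the toy image and box initialisation are illustrative):

```python
import numpy as np

def chan_vese_step(img, phi, dt=1.0):
    """One gradient step of a simplified Chan-Vese model.

    c1 and c2 are the mean intensities inside (phi > 0) and outside the
    contour; the data force moves phi so that pixels whose intensity is
    closer to c1 join the target region.
    """
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0
    c2 = img[~inside].mean() if (~inside).any() else 0.0
    force = -(img - c1) ** 2 + (img - c2) ** 2
    return phi + dt * force

# Toy image: bright 8x8 square on a dark background; the contour is
# initialised as a small box inside the object.
img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0
phi = -np.ones((16, 16)); phi[6:10, 6:10] = 1.0
for _ in range(5):
    phi = chan_vese_step(img, phi)
seg = phi > 0                        # recovered target region
```

The cited paper replaces these two scalar means with richer background and target distribution descriptors, which is what buys its noise tolerance.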

13 citations

Proceedings Article•
01 Jan 2014
TL;DR: A method that optimizes a matrix mapping with a data-dependent kernel for image feature extraction and classification, adaptively tuning the kernel parameter for the nonlinear mapping.
Abstract: Kernel-based nonlinear feature extraction is a feasible way to extract image features for classification. Current kernel-based methods suffer from two problems: 1) they operate on data vectors obtained by transforming the image matrix into a vector, which incurs storage and computational burden; and 2) the parameter of the kernel function heavily influences kernel-based learning. To solve these two problems, we present a method that optimizes a matrix mapping with a data-dependent kernel for image feature extraction and classification. The method implements the algorithm without transforming the matrix into a vector, and it adaptively optimizes the kernel parameter for the nonlinear mapping. Comprehensive experiments are implemented to evaluate the performance of the algorithms.
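The matrix-based kernel idea can be sketched as a Gaussian kernel evaluated directly on image matrices via the Frobenius norm, so no explicit flattening step is materialised. The data-dependent parameter optimisation the abstract describes is not reproduced here; `gamma` is just a fixed illustrative value.

```python
import numpy as np

def frobenius_rbf(A, B, gamma=0.1):
    """Gaussian kernel evaluated directly on two image matrices.

    Uses the Frobenius distance between the matrices themselves, avoiding
    the matrix-to-vector reshaping whose storage/computation cost the
    abstract criticizes. gamma is the kernel parameter that a
    data-dependent scheme would tune adaptively.
    """
    return np.exp(-gamma * np.linalg.norm(A - B, "fro") ** 2)

A = np.eye(3)
B = np.zeros((3, 3))
k_same = frobenius_rbf(A, A)   # identical matrices -> kernel value 1
k_diff = frobenius_rbf(A, B)   # distinct matrices -> value in (0, 1)
```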

12 citations


Cites methods from "Statistical Measurement of Ultrasou..."

  • ...The method achieves the excellent performances on image denoising, image deblurring, image restoration, and so on, and they are the basic operation in the image processing [20][23]....


01 Jan 2012
TL;DR: The proposed algorithm exploits an intermediate step derived from the empirical mode decomposition, which can decompose any nonlinear and non-stationary data into a number of intrinsic mode functions (IMFs).
Abstract: This paper presents a novel unsupervised image clustering approach based on the image histogram, which is processed by empirical mode decomposition (EMD). The proposed algorithm exploits an intermediate step of the EMD, which can decompose any nonlinear and non-stationary data into a number of intrinsic mode functions (IMFs). The IMFs of the image histogram have interesting characteristics and provide a novel workspace that is used to automatically detect the different clusters in the image under examination. The proposed method was applied to several real and synthetic images, and the obtained results show good image clustering robustness.
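A minimal sketch of one EMD sifting pass on a 1-D signal standing in for a histogram. This simplifies the standard algorithm: linear interpolation replaces the usual cubic-spline envelopes, and the stopping criterion and full IMF extraction loop are omitted.

```python
import numpy as np

def sift_once(x):
    """One sifting pass of empirical mode decomposition (simplified).

    Upper/lower envelopes are linear interpolations through the local
    maxima/minima; the candidate IMF is the signal minus the envelope
    mean, leaving the slower trend as the residue.
    """
    n = np.arange(len(x))
    maxima = [i for i in range(1, len(x) - 1) if x[i] > x[i-1] and x[i] > x[i+1]]
    minima = [i for i in range(1, len(x) - 1) if x[i] < x[i-1] and x[i] < x[i+1]]
    if len(maxima) < 2 or len(minima) < 2:
        return np.zeros_like(x)            # no oscillation left to extract
    upper = np.interp(n, maxima, x[maxima])
    lower = np.interp(n, minima, x[minima])
    return x - (upper + lower) / 2.0

# Toy "histogram": slow trend plus a fast oscillation.
t = np.linspace(0, 1, 256)
hist = 0.5 * t + 0.1 * np.sin(40 * np.pi * t)
imf = sift_once(hist)          # captures the fast oscillation
residue = hist - imf           # trend left after removing the IMF
```

The clustering method then inspects structures in such IMFs rather than in the raw histogram.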

11 citations


Cites background from "Statistical Measurement of Ultrasou..."

  • ...Clustering participates in pattern recognition, spatial data analysis, image processing, image classification in world wide web and large image databases, image segmentation, document retrieval, data mining and generally in data analysis [4, 6, 10, 12, 13, 21, 22, 24, 29, 30, 31, 32, 38, 41, 43]....


Journal Article•DOI•
TL;DR: This paper’s contribution was to propose a novel Fuzzy Association Rule for improving traditional association rules and experimental results on a database of 6000 general-purpose images demonstrated the superiority of the proposed algorithm.
Abstract: One of the major challenges in content-based information retrieval and machine learning is to build the so-called "semantic classifier", which can effectively and efficiently classify semantic concepts in a large database. This paper deals with semantic image classification based on hierarchical Fuzzy Association Rules (FARs) mined from the image database. Intuitively, an association rule is a unique and significant combination of image features and a semantic concept, which determines the degree of correlation between features and the concept. The main idea behind this approach is that any visual concept in an image has some associated features, so there are strong correlations between concepts and their corresponding features. Regardless of the semantic gap, an image concept appears when the corresponding features emerge in an image, and vice versa. Specifically, this paper's contribution is a novel Fuzzy Association Rule that improves on traditional association rules. Moreover, it establishes a hierarchical fuzzy rule base in the training phase and sets up a corresponding fuzzy inference engine to classify images in the testing phase. The presented approach is independent of image segmentation and can be applied to multi-label images. Experimental results on a database of 6000 general-purpose images demonstrate the superiority of the proposed algorithm.
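The fuzzy support/confidence idea behind such rules can be sketched with the min t-norm. The membership values below are illustrative, not from the paper, and the hierarchical rule base and inference engine are not reproduced.

```python
import numpy as np

# Fuzzy memberships of four images in a feature term (e.g. "high texture
# energy") and in a semantic concept: illustrative numbers only.
feature = np.array([0.9, 0.8, 0.2, 0.1])
concept = np.array([1.0, 0.7, 0.3, 0.0])

# Fuzzy support of the rule feature -> concept: mean min-t-norm of
# antecedent and consequent memberships; fuzzy confidence normalises the
# joint support by the antecedent's total membership.
joint = np.minimum(feature, concept)
support = joint.mean()
confidence = joint.sum() / feature.sum()
```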

7 citations

Journal Article•DOI•
TL;DR: The proposed approach, image fusion with stabilization and registration outperforms the existing techniques in terms of subjective and objective evaluation.

4 citations

References
Journal Article•DOI•
TL;DR: There is a natural uncertainty principle between detection and localization performance, which are the two main goals, and with this principle a single operator shape is derived which is optimal at any scale.
Abstract: This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
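The final stage of this detector, hysteresis edge linking with two thresholds, can be sketched in a few lines of numpy. The thresholds and the toy gradient-magnitude map are illustrative.

```python
import numpy as np
from collections import deque

def hysteresis(mag, low, high):
    """Double-threshold edge linking (the last stage of the Canny detector).

    Pixels above `high` are seed edges; pixels between the two thresholds
    are kept only if connected (8-neighbourhood) to a seed, i.e. only if
    there is a path to a pixel above the high threshold.
    """
    strong = mag >= high
    weak = (mag >= low) & ~strong
    edges = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    while q:                              # breadth-first walk from seeds
        r, c = q.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < mag.shape[0] and 0 <= cc < mag.shape[1]:
                    if weak[rr, cc] and not edges[rr, cc]:
                        edges[rr, cc] = True
                        q.append((rr, cc))
    return edges

mag = np.array([[0.0, 0.4, 0.9],
                [0.0, 0.0, 0.0],
                [0.4, 0.0, 0.0]])
edges = hysteresis(mag, low=0.3, high=0.8)
# (0,1) is kept: connected to the strong pixel (0,2).
# (2,0) is dropped: above `low` but isolated from any strong pixel.
```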

28,073 citations


"Statistical Measurement of Ultrasou..." refers background in this paper

  • ...If the magnitude is above the high threshold, it is made an edge and if the magnitude is between the two thresholds, then it is set to zero unless there is a path from this pixel to a pixel with a gradient above threshold two [10],[16]....


Proceedings Article•
E.E. Pissaloux
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, there are a number of problems which need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the incapability of existing descriptors to capture spatial relationships between the concepts represented, or by their incapability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
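The co-occurrence matrices mentioned above can be sketched as a grey-level co-occurrence matrix for a single displacement (no normalisation or symmetrisation; the toy image is illustrative):

```python
import numpy as np

def cooccurrence(img, dr, dc, levels):
    """Grey-level co-occurrence matrix for one displacement (dr, dc).

    C[i, j] counts pixel pairs where a level-i pixel has a level-j pixel
    at offset (dr, dc) -- a simple spatial-relationship statistic that
    plain per-pixel histograms cannot capture.
    """
    C = np.zeros((levels, levels), dtype=int)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                C[img[r, c], img[rr, cc]] += 1
    return C

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [1, 1, 0]])
C = cooccurrence(img, 0, 1, levels=2)   # horizontal neighbour pairs
```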

2,134 citations


"Statistical Measurement of Ultrasou..." refers methods in this paper

  • ...The binary image is obtained with useful details represented as 1 and others represented as 0 [17]....


Proceedings Article•DOI•
05 Apr 1985
TL;DR: Each of the major classes of image segmentation techniques is defined and several specific examples of each class of algorithm are described, illustrated with examples of segmentations performed on real images.
Abstract: There are now a wide variety of image segmentation techniques, some considered general purpose and some designed for specific classes of images. These techniques can be classified as: measurement space guided spatial clustering, single linkage region growing schemes, hybrid linkage region growing schemes, centroid linkage region growing schemes, spatial clustering schemes, and split-and-merge schemes. In this paper, we define each of the major classes of image segmentation techniques and describe several specific examples of each class of algorithm. We illustrate some of the techniques with examples of segmentations performed on real images.
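One entry in this taxonomy, single-linkage region growing, can be sketched as a breadth-first flood fill with an intensity-difference test (the tolerance and toy image are illustrative):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Single-linkage region growing from a seed pixel.

    A 4-neighbour joins the region when its intensity differs from the
    pixel that reached it by at most `tol` -- the single-linkage scheme
    in the survey's classification.
    """
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                    and not mask[rr, cc]
                    and abs(float(img[rr, cc]) - float(img[r, c])) <= tol):
                mask[rr, cc] = True
                q.append((rr, cc))
    return mask

# Two columns of similar intensities next to a much brighter column.
img = np.array([[10, 11, 50],
                [11, 12, 51],
                [10, 11, 52]])
mask = region_grow(img, seed=(0, 0), tol=2)   # grows over the left region only
```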

1,025 citations

Journal Article•DOI•
TL;DR: It is shown that "edge focusing", i.e., a coarse-to-fine tracking in a continuous manner, combines high positional accuracy with good noise-reduction, which is of vital interest in several applications.
Abstract: Edge detection in a gray-scale image at a fine resolution typically yields noise and unnecessary detail, whereas edge detection at a coarse resolution distorts edge contours. We show that "edge focusing", i.e., a coarse-to-fine tracking in a continuous manner, combines high positional accuracy with good noise-reduction. This is of vital interest in several applications. Junctions of different kinds are in this way restored with high precision, which is a basic requirement when performing (projective) geometric analysis of an image for the purpose of restoring the three-dimensional scene. Segmentation of a scene using geometric clues like parallelism, etc., is also facilitated by the algorithm, since unnecessary detail has been filtered away. There are indications that an extension of the focusing algorithm can classify edges, to some extent, into the categories diffuse and nondiffuse (for example diffuse illumination edges). The edge focusing algorithm contains two parameters, namely the coarseness of the resolution in the blurred image from where we start the focusing procedure, and a threshold on the gradient magnitude at this coarse level. The latter parameter seems less critical for the behavior of the algorithm and is not present in the focusing part, i.e., at finer resolutions. The step length of the scale parameter in the focusing scheme has been chosen so that edge elements do not move more than one pixel per focusing step.
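The coarse-to-fine tracking idea can be sketched in 1-D: locate the gradient maximum at a coarse Gaussian scale, then re-locate it within one sample per step at finer scales. This is a simplified illustration under assumed scales, not the paper's 2-D algorithm.

```python
import numpy as np

def gaussian_blur_1d(x, sigma):
    """Blur a 1-D signal with a sampled, normalised Gaussian kernel."""
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2)); k /= k.sum()
    xp = np.pad(x, radius, mode="edge")    # edge-pad to avoid boundary artifacts
    return np.convolve(xp, k, mode="valid")

def edge_focus(x, sigmas):
    """Coarse-to-fine edge focusing on a 1-D signal (simplified).

    Find the gradient maximum at the coarsest scale, then at each finer
    scale search only within one sample of the previous position -- the
    one-pixel-per-step rule from the abstract.
    """
    g = np.abs(np.gradient(gaussian_blur_1d(x, sigmas[0])))
    pos = int(np.argmax(g))
    for s in sigmas[1:]:
        g = np.abs(np.gradient(gaussian_blur_1d(x, s)))
        lo, hi = max(0, pos - 1), min(len(x) - 1, pos + 1)
        pos = lo + int(np.argmax(g[lo:hi + 1]))
    return pos

x = np.zeros(64); x[32:] = 1.0                 # step edge between samples 31 and 32
pos = edge_focus(x, sigmas=[8.0, 4.0, 2.0, 1.0])
```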

498 citations

Journal Article•
TL;DR: In this paper, the influence of edge focusing on the tunes and chromaticities of the NSLS rings is described and a correction to the fringe field gradient peculiar to a combined function magnet with strong edge focusing is also found.
Abstract: Beam transport matrix elements describing the linearly falling fringe field of a combined function bending magnet are expanded in powers of the fringe field length by iteratively solving the integral form of Hill's equation. The method is applicable to any linear optical element with variable focusing strength along the reference orbit. Results for the vertical and horizontal focal lengths agree with previous calculations for a zero gradient magnet and an added correction to the dispersion is found for this case. A correction to the fringe field gradient peculiar to a combined-function magnet with strong edge focusing is also found. The influence of edge focusing on the tunes and chromaticities of the NSLS rings is described. The improved chromaticity calculation for the booster was of particular interest since this ring has bending magnets with poletips shaped to achieve small positive chromaticities.

134 citations


"Statistical Measurement of Ultrasou..." refers background in this paper

  • ...Laplacian Edge detection searches for the zero crossing in the second derivative of the image [10],[12]....

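The zero-crossing search described in this excerpt can be sketched with a discrete 4-neighbour Laplacian (interior-only, strict sign-product test; the step image is illustrative):

```python
import numpy as np

def laplacian_zero_crossings(img):
    """Mark zero crossings of the discrete Laplacian (second derivative).

    The 4-neighbour Laplacian is computed on the interior, and a pixel is
    flagged where the Laplacian changes sign against its right or lower
    neighbour -- the zero-crossing search in the second derivative.
    """
    lap = np.zeros(img.shape, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    edges = np.zeros(img.shape, dtype=bool)
    edges[:, :-1] |= lap[:, :-1] * lap[:, 1:] < 0   # horizontal sign change
    edges[:-1, :] |= lap[:-1, :] * lap[1:, :] < 0   # vertical sign change
    return edges

img = np.zeros((8, 8)); img[:, 4:] = 1.0            # vertical step edge
edges = laplacian_zero_crossings(img)               # fires along the step at column 3/4
```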