Proceedings ArticleDOI

Statistical measurement of ultrasound placenta images using segmentation approach

01 Dec 2010, pp. 309-316
TL;DR: Ultrasound screening of the placenta in the initial stages of gestation helps to identify the complications induced by GDM on placental development, which in turn accounts for fetal growth.
Abstract: Medical diagnosis is a major challenge faced by medical experts, and highly specialized tools are necessary to assist them in diagnosing diseases. Gestational Diabetes Mellitus (GDM) is a condition in pregnant women that raises blood sugar levels and complicates pregnancy by affecting placental growth. Ultrasound screening of the placenta in the initial stages of gestation helps to identify the complications induced by GDM on placental development, which in turn accounts for fetal growth. This work focuses on classifying ultrasound placenta images as normal or abnormal based on statistical measurements. Ultrasound images are usually low in resolution, which may lead to loss of their characteristic features. The placenta images obtained in an ultrasound examination are stereo mapped to reconstruct the placenta structure, and dimensionality reduction is performed on the stereo-mapped images using wavelet decomposition. The image is then segmented using a watershed approach to obtain statistical measurements, from which the ultrasound placenta images are classified as normal or abnormal using a back-propagation neural network.
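The abstract outlines a multi-stage pipeline. Below is a minimal sketch of the image-processing stages, assuming PyWavelets, SciPy, scikit-image and scikit-learn; the stereo-mapping step and the paper's exact statistical features are not specified in the abstract, so per-region grey-level mean and standard deviation stand in as placeholder features, and scikit-learn's MLPClassifier stands in for the back-propagation network.

```python
# Minimal sketch of the pipeline described in the abstract.
# Placeholder choices (not from the paper): Haar wavelet, gradient-minima
# markers for the watershed, per-region mean/std features, MLPClassifier
# standing in for the back-propagation network.
import numpy as np
import pywt
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed
from sklearn.neural_network import MLPClassifier

def wavelet_reduce(img, wavelet="haar", level=2):
    """Dimensionality reduction: keep only the coarse approximation subband."""
    return pywt.wavedec2(img, wavelet, level=level)[0]

def region_statistics(img, n_regions=20):
    """Watershed-segment the image and collect simple per-region statistics."""
    gradient = sobel(img)
    markers, _ = ndi.label(gradient < gradient.mean())  # illustrative seeds
    labels = watershed(gradient, markers)
    feats = []
    for lab in np.unique(labels)[:n_regions]:
        region = img[labels == lab]
        feats += [region.mean(), region.std()]
    feats += [0.0] * (2 * n_regions - len(feats))       # pad to fixed length
    return np.array(feats)

def train_classifier(images, labels):
    """labels: 0 = normal placenta, 1 = abnormal."""
    X = np.vstack([region_statistics(wavelet_reduce(im)) for im in images])
    return MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
```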
Citations
Journal ArticleDOI
TL;DR: This review covers state-of-the-art segmentation and classification methodologies for the whole fetus and, more specifically, the fetal brain, lungs, liver, heart and placenta in magnetic resonance imaging and (3D) ultrasound for the first time.

70 citations

Proceedings ArticleDOI
Wen Li, Yan Li, Yide Ma
18 Jul 2012
TL;DR: A new effective contour tracking algorithm and representation method based on the pixel vertex matrix is proposed, which could effectively reduce the code stream for contours and hence increase the compression ratio of the image.
Abstract: Based on an analysis of the contours of irregular regions, and on the observation that long continuous runs and a few specific code combinations usually dominate a region boundary's vertex chain code, a new effective contour tracking algorithm and representation method based on the pixel vertex matrix is proposed. Moreover, we re-encode the new vertex chain code using a Huffman coding strategy and then select the more compressed result as the output. The results show that the new method can effectively reduce the code stream for contours and hence increase the compression ratio of the image.
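A minimal sketch of the Huffman re-encoding stage follows; the pixel-vertex-matrix contour tracking itself is not reproduced. It assumes a three-symbol vertex chain code (1, 2, 3), and the toy chain below is invented for illustration.

```python
# Sketch of the Huffman re-encoding stage only; the pixel-vertex-matrix
# contour tracking is not reproduced. Assumes a three-symbol vertex chain
# code (1, 2, 3); the toy chain below is invented for illustration.
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code from symbol frequencies."""
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, next_id, merged))
        next_id += 1
    return heap[0][2]

chain = [1, 3, 3, 2, 3, 3, 1, 3, 3, 3, 2, 3]     # toy vertex chain code
code = huffman_code(chain)
bits = "".join(code[s] for s in chain)
# Keep whichever stream is shorter, as in the paper's output selection:
print(len(bits), "Huffman bits vs", 2 * len(chain), "fixed 2-bit baseline")
```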

1 citation

Dissertation
13 Jul 2015
TL;DR: A new penalized likelihood iterative reconstruction algorithm for Positron Emission Tomography is proposed, in contrast to existing statistical algorithms based on the maximum likelihood or least squares cost function.
Abstract: Iterative image reconstruction methods have attracted considerable attention in the past decades for applications in Computed Tomography (CT), because they can fully incorporate the physical and statistical properties of the imaging process. So far, statistical reconstruction algorithms have been based on the maximum likelihood (ML) or the least squares cost function. The maximum likelihood-expectation maximization (ML-EM) algorithm, a general statistical method for estimating the image, computes projections that are close to the measured projection data. ML-based iterative reconstruction algorithms require considerable computational cost per iteration. The advantages of the iterative approach include better insensitivity to noise and the capability of reconstructing an optimal image from incomplete data. The method has been applied in emission tomography modalities like SPECT and PET, where there is significant attenuation along ray paths and noise statistics are relatively poor. Generally speaking, tomographic reconstruction from a limited number of data is a highly underdetermined, ill-posed problem. The projection data generated by the CT system are noisy, and the ML algorithm tends to amplify this noise, in particular noise artifacts, through successive iterations. This accumulation of noise forces a premature stopping of the ML-EM reconstruction process. Several methods have been developed to reduce this noise accumulation and improve the quality of the reconstructed images in tomography. The aim of this research is to propose a new penalized likelihood iterative reconstruction algorithm for Positron Emission Tomography.
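As background for the penalized variant the thesis proposes, here is a minimal sketch of the classic ML-EM update on a toy system; the system matrix, counts and iteration number are invented values, and keeping n_iter small reflects the early stopping the abstract describes.

```python
# Minimal sketch of the classic ML-EM update the thesis builds on; the
# proposed penalized-likelihood variant is not reproduced. The system
# matrix, counts and iteration number below are invented toy values;
# keeping n_iter small is the early stopping the abstract refers to.
import numpy as np

def ml_em(A, y, n_iter=20, eps=1e-12):
    """ML-EM: x <- x * A^T(y / Ax) / A^T 1."""
    x = np.ones(A.shape[1])                    # flat initial image
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)     # measured / forward projection
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

rng = np.random.default_rng(0)
A = rng.random((40, 16))                       # toy projector
y = rng.poisson(A @ rng.random(16) * 50.0)     # Poisson counts (emission data)
x_hat = ml_em(A, y.astype(float))
```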

Additional excerpts

  • ...Malathi et al [50] developed an algorithm to classify the ultrasound placenta images either as normal or abnormal, based on statistical measurements....


Journal ArticleDOI
TL;DR: This work proposes a framework for a dictionary-optimized sparse learning based MR super-resolution method, solving the problem of sample selection in dictionary learning for sparse reconstruction, and shows that dictionary-optimized sparse learning improves the performance of sparse representation.
Abstract: Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medicine and materials science, especially in medical diagnosis. It is feasible to improve the resolution of MR imaging by increasing radiation intensity, but high radiation intensity and long exposure to the magnetic field harm the human body; thus, in practical applications hardware imaging reaches its resolution limit. Software-based super-resolution technology is an effective way to improve image resolution. This work proposes a framework for a dictionary-optimized sparse learning based MR super-resolution method, which solves the problem of sample selection for dictionary learning in sparse reconstruction. A textural-complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that dictionary-optimized sparse learning improves the performance of sparse representation.
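A minimal sketch of the sparse-coding core follows, assuming scikit-learn's DictionaryLearning; the paper's textural-complexity sample selection is replaced here by a simple patch-variance ranking, which is only an illustrative stand-in.

```python
# Sketch of the sparse-coding core, assuming scikit-learn's
# DictionaryLearning; the paper's textural-complexity sample selection is
# replaced by a simple patch-variance ranking, an illustrative stand-in.
import numpy as np
from sklearn.decomposition import DictionaryLearning

def select_patches(patches, n_keep):
    """Keep the highest-variance patches as the 'informative' samples."""
    order = np.argsort(patches.var(axis=1))[::-1]
    return patches[order[:n_keep]]

rng = np.random.default_rng(0)
patches = rng.random((500, 64))                  # toy 8x8 patches, flattened
train = select_patches(patches, n_keep=200)
dico = DictionaryLearning(n_components=32, transform_algorithm="omp",
                          transform_n_nonzero_coefs=5, max_iter=20)
codes = dico.fit(train).transform(train)         # sparse codes over atoms
recon = codes @ dico.components_                 # sparse reconstruction
```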

Additional excerpts

  • ...The sparse representation model-based image processing performs well on image denoising [1], image deblurring [2], [3], image restoration [4]....


References
Journal ArticleDOI
TL;DR: There is a natural uncertainty principle between detection and localization performance, which are the two main goals, and with this principle a single operator shape is derived which is optimal at any scale.
Abstract: This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
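The "simple approximate implementation" the abstract mentions can be sketched directly: smooth with a Gaussian, take the gradient magnitude, and mark local maxima. The snippet below is that sketch only; the 3x3 maximum filter and mean threshold are crude stand-ins for the full detector's direction-quantized non-maximum suppression and hysteresis thresholding.

```python
# Sketch of the abstract's "simple approximate implementation": Gaussian
# smoothing, gradient magnitude, then keep local maxima. The 3x3 maximum
# filter and mean threshold are crude stand-ins for the full detector's
# direction-quantized suppression and hysteresis thresholding.
import numpy as np
from scipy import ndimage as ndi

def canny_core(img, sigma=2.0):
    smoothed = ndi.gaussian_filter(img.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    mag = np.hypot(gx, gy)
    maxima = (mag == ndi.maximum_filter(mag, size=3)) & (mag > mag.mean())
    return maxima
```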

28,073 citations

Book
01 Jan 1993
TL;DR: The digitized image and its properties are studied, including shape representation and description, linear discrete image transforms, and texture analysis.
Abstract: List of Algorithms. Preface. Possible Course Outlines. 1. Introduction. 2. The Image, Its Representations and Properties. 3. The Image, Its Mathematical and Physical Background. 4. Data Structures for Image Analysis. 5. Image Pre-Processing. 6. Segmentation I. 7. Segmentation II. 8. Shape Representation and Description. 9. Object Recognition. 10. Image Understanding. 11. 3D Geometry, Correspondence, 3D from Intensities. 12. Reconstruction from 3D. 13. Mathematical Morphology. 14. Image Data Compression. 15. Texture. 16. Motion Analysis. Index.

5,451 citations

Journal ArticleDOI
TL;DR: There are several image segmentation techniques, some considered general purpose and some designed for specific classes of images, as discussed by the authors; they can be classified as measurement space guided spatial clustering, single linkage region growing, hybrid linkage region growing, centroid linkage region growing, spatial clustering, and split-and-merge schemes.
Abstract: There are now a wide variety of image segmentation techniques, some considered general purpose and some designed for specific classes of images. These techniques can be classified as: measurement space guided spatial clustering, single linkage region growing schemes, hybrid linkage region growing schemes, centroid linkage region growing schemes, spatial clustering schemes, and split-and-merge schemes. In this paper, we define each of the major classes of image segmentation techniques and describe several specific examples of each class of algorithm. We illustrate some of the techniques with examples of segmentations performed on real images.
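As a concrete instance of one class in this taxonomy, here is a minimal single-linkage region-growing sketch: a pixel joins a region when its grey value is within a threshold of an already-accepted neighbour. The 4-neighbourhood and the threshold are arbitrary illustrative choices.

```python
# Illustrative single-linkage region growing (one class in the survey's
# taxonomy): a pixel joins the region when its grey value is within a
# threshold of an already-accepted 4-neighbour.
import numpy as np
from collections import deque

def region_grow(img, seed, thresh=10):
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not region[nr, nc]
                    and abs(int(img[nr, nc]) - int(img[r, c])) < thresh):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region
```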

2,009 citations

Journal ArticleDOI
TL;DR: It is shown that "edge focusing", i.e., a coarse-to-fine tracking in a continuous manner, combines high positional accuracy with good noise reduction, which is of vital interest in several applications.
Abstract: Edge detection in a gray-scale image at a fine resolution typically yields noise and unnecessary detail, whereas edge detection at a coarse resolution distorts edge contours. We show that "edge focusing", i.e., a coarse-to-fine tracking in a continuous manner, combines high positional accuracy with good noise-reduction. This is of vital interest in several applications. Junctions of different kinds are in this way restored with high precision, which is a basic requirement when performing (projective) geometric analysis of an image for the purpose of restoring the three-dimensional scene. Segmentation of a scene using geometric clues like parallelism, etc., is also facilitated by the algorithm, since unnecessary detail has been filtered away. There are indications that an extension of the focusing algorithm can classify edges, to some extent, into the categories diffuse and nondiffuse (for example diffuse illumination edges). The edge focusing algorithm contains two parameters, namely the coarseness of the resolution in the blurred image from where we start the focusing procedure, and a threshold on the gradient magnitude at this coarse level. The latter parameter seems less critical for the behavior of the algorithm and is not present in the focusing part, i.e., at finer resolutions. The step length of the scale parameter in the focusing scheme has been chosen so that edge elements do not move more than one pixel per focusing step.
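A minimal sketch of the coarse-to-fine idea follows, using scikit-image's Canny detector as a stand-in for the paper's own edge detector: edges found at the coarse scale gate the detections at each finer scale, with a one-pixel tolerance matching the one-pixel-per-step argument above.

```python
# Sketch of coarse-to-fine edge focusing, with scikit-image's Canny
# detector standing in for the paper's own detector.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import canny

def edge_focus(img, sigma_start=4.0, sigma_end=1.0, step=0.5):
    edges = canny(img, sigma=sigma_start)
    sigma = sigma_start - step
    while sigma >= sigma_end:
        band = ndi.binary_dilation(edges)          # one-pixel tolerance band
        edges = canny(img, sigma=sigma) & band     # keep tracked edges only
        sigma -= step
    return edges
```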

498 citations

Journal Article
TL;DR: In this paper, the influence of edge focusing on the tunes and chromaticities of the NSLS rings is described and a correction to the fringe field gradient peculiar to a combined function magnet with strong edge focusing is also found.
Abstract: Beam transport matrix elements describing the linearly falling fringe field of a combined function bending magnet are expanded in powers of the fringe field length by iteratively solving the integral form of Hill's equation. The method is applicable to any linear optical element with variable focusing strength along the reference orbit. Results for the vertical and horizontal focal lengths agree with previous calculations for a zero gradient magnet, and an added correction to the dispersion is found for this case. A correction to the fringe field gradient peculiar to a combined-function magnet with strong edge focusing is also found. The influence of edge focusing on the tunes and chromaticities of the NSLS rings is described. The improved chromaticity calculation for the booster was of particular interest since this ring has bending magnets with pole tips shaped to achieve small positive chromaticities.
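For orientation, the hard-edge (zero fringe length) limit of edge focusing can be written as textbook thin-lens transfer matrices; the toy sketch below shows that limit only and does not reproduce the paper's expansion in the fringe-field length or its combined-function gradient correction.

```python
# Toy hard-edge (zero fringe length) limit of dipole edge focusing: the
# pole-face rotation beta acts as a thin lens with M21 = +tan(beta)/rho
# in the horizontal plane and -tan(beta)/rho in the vertical plane.
# Textbook background only; the paper's fringe-field expansion and its
# combined-function gradient correction are not reproduced.
import numpy as np

def edge_matrix(rho, beta, plane="x"):
    sign = 1.0 if plane == "x" else -1.0
    m21 = sign * np.tan(beta) / rho     # thin-lens kick term
    return np.array([[1.0, 0.0], [m21, 1.0]])

Mx = edge_matrix(rho=5.0, beta=np.radians(10.0), plane="x")
My = edge_matrix(rho=5.0, beta=np.radians(10.0), plane="y")
```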

134 citations