Proceedings ArticleDOI

Statistical measurement of ultrasound placenta images using segmentation approach

01 Dec 2010-pp 309-316
TL;DR: The ultrasound screening of placenta in the initial stages of gestation helps to identify the complication induced by GDM on the placental development which accounts for the fetal growth.
Abstract: Medical diagnosis is a major challenge faced by medical experts, and highly specialized tools are necessary to assist them in diagnosing disease. Gestational Diabetes Mellitus (GDM) is a condition in pregnant women that raises blood sugar levels and complicates pregnancy by affecting placental growth. Ultrasound screening of the placenta in the initial stages of gestation helps to identify the complications induced by GDM on placental development, which in turn accounts for fetal growth. This work focuses on classifying ultrasound placenta images as normal or abnormal based on statistical measurements. Ultrasound images are usually low in resolution, which may lead to a loss of their characteristic features. The placenta images obtained in an ultrasound examination are stereo mapped to reconstruct the placenta structure, and dimensionality reduction is performed on the stereo-mapped images using wavelet decomposition. The placenta image is then segmented using a watershed approach to obtain the statistical measurements of the stereo-mapped images. Using these measurements, the ultrasound placenta images are classified as normal or abnormal using a Back Propagation neural network.
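As an illustration of the statistical-measurement step, the gray-level statistics of a segmented region can be computed as first-order features. The numpy sketch below (the feature set and the toy image are assumptions for illustration, not the paper's exact measurements) shows the idea:

```python
import numpy as np

def region_statistics(image, mask):
    """First-order statistical measurements of a segmented region.

    image: 2-D array of gray levels; mask: boolean array marking the
    (e.g. watershed-segmented) region of interest.
    """
    pixels = image[mask].astype(float)
    mean = pixels.mean()
    var = pixels.var()
    std = np.sqrt(var)
    # Skewness and kurtosis describe the asymmetry and peakedness
    # of the gray-level distribution inside the region.
    skew = ((pixels - mean) ** 3).mean() / std ** 3 if std > 0 else 0.0
    kurt = ((pixels - mean) ** 4).mean() / std ** 4 if std > 0 else 0.0
    return {"mean": mean, "variance": var, "skewness": skew, "kurtosis": kurt}

# Toy 4x4 "image"; the top two rows stand in for the segmented region
img = np.array([[10, 10, 50, 50],
                [10, 10, 50, 50],
                [90, 90, 130, 130],
                [90, 90, 130, 130]], dtype=float)
roi = np.zeros_like(img, dtype=bool)
roi[:2, :] = True              # region found by the segmentation step
feats = region_statistics(img, roi)
```

A feature vector built from such per-region statistics is the kind of input a Back Propagation network can be trained on.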
Citations
Journal ArticleDOI
TL;DR: This review covers state‐of‐the‐art segmentation and classification methodologies for the whole fetus and, more specifically, the fetal brain, lungs, liver, heart and placenta in magnetic resonance imaging and (3D) ultrasound for the first time.

70 citations

Proceedings ArticleDOI
Wen Li, Yan Li, Yide Ma
18 Jul 2012
TL;DR: A new effective contour tracking algorithm and representation method based on the pixel vertex matrix that could effectively reduce code stream for contours and hence increase the compression ratio of the image.
Abstract: Based on an analysis of the contours of irregular regions, and on the observation that a region boundary's vertex chain code usually contains long runs of identical codes and recurring code combinations, a new, effective contour tracking algorithm and representation method based on the pixel vertex matrix is proposed. Moreover, we re-encode the new vertex chain code using a Huffman coding strategy and select the more compressed result as the output. The results showed that the new method can effectively reduce the code stream for contours and hence increase the compression ratio of the image.
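A minimal sketch of the Huffman re-encoding step described above, assuming a small vertex-chain-code alphabet of {1, 2, 3}; the chain itself is a made-up example, not data from the paper:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table for a stream of chain-code symbols.

    A vertex chain code uses a small alphabet, and long runs of repeated
    symbols make the frequency distribution skewed, which is exactly
    where Huffman coding shortens the output stream.
    """
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol stream
        return {next(iter(freq)): "0"}
    # Heap items: (frequency, tiebreak id, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)     # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

chain = [1, 1, 1, 2, 1, 1, 3, 2, 1, 1, 1, 2]   # hypothetical vertex chain code
table = huffman_code(chain)
encoded = "".join(table[s] for s in chain)
```

For this 12-symbol chain the Huffman stream is 16 bits, versus 24 bits for a fixed 2-bit code per symbol.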

1 citation

Dissertation
13 Jul 2015
TL;DR: A new penalized likelihood iterative reconstruction algorithm for Positron Emission Tomography, based on the maximum likelihood or the least squares cost function is proposed.
Abstract: Iterative image reconstruction methods have attracted considerable attention in recent decades for applications in Computed Tomography (CT), because they can fully incorporate the physical and statistical properties of the imaging process. So far, all statistical reconstruction algorithms have been based on the maximum likelihood (ML) or the least squares cost function. The maximum likelihood-expectation maximization (ML-EM) algorithm, a general statistical method for estimating the image, computes projections that are close to the measured projection data. ML-based iterative reconstruction algorithms require considerable computational cost per iteration, but the iterative approach is less sensitive to noise and can reconstruct an optimal image from incomplete data. The method has been applied in emission tomography modalities such as SPECT and PET, where there is significant attenuation along ray paths and the noise statistics are relatively poor. Generally speaking, tomographic reconstruction from a limited amount of data is a highly underdetermined, ill-posed problem. The projection data generated by the CT system are noisy, and the ML algorithm tends to amplify this noise, and in particular noise artifacts, through the successive iterations; this accumulation of noise leads to premature stopping of the ML-EM reconstruction process. Several methods have been developed to reduce this accumulation of noise and improve the quality of reconstructed images in tomography. The aim of this research is to propose a new penalized likelihood iterative reconstruction algorithm for Positron Emission Tomography.
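For reference, the plain (unpenalized) ML-EM update the abstract discusses can be sketched in a few lines. The tiny 2-pixel, 2-ray system below is a made-up illustration, not the thesis's penalized algorithm:

```python
import numpy as np

def mlem(A, y, n_iter=1000):
    """Plain ML-EM for emission tomography.

    Multiplicative update: x <- x / (A^T 1) * A^T (y / (A x)),
    where A is the system matrix and y the measured projections.
    """
    x = np.ones(A.shape[1])                   # uniform initial image
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = y / np.maximum(proj, 1e-12)   # measured / estimated
        x = x / sens * (A.T @ ratio)          # EM update (stays nonnegative)
    return x

# Noiseless toy problem with a known ground-truth image
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
x_true = np.array([2.0, 4.0])
y = A @ x_true                                # simulated projections
x_hat = mlem(A, y)
```

With noiseless data the iterates converge to the true image; with noisy data the update amplifies noise over iterations, which is the problem the penalized variant targets.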

Additional excerpts

  • ...Malathi et al [50] developed an algorithm to classify the ultrasound placenta images either as normal or abnormal, based on statistical measurements....


Journal ArticleDOI
TL;DR: This work proposes a framework of dictionary-optimized sparse learning based MR super-resolution method to solve the problem of sample selection for dictionary learning of sparse reconstruction and shows that the dictionary- optimized sparse learning improves the performance of sparse representation.
Abstract: Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medicine and materials science, especially in medical diagnosis. It is feasible to improve the resolution of MR imaging by increasing the radiation intensity, but high radiation intensity and prolonged exposure to the magnetic field harm the human body; thus, in practical applications, hardware-based imaging reaches its resolution limit. Software-based super-resolution technology is an effective way to improve image resolution. This work proposes a framework for a dictionary-optimized, sparse-learning-based MR super-resolution method. The framework solves the problem of sample selection for dictionary learning in sparse reconstruction: a textural-complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that the dictionary-optimized sparse learning improves the performance of sparse representation.
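The sparse-representation step this framework builds on can be illustrated with a greedy matching-pursuit sketch over a toy dictionary; the dictionary and signal below are assumptions for illustration, not the paper's learned dictionary:

```python
import numpy as np

def matching_pursuit(D, y, n_atoms=2):
    """Greedy sparse coding of signal y over dictionary D (columns = atoms).

    A minimal stand-in for the sparse-representation step: repeatedly pick
    the atom most correlated with the residual and subtract its contribution.
    Atoms are assumed unit-norm.
    """
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual                 # correlate atoms with residual
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

# Toy orthonormal dictionary; the signal is exactly 2-sparse in it
D = np.eye(4)
y = np.array([0.0, 3.0, 0.0, 1.0])
c, r = matching_pursuit(D, y, n_atoms=2)
```

Dictionary learning replaces the fixed `D` here with atoms fitted to training patches; the paper's contribution is choosing which samples those patches come from.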

Additional excerpts

  • ...The sparse representation model-based image processing performs well on image denoising [1], image deblurring [2], [3], image restoration [4]....


References
Proceedings ArticleDOI
G. Malathi, V. Shanthi
16 Dec 2009
TL;DR: This pilot study was carried out to find the feasibility for detecting anomalies in placental growth due to the implications of gestational diabetics by considering the stereo image mapping based on wavelet analysis for 2D reconstruction.
Abstract: Medical diagnosis is the need of the hour. Gestational diabetes in women is the second leading cause of children born with birth defects. Ultrasound images are usually low in resolution, making diagnosis difficult, and specialized tools are required to assist medical experts in categorizing and diagnosing diseases accurately. If anomalies in the ultrasound images are detected during preliminary screening of the placenta, fetal loss can be minimized. This pilot study was carried out to assess the feasibility of detecting anomalies in placental growth due to the implications of gestational diabetes, using stereo image mapping based on wavelet analysis for 2D reconstruction. The research uses wavelet-based methods to extract features from the ultrasonic images of the placenta. The shape of the placenta is generated using a Back Propagation Network, and a Euclidean distance classifier is used to classify the ultrasonic placenta images.
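The Euclidean distance classifier mentioned above amounts to nearest-prototype classification in feature space. A minimal sketch, with hypothetical class prototypes standing in for the paper's learned feature vectors:

```python
import numpy as np

def euclidean_classify(feature, prototypes):
    """Assign the label of the closest prototype in Euclidean distance.

    feature: 1-D feature vector (e.g. wavelet features of one image);
    prototypes: {label: prototype vector}. Labels here are hypothetical.
    """
    labels = list(prototypes)
    dists = [np.linalg.norm(feature - prototypes[l]) for l in labels]
    return labels[int(np.argmin(dists))]

# Hypothetical class prototypes in a 3-dimensional feature space
protos = {"normal":   np.array([0.2, 0.5, 0.1]),
          "abnormal": np.array([0.8, 0.1, 0.6])}
label = euclidean_classify(np.array([0.25, 0.45, 0.15]), protos)
```

In practice the prototypes would be the per-class means of training feature vectors.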

7 citations


"Statistical measurement of ultrasou..." refers background in this paper

  • ...All other placentae were single-lobed and discoid in shape, with central attachment of the umbilical cord to the fetal surface of the placenta....


Journal ArticleDOI
TL;DR: This pilot study assesses the feasibility of classifying ultrasound images of the placenta in pregnancies complicated by diabetes, based on placental thickness, using statistical textural features.
Abstract: In the medical domain, one of the major challenges faced by medical experts is the extraction of critical information for diagnosis. Specialized tools are necessary to assist the experts in diagnosing disease, and information retrieval is difficult for ultrasound medical images because their low resolution makes diagnosis difficult. Gestational diabetes is a form of diabetes that affects pregnant women: it is believed that the hormones produced during pregnancy reduce a woman's receptivity to insulin, leading to high blood sugar levels. The duration of departures from normoglycemia in maternal diabetes is the critical factor; the earlier GDM is detected, the lesser its influence on placental development, which indirectly accounts for fetal growth and metabolism. This pilot study assesses the feasibility of classifying ultrasound images of the placenta in pregnancies complicated by diabetes, based on placental thickness, using statistical textural features.

6 citations


"Statistical measurement of ultrasou..." refers methods in this paper

  • ...The performance of the classifier was evaluated using textural features derived from co-occurrence matrices, Law's operators and neighborhood gray-tone difference matrices, applied to the classification of ultrasonic images of the placenta corresponding to different grades....

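The co-occurrence-matrix features mentioned in this excerpt can be sketched as follows; the displacement, gray-level count and the two Haralick-style features (contrast, energy) are illustrative choices, not the cited paper's exact configuration:

```python
import numpy as np

def glcm(image, levels=4, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    plus two textural features commonly derived from it.

    image: 2-D array of integer gray levels in [0, levels).
    """
    P = np.zeros((levels, levels))
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            i2, j2 = i + dy, j + dx
            if 0 <= i2 < h and 0 <= j2 < w:
                P[image[i, j], image[i2, j2]] += 1   # count the pixel pair
    P /= P.sum()                                     # normalize to probabilities
    idx = range(levels)
    # Contrast weights pairs by squared gray-level difference;
    # energy is the sum of squared entries (uniformity of the texture).
    contrast = sum(P[a, b] * (a - b) ** 2 for a in idx for b in idx)
    energy = (P ** 2).sum()
    return P, contrast, energy

# Toy 4-level image with a blocky texture
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P, contrast, energy = glcm(img)
```

Features like these, computed over the segmented placenta region, form the statistical input to the classifier.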