Author

Aisha Gul Hafeez

Bio: Aisha Gul Hafeez is an academic researcher from the National University of Sciences and Technology. The author has contributed to research in the topics Image quality and Image fusion. The author has an h-index of 1 and has co-authored 1 publication, which has received 4 citations.

Papers
Proceedings ArticleDOI
04 Nov 2013
TL;DR: Results obtained from 10 subjects demonstrate that 3D echocardiography image fusion improves the quantitative evaluation measures SNR, CNR, and contrast while extending the FOV, thus filling in information missing from the individual source images.
Abstract: 3D echocardiography offers the ability to perform cardiac functional analysis by visualizing the full 3D geometry of the heart. The full potential of 3D echocardiography has still not been achieved due to problems with image quality and automated quantitative analysis. Native single-view images often lack sufficient anatomical information and are low in contrast and noisy due to poor acoustic windows and the physical limitations of ultrasound. In this work, we explore various ways of fusing multiple single-view 3D echocardiography images in order to obtain a complete 3D view of the heart while preserving the maximum salient information from the individual images. Three fusion techniques have been explored: maximum, averaging, and wavelet image fusion. A novel method of 3D echocardiography fusion utilizing principal component analysis is proposed, and a comparative analysis of all discussed techniques is conducted. Results obtained from 10 subjects demonstrate that 3D echocardiography image fusion improves the quantitative evaluation measures SNR, CNR, and contrast while extending the FOV, thus filling in information missing from the individual source images. It is hoped that this improved image quality will lead to improved cardiac functional analysis, as the multi-view fused image shows the complete geometry of the heart.
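As an illustration of the two simpler fusion rules mentioned in the abstract (maximum and averaging), the sketch below applies voxelwise rules to two co-registered volumes. The array names and the NaN-outside-FOV masking convention are illustrative assumptions, not the authors' implementation.

```python
# Sketch of voxelwise maximum and averaging fusion for two co-registered
# 3D echocardiography volumes. NaN marks voxels outside a volume's field
# of view (an assumed convention, not taken from the paper).
import numpy as np

def fuse_max(vol_a, vol_b):
    """Voxelwise maximum; np.fmax ignores NaN when the other value is finite."""
    return np.fmax(vol_a, vol_b)

def fuse_average(vol_a, vol_b):
    """Voxelwise mean over whichever volumes actually cover each voxel."""
    return np.nanmean(np.stack([vol_a, vol_b]), axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((32, 32, 32))
    b = rng.random((32, 32, 32))
    b[:, :, 16:] = np.nan   # simulate a narrower field of view in the second volume
    print(fuse_average(a, b).shape, fuse_max(a, b).shape)
```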

4 citations


Cited by
Journal ArticleDOI
TL;DR: The comparison results show that the proposed image fusion algorithm based on the lifting wavelet transform outperforms conventional methods and has considerable practical value.
Abstract: The directional characteristics of the low-frequency and high-frequency coefficients of the wavelet transform of the original images are discussed and analyzed, and a novel image fusion algorithm based on the lifting wavelet transform is proposed in this paper. First, the source images are transformed to the frequency domain by means of the lifting wavelet. Then, the resultant coefficients of the low-frequency sub-band are obtained by comparing the covariance of the coefficients of the different images. Meanwhile, the resultant coefficients of each high-frequency sub-band are calculated according to the matching measure between the directional characteristic of the coefficients in the same sub-band and the quad-tree structure relationship of the coefficients with the same direction in different sub-bands. Finally, the fused image is obtained through the inverse lifting wavelet transform. Several evaluation indexes, such as entropy, average gradient, PSNR, and RMSE, are employed to evaluate the images produced by the different fusion methods. The comparison results show that the proposed image fusion algorithm based on the lifting wavelet transform outperforms conventional methods and has considerable practical value.
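The general "decompose, select per sub-band, reconstruct" pattern described in this abstract can be sketched as follows. The sketch uses a standard 2-D DWT from PyWavelets rather than the paper's lifting scheme, and the selection rules (variance comparison for the approximation band, maximum absolute coefficient for the detail bands) are simplified stand-ins for the covariance- and quad-tree-based rules of the paper.

```python
# Simplified wavelet-domain fusion of two same-sized grayscale images.
import numpy as np
import pywt

def wavelet_fuse(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    # Single-level 2-D decomposition: (approximation, (horizontal, vertical, diagonal))
    ca_a, details_a = pywt.dwt2(img_a, wavelet)
    ca_b, details_b = pywt.dwt2(img_b, wavelet)

    # Low-frequency sub-band: keep the approximation with the larger variance
    # (a crude stand-in for the paper's covariance comparison).
    ca_fused = ca_a if ca_a.var() >= ca_b.var() else ca_b

    # High-frequency sub-bands: keep the coefficient with the larger magnitude.
    details_fused = tuple(
        np.where(np.abs(da) >= np.abs(db), da, db)
        for da, db in zip(details_a, details_b)
    )

    # Reconstruct the fused image with the inverse transform.
    return pywt.idwt2((ca_fused, details_fused), wavelet)
```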

12 citations

Journal ArticleDOI
TL;DR: This study proposes the novel approach of fusing multiple 3-D echocardiography images using an optical tracking system that incorporates breath-hold position tracking to ensure that the heart remains at the same position during different acquisitions.
Abstract: Recent advances in echocardiography allow real-time 3-D dynamic image acquisition of the heart. However, one of the major limitations of 3-D echocardiography is the limited field of view, which results in an acquisition insufficient to cover the whole geometry of the heart. This study proposes the novel approach of fusing multiple 3-D echocardiography images using an optical tracking system that incorporates breath-hold position tracking to ensure that the heart remains at the same position during different acquisitions. In six healthy male volunteers, 18 pairs of apical/parasternal 3-D ultrasound data sets were acquired during a single breath-hold as well as in subsequent breath-holds. The proposed method yielded a field-of-view improvement of 35.4 ± 12.5%. To improve the quality of the fused image, a wavelet-based fusion algorithm was developed that computes pixelwise likelihood values for overlapping voxels from multiple image views. The proposed wavelet-based fusion approach yielded significant improvements in contrast (66.46 ± 21.68%), contrast-to-noise ratio (49.92 ± 28.71%), signal-to-noise ratio (57.59 ± 47.85%) and feature count (13.06 ± 7.44%) in comparison to the individual views.
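For reference, the quality measures reported above (contrast, CNR, SNR) can be computed along roughly the following lines. The ROI convention (a "signal" mask such as myocardium and a "background" mask such as blood pool) and the exact metric definitions are assumptions for illustration and may differ from those used in the study.

```python
# Simple ROI-based image quality measures and percentage improvement of a
# fused image over a single view (illustrative definitions only).
import numpy as np

def contrast(img, roi_signal, roi_background):
    return float(img[roi_signal].mean() - img[roi_background].mean())

def cnr(img, roi_signal, roi_background):
    return contrast(img, roi_signal, roi_background) / float(img[roi_background].std())

def snr(img, roi_signal, roi_background):
    return float(img[roi_signal].mean() / img[roi_background].std())

def percent_improvement(metric, fused, single_view, roi_signal, roi_background):
    """Relative change of a metric for the fused image over a single view, in percent."""
    base = metric(single_view, roi_signal, roi_background)
    return 100.0 * (metric(fused, roi_signal, roi_background) - base) / abs(base)
```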

9 citations

Proceedings ArticleDOI
01 Aug 2016
TL;DR: A multi-camera-based optical tracking system is proposed that eliminates the need for image overlap, compensates for patient movement during acquisition, and fuses volumes acquired during R-R wave peaks based on electrocardiogram (ECG) data to account for retrospective image acquisition.
Abstract: Limited field of view (FOV) is a major problem in 3D real-time echocardiography (3DRTE) and results in an incomplete representation of cardiac anatomy. Various image registration techniques have been proposed to improve the field of view in 3DRTE by fusing multiple image volumes. However, these techniques require significant overlap between the individual volumes and rely on high image resolution and a high signal-to-noise ratio. Changes in heart position due to patient movement during image acquisition can also reduce the quality of image fusion. In this paper, we propose a multi-camera-based optical tracking system which 1) eliminates the need for image overlap and 2) compensates for patient movement during acquisition. We compensate for patient movement by continuously tracking the patient position using skin markers and incorporating this information into the fusion process. We fuse volumes acquired during R-R wave peaks based on electrocardiogram (ECG) data to account for retrospective image acquisition. The fusion technique was validated using a heart phantom (Shelley Medical Imaging Technologies) and on one healthy volunteer. The fused ultrasound volumes could be generated within 2 seconds and showed complete alignment of the myocardial boundaries upon visual assessment. No stitching or movement-related artefacts were observed in the fused image.
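A minimal sketch of the tracker-based compositing idea follows, assuming each ECG-gated volume carries a rigid transform (from the optical tracking system) that maps a common patient-fixed output grid into its own voxel coordinates. The transform handling and the simple averaging in overlapping regions are illustrative assumptions, not the authors' pipeline.

```python
# Place tracker-aligned, ECG-gated volumes into one common grid without
# image-based registration. Volumes are assumed to have already been selected
# at the same cardiac phase; each 4x4 matrix maps output voxels to the input
# volume's voxels, as required by scipy.ndimage.affine_transform.
import numpy as np
from scipy.ndimage import affine_transform

def composite(volumes, transforms, out_shape):
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for vol, T in zip(volumes, transforms):
        resampled = affine_transform(vol.astype(float), T[:3, :3], offset=T[:3, 3],
                                     output_shape=out_shape, order=1, cval=np.nan)
        valid = np.isfinite(resampled)          # voxels actually covered by this volume
        acc[valid] += resampled[valid]
        cnt[valid] += 1
    # Average where volumes overlap, keep single-volume values elsewhere, NaN if uncovered.
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)
```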

3 citations

Journal ArticleDOI
TL;DR: A new image fusion method based on a generalized random walker framework (GRW) using ultrasound confidence maps is proposed that could help improve the diagnostic accuracy and clinical acceptance of 3-D echocardiography.
Abstract: Image fusion techniques in 3-D echocardiography attempt to improve the field of view by combining multiple 3-D ultrasound (3-DUS) volumes. Echocardiography fusion techniques are mostly based on either image registration or sensor tracking. Compared to registration techniques, sensor tracking approaches are image independent and do not need any spatial overlap between the images. Once the images are spatially aligned, the pixel intensities in the overlapping regions are determined using fusion algorithms such as average fusion (AVG) and maximum fusion (MAX). However, averaging generally reduces contrast, while taking the maximum amplifies noise artifacts in the fused image. Wavelet fusion (WAV) overcomes these issues by selectively enhancing the low-frequency components in the image, but this can result in pixelation artifacts. We propose a new method for image fusion based on a generalized random walker framework (GRW) using ultrasound confidence maps. The maps are based on: 1) focal properties of the transducer and 2) second-order image features. The fusion technique was validated on image pairs sampled from 3-DUS volumes acquired from six healthy volunteers. All the images were spatially aligned using optical tracking, and the fusion algorithm was used to determine the pixel intensities in the overlapping region. Comparisons based on quantitative measures showed statistically significant improvements for GRW (p < 0.01) over AVG, MAX, and WAV for contrast-to-noise ratio: 0.85 ± 0.03, signal-to-noise ratio: 7.42 ± 1.98, and the Wang–Bovik metric (Q0): 0.80 ± 0.15. The Piella metric (Q1): 0.82 ± 0.01 also gave higher values for GRW, but the difference was not statistically significant. Upon visual inspection, GRW fusion showed the fewest stitching and pixelation artifacts. The proposed fusion technique could help improve the diagnostic accuracy and clinical acceptance of 3-D echocardiography.
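For context, the Wang-Bovik universal quality index (Q0) used as one of the evaluation measures above can be computed as in the sketch below. This is the standard global form of the index (in practice it is usually averaged over sliding windows), not the paper's exact evaluation code.

```python
# Wang-Bovik universal image quality index Q0 between a reference image x and
# a test image y: Q0 = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y))*(mean(x)^2+mean(y)^2)).
import numpy as np

def wang_bovik_q0(x: np.ndarray, y: np.ndarray) -> float:
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2)))
```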

1 citation