Author

Nirmala Paramanandham

Bio: Nirmala Paramanandham is an academic researcher from Sri Sivasubramaniya Nadar College of Engineering. The author has contributed to research in the topics of Image fusion & Artificial intelligence, has an h-index of 4, and has co-authored 9 publications receiving 84 citations. Previous affiliations of Nirmala Paramanandham include VIT University & M. S. Ramaiah Institute of Technology.

Papers
Journal ArticleDOI
TL;DR: A swarm-intelligence-based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications; it integrates the infrared image with the visible image to generate a single informative fused image.
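The TL;DR only names the approach, so here is a minimal sketch of DCT-domain block fusion using AC-coefficient energy as the activity measure. The energy rule is a common stand-in; the paper's swarm-optimized weighting is not reproduced, and all function names are illustrative.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D orthonormal DCT, applied separably along rows then columns
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def fuse_dct_blocks(ir, vis, bs=8):
    """Block-wise DCT fusion of two equal-size grayscale images.

    For each bs x bs block, keep the block whose AC-coefficient energy
    is larger (a standard activity measure; the paper instead tunes the
    combination with a swarm optimizer). Borders beyond a multiple of
    the block size are left unfused in this sketch.
    """
    h, w = ir.shape
    fused = np.zeros((h, w))
    for y in range(0, h - h % bs, bs):
        for x in range(0, w - w % bs, bs):
            a = dct2(ir[y:y+bs, x:x+bs].astype(float))
            b = dct2(vis[y:y+bs, x:x+bs].astype(float))
            act_a = np.sum(a * a) - a[0, 0] ** 2   # energy excluding DC
            act_b = np.sum(b * b) - b[0, 0] ** 2
            fused[y:y+bs, x:x+bs] = idct2(a if act_a >= act_b else b)
    return fused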

44 citations

Journal ArticleDOI
TL;DR: A hybrid image fusion algorithm for multi-focus and multi-modality images is presented that exploits the advantages of both transform- and spatial-domain techniques to achieve a good balance between enhancing fusion quality and reducing computational cost.
Abstract: Image fusion is the process of integrating several source images into a single image that provides more reliable information with reduced redundancy. In this paper, a hybrid image fusion algorithm for multi-focus and multi-modality images is presented that exploits the advantages of both transform- and spatial-domain techniques. In the initial image fusion framework, the source images are decomposed only once using a cascaded wavelet transform, and the transformed coefficients are combined according to the fusion rules. The inverse cascaded wavelet transform is applied to obtain the initial fused image. Further, the Roberts operator is used to extract edge information, and a decision rule is introduced to choose the edges from the focused part. The edge information extracted from the focused part replaces the existing edge information in the initial fused image, enhancing the reliability of the fused image. Experiments on various types of images, including multi-focus and multi-modality images, are conducted to examine the performance of the proposed algorithm. Experimental results show that the proposed algorithm outperforms well-known techniques in terms of both visual perception and quantitative evaluation. Furthermore, the proposed algorithm achieves a good balance between enhancing fusion quality and reducing computational cost.
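As a rough, hedged illustration of the pipeline above, the sketch below performs a single-level wavelet fusion with a max-absolute detail rule and shows the Roberts edge-extraction step. The paper's cascaded decomposition and its decision rule for selecting focused edges are simplified away; wavelet choice and names are illustrative.

import numpy as np
import pywt
from scipy.ndimage import convolve

ROBERTS_X = np.array([[1, 0], [0, -1]], dtype=float)
ROBERTS_Y = np.array([[0, 1], [-1, 0]], dtype=float)

def roberts_edges(img):
    """Edge magnitude via the Roberts cross operator."""
    gx = convolve(img.astype(float), ROBERTS_X)
    gy = convolve(img.astype(float), ROBERTS_Y)
    return np.hypot(gx, gy)

def fuse_wavelet(a, b, wavelet='db2'):
    """Single-level wavelet fusion: average the approximation bands,
    keep whichever detail coefficient has the larger magnitude."""
    ca, (cha, cva, cda) = pywt.dwt2(a.astype(float), wavelet)
    cb, (chb, cvb, cdb) = pywt.dwt2(b.astype(float), wavelet)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    details = (pick(cha, chb), pick(cva, cvb), pick(cda, cdb))
    return pywt.idwt2(((ca + cb) / 2, details), wavelet)

In the paper's full pipeline, the magnitudes returned by a Roberts-style edge extractor would drive the decision rule that replaces edges in the initial fused image with edges taken from the focused source.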

36 citations

Journal ArticleDOI
TL;DR: In this article, a modified iterative grouping median filter (IMF) was proposed to remove the noise in the MRI image and a maximum likelihood estimation-based kernel principal component analysis (KPCA) was used for feature extraction.
Abstract: The most vital challenge for a radiologist is locating brain tumors at an early stage, as a brain tumor grows rapidly, doubling in size in about twenty-five days. If not treated properly, the affected person's survival usually does not exceed half a year. For this reason, an automatic system is desirable for locating brain tumors at an early stage. In general, magnetic resonance imaging (MRI) scans are preferred over computed tomography (CT) for diagnosing cancerous and noncancerous tumors. However, during MRI scan acquisition, noise such as speckle noise, salt & pepper noise, and Gaussian noise may appear, which can degrade classification performance. Hence, a new noise removal algorithm is proposed, namely the modified iterative grouping median filter. Further, maximum likelihood estimation-based kernel principal component analysis is proposed for feature extraction. A deep learning-based VGG16 architecture is utilized for segmentation. Experimental results show that the proposed algorithm outperforms well-known techniques in terms of both qualitative and quantitative evaluation.
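The grouping rule of the proposed filter is not spelled out here; as a hedged sketch of the general idea, the code below applies an iterative, detection-guided median filter that replaces only likely salt & pepper pixels and leaves clean pixels untouched. It is a generic stand-in, not the authors' exact filter, and the KPCA and VGG16 stages are omitted.

import numpy as np
from scipy.ndimage import median_filter

def iterative_median_denoise(img, iters=3, size=3, lo=0, hi=255):
    """Iteratively replace only extreme-valued pixels (likely salt &
    pepper in an 8-bit image) with the local median.

    A generic stand-in for the paper's modified iterative grouping
    median filter, whose exact grouping strategy is not reproduced.
    """
    out = img.astype(float).copy()
    for _ in range(iters):
        med = median_filter(out, size=size)
        noisy = (out <= lo) | (out >= hi)   # saturated pixels only
        if not noisy.any():
            break
        out[noisy] = med[noisy]
    return out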

20 citations

Proceedings ArticleDOI
23 Mar 2016
TL;DR: A simple and competent image fusion algorithm based on standard deviation in wavelet domain is proposed and compared with both transform domain as well as spatial domain techniques.
Abstract: Fusion is the process of extracting useful information acquired from several domains. The objective of image fusion is to extract the needed data from multiple images to generate a composite image that provides an enhanced representation of the scene compared with any individual source image. Image fusion can be applied to various kinds of images: multi-sensor, multi-modal, multi-temporal, or multi-focus. The motivations for image fusion include reduced bandwidth, time, and power consumption, along with increased spatial information. In this paper, a simple and competent image fusion algorithm based on standard deviation in the wavelet domain is proposed and compared with both transform-domain and spatial-domain techniques. The techniques are evaluated quantitatively and qualitatively on various databases.
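A minimal sketch of a standard-deviation fusion rule of the kind the abstract describes, assuming local standard deviation as the activity measure on single-level wavelet detail subbands; the window size and wavelet are illustrative choices, not the paper's exact settings.

import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_std(x, size=3):
    """Local standard deviation via the E[x^2] - E[x]^2 identity."""
    m = uniform_filter(x, size)
    m2 = uniform_filter(x * x, size)
    return np.sqrt(np.maximum(m2 - m * m, 0))

def fuse_by_std(a, b, wavelet='haar'):
    """Wavelet fusion choosing each detail coefficient from the source
    whose local standard deviation (activity) is higher."""
    ca, da = pywt.dwt2(a.astype(float), wavelet)
    cb, db = pywt.dwt2(b.astype(float), wavelet)
    details = tuple(np.where(local_std(x) >= local_std(y), x, y)
                    for x, y in zip(da, db))
    return pywt.idwt2(((ca + cb) / 2, details), wavelet)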

12 citations

Proceedings ArticleDOI
02 Apr 2015
TL;DR: A novel multi-focus image fusion method is presented that combines the Discrete Wavelet Transform and the Stationary Wavelet Transform, and can provide better performance in fusing multi-focus images than conventional state-of-the-art single-transform methods in the literature.
Abstract: Image fusion is the process of integrating multiple source images of the same scene into a single image, retaining the important and salient features from each image. The resultant fused image provides a more accurate description than any of the individual source images. In this paper, a novel multi-focus image fusion method is presented by combining the Discrete Wavelet Transform and the Stationary Wavelet Transform. The source images are decomposed using the wavelet transforms, and the coefficients are fused according to the fusion rule. An informative fused image is obtained by applying the inverse wavelet transforms to the fused coefficients. The proposed framework is evaluated using parameters such as RMSE, PSNR, SF, and entropy. The simulation results demonstrate both quantitatively and subjectively that the proposed fusion approach is effective and provides better performance in fusing multi-focus images than conventional state-of-the-art single-transform methods in the literature.
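The four metrics named above have standard definitions; the snippet below implements them in those standard forms (RMSE and PSNR against a reference image, spatial frequency and entropy as no-reference measures for 8-bit images).

import numpy as np

def rmse(ref, img):
    return np.sqrt(np.mean((ref.astype(float) - img.astype(float)) ** 2))

def psnr(ref, img, peak=255.0):
    return 20 * np.log10(peak / rmse(ref, img))

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2): RMS of row and column differences."""
    f = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))
    return np.hypot(rf, cf)

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))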

10 citations


Cited by
Journal ArticleDOI
Jiayi Ma, Yong Ma, Chang Li
TL;DR: This survey comprehensively reviews the existing methods and applications for the fusion of infrared and visible images, and can serve as a reference for researchers in infrared and visible image fusion and related fields.

849 citations

Journal ArticleDOI
03 Apr 2020
TL;DR: FusionDN, a new unsupervised and unified densely connected network for different types of image fusion tasks, obtains a single model applicable to multiple fusion tasks by applying elastic weight consolidation to avoid forgetting what has been learned from previous tasks when training multiple tasks sequentially.
Abstract: In this paper, we present a new unsupervised and unified densely connected network for different types of image fusion tasks, termed FusionDN. In our method, the densely connected network is trained to generate the fused image conditioned on source images. Meanwhile, a weight block is applied to obtain two data-driven weights as the retention degrees of features in different source images, which measure the quality and the amount of information in them. Losses of similarities based on these weights are applied for unsupervised learning. In addition, we obtain a single model applicable to multiple fusion tasks by applying elastic weight consolidation to avoid forgetting what has been learned from previous tasks when training multiple tasks sequentially, rather than training individual models for every fusion task or roughly training all tasks jointly. Qualitative and quantitative results demonstrate the advantages of FusionDN compared with state-of-the-art methods in different fusion tasks.
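Elastic weight consolidation, which FusionDN uses to train one model across fusion tasks sequentially, penalizes drift in parameters that mattered for earlier tasks. Below is a minimal PyTorch sketch of the diagonal-Fisher form of the penalty; the dictionaries fisher and old_params are illustrative names, and FusionDN's exact weighting is not reproduced.

import torch

def ewc_penalty(model, fisher, old_params, lam=1.0):
    """Diagonal-Fisher EWC penalty:
    (lam / 2) * sum_i F_i * (theta_i - theta_star_i)^2.

    fisher and old_params are dicts keyed by parameter name, captured
    after training on the previous task (illustrative bookkeeping).
    """
    loss = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

The total objective for the current task is then the task loss plus this penalty, which discourages forgetting without freezing the network.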

182 citations

Journal ArticleDOI
Hao Zhang, Zhuliang Le, Zhenfeng Shao, Han Xu, Jiayi Ma
TL;DR: A new generative adversarial network with adaptive and gradient joint constraints is presented for fusing multi-focus images, demonstrating the superiority of the method over the state of the art in terms of both subjective visual effect and quantitative metrics.
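As a hedged illustration of a gradient constraint of the kind the TL;DR mentions, the sketch below penalizes the distance between the fused image's Sobel gradient magnitude and the element-wise maximum of the source gradient magnitudes; this is one common formulation, not the paper's adaptive version.

import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def grad_mag(x):
    """Sobel gradient magnitude of an (N, 1, H, W) image batch."""
    gx = F.conv2d(x, SOBEL_X, padding=1)
    gy = F.conv2d(x, SOBEL_Y, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def gradient_loss(fused, src_a, src_b):
    """Push fused gradients toward the stronger source gradient."""
    target = torch.maximum(grad_mag(src_a), grad_mag(src_b))
    return F.l1_loss(grad_mag(fused), target)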

125 citations

Journal ArticleDOI
TL;DR: In this article, state-of-the-art image fusion methods at diverse levels with their pros and cons, various spatial- and transform-based methods with quality metrics, and their applications in different domains are reviewed.
Abstract: The need for image fusion in image processing applications is growing due to the tremendous number of acquisition systems. Fusion of images is defined as the alignment of noteworthy information from diverse sensors using various mathematical models to generate a single compound image. Image fusion integrates complementary multi-temporal, multi-view, and multi-sensor information into a single image with improved quality while keeping the integrity of important features. It is considered a vital pre-processing phase for several applications, such as robot vision, aerial and satellite imaging, medical imaging, and robot or vehicle guidance. In this paper, various state-of-the-art image fusion methods at diverse levels with their pros and cons, various spatial- and transform-based methods with quality metrics, and their applications in different domains are discussed. Finally, this review outlines future directions for different applications of image fusion.

87 citations

Proceedings ArticleDOI
30 Mar 2022
TL;DR: This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls it to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
Abstract: This study addresses the issue of fusing infrared and visible images that appear differently for object detection. Aiming at generating an image of high visual quality, previous approaches discover commonalities underlying the two modalities and fuse in that common space, either by iterative optimization or deep networks. These approaches neglect that modality differences, which imply complementary information, are extremely important for both fusion and the subsequent detection task. This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls it to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network. The fusion network, with one generator and dual discriminators, seeks commonalities while learning from differences, preserving structural information of targets from the infrared and textural details from the visible. Furthermore, we build a synchronized imaging system with calibrated infrared and optical sensors, and collect currently the most comprehensive benchmark covering a wide range of scenarios. Extensive experiments on several public datasets and our benchmark demonstrate that our method outputs not only visually appealing fusion but also higher detection mAP than the state-of-the-art approaches. The source code and benchmark are available at https://github.com/dlut-dimt/TarDAL.
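The "one generator and dual discriminators" design can be sketched as two adversarial losses, one discriminator judging the fused image against the infrared source and one against the visible source. The PyTorch snippet below uses a generic least-squares GAN form under that assumption; TarDAL's actual objective and its detection coupling are in the linked repository.

import torch

def generator_adv_loss(d_ir, d_vis, fused):
    """LSGAN generator loss against two discriminators: the generator
    tries to make both discriminators score the fused image as real."""
    return (((d_ir(fused) - 1) ** 2).mean()
            + ((d_vis(fused) - 1) ** 2).mean())

def discriminator_loss(d, real, fused):
    """LSGAN discriminator loss: real source -> 1, fused image -> 0."""
    return (((d(real) - 1) ** 2).mean()
            + (d(fused.detach()) ** 2).mean())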

47 citations