Author

Malay K. Kundu

Other affiliations: Intel
Bio: Malay K. Kundu is an academic researcher from the Indian Statistical Institute. He has contributed to research in the topics of image retrieval and digital watermarking, has an h-index of 33, and has co-authored 151 publications receiving 3,283 citations. Previous affiliations of Malay K. Kundu include Intel.


Papers
Journal ArticleDOI


TL;DR: A novel multimodal medical image fusion (MIF) method based on non-subsampled contourlet transform (NSCT) and pulse-coupled neural network (PCNN) is presented, which exploits the advantages of both the NSCT and the PCNN to obtain better fusion results.
Abstract: In this article, a novel multimodal medical image fusion (MIF) method based on non-subsampled contourlet transform (NSCT) and pulse-coupled neural network (PCNN) is presented. The proposed MIF scheme exploits the advantages of both the NSCT and the PCNN to obtain better fusion results. The source medical images are first decomposed by NSCT. The low-frequency subbands (LFSs) are fused using the ‘max selection’ rule. For fusing the high-frequency subbands (HFSs), a PCNN model is utilized. Modified spatial frequency in NSCT domain is input to motivate the PCNN, and coefficients in NSCT domain with large firing times are selected as coefficients of the fused image. Finally, inverse NSCT (INSCT) is applied to get the fused image. Subjective as well as objective analysis of the results and comparisons with state-of-the-art MIF techniques show the effectiveness of the proposed scheme in fusing multimodal medical images.
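The overall structure of the fusion scheme (decompose each source, fuse low-frequency subbands by max selection, fuse high-frequency subbands by an activity measure, then reconstruct) can be sketched as below. This is only an illustration under stated substitutions: a simple box-filter low/high split stands in for the NSCT decomposition, and a max-absolute-coefficient rule stands in for the PCNN firing-time comparison; neither is the paper's actual transform or neural model.

```python
import numpy as np

def decompose(img, k=5):
    """Crude low/high split (stand-in for NSCT): box-filter low-pass plus residual."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    low = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    return low, img - low  # low-frequency subband, high-frequency residual

def fuse(img_a, img_b):
    """Fuse two registered source images: max rule on LFS, activity rule on HFS."""
    low_a, high_a = decompose(img_a)
    low_b, high_b = decompose(img_b)
    fused_low = np.maximum(low_a, low_b)      # 'max selection' rule on the LFS
    mask = np.abs(high_a) >= np.abs(high_b)   # stand-in for the firing-time comparison
    fused_high = np.where(mask, high_a, high_b)
    return fused_low + fused_high             # stand-in for inverse NSCT

a = np.zeros((32, 32)); a[8:24, 8:24] = 1.0   # toy 'CT-like' source image
b = np.zeros((32, 32)); b[4:28, 14:18] = 1.0  # toy 'MR-like' source image
f = fuse(a, b)
```

Because the split is exact (low + high reconstructs the input), fusing an image with itself returns that image, which is a useful sanity check for any fusion rule.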

121 citations

Journal ArticleDOI


TL;DR: An algorithm based on properties of the human visual system is presented, whereby the thresholds for detecting the significant edges as perceived by human beings can be selected automatically (without human intervention).
Abstract: An algorithm based on properties of the human visual system is presented here, whereby it is possible to select automatically (without human intervention) the thresholds for detecting the significant edges as perceived by human beings. The threshold value adapts to the background intensity according to a criterion governed by the characteristics of the De Vries-Rose, Weber, or saturated region, as appropriate. The algorithm is found to provide a satisfactory improvement in the performance of the conventional edge detection process.
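The three regions have standard forms in vision science: the just-noticeable difference grows like the square root of the background in the De Vries-Rose region, linearly in the Weber region, and roughly quadratically in saturation. A minimal sketch of such a piecewise threshold function follows; the breakpoints `B1`, `B2` and scale factors are illustrative assumptions (chosen here only to make the curve continuous), not the paper's fitted constants.

```python
import math

# Illustrative breakpoints between regions (assumed, not from the paper):
B1, B2 = 10.0, 150.0
# Scale factors chosen so the threshold is continuous at B1 and B2.
K_DVR = 1.0
K_WEBER = 1.0 / math.sqrt(B1)
K_SAT = 1.0 / (math.sqrt(B1) * B2)

def contrast_threshold(background):
    """Just-noticeable intensity difference as a function of background intensity B."""
    if background < B1:                       # De Vries-Rose region: dB ~ sqrt(B)
        return K_DVR * math.sqrt(background)
    if background < B2:                       # Weber region: dB ~ B
        return K_WEBER * background
    return K_SAT * background ** 2            # saturated region: dB ~ B^2
```

In an edge detector built on this idea, a gradient would be kept as a significant edge only when it exceeds `contrast_threshold` evaluated at the local background intensity.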

114 citations

Journal ArticleDOI


TL;DR: This work is an attempt to demonstrate genetic algorithms' adaptivity and effectiveness in searching for globally optimal solutions when selecting an appropriate image enhancement operator automatically.
Abstract: Genetic algorithms represent a class of highly parallel adaptive search processes for solving a wide range of optimization and machine learning problems. The present work is an attempt to demonstrate their adaptivity and effectiveness in searching for globally optimal solutions when selecting an appropriate image enhancement operator automatically.
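The idea of a GA searching an enhancement-operator space can be sketched as follows. Everything concrete here is an assumption for illustration: the operator is gamma correction with a single parameter, the fitness is a crude contrast proxy (spread of the output values), and the GA uses plain tournament selection, blend crossover, and Gaussian mutation rather than the paper's operators.

```python
import random

random.seed(0)
image = [i / 15.0 for i in range(16)]          # toy 1-D 'image' with values in [0, 1]

def fitness(gamma):
    """Contrast proxy: spread (sum of squared deviations) of the gamma-corrected image."""
    out = [p ** gamma for p in image]
    mean = sum(out) / len(out)
    return sum((p - mean) ** 2 for p in out)

def ga_select(pop_size=20, generations=30, lo=0.2, hi=5.0):
    """Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)        # tournament of size 2
            p1 = a if fitness(a) > fitness(b) else b
            a, b = random.sample(pop, 2)
            p2 = a if fitness(a) > fitness(b) else b
            child = 0.5 * (p1 + p2)             # blend crossover (midpoint)
            child += random.gauss(0.0, 0.1)     # Gaussian mutation
            nxt.append(min(hi, max(lo, child))) # clamp to the search range
        pop = nxt
    return max(pop, key=fitness)

best_gamma = ga_select()
```

For this toy fitness the GA should return a gamma whose contrast score is at least as good as leaving the image unchanged (gamma = 1).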

111 citations

Journal ArticleDOI


TL;DR: This paper addresses a novel approach to the multimodal medical image fusion (MIF) problem, employing multiscale geometric analysis of the nonsubsampled contourlet transform and fuzzy-adaptive reduced pulse-coupled neural network (RPCNN).
Abstract: This paper addresses a novel approach to the multimodal medical image fusion (MIF) problem, employing multiscale geometric analysis of the nonsubsampled contourlet transform and a fuzzy-adaptive reduced pulse-coupled neural network (RPCNN). The linking strengths of the RPCNN's neurons are adaptively set by modeling them as fuzzy membership values, representing their significance in the corresponding source image. Use of the RPCNN, with a less complex structure and fewer parameters, leads to computational efficiency, an important requirement of point-of-care health care technologies. The proposed scheme is free from the common shortcomings of state-of-the-art MIF techniques, such as contrast reduction, loss of fine image details, and unwanted image degradations. Subjective and objective evaluations show better performance of this new approach compared to the existing techniques.
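The core idea, mapping each neuron's significance in the source image to a fuzzy membership value used as its linking strength, can be sketched as below. The choice of salience measure (coefficient magnitude) and the S-membership breakpoints (percentiles of the magnitudes) are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def fuzzy_linking_strengths(coeffs):
    """Map each coefficient's salience to [0, 1] via a fuzzy S-membership function,
    to be used as the linking strength (beta) of the corresponding PCNN neuron.
    Breakpoints are percentiles of the magnitudes (an illustrative assumption)."""
    mag = np.abs(coeffs).ravel()
    a, b = np.percentile(mag, 10), np.percentile(mag, 90)
    if b <= a:                                # degenerate (near-constant) subband
        return np.full(coeffs.shape, 0.5)
    m = (a + b) / 2.0
    x = np.clip(np.abs(coeffs), a, b)
    # Zadeh's S-function: 0 at a, 0.5 at the midpoint m, 1 at b.
    return np.where(
        x <= m,
        2.0 * ((x - a) / (b - a)) ** 2,
        1.0 - 2.0 * ((b - x) / (b - a)) ** 2,
    )

coeffs = np.array([[0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]])  # toy subband
betas = fuzzy_linking_strengths(coeffs)
```

Larger-magnitude (more significant) coefficients thus receive larger linking strengths, so their neurons fire earlier in the fusion comparison.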

103 citations

Journal ArticleDOI


TL;DR: Experimental results and performance comparisons with state-of-the-art techniques show that the proposed scheme is efficient in brain MR image classification.
Abstract: We propose an automatic and accurate technique for classifying normal and abnormal magnetic resonance (MR) images of the human brain. Ripplet transform Type-I (RT), an efficient multiscale geometric analysis (MGA) tool for digital images, is used to represent the salient features of the brain MR images. The dimensionality of the image representative feature vector is reduced by principal component analysis (PCA). A computationally less expensive support vector machine (SVM), called the least squares SVM (LS-SVM), is used to classify the brain MR images. Extensive experiments were carried out to evaluate the performance of the proposed system. Two benchmark MR image datasets and a new larger dataset were used in the experiments, consisting of 66, 160, and 255 images, respectively. The generalization capability of the proposed technique is enhanced by a 5 × 5 cross-validation procedure. For all the datasets used in the experiments, the proposed system shows high classification accuracies (on average > 99%). Experimental results and performance comparisons with state-of-the-art techniques show that the proposed scheme is efficient in brain MR image classification.
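What makes the LS-SVM computationally cheaper than a standard SVM is that training reduces to solving one linear system instead of a quadratic program. The sketch below shows that step in its function-estimation form; the Ripplet features and PCA are not reproduced here, and two toy Gaussian clusters stand in for the PCA-reduced feature vectors of 'normal'/'abnormal' scans.

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of row vectors."""
    d = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM training: one (n+1)x(n+1) linear system instead of a QP."""
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                       # constraint row: sum of alphas
    A[1:, 0] = 1.0                       # bias column
    A[1:, 1:] = K + np.eye(n) / gamma    # kernel matrix plus ridge term
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]               # bias b, support values alpha

def lssvm_predict(X, Xtr, b, alpha, sigma=1.0):
    return np.sign(rbf(X, Xtr, sigma) @ alpha + b)

# Toy stand-in for PCA-reduced features of normal (-1) vs abnormal (+1) images:
rng = np.random.default_rng(0)
Xtr = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
ytr = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_train(Xtr, ytr)
pred = lssvm_predict(Xtr, Xtr, b, alpha)
```

Every training point ends up with a nonzero alpha in an LS-SVM (unlike the sparse standard SVM); the saving is purely in the training computation.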

103 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at first: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i, the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time, an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

30,199 citations

Journal ArticleDOI


TL;DR: The review covers both fuzzy and non-fuzzy techniques, including color image segmentation and neural-network-based approaches, and addresses the issue of quantitative evaluation of segmentation results.
Abstract: Many image segmentation techniques are available in the literature. Some of these techniques use only the gray level histogram, some use spatial details, while others use fuzzy set theoretic approaches. Most of these techniques are not suitable for noisy environments. Some works have been done using the Markov Random Field (MRF) model, which is robust to noise but computationally involved. Neural network architectures, which help to get the output in real time because of their parallel processing ability, have also been used for segmentation, and they work well even when the noise level is very high. The literature on color image segmentation is not as rich as that for gray tone images. This paper critically reviews and summarizes some of these techniques. Attempts have been made to cover both fuzzy and non-fuzzy techniques, including color image segmentation and neural-network-based approaches. Adequate attention is paid to segmentation of range images and magnetic resonance images. It also addresses the issue of quantitative evaluation of segmentation results.

3,386 citations


01 Jan 1979
TL;DR: This special issue aims at gathering the recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis, and at addressing interesting real-world computer vision and multimedia applications.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that we have some classes containing lots of training data and many classes contain a small amount of training data. Therefore, how to use frequent classes to help learning rare classes for which it is harder to collect the training data is an open question. Learning with Shared Information is an emerging topic in machine learning, computer vision and multimedia analysis. There are different level of components that can be shared during concept modeling and machine learning stages, such as sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters and sharing training examples, etc. Regarding the specific methods, multi-task learning, transfer learning and deep learning can be seen as using different strategies to share information. These learning with shared information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering the recent advances in learning with shared information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works, as well as literature reviews, are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged. 
Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters, and sharing training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for the specific computer vision or multimedia problem
• Survey papers regarding the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.

1,758 citations

Journal ArticleDOI


TL;DR: This paper provides a state-of-the-art review and analysis of the different existing methods of steganography, along with some common standards and guidelines drawn from the literature; it concludes with some recommendations and advocates the object-oriented embedding mechanism.
Abstract: Steganography is the science that involves communicating secret data in an appropriate multimedia carrier, e.g., image, audio, and video files. It comes under the assumption that if the feature is visible, the point of attack is evident, thus the goal here is always to conceal the very existence of the embedded data. Steganography has various useful applications. However, like any other science it can be used for ill intentions. It has been propelled to the forefront of current security techniques by the remarkable growth in computational power, the increase in security awareness by, e.g., individuals, groups, agencies, government and through intellectual pursuit. Steganography's ultimate objectives, which are undetectability, robustness (resistance to various image processing methods and compression) and capacity of the hidden data, are the main factors that separate it from related techniques such as watermarking and cryptography. This paper provides a state-of-the-art review and analysis of the different existing methods of steganography along with some common standards and guidelines drawn from the literature. This paper concludes with some recommendations and advocates for the object-oriented embedding mechanism. Steganalysis, which is the science of attacking steganography, is not the focus of this survey but nonetheless will be briefly discussed.
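The baseline spatial-domain method such surveys typically cover is least-significant-bit (LSB) substitution, which makes the trade-offs concrete: high capacity and low visibility (each pixel changes by at most 1), but essentially no robustness, since any recompression destroys the payload. A minimal sketch with a toy 8-bit grayscale carrier:

```python
def embed_lsb(pixels, bits):
    """Hide one message bit in the least significant bit of each carrier pixel."""
    assert len(bits) <= len(pixels), "carrier too small for the payload"
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it to the bit
    return stego

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits message bits from the stego pixels."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [200, 13, 77, 54, 255, 0, 128, 66]   # toy 8-bit grayscale carrier
secret = [1, 0, 1, 1]
stego = embed_lsb(cover, secret)
```

Because only the lowest bit is touched, no stego pixel differs from its cover pixel by more than one intensity level, which is what keeps the embedding visually undetectable.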

1,410 citations

Journal ArticleDOI


TL;DR: The superiority of the GA-clustering algorithm over the commonly used K-means algorithm is extensively demonstrated for four artificial and three real-life data sets.
Abstract: A genetic algorithm-based clustering technique, called GA-clustering, is proposed in this article. The searching capability of genetic algorithms is exploited in order to search for appropriate cluster centres in the feature space such that a similarity metric of the resulting clusters is optimized. The chromosomes, which are represented as strings of real numbers, encode the centres of a fixed number of clusters. The superiority of the GA-clustering algorithm over the commonly used K-means algorithm is extensively demonstrated for four artificial and three real-life data sets.
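The encoding described above (a chromosome is a string of real numbers giving the cluster centres, and fitness measures the quality of the induced clustering) can be sketched in one dimension as follows. The selection, crossover, and mutation operators here are generic choices for illustration, not necessarily those of the paper, and the fitness is simply the negative total distance of points to their nearest centre.

```python
import random

random.seed(1)
data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7, 9.1, 9.0, 8.8]  # three obvious 1-D clusters
K = 3  # number of clusters; each chromosome encodes K real-valued centres

def fitness(centres):
    """Negative total distance of each point to its nearest centre (higher is better)."""
    return -sum(min(abs(x - c) for c in centres) for x in data)

def ga_cluster(pop_size=30, generations=60):
    lo, hi = min(data), max(data)
    pop = [[random.uniform(lo, hi) for _ in range(K)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, K)          # single-point crossover
            child = p1[:cut] + p2[cut:]
            j = random.randrange(K)               # mutate one gene
            child[j] += random.gauss(0.0, 0.3)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

centres = sorted(ga_cluster())
```

Unlike K-means, nothing here depends on a good initialization of the centres: poor chromosomes are simply selected away, which is the source of the robustness the abstract claims.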

1,291 citations