Author

Malay K. Kundu

Other affiliations: Intel
Bio: Malay K. Kundu is an academic researcher from the Indian Statistical Institute. The author has contributed to research on topics including image retrieval and digital watermarking. The author has an h-index of 33 and has co-authored 151 publications receiving 3,283 citations. Previous affiliations of Malay K. Kundu include Intel.


Papers
Journal ArticleDOI
TL;DR: Extended experiments reveal that the proposed system can successfully classify Pap smear images, performing significantly better than other existing methods, which will be of particular help in the early detection of cancer.

135 citations

Journal ArticleDOI
TL;DR: A novel multimodal medical image fusion (MIF) method based on non-subsampled contourlet transform (NSCT) and pulse-coupled neural network (PCNN) is presented, which exploits the advantages of both the NSCT and the PCNN to obtain better fusion results.
Abstract: In this article, a novel multimodal medical image fusion (MIF) method based on the non-subsampled contourlet transform (NSCT) and the pulse-coupled neural network (PCNN) is presented. The proposed MIF scheme exploits the advantages of both the NSCT and the PCNN to obtain better fusion results. The source medical images are first decomposed by the NSCT. The low-frequency subbands (LFSs) are fused using the ‘max selection’ rule. For fusing the high-frequency subbands (HFSs), a PCNN model is utilized. Modified spatial frequency in the NSCT domain is input to motivate the PCNN, and coefficients in the NSCT domain with large firing times are selected as coefficients of the fused image. Finally, the inverse NSCT (INSCT) is applied to get the fused image. Subjective as well as objective analysis of the results, and comparisons with state-of-the-art MIF techniques, show the effectiveness of the proposed scheme in fusing multimodal medical images.
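
The abstract above describes a concrete pipeline: multiscale decomposition, a 'max selection' rule for the low-frequency subband, PCNN firing counts driven by modified spatial frequency for the high-frequency subbands, and an inverse transform. The Python sketch below is a minimal illustration of that pipeline, not the authors' implementation: since the NSCT has no standard Python library, a simple Gaussian/residual pyramid stands in for it, and the PCNN is reduced to a basic firing-count model with illustrative hand-picked parameters.

```python
# Illustrative sketch of an NSCT+PCNN-style fusion pipeline.
# Assumptions (not from the paper): a Gaussian/residual pyramid replaces the
# NSCT, a local-standard-deviation map approximates modified spatial frequency,
# and the PCNN parameters below are arbitrary demonstration values.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def pyramid_decompose(img, levels=2, sigma=2.0):
    """Stand-in multiscale decomposition: low-frequency base + high-frequency bands."""
    highs, current = [], img.astype(float)
    for _ in range(levels):
        low = gaussian_filter(current, sigma)
        highs.append(current - low)      # high-frequency subband
        current = low
    return current, highs                # (LFS, list of HFSs)

def pyramid_reconstruct(low, highs):
    out = low.copy()
    for h in highs:
        out += h
    return out

def spatial_frequency(band, size=3):
    """Local spatial-frequency-like activity measure used to drive the PCNN."""
    mean = uniform_filter(band, size)
    return np.sqrt(uniform_filter((band - mean) ** 2, size))

def pcnn_firing_counts(stimulus, iterations=30, beta=0.2,
                       alpha_theta=0.2, v_theta=20.0):
    """Simplified pulse-coupled neural network: returns per-pixel firing counts."""
    Y = np.zeros_like(stimulus)
    theta = np.ones_like(stimulus)
    fires = np.zeros_like(stimulus)
    for _ in range(iterations):
        L = uniform_filter(Y, 3)                             # linking from neighbours' pulses
        U = stimulus * (1.0 + beta * L)                      # internal activity
        Y = (U > theta).astype(float)                        # pulse generation
        theta = theta * np.exp(-alpha_theta) + v_theta * Y   # dynamic threshold
        fires += Y
    return fires

def fuse(img_a, img_b):
    low_a, highs_a = pyramid_decompose(img_a)
    low_b, highs_b = pyramid_decompose(img_b)
    fused_low = np.maximum(low_a, low_b)                     # 'max selection' rule for the LFS
    fused_highs = []
    for ha, hb in zip(highs_a, highs_b):
        fa = pcnn_firing_counts(spatial_frequency(ha))
        fb = pcnn_firing_counts(spatial_frequency(hb))
        fused_highs.append(np.where(fa >= fb, ha, hb))       # keep coefficients that fire more
    return pyramid_reconstruct(fused_low, fused_highs)

if __name__ == "__main__":
    a = np.random.rand(64, 64)   # placeholder source images (e.g. a CT and an MRI slice)
    b = np.random.rand(64, 64)
    print(fuse(a, b).shape)
```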

134 citations

Journal ArticleDOI
TL;DR: This paper addresses a novel approach to the multimodal medical image fusion (MIF) problem, employing multiscale geometric analysis via the nonsubsampled contourlet transform and a fuzzy-adaptive reduced pulse-coupled neural network (RPCNN).
Abstract: This paper addresses a novel approach to the multimodal medical image fusion (MIF) problem, employing multiscale geometric analysis of the nonsubsampled contourlet transform and a fuzzy-adaptive reduced pulse-coupled neural network (RPCNN). The linking strengths of the RPCNNs' neurons are adaptively set by modeling them as fuzzy membership values representing their significance in the corresponding source image. Use of the RPCNN, with its less complex structure and fewer parameters, leads to computational efficiency, an important requirement of point-of-care health care technologies. The proposed scheme is free from the common shortcomings of state-of-the-art MIF techniques: contrast reduction, loss of fine image details, and unwanted image degradations. Subjective and objective evaluations show better performance of this new approach compared to existing techniques.
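
The distinguishing idea in this abstract is that each neuron's linking strength is set adaptively from a fuzzy membership value reflecting the pixel's significance in the source subband. The sketch below illustrates that idea only; the S-shaped membership function, the local-activity measure, and all parameter values are assumptions for illustration, not the paper's exact reduced-PCNN formulation.

```python
# Illustrative sketch of fuzzy-adaptive linking strengths for a reduced PCNN.
# The membership function, activity measure and parameters are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def s_membership(x, a, b):
    """Standard S-shaped fuzzy membership function rising from a to b."""
    m = (a + b) / 2.0
    y = np.zeros_like(x)
    left = (x > a) & (x <= m)
    right = (x > m) & (x < b)
    y[left] = 2.0 * ((x[left] - a) / (b - a)) ** 2
    y[right] = 1.0 - 2.0 * ((x[right] - b) / (b - a)) ** 2
    y[x >= b] = 1.0
    return y

def adaptive_linking_strength(band, size=3):
    """Map local activity of a subband to per-neuron linking strengths in [0, 1]."""
    activity = uniform_filter(np.abs(band), size)
    return s_membership(activity, activity.min(), activity.max() + 1e-12)

def reduced_pcnn_firing_counts(stimulus, beta, iterations=30,
                               alpha_theta=0.2, v_theta=20.0):
    """Reduced PCNN with a spatially varying (fuzzy-adaptive) linking strength."""
    Y = np.zeros_like(stimulus)
    theta = np.ones_like(stimulus)
    fires = np.zeros_like(stimulus)
    for _ in range(iterations):
        L = uniform_filter(Y, 3)                             # linking from neighbour pulses
        U = stimulus * (1.0 + beta * L)                      # fuzzy-adaptive modulation
        Y = (U > theta).astype(float)
        theta = theta * np.exp(-alpha_theta) + v_theta * Y
        fires += Y
    return fires

# Hypothetical usage inside the subband-fusion step of the previous sketch:
# fa = reduced_pcnn_firing_counts(np.abs(ha), adaptive_linking_strength(ha))
```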

131 citations

Journal ArticleDOI
TL;DR: An algorithm based on properties of the human visual system is presented whereby the thresholds for detecting significant edges, as perceived by human beings, can be selected automatically (without human intervention).

118 citations

Journal ArticleDOI
TL;DR: This work is an attempt to demonstrate the adaptivity and effectiveness of genetic algorithms in searching for globally optimal solutions when automatically selecting an appropriate image enhancement operator.

116 citations


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed at the time an odd beast: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i, the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time: an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Attempts have been made to cover both fuzzy and non-fuzzy techniques, including color image segmentation and neural network based approaches, and the issue of quantitative evaluation of segmentation results is addressed.

3,527 citations

01 Jan 1979
TL;DR: This special issue aims at gathering recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis, addressing interesting real-world computer vision and multimedia applications.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes contain plenty of training data while many classes contain only a small amount. How to use frequent classes to help learn rare classes, for which training data are harder to collect, is therefore an open question. Learning with Shared Information is an emerging topic in machine learning, computer vision and multimedia analysis. Different levels of components can be shared during the concept modeling and machine learning stages, such as generic object parts, attributes, transformations, regularization parameters and training examples. Regarding specific methods, multi-task learning, transfer learning and deep learning can be seen as different strategies for sharing information. These learning-with-shared-information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering recent advances in learning with shared information and their applications in computer vision and multimedia analysis. Both state-of-the-art works and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged. Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, attributes, transformations, regularization parameters and training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for a specific computer vision or multimedia problem
• Survey papers on the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.

1,758 citations

Journal ArticleDOI
TL;DR: This paper provides a state-of-the-art review and analysis of the different existing methods of steganography, along with some common standards and guidelines drawn from the literature, offers some recommendations, and advocates the object-oriented embedding mechanism.

1,572 citations

Journal ArticleDOI
TL;DR: The superiority of the GA-clustering algorithm over the commonly used K-means algorithm is extensively demonstrated for four artificial and three real-life data sets.

1,337 citations