Showing papers by "Malay K. Kundu published in 2008"


Journal Article
TL;DR: A robust thresholding technique is proposed for segmentation of brain MR images that splits the image histogram into multiple crisp subsets using fuzzy thresholding techniques.
Abstract: A robust thresholding technique is proposed in this paper for segmentation of brain MR images. It is based on fuzzy thresholding techniques. Its aim is to threshold the gray-level histogram of brain MR images by splitting the image histogram into multiple crisp subsets. The histogram of the given image is thresholded according to the similarity between gray levels. The similarity is assessed through second-order fuzzy measures such as fuzzy correlation, fuzzy entropy, and the index of fuzziness. To calculate the second-order fuzzy measure, a weighted co-occurrence matrix is presented, which extracts the local information more accurately. Two quantitative indices are introduced to determine the multiple thresholds of the given histogram. The effectiveness of the proposed algorithm, along with a comparison with standard thresholding techniques, is demonstrated on a set of brain MR images.
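As a rough illustration of the kind of computation the abstract describes, the sketch below selects a single threshold by scoring a second-order fuzzy entropy over a gray-level co-occurrence matrix. The pair membership (minimum of the two S-function memberships), the S-function band width, and the single-threshold restriction are simplifying assumptions; the paper's weighted co-occurrence matrix and multi-threshold indices are not reproduced.

```python
import numpy as np

def cooccurrence(img, levels=256):
    """Horizontal + vertical gray-level co-occurrence frequencies."""
    c = np.zeros((levels, levels), dtype=np.float64)
    right = np.stack([img[:, :-1].ravel(), img[:, 1:].ravel()], axis=1)
    down = np.stack([img[:-1, :].ravel(), img[1:, :].ravel()], axis=1)
    for a, b in np.vstack([right, down]):
        c[a, b] += 1
    return c / c.sum()

def s_membership(levels, t, band=20):
    """Zadeh S-function centred on candidate threshold t (band is arbitrary)."""
    g = np.arange(levels, dtype=np.float64)
    a, b, cpt = t - band, float(t), t + band
    mu = np.zeros(levels)
    mu[g >= cpt] = 1.0
    m1 = (g > a) & (g <= b)
    mu[m1] = 2 * ((g[m1] - a) / (cpt - a)) ** 2
    m2 = (g > b) & (g < cpt)
    mu[m2] = 1 - 2 * ((g[m2] - cpt) / (cpt - a)) ** 2
    return mu

def second_order_fuzzy_entropy(cooc, mu):
    """Fuzzy (Shannon) entropy of gray-level pairs, weighted by co-occurrence."""
    m = np.clip(np.minimum.outer(mu, mu), 1e-12, 1 - 1e-12)  # toy pair membership
    h = -(m * np.log(m) + (1 - m) * np.log(1 - m))
    return float((cooc * h).sum())

def best_threshold(img, levels=256):
    """Pick the candidate giving the least fuzzy (lowest entropy) partition."""
    cooc = cooccurrence(img, levels)
    scores = [second_order_fuzzy_entropy(cooc, s_membership(levels, t))
              for t in range(20, levels - 20)]
    return 20 + int(np.argmin(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.clip(rng.normal(90, 12, (64, 64)).astype(int), 0, 255)
    img[20:40, 20:40] = np.clip(rng.normal(180, 12, (20, 20)).astype(int), 0, 255)
    print("selected threshold:", best_threshold(img))
```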

30 citations


Journal ArticleDOI
29 Oct 2008
TL;DR: The scope of Genetic Algorithms for data hiding in digital images is investigated, seeking an optimal solution to the multidimensional, nonlinear problem of conflicting requirements among imperceptibility, robustness, security, and payload capacity.
Abstract: This paper investigates the use of Genetic Algorithms (GA) for data hiding in digital images. GA is explored to obtain an optimal solution to the multidimensional, nonlinear problem of conflicting requirements among imperceptibility, robustness, security, and payload capacity. Two spatial-domain data hiding methods are proposed in which GA is used separately for (i) improved detection and (ii) optimal imperceptibility of the hidden data, respectively. In the first method, GA is used to obtain a set of parameter values (used as a key) that optimally represent the derived watermark, in the form of an approximate difference signal, used for embedding. In the second method, GA is used to find the values of the reference amplitude (A) and the modulation index (μ), with both linear and nonlinear transformation functions, to achieve optimal data imperceptibility. Robustness results for both methods against linear and nonlinear filtering, noise addition, and lossy compression, as well as statistical invisibility of the hidden data, are reported for several benchmark images.
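A minimal sketch of the general idea, GA-tuned embedding parameters (A, μ) trading imperceptibility against reliable detection, is given below. The block-mean modulation rule, the fitness function (PSNR penalized by bit errors), and the parameter ranges are illustrative stand-ins, not the paper's embedding, key derivation, or transformation functions.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(cover, bits, A, mu):
    """Toy embedding: each bit shifts the mean of one 8x8 block by A*(1 + mu*(2b-1))."""
    stego = cover.astype(np.float64).copy()
    h, w = cover.shape
    blocks = [(r, c) for r in range(0, h, 8) for c in range(0, w, 8)]
    for (r, c), b in zip(blocks, bits):
        stego[r:r+8, c:c+8] += A * (1 + mu * (2 * int(b) - 1))
    return np.clip(stego, 0, 255)

def extract(stego, cover, n_bits, A):
    """Toy detector: a block-mean shift above A is read as bit 1."""
    h, w = cover.shape
    blocks = [(r, c) for r in range(0, h, 8) for c in range(0, w, 8)][:n_bits]
    diffs = [stego[r:r+8, c:c+8].mean() - cover[r:r+8, c:c+8].mean() for r, c in blocks]
    return np.array([1 if d > A else 0 for d in diffs])

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 99.0 if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def fitness(genes, cover, bits):
    A, mu = genes
    stego = embed(cover, bits, A, mu)
    ok = np.mean(extract(stego, cover, len(bits), A) == bits)
    return psnr(cover, stego) + 50 * (ok - 1)  # heavy penalty for bit errors

def ga_search(cover, bits, pop=20, gens=30):
    """Plain GA: truncation selection plus Gaussian mutation over (A, mu)."""
    genes = np.column_stack([rng.uniform(0.5, 8, pop),      # A
                             rng.uniform(0.05, 0.9, pop)])  # mu
    for _ in range(gens):
        scores = np.array([fitness(g, cover, bits) for g in genes])
        parents = genes[np.argsort(scores)[-pop // 2:]]
        children = parents[rng.integers(0, len(parents), pop - len(parents))]
        children = np.abs(children + rng.normal(0, [0.3, 0.03], children.shape))
        genes = np.vstack([parents, children])
    return genes[np.argmax([fitness(g, cover, bits) for g in genes])]

if __name__ == "__main__":
    cover = rng.integers(0, 256, (64, 64)).astype(np.float64)
    bits = rng.integers(0, 2, 64)
    A, mu = ga_search(cover, bits)
    print(f"GA-selected A={A:.2f}, mu={mu:.2f}")
```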

29 citations


Proceedings ArticleDOI
16 Dec 2008
TL;DR: A transform-domain data-hiding scheme for quality access control of images is proposed, in which watermark bits are detected using a minimum-distance decoder and the remaining self-noise due to information embedding is suppressed to provide better image quality.
Abstract: This paper proposes a transform-domain data-hiding scheme for quality access control of images. The original image is decomposed into tiles by applying an n-level lifting-based discrete wavelet transformation (DWT). A binary watermark image (external information) is spatially dispersed using a sequence of numbers generated by a secret key. The encoded watermark bits are then embedded into all DWT coefficients of the nth level and only into the high-high (HH) coefficients of the subsequent levels using dither modulation (DM), but without complete self-noise suppression. The insertion of external information inevitably degrades the visual quality of the host (cover) image, and the degree of deterioration depends on the amount of inserted data as well as on the step size used for DM. If this insertion process is reverted, a better-quality image can be accessed. To achieve that goal, the watermark bits are detected using a minimum-distance decoder, and the remaining self-noise due to information embedding is suppressed to provide a better-quality image. Simulation results validate this claim.
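The sketch below illustrates the core mechanics under simplifying assumptions: dither modulation in the HH subband of a one-level Haar DWT, a key-seeded permutation for spatial dispersion, and a minimum-distance decoder. The Haar transform stands in for the paper's n-level lifting DWT, the key value 12345 and the step size are arbitrary, and the final self-noise suppression step is omitted.

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar transform (stand-in for the lifting-based DWT)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0
    d = (x[0::2, :] - x[1::2, :]) / 2.0
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def haar_idwt2(ll, lh, hl, hh):
    a = np.zeros((ll.shape[0], ll.shape[1] * 2)); d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.zeros((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def dm_embed(coeffs, bits, delta=8.0):
    """Dither modulation: quantize each coefficient onto the lattice for its bit."""
    dither = np.where(bits == 1, delta / 2.0, 0.0)
    return np.round((coeffs - dither) / delta) * delta + dither

def dm_detect(coeffs, delta=8.0):
    """Minimum-distance decoder: pick the bit whose lattice is closest."""
    d0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
    d1 = np.abs(coeffs - (np.round((coeffs - delta / 2) / delta) * delta + delta / 2))
    return (d1 < d0).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.integers(0, 256, (64, 64)).astype(np.float64)
    ll, lh, hl, hh = haar_dwt2(img)

    bits = rng.integers(0, 2, hh.size)
    perm = np.random.default_rng(12345).permutation(hh.size)  # 12345 plays the secret key
    marked_hh = dm_embed(hh.ravel(), bits[perm]).reshape(hh.shape)

    low_quality = haar_idwt2(ll, lh, hl, marked_hh)            # access-controlled image
    print("mean |distortion|:", float(np.mean(np.abs(low_quality - img))))

    # Authorized decoder: detect the bits; the paper additionally suppresses the
    # remaining self-noise to restore full quality (omitted in this sketch).
    detected = dm_detect(marked_hh.ravel())
    print("bit error rate:", float(np.mean(detected != bits[perm])))
```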

26 citations


Journal ArticleDOI
01 Sep 2008
TL;DR: To detect corners in a gray-level image under imprecise information, an algorithm based on a fuzzy set-theoretic model is proposed, and its robustness is compared with that of well-known conventional detectors.
Abstract: Reliable corner detection is an important task in determining the shape of different regions in an image. To detect corners in a gray-level image under imprecise information, an algorithm based on a fuzzy set-theoretic model is proposed. The uncertainties arising from various types of imaging defects such as blurring, illumination change, and noise usually result in missed significant curvature junctions (corners). Fuzzy set-theoretic modeling is well known for efficiently handling imprecision. To handle the incompleteness arising from imperfect data, it is reasonable to model image properties in a fuzzy framework for reliable decision making. The robustness of the proposed algorithm is compared with that of well-known conventional detectors. The performance is tested on a number of benchmark test images to illustrate the efficiency of the algorithm.
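A toy illustration of the fuzzy-membership idea is sketched below: a conventional curvature-like response (here a Harris-style measure, which stands in for the paper's model) is mapped through a Zadeh S-function so that each pixel gets a graded cornerness in [0, 1] rather than a hard yes/no decision. The window size, the constant k, and the S-function breakpoints are arbitrary choices.

```python
import numpy as np

def harris_response(img, k=0.04, win=2):
    """Harris cornerness over a (2*win+1)^2 box window (stand-in curvature measure)."""
    gy, gx = np.gradient(img.astype(np.float64))
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy

    def box(a):  # box sum via a summed-area table
        p = np.pad(a, win, mode="edge")
        c = np.pad(np.cumsum(np.cumsum(p, 0), 1), ((1, 0), (1, 0)))
        n = 2 * win + 1
        return c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

def fuzzy_cornerness(img):
    """Map the raw response to a [0,1] membership with a Zadeh S-function."""
    r = np.clip(harris_response(img), 0, None)
    a, c = 0.01 * r.max(), 0.5 * r.max()
    b = (a + c) / 2
    mu = np.zeros_like(r)
    mu[r >= c] = 1.0
    m1 = (r > a) & (r <= b); mu[m1] = 2 * ((r[m1] - a) / (c - a)) ** 2
    m2 = (r > b) & (r < c);  mu[m2] = 1 - 2 * ((r[m2] - c) / (c - a)) ** 2
    return mu

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[16:48, 16:48] = 200.0   # a bright square: 4 corners
    mu = fuzzy_cornerness(img)
    ys, xs = np.where(mu > 0.9)
    print("high-membership corner pixels:", list(zip(ys.tolist(), xs.tolist()))[:8])
```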

19 citations


Journal ArticleDOI
TL;DR: A texture feature extraction scheme at multiple scales is proposed and the issues of rotation and gray-scale transform invariance as well as noise tolerance of a texture analysis system are discussed.
Abstract: In this paper, we propose a texture feature extraction scheme at multiple scales and discuss the issues of rotation and gray-scale transform invariance as well as noise tolerance of a texture analysis system. Nonseparable discrete wavelet frame analysis is employed, which gives an overcomplete wavelet decomposition of the image. The texture is decomposed into a set of frequency channels by a circularly symmetric wavelet filter, which in essence gives a measure of the edge magnitudes of the texture at different scales. The texture is characterized by local energies over small overlapping windows around each pixel at different scales. The features so extracted are used for multi-texture segmentation: a simple clustering algorithm is applied to this signature to achieve the desired segmentation. The performance of the segmentation algorithm is evaluated through extensive testing over various types of test images.
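The sketch below follows the same outline under simplifying assumptions: an undecimated, circularly symmetric band-pass filter bank (a difference of Gaussians rather than the paper's wavelet frame filter), local energy over small windows, and a tiny k-means clustering of the per-pixel signatures.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def frame_features(img, scales=(1, 2, 4), win=9):
    """Per-pixel multiscale signature: isotropic band-pass response + local energy."""
    img = img.astype(np.float64)
    feats = []
    for s in scales:
        band = gaussian_filter(img, s) - gaussian_filter(img, 2 * s)  # circularly symmetric
        feats.append(uniform_filter(np.abs(band), size=win))          # local edge energy
    return np.stack(feats, axis=-1)            # H x W x len(scales)

def kmeans(x, k=2, iters=25, seed=0):
    """Minimal k-means on the feature vectors (the 'simple clustering algorithm')."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Two synthetic textures: fine noise on the left, coarse (smoothed) noise on the right.
    left = rng.normal(0, 40, (64, 32))
    right = gaussian_filter(rng.normal(0, 40, (64, 32)), 3) * 4
    img = np.hstack([left, right])
    f = frame_features(img)
    labels = kmeans(f.reshape(-1, f.shape[-1]), k=2).reshape(img.shape)
    print("left/right majority labels:",
          int(np.round(labels[:, :32].mean())), int(np.round(labels[:, 32:].mean())))
```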

17 citations


Journal Article
TL;DR: A new fractal image compression technique using the theory of IFS with probabilities is proposed, which can be viewed as one solution to the problem of the huge computational cost of obtaining the fractal code of images.
Abstract: Approximation of an image by the attractor evolved through iterations of a set of contractive maps is usually known as fractal image compression. The set of maps is called an iterated function system (IFS). Several algorithms, with different motivations, have been suggested towards the solution of this problem, but so far the theory of IFS with probabilities has not been explored much in the context of image compression. In the present article, we propose a new technique of fractal image compression using the theory of IFS with probabilities. In the proposed algorithm, we use a multiscale division of the given image, either up to a predetermined level or up to the level at which no further division is required. At each level, the maps and the corresponding probabilities are computed from the gray-value information contained in that level and in the next higher level. Fine tuning of the algorithm remains to be done, but the most interesting aspect of the proposed technique is its extremely fast image encoding. It can be viewed as one solution to the problem of the huge computational cost of obtaining the fractal code of images.
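For background, the sketch below renders the attractor of an IFS with probabilities via the random-iteration (chaos game) algorithm, which is the decoding side of the theory the abstract builds on. The Sierpinski-triangle maps and the probability values are illustrative; the paper's encoder (fitting maps and probabilities to an image) is not reproduced.

```python
import numpy as np

def chaos_game(maps, probs, n_points=20000, seed=4):
    """Random iteration for an IFS with probabilities: each step applies one
    affine map (A, t), chosen according to its associated probability."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    pts = np.empty((n_points, 2))
    choices = rng.choice(len(maps), size=n_points, p=probs)
    for i, k in enumerate(choices):
        A, t = maps[k]
        x = A @ x + t
        pts[i] = x
    return pts[100:]                      # drop the initial transient

if __name__ == "__main__":
    # Sierpinski-triangle IFS: three contractions, here with unequal probabilities.
    half = np.eye(2) * 0.5
    maps = [(half, np.array([0.0, 0.0])),
            (half, np.array([0.5, 0.0])),
            (half, np.array([0.25, 0.5]))]
    probs = [0.2, 0.3, 0.5]               # the invariant measure concentrates where p is high
    pts = chaos_game(maps, probs)

    # Rasterize the attractor into a coarse occupancy grid.
    grid = np.zeros((32, 32), dtype=int)
    ij = np.clip((pts * 31).astype(int), 0, 31)
    grid[31 - ij[:, 1], ij[:, 0]] = 1
    print("occupied cells:", int(grid.sum()), "of", grid.size)
```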

7 citations