Author

B. Chandra Mohan

Bio: B. Chandra Mohan is an academic researcher from Bapatla Engineering College. The author has contributed to research in topics: Digital watermarking & Watermark. The author has an h-index of 11 and has co-authored 21 publications receiving 406 citations. Previous affiliations of B. Chandra Mohan include Jawaharlal Nehru Technological University, Hyderabad.

Papers
01 Jan 2013
TL;DR: The routing problem can be solved more effectively by achieving a higher successful path delivery rate than conventional routing algorithms.
Abstract: In this work, a routing algorithm suitable for Mobile Ad hoc Networks (MANETs) is proposed. MANETs become unstable as network mobility increases, and path selection is a critical task in routing algorithms. The proposed work addresses this problem by employing Ant Colony Optimization (ACO) and fuzzy logic techniques in the routing algorithm. The path information gathered by the ants is given to a Fuzzy Inference System (FIS) to compute score values for the available paths; based on these scores, the optimal paths are selected. Hence, the routing problem can be solved more effectively, achieving a higher successful path delivery rate than conventional routing algorithms. The technique is implemented and the results are compared with existing algorithms. The performance of the proposed algorithm is assessed using metrics such as distance and power consumption.
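The abstract's path-selection idea can be illustrated with a minimal sketch. All names and the scoring rule below are assumptions for illustration: a simple weighted fuzzy-membership score stands in for a full fuzzy inference system, scoring each ant-reported path on distance and power consumption.

```python
def membership_low(x, lo, hi):
    """Degree (0..1) to which x counts as 'low' on the range [lo, hi]."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def path_score(hops, energy_used, max_hops=10, max_energy=100.0):
    """Aggregate two fuzzy degrees: prefer short paths and low power use."""
    short = membership_low(hops, 1, max_hops)
    frugal = membership_low(energy_used, 0.0, max_energy)
    return 0.5 * short + 0.5 * frugal

def select_path(paths):
    """paths: list of (name, hops, energy_used); return the best-scoring name."""
    return max(paths, key=lambda p: path_score(p[1], p[2]))[0]

# Toy candidates reported by the ants: (name, hop count, energy used).
candidates = [("A", 3, 40.0), ("B", 7, 20.0), ("C", 2, 90.0)]
best = select_path(candidates)   # "A": short enough and frugal
```

A real FIS would apply rule-based inference and defuzzification; the weighted average here only conveys the shape of the score computation.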

6 citations

Proceedings ArticleDOI
01 Feb 2015
TL;DR: An efficient method for fusion of multifocus images based on the UDWT and contrast visibility is presented and evaluated with various image quality metrics such as Mutual Information (MI), Spatial Frequency (SF), and the edge-based image fusion metric (QAB/F).
Abstract: The objective of image fusion is to combine multiple input images into a single composite image. The Discrete Wavelet Transform (DWT) is a widely used mathematical technique in the analysis and synthesis of images. In this family, the Undecimated Discrete Wavelet Transform (UDWT) works more efficiently in real-time systems for combining multifocus images into a single composite image. Existing DWT-based methods for image fusion suffer from undesirable side effects such as blurring, which reduces the quality of the output image. The discrete implementation of the UDWT can be accomplished using the 'à trous' (with holes) algorithm. In this paper, an efficient method for fusion of multifocus images based on the UDWT and contrast visibility is presented. First, the images to be fused are convolved with a predefined kernel; then the edge features are extracted, and contrast visibility is calculated for the edge features. Finally, the fused image is obtained by merging all edge planes and the residual plane. Experimental results on several pairs of multifocus images verify that the proposed method is consistent and preserves more information compared to earlier methods such as Principal Component Analysis (PCA) and DWT fusion; the method is also evaluated with various image quality metrics such as Mutual Information (MI), Spatial Frequency (SF), and the edge-based image fusion metric (QAB/F).
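The contrast-visibility selection rule can be sketched as follows. This is an illustrative block-wise version, not the paper's full UDWT pipeline (no wavelet decomposition or edge-plane merging), and the function names and block size are assumptions.

```python
import numpy as np

def contrast_visibility(block):
    """Mean absolute deviation of a block, normalised by its mean intensity."""
    m = block.mean()
    if m == 0:
        return 0.0
    return np.abs(block - m).mean() / m

def fuse_blocks(img_a, img_b, bs=8):
    """Block-wise fusion: keep the block with the higher contrast visibility."""
    out = np.empty_like(img_a)
    h, w = img_a.shape
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            a = img_a[i:i + bs, j:j + bs]
            b = img_b[i:i + bs, j:j + bs]
            out[i:i + bs, j:j + bs] = (
                a if contrast_visibility(a) >= contrast_visibility(b) else b
            )
    return out

# Toy example: image A is sharp on the left half, B on the right half.
rng = np.random.default_rng(0)
sharp = rng.uniform(50, 200, (16, 16))
a = sharp.copy(); a[:, 8:] = 128.0   # right half of A is flat (defocused)
b = sharp.copy(); b[:, :8] = 128.0   # left half of B is flat
fused = fuse_blocks(a, b)            # recovers the sharp content from both
```

In the actual method, the same visibility criterion would be applied to UDWT edge planes rather than raw pixel blocks.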

6 citations

Proceedings ArticleDOI
26 Feb 2010
TL;DR: A robust algorithm for digital image watermarking based on the Human Visual System (HVS) is presented; the method is robust, and the watermark can survive many image attacks such as noise, bit plane removal, cropping, histogram equalization, rotation, and sharpening.
Abstract: This paper presents a robust algorithm for digital image watermarking based on the Human Visual System (HVS). The watermark is embedded in the Slant Transform domain by altering the transform coefficients. The perceptibility of the watermarked image under the proposed algorithm is improved over a DCT-based algorithm by embedding the watermark image in positions selected using the HVS weightage matrix. The proposed method is robust, and the watermark image can survive many image attacks such as noise, bit plane removal, cropping, histogram equalization, rotation, and sharpening. Results are compared with a DCT-based watermarking method and found to be superior in terms of the quality of the watermarked image and resilience to attacks. The metrics used to test the robustness of the proposed algorithm are Peak Signal to Noise Ratio (PSNR) and Normalized Cross Correlation (NCC).
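The two metrics named in the abstract are standard and easy to sketch; one common NCC definition is used below (the paper may use a different normalisation):

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak Signal to Noise Ratio in dB between two same-sized images."""
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ncc(w_ref, w_ext):
    """Normalized Cross Correlation between reference and extracted watermarks."""
    a = w_ref.astype(float).ravel()
    b = w_ext.astype(float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

An NCC near 1 indicates the extracted watermark closely matches the embedded one; PSNR measures how little the embedding (or an attack) distorts the host image.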

5 citations

Journal ArticleDOI
29 Feb 2012
TL;DR: Results indicate that the proposed face recognition system, which uses different local features with different distance measures, gives better recognition performance in terms of average recognition rate and retrieval time than existing methods.
Abstract: A face recognition system using different local features with different distance measures is proposed in this paper. The proposed method is fast and gives accurate recognition. The feature vector is based on eigenvalues, eigenvectors, and diagonal vectors of sub-images. Images are partitioned into sub-images to capture local features, and the sub-partitions are rearranged into vertical and horizontal matrices. Eigenvalues, eigenvectors, and diagonal vectors are computed for these matrices, and a global feature vector is generated for face recognition. Experiments are performed on the benchmark YALE face database. Results indicate that the proposed method gives better recognition performance in terms of average recognition rate and retrieval time compared to existing methods.
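The feature-extraction idea (eigen-based and diagonal features of sub-images, concatenated into a global vector and matched by a distance measure) can be roughly sketched as below. The block size, the symmetric matrix used for the eigendecomposition, and Euclidean matching are assumptions, not the paper's exact recipe.

```python
import numpy as np

def block_features(img, bs=4):
    """Per-block eigenvalues and diagonal entries, concatenated into one
    global feature vector (an illustrative stand-in for the paper's features)."""
    feats = []
    h, w = img.shape
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            blk = img[i:i + bs, j:j + bs].astype(float)
            # Eigenvalues of the symmetric matrix blk @ blk.T for this block.
            feats.append(np.linalg.eigvalsh(blk @ blk.T))
            feats.append(np.diag(blk))  # diagonal vector of the sub-image
    return np.concatenate(feats)

def nearest_face(query, gallery):
    """Match by Euclidean distance over the global feature vectors."""
    dists = [np.linalg.norm(query - g) for g in gallery]
    return int(np.argmin(dists))

# Toy gallery of three random "faces"; a query identical to image 2 matches it.
rng = np.random.default_rng(1)
gallery_imgs = [rng.uniform(0, 255, (8, 8)) for _ in range(3)]
gallery = [block_features(im) for im in gallery_imgs]
match = nearest_face(block_features(gallery_imgs[2]), gallery)   # index 2
```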

5 citations

Proceedings ArticleDOI
27 Dec 2009
TL;DR: This paper presents a new compression technique based on the Contourlet Transform (CT) and energy-based quantization; the proposed algorithm is superior to JPEG in terms of reduced blocking artifacts.
Abstract: This paper presents a new compression technique based on the Contourlet Transform (CT) and energy-based quantization. A double filter bank structure is used in the CT: the Laplacian Pyramid (LP) is used to capture point discontinuities, followed by a Directional Filter Bank (DFB) to link the point discontinuities. The coefficients of the downsampled low-pass version of the LP-decomposed image are re-ordered in a pre-determined manner, and a prediction algorithm is used to reduce entropy (bits/pixel). In addition, the coefficients of the CT are quantized based on the energy in the particular band. The superiority of the proposed algorithm over JPEG is observed in terms of reduced blocking artifacts. The results are also compared with the Wavelet Transform (WT); CT is observed to be superior to WT when the image contains more contours.
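The energy-based quantization idea can be sketched with a toy allocation rule: bands carrying a larger share of the total energy receive a finer quantization step. The specific step formula below is an assumption for illustration, not the paper's allocation.

```python
import numpy as np

def quantize_band(coeffs, total_energy, base_step=16.0):
    """Uniformly quantize one subband with a step that shrinks as the band's
    share of the total energy grows (illustrative allocation rule)."""
    band_energy = np.sum(coeffs.astype(float) ** 2)
    share = band_energy / total_energy if total_energy > 0 else 0.0
    step = base_step * (1.0 - 0.9 * share)   # high-energy band -> smaller step
    return np.round(coeffs / step) * step, step

# Toy example: one high-energy band and one low-energy band.
band_hi = np.full((2, 2), 100.0)
band_lo = np.full((2, 2), 1.0)
total = np.sum(band_hi ** 2) + np.sum(band_lo ** 2)
q_hi, step_hi = quantize_band(band_hi, total)   # fine step
q_lo, step_lo = quantize_band(band_lo, total)   # coarse step
```

The design intent is the usual rate-allocation trade-off: spend precision where the transform concentrates signal energy.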

5 citations


Cited by
Journal ArticleDOI
TL;DR: A comprehensive review conducting an intensive survey of the pros and cons, main architecture, and extended versions of this algorithm.

216 citations

Journal ArticleDOI
TL;DR: A robust digital image watermarking scheme based on singular value decomposition (SVD) and a tiny genetic algorithm (Tiny-GA) is proposed; experimental results demonstrate that the scheme is able to withstand a variety of image processing attacks.

167 citations

Journal ArticleDOI
TL;DR: This paper presents a robust image watermarking scheme for multimedia copyright protection that is more secure and robust to various attacks, viz., JPEG2000 compression, JPEG compression, rotation, scaling, cropping, row-column blanking, row-column copying, salt and pepper noise, filtering, and gamma correction.
Abstract: This paper presents a robust image watermarking scheme for multimedia copyright protection. In this work, the host image is partitioned into four sub-images. A watermark image such as a 'logo' is embedded in two of these sub-images, in both the D (singular, diagonal matrix) and U (left singular, orthogonal matrix) components of the Singular Value Decomposition (SVD) of the two sub-images. The watermark image is embedded in the D component using dither quantization. A copy of the watermark is embedded in the columns of the U matrix by comparing the coefficients of the U matrix against the watermark image. If extraction of the watermark from the D matrix is incomplete, there is a fair probability that it can be extracted from the U matrix. The proposed algorithm is more secure and robust to various attacks, viz., JPEG2000 compression, JPEG compression, rotation, scaling, cropping, row-column blanking, row-column copying, salt and pepper noise, filtering, and gamma correction. Superior experimental results are observed with the proposed algorithm over a recent scheme proposed by Chung et al. in terms of Bit Error Rate (BER), Normalized Cross Correlation (NC), and Peak Signal to Noise Ratio (PSNR).
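Dither quantization of the D matrix belongs to the quantization index modulation (QIM) family. A minimal QIM-on-singular-values sketch, with toy parameters and the assumption that the singular values are well separated so the marked block's re-decomposition preserves their order:

```python
import numpy as np

def embed_bits_qim(block, bits, delta=2.0):
    """Embed one bit per singular value by snapping it onto an even (bit 0)
    or odd (bit 1) quantization cell; a stand-in for dither quantization."""
    u, s, vt = np.linalg.svd(block.astype(float), full_matrices=False)
    s_marked = s.copy()
    for k, bit in enumerate(bits):
        q = np.floor(s[k] / delta)
        if int(q) % 2 != bit:
            q += 1                      # move to a cell of the right parity
        s_marked[k] = q * delta + delta / 2.0   # centre of the chosen cell
    return u @ np.diag(s_marked) @ vt

def extract_bits_qim(block, n_bits, delta=2.0):
    """Read each bit back from the parity of the singular value's cell."""
    s = np.linalg.svd(block.astype(float), compute_uv=False)
    return [int(np.floor(s[k] / delta)) % 2 for k in range(n_bits)]

# Toy host block with well-separated singular values.
host = np.diag([40.0, 20.0, 10.0])
marked = embed_bits_qim(host, [1, 0, 1])
```

Because each marked singular value sits at a cell centre, a perturbation smaller than delta/2 still decodes to the same bit, which is the source of the scheme's robustness.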

86 citations

01 Jan 2014
TL;DR: Various content-based image retrieval techniques available for retrieving the require and classify images are reviewed, and some basic features of any image, like shape, texture, color, are shown and different techniques to calculate them are shown.
Abstract: Various content-based image retrieval techniques are available for retrieving the require and classify images, we are reviewing them. In our first section, we are tending towards some basics of a particular CBIR system with that we have shown some basic features of any image, these are like shape, texture, color and shown different techniques to calculate them. In the next section, we have shown different distance measuring techniques used for similarity measurement of any image and also discussed indexing techniques. Finally conclusion and future scope is discussed.

81 citations

Journal ArticleDOI
TL;DR: A novel multiplicative watermarking scheme in the contourlet domain using the univariate and bivariate alpha-stable distributions is proposed and the robustness of the proposed bivariate Cauchy detector against various kinds of attacks is studied and shown to be superior to that of the generalized Gaussian detector.
Abstract: In the past decade, several schemes for digital image watermarking have been proposed to protect the copyright of an image document or to provide proof of ownership in some identifiable fashion. This paper proposes a novel multiplicative watermarking scheme in the contourlet domain. The effectiveness of a watermark detector depends highly on the modeling of the transform-domain coefficients. In view of this, we first investigate the modeling of the contourlet coefficients by the alpha-stable distributions. It is shown that the univariate alpha-stable distribution fits the empirical data more accurately than the formerly used distributions, such as the generalized Gaussian (GG) and Laplacian, do. We also show that the bivariate alpha-stable distribution can capture the across-scale dependencies of the contourlet coefficients. Motivated by the modeling results, a blind watermark detector in the contourlet domain is designed by using the univariate and bivariate alpha-stable distributions. It is shown that the detectors based on both of these distributions provide higher detection rates than a detector based on the GG distribution. However, a watermark detector designed for an alpha-stable distribution whose parameter α is other than 1 or 2 is computationally expensive because of the lack of a closed-form expression for the distribution in this case. Therefore, a watermark detector is designed based on the bivariate Cauchy member of the alpha-stable family, for which α = 1. The resulting design yields a significantly reduced-complexity detector and provides a performance that is much superior to that of the GG detector and very close to that of the detector corresponding to the best-fit alpha-stable distribution. The robustness of the proposed bivariate Cauchy detector against various kinds of attacks, such as noise, filtering, and compression, is studied and shown to be superior to that of the GG detector.
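The likelihood-ratio detection described above can be illustrated with a univariate Cauchy (α = 1) stand-in for the paper's bivariate detector. The dispersion, watermark strength, and synthetic data below are assumptions for illustration; the test statistic is the standard log-likelihood ratio for a multiplicative watermark under a Cauchy coefficient model.

```python
import numpy as np

def cauchy_logpdf(x, gamma_s):
    """Log-density of a zero-location Cauchy distribution with dispersion gamma_s."""
    return np.log(gamma_s / np.pi) - np.log(gamma_s ** 2 + x ** 2)

def llr_cauchy(y, w, alpha, gamma_s):
    """Log-likelihood ratio for a multiplicative watermark y_i = x_i*(1 + alpha*w_i),
    w_i in {-1, +1}: H1 (watermark w present) versus H0 (no watermark)."""
    scale = 1.0 + alpha * w
    # Under H1, y_i / scale_i is Cauchy; the -log|scale| term is the Jacobian.
    h1 = cauchy_logpdf(y / scale, gamma_s) - np.log(np.abs(scale))
    h0 = cauchy_logpdf(y, gamma_s)
    return float(np.sum(h1 - h0))

# Synthetic "contourlet coefficients" drawn from a Cauchy model.
rng = np.random.default_rng(42)
gamma_s, alpha, n = 1.0, 0.3, 5000
x = gamma_s * rng.standard_cauchy(n)
w = rng.choice([-1.0, 1.0], size=n)      # embedded watermark sequence
y = x * (1.0 + alpha * w)                # watermarked coefficients
detected = llr_cauchy(y, w, alpha, gamma_s) > 0.0
```

A positive statistic for the true watermark, and a lower one for an unrelated sequence, is what "detection" means here; the bivariate detector in the paper additionally models across-scale coefficient pairs.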

80 citations