scispace - formally typeset
Author

B. N. Chatterji

Bio: B. N. Chatterji is an academic researcher from B. P. Poddar Institute of Management & Technology. The author has contributed to research in topics: Wavelet & Curvelet. The author has an h-index of 5 and has co-authored 17 publications receiving 71 citations.

Papers
Journal ArticleDOI
TL;DR: This technique offers high-capacity data hiding by considering the maximum difference between neighboring pixels without compromising image quality, and can embed up to 3.19 bpp in the Baboon image, which is better than other existing state-of-the-art interpolation-based RDH (IRDH) techniques.
Abstract: Data hiding has been a noteworthy research topic in digital technology for years. Reversible data hiding (RDH) techniques play a vital role in ensuring the security of digital transmissions. A new 2-layer secure, high-capacity reversible data hiding technique is proposed, using the concepts of interpolation-based data hiding and the difference expansion method. Unlike state-of-the-art interpolation-based RDH techniques, security is enhanced in the proposed technique by concealing the data in the image pixels in a non-sequential manner. The technique offers high-capacity data hiding by considering the maximum difference between neighboring pixels, without compromising image quality. It can embed up to 3.19 bpp in the Baboon image, which is better than other existing state-of-the-art interpolation-based RDH (IRDH) techniques. The proposed technique is sustainable against the series of tests provided by the standard StirMark Benchmark 4.0 analysis, and it withstands steganalysis attacks such as StegExpose as a measure of security.
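The difference-expansion step that such schemes build on (Tian's classic method, sketched here on a single pixel pair; this is not the paper's exact two-layer interpolation variant, and the pixel values below are illustrative) embeds one bit into the difference of a pair while keeping their integer average invariant, so the embedding is perfectly reversible:

```python
def embed_bit(x, y, bit):
    """Embed one bit into a pixel pair (x, y) by difference expansion."""
    l = (x + y) // 2           # integer average (invariant under embedding)
    h = x - y                  # difference between the pair
    h2 = 2 * h + bit           # expand the difference and append the bit as its LSB
    return l + (h2 + 1) // 2, l - h2 // 2

def extract_bit(x2, y2):
    """Recover the embedded bit and the original pixel pair."""
    l = (x2 + y2) // 2         # the average survives embedding unchanged
    h2 = x2 - y2
    bit = h2 & 1               # the embedded bit is the LSB of the expanded difference
    h = h2 >> 1                # shift out the bit to recover the original difference
    return bit, l + (h + 1) // 2, l - h // 2

bit, x, y = extract_bit(*embed_bit(100, 98, 1))
print(bit, x, y)  # (1, 100, 98): the embedding is perfectly reversible
```

In a full IRDH scheme this step would be applied to interpolated pixels, with an overflow map for pairs whose expanded difference leaves the valid grey-level range.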

24 citations

Journal ArticleDOI
TL;DR: This paper aims at denoising the mammogram by the wavelet and the curvelet transform with a motive to investigate the role of the “embedded” thresholding algorithm.
Abstract: Mammography is an easy and affordable means of diagnosing breast cancer. Like other medical data, mammograms are also affected by noise during acquisition. Therefore, it is a challenge for researchers to denoise mammograms for clear data extraction. This paper aims at denoising the mammogram by the wavelet and the curvelet transform, with a motive to investigate the role of the “embedded” thresholding algorithm. As the thresholding technique is a key factor in noise reduction, a comprehensive study on the employment of the different types of thresholding techniques with the transforms has been presented methodically. A standard mammogram from the Mammographic Image Analysis Society database is supplemented with different types of noise and then denoised by the wavelet and the curvelet transforms using the three commonly used thresholding techniques, to compare the denoising performance of the thresholding algorithms along with the transforms. This investigation renders a clear insight to th...
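Two of the thresholding rules most commonly compared in such transform-domain denoising studies are hard and soft thresholding, each applied to every wavelet or curvelet coefficient independently. A minimal sketch (the coefficient values and threshold are illustrative):

```python
import math

def hard_threshold(c, t):
    """Keep a coefficient unchanged only if its magnitude exceeds the threshold."""
    return c if abs(c) > t else 0.0

def soft_threshold(c, t):
    """Zero small coefficients and shrink the surviving ones toward zero by t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

coeffs = [0.2, -3.1, 1.4, -0.6]
print([hard_threshold(c, 1.0) for c in coeffs])  # [0.0, -3.1, 1.4, 0.0]
print([soft_threshold(c, 1.0) for c in coeffs])  # small coefficients vanish, large ones shrink
```

Hard thresholding preserves the amplitude of strong features but can leave noisy spikes; soft thresholding gives smoother reconstructions at the cost of slightly attenuated edges, which is why comparisons like the one above matter for mammograms.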

20 citations

Journal ArticleDOI
TL;DR: A similarity measure based on fuzzy correlations is used in order to establish the corner correspondence between sequence images in the presence of intensity variations and motion blur, and proves the superiority of this algorithm over standard and zero-mean cross-correlation.

16 citations

Journal ArticleDOI
TL;DR: A two-stage fuzzy set theoretic approach to image thresholding utilizing the measure of fuzziness to evaluate the fuzziness of an image and to determine an adequate threshold value is proposed.
Abstract: Thresholding is a fast, popular and computationally inexpensive segmentation technique that is often critical and decisive in image processing applications. The result of image thresholding is not always satisfactory because of the presence of noise and of vagueness and ambiguity among the classes. Since the theory of fuzzy sets is a generalization of classical set theory, it has greater flexibility to capture faithfully the various aspects of incompleteness or imperfectness in the available information. To overcome this problem, in this paper we propose a two-stage fuzzy set theoretic approach to image thresholding, utilizing a measure of fuzziness to evaluate the fuzziness of an image and to determine an adequate threshold value. First, images are preprocessed to reduce noise without any loss of image detail by fuzzy rule-based filtering, and then in the final stage a suitable threshold is determined with the help of a fuzziness measure as a criterion function. Experimental results on test images have demonstrated the effectiveness of this method.
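The second stage, choosing the threshold that minimises a fuzziness measure, can be sketched as follows. This is a minimal illustration in the spirit of Huang and Wang's classic fuzziness-based thresholding (Shannon's function as the fuzziness measure, distance-to-class-mean membership); the toy pixel list is illustrative, not the paper's exact criterion:

```python
import math

def fuzziness(pixels, t):
    """Total fuzziness of the two classes induced by threshold t."""
    lo = [g for g in pixels if g <= t]
    hi = [g for g in pixels if g > t]
    m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)   # class means
    c = max(pixels) - min(pixels)                   # normalising constant
    total = 0.0
    for g in pixels:
        # Membership: 1 at the class mean, decaying with distance from it.
        mu = 1.0 / (1.0 + abs(g - (m0 if g <= t else m1)) / c)
        if 0.0 < mu < 1.0:                          # Shannon's function of the membership
            total += -mu * math.log(mu) - (1 - mu) * math.log(1 - mu)
    return total

def best_threshold(pixels):
    """Pick the grey level whose split leaves the image least fuzzy."""
    candidates = sorted(set(pixels))[:-1]           # keep both classes non-empty
    return min(candidates, key=lambda t: fuzziness(pixels, t))

pixels = [10, 11, 12, 11, 200, 205, 210, 198]
print(best_threshold(pixels))  # a grey level separating the two clusters
```

A well-placed threshold leaves every pixel close to its class mean, so memberships sit near 1 and the total fuzziness is small; a misplaced threshold drags memberships toward 0.5, where Shannon's function peaks.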

7 citations

Journal ArticleDOI
TL;DR: An attempt has been made to review texture features across spatial-domain, wavelet-based and miscellaneous methods, and the paper discusses the need for comparison of the methods and future directions in this area.
Abstract: Texture features are widely used for matching in content-based image retrieval, and since the mid-1990s there has been considerable R&D activity in this area. In this paper an attempt has been made to review these works. Seventy-one papers were reviewed and classified into spatial-domain, wavelet-based and miscellaneous methods. These methods were further sub-classified on the basis of the mathematical techniques used or the algorithms applied. A very brief description of each method is given. Finally, the paper discusses the need for comparison of the methods and future directions in this area.

7 citations


Cited by
Journal ArticleDOI
TL;DR: Different available approaches of dental X-ray image segmentation are reviewed and their advantages, disadvantages, and limitations are discussed.
Abstract: With a wide variety of research on image segmentation techniques in the biomedical and bioinformatics areas, it is important to analyze the performance of these approaches on specific problems. Image segmentation is one of the most significant processes of dental X-ray image analysis. Therefore, to obtain a proper result, it is necessary to apply an accurate and efficient segmentation approach that has proved itself in X-ray image segmentation. The aim of this review paper is to understand the different image segmentation approaches that have been used for dental X-ray image analysis in past studies. In this paper, the different available approaches to dental X-ray image segmentation are reviewed, and their advantages, disadvantages, and limitations are discussed.

69 citations

Journal ArticleDOI
TL;DR: This survey identifies the ML approaches with better accuracy for medical diagnosis by radiologists using machine learning methods on MRI, US, X-ray and skin lesion images.
Abstract: Background This paper attempts to identify a suitable Machine Learning (ML) approach for image denoising in radiology-based medical applications. The identification of the ML approach is based on (i) a review of ML approaches for denoising and (ii) a review of suitable medical denoising approaches. Discussion The review focuses on six applications of radiology: medical ultrasound (US) for fetus development, US computer-aided diagnosis (CAD) and detection for the breast, skin lesions, brain tumor MRI diagnosis, X-ray for chest analysis, and breast cancer using MRI imaging. This survey identifies the ML approaches with better accuracy for medical diagnosis by radiologists. The image denoising approaches covered further include basic filtering techniques, wavelet medical denoising, curvelet and optimization techniques. In most of the applications, machine learning performance is better than that of conventional image denoising techniques. For fast computational results, radiologists are using machine learning methods on MRI, US, X-ray and skin lesion images. The characteristics and contributions of different ML approaches are considered in this paper. Conclusion The problems faced by researchers with image denoising techniques and machine learning applications in clinical settings are also discussed.

62 citations

Journal ArticleDOI
TL;DR: Denoising experiments show that in the heavy-noise condition, the proposed denoising model can suppress the heavy noise effectively and preserve image detail better than several state-of-the-art methods.
Abstract: To suppress the heavy noise and keep the distinct edges of images captured in low-light conditions, we propose a denoising model based on the combination of total variation (TV) and nonlocal similarity in the wavelet domain. The TV regularization in the wavelet domain effectively suppresses the heavy noise with the biorthogonal wavelet function; the nonlocal similarity regularization improves the fine image details. Denoising experiments on artificially degraded and low-light images show that in the heavy-noise condition, the proposed denoising model suppresses the heavy noise effectively and preserves image detail better than several state-of-the-art methods.
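The two regularisers this model combines can each be sketched in one dimension. This is a toy illustration of the underlying ideas, not the paper's wavelet-domain formulation; the signal and the filtering parameter are illustrative:

```python
import math

def total_variation(u):
    """TV seminorm: sum of absolute jumps. Noise inflates it; clean edges survive it."""
    return sum(abs(a - b) for a, b in zip(u[1:], u[:-1]))

def nonlocal_means_1d(u, h=1.0):
    """Average each sample with all others, weighted by similarity of their values.

    Similar samples anywhere in the signal reinforce each other, which is the
    'nonlocal similarity' idea used to preserve fine repeated detail.
    """
    out = []
    for ui in u:
        w = [math.exp(-((ui - uj) ** 2) / (h * h)) for uj in u]
        out.append(sum(wi * uj for wi, uj in zip(w, u)) / sum(w))
    return out

print(total_variation([1, 3, 2]))          # 3: |3-1| + |2-3|
print(nonlocal_means_1d([5.0, 5.0, 5.0]))  # a flat signal is left untouched
```

Minimising a weighted sum of a fidelity term, the TV seminorm and a nonlocal penalty is what trades noise suppression against edge and detail preservation in models of this kind.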

47 citations

Journal ArticleDOI
17 Mar 2016-PLOS ONE
TL;DR: New descriptor similarity metrics based on normalized eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique.
Abstract: A spatially invariant feature matching method is proposed. Deformation effects, such as affine and homography transformations, change the local information within the image and can result in ambiguous local information pertaining to image points. A new method based on dissimilarity values, which measure the dissimilarity of the features along a path based on eigenvector properties, is proposed. Evidence shows that existing matching techniques using similarity metrics (such as normalized cross-correlation, the squared sum of intensity differences and the correlation coefficient) are insufficient for achieving adequate results under different image deformations. Thus, new descriptor similarity metrics based on normalized eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted from the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The output results demonstrate that the proposed technique outperforms the other methods when evaluated on a standard dataset, in terms of precision-recall and corner correspondence.
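For context, the baseline metric the abstract argues is insufficient under deformation, normalized cross-correlation between two descriptors, is simply (the descriptor vectors below are illustrative):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation: +1 for identical shape, -1 for inverted shape."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]                         # remove the mean (brightness offset)
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den                                 # normalise away contrast scaling

print(ncc([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0: invariant to affine intensity changes
```

NCC is invariant to affine *intensity* changes but compares descriptors element-by-element, so geometric deformations that re-order or warp the local signal break it, which is the gap the eigenvector-based dissimilarity metrics aim to close.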

39 citations

Journal ArticleDOI
TL;DR: The proposed corner detector using the magnitude responses of the imaginary part of the Gabor filters on contours is compared with five state-of-the-art detectors and reveals that the proposed detector is more competitive with respect to detection accuracy, localisation accuracy, affine transforms and noise-robustness.
Abstract: This study proposes a contour-based corner detector using the magnitude responses of the imaginary part of the Gabor filters on contours. Unlike the traditional contour-based methods that detect corners by analysing the shape of the edge contours and searching for local curvature maxima points on planar curves, the proposed corner detector combines the pixels of the edge contours and their corresponding grey-variation information. Firstly, edge contours are extracted from the original image using Canny edge detector. Secondly, the imaginary parts of the Gabor filters are used to smooth the pixels on the edge contours. At each edge pixel, the magnitude responses at each direction are normalised by their values and the sum of the normalised magnitude response at each direction is used to extract corners from edge contours. Thirdly, both the magnitude response threshold and the angle threshold are used to remove the weak or false corners. Finally, the proposed detector is compared with five state-of-the-art detectors on some grey-level images. The results from the experiment reveal that the proposed detector is more competitive with respect to detection accuracy, localisation accuracy, affine transforms and noise-robustness.
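The key ingredient, the imaginary (odd-symmetric) part of a Gabor filter, gives zero response on flat grey-level runs and a large-magnitude response at discontinuities, which is why its normalised magnitude responses along a contour highlight corners. A one-dimensional analogue (kernel width, frequency and the step signal are illustrative; the real detector works on 2D contours at several orientations):

```python
import math

def odd_gabor_kernel(radius=4, sigma=2.0, omega=1.0):
    """Imaginary part of a Gabor filter: a Gaussian-windowed sine (antisymmetric)."""
    return [math.exp(-k * k / (2 * sigma * sigma)) * math.sin(omega * k)
            for k in range(-radius, radius + 1)]

def response(signal, kernel):
    """Valid-mode correlation: flat regions give zero, edges give large magnitude."""
    r = len(kernel) // 2
    return [sum(kernel[k + r] * signal[i + k] for k in range(-r, r + 1))
            for i in range(r, len(signal) - r)]

signal = [0.0] * 10 + [1.0] * 10        # a step discontinuity at index 10
mags = [abs(v) for v in response(signal, odd_gabor_kernel())]
peak = mags.index(max(mags)) + 4        # offset back to signal coordinates
print(peak)                             # the strongest response sits at the step
```

In the detector described above, such magnitude responses are computed at each direction for every contour pixel, normalised and summed, and the magnitude and angle thresholds then prune weak or false corners.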

32 citations