
Showing papers on "Contourlet published in 2018"


Journal ArticleDOI
TL;DR: The Non-Subsampled Contourlet Transform (NSCT) is used to enhance the brain image, and texture features are then extracted from the enhanced image to identify tumor regions in Glioma brain images.

98 citations


Journal ArticleDOI
TL;DR: This paper proposes a learning-based approach for automatic detection of fabric defects, based on a statistical representation of fabric patterns in the redundant contourlet transform (RCT) domain modeled with a finite mixture of generalized Gaussians (MoGG).
Abstract: We propose a learning-based approach for automatic detection of fabric defects. Our approach is based on a statistical representation of fabric patterns using the redundant contourlet transform (RCT). The distribution of the RCT coefficients is modeled using a finite mixture of generalized Gaussians (MoGG), which constitutes a statistical signature distinguishing between defective and defect-free fabrics. In addition to being compact and fast to compute, these signatures enable accurate localization of defects. Our defect detection system is based on three main steps. In the first step, preprocessing is applied to detect the basic pattern size for image decomposition and signature calculation. In the second step, labeled fabric samples are used to train a Bayes classifier (BC) to discriminate between defect-free and defective fabrics. Finally, defects are detected during image inspection by testing local patches with the learned BC. Our approach can deal with multiple types of textile fabrics, from simple to more complex ones. Experiments on the TILDA database have demonstrated that our method yields better results than recent state-of-the-art methods. Note to Practitioners: Fabric defect detection is central to automated visual inspection and quality control in textile manufacturing. This paper addresses the problem through a learning-based approach. In contrast to several existing approaches for fabric defect detection, which are effective for only some types of fabrics and/or defects, our method can deal with almost all types of patterned fabric and defects. To enable both detection and localization of defects, a fabric image is first divided into local blocks that represent the repetitive pattern structure of the fabric. Then, statistical signatures are calculated by modeling the distribution of RCT coefficients with the finite MoGG. The discrimination between defect-free and defective fabrics is then achieved through supervised classification of RCT-MoGG signatures based on expert-labeled examples of defective fabric images. Experiments have shown that our method yields very good performance in terms of defect detection and localization. Beyond its accuracy, image inspection can be performed fully automatically; only labeled examples are required up front. Finally, our method can be easily adapted to a real-time scenario, since defect detection is performed at the block level, which can be easily parallelized through hardware implementation.
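As a rough illustration of the signature idea, the sketch below fits a single generalized Gaussian to a set of coefficients by moment matching: the ratio E[x^2] / E[|x|]^2 equals Γ(1/β)Γ(3/β)/Γ(2/β)^2 for shape β, so a bisection recovers β. This is a simplified stand-in, not the paper's RCT decomposition or full MoGG mixture fit:

```python
import numpy as np
from math import gamma

def gg_shape_estimate(coeffs, lo=0.1, hi=5.0, iters=60):
    """Moment-matching estimate of the generalized-Gaussian shape
    parameter beta for zero-mean coefficients: solve f(b) = r where
    r = E[x^2]/E[|x|]^2 and f(b) = G(1/b)G(3/b)/G(2/b)^2 (decreasing in b)."""
    x = np.asarray(coeffs, dtype=float)
    r = np.mean(x ** 2) / np.mean(np.abs(x)) ** 2
    f = lambda b: gamma(1 / b) * gamma(3 / b) / gamma(2 / b) ** 2
    for _ in range(iters):           # bisection on the monotone ratio
        mid = 0.5 * (lo + hi)
        if f(mid) > r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
# Gaussian data has beta = 2, Laplacian data has beta = 1.
b_gauss = gg_shape_estimate(rng.normal(size=200_000))
b_lapl = gg_shape_estimate(rng.laplace(size=200_000))
print(round(b_gauss, 2), round(b_lapl, 2))
```

A mixture (the MoGG) would fit several such components with responsibilities, typically by EM; the single-component fit above is only the building block.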

94 citations


Journal ArticleDOI
TL;DR: A set of Histopathological Breast-Cancer images is classified using a state-of-the-art CNN model containing a residual block, and the performance of this novel CNN model as a Histopathological image classifier is examined.
Abstract: Identification of the malignancy of tissues from Histopathological images has always been an issue of concern to doctors and radiologists. This task is time-consuming, tedious and very challenging. Success in finding malignancy from Histopathological images primarily depends on long-term experience, though sometimes experts disagree on their decisions. However, Computer Aided Diagnosis (CAD) techniques help the radiologist to give a second opinion that can increase the reliability of the radiologist's decision. Among the different image analysis techniques, classification of the images has always been a challenging task. Due to the intense complexity of biomedical images, it is always very challenging to provide a reliable decision about an image. The state-of-the-art Convolutional Neural Network (CNN) technique has had great success in natural image classification. Utilizing advanced engineering techniques along with the CNN, in this paper, we have classified a set of Histopathological Breast-Cancer (BC) images utilizing a state-of-the-art CNN model containing a residual block. Conventional CNN operation takes raw images as input and extracts the global features; however, the object-oriented local features also contain significant information; for example, the Local Binary Pattern (LBP) represents the effective textural information, the Histogram represents the pixel-strength distribution, the Contourlet Transform (CT) gives detailed information about the smoothness of the edges, and the Discrete Fourier Transform (DFT) derives frequency-domain information from the image. Utilizing these advantages, along with our proposed novel CNN model, we have examined the performance of the novel CNN model as a Histopathological image classifier.
To do so, we have introduced five cases: (a) Convolutional Neural Network Raw Image (CNN-I); (b) Convolutional Neural Network CT Histogram (CNN-CH); (c) Convolutional Neural Network CT LBP (CNN-CL); (d) Convolutional Neural Network Discrete Fourier Transform (CNN-DF); (e) Convolutional Neural Network Discrete Cosine Transform (CNN-DC). We have performed our experiments on the BreakHis image dataset. The best performance is achieved when we utilize the CNN-CH model on a 200× dataset that provides Accuracy, Sensitivity, False Positive Rate, False Negative Rate, Recall Value, Precision and F-measure of 92.19%, 94.94%, 5.07%, 1.70%, 98.20%, 98.00% and 98.00%, respectively.
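Of the local descriptors listed above, the LBP is easy to sketch. The minimal 8-neighbour variant below (a generic illustration, not the exact configuration used in the paper) thresholds each interior pixel's neighbours at the centre value and packs the eight comparison bits into one byte:

```python
import numpy as np

def lbp_8neighbor(img):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel is
    encoded by thresholding its 8 neighbours at the centre value and
    packing the results (clockwise from top-left) into a uint8 code."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                      # centre pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

flat = np.full((4, 4), 7.0)   # constant patch: every neighbour >= centre
print(lbp_8neighbor(flat))    # all codes are 255
```

A histogram of these codes over an image block is the textural feature that would feed the classifier.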

91 citations


Journal ArticleDOI
TL;DR: This paper proposes the Discrete Shearlet Transform (DST) as a new embedding domain for blind image watermarking; the proposed method shows greater windowing flexibility, with more sensitivity to directional and anisotropic features, when compared against discrete wavelets and contourlets.
Abstract: Blind watermarking targets the challenging recovery of the watermark when the host is not available during the detection stage. This paper proposes the Discrete Shearlet Transform (DST) as a new embedding domain for blind image watermarking. Our novel DST blind watermark detection system uses a nonadditive scheme based on statistical decision theory. It first computes the Probability Density Function (PDF) of the DST coefficients, modeled as a Laplacian distribution. The resulting likelihood ratio is compared with a decision threshold calculated using the Neyman–Pearson criterion to minimize the missed-detection probability subject to a fixed false-alarm probability. Our method is evaluated in terms of imperceptibility, robustness, and payload against different attacks (Gaussian noise, blurring, cropping, compression, and rotation) using 30 standard grayscale images covering different characteristics (smooth, more complex with a lot of edges, and highly detailed textured regions). The proposed method shows greater windowing flexibility, with more sensitivity to directional and anisotropic features, when compared against discrete wavelets and contourlets.
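The detection principle can be sketched numerically. The toy below models subband coefficients as Laplacian, forms the log-likelihood ratio between the two hypotheses, and sets the threshold at the (1 - Pfa) quantile of the H0 statistic, per the Neyman-Pearson criterion. Modeling watermark presence as a slightly increased Laplacian scale is an illustrative assumption; the paper's exact nonadditive detector is not reproduced:

```python
import numpy as np

def laplace_logpdf(x, b):
    """Log-density of a zero-mean Laplacian with scale b."""
    return -np.log(2 * b) - np.abs(x) / b

def llr(x, b0, b1):
    """Log-likelihood ratio: watermarked (scale b1) vs. unmarked (scale b0)."""
    return float(np.sum(laplace_logpdf(x, b1) - laplace_logpdf(x, b0)))

rng = np.random.default_rng(0)
b0, b1, n, pfa = 1.0, 1.2, 1000, 0.01

# Neyman-Pearson threshold: (1 - pfa) quantile of the LLR under H0,
# estimated by Monte Carlo over unmarked coefficient vectors.
llr_h0 = np.array([llr(rng.laplace(scale=b0, size=n), b0, b1)
                   for _ in range(3000)])
thr = np.quantile(llr_h0, 1 - pfa)

# Detection rate when the "watermark" (larger scale) is present.
llr_h1 = np.array([llr(rng.laplace(scale=b1, size=n), b0, b1)
                   for _ in range(500)])
det_rate = float(np.mean(llr_h1 > thr))
print(round(det_rate, 3))
```

By construction the false-alarm rate sits at pfa while the detection rate is maximized for that constraint, which is exactly the Neyman-Pearson trade-off the abstract describes.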

70 citations


Journal ArticleDOI
TL;DR: This study uses the likelihood ratio decision rule and the t location-scale distribution to design an optimal multiplicative watermark detector, which shows higher efficiency and robustness against different attacks, and derives the receiver operating characteristic (ROC) analytically.

53 citations


Journal ArticleDOI
TL;DR: This paper reviews state-of-the-art transformation techniques for reducing speckle noise in medical ultrasound images, considering the challenges of compressing medical images to remove extraneous information using the non-subsampled contourlet transform.

45 citations


Journal ArticleDOI
TL;DR: Experimental results reveal that the proposed image encryption technique provides better computational speed and high encryption intensity than recently developed well-known meta-heuristic based image encryption techniques.
Abstract: In this paper, an efficient image encryption technique using beta chaotic map, nonsubsampled contourlet transform, and genetic algorithm is proposed. Initially, the nonsubsampled contourlet transfo...

44 citations


Book ChapterDOI
01 Jan 2018
TL;DR: A new hybrid transform-domain technique for medical image watermarking is discussed, and high robustness against geometrical and signal-processing attacks is demonstrated in terms of peak signal-to-noise ratio (PSNR) and correlation coefficient (CC).
Abstract: Medical images are of high importance, and patient data must be kept confidential. In this chapter, we discuss a new hybrid transform-domain technique for medical image watermarking and provide a detailed analysis of existing image watermarking methods. The proposed method uses a combination of nonsubsampled contourlet transform (NSCT), discrete cosine transform (DCT) and singular value decomposition (SVD) to achieve high capacity, robustness and imperceptibility. This method is non-blind: it requires the cover image at the receiver to extract the watermark. Cover and watermark images are pre-processed in order to ensure accurate extraction of the watermark. In this approach, medical images serve as the cover, and an electronic patient record (EPR) is used as the secret message. The EPR message is embedded into a selected sub-band of the cover image with a selected gain factor so as to achieve a good trade-off among imperceptibility, robustness and capacity. NSCT increases hiding capacity and is more resistant to geometrical attacks. The combination of NSCT with DCT and SVD enhances the perceptual quality and security of the watermarked image. Experiments demonstrated that the proposed method provides high robustness against geometrical and signal-processing attacks in terms of peak signal-to-noise ratio (PSNR) and correlation coefficient (CC).
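The SVD stage of such a pipeline can be sketched in isolation. The toy scheme below (an illustration only, not the chapter's full NSCT+DCT+SVD method) embeds one bit per block by scaling the block's largest singular value and, being non-blind, needs the original block at extraction time:

```python
import numpy as np

def embed_bit(block, bit, alpha=0.05):
    """Embed one bit by scaling the largest singular value of the block
    up (bit 1) or down (bit 0) by a small gain factor alpha."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    s = s.copy()
    s[0] *= (1 + alpha) if bit else (1 - alpha)
    return U @ np.diag(s) @ Vt

def extract_bit(marked, original):
    """Non-blind extraction: compare the marked block's largest singular
    value with that of the original (cover) block."""
    s_m = np.linalg.svd(marked, compute_uv=False)
    s_o = np.linalg.svd(original, compute_uv=False)
    return int(s_m[0] > s_o[0])

rng = np.random.default_rng(1)
cover = rng.random((8, 8))
print(extract_bit(embed_bit(cover, 1), cover),
      extract_bit(embed_bit(cover, 0), cover))   # 1 0
```

In the chapter's setting the "block" would be an NSCT/DCT sub-band rather than a raw pixel block, and the gain factor controls the imperceptibility-robustness trade-off.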

42 citations


Journal ArticleDOI
TL;DR: The experimental results reveal that the proposed differential evolution-based image encryption technique outperforms other existing techniques in terms of security and visual quality.
Abstract: The main challenges of image encryption are robustness against attacks, key space, key sensitivity, and diffusion. To deal with these challenges, a differential evolution-based image encryption technique is proposed. The proposed technique utilises two concepts to encrypt images efficiently. The first is the Arnold transform, which permutes the pixel positions of an input image to generate a scrambled image. The second is differential evolution, which is used to tune the parameters required by a beta chaotic map, since the beta chaotic map suffers from a parameter-tuning issue. The entropy of an encrypted image is used as the fitness function. The proposed technique is compared with seven well-known image encryption techniques over five well-known images. The experimental results reveal that the proposed technique outperforms the existing techniques in terms of security and visual quality.
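The Arnold-transform scrambling step is compact enough to sketch. On an N x N image the cat map sends pixel (x, y) to ((x + y) mod N, (x + 2y) mod N); the map is a bijection, so gathering through the same index map inverts each scrambling round exactly:

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Arnold (cat-map) pixel permutation of a square N x N image:
    pixel (x, y) moves to ((x + y) % N, (x + 2y) % N) each round."""
    out = np.asarray(img).copy()
    N = out.shape[0]
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    nx, ny = (x + y) % N, (x + 2 * y) % N
    for _ in range(iterations):
        scr = np.empty_like(out)
        scr[nx, ny] = out          # scatter each pixel to its new position
        out = scr
    return out

def arnold_unscramble(img, iterations=1):
    """Inverse: gather from the forward-mapped positions, once per round."""
    out = np.asarray(img).copy()
    N = out.shape[0]
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    nx, ny = (x + y) % N, (x + 2 * y) % N
    for _ in range(iterations):
        out = out[nx, ny]
    return out

img = np.arange(25).reshape(5, 5)
scr = arnold_scramble(img, 3)
print(np.array_equal(arnold_unscramble(scr, 3), img))   # True
```

Scrambling only permutes positions (confusion); the chaotic-map stage of the paper would then alter pixel values (diffusion) on top of this.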

38 citations


Journal ArticleDOI
29 Mar 2018-Sensors
TL;DR: In this article, the authors used artificially generated noise to analyze and estimate the Poisson-Gaussian noise of low-dose X-ray images in the NSCT domain, and the noise distribution of the subband coefficients was analyzed using the noiseless low-band coefficients and the variance of the noisy subbands coefficients.
Abstract: The noise distribution of images obtained by X-ray sensors in low-dosage situations can be analyzed using the Poisson and Gaussian mixture model. Multiscale conversion is one of the most popular noise reduction methods used in recent years. Estimation of the noise distribution of each subband in the multiscale domain is the most important factor in performing noise reduction, with non-subsampled contourlet transform (NSCT) representing an effective method for scale and direction decomposition. In this study, we use artificially generated noise to analyze and estimate the Poisson–Gaussian noise of low-dose X-ray images in the NSCT domain. The noise distribution of the subband coefficients is analyzed using the noiseless low-band coefficients and the variance of the noisy subband coefficients. The noise-after-transform also follows a Poisson–Gaussian distribution, and the relationship between the noise parameters of the subband and the full-band image is identified. We then analyze noise of actual images to validate the theoretical analysis. Comparison of the proposed noise estimation method with an existing noise reduction method confirms that the proposed method outperforms traditional methods.
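The mean-variance relationship underlying such estimation can be demonstrated on synthetic data: under a Poisson-Gaussian model, Var[y | x] = alpha * x + sigma^2, so a linear fit of variances against means recovers both noise parameters. This is a simplified full-band illustration with made-up parameters, not the paper's NSCT subband analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true, sigma_true = 2.0, 3.0

# Synthetic observations y = alpha * Poisson(x / alpha) + N(0, sigma) at
# several intensity levels, so E[y|x] ~ x and Var[y|x] = alpha*x + sigma^2.
levels = np.array([10.0, 30.0, 60.0, 100.0, 160.0])
means, variances = [], []
for x in levels:
    y = alpha_true * rng.poisson(x / alpha_true, size=50_000) \
        + rng.normal(0.0, sigma_true, size=50_000)
    means.append(y.mean())
    variances.append(y.var())

# Linear fit Var = alpha * mean + sigma^2 recovers the noise parameters.
alpha_est, sig2_est = np.polyfit(means, variances, 1)
print(round(alpha_est, 2), round(float(np.sqrt(sig2_est)), 2))
```

The paper's contribution is showing how these parameters transform into each NSCT subband; the fit above is the full-band baseline that such an analysis starts from.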

36 citations


Journal ArticleDOI
TL;DR: RNAMlet, as discussed by the authors, uses the non-symmetry and anti-packing pattern representation model (NAM) to decompose an image asymmetrically into a set of rectangular blocks according to the gray-value changes of image pixels.

Report SeriesDOI
07 Nov 2018
TL;DR: A novel image representation method called the "image elementary manifold" is proposed; an image elementary manifold can represent all the basis functions lying in the same manifold, and a fast elementary-manifold-based image decomposition and reconstruction algorithm is given.
Abstract: The image basis function plays a key role in image information analysis. Due to the complex geometric structure in images, a better image basis or frame often has a very large family, with a large number of basis functions lying in a lower-dimensional manifold, such as the 2D Gabor functions and Contourlets used in image texture analysis; the corresponding image transform and analysis then become very time-consuming. In this article, we propose a novel image representation method called the "image elementary manifold", where an image elementary manifold can represent all the basis functions lying in the same manifold. A fast elementary-manifold-based image decomposition and reconstruction algorithm is given. Compared to traditional image representation methods, elementary-manifold-based image analysis reduces time consumption, discovers the latent intrinsic structure of images more efficiently, and provides the possibility of empirical prediction. Finally, many experiments show the feasibility of the image elementary manifold in image analysis.

Journal ArticleDOI
TL;DR: A novel multi-focus image fusion method based on focused-region boundary finding and multi-scale transform (MST) is proposed, which can accurately determine the focused regions while obtaining a better fused boundary region.

Journal ArticleDOI
TL;DR: A unified framework for the simultaneous detection of both AT and GT has been proposed in this article, which uses the multiscale geometric analysis of the Non-Subsampled Contourlet Transform (NSCT) for feature extraction from the video frames.
Abstract: The fundamental step in video content analysis is the temporal segmentation of a video stream into shots, known as Shot Boundary Detection (SBD). A sudden transition from one shot to another is known as an Abrupt Transition (AT), whereas a transition that occurs over several frames is called a Gradual Transition (GT). A unified framework for the simultaneous detection of both AT and GT has been proposed in this article. The proposed method uses the multiscale geometric analysis of the Non-Subsampled Contourlet Transform (NSCT) for feature extraction from the video frames. The dimension of the feature vectors generated using NSCT is reduced through principal component analysis to simultaneously achieve computational efficiency and performance improvement. Finally, a cost-efficient Least Squares Support Vector Machine (LS-SVM) classifier is used to classify the frames of a given video sequence into No-Transition (NT), AT and GT classes based on the feature vectors. A novel, efficient method of training-set generation is also proposed, which not only reduces the training time but also improves performance. The performance of the proposed technique is compared with several state-of-the-art SBD methods on TRECVID 2007 and TRECVID 2001 test data. The empirical results show the effectiveness of the proposed algorithm.
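The PCA reduction step can be sketched generically via an SVD of the centred data matrix; the synthetic "features" below are made up for illustration and stand in for the actual NSCT feature vectors:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors (rows of X) onto the top-k principal
    components; also return the per-component explained variances."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data gives the covariance eigenvectors in Vt.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, s ** 2 / (len(X) - 1)

rng = np.random.default_rng(4)
# 200 samples of a 10-D feature that really lives in a 2-D subspace
# plus a little isotropic noise.
latent = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10)) * 3
X = latent + 0.01 * rng.normal(size=(200, 10))
Z, var = pca_reduce(X, 2)
print(Z.shape, var[:2].sum() / var.sum() > 0.99)
```

Keeping only the components that carry almost all the variance is exactly the dimensionality/efficiency trade the abstract describes.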

Journal ArticleDOI
TL;DR: This paper reviews the preprocessing techniques applied to 2-DGE images for noise reduction, intensity normalization, and background correction, and presents a quantitative comparison of non-linear filtering techniques applied to synthetic gel images by analyzing the performance of the filters under specific conditions.

Journal ArticleDOI
TL;DR: A new blind image watermarking scheme based on contourlet transform and principal component analysis that has good performance in terms of both quality and robustness against a variety of image-processing attacks, such as rotation, scaling and image compressions.
Abstract: In this paper, we first propose a new blind image watermarking scheme robust to geometric attacks and compressions. The scheme is based on contourlet transform (CT) and principal component analysis (PCA). The scheme uses the principal components of the largest contourlet coefficients of the last directional subband of the cover image to embed the watermark. Meanwhile, with the noise visibility function (NVF), the watermarking strength is adjusted adaptively to preserve the perceptual quality of the image. The watermark can be detected with high accuracy after various possible distortions. The normalized correlation (NC) between the original watermark and the watermark extracted from the distorted watermarked image is used as the robustness evaluation criterion. The simulation results demonstrate that the proposed scheme has good performance in terms of both quality and robustness against a variety of image-processing attacks, such as rotation, scaling and image compressions. Then we extend the scheme to blind video watermarking. The performance of the video watermarking scheme is evaluated against video attacks like rotation, frame averaging, noise additions and video compressions. The introduction of the CT produces robustness against image and video compressions, and the PCA yields resistance to geometric attacks.
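The robustness criterion is simple to state: the normalized correlation (NC) between the original and extracted watermarks. A minimal sketch, with simulated bit-flip errors standing in for attack damage:

```python
import numpy as np

def normalized_correlation(w_orig, w_ext):
    """NC between original and extracted watermarks; 1.0 is a perfect match."""
    a = np.asarray(w_orig, dtype=float).ravel()
    b = np.asarray(w_ext, dtype=float).ravel()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

rng = np.random.default_rng(2)
w = rng.integers(0, 2, 256)                      # binary watermark
noisy = w.copy()
flip = rng.choice(256, size=13, replace=False)
noisy[flip] ^= 1                                  # simulated extraction errors
print(round(normalized_correlation(w, w), 3),
      round(normalized_correlation(w, noisy), 3))
```

An attack that flips a few percent of the extracted bits still yields an NC close to 1, which is why the metric is a natural robustness score.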

Journal ArticleDOI
Shifei Ding, Xingyu Zhao, Hui Xu, Qiangbo Zhu, Yu Xue
TL;DR: Compared with other multi-scale decomposition-based image fusion methods and other improved NSCT-PCNN algorithms, the algorithm presented in this study outperforms them in terms of objective criteria and visual appearance.
Abstract: The pulse coupled neural network (PCNN) is widely used in image processing because of its unique biological characteristics, which make it suitable for image fusion. When combined with the non-subsampled contourlet transform (NSCT) model, it helps overcome the difficulty of coefficient selection for the subbands of the NSCT model. In the original model, however, only the grey values of image pixels are used as input, without considering that the subjective vision of human eyes lacks sensitivity to the local factors of the image. In this study, the improved pulse-coupled neural network model replaces the grey-scale value of the image with the weighted product of the image's gradient strength and local phase coherence as the model input. Compared with other multi-scale decomposition-based image fusion methods and other improved NSCT-PCNN algorithms, the algorithm presented in this study outperforms them in terms of objective criteria and visual appearance.

Journal ArticleDOI
TL;DR: A spectral-resolution enhancement algorithm via contourlet-transform regularization for FTIR spectral imaging, which makes the high-resolution FTIR spectrum a more efficient tool for recognizing teachers' facial expressions in an intelligent learning environment.

Journal ArticleDOI
TL;DR: Experimental results show the superiority of the proposed approach over the other existing fusion approaches by improving all the performance parameters.

Proceedings ArticleDOI
TL;DR: In this article, a spatially adaptive contrast enhancement technique for enhancing retinal fundus images for blood vessel segmentation was proposed, which was integrated with a variant of Tyler Coye algorithm, which has been improved with Hough line transformation based vessel reconstruction method.
Abstract: The morphology of blood vessels in retinal fundus images is an important indicator of diseases like glaucoma, hypertension and diabetic retinopathy. The accuracy of retinal blood vessels segmentation affects the quality of retinal image analysis which is used in diagnosis methods in modern ophthalmology. Contrast enhancement is one of the crucial steps in any of retinal blood vessel segmentation approaches. The reliability of the segmentation depends on the consistency of the contrast over the image. This paper presents an assessment of the suitability of a recently invented spatially adaptive contrast enhancement technique for enhancing retinal fundus images for blood vessel segmentation. The enhancement technique was integrated with a variant of Tyler Coye algorithm, which has been improved with Hough line transformation based vessel reconstruction method. The proposed approach was evaluated on two public datasets STARE and DRIVE. The assessment was done by comparing the segmentation performance with five widely used contrast enhancement techniques based on wavelet transform, contrast limited histogram equalization, local normalization, linear un-sharp masking and contourlet transform. The results revealed that the assessed enhancement technique is well suited for the application and also outperforms all compared techniques.
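One of the compared baselines, global histogram equalization, is compact enough to sketch (contrast limited adaptive histogram equalization adds tiling and clip-limiting on top of this same idea):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit image: map each grey
    level through the normalized cumulative histogram."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[cdf > 0].min()              # first occupied bin
    lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min))
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast ramp confined to [100, 140] is stretched to span [0, 255].
low = np.tile(np.arange(100, 141, dtype=np.uint8), (8, 1))
eq = hist_equalize(low)
print(int(eq.min()), int(eq.max()))   # 0 255
```

The mapping is monotone, so relative vessel/background ordering is preserved while the dynamic range is filled, which is what makes the segmentation step downstream more consistent.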

Journal ArticleDOI
TL;DR: Three-level Gaussian and Laplacian pyramids are constructed to represent the image at different resolutions, and the performance measure, peak signal-to-noise ratio, shows that unsharp masking applied to the difference images of the Laplacian pyramid outperforms the other image enhancement methods.
Abstract: Acoustic images captured by side-scan sonar are normally affected by speckle noise, for which enhancement is required in different domains. The underwater acoustic images obtained using sound as a source basically contain the seafloor, sediments, and living and non-living resources. Multiresolution-based image enhancement techniques nowadays play a vital role in improving the quality of low-resolution images with repeated patterns. An image pyramid is a representation of an image at various scales. In this work, three-level Gaussian and Laplacian pyramids are constructed to represent the image at different resolutions. The multiscale representation requires different filters at different scales. The contrast of each image in the Gaussian and Laplacian pyramids is improved by applying both histogram equalization and unsharp masking. The sharpened images are used to reconstruct the enhanced image. The performance measure, peak signal-to-noise ratio, shows that unsharp masking applied to the difference images of the Laplacian pyramid outperforms the other image enhancement methods.
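The pyramid construction can be sketched with a simple 2x2 block average standing in for the usual Gaussian blur-and-decimate step (an assumption made for brevity); the key property, that the difference images plus the low-pass residual reconstruct the original exactly, still holds:

```python
import numpy as np

def downsample(img):
    """2x2 block-average reduction (stand-in for Gaussian blur + decimate)."""
    return 0.25 * (img[::2, ::2] + img[1::2, ::2]
                   + img[::2, 1::2] + img[1::2, 1::2])

def upsample(img):
    """Nearest-neighbour 2x expansion."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels=3):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small))   # difference (detail) image
        cur = small
    pyr.append(cur)                          # low-pass residual
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = upsample(cur) + detail
    return cur

rng = np.random.default_rng(3)
img = rng.random((32, 32))
pyr = laplacian_pyramid(img, levels=3)
print(np.allclose(reconstruct(pyr), img))   # True: exact reconstruction
```

In the paper's scheme, the difference images would be sharpened (unsharp masking) before the reconstruction step, so the enhancement acts per scale.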

Journal ArticleDOI
TL;DR: The authors propose a hybrid image denoising method in which the 2D separable wavelet transform in the second generation bandelet transform is replaced with the non-subsampled contourlet transform, indicating that the proposed method has good peak signal-to-noise ratio and visual quality performance.
Abstract: The second generation bandelet transform uses the two-dimensional (2D) separable wavelet transform to improve its image denoising and compression performance. However, the 2D separable wavelet transform is not a shift-invariant transform and therefore cannot capture geometric information well. The authors propose a hybrid image denoising method in which the 2D separable wavelet transform in the second generation bandelet transform is replaced with the non-subsampled contourlet transform. The results of the application of the proposed method to several greyscale and colour benchmark images contaminated with various levels of Gaussian white noise and Poisson noise indicate that the proposed method has good peak signal-to-noise ratio and visual quality performance.

Journal ArticleDOI
TL;DR: This study aims to improve dental radiographic image quality to assist pulp-capping treatment evaluation, and finds that MSE and PSNR scores alone are not sufficient to recommend a suitable contrast-enhancement method; an additional success parameter from the dentist is needed.
Abstract: Background: Evaluation of dental treatment is performed by observing dental periapical radiographs to obtain information on the filling's condition, pulp tissue, remaining dentin thickness, periodontal ligament, and lamina dura. Nevertheless, the radiographic images used often have low quality, because the X-ray radiation level is purposely kept low to prevent health problems and because of limited tool capability. This low quality, for example low contrast, low brightness, and the presence of noise, makes periapical radiography evaluation hard to perform. This study aims to improve dental radiographic image quality to assist pulp-capping treatment evaluation. Material and methods: The research methodology consists of three main stages: data collection, image enhancement, and result validation. Radiographic image data were collected at The Dental Hospital UMY. Image enhancement was conducted by comparing several methods, contourlet transform (CT), wavelet transform, contrast stretching (CS), and contrast limited adaptive histogram equalization (CLAHE), to reduce noise, optimize image contrast, and enhance image brightness. Results: According to the mean square error (MSE) and peak signal-to-noise ratio (PSNR) evaluation, the CT method obtained MSE and PSNR scores of 5.441453 and 40.53652 respectively, followed by the CLAHE method with 10.66326 and 38.00736, the CS method with 12.39881 and 39.18518, and finally the wavelet method with 15.41569 and 36.25343. Conclusions: MSE and PSNR scores alone are not sufficient to recommend a suitable contrast-enhancement method; an additional success parameter from the dentist is needed. Key words: Dental radiography, image enhancement, digital image processing.
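The two reported statistics are standard and easy to reproduce; for 8-bit images, PSNR = 10 log10(255^2 / MSE), so lower MSE means higher PSNR:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ref = np.full((16, 16), 100.0)
noisy = ref + 5.0               # constant error of 5 -> MSE = 25
print(mse(ref, noisy), round(psnr(ref, noisy), 2))   # 25.0 34.15
```

Note these are fidelity measures against a reference; as the conclusion above says, they cannot by themselves certify that an enhanced radiograph is clinically more readable.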

Journal ArticleDOI
TL;DR: It is found that the type of defect within insulation can be classified efficiently with the features extracted by the proposed method with a reasonable degree of accuracy.
Abstract: A new method to extract the features of direct current (DC) cross-linked polyethylene (XLPE) cable partial discharge (PD) images based on the non-subsampled contourlet transform (NSCT) is proposed in this paper. Four types of PD images are obtained from artificially designed cable insulation defects. The features include the Shannon entropy, Renyi entropy, Euclidean distance, and Mahalanobis distance of the optimized subband coefficients from the NSCT. The optimized subband coefficients are selected by an affinity propagation clustering method optimized with an immune algorithm. Back-propagation neural network, k-nearest neighbor, support vector machine and decision tree algorithms are used to classify the extracted features. The experimental results show that the proposed method can be applied to the feature extraction of PD signals in DC XLPE cables. Compared with the wavelet method, the recognition rate is improved by more than 16.80% under the same classifier. It is found that the type of defect within the insulation can be classified efficiently, with a reasonable degree of accuracy, using the features extracted by the proposed method.
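The entropy features can be sketched over a subband's coefficient histogram. Note that the Rényi entropy of order 2 never exceeds the Shannon entropy, and both equal log2(K) for a uniform K-bin distribution (the bin count below is an illustrative choice):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete distribution p."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def renyi_entropy(p, alpha=2.0):
    """Renyi entropy of order alpha (alpha != 1); tends to Shannon as alpha -> 1."""
    p = p[p > 0]
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

def subband_features(coeffs, bins=16):
    """Entropy features of one subband's coefficient-magnitude histogram."""
    hist, _ = np.histogram(np.abs(coeffs), bins=bins)
    p = hist / hist.sum()
    return shannon_entropy(p), renyi_entropy(p)

uniform = np.ones(4) / 4
print(shannon_entropy(uniform), renyi_entropy(uniform))   # 2.0 2.0

rng = np.random.default_rng(5)
sh, ry = subband_features(rng.laplace(size=10_000))
print(round(sh, 3), round(ry, 3))
```

Peaky coefficient histograms (typical of defect-bearing subbands) give low entropies, flat ones give high entropies, which is what makes these quantities usable as discriminative features.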

Journal ArticleDOI
TL;DR: Experimental results show that the proposed fusion framework can not only improve subjective visual effect, but also obtain better objective evaluation criteria than state-of-the-art methods.

Journal ArticleDOI
TL;DR: The proposed method uses the contourlet transform to find the directions, based on the fact that it effectively represents images in multiple directional bands, which carry more accurate directional information than spatial derivatives.
Abstract: The local binary pattern (LBP) is an effective image descriptor used in various computer vision applications such as face detection, object classification, target detection, and image retrieval. To improve the performance of local patterns, different variants of LBP have been introduced. In this work, the contourlet tetra pattern, a modified version of the local pattern, is introduced, which uses contourlet directions to derive the tetra pattern of the image. The difference between the local tetra pattern (LTrP) and the proposed method is that LTrP uses spatial first-order derivatives to derive the directions, whereas the proposed method uses the contourlet transform. The contourlet transform is used here because it effectively represents images in multiple directional bands, which carry more accurate directional information than spatial derivatives. The proposed method is evaluated using three different databases (namely Corel 1 K, Corel 10 K and Brodatz), and experimental results show that the proposed method performs better than conventional local pattern techniques.
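The baseline direction computation that the paper replaces with contourlet directions can be sketched from first-order derivatives: each pixel gets one of four directions from the signs of its horizontal and vertical derivatives (a simplified reading of the standard LTrP definition, not the proposed contourlet variant):

```python
import numpy as np

def ltrp_directions(img):
    """Four-way direction map from first-order derivatives: quadrant of
    (dh, dv) at each interior pixel, labelled 1..4."""
    img = np.asarray(img, dtype=float)
    dh = img[1:-1, 2:] - img[1:-1, 1:-1]    # horizontal derivative
    dv = img[2:, 1:-1] - img[1:-1, 1:-1]    # vertical derivative
    dirs = np.empty(dh.shape, dtype=np.uint8)
    dirs[(dh >= 0) & (dv >= 0)] = 1
    dirs[(dh < 0) & (dv >= 0)] = 2
    dirs[(dh < 0) & (dv < 0)] = 3
    dirs[(dh >= 0) & (dv < 0)] = 4
    return dirs

# On a left-to-right increasing ramp, dh > 0 and dv == 0 everywhere,
# so every interior pixel falls in direction 1.
ramp = np.tile(np.arange(6, dtype=float), (6, 1))
print(np.unique(ltrp_directions(ramp)))   # [1]
```

The tetra pattern then encodes, for each neighbour, whether it shares the centre pixel's direction; the paper's change is to read those directions from contourlet subbands instead of these two derivatives.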

Journal ArticleDOI
TL;DR: A robust unwrapping algorithm based on the non-subsampled contourlet transform (NSCT) is developed and its universality and superiority in the digital holographic interferometry have been demonstrated by both numerical analysis and practical experiments.

Proceedings ArticleDOI
01 Apr 2018
TL;DR: This paper proposes a blind, invisible watermarking approach for grayscale document images in the spatial domain, based on stable regions and object fill, which shows high performance in terms of imperceptibility, capacity and robustness against distortions like JPEG compression, geometric transformation and the print-and-scan process.
Abstract: In the literature, document image watermarking schemes in the spatial domain mainly focus on text content, so they need to be further improved to apply to general document content. In this paper, we propose a blind, invisible watermarking approach for grayscale document images in the spatial domain, based on stable regions and object fill. In order to detect stable regions, the document is transformed into an intermediate form by taking advantage of image processing operations prior to applying the nonsubsampled contourlet transform (NSCT). Next, the separated objects in stable regions are obtained by object segmentation. The stroke and fill of the obtained objects are detected, and only the locations of object fill are marked as referential ones for mapping to gray-level values where data hiding and detection are conducted. Then, the watermarking algorithm is developed by using each group of gray-level values corresponding to the locations of one object fill to carry one watermark bit. The experiments are performed with various document contents, and our approach shows high performance in terms of imperceptibility, capacity and robustness against distortions like JPEG compression, geometric transformation and the print-and-scan process.

Journal ArticleDOI
TL;DR: The results show that the proposed algorithm is superior to the current state-of-the-art watermarking algorithms in terms of imperceptibility, robustness and embedding capacity.
Abstract: High dynamic range (HDR) imaging has received increasing attention due to its powerful capacity to represent real scenes as perceived by human eyes. However, studies on effective HDR image watermarking algorithms remain limited. In contrast to watermarking algorithms proposed for low dynamic range (LDR) images, several critical problems, such as the peculiar HDR floating-point number data format and various tone mapping operators (TMOs) that are widely used to adapt HDR images to conventional displays, need to be properly addressed. Hence, a novel HDR image watermarking algorithm robust against the effects of TMOs is proposed in this paper. Two important spatial activity concepts, i.e. the activity of robustness and activity of perception, are respectively defined to characterize the spatial diversity of the robustness and imperceptibility of tone-mapped images. Then, nonsubsampled contourlet transform and singular value decomposition are successively performed to extract the associated structural information which is invariable in HDR images and their corresponding tone-mapped images. In addition, hierarchical embedding intensity and a hybrid perceptual mask are designed to enhance the imperceptibility and robustness of the HDR image watermarking. Experiments with numerous HDR images and TMOs were conducted and the results show that the proposed algorithm is superior to the current state-of-the-art watermarking algorithms in terms of imperceptibility, robustness and embedding capacity.

Journal ArticleDOI
TL;DR: A new algorithm is proposed for the separation of machine-printed and handwritten texts using correlation coefficients and probability-based moment features, and it provides better text-separation performance than other state-of-the-art approaches.
Abstract: Document image analysis, in which optical character recognition is mostly used, plays a major role in making the office environment paperless. Documents such as bank cheques, admission forms, application forms, memorandums and letters generally consist of text material in mixed form, i.e., handwritten and machine-printed texts along with some noise. Because of this mixture, significant issues arise in the recognition process. Separating handwritten from machine-printed text offers a solution to this problem. In this paper, a new algorithm is proposed for the separation of machine-printed and handwritten texts using correlation coefficients and probability-based moment features. The contourlet transform is used to extract these significant features because it has excellent directional and isotropic properties. Finally, a set of support vector machine classifiers is used to identify machine-printed text, handwritten text and noise. Comprehensive experiments on different databases show that the proposed algorithm is robust and provides better text-separation performance than other state-of-the-art approaches. In benchmarking analysis, a maximum identification recall rate of 98.9% is obtained with the proposed technique, which demonstrates its effectiveness.