
Showing papers on "Contourlet published in 2022"


Journal ArticleDOI
TL;DR: In this paper, a spatially adaptive multi-scale image enhancement (SAMSIE) scheme is proposed that decomposes a low-contrast image into multi-scale layers.

16 citations


Journal ArticleDOI
TL;DR: In this paper, a novel unsupervised change detection method called adaptive contourlet fusion clustering, based on adaptive contourlet fusion and fast non-local clustering, is proposed for multi-temporal synthetic aperture radar (SAR) images.
Abstract: In this paper, a novel unsupervised change detection method called adaptive contourlet fusion clustering, based on adaptive contourlet fusion and fast non-local clustering, is proposed for multi-temporal synthetic aperture radar (SAR) images. A binary image indicating changed regions is generated by a novel fuzzy clustering algorithm from a contourlet-fused difference image. Contourlet fusion exploits complementary information from different types of difference images: details should be restrained in unchanged regions but highlighted in changed regions. Different fusion rules are therefore designed for the low-frequency band and the high-frequency directional bands of the contourlet coefficients. A fast non-local clustering (FNLC) algorithm is then proposed to classify the fused image into changed and unchanged regions. To reduce the impact of noise while preserving the details of changed regions, both local and non-local information are incorporated into the FNLC in a fuzzy way. Experiments on both small- and large-scale datasets demonstrate the state-of-the-art performance of the proposed method in real applications.
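The band-specific fusion rules described above (restrain detail in unchanged regions, highlight it in changed regions) can be sketched with a simple two-band split; the box-blur low-pass used here is a hypothetical stand-in for the paper's contourlet decomposition:

```python
import numpy as np

def box_blur(img, k=3):
    """Box-blur low-pass filter (a stand-in for the contourlet
    low-frequency band, not the paper's actual filter bank)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_difference_images(di_a, di_b):
    """Fuse two difference images with band-specific rules:
    average the low-frequency bands, and keep the max-magnitude
    high-frequency coefficients so changed-region detail is highlighted."""
    low_a, low_b = box_blur(di_a), box_blur(di_b)
    high_a, high_b = di_a - low_a, di_b - low_b
    fused_low = (low_a + low_b) / 2.0
    fused_high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return fused_low + fused_high
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check on the band split.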

10 citations


Journal ArticleDOI
TL;DR: In this paper, a new parameter-adaptive unit-linking dual-channel PCNN model is used to implement a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain for the integration of infrared and visible images.

9 citations


Journal ArticleDOI
TL;DR: In this article, a secure data-hiding scheme in fused medical images for smart healthcare, also suitable for cloud applications, is introduced; based on NSCT, QR, and Schur decomposition, it conceals an image and an electronic patient record (EPR) mark in the fused image.
Abstract: Fusing single-modality medical images into a distinct multimodality image is needed to ensure a better clinical experience. However, the distribution of these fused images raises issues of ownership and authentication and has attracted many researchers. Presently, large volumes of medical data are stored on cloud platforms; outsourcing medical data to this popular platform, however, may introduce security issues. Following this, we introduce a secure data-hiding scheme in fused medical images for smart healthcare, which is also suitable for applications in the cloud. To achieve this, we first create a fused medical image as a cover using the nonsubsampled contourlet transform (NSCT). The method, which is based on NSCT, QR, and Schur decomposition, conceals an image and an electronic patient record (EPR) mark in the fused image. Importantly, the EPR watermark includes a hash value of the cover, which is created first and then embedded into the cover via a magic-cube-based procedure. Finally, the marked image is encrypted using a deoxyribonucleic acid (DNA), chaotic-map, and hash-function-based encryption scheme. The introduced watermarking scheme has been evaluated on 25 pairs of medical images with several embedding/extraction parameters. Apart from being satisfactorily imperceptible, the proposed work is also robust and secure against well-known signal processing attacks, and promising results are obtained when compared with similar techniques: it shows a considerable improvement in robustness of 66.7% and 99.7% over existing discrete wavelet transform (DWT) and singular value decomposition (SVD) based watermarking schemes.

9 citations



Journal ArticleDOI
TL;DR: In this paper, the authors propose a robust image watermarking method using the Artificial Bee Colony (ABC) algorithm to obtain high robustness under a predetermined quality threshold, which guarantees a PSNR value higher than 40 dB to ensure high imperceptibility.
Abstract: Today, advances in photo-editing and manipulation software and wide, rapid access to social media have made it easier for unauthorized persons to manipulate and copy digital images. Therefore, studies on protecting the copyright of images have been increasing in recent years. This paper proposes a contourlet, discrete cosine transform, and singular value decomposition based robust image watermarking method using the Artificial Bee Colony (ABC) algorithm to obtain high robustness under a predetermined quality threshold. In the proposed method, the strength factor and embedding positions are optimally determined by the ABC algorithm for each image. The predetermined quality threshold defined in the optimization procedure guarantees a PSNR value higher than 40 dB to ensure high imperceptibility. The robustness of the proposed method was evaluated by applying thirteen attacks from five attack groups. Results confirmed that the presented method is robust to compression, noise, enhancement, geometric, and filtering attacks.
• A quality-ensured blind and robust image watermarking method is proposed.
• The embedding strategy is developed to achieve high imperceptibility.
• ABC determines the optimal strength factor and embedding positions.
• ABC is used to obtain high robustness while guaranteeing a pre-defined quality.
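The 40 dB imperceptibility constraint is simply a PSNR threshold; the sketch below shows how such a bound could enter an ABC-style fitness function (the penalty form is an assumption for illustration, not the paper's exact objective):

```python
import numpy as np

def psnr(original, marked, peak=255.0):
    """Peak signal-to-noise ratio in dB between a cover and a marked image."""
    mse = np.mean((original.astype(float) - marked.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def quality_penalty(original, marked, threshold_db=40.0):
    """Hypothetical penalty an optimizer could add to its fitness when
    imperceptibility drops below the 40 dB quality threshold."""
    return max(0.0, threshold_db - psnr(original, marked))
```

An optimizer minimizing (penalty + robustness loss) would then only explore strength factors that keep PSNR above the threshold.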

9 citations


Journal ArticleDOI
TL;DR: In this paper, Wang et al. propose a robust Scale Illumination Rotation and Affine invariant Mask R-CNN (SIRA M-RCNN) framework for pedestrian detection, addressing the challenge of detecting pedestrians under the irregularities in scale, rotation, and illumination that are ubiquitous in natural scene images.
Abstract: In this paper, we address the challenging problem of detecting pedestrians under the irregularities in scale, rotation, and illumination that are ubiquitous in natural scene images. Pedestrian instances affected by these factors exhibit significantly different characteristics, which strongly influences the performance of pedestrian detection techniques. We propose a new robust Scale Illumination Rotation and Affine invariant Mask R-CNN (SIRA M-RCNN) framework to overcome these difficulties. The first phase of the proposed system deals with illumination variation through histogram analysis. We then use the contourlet transformation and the directional filter bank to generate rotation-invariant features. Finally, we use the Affine Scale Invariant Feature Transform (ASIFT) to find points that are translation- and scale-invariant. Extensive evaluation on benchmark databases proves the effectiveness of SIRA M-RCNN: the experimental results achieve state-of-the-art performance and show a significant improvement in pedestrian detection.

8 citations


Journal ArticleDOI
TL;DR: A two-dimensional contourlet transform is applied to input images from the Breast Cancer Ultrasound Dataset, and a time-dependent model of the contourlet sub-band coefficients is used to extract features from benign, malignant, and healthy-control samples.
Abstract: Breast diseases are a group of diseases that appear in different forms; among them is breast cancer, one of the most important and common diseases in women. A machine learning system can be trained to identify specific patterns for diagnosing breast cancer, so designing an efficient feature extraction method is essential to decrease computation time. In this article, a two-dimensional contourlet transform is applied to input images from the Breast Cancer Ultrasound Dataset. The contourlet sub-band coefficients are modeled using a time-dependent model, whose parameters form the main feature vector. The extracted features are applied separately to determine breast cancer classes, i.e., to diagnose tumor type. We use the time-dependent approach on contourlet sub-bands from three groups of test samples: benign, malignant, and healthy control. The final features of the 1200 ultrasound images in the three categories are used to train k-nearest neighbor, support vector machine, decision tree, random forest, and linear discriminant analysis classifiers, and the results are recorded. The decision tree results show sensitivities of 87.8%, 92.0%, and 87.0% for the normal, benign, and malignant classes, respectively. The presented feature extraction method is compatible with the decision tree approach for this problem; based on the results, the decision tree is the most accurate and compatible architecture for diagnosing breast cancer from ultrasound images.

8 citations


Journal ArticleDOI
TL;DR: In this paper, Wang et al. propose a dual-path fusion network (DPFN) with two major components, a global subnetwork (GSN) and a local subnetwork (LSN), which search for similar image blocks in panchromatic (PAN) and multispectral (MS) space and exploit high-resolution textural information from the PAN space and spectral information from the MS space.
Abstract: Most existing deep learning-based pan-sharpening methods suffer from several widely recognized issues, such as spectral distortion and insufficient spatial texture enhancement. To address these challenges, we propose a novel dual-path fusion network (DPFN). The proposed DPFN includes two major components: 1) the global subnetwork (GSN) and 2) the local subnetwork (LSN). In particular, the GSN searches for similar image blocks in panchromatic (PAN) space and multispectral (MS) space and exploits high-resolution (HR) textural information from the PAN space and spectral information from the MS space for the fine representation of pan-sharpened MS features by employing a cross nonlocal block. Meanwhile, the proposed LSN, based on a high-pass modification block (HMB), is designed to learn high-pass information, aiming to enhance band-wise spatial information from MS images. The HMB forces the fused image to obtain high-frequency details from PAN images. Moreover, to facilitate the generation of visually appealing pan-sharpened images, we propose a perceptual loss function and further optimize the model based on high-level features in the near-infrared space. Experiments demonstrate the superior performance of the proposed method quantitatively and qualitatively compared to existing state-of-the-art pan-sharpening methods. The source code is available at https://github.com/jiaming-wang/DPFN .

8 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a polyp diagnosis method using a fusion of the contourlet transform and a fine-tuned VGG19 pre-trained model on enhanced 224 × 224 endoscopic patch images.

7 citations


Journal ArticleDOI
TL;DR: In this article, a non-subsampled contourlet transform (NSCT) is used to extract features from the noisy source images, and a Siamese convolutional neural network (sCNN) is utilized for weighted fusion of important features from two multimodal images.

Journal ArticleDOI
TL;DR: In this article, a spatial dual-sensor module is developed for acquiring visible and near-infrared images in the same space without time shifting, and the captured images are synthesized.
Abstract: This study aims to develop a spatial dual-sensor module for acquiring visible and near-infrared images in the same space without time shifting, and to synthesize the captured images. The proposed method synthesizes visible and near-infrared images using the contourlet transform, principal component analysis, and iCAM06, while the blending method uses color information from the visible image and detail information from the infrared image. The contourlet transform can decompose an image into directional sub-images, making it better at obtaining detail information than other decomposition algorithms. The global tone information is enhanced by iCAM06, which is used for high-dynamic-range imaging. The blended images show a clear appearance, combining the compressed tone information of the visible image with the details of the infrared image.


Proceedings ArticleDOI
04 Mar 2022
TL;DR: In this article, the authors use a backpropagation neural network to detect malignant lesions from MRI slice images, a procedure that may be used to identify the presence of tumours.
Abstract: In medical science, radiology is a wide area that demands further information and careful consideration in order to carry out an appropriate tumour examination. This study uses MRI sequence pictures as input images to identify the tumour site, and a malignancy segmentation and detection technique is established. The task is difficult because of the considerable variability in the appearance of cancer tissues across different patients, as well as their similarity to normal tissues in the majority of cases. The most significant goal is to divide the brain images into two groups: those with malignant tumours and those without. The proposed system has four primary phases. For efficient malignancy detection, registration is first carried out using edge-based contourlet transformation, followed by segmentation of tumour points using region-growing segmentation, then feature extraction using two types of texture features, namely Otsu thresholding, k-means, and local binary pattern texture features, and finally classification using neural network methods. Using backpropagation-based detection of malignancy from MRI slice images, the proposed approach may be used to identify the presence of tumours. A backpropagation method was utilised for classification, and the classification accuracy was thereby increased. A variety of MRI sequences are used to test the proposed technique, which is implemented in MATLAB and yields experimental results for image registration and region-growing segmentation. When the segmented images are compared against the patients' database, a backpropagation neural network classifier is used to label them as malignant or benign.

Journal ArticleDOI
01 Feb 2022-Entropy
TL;DR: In this paper, a robust change detection method based on nonsubsampled contourlet transform (NSCT) fusion and the fuzzy local information C-means clustering (FLICM) model is introduced.
Abstract: Remote sensing image change detection is widely used in land use and natural disaster detection. In order to improve the accuracy of change detection, a robust change detection method based on nonsubsampled contourlet transform (NSCT) fusion and the fuzzy local information C-means clustering (FLICM) model is introduced in this paper. Firstly, the log-ratio and mean-ratio operators are used to generate two difference images (DIs); then, the NSCT fusion model is utilized to fuse the two difference images into one new DI. The fused DI not only reflects the real change trend but also suppresses the background. FLICM is performed on the new DI to obtain the final change detection map. Four groups of homogeneous remote sensing images are selected for simulation experiments, and the experimental results demonstrate that the proposed homogeneous change detection method performs better than other state-of-the-art algorithms.
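The two difference operators named above have standard closed forms; a minimal numpy sketch (the box-filter neighbourhood used for the mean-ratio local means is an assumption, not the paper's exact window):

```python
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    """Log-ratio difference image |log(I2 / I1)|; multiplicative
    speckle becomes additive in the log domain."""
    return np.abs(np.log((img2 + eps) / (img1 + eps)))

def mean_ratio(img1, img2, k=3, eps=1e-6):
    """Mean-ratio difference image 1 - min(m1/m2, m2/m1),
    computed from local means (box filter of size k)."""
    def local_mean(img):
        pad = k // 2
        p = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)
    m1 = local_mean(img1) + eps
    m2 = local_mean(img2) + eps
    return 1.0 - np.minimum(m1 / m2, m2 / m1)
```

Both operators yield an all-zero DI for identical inputs, and larger values where the two acquisitions differ.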

Journal ArticleDOI
TL;DR: In this article, the authors propose a segmentation approach that combines multiresolution handcrafted features with CNN-based features, adding directional properties that enrich the feature set used for segmentation.


Journal ArticleDOI
TL;DR: A classical MIF system based on quarter-shift dual-tree complex wavelet transform and modified principal component analysis in the Laplacian pyramid (LP) domain is proposed to extract the focused image from multiple source images; it outperforms many state-of-the-art techniques in visual and quantitative evaluations.
Abstract: Multi-focus image fusion (MIF) uses fusion rules to combine two or more images of the same scene with various focus values into a fully focused image. An all-in-focus image refers to a fully focused image that is more informative and useful for visual perception. A fused image of high quality must maintain the shift-invariance and directional-selectivity characteristics of the image. Traditional wavelet-based fusion methods, in contrast, create ringing distortions in the fused image due to their lack of directional selectivity and shift-invariance. In this paper, a classical MIF system based on quarter-shift dual-tree complex wavelet transform (qshiftN DTCWT) and modified principal component analysis (MPCA) in the Laplacian pyramid (LP) domain is proposed to extract the focused image from multiple source images. In the proposed fusion approach, the LP first decomposes the multi-focus source images into low-frequency (LF) and high-frequency (HF) components. Then, qshiftN DTCWT is used to fuse the low- and high-frequency components to produce a fused image. Finally, to improve the effectiveness of the qshiftN DTCWT and LP-based method, the MPCA algorithm is utilized to generate an all-in-focus image. Due to its directionality and shift-invariance, this transform can provide high-quality information in a fused image. Experimental results demonstrate that the proposed method outperforms many state-of-the-art techniques in terms of visual and quantitative evaluations.
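The LP split into low- and high-frequency components works because the detail band stores exactly what the downsampled band discards. A one-level sketch, where the box blur and nearest-neighbour resampling are simplifying assumptions in place of the usual Gaussian kernels:

```python
import numpy as np

def box_blur(img, k=3):
    """Box-blur stand-in for the Gaussian pre-filter."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def lp_decompose(img):
    """One-level Laplacian pyramid: low-frequency band + detail residual
    (img must have even dimensions)."""
    low = box_blur(img)[::2, ::2]                         # blur, then downsample
    up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)  # upsample back
    high = img - up                                       # high-frequency residual
    return low, high

def lp_reconstruct(low, high):
    """Perfect reconstruction: upsample the low band and add the residual."""
    up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    return up + high
```

A fusion method operates on `low` and `high` separately and then calls `lp_reconstruct`; since the residual is defined against the upsampled low band, reconstruction is exact by construction.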


Journal ArticleDOI
TL;DR: In this paper, a novel unsupervised change detection method called NSCT nonlocal means (NSCT-NLM) is proposed, based on slow feature analysis (SFA) theory and the nonsubsampled contourlet transform (NSCT); it combines the complementary information of the two kinds of original DI.
Abstract: Change detection in multitemporal synthetic aperture radar (SAR) images has long been an important research topic in the field of remote sensing. In this article, based on slow feature analysis (SFA) theory and the nonsubsampled contourlet transform (NSCT) algorithm, we propose a novel unsupervised change detection method called NSCT nonlocal means (NSCT-NLM). The powerful change-information extraction of SFA and the superior information fusion of NSCT are jointly adopted in this method. The main framework consists of the following parts. First, SFA and the log-ratio operator are used to generate difference images (DIs) independently. Then, the NSCT is used to fuse the two DIs into a new, higher-quality DI. The newly fused DI combines the complementary information of the two kinds of original DI; therefore, the contrast between changed and unchanged regions is greatly enhanced, and the changed details are preserved more completely. Furthermore, an NLM filtering algorithm is employed to suppress the strong speckle in the fused DI. We use the fuzzy C-means algorithm to generate the final binary change map. The experiments are carried out on two public datasets and three real-world SAR datasets from different scenarios. The results demonstrate that the proposed method achieves higher detection accuracy compared with the reference methods.

Journal ArticleDOI
01 Mar 2022-Optik
TL;DR: In this paper, a new fusion framework based on the quaternion non-subsampled contourlet transform (QNSCT) and guided-filter detail enhancement is designed to address the problems of inconspicuous infrared targets and poor background texture in infrared and visible image fusion.



Journal ArticleDOI
TL;DR: In this article, a homomorphic non-subsampled contourlet transform (NSCT) based ultrasound image despeckling technique is proposed, using a novel thresholding function, a bilateral filter, and a self-organizing map (SOM).

Journal ArticleDOI
TL;DR: In this paper, Zhang et al. propose an ROI-based contourlet subband energy (ROICSE) feature to represent the sMRI image in the frequency domain for AD classification.
Abstract: Structural magnetic resonance imaging (sMRI)-based Alzheimer's disease (AD) classification and its prodromal stage, mild cognitive impairment (MCI), classification have attracted much attention and been widely investigated in recent years. Owing to the high dimensionality, representation of the sMRI image is a difficult issue in AD classification. Furthermore, regions of interest (ROI) reflected in the sMRI image are not characterized properly by spatial analysis techniques, which weakens the discriminating ability of the extracted spatial features. In this study, we propose an ROI-based contourlet subband energy (ROICSE) feature to represent the sMRI image in the frequency domain for AD classification. Specifically, a preprocessed sMRI image is first segmented into 90 ROIs by a constructed brain mask. Instead of extracting features from the 90 ROIs in the spatial domain, the contourlet transform is performed on each of these ROIs to obtain their energy subbands. Then, for each ROI, a subband energy (SE) feature vector is constructed to capture its energy distribution and contour information. Afterwards, the SE feature vectors of the 90 ROIs are concatenated to form the ROICSE feature of the sMRI image. Finally, a support vector machine (SVM) classifier is used to classify 880 subjects from the ADNI and OASIS databases. Experimental results show that the ROICSE approach outperforms six other state-of-the-art methods, demonstrating that energy and contour information of the ROI are important for capturing differences between the sMRI images of AD and healthy control (HC) subjects. Meanwhile, brain regions related to AD can also be found using the ROICSE feature, indicating that it can be a promising assistant imaging marker for AD diagnosis via the sMRI image. Code and sample IDs for this paper can be downloaded at https://github.com/NWPU-903PR/ROICSE.git.
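The per-ROI subband-energy idea reduces each masked region to a short energy vector. In this sketch the horizontal/vertical difference bands are hypothetical stand-ins for the contourlet directional subbands, kept only to show the shape of the feature:

```python
import numpy as np

def subband_energy_feature(img, roi_masks):
    """For each ROI mask, record the mean energy of a smooth band and two
    directional detail bands, then concatenate across ROIs (sketch of the
    ROICSE construction; the real method uses contourlet subbands)."""
    dh = np.diff(img, axis=1, prepend=img[:, :1])  # horizontal detail
    dv = np.diff(img, axis=0, prepend=img[:1, :])  # vertical detail
    feats = []
    for mask in roi_masks:                         # boolean mask per ROI
        for band in (img, dh, dv):
            vals = band[mask]
            feats.append(float(np.mean(vals ** 2)))
    return np.array(feats)
```

With 90 ROIs and three bands this would yield a 270-dimensional vector, which is then fed to the SVM.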

Journal ArticleDOI
TL;DR: In this paper, a computational model of cervical dysplasia for real-world applications is developed, with the highest degree of accuracy and the lowest computation time; it is trained and evaluated to classify dysplastic cells.
Abstract: Pattern detection and classification of cervical cell dysplasia can assist with diagnosis and treatment. This study aims to develop a computational model of cervical dysplasia for real-world applications with the highest degree of accuracy and the lowest computation time. Initially, an ML framework is created, trained, and evaluated to classify dysplasia. Three color models, three multi-resolution transform-based feature extraction techniques (each with different filters), two feature representation schemes, and two well-known classification approaches are developed in conjunction to determine the optimal combination of “transform (filter) ⇒ color model ⇒ feature representation ⇒ classifier”. Extensive evaluations on two datasets, one indigenous (our own generated database) and one publicly available, demonstrate that Non-subsampled Contourlet Transform (NSCT) feature-based classification performs well: the combination “NSCT (pyrexc, pkva), YCbCr, MLP” gives the most satisfactory framework, with an average classification accuracy of 98.02% using the F1 feature set. Compared to two other approaches, our proposed model yields the most satisfactory results, with accuracy in the range of 98.00–99.50%.

Proceedings ArticleDOI
21 Apr 2022
TL;DR: In this paper, pyramid transforms are used in place of traditional Laplacian pyramids as part of a modified contourlet transformation to enhance the effectiveness of the proposed approach.
Abstract: In the field of medicine, tumor segmentation from MR images is a laborious and time-consuming process conducted by medical experts. Based on an image registration technique and classification approaches, this paper offers a fully automatic and efficient technique for identifying and segmenting brain tumors. As a common procedure to alter the characteristics of MRI images before feature extraction, we perform intensity normalization and the contourlet transform as part of our feature-extraction process. In this paper we propose pyramid transforms in place of traditional Laplacian pyramids as part of our modified contourlet transformation to enhance the effectiveness of the proposed approach. Using a genetic algorithm, the extracted features are optimized, and a neuro-fuzzy inference classification scheme is applied to brain MR images to classify features for tumor-region identification. Quantitative analysis of the proposed approach calculates parameters such as specificity, segmentation accuracy, sensitivity, precision, and the Dice similarity coefficient.

Journal ArticleDOI
TL;DR: In this paper, Wang et al. propose a secure watermarking algorithm, termed WatMIF, based on multimodal medical image fusion, which consists of three major parts: encryption of the host media, fusion of multimodal medical images, and embedding and extraction of the fused mark.
Abstract: Over recent years, the volume of big data has drastically increased for medical applications. Such data are shared by cloud providers for storage and further processing. Medical images contain sensitive information and are shared with healthcare workers, patients, and, in some scenarios, researchers for diagnostic and study purposes. However, the security of these images during transfer is extremely important, especially after the COVID-19 pandemic. This paper proposes a secure watermarking algorithm, termed WatMIF, based on multimodal medical image fusion. The proposed algorithm consists of three major parts: encryption of the host media, fusion of multimodal medical images, and embedding and extraction of the fused mark. We encrypt the host media with a key-based encryption scheme. Then, a nonsubsampled contourlet transform (NSCT)-based fusion scheme is employed to fuse the magnetic resonance imaging (MRI) and computed tomography (CT) scan images to generate the fused mark image. Furthermore, the encrypted host media conceals the fused watermark using the redundant discrete wavelet transform (RDWT) and randomised singular value decomposition (RSVD). Finally, a denoising convolutional neural network (DnCNN) is used to improve the robustness of the WatMIF algorithm. Simulation experiments on two standard datasets evaluate the algorithm in terms of invisibility, robustness, and security. Compared with existing algorithms, robustness is improved by 20.14%. Overall, the proposed watermarking for hiding fused marks, together with efficient encryption, improves the identity verification, invisibility, robustness, and security of the WatMIF algorithm.
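The embedding/extraction stage can be illustrated with plain additive SVD watermarking, a much-simplified stand-in for the paper's RDWT + RSVD pipeline (the strength factor `alpha` is an assumed parameter):

```python
import numpy as np

def embed_svd(cover, mark, alpha=0.05):
    """Additive SVD watermarking sketch: perturb the cover's singular
    values by a scaled copy of the mark's singular values."""
    u, s, vt = np.linalg.svd(cover, full_matrices=False)
    sw = np.linalg.svd(mark, compute_uv=False)
    marked = (u * (s + alpha * sw)) @ vt  # U diag(s + alpha*sw) V^T
    return marked, s                      # keep s as side information

def extract_svd(marked, s_cover, alpha=0.05):
    """Recover the mark's singular values from the marked image,
    given the cover's singular values as side information."""
    s_marked = np.linalg.svd(marked, compute_uv=False)
    return (s_marked - s_cover) / alpha
```

This sketch is non-blind (it needs `s_cover` at extraction time); schemes like the paper's add transform domains and encryption around the same core idea.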

Journal ArticleDOI
TL;DR: Experimental results prove that the proposed method can effectively merge multimodal medical images while perfectly preserving detail information.

Journal ArticleDOI
Xueqin Li, Zhen Liu, Z. Feng, Liang Zheng, Shuang Liu 
TL;DR: In this article, a non-destructive testing method based on machine vision technology is proposed for magnetic tile crack-defect detection: the original image is first decomposed by the contourlet transform, and the subband coefficients are then decomposed by singular value decomposition.
Abstract: The magnetic tile is a permanent magnet made of ferrite material, shaped like a tile and mainly used in permanent magnet motors. The magnetic tile image taken by a camera is dark, with uneven background brightness and texture, and the contrast between crack defects and the background is low. In this paper, a non-destructive testing method based on machine vision technology is proposed for magnetic tile crack-defect detection. We propose a new crack-defect detection algorithm based on the contourlet transform and singular value decomposition (SVD). The algorithm first adopts the contourlet transform to decompose the original image; the subband coefficients are then decomposed by SVD, and, according to the difference in singular-value gradients, the principal singular values to be set to zero are determined. Finally, the image is reconstructed with coefficients modified by inverse SVD, eliminating the background texture and revealing the crack defect. To verify the effectiveness and superiority of the proposed algorithm, extensive experiments were carried out and compared with traditional algorithms. Experimental results show that the proposed method can effectively detect crack defects with an accuracy of 94.29%, outperforming traditional methods.
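The background-suppression step rests on a simple property: a smooth, repetitive background is approximately low-rank, so zeroing the leading singular values removes it while the residual keeps the localized defect. A minimal sketch (`k` is an assumed parameter; the paper selects it from the singular-value gradient):

```python
import numpy as np

def suppress_background(img, k=1):
    """Zero the k largest singular values and reconstruct: the principal
    components carry the low-rank background texture, so the residual
    retains localized structure such as a crack defect."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    s = s.copy()
    s[:k] = 0.0                # drop the principal (background) components
    return (u * s) @ vt        # inverse SVD with modified coefficients
```

Applied to a purely rank-1 background, the residual is essentially zero; a superimposed defect survives because it is not captured by the leading components.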