Author
Kun Yan
Bio: Kun Yan is an academic researcher from the China Academy of Space Technology. The author has contributed to research in the topics Deep learning & Image (mathematics). The author has an h-index of 1 and has co-authored 2 publications receiving 48 citations.
Papers
TL;DR: The proposed NSCTSR method reduces the computational cost of sparse-representation-based fusion by processing nonoverlapping blocks, and the experimental results show that it outperforms fusion methods based on sparse representation alone and on multiscale decomposition alone.
Abstract: Image fusion combines several images of the same scene into a single fused image that contains all of the important information. Multiscale transforms and sparse representation can solve this problem effectively. However, due to the limited number of dictionary atoms, it is difficult for sparse-representation-based image fusion methods to describe image details accurately, and they require a great deal of computation. In addition, in multiscale-transform-based methods, the low-pass subband coefficients are hard to represent sparsely, so significant features cannot be extracted from them. In this paper, a nonsubsampled contourlet transform (NSCT) and sparse-representation-based image fusion method (NSCTSR) is proposed. NSCT is used to perform a multiscale decomposition of the source images to express their details, and we present a dictionary learning scheme in the NSCT domain that lets us represent the low-frequency information of the image sparsely in order to extract its salient features. Furthermore, the method reduces the computational cost of sparse-representation-based fusion by processing nonoverlapping blocks. The experimental results show that the proposed method outperforms fusion methods based on sparse representation alone and on multiscale decomposition alone.
53 citations
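For concreteness, the pipeline above can be sketched in Python. This is a minimal illustration under stated substitutions: PyWavelets' DWT (mode="periodization") stands in for NSCT, which has no standard Python implementation, and scikit-learn's DictionaryLearning supplies the learned dictionary and OMP coding of the low-pass band; the block size, atom count, and sparsity level are illustrative choices, not the paper's settings.

import numpy as np
import pywt
from sklearn.decomposition import DictionaryLearning

def to_blocks(img, b):
    # Split a 2-D array into nonoverlapping b x b blocks, one block per row.
    h, w = img.shape
    return img.reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b * b)

def from_blocks(blocks, shape, b):
    h, w = shape
    return blocks.reshape(h // b, w // b, b, b).swapaxes(1, 2).reshape(h, w)

def nsctsr_like_fuse(img_a, img_b, b=8, n_atoms=64):
    # Assumes image dimensions are divisible by 2 * b.
    lo_a, hi_a = pywt.dwt2(img_a, "db2", mode="periodization")
    lo_b, hi_b = pywt.dwt2(img_b, "db2", mode="periodization")
    # High-pass bands: keep the coefficient with the larger absolute value.
    hi_f = tuple(np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                 for ha, hb in zip(hi_a, hi_b))
    # Low-pass band: sparse-code nonoverlapping blocks over a learned
    # dictionary, then keep, per block, the code with larger l1 activity.
    blk_a, blk_b = to_blocks(lo_a, b), to_blocks(lo_b, b)
    dl = DictionaryLearning(n_components=n_atoms, transform_algorithm="omp",
                            transform_n_nonzero_coefs=4, max_iter=10)
    dl.fit(np.vstack([blk_a, blk_b]))
    code_a, code_b = dl.transform(blk_a), dl.transform(blk_b)
    pick_a = np.abs(code_a).sum(axis=1) >= np.abs(code_b).sum(axis=1)
    code_f = np.where(pick_a[:, None], code_a, code_b)
    lo_f = from_blocks(code_f @ dl.components_, lo_a.shape, b)
    return pywt.idwt2((lo_f, hi_f), "db2", mode="periodization")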
TL;DR: A 3D cascaded spectral-spatial element attention network (3D-CSSEAN) is proposed for hyperspectral image (HSI) classification, integrating spectral and spatial feature extraction with attention-area extraction.
Abstract: Most traditional hyperspectral image (HSI) classification methods rely on hand-crafted or shallow descriptors, which limits their applicability and performance. Recently, deep learning has gradually become the mainstream approach to HSI classification because it can automatically extract deep abstract features. However, it remains a challenge to learn meaningful features for HSI classification from a small training sample set. In this paper, a 3D cascaded spectral-spatial element attention network (3D-CSSEAN) is proposed to address this issue. The 3D-CSSEAN integrates spectral-spatial feature extraction and attention-area extraction for HSI classification. Two element attention modules in the 3D-CSSEAN enable the deep network to focus on primary spectral features and meaningful spatial features. All attention modules are implemented through a few simple activation operations and elementwise multiplication operations, so few training parameters are added and the network structure remains suitable for small-sample learning. The adopted module cascading pattern not only reduces the computational burden in the deep network but can also be extended easily in a plug-expand-play fashion. Experimental results on three public data sets show that the proposed 3D-CSSEAN achieves performance comparable with state-of-the-art methods.
9 citations
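The abstract above says the attention modules are built from simple activation operations and elementwise multiplication so that few parameters are added. The PyTorch sketch below shows one plausible element attention block in that spirit; the single 1x1x1 convolution and the tensor sizes are assumptions for illustration, not the authors' exact configuration.

import torch
import torch.nn as nn

class ElementAttention3D(nn.Module):
    # Elementwise attention: a cheap projection, a sigmoid activation,
    # and an elementwise product that reweights every feature element.
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv3d(channels, channels, kernel_size=1)  # few parameters

    def forward(self, x):
        attn = torch.sigmoid(self.proj(x))  # attention map in [0, 1], same shape as x
        return x * attn

# Dummy HSI patch: (batch, channels, spectral bands, height, width)
x = torch.randn(2, 16, 20, 9, 9)
y = ElementAttention3D(16)(x)
assert y.shape == x.shape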
TL;DR: In this article, a new image detail enhancement algorithm based on deep convolutional neural networks (DCNN) is designed to improve the image quality of nanocomposites and to help researchers understand the structural characteristics of materials.
Abstract: Research on image detail enhancement algorithms for nanocomposites can improve the image quality of nanocomposites and help researchers understand the structural characteristics of materials. Because traditional nanocomposite image detail enhancement algorithms suffer from low image quality and difficulty retaining details, a new nanocomposite image detail enhancement algorithm based on deep convolutional neural networks (DCNN) is designed. The nanocomposite image is preprocessed through histogram equalization, linear gray-scale stretching, and median filtering, and then the fuzzy c-means (FCM) algorithm is used for edge detection and image segmentation. Using a color model transformation algorithm and a DCNN, an image detail enhancement model comprising feature extraction and nonlinear mapping is constructed. Finally, the idea of image detail reconstruction is introduced and realized through a convolution layer. The results show that, compared with the comparison algorithms, images processed by the proposed algorithm contain more detailed information and have higher image quality. The PSNR value reaches 27.1 dB, which indicates a better detail enhancement effect, so the algorithm can be widely used in the analysis of nanocomposites.
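The preprocessing chain named above (histogram equalization, linear gray-scale stretching, median filtering) is easy to reproduce; here is a short OpenCV/NumPy sketch. The stretch range, the kernel size, and the file name nanocomposite.png are illustrative assumptions, not values from the paper.

import cv2
import numpy as np

def preprocess(img_gray):
    # 1) Histogram equalization spreads the gray-level distribution.
    eq = cv2.equalizeHist(img_gray)
    # 2) Linear gray-scale stretch to the full 8-bit range.
    lo, hi = int(eq.min()), int(eq.max())
    stretched = (eq.astype(np.float32) - lo) * (255.0 / max(hi - lo, 1))
    stretched = stretched.astype(np.uint8)
    # 3) Median filtering suppresses impulse noise while keeping edges.
    return cv2.medianBlur(stretched, 3)

img = cv2.imread("nanocomposite.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
out = preprocess(img)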
Cited by
TL;DR: The review concludes that although various image fusion methods have been proposed, several future directions remain in different image fusion applications, and research in the image fusion field is still expected to grow significantly in the coming years.
Abstract: This review surveys pixel-level image fusion methods according to the adopted transform strategy. The existing fusion performance evaluation methods and the unresolved problems are summarized, and the major challenges met in different image fusion applications are analyzed. Pixel-level image fusion is designed to combine multiple input images into a fused image that is expected to be more informative for human or machine perception than any of the input images. Due to this advantage, pixel-level image fusion has shown notable achievements in remote sensing, medical imaging, and night vision applications. In this paper, we first provide a comprehensive survey of the state-of-the-art pixel-level image fusion methods. Then, the existing fusion quality measures are summarized. Next, four major applications (remote sensing, medical diagnosis, surveillance, and photography) and the challenges in pixel-level image fusion applications are analyzed. Finally, the review concludes that although various image fusion methods have been proposed, several future directions remain in different image fusion applications; research in the image fusion field is therefore still expected to grow significantly in the coming years.
871 citations
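Two of the widely used fusion quality measures that such surveys summarize, the Shannon entropy of the fused image and the mutual information between a source image and the fused result, can be computed as in this NumPy sketch (8-bit images and a 256-bin histogram are assumed).

import numpy as np

def entropy(img, bins=256):
    # Shannon entropy of the gray-level histogram, in bits.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    # MI between two images from their joint gray-level histogram.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))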
TL;DR: A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian fuzzy membership method, a compressed sensing technique, and a total-variation-based gradient descent reconstruction algorithm, is proposed for the fusion of infrared and visible images.
Abstract: A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, a compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion of infrared and visible images. Compared with the wavelet, contourlet, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale, multi-direction, and translation invariance. A fuzzy set is characterized by its membership function (MF), and the commonly used Gaussian fuzzy membership degree can be introduced to establish adaptive control of the fusion processing. The compressed sensing technique can sparsely sample the image information at a certain sampling rate, and the sparse signal can be recovered by solving a convex problem with a gradient-descent-based iterative algorithm. In the proposed fusion process, the pre-enhanced infrared image and the visible image are first decomposed into low-frequency and high-frequency subbands via NSCT. The low-frequency coefficients are fused using the adaptive regional average-energy rule; the highest-frequency coefficients are fused using the maximum-absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard-deviation rule, and then recovered by the total-variation-based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. Its efficiency and robustness are also analyzed and discussed through different evaluation measures, such as standard deviation, Shannon entropy, root-mean-square error, mutual information, and the edge-based similarity index.
69 citations
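Two of the subband fusion rules named above, the maximum-absolute selection rule and a regional average-energy rule, are easy to state in NumPy. The sketch below operates on already-decomposed coefficient arrays; the NSCT decomposition, the fuzzy membership weighting, and the CS/TV recovery stages are omitted, and the window size is an assumption.

import numpy as np
from scipy.ndimage import uniform_filter

def fuse_max_abs(c1, c2):
    # Maximum-absolute selection rule (highest-frequency subband).
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

def fuse_regional_energy(c1, c2, win=3):
    # Regional average-energy rule (low-frequency subband): pick the
    # coefficient whose local window carries more energy.
    e1 = uniform_filter(c1 ** 2, size=win)
    e2 = uniform_filter(c2 ** 2, size=win)
    return np.where(e1 >= e2, c1, c2)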
TL;DR: A novel multimodal medical image fusion method using the fuzzy transform (FTR) is proposed; FTR-based fusion helps preserve the detailed information present in the input images and transfer it effectively into the fused image.
Abstract: A multimodal medical image fusion method using the fuzzy transform is proposed, in which fusion is performed with a combined entropy and select-maxima rule. The fused image obtained with the proposed method preserves all of the important, relevant, and interrelated information contained in the input images and facilitates quick diagnosis and better treatment of diseases. Combined analysis of medical images obtained from multiple imaging modalities is used extensively by clinical professionals for quick diagnosis and treatment of critical diseases. Therefore, multimodal medical image fusion, which fuses information from different medical images into a single fused image, has gained the interest of researchers in recent years. In this paper, a novel method of multimodal medical image fusion using the fuzzy transform (FTR) is proposed. FTR-based fusion helps preserve the detailed information present in the input images and transfer it effectively into the fused image. To evaluate the performance of the proposed fusion method, a number of experiments and comparisons with other existing fusion methods have been carried out. Experimental results and comparative analysis show that the proposed fusion algorithm is effective and generates better results.
61 citations
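The fuzzy transform (F-transform) underlying FTR-based fusion maps a signal onto weighted local means over a fuzzy partition and back again. The 1-D NumPy sketch below shows the direct and inverse transforms with a uniform triangular partition; the fusion rule itself (entropy plus select-maxima over the components) is not shown, and the node count is an illustrative choice.

import numpy as np

def triangular_basis(n_points, n_nodes):
    # Uniform triangular fuzzy partition: (n_nodes, n_points) membership matrix.
    x = np.linspace(0.0, 1.0, n_points)
    nodes = np.linspace(0.0, 1.0, n_nodes)
    h = nodes[1] - nodes[0]
    return np.clip(1.0 - np.abs(x[None, :] - nodes[:, None]) / h, 0.0, None)

def f_transform(f, A):
    # Direct F-transform: weighted local mean under each basis function.
    return (A @ f) / A.sum(axis=1)

def inverse_f_transform(F, A):
    # Inverse F-transform: expand the components back over the basis.
    return (F @ A) / A.sum(axis=0)

signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
A = triangular_basis(200, 10)
approx = inverse_f_transform(f_transform(signal, A), A)  # smooth reconstruction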
TL;DR: Experimental results indicate that the proposed method not only works well on various multi-focus image fusion tasks, but also outperforms some existing methods in both subjective and objective quality.
Abstract: Multi-focus image fusion aims to fuse multiple images with different focus points into one single image in which all pixels appear in focus. A novel multi-focus image fusion method is presented based on sparse feature matrix decomposition and morphological filtering. First, the sparse feature matrices of the original multi-focus images are extracted by decomposing the images. Second, a temporary matrix is obtained by weighting the sparse matrices containing the salient features of the original images. Third, bright and dark regions are extracted by morphologically filtering the temporary matrix. Finally, the fusion result is formed by importing the extracted features into a base image established by weighting the source images. Experimental results indicate that the proposed method not only works well on various multi-focus image fusion tasks, but also outperforms some existing methods in both subjective and objective quality.
60 citations
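The morphological-filtering step that extracts the bright and dark regions can be sketched with white and black top-hat transforms, as below (SciPy). The structuring-element size is an illustrative assumption, and temp stands for the weighted temporary matrix from the previous step.

import numpy as np
from scipy import ndimage

def bright_dark_regions(temp, size=5):
    footprint = np.ones((size, size))
    opened = ndimage.grey_opening(temp, footprint=footprint)
    closed = ndimage.grey_closing(temp, footprint=footprint)
    bright = temp - opened  # white top-hat: bright structures narrower than the footprint
    dark = closed - temp    # black top-hat: dark structures narrower than the footprint
    return bright, dark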
TL;DR: A new multi-focus image fusion method based on sparse representation (DWT-SR) is proposed; it reduces the computational burden by decomposing the image into multiple frequency bands, and GPU-parallel multi-channel processing further reduces the running time of the algorithm.
Abstract: According to the principle of lens imaging, when a three-dimensional object is projected onto a photosensitive element through a convex lens, points intersecting the focal plane form a clear image, while object points far from the focal plane form blurred image points. Within a limited range in front of and behind the focal plane, the imaging is considered sharp; otherwise, the image is considered blurred. In microscopic scenes, an electron microscope is usually used as the imaging equipment, which essentially eliminates defocus between the lens and the object; most of the blur is instead caused by the shallow depth of field of the microscope, which leaves parts of the image defocused. On this basis, this paper analyzes the causes of defocusing in a video microscope, finds that the shallow depth of field is the main one, and accordingly chooses a deblurring approach: multi-focus image fusion. We propose a new multi-focus image fusion method based on sparse representation (DWT-SR). The computational burden is reduced by decomposing the image into multiple frequency bands, and multi-channel processing via GPU parallelism further reduces the running time of the algorithm. The results indicate that the DWT-SR algorithm introduced in this paper yields higher contrast and much more detail, and it also alleviates the long runtime of dictionary training and sparse approximation.
48 citations
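A minimal PyWavelets sketch of the DWT side of such a method: decompose both sources, fuse the subbands, and reconstruct. Here a maximum-absolute rule stands in for the paper's per-band sparse-representation fusion, no GPU parallelism is attempted, and the wavelet and decomposition level are illustrative choices.

import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db4", levels=2):
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    # Approximation band: keep the coefficient with the larger absolute value.
    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]
    # Detail bands at every level: same maximum-absolute rule.
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)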