Infrared and visible image fusion based on QNSCT and Guided Filter
01 Mar 2022 - Optik - Vol. 253, p. 168592
TL;DR: In this paper, a new fusion framework based on the Quaternion Non-Subsampled Contourlet Transform (QNSCT) and guided-filter detail enhancement is designed to address the problems of inconspicuous infrared targets and poor background texture in infrared and visible image fusion.
Abstract: Image fusion is the process of fusing multiple images of the same scene to obtain a single, more informative image for human perception. In this paper, a new fusion framework based on the Quaternion Non-Subsampled Contourlet Transform (QNSCT) and guided-filter detail enhancement is designed to address the problems of inconspicuous infrared targets and poor background texture in infrared and visible image fusion. The proposed method uses the quaternion wavelet transform for the first time in place of the traditional Non-Subsampled Pyramid Filter Bank structure in the Non-Subsampled Contourlet Transform (NSCT). The flexible multi-resolution of the quaternion wavelet and the multi-directionality of the NSCT are fully exploited to refine the multi-scale decomposition scheme. The coefficient matrices obtained from the proposed QNSCT algorithm are then fused using a weight-refinement algorithm based on the guided filter. The fusion scheme is divided into four steps; in the first, the infrared and visible images are decomposed into multi-directional, multi-scale coefficient matrices using QNSCT. The experimental results show that the proposed algorithm not only extracts important visual information from the source images but also better preserves the texture information in the scene. Moreover, the scheme outperforms state-of-the-art methods in both subjective and objective evaluations.
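The guided-filter weight refinement described above can be illustrated with a minimal sketch. This is not the paper's QNSCT pipeline: the multi-scale decomposition is omitted, the saliency-based initial weight map is a stand-in, and `box`/`guided_filter` follow the standard guided-filter formulation of He et al. (an assumption, not taken from this paper):

```python
import numpy as np

def box(img, r):
    """O(1)-per-pixel box sum of a 2-D array via cumulative sums."""
    h, w = img.shape
    out = np.zeros_like(img)
    c = np.cumsum(img, axis=0)
    out[:r+1] = c[r:2*r+1]
    out[r+1:h-r] = c[2*r+1:] - c[:h-2*r-1]
    out[h-r:] = c[-1:] - c[h-2*r-1:h-r-1]
    c = np.cumsum(out, axis=1)
    res = np.zeros_like(img)
    res[:, :r+1] = c[:, r:2*r+1]
    res[:, r+1:w-r] = c[:, 2*r+1:] - c[:, :w-2*r-1]
    res[:, w-r:] = c[:, -1:] - c[:, w-2*r-1:w-r-1]
    return res

def guided_filter(I, p, r=4, eps=1e-3):
    """Edge-preserving smoothing of p, guided by I (He et al. formulation)."""
    N = box(np.ones_like(I), r)           # per-pixel window size (handles borders)
    mI, mp = box(I, r)/N, box(p, r)/N
    cov = box(I*p, r)/N - mI*mp
    var = box(I*I, r)/N - mI*mI
    a = cov/(var + eps)                   # local linear model p ~ a*I + b
    b = mp - a*mI
    return (box(a, r)/N)*I + box(b, r)/N

# Toy fusion: a crude saliency-based binary weight map, refined by the
# guided filter so the fusion weights follow edges of the visible image.
rng = np.random.default_rng(0)
ir = rng.random((32, 32))                 # stand-in infrared band
vis = rng.random((32, 32))                # stand-in visible band
w0 = (np.abs(ir - ir.mean()) > np.abs(vis - vis.mean())).astype(float)
w = np.clip(guided_filter(vis, w0, r=4, eps=1e-2), 0.0, 1.0)
fused = w*ir + (1.0 - w)*vis              # per-pixel convex combination
```

The design point the abstract relies on: filtering the raw weight map with the visible image as guide smooths the weights where the guide is flat but keeps them sharp across the guide's edges, so fusion seams align with object boundaries.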
TL;DR: A deep neural network (DNN) for forecasting intra-day solar irradiance is presented; photovoltaic (PV) plants, regardless of whether or not they have energy storage, can benefit from this work.
Abstract: In this paper, we introduce a deep neural network (DNN) for forecasting intra-day solar irradiance; photovoltaic (PV) plants, whether or not they include energy storage, can benefit from this work. The proposed DNN combines several methodologies, including cloud motion analysis and machine learning, to forecast future climatological conditions. In addition, the accuracy of the model was evaluated against readily accessible data sources. In total, four different cases have been investigated. According to the findings, the DNN consistently makes more accurate and reliable predictions of incoming solar irradiance than the persistence algorithm. Even without any measured data, the proposed model is state-of-the-art in that it outperforms current NWP forecasts over the same time horizon. For short-term predictions, measured data help reduce the margin of error; for long-term predictions, weather information is more beneficial.
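The persistence baseline the abstract compares against is simple to state: the forecast at horizon h is the last observed value, y_hat(t+h) = y(t). A toy sketch on synthetic irradiance (the sampling rate, amplitude, and noise level are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Toy intra-day irradiance: a clear-sky sine plus measurement noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, np.pi, 288)               # one daylight period, 5-min steps
irradiance = 900.0*np.sin(t) + rng.normal(0.0, 30.0, t.size)

h = 6                                          # 30-minute horizon (6 steps)
pred = irradiance[:-h]                         # persistence: y_hat(t+h) = y(t)
target = irradiance[h:]
rmse = float(np.sqrt(np.mean((pred - target)**2)))
```

Any learned forecaster has to beat this RMSE to claim skill, which is why persistence is the standard reference in irradiance forecasting.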
TL;DR: In this paper, a high-quality image enhancement algorithm is proposed to solve the problems of noise amplification and excessive enhancement caused by low contrast and uneven illumination in low-illumination image enhancement.
Abstract: In order to solve the problems of noise amplification and excessive enhancement caused by low contrast and uneven illumination in the process of low-illumination image enhancement, a high-quality image enhancement algorithm is proposed in this paper. First, the total-variation model is used to obtain the smoothed V- and S-channel images, and the adaptive gamma transform is used to enhance the details of the smoothed V-channel image. Then, on this basis, the improved multi-scale retinex algorithms based on the logarithmic function and on the hyperbolic tangent function, respectively, are used to obtain different V-channel enhanced images, and the two images are fused according to the local intensity amplitude of the images. Finally, the three-dimensional gamma function is used to correct the fused image, and then adjust the image saturation. A lightness-order-error (LOE) approach is used to measure the naturalness of the enhanced image. The experimental results show that compared with other classical algorithms, the LOE value of the proposed algorithm can be reduced by 79.95% at most. Compared with other state-of-the-art algorithms, the LOE value can be reduced by 53.43% at most. Compared with some algorithms based on deep learning, the LOE value can be reduced by 52.13% at most. The algorithm proposed in this paper can effectively reduce image noise, retain image details, avoid excessive image enhancement, and obtain a better visual effect while ensuring the enhancement effect.
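The adaptive gamma step above can be sketched with a common mean-to-mid-grey heuristic. The paper's exact transform and the subsequent retinex fusion stages are not reproduced here, so treat `adaptive_gamma` as an illustrative assumption:

```python
import numpy as np

def adaptive_gamma(v):
    """Mean-to-mid-grey adaptive gamma (illustrative heuristic, not the
    paper's exact transform): gamma = log(0.5)/log(mean(V)), so a dark
    image (mean < 0.5) gets gamma < 1 and is brightened, while an
    already-bright image gets gamma > 1 and is toned down."""
    v = np.clip(v, 1e-6, 1.0)
    gamma = np.log(0.5)/np.log(float(v.mean()))
    return v**gamma

rng = np.random.default_rng(2)
v_dark = 0.25*rng.random((64, 64))   # low-illumination V channel in [0, 0.25)
v_enh = adaptive_gamma(v_dark)       # brighter, relative ordering preserved
```

Because the exponent depends only on the global mean, this variant cannot over-amplify already-bright regions the way a fixed small gamma would, which is the behavior the abstract's "excessive enhancement" concern points at.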
04 Dec 2006
TL;DR: A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed, which powerfully predicts human fixations on 749 variations of 108 natural images, achieving 98% of the ROC area of a human-based control, whereas the classical algorithms of Itti & Koch achieve only 84%.
Abstract: A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed. It consists of two steps: first forming activation maps on certain feature channels, and then normalizing them in a way which highlights conspicuity and admits combination with other maps. The model is simple, and biologically plausible insofar as it is naturally parallelized. This model powerfully predicts human fixations on 749 variations of 108 natural images, achieving 98% of the ROC area of a human-based control, whereas the classical algorithms of Itti & Koch achieve only 84%.
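The activation step of GBVS can be sketched directly from its definition: build a fully connected graph over a feature map, weight each edge by feature dissimilarity times a Gaussian falloff in grid distance, and take the equilibrium distribution of the induced Markov chain. The grid size, `sigma`, and iteration count below are illustrative choices, not the paper's settings:

```python
import numpy as np

def gbvs_activation(feat, sigma=3.0, iters=200):
    """Equilibrium of a Markov chain whose edge weights are feature
    dissimilarity times a Gaussian falloff in grid distance; mass
    concentrates on nodes that differ from their surround."""
    h, w = feat.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([ys.ravel(), xs.ravel()], 1).astype(float)
    f = feat.ravel().astype(float)
    d2 = ((pos[:, None, :] - pos[None, :, :])**2).sum(-1)
    wgt = np.abs(f[:, None] - f[None, :])*np.exp(-d2/(2.0*sigma**2))
    P = wgt/wgt.sum(axis=1, keepdims=True)   # row-stochastic transitions
    pi = np.full(f.size, 1.0/f.size)         # start from the uniform distribution
    for _ in range(iters):
        pi = pi @ P                          # power iteration toward equilibrium
    return pi.reshape(h, w)

rng = np.random.default_rng(4)
feat = 0.05*rng.random((12, 12))             # near-flat background
feat[6, 6] = 1.0                             # one conspicuous outlier
act = gbvs_activation(feat)                  # probability mass piles on the outlier
```

This is also why the abstract can call the model "naturally parallelized": the update is a single matrix-vector product per step.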
01 Jan 2016
TL;DR: This thesis develops an effective but very simple prior, called the dark channel prior, to remove haze from a single image, and thus solves the ambiguity of the problem.
Abstract: Haze brings trouble to many computer vision/graphics applications. It reduces the visibility of scenes and lowers the reliability of outdoor surveillance systems; it reduces the clarity of satellite images; and it changes the colors and decreases the contrast of daily photos, which is an annoying problem for photographers. Therefore, removing haze from images is an important and widely demanded topic in computer vision and computer graphics. The main challenge lies in the ambiguity of the problem. Haze attenuates the light reflected from the scene and blends it with additive light in the atmosphere. The goal of haze removal is to recover the reflected light (i.e., the scene colors) from the blended light. This problem is mathematically ambiguous: there are infinitely many solutions given the blended light. How can we know which solution is true? We need to answer this question in haze removal. Ambiguity is a common challenge for many computer vision problems. Mathematically, ambiguity arises because the number of equations is smaller than the number of unknowns. The methods used in computer vision to resolve ambiguity can be roughly categorized into two strategies. The first is to acquire more known variables; for example, some haze removal algorithms capture multiple images of the same scene under different settings (such as polarizers). But it is not easy to obtain extra images in practice. The second strategy is to impose extra constraints using knowledge or assumptions. This way is more practical since it requires as few as one image. To this end, we focus on single-image haze removal in this thesis. The key is to find a suitable prior. Priors are important in many computer vision topics. A prior tells the algorithm "what can we know about the fact beforehand" when the fact is not directly available.
In general, a prior can be some statistical/physical properties, rules, or heuristic assumptions. The performance of the algorithms is often determined by the extent to which the prior is valid. Some widely used priors in computer vision are the smoothness prior, sparsity prior, and symmetry prior. In this thesis, we develop an effective but very simple prior, called the dark channel prior, to remove haze from a single image. The dark channel prior is a statistical property of outdoor haze-free images: most patches in these images should contain pixels which are dark in at least one color channel. These dark pixels can be due to shadows, colorfulness, geometry, or other factors. This prior provides a constraint for each pixel, and thus solves the ambiguity of the problem. Combining this prior with a physical haze imaging model, we can easily recover high quality haze-free images.
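The dark channel prior itself is easy to state in code: take the per-pixel minimum over color channels, then a minimum filter over a local patch; combined with the haze model I = J·t + A·(1 − t), it yields the standard transmission estimate t = 1 − ω·dark(I/A). A minimal sketch (patch size 15 and ω = 0.95 follow common practice for this method; the synthetic scene is illustrative):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel min over color channels, then a min filter over a patch."""
    mins = img.min(axis=2)
    r = patch//2
    pad = np.pad(mins, r, mode='edge')
    h, w = mins.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i+patch, j:j+patch].min()
    return out

def transmission(img, A, omega=0.95, patch=15):
    """t(x) = 1 - omega*dark_channel(I/A): follows from I = J*t + A*(1-t)
    plus the prior that the dark channel of the haze-free J is near zero."""
    return 1.0 - omega*dark_channel(img/A, patch)

# Synthetic check: haze a scene with known t, then recover t from I alone.
rng = np.random.default_rng(5)
J = rng.random((20, 20, 3))                  # haze-free scene (illustrative)
t_true, A = 0.6, 1.0
I_hazy = J*t_true + A*(1.0 - t_true)         # haze imaging model
dc_clear = dark_channel(J)                   # prior says: near zero
t_est = transmission(I_hazy, A)              # should land near t_true
```

The prior is exactly the extra per-pixel constraint the thesis argues for: because dark(J) ≈ 0, the observed dark channel is attributable to haze alone, which pins down t.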
TL;DR: It is concluded that although various image fusion methods have been proposed, several future directions remain in different image fusion applications, and research in the image fusion field is still expected to grow significantly in the coming years.
Abstract: This review provides a survey of various pixel-level image fusion methods according to the adopted transform strategy. The existing fusion performance evaluation methods and the unresolved problems are summarized, and the major challenges met in different image fusion applications are analyzed. Pixel-level image fusion is designed to combine multiple input images into a fused image, which is expected to be more informative for human or machine perception than any of the input images. Due to this advantage, pixel-level image fusion has shown notable achievements in remote sensing, medical imaging, and night-vision applications. In this paper, we first provide a comprehensive survey of state-of-the-art pixel-level image fusion methods. Then, the existing fusion quality measures are summarized. Next, four major applications, i.e., remote sensing, medical diagnosis, surveillance, and photography, and the challenges in pixel-level image fusion applications are analyzed. Finally, this review concludes that although various image fusion methods have been proposed, several future directions remain in different image fusion applications; research in the image fusion field is therefore expected to grow significantly in the coming years.
TL;DR: This work comprehensively surveys the existing methods and applications for the fusion of infrared and visible images, and can serve as a reference for researchers in infrared and visible image fusion and related fields.
Abstract: Infrared images can distinguish targets from their backgrounds based on the radiation difference, which works well in all-weather and all-day/night conditions. By contrast, visible images can provide texture details with high spatial resolution and definition in a manner consistent with the human visual system. Therefore, it is desirable to fuse these two types of images, which can combine the advantages of thermal radiation information in infrared images and detailed texture information in visible images. In this work, we comprehensively survey the existing methods and applications for the fusion of infrared and visible images. First, infrared and visible image fusion methods are reviewed in detail. Meanwhile, image registration, as a prerequisite of image fusion, is briefly introduced. Second, we provide an overview of the main applications of infrared and visible image fusion. Third, the evaluation metrics of fusion performance are discussed and summarized. Fourth, we select eighteen representative methods and nine assessment metrics to conduct qualitative and quantitative experiments, which can provide an objective performance reference for different fusion methods and thus support relative engineering with credible and solid evidence. Finally, we conclude with the current status of infrared and visible image fusion and deliver insightful discussions and prospects for future work. This survey can serve as a reference for researchers in infrared and visible image fusion and related fields.
24 Nov 2003
TL;DR: Three variants of a new quality metric for image fusion, based on an image quality index recently introduced by Wang and Bovik, are presented; they are compliant with subjective evaluations and can therefore be used to compare different image fusion methods or to find the best parameters for a given fusion algorithm.
Abstract: We present three variants of a new quality metric for image fusion. The interest of our metrics, which are based on an image quality index recently introduced by Wang and Bovik in [Z. Wang et al., March 2002], lies in the fact that they do not require a ground-truth or reference image. We perform several simulations which show that our metrics are compliant with subjective evaluations and can therefore be used to compare different image fusion methods or to find the best parameters for a given fusion algorithm.
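The Wang-Bovik index these metrics build on has a closed form, and a no-reference fusion score can be sketched by weighting Q(A, F) and Q(B, F) by input saliency. The global version below is a simplification of the paper's locally windowed variants, written as an assumption-laden sketch rather than their exact metric:

```python
import numpy as np

def uiqi(x, y):
    """Wang-Bovik universal image quality index, global version:
    Q = 4*cov(x,y)*mx*my / ((vx+vy)*(mx^2+my^2)); Q = 1 iff y == x."""
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx)*(y - my)).mean()
    return 4.0*cxy*mx*my/((vx + vy)*(mx*mx + my*my))

def fusion_quality(a, b, f):
    """No-reference fusion score: Q(A,F) and Q(B,F) weighted by the
    relative variance (a crude saliency) of the two inputs."""
    la = a.var()/(a.var() + b.var())
    return la*uiqi(a, f) + (1.0 - la)*uiqi(b, f)

rng = np.random.default_rng(6)
a = rng.random((32, 32))
b = rng.random((32, 32))
f = 0.5*a + 0.5*b                     # toy averaging fusion
q_self = uiqi(a, a)                   # identical images score exactly 1
score = fusion_quality(a, b, f)       # no ground-truth image needed
```

The key property the abstract emphasizes is visible here: the score is computed only from the inputs and the fused result, so no reference image is required.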