
Showing papers on "Bilateral filter" published in 2022


Journal ArticleDOI
TL;DR: In this article, a structure-aware bilateral filter that incorporates structural information throughout texture smoothing is proposed. The classical bilateral filter cannot be applied to texture smoothing as-is; certain modifications are required to exploit it as a precise tool for texture smoothing.
Abstract: The classical bilateral filter is designed to preserve the structure of the image by utilizing the range and spatial kernels. However, its straightforward application to texture smoothing is not viable, as certain modifications are required to exploit the bilateral filter as a precise tool for texture smoothing. It is worth noting that, with numerous rectifications, several methods have been developed over the last few decades that employ a bilateral filter as a precise tool for texture smoothing. Although these methods are precise in preserving significant structural information, the structures simultaneously lose sharpness. Moreover, while these methods smooth texture efficiently, they also blur some prominent structures. In this paper, we design a novel structure-aware bilateral filter that incorporates structural information throughout texture smoothing. The filtering is executed on each individual pixel, employing a spatial kernel driven by the scale map. The experimental section reveals the superiority of the proposed method with respect to texture smoothing and structure preservation. We also demonstrate the proposed method's efficiency in several applications, namely edge detection, detail enhancement, and texture transfer.

9 citations
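
For reference, the papers in this listing build on the classical bilateral filter, which weights each neighbor by a spatial Gaussian and a range (intensity) Gaussian. Below is a minimal brute-force NumPy sketch of that baseline; the function name and parameter values are illustrative, not taken from any of the papers.

```python
import numpy as np

def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1, radius=5):
    """Classical bilateral filter: each output pixel is a normalized
    average of its neighbors, weighted by spatial and intensity
    proximity. Brute-force loops; intensities assumed in [0, 1]."""
    h, w = img.shape
    # Spatial (domain) kernel over the window offsets, computed once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    padded = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: penalize intensity differences from the center.
            rng = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```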


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input.
Abstract: Background Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with expressive bone-soft tissue contrast. However, CT resolution can be severely degraded through low-dose acquisitions, highlighting the importance of effective denoising algorithms. Purpose Most data-driven denoising techniques are based on deep neural networks, and therefore, contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms achieving state-of-the-art performance helps to minimize radiation dose while maintaining data integrity. Methods This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. Denoising in pure image-to-image pipelines and across different domains such as raw detector data and reconstructed volume, using a differentiable backprojection layer, is demonstrated. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and constrains the applied operation to follow the traditional bilateral filter algorithm by design. Results Although only using three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on x-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures of 0.7094 and 0.9674 and peak signal-to-noise ratio values of 33.17 and 43.07 on the respective data sets. Conclusions Due to the extremely low number of trainable parameters with well-defined effect, prediction reliability and data integrity are guaranteed at any time in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.

6 citations
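
A minimal 2-D PyTorch sketch of the paper's central idea, under stated assumptions: exposing the kernel bandwidths as nn.Parameters lets autograd propagate gradients to them and to the input. The published layer operates on 3-D volumes with three spatial sigmas plus one range sigma; this toy version has one spatial and one range parameter, and all names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableBilateralFilter2d(nn.Module):
    """Illustrative 2-D analogue of a trainable bilateral filter layer:
    the spatial and range bandwidths are learned, and gradients also
    flow to the input. Input: (B, 1, H, W), intensities roughly [0, 1]."""
    def __init__(self, radius=3):
        super().__init__()
        self.radius = radius
        self.log_sigma_s = nn.Parameter(torch.tensor(0.0))   # sigma_s = exp(.)
        self.log_sigma_r = nn.Parameter(torch.tensor(-2.0))  # sigma_r = exp(.)

    def forward(self, x):
        r = self.radius
        sigma_s = self.log_sigma_s.exp()
        sigma_r = self.log_sigma_r.exp()
        ys, xs = torch.meshgrid(
            torch.arange(-r, r + 1, dtype=x.dtype, device=x.device),
            torch.arange(-r, r + 1, dtype=x.dtype, device=x.device),
            indexing='ij')
        spatial = torch.exp(-(xs**2 + ys**2) / (2 * sigma_s**2)).reshape(1, -1, 1)
        # Gather all (2r+1)^2 neighbors for every pixel: (B, K, H*W).
        patches = F.unfold(F.pad(x, (r, r, r, r), mode='reflect'),
                           kernel_size=2 * r + 1)
        center = x.flatten(2)                       # (B, 1, H*W)
        rng = torch.exp(-(patches - center)**2 / (2 * sigma_r**2))
        w = spatial * rng
        out = (w * patches).sum(1) / w.sum(1)       # normalized weighted average
        return out.view_as(x)
```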


Journal ArticleDOI
25 Jan 2022-Sensors
TL;DR: A novel Dual-Histogram BF (DHBF) method is proposed that exploits an edge-preserving, noise-reduced guidance image to compute the range kernel, removing isolated noisy pixels for better denoising; experiments show it outperforms other state-of-the-art BF methods.
Abstract: Bilateral Filtering (BF) is an effective edge-preserving smoothing technique in image processing. However, an inherent problem of BF for image denoising is that it is challenging to differentiate image noise and details with the range kernel, so denoising often preserves noise along with the edges. This letter proposes a novel Dual-Histogram BF (DHBF) method that exploits an edge-preserving, noise-reduced guidance image to compute the range kernel, removing isolated noisy pixels for better denoising results. Furthermore, we approximate the spatial kernel using mean filtering based on column-histogram construction, achieving constant-time filtering regardless of the kernel radius as well as better smoothing. Experimental results on multiple benchmark denoising datasets show that the proposed DHBF outperforms other state-of-the-art BF methods.

5 citations
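
The constant-time spatial smoothing above rests on incremental column-histogram updates; as a rough illustration of the same O(1)-per-pixel property, the sketch below implements a radius-independent mean filter with a summed-area table (a different but standard construction, not the authors' exact one).

```python
import numpy as np

def box_mean(img, radius):
    """Mean filter whose per-pixel cost is independent of the radius,
    computed with a summed-area (integral) image."""
    img = img.astype(np.float64)
    pad = np.pad(img, radius, mode='edge')
    # Integral image with a leading row/column of zeros.
    ii = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    ii[1:, 1:] = pad.cumsum(0).cumsum(1)
    k = 2 * radius + 1
    h, w = img.shape
    # Window sum from four corner lookups per pixel.
    s = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
         - ii[k:k + h, :w] + ii[:h, :w])
    return s / (k * k)
```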


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a two-pass (TP) bilateral filter, the TP-based BF, together with an adaptive control scheme of range kernels, for noise-invariant edge-preserving image smoothing.
Abstract: Bilateral filtering has been adopted for edge-preserving image smoothing and has achieved state-of-the-art performance. Most existing bilateral filters (BFs), however, focus on accelerating the brute-force implementation rather than on smoothing quality. In this letter, we propose a two-pass (TP) bilateral filter, the TP-based BF, and an adaptive control scheme of range kernels for noise-invariant edge-preserving image smoothing. Specifically, the TP-based BF is composed of two bilateral filtering operations, which are, respectively, in charge of coarse context extraction and fine structure refinement. The control scheme of range kernels guides the TP bilateral mechanism to first eliminate high-frequency noisy pixels and then exploit contributions between pixels from clean contexts. Experimental results on four aerial-imagery benchmark data sets show that our TP-based BF outperforms existing BFs in terms of both feature- and gradient-aware measures.

5 citations
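
A toy illustration of the two-pass idea using OpenCV's stock bilateral filter: a wide range kernel first extracts coarse context, then a narrow range kernel refines fine structure. The fixed sigma values below are placeholders; the letter instead sets the range kernels with its adaptive control scheme.

```python
import cv2

def two_pass_bf(img_u8, d=9, sigma_space=5.0):
    """Two chained bilateral passes on an 8-bit image: coarse context
    extraction (wide range kernel), then fine structure refinement
    (narrow range kernel). Sigma values are illustrative placeholders."""
    coarse = cv2.bilateralFilter(img_u8, d, 80.0, sigma_space)  # wide range kernel
    return cv2.bilateralFilter(coarse, d, 20.0, sigma_space)    # narrow range kernel
```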


Journal ArticleDOI
TL;DR: Li et al. as discussed by the authors proposed an anisotropic diffusion (AD) filter based on a pixel difference function (PDF) and local entropy (LE) to improve the balance between speckle suppression and edge preservation.
Abstract: Speckle destroys the texture details of synthetic aperture radar (SAR) images, thereby constraining their high-precision application. Speckle suppression and edge preservation are two aspects that need to be balanced in despeckling. Although a conventional anisotropic diffusion (AD) filter can theoretically achieve this balance, it still incurs considerable edge loss. To improve the balance, a novel AD filter based on the pixel difference function (PDF) and local entropy (LE) is proposed. The proposed filter utilizes a PDF to update the original diffusion function of the AD filter and introduces LE to recover the edge loss from the ratio image generated from the noisy and filtered images. In addition, a neighborhood weighting approach and a new adaptive iterative rule are proposed for better AD filtering. Simulated data and real SAR images were used to evaluate the performance of the proposed algorithm. Experimental results show that the proposed filter both effectively smooths speckle and reduces edge loss. Furthermore, the effectiveness and superiority of the proposed method were confirmed by comparison with other state-of-the-art methods.

3 citations
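
For orientation, the conventional Perona-Malik anisotropic diffusion that the paper modifies is sketched below; the proposed method replaces the edge-stopping function g() with its pixel difference function and adds LE-based edge recovery, neither of which is shown here.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, gamma=0.2):
    """Conventional Perona-Malik diffusion: diffuse strongly in flat
    regions, weakly across edges. Periodic borders via np.roll for
    brevity."""
    u = img.astype(np.float64).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping function
    for _ in range(n_iter):
        # Differences toward the four nearest neighbors.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += gamma * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```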


Journal ArticleDOI
TL;DR: Experimental results show that the CUR transformer outperforms the state-of-the-art methods significantly on four low-level vision tasks, including real and synthetic image denoising, JPEG compression artifact reduction, and low-light image enhancement.
Abstract: Image denoising is a fundamental problem in computer vision and multimedia computation. Non-local filters are effective for image denoising, but existing deep learning methods with non-local computation structures are mostly designed for high-level tasks and usually adopt global self-attention. For image denoising, they have high computational complexity and perform much redundant computation on uncorrelated pixels. To solve this problem and combine the advantages of non-local filtering and deep learning, we propose a Convolutional Unbiased Regional (CUR) transformer. Based on the prior that, for each pixel, its similar pixels are usually spatially close, our insights are that (1) we partition the image into non-overlapping windows and perform regional self-attention to reduce the search range of each pixel, and (2) we encourage pixels across different windows to communicate with each other. Based on these insights, the CUR transformer is a cascade of convolutional regional self-attention (CRSA) blocks with U-style short connections. In each CRSA block, we use convolutional layers to extract the query, key, and value features, namely Q, K, and V, from the input feature. Then, we partition the Q, K, and V features into local non-overlapping windows and perform regional self-attention within each window to obtain the output feature of the CRSA block. Across different CRSA blocks, we perform an unbiased window partition by changing the partition positions of the windows. Experimental results show that the CUR transformer significantly outperforms state-of-the-art methods on four low-level vision tasks, including real and synthetic image denoising, JPEG compression artifact reduction, and low-light image enhancement.

3 citations
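
A stripped-down PyTorch sketch of the regional self-attention at the core of the CRSA block: a single head over non-overlapping windows, with H and W assumed divisible by the window size. The paper's convolutional Q/K/V extraction, unbiased (shifted) window partition, and U-style connections are omitted.

```python
import torch

def regional_self_attention(q, k, v, win=8):
    """Self-attention restricted to non-overlapping win x win windows.
    q, k, v: (B, C, H, W) with H and W divisible by win."""
    B, C, H, W = q.shape
    def to_windows(t):
        # (B, C, H, W) -> (B*num_windows, win*win, C)
        t = t.reshape(B, C, H // win, win, W // win, win)
        return t.permute(0, 2, 4, 3, 5, 1).reshape(-1, win * win, C)
    qw, kw, vw = map(to_windows, (q, k, v))
    # Scaled dot-product attention within each window.
    attn = torch.softmax(qw @ kw.transpose(1, 2) / C ** 0.5, dim=-1)
    out = attn @ vw                                # (B*nW, win*win, C)
    out = out.reshape(B, H // win, W // win, win, win, C)
    return out.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)
```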


Book ChapterDOI
01 Jan 2022
TL;DR: A new method is proposed that uses bilateral filtering to preserve edges and GLCM feature analysis to generate more accurate features, increasing accuracy and yielding a clearer set of edges.
Abstract: The medical image processing field is developing rapidly, with modalities such as magnetic resonance imaging, X-rays, and computed tomography scans. Even a very small defect in the human body can be detected using these technologies. The knowledge and experience of radiologists are important for brain tumor detection, so an automated tumor detection system is used to aid them. The existing system uses only low-pass filtering, which does not give a clear set of edges, and LBP feature analysis, which generates few and less accurate features. We therefore propose a new method that uses bilateral filtering to preserve edges and GLCM feature analysis to generate more accurate features. For classification, convolutional neural networks are used. The images are reduced without loss of features for easy processing and accurate prediction. The proposed method increases accuracy and gives a clearer set of edges.

3 citations
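
A short scikit-image sketch of the kind of GLCM feature extraction the chapter describes; the distances, angles, and property set are illustrative choices, not the authors' exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8):
    """GLCM texture descriptors from an 8-bit grayscale image:
    co-occurrence statistics at one distance and four angles."""
    glcm = graycomatrix(gray_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'homogeneity', 'energy', 'correlation']
    # One value per (distance, angle) pair and property: 16 features here.
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```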


Journal ArticleDOI
TL;DR: This study explores the characteristics of a bilateral filter in changing the noise and texture within computed tomography images in an iterative implementation and shows that the bilateral filter was effective in suppressing noise at high frequencies.
Abstract: A bilateral filter is a non-linear denoising algorithm that can reduce noise while preserving the edges. This study explores the characteristics of a bilateral filter in changing the noise and texture within computed tomography (CT) images in an iterative implementation. We collected images of a homogeneous Neusoft phantom scanned with tube currents of 77, 154, and 231 mAs. The images for each tube current were filtered five times with a configuration of sigma space (σd) = 2 pixels, sigma intensity (σr) = noise level, and a kernel of 5 × 5 pixels. To observe the noise texture in each filter iteration, the noise power spectrum (NPS) was obtained for the five slices of each dataset and averaged to generate a stable curve. The modulation-transfer function (MTF) was also measured from the original and the filtered images. Tests on an anthropomorphic phantom image were carried out to observe their impact on clinical scenarios. Noise measurements and visual observations of edge sharpness were performed on this image. Our results showed that the bilateral filter was effective in suppressing noise at high frequencies, which is confirmed by the sloping NPS curve for different tube currents. The peak frequency was shifted from about 0.2 to about 0.1 mm−1 for all tube currents, and the noise magnitude was reduced by more than 50% compared to the original images. The spatial resolution does not change with the number of iterations of the filter, which is confirmed by the constant values of MTF50 and MTF10. The test results on the anthropomorphic phantom image show a similar pattern, with noise reduced by up to 60% and object edges remaining sharp.
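
A minimal sketch of the NPS estimate described in the study: subtract each ROI's mean, average the squared FFT magnitudes over slices, and normalize by pixel area and ROI size. This is one common normalization convention, assumed here rather than taken from the paper.

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """2-D noise power spectrum from homogeneous-phantom ROIs,
    ensemble-averaged over slices. rois: iterable of equal-size
    2-D arrays."""
    rois = [np.asarray(r, dtype=np.float64) for r in rois]
    ny, nx = rois[0].shape
    acc = np.zeros((ny, nx))
    for roi in rois:
        noise = roi - roi.mean()          # remove the DC component
        acc += np.abs(np.fft.fft2(noise)) ** 2
    nps = acc * (pixel_size_mm ** 2) / (len(rois) * nx * ny)
    return np.fft.fftshift(nps)           # zero frequency at the center
```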

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a Masked Joint Bilateral Filtering (MJBF) via deep image prior for digital X-ray image denoising, which consists of a deep prior generator and an iterative filtering block.
Abstract: Medical image denoising faces great challenges. Although deep learning methods have shown great potential, their efficiency is severely affected by millions of trainable parameters. The non-linearity of neural networks also makes them difficult to be understood. Therefore, existing deep learning methods have been sparingly applied to clinical tasks. To this end, we integrate known filtering operators into deep learning and propose a novel Masked Joint Bilateral Filtering (MJBF) via deep image prior for digital X-ray image denoising. Specifically, MJBF consists of a deep image prior generator and an iterative filtering block. The deep image prior generator produces plentiful image priors by a multi-scale fusion network. The generated image priors serve as the guidance for the iterative filtering block, which is utilized for the actual edge-preserving denoising. The iterative filtering block contains three trainable Joint Bilateral Filters (JBFs), each with only 18 trainable parameters. Moreover, a masking strategy is introduced to reduce redundancy and improve the understanding of the proposed network. Experimental results on the ChestX-ray14 dataset and real data show that the proposed MJBF has achieved superior performance in terms of noise suppression and edge preservation. Tests on the portability of the proposed method demonstrate that this denoising modality is simple yet effective, and could have a clinical impact on medical imaging in the future.
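
The joint bilateral filter at the heart of MJBF reads its range weights from a guidance image (here, the output of the deep prior generator) rather than from the noisy input itself. A brute-force NumPy sketch with illustrative parameter values:

```python
import numpy as np

def joint_bilateral_filter(img, guide, sigma_s=3.0, sigma_r=0.1, radius=5):
    """Joint/guided bilateral filtering: spatial weights come from pixel
    distance, range weights from the guidance image's intensities."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pi = np.pad(img, radius, mode='reflect')
    pg = np.pad(guide, radius, mode='reflect')
    out = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win_i = pi[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            win_g = pg[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel evaluated on the (cleaner) guidance image.
            rng = np.exp(-(win_g - guide[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * win_i) / np.sum(wgt)
    return out
```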

Journal ArticleDOI
01 Jan 2022
TL;DR: In this paper, the authors proposed an Extended SRAD filter that includes the intensities of four additional neighboring pixels alongside the four used in the standard SRAD operation, improving despeckling performance while maintaining the information available at an image's edges.
Abstract: The Speckle Reducing Anisotropic Diffusion (SRAD) filter used to despeckle ultrasound images performs better in homogeneous regions than in heterogeneous ones, resulting in a loss of information at the edges. The Extended SRAD filter despeckles as well while preserving edges better than the existing SRAD filter. The proposed Extended SRAD filter includes the intensities of four more neighboring pixels in addition to the four used in the SRAD filter operation, so a total of eight pixels are involved in determining the intensity of a single pixel. This improves despeckling performance by maintaining the information accessible at an image's edges. The proposed filter produces better Peak Signal-to-Noise Ratio, Root Mean Square Error, and Structural Similarity Index values for standard test images at noise variances of 0.3, 0.35, and 0.4. It also performs well in denoising breast ultrasound images at different noise levels.

Journal ArticleDOI
TL;DR: Refined UNet v4 is presented, an end-to-end edge-precise segmentation network for cloud and shadow detection, which is capable of retrieving regions of interest with relatively tight edges and potential shadow regions with ambiguous edges and its TensorFlow implementation of the bilateral approximation is relatively computationally efficient.
Abstract: Remote sensing images are usually contaminated by cloud and corresponding shadow regions, making cloud and shadow detection one of the essential prerequisites for processing and translation of remote sensing images. Edge-precise cloud and shadow segmentation remains challenging due to the inherently high-level semantic acquisition of current neural segmentation approaches. We therefore introduced the Refined UNet series to partially achieve edge-precise cloud and shadow detection, including the two-stage Refined UNet, v2 with a potentially efficient gray-scale guided Gaussian filter-based CRF, and v3 with an efficient multi-channel guided Gaussian filter-based CRF. However, it has been visually demonstrated that the locally linear kernel used in v2 and v3 is not sufficiently sensitive to potential edges in comparison with Refined UNet. Accordingly, we return to the investigation of an end-to-end UNet-CRF architecture with a Gaussian-form bilateral kernel and its relatively efficient approximation. In this paper, we present Refined UNet v4, an end-to-end edge-precise segmentation network for cloud and shadow detection, which is capable of retrieving regions of interest with relatively tight edges and potential shadow regions with ambiguous edges. Specifically, we inherit the UNet-CRF architecture exploited in the Refined UNet series, which concatenates a UNet backbone for coarsely locating cloud and shadow regions and an embedded CRF layer for refining edges. In particular, the bilateral grid-based approximation to the Gaussian-form bilateral kernel is applied to the bilateral message-passing step, in order to ensure the delineation of sufficiently tight edges and the retrieval of shadow regions with ambiguous edges. Our TensorFlow implementation of the bilateral approximation is relatively computationally efficient in comparison with Refined UNet, attributed to straightforward GPU acceleration. Extensive experiments on the Landsat 8 OLI dataset illustrate that our v4 can achieve edge-precise cloud and shadow segmentation and improve the retrieval of shadow regions, and they also confirm its computational efficiency.

Journal ArticleDOI
01 Feb 2022-Optik
TL;DR: In this article, a cellular automaton is used to identify corrupted pixels by measuring the harmonic and arithmetic mean values of four different states of the positions of five pixels in the neighborhood of the central pixel.


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a trilateral filtering algorithm for speckle reduction in video synthetic aperture radar (video SAR), which takes the traditional bilateral filter as its basic framework to fully exploit the similarities in gray level and spatial location of neighboring pixels.
Abstract: This letter proposes a trilateral filtering algorithm for speckle reduction in video synthetic aperture radar (video SAR). The novel filter takes the traditional bilateral filter as its basic framework, so as to fully exploit the similarities in gray level and spatial location of neighboring pixels. Moreover, the proposed trilateral filter additionally exploits the temporal correlation among adjacent image frames of the SAR videos and effectively reduces the interference of redundant information by using an adaptive similar-frame selection technique. Comprehensively considering the three-dimensional (3-D) correlation information across space, time, and gray scale, a triple-similarity kernel is specifically developed for video SAR despeckling. The proposed trilateral filter can effectively smooth the speckle noise while largely preserving the details of each image frame of the SAR videos. Experiments show that the proposed algorithm has better despeckling performance compared with other algorithms.
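
A toy per-pixel version of the triple-similarity weighting: spatial, gray-level, and temporal Gaussian kernels multiplied together. The letter's adaptive similar-frame selection is omitted, and all parameter values are placeholders.

```python
import numpy as np

def trilateral_pixel(frames, t, i, j, radius=2, sigma_s=2.0,
                     sigma_r=0.1, sigma_t=1.0):
    """Triple-similarity weighting for one pixel of frame t in a
    (T, H, W) stack: spatial, gray-level, and temporal kernels."""
    T, H, W = frames.shape
    num = den = 0.0
    c = frames[t, i, j]
    for dt in range(-radius, radius + 1):
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                tt, ii, jj = t + dt, i + di, j + dj
                if not (0 <= tt < T and 0 <= ii < H and 0 <= jj < W):
                    continue
                v = frames[tt, ii, jj]
                wgt = (np.exp(-(di**2 + dj**2) / (2 * sigma_s**2))   # spatial
                       * np.exp(-(v - c)**2 / (2 * sigma_r**2))      # gray level
                       * np.exp(-dt**2 / (2 * sigma_t**2)))          # temporal
                num += wgt * v
                den += wgt
    return num / den
```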

Journal ArticleDOI
TL;DR: This paper proposes a scale-adaptive texture filtering algorithm that outperforms the previous methods in eliminating the textures while preserving main structures and also has advantages in structure similarity and visual perception quality.
Abstract: The biggest challenge of texture filtering is to smooth the strong gradient textures while maintaining the weak structures, which is difficult to achieve with current methods. Based on this, we propose a scale-adaptive texture filtering algorithm in this paper. First, the four-directional detection with gradient information is proposed for structure measurement. Second, the spatial kernel scale for each pixel is obtained based on the structure information; the larger spatial kernel is for pixels in textural regions to enhance the smoothness, while the smaller spatial kernel is for pixels on structures to maintain the edges. Finally, we adopt the Fourier approximation of range kernel, which reduces computational complexity without compromising the filtering visual quality. By subjective and objective analysis, our method outperforms the previous methods in eliminating the textures while preserving main structures and also has advantages in structure similarity and visual perception quality.
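
The Fourier approximation of the range kernel mentioned above works because of shiftability: cos(nω(x − y)) = cos(nωx)cos(nωy) + sin(nωx)sin(nωy), so each cosine term of the range kernel reduces to ordinary Gaussian convolutions. A sketch under that standard construction (the coefficients below are computed numerically and this is not necessarily the paper's exact variant):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fourier_bilateral(img, sigma_s=3.0, sigma_r=0.1, n_terms=10):
    """Bilateral filtering with the Gaussian range kernel replaced by a
    truncated Fourier-cosine series: per term, two Gaussian blurs of
    modulated images, so the cost no longer depends on the range kernel."""
    img = img.astype(np.float64)
    T = img.max() - img.min() + 1e-12
    omega = np.pi / T
    # Numerical cosine coefficients of exp(-s^2 / (2 sigma_r^2)) on [-T, T].
    s = np.linspace(-T, T, 2048)
    g = np.exp(-s**2 / (2 * sigma_r**2))
    ds = s[1] - s[0]
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for n in range(n_terms):
        c = (g * np.cos(n * omega * s)).sum() * ds / T
        if n == 0:
            c *= 0.5
        cos_i, sin_i = np.cos(n * omega * img), np.sin(n * omega * img)
        num += c * (cos_i * gaussian_filter(img * cos_i, sigma_s)
                    + sin_i * gaussian_filter(img * sin_i, sigma_s))
        den += c * (cos_i * gaussian_filter(cos_i, sigma_s)
                    + sin_i * gaussian_filter(sin_i, sigma_s))
    return num / np.maximum(den, 1e-12)
```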

Journal ArticleDOI
TL;DR: In this article, a feature enhancement filter combined with a conventional denoising filter is proposed to remove noise while enhancing features; the enhancement filter is applied only to the feature areas of the mesh model.
Abstract: Mesh models produced by scanners are inevitably noisy; hence, removing the noise in scanned meshes is an essential task in services that use three-dimensional mesh models. Filtering-based methods are simple but have limitations in eliminating noise because they degrade the features of the mesh model while removing the noise. In this study, we design a feature enhancement filter that is combined with a conventional denoising filter to remove the noise while enhancing the features. The designed enhancement filter is applied only to the feature areas of the mesh model. Results from experiments on synthetic and real scanned models validate that the proposed method can restore falsely smoothed features when integrated with conventional filtering-based methods, and that it outperforms other state-of-the-art methods.

Proceedings ArticleDOI
01 Jun 2022
TL;DR: In this article, a bilateral video magnification filter (BVMF) was proposed to enhance the performance of Eulerian video magnification by performing temporal bandpass filtering via a Laplacian of Gaussian whose passband peaks at the target frequency.
Abstract: Eulerian video magnification (EVM) has progressed to magnify subtle motions with a target frequency even under the presence of large motions of objects. However, existing EVM methods often fail to produce desirable results in real videos due to (1) misextracting subtle motions with a non-target frequency and (2) collapsing results when large de/acceleration motions occur (e.g., objects suddenly start, stop, or change direction). To enhance EVM performance on real videos, this paper proposes a bilateral video magnification filter (BVMF) that offers simple yet robust temporal filtering. BVMF has two kernels: (I) one kernel performs temporal bandpass filtering via a Laplacian of Gaussian whose passband peaks at the target frequency with unity gain, and (II) the other kernel excludes large motions outside the magnitude of interest by Gaussian filtering on the intensity of the input signal via the Fourier shift theorem. Thus, BVMF extracts only subtle motions with the target frequency while excluding large motions outside the magnitude of interest, regardless of motion dynamics. In addition, BVMF runs the two kernels in the temporal and intensity domains simultaneously, as the bilateral filter does in the spatial and intensity domains. This simplifies implementation and, as a secondary effect, keeps memory usage low. Experiments conducted on synthetic and real videos show that BVMF outperforms state-of-the-art methods.

Journal ArticleDOI
TL;DR: In this article, an image sample truncation method based on fast adaptive truncation statistics is designed to adjust the photometric similarity weights, realizing adaptive adjustment of the spatial and gray-level standard deviations.
Abstract: Aiming at the shortcomings of traditional bilateral filtering in suppressing speckle noise in SAR ship images, especially strong speckle noise and the loss of image edge details, this paper proposes an improved bilateral filtering algorithm based on a fast adaptive threshold and a variable window, and applies it to suppress speckle noise in SAR ship images. Traditional bilateral filtering cannot effectively filter out strong speckle noise, yet SAR images contain strong speckle noise because of the nature of their imaging principle. To solve these problems, an image sample truncation method based on fast adaptive truncation statistics is designed to adjust the photometric similarity weights and realize adaptive adjustment of the spatial and gray-level standard deviations. After the local reference window is modified and truncated according to the local characteristics of the image, the adjusted combined similarity weights greatly reduce the impact of strong speckle noise, which otherwise behaves like strong impulse noise. In traditional bilateral filtering, enhancing the noise-smoothing effect requires large values of the geometric diffusion factor and the gray similarity diffusion factor, resulting in a loss of image detail. With the variable-window filtering method, when the extended local reference window covers a uniform region without edges, the window is enlarged to smooth the speckle noise more strongly; when the extended window contains details such as edges and textures, its size is not expanded, so image detail is maintained. This method can further smooth the speckle noise in uniform regions while preserving the edge details of the image. Finally, the adaptively truncated sample is used as the input of the bilateral filter. The image sample truncation method based on the fast adaptive threshold can effectively eliminate the strong speckle noise that degrades the accuracy of the photometric similarity weights, and the variable-window method greatly enhances smoothing near the edge areas of the image. The experimental results show that the improved adaptive bilateral filtering algorithm improves speckle noise removal by 16.06% compared with the traditional bilateral filtering algorithm on SAR ship images, and the preservation of image edges after filtering is improved by 5.41%. Compared with the original image, the filtered image shows a 1.2% improvement in structural similarity. The algorithm can effectively suppress speckle noise while retaining the edge and texture information of SAR ship images, which makes it highly practical.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed Retinex low-illumination image enhancement algorithm can effectively improve the visual quality of the image: contrast is improved significantly, image edge details are protected, and objective measures such as average gradient, information entropy, and peak signal-to-noise ratio are improved.
Abstract: Aiming at the problems of insufficient illumination and low contrast in low-illumination images, an improved Retinex low-illumination image enhancement algorithm is proposed. Firstly, the brightness component V of the original image is extracted in HSV color space and enhanced by Single-Scale Retinex (SSR) to obtain the reflection component. To address the edge problem caused by the estimation of the illumination component, a Gaussian-weighted bilateral filter is used as the filter function to maintain edge information. Then, the saturation component S is adaptively stretched to improve the color saturation. However, different low-illumination images have different contrast, and for some images the contrast enhancement is insufficient, so a global adaptive algorithm is introduced to correct the contrast and obtain the final image. In accordance with the logarithmic characteristics of human vision, it can adaptively enhance the contrast of different images without over-enhancement. Experimental results show that the proposed algorithm can effectively improve the visual quality of the image, with significantly improved contrast and protected edge details, and objective measures such as average gradient, information entropy, and peak signal-to-noise ratio are improved.
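
For context, plain SSR estimates illumination with a Gaussian surround and takes a log-domain difference, as in the sketch below; the paper's variant replaces the Gaussian smoothing with a Gaussian-weighted bilateral filter so that the illumination estimate respects edges.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(v, sigma=80.0):
    """Plain SSR on the V (brightness) channel: reflectance is log(V)
    minus the log of a smoothed illumination estimate."""
    v = v.astype(np.float64) + 1.0          # avoid log(0)
    illumination = gaussian_filter(v, sigma)
    return np.log(v) - np.log(illumination)
```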

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a window-adaptive Gaussian guided filtering method to smooth images while preserving edge features, which can be widely used in image denoising, background smoothing, detexturing, detail enhancement, and edge extraction.
Abstract: We present a window-adaptive Gaussian guided filtering method to smooth images while preserving edge features. The key to our algorithm is designing a similarity-aware filtering window based on density clustering to protect the edge structure and introducing a small-scale Gaussian spatial kernel as the input of the Gaussian range kernel to construct a guided filter for image smoothing. Specifically, we first apply Gaussian spatial kernel filtering with a small spatial bandwidth to yield a guidance input. Then, the Mahalanobis metric is used to calculate the distance between the center point and the other pixels in the box filtering window, from which we employ a center-density clustering algorithm to obtain a better non-box region (i.e., a similarity-aware window) in which each pixel is similar to the center point. Finally, based on the guidance input and the similarity-aware window, the guided Gaussian range filter performs better on image smoothing. Our proposed algorithm is simple and easy to implement. In particular, the window-aware technique effectively improves edge protection, and it can be widely used in image denoising, background smoothing, detexturing, detail enhancement, and edge extraction.
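
For comparison, the classical box-window guided filter (He et al.) that this method departs from is sketched below; the paper replaces the box window with a clustering-derived similarity-aware window and feeds a small-scale Gaussian-prefiltered image in as guidance.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Classical guided filter: fit a local linear model of src on
    guide in each box window, then average the coefficients."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x.astype(np.float64), size)
    mg, ms = mean(guide), mean(src)
    cov = mean(guide * src) - mg * ms     # local covariance of guide and src
    var = mean(guide * guide) - mg * mg   # local variance of guide
    a = cov / (var + eps)                 # eps regularizes flat regions
    b = ms - a * mg
    return mean(a) * guide + mean(b)
```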


Proceedings ArticleDOI
03 Oct 2022
TL;DR: Wang et al. as mentioned in this paper proposed a hyperspectral image classification method called IFBF based on information fusion and bilateral filtering, in which two different levels of information are given different weights and fused by decision fusion.
Abstract: In recent years, unlike previous hyperspectral image (HSI) classification methods that consider only spectral information or only spatial information, it has been increasingly recognized that information from different domains is equally important. Therefore, this paper proposes a hyperspectral image classification method called IFBF based on information fusion and bilateral filtering. The proposed IFBF method includes the following main steps. Firstly, morphological operations and superpixel segmentation are used to extract the pixel-level and superpixel-level feature information of the HSI, respectively. Secondly, the two levels of information are given different weights and combined by decision fusion. Then, considering that multi-level information fusion introduces redundancy, bilateral filtering is used to filter the fused image; finally, the LDM classification method is used for classification. Experimental results on real datasets show better performance than several well-known classification methods.

Book ChapterDOI
01 Jan 2022
TL;DR: In this paper, the input images are decomposed into frequency coefficients using the stationary wavelet transform; the low-frequency coefficients are then fused by CBF and the high-frequency coefficients by maximum fusion rules.
Abstract: Image fusion is the process of combining relevant information from a set of different medical images into a single image. Multimodal medical image fusion techniques are used to improve the quality of the fused image. The cross bilateral filter (CBF) is an edge-preserving filter that highlights edge information in the fused image for better analysis. In this paper, SWT- and cross-bilateral-filter-based medical image fusion is proposed. Firstly, the input images are decomposed into frequency coefficients using the stationary wavelet transform. Secondly, the low-frequency coefficients are fused by CBF and the high-frequency coefficients are fused with maximum fusion rules. Thirdly, the fused image is obtained by the reconstruction process. The proposed method is superior to the conventional CBF method in terms of visualization and quality, and is evaluated with quality parameters such as mean (M), standard deviation (STD), and entropy (E).
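
A skeleton of the described fusion flow, under stated assumptions: PyWavelets' stationary wavelet transform, a plain average standing in for the CBF-based low-band fusion, and a max-absolute rule for the high bands. Input dimensions must be divisible by 2**level.

```python
import numpy as np
import pywt

def swt_fusion(img_a, img_b, level=1):
    """SWT-domain fusion skeleton: decompose both images, fuse the
    low band (average as a placeholder for the CBF step) and the high
    bands (max-absolute rule), then reconstruct."""
    ca = pywt.swt2(img_a, 'db1', level=level)
    cb = pywt.swt2(img_b, 'db1', level=level)
    fused = []
    for (la, ha), (lb, hb) in zip(ca, cb):
        low = 0.5 * (la + lb)              # placeholder for CBF-based fusion
        high = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                     for x, y in zip(ha, hb))
        fused.append((low, high))
    return pywt.iswt2(fused, 'db1')
```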