
Showing papers on "Bilateral filter published in 2019"


Journal ArticleDOI
TL;DR: A bio-inspired optimization-based filtering scheme is considered for the MI denoising process: a Bilateral Filter (BF) whose Gaussian range and spatial weights are chosen using swarm-based optimization, namely the Dragonfly (DF) and Modified Firefly (MFF) algorithms.

129 citations


Journal ArticleDOI
TL;DR: The goal of this paper is to provide an apt choice of denoising method that suits CT and X-ray images, and to provide a review of the following important Poisson removal methods.
Abstract: In medical imaging systems, denoising is one of the important image processing tasks. Automatic noise removal will improve the quality of diagnosis and requires careful treatment of obtained imagery. Computed tomography (CT) and X-ray imaging systems use X radiation to capture images, and the results are usually corrupted by noise following a Poisson distribution. Due to the importance of Poisson noise removal in medical imaging, many state-of-the-art methods have been studied in the image processing literature. These include methods based on total variation (TV) regularization, wavelets, principal component analysis, machine learning, etc. In this work, we provide a review of the following important Poisson removal methods: the method based on the modified TV model, the adaptive TV method, the adaptive non-local total variation method, the method based on the higher-order natural image prior model, the Poisson reducing bilateral filter, the PURE-LET method, and the variance stabilizing transform-based methods. Our task focuses on methodology overview, accuracy, execution time, and advantage/disadvantage assessments. The goal of this paper is to provide an apt choice of denoising method that suits CT and X-ray images. The integration of several high-quality denoising methods in image processing software for medical imaging systems will always be an excellent option and will help further image analysis for computer-aided diagnosis.
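The variance-stabilizing transform route mentioned in this review can be illustrated with the classical Anscombe transform, which maps Poisson-distributed counts to approximately unit-variance Gaussian data so that ordinary Gaussian denoisers apply. A minimal numpy sketch (the inverse used here is the simple algebraic one, not the exact unbiased inverse used by serious VST pipelines):

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson counts to roughly unit-variance Gaussian data."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased at very low counts)."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
counts = rng.poisson(lam=20.0, size=100_000)   # synthetic Poisson "pixels"
stabilized = anscombe(counts)                  # variance is now approximately 1
```

After denoising in the stabilized domain, the inverse transform returns the estimate to the count domain.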

80 citations


Proceedings ArticleDOI
09 Jun 2019
TL;DR: In this paper, a novel road crack detection algorithm which is based on deep learning and adaptive image segmentation is proposed, and cracks are extracted from the road surface using an adaptive thresholding method.
Abstract: Crack is one of the most common road distresses which may pose road safety hazards. Generally, crack detection is performed by either certified inspectors or structural engineers. This task is, however, time-consuming, subjective and labor-intensive. In this paper, a novel road crack detection algorithm which is based on deep learning and adaptive image segmentation is proposed. Firstly, a deep convolutional neural network is trained to determine whether an image contains cracks or not. The images containing cracks are then smoothed using bilateral filtering, which greatly minimizes the number of noisy pixels. Finally, cracks are extracted from the road surface using an adaptive thresholding method. The experimental results illustrate that our network can classify images with an accuracy of 99.92%, and the cracks can be successfully extracted from the images using our proposed thresholding algorithm.
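The final thresholding stage of the pipeline above can be sketched with a local-mean adaptive threshold: a pixel is marked as crack if it is darker than its neighborhood by some margin. This is a toy numpy illustration of the idea, not the paper's implementation, and the function names (`box_mean`, `adaptive_threshold`) and parameters are ours:

```python
import numpy as np

def box_mean(img, k=5):
    """Local mean over a k x k window via shifted sums (edge-replicated)."""
    r = k // 2
    p = np.pad(img.astype(float), r, mode="edge")
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += p[r + dy : r + dy + img.shape[0],
                     r + dx : r + dx + img.shape[1]]
    return acc / (k * k)

def adaptive_threshold(img, k=5, delta=10.0):
    """Mark pixels that are darker than their local mean by more than delta
    (cracks are darker than the surrounding road surface)."""
    return img.astype(float) < box_mean(img, k) - delta

# Toy road patch: bright background with a dark vertical "crack".
road = np.full((32, 32), 150.0)
road[:, 15] = 40.0
mask = adaptive_threshold(road, k=5, delta=10.0)
```

In the paper, bilateral smoothing precedes this step precisely so that isolated noisy pixels do not trip the local threshold.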

79 citations


Journal ArticleDOI
TL;DR: This paper proposes a fast algorithm for adaptive bilateral filtering, whose complexity does not scale with the spatial filter width, and shows that by replacing the histogram with a polynomial and the finite range-space sum with an integral, it can approximate the filter using analytic functions.
Abstract: In the classical bilateral filter, a fixed Gaussian range kernel is used along with a spatial kernel for edge-preserving smoothing. We consider a generalization of this filter, the so-called adaptive bilateral filter, where the center and width of the Gaussian range kernel are allowed to change from pixel to pixel. Though this variant was originally proposed for sharpening and noise removal, it can also be used for other applications, such as artifact removal and texture filtering. Similar to the bilateral filter, the brute-force implementation of its adaptive counterpart requires intense computations. While several fast algorithms have been proposed in the literature for bilateral filtering, most of them work only with a fixed range kernel. In this paper, we propose a fast algorithm for adaptive bilateral filtering, whose complexity does not scale with the spatial filter width. This is based on the observation that the concerned filtering can be performed purely in range space using an appropriately defined local histogram. We show that by replacing the histogram with a polynomial and the finite range-space sum with an integral, we can approximate the filter using analytic functions. In particular, an efficient algorithm is derived using the following innovations: the polynomial is fitted by matching its moments to those of the target histogram (this is done using fast convolutions), and the analytic functions are recursively computed using integration by parts. Our algorithm can accelerate the brute-force implementation by at least 20×, without perceptible distortions in visual quality. We demonstrate the effectiveness of our algorithm for sharpening, JPEG deblocking, and texture filtering.
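For reference, the classical fixed-kernel bilateral filter that this paper generalizes can be written in a few lines. This is a brute-force numpy sketch whose per-pixel cost grows with the window size, exactly the scaling the paper's fast algorithm avoids (function name and parameter defaults are ours):

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Brute-force bilateral filter for a 2D float image in [0, 1]."""
    img = np.asarray(img, dtype=float)
    p = np.pad(img, radius, mode="edge")
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    H, W = img.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = p[radius + dy : radius + dy + H,
                        radius + dx : radius + dx + W]
            w = (np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))      # spatial kernel
                 * np.exp(-(shifted - img) ** 2 / (2 * sigma_r ** 2)))  # range kernel
            num += w * shifted
            den += w
    return num / den

# A sharp step edge survives filtering; each flat side is smoothed independently.
step = np.zeros((16, 16)); step[:, 8:] = 1.0
out = bilateral_filter(step, sigma_s=2.0, sigma_r=0.1)
```

The adaptive variant studied in the paper would let `sigma_r` (and the kernel center) vary per pixel.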

76 citations




Journal ArticleDOI
TL;DR: In this paper, the authors proposed a joint bilateral filter based on a nonlinear optimization that enforces smoothness of the signal while preserving variations that correspond to features of certain scales.
Abstract: The joint bilateral filter, which enables feature-preserving signal smoothing according to the structural information from a guidance, has been applied for various tasks in geometry processing. Existing methods either rely on a static guidance that may be inconsistent with the input and lead to unsatisfactory results, or a dynamic guidance that is automatically updated but sensitive to noises and outliers. Inspired by recent advances in image filtering, we propose a new geometry filtering technique called static/dynamic filter , which utilizes both static and dynamic guidances to achieve state-of-the-art results. The proposed filter is based on a nonlinear optimization that enforces smoothness of the signal while preserving variations that correspond to features of certain scales. We develop an efficient iterative solver for the problem, which unifies existing filters that are based on static or dynamic guidances. The filter can be applied to mesh face normals followed by vertex position update, to achieve scale-aware and feature-preserving filtering of mesh geometry. It also works well for other types of signals defined on mesh surfaces, such as texture colors. Extensive experimental results demonstrate the effectiveness of the proposed filter for various geometry processing applications such as mesh denoising, geometry feature enhancement, and texture color filtering.
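The joint bilateral filter underlying this work differs from the ordinary bilateral filter only in that the range weights come from a separate guidance signal rather than from the input itself. The paper applies this to mesh face normals; the image-domain numpy sketch below shows only the static-guidance idea, not the paper's optimization-based static/dynamic solver:

```python
import numpy as np

def joint_bilateral(signal, guide, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Smooth `signal` while preserving the discontinuities of `guide`."""
    signal = np.asarray(signal, dtype=float)
    guide = np.asarray(guide, dtype=float)
    ps = np.pad(signal, radius, mode="edge")
    pg = np.pad(guide, radius, mode="edge")
    num = np.zeros_like(signal); den = np.zeros_like(signal)
    H, W = signal.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            s = ps[radius + dy : radius + dy + H, radius + dx : radius + dx + W]
            g = pg[radius + dy : radius + dy + H, radius + dx : radius + dx + W]
            w = (np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                 * np.exp(-(g - guide) ** 2 / (2 * sigma_r ** 2)))  # range weight from guidance
            num += w * s
            den += w
    return num / den

# Noisy signal is smoothed everywhere except across the guide's edge.
rng = np.random.default_rng(1)
guide = np.zeros((16, 16)); guide[:, 8:] = 1.0
noisy = guide + 0.05 * rng.standard_normal(guide.shape)
out = joint_bilateral(noisy, guide)
```

A dynamic guidance, as discussed in the abstract, would recompute `guide` from the filtered result at each iteration.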

47 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed novel longitudinally guided super-resolution algorithm for neonatal MR images recovers clear structural details and outperforms state-of-the-art methods both qualitatively and quantitatively.
Abstract: Neonatal magnetic resonance (MR) images typically have low spatial resolution and insufficient tissue contrast. Interpolation methods are commonly used to upsample the images for the subsequent analysis. However, the resulting images are often blurry and susceptible to partial volume effects. In this paper, we propose a novel longitudinally guided super-resolution (SR) algorithm for neonatal images. This is motivated by the fact that anatomical structures evolve slowly and smoothly as the brain develops after birth. We propose a strategy involving longitudinal regularization, similar to bilateral filtering, in combination with low-rank and total variation constraints to solve the ill-posed inverse problem associated with image SR. Experimental results on neonatal MR images demonstrate that the proposed algorithm recovers clear structural details and outperforms state-of-the-art methods both qualitatively and quantitatively.

32 citations


Journal ArticleDOI
TL;DR: The experimental results demonstrate the performance of the proposed EGOA-optimized BF in terms of accuracy, computational time, maximum deviation, Peak Signal to Noise Ratio (PSNR), MSE, SSIM, and entropy values of MR images, compared with existing methods.
Abstract: Denoising of Magnetic Resonance (MR) images is a great challenge in digital image processing. In this paper, the impulse noise and Rician noise in medical MR images are removed using a Bilateral Filter (BF). A novel approach is presented in which an Enhanced Grasshopper Optimization Algorithm (EGOA) is used to optimize the BF parameters. To simulate noisy medical MR images, impulse and Rician noise with different variances are added. The EGOA searches over the window size and the spatial- and intensity-domain parameters to obtain the filter parameters optimally, with PSNR taken as the fitness value. After determining the optimal parameters, the proposed technique is evaluated on further MR images. To illustrate the importance of BF parameter selection, the results of the proposed denoising method are compared with previously used BFs tuned by the genetic algorithm (GA) and the gravitational search algorithm (GSA), using quality metrics such as signal-to-noise ratio (SNR), structural similarity index metric (SSIM), mean squared error (MSE), and PSNR. The outcome shows that the EGOA-optimized BF outperforms the earlier methods in both edge preservation and noise elimination for medical MR images.
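Whatever the search heuristic, the optimization loop described here ultimately scores candidate filter parameters by PSNR against a reference. A minimal sketch of that fitness evaluation, with a plain grid search standing in for the grasshopper algorithm and a 1D moving average standing in for the tuned bilateral filter (both stand-ins are ours, purely illustrative):

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """PSNR in dB; used as the fitness value for parameter search."""
    mse = float(np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2))
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def moving_average(x, k):
    """1D moving average with edge replication (stand-in for the tuned filter)."""
    r = k // 2
    p = np.pad(np.asarray(x, dtype=float), r, mode="edge")
    return np.convolve(p, np.ones(k) / k, mode="valid")

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.2 * rng.standard_normal(256)

# Grid search over the window size, maximizing the PSNR fitness.
best_k = max((3, 5, 7, 9, 11), key=lambda k: psnr(clean, moving_average(noisy, k)))
```

In the paper, the same fitness drives the EGOA's search over window size and the BF's spatial and intensity sigmas.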

29 citations


Journal ArticleDOI
TL;DR: A method for automatic extraction of impervious surfaces from HRRS images based on deep learning (AEISHIDL) is proposed to address the problem of complex urban surface situations; it achieves higher accuracy and a higher automation level than four other representative methods.

28 citations


Journal ArticleDOI
TL;DR: This 3D-CDBF may provide superior edge-preserving noise reduction of low-dose CT images compared to currently available MBIR.

27 citations


Journal ArticleDOI
TL;DR: A spatiotemporal feature-based adaptive NUC algorithm with bilateral total variation (BTV) regularization is presented; a random projection-based bilateral filter estimates the desired target image more accurately, yielding more details of the actual scene.
Abstract: Residual nonuniformity response, ghosting artifacts, and over-smoothing effects are the main defects of existing nonuniformity correction (NUC) methods. In this paper, a spatiotemporal feature-based adaptive NUC algorithm with bilateral total variation (BTV) regularization is presented. The primary contributions of the method are as follows: a BTV regularizer is introduced to eliminate the nonuniformity response and suppress ghosting effects; a spatiotemporal adaptive learning rate is presented to further accelerate convergence, remove ghosting artifacts, and avoid over-smoothing; and a random projection-based bilateral filter is proposed to estimate the desired target image more accurately, yielding more details of the actual scene. The experimental results validate that the proposed algorithm achieves outstanding performance on both simulated data and real-world sequences.

Journal ArticleDOI
TL;DR: An algorithm called sparse unmixing via variable splitting augmented Lagrangian and bilateral filter based TV (SUnSAL-BF-TV) is designed, under the alternating direction method of multipliers (ADMM) framework, and experimental results show that the algorithm is effective to unmix both simulated and real hyperspectral data sets.

Journal ArticleDOI
TL;DR: The proposed switching bilateral filter for depth maps from an RGB-D sensor is compared, in terms of 3D object reconstruction accuracy and speed, with common successful depth filtering algorithms.
Abstract: In this paper, we propose a novel switching bilateral filter for depth maps from an RGB-D sensor. The switching method works as follows: the bilateral filter is applied not to all pixels of the depth map, but only to those where noise and holes are possible, that is, at boundaries and sharp changes. With the help of computer simulation, we show that the proposed algorithm can process a depth map effectively and quickly. The presented results show an improvement in the accuracy of 3D object reconstruction using the proposed depth filtering. The performance of the proposed algorithm is compared, in terms of 3D object reconstruction accuracy and speed, with that of common successful depth filtering algorithms.
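The switching idea — filter only where noise or holes are likely, and leave smooth regions untouched — can be sketched in numpy. Here a 3×3 median stands in for the bilateral update at the selected pixels, and the hole/edge test is a simple zero-or-large-gradient mask (all stand-ins are ours):

```python
import numpy as np

def switching_filter(depth, grad_thresh=50.0):
    """Apply a 3x3 median only at hole pixels (zeros) and sharp depth changes;
    pixels in smooth regions pass through unchanged."""
    d = np.asarray(depth, dtype=float)
    gy, gx = np.gradient(d)
    mask = (d == 0) | (np.hypot(gy, gx) > grad_thresh)
    p = np.pad(d, 1, mode="edge")
    # Stack the 9 shifted neighborhoods and take a per-pixel median.
    stack = np.stack([p[1 + dy : 1 + dy + d.shape[0], 1 + dx : 1 + dx + d.shape[1]]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    med = np.median(stack, axis=0)
    return np.where(mask, med, d)

# A flat 1000 mm plane with one zero-depth "hole" is repaired in place.
plane = np.full((8, 8), 1000.0)
plane[4, 4] = 0.0
fixed = switching_filter(plane)
```

The payoff is speed: the expensive per-pixel filtering work is spent only on the small fraction of pixels that need it.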

Journal ArticleDOI
TL;DR: In this article, a narrow-seam identification algorithm is developed to achieve seam tracking in keyhole deep-penetration tungsten inert gas (TIG) welding, where welding images are captured by a high-dynamic-range camera and denoised by a bilateral filter based on a noise model analysis.
Abstract: A narrow-seam identification algorithm is developed to achieve seam tracking in keyhole deep-penetration tungsten inert gas (TIG) welding. The welding images are captured by a high-dynamic-range camera and denoised by a bilateral filter based on a noise model analysis. The arc area is extracted as a fixed region of interest. Then, an improved Otsu algorithm and a parabolic fitting algorithm are used to obtain the centerline of the arc. The seam area is extracted as an adaptive region of interest based on a proposed HOG+LBP algorithm. Thereafter, a continuous single-pixel edge contour is extracted by the Canny algorithm, and a proposed contour curvature evaluation method is used to obtain the corresponding pixel coordinates. After testing and analysis, the deviation can be reliably detected with an average measurement error within ±0.04 mm. As a result, the algorithm proposed in this study can accurately identify the deviation during keyhole deep-penetration TIG welding, and has application prospects in the narrow-seam welding field.

Journal ArticleDOI
TL;DR: A novel algorithm to identify and correct images affected by impulse noise is introduced and shows better results than other existing state-of-the-art algorithms.
Abstract: In this article, we introduce a novel algorithm to identify and correct images affected by impulse noise. The proposed technique is composed of two stages: identification of noisy pixels and their restoration. Empirical mode decomposition is used to identify the pixels affected by impulse noise, and an adaptive bilateral filter is used to restore those noisy pixels. The mean absolute difference of the intrinsic mode functions (IMFs) is compared with a two-dimensional cross-entropic threshold value in order to identify the pixels affected by impulse noise. In the next stage of processing, an adaptive bilateral filter is used to retain the fine details and remove the noisy components in the image. The performance of the proposed scheme is evaluated on different benchmark images using four measures: Peak Signal to Noise Ratio (PSNR), Image Enhancement Factor (IEF), Mean Structural Similarity Index (MSSIM), and Correlation Factor (CF). The simulation results show that the proposed algorithm performs better than other existing state-of-the-art algorithms.

Proceedings ArticleDOI
01 Sep 2019
TL;DR: Experiments show that the proposed constant-time bilateral filter that supports arbitrary range kernel designed via singular value decomposition not only runs faster but also achieves higher accuracy than the state-of-the-art methods.
Abstract: This paper presents a constant-time bilateral filter that supports an arbitrary range kernel designed via singular value decomposition (SVD). The bilateral filter (BF) suffers from high computational complexity in real-time processing due to its time-variant kernel. Although various accelerations for the BF have been proposed, most of them have not achieved both an arbitrary range kernel and tight computational complexity simultaneously. The proposed method supports an arbitrary range kernel but requires half the computational complexity of most state-of-the-art methods. Moreover, we present two implementation techniques well matched to the SVD approach: range fitting and a tiling strategy. Experiments show that, in the cases of major range kernels, the proposed method not only runs faster (200 FPS) but also achieves higher accuracy than the state-of-the-art methods.
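The SVD idea can be illustrated directly on the range-kernel matrix: for an 8-bit image, a Gaussian range kernel is a 256×256 matrix G[i, j] = exp(-(i-j)²/2σ_r²), and its rapidly decaying singular values are what make a low-rank (hence constant-time) approximation accurate. A numpy check of that decay (σ_r = 30 is an arbitrary illustrative value; this is not the paper's full pipeline):

```python
import numpy as np

# 256 x 256 Gaussian range-kernel matrix for an 8-bit image.
sigma_r = 30.0
levels = np.arange(256.0)
G = np.exp(-((levels[:, None] - levels[None, :]) ** 2) / (2 * sigma_r ** 2))

# Truncated SVD: keep only the K largest singular components.
U, s, Vt = np.linalg.svd(G)
K = 8
G_lowrank = (U[:, :K] * s[:K]) @ Vt[:K]   # rank-K approximation of G

rel_err = np.linalg.norm(G - G_lowrank) / np.linalg.norm(G)
```

Each retained singular component turns the bilateral filter into a product of intensity-transformed images processed by a spatial (constant-time) filter, so a small K means a small, fixed number of passes.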

Journal ArticleDOI
TL;DR: An image-adaptive algorithm to determine the optimum value of ξ, meant for customizing NLM for MR images, is proposed; the proposed NLM scheme is found to be superior to the Kuwahara, TV, AD, Bilateral, and SUSAN filters.

Journal ArticleDOI
TL;DR: Experiments were conducted on several pairs of multimodal images to verify the effectiveness and superiority of the proposed image fusion algorithm compared to the state-of-the-art methods.
Abstract: An iterative joint bilateral filter is used to obtain a natural weight map. Images from different modalities are merged by a weighted-sum rule in the spatial domain. Saliency maps are determined from the gradients of the pairwise raw images. By comparing the pairwise values of the saliency maps, a coarse weight map is attained to determine which pixel is preferred. Since such a coarse weight map obtained by pairwise comparison is not a natural weight map subjectively, i.e., it is inconsistent with the human visual system, the weight map is modified by using an iterative joint bilateral filter. With the iterative joint bilateral filter, the weight map becomes natural. We use the refined weight map to obtain the fused image and seamlessly merge images from different modalities effectively. Experiments were conducted on several pairs of multimodal images to verify the effectiveness and superiority of the proposed image fusion algorithm compared to the state-of-the-art methods.
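Once the weight map is fixed, the pipeline above reduces to a per-pixel weighted sum. A numpy sketch of the compare-then-smooth-then-blend steps, with plain box smoothing standing in for the iterative joint bilateral refinement (function names and toy inputs are ours):

```python
import numpy as np

def box_smooth(w, k=5):
    """Box smoothing with edge replication (stand-in for joint bilateral refinement)."""
    r = k // 2
    p = np.pad(w, r, mode="edge")
    out = np.zeros_like(w)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy : r + dy + w.shape[0], r + dx : r + dx + w.shape[1]]
    return out / (k * k)

def fuse(img_a, img_b, sal_a, sal_b):
    """Pairwise saliency comparison -> coarse binary weights -> smoothed
    weights -> weighted-sum fusion."""
    coarse = (sal_a >= sal_b).astype(float)   # 1 where img_a is preferred
    w = box_smooth(coarse)
    return w * img_a + (1.0 - w) * img_b

a = np.full((16, 16), 0.2); b = np.full((16, 16), 0.8)
sal_a = np.zeros((16, 16)); sal_a[:, :8] = 1.0   # img_a is salient on the left
fused = fuse(a, b, sal_a, np.full((16, 16), 0.5))
```

The paper's point is that refining `w` with a joint bilateral filter (guided by the source images) keeps the blend transitions aligned with real image edges, which the plain box smoothing here cannot do.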

Proceedings ArticleDOI
01 Sep 2019
TL;DR: A Fourier approximation of BPBF that can accelerate the filtering (by an order of magnitude) without loss in visual quality is proposed, and is competitive with recent algorithms in terms of visual perception and quality metrics.
Abstract: We consider the problem of enhancing images captured under low-light conditions. Several variational and filtering based solutions have been proposed for this problem that are based on the retinex model. The idea in retinex is to first estimate the illumination and reflectance from the observed image, enhance the illumination, and then combine it with the reflectance to get the rectified image. A variant of bilateral filtering, called bright-pass bilateral filtering (BPBF), can be used for illumination estimation. However, BPBF is computation intensive and takes up a significant amount of the processing time. Motivated by recent work, we propose a Fourier approximation of BPBF that can accelerate the filtering (by an order) without loss in visual quality. Experimental results demonstrate that our algorithm is sufficiently fast and can effectively enhance low-light images. In particular, our proposal is competitive with recent algorithms in terms of visual perception and quality metrics.
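The retinex decomposition described here can be sketched in numpy. A plain Gaussian blur stands in for the bright-pass bilateral filter when estimating illumination, so this shows the pipeline shape, not the paper's fast filter; the gamma value and function names are ours:

```python
import numpy as np

def gaussian_blur(img, sigma=3.0):
    """Separable Gaussian blur via 1D convolutions with edge replication."""
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    k = np.exp(-t * t / (2 * sigma * sigma)); k /= k.sum()
    p = np.pad(img, ((0, 0), (r, r)), mode="edge")
    img = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, p)
    p = np.pad(img, ((r, r), (0, 0)), mode="edge")
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, p)

def enhance_low_light(rgb, gamma=0.5, eps=1e-6):
    """Retinex-style enhancement: estimate illumination, gamma-brighten it,
    then recombine with the reflectance."""
    illum = gaussian_blur(rgb.max(axis=2))            # rough illumination estimate
    reflect = rgb / (illum[..., None] + eps)          # reflectance = image / illumination
    return np.clip(reflect * (illum ** gamma)[..., None], 0.0, 1.0)

dark = np.full((16, 16, 3), 0.04)                     # uniformly under-exposed image
bright = enhance_low_light(dark)
```

Replacing `gaussian_blur` with an edge-preserving (bright-pass bilateral) estimate is what prevents halos around strong edges in the real method.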

Proceedings ArticleDOI
01 Feb 2019
TL;DR: An algorithm based on adaptive bilateral filtering for selectively enhancing salient regions of an image is proposed; it does not suffer from gradient reversals and halo artifacts, and does not amplify fine details in non-salient regions that often appear as noise grains in the enhanced image.
Abstract: The use of visual saliency for perceptual enhancement of images has drawn significant attention. In this paper, we explore the idea of selectively enhancing salient regions of an image. Moreover, we develop an algorithm based on adaptive bilateral filtering for this purpose. In most of the filtering based methods, detail enhancement is performed by decomposing the image into base and detail layers; the detail layer is amplified and added back to the base layer to obtain the enhanced image. The decomposition is performed using edge-preserving smoothing such as bilateral filtering. The present novelty is that we use the saliency map to locally guide the smoothing (and the enhancement) action of the bilateral filter. The effectiveness of our proposal is demonstrated using visual results. In particular, our method does not suffer from gradient reversals and halo artifacts, and does not amplify fine details in non-salient regions that often appear as noise grains in the enhanced image. Moreover, if we choose to perform the filtering channelwise, then our method can be efficiently implemented using an existing fast algorithm.
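The base/detail decomposition at the heart of this method is compact: smooth to get a base layer, amplify the residual detail, and recombine, with the per-pixel gain driven by the saliency map. A numpy sketch with box smoothing as a hedged stand-in for the adaptive bilateral filter (gain formula and names are ours):

```python
import numpy as np

def box_smooth(img, k=5):
    """Box smoothing with edge replication (stand-in for adaptive bilateral smoothing)."""
    r = k // 2
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy : r + dy + img.shape[0], r + dx : r + dx + img.shape[1]]
    return out / (k * k)

def saliency_enhance(img, saliency, max_gain=3.0):
    """Amplify the detail layer only where the saliency map is high."""
    base = box_smooth(img)
    detail = img - base
    gain = 1.0 + (max_gain - 1.0) * np.clip(saliency, 0.0, 1.0)
    return base + gain * detail           # gain == 1 leaves non-salient regions as-is

rng = np.random.default_rng(3)
img = 0.5 + 0.05 * rng.standard_normal((16, 16))
sal = np.zeros((16, 16)); sal[:, 8:] = 1.0        # right half is salient
out = saliency_enhance(img, sal)
```

Using a bilateral (rather than box) smoother for the base layer is what avoids the gradient reversals and halos the abstract mentions.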

Journal ArticleDOI
TL;DR: A shadow detection algorithm based on PSO has been used to identify shadows in very high-resolution satellite images and the accuracy is validated using precision and recall parameters.
Abstract: The presence of shadows in satellite images is inevitable, and hence shadow detection and removal have become essential. In this paper, a shadow detection algorithm based on PSO is used to identify shadows in very high-resolution satellite images. The image is first preprocessed using a bilateral filter to eliminate noise, after which PSO-based shadow segmentation is used to segment the shadow regions. Canny edge detection is performed to identify the edges of the objects in the image. The results of the edge detection and segmentation are combined using a logical operator to generate the final shadow-segmented image with well-defined boundaries. The accuracy is validated using precision and recall parameters.

Journal ArticleDOI
TL;DR: A superpixel-based BF algorithm, SuperBF, is proposed, which divides an HSI into many homogeneous regions via superpixel segmentation and then separately filters each homogeneous region via BF; this approach keeps filtering within homogeneous regions, reduces the probability of generating mixed pixels, and thus improves the quality of the image feature extraction.
Abstract: Bilateral filtering (BF), which is an edge-preserving filtering (EPF) method, has been widely recognized as a simple and efficient approach for hyperspectral image (HSI) feature extraction. However, due to the limitation of spatial resolution and the influence of the complexity of land feature distribution in HSIs, updating the target pixel through weighted averaging of neighbouring pixels is prone to generating mixed pixels, i.e., the updated target pixel is mixed with the features of other land objects in addition to those of the target object, decreasing the quality of the image feature extraction. To address this problem, in this study, we propose a superpixel-based BF algorithm, SuperBF. This algorithm divides an HSI into many homogeneous regions via superpixel segmentation and then separately filters each homogeneous region via BF; this approach ensures that only pixels of the same region contribute during filtering, reduces the probability of generating mixed pixels, and thus improves the quality of the image feature extraction. To verify the effectiveness of the proposed method, a support vector machine (SVM) classifier is used to classify the extracted SuperBF features. Experiments on three commonly employed HSI datasets demonstrate that SuperBF is significantly superior to the traditional BF-based hyperspectral feature extraction method and some newer feature extraction methods.
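The SuperBF idea — segment first, then filter each homogeneous region independently so that no weights leak across region boundaries — can be sketched with a label map and per-region smoothing. In this toy numpy version, a two-block label map stands in for actual superpixels and a per-region mean stands in for the in-region bilateral weights (both stand-ins are ours):

```python
import numpy as np

def regionwise_smooth(img, labels):
    """Replace each pixel by a statistic of its own region only, so that
    no mixing occurs across region boundaries."""
    out = np.empty_like(img, dtype=float)
    for lab in np.unique(labels):
        m = labels == lab
        out[m] = img[m].mean()            # stand-in for in-region bilateral filtering
    return out

# Two "superpixels": left region dark, right region bright, both noisy.
rng = np.random.default_rng(4)
img = np.where(np.arange(16)[None, :] < 8, 0.2, 0.8) + 0.05 * rng.standard_normal((16, 16))
labels = (np.arange(16)[None, :] >= 8).astype(int) * np.ones((16, 1), dtype=int)
out = regionwise_smooth(img, labels)
```

A real pipeline would obtain `labels` from a superpixel algorithm (e.g. SLIC-style segmentation) and run a bilateral filter inside each region, but the boundary-respecting structure is the same.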

Journal ArticleDOI
TL;DR: The proposed method is shown to be competitive with state-of-the-art fast algorithms, and moreover, it comes with a theoretical guarantee on the approximation error.
Abstract: The bilateral and nonlocal means filters are instances of kernel-based filters that are popularly used in image processing. It was recently shown that fast and accurate bilateral filtering of grayscale images can be performed using a low-rank approximation of the kernel matrix. More specifically, based on the eigendecomposition of the kernel matrix, the overall filtering was approximated using spatial convolutions, for which efficient algorithms are available. Unfortunately, this technique cannot be scaled to high-dimensional data such as color and hyperspectral images. This is simply because one needs to compute/store a large matrix and perform its eigendecomposition in this case. We show how this problem can be solved using the Nyström method, which is generally used for approximating the eigendecomposition of large matrices. The resulting algorithm can also be used for nonlocal means filtering. We demonstrate the effectiveness of our proposal for bilateral and nonlocal means filtering of color and hyperspectral images. In particular, our method is shown to be competitive with state-of-the-art fast algorithms, and moreover, it comes with a theoretical guarantee on the approximation error.
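The Nyström step can be demonstrated on a small Gaussian kernel matrix: sample m landmark points, form C (all-to-landmark kernel) and W (landmark-to-landmark kernel), and approximate K ≈ C W⁺ Cᵀ without ever eigendecomposing the full matrix. A numpy check of the approximation quality at illustrative scale (the sizes, σ, and the `rcond` regularization of the pseudoinverse are our choices):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 1.0, size=200)              # e.g. pixel intensities
sigma = 0.2

def gauss(a, b):
    """Gaussian kernel matrix between two 1D point sets."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * sigma ** 2))

K = gauss(x, x)                                   # full 200 x 200 kernel matrix
landmarks = x[::10]                               # m = 20 sampled landmark points
C = gauss(x, landmarks)                           # 200 x 20
W = gauss(landmarks, landmarks)                   # 20 x 20
# Pseudoinverse with a cutoff for numerical stability (W is nearly low-rank).
K_nystrom = C @ np.linalg.pinv(W, rcond=1e-10) @ C.T

rel_err = np.linalg.norm(K - K_nystrom) / np.linalg.norm(K)
```

Only the small m×m matrix W needs factoring, which is why the approach scales to the high-dimensional (color, hyperspectral) range spaces discussed in the abstract.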

Journal ArticleDOI
TL;DR: A new denoising algorithm using the Fast Guided Filter and Discrete Wavelet Transform is proposed to remove Gaussian noise in an image, and its performance is observed to be superior to existing Gaussian noise removal techniques.
Abstract: A new denoising algorithm using the Fast Guided Filter and the Discrete Wavelet Transform is proposed to remove Gaussian noise in an image. The Fast Guided Filter removes some of the details in addition to the noise. These details are estimated accurately and combined with the filtered image to recover the final denoised image. The proposed algorithm is compared with other existing filtering techniques such as the Wiener filter, the Non-Local Means filter, and the bilateral filter, and the performance of this algorithm is observed to be superior to the above-mentioned Gaussian noise removal techniques. The resultant image obtained from this method is very good from both subjective and objective points of view. This algorithm has low computational complexity and preserves edges and other detail information in an image.
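For context, the guided filter used here follows the standard box-statistics recipe: fit a local linear model q = a·I + b per window from means and variances, then average the coefficients. A minimal self-guided numpy sketch of those standard equations (a = cov/(var + ε)), not the paper's full fast-guided-filter + DWT pipeline; `eps` and window size are our illustrative choices:

```python
import numpy as np

def box_mean(img, k=5):
    """Local mean over a k x k window with edge replication."""
    r = k // 2
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy : r + dy + img.shape[0], r + dx : r + dx + img.shape[1]]
    return out / (k * k)

def guided_filter(I, p, k=5, eps=4e-2):
    """Standard guided filter: the output is a locally-linear function of guide I."""
    mI, mp = box_mean(I, k), box_mean(p, k)
    a = (box_mean(I * p, k) - mI * mp) / (box_mean(I * I, k) - mI * mI + eps)
    b = mp - a * mI
    return box_mean(a, k) * I + box_mean(b, k)

rng = np.random.default_rng(6)
step = np.zeros((16, 16)); step[:, 8:] = 1.0
noisy = step + 0.1 * rng.standard_normal(step.shape)
out = guided_filter(noisy, noisy)     # self-guided: edge-preserving smoothing
```

With `eps` small relative to the local variance at edges, `a` stays near 1 there and the edge passes through, while flat noisy regions (variance below `eps`) are smoothed.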

Journal ArticleDOI
TL;DR: A comparative study between the proposed method and four state-of-the-art preprocessing algorithms attests that the proposed approach could yield a competitive performance for magnetic resonance brain glioblastoma tumor preprocessing.
Abstract: We investigate a new preprocessing approach for MRI glioblastoma brain tumors. Based on a combined denoising technique (bilateral filter) and contrast-enhancement technique (automatic contrast stretching based on image statistical information), the proposed approach offers competitive results while preserving the tumor region's edges and the original image's brightness. In order to evaluate the proposed approach's performance, a quantitative evaluation was carried out on the Multimodal Brain Tumor Segmentation (BraTS 2015) dataset. A comparative study between the proposed method and four state-of-the-art preprocessing algorithms attests that the proposed approach could yield a competitive performance for magnetic resonance brain glioblastoma tumor preprocessing. In fact, the result of this image preprocessing step is crucial for the efficiency of the remaining brain image processing steps, i.e., segmentation, classification, and reconstruction.

Journal ArticleDOI
TL;DR: A refined bilateral filtering algorithm based on adaptively trimmed statistics (ATS-RBF) is proposed for speckle reduction in SAR imagery, together with an alterable window-size scheme to enhance the speckle noise smoothing strength in homogeneous backgrounds.
Abstract: This paper proposes a refined bilateral filtering algorithm based on adaptively trimmed-statistics (ATS-RBF) for speckle reduction in SAR imagery. The new de-speckling method is based on the bilateral filtering method, where the similarities of gray levels and the spatial location of the neighboring pixels are exploited. However, the traditional bilateral filter is not effective to reduce the strong speckle, which is often presented as impulse noise. The ATS-RBF designs an adaptive sample trimming method to properly select the samples in the local reference window and the trimming depth used for sample trimming is automatically derived according to the homogeneity of the local reference window. Furthermore, an alterable window size-based scheme is proposed to enhance the speckle noise smoothing strength in homogeneous backgrounds. Finally, bilateral filtering is applied using the adaptively trimmed samples. The ATS-RBF has an excellent speckle noise smoothing performance while preserving the edges and the texture information of the SAR images. The experiments validate the effectiveness of the proposed method using TerraSAR-X images.

Proceedings ArticleDOI
01 Mar 2019
TL;DR: A method is proposed that utilizes the color and depth information provided by an RGB-D camera to improve occlusion handling, aiming to achieve accurate, smooth-edged virtual-real occlusion and enhance visual effects.
Abstract: In mixed reality, the user's sense of realism depends on the fusion quality of virtual objects and the real scene, and the key factor affecting this virtual-real fusion is whether the virtual and real objects in the synthetic scene have occlusion consistency. In this paper, a method is proposed that utilizes the color and depth information provided by an RGB-D camera to improve occlusion. In this method, running on the GPU, the ROI where the virtual object is located (the area where virtual-real occlusion occurs) is first extracted. Then the real-scene color image is introduced as the guide image for joint bilateral filtering of the depth image, repairing wrong edge information and the "black holes" of the depth image. The depth relationship between the virtual and real objects is then determined pixel by pixel to render the virtual-real occlusion blend. Finally, for the remaining "sawtooth artifacts", a delayed-coloring algorithm that blurs only the local boundary is applied, achieving accurate, smooth-edged virtual-real occlusion and enhanced visual effects.

Journal ArticleDOI
TL;DR: A novel method for fusing infrared and visible images using Gaussian smoothing and joint bilateral filtering iteration decomposition (MSD-Iteration), whose edge-preserving and scale-aware properties improve detail acquisition.
Abstract: Edge-preserving filters have been applied to Multi-Scale Decomposition (MSD) for the fusion of infrared and visible images. Traditional edge-preserving MSDs may fail to achieve satisfactory separation of structure from details, degrading fusion performance. To address this challenge, the authors propose a novel fusion of infrared and visible images with Gaussian smoothness and joint bilateral filtering iteration decomposition (MSD-Iteration). This method consists of three steps. First, source images are decomposed by the Gaussian smoothness and joint bilateral filtering iteration: fine-scale detail is removed with Gaussian filtering, edges and structure are extracted with joint bilateral filtering iteration, and details are obtained at multiple scales. The decomposition has edge-preserving and scale-aware properties that improve detail acquisition. Second, rules are designed to combine the layers. For the base layers, saliency maps are constructed by Laplacian and Gaussian low-pass filters to calculate initial weight maps, and a guided filter is then applied to determine the final weight maps for the combination. Meanwhile, regional average energy weighting, built on intensity deviation, is used to obtain decision maps at multiple scales for combining the detail layers. Third, the fused image is reconstructed from the combined layers. Extensive experiments evaluate MSD-Iteration, and the results validate the superiority of the authors' method.
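The first step, splitting each source image into a base layer and multi-scale detail layers, can be sketched with plain Gaussian smoothing. The paper additionally refines each level with joint bilateral filtering iterations for edge preservation; that refinement is omitted here for brevity, so this is only the skeleton of the decomposition:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian filtering with reflect padding."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode='reflect')
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 0, tmp)

def multi_scale_decompose(img, levels=2, sigma=1.0):
    """At each level, smoothing removes fine detail and the residual becomes
    a detail layer; the final smooth image is the base layer.
    Reconstruction is exact: base + sum(details) == img."""
    details, base = [], img.astype(float)
    for _ in range(levels):
        smooth = gaussian_blur(base, sigma)
        details.append(base - smooth)
        base = smooth
    return base, details
```

Because each detail layer is defined as a residual, the layers can later be recombined with per-layer weight maps (as the fusion rules in the paper do) and summed back into a single image without loss.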

Journal ArticleDOI
TL;DR: A multi-scale iterative framework for underwater image de-scattering, where a convolutional neural network is used to estimate the transmission map and is followed by an adaptive bilateral filter to refine the estimated results, and a strategy based on white balance is proposed to remove color casts of underwater images.
Abstract: Image restoration is a critical procedure for underwater images, which suffer from serious color deviation and edge blurring. Restoration can be divided into two stages: de-scattering and edge enhancement. First, we introduce a multi-scale iterative framework for underwater image de-scattering, in which a convolutional neural network estimates the transmission map and an adaptive bilateral filter then refines the estimate. Since no dataset is available for training the network, a dataset of 2000 underwater images is collected to produce the synthetic training data. Second, a strategy based on white balance is proposed to remove the color casts of underwater images. Finally, images are converted to a special transform domain for denoising and edge enhancement using the non-subsampled contourlet transform. Experimental results show that the proposed method significantly outperforms state-of-the-art methods both qualitatively and quantitatively.
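A common white-balance strategy for removing such color casts is the grey-world assumption: scale each channel so its mean matches the global mean. The paper's strategy may differ in detail; this is a minimal illustrative variant:

```python
import numpy as np

def gray_world_white_balance(img):
    """Grey-world white balance for an (H, W, 3) float image in [0, 1].
    Each channel is rescaled so its mean equals the overall mean, which
    suppresses the blue-green cast typical of underwater images.
    Illustrative sketch only, not the paper's exact strategy."""
    means = img.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / np.maximum(means, 1e-6)  # per-channel gains
    return np.clip(img * gain, 0.0, 1.0)
```

Underwater scenes attenuate red light fastest, so the red channel typically receives the largest gain.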

Journal ArticleDOI
TL;DR: The proposed method reduces speckle noise while preserving the spatial characteristics and contour information of the images, and is indicated for noise reduction at the noise levels commonly found in ultrasound equipment.
Abstract: Ultrasound imaging is widely used in diagnosis in modern medicine. Although widespread, it produces low-resolution images that are typically degraded by speckle noise, which calls for image-processing techniques to improve image quality and allow proper diagnostic use. This paper presents a new combination of S-median thresholding and the fast bilateral filter for speckle noise reduction. To validate the results, the proposed method was analyzed and compared with two other thresholding methods. The image quality improvement is evidenced by an increase of 14.13% in PSNR, while structural-feature and contour preservation increased by 4.96% in MSSIM and 0.70% in β, respectively. As the results show, the proposed method reduces speckle noise while preserving the spatial characteristics and contour information of the images, and is indicated for noise reduction at the noise levels commonly found in ultrasound equipment.
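Because speckle is multiplicative, a common trick in bilateral despeckling pipelines is to filter in the log domain, where the noise becomes additive. The sketch below shows only this bilateral stage under that assumption; the paper's method additionally pairs it with S-median wavelet thresholding, which is not reproduced here:

```python
import numpy as np

def log_bilateral_despeckle(img, radius=2, sigma_s=2.0, sigma_r=0.3):
    """Despeckle a positive-valued image: log-transform (multiplicative
    speckle -> additive noise), apply a plain bilateral filter, then map
    back with exp(). Illustrative sketch of the bilateral stage only."""
    log_img = np.log(np.maximum(img, 1e-6))
    h, w = img.shape
    pad = np.pad(log_img, radius, mode='reflect')
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    ws = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))  # spatial weights
    out = np.empty_like(log_img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # range weights in the log domain
            wr = np.exp(-(win - log_img[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = np.sum(ws * wr * win) / np.sum(ws * wr)
    return np.exp(out)
```

Filtering in the log domain keeps the range kernel meaningful: a fixed sigma_r then corresponds to a fixed relative (percentage) intensity difference, which matches how speckle scales with signal level.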