
Showing papers on "Bilateral filter" published in 2018


Proceedings ArticleDOI
18 Jun 2018
TL;DR: A normalized cut loss for weakly-supervised segmentation is proposed that evaluates network output with criteria standard in "shallow" segmentation: the cross-entropy term evaluates only the seeds with known labels, while the normalized cut term softly evaluates the consistency of all pixels.
Abstract: Most recent semantic segmentation methods train deep convolutional neural networks with fully annotated masks requiring pixel-accuracy for good quality training. Common weakly-supervised approaches generate full masks from partial input (e.g. scribbles or seeds) using standard interactive segmentation methods as preprocessing. But, errors in such masks result in poorer training since standard loss functions (e.g. cross-entropy) do not distinguish seeds from potentially mislabeled other pixels. Inspired by the general ideas in semi-supervised learning, we address these problems via a new principled loss function evaluating network output with criteria standard in "shallow" segmentation, e.g. normalized cut. Unlike prior work, the cross entropy part of our loss evaluates only seeds where labels are known while normalized cut softly evaluates consistency of all pixels. We focus on normalized cut loss where dense Gaussian kernel is efficiently implemented in linear time by fast Bilateral filtering. Our normalized cut loss approach to segmentation brings the quality of weakly-supervised training significantly closer to fully supervised methods.
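As an illustration of the loss described above, the sketch below evaluates a soft normalized-cut term from a network's softmax output using an explicit dense Gaussian (bilateral-style) affinity over pixel position and color. This is a minimal O(N^2) reference suitable only for tiny crops; the paper evaluates the same quantity in linear time via fast bilateral filtering, which is not reproduced here, and the kernel bandwidths are illustrative assumptions.

```python
import numpy as np

def normalized_cut_loss(soft_seg, image, sigma_xy=4.0, sigma_rgb=0.1):
    """Soft normalized-cut loss with an explicit dense Gaussian affinity
    over pixel position and color (O(N^2) memory: tiny crops only).

    soft_seg: (K, H, W) softmax output of the network.
    image:    (H, W, C) image with values in [0, 1].
    """
    K, H, W = soft_seg.shape
    N = H * W
    yy, xx = np.mgrid[0:H, 0:W]
    feat_xy = np.stack([yy, xx], axis=-1).reshape(N, 2) / sigma_xy
    feat_rgb = image.reshape(N, -1) / sigma_rgb
    d2 = ((feat_xy[:, None] - feat_xy[None]) ** 2).sum(-1) \
       + ((feat_rgb[:, None] - feat_rgb[None]) ** 2).sum(-1)
    W_aff = np.exp(-0.5 * d2)              # dense Gaussian (bilateral-style) affinity
    deg = W_aff.sum(axis=1)                # degree vector d = W 1
    S = soft_seg.reshape(K, N)
    loss = 0.0
    for k in range(K):
        s_k = S[k]
        assoc = s_k @ deg + 1e-8           # soft assoc(A_k, V)
        cut = s_k @ W_aff @ (1.0 - s_k)    # soft cut(A_k, V \ A_k)
        loss += cut / assoc
    return loss
```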

241 citations


Book ChapterDOI
05 Feb 2018
TL;DR: Experimental results demonstrate that the proposed shallow-water image enhancement method can achieve better perceptual quality, higher image information entropy, and less noise, compared to the state-of-the-art underwater image enhancement methods.
Abstract: Light absorption and scattering cause underwater images to exhibit low contrast, blur, and color cast. To solve these problems presented in various shallow-water images, we propose a simple but effective shallow-water image enhancement method - relative global histogram stretching (RGHS) based on adaptive parameter acquisition. The proposed method consists of two parts: contrast correction and color correction. The contrast correction in RGB color space firstly equalizes the G and B channels and then re-distributes each R-G-B channel histogram with dynamic parameters that relate to the intensity distribution of the original image and the wavelength attenuation of different colors under the water. Bilateral filtering is used to eliminate the effect of noise while still preserving valuable details of the shallow-water image and even enhancing local information of the image. The color correction is performed by stretching the ‘L’ component and modifying the ‘a’ and ‘b’ components in CIE-Lab color space. Experimental results demonstrate that the proposed method can achieve better perceptual quality, higher image information entropy, and less noise, compared to the state-of-the-art underwater image enhancement methods.
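The sketch below mirrors the overall RGHS pipeline (per-channel histogram stretching in RGB, bilateral filtering for noise suppression, and 'L' stretching in CIE-Lab), but it is only a simplified approximation: the percentile bounds and filter parameters are placeholders, not the paper's adaptive, wavelength-dependent parameter acquisition.

```python
import cv2
import numpy as np

def rghs_like_enhance(bgr):
    """Simplified RGHS-style enhancement (illustrative parameters only)."""
    out = np.empty_like(bgr)
    for c in range(3):                        # stretch each B, G, R channel
        ch = bgr[:, :, c].astype(np.float32)
        lo, hi = np.percentile(ch, (1, 99))   # stand-in for adaptive bounds
        out[:, :, c] = np.clip((ch - lo) * 255.0 / max(hi - lo, 1e-3), 0, 255)
    # bilateral filtering: suppress noise while preserving edges
    out = cv2.bilateralFilter(out.astype(np.uint8), 9, 40, 7)
    # stretch the 'L' component in CIE-Lab for color correction
    lab = cv2.cvtColor(out, cv2.COLOR_BGR2LAB)
    L = lab[:, :, 0].astype(np.float32)
    lo, hi = np.percentile(L, (1, 99))
    lab[:, :, 0] = np.clip((L - lo) * 255.0 / max(hi - lo, 1e-3), 0, 255)
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```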

148 citations


Journal ArticleDOI
TL;DR: The quantitative and qualitative results of experiments demonstrate that the proposed DDF performs well on contrast enhancement, structure preservation, and noise reduction, and its satisfactory computation time resulting from its simple implementation makes it suitable for extensive application.
Abstract: Enhancement and denoising have always been a pair of conflicting problems in image processing of computer vision. Inspired by an earlier dual-domain filter (DDF), this letter proposes a progressive DDF to simultaneously enhance and denoise low-quality optical remote-sensing images. The main procedure of the proposed enhancement filter has two parts. First, a bilateral filter is exploited as a guide filter to obtain high-contrast images, which are enhanced by a histogram modification method. Then, low-contrast useful structures are restored by a short-time Fourier transform and are enhanced using an adaptive correction parameter. Both the quantitative and qualitative results of experiments on synthetic and real-world low-quality remote-sensing images demonstrate that the proposed method performs well on contrast enhancement, structure preservation, and noise reduction. Moreover, its satisfactory computation time resulting from its simple implementation makes it suitable for extensive application.

84 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed image fusion method produces better results than traditional methods in terms of both visual perception and objective quantitative evaluation.

59 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of approximating a truncated Gaussian kernel using Fourier functions, where the truncation in question is the dynamic range of the input image and the error from such an approximation depends on the period, the number of sinusoids, and the coefficient of each sinusoid.
Abstract: We consider the problem of approximating a truncated Gaussian kernel using Fourier (trigonometric) functions. The computation-intensive bilateral filter can be expressed using fast convolutions by applying such an approximation to its range kernel, where the truncation in question is the dynamic range of the input image. The error from such an approximation depends on the period, the number of sinusoids, and the coefficient of each sinusoid. For a fixed period, we recently proposed a model for optimizing the coefficients using least squares fitting. Following the compressive bilateral filter (CBF), we demonstrate that the approximation can be improved by taking the period into account during the optimization. The accuracy of the resulting filtering is found to be at least as good as the CBF, but significantly better for certain cases. The proposed approximation can also be used for non-Gaussian kernels, and it comes with guarantees on the filtering accuracy.
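The following sketch shows the underlying shiftable-kernel idea: approximate the truncated Gaussian range kernel by a cosine series and evaluate the bilateral filter with a handful of spatial Gaussian convolutions. The cosine coefficients here come from a plain Fourier series on an assumed period, not from the least-squares or period optimization the paper proposes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fourier_bilateral(f, sigma_s=3.0, sigma_r=30.0, n_terms=10, period=None):
    """Bilateral filter via a Fourier (cosine-series) approximation of the
    Gaussian range kernel: a minimal sketch of the shiftable-kernel idea."""
    f = f.astype(np.float64)
    T = f.max() - f.min()
    P = 2.0 * T if period is None else period        # series period >= 2*T
    t = np.linspace(-P / 2, P / 2, 4096)
    phi = np.exp(-t**2 / (2 * sigma_r**2))            # truncated Gaussian range kernel
    num = np.zeros_like(f)
    den = np.zeros_like(f)
    for n in range(n_terms):
        w = 2 * np.pi * n / P
        cn = (1.0 if n == 0 else 2.0) / P * np.trapz(phi * np.cos(w * t), t)
        c_img, s_img = np.cos(w * f), np.sin(w * f)
        # cos(w(f(x)-f(y))) = cos(wf(x))cos(wf(y)) + sin(wf(x))sin(wf(y)),
        # so each term reduces to spatial Gaussian convolutions
        num += cn * (c_img * gaussian_filter(f * c_img, sigma_s)
                     + s_img * gaussian_filter(f * s_img, sigma_s))
        den += cn * (c_img * gaussian_filter(c_img, sigma_s)
                     + s_img * gaussian_filter(s_img, sigma_s))
    return num / np.maximum(den, 1e-12)
```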

42 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an adaptive bilateral filter, in which the center and width of the Gaussian range kernel are allowed to change from pixel to pixel; it can also be used for other applications such as artifact removal and texture filtering.
Abstract: In the classical bilateral filter, a fixed Gaussian range kernel is used along with a spatial kernel for edge-preserving smoothing. We consider a generalization of this filter, the so-called adaptive bilateral filter, where the center and width of the Gaussian range kernel are allowed to change from pixel to pixel. Though this variant was originally proposed for sharpening and noise removal, it can also be used for other applications such as artifact removal and texture filtering. Similar to the bilateral filter, the brute-force implementation of its adaptive counterpart requires intense computations. While several fast algorithms have been proposed in the literature for bilateral filtering, most of them work only with a fixed range kernel. In this paper, we propose a fast algorithm for adaptive bilateral filtering, whose complexity does not scale with the spatial filter width. This is based on the observation that the concerned filtering can be performed purely in range space using an appropriately defined local histogram. We show that by replacing the histogram with a polynomial and the finite range-space sum with an integral, we can approximate the filter using analytic functions. In particular, an efficient algorithm is derived using the following innovations: the polynomial is fitted by matching its moments to those of the target histogram (this is done using fast convolutions), and the analytic functions are recursively computed using integration-by-parts. Our algorithm can accelerate the brute-force implementation by at least 20×, without perceptible distortions in the visual quality. We demonstrate the effectiveness of our algorithm for sharpening, JPEG deblocking, and texture filtering.
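For reference, a brute-force version of the adaptive bilateral filter described above is easy to state: the range-kernel center theta(x) and width sigma_r(x) simply vary per pixel. The sketch below is that slow baseline (useful for checking outputs), not the paper's fast histogram and moment-matching algorithm; parameter values are illustrative.

```python
import numpy as np

def adaptive_bilateral_bruteforce(f, theta, sigma_r, sigma_s=2.0, radius=6):
    """Brute-force adaptive bilateral filter: theta and sigma_r are (H, W)
    maps giving the per-pixel center and width of the Gaussian range kernel."""
    H, W = f.shape
    fp = np.pad(f.astype(np.float64), radius, mode='reflect')
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))   # fixed spatial kernel
    out = np.empty((H, W), dtype=np.float64)
    for i in range(H):
        for j in range(W):
            patch = fp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - theta[i, j])**2 / (2 * sigma_r[i, j]**2))
            w = spatial * rng
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```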

42 citations


Journal ArticleDOI
TL;DR: This work describes the various kinds of noise present in ultrasound medical images and the filters used for noise removal, and compares the performance of the different filters based on their PSNR, MSE, and RMSE values.

35 citations


Journal ArticleDOI
TL;DR: The proposed hybrid algorithm improved the contrast, SNR and quantitative accuracy compared to Gaussian and bilateral approaches, and can be utilized as an alternative post-reconstruction filter in clinical PET/CT imaging.
Abstract: PET images commonly suffer from the high noise level and poor signal-to-noise ratio (SNR), thus adversely impacting lesion detectability and quantitative accuracy. In this work, a novel hybrid dual-domain PET denoising approach is proposed, which combines the advantages of both spatial and transform domain filtering to preserve image textures while minimizing quantification uncertainty. Spatial domain denoising techniques excel at preserving high-contrast patterns compared to transform domain filters, which perform well in recovering low-contrast details normally smoothed out by spatial domain filters. For spatial domain filtering, the non-local mean algorithm was chosen owing to its performance in denoising high-contrast features whereas multi-scale curvelet denoising was exploited for the transform domain owing to its capability to recover small details. The proposed hybrid method was compared to conventional post-reconstruction Gaussian and edge-preserving bilateral filters. Computer simulations of a thorax phantom containing three small lesions, experimental measurements using the Jaszczak phantom and clinical whole-body PET/CT studies were used to evaluate the performance of the proposed PET denoising technique. The proposed hybrid filter increased the SNR from 8.0 (non-filtered PET image) to 39.3 for small lesions in the computerized thorax phantom, while Gaussian and bilateral filtering led to SNRs of 23.3 and 24.4, respectively. For the experimental Jaszczak phantom, the contrast-to-noise ratio (CNR) improved from 10.84 when using Gaussian smoothing to 14.02 and 19.39 using the bilateral and the proposed hybrid filters, respectively. The clinical studies further demonstrated the superior performance of the hybrid method, yielding a quantification change (the original noisy OSEM image was used as reference in the absence of ground truth) in malignant lesions of -2.4% compared to -11.9% and -6.6% achieved using Gaussian and bilateral filters, respectively. In some cases, the visual difference between the bilateral and hybrid filtered images is not substantial; however, the improvement in CNR from 11.3 (OSEM) to 17.1 (bilateral) and 21.8 (hybrid filtering) demonstrates the overall gain achieved by the hybrid approach. The proposed hybrid algorithm improved the contrast, SNR and quantitative accuracy compared to Gaussian and bilateral approaches, and can be utilized as an alternative post-reconstruction filter in clinical PET/CT imaging.

33 citations


Journal ArticleDOI
TL;DR: Inspired by the guided image filter, this paper takes the position information of each point into account to derive a linear model between the guidance point cloud and the filtered point cloud, which successfully removes undesirable noise while offering better feature preservation.
Abstract: 3D point clouds have gained significant attention in recent years. However, raw point clouds captured by 3D sensors are unavoidably contaminated with noise, with detrimental effects on practical applications. Although many widely used point cloud filters, such as the normal-based bilateral filter, can produce results as expected, they require a long running time. Therefore, inspired by the guided image filter, this paper takes the position information of each point into account to derive a linear model with respect to the guidance point cloud and the filtered point cloud. Experimental results show that the proposed algorithm, which can successfully remove the undesirable noise while offering better performance in feature preservation, is significantly superior to several state-of-the-art methods, particularly in terms of efficiency.
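A minimal sketch of a guided-filter-style point-cloud smoother in the spirit described above is given below, with the noisy cloud serving as its own guidance and a per-point local linear model q = A p + b fitted over the k nearest neighbours. The neighbourhood size and regularization eps are assumptions, and this is not the authors' exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def guided_point_cloud_filter(points, k=30, eps=1e-4):
    """Guided-filter-style smoothing of an (N, 3) point cloud, using the
    noisy cloud itself as guidance: each point is moved by a local linear
    model fitted over its k nearest neighbours."""
    points = np.asarray(points, dtype=np.float64)
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    out = np.empty_like(points)
    I3 = np.eye(3)
    for i in range(points.shape[0]):
        nbrs = points[idx[i]]                        # (k, 3) neighbourhood
        mean = nbrs.mean(axis=0)
        cov = np.cov(nbrs.T, bias=True)              # 3x3 local covariance
        A = cov @ np.linalg.inv(cov + eps * I3)      # local linear coefficient
        b = mean - A @ mean
        out[i] = A @ points[i] + b                   # filtered position
    return out
```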

31 citations


Book ChapterDOI
08 Sep 2018
TL;DR: A novel method for MPI removal and depth refinement is proposed that exploits an ad-hoc deep learning architecture working on data from a multi-frequency ToF camera, using a Convolutional Neural Network made of two sub-networks.
Abstract: The removal of Multi-Path Interference (MPI) is one of the major open challenges in depth estimation with Time-of-Flight (ToF) cameras. In this paper we propose a novel method for MPI removal and depth refinement exploiting an ad-hoc deep learning architecture working on data from a multi-frequency ToF camera. In order to estimate the MPI we use a Convolutional Neural Network (CNN) made of two sub-networks: a coarse network analyzing the global structure of the data at a lower resolution and a fine one exploiting the output of the coarse network in order to remove the MPI while preserving the small details. The critical issue of the lack of ToF data with ground truth is solved by training the CNN with synthetic information. Finally, the residual zero-mean error is removed with an adaptive bilateral filter guided from a noise model for the camera. Experimental results prove the effectiveness of the proposed approach on both synthetic and real data.

Journal ArticleDOI
Yabo Fu, Shi Liu, H. Harold Li, Hua Li, Deshan Yang
TL;DR: An adaptive direction-dependent DVF regularization method has been developed to model the sliding tissue motion of the thoracic and abdominal organs and the overall motion estimation accuracy has been improved especially near the chest wall and abdominal wall where large organ sliding motion occurs.
Abstract: Purpose Isotropic smoothing has been conventionally used to regularize deformation vector fields (DVFs) in deformable image registration (DIR). However, the isotropic smoothing method enforces global smoothness and therefore cannot accurately model the complex tissue deformation, such as sliding motion at organ boundaries. To accurately model and estimate sliding tissue motion, an adaptive direction-dependent DVF regularization technique was developed in this study. Methods A DVF is computed and updated iteratively by minimizing the intensity differences between the images. In each iteration, the DVF was smoothed using an adaptive direction-dependent filter which enforces different motion propagation mechanisms along the primary normal and tangential directions of soft tissue local structures. A Gaussian isotropic filter was used along the normal direction while a bilateral filter was used along the tangential direction. To support large sliding motion, an automatic method was developed to delineate sliding surfaces, such as the chest wall and abdominal wall, where large organ sliding motion occurs. Parameters of the DVF regularization were adjusted adaptively based on a distance map to the sliding surfaces. The proposed method was tested on 14 4D-CT datasets at End-Inhalation (EI) and End-Exhalation (EE) phases of a respiratory cycle (10 public lung datasets, 3 upper abdomen datasets and 1 digital phantom dataset). TRE results of the 10 lung datasets were compared to results from six other existing DIR methods. For the three upper abdomen patient datasets, DIR accuracy was evaluated using manually defined landmarks across the lung and the abdomen. For the digital phantom dataset, DIR accuracy was evaluated using the ground truth displacement of a total 40,000 points that were evenly distributed across the phantom. Results The results showed that the sliding motion was preserved near the surface of chest wall and abdominal wall. The average target registration error (TRE) was reduced by 35.1% using the proposed method in comparison with five other methods on the 10 lung datasets. The sum of squared difference (SSD) after registration using the proposed method was 4.4% and 11.4% smaller than the SSDs obtained using isotropic smoothing and bilateral smoothing respectively. On the digital phantom, the average TRE was reduced by 59.6% near the surface of liver and by 53.7% near the surface of spleen using the proposed method. Contour propagation and Jacobian determinant analysis of DVF suggested an overall improved accuracy using the proposed method. Conclusion An adaptive direction-dependent DVF regularization method has been developed to model the sliding tissue motion of the thoracic and abdominal organs. The overall motion estimation accuracy has been improved especially near the chest wall and abdominal wall where large organ sliding motion occurs.

Journal ArticleDOI
TL;DR: The bilateral filter with locally adaptive spatial and radiometric parameters could selectively denoise homogeneous regions, without degrading the morphological edges.

Journal ArticleDOI
01 Nov 2018-Optik
TL;DR: Experimental results on color image filtering show that the proposed improved filter, which combines the advantages of the non-local means filter and the bilateral filter, performs better at reducing Gaussian noise and mixed noise.

Journal ArticleDOI
TL;DR: To the best of the authors' knowledge, this is the first scalable FPGA implementation of the bilateral filter; it requires just O(1) operations for any arbitrary filter width and is both scalable and reconfigurable.
Abstract: The bilateral filter is an edge-preserving smoother that has applications in image processing, computer vision, and computational photography. In the past, field-programmable gate array (FPGA) implementations of the filter have been proposed that can achieve high throughput using parallelization and pipelining. An inherent limitation with direct implementations is that their complexity scales as O(ω²) with the filter width ω. In this paper, we propose an FPGA implementation of a fast bilateral filter that requires just O(1) operations for any arbitrary ω. The attractive feature of the FPGA implementation is that it is both scalable and reconfigurable. To the best of our knowledge, this is the first scalable FPGA implementation of the bilateral filter. As an application, we use the FPGA implementation for image denoising.

Journal ArticleDOI
TL;DR: An innovative algorithm, named adaptive bilateral filter (ABF) + segment-based neural network, is presented; based on the fusion of deep convolutional neural networks (DCNNs) and the ABF, it improves the accuracy of building extraction from remote sensing images with high spatial resolution.
Abstract: Solving the problem of building extraction from remote sensing images, which have high spatial resolution, is considered to be one of the most challenging issues in the field of photogrammetry science and remote sensing. The purpose of this study is to present an innovative algorithm, named adaptive bilateral filter (ABF) + segment-based neural network, which is based on the fusion of deep convolutional neural networks (DCNNs) and the ABF and improves the accuracy of building extraction from remote sensing images with high spatial resolution. The building extraction process in this study includes the following steps: applying the ABF to the research data set and optimizing its parameters in order to improve the building outlines; designing and training the DCNN, SegNet, on the improved data set and optimizing it using an adaptive moment estimation algorithm; and assessing the impact of applying the ABF + SegNet algorithm on automatic building outline extraction. The proposed algorithm in this study is tested on three sets of remote sensing data from the cities of Potsdam, Indianapolis, and Tehran. The results indicate that the ABF + SegNet algorithm is able to extract the buildings from remote sensing color images with suitable accuracy.

Journal ArticleDOI
TL;DR: In this paper, a vectorization pattern with kernel subsampling was proposed for general finite impulse response image filtering, which was shown to be effective for various filters, such as Gaussian range filtering, bilateral filtering, adaptive Gaussian filtering, randomly-kernel-subsampled Gaussian range filtering, and randomly-kernel-subsampled bilateral filtering.
Abstract: This study examines vectorized programming for finite impulse response image filtering. Finite impulse response image filtering occupies a fundamental place in image processing, and has several approximated acceleration algorithms. However, no sophisticated method of acceleration exists for parameter adaptive filters or any other complex filter. For this case, simple subsampling with code optimization is a unique solution. Under the current Moore’s law, increases in central processing unit frequency have stopped. Moreover, the usage of more and more transistors is becoming insuperably complex due to power and thermal constraints. Most central processing units have multi-core architectures, complicated cache memories, and short vector processing units. This change has complicated vectorized programming. Therefore, we first organize vectorization patterns of vectorized programming to highlight the computing performance of central processing units by revisiting the general finite impulse response filtering. Furthermore, we propose a new vectorization pattern of vectorized programming and term it as loop vectorization. Moreover, these vectorization patterns mesh well with the acceleration method of subsampling of kernels for general finite impulse response filters. Experimental results reveal that the vectorization patterns are appropriate for general finite impulse response filtering. A new vectorization pattern with kernel subsampling is found to be effective for various filters. These include Gaussian range filtering, bilateral filtering, adaptive Gaussian filtering, randomly-kernel-subsampled Gaussian range filtering, randomly-kernel-subsampled bilateral filtering, and randomly-kernel-subsampled adaptive Gaussian filtering.
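As a concrete example of the kernel-subsampling idea mentioned above, the sketch below evaluates a bilateral filter using only a random subset of the kernel taps, with the per-tap loop vectorized over all pixels in NumPy. It illustrates the randomly-kernel-subsampled filters named in the abstract at a high level; the paper itself is about CPU/SIMD vectorization patterns, which plain NumPy only loosely mimics.

```python
import numpy as np

def subsampled_bilateral(f, sigma_s=3.0, sigma_r=25.0, radius=8,
                         n_samples=24, seed=0):
    """Randomly-kernel-subsampled bilateral filter: only a random subset of
    the (2r+1)^2 kernel taps is evaluated, trading accuracy for speed."""
    rng = np.random.default_rng(seed)
    offsets = np.array([(dy, dx) for dy in range(-radius, radius + 1)
                                 for dx in range(-radius, radius + 1)])
    offsets = offsets[rng.choice(len(offsets), n_samples, replace=False)]
    H, W = f.shape
    fp = np.pad(f.astype(np.float64), radius, mode='reflect')
    num = np.zeros((H, W))
    den = np.zeros((H, W))
    for dy, dx in offsets:                            # loop over sampled taps,
        shifted = fp[radius + dy:radius + dy + H,     # vectorized over pixels
                     radius + dx:radius + dx + W]
        w = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2)
                   - (shifted - f)**2 / (2 * sigma_r**2))
        num += w * shifted
        den += w
    return num / den
```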

Journal ArticleDOI
24 Sep 2018
TL;DR: A sinogram-based dynamic image guided filtering (SDIGF) algorithm for noise reduction that reduces statistical noise, while preserving the image edges, and improves the quantitative time activity curve accuracy, compared to the other algorithms.
Abstract: In order to improve the denoising performance in positron emission tomography (PET) images, various smoothing filters have been applied. Recent reports on PET denoising indicate that the signal-to-noise and contrast-to-noise ratios have been improved by sinogram-based predenoising. In this paper, we propose a sinogram-based dynamic image guided filtering (SDIGF) algorithm for noise reduction. The proposed algorithm uses a normalized static PET sinogram as the guidance image, acquiring the entire data from the start to end of the data acquisition. In the evaluation, dynamic PET simulation data and real dynamic data obtained from a living monkey brain using a [18F]fluoro-2-deoxy-D-glucose are used for comparing the SDIGF, image-based dynamic image guided filter, Gaussian filter (GF), and bilateral filter (BF). In the simulation data, the proposed algorithm improves the peak signal-to-noise ratio, as well as the structural similarity index, in all time frames, compared to the GF and BF algorithms. In the real data, the proposed algorithm subjectively reduces the statistical noise compared to the other algorithms. The SDIGF algorithm reduces statistical noise, while preserving the image edges, and improves the quantitative time activity curve accuracy, compared to the other algorithms. Thus, this paper demonstrates that the proposed algorithm is a simple and powerful denoising algorithm.
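The building block behind this approach is the classic guided image filter (He et al.), sketched below for 2D arrays: a normalized static sinogram would act as the guidance I, while each noisy dynamic frame is the input p. The window radius and eps are illustrative, and the sinogram normalization and framing steps of SDIGF are not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=4, eps=1e-3):
    """Classic guided image filter: a local linear model p ~ a*I + b is
    fitted in each box window and the coefficients are averaged."""
    I = np.asarray(I, dtype=np.float64)
    p = np.asarray(p, dtype=np.float64)
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)             # local linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)
```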

Journal ArticleDOI
TL;DR: A novel bilateral-filter-regularized L2 sparse NMF is proposed for HU, in which the L2-norm is utilized to improve the sparsity of the abundance matrix and NeNMF is used to solve the objective function in order to improve the convergence rate.
Abstract: Hyperspectral unmixing (HU) is one of the most active hyperspectral image (HSI) processing research fields, which aims to identify the materials and their corresponding proportions in each HSI pixel. The extensions of the nonnegative matrix factorization (NMF) have been proved effective for HU, which usually use the sparsity of abundances and the correlation between the pixels to alleviate the non-convex problem. However, the commonly used L1/2 sparse constraint will introduce additional local minima because of the non-convexity, and the correlation between the pixels is not fully utilized because of the separation of the spatial and structural information. To overcome these limitations, a novel bilateral filter regularized L2 sparse NMF is proposed for HU. Firstly, the L2-norm is utilized in order to improve the sparsity of the abundance matrix. Secondly, a bilateral filter regularizer is adopted so as to explore both the spatial information and the manifold structure of the abundance maps. In addition, NeNMF is used to solve the objective function in order to improve the convergence rate. The results of the simulated and real data experiments have demonstrated the advantage of the proposed method.

Journal ArticleDOI
TL;DR: In this paper, a CNN is trained end-to-end to estimate the transmission map and an adaptive bilateral filter is used to refine it; the output image is then transformed into the Hybrid Wavelets and Directional Filter Banks (HWD) domain for de-noising and edge enhancing.
Abstract: De-scattering and edge enhancing are critical procedures for underwater images which suffer from serious contrast attenuation, color deviation, and edge blurring. In this paper, a novel method is proposed to enhance underwater images. Firstly, a Convolutional Neural Network (CNN) is trained end-to-end to estimate the transmission map. Simultaneously, the adaptive bilateral filter is used to refine the transmission map. Secondly, a strategy based on the white balance is proposed to remove the color deviation. Laplace pyramid fusion is utilized to obtain the fusion result of the haze-free and color-corrected image. Finally, the output image is transformed into the Hybrid Wavelets and Directional Filter Banks (HWD) domain for de-noising and edge enhancing. The experimental results show that the proposed method can remove color distortion and improve the clarity of the underwater images. Objective and subjective results demonstrate that the proposed method outperforms several state-of-the-art methods in different circumstances.

Journal ArticleDOI
TL;DR: This work proposes the first pixel-based JND algorithm that incorporates a very important component of human vision, namely contrast sensitivity (CS), by measuring RMS contrast, forming a comprehensive pixel-domain model to efficiently estimate JND in the low-frequency regions.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: This paper demonstrates that the variance of MC sampling can be reduced using stratified sampling, i.e., by conditionally sampling the low and high frequency terms, and cuts down the pixelwise fluctuation of the filtered output as a result.
Abstract: Brute force implementation of the bilateral filter is known to be prohibitively slow. Several fast approximations have been proposed in the literature that are able to accelerate the filtering without perceptible loss of filtering quality. In particular, it has been shown that by replacing the range kernel (usually Gaussian) of the bilateral filter with its Fourier approximation, the filtering can be performed using fast convolutions. While an accurate Fourier approximation of a one-dimensional Gaussian (for grayscale filtering) can be obtained using N ~ 10 terms, a comparable approximation in three dimensions (for color filtering) requires N^3 Fourier terms, and a proportionate number of convolutions. As shown in prior work, we can overcome this problem using Monte Carlo (MC) sampling. In this paper, we demonstrate that the variance of MC sampling can be reduced using stratified sampling, i.e., by conditionally sampling the low and high frequency terms. Importantly, we are able to cut down the pixelwise fluctuation of the filtered output as a result. We analytically compute the variances of MC and stratified sampling, whereby the variance reduction is evident. The PSNR fluctuation of our approximation is also shown to be smaller than that of existing Monte-Carlo algorithms.
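The variance-reduction principle can be illustrated numerically with a toy example: estimating the sum of a decaying sequence of "Fourier-term" magnitudes either by plain uniform Monte Carlo sampling of term indices, or by stratifying the terms into low- and high-frequency groups and sampling each stratum separately. The values and the split point below are purely illustrative and do not reproduce the paper's color bilateral filter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a Fourier-term sum: magnitudes decay with frequency,
# as they do for a Gaussian range kernel (values are purely illustrative).
values = np.exp(-0.5 * (np.arange(64) / 8.0) ** 2)
target = values.sum()
N = len(values)

def plain_mc(n):
    """Estimate sum(values) by sampling term indices uniformly at random."""
    idx = rng.integers(0, N, size=n)
    return N * values[idx].mean()

def stratified_mc(n, split=16):
    """Stratified estimate: low-frequency terms [0, split) and the much
    smaller high-frequency terms [split, N) are sampled separately, with
    the budget divided in proportion to stratum size."""
    est = 0.0
    for lo, hi in [(0, split), (split, N)]:
        n_s = max(1, round(n * (hi - lo) / N))
        idx = rng.integers(lo, hi, size=n_s)
        est += (hi - lo) * values[idx].mean()
    return est

plain = [plain_mc(16) for _ in range(5000)]
strat = [stratified_mc(16) for _ in range(5000)]
print(target, np.var(plain), np.var(strat))   # stratified variance is smaller
```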

Journal ArticleDOI
TL;DR: This paper proposes a radically different approach to constructing fair B-spline surfaces, which consists of fitting a surface without a fairing term to capture sharp edges, smoothing the normal field of the constructed surface with feature preservation, and reconstructing the B- Spline surface from the smoothed normal field.
Abstract: Reverse engineering of 3D industrial objects such as automobiles and electric appliances is typically performed by fitting B-spline surfaces to scanned point cloud data with a fairing term to ensure smoothness, which often smooths out sharp features. This paper proposes a radically different approach to constructing fair B-spline surfaces, which consists of fitting a surface without a fairing term to capture sharp edges, smoothing the normal field of the constructed surface with feature preservation, and reconstructing the B-spline surface from the smoothed normal field. The core of our method is an image processing based feature-preserving normal field fairing technique. This is inspired by the success of many recent research works on the use of normal field for reconstructing mesh models, and makes use of the impressive simplicity and effectiveness of bilateral-like filtering for image denoising. In particular, our approach adaptively partitions the B-spline surface into a set of segments such that each segment has approximately uniform parameterization, generates an image from each segment in the parameter space whose pixel values are the normal vectors of the surface, and then applies a bilateral filter in the parameter domain to fair the normal field. As a result, our approach inherits the advantages of image bilateral filtering techniques and is able to effectively smooth B-spline surfaces with feature preservation as demonstrated by various examples.
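The core filtering step, bilateral-like smoothing of a normal field stored as an image over the surface's parameter domain, can be sketched as follows. The spatial and normal-similarity bandwidths are assumptions, and the B-spline segmentation, parameter-space image generation, and surface reconstruction stages are not shown.

```python
import numpy as np

def bilateral_normal_fairing(normals, sigma_s=2.0, sigma_n=0.3, radius=4):
    """Bilateral-like fairing of an (H, W, 3) unit-normal image: weights
    combine spatial distance and normal similarity, then re-unitize."""
    normals = np.asarray(normals, dtype=np.float64)
    H, W, _ = normals.shape
    npad = np.pad(normals, ((radius, radius), (radius, radius), (0, 0)),
                  mode='edge')
    num = np.zeros_like(normals)
    den = np.zeros((H, W, 1))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = npad[radius + dy:radius + dy + H,
                           radius + dx:radius + dx + W]
            diff = np.sum((shifted - normals) ** 2, axis=2, keepdims=True)
            w = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2)
                       - diff / (2 * sigma_n**2))
            num += w * shifted
            den += w
    faired = num / den
    norm = np.maximum(np.linalg.norm(faired, axis=2, keepdims=True), 1e-12)
    return faired / norm                      # re-unitize the faired normals
```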

Journal ArticleDOI
TL;DR: Results on both simulated and real datasets validate the superiority of MDNF over state-of-the-art methods, and it further improves the false alarm rate by 5.5% at maximum detection performance.
Abstract: We present a novel neighborhood filter (NF)-based clutter removal algorithm for ground-penetrating radar (GPR) images. Since the NF uses only the range kernel of the well-known bilateral filter, it is less complex and makes the clutter removal method appropriate for real-time implementations. We extend the NF to the multiscale-multidirectional case, MDNF, and then decompose the GPR image into approximation and detail subbands to capture the intrinsic geometrical structures that contain both target and clutter information. After directional decomposition, the clutter is eliminated by keeping the diagonal information for the target component. Finally, the inverse transform is applied to the remaining subbands to reconstruct the clutter-free GPR image. Results on both simulated and real datasets validate the superiority of MDNF over state-of-the-art methods, and it further improves the false alarm rate by 5.5% at maximum detection performance.
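For context, the neighborhood filter that MDNF builds on is simply a bilateral filter stripped of its spatial kernel: every pixel in the window is weighted purely by intensity similarity. A minimal sketch (with illustrative parameters, and without the multiscale-multidirectional decomposition) is given below.

```python
import numpy as np

def neighborhood_filter(f, sigma_r=20.0, radius=5):
    """Neighborhood filter: all pixels in a (2r+1)x(2r+1) window are
    weighted only by the range (intensity-similarity) kernel."""
    H, W = f.shape
    fp = np.pad(f.astype(np.float64), radius, mode='reflect')
    num = np.zeros((H, W))
    den = np.zeros((H, W))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = fp[radius + dy:radius + dy + H,
                         radius + dx:radius + dx + W]
            w = np.exp(-(shifted - f)**2 / (2 * sigma_r**2))  # range kernel only
            num += w * shifted
            den += w
    return num / den
```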

Journal ArticleDOI
TL;DR: A new binarization algorithm for historical documents is presented that filters the image with a bilateral filter, applies an adaptive, Otsu-inspired thresholding to each RGB channel, and classifies the binarized images to decide which of the RGB components best preserved the document information in the foreground, showing very good results.
Abstract: Monochromatic documents claim for much less computer bandwidth for network transmission and storage space than their color or even grayscale equivalent. The binarization of historical documents is far more complex than recent ones as paper aging, color, texture, translucidity, stains, back-to-front interference, kind and color of ink used in handwriting, printing process, digitalization process, etc. are some of the factors that affect binarization. This article presents a new binarization algorithm for historical documents. The new global filter proposed is performed in four steps: filtering the image using a bilateral filter, splitting image into the RGB components, decision-making for each RGB channel based on an adaptive binarization method inspired by Otsu’s method with a choice of the threshold level, and classification of the binarized images to decide which of the RGB components best preserved the document information in the foreground. The quantitative and qualitative assessment made with 23 binarization algorithms in three sets of “real world” documents showed very good results.
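A rough sketch of the first three steps (bilateral smoothing, per-RGB-channel Otsu-style thresholding, and a selection among the three binarized channels) is shown below. The selection rule here is a crude stand-in, not the paper's classifier, and the adaptive threshold choice is replaced by plain Otsu thresholding.

```python
import cv2
import numpy as np

def binarize_historical(bgr):
    """Sketch: bilateral smoothing, per-channel Otsu binarization, then a
    simple heuristic choice among the three binarized channels."""
    smoothed = cv2.bilateralFilter(bgr, 9, 50, 9)
    candidates = []
    for c in range(3):                                 # B, G, R channels
        ch = smoothed[:, :, c]
        _, binary = cv2.threshold(ch, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        candidates.append(binary)
    # crude selection: keep the channel with the most balanced split
    # (a stand-in for the paper's classification step)
    scores = [abs(np.mean(b) - 127.5) for b in candidates]
    return candidates[int(np.argmin(scores))]
```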

Journal ArticleDOI
TL;DR: The proposed stochastic filter implementations are considerably faster than the conventional and existing “fast” implementations for high dimensional image data.
Abstract: We propose stochastic bilateral filter (SBF) and stochastic non-local means (SNLM), efficient randomized processes that agree with conventional bilateral filter (BF) and non-local means (NLM) on average, respectively. By Monte-Carlo, we repeat this process a few times with different random instantiations so that they can be averaged to attain the correct BF/NLM output. The computational bottleneck of the SBF and SNLM are constant with respect to the window size and the color dimension of the edge image, meaning the execution times for color and hyperspectral images are nearly the same as for the grayscale images. In addition, for SNLM, the complexity is constant with respect to the block size. The proposed stochastic filter implementations are considerably faster than the conventional and existing "fast" implementations for high dimensional image data.

Journal ArticleDOI
01 Apr 2018-Optik
TL;DR: The proposed method emphasizes the texture and artifacts in an image while removing noise efficiently through a weighted bilateral filter and curvelet transforms, and has superior performance compared to existing state-of-the-art methods pertaining to Gaussian noise.

Journal ArticleDOI
TL;DR: This self-regulating trilateral filter outperformed many state-of-the-art noise reduction methods both qualitatively and quantitatively and has potential in many brain MR image processing applications that require expedition and automation.
Abstract: Objective: Noise reduction in brain magnetic resonance (MR) images has been a challenging and demanding task. This study develops a new trilateral filter that aims to achieve robust and efficient image restoration. Methods: Extended from the bilateral filter, the proposed algorithm contains one additional intensity similarity function, which compensates for the unique characteristics of noise in brain MR images. An entropy function adaptive to intensity variations is introduced to regulate the contributions of the weighting components. To hasten the computation, parallel computing based on the graphics processing unit (GPU) strategy is explored with emphasis on memory allocations and thread distributions. To automate the filtration, image texture feature analysis associated with machine learning is investigated. Among the 98 candidate features, the sequential forward floating selection scheme is employed to acquire the optimal texture features for regularization. Subsequently, a two-stage classifier that consists of support vector machines and artificial neural networks is established to predict the filter parameters for automation. Results: A speedup gain of 757 was reached to process an entire MR image volume of 256 × 256 × 256 pixels, which completed within 0.5 s. Automatic restoration results revealed high accuracy with an ensemble average relative error of 0.53 ± 0.85% in terms of the peak signal-to-noise ratio. Conclusion: This self-regulating trilateral filter outperformed many state-of-the-art noise reduction methods both qualitatively and quantitatively. Significance: We believe that this new image restoration algorithm has potential in many brain MR image processing applications that require expedition and automation.

Journal ArticleDOI
TL;DR: The experimental results show that the bilateral-filter-based Laws' mask feature extraction technique provides better classification accuracy for all four databases across various combinations of the bilateral filter range and domain parameters.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed new 3D reconstruction methods increase the robustness and accuracy of geometric models reconstructed from a consumer-grade depth camera.
Abstract: A robust approach to elaborately reconstruct the indoor scene with a consumer depth camera is proposed in this paper. In order to ensure the accuracy and completeness of 3D scene model reconstructed from a freely moving camera, this paper proposes new 3D reconstruction methods, as follows: 1) Depth images are processed with a depth adaptive bilateral filter to effectively improve the image quality; 2) A local-to-global registration with the content-based segmentation is performed, which is more reliable and robust to reduce the visual odometry drifts and registration errors; 3) An adaptive weighted volumetric method is used to fuse the registered data into a global model with sufficient geometrical details. Experimental results demonstrate that our approach increases the robustness and accuracy of the geometric models which were reconstructed from a consumer-grade depth camera.