
Showing papers on "Bilateral filter published in 2016"


Journal Article
TL;DR: In this paper, the first stage of many stereo algorithms, matching cost computation, is addressed by learning a similarity measure on small image patches using a convolutional neural network, and then a series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter.
Abstract: We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.
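The median and bilateral filters in the post-processing chain above are standard image filters; a minimal sketch of that final smoothing stage, using OpenCV and assuming a placeholder disparity map and illustrative parameter values (not the authors' settings), might look like this:

import numpy as np
import cv2

# Placeholder disparity map standing in for the CNN-initialized, cost-aggregated result.
disparity = (np.random.rand(240, 320) * 64.0).astype(np.float32)

smoothed = cv2.medianBlur(disparity, 5)  # median filtering step
# Bilateral filtering: smooths the disparity map while respecting large
# disparity jumps (object boundaries).
refined = cv2.bilateralFilter(smoothed, d=9, sigmaColor=4.0, sigmaSpace=5.0)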

860 citations


Journal ArticleDOI
TL;DR: The proposed hybrid-MSD transform better captures important multi-scale IR spectral features and separates fine-scale texture details from large-scale edge features; experiments prove the superiority of the proposed method compared with conventional MSD-based fusion methods.

275 citations


Journal ArticleDOI
TL;DR: A novel framework for single depth image super-resolution is proposed, guided by a high-resolution edge map constructed from the edges of the low-resolution depth image through a Markov random field optimization in a patch-synthesis-based manner.
Abstract: Recently, consumer depth cameras have gained significant popularity due to their affordable cost. However, the limited resolution and the quality of the depth map generated by these cameras are still problematic for several applications. In this paper, a novel framework for single depth image super-resolution is proposed. In our framework, the upscaling of a single depth image is guided by a high-resolution edge map, which is constructed from the edges of the low-resolution depth image through a Markov random field optimization in a patch-synthesis-based manner. We also explore the self-similarity of patches during the edge construction stage, when limited training data are available. With the guidance of the high-resolution edge map, we propose upsampling the depth image to high resolution through a modified joint bilateral filter. The edge-based guidance not only helps avoid artifacts introduced by direct texture prediction, but also reduces jagged artifacts and preserves sharp edges. Experimental results demonstrate the effectiveness of our method both qualitatively and quantitatively compared with the state-of-the-art methods.
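A minimal sketch of joint bilateral upsampling in the spirit of the modified joint bilateral filter described above: the range weight is computed on a high-resolution guide (here a generic guide image standing in for the constructed edge map), while depth values are drawn from the low-resolution input. The scale factor, sigmas and window radius are illustrative assumptions, not the paper's settings.

import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=2.0, sigma_r=0.1, radius=4):
    H, W = guide_hr.shape
    out = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yh, xh = y + dy, x + dx                                   # high-resolution neighbour
                    yl, xl = int(round(yh / scale)), int(round(xh / scale))   # corresponding low-resolution sample
                    if 0 <= yh < H and 0 <= xh < W and 0 <= yl < depth_lr.shape[0] and 0 <= xl < depth_lr.shape[1]:
                        ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))   # spatial weight
                        dg = guide_hr[y, x] - guide_hr[yh, xh]                   # guide (range) difference
                        wr = np.exp(-(dg * dg) / (2 * sigma_r ** 2))
                        acc += ws * wr * depth_lr[yl, xl]
                        norm += ws * wr
            out[y, x] = acc / max(norm, 1e-8)
    return out

# Toy usage: 4x upsampling of a low-resolution depth map with a high-resolution guide.
depth_lr = np.random.rand(16, 16)
guide_hr = np.random.rand(64, 64)
depth_hr = joint_bilateral_upsample(depth_lr, guide_hr, scale=4)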

145 citations


Proceedings ArticleDOI
01 Dec 2016
TL;DR: In this paper, a method for transferring the RGB color spectrum to near-infrared (NIR) images using deep multi-scale convolutional neural networks is proposed, which does not require user guidance or a reference image database in the recall phase to produce images with a natural appearance.
Abstract: This paper proposes a method for transferring the RGB color spectrum to near-infrared (NIR) images using deep multi-scale convolutional neural networks. A direct and integrated transfer between NIR and RGB pixels is trained. The trained model does not require any user guidance or a reference image database in the recall phase to produce images with a natural appearance. To preserve the rich details of the NIR image, its high frequency features are transferred to the estimated RGB image. The presented approach is trained and evaluated on a real-world dataset containing a large amount of road scene images in summer. The dataset was captured by a multi-CCD NIR/RGB camera, which ensures a perfect pixel to pixel registration.

111 citations


Book ChapterDOI
08 Oct 2016
TL;DR: In this article, a bilateral inception module is proposed to propagate information between super pixels while respecting image edges, thus using the structured information of the problem for improved results, and the layer recovers a full resolution segmentation result from the lower resolution solution of a CNN.
Abstract: In this paper we propose a CNN architecture for semantic image segmentation. We introduce a new “bilateral inception” module that can be inserted in existing CNN architectures and performs bilateral filtering, at multiple feature-scales, between superpixels in an image. The feature spaces for bilateral filtering and other parameters of the module are learned end-to-end using standard backpropagation techniques. The bilateral inception module addresses two issues that arise with general CNN segmentation architectures. First, this module propagates information between (super) pixels while respecting image edges, thus using the structured information of the problem for improved results. Second, the layer recovers a full resolution segmentation result from the lower resolution solution of a CNN. In the experiments, we modify several existing CNN architectures by inserting our inception module between the last CNN (\(1\times 1\) convolution) layers. Empirical results on three different datasets show reliable improvements not only in comparison to the baseline networks, but also in comparison to several dense-pixel prediction techniques such as CRFs, while being competitive in time.

108 citations


Journal ArticleDOI
TL;DR: This letter proposes a novel adaptive fuzzy switching weighted mean filter to remove salt-and-pepper (SAP) noise and shows that, compared to some state-of-the-art algorithms, it keeps more texture details and is better at removing SAP noise and suppressing artifacts.
Abstract: An image degraded by noise is a common phenomenon. In this letter, we propose a novel adaptive fuzzy switching weighted mean filter to remove salt-and-pepper (SAP) noise. The process of denoising includes two stages: noise detection and noise elimination. In the first stage, pixels in a corrupted image are classified into two categories: original pixels and possible noise pixels. For the latter, we compute the maximum absolute luminance difference of processed pixels next to possible noise pixels to classify them into three categories: uncorrupted pixels, lightly corrupted pixels, and heavily corrupted pixels. In the second stage, under the assumption that pixels at a short distance tend to have similar values, the distance-relevant weighted mean of the original pixels in the neighborhood of a noise pixel is computed. An uncorrupted pixel is left unchanged; a lightly corrupted pixel is replaced with the weighted average of the weighted mean and its own value; and a heavily corrupted pixel is replaced by the weighted mean. Experimental results show that, compared to some state-of-the-art algorithms, our method keeps more texture details and is better at removing SAP noise and suppressing artifacts.
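A simplified sketch of the two-stage scheme above: extreme-valued pixels are flagged as noise candidates, and each flagged pixel is replaced by a distance-weighted mean of the uncorrupted pixels in its neighbourhood. The fuzzy three-way classification and the blending with a pixel's own value are collapsed to a hard switch here; the thresholds and weights are illustrative only.

import numpy as np

def switching_weighted_mean(img, radius=2, sigma=1.5):
    noisy = (img == 0) | (img == 255)          # stage 1: candidate salt-and-pepper pixels
    out = img.astype(np.float64).copy()
    H, W = img.shape
    for y, x in zip(*np.nonzero(noisy)):
        acc, norm = 0.0, 0.0
        for dy in range(-radius, radius + 1):  # stage 2: distance-weighted mean of clean neighbours
            for dx in range(-radius, radius + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy < H and 0 <= xx < W and not noisy[yy, xx]:
                    w = np.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))
                    acc += w * img[yy, xx]
                    norm += w
        if norm > 0:
            out[y, x] = acc / norm
    return out.astype(img.dtype)

# Toy usage on an 8-bit image corrupted with salt-and-pepper noise.
clean = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.uint8)
noisy = clean.copy()
mask = np.random.rand(64, 64) < 0.1
noisy[mask] = np.random.choice([0, 255], size=int(mask.sum())).astype(np.uint8)
restored = switching_weighted_mean(noisy)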

76 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian, which can cut the complexity to $O(1)$ per pixel for any arbitrary $S$.
Abstract: The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires $O(S)$ operations per pixel, where $S$ is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to $O(1)$ per pixel for any arbitrary $S$ . The algorithm has a simple implementation involving $N+1$ spatial filterings, where $N$ is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order $N$ required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy.
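A sketch of the underlying idea of expressing the filter as a short series of spatial filterings: the Gaussian range kernel is factored and its cross term expanded as a truncated power series, so the bilateral filter becomes a ratio of sums of Gaussian convolutions of pointwise-modulated images. The order N, the sigmas and the centering step below are illustrative choices, not the paper's exact construction or its error analysis.

import math
import numpy as np
from scipy.ndimage import gaussian_filter

def fast_bilateral_gaussian(img, sigma_s=3.0, sigma_r=0.2, N=20):
    f = img - 0.5                                  # centre intensities so the truncated series converges
    g = np.exp(-f ** 2 / (2 * sigma_r ** 2))
    num = np.zeros_like(f)
    den = np.zeros_like(f)
    for n in range(N + 1):
        c = 1.0 / (math.factorial(n) * sigma_r ** (2 * n))
        base = g * f ** n
        num += c * f ** n * gaussian_filter(base * f, sigma_s)  # one spatial filtering per term
        den += c * f ** n * gaussian_filter(base, sigma_s)
    return num / np.maximum(den, 1e-12) + 0.5

img = np.random.rand(128, 128)   # placeholder image with intensities in [0, 1]
out = fast_bilateral_gaussian(img)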

66 citations


Journal ArticleDOI
TL;DR: A novel method based on local difference values is applied to extract corrupted pixels, and the improved method performs well in both edge preservation and noise removal.

63 citations


Proceedings ArticleDOI
01 Jan 2016
TL;DR: Quantitative and qualitative results from experiments on the KITTI Database, using LIDAR point clouds only, show very satisfactory performance of the approach introduced in this work, which relies on local spatial interpolation using sliding-window (mask) technique and the Bilateral Filter.
Abstract: High resolution depth-maps, obtained by upsampling sparse range data from a 3D-LIDAR, find applications in many fields ranging from sensory perception to semantic segmentation and object detection. Upsampling is often based on combining data from a monocular camera to compensate for the low resolution of a LIDAR. This paper, on the other hand, introduces a novel framework to obtain a dense depth-map solely from a single LIDAR point cloud, which is a research direction that has been barely explored. The formulation behind the proposed depth-mapping process relies on local spatial interpolation, using a sliding-window (mask) technique, and on the Bilateral Filter (BF), where the variable of interest, the distance from the sensor, is considered in the interpolation problem. In particular, the BF is conveniently modified to perform depth-map upsampling such that the edges (foreground-background discontinuities) are better preserved by means of a proposed method which influences the range-based weighting term. Other methods for spatial upsampling are discussed, evaluated and compared in terms of different error measures. This paper also investigates the role of the mask's size in the performance of the implemented methods. Quantitative and qualitative results from experiments on the KITTI Database, using LIDAR point clouds only, show very satisfactory performance of the approach introduced in this work.
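A sketch of BF-style interpolation of a sparse depth map inside a sliding window, in the spirit of the approach above: valid LIDAR samples are combined with a spatial weight and a range weight taken relative to the closest sample in the window, so that foreground-background discontinuities are not blended. The window size, sigmas and the choice of the foreground reference are assumptions, not the paper's modification.

import numpy as np

def bf_depth_upsampling(depth, valid, radius=4, sigma_s=2.0, sigma_r=1.0):
    H, W = depth.shape
    out = depth.copy()
    for y in range(H):
        for x in range(W):
            if valid[y, x]:
                continue
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            d = depth[y0:y1, x0:x1]
            m = valid[y0:y1, x0:x1]
            if not m.any():
                continue
            yy, xx = np.mgrid[y0:y1, x0:x1]
            ws = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))   # spatial term
            ref = d[m].min()                                                     # closest surface in the window
            wr = np.exp(-((d - ref) ** 2) / (2 * sigma_r ** 2))                  # range term w.r.t. foreground depth
            w = ws * wr * m
            out[y, x] = (w * d).sum() / w.sum()
    return out

# Toy usage: roughly 5% of pixels carry range measurements.
valid = np.random.rand(100, 100) < 0.05
depth = np.where(valid, np.random.uniform(2.0, 50.0, (100, 100)), 0.0)
dense = bf_depth_upsampling(depth, valid)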

50 citations


Journal ArticleDOI
01 Jun 2016
TL;DR: The experimental results showed that the BF with the parameters proposed by the authors performed better than BFs with other previously proposed parameters, in both the preservation of edges and the removal of different levels of Rician noise from MR images.
Highlights: This is the first GA-based optimization study to find optimal parameters of the bilateral filter. Both simulated and clinical brain MR images were used for Rician noise removal. The preservation of edges and removal of noise were investigated for different noise levels. A better performance in computation time of our approach was observed. The quality of the denoised images with the proposed parameters was validated using quantitative metrics.
Abstract: Noise elimination is an important pre-processing step in magnetic resonance (MR) images for clinical purposes. In the present study, as an edge-preserving method, the bilateral filter (BF) was used for Rician noise removal in MR images. The choice of BF parameters affects the performance of denoising. Therefore, as a novel approach, the parameters of BF were optimized using a genetic algorithm (GA). First, Rician noise with different variances (σ = 10, 20, 30) was added to simulated T1-weighted brain MR images. To find the optimum filter parameters, GA was applied to the noisy images, searching over window sizes of 3×3, 5×5, 7×7, 11×11, and 21×21, spatial sigma 0.1-10, and intensity sigma 1-60. The peak signal-to-noise ratio (PSNR) was used as the fitness value for optimization. After determination of the optimal parameters, we investigated the results of the proposed BF parameters on both simulated and clinical MR images. In order to understand the importance of parameter selection in BF, we compared the results of denoising with the proposed parameters and with other previously used BF parameters using quality metrics such as mean squared error (MSE), PSNR, signal-to-noise ratio (SNR) and the structural similarity index metric (SSIM). The quality of the denoised images with the proposed parameters was validated using both visual inspection and quantitative metrics. The experimental results showed that the BF with the proposed parameters performed better than BFs with other previously proposed parameters in both the preservation of edges and the removal of different levels of Rician noise from MR images. It can be concluded that the performance of BF for denoising is highly dependent on optimal parameter selection.
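A minimal sketch of the GA-driven parameter search described above: candidate (window size, spatial sigma, intensity sigma) triples are evolved with PSNR against a clean reference as the fitness. The population size, mutation scheme, value ranges, and the use of Gaussian rather than Rician noise on a synthetic image are all illustrative assumptions.

import numpy as np
import cv2

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def fitness(params, noisy, clean):
    d, sigma_spatial, sigma_intensity = params
    denoised = cv2.bilateralFilter(noisy, int(d), float(sigma_intensity), float(sigma_spatial))
    return psnr(denoised, clean)

# Synthetic stand-in for a brain MR slice plus noise.
clean = cv2.GaussianBlur((np.random.rand(128, 128) * 255).astype(np.uint8), (7, 7), 2)
noisy = np.clip(clean + np.random.normal(0, 20, clean.shape), 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
pop = np.column_stack([rng.choice([3, 5, 7, 11, 21], 20),   # window size
                       rng.uniform(0.1, 10.0, 20),          # spatial sigma
                       rng.uniform(1.0, 60.0, 20)])         # intensity sigma
for generation in range(30):
    scores = np.array([fitness(p, noisy, clean) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]                             # keep the fittest half
    children = parents + rng.normal(0, [0.0, 0.5, 2.0], parents.shape)  # mutate the sigmas only
    children[:, 1:] = np.clip(children[:, 1:], [0.1, 1.0], [10.0, 60.0])
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(p, noisy, clean) for p in pop])]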

46 citations


Journal ArticleDOI
TL;DR: This work proposes a novel approximation that can be applied to any range kernel, provided it has a pointwise-convergent Fourier series, and is able to guarantee subpixel accuracy for the overall filtering, which is not provided by most existing methods for fast bilateral filtering.
Abstract: It was demonstrated in earlier work that, by approximating its range kernel using shiftable functions, the nonlinear bilateral filter can be computed using a series of fast convolutions. Previous approaches based on shiftable approximation have, however, been restricted to Gaussian range kernels. In this work, we propose a novel approximation that can be applied to any range kernel, provided it has a pointwise-convergent Fourier series. More specifically, we propose to approximate the Gaussian range kernel of the bilateral filter using a Fourier basis, where the coefficients of the basis are obtained by solving a series of least-squares problems. The coefficients can be efficiently computed using a recursive form of the QR decomposition. By controlling the cardinality of the Fourier basis, we can obtain a good tradeoff between the run-time and the filtering accuracy. In particular, we are able to guarantee subpixel accuracy for the overall filtering, which is not provided by most existing methods for fast bilateral filtering. We present simulation results to demonstrate the speed and accuracy of the proposed algorithm.
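A sketch of the shiftable Fourier idea above: cosine-basis coefficients are fitted to the Gaussian range kernel by ordinary least squares, and the identity cos(a - b) = cos(a)cos(b) + sin(a)sin(b) turns the bilateral filter into a ratio of sums of Gaussian convolutions. The basis size, the sampling grid and the plain lstsq solve are illustrative stand-ins for the recursive QR construction and the accuracy control described in the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def fourier_bilateral(img, sigma_s=3.0, sigma_r=0.15, K=16):
    T = max(img.max() - img.min(), 1e-6)            # dynamic range of the image
    omega = np.pi / T
    t = np.linspace(-T, T, 512)
    A = np.cos(np.outer(t, omega * np.arange(K)))   # cosine basis sampled on the range axis
    c, *_ = np.linalg.lstsq(A, np.exp(-t ** 2 / (2 * sigma_r ** 2)), rcond=None)
    num = np.zeros_like(img, dtype=np.float64)
    den = np.zeros_like(img, dtype=np.float64)
    for k in range(K):
        ck, sk = np.cos(k * omega * img), np.sin(k * omega * img)
        num += c[k] * (ck * gaussian_filter(ck * img, sigma_s) + sk * gaussian_filter(sk * img, sigma_s))
        den += c[k] * (ck * gaussian_filter(ck, sigma_s) + sk * gaussian_filter(sk, sigma_s))
    return num / np.maximum(den, 1e-12)

img = np.random.rand(128, 128)
out = fourier_bilateral(img)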

Journal ArticleDOI
TL;DR: An iterative approach based on bilateral filtering is proposed for speckle reduction in multiframe OCT data and results on phantom images and real OCT retinal images demonstrate the effectiveness of the proposed filter.

Book ChapterDOI
08 Oct 2016
TL;DR: This paper proposes an extension to the bilateral filter for non-Gaussian filters which allows us to treat pixels at very different depth layers as missing values and shows results for a medical application (tremors) where it improves current baselines for motion magnification and motion measurements.
Abstract: This paper adds depth to motion magnification. With the rise of cheap RGB+D cameras depth information is readily available. We make use of depth to make motion magnification robust to occlusion and large motions. Current approaches require a manual drawn pixel mask over all frames in the area of interest which is cumbersome and error-prone. By including depth, we avoid manual annotation and magnify motions at similar depth levels while ignoring occlusions at distant depth pixels. To achieve this, we propose an extension to the bilateral filter for non-Gaussian filters which allows us to treat pixels at very different depth layers as missing values. As our experiments will show, these missing values should be ignored, and not inferred with inpainting. We show results for a medical application (tremors) where we improve current baselines for motion magnification and motion measurements.
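A sketch of the depth-gated weighting idea above: pixels whose depth differs too much from the centre pixel are treated as missing (zero weight) rather than inpainted, and the kernel is renormalised over the remaining pixels. The Gaussian kernel, radius and depth threshold are illustrative stand-ins for the non-Gaussian magnification filters handled in the paper.

import numpy as np

def depth_gated_filter(frame, depth, radius=3, sigma_s=1.5, depth_thresh=0.3):
    H, W = frame.shape
    out = np.zeros_like(frame, dtype=np.float64)
    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    ws = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_s ** 2))
    pad = lambda a: np.pad(a, radius, mode="edge")
    fp, dp = pad(frame), pad(depth)
    for y in range(H):
        for x in range(W):
            fw = fp[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            dw = dp[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            keep = np.abs(dw - depth[y, x]) < depth_thresh   # same depth layer only
            w = ws * keep
            out[y, x] = (w * fw).sum() / w.sum()             # renormalise over non-missing pixels
    return out

frame = np.random.rand(80, 80)
depth = np.random.rand(80, 80) * 2.0
filtered = depth_gated_filter(frame, depth)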

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed algorithm can simultaneously enhance the low-light images and reduce noise effectively and could also perform quite well compared with the current common image enhancement and noise reduction algorithms in terms of the subjective visual effects and objective quality assessments.
Abstract: Images obtained under low-light conditions tend to have low grey levels, high noise levels, and indistinguishable details. Image degradation not only affects the recognition of images, but also influences the performance of the computer vision system. The low-light image enhancement algorithm based on the dark channel prior de-hazing technique can enhance the contrast of images effectively and can highlight the details of images. However, the dark channel prior de-hazing technique ignores the effects of noise, which leads to significant noise amplification after the enhancement process. In this study, a de-hazing-based simultaneous enhancement and noise reduction algorithm is proposed by analysing the essence of the dark channel prior de-hazing technique and the bilateral filter. First, the authors estimate the values of the initial parameters of the hazy image model by the de-hazing technique. Then, they correct the parameters of the hazy image model alternately with the iterative joint bilateral filter. Experimental results indicate that the proposed algorithm can simultaneously enhance the low-light images and reduce noise effectively. The proposed algorithm could also perform quite well compared with the current common image enhancement and noise reduction algorithms in terms of the subjective visual effects and objective quality assessments.

Journal ArticleDOI
TL;DR: The proposed method consists of a non-parametric image registration based on diffusion regularization and a nonlocal Laplace regularizer combined with a bilateral filter in the reconstruction step to remove noise and motion outliers and proves the existence of a solution to the well posed registration problem.
Abstract: In this paper, we present a new approach of multi-frame super-resolution (SR). The SR techniques strongly depend on the availability of accurate motion estimation. When the estimation of motion is not well established, as usually happens for non-parametric motion, annoying artifacts appear in the super-resolved image. Since SR problems suffer from the motion and blur estimations, new techniques are considered to improve the registration and restoration steps. The proposed method consists of a non-parametric image registration based on diffusion regularization and a nonlocal Laplace regularizer combined with a bilateral filter (BTV) in the reconstruction step to remove noise and motion outliers. The diffusion registration is employed to handle the small deformation between the unregistered images, while the combination of nonlocal Laplace and BTV is used to increase the robustness of the restoration step with respect to the blurring effect and to the noise. We also prove the existence of a solution to the well posed registration problem. Simulation results using different images show the effectiveness and robustness of our algorithm against noise and outliers compared to other existing methods.

Journal ArticleDOI
01 Sep 2016
TL;DR: This paper proposes new methods to accelerate the whole process and improve the quality of the color information using entropy information, and compares different approaches for noise and artifacts reduction: Gaussian, mean and bilateral filter.
Highlights: We propose three improvements for those smoothing methods, improving the color quality or the computation time. One is based on entropy, speeding up the whole process. The second one obtains the optimal processing radius to improve the color quality. The last one uses a heuristic approach to select the optimal radius while improving the speed-up.
Abstract: RGB-D sensors are capable of providing 3D points (depth) together with color information associated with each point. These sensors suffer from different sources of noise. With some kinds of RGB-D sensors, it is possible to pre-process the color image before assigning the color information to the 3D data. However, with other kinds of sensors that is not possible: RGB-D data must be processed directly. In this paper, we compare different approaches for noise and artifacts reduction: Gaussian, mean and bilateral filter. These methods are time-consuming when managing 3D data, which can be a problem with several real-time applications. We propose new methods to accelerate the whole process and improve the quality of the color information using entropy information. Entropy provides a framework for speeding up the involved methods, allowing certain data not to be processed if the entropy value of that data is over or under a given threshold. The experimental results provide a way to balance the quality and the acceleration of these methods. The current results show that our methods improve both the image quality and processing time, as compared to the original methods.
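A sketch of the entropy gating described above: the image is processed in blocks and the comparatively expensive smoothing is only run on blocks whose entropy falls inside a band, while flat or extremely busy blocks are passed through untouched. The block size, thresholds and the use of bilateral smoothing here are assumptions for illustration, not the authors' exact method.

import numpy as np
import cv2

def block_entropy(block):
    hist = np.bincount(block.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def entropy_gated_smooth(img, block=32, lo=1.0, hi=7.0):
    out = img.copy()
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            b = img[y:y + block, x:x + block]
            if lo < block_entropy(b) < hi:          # only smooth "interesting" blocks
                out[y:y + block, x:x + block] = cv2.bilateralFilter(np.ascontiguousarray(b), 5, 25.0, 5.0)
    return out

color_channel = (np.random.rand(128, 128) * 255).astype(np.uint8)   # placeholder RGB-D colour channel
smoothed = entropy_gated_smooth(color_channel)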

Journal ArticleDOI
TL;DR: In this article, a hybrid denoising filter with an adaptive GA (AGA) was used to remove noise from the images, and the noise-free images were restored by shining the pixel values using the AGA.

Journal ArticleDOI
TL;DR: The developed method is more efficient in removing speckle noise from the ultrasound images compared to other current methods because it is able to adapt the filtering process according to the image contents, thus avoiding the loss of any relevant structural features in the input images.
Highlights: A new method is proposed to perform selective smoothing of images affected by speckle noise. A new smoothing criterion is defined for the average smoothing filter. The convolution window of the smoothing filter is adjustable. The method is evaluated using real ultrasound medical images based on image quality metrics. The proposed method produced better results than the current methods evaluated.
Abstract: Ultrasound images are strongly affected by speckle noise, making visual and computational analysis of the structures more difficult. Usually, the interference caused by this kind of noise reduces the efficiency of extraction and interpretation of the structural features of interest. In order to overcome this problem, a new method of selective smoothing based on average filtering and the radiation intensity of the image pixels is proposed. The main idea of this new method is to identify the pixels belonging to the borders of the structures of interest in the image, and then apply a reduced smoothing to these pixels, whilst applying more intense smoothing to the remaining pixels. Experimental tests were conducted using synthetic ultrasound images with speckle noise added and real ultrasound images from the female pelvic cavity. The new smoothing method is able to perform selective smoothing in the input images, enhancing the transitions between the different structures presented. The results achieved are promising, as the evaluation analysis performed shows that the developed method is more efficient in removing speckle noise from the ultrasound images compared to other current methods. This improvement is because it is able to adapt the filtering process according to the image contents, thus avoiding the loss of any relevant structural features in the input images.

Journal ArticleDOI
TL;DR: An iterative algorithm, referred to as structure-preserving iterative reconstruction (SPIR), is proposed to enable a new data acquisition scheme which requires one full scan and a second sparse-view scan, for potential reduction in the imaging dose and engineering cost of DECT.
Abstract: Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme which requires one full scan and a second sparse-view scan for potential reduction in imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels, which is assumed unchanged on a second CT image since DECT scans are performed on the same object. The second CT image from reduced projections is reconstructed by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its filtered image by the similarity matrix under data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix for CT reconstruction, we refer to the algorithm as structure preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR) and prior-image-constrained-compressed-sensing (PICCS). SPIR with a second 10-view scan reduces the image noise STD by a factor of one order of magnitude with same spatial resolution as full-view FBP image. SPIR substantially improves over TVR on the reconstruction accuracy of a 10-view scan by decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR at 50 and 20-view scans on spatial resolution with a higher frequency at the modulation transfer function value of 10% by an average factor of 4. Compared with the 20-view scan PICCS result, the SPIR image has 7 times lower noise STD with similar spatial resolution. The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an average error of less than 1%.

Patent
18 Aug 2016
TL;DR: In this paper, a method for denoising a range image acquired by a time-of-flight (ToF) camera by first determining locations of edges, and a confidence value of each pixel, and based on the locations of the edges, determining geodesic distances of neighboring pixels.
Abstract: A method for denoising a range image acquired by a time-of-flight (ToF) camera by first determining locations of edges, and a confidence value of each pixel, and based on the locations of the edges, determining geodesic distances of neighboring pixels. Based on the confidence values, reliabilities of the neighboring pixels are determined and scene dependent noise is reduced using a filter.

Journal ArticleDOI
TL;DR: A number of popular de-speckling algorithms are investigated, including filters based on the frequency domain, filters based on local statistical properties, filters based on minimum mean square error (MMSE), and filters based on partial differential equations (PDE); the study concludes that the Bilateral Filter (BF) achieves the best visual effect.
Abstract: Breast ultrasound is an important tool used in the medical treatment and diagnosis of breast tumor. However, noise in the form of speckle is inevitably generated. Although the existence of speckle may be beneficial to diagnosis if used by a well-trained observer, it often causes disturbance which negatively affects clinical diagnosis, not only by reducing the resolution and contrast of ultrasound images, but also by adding difficulty in recognizing tumor regions accurately. In this paper, we investigate a number of popular de-speckling algorithms, including filters based on the frequency domain, filters based on local statistical properties, filters based on minimum mean square error (MMSE), and filters based on partial differential equations (PDE). Two visual measurement evaluation criteria, i.e., Mean to Variance Ratio (VMR) and Laplace Response of Domain (LRD), are chosen for the performance comparison of those filters in the application of ultrasound breast image filtering. Moreover, the filtering effect is further evaluated with respect to the segmentation accuracy of tumor regions. According to the evaluation results, we conclude that the Bilateral Filter (BF) achieves the best visual effect. Although Weickert J Diffusion (WJD) and Total Variation (TV) can also obtain good visual effect and segmentation accuracy, they are very time-consuming.

Journal Article
TL;DR: A new contrast and luminosity correction technique is developed based on bilateral filtering and superimpose techniques that is more effective in normalizing the illumination and contrast compared to other illumination techniques such as homomorphic filtering, high pass filter and double mean filtering (DMV).
Abstract: Illumination normalization and contrast variation in images are among the most challenging tasks in the image processing field. Normally, degraded-contrast images are caused by pose, occlusion, illumination, and luminosity. In this paper, a new contrast and luminosity correction technique is developed based on bilateral filtering and superimpose techniques. Background pixels were used in order to estimate the normalized background using their local mean and standard deviation. An experiment has been conducted on a few badly illuminated images and document images which involve illumination and contrast problems. The results were evaluated based on Signal-to-Noise Ratio (SNR) and Misclassification Error (ME). The performance of the proposed method based on SNR and ME was very encouraging. The results also show that the proposed method is more effective in normalizing the illumination and contrast compared to other illumination techniques such as homomorphic filtering, high-pass filtering and double mean filtering (DMV).

Journal ArticleDOI
TL;DR: The locality sensitive histogram (LSH) is extended to linear time bilateral filtering, and a new type of histogram for efficiently computing edge-preserving nearest neighbor fields (NNFs) is proposed.
Abstract: The locality sensitive histogram (LSH) injects spatial information into the local histogram in an efficient manner, and has been demonstrated to be very effective for visual tracking. In this paper, we explore the application of this efficient histogram in two important problems. We first extend the LSH to linear time bilateral filtering, and then propose a new type of histogram for efficiently computing edge-preserving nearest neighbor fields (NNFs). While the existing histogram-based bilateral filtering methods are the state of the art for efficient grayscale image processing, they are limited to box spatial filter kernels only. In our first application, we address this limitation by expressing the bilateral filter as a simple ratio of linear functions of the LSH, which is able to extend the box spatial kernel to an exponential kernel. The computational complexity of the proposed bilateral filter is linear in the number of image pixels. In our second application, we derive a new bilateral weighted histogram (BWH) for NNF. The new histogram maintains the efficiency of LSH, which allows approximate NNF to be computed independent of patch size. In addition, BWH takes both spatial and color information into account, and thus provides higher accuracy for histogram-based matching, especially around color edges.

Journal ArticleDOI
TL;DR: An effective spectral-spatial classification method for hyperspectral images based on joint bilateral filtering (JBF) and graph cut segmentation is proposed and can achieve 8.56%–13.68% higher overall accuracies than the pixel-wise SVM classifier.
Abstract: Hyperspectral image classification can be achieved by modeling an energy minimization problem on a graph of image pixels. In this paper, an effective spectral-spatial classification method for hyperspectral images based on joint bilateral filtering (JBF) and graph cut segmentation is proposed. In this method, a novel technique for labeling regions obtained by the spectral-spatial segmentation process is presented. Our method includes the following steps. First, the probabilistic support vector machines (SVM) classifier is used to estimate probabilities belonging to each information class. Second, an extended JBF is employed to perform image smoothing on the probability maps. By using our JBF process, salt-and-pepper classification noise in homogeneous regions can be effectively smoothed out while object boundaries in the original image are better preserved as well. Third, a sequence of modified bi-labeling graph cut models is constructed for each information class to extract the desirable object belonging to the corresponding class from the smoothed probability maps. Finally, a classification map is achieved by merging the segmentation maps obtained in the last step using a simple and effective rule. Experimental results based on three benchmark airborne hyperspectral datasets with different resolutions and contexts demonstrate that our method can achieve 8.56%–13.68% higher overall accuracies than the pixel-wise SVM classifier. The performance of our method was further compared to several classical hyperspectral image classification methods using objective quantitative measures and a visual qualitative evaluation.

Journal ArticleDOI
TL;DR: In this article, the edge-preserved smoothing is carried out on a hyperspectral image (HSI) and the SVM multiclass classifier is applied on the smoothed HSI.
Abstract: Bilateral filter (BF) theory is applied to integrate spatial contextual information into the spectral domain for improving the accuracy of the support vector machine (SVM) classifier. The proposed classification framework is a two-stage process. First, an edge-preserved smoothing is carried out on a hyperspectral image (HSI). Then, the SVM multiclass classifier is applied on the smoothed HSI. One of the advantages of the BF-based implementation is that it considers the spatial as well as spectral closeness for smoothing the HSI. Therefore, the proposed method provides better smoothing in the homogeneous region and preserves the image details, which in turn improves the separability between the classes. The performance of the proposed method is tested using benchmark HSIs obtained from the airborne-visible-infrared-imaging-spectrometer (AVIRIS) and the reflective-optics-system-imaging-spectrometer (ROSIS) sensors. Experimental results demonstrate the effectiveness of the edge-preserved filtering in the classification of the HSI. Average accuracies (with 10% training samples) of the proposed classification framework are 99.04%, 98.11%, and 96.42% for AVIRIS–Salinas, ROSIS–Pavia University, and AVIRIS–Indian Pines images, respectively. Since the proposed method follows a combination of BF and the SVM formulations, it will be quite simple and practical to implement in real applications.

Journal ArticleDOI
TL;DR: The proposed method first decomposes an LDCT image into low-frequency and high-frequency parts by a bilateral filter; the high-frequency part is then decomposed into an artifact component and a tissue component by dictionary learning (DL) and sparse coding.
Abstract: Streak artifacts and mottle noise often appear in low-dose CT (LDCT) images due to excessive quantum noise in low-dose X-ray imaging process, thus degrading CT image quality. This research is aimed at improving the quality of LDCT images via image decomposition and dictionary learning. The proposed method first decomposes a LDCT image into the low-frequency (LF) and high-frequency (HF) parts by a bilateral filter. The HF part is then decomposed into an artifact component and a tissue component by performing dictionary learning (DL) and sparse coding. The tissue component is combined with the LF part to obtain the artifact-suppressed image. At last, a DL method is applied to further reduce the residual artifacts and noise. Different from previous research works with sparse representation, the proposed method does not need to collect training images in advance. The results of numerical simulation and clinical data experiments indicate the effectiveness of the proposed approach.
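The first decomposition step described above can be sketched directly: a bilateral filter yields the low-frequency part and the high-frequency residual is obtained by subtraction; the dictionary-learning and sparse-coding stage that further separates artifacts from tissue in that residual is not reproduced here. The CT slice and filter parameters are placeholders.

import numpy as np
import cv2

ct = (np.random.rand(256, 256) * 255).astype(np.float32)   # placeholder LDCT slice
low_freq = cv2.bilateralFilter(ct, d=9, sigmaColor=25.0, sigmaSpace=5.0)
high_freq = ct - low_freq   # streaks, mottle noise and fine tissue detail end up in this residual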

Journal ArticleDOI
11 Oct 2016
TL;DR: This paper proposes a structure‐aware bilateral texture algorithm to remove texture patterns and preserve structures, which is simple and fast, as well as effective in removing textures.
Abstract: Photos contain well-structured and plentiful visual information. Edges are active and expressive stimuli for human visual perception. However, it is hard to separate structure from details because edge strength and object scale are entirely different concepts. This paper proposes a structure-aware bilateral texture algorithm to remove texture patterns and preserve structures. Our proposed method is simple and fast, as well as effective in removing textures. Instead of patch shift, smaller patches represent pixels located at structure edges, and original patches represent the texture regions. Specifically, this paper also improves the joint bilateral filter to preserve small structures. Moreover, a windowed inherent variation is adapted to distinguish textures and structures for detecting structure edges. Finally, the proposed method produces excellent experimental results. These results are compared to some results of previous studies. Besides, structure-preserving filtering is a critical operation in many image processing applications. Our proposed filter is also demonstrated in many attractive applications, such as seam carving, detail enhancement, artistic rendering, etc.

Journal ArticleDOI
TL;DR: Jointly employing five techniques (kernel truncation, best N-term approximation, as well as previous 2D box filtering, dimension promotion, and the shiftability property), a unified framework is proposed to transform BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time.
Abstract: Computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve the constant-time BF whose complexity is irrelevant to the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and shiftability property. Although each of the above techniques suffers from accuracy and efficiency problems, previous algorithm designers tended to adopt only one of them to assemble fast implementations due to the hardness of combining them together. Hence, no joint exploitation of these techniques has been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques: kernel truncation, best $N$-term approximation, as well as previous 2D box filtering, dimension promotion, and shiftability property, we propose a unified framework to transform BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and, therefore, can draw upon one another's strong point to overcome deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing the efficiency at running time.

Journal ArticleDOI
TL;DR: An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT, and the evaluation shows that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.
Abstract: The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The afore-mentioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporally appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.

Journal ArticleDOI
TL;DR: An algorithm including adaptive median filter and bilateral filter is proposed that is able to suppress the mixed noise which contains Gaussian noise and impulsive noise, while preserving the important structures in the images, and enhances the contrast of image by using gray-level morphology and contrast limited histogram equalization.
Abstract: X-ray images play a very important role in medical diagnosis. To help doctors diagnose disease, several algorithms for enhancing X-ray images have been proposed in the past decades. However, the enhancement of images may also amplify noise or produce distortion, which is unfavorable to the diagnosis. Therefore, appropriate techniques for noise suppression and contrast enhancement are necessary. This paper proposes an algorithm including two-stage filtering and contrast enhancement for X-ray images. By using an adaptive median filter and a bilateral filter, our method is able to suppress mixed noise which contains Gaussian noise and impulsive noise, while preserving the important structures (e.g., edges) in the images. Afterwards, the contrast of the image is enhanced by using gray-level morphology and contrast-limited adaptive histogram equalization (CLAHE). In the experiments, we evaluate the performance of noise removal and contrast enhancement separately with quantitative indexes and visual results. For the mixed noise case, our method is able to achieve an averaged PSNR of 39.89 dB and an averaged SSIM of 0.9449; for contrast enhancement, our method is able to enhance more detailed structures (e.g., edges, textures) than CLAHE.
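A sketch of the pipeline above using OpenCV building blocks: a (plain rather than adaptive) median filter for impulsive noise, a bilateral filter for Gaussian noise, then CLAHE for contrast. The gray-level morphology step is omitted and all parameter values are illustrative.

import numpy as np
import cv2

xray = (np.random.rand(256, 256) * 255).astype(np.uint8)   # placeholder X-ray image

step1 = cv2.medianBlur(xray, 3)                                           # impulsive (salt-and-pepper) noise
step2 = cv2.bilateralFilter(step1, d=7, sigmaColor=30.0, sigmaSpace=5.0)  # Gaussian noise, edges preserved
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(step2)                                             # contrast-limited adaptive histogram equalization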