
Showing papers on "Bilateral filter published in 2013"


Journal ArticleDOI
TL;DR: The guided filter is a novel explicit image filter derived from a local linear model that can be used as an edge-preserving smoothing operator like the popular bilateral filter, but it has better behaviors near edges.
Abstract: In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc.
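For reference, a minimal NumPy/SciPy sketch of the local linear model described above; the window radius r and regularization eps are the filter's two parameters (values here are illustrative, not taken from the paper's implementation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Edge-preserving smoothing of p guided by I (float arrays in [0, 1])."""
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x, size)      # box mean over the window
    mean_I, mean_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mean_I * mean_p        # local covariance of (I, p)
    var_I = mean(I * I) - mean_I * mean_I         # local variance of I
    a = cov_Ip / (var_I + eps)                    # local linear coefficients
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)                  # averaged linear model
```

With I = p (self-guidance) this behaves as an edge-preserving smoother; with a separate guidance image it transfers the guidance structure, as the abstract describes.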

4,730 citations


Journal ArticleDOI
TL;DR: The performance of the proposed method is superior to wavelet thresholding, the bilateral filter and the non-local means filter, and superior or comparable to the multi-resolution bilateral filter in terms of method noise, visual quality, PSNR and Image Quality Index.
Abstract: The non-local means filter uses all the self-predictions and self-similarities an image can provide to determine the pixel weights for filtering the noisy image, under the assumption that the image contains an extensive amount of self-similarity. As the pixels are highly correlated and the noise is typically independently and identically distributed, averaging these pixels results in noise suppression, thereby yielding a pixel that is similar to its original value. The non-local means filter removes the noise and cleans the edges without losing too much fine structure and detail. But as the noise increases, the performance of the non-local means filter deteriorates and the denoised image suffers from blurring and loss of image details, because the similar local patches used to find the pixel weights contain noisy pixels. In this paper, a blend of the non-local means filter and thresholding of its method noise using wavelets is proposed for better image denoising. The performance of the proposed method is compared with wavelet thresholding, the bilateral filter, the non-local means filter and the multi-resolution bilateral filter. It is found that the proposed method is superior to wavelet thresholding, the bilateral filter and the non-local means filter, and superior or comparable to the multi-resolution bilateral filter in terms of method noise, visual quality, PSNR and Image Quality Index.
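A minimal sketch of the proposed pipeline, assuming standard scikit-image and PyWavelets APIs and illustrative parameter values: denoise with non-local means, wavelet-threshold the method noise (noisy minus denoised) to recover lost detail, and add it back:

```python
import numpy as np
import pywt
from skimage.restoration import denoise_nl_means

def nlm_method_noise_denoise(noisy, sigma, wavelet="db8", level=3):
    denoised = denoise_nl_means(noisy, h=0.8 * sigma, sigma=sigma,
                                patch_size=7, patch_distance=11)
    method_noise = noisy - denoised                          # lost detail + noise
    coeffs = pywt.wavedec2(method_noise, wavelet, level=level)
    thr = sigma * np.sqrt(2 * np.log(method_noise.size))     # universal threshold
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    rec = pywt.waverec2(coeffs, wavelet)[:noisy.shape[0], :noisy.shape[1]]
    return denoised + rec                                    # add recovered detail back
```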

125 citations


Journal ArticleDOI
TL;DR: This paper discusses this problem, and proposes some simple steps to accelerate the implementation, in general, and for small σr in particular, and provides some experimental results to demonstrate the acceleration that is achieved using these modifications.
Abstract: A direct implementation of the bilateral filter requires O(σs²) operations per pixel, where σs is the (effective) width of the spatial kernel. A fast implementation of the bilateral filter that required O(1) operations per pixel with respect to σs was recently proposed. This was done by using trigonometric functions for the range kernel of the bilateral filter, and by exploiting their so-called shiftability property. In particular, a fast implementation of the Gaussian bilateral filter was realized by approximating the Gaussian range kernel using raised cosines. Later, it was demonstrated that this idea could be extended to a larger class of filters, including the popular non-local means filter. As already observed, a flip side of this approach was that the run time depended on the width σr of the range kernel. For an image with dynamic range [0,T], the run time scaled as O(T²/σr²) with σr. This made it difficult to implement narrow range kernels, particularly for images with large dynamic range. In this paper, we discuss this problem, and propose some simple steps to accelerate the implementation, in general, and for small σr in particular. We provide some experimental results to demonstrate the acceleration that is achieved using these modifications.
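A worked sketch of the shiftability idea: approximate the Gaussian range kernel by a raised cosine [cos(γt)]^N, expand it into complex exponentials, and the bilateral filter reduces to N+1 spatial Gaussian convolutions. Note how N (and hence runtime) grows as (T/σr)², which is exactly the bottleneck the paper addresses; its additional acceleration steps (e.g., dropping negligible terms) are omitted here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import comb

def _cblur(g, sigma):
    """Gaussian blur of a complex image (ndimage handles real arrays only)."""
    return gaussian_filter(g.real, sigma) + 1j * gaussian_filter(g.imag, sigma)

def shiftable_bilateral(f, sigma_s, sigma_r, T=255.0):
    # [cos(gamma*t)]^N ~ exp(-t^2/(2*sigma_r^2)) with gamma = 1/(sigma_r*sqrt(N));
    # N chosen so gamma*T <= pi/2, hence N ~ (T/sigma_r)^2 (practical for moderate sigma_r).
    N = int(np.ceil((2.0 * T / (np.pi * sigma_r)) ** 2))
    gamma = 1.0 / (sigma_r * np.sqrt(N))
    num = np.zeros(f.shape)
    den = np.zeros(f.shape)
    for n in range(N + 1):
        omega = gamma * (2 * n - N)
        c = comb(N, n) / 2.0 ** N
        aux = np.exp(1j * omega * f)               # modulated image
        num += c * np.real(np.conj(aux) * _cblur(aux * f, sigma_s))
        den += c * np.real(np.conj(aux) * _cblur(aux, sigma_s))
    return num / den
```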

122 citations


Journal ArticleDOI
TL;DR: An extensive evaluation study of different strategies for computing adaptive support weights in local stereo matching, including the original bilateral filter-based weights, as well as more recent approaches based on geodesic distances or on the guided filter.

104 citations


Proceedings ArticleDOI
01 Sep 2013
TL;DR: A new underwater model to compensate for the attenuation discrepancy along the propagation path is proposed, together with a fast guided trigonometric bilateral filtering enhancement algorithm and a novel fast automatic color enhancement algorithm.
Abstract: This paper describes a novel method to enhance underwater optical images by guided trigonometric bilateral filters and color correction. Scattering and color distortion are the two major problems in underwater optical imaging. Scattering is caused by large suspended particles, as in fog or turbid water that contains abundant particles. Color distortion corresponds to the varying degrees of attenuation encountered by light of different wavelengths traveling in water, rendering ambient underwater environments dominated by a bluish tone. Our key contributions are a new underwater model to compensate for the attenuation discrepancy along the propagation path, a fast guided trigonometric bilateral filtering enhancement algorithm, and a novel fast automatic color enhancement algorithm. The enhanced images are characterized by a reduced noise level, better exposure of the dark regions, and improved global contrast, while the finest details and edges are enhanced significantly. In addition, our enhancement method achieves quality comparable to or higher than the state-of-the-art methods, as assessed by recent image evaluation systems.

97 citations


Journal ArticleDOI
TL;DR: A novel single image-based dehazing framework to remove haze artifacts from images is proposed, where two novel image priors are proposed, called the pixel-based dark channel prior and the pixel-based bright channel prior, to estimate atmospheric light via haze density analysis.
Abstract: Images/videos captured from optical devices are usually degraded by turbid media such as haze, smoke, fog, rain and snow. Haze is the most common problem in outdoor scenes because of atmospheric conditions. This paper proposes a novel single image-based dehazing framework to remove haze artifacts from images, where we propose two novel image priors, called the pixel-based dark channel prior and the pixel-based bright channel prior. Based on the two priors with the haze optical model, we propose to estimate atmospheric light via haze density analysis. We can then estimate the transmission map and refine it via the bilateral filter. As a result, high-quality haze-free images can be recovered with lower computational complexity compared with the state-of-the-art approach based on the patch-based dark channel prior.
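An illustrative sketch of such a framework, under stated assumptions: the pixel-based dark channel is taken as the per-pixel channel minimum (patch size 1), atmospheric light is estimated from the haziest pixels, and a naive bilateral filter refines the transmission map; the paper's exact combination of the two priors may differ. The recovery step follows the generic haze optical model I = J·t + A·(1 − t):

```python
import numpy as np

def bilateral(img, sigma_s=5.0, sigma_r=0.1, radius=10):
    """Naive O(r^2) gray-scale bilateral filter, adequate for a small demo."""
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(img, (dy, dx), axis=(0, 1))
            w = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2)
                       - (shifted - img) ** 2 / (2 * sigma_r**2))
            out += w * shifted
            norm += w
    return out / norm

def dehaze(I, omega=0.95, t0=0.1):
    """I: float RGB image in [0, 1], shape (H, W, 3)."""
    dark = I.min(axis=2)                        # pixel-based dark channel
    haziest = np.argsort(dark.ravel())[-100:]   # brightest haze-opaque pixels
    A = I.reshape(-1, 3)[haziest].max(axis=0)   # airlight estimate
    t = 1.0 - omega * (I / A).min(axis=2)       # raw transmission
    t = np.clip(bilateral(t), t0, 1.0)          # edge-aware refinement
    return np.clip((I - A) / t[..., None] + A, 0, 1)   # recover scene radiance J
```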

92 citations


Journal ArticleDOI
TL;DR: The performance of the proposed methods is compared with existing denoising methods; it is found to be inferior to Bayesian least squares estimation using a Gaussian scale mixture, and superior or comparable to wavelet thresholding, the bilateral filter and the multi-resolution bilateral filter.
Abstract: The Gaussian filter is a local and linear filter that smoothes the whole image irrespective of its edges or details, whereas the bilateral filter is also local but non-linear, and considers both the gray-level similarities and the geometric closeness of neighboring pixels, without smoothing edges. An extension of the bilateral filter is the multi-resolution bilateral filter, in which the bilateral filter is applied to the approximation subbands of a decomposed image and after each level of wavelet reconstruction. Applying the bilateral filter to the approximation subband results in the loss of some image details, whereas applying it after each level of wavelet reconstruction flattens the gray levels, resulting in a cartoon-like appearance. To tackle these issues, it is proposed to use a blend of the Gaussian/bilateral filter and thresholding of its method noise using wavelets. In Gaussian noise scenarios, the performance of the proposed methods is compared with existing denoising methods; it is found to be inferior to Bayesian least squares estimation using a Gaussian scale mixture, and superior or comparable to wavelet thresholding, the bilateral filter, the multi-resolution bilateral filter, NL-means and kernel-based methods. Further, the proposed methods have the advantage of less computational time compared to the other methods, except wavelet thresholding and the bilateral filter.

92 citations


Journal ArticleDOI
TL;DR: Experiments illustrate that the proposed spatially adaptive iterative filtering (SAIF) strategy can significantly relax the base algorithm's sensitivity to its tuning (smoothing) parameters, and effectively boost the performance of several existing denoising filters to generate state-of-the-art results under both simulated and practical conditions.
Abstract: Spatial domain image filters (e.g., the bilateral filter, non-local means, locally adaptive regression kernels) have achieved great success in denoising. Their overall performance, however, has not generally surpassed the leading transform domain-based filters (such as BM3D). One important reason is that spatial domain filters lack an efficient way to adaptively fine-tune their denoising strength; something that is relatively easy to do in transform domain methods with shrinkage operators. In the pixel domain, the smoothing strength is usually controlled globally by, for example, tuning a regularization parameter. In this paper, we propose spatially adaptive iterative filtering (SAIF), a new strategy to control the denoising strength locally for any spatial domain method. This approach is capable of filtering local image content iteratively using the given base filter, and the type of iteration and the iteration number are automatically optimized with respect to estimated risk (i.e., mean-squared error). In exploiting the estimated local signal-to-noise ratio, we also present a new risk estimator that is different from the often-employed SURE method, and exceeds its performance in many cases. Experiments illustrate that our strategy can significantly relax the base algorithm's sensitivity to its tuning (smoothing) parameters, and effectively boost the performance of several existing denoising filters to generate state-of-the-art results under both simulated and practical conditions.
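A sketch of the two iteration types SAIF chooses between: diffusion (re-applying the filter) and twicing/boosting (adding back filtered residuals). The paper selects the type and iteration count per patch from an estimated risk; this demo substitutes the true MSE as an oracle, which is an assumption for illustration only:

```python
import numpy as np

def saif_style_iterations(y, base_filter, clean, k_max=10):
    """Return the best diffusion or twicing iterate of base_filter applied to y."""
    candidates = []
    z = y.copy()
    for _ in range(k_max):                     # diffusion: z_{k+1} = W z_k
        z = base_filter(z)
        candidates.append(z.copy())
    z = base_filter(y)
    for _ in range(k_max):                     # twicing: z_{k+1} = z_k + W(y - z_k)
        z = z + base_filter(y - z)
        candidates.append(z.copy())
    mse = [np.mean((c - clean) ** 2) for c in candidates]   # oracle risk stand-in
    return candidates[int(np.argmin(mse))]

# usage: saif_style_iterations(noisy, lambda z: scipy.ndimage.gaussian_filter(z, 1.0), clean)
```

Diffusion keeps removing noise (and detail) with each pass, while twicing restores detail the base filter removed; which one wins depends on the local signal-to-noise ratio, which is the paper's motivation for a local, risk-driven choice.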

88 citations


Journal ArticleDOI
TL;DR: A novel approximation of smoothing operators by symmetric doubly stochastic matrices is proposed and it is shown that this approximation is stable and accurate, even more so in higher dimensions.
Abstract: We study a general class of nonlinear and shift-varying smoothing filters that operate based on averaging. This important class of filters includes many well-known examples such as the bilateral filter, nonlocal means, general adaptive moving average filters, and more. (Many linear filters such as linear minimum mean-squared error smoothing filters, Savitzky-Golay filters, smoothing splines, and wavelet smoothers can be considered special cases.) They are frequently used in both signal and image processing as they are elegant, computationally simple, and high performing. The operators that implement such filters, however, are not symmetric in general. The main contribution of this paper is to provide a provably stable method for symmetrizing the smoothing operators. Specifically, we propose a novel approximation of smoothing operators by symmetric doubly stochastic matrices and show that this approximation is stable and accurate, even more so in higher dimensions. We demonstrate that there are several im...
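A small sketch of the symmetrization idea on a bilateral-style affinity matrix, using Sinkhorn-type balancing to reach a (nearly) doubly stochastic, symmetric operator; the paper's exact algorithm and its stability analysis are not reproduced here:

```python
import numpy as np

def sinkhorn_doubly_stochastic(K, n_iter=200):
    """Scale a positive affinity matrix K to be (approximately) doubly stochastic."""
    r = np.ones(K.shape[0])
    c = np.ones(K.shape[1])
    for _ in range(n_iter):
        r = 1.0 / (K @ c)            # make rows sum to 1
        c = 1.0 / (K.T @ r)          # make columns sum to 1
    W = r[:, None] * K * c[None, :]
    return 0.5 * (W + W.T)           # average out residual asymmetry

# usage: bilateral-style affinities on a 1-D signal y
y = np.linspace(0, 1, 64) + 0.1 * np.random.default_rng(0).standard_normal(64)
x = np.arange(64.0)
K = np.exp(-(x[:, None] - x[None, :])**2 / 18.0
           - (y[:, None] - y[None, :])**2 / 0.02)
W = sinkhorn_doubly_stochastic(K)
smoothed = W @ y                     # symmetric smoothing of the signal
```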

88 citations


Journal ArticleDOI
TL;DR: A new upsampling method that synergistically combines the median and bilateral filters, thus better preserving depth edges and offering more robustness to noise.
Abstract: We present a new upsampling method to enhance the spatial resolution of depth images. Given a low-resolution depth image from an active depth sensor and a potentially high-resolution color image from a passive RGB camera, we formulate the upsampling as an adaptive cost aggregation problem and solve it using the bilateral filter. The formulation synergistically combines the median and bilateral filters; it thus better preserves the depth edges and is more robust to noise. Numerical and visual evaluations on a total of 37 Middlebury data sets demonstrate the effectiveness of our method. A real-time high-resolution depth capturing system is also developed using a commercial active depth sensor, based on the proposed upsampling method.
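A compact sketch of guidance-weighted upsampling in this spirit: nearest-upsample the low-resolution depth, then re-estimate each pixel with bilateral weights computed on the high-resolution color image. The paper additionally blends in a median-style selection for robustness; only the joint bilateral part is shown, with illustrative parameters:

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale,
                             sigma_s=4.0, sigma_r=0.1, radius=6):
    """depth_lr: (h, w) float; color_hr: (H, W, 3) float in [0, 1]; H = h*scale."""
    H, W = color_hr.shape[:2]
    ys = (np.arange(H) // scale).clip(0, depth_lr.shape[0] - 1)
    xs = (np.arange(W) // scale).clip(0, depth_lr.shape[1] - 1)
    depth = depth_lr[np.ix_(ys, xs)].astype(float)      # nearest upsampling
    num = np.zeros((H, W))
    den = np.zeros((H, W))
    for dy in range(-radius, radius + 1):               # wrap-around at borders is
        for dx in range(-radius, radius + 1):           # tolerated in this sketch
            d_shift = np.roll(depth, (dy, dx), axis=(0, 1))
            c_shift = np.roll(color_hr, (dy, dx), axis=(0, 1))
            w = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2)
                       - ((c_shift - color_hr) ** 2).sum(axis=2) / (2 * sigma_r**2))
            num += w * d_shift
            den += w
    return num / den
```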

87 citations


Book ChapterDOI
13 Dec 2013
TL;DR: The proposed algorithm is quantitatively evaluated on the Middlebury stereo dataset and is applied to inpaint Kinect data and upsample Lidar's range data, showing that the algorithm is competent.
Abstract: In this paper, we propose to conduct inpainting and upsampling for defective depth maps when aligned color images are given. These tasks are referred to as the guided depth enhancement problem. We formulate the problem based on the heat diffusion framework. The pixels with known depth values are treated as the heat sources and the depth enhancement is performed via diffusing the depth from these sources to unknown regions. The diffusion conductivity is designed in terms of the guidance color image so that a linear anisotropic diffusion problem is formed. We further cast the steady state problem of this diffusion into the famous random walk model, by which the enhancement is achieved efficiently by solving a sparse linear system. The proposed algorithm is quantitatively evaluated on the Middlebury stereo dataset and is applied to inpaint Kinect data and upsample Lidar's range data. Comparisons to the commonly used bilateral filter and Markov Random Field based methods are also presented, showing that our algorithm is competent.
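A sketch of the diffusion/random-walk formulation described above: known depths act as Dirichlet (heat source) constraints, and unknown depths are solved from a sparse linear system whose conductivities come from the guidance color image. The conductivity form and the parameter beta are illustrative assumptions; the loops are plain Python, adequate only for small images:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def guided_depth_fill(depth, known_mask, color, beta=50.0):
    """depth: (H, W) with valid values where known_mask is True; color: (H, W, 3)."""
    H, W = depth.shape
    idx = np.arange(H * W).reshape(H, W)
    gray = color.mean(axis=2)
    rows, cols, vals = [], [], []
    b = np.zeros(H * W)
    def cond(i0, j0, i1, j1):                  # conductivity from color contrast
        return np.exp(-beta * (gray[i0, j0] - gray[i1, j1]) ** 2)
    for i in range(H):
        for j in range(W):
            p = idx[i, j]
            if known_mask[i, j]:               # heat source: pin the known depth
                rows.append(p); cols.append(p); vals.append(1.0)
                b[p] = depth[i, j]
                continue
            wsum = 0.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    w = cond(i, j, ni, nj)
                    rows.append(p); cols.append(idx[ni, nj]); vals.append(-w)
                    wsum += w
            rows.append(p); cols.append(p); vals.append(wsum)
    A = sp.csr_matrix((vals, (rows, cols)), shape=(H * W, H * W))
    return spsolve(A, b).reshape(H, W)         # steady state of the diffusion
```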

Journal ArticleDOI
TL;DR: Comparisons show that this approach matches other state-of-the-art methods in the extraction of polarimetric information and shows superior performance for edge restoration and noise smoothing.
Abstract: In this paper, we introduce an iterative speckle filtering method for polarimetric SAR (PolSAR) images based on the bilateral filter. To locally adapt to the spatial structure of images, this filter relies on pixel similarities in both spatial and radiometric domains. To deal with polarimetric data, we study the use of similarities based on a statistical distance called the Kullback-Leibler divergence, as well as two geodesic distances on Riemannian manifolds. To cope with speckle, we propose to progressively refine the result thanks to an iterative scheme. Experiments are run over synthetic and experimental data. First, simulations are generated to study the effects of filtering parameters in terms of polarimetric reconstruction error, edge preservation and smoothing of homogeneous areas. Comparison with other methods shows that our approach compares well to other state-of-the-art methods in the extraction of polarimetric information and shows superior performance for edge restoration and noise smoothing. The filter is then applied to experimental data sets from ESAR and FSAR sensors (DLR) at L-band and S-band, respectively. These last experiments show the ability of the filter to restore structures such as buildings and roads and to preserve boundaries between regions while achieving a high amount of smoothing in homogeneous areas.

Journal ArticleDOI
TL;DR: A novel edge-preserving texture suppression filter that exploits the joint bilateral filter as a bridge to achieve both texture smoothing and edge preservation, and is extended to a variety of image processing applications.
Abstract: Obtaining a texture-smoothing and edge-preserving filtered output is significant for image decomposition. Although edges and texture differ saliently in human vision, automatically distinguishing them is a difficult task, for they have similar intensity differences or gradient responses. The state-of-the-art edge-preserving smoothing (EPS) based decomposition approaches struggle to obtain a satisfactory result. We propose a novel edge-preserving texture suppression filter, exploiting the joint bilateral filter as a bridge to achieve both texture smoothing and edge preservation. We develop iterative asymmetric sampling and a local linear model to produce a degenerative image that suppresses the texture, and apply an edge correction operator to achieve edge preservation. An efficient accelerated implementation is introduced to improve the performance of the filtering response. The experiments demonstrate that our filter produces satisfactory outputs with both texture-smoothing and edge-preserving properties, when compared with the results of other popular EPS approaches in signal, visual and time analysis. Finally, we extend our filter to a variety of image processing applications.

Proceedings ArticleDOI
03 Jul 2013
TL;DR: A comparative study of seven filters, namely Lee, Frost, Median, Speckle Reduction Anisotropic Diffusion (SRAD), Perona-Malik's Anisotropic Diffusion (PMAD) filter, Speckle Reduction Bilateral Filter (SRBF) and a Speckle Reduction filter based on soft thresholding in the Wavelet transform, to determine which despeckling algorithm is most effective and optimal for real-time implementation.
Abstract: At present, ultrasound is one of the essential tools for noninvasive medical diagnosis. However, speckle noise is inherent in medical ultrasound images and it is the cause of decreased resolution and contrast-to-noise ratio. Low image quality is an obstacle for effective feature extraction, recognition, analysis, and edge detection; it also affects image interpretation by doctors and the accuracy of computer-assisted diagnostic techniques. Thus, speckle reduction is a significant and critical step in the pre-processing of ultrasound images. Many speckle reduction techniques have been studied by researchers, but to date there is no comprehensive method that takes all the constraints into consideration. In this paper we discuss seven filters, namely the Lee, Frost, Median, Speckle Reduction Anisotropic Diffusion (SRAD), Perona-Malik's Anisotropic Diffusion (PMAD), Speckle Reduction Bilateral Filter (SRBF) and a Speckle Reduction filter based on soft thresholding in the Wavelet transform. A comparative study of these filters has been made in terms of preserving features and edges as well as effectiveness of de-noising. We computed five established evaluation metrics in order to determine which despeckling algorithm is most effective and optimal for real-time implementation. In addition, the experimental results are demonstrated by filtered images and a table of statistical data.
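For concreteness, a textbook formulation of one of the compared filters, the Lee filter, which shrinks each pixel toward its local mean by a gain derived from local statistics (an illustrative implementation, not the paper's code):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, noise_var=None):
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    var = sq_mean - mean * mean                       # local variance
    if noise_var is None:
        noise_var = np.mean(var)                      # crude global noise estimate
    gain = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + gain * (img - mean)                 # adaptive shrink toward mean
```

In flat regions (variance near the noise level) the gain approaches zero and the output is the local mean; near edges the gain approaches one and the pixel is preserved.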

Journal ArticleDOI
01 Nov 2013
TL;DR: The key idea is a general formulation to modulate the traditional sample distance measures, which are determined by sample position in spatial domain, with a similarity measure that considers arbitrary per sample attributes, which leads to the notion of bilateral blue noise whose properties are influenced by not only the uniformity of the sample positions but also the similarity of the sample attributes.
Abstract: Blue noise sampling is an important component in many graphics applications, but existing techniques consider mainly the spatial positions of samples, making them less effective when handling problems with non-spatial features. Examples include biological distribution in which plant spacing is influenced by non-positional factors such as tree type and size, photon mapping in which photon flux and direction are not a direct function of the attached surface, and point cloud sampling in which the underlying surface is unknown a priori. These scenarios can benefit from blue noise sample distributions, but cannot be adequately handled by prior art. Inspired by bilateral filtering, we propose a bilateral blue noise sampling strategy. Our key idea is a general formulation to modulate the traditional sample distance measures, which are determined by sample position in spatial domain, with a similarity measure that considers arbitrary per sample attributes. This modulation leads to the notion of bilateral blue noise whose properties are influenced by not only the uniformity of the sample positions but also the similarity of the sample attributes. We describe how to incorporate our modulation into various sample analysis and synthesis methods, and demonstrate applications in object distribution, photon density estimation, and point cloud sub-sampling.
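A toy sketch of the bilateral modulation idea applied to dart-throwing Poisson-disk sampling in 2D: the conflict radius between two candidates grows when their attributes are similar, so similar samples must keep blue-noise spacing while dissimilar ones may sit closer together. The paper's exact modulation function differs; this one is an illustrative assumption:

```python
import numpy as np

def bilateral_dart_throwing(n_target, r=0.05, sigma_a=0.2,
                            n_trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    pts, attrs = [], []
    for _ in range(n_trials):
        p = rng.random(2)
        a = p[0]                               # toy attribute: just the x-coordinate
        ok = True
        for q, b in zip(pts, attrs):
            sim = np.exp(-(a - b) ** 2 / (2 * sigma_a ** 2))   # attribute similarity
            if np.linalg.norm(p - q) < r * (0.5 + sim):        # modulated radius
                ok = False
                break
        if ok:
            pts.append(p)
            attrs.append(a)
            if len(pts) >= n_target:
                break
    return np.array(pts), np.array(attrs)
```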

Journal ArticleDOI
TL;DR: This paper proposes a novel nonlinear filter, named the rank order Laplacian of Gaussian (ROLG) filter, based on which a new interest point detector is developed that achieves superior performance compared to four state-of-the-art detectors.

Proceedings Article
01 Jan 2013
TL;DR: Experimental results show that the proposed weighted joint bilateral filter achieves the best improvement in depth map accuracy and can perform refinement in real time.
Abstract: In this paper, we propose a new refinement filter for depth maps. The filter convolves a depth map with a kernel computed jointly on a natural image and a weight map. We call this filter the weighted joint bilateral filter. The filter fits the outline of an object in the depth map to the outline of the object in the natural image, and it reduces noise. An additional slope depth compensation filter removes blur across object boundaries. The filter set's computational cost is low and is independent of the depth range. Thus we can refine depth maps to generate accurate depth maps at lower cost. In addition, we can apply the filters to various types of depth maps, such as those computed by simple block matching, Markov random field based optimization, or depth sensors. Experimental results show that the proposed filter achieves the best improvement in depth map accuracy, and that it can perform refinement in real time.

Journal ArticleDOI
Wei Sun1
01 Nov 2013-Optik
TL;DR: Using a gray-scale opening operation and fast joint bilateral filtering, the proposed algorithm can effectively obtain the global atmospheric light and greatly improve the speed and accuracy of solving the atmospheric scattering function.

Journal ArticleDOI
TL;DR: Tests on synthetic and real SAR images show that SFAW notably smoothes speckle with unperceivable detail blurring and achieves better performances than other related methods.
Abstract: In this letter, a modified bilateral filter suitable for synthetic aperture radar (SAR) image despeckling, named space-domain filter with alterable window (SFAW), is proposed. SFAW features geometric adaptivity, in both spatial and similarity functions, respectively driven by the following: 1) the local coefficient of variation (CV) and 2) the joint probability density function model of two pixels having the same reflectivity. Tests on synthetic and real SAR images show that SFAW notably smoothes speckle with unperceivable detail blurring and achieves better performances than other related methods.

Journal ArticleDOI
TL;DR: A ray-based noise-weighting scheme is introduced to the FBP algorithm and this new FBP-type algorithm significantly reduces or removes the streaking artifacts in low-dose CT.
Abstract: Purpose: This paper derives a ray-by-ray weighted filtered backprojection (rFBP) algorithm, based on our recently developed view-by-view weighted, filtered backprojection (vFBP) algorithm. Methods: The rFBP algorithm directly extends the vFBP algorithm by letting the noise weighting vary from channel to channel within each view. The projection data can be weighted in inverse proportion to their noise variances. Also, an edge-preserving bilateral filter is suggested to perform post filtering to further reduce the noise. The proposed algorithm has been implemented for the circular-orbit cone-beam geometry based on Feldkamp's algorithm. Results: Image reconstructions with computer simulations and clinical cadaver data are presented to illustrate the effectiveness and feasibility of the proposed algorithm. The new FBP-type algorithm is able to significantly reduce or remove the noise texture, which the conventional FBP is unable to do. The computation time of the proposed rFBP algorithm is approximately the same as the conventional FBP algorithm. Conclusions: A ray-based noise-weighting scheme is introduced to the FBP algorithm. This new FBP-type algorithm significantly reduces or removes the streaking artifacts in low-dose CT.

Journal ArticleDOI
TL;DR: A hybrid algorithm for 2D-to-3D conversion in 3D displays is proposed; execution time can be reduced by 25%-35% and the depth perception score is between 75 and 85, so the human eye cannot sense noticeable differences in the final 3D rendering.
Abstract: In recent years, 3D display technology has been receiving increasing attention. The most intuitive 3D method is to use two temporally synchronized video streams for the left and right eyes, respectively. However, traditional 2D video content is captured by one camera, and in order to synthesize the left and right views as if captured by two cameras, depth map information is required. In this paper, we propose a hybrid algorithm for 2D-to-3D conversion in 3D displays; it is a good way to generate 3D effects from traditional 2D video content on 3D displays. We choose three depth cues for depth estimation: motion information, linear perspective, and texture characteristics. Moreover, we adopt a bilateral filter for depth map smoothing and noise removal. From the experimental results, execution time can be reduced by 25%-35% and the depth perception score is between 75 and 85. Thus, the human eye cannot sense noticeable differences in the final 3D rendering. Furthermore, our proposed hybrid algorithm is very suitable for 2D-to-3D conversion in 3D displays.

Book ChapterDOI
01 Nov 2013
TL;DR: This chapter exemplarily focuses on two state-of-the-art methods, bilateral filtering and total variation (TV) denoising, and discusses several alternative positions in the pipeline where these methods can be applied.
Abstract: When considering the task of denoising ToF data, two issues arise concerning the optimal strategy. The first one is the choice of an appropriate denoising method and its adaptation to ToF data; the second one is the issue of the optimal positioning of the denoising step within the processing pipeline, between the acquisition of raw sensor data and the final output of the depth map. Concerning the first issue, several denoising approaches specifically for ToF data have been proposed in the literature, and one contribution of this chapter is to provide an overview. To tackle the second issue, we exemplarily focus on two state-of-the-art methods, bilateral filtering and total variation (TV) denoising, and discuss several alternative positions in the pipeline where these methods can be applied. In our experiments, we compare and evaluate the results of each combination of method and position both qualitatively and quantitatively. It turns out that for TV denoising the optimal position is at the very end of the pipeline. For the bilateral filter, a quantitative comparison shows that applying it to the raw data together with a subsequent median filtering provides a low error with respect to ground truth. Qualitatively, it competes with applying the (cross-)bilateral filter to the depth data. In particular, the optimal position in general depends on the considered method. As a consequence, for any newly introduced denoising technique, finding its optimal position within the pipeline is an open issue.

Patent
26 Jun 2013
TL;DR: An FPGA (field programmable gate array)-based infrared image detail enhancing system and method that improves image contrast, enhances detail information, and suppresses background noise.
Abstract: The invention discloses an FPGA (field programmable gate array)-based infrared image detail enhancing system and method. The system comprises a bilateral filtering module, a Gaussian filtering module, a histogram projecting module and an automatic gain control module. The bilateral filtering module is connected with the Gaussian filtering module, which is connected with the histogram projecting module and the automatic gain control module respectively. Original input data first passes through the bilateral filtering module to obtain the fundamental frequency (base) information of the image; the fundamental frequency information passes through the Gaussian filtering module to be smoothed, and the result is differenced against the original input data to obtain the image detail information. The detail information is amplified by the automatic gain control module, the fundamental frequency information is compressed by the histogram projecting module, and the detail and fundamental frequency information are summed to obtain the output image. According to the invention, the contrast ratio of the image can be improved, the detail information can be enhanced, background noise can be suppressed, and the common problems of fuzzy edges and poor visual effect in the images of prior-art thermal infrared imager systems can be solved.
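A software sketch of the pipeline as described in the abstract; parameters are assumptions, and a percentile stretch stands in for the histogram projection module:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def _bilateral(img, sigma_s, sigma_r, radius):
    """Naive bilateral filter, adequate for a small demo."""
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            s = np.roll(img, (dy, dx), axis=(0, 1))
            w = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2)
                       - (s - img) ** 2 / (2 * sigma_r**2))
            out += w * s
            norm += w
    return out / norm

def enhance_ir(raw, gain=2.5, out_max=255.0):
    raw = raw.astype(float)
    base = _bilateral(raw, sigma_s=5.0, sigma_r=0.1 * np.ptp(raw), radius=8)
    detail = raw - gaussian_filter(base, sigma=2.0)       # difference against input
    detail *= gain                                        # automatic gain stand-in
    lo, hi = np.percentile(base, (1, 99))                 # histogram-projection stand-in:
    base = np.clip((base - lo) / (hi - lo), 0, 1) * out_max   # compress base range
    return base + detail                                  # recombine base + detail
```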

Journal ArticleDOI
TL;DR: The multiscale segmentation method is robust and accurate and can be used for MRI-based attenuation correction in combined MR/PET applications.

Patent
Xiaofeng Fan1
14 Mar 2013
TL;DR: In this article, an image sensor can include pixels that are grouped into subsets of pixels, with each subset including three or more pixels, and charge in N pixels is read out and summed together.
Abstract: An image sensor can include pixels that are grouped into subsets of pixels, with each subset including three or more pixels. A method for asymmetrical high dynamic range imaging can include capturing an image of a subject scene using a single integration time for all of the pixels. In a subset of pixels, charge in N pixels is read out and summed together. N represents a number that is between two and one less than a total number of pixels in the subset. Un-summed charge is read out from one pixel in the subset. The un-summed charge and the summed charge are combined when producing a high dynamic range image.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: A joint trilateral filtering (JTF) algorithm is proposed for solving depth map super-resolution problems, which utilizes and preserves edge information from the associated high-resolution (HR) image by taking spatial and range information of local pixels.
Abstract: Depth map super-resolution is an emerging topic due to the increasing needs and applications using RGB-D sensors. Together with the color image, the corresponding range data provides additional information and makes visual analysis tasks more tractable. However, since the depth maps captured by such sensors are typically of limited resolution, it is preferable to enhance their resolution for improved recognition. In this paper, we present a novel joint trilateral filtering (JTF) algorithm for solving depth map super-resolution (SR) problems. Inspired by bilateral filtering, our JTF utilizes and preserves edge information from the associated high-resolution (HR) image by taking spatial and range information of local pixels. Our proposed filter further integrates local gradient information of the depth map when synthesizing its HR output, which alleviates textural artifacts like edge discontinuities. Quantitative and qualitative experimental results demonstrate the effectiveness and robustness of our approach over prior depth map upsampling works.
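A sketch of such a trilateral weighting: spatial closeness and color-range similarity (as in joint bilateral filtering), multiplied by a third term from the local depth gradient that discourages smoothing across depth discontinuities. The paper's exact gradient term may differ; this form is an assumption:

```python
import numpy as np

def joint_trilateral(depth, color, sigma_s=4.0, sigma_r=0.1,
                     sigma_g=0.05, radius=5):
    """depth: (H, W) float; color: (H, W, 3) float in [0, 1]."""
    gy, gx = np.gradient(depth)
    grad = np.sqrt(gx**2 + gy**2)                    # local depth gradient magnitude
    num = np.zeros_like(depth)
    den = np.zeros_like(depth)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            d = np.roll(depth, (dy, dx), axis=(0, 1))
            c = np.roll(color, (dy, dx), axis=(0, 1))
            g = np.roll(grad, (dy, dx), axis=(0, 1))
            w = (np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2))
                 * np.exp(-((c - color) ** 2).sum(axis=2) / (2 * sigma_r**2))
                 * np.exp(-g**2 / (2 * sigma_g**2)))  # downweight high-gradient samples
            num += w * d
            den += w
    return num / den
```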

Journal ArticleDOI
TL;DR: This method extracts all stripe noise spectra from each frame, then excludes relatively stationary images by sub-pixel registration to obtain continuously moving image sequences, and calculates the new histogram of the current column image, thereby effectively diminishing noise at all frequencies.

Journal ArticleDOI
TL;DR: This paper presents a different bas-relief generation algorithm based on geometric compression and starting from a 3D mesh input, which offers control over the level of detail, making it flexible for the adjustment of the appearance of details.
Abstract: Most of the existing approaches to bas-relief generation operate in image space, which is quite time-consuming in practice. This paper presents a different bas-relief generation algorithm based on geometric compression and starting from a 3D mesh input. The feature details are first extracted from the original objects using a spatial bilateral filtering technique. Then, a view-dependent coordinate mapping method is applied to build the height domain for the current view. After fitting the compression datum plane, the algorithm uses an adaptive compression function to scale and combine the Z values of the base mesh and the fine details. This approach offers control over the level of detail, making it flexible for the adjustment of the appearance of details. For a typical input mesh with 100k triangles, this algorithm computes a bas-relief in 0.214s.
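A height-field sketch of this scheme: a bilateral filter separates fine details from the base surface, the base heights are nonlinearly compressed, and the details are re-added. The paper operates on a view-dependent height domain of a 3D mesh and fits a compression datum plane; a 2D height map and a log compression function stand in here as assumptions:

```python
import numpy as np

def bas_relief(height, sigma_s=5.0, sigma_r=None, radius=8, alpha=5.0):
    """height: (H, W) float height map for the current view."""
    if sigma_r is None:
        sigma_r = 0.1 * np.ptp(height)
    base = np.zeros_like(height)
    norm = np.zeros_like(height)
    for dy in range(-radius, radius + 1):             # bilateral base/detail split
        for dx in range(-radius, radius + 1):
            s = np.roll(height, (dy, dx), axis=(0, 1))
            w = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2)
                       - (s - height) ** 2 / (2 * sigma_r**2))
            base += w * s
            norm += w
    base /= norm
    detail = height - base                            # extracted feature details
    z = base - base.min()
    base_c = np.log1p(alpha * z) / np.log1p(alpha * z.max())   # compress base heights
    return base_c + detail                            # re-attach fine details
```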

Journal ArticleDOI
TL;DR: The experimental results show that the proposed pixon-based approach has a reduced computational load and a better accuracy compared to the other existing pixon-based image segmentation techniques.
Abstract: In this paper, a new pixon-based method is presented for image segmentation. In the proposed algorithm, bilateral filtering is used as a kernel function to form a pixonal image. Using this filter reduces the noise and smoothes the image slightly. By using this pixon-based method, over-segmentation of the image can be avoided. Indeed, the bilateral filtering, as a preprocessing step, eliminates the unnecessary details of the image and results in a smaller number of pixons, faster performance and more robustness against unwanted environmental noise. Then, the obtained pixonal image is segmented using a hierarchical clustering method (the Fuzzy C-means algorithm). The experimental results show that the proposed pixon-based approach has a reduced computational load and a better accuracy compared to the other existing pixon-based image segmentation techniques.
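A compact sketch of the two-stage pipeline: bilaterally pre-filtered intensities are clustered with standard fuzzy c-means updates. The pixon formation step is summarized here simply by clustering the filtered image, which is a simplification of the paper's method:

```python
import numpy as np

def fcm(x, k=3, m=2.0, n_iter=50, seed=0):
    """Standard fuzzy c-means on a 1-D feature vector x (e.g., filtered gray levels)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), k))
    u /= u.sum(axis=1, keepdims=True)                 # random initial memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)   # weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))                   # membership update
        u /= u.sum(axis=1, keepdims=True)
    return u.argmax(axis=1), centers

# usage: labels, _ = fcm(bilateral_filtered_image.ravel())
#        segmentation = labels.reshape(bilateral_filtered_image.shape)
```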

Journal ArticleDOI
TL;DR: A new adaptive wavelet packet-based approach to minimize speckle noise in ultrasound images is proposed, which combines wavelet packet thresholding with a bilateral filter and a modified NeighShrink technique.
Abstract: A new adaptive wavelet packet-based approach to minimize speckle noise in ultrasound images is proposed. This method combines wavelet packet thresholding with a bilateral filter. Here, the best bases after wavelet packet decomposition are selected by comparing the first singular value of all sub-bands, and the noisy coefficients are thresholded using a modified NeighShrink technique. The algorithm is tested with various ultrasound images, and the results, in terms of peak signal-to-noise ratio and mean structural similarity values, are compared with those for some well-known de-speckling techniques. The simulation results indicate that the proposed method has better potential to minimize speckle noise and retain fine details of the ultrasound image.