
Showing papers on "Bilateral filter published in 2017"


Journal ArticleDOI
TL;DR: This paper presents a novel method for underwater image enhancement inspired by the Retinex framework, which simulates the human visual system and utilizes the combination of the bilateral filter and trilateral filter on the three channels of the image in CIELAB color space according to the characteristics of each channel.

244 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: Zhang et al. as discussed by the authors introduced robust and synergetic hand-crafted features and a simple but efficient deep feature from a convolutional neural network (CNN) architecture for defocus estimation.
Abstract: In this paper, we introduce robust and synergetic hand-crafted features and a simple but efficient deep feature from a convolutional neural network (CNN) architecture for defocus estimation. This paper systematically analyzes the effectiveness of different features, and shows how each feature can compensate for the weaknesses of other features when they are concatenated. For a full defocus map estimation, we extract image patches on strong edges sparsely, after which we use them for deep and hand-crafted feature extraction. In order to reduce the degree of patch-scale dependency, we also propose a multi-scale patch extraction strategy. A sparse defocus map is generated using a neural network classifier followed by a probability-joint bilateral filter. The final defocus map is obtained from the sparse defocus map with guidance from an edge-preserving filtered input image. Experimental results show that our algorithm is superior to state-of-the-art algorithms in terms of defocus estimation. Our work can be used for applications such as segmentation, blur magnification, all-in-focus image generation, and 3-D estimation.
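The propagation step, turning a sparse defocus map into a dense one with guidance from the image, can be illustrated with a plain joint bilateral filter. This is a simplified sketch, not the authors' probability-joint variant: the function name and parameters are ours, and the classifier's confidence term is omitted.

```python
import numpy as np

def joint_bilateral_fill(sparse, mask, guide, radius=6, sigma_s=3.0, sigma_r=0.1):
    """Spread sparse values to every pixel with a joint bilateral filter.

    Weights combine spatial distance with similarity in the guidance
    image `guide`; only pixels where `mask` is True contribute values.
    """
    H, W = guide.shape
    ps = np.pad(sparse, radius)                 # zeros outside the image
    pm = np.pad(mask, radius)                   # False outside the image
    pg = np.pad(guide, radius, mode="reflect")
    num = np.zeros_like(guide, dtype=float)
    den = np.zeros_like(guide, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sv = ps[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            mv = pm[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            gv = pg[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            w = mv * np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)) \
                   * np.exp(-(gv - guide) ** 2 / (2 * sigma_r ** 2))
            num += w * sv
            den += w
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
```

With a piecewise-constant guide, each region inherits the value of the seed that matches it in the guidance image, which is the behavior the full-map estimation relies on.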

95 citations


Journal ArticleDOI
TL;DR: An adaptive image restoration algorithm based on 2-D bilateral filtering has been proposed to enhance the signal-to-noise ratio (SNR) of the intrusion location for a phase-sensitive optical time domain reflectometry (Φ-OTDR) system, and has the potential to precisely extract the intrusion location from a harsh environment with strong background noise.
Abstract: An adaptive image restoration algorithm based on 2-D bilateral filtering has been proposed to enhance the signal-to-noise ratio (SNR) of the intrusion location for a phase-sensitive optical time domain reflectometry (Ф-OTDR) system. By converting the spatial and time information of the Ф-OTDR traces into a 2-D image, the proposed 2-D bilateral filtering algorithm can smooth the noise and preserve the useful signal efficiently. To simplify the algorithm, a Lorentz spatial function is adopted to replace the original Gaussian function, which is more practical. Furthermore, an adaptive parameter setting method is developed according to the relation between the optimal gray level standard deviation and the noise standard deviation, which is much faster and more robust for different types of signals. In the experiment, the SNR of the location information has been improved by over 14 dB without spatial resolution loss for a signal with an original SNR of 6.43 dB in 27.6 km of sensing fiber. The proposed method has the potential to precisely extract the intrusion location from a harsh environment with strong background noise.
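The core idea, a 2-D bilateral filter whose Gaussian spatial kernel is replaced by a Lorentzian, fits in a few lines of numpy. This is an illustrative re-implementation, not the authors' code; the function name and parameter values are ours, and the adaptive parameter setting step is omitted.

```python
import numpy as np

def bilateral_lorentz(img, radius=2, gamma=1.5, sigma_r=0.2):
    """Bilateral filter with a Lorentzian spatial kernel.

    Spatial weight: 1 / (1 + d^2 / gamma^2); range weight: Gaussian on
    the intensity difference. `img` is a 2-D float array.
    """
    H, W = img.shape
    pad = np.pad(img, radius, mode="reflect")
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            w_s = 1.0 / (1.0 + (dy * dy + dx * dx) / (gamma * gamma))
            shifted = pad[radius + dy:radius + dy + H,
                          radius + dx:radius + dx + W]
            w_r = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))
            w = w_s * w_r
            num += w * shifted
            den += w
    return num / den
```

The Lorentzian decays more slowly than a Gaussian but avoids the exponential in the spatial term, which is the practicability argument made in the abstract.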

77 citations


Journal ArticleDOI
TL;DR: This paper analyzes a parallel implementation of the bilateral filter adapted for point clouds, which denoises a point with respect to its neighbors by considering not only the distance from the neighbors to the point but also the distance along a normal direction.
Abstract: Point sets obtained by 3D scanners are often corrupted with noise, which can have several causes, such as a tangential acquisition direction, changing environmental lights or a reflective object material. It is thus crucial to design efficient tools to remove noise from the acquired data without removing important information such as sharp edges or shape details. To do so, Fleishman et al. introduced a bilateral filter for meshes adapted from the bilateral filter for gray level images. This anisotropic filter denoises a point with respect to its neighbors by considering not only the distance from the neighbors to the point but also the distance along a normal direction. This simple fact allows for a much better preservation of sharp edges. In this paper, we analyze a parallel implementation of the bilateral filter adapted for point clouds. The ANSI C++ source code to reproduce the results of the online demo is available on the web page of the article.
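A brute-force numpy sketch of this kind of point-cloud bilateral filter follows: one denoising pass, with normals assumed given. Names and parameters are illustrative; a real implementation would use a spatial index for neighborhoods and parallelize the loop, as the paper does.

```python
import numpy as np

def bilateral_point_denoise(points, normals, sigma_s=0.5, sigma_n=0.2):
    """One bilateral denoising pass over an (n, 3) point cloud.

    Each point is moved along its own normal by a weighted average of
    its neighbors' signed offsets along that normal, weighted by
    Euclidean distance (sigma_s) and by the offset magnitude itself
    (sigma_n), so points across a sharp edge contribute little.
    """
    out = points.copy()
    for i, (p, n) in enumerate(zip(points, normals)):
        d = points - p                    # vectors to all other points
        dist = np.linalg.norm(d, axis=1)  # spatial distance
        h = d @ n                         # signed offset along the normal
        w = np.exp(-dist ** 2 / (2 * sigma_s ** 2)) \
          * np.exp(-h ** 2 / (2 * sigma_n ** 2))
        out[i] = p + n * (w @ h) / w.sum()
    return out
```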

68 citations


Journal ArticleDOI
TL;DR: A novel classification framework for hyperspectral images based on the joint bilateral filter and sparse representation classification (SRC) and the spectral similarity-based joint SRC (SS-JSRC) is proposed to overcome the weakness of the traditional JSRC method.

62 citations


Journal ArticleDOI
TL;DR: A fast implementation of bilateral filtering is presented, based on an optimal expansion of the filter kernel into a sum of factorized terms; this leads to a simple and elegant solution in terms of the eigenvectors of a square matrix.
Abstract: A fast implementation of bilateral filtering is presented, which is based on an optimal expansion of the filter kernel into a sum of factorized terms. These terms are computed by minimizing the expansion error in the mean-square-error sense. This leads to a simple and elegant solution in terms of the eigenvectors of a square matrix. In this way, the bilateral filter is applied through computing a few Gaussian convolutions, for which very efficient algorithms are readily available. Moreover, the expansion functions are optimized for the histogram of the input image, leading to improved accuracy. It is shown that this further optimization is made possible by removing the commonly deployed constraint of shiftability of the basis functions. Experimental validation is carried out in the context of digital rock imaging. Results on large 3D images of rock samples show the superiority of the proposed method with respect to other fast approximations of bilateral filtering.
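The paper's eigenvector-optimal, histogram-adapted expansion is specific to its derivation, but the general structure it exploits, expressing the bilateral filter as a ratio of a few Gaussian convolutions taken at sampled intensity levels, can be sketched as below. This uses plain piecewise-linear interpolation between uniformly spaced levels, in the spirit of earlier fast approximations, not the optimized expansion of the paper; all names and defaults are ours.

```python
import numpy as np

def gauss_blur(img, sigma):
    """Separable Gaussian convolution with reflected borders."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-(x * x) / (2.0 * sigma ** 2))
    k /= k.sum()
    H, W = img.shape
    pad = np.pad(img, ((r, r), (0, 0)), mode="reflect")
    img = sum(k[i] * pad[i:i + H] for i in range(2 * r + 1))
    pad = np.pad(img, ((0, 0), (r, r)), mode="reflect")
    return sum(k[i] * pad[:, i:i + W] for i in range(2 * r + 1))

def fast_bilateral(img, sigma_s=2.0, sigma_r=0.1, K=8):
    """Bilateral filter as K pairs of Gaussian convolutions, one pair
    per sampled intensity level, with piecewise-linear interpolation
    between the per-level results."""
    levels = np.linspace(img.min(), img.max(), K)
    J = []
    for lv in levels:
        w = np.exp(-(img - lv) ** 2 / (2.0 * sigma_r ** 2))
        J.append(gauss_blur(w * img, sigma_s) /
                 np.maximum(gauss_blur(w, sigma_s), 1e-12))
    J = np.stack(J)
    step = levels[1] - levels[0]
    idx = np.clip(((img - levels[0]) / step).astype(int), 0, K - 2)
    t = (img - levels[idx]) / step
    rr, cc = np.indices(img.shape)
    return (1.0 - t) * J[idx, rr, cc] + t * J[idx + 1, rr, cc]
```

Each of the K terms is a pure Gaussian convolution, so the cost is K blurs regardless of the range kernel, which is the speed argument the abstract makes.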

48 citations


Journal ArticleDOI
TL;DR: The results of various quantitative and qualitative measures, together with visual inspection of denoised synthetic and real ultrasound images, demonstrate that the proposed hybrid algorithm has strong denoising capability and is able to preserve fine image details, such as the edge of a lesion, better than previously developed methods for speckle noise reduction.

44 citations


Journal ArticleDOI
Tao Dai1, Weizhi Lu1, Wei Wang1, Jilei Wang1, Shu-Tao Xia1 
TL;DR: An entropy-based bilateral filter with a new range kernel containing a new range distance that is robust to noise; to take the local statistics of the image into account, local entropy is applied to adaptively guide the selection of the range parameter.

35 citations


Journal ArticleDOI
TL;DR: A comparative study shows that the SANLM denoising filter gives the best performance in terms of PSNR, SSIM, and visual interpretation, and also helps in clinical diagnosis of the brain.
Abstract: The magnetic resonance imaging (MRI) modality is an effective tool in the diagnosis of the brain. These MR images are introduced with noise during acquisition, which reduces the image quality and limits the accuracy in diagnosis. Elimination of noise in medical images is an important preprocessing task, and there exist different methods to eliminate noise in medical images. In this article, different denoising algorithms such as nonlocal means, principal component analysis, bilateral, and spatially adaptive nonlocal means (SANLM) filters are studied to eliminate noise in MR images. Comparative analysis of these techniques has been carried out with the help of various metrics such as signal-to-noise ratio, peak signal-to-noise ratio (PSNR), mean squared error, root mean squared error, and structural similarity (SSIM). This comparative study shows that the SANLM denoising filter gives the best performance in terms of PSNR, SSIM, and visual interpretation. It also helps in clinical diagnosis of the brain.
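For reference, two of the metrics used in such comparisons reduce to a few lines each; this is a generic sketch assuming 8-bit images (the `peak` default is our assumption).

```python
import numpy as np

def rmse(ref, test):
    """Root mean squared error between two images."""
    diff = ref.astype(np.float64) - test.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak * peak / mse))
```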

34 citations


Journal ArticleDOI
TL;DR: Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.

32 citations


Proceedings ArticleDOI
21 Jul 2017
TL;DR: The CoF extends the BF to deal with boundaries, not just edges, and learns co-occurrences directly from the image; various filtering results can be achieved by directing it to learn the co-occurrence matrix from part of the image or from a different image.
Abstract: Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on the Bilateral Filter (BF) but instead of using a Gaussian on the range values to preserve edges it relies on a co-occurrence matrix. Pixel values that co-occur frequently in the image (i.e., inside textured regions) will have a high weight in the co-occurrence matrix. This, in turn, means that such pixel pairs will be averaged and hence smoothed, regardless of their intensity differences. On the other hand, pixel values that rarely co-occur (i.e., across texture boundaries) will have a low weight in the co-occurrence matrix. As a result, they will not be averaged and the boundary between them will be preserved. The CoF therefore extends the BF to deal with boundaries, not just edges. It learns co-occurrences directly from the image. We can achieve various filtering results by directing it to learn the co-occurrence matrix from a part of the image, or a different image. We give the definition of the filter, discuss how to use it with color images and show several use cases.
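The mechanism is easy to prototype: learn a co-occurrence matrix from the quantized image, normalize it by the value frequencies, and use it in place of the Gaussian range term. The sketch below is a minimal single-channel version with illustrative parameters, not the authors' implementation.

```python
import numpy as np

def cof(img, levels=16, radius=2, sigma_s=2.0):
    """Single-channel Co-occurrence Filter sketch: the Gaussian range
    term of the bilateral filter is replaced by a co-occurrence
    statistic learned from the image itself."""
    H, W = img.shape
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    # 1. Count co-occurrences of quantized values within the window.
    C = np.zeros((levels, levels))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            a = q[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
            b = q[max(0, dy):H - max(0, -dy), max(0, dx):W - max(0, -dx)]
            np.add.at(C, (a.ravel(), b.ravel()), 1)
    # 2. Normalize by value frequencies so frequent values are not favored.
    h = np.bincount(q.ravel(), minlength=levels).astype(float)
    M = C / np.maximum(np.outer(h, h), 1.0)
    # 3. Filter: spatial Gaussian times the learned co-occurrence weight.
    pi = np.pad(img, radius, mode="reflect")
    pq = np.pad(q, radius, mode="reflect")
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            iv = pi[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            qv = pq[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)) * M[q, qv]
            num += w * iv
            den += w
    return num / np.maximum(den, 1e-12)
```

On an image with a textured half and a flat half, the texture's alternating values co-occur often and get smoothed together, while the rarely co-occurring pairs across the boundary keep it intact.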

Journal ArticleDOI
TL;DR: The KMGB filter is compared with the partial temporal non-local means filter (PATEN), with the time-intensity profile similarity (TIPS) filter, and with a new version derived from it by introducing a guiding image (GB-TIPS); the results suggest the KMGB filter to be a more robust solution for halved-dose CTP datasets.
Abstract: Purpose: Dynamic CT perfusion (CTP) consists of repeated acquisitions of the same volume at different time steps, shortly before, during, and shortly after the injection of contrast media. Important functional information can be derived for each voxel, reflecting the local hemodynamic properties and hence the metabolism of the tissue. Different approaches are being investigated to exploit data redundancy and prior knowledge for noise reduction of such datasets, ranging from iterative reconstruction schemes to high dimensional filters. Methods: We propose a new spatial bilateral filter which makes use of the k-means clustering algorithm and of an optimally calculated guiding image. We name the proposed filter the k-means clustering guided bilateral filter (KMGB). In this study, the KMGB filter is compared with the partial temporal non-local means filter (PATEN), with the time-intensity profile similarity (TIPS) filter, and with a new version derived from it by introducing the guiding image (GB-TIPS). All the filters were tested on a digital in-house developed brain CTP phantom, where noise was added to simulate 80 kV and 200 mAs (default scanning parameters), 100 mAs and 30 mAs. Moreover, the filters' performances were tested on 7 noisy clinical datasets with different pathologies in different body regions. The original contribution of our work is two-fold: first, we propose an efficient algorithm to calculate a guiding image to improve the results of the TIPS filter; second, we propose the introduction of the k-means clustering step and demonstrate how this can potentially replace the TIPS part of the filter, obtaining better results at lower computational effort. Results: As expected, in the GB-TIPS, the introduction of the guiding image limits the oversmoothing of the TIPS filter, improving spatial resolution by more than 50%.
Furthermore, replacing the time-intensity profile similarity calculation with a fuzzy k-means clustering strategy (KMGB) allows control of the edge-preserving features of the filter, resulting in improved spatial resolution and CNR both for CT images and for functional maps. In the phantom study, the PATEN filter showed overall the poorest results, while the other filters showed comparable performances in terms of perfusion value preservation, with the KMGB filter having overall the best image quality. Conclusion: The KMGB filter leads to superior results for CT image and functional map quality improvement, in significantly shorter computational times compared to the other filters. Our results suggest that the KMGB filter might be a more robust solution for halved-dose CTP datasets. For all the filters investigated, some artifacts start to appear on the BF maps if one sixth of the dose is simulated, suggesting that none of the filters investigated in this study is optimal for such a drastic dose reduction scenario.

Journal ArticleDOI
TL;DR: Experimental results prove that the proposed spectral-spatial hyperspectral image classification approach is robust and offers more classification accuracy than state-of-the-art methods when the number of labeled samples is small.

Journal ArticleDOI
01 Dec 2017-Heliyon
TL;DR: This paper takes advantage of the Faber Schauder Wavelet (FSW) and the Otsu threshold to detect edges in a multi-scale way with low complexity, since the extrema coefficients of this wavelet are located on edge points and the method involves only arithmetic operations.

Posted Content
TL;DR: Co-occurrence filter (CoF) as discussed by the authors is a boundary preserving filter based on the Bilateral Filter (BF) that relies on a co-occurrence matrix.
Abstract: Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on the Bilateral Filter (BF) but instead of using a Gaussian on the range values to preserve edges it relies on a co-occurrence matrix. Pixel values that co-occur frequently in the image (i.e., inside textured regions) will have a high weight in the co-occurrence matrix. This, in turn, means that such pixel pairs will be averaged and hence smoothed, regardless of their intensity differences. On the other hand, pixel values that rarely co-occur (i.e., across texture boundaries) will have a low weight in the co-occurrence matrix. As a result, they will not be averaged and the boundary between them will be preserved. The CoF therefore extends the BF to deal with boundaries, not just edges. It learns co-occurrences directly from the image. We can achieve various filtering results by directing it to learn the co-occurrence matrix from a part of the image, or a different image. We give the definition of the filter, discuss how to use it with color images and show several use cases.

Posted Content
TL;DR: A new geometry filtering technique called static/dynamic filter, which utilizes both static and dynamic guidances to achieve state-of-the-art results, is proposed, based on a nonlinear optimization that enforces smoothness of the signal while preserving variations that correspond to features of certain scales.
Abstract: The joint bilateral filter, which enables feature-preserving signal smoothing according to the structural information from a guidance, has been applied for various tasks in geometry processing. Existing methods either rely on a static guidance that may be inconsistent with the input and lead to unsatisfactory results, or a dynamic guidance that is automatically updated but sensitive to noise and outliers. Inspired by recent advances in image filtering, we propose a new geometry filtering technique called static/dynamic filter, which utilizes both static and dynamic guidances to achieve state-of-the-art results. The proposed filter is based on a nonlinear optimization that enforces smoothness of the signal while preserving variations that correspond to features of certain scales. We develop an efficient iterative solver for the problem, which unifies existing filters that are based on static or dynamic guidances. The filter can be applied to mesh face normals followed by vertex position update, to achieve scale-aware and feature-preserving filtering of mesh geometry. It also works well for other types of signals defined on mesh surfaces, such as texture colors. Extensive experimental results demonstrate the effectiveness of the proposed filter for various geometry processing applications such as mesh denoising, geometry feature enhancement, and texture color filtering.

Patent
25 Jan 2017
TL;DR: In this paper, a binocular stereoscopic vision matching method combining color, gradients and depth characteristics was proposed, which can effectively reduce the incorrect matching rate of three-dimensional matching.
Abstract: The invention discloses a binocular stereoscopic vision matching method combining depth characteristics. The method comprises: obtaining a depth feature map from the left and right images through a convolutional neural network; calculating a truncated similarity measure of pixel depth features by taking the depth features as the standard, and constructing a truncated matching cost function combining color, gradients and depth features to obtain a matching cost volume; processing the cost volume by adopting a fixed window, a variable window, and adaptive weight aggregation or guided filtering to obtain an aggregated cost volume; selecting the optimal disparity from the cost volume by adopting a winner-takes-all (WTA) strategy to obtain an initial disparity map; then finding occluded regions by adopting a double-peak test, left-right consistency detection, order consistency detection or an occlusion constraint algorithm, and assigning to each occluded point the disparity value of the closest same-row point to obtain a disparity map; and filtering the disparity map by adopting a mean or bilateral filter to obtain the final disparity map. By adopting the binocular stereoscopic vision matching method combining depth characteristics, the incorrect matching rate of stereo matching can be effectively reduced, the disparity maps are smooth, and image edges, including the edges of small objects, are effectively preserved.

Proceedings ArticleDOI
01 Dec 2017
TL;DR: This paper proposes the use of a bilateral filter as a coding tool for video compression, based on a look-up table (LUT), making it fast enough to give a reasonable trade-off between complexity and compression efficiency.
Abstract: This paper proposes the use of a bilateral filter as a coding tool for video compression. The filter is applied after transform and reconstruction, and the filtered result is used both for output as well as for spatial and temporal prediction. The implementation is based on a look-up table (LUT), making it fast enough to give a reasonable trade-off between complexity and compression efficiency. By varying the center filter coefficient and avoiding storing zero LUT entries, it is possible to reduce the size of the LUT to 2202 bytes. It is also demonstrated that the filter can be implemented without divisions, which is important for full custom ASIC implementations. The method has been implemented and tested according to the common test conditions in JEM version 5.0.1. For still images, or intra frames, we report a 0.4% bitrate reduction with a complexity increase of 6% in the encoder and 5% in the decoder. For video, we report a 0.5% bitrate reduction with a complexity increase of 3% in the encoder and 0% in the decoder.
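A toy illustration of the LUT idea in integer arithmetic follows. This is not the JEM implementation: the table sizes, weights, plus-shaped support, and the reciprocal-table trick standing in for the division-free design are our simplifications.

```python
import numpy as np

def make_range_lut(sigma_r, scale=31):
    """Range weights for absolute differences 0..255, quantized to
    small integers so the filter can run entirely in integer math."""
    d = np.arange(256)
    return np.round(scale * np.exp(-(d * d) / (2.0 * sigma_r ** 2))).astype(np.int64)

def lut_bilateral(img, lut, center_weight=64):
    """One integer-only bilateral pass over a plus-shaped neighborhood.

    The final per-pixel division is replaced by a multiply with a
    precomputed reciprocal table and a shift, as a division-free
    hardware design would do. `img` is a 2-D uint8 array.
    """
    H, W = img.shape
    pad = np.pad(img, 1, mode="edge").astype(np.int64)
    num = center_weight * img.astype(np.int64)
    den = np.full(img.shape, center_weight, dtype=np.int64)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nb = pad[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
        w = lut[np.abs(nb - img)]     # range weight from the LUT
        num += w * nb
        den += w
    # reciprocal table covering every possible weight sum
    max_den = center_weight + 4 * int(lut.max())
    recip = np.zeros(max_den + 1, dtype=np.int64)
    recip[1:] = (1 << 16) // np.arange(1, max_den + 1)
    return ((num * recip[den] + (1 << 15)) >> 16).astype(img.dtype)
```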

Journal ArticleDOI
TL;DR: This paper presents a novel retinal image denoising approach which is able to preserve the details of retinal vessels while effectively eliminating image noise, and which can also serve as a preprocessing tool for improving the accuracy of vessel detection techniques.
Abstract: Filtering belongs to the most fundamental operations of retinal image processing, in which the value of the filtered image at a given location is a function of the values in a local window centered at this location. However, preserving thin retinal vessels during the filtering process is challenging due to the vessels' small area and weak contrast compared to the background, caused by the limited resolution of imaging and less blood flow in the vessel. In this paper, we present a novel retinal image denoising approach which is able to preserve the details of retinal vessels while effectively eliminating image noise. Specifically, our approach is carried out by determining an optimal spatial kernel for the bilateral filter, which is represented by a line spread function with an orientation and scale adjusted adaptively to the local vessel structure. Moreover, this approach can also serve as a preprocessing tool for improving the accuracy of vessel detection techniques. Experimental results show the superiority of our approach over state-of-the-art image denoising techniques such as the bilateral filter.

Proceedings ArticleDOI
08 May 2017
TL;DR: An algorithm for automatic filter size selection for each pixel of Guided Filter based stereo matching, based on the response of the Difference of Gaussians (DoG); the experimental results show its superiority in accuracy.
Abstract: Local matching is one of the approaches to stereo matching which needs cost aggregation. In the Guided Filter based method proposed by Hosni et al., the cost map is smoothed by the Guided Filter using the original image as a guiding image. However, the Guided Filter sometimes fails when there are regions whose textures are the same but whose disparities are different. Thus, tuning the filter size of the Guided Filter to obtain the best accuracy is difficult. In this paper we propose an algorithm for automatic filter size selection for each pixel of Guided Filter based stereo matching, based on the response of the Difference of Gaussians (DoG). In our algorithm, we generate a Filter-Size map whose value at each pixel is the appropriate filter size, namely the largest filtering area around the pixel of interest such that no more than two edges are included in the filtering area. In our experiments, we evaluated the accuracy of the Guided Filter based method with our filter size selection algorithm against the original Guided Filter based method without it. Using the Middlebury datasets, the experimental results show our algorithm's superiority in accuracy.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: The effectiveness of the Gaussian blur filter over the bilateral filter is presented, and the range of noise which the user can tolerate while dealing with the image is determined.
Abstract: The bilateral filter is proven in the literature to be the best filter for edge detection techniques, as it preserves the edges of the image while de-noising, but this paper presents the effectiveness of the Gaussian blur filter over the bilateral filter. With experiments performed on natural scenes with different noises, we came to the conclusion that the Gaussian blur filter performs better than the bilateral filter. This paper also determines the range of noise which the user can tolerate while dealing with the image. The experiment is conducted on three natural scenes with the help of a graph based image segmentation technique, and the result is validated using correlation.

Journal ArticleDOI
TL;DR: This paper proposes a patch-based method to remove the out-of-focus blur of a video and build an all-in-focus video, and employs the idea of a bilateral filter to temporally smooth the reconstructed video.
Abstract: Amateur videos always contain focusing issues. A focusing mistake may produce out-of-focus blur, which seriously degrades the expressive force of the video. In this paper, we propose a patch-based method to remove the out-of-focus blur of a video and build an all-in-focus video. We assume that the out-of-focus blurry region in one frame will be clear in a portion of other frames; thus, the clear corresponding regions can be used to reconstruct the blurry one. We divide each video frame into a grid of patches and track each patch in the surrounding frames. We independently reconstruct each video frame by building a Markov random field model to identify the optimal target patches that are sharp, similar to the original patches, and are coherent with their neighboring patches within the overlapped regions. To recover an all-in-focus video, an iterative framework is utilized, in which the reconstructed video of each iteration is substituted in the next iteration. Finally, we employ the idea of a bilateral filter to temporally smooth the reconstructed video. The experimental results and the comparison with the previous works demonstrate the effectiveness of our method.
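The temporal smoothing step can be illustrated with a per-pixel temporal bilateral filter over the frame stack: frames close in time and similar in intensity are averaged, while frames that differ strongly (for example across a cut or a moving object) contribute little, avoiding ghosting. This is a generic sketch under our own parameter choices, not the paper's exact formulation.

```python
import numpy as np

def temporal_bilateral(frames, sigma_t=1.5, sigma_r=0.15):
    """Temporal bilateral smoothing of a (T, H, W) video stack.

    Each output frame is a weighted average of all frames, weighted by
    temporal distance (sigma_t) and by per-pixel intensity similarity
    (sigma_r), so content that changes abruptly is not smeared in time.
    """
    T = frames.shape[0]
    out = np.empty_like(frames)
    for t in range(T):
        w_t = np.exp(-((np.arange(T) - t) ** 2) / (2 * sigma_t ** 2))
        w_r = np.exp(-((frames - frames[t]) ** 2) / (2 * sigma_r ** 2))
        w = w_t[:, None, None] * w_r
        out[t] = (w * frames).sum(axis=0) / w.sum(axis=0)
    return out
```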

Journal ArticleDOI
TL;DR: Qualitative and quantitative comparisons show that the proposed highly efficient approach for mesh denoising with geometric feature preservation can outperform selected state-of-the-art methods, in particular in computational efficiency.

Proceedings ArticleDOI
Jianjie Ma1
01 Jul 2017
TL;DR: The paper suggests the concept of dynamic updating, which means the standard image is updated during the detection process, and an improved region growing method is proposed to obtain the complete extents of the real defect regions.
Abstract: Defect detection and recognition of bare PCBs play a significant role in computer vision applications. An accurate and efficient approach is implemented in this paper. The approach is based on the comparison between a standard PCB image and the target image. Multiple images of qualified PCBs are acquired at the same position. We take an average of the images and consider it to be the initial standard image. The paper suggests the concept of dynamic updating, which means we update the standard image during the detection process. The bilateral filtering method is used in the preprocessing phase. Then the target image is compared with the standard image to get the difference image, and a suitable threshold is obtained by analyzing the histogram of the difference image to distinguish potential defect regions. Using the boundary-length-range method, the authenticity of each potential defect region is preliminarily judged. After that, we can identify the bare PCBs which have no defect. Moreover, an improved region growing method is proposed to obtain the complete extents of the real defect regions. Finally, a simple but effective method is presented to recognize the type of the defects. Experimental results show that the proposed method works well.

Journal ArticleDOI
TL;DR: This paper generates the porosity volume from the seismic attributes using an artificial neural network (ANN) and applies a set of filters to the output of the ANN to regularize the predicted porosity volume.
Abstract: This paper proposes a diffusion filter based scheme to denoise seismic attributes and to improve the porosity volume, which is predicted from seismic attributes. We compare the performances of multiple diffusion [such as Perona–Malik diffusion filter, complex diffusion filter, improved complex adaptive diffusion filter (ICADF)] and nondiffusion (such as two-dimensional (2-D) median, 3-D median, smoothing, and bilateral filter) based filters in terms of four metrics such as root mean square error (RMSE), normalized RMSE, signal to noise ratio (SNR), and peak SNR (PSNR). In our earlier publication, we used an artificial neural network (ANN) to predict a lithological property (sand fraction) over a study area. We trained the ANN using an integrated dataset of low-resolution seismic attributes and a limited number of high-resolution well logs. In this paper, we generate the porosity volume from the seismic attributes using an ANN. The predicted porosity logs contain irregularities and artifacts due to the nonlinear mapping of the learning algorithm (e.g., ANN). We apply a set of filters to the output of the ANN to regularize the predicted porosity volume. The filtered porosity logs are compared with the generated log. The ICADF has been found to be most suitable for denoising the seismic data and the porosity volume. Generation of porosity maps from seismic inputs would be helpful to petroleum engineers for reservoir characterization.

Journal ArticleDOI
TL;DR: The results of this study revealed that noise removal is an important preprocessing step for more successful analysis of digital images, and that the bilateral filter is an effective filtering method in terms of segmentation accuracy and FD analysis performance.

Journal ArticleDOI
Jin Wang1, Jiaji Wu1, Zhensen Wu1, Gwanggil Jeon1, Jechang Jeong2 
TL;DR: An efficient image demosaicking method using a bilateral filter and directional differentiation with consideration of both the spatial closeness and the similarity between the interpolated pixel and the neighbor pixels is introduced.
Abstract: In this paper, we introduce an efficient image demosaicking method using a bilateral filter and directional differentiation with consideration of both the spatial closeness and the similarity between the interpolated pixel and the neighbor pixels. Spatial closeness is considered as spatial locality. We utilize an adaptive weighted average to estimate the missing pixel value, where the adaptive weight is calculated based on three components: directional differentiation, similarity between the pixel and each of its neighbor pixels, and spatial locality. The experimental results show that the proposed method outperforms existing approaches in both objective and subjective performance.
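A reduced sketch of directional, adaptive-weight interpolation for the half-sampled (quincunx) green channel follows. It keeps only the directional-differentiation component of the weight; the spatial-closeness and similarity terms described in the abstract are omitted, and all names are illustrative.

```python
import numpy as np

def interp_missing_green(g, known, eps=1e-3):
    """Fill missing quincunx samples with an edge-directed weighted
    average of the 4 axial neighbors: the horizontal pair is trusted
    in inverse proportion to the horizontal gradient, and likewise for
    the vertical pair, so interpolation follows edges.

    `g` holds known samples (zeros elsewhere); `known` is a boolean
    mask; on a quincunx grid the 4 neighbors of a missing pixel are
    always known.
    """
    out = g.copy()
    p = np.pad(g, 1, mode="reflect")
    L, R = p[1:-1, :-2], p[1:-1, 2:]
    T, B = p[:-2, 1:-1], p[2:, 1:-1]
    w_h = 1.0 / (np.abs(L - R) + eps)   # small horizontal gradient -> trust L/R
    w_v = 1.0 / (np.abs(T - B) + eps)   # small vertical gradient   -> trust T/B
    est = (w_h * (L + R) / 2 + w_v * (T + B) / 2) / (w_h + w_v)
    out[~known] = est[~known]
    return out
```

Across a vertical edge the horizontal gradient is large, so the estimate leans almost entirely on the vertical pair and does not bleed across the edge.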

Journal ArticleDOI
TL;DR: Images captured in hazy weather are usually of poor quality, which has a negative effect on the performance of outdoor computer imaging systems, and haze removal is critical for outdoor imaging systems.
Abstract: Images captured in hazy weather are usually of poor quality, which has a negative effect on the performance of outdoor computer imaging systems. Therefore, haze removal is critical for outdoor imaging systems.

Journal ArticleDOI
Haiyan Li1, Jun Wu1, Miao Aimin1, Pengfei Yu1, Jianhua Chen1, Zhang Yufeng1 
TL;DR: A novel method, called the Rayleigh-maximum-likelihood switching bilateral filter (RSBF), is proposed to enhance ultrasound images in two steps: noise, speckle and edge detection, followed by filtering; the method is effective in enhancing edges while smoothing the speckle and noise in clinical ultrasound images.
Abstract: Ultrasound imaging plays an important role in computer-aided diagnosis since it is non-invasive and cost-effective. However, ultrasound images are inevitably contaminated by noise and speckle during acquisition. Noise and speckle directly hinder the physician's interpretation of the images and decrease the accuracy of clinical diagnosis. Denoising is therefore an important component of enhancing the quality of ultrasound images; however, current denoising methods suffer from several limitations: they can remove noise while ignoring the statistical characteristics of speckle, undermining the effectiveness of despeckling, or vice versa. In addition, most existing algorithms do not identify noise, speckle or edges before removing noise or speckle, and thus they reduce noise and speckle while blurring edge details. Therefore, it is a challenging issue for traditional methods to effectively remove noise and speckle in ultrasound images while preserving edge details. To overcome the above-mentioned limitations, a novel method, called the Rayleigh-maximum-likelihood switching bilateral filter (RSBF), is proposed to enhance ultrasound images in two steps: noise, speckle and edge detection, followed by filtering. Firstly, a sorted quadrant median vector scheme is utilized to calculate the reference median in a filtering window, which is compared with the central pixel to classify the target pixel as noise, speckle or noise-free. Subsequently, the noise is removed by a bilateral filter and the speckle is suppressed by a Rayleigh-maximum-likelihood filter, while the noise-free pixels are kept unchanged. To quantitatively evaluate the performance of the proposed method, synthetic ultrasound images contaminated by speckle are simulated using a speckle model that follows a Rayleigh distribution.
Thereafter, the corrupted synthetic images are generated by multiplying the original image with Rayleigh-distributed speckle at various signal-to-noise ratio (SNR) levels and adding Gaussian-distributed noise. Meanwhile, clinical breast ultrasound images are used to visually evaluate the effectiveness of the method. To examine the performance, comparison tests between the proposed RSBF and six state-of-the-art methods for ultrasound speckle removal are performed on simulated ultrasound images with various noise and speckle levels. The results of the proposed RSBF are satisfactory, since the Gaussian noise and the Rayleigh speckle are greatly suppressed: the proposed method improves the SNRs of the enhanced images to nearly 15 dB for images corrupted by speckle and nearly 13 dB for images contaminated by both speckle and noise, under various input SNR levels. The RSBF is effective in enhancing edges while smoothing the speckle and noise in clinical ultrasound images. In the comparison experiments, the proposed method demonstrates superior accuracy and robustness for denoising and edge preservation under various levels of noise and speckle, in terms of visual quality as well as numerical metrics such as peak signal-to-noise ratio, SNR and root mean squared error. The experimental results show that the proposed method is effective for removing the speckle and the background noise in ultrasound images. The main reason is its two-step "detect and replace" mechanism, and the advantages of the proposed RSBF lie in two aspects. First, each central pixel is classified as noise, speckle or noise-free texture according to the absolute difference between the target pixel and the reference median. Subsequently, the Rayleigh-maximum-likelihood filter and the bilateral filter are switched in to eliminate speckle and noise, respectively, while the noise-free pixels are left unaltered.
Consequently, it achieves better accuracy and robustness than traditional methods. These traits suggest that the proposed RSBF has significant potential for clinical application.
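The two-step "detect and replace" mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the reference median is taken as a plain window median rather than the sorted quadrant median vector, and the thresholds `t_noise` and `t_speckle`, the filter parameters, and the choice to centre the bilateral range term on the reference median are illustrative assumptions.

```python
import numpy as np

def rayleigh_ml_estimate(window):
    """ML estimate of the Rayleigh scale parameter sigma for the samples
    in `window`: sigma_ML = sqrt(sum(x_i^2) / (2N)). The mode of a
    Rayleigh distribution equals sigma, so it serves as the replacement."""
    return np.sqrt(np.mean(window ** 2) / 2.0)

def switching_filter(img, k=2, t_noise=60.0, t_speckle=20.0,
                     sigma_s=2.0, sigma_r=25.0):
    """Classify each pixel by its deviation from a reference median, then
    switch between a bilateral filter (noise), a Rayleigh ML filter
    (speckle), and no filtering (noise-free)."""
    img = img.astype(float)
    pad = np.pad(img, k, mode="reflect")
    h, w = img.shape
    y, x = np.mgrid[-k:k + 1, -k:k + 1]
    spatial = np.exp(-(x ** 2 + y ** 2) / (2 * sigma_s ** 2))
    out = img.copy()
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * k + 1, j:j + 2 * k + 1]
            ref = np.median(win)          # stand-in for the SQMV reference median
            diff = abs(img[i, j] - ref)
            if diff >= t_noise:           # impulse-like noise -> bilateral filter
                # Range term centred on the reference median so the
                # corrupted centre value does not dominate the average.
                rng = np.exp(-((win - ref) ** 2) / (2 * sigma_r ** 2))
                wgt = spatial * rng
                out[i, j] = np.sum(wgt * win) / np.sum(wgt)
            elif diff >= t_speckle:       # speckle -> Rayleigh ML replacement
                out[i, j] = rayleigh_ml_estimate(win)
            # else: classified noise-free, left unaltered
    return out
```

On a flat patch with a single impulse, the centre pixel's large deviation from the window median routes it to the bilateral branch, where the median-centred range weights restore it to the neighbourhood value while all other pixels are classified noise-free and pass through untouched.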

Journal ArticleDOI
TL;DR: Two perceptual filters are introduced as pre-processing techniques to reduce the bitrate of compressed high-definition (HD) video sequences at constant visual quality, adapting the strength of the filtering process to the human visual sensitivity to signal distortion.
Abstract: In this paper, we introduce two perceptual filters as pre-processing techniques to reduce the bitrate of compressed high-definition (HD) video sequences at constant visual quality. The goal of these perceptual filters is to remove spurious noise and insignificant details from the original video prior to encoding. The proposed perceptual filters rely on two novel adaptive filters (called BilAWA and TBil) which combine the good properties of the bilateral and Adaptive Weighting Average (AWA) filters. Since the bilateral and AWA filters were originally designed for denoising, the behaviour of the proposed BilAWA and TBil adaptive filters is first analyzed in the context of noise removal on HD test images. The first set of experimental results demonstrates their effectiveness in terms of noise removal while preserving image sharpness. A just noticeable distortion (JND) model is then introduced in the novel BilAWA and TBil filters to adaptively control the strength of the filtering process, taking into account the human visual sensitivity to signal distortion. Visual details which cannot be perceived are smoothed, hence saving bitrate without compromising perceived quality. A thorough experimental analysis of the perceptual JND-guided filters is conducted when using these filters as a pre-processing step prior to MPEG-4/AVC encoding. Psychovisual evaluation tests show that the proposed BilAWA pre-processing filter leads to an average bitrate saving of about 19.3% (up to 28.7%) for the same perceived visual quality. The proposed pre-filtering approach has also been tested with the new state-of-the-art HEVC standard and has shown similar efficiency in terms of bitrate savings at constant visual quality.
Highlights: Perceptual pre-filtering for rate-quality optimization of video codecs. High bitrate savings at the same subjective quality, demonstrated with an H.264/AVC and an HEVC codec. Perceptually-guided filters offering a good trade-off between noise removal, blurring and complexity.
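The JND-guided filtering idea can be sketched as follows: tie the range parameter of a bilateral-style filter at each pixel to a luminance-adaptation JND threshold, so that sub-threshold (invisible) detail is smoothed away while supra-threshold structure is preserved. This is a minimal sketch, not the paper's BilAWA or TBil filters; the Chou-Li-style JND profile constants and the coupling `sigma_r = JND(local background)` are illustrative assumptions.

```python
import numpy as np

def luminance_jnd(bg):
    """Chou-Li-style luminance-adaptation JND profile: visibility
    thresholds rise in very dark and very bright regions.
    Constants are illustrative, not those used in the paper."""
    if bg <= 127:
        return 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0
    return 3.0 / 128.0 * (bg - 127.0) + 3.0

def jnd_guided_prefilter(img, k=2, sigma_s=2.0):
    """Bilateral-style pre-filter whose range parameter at each pixel is
    set to the local JND threshold: intensity differences below the JND
    are averaged out (imperceptible detail), larger ones are preserved."""
    img = img.astype(float)
    pad = np.pad(img, k, mode="reflect")
    h, w = img.shape
    y, x = np.mgrid[-k:k + 1, -k:k + 1]
    spatial = np.exp(-(x ** 2 + y ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * k + 1, j:j + 2 * k + 1]
            sigma_r = luminance_jnd(win.mean())   # local background luminance
            rng = np.exp(-((win - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * win) / np.sum(wgt)
    return out
```

Run before the encoder, such a filter reduces the variance of sub-JND fluctuations (which would otherwise cost bits to encode) while leaving flat regions and visible edges essentially unchanged.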