
Showing papers on "Bilateral filter published in 2020"


Journal ArticleDOI
TL;DR: A novel method to segment the breast tumor via semantic classification and merging of patches is proposed; it achieved competitive results compared to conventional methods in terms of TP and FP, and produced good approximations to the hand-labelled tumor contours.

135 citations


Journal ArticleDOI
TL;DR: A novel local region model based on an adaptive bilateral filter is presented for segmenting noisy images; it is more efficient and robust to noise than state-of-the-art region-based models.
Abstract: Image segmentation plays an important role in computer vision. However, it is extremely challenging due to low resolution, high noise and blurry boundaries. Recently, region-based models have been widely used to segment such images. Existing models often use Gaussian filtering to filter images, which causes the loss of edge gradient information. Accordingly, in this paper, a novel local region model based on an adaptive bilateral filter is presented for segmenting noisy images. Specifically, we firstly construct a range-based adaptive bilateral filter, with which edge structures can be well preserved while noise is resisted. Secondly, we present a data-driven energy model, which utilizes local information of regions centered at each pixel of the image to approximate intensities inside and outside of the circular contour. This estimation approach improves the accuracy of noisy image segmentation. Thirdly, under the premise of keeping the original image shape, a regularization function is used to accelerate the convergence speed and smooth the segmentation contour. Experimental results on both synthetic and real images demonstrate that the proposed model is more efficient and robust to noise than state-of-the-art region-based models.
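
The classical bilateral filter that this and several entries below build on averages each pixel's neighbourhood with a spatial kernel weighted by a range kernel. A minimal brute-force sketch, assuming a grayscale float image in [0, 1] and parameter names of our own choosing (this is the generic filter, not the paper's adaptive variant, which would roughly let sigma_r vary per pixel):

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a 2-D grayscale float image."""
    h, w = img.shape
    out = np.zeros_like(img)
    # The spatial (domain) kernel is fixed and can be precomputed once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    padded = np.pad(img, radius, mode="reflect")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # The range kernel penalizes intensity differences, which preserves edges.
            rng = np.exp(-(patch - img[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```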

87 citations


Journal ArticleDOI
TL;DR: The proposed novel scheme can reconstruct alterations at extremely high rates (up to 80%), obtaining good quality for altered regions that are self-recovered, with higher visual performance compared with a similar state-of-the-art scheme.
Abstract: In this paper, a fragile watermarking scheme for color-image authentication and self-recovery is proposed. The original image is divided into non-overlapping blocks, and for each block the watermarks used for recovery and authentication are generated; these are embedded into a different block according to an embedding sequence given by a permutation process. The designed scheme embeds the watermarks generated by each block within the 2-LSB, and a bit-adjustment phase is subsequently applied to increase the quality of the watermarked image. In order to increase the quality of the recovered image, we use in the post-processing stage the bilateral filter, which efficiently suppresses noise while preserving image edges. Additionally, high accuracy in the tamper detection process is achieved by employing a hierarchical tamper detection algorithm. Finally, to solve the tampering coincidence problem, three recovery watermarks are embedded in different positions to reconstruct a specific block, and a proposed inpainting algorithm is implemented to regenerate the regions affected by this problem. Simulation results demonstrate that the watermarked images have high quality, and the proposed scheme can reconstruct alterations at extremely high rates (up to 80%), obtaining good quality for the self-recovered altered regions, with higher visual performance compared with a similar state-of-the-art scheme.
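
As a toy sketch of just the 2-LSB embedding step mentioned above (the block permutation, bit adjustment, hierarchical detection, and recovery logic are not reproduced; function names are ours):

```python
import numpy as np

def embed_2lsb(channel, wm_bits):
    """Write two watermark bits into the 2 least-significant bits of each 8-bit pixel."""
    flat = channel.astype(np.uint8).ravel()
    pairs = wm_bits.astype(np.uint8).reshape(-1, 2)   # one (b1, b0) pair per pixel
    payload = (pairs[:, 0] << 1) | pairs[:, 1]
    return ((flat & 0xFC) | payload).reshape(channel.shape)

def extract_2lsb(channel):
    """Read the embedded bit pairs back from the 2 least-significant bits."""
    lsb2 = channel.astype(np.uint8).ravel() & 0x03
    return np.stack([(lsb2 >> 1) & 1, lsb2 & 1], axis=1).ravel()
```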

65 citations


Journal ArticleDOI
TL;DR: A data-driven approach based on the deep convolutional neural network with global and local residual learning to restore the depth structure from coarse to fine via multi-scale frequency synthesis is proposed.
Abstract: The depth maps obtained by consumer-level sensors are always noisy in the low-resolution (LR) domain. Existing methods for guided depth super-resolution, which are based on pre-defined local and global models, perform well in general cases (e.g., the joint bilateral filter and Markov random fields). However, such model-based methods may fail to describe the potential relationship between RGB-D image pairs. To solve this problem, this paper proposes a data-driven approach based on a deep convolutional neural network with global and local residual learning. It progressively upsamples the LR depth map guided by the high-resolution intensity image at multiple scales. Global residual learning is adopted to learn the difference between the ground truth and the coarsely upsampled depth map, and local residual learning is introduced in each scale-dependent reconstruction sub-network. This scheme can restore the depth structure from coarse to fine via multi-scale frequency synthesis. In addition, batch normalization layers are used to improve the performance of depth map denoising. Our method is evaluated in noise-free and noisy cases. A comprehensive comparison against 17 state-of-the-art methods is carried out. The experimental results show that the proposed method has a faster convergence speed as well as improved performance in both qualitative and quantitative evaluations.

55 citations


Journal ArticleDOI
TL;DR: A multi-scale deep feature learning network with bilateral filtering (MDFLN-BF) is proposed for SAR image classification, which aims to extract discriminative features and reduce the requirement of labeled samples.
Abstract: Synthetic aperture radar (SAR) image classification using deep neural networks has drawn great attention and generally requires many layers of a deep model for feature learning. However, a deeper neural network will result in overfitting with limited training samples. In this paper, a multi-scale deep feature learning network with bilateral filtering (MDFLN-BF) is proposed for SAR image classification, which aims to extract discriminative features and reduce the requirement of labeled samples. In the proposed framework, the MDFLN is proposed to extract features from the SAR image at multiple scales, where the SAR image is stratified into different scales and a fully convolutional network is utilized to extract features from each scale sub-image. Then, features of multiple scales are classified by multiple softmax classifiers and combined by a majority vote algorithm. Further, bilateral filtering is developed to optimize the classification map based on spatial relations, which aims to improve the spatial smoothness. Experiments are conducted on three SAR images with different sensors, bands, resolutions, and polarizations in order to prove the generalization ability. It is demonstrated that the proposed MDFLN-BF is able to yield superior results compared with other related deep networks.

49 citations


Journal ArticleDOI
TL;DR: The multi-modal brain image dataset (BraTS 2012) was used, and the method achieves a Dice overlap score of 88% for whole tumour area localization, which is similar to the scores reported in the MICCAI BraTS challenge.

46 citations


Journal ArticleDOI
TL;DR: A novel Gaussian-adaptive bilateral filter (GABF) is proposed, which acquires a low-pass guidance for the range kernel via a Gaussian spatial kernel, leading to a clean Gaussian range kernel for the subsequent bilateral composition.
Abstract: Recent studies have demonstrated that a bilateral filter can increase the quality of edge-preserving image smoothing significantly. Different strategies or mechanisms have been used to eliminate the brute-force computation in bilateral filters. However, blindly decreasing the processing time of the bilateral filter cannot further ameliorate the effectiveness of the filter. In addition, even when the processing speed of the filter is increased, the inherent problem that occurs in the Gaussian range kernel when the filtering input is noisy, and its effect on the edge-preserving smoothing operation, are barely discussed. In this letter, we propose a novel Gaussian-adaptive bilateral filter (GABF) to resolve the aforementioned problem. The basic idea is to acquire a low-pass guidance for the range kernel via a Gaussian spatial kernel. Such low-pass guidance leads to a clean Gaussian range kernel for the subsequent bilateral composition. The results of experiments conducted on several test datasets indicate that the proposed GABF outperforms most existing bilateral-filter-based methods.
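
The core idea, computing the range kernel from a Gaussian-smoothed (low-pass) guidance image rather than from the noisy input itself, amounts to a joint/cross bilateral formulation. A rough sketch under that reading, with parameter names of our own:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_guided_bilateral(img, radius=3, sigma_s=2.0, sigma_r=0.1, sigma_g=1.0):
    """Bilateral filter whose range weights are taken from a Gaussian-smoothed guide."""
    guide = gaussian_filter(img, sigma_g)                      # low-pass guidance image
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    pad_img = np.pad(img, radius, mode="reflect")
    pad_gd = np.pad(guide, radius, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            p_img = pad_img[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            p_gd = pad_gd[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel is evaluated on the clean guide, not on the noisy input.
            rng = np.exp(-(p_gd - guide[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            w = spatial * rng
            out[i, j] = np.sum(w * p_img) / np.sum(w)
    return out
```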

46 citations


Journal ArticleDOI
TL;DR: A modified FCM method named FCM_SICM for noisy image segmentation is proposed, which achieves superior segmentation performance on mixed-noise images in terms of segmentation accuracy (SA), mean intersection-over-union (mIoU), E-measure and number of iteration steps, compared with several state-of-the-art methods.

37 citations


Journal ArticleDOI
TL;DR: A new global method that embeds the bilateral filter (BLF) in a least squares (LS) model for efficient edge-preserving smoothing is proposed; it shows performance comparable to the state-of-the-art global methods.
Abstract: Edge-preserving smoothing is a fundamental procedure for many computer vision and graphics applications. This can be achieved with either local methods or global methods. In most cases, global methods can yield superior performance over local ones. However, local methods usually run much faster than global ones. In this paper, we propose a new global method that embeds the bilateral filter (BLF) in the least squares (LS) model for efficient edge-preserving smoothing. The proposed method shows performance comparable to the state-of-the-art global methods. Meanwhile, since the proposed method takes advantage of the efficiency of the BLF and the LS model, it runs much faster. In addition, we show the flexibility of our method, which can be easily extended by replacing the BLF with its variants; these can be further modified to handle more applications. We validate the effectiveness and efficiency of the proposed method through comprehensive experiments in a range of applications.

35 citations


Journal ArticleDOI
TL;DR: The proposed Adaptive Cuckoo Search based bilateral filter denoising gives better results in terms of Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), Feature Similarity Index (FSIM), Entropy and CPU time in comparison to traditional methods such as Median filter and RGB spatial filter.
Abstract: A satellite image transmitted from satellite to the ground station is corrupted by different kinds of noise, such as impulse noise, speckle noise and Gaussian noise. The traditional methods of denoising can remove the noise components but cannot preserve the quality of the image, and they lead to over-blurring of the edges in the image. To overcome these drawbacks, this paper develops an optimized bilateral filter for image denoising and edge preservation using different nature-inspired optimization algorithms, which can effectively denoise the image without blurring the edges. Denoising the image using a bilateral filter requires choosing the control parameters so that the noise is removed and the edge details are preserved. With the help of optimization algorithms such as Particle Swarm Optimization (PSO), Cuckoo Search (CS) and Adaptive Cuckoo Search (ACS), the control parameters in the bilateral filter are decided for optimal performance. It is observed that the proposed Adaptive Cuckoo Search based bilateral filter denoising gives better results in terms of Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), Feature Similarity Index (FSIM), Entropy and CPU time in comparison to traditional methods such as the Median filter and RGB spatial filter.
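
The underlying procedure, picking the bilateral filter's control parameters by maximizing an image-quality score, can be sketched as below; a plain random search stands in for the PSO/CS/ACS metaheuristics, an 8-bit grayscale image is assumed, and the search ranges and function names are our own assumptions:

```python
import cv2
import numpy as np

def psnr(ref, test):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def tune_bilateral(noisy, clean, n_trials=200, seed=0):
    """Search the sigmaColor/sigmaSpace pair that maximizes PSNR of the denoised image."""
    rng = np.random.default_rng(seed)
    best_score, best_params = -np.inf, None
    for _ in range(n_trials):
        sigma_color = rng.uniform(5, 150)   # candidate range-kernel width
        sigma_space = rng.uniform(1, 15)    # candidate spatial-kernel width
        denoised = cv2.bilateralFilter(noisy, 9, sigma_color, sigma_space)
        score = psnr(clean, denoised)
        if score > best_score:
            best_score, best_params = score, (sigma_color, sigma_space)
    return best_params, best_score
```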

30 citations


Journal ArticleDOI
TL;DR: This paper demonstrates that superior texture filtering results can be obtained by using the classical bilateral filter for texture smoothing while adapting the spatial kernel at each pixel, and describes a simple and efficient gradient-based rule for this task.
Abstract: In the classical bilateral filter, a range kernel is used together with a spatial kernel for smoothing out fine details while simultaneously preserving edges. More recently, it has been demonstrated that even coarse textures can be smoothed using joint bilateral filtering. In this paper, we demonstrate that superior texture filtering results can be obtained by adapting the spatial kernel at each pixel. To the best of our knowledge, spatial adaptation (of the bilateral filter) has not been explored for texture smoothing. The rationale behind adapting the spatial kernel is that one cannot smooth beyond a certain level using a fixed spatial kernel, no matter how we manipulate the range kernel. In fact, we should simply aggregate more pixels using a sufficiently wide spatial kernel to locally enhance the smoothing. Based on this reasoning, we propose to use the classical bilateral filter for texture smoothing, where we adapt the width of the spatial kernel at each pixel. We describe a simple and efficient gradient-based rule for the latter task. The attractive aspect is that we are able to develop a fast algorithm that can accelerate the computations by an order of magnitude without visibly compromising the filtering quality. We demonstrate that our method outperforms classical bilateral filtering, joint bilateral filtering, and other filtering methods, and is competitive with optimization methods. We also present some applications of texture smoothing using the proposed method.
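
The paper's gradient-based rule is not reproduced here; the sketch below only illustrates the general idea of widening the per-pixel spatial kernel where the (smoothed) gradient magnitude is low, so textured but edge-free regions are aggregated over a larger support:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def adaptive_spatial_sigma(img, sigma_min=1.0, sigma_max=6.0, smooth=3.0):
    """Per-pixel spatial sigma: narrow near strong edges, wide in texture/flat areas."""
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    grad = gaussian_filter(np.hypot(gx, gy), smooth)                 # smoothed gradient magnitude
    grad = (grad - grad.min()) / (grad.max() - grad.min() + 1e-12)   # normalize to [0, 1]
    return sigma_max - (sigma_max - sigma_min) * grad                # low gradient -> wide kernel
```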

Journal ArticleDOI
TL;DR: The various quantitative and qualitative results suggest that the proposed local statistics-based bilateral filter (LSBF) outperforms the various existing speckle noise suppression techniques in terms of denoising and restoration of fine textural information in the denoised images.
Abstract: One of the most widely used medical modalities in the healthcare industry is ultrasound imaging, which is often corrupted by multiplicative noise (known as speckle). The reduction of this kind of noise from ultrasound images is highly desirable for providing a proper diagnosis of disease in real time. The classical bilateral filter (CBF) is well known as a most effective edge-preserving denoising filter for Gaussian noise reduction. Therefore, in this paper, a new speckle denoising filter is designed, based on local statistics, a Chi-square-based distance measure and a box-based kernel function in the bilateral filter framework, for application and use in real time. The proposed speckle denoising scheme is tested on various synthetic, B-mode, simulated and real ultrasound images. The various quantitative and qualitative results suggest that the proposed local statistics-based bilateral filter (LSBF) outperforms the various existing speckle noise suppression techniques in terms of denoising and restoration of fine textural information in the denoised images. The proposed LSBF method is compared with existing speckle noise reduction methods, and the experimental results demonstrate that it has better noise removal and structure preservation capability than existing standard denoising filters for speckle noise.

Journal ArticleDOI
TL;DR: A fuzzy c-means based method to segment multiple spectral images is presented, in which bias correction and noise suppression are both included: smoothness of the estimated bias is ensured by bilateral filtering, and noise is suppressed by a spatial constraint on memberships.

Journal ArticleDOI
01 Aug 2020-Optik
TL;DR: An advanced framework for image denoising that combines bilateral filtering with the non-subsampled shearlet transform (NSST) is presented; the inverse NSST is applied to the resultant output to estimate the final denoised image.

Journal ArticleDOI
TL;DR: A new method, FWD (Farthest point Weighted mean Down-sampling), is proposed: down-sampling is used to find the center of gravity, which is added to the farthest point sampling; ten iterations are performed and the obtained 11 point distances are weighted-averaged to find the feature point.
Abstract: When the center of gravity is used during down-sampling, some feature points are lost. We propose a new method, FWD (Farthest point Weighted mean Down-sampling): down-sampling is used to find the center of gravity, which is added to the farthest point sampling, and ten iterations are performed. The obtained 11 point distances are weighted-averaged to find the feature point. The influences of environmental noise and self-noise on the subsequent processing of the point cloud are considered, and a PWB (Principal component analysis, Wavelet function, Bilateral Filtering) method is proposed. The normal vector of each point is calculated by PCA. The distance between two points in the optimal neighborhood is obtained by the particle swarm optimization (PSO) method. The method performs wavelet smoothing and utilizes the Gaussian function to retain the edge eigenvalues. FWD simplified 90840 points in 48 seconds while retaining the complete feature points; compared with other recent methods, better results have been obtained. PWB reached a de-noising precision of 0.9696 within 72.31 s, which is superior to the latest methods. The loss of feature points is handled by FWD, and the removal of noise by PWB. Images of de-noising precision demonstrate the advantage of the method. The verification shows that the feature points are retained and the noise is eliminated.
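
Farthest point sampling, which FWD builds on, can be sketched as follows (a generic version with no weighted-mean step; function and variable names are ours):

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily select k points, each chosen to be farthest from the set picked so far."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(points.shape[0]))]         # arbitrary seed point
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))                        # point farthest from the current set
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return points[chosen]
```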

Journal ArticleDOI
TL;DR: A multi-focus image fusion algorithm based on a dual convolutional neural network (DualCNN) is proposed, in which the focus area is detected from super-resolved images; it achieves better visual perception according to subjective evaluation and objective indexes.
Abstract: Multi-focus image fusion is an image processing technique that generates an integrated image by merging multiple images with different focus areas in the same scene. For most fusion methods, the detection of the focus area is a critical step. In this paper, we propose a multi-focus image fusion algorithm based on a dual convolutional neural network (DualCNN), in which the focus area is detected from super-resolved images. Firstly, the source image is input into a DualCNN to restore the details and structure from its super-resolved image, as well as to improve the contrast of the source image. Secondly, the bilateral filter is used to reduce noise in the fused image, and the guided filter is used to detect the focus area of the image and refine the decision map. Finally, the fused image is obtained by weighting the source images according to the decision map. Experimental results show that our algorithm can retain image details well and maintain spatial consistency. Compared with existing methods in multiple groups of experiments, our algorithm achieves better visual perception according to subjective evaluation and objective indexes.

Proceedings ArticleDOI
01 Oct 2020
TL;DR: This work develops novel algorithms that take advantage of image properties, so that the NNK approach can scale to large images and shows that sparse NNK graphs achieve improved energy compaction and denoising performance when compared to using graphs directly derived from the bilateral filter.
Abstract: Graphs are useful to interpret widely used image processing methods, e.g., bilateral filtering, or to develop new ones, e.g., kernel-based techniques. However, simple graph constructions are often used, where edge weight and connectivity depend on a few parameters. In particular, the sparsity of the graph is determined by the choice of a window size. As an alternative, we extend and adapt to images the recently introduced non-negative kernel regression (NNK) graph construction. In NNK graphs, sparsity adapts to intrinsic data properties. Moreover, while previous work considered NNK graphs in generic settings, here we develop novel algorithms that take advantage of image properties, so that the NNK approach can scale to large images. Our experiments show that sparse NNK graphs achieve improved energy compaction and denoising performance when compared to using graphs directly derived from the bilateral filter.

Journal ArticleDOI
18 Jan 2020-Entropy
TL;DR: An image fusion method using multi-scale decomposition and joint sparse representation is introduced and can achieve better performance compared to the state-of-the-art methods in visual perception and objective quantification.
Abstract: Image fusion is a very practical technology that can be applied in many fields, such as medicine, remote sensing and surveillance. An image fusion method using multi-scale decomposition and joint sparse representation is introduced in this paper. First, joint sparse representation is applied to decompose two source images into a common image and two innovation images. Second, two initial weight maps are generated by filtering the two source images separately. Final weight maps are obtained by joint bilateral filtering according to the initial weight maps. Then, the multi-scale decomposition of the innovation images is performed through the rolling guide filter. Finally, the final weight maps are used to generate the fused innovation image. The fused innovation image and the common image are combined to generate the ultimate fused image. The experimental results show that our method’s average metrics are: mutual information (MI) 5.3377, feature mutual information (FMI) 0.5600, normalized weighted edge preservation value (QAB/F) 0.6978, and nonlinear correlation information entropy (NCIE) 0.8226. Our method can achieve better performance compared to the state-of-the-art methods in visual perception and objective quantification.

Journal ArticleDOI
TL;DR: Compared with the mainstream denoising algorithms, the proposed method can detect and filter out the random-value impulse noise in the image more effectively and faster, while better retaining the edges and other details of the image.
Abstract: A two-stage denoising algorithm based on local similarity is proposed in this paper to process lightly and moderately corrupted images with random-valued impulse noise. In the noise detection stage, the pixel to be detected is centered and the local similarity between the pixel and each pixel in its neighborhood is calculated, which can be used as the probability that the pixel is noise. By obtaining the local similarity of each pixel in the image and setting an appropriate threshold, the noise pixels and clean pixels in the damaged image can be detected. In the image restoration stage, an improved bilateral filter based on local similarity and geometric distance is designed. Each pixel detected as noise in the first stage is filtered, and its new intensity value is the weighted average of all pixel intensities in its neighborhood. A large number of experiments have been conducted on different test images, and the results show that compared with mainstream denoising algorithms, the proposed method can detect and filter out random-valued impulse noise in the image more effectively and faster, while better retaining the edges and other details of the image.

Journal ArticleDOI
TL;DR: A selective mean filter (SMF) is proposed and implemented; it significantly reduces noise while preserving the spatial resolution of the image.
Abstract: Background: Noise reduction is a method for reducing CT dose; however, it can reduce image quality. Objective: This study aims to propose a selective mean filter (SMF) and evaluate its effectiveness for noise suppression in CT images. Material and methods: This experimental study proposed and implemented the new noise reduction algorithm. The proposed algorithm is based on a mean filter (MF), but the calculation of the mean pixel value from the neighboring pixels in a kernel selectively applies a threshold value based on the noise of the image. The SMF method was evaluated using images of phantoms. The dose reduction was estimated by comparing the image noise acquired with a lower dose after applying the SMF and the noise in the original image acquired with a higher dose. For comparison, the images were also filtered with an adaptive mean filter (AMF) and a bilateral filter (BF). Results: The spatial resolution of the image filtered with the SMF was similar to that of the original images and the images filtered with the BF, while with the AMF the spatial resolution was significantly corrupted. The noise reduction achieved using the SMF was up to 75%, while it was up to 50% using the BF. Conclusion: The SMF significantly reduces the noise and preserves the spatial resolution of the image. Noise reduction was more pronounced with the BF and less pronounced with the AMF.
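
A rough reading of the selective mean idea: average only the neighbours whose intensity lies within a noise-derived threshold of the centre pixel. The exact thresholding rule of the paper is not reproduced; this sketch uses an arbitrary fixed threshold:

```python
import numpy as np

def selective_mean_filter(img, radius=2, threshold=30.0):
    """Mean of the neighbourhood, restricted to pixels within `threshold` of the centre."""
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    pad = np.pad(img.astype(np.float64), radius, mode="reflect")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            mask = np.abs(patch - img[i, j]) <= threshold   # centre pixel is always included
            out[i, j] = patch[mask].mean()
    return out
```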

Journal ArticleDOI
TL;DR: This paper focuses on mammogram image analysis for early prediction of breast cancer (screening), to reduce the mortality rate rather than use an invasive diagnostic technique, using a novel network called a deep convolutional neural network.
Abstract: Medical image processing needs attention towards accurate analysis rates, which directly affect treatment. This paper focuses on mammogram image analysis for early prediction of breast cancer (screening), to reduce the mortality rate rather than use an invasive diagnostic technique. To classify the mammogram images, a novel network called a deep convolutional neural network (DCNN) is utilized, in which a multi-layer perceptron is used in the fully connected layer to accurately classify the mammogram images into three classes: benign, malignant and normal. Before classifying the breast cancer, image pre-processing and feature extraction play a major role in preserving the useful information and extracting the desired features. The bilateral filter with vector grid computing is used as the noise reduction filter to preserve the edge information, which is essential in differentiating the masses and the dense tissue. Features such as area, radius, perimeter and smoothness are extracted to train the network and to detect the malignant tumor, stating whether the patient is positive or negative for cancer. Five stages have been proposed and implemented: (a) crop and resize the original mammogram; (b) de-noise the DDSM (Digital Database for Screening Mammography) image to preserve the edge information; (c) train the proposed DCNN model using the extracted features; (d) classify the DDSM images; (e) evaluate the performance using hyper-parameter tuning of the proposed system. Extensive observations are made to justify the listed findings by comparing the proposed outline with the literature on several in-use image classification models. A confusion matrix is drawn with the classes benign, malignant and normal tissue. The results are discussed (benchmarked) to show that fine-tuning of the final layers or the entire network parameters achieves 96.23% overall test accuracy and 97.46% average classification accuracy.

Journal ArticleDOI
TL;DR: In this paper, a feed-forward denoising CNN (DnCNN) with a parametric rectified linear unit (PReLU) is used to improve the denoising performance.
Abstract: Convolutional neural networks (CNNs) based on the discriminative learning model have been widely used for image denoising. In this study, a feed-forward denoising CNN (DnCNN) with a parametric rectified linear unit (PReLU) is used to improve the denoising performance. PReLU enhances the model fitting of the DnCNN network without affecting computational cost. This network learns the leaky parameter of negative inputs in an activation function and therefore finds a proper slope in a negative direction. The proposed denoising network is based on residual learning, which comprises repeated convolutional and PReLU units along with batch normalisation. Residual learning with batch normalisation accelerates the network training, which can be used for blind Gaussian denoising. In this network, feature maps are processed by principal component analysis and transferred to subsequent convolution layers. An adaptive bilateral filter further processes the output image of the proposed CNN for image smoothening and sharpening. The mean and variance of the Gaussian kernel of adaptive filter vary from pixel to pixel. The performance of this network is analysed on BSD-68 and Set-12 datasets, and it exhibits an improvement in peak signal-to-noise ratio and structural similarity index metric and visual representation over other state-of-the-art methods.

Journal ArticleDOI
TL;DR: A low-cost hardware architecture of the bilateral filter for real-time image processing is proposed, based on the techniques of distance-oriented grouping and hardware resource sharing; an efficient quantization method is applied to reduce the size of the required lookup tables.
Abstract: In this paper, a low-cost hardware architecture of the bilateral filter for real-time image processing is proposed. Based on the techniques of distance-oriented grouping and hardware resource sharing, the number of multipliers can be decreased by 48% compared to the previous approach. Besides, an efficient quantization method is applied to reduce the size of the required lookup tables. The experimental results show that the proposed architecture is cost-efficient while maintaining the same image quality, frame rate and working clock frequency.
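
A software analogue of the lookup-table reduction mentioned above: the Gaussian range kernel is tabulated over all possible intensity differences and then quantized to a small number of levels (the paper's actual quantization method and bit widths are not reproduced; parameter values are ours):

```python
import numpy as np

def build_range_lut(sigma_r=30.0, levels=16, max_diff=255):
    """Tabulate the Gaussian range kernel and quantize it to `levels` discrete values."""
    diffs = np.arange(max_diff + 1, dtype=np.float64)
    weights = np.exp(-(diffs ** 2) / (2.0 * sigma_r ** 2))
    return np.round(weights * (levels - 1)) / (levels - 1)   # fixed-point style quantization

# Usage: the range weight for an intensity difference d is lut[abs(d)].
lut = build_range_lut()
```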

Journal ArticleDOI
01 Nov 2020
TL;DR: KIBM5D enables the efficient denoising of dynamic PET images by improving the sparsity of the 5-D spectrum and generating the best image quality not only in simulations but also with human data.
Abstract: Dynamic positron emission tomography (PET) scans of short-time frames are required to quantitatively estimate the uptake of PET ligands. Because such short-frame scans tend to be noisy, we propose kinetics-induced block matching and 5-D transform domain filtering (KIBM5D) specialized for dynamic PET image denoising. In the proposed algorithm, kinetics-induced block matching (KIBM) and 5-D transform domain filtering are alternately repeated in two cascading stages. In each stage of KIBM5D, all time frames are included in a patch of the KIBM to collect similar patch-wise time activity curves. These similar 4-D patches are then five-dimensionally grouped and transformed to the 5-D spectrum. In the 5-D transform domain, the 5-D spectrum is shrunk by hard thresholding and Wiener filtering in the first and second stage of KIBM5D, respectively. The sparsity of the 5-D spectrum is improved because signals of similar 4-D patches are correlated, while noises of these are uncorrelated. To evaluate the performance of KIBM5D, we used both computer simulation data of a dynamic digital brain phantom using [18F]FDG kinetics, and experimental data of a normal healthy volunteer using [11C]MeQAA, and compared the results of KIBM5D, Gaussian filter (GF), bilateral filter, nonlocal means, block matching and 4-D filtering, and 4-D Gaussian filtering. For simulation data, KIBM5D performed superiorly to the other methods in terms of the peak signal to noise ratio and structural similarity measures, in all time frames. Additionally, KIBM5D generated the best image quality not only in simulations but also with human data. Accordingly, KIBM5D enables the efficient denoising of dynamic PET images.

Book ChapterDOI
04 Oct 2020
TL;DR: JBFnet as discussed by the authors is a neural network for low-dose CT denoising, where the guidance image is estimated by a deep neural network and the filter functions of the joint bilateral filter are learned via shallow convolutional networks.
Abstract: Deep neural networks have shown great success in low-dose CT denoising. However, most of these deep neural networks have several hundred thousand trainable parameters. This, combined with the inherent non-linearity of the neural network, makes the deep neural network difficult to understand, with low accountability. In this study we introduce JBFnet, a neural network for low-dose CT denoising. The architecture of JBFnet implements iterative bilateral filtering. The filter functions of the Joint Bilateral Filter (JBF) are learned via shallow convolutional networks. The guidance image is estimated by a deep neural network. JBFnet is split into four filtering blocks, each of which performs Joint Bilateral Filtering. Each JBF block consists of 112 trainable parameters, making the noise removal process comprehensible. The Noise Map (NM) is added after filtering to preserve high-level features. We train JBFnet with the data from the body scans of 10 patients, and test it on the AAPM low-dose CT Grand Challenge dataset. We compare JBFnet with state-of-the-art deep learning networks. JBFnet outperforms CPCE3D, GAN and deep GFnet on the test dataset in terms of noise removal while preserving structures. We conduct several ablation studies to test the performance of our network architecture and training method. Our current setup achieves the best performance, while still maintaining behavioural accountability.

Journal ArticleDOI
TL;DR: A novel local spatial filtering framework named the guided trilateral filter (GTF) is proposed to restore a noisy image with a well-fitting statistical distribution model; it is demonstrated that the GTF not only achieves superior performance against state-of-the-art methods, but also has fast and robust convergence and insensitivity to parameter settings.

Journal ArticleDOI
TL;DR: A new video super-resolution method that can generate high-quality and temporally coherent high-resolution (HR) videos by incorporating deep features generated by VGG16 into the authors' sparse reconstruction based video SR method.

Journal ArticleDOI
21 Aug 2020-Sensors
TL;DR: A noise-aware range kernel, which estimates noise using an intensity difference-based image noise model and dynamically adjusts weights according to the estimated noise, in order to alleviate the quality degradation of bilateral filters by noise is proposed.
Abstract: The range kernel of the bilateral filter degrades image quality unintentionally in real environments because the pixel intensity varies randomly due to the noise that is generated in image sensors. Furthermore, the range kernel increases the complexity due to the comparisons with neighboring pixels and the multiplications with the corresponding weights. In this paper, we propose a noise-aware range kernel, which estimates noise using an intensity difference-based image noise model and dynamically adjusts weights according to the estimated noise, in order to alleviate the quality degradation of bilateral filters by noise. In addition, to significantly reduce the complexity, an approximation scheme is introduced, which converts the proposed noise-aware range kernel into a binary kernel while using the statistical hypothesis test method. Finally, a fully parallelized and pipelined very-large-scale integration (VLSI) architecture of a noise-aware bilateral filter (NABF) based on the proposed binary range kernel is presented, which was successfully implemented in a field-programmable gate array (FPGA). The experimental results show that the proposed NABF is more robust to noise than the conventional bilateral filter under various noise conditions. Furthermore, the proposed VLSI design of the NABF achieves 10.5 and 95.7 times higher throughput and uses 63.6-97.5% less internal memory than state-of-the-art bilateral filter designs.
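
The binary-kernel idea, replacing the Gaussian range weight with a 0/1 weight obtained by comparing the intensity difference against a noise-dependent threshold, can be sketched as follows (the intensity-difference noise model and hypothesis test from the paper are not reproduced; the threshold here is a free parameter):

```python
import numpy as np

def binary_range_bilateral(img, radius=2, sigma_s=1.5, noise_thresh=20.0):
    """Bilateral-like filter whose range kernel is binary: a neighbour contributes
    only if its intensity difference to the centre is below `noise_thresh`."""
    h, w = img.shape
    out = np.zeros((h, w))
    pad = np.pad(img.astype(np.float64), radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = (np.abs(patch - img[i, j]) <= noise_thresh).astype(np.float64)
            wts = spatial * rng
            out[i, j] = np.sum(wts * patch) / np.sum(wts)
    return out
```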

Journal ArticleDOI
TL;DR: The experimental results show that compared with the traditional bilateral filtering algorithm and fuzzy C-means algorithm, the proposed method can effectively remove noise of different scales and maintain good performance on the basis of maintaining human body features.
Abstract: Aiming at the problem of 3D point cloud noise affecting the efficiency and precision of human body 3D reconstruction in complex scenes, a 3D point cloud registration and denoising method for human motion images using a deep learning algorithm is proposed. First, two Kinect sensors are used to collect the three-dimensional data of the human body in the scene, and spatial alignment under the Bursa linear model is used to pre-process the background point cloud data. The depth image of the point cloud is calculated, and features of the depth image pair are extracted by a convolutional neural network. Furthermore, the feature difference of the depth image pair is taken as the input of a fully connected network to calculate the point cloud registration parameters, and the above operation is performed iteratively until the registration error is less than an acceptable threshold. Then, the improved C-means algorithm is used to remove outliers: the noise is clustered and the large-scale outlier noise is removed. Finally, the high-frequency information is processed by the depth-data bilateral filtering method. The experimental results show that, compared with the traditional bilateral filtering algorithm and the fuzzy C-means algorithm, the proposed method can effectively remove noise of different scales and maintain good performance while preserving human body features. For the point cloud models A, B, and C, the average error of the proposed method is lower than that of the traditional bilateral filtering algorithm by 15.7%, 15.9%, and 19.8%, respectively, and lower than that of the fuzzy C-means algorithm by 25.8%, 26.9%, and 30.2%, respectively.

Journal ArticleDOI
TL;DR: A novel bilateral filtering scheme with a dual-range kernel is proposed, which provides more robust range-distance estimation at various noise levels compared with existing methods and outperforms the conventional bilateral filter and its major state-of-the-art variants.
Abstract: The bilateral filter is a classical technique for edge-preserving smoothing. It has been widely used as an effective image denoising approach to remove Gaussian noise. The performance of bilateral filtering highly depends on the accuracy of its range-distance estimation, which is used for pixel-neighbourhood similarity measurement. However, in the conventional bilateral filtering approach, estimating the range distance directly from the noisy observation results in degradation of the denoising performance. To address this issue, the authors propose a novel bilateral filtering scheme with a dual-range kernel, which provides more robust range-distance estimation at various noise levels compared with existing methods. To further improve the denoising performance, they employ a linear model to retrieve the remaining image details from the method noise and add them back to the denoised image using an optimal approach based on Stein's unbiased risk estimate. Experiments on standard test images demonstrate that the proposed method outperforms the conventional bilateral filter and its major state-of-the-art variants.