
Showing papers on "Bilateral filter" published in 2015


Journal ArticleDOI
TL;DR: The proposed method fuses source images by weighted average, using weights computed from detail images extracted from the source images with CBF; it shows good performance, and the visual quality of its fused images is superior to that of other methods.
Abstract: Like the bilateral filter (BF), the cross bilateral filter (CBF) considers both gray-level similarities and geometric closeness of neighboring pixels without smoothing edges, but it uses one image to build the kernel and the other to filter, and vice versa. In this paper, it is proposed to fuse source images by weighted average using weights computed from the detail images that are extracted from the source images using CBF. The performance of the proposed method has been verified on several pairs of multisensor and multifocus images and compared with existing methods visually and quantitatively. It is found that none of the methods shows consistent performance across all the performance metrics. Compared to them, however, the proposed method performs well in most cases. Further, the visual quality of the fused image produced by the proposed method is superior to that of the other methods.
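A minimal NumPy sketch of this fusion scheme, assuming two registered grayscale float images in [0, 1]; the per-pixel absolute-detail weighting is a simplification of the paper's weight computation, and all names and parameters are illustrative:

```python
import numpy as np

def cross_bilateral(guide, src, radius=5, sigma_s=2.0, sigma_r=0.1):
    """Filter `src` with range weights computed on `guide` (cross/joint BF)."""
    guide = guide.astype(float)
    src = src.astype(float)
    r = radius
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_g = np.pad(guide, r, mode='reflect')
    pad_s = np.pad(src, r, mode='reflect')
    out = np.zeros_like(src)
    for i in range(src.shape[0]):
        for j in range(src.shape[1]):
            g = pad_g[i:i + 2*r + 1, j:j + 2*r + 1]
            s = pad_s[i:i + 2*r + 1, j:j + 2*r + 1]
            w = spatial * np.exp(-(g - guide[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (w * s).sum() / w.sum()
    return out

def cbf_fuse(a, b):
    """Fuse two sources from their CBF detail layers (simplified weights)."""
    detail_a = a - cross_bilateral(b, a)   # kernel from b, filtering a
    detail_b = b - cross_bilateral(a, b)   # kernel from a, filtering b
    wa, wb = np.abs(detail_a), np.abs(detail_b)
    return (wa * a + wb * b) / (wa + wb + 1e-12)
```

A brute-force CBF is used here for clarity; any fast bilateral implementation could be substituted.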

417 citations


Journal ArticleDOI
TL;DR: The cost aggregation problem is re-examined and a non-local solution is proposed that guarantees that depth edges will be preserved when the temporal coherency between all the video frames is considered.
Abstract: Matching cost aggregation is one of the oldest and still most popular methods for stereo correspondence. While effective and efficient, cost aggregation methods typically aggregate the matching cost by summing/averaging over a user-specified, local support region. This is obviously only locally optimal, and the computational complexity of the full-kernel implementation usually depends on the region size. In this paper, the cost aggregation problem is re-examined and a non-local solution is proposed. The matching cost values are aggregated adaptively, based on pixel similarity, on a tree structure derived from the stereo image pair so as to preserve depth edges. The nodes of this tree are all the image pixels, and the edges are all the edges between nearest neighboring pixels. The similarity between any two pixels is decided by their shortest distance on the tree. The proposed method is non-local, as every node receives support from all other nodes on the tree. The proposed method can be naturally extended to the time domain for enforcing temporal coherence. Unlike previous methods, the non-local property guarantees that depth edges will be preserved when the temporal coherency between all the video frames is considered. A non-local weighted median filter is also proposed based on the non-local cost aggregation algorithm. It has been demonstrated to outperform all local weighted median filters on disparity/depth upsampling and refinement.
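A compact sketch of the two-pass aggregation used in this line of work, assuming the spanning tree has already been extracted from the image and its nodes are topologically ordered (parent index below child index); names are illustrative:

```python
import numpy as np

def tree_aggregate(cost, parent, weight, sigma=0.1):
    """Non-local cost aggregation on a tree in two passes.

    cost:   (n,) raw matching cost per node (pixel)
    parent: (n,) parent index per node, -1 for the root; nodes are
            assumed topologically ordered (parent index < child index)
    weight: (n,) edge weight between each node and its parent (unused at root)
    """
    n = cost.shape[0]
    S = np.exp(-weight / sigma)          # similarity along each parent edge
    up = cost.astype(float).copy()
    # leaf-to-root pass: fold each subtree into its root
    for v in range(n - 1, 0, -1):
        up[parent[v]] += S[v] * up[v]
    # root-to-leaf pass: fold in the contribution from the rest of the tree
    agg = up.copy()
    for v in range(1, n):
        agg[v] = S[v] * agg[parent[v]] + (1.0 - S[v] ** 2) * up[v]
    return agg
```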

176 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel mesh normal filtering framework based on the joint bilateral filter, with applications in mesh denoising, and computes the guidance normal on a face using the neighboring patch with the most consistent normal orientations, which provides a reliable estimate of the true normal even at a high level of noise.
Abstract: The joint bilateral filter is a variant of the standard bilateral filter, where the range kernel is evaluated using a guidance signal instead of the original signal. It has been successfully applied to various image processing problems, where it provides more flexibility than the standard bilateral filter to achieve high-quality results. On the other hand, its success is heavily dependent on the guidance signal, which should ideally provide a robust estimate of the features of the output signal. Such a guidance signal is not always easy to construct. In this paper, we propose a novel mesh normal filtering framework based on the joint bilateral filter, with applications in mesh denoising. Our framework is designed as a two-stage process: first, we apply joint bilateral filtering to the face normals, using a properly constructed normal field as the guidance; afterwards, the vertex positions are updated according to the filtered face normals. We compute the guidance normal on a face using a neighboring patch with the most consistent normal orientations, which provides a reliable estimate of the true normal even at a high level of noise. The effectiveness of our approach is validated by extensive experimental results.
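A sketch of the first stage only (joint bilateral filtering of face normals against a guidance field), assuming precomputed guidance normals and a face adjacency list in which each face lists itself; the patch-based guidance construction and the vertex-update stage are omitted, and all names are illustrative:

```python
import numpy as np

def filter_face_normals(normals, guidance, centroids, neighbors,
                        sigma_c=1.0, sigma_r=0.35, iters=3):
    """Joint bilateral filtering of face normals.

    normals, guidance, centroids: (m, 3) arrays, one row per face
    neighbors: list of index arrays; neighbors[f] is assumed to contain f
    """
    n = normals.astype(float).copy()
    for _ in range(iters):
        out = np.zeros_like(n)
        for f, nbrs in enumerate(neighbors):
            d = np.linalg.norm(centroids[nbrs] - centroids[f], axis=1)
            r = np.linalg.norm(guidance[nbrs] - guidance[f], axis=1)
            w = (np.exp(-d**2 / (2 * sigma_c**2))
                 * np.exp(-r**2 / (2 * sigma_r**2)))
            out[f] = (w[:, None] * n[nbrs]).sum(axis=0)
            out[f] /= np.linalg.norm(out[f]) + 1e-12  # keep unit length
        n = out
    return n
```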

146 citations


Journal ArticleDOI
TL;DR: A robust and effective specular highlight removal method is proposed, based on a key observation: the maximum fraction of the diffuse colour component in diffuse local patches of colour images changes smoothly, so specular pixels can be treated as noise.
Abstract: A robust and effective specular highlight removal method is proposed in this paper. It is based on a key observation: the maximum fraction of the diffuse colour component in diffuse local patches in colour images changes smoothly. The specular pixels can thus be treated as noise. This property allows the specular highlights to be removed in an image denoising fashion: an edge-preserving low-pass filter (e.g., the bilateral filter) can be used to smooth the maximum fraction of the colour components of the original image, removing the noise contributed by the specular pixels. Recent developments in fast bilateral filtering techniques enable the proposed method to run over 200× faster than state-of-the-art techniques on a standard CPU, which differentiates it from previous work.
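A simplified sketch of the idea under the dichromatic model with white illumination: the per-pixel maximum chromaticity is smoothed iteratively (keeping the larger value, since specularity only lowers it), and the implied specular component is subtracted. A plain Gaussian stands in for the paper's fast bilateral filter, and all parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_highlights(img, iters=5, sigma=3.0):
    """img: HxWx3 float RGB in [0, 1]. Returns an estimated diffuse image."""
    I_sum = img.sum(axis=2) + 1e-12
    I_max = img.max(axis=2)
    lam = I_max / I_sum                 # noisy maximum chromaticity
    for _ in range(iters):
        # specularity only lowers the max chromaticity, so keep the larger
        # of the current estimate and its smoothed version
        lam = np.maximum(lam, gaussian_filter(lam, sigma))
    denom = 1.0 - 3.0 * lam
    spec = np.zeros_like(lam)
    safe = np.abs(denom) > 1e-3         # skip near-achromatic pixels
    spec[safe] = (I_max[safe] - lam[safe] * I_sum[safe]) / denom[safe]
    spec = np.clip(spec, 0.0, None)     # estimated specular layer
    return np.clip(img - spec[..., None], 0.0, 1.0)
```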

92 citations


Posted Content
TL;DR: A new “bilateral inception” module that can be inserted in existing CNN architectures and performs bilateral filtering, at multiple feature-scales, between superpixels in an image.
Abstract: In this paper we propose a CNN architecture for semantic image segmentation. We introduce a new 'bilateral inception' module that can be inserted in existing CNN architectures and performs bilateral filtering, at multiple feature-scales, between superpixels in an image. The feature spaces for bilateral filtering and other parameters of the module are learned end-to-end using standard backpropagation techniques. The bilateral inception module addresses two issues that arise with general CNN segmentation architectures. First, this module propagates information between (super) pixels while respecting image edges, thus using the structured information of the problem for improved results. Second, the layer recovers a full resolution segmentation result from the lower resolution solution of a CNN. In the experiments, we modify several existing CNN architectures by inserting our inception module between the last CNN (1x1 convolution) layers. Empirical results on three different datasets show reliable improvements not only in comparison to the baseline networks, but also in comparison to several dense-pixel prediction techniques such as CRFs, while being competitive in time.
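A NumPy sketch of the module's core operation at inference time, dense Gaussian bilateral filtering between superpixel feature vectors; in the paper the feature spaces and bandwidths are learned end-to-end, which this forward-only sketch with illustrative names does not cover:

```python
import numpy as np

def bilateral_inception_step(features, values, theta=1.0):
    """Dense bilateral filtering between superpixels.

    features: (n, d) feature vector per superpixel (e.g., mean color + position)
    values:   (n, c) per-superpixel activations to filter
    theta:    feature-space bandwidth (learned end-to-end in the paper)
    """
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * theta ** 2))   # pairwise Gaussian affinities
    W /= W.sum(axis=1, keepdims=True)      # normalize each output superpixel
    return W @ values                      # filtered activations
```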

80 citations


Journal ArticleDOI
TL;DR: Fingertip OCT images indicated that the proposed NLM filter provides superior denoising performance among the filters in terms of the contrast-to-noise ratio (CNR), the equivalent number of looks (ENL), and the speckle suppression index (SSI).
Abstract: The non-local means (NLM) filter is one of the state-of-the-art denoising filters. It exploits the presence of similar features in an image and averages those similar features to remove noise. However, a conventional NLM filter shows somewhat inferior noise reduction around edges, suffering from low efficiency in collecting similar features to be averaged. To overcome this limitation, we propose an NLM filter with double Gaussian anisotropic kernels, as a substitute for the conventional homogeneous kernel, to effectively remove noise from OCT images corrupted by speckle. The proposed filter was evaluated by comparison with various denoising filters, such as the conventional NLM filter, the median filter, the bilateral filter, and the Wiener filter. Fingertip OCT images processed with the different denoising filters indicated that the proposed NLM filter provides superior denoising performance among the filters in terms of the contrast-to-noise ratio (CNR), the equivalent number of looks (ENL), and the speckle suppression index (SSI). A human retina OCT image was also used to compare the noise reduction performance of the different filters. In addition, the denoising performance of the proposed NLM filter was investigated on synthetic images, for fair comparison among the filters, by calculating the peak signal-to-noise ratio (PSNR). The proposed NLM filter outperformed the conventional NLM filter as well as the other filters.
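For reference, a plain (slow, brute-force) NLM filter; the paper's contribution is to replace the homogeneous patch kernel with double anisotropic Gaussians steered along local edges, which is exposed here only as a pluggable `kernel` mask. All parameters are illustrative:

```python
import numpy as np

def nlm(img, patch=3, search=7, h=0.1, kernel=None):
    """Brute-force NLM; `kernel` is a (patch x patch) mask weighting the
    patch difference (a homogeneous box by default; anisotropic Gaussian
    masks could be passed in instead)."""
    p, s = patch // 2, search // 2
    if kernel is None:
        kernel = np.ones((patch, patch)) / patch**2
    pad = np.pad(img.astype(float), p + s, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            num = den = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    cand = pad[ci + di - p:ci + di + p + 1,
                               cj + dj - p:cj + dj + p + 1]
                    w = np.exp(-(kernel * (ref - cand) ** 2).sum() / h**2)
                    num += w * pad[ci + di, cj + dj]
                    den += w
            out[i, j] = num / den
    return out
```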

78 citations


Journal ArticleDOI
TL;DR: The CBLF achieves a near-optimal performance tradeoff via two key ideas, an approximate Gaussian range kernel obtained through Fourier analysis and a period-length optimization, and it significantly outperforms state-of-the-art algorithms in terms of approximation accuracy, computational complexity, and usability.
Abstract: This paper presents an efficient constant-time bilateral filter, called the compressive bilateral filter (CBLF), that produces a near-optimal tradeoff between approximation accuracy and computational complexity without any complicated parameter adjustment. Constant-time means that the computational complexity is independent of the filter window size. Although many constant-time bilateral filters have been proposed, each pursuing a more efficient performance tradeoff, they have paid less attention to the optimal tradeoff achievable within their own frameworks. It is important to discuss this question, because it can reveal whether or not a constant-time algorithm still has plenty of room for improvement in its performance tradeoff. This paper tackles the question from the viewpoint of compressibility and highlights the fact that state-of-the-art algorithms have not yet reached the optimal tradeoff. The CBLF achieves a near-optimal tradeoff via two key ideas: 1) an approximate Gaussian range kernel obtained through Fourier analysis and 2) a period-length optimization. Experiments demonstrate that the CBLF significantly outperforms state-of-the-art algorithms in terms of approximation accuracy, computational complexity, and usability.
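A sketch of the underlying shiftable-kernel idea on a single-channel image scaled to [0, 1]: the Gaussian range kernel is periodized and truncated to a cosine series, so each term reduces to spatial Gaussian convolutions of cosine/sine images, giving per-pixel cost independent of the window size. The order K and period T are fixed here for illustration, whereas the paper optimizes the period:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compressive_bilateral(img, sigma_s=4.0, sigma_r=0.1, K=8, T=2.0):
    """Constant-time bilateral filter via a truncated cosine (Fourier)
    expansion of the Gaussian range kernel; img is 2-D, scaled to [0, 1]."""
    omega = 2.0 * np.pi / T
    num = np.zeros_like(img, dtype=float)
    den = np.zeros_like(img, dtype=float)
    for k in range(K + 1):
        # Fourier coefficient of the periodized Gaussian range kernel
        a_k = (1.0 if k == 0 else 2.0) * np.exp(-0.5 * (k * omega * sigma_r) ** 2)
        c, s = np.cos(k * omega * img), np.sin(k * omega * img)
        num += a_k * (c * gaussian_filter(c * img, sigma_s)
                      + s * gaussian_filter(s * img, sigma_s))
        den += a_k * (c * gaussian_filter(c, sigma_s)
                      + s * gaussian_filter(s, sigma_s))
    return num / (den + 1e-12)
```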

78 citations


Journal ArticleDOI
TL;DR: The guided bilateral filter is proposed; it is iterative and generic, inherits the robustness properties of the robust bilateral filter, uses a guide image, and can handle non-Gaussian noise on the image to be filtered.
Abstract: The bilateral filter and its variants, such as the joint/cross bilateral filter, are well-known edge-preserving image smoothing tools used in many applications. The reasons for this success are its simple definition and the many possible adaptations. The bilateral filter is known to be related to robust estimation. This link is lost by the ad hoc introduction of the guide image in the joint/cross bilateral filter. We propose here a new way to derive the joint/cross bilateral filter as a particular case of a more generic filter, which we name the guided bilateral filter. This new filter is iterative, generic, inherits the robustness properties of the robust bilateral filter, and uses a guide image. The link with robust estimation allows us to relate the filter parameters to the statistics of the input images. A scheme based on graduated nonconvexity is proposed, which allows convergence to a good local minimum even when the cost function is nonconvex. With this scheme, the guided bilateral filter can handle non-Gaussian noise on the image to be filtered. A complementary scheme is also proposed to handle non-Gaussian noise on the guide image, even when the two are strongly correlated. This allows the guided bilateral filter to handle situations with more noise than the joint/cross bilateral filter can cope with and, as shown experimentally, leads to high peak signal-to-noise ratio values.
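A schematic reading of the iteration on grayscale float inputs: each pass is a joint bilateral average whose additional robust weight compares the current estimate with the noisy samples, with the robustness scale annealed from loose to tight in the spirit of graduated nonconvexity. This is an illustrative approximation with made-up parameters, not the paper's exact minimization scheme:

```python
import numpy as np

def guided_bilateral(img, guide, radius=3, sigma_s=2.0, sigma_g=0.1,
                     scales=(1.0, 0.5, 0.25, 0.1)):
    """Joint bilateral averaging with an annealed robust data weight."""
    img = img.astype(float)
    guide = guide.astype(float)
    r = radius
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    w_s = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_i = np.pad(img, r, mode='reflect')
    pad_g = np.pad(guide, r, mode='reflect')
    x = img.copy()
    for sigma_d in scales:              # tighten the robust scale gradually
        out = np.zeros_like(x)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                gi = pad_g[i:i + 2*r + 1, j:j + 2*r + 1]
                ii = pad_i[i:i + 2*r + 1, j:j + 2*r + 1]
                w = (w_s
                     * np.exp(-(gi - guide[i, j])**2 / (2 * sigma_g**2))
                     * np.exp(-(ii - x[i, j])**2 / (2 * sigma_d**2)))
                out[i, j] = (w * ii).sum() / (w.sum() + 1e-12)
        x = out
    return x
```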

77 citations


Journal ArticleDOI
Jun Wang, Xiucheng Yang, Xuebin Qin, Xin Ye, Qiming Qin
TL;DR: This letter presents a graph search-based perceptual grouping approach that hierarchically groups previously detected line segments into candidate rectangular buildings; the method has the potential to be adopted in online applications and industrial use in the near future.
Abstract: This letter presents a new approach for rapid automatic building extraction from very high resolution (VHR) optical satellite imagery. The proposed method conducts building extraction based on distinctive image primitives such as lines and line intersections. The optimized framework consists of three stages. First, an edge-preserving bilateral filter is adopted to reduce noise and enhance building edge contrast in preprocessing. Second, a state-of-the-art line segment detector called EDLines is introduced for real-time, accurate extraction of building line segments. Finally, we present a graph search-based perceptual grouping approach to hierarchically group the previously detected line segments into candidate rectangular buildings. The recursive process is improved through efficient examination of geometrical information with line linking and closed-contour search, in order to obtain more reasonable omission and commission rates in building contour grouping. Extensive experiments performed on VHR optical QuickBird imagery justify the effectiveness and robustness of the proposed linear-time procedure, with an overall accuracy of 80.9% and completeness of 87.3%. The method does not require user intervention and thereby has the potential to be adopted in online applications and industrial use in the near future.

72 citations


Journal ArticleDOI
TL;DR: The enhanced images, as a result of implementing the proposed approach, are characterized by relatively genuine color, increased contrast and brightness, reduced noise level, and better visibility.
Abstract: Poor visibility due to the effects of light absorption and scattering is challenging for processing underwater images. We propose an approach based on dehazing and color correction algorithms for underwater image enhancement. First, a simple dehazing algorithm is applied to remove the effects of haze in the underwater image. Second, color compensation, histogram equalization, saturation, and intensity stretching are used to improve the contrast, brightness, color, and visibility of the underwater image. Furthermore, bilateral filtering is utilized to address the noise caused by the physical properties of the medium and by the histogram equalization algorithm. To evaluate the performance of the proposed approach, we compared our results with six existing methods using subjective and objective techniques as well as color cast tests. The results show that the proposed approach outperforms the six existing methods. The enhanced images produced by the proposed approach are characterized by relatively genuine color, increased contrast and brightness, reduced noise level, and better visibility.

67 citations


Journal ArticleDOI
26 Oct 2015
TL;DR: This work presents an efficient method to process geometric features at different scales, based on a novel rolling-guidance normal filter that applies a joint bilateral filter to face normals at a specified scale, empirically smoothing small-scale geometric features while preserving large-scale ones.
Abstract: 3D geometric features constitute the rich details of polygonal meshes. Their analysis and editing can lead to vivid appearance of shapes and a better understanding of the underlying geometry for shape processing and analysis. Traditional mesh smoothing techniques mainly focus on noise filtering; they cannot distinguish different scales of features well, and may even mix them up. We present an efficient method to process geometric features at different scales, based on a novel rolling-guidance normal filter. Given a 3D mesh, our method iteratively applies a joint bilateral filter to face normals at a specified scale, which empirically smooths small-scale geometric features while preserving large-scale features. Our method recovers the mesh from the filtered face normals by a modified Poisson-based gradient deformation that yields better surface quality than existing methods. We demonstrate the effectiveness and superiority of our method on a series of geometry processing tasks, including geometry texture removal and enhancement, coating transfer, mesh segmentation, and level-of-detail meshing.

Journal ArticleDOI
TL;DR: This paper formulates both the median filter and bilateral filter as a cost volume aggregation problem whose computational complexity is independent of the filter kernel size and results in a general bilateral filter that can have arbitrary spatial and range filter kernels.
Abstract: This paper formulates both the median filter and the bilateral filter as a cost volume aggregation problem whose computational complexity is independent of the filter kernel size. Unlike most previous works, the proposed framework results in a general bilateral filter that can have arbitrary spatial and arbitrary range filter kernels. This bilateral filter takes about 3.5 s to exactly filter a one-megapixel 8-bit grayscale image on a 3.2 GHz Intel Core i7 CPU. In practice, the intensity/range and spatial domains can be downsampled to improve efficiency. This compression can maintain very high accuracy (e.g., 40 dB) while being over 100× faster.
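A minimal sketch of the cost-volume view for the plain median filter on an 8-bit grayscale image: aggregating the L1 cost per candidate intensity with a box filter costs O(1) per pixel in the kernel size, and the paper generalizes the aggregator and the kernels from there. Names are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def median_via_cost_volume(img, size=7, levels=256):
    """Median filter recast as cost-volume aggregation: for each candidate
    intensity i, box-aggregate the cost |I - i| and keep the argmin."""
    imgf = img.astype(np.float32)
    best_cost = np.full(img.shape, np.inf, dtype=np.float32)
    out = np.zeros(img.shape, dtype=np.uint8)
    for i in range(levels):
        agg = uniform_filter(np.abs(imgf - i), size=size)
        better = agg < best_cost
        best_cost[better] = agg[better]
        out[better] = i
    return out
```

Swapping the box aggregator for an edge-aware one and the L1 cost for other range kernels yields the paper's general weighted median and bilateral filters.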

Journal ArticleDOI
Zhang Ju, Lin Guangkuo, Wu Lili, Chen Wang, Yun Cheng
TL;DR: An improved despeckling method for medical ultrasound images is proposed, based on the wavelet transform and a fast bilateral filter, which not only has better speckle-reduction performance than other methods but also preserves image details such as the edges of lesions.

Journal ArticleDOI
TL;DR: The proposed iterative bilateral filter improves the denoising efficiency, preserves the fine structures and also reduces the bias due to Rician noise.
Abstract: Noise removal from magnetic resonance images is important for further processing and visual analysis. The bilateral filter is known for its effectiveness in edge-preserving image denoising. In this paper, an iterative bilateral filter for removing Rician noise in magnitude magnetic resonance images is proposed. The proposed iterative bilateral filter improves denoising efficiency, preserves fine structures, and also reduces the bias due to Rician noise. The visual and diagnostic quality of the image is well preserved. Quantitative analysis based on standard metrics, such as the peak signal-to-noise ratio and the mean structural similarity index, shows that the proposed method performs better than other recently proposed denoising methods for MRI.
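A schematic of one common way to pair bilateral filtering with Rician bias correction, filtering the squared magnitude (for which E[M^2] = A^2 + 2 sigma_n^2) and removing the bias before the square root; this is an assumed reading of the bias-reduction step rather than the paper's exact iteration, and parameters are illustrative:

```python
import numpy as np

def bilateral(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Brute-force single-channel bilateral filter."""
    r = radius
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    w_s = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(img, r, mode='reflect')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2*r + 1, j:j + 2*r + 1]
            w = w_s * np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (w * win).sum() / w.sum()
    return out

def iterative_bilateral_rician(mag, sigma_n, iters=3, sigma_r=None):
    """Filter the squared magnitude iteratively, then remove the Rician
    bias 2*sigma_n**2 and take the square root."""
    sq = mag.astype(float) ** 2
    if sigma_r is None:
        sigma_r = 2.0 * sigma_n * (mag.mean() + sigma_n)  # rough noise scale of M^2
    for _ in range(iters):
        sq = bilateral(sq, sigma_r=sigma_r)
    return np.sqrt(np.maximum(sq - 2.0 * sigma_n**2, 0.0))
```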

Journal ArticleDOI
TL;DR: A novel segmentation method for extracting objects of interest (OOI) in 3D ultrasound images, which takes advantage of graph theory to construct a 3D graph and merges sub-graphs into larger ones during the segmentation process, indicating improved performance for potential clinical applications.

Posted Content
TL;DR: This paper provides a concrete introduction to edge-preserving filters, tracing their development from the heat diffusion equation to the present, with an overview of their numerous applications, as well as mathematical analysis, various efficient and optimized implementations, and their interrelationships.
Abstract: Edge-preserving filters preserve edges and their information while blurring an image. In other words, they are used to smooth an image while reducing blurring effects across edges, such as halos and phantoms. They are nonlinear in nature. Examples are the bilateral filter, the anisotropic diffusion filter, the guided filter, the trilateral filter, etc. This family of filters is therefore very useful for reducing noise in an image, making it much in demand in computer vision and computational photography applications such as denoising, video abstraction, demosaicing, optical-flow estimation, stereo matching, tone mapping, style transfer, and relighting. This paper provides a concrete introduction to edge-preserving filters, starting from the heat diffusion equation and tracing their development to the present: an overview of their numerous applications, as well as mathematical analysis, various efficient and optimized implementations, and their interrelationships, with a focus on preserving boundaries, spikes, and canyons in the presence of noise. Furthermore, it provides a realistic notion of efficient implementation, with research scope for hardware realization for further acceleration.
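Since the survey's starting point is the heat diffusion equation, a minimal sketch of its classic edge-stopping variant (Perona-Malik anisotropic diffusion) illustrates the family; the wrap-around boundary handling of np.roll and the parameter values are illustrative simplifications:

```python
import numpy as np

def perona_malik(img, iters=20, kappa=0.1, dt=0.2):
    """Perona-Malik diffusion: the heat equation with an edge-stopping
    conductance g = exp(-(|grad I| / kappa)**2)."""
    u = img.astype(float).copy()
    for _ in range(iters):
        # forward differences toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance suppresses diffusion across strong edges
        u += dt * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u
```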

Journal ArticleDOI
TL;DR: A novel trilateral filter (TF)-based ASW method that remedies ambiguities by considering the possible disparity discontinuities through color discontinuity boundaries, i.e., the boundary strength between two pixels, which is measured by a local energy model.
Abstract: Adaptive support weight (ASW) methods represent the state of the art in local stereo matching, and the bilateral filter-based ASW method achieves outstanding performance. However, this method fails to resolve the ambiguity induced by nearby pixels at different disparities but with similar colors. In this paper, we introduce a novel trilateral filter (TF)-based ASW method that remedies such ambiguities by considering possible disparity discontinuities through color discontinuity boundaries, i.e., the boundary strength between two pixels, which is measured by a local energy model. We also present a recursive TF-based ASW method whose computational complexity is O(N) for the cost aggregation step and O(N log2(N)) for boundary detection, where N denotes the input image size. This complexity is thus independent of the support window size. The recursive TF-based method is a non-local cost aggregation strategy. Experimental evaluation on the Middlebury benchmark shows that the proposed method, whose average error rate is 4.95%, outperforms other local methods in terms of accuracy. Equally, the average runtime of the proposed TF-based cost aggregation is roughly 260 ms on a 3.4 GHz Intel Core i7 CPU, which is comparable with state-of-the-art efficiency.

Journal ArticleDOI
TL;DR: A novel single-image based dehazing framework is proposed to remove haze artifacts from images through local atmospheric light estimation using a novel strategy based on a physical model where the extreme intensity of each RGB pixel is used to define an initial atmospheric veil.

Journal ArticleDOI
Xiang Yan, Hanlin Qin, Li Jia, Huixin Zhou, Zong Jingguo
TL;DR: This paper proposes a novel infrared and visible image fusion method based on spectral graph wavelet transform (SGWT) and bilateral filter and demonstrates that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.
Abstract: Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and obtain a reliable and accurate description of scenes. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and the bilateral filter. The main novelty of this study is that SGWT is used for image fusion. First, source images are decomposed by SGWT in its transform domain; the proposed approach not only effectively preserves the details of the different source images but also represents their irregular areas well. Second, a novel weighted average method based on the bilateral filter is proposed to fuse the low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.

Posted Content
TL;DR: In this paper, the first stage of many stereo algorithms, matching cost computation, is addressed by learning a similarity measure on small image patches using a convolutional neural network, and then a series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter.
Abstract: We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.

Proceedings ArticleDOI
01 May 2015
TL;DR: A simple pre-processing step is reported that can substantially improve the denoising performance of the bilateral filter, at almost no additional cost, and the optimally-weighted bilateral filter is competitive with the computation-intensive non-local means filter.
Abstract: The bilateral filter is known to be quite effective in denoising images corrupted with small dosages of additive Gaussian noise. The denoising performance of the filter, however, is known to degrade quickly with the increase in noise level. Several adaptations of the filter have been proposed in the literature to address this shortcoming, but often at a substantial computational overhead. In this paper, we report a simple pre-processing step that can substantially improve the denoising performance of the bilateral filter, at almost no additional cost. The modified filter is designed to be robust at large noise levels, and often tends to perform poorly below a certain noise threshold. To get the best of the original and the modified filter, we propose to combine them in a weighted fashion, where the weights are chosen to minimize (a surrogate of) the oracle mean-squared-error (MSE). The optimally-weighted filter is thus guaranteed to perform better than either of the component filters in terms of the MSE, at all noise levels. We also provide a fast algorithm for the weighted filtering. Visual and quantitative denoising results on standard test images are reported which demonstrate that the improvement over the original filter is significant both visually and in terms of PSNR. Moreover, the denoising performance of the optimally-weighted bilateral filter is competitive with the computation-intensive non-local means filter.

Journal ArticleDOI
TL;DR: This work proposes a fast approximation to the bilateral filter for color images that combines color sparseness and local statistics, yielding a fast and accurate bilateral filter approximation that obtains state-of-the-art results.
Abstract: The property of smoothing while preserving edges makes the bilateral filter a very popular image processing tool. However, its non-linear nature results in a computationally costly operation. Various works propose fast approximations to the bilateral filter, but the majority do not generalize to vector input, as is the case with color images. We propose a fast approximation to the bilateral filter for color images. The filter is based on two ideas. First, the number of colors that occur in a single natural image is limited. We exploit this color sparseness to rewrite the initial non-linear bilateral filter as a number of linear filter operations. Second, we impose a statistical prior on the image values that are locally present within the filter window. We show that this statistical prior leads to a closed-form solution of the bilateral filter. Finally, we combine both ideas into a single fast and accurate bilateral filter for color images. Experimental results show that our bilateral filter based on the local prior yields an extremely fast bilateral filter approximation, but with limited accuracy, which has potential application in real-time video filtering. Our bilateral filter combining color sparseness and local statistics yields a fast and accurate bilateral filter approximation and obtains state-of-the-art results.
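A sketch of the color-sparseness idea on an HxWx3 float RGB image in [0, 1]: the image is hard-quantized to K representative colors (a random palette stands in for the paper's clustering), one linear Gaussian pass is run per palette color, and each pixel reads its result from its own color's pass. All names and parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sparse_color_bilateral(img, K=16, sigma_s=4.0, sigma_r=0.15, seed=0):
    """Approximate color bilateral filter via a small set of palette colors."""
    H, W, _ = img.shape
    rng = np.random.default_rng(seed)
    palette = img.reshape(-1, 3)[rng.choice(H * W, size=K, replace=False)]
    d2 = ((img[..., None, :] - palette) ** 2).sum(-1)   # (H, W, K) distances
    label = d2.argmin(-1)                               # nearest palette color
    out = np.zeros_like(img)
    for k in range(K):
        w = np.exp(-d2[..., k] / (2 * sigma_r ** 2))    # range weights to c_k
        num = np.stack([gaussian_filter(w * img[..., c], sigma_s)
                        for c in range(3)], axis=-1)
        den = gaussian_filter(w, sigma_s)[..., None] + 1e-12
        res = num / den
        out[label == k] = res[label == k]               # read own color's pass
    return out
```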

Journal ArticleDOI
TL;DR: This paper considers the image super-resolution (SR) reconstruction problem and proposes a novel approach, based on a regularized criterion, that efficiently overcomes blurring while removing noise.
Abstract: In this paper, we consider the image super-resolution (SR) reconstruction problem. The main goal is to obtain a high-resolution (HR) image from a set of low-resolution (LR) ones. To that end, we propose a novel approach based on a regularized criterion. The criterion is composed of the classical generalized total variation (TV) with an added bilateral total variation (BTV) regularizer. The core of our approach is the derivation and use of an efficient combined deblurring and denoising stage applied to the high-resolution image. We demonstrate the existence of minimizers of the combined variational problem in the bounded variation space, and we propose a minimization algorithm. The numerical results obtained with our approach are compared with the classical robust super-resolution (RSR) algorithm and SR with TV regularization. They confirm that the proposed combined approach efficiently overcomes the blurring effect while removing noise.
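For reference, the bilateral total variation regularizer is commonly written as follows (in the form popularized by Farsiu et al.; the paper's exact weighting may differ), where S_x^l and S_y^m shift the image by l and m pixels and 0 < alpha < 1 decays the penalty with distance:

```latex
R_{\mathrm{BTV}}(X) \;=\; \sum_{l=-P}^{P} \sum_{m=-P}^{P}
\alpha^{|l|+|m|} \, \bigl\| X - S_x^{l} S_y^{m} X \bigr\|_{1}
```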

Patent
04 May 2015
TL;DR: In this article, the pixel data of the blue pixel B of the first pair of RB pixels is compared with the pixel data of the blue pixel of the second pair of RB pixels to detect whether or not anomalous oblique incident light is incident on the color image pickup device.
Abstract: In an aspect of the invention, the plural pixels constituting a color image pickup device include a first pair of RB pixels and a second pair of RB pixels, each consisting of a red pixel R having a red color filter and a blue pixel B having a blue color filter adjacent to each other in the horizontal and vertical directions. The positions of the red pixel R and the blue pixel B are opposite to each other between the first pair of RB pixels and the second pair of RB pixels. Pixel data of the blue pixel B of the first pair of RB pixels is compared with pixel data of the blue pixel of the second pair of RB pixels to detect whether or not anomalous oblique incident light is incident on the color image pickup device.

Proceedings ArticleDOI
01 Aug 2015
TL;DR: A comparison of thyroid nodule segmentation results with and without the bilateral filter is presented; the combination of the bilateral filter and active contour shows better results, with nodule edges firm and clear.
Abstract: The use of ultrasound imaging at various resolutions as a modality for thyroid nodule examination is growing rapidly. This is consistent with an increase in the incidence of thyroid malignancy. Thyroid ultrasound examination is considered superior to other medical imaging modalities because it is non-invasive, practical, inexpensive, and painless. In the examination process, a radiologist expects thyroid nodule areas to be localized precisely from the surrounding normal tissue, so that the boundary of the nodules can be seen to be regular or irregular. The boundary is one of the important features of malignancy that doctors use to make a diagnosis; most malignant nodules have unclear and irregular boundaries. An imprecise segmentation result will lead to misdiagnosis based on boundary characteristics. The active contour segmentation technique is applied to detect the boundary of thyroid nodules and separate them from normal tissue iteratively. However, the speckle noise characteristic of ultrasound images makes the segmentation process more complicated; it also results in interpretation errors and inaccurate diagnoses. The intricacy of an irregular nodule area cannot easily be resolved by changing the iteration count of the active contour. Therefore, a speckle noise reduction method is needed so that the nodule area is segmented properly. In this paper, speckle noise reduction is done with a bilateral filter. A comparison of thyroid nodule segmentation results with and without the bilateral filter is given at the end of this article. The combination of the bilateral filter and active contour shows better results, with nodule edges firm and clear.

Journal ArticleDOI
TL;DR: The authors' proposed adaptive GF (AGF) integrates the shift-variant technique, a part of ABF, into a guided filter to render crisp and sharpened outputs, and it is efficiently implemented using a fast linear-time algorithm.
Abstract: Enhancing sharpness and reducing noise in blurred, noisy images are crucial functions of image processing. Widely used unsharp-masking-based approaches suffer from halo artefacts and/or noise amplification, while noise- and halo-free adaptive bilateral filtering (ABF) is computationally intractable. In this study, the authors present an efficient sharpening algorithm inspired by guided image filtering (GF). The authors' proposed adaptive GF (AGF) integrates the shift-variant technique, a part of ABF, into a guided filter to render crisp and sharpened outputs. Experiments showed the superiority of the proposed algorithm over existing algorithms. The proposed AGF sharply enhances edges and textures without causing halo artefacts or noise amplification, and it is efficiently implemented using a fast linear-time algorithm.
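For context, the baseline (non-adaptive) guided filter of He et al. on grayscale floats, on which such adaptive variants build; this is the standard formulation with illustrative parameter values, not the paper's AGF itself:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Baseline guided filter: fit a per-window linear model src ~ a*guide + b,
    then average the coefficients over all windows covering each pixel."""
    mean = lambda x: uniform_filter(x, size=2 * radius + 1)
    mI, mP = mean(guide), mean(src)
    a = (mean(guide * src) - mI * mP) / (mean(guide * guide) - mI * mI + eps)
    b = mP - a * mI
    return mean(a) * guide + mean(b)
```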

Journal ArticleDOI
TL;DR: The proposed BLPP utilizes both spatial information and image content information, which results in a higher recognition rate; experimental results on the Salinas and Indian Pines hyperspectral databases demonstrate the effectiveness of BLPP.

Journal ArticleDOI
TL;DR: Quantitative results from real data show that the newly developed method reduces phase noise efficiently while outperforming the Goldstein, Baran, and empirical mode decomposition (EMD) filters in preserving edges in interferograms.
Abstract: The Goldstein filter is one of the most commonly used filters for synthetic aperture radar (SAR) interferograms. The level of noise after filtering is controlled by a filter parameter, alpha, whose value is determined by the pixels within the moving window. However, when different features exist within a single filter window, especially along their border, the value of alpha estimated from the pixels within the window can be inaccurate, which may result in blurred borders in the filtered interferograms. This letter proposes a modified Goldstein filter based on the adaptive-neighborhood technique. The idea is to filter each pixel of the interferogram within an adjusted filter patch. In this adjusted patch, the adaptive-neighborhood pixels retain their original phase values, while the "background" pixels are replaced by the mean value of the adaptive-neighborhood pixels. The Fourier transform of the complex phase is then applied to this adjusted filter patch. The difficulty of estimating the noise level near the borders of different features is reduced by this new filtering method. Quantitative results from real data show that the newly developed method reduces phase noise efficiently while outperforming the Goldstein, Baran, and empirical mode decomposition (EMD) filters in preserving edges in interferograms.
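A sketch of the classic Goldstein step on a single patch, weighting the spectrum by its smoothed magnitude raised to the power alpha; the adaptive-neighborhood modification (adjusting the patch contents beforehand) is not shown, and the smoothing and normalization details are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def goldstein_patch(phase_patch, alpha=0.5, smooth=3):
    """Goldstein filtering of one interferogram patch: weight the spectrum
    by its smoothed magnitude raised to the power alpha."""
    Z = np.fft.fft2(np.exp(1j * phase_patch))      # complex interferogram
    H = uniform_filter(np.abs(Z), size=smooth)     # smoothed spectrum magnitude
    Zf = Z * (H / (H.max() + 1e-12)) ** alpha      # normalized spectral weighting
    return np.angle(np.fft.ifft2(Zf))              # filtered phase
```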

Journal ArticleDOI
TL;DR: An architecture design of a hardware accelerator capable of expanding the dynamic range of low dynamic range images to their 32-bit high dynamic range counterparts is presented, achieving state-of-the-art performance in both implementations.
Abstract: In this paper, the architecture design of a hardware accelerator capable of expanding the dynamic range of low dynamic range images to their 32-bit high dynamic range counterparts is presented. The processor implements on-the-fly calculation of edge-preserving bilateral filtering and the luminance average, processing a full-HD (1920 × 1080 pixels) image in 16.6 ms (60 frames/s) on field-programmable logic (FPL) by handling the incoming pixels in streaming order, without frame buffers. In this way, the design avoids external DRAM and can be tightly coupled with acquisition devices, enabling the implementation of smart sensors. The processor's complexity can be configured with different area/speed ratios to meet the requirements of different target platforms, from FPLs to ASICs, achieving state-of-the-art performance in both implementations.