
Showing papers on "Bilateral filter published in 2009"


Journal ArticleDOI
TL;DR: In this article, the authors proposed a new signal processing analysis of the bilateral filter, which complements the recent studies that analyzed it as a PDE or as a robust statistical estimator.
Abstract: The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and fast versions have been proposed. Unfortunately, little is known about the accuracy of such accelerations. In this paper, we propose a new signal-processing analysis of the bilateral filter which complements the recent studies that analyzed it as a PDE or as a robust statistical estimator. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using downsampling in space and intensity. This affords a principled expression of accuracy in terms of bandwidth and sampling. The bilateral filter can be expressed as linear convolutions in this augmented space followed by two simple nonlinearities. This allows us to derive criteria for downsampling the key operations and achieving important acceleration of the bilateral filter. We show that, for the same running time, our method is more accurate than previous acceleration techniques. Typically, we are able to process a 2 megapixel image using our acceleration technique in less than a second, and have the result be visually similar to the exact computation that takes several tens of minutes. The acceleration is most effective with large spatial kernels. Furthermore, this approach extends naturally to color images and cross bilateral filtering.
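
To make the space-intensity ("grid") formulation concrete, here is a minimal NumPy sketch of the downsample/blur/slice pipeline for a grayscale image. It is an illustrative reading of the approach rather than the authors' implementation: the grid resolution choices, the Gaussian used to blur the grid, and linear interpolation at slicing time are assumptions made for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def bilateral_grid_filter(img, sigma_s=16, sigma_r=0.1):
    """Approximate bilateral filter via a downsampled space-intensity grid.

    img: 2D float array with values in [0, 1].
    sigma_s: spatial kernel size in pixels (also the spatial downsampling step).
    sigma_r: range kernel size in intensity units (also the intensity step).
    """
    h, w = img.shape
    # Grid dimensions after downsampling in space and intensity.
    gh = int(np.ceil(h / sigma_s)) + 2
    gw = int(np.ceil(w / sigma_s)) + 2
    gd = int(np.ceil(1.0 / sigma_r)) + 2

    # Splat: accumulate (intensity, weight) homogeneous pairs into the grid.
    data = np.zeros((gh, gw, gd))
    weights = np.zeros((gh, gw, gd))
    yy, xx = np.mgrid[0:h, 0:w]
    gy = np.round(yy / sigma_s).astype(int) + 1
    gx = np.round(xx / sigma_s).astype(int) + 1
    gz = np.round(img / sigma_r).astype(int) + 1
    np.add.at(data, (gy, gx, gz), img)
    np.add.at(weights, (gy, gx, gz), 1.0)

    # Blur: in the augmented space the bilateral filter becomes a plain
    # (linear) Gaussian convolution.
    data = gaussian_filter(data, sigma=1.0)
    weights = gaussian_filter(weights, sigma=1.0)

    # Slice: read back the filtered value at each pixel's (y, x, intensity)
    # position and divide by the weight channel (the nonlinearity).
    coords = np.stack([yy / sigma_s + 1, xx / sigma_s + 1, img / sigma_r + 1])
    num = map_coordinates(data, coords, order=1)
    den = map_coordinates(weights, coords, order=1)
    return num / np.maximum(den, 1e-8)
```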

789 citations


Proceedings ArticleDOI
20 Jun 2009
TL;DR: A new bilateral filtering algorithm with computational complexity invariant to filter kernel size (so-called O(1) or constant time in the literature), yielding a new class of constant-time bilateral filters that can have arbitrary spatial and arbitrary range kernels.
Abstract: We propose a new bilateral filtering algorithm with computational complexity invariant to filter kernel size, so-called O(1) or constant time in the literature. By showing that a bilateral filter can be decomposed into a number of constant time spatial filters, our method yields a new class of constant time bilateral filters that can have arbitrary spatial and arbitrary range kernels. In contrast, the currently available constant time algorithm requires the use of specific spatial or specific range kernels. Also, our algorithm lends itself to a parallel implementation, leading to the first real-time O(1) algorithm that we know of. Meanwhile, our algorithm yields higher quality results since we are effectively quantizing the range function instead of quantizing both the range function and the input image. Empirical experiments show that our algorithm not only gives higher PSNR, but is about 10× faster than the state-of-the-art. It also has a small memory footprint, needing only 2% of the memory required by the state-of-the-art to obtain the same quality as the exact filter on 8-bit images. We also show that our algorithm can be easily extended for O(1) median filtering. Our bilateral filtering algorithm was tested in a number of applications, including HD video conferencing, video abstraction, highlight removal, and multi-focus imaging.
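
A minimal sketch of the range-quantization idea follows: the range kernel is evaluated at a small set of intensity levels, each level is handled by a constant-time spatial filter (a box filter here), and the per-pixel output is interpolated between the two nearest levels. The level count, the box filter, and the Gaussian range kernel are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def o1_bilateral(img, sigma_r=0.1, box_size=15, n_levels=8):
    """Constant-time bilateral filter sketch: quantize the *range kernel*
    (not the image), run one O(1) spatial filter per quantization level,
    then interpolate between levels at each pixel.

    img: 2D float array in [0, 1]. The box filter stands in for any
    constant-time spatial kernel.
    """
    levels = np.linspace(0.0, 1.0, n_levels)
    responses = []
    for L in levels:
        # Range weights for a kernel centered at level L; both filters
        # below cost O(1) per pixel regardless of box_size.
        w = np.exp(-0.5 * ((img - L) / sigma_r) ** 2)
        num = uniform_filter(w * img, size=box_size)
        den = uniform_filter(w, size=box_size)
        responses.append(num / np.maximum(den, 1e-8))
    responses = np.stack(responses, axis=-1)            # (H, W, n_levels)

    # Linearly interpolate between the two nearest levels at each pixel,
    # i.e. the image itself is never quantized.
    pos = img * (n_levels - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n_levels - 1)
    frac = pos - lo
    out_lo = np.take_along_axis(responses, lo[..., None], axis=-1)[..., 0]
    out_hi = np.take_along_axis(responses, hi[..., None], axis=-1)[..., 0]
    return (1 - frac) * out_lo + frac * out_hi
```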

325 citations


Journal ArticleDOI
TL;DR: The results demonstrate that bilateral filtering incorporating a CT noise model can achieve a significantly better noise-resolution trade-off than a series of commercial reconstruction kernels and can be translated into substantial dose reduction.
Abstract: Purpose: To investigate a novel locally adaptive projection space denoising algorithm for low-dose CT data. Methods: The denoising algorithm is based on bilateral filtering, which smooths values using a weighted average in a local neighborhood, with weights determined according to both spatial proximity and intensity similarity between the center pixel and the neighboring pixels. This filtering is locally adaptive and can preserve important edge information in the sinogram, thus maintaining high spatial resolution. A CT noise model that takes into account the bowtie filter and patient-specific automatic exposure control effects is also incorporated into the denoising process. The authors evaluated the noise-resolution properties of bilateral filtering incorporating such a CT noise model in phantom studies and preliminary patient studies with contrast-enhanced abdominal CT exams. Results: On a thin wire phantom, the noise-resolution properties were significantly improved with the denoising algorithm compared to commercial reconstruction kernels. The noise-resolution properties on low-dose (40 mAs) data after denoising approximated those of conventional reconstructions at twice the dose level. A separate contrast plate phantom showed improved depiction of low-contrast plates with the denoising algorithm over conventional reconstructions when noise levels were matched. Similar improvement in noise-resolution properties was found on CT colonography data and on five abdominal low-energy (80 kV) CT exams. In each abdominal case, a board-certified subspecialized radiologist rated the denoised 80 kV images markedly superior in image quality compared to the commercially available reconstructions, and denoising improved the image quality to the point where the 80 kV images alone were considered to be of diagnostic quality. Conclusions: The results demonstrate that bilateral filtering incorporating a CT noise model can achieve a significantly better noise-resolution trade-off than a series of commercial reconstruction kernels. This improvement in noise-resolution properties can be used for improving image quality in CT and can be translated into substantial dose reduction.
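
The core weighting scheme can be sketched as below: a bilateral filter over the sinogram whose range sigma scales with a per-pixel noise estimate supplied by a CT noise model. The noise model itself (bowtie filter, automatic exposure control) is not reproduced here, and the scaling factor k and kernel sizes are illustrative assumptions.

```python
import numpy as np

def adaptive_bilateral_sinogram(sino, noise_std, sigma_d=2.0, k=3.0, radius=4):
    """Bilateral filtering of projection (sinogram) data with a locally
    adaptive range kernel.

    sino:      2D array of projection values.
    noise_std: per-pixel noise standard deviation from a CT noise model
               (taken as given; modelling bowtie/AEC effects is outside
               this sketch).
    sigma_d:   spatial (domain) standard deviation in detector/view samples.
    k:         range sigma = k * local noise std (illustrative choice).
    """
    h, w = sino.shape
    pad = radius
    sp = np.pad(sino, pad, mode='edge')
    out = np.zeros_like(sino)

    # Precompute spatial Gaussian weights for the (2r+1)^2 neighborhood.
    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(dy**2 + dx**2) / (2.0 * sigma_d**2))

    for i in range(h):
        for j in range(w):
            patch = sp[i:i + 2 * pad + 1, j:j + 2 * pad + 1]
            sigma_r = k * max(noise_std[i, j], 1e-6)   # locally adaptive range sigma
            w_range = np.exp(-((patch - sino[i, j])**2) / (2.0 * sigma_r**2))
            wgt = w_spatial * w_range
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out
```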

290 citations


Journal ArticleDOI
27 Jul 2009
TL;DR: A Monte-Carlo kd-tree sampling algorithm that efficiently computes any filter that can be expressed in this way, along with a GPU implementation of this technique and a fast adaptation of non-local means to geometry.
Abstract: We propose a method for accelerating a broad class of non-linear filters that includes the bilateral, non-local means, and other related filters. These filters can all be expressed in a similar way: First, assign each value to be filtered a position in some vector space. Then, replace every value with a weighted linear combination of all values, with weights determined by a Gaussian function of distance between the positions. If the values are pixel colors and the positions are (x, y) coordinates, this describes a Gaussian blur. If the positions are instead (x, y, r, g, b) coordinates in a five-dimensional space-color volume, this describes a bilateral filter. If we instead set the positions to local patches of color around the associated pixel, this describes non-local means. We describe a Monte-Carlo kd-tree sampling algorithm that efficiently computes any filter that can be expressed in this way, along with a GPU implementation of this technique. We use this algorithm to implement an accelerated bilateral filter that respects full 3D color distance; accelerated non-local means on single images, volumes, and unaligned bursts of images for denoising; and a fast adaptation of non-local means to geometry. If we have n values to filter, and each is assigned a position in a d-dimensional space, then our space complexity is O(dn) and our time complexity is O(dn log n), whereas existing methods are typically either exponential in d or quadratic in n.
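
The unified formulation is easy to state in code. The brute-force sketch below (no kd-tree, O(n^2) time and memory, so only practical for tiny inputs) simply places each value at a position and averages with Gaussian weights on position distance; the example afterwards shows how choosing (x, y, intensity) positions recovers a bilateral filter. Parameter values are illustrative.

```python
import numpy as np

def highdim_gaussian_filter(values, positions, sigma=1.0):
    """Brute-force version of the unified filter: every output value is a
    Gaussian-weighted average of all input values, with weights determined
    by distance between positions.

    values:    (n, c) array of values to filter (e.g. pixel colors).
    positions: (n, d) array of positions: (x, y) gives a Gaussian blur,
               (x, y, r, g, b) a bilateral filter, patch vectors give
               non-local means.
    Note: the paper's Monte-Carlo kd-tree sampling is what makes this
    practical in high dimensions; it is not reproduced here.
    """
    d2 = np.sum((positions[:, None, :] - positions[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # (n, n) Gaussian weights
    return (w @ values) / np.sum(w, axis=1, keepdims=True)

# Example: a bilateral filter on a tiny grayscale image by placing each pixel
# at (x / sigma_s, y / sigma_s, intensity / sigma_r).
img = np.random.rand(16, 16)
yy, xx = np.mgrid[0:16, 0:16]
sigma_s, sigma_r = 4.0, 0.2
pos = np.stack([xx.ravel() / sigma_s, yy.ravel() / sigma_s,
                img.ravel() / sigma_r], axis=1)
filtered = highdim_gaussian_filter(img.reshape(-1, 1), pos).reshape(16, 16)
```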

224 citations


Proceedings ArticleDOI
01 Jan 2009
TL;DR: This article proposes a computationally efficient method of scene compositing using edge-preserving filters such as bilateral filters and considers the High Dynamic Range Imaging (HDRI) problem.
Abstract: Compositing a scene from multiple images is of considerable interest to graphics professionals. Typical compositing techniques involve estimation or explicit preparation of a matte by an artist. In this article, we address the problem of automatic compositing of a scene from images obtained through variable exposure photography. We consider the High Dynamic Range Imaging (HDRI) problem and review some of the existing approaches for directly generating a Low Dynamic Range (LDR) image from multi-exposure images. We propose a computationally efficient method of scene compositing using edge-preserving filters such as bilateral filters. The key challenge is to composite the multi-exposure images in such a way as to preserve details in both brightly and poorly illuminated regions of the scene within the limited dynamic range.

162 citations


Book
17 Aug 2009
TL;DR: A graphical, intuitive introduction to bilateral filtering, a practical guide for efficient implementation, an overview of its numerous applications, as well as mathematical analysis.
Abstract: Bilateral filtering is one of the most popular image processing techniques. The bilateral filter is a nonlinear process that can blur an image while respecting strong edges. Its ability to decompose an image into different scales without causing haloes after modification has made it ubiquitous in computational photography applications such as tone mapping, style transfer, relighting, and denoising. Bilateral Filtering: Theory and Applications provides a graphical, intuitive introduction to bilateral filtering, a practical guide for efficient implementation, an overview of its numerous applications, as well as mathematical analysis. This broad and detailed overview covers theoretical and practical issues that will be useful to researchers and software developers.

136 citations


Journal ArticleDOI
TL;DR: An efficient algorithm for removing Gaussian noise from a corrupted image is proposed by incorporating a wavelet-based trivariate shrinkage filter with a spatial-based joint bilateral filter, and the experimental results indicate that the algorithm is competitive with other denoising techniques.
Abstract: This correspondence proposes an efficient algorithm for removing Gaussian noise from a corrupted image by incorporating a wavelet-based trivariate shrinkage filter with a spatial-based joint bilateral filter. In the wavelet domain, the wavelet coefficients are modeled as a trivariate Gaussian distribution, taking into account the statistical dependencies among intrascale wavelet coefficients, and a trivariate shrinkage filter is then derived using the maximum a posteriori (MAP) estimator. Although wavelet-based methods are efficient in image denoising, they are prone to producing salient artifacts such as low-frequency noise and edge ringing which relate to the structure of the underlying wavelet. On the other hand, most spatial-based algorithms output much higher quality denoised images with fewer artifacts. However, they are usually too computationally demanding. In order to reduce the computational cost, we develop an efficient joint bilateral filter that uses the wavelet denoising result rather than directly processing the noisy image in the spatial domain. This filter can suppress the noise while preserving image details at small computational cost. An extension to color image denoising is also presented. We compare our denoising algorithm with other denoising techniques in terms of PSNR and visual quality. The experimental results indicate that our algorithm is competitive with other denoising techniques.
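
The spatial stage can be sketched as a joint bilateral filter whose range weights are computed on a guide image, here intended to be the wavelet-denoised result, while the averaging is applied to the noisy input. The wavelet trivariate shrinkage stage is not reproduced, and the kernel parameters are illustrative assumptions.

```python
import numpy as np

def joint_bilateral(noisy, guide, sigma_d=2.0, sigma_r=0.1, radius=4):
    """Joint (cross) bilateral filter: spatial weights from pixel distance,
    range weights from the *guide* image (e.g. the wavelet-denoised result),
    averaging applied to the noisy input.
    """
    pad = radius
    np_img = np.pad(noisy, pad, mode='reflect')
    gp_img = np.pad(guide, pad, mode='reflect')
    h, w = noisy.shape
    out = np.zeros_like(noisy)

    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(dy**2 + dx**2) / (2.0 * sigma_d**2))

    for i in range(h):
        for j in range(w):
            n_patch = np_img[i:i + 2 * pad + 1, j:j + 2 * pad + 1]
            g_patch = gp_img[i:i + 2 * pad + 1, j:j + 2 * pad + 1]
            # Range term compares guide values, not noisy values.
            w_range = np.exp(-((g_patch - guide[i, j])**2) / (2.0 * sigma_r**2))
            wgt = w_spatial * w_range
            out[i, j] = np.sum(wgt * n_patch) / np.sum(wgt)
    return out
```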

111 citations


Patent
07 May 2009
TL;DR: In this article, the Hough transform is configured to function within the context of noisy images, as simulated by the pseudo-random selection and processing of less than the total number of pixels in the image.
Abstract: A digital image includes a plurality of pixels arranged in an array. In a method of analyzing the image, some of the pixels are purposefully not processed. In particular, only those pixels in a particular subgroup are processed according to a Hough or similar transform. The number of pixels in the subgroup is less than the total number of pixels in the image (e.g., as little as about 5% of the total pixels), and each pixel in the subgroup is pseudo-randomly selected. The Hough transform is inherently configured to function within the context of noisy images, for identifying features of interest in the image, as simulated by the pseudo-random selection and processing of less than the total number of pixels in the image. This significantly reduces the processor resources required to analyze the image.

84 citations


Proceedings ArticleDOI
18 Jan 2009
TL;DR: These experiments show that the dual image resolution range function alleviates the aliasing artifacts and therefore improves the temporal stability of the output depth map.
Abstract: Depth maps are used in many applications, e.g. 3D television, stereo matching, segmentation, etc. Often, depth maps are available at a lower resolution compared to the corresponding image data. For these applications, depth maps must be upsampled to the image resolution. Recently, joint bilateral filters have been proposed to upsample depth maps in a single step. In this solution, a high-resolution output depth is computed as a weighted average of surrounding low-resolution depth values, where the weight calculation depends on a spatial distance function and an intensity range function on the related image data. Compared to that, we present two novel ideas. Firstly, we apply anti-alias prefiltering on the high-resolution image to derive an image at the same low resolution as the input depth map. The upsample filter uses samples from both the high-resolution and the low-resolution images in the range term of the bilateral filter. Secondly, we propose to perform the upsampling in multiple stages, refining the resolution by a factor of 2×2 at each stage. We show experimental results on the consequences of the aliasing issue, and we apply our method to two use cases: a high-quality ground-truth depth map and a real-time generated depth map of lower quality. For the first use case a relatively small filter footprint is applied; the second use case benefits from a substantially larger footprint. These experiments show that the dual image resolution range function alleviates the aliasing artifacts and therefore improves the temporal stability of the output depth map. On both use cases, we achieved comparable or better image quality with respect to upsampling with the joint bilateral filter in a single step. On the former use case, we feature a reduction of a factor of 5 in computational cost, whereas on the latter use case, the cost saving is a factor of 50.
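
For reference, a baseline single-stage joint bilateral upsampling step looks roughly like the sketch below: each high-resolution output depth is a weighted average of nearby low-resolution depth samples, with spatial weights in low-resolution coordinates and range weights from the high-resolution guidance image. The paper's dual-resolution range term and multi-stage 2×2 refinement are not reproduced, and the parameters are illustrative.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, image_hi, factor,
                             sigma_d=1.0, sigma_r=0.1, radius=2):
    """One-step joint bilateral upsampling of a low-resolution depth map,
    guided by the high-resolution image (baseline single-stage variant).

    depth_lo: (h, w) low-resolution depth map.
    image_hi: (h*factor, w*factor) grayscale guidance image in [0, 1].
    radius:   neighborhood radius in *low-resolution* depth samples.
    """
    H, W = image_hi.shape
    h, w = depth_lo.shape
    out = np.zeros((H, W))

    for Y in range(H):
        for X in range(W):
            # Position of this high-res pixel in low-res coordinates.
            y, x = Y / factor, X / factor
            y0, x0 = int(round(y)), int(round(x))
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y0 + dy, x0 + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        # Spatial term measured in low-res coordinates.
                        ws = np.exp(-((yy - y)**2 + (xx - x)**2) / (2 * sigma_d**2))
                        # Range term on the guidance image intensities.
                        gi = image_hi[min(int(yy * factor), H - 1),
                                      min(int(xx * factor), W - 1)]
                        wr = np.exp(-(gi - image_hi[Y, X])**2 / (2 * sigma_r**2))
                        num += ws * wr * depth_lo[yy, xx]
                        den += ws * wr
            out[Y, X] = num / max(den, 1e-8)
    return out
```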

78 citations


Journal ArticleDOI
TL;DR: Experiments indicate that the results produced by the method are less prone to visible artifacts than the ones obtained with the state-of-the-art technique for real-time automatic computation of brightness enhancement functions.
Abstract: This paper presents an automatic technique for producing high-quality brightness-enhancement functions for real-time reverse tone mapping of images and videos. Our approach uses a bilateral filter to obtain smooth results while preserving sharp luminance discontinuities, and can be efficiently implemented on GPUs. We demonstrate the effectiveness of our approach by reverse tone mapping several images and videos. Experiments based on an HDR visible difference predictor and on an image distortion metric indicate that the results produced by our method are less prone to visible artifacts than the ones obtained with the state-of-the-art technique for real-time automatic computation of brightness enhancement functions.

68 citations


Journal ArticleDOI
TL;DR: A novel fuzzy reasoning-based directional median filter is proposed that removes random-value impulse noise efficiently and outperforms several existing filter schemes for impulse noise removal in an image.

Journal ArticleDOI
TL;DR: A new dynamic range compression technique for infrared (IR) imaging systems is proposed that enhances detail visibility and allows the control and adjustment of the image appearance by setting a number of tunable parameters.
Abstract: We propose a new dynamic range compression technique for infrared (IR) imaging systems that enhances detail visibility and allows the control and adjustment of the image appearance by setting a number of tunable parameters. This technique adopts a bilateral filter to extract a detail component and a coarse component. The two components are processed independently and then recombined to obtain the enhanced output image that fits the display dynamic range. The contribution made is threefold. We propose a new technique for the visualization of high dynamic range (HDR) images that is specifically tailored to IR images. We show the effectiveness of the method by analyzing experimental IR images that represent typical area surveillance and object recognition applications. Last, we quantitatively assess the performance of the proposed technique, comparing the quality of the enhanced image with that obtained through two well-established visualization methods.
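
The decompose/compress/recombine flow can be sketched as follows, with the bilateral filter passed in as a generic edge-preserving smoother. Working in the log domain and the particular gamma/detail-gain values are assumptions for illustration, not the paper's tuned settings.

```python
import numpy as np

def compress_ir_dynamic_range(ir, bilateral, gamma=0.4, detail_gain=2.0,
                              out_levels=255):
    """Bilateral-filter-based dynamic range compression for an IR frame.

    ir:          2D float array of raw IR radiometric values (>= 0).
    bilateral:   callable edge-preserving smoother, e.g. any of the
                 bilateral filters sketched above.
    gamma:       compression exponent applied to the coarse component.
    detail_gain: amplification of the detail component.
    Both parameters are illustrative tuning knobs.
    """
    eps = 1e-6
    log_ir = np.log(ir + eps)

    # Edge-preserving split into coarse (base) and detail components.
    coarse = bilateral(log_ir)
    detail = log_ir - coarse

    # Compress only the coarse component; re-attach boosted detail.
    coarse_min, coarse_max = coarse.min(), coarse.max()
    coarse_c = (coarse - coarse_min) / max(coarse_max - coarse_min, eps)
    recombined = coarse_c ** gamma + detail_gain * detail

    # Normalize to the display range.
    recombined -= recombined.min()
    recombined /= max(recombined.max(), eps)
    return (recombined * out_levels).astype(np.uint8)
```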

Proceedings ArticleDOI
29 May 2009
TL;DR: This paper presents an automatic and robust system to convert 2D videos to 3D videos that combines two major depth generation modules, the depth from motion and depth from geometrical perspective.
Abstract: Three-dimensional (3D) displays provide a dramatic improvement of visual quality over 2D displays. The conversion of existing 2D videos to 3D videos is necessary for multimedia applications. This paper presents an automatic and robust system to convert 2D videos to 3D videos. The proposed 2D-to-3D conversion combines two major depth generation modules, depth from motion and depth from geometrical perspective. A block-based algorithm is applied and cooperates with the bilateral filter to diminish blocking effects and generate a comfortable depth map. After generating the depth map, the multi-view video is rendered for 3D display.

Patent
12 Jun 2009
TL;DR: In this paper, a first reference value is obtained for each pixel, indicating which image of the plurality of images contains the pixel with the highest sharpness among the pixels located at identical positions across the images.
Abstract: Sharpness is calculated in all of focus-bracketed images on a pixel basis. Then, a first reference value indicating an image of the plurality of images to which a pixel whose sharpness is the highest among the pixels located on the identical positions in the plurality of images belongs is obtained on each pixel of the images, and a second reference value is calculated based on the first reference value on each pixel by spatially smoothing the first reference value on each pixel based on the first reference values on adjacent pixels. The focus-bracketed images are processed based on the second reference values to generate an omni-focus image or a blur-enhanced image. Accordingly, it is possible to judge a region having high contrast as an in-focus region and acquire a synthesized image having smooth gradation.
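
A rough reading of this pipeline in NumPy is given below: per-pixel sharpness in every bracketed frame, an argmax frame index as the first reference value, a spatially smoothed index as the second reference value, and a blended synthesis. The |Laplacian| sharpness measure and the Gaussian smoothing are illustrative stand-ins for choices the patent leaves open.

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def all_in_focus(frames, smooth_sigma=5.0):
    """Fuse focus-bracketed frames into an all-in-focus image.

    frames: array-like of 2D grayscale images, shape (k, H, W).
    """
    frames = np.asarray(frames, dtype=float)               # (k, H, W)
    sharpness = np.abs(np.stack([laplace(f) for f in frames]))

    # First reference value: index of the sharpest frame at each pixel.
    first_ref = np.argmax(sharpness, axis=0).astype(float)

    # Second reference value: spatially smoothed selection map, so the
    # frame selection varies smoothly across the image.
    second_ref = gaussian_filter(first_ref, sigma=smooth_sigma)

    # Synthesize by blending the two frames adjacent to the fractional index.
    k = frames.shape[0]
    lo = np.clip(np.floor(second_ref).astype(int), 0, k - 1)
    hi = np.clip(lo + 1, 0, k - 1)
    frac = second_ref - lo
    H, W = first_ref.shape
    rows, cols = np.mgrid[0:H, 0:W]
    return (1 - frac) * frames[lo, rows, cols] + frac * frames[hi, rows, cols]
```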

Proceedings ArticleDOI
13 Nov 2009
TL;DR: This work investigated two image space based nonlinear filters for noise reduction: the bilateral filter (BF) and the nonlocal means (NLM) algorithm and found that both the BF and NLM methods improve the tradeoff between noise and high contrast spatial resolution with no significant difference in LCD.
Abstract: Optimal noise control is important for improving image quality and reducing radiation dose in computed tomography. Here we investigated two image space based nonlinear filters for noise reduction: the bilateral filter (BF) and the nonlocal means (NLM) algorithm. Images from both methods were compared against those from a commercially available weighted filtered backprojection (WFBP) method. A standard phantom for quality assurance testing was used to quantitatively compare noise and spatial resolution, as well as low contrast detectability (LCD). Additionally, an image dataset from a patient's abdominal CT exam was used to assess the effectiveness of the filters on full dose and simulated half dose acquisitions. We found that both the BF and NLM methods improve the tradeoff between noise and high contrast spatial resolution with no significant difference in LCD. Results from the patient dataset demonstrated the potential of dose reduction with the denoising methods. Care must be taken when choosing the NLM parameters in order to minimize the generation of artifacts that could possibly compromise diagnostic value.

Proceedings Article
01 Jan 2009
TL;DR: It is shown that a smooth, realistic, output image can be obtained by fusing the base layer of the visible image with the near-infrared detail layer, and this method not only outperforms equivalent decomposition in the wavelet domain, but the results also look more realistic than with a simple luminance transfer.
Abstract: Skin tone images, portraits in particular, are of tremendous importance in digital photography, but a number of factors, such as pigmentation irregularities (e.g., moles, freckles), irritation, roughness, or wrinkles can reduce their appeal. Moreover, such “defects” are oftentimes enhanced by lighting conditions, e.g., when a flash is used. Starting with the observations that melanin and hemoglobin, the key components of skin colour, have little absorption in the near-infrared part of the spectrum, and that the depth of light penetration in the epidermis is proportional to the incident light’s wavelength, we propose that near-infrared images provide information that can be used to automatically smooth skin tones in a physically realistic manner. Specifically, we develop a framework that consists of capturing a pair of visible/near-infrared images and separating both of them into base and detail layers (akin to a low/high frequency decomposition) with the fast bilateral filter. We show that a smooth, realistic, output image can be obtained by fusing the base layer of the visible image with the near-infrared detail layer. This method not only outperforms equivalent decomposition in the wavelet domain, but the results also look more realistic than with a simple luminance transfer. Moreover, the proposed method delivers consistently good results across various skin types.
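
The fusion step itself is compact; a sketch is shown below under the assumption that we operate on the visible luminance channel in the log domain, with the bilateral filter passed in as a generic base/detail decomposition. It is not the authors' exact pipeline (no registration, color handling, or fast bilateral implementation).

```python
import numpy as np

def fuse_visible_nir(vis_lum, nir, bilateral, eps=1e-6):
    """Skin-smoothing fusion: keep the large-scale (base) layer of the
    visible luminance and replace its detail layer with the near-infrared
    detail layer.

    vis_lum:   2D visible-light luminance image (values > 0).
    nir:       2D registered near-infrared image (values > 0).
    bilateral: callable edge-preserving smoother (e.g. a fast bilateral
               filter), passed in rather than fixed here.
    """
    log_vis = np.log(vis_lum + eps)
    log_nir = np.log(nir + eps)

    vis_base = bilateral(log_vis)           # base layer of the visible image
    nir_base = bilateral(log_nir)
    nir_detail = log_nir - nir_base         # detail layer of the NIR image

    # Recombine: visible base + NIR detail, back to linear luminance.
    fused = np.exp(vis_base + nir_detail) - eps
    return np.clip(fused, 0.0, None)
```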

Journal ArticleDOI
TL;DR: A new dimensionally-reduced linear image space that allows a number of recent image manipulation techniques to be performed efficiently and robustly and is useful for energy-minimization methods in achieving efficient processing and providing better matrix conditioning at a minimal quality sacrifice.
Abstract: This article presents a new dimensionally-reduced linear image space that allows a number of recent image manipulation techniques to be performed efficiently and robustly. The basis vectors spanning this space are constructed from a scale-adaptive image decomposition, based on kernels of the bilateral filter. Each of these vectors locally binds together pixels in smooth regions and leaves pixels across edges independent. Despite the drastic reduction in the number of degrees of freedom, this representation can be used to perform a number of recent gradient-based tonemapping techniques. In addition to reducing computation time, this space can prevent the bleeding artifacts which are common to Poisson-based integration methods. In addition, we show that this reduced representation is useful for energy-minimization methods in achieving efficient processing and providing better matrix conditioning at a minimal quality sacrifice.

Patent
30 Jan 2009
TL;DR: In this article, a method for filtering distance information from a 3D-measurement camera system comprises comparing amplitude and/or distance information for pixels to adjacent pixels and averaging distance for the pixels with the adjacent pixels when amplitude or distance information of the pixels is within a range of the amplitudes and distances for adjacent pixels.
Abstract: A method for filtering distance information from a 3D-measurement camera system comprises comparing amplitude and/or distance information for pixels to adjacent pixels, and averaging distance information for the pixels with the adjacent pixels when the amplitude and/or distance information for the pixels is within a range of the amplitudes and/or distances for the adjacent pixels. In addition, the range of distances may optionally be defined as a function of the amplitudes.

Journal ArticleDOI
TL;DR: Experimental results show that the visual quality and evaluation indexes of the proposed algorithm outperform the classical Lee filtering.
Abstract: Bilateral filtering (BF) can both smooth images and preserve edges, but its results are strongly influenced by its two parameters, which are difficult to configure optimally. In this reported work, the application of BF is extended to synthetic aperture radar (SAR) image despeckling, and despeckling evaluation indexes, including the equivalent number of looks and the edge save index, are used to estimate the parameters. After BF with the estimated parameters is imposed on a normalised SAR image, further processing can achieve both despeckling and edge preservation simultaneously. Experimental results show that the visual quality and evaluation indexes of the proposed algorithm outperform classical Lee filtering.

Proceedings ArticleDOI
H. Bruder, Rainer Raupach, Ernst Klotz, Karl Stierstorfer, Thomas Flohr
TL;DR: A method for spatio-temporal filtration of dynamic CT data, to increase the signal-to-noise ratio (SNR) of image data at the same time maintaining image quality, in particular spatial and temporal sharpness of the images.
Abstract: We present a method for spatio-temporal filtration of dynamic CT data to increase the signal-to-noise ratio (SNR) of image data while maintaining image quality, in particular the spatial and temporal sharpness of the images. Alternatively, the radiation dose applied to the patient can be reduced while maintaining the noise level and the image sharpness. In contrast to classical methods, which generally operate on the three spatial dimensions of image data, noise statistics are improved by extending the filtration to the temporal dimension. Our approach is based on nonlinear and anisotropic diffusion filters, which are based on a model of heat diffusion adapted to medical CT data. Bilateral filters are a special class of diffusion filters which do not need iteration to reach a converged image, but represent the fixed point of a dedicated diffusion filter. Spatio-temporal, anisotropic bilateral filters are developed and applied to dynamic CT image data. The potential was evaluated using perfusion CT data and cardiac dual-source CT (DSCT) data, respectively. It was shown that in perfusion CT, SNR can be improved by a factor of 4 at the same radiation dose. On the basis of clinical data, it was shown that, alternatively, the radiation dose to the patient can be reduced by a factor of at least 2. A more accurate evaluation of the perfusion parameters blood flow, blood volume, and time-to-peak is supported. In DSCT, noise statistics can be improved by using more projection data than needed for image reconstruction; however, as a consequence the temporal resolution is significantly impaired. Due to the anisotropy of the spatio-temporal bilateral filter, temporal contrast edges between adjacent time samples are preserved while image data in homogeneous regions are substantially smoothed, maintaining the very high temporal resolution of DSCT acquisitions (~80 ms). CT examinations of the heart require careful dose management to reduce the radiation dose burden to the patient. The use of spatio-temporal diffusion filters allows for dose reduction at the same noise level while preserving spatial and temporal image resolution. Our approach can be extended to any imaging method that is based on dynamic data, as an efficient tool for edge-preserving noise reduction.

Proceedings ArticleDOI
07 Nov 2009
TL;DR: A family of non-local image smoothing algorithms which approximate the application of diffusion PDE's on a specific Euclidean space of image patches and show that the Bilateral Filtering and Non-Local Means methods are the isotropic cases of the denoising framework.
Abstract: We design a family of non-local image smoothing algorithms which approximate the application of diffusion PDE's on a specific Euclidean space of image patches. We first map a noisy image onto this high-dimensional space and estimate its geometric structure thanks to a straightforward extension of the structure tensor field. The tensors' spectral elements allow us to design an oriented high-dimensional smoothing process by means of anisotropic regularization PDE's which have both local and non-local properties and whose solutions are estimated by locally oriented high-dimensional convolutions. We show that the Bilateral Filtering and Non-Local Means methods are the isotropic cases of our denoising framework.

Proceedings ArticleDOI
18 Jan 2009
TL;DR: A spatially adaptive method is proposed to reduce compression artifacts observed in block discrete cosine transform (DCT) based image/video compression standards; the method is based on the bilateral filter, which is very effective in denoising images without smoothing edges.
Abstract: In this paper, we present a spatially adaptive method to reduce compression artifacts observed in block discrete cosine transform (DCT) based image/video compression standards. The method is based on the bilateral filter, which is very effective in denoising images without smoothing edges. When applied to reduce compression artifacts, the parameters of the bilateral filter should be chosen carefully to have a good performance. To avoid over-smoothing texture regions and to effectively eliminate blocking and ringing artifacts, in this paper, texture regions and block boundary discontinuities are first detected; these are then used to control/adapt the spatial and intensity parameters of the bilateral filter. Experiments show that the proposed method improves over the standard non-adaptive bilateral filter visually and quantitatively.
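
A sketch of the adaptation step alone is shown below: per-pixel range and spatial sigma maps derived from a variance-based texture detector and the 8×8 coded block grid, to be fed into any bilateral filter implementation. The detector, thresholds, and sigma values are illustrative assumptions rather than the paper's calibrated rules.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def deblocking_parameter_maps(img, block=8, tex_thresh=20.0,
                              sigma_r_flat=12.0, sigma_r_texture=4.0,
                              boundary_boost=2.0):
    """Build per-pixel range-sigma and spatial-sigma maps for an adaptive
    bilateral deblocking filter (adaptation step only).

    img: 2D array of a decoded 8-bit luminance plane.
    Texture is flagged by local variance; non-texture pixels on the 8x8
    block grid get a boosted range sigma so blocking discontinuities are
    smoothed while texture regions are filtered gently.
    """
    img = img.astype(float)
    local_mean = uniform_filter(img, size=5)
    local_var = uniform_filter(img**2, size=5) - local_mean**2
    is_texture = local_var > tex_thresh**2

    sigma_r = np.where(is_texture, sigma_r_texture, sigma_r_flat)

    # Boost smoothing across coded block boundaries.
    H, W = img.shape
    on_boundary = np.zeros((H, W), dtype=bool)
    on_boundary[::block, :] = True
    on_boundary[block - 1::block, :] = True
    on_boundary[:, ::block] = True
    on_boundary[:, block - 1::block] = True
    sigma_r = np.where(on_boundary & ~is_texture,
                       sigma_r * boundary_boost, sigma_r)

    sigma_d = np.where(is_texture, 1.0, 2.0)   # smaller spatial kernel on texture
    return sigma_d, sigma_r
```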

Journal Article
TL;DR: The proposed robust statistics based filter performs well in removing low to medium density impulse noise with detail preservation up to a noise density of 70%, and outperforms existing schemes in restoring the original image with superior preservation of edges and better suppression of impulse noise.
Abstract: In this paper, a robust statistics based filter to remove salt and pepper noise in digital images is presented. The algorithm first detects the corrupted pixels, since impulse noise affects only certain pixels in the image while the remaining pixels are uncorrupted. The corrupted pixels are then replaced by an estimated value using the proposed robust statistics based filter. The proposed method performs well in removing low to medium density impulse noise with detail preservation up to a noise density of 70%, compared to the standard median filter, weighted median filter, recursive weighted median filter, progressive switching median filter, signal dependent rank ordered mean filter, adaptive median filter, and a recently proposed decision based algorithm. The visual and quantitative results show that the proposed algorithm outperforms these in restoring the original image with superior preservation of edges and better suppression of impulse noise. Keywords: image denoising, nonlinear filter, robust statistics, salt and pepper noise.

Proceedings ArticleDOI
07 Nov 2009
TL;DR: An approach is proposed to extend bilateral filtering to the vector case so as to simultaneously take spectral and spatial information into account by using spectral distances and multivariate Gaussian functions.
Abstract: An approach is proposed to extend bilateral filtering to the vector case so as to simultaneously take spectral and spatial information into account by using spectral distances and multivariate Gaussian functions. To simplify the determination of the parameters of the corresponding covariance matrix, the data vectors are transformed to eigenspace through principal component analysis (PCA). By locally adapting to the spectral distribution in decorrelated PCA space, the proposed approach offers effective noise removal while keeping the spatial details in the band images. It also provides dynamic range enhancement of severely affected bands to make meaningful data extraction possible. Experimental results with the proposed approach using remote-sensed hyperspectral data demonstrate improved denoising and enhancement in comparison to existing methods.

Proceedings ArticleDOI
07 Nov 2009
TL;DR: A method of improving coding efficiency is proposed that uses the Wiener filter as an in-loop filter, divides the decoded image into fixed blocks, and decides adaptively whether to apply the filter to each block, so that coding efficiency is improved.
Abstract: In this paper, a method of improving coding efficiency is proposed by using the Wiener filter as an in-loop filter. The Wiener filter can minimize the mean square error between the input image and the decoded image. However, the errors of some pixels increase through the filtering process. Since the filtered pixels are used for motion-compensated prediction, these errors are propagated to the subsequent images. The proposed method divides the decoded image into fixed blocks and decides whether to apply the filter to each block adaptively. As a result, by preventing the increase in errors after the filtering process, the coding efficiency can be improved. Experimental results show that the proposed method achieves a bitrate reduction of up to 33.9% in Baseline Profile and up to 33.0% in High Profile at the same PSNR compared to H.264.

Patent
24 Jun 2009
TL;DR: In this article, a bilateral filter accounts for edge effects by filtering based both on spatial separation between image points and photometric separation, and the resulting image signals are corrected with a bilateral filtering.
Abstract: Correction of spatial nonuniformities among detectors in a focal plane array. Incoming image data is incident on the array, and the resulting image signals are corrected with a bilateral filter. The bilateral filter accounts for edge effects by filtering based both on spatial separation between image points and photometric separation between image points.

Proceedings Article
01 Jan 2009
TL;DR: This study introduces an adaptive bilateral filter, employing two Gaussian smoothing filters in different domains, which avoids the loss of edge information when smoothing the image, and proposes a method to optimize the parameters to be adaptive to the corresponding viewing conditions, and the quantity and homogeneity of information contained in an image.
Abstract: Color image difference metrics are of great importance in the field of color image reproduction. In this study, we introduce an adaptive bilateral filter for predicting color image difference. This filter is simple, employing two Gaussian smoothing filters in different domains, which avoids the loss of edge information when smoothing the image. However, the challenge is to select appropriate parameters that yield good performance when the filter is applied to color image difference prediction. We propose a method to optimize the parameters, which are designed to be adaptive to the corresponding viewing conditions and to the quantity and homogeneity of information contained in an image. We have conducted psychophysical experiments to evaluate the performance of our approach. The experimental sample images are reproduced with variations in six image attributes: Lightness, Chroma, Hue, Compression, Noise, and Sharpness. The Pearson's correlation value between the predicted difference and the z-score of visual judgments was employed to evaluate the performance and compare it with that of s-CIELAB and iCAM.

Background

Theories of spatial characterization of the human visual system are of much current interest in the development of image difference metrics [1-4]. They all involve the conception that the human visual system is optimally designed to process the spatial information in images or complex scenes. The study [5] of the human visual system has shown that it is composed of spatial frequency channels. The light sensors of the human visual system, cones and rods, are sensitive to the spatial changes of stimuli. Both contrast sensitivity and color appearance vary as a function of the spatial pattern [6, 7]. Attempts to computationally assess color image difference have typically created models of human perception suitable for determining the discriminations introduced by spatial alteration, such as image compression, halftone reproduction, etc. On the other hand, the successful applications of color difference formulae, such as the CIELAB 1976 color difference, CIE94, and CIEDE2000, have encouraged researchers to apply them also to image difference evaluation.

An important motivation of our work is the development of an image difference metric for various image reproduction tasks. Image differences may originate from different image reproduction methods, such as the discriminations from chromatic and spatial modifications. Several studies [8-12] have measured the discriminations introduced by chromatic changes of the images alone. In this work, we study the general statistics over both spatial and chromatic image reproductions. Spatial filtering was introduced into the color difference formula for measuring image reproduction errors, and later replaced with a simulator of the human contrast sensitivity functions (CSFs) [2, 13, 14]. There are many models developed for simulating the CSFs. The model developed by Movshon and Kiorpes [15] was suggested [2] and also adopted by the CIE TC802 [16]. Generally, the spatial filters (or CSF models) are applied in the opponent color space to attenuate the high-frequency components in an image. The decrease in sensitivity at higher frequencies has been attributed to blurring caused by the optical limitations of the eye and spatial summation in the human visual system [17]. Thus, a blurrier image is the output, in which the imperceptible information is attenuated, including, inevitably, high-frequency edges.

There is a broad consensus, however, that the human visual system is particularly sensitive to the edges in an image. Edge detection is believed to be necessary to distinguish objects from their background and to establish their shape and position, and it has been proved to be a crucial early step in the process of scene analysis by the human visual system. To overcome the undesirable loss of edges whilst using the spatial filter, recent studies [3, 18] employed edge enhancement in the workflow for spatial localization. Many image processing methods have been developed to smooth the image while keeping the edges. Recently, Tomasi and Manduchi [19] described an alternative bilateral filter which extended the concept of Gaussian smoothing by weighting the filter coefficients with their corresponding relative pixel intensities. Two Gaussian filters are applied at a localized pixel neighborhood, one in the spatial domain (domain filter) and the other in the intensity domain (range filter). The result is a blurrier image than the original while preserving edges. However, the behavior of this filter is governed by a number of parameters which need to be selected with care for color image difference evaluation. In this paper, we propose an adaptive bilateral filter for color image difference evaluation and design the parameters based on the spatial frequency and the quantity and homogeneity of the information contained in a certain image. We describe a psychophysical experiment to validate its performance and compare it with two other models, s-CIELAB and iCAM, which are both recognized as human-visual-system-based models. The testing images are reproduced in terms of both spatial and chromatic attributes. The evaluation is based on the Pearson's correlation value between the visual psychophysical judgments and the predicted difference.

Adaptive Bilateral Filter

The idea behind the bilateral filter is to combine domain and range filters: pixels in the neighborhood which are geometrically closer and photometrically more similar to the filtering centre are weighted more. Given a color image f(x), the bilateral filter [19] can be expressed as

h(x) = k^{-1}(x) \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} f(\xi)\, c(\xi, x)\, s(f(\xi), f(x))\, d\xi

where c is the closeness (domain) function, s is the similarity (range) function, and k(x) normalizes the weights.
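
The reconstructed formula above maps directly onto a discrete implementation; the sketch below uses Gaussian closeness and similarity functions as in Tomasi and Manduchi, with the window radius and sigmas as illustrative parameters (the paper's adaptive parameter selection is not included).

```python
import numpy as np

def bilateral_tomasi_manduchi(f, sigma_d=3.0, sigma_r=0.1, radius=6):
    """Discrete form of h(x) = k(x)^-1 * sum_xi f(xi) c(xi, x) s(f(xi), f(x))
    with Gaussian closeness c and similarity s, for a grayscale image f in [0, 1]."""
    H, W = f.shape
    pad = radius
    fp = np.pad(f, pad, mode='reflect')
    h = np.zeros_like(f)

    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    c = np.exp(-(dy**2 + dx**2) / (2.0 * sigma_d**2))        # domain kernel

    for i in range(H):
        for j in range(W):
            patch = fp[i:i + 2 * pad + 1, j:j + 2 * pad + 1]
            s = np.exp(-(patch - f[i, j])**2 / (2.0 * sigma_r**2))  # range kernel
            k = np.sum(c * s)                                        # normalization k(x)
            h[i, j] = np.sum(c * s * patch) / k
    return h
```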

Patent
13 Aug 2009
TL;DR: In this article, an interpolation filtering method includes selecting (S33) from an image, pixels to be used to interpolate a pixel to be interpolated; determining (S34) weight coefficients, each for a corresponding one of the pixels selected in the selecting of pixels; and calculating (S35) a pixel value of the pixel value, by performing a weighted sum of pixel values of pixels using the weight coefficients determined in the determining of weight coefficients.
Abstract: An interpolation filtering method includes selecting (S33), from an image, pixels to be used to interpolate a pixel to be interpolated; determining (S34) weight coefficients, each for a corresponding one of the pixels selected in the selecting of pixels; and calculating (S35) a pixel value of the pixel to be interpolated, by performing a weighted sum of pixel values of the pixels using the weight coefficients determined in the determining (S34) of weight coefficients. In the determining (S34) of weight coefficients, each of the weight coefficients is determined for the corresponding one of the pixels such that a smaller weight coefficient is assigned to a pixel when the pixel is included in a neighboring block than when the pixel is included in a current block in which the pixel to be interpolated is included and which is different from the neighboring block.

Proceedings ArticleDOI
01 Nov 2009
TL;DR: A new FPGA design concept of a bilateral filter for image processing that can be realized as a highly parallelized pipeline structure with very good utilization of dedicated resources is presented.
Abstract: In this paper a new FPGA design concept of a bilateral filter for image processing is presented. With the aid of this design the bilateral filter can be realized as a highly parallelized pipeline structure with very good utilization of dedicated resources. The innovation of the design concept lies in sorting the input data into groups in a manner that kernel based processing is possible. Another feature of the kernel based design concept is the increase of the clock to the quadruple of the pixel clock in the filter architecture. The sorting of the pixels and the quadruplication of the pixel clock are the key to the synchronous FPGA design using a parallelized pipeline architecture. The synchronicity of the design assures constant output delay which can be computed after the hardware specification is known. For acceleration of the design concept the separability and symmetry of the geometric filter component is utilized, also reducing the complexity of the design. Combined with parallel pipeline design a significant decrease of resource consumption can also be achieved. Thus the presented design can easily be implemented on a common medium sized FPGA.

Book ChapterDOI
24 Sep 2009
TL;DR: An automatic skin beautification framework based on color-temperature-insensitive skin-color detection and Poisson image cloning to integrate the beautified parts into the original input is proposed.
Abstract: In this paper, we propose an automatic skin beautification framework based on color-temperature-insensitive skin-color detection. To polish the selected skin regions, we apply a bilateral filter to smooth facial flaws. Finally, we use Poisson image cloning to integrate the beautified parts into the original input. Experimental results show that the proposed method can be applied in varied light-source environments. In addition, this method can naturally beautify the portrait skin.