
Showing papers in "IEEE Transactions on Image Processing in 2006"


Journal ArticleDOI
TL;DR: This work addresses the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image, and uses the K-SVD algorithm to obtain a dictionary that describes the image content effectively.
Abstract: We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches at every location in the image. We show how such a Bayesian treatment leads to a simple and effective denoising algorithm. This leads to state-of-the-art denoising performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
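
As a rough, hedged illustration of the patch-based sparse-coding idea described above, the sketch below denoises an image by learning a dictionary from the noisy patches themselves and averaging the sparse reconstructions of overlapping patches. It substitutes scikit-learn's MiniBatchDictionaryLearning with OMP coding for the paper's K-SVD and global Bayesian formulation, so it is a loose approximation rather than the authors' algorithm; the patch size, dictionary size, and sparsity level are illustrative choices.

```python
# Minimal sketch: dictionary-based patch denoising (MiniBatchDictionaryLearning
# stands in for K-SVD; simple patch averaging stands in for the global prior).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def denoise_sparse(noisy, patch_size=(8, 8), n_atoms=144, n_nonzero=5):
    # Collect overlapping patches and remove their DC component.
    patches = extract_patches_2d(noisy, patch_size)
    X = patches.reshape(len(patches), -1)
    means = X.mean(axis=1, keepdims=True)
    X = X - means

    # Train the dictionary on the corrupted image itself (one of the two
    # training options mentioned in the abstract).
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0)
    codes = dico.fit(X).transform(X)

    # Sparse reconstruction of every patch, then averaging of the overlaps.
    X_hat = codes @ dico.components_ + means
    return reconstruct_from_patches_2d(X_hat.reshape(patches.shape), noisy.shape)
```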

5,493 citations


Journal ArticleDOI
TL;DR: An image information measure is proposed that quantifies the information present in the reference image and how much of this reference information can be extracted from the distorted image; combined, these two quantities form a visual information fidelity measure for image QA.
Abstract: Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.
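
For intuition only, here is a heavily simplified, single-scale, pixel-domain sketch of an information-fidelity ratio in the spirit of the abstract: local variances and covariances stand in for the paper's wavelet-domain Gaussian scale mixture model, and sigma_n2 is an assumed variance for the visual-noise component. It is not the published VIF algorithm, only an illustration of comparing the information the distorted image preserves about the reference with the information content of the reference itself.

```python
# Simplified pixel-domain sketch of a VIF-style information-fidelity ratio.
import numpy as np
from scipy.ndimage import uniform_filter

def vif_pixel_sketch(ref, dist, win=9, sigma_n2=2.0, eps=1e-10):
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    mu_r = uniform_filter(ref, win)
    mu_d = uniform_filter(dist, win)
    var_r = np.maximum(uniform_filter(ref * ref, win) - mu_r ** 2, 0.0)
    var_d = np.maximum(uniform_filter(dist * dist, win) - mu_d ** 2, 0.0)
    cov = uniform_filter(ref * dist, win) - mu_r * mu_d

    g = cov / (var_r + eps)                   # local gain of the distortion channel
    sv2 = np.maximum(var_d - g * cov, 0.0)    # residual (distortion) variance

    # Information preserved by the distorted image vs. information in the reference.
    num = np.log2(1.0 + g ** 2 * var_r / (sv2 + sigma_n2))
    den = np.log2(1.0 + var_r / sigma_n2)
    return num.sum() / (den.sum() + eps)
```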

3,146 citations


Journal ArticleDOI
TL;DR: This paper presents results of an extensive subjective quality assessment study in which a total of 779 distorted images were evaluated by about two dozen human subjects and is the largest subjective image quality study in the literature in terms of number of images, distortion types, and number of human judgments per image.
Abstract: Measurement of visual quality is of fundamental importance for numerous image and video processing applications, where the goal of quality assessment (QA) algorithms is to automatically assess the quality of images or videos in agreement with human quality judgments. Over the years, many researchers have taken different approaches to the problem and have contributed significant research in this area and claim to have made progress in their respective domains. It is important to evaluate the performance of these algorithms in a comparative setting and analyze the strengths and weaknesses of these methods. In this paper, we present results of an extensive subjective quality assessment study in which a total of 779 distorted images were evaluated by about two dozen human subjects. The "ground truth" image quality data obtained from about 25 000 individual human quality judgments is used to evaluate the performance of several prominent full-reference image quality assessment algorithms. To the best of our knowledge, apart from video quality studies conducted by the Video Quality Experts Group, the study presented in this paper is the largest subjective image quality study in the literature in terms of number of images, distortion types, and number of human judgments per image. Moreover, we have made the data from the study freely available to the research community. This would allow other researchers to easily report comparative results in the future.

2,598 citations


Journal ArticleDOI
TL;DR: This paper proposes a design framework based on the mapping approach that allows for a fast implementation based on a lifting or ladder structure, and in some cases uses only one-dimensional filtering.
Abstract: In this paper, we develop the nonsubsampled contourlet transform (NSCT) and study its applications. The construction proposed in this paper is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks. The result is a flexible multiscale, multidirection, and shift-invariant image decomposition that can be efficiently implemented via the à trous algorithm. At the core of the proposed scheme is the nonseparable two-channel nonsubsampled filter bank (NSFB). We exploit the less stringent design condition of the NSFB to design filters that lead to an NSCT with better frequency selectivity and regularity when compared to the contourlet transform. We propose a design framework based on the mapping approach that allows for a fast implementation based on a lifting or ladder structure, and in some cases uses only one-dimensional filtering. In addition, our design ensures that the corresponding frame elements are regular and symmetric, and that the frame is close to a tight one. We assess the performance of the NSCT in image denoising and enhancement applications. In both applications the NSCT compares favorably to other existing methods in the literature.
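
The sketch below illustrates only the shift-invariant multiscale part of such a construction: a nonsubsampled pyramid built with the à trous algorithm, where the lowpass kernel is dilated by inserting zeros at each level and bandpass levels are taken as differences. The generic B3-spline kernel and the omission of the nonsubsampled directional filter-bank stage are simplifications of my own, not the filters designed in the paper.

```python
# Minimal sketch: nonsubsampled (a trous) pyramid with a B3-spline lowpass kernel.
import numpy as np
from scipy.ndimage import convolve

def atrous_pyramid(image, levels=3):
    h = np.array([1., 4., 6., 4., 1.]) / 16.0    # 1-D B3-spline lowpass
    approx = image.astype(np.float64)
    bands = []
    for j in range(levels):
        # Dilate the kernel by inserting 2**j - 1 zeros between taps (a trous).
        hj = np.zeros((len(h) - 1) * 2 ** j + 1)
        hj[:: 2 ** j] = h
        smoothed = convolve(approx, hj[None, :], mode='mirror')
        smoothed = convolve(smoothed, hj[:, None], mode='mirror')
        bands.append(approx - smoothed)           # bandpass (detail) level
        approx = smoothed
    bands.append(approx)                          # coarsest approximation
    return bands                                  # sum(bands) reconstructs the image
```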

1,900 citations


Journal ArticleDOI
TL;DR: A new edge-guided nonlinear interpolation technique is proposed through directional filtering and data fusion that can preserve edge sharpness and reduce ringing artifacts in image interpolation algorithms.
Abstract: Preserving edge structures is a challenge to image interpolation algorithms that reconstruct a high-resolution image from a low-resolution counterpart. We propose a new edge-guided nonlinear interpolation technique through directional filtering and data fusion. For a pixel to be interpolated, two observation sets are defined in two orthogonal directions, and each set produces an estimate of the pixel value. These directional estimates, modeled as different noisy measurements of the missing pixel, are fused by the linear minimum mean square-error estimation (LMMSE) technique into a more robust estimate, using the statistics of the two observation sets. We also present a simplified version of the LMMSE-based interpolation algorithm that reduces computational cost without sacrificing much interpolation performance. Experiments show that the new interpolation techniques can preserve edge sharpness and reduce ringing artifacts.
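
As an illustration of the fusion step only, the sketch below combines two directional estimates of a missing pixel with variance-based linear minimum mean-square-error weights, so the locally more reliable direction dominates. The directional interpolation producing the estimates and the estimation of their error statistics from the two observation sets, which the paper describes, are assumed to happen elsewhere; this is a simplified, uncorrelated-noise version.

```python
# Minimal sketch: LMMSE-style fusion of two directional estimates, assuming
# uncorrelated estimation errors with known (locally estimated) variances.
import numpy as np

def lmmse_fuse(e1, e2, var1, var2, eps=1e-12):
    """Weight each directional estimate inversely to its error variance."""
    w1 = var2 / (var1 + var2 + eps)
    w2 = var1 / (var1 + var2 + eps)
    return w1 * e1 + w2 * e2

# Example: the estimate with the smaller local variance dominates the fused
# value (all input numbers are hypothetical).
fused = lmmse_fuse(e1=120.0, e2=96.0, var1=4.0, var2=25.0)
```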

971 citations


Journal ArticleDOI
TL;DR: An appearance-based face recognition method, called orthogonal Laplacianface, based on the locality preserving projection (LPP) algorithm, which aims at finding a linear approximation to the eigenfunctions of the Laplace Beltrami operator on the face manifold.
Abstract: Following the intuition that naturally occurring face data may be generated by sampling a probability distribution that has support on or near a submanifold of the ambient space, we propose an appearance-based face recognition method called orthogonal Laplacianface. Our algorithm is based on the locality preserving projection (LPP) algorithm, which aims at finding a linear approximation to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. However, LPP is nonorthogonal, and this makes it difficult to reconstruct the data. The orthogonal locality preserving projection (OLPP) method produces orthogonal basis functions and can have more locality preserving power than LPP. Since the locality preserving power is potentially related to the discriminating power, the OLPP is expected to have more discriminating power than LPP. Experimental results on three face databases demonstrate the effectiveness of our proposed algorithm.
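
For orientation, here is a hedged sketch of the plain LPP step that OLPP builds on: a heat-kernel affinity over k nearest neighbours, the graph Laplacian, and a generalized eigenproblem solved for the smallest eigenvalues. The paper's step-by-step orthogonalization that turns LPP into OLPP, and the PCA preprocessing typically applied to high-dimensional face images, are omitted; parameter values are illustrative.

```python
# Minimal sketch of locality preserving projection (LPP), the basis of OLPP.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=10, k=5, t=1.0):
    # X: (n_samples, n_features); rows are (vectorized) face images.
    d2 = cdist(X, X, 'sqeuclidean')
    W = np.zeros_like(d2)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]           # k nearest neighbours
    for i, js in enumerate(nn):
        W[i, js] = np.exp(-d2[i, js] / t)             # heat-kernel weights
    W = np.maximum(W, W.T)                            # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                         # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])       # regularize for stability
    evals, evecs = eigh(A, B)                         # ascending eigenvalues
    return evecs[:, :n_components]                    # projection basis (d x m)
```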

783 citations


Journal ArticleDOI
TL;DR: Results clearly show that the proposed switching median filter substantially outperforms all existing median-based filters, in terms of suppressing impulse noise while preserving image details, and yet, the proposed BDND is algorithmically simple, suitable for real-time implementation and application.
Abstract: A novel switching median filter incorporating a powerful impulse noise detection method, called boundary discriminative noise detection (BDND), is proposed in this paper for effectively denoising extremely corrupted images. To determine whether the current pixel is corrupted, the proposed BDND algorithm first classifies the pixels of a localized window, centered on the current pixel, into three groups: lower-intensity impulse noise, uncorrupted pixels, and higher-intensity impulse noise. The center pixel is then considered "uncorrupted" if it belongs to the "uncorrupted" pixel group, and "corrupted" otherwise. For that, the two boundaries that discriminate these three groups need to be accurately determined to yield a very high noise detection accuracy; in our case, this means achieving a zero miss-detection rate while maintaining a fairly low false-alarm rate, even up to 70% noise corruption. Four noise models are considered for performance evaluation. Extensive simulation results conducted on both monochrome and color images under a wide range (from 10% to 90%) of noise corruption clearly show that our proposed switching median filter substantially outperforms all existing median-based filters in terms of suppressing impulse noise while preserving image details, and yet the proposed BDND is algorithmically simple and suitable for real-time implementation and application.
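
The sketch below gives a simplified, BDND-flavoured switching median filter: the sorted local window is split into three clusters at the largest intensity gaps below and above the median, the centre pixel is flagged as corrupted only if it falls in the low- or high-intensity cluster, and only flagged pixels are median-filtered. The paper's two-stage detection with different window sizes and its adaptive filtering window are not reproduced, so this should be read as an illustration of the boundary idea rather than the BDND algorithm itself.

```python
# Simplified sketch of a switching median filter with a BDND-like detector.
import numpy as np

def bdnd_like_detect(window, center):
    v = np.sort(window.ravel())
    med = v[len(v) // 2]
    lower, upper = v[v <= med], v[v >= med]
    # Place boundaries at the largest intensity jump on each side of the median.
    b1 = lower[np.argmax(np.diff(lower))] if len(lower) > 1 else v[0]
    b2 = upper[np.argmax(np.diff(upper)) + 1] if len(upper) > 1 else v[-1]
    return not (b1 < center < b2)            # True means flagged as impulse noise

def switching_median(img, win=3):
    pad = win // 2
    padded = np.pad(img.astype(np.float64), pad, mode='reflect')
    out = img.astype(np.float64).copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            if bdnd_like_detect(w, padded[i + pad, j + pad]):
                out[i, j] = np.median(w)     # filter only the flagged pixels
    return out
```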

614 citations


Journal ArticleDOI
TL;DR: This study reveals the highly non-Gaussian marginal statistics and strong interlocation, interscale, and interdirection dependencies of contourlet coefficients and finds that, conditioned on the magnitudes of their generalized neighborhood coefficients, contourlet coefficients can be approximately modeled as Gaussian random variables.
Abstract: The contourlet transform is a new two-dimensional extension of the wavelet transform using multiscale and directional filter banks. The contourlet expansion is composed of basis images oriented at various directions in multiple scales, with flexible aspect ratios. Given this rich set of basis images, the contourlet transform effectively captures smooth contours that are the dominant feature in natural images. We begin with a detailed study on the statistics of the contourlet coefficients of natural images: using histograms to estimate the marginal and joint distributions and mutual information to measure the dependencies between coefficients. This study reveals the highly non-Gaussian marginal statistics and strong interlocation, interscale, and interdirection dependencies of contourlet coefficients. We also find that conditioned on the magnitudes of their generalized neighborhood coefficients, contourlet coefficients can be approximately modeled as Gaussian random variables. Based on these findings, we model contourlet coefficients using a hidden Markov tree (HMT) model with Gaussian mixtures that can capture all interscale, interdirection, and interlocation dependencies. We present experimental results using this model in image denoising and texture retrieval applications. In denoising, the contourlet HMT outperforms other wavelet methods in terms of visual quality, especially around edges. In texture retrieval, it shows improvements in performance for various oriented textures.

583 citations


Journal ArticleDOI
TL;DR: Based on the concepts of luminance-weighted chrominance blending and fast intrinsic distance computations, high-quality colorization results for still images and video are obtained at a fraction of the complexity and computational cost of previously reported techniques.
Abstract: Colorization, the task of coloring a grayscale image or video, involves assigning, from the single dimension of intensity or luminance, a quantity that varies in three dimensions, such as the red, green, and blue channels. The mapping between intensity and color is, therefore, not unique, and colorization is ambiguous in nature, requiring some amount of human interaction or external information. A computationally simple, yet effective, approach to colorization is presented in this paper. The method is fast and can be conveniently used "on the fly," permitting the user to interactively get the desired results promptly after providing a reduced set of chrominance scribbles. Based on the concepts of luminance-weighted chrominance blending and fast intrinsic distance computations, high-quality colorization results for still images and video are obtained at a fraction of the complexity and computational cost of previously reported techniques. Possible extensions of the algorithm introduced here include the capability of changing the colors of an existing color image or video, as well as changing the underlying luminance, and many other special effects demonstrated here.

540 citations


Journal ArticleDOI
TL;DR: A novel adaptive and patch-based approach is proposed for image denoising and representation based on a pointwise selection of small image patches of fixed size in the variable neighborhood of each pixel to associate with each pixel the weighted sum of data points within an adaptive neighborhood.
Abstract: A novel adaptive and patch-based approach is proposed for image denoising and representation. The method is based on a pointwise selection of small image patches of fixed size in a variable neighborhood of each pixel. Our contribution is to associate with each pixel the weighted sum of data points within an adaptive neighborhood, in a manner that balances the accuracy of approximation and the stochastic error at each spatial position. This method is general and can be applied under the assumption that there exist repetitive patterns in a local neighborhood of a point. By introducing spatial adaptivity, we extend the work earlier described by Buades et al., which can be considered an extension of bilateral filtering to image patches. Finally, we propose a nearly parameter-free algorithm for image denoising. The method is applied to both artificially corrupted (white Gaussian noise) and real images, and its performance is very close to, and in some cases even surpasses, that of already published denoising methods.
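
For reference, the sketch below implements the fixed-window patch-based weighting of Buades et al. that the abstract says is being extended: every pixel becomes a weighted average of pixels in a search window, with weights decaying with the squared difference between the surrounding patches. The paper's actual contribution, the pointwise bias/variance-driven adaptation of the neighbourhood size, is not reproduced; the patch size, search radius, and filtering parameter h are illustrative.

```python
# Minimal sketch of fixed-window, patch-similarity weighted averaging (NL-means style).
import numpy as np

def patch_weighted_denoise(img, patch=3, search=7, h=10.0):
    p, s = patch // 2, search // 2
    padded = np.pad(img.astype(np.float64), p + s, mode='reflect')
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + p + s, j + p + s
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            weights, values = [], []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    cand = padded[ci + di - p:ci + di + p + 1,
                                  cj + dj - p:cj + dj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)      # patch dissimilarity
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ci + di, cj + dj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out
```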

486 citations


Journal ArticleDOI
TL;DR: A fast and robust hybrid method of super-resolution and demosaicing, based on a maximum a posteriori estimation technique by minimizing a multiterm cost function is proposed.
Abstract: In the last two decades, two related categories of problems have been studied independently in the image restoration literature: super-resolution and demosaicing. A closer look at these problems reveals the relation between them, and, as conventional color digital cameras suffer from both low spatial resolution and color filtering, it is reasonable to address them in a unified context. In this paper, we propose a fast and robust hybrid method of super-resolution and demosaicing, based on a maximum a posteriori estimation technique that minimizes a multiterm cost function. The L1 norm is used for measuring the difference between the projected estimate of the high-resolution image and each low-resolution image, removing outliers in the data and errors due to possibly inaccurate motion estimation. Bilateral regularization is used for spatially regularizing the luminance component, resulting in sharp edges and forcing interpolation along the edges and not across them. Simultaneously, Tikhonov regularization is used to smooth the chrominance components. Finally, an additional regularization term is used to force similar edge location and orientation in different color channels. We show that the minimization of the total cost function is relatively easy and fast. Experimental results on synthetic and real data sets confirm the effectiveness of our method.

Journal ArticleDOI
TL;DR: The results demonstrate that the new subband-adaptive shrinkage function outperforms Bayesian thresholding approaches in terms of mean-squared error, and the spatially adaptive version of the proposed method yields better results than existing spatially adaptive ones of similar and higher complexity.
Abstract: We develop three novel wavelet domain denoising methods for subband-adaptive, spatially-adaptive and multivalued image denoising. The core of our approach is the estimation of the probability that a given coefficient contains a significant noise-free component, which we call "signal of interest". In this respect, we analyze cases where the probability of signal presence is 1) fixed per subband, 2) conditioned on a local spatial context, and 3) conditioned on information from multiple image bands. All the probabilities are estimated assuming a generalized Laplacian prior for noise-free subband data and additive white Gaussian noise. The results demonstrate that the new subband-adaptive shrinkage function outperforms Bayesian thresholding approaches in terms of mean-squared error. The spatially adaptive version of the proposed method yields better results than the existing spatially adaptive ones of similar and higher complexity. The performance on color and on multispectral images is superior with respect to recent multiband wavelet thresholding.

Journal ArticleDOI
TL;DR: The novelties of the method are, first, the use of an adaptive filter whose shape follows the image's high-contrast edges, thus reducing the halo artifacts common to other methods, and, second, that only the luminance channel is processed.
Abstract: We propose a new method to render high dynamic range images that models global and local adaptation of the human visual system. Our method is based on the center-surround Retinex model. The first novelty of our method is the use of an adaptive filter, whose shape follows the image's high-contrast edges, thus reducing the halo artifacts common to other methods. Second, only the luminance channel is processed, which is defined by the first component of a principal component analysis. Principal component analysis provides orthogonality between channels and thus reduces the chromatic changes caused by the modification of luminance. We show that our method efficiently renders high dynamic range images, and we compare our results with the current state of the art.

Journal ArticleDOI
TL;DR: The proposed method utilizes the void-and-cluster algorithm to encode a secret binary image into n halftone shares (images) carrying significant visual information, and shows that the visual quality of the obtained halftones is observably better than that attained by any available visual cryptography method known to date.
Abstract: Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on blue-noise dithering principles, the proposed method utilizes the void-and-cluster algorithm to encode a secret binary image into n halftone shares (images) carrying significant visual information. Simulations show that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.

Journal ArticleDOI
TL;DR: A PDE-based level set method is proposed that requires minimizing a smooth convex functional under a quadratic constraint, and numerical results using the method for segmentation of digital images are shown.
Abstract: In this paper, we propose a PDE-based level set method. Traditionally, interfaces are represented by the zero level set of continuous level set functions. Instead, we let the interfaces be represented by discontinuities of piecewise constant level set functions. Each level set function can at convergence only take two values, i.e., it can only be 1 or -1; thus, our method is related to phase-field methods. Some of the properties of standard level set methods are preserved in the proposed method, while others are not. Using this new method for interface problems, we need to minimize a smooth convex functional under a quadratic constraint. The level set functions are discontinuous at convergence, but the minimization functional is smooth. We show numerical results using the method for segmentation of digital images.

Journal ArticleDOI
TL;DR: An image hashing paradigm using visually significant feature points is proposed; it withstands standard benchmark attacks, including compression, geometric distortions of scaling and small-angle rotation, and common signal-processing operations.
Abstract: We propose an image hashing paradigm using visually significant feature points. The feature points should be largely invariant under perceptually insignificant distortions. To satisfy this, we propose an iterative feature detector to extract significant geometry preserving feature points. We apply probabilistic quantization on the derived features to introduce randomness, which, in turn, reduces vulnerability to adversarial attacks. The proposed hash algorithm withstands standard benchmark (e.g., Stirmark) attacks, including compression, geometric distortions of scaling and small-angle rotation, and common signal-processing operations. Content changing (malicious) manipulations of image data are also accurately detected. Detailed statistical analysis in the form of receiver operating characteristic (ROC) curves is presented and reveals the success of the proposed scheme in achieving perceptual robustness while avoiding misclassification.

Journal ArticleDOI
TL;DR: A new grayscale image quality measure that can be used as a graphical or a scalar measure to predict the distortion introduced by a wide range of noise sources based on singular value decomposition is presented.
Abstract: The important criteria used in subjective evaluation of distorted images include the amount of distortion, the type of distortion, and the distribution of error. An ideal image quality measure should, therefore, be able to mimic the human observer. We present a new grayscale image quality measure that can be used as a graphical or a scalar measure to predict the distortion introduced by a wide range of noise sources. Based on singular value decomposition, it reliably measures the distortion not only within a distortion type at different distortion levels, but also across different distortion types. The measure was applied to five test images (airplane, boat, Goldhill, Lena, and peppers) using six types of distortion (JPEG, JPEG 2000, Gaussian blur, Gaussian noise, sharpening, and DC-shifting), each with five distortion levels. Its performance is compared with PSNR and two recent measures.
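
The sketch below follows my reading of the block-SVD idea in the abstract: both images are tiled into 8x8 blocks, the singular values of corresponding blocks are compared, and the per-block distances form a graphical distortion map. Summarizing the map as the mean absolute deviation of the block distances from their median is an assumption about the scalar measure, so treat the function as illustrative rather than the paper's exact definition.

```python
# Sketch of a block-SVD distortion map and a scalar summary of it.
import numpy as np

def svd_quality(ref, dist, block=8):
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    rows, cols = ref.shape[0] // block, ref.shape[1] // block
    dmap = np.zeros((rows, cols))
    for bi in range(rows):
        for bj in range(cols):
            sl = np.s_[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            s_ref = np.linalg.svd(ref[sl], compute_uv=False)
            s_dst = np.linalg.svd(dist[sl], compute_uv=False)
            dmap[bi, bj] = np.sqrt(np.sum((s_ref - s_dst) ** 2))
    scalar = np.mean(np.abs(dmap - np.median(dmap)))   # assumed scalar summary
    return dmap, scalar                                # graphical map and scalar value
```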

Journal ArticleDOI
TL;DR: Experimental results and statistical models of the induced ordering are presented and several applications are discussed: image enhancement, normalization, watermarking, etc.
Abstract: While in the continuous case, statistical models of histogram equalization/specification would yield exact results, their discrete counterparts fail. This is due to the fact that the cumulative distribution functions one deals with are not exactly invertible. Otherwise stated, exact histogram specification for discrete images is an ill-posed problem. Invertible cumulative distribution functions are obtained by translating the problem in a K-dimensional space and further inducing a strict ordering among image pixels. The proposed ordering refines the natural one. Experimental results and statistical models of the induced ordering are presented and several applications are discussed: image enhancement, normalization, watermarking, etc.
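
A hedged sketch of the ordering idea follows: each pixel's gray value is augmented with local averages over growing neighbourhoods, the pixels are sorted lexicographically on this K-dimensional key (which breaks most ties and refines the natural ordering), and the target histogram is then assigned rank by rank. The box filters used here to induce the auxiliary orderings are an assumption; the paper's construction may use different filters.

```python
# Sketch of exact histogram specification via a tie-breaking pixel ordering.
import numpy as np
from scipy.ndimage import uniform_filter

def exact_hist_spec(img, target_hist):
    # target_hist: desired pixel counts per gray level; must sum to img.size.
    keys = [img.astype(np.float64)]
    for size in (3, 5, 7):                       # K - 1 auxiliary orderings
        keys.append(uniform_filter(img.astype(np.float64), size))
    # np.lexsort uses the last key as the primary one, so reverse the list
    # to make the gray value dominate and the local averages break ties.
    order = np.lexsort(tuple(k.ravel() for k in reversed(keys)))
    new_levels = np.repeat(np.arange(len(target_hist)), target_hist)
    out = np.empty(img.size, dtype=np.int64)
    out[order] = new_levels                      # rank r receives the r-th level
    return out.reshape(img.shape)
```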

Journal ArticleDOI
TL;DR: This study provides an overview of Gabor filters in image processing and a short literature survey of the most significant results, and it establishes invariance properties and restrictions to the use of Gabor filters in feature extraction.
Abstract: For almost three decades the use of features based on Gabor filters has been promoted for their useful properties in image processing. The most important properties are related to invariance to illumination, rotation, scale, and translation. These properties are based on the fact that they are all parameters of Gabor filters themselves. This is especially useful in feature extraction, where Gabor filters have succeeded in many applications, from texture analysis to iris and face recognition. This study provides an overview of Gabor filters in image processing, a short literature survey of the most significant results, and establishes invariance properties and restrictions to the use of Gabor filters in feature extraction. Results are demonstrated by application examples.
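
The sketch below shows a small Gabor filter bank of the kind the survey discusses: a complex sinusoid modulated by a Gaussian envelope, instantiated at several frequencies and orientations, with the mean and standard deviation of the response magnitudes used as texture features. The specific frequencies, orientations, and envelope width are illustrative choices, not values taken from the survey.

```python
# Minimal sketch: Gabor filter bank and simple magnitude-statistics features.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma, size=31):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.exp(2j * np.pi * freq * xr)        # complex sinusoid
    return envelope * carrier

def gabor_features(img, freqs=(0.1, 0.2, 0.3), n_orient=4, sigma=4.0):
    feats = []
    for f in freqs:
        for k in range(n_orient):
            kern = gabor_kernel(f, np.pi * k / n_orient, sigma)
            resp = fftconvolve(img.astype(np.float64), kern, mode='same')
            feats.extend([np.abs(resp).mean(), np.abs(resp).std()])
    return np.array(feats)      # mean/std of magnitude per (frequency, orientation)
```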

Journal ArticleDOI
TL;DR: Both the classification and the verification performances are found to be very satisfactory as it was shown that, at least for groups of about five hundred subjects, hand-based recognition is a viable secure access control scheme.
Abstract: The problem of person recognition and verification based on hand images is addressed. The system is based on images of the right hands of the subjects, captured by a flatbed scanner in an unconstrained pose at 45 dpi. In a preprocessing stage of the algorithm, the silhouettes of the hand images are registered to a fixed pose, which involves both rotation and translation of the hand and, separately, of the individual fingers. Two feature sets have been comparatively assessed: the Hausdorff distance of the hand contours and independent component features of the hand silhouette images. Both the classification and the verification performances are found to be very satisfactory, showing that, at least for groups of about five hundred subjects, hand-based recognition is a viable secure access control scheme.

Journal ArticleDOI
TL;DR: This work presents a new lattice-based perfect reconstruction and critically sampled anisotropic M-DIR WT, which provides an efficient tool for nonlinear approximation of images, achieving the approximation power O(N^-1.55), which, while slower than the optimal rate O(N^-2), is much better than the O(N^-1) achieved with wavelets, but at similar complexity.
Abstract: In spite of the success of the standard wavelet transform (WT) in image processing in recent years, the efficiency of its representation is limited by the spatial isotropy of its basis functions built in the horizontal and vertical directions. One-dimensional (1-D) discontinuities in images (edges and contours) that are very important elements in visual perception, intersect too many wavelet basis functions and lead to a nonsparse representation. To efficiently capture these anisotropic geometrical structures characterized by many more than the horizontal and vertical directions, a more complex multidirectional (M-DIR) and anisotropic transform is required. We present a new lattice-based perfect reconstruction and critically sampled anisotropic M-DIR WT. The transform retains the separable filtering and subsampling and the simplicity of computations and filter design from the standard two-dimensional WT, unlike in the case of some other directional transform constructions (e.g., curvelets, contourlets, or edgelets). The corresponding anisotropic basis functions (directionlets) have directional vanishing moments along any two directions with rational slopes. Furthermore, we show that this novel transform provides an efficient tool for nonlinear approximation of images, achieving the approximation power O(N^-1.55), which, while slower than the optimal rate O(N^-2), is much better than the O(N^-1) achieved with wavelets, but at similar complexity.

Journal ArticleDOI
TL;DR: A practical quality-aware image encoding, decoding, and quality analysis system is built, employing a novel reduced-reference image quality assessment algorithm based on a statistical model of natural images and a previously developed quantization watermarking-based data hiding technique in the wavelet transform domain.
Abstract: We propose the concept of quality-aware image, in which certain extracted features of the original (high-quality) image are embedded into the image data as invisible hidden messages. When a distorted version of such an image is received, users can decode the hidden messages and use them to provide an objective measure of the quality of the distorted image. To demonstrate the idea, we build a practical quality-aware image encoding, decoding and quality analysis system, which employs: 1) a novel reduced-reference image quality assessment algorithm based on a statistical model of natural images and 2) a previously developed quantization watermarking-based data hiding technique in the wavelet transform domain.

Journal ArticleDOI
TL;DR: The basic procedure is to first group the histogram components of a low-contrast image into a proper number of bins according to a selected criterion, then redistribute these bins uniformly over the grayscale, and finally ungroup the previously grouped gray-levels.
Abstract: This is Part II of the paper, "Gray-Level Grouping (GLG): an Automatic Method for Optimized Image Contrast Enhancement". Part I of this paper introduced a new automatic contrast enhancement technique: gray-level grouping (GLG). GLG is a general and powerful technique, which can be conveniently applied to a broad variety of low-contrast images and outperforms conventional contrast enhancement techniques. However, the basic GLG method still has limitations and cannot enhance certain classes of low-contrast images well, e.g., images with a noisy background. The basic GLG also cannot fulfill certain special application purposes, e.g., enhancing only part of an image which corresponds to a certain segment of the image histogram. In order to break through these limitations, this paper introduces an extension of the basic GLG algorithm, selective gray-level grouping (SGLG), which groups the histogram components in different segments of the grayscale using different criteria and, hence, is able to enhance different parts of the histogram to various extents. This paper also introduces two new preprocessing methods to eliminate background noise in noisy low-contrast images so that such images can be properly enhanced by the (S)GLG technique. The extension of (S)GLG to color images is also discussed in this paper. SGLG and its variations extend the capability of the basic GLG to a larger variety of low-contrast images, and can fulfill special application requirements. SGLG and its variations not only produce results superior to conventional contrast enhancement techniques, but are also fully automatic under most circumstances, and are applicable to a broad variety of images.

Journal ArticleDOI
TL;DR: An anisotropic diffusion filter is derived that, unlike a previously reported filter (SRAD), does not depend on a linear approximation of the assumed speckle model; given good estimates, the two filters perform fairly closely, a fact that emphasizes the importance of correctly estimating the coefficients of variation.
Abstract: In this paper, we focus on the problem of speckle removal by means of anisotropic diffusion and, specifically, on the importance of the correct estimation of the statistics involved. First, we derive an anisotropic diffusion filter that does not depend on a linear approximation of the speckle model assumed, which is the case of a previously reported filter, namely, SRAD. Then, we focus on the problem of estimation of the coefficient of variation of both signal and noise and of noise itself. Our experiments indicate that neighborhoods used for parameter estimation do not need to coincide with those used in the diffusion equations. Then, we show that, as long as the estimates are good enough, the filter proposed here and the SRAD perform fairly closely, a fact that emphasizes the importance of the correct estimation of the coefficients of variation.
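
Since the abstract centres on SRAD and the estimation of its statistics, the sketch below shows a standard SRAD-style diffusion loop with simple one-sided differences: an instantaneous coefficient of variation drives the diffusion coefficient, and q0 is assumed to be estimated beforehand from a homogeneous region. The paper's own filter and its estimation procedures are not reproduced; the fixed q0, the clamp on the diffusion coefficient, and the periodic boundaries via np.roll are pragmatic simplifications.

```python
# Hedged sketch of SRAD-style speckle-reducing anisotropic diffusion.
import numpy as np

def srad_like(img, n_iter=100, dt=0.05, q0=0.2):
    # q0: coefficient of variation of the speckle, e.g. estimated from a
    # homogeneous region (the original SRAD lets it decay over the iterations).
    I = img.astype(np.float64) + 1e-8
    for _ in range(n_iter):
        dN = np.roll(I, 1, axis=0) - I
        dS = np.roll(I, -1, axis=0) - I
        dW = np.roll(I, 1, axis=1) - I
        dE = np.roll(I, -1, axis=1) - I
        grad2 = (dN ** 2 + dS ** 2 + dW ** 2 + dE ** 2) / I ** 2
        lap = (dN + dS + dW + dE) / I
        # Instantaneous coefficient of variation and diffusion coefficient.
        q2 = (0.5 * grad2 - lap ** 2 / 16.0) / (1.0 + 0.25 * lap) ** 2
        c = 1.0 / (1.0 + (q2 - q0 ** 2) / (q0 ** 2 * (1.0 + q0 ** 2)))
        c = np.clip(c, 0.0, 1.0)              # clamp for numerical robustness
        # Divergence of c * grad(I), then explicit Euler update.
        cS = np.roll(c, -1, axis=0)
        cE = np.roll(c, -1, axis=1)
        div = cS * dS + c * dN + cE * dE + c * dW
        I = I + (dt / 4.0) * div
    return I
```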

Journal ArticleDOI
TL;DR: A novel framework for lossless (invertible) authentication watermarking is presented, which enables zero-distortion reconstruction of the un-watermarked images upon verification and enables public(-key) authentication without granting access to the perfect original and allows for efficient tamper localization.
Abstract: We present a novel framework for lossless (invertible) authentication watermarking, which enables zero-distortion reconstruction of the un-watermarked images upon verification. As opposed to earlier lossless authentication methods that required reconstruction of the original image prior to validation, the new framework allows validation of the watermarked images before recovery of the original image. This reduces computational requirements in situations when either the verification step fails or the zero-distortion reconstruction is not needed. For verified images, integrity of the reconstructed image is ensured by the uniqueness of the reconstruction procedure. The framework also enables public(-key) authentication without granting access to the perfect original and allows for efficient tamper localization. Effectiveness of the framework is demonstrated by implementing the framework using hierarchical image authentication along with lossless generalized-least significant bit data embedding.

Journal ArticleDOI
TL;DR: The proposed algorithm is based on an interesting use of the integer wavelet transform followed by a fast adaptive context-based Golomb-Rice coding for lossless compression of color mosaic images generated by a Bayer CCD color filter array.
Abstract: Lossless compression of color mosaic images poses a unique and interesting problem of spectral decorrelation of spatially interleaved R, G, B samples. We investigate reversible lossless spectral-spatial transforms that can remove statistical redundancies in both spectral and spatial domains and discover that a particular wavelet decomposition scheme, called Mallat wavelet packet transform, is ideally suited to the task of decorrelating color mosaic data. We also propose a low-complexity adaptive context-based Golomb-Rice coding technique to compress the coefficients of Mallat wavelet packet transform. The lossless compression performance of the proposed method on color mosaic images is apparently the best so far among the existing lossless image codecs.
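
As a small aside on the entropy-coding stage named in the abstract, here is a sketch of plain Golomb-Rice coding of a non-negative residual: a power-of-two parameter 2^k splits the value into a unary-coded quotient and a k-bit binary remainder. The adaptive, context-based selection of k described in the paper (as well as the Mallat wavelet packet transform feeding it and the mapping of signed residuals to non-negative integers) is not shown.

```python
# Minimal sketch: Golomb-Rice coding of non-negative integers with parameter k.
def rice_encode(n, k):
    """Return the Golomb-Rice codeword of n (n >= 0) as a string of bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    remainder = format(r, '0{}b'.format(k)) if k > 0 else ''
    return '1' * q + '0' + remainder        # unary quotient, then k-bit remainder

def rice_decode(bits, k):
    q = bits.index('0')                     # count the leading ones
    r = int(bits[q + 1:q + 1 + k], 2) if k > 0 else 0
    return (q << k) | r

# Round-trip example; in practice k is chosen per context to keep codes short.
assert rice_encode(9, k=2) == '11001'
assert rice_decode('11001', k=2) == 9
```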

Journal ArticleDOI
TL;DR: The experimental results demonstrate that, while the majority of palmprint or hand-shape features are useful in predicting the subject's identity, only a small subset of these features is necessary in practice for building an accurate model for identification.
Abstract: This paper proposes a new bimodal biometric system using feature-level fusion of hand shape and palm texture. The proposed combination is of significance since both the palmprint and hand-shape images are extracted from a single hand image acquired with a digital camera. Several new hand-shape features that can be used to represent the hand shape and improve performance are investigated. A new approach for palmprint recognition using discrete cosine transform coefficients, which can be obtained directly from the camera hardware, is demonstrated. None of the prior work on hand-shape or palmprint recognition has given any attention to the critical issue of feature selection. Our experimental results demonstrate that, while the majority of palmprint or hand-shape features are useful in predicting the subject's identity, only a small subset of these features is necessary in practice for building an accurate model for identification. The comparison and combination of the proposed features are evaluated with diverse classification schemes: naive Bayes (normal, estimated, multinomial), decision trees (C4.5, LMT), k-NN, SVM, and FFN. Although more work remains to be done, our results to date indicate that the combination of selected hand-shape and palmprint features constitutes a promising addition to biometrics-based personal recognition systems.

Journal ArticleDOI
TL;DR: An on-board real-time monocular vehicle detection system is presented that acquires grey-scale images using Ford's proprietary low-light camera and achieves an average detection rate of 10 Hz.
Abstract: Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating the problem of vehicle detection as a two-class classification problem, we have investigated several different feature extraction methods such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that is capable of acquiring grey-scale images, using Ford's proprietary low-light camera, achieving an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During the hypothesis generation step, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection, but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highway, complex urban streets, and varying weather conditions), illustrating good performance.

Journal ArticleDOI
TL;DR: A Bayesian age difference classifier is developed that classifies face images of individuals based on age differences and performs face verification across age progression, and preprocessing methods for minimizing illumination and pose variations are proposed.
Abstract: Human faces undergo considerable amounts of variation with aging. While face recognition systems have been proven to be sensitive to factors such as illumination and pose, their sensitivity to facial aging effects is yet to be studied. How does age progression affect the similarity between a pair of face images of an individual? What is the confidence associated with establishing the identity between a pair of age-separated face images? In this paper, we develop a Bayesian age difference classifier that classifies face images of individuals based on age differences and performs face verification across age progression. Further, we study the similarity of faces across age progression. Since age-separated face images invariably differ in illumination and pose, we propose preprocessing methods for minimizing such variations. Experimental results using a database comprising pairs of face images that were retrieved from the passports of 465 individuals are presented. The verification system for faces separated by as many as nine years attains an equal error rate of 8.5%.

Journal ArticleDOI
TL;DR: A new algorithm is described that is especially developed for reducing all kinds of impulse noise: the fuzzy impulse noise detection and reduction method (FIDRM), which can also be applied to images containing a mixture of impulse noise and other types of noise.
Abstract: Removing or reducing impulse noise is a very active research area in image processing. In this paper, we describe a new algorithm that is especially developed for reducing all kinds of impulse noise: the fuzzy impulse noise detection and reduction method (FIDRM). It can also be applied to images having a mixture of impulse noise and other types of noise. The result is an image with very little (or virtually no) impulse noise, so that other filters can be used afterwards. This nonlinear filtering technique contains two separate steps: an impulse noise detection step and a reduction step that preserves edge sharpness. Based on the concept of fuzzy gradient values, our detection method constructs a fuzzy set for impulse noise. This fuzzy set is represented by a membership function that is used by the filtering method, which is a fuzzy averaging of neighboring pixels. Experimental results show that FIDRM provides a significant improvement over other existing filters. FIDRM is not only very fast, but also very effective for reducing low as well as very high levels of impulse noise.