Author

Pierre Moulin

Other affiliations: Qualcomm, Rensselaer Polytechnic Institute, Microsoft
Bio: Pierre Moulin is an academic researcher from the University of Illinois at Urbana–Champaign. The author has contributed to research in topics: Digital watermarking & Wavelet. The author has an h-index of 46 and has co-authored 292 publications receiving 10,151 citations. Previous affiliations of Pierre Moulin include Qualcomm & Rensselaer Polytechnic Institute.


Papers
Journal ArticleDOI
TL;DR: In this article, a simple spatially adaptive statistical model for wavelet image coefficients is introduced and applied to image denoising. The model is inspired by a recent wavelet image compression algorithm, the estimation-quantization (EQ) coder.
Abstract: We introduce a simple spatially adaptive statistical model for wavelet image coefficients and apply it to image denoising. Our model is inspired by a recent wavelet image compression algorithm, the estimation-quantization (EQ) coder. We model wavelet image coefficients as zero-mean Gaussian random variables with high local correlation. We assume a marginal prior distribution on the wavelet coefficient variances and estimate them using an approximate maximum a posteriori probability rule. Then we apply an approximate minimum mean squared error estimation procedure to restore the noisy wavelet image coefficients. Despite the simplicity of our method, both in its concept and implementation, our denoising results are among the best reported in the literature.
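As an illustration of the kind of estimator described above, here is a minimal sketch that applies spatially adaptive Wiener-style shrinkage to a single wavelet subband. It uses a plain local-moment variance estimate in place of the paper's approximate MAP rule with a variance prior; the function name, window size, and reliance on SciPy are assumptions, not details from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter  # local averaging for variance estimation

def denoise_subband(coeffs, noise_var, win=5):
    """Spatially adaptive Wiener-style shrinkage of one wavelet subband (sketch).

    Coefficients are modeled as zero-mean Gaussian with a slowly varying local
    variance; the signal variance is estimated from a local window and each
    coefficient is scaled by the approximate MMSE (Wiener) gain.
    """
    local_energy = uniform_filter(coeffs ** 2, size=win)   # local second moment
    sig_var = np.maximum(local_energy - noise_var, 0.0)    # estimated signal variance
    return coeffs * sig_var / (sig_var + noise_var)        # approximate MMSE estimate
```

A wavelet transform (for example from PyWavelets) would supply the detail subbands to which this function is applied, with the approximation subband usually left untouched.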

847 citations

Journal ArticleDOI
25 Jun 2000
TL;DR: An information-theoretic analysis of information hiding is presented, forming the theoretical basis for the design of information-hiding systems and evaluating the hiding capacity, which upper-bounds the rates of reliable transmission and quantifies the fundamental tradeoff between the achievable hiding rate and the distortion levels allowed to the information hider and to the attacker.
Abstract: An information-theoretic analysis of information hiding is presented, forming the theoretical basis for design of information-hiding systems. Information hiding is an emerging research area which encompasses applications such as copyright protection for digital media, watermarking, fingerprinting, steganography, and data embedding. In these applications, information is hidden within a host data set and is to be reliably communicated to a receiver. The host data set is intentionally corrupted, but in a covert way, designed to be imperceptible to a casual analysis. Next, an attacker may seek to destroy this hidden information, and for this purpose, introduce additional distortion to the data set. Side information (in the form of cryptographic keys and/or information about the host signal) may be available to the information hider and to the decoder. We formalize these notions and evaluate the hiding capacity, which upper-bounds the rates of reliable transmission and quantifies the fundamental tradeoff between three quantities: the achievable information-hiding rates and the allowed distortion levels for the information hider and the attacker. The hiding capacity is the value of a game between the information hider and the attacker. The optimal attack strategy is the solution of a particular rate-distortion problem, and the optimal hiding strategy is the solution to a channel-coding problem. The hiding capacity is derived by extending the Gel'fand-Pinsker (1980) theory of communication with side information at the encoder. The extensions include the presence of distortion constraints, side information at the decoder, and unknown communication channel. Explicit formulas for capacity are given in several cases, including Bernoulli and Gaussian problems, as well as the important special case of small distortions. In some cases, including the last two above, the hiding capacity is the same whether or not the decoder knows the host data set. It is shown that many existing information-hiding systems in the literature operate far below capacity.
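A schematic statement of the max-min game described above, with notation assumed here rather than quoted from the paper (S host signal, U auxiliary random variable, X marked signal, Y attacked signal, D_1 and D_2 the distortion budgets of the hider and the attacker):

```latex
% Hiding capacity as a max-min game (schematic; notation assumed):
C(D_1, D_2) \;=\;
  \max_{p(u,x \mid s):\; \mathbb{E}\,d(S,X) \le D_1} \;
  \min_{p(y \mid x):\; \mathbb{E}\,d(X,Y) \le D_2}
  \bigl[\, I(U;Y) - I(U;S) \,\bigr]
```

The inner minimization corresponds to the attacker's rate-distortion problem and the outer maximization to the hider's channel-coding problem with side information, matching the game described in the abstract.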

729 citations

Proceedings ArticleDOI
10 Sep 2000
TL;DR: A novel image indexing technique, which may be called an image hash function, uses randomized signal processing strategies for a non-reversible compression of images into random binary strings, and is shown to be robust against image changes due to compression, geometric distortions, and other attacks.
Abstract: The proliferation of digital images creates problems for managing large image databases, indexing individual images, and protecting intellectual property. This paper introduces a novel image indexing technique that may be called an image hash function. The algorithm uses randomized signal processing strategies for a non-reversible compression of images into random binary strings, and is shown to be robust against image changes due to compression, geometric distortions, and other attacks. This algorithm brings to images a direct analog of message authentication codes (MACs) from cryptography, in which a main goal is to make hash values on a set of distinct inputs pairwise independent. This minimizes the probability that two hash values collide, even when inputs are generated by an adversary.
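The toy sketch below illustrates the general flavor of a keyed, randomized image hash: coarse block statistics are projected onto key-dependent pseudorandom patterns and thresholded into a binary string. It is not the algorithm of the paper; the function name, block size, and choice of features are assumptions.

```python
import numpy as np

def image_hash(img, n_bits=64, block=16, key=0):
    """Toy randomized image hash (illustrative sketch, not the paper's method)."""
    rng = np.random.default_rng(key)                 # the secret key seeds the randomness
    img = np.asarray(img, dtype=float)
    h = (img.shape[0] // block) * block
    w = (img.shape[1] // block) * block
    # Coarse, compression-tolerant features: per-block means.
    feats = img[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3)).ravel()
    # Key-dependent random projections of the features.
    proj = rng.standard_normal((n_bits, feats.size)) @ feats
    # Threshold at the median so the bits are roughly balanced.
    return (proj > np.median(proj)).astype(np.uint8)
```

Robustness to mild compression or smoothing comes from the coarse statistics; unpredictability to an adversary comes from the keyed randomness.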

585 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: A deep neural network is developed that seeks multiple hierarchical non-linear transformations to learn compact binary codes for large-scale visual search; experiments show the superiority of the proposed approach over the state of the art.
Abstract: In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for large-scale visual search. Unlike most existing binary codes learning methods which seek a single linear projection to map each sample into a binary vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the nonlinear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the deep network: 1) the loss between the original real-valued feature descriptor and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) by including one discriminative term into the objective function of DH which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes. Experimental results show the superiority of the proposed approach over the state of the art.
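A minimal sketch of the three top-layer constraints listed above, written as a batch loss on the real-valued outputs before binarization. The weighting parameters and the exact penalty forms are assumptions, not the paper's formulation.

```python
import numpy as np

def dh_top_layer_loss(h, lam1=1.0, lam2=1.0):
    """Unsupervised deep-hashing objective at the top layer (sketch).

    h : (N, K) array of real-valued last-layer outputs for a batch of N samples.
    """
    b = np.sign(h)                                        # target binary codes in {-1, +1}
    quant = np.mean((h - b) ** 2)                         # 1) quantization / binarization loss
    balance = np.mean(np.mean(h, axis=0) ** 2)            # 2) each bit roughly zero-mean (even split)
    corr = h.T @ h / h.shape[0]                           # empirical bit correlation matrix
    indep = np.mean((corr - np.eye(h.shape[1])) ** 2)     # 3) push bits toward independence
    return quant + lam1 * balance + lam2 * indep
```

The supervised variant (SDH) would add a term that increases inter-class and decreases intra-class distances between the codes, as described in the abstract.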

569 citations

Journal ArticleDOI
TL;DR: This paper investigates various connections between shrinkage methods and maximum a posteriori (MAP) estimation using heavy-tailed priors, and introduces a new family of complexity priors based upon Rissanen's universal prior on integers.
Abstract: Research on universal and minimax wavelet shrinkage and thresholding methods has demonstrated near-ideal estimation performance in various asymptotic frameworks. However, image processing practice has shown that universal thresholding methods are outperformed by simple Bayesian estimators assuming independent wavelet coefficients and heavy-tailed priors such as generalized Gaussian distributions (GGDs). In this paper, we investigate various connections between shrinkage methods and maximum a posteriori (MAP) estimation using such priors. In particular, we state a simple condition under which MAP estimates are sparse. We also introduce a new family of complexity priors based upon Rissanen's universal prior on integers. One particular estimator in this class outperforms conventional estimators based on earlier applications of the minimum description length (MDL) principle. We develop analytical expressions for the shrinkage rules implied by GGD and complexity priors. This allows us to show the equivalence between universal hard thresholding, MAP estimation using a very heavy-tailed GGD, and MDL estimation using one of the new complexity priors. Theoretical analysis supported by numerous practical experiments shows the robustness of some of these estimates against mis-specifications of the prior, a basic concern in image processing applications.
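As a worked special case of the shrinkage rules discussed above: under Gaussian noise and a generalized Gaussian prior, the coefficient-wise MAP estimate has the form below, and for the Laplacian case (shape parameter equal to 1) it reduces to soft thresholding. The notation is assumed here rather than taken from the paper.

```latex
% MAP shrinkage of a single coefficient under a GGD prior (notation assumed):
%   observation y = x + n,  n ~ N(0, \sigma^2),  prior p(x) \propto \exp(-\lambda |x|^{\nu}).
\hat{x}(y) \;=\; \arg\max_{x} \Bigl[ -\tfrac{(y - x)^2}{2\sigma^2} - \lambda\,|x|^{\nu} \Bigr]
% Laplacian case (\nu = 1): soft thresholding with threshold \lambda\sigma^2.
\hat{x}(y)\big|_{\nu = 1} \;=\; \operatorname{sign}(y)\,\max\bigl(|y| - \lambda\sigma^2,\; 0\bigr)
```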

537 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and is validated against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
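For reference, the local form of the structural similarity index between two image patches x and y is commonly written as follows, with local means, variances, covariance, and two small stabilizing constants:

```latex
% Local structural similarity between patches x and y:
%   \mu = local mean, \sigma^2 = local variance, \sigma_{xy} = local covariance,
%   C_1, C_2 = small stabilizing constants.
\mathrm{SSIM}(x, y) \;=\;
  \frac{(2\mu_x \mu_y + C_1)\,(2\sigma_{xy} + C_2)}
       {(\mu_x^2 + \mu_y^2 + C_1)\,(\sigma_x^2 + \sigma_y^2 + C_2)}
```

The overall image score is obtained by averaging this quantity over local windows.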

40,609 citations

Journal ArticleDOI
TL;DR: A new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically.
Abstract: We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.
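A generic sketch of the accelerated iteration described above, assuming the caller supplies the gradient of the smooth term f, the proximal operator of the nonsmooth term g, and a Lipschitz constant L of the gradient; the function names are placeholders.

```python
import numpy as np

def fista(grad_f, prox_g, L, x0, n_iter=100):
    """Fast iterative shrinkage-thresholding (generic sketch) for min_x f(x) + g(x)."""
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x = prox_g(y - grad_f(y) / L, 1.0 / L)          # ISTA step at the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)     # Nesterov-style momentum
        x_prev, t = x, t_next
    return x_prev

# Example use for an l1-regularized least-squares problem (wavelet deblurring style):
#   grad_f = lambda x: A.T @ (A @ x - b)
#   prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - alpha * t, 0.0)
#   L      = np.linalg.norm(A, 2) ** 2
```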

11,413 citations

Journal ArticleDOI
TL;DR: This work addresses the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image, and uses the K-SVD algorithm to obtain a dictionary that describes the image content effectively.
Abstract: We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm. This leads to state-of-the-art denoising performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
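A minimal sketch of the per-patch sparse-coding step that underlies the approach above, using greedy orthogonal matching pursuit over a given dictionary. Dictionary training with K-SVD and the averaging of overlapping patches back into the image are omitted, and all names and parameters here are assumptions.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: sparse code of signal y over dictionary D."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))     # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)   # refit on the support
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def denoise_patches(noisy_patches, D, n_nonzero=5):
    """Denoise each vectorized patch by sparse approximation over D (per-patch step only)."""
    return np.stack([D @ omp(D, p, n_nonzero) for p in noisy_patches])
```

In the full method, the stopping rule for the sparse coding is tied to the noise level rather than a fixed number of nonzeros, and the overlapping denoised patches are averaged together with the noisy image under the global prior.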

5,493 citations

Journal ArticleDOI
TL;DR: An image information measure is proposed that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image; combined, these two quantities form a visual information fidelity measure for image QA.
Abstract: Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.
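Schematically, the visual information fidelity measure described above is a ratio of mutual informations summed over subbands, where C denotes the natural-scene (Gaussian scale mixture) model of the reference, E and F the reference and distorted signals after the perceptual channel, and s the model parameters; the notation here is assumed.

```latex
% Schematic form of the visual information fidelity (VIF) measure:
\mathrm{VIF} \;=\;
  \frac{\sum_{\text{subbands}} I\!\left(C;\, F \mid s\right)}
       {\sum_{\text{subbands}} I\!\left(C;\, E \mid s\right)}
```

The numerator measures how much reference information survives in the distorted image, and the denominator measures how much was present in the reference to begin with, matching the two quantities described in the abstract.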

3,146 citations