Author

Olivier Rioul

Bio: Olivier Rioul is an academic researcher from Télécom ParisTech. He has contributed to research on mutual information and the entropy power inequality, has an h-index of 24, and has co-authored 151 publications receiving 5,706 citations. His previous affiliations include École Normale Supérieure and Orange S.A.


Papers
Journal ArticleDOI
Olivier Rioul, Martin Vetterli
TL;DR: A simple, nonrigorous, synthetic view of wavelet theory is presented for both review and tutorial purposes; the discussion includes nonstationary signal analysis, scale versus frequency, wavelet analysis and synthesis, scalograms, wavelet frames and orthonormal bases, the discrete-time case, and applications of wavelets in signal processing.
Abstract: A simple, nonrigorous, synthetic view of wavelet theory is presented for both review and tutorial purposes. The discussion includes nonstationary signal analysis, scale versus frequency, wavelet analysis and synthesis, scalograms, wavelet frames and orthonormal bases, the discrete-time case, and applications of wavelets in signal processing. The main definitions and properties of wavelet transforms are covered, and connections among the various fields where results have been developed are shown.

2,945 citations
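
Since the tutorial centers on the continuous wavelet transform and scalograms, a minimal sketch may help fix ideas. The Morlet wavelet, the chirp test signal, and all parameter values below are illustrative choices, not taken from the paper.

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Morlet wavelet dilated to the given scale (L2-style normalization)."""
    x = t / scale
    return np.exp(1j * w0 * x - x**2 / 2) / np.sqrt(scale)

def cwt(x, scales, dt):
    """CWT by direct correlation: one row of coefficients per scale."""
    n = len(x)
    t = (np.arange(n) - n // 2) * dt
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi = morlet(t, s)
        # correlating with the scaled wavelet gives coefficients at every shift
        out[i] = np.convolve(x, np.conj(psi)[::-1], mode="same") * dt
    return out

fs = 256.0
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * (5 * t + 10 * t**2))       # linear chirp
scales = np.geomspace(2, 64, 32) / fs               # fine to coarse
scalogram = np.abs(cwt(sig, scales, 1 / fs)) ** 2   # energy vs time and scale
```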

Journal ArticleDOI
Olivier Rioul, Pierre Duhamel
TL;DR: The goal of this work is to develop guidelines for implementing discrete and continuous wavelet transforms efficiently, and to compare the various algorithms obtained and give an idea of possible gains by providing operation counts.
Abstract: Several algorithms are reviewed for computing various types of wavelet transforms: the Mallat algorithm (1989), the 'à trous' algorithm, and their generalizations by Shensa. The goal of this work is to develop guidelines for implementing discrete and continuous wavelet transforms efficiently, and to compare the various algorithms obtained and give an idea of possible gains by providing operation counts. Most wavelet transform algorithms compute sampled coefficients of the continuous wavelet transform using the filter bank structure of the discrete wavelet transform. Although this general method is already efficient, it is shown that noticeable computational savings can be obtained by applying known fast convolution techniques, such as the FFT (fast Fourier transform), in a suitable manner. The modified algorithms are termed 'fast' because of their ability to reduce the computational complexity per computed coefficient from L to log L (within a small constant factor) for large filter lengths L. For short filters, smaller gains are obtained: 'fast running FIR (finite impulse response) filtering' techniques allow one to achieve typically 30% savings in computations.

639 citations
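
The paper's computational point lends itself to a sketch: one analysis level of the Mallat filter-bank DWT, with the filtering done either directly or by FFT-based overlap-add convolution (SciPy's oaconvolve). The random filter taps are placeholders for a long wavelet filter, not coefficients from the paper.

```python
import numpy as np
from scipy.signal import oaconvolve   # FFT-based overlap-add convolution

def dwt_level(x, h_lo, h_hi, conv):
    """One analysis level: filter both branches, then downsample by 2."""
    return conv(x, h_lo)[::2], conv(x, h_hi)[::2]

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
L = 64                                        # long filter: FFT methods pay off
h_lo = rng.standard_normal(L)                 # placeholder lowpass taps
h_hi = h_lo[::-1] * (-1.0) ** np.arange(L)    # highpass via alternate-sign flip

a_dir, d_dir = dwt_level(x, h_lo, h_hi, np.convolve)   # ~L operations per output
a_fft, d_fft = dwt_level(x, h_lo, h_hi, oaconvolve)    # ~log L per output for large L
assert np.allclose(a_dir, a_fft) and np.allclose(d_dir, d_fft)
```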

Journal ArticleDOI
TL;DR: The theory of a new general class of signal energy representations depending on time and scale is developed; specific choices recover known definitions and provide a continuous transition from Wigner-Ville to either spectrograms or scalograms (squared modulus of the WT).
Abstract: The theory of a new general class of signal energy representations depending on time and scale is developed. Time-scale analysis has been introduced recently as a powerful tool through linear representations called (continuous) wavelet transforms (WTs), a concept for which an exhaustive bilinear generalization is given. Although time-scale is presented as an alternative method to time-frequency, strong links relating the two are emphasized, thus combining both descriptions into a unified perspective. The authors provide a full characterization of the new class: the result is expressed as an affine smoothing of the Wigner-Ville distribution, on which interesting properties may be further imposed through proper choices of the smoothing function parameters. Not only do specific choices allow recovery of known definitions, but they also provide, via separable smoothing, a continuous transition from Wigner-Ville to either spectrograms or scalograms (squared modulus of the WT). This property makes time-scale representations a very flexible tool for nonstationary signal analysis.

326 citations
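
In standard notation, the paper's characterization says that every member C_x of the affine (time-scale) class is an affine smoothing of the Wigner-Ville distribution W_x; the scalogram special case below follows from Moyal's formula. A sketch of the formulas:

```latex
% Affine class: smoothing of the Wigner-Ville distribution by a 2-D kernel Pi.
\[
  C_x(t, a) = \iint W_x(s, \xi)\,
      \Pi\!\left(\frac{s - t}{a},\, a \xi\right) \mathrm{d}s \,\mathrm{d}\xi,
  \quad\text{with}\quad
  W_x(t, \nu) = \int x\!\left(t + \tfrac{\tau}{2}\right)
      x^{*}\!\left(t - \tfrac{\tau}{2}\right) e^{-2i\pi\nu\tau} \,\mathrm{d}\tau .
\]
% Choosing Pi = W_psi, the Wigner-Ville distribution of the analyzing
% wavelet, recovers the scalogram |WT_x(t, a)|^2.
```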

Journal ArticleDOI
TL;DR: In this paper, a polynomial description is used to study the existence and Hölder regularity of limit functions of binary subdivision schemes; the resulting estimates can easily be implemented on a computer, and the exact regularity order is accurately determined after a few iterations.
Abstract: Convergent subdivision schemes arise in several fields of applied mathematics (computer-aided geometric design, fractals, compactly supported wavelets) and signal processing (multiresolution decomposition, filter banks). In this paper, a polynomial description is used to study the existence and Hölder regularity of limit functions of binary subdivision schemes. Sharp regularity estimates are derived; they are optimal in most cases. They can easily be implemented on a computer, and simulations show that the exact regularity order is accurately determined after a few iterations. Connection is made to regularity estimates of solutions to two-scale difference equations as derived by Daubechies and Lagarias, and other known Fourier-based estimates. The former are often optimal, while the latter are optimal only for a subclass of symmetric limit functions.

228 citations
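
A small numerical experiment in the spirit of the paper: iterate a binary subdivision scheme from a delta sequence and read off a regularity estimate from the decay of first differences. The Chaikin corner-cutting mask is an illustrative example, and this crude first-difference estimate saturates at exponent 1; the paper's polynomial description yields sharper estimates.

```python
import numpy as np

def subdivide(p, mask):
    """One binary subdivision step: q[n] = sum_j mask[n - 2j] * p[j]."""
    mask = np.asarray(mask)
    q = np.zeros(2 * len(p) + len(mask) - 2)
    for j, pj in enumerate(p):
        q[2 * j : 2 * j + len(mask)] += pj * mask
    return q

mask = [0.25, 0.75, 0.75, 0.25]      # Chaikin corner cutting (C^1 limit)
p = np.zeros(7)
p[3] = 1.0                           # delta sequence -> basic limit function
rates = []
for _ in range(12):
    p = subdivide(p, mask)
    rates.append(np.max(np.abs(np.diff(p))))
# If max|diff(p_k)| ~ C 2^(-alpha k), ratios of successive maxima estimate alpha
alpha = np.log2(rates[-2] / rates[-1])
print(f"estimated Hölder exponent ~ {alpha:.2f}")   # ~1.0 for Chaikin
```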

Journal ArticleDOI
TL;DR: A new and brief proof of the EPI is developed through a mutual information inequality, which replaces Stam and Blachman's Fisher information inequality (FII) and an inequality for MMSE by Guo, Shamai, and Verdú used in earlier proofs.
Abstract: While most useful information-theoretic inequalities can be deduced from the basic properties of entropy or mutual information, up to now Shannon's entropy power inequality (EPI) is an exception: Existing information-theoretic proofs of the EPI hinge on representations of differential entropy using either Fisher information or minimum mean-square error (MMSE), which are derived from de Bruijn's identity. In this paper, we first present a unified view of these proofs, showing that they share two essential ingredients: 1) a data processing argument applied to a covariance-preserving linear transformation; 2) an integration over a path of a continuous Gaussian perturbation. Using these ingredients, we develop a new and brief proof of the EPI through a mutual information inequality, which replaces Stam and Blachman's Fisher information inequality (FII) and an inequality for MMSE by Guo, Shamai, and Verdú used in earlier proofs. The result has the advantage of being very simple in that it relies only on the basic properties of mutual information. These ideas are then generalized to various extended versions of the EPI: Zamir and Feder's generalized EPI for linear transformations of the random variables, Takano and Johnson's EPI for dependent variables, Liu and Viswanath's covariance-constrained EPI, and Costa's concavity inequality for the entropy power.

197 citations
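
For reference, the inequality being re-proved, stated for independent random vectors X and Y in R^n with differential entropy h:

```latex
% Shannon's entropy power inequality (equality iff X and Y are Gaussian
% with proportional covariance matrices):
\[
  N(X + Y) \;\ge\; N(X) + N(Y),
  \qquad
  N(X) = \frac{1}{2\pi e}\, e^{\frac{2}{n} h(X)} .
\]
```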


Cited by
Book
01 Jan 1998
TL;DR: A book-length tour of wavelet signal processing, from the Fourier and time-frequency worlds through frames, wavelet bases, and wavelet packet and local cosine bases, to approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Informat. Theory, vol. 38, Mar. 1992).

9,380 citations
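
The greedy loop at the heart of matching pursuit is short enough to sketch. The random dictionary and the synthetic two-atom signal below are illustrative stand-ins (the paper uses a Gabor dictionary), so the recovered indices are only what one would typically expect.

```python
import numpy as np

def matching_pursuit(x, D, n_iter=10):
    """Greedy decomposition of x over the unit-norm columns (atoms) of D."""
    residual = x.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_iter):
        corr = D.T @ residual             # inner products with every atom
        k = int(np.argmax(np.abs(corr)))  # atom best matching the residual
        atoms.append(k)
        coeffs.append(corr[k])
        residual = residual - corr[k] * D[:, k]   # peel off that component
    return atoms, coeffs, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 1024))
D /= np.linalg.norm(D, axis=0)               # normalize the atoms
x = 2.0 * D[:, 7] - 0.5 * D[:, 42]           # sparse synthetic signal
atoms, coeffs, r = matching_pursuit(x, D, n_iter=5)
print(atoms[:2], np.linalg.norm(r))          # typically picks atoms 7 and 42 first
```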

Journal ArticleDOI
J.M. Shapiro
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and requires no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression which is achieved via adaptive arithmetic coding.

5,559 citations
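
Of the four key concepts, successive-approximation quantization is the one that makes the stream embedded, and a simplified variant can be sketched in isolation. The coefficient values are illustrative; a real EZW codec would operate on a hierarchical wavelet decomposition and add zerotree symbols and adaptive arithmetic coding.

```python
import numpy as np

def successive_approximation(coeffs, n_passes=6):
    """Simplified embedded quantizer: reconstruction snapshot after each pass."""
    c = np.asarray(coeffs, dtype=float)
    recon = np.zeros_like(c)
    significant = np.zeros(c.shape, dtype=bool)
    T = 2.0 ** np.floor(np.log2(np.abs(c).max()))   # initial threshold
    history = []
    for _ in range(n_passes):
        old = significant.copy()
        # dominant pass: coefficients crossing the threshold become significant
        new = ~significant & (np.abs(c) >= T)
        recon[new] = np.sign(c[new]) * 1.5 * T      # midpoint of [T, 2T)
        significant |= new
        # subordinate pass: one refinement bit halves each known interval
        step = np.where(np.abs(c[old]) >= np.abs(recon[old]), 0.5, -0.5) * T
        recon[old] += np.sign(c[old]) * step
        history.append(recon.copy())
        T /= 2
    return history

for snapshot in successive_approximation([100.0, -37.0, 6.0, 1.5]):
    print(snapshot)   # error shrinks each pass; any prefix is a usable decode
```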

Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year old remains elusive. Why is computer vision such a challenging problem and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

4,146 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a state-of-the-art survey of artificial neural network (ANN) applications in forecasting and provide a synthesis of published research in this area, insights on ANN modeling issues, and future research directions.

3,680 citations