Author

Michael Unser

Bio: Michael Unser is an academic researcher from École Polytechnique Fédérale de Lausanne. The author has contributed to research in topics: Wavelet & Wavelet transform. The author has an h-index of 102 and has co-authored 862 publications receiving 49,220 citations. Previous affiliations of Michael Unser include National Institutes of Health & French Institute of Health and Medical Research.


Papers
Journal ArticleDOI
TL;DR: An automatic subpixel registration algorithm that minimizes the mean square intensity difference between a reference and a test data set, which can be either images (two-dimensional) or volumes (three-dimensional).
Abstract: We present an automatic subpixel registration algorithm that minimizes the mean square intensity difference between a reference and a test data set, which can be either images (two-dimensional) or volumes (three-dimensional). It uses an explicit spline representation of the images in conjunction with spline processing, and is based on a coarse-to-fine iterative strategy (pyramid approach). The minimization is performed according to a new variation (ML*) of the Marquardt-Levenberg algorithm for nonlinear least-squares optimization. The geometric deformation model is a global three-dimensional (3-D) affine transformation that can be optionally restricted to rigid-body motion (rotation and translation), combined with isometric scaling. It also includes an optional adjustment of image contrast differences. We obtain excellent results for the registration of intramodality positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) data. We conclude that the multiresolution refinement strategy is more robust than a comparable single-stage method, being less likely to be trapped in a false local optimum. In addition, our improved version of the Marquardt-Levenberg algorithm is faster.
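
The coarse-to-fine strategy described above is straightforward to sketch. The following is a simplified 2-D stand-in, not the authors' ML* optimizer: it substitutes SciPy's generic Levenberg-Marquardt least-squares routine, restricts the model to rigid-body motion, and the names `warp` and `register` are illustrative assumptions.

```python
# Minimal sketch of coarse-to-fine, intensity-based rigid registration (2-D).
# Not the paper's ML* variant: scipy's generic Levenberg-Marquardt routine is
# used as a stand-in optimizer; cubic-spline interpolation handles resampling.
import numpy as np
from scipy import ndimage, optimize

def warp(img, angle, tx, ty):
    """Rigid-body warp (rotation about the center + translation)."""
    c, s = np.cos(angle), np.sin(angle)
    A = np.array([[c, -s], [s, c]])
    center = (np.asarray(img.shape) - 1) / 2.0
    offset = center - A @ center + np.array([ty, tx])
    return ndimage.affine_transform(img, A, offset=offset, order=3)

def register(ref, test, levels=3):
    """Coarse-to-fine minimization of the mean-square intensity difference."""
    params = np.zeros(3)  # (angle, tx, ty), refined from coarse to fine
    for lev in reversed(range(levels)):
        scale = 0.5 ** lev
        r = ndimage.zoom(ref, scale, order=3)
        t = ndimage.zoom(test, scale, order=3)
        # translations live on the coarse grid, so rescale them per level
        p0 = np.array([params[0], params[1] * scale, params[2] * scale])
        fit = optimize.least_squares(
            lambda p: (warp(t, *p) - r).ravel(), p0, method="lm")
        params = np.array([fit.x[0], fit.x[1] / scale, fit.x[2] / scale])
    return params  # subpixel estimates of (angle, tx, ty)
```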

2,801 citations

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems, which combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure.
Abstract: In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise non-linearity) when the normal operator ( $H^{*}H$ , where $H^{*}$ is the adjoint of the forward imaging operator, $H$ ) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a $512\times 512$ image on the GPU.
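
A minimal sketch of the two-step recipe in this abstract, direct inversion followed by a residual CNN, might look as follows. The filtered back-projection uses scikit-image; the tiny network is a hand-rolled stand-in for the paper's multiresolution architecture, the names `ResidualDenoiser` and `reconstruct` are illustrative, and the model is assumed to have been trained already.

```python
# Sketch: direct inversion (FBP) followed by a residual CNN that removes the
# streak artifacts of sparse-view data. The network below is a toy stand-in
# for the multiresolution architecture used in the paper.
import torch
import torch.nn as nn
from skimage.transform import iradon  # filtered back-projection

class ResidualDenoiser(nn.Module):
    """Residual learning: the CNN predicts a correction added to its input."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

def reconstruct(sinogram, theta, model):
    # Step 1: direct inversion encapsulates the physical model of the system.
    fbp = iradon(sinogram, theta=theta, filter_name="ramp")
    x = torch.from_numpy(fbp).float()[None, None]  # -> (N, C, H, W)
    # Step 2: the trained CNN removes the artifacts of the ill-posed inversion.
    with torch.no_grad():
        return model(x)[0, 0].numpy()
```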

1,757 citations

Journal ArticleDOI
TL;DR: The article provides arguments in favor of an alternative approach that uses splines, which is equally justifiable on a theoretical basis, and which offers many practical advantages, and brings out the connection with the multiresolution theory of the wavelet transform.
Abstract: The article provides arguments in favor of an alternative approach that uses splines in place of the traditional band-limited formulation, which is equally justifiable on a theoretical basis and offers many practical advantages. To reassure the reader who may be afraid to enter new territory, it is emphasized that nothing is lost, because the traditional theory is retained as a particular case (i.e., a spline of infinite degree). The basic computational tools are also familiar to a signal processing audience (filters and recursive algorithms), even though their use in the present context is less conventional. The article also brings out the connection with the multiresolution theory of the wavelet transform. The article attempts to fulfil three goals. The first is to provide a tutorial on splines that is geared to a signal processing audience. The second is to gather all their important properties and provide an overview of the mathematical and computational tools available; i.e., a road map for the practitioner with references to the appropriate literature. The third is to give a review of the primary applications of splines in signal and image processing.
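
To make the "filters and recursive algorithms" concrete, here is a sketch of the classic recursive prefilter that turns signal samples into cubic B-spline coefficients (pole z1 = sqrt(3) - 2, gain 6). The boundary initialization is a simplified constant extension chosen for brevity; a faithful implementation would use mirror boundary conditions.

```python
# Sketch of the recursive cubic B-spline prefilter: one causal and one
# anti-causal first-order recursion. Boundary handling is simplified
# (constant extension) for illustration.
import numpy as np

def cubic_bspline_coeffs(s):
    z1 = np.sqrt(3.0) - 2.0               # pole of the cubic prefilter (~ -0.268)
    c = np.asarray(s, dtype=float) * 6.0  # overall gain for the cubic spline
    n = len(c)
    cp = np.empty(n)                      # causal pass: cp[k] = c[k] + z1*cp[k-1]
    cp[0] = c[0] / (1.0 - z1)             # constant-extension initialization
    for k in range(1, n):
        cp[k] = c[k] + z1 * cp[k - 1]
    cm = np.empty(n)                      # anti-causal pass
    cm[-1] = (z1 / (z1 * z1 - 1.0)) * (cp[-1] + z1 * cp[-2])
    for k in range(n - 2, -1, -1):
        cm[k] = z1 * (cm[k + 1] - cp[k])
    return cm                             # B-spline coefficients
```

Applying the direct filter (c[k-1] + 4*c[k] + c[k+1]) / 6 to the output recovers the original samples away from the boundaries, which is a quick sanity check.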

1,732 citations

Journal ArticleDOI
TL;DR: In this paper, a new approach to the characterization of texture properties at multiple scales using the wavelet transform is described, which uses an overcomplete wavelet decomposition, which yields a description that is translation invariant.
Abstract: This paper describes a new approach to the characterization of texture properties at multiple scales using the wavelet transform. The analysis uses an overcomplete wavelet decomposition, which yields a description that is translation invariant. It is shown that this representation constitutes a tight frame of $l_2$ and that it has a fast iterative algorithm. A texture is characterized by a set of channel variances estimated at the output of the corresponding filter bank. Classification experiments with 12 Brodatz textures indicate that the discrete wavelet frame (DWF) approach is superior to a standard (critically sampled) wavelet transform feature extraction. These results also suggest that this approach should perform better than most traditional single-resolution techniques (co-occurrences, local linear transform, and the like). A detailed comparison of the classification performance of various orthogonal and biorthogonal wavelet transforms is also provided. Finally, the DWF feature extraction technique is incorporated into a simple multicomponent texture segmentation algorithm, and some illustrative examples are presented.
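
As a rough illustration of the feature-extraction step, the sketch below estimates channel variances from an undecimated (stationary) wavelet decomposition, using PyWavelets' swt2 as a stand-in for the paper's discrete wavelet frame filter bank; the wavelet choice and the function name are assumptions.

```python
# Sketch of DWF-style texture features: the variance of each detail channel
# of an undecimated wavelet decomposition. Note that pywt.swt2 requires the
# image sides to be divisible by 2**levels.
import numpy as np
import pywt

def dwf_features(img, wavelet="haar", levels=3):
    img = np.asarray(img, dtype=float)
    img -= img.mean()  # texture is characterized by energy, not mean brightness
    coeffs = pywt.swt2(img, wavelet, level=levels)  # translation invariant
    feats = [band.var() for _, details in coeffs for band in details]
    return np.array(feats)  # one variance per (orientation, scale) channel
```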

1,467 citations

Journal ArticleDOI
01 Apr 2000
TL;DR: The standard sampling paradigm is extended for a representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets, and variations of sampling that can be understood from the same unifying perspective are reviewed.
Abstract: This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefitted from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon's sampling procedure as an orthogonal projection onto the subspace of band-limited functions. We then extend the standard sampling paradigm for a representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler (and possibly more realistic) interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal low-pass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary; e.g., nonbandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.
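
The Hilbert-space reading of Shannon sampling is easy to demonstrate on a discrete, periodic signal: the orthogonal projection onto the band-limited subspace amounts to zeroing the Fourier coefficients above a cutoff. The sketch below is illustrative only; the function name and the keep_fraction parameter are assumptions.

```python
# Orthogonal projection onto band-limited functions (periodic, discrete model):
# keep the low-frequency Fourier coefficients, zero out the rest. Sampling the
# projection rather than the raw signal is the anti-aliasing-prefilter view.
import numpy as np

def project_bandlimited(x, keep_fraction=0.5):
    X = np.fft.fft(x)
    n = len(x)
    cutoff = int(keep_fraction * n / 2)
    mask = np.zeros(n, dtype=bool)
    mask[:cutoff + 1] = True      # DC and positive frequencies up to the cutoff
    if cutoff:
        mask[-cutoff:] = True     # matching negative frequencies
    return np.fft.ifft(np.where(mask, X, 0.0)).real
```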

1,461 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
01 Jan 1998
TL;DR: A book-length tour of wavelet signal processing, from an introduction to a transient world through wavelet bases, packets, and local cosine bases to approximation, estimation, and transform coding.
Abstract: Contents: Introduction to a Transient World; Fourier Kingdom; Discrete Revolution; Time Meets Frequency; Frames; Wavelet Zoom; Wavelet Bases; Wavelet Packet and Local Cosine Bases; An Approximation Tour; Estimations are Approximations; Transform Coding; Appendix A: Mathematical Complements; Appendix B: Software Toolboxes.

17,693 citations

Journal ArticleDOI
TL;DR: A new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically.
Abstract: We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.
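
The accelerated scheme described here is compact enough to state in full. The sketch below implements FISTA for the l1-regularized least-squares problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, with a dense-matrix Lipschitz estimate that is for illustration only (a real deblurring setup would use operator norms or backtracking).

```python
# FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1: an ISTA (proximal-gradient)
# step plus the momentum extrapolation that yields the faster global rate.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)  # ISTA step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```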

11,413 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented, along with chapters on neural networks, kernel methods, graphical models, latent variables, sampling methods, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: DARTEL has been applied to intersubject registration of 471 whole brain images, and the resulting deformations were evaluated in terms of how well they encode the shape information necessary to separate male and female subjects and to predict the ages of the subjects.

6,999 citations