Author

Jan Biemond

Other affiliations: Katholieke Universiteit Leuven
Bio: Jan Biemond is an academic researcher from Delft University of Technology. The author has contributed to research in the topics Image restoration and Motion estimation, has an h-index of 34, and has co-authored 137 publications receiving 5,159 citations. Previous affiliations of Jan Biemond include Katholieke Universiteit Leuven.


Papers
Journal Article • DOI
01 May 1990
TL;DR: In this paper, the authors discuss the use of iterative restoration algorithms for the removal of linear blurs from photographic images that may also be degraded by pointwise nonlinearities such as film saturation, as well as by additive noise.
Abstract: The authors discuss the use of iterative restoration algorithms for the removal of linear blurs from photographic images that may also be degraded by pointwise nonlinearities such as film saturation, as well as by additive noise. Iterative algorithms allow for the incorporation of various types of prior knowledge about the class of feasible solutions, can be used to remove nonstationary blurs, and are fairly robust with respect to errors in the approximation of the blurring operator. Special attention is given to the problem of convergence of the algorithms, and classical solutions such as inverse filters, Wiener filters, and constrained least-squares filters are shown to be limiting solutions of variations of the iterations. Regularization is introduced as a means of preventing the excessive noise magnification that is typically associated with ill-conditioned inverse problems such as the deblurring problem, and it is shown that noise effects can be minimized by terminating the algorithms after a finite number of iterations. The role and choice of constraints on the class of feasible solutions are also discussed.
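As a rough illustration of the kind of iteration described above (not the authors' exact algorithm), the following minimal sketch runs a Landweber-type iteration with Tikhonov-style regularization and a fixed iteration count; the Gaussian blur, step size, regularization weight, and stopping rule are illustrative assumptions.

```python
# Minimal sketch: Landweber-type iteration with Tikhonov-style regularization,
# assuming a known, space-invariant Gaussian blur. Step size, regularization
# weight, and iteration count are illustrative choices, not the authors' values.
import numpy as np

def gaussian_psf(size=9, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def iterative_restore(blurred, psf, beta=1.0, alpha=0.01, n_iter=50):
    # In the frequency domain, the blur H and the Laplacian regularizer C
    # act as pointwise multiplications.
    shape = blurred.shape
    H = np.fft.fft2(psf, s=shape)
    lap = np.zeros(shape)
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    C2 = np.abs(np.fft.fft2(lap)) ** 2
    Y = np.fft.fft2(blurred)
    X = Y.copy()                           # start from the blurred observation
    for _ in range(n_iter):                # early termination limits noise blow-up
        X = X + beta * (np.conj(H) * (Y - H * X) - alpha * C2 * X)
    return np.real(np.fft.ifft2(X))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.zeros((64, 64))
    img[20:44, 20:44] = 1.0                                   # toy test image
    psf = gaussian_psf()
    H = np.fft.fft2(psf, s=img.shape)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
    blurred += 0.01 * rng.standard_normal(img.shape)          # additive noise
    restored = iterative_restore(blurred, psf)
    print("blurred error:", float(np.mean((blurred - img) ** 2)))
    print("restored error:", float(np.mean((restored - img) ** 2)))
```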

513 citations

Journal Article • DOI
TL;DR: A regularized iterative image restoration algorithm is proposed in which both ringing reduction methods are included by making use of the theory of projections onto convex sets and the concept of norms in a weighted Hilbert space.
Abstract: Linear space-invariant image restoration algorithms often introduce ringing effects near sharp intensity transitions. It is shown that these artifacts are attributable to the regularization of the ill-posed image restoration problem. Two possible methods to reduce the ringing effects in restored images are proposed. The first method incorporates deterministic a priori knowledge about the original image into the restoration algorithm. The second method locally regulates the severity of the noise magnification and the ringing phenomenon, depending on the edge information in the image. A regularized iterative image restoration algorithm is proposed in which both ringing reduction methods are included by making use of the theory of projections onto convex sets and the concept of norms in a weighted Hilbert space. Both the numerical performance and the visual evaluation of the results are improved by the use of ringing reduction.
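A schematic sketch of the second idea is given below: the regularization is weakened near detected edges so that flat regions are smoothed more strongly than sharp transitions. The gradient-based edge measure, the weight mapping, and the simple smoothing iteration are illustrative assumptions, not the paper's weighted-norm and convex-projection formulation.

```python
# Minimal sketch: spatially adaptive regularization, with the smoothing weight
# reduced near detected edges so that ringing and oversmoothing at sharp
# transitions are limited. Edge measure, weights, and filter are illustrative.
import numpy as np

def edge_weights(image, k=10.0):
    # Larger local gradients -> smaller weights -> weaker regularization near edges.
    gy, gx = np.gradient(image)
    grad = np.hypot(gx, gy)
    return 1.0 / (1.0 + k * grad / (grad.max() + 1e-12))

def adaptive_smooth(observed, weights, lam=0.5, step=0.1, n_iter=200):
    x = observed.copy()
    for _ in range(n_iter):
        # Discrete Laplacian of the current estimate (periodic boundaries).
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
        # Data term pulls toward the observation; the locally weighted
        # smoothness term acts strongly in flat areas, weakly near edges.
        x = x + step * ((observed - x) + lam * weights * lap)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = np.zeros((64, 64))
    img[:, 32:] = 1.0                       # a single sharp intensity transition
    noisy = img + 0.1 * rng.standard_normal(img.shape)
    w = edge_weights(noisy)
    print("weight at the edge vs. in a flat region:",
          float(w[32, 32]), float(w[32, 10]))
    restored = adaptive_smooth(noisy, w)
    print("error before/after:", float(np.mean((noisy - img) ** 2)),
          float(np.mean((restored - img) ** 2)))
```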

336 citations

Journal Article • DOI
TL;DR: A newly developed strategy for automatically segmenting movies into logical story units, designed to work on MPEG-DC sequences, where it is taken into account that at least a partial decoding is required for performing content-based operations on MPEG compressed video streams.
Abstract: We present a newly developed strategy for automatically segmenting movies into logical story units. A logical story unit can be understood as an approximation of a movie episode, which is a high-level temporal movie segment, characterized either by a single event (dialog, action scene, etc.) or by several events taking place in parallel. Since we consider a whole event and not a single shot to be the most natural retrieval unit for the movie category of video programs, the proposed segmentation is the crucial first step toward a concise and comprehensive content-based movie representation for browsing and retrieval purposes. The automation aspect is becoming increasingly important with the rising amount of information to be processed in video archives of the future. The segmentation process is designed to work on MPEG-DC sequences, where we have taken into account that at least a partial decoding is required for performing content-based operations on MPEG compressed video streams. The proposed technique allows for carrying out the segmentation procedure in a single pass through a video sequence.
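A toy sketch of linking-based story unit segmentation follows, under strong simplifying assumptions: each shot is reduced to a single feature vector (e.g., a color histogram computed from DC frames), shots are linked to visually similar shots within a look-ahead window, and a unit boundary is declared where no link crosses. The distance measure, threshold, and window size are illustrative and not the paper's exact procedure.

```python
# Toy sketch: linking-based story unit segmentation over per-shot feature
# vectors. Distance measure, threshold, and window size are illustrative.
import numpy as np

def story_unit_boundaries(shot_features, window=5, threshold=0.3):
    n = len(shot_features)
    farthest_link = list(range(n))     # farthest later shot each shot links to
    for i in range(n):
        for j in range(i + 1, min(i + 1 + window, n)):
            if np.abs(shot_features[i] - shot_features[j]).sum() < threshold:
                farthest_link[i] = j
    # Single pass through the shots: extend the current unit while some shot
    # inside it links beyond the current position.
    boundaries, end = [], farthest_link[0]
    for i in range(n):
        end = max(end, farthest_link[i])
        if i == end:                   # no link reaches past shot i
            boundaries.append(i)
            if i + 1 < n:
                end = farthest_link[i + 1]
    return boundaries                  # index of the last shot in each unit

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Two synthetic "episodes": shots alternate between recurring settings.
    a, b, c = (rng.random(8) for _ in range(3))
    shots = [v / v.sum() for v in (a, b, a, b, a, c, c, c)]
    print("story unit boundaries after shots:", story_unit_boundaries(shots))
```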

305 citations

Journal Article • DOI
TL;DR: The expectation-maximization algorithm is proposed to optimize the nonlinear likelihood function in an efficient way and low-order parametric image and blur models are incorporated into the identification method.
Abstract: A maximum-likelihood approach to the blur identification problem is presented. The expectation-maximization algorithm is proposed to optimize the nonlinear likelihood function in an efficient way. In order to improve the performance of the identification algorithm, low-order parametric image and blur models are incorporated into the identification method. The resulting iterative technique simultaneously identifies and restores noisy blurred images.
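A crude sketch related to the Gaussian modeling behind such likelihood-based identification: with zero-mean Gaussian image and noise models, the observed power spectrum is |H|² S_x plus the noise variance, so the blur magnitude response can be read off per frequency when the image spectrum and noise level are assumed known. The paper instead uses low-order parametric image and blur models and an EM iteration, which are omitted here.

```python
# Crude sketch: per-frequency blur magnitude estimate from the periodogram of
# the blurred image, assuming the image power spectrum S_x and the noise
# variance are known. This is not the paper's EM procedure.
import numpy as np

def blur_magnitude_estimate(blurred, image_spectrum, noise_var):
    periodogram = np.abs(np.fft.fft2(blurred)) ** 2 / blurred.size
    h2 = (periodogram - noise_var) / image_spectrum
    return np.sqrt(np.maximum(h2, 0.0))       # |H(w)| estimate per frequency

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 128
    # Synthetic image with a known (flat) power spectrum: white Gaussian texture.
    img = rng.standard_normal((n, n))
    S_x = np.ones((n, n))                     # known image power spectrum
    psf = np.zeros((n, n))
    psf[:5, :5] = 1.0 / 25.0                  # "unknown" 5x5 uniform blur
    H_true = np.fft.fft2(psf)
    noise_var = 1e-3
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H_true))
    blurred += np.sqrt(noise_var) * rng.standard_normal((n, n))
    H_est = blur_magnitude_estimate(blurred, S_x, noise_var)
    print("mean absolute error of |H| estimate:",
          float(np.mean(np.abs(H_est - np.abs(H_true)))))
```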

264 citations

01 Jan 1991
TL;DR: In this paper, the blur identification problem is formulated as a constrained maximum-likelihood problem and the constraints directly incorporate a priori known relations between the blur coefficients and image model coefficients, such as symmetry properties, into the identification procedure.
Abstract: The blur identification problem is formulated as a constrained maximum-likelihood problem. The constraints directly incorporate a priori known relations between the blur (and image model) coefficients, such as symmetry properties, into the identification procedure. The resulting nonlinear minimization problem is solved iteratively, yielding a very general identification algorithm. An example of blur identification using synthetic data is given.
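A small sketch of the kind of constraint mentioned above: a point-symmetric blur can be described by a reduced set of free coefficients, and an unconstrained estimate can be projected back onto the symmetric set by averaging mirrored coefficients. The 3x3 support and the specific parameterization are illustrative choices, not the paper's constrained formulation.

```python
# Small sketch: enforcing a symmetry constraint on a point-spread function,
# either by construction (reduced parameter set) or by projection.
import numpy as np

def symmetric_psf(center, edge, corner):
    # Three free coefficients fully determine a symmetric 3x3 blur;
    # normalization enforces unit gain for constant image regions.
    psf = np.array([[corner, edge, corner],
                    [edge, center, edge],
                    [corner, edge, corner]], dtype=float)
    return psf / psf.sum()

def project_to_point_symmetric(psf):
    # Average each coefficient with its 180-degree rotated counterpart (the
    # least-squares projection onto point-symmetric arrays), then renormalize.
    sym = 0.5 * (psf + psf[::-1, ::-1])
    return sym / sym.sum()

if __name__ == "__main__":
    unconstrained = np.array([[0.02, 0.11, 0.05],
                              [0.10, 0.40, 0.12],
                              [0.04, 0.09, 0.07]])
    print(project_to_point_symmetric(unconstrained))
    print(symmetric_psf(0.4, 0.1, 0.05))
```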

229 citations


Cited by
Journal Article • DOI
TL;DR: A scheme for image compression that takes into account psychovisual features in both the space and frequency domains is proposed, and it is shown that the wavelet transform is particularly well adapted to progressive transmission.
Abstract: A scheme for image compression that takes into account psychovisual features in both the space and frequency domains is proposed. The method involves two steps. First, a wavelet transform is used to obtain a set of biorthogonal subclasses of images: the original image is decomposed at different scales using a pyramidal algorithm architecture. The decomposition is carried out along the vertical and horizontal directions and keeps constant the number of pixels required to describe the image. Second, according to Shannon's rate-distortion theory, the wavelet coefficients are vector quantized using a multiresolution codebook. To encode the wavelet coefficients, a noise-shaping bit allocation procedure is proposed which assumes that details at high resolution are less visible to the human eye. To allow the receiver to recognize a picture as quickly as possible at minimum cost, a progressive transmission scheme is presented. It is shown that the wavelet transform is particularly well adapted to progressive transmission.
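A toy sketch of the pyramidal decomposition and progressive-transmission idea, using a plain Haar transform in place of the paper's biorthogonal wavelets and omitting the vector quantization and bit allocation steps; sending the coarse approximation first and the detail subbands later lets the receiver refine the picture progressively.

```python
# Toy sketch: 3-level Haar pyramid and progressive reconstruction. The Haar
# filters stand in for the paper's biorthogonal wavelets; quantization and bit
# allocation are omitted.
import numpy as np

def haar_decompose(img, levels=3):
    subbands, approx = [], img.astype(float)
    for _ in range(levels):
        a = 0.5 * (approx[:, ::2] + approx[:, 1::2])   # horizontal average
        d = 0.5 * (approx[:, ::2] - approx[:, 1::2])   # horizontal detail
        ll = 0.5 * (a[::2, :] + a[1::2, :]); lh = 0.5 * (a[::2, :] - a[1::2, :])
        hl = 0.5 * (d[::2, :] + d[1::2, :]); hh = 0.5 * (d[::2, :] - d[1::2, :])
        subbands.append((lh, hl, hh))
        approx = ll
    return approx, subbands

def haar_reconstruct(approx, subbands):
    for lh, hl, hh in reversed(subbands):
        a = np.zeros((approx.shape[0] * 2, approx.shape[1]))
        d = np.zeros_like(a)
        a[::2, :], a[1::2, :] = approx + lh, approx - lh
        d[::2, :], d[1::2, :] = hl + hh, hl - hh
        img = np.zeros((a.shape[0], a.shape[1] * 2))
        img[:, ::2], img[:, 1::2] = a + d, a - d
        approx = img
    return approx

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    image = rng.random((64, 64))
    approx, details = haar_decompose(image)
    # Progressive idea: coarse approximation first, detail subbands later.
    coarse_only = haar_reconstruct(approx, [(np.zeros_like(lh),) * 3
                                            for lh, _, _ in details])
    full = haar_reconstruct(approx, details)
    print("coarse-only error:", float(np.abs(coarse_only - image).mean()))
    print("full reconstruction error:", float(np.abs(full - image).mean()))
```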

3,925 citations

Journal Article • DOI
TL;DR: An adaptive, data-driven threshold for image denoising via wavelet soft-thresholding is derived in a Bayesian framework; the prior used on the wavelet coefficients is the generalized Gaussian distribution widely used in image processing applications.
Abstract: The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image assumed known. It also outperforms SureShrink (Donoho and Johnstone 1994, 1995; Donoho 1995) most of the time. The second part of the paper attempts to further validate claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of a coder designed with the intention of denoising, and thus achieve simultaneous denoising and compression. Specifically, the zero-zone in the quantization step of compression is analogous to the threshold value in the thresholding function. The remaining coder design parameters are chosen based on a criterion derived from Rissanen's minimum description length (MDL) principle. Experiments show that this compression method does indeed remove noise significantly, especially for large noise power. However, it introduces quantization noise and should be used only when bit rate is a concern in addition to denoising.
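A small sketch of the BayesShrink rule described above: for a detail subband the threshold is sigma² / sigma_X, where sigma is the noise standard deviation (in practice estimated robustly from the finest diagonal subband) and sigma_X is the estimated signal standard deviation. The synthetic "subband" and the known noise level in the demo are illustrative assumptions.

```python
# Small sketch of BayesShrink soft-thresholding on one detail subband.
import numpy as np

def bayes_shrink_threshold(subband, sigma_noise):
    # sigma_X^2 = max(var(Y) - sigma^2, 0); if the subband looks like pure
    # noise, use the largest coefficient so that everything is set to zero.
    sigma_x = np.sqrt(max(np.mean(subband ** 2) - sigma_noise ** 2, 0.0))
    if sigma_x == 0.0:
        return float(np.max(np.abs(subband)))
    return sigma_noise ** 2 / sigma_x

def soft_threshold(subband, t):
    return np.sign(subband) * np.maximum(np.abs(subband) - t, 0.0)

def estimate_noise_sigma(diagonal_subband):
    # Robust noise estimate commonly used with wavelet denoising.
    return float(np.median(np.abs(diagonal_subband)) / 0.6745)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    clean = np.zeros((32, 32))
    clean[::4, ::4] = rng.laplace(scale=2.0, size=(8, 8))  # sparse "signal"
    sigma = 0.5                # noise level, assumed known in this toy example
    noisy = clean + sigma * rng.standard_normal(clean.shape)
    t = bayes_shrink_threshold(noisy, sigma)
    denoised = soft_threshold(noisy, t)
    print("threshold:", t)
    print("MSE before/after:", float(np.mean((noisy - clean) ** 2)),
          float(np.mean((denoised - clean) ** 2)))
```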

2,917 citations

Book
24 Oct 2001
TL;DR: Digital Watermarking covers the crucial research findings in the field: it explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied.
Abstract: Digital watermarking is a key ingredient to copyright protection. It provides a solution to illegal copying of digital material and has many other useful applications such as broadcast monitoring and the recording of electronic transactions. Now, for the first time, there is a book that focuses exclusively on this exciting technology. Digital Watermarking covers the crucial research findings in the field: it explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied. As a result, additional groundwork is laid for future developments in this field, helping the reader understand and anticipate new approaches and applications.

2,849 citations

Book
01 Mar 1995
TL;DR: Wavelets and Subband Coding offered a unified view of the exciting field of wavelets and their discrete-time cousins, filter banks, or subband coding, and developed the theory in both continuous and discrete time.
Abstract: First published in 1995, Wavelets and Subband Coding offered a unified view of the exciting field of wavelets and their discrete-time cousins, filter banks, or subband coding. The book developed the theory in both continuous and discrete time, and presented important applications. During the past decade, it filled a useful need in explaining a new view of signal processing based on flexible time-frequency analysis and its applications. Since 2007, the authors now retain the copyright and allow open access to the book.

2,793 citations

Journal Article • DOI
T.K. Moon
TL;DR: The EM (expectation-maximization) algorithm is ideally suited to problems of parameter estimation, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation.
Abstract: A common task in signal processing is the estimation of the parameters of a probability distribution function. Perhaps the most frequently encountered estimation problem is the estimation of the mean of a signal in noise. In many parameter estimation problems the situation is more complicated because direct access to the data necessary to estimate the parameters is impossible, or some of the data are missing. Such difficulties arise when an outcome is a result of an accumulation of simpler outcomes, or when outcomes are clumped together, for example, in a binning or histogram operation. There may also be data dropouts or clustering in such a way that the number of underlying data points is unknown (censoring and/or truncation). The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation. The EM algorithm is presented at a level suitable for signal processing practitioners who have had some exposure to estimation theory.
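As a concrete instance of the many-to-one mapping mentioned above, the following minimal sketch runs EM for a two-component Gaussian mixture with known unit variances, where the hidden data are the component labels; the initial guesses and iteration count are arbitrary choices.

```python
# Minimal EM example: two-component Gaussian mixture with known unit variances.
# E-step computes soft component assignments; M-step updates means and the
# mixing weight by weighted maximum likelihood.
import numpy as np

def em_two_gaussians(x, n_iter=50):
    mu = np.array([x.min(), x.max()])      # crude initial means
    pi = 0.5                               # initial weight of component 1
    for _ in range(n_iter):
        # E-step: posterior probability that each sample came from component 1.
        p1 = pi * np.exp(-0.5 * (x - mu[1]) ** 2)
        p0 = (1 - pi) * np.exp(-0.5 * (x - mu[0]) ** 2)
        r = p1 / (p0 + p1)
        # M-step: maximum-likelihood updates given the soft assignments.
        mu[0] = np.sum((1 - r) * x) / np.sum(1 - r)
        mu[1] = np.sum(r * x) / np.sum(r)
        pi = np.mean(r)
    return mu, pi

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    labels = rng.random(2000) < 0.3
    data = np.where(labels, 4.0 + rng.standard_normal(2000),
                    rng.standard_normal(2000))
    means, weight = em_two_gaussians(data)
    print("estimated means:", means, "estimated weight of component 1:", weight)
```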

2,573 citations