Author

D.E. Boekee

Bio: D.E. Boekee is an academic researcher at Delft University of Technology. The author has contributed to research on topics including sub-band coding and image restoration, has an h-index of 16, and has co-authored 31 publications receiving 1535 citations.

Papers
Journal Article•DOI•
TL;DR: A regularized iterative image restoration algorithm is proposed in which both ringing reduction methods are included by making use of the theory of the projections onto convex sets and the concept of norms in a weighted Hilbert space.
Abstract: Linear space-invariant image restoration algorithms often introduce ringing effects near sharp intensity transitions. It is shown that these artifacts are attributable to the regularization of the ill-posed image restoration problem. Two possible methods to reduce the ringing effects in restored images are proposed. The first method incorporates deterministic a priori knowledge about the original image into the restoration algorithm. The second method locally regulates the severity of the noise magnification and the ringing phenomenon, depending on the edge information in the image. A regularized iterative image restoration algorithm is proposed in which both ringing reduction methods are included by making use of the theory of the projections onto convex sets and the concept of norms in a weighted Hilbert space. Both the numerical performance and the visual evaluation of the results are improved by the use of ringing reduction.

336 citations
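The regularized iteration with a convex-set constraint described in the abstract above can be sketched in one dimension. This is a minimal illustration only, assuming a known symmetric blur and a plain Tikhonov penalty with a nonnegativity projection, in place of the paper's spatially weighted norms and edge-adaptive regularization:

```python
import numpy as np

def restore(y, h, alpha=0.01, beta=1.0, iters=200):
    """Landweber-type regularized iteration with a nonnegativity
    projection (a simple convex-set constraint), 1-D for clarity."""
    H = lambda x: np.convolve(x, h, mode="same")         # blur operator
    Ht = lambda x: np.convolve(x, h[::-1], mode="same")  # its adjoint
    x = y.copy()
    for _ in range(iters):
        # gradient step on ||y - Hx||^2 + alpha * ||x||^2
        x = x + beta * (Ht(y - H(x)) - alpha * x)
        x = np.maximum(x, 0.0)  # projection onto the nonnegative set
    return x

# toy example: blur a step edge, then restore it
h = np.ones(5) / 5.0
orig = np.concatenate([np.zeros(20), np.ones(20)])
blurred = np.convolve(orig, h, mode="same")
restored = restore(blurred, h)
```

The nonnegativity projection stands in for the paper's projection-onto-convex-sets machinery; the real algorithm uses image-dependent weights to suppress ringing locally near edges.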

Journal Article•DOI•
TL;DR: The expectation-maximization algorithm is proposed to optimize the nonlinear likelihood function in an efficient way and low-order parametric image and blur models are incorporated into the identification method.
Abstract: A maximum-likelihood approach to the blur identification problem is presented. The expectation-maximization algorithm is proposed to optimize the nonlinear likelihood function in an efficient way. In order to improve the performance of the identification algorithm, low-order parametric image and blur models are incorporated into the identification method. The resulting iterative technique simultaneously identifies and restores noisy blurred images.

264 citations

Journal Article•DOI•
TL;DR: A novel two-dimensional subband coding technique is presented that can be applied to images as well as speech and has a performance that is comparable to that of more complex coding techniques.
Abstract: A novel two-dimensional subband coding technique is presented that can be applied to images as well as speech. A frequency-band decomposition of the image is carried out by means of 2D separable quadrature mirror filters, which split the image spectrum into 16 equal-rate subbands. These 16 parallel subband signals are regarded as a 16-dimensional vector source and coded as such using vector quantization. In the asymptotic case of high bit rates, a theoretical analysis yields a lower bound to the gain attainable by choosing this approach over scalar quantization of each subband with an optimal bit allocation. It is shown that vector quantization in this scheme has several advantages over coding the subbands separately. Experimental results are given, and it is shown that the scheme has a performance comparable to that of more complex coding techniques.

196 citations
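The separable filter-bank decomposition underlying the scheme above can be illustrated with a single analysis/synthesis stage. This is a sketch using Haar filters and four subbands in place of the paper's 16-band QMF bank; it only demonstrates the perfect-reconstruction split, not the vector quantization step:

```python
import numpy as np

def analyze_rows(x):
    """Haar analysis along rows: decimated low/high halves."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 1::2] - x[:, 0::2]) / np.sqrt(2)
    return lo, hi

def split4(img):
    """One separable stage: four equal-size subbands LL, LH, HL, HH."""
    lo, hi = analyze_rows(img)                       # along rows
    ll, lh = analyze_rows(lo.T)                      # along columns
    hl, hh = analyze_rows(hi.T)
    return ll.T, lh.T, hl.T, hh.T

def merge_rows(lo, hi):
    """Inverse of analyze_rows (synthesis)."""
    x = np.empty((lo.shape[0], 2 * lo.shape[1]))
    x[:, 0::2] = (lo - hi) / np.sqrt(2)
    x[:, 1::2] = (lo + hi) / np.sqrt(2)
    return x

def merge4(ll, lh, hl, hh):
    lo = merge_rows(ll.T, lh.T).T
    hi = merge_rows(hl.T, hh.T).T
    return merge_rows(lo, hi)

# the analysis/synthesis pair reconstructs the image exactly
img = np.arange(64, dtype=float).reshape(8, 8)
bands = split4(img)
rec = merge4(*bands)
```

In the paper's scheme, a deeper separable QMF decomposition yields 16 subbands whose co-located samples form the 16-dimensional vectors fed to the vector quantizer.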

Proceedings Article•DOI•
11 Apr 1988
TL;DR: An optimal bit allocation algorithm is presented that is suitable for all practical situations and can be applied to any practical coding scheme (such as a subband coder) that needs dynamic bit allocation.
Abstract: An optimal bit allocation algorithm is presented that is suitable for all practical situations. Each source to be coded is assumed to have its own set of admissible quantizers (which can be either scalar or vector quantizers) which do not need to have integer bit rates. The distortion versus rate characteristic of each quantizer set may have an arbitrary shape. The algorithm is very simple in structure and can be applied to any practical coding scheme (such as a subband coder) that needs dynamic bit allocation.

161 citations
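The allocation idea can be sketched as a greedy marginal-return search over each source's admissible quantizer set. This is a simplified illustration that assumes each set's rate-distortion points are sorted by rate and already lie on their convex hull; the algorithm in the paper handles arbitrarily shaped characteristics:

```python
def allocate(rd_sets, budget):
    """Greedy marginal-return bit allocation.

    rd_sets: one list per source of (rate, distortion) pairs, sorted by
    strictly increasing rate with decreasing distortion. Returns the
    chosen quantizer index for each source, within the rate budget.
    """
    choice = [0] * len(rd_sets)
    total_rate = sum(pts[0][0] for pts in rd_sets)
    while True:
        best_i, best_slope = None, 0.0
        for i, pts in enumerate(rd_sets):
            j = choice[i]
            if j + 1 < len(pts):
                dr = pts[j + 1][0] - pts[j][0]       # extra bits
                dd = pts[j][1] - pts[j + 1][1]       # distortion saved
                # pick the steepest distortion-per-bit return that fits
                if total_rate + dr <= budget and dd / dr > best_slope:
                    best_i, best_slope = i, dd / dr
        if best_i is None:
            return choice
        j = choice[best_i]
        total_rate += rd_sets[best_i][j + 1][0] - rd_sets[best_i][j][0]
        choice[best_i] += 1

# two hypothetical sources with non-integer-friendly R-D points
src_a = [(0, 10.0), (1, 4.0), (2, 2.0)]
src_b = [(0, 8.0), (1, 6.0), (2, 5.5)]
choice = allocate([src_a, src_b], budget=3)
```

With a budget of 3 bits, the steep early gains of the first source are taken before the flatter second source receives any bits.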

Journal Article•DOI•
TL;DR: The R-norm information measure includes Shannon's information measure as a special case and its properties, as well as an axiomatic characterization, are given.
Abstract: The R-norm information measure is discussed and its properties, as well as an axiomatic characterization, are given. The measure is extended to conditional and joint measures. Applications to coding and hypothesis testing are given. The R-norm information measure includes Shannon's information measure as a special case.

111 citations
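For a distribution P = (p_1, ..., p_n), the R-norm information measure takes the form H_R(P) = R/(R-1) * (1 - (sum p_i^R)^(1/R)) for R > 0, R != 1, and a quick numerical check illustrates the special case the abstract mentions: as R approaches 1 it approaches Shannon's measure (in nats). A minimal sketch:

```python
import numpy as np

def r_norm_entropy(p, R):
    """R-norm information: H_R(P) = R/(R-1) * (1 - (sum p_i^R)^(1/R)),
    defined for R > 0, R != 1."""
    p = np.asarray(p, dtype=float)
    return R / (R - 1.0) * (1.0 - np.sum(p ** R) ** (1.0 / R))

def shannon(p):
    """Shannon entropy in nats."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p = [0.5, 0.25, 0.25]
gap = abs(r_norm_entropy(p, 1.0001) - shannon(p))  # small near R = 1
```

The limit follows by expanding (sum p_i^R)^(1/R) around R = 1, whose first-order term is exactly 1 - (R - 1) * H(P).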


Cited by
Book•
01 Jan 1996
TL;DR: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access of information and includes more than 200 algorithms and protocols.
Abstract: From the Publisher: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access to information and includes more than 200 algorithms and protocols; more than 200 tables and figures; more than 1,000 numbered definitions, facts, examples, notes, and remarks; and over 1,250 significant references, including brief comments on each paper.

13,597 citations

Journal Article•DOI•
TL;DR: A scheme for image compression that takes into account psychovisual features both in the space and frequency domains is proposed and it is shown that the wavelet transform is particularly well adapted to progressive transmission.
Abstract: A scheme for image compression that takes into account psychovisual features both in the space and frequency domains is proposed. This method involves two steps. First, a wavelet transform is used in order to obtain a set of biorthogonal subclasses of images: the original image is decomposed at different scales using a pyramidal algorithm architecture. The decomposition is along the vertical and horizontal directions and keeps the number of pixels required to describe the image constant. Second, according to Shannon's rate distortion theory, the wavelet coefficients are vector quantized using a multiresolution codebook. To encode the wavelet coefficients, a noise shaping bit allocation procedure which assumes that details at high resolution are less visible to the human eye is proposed. In order to allow the receiver to recognize a picture as quickly as possible at minimum cost, a progressive transmission scheme is presented. It is shown that the wavelet transform is particularly well adapted to progressive transmission.

3,925 citations

Journal Article•DOI•
TL;DR: An adaptive, data-driven threshold for image denoising via wavelet soft-thresholding derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution widely used in image processing applications.
Abstract: The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image assumed known. It also outperforms SureShrink (Donoho and Johnstone 1994, 1995; Donoho 1995) most of the time. The second part of the paper attempts to further validate claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of a coder designed with the intention of denoising, and thus achieving simultaneous denoising and compression. Specifically, the zero-zone in the quantization step of compression is analogous to the threshold value in the thresholding function. The remaining coder design parameters are chosen based on a criterion derived from Rissanen's minimum description length (MDL) principle. Experiments show that this compression method does indeed remove noise significantly, especially for large noise power. However, it introduces quantization noise and should be used only if bitrate is a concern in addition to denoising.

2,917 citations
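Per subband, the BayesShrink rule described above reduces to soft-thresholding with T = sigma_n^2 / sigma_x, where sigma_n is the noise standard deviation and sigma_x estimates the signal standard deviation. A minimal sketch, assuming sigma_n is already known rather than estimated from the finest subband as in the paper:

```python
import numpy as np

def soft(w, t):
    """Soft-thresholding: shrink toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def bayes_shrink(coeffs, sigma_n):
    """BayesShrink for one subband: T = sigma_n^2 / sigma_x, with
    sigma_x^2 = max(E[w^2] - sigma_n^2, 0) estimated from the data."""
    sigma_x = np.sqrt(max(np.mean(coeffs ** 2) - sigma_n ** 2, 1e-12))
    return soft(coeffs, sigma_n ** 2 / sigma_x)

# a noise-only subband is almost entirely zeroed out
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 1000)
den = bayes_shrink(noise, sigma_n=1.0)
```

When the subband carries little signal energy, sigma_x is small, the threshold is large, and nearly everything is suppressed; signal-dominated subbands get a small threshold and pass through lightly shrunk.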

Book•
01 Mar 1995
TL;DR: Wavelets and Subband Coding offered a unified view of the exciting field of wavelets and their discrete-time cousins, filter banks, or subband coding and developed the theory in both continuous and discrete time.
Abstract: First published in 1995, Wavelets and Subband Coding offered a unified view of the exciting field of wavelets and their discrete-time cousins, filter banks, or subband coding. The book developed the theory in both continuous and discrete time, and presented important applications. During the past decade, it filled a useful need in explaining a new view of signal processing based on flexible time-frequency analysis and its applications. Since 2007, the authors now retain the copyright and allow open access to the book.

2,793 citations

Journal Article•DOI•
T.K. Moon
TL;DR: The EM (expectation-maximization) algorithm is ideally suited to problems of parameter estimation, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation.
Abstract: A common task in signal processing is the estimation of the parameters of a probability distribution function. Perhaps the most frequently encountered estimation problem is the estimation of the mean of a signal in noise. In many parameter estimation problems the situation is more complicated because direct access to the data necessary to estimate the parameters is impossible, or some of the data are missing. Such difficulties arise when an outcome is a result of an accumulation of simpler outcomes, or when outcomes are clumped together, for example, in a binning or histogram operation. There may also be data dropouts or clustering in such a way that the number of underlying data points is unknown (censoring and/or truncation). The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation. The EM algorithm is presented at a level suitable for signal processing practitioners who have had some exposure to estimation theory.

2,573 citations
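The many-to-one mapping the abstract describes is easy to see in a two-component Gaussian mixture, where the hidden component labels are the missing data. A minimal EM sketch, assuming unit component variances for brevity:

```python
import numpy as np

def em_gmm2(x, iters=50):
    """EM for a 2-component Gaussian mixture with unit variances."""
    mu = np.array([x.min(), x.max()])  # crude initialization
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per sample
        lik = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means and mixing proportions
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
        pi = r.mean(axis=0)
    return mu, pi

# well-separated components are recovered from unlabeled samples
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
mu, pi = em_gmm2(x)
```

Each iteration provably does not decrease the observed-data likelihood, which is the property that makes EM attractive for the censoring, binning, and truncation problems listed above.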