SciSpace (formerly Typeset)
Author

G. de Haan

Bio: G. de Haan is an academic researcher from Philips. The author has contributed to research in topics: Motion estimation & Motion compensation. The author has an h-index of 21, has co-authored 56 publications, and has received 2,329 citations. Previous affiliations of G. de Haan include Eindhoven University of Technology.


Papers
Journal ArticleDOI
TL;DR: A new recursive block-matching motion estimation algorithm with only eight candidate vectors per block is presented and is shown to have a superior performance over alternative algorithms, while its complexity is significantly less.
Abstract: A new recursive block-matching motion estimation algorithm with only eight candidate vectors per block is presented. Fast convergence and high accuracy, also in the vicinity of discontinuities in the velocity plane, were realized with such new techniques as bidirectional convergence and convergence accelerators. A new search strategy, asynchronous cyclic search, which allows a highly efficient implementation, is presented. A new block-erosion postprocessing proposal further effectively eliminates block structures from the generated vector field. Measured with criteria relevant for the field-rate conversion application, the new motion estimator is shown to have superior performance over alternative algorithms, while its complexity is significantly less.
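The core recursive-search idea of the abstract, a small candidate set built from the vectors already found for neighbouring blocks plus one-pixel "update" perturbations, can be sketched as follows. This is a generic simplification for illustration only: the paper's exact eight-candidate set, bidirectional convergence, asynchronous cyclic search, and block erosion are not reproduced, and the block size and update set below are assumptions.

```python
import numpy as np

def sad(cur, ref, by, bx, v, B):
    """Sum of absolute differences between a block of `cur` and the block of
    `ref` displaced by candidate vector v; inf if the candidate leaves the frame."""
    H, W = ref.shape
    y, x = by + v[0], bx + v[1]
    if y < 0 or x < 0 or y + B > H or x + B > W:
        return float("inf")
    d = cur[by:by+B, bx:bx+B].astype(int) - ref[y:y+B, x:x+B].astype(int)
    return int(np.abs(d).sum())

def recursive_block_match(cur, ref, B=8):
    """Simplified recursive search: instead of a full search, each block tries
    only a handful of candidates (the zero vector, the vectors of its causal
    neighbours, and small one-pixel updates of each) and keeps the best match."""
    H, W = cur.shape
    field = np.zeros((H // B, W // B, 2), dtype=int)
    updates = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]
    for i in range(H // B):
        for j in range(W // B):
            preds = [(0, 0)]
            if j > 0:
                preds.append(tuple(field[i, j - 1]))   # left neighbour's vector
            if i > 0:
                preds.append(tuple(field[i - 1, j]))   # top neighbour's vector
            cands = [(p[0] + dy, p[1] + dx) for p in preds for dy, dx in updates]
            field[i, j] = min(cands, key=lambda v: sad(cur, ref, i * B, j * B, v, B))
    return field
```

Because each block inherits its neighbours' vectors, a correct vector found anywhere propagates quickly across smooth motion regions, which is why so few candidates per block suffice.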

533 citations

Journal ArticleDOI
G. de Haan1, Erwin B. Bellers1
01 Dec 1998
TL;DR: This paper outlines the most relevant proposals, ranging from simple linear methods to advanced motion-compensated algorithms, and provides a relative performance comparison for 12 of these methods.
Abstract: The question "to interlace or not to interlace" divides the television and the personal computer communities. A proper answer requires a common understanding of what is possible nowadays in deinterlacing video signals. This paper outlines the most relevant proposals, ranging from simple linear methods to advanced motion-compensated algorithms, and provides a relative performance comparison for 12 of these methods. Next to objective performance indicators, screen photographs have been used to illustrate typical artifacts of individual deinterlacers. The overview provides no final answer in the interlace debate, as such an answer requires currently unavailable capabilities in balancing technical and nontechnical issues.
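One of the "simple linear methods" the survey covers is intra-field line averaging, which is easy to show concretely. The sketch below is a generic textbook version, not code from the paper; the parity convention and edge handling are assumptions.

```python
import numpy as np

def line_average(field, parity=0):
    """Intra-field linear deinterlacing ('line averaging'): the transmitted
    field provides every second line of the output frame, and each missing
    line is the mean of the transmitted lines above and below it (edge
    lines are repeated where a neighbour is absent)."""
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=float)
    frame[parity::2] = field                      # copy the transmitted lines
    for y in range(1 - parity, 2 * h, 2):         # fill the missing lines
        above = frame[max(y - 1, parity)]
        below = frame[min(y + 1, 2 * h - 2 + parity)]
        frame[y] = 0.5 * (above + below)
    return frame
```

On a vertical ramp this interpolates the missing lines exactly; on moving detail it blurs, which is precisely the class of artifact the motion-compensated methods in the survey are designed to avoid.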

400 citations

Journal ArticleDOI
O.A. Ojo1, G. de Haan1
TL;DR: This work introduces and evaluates a new and very robust upconversion algorithm which is unique in that it estimates motion vector reliability and uses this information to control the filtering process, and outperforms others in its class.
Abstract: The quality of field-rate conversion improves significantly with motion-compensation techniques. It becomes possible to interpolate new fields at their correct temporal position. This results in smooth motion portrayal without loss of temporal resolution. However, motion vectors are not always valid for every pixel or object in an image. Therefore, visible artifacts occur wherever wrong vectors are used in the image. One effective method to solve this problem is the use of non-linear filtering. In this method, a wrongly interpolated pixel is either substituted or averaged with neighbouring pixels. We introduce and evaluate a new and very robust upconversion algorithm which is based on the non-linear filtering approach. It is unique in that it estimates motion vector reliability and uses this information to control the filtering process. This algorithm outperforms others in its class, especially for complex image sequences.
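The principle of reliability-controlled non-linear filtering can be illustrated on a 1-D toy signal. This sketch is not the paper's algorithm: the reliability measure (the match error between the two motion-compensated fetches), the global integer vector, and the median fallback are all simplifying assumptions standing in for the actual per-pixel scheme.

```python
import numpy as np

def upconvert_1d(prev, nxt, v, thresh=10.0):
    """Interpolate the 'frame' halfway between two 1-D frames along one
    integer motion vector v. Where the two motion-compensated fetches agree
    (reliable vector), keep their average; where they disagree (unreliable
    vector), fall back to a median that pulls the output toward the
    non-compensated temporal neighbours."""
    n = len(prev)
    out = np.empty(n)
    for x in range(n):
        xb = int(np.clip(x - v // 2, 0, n - 1))        # fetch from previous frame
        xf = int(np.clip(x + v - v // 2, 0, n - 1))    # fetch from next frame
        a, b = prev[xb], nxt[xf]
        if abs(a - b) <= thresh:                       # vector fits at this sample
            out[x] = 0.5 * (a + b)
        else:                                          # vector unreliable here
            out[x] = np.median([0.5 * (a + b), prev[x], nxt[x]])
    return out
```

The key design point mirrored from the abstract: the filter is not applied uniformly, but is steered per sample by an estimate of how trustworthy the vector is.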

142 citations

Journal ArticleDOI
G. de Haan1
01 Aug 1999
TL;DR: An IC for consumer television applies motion estimation and compensation for high-quality video format conversion and achieves a perfect motion portrayal for all source material and many display formats.
Abstract: An IC for consumer television applies motion estimation and compensation for high-quality video format conversion. The chip achieves perfect motion portrayal for all sources, including 24, 25, and 30 Hz film material, and many display formats. The true-motion vectors are estimated with sub-pixel resolution and are used to optimally de-interlace video broadcast signals, perform motion-compensated picture-rate conversion, and improve temporal noise reduction.

118 citations


Cited by
Book
01 Jan 2005
TL;DR: This book develops vision-based models and metrics for perceptual video quality assessment, covering human vision, video coding artifacts, a perceptual distortion metric, its evaluation on still images and video, and extensions such as blocking-artifact measurement, object segmentation, and image appeal.
Abstract (book contents): About the Author. Acknowledgements. Acronyms. 1 Introduction (Motivation; Outline). 2 Vision (Eye; Retina; Visual Pathways; Sensitivity to Light; Color Perception; Masking and Adaptation; Multi-channel Organization; Summary). 3 Video Quality (Video Coding and Compression; Artifacts; Visual Quality; Quality Metrics; Metric Evaluation; Summary). 4 Models and Metrics (Isotropic Contrast; Perceptual Distortion Metric; Summary). 5 Metric Evaluation (Still Images; Video; Component Analysis; Summary). 6 Metric Extensions (Blocking Artifacts; Object Segmentation; Image Appeal; Summary). 7 Closing Remarks (Summary; Perspectives). Appendix: Color Space Conversions. References. Index.

521 citations

01 Dec 1996

452 citations

Journal ArticleDOI
TL;DR: The idea is to use MCMC to solve the resulting problem articulated under a Bayesian framework, but to deploy purely deterministic mechanisms for dealing with the solution, which results in a relatively fast implementation that unifies many of the pixel-by-pixel schemes previously described in the literature.
Abstract: Recently, the problem of automated restoration of archived sequences has caught the attention of the video broadcast industry. One of the main problems is dealing with blotches caused by film abrasion or dirt adhesion. This paper presents a new framework for the simultaneous treatment of missing data and motion in degraded video sequences. Using simple, translational models of motion, a joint solution for the detection and reconstruction of missing data is proposed. The framework also incorporates the unique notion of dealing with occlusion and uncovering as it pertains to picture building. The idea is to use MCMC to solve the resulting problem articulated under a Bayesian framework, but to deploy purely deterministic mechanisms for dealing with the solution. This results in a relatively fast implementation that unifies many of the pixel-by-pixel schemes previously described in the literature.

434 citations


Journal ArticleDOI
TL;DR: A new taxonomy based on image representations is introduced for a better understanding of state-of-the-art image denoising techniques and methods based on overcomplete representations using learned dictionaries perform better than others.
Abstract: Image denoising is a well-explored topic in the field of image processing. In the past several decades, the progress made in image denoising has benefited from the improved modeling of natural images. In this paper, we introduce a new taxonomy based on image representations for a better understanding of state-of-the-art image denoising techniques. Within each category, several representative algorithms are selected for evaluation and comparison. The experimental results are discussed and analyzed to determine the overall advantages and disadvantages of each category. In general, the nonlocal methods within each category produce better denoising results than local ones. In addition, methods based on overcomplete representations using learned dictionaries perform better than others. The comprehensive study in this paper serves as a reference and may stimulate new research ideas in image denoising.
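The "nonlocal" category in the survey's taxonomy is exemplified by non-local means, which can be sketched minimally. This is an illustrative textbook version, not any surveyed algorithm's reference implementation; the patch size, search window, and filtering parameter h are arbitrary choices.

```python
import numpy as np

def nlm(img, search=5, patch=3, h=10.0):
    """Minimal non-local means: each pixel is replaced by a weighted average
    of pixels in a small search window, with weights that decay with the
    mean-squared difference between the PATCHES around the two pixels
    (rather than their spatial distance, as a local filter would use)."""
    p, s = patch // 2, search // 2
    pad = p + s
    im = np.pad(img.astype(float), pad, mode="reflect")
    H, W = img.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y + pad, x + pad
            ref = im[cy - p:cy + p + 1, cx - p:cx + p + 1]   # patch around target
            wsum = vsum = 0.0
            for dy in range(-s, s + 1):
                for dx in range(-s, s + 1):
                    qy, qx = cy + dy, cx + dx
                    cand = im[qy - p:qy + p + 1, qx - p:qx + p + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    wsum += w
                    vsum += w * im[qy, qx]
            out[y, x] = vsum / wsum
    return out
```

A purely local filter averages by proximity and blurs structure; weighting by patch similarity lets similar but distant content reinforce the estimate, which is the property behind the survey's observation that nonlocal methods tend to outperform local ones.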

376 citations