Author

Patrick Vandewalle

Bio: Patrick Vandewalle is an academic researcher at Katholieke Universiteit Leuven. He has contributed to research on topics including aliasing and computer science. He has an h-index of 15 and has co-authored 52 publications receiving 1,635 citations. His previous affiliations include Dolby Laboratories and École Normale Supérieure.


Papers
Journal ArticleDOI
TL;DR: A frequency-domain technique precisely registers a set of aliased images based on their low-frequency, aliasing-free part; a high-resolution image is then reconstructed using cubic interpolation.
Abstract: Super-resolution algorithms reconstruct a high-resolution image from a set of low-resolution images of a scene. Precise alignment of the input images is an essential part of such algorithms. If the low-resolution images are undersampled and have aliasing artifacts, the performance of standard registration algorithms decreases. We propose a frequency-domain technique to precisely register a set of aliased images, based on their low-frequency, aliasing-free part. A high-resolution image is then reconstructed using cubic interpolation. Our algorithm is compared to other algorithms in simulations and in practical experiments using real aliased images. Both show very good visual results and demonstrate the attractiveness of our approach for aliased input images. A possible application is in digital cameras, where a set of rapidly acquired images can be used to recover a higher-resolution final image.

520 citations
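To make the registration idea of the paper above concrete, here is a minimal sketch of phase correlation restricted to the low-frequency, aliasing-free band of the spectrum. It recovers only integer-pixel shifts (the paper estimates subpixel shifts, and rotation as well), and the function name and the lowpass_fraction parameter are illustrative assumptions, not the authors' code.

```python
import numpy as np

def estimate_shift(ref, img, lowpass_fraction=0.25):
    """Estimate the (dy, dx) translation of `img` relative to `ref` from the
    phase of the cross-power spectrum, using only low frequencies."""
    F_ref = np.fft.fft2(ref)
    F_img = np.fft.fft2(img)
    # Normalized cross-power spectrum: its phase encodes the translation.
    cross = F_ref * np.conj(F_img)
    cross /= np.abs(cross) + 1e-12
    # Keep only the low-frequency corners of the FFT grid, where the
    # spectrum is free of aliasing even for undersampled images.
    h, w = ref.shape
    ch, cw = int(h * lowpass_fraction), int(w * lowpass_fraction)
    mask = np.zeros((h, w))
    mask[:ch, :cw] = mask[:ch, -cw:] = mask[-ch:, :cw] = mask[-ch:, -cw:] = 1
    corr = np.real(np.fft.ifft2(cross * mask))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices to signed shifts.
    if dy > h // 2: dy -= h
    if dx > w // 2: dx -= w
    return dy, dx
```

Once all images are registered, one possible reconstruction route is to place their samples on a common finer grid and resample with cubic interpolation, e.g. scipy.interpolate.griddata(points, values, grid, method='cubic').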

Journal ArticleDOI
TL;DR: If experiments are performed on a large data set, the algorithm is compared to state-of-the-art methods, and the code and/or data are well documented and available online, the community as a whole will benefit, making it easier to build upon each other's work.
Abstract: What should we do to raise the quality of signal processing publications to an even higher level? We believe it is crucial to maintain precision in describing our work in publications, ensured through a high-quality reviewing process. We also believe that if the experiments are performed on a large data set, the algorithm is compared to the state-of-the-art methods, and the code and/or data are well documented and available online, we will all benefit, and it will be easier to build upon each other's work. It is a clear win-win situation for our community: we will have access to more and more algorithms and can spend time inventing new things rather than recreating existing ones.

273 citations

Journal ArticleDOI
TL;DR: If experiments are performed on a large data set, the algorithm is compared to state-of-the-art methods, and the code and/or data are well documented and available online, the community as a whole will benefit, making it easier to build upon each other's work.
Abstract: Reproducible research results are becoming an increasingly important issue as the systems under investigation grow steadily in complexity, making it almost impossible to judge the accuracy of research results from the paper presentation alone.

152 citations

Journal ArticleDOI
TL;DR: It is proved that the solution of the registration and reconstruction problem is generically unique if MN ≥ L + M − 1, and two subspace-based methods to compute this solution are described.
Abstract: In many applications, the sampling frequency is limited by the physical characteristics of the components: the pixel pitch, the rate of the analog-to-digital (A/D) converter, etc. A low-pass filter is usually applied before the sampling operation to avoid aliasing. However, when multiple copies are available, it is possible to use the information that is inherently present in the aliasing to reconstruct a higher resolution signal. If the different copies have unknown relative offsets, this is a nonlinear problem in the offsets and the signal coefficients. They are not easily separable in the set of equations describing the super-resolution problem. Thus, we perform joint registration and reconstruction from multiple unregistered sets of samples. We give a mathematical formulation for the problem when there are M sets of N samples of a signal that is described by L expansion coefficients. We prove that the solution of the registration and reconstruction problem is generically unique if MN ≥ L + M − 1. We describe two subspace-based methods to compute this solution. Their complexity is analyzed, and some heuristic methods are proposed. Finally, some numerical simulation results on one- and two-dimensional signals are given to show the performance of these methods.

115 citations
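The uniqueness condition in the paper above is a degrees-of-freedom count. The following worked example makes it explicit; the concrete numbers are illustrative, not taken from the paper.

```latex
% Unknowns: the L expansion coefficients plus M - 1 relative offsets
% (one sample set serves as the reference), against MN sample equations:
\underbrace{MN}_{\text{samples}} \;\ge\;
  \underbrace{L}_{\text{coefficients}} \,+\, \underbrace{M-1}_{\text{offsets}}
% Example: a signal with L = 9 expansion coefficients observed in M = 4
% unregistered sets needs 4N >= 9 + 3 = 12, i.e. N >= 3 samples per set.
```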

Patent
21 Sep 2010
TL;DR: In this patent, a method of encoding a video data signal is proposed, which comprises providing at least a first image (21) of a scene (100) as seen from a first viewpoint, providing rendering information (22) for enabling the generation of at least one rendered image of the scene (100) from a rendering viewpoint, and defining a preferred orientation of the rendering viewpoint relative to the first viewpoint.
Abstract: A method of encoding a video data signal (15) is provided, the method comprising providing at least a first image (21) of a scene (100) as seen from a first viewpoint, providing rendering information (22) for enabling the generation of at least one rendered image of the scene (100) as seen from a rendering viewpoint, providing a preferred direction indicator (23), defining a preferred orientation of the rendering viewpoint relative to the first viewpoint, and generating (24) the video data signal (15) comprising encoded data representing the first image, the rendering information and the preferred direction indicator.

58 citations
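As a rough illustration of the signal structure claimed in the patent above, here is a hypothetical sketch in Python. The class, field, and function names are invented for illustration and do not reflect the patent's actual bitstream syntax.

```python
# Hypothetical container for the three components the claimed encoder bundles
# together; a real system would serialize these into an actual bitstream.
from dataclasses import dataclass

@dataclass
class VideoDataSignal:
    first_image: bytes          # encoded first image (21) of the scene (100)
    rendering_info: bytes       # rendering information (22), e.g. depth data
                                # enabling synthesis of views from other viewpoints
    preferred_direction: float  # preferred direction indicator (23): preferred
                                # orientation of the rendering viewpoint relative
                                # to the first viewpoint (e.g. a signed angle)

def encode_signal(first_image: bytes, rendering_info: bytes,
                  preferred_direction: float) -> VideoDataSignal:
    """Generate (24) the video data signal (15) from its three components."""
    return VideoDataSignal(first_image, rendering_info, preferred_direction)
```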


Cited by
01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either have practical significance or are of theoretical importance, and it describes numerous important application areas such as image-based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful in first-hand experience, along with a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book covers essential topics that either have practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

Journal ArticleDOI
TL;DR: Icy is a collaborative bioimage informatics platform that combines a community website for contributing and sharing tools and material, and software with a high-end visual programming framework for seamless development of sophisticated imaging workflows.
Abstract: Icy is a collaborative platform for biological image analysis that extends reproducible research principles by facilitating and stimulating the contribution and sharing of algorithm-based tools and protocols between researchers. Current research in biology uses ever more complex computational and imaging tools. Here we describe Icy, a collaborative bioimage informatics platform that combines a community website for contributing and sharing tools and material with software offering a high-end visual programming framework for seamless development of sophisticated imaging workflows. Icy extends reproducible research principles by encouraging and facilitating the reusability, modularity, standardization and management of algorithms and protocols. Icy is free, open-source and available at http://icy.bioimageanalysis.org/.

1,261 citations

Journal ArticleDOI
TL;DR: This paper shows how the Nonlocal-Means denoising method can be generalized into a relatively simple super-resolution algorithm with no explicit motion estimation; results show that the proposed method is very successful in providing super-resolution on general sequences.
Abstract: Super-resolution reconstruction proposes a fusion of several low-quality images into one higher quality result with better optical resolution. Classic super-resolution techniques strongly rely on the availability of accurate motion estimation for this fusion task. When the motion is estimated inaccurately, as often happens for nonglobal motion fields, annoying artifacts appear in the super-resolved outcome. Encouraged by recent developments on the video denoising problem, where state-of-the-art algorithms are formed with no explicit motion estimation, we seek a super-resolution algorithm of similar nature that will allow processing sequences with general motion patterns. In this paper, we base our solution on the Nonlocal-Means (NLM) algorithm. We show how this denoising method is generalized to become a relatively simple super-resolution algorithm with no explicit motion estimation. Results on several test movies show that the proposed method is very successful in providing super-resolution on general sequences.

845 citations
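The core mechanism of the paper above can be sketched in a few lines: each output pixel is a weighted average over a space-time search window, with weights given by patch similarity rather than by estimated motion vectors. The sketch below is a simplified, illustrative rendering of that idea under stated assumptions; the function and parameter names are invented, it assumes interior pixels (no border handling), and it omits the upscaling step that makes the full method super-resolving.

```python
import numpy as np

def nlm_pixel(frames, y, x, patch=3, search=7, h=10.0):
    """Estimate pixel (y, x) by Nonlocal-Means averaging over a stack of
    frames, with no explicit motion estimation. `frames` has shape (F, H, W);
    (y, x) must lie far enough from the borders for all windows to fit."""
    r, s = patch // 2, search // 2
    # Reference patch around the target pixel in the first frame.
    ref = frames[0, y - r:y + r + 1, x - r:x + r + 1]
    num, den = 0.0, 0.0
    for f in range(frames.shape[0]):          # all frames in the sequence
        for dy in range(-s, s + 1):           # spatial search window
            for dx in range(-s, s + 1):
                yy, xx = y + dy, x + dx
                cand = frames[f, yy - r:yy + r + 1, xx - r:xx + r + 1]
                # Weight decays with patch dissimilarity: similar patches
                # contribute strongly, regardless of where motion moved them.
                w = np.exp(-np.sum((ref - cand) ** 2) / h**2)
                num += w * frames[f, yy, xx]
                den += w
    return num / den
```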

01 Jan 2013
TL;DR: Four rationales for sharing data are examined, drawing examples from the sciences, social sciences, and humanities: to reproduce or to verify research, to make results of publicly funded research available to the public, to enable others to ask new questions of extant data, and to advance the state of research and innovation.
Abstract: We must all accept that science is data and that data are science, and thus provide for, and justify the need for the support of, much-improved data curation. (Hanson, Sugden, & Alberts) Researchers are producing an unprecedented deluge of data by using new methods and instrumentation. Others may wish to mine these data for new discoveries and innovations. However, research data are not readily available as sharing is common in only a few fields such as astronomy and genomics. Data sharing practices in other fields vary widely. Moreover, research data take many forms, are handled in many ways, using many approaches, and often are difficult to interpret once removed from their initial context. Data sharing is thus a conundrum. Four rationales for sharing data are examined, drawing examples from the sciences, social sciences, and humanities: (1) to reproduce or to verify research, (2) to make results of publicly funded research available to the public, (3) to enable others to ask new questions of extant data, and (4) to advance the state of research and innovation. These rationales differ by the arguments for sharing, by beneficiaries, and by the motivations and incentives of the many stakeholders involved. The challenges are to understand which data might be shared, by whom, with whom, under what conditions, why, and to what effects. Answers will inform data policy and practice.

634 citations

Journal ArticleDOI
01 Aug 2014
TL;DR: This comprehensive survey provides an overview of most of these published works by grouping them into a broad taxonomy, and it discusses common issues in super-resolution algorithms such as imaging models and registration algorithms, optimization of the cost functions employed, handling of color information, improvement factors, assessment of super-resolution algorithms, and the most commonly employed databases.
Abstract: Super-resolution, the process of obtaining one or more high-resolution images from one or more low-resolution observations, has been a very attractive research topic over the last two decades. It has found practical applications in many real-world problems in different fields, from satellite and aerial imaging to medical image processing, facial image analysis, text image analysis, sign and number-plate reading, and biometric recognition, to name a few. This has resulted in many research papers, each developing a new super-resolution algorithm for a specific purpose. The current comprehensive survey provides an overview of most of these published works by grouping them into a broad taxonomy. For each of the groups in the taxonomy, the basic concepts of the algorithms are first explained, and then the paths through which each group has evolved are described in detail, noting the contributions of different authors to the basic concepts of each group. Furthermore, common issues in super-resolution algorithms, such as imaging models and registration algorithms, optimization of the cost functions employed, handling of color information, improvement factors, assessment of super-resolution algorithms, and the most commonly employed databases, are discussed.

602 citations