About: Deblurring is a research topic. Over its lifetime, 4631 publications have been published within this topic, receiving 137283 citations.
TL;DR: A constrained-optimization numerical algorithm for removing noise from images is presented, in which the total variation of the image is minimized subject to constraints involving the statistics of the noise.
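The total-variation model summarized above is commonly solved in its unconstrained (Lagrangian) form, min_u 0.5·||u − f||² + λ·TV(u). Below is a minimal gradient-descent sketch using a smoothed TV term so the gradient exists everywhere; function and parameter names are illustrative, not from the paper, and the roll-based divergence uses a periodic boundary approximation:

```python
import numpy as np

def tv_denoise(f, lam=0.15, step=0.05, iters=300, eps=1e-2):
    """Gradient descent on the Lagrangian form of the TV model:
    min_u 0.5*||u - f||^2 + lam * TV_eps(u), with TV smoothed by eps.
    Illustrative sketch only; not the paper's exact scheme."""
    u = f.astype(float).copy()
    for _ in range(iters):
        # forward differences (replicated last row/column)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # divergence via backward differences (periodic boundary approx.)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - step * ((u - f) - lam * div)
    return u
```

The smoothing constant `eps` trades off fidelity to the nonsmooth TV functional against a well-conditioned gradient; dual or primal-dual schemes avoid it entirely.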
TL;DR: A new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically.
Abstract: We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.
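The FISTA recipe described above (an ISTA step applied at an extrapolated point, with a specific momentum sequence) can be sketched for the l1-regularized least-squares problem min_x 0.5·||Ax − b||² + λ||x||₁; names and the dense-matrix setup are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (the shrinkage-thresholding step)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, iters=100):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Step size 1/L with L = ||A||_2^2, the Lipschitz constant of the
    smooth part's gradient. Illustrative sketch of the scheme."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)      # ISTA step at y
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2           # momentum sequence
        y = x_new + ((t - 1) / t_new) * (x_new - x)        # extrapolation
        x, t = x_new, t_new
    return x
```

Dropping the two momentum lines (always taking `y = x_new`, `t = 1`) recovers plain ISTA, which makes the O(1/k) vs. O(1/k²) comparison in the paper easy to reproduce numerically.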
TL;DR: This paper proposes an alternate approach using L1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models and demonstrates its superiority to other super-resolution methods.
Abstract: Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We propose an alternate approach using L1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.
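The combination described above (an L1 data term over all low-resolution frames plus a bilateral-TV prior) leads to a steepest-descent update built from sign terms. The sketch below is in the spirit of that model but is not the paper's implementation: the per-frame warp/blur/decimate operators `W_k` are passed in as placeholder `(apply, apply_T)` pairs, and all names and defaults are illustrative:

```python
import numpy as np

def l1_sr_step(x, frames, ops, step=0.1, lam=0.05, P=1, alpha=0.6):
    """One steepest-descent step for the robust super-resolution model
    sum_k ||W_k x - y_k||_1 + lam * BTV(x), where BTV sums
    alpha^(|l|+|m|) * ||x - S_l S_m x||_1 over shifts within [-P, P].
    Simplified, illustrative sketch."""
    g = np.zeros_like(x)
    for (apply_W, apply_WT), y in zip(ops, frames):
        # subgradient of the L1 data-fidelity term for frame k
        g += apply_WT(np.sign(apply_W(x) - y))
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(x, l, axis=0), m, axis=1)
            s = np.sign(x - shifted)
            # (I - S_{-l} S_{-m}) applied to the sign image
            g += lam * alpha ** (abs(l) + abs(m)) * (
                s - np.roll(np.roll(s, -l, axis=0), -m, axis=1))
    return x - step * g
```

The sign functions are what make the method robust: a frame with outlier pixels (e.g. from a bad motion estimate) contributes a bounded, not unbounded, pull on the estimate.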
TL;DR: A fast algorithm is derived for the constrained TV-based image deblurring problem with box constraints by combining an acceleration of the well known dual approach to the denoising problem with a novel monotone version of a fast iterative shrinkage/thresholding algorithm (FISTA).
Abstract: This paper studies gradient-based schemes for image denoising and deblurring problems based on the discretized total variation (TV) minimization model with constraints. We derive a fast algorithm for the constrained TV-based image deblurring problem. To achieve this task, we combine an acceleration of the well known dual approach to the denoising problem with a novel monotone version of a fast iterative shrinkage/thresholding algorithm (FISTA) we have recently introduced. The resulting gradient-based algorithm combines remarkable simplicity with a proven global rate of convergence which is significantly better than currently known gradient-projection-based methods. Our results are applicable to both the anisotropic and isotropic discretized TV functionals. Initial numerical results demonstrate the viability and efficiency of the proposed algorithms on image deblurring problems with box constraints.
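In schemes like the one described above, the box constraint enters only through a cheap projection (a clip). Below is a plain gradient-projection sketch of that building block for min 0.5·||Ax − b||² subject to lo ≤ x ≤ hi, without the TV term or the monotone acceleration the paper adds on top; `A` stands in for the blur operator and all names are illustrative:

```python
import numpy as np

def box_project(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n (e.g. valid pixel range)."""
    return np.clip(x, lo, hi)

def gp_deblur(A, b, iters=100, lo=0.0, hi=1.0):
    """Plain gradient projection for min 0.5*||Ax - b||^2, lo <= x <= hi.
    Step size 1/L with L = ||A||_2^2. Illustrative sketch of the basic
    pattern that accelerated monotone schemes (MFISTA) build on."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = box_project(x - (A.T @ (A @ x - b)) / L, lo, hi)
    return x
```

Because projection onto a box is exact and O(n), the constrained problem costs essentially the same per iteration as the unconstrained one.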
TL;DR: Model studies suggest that this technique may localize multiple cortical sources with spatial resolution as good as PET, while retaining a much finer-grained picture of activity over time.
Abstract: We describe a comprehensive linear approach to the problem of imaging brain activity with high temporal as well as spatial resolution based on combining EEG and MEG data with anatomical constraints derived from MRI images. The "inverse problem" of estimating the distribution of dipole strengths over the cortical surface is highly underdetermined, even given closely spaced EEG and MEG recordings. We have obtained much better solutions to this problem by explicitly incorporating both local cortical orientation as well as spatial covariance of sources and sensors into our formulation. An explicit polygonal model of the cortical manifold is first constructed as follows: (1) slice data in three orthogonal planes of section (needle-shaped voxels) are combined with a linear deblurring technique to make a single high-resolution 3-D image (cubic voxels), (2) the image is recursively flood-filled to determine the topology of the gray-white matter border, and (3) the resulting continuous surface is refined by relaxing it against the original 3-D gray-scale image using a deformable template method, which is also used to computationally flatten the cortex for easier viewing. The explicit solution to an error minimization formulation of an optimal inverse linear operator (for a particular cortical manifold, sensor placement, noise and prior source covariance) gives rise to a compact expression that is practically computable for hundreds of sensors and thousands of sources. The inverse solution can then be weighted for a particular (averaged) event using the sensor covariance for that event. Model studies suggest that we may be able to localize multiple cortical sources with spatial resolution as good as PET with this technique, while retaining a much finer grained picture of activity over time.
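The "compact expression" for the optimal inverse linear operator mentioned above is, in its standard minimum-norm/Wiener form, W = R Aᵀ (A R Aᵀ + C)⁻¹, where A maps sources to sensors, R is the prior source covariance, and C the sensor noise covariance. Whether this matches the paper's exact expression is an assumption; the sketch below shows the standard form, which is computable for hundreds of sensors because the matrix inverted is only sensors × sensors:

```python
import numpy as np

def linear_inverse_operator(A, R, C):
    """Minimum-norm linear inverse operator for the model b = A s + n:
        W = R A^T (A R A^T + C)^{-1},  s_hat = W b.
    Shapes: A (sensors x sources), R (sources x sources),
    C (sensors x sensors). Illustrative standard form, not necessarily
    the paper's exact expression."""
    G = A @ R @ A.T + C          # sensors x sensors: cheap even when
    return R @ A.T @ np.linalg.inv(G)  # sources number in the thousands
```

With R = I this reduces to a regularized pseudo-inverse; incorporating cortical-orientation and covariance information, as the paper does, amounts to choosing nontrivial R and C.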