Journal ArticleDOI

Bayesian-Based Iterative Method of Image Restoration

01 Jan 1972-Journal of the Optical Society of America (Optical Society of America)-Vol. 62, Iss: 1, pp 55-59
TL;DR: An iterative method of restoring degraded images was developed by treating images, point spread functions, and degraded images as probability-frequency functions and by applying Bayes’s theorem.
Abstract: An iterative method of restoring degraded images was developed by treating images, point spread functions, and degraded images as probability-frequency functions and by applying Bayes’s theorem. The method functions effectively in the presence of noise and is adaptable to computer operation.
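The iteration described here is what later became known as the Richardson-Lucy deconvolution algorithm. Below is a minimal sketch of that style of update, assuming a known, shift-invariant point spread function and a non-negative observed image; the function name, the flat initialization, and the eps guard are illustrative choices, not prescribed by the paper.

    import numpy as np
    from scipy.signal import fftconvolve

    def bayesian_iterative_restore(observed, psf, num_iter=30, eps=1e-12):
        """Sketch of the multiplicative Bayesian restoration iteration.

        observed : non-negative degraded image (2-D array)
        psf      : point spread function, assumed known and normalized to sum to 1
        """
        estimate = np.full_like(observed, observed.mean(), dtype=float)
        psf_mirror = psf[::-1, ::-1]  # mirrored PSF acts as the adjoint of the blur
        for _ in range(num_iter):
            # degraded image predicted by the current estimate
            blurred = fftconvolve(estimate, psf, mode="same")
            # ratio of observed data to prediction (eps avoids division by zero)
            ratio = observed / (blurred + eps)
            # multiplicative update from Bayes's theorem; keeps the estimate non-negative
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

A maintained implementation of essentially the same update is available as skimage.restoration.richardson_lucy.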


Citations
Journal ArticleDOI
TL;DR: An analogy is drawn between images and statistical-mechanics systems; gradual temperature reduction ("annealing") applied to the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations, resulting in a highly parallel "relaxation" algorithm for MAP estimation.
Abstract: We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the Gibbs distribution-Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, nonlinear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low energy states ("annealing"), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel "relaxation" algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios.
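To make the annealing idea concrete, the toy sketch below runs a Gibbs sampler with a slowly decreasing temperature on a binary (+1/-1) image with an Ising-style smoothness prior and Gaussian noise; the energy function, cooling schedule, and parameter values are illustrative assumptions rather than the paper's exact model.

    import numpy as np

    def anneal_mrf_restore(noisy, beta=1.5, sigma=0.5, sweeps=50, t0=4.0, t_min=0.1):
        """Toy MAP restoration of a +/-1 image by annealed Gibbs sampling on an MRF posterior."""
        rng = np.random.default_rng(0)
        x = np.where(noisy > 0, 1.0, -1.0)      # initial labeling taken from the data
        h, w = x.shape
        for sweep in range(sweeps):
            t = max(t_min, t0 * 0.95 ** sweep)  # geometric cooling ("annealing")
            for i in range(h):
                for j in range(w):
                    nb = 0.0                     # sum of the 4-connected neighbors
                    if i > 0:     nb += x[i - 1, j]
                    if i < h - 1: nb += x[i + 1, j]
                    if j > 0:     nb += x[i, j - 1]
                    if j < w - 1: nb += x[i, j + 1]
                    # local posterior energy of the two candidate pixel states
                    e_plus  = (noisy[i, j] - 1.0) ** 2 / (2 * sigma ** 2) - beta * nb
                    e_minus = (noisy[i, j] + 1.0) ** 2 / (2 * sigma ** 2) + beta * nb
                    # Gibbs update: sample the pixel from its conditional at temperature t
                    p_plus = 1.0 / (1.0 + np.exp((e_plus - e_minus) / t))
                    x[i, j] = 1.0 if rng.random() < p_plus else -1.0
        return x

As the temperature shrinks, the sampler concentrates on low-energy (most probable) configurations, which is the MAP estimate the authors target.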

18,761 citations

Journal ArticleDOI
21 Oct 1999-Nature
TL;DR: An algorithm for non-negative matrix factorization is demonstrated that learns parts of faces and semantic features of text, in contrast to other methods that learn holistic, not parts-based, representations.
Abstract: Is perception of the whole based on perception of its parts? There is psychological and physiological evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign.

11,500 citations

Proceedings Article
01 Jan 2000
TL;DR: Two multiplicative algorithms for non-negative matrix factorization are analyzed; one can be shown to minimize the conventional least-squares error, while the other minimizes the generalized Kullback-Leibler divergence.
Abstract: Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.
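For reference, a compact sketch of the two multiplicative update rules follows, for a non-negative data matrix V factored as W H; the initialization, iteration count, and eps guard are illustrative choices. The generalized-KL form is the one whose single-factor special case resembles the iterative deconvolution methods noted in the citation context below.

    import numpy as np

    def nmf_multiplicative(V, rank, num_iter=200, use_kl=True, eps=1e-9, seed=0):
        """Non-negative matrix factorization V ~ W @ H via multiplicative updates.

        use_kl=True  : updates that decrease the generalized Kullback-Leibler divergence
        use_kl=False : updates that decrease the conventional least-squares error
        """
        rng = np.random.default_rng(seed)
        n, m = V.shape
        W = rng.random((n, rank)) + eps
        H = rng.random((rank, m)) + eps
        for _ in range(num_iter):
            if use_kl:
                # KL-divergence updates
                WH = W @ H + eps
                H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
                WH = W @ H + eps
                W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
            else:
                # least-squares (Euclidean) updates
                H *= (W.T @ V) / (W.T @ W @ H + eps)
                W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H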

7,345 citations


Cites methods from "Bayesian-Based Iterative Method of ..."

  • ...Algorithms similar to ours where only one of the factors is adapted have previously been used for the deconvolution of emission tomography and astronomical images [9, 10, 11, 12]....


Proceedings ArticleDOI
21 Jul 2017
TL;DR: It is concluded that the NTIRE 2017 challenge pushes the state-of-the-art in single-image super-resolution, reaching the best results to date on the popular Set5, Set14, B100, Urban100 datasets and on the authors' newly proposed DIV2K.
Abstract: This paper introduces a novel large dataset for example-based single image super-resolution and studies the state-of-the-art as emerged from the NTIRE 2017 challenge. The challenge is the first challenge of its kind, with 6 competitions, hundreds of participants and tens of proposed solutions. Our newly collected DIVerse 2K resolution image dataset (DIV2K) was employed by the challenge. In our study we compare the solutions from the challenge to a set of representative methods from the literature and evaluate them using diverse measures on our proposed DIV2K dataset. Moreover, we conduct a number of experiments and draw conclusions on several topics of interest. We conclude that the NTIRE 2017 challenge pushes the state-of-the-art in single-image super-resolution, reaching the best results to date on the popular Set5, Set14, B100, Urban100 datasets and on our newly proposed DIV2K.

2,388 citations


Cites background from "Bayesian-Based Iterative Method of ..."

  • ...Single image super-resolution as well as image restoration research literature spans over decades [36, 20, 4, 13, 16, 3, 15, 14, 6, 32, 54, 30, 17, 23, 12, 47, 48, 10, 21]....


References
Book
01 Jan 1960
TL;DR: This book develops probability theory as the study of mathematical models of random phenomena, covering basic probability, independence and dependence, random variables and their expectations, and sums and sequences of independent random variables.
Abstract: Probability Theory as the Study of Mathematical Models of Random Phenomena. Basic Probability Theory. Independence and Dependence. Numerical-Valued Random Phenomena. Mean and Variance of a Probability Law. Normal, Poisson, and Related Probability Laws. Random Variables. Expectation of a Random Variable. Sums of Independent Random Variables. Sequences of Random Variables. Tables. Answers to Odd-Numbered Exercises. Index.

766 citations

Journal ArticleDOI
TL;DR: The extent to which the processing approaches the optimum can be evaluated by determining the fraction of the total information content of the image which can be visually extracted after processing.
Abstract: The evaluation of an image must depend upon the purpose for which the image was obtained and the manner in which the image is to be examined. Where the goal is extraction of information and where the image is to be processed prior to viewing, the information content of the image is the only true evaluation criterion. Under these conditions, the improvement achieved by processing can be evaluated by comparing the ability of the human observer to extract information from the image before and after processing. The extent to which the processing approaches the optimum can be evaluated by determining the fraction of the total information content of the image which can be visually extracted after processing. The basic mathematical concepts of image processing are indicated, relating the input point spread function (p.s.f. of the unprocessed image), the processing point spread function (p.s.f. which defines the processing operation), and the output point spread function (p.s.f. of the processed image), and their Fourier domain equivalents. Examples are shown of images which have been processed and the details of the processing operations are described.
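In symbols, the relation described above can be written as follows (the notation is ours, not the paper's): the output point spread function is the input point spread function convolved with the processing point spread function, and the corresponding transfer functions multiply in the Fourier domain.

    % spatial domain: convolution of input and processing p.s.f.'s
    h_{\text{out}}(x, y) = h_{\text{proc}}(x, y) * h_{\text{in}}(x, y)
    % Fourier domain equivalent: transfer functions multiply
    H_{\text{out}}(u, v) = H_{\text{proc}}(u, v)\, H_{\text{in}}(u, v)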

148 citations

Journal ArticleDOI
TL;DR: In this paper, the amplitude and phase coefficients of the two-dimensional Fourier series representing the degraded images were corrected using factors obtained from the optical transfer function of the turbulence, measured at the time the images were photographed.
Abstract: Turbulence-degraded images have been processed to obtain an improvement of their visual image quality. The initial objects were photographed through laboratory-generated turbulence. The resulting transparencies of the degraded images were digitized by a photoelectric scanner and processed on a digital computer. The processing consisted of applying corrections to the amplitude and phase coefficients of the two-dimensional Fourier series representing the degraded images. The correction factors were obtained from the optical transfer function of the turbulence measured at the time the images were photographed. The experiment was done for 5-msec and 1-min exposure times. The processed data were used to generate photographs. The processed images were found to have significantly more visual detail than the original degraded images; the 5-msec-exposure restorations were superior to the 1-min-exposure restorations.
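The correction described, dividing out the measured optical transfer function in the Fourier domain, amounts to an inverse filter. A minimal sketch is given below, assuming the degraded image and the measured OTF are available as same-sized arrays; the small regularization term is our addition to keep the filter bounded where the OTF is near zero, and is not necessarily part of the original procedure.

    import numpy as np

    def otf_correct(degraded, otf, eps=1e-3):
        """Correct the Fourier amplitude and phase of a degraded image with a measured OTF."""
        spectrum = np.fft.fft2(degraded)
        # regularized inverse filter: divides out the OTF, damped where |OTF| is tiny
        corrected = spectrum * np.conj(otf) / (np.abs(otf) ** 2 + eps)
        return np.real(np.fft.ifft2(corrected))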

127 citations