Author

Aggelos K. Katsaggelos

Bio: Aggelos K. Katsaggelos is an academic researcher from Northwestern University. The author has contributed to research in topics: Image restoration & Image processing. The author has an h-index of 76 and has co-authored 946 publications receiving 26,196 citations. Previous affiliations of Aggelos K. Katsaggelos include the University of Stavanger & Delft University of Technology.


Papers
Journal ArticleDOI
TL;DR: A modified Hopfield neural network model for regularized image restoration is presented that allows negative autoconnections for each neuron; a partially asynchronous variant further allows a neuron a bounded time delay when communicating with other neurons.
Abstract: A modified Hopfield neural network model for regularized image restoration is presented. The proposed network allows negative autoconnections for each neuron. A set of algorithms using the proposed neural network model is presented, with various updating modes: sequential updates; n-simultaneous updates; and partially asynchronous updates. The sequential algorithm is shown to converge to a local minimum of the energy function after a finite number of iterations. Since an algorithm which updates all n neurons simultaneously is not guaranteed to converge, a modified algorithm is presented, which is called a greedy algorithm. Although the greedy algorithm is not guaranteed to converge to a local minimum, the l1 norm of the residual at a fixed point is bounded. A partially asynchronous algorithm is presented, which allows a neuron to have a bounded time delay to communicate with other neurons. Such an algorithm can eliminate the synchronization overhead of synchronous algorithms.

233 citations
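
As a rough illustration of the sequential update mode described above, the sketch below runs energy-descent updates on a toy 1-D deblurring problem: minimize E(x) = 0.5 x^T W x - b^T x with W = H^T H + lam C^T C and b = H^T y, the standard regularized-restoration energy. The blur H, smoothness operator C, quantization step, and test signal are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's exact algorithm) of sequential,
# energy-descent updates in a Hopfield-style network for regularized
# restoration. All sizes and the test signal are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 32
x_true = np.clip(np.cumsum(rng.normal(size=n)), -3.0, 3.0)  # toy signal

# H: 3-tap moving-average blur; C: discrete Laplacian (smoothness prior)
H = sum(np.eye(n, k=k) for k in (-1, 0, 1)) / 3.0
C = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
y = H @ x_true + 0.01 * rng.normal(size=n)

lam = 0.1
W = H.T @ H + lam * C.T @ C   # nonzero diagonal: self-connections allowed
b = H.T @ y

x = y.copy()                  # initialize with the degraded data
step = 0.05                   # quantization step for each neuron
for sweep in range(500):
    changed = False
    for i in range(n):        # sequential mode: one neuron at a time
        g = W[i] @ x - b[i]   # gradient component at neuron i
        for delta in (step, -step):
            # energy change if neuron i moves by delta
            if delta * g + 0.5 * delta**2 * W[i, i] < 0:
                x[i] += delta # accept only energy-decreasing moves
                changed = True
                break
    if not changed:           # no neuron can improve: local minimum
        break

print("data misfit ||y - Hx||:", np.linalg.norm(y - H @ x))
```

Each accepted move strictly decreases the quadratic energy, so the sweep stops at a local minimum after finitely many steps, mirroring the convergence property stated for the sequential algorithm.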

Journal ArticleDOI
TL;DR: A new paradigm is adopted in which the required prior information is extracted from the available data at each iteration step, i.e., the partially restored image, allowing the simultaneous determination of the regularization functional's value and the restoration of the degraded image.
Abstract: The determination of the regularization parameter is an important issue in regularized image restoration, since it controls the trade-off between fidelity to the data and smoothness of the solution. A number of approaches have been developed in determining this parameter. In this paper, a new paradigm is adopted, according to which the required prior information is extracted from the available data at the previous iteration step, i.e., the partially restored image at each step. We propose the use of a regularization functional instead of a constant regularization parameter. The properties such a regularization functional should satisfy are investigated, and two specific forms of it are proposed. An iterative algorithm is proposed for obtaining a restored image. The regularization functional is defined in terms of the restored image at each iteration step, therefore allowing for the simultaneous determination of its value and the restoration of the degraded image. Both proposed iteration adaptive regularization functionals are shown to result in a smoothing functional with a global minimum, so that its iterative optimization does not depend on the initial conditions. The convergence of the algorithm is established and experimental results are shown.

230 citations
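
To make the idea concrete, here is a minimal sketch of an iterative restoration loop in which the regularization weight is recomputed at every step from the partially restored image rather than fixed in advance. The specific functional lam(x) = ||y - Hx||^2 / (alpha + ||Cx||^2) is an illustrative choice consistent with extracting the parameter from the data; it is not necessarily either of the two forms proposed in the paper.

```python
# Sketch of iteration-adaptive regularization: the regularization
# value is recomputed each step from the current iterate. The form of
# lam(x) and all operators below are illustrative assumptions.
import numpy as np

def restore(y, H, C, alpha=1.0, beta=0.1, n_iter=500):
    x = y.copy()
    for _ in range(n_iter):
        # regularization weight extracted from the partially restored image
        lam = np.linalg.norm(y - H @ x) ** 2 / (
            alpha + np.linalg.norm(C @ x) ** 2)
        # one gradient step on ||y - Hx||^2 + lam ||Cx||^2 (lam frozen)
        grad = H.T @ (H @ x - y) + lam * (C.T @ (C @ x))
        x = x - beta * grad
    return x

rng = np.random.default_rng(1)
n = 32
H = sum(np.eye(n, k=k) for k in (-1, 0, 1)) / 3.0   # toy 3-tap blur
C = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # Laplacian
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = H @ x_true + 0.01 * rng.normal(size=n)
print("error:", np.linalg.norm(restore(y, H, C) - x_true))
```

Because lam is recomputed inside the loop, no regularization parameter has to be chosen before restoration begins; it is determined simultaneously with the restored image, as the abstract describes.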

Journal ArticleDOI
01 Jun 1998
TL;DR: This work addresses the efficient encoding of object boundaries, describes and compares the techniques developed for shape coding within the MPEG-4 standardization effort, and presents a framework for representing shapes by their contours.
Abstract: We address the problem of the efficient encoding of object boundaries. This problem is becoming increasingly important in applications such as content-based storage and retrieval, studio and television postproduction, and mobile multimedia applications. The MPEG-4 visual standard will allow the transmission of arbitrarily shaped video objects. The techniques developed for shape coding within the MPEG-4 standardization effort are described and compared first. A framework for the representation of shapes using their contours is presented next. Such representations are achieved using curves of various orders, and they are optimal in the rate-distortion sense. Finally, conclusions are drawn.

221 citations
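
The rate-distortion formulation lends itself to a short dynamic-programming sketch: choose a subset of contour points as polygon vertices minimizing a Lagrangian cost D + lam * R. The fixed bit cost per vertex, the value of lam, and the restriction to first-order (polygon) approximation are assumptions for illustration; the paper also covers higher-order curves and the MPEG-4 coders themselves.

```python
# Sketch of rate-distortion optimized polygon approximation of a
# contour via dynamic programming. Bit costs and lam are illustrative.
import numpy as np

def polygon_approx(pts, lam=2.0, bits_per_vertex=16):
    n = len(pts)

    def seg_dist(i, j):
        # squared deviation of the skipped points from the chord pts[i]->pts[j]
        a, b = pts[i], pts[j]
        d = b - a
        length = np.hypot(*d) + 1e-12
        mid = pts[i + 1:j]
        if len(mid) == 0:
            return 0.0
        cross = np.abs((mid - a) @ np.array([-d[1], d[0]])) / length
        return float((cross ** 2).sum())

    INF = float("inf")
    cost = [INF] * n
    prev = [-1] * n
    cost[0] = lam * bits_per_vertex          # first vertex is always sent
    for j in range(1, n):
        for i in range(j):                   # best predecessor vertex for j
            c = cost[i] + seg_dist(i, j) + lam * bits_per_vertex
            if c < cost[j]:
                cost[j], prev[j] = c, i
    path, j = [], n - 1                      # backtrack optimal vertex set
    while j != -1:
        path.append(j)
        j = prev[j]
    return path[::-1]

theta = np.linspace(0, np.pi, 50)
contour = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # toy half-circle
print("kept vertices:", polygon_approx(contour))
```

Raising lam trades distortion for rate: fewer vertices are kept, at the cost of a coarser polygon.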

Book
31 Dec 1990
TL;DR: The blur identification problem is formulated as a constrained maximum-likelihood problem whose constraints directly incorporate a priori known relations between the blur (and image model) coefficients, such as symmetry properties, into the identification procedure.
Abstract: The blur identification problem is formulated as a constrained maximum-likelihood problem. The constraints directly incorporate a priori known relations between the blur (and image model) coefficients, such as symmetry properties, into the identification procedure. The resulting nonlinear minimization problem is solved iteratively, yielding a very general identification algorithm. An example of blur identification using synthetic data is given.

218 citations
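
A toy version of the constraint idea: estimate the blur taps by least squares (the maximum-likelihood estimate under Gaussian noise) while enforcing a known symmetry relation by projecting the estimate onto symmetric kernels after each step. The 1-D setup with a known original signal is a simplification for illustration; the paper treats the image model jointly and solves the constrained problem iteratively in its own way.

```python
# Sketch of ML-style blur identification with a symmetry constraint
# enforced by projection. The toy 1-D setup is illustrative only.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=256)                       # toy "original" signal
h_true = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # symmetric blur
y = np.convolve(x, h_true, mode="same") + 0.01 * rng.normal(size=x.size)

h = np.zeros(5)
h[2] = 1.0                                     # start from the identity kernel
step = 1e-4
for _ in range(2000):
    r = np.convolve(x, h, mode="same") - y     # residual of the current model
    # gradient of 0.5*||r||^2 w.r.t. each tap (residual against shifted x)
    g = np.array([np.dot(r, np.convolve(x, np.eye(5)[k], mode="same"))
                  for k in range(5)])
    h -= step * g
    h = 0.5 * (h + h[::-1])                    # project onto symmetric kernels

print("estimated blur:", np.round(h, 3))
```

The projection step is what makes the problem constrained: without it the least-squares estimate need not satisfy the a priori known symmetry relation.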

Journal ArticleDOI
TL;DR: This paper derives expressions for the iterative evaluation of the two hyperparameters by applying the evidence and maximum a posteriori (MAP) analyses within the hierarchical Bayesian paradigm.
Abstract: In this paper, we propose the application of the hierarchical Bayesian paradigm to the image restoration problem. We derive expressions for the iterative evaluation of the two hyperparameters applying the evidence and maximum a posteriori (MAP) analysis within the hierarchical Bayesian paradigm. We show analytically that the analysis provided by the evidence approach is more realistic and appropriate than the MAP approach for the image restoration problem. We furthermore study the relationship between the evidence and an iterative approach resulting from the set theoretic regularization approach for estimating the two hyperparameters, or their ratio, defined as the regularization parameter. Finally the proposed algorithms are tested experimentally.

209 citations
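
The sketch below implements a standard evidence/EM-style fixed point for the two hyperparameters of the Gaussian model y = Hx + n (noise precision beta, prior precision alpha on Cx). These are the generic hierarchical-Bayesian update rules for this model; the paper's exact expressions may differ in detail, so treat this as an illustration of the approach rather than the authors' algorithm.

```python
# Generic evidence/EM fixed-point updates for the hyperparameters of
# y = Hx + n with prior precision alpha on Cx and noise precision
# beta. Not necessarily the paper's exact update rules.
import numpy as np

def hyperparams_evidence(y, H, C, n_iter=50):
    n = y.size
    alpha, beta = 1.0, 1.0                    # crude initialization
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta * H.T @ H + alpha * C.T @ C)
        x = beta * Sigma @ H.T @ y            # posterior mean / MAP estimate
        # re-estimate both hyperparameters from the current posterior
        alpha = n / (x @ C.T @ C @ x + np.trace(C.T @ C @ Sigma))
        beta = n / (np.sum((y - H @ x) ** 2) + np.trace(H @ Sigma @ H.T))
    return x, alpha, beta

rng = np.random.default_rng(3)
n = 32
H = sum(np.eye(n, k=k) for k in (-1, 0, 1)) / 3.0   # toy blur
C = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # Laplacian prior
y = H @ np.sin(np.linspace(0, 3 * np.pi, n)) + 0.05 * rng.normal(size=n)
x_hat, alpha, beta = hyperparams_evidence(y, H, C)
print(f"alpha = {alpha:.4g}, beta = {beta:.4g}")
```

The ratio alpha/beta plays the role of the regularization parameter discussed in the abstract, so estimating the two hyperparameters simultaneously restores the image and selects the regularization.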


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, approximate inference, sampling methods, and the combination of models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a cloud-centric vision for a worldwide implementation of the Internet of Things (IoT), describe a Cloud implementation using Aneka based on the interaction of private and public Clouds, and conclude by expanding on the need for convergence of WSN, the Internet, and distributed computing directed at the technological research community.

9,593 citations

Journal ArticleDOI
TL;DR: A deep learning method for single image super-resolution (SR) is proposed that directly learns an end-to-end mapping between low- and high-resolution images.
Abstract: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.

6,122 citations
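
The architecture described in this abstract (patch extraction, non-linear mapping, reconstruction) is small enough to sketch directly. The PyTorch version below uses the commonly cited 9-1-5 kernel sizes with 64 and 32 feature maps; treat the exact hyperparameters as assumptions.

```python
# Sketch of the three-layer super-resolution CNN described in the
# abstract. Layer sizes follow the widely cited 9-1-5 / 64-32
# configuration and are illustrative.
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        # x: a low-resolution image upscaled (e.g., bicubic) to target size
        return self.body(x)

model = SRCNN()
lr_patch = torch.rand(1, 1, 33, 33)      # toy input patch
print(model(lr_patch).shape)             # torch.Size([1, 1, 33, 33])
```

All three layers are optimized jointly by minimizing pixel-wise MSE against high-resolution targets, which is the end-to-end property the abstract contrasts with traditional sparse-coding pipelines that handle each component separately.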