Author

Nikolas P. Galatsanos

Bio: Nikolas P. Galatsanos is an academic researcher from the University of Ioannina. The author has contributed to research in topics: Image restoration & Image processing. The author has an h-index of 36 and has co-authored 127 publications receiving 6,661 citations. Previous affiliations of Nikolas P. Galatsanos include Illinois Institute of Technology & University of Wisconsin-Madison.


Papers
Journal ArticleDOI
TL;DR: It was from here that "Bayesian" ideas first spread through the mathematical world, as Bayes's own article was ignored until 1780 and played no important role in scientific debate until the 20th century.
Abstract: The influence of this memoir was immense. It was from here that "Bayesian" ideas first spread through the mathematical world, as Bayes's own article was ignored until 1780 and played no important role in scientific debate until the 20th century. It was also this article of Laplace's that introduced the mathematical techniques for the asymptotic analysis of posterior distributions that are still employed today. And it was here that the earliest example of optimum estimation can be found: the derivation and characterization of an estimator that minimized a particular measure of posterior expected loss. After more than two centuries, we mathematicians and statisticians can not only recognize our roots in this masterpiece of our science, we can still learn from it.
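The "optimum estimation" mentioned above has a standard modern form. As a brief aside (the abstract does not say which loss function the memoir used), the Bayes estimator is the decision minimizing posterior expected loss:

```latex
% Bayes estimator: the decision d minimizing posterior expected loss.
% Under absolute-error loss the minimizer is the posterior median;
% under squared-error loss it is the posterior mean.
\hat{\theta}(x) = \arg\min_{d} \mathbb{E}\left[ L(\theta, d) \mid x \right]
               = \arg\min_{d} \int L(\theta, d)\, p(\theta \mid x)\, d\theta
```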

774 citations

Journal ArticleDOI
TL;DR: The ability of SVM to outperform several well-known methods developed for the widely studied problem of MC detection suggests that SVM is a promising technique for object detection in a medical imaging application.
Abstract: We investigate an approach based on support vector machines (SVMs) for detection of microcalcification (MC) clusters in digital mammograms, and propose a successive enhancement learning scheme for improved performance. SVM is a machine-learning method, based on the principle of structural risk minimization, which performs well when applied to data outside the training set. We formulate MC detection as a supervised-learning problem and apply SVM to develop the detection algorithm. We use the SVM to detect at each location in the image whether an MC is present or not. We tested the proposed method using a database of 76 clinical mammograms containing 1120 MCs. We use free-response receiver operating characteristic curves to evaluate detection performance, and compare the proposed algorithm with several existing methods. In our experiments, the proposed SVM framework outperformed all the other methods tested. In particular, a sensitivity as high as 94% was achieved by the SVM method at an error rate of one false-positive cluster per image. The ability of SVM to outperform several well-known methods developed for the widely studied problem of MC detection suggests that SVM is a promising technique for object detection in a medical imaging application.
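The detection scheme described here, a classifier applied at each image location, is easy to sketch. The following is a minimal illustration using scikit-learn's SVC rather than the authors' implementation; the window size, kernel, feature representation, and all data below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

def extract_windows(image, coords, half=4):
    """Flatten a (2*half+1) x (2*half+1) window around each (row, col)."""
    return np.array([
        image[r - half:r + half + 1, c - half:c + half + 1].ravel()
        for r, c in coords
    ])

# Toy stand-ins for labeled mammogram data (illustrative only).
rng = np.random.default_rng(0)
image = rng.random((128, 128))
pos = [(20, 20), (40, 60)]                # hypothetical MC locations
neg = [(80, 80), (100, 30), (60, 100)]    # hypothetical background
X_train = extract_windows(image, pos + neg)
y_train = np.array([1] * len(pos) + [0] * len(neg))

# Gaussian-kernel SVM classifier: MC present (1) vs. absent (0).
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)

# Slide over candidate locations; keep those the SVM labels as MC.
candidates = [(r, c) for r in range(8, 120, 8) for c in range(8, 120, 8)]
scores = clf.decision_function(extract_windows(image, candidates))
detections = [p for p, s in zip(candidates, scores) if s > 0]
```

Sweeping a threshold on the decision function, rather than using the hard labels, is what traces out a free-response ROC curve of the kind used in the paper's evaluation.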

574 citations

Journal ArticleDOI
TL;DR: An error analysis based on an objective mean-square-error (MSE) criterion is used to motivate regularization and two approaches for choosing the regularization parameter and estimating the noise variance are proposed.
Abstract: The application of regularization to ill-conditioned problems necessitates the choice of a regularization parameter which trades fidelity to the data with smoothness of the solution. The value of the regularization parameter depends on the variance of the noise in the data. The problem of choosing the regularization parameter and estimating the noise variance in image restoration is examined. An error analysis based on an objective mean-square-error (MSE) criterion is used to motivate regularization. Two approaches for choosing the regularization parameter and estimating the noise variance are proposed. The proposed and existing methods are compared and their relationship to linear minimum-mean-square-error filtering is examined. Experiments are presented that verify the theoretical results.
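The fidelity-versus-smoothness trade-off can be made concrete with a small sketch. The following uses generic Tikhonov regularization and the classical discrepancy principle to pick the parameter from a known noise variance; this is a standard rule chosen for illustration, not necessarily either of the paper's two proposed methods, and the operator and data are synthetic.

```python
import numpy as np

# Tikhonov restoration: minimize ||A x - y||^2 + lam * ||x||^2.
def tikhonov(A, y, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Discrepancy principle: pick lam so the residual power matches
# the expected noise power, sigma^2 * (number of data points).
def choose_lambda(A, y, sigma2, lams):
    target = sigma2 * y.size
    residual = lambda l: np.sum((A @ tikhonov(A, y, l) - y) ** 2)
    return min(lams, key=lambda l: abs(residual(l) - target))

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40)) / np.sqrt(60)   # toy ill-conditioned operator
x_true = rng.standard_normal(40)
sigma2 = 0.01
y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(60)

lam = choose_lambda(A, y, sigma2, np.logspace(-4, 1, 30))
x_hat = tikhonov(A, y, lam)
```

A larger lam smooths the solution at the cost of data fidelity; the discrepancy rule stops tightening the fit once the residual is no larger than what the noise alone would explain.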

551 citations

Journal ArticleDOI
TL;DR: The reconstruction of images from incomplete block discrete cosine transform (BDCT) data is examined and two methods are proposed for solving this regularized recovery problem based on the theory of projections onto convex sets (POCS) and the constrained least squares (CLS).
Abstract: The reconstruction of images from incomplete block discrete cosine transform (BDCT) data is examined. The problem is formulated as one of regularized image recovery. According to this formulation, the image in the decoder is reconstructed by using not only the transmitted data but also prior knowledge about the smoothness of the original image, which complements the transmitted data. Two methods are proposed for solving this regularized recovery problem. The first is based on the theory of projections onto convex sets (POCS) while the second is based on the constrained least squares (CLS) approach. For the POCS-based method, a new constraint set is defined that conveys smoothness information not captured by the transmitted BDCT coefficients, and the projection onto it is computed. For the CLS method an objective function is proposed that captures the smoothness properties of the original image. Iterative algorithms are introduced for its minimization. Experimental results are presented.
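The alternating structure of the POCS formulation is easy to sketch. Below is a minimal POCS-style loop: a smoothing step (a mild Gaussian blur, standing in for the paper's smoothness constraint set) alternates with projection onto the data-consistency set, which resets the transmitted block-DCT coefficients to their known values. Block size, smoothing strength, and iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

B = 8  # block size

def block_dct(img):
    out = np.empty_like(img)
    for i in range(0, img.shape[0], B):
        for j in range(0, img.shape[1], B):
            out[i:i+B, j:j+B] = dctn(img[i:i+B, j:j+B], norm="ortho")
    return out

def block_idct(coef):
    out = np.empty_like(coef)
    for i in range(0, coef.shape[0], B):
        for j in range(0, coef.shape[1], B):
            out[i:i+B, j:j+B] = idctn(coef[i:i+B, j:j+B], norm="ortho")
    return out

def pocs_recover(known_coef, known_mask, n_iter=20):
    """known_mask is True where a BDCT coefficient was transmitted."""
    x = block_idct(np.where(known_mask, known_coef, 0.0))
    for _ in range(n_iter):
        x = gaussian_filter(x, sigma=0.8)          # smoothness step
        c = block_dct(x)
        c[known_mask] = known_coef[known_mask]     # data consistency
        x = block_idct(c)
    return x
```

Because the data-consistency projection is the last operation of every iteration, the output always agrees exactly with the transmitted coefficients.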

428 citations

Journal ArticleDOI
TL;DR: A spatially adaptive image recovery algorithm is proposed based on the theory of projections onto convex sets that captures both the local statistical properties of the image and the human perceptual characteristics.
Abstract: At the present time, block-transform coding is probably the most popular approach for image compression. For this approach, the compressed images are decoded using only the transmitted transform data. We formulate image decoding as an image recovery problem. According to this approach, the decoded image is reconstructed using not only the transmitted data but, in addition, the prior knowledge that images before compression do not display between-block discontinuities. A spatially adaptive image recovery algorithm is proposed based on the theory of projections onto convex sets. Apart from the data constraint set, this algorithm uses another new constraint set that enforces between-block smoothness. The novelty of this set is that it captures both the local statistical properties of the image and the human perceptual characteristics. A simplified spatially adaptive recovery algorithm is also proposed, and the analysis of its computational complexity is presented. Numerical experiments are shown that demonstrate that the proposed algorithms work better than both the JPEG deblocking recommendation and our previous projection-based image decoding approach.
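The core of the new constraint set is smoothness across block boundaries. As a much-simplified illustration of that idea (the paper's actual projection is spatially adaptive and perceptually weighted; the 8x8 grid and the weight below are assumptions), one can pull the pixel pairs straddling each block boundary toward their common mean:

```python
import numpy as np

def smooth_block_boundaries(img, B=8, w=0.5):
    """Pull pixel pairs straddling B x B block boundaries toward their mean."""
    x = img.copy()
    h, wd = x.shape
    for j in range(B, wd, B):            # vertical boundaries
        mid = 0.5 * (x[:, j - 1] + x[:, j])
        x[:, j - 1] = (1 - w) * x[:, j - 1] + w * mid
        x[:, j] = (1 - w) * x[:, j] + w * mid
    for i in range(B, h, B):             # horizontal boundaries
        mid = 0.5 * (x[i - 1, :] + x[i, :])
        x[i - 1, :] = (1 - w) * x[i - 1, :] + w * mid
        x[i, :] = (1 - w) * x[i, :] + w * mid
    return x

deblocked = smooth_block_boundaries(np.random.default_rng(2).random((64, 64)))
```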

384 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered in this book, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Contents: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

Journal ArticleDOI
J.M. Shapiro
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and requires no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression, which is achieved via adaptive arithmetic coding.
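Concept (3), the successive-approximation quantization that makes the bit stream embedded, is the easiest piece to sketch in isolation. The toy example below refines a handful of coefficients one bit-plane at a time; the zerotree symbols, scanning order, and adaptive arithmetic coder of the real EZW are omitted, and the numbers are illustrative.

```python
import numpy as np

def bitplane_refine(coeffs, n_planes=6):
    """Successive-approximation quantization: one snapshot per bit-plane."""
    T = 2.0 ** np.floor(np.log2(np.abs(coeffs).max()))  # initial threshold
    recon = np.zeros_like(coeffs)
    snapshots = []
    for _ in range(n_planes):
        # Significance pass: newly significant coefficients enter at 1.5*T.
        newly = (np.abs(coeffs) >= T) & (recon == 0)
        recon[newly] = np.sign(coeffs[newly]) * 1.5 * T
        # Refinement pass: halve the uncertainty interval (+/- T/4).
        refine = recon != 0
        up = np.abs(coeffs[refine]) >= np.abs(recon[refine])
        recon[refine] += np.sign(recon[refine]) * np.where(up, T / 4, -T / 4)
        snapshots.append(recon.copy())
        T /= 2
    return snapshots  # snapshots[k]: decode after k+1 bit-planes

coeffs = np.array([63.0, -34.0, 10.0, 7.0, -2.0, 1.0])
for k, snap in enumerate(bitplane_refine(coeffs)):
    print(k, np.round(snap, 2))
```

Truncating after any pass still yields a usable, coarser reconstruction, which is the embedding property the abstract emphasizes.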

5,559 citations

Journal ArticleDOI
TL;DR: This paper addresses the classification of hyperspectral remote sensing images with support vector machines, assessing the potential of SVM classifiers in hyperdimensional feature spaces, and concludes that SVMs are a valid and effective alternative to conventional pattern recognition approaches.
Abstract: This paper addresses the problem of the classification of hyperspectral remote sensing images by support vector machines (SVMs). First, we propose a theoretical discussion and experimental analysis aimed at understanding and assessing the potentialities of SVM classifiers in hyperdimensional feature spaces. Then, we assess the effectiveness of SVMs with respect to conventional feature-reduction-based approaches and their performances in hypersubspaces of various dimensionalities. To sustain such an analysis, the performances of SVMs are compared with those of two other nonparametric classifiers (i.e., radial basis function neural networks and the K-nearest neighbor classifier). Finally, we study the potentially critical issue of applying binary SVMs to multiclass problems in hyperspectral data. In particular, four different multiclass strategies are analyzed and compared: the one-against-all, the one-against-one, and two hierarchical tree-based strategies. Different performance indicators have been used to support our experimental studies in a detailed and accurate way, i.e., the classification accuracy, the computational time, the stability to parameter setting, and the complexity of the multiclass architecture. The results obtained on a real Airborne Visible/Infrared Imaging Spectroradiometer hyperspectral dataset allow us to conclude that, whatever the multiclass strategy adopted, SVMs are a valid and effective alternative to conventional pattern recognition approaches (feature-reduction procedures combined with a classification method) for the classification of hyperspectral remote sensing data.
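Two of the four multiclass strategies compared here are available directly in common libraries, which makes the comparison straightforward to reproduce in miniature. The sketch below runs one-against-all and one-against-one SVMs on synthetic 200-band "pixels"; the real study used AVIRIS data, and the kernel and parameters here are illustrative, not the paper's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for hyperspectral pixels: 4 classes, 200 bands.
rng = np.random.default_rng(0)
n_per_class, n_bands, n_classes = 50, 200, 4
X = np.vstack([rng.normal(loc=k, scale=2.0, size=(n_per_class, n_bands))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

base = SVC(kernel="rbf", C=10.0, gamma="scale")
for name, clf in [("one-against-all", OneVsRestClassifier(base)),
                  ("one-against-one", OneVsOneClassifier(base))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {acc:.3f}")
```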

3,607 citations

Journal ArticleDOI
TL;DR: Variational inference (VI), including mean-field VI, approximates probability densities through optimization; it is used in many applications and tends to be faster than classical methods such as Markov chain Monte Carlo sampling.
Abstract: One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this article, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find a member of that family which is close to the target density. Closeness is measured by Kullback–Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data...
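The "closeness" criterion described above has a standard formulation, shown below for reference; this is the textbook identity the review builds on, written in generic notation for latent variables z and data x.

```latex
% Mean-field VI: restrict q to factorized densities q(z) = \prod_i q_i(z_i)
% and maximize the evidence lower bound (ELBO), which is equivalent to
% minimizing KL(q(z) || p(z|x)) because
%   \log p(x) = \mathrm{ELBO}(q) + \mathrm{KL}\big(q(z)\,\|\,p(z \mid x)\big).
\mathrm{ELBO}(q) = \mathbb{E}_{q}\big[\log p(x, z)\big] - \mathbb{E}_{q}\big[\log q(z)\big]
```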

3,421 citations

Book
19 Dec 2003
TL;DR: The MPEG-4 and H.264 video coding standards are discussed, with an overview of the underlying technologies and of their design and performance.
Abstract: Contents: About the Author; Foreword; Preface; Glossary; 1. Introduction; 2. Video Formats and Quality; 3. Video Coding Concepts; 4. The MPEG-4 and H.264 Standards; 5. MPEG-4 Visual; 6. H.264/MPEG-4 Part 10; 7. Design and Performance; 8. Applications and Directions; Bibliography; Index.

2,491 citations