
Showing papers on "Gaussian published in 2007"


01 Aug 2007
TL;DR: In this paper, it is shown that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
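The OMP recovery loop is short enough to sketch directly. Below is a minimal NumPy implementation in the paper's setting (random Gaussian measurement matrix, m-sparse signal); the constant in the measurement count and all variable names are illustrative, not taken from the paper.

```python
import numpy as np

def omp(Phi, y, m):
    """Recover an m-sparse x from y = Phi @ x by Orthogonal Matching Pursuit."""
    residual, support = y.copy(), []
    for _ in range(m):
        # Greedy step: pick the column most correlated with the residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares projection of y onto the selected columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Demo in the paper's regime: d = 1000, m = 10, n on the order of m ln d.
rng = np.random.default_rng(0)
d, m = 1000, 10
n = int(4 * m * np.log(d))                      # constant 4 chosen for the demo
Phi = rng.standard_normal((n, d)) / np.sqrt(n)  # random Gaussian measurements
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
print("max recovery error:", np.abs(omp(Phi, Phi @ x, m) - x).max())
```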

7,124 citations



Journal ArticleDOI
TL;DR: A library of Gaussian basis sets that has been specifically optimized to perform accurate molecular calculations based on density functional theory and can be used in first principles molecular dynamics simulations and is well suited for linear scaling calculations.
Abstract: We present a library of Gaussian basis sets that has been specifically optimized to perform accurate molecular calculations based on density functional theory. It targets a wide range of chemical environments, including the gas phase, interfaces, and the condensed phase. These generally contracted basis sets, which include diffuse primitives, are obtained minimizing a linear combination of the total energy and the condition number of the overlap matrix for a set of molecules with respect to the exponents and contraction coefficients of the full basis. Typically, for a given accuracy in the total energy, significantly fewer basis functions are needed in this scheme than in the usual split valence scheme, leading to a speedup for systems where the computational cost is dominated by diagonalization. More importantly, binding energies of hydrogen bonded complexes are of similar quality as the ones obtained with augmented basis sets, i.e., have a small (down to 0.2 kcal/mol) basis set superposition error, and the monomers have dipoles within 0.1 D of the basis set limit. However, contrary to typical augmented basis sets, there are no near linear dependencies in the basis, so that the overlap matrix is always well conditioned, also, in the condensed phase. The basis can therefore be used in first principles molecular dynamics simulations and is well suited for linear scaling calculations.
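The role of the overlap-matrix condition number in the optimization is easy to demonstrate on a toy set of s-type primitives sharing one center, for which the normalized-Gaussian overlap has a standard closed form. The penalty combination below only mimics the paper's "energy plus condition number" trade-off; the energy term and weight are placeholders.

```python
import numpy as np

def overlap_s_primitives(exponents):
    """Overlap matrix of normalized s-type Gaussians at one center:
    S_ab = (2*sqrt(a*b)/(a+b))**1.5 (closed form for 3D normalized Gaussians)."""
    a = np.asarray(exponents, dtype=float)
    return (2.0 * np.sqrt(np.outer(a, a)) / np.add.outer(a, a)) ** 1.5

def objective(exponents, energy, weight=1e-3):
    """Schematic objective: total energy plus a condition-number penalty
    (the exact functional form used in the paper is not reproduced here)."""
    S = overlap_s_primitives(exponents)
    return energy + weight * np.log(np.linalg.cond(S))

# Nearly degenerate exponents make S ill-conditioned; well-spread ones do not.
print(np.linalg.cond(overlap_s_primitives([0.5, 0.51, 0.52])))  # huge
print(np.linalg.cond(overlap_s_primitives([0.1, 1.0, 10.0])))   # modest
```

This is exactly the failure mode the abstract attributes to augmented basis sets: near linear dependencies drive the condition number up, while the optimized sets keep it bounded.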

2,700 citations


Journal ArticleDOI
TL;DR: The implementation of the penalized likelihood methods for estimating the concentration matrix in the Gaussian graphical model is nontrivial, but it is shown that the computation can be done effectively by taking advantage of the efficient maxdet algorithm developed in convex optimization.
Abstract: SUMMARY We propose penalized likelihood methods for estimating the concentration matrix in the Gaussian graphical model. The methods lead to a sparse and shrinkage estimator of the concentration matrix that is positive definite, and thus conduct model selection and estimation simultaneously. The implementation of the methods is nontrivial because of the positive definite constraint on the concentration matrix, but we show that the computation can be done effectively by taking advantage of the efficient maxdet algorithm developed in convex optimization. We propose a BIC-type criterion for the selection of the tuning parameter in the penalized likelihood methods. The connection between our methods and existing methods is illustrated. Simulations and real examples demonstrate the competitive performance of the new methods.
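As a sketch of the workflow: the paper solves the penalized problem with the maxdet algorithm, but scikit-learn's graphical lasso (an l1-penalized variant) can stand in to illustrate selecting the tuning parameter with a BIC-type score. The data, the alpha grid, and the exact complexity count are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))      # placeholder data
N, p = X.shape
S = np.cov(X, rowvar=False)

def bic_score(Omega, S, N):
    # Gaussian log-likelihood (up to constants) plus a BIC-type complexity
    # term counting nonzero off-diagonal entries of the concentration matrix.
    sign, logdet = np.linalg.slogdet(Omega)
    loglik = N / 2 * (logdet - np.trace(S @ Omega))
    df = np.count_nonzero(np.triu(Omega, k=1))
    return -2 * loglik + np.log(N) * df

best = min((bic_score(GraphicalLasso(alpha=a).fit(X).precision_, S, N), a)
           for a in [0.01, 0.05, 0.1, 0.2, 0.5])
print("BIC-selected alpha:", best[1])
```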

1,824 citations


Journal ArticleDOI
TL;DR: The Gaussian-4 theory (G4 theory) for the calculation of energies of compounds containing first- (Li-F), second- (Na-Cl), and third-row main group (K, Ca, and Ga-Kr) atoms is presented, and a significant improvement is found for 79 nonhydrogen systems.
Abstract: The Gaussian-4 theory (G4 theory) for the calculation of energies of compounds containing first- (Li–F), second- (Na–Cl), and third-row main group (K, Ca, and Ga–Kr) atoms is presented. This theoretical procedure is the fourth in the Gaussian-n series of quantum chemical methods based on a sequence of single point energy calculations. The G4 theory modifies the Gaussian-3 (G3) theory in five ways. First, an extrapolation procedure is used to obtain the Hartree-Fock limit for inclusion in the total energy calculation. Second, the d-polarization sets are increased to 3d on the first-row atoms and to 4d on the second-row atoms, with reoptimization of the exponents for the latter. Third, the QCISD(T) method is replaced by the CCSD(T) method for the highest level of correlation treatment. Fourth, optimized geometries and zero-point energies are obtained with the B3LYP density functional. Fifth, two new higher level corrections are added to account for deficiencies in the energy calculations. The new method is ...
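The first modification, extrapolating Hartree-Fock energies to the basis-set limit, is easy to illustrate with a two-point exponential formula of the form E(n) = E_CBS + B exp(-alpha*n). The value alpha = 1.63 is the one commonly quoted for G4; treat it, and the sample energies below, as illustrative rather than authoritative.

```python
import numpy as np

def hf_cbs_two_point(e4, e5, alpha=1.63):
    """Two-point HF basis-set-limit extrapolation, E(n) = E_CBS + B*exp(-alpha*n),
    solved in closed form from energies at n = 4 and n = 5."""
    return (e5 - e4 * np.exp(-alpha)) / (1.0 - np.exp(-alpha))

# Illustrative (made-up) HF energies in hartree for n = 4, 5:
e4, e5 = -76.066676, -76.067090
print("E_HF(CBS) ~", hf_cbs_two_point(e4, e5))
```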

1,733 citations


Posted Content
01 Jan 2007
TL;DR: In this article, the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse is formulated as a maximum likelihood problem with an added l1-norm penalty term.
Abstract: We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l1-norm penalized regression. Our second algorithm, based on Nesterov’s first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright and Jordan [2006]), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for the binary case. We test our algorithms on synthetic data, as well as on gene expression and senate voting records data.
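The interpretation of the first algorithm as recursive l1-norm penalized regression suggests the familiar neighborhood-selection picture: regress each variable on all the others with a lasso penalty and read off the sparsity pattern. The sketch below illustrates that picture with scikit-learn; it is not the paper's block coordinate descent itself, and the symmetrization rule and alpha are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_pattern(X, alpha=0.1):
    """Estimate the sparsity pattern of the inverse covariance by l1-penalized
    regression of each column on the remaining ones (one lasso per node)."""
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        adj[j, others] = coef != 0
    # Symmetrize with an OR rule (an AND rule is the common alternative).
    return adj | adj.T

X = np.random.default_rng(0).standard_normal((500, 8))
print(neighborhood_pattern(X).astype(int))
```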

1,172 citations


Journal ArticleDOI
01 Apr 2007
TL;DR: A programming-by-demonstration framework for generically extracting the relevant features of a given task and for addressing the problem of generalizing the acquired knowledge to different contexts is presented.
Abstract: We present a programming-by-demonstration framework for generically extracting the relevant features of a given task and for addressing the problem of generalizing the acquired knowledge to different contexts. We validate the architecture through a series of experiments, in which a human demonstrator teaches a humanoid robot simple manipulatory tasks. A probability-based estimation of the relevance is suggested by first projecting the motion data onto a generic latent space using principal component analysis. The resulting signals are encoded using a mixture of Gaussian/Bernoulli distributions (Gaussian mixture model/Bernoulli mixture model). This provides a measure of the spatio-temporal correlations across the different modalities collected from the robot, which can be used to determine a metric of the imitation performance. The trajectories are then generalized using Gaussian mixture regression. Finally, we analytically compute the trajectory which optimizes the imitation metric and use this to generalize the skill to different contexts.
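The generalization step, Gaussian mixture regression, conditions a joint GMM over (time, position) on time. A minimal sketch with scikit-learn, using a 1-D input and output for readability (the paper works in a PCA latent space; the demonstration data below are synthetic):

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Joint data: column 0 is time t, column 1 is the demonstrated position x.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)
x = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
gmm = GaussianMixture(n_components=5, random_state=0).fit(np.column_stack([t, x]))

def gmr(gmm, t_query):
    """Gaussian mixture regression: E[x | t] from a joint GMM over (t, x)."""
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    h = np.array([w[k] * norm.pdf(t_query, mu[k, 0], np.sqrt(cov[k, 0, 0]))
                  for k in range(len(w))])
    h /= h.sum(axis=0)                  # responsibilities per component
    cond = np.array([mu[k, 1] + cov[k, 1, 0] / cov[k, 0, 0] * (t_query - mu[k, 0])
                     for k in range(len(w))])
    return (h * cond).sum(axis=0)       # mixture of conditional means

print(gmr(gmm, np.array([0.25, 0.5, 0.75])))  # roughly [1, 0, -1]
```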

1,089 citations


Journal ArticleDOI
TL;DR: It is shown that finding small solutions to random modular linear equations is at least as hard as approximating several lattice problems in the worst case within a factor almost linear in the dimension of the lattice, and it is proved that the distribution one obtains after adding Gaussian noise to a lattice has the property that the noise vector, conditioned on the final value, behaves in many respects like the original Gaussian noise vector.
Abstract: We show that finding small solutions to random modular linear equations is at least as hard as approximating several lattice problems in the worst case within a factor almost linear in the dimension of the lattice. The lattice problems we consider are the shortest vector problem, the shortest independent vectors problem, the covering radius problem, and the guaranteed distance decoding problem (a variant of the well-known closest vector problem). The approximation factor we obtain is $n \log^{O(1)} n$ for all four problems. This greatly improves on all previous work on the subject starting from Ajtai's seminal paper [Generating hard instances of lattice problems, in Complexity of Computations and Proofs, Quad. Mat. 13, Dept. Math., Seconda Univ. Napoli, Caserta, Italy, 2004, pp. 1-32] up to the strongest previously known results by Micciancio [SIAM J. Comput., 34 (2004), pp. 118-169]. Our results also bring us closer to the limit where the problems are no longer known to be in NP ∩ coNP. Our main tools are Gaussian measures on lattices and the high-dimensional Fourier transform. We start by defining a new lattice parameter which determines the amount of Gaussian noise that one has to add to a lattice in order to get close to a uniform distribution. In addition to yielding quantitatively much stronger results, the use of this parameter allows us to simplify many of the complications in previous work. Our technical contributions are twofold. First, we show tight connections between this new parameter and existing lattice parameters. One such important connection is between this parameter and the length of the shortest set of linearly independent vectors. Second, we prove that the distribution that one obtains after adding Gaussian noise to the lattice has the following interesting property: the distribution of the noise vector when conditioning on the final value behaves in many respects like the original Gaussian noise vector. In particular, its moments remain essentially unchanged.
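The smoothing idea is easy to visualize in one dimension: adding Gaussian noise of standard deviation sigma to a point of the lattice Z and reducing mod 1 gives a distribution on [0, 1) that flattens rapidly once sigma passes a threshold; the paper's new parameter generalizes exactly this to arbitrary lattices. A toy demo, not the paper's construction:

```python
import numpy as np

def mod1_nonuniformity(sigma, n=200_000, bins=50, seed=0):
    """Sample Gaussian noise mod the lattice Z and measure the histogram's
    deviation from uniform; small values mean the lattice is 'smoothed'."""
    rng = np.random.default_rng(seed)
    frac = (sigma * rng.standard_normal(n)) % 1.0
    hist, _ = np.histogram(frac, bins=bins, range=(0, 1), density=True)
    return np.abs(hist - 1.0).max()

for sigma in [0.1, 0.3, 0.5, 1.0]:
    print(sigma, mod1_nonuniformity(sigma))
# The true deviation falls off roughly like exp(-2*pi^2*sigma^2); past
# sigma ~ 0.5 the wrapped Gaussian is essentially uniform on [0, 1), and
# what remains in the printout is sampling noise.
```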

793 citations


Journal ArticleDOI
01 Oct 2007
TL;DR: It is shown that there is no source-channel separation theorem even when the individual sources are independent, and joint source-channel strategies are developed that are optimal when the structure of the channel probability transition matrix and the function are appropriately matched.
Abstract: The problem of reliably reconstructing a function of sources over a multiple-access channel (MAC) is considered. It is shown that there is no source-channel separation theorem even when the individual sources are independent. Joint source-channel strategies are developed that are optimal when the structure of the channel probability transition matrix and the function are appropriately matched. Even when the channel and function are mismatched, these computation codes often outperform separation-based strategies. Achievable distortions are given for the distributed refinement of the sum of Gaussian sources over a Gaussian multiple-access channel with a joint source-channel lattice code. Finally, computation codes are used to determine the multicast capacity of finite-field multiple-access networks, thus linking them to network coding.
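The core observation, that a Gaussian MAC naturally computes the sum of its inputs, already shows up in an uncoded simulation: each sender scales its Gaussian source onto the channel, and the receiver estimates the sum from the single channel output rather than decoding the sources individually. A toy sketch (uncoded transmission, not the paper's joint source-channel lattice code; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 4, 100_000            # senders, channel uses
P, N0 = 1.0, 0.5             # per-sender power, channel noise variance

s = rng.standard_normal((K, n))          # unit-variance Gaussian sources
x = np.sqrt(P) * s                       # uncoded transmission, power P each
y = x.sum(axis=0) + np.sqrt(N0) * rng.standard_normal(n)   # MAC output

# Linear MMSE estimate of the sum f(s) = s_1 + ... + s_K from y.
target = s.sum(axis=0)
a = np.sqrt(P) * K / (P * K + N0)        # E[target*y] / E[y^2] for this model
mse = np.mean((a * y - target) ** 2)
print("per-symbol MSE of the sum estimate:", mse)  # well below Var(sum) = K
```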

758 citations


Journal ArticleDOI
02 Jul 2007
TL;DR: The Gaussian sum-quadrature Kalman filter (GS-QKF) as mentioned in this paper approximates the predicted and posterior densities as a finite number of weighted sums of Gaussian densities.
Abstract: In this paper, a new version of the quadrature Kalman filter (QKF) is developed theoretically and tested experimentally. We first derive the new QKF for nonlinear systems with additive Gaussian noise by linearizing the process and measurement functions using statistical linear regression (SLR) through a set of Gauss-Hermite quadrature points that parameterize the Gaussian density. Moreover, we discuss how the new QKF can be extended and modified to take into account specific details of a given application. We then go on to extend the use of the new QKF to discrete-time, nonlinear systems with additive, possibly non-Gaussian noise. A bank of parallel QKFs, called the Gaussian sum-quadrature Kalman filter (GS-QKF), approximates the predicted and posterior densities as a finite number of weighted sums of Gaussian densities. The weights are obtained from the residuals of the QKFs. Three different Gaussian mixture reduction techniques are presented to alleviate the growing number of the Gaussian sum terms inherent to the GS-QKFs. Simulation results exhibit a significant improvement of the GS-QKFs over other nonlinear filtering approaches, namely, the basic bootstrap (particle) filters and Gaussian-sum extended Kalman filters, in solving nonlinear non-Gaussian filtering problems.
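The statistical linear regression at the heart of the QKF reduces, in one dimension, to propagating Gauss-Hermite quadrature points through the nonlinearity and matching moments. A minimal sketch of that step (not the full filter; the quadrature order and test function are arbitrary choices):

```python
import numpy as np

def gh_moments(f, mu, sigma, order=5):
    """Mean and variance of f(X), X ~ N(mu, sigma^2), via Gauss-Hermite
    quadrature -- the moment-matching step inside a quadrature Kalman filter."""
    xi, w = np.polynomial.hermite.hermgauss(order)   # nodes for weight exp(-x^2)
    x = mu + np.sqrt(2.0) * sigma * xi               # change of variables
    w = w / np.sqrt(np.pi)                           # weights now sum to 1
    m = np.sum(w * f(x))
    v = np.sum(w * (f(x) - m) ** 2)
    return m, v

# Example: a quadratic measurement function, where a first-order (EKF-style)
# linearization at mu = 0 would miss the mean shift entirely.
print(gh_moments(lambda x: x**2, mu=0.0, sigma=1.0))  # ~ (1.0, 2.0), exact here
```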

523 citations


Journal ArticleDOI
TL;DR: The theory of continuous-variable entanglement with special emphasis on foundational aspects, conceptual structures and mathematical methods has been studied in this paper, where the most important results on the separability and distillability of Gaussian states are discussed.
Abstract: We review the theory of continuous-variable entanglement with special emphasis on foundational aspects, conceptual structures and mathematical methods. Much attention is devoted to the discussion of separability criteria and entanglement properties of Gaussian states, for their great practical relevance in applications to quantum optics and quantum information, as well as for the very clean framework that they allow for the study of the structure of nonlocal correlations. We give a self-contained introduction to phase-space and symplectic methods in the study of Gaussian states of infinite-dimensional bosonic systems. We review the most important results on the separability and distillability of Gaussian states and discuss the main properties of bipartite entanglement. These include the extremal entanglement, minimal and maximal, of two-mode mixed Gaussian states, the ordering of two-mode Gaussian states according to different measures of entanglement, the unitary (reversible) localization and the scaling of bipartite entanglement in multimode Gaussian states. We then discuss recent advances in the understanding of entanglement sharing in multimode Gaussian states, including the proof of the monogamy inequality of distributed entanglement for all Gaussian states. Multipartite entanglement of Gaussian states is reviewed by discussing its qualification by different classes of separability, and the main consequences of the monogamy inequality, such as the quantification of genuine tripartite entanglement in three-mode Gaussian states, the promiscuous nature of entanglement sharing in symmetric Gaussian states and the possible coexistence of unlimited bipartite and multipartite entanglement. We finally review recent advances and discuss possible perspectives on the qualification and quantification of entanglement in non-Gaussian states, a field of research that is to a large extent yet to be explored.

Journal ArticleDOI
30 Jul 2007
TL;DR: The message-passing approach to model-based signal processing is developed with a focus on Gaussian message passing in linear state-space models, which includes recursive least squares, linear minimum-mean-squared-error estimation, and Kalman filtering algorithms.
Abstract: The message-passing approach to model-based signal processing is developed with a focus on Gaussian message passing in linear state-space models, which includes recursive least squares, linear minimum-mean-squared-error estimation, and Kalman filtering algorithms. Tabulated message computation rules for the building blocks of linear models allow us to compose a variety of such algorithms without additional derivations or computations. Beyond the Gaussian case, it is emphasized that the message-passing approach encourages us to mix and match different algorithmic techniques, which is exemplified by two different approaches - steepest descent and expectation maximization - to message passing through a multiplier node.
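For reference, the Kalman filter that these tabulated message rules reproduce is only a few lines of NumPy: the pairs (mean, covariance) below are exactly the Gaussian forward messages on the factor graph. This is a generic textbook implementation, not the paper's notation:

```python
import numpy as np

def kalman_step(m, P, y, A, C, Q, R):
    """One predict/update cycle for x_k = A x_{k-1} + w, y_k = C x_k + v."""
    # Predict: push the Gaussian message through the state-transition node.
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update: combine with the Gaussian message from the observation node.
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    m_new = m_pred + K @ (y - C @ m_pred)
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new

# Scalar random walk observed in noise:
m, P = np.zeros(1), np.eye(1)
A = C = np.eye(1); Q = 0.01 * np.eye(1); R = 0.1 * np.eye(1)
for y in [0.9, 1.1, 1.0]:
    m, P = kalman_step(m, P, np.array([y]), A, C, Q, R)
print(m, P)
```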

Journal ArticleDOI
TL;DR: The d-dimensional Gaussian free field (GFF), as mentioned in this paper, is a d-dimensional-time analog of Brownian motion: just as Brownian motion is the limit of the simple random walk (when time and space are appropriately scaled), the GFF is the limit of many incrementally varying random functions on d-dimensional grids.
Abstract: The d-dimensional Gaussian free field (GFF), also called the (Euclidean bosonic) massless free field, is a d-dimensional-time analog of Brownian motion. Just as Brownian motion is the limit of the simple random walk (when time and space are appropriately scaled), the GFF is the limit of many incrementally varying random functions on d-dimensional grids. We present an overview of the GFF and some of the properties that are useful in light of recent connections between the GFF and the Schramm–Loewner evolution.
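A discrete GFF on a grid with zero (Dirichlet) boundary is simply a Gaussian vector whose covariance is the inverse of the grid graph Laplacian, so it can be sampled exactly; a small dense-matrix sketch (fine at this grid size):

```python
import numpy as np

def sample_gff(n, seed=0):
    """Sample the discrete Gaussian free field on an n x n grid with zero
    boundary: covariance = inverse of the grid graph Laplacian."""
    idx = lambda i, j: i * n + j
    L = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            L[idx(i, j), idx(i, j)] = 4.0
            for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                if 0 <= i + di < n and 0 <= j + dj < n:
                    L[idx(i, j), idx(i + di, j + dj)] = -1.0
    # If L = C C^T (Cholesky) and z is standard normal, then h = C^{-T} z
    # has covariance C^{-T} C^{-1} = L^{-1}, i.e. h is a discrete GFF.
    C = np.linalg.cholesky(L)
    z = np.random.default_rng(seed).standard_normal(n * n)
    return np.linalg.solve(C.T, z).reshape(n, n)

field = sample_gff(32)
print(field.shape, field.std())
```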

Journal ArticleDOI
TL;DR: The least-squares Gaussian approximations of the diffraction-limited 2D-3D paraxial-nonparaxial point-spread functions (PSFs) of the wide field fluorescence microscope (WFFM), the laser scanning confocal microscope (LSCM), and the disk scanning conf focal microscope (DSCM) are studied.
Abstract: We comprehensively study the least-squares Gaussian approximations of the diffraction-limited 2D-3D paraxial-nonparaxial point-spread functions (PSFs) of the wide field fluorescence microscope (WFFM), the laser scanning confocal microscope (LSCM), and the disk scanning confocal microscope (DSCM). The PSFs are expressed using the Debye integral. Under an L∞ constraint imposing peak matching, optimal and near-optimal Gaussian parameters are derived for the PSFs. With an L1 constraint imposing energy conservation, an optimal Gaussian parameter is derived for the 2D paraxial WFFM PSF. We found that (1) the 2D approximations are all very accurate; (2) no accurate Gaussian approximation exists for 3D WFFM PSFs; and (3) with typical pinhole sizes, the 3D approximations are accurate for the DSCM and nearly perfect for the LSCM. All the Gaussian parameters derived in this study are in explicit analytical form, allowing their direct use in practical applications.
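The flavor of these results can be reproduced numerically: fit a Gaussian to the 2D paraxial in-focus wide-field PSF, which is the Airy pattern |2 J1(r)/r|^2, under a peak-matching constraint. This is a fitting sketch only; the paper derives its optimal parameters analytically from the Debye integral.

```python
import numpy as np
from scipy.special import j1
from scipy.integrate import trapezoid
from scipy.optimize import minimize_scalar

r = np.linspace(1e-6, 10.0, 2000)
psf = (2 * j1(r) / r) ** 2            # 2D paraxial in-focus PSF (Airy), peak 1

# Peak-matched Gaussian exp(-r^2/(2 s^2)); least squares over the 2D plane
# weights the radial integrand by r (area element r dr dtheta).
def loss(s):
    return trapezoid((psf - np.exp(-r**2 / (2 * s**2)))**2 * r, r)

s_opt = minimize_scalar(loss, bounds=(0.3, 3.0), method="bounded").x
print("least-squares Gaussian sigma (optical units):", round(s_opt, 3))
```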

Journal ArticleDOI
TL;DR: This paper is mainly devoted to a precise analysis of what kind of penalties should be used in order to perform model selection via the minimization of a penalized least-squares type criterion within some general Gaussian framework including the classical ones.
Abstract: This paper is mainly devoted to a precise analysis of what kind of penalties should be used in order to perform model selection via the minimization of a penalized least-squares type criterion within some general Gaussian framework including the classical ones. As compared to our previous paper on this topic (Birge and Massart in J. Eur. Math. Soc. 3, 203-268 (2001)), more elaborate forms of the penalties are given which are shown to be, in some sense, optimal. We indeed provide more precise upper bounds for the risk of the penalized estimators and lower bounds for the penalty terms, showing that the use of smaller penalties may lead to disastrous results. These lower bounds may also be used to design a practical strategy that allows one to estimate the penalty from the data when the amount of noise is unknown. We provide an illustration of the method for the problem of estimating a piecewise constant signal in Gaussian noise when neither the number nor the location of the change points is known.
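The closing example, a piecewise-constant signal with unknown change points, can be run directly: choose the segmentation minimizing residual sum of squares plus a penalty growing with the number of segments D roughly like sigma^2 * D * (a + b log(n/D)), the shape associated with this line of work. The constants a, b below are illustrative, not the paper's calibrated values. A dynamic-programming sketch:

```python
import numpy as np

def best_segmentations(y, Dmax):
    """Dynamic program: minimal RSS over all splits of y into exactly D segments."""
    n = len(y)
    c1 = np.concatenate([[0.0], np.cumsum(y)])
    c2 = np.concatenate([[0.0], np.cumsum(y**2)])
    def rss(i, j):  # residual sum of squares of y[i:j] around its mean
        s = c1[j] - c1[i]
        return (c2[j] - c2[i]) - s * s / (j - i)
    F = np.full((Dmax + 1, n + 1), np.inf)
    F[0, 0] = 0.0
    for D in range(1, Dmax + 1):
        for j in range(D, n + 1):
            F[D, j] = min(F[D - 1, i] + rss(i, j) for i in range(D - 1, j))
    return F[1:, n]                      # best RSS for D = 1..Dmax segments

rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(40), 2.0 * np.ones(30), 0.5 * np.ones(30)])
y += 0.5 * rng.standard_normal(y.size)
n, sigma2, Dmax = len(y), 0.25, 8
D = np.arange(1, Dmax + 1)
pen = sigma2 * D * (2.0 + 2.0 * np.log(n / D))   # illustrative constants a=b=2
crit = best_segmentations(y, Dmax) + pen
print("selected number of segments:", D[np.argmin(crit)])  # typically 3 here
```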


Proceedings ArticleDOI
17 Jun 2007
TL;DR: Tractable lower and upper bounds on the partition function of models based on filter outputs are presented, together with efficient learning algorithms that do not require any sampling; applying the results to previous models shows that the nonintuitive features learned are not an artifact of the learning process but rather capture robust properties of natural images.
Abstract: Many low-level vision algorithms assume a prior probability over images, and there has been great interest in trying to learn this prior from examples. Since images are very non-Gaussian, high dimensional, continuous signals, learning their distribution presents a tremendous computational challenge. Perhaps the most successful recent algorithm is the Fields of Experts (FOE) [20] model which has shown impressive performance by modeling image statistics with a product of potentials defined on filter outputs. However, as in previous models of images based on filter outputs [30], calculating the probability of an image given the model requires evaluating an intractable partition function. This makes learning very slow (it requires Monte-Carlo sampling at every step) and makes it virtually impossible to compare the likelihood of two different models. Given this computational difficulty, it is hard to say whether nonintuitive features learned by such models represent a true property of natural images or an artifact of the approximations used during learning. In this paper we present (1) tractable lower and upper bounds on the partition function of models based on filter outputs and (2) efficient learning algorithms that do not require any sampling. Our results are based on recent results in machine learning that deal with Gaussian potentials. We extend these results to non-Gaussian potentials and derive a novel, basis rotation algorithm for approximating the maximum likelihood filters. Our results allow us to (1) rigorously compare the likelihood of different models and (2) calculate high likelihood models of natural image statistics in a matter of minutes. Applying our results to previous models shows that the nonintuitive features are not an artifact of the learning process but rather are capturing robust properties of natural images.
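The Gaussian case that anchors these bounds is the one where the partition function is available in closed form: for a density proportional to exp(-x'Jx/2), log Z = (n/2) log(2*pi) - (1/2) log det J. The paper's contribution is to sandwich non-Gaussian filter potentials between such tractable Gaussians; the snippet below shows only the closed-form Gaussian ingredient, not the paper's bound.

```python
import numpy as np

def gaussian_log_partition(J):
    """log Z for p(x) = exp(-0.5 x'Jx) / Z with J positive definite:
    Z = (2*pi)^{n/2} / sqrt(det J)."""
    n = J.shape[0]
    sign, logdet = np.linalg.slogdet(J)
    assert sign > 0, "J must be positive definite"
    return 0.5 * n * np.log(2 * np.pi) - 0.5 * logdet

J = np.array([[2.0, -0.5],
              [-0.5, 1.0]])
print(gaussian_log_partition(J))
```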

Proceedings ArticleDOI
24 Jun 2007
TL;DR: It is shown that the optimal communication strategy in all cases is beamforming.
Abstract: A Gaussian MISO (multiple input single output) channel is considered where a transmitter is communicating to a receiver in the presence of an eavesdropper. The transmitter is equipped with multiple antennas, while the receiver and the eavesdropper each have a single antenna. The transmitter maximizes the communication rate, while concealing the message from the eavesdropper. The channel input is restricted to Gaussian signalling, with no preprocessing of information. For these channel inputs, and under different channel fading assumptions, optimal transmission strategies are found, in terms of the input covariance matrices. It is shown that the optimal communication strategy in all cases is beamforming.
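For fixed channels the optimal beamformer has a clean numerical characterization: with unit-norm w, the ratio (1 + P|h^H w|^2) / (1 + P|g^H w|^2) is a generalized Rayleigh quotient, maximized by the principal generalized eigenvector of the pencil (I + P h h^H, I + P g g^H). This formulation is standard for MISO wiretap channels and is used here as a hedged sketch, not as the paper's derivation; channels and power are made up.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
nt, P = 4, 1.0
h = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)  # main channel
g = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)  # eavesdropper channel

A = np.eye(nt) + P * np.outer(h, h.conj())   # numerator quadratic form
B = np.eye(nt) + P * np.outer(g, g.conj())   # denominator quadratic form
vals, vecs = eigh(A, B)                      # Hermitian generalized eigenproblem
w = vecs[:, -1]
w /= np.linalg.norm(w)                       # principal generalized eigenvector

secrecy_rate = (np.log2(1 + P * abs(h.conj() @ w) ** 2)
                - np.log2(1 + P * abs(g.conj() @ w) ** 2))
print("secrecy rate (bits/channel use):", max(secrecy_rate, 0.0))
```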

Journal ArticleDOI
TL;DR: The paper shows how an easily computed upper bound can be used as a pair-selection criterion which avoids the anomalies of the earlier approaches and proposes that a key consideration should be the Kullback-Leibler (KL) discrimination of the reduced mixture with respect to the original mixture.
Abstract: A common problem in multi-target tracking is to approximate a Gaussian mixture by one containing fewer components; similar problems can arise in integrated navigation. A common approach is successively to merge pairs of components, replacing the pair with a single Gaussian component whose moments up to second order match those of the merged pair. Salmond [1] and Williams [2, 3] have each proposed algorithms along these lines, but using different criteria for selecting the pair to be merged at each stage. The paper shows how under certain circumstances each of these pair-selection criteria can give rise to anomalous behaviour, and proposes that a key consideration should be the Kullback-Leibler (KL) discrimination of the reduced mixture with respect to the original mixture. Although computing this directly would normally be impractical, the paper shows how an easily computed upper bound can be used as a pair-selection criterion which avoids the anomalies of the earlier approaches. The behaviour of the three algorithms is compared using a high-dimensional example drawn from terrain-referenced navigation.
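Both the moment-matched merge and a KL-based merge cost are short formulas. The sketch below uses the cost B = (1/2)[(w1+w2) log det P - w1 log det P1 - w2 log det P2], with P the moment-matched covariance, which is the form reported for this upper-bound approach (treat the exact attribution as an assumption); at each reduction stage one merges the pair with the smallest B.

```python
import numpy as np

def moment_match(w1, m1, P1, w2, m2, P2):
    """Single Gaussian matching the first two moments of a weighted pair."""
    w = w1 + w2
    a1, a2 = w1 / w, w2 / w
    m = a1 * m1 + a2 * m2
    d = (m1 - m2).reshape(-1, 1)
    P = a1 * P1 + a2 * P2 + a1 * a2 * (d @ d.T)
    return w, m, P

def merge_cost(w1, m1, P1, w2, m2, P2):
    """Easily computed upper bound on the KL discrimination incurred by
    replacing the pair with its moment-matched merge."""
    w, m, P = moment_match(w1, m1, P1, w2, m2, P2)
    ld = lambda M: np.linalg.slogdet(M)[1]
    return 0.5 * (w * ld(P) - w1 * ld(P1) - w2 * ld(P2))

# Nearby components are cheap to merge; distant ones are expensive.
I = np.eye(2)
print(merge_cost(0.5, np.zeros(2), I, 0.5, 0.2 * np.ones(2), I))
print(merge_cost(0.5, np.zeros(2), I, 0.5, 3.0 * np.ones(2), I))
```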

01 Jan 2007
TL;DR: The approximation tool for latent GMRF models is introduced and the approximation for the posterior of the hyperparameters θ in equation (1) is shown to give extremely accurate results in a fraction of the computing time used by MCMC algorithms.
Abstract: This thesis consists of five papers, presented in chronological order. Their content is summarised in this section. Paper I introduces the approximation tool for latent GMRF models and discusses, in particular, the approximation for the posterior of the hyperparameters θ in equation (1). It is shown that this approximation is indeed very accurate, as even long MCMC runs cannot detect any error in it. A Gaussian approximation to the density of χi|θ, y is also discussed. This appears to give reasonable results and is very fast to compute. However, slight errors are detected when comparing the approximation with long MCMC runs; these are mostly due to the fact that a possibly skewed density is approximated via a symmetric one. Paper I also presents some details about sparse matrix algorithms. The core of the thesis is presented in Paper II. Here most of the remaining issues present in Paper I are solved. Three different approximations for χi|θ, y, with different degrees of accuracy and computational cost, are described. Moreover, ways to assess the approximation error and considerations about the asymptotic behaviour of the approximations are also discussed. Through a series of examples covering a wide range of commonly used latent GMRF models, the approximations are shown to give extremely accurate results in a fraction of the computing time used by MCMC algorithms. Paper III applies the same ideas as Paper II to generalised linear mixed models where χ represents a latent variable at n spatial sites on a two-dimensional domain. Of these n sites, k (with n >> k) are observed through data. The n sites are assumed to be on a regular grid and wrapped on a torus. For the class of models described in Paper III the computations are based on the discrete Fourier transform instead of sparse matrices. Paper III also illustrates how the marginal likelihood π(y) can be approximated, provides approximate strategies for Bayesian outlier detection, and performs approximate evaluation of spatial experimental designs. Paper IV presents yet another application of the ideas in Paper II. Here approximate techniques are used to do inference on multivariate stochastic volatility models, a class of models widely used in financial applications. Paper IV also discusses problems deriving from the increased dimension of the parameter vector θ, a condition which makes all numerical integration more computationally intensive. Different approximations for the posterior marginals of the parameters θ, π(θi|y), are also introduced. Approximations to the marginal likelihood π(y) are used in order to perform model comparison. Finally, Paper V is a manual for a program, named inla, which implements all approximations described in Paper II. A large series of worked examples, covering many well-known models, illustrates the use and the performance of the inla program. This program is a valuable instrument, since it makes most of the Bayesian inference techniques described in this thesis easily available to everyone.
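The Gaussian approximation used throughout can be demonstrated in one dimension: approximate a non-Gaussian posterior by a normal centered at the mode, with variance equal to the inverse curvature of the negative log-density there. The toy model below (Poisson likelihood, Gaussian prior) is an illustrative assumption, not one of the thesis examples.

```python
import numpy as np
from scipy.optimize import minimize_scalar

y_obs, prior_var = 3, 1.0   # toy model: y ~ Poisson(exp(x)), x ~ N(0, prior_var)

def neg_log_post(x):
    return -(y_obs * x - np.exp(x)) + 0.5 * x**2 / prior_var

# Mode of the posterior (a Newton iteration would also do).
mode = minimize_scalar(neg_log_post).x
# Curvature of -log posterior at the mode, analytic for this model.
precision = np.exp(mode) + 1.0 / prior_var
print("Gaussian approximation: N(%.3f, %.3f)" % (mode, 1.0 / precision))
# A skewed posterior is replaced by a symmetric one, which is exactly the
# kind of slight error Paper I detects against long MCMC runs.
```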

Journal ArticleDOI
01 Jan 2007
TL;DR: A theorem of Witsenhausen is shown to imply that an optimal communication strategy is uncoded transmission, i.e., each sensor's channel input is merely a scaled version of its noisy observation.
Abstract: One of the simplest sensor network models has one single underlying Gaussian source of interest, observed by many sensors, subject to independent Gaussian observation noise. The sensors communicate over a standard Gaussian multiple-access channel to a fusion center whose goal is to estimate the underlying source with respect to mean-squared error. In this note, a theorem of Witsenhausen is shown to imply that an optimal communication strategy is uncoded transmission, i.e., each sensor's channel input is merely a scaled version of its noisy observation.
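The optimal strategy is simple enough to simulate end to end: each sensor transmits a power-scaled copy of its noisy observation, the channel adds the inputs, and the fusion center applies the linear MMSE estimator. A toy sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 10, 100_000                        # sensors, channel uses
sig2, w2, P, N0 = 1.0, 0.5, 1.0, 1.0      # source var, obs noise, power, channel noise

s = np.sqrt(sig2) * rng.standard_normal(n)             # underlying source
obs = s + np.sqrt(w2) * rng.standard_normal((K, n))    # noisy observations
a = np.sqrt(P / (sig2 + w2))                           # per-sensor power scaling
y = (a * obs).sum(axis=0) + np.sqrt(N0) * rng.standard_normal(n)

# Linear MMSE estimate of s from y: (E[s y] / E[y^2]) * y.
c = (a * K * sig2) / (a**2 * (K**2 * sig2 + K * w2) + N0)
mse = np.mean((c * y - s) ** 2)
print("uncoded-transmission MSE:", mse, "(prior variance:", sig2, ")")
```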

Proceedings ArticleDOI
24 Jun 2007
TL;DR: For a noisy linear observation model based on measurement vectors drawn from the standard Gaussian ensemble, this paper derives both a set of sufficient conditions for asymptotically perfect recovery of the sparsity pattern using the optimal decoder, as well as a set of necessary conditions that any decoder must satisfy for perfect recovery.
Abstract: The problem of recovering the sparsity pattern of a fixed but unknown vector β* ∈ R^p based on a set of n noisy observations arises in a variety of settings, including subset selection in regression, graphical model selection, signal denoising, compressive sensing, and constructive approximation. Of interest are conditions on the model dimension p, the sparsity index s (number of non-zero entries in β*), and the number of observations n that are necessary and/or sufficient to ensure asymptotically perfect recovery of the sparsity pattern. This paper focuses on the information-theoretic limits of sparsity recovery: in particular, for a noisy linear observation model based on measurement vectors drawn from the standard Gaussian ensemble, we derive both a set of sufficient conditions for asymptotically perfect recovery using the optimal decoder, as well as a set of necessary conditions that any decoder must satisfy for perfect recovery. This analysis of optimal decoding limits complements our previous work on thresholds for the behavior of l1-constrained quadratic programming for Gaussian measurement ensembles.

Journal ArticleDOI
TL;DR: It is experimentally demonstrated that the entanglement between Gaussian entangled states can be increased by non-Gaussian operations, in good agreement with the theoretical predictions.
Abstract: We experimentally demonstrate that the entanglement between Gaussian entangled states can be increased by non-Gaussian operations. Coherent subtraction of single photons from Gaussian quadrature-entangled light pulses, created by a nondegenerate parametric amplifier, produces delocalized states with negative Wigner functions and complex structures more entangled than the initial states in terms of negativity. The experimental results are in very good agreement with the theoretical predictions.

Journal ArticleDOI
TL;DR: A version of Whittle's approximation to the Gaussian log-likelihood is presented for spatial regular lattices with missing values and for irregularly spaced datasets; the method requires O(n log^2 n) operations and does not involve calculating determinants.
Abstract: Likelihood approaches for large, irregularly spaced spatial datasets are often very difficult, if not infeasible, to implement due to computational limitations. Even when we can assume normality, exact calculation of the likelihood for a Gaussian spatial process observed at n locations requires O(n^3) operations. We present a version of Whittle's approximation to the Gaussian log-likelihood for spatial regular lattices with missing values and for irregularly spaced datasets. This method requires O(n log^2 n) operations and does not involve calculating determinants. We present simulations and theoretical results to show the benefits and the performance of the spatial likelihood approximation method presented here for spatial irregularly spaced datasets and lattices with missing values. We apply these methods to estimate the spatial structure of sea surface temperatures using satellite data with missing values.
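The Whittle approximation itself is a one-liner once the periodogram is in hand: the negative log-likelihood is summed over Fourier frequencies as log f(w) + I(w)/f(w), with the FFT doing the heavy lifting. A 1-D time-series sketch for intuition (the paper develops the lattice-with-missing-values and irregular-sites versions):

```python
import numpy as np

def whittle_negloglik(x, spectral_density):
    """Whittle approximation to the Gaussian negative log-likelihood:
    sum over nonzero Fourier frequencies of log f(w) + I(w)/f(w)."""
    n = len(x)
    I = np.abs(np.fft.fft(x - x.mean())) ** 2 / (2 * np.pi * n)  # periodogram
    w = 2 * np.pi * np.fft.fftfreq(n)                            # frequencies
    keep = w != 0                                                # drop w = 0
    f = spectral_density(w[keep])
    return np.sum(np.log(f) + I[keep] / f)

# AR(1) example: f(w) = 1 / (2*pi*|1 - phi*exp(-iw)|^2) with unit innovations.
rng = np.random.default_rng(0)
phi_true, n = 0.6, 4096
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

def f_ar1(phi):
    return lambda w: 1.0 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * w)) ** 2)

grid = np.linspace(0.1, 0.9, 81)
print("Whittle estimate of phi:",
      grid[np.argmin([whittle_negloglik(x, f_ar1(p)) for p in grid])])
```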

Journal ArticleDOI
TL;DR: A novel, simple and tight approximation for the Gaussian Q-function and its integer powers is presented, and an accuracy improvement is achieved over the whole range of positive arguments.
Abstract: We present a novel, simple and tight approximation for the Gaussian Q-function and its integer powers. Compared to other known closed-form approximations, an accuracy improvement is achieved over the whole range of positive arguments. The results can be efficiently applied in the evaluation of the symbol error probability (SEP) of digital modulations in the presence of additive white Gaussian noise (AWGN) and the average SEP (ASEP) over fading channels. As an example we evaluate in closed-form the ASEP of differentially encoded QPSK in Nakagami-m fading.
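The paper's own closed form is not reproduced here, but the kind of accuracy comparison it reports is easy to set up: evaluate the exact Q-function via erfc against a closed-form exponential-type approximation. Below, the well-known two-term approximation of Chiani et al. serves purely as a stand-in for the paper's expression.

```python
import numpy as np
from scipy.special import erfc

Q = lambda x: 0.5 * erfc(x / np.sqrt(2))      # exact Gaussian Q-function

# Two-term exponential approximation (Chiani et al.), a stand-in only:
Q_approx = lambda x: np.exp(-x**2 / 2) / 12 + np.exp(-2 * x**2 / 3) / 4

x = np.linspace(0.5, 6.0, 12)
rel_err = np.abs(Q_approx(x) - Q(x)) / Q(x)
for xi, e in zip(x, rel_err):
    print(f"x = {xi:4.2f}   relative error = {e:.2e}")
```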

Journal ArticleDOI
TL;DR: This paper addresses the design of an optimal transmit signal and its corresponding optimal detector for a radar or active sonar system, with a focus on the temporal aspects of the waveform; the spatial aspects are to be described in a future paper.
Abstract: In this paper, we address the design of an optimal transmit signal and its corresponding optimal detector for a radar or active sonar system. The focus is on the temporal aspects of the waveform, with the spatial aspects to be described in a future paper. The assumptions involved in modeling the clutter/reverberation return are crucial to the development of the optimal detector and its consequent optimal signal design. In particular, the target is assumed to be a Gaussian point target and the clutter/reverberation a stationary Gaussian random process. In practice, therefore, the modeling will need to be assessed and possibly extended, and additionally a means of measuring the "in-situ" clutter/reverberation spectrum will be required. The advantages of our approach are that a simple analytical result is obtained which is guaranteed to be optimal, and also that the extension to spatial-temporal signal design is immediate using ideas of frequency-wavenumber representations. Some examples are given to illustrate the signal design procedure as well as the calculation of the increase in processing gain. Finally, the results are shown to be an extension of the usual procedure which places the signal energy in the noise band having minimum power.
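The closing remark suggests a quick numerical check: for a known signal in colored Gaussian noise, detection SNR behaves like the sum of |S(f)|^2 / N(f), so concentrating the energy budget where the clutter-plus-noise spectrum is smallest maximizes the gain. A schematic flat-versus-concentrated comparison (the paper's optimal design additionally shapes the spectrum analytically; the toy spectrum below is made up):

```python
import numpy as np

freqs = np.linspace(0.0, 1.0, 256)
noise_psd = 1.0 + 0.8 * np.cos(2 * np.pi * freqs) ** 2   # toy clutter+noise PSD
E = 1.0                                                   # signal energy budget

# Matched-filter-style SNR for a known signal: sum over bins of |S|^2 / N.
flat = E / len(freqs) * np.sum(1.0 / noise_psd)           # energy spread evenly
concentrated = E / noise_psd.min()                        # energy in the best bin
print("processing gain of concentration: %.2f dB"
      % (10 * np.log10(concentrated / flat)))
```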

Journal ArticleDOI
TL;DR: This work uses concepts like Tsybakov's noise assumption and local Rademacher averages to establish learning rates up to the order of n^{-1} for nontrivial distributions and introduces a geometric assumption for distributions that allows us to estimate the approximation properties of Gaussian RBF kernels.
Abstract: For binary classification we establish learning rates up to the order of $n^{-1}$ for support vector machines (SVMs) with hinge loss and Gaussian RBF kernels. These rates are in terms of two assumptions on the considered distributions: Tsybakov's noise assumption to establish a small estimation error, and a new geometric noise condition which is used to bound the approximation error. Unlike previously proposed concepts for bounding the approximation error, the geometric noise assumption does not employ any smoothness assumption.

Journal ArticleDOI
TL;DR: A novel family of paraxial laser beams forming an overcomplete yet nonorthogonal set of modes that have a singular phase profile and are eigenfunctions of the photon orbital angular momentum are studied.
Abstract: We studied a novel family of paraxial laser beams forming an overcomplete yet nonorthogonal set of modes. These modes have a singular phase profile and are eigenfunctions of the photon orbital angular momentum. The intensity profile is characterized by a single brilliant ring with the singularity at its center, where the field amplitude vanishes. The complex amplitude is proportional to the degenerate (confluent) hypergeometric function, and therefore we term such beams hypergeometric-Gaussian (HyGG) modes. Unlike the recently introduced hypergeometric modes [Opt. Lett. 32, 742 (2007)], the HyGG modes carry a finite power and have been generated in this work with a liquid-crystal spatial light modulator. We briefly consider some subfamilies of the HyGG modes as the modified Bessel Gaussian modes, the modified exponential Gaussian modes, and the modified Laguerre-Gaussian modes.


Journal ArticleDOI
TL;DR: A novel parametric and global image histogram thresholding method based on the estimation of the statistical parameters of "object" and "background" classes by the expectation-maximization (EM) algorithm, under the assumption that these two classes follow a generalized Gaussian (GG) distribution.
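As a sketch of the pipeline, with plain Gaussians standing in for the paper's generalized Gaussian components (whose EM updates additionally involve the shape parameter): fit a two-component mixture to the gray levels and threshold where the two class posteriors cross. The synthetic image data are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic bimodal gray levels: dark background and brighter objects.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 12, 30_000),    # "background" class
                         rng.normal(170, 20, 10_000)])  # "object" class
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels.reshape(-1, 1))

# Threshold = gray level where the two class posteriors are equal.
levels = np.arange(0, 256).reshape(-1, 1)
post = gmm.predict_proba(levels)
threshold = levels[np.argmin(np.abs(post[:, 0] - post[:, 1]))][0]
print("estimated threshold:", threshold)   # lands between the two modes
```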