
Showing papers on "Gaussian published in 2008"


Journal ArticleDOI
TL;DR: This work considers the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse, and presents two new algorithms for solving problems with at least a thousand nodes in the Gaussian case.
Abstract: We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright and Jordan, 2006), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for the binary case. We test our algorithms on synthetic data, as well as on gene expression and senate voting records data.

1,189 citations
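As an illustrative sketch of the l1-penalized maximum likelihood idea, the following uses scikit-learn's GraphicalLasso rather than the paper's own block coordinate descent or Nesterov-based solvers; the chain-graph precision matrix, sample size, and penalty alpha=0.05 are arbitrary example choices:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# A sparse ground-truth precision matrix: a chain graph over 5 variables
prec = np.eye(5) + 0.4 * (np.eye(5, k=1) + np.eye(5, k=-1))
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(5), cov, size=2000)

# l1-penalized maximum likelihood estimate of the precision matrix
model = GraphicalLasso(alpha=0.05).fit(X)
est_prec = model.precision_
print(np.round(est_prec, 2))  # entries absent from the chain are shrunk toward zero
```

The l1 penalty drives most non-chain entries to (near) zero, which is exactly the sparsity the paper's formulation targets.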


Journal ArticleDOI
TL;DR: A signal-dependent noise model, which gives the pointwise standard deviation of the noise as a function of the expectation of the pixel raw-data output, is composed of a Poissonian part, modeling the photon sensing, and a Gaussian part, for the remaining stationary disturbances in the output data.
Abstract: We present a simple and usable noise model for the raw-data of digital imaging sensors. This signal-dependent noise model, which gives the pointwise standard deviation of the noise as a function of the expectation of the pixel raw-data output, is composed of a Poissonian part, modeling the photon sensing, and a Gaussian part, for the remaining stationary disturbances in the output data. We further explicitly take into account the clipping of the data (over- and under-exposure), faithfully reproducing the nonlinear response of the sensor. We propose an algorithm for the fully automatic estimation of the model parameters given a single noisy image. Experiments with synthetic images and with real raw-data from various sensors prove the practical applicability of the method and the accuracy of the proposed model.

789 citations
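A minimal numerical sketch of the model's key prediction, that the noise variance is affine in the pixel expectation (var = a*y + b); the parameter values are invented for illustration, and this is not the paper's automatic estimation algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical model parameters: noise variance = a*y + b at expectation y
a, b = 0.01, 0.0004                       # Poissonian gain and Gaussian variance
y = np.linspace(0.05, 0.8, 50)            # true pixel expectations
z = rng.poisson(y[:, None] / a, size=(50, 20000)) * a   # Poissonian part
z = z + rng.normal(0.0, np.sqrt(b), size=z.shape)       # Gaussian part

# The empirical variance at each expectation level should follow a*y + b
emp_var = z.var(axis=1)
coef = np.polyfit(y, emp_var, 1)   # linear fit: var = a_hat*y + b_hat
print(coef)                        # should recover approximately (a, b)
```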


Journal ArticleDOI
TL;DR: Evidence is provided that the ensemble size required for a successful particle filter scales exponentially with the problem size, with asymptotic results following the work of Bengtsson, Bickel, and collaborators.
Abstract: Particle filters are ensemble-based assimilation schemes that, unlike the ensemble Kalman filter, employ a fully nonlinear and non-Gaussian analysis step to compute the probability distribution function (pdf) of a system’s state conditioned on a set of observations. Evidence is provided that the ensemble size required for a successful particle filter scales exponentially with the problem size. For the simple example in which each component of the state vector is independent, Gaussian, and of unit variance and the observations are of each state component separately with independent, Gaussian errors, simulations indicate that the required ensemble size scales exponentially with the state dimension. In this example, the particle filter requires at least 10^11 members when applied to a 200-dimensional state. Asymptotic results, following the work of Bengtsson, Bickel, and collaborators, are provided for two cases: one in which each prior state component is independent and identically distributed, and ...

654 citations
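The weight degeneracy behind the exponential ensemble-size requirement can be reproduced in a few lines. This sketch mimics the paper's simple example (independent unit-variance Gaussian state components, each observed with independent Gaussian errors); the ensemble size n=1000 is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000  # ensemble size

def max_weight(d):
    """Largest normalized importance weight after one analysis step:
    prior particles ~ N(0, I_d), observation y = 0 of every component
    with unit-variance Gaussian errors (the paper's simple example)."""
    x = rng.normal(size=(n, d))
    logw = -0.5 * np.sum(x**2, axis=1)   # Gaussian log-likelihood of y = 0
    logw -= logw.max()                   # stabilize before exponentiating
    w = np.exp(logw)
    return (w / w.sum()).max()

# As the state dimension grows, one particle hogs nearly all the weight
for d in (1, 10, 50, 100):
    print(d, max_weight(d))
```

By d = 100 the filter has effectively collapsed onto a single member, which is why the required ensemble size must grow exponentially with dimension.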


Journal ArticleDOI
TL;DR: It is demonstrated that Graphical Processing Units (GPUs) can be used very efficiently to calculate two-electron repulsion integrals over Gaussian basis functions, the first step in most quantum chemistry calculations.
Abstract: Modern videogames place increasing demands on the computational and graphical hardware, leading to novel architectures that have great potential in the context of high performance computing and molecular simulation. We demonstrate that Graphical Processing Units (GPUs) can be used very efficiently to calculate two-electron repulsion integrals over Gaussian basis functions, the first step in most quantum chemistry calculations. A benchmark test performed for the evaluation of approximately 10^6 (ss|ss) integrals over contracted s-orbitals showed that a naive algorithm implemented on the GPU achieves up to 130-fold speedup over a traditional CPU implementation on an AMD Opteron. Subsequent calculations of the Coulomb operator for a 256-atom DNA strand show that the GPU advantage is maintained for basis sets including higher angular momentum functions.

526 citations
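The analytic tractability that makes Gaussian basis functions attractive for integral evaluation rests on the Gaussian product theorem; here is a quick numerical check of the 1D identity (exponents and centres are arbitrary example values):

```python
import numpy as np

# Gaussian product theorem: the product of two s-type Gaussians centred at
# A and B is a single Gaussian centred at P = (a*A + b*B)/(a + b), scaled by
# the prefactor K = exp(-a*b/(a+b) * |A-B|^2). This identity underlies the
# analytic evaluation of two-electron integrals over Gaussian basis functions.
a, b = 1.3, 0.7          # exponents (arbitrary)
A, B = -0.5, 0.9         # centres in 1D (arbitrary)
p = a + b
P = (a * A + b * B) / p
K = np.exp(-a * b / p * (A - B) ** 2)

x = np.linspace(-5, 5, 2001)
lhs = np.exp(-a * (x - A) ** 2) * np.exp(-b * (x - B) ** 2)
rhs = K * np.exp(-p * (x - P) ** 2)
print(np.max(np.abs(lhs - rhs)))  # zero up to floating-point rounding
```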


Journal ArticleDOI
TL;DR: The proposed multimode process monitoring approach based on finite Gaussian mixture model (FGMM) and Bayesian inference strategy is superior to the conventional PCA method and can achieve accurate and early detection of various types of faults in multimode processes.
Abstract: For complex industrial processes with multiple operating conditions, the traditional multivariate process monitoring techniques such as principal component analysis (PCA) and partial least squares (PLS) are ill-suited because the fundamental assumption that the operating data follow a unimodal Gaussian distribution usually becomes invalid. In this article, a novel multimode process monitoring approach based on a finite Gaussian mixture model (FGMM) and a Bayesian inference strategy is proposed. First, the process data are assumed to come from a number of different clusters, each of which corresponds to an operating mode and can be characterized by a Gaussian component. In the absence of a priori process knowledge, the Figueiredo–Jain (F–J) algorithm is then adopted to automatically optimize the number of Gaussian components and estimate their statistical distribution parameters. With the obtained FGMM, a Bayesian inference strategy is further utilized to compute the posterior probabilities of each monitored sample belonging to the multiple components and derive an integrated global probabilistic index for fault detection of multimode processes. The validity and effectiveness of the proposed monitoring approach are illustrated through three examples: (1) a simple multivariate linear system, (2) a simulated continuous stirred tank heater (CSTH) process, and (3) the Tennessee Eastman challenge problem. The comparison of monitoring results demonstrates that the proposed approach is superior to the conventional PCA method and can achieve accurate and early detection of various types of faults in multimode processes. © 2008 American Institute of Chemical Engineers AIChE J, 2008

452 citations
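A toy version of the FGMM monitoring idea, using scikit-learn's EM-based GaussianMixture rather than the Figueiredo–Jain algorithm from the paper; the two simulated operating modes and the test samples are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Two hypothetical operating modes of a 2-D process
mode1 = rng.normal([0, 0], 0.3, size=(500, 2))
mode2 = rng.normal([4, 4], 0.3, size=(500, 2))
X = np.vstack([mode1, mode2])

# The paper chooses the number of components automatically (F-J algorithm);
# here we simply fix K=2 and fit with EM.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

normal_sample = np.array([[0.1, -0.2]])   # near mode 1
faulty_sample = np.array([[2.0, 2.0]])    # far from both modes
# Posterior mode probabilities, and log-likelihood as a crude global index
print(gmm.predict_proba(normal_sample), gmm.score_samples(normal_sample))
print(gmm.predict_proba(faulty_sample), gmm.score_samples(faulty_sample))
```

A normal sample is confidently assigned to one mode and scores high likelihood; a faulty sample fits neither mode, which is the signal the integrated probabilistic index exploits.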


Journal ArticleDOI
TL;DR: The present results have shown that the cost effectiveness in the numerical basis sets implemented in the DFT code DMol3 is superior to that in Gaussian basis sets in terms of accuracy per computational cost.
Abstract: Binding energies of selected hydrogen bonded complexes have been calculated within the framework of the density functional theory (DFT) method to discuss the efficiency of numerical basis sets implemented in the DFT code DMol3 in comparison with Gaussian basis sets. The corrections of basis set superposition error (BSSE) are evaluated by means of the counterpoise method. Two kinds of numerical basis sets of different size are examined; the size of the one is comparable to the Gaussian double zeta plus polarization function basis set (DNP), and that of the other is comparable to the triple zeta plus double polarization functions basis set (TNDP). We have confirmed that the magnitudes of BSSE in these numerical basis sets are comparable to or smaller than those in Gaussian basis sets whose sizes are much larger than the corresponding numerical basis sets; the BSSE corrections in DNP are less than those in the Gaussian 6-311+G(3df,2pd) basis set, and those in TNDP are comparable to those in the substantially large scale Gaussian basis set aug-cc-pVTZ. The differences in counterpoise corrected binding energies between those calculated using DNP and those calculated using aug-cc-pVTZ are less than 9 kJ/mol for all of the complexes studied in the present work. The present results have shown that the cost effectiveness of the numerical basis sets in DMol3 is superior to that of Gaussian basis sets in terms of accuracy per computational cost.

442 citations


Journal ArticleDOI
TL;DR: The rate of contraction of the posterior distribution based on sampling from a smooth density model when the prior models the log density as a (fractionally integrated) Brownian motion is shown to depend on the position of the true parameter relative to the reproducing kernel Hilbert space of the Gaussian process.
Abstract: We derive rates of contraction of posterior distributions on nonparametric or semiparametric models based on Gaussian processes. The rate of contraction is shown to depend on the position of the true parameter relative to the reproducing kernel Hilbert space of the Gaussian process and the small ball probabilities of the Gaussian process. We determine these quantities for a range of examples of Gaussian priors and in several statistical settings. For instance, we consider the rate of contraction of the posterior distribution based on sampling from a smooth density model when the prior models the log density as a (fractionally integrated) Brownian motion. We also consider regression with Gaussian errors and smooth classification under a logistic or probit link function combined with various priors.

423 citations


Journal ArticleDOI
TL;DR: In this article, the authors apply the same approach to physically motivated non-Gaussian models and find an analytic expression for the halo bias as a function of scale, mass, and redshift, employing only approximations of high peaks and large separations.
Abstract: It has long been known how to analytically relate the clustering properties of the collapsed structures (halos) to those of the underlying dark matter distribution for Gaussian initial conditions. Here we apply the same approach to physically motivated non-Gaussian models. The techniques we use were developed in the 1980s to deal with the clustering of peaks of non-Gaussian density fields. The description of the clustering of halos for non-Gaussian initial conditions has recently received renewed interest, motivated by the forthcoming large galaxy and cluster surveys. For inflationary-motivated non-Gaussianities, we find an analytic expression for the halo bias as a function of scale, mass, and redshift, employing only the approximations of high peaks and large separations.

403 citations


Journal ArticleDOI
TL;DR: This work considers the Gaussian multiple access wire-tap channel (GMAC-WT), in which multiple users communicate with an intended receiver in the presence of an intelligent and informed wire-tapper who receives a degraded version of the signal at the receiver.
Abstract: We consider the Gaussian multiple access wire-tap channel (GMAC-WT). In this scenario, multiple users communicate with an intended receiver in the presence of an intelligent and informed wire-tapper who receives a degraded version of the signal at the receiver. We define suitable security measures for this multiaccess environment. Using codebooks generated randomly according to a Gaussian distribution, achievable secrecy rate regions are identified using superposition coding and time-division multiple access (TDMA) coding schemes. An upper bound for the secrecy sum-rate is derived, and our coding schemes are shown to achieve the sum capacity. Numerical results are presented showing the new rate region and comparing it with the capacity region of the Gaussian multiple-access channel (GMAC) with no secrecy constraints, which quantifies the price paid for secrecy.

319 citations
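As a simpler single-user illustration of the "price paid for secrecy" (not the paper's multi-user GMAC-WT regions), the classical Gaussian wiretap secrecy capacity is the gap between the main-channel and eavesdropper-channel capacities; the SNR values below are arbitrary:

```python
import numpy as np

# Gaussian channel capacity in bits per channel use
def c(snr):
    return 0.5 * np.log2(1 + snr)

# The wire-tapper sees a degraded (noisier) version of the signal,
# so its effective SNR is lower than the intended receiver's.
snr_main, snr_eve = 10.0, 3.0          # illustrative values
secrecy_capacity = c(snr_main) - c(snr_eve)
print(c(snr_main), secrecy_capacity)   # secrecy costs rate relative to no constraint
```

The same qualitative gap appears in the paper's comparison of the GMAC-WT secrecy rate region with the unconstrained GMAC capacity region.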


Journal ArticleDOI
TL;DR: This paper proposes a reliable procedure for constructing complex networks from the correlation matrix of a time series, applied to an original stock time series, the corresponding return series, and its amplitude series.
Abstract: Recent works show that complex network theory may be a powerful tool in time series analysis. We propose in this paper a reliable procedure for constructing complex networks from the correlation matrix of a time series. An original stock time series, the corresponding return series and its amplitude series are considered. The degree distribution of the original series can be well fitted with a power law, while that of the return series can be well fitted with a Gaussian function. The degree distribution of the amplitude series contains two asymmetric Gaussian branches. Reconstruction of networks from time series is a common problem in diverse research. The proposed strategy may be a reasonable solution to this problem.

316 citations
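A hedged sketch of the general recipe (correlation matrix to thresholded adjacency to degree sequence) on surrogate random-walk data; the segmenting scheme and the threshold 0.5 are illustrative choices, and the paper's construction details may differ:

```python
import numpy as np

rng = np.random.default_rng(4)
# Surrogate for a stock time series: a random walk cut into segments,
# each segment becoming one node of the network
n_nodes, seg_len = 60, 50
series = np.cumsum(rng.normal(size=n_nodes * seg_len))
segments = series.reshape(n_nodes, seg_len)

# Correlation matrix -> adjacency matrix by thresholding |corr|
C = np.corrcoef(segments)
A = (np.abs(C) > 0.5).astype(int)
np.fill_diagonal(A, 0)              # no self-loops

degrees = A.sum(axis=1)             # degree sequence of the resulting network
print(degrees.min(), degrees.max())
```

The degree distribution of such a network is then compared against candidate forms (power law, Gaussian) as in the paper.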


Journal ArticleDOI
TL;DR: In this article, the authors determine the rate region of the quadratic Gaussian two-encoder source-coding problem, which is achieved by a simple architecture that separates the analog and digital aspects of the compression.
Abstract: We determine the rate region of the quadratic Gaussian two-encoder source-coding problem. This rate region is achieved by a simple architecture that separates the analog and digital aspects of the compression. Furthermore, this architecture requires higher rates to send a Gaussian source than it does to send any other source with the same covariance. Our techniques can also be used to determine the sum-rate of some generalizations of this classical problem. Our approach involves coupling the problem to a quadratic Gaussian "CEO problem."

Journal ArticleDOI
TL;DR: In this paper, a generalized Gaussian approximation for correlated Gaussian fields observed on part of the sky is proposed and evaluated using a precomputed covariance matrix and set of power spectrum estimators.
Abstract: Microwave background temperature and polarization observations are a powerful way to constrain cosmological parameters if the likelihood function can be calculated accurately. The temperature and polarization fields are correlated, partial-sky coverage correlates power spectrum estimators at different l, and the likelihood function for a theory spectrum given a set of observed estimators is non-Gaussian. An accurate analysis must model all these properties. Most existing likelihood approximations are good enough for a temperature-only analysis; however, they cannot reliably handle temperature-polarization correlations. We give a new general approximation applicable for correlated Gaussian fields observed on part of the sky. The approximation models the non-Gaussian form exactly in the ideal full-sky limit and is fast to evaluate using a precomputed covariance matrix and set of power spectrum estimators. We show with simulations that it is good enough to obtain correct results at l ≳ 30 where an exact calculation becomes impossible. We also show that some Gaussian approximations give reliable parameter constraints even though they do not capture the shape of the likelihood function at each l accurately. Finally we test the approximations on simulations with realistically anisotropic noise and asymmetric foreground mask.

Proceedings Article
08 Dec 2008
TL;DR: This work provides an algorithm that combines tree methods with the Improved Fast Gauss Transform (IFGT), resulting in four evaluation methods whose performance varies based on the distribution of sources and targets and input parameters such as desired accuracy and bandwidth.
Abstract: Many machine learning algorithms require the summation of Gaussian kernel functions, an expensive operation if implemented straightforwardly. Several methods have been proposed to reduce the computational complexity of evaluating such sums, including tree and analysis based methods. These achieve varying speedups depending on the bandwidth, dimension, and prescribed error, making the choice between methods difficult for machine learning tasks. We provide an algorithm that combines tree methods with the Improved Fast Gauss Transform (IFGT). As originally proposed the IFGT suffers from two problems: (1) the Taylor series expansion does not perform well for very low bandwidths, and (2) parameter selection is not trivial and can drastically affect performance and ease of use. We address the first problem by employing a tree data structure, resulting in four evaluation methods whose performance varies based on the distribution of sources and targets and input parameters such as desired accuracy and bandwidth. To solve the second problem, we present an online tuning approach that results in a black box method that automatically chooses the evaluation method and its parameters to yield the best performance for the input data, desired accuracy, and bandwidth. In addition, the new IFGT parameter selection approach allows for tighter error bounds. Our approach chooses the fastest method at negligible additional cost, and has superior performance in comparisons with previous approaches.
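For reference, the quantity being accelerated is the discrete Gauss transform; the naive O(N*M) evaluation that tree methods and the IFGT approximate looks like this (dimensions, bandwidth, and sizes are arbitrary example values):

```python
import numpy as np

rng = np.random.default_rng(5)
# Naive direct evaluation of the Gauss transform:
#   G(t_j) = sum_i w_i * exp(-||t_j - s_i||^2 / h^2)
sources = rng.uniform(size=(500, 3))
targets = rng.uniform(size=(200, 3))
weights = rng.uniform(size=500)
h = 0.2  # bandwidth

def gauss_transform(targets, sources, weights, h):
    # Pairwise squared distances via broadcasting, then weighted kernel sum
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=2)
    return (weights[None, :] * np.exp(-d2 / h**2)).sum(axis=1)

G = gauss_transform(targets, sources, weights, h)
print(G.shape)
```

The cost is quadratic in the number of points, which is exactly what makes approximate methods attractive for large machine learning problems.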

Journal ArticleDOI
TL;DR: This paper proposes a robust postprocessing model to infer the latent heart rate time series and applies the method to a wide range of heart rate data and obtains convincing predictions along with uncertainty estimates.
Abstract: Heart rate data collected during nonlaboratory conditions present several data-modeling challenges. First, the noise in such data is often poorly described by a simple Gaussian; it has outliers and errors come in bursts. Second, in large-scale studies the ECG waveform is usually not recorded in full, so one has to deal with missing information. In this paper, we propose a robust postprocessing model for such applications. Our model to infer the latent heart rate time series consists of two main components: unsupervised clustering followed by Bayesian regression. The clustering component uses auxiliary data to learn the structure of outliers and noise bursts. The subsequent Gaussian process regression model uses the cluster assignments as prior information and incorporates expert knowledge about the physiology of the heart. We apply the method to a wide range of heart rate data and obtain convincing predictions along with uncertainty estimates. In a quantitative comparison with existing postprocessing methodology, our model achieves a significant increase in performance.
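A minimal sketch of the Gaussian process regression stage on a toy heart-rate-like trace, using scikit-learn's GaussianProcessRegressor; the paper's clustering-based outlier handling and physiological priors are omitted, and the data are invented:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
# Toy stand-in for a noisy heart-rate trace (beats per minute over time)
t = np.linspace(0, 10, 80)[:, None]
hr = 70 + 5 * np.sin(t.ravel()) + rng.normal(0, 1.0, size=80)

# RBF captures the smooth latent trend; WhiteKernel absorbs observation noise
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1.0),
                              normalize_y=True).fit(t, hr)
mean, std = gp.predict(t, return_std=True)
print(mean[:3], std[:3])   # predictions together with uncertainty estimates
```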

Journal ArticleDOI
TL;DR: In this article, it was shown that in the presence of L^2 Gaussian estimates, so-called Davies-Gaffney estimates, on-diagonal upper bounds imply precise off-diagonal Gaussian upper bounds for the kernels of analytic families of operators on metric measure spaces.
Abstract: We prove that in the presence of L^2 Gaussian estimates, so-called Davies-Gaffney estimates, on-diagonal upper bounds imply precise off-diagonal Gaussian upper bounds for the kernels of analytic families of operators on metric measure spaces.

Journal ArticleDOI
TL;DR: In this article, a new method for extracting an errorless secret key in a continuous-variable quantum key distribution protocol, which is based on Gaussian modulation of coherent states and homodyne detection, is proposed.
Abstract: We propose a new method for extracting an errorless secret key in a continuous-variable quantum key distribution protocol, which is based on Gaussian modulation of coherent states and homodyne detection. The crucial novel feature is an eight-dimensional reconciliation method, based on the algebraic properties of octonions. Since the protocol does not use any postselection, it can be proven secure against arbitrary collective attacks, by using well-established theorems on the optimality of Gaussian attacks. By using this new coding scheme with an appropriate signal to noise ratio, the distance for secure continuous-variable quantum key distribution can be significantly extended.

Proceedings ArticleDOI
15 Aug 2008
TL;DR: A low-complexity recursive procedure is presented for minimum mean squared error (MMSE) estimation in linear regression models and a Gaussian mixture is chosen as the prior on the unknown parameter vector.
Abstract: A low-complexity recursive procedure is presented for minimum mean squared error (MMSE) estimation in linear regression models. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both an approximate MMSE estimate of the parameter vector and a set of high posterior probability mixing parameters. Emphasis is given to the case of a sparse parameter vector. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and MAP model selection. The set of high probability mixing parameters not only provides MAP basis selection, but also yields relative probabilities that reveal potential ambiguity in the sparse model.
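In the scalar case the mechanics are fully explicit: with a Gaussian mixture prior and Gaussian observation noise, the posterior is again a mixture, so the MMSE estimate and the MAP mixing component drop out in closed form. All numbers below are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

# Assumed scalar setup: prior x ~ sum_k w_k N(mu_k, s2_k),
# observation y = x + v with v ~ N(0, sv2).
w = np.array([0.7, 0.3])     # prior mixing weights
mu = np.array([0.0, 5.0])    # component means
s2 = np.array([1.0, 1.0])    # component variances
sv2 = 0.5                    # observation noise variance
y = 4.2                      # observed value

# Posterior mixing weights: proportional to w_k * N(y; mu_k, s2_k + sv2)
w_post = w * norm.pdf(y, loc=mu, scale=np.sqrt(s2 + sv2))
w_post /= w_post.sum()
# Per-component posterior means (standard Gaussian conditioning)
mu_post = mu + s2 / (s2 + sv2) * (y - mu)

mmse = np.dot(w_post, mu_post)          # approximate-free MMSE estimate
map_component = int(np.argmax(w_post))  # MAP "model" selection
print(mmse, map_component)
```

The gap between the MMSE estimate (a weighted blend) and the MAP component's mean is the distinction the paper's simulations illustrate.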

Book
15 Sep 2008
TL;DR: A monograph applying the Gaussian stochastic calculus of variations to mathematical finance, covering pathwise propagation of Greeks in complete elliptic markets, market equilibrium, and the price-volatility feedback rate.
Abstract: Gaussian stochastic calculus of variations.- Pathwise propagation of Greeks in complete elliptic markets.- Market equilibrium and price-volatility feedback rate.- Multivariate conditioning and regularity of laws.- Non-elliptic markets and instability in HJM models.- Insider trading.- Rates of weak convergence and distribution theory on Gaussian spaces.- Fourier series method for the measurement of historical volatilities.

Journal ArticleDOI
TL;DR: A simple description of the most general collective Gaussian attack in continuous-variable quantum cryptography is provided and the asymptotic secret-key rates which are achievable with coherent states, joint measurements of the quadratures and one-way classical communication are analyzed.
Abstract: We provide a simple description of the most general collective Gaussian attack in continuous-variable quantum cryptography. In the scenario of such general attacks, we analyze the asymptotic secret-key rates which are achievable with coherent states, joint measurements of the quadratures and one-way classical communication.

Journal ArticleDOI
TL;DR: A Gaussian-mixture-model approach is proposed for accurate uncertainty propagation through a general nonlinear system and is argued to be an excellent candidate for higher-dimensional uncertainty-propagation problems.
Abstract: A Gaussian-mixture-model approach is proposed for accurate uncertainty propagation through a general nonlinear system. The transition probability density function is approximated by a finite sum of Gaussian density functions for which the parameters (mean and covariance) are propagated using linear propagation theory. Two different approaches are introduced to update the weights of the different components of a Gaussian-mixture model for uncertainty propagation through the nonlinear system. The first method updates the weights such that they minimize the integral square difference between the true forecast probability density function and its Gaussian-sum approximation. The second method uses the Fokker-Planck-Kolmogorov equation error as feedback to adapt the amplitude of different Gaussian components while solving a quadratic programming problem. The proposed methods are applied to a variety of problems in the open literature and are argued to be an excellent candidate for higher-dimensional uncertainty-propagation problems.
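A stripped-down sketch of component-wise linear propagation for a 1-D Gaussian mixture through f(x) = sin(x), checked against Monte Carlo; the paper's weight-update schemes are deliberately omitted and the mixture parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
# First-order (linearized) propagation of each Gaussian component through
# f(x) = sin(x); the mixture weights are simply kept fixed here.
f = np.sin
fprime = np.cos
w = np.array([0.5, 0.5])        # mixture weights
mu = np.array([-1.0, 1.0])      # component means
var = np.array([0.05, 0.05])    # component variances

mu_out = f(mu)                       # propagate means through f
var_out = fprime(mu) ** 2 * var      # first-order variance propagation

# Monte Carlo check of the propagated mixture mean
comp = rng.choice(2, size=200000, p=w)
x = rng.normal(mu[comp], np.sqrt(var[comp]))
print(np.dot(w, mu_out), f(x).mean())   # both near zero by symmetry here
```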

Journal ArticleDOI
TL;DR: An efficient approach to search for the global threshold of an image using a Gaussian mixture model is proposed, and experimental results show that the new algorithm performs better.

Book
01 Jan 2008
TL;DR: In this article, a Stata-specific treatment of generalized linear mixed models, also known as multilevel or hierarchical models, is presented, which allow fixed and random effects and are appropriate not only for continuous Gaussian responses but also for binary, count, and other types of limited dependent variables.
Abstract: This text is a Stata-specific treatment of generalized linear mixed models, also known as multilevel or hierarchical models. These models are "mixed" in the sense that they allow fixed and random effects and are "generalized" in the sense that they are appropriate not only for continuous Gaussian responses but also for binary, count, and other types of limited dependent variables.

Journal ArticleDOI
TL;DR: In this paper, a review of direct dynamics methods for non-adiabatic photochemistry is given, focusing on their application to nonadabatic systems in which a conical intersection plays an important role.
Abstract: A review of direct dynamics methods is given, focusing on their application to non-adiabatic photochemistry, i.e. systems in which a conical intersection plays an important role. Direct dynamics simulations use electronic structure calculations to obtain the potential energy surface only as it is required ‘on-the-fly’. This is in contrast to traditional methods that require the surface to be globally known as an analytic function before a simulation can be performed. The properties and abilities, with descriptions of calculations made, of the three main methods are compared: trajectory surface hopping (TSH), ab initio multiple spawning (AIMS), and variational multi-configuration Gaussian wavepackets (vMCG). TSH is the closest to classical dynamics, is the simplest to implement, but is hard to converge, and even then not always accurate. AIMS solves the time-dependent Schrödinger equation more rigorously, but as its basis functions follow classical trajectories it again suffers from poor convergence. vMCG is harder to ...

Journal ArticleDOI
TL;DR: The Gaussian-based multiconfiguration time-dependent Hartree (G-MCTDH) method is applied to calculate the S2(ππ*) absorption spectrum of the pyrazine molecule, whose diffuse structure results from the ultrafast nonadiabatic dynamics at the S2-S1 conical intersection.
Abstract: The Gaussian-based multiconfiguration time-dependent Hartree (G-MCTDH) method is applied to calculate the S2(ππ∗) absorption spectrum of the pyrazine molecule, whose diffuse structure results from the ultrafast nonadiabatic dynamics at the S2-S1 conical intersection. The 24-mode second-order vibronic-coupling model of Raab et al. [J. Chem. Phys. 110, 936 (1999)] is employed, along with 4-mode and 10-mode reduced-dimensional variants of this model. G-MCTDH can be used either as an all-Gaussian approach or else as a hybrid method using a partitioning into primary modes, treated by conventional MCTDH basis functions, and secondary modes described by Gaussian particles. Comparison with standard MCTDH calculations shows that the method converges to the exact result. The variational, nonclassical evolution of the moving Gaussian basis is a key element in obtaining convergence. For high-dimensional systems, convergence is significantly accelerated if the method is employed as a hybrid scheme.

Book ChapterDOI
TL;DR: Definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes are reviewed, with a view to applications in nonparametric Bayesian statistics using Gaussian priors.
Abstract: We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described through a concentration function that is expressed in the reproducing Hilbert space. Absolute continuity of Gaussian measures and concentration inequalities play an important role in understanding and deriving this result. Series expansions of Gaussian variables and transformations of their reproducing kernel Hilbert spaces under linear maps are useful tools to compute the concentration function.

Journal ArticleDOI
TL;DR: The theory of ghost imaging is developed in a Gaussian-state framework that both encompasses prior work on thermal-state and biphoton-state imagers and provides a complete understanding of the boundary between classical and quantum behavior in such systems.
Abstract: The theory of ghost imaging is developed in a Gaussian-state framework that both encompasses prior work (on thermal-state and biphoton-state imagers) and provides a complete understanding of the boundary between classical and quantum behavior in such systems. The core of this analysis is the expression derived for the photocurrent-correlation image obtained using a general Gaussian-state source. This image is expressed in terms of the phase-insensitive and phase-sensitive cross correlations between the two detected fields, plus a background. Because any pair of cross correlations is obtainable with classical Gaussian states, the image does not carry a quantum signature per se. However, if the image characteristics of classical and nonclassical Gaussian-state sources with identical autocorrelation functions are compared, the nonclassical source provides resolution improvement in its near field and field-of-view improvement in its far field.

Proceedings ArticleDOI
22 Sep 2008
TL;DR: A new algorithm for estimating the signal-to-noise ratio (SNR) of speech signals, called WADA-SNR (Waveform Amplitude Distribution Analysis) is introduced, which shows significantly less bias and less variability with respect to the type of noise compared to the standard NIST STNR algorithm.
Abstract: In this paper, we introduce a new algorithm for estimating the signal-to-noise ratio (SNR) of speech signals, called WADA-SNR (Waveform Amplitude Distribution Analysis). In this algorithm we assume that the amplitude distribution of clean speech can be approximated by the Gamma distribution with a shaping parameter of 0.4, and that an additive noise signal is Gaussian. Based on this assumption, we can estimate the SNR by examining the amplitude distribution of the noise-corrupted speech. We evaluate the performance of the WADA-SNR algorithm on databases corrupted by white noise, background music, and interfering speech. The WADA-SNR algorithm shows significantly less bias and less variability with respect to the type of noise compared to the standard NIST STNR algorithm. In addition, the algorithm is quite computationally efficient.

Journal ArticleDOI
TL;DR: By combining the Minkowski inequality and the quantum Chernoff bound, easy-to-compute upper bounds for the error probability affecting the optimal discrimination of Gaussian states are derived.
Abstract: By combining the Minkowski inequality and the quantum Chernoff bound, we derive easy-to-compute upper bounds for the error probability affecting the optimal discrimination of Gaussian states. In particular, these bounds are useful when the Gaussian states are unitarily inequivalent, i.e., they differ in their symplectic invariants.

Journal ArticleDOI
TL;DR: The limit of the Gaussian operations and classical communication in the problem of quantum state discrimination is addressed and it is shown that the optimal Gaussian strategy for the discrimination of the binary phase shift keyed (BPSK) coherent signal is a simple homodyne detection.
Abstract: We address the limit of Gaussian operations and classical communication in the problem of quantum state discrimination. We show that the optimal Gaussian strategy for the discrimination of the binary phase shift keyed (BPSK) coherent signal is a simple homodyne detection. We also propose practical near-optimal quantum receivers that beat the BPSK homodyne limit at all signal powers. Our scheme is simple and does not require real-time electrical feedback.
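The homodyne limit discussed above can be compared against the Helstrom (optimal quantum) bound using standard textbook expressions for BPSK coherent states; this sketch restates known formulas, not the paper's receiver designs:

```python
import numpy as np
from scipy.special import erfc

# Error probabilities for discriminating the BPSK coherent states
# |alpha> and |-alpha>, with N = |alpha|^2 the mean photon number:
#   homodyne limit:  P_hom = erfc(sqrt(2N)) / 2
#   Helstrom bound:  P_hel = (1 - sqrt(1 - exp(-4N))) / 2
N = np.linspace(0.1, 2.0, 20)
p_hom = 0.5 * erfc(np.sqrt(2 * N))
p_hel = 0.5 * (1 - np.sqrt(1 - np.exp(-4 * N)))
print(p_hom[0], p_hel[0])   # the homodyne error sits above the Helstrom bound
```

The gap between the two curves is the room that near-optimal quantum receivers, such as the ones proposed in the paper, can exploit.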