
Showing papers on "Gaussian published in 2017"


Journal ArticleDOI
TL;DR: In this article, the authors present an updated summary of the penalized pixel-fitting (pPXF) method, which is used to extract the stellar and gas kinematics, as well as the stellar population of galaxies via full spectrum fitting.
Abstract: I start by providing an updated summary of the penalized pixel-fitting (pPXF) method, which is used to extract the stellar and gas kinematics, as well as the stellar population of galaxies, via full spectrum fitting. I then focus on the problem of extracting the kinematics when the velocity dispersion $\sigma$ is smaller than the velocity sampling $\Delta V$, which is generally, by design, close to the instrumental dispersion $\sigma_{\rm inst}$. The standard approach consists of convolving templates with a discretized kernel, while fitting for its parameters. This is obviously very inaccurate when $\sigma<\Delta V/2$, due to undersampling. Oversampling can prevent this, but it has drawbacks. Here I present a more accurate and efficient alternative. It avoids the evaluation of the under-sampled kernel, and instead directly computes its well-sampled analytic Fourier transform, for use with the convolution theorem. A simple analytic transform exists when the kernel is described by the popular Gauss-Hermite parametrization (which includes the Gaussian as a special case) for the line-of-sight velocity distribution. I describe how this idea was implemented in a significant upgrade to the publicly available pPXF software. The key advantage of the new approach is that it provides accurate velocities regardless of $\sigma$. This is important e.g. for spectroscopic surveys targeting galaxies with $\sigma\ll\sigma_{\rm inst}$, for galaxy redshift determinations, or for measuring line-of-sight velocities of individual stars. The proposed method could also be used to fix Gaussian convolution algorithms used in today's popular software packages.
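
As a rough illustration of the idea (not the actual pPXF implementation), the following numpy sketch convolves a log-rebinned template with a purely Gaussian LOSVD by multiplying its FFT with the analytic Fourier transform of the kernel, so no undersampled discrete kernel is ever evaluated; the function name and the circular treatment of the spectrum edges are simplifications.

import numpy as np

def convolve_gaussian_losvd(template, v, sigma, velscale):
    """Convolve a log-rebinned template with a Gaussian LOSVD by multiplying in
    Fourier space with the kernel's analytic transform (convolution theorem).
    v and sigma are in km/s; velscale is the km/s per pixel of the spectrum.
    The FFT makes the convolution circular, a simplification of the real case."""
    n = template.size
    vp, sp = v / velscale, sigma / velscale        # kernel parameters in pixel units
    omega = 2 * np.pi * np.fft.rfftfreq(n)         # angular frequency per pixel
    kernel_ft = np.exp(-1j * omega * vp - 0.5 * (sp * omega) ** 2)
    return np.fft.irfft(np.fft.rfft(template) * kernel_ft, n)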

866 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: A half-wave Gaussian quantizer (HWGQ) is proposed for forward approximation and shown to have an efficient implementation, by exploiting the statistics of network activations and batch normalization operations, and to achieve much closer performance to full-precision networks than previously available low-precision networks.
Abstract: The problem of quantizing the activations of a deep neural network is considered. An examination of the popular binary quantization approach shows that this consists of approximating a classical non-linearity, the hyperbolic tangent, by two functions: a piecewise constant sign function, which is used in feedforward network computations, and a piecewise linear hard tanh function, used in the backpropagation step during network learning. The problem of approximating the widely used ReLU non-linearity is then considered. A half-wave Gaussian quantizer (HWGQ) is proposed for forward approximation and shown to have an efficient implementation, by exploiting the statistics of network activations and batch normalization operations. To overcome the problem of gradient mismatch, due to the use of different forward and backward approximations, several piece-wise backward approximators are then investigated. The implementation of the resulting quantized network, denoted as HWGQ-Net, is shown to achieve much closer performance to full precision networks, such as AlexNet, ResNet, GoogLeNet and VGG-Net, than previously available low-precision networks, with 1-bit binary weights and 2-bit quantized activations.
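
A toy numpy sketch of the two ingredients described above: the sign/hard-tanh forward-backward mismatch of binary quantization, and a nearest-level half-wave quantizer. The level-fitting step of HWGQ (optimizing the levels for the half-wave Gaussian statistics of batch-normalized activations) is omitted, and all names are illustrative rather than taken from the paper.

import numpy as np

def binary_forward(x):
    """Forward pass: the piecewise-constant sign non-linearity."""
    return np.sign(x)

def binary_backward(x, grad_out):
    """Backward pass: gradient of the piecewise-linear hard-tanh surrogate,
    i.e. the incoming gradient is passed through only where |x| <= 1."""
    return grad_out * (np.abs(x) <= 1.0)

def half_wave_quantize(x, levels):
    """Half-wave quantizer: non-positive inputs map to 0, positive inputs snap
    to the nearest of a few positive levels (HWGQ fits such levels to the
    half-wave Gaussian statistics of batch-normalized activations)."""
    levels = np.asarray(levels, dtype=float)
    q = np.zeros_like(x, dtype=float)
    pos = x > 0
    nearest = np.abs(x[pos][:, None] - levels[None, :]).argmin(axis=1)
    q[pos] = levels[nearest]
    return q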

520 citations


Book
20 Jul 2017
TL;DR: In this paper, the authors address the theory of Gaussian states, operations, and dynamics in great depth and breadth, through a novel approach that embraces both the Hilbert space and phase space descriptions.
Abstract: Quantum Continuous Variables introduces the theory of continuous variable quantum systems, from its foundations based on the framework of Gaussian states to modern developments, including its applications to quantum information and forthcoming quantum technologies. This new book addresses the theory of Gaussian states, operations, and dynamics in great depth and breadth, through a novel approach that embraces both the Hilbert space and phase space descriptions. The volume includes coverage of entanglement theory and quantum information protocols, and their connection with relevant experimental set-ups. General techniques for non-Gaussian manipulations also emerge as the treatment unfolds, and are demonstrated with specific case studies. This book will be of interest to graduate students looking to familiarise themselves with the field, in addition to experienced researchers eager to enhance their understanding of its theoretical methods. It will also appeal to experimentalists searching for a rigorous but accessible treatment of the theory in the area.

411 citations


Journal ArticleDOI
TL;DR: The protocol for Gaussian Boson Sampling with single-mode squeezed states is presented and it is shown that the proposal with the Hafnian matrix function can retain the higher photon number contributions at the input.
Abstract: Boson sampling has emerged as a tool to explore the advantages of quantum over classical computers as it does not require universal control over the quantum system, which favors current photonic experimental platforms. Here, we introduce Gaussian Boson sampling, a classically hard-to-solve problem that uses squeezed states as a nonclassical resource. We relate the probability to measure specific photon patterns from a general Gaussian state in the Fock basis to a matrix function called the Hafnian, which answers the last remaining question of sampling from Gaussian states. Based on this result, we design Gaussian Boson sampling, a #P hard problem, using squeezed states. This demonstrates that Boson sampling from Gaussian states is possible, with significant advantages in the photon generation probability, compared to existing protocols.
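
For concreteness, the Hafnian of a symmetric 2n x 2n matrix is the sum, over all perfect matchings of 2n vertices, of the products of the matched entries. A brute-force Python sketch of this definition (exponential time, for illustration only; practical GBS analyses use far more efficient algorithms):

import numpy as np

def hafnian(A):
    """Hafnian of a symmetric matrix of even size, by direct summation over all
    perfect matchings (exponential time; for illustration only)."""
    n = A.shape[0]
    if n % 2:
        return 0.0

    def matchings(verts):
        if not verts:
            yield []
            return
        i, rest = verts[0], verts[1:]
        for k, j in enumerate(rest):
            for m in matchings(rest[:k] + rest[k + 1:]):
                yield [(i, j)] + m

    return sum(np.prod([A[i, j] for i, j in m]) for m in matchings(list(range(n))))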

311 citations


Proceedings ArticleDOI
25 Jun 2017
TL;DR: An uncoordinated Gaussian multiple access channel with a relatively large number of active users within each block is considered, and a low complexity coding scheme is proposed, which is based on a combination of compute-and-forward and coding for a binary adder channel.
Abstract: We consider an uncoordinated Gaussian multiple access channel with a relatively large number of active users within each block. A low complexity coding scheme is proposed, which is based on a combination of compute-and-forward and coding for a binary adder channel. For a wide regime of parameters of practical interest, the energy-per-bit required by each user in the proposed scheme is significantly smaller than that required by popular solutions such as slotted-ALOHA and treating interference as noise.

216 citations


Journal ArticleDOI
TL;DR: In this paper, a completely elementary and self-contained proof of convergence of Gaussian multiplicative chaos is given, and it is shown that the limiting random measure is nontrivial in the entire subcritical phase and that the limit is universal (i.e., independent of the regularisation of the underlying field).
Abstract: A completely elementary and self-contained proof of convergence of Gaussian multiplicative chaos is given. The argument shows further that the limiting random measure is nontrivial in the entire subcritical phase $(\gamma < \sqrt{2d} )$ and that the limit is universal (i.e., the limiting measure is independent of the regularisation of the underlying field).
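
For context, the object whose convergence is established is the standard normalised exponential of a regularised log-correlated Gaussian field $X_\varepsilon$ (stated here in the usual form found in the literature, not quoted from the paper):

$\mu_\gamma(dx) = \lim_{\varepsilon \to 0} \exp\!\big(\gamma X_\varepsilon(x) - \tfrac{\gamma^2}{2}\,\mathbb{E}[X_\varepsilon(x)^2]\big)\, dx, \qquad 0 < \gamma < \sqrt{2d}.$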

207 citations


Journal ArticleDOI
TL;DR: In this paper, the results of Gaussian-based ground-state and excited-state coupled-cluster theory with single and double excitations for three-dimensional solids are presented.
Abstract: We present the results of Gaussian-based ground-state and excited-state equation-of-motion coupled-cluster theory with single and double excitations for three-dimensional solids. We focus on diamond and silicon, which are paradigmatic covalent semiconductors. In addition to ground-state properties (the lattice constant, bulk modulus, and cohesive energy), we compute the quasiparticle band structure and band gap. We sample the Brillouin zone with up to 64 k-points using norm-conserving pseudopotentials and polarized double- and triple-ζ basis sets, leading to canonical coupled-cluster calculations with as many as 256 electrons in 2176 orbitals.

206 citations


Journal ArticleDOI
TL;DR: This work considers a new type of Gaussian de Finetti reduction that exploits the invariance of some continuous-variable protocols under the action of the unitary group U(n) (instead of the symmetric group S_{n} as in usual de Finetti theorems), and introduces generalized SU(2,2) coherent states.
Abstract: Establishing the security of continuous-variable quantum key distribution against general attacks in a realistic finite-size regime is an outstanding open problem in the field of theoretical quantum cryptography if we restrict our attention to protocols that rely on the exchange of coherent states. Indeed, techniques based on the uncertainty principle are not known to work for such protocols, and the usual tools based on de Finetti reductions only provide security for unrealistically large block lengths. We address this problem here by considering a new type of Gaussian de Finetti reduction that exploits the invariance of some continuous-variable protocols under the action of the unitary group $U(n)$ (instead of the symmetric group ${S}_{n}$ as in usual de Finetti theorems), and by introducing generalized $SU(2,2)$ coherent states. Crucially, combined with an energy test, this allows us to truncate the Hilbert space globally instead of at the single-mode level as in previous approaches that failed to provide security in realistic conditions. Our reduction shows that it is sufficient to prove the security of these protocols against Gaussian collective attacks in order to obtain security against general attacks, thereby confirming rigorously the widely held belief that Gaussian attacks are indeed optimal against such protocols.

199 citations


Journal ArticleDOI
TL;DR: This comment points out errors in the formulation of Theorem 1 of the Gaussian-mixture probability hypothesis density (PHD) filter for extended target tracking by Granstrom, Lundquist, and Orguner, and gives a correct formulation.
Abstract: We comment on the errors in the formulation of Theorem 1 given in Extended Target Tracking Using a Gaussian-Mixture PHD Filter by K. Granstrom, C. Lundquist, and U. Orguner, and give a correct formulation.

192 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered adaptive minimax and computationally tractable estimation of leading sparse canonical coefficient vectors in high dimensions under a Gaussian canonical pair model, and established separate minimax estimation rates for the canonical coefficient vectors of each set of random variables under no structural assumption on marginal covariance matrices.
Abstract: Canonical correlation analysis is a classical technique for exploring the relationship between two sets of variables. It has important applications in analyzing high dimensional datasets originating from genomics, imaging and other fields. This paper considers adaptive minimax and computationally tractable estimation of leading sparse canonical coefficient vectors in high dimensions. Under a Gaussian canonical pair model, we first establish separate minimax estimation rates for canonical coefficient vectors of each set of random variables under no structural assumption on marginal covariance matrices. Second, we propose a computationally feasible estimator to attain the optimal rates adaptively under an additional sample size condition. Finally, we show that a sample size condition of this kind is needed for any randomized polynomial-time estimator to be consistent, assuming hardness of certain instances of the planted clique detection problem. As a byproduct, we obtain the first computational lower bounds for sparse PCA under the Gaussian single spiked covariance model.

156 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper gives the first Statistical Query lower bounds for learning Gaussian mixture models and for robustly learning a single unknown Gaussian, implying in particular that, for Statistical Query algorithms, the computational complexity of learning mixtures is inherently exponential in the dimension of the latent space even though there is no such information-theoretic barrier.
Abstract: We describe a general technique that yields the first Statistical Query lower bounds for a range of fundamental high-dimensional learning problems involving Gaussian distributions. Our main results are for the problems of (1) learning Gaussian mixture models (GMMs), and (2) robust (agnostic) learning of a single unknown Gaussian distribution. For each of these problems, we show a super-polynomial gap between the (information-theoretic) sample complexity and the computational complexity of any Statistical Query algorithm for the problem. Statistical Query (SQ) algorithms are a class of algorithms that are only allowed to query expectations of functions of the distribution rather than directly access samples. This class of algorithms is quite broad: a wide range of known algorithmic techniques in machine learning are known to be implementable using SQs. Moreover, for the unsupervised learning problems studied in this paper, all known algorithms with non-trivial performance guarantees are SQ or are easily implementable using SQs. Our SQ lower bound for Problem (1) is qualitatively matched by known learning algorithms for GMMs. At a conceptual level, this result implies that, as far as SQ algorithms are concerned, the computational complexity of learning GMMs is inherently exponential in the dimension of the latent space, even though there is no such information-theoretic barrier. Our lower bound for Problem (2) implies that the accuracy of the robust learning algorithm in \cite{DiakonikolasKKLMS16} is essentially best possible among all polynomial-time SQ algorithms. On the positive side, we also give a new (SQ) learning algorithm for Problem (2) achieving the information-theoretically optimal accuracy, up to a constant factor, whose running time essentially matches our lower bound. Our algorithm relies on a filtering technique generalizing \cite{DiakonikolasKKLMS16} that removes outliers based on higher-order tensors. Our SQ lower bounds are attained via a unified moment-matching technique that is useful in other contexts and may be of broader interest. Our technique yields nearly-tight lower bounds for a number of related unsupervised estimation problems. Specifically, for the problems of (3) robust covariance estimation in spectral norm, and (4) robust sparse mean estimation, we establish a quadratic statistical-computational tradeoff for SQ algorithms, matching known upper bounds. Finally, our technique can be used to obtain tight sample complexity lower bounds for high-dimensional testing problems. Specifically, for the classical problem of robustly testing an unknown mean (known covariance) Gaussian, our technique implies an information-theoretic sample lower bound that scales linearly in the dimension. Our sample lower bound matches the sample complexity of the corresponding robust learning problem and separates the sample complexity of robust testing from standard (non-robust) testing. This separation is surprising because such a gap does not exist for the corresponding learning problem.
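
To make the access model concrete, a statistical query algorithm never sees raw samples; it repeatedly asks for expectations of functions of the distribution, each answered to within a tolerance tau. A minimal simulated SQ oracle in Python (the names and the uniform-noise model are illustrative; a real SQ oracle only guarantees the error bound):

import numpy as np

rng = np.random.default_rng(0)

def statistical_query(samples, f, tau):
    """Answer the query 'E[f(x)]' to within tolerance tau, instead of giving
    the learning algorithm direct access to the samples."""
    estimate = np.mean([f(x) for x in samples])
    return estimate + rng.uniform(-tau, tau)   # any error bounded by tau is allowed

# example query: the second moment of the first coordinate of a Gaussian sample
samples = rng.normal(size=(10_000, 5))
m2 = statistical_query(samples, lambda x: x[0] ** 2, tau=0.01)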

Journal ArticleDOI
TL;DR: A Bayesian linear inversion methodology based on Gaussian mixture models and its application to geophysical inverse problems is presented in this article, where a recursive exact solution to an approximation of the posterior distribution of the inverse problem is proposed.
Abstract: A Bayesian linear inversion methodology based on Gaussian mixture models and its application to geophysical inverse problems are presented in this paper. The proposed inverse method is based on a Bayesian approach under the assumptions of a Gaussian mixture random field for the prior model and a Gaussian linear likelihood function. The model for the latent discrete variable is defined to be a stationary first-order Markov chain. In this approach, a recursive exact solution to an approximation of the posterior distribution of the inverse problem is proposed. A Markov chain Monte Carlo algorithm can be used to efficiently simulate realizations from the correct posterior model. Two inversion studies based on real well log data are presented, and the main results are the posterior distributions of the reservoir properties of interest, the corresponding predictions and prediction intervals, and a set of conditional realizations. The first application is a seismic inversion study for the prediction of lithological facies, P- and S-impedance, where an improvement of 30% in the root-mean-square error of the predictions compared to the traditional Gaussian inversion is obtained. The second application is a rock physics inversion study for the prediction of lithological facies, porosity, and clay volume, where predictions slightly improve compared to the Gaussian inversion approach.
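
A minimal sketch of the conjugate computation such a method builds on: under a Gaussian-mixture prior and a Gaussian linear likelihood, the posterior is again a Gaussian mixture with updated components and reweighted mixture weights. This ignores the Markov-chain coupling of the latent facies variable described in the abstract, and the function name is illustrative.

import numpy as np
from scipy.stats import multivariate_normal

def gmm_linear_posterior(weights, means, covs, G, Sigma_e, y):
    """Posterior of x under the prior sum_k w_k N(mu_k, S_k) and the Gaussian
    linear likelihood y = G x + e, e ~ N(0, Sigma_e): again a Gaussian mixture,
    with each component updated in closed form and reweighted by its evidence."""
    post_w, post_m, post_S = [], [], []
    for w, mu, S in zip(weights, means, covs):
        Sy = G @ S @ G.T + Sigma_e           # marginal covariance of y for this component
        K = S @ G.T @ np.linalg.inv(Sy)      # gain matrix
        post_m.append(mu + K @ (y - G @ mu))
        post_S.append(S - K @ G @ S)
        post_w.append(w * multivariate_normal(G @ mu, Sy).pdf(y))
    post_w = np.array(post_w)
    return post_w / post_w.sum(), post_m, post_S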

Journal ArticleDOI
TL;DR: This paper presents a tutorial on the main Gaussian filters that are used for state estimation of stochastic dynamic systems and describes the main concept of state estimation based on the Bayesian paradigm and Gaussian assumption of the noise.
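
As a reference point for the filters covered by such a tutorial, the linear special case (the Kalman filter) propagates and updates a Gaussian belief in closed form; a minimal sketch, with the symbols following the usual textbook notation rather than the paper's:

import numpy as np

def kalman_step(m, P, y, F, H, Q, R):
    """One predict/update cycle of the Kalman filter, the linear closed-form
    member of the family of Gaussian filters."""
    # prediction: propagate the Gaussian belief through the linear dynamics
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # update: condition the predicted Gaussian on the new measurement y
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m_pred)) - K @ H) @ P_pred
    return m_new, P_new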

Proceedings Article
10 Apr 2017
TL;DR: It is proved that the method estimates the sources for general smooth mixing nonlinearities, assuming the sources have sufficiently strong temporal dependencies, and these dependencies are in a certain way different from dependencies found in Gaussian processes.
Abstract: We develop a nonlinear generalization of independent component analysis (ICA) or blind source separation, based on temporal dependencies (e.g. autocorrelations). We introduce a nonlinear generative model where the independent sources are assumed to be temporally dependent, non-Gaussian, and stationary, and we observe arbitrarily nonlinear mixtures of them. We develop a method for estimating the model (i.e. separating the sources) based on logistic regression in a neural network which learns to discriminate between a short temporal window of the data vs. a temporal window of temporally permuted data. We prove that the method estimates the sources for general smooth mixing nonlinearities, assuming the sources have sufficiently strong temporal dependencies, and these dependencies are in a certain way different from dependencies found in Gaussian processes. For Gaussian (and similar) sources, the method estimates the nonlinear part of the mixing. We thus provide the first rigorous and general proof of identifiability of nonlinear ICA for temporally dependent sources, together with a practical method for its estimation.
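
A schematic sketch of the discrimination task described above, with scikit-learn standing in for the paper's network: real temporal windows are labelled 1, windows built from time-permuted data are labelled 0, and a small classifier is trained to tell them apart (its hidden layer plays the role of the learned feature extractor). Window width, architecture and names are placeholders.

import numpy as np
from sklearn.neural_network import MLPClassifier

def make_windows(x, width):
    """Stack `width` consecutive observations into flattened temporal windows."""
    return np.stack([x[t:t + width].ravel() for t in range(len(x) - width + 1)])

def train_contrast(x, width=2, seed=0):
    """Train a small network to discriminate real temporal windows (label 1)
    from windows of time-permuted data (label 0)."""
    rng = np.random.default_rng(seed)
    real = make_windows(x, width)
    perm = make_windows(x[rng.permutation(len(x))], width)
    X = np.vstack([real, perm])
    y = np.r_[np.ones(len(real)), np.zeros(len(perm))]
    return MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)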

Proceedings Article
19 Nov 2017
TL;DR: This paper proposes two models that explicitly structure the latent space around K components corresponding to different types of image content, and combine components to create priors for images that contain multiple types of content simultaneously (e.g., several kinds of objects).
Abstract: This paper explores image caption generation using conditional variational auto-encoders (CVAEs). Standard CVAEs with a fixed Gaussian prior yield descriptions with too little variability. Instead, we propose two models that explicitly structure the latent space around K components corresponding to different types of image content, and combine components to create priors for images that contain multiple types of content simultaneously (e.g., several kinds of objects). Our first model uses a Gaussian Mixture model (GMM) prior, while the second one defines a novel Additive Gaussian (AG) prior that linearly combines component means. We show that both models produce captions that are more diverse and more accurate than a strong LSTM baseline or a “vanilla” CVAE with a fixed Gaussian prior, with AG-CVAE showing particular promise.

Journal ArticleDOI
TL;DR: In this article, a new outlier-robust Student's t based Gaussian approximate filter is proposed to address the heavy-tailed process and measurement noises induced by the outlier measurements of velocity and range in cooperative localization of autonomous underwater vehicles (AUVs).
Abstract: In this paper, a new outlier-robust Student's t based Gaussian approximate filter is proposed to address the heavy-tailed process and measurement noises induced by the outlier measurements of velocity and range in cooperative localization of autonomous underwater vehicles (AUVs). The state vector, scale matrices, and degrees of freedom (DOF) parameters are jointly estimated based on the variational Bayesian approach by using the constructed Student's t based hierarchical Gaussian state-space model. The performances of the proposed filter and existing filters are tested in the cooperative localization of an AUV through a lake trial. Experimental results illustrate that the proposed filter has better localization accuracy and robustness than existing state-of-the-art outlier-robust filters.

Proceedings ArticleDOI
21 Jul 2017
TL;DR: Experimental results on large scale region classification and fine-grained recognition tasks show that G2DeNet is superior to its counterparts, capable of achieving state-of-the-art performance.
Abstract: Recently, plugging trainable structural layers into deep convolutional neural networks (CNNs) as image representations has made promising progress. However, there has been little work on inserting parametric probability distributions, which can effectively model feature statistics, into deep CNNs in an end-to-end manner. This paper proposes a Global Gaussian Distribution embedding Network (G2DeNet) to take a step towards addressing this problem. The core of G2DeNet is a novel trainable layer of a global Gaussian as an image representation plugged into deep CNNs for end-to-end learning. The challenge is that the proposed layer involves Gaussian distributions whose space is not a linear space, which makes its forward and backward propagation non-intuitive and non-trivial. To tackle this issue, we employ a Gaussian embedding strategy which respects the structures of both Riemannian manifold and smooth group of Gaussians. Based on this strategy, we construct the proposed global Gaussian embedding layer and decompose it into two sub-layers: the matrix partition sub-layer decoupling the mean vector and covariance matrix entangled in the embedding matrix, and the square-rooted, symmetric positive definite matrix sub-layer. In this way, we can derive the partial derivatives associated with the proposed structural layer and thus allow backpropagation of gradients. Experimental results on large scale region classification and fine-grained recognition tasks show that G2DeNet is superior to its counterparts, capable of achieving state-of-the-art performance.

Journal ArticleDOI
TL;DR: A feature guided Gaussian mixture model (GMM) is proposed for the non-rigid registration of retinal images that is robust in different registration tasks and outperforms several competing approaches, especially when data is severely degraded.

Journal ArticleDOI
TL;DR: In this article, a hierarchical Bayesian approach is proposed that expands the likelihood around the maximum posterior of the linear modes and constructs the window and covariance matrix such that the resulting initial power spectrum estimator is explicitly unbiased and nearly optimal, assuming Gaussian-distributed initial modes.
Abstract: One of the main unsolved problems of cosmology is how to maximize the extraction of information from nonlinear data. If the data are nonlinear the usual approach is to employ a sequence of statistics (N-point statistics, counting statistics of clusters, density peaks or voids etc.), along with the corresponding covariance matrices. However, this approach is computationally prohibitive and has not been shown to be exhaustive in terms of information content. Here we instead develop a hierarchical Bayesian approach, expanding the likelihood around the maximum posterior of linear modes, which we solve for using optimization methods. By integrating out the modes using perturbative expansion of the likelihood we construct an initial power spectrum estimator, which for a fixed forward model contains all the cosmological information if the initial modes are Gaussian distributed. We develop a method to construct the window and covariance matrix such that the estimator is explicitly unbiased and nearly optimal. We then generalize the method to include the forward model parameters, including cosmological and nuisance parameters, and primordial non-Gaussianity. We apply the method in the simplified context of nonlinear structure formation, using either simplified 2-LPT dynamics or N-body simulations as the nonlinear mapping between linear and nonlinear density, and 2-LPT dynamics in the optimization steps used to reconstruct the initial density modes. We demonstrate that the method gives an unbiased estimator of the initial power spectrum, providing among other things a near optimal reconstruction of linear baryonic acoustic oscillations.

Journal ArticleDOI
TL;DR: A tighter bound is developed for the decoy-state method, which yields a smaller failure probability and results in a higher key rate and increases the maximum distance over which secure key exchange is possible.
Abstract: The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and that obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.

Journal ArticleDOI
TL;DR: It is proved that a monogamy relation akin to the generalized Coffman-Kundu-Wootters inequality holds quantitatively for a recently introduced measure of Gaussian steering, pinning down the role of multipartite steering for quantum communication.
Abstract: We derive laws for the distribution of quantum steering among different parties in multipartite Gaussian states under Gaussian measurements. We prove that a monogamy relation akin to the generalized Coffman-Kundu-Wootters inequality holds quantitatively for a recently introduced measure of Gaussian steering. We then define the residual Gaussian steering, stemming from the monogamy inequality, as an indicator of collective steering-type correlations. For pure three-mode Gaussian states, the residual acts as a quantifier of genuine multipartite steering, and is interpreted operationally in terms of the guaranteed key rate in the task of secure quantum secret sharing. Optimal resource states for the latter protocol are identified, and their possible experimental implementation discussed. Our results pin down the role of multipartite steering for quantum communication.

Journal ArticleDOI
TL;DR: Because the boost potential is constructed using a harmonic function that follows a Gaussian distribution in GaMD, cumulant expansion to the second order can be applied to recover the original free energy profiles of proteins and other large biomolecules, which solves a long-standing energetic reweighting problem of the previous aMD method.
Abstract: Gaussian accelerated molecular dynamics (GaMD) is a recently developed enhanced sampling technique that provides efficient free energy calculations of biomolecules. Like the previous accelerated molecular dynamics (aMD), GaMD allows for “unconstrained” enhanced sampling without the need to set predefined collective variables and so is useful for studying complex biomolecular conformational changes such as protein folding and ligand binding. Furthermore, because the boost potential is constructed using a harmonic function that follows Gaussian distribution in GaMD, cumulant expansion to the second order can be applied to recover the original free energy profiles of proteins and other large biomolecules, which solves a long-standing energetic reweighting problem of the previous aMD method. Taken together, GaMD offers major advantages for both unconstrained enhanced sampling and free energy calculations of large biomolecules. Here, we have implemented GaMD in the NAMD package on top of the existing aMD featu...
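
A minimal sketch, in illustrative units, of the two ingredients the abstract refers to: the harmonic boost potential applied below a threshold energy E, and the second-order cumulant approximation to ln<exp(beta*dV)> used for energetic reweighting. Parameter names are placeholders, not the NAMD implementation.

import numpy as np

kB = 0.0019872041  # Boltzmann constant in kcal/(mol K); purely illustrative units

def gamd_boost(V, E, k):
    """Boost potential dV = 0.5*k*(E - V)^2, applied only when the potential V
    lies below the threshold energy E (zero boost otherwise)."""
    return np.where(V < E, 0.5 * k * (E - V) ** 2, 0.0)

def cumulant_reweight(dV_samples, T=300.0):
    """Second-order cumulant approximation to ln<exp(beta*dV)>, the quantity
    needed to recover the unbiased free energy from boosted sampling."""
    beta = 1.0 / (kB * T)
    dV = np.asarray(dV_samples, dtype=float)
    return beta * dV.mean() + 0.5 * beta**2 * dV.var()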

Journal ArticleDOI
TL;DR: Segmented contracted Gaussian basis sets optimized at the one-electron exact two-component (X2C) level - including a finite size model for the nucleus - are presented for elements up to Rn.
Abstract: Segmented contracted Gaussian basis sets optimized at the one-electron exact two-component (X2C) level – including a finite size model for the nucleus – are presented for elements up to Rn. These basis sets are counterparts for relativistic all-electron calculations to the Karlsruhe “def2” basis sets for nonrelativistic (H–Kr) or effective core potential based (Rb–Rn) treatments. For maximum consistency, the bases presented here were obtained from the latter by modification and reoptimization. Additionally we present extensions for self-consistent two-component calculations, required for the splitting of inner shells by spin–orbit coupling, and auxiliary basis sets for fitting the Coulomb part of the Fock matrix. Emphasis was put both on the accuracy of energies of atomic orbitals and on the accuracy of molecular properties. A large set of more than 300 molecules representing (nearly) all elements in their common oxidation states was used to assess the quality of the bases all across the periodic table.

Journal ArticleDOI
TL;DR: This work considers the problem of approximating sums of high-dimensional stationary time series by Gaussian vectors, using the framework of functional dependence measure, and considers an estimator for long-run covariance matrices and studies its convergence properties.
Abstract: We consider the problem of approximating sums of high dimensional stationary time series by Gaussian vectors, using the framework of functional dependence measure. The validity of the Gaussian approximation depends on the sample size $n$, the dimension $p$, the moment condition and the dependence of the underlying processes. We also consider an estimator for long-run covariance matrices and study its convergence properties. Our results allow constructing simultaneous confidence intervals for mean vectors of high-dimensional time series with asymptotically correct coverage probabilities. As an application, we propose a Kolmogorov–Smirnov-type statistic for testing distributions of high-dimensional time series.
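
As an illustration of the kind of object involved, a standard Bartlett-kernel (Newey-West style) long-run covariance estimator for a p-dimensional stationary series; this is a generic textbook estimator, not necessarily the specific estimator analysed in the paper.

import numpy as np

def long_run_cov(X, bandwidth):
    """Bartlett-kernel (Newey-West style) estimator of the long-run covariance
    matrix of a p-dimensional stationary series X with rows X_1, ..., X_n."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                         # lag-0 autocovariance
    for l in range(1, bandwidth + 1):
        G = Xc[l:].T @ Xc[:-l] / n            # lag-l autocovariance
        w = 1.0 - l / (bandwidth + 1)         # Bartlett weight
        S = S + w * (G + G.T)
    return S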

Journal ArticleDOI
TL;DR: This paper provides a novel method for handling non-Gaussian random variables from wind farms in power system decision making, formulating the problem as a chance-constrained economic dispatch that can be solved as a deterministic linear convex optimization with a global optimal solution.
Abstract: Extending traditional deterministic economic dispatch to incorporate significant stochastic wind power is an important but challenging task in today's power system decision making. In this paper, this issue is formulated as a chance-constrained economic dispatch (CCED) problem. Usually, in the presence of non-Gaussian correlated random variables, both the objective function and constraints are difficult to handle. To address this issue, this paper provides a novel method dealing with non-Gaussian random variables. First, the Gaussian mixture model is adopted to represent the joint probability density function of power output for multiple wind farms. Then, analytical formulae are derived that can be used for fast computation of partial derivatives of the objective function and transformation of chance constraints into linear ones. Thereafter, the CCED can be solved as a deterministic linear convex optimization with a global optimal solution. The effectiveness and efficiency of the proposed methodology are validated via a case study with a modified IEEE 39-bus system.

Posted Content
TL;DR: Two models are proposed that explicitly structure the latent space around $K$ components corresponding to different types of image content, and combine components to create priors for images that contain multiple types of content simultaneously (e.g., several kinds of objects).
Abstract: This paper explores image caption generation using conditional variational auto-encoders (CVAEs). Standard CVAEs with a fixed Gaussian prior yield descriptions with too little variability. Instead, we propose two models that explicitly structure the latent space around $K$ components corresponding to different types of image content, and combine components to create priors for images that contain multiple types of content simultaneously (e.g., several kinds of objects). Our first model uses a Gaussian Mixture model (GMM) prior, while the second one defines a novel Additive Gaussian (AG) prior that linearly combines component means. We show that both models produce captions that are more diverse and more accurate than a strong LSTM baseline or a "vanilla" CVAE with a fixed Gaussian prior, with AG-CVAE showing particular promise.

Journal ArticleDOI
01 Dec 2017
TL;DR: A statistical anomaly detection approach based on a Gaussian mixture model is proposed for detecting false data injection attacks; loads that change significantly over a day are detected much more easily than small load changes, and the detector can be trained regularly based on the updated load profile.
Abstract: One of the most addressed attacks in power networks is false data injection (FDI), which affects monitoring, fault detection, and state estimation integrity by tampering with measurement data. To detect such a devastating attack, the authors propose a statistical anomaly detection approach based on a Gaussian mixture model, while some appropriate machine learning approaches are evaluated for detecting FDI. It should be noted that a finite mixture model is a convex combination of probability density functions, and combining the properties of several probability functions makes mixture models capable of approximating any arbitrary distribution. Simulation results confirm the superior performance of the proposed method over conventional bad data detection (BDD) tests and the other learning approaches studied in this article. It should be noted that data which change significantly over a day can be highly clustered, and therefore detected much more easily than small changes in the loads. So without loss of generality, in the simulations it is assumed that the power demand follows a uniform distribution in a small range. However, the detector can be trained regularly based on the updated load profile.
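
A minimal scikit-learn sketch of the general recipe: fit a Gaussian mixture to measurement vectors from normal operation, set a log-likelihood threshold, and flag unusually unlikely measurements as suspected false data injection. The component count, threshold quantile and names are placeholders, not the authors' configuration.

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_detector(normal_measurements, n_components=4, quantile=0.01):
    """Fit a Gaussian mixture to measurement vectors from normal operation and
    set a log-likelihood threshold at a low quantile of the training scores."""
    gmm = GaussianMixture(n_components=n_components).fit(normal_measurements)
    threshold = np.quantile(gmm.score_samples(normal_measurements), quantile)
    return gmm, threshold

def is_suspected_fdi(gmm, threshold, z):
    """Flag a new measurement vector z whose likelihood under the model is
    unusually low as a suspected false data injection."""
    return gmm.score_samples(np.atleast_2d(z))[0] < threshold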

Proceedings ArticleDOI
14 May 2017
TL;DR: In this article, the authors present a Gaussian Boson sampling protocol with single-mode squeezed states, which eliminates heralding and shows that the Hafnian matrix function can retain the higher photon number contributions at the input.
Abstract: We present the protocol for Gaussian Boson Sampling with single-mode squeezed states. We eliminate heralding and show that our proposal with the Hafnian matrix function can retain the higher photon number contributions at the input.

Journal ArticleDOI
TL;DR: In this paper, Riemannian Gaussian distributions on the space of symmetric positive definite matrices are introduced, an exact expression of their probability density function is given for the first time, and mixtures of these distributions are applied to the classification of data in this space.
Abstract: Data, which lie in the space $\mathcal{P}_m$ of $m \times m$ symmetric positive definite matrices (sometimes called tensor data), play a fundamental role in applications, including medical imaging, computer vision, and radar signal processing. An open challenge, for these applications, is to find a class of probability distributions, which is able to capture the statistical properties of data in $\mathcal{P}_m$, as they arise in real-world situations. The present paper meets this challenge by introducing Riemannian Gaussian distributions on $\mathcal{P}_m$. Distributions of this kind were first considered by Pennec in 2006. However, the present paper gives an exact expression of their probability density function for the first time in existing literature. This leads to two original contributions. First, a detailed study of statistical inference for Riemannian Gaussian distributions, uncovering the connection between maximum likelihood estimation and the concept of Riemannian centre of mass, widely used in applications. Second, the derivation and implementation of an expectation-maximisation algorithm, for the estimation of mixtures of Riemannian Gaussian distributions. The paper applies this new algorithm to the classification of data in $\mathcal{P}_m$ (concretely, to the problem of texture classification, in computer vision), showing that it yields significantly better performance, in comparison to recent approaches.
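
To make the construction concrete, a short Python sketch of the affine-invariant Riemannian distance on SPD matrices and the resulting unnormalised Gaussian log-density exp(-d(Y, Ybar)^2 / (2 sigma^2)); the exact normalising constant, which is the paper's contribution, is omitted, and the helper names are illustrative.

import numpy as np
from scipy.linalg import sqrtm, logm

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B."""
    inv_sqrt_A = np.linalg.inv(sqrtm(A).real)        # A^{-1/2}
    M = inv_sqrt_A @ B @ inv_sqrt_A                  # SPD, so logm(M) is real symmetric
    return np.linalg.norm(logm(M).real, 'fro')

def log_density_unnormalised(Y, Ybar, sigma):
    """Unnormalised log-density -d(Y, Ybar)^2 / (2 sigma^2) of a Riemannian
    Gaussian centred at Ybar; the exact normalising constant is omitted."""
    return -riemannian_distance(Y, Ybar) ** 2 / (2 * sigma ** 2)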

Journal ArticleDOI
TL;DR: By reconstructing the covariance matrix of a continuous variable four-mode square Gaussian cluster state subject to asymmetric loss, the amount of bipartite steering with a variable number of modes per party is quantified, and recently introduced monogamy relations for Gaussian steerability are verified.
Abstract: Understanding how quantum resources can be quantified and distributed over many parties has profound applications in quantum communication. As one of the most intriguing features of quantum mechanics, Einstein-Podolsky-Rosen (EPR) steering is a useful resource for secure quantum networks. By reconstructing the covariance matrix of a continuous variable four-mode square Gaussian cluster state subject to asymmetric loss, we quantify the amount of bipartite steering with a variable number of modes per party, and verify recently introduced monogamy relations for Gaussian steerability, which establish quantitative constraints on the security of information shared among different parties. We observe a very rich structure for the steering distribution, and demonstrate one-way EPR steering of the cluster state under Gaussian measurements, as well as one-to-multimode steering. Our experiment paves the way for exploiting EPR steering in Gaussian cluster states as a valuable resource for multiparty quantum information tasks.