
Showing papers on "Gaussian published in 2006"


Journal ArticleDOI
TL;DR: A new notion of an enhanced broadcast channel is introduced and used jointly with the entropy power inequality to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is optimal for the nondegraded case.
Abstract: The Gaussian multiple-input multiple-output (MIMO) broadcast channel (BC) is considered. The dirty-paper coding (DPC) rate region is shown to coincide with the capacity region. To that end, a new notion of an enhanced broadcast channel is introduced and is used jointly with the entropy power inequality to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is optimal for the nondegraded case. Furthermore, the capacity region is characterized under a wide range of input constraints, accounting, as special cases, for the total power and the per-antenna power constraints.

1,899 citations


Journal ArticleDOI
TL;DR: Under linear, Gaussian assumptions on the target dynamics and birth process, the posterior intensity at any time step is a Gaussian mixture and closed-form recursions for propagating the means, covariances, and weights of the constituent Gaussian components of the posterior intensity are derived.
Abstract: A new recursive algorithm is proposed for jointly estimating the time-varying number of targets and their states from a sequence of observation sets in the presence of data association uncertainty, detection uncertainty, noise, and false alarms. The approach involves modelling the respective collections of targets and measurements as random finite sets and applying the probability hypothesis density (PHD) recursion to propagate the posterior intensity, which is a first-order statistic of the random finite set of targets, in time. At present, there is no closed-form solution to the PHD recursion. This paper shows that under linear, Gaussian assumptions on the target dynamics and birth process, the posterior intensity at any time step is a Gaussian mixture. More importantly, closed-form recursions for propagating the means, covariances, and weights of the constituent Gaussian components of the posterior intensity are derived. The proposed algorithm combines these recursions with a strategy for managing the number of Gaussian components to increase efficiency. This algorithm is extended to accommodate mildly nonlinear target dynamics using approximation strategies from the extended and unscented Kalman filters.
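
The closed-form recursion propagates a finite set of weighted Gaussian components. The sketch below shows just the prediction step under the linear-Gaussian assumptions; the function name and the (weight, mean, covariance) birth-term format are illustrative, not the paper's notation, and the measurement-update step is omitted.

```python
import numpy as np

def gm_phd_predict(weights, means, covs, F, Q, p_survive, birth):
    """Prediction step of the Gaussian-mixture PHD recursion: every
    surviving component is pushed through the linear dynamics (F, Q),
    and birth components are appended. (Update step omitted.)"""
    w = [p_survive * wi for wi in weights] + [bw for bw, _, _ in birth]
    m = [F @ mi for mi in means] + [bm for _, bm, _ in birth]
    P = [F @ Pi @ F.T + Q for Pi in covs] + [bP for _, _, bP in birth]
    return w, m, P

# One component in a 1-D constant-velocity model, plus one birth component.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
birth = [(0.1, np.zeros(2), np.eye(2))]
w, m, P = gm_phd_predict([0.9], [np.array([0.0, 1.0])], [np.eye(2)],
                         F, Q, 0.99, birth)
# sum(w) is the expected number of targets after prediction.
```

The component-management strategy mentioned in the abstract then prunes low-weight components and merges near-duplicates to keep the mixture size bounded.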

1,805 citations


Journal ArticleDOI
TL;DR: This work shows how to discover the complete causal structure of continuous-valued data, under the assumptions that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) disturbance variables have non-Gaussian distributions of non-zero variances.
Abstract: In recent years, several methods have been proposed for the discovery of causal structure from non-experimental data. Such methods make various assumptions on the data generating process to facilitate its identification from purely observational data. Continuing this line of research, we show how to discover the complete causal structure of continuous-valued data, under the assumptions that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) disturbance variables have non-Gaussian distributions of non-zero variances. The solution relies on the use of the statistical method known as independent component analysis, and does not require any pre-specified time-ordering of the variables. We provide a complete Matlab package for performing this LiNGAM analysis (short for Linear Non-Gaussian Acyclic Model), and demonstrate the effectiveness of the method using artificially generated data and real-world data.
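
The role of assumption (c) is visible already in the two-variable case: with non-Gaussian disturbances, only the true causal direction leaves residuals independent of the regressor. The toy sketch below is not the LiNGAM package itself; the squared-correlation independence check is a deliberately crude stand-in for a proper independence test.

```python
import numpy as np

# Two-variable linear non-Gaussian model: x -> y with uniform disturbances.
rng = np.random.default_rng(0)
n = 20000
x = rng.uniform(-1, 1, n)          # non-Gaussian cause
e = rng.uniform(-1, 1, n)          # non-Gaussian disturbance
y = x + e                          # true causal model

def dependence(u, resid):
    """Crude independence check: correlation between squares. It is
    near zero for independent variables but nonzero for variables that
    are merely uncorrelated."""
    return abs(np.corrcoef(u ** 2, resid ** 2)[0, 1])

res_fwd = y - (np.cov(x, y)[0, 1] / x.var()) * x   # regress y on x
res_bwd = x - (np.cov(x, y)[0, 1] / y.var()) * y   # regress x on y
```

In the causal direction the residual is (essentially) the disturbance e and is independent of x; in the reverse regression the residual is a mix of x and e and remains dependent on y. Exploiting this asymmetry via independent component analysis is what lets LiNGAM orient every edge.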

1,196 citations


Journal ArticleDOI
TL;DR: In this article, the authors use a technique referred to as Gaussian decomposition for processing and calibrating data acquired with a novel small-footprint airborne laser scanner that digitises the complete waveform of the laser pulses scattered back from the Earth's surface.
Abstract: In this study we use a technique referred to as Gaussian decomposition for processing and calibrating data acquired with a novel small-footprint airborne laser scanner that digitises the complete waveform of the laser pulses scattered back from the Earth's surface. This paper presents the theoretical basis for modelling the waveform as a series of Gaussian pulses. In this way the range, amplitude, and width are provided for each pulse. Using external reference targets it is also possible to calibrate the data. The calibration equation takes into account the range, the amplitude, and pulse width and provides estimates of the backscatter cross-section of each target. The applicability of this technique is demonstrated based on RIEGL LMS-Q560 data acquired over the city of Vienna.
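
Modelling the waveform as a series of Gaussian pulses reduces to a nonlinear least-squares fit. A minimal sketch on synthetic data with two overlapping echoes follows; the model function and starting values are illustrative and not tied to the RIEGL LMS-Q560 processing chain.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    """Waveform model: sum of two Gaussian pulses, each described by
    amplitude, position (range), and width."""
    return (a1 * np.exp(-0.5 * ((t - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - m2) / s2) ** 2))

# Synthetic full-waveform return: two overlapping echoes plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 500)
wave = two_gaussians(t, 120.0, 30.0, 3.0, 60.0, 55.0, 5.0)
wave += rng.normal(0.0, 1.0, t.size)

# Fit; p0 gives rough starting guesses for the two pulses.
popt, _ = curve_fit(two_gaussians, t, wave, p0=(100, 28, 2, 50, 60, 4))
```

The fitted positions give the ranges, and the fitted amplitudes and widths feed the calibration equation for the backscatter cross-section.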

715 citations


Journal ArticleDOI
TL;DR: A surface-based version of the cluster size exclusion method used for multiple comparisons correction and a new method for generating regions of interest on the cortical surface using a sliding threshold of cluster exclusion followed by cluster growth are implemented.

703 citations


Journal ArticleDOI
TL;DR: A generalization of the cluster-state model of quantum computation to continuous-variable systems, along with a proposal for an optical implementation using squeezed-light sources, linear optics, and homodyne detection, is described.
Abstract: We describe a generalization of the cluster-state model of quantum computation to continuous-variable systems, along with a proposal for an optical implementation using squeezed-light sources, linear optics, and homodyne detection. For universal quantum computation, a nonlinear element is required. This can be satisfied by adding to the toolbox any single-mode non-Gaussian measurement, while the initial cluster state itself remains Gaussian. Homodyne detection alone suffices to perform an arbitrary multimode Gaussian transformation via the cluster state. We also propose an experiment to demonstrate cluster-based error reduction when implementing Gaussian operations.

653 citations


Journal ArticleDOI
TL;DR: This paper gives the power allocation policy that maximizes the mutual information over parallel channels with arbitrary input distributions, and admits a graphical interpretation, referred to as mercury/waterfilling, which generalizes the waterfilling solution and allows retaining some of its intuition.
Abstract: The mutual information of independent parallel Gaussian-noise channels is maximized, under an average power constraint, by independent Gaussian inputs whose power is allocated according to the waterfilling policy. In practice, discrete signaling constellations with limited peak-to-average ratios (m-PSK, m-QAM, etc.) are used in lieu of the ideal Gaussian signals. This paper gives the power allocation policy that maximizes the mutual information over parallel channels with arbitrary input distributions. Such a policy admits a graphical interpretation, referred to as mercury/waterfilling, which generalizes the waterfilling solution and allows retaining some of its intuition. The relationship between mutual information of Gaussian channels and nonlinear minimum mean-square error (MMSE) proves key to solving the power allocation problem.
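
For reference, the Gaussian-input special case that mercury/waterfilling generalizes can be sketched in a few lines, via bisection on the water level. This is a hedged illustration of classic waterfilling, not the paper's algorithm.

```python
import numpy as np

def waterfilling(noise_powers, total_power, tol=1e-9):
    """Classic waterfilling: split total_power across parallel
    Gaussian-noise channels, allocating p_i = max(mu - n_i, 0) where
    the water level mu is chosen so the powers sum to total_power."""
    n = np.asarray(noise_powers, dtype=float)
    lo, hi = n.min(), n.max() + total_power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - n, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - n, 0.0)

# Three subchannels: the noisiest one gets no power at this budget.
levels = waterfilling([1.0, 2.0, 4.0], 3.0)
```

With discrete constellations, the paper's policy replaces this closed-form level condition by one involving the nonlinear MMSE of each input distribution, which is where the mercury/waterfilling picture comes in.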

542 citations


Posted Content
TL;DR: In this article, the authors consider Bayesian regression with normal and double-exponential priors as forecasting methods based on large panels of time series and show that these forecasts are highly correlated with principal component forecasts and that they perform equally well for a wide range of prior choices.
Abstract: This paper considers Bayesian regression with normal and double-exponential priors as forecasting methods based on large panels of time series. We show that, empirically, these forecasts are highly correlated with principal component forecasts and that they perform equally well for a wide range of prior choices. Moreover, we study the asymptotic properties of the Bayesian regression under a Gaussian prior under the assumption that data are quasi-collinear to establish a criterion for setting parameters in a large cross-section.

488 citations


Journal ArticleDOI
TL;DR: A neural network particle finding algorithm and a new four-frame predictive tracking algorithm are proposed for three-dimensional Lagrangian particle tracking (LPT) and the best algorithms are verified to work in a real experimental environment.
Abstract: A neural network particle finding algorithm and a new four-frame predictive tracking algorithm are proposed for three-dimensional Lagrangian particle tracking (LPT). A quantitative comparison of these and other algorithms commonly used in three-dimensional LPT is presented. Weighted averaging, one-dimensional and two-dimensional Gaussian fitting, and the neural network scheme are considered for determining particle centers in digital camera images. When the signal to noise ratio is high, the one-dimensional Gaussian estimation scheme is shown to achieve a good combination of accuracy and efficiency, while the neural network approach provides greater accuracy when the images are noisy. The effect of camera placement on both the yield and accuracy of three-dimensional particle positions is investigated, and it is shown that at least one camera must be positioned at a large angle with respect to the other cameras to minimize errors. Finally, the problem of tracking particles in time is studied. The nearest neighbor algorithm is compared with a three-frame predictive algorithm and two four-frame algorithms. These four algorithms are applied to particle tracks generated by direct numerical simulation both with and without a method to resolve tracking conflicts. The new four-frame predictive algorithm with no conflict resolution is shown to give the best performance. Finally, the best algorithms are verified to work in a real experimental environment.
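
The one-dimensional Gaussian estimation scheme for particle centers is typically a three-point fit in log-intensity around the brightest pixel; for a noise-free Gaussian spot it is exact, since the log of a Gaussian is a parabola. A sketch (function name illustrative):

```python
import numpy as np

def gaussian_peak_1d(intensity, k):
    """Three-point 1-D Gaussian fit for sub-pixel peak location.
    k is the index of the brightest pixel; fitting a parabola to the
    log-intensities at k-1, k, k+1 gives the refined position."""
    lm = np.log(intensity[k - 1])
    l0 = np.log(intensity[k])
    lp = np.log(intensity[k + 1])
    return k + 0.5 * (lm - lp) / (lm - 2.0 * l0 + lp)

# Sample a Gaussian spot centered at 10.3 on an integer pixel grid.
x = np.arange(20)
img = np.exp(-0.5 * ((x - 10.3) / 1.5) ** 2)
k = int(np.argmax(img))
```

With noisy images the log amplifies noise in the dim neighboring pixels, which is consistent with the abstract's finding that the neural network approach wins at low signal-to-noise ratio.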

439 citations


Journal ArticleDOI
TL;DR: This work proposes an approach to directly measure the non‐Gaussian property of water diffusion, characterized by a four‐dimensional matrix referred to as the diffusion kurtosis tensor, and shows tissue‐specific geometry for different brain regions and the potential of identifying multiple fiber structures in a single voxel.
Abstract: Conventional diffusion tensor imaging (DTI) measures water diffusion parameters based on the assumption that the spin displacement distribution is a Gaussian function. However, water movement in biological tissue is often non-Gaussian and this non-Gaussian behavior may contain useful information related to tissue structure and pathophysiology. Here we propose an approach to directly measure the non-Gaussian property of water diffusion, characterized by a four-dimensional matrix referred to as the diffusion kurtosis tensor. This approach does not require the complete measurement of the displacement distribution function and, therefore, is more time efficient compared with the q-space imaging technique. A theoretical framework of the DK calculation is established, and experimental results are presented for humans obtained within a clinically feasible time of about 10 min. The resulting kurtosis maps are shown to be robust and reproducible. Directionally-averaged apparent kurtosis coefficients (AKC, a unitless parameter) are 0.74 +/- 0.03, 1.09 +/- 0.01 and 0.84 +/- 0.02 for gray matter, white matter and thalamus, respectively. The three-dimensional kurtosis angular plots show tissue-specific geometry for different brain regions and demonstrate the potential of identifying multiple fiber structures in a single voxel. Diffusion kurtosis imaging is a useful method to study non-Gaussian diffusion behavior and can provide complementary information to that of DTI.
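
The quantity at the heart of the method is excess kurtosis, which vanishes for Gaussian displacements and is positive for, e.g., multi-compartment diffusion. The following is a scalar numerical illustration only, not the four-dimensional kurtosis-tensor estimation from diffusion-weighted images.

```python
import numpy as np

def excess_kurtosis(x):
    """K = E[(x - mu)^4] / E[(x - mu)^2]^2 - 3; zero for a Gaussian."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

rng = np.random.default_rng(0)
N = 200_000
# Gaussian displacements (free diffusion) vs. a two-compartment mixture
# of a slow and a fast population, a common non-Gaussian toy model.
gauss = rng.normal(0.0, 1.0, N)
mixture = np.where(rng.random(N) < 0.5,
                   rng.normal(0.0, 0.5, N),
                   rng.normal(0.0, 1.5, N))
```

For this mixture the population excess kurtosis works out to about 1.9, while the Gaussian sample sits near zero, mirroring the contrast the kurtosis maps exploit.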

413 citations


Journal Article
TL;DR: The Wigner distribution function of optical signals and systems can be interpreted directly in terms of geometrical optics, as mentioned in this paper, and the concept can be applied to partially coherent light as well.
Abstract: The Wigner distribution function of optical signals and systems has been introduced. The concept of such functions is not restricted to deterministic signals, but can be applied to partially coherent light as well. Although derived from Fourier optics, the description of signals and systems by means of Wigner distribution functions can be interpreted directly in terms of geometrical optics: (i) for quadratic-phase signals (and, if complex rays are allowed to appear, for Gaussian signals, too), it leads immediately to the curvature matrix of the signal; (ii) for Luneburg’s first-order system, it directly yields the ray transformation matrix of the system; (iii) for the propagation of quadratic-phase signals through first-order systems, it results in the well-known bilinear transformation of the signal’s curvature matrix. The zeroth-, first-, and second-order moments of the Wigner distribution function have been interpreted in terms of the energy, the center of gravity, and the effective width of the signal, respectively. The propagation of these moments through first-order systems has been derived. Since a Gaussian signal is completely described by its three lowest-order moments, the propagation of such a signal through first-order systems is known as well.

Journal ArticleDOI
TL;DR: It is proved that the Gaussian unitary attack is optimal for all the considered bounds on the key rate when the first and second moments of the canonical variables involved are known by the honest parties.
Abstract: We analyze the asymptotic security of the family of Gaussian modulated quantum key distribution protocols for continuous-variables systems. We prove that the Gaussian unitary attack is optimal for all the considered bounds on the key rate when the first and second moments of the canonical variables involved are known by the honest parties.

Journal ArticleDOI
TL;DR: The walk-sum perspective leads to a better understanding of Gaussian belief propagation and to stronger results for its convergence in loopy graphs.
Abstract: We present a new framework based on walks in a graph for analysis and inference in Gaussian graphical models. The key idea is to decompose the correlation between each pair of variables as a sum over all walks between those variables in the graph. The weight of each walk is given by a product of edgewise partial correlation coefficients. This representation holds for a large class of Gaussian graphical models which we call walk-summable. We give a precise characterization of this class of models, and relate it to other classes including diagonally dominant, attractive, non-frustrated, and pairwise-normalizable. We provide a walk-sum interpretation of Gaussian belief propagation in trees and of the approximate method of loopy belief propagation in graphs with cycles. The walk-sum perspective leads to a better understanding of Gaussian belief propagation and to stronger results for its convergence in loopy graphs.
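
Walk-summability admits a simple operational test: normalize the information (precision) matrix J to the partial-correlation matrix R and check that the spectral radius of the entrywise absolute matrix |R| is below one. A sketch under that characterization (example matrices are illustrative):

```python
import numpy as np

def is_walk_summable(J):
    """Walk-summability check for a Gaussian graphical model with
    information (precision) matrix J: the spectral radius of the
    entrywise absolute partial-correlation matrix |R| must be < 1."""
    d = 1.0 / np.sqrt(np.diag(J))
    R = np.eye(len(J)) - d[:, None] * J * d[None, :]  # zero diagonal
    return np.max(np.abs(np.linalg.eigvals(np.abs(R)))) < 1.0

J_chain = np.array([[2.0, 1.0, 0.0],
                    [1.0, 2.0, 1.0],
                    [0.0, 1.0, 2.0]])   # diagonally dominant: walk-summable
J_cycle = np.array([[1.0, 0.6, 0.6],
                    [0.6, 1.0, 0.6],
                    [0.6, 0.6, 1.0]])   # valid model, but not walk-summable
```

The first example illustrates the diagonally dominant subclass mentioned in the abstract; the second is positive definite yet fails the test, which is exactly the regime where loopy belief propagation convergence becomes delicate.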

Journal ArticleDOI
TL;DR: The general applicability of random walk particle tracking in comparison to the standard transport models is discussed and it is concluded that in advection-dominated problems using a high spatial discretization or requiring the performance of many model runs, RWPT represents a good alternative for modelling contaminant transport.

Journal ArticleDOI
TL;DR: It is proved that for every given covariance matrix the distillable secret key rate and the entanglement, if measured appropriately, are minimized by Gaussian states, implying that Gaussian encodings are optimal for the transmission of classical information through bosonic channels, if the capacity is additive.
Abstract: We investigate Gaussian quantum states in view of their exceptional role within the space of all continuous variables states. A general method for deriving extremality results is provided and applied to entanglement measures, secret key distillation and the classical capacity of bosonic quantum channels. We prove that for every given covariance matrix the distillable secret key rate and the entanglement, if measured appropriately, are minimized by Gaussian states. This result leads to a clearer picture of the validity of frequently made Gaussian approximations. Moreover, it implies that Gaussian encodings are optimal for the transmission of classical information through bosonic channels, if the capacity is additive.

Journal ArticleDOI
TL;DR: A common basis is provided for the first time to analyze and compare Gaussian filters with respect to accuracy, efficiency, and stability, and to help design more efficient filters by employing better numerical integration methods.
Abstract: This paper proposes a numerical-integration perspective on Gaussian filters. A Gaussian filter is an approximation of Bayesian inference under the assumption that the posterior probability density is Gaussian. The Gaussian filters in the literature were derived from very different backgrounds; from the numerical-integration viewpoint, the various versions differ only in their specific treatments of approximating the multiple statistical integrals. A common basis is provided for the first time to analyze and compare Gaussian filters with respect to accuracy, efficiency, and stability. This study is expected to facilitate the selection of appropriate Gaussian filters in practice and to help design more efficient filters by employing better numerical integration methods.
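
As one concrete instance of the numerical-integration viewpoint, the unscented filter's sigma-point rule is a quadrature that integrates polynomials up to third order exactly under a Gaussian. A sketch, using the common textbook kappa parameterization:

```python
import numpy as np

def unscented_points(mean, cov, kappa=1.0):
    """Sigma points and weights of the unscented transform for a
    Gaussian N(mean, cov) -- one of the integration rules that
    distinguish the various Gaussian filters."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = ([mean]
           + [mean + L[:, i] for i in range(n)]
           + [mean - L[:, i] for i in range(n)])
    weights = [kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n)
    return np.array(pts), np.array(weights)

mean, cov = np.array([0.0]), np.array([[1.0]])
pts, w = unscented_points(mean, cov, kappa=2.0)
# The rule reproduces the second moment of N(0, 1) exactly: E[x^2] = 1.
approx = np.sum(w * pts[:, 0] ** 2)
```

Swapping this rule for, say, Gauss-Hermite quadrature changes the accuracy/efficiency trade-off while leaving the surrounding Bayesian recursion untouched, which is the point of the common framework.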

Posted Content
TL;DR: In this paper, the authors considered the Gaussian Multiple Access Wire-Tap Channel (GMAC-WT) where multiple users communicate with an intended receiver in the presence of an intelligent and informed wire-tapper who receives a degraded version of the signal at the receiver.
Abstract: We consider the Gaussian Multiple Access Wire-Tap Channel (GMAC-WT). In this scenario, multiple users communicate with an intended receiver in the presence of an intelligent and informed wire-tapper who receives a degraded version of the signal at the receiver. We define suitable security measures for this multi-access environment. Using codebooks generated randomly according to a Gaussian distribution, achievable secrecy rate regions are identified using superposition coding and TDMA coding schemes. An upper bound for the secrecy sum-rate is derived, and our coding schemes are shown to achieve the sum capacity. Numerical results showing the new rate region are presented and compared with the capacity region of the Gaussian Multiple-Access Channel (GMAC) with no secrecy constraints, quantifying the price paid for secrecy.
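
The single-user degraded Gaussian wiretap channel gives the basic flavor of the rates involved: the achievable secrecy rate is the difference between the main channel's capacity and the wire-tapper's. The sketch below is that classical single-user building block, not the paper's multi-access region.

```python
import numpy as np

def gaussian_secrecy_rate(snr_main, snr_tap):
    """Secrecy rate of a degraded Gaussian wiretap link in bits per
    channel use: main-channel capacity minus wire-tapper capacity,
    floored at zero."""
    rate = 0.5 * np.log2(1.0 + snr_main) - 0.5 * np.log2(1.0 + snr_tap)
    return max(rate, 0.0)
```

The "price paid for secrecy" quantified in the abstract is visible here already: the rate is strictly below the main channel's capacity whenever the wire-tapper's SNR is nonzero.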

Proceedings ArticleDOI
22 Mar 2006
TL;DR: In this paper, the best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements are proved; unlike previously known guarantees, which involve huge constants despite the very good practical performance of the algorithms, the new bounds come with reasonable constants.
Abstract: This paper proves best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements. We consider Fourier measurements (random sample of frequencies of f) and random Gaussian measurements. The method for reconstruction that has recently gained momentum in the sparse approximation theory is to relax this highly non-convex problem to a convex problem, and then solve it as a linear program. What the best guarantees are for the reconstruction problem to be equivalent to its convex relaxation is an open question. Recent work shows that the number of measurements k(r,n) needed to exactly reconstruct any r-sparse signal f of length n from its linear measurements with convex relaxation is usually O(r polylog(n)). However, known guarantees involve huge constants, in spite of the very good performance of the algorithms in practice. In an attempt to reconcile theory with practice, we prove the first guarantees for universal measurements (i.e., which work for all sparse functions) with reasonable constants. For Gaussian measurements, k(r,n) ≲ 11.7 r [1.5 + log(n/r)], which is optimal up to constants. For Fourier measurements, we prove the best known bound k(r,n) = O(r log(n) · log²(r) · log(r log n)), which is optimal within the log log n and log³ r factors. Our arguments are based on the technique of geometric functional analysis and probability in Banach spaces.
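
The convex relaxation in question replaces ℓ0 by ℓ1 minimization, which becomes a linear program after the standard positive/negative split x = u − v. A small sketch with random Gaussian measurements; the dimensions are chosen for illustration and sit well inside the regime where recovery succeeds in practice.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, k, r = 60, 30, 3                 # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, r, replace=False)] = rng.normal(size=r)
A = rng.normal(size=(k, n)) / np.sqrt(k)   # random Gaussian measurements
b = A @ x

# l1 minimization as an LP: minimize sum(u + v) with x = u - v, u, v >= 0,
# subject to A(u - v) = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
```

Here k = 30 measurements recover a 3-sparse signal of length 60 exactly, consistent with the good practical performance the abstract refers to.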

Journal ArticleDOI
TL;DR: This paper examines the asymptotic performance of MUSIC-like algorithms for estimating directions of arrival (DOA) of narrowband complex noncircular sources, using closed-form expressions of the covariance of the asymptotic distribution of different projection matrices to provide a unifying framework for investigating the asymptotic performance of arbitrary subspace-based algorithms.
Abstract: This paper examines the asymptotic performance of MUSIC-like algorithms for estimating directions of arrival (DOA) of narrowband complex noncircular sources. Using closed-form expressions of the covariance of the asymptotic distribution of different projection matrices, it provides a unifying framework for investigating the asymptotic performance of arbitrary subspace-based algorithms valid for Gaussian or non-Gaussian and complex circular or noncircular sources. We also derive different robustness properties from the asymptotic covariance of the estimated DOA given by such algorithms. These results are successively applied to four algorithms: to two attractive MUSIC-like algorithms previously introduced in the literature, to an extension of these algorithms, and to an optimally weighted MUSIC algorithm proposed in this paper. Numerical examples illustrate the performance of the studied algorithms compared to the asymptotically minimum variance (AMV) algorithms introduced as benchmarks.

Journal ArticleDOI
TL;DR: An alternative to the Gaussian-n (G1, G2, and G3) composite methods of computing molecular energies is proposed and is named the "correlation consistent composite approach" (ccCA, ccCA-CBS-1, ccCA-CBS-2), which uses the correlation consistent polarized valence (cc-pVXZ) basis sets.
Abstract: Article discussing research on the correlation consistent composite approach (ccCA), an alternative to the Gaussian-n methods.

Journal ArticleDOI
TL;DR: An automated algorithm for tissue segmentation of noisy, low-contrast magnetic resonance (MR) images of the brain is presented and the applicability of the framework can be extended to diseased brains and neonatal brains.
Abstract: An automated algorithm for tissue segmentation of noisy, low-contrast magnetic resonance (MR) images of the brain is presented. A mixture model composed of a large number of Gaussians is used to represent the brain image. Each tissue is represented by a large number of Gaussian components to capture the complex tissue spatial layout. The intensity of a tissue is considered a global feature and is incorporated into the model through tying of all the related Gaussian parameters. The expectation-maximization (EM) algorithm is utilized to learn the parameter-tied, constrained Gaussian mixture model. An elaborate initialization scheme is suggested to link the set of Gaussians per tissue type, such that each Gaussian in the set has similar intensity characteristics with minimal overlapping spatial supports. Segmentation of the brain image is achieved by the affiliation of each voxel to the component of the model that maximized the a posteriori probability. The presented algorithm is used to segment three-dimensional, T1-weighted, simulated and real MR images of the brain into three different tissues, under varying noise conditions. Results are compared with state-of-the-art algorithms in the literature. The algorithm does not use an atlas for initialization or parameter learning. Registration processes are therefore not required and the applicability of the framework can be extended to diseased brains and neonatal brains.
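
At its core the model is a Gaussian mixture fitted by EM. Below is a bare-bones one-dimensional sketch on synthetic "tissue" intensities, without the parameter tying, spatial modelling, or elaborate initialization that the paper adds; the quantile-based initialization here is a simple stand-in.

```python
import numpy as np

def em_gmm_1d(x, n_comp, n_iter=200):
    """Plain EM for a one-dimensional Gaussian mixture: alternate
    between posterior responsibilities (E-step) and re-estimating
    weights, means, and variances (M-step)."""
    mu = np.quantile(x, (np.arange(n_comp) + 0.5) / n_comp)  # spread-out init
    var = np.full(n_comp, x.var())
    w = np.full(n_comp, 1.0 / n_comp)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
                 - 0.5 * np.log(2 * np.pi * var) + np.log(w))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture parameters from responsibilities.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return np.sort(mu)

# Three synthetic "tissue" intensity clusters.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(60, 5, 1000),
                       rng.normal(110, 8, 1000),
                       rng.normal(160, 6, 1000)])
means = em_gmm_1d(data, 3)
```

In the paper, segmentation then assigns each voxel to the tissue whose tied component set maximizes the posterior probability, rather than to a single Gaussian as this sketch would.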

Journal ArticleDOI
TL;DR: This paper presents a general Bayesian approach for estimating a Gaussian copula model that can handle any combination of discrete and continuous marginals, and generalises Gaussian graphical models to the Gaussian copula framework.
Abstract: A Gaussian copula regression model gives a tractable way of handling a multivariate regression when some of the marginal distributions are non-Gaussian. Our paper presents a general Bayesian approach for estimating a Gaussian copula model that can handle any combination of discrete and continuous marginals, and generalises Gaussian graphical models to the Gaussian copula framework. Posterior inference is carried out using a novel and efficient simulation method. The methods in the paper are applied to simulated and real data.
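
The mechanics of a Gaussian copula are compact: correlated normals are pushed through the normal CDF to uniforms, then through the inverse CDFs of the desired marginals. A sampling sketch follows, with illustrative marginals (one continuous, one discrete); the paper's Bayesian estimation machinery is a separate matter.

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(corr, marginals, n, seed=0):
    """Draw n samples whose dependence is a Gaussian copula with
    correlation matrix `corr` and whose marginals are the given
    frozen scipy.stats distributions."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=n)
    u = stats.norm.cdf(z)          # uniform marginals, Gaussian dependence
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
# Exponential and Poisson marginals, coupled through the Gaussian copula.
samples = gaussian_copula_sample(corr, [stats.expon(), stats.poisson(4)], 5000)
```

This separation of marginals from dependence is what lets the model mix discrete and continuous variables, and the correlation matrix plays the role the precision matrix plays in Gaussian graphical models.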

Journal ArticleDOI
TL;DR: In this article, two theoretical copula-based models are presented: a Gaussian and a non-Gaussian, respectively, for four quality parameters, chloride, sulfate, pH, and nitrate, obtained from a large-scale groundwater quality measurement network in Baden-Wurttemberg (Germany).
Abstract: Groundwater quality parameters exhibit considerable spatial variability. Geostatistical methods including the assessment of variograms are usually used to characterize this variability. Copulas offer an interesting opportunity to describe dependence structures for multivariate distributions. Bivariate empirical copulas can be used as an alternative to variograms and covariance functions for the description of the spatial variability. Rank correlations of these copulas express the strength of the dependence independently of the marginal distributions and thus offer an alternative to the variograms. Empirical copulas for four quality parameters, chloride, sulfate, pH, and nitrate, obtained from a large-scale groundwater quality measurement network in Baden-Wurttemberg (Germany) are calculated. They indicate that the spatial dependence structure of the investigated parameters is not Gaussian. Two theoretical copula-based models are presented in this paper: a Gaussian and a non-Gaussian. Bootstrap-based statistical tests using stochastic simulation of the multivariate distributions are used to investigate the appropriateness of the models. According to the test results the Gaussian copula is rejected for most of the parameters while the non-Gaussian alternative is not rejected in most cases.

Journal ArticleDOI
TL;DR: Two distinct explicit descriptions of the RKHSs corresponding to Gaussian RBF kernels are given and some consequences are discussed and an orthonormal basis for these spaces is presented.
Abstract: Although Gaussian radial basis function (RBF) kernels are one of the most often used kernels in modern machine learning methods such as support vector machines (SVMs), little is known about the structure of their reproducing kernel Hilbert spaces (RKHSs). In this work, two distinct explicit descriptions of the RKHSs corresponding to Gaussian RBF kernels are given and some consequences are discussed. Furthermore, an orthonormal basis for these spaces is presented. Finally, it is discussed how the results can be used for analyzing the learning performance of SVMs.
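
For reference, the kernel whose RKHS is being described is k(x, y) = exp(−γ‖x − y‖²); its implicit feature map is infinite-dimensional, which is precisely why the structure of the RKHS is nontrivial. A minimal sketch of computing the kernel matrix:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2),
    computed via the expanded squared-distance identity."""
    sq = (np.sum(X * X, axis=1)[:, None] + np.sum(Y * Y, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clamp tiny negatives

X = np.array([[0.0], [1.0]])
K = rbf_kernel(X, X)
```

Every diagonal entry equals one and the matrix is symmetric positive semidefinite, the properties that make it a reproducing kernel in the first place.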

Journal ArticleDOI
TL;DR: A Markov random field image segmentation model, which aims at combining color and texture features through Bayesian estimation via combinatorial optimization (simulated annealing), and a parameter estimation method using the EM algorithm is proposed.

Journal ArticleDOI
TL;DR: In this article, the authors discuss the nonlinear propagation of spacecraft trajectory uncertainties via solutions of the Fokker-Planck equation and derive an analytic expression of a nonlinear trajectory solution using a higher-order Taylor series approach.
Abstract: This paper discusses the nonlinear propagation of spacecraft trajectory uncertainties via solutions of the Fokker–Planck equation. We first discuss the solutions of the Fokker–Planck equation for a deterministic system with a Gaussian boundary condition. Next, we derive an analytic expression of a nonlinear trajectory solution using a higher-order Taylor series approach, discuss the region of convergence for the solutions, and apply the result to spacecraft applications. Such applications consist of nonlinear propagation of the mean and covariance matrix, design of statistically correct trajectories, and nonlinear statistical targeting. The two-body and Hill three-body problems are chosen as examples and realistic initial uncertainty models are considered. The results show that the nonlinear map of the trajectory uncertainties can be approximated in an analytic form, and there exists an optimal place to perform a correction maneuver, which is not found using the linear method.

Journal ArticleDOI
TL;DR: By applying the screening technique to the Heyd-Scuseria-Ernzerhof short-range Coulomb hybrid density functional, the method achieves a computational efficiency comparable with that of standard nonhybrid density functional calculations.
Abstract: We present an efficient algorithm for the evaluation of short-range Hartree-Fock exchange energies and geometry gradients in Gaussian basis sets. Our method uses a hierarchy of screening levels to eliminate negligible two-electron integrals whose evaluation is the fundamental computational bottleneck of the procedure. By applying our screening technique to the Heyd-Scuseria-Ernzerhof [J. Chem. Phys. 118, 8207 (2003)] short-range Coulomb hybrid density functional, we achieve a computational efficiency comparable with that of standard nonhybrid density functional calculations.

Journal ArticleDOI
TL;DR: A threshold gradient descent (TGD) regularization procedure for estimating the sparse precision matrix in the setting of Gaussian graphical models is introduced and demonstrated to identify biologically meaningful genetic networks based on microarray gene expression data.
Abstract: SUMMARY Large-scale microarray gene expression data provide the possibility of constructing genetic networks or biological pathways. Gaussian graphical models have been suggested to provide an effective method for constructing such genetic networks. However, most of the available methods for constructing Gaussian graphs do not account for the sparsity of the networks and are computationally more demanding or infeasible, especially in the settings of high dimension and low sample size. We introduce a threshold gradient descent (TGD) regularization procedure for estimating the sparse precision matrix in the setting of Gaussian graphical models and demonstrate its application to identifying genetic networks. Such a procedure is computationally feasible and can easily incorporate prior biological knowledge about the network structure. Simulation results indicate that the proposed method yields a better estimate of the precision matrix than the procedures that fail to account for the sparsity of the graphs. We also present the results on inference of a gene network for isoprenoid biosynthesis in Arabidopsis thaliana. These results demonstrate that the proposed procedure can indeed identify biologically meaningful genetic networks based on microarray gene expression data.
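
The object being estimated is the precision (inverse covariance) matrix, whose zero pattern encodes the graph: entry (i, j) is zero exactly when variables i and j are conditionally independent given the rest. A sketch of that correspondence on a three-node chain follows; the TGD estimator itself is not reproduced here.

```python
import numpy as np

# A Gaussian graphical model on a chain 0 - 1 - 2: the precision matrix
# has a zero in entry (0, 2), i.e. no edge between nodes 0 and 2.
prec = np.array([[2.0, -1.0, 0.0],
                 [-1.0, 2.0, -1.0],
                 [0.0, -1.0, 2.0]])
cov = np.linalg.inv(prec)

# Sample from the model and invert the sample covariance: the (0, 2)
# entry is near zero while the true edges show up clearly.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(3), cov, size=50_000)
emp_prec = np.linalg.inv(np.cov(X.T))
```

With many genes and few arrays the naive inversion above is unstable or impossible, which is why sparsity-enforcing procedures such as TGD regularization are needed to recover the zero pattern reliably.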

Journal ArticleDOI
TL;DR: Fitting of non-Gaussian hierarchical random effects models by approximate maximum likelihood can be made automatic to the same extent that Bayesian model fitting can be automated by the program BUGS.

Proceedings ArticleDOI
09 Jul 2006
TL;DR: This work presents a method for secrecy extraction from jointly Gaussian random sources that has applications in enhancing security for wireless communications and is closely related to some well known lossy source coding problems.
Abstract: We present a method for secrecy extraction from jointly Gaussian random sources. The approach is motivated by and has applications in enhancing security for wireless communications. The problem is also found to be closely related to some well known lossy source coding problems.