
Showing papers on "Gaussian published in 2012"


Book
02 Dec 2012
TL;DR: Huzinaga as mentioned in this paper provided information pertinent to Gaussian basis sets, with emphasis on lithium, radon, and important ions, and discussed the polarization functions prepared for lithium through radon for further improvement of the basis sets.
Abstract: Physical Sciences Data, Volume 16: Gaussian Basis Sets for Molecular Calculations (S. Huzinaga) provides information pertinent to Gaussian basis sets, with emphasis on lithium, radon, and important ions. This book discusses the polarization functions prepared for lithium through radon for further improvement of the basis sets. Organized into three chapters, this volume begins with an overview of the basis sets for the most stable negative and positive ions. This text then explores the total atomic energies given by the basis sets. Other chapters consider the distinction between diffuse functions and polarization functions. This book also presents the exponents of the polarization functions. The final chapter deals with the Gaussian basis sets. This book is a valuable resource for chemists, scientists, and research workers.

1,798 citations
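Gaussian primitives are favored in such basis sets because integrals between them have closed forms. A minimal sketch (not from the book; the function name and exponent values are illustrative) of the analytic overlap between two normalized s-type primitives:

```python
import math

def s_overlap(alpha, beta, R):
    """Overlap integral of two normalized s-type Gaussian primitives
    exp(-alpha*r^2) and exp(-beta*r^2), centered a distance R apart,
    using the standard Gaussian product closed form."""
    norm = (2 * alpha / math.pi) ** 0.75 * (2 * beta / math.pi) ** 0.75
    pref = (math.pi / (alpha + beta)) ** 1.5
    return norm * pref * math.exp(-alpha * beta / (alpha + beta) * R ** 2)

# A normalized primitive overlaps perfectly with itself:
print(round(s_overlap(0.5, 0.5, 0.0), 6))   # 1.0
```

The exponents tabulated in such volumes are what get plugged into formulas of this kind; the overlap decays smoothly as the centers separate.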


Journal ArticleDOI
TL;DR: In this article, the nonparanormal graphical models are used as a safe replacement of the popular Gaussian graphical models, even when the data are truly Gaussian, for graph recovery and parameter estimation.
Abstract: …exploiting Spearman's rho and Kendall's tau. We prove that the nonparanormal skeptic achieves the optimal parametric rates of convergence for both graph recovery and parameter estimation. This result suggests that the nonparanormal graphical models can be used as a safe replacement of the popular Gaussian graphical models, even when the data are truly Gaussian. Besides theoretical analysis, we also conduct thorough numerical simulations to compare the graph recovery performance of different estimators under both ideal and noisy settings. The proposed methods are then applied on a large-scale genomic dataset to illustrate their empirical usefulness. The R package huge implementing the proposed methods is available on the Comprehensive R Archive Network: http://cran.r-project.org/.

521 citations
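The rank-based trick at the heart of this approach can be sketched in a few lines: estimate Kendall's tau and map it through sin(π/2 · τ) to obtain a correlation estimate that is consistent under a Gaussian copula. A toy illustration with made-up data, not the package's implementation:

```python
import math
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau rank correlation (no ties assumed)."""
    n = len(x)
    concordant = sum(
        1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
        for i, j in combinations(range(n), 2)
    )
    return concordant / (n * (n - 1) / 2)

def skeptic_corr(x, y):
    """Rank-based correlation estimate sin(pi/2 * tau), the kind of
    statistic the nonparanormal skeptic plugs into Gaussian graphical
    model estimators in place of the Pearson correlation."""
    return math.sin(math.pi / 2 * kendall_tau(x, y))

x = [0.1, 0.5, 0.9, 1.3, 2.0]
y = [0.2, 0.7, 1.1, 1.9, 2.5]   # perfectly monotone in x
print(skeptic_corr(x, y))        # 1.0
```

Because only ranks enter, the estimate is unchanged by any monotone marginal transformation, which is what makes the procedure safe beyond the Gaussian case.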


Posted Content
TL;DR: In this paper, a Stata-specific treatment of generalized linear mixed models, also known as multilevel or hierarchical models, is presented, which allow fixed and random effects and are appropriate not only for continuous Gaussian responses but also for binary, count, and other types of limited dependent variables.
Abstract: This text is a Stata-specific treatment of generalized linear mixed models, also known as multilevel or hierarchical models. These models are "mixed" in the sense that they allow fixed and random effects and are "generalized" in the sense that they are appropriate not only for continuous Gaussian responses but also for binary, count, and other types of limited dependent variables.

474 citations


Journal ArticleDOI
TL;DR: It is commonly presumed that the random displacements that particles undergo as a result of the thermal jiggling of the environment follow a normal, or Gaussian, distribution, but non-Gaussian diffusion in soft materials is more prevalent than expected.
Abstract: It is commonly presumed that the random displacements that particles undergo as a result of the thermal jiggling of the environment follow a normal, or Gaussian, distribution. However, non-Gaussian diffusion in soft materials is more prevalent than expected.

473 citations


Journal Article
TL;DR: An R package named huge which provides easy-to-use functions for estimating high dimensional undirected graphs from data and allows the user to apply both lossless and lossy screening rules to scale up large-scale problems, making a tradeoff between computational and statistical efficiency.
Abstract: We describe an R package named huge which provides easy-to-use functions for estimating high dimensional undirected graphs from data. This package implements recent results in the literature, including Friedman et al. (2007), Liu et al. (2009, 2012) and Liu et al. (2010). Compared with the existing graph estimation package glasso, the huge package provides extra features: (1) instead of using Fortran, it is written in C, which makes the code more portable and easier to modify; (2) besides fitting Gaussian graphical models, it also provides functions for fitting high dimensional semiparametric Gaussian copula models; (3) it offers additional functions for data-dependent model selection, data generation and graph visualization; (4) a minor convergence problem of the graphical lasso algorithm is corrected; (5) the package allows the user to apply both lossless and lossy screening rules to scale up large-scale problems, making a tradeoff between computational and statistical efficiency.

440 citations


Journal ArticleDOI
TL;DR: Applications of CES distributions and the adaptive signal processors based on ML- and M-estimators of the scatter matrix are illustrated in radar detection problems and in array signal processing applications for Direction-of-Arrival estimation and beamforming.
Abstract: Complex elliptically symmetric (CES) distributions have been widely used in various engineering applications for which non-Gaussian models are needed. In this overview, circular CES distributions are surveyed, some new results are derived, and their applications, e.g., in radar and array signal processing, are discussed and illustrated with theoretical examples, simulations and analysis of real radar data. The maximum likelihood (ML) estimator of the scatter matrix parameter is derived and general conditions for its existence and uniqueness, and for convergence of the iterative fixed point algorithm are established. Specific ML-estimators for several CES distributions that are widely used in the signal processing literature are discussed in depth, including the complex t-distribution, K-distribution, the generalized Gaussian distribution and the closely related angular central Gaussian distribution. A generalization of ML-estimators, the M-estimators of the scatter matrix, is also discussed and asymptotic analysis is provided. Applications of CES distributions and the adaptive signal processors based on ML- and M-estimators of the scatter matrix are illustrated in radar detection problems and in array signal processing applications for Direction-of-Arrival (DOA) estimation and beamforming. Furthermore, experimental validation of the usefulness of CES distributions for modelling real radar data is given.

392 citations


01 Jan 2012
TL;DR: This article provides a simple and intuitive derivation of the Kalman filter, with the aim of teaching this useful tool to students from disciplines that do not require a strong mathematical background.
Abstract: This article provides a simple and intuitive derivation of the Kalman filter, with the aim of teaching this useful tool to students from disciplines that do not require a strong mathematical background. The most complicated level of mathematics required to understand this derivation is the ability to multiply two Gaussian functions together and reduce the result to a compact form. The Kalman filter is over 50 years old but is still one of the most important and common data fusion algorithms in use today. Named after Rudolf E. Kalman, the great success of the Kalman filter is due to its small computational requirement, elegant recursive properties, and its status as the optimal estimator for one-dimensional linear systems with Gaussian error statistics [1]. Typical uses of the Kalman filter include smoothing noisy data and providing estimates of parameters of interest. Applications include global positioning system receivers, phase-locked loops in radio equipment, smoothing the output from laptop trackpads, and many more. From a theoretical standpoint, the Kalman filter is an algorithm permitting exact inference in a linear dynamical system, which is a Bayesian model similar to a hidden Markov model but where the state space of the latent variables is continuous and where all latent and observed variables have a Gaussian distribution (often a multivariate Gaussian distribution). The aim of this lecture note is to permit people who find this description confusing or terrifying to understand the basis of the Kalman filter via a simple and intuitive derivation.

379 citations
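The "multiply two Gaussians and reduce" step the article teaches is exactly the scalar Kalman update. A minimal sketch for a random-walk state; the model and the numbers below are illustrative, not from the article:

```python
def kalman_1d(z_seq, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state: x_k = x_{k-1} + w,
    z_k = x_k + v, with Var(w) = q and Var(v) = r. Each update is the
    product of two Gaussians (prior and likelihood) reduced to one."""
    x, p = x0, p0
    for z in z_seq:
        p = p + q                # predict: variance grows by process noise
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update: mean pulled toward the measurement
        p = (1 - k) * p          # posterior variance shrinks
    return x, p

# Three identical measurements of a static state (q = 0):
x, p = kalman_1d([1.0, 1.0, 1.0], q=0.0, r=1.0, x0=0.0, p0=1.0)
print(round(x, 3), round(p, 3))   # 0.75 0.25
```

With q = 0 this reduces to recursive Bayesian averaging: each new measurement moves the estimate less, and the posterior variance keeps shrinking.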


Journal ArticleDOI
TL;DR: In this paper, the authors show that the straightforward extension of ADM is valid for the general case of m ≥ 3 if it is combined with a Gaussian back substitution procedure and prove its convergence via the analytic framework of contractive-type methods.
Abstract: We consider the linearly constrained separable convex minimization problem whose objective function is separable into m individual convex functions with nonoverlapping variables. A Douglas–Rachford alternating direction method of multipliers (ADM) has been well studied in the literature for the special case of m = 2. But the convergence of extending ADM to the general case of m ≥ 3 is still open. In this paper, we show that the straightforward extension of ADM is valid for the general case of m ≥ 3 if it is combined with a Gaussian back substitution procedure. The resulting ADM with Gaussian back substitution is a novel approach towards the extension of ADM from m = 2 to m ≥ 3, and its algorithmic framework is new in the literature. For the ADM with Gaussian back substitution, we prove its convergence via the analytic framework of contractive-type methods, and we show its numerical efficiency by some application problems.

352 citations
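For orientation, the classical m = 2 ADM that the paper extends looks like this on a toy consensus problem; the Gaussian back substitution step needed for m ≥ 3 is not shown, and the problem and penalty parameter are illustrative:

```python
def admm_consensus(a, b, rho=1.0, iters=100):
    """Classical two-block ADM (the m = 2 case the paper starts from) for
    min_{x,z} 0.5*(x-a)^2 + 0.5*(z-b)^2  subject to  x - z = 0.
    The minimizer is x = z = (a + b) / 2."""
    x = z = u = 0.0                            # u: scaled dual variable
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1 + rho)    # x-update (proximal step)
        z = (b + rho * (x + u)) / (1 + rho)    # z-update uses the new x
        u = u + x - z                          # dual ascent on x - z = 0
    return x, z

x, z = admm_consensus(2.0, 4.0)
print(round(x, 4), round(z, 4))                # both converge to 3.0
```

The sequential x-then-z sweep is the Gauss–Seidel structure whose naive extension to three or more blocks is what the paper shows must be corrected by back substitution.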


Journal ArticleDOI
TL;DR: In this paper, it was shown that the normalized risk of the LASSO converges to a limit, and an explicit expression for this limit was derived for random instances, based on the analysis of AMP.
Abstract: We consider the problem of learning a coefficient vector x0 ∈ R^N from noisy linear observations y = Ax0 + w ∈ R^n. In many contexts (ranging from model selection to image processing), it is desirable to construct a sparse estimator x̂. In this case, a popular approach consists in solving an l1-penalized least-squares problem known as the LASSO or basis pursuit denoising. For sequences of matrices A of increasing dimensions, with independent Gaussian entries, we prove that the normalized risk of the LASSO converges to a limit, and we obtain an explicit expression for this limit. Our result is the first rigorous derivation of an explicit formula for the asymptotic mean square error of the LASSO for random instances. The proof technique is based on the analysis of AMP, a recently developed efficient algorithm, that is inspired from graphical model ideas. Simulations on real data matrices suggest that our results can be relevant in a broad array of practical applications.

334 citations
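The LASSO objective analyzed here can be minimized by simple proximal iterations. A sketch using ISTA rather than the paper's AMP; the problem sizes, seed, and regularization weight are illustrative:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, iters=500):
    """Iterative soft-thresholding for the LASSO
    min_x 0.5*||y - A x||^2 + lam*||x||_1 — a simpler relative of the
    AMP algorithm whose risk the paper characterizes."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) / np.sqrt(50)   # i.i.d. Gaussian design
x_true = np.zeros(20)
x_true[:3] = [3.0, -2.0, 1.5]
y = A @ x_true                                     # noiseless, for illustration
x_hat = ista(A, y, lam=0.01)
print(x_hat[:3])   # approximately recovers the three nonzero coefficients
```

The independent-Gaussian-entries design above is exactly the random-matrix regime in which the paper's asymptotic risk formula applies.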


Journal ArticleDOI
TL;DR: A robust recurrent neural network is presented in a Bayesian framework based on echo state mechanisms that is robust in the presence of outliers and is superior to existing methods.
Abstract: In this paper, a robust recurrent neural network is presented in a Bayesian framework based on echo state mechanisms. Since the new model is capable of handling outliers in the training data set, it is termed as a robust echo state network (RESN). The RESN inherits the basic idea of ESN learning in a Bayesian framework, but replaces the commonly used Gaussian distribution with a Laplace one, which is more robust to outliers, as the likelihood function of the model output. Moreover, the training of the RESN is facilitated by employing a bound optimization algorithm, based on which, a proper surrogate function is derived and the Laplace likelihood function is approximated by a Gaussian one, while remaining robust to outliers. It leads to an efficient method for estimating model parameters, which can be solved by using a Bayesian evidence procedure in a fully autonomous way. Experimental results show that the proposed method is robust in the presence of outliers and is superior to existing methods.

294 citations
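The Gaussian-to-Laplace swap motivating the RESN can be seen in the simplest possible setting: the maximum-likelihood location estimate is the sample mean under a Gaussian likelihood but the sample median under a Laplace one, and only the latter shrugs off a gross outlier. A toy illustration with made-up numbers:

```python
import statistics

# ML location estimates under the two noise models: Gaussian -> mean,
# Laplace -> median. The median barely moves when an outlier is injected,
# which is the robustness property the RESN exploits.
clean = [1.0, 1.1, 0.9, 1.05, 0.95]
dirty = clean + [50.0]                      # one gross outlier

gauss_clean, gauss_dirty = statistics.mean(clean), statistics.mean(dirty)
lap_clean, lap_dirty = statistics.median(clean), statistics.median(dirty)

print(round(gauss_dirty - gauss_clean, 2))  # 8.17  (mean dragged by the outlier)
print(round(lap_dirty - lap_clean, 3))      # 0.025 (median almost unmoved)
```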


Journal ArticleDOI
TL;DR: This paper proposes that the random variation is best described via a Poisson distribution, which better describes the zeros observed in the data as compared to the typical assumption of a Gaussian distribution, and presents a new algorithm for Poisson tensor factorization called CANDECOMP--PARAFAC alternating Poisson regression (CP-APR), based on a majorization-minimization approach.
Abstract: Tensors have found application in a variety of fields, ranging from chemometrics to signal processing and beyond. In this paper, we consider the problem of multilinear modeling of sparse count data. Our goal is to develop a descriptive tensor factorization model of such data, along with appropriate algorithms and theory. To do so, we propose that the random variation is best described via a Poisson distribution, which better describes the zeros observed in the data as compared to the typical assumption of a Gaussian distribution. Under a Poisson assumption, we fit a model to observed data using the negative log-likelihood score. We present a new algorithm for Poisson tensor factorization called CANDECOMP--PARAFAC alternating Poisson regression (CP-APR) that is based on a majorization-minimization approach. It can be shown that CP-APR is a generalization of the Lee--Seung multiplicative updates. We show how to prevent the algorithm from converging to non-KKT points and prove convergence of CP-APR under mild conditions.
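CP-APR generalizes the Lee–Seung multiplicative updates to tensors; the matrix (Poisson NMF) special case conveys the idea. A sketch with an illustrative count matrix — this is not the paper's full algorithm and omits its KKT safeguards:

```python
import numpy as np

def poisson_nmf(X, rank, iters=500, seed=0):
    """Lee-Seung multiplicative updates minimizing the Poisson negative
    log-likelihood for X ~ W @ H. CP-APR is the tensor generalization
    of these updates, derived via majorization-minimization."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + 0.1            # positive initialization
    H = rng.random((rank, n)) + 0.1
    for _ in range(iters):
        W *= (X / (W @ H + 1e-12)) @ H.T / H.sum(axis=1)
        H *= W.T @ (X / (W @ H + 1e-12)) / W.sum(axis=0)[:, None]
    return W, H

X = np.array([[4.0, 0.0, 2.0],
              [2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0]])                # sparse counts, exact rank 2
W, H = poisson_nmf(X, rank=2)
print(np.round(W @ H, 2))                      # approximately reconstructs X
```

The updates keep the factors nonnegative and monotonically decrease the Poisson negative log-likelihood, which is the property CP-APR carries over to the tensor case.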

Journal ArticleDOI
TL;DR: For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by the standard Gaussian complementary cumulative distribution function.
Abstract: This paper studies the minimum achievable source coding rate as a function of blocklength n and probability ϵ that the distortion exceeds a given level d. Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by R(d) + √(V(d)/n) Q⁻¹(ϵ), where R(d) is the rate-distortion function, V(d) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and Q⁻¹(·) is the inverse of the standard Gaussian complementary cumulative distribution function.
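The approximation is easy to evaluate numerically once Q⁻¹ is available. A sketch with hypothetical values of R(d) and V(d) — the true rate-distortion and dispersion functions depend on the source and distortion measure:

```python
import math

def Q(x):
    """Standard Gaussian complementary CDF."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def Q_inv(eps, lo=-10.0, hi=10.0):
    """Invert Q by bisection (Q is strictly decreasing)."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if Q(mid) > eps:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def rate_approx(R, V, n, eps):
    """Dispersion approximation R(d) + sqrt(V(d)/n) * Q^{-1}(eps) to the
    minimum coding rate at blocklength n. R and V here are hypothetical
    placeholder values, not derived from any particular source."""
    return R + math.sqrt(V / n) * Q_inv(eps)

print(round(Q_inv(0.1), 4))                      # 1.2816 (upper 10% point)
print(round(rate_approx(0.5, 0.25, 1000, 0.1), 4))  # 0.5203
```

The √(V/n) term quantifies the back-off from the asymptotic rate-distortion function needed at finite blocklength.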

Journal ArticleDOI
TL;DR: An adaptive image equalization algorithm that automatically enhances the contrast in an input image that is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
Abstract: In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast-equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To account for the hypothesis that homogeneous regions in the image represent homogeneous silences (or sets of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than or comparable to those of several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
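The partition points the algorithm relies on are where neighboring weighted Gaussian densities intersect, which reduces to a quadratic equation in the gray level. A sketch with illustrative component parameters (this is only the intersection step, not the full equalization pipeline):

```python
import math

def gaussian_intersections(w1, mu1, s1, w2, mu2, s2):
    """Gray levels where two weighted Gaussian densities are equal, found
    by solving the quadratic log(w1*N(x;mu1,s1)) = log(w2*N(x;mu2,s2))."""
    a = 1 / (2 * s2 ** 2) - 1 / (2 * s1 ** 2)
    b = mu1 / s1 ** 2 - mu2 / s2 ** 2
    c = (mu2 ** 2 / (2 * s2 ** 2) - mu1 ** 2 / (2 * s1 ** 2)
         + math.log((w1 * s2) / (w2 * s1)))
    if abs(a) < 1e-12:                     # equal variances: one crossing
        return [-c / b]
    disc = math.sqrt(b ** 2 - 4 * a * c)
    return sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

# Equal weights and variances: densities cross midway between the means.
print([round(p, 3) for p in gaussian_intersections(0.5, 60, 10, 0.5, 180, 10)])  # [120.0]
```

With unequal variances there are two crossings, so a mixture with K components generally yields up to 2(K − 1) candidate partition points between neighboring components.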

Journal ArticleDOI
TL;DR: It is proved that if the texture of compound-Gaussian clutter is modeled by an inverse-gamma distribution, the optimum detector is the optimum Gaussian matched filter detector compared to a data-dependent threshold that varies linearly with a quadratic statistic of the data.
Abstract: This paper deals with the problem of detecting a radar target signal against correlated non-Gaussian clutter, which is modeled by the compound-Gaussian distribution. We prove that if the texture of compound-Gaussian clutter is modeled by an inverse-gamma distribution, the optimum detector is the optimum Gaussian matched filter detector compared to a data-dependent threshold that varies linearly with a quadratic statistic of the data. We call this optimum detector a linear-threshold detector (LTD). Then, we show that the compound-Gaussian model presented here varies parametrically from the Gaussian clutter model to a clutter model whose tails are evidently heavier than any K-distribution model. Moreover, we show that the generalized likelihood ratio test (GLRT), which is a popular suboptimum detector because of its constant false-alarm rate (CFAR) property, is an optimum detector for our clutter model in the limit as the tails get extremely heavy. The GLRT-LTD is tested against simulated high-resolution sea clutter data to investigate the dependence of its performance on the various clutter parameters.

Proceedings ArticleDOI
21 Mar 2012
TL;DR: This work proposes an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal-according to the learned distribution-using AMP, and model the non-zero distribution as a Gaussian mixture and learn its parameters through expectation maximization, using AMP to implement the expectation step.
Abstract: When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal's non-zero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were known a priori, one could use efficient approximate message passing (AMP) techniques for nearly minimum MSE (MMSE) recovery. In practice, though, the distribution is unknown, motivating the use of robust algorithms like Lasso—which is nearly minimax optimal—at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal—according to the learned distribution—using AMP. In particular, we model the non-zero distribution as a Gaussian mixture, and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments confirm the state-of-the-art performance of our approach on a range of signal classes.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the conditional likelihood as a stochastic process in the parameters, and prove that it converges in distribution when errors are i.i.d. with suitable moment conditions and initial values are bounded.
Abstract: We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X_t to be fractional of order d and cofractional of order d − b; that is, there exist vectors β for which β′X_t is fractional of order d − b. The parameters d and b satisfy either d ≥ b ≥ 1/2, d = b ≥ 1/2, or d = d0 ≥ b ≥ 1/2. Our main technical contribution is the proof of consistency of the maximum likelihood estimators on the set 1/2 ≤ b ≤ d ≤ d1 for any d1 ≥ d0. To this end, we consider the conditional likelihood as a stochastic process in the parameters, and prove that it converges in distribution when errors are i.i.d. with suitable moment conditions and initial values are bounded. We then prove that the estimator of β is asymptotically mixed Gaussian and estimators of the remaining parameters are asymptotically Gaussian. We also find the asymptotic distribution of the likelihood ratio test for cointegration rank, which is a functional of fractional Brownian motion of type II.

Journal ArticleDOI
TL;DR: In this article, the basic concepts and mathematical tools needed for phase-space description of a very common class of states, whose phase properties are described by Gaussian Wigner functions: the Gaussian states.
Abstract: In this tutorial, we introduce the basic concepts and mathematical tools needed for phase-space description of a very common class of states, whose phase properties are described by Gaussian Wigner functions: the Gaussian states. In particular, we address their manipulation, evolution and characterization in view of their application to quantum information.

Journal ArticleDOI
TL;DR: A new, easy to implement, nonparametric VSS-NLMS algorithm that employs the mean-square error and the estimated system noise power to control the step-size update and is in very good agreement with the experimental results.
Abstract: Numerous variable step-size normalized least mean-square (VSS-NLMS) algorithms have been derived over the past two decades to resolve the tradeoff between fast convergence rate and low excess mean-square error. This paper proposes a new, easy to implement, nonparametric VSS-NLMS algorithm that employs the mean-square error and the estimated system noise power to control the step-size update. Theoretical analysis of its steady-state behavior shows that, when the input is zero-mean Gaussian distributed, the misadjustment depends only on a parameter β controlling the update of the step size. Simulation experiments show that the proposed algorithm performs very well. Furthermore, the theoretical steady-state behavior is in very good agreement with the experimental results.
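For context, the fixed-step NLMS recursion that VSS variants build on can be sketched on a toy system-identification task; the filter length, step size, seed, and unknown system below are illustrative:

```python
import numpy as np

def nlms(x, d, taps, mu=0.5, delta=1e-6):
    """Normalized LMS adaptive filter with a fixed step size mu; the
    VSS-NLMS algorithms surveyed in the paper adapt mu at every
    iteration instead of keeping it constant."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]      # [x[n], x[n-1], ..., x[n-taps+1]]
        e = d[n] - w @ u                     # a-priori output error
        w += mu * e * u / (u @ u + delta)    # power-normalized update
    return w

rng = np.random.default_rng(1)
h = np.array([0.8, -0.4, 0.2])               # unknown FIR system to identify
x = rng.standard_normal(4000)                # zero-mean Gaussian input
d = np.convolve(x, h)[:len(x)]               # noiseless desired signal
w = nlms(x, d, taps=3)
print(np.round(w, 3))                        # converges close to h
```

Normalizing by the instantaneous input power (u·u + δ) is what makes the recursion stable for any 0 < mu < 2 regardless of the input scale; the VSS schemes then shrink mu as the error floor is approached.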

Journal ArticleDOI
TL;DR: In this article, the authors apply nonlinear, monotonic transformations to the observed states, rendering them Gaussian (Gaussian anamorphosis, GA) to improve EnKF for parameter estimation in groundwater applications.
Abstract: Ensemble Kalman filters (EnKFs) are a successful tool for estimating state variables in atmospheric and oceanic sciences. Recent research has prepared the EnKF for parameter estimation in groundwater applications. EnKFs are optimal in the sense of Bayesian updating only if all involved variables are multivariate Gaussian. Subsurface flow and transport state variables, however, generally do not show Gaussian dependence on hydraulic log conductivity and among each other, even if log conductivity is multi-Gaussian. To improve EnKFs in this context, we apply nonlinear, monotonic transformations to the observed states, rendering them Gaussian (Gaussian anamorphosis, GA). Similar ideas have recently been presented by Beal et al. (2010) in the context of state estimation. Our work transfers and adapts this methodology to parameter estimation. Additionally, we address the treatment of measurement errors in the transformation and provide several multivariate analysis tools to evaluate the expected usefulness of GA beforehand. For illustration, we present a first-time application of an EnKF to parameter estimation from 3-D hydraulic tomography in multi-Gaussian log conductivity fields. Results show that (1) GA achieves an implicit pseudolinearization of drawdown data as a function of log conductivity and (2) this makes both parameter identification and prediction of flow and transport more accurate. Combining EnKFs with GA yields a computationally efficient tool for nonlinear inversion of data with improved accuracy. This is an attractive benefit, given that linearization-free methods such as particle filters are computationally extremely demanding.
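A common way to implement Gaussian anamorphosis empirically is the normal-score transform: replace each observation by the standard normal quantile of its rank. A minimal sketch with made-up data — not the authors' exact transformation, which additionally handles measurement error:

```python
from statistics import NormalDist

def normal_score(values):
    """Empirical Gaussian anamorphosis: map each value to the standard
    normal quantile of its midpoint rank. The transform is monotone, so
    it Gaussianizes the marginal while preserving the ordering of the
    data, which is what an EnKF update needs."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    z = [0.0] * n
    for rank, i in enumerate(order):
        z[i] = NormalDist().inv_cdf((rank + 0.5) / n)   # midpoint ranks
    return z

skewed = [0.1, 0.2, 0.4, 0.9, 2.5, 7.0]   # lognormal-looking sample
z = normal_score(skewed)
print([round(v, 2) for v in z])            # monotone, symmetric about 0
```

After the EnKF update is performed in the transformed (Gaussian) space, the inverse of the same monotone map carries the results back to physical units.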

Journal ArticleDOI
TL;DR: This work considers the influence of phase noise in the preparation stage of the protocol and argues that taking this noise into account can improve the secret key rate because this source of noise is not controlled by the eavesdropper.
Abstract: As quantum key distribution becomes a mature technology, it appears clearly that some assumptions made in the security proofs cannot be justified in practical implementations. This might open the door to possible side-channel attacks. We examine several discrepancies between theoretical models and experimental setups in the case of continuous-variable quantum key distribution. We study in particular the impact of an imperfect modulation on the security of Gaussian protocols and show that approximating the theoretical Gaussian modulation with a discrete one is sufficient in practice. We also address the issue of properly calibrating the detection setup and in particular the value of the shot noise. Finally, we consider the influence of phase noise in the preparation stage of the protocol and argue that taking this noise into account can improve the secret key rate because this source of noise is not controlled by the eavesdropper.

Journal ArticleDOI
Jie Yu1
TL;DR: The proposed NKGMM approach outperforms the ICA and GMM methods in early detection of process faults, minimization of false alarms, and isolation of faulty variables of nonlinear and non-Gaussian multimode processes.

Journal ArticleDOI
TL;DR: In this paper, a distributed method for computing, at each sensor, an approximation of the joint likelihood function (JLF) by means of consensus algorithms is proposed, which is applicable if the local likelihood functions of the various sensors (viewed as conditional probability density functions of local measurements) belong to the exponential family of distributions.
Abstract: We consider distributed state estimation in a wireless sensor network without a fusion center. Each sensor performs a global estimation task-based on the past and current measurements of all sensors-using only local processing and local communications with its neighbors. In this estimation task, the joint (all-sensors) likelihood function (JLF) plays a central role as it epitomizes the measurements of all sensors. We propose a distributed method for computing, at each sensor, an approximation of the JLF by means of consensus algorithms. This “likelihood consensus” method is applicable if the local likelihood functions of the various sensors (viewed as conditional probability density functions of the local measurements) belong to the exponential family of distributions. We then use the likelihood consensus method to implement a distributed particle filter and a distributed Gaussian particle filter. Each sensor runs a local particle filter, or a local Gaussian particle filter, that computes a global state estimate. The weight update in each local (Gaussian) particle filter employs the JLF, which is obtained through the likelihood consensus scheme. For the distributed Gaussian particle filter, the number of particles can be significantly reduced by means of an additional consensus scheme. Simulation results are presented to assess the performance of the proposed distributed particle filters for a multiple target tracking problem.
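The consensus building block itself is simple: linear averaging with neighbors converges to the network-wide mean, which is what lets each sensor assemble the global sufficient statistics of the JLF. A toy sketch on a 4-node ring; the topology, step size, and values are illustrative:

```python
def consensus(values, neighbors, alpha=0.2, iters=200):
    """Plain average-consensus iteration: each node repeatedly moves toward
    its neighbors' values. Likelihood consensus runs iterations of this
    kind on the sufficient statistics of exponential-family likelihoods,
    so every sensor ends up with the network-wide average it needs to
    evaluate the joint likelihood function."""
    x = list(values)
    for _ in range(iters):
        x = [
            x[i] + alpha * sum(x[j] - x[i] for j in neighbors[i])
            for i in range(len(x))
        ]
    return x

# A 4-node ring; each node starts with a different local statistic.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = consensus([4.0, 8.0, 0.0, 2.0], ring)
print([round(v, 4) for v in x])   # every node reaches the average 3.5
```

Because the update uses only local neighbor exchanges, no fusion center is required, which is the premise of the paper's distributed particle filters.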

Journal ArticleDOI
TL;DR: A simulation-based estimate of the resolution of an experimental single molecule acquisition is proposed based on image wavelet segmentation and single particle centroid determination, and its performance is compared with the commonly used gaussian fitting of the point spread function.
Abstract: Localization of single molecules in microscopy images is a key step in quantitative single particle data analysis. Among them, single molecule based super-resolution optical microscopy techniques require high localization accuracy as well as computation of large data sets on the order of 10^5 single molecule detections to reconstruct a single image. We hereby present an algorithm based on image wavelet segmentation and single particle centroid determination, and compare its performance with the commonly used Gaussian fitting of the point spread function. We performed realistic simulations at different signal-to-noise ratios and particle densities and show that the calculation time using the wavelet approach can be more than one order of magnitude faster than that of Gaussian fitting without a significant degradation of the localization accuracy, from 1 nm to 4 nm in our range of study. We propose a simulation-based estimate of the resolution of an experimental single molecule acquisition.
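The centroid alternative to Gaussian PSF fitting is essentially a one-liner, which is why it is so much faster. A sketch on a synthetic noiseless spot; the paper's pipeline first isolates each spot by wavelet segmentation, which is not shown here:

```python
import numpy as np

def centroid(img):
    """Center-of-mass localization of a single-molecule spot: the cheap
    alternative to Gaussian PSF fitting compared in the paper (applied
    here to a whole pre-segmented region)."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic Gaussian PSF centered at a sub-pixel position:
ys, xs = np.indices((15, 15))
x0, y0, sigma = 7.3, 6.8, 1.5
img = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
cx, cy = centroid(img)
print(round(cx, 2), round(cy, 2))   # close to the true center (7.3, 6.8)
```

On noisy data the centroid is biased by background, which is why the paper's segmentation step (and the Gaussian-fit baseline) matter for the accuracy comparison.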

Journal ArticleDOI
TL;DR: In this article, the Collins-Soper-Sterman (CSS) formalism is applied to the spin dependence governed by the Sivers function, and the results are presented as parametrizations of a Gaussian form in transverse-momentum space, rather than in the Fourier conjugate transverse coordinate space normally used in the CSS formalism.
Abstract: We extend the Collins-Soper-Sterman (CSS) formalism to apply it to the spin dependence governed by the Sivers function. We use it to give a correct numerical QCD evolution of existing fixed-scale fits of the Sivers function. With the aid of approximations useful for the nonperturbative region, we present the results as parametrizations of a Gaussian form in transverse-momentum space, rather than in the Fourier conjugate transverse coordinate space normally used in the CSS formalism. They are specifically valid at small transverse momentum. Since evolution has been applied, our results can be used to make predictions for Drell-Yan and semi-inclusive deep inelastic scattering at energies different from those where the original fits were made. Our evolved functions are of a form that they can be used in the same parton-model factorization formulas as used in the original fits, but now with a predicted scale dependence in the fit parameters. We also present a method by which our evolved functions can be corrected to allow for twist-3 contributions at large parton transverse momentum.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new scheme of wiretap lattice coding that achieves semantic security and strong secrecy over the Gaussian wiretap channel, which is based on the discrete Gaussian distribution over a lattice, which achieves the secrecy capacity to within a half nat under mild conditions.
Abstract: We propose a new scheme of wiretap lattice coding that achieves semantic security and strong secrecy over the Gaussian wiretap channel. The key tool in our security proof is the flatness factor, which characterizes the convergence of the conditional output distributions corresponding to different messages and leads to an upper bound on the information leakage. We not only introduce the notion of secrecy-good lattices, but also propose the flatness factor as a design criterion of such lattices. Both the modulo-lattice Gaussian channel and the genuine Gaussian channel are considered. In the latter case, we propose a novel secrecy coding scheme based on the discrete Gaussian distribution over a lattice, which achieves the secrecy capacity to within a half nat under mild conditions. No a priori distribution of the message is assumed, and no dither is used in our proposed schemes.


Posted Content
TL;DR: It is established that three popular canonical representations are unidentified, and it is shown that minimum-chi-square estimation (MCSE), although asymptotically equivalent to MLE, can be much easier to compute.
Abstract: This paper develops new results for identification and estimation of Gaussian affine term structure models. We establish that three popular canonical representations are unidentified, and demonstrate how unidentified regions can complicate numerical optimization. A separate contribution of the paper is the proposal of minimum-chi-square estimation (MCSE) as an alternative to MLE. We show that, although MCSE is asymptotically equivalent to MLE, it can be much easier to compute. In some cases, MCSE allows researchers to recognize with certainty whether a given estimate represents a global maximum of the likelihood function, and it makes feasible the computation of small-sample standard errors.
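The minimum-chi-square idea can be sketched in a few lines: estimate reduced-form parameters first, then choose the structural parameters whose implied reduced form best matches them under a chi-square metric. The mapping `g` and the numbers below are hypothetical toys, not the paper's term-structure model:

```python
import numpy as np

def chi_square_objective(theta, pi_hat, V_inv, g):
    """Quadratic form (pi_hat - g(theta))' V_inv (pi_hat - g(theta))."""
    d = pi_hat - g(theta)
    return d @ V_inv @ d

def mcse_grid(pi_hat, V_inv, g, grid):
    """Minimum-chi-square over a 1-D parameter grid: pick the structural
    theta whose implied reduced-form parameters g(theta) best match the
    first-stage estimates pi_hat."""
    vals = [chi_square_objective(np.array([t]), pi_hat, V_inv, g) for t in grid]
    return grid[int(np.argmin(vals))]

# Toy mapping: two reduced-form moments implied by one structural parameter.
g = lambda th: np.array([th[0], th[0] ** 2])
pi_hat = np.array([2.0, 4.0])     # pretend first-stage (e.g. OLS) estimates
V_inv = np.eye(2)                 # identity weighting for illustration
grid = np.linspace(0.0, 3.0, 301)
theta_hat = mcse_grid(pi_hat, V_inv, g, grid)
print(theta_hat)  # 2.0
```

The computational appeal is that the first stage is a cheap regression and the second stage is a low-dimensional minimization, rather than maximizing the full likelihood over all parameters at once.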

Book
13 Jan 2012
TL;DR: The theory of random processes needs an analogue of the normal distribution, which is why Gaussian vectors and Gaussian distributions in infinite-dimensional spaces come into play; as discussed by the authors, the theory of Gaussian processes occupies one of the leading places in modern probability.
Abstract: The theory of random processes needs an analogue of the normal distribution. This is why Gaussian vectors and Gaussian distributions in infinite-dimensional spaces come into play. In its simplicity, importance, and wealth of results, the theory of Gaussian processes occupies one of the leading places in modern probability.
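To make the link between Gaussian vectors and Gaussian processes concrete (a sketch of mine, not from the book): a process path can be sampled through its finite-dimensional distributions, since any finite set of time points yields an ordinary Gaussian vector with a covariance matrix determined by the kernel.

```python
import numpy as np

def sample_gp_path(times, cov, rng):
    """Draw one path of a zero-mean Gaussian process at the given times:
    build the covariance matrix from the kernel, Cholesky-factor it, and
    map i.i.d. standard normals through the factor."""
    K = np.array([[cov(s, t) for t in times] for s in times])
    K += 1e-10 * np.eye(len(times))        # jitter for numerical stability
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal(len(times))

# Brownian motion, cov(s, t) = min(s, t): a classical Gaussian process.
rng = np.random.default_rng(0)
times = np.linspace(0.01, 1.0, 100)
path = sample_gp_path(times, min, rng)
print(path.shape)  # (100,)
```

Swapping the kernel (e.g. a squared-exponential in place of `min`) changes the process while the sampling recipe stays the same, which is exactly the finite-dimensional viewpoint the theory builds on.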

Journal ArticleDOI
TL;DR: The proposed criterion based on the generalized likelihood ratio is shown to be both easy to derive and powerful in these diverse applications: patch discrimination, image denoising, stereo-matching and motion-tracking under gamma and Poisson noises.
Abstract: Many tasks in computer vision require matching image parts. While higher-level methods consider image features such as edges or robust descriptors, low-level approaches (so-called image-based) compare groups of pixels (patches) and provide dense matching. Patch similarity is a key ingredient to many techniques for image registration, stereo-vision, change detection or denoising. Recent progress in natural image modeling also makes intensive use of patch comparison. A fundamental difficulty when comparing two patches from "real" data is to decide whether the differences should be ascribed to noise or intrinsic dissimilarity. The Gaussian noise assumption leads to the classical definition of patch similarity based on the squared differences of intensities. For the case where noise departs from the Gaussian distribution, several similarity criteria have been proposed in the literature of image processing, detection theory and machine learning. By expressing patch (dis)similarity as a detection test under a given noise model, we present these criteria together with a new one and discuss their properties. We then assess their performance for different tasks: patch discrimination, image denoising, stereo-matching and motion-tracking under gamma and Poisson noises. The proposed criterion based on the generalized likelihood ratio is shown to be both easy to derive and powerful in these diverse applications.
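The two regimes discussed above can be sketched as follows: the Gaussian criterion is the classical squared-difference measure, while the Poisson form below is one common way to write the generalized likelihood ratio for "same underlying intensity" (exact conventions vary across papers, so treat it as illustrative).

```python
import numpy as np

def xlogx(v):
    """Elementwise v*log(v) with the convention 0*log(0) = 0."""
    v = np.asarray(v, dtype=float)
    out = np.zeros_like(v)
    mask = v > 0
    out[mask] = v[mask] * np.log(v[mask])
    return out

def gaussian_dissimilarity(p, q, sigma):
    """Classical patch dissimilarity under additive Gaussian noise:
    squared intensity differences scaled by the noise variance."""
    return np.sum((p - q) ** 2) / (2.0 * sigma ** 2)

def poisson_glr_dissimilarity(p, q):
    """-log of the generalized likelihood ratio for Poisson counts,
    testing whether two patches share the same underlying intensity,
    with the nuisance intensities replaced by their MLEs."""
    s = p + q
    return np.sum(xlogx(p) + xlogx(q) - xlogx(s) + s * np.log(2.0))

p = np.array([4.0, 2.0, 3.0])
q = np.array([5.0, 2.0, 1.0])
print(gaussian_dissimilarity(p, q, sigma=1.0))   # 2.5
print(poisson_glr_dissimilarity(p, p))           # 0.0 for identical patches
```

Both criteria are zero for identical patches and grow with intrinsic dissimilarity; the GLR form adapts the penalty to the signal-dependent Poisson variance instead of assuming it constant.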

Journal ArticleDOI
TL;DR: This paper presents a comprehensive theoretical performance analysis of l0-LMS for white Gaussian input data, based on assumptions that hold over a wide range of parameter settings.
Abstract: As one of the recently proposed algorithms for sparse system identification, the l0-norm constraint Least Mean Square (l0-LMS) algorithm modifies the cost function of the traditional method with a penalty on tap-weight sparsity. The performance of l0-LMS is quite attractive compared with its various precursors. However, there has been no detailed study of its performance. This paper presents a comprehensive theoretical performance analysis of l0-LMS for white Gaussian input data, based on assumptions that hold over a wide range of parameter settings. Expressions for the steady-state mean square deviation (MSD) are derived and discussed with respect to algorithm parameters and system sparsity. A parameter selection rule is established for achieving the best performance. Approximated with a Taylor series, the instantaneous behavior is also derived. In addition, the relationship between l0-LMS and some prior algorithms, as well as the sufficient conditions for l0-LMS to accelerate convergence, are established. Finally, all of the theoretical results are compared with simulations and are shown to agree well over a wide range of parameters.
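A sketch of the kind of zero-attracting update analyzed here: the exponential attractor below is one common form of the l0-LMS recursion (derived from the smooth l0 surrogate), and the step sizes are illustrative, not taken from the paper.

```python
import numpy as np

def l0_lms_identify(x, d, taps, mu=0.01, kappa=1e-4, beta=5.0):
    """Sparse system identification with an l0-penalized LMS update.
    On top of the usual LMS correction mu*e*u, each iteration applies a
    zero-attraction step from the surrogate sum_i (1 - exp(-beta*|w_i|)),
    which pulls small coefficients toward zero while barely biasing
    large ones. One common variant, shown for illustration."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1 : n + 1][::-1]   # regressor, most recent first
        e = d[n] - w @ u                    # a priori estimation error
        w += mu * e * u                     # standard LMS correction
        w -= kappa * beta * np.sign(w) * np.exp(-beta * np.abs(w))  # zero attraction
    return w

# Identify a sparse 16-tap system driven by white Gaussian input.
rng = np.random.default_rng(1)
h = np.zeros(16)
h[2], h[9] = 1.0, -0.5                      # true sparse impulse response
x = rng.standard_normal(20000)
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
w = l0_lms_identify(x, d, taps=16)
```

The attractor's gradient is largest near zero, which is why inactive taps are driven to (near) zero faster than in plain LMS while the active taps are left essentially unbiased; this trade-off is exactly what the paper's steady-state MSD expressions quantify.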