
Showing papers on "Probability distribution published in 2012"


Book
01 May 2012
TL;DR: In this article, the authors provide an ideal introduction to both Stein's method and Malliavin calculus from the standpoint of normal approximations on a Gaussian space, and explain the connections between Stein's method and the Malliavin calculus of variations.
Abstract: Stein's method is a collection of probabilistic techniques that allow one to assess the distance between two probability distributions by means of differential operators. In 2007, the authors discovered that one can combine Stein's method with the powerful Malliavin calculus of variations, in order to deduce quantitative central limit theorems involving functionals of general Gaussian fields. This book provides an ideal introduction both to Stein's method and Malliavin calculus, from the standpoint of normal approximations on a Gaussian space. Many recent developments and applications are studied in detail, for instance: fourth moment theorems on the Wiener chaos, density estimates, Breuer–Major theorems for fractional processes, recursive cumulant computations, optimal rates and universality results for homogeneous sums. Largely self-contained, the book is perfect for self-study. It will appeal to researchers and graduate students in probability and statistics, especially those who wish to understand the connections between Stein's method and Malliavin calculus.

712 citations


Journal ArticleDOI
TL;DR: The method uses Bayesian transdimensional Markov Chain Monte Carlo and allows a wide range of possible thermal history models to be considered as general prior information on time, temperature (and temperature offset for multiple samples in a vertical profile).
Abstract: A new approach for inverse thermal history modeling is presented. The method uses Bayesian transdimensional Markov Chain Monte Carlo and allows us to specify a wide range of possible thermal history models to be considered as general prior information on time, temperature (and temperature offset for multiple samples in a vertical profile). We can also incorporate more focused geological constraints in terms of more specific priors. The Bayesian approach naturally prefers simpler thermal history models (which provide an adequate fit to the observations), and so reduces the problems associated with overinterpretation of inferred thermal histories. The output of the method is a collection or ensemble of thermal histories, which quantifies the range of accepted models in terms of a (posterior) probability distribution. Individual models, such as the best data fitting (maximum likelihood) model or the expected model (effectively the weighted mean from the posterior distribution), can be examined. Different data types (e.g., fission track, U-Th/He, 40Ar/39Ar) can be combined, requiring just a data-specific predictive forward model and data fit (likelihood) function. To demonstrate the main features and implementation of the approach, examples are presented using both synthetic and real data.
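A minimal sketch of the sampling idea on a made-up forward model: a fixed-dimension Metropolis-Hastings walk over temperatures at a few time knots, producing an ensemble of accepted thermal histories (the transdimensional moves, data-specific forward models and priors of the actual method are omitted; every name and number below is illustrative):

```python
# Toy Metropolis-Hastings sampler over piecewise-linear thermal histories.
import numpy as np

rng = np.random.default_rng(0)
times = np.linspace(0.0, 100.0, 5)            # Ma, fixed knot positions (toy)

def forward_model(temps):
    """Hypothetical forward model: two summary observables from a history."""
    return np.array([temps.mean(), temps[-1] - temps[0]])

obs = np.array([60.0, -80.0])                 # synthetic "data"
sigma = 5.0

def log_likelihood(temps):
    resid = forward_model(temps) - obs
    return -0.5 * np.sum((resid / sigma) ** 2)

def log_prior(temps):
    # Uniform prior on 0-150 C at each knot
    return 0.0 if np.all((temps >= 0) & (temps <= 150)) else -np.inf

temps = np.full(5, 75.0)
samples = []
for it in range(20000):
    prop = temps + rng.normal(0, 5.0, size=temps.shape)   # random-walk proposal
    log_a = (log_likelihood(prop) + log_prior(prop)
             - log_likelihood(temps) - log_prior(temps))
    if np.log(rng.random()) < log_a:
        temps = prop
    samples.append(temps.copy())

ensemble = np.array(samples[5000:])           # posterior ensemble of histories
expected_history = ensemble.mean(axis=0)      # the "expected model"
```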

514 citations


MonographDOI
01 Jul 2012
TL;DR: This monograph covers probability, statistical inference, probability distribution functions, nonparametric statistics, density estimation, regression, multivariate analysis, clustering, classification and data mining, censored and truncated data, time series analysis, and spatial point processes.
Abstract: 1. Introduction 2. Probability 3. Statistical inference 4. Probability distribution functions 5. Nonparametric statistics 6. Density estimation or data smoothing 7. Regression 8. Multivariate analysis 9. Clustering, classification and data mining 10. Nondetections: censored and truncated data 11. Time series analysis 12. Spatial point processes Appendices Index.

449 citations


Proceedings Article
03 Dec 2012
TL;DR: This work proposes a minimax entropy principle to improve the quality of noisy labels from crowds of nonexperts, and shows that a simple coordinate descent scheme can optimize minimax entropy.
Abstract: An important way to make large training sets is to gather noisy labels from crowds of nonexperts. We propose a minimax entropy principle to improve the quality of these labels. Our method assumes that labels are generated by a probability distribution over workers, items, and labels. By maximizing the entropy of this distribution, the method naturally infers item confusability and worker expertise. We infer the ground truth by minimizing the entropy of this distribution, which we show minimizes the Kullback-Leibler (KL) divergence between the probability distribution and the unknown truth. We show that a simple coordinate descent scheme can optimize minimax entropy. Empirically, our results are substantially better than previously published methods for the same problem.

393 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the Stieltjes transform of the empirical eigenvalue distribution of H is given by the Wigner semicircle law uniformly up to the edges of the spectrum, with an error of order (Nη)^{-1}, where η is the imaginary part of the spectral parameter in the Stieltjes transform. Moreover, edge universality holds in the sense that the probability distributions of the largest (and smallest) eigenvalues of two generalized Wigner ensembles coincide in the large-N limit.

375 citations


Journal ArticleDOI
TL;DR: The key idea is to align the complexity level and order of analysis with the reliability and detail level of the statistical information on the input parameters, so as to avoid assigning parametric probability distributions that are not sufficiently supported by the limited available data.

350 citations


Journal ArticleDOI
TL;DR: An improved multi-objective teaching–learning-based optimization is implemented to yield the best expected Pareto optimal front, and a novel self-adaptive probabilistic modification strategy is offered to improve the performance of the presented algorithm.

348 citations


Journal ArticleDOI
TL;DR: SUR (stepwise uncertainty reduction) strategies are derived from a Bayesian formulation of the problem of estimating a probability of failure of a function f. These strategies use a Gaussian process model of f and aim at performing evaluations of f as efficiently as possible to infer the value of the probability of failure.
Abstract: This paper deals with the problem of estimating the volume of the excursion set of a function f: ℝ^d → ℝ above a given threshold, under a probability measure on ℝ^d that is assumed to be known. In the industrial world, this corresponds to the problem of estimating a probability of failure of a system. When only an expensive-to-simulate model of the system is available, the budget for simulations is usually severely limited and therefore classical Monte Carlo methods ought to be avoided. One of the main contributions of this article is to derive SUR (stepwise uncertainty reduction) strategies from a Bayesian formulation of the problem of estimating a probability of failure. These sequential strategies use a Gaussian process model of f and aim at performing evaluations of f as efficiently as possible to infer the value of the probability of failure. We compare these strategies to other strategies also based on a Gaussian process model for estimating a probability of failure.
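As a rough illustration of why a Gaussian process surrogate matters when simulations are expensive, the sketch below estimates a failure probability by cheap Monte Carlo on a GP fitted to a small design; it uses naive plug-in estimates, not the paper's SUR sequential strategies, and the simulator, threshold and kernel are stand-ins:

```python
# Estimating P[f(X) > t] with a GP surrogate instead of MC on the simulator.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def f(x):                                        # stand-in for an expensive simulator
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

threshold = 1.2
X_design = rng.uniform(-2, 2, size=(40, 2))      # small simulation budget
y_design = f(X_design)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gp.fit(X_design, y_design)

X_mc = rng.uniform(-2, 2, size=(100_000, 2))     # cheap Monte Carlo on the surrogate
mean, std = gp.predict(X_mc, return_std=True)

p_fail_plugin = np.mean(mean > threshold)                    # plug-in estimate
p_fail_soft = np.mean(norm.sf((threshold - mean) / std))     # uses the GP's own uncertainty
print(p_fail_plugin, p_fail_soft)
```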

330 citations


Journal ArticleDOI
TL;DR: The extended Kalman filtering problem is investigated for a class of nonlinear systems with multiple missing measurements over a finite horizon and it is shown that the desired filter can be obtained in terms of the solutions to two Riccati-like difference equations that are of a form suitable for recursive computation in online applications.

302 citations


Journal ArticleDOI
TL;DR: An optimization algorithm is developed based on the well-established particle swarm optimization (PSO) and interior point method to solve the economic dispatch model and is demonstrated by the IEEE 118-bus test system.
Abstract: In this paper, an economic dispatch model, which can take into account the uncertainties of plug-in electric vehicles (PEVs) and wind generators, is developed. A simulation-based approach is first employed to study the probability distributions of the charge/discharge behaviors of PEVs. The probability distribution of wind power is also derived based on the assumption that the wind speed follows the Rayleigh distribution. The mathematical expectations of the generation costs of wind power and V2G (vehicle-to-grid) power are then derived analytically. An optimization algorithm is developed based on the well-established particle swarm optimization (PSO) and interior point method to solve the economic dispatch model. The proposed approach is demonstrated by the IEEE 118-bus test system.
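A small sketch of the wind-power step under the stated Rayleigh assumption: wind speeds are sampled from a Rayleigh distribution and mapped through a generic cut-in/rated/cut-out power curve to obtain an empirical probability distribution of wind power (the curve parameters are illustrative, not those of the paper):

```python
# Empirical wind-power distribution from Rayleigh-distributed wind speeds.
import numpy as np

rng = np.random.default_rng(42)
scale = 8.0 / np.sqrt(np.pi / 2)        # Rayleigh scale for an 8 m/s mean speed
v = rng.rayleigh(scale, size=200_000)

v_ci, v_r, v_co, p_rated = 3.0, 12.0, 25.0, 2.0    # cut-in, rated, cut-out (m/s), rating (MW)

def power_curve(v):
    p = np.zeros_like(v)
    ramp = (v >= v_ci) & (v < v_r)
    p[ramp] = p_rated * (v[ramp] - v_ci) / (v_r - v_ci)   # simplified linear ramp
    p[(v >= v_r) & (v < v_co)] = p_rated
    return p                              # zero below cut-in and above cut-out

p = power_curve(v)
hist, edges = np.histogram(p, bins=50, density=True)      # empirical distribution
print("expected wind power (MW):", p.mean())
```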

241 citations


01 Jan 2012
TL;DR: The main contribution of the paper is to show that the KL divergence constrained DRO problems are often of the same complexity as their original stochastic programming problems and, thus, KL divergence appears to be a good candidate for modeling distribution ambiguities in mathematical programming.
Abstract: In this paper we study distributionally robust optimization (DRO) problems where the ambiguity set of the probability distribution is defined by the Kullback-Leibler (KL) divergence. We consider DRO problems where the ambiguity is in the objective function, which takes the form of an expectation, and show that the resulting minimax DRO problems can be formulated as a one-layer convex minimization problem. We also consider DRO problems where the ambiguity is in the constraint. We show that ambiguous expectation-constrained programs may be reformulated as a one-layer convex optimization problem that takes the form of the Bernstein approximation of Nemirovski and Shapiro (2006). We further consider distributionally robust probabilistic programs. We show that the optimal solution of a probability minimization problem is also optimal for the distributionally robust version of the same problem, and also show that the ambiguous chance-constrained programs (CCPs) may be reformulated as the original CCP with an adjusted confidence level. A number of examples and special cases are also discussed in the paper to show that the reformulated problems may take simple forms that can be solved easily. The main contribution of the paper is to show that the KL divergence constrained DRO problems are often of the same complexity as their original stochastic programming problems and, thus, KL divergence appears to be a good candidate for modeling distribution ambiguities in mathematical programming.
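A worked sketch of the general flavour of such reformulations, assuming a small discrete nominal distribution: the worst-case expectation over a KL ball reduces to a one-dimensional convex dual, sup_{KL(Q||P)≤r} E_Q[X] = inf_{a>0} { a log E_P[exp(X/a)] + a r }, which is about as easy to solve as the nominal problem (the numbers below are illustrative):

```python
# Worst-case expectation over a KL ball via its one-dimensional dual.
import numpy as np
from scipy.optimize import minimize_scalar

P = np.array([0.5, 0.3, 0.2])      # nominal distribution (assumed)
X = np.array([1.0, 2.0, 5.0])      # losses
r = 0.1                            # KL radius

def dual(a):
    return a * np.log(np.dot(P, np.exp(X / a))) + a * r

res = minimize_scalar(dual, bounds=(0.05, 50.0), method="bounded")
print("nominal E_P[X]  :", np.dot(P, X))
print("worst-case E_Q[X]:", res.fun)
```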

Journal ArticleDOI
TL;DR: In this article, a stochastic multiobjective framework for daily volt/var control (VVC), including hydroturbine, fuel cell, wind turbine, and photovoltaic power plants, is proposed to minimize the electrical losses, voltage deviations, total electrical energy costs, and total emissions of renewable energy sources and the grid.
Abstract: This paper proposes a stochastic multiobjective framework for daily volt/var control (VVC), including hydroturbine, fuel cell, wind turbine, and photovoltaic power plants. The multiple objectives of the VVC problem to be minimized are the electrical energy losses, voltage deviations, total electrical energy costs, and total emissions of renewable energy sources and the grid. For this purpose, the uncertainty related to hourly load, wind power, and solar irradiance forecasts is modeled in a scenario-based stochastic framework. A roulette wheel mechanism based on the probability distribution functions of these random variables is considered to generate the scenarios. Consequently, the stochastic multiobjective VVC (SMVVC) problem is converted to a series of equivalent deterministic scenarios. Furthermore, an Evolutionary Algorithm using the Modified Teaching-Learning-Algorithm (MTLA) is proposed to solve the SMVVC in the form of a mixed-integer nonlinear programming problem. In the proposed algorithm, a new mutation method is taken into account in order to enhance the global searching ability and mitigate the premature convergence to local minima. Finally, two distribution test feeders are considered as case studies to demonstrate the effectiveness of the proposed SMVVC.

Journal ArticleDOI
TL;DR: An efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise is provided.
Abstract: We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
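A sketch of that final ingredient, projecting a real vector onto the probability simplex in Euclidean distance; this is the standard sort-based O(d log d) routine, whereas the paper describes a linear-time variant:

```python
# Euclidean projection onto the probability simplex (sort-based variant).
import numpy as np

def project_to_simplex(v):
    """Return the probability vector closest to v in the 2-norm."""
    u = np.sort(v)[::-1]                          # sort in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)          # common shift
    return np.maximum(v + theta, 0.0)

p = project_to_simplex(np.array([0.6, 0.5, -0.2, 0.4]))
print(p, p.sum())    # nonnegative entries summing to one
```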

Journal ArticleDOI
TL;DR: This paper provides a change-point detection algorithm based on direct density-ratio estimation that can be computed very efficiently in an online manner and avoids nonparametric density estimation, which is known to be a difficult problem.
Abstract: Change-point detection is the problem of discovering time points at which properties of time-series data change. This covers a broad range of real-world problems and has been actively discussed in the community of statistics and data mining. In this paper, we present a novel nonparametric approach to detecting the change of probability distributions of sequence data. Our key idea is to estimate the ratio of probability densities, not the probability densities themselves. This formulation allows us to avoid nonparametric density estimation, which is known to be a difficult problem. We provide a change-point detection algorithm based on direct density-ratio estimation that can be computed very efficiently in an online manner. The usefulness of the proposed method is demonstrated through experiments using artificial and real-world datasets.
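A rough sketch of the idea on synthetic data: a change score between a reference window and a test window is obtained from a minimal uLSIF-style least-squares density-ratio fit with Gaussian kernels (the paper's online algorithm and its model selection are omitted; kernel width, regularization and window sizes below are arbitrary):

```python
# Change score via direct density-ratio estimation between two windows.
import numpy as np

def ulsif_score(x_ref, x_test, sigma=1.0, lam=0.1):
    """Rough Pearson-divergence change score between two 1-D samples."""
    centers = x_test[:50]                                   # kernel centers
    K = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))
    Phi_ref, Phi_test = K(x_ref, centers), K(x_test, centers)
    H = Phi_ref.T @ Phi_ref / len(x_ref)
    h = Phi_test.mean(axis=0)
    theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    # Plug-in estimate of the Pearson divergence between the windows
    return float(h @ theta - 0.5 * theta @ H @ theta - 0.5)

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
scores = [ulsif_score(series[t - 100:t], series[t:t + 100])
          for t in range(100, len(series) - 100, 20)]
# The score peaks around the change point at t = 500.
```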

Posted Content
TL;DR: The generalized mean field (GMF) algorithm as discussed by the authors is a generalization of mean field theory for approximate inference in complex exponential family models, which limits the optimization to the class of cluster-factorizable distributions.
Abstract: The mean field methods, which entail approximating intractable probability distributions variationally with distributions from a tractable family, enjoy high efficiency, guaranteed convergence, and provide lower bounds on the true likelihood. But due to the requirement for model-specific derivation of the optimization equations and unclear inference quality in various models, they are not widely used as a generic approximate inference algorithm. In this paper, we discuss a generalized mean field theory on variational approximation to a broad class of intractable distributions using a rich set of tractable distributions via constrained optimization over distribution spaces. We present a class of generalized mean field (GMF) algorithms for approximate inference in complex exponential family models, which entails limiting the optimization to the class of cluster-factorizable distributions. GMF is a generic method requiring no model-specific derivations. It factors a complex model into a set of disjoint variable clusters, and uses a set of canonical fixed-point equations to iteratively update the cluster distributions, converging to locally optimal cluster marginals that preserve the original dependency structure within each cluster and hence fully decompose the overall inference problem. We empirically analyzed the effect of different tractable families (clusters of different granularity) on inference quality, and compared GMF with belief propagation (BP) on several canonical models. A possible extension to higher-order MF approximation is also discussed.

Journal ArticleDOI
TL;DR: This study introduces a new class of models for multivariate discrete data based on pair copula constructions (PCCs) that has two major advantages: it is shown that discrete PCCs attain highly flexible dependence structures, and the high quality of the inference function for margins and maximum likelihood estimates is demonstrated.
Abstract: Multivariate discrete response data can be found in diverse fields, including econometrics, finance, biometrics, and psychometrics. Our contribution, through this study, is to introduce a new class of models for multivariate discrete data based on pair copula constructions (PCCs) that has two major advantages. First, by deriving the conditions under which any multivariate discrete distribution can be decomposed as a PCC, we show that discrete PCCs attain highly flexible dependence structures. Second, the computational burden of evaluating the likelihood for an m-dimensional discrete PCC only grows quadratically with m. This compares favorably to existing models for which computing the likelihood either requires the evaluation of 2^m terms or slow numerical integration methods. We demonstrate the high quality of inference function for margins and maximum likelihood estimates, both under a simulated setting and for an application to a longitudinal discrete dataset on headache severity.

Journal ArticleDOI
TL;DR: In this paper, the authors show that the corresponding Fokker-Planck equation is a system of N nonlinear ordinary differential equations defined on a Riemannian manifold of probability distributions whose inner product is generated by a 2-Wasserstein distance.
Abstract: The classical Fokker–Planck equation is a linear parabolic equation which describes the time evolution of the probability distribution of a stochastic process defined on a Euclidean space. Corresponding to a stochastic process, there often exists a free energy functional which is defined on the space of probability distributions and is a linear combination of a potential and an entropy. In recent years, it has been shown that the Fokker–Planck equation is the gradient flow of the free energy functional defined on the Riemannian manifold of probability distributions whose inner product is generated by a 2-Wasserstein distance. In this paper, we consider analogous matters for a free energy functional or Markov process defined on a graph with a finite number of vertices and edges. If N ≧ 2 is the number of vertices of the graph, we show that the corresponding Fokker–Planck equation is a system of N nonlinear ordinary differential equations defined on a Riemannian manifold of probability distributions. However, in contrast to stochastic processes defined on Euclidean spaces, the situation is more subtle for discrete spaces. We have different choices for inner products on the space of probability distributions resulting in different Fokker–Planck equations for the same process. It is shown that there is a strong connection but there are also substantial discrepancies between the systems of ordinary differential equations and the classical Fokker–Planck equation on Euclidean spaces. Furthermore, both systems of ordinary differential equations are gradient flows for the same free energy functional defined on the Riemannian manifolds of probability distributions with different metrics. Some examples are also discussed.
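For orientation only, the sketch below evolves a probability distribution on a small graph with a classical linear master equation whose stationary law is a Gibbs distribution, and checks that a free energy of the form potential plus entropy decreases along the flow; it is not the paper's nonlinear, metric-dependent gradient-flow system, and the graph and potential are made up:

```python
# Linear master equation on a 4-cycle with Metropolis rates; the free energy
# (potential + entropy) decreases monotonically toward the Gibbs minimizer.
import numpy as np
from scipy.integrate import solve_ivp

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]        # a 4-cycle graph (assumed)
psi = np.array([0.0, 1.0, 0.5, 2.0])            # potential on the vertices
beta = 1.0

Q = np.zeros((4, 4))                             # generator: Q[i, j] = rate i -> j
for i, j in edges:
    Q[i, j] = min(1.0, np.exp(-beta * (psi[j] - psi[i])))
    Q[j, i] = min(1.0, np.exp(-beta * (psi[i] - psi[j])))
np.fill_diagonal(Q, -Q.sum(axis=1))

def free_energy(p):
    return np.dot(p, psi) + np.dot(p, np.log(p)) / beta

sol = solve_ivp(lambda t, p: Q.T @ p, (0, 20), np.full(4, 0.25),
                t_eval=np.linspace(0, 20, 50))
print([free_energy(p) for p in sol.y.T][:5])     # monotonically decreasing
```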

Journal ArticleDOI
Ronald R. Yager1
TL;DR: The concept of Z‐numbers, which consist of an ordered pair of fuzzy numbers, is recalled and used to provide information about an uncertain variable V in the form of a Z‐valuation, which expresses the knowledge that the probability that V is A is equal to B.
Abstract: We first recall the concept of Z-numbers introduced by Zadeh. These objects consist of an ordered pair (A, B) of fuzzy numbers. We then use these Z-numbers to provide information about an uncertain variable V in the form of a Z-valuation, which expresses the knowledge that the probability that V is A is equal to B. We show that these Z-valuations essentially induce a possibility distribution over probability distributions associated with V. We provide a simple illustration of a Z-valuation. We show how we can use this representation to make decisions and answer questions. We show how to manipulate and combine multiple Z-valuations. We show the relationship between Z-numbers and linguistic summaries. Finally, we provide a representation of Z-valuations in terms of Dempster–Shafer belief structures, which makes use of type-2 fuzzy sets.

Journal ArticleDOI
TL;DR: In this article, a heuristic grid clustering method is developed to cluster each 2D diagram into rectangular subspaces (states) with regard to travel time homogeneity, and then compute the transition probabilities and link partial travel time distributions to obtain the arterial route travel time distribution.
Abstract: Recent advances in probe vehicle deployment offer an innovative prospect for research in arterial travel time estimation. Specifically, we focus on the estimation of the probability distribution of arterial route travel time, which contains more information regarding arterial performance measurements and travel time reliability. One of the fundamental contributions of this work is the integration of the travel time correlation of a route's successive links within the methodology. In the proposed technique, given probe vehicles' travel times on the traversed links, a two-dimensional (2D) diagram is established with data points representing the travel times of a probe vehicle crossing two consecutive links. A heuristic grid clustering method is developed to cluster each 2D diagram into rectangular subspaces (states) with regard to travel time homogeneity. By applying a Markov chain procedure, we integrate the correlation between states of 2D diagrams for successive links. We then compute the transition probabilities and link partial travel time distributions to obtain the arterial route travel time distribution. The procedure is tested with various probe vehicle sample sizes on two study sites with time-dependent conditions, using field measurements and simulated data. The estimated distributions from the Markov chain procedure are very close to the observed ones and more accurate than the convolution of the links' travel time distributions, for different levels of congestion and even for small penetration rates of probe vehicles.
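A compact sketch of the state-based idea on synthetic two-link data, with crude quantile binning standing in for the paper's heuristic grid clustering (all distributions and sample sizes are illustrative):

```python
# Route travel time distribution via a state-based Markov chain over two links.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
t1 = rng.gamma(4, 10, n)                        # probe travel times, link 1 (s)
t2 = 0.6 * t1 + rng.gamma(3, 8, n)              # correlated times on link 2

k = 4
def states(t):
    edges = np.quantile(t, np.linspace(0, 1, k + 1))
    return np.clip(np.searchsorted(edges, t, side="right") - 1, 0, k - 1)

s1, s2 = states(t1), states(t2)
trans = np.zeros((k, k))
for a, b in zip(s1, s2):
    trans[a, b] += 1
trans /= trans.sum(axis=1, keepdims=True)       # P(link-2 state | link-1 state)

samples = []
for _ in range(20000):
    i = rng.integers(n)                         # draw a link-1 observation
    j = rng.choice(k, p=trans[s1[i]])           # Markov step to a link-2 state
    samples.append(t1[i] + rng.choice(t2[s2 == j]))
route_tt = np.array(samples)                    # route travel time distribution
print("mean route travel time:", route_tt.mean())
```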

Journal ArticleDOI
TL;DR: The distribution of end-to-end delay in multi-hop WSNs is investigated, and a comprehensive and accurate cross-layer analysis framework, which employs a stochastic queueing model in realistic channel environments, is developed; the authors suggest that this framework can be easily extended to model additional QoS metrics such as the energy consumption distribution.
Abstract: Emerging applications of wireless sensor networks (WSNs) require real-time quality-of-service (QoS) guarantees to be provided by the network. Due to the nondeterministic impacts of the wireless channel and queuing mechanisms, probabilistic analysis of QoS is essential. One important metric of QoS in WSNs is the probability distribution of the end-to-end delay. Compared to other widely used delay performance metrics such as the mean delay, delay variance, and worst-case delay, the delay distribution can be used to obtain the probability to meet a specific deadline for QoS-based communication in WSNs. To investigate the end-to-end delay distribution, in this paper, a comprehensive cross-layer analysis framework, which employs a stochastic queueing model in realistic channel environments, is developed. This framework is generic and can be parameterized for a wide variety of MAC protocols and routing protocols. Case studies with the CSMA/CA MAC protocol and an anycast protocol are conducted to illustrate how the developed framework can analytically predict the distribution of the end-to-end delay. Extensive test-bed experiments and simulations are performed to validate the accuracy of the framework for both deterministic and random deployments. Moreover, the effects of various network parameters on the distribution of end-to-end delay are investigated through the developed framework. To the best of our knowledge, this is the first work that provides a generic, probabilistic cross-layer analysis of end-to-end delay in WSNs.

Journal ArticleDOI
TL;DR: It is proposed that operational data from wind turbines are used to estimate bivariate probability distribution functions representing the power curve of existing turbines so that deviations from expected behavior can be detected.
Abstract: Power curves constructed from wind speed and active power output measurements provide an established method of analyzing wind turbine performance. In this paper, it is proposed that operational data from wind turbines are used to estimate bivariate probability distribution functions representing the power curve of existing turbines so that deviations from expected behavior can be detected. Owing to the complex form of dependency between active power and wind speed, which no classical parameterized distribution can approximate, the application of empirical copulas is proposed; the statistical theory of copulas allows the distribution form of marginal distributions of wind speed and power to be expressed separately from information about the dependency between them. Copula analysis is discussed in terms of its likely usefulness in wind turbine condition monitoring, particularly in early recognition of incipient faults such as blade degradation, yaw, and pitch errors.
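A minimal sketch of the rank-transform step on synthetic SCADA-like data: each margin is mapped to [0, 1] by its empirical ranks and an empirical copula is tabulated on a grid; monitoring would then compare new operating data against this baseline (data, noise level and grid size are illustrative):

```python
# Empirical copula of wind speed and active power from operational data.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(5)
v = rng.weibull(2.0, 5000) * 8.0                       # wind speed (m/s)
power = np.clip((v / 12.0) ** 3, 0, 1) + rng.normal(0, 0.03, v.size)

# Rank (probability integral) transform of each margin to [0, 1]
u1 = rankdata(v) / (v.size + 1)
u2 = rankdata(power) / (power.size + 1)

def empirical_copula(u, w, grid=20):
    """C(a, b) = fraction of points with u <= a and w <= b, on a grid."""
    a = np.linspace(0, 1, grid + 1)
    return np.array([[np.mean((u <= ai) & (w <= bj)) for bj in a] for ai in a])

C = empirical_copula(u1, u2)
print(C[10, 10])     # copula value at the (0.5, 0.5) grid point
```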

Journal ArticleDOI
TL;DR: In this paper, the possibility of calculating the standardized precipitation index (SPI) by fitting the normal and the log-normal probability distributions to the precipitation data was studied at various time scales (1, 3, 6, 12 and 24 months).
Abstract: The Standardized Precipitation Index (SPI) is widely used as a meteorological drought index to identify the duration and/or severity of a drought. The SPI is usually computed by fitting the gamma probability distribution to the observed precipitation data. In this work, the possibility of calculating the SPI by fitting the normal and the log-normal probability distributions to the precipitation data was studied. For this purpose, 19 time series of 76 years of monthly precipitation were used, and the assumption that the gamma probability distribution would provide a better representation of the precipitation data than the log-normal and normal distributions was tested at various time scales (1, 3, 6, 12 and 24 months). It is concluded that for the SPI of 12 or 24 months, the log-normal or the normal probability distribution can be used for simplicity, instead of the gamma, producing almost the same results.
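A short sketch of the standard SPI recipe the paper builds on, assuming synthetic monthly precipitation: aggregate to the chosen time scale, fit a candidate distribution, and map the fitted CDF through the standard normal quantile function (per-calendar-month fitting and zero-precipitation handling are omitted for brevity):

```python
# SPI at a 3-month time scale via a gamma fit and an equiprobability transform.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
monthly_precip = rng.gamma(shape=2.0, scale=30.0, size=76 * 12)   # synthetic record

k = 3                                                    # time scale (months)
agg = np.convolve(monthly_precip, np.ones(k), mode="valid")       # rolling k-month totals

shape, loc, scale = stats.gamma.fit(agg, floc=0)         # fit gamma with loc fixed at 0
cdf = stats.gamma.cdf(agg, shape, loc=loc, scale=scale)
spi = stats.norm.ppf(cdf)                                # standardized index

# The same recipe with stats.norm.fit or stats.lognorm.fit gives the
# normal / log-normal variants compared in the paper.
```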

Journal ArticleDOI
TL;DR: This review of probabilistic concepts in the literature aims to disentangle these concepts and to classify the empirical evidence accordingly.

Journal ArticleDOI
TL;DR: In this article, a probability-based theoretical scheme for building process-based models of uncertain hydrological systems is presented, where uncertainty for the model output is assessed by estimating the related probability distribution via simulation.
Abstract: We present a probability-based theoretical scheme for building process-based models of uncertain hydrological systems, thereby unifying hydrological modeling and uncertainty assessment. Uncertainty for the model output is assessed by estimating the related probability distribution via simulation, thus shifting from one to many applications of the selected hydrological model. Each simulation is performed after stochastically perturbing input data, parameters and model output, the latter by adding random outcomes from the population of the model error, whose probability distribution is conditioned on input data and model parameters. Within this view, randomness, and therefore uncertainty, is treated as an inherent property of hydrological systems. We discuss the related assumptions as well as the open research questions. The theoretical framework is illustrated by presenting real-world and synthetic applications. The relevant contribution of this study is related to proposing a statistically consistent simulation framework for uncertainty estimation which does not require model likelihood computation and simplification of the model structure. The results show that uncertainty is satisfactorily estimated, although the impact of the assumptions could be significant in conditions of data scarcity.
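A toy sketch of the simulation scheme, assuming a hypothetical rainfall-runoff model: input data, parameters and model output are each stochastically perturbed in every run, and predictive uncertainty is read off the resulting ensemble (all error laws and priors below are illustrative, and the conditioning of the model-error distribution is simplified away):

```python
# Ensemble-based uncertainty estimation by perturbing inputs, parameters and output.
import numpy as np

rng = np.random.default_rng(2)

def hydro_model(rain, k):
    """Hypothetical toy rainfall-runoff model: linear reservoir response."""
    q = np.zeros_like(rain)
    for t in range(1, len(rain)):
        q[t] = (1 - 1 / k) * q[t - 1] + rain[t] / k
    return q

rain_obs = rng.gamma(2.0, 3.0, 200)                            # "observed" rainfall (mm/day)

ensemble = []
for _ in range(1000):
    rain = rain_obs * rng.lognormal(0.0, 0.1, rain_obs.size)   # input-data perturbation
    k = rng.uniform(3.0, 6.0)                                  # parameter draw
    q = hydro_model(rain, k)
    q = q + rng.normal(0.0, 0.2, q.size)                       # model-error outcome
    ensemble.append(q)

ensemble = np.array(ensemble)
lower, upper = np.percentile(ensemble, [5, 95], axis=0)        # 90% predictive band
```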

Posted Content
TL;DR: In this paper, a kernel-based discriminative learning framework on probability measures is presented, which learns using a collection of probability distributions that have been constructed to meaningfully represent training data; by representing these probability distributions as mean embeddings in the reproducing kernel Hilbert space, many standard kernel-based learning techniques can be applied.
Abstract: This paper presents a kernel-based discriminative learning framework on probability measures. Rather than relying on large collections of vectorial training examples, our framework learns using a collection of probability distributions that have been constructed to meaningfully represent training data. By representing these probability distributions as mean embeddings in the reproducing kernel Hilbert space (RKHS), we are able to apply many standard kernel-based learning techniques in straightforward fashion. To accomplish this, we construct a generalization of the support vector machine (SVM) called a support measure machine (SMM). Our analyses of SMMs provides several insights into their relationship to traditional SVMs. Based on such insights, we propose a flexible SVM (Flex-SVM) that places different kernel functions on each training example. Experimental results on both synthetic and real-world data demonstrate the effectiveness of our proposed framework.
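A bare-bones sketch of the embedding idea: each training example is itself a sample ("bag") from a distribution, and the inner product between two RKHS mean embeddings is estimated as the average pairwise kernel value; the resulting Gram matrix can be fed to a standard kernelized SVM (the paper's Flex-SVM with per-example kernels is not shown, and all data are synthetic):

```python
# Kernel between empirical distributions via their mean embeddings.
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mean_embedding_kernel(samples_a, samples_b, gamma=0.5):
    """<mu_A, mu_B> estimated as the average pairwise kernel value."""
    return rbf(samples_a, samples_b, gamma).mean()

rng = np.random.default_rng(4)
bags = [rng.normal(loc=m, scale=1.0, size=(30, 2)) for m in (0, 0, 2, 2)]
labels = np.array([0, 0, 1, 1])

G = np.array([[mean_embedding_kernel(a, b) for b in bags] for a in bags])
# G can be passed to e.g. sklearn.svm.SVC(kernel="precomputed") as the Gram
# matrix over "training examples" that are themselves distributions.
```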

Journal ArticleDOI
TL;DR: A dimensionality reduction algorithm is developed that can be used to build a low-dimensional map of a phase space of high dimensionality; it is applied to a small model protein and succeeds in reproducing the free-energy surface that is obtained from a parallel tempering calculation.
Abstract: When examining complex problems, such as the folding of proteins, coarse grained descriptions of the system drive our investigation and help us to rationalize the results. Oftentimes collective variables (CVs), derived through some chemical intuition about the process of interest, serve this purpose. Because finding these CVs is the most difficult part of any investigation, we recently developed a dimensionality reduction algorithm, sketch-map, that can be used to build a low-dimensional map of a phase space of high-dimensionality. In this paper we discuss how these machine-generated CVs can be used to accelerate the exploration of phase space and to reconstruct free-energy landscapes. To do so, we develop a formalism in which high-dimensional configurations are no longer represented by low-dimensional position vectors. Instead, for each configuration we calculate a probability distribution, which has a domain that encompasses the entirety of the low-dimensional space. To construct a biasing potential, we exploit an analogy with metadynamics and use the trajectory to adaptively construct a repulsive, history-dependent bias from the distributions that correspond to the previously visited configurations. This potential forces the system to explore more of phase space by making it desirable to adopt configurations whose distributions do not overlap with the bias. We apply this algorithm to a small model protein and succeed in reproducing the free-energy surface that we obtain from a parallel tempering calculation.

Journal ArticleDOI
TL;DR: In this paper, the authors study the convergence of a class of sequential Monte Carlo (SMC) methods where the times at which resampling occurs are computed online using criteria such as the effective sample size.
Abstract: Sequential Monte Carlo (SMC) methods are a class of techniques to sample approximately from any sequence of probability distributions using a combination of importance sampling and resampling steps. This paper is concerned with the convergence analysis of a class of SMC methods where the times at which resampling occurs are computed online using criteria such as the effective sample size. This is a popular approach amongst practitioners but there are very few convergence results available for these methods. By combining semigroup techniques with an original coupling argument, we obtain functional central limit theorems and uniform exponential concentration estimates for these algorithms.
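A minimal sketch of the algorithmic setting analysed in the paper: a bootstrap particle filter on a toy linear-Gaussian model in which resampling is triggered only when the effective sample size falls below a threshold (model, threshold and sizes are illustrative):

```python
# Bootstrap particle filter with adaptive (ESS-triggered) resampling.
import numpy as np

rng = np.random.default_rng(9)
T, N = 100, 1000

# Simulate a toy AR(1) state with noisy observations
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0.0, 1.0)
y = x_true + rng.normal(0.0, 0.5, T)

particles = rng.normal(0.0, 1.0, N)
logw = np.zeros(N)
estimates = []
for t in range(T):
    particles = 0.9 * particles + rng.normal(0.0, 1.0, N)    # propagate
    logw += -0.5 * ((y[t] - particles) / 0.5) ** 2           # reweight
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(np.sum(w * particles))                  # filtering mean at time t
    ess = 1.0 / np.sum(w ** 2)                               # effective sample size
    if ess < N / 2:                                          # resample only when needed
        idx = rng.choice(N, size=N, p=w)
        particles, logw = particles[idx], np.zeros(N)
```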

Journal ArticleDOI
TL;DR: A new and generalized statistical model for the irradiance fluctuations of an unbounded optical wavefront propagating through a turbulent medium, valid under all irradiance fluctuation conditions in homogeneous, isotropic turbulence, is completed by including the adverse effect of pointing error losses due to misalignment.
Abstract: Recently, a new and generalized statistical model, called the M or Malaga distribution, was proposed to model the irradiance fluctuations of an unbounded optical wavefront (plane and spherical waves) propagating through a turbulent medium under all irradiance fluctuation conditions in homogeneous, isotropic turbulence. The Malaga distribution was demonstrated to have the advantage of unifying most of the statistical models proposed until now in the literature in a closed-form expression, providing, in addition, an excellent agreement with published plane wave and spherical wave simulation data over a wide range of turbulence conditions (weak to strong). Now, such a model is completed by including the adverse effect of pointing error losses due to misalignment. In this respect, the well-known effects of aperture size, beam width and jitter variance are taken into account. Accordingly, after presenting the analytical expressions for the combined distribution of scintillation and pointing errors, we derive the centered moments of the overall probability distribution. Finally, we obtain the analytical expressions for the average bit error rate performance of the M distribution affected by pointing errors. Numerical results show the impact of misalignment on link performance.

Journal ArticleDOI
22 Oct 2012
TL;DR: The aim is to optimize the slot access probability in order to achieve rateless-like distributions, focusing both on the maximization of the resolution probability of user transmissions and the throughput of the scheme.
Abstract: We propose a novel distributed random access scheme for wireless networks based on slotted ALOHA, motivated by the analogies between successive interference cancellation and iterative belief-propagation decoding on erasure channels. The proposed scheme assumes that each user independently accesses the wireless link in each slot with a predefined probability, resulting in a distribution of user transmissions over slots. The operation bears analogy with rateless codes, both in terms of probability distributions as well as to the fact that the ALOHA frame becomes fluid and adapted to the current contention process. Our aim is to optimize the slot access probability in order to achieve rateless-like distributions, focusing both on the maximization of the resolution probability of user transmissions and the throughput of the scheme.
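A small Monte Carlo sketch of the access scheme described above: each user transmits in every slot independently with a fixed probability and the receiver repeatedly decodes singleton slots and cancels the corresponding users' other transmissions; the parameters are illustrative, not the optimized distributions of the paper:

```python
# Slotted-ALOHA-style random access with successive interference cancellation.
import numpy as np

rng = np.random.default_rng(7)
n_users, n_slots = 100, 120
p = 3.0 / n_slots                              # about three transmissions per user on average

A = rng.random((n_users, n_slots)) < p         # A[u, s]: user u transmits in slot s
resolved = np.zeros(n_users, dtype=bool)

while True:
    load = A[~resolved].sum(axis=0)            # undecoded transmissions per slot
    singles = np.where(load == 1)[0]
    if len(singles) == 0:
        break
    # Decode every user that appears alone in some slot, then cancel its copies
    users = np.unique(np.where(A[:, singles] & ~resolved[:, None])[0])
    resolved[users] = True

print("resolution probability:", resolved.mean())
print("throughput (decoded users per slot):", resolved.sum() / n_slots)
```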