
Showing papers on "Probability density function published in 2003"


Journal ArticleDOI
TL;DR: A fully probabilistic framework is presented for estimating local probability density functions on parameters of interest in a model of diffusion, allowing for the quantification of belief in tractography results and the estimation of the cortical connectivity of the human thalamus.
Abstract: A fully probabilistic framework is presented for estimating local probability density functions on parameters of interest in a model of diffusion. This technique is applied to the estimation of parameters in the diffusion tensor model, and also to a simple partial volume model of diffusion. In both cases the parameters of interest include parameters defining local fiber direction. A technique is then presented for using these density functions to estimate global connectivity (i.e., the probability of the existence of a connection through the data field, between any two distant points), allowing for the quantification of belief in tractography results. This technique is then applied to the estimation of the cortical connectivity of the human thalamus. The resulting connectivity distributions correspond well with predictions from invasive tracer methods in nonhuman primate.

2,970 citations


Proceedings ArticleDOI
27 Oct 2003
TL;DR: First results on real data demonstrate that the normal distributions transform algorithm can map unmodified indoor environments reliably and in real time, even without using odometry data.
Abstract: Matching 2D range scans is a basic component of many localization and mapping algorithms. Most scan match algorithms require finding correspondences between the used features, i.e., points or lines. We propose an alternative representation for a range scan, the normal distributions transform. Similar to an occupancy grid, we subdivide the 2D plane into cells. To each cell, we assign a normal distribution, which locally models the probability of measuring a point. The result of the transform is a piecewise continuous and differentiable probability density that can be used to match another scan using Newton's algorithm. Thereby, no explicit correspondences have to be established. We present the algorithm in detail and show its application to relative position tracking and simultaneous localization and map building (SLAM). First results on real data demonstrate that the algorithm can map unmodified indoor environments reliably and in real time, even without using odometry data.

944 citations
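
As a rough, hedged illustration of the transform described above (our own toy sketch in Python/NumPy, not the authors' implementation; the cell size, regularization constant, and scoring convention are arbitrary choices), the code grids a 2D scan into cells, fits a mean and covariance per occupied cell, and evaluates the resulting piecewise Gaussian score at a query point. A full matcher would optimize a pose with Newton's algorithm over this score; that step is omitted here.

import numpy as np

def build_ndt(points, cell_size=1.0, min_pts=3, eps=1e-3):
    """Fit one Gaussian (mean, covariance) per occupied grid cell."""
    cells = {}
    keys = np.floor(points / cell_size).astype(int)
    for key in set(map(tuple, keys)):
        pts = points[(keys == key).all(axis=1)]
        if len(pts) >= min_pts:
            mean = pts.mean(axis=0)
            cov = np.cov(pts.T) + eps * np.eye(2)   # regularize nearly singular cells
            cells[key] = (mean, np.linalg.inv(cov))
    return cells

def ndt_score(cells, p, cell_size=1.0):
    """Evaluate the (unnormalized) Gaussian of the cell containing point p."""
    key = tuple(np.floor(p / cell_size).astype(int))
    if key not in cells:
        return 0.0
    mean, cov_inv = cells[key]
    d = p - mean
    return float(np.exp(-0.5 * d @ cov_inv @ d))

# toy scan: a noisy wall along y = 0.2 x
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
scan = np.column_stack([x, 0.2 * x + 0.05 * rng.standard_normal(500)])
cells = build_ndt(scan)
print(ndt_score(cells, np.array([5.0, 1.0])), ndt_score(cells, np.array([5.0, 3.0])))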


Journal ArticleDOI
TL;DR: A concise closed-form expression is derived for the characteristic function (c.f.) of MIMO system capacity with arbitrary correlation among the transmitting antennas or among the receiving antennas in frequency-flat Rayleigh-fading environments, and an exact expression for the mean value of the capacity for arbitrary correlation matrices is derived.
Abstract: In this paper, we investigate the capacity distribution of spatially correlated, multiple-input-multiple-output (MIMO) channels. In particular, we derive a concise closed-form expression for the characteristic function (c.f.) of MIMO system capacity with arbitrary correlation among the transmitting antennas or among the receiving antennas in frequency-flat Rayleigh-fading environments. Using the exact expression of the c.f., the probability density function (pdf) and the cumulative distribution function (CDF) can be easily obtained, thus enabling the exact evaluation of the outage and mean capacity of spatially correlated MIMO channels. Our results are valid for scenarios with the number of transmitting antennas greater than or equal to that of receiving antennas with arbitrary correlation among them. Moreover, the results are valid for an arbitrary number of transmitting and receiving antennas in uncorrelated MIMO channels. It is shown that the capacity loss is negligible even with a correlation coefficient between two adjacent antennas as large as 0.5 for the exponential correlation model. Finally, we derive an exact expression for the mean value of the capacity for arbitrary correlation matrices.

735 citations
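
The closed-form characteristic-function results are not reproduced here, but the following Monte Carlo sketch (our own illustration; the antenna counts, SNR, and exponential correlation coefficient are arbitrary) shows how the capacity distribution of a transmit-correlated flat Rayleigh MIMO channel can be estimated empirically, e.g. to read off mean and outage capacity for comparison with such analytical expressions.

import numpy as np

def exp_corr(n, rho):
    """Exponential correlation model: R[i, j] = rho^|i-j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def capacity_samples(nt=4, nr=4, snr_db=10.0, rho_t=0.5, trials=20000, seed=1):
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    Rt_half = np.linalg.cholesky(exp_corr(nt, rho_t))       # transmit-side correlation only
    caps = np.empty(trials)
    for k in range(trials):
        G = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        H = G @ Rt_half.conj().T                             # H = G * Rt^{1/2}
        M = np.eye(nr) + (snr / nt) * H @ H.conj().T
        caps[k] = np.log2(np.linalg.det(M).real)
    return caps

caps = capacity_samples()
print("mean capacity  :", caps.mean(), "bit/s/Hz")
print("1% outage cap. :", np.quantile(caps, 0.01), "bit/s/Hz")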


Journal ArticleDOI
TL;DR: A new shadowed Rice (1948) model for land mobile satellite channels, where the amplitude of the line-of-sight is characterized by the Nakagami distribution, provides a similar fit to the experimental data as the well-accepted Loo's (1985) model but with significantly less computational burden.
Abstract: We propose a new shadowed Rice (1948) model for land mobile satellite channels. In this model, the amplitude of the line-of-sight is characterized by the Nakagami distribution. The major advantage of the model is that it leads to closed-form and mathematically-tractable expressions for the fundamental channel statistics such as the envelope probability density function, moment generating function of the instantaneous power, and the level crossing rate. The model is very convenient for analytical and numerical performance prediction of complicated narrowband and wideband land mobile satellite systems, with different types of uncoded/coded modulations, with or without diversity. Comparison of the first- and the second-order statistics of the proposed model with different sets of published channel data demonstrates the flexibility of the new model in characterizing a variety of channel conditions and propagation mechanisms over satellite links. Interestingly, the proposed model provides a similar fit to the experimental data as the well-accepted Loo's (1985) model but with significantly less computational burden.

669 citations
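
A Monte Carlo sketch of a shadowed-Rice-type envelope with a Nakagami-distributed line-of-sight amplitude, along the lines described above (our own illustration; the parameter values are arbitrary and an empirical histogram stands in for the paper's closed-form pdf):

import numpy as np

def shadowed_rice_envelope(n, m=2.0, omega=1.0, b0=0.25, seed=0):
    """Envelope of (Nakagami LOS) + (Rayleigh scatter), drawn by simulation.

    m, omega : Nakagami shape and spread of the line-of-sight amplitude
    b0       : average power of each scatter quadrature component
    """
    rng = np.random.default_rng(seed)
    los_amp = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=n))  # Nakagami via sqrt(Gamma)
    los_phase = rng.uniform(0, 2 * np.pi, n)
    scatter = np.sqrt(b0) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return np.abs(los_amp * np.exp(1j * los_phase) + scatter)

r = shadowed_rice_envelope(200000)
pdf_est, edges = np.histogram(r, bins=60, density=True)
print("empirical envelope pdf, first few bins:", pdf_est[:5])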


Journal ArticleDOI
TL;DR: In this paper, new sum-of-sinusoids statistical simulation models are proposed for Rayleigh fading channels; the autocorrelations and cross correlations of the quadrature components, and the autocorrelation of the complex envelope, of the new simulators match the desired ones exactly, even if the number of sinusoids is as small as a single-digit integer.
Abstract: In this paper, new sum-of-sinusoids statistical simulation models are proposed for Rayleigh fading channels. These new models employ random path gain, random initial phase, and conditional random Doppler frequency for all individual sinusoids. It is shown that the autocorrelations and cross correlations of the quadrature components, and the autocorrelation of the complex envelope of the new simulators match the desired ones exactly, even if the number of sinusoids is as small as a single-digit integer. Moreover, the probability density functions of the envelope and phase, the level crossing rate, the average fade duration, and the autocorrelation of the squared fading envelope which contains fourth-order statistics of the new simulators, asymptotically approach the correct ones as the number of sinusoids approaches infinity, while good convergence is achieved even when the number of sinusoids is as small as eight. The new simulators can be directly used to generate multiple uncorrelated fading waveforms for frequency selective fading channels, multiple-input multiple-output channels, and diversity combining scenarios. Statistical properties of one of the new simulators are evaluated by numerical results, which show good agreement.

576 citations
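
For context, here is a generic sum-of-sinusoids Rayleigh fading sketch (our own simplified construction with random arrival angles and random initial phases; it is not the paper's exact parameterization and does not reproduce its statistical guarantees):

import numpy as np

def sos_rayleigh(t, fd=100.0, M=8, seed=0):
    """Generic sum-of-sinusoids Rayleigh fading sketch.

    Each quadrature component is a sum of M sinusoids with random arrival
    angles and random initial phases (one draw per realization).
    """
    rng = np.random.default_rng(seed)
    alpha = rng.uniform(0, 2 * np.pi, M)        # random Doppler arrival angles
    phi_c = rng.uniform(0, 2 * np.pi, M)        # random initial phases, in-phase branch
    phi_s = rng.uniform(0, 2 * np.pi, M)        # random initial phases, quadrature branch
    wd = 2 * np.pi * fd
    xc = np.sqrt(1.0 / M) * np.sum(np.cos(wd * np.outer(t, np.cos(alpha)) + phi_c), axis=1)
    xs = np.sqrt(1.0 / M) * np.sum(np.cos(wd * np.outer(t, np.sin(alpha)) + phi_s), axis=1)
    return xc + 1j * xs

t = np.arange(0, 1.0, 1e-4)
h = sos_rayleigh(t)
print("time-averaged power (target 1.0):", np.mean(np.abs(h) ** 2))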


Journal ArticleDOI
TL;DR: The proposed model compares well with other competing models to fit data that exhibits a bathtub-shaped hazard-rate function and can be considered as another useful 3-parameter generalization of the Weibull distribution.
Abstract: A new lifetime distribution capable of modeling a bathtub-shaped hazard-rate function is proposed. The proposed model is derived as a limiting case of the Beta Integrated Model and has both the Weibull distribution and Type 1 extreme value distribution as special cases. The model can be considered as another useful 3-parameter generalization of the Weibull distribution. An advantage of the model is that the model parameters can be estimated easily based on a Weibull probability paper (WPP) plot that serves as a tool for model identification. Model characterization based on the WPP plot is studied. A numerical example is provided and comparison with another Weibull extension, the exponentiated Weibull, is also discussed. The proposed model compares well with other competing models to fit data that exhibits a bathtub-shaped hazard-rate function.

488 citations
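
As a hedged illustration of a bathtub-shaped hazard from a three-parameter Weibull generalization, the sketch below assumes the common form F(t) = 1 − exp(−a t^b e^(λt)) (our assumption for illustration, not necessarily the paper's exact model); it reduces to the Weibull for λ = 0 and gives a bathtub-shaped hazard for 0 < b < 1 and λ > 0.

import numpy as np

def hazard(t, a=0.02, b=0.5, lam=1.0):
    """Hazard rate of an assumed three-parameter Weibull generalization:
    F(t) = 1 - exp(-a * t**b * exp(lam * t)),  so  h(t) = a * t**(b-1) * (b + lam*t) * exp(lam*t).
    """
    return a * t ** (b - 1) * (b + lam * t) * np.exp(lam * t)

t = np.linspace(0.01, 3.0, 300)
h = hazard(t)
i_min = np.argmin(h)
print(f"hazard decreases until t = {t[i_min]:.2f}, then increases (bathtub shape)")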


Journal ArticleDOI
TL;DR: This paper reviews both approaches to neural computation, with a particular emphasis on the latter, which the authors see as a very promising framework for future modeling and experimental work.
Abstract: In the vertebrate nervous system, sensory stimuli are typically encoded through the concerted activity of large populations of neurons. Classically, these patterns of activity have been treated as encoding the value of the stimulus (e.g., the orientation of a contour), and computation has been formalized in terms of function approximation. More recently, there have been several suggestions that neural computation is akin to a Bayesian inference process, with population activity patterns representing uncertainty about stimuli in the form of probability distributions (e.g., the probability density function over the orientation of a contour). This paper reviews both approaches, with a particular emphasis on the latter, which we see as a very promising framework for future modeling and experimental work.

445 citations
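
To make the idea of population activity representing a probability distribution concrete, here is a small toy sketch (our own example, not taken from the review): a posterior over stimulus orientation is computed from independent Poisson spike counts with Gaussian tuning curves.

import numpy as np

def population_posterior(counts, preferred, s_grid, gain=10.0, width=15.0):
    """Posterior p(s | r) for independent Poisson neurons with Gaussian tuning and a flat prior."""
    # tuning curves f_i(s), shape (n_neurons, n_stimuli)
    f = gain * np.exp(-0.5 * ((preferred[:, None] - s_grid[None, :]) / width) ** 2) + 0.1
    log_post = counts @ np.log(f) - f.sum(axis=0)      # Poisson log-likelihood up to a constant
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

rng = np.random.default_rng(3)
preferred = np.linspace(-90, 90, 20)                   # preferred orientations (deg)
s_true = 20.0
rates = 10.0 * np.exp(-0.5 * ((preferred - s_true) / 15.0) ** 2) + 0.1
counts = rng.poisson(rates)
s_grid = np.linspace(-90, 90, 361)
post = population_posterior(counts, preferred, s_grid)
print("posterior mean:", (s_grid * post).sum(), " (true stimulus 20.0)")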


Journal ArticleDOI
TL;DR: In this article, the Fourier transform of generalized parton distribution functions at ξ = 0 describes the distribution of partons in the transverse plane, and the physical significance of these impact parameter dependent parton distributions is discussed.
Abstract: The Fourier transform of generalized parton distribution functions at ξ = 0 describes the distribution of partons in the transverse plane. The physical significance of these impact parameter dependent parton distribution functions is discussed. In particular, it is shown that they satisfy positivity constraints which justify their physical interpretation as a probability density. The generalized parton distribution H is related to the impact parameter distribution of unpolarized quarks for an unpolarized nucleon, $\tilde{H}$ is related to the distribution of longitudinally polarized quarks in a longitudinally polarized nucleon, and E is related to the distortion of the unpolarized quark distribution in the transverse plane when the nucleon has transverse polarization. The magnitude of the resulting transverse flavor dipole moment can be related to the anomalous magnetic moment for that flavor in a model independent way.

437 citations


Proceedings ArticleDOI
09 Nov 2003
TL;DR: In this paper, a new statistical timing analysis method that accounts for inter- and intra-die process variations and their spatial correlations is presented, where a statistical bound on the probability distribution function of the exact circuit delay is computed with linear run time.
Abstract: Process variations have become a critical issue in performance verification of high-performance designs. We present a new, statistical timing analysis method that accounts for inter- and intra-die process variations and their spatial correlations. Since statistical timing analysis has an exponential run time complexity, we propose a method whereby a statistical bound on the probability distribution function of the exact circuit delay is computed with linear run time. First, we develop a model for representing inter- and intra-die variations and their spatial correlations. Using this model, we then show how gate delays and arrival times can be represented as a sum of components, such that the correlation information between arrival times and gate delays is preserved. We then show how arrival times are propagated and merged in the circuit to obtain an arrival time distribution that is an upper bound on the distribution of the exact circuit delay. We prove the correctness of the bound and also show how the bound can be improved by propagating multiple arrival times. The proposed algorithms were implemented and tested on a set of benchmark circuits under several process variation scenarios. The results were compared with Monte Carlo simulation and show an average error of 3.32% over all test cases.

434 citations
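
The bound-propagation algorithm itself is not reproduced here; the following toy Monte Carlo (our own construction, with arbitrary delay values and variance components) only illustrates the variation model: each gate delay is a nominal value plus a global inter-die term, a spatially correlated intra-die term, and an independent term, and the resulting circuit-delay distribution is sampled for comparison against any analytic bound.

import numpy as np

rng = np.random.default_rng(7)
n_gates, trials = 6, 50000
nominal = np.array([100.0, 120.0, 90.0, 110.0, 95.0, 105.0])     # ps, toy values
# spatial correlation between gates: exponential decay with distance along a line
pos = np.arange(n_gates, dtype=float)
R = np.exp(-np.abs(pos[:, None] - pos[None, :]) / 2.0)
L = np.linalg.cholesky(R)

sigma_g, sigma_s, sigma_i = 4.0, 3.0, 2.0                         # component std devs (ps)
global_var = sigma_g * rng.standard_normal((trials, 1))           # inter-die, shared by all gates
spatial = sigma_s * rng.standard_normal((trials, n_gates)) @ L.T  # intra-die, spatially correlated
indep = sigma_i * rng.standard_normal((trials, n_gates))          # purely random per gate
delays = nominal + global_var + spatial + indep                   # (trials, n_gates)

# two paths through the toy circuit; circuit delay = max of the path delays
path_a, path_b = [0, 1, 2], [0, 3, 4, 5]
circuit = np.maximum(delays[:, path_a].sum(axis=1), delays[:, path_b].sum(axis=1))
print("95th-percentile circuit delay:", np.quantile(circuit, 0.95), "ps")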


Journal ArticleDOI
TL;DR: In this paper, the authors present results of self-consistent cosmological simulations of high-redshift galaxy formation that reproduce the Schmidt law naturally, without assuming it, and provide some clues to this puzzle.
Abstract: One of the most puzzling properties of observed galaxies is the universality of the empirical correlation between the star formation rate and the average gas surface density on kiloparsec scales (the Schmidt law). In this study, I present results of self-consistent cosmological simulations of high-redshift galaxy formation that reproduce the Schmidt law naturally, without assuming it, and provide some clues to this puzzle. The simulations have a dynamic range high enough to identify individual star-forming regions. The results indicate that the global Schmidt law is a manifestation of the overall density distribution of the interstellar medium. In particular, the density probability distribution function (PDF) in the simulated disks has a well-defined generic shape that can be approximated by a lognormal distribution at high densities. The PDF in a given region of the disk depends on the local average surface density Σ_g. The dependence is such that the fraction of gas mass in the high-density tail of the distribution scales as Σ_g^n with n ≈ 1.4, which gives rise to the Schmidt-like correlation. The high-density tail of the PDF is remarkably insensitive to the inclusion of feedback and details of the cooling and heating processes. This indicates that the global star formation rate is determined by the supersonic turbulence driven by gravitational instabilities on large scales, rather than stellar feedback or thermal instabilities on small scales.

291 citations
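
A hedged numerical illustration of the argument above (our own toy calculation; the lognormal width, threshold, and density range are arbitrary, so the fitted index is only indicative): assume a lognormal density PDF whose mean tracks the local surface density with a fixed logarithmic width, and measure how the mass fraction above a fixed threshold scales with surface density.

import numpy as np
from scipy.stats import norm

def dense_fraction(mean_density, rho_crit=100.0, sigma_ln=1.5):
    """Mass fraction above rho_crit for a lognormal density PDF with fixed log-width."""
    mu = np.log(mean_density) - 0.5 * sigma_ln ** 2   # so that the mean equals mean_density
    return norm.sf((np.log(rho_crit) - mu) / sigma_ln)

sigma_g = np.logspace(0.5, 1.5, 20)          # proxy for local gas surface density (arbitrary units)
mean_rho = 3.0 * sigma_g                     # assume the mean density tracks the surface density
frac = dense_fraction(mean_rho)
slope = np.polyfit(np.log10(sigma_g), np.log10(frac), 1)[0]
print("fitted index n in f_dense ~ Sigma_g^n:", round(slope, 2))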


Journal ArticleDOI
TL;DR: It is sufficient to find the orthonormal rotation y = Wz of prewhitened sources z = Vx that minimizes the mean squared error of the reconstruction of z from the rectified version y⁺ of y; experiments show in particular the fast convergence of the rotation and geodesic methods.
Abstract: We consider the task of solving the independent component analysis (ICA) problem x=As given observations x, with a constraint of nonnegativity of the source random vector s. We refer to this as nonnegative independent component analysis and we consider methods for solving this task. For independent sources with nonzero probability density function (pdf) p(s) down to s=0 it is sufficient to find the orthonormal rotation y=Wz of prewhitened sources z=Vx, which minimizes the mean squared error of the reconstruction of z from the rectified version y⁺ of y. We suggest some algorithms which perform this, both based on a nonlinear principal component analysis (PCA) approach and on a geodesic search method driven by differential geometry considerations. We demonstrate the operation of these algorithms on an image separation problem, which shows in particular the fast convergence of the rotation and geodesic methods, and we apply the approach to a musical audio analysis task.
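
A minimal two-source sketch of the rotation criterion described above (our own toy code, using a brute-force angle scan rather than the paper's nonlinear PCA or geodesic algorithms): for an orthonormal rotation y = Wz, the reconstruction error ‖z − Wᵀy⁺‖² equals ‖min(y, 0)‖², so we simply pick the rotation angle that minimizes the rectification error.

import numpy as np

rng = np.random.default_rng(0)
n = 5000
s = rng.exponential(1.0, size=(2, n))            # nonnegative, well-grounded sources
A = np.array([[1.0, 0.6], [0.3, 1.0]])           # mixing matrix
x = A @ s

# prewhiten (z = Vx) using the covariance of the mixtures; do NOT remove the mean,
# so that a pure rotation of z can recover nonnegative sources
C = np.cov(x)
evals, evecs = np.linalg.eigh(C)
V = evecs @ np.diag(evals ** -0.5) @ evecs.T
z = V @ x

def rectification_error(theta):
    W = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    y = W @ z
    # for orthonormal W, ||z - W.T @ maximum(y, 0)||^2 == ||minimum(y, 0)||^2
    return np.mean(np.minimum(y, 0.0) ** 2)

thetas = np.linspace(0, 2 * np.pi, 720, endpoint=False)
best = thetas[np.argmin([rectification_error(t) for t in thetas])]
W = np.array([[np.cos(best), -np.sin(best)], [np.sin(best), np.cos(best)]])
y = W @ z
print("fraction of negative entries in recovered sources:", np.mean(y < 0))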

Journal ArticleDOI
Mark Girolami, Chao He
TL;DR: The Reduced Set Density Estimator is presented: a kernel-based density estimator that employs a small percentage of the available data sample and is optimal in the L₂ sense.
Abstract: The requirement to reduce the computational cost of evaluating a point probability density estimate when employing a Parzen window estimator is a well-known problem. This paper presents the Reduced Set Density Estimator that provides a kernel-based density estimator which employs a small percentage of the available data sample and is optimal in the L₂ sense. While only requiring O(N²) optimization routines to estimate the required kernel weighting coefficients, the proposed method provides similar levels of performance accuracy and sparseness of representation as Support Vector Machine density estimation, which requires O(N³) optimization routines, and which has previously been shown to consistently outperform Gaussian Mixture Models. It is also demonstrated that the proposed density estimator consistently provides superior density estimates for similar levels of data reduction to that provided by the recently proposed Density-Based Multiscale Data Condensation algorithm and, in addition, has comparable computational scaling. The additional advantage of the proposed method is that no extra free parameters are introduced such as regularization, bin width, or condensation ratios, making this method a very simple and straightforward approach to providing a reduced set density estimator with comparable accuracy to that of the full sample Parzen density estimator.
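
As a hedged sketch of the L₂ criterion behind such a reduced-set estimator (our own small-scale illustration, solved with a generic constrained optimizer rather than the paper's optimization routines): a weighted kernel estimator Σᵢ γᵢ K_h(x − xᵢ) is fitted to the full Parzen estimate by minimizing the integrated squared error over the probability simplex, which tends to drive most weights to zero.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.7, 60), rng.normal(1.5, 1.0, 60)])
N, h = len(x), 0.4

# For Gaussian kernels, the integral of K_h(.-xi) K_h(.-xj) is a Gaussian with variance 2h^2,
# so the integrated squared error between the weighted estimator and the Parzen estimate is
#   gamma^T C gamma - 2 gamma^T b + const,  with C_ij below and b_i = mean_j C_ij.
d2 = (x[:, None] - x[None, :]) ** 2
C = np.exp(-d2 / (4 * h ** 2)) / np.sqrt(4 * np.pi * h ** 2)
b = C.mean(axis=1)

obj = lambda g: g @ C @ g - 2 * g @ b
res = minimize(obj, np.full(N, 1.0 / N), jac=lambda g: 2 * (C @ g - b),
               bounds=[(0, None)] * N,
               constraints=[{"type": "eq", "fun": lambda g: g.sum() - 1}],
               method="SLSQP", options={"maxiter": 500})
gamma = res.x
print("kernels kept (weight > 1e-4):", int((gamma > 1e-4).sum()), "of", N)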

Journal ArticleDOI
TL;DR: In this paper, a stochastic mode reduction strategy was applied to three prototype models with nonlinear behavior mimicking several features of low-frequency variability in the extratropical atmosphere.
Abstract: A systematic strategy for stochastic mode reduction is applied here to three prototype “toy” models with nonlinear behavior mimicking several features of low-frequency variability in the extratropical atmosphere. Two of the models involve explicit stable periodic orbits and multiple equilibria in the projected nonlinear climate dynamics. The systematic strategy has two steps: stochastic consistency and stochastic mode elimination. Both aspects of the mode reduction strategy are tested in an a priori fashion in the paper. In all three models the stochastic mode elimination procedure applies in a quantitative fashion for moderately large values of ε ≈ 0.5 or even ε ≈ 1, where the parameter ε roughly measures the ratio of correlation times of unresolved variables to resolved climate variables, even though the procedure is only justified mathematically for ε ≪ 1. The results developed here provide some new perspectives on both the role of stable nonlinear structures in projected nonlinear climate dynamics and the regression fitting strategies for stochastic climate modeling. In one example, a deterministic system with 102 degrees of freedom has an explicit stable periodic orbit for the projected climate dynamics in two variables; however, the complete deterministic system has instead a probability density function with two large isolated peaks on the “ghost” of this periodic orbit, and correlation functions that only weakly “shadow” this periodic orbit. Furthermore, all of these features are predicted in a quantitative fashion by the reduced stochastic model in two variables derived from the systematic theory; this reduced model has multiplicative noise and augmented nonlinearity. In a second deterministic model with 101 degrees of freedom, it is established that stable multiple equilibria in the projected climate dynamics can be either relevant or completely irrelevant in the actual dynamics for the climate variable depending on the strength of nonlinearity and the coupling to the unresolved variables. Furthermore, all this behavior is predicted in a quantitative fashion by a reduced nonlinear stochastic model for a single climate variable with additive noise, which is derived from the systematic mode reduction procedure. Finally, the systematic mode reduction strategy is applied in an idealized context to the stochastic modeling of the effect of mountain torque on the angular momentum budget. Surprisingly, the strategy yields a nonlinear stochastic equation for the large-scale fluctuations, and numerical simulations confirm significantly improved predicted correlation functions from this model compared with a standard linear model with damping and white noise forcing.

Journal ArticleDOI
TL;DR: This paper explores the use of kernel density estimation with the fast Gauss transform (FGT) for problems in vision, presents applications of the technique to image segmentation and tracking, and shows that the algorithm allows advanced statistical techniques to be applied to practical vision problems in real time with today's computers.
Abstract: Many vision algorithms depend on the estimation of a probability density function from observations. Kernel density estimation techniques are quite general and powerful methods for this problem, but have a significant disadvantage in that they are computationally intensive. In this paper, we explore the use of kernel density estimation with the fast Gauss transform (FGT) for problems in vision. The FGT allows the summation of a mixture of M Gaussians at N evaluation points in O(M+N) time, as opposed to O(MN) time for a naive evaluation, and can be used to considerably speed up kernel density estimation. We present applications of the technique to problems from image segmentation and tracking and show that the algorithm allows application of advanced statistical techniques to solve practical vision problems in real time with today's computers.

Journal ArticleDOI
TL;DR: A recently proposed unified scaling law for interoccurrence times of earthquakes is analyzed, both theoretically and with data from Southern California, and fluctuations of the rate show a double power-law distribution and are fundamental to determine the overall behavior.
Abstract: A recently proposed unified scaling law for interoccurrence times of earthquakes is analyzed, both theoretically and with data from Southern California. We decompose the corresponding probability density into local-instantaneous distributions, which scale with the rate of earthquake occurrence. The fluctuations of the rate, characterizing the nonstationarity of the process, show a double power-law distribution and are fundamental to determine the overall behavior, described by a double power law as well.

Journal ArticleDOI
TL;DR: In this article, a statistical damage identification algorithm based on frequency changes is developed to account for the effects of random noise in both the vibration data and finite element model, and the structural stiffness parameters in the intact state and damaged state are derived with a two-stage model updating process.

Journal ArticleDOI
TL;DR: The formalism of the continuous-time random walk is applied to data on the U.S. dollar–Deutsche mark futures exchange, finding good agreement between theory and the observed data.
Abstract: We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability densities for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar–Deutsche mark futures exchange, finding good agreement between theory and the observed data.
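
A minimal simulation sketch of the two auxiliary densities mentioned above (our own illustration; the exponential pausing-time density and Laplace jump density are arbitrary stand-ins, not the distributions fitted to the futures data):

import numpy as np

def ctrw_increment(horizon, mean_wait=1.0, jump_scale=0.01, rng=None):
    """Sum of jumps occurring up to `horizon` for one CTRW realization."""
    rng = rng or np.random.default_rng()
    t, x = 0.0, 0.0
    while True:
        t += rng.exponential(mean_wait)          # pausing time between successive jumps
        if t > horizon:
            return x
        x += rng.laplace(0.0, jump_scale)        # magnitude of a jump

rng = np.random.default_rng(42)
increments = np.array([ctrw_increment(horizon=50.0, rng=rng) for _ in range(10000)])
pdf_est, edges = np.histogram(increments, bins=80, density=True)
print("std of log-price change at the chosen horizon:", increments.std())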

Journal ArticleDOI
TL;DR: This work derives generic formulas for the cumulative distribution function, probability density function, and moment-generating function of the combined signal power for both switch-and-stay combining (SSC) and switch- and-examine combining (SEC) schemes and shows that, in general, the SEC performance improves with additional branches.
Abstract: We investigate the performance of multibranch switched diversity systems. Specifically, we first derive generic formulas for the cumulative distribution function, probability density function, and moment-generating function of the combined signal power for both switch-and-stay combining (SSC) and switch-and-examine combining (SEC) schemes. We then capitalize on these expressions to obtain closed-form expressions for the outage probability and average error rate for various practical communication scenarios of interest. As a byproduct of our analysis we prove that for SSC with identically distributed and uniformly correlated branches, increasing the number of branches to more than two does not improve the performance, but the performance can be different in the case the branches are not identically distributed and/or not uniformly correlated. We also show that, in general, the SEC performance improves with additional branches. The mathematical formalism is illustrated with a number of selected numerical examples.
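
A per-snapshot Monte Carlo comparison of switch-and-stay and switch-and-examine combining (our own toy model with i.i.d. Rayleigh branches and an arbitrary switching threshold, not the paper's closed-form analysis):

import numpy as np

rng = np.random.default_rng(5)
trials, L, mean_snr, T = 200000, 4, 1.0, 0.7       # branches, average SNR, switching threshold
snr = rng.exponential(mean_snr, size=(trials, L))  # i.i.d. Rayleigh fading -> exponential SNR

# Switch-and-stay (two branches): keep branch 1 if acceptable, otherwise switch to
# branch 2 and use it whatever its SNR is.
ssc = np.where(snr[:, 0] >= T, snr[:, 0], snr[:, 1])

# Switch-and-examine (L branches): examine branches in order and use the first
# acceptable one; if none is acceptable, stay on the last examined branch.
sec = snr[:, -1].copy()
decided = np.zeros(trials, dtype=bool)
for k in range(L):
    use = (~decided) & (snr[:, k] >= T)
    sec[use] = snr[use, k]
    decided |= use

gamma_out = 0.5                                    # outage threshold
print("SSC outage:", np.mean(ssc < gamma_out), " SEC outage:", np.mean(sec < gamma_out))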

Journal Article
TL;DR: In this article, a new nonlinear function for independent component analysis of complex-valued signals, used in frequency-domain blind source separation, is presented; the difference between the new and the conventional function lies in the assumed densities of the independent components.
Abstract: This paper presents a new type of nonlinear function for independent component analysis to process complex-valued signals, which is used in frequency-domain blind source separation. The new function is based on the polar coordinates of a complex number, whereas the conventional one is based on the Cartesian coordinates. The new function is derived from the probability density function of frequency-domain signals that are assumed to be independent of the phase. We show that the difference between the two types of functions is in the assumed densities of independent components. Experimental results for separating speech signals show that the new nonlinear function behaves better than the conventional one.
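
The contrast between the two kinds of nonlinearity can be written down directly; the sketch below (our own illustration, with tanh as the scalar nonlinearity) contrasts a polar-coordinate function that acts on the magnitude while preserving the phase with a conventional Cartesian one that processes real and imaginary parts separately.

import numpy as np

def phi_cartesian(y):
    """Conventional nonlinearity applied to real and imaginary parts separately."""
    return np.tanh(y.real) + 1j * np.tanh(y.imag)

def phi_polar(y):
    """Polar nonlinearity: acts on the magnitude only and keeps the phase,
    reflecting a source density assumed to be independent of the phase."""
    return np.tanh(np.abs(y)) * np.exp(1j * np.angle(y))

y = 2.0 * np.exp(1j * np.pi / 3)
print("cartesian:", phi_cartesian(y), " polar:", phi_polar(y))
print("phase preserved by polar form:", np.isclose(np.angle(phi_polar(y)), np.pi / 3))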

Journal ArticleDOI
TL;DR: In this paper, a variance-minimizing filter is introduced and its performance is demonstrated with the Korteweg–de Vries (KdV) equation and with a multilayer quasigeostrophic model of the ocean area around South Africa.
Abstract: A truly variance-minimizing filter is introduced and its performance is demonstrated with the Korteweg–de Vries (KdV) equation and with a multilayer quasigeostrophic model of the ocean area around South Africa. It is recalled that Kalman-like filters are not variance minimizing for nonlinear model dynamics and that four-dimensional variational data assimilation (4DVAR)-like methods relying on perfect model dynamics have difficulty with providing error estimates. The new method does not have these drawbacks. In fact, it combines advantages from both methods in that it does provide error estimates while automatically having balanced states after analysis, without extra computations. It is based on ensemble or Monte Carlo integrations to simulate the probability density of the model evolution. When observations are available, the so-called importance resampling algorithm is applied. From Bayes's theorem it follows that each ensemble member receives a new weight dependent on its “distance” to the observations...
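
A minimal importance-resampling sketch on a toy scalar nonlinear model (our own example, far simpler than the KdV or quasigeostrophic experiments above): ensemble members are weighted by their likelihood under each observation, as dictated by Bayes's theorem, and then resampled in proportion to those weights.

import numpy as np

rng = np.random.default_rng(1)
n_steps, n_particles = 50, 1000
obs_std, model_std = 0.5, 0.3

def propagate(x, k):
    """Toy nonlinear model dynamics with additive model error."""
    return 0.5 * x + 2.0 * np.sin(0.1 * k) + model_std * rng.standard_normal(x.shape)

# synthetic truth and observations
truth = np.zeros(n_steps)
for k in range(1, n_steps):
    truth[k] = 0.5 * truth[k - 1] + 2.0 * np.sin(0.1 * k)
obs = truth + obs_std * rng.standard_normal(n_steps)

particles = rng.standard_normal(n_particles)
estimates = []
for k in range(n_steps):
    particles = propagate(particles, k)
    weights = np.exp(-0.5 * ((obs[k] - particles) / obs_std) ** 2)   # likelihood weights
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))                    # posterior-mean estimate
    idx = rng.choice(n_particles, size=n_particles, p=weights)       # importance resampling
    particles = particles[idx]

print("RMSE of filtered estimate:", np.sqrt(np.mean((np.array(estimates) - truth) ** 2)))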

Journal ArticleDOI
TL;DR: Some new classes of distributions based on the random variable X_t are defined and their interrelations studied; a new ordering based on the mean of X_t is also defined and its relationship with the reversed hazard rate ordering established.
Abstract: If the random variable X denotes the lifetime (X ≥ 0, with probability one) of a unit, then the random variable X_t = (t − X | X ≤ t), for a fixed t > 0, is known as 'time since failure', which is analogous to the residual lifetime random variable used in reliability and survival analysis. The reversed hazard rate function, which is related to the random variable X_t, has received the attention of many researchers in the recent past [cf. Shaked, M., Shanthikumar, J. G. (1994). Stochastic Orders and Their Applications. New York: Academic Press]. In this paper, we define some new classes of distributions based on the random variable X_t and study their interrelations. We also define a new ordering based on the mean of the random variable X_t and establish its relationship with the reversed hazard rate ordering.
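
For concreteness, a small numerical sketch (our own example, using a Weibull lifetime) of the reversed hazard rate r(t) = f(t)/F(t) and of the mean of the time-since-failure variable X_t = (t − X | X ≤ t):

import numpy as np
from scipy.stats import weibull_min
from scipy.integrate import quad

life = weibull_min(c=2.0, scale=1.0)      # example lifetime distribution

def reversed_hazard(t):
    """r(t) = f(t) / F(t), the reversed hazard rate."""
    return life.pdf(t) / life.cdf(t)

def mean_time_since_failure(t):
    """E[t - X | X <= t] = t - E[X | X <= t]."""
    num, _ = quad(lambda x: x * life.pdf(x), 0, t)
    return t - num / life.cdf(t)

for t in (0.5, 1.0, 2.0):
    print(t, reversed_hazard(t), mean_time_since_failure(t))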

Journal ArticleDOI
Andrew C. Lorenc
TL;DR: In this paper, incremental four-dimensional variational data assimilation (4D-Var) is derived as a practical implementation of the extended Kalman filter, optimally using these modelled covariances for a finite time window.
Abstract: The extended Kalman filter is presented as a good approximation to the optimal assimilation of observations into a numerical weather prediction (NWP) model, as long as the evolution of errors stays close to linear. The error probability distributions are approximated by Gaussians, characterized by their mean and covariance. The full nonlinear forecast model is used to propagate the mean, and a linear model (not necessarily tangent to the full model) the covariances. Since it is impossible to determine the covariances in detail, physically based assumptions about their behaviour must be made; for instance, three-dimensional balance relationships are used. The linear model can be thought of as extending the covariance relationships to the time dimension. Incremental four-dimensional variational (4D-Var) is derived as a practical implementation of the extended Kalman filter, optimally using these modelled covariances for a finite time window. It is easy to include a simplified model of forecast errors in the representation. This Kalman filter based paradigm differs from more traditional derivations of 4D-Var in attempting to estimate the mean, rather than the mode, of the posterior probability density function. The latter is difficult for a NWP system representing scales which exhibit chaotic behaviour over the period of interest. The covariance modelling assumptions often result in a null space of error modes with little variance. It is argued that this is as important as the variance and correlation structures usually examined, since the implied constraints allow optimal use of observations giving gradient and tendency information. Difficulties arise in the approach when the NWP system is capable of resolving significant structures (such as convective cells) not always determined by the observations.

Journal ArticleDOI
TL;DR: An efficient approach for the evaluation of the Nakagami-m (1960) multivariate probability density function (PDF) and cumulative distribution function (CDF) with arbitrary correlation is presented and useful closed formulas are derived.
Abstract: An efficient approach for the evaluation of the Nakagami-m (1960) multivariate probability density function (PDF) and cumulative distribution function (CDF) with arbitrary correlation is presented. Approximating the correlation matrix with a Green's matrix, useful closed-form expressions for the joint Nakagami-m PDF and CDF are derived. The proposed approach is a significant theoretical tool that can be efficiently used in the performance analysis of wireless communications systems operating over correlated Nakagami-m fading channels.

Journal ArticleDOI
TL;DR: A goodness-of-fit method is discussed that tests the compatibility between statistically independent data sets and avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters.
Abstract: We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ² minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistics is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.

Journal ArticleDOI
TL;DR: In this paper, two collision models for the collision rate of inertial particles immersed in homogeneous isotropic turbulence are compared and the merits and demerits of several known collision models are discussed.
Abstract: The objective of the paper is to present and compare two models for the collision rate of inertial particles immersed in homogeneous isotropic turbulence. The merits and demerits of several known collision models are discussed. One of the models proposed in the paper is based on the assumption that the velocities of the fluid and a particle obey a correlated Gaussian distribution. The other model stems from a kinetic equation for the probability density function of the relative velocity distribution of two particles. The predictions obtained by means of these two models are compared with numerical simulations published in the literature.

Journal ArticleDOI
03 Nov 2003
TL;DR: In this paper, the authors analyzed a set of random frequency modulated (FM) signals for wideband radar imaging and assessed their resolution capability and sidelobe distribution on the range-Doppler plane.
Abstract: The authors analysed a set of random frequency modulated (FM) signals for wideband radar imaging and assessed their resolution capability and sidelobe distribution on the range–Doppler plane. To this effect deterministic, bounded, nonlinear iterated maps were first considered. The initial condition of each chaotic map was assigned to a random variable to obtain statistically independent samples with invariant probability density function. The resulting sequences, which have white time–frequency representations, are used to construct wideband stochastic FM signals. These FM signals are ergodic and stationary. The autocorrelation, spectrum and the ambiguity surface associated with each of the FM signals were characterised. It was also demonstrated that the ambiguity surface of an FM signal generated via a chaotic map with uniform sample distribution and tail-shifted chaotic attractor is comparable to the ambiguity function of a Gaussian FM signal.

Journal ArticleDOI
TL;DR: In this article, the authors generate skew probability density functions (pdfs) of the form 2f(u)G(λu), where f is taken to be a normal pdf while the cumulative distribution function G is taken to be that of a normal, Student's t, Cauchy, Laplace, logistic, or uniform distribution.
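
A quick numerical sketch of the 2f(u)G(λu) construction (our own check; λ and the uniform support are arbitrary choices): with f the standard normal pdf and G the cdf of a distribution symmetric about zero, each resulting skew pdf should integrate to one.

import numpy as np
from scipy import stats
from scipy.integrate import quad

f = stats.norm.pdf                       # symmetric base density
cdfs = {"normal": stats.norm.cdf,        # choices of G from the families listed above
        "student t (3 df)": lambda u: stats.t.cdf(u, df=3),
        "cauchy": stats.cauchy.cdf,
        "laplace": stats.laplace.cdf,
        "logistic": stats.logistic.cdf,
        "uniform": lambda u: stats.uniform.cdf(u, loc=-1, scale=2)}

lam = 2.0
for name, G in cdfs.items():
    skew_pdf = lambda u, G=G: 2.0 * f(u) * G(lam * u)
    total, _ = quad(skew_pdf, -np.inf, np.inf)
    print(f"{name:18s} integral = {total:.4f}")   # each should be ~1 when G is symmetric about 0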

Journal ArticleDOI
TL;DR: In this article, a statistical analysis of the length of the total local skeleton of a 2D random field is presented; the total skeleton length could be used to constrain cosmological models, in CMB maps but also in 3D galaxy catalogs.
Abstract: We discuss the skeleton as a probe of the filamentary structures of a 2D random field. It can be defined for a smooth field as the ensemble of pairs of field lines departing from saddle points, initially aligned with the major axis of local curvature and connecting them to local maxima. This definition is thus non local and makes analytical predictions difficult, so we propose a local approximation: the local skeleton is given by the set of points where the gradient is aligned with the local curvature major axis and where the second component of the local curvature is negative. We perform a statistical analysis of the length of the total local skeleton, chosen for simplicity as the set of all points of space where the gradient is either parallel or orthogonal to the main curvature axis. In all our numerical experiments, which include Gaussian and various non-Gaussian realizations such as χ² fields and Zel'dovich maps, the differential length is found within a normalization factor to be very close to the probability distribution function of the smoothed field. This is in fact explicitly demonstrated in the Gaussian case. This result might be discouraging for using the skeleton as a probe of non-Gaussianity, but our analyses assume that the total length of the skeleton is a free, adjustable parameter. This total length could in fact be used to constrain cosmological models, in CMB maps but also in 3D galaxy catalogs, where it estimates the total length of filaments in the Universe. Making the link with other works, we also show how the skeleton can be used to study the dynamics of large scale structure.

Proceedings ArticleDOI
01 Apr 2003
TL;DR: In this paper, a new framework for state estimation based on progressive processing is proposed, where the original problem is exactly converted into a corresponding system of explicit ordinary first-order differential equations.
Abstract: This paper is concerned with recursively estimating the internal state of a nonlinear dynamic system by processing noisy measurements and the known system input. In the case of continuous states, an exact analytic representation of the probability density characterizing the estimate is generally too complex for recursive estimation or even impossible to obtain. Hence, it is replaced by a convenient type of approximate density characterized by a finite set of parameters. Of course, parameters are desired that systematically minimize a given measure of deviation between the (often unknown) exact density and its approximation, which in general leads to a complicated optimization problem. Here, a new framework for state estimation based on progressive processing is proposed. Rather than trying to solve the original problem, it is exactly converted into a corresponding system of explicit ordinary first-order differential equations. Solving this system over a finite "time" interval yields the desired optimal density parameters.

Journal ArticleDOI
TL;DR: In this article, the authors developed a statistically rigorous method for enforcing function nonnegativity in Bayesian inverse problems, which behaves similarly to a Gaussian process with a linear variogram for parameter values significantly greater than zero.
Abstract: When an inverse problem is solved to estimate an unknown function such as the hydraulic conductivity in an aquifer or the contamination history at a site, one constraint is that the unknown function is known to be everywhere nonnegative. In this work, we develop a statistically rigorous method for enforcing function nonnegativity in Bayesian inverse problems. The proposed method behaves similarly to a Gaussian process with a linear variogram (i.e., unrestricted Brownian motion) for parameter values significantly greater than zero. The method uses the method of images to define a prior probability density function based on reflected Brownian motion that implicitly enforces nonnegativity. This work focuses on problems where the unknown is a function of a single variable (e.g., time). A Markov chain Monte Carlo (MCMC) method, specifically, a highly efficient Gibbs sampler, is implemented to generate conditional realizations of the unknown function. The new method is applied to the estimation of the trichloroethylene (TCE) and perchloroethylene (PCE) contamination history in an aquifer at Dover Air Force Base, Delaware, based on concentration profiles obtained from an underlying aquitard.
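
A one-dimensional sanity-check sketch of the method-of-images idea used above (our own toy calculation, not the paper's Gibbs sampler): the transition density of Brownian motion reflected at zero is the free Gaussian plus its mirror image, which is what implicitly enforces nonnegativity, and it matches a histogram of |x0 + W_t|.

import numpy as np
from scipy.stats import norm

x0, t, n = 1.0, 0.5, 200000
rng = np.random.default_rng(0)
samples = np.abs(x0 + np.sqrt(t) * rng.standard_normal(n))   # reflected Brownian motion at time t

def image_density(x, x0, t):
    """Method-of-images transition density of Brownian motion reflected at 0."""
    s = np.sqrt(t)
    return norm.pdf(x, loc=x0, scale=s) + norm.pdf(x, loc=-x0, scale=s)

x = np.linspace(0, 4, 9)
hist, edges = np.histogram(samples, bins=200, range=(0, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
empirical = np.interp(x, centers, hist)
print(np.round(image_density(x, x0, t), 3))
print(np.round(empirical, 3))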