
Showing papers on "Probability density function" published in 2006


Journal ArticleDOI
TL;DR: Under linear, Gaussian assumptions on the target dynamics and birth process, the posterior intensity at any time step is a Gaussian mixture, and closed-form recursions for propagating the means, covariances, and weights of the constituent Gaussian components of the posterior intensity are derived.
Abstract: A new recursive algorithm is proposed for jointly estimating the time-varying number of targets and their states from a sequence of observation sets in the presence of data association uncertainty, detection uncertainty, noise, and false alarms. The approach involves modelling the respective collections of targets and measurements as random finite sets and applying the probability hypothesis density (PHD) recursion to propagate the posterior intensity, which is a first-order statistic of the random finite set of targets, in time. At present, there is no closed-form solution to the PHD recursion. This paper shows that under linear, Gaussian assumptions on the target dynamics and birth process, the posterior intensity at any time step is a Gaussian mixture. More importantly, closed-form recursions for propagating the means, covariances, and weights of the constituent Gaussian components of the posterior intensity are derived. The proposed algorithm combines these recursions with a strategy for managing the number of Gaussian components to increase efficiency. This algorithm is extended to accommodate mildly nonlinear target dynamics using approximation strategies from the extended and unscented Kalman filters.
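
As a rough, minimal sketch of the kind of closed-form recursion described above (not the authors' implementation: birth components, pruning, and merging are omitted, and all model matrices, probabilities, and the clutter density below are placeholder assumptions), one linear-Gaussian predict/update step over a Gaussian-mixture intensity could look like this:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gm_phd_step(weights, means, covs, measurements,
                F, Q, H, R, p_survive=0.99, p_detect=0.9, clutter_density=1e-4):
    """One simplified linear-Gaussian PHD predict/update over a Gaussian-mixture
    intensity. Birth components and mixture pruning/merging are omitted."""
    # Prediction: propagate each component through the linear dynamics.
    w_pred = [p_survive * w for w in weights]
    m_pred = [F @ m for m in means]
    P_pred = [F @ P @ F.T + Q for P in covs]

    # Missed-detection terms keep the predicted components with reduced weight.
    w_upd = [(1.0 - p_detect) * w for w in w_pred]
    m_upd = list(m_pred)
    P_upd = list(P_pred)

    # Measurement-updated terms: one new component per (measurement, component) pair.
    for z in measurements:
        new_w = []
        for w, m, P in zip(w_pred, m_pred, P_pred):
            S = H @ P @ H.T + R                      # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            lik = multivariate_normal.pdf(z, mean=H @ m, cov=S)
            new_w.append(p_detect * w * lik)
            m_upd.append(m + K @ (z - H @ m))
            P_upd.append((np.eye(len(m)) - K @ H) @ P)
        # Normalise the new weights against clutter plus total detection mass.
        denom = clutter_density + sum(new_w)
        w_upd.extend([w / denom for w in new_w])
    return w_upd, m_upd, P_upd
```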

1,805 citations


Book ChapterDOI
01 Jan 2006
TL;DR: A new wavelet based approach is described to separate the trend from the fluctuations in a time series, and a deterministic (non-linear regression) model is constructed for the trend using genetic algorithm.
Abstract: Financial time series, in general, exhibit average behaviour at "long" time scales and stochastic behaviour at "short" time scales. As in statistical physics, the two have to be modelled using different approaches: deterministic for trends and probabilistic for fluctuations about the trend. In this talk, we describe a new wavelet-based approach to separate the trend from the fluctuations in a time series. A deterministic (non-linear regression) model is then constructed for the trend using a genetic algorithm. We thereby obtain an explicit analytic model describing the dynamics of the trend. Further, the model is used to make predictions of the trend. We also study the statistical and scaling properties of the fluctuations. The fluctuations have a non-Gaussian probability distribution function and show multi-scaling behaviour. Thus, our work results in a comprehensive model of the trends and fluctuations of a financial time series.
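
A minimal sketch of the trend/fluctuation split, assuming the PyWavelets package and an arbitrary wavelet and decomposition depth (the abstract specifies neither), is:

```python
import numpy as np
import pywt

def wavelet_trend_split(series, wavelet="db4", level=5):
    """Split a series into a smooth trend (approximation coefficients only)
    and fluctuations (detail coefficients only) via the discrete wavelet transform."""
    series = np.asarray(series, float)
    coeffs = pywt.wavedec(series, wavelet, level=level)
    # Trend: keep the coarse approximation, zero out all detail coefficients.
    trend_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    trend = pywt.waverec(trend_coeffs, wavelet)[: len(series)]
    fluctuations = series - trend
    return trend, fluctuations
```

The genetic-algorithm regression on the trend and the multi-scaling analysis of the fluctuations would then operate on the two returned series.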

498 citations


Journal ArticleDOI
TL;DR: In this article, an auxiliary variable method is presented which requires only that independent samples can be drawn from the unnormalised density at any particular parameter value, and is illustrated by producing posterior samples for parameters of the Ising model given a particular lattice realisation.
Abstract: Maximum likelihood parameter estimation and sampling from Bayesian posterior distributions are problematic when the probability density for the parameter of interest involves an intractable normalising constant which is also a function of that parameter. In this paper, an auxiliary variable method is presented which requires only that independent samples can be drawn from the unnormalised density at any particular parameter value. The proposal distribution is constructed so that the normalising constant cancels from the Metropolis-Hastings ratio. The method is illustrated by producing posterior samples for parameters of the Ising model given a particular lattice realisation.
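
In generic notation (ours, not the paper's), write the unnormalised likelihood as q_theta(y)/Z(theta) with prior p(theta), augment the target with an auxiliary variable x on the data space having density f(x|theta, y), and propose x' by exact simulation from q_{theta'}(.)/Z(theta'). The Metropolis-Hastings acceptance probability then takes the form

\[
\alpha = \min\left\{1,\;
\frac{f(x'\mid\theta',y)\,q_{\theta'}(y)\,p(\theta')/Z(\theta')}
     {f(x\mid\theta,y)\,q_{\theta}(y)\,p(\theta)/Z(\theta)}
\cdot
\frac{\mathrm{prop}(\theta\mid\theta')\,q_{\theta}(x)/Z(\theta)}
     {\mathrm{prop}(\theta'\mid\theta)\,q_{\theta'}(x')/Z(\theta')}
\right\},
\]

in which both Z(theta) and Z(theta') cancel between the target and proposal terms, so the ratio can be evaluated without ever computing the normalising constant.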

425 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a model in which firms' ability to take up new business opportunities increases with the number of opportunities already exploited, and showed that even in a small industry the agreement with asymptotic results is almost complete.
Abstract: Empirical analyses on aggregated datasets have revealed a common exponential behavior in the shape of the probability density of corporate growth rates. We present clear-cut evidence on this topic using disaggregated data. We explain the observed regularities proposing a model in which firms' ability to take up new business opportunities increases with the number of opportunities already exploited. A theoretical result is presented for the limiting case in which the number of firms and opportunities go to infinity. Moreover, using simulations, we show that even in a small industry the agreement with asymptotic results is almost complete.

379 citations


Journal ArticleDOI
TL;DR: In this paper, the performance of multihop transmissions with non-regenerative relays over not necessarily identically distributed Nakagami-m fading channels is studied, and the end-to-end signal-to-noise ratio is upper bounded by using an inequality between the harmonic and geometric means of positive random variables (RVs).
Abstract: We present closed-form lower bounds for the performance of multihop transmissions with nonregenerative relays over not necessarily identically distributed Nakagami-m fading channels. The end-to-end signal-to-noise ratio is formulated and upper bounded by using an inequality between harmonic and geometric means of positive random variables (RVs). Novel closed-form expressions are derived for the moment generating function, the probability density function, and the cumulative distribution function of the product of rational powers of statistically independent Gamma RVs. These statistical results are then applied to studying the outage probability and the average bit-error probability for phase- and frequency-modulated signaling. Numerical examples compare analytical and simulation results, verifying the tightness of the proposed bounds.
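
The inequality itself is easy to check numerically. The snippet below is a sketch with made-up hop parameters, using a common harmonic-mean-type formulation of the end-to-end SNR rather than the paper's exact expressions; it draws squared-Nakagami (i.e., Gamma-distributed) per-hop SNRs and confirms that the geometric-mean bound is never violated:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_trials = 3, 100_000                      # hops, Monte Carlo samples
m = np.array([1.5, 2.0, 3.0])                 # Nakagami-m fading parameters per hop
mean_snr = np.array([5.0, 8.0, 10.0])         # average per-hop SNRs (linear scale)

# Per-hop instantaneous SNRs are Gamma-distributed (squared Nakagami-m envelopes).
gamma_hops = rng.gamma(shape=m, scale=mean_snr / m, size=(n_trials, N))

# Harmonic-mean-type end-to-end SNR of a non-regenerative multihop link, and the
# geometric-mean upper bound (harmonic mean <= geometric mean for positive RVs).
snr_end = 1.0 / np.sum(1.0 / gamma_hops, axis=1)
bound = np.prod(gamma_hops, axis=1) ** (1.0 / N) / N

print("P(end-to-end SNR <= bound):", np.mean(snr_end <= bound))  # should print 1.0
```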

371 citations


Journal ArticleDOI
TL;DR: In this article, the authors develop a multivariate statistical approach to the peak, volume, and duration of a flood event, defining the trivariate probability density and cumulative distribution functions, and compare the results obtained using distributions built with a symmetric copula against those from the standard Gumbel logistic model.

357 citations


Journal ArticleDOI
TL;DR: Experimental results and statistical models of the induced ordering are presented and several applications are discussed: image enhancement, normalization, watermarking, etc.
Abstract: While in the continuous case, statistical models of histogram equalization/specification would yield exact results, their discrete counterparts fail. This is due to the fact that the cumulative distribution functions one deals with are not exactly invertible. Otherwise stated, exact histogram specification for discrete images is an ill-posed problem. Invertible cumulative distribution functions are obtained by translating the problem in a K-dimensional space and further inducing a strict ordering among image pixels. The proposed ordering refines the natural one. Experimental results and statistical models of the induced ordering are presented and several applications are discussed: image enhancement, normalization, watermarking, etc.
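
A simplified sketch of the idea, assuming NumPy/SciPy and using only a single auxiliary key (a 3x3 local mean) instead of the paper's K-dimensional ordering, is shown below; target_hist is assumed to be an integer array summing to the number of pixels:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def exact_histogram_specification(image, target_hist):
    """Assign grey levels so the output histogram matches target_hist exactly.
    Simplified sketch: ties in pixel value are broken by a 3x3 local mean
    (the original method uses a K-dimensional stack of such auxiliary keys)."""
    flat = image.ravel().astype(float)
    local_mean = uniform_filter(image.astype(float), size=3).ravel()
    # Lexicographic order: primary key = pixel value, secondary key = local mean.
    order = np.lexsort((local_mean, flat))

    # Output grey levels implied by the target histogram, in increasing order.
    levels = np.repeat(np.arange(len(target_hist)), target_hist)
    out = np.empty_like(flat)
    out[order] = levels                  # i-th pixel in the ordering gets the i-th level
    return out.reshape(image.shape).astype(image.dtype)
```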

339 citations


Journal ArticleDOI
TL;DR: It is illustrated how misclassifying units as unused may lead to incorrect conclusions about resource use, and how recently developed occupancy models can be utilized within the resource-selection context to improve conclusions by explicitly accounting for detection probability.
Abstract: Resource-selection probability functions and occupancy models are powerful methods of identifying areas within a landscape that are highly used by a species. One common design/analysis method for estimation of a resource-selection probability function is to classify a sample of units as used or unused and estimate the probability of use as a function of independent variables using, for example, logistic regression. This method requires that resource units are correctly classified as unused (i.e., the species is never undetected in a used unit), or that the probability of misclassification is the same for all units. In this paper, I explore these issues, illustrating how misclassifying units as unused may lead to incorrect conclusions about resource use. I also show how recently developed occupancy models can be utilized within the resource-selection context to improve conclusions by explicitly accounting for detection probability. These models require that multiple surveys be conducted at each of a sample of resource units within a relatively short timeframe, but given the growing evidence from simulation studies and field data, I recommend that such procedures should be incorporated into studies of resource use.
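
For concreteness, a minimal single-season occupancy likelihood with constant occupancy probability psi and constant detection probability p (a bare-bones special case, not the full covariate models discussed in the paper) can be written as:

```python
import numpy as np

def occupancy_neg_log_lik(params, detection_histories):
    """Negative log-likelihood of a single-season occupancy model with constant
    occupancy probability psi and detection probability p (logit-parameterised).
    detection_histories: array of shape (n_units, n_surveys) of 0/1 detections."""
    psi, p = 1.0 / (1.0 + np.exp(-np.asarray(params, float)))   # inverse-logit
    y = np.asarray(detection_histories)
    detections = y.sum(axis=1)
    n_surveys = y.shape[1]
    # Units with at least one detection are certainly occupied; all-zero
    # histories may be occupied-but-missed or genuinely unoccupied.
    p_hist = psi * p ** detections * (1 - p) ** (n_surveys - detections)
    p_hist = np.where(detections == 0, p_hist + (1 - psi), p_hist)
    return -np.sum(np.log(p_hist))
```

Minimising this (for example with scipy.optimize.minimize) yields occupancy estimates that account for imperfect detection, in contrast with the used/unused logistic regression.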

322 citations


Journal ArticleDOI
TL;DR: In this paper, a two-point estimate method (2PEM) is proposed to account for uncertainties in the optimal power flow (OPF) problem in the context of competitive electricity markets, where uncertainties can be seen as a by-product of the economic pressure that forces market participants to behave in an "unpredictable" manner; hence, probability distributions of locational marginal prices are calculated as a result.
Abstract: This paper presents an application of a two-point estimate method (2PEM) to account for uncertainties in the optimal power flow (OPF) problem in the context of competitive electricity markets. These uncertainties can be seen as a by-product of the economic pressure that forces market participants to behave in an "unpredictable" manner; hence, probability distributions of locational marginal prices are calculated as a result. Instead of using computationally demanding methods, the proposed approach needs 2n runs of the deterministic OPF for n uncertain variables to get the result in terms of the first three moments of the corresponding probability density functions. Another advantage of the 2PEM is that it does not require derivatives of the nonlinear function used in the computation of the probability distributions. The proposed method is tested on a simple three-bus test system and on a more realistic 129-bus test system. Results are compared against more accurate results obtained from Monte Carlo simulation (MCS). The proposed method demonstrates a high level of accuracy for mean values when compared to the MCS; for standard deviations, the results are better in those cases when the number of uncertain variables is relatively low or when their dispersion is not large.
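
A sketch of the zero-skewness special case of a two-point estimate scheme (concentration points at mu_k +/- sqrt(n)*sigma_k, each with weight 1/(2n); the paper's scheme additionally accounts for input skewness) is given below. Here h stands for the deterministic mapping, e.g., an OPF solve returning a locational marginal price:

```python
import numpy as np

def two_point_estimate(h, mu, sigma):
    """Zero-skewness two-point estimate of the first two moments of Y = h(X)
    with n independent uncertain inputs: 2n deterministic evaluations of h,
    at mu_k +/- sqrt(n)*sigma_k, each carrying weight 1/(2n)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    n = len(mu)
    m1 = m2 = 0.0
    for k in range(n):
        for sign in (+1.0, -1.0):
            x = mu.copy()
            x[k] += sign * np.sqrt(n) * sigma[k]   # concentration point for input k
            y = h(x)                               # one deterministic "OPF" run
            m1 += y / (2 * n)
            m2 += y ** 2 / (2 * n)
    mean, std = m1, np.sqrt(max(m2 - m1 ** 2, 0.0))
    return mean, std
```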

314 citations


Journal ArticleDOI
TL;DR: In this article, a general theory is proposed for the statistical correction of weather forecasts based on observed analogs, and several approximate analog techniques are then tested for their ability to skillfully calibrate probabilistic forecasts of 24-h precipitation amount.
Abstract: A general theory is proposed for the statistical correction of weather forecasts based on observed analogs. An estimate is sought for the probability density function (pdf) of the observed state, given today’s numerical forecast. Assume that an infinite set of reforecasts (hindcasts) and associated observations are available and that the climate is stable. Assume that it is possible to find a set of past model forecast states that are nearly identical to the current forecast state. With the dates of these past forecasts, the asymptotically correct probabilistic forecast can be formed from the distribution of observed states on those dates. Unfortunately, this general theory of analogs is not useful for estimating the global pdf with a limited set of reforecasts, for the chance of finding even one effectively identical forecast analog in that limited set is vanishingly small, and the climate is not stable. Nonetheless, approximations can be made to this theory to make it useful for statistically correcting weather forecasts. For instance, when estimating the state in a local region, choose the dates of analogs based on a pattern match of the local weather forecast; with a few decades of reforecasts, there are usually many close analogs. Several approximate analog techniques are then tested for their ability to skillfully calibrate probabilistic forecasts of 24-h precipitation amount. A 25-yr set of reforecasts from a reduced-resolution global forecast model is used. The analog techniques find past ensemble-mean forecasts in a local region that are similar to today’s ensemble-mean forecasts in that region. Probabilistic forecasts are formed from the analyzed weather on the dates of the past analogs. All of the analog techniques provide dramatic improvements in the Brier skill score relative to basing probabilities on the raw ensemble counts or the counts corrected for bias. However, the analog techniques did not produce guidance that was much more skillful than that produced by a logistic regression technique. Among the analog techniques tested, it was determined that small improvements to the baseline analog technique that matches ensemble-mean precipitation forecasts are possible. Forecast skill can be improved slightly by matching the ranks of the mean forecasts rather than the raw mean forecasts by using highly localized search regions for shorter-term forecasts and larger search regions for longer forecasts, by matching precipitable water in addition to precipitation amount, and by spatially smoothing the probabilities.
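
A bare-bones version of the baseline analog technique (illustrative only; the actual implementation matches local patterns, ranks, and additional predictors as described above) might look like:

```python
import numpy as np

def analog_probability(todays_forecast, reforecasts, analyses, threshold, n_analogs=50):
    """Probability that observed precipitation exceeds `threshold`, estimated from
    the analyzed values on the dates of the closest past forecasts.
    reforecasts: (n_dates, n_gridpoints) local ensemble-mean forecast patterns;
    analyses: (n_dates,) analyzed precipitation at the verification point."""
    # Rank past forecast patterns by RMS distance to today's forecast pattern.
    dist = np.sqrt(np.mean((reforecasts - todays_forecast) ** 2, axis=1))
    analog_dates = np.argsort(dist)[:n_analogs]
    # The calibrated probability is the relative frequency among the analog analyses.
    return np.mean(analyses[analog_dates] > threshold)
```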

271 citations


Book ChapterDOI
05 Mar 2006
TL;DR: This work proposes a new framework for separation of the whole spectrograms instead of the conventional binwise separation, and demonstrates a gradient-based algorithm using multivariate activation functions derived from the PDFs.
Abstract: Conventional Independent Component Analysis (ICA) in frequency domain inherently causes the permutation problem. To solve the problem fundamentally, we propose a new framework for separation of the whole spectrograms instead of the conventional binwise separation. Under our framework, a measure of independence is calculated from the whole spectrograms, not individual frequency bins. For the calculation, we introduce some multivariate probability density functions (PDFs) which take a spectrum as arguments. To seek the unmixing matrix that makes spectrograms independent, we demonstrate a gradient-based algorithm using multivariate activation functions derived from the PDFs. Through experiments using real sound data, we have confirmed that our framework is effective to generate permutation-free unmixed results.
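
A sketch of one natural-gradient update for this kind of whole-spectrogram separation, using a spherically dependent (multivariate) activation function, is shown below; the specific activation phi(y) = y_f / ||y|| and the step size are assumptions, not necessarily the paper's exact choices:

```python
import numpy as np

def iva_gradient_step(W, X, mu=0.1):
    """One natural-gradient update of per-frequency unmixing matrices W[f] using a
    multivariate activation that couples all frequency bins of each source.
    W: (F, K, K) unmixing matrices; X: (F, K, T) observed spectrograms."""
    F, K, T = X.shape
    Y = np.einsum("fkl,flt->fkt", W, X)                        # separated spectrograms
    norms = np.sqrt(np.sum(np.abs(Y) ** 2, axis=0)) + 1e-12    # per-source, per-frame norm over all bins
    phi = Y / norms                                            # multivariate (spherical) activation
    for f in range(F):
        corr = phi[f] @ Y[f].conj().T / T                      # E[phi(y)_f y_f^H] at this bin
        W[f] += mu * (np.eye(K) - corr) @ W[f]                 # natural-gradient direction
    return W, Y
```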

Journal ArticleDOI
TL;DR: In this article, a new Probabilistic Sensitivity Analysis (PSA) approach based on the concept of relative entropy is proposed for design under uncertainty, which can be applied both over the whole distribution of a performance response and in any interested partial range of a response distribution.
Abstract: In this paper, a new Probabilistic Sensitivity Analysis (PSA) approach based on the concept of relative entropy is proposed for design under uncertainty. The relative entropy based method evaluates the impact of a random variable on a design performance by measuring the divergence between two probability density functions of the performance response, obtained before and after the variation reduction of the random variable. The method can be applied both over the whole distribution of a performance response [called global response probabilistic sensitivity analysis (GRPSA)] and in any partial range of interest of a response distribution [called regional response probabilistic sensitivity analysis (RRPSA)]. Such flexibility of our approach facilitates its use under various scenarios of design under uncertainty, for instance in robust design, reliability-based design, and utility optimization. The proposed method is applicable to both the prior-design stage for variable screening when a design solution is not yet identified and the post-design stage for uncertainty reduction after an optimal design has been determined. The saddlepoint approximation approach is introduced for improving the computational efficiency of applying our proposed method. The proposed method is illustrated and verified by numerical examples and industrial design cases.
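
A Monte Carlo sketch of the core quantity (the relative entropy between the response density before and after shrinking one input's variation), using Gaussian kernel density estimates instead of the paper's saddlepoint approximation and assuming independent Gaussian inputs, is:

```python
import numpy as np
from scipy.stats import gaussian_kde

def relative_entropy_sensitivity(model, means, stds, k, shrink=0.5, n=20_000, seed=0):
    """KL divergence between the densities of Y = model(X) obtained before and
    after reducing the standard deviation of input k by the factor `shrink`."""
    rng = np.random.default_rng(seed)
    means, stds = np.asarray(means, float), np.asarray(stds, float)

    def response_sample(scale_k):
        s = stds.copy()
        s[k] *= scale_k
        x = rng.normal(means, s, size=(n, len(means)))
        return np.apply_along_axis(model, 1, x)

    y_full, y_reduced = response_sample(1.0), response_sample(shrink)
    p, q = gaussian_kde(y_full), gaussian_kde(y_reduced)
    # Monte Carlo estimate of KL(p || q) using the samples drawn from p.
    return np.mean(np.log(p(y_full) / np.maximum(q(y_full), 1e-300)))
```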

Journal ArticleDOI
TL;DR: The proposed formulation significantly improves previously published results, which are either in the form of infinite sums or higher order derivatives of the fading parameter m, and can be applied to the performance analysis of diversity combining receivers operating over Nakagami-m fading channels.
Abstract: We present closed-form expressions for the probability density function (PDF) and the cumulative distribution function (CDF) of the sum of non-identical squared Nakagami-m random variables (RVs) with integer-order fading parameters. As is shown, they can be written as a weighted sum of Erlang PDFs and CDFs, respectively, while the analysis includes both independent and correlated sums of RVs. The proposed formulation significantly improves previously published results, which are either in the form of infinite sums or higher order derivatives of the fading parameter m. The obtained formulas can be applied to the performance analysis of diversity combining receivers operating over Nakagami-m fading channels.

Journal ArticleDOI
TL;DR: In this paper, the authors present two applications with tick-by-tick stock and futures data, where the probability density functions for this limit process are solved to yield descriptions of long-term price changes, based on a high-resolution model of individual trades.
Abstract: Continuous time random walks (CTRWs) are used in physics to model anomalous diffusion, by incorporating a random waiting time between particle jumps. In finance, the particle jumps are log-returns and the waiting times measure delay between transactions. These two random variables (log-return and waiting time) are typically not independent. For these coupled CTRW models, we can now compute the limiting stochastic process (just like Brownian motion is the limit of a simple random walk), even in the case of heavy tailed (power-law) price jumps and/or waiting times. The probability density functions for this limit process solve fractional partial differential equations. In some cases, these equations can be explicitly solved to yield descriptions of long-term price changes, based on a high-resolution model of individual trades that includes the statistical dependence between waiting times and the subsequent log-returns. In the heavy tailed case, this involves operator stable space-time random vectors that generalize the familiar stable models. In this paper, we will review the fundamental theory and present two applications with tick-by-tick stock and futures data.
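
As a toy illustration only (the tail exponents, the Student-t jump law, and the particular form of coupling below are placeholders, not the operator-stable models of the paper), a coupled CTRW price path can be simulated as:

```python
import numpy as np

rng = np.random.default_rng(1)
n_jumps = 10_000
alpha, beta = 1.7, 0.9   # heavy-tail exponents for jumps and waiting times (placeholders)

# Heavy-tailed waiting times (Pareto), and log-returns whose scale grows with the
# preceding waiting time -- a simple way to couple the two random variables.
waits = rng.pareto(beta, n_jumps) + 1.0
jumps = rng.standard_t(alpha, n_jumps) * np.sqrt(waits)

times = np.cumsum(waits)          # transaction times
log_price = np.cumsum(jumps)      # log-price path sampled at transaction times
```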

Journal ArticleDOI
TL;DR: In this paper, the authors considered the probability distribution of sea surface wind speeds, w, and derived empirical expressions for the probability density function of w in terms of the mean and standard deviation of the vector wind.
Abstract: The probability distribution of sea surface wind speeds, w, is considered. Daily SeaWinds scatterometer observations are used for the characterization of the moments of sea surface winds on a global scale. These observations confirm the results of earlier studies, which found that the two-parameter Weibull distribution provides a good (but not perfect) approximation to the probability density function of w. In particular, the observed and Weibull probability distributions share the feature that the skewness of w is a concave upward function of the ratio of the mean of w to its standard deviation. The skewness of w is positive where the ratio is relatively small (such as over the extratropical Northern Hemisphere), the skewness is close to zero where the ratio is intermediate (such as the Southern Ocean), and the skewness is negative where the ratio is relatively large (such as the equatorward flank of the subtropical highs). An analytic expression for the probability density function of w, derived from a simple stochastic model of the atmospheric boundary layer, is shown to be in good qualitative agreement with the observed relationships between the moments of w. Empirical expressions for the probability distribution of w in terms of the mean and standard deviation of the vector wind are derived using Gram–Charlier expansions of the joint distribution of the sea surface wind vector components. The significance of these distributions for improvements to calculations of averaged air–sea fluxes in diagnostic and modeling studies is discussed.
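
A small sketch of the moment check described above, assuming SciPy and fixing the Weibull location at zero so that only the two shape/scale parameters are fitted:

```python
import numpy as np
from scipy import stats

def weibull_check(wind_speeds):
    """Fit a two-parameter Weibull to wind speeds (location fixed at 0) and compare
    the observed mean/std ratio and skewness with those implied by the fit."""
    w = np.asarray(wind_speeds, float)
    shape, _, scale = stats.weibull_min.fit(w, floc=0)
    fitted = stats.weibull_min(shape, loc=0, scale=scale)
    return {
        "obs mean/std": w.mean() / w.std(),
        "fit mean/std": fitted.mean() / fitted.std(),
        "obs skewness": stats.skew(w),
        "fit skewness": float(fitted.stats(moments="s")),
    }
```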

Posted Content
TL;DR: A straightforward data-based method of determining the optimal number of bins in a uniform bin-width histogram is introduced by assigning a multinomial likelihood and a non-informative prior to derive the posterior probability for the number of bins in a piecewise-constant density model given the data.
Abstract: Histograms are convenient non-parametric density estimators, which continue to be used ubiquitously. Summary quantities estimated from histogram-based probability density models depend on the choice of the number of bins. We introduce a straightforward data-based method of determining the optimal number of bins in a uniform bin-width histogram. By assigning a multinomial likelihood and a non-informative prior, we derive the posterior probability for the number of bins in a piecewise-constant density model given the data. In addition, we estimate the mean and standard deviations of the resulting bin heights, examine the effects of small sample sizes and digitized data, and demonstrate the application to multi-dimensional histograms.
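
The resulting log-posterior for M equal-width bins is usually quoted in the form implemented below (treat the exact expression as our paraphrase rather than a quotation from the paper), with N the sample size and n_k the count in bin k:

```python
import numpy as np
from scipy.special import gammaln

def log_posterior_bins(data, m):
    """Relative log-posterior for m equal-width bins (piecewise-constant density,
    multinomial likelihood, Jeffreys-type prior on the bin probabilities)."""
    counts, _ = np.histogram(data, bins=m)
    n = counts.sum()
    return (n * np.log(m) + gammaln(m / 2) - m * gammaln(0.5)
            - gammaln(n + m / 2) + np.sum(gammaln(counts + 0.5)))

def optimal_bins(data, max_bins=200):
    """Pick the number of bins that maximises the log-posterior."""
    return max(range(1, max_bins + 1), key=lambda m: log_posterior_bins(data, m))
```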

Journal ArticleDOI
Jie Li1, Jianbing Chen1
TL;DR: In this article, the probability density evolution method (PDEM) is proposed for dynamic responses analysis of non-linear stochastic structures, which is based on the principle of preservation of probability, and a one-dimensional partial differential equation in terms of the joint probability density function is set up.
Abstract: The probability density evolution method (PDEM) for dynamic response analysis of non-linear stochastic structures is proposed. In the method, the dynamic response of non-linear stochastic structures is firstly expressed in a formal solution, which is a function of the random parameters. In this sense, the dynamic responses are mutually uncoupled. A state equation is then constructed in the augmented state space. Based on the principle of preservation of probability, a one-dimensional partial differential equation in terms of the joint probability density function is set up. The numerical solving algorithm, where the Newmark-Beta time-integration algorithm and the finite difference method with the Lax–Wendroff difference scheme are brought together, is studied. In the numerical examples, free vibration of a single-degree-of-freedom non-linear conservative system and dynamic responses of an 8-storey shear structure with bilinear hysteretic restoring forces, subjected to harmonic excitation and seismic excitation, respectively, are investigated. The investigations indicate that the probability density functions of dynamic responses of non-linear stochastic structures are usually irregular and far from the well-known distribution types. They exhibit obvious evolution characteristics. The comparisons with the analytical solution and Monte Carlo simulation method demonstrate that the proposed PDEM is of fair accuracy and efficiency.
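
In the notation usually used for this method (our paraphrase), with Z(theta, t) the response of the structure for a realisation theta of the random parameters, the one-dimensional equation referred to above is the generalized density evolution equation

\[
\frac{\partial p_{Z\Theta}(z,\theta,t)}{\partial t}
+\dot{Z}(\theta,t)\,\frac{\partial p_{Z\Theta}(z,\theta,t)}{\partial z}=0,
\qquad
p_{Z}(z,t)=\int_{\Omega_\Theta} p_{Z\Theta}(z,\theta,t)\,\mathrm{d}\theta,
\]

which is solved with the finite-difference scheme once the response velocities \(\dot{Z}(\theta,t)\) have been obtained by deterministic time integration at representative points theta.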

Journal ArticleDOI
TL;DR: In this paper, an analogue of the linear continuous ranked probability score is introduced that applies to probabilistic forecasts of circular quantities and is used to assess predictions of wind direction for 361 cases of mesoscale, short-range ensemble forecasts over the North American Pacic Northwest.
Abstract: An analogue of the linear continuous ranked probability score is introduced that applies to probabilistic forecasts of circular quantities. This scoring rule is proper and thereby discourages hedging. The circular continuous ranked probability score reduces to angular distance when the forecast is deterministic, just as the linear continuous ranked probability score generalizes the absolute error. Furthermore, the continuous ranked probability score provides a direct way of comparing deterministic forecasts, discrete forecast ensembles, and post-processed forecast ensembles that can take the form of probability density functions. The circular continuous ranked probability score is used in this study to assess predictions of 10 m wind direction for 361 cases of mesoscale, short-range ensemble forecasts over the North American Pacific Northwest. Reference probability forecasts based on the ensemble mean and its forecast error history over the period outperform probability forecasts constructed directly from the ensemble sample statistics. These results suggest that short-term forecast uncertainty is not yet well predicted at mesoscale resolutions near the surface, despite the inclusion of multi-scheme physics diversity and surface boundary parameter perturbations in the mesoscale ensemble design.
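
Using the kernel representation of the CRPS with angular distance in place of absolute error (a sketch consistent with the reduction to angular distance for deterministic forecasts; the estimator below is ours, not code from the paper), an ensemble estimate is:

```python
import numpy as np

def angular_distance(a, b):
    """Smaller angle between two directions given in degrees, in [0, 180]."""
    d = np.abs(a - b) % 360.0
    return np.minimum(d, 360.0 - d)

def circular_crps(ensemble, observation):
    """Ensemble estimate of the circular continuous ranked probability score:
    E[d(X, y)] - 0.5 E[d(X, X')] with angular distance d in place of |X - y|."""
    ens = np.asarray(ensemble, float)
    term1 = np.mean(angular_distance(ens, observation))
    term2 = np.mean(angular_distance(ens[:, None], ens[None, :]))
    return term1 - 0.5 * term2
```

For a single-member (deterministic) forecast the second term vanishes and the score is simply the angular distance to the observation.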

Journal ArticleDOI
TL;DR: In this paper, the stochastic foundations for ultraslow diffusion were developed based on random walks with a random waiting time between jumps whose probability tail falls off at a logarithmic rate.

Journal ArticleDOI
TL;DR: In this article, a model for compound transport of cosmic rays due to random walk of the magnetic field lines, and for a range of models for particle transport along the field, is developed.
Abstract: A Chapman-Kolmogorov equation description of compound transport of cosmic rays due to random walk of the magnetic field lines, and for a range of models for particle transport along the field, is developed. The probability distribution, P_p, for the particle propagation along the field corresponds to either (1) a ballistic or scatter-free model, (2) a parallel diffusion model, or (3) a telegrapher equation model. The probability distribution function (pdf) describing the magnetic field statistics, P_FRW, is assumed to be Gaussian. These models are used to discuss features of the dropout events in the low-energy, solar cosmic-ray intensity observed by Mazur et al. We show that the Chuvilgin and Ptuskin transport equation for compound diffusion, at sufficiently late times, can be written as a fractional Fokker-Planck equation, involving ordinary diffusion parallel to the mean magnetic field and compound diffusion of the particles normal to the field. The Green's function solution of the equation and the corresponding spatial moments of the particle transport, both parallel and perpendicular to the field, are obtained. The two-dimensional pdf for compound diffusion across the field is obtained as an inverse Laplace transform, or as a real integral.

Journal ArticleDOI
TL;DR: Sequential Monte Carlo approximations of the PHD using particle filter techniques have been implemented, showing the potential of this technique for real-time tracking applications.
Abstract: Bayesian single-target tracking techniques can be extended to a multiple-target environment by viewing the multiple-target state as a random finite set, but evaluating the multiple-target posterior distribution is currently computationally intractable for real-time applications. A practical alternative to the optimal Bayes multitarget filter is the probability hypothesis density (PHD) filter, which propagates the first-order moment of the multitarget posterior instead of the posterior distribution itself. It has been shown that the PHD is the best-fit approximation of the multitarget posterior in an information-theoretic sense. The method avoids the need for explicit data association, as the target states are viewed as a single global target state, and the identities of the targets are not part of the tracking framework. Sequential Monte Carlo approximations of the PHD using particle filter techniques have been implemented, showing the potential of this technique for real-time tracking applications. This paper presents mathematical proofs of convergence for the particle filtering algorithm and gives bounds for the mean-square error.

Journal ArticleDOI
TL;DR: Experimental studies show that the RiIG MAP filter has excellent filtering performance in the sense that it smooths homogeneous regions, and at the same time preserves details.
Abstract: In this paper, a new statistical model for representing the amplitude statistics of ultrasonic images is presented. The model is called the Rician inverse Gaussian (RiIG) distribution, due to the fact that it is constructed as a mixture of the Rice distribution and the Inverse Gaussian distribution. The probability density function (pdf) of the RiIG model is given in closed form as a function of three parameters. Some theoretical background on this new model is discussed, and an iterative algorithm for estimating its parameters from data is given. Then, the appropriateness of the RiIG distribution as a model for the amplitude statistics of medical ultrasound images is experimentally studied. It is shown that the new distribution can fit to the various shapes of local histograms of linearly scaled ultrasound data better than existing models. A log-likelihood cross-validation comparison of the predictive performance of the RiIG, the K, and the generalized Nakagami models turns out in favor of the new model. Furthermore, a maximum a posteriori (MAP) filter is developed based on the RiIG distribution. Experimental studies show that the RiIG MAP filter has excellent filtering performance in the sense that it smooths homogeneous regions, and at the same time preserves details.

Journal ArticleDOI
TL;DR: Comparison of SREM2D simulated satellite rainfall with actual IR-3B41RT data showed that the error modeling technique can preserve the estimation error characteristics across scales with marginal deviations, and was compared against two simpler approaches of error modeling that do not account for uncertainty in rainy/nonrainy area delineation.
Abstract: A two-dimensional satellite rainfall error model ( SREM2D) is developed for simulating ensembles of satellite rain fields on the basis of "reference" rain fields derived from higher accuracy sensor estimates. With this model we aim at characterizing the multidimensional stochastic error structure of satellite rainfall estimates as a function of scale. The pertinent error dimensions we seek to address are: 1) the joint probability density function for characterizing the spatial structure of the successful delineation of rainy and nonrainy areas; 2) the temporal dynamics of rain estimation bias; and 3) the spatial variability of rain rate estimation error. Ground radar rain fields in the Southern plains of the United States are used as reference to evaluate SREM2D error parameters at 0.25-deg and hourly spatiotemporal resolution for an infrared (IR) rain retrieval algorithm (IR-3B41RT) developed at NASA. Comparison of SREM2D simulated satellite rainfall with actual IR-3B41RT data showed that the error modeling technique can preserve the estimation error characteristics across scales with marginal deviations. The model performance is compared against two simpler, but widely used, approaches of error modeling that do not account for uncertainty in rainy/nonrainy area delineation. It is shown that both of these approaches fare poorly with regards to preserving the error structure across scales. They underestimated the sensor retrieval error standard deviation by more than 100% upon aggregation, which, for SREM2D, was found to be below 30%. SREM2D is modular in design-it can be applied for any satellite rainfall algorithm to consistently characterize its error structure.

Journal ArticleDOI
TL;DR: In this paper, the equilibrium fluctuations of a field of cumulus clouds under homogeneous large-scale forcing are derived statistically, using the Gibbs canonical ensemble from statistical mechanics, and the probability density function of individual cloud mass fluxes is shown to be exponential.
Abstract: To provide a theoretical basis for stochastic parameterization of cumulus convection, the equilibrium fluctuations of a field of cumulus clouds under homogeneous large-scale forcing are derived statistically, using the Gibbs canonical ensemble from statistical mechanics. In the limit of noninteracting convective cells, the statistics of these convective fluctuations can be written in terms of the large-scale, externally constrained properties of the system. Using this framework, the probability density function of individual cloud mass fluxes is shown to be exponential. An analytical expression for the distribution function of total mass flux over a region of given size is also derived, and the variance of this distribution is found to be inversely related to the mean number of clouds in the ensemble. In a companion paper, these theoretical predictions are tested against cloud resolving model data.

Journal ArticleDOI
TL;DR: An innovative estimation algorithm is described, which faces the problem of probability density function (pdf) estimation in the context of synthetic aperture radar (SAR) amplitude data analysis by adopting a finite mixture model for the amplitude pdf, with mixture components belonging to a given dictionary of SAR-specific pdfs.
Abstract: In remotely sensed data analysis, a crucial problem is represented by the need to develop accurate models for the statistics of the pixel intensities. This paper deals with the problem of probability density function (pdf) estimation in the context of synthetic aperture radar (SAR) amplitude data analysis. Several theoretical and heuristic models for the pdfs of SAR data have been proposed in the literature, which have been proved to be effective for different land-cover typologies, thus making the choice of a single optimal parametric pdf a hard task, especially when dealing with heterogeneous SAR data. In this paper, an innovative estimation algorithm is described, which faces such a problem by adopting a finite mixture model for the amplitude pdf, with mixture components belonging to a given dictionary of SAR-specific pdfs. The proposed method automatically integrates the procedures of selection of the optimal model for each component, of parameter estimation, and of optimization of the number of components by combining the stochastic expectation-maximization iterative methodology with the recently developed "method-of-log-cumulants" for parametric pdf estimation in the case of nonnegative random variables. Experimental results on several real SAR images are reported, showing that the proposed method accurately models the statistics of SAR amplitude data.

Journal ArticleDOI
TL;DR: In this paper, it is shown that the set of bounded probability density functions on finite intervals is a pre-Hilbert space, whose natural completion is a Hilbert space of densities with square-integrable logarithm.
Abstract: The set of probability functions is a convex subset of L1 and it does not have a linear space structure when using ordinary sum and multiplication by real constants. Moreover, difficulties arise when dealing with distances between densities. The crucial point is that usual distances are not invariant under relevant transformations of densities. To overcome these limitations, Aitchison's ideas on compositional data analysis are used, generalizing perturbation and power transformation, as well as the Aitchison inner product, to operations on probability density functions with support on a finite interval. With these operations at hand, it is shown that the set of bounded probability density functions on finite intervals is a pre–Hilbert space. A Hilbert space of densities, whose logarithm is square–integrable, is obtained as the natural completion of the pre–Hilbert space.
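
In the notation commonly used for this construction (our paraphrase), for densities f and g on a finite interval I of length lambda(I), the generalized perturbation, powering, and inner product are

\[
(f\oplus g)(x)=\frac{f(x)\,g(x)}{\int_I f(t)\,g(t)\,\mathrm{d}t},\qquad
(\alpha\odot f)(x)=\frac{f(x)^{\alpha}}{\int_I f(t)^{\alpha}\,\mathrm{d}t},
\]
\[
\langle f,g\rangle=\frac{1}{2\,\lambda(I)}\int_I\!\int_I
\ln\frac{f(x)}{f(y)}\,\ln\frac{g(x)}{g(y)}\,\mathrm{d}x\,\mathrm{d}y,
\]

and these are the operations under which the set of bounded densities becomes a pre-Hilbert space.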

Journal ArticleDOI
TL;DR: This work examines unitary encoded space-time transmission of MIMO systems and derives the received signal distribution when the channel matrix is correlated at the transmitter end.
Abstract: A promising new method from the field of representations of Lie groups is applied to calculate integrals over unitary groups, which are important for multiantenna communications. To demonstrate the power and simplicity of this technique, a number of recent results are rederived, using only a few simple steps. In particular, we derive the joint probability distribution of eigenvalues of the matrix GG^†, with G a nonzero mean or a semicorrelated Gaussian random matrix. These joint probability distribution functions can then be used to calculate the moment generating function of the mutual information for Gaussian multiple-input multiple-output (MIMO) channels with these probability distributions of their channel matrices G. We then turn to the previously unsolved problem of calculating the moment generating function of the mutual information of MIMO channels, which are correlated at both the receiver and the transmitter. From this moment generating function we obtain the ergodic average of the mutual information and study the outage probability. These methods can be applied to a number of other problems. As a particular example, we examine unitary encoded space-time transmission of MIMO systems and we derive the received signal distribution when the channel matrix is correlated at the transmitter end.

Journal ArticleDOI
TL;DR: In this article, the authors report measurements of the spreading rate of pairs of tracer particles in an intensely turbulent laboratory water flow and compare their measurements of this turbulent relative dispersion with the longstanding work of Richardson and Batchelor.
Abstract: We report measurements of the spreading rate of pairs of tracer particles in an intensely turbulent laboratory water flow. We compare our measurements of this turbulent relative dispersion with the longstanding work of Richardson and Batchelor, and find excellent agreement with Batchelor's predictions. The distance neighbour function, the probability density function of the relative dispersion, is measured and compared with existing models. We also investigate the recently proposed exit time analysis of relative dispersion.
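
For reference, the two classical predictions compared in such experiments are usually written as follows (our paraphrase; the constants are the conventional ones, not values quoted from this abstract):

\[
\langle r^{2}(t)\rangle = g\,\varepsilon\,t^{3}\quad\text{(Richardson, } t\gg t_{0}\text{)},
\qquad
\langle\,|\mathbf{r}(t)-\mathbf{r}(0)|^{2}\rangle=\tfrac{11}{3}\,C_{2}\,(\varepsilon\,r_{0})^{2/3}\,t^{2}\quad\text{(Batchelor, } t\ll t_{0}\text{)},
\]

with r_0 the initial pair separation, epsilon the energy dissipation rate, and t_0 = (r_0^2/epsilon)^{1/3} the Batchelor timescale.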

Posted Content
TL;DR: In this article, the authors provide a critical evaluation of the various estimation techniques, focusing on the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox-Ingersoll-Ross and Ornstein-Uhlenbeck equations.
Abstract: Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This paper provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox-Ingersoll-Ross and Ornstein-Uhlenbeck equations respectively.
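
For the Ornstein-Uhlenbeck case the transitional density is Gaussian, so exact maximum likelihood is available in closed form and serves as the natural benchmark; a minimal sketch (generic parameter names, regular sampling interval dt) is:

```python
import numpy as np
from scipy.optimize import minimize

def ou_neg_log_lik(params, x, dt):
    """Exact negative log-likelihood of an Ornstein-Uhlenbeck process
    dX = kappa*(theta - X) dt + sigma dW, observed at a fixed time step dt.
    The Gaussian transition density makes maximum likelihood exact here."""
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    mean = theta + (x[:-1] - theta) * np.exp(-kappa * dt)
    var = sigma ** 2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
    resid = x[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

# Hypothetical usage on an observed path `x` sampled every dt = 1/252:
# fit = minimize(ou_neg_log_lik, x0=[1.0, x.mean(), x.std()], args=(x, 1/252.0))
```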

Journal ArticleDOI
TL;DR: In this article, the authors analyzed the data on personal income distribution from the Australian Bureau of Statistics and compared fits of the data to the exponential, log-normal, and gamma distributions.
Abstract: We analyze the data on personal income distribution from the Australian Bureau of Statistics. We compare fits of the data to the exponential, log-normal, and gamma distributions. The exponential function gives a good (albeit not perfect) description of 98% of the population in the lower part of the distribution. The log-normal and gamma functions do not improve the fit significantly, despite having more parameters, and mimic the exponential function. We find that the probability density at zero income is not zero, which contradicts the log-normal and gamma distributions, but is consistent with the exponential one. The high-resolution histogram of the probability density shows a very sharp and narrow peak at low incomes, which we interpret as the result of a government policy on income redistribution.