
Showing papers on "Monte Carlo method published in 2002"


Journal ArticleDOI
TL;DR: Both optimal and suboptimal Bayesian algorithms for nonlinear/non-Gaussian tracking problems are reviewed, with a focus on particle filters.
Abstract: Increasingly, for many application areas, it is becoming important to include elements of nonlinearity and non-Gaussianity in order to model accurately the underlying dynamics of a physical system. Moreover, it is typically crucial to process data on-line as it arrives, both from the point of view of storage costs as well as for rapid adaptation to changing signal characteristics. In this paper, we review both optimal and suboptimal Bayesian algorithms for nonlinear/non-Gaussian tracking problems, with a focus on particle filters. Particle filters are sequential Monte Carlo methods based on point mass (or "particle") representations of probability densities, which can be applied to any state-space model and which generalize the traditional Kalman filtering methods. Several variants of the particle filter such as SIR, ASIR, and RPF are introduced within a generic framework of the sequential importance sampling (SIS) algorithm. These are discussed and compared with the standard EKF through an illustrative example.

11,409 citations
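To make the SIS/SIR machinery above concrete, here is a minimal sketch of a bootstrap (SIR) particle filter for a hypothetical scalar nonlinear/non-Gaussian state-space model; the model, noise levels, and particle count are illustrative assumptions, not anything taken from the paper.

```python
import numpy as np

def sir_particle_filter(ys, n_particles=500, rng=None):
    """Bootstrap SIR filter for a toy scalar state-space model.

    State:       x_t = 0.5*x_{t-1} + 25*x_{t-1}/(1+x_{t-1}^2) + process noise
    Observation: y_t = x_t^2/20 + measurement noise
    (Model and noise levels are illustrative assumptions.)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 2.0, n_particles)   # initial particle cloud
    estimates = []
    for y in ys:
        # Propagate particles through the state transition (prior proposal).
        particles = (0.5 * particles
                     + 25.0 * particles / (1.0 + particles**2)
                     + rng.normal(0.0, np.sqrt(10.0), n_particles))
        # Weight by the observation likelihood (unit-variance Gaussian noise).
        w = np.exp(-0.5 * (y - particles**2 / 20.0) ** 2) + 1e-12
        w /= w.sum()
        # Posterior-mean estimate, then multinomial resampling.
        estimates.append(np.sum(w * particles))
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)

# Simulate a short synthetic observation sequence and run the filter.
rng = np.random.default_rng(1)
x, ys = 0.0, []
for t in range(50):
    x = 0.5 * x + 25 * x / (1 + x**2) + rng.normal(0, np.sqrt(10))
    ys.append(x**2 / 20 + rng.normal(0, 1))
print(sir_particle_filter(ys)[:5])
```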


Journal ArticleDOI
TL;DR: A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect and found that two methods based on the distribution of the product and two difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power.
Abstract: A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect.

8,629 citations
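As a hedged illustration of the joint significance test the study recommends, the sketch below declares a mediated effect only when both the X→M path and the M→Y path (controlling for X) are individually significant; the variable names, sample size, and simulated effect sizes are assumptions for the example.

```python
import numpy as np
from scipy import stats

def joint_significance_test(x, m, y, alpha=0.05):
    """Test the intervening-variable (mediated) effect by requiring both
    the X->M path (a) and the M->Y|X path (b) to be significant."""
    n = len(x)
    # Path a: regress M on X.
    a_res = stats.linregress(x, m)
    # Path b: regress Y on M and X, and test the M coefficient.
    X = np.column_stack([np.ones(n), m, x])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 3)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_b = beta[1] / np.sqrt(cov[1, 1])
    b_p = 2 * stats.t.sf(abs(t_b), df=n - 3)
    return (a_res.pvalue < alpha) and (b_p < alpha), a_res.pvalue, b_p

# Simulated data with a genuine mediated effect (effect sizes assumed).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
m = 0.4 * x + rng.normal(size=200)
y = 0.4 * m + rng.normal(size=200)
print(joint_significance_test(x, m, y))
```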


Journal ArticleDOI
TL;DR: In this paper, a fast Markov chain Monte Carlo exploration of cosmological parameter space is presented, which combines data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae type Ia and big-bang nucleosynthesis.
Abstract: We present a fast Markov chain Monte Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent cosmic microwave background (CMB) experiments and provide parameter constraints, including σ8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae type Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (mν ≲ 0.3 eV), equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendixes we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, assess the goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.

3,550 citations
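The appendixes' use of importance sampling, reweighting an existing chain to incorporate new data, can be sketched as follows; the one-parameter chain and the Gaussian "new data" constraint are illustrative assumptions, not the paper's actual likelihoods.

```python
import numpy as np

def importance_reweight(samples, log_like_new):
    """Reweight MCMC samples (drawn under an old posterior) by the
    log-likelihood of new data, returning normalized weights."""
    logw = log_like_new(samples)
    logw -= logw.max()                 # stabilize the exponential
    w = np.exp(logw)
    return w / w.sum()

# Toy example: chain samples of a single parameter (e.g. a density parameter),
# reweighted by a new Gaussian constraint 0.30 +/- 0.02 (values assumed).
rng = np.random.default_rng(0)
chain = rng.normal(0.32, 0.05, size=20_000)          # stand-in for an MCMC chain
w = importance_reweight(chain, lambda th: -0.5 * ((th - 0.30) / 0.02) ** 2)

post_mean = np.sum(w * chain)
post_std = np.sqrt(np.sum(w * (chain - post_mean) ** 2))
eff_size = 1.0 / np.sum(w**2)                        # effective sample size
print(post_mean, post_std, eff_size)
```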


Journal ArticleDOI
TL;DR: A localization algorithm motivated from least-squares fitting theory is constructed and tested both on image stacks of 30-nm fluorescent beads and on computer-generated images (Monte Carlo simulations), and results show good agreement with the derived precision equation.

2,390 citations


Journal ArticleDOI
TL;DR: A Monte Carlo simulation-based approach to stochastic discrete optimization problems is studied, in which a random sample is generated and the expected value function is approximated by the corresponding sample average function.
Abstract: In this paper we study a Monte Carlo simulation-based approach to stochastic discrete optimization problems. The basic idea of such methods is that a random sample is generated and the expected value function is approximated by the corresponding sample average function. The obtained sample average optimization problem is solved, and the procedure is repeated several times until a stopping criterion is satisfied. We discuss convergence rates, stopping rules, and computational complexity of this procedure and present a numerical example for the stochastic knapsack problem.

1,728 citations
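A hedged sketch of the sample average approximation idea for a small stochastic knapsack: random weights are sampled, the expectation in the objective is replaced by a sample average, and the resulting deterministic problem is solved (here by enumeration) several times with independent samples. The item rewards, weight distribution, capacity, and penalty are assumptions for illustration.

```python
import itertools
import numpy as np

def saa_knapsack(rewards, weight_sampler, capacity, penalty, n_samples, rng):
    """Solve one sample average approximation (SAA) of a stochastic knapsack:
    maximize sum(r_i x_i) - penalty * E[(sum W_i x_i - capacity)+],
    with the expectation replaced by an average over sampled weight vectors.
    The instance is small, so all 0/1 item selections are enumerated."""
    n_items = len(rewards)
    W = weight_sampler(n_samples, rng)              # (n_samples, n_items) weights
    best_x, best_val = None, -np.inf
    for x in itertools.product([0, 1], repeat=n_items):
        x = np.array(x)
        overflow = np.maximum(W @ x - capacity, 0.0)
        val = rewards @ x - penalty * overflow.mean()   # sample average objective
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy instance (all numbers are illustrative assumptions).
rng = np.random.default_rng(0)
rewards = np.array([10.0, 7.0, 6.0, 4.0, 3.0])
sampler = lambda m, g: g.normal(loc=[4, 3, 3, 2, 2], scale=1.0, size=(m, 5))
for rep in range(3):        # repeat with independent samples, as the paper suggests
    x, val = saa_knapsack(rewards, sampler, capacity=8.0, penalty=5.0,
                          n_samples=2000, rng=rng)
    print(rep, x, round(val, 3))
```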


Journal ArticleDOI
TL;DR: The strength of this book is in bringing together advanced Monte Carlo methods developed in many disciplines, with applications including the Ising model, molecular structure simulation, bioinformatics, target tracking, hypothesis testing for astronomical observations, Bayesian inference of multilevel models, and missing-data problems.
Abstract: (2002). Monte Carlo Strategies in Scientific Computing. Technometrics: Vol. 44, No. 4, pp. 403-404.

1,434 citations


Journal ArticleDOI
15 May 2002-Proteins
TL;DR: An all-atom force field aimed at protein and nucleotide optimization in vacuo (NOVA) is presented, which has been specifically designed to avoid moving models away from experimental structures and can be applied to modeling applications as well as X-ray and NMR structure refinement.
Abstract: One of the conclusions drawn at the CASP4 meeting in Asilomar was that applying various force fields during refinement of template-based models tends to move predictions in the wrong direction, away from the experimentally determined coordinates. We have derived an all-atom force field aimed at protein and nucleotide optimization in vacuo (NOVA), which has been specifically designed to avoid this problem. NOVA resembles common molecular dynamics force fields but has been automatically parameterized with two major goals: (i) not to make high resolution X-ray structures worse and (ii) to improve homology models built by WHAT IF. Force-field parameters were not required to be physically correct; instead, they were optimized with random Monte Carlo moves in force-field parameter space, each one evaluated by simulated annealing runs of a 50-protein optimization set. Errors inherent to the approximate force-field equation could thus be canceled by errors in force-field parameters. Compared with the optimization set, the force field did equally well on an independent validation set and is shown to move in silico models closer to reality. It can be applied to modeling applications as well as X-ray and NMR structure refinement. A new method to assign force-field parameters based on molecular trees is also presented. A NOVA server is freely accessible at http://www.yasara.com/servers

1,354 citations


Journal ArticleDOI
TL;DR: An analytical expression for the cluster coefficient is derived, which shows that the graphs in which each vertex is assigned random coordinates in a geometric space of arbitrary dimensionality are distinctly different from standard random graphs, even for infinite dimensionality.
Abstract: We analyze graphs in which each vertex is assigned random coordinates in a geometric space of arbitrary dimensionality and only edges between adjacent points are present. The critical connectivity is found numerically by examining the size of the largest cluster. We derive an analytical expression for the cluster coefficient, which shows that the graphs are distinctly different from standard random graphs, even for infinite dimensionality. Insights relevant for graph bipartitioning are included.

1,271 citations
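A brief Monte Carlo companion to the paper's numerical approach, using networkx's random geometric graph generator to track the clustering coefficient and the relative size of the largest cluster as the connection radius varies in two dimensions; the graph sizes and radii scanned are illustrative assumptions.

```python
import networkx as nx
import numpy as np

def rgg_statistics(n, radius, dim=2, trials=20, seed=0):
    """Average clustering coefficient and relative size of the largest
    cluster for random geometric graphs G(n, radius) in [0,1]^dim."""
    clustering, giant = [], []
    for t in range(trials):
        G = nx.random_geometric_graph(n, radius, dim=dim, seed=seed + t)
        clustering.append(nx.average_clustering(G))
        giant.append(len(max(nx.connected_components(G), key=len)) / n)
    return np.mean(clustering), np.mean(giant)

# Scan the radius to locate the connectivity transition numerically.
for r in [0.04, 0.06, 0.08, 0.10, 0.12]:
    c, s = rgg_statistics(n=500, radius=r)
    print(f"radius={r:.2f}  clustering={c:.3f}  largest cluster fraction={s:.3f}")
```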


Journal ArticleDOI
TL;DR: The aim of this paper is to present a survey of convergence results on particle filtering methods to make them accessible to practitioners.
Abstract: Optimal filtering problems are ubiquitous in signal processing and related fields. Except for a restricted class of models, the optimal filter does not admit a closed-form expression. Particle filtering methods are a set of flexible and powerful sequential Monte Carlo methods designed to solve the optimal filtering problem numerically. The posterior distribution of the state is approximated by a large set of Dirac-delta masses (samples/particles) that evolve randomly in time according to the dynamics of the model and the observations. The particles are interacting; thus, classical limit theorems relying on statistically independent samples do not apply. In this paper, our aim is to present a survey of convergence results on this class of methods to make them accessible to practitioners.

1,013 citations



Book
26 Aug 2002
TL;DR: A short and systematic theoretical introduction to the Monte Carlo method and a practical guide with plenty of examples and exercises for the student.
Abstract: Introduction: purpose and scope of this volume, and some general comments. Theoretical foundation of the Monte Carlo method and its application in statistical physics. Guide to practical work with the Monte Carlo method. Some important recent developments of the Monte Carlo methodology.

Journal ArticleDOI
TL;DR: In this article, the authors examined a two-regime vector error-correction model with a single cointegrating vector and a threshold effect in the errorcorrection term, and proposed a relatively simple algorithm to obtain maximum likelihood estimation of the complete threshold cointegration model for the bivariate case.

01 Jan 2002
TL;DR: In this paper, the authors used Hermite polynomials to construct an explicit sequence of closed-form functions and showed that it converges to the true (but unknown) likelihood function.
Abstract: When a continuous-time diffusion is observed only at discrete dates, in most cases the transition distribution and hence the likelihood function of the observations is not explicitly computable. Using Hermite polynomials, I construct an explicit sequence of closed-form functions and show that it converges to the true (but unknown) likelihood function. I document that the approximation is very accurate and prove that maximizing the sequence results in an estimator that converges to the true maximum likelihood estimator and shares its asymptotic properties. Monte Carlo evidence reveals that this method outperforms other approximation schemes in situations relevant for financial models.

Journal ArticleDOI
TL;DR: In this article, an adaptive Markov chain Monte Carlo approach is proposed to evaluate the desired integral, based on the Metropolis-Hastings algorithm and a concept similar to simulated annealing.
Abstract: In a full Bayesian probabilistic framework for "robust" system identification, structural response predictions and performance reliability are updated using structural test data D by considering the predictions of a whole set of possible structural models that are weighted by their updated probability. This involves integrating h(θ)p(θ|D) over the whole parameter space, where θ is a parameter vector defining each model within the set of possible models of the structure, h(θ) is a model prediction of a response quantity of interest, and p(θ|D) is the updated probability density for θ, which provides a measure of how plausible each model is given the data D. The evaluation of this integral is difficult because the dimension of the parameter space is usually too large for direct numerical integration and p(θ|D) is concentrated in a small region in the parameter space and only known up to a scaling constant. An adaptive Markov chain Monte Carlo simulation approach is proposed to evaluate the desired integral that is based on the Metropolis-Hastings algorithm and a concept similar to simulated annealing. By carrying out a series of Markov chain simulations with limiting stationary distributions equal to a sequence of intermediate probability densities that converge on p(θ|D), the region of concentration of p(θ|D) is gradually portrayed. The Markov chain samples are used to estimate the desired integral by statistical averaging. The method is illustrated using simulated dynamic test data to update the robust response variance and reliability of a moment-resisting frame for two cases: one where the model is only locally identifiable based on the data and the other where it is unidentifiable.
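A minimal sketch of the annealing-like strategy described above: a sequence of Metropolis-Hastings runs whose targets are intermediate densities proportional to p(θ)·L(θ|D)^β, with β stepped from 0 to 1 and each level seeded from the previous one. The two-parameter Gaussian prior and likelihood and the tuning constants are assumptions for illustration, not the structural model of the paper.

```python
import numpy as np

def log_prior(theta):
    # Broad independent Gaussian prior (assumed for illustration).
    return -0.5 * np.sum(theta**2) / 10.0**2

def log_like(theta):
    # Toy likelihood concentrated near (1.0, 2.0) (assumed for illustration).
    return -0.5 * np.sum((theta - np.array([1.0, 2.0]))**2) / 0.1**2

def tempered_mh(betas=(0.0, 0.01, 0.1, 0.5, 1.0), n_steps=3000, step=0.5, seed=0):
    """Metropolis-Hastings through a sequence of intermediate densities
    proportional to prior(theta) * likelihood(theta)**beta."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, 10.0, size=2)           # start from the prior
    samples = None
    for beta in betas:
        log_target = lambda th: log_prior(th) + beta * log_like(th)
        chain = []
        lp = log_target(theta)
        for _ in range(n_steps):
            prop = theta + step * rng.normal(size=2)
            lp_prop = log_target(prop)
            if np.log(rng.uniform()) < lp_prop - lp:    # MH accept/reject
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        samples = np.array(chain)
        theta = samples[rng.integers(len(samples))]     # seed the next level
        step *= 0.7                                     # shrink proposals as beta grows
    return samples                                      # samples at beta = 1

post = tempered_mh()
print(post[1000:].mean(axis=0))     # should land near (1.0, 2.0)
```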

Journal ArticleDOI
TL;DR: The development and application of Monte Carlo methods for inverse problems in the Earth sciences and in particular geophysics are traced from the earliest work of the Russian school and the pioneering studies in the west by Press [1968] to modern importance sampling and ensemble inference methods.
Abstract: [1] Monte Carlo inversion techniques were first used by Earth scientists more than 30 years ago. Since that time they have been applied to a wide range of problems, from the inversion of free oscillation data for whole Earth seismic structure to studies at the meter-scale lengths encountered in exploration seismology. This paper traces the development and application of Monte Carlo methods for inverse problems in the Earth sciences and in particular geophysics. The major developments in theory and application are traced from the earliest work of the Russian school and the pioneering studies in the west by Press [1968] to modern importance sampling and ensemble inference methods. The paper is divided into two parts. The first is a literature review, and the second is a summary of Monte Carlo techniques that are currently popular in geophysics. These include simulated annealing, genetic algorithms, and other importance sampling approaches. The objective is to act as both an introduction for newcomers to the field and a comprehensive reference source for researchers already familiar with Monte Carlo inversion. It is our hope that the paper will serve as a timely summary of an expanding and versatile methodology and also encourage applications to new areas of the Earth sciences.
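As a small, hedged illustration of one family of methods the review covers, the sketch below runs a generic simulated annealing loop that minimizes a data misfit for a toy two-parameter forward model; the exponential forward model, noise level, and cooling schedule are all assumptions.

```python
import numpy as np

def forward(model, x):
    # Toy forward model: amplitude * exp(-decay * x) (assumed for illustration).
    amplitude, decay = model
    return amplitude * np.exp(-decay * x)

def misfit(model, x, data):
    return np.sum((forward(model, x) - data) ** 2)

def simulated_annealing(x, data, n_iter=20_000, t0=1.0, seed=0):
    """Minimize the misfit with a Metropolis acceptance rule and a slowly
    decreasing temperature (geometric cooling schedule)."""
    rng = np.random.default_rng(seed)
    model = np.array([1.0, 1.0])                    # arbitrary starting model
    cost = misfit(model, x, data)
    best, best_cost = model.copy(), cost
    for i in range(n_iter):
        temp = t0 * 0.9995**i                       # cooling schedule (assumed)
        trial = model + rng.normal(scale=0.05, size=2)
        trial_cost = misfit(trial, x, data)
        if trial_cost < cost or rng.uniform() < np.exp((cost - trial_cost) / temp):
            model, cost = trial, trial_cost
            if cost < best_cost:
                best, best_cost = model.copy(), cost
    return best, best_cost

# Synthetic data from a "true" model (2.0, 0.5) with noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 5, 50)
data = forward((2.0, 0.5), x) + rng.normal(0, 0.05, x.size)
print(simulated_annealing(x, data))
```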

ReportDOI
01 Nov 2002
TL;DR: The following techniques for uncertainty and sensitivity analysis are briefly summarized: Monte Carlo analysis, differential analysis, response surface methodology, Fourier amplitude sensitivity test, Sobol’ variance decomposition, and fast probability integration.
Abstract: The following techniques for uncertainty and sensitivity analysis are briefly summarized: Monte Carlo analysis, differential analysis, response surface methodology, Fourier amplitude sensitivity test, Sobol’ variance decomposition, and fast probability integration. Desirable features of Monte Carlo analysis in conjunction with Latin hypercube sampling are described in discussions of the following topics: (i) properties of random, stratified and Latin hypercube sampling, (ii) comparisons of random and Latin hypercube sampling, (iii) operations involving Latin hypercube sampling (i.e. correlation control, reweighting of samples to incorporate changed distributions, replicated sampling to test reproducibility of results), (iv) uncertainty analysis (i.e. cumulative distribution functions, complementary cumulative distribution functions, box plots), (v) sensitivity analysis (i.e. scatterplots, regression analysis, correlation analysis, rank transformations, searches for nonrandom patterns), and (vi) analyses involving stochastic (i.e. aleatory) and subjective (i.e. epistemic) uncertainty. Published by Elsevier Science Ltd.
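A minimal sketch of Monte Carlo uncertainty propagation with Latin hypercube sampling, as summarized above: each input's probability range is split into equal-probability strata, one value is drawn per stratum, and the strata are randomly paired across inputs. The two input distributions and the toy response function are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def latin_hypercube(n_samples, n_vars, rng):
    """Latin hypercube sample on [0,1]^n_vars: one draw in each of n_samples
    equal-probability strata per dimension, strata randomly paired across
    dimensions."""
    u = np.arange(n_samples)[:, None] + rng.uniform(size=(n_samples, n_vars))
    u /= n_samples
    for j in range(n_vars):
        u[:, j] = u[rng.permutation(n_samples), j]   # decouple column pairings
    return u

rng = np.random.default_rng(0)
u = latin_hypercube(100, 2, rng)

# Map the uniform sample through assumed input distributions and a toy model.
a = stats.norm(10.0, 2.0).ppf(u[:, 0])            # first input ~ Normal(10, 2)
b = stats.lognorm(s=0.5, scale=1.0).ppf(u[:, 1])  # second input ~ Lognormal
y = a**2 + b                                      # toy response y = a^2 + b
print("mean:", y.mean(), " 95th percentile:", np.percentile(y, 95))
```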

Journal ArticleDOI
TL;DR: In this article, the authors show that low-mass starless cores, the simplest units of star formation, are systematically differentiated in their chemical composition, and they also show that chemical differentiation automatically explains the discrepancy between the sizes of CS and NH3 maps, a problem that has remained unexplained for more than a decade.
Abstract: We present evidence that low-mass starless cores, the simplest units of star formation, are systematically differentiated in their chemical composition. Some molecules, including CO and CS, almost vanish near the core centers, where the abundance decreases by at least 1 or 2 orders of magnitude with respect to the value in the outer core. At the same time, the N2H+ molecule has a constant abundance, and the fraction of NH3 increases toward the core center. Our conclusions are based on a systematic study of five mostly round starless cores (L1498, L1495, L1400K, L1517B, and L1544), which we have mapped in C18O (1-0), CS (2-1), N2H+ (1-0), NH3 (1, 1) and (2, 2), and the 1.2 mm continuum [complemented with C17O (1-0) and C34S (2-1) data for some systems]. For each core we have built a spherically symmetric model in which the density is derived from the 1.2 mm continuum, the kinetic temperature is derived from NH3, and the abundance of each molecule is derived using a Monte Carlo radiative transfer code, which simultaneously fits the shape of the central spectrum and the radial profile of integrated intensity. Regarding the cores for which we have C17O (1-0) and C34S (2-1) data, the model fits these observations automatically when the standard isotopomer ratio is assumed. As a result of this modeling, we also find that the gas kinetic temperature in each core is constant at approximately 10 K. In agreement with previous work, we find that if the dust temperature is also constant, then the density profiles are centrally flattened, and we can model them with a single analytic expression. We also find that for each core the turbulent line width seems constant in the inner 0.1 pc. The very strong abundance drop of CO and CS toward the center of each core is naturally explained by the depletion of these molecules onto dust grains at densities of (2-6) × 10^4 cm^-3. N2H+ seems unaffected by this process up to densities of several times 10^5 cm^-3, or even 10^6 cm^-3, while the NH3 abundance may be enhanced by its lack of depletion and by reactions triggered by the disappearance of CO from the gas phase. With the help of the Monte Carlo modeling, we show that chemical differentiation automatically explains the discrepancy between the sizes of CS and NH3 maps, a problem that has remained unexplained for more than a decade. Our models, in addition, show that a combination of radiative transfer effects can give rise to the previously observed discrepancy in the line width of these two tracers. Although this discrepancy has been traditionally interpreted as resulting from a systematic increase of the turbulent line width with radius, our models show that it can arise in conditions of constant gas turbulence.

Journal ArticleDOI
TL;DR: In this paper, particle filter methods are combined with importance sampling and Monte Carlo schemes in order to explore consistently a sequence of multiple distributions of interest, which can also offer an efficient estimation.
Abstract: SUMMARY Particle filter methods are complex inference procedures, which combine importance sampling and Monte Carlo schemes in order to explore consistently a sequence of multiple distributions of interest. We show that such methods can also offer an efficient estimation

Journal ArticleDOI
TL;DR: In this article, a Markov chain Monte Carlo (MCMC) algorithm is proposed to estimate the likelihood function of a generalized model of stochastic volatility, defined by heavy-tailed Student-t distributions.

Book
01 Jan 2002
TL;DR: This book covers measures of financial risk, from the mean-variance framework to value at risk and coherent risk measures, together with the methods used to estimate them in financial markets.
Abstract:
Preface to the Second Edition. Acknowledgements.
1 The Rise of Value at Risk: 1.1 The emergence of financial risk management; 1.2 Market risk management; 1.3 Risk management before VaR; 1.4 Value at risk. Appendix 1: Types of Market Risk.
2 Measures of Financial Risk: 2.1 The mean-variance framework for measuring financial risk; 2.2 Value at risk; 2.3 Coherent risk measures; 2.4 Conclusions. Appendix 1: Probability Functions. Appendix 2: Regulatory Uses of VaR.
3 Estimating Market Risk Measures: An Introduction and Overview: 3.1 Data; 3.2 Estimating historical simulation VaR; 3.3 Estimating parametric VaR; 3.4 Estimating coherent risk measures; 3.5 Estimating the standard errors of risk measure estimators; 3.6 Overview. Appendix 1: Preliminary Data Analysis. Appendix 2: Numerical Integration Methods.
4 Non-parametric Approaches: 4.1 Compiling historical simulation data; 4.2 Estimation of historical simulation VaR and ES; 4.3 Estimating confidence intervals for historical simulation VaR and ES; 4.4 Weighted historical simulation; 4.5 Advantages and disadvantages of non-parametric methods; 4.6 Conclusions. Appendix 1: Estimating Risk Measures with Order Statistics. Appendix 2: The Bootstrap. Appendix 3: Non-parametric Density Estimation. Appendix 4: Principal Components Analysis and Factor Analysis.
5 Forecasting Volatilities, Covariances and Correlations: 5.1 Forecasting volatilities; 5.2 Forecasting covariances and correlations; 5.3 Forecasting covariance matrices. Appendix 1: Modelling Dependence: Correlations and Copulas.
6 Parametric Approaches (I): 6.1 Conditional vs unconditional distributions; 6.2 Normal VaR and ES; 6.3 The t-distribution; 6.4 The lognormal distribution; 6.5 Miscellaneous parametric approaches; 6.6 The multivariate normal variance-covariance approach; 6.7 Non-normal variance-covariance approaches; 6.8 Handling multivariate return distributions with copulas; 6.9 Conclusions. Appendix 1: Forecasting Longer-term Risk Measures.
7 Parametric Approaches (II): Extreme Value: 7.1 Generalised extreme-value theory; 7.2 The peaks-over-threshold approach: the generalised Pareto distribution; 7.3 Refinements to EV approaches; 7.4 Conclusions.
8 Monte Carlo Simulation Methods: 8.1 Uses of Monte Carlo simulation; 8.2 Monte Carlo simulation with a single risk factor; 8.3 Monte Carlo simulation with multiple risk factors; 8.4 Variance-reduction methods; 8.5 Advantages and disadvantages of Monte Carlo simulation; 8.6 Conclusions.
9 Applications of Stochastic Risk Measurement Methods: 9.1 Selecting stochastic processes; 9.2 Dealing with multivariate stochastic processes; 9.3 Dynamic risks; 9.4 Fixed-income risks; 9.5 Credit-related risks; 9.6 Insurance risks; 9.7 Measuring pensions risks; 9.8 Conclusions.
10 Estimating Options Risk Measures: 10.1 Analytical and algorithmic solutions for options VaR; 10.2 Simulation approaches; 10.3 Delta-gamma and related approaches; 10.4 Conclusions.
11 Incremental and Component Risks: 11.1 Incremental VaR; 11.2 Component VaR; 11.3 Decomposition of coherent risk measures.
12 Mapping Positions to Risk Factors: 12.1 Selecting core instruments; 12.2 Mapping positions and VaR estimation.
13 Stress Testing: 13.1 Benefits and difficulties of stress testing; 13.2 Scenario analysis; 13.3 Mechanical stress testing; 13.4 Conclusions.
14 Estimating Liquidity Risks: 14.1 Liquidity and liquidity risks; 14.2 Estimating liquidity-adjusted VaR; 14.3 Estimating liquidity at risk (LaR); 14.4 Estimating liquidity in crises.
15 Backtesting Market Risk Models: 15.1 Preliminary data issues; 15.2 Backtests based on frequency tests; 15.3 Backtests based on tests of distribution equality; 15.4 Comparing alternative models; 15.5 Backtesting with alternative positions and data; 15.6 Assessing the precision of backtest results; 15.7 Summary and conclusions. Appendix 1: Testing Whether Two Distributions are Different.
16 Model Risk: 16.1 Models and model risk; 16.2 Sources of model risk; 16.3 Quantifying model risk; 16.4 Managing model risk; 16.5 Conclusions.
Bibliography. Author Index. Subject Index.
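In the spirit of the Monte Carlo simulation chapters listed above, a hedged sketch of value at risk and expected shortfall for a single risk factor: simulate the factor over the horizon, form the loss distribution, and read off the quantile and tail mean. The position size, drift, volatility, and horizon are illustrative assumptions, not figures from the book.

```python
import numpy as np

def monte_carlo_var(position, mu, sigma, horizon_days, alpha=0.99,
                    n_sims=100_000, seed=0):
    """Monte Carlo VaR/ES for one geometric-Brownian-motion risk factor."""
    rng = np.random.default_rng(seed)
    dt = horizon_days / 252.0
    # Simulate terminal log-returns of the risk factor over the horizon.
    log_ret = ((mu - 0.5 * sigma**2) * dt
               + sigma * np.sqrt(dt) * rng.standard_normal(n_sims))
    losses = position * (1.0 - np.exp(log_ret))        # loss = fall in value
    var = np.quantile(losses, alpha)                   # value at risk
    es = losses[losses >= var].mean()                  # expected shortfall
    return var, es

# Example: a 1,000,000 position, 20% annual vol, 10-day horizon (assumed numbers).
var, es = monte_carlo_var(position=1_000_000, mu=0.05, sigma=0.20, horizon_days=10)
print(f"99% VaR ~ {var:,.0f}   99% ES ~ {es:,.0f}")
```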

Journal ArticleDOI
TL;DR: In this article, the authors examined the panel data estimation of dynamic models for count data that include correlated fixed effects and predetermined variables, and used a linear feedback model to obtain a consistent estimator for the parameters in the dynamic model.

Journal ArticleDOI
TL;DR: In this paper, a dual way to price American options, based on simulating the paths of the option payoff, and of a judiciously chosen Lagrangian martingale, is introduced.
Abstract: This paper introduces a dual way to price American options, based on simulating the paths of the option payoff, and of a judiciously chosen Lagrangian martingale. Taking the pathwise maximum of the payoff less the martingale provides an upper bound for the price of the option, and this bound is sharp for the optimal choice of Lagrangian martingale. As a first exploration of this method, four examples are investigated numerically; the accuracy achieved with even very simple choices of Lagrangian martingale is surprising. The method also leads naturally to candidate hedging policies for the option, and estimates of the risk involved in using them.
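A hedged sketch of the dual idea on a Bermudan put under geometric Brownian motion: simulate paths, subtract a candidate martingale started at zero, and average the pathwise maximum of the discounted payoff less the martingale. Here the martingale is simply taken to be zero, which already yields a valid (though loose) upper bound; the paper's contribution is the judicious choice that tightens it. All market parameters are assumptions for the example.

```python
import numpy as np

def dual_upper_bound(s0, strike, r, sigma, T, n_steps, n_paths,
                     martingale=None, seed=0):
    """Upper bound for a Bermudan put price: E[ max_t (discounted payoff_t - M_t) ]
    for a martingale M with M_0 = 0 (here M = 0 unless another is supplied)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate geometric Brownian motion paths of the underlying.
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    s = np.concatenate([np.full((n_paths, 1), s0), np.exp(log_s)], axis=1)
    t = np.arange(n_steps + 1) * dt
    discounted_payoff = np.exp(-r * t) * np.maximum(strike - s, 0.0)
    m = np.zeros_like(discounted_payoff) if martingale is None else martingale(s, t)
    return np.mean(np.max(discounted_payoff - m, axis=1))

# With the zero martingale the bound is loose but valid (parameters assumed).
print(dual_upper_bound(s0=100, strike=100, r=0.05, sigma=0.2, T=1.0,
                       n_steps=50, n_paths=50_000))
```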

Journal ArticleDOI
TL;DR: This letter shows through Wishart matrix analysis that the signal-to-noise ratio on the kth stream is a weighted Chi-squared variable with the weight equal to the kth diagonal entry of the inverted transmit correlation matrix, and develops selection algorithms for two cases: maximizing ergodic capacity and minimizing the average probability of error.
Abstract: In this letter we solve the transmit antenna selection problem for a zero forcing spatial multiplexing system with knowledge of the channel statistics at the transmitter. We show through Wishart matrix analysis that the signal-to-noise ratio on the kth stream is a weighted Chi-squared variable with the weight equal to the kth diagonal entry of the inverted transmit correlation matrix. We use this result to develop selection algorithms for two cases: maximizing ergodic capacity and minimizing the average probability of error. Monte Carlo simulations illustrate potential performance improvements.

Journal ArticleDOI
TL;DR: In this article, the authors examined the application of neural networks (NN) to reliability-based structural optimization of large-scale structural systems, where the failure of the structural system is associated with the plastic collapse.

Journal ArticleDOI
TL;DR: The three photon spectra at 6 MV from the machines of three different manufacturers show differences in their shapes as well as in the efficiency of bremsstrahlung production in the corresponding target and filter combinations.
Abstract: A recent paper analyzed the sensitivity to various simulation parameters of the Monte Carlo simulations of nine beams from three major manufacturers of commercial medical linear accelerators, ranging in energy from 4-25 MV. In this work the nine models are used: to calculate photon energy spectra and average energy distributions and compare them to those published by Mohan et al. [Med. Phys. 12, 592-597 (1985)]; to separate the spectra into primary and scatter components from the primary collimator, the flattening filter and the adjustable collimators; and to calculate the contaminant-electron fluence spectra and the electron contribution to the depth-dose curves. Notwithstanding the better precision of the calculated spectra, they are similar to those calculated by Mohan et al. The three photon spectra at 6 MV from the machines of three different manufacturers show differences in their shapes as well as in the efficiency of bremsstrahlung production in the corresponding target and filter combinations. The contribution of direct photons to the photon energy fluence in a 10 x 10 field varies between 92% and 97%, where the primary collimator contributes between 0.6% and 3.4% and the flattening filter contributes between 0.6% and 4.5% to the head-scatter energy fluence. The fluence of the contaminant electrons at 100 cm varies between 5 x 10^-9 and 2.4 x 10^-7 cm^-2 per incident electron on target, and the corresponding spectrum for each beam is relatively invariant inside a 10 x 10 cm^2 field. On the surface the dose from electron contamination varies between 5.7% and 11% of maximum dose and, at the depth of maximum dose, between 0.16% and 2.5% of maximum dose. The photon component of the percentage depth-dose at 10 cm depth is compared with the general formula provided by AAPM's task group 51 and confirms the claimed accuracy of 2%.

Journal ArticleDOI
TL;DR: A review of the current theoretical understanding of collective and single particle diffusion on surfaces and how it relates to the existing experimental data can be found in this article, where a brief survey of the experimental techniques that have been employed for the measurement of the surface diffusion coefficients is presented.
Abstract: We review in this article the current theoretical understanding of collective and single particle diffusion on surfaces and how it relates to the existing experimental data. We begin with a brief survey of the experimental techniques that have been employed for the measurement of the surface diffusion coefficients. This is followed by a section on the basic concepts involved in this field. In particular, we wish to clarify the relation between jump or exchange motion on microscopic length scales, and the diffusion coefficients which can be defined properly only in the long length and time scales. The central role in this is played by the memory effects. We also discuss the concept of diffusion under nonequilibrium conditions. In the third section, a variety of different theoretical approaches that have been employed in studying surface diffusion such as first principles calculations, transition state theory, the Langevin equation, Monte Carlo and molecular dynamics simulations, and path integral formalism...

Journal ArticleDOI
TL;DR: In this paper, the high-energy particle transport code NMTC/JAM was improved for the high energy heavy ion transport calculation by incorporating the JQMD code, the SPAR code and the Shen formula.
Abstract: The high-energy particle transport code NMTC/JAM, which has been developed at JAERI, was improved for high-energy heavy-ion transport calculations by incorporating the JQMD code, the SPAR code and the Shen formula. The new NMTC/JAM, named PHITS (Particle and Heavy-Ion Transport code System), is the first general-purpose heavy-ion transport Monte Carlo code covering incident energies from several MeV/nucleon to several GeV/nucleon.

Journal ArticleDOI
TL;DR: In this paper, an uncertainty quantification scheme was developed for the simulation of stochastic thermofluid processes, which relies on spectral representation of uncertainty using the polynomial chaos (PC) system.

Journal ArticleDOI
TL;DR: The classical particle filter is extended here to the estimation of multiple state processes given realizations of several kinds of observation processes, and the ability of the particle filter to mix different types of observations is made use of.
Abstract: The classical particle filter deals with the estimation of one state process conditioned on a realization of one observation process. We extend it here to the estimation of multiple state processes given realizations of several kinds of observation processes. The new algorithm is used to track with success multiple targets in a bearings-only context, whereas a JPDAF diverges. Making use of the ability of the particle filter to mix different types of observations, we then investigate how to join passive and active measurements for improved tracking.

Proceedings ArticleDOI
03 Jun 2002
TL;DR: This work proposes a mathematical formulation for the notion of optimal projective cluster, starting from natural requirements on the density of points in subspaces, and develops a Monte Carlo algorithm for iteratively computing projective clusters.
Abstract: We propose a mathematical formulation for the notion of optimal projective cluster, starting from natural requirements on the density of points in subspaces. This allows us to develop a Monte Carlo algorithm for iteratively computing projective clusters. We prove that the computed clusters are good with high probability. We implemented a modified version of the algorithm, using heuristics to speed up computation. Our extensive experiments show that our method is significantly more accurate than previous approaches. In particular, we use our techniques to build a classifier for detecting rotated human faces in cluttered images.