
Showing papers on "Probability distribution published in 1998"


Journal ArticleDOI
TL;DR: The Condensation algorithm uses “factored sampling”, previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set.
Abstract: The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent simultaneous alternative hypotheses. The Condensation algorithm uses “factored sampling”, previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set. Condensation uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time.
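The factored-sampling cycle described above can be sketched in a few lines. This is a generic particle-filter step, not the authors' implementation: the 1-D state, random-walk dynamics, and Gaussian observation likelihood are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, dynamics_std, measurement, obs_std):
    """One predict-resample-weight cycle of factored sampling (hypothetical 1-D state)."""
    n = len(particles)
    # Resample in proportion to the current weights (the "factored sampling" step).
    idx = rng.choice(n, size=n, p=weights / weights.sum())
    # Propagate through a simple dynamical model (here: a random walk).
    particles = particles[idx] + rng.normal(0.0, dynamics_std, size=n)
    # Re-weight by the observation likelihood (here: Gaussian around the measurement).
    weights = np.exp(-0.5 * ((particles - measurement) / obs_std) ** 2)
    return particles, weights

particles = rng.normal(0.0, 1.0, size=500)
weights = np.ones(500)
for z in [0.5, 0.7, 0.9]:          # a short synthetic measurement sequence
    particles, weights = condensation_step(particles, weights, 0.1, z, 0.2)
estimate = np.average(particles, weights=weights)
```

Because the posterior is carried as a weighted sample set rather than a single Gaussian, the same machinery can track several competing hypotheses at once.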

5,804 citations


Journal ArticleDOI
TL;DR: The problem of updating a structural model and its associated uncertainties by utilizing dynamic response data is addressed using a Bayesian statistical framework that can handle the inherent ill-conditioning and possible nonuniqueness in model updating applications.
Abstract: The problem of updating a structural model and its associated uncertainties by utilizing dynamic response data is addressed using a Bayesian statistical framework that can handle the inherent ill-conditioning and possible nonuniqueness in model updating applications. The objective is not only to give more accurate response predictions for prescribed dynamic loadings but also to provide a quantitative assessment of this accuracy. In the methodology presented, the updated (optimal) models within a chosen class of structural models are the most probable based on the structural data if all the models are equally plausible a priori. The prediction accuracy of the optimal structural models is given by also updating probability models for the prediction error. The precision of the parameter estimates of the optimal structural models, as well as the precision of the optimal prediction-error parameters, can be examined. A large-sample asymptotic expression is given for the updated predictive probability distribution of the uncertain structural response, which is a weighted average of the prediction probability distributions for each optimal model. This predictive distribution can be used to make model predictions despite possible nonuniqueness in the optimal models.

1,235 citations


Journal ArticleDOI
TL;DR: Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights.
Abstract: Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A/sup 3/ /spl radic/((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
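The error term quoted in the abstract, A^3 sqrt((log n)/m), is easy to evaluate numerically. The parameter values below are illustrative, not from the paper, but they show why the bound depends on the size of the weights far more strongly than on the input dimension.

```python
import math

def bound_term(A, n, m):
    """The A^3 * sqrt(log(n)/m) term from the abstract (log A and log m factors ignored)."""
    return A**3 * math.sqrt(math.log(n) / m)

# Doubling the input dimension n barely moves the bound,
# while doubling the weight-magnitude bound A inflates it eightfold.
base = bound_term(A=2.0, n=100, m=10000)
wider = bound_term(A=2.0, n=200, m=10000)
heavier = bound_term(A=4.0, n=100, m=10000)
```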

1,234 citations


Book
01 Oct 1998
TL;DR: This textbook covers graphical and numerical descriptive techniques, data collection and sampling, probability and probability distributions, and the foundations of statistical inference, including estimation and hypothesis testing.
Abstract: 1. WHAT IS STATISTICS?. Introduction to Statistics. Key Statistical Concepts. How Managers Use Statistics. Statistics and the Computer. World Wide Web and Learning Center. Part I. DESCRIPTIVE TECHNIQUES AND PROBABILITY. 2. Graphical Descriptive Techniques. Introduction. Types of Data. Graphical Techniques for Quantitative Data. Scatter Diagrams. Pie Charts, Bar Charts, and Line Charts. Summary. Case 2.1 Pacific Salmon Catches. Case 2.2 Bombardier Inc. Case 2.3 The North American Free Trade Agreement (NAFTA). Appendix 2.A Minitab Instructions. Appendix 2.B Excel Instructions. 3. Art and Science of Graphical Presentations. Introduction. Graphical Excellence. Graphical Deception. Summary. Case 3.1 Canadian Federal Budget. 4. Numerical Descriptive Measures. Introduction. Measures of Central Location. Measures of Variability. Interpreting Standard Deviation. Measures of Relative Standing and Box Plots. Measures of Association. General Guidelines on the Exploration of Data. Summary. Appendix 4.A Minitab Instructions. Appendix 4.B Summation Notation. 5. Data Collection and Sampling. Introduction. Sources of Data. Sampling. Sampling Plans. Errors Involved in Sampling. Use of Sampling in Auditing. Summary. 6. Probability and Discrete Probability Distributions. Introduction. Assigning Probabilities to Events. Probability Rules and Trees. Random Variables and Probability Distributions. Expected Value and Variance. Bivariate Distributions. Binomial Distribution. Poisson Distribution. Summary. Case 6.1 Let's Make a Deal. Case 6.2 Gains from Market Timing. Case 6.3 Calculating Probabilities Associated with the Stock Market. Appendix 6.A Minitab Instructions. Appendix 6.B Excel Instructions. 7. Continuous Probability Distributions. Introduction. Continuous Probability Distributions. Normal Distribution. Exponential Distribution. Summary. Appendix 7.A Minitab Instructions. Appendix 7.B Excel Instructions. Part II. STATISTICAL INFERENCE. 8. Sampling Distributions. 
Introduction. Sampling Distribution of the Mean. Summary. 9. Introduction to Estimation. Introduction. Concepts of Estimation. Estimating the Population Mean When the Population Variance Is Known. Selecting the Sample Size. Summary. Appendix 9.A Minitab Instructions. Appendix 9.B Excel Instructions. 10. Introduction to Hypothesis Testing. Introduction. Concepts of Hypothesis Testing. Testing the Population Mean When the Population Variance Is Known. The p-Value of a Test of Hypothesis. Calculating the Probability of a Type II Error. The Road Ahead. Summary. Appendix 10.A Minitab Instructions. Appendix 10.B Excel Instructions. 11. Inference about the Description of a Single Population. Introduction. Inference about a Population Mean When the Population Variance Is Unknown. Inference about a Population Variance. Inference about a Population Proportion. The Myth of the Law of Averages. Case 11.1 Number of Uninsured Motorists. Case 11.2 National Patent Development Corporation.

805 citations


01 Jan 1998
TL;DR: This chapter reviews work on Gaussian processes for supervised regression and classification by Williams and Rasmussen (1996), Neal (1997), Barber and Williams (1997), and Gibbs and MacKay (1997), and assesses whether the feedforward network has been superseded for these tasks.
Abstract: Feedforward neural networks such as multilayer perceptrons are popular tools for nonlinear regression and classification problems. From a Bayesian perspective, a choice of a neural network model can be viewed as defining a prior probability distribution over non-linear functions, and the neural network's learning process can be interpreted in terms of the posterior probability distribution over the unknown function. (Some learning algorithms search for the function with maximum posterior probability; Monte Carlo methods instead draw samples from this posterior.) In the limit of large but otherwise standard networks, Neal (1996) has shown that the prior distribution over non-linear functions implied by the Bayesian neural network falls in a class of probability distributions known as Gaussian processes. The hyperparameters of the neural network model determine the characteristic length scales of the Gaussian process. Neal's observation motivates the idea of discarding parameterized networks and working directly with Gaussian processes. Computations in which the parameters of the network are optimized are then replaced by simple matrix operations using the covariance matrix of the Gaussian process. In this chapter I will review work on this idea by Williams and Rasmussen (1996), Neal (1997), Barber and Williams (1997) and Gibbs and MacKay (1997), and will assess whether, for supervised regression and classification tasks, the feedforward network has been superseded.
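The "simple matrix operations" that replace weight optimization are the standard Gaussian-process regression equations. A minimal sketch, assuming a squared-exponential covariance and a small noise term (both illustrative choices):

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance; length_scale plays the role of the
    characteristic length scale mentioned in the abstract."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """GP regression: predictions come from solving linear systems with the
    covariance matrix instead of optimizing network weights."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    cov = rbf_kernel(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)

x = np.linspace(0, 3, 20)
y = np.sin(x)
mean, var = gp_predict(x, y, np.array([1.5]))
```

With dense, low-noise training data the predictive mean interpolates the underlying function closely, and the predictive variance quantifies the remaining uncertainty.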

795 citations


Journal ArticleDOI
TL;DR: The resulting model, called FRAME (Filters, Random fields And Maximum Entropy), is a Markov random field (MRF) model, but with a much enriched vocabulary and hence much stronger descriptive ability than the previous MRF models used for texture modeling.
Abstract: This article presents a statistical theory for texture modeling. This theory combines filtering theory and Markov random field modeling through the maximum entropy principle, and interprets and clarifies many previous concepts and methods for texture analysis and synthesis from a unified point of view. Our theory characterizes the ensemble of images I with the same texture appearance by a probability distribution f(I) on a random field, and the objective of texture modeling is to make inference about f(I), given a set of observed texture examples. In our theory, texture modeling consists of two steps. (1) A set of filters is selected from a general filter bank to capture features of the texture; these filters are applied to observed texture images, and the histograms of the filtered images are extracted. These histograms are estimates of the marginal distributions of f(I). This step is called feature extraction. (2) The maximum entropy principle is employed to derive a distribution p(I), which is restricted to have the same marginal distributions as those in (1). This p(I) is considered as an estimate of f(I). This step is called feature fusion. A stepwise algorithm is proposed to choose filters from a general filter bank. The resulting model, called FRAME (Filters, Random fields And Maximum Entropy), is a Markov random field (MRF) model, but with a much enriched vocabulary and hence much stronger descriptive ability than the previous MRF models used for texture modeling. A Gibbs sampler is adopted to synthesize texture images by drawing typical samples from p(I); the model is thus verified by checking whether the synthesized texture images have similar visual appearances to the texture images being modeled. Experiments on a variety of 1D and 2D textures are described to illustrate our theory and to show the performance of our algorithms.
These experiments demonstrate that many textures previously considered to belong to different categories can be modeled and synthesized in a common framework.

746 citations


Proceedings ArticleDOI
Gary Bradski1
19 Oct 1998
TL;DR: An efficient, new algorithm is described here based on the mean shift algorithm, which robustly finds the mode (peak) of probability distributions within a video scene and is used as an interface for games and graphics.
Abstract: As a step towards a perceptual user interface, an object tracking algorithm is developed and demonstrated tracking human faces. Computer vision algorithms that are intended to form part of a perceptual user interface must be fast and efficient. They must be able to track in real time and yet not absorb a major share of computational resources. An efficient, new algorithm is described here based on the mean shift algorithm. The mean shift algorithm robustly finds the mode (peak) of probability distributions. We first describe histogram based methods of producing object probability distributions. In our case, we want to track the mode of an object's probability distribution within a video scene. Since the probability distribution of the object can change and move dynamically in time, the mean shift algorithm is modified to deal with dynamically changing probability distributions. The modified algorithm is called the Continuously Adaptive Mean Shift (CAMSHIFT) algorithm. CAMSHIFT is then used as an interface for games and graphics.
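The core mean shift idea, climbing the density by repeatedly moving to the local mean, can be sketched in 1-D. This is a generic flat-kernel version for intuition, not the CAMSHIFT implementation, which additionally adapts its search window to the dynamically changing distribution.

```python
import numpy as np

def mean_shift_mode(samples, start, bandwidth=1.0, iters=50):
    """Find a mode of the sample density by iterated local averaging,
    the core of the mean shift procedure (flat kernel, 1-D sketch)."""
    x = start
    for _ in range(iters):
        window = samples[np.abs(samples - x) < bandwidth]
        if len(window) == 0:
            break
        x = window.mean()          # shift the window to the local mean
    return x

rng = np.random.default_rng(1)
# A bimodal "probability distribution" with the taller peak near 5.
samples = np.concatenate([rng.normal(0, 0.5, 300), rng.normal(5, 0.5, 700)])
mode = mean_shift_mode(samples, start=4.0)
```

Starting the search near one peak converges to that peak; in tracking, the previous frame's mode seeds the next frame's search.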

676 citations


Journal ArticleDOI
TL;DR: This edited volume surveys heavy-tailed probability distributions and their applications, covering network traffic and finance, time series with stable noise, tail-index estimation, regression with heavy-tailed errors, signal processing, and numerical procedures for stable distributions.
Abstract: Part 1 Applications: heavy tailed probability distributions in the World Wide Web, M.E. Crovella et al self-similarity and heavy tails - structural modelling of network traffic, W. Willinger et al heavy tails in high-frequency financial data, U.A. Muller et al stable Paretian modelling in finance - some empirical and theoretical aspects, S. Mittnik et al risk management and quantile estimation, F. Bassi et al. Part 2 Time series: analyzing stable time series, R.J. Adler et al inference for linear processes with stable noise, M. Calder, R.A. Davis on estimating the intensity of long-range dependence in finite and infinite variance time series, M.S. Taqqu, V. Teverovsky why non-linearities can ruin the heavy tailed modeller's day, S.I. Resnick periodogram estimates from heavy-tailed data, T. Mikosch Bayesian inference for time series with infinite variance stable innovations, N. Ravishanker, Z. Qiou. Part 3 Heavy tail estimation: hill, bootstrap and jackknife estimators for heavy tails, O.V. Pictet et al characteristic function based estimation of stable distribution parameters, S.M. Kogan, D.B. Williams. Part 4 Regression: bootstrapping signs and permutations for regression with heavy tailed errors - a robust resampling, R. LePage et al linear regression with stable disturbances, J.H. McCulloch. Part 5 Signal processing: deviation from normality in statistical signal processing - parameter estimation with alpha-stable distributions, P. Tsakalides, C.L. Nikias statistical modelling and receiver design for multi-user communication networks, G.A. Tsihrintzis. Part 6 Model structures: subexponential distributions, C.M. Goldie, C. Kluppelberg structure of stationary stable processes, J. Rosinski tail behaviour of some shot noise processes, G. Samorodnitsky. Part 7 Numerical procedures: numerical approximation of the symmetric stable distribution and density, J.H. McCulloch table of the maximally-skewed stable distributions, J.H. McCulloch, D.B. Panton multivariate stable distributions - approximation, estimation, simulation and identification, J.P. Nolan univariate stable distributions - parametrizations and software, J.P. Nolan.

500 citations


Journal ArticleDOI
TL;DR: The approach is illustrated with examples of models of genetic and biochemical phenomena where the UltraSAN package is used to present results from numerical analysis and the outcome of simulations.
Abstract: An integrated understanding of molecular and developmental biology must consider the large number of molecular species involved and the low concentrations of many species in vivo. Quantitative stochastic models of molecular interaction networks can be expressed as stochastic Petri nets (SPNs), a mathematical formalism developed in computer science. Existing software can be used to define molecular interaction networks as SPNs and solve such models for the probability distributions of molecular species. This approach allows biologists to focus on the content of models and their interpretation, rather than their implementation. The standardized format of SPNs also facilitates the replication, extension, and transfer of models between researchers. A simple chemical system is presented to demonstrate the link between stochastic models of molecular interactions and SPNs. The approach is illustrated with examples of models of genetic and biochemical phenomena where the UltraSAN package is used to present results from numerical analysis and the outcome of simulations.
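One way to see the link between stochastic molecular models and the probability distributions discussed above is Gillespie-style simulation of a single decay reaction. This is a generic illustration, not the SPN/UltraSAN workflow the paper describes, and the rate constant and molecule count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def gillespie_decay(n0, k, t_max):
    """Gillespie-style stochastic simulation of a single decay reaction A -> 0
    with rate constant k (a minimal stand-in for an SPN transition firing)."""
    t, n = 0.0, n0
    while n > 0:
        rate = k * n                       # propensity of the decay transition
        t += rng.exponential(1.0 / rate)   # waiting time to the next firing
        if t > t_max:
            break
        n -= 1                             # the transition consumes one molecule
    return n

# Distribution of remaining molecules at t = 1 over many runs;
# the sample mean should approach the deterministic n0 * exp(-k t).
counts = [gillespie_decay(n0=100, k=1.0, t_max=1.0) for _ in range(2000)]
```

Repeating the simulation yields the full probability distribution of molecule counts, which matters precisely when in vivo copy numbers are low.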

444 citations



Journal ArticleDOI
TL;DR: This book presents generalized linear modelling for discrete data, the fitting and comparison of probability distributions, growth curves, time series, survival data, event histories, spatial data, normal models, and dynamic models.
Abstract: Generalized Linear Modelling.- Discrete Data.- Fitting and Comparing Probability Distributions.- Growth Curves.- Time Series.- Survival Data.- Event Histories.- Spatial Data.- Normal Models.- Dynamic Models.

Journal ArticleDOI
TL;DR: In this article, the probability distribution of stock price changes is studied by analyzing a database (the Trades and Quotes Database) documenting every trade for all stocks in three major US stock markets, for the two year period Jan 1994 -- Dec 1995.
Abstract: The probability distribution of stock price changes is studied by analyzing a database (the Trades and Quotes Database) documenting every trade for all stocks in three major US stock markets over the two-year period Jan 1994 -- Dec 1995. A sample of 40 million data points is extracted, which is substantially larger than studied hitherto. We find an asymptotic power-law behavior for the cumulative distribution with an exponent alpha approximately 3, well outside the Lévy regime 0 < alpha < 2.
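A common way to estimate such a tail exponent (not necessarily the procedure used in the paper, which fits the cumulative distribution directly) is the Hill estimator. Here it is checked on synthetic Pareto data with a known exponent of 3.

```python
import numpy as np

def hill_estimator(data, k):
    """Hill estimate of the tail exponent alpha from the k largest observations."""
    tail = np.sort(np.abs(data))[-k:]
    return 1.0 / np.mean(np.log(tail / tail[0]))

rng = np.random.default_rng(3)
# Pareto-distributed synthetic "price changes" with true tail exponent alpha = 3.
alpha = 3.0
x = rng.pareto(alpha, size=200_000) + 1.0   # numpy's pareto is shifted; +1 gives classical Pareto
alpha_hat = hill_estimator(x, k=2000)
```

An estimate near 3 would, as in the paper, place the tail outside the Lévy-stable range 0 < alpha < 2.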

Journal ArticleDOI
TL;DR: A stochastic search form of classification and regression tree (CART) analysis is proposed, motivated by a Bayesian model and an approximation to a probability distribution over the space of possible trees is explored.
Abstract: A stochastic search form of classification and regression tree (CART) analysis (Breiman et al., 1984) is proposed, motivated by a Bayesian model. An approximation to a probability distribution over the space of possible trees is explored using reversible jump Markov chain Monte Carlo methods (Green, 1995).

Journal ArticleDOI
TL;DR: In this article, the authors developed a general formulation for stereological analysis of particle distributions which is applicable to any particle size or size distribution (not limited to log-normal, unimodal, etc.).

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new model for calculating VaR where the user is free to choose any probability distributions for daily changes in the market variables and parameters of the probability distributions are subject to updating schemes such as GARCH.
Abstract: This paper proposes a new model for calculating VaR where the user is free to choose any probability distributions for daily changes in the market variables and parameters of the probability distributions are subject to updating schemes such as GARCH. Transformations of the probability distributions are assumed to be multivariate normal. The model is appealing in that the calculation of VaR is relatively straightforward and can make use of the RiskMetrics or a similar database. We test a version of the model using nine years of daily data on 12 different exchange rates. When the first half of the data is used to estimate the model’s parameters we find that it provides a good prediction of the distribution of daily changes in the second half of the data.
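The GARCH-style updating scheme the abstract mentions can be sketched as follows. The coefficients and the 99% normal quantile are illustrative assumptions, not the paper's fitted values.

```python
import math

def garch_update(sigma2, ret, omega=1e-6, alpha=0.05, beta=0.94):
    """GARCH(1,1) variance update, the kind of scheme the abstract mentions for
    keeping distribution parameters current (coefficients are illustrative)."""
    return omega + alpha * ret**2 + beta * sigma2

def one_day_var(sigma2, confidence_z=2.326):
    """99% one-day VaR under a (locally) normal daily-change assumption."""
    return confidence_z * math.sqrt(sigma2)

sigma2 = 1e-4                 # start at 1% daily volatility
for r in [0.01, -0.02, 0.015]:
    sigma2 = garch_update(sigma2, r)
var99 = one_day_var(sigma2)
```

The large -2% return raises the conditional variance, so the VaR estimate responds to recent market turbulence rather than staying fixed.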

Proceedings ArticleDOI
04 May 1998
TL;DR: The cGA represents the population as a probability distribution over the set of solutions, and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover.
Abstract: This paper introduces the "compact genetic algorithm" (cGA). The cGA represents the population as a probability distribution over the set of solutions, and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover. It processes each gene independently and requires less memory than the simple GA.
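The abstract's description translates almost directly into code: keep one probability per gene, sample two individuals, and nudge each probability toward the tournament winner. A minimal sketch on the OneMax problem (the population size, step count, and test function are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def compact_ga(fitness, n_bits, pop_size=100, steps=5000):
    """Compact GA: the population is just a probability vector p,
    where p[i] is the probability that bit i is 1 in a sampled individual."""
    p = np.full(n_bits, 0.5)
    for _ in range(steps):
        a = (rng.random(n_bits) < p).astype(int)   # sample two individuals
        b = (rng.random(n_bits) < p).astype(int)
        if fitness(a) < fitness(b):
            a, b = b, a                            # a is now the winner
        # Shift the distribution toward the winner by 1/pop_size per differing bit.
        p += (a - b) / pop_size
        p = np.clip(p, 0.0, 1.0)
    return p

# OneMax: fitness = number of ones; the distribution should converge toward all ones.
p = compact_ga(lambda x: x.sum(), n_bits=20)
```

Memory use is one float per gene instead of `pop_size` individuals, which is the point of the representation.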

Journal ArticleDOI
TL;DR: In this paper, new probability inequalities involving the ordered components of an $MTP_2$ random vector are derived, which provide an analytical proof of an important conjecture in the field of multiple hypothesis testing.
Abstract: Some new probability inequalities involving the ordered components of an $MTP_2$ random vector are derived, which provide an analytical proof of an important conjecture in the field of multiple hypothesis testing. This conjecture has been mostly validated so far using simulation.

Journal ArticleDOI
TL;DR: The accuracy of short-range probabilistic forecasts of quantitative precipitation from the experimental Eta-Regional Spectral Model ensemble is compared with the accuracy of forecasts from the Nested Grid Model's model output statistics (MOS) over a set of 13 case days from September 1995 through January 1996.
Abstract: The accuracy of short-range probabilistic forecasts of quantitative precipitation (PQPF) from the experimental Eta-Regional Spectral Model ensemble is compared with the accuracy of forecasts from the Nested Grid Model's model output statistics (MOS) over a set of 13 case days from September 1995 through January 1996. Ensembles adjusted to compensate for deficiencies noted in prior forecasts were found to be more skillful than MOS for all precipitation categories except the basic probability of measurable precipitation. Gamma distributions fit to the corrected ensemble probability distributions provided an additional small improvement. Interestingly, despite the favorable comparison with MOS forecasts, this ensemble configuration showed no ability to "forecast the forecast skill" of precipitation; that is, the ensemble was not able to forecast the variable specificity of the ensemble probability distribution from day to day and location to location. Probability forecasts from gamma distributions developed as a function of the ensemble mean alone were as skillful at PQPF as forecasts from distributions whose specificity varied with the spread of the ensemble. Since forecasters desire information on forecast uncertainty from the ensemble, these results suggest that future ensemble configurations should be checked carefully for their presumed ability to forecast uncertainty.
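Fitting a gamma distribution to an ensemble can be done by matching moments. This is a simple stand-in for the paper's fitting procedure, shown with a made-up eight-member precipitation ensemble.

```python
import numpy as np

def gamma_from_ensemble(members):
    """Method-of-moments gamma fit to an ensemble of precipitation forecasts
    (a simple stand-in for the fitting procedure described in the abstract)."""
    mean, var = np.mean(members), np.var(members)
    shape = mean**2 / var
    scale = var / mean
    return shape, scale

# Hypothetical eight-member ensemble of precipitation amounts (mm).
ensemble = np.array([0.5, 1.2, 2.0, 0.8, 1.5, 3.1, 0.9, 1.1])
shape, scale = gamma_from_ensemble(ensemble)
# By construction, shape * scale recovers the ensemble mean.
```

Probabilities of exceeding any precipitation threshold then follow from the fitted gamma CDF rather than from raw member counts.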

Journal ArticleDOI
TL;DR: Simulations indicate that both the adaptive and nonadaptive versions of this operator are capable of producing solutions that are statistically as good as, or better, than those produced when using Gaussian or Cauchy mutations alone.
Abstract: Traditional investigations with evolutionary programming for continuous parameter optimization problems have used a single mutation operator with a parametrized probability density function (PDF), typically a Gaussian. Using a variety of mutation operators that can be combined during evolution to generate PDFs of varying shapes could hold the potential for producing better solutions with less computational effort. In view of this, a linear combination of Gaussian and Cauchy mutations is proposed. Simulations indicate that both the adaptive and nonadaptive versions of this operator are capable of producing solutions that are statistically as good as, or better, than those produced when using Gaussian or Cauchy mutations alone.
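A minimal sketch of the proposed operator, assuming an equal 0.5/0.5 weighting of the two components (the paper also considers adaptive weights):

```python
import numpy as np

rng = np.random.default_rng(5)

def mixed_mutation(x, sigma=1.0, weight=0.5):
    """Linear combination of Gaussian and Cauchy perturbations, in the spirit of
    the operator described above (the equal weighting is illustrative)."""
    gauss = rng.normal(0.0, sigma, size=x.shape)
    cauchy = rng.standard_cauchy(size=x.shape) * sigma
    return x + weight * gauss + (1.0 - weight) * cauchy

x = np.zeros(5)
y = mixed_mutation(x)
```

The Cauchy component's heavy tails permit occasional long jumps out of local optima, while the Gaussian component supplies fine-grained local search.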

Journal ArticleDOI
C. Gawron1
TL;DR: An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented and a queuing model is used for performance reasons.
Abstract: An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.
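The per-driver probability update can be illustrated with a softmax-style rule that shifts mass toward cheaper routes. This is a hypothetical scheme in the spirit of the abstract, not the author's exact cost-minimizing update.

```python
import numpy as np

def update_route_probs(probs, costs, beta=0.1):
    """Shift a driver's route-choice distribution toward cheaper routes;
    an illustrative exponential reweighting, not the paper's exact scheme."""
    new = probs * np.exp(-beta * np.asarray(costs, dtype=float))
    return new / new.sum()

# Two routes; route 0 is consistently cheaper, so its probability grows.
probs = np.array([0.5, 0.5])
for _ in range(20):
    probs = update_route_probs(probs, costs=[1.0, 2.0])
```

In the full algorithm the costs would come from each simulation run, so the distribution and the simulated traffic co-evolve toward an equilibrium.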


Journal ArticleDOI
TL;DR: Empirical justifications for the lognormal, Rayleigh and Suzuki (1977) probability density functions in multipath fading channels are examined by quantifying the rates of convergence of the central limit theorem (CLT) for the addition and multiplication of random variables.
Abstract: Empirical justifications for the lognormal, Rayleigh and Suzuki (1977) probability density functions in multipath fading channels are examined by quantifying the rates of convergence of the central limit theorem (CLT) for the addition and multiplication of random variables. The accuracy of modeling the distribution of rays which experience multiple reflections/diffractions between transmitter and receiver as lognormal is quantified. In addition, it is shown that the vector sum of lognormal rays, such as in a narrow-band signal envelope, may best be approximated as being either Rayleigh, lognormal or Suzuki distributed depending on the fading channel conditions. These conditions are defined statistically.

Posted Content
TL;DR: In this paper, the authors extend the two-stage approach by allowing the probability weighting function to depend on the type of uncertainty, and derive relations between decision weights, probability judgments, and probability weighting under uncertainty.

Abstract: Decision weights are an important component in recent theories for decision making under uncertainty. To better explain these decision weights, a two-stage approach has been proposed: first, the probability of an event is judged, and then this probability is transformed by the probability weighting function known from decision making under risk. We extend the two-stage approach by allowing the probability weighting function to depend on the type of uncertainty. Using this more general approach, certain properties of decision weights can be attributed to certain properties of probability judgments and/or to certain properties of probability weighting. After deriving these relations between decision weights, probability judgments and probability weighting under uncertainty, we present an empirical study which shows that it is indeed necessary to allow the probability weighting function to be source dependent. The analysis includes an examination of properties of the probability weighting function under uncertainty that have not previously been considered.
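A two-stage decision weight is the judged probability passed through a weighting function whose shape may depend on the source of uncertainty. The sketch below uses the Prelec form w(p) = exp(-(-ln p)^gamma) purely for illustration; the paper does not commit to this parametric family.

```python
import math

def prelec_weight(p, gamma):
    """Prelec probability weighting function w(p) = exp(-(-ln p)^gamma);
    one common parametric form, used here purely for illustration."""
    return math.exp(-(-math.log(p)) ** gamma)

def two_stage_weight(judged_prob, gamma):
    """Two-stage decision weight: a judged probability transformed by a
    weighting function whose curvature (gamma) may vary with the source."""
    return prelec_weight(judged_prob, gamma)

# With gamma < 1, small probabilities are overweighted and large ones underweighted.
w_small = two_stage_weight(0.01, gamma=0.5)
w_large = two_stage_weight(0.9, gamma=0.5)
```

Source dependence then amounts to fitting a different gamma (or a different functional form altogether) for each type of uncertainty.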

Journal ArticleDOI
TL;DR: In this article, the P-model of chance constrained programming is extended to stochastic inputs and outputs via probabilistic input-output vector comparisons in a given empirical production (possibility) set.
Abstract: Pareto-Koopmans efficiency in Data Envelopment Analysis (DEA) is extended to stochastic inputs and outputs via probabilistic input-output vector comparisons in a given empirical production (possibility) set. In contrast to other approaches which have used Chance Constrained Programming formulations in DEA, the emphasis here is on “joint chance constraints.” An assumption of arbitrary but known probability distributions leads to the P-Model of chance constrained programming. A necessary condition for a DMU to be stochastically efficient and a sufficient condition for a DMU to be non-stochastically efficient are provided. Deterministic equivalents using the zero order decision rules of chance constrained programming and multivariate normal distributions take the form of an extended version of the additive model of DEA. Contacts are also maintained with all of the other presently available deterministic DEA models in the form of easily identified extensions which can be used to formalize the treatment of efficiency when stochastic elements are present.

Journal ArticleDOI
TL;DR: In this paper, a signal detection model based on the extreme value distribution has been proposed to yield unit slope receiver operating characteristic (ROC) curves for several classic data sets that are commonly given as examples of normal or logistic ROC curves with slopes that differ from unity.
Abstract: Generalized linear models are a general class of regressionlike models for continuous and categorical response variables. Signal detection models can be formulated as a subclass of generalized linear models, and the result is a rich class of signal detection models based on different underlying distributions. An example is a signal detection model based on the extreme value distribution. The extreme value model is shown to yield unit slope receiver operating characteristic (ROC) curves for several classic data sets that are commonly given as examples of normal or logistic ROC curves with slopes that differ from unity. The result is an additive model with a simple interpretation in terms of a shift in the location of an underlying distribution. The models can also be extended in several ways, such as to recognize response dependencies, to include random coefficients, or to allow for more general underlying probability distributions.

Journal ArticleDOI
TL;DR: In this article, the authors presented an acceptance/rejection sampling approach for the Markov chain Monte Carlo (MCMC) approximate sampling problem, which is based on the same idea as the Propp-Wilson algorithm and eliminates user-impatience bias.
Abstract: For a large class of examples arising in statistical physics known as attractive spin systems (e.g., the Ising model), one seeks to sample from a probability distribution $\pi$ on an enormously large state space, but elementary sampling is ruled out by the infeasibility of calculating an appropriate normalizing constant. The same difficulty arises in computer science problems where one seeks to sample randomly from a large finite distributive lattice whose precise size cannot be ascertained in any reasonable amount of time. The Markov chain Monte Carlo (MCMC) approximate sampling approach to such a problem is to construct and run "for a long time" a Markov chain with long-run distribution $\pi$. But determining how long is long enough to get a good approximation can be both analytically and empirically difficult. Recently, Propp and Wilson have devised an ingenious and efficient algorithm to use the same Markov chains to produce perfect (i.e., exact) samples from $\pi$. However, the running time of their algorithm is an unbounded random variable whose order of magnitude is typically unknown a priori and which is not independent of the state sampled, so a naive user with limited patience who aborts a long run of the algorithm will introduce bias. We present a new algorithm which (1) again uses the same Markov chains to produce perfect samples from $\pi$, but is based on a different idea (namely, acceptance/rejection sampling); and (2) eliminates user-impatience bias. Like the Propp-Wilson algorithm, the new algorithm applies to a general class of suitably monotone chains, and also (with modification) to "anti-monotone" chains. When the chain is reversible, naive implementation of the algorithm uses fewer transitions but more space than Propp-Wilson. 
When fine-tuned and applied with the aid of a typical pseudorandom number generator to an attractive spin system on n sites using a random site updating Gibbs sampler whose mixing time $\tau$ is polynomial in n, the algorithm runs in time of the same order (bound) as Propp-Wilson [expectation $O(\tau \log n)$] and uses only logarithmically more space [expectation $O(n \log n)$, vs. $O(n)$ for Propp-Wilson].
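The acceptance/rejection idea on which the new algorithm is built can be illustrated in isolation. The sketch below draws exact samples from an unnormalized discrete distribution $\pi$ without ever computing its normalizing constant; it shows only the rejection principle, not the paper's Markov-chain construction, and the weights are made up for illustration.

```python
import random

# Toy acceptance/rejection sampler for an unnormalized discrete
# distribution pi: propose uniformly, accept with pi(x) / max(pi).
# Accepted draws are exact samples from pi, and the normalizing
# constant is never needed.
def rejection_sample(unnormalized, rng=random.random):
    states = list(unnormalized)
    m = max(unnormalized.values())            # envelope constant M
    while True:
        x = states[int(rng() * len(states))]  # uniform proposal
        if rng() < unnormalized[x] / m:       # accept w.p. pi(x)/M
            return x

# Usage: weights 3:1 should reproduce a 3:1 ratio of draws.
random.seed(0)
weights = {"up": 3.0, "down": 1.0}
counts = {"up": 0, "down": 0}
for _ in range(10000):
    counts[rejection_sample(weights)] += 1
```

Note that the loop has a random running time, which is exactly the kind of behavior whose state-dependence the paper is careful to decouple from the sample.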

Journal ArticleDOI
TL;DR: In this article, the authors investigate a model of granular matter consisting of $N$ Brownian particles on a line, subject to inelastic mutual collisions, and show that it displays a genuine thermodynamic limit for the mean values of the energy and the energy dissipation.
Abstract: We investigate the properties of a model of granular matter consisting of $N$ Brownian particles on a line, subject to inelastic mutual collisions. This model displays a genuine thermodynamic limit for the mean values of the energy, and the energy dissipation. When the typical relaxation time $\ensuremath{\tau}$ associated with the Brownian process is small compared with the mean collision time ${\ensuremath{\tau}}_{c}$ the spatial density is nearly homogeneous and the velocity probability distribution is Gaussian. In the opposite limit $\ensuremath{\tau}\ensuremath{\gg}{\ensuremath{\tau}}_{c}$ one has strong spatial clustering, with a fractal distribution of particles, and the velocity probability distribution strongly deviates from the Gaussian one.
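The model's two ingredients can be sketched directly: an Ornstein-Uhlenbeck (Brownian) thermostat acting on each velocity with relaxation time $\tau$, and inelastic binary collisions that damp the relative velocity by a restitution coefficient $r < 1$. The functions and parameter values below are illustrative, not taken from the paper.

```python
import random

def inelastic_collision(v1, v2, r):
    # 1D inelastic collision: center-of-mass velocity is conserved,
    # relative velocity is reversed and damped by the factor r.
    vc = 0.5 * (v1 + v2)
    g = 0.5 * (v1 - v2)
    return vc - r * g, vc + r * g

def ou_kick(v, tau, temperature, dt, rng):
    # Euler step of the Brownian driving:
    # dv = -(v / tau) dt + sqrt(2 T / tau) dW
    return v - (v / tau) * dt + (2.0 * temperature / tau * dt) ** 0.5 * rng.gauss(0.0, 1.0)

# Usage: a head-on collision with r = 0.8 conserves momentum
# while dissipating kinetic energy.
rng = random.Random(0)
v1, v2 = inelastic_collision(1.0, -1.0, 0.8)
```

The competition the abstract describes is between $\tau$ (how fast `ou_kick` re-heats the velocities toward a Gaussian) and the mean collision time (how fast `inelastic_collision` drains energy and correlates neighbors).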

Journal ArticleDOI
TL;DR: For a very general class of probability distributions in disordered Ising spin systems, the authors prove, in the thermodynamical limit, the following property for overlaps among real replicas.
Abstract: For a very general class of probability distributions in disordered Ising spin systems, in the thermodynamical limit, we prove the following property for overlaps among real replicas. Consider the overlaps among s replicas. Add one replica s + 1. Then the overlap $q_{a,s+1}$ between one of the first s replicas, let us say a, and the added replica s + 1 is either independent of the former overlaps, or it is identical to one of the overlaps $q_{ab}$, with b running among the first s replicas, excluding a. Each of these cases has equal probability.
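This property is what is now known as the Ghirlanda-Guerra identities. A sketch in standard notation (the symbols are conventional, not copied from the paper): for any bounded function $f$ of the overlaps among the first $s$ replicas,

```latex
\mathbb{E}\left[\, q_{a,s+1}\, f \,\right]
  = \frac{1}{s}\, \mathbb{E}\left[\, q_{12} \,\right] \mathbb{E}\left[\, f \,\right]
  + \frac{1}{s} \sum_{b \neq a}^{s} \mathbb{E}\left[\, q_{ab}\, f \,\right],
```

where the first term corresponds to the "independent" case and each summand to the case in which $q_{a,s+1}$ coincides with an existing overlap $q_{ab}$, each case carrying weight $1/s$.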

Journal ArticleDOI
TL;DR: This work investigates nonuniform direction choice and shows that, under regularity conditions on the region S and target distribution G, there exists a unique direction choice distribution, characterized by necessary and sufficient conditions depending on S and G, which optimizes the Doob bound on rate of convergence.
Abstract: Hit-and-Run algorithms are Monte Carlo procedures for generating points that are asymptotically distributed according to general absolutely continuous target distributions G over open bounded regions S. Applications include nonredundant constraint identification, global optimization, and Monte Carlo integration. These algorithms are reversible random walks that commonly incorporate uniformly distributed step directions. We investigate nonuniform direction choice and show that, under regularity conditions on the region S and target distribution G, there exists a unique direction choice distribution, characterized by necessary and sufficient conditions depending on S and G, which optimizes the Doob bound on rate of convergence. We include computational results demonstrating greatly accelerated convergence for this optimizing direction choice as well as for more easily implemented adaptive heuristic rules.
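The baseline the paper improves on, a Hit-and-Run walk with uniformly distributed step directions, is easy to sketch for a concrete choice of region and target. The example below samples the uniform distribution on the unit disk (a uniform target makes the along-chord draw uniform); the region, dimension, and chain length are chosen for illustration only.

```python
import math
import random

def hit_and_run_step(x, rng):
    # Pick a uniformly distributed direction (normalized Gaussian).
    d = [rng.gauss(0.0, 1.0) for _ in x]
    norm = math.sqrt(sum(c * c for c in d))
    d = [c / norm for c in d]
    # The chord {x + t d : |x + t d| <= 1} solves a quadratic in t.
    xd = sum(a * b for a, b in zip(x, d))
    xx = sum(a * a for a in x)
    disc = math.sqrt(xd * xd + 1.0 - xx)
    # Uniform target => draw t uniformly along the chord.
    t = rng.uniform(-xd - disc, -xd + disc)
    return [a + t * b for a, b in zip(x, d)]

# Usage: run the chain and collect samples from the disk.
rng = random.Random(1)
x = [0.0, 0.0]
samples = []
for _ in range(5000):
    x = hit_and_run_step(x, rng)
    samples.append(x)
```

For a non-uniform target G, the draw along the chord would instead follow the restriction of G to the chord; the paper's contribution concerns replacing the uniform direction distribution with an optimized one.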

Journal ArticleDOI
TL;DR: In this article, a two-stage methodology for assessment of reliability of water distribution networks recognizing uncertainties in nodal demands, pipe capacity, reservoir/tank levels, and availability of system components is proposed.
Abstract: A two-stage methodology for assessment of reliability of water distribution networks recognizing uncertainties in nodal demands, pipe capacity, reservoir/tank levels, and availability of system components is proposed. In the first stage, the probability distribution functions for nodal heads are derived from a linearized hydraulic model on the basis of known probability distribution functions of the nodal demands, pipe roughnesses, and reservoir/tank levels. The effects of nonlinearity in the network hydraulic model are accounted for in this step by partitioning the nodal demands into a number of categories or intervals. The probability of supply failure calculated in this first stage assumes that the system state or configuration is fixed, e.g., certain pipes and pumps are specified as being out of operation; that the pipe capacity varies randomly; and that demands vary around a specified level. The second step involves combining this probability with the probabilities of different system configurations and demand levels to generate the overall reliability measures for the whole system or a particular area of the system.
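The second-stage combination step is a total-probability sum: the overall failure probability weights each first-stage conditional failure probability by the probability of the corresponding system configuration and demand level. The sketch below shows only this bookkeeping; all probabilities are invented for illustration and would in practice come from the linearized hydraulic model and the component availability data.

```python
def overall_failure_probability(config_probs, demand_probs, cond_fail):
    # cond_fail[(config, demand)] = P(supply failure | config, demand),
    # as produced by the first-stage hydraulic analysis.
    total = 0.0
    for cfg, p_cfg in config_probs.items():
        for dem, p_dem in demand_probs.items():
            total += p_cfg * p_dem * cond_fail[(cfg, dem)]
    return total

# Usage with made-up numbers: one pump-outage configuration and
# two demand categories.
configs = {"all_up": 0.95, "pump_down": 0.05}
demands = {"average": 0.7, "peak": 0.3}
cond = {("all_up", "average"): 0.001, ("all_up", "peak"): 0.02,
        ("pump_down", "average"): 0.10, ("pump_down", "peak"): 0.60}
p_fail = overall_failure_probability(configs, demands, cond)
```

The same sum, restricted to the nodes of interest, yields the reliability measure for a particular area of the system rather than the whole network.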