
Showing papers on "Probability distribution published in 1986"


Book
01 Feb 1986
TL;DR: Tabular and Pictorial Methods for Describing Data; Numerical Summary Measures; Summarizing Bivariate Data; Probability; Random Variables and Discrete Probability Distributions; Continuous Probability Distributions; Sampling Distributions; Estimation Using a Single Sample; Inferences Using Two Independent Samples; Inferences Using Paired Data; Simple Linear Regression and Correlation; Multiple Regression Analysis; The Analysis of Categorical Data and Goodness-of-Fit Methods; Sampling Techniques, as discussed by the authors.
Abstract: Tabular and Pictorial Methods for Describing Data; Numerical Summary Measures; Summarizing Bivariate Data; Probability; Random Variables and Discrete Probability Distributions; Continuous Probability Distributions; Sampling Distributions; Estimation Using a Single Sample; Inferences Using Two Independent Samples; Inferences Using Paired Data; Simple Linear Regression and Correlation; Multiple Regression Analysis; The Analysis of Categorical Data and Goodness-of-Fit Methods; Sampling Techniques.

678 citations


Journal ArticleDOI
TL;DR: It is shown in [1] that the Tiling problem with uniform distribution of instances has no polynomial "on average" algorithm, unless every NP-problem with every simple probability distribution has one.
Abstract: Many interesting combinatorial problems were found to be NP-complete. Since there is little hope to solve them fast in the worst case, researchers look for algorithms which are fast just "on average." This matter is sensitive to the choice of a particular NP-complete problem and a probability distribution of its instances. Some of these tasks are easy and some are not. But one needs a way to distinguish the "difficult on average" problems. Such negative results could not only save "positive" efforts but may also be used in areas (like cryptography) where hardness of some problems is a frequent assumption. It is shown in [1] that the Tiling problem with uniform distribution of instances has no polynomial "on average" algorithm, unless every NP-problem with every simple probability distribution has one.

520 citations


Journal ArticleDOI
Stephen E. Levinson1
TL;DR: The solution proposed here is to replace the probability distributions of duration with continuous probability density functions to form a continuously variable duration hidden Markov model (CVDHMM); the gamma distribution is ideally suited to specification of the durational density.

512 citations


Journal ArticleDOI
TL;DR: Circular holes are introduced to characterize pores in the RSA configuration; there is a direct correspondence between vertices of the VD network and these holes, and also between direct/indirect geometrical neighbors and these holes, and the hole size distribution is found to be a parabola.
Abstract: By sequentially adding line segments to a line or disks to a surface at random positions without overlaps, we obtain configurations of the one- and two-dimensional random sequential adsorption (RSA) problem. We have simulated the one- and two-dimensional problem with periodic boundary condition. The one-dimensional simulations are compared with the exact analytical solutions to give an estimate of the accuracy of the simulation. In two dimensions the geometrical properties of the RSA configuration are discussed and in addition known results of the RSA process are reproduced. Various statistical distributions of the Voronoi-Dirichlet (VD) network corresponding to the RSA disk configuration are analyzed. In order to characterize pores in the RSA configuration, we introduce circular holes. There is a direct correspondence between vertices of the VD network and these holes, and also between direct/indirect geometrical neighbors and these holes. The hole size distribution is found to be a parabola. We also find general relations that connect the asymptotic behavior of the surface coverage, the correlation function, and the hole size distribution.
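
As a concrete illustration of the one-dimensional process, below is a minimal brute-force sketch (not the authors' simulation code): unit segments are parked at uniformly random positions and accepted only if they overlap no earlier segment, and the process stops after a long run of consecutive rejections. The line length, the stopping rule, and the omission of periodic boundaries are simplifying assumptions; for a long line the coverage approaches Rényi's parking constant, roughly 0.7476.

```python
import random

def rsa_1d(line_length=200.0, seg_len=1.0, max_failures=20000):
    """Random sequential adsorption of non-overlapping segments on a line.

    Segments of length seg_len are dropped at uniformly random positions;
    a drop is accepted only if it overlaps no previously placed segment.
    Stopping after max_failures consecutive rejections approximates the
    jammed state.  Returns the fraction of the line covered.
    """
    placed = []          # left endpoints of accepted segments
    failures = 0
    while failures < max_failures:
        x = random.uniform(0.0, line_length - seg_len)
        if all(abs(x - y) >= seg_len for y in placed):
            placed.append(x)
            failures = 0
        else:
            failures += 1
    return len(placed) * seg_len / line_length

print(rsa_1d())   # tends toward ~0.7476 (Renyi's constant) as the line grows
```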

431 citations


Book
28 Nov 1986
TL;DR: This book presents a preliminary view of econometric modelling and then develops its probabilistic and statistical foundations, covering probability theory, statistical inference, the Gauss linear model, the linear regression model, and the multivariate normal distribution.
Abstract: Foreword David Hendry Preface Acknowledgements Part I. Introduction: 1. Econometric modelling, a preliminary view 2. Descriptive study of data Part II. Probability Theory: 3. Probability 4. Random variables and probability distributions 5. Random vectors and their distributions 6. Functions of random variables 7. The general notion of expectation 8. Stochastic processes 9. Limit theorems 10. Introduction to asymptotic theory Part III. Statistical Inferences: 11. The nature of statistical inference 12. Estimation I - properties of estimators 13. Estimation II - methods 14. Hypothesis testing and confidence regions 15. The multivariate normal distribution 16. Asymptotic test procedures Part IV. The Linear Regression and Related Statistical Models: 17. Statistical models in econometrics 18. The Gauss linear model 19. The linear regression model I - specification, estimation and testing 20. the linear regression model II - departures from the assumptions underlying the statistical GM 21. The linear regression model III- departures from the assumptions underlying the probability model 22. The linear regression model IV - departures from the sampling model assumption 23. The dynamic linear regression model 24. The multivariate linear regression model 25. The simultaneous equations model 26. Epilogue: towards a methodology of econometric modelling References Index.

391 citations


Journal ArticleDOI
TL;DR: A statistical model applicable to a local area has been developed for mobile-to-mobile urban and suburban land communication channels and the probability distributions of the received envelope and phase, spatial-time correlation function, and the power spectral density of the complex envelope have been developed.
Abstract: A statistical model applicable to a local area has been developed for mobile-to-mobile urban and suburban land communication channels. Using the model, the probability distributions of the received envelope and phase, spatial-time correlation function, and the power spectral density of the complex envelope have been developed.

389 citations


Journal ArticleDOI
TL;DR: In this paper, an evaluation of the effectiveness of the VITA, Quadrant, TPAV, U-level, Positive slope, and VITA with slope burst-detection algorithms has been carried out by making direct comparisons with flow visualization.

Abstract: An evaluation of the effectiveness of the VITA, Quadrant, TPAV, U-level, Positive slope, and VITA with slope burst-detection algorithms has been done by making direct comparisons with flow visualization. Measurements were made in a water channel using an X-type hot-film probe located in the near-wall region. Individual ejections from bursts which contacted the probe were identified using dye flow visualization. The effectiveness of each of the detection algorithms was found to be highly dependent on the operational parameters, i.e. threshold levels and averaging or window times. These parameters were adjusted so that the number of events detected by each of the algorithms corresponded to the number of ejections identified by flow visualization, while the probability of a false detection was minimized. Comparing the detection algorithms using these optimum parameter settings, the Quadrant technique was found to have the greatest reliability, with a high probability of detecting the ejections and a low probability of false detections. Furthermore, it was found that the ejections detected by the Quadrant technique could be grouped into bursts by analysing the probability distribution of the time between ejections.
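
To make the Quadrant technique concrete, here is a hedged sketch of a Q2 (ejection) detector: an event is flagged when the velocity fluctuations satisfy u' < 0 and v' > 0 and the instantaneous product |u'v'| exceeds a threshold expressed in units of the rms product. The threshold convention and the `hold` parameter are illustrative assumptions; the paper tunes such operational parameters against flow visualization.

```python
import statistics

def quadrant_q2_events(u, v, hold=1.0):
    """Flag Q2 (ejection) samples: u' < 0, v' > 0, |u'v'| > hold*u_rms*v_rms.

    u, v are equally sampled velocity records; returns sample indices of
    detected events.  The hold level plays the role of the threshold
    parameter that the paper optimizes against flow visualization.
    """
    u_mean, v_mean = statistics.fmean(u), statistics.fmean(v)
    u_rms, v_rms = statistics.pstdev(u), statistics.pstdev(v)
    threshold = hold * u_rms * v_rms
    events = []
    for i, (a, b) in enumerate(zip(u, v)):
        up, vp = a - u_mean, b - v_mean
        if up < 0 and vp > 0 and abs(up * vp) > threshold:
            events.append(i)
    return events
```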

324 citations


Journal Article
TL;DR: In this article, the authors give a probabilistic representation of the distribution SN(λ) in terms of Normal and truncated Normal laws, which reveals the "structure" of the class of skew-normal distributions and indicates the kind of departure from normality.
Abstract: In a recent paper Azzalini (1985) introduced the class {SN(λ): λ ∈ ℝ} of skew-normal probability distributions and studied its main properties. The salient features of the class are mathematical tractability and strict inclusion of the Normal law (for λ = 0). The shape parameter λ, to some extent, controls the index of skewness. It is the purpose of this note to give a probabilistic representation of the distribution SN(λ) in terms of Normal and truncated Normal laws. This representation reveals the "structure" of the class and indicates the kind of departure from normality. The moments of a random variable Z_λ with the distribution SN(λ) are explicitly determined, and an efficient method for the Monte Carlo generation of Z_λ is shown.
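
The representation has a well-known constructive form: if U and V are independent standard Normal variables and δ = λ/√(1+λ²), then δ|U| + √(1−δ²)·V has the skew-normal distribution with shape λ. A minimal Monte Carlo generator based on this fact (a sketch, not the paper's code):

```python
import math
import random

def skew_normal_variate(lam):
    """One SN(lam) draw via the half-normal representation:
    Z = delta*|U| + sqrt(1 - delta**2)*V, with U, V iid N(0, 1)
    and delta = lam / sqrt(1 + lam**2)."""
    delta = lam / math.sqrt(1.0 + lam * lam)
    u, v = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    return delta * abs(u) + math.sqrt(1.0 - delta * delta) * v

# Sanity check: the mean of SN(lam) is delta * sqrt(2/pi).
sample = [skew_normal_variate(3.0) for _ in range(100_000)]
print(sum(sample) / len(sample))   # ~ (3/sqrt(10)) * sqrt(2/pi) ~ 0.757
```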

316 citations


Journal ArticleDOI
TL;DR: In this paper, a method for estimating the distribution of the parameters of a random coefficient regression model is proposed, accounting for interindividual variability, which is assumed to lie in a wide class of probability distributions rather than in a given parametric class.
Abstract: SUMMARY A method for estimating the distribution of the parameters of a random coefficient regression model is proposed. This distribution, accounting for interindividual variability, is assumed to lie in a wide class of probability distributions rather than in a given parametric class. Estimation is based on observations from a sample of individuals and likelihood is the estimation criterion. Experimental designs may be different among individuals, allowing the method to apply to routinely collected data. The problem has strong connections with the theory of optimum design of experiments. Conditions are given under which the problem has a unique solution, which then corresponds to a discrete distribution. A simple pharmacokinetic model involving two parameters is used as an example; these parameters have a bimodal distribution as statistical specification. Moreover, only one observation is available per individual; thus the method applies even when the model is not identifiable.

285 citations


Journal ArticleDOI
TL;DR: In Rothman (1985), a Monte Carlo technique from statistical mechanics was adapted to perform global optimization and applied to synthetic data; this paper presents an application of a similar Monte Carlo method to field data from the Wyoming Overthrust belt.
Abstract: Conventional approaches to residual statics estimation obtain solutions by performing linear inversion of observed traveltime deviations. A crucial component of these procedures is picking time delays; gross errors in these picks are known as "cycle skips" or "leg jumps" and are the bane of linear traveltime inversion schemes. This paper augments Rothman (1985), which demonstrated that the estimation of large statics in noise-contaminated data is posed better as a nonlinear, rather than as a linear, inverse problem. Cycle skips then appear as local (secondary) minima of the resulting nonlinear optimization problem. In the earlier paper, a Monte Carlo technique from statistical mechanics was adapted to perform global optimization, and the technique was applied to synthetic data. Here I present an application of a similar Monte Carlo method to field data from the Wyoming Overthrust belt. Key changes, however, have led to a more efficient and practical algorithm. The new technique performs explicit crosscorrelation of traces. Instead of picking the peaks of these crosscorrelation functions, the method transforms the crosscorrelation functions to probability distributions and then draws random numbers from the distributions. Estimates of statics are now iteratively updated by this procedure until convergence to the optimal stack is achieved. Here I also derive several theoretical properties of the algorithm. The method is expressed as a Markov chain, in which the equilibrium (steady-state) distribution is the Gibbs distribution of statistical mechanics.
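
A sketch of the sampling step described above, under the assumption that crosscorrelation values are mapped to probabilities by a Gibbs (exponential) weighting, in keeping with the statistical-mechanics framing at the end of the abstract; the temperature parameter and data layout are illustrative.

```python
import math
import random

def draw_static_shift(xcorr, temperature=1.0):
    """Sample a time shift from a crosscorrelation function.

    xcorr maps candidate lags to crosscorrelation values.  Instead of
    picking the peak, the values are turned into a Gibbs distribution
    p(lag) ~ exp(c(lag)/T) and a lag is drawn at random from it.
    """
    lags = list(xcorr)
    weights = [math.exp(xcorr[lag] / temperature) for lag in lags]
    return random.choices(lags, weights=weights, k=1)[0]

# Example: the peak at lag 0 is likeliest but not certain.
print(draw_static_shift({-2: 0.1, -1: 0.4, 0: 0.9, 1: 0.5, 2: 0.2}))
```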

264 citations


Journal ArticleDOI
TL;DR: In this paper, the Laplacian operator on superspace is defined, and the probability of finding the 3-metric and matter field configuration in a given region of superspace is shown to be bounded by the proper time that the solutions spend in that region.

Journal ArticleDOI
TL;DR: In this paper, a family of statistical distributions and estimators for extreme values is presented, based on a fixed number r ⩾ 1 of the largest annual events. The distributions rest on the asymptotic joint distribution of the r largest values in a single sample.



Journal ArticleDOI
TL;DR: In this paper, a method is developed for estimating a probability distribution from percentile estimates provided by experts, and is applied to estimating the conditional probability of equipment failure given a seismically induced stress.
Abstract: A method is developed for estimating a probability distribution using estimates of its percentiles provided by experts. The analyst's judgment concerning the credibility of these expert opinions is quantified in the likelihood function of Bayes' Theorem. The model considers explicitly the random variability of each expert estimate, the dependencies among the estimates of each expert, the dependencies among experts, and potential systematic biases. The relation between the results of the formal methods of this paper and methods used in practice is explored. A series of sensitivity studies provides insights into the significance of the parameters of the model. The methodology is applied to the problem of estimation of seismic fragility curves (i.e., the conditional probability of equipment failure given a seismically induced stress).
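
A deliberately simplified sketch of the Bayesian machinery: a discrete grid of candidate parameter values is updated via Bayes' Theorem, with each expert estimate modeled as the true value plus independent Gaussian noise. This ignores the paper's explicit treatment of within-expert dependencies, between-expert dependencies, and systematic biases; the grid, prior, and noise level are all illustrative.

```python
import math

def posterior_on_grid(grid, prior, expert_estimates, noise_sd):
    """Posterior over candidate values given noisy expert estimates.

    Likelihood: each estimate = truth + independent N(0, noise_sd) error,
    a gross simplification of the dependency/bias model in the paper.
    """
    unnormalized = []
    for theta, p in zip(grid, prior):
        like = math.prod(
            math.exp(-0.5 * ((e - theta) / noise_sd) ** 2)
            for e in expert_estimates)
        unnormalized.append(p * like)
    z = sum(unnormalized)
    return [w / z for w in unnormalized]

# Three experts estimate a median failure stress (arbitrary units):
grid = [1.0, 1.5, 2.0, 2.5, 3.0]
prior = [0.2] * 5
print(posterior_on_grid(grid, prior, [1.8, 2.1, 2.4], noise_sd=0.5))
```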

Journal ArticleDOI
TL;DR: In this paper, a class of distribution-free tests is proposed for testing the null hypothesis that two life distributions are identical against the alternative that one failure rate is uniformly smaller than the other.
Abstract: In this paper, we have studied some implications between tail-ordering (also known as dispersive ordering) and failure rate ordering (also called TP2 ordering) of two probability distribution functions. Based on independent random samples from these distributions, a class of distribution-free tests has been proposed for testing the null hypothesis that the two life distributions are identical against the alternative that one failure rate is uniformly smaller than the other. The tests have good efficiencies as compared to their competitors.

Book
01 Jan 1986
TL;DR: In this book, the second moment method of reliability analysis is developed and applied to the reliability analysis of geotechnical structures.
Abstract: Contents: Basic probability theory Random variables Common probability distributions The second moment method of reliability analysis Applications of the second moment method More probability distributions Matrix algebra Correlated and non-normal variables The reliability of geotechnical structures.

Journal ArticleDOI
TL;DR: In this paper, an alternative approach is developed by transforming the data into unsigned four-dimensional directions and using known results on the sampling properties of the spectral decomposition of the resulting sample moment of inertia matrix.
Abstract: SUMMARY Maximum likelihood estimation using the matrix von Mises-Fisher distribution in orientation statistics leads to unacceptably complicated likelihood equations, essentially because of the inconvenient form of the normalizing constant in the probability distribution. For the case of 3 x 2 or 3 x 3 orientations, the main cases of practical importance, an alternative approach is developed here by transforming the data into unsigned four-dimensional directions and using known results on the sampling properties of the spectral decomposition of the resulting sample moment of inertia matrix. It is demonstrated that the necessary computations are relatively simple by applying some of the techniques to a set of vectorcardiogram data.


Proceedings ArticleDOI
Stephen E. Levinson1
01 Dec 1986
TL;DR: The solution proposed here is to replace the probability distributions of duration with continuous probability density functions to form a continuously variable duration hidden Markov model (CVDHMM); the gamma distribution is ideally suited to specification of the durational density.
Abstract: During the past decade, the applicability of hidden Markov models (HMM) to various facets of speech analysis has been demonstrated in several different experiments. These investigations all rest on the assumption that speech is a quasi-stationary process whose stationary intervals can be identified with the occupancy of a single state of an appropriate HMM. In the traditional form of the HMM, the probability of duration of a state decreases exponentially with time. This behavior does not provide an adequate representation of the temporal structure of speech. The solution proposed here is to replace the probability distributions of duration with continuous probability density functions to form a continuously variable duration hidden Markov model (CVDHMM). The gamma distribution is ideally suited to specification of the durational density since it is one-sided and has only two parameters which, together, define both mean and variance. The main result is a derivation and proof of convergence of reestimation formulae for all the parameters of the CVDHMM. It is interesting to note that if the state durations are gamma distributed, one of the formulae is nonalgebraic but, fortuitously, has properties such that it is easily and rapidly solved numerically to any desired degree of accuracy. Other results are presented including the performance of the formulae on simulated data.
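
The gamma duration density has the standard two-parameter form d(τ) = η^ν τ^(ν−1) e^(−ητ) / Γ(ν) for τ > 0, with mean ν/η and variance ν/η². A one-function evaluation sketch (parameter names are illustrative; the paper's reestimation formulae are not reproduced here):

```python
import math

def gamma_duration_pdf(tau, shape, rate):
    """Gamma density used as a state-duration law in a CVDHMM-style model.

    One-sided (tau > 0), two parameters: mean = shape/rate and
    variance = shape/rate**2, so mean and spread are set independently,
    unlike the geometric duration law implicit in a standard HMM.
    """
    if tau <= 0.0:
        return 0.0
    return (rate ** shape) * (tau ** (shape - 1)) * math.exp(-rate * tau) / math.gamma(shape)

# e.g. a state with mean duration 50 ms and sd 25 ms: shape=4, rate=0.08/ms
print(gamma_duration_pdf(50.0, shape=4.0, rate=0.08))
```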

Journal ArticleDOI
TL;DR: In this article, the distribution of total claims payable by an insurer is considered when the frequency of claims is a mixed Poisson random variable, and the total claims density can be evaluated numerically using simple recursive formulae (discrete or continuous).
Abstract: The distribution of total claims payable by an insurer is considered when the frequency of claims is a mixed Poisson random variable. It is shown how in many cases the total claims density can be evaluated numerically using simple recursive formulae (discrete or continuous). Mixed Poisson distributions often have desirable properties for modelling claim frequencies. For example, they often have thick tails which make them useful for long-tailed data. Also, they may be interpreted as having arisen from a stochastic process. Mixing distributions considered include the inverse Gaussian, beta, uniform, non-central chi-squared, and the generalized inverse Gaussian as well as other more general distributions. It is also shown how these results may be used to derive computational formulae for the total claims density when the frequency distribution is either from the Neyman class of contagious distributions, or a class of negative binomial mixtures. Also, a computational formula is derived for the probability distribution of the number in the system for the M/G/1 queue with bulk arrivals.
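
For context, the best-known recursion of this kind is the one for a plain (unmixed) Poisson claim count with positive integer claim sizes: g(0) = e^(−λ) and g(s) = (λ/s) Σ_{x=1..s} x f(x) g(s−x). The mixed Poisson formulae in the paper generalize recursions of this type; the sketch below shows only the plain-Poisson special case.

```python
import math

def compound_poisson_pmf(lam, severity, s_max):
    """Total-claims pmf g on {0, ..., s_max} for a Poisson(lam) claim
    count and iid claim sizes on {1, 2, ...} with pmf severity[x],
    via the classic recursion g(s) = (lam/s) * sum_x x*f(x)*g(s-x)."""
    g = [math.exp(-lam)] + [0.0] * s_max
    for s in range(1, s_max + 1):
        g[s] = (lam / s) * sum(
            x * severity.get(x, 0.0) * g[s - x] for x in range(1, s + 1))
    return g

# Claim sizes 1 or 2 with equal probability, mean claim count 2:
print(compound_poisson_pmf(2.0, {1: 0.5, 2: 0.5}, 8))
```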

Journal ArticleDOI
TL;DR: Strong consistency of the proposed estimator is proved under certain sufficient conditions, and simulation results are presented in support of the theory.

Journal ArticleDOI
TL;DR: For a large class of distributions, the asymptotic behavior of the median of this minimum is determined, and it is shown that it is exponentially small.
Abstract: Given a set of n items with real-valued sizes, the optimum partition problem asks how it can be partitioned into two subsets so that the absolute value of the difference of the sums of the sizes over the two subsets is minimized. We present bounds on the probability distribution of this minimum under the assumption that the sizes are independent random variables drawn from a common distribution. For a large class of distributions, we determine the asymptotic behavior of the median of this minimum, and show that it is exponentially small.
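
To state the optimization concretely, here is a brute-force sketch that computes the optimum partition difference for a small random instance (exponential in n, so illustration only; the paper's interest is in how small this minimum typically is under random sizes):

```python
import random

def min_partition_difference(sizes):
    """Exhaustively find min |sum(A) - sum(B)| over 2-way partitions.

    Fixing the last item on one side halves the search space; each
    bitmask selects the items of one subset.  O(2**(n-1) * n) time.
    """
    total = sum(sizes)
    n = len(sizes)
    best = total
    for mask in range(1 << (n - 1)):
        subset_sum = sum(sizes[i] for i in range(n - 1) if (mask >> i) & 1)
        best = min(best, abs(total - 2.0 * subset_sum))
    return best

print(min_partition_difference([random.random() for _ in range(16)]))
```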

Journal ArticleDOI
TL;DR: In this article, a review of known results on tolerance limits for univariate distributions is presented, including results for parametric continuous and discrete distributions as well as those based on nonparametric methods.
Abstract: This review covers some known results on tolerance limits for univariate distributions. Results for parametric continuous and discrete distributions as well as those based on nonparametric methods are included. Some general results, including those for certain restricted families of distributions, are also covered. Sample size determination and related problems are discussed.

Journal ArticleDOI
TL;DR: In this paper, the authors consider a single-input non-linear discrete-time system Σ: x(t+1) = f(x(t), u(t)), where x ∈ ℝ^N, u ∈ ℝ, and f(x, u) is a C^∞ ℝ^N-valued function, and give necessary and sufficient conditions for approximate linearizability.
Abstract: We consider a single-input non-linear discrete-time system Σ of the form x(t+1) = f(x(t), u(t)), where x ∈ ℝ^N, u ∈ ℝ, and f(x, u): ℝ^(N+1) → ℝ^N is a C^∞ ℝ^N-valued function. Necessary and sufficient conditions for approximate linearizability are given for Σ. We also give necessary and sufficient conditions for local linearizability. Finally, we present analogous results for multi-input non-linear discrete-time systems.

Journal ArticleDOI
TL;DR: The first passage problem in Brownian motion takes the form of a randomized Paris-Erdogan equation with a multiplicative random variable on the right side of the equation, as discussed in this paper.

Journal ArticleDOI
Silviu Guiasu1
TL;DR: The aim of this paper is to obtain a probabilistic model when the queueing system is in a maximum entropy condition, which gives a simple probability distribution of possible states in the general framework of a birth-and-death process.
Abstract: The main results in queueing theory are obtained when the queueing system is in a steady-state condition and if the requirements of a birth-and-death stochastic process are satisfied. The aim of this paper is to obtain a probabilistic model when the queueing system is in a maximum entropy condition. For applying the entropic approach, the only information required is represented by mean values (mean arrival rates, mean service rates, the mean number of customers in the system). For some one-server queueing systems, when the expected number of customers is given, the maximum entropy condition gives the same probability distribution of the possible states of the system as the birth-and-death process applied to an M/M/1 system in a steady-state condition. For other queueing systems, as M/G/1 for instance, the entropic approach gives a simple probability distribution of possible states, while no closed-form expression for such a probability distribution is known in the general framework of a birth-and-death process.
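
The M/M/1 statement above can be made concrete: maximizing entropy over distributions on {0, 1, 2, ...} subject only to a given mean number of customers L yields the geometric law p_n = (1 − r)r^n with r = L/(1 + L), which is exactly the steady-state M/M/1 distribution at utilization r. A small sketch of this worked example (the truncation point is an implementation convenience):

```python
def max_entropy_state_pmf(mean_customers, n_max=50):
    """Maximum-entropy pmf of the number in system given only its mean L.

    The entropy maximizer under a mean constraint on {0, 1, 2, ...} is
    geometric: p_n = (1 - r) * r**n with r = L / (1 + L), matching the
    steady-state M/M/1 distribution at utilization r.
    """
    r = mean_customers / (1.0 + mean_customers)
    return [(1.0 - r) * r ** n for n in range(n_max + 1)]

# Mean of 3 customers in system -> utilization r = 0.75:
pmf = max_entropy_state_pmf(3.0)
print(pmf[0], sum(n * p for n, p in enumerate(pmf)))  # ~0.25 and ~3.0
```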

Proceedings ArticleDOI
01 Oct 1986
TL;DR: A modulation classification algorithm using statistical pattern recognition techniques has been developed and tested on numerically simulated signals; it uses statistical moments of both the demodulated signal and the signal spectrum as the modulation-identifying parameters.
Abstract: The ability to identify the modulation of an arbitrary signal is desirable for a number of reasons, including signal confirmation, interference identification, and the selection of proper demodulators. A modulation classification algorithm using statistical pattern recognition techniques has been developed and tested on numerically simulated signals. This algorithm uses statistical moments of both the demodulated signal and the signal spectrum as the modulation identifying parameters. The basis for the classification routine is a set of formulated probability distributions which were developed by generating and statistically analyzing a large set of numerically simulated signals. The resulting classification equations were tested on an independent set of numerically simulated signals.
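
As a sketch of moment-based features of the kind described (the specific orders, normalizations, and the spectral moments the authors used are not reproduced here):

```python
def central_moments(samples, orders=(2, 3, 4)):
    """Central moments of a (demodulated) signal, usable as a feature
    vector for a statistical pattern-recognition classifier.  Which
    orders and normalizations to use is a design choice."""
    n = len(samples)
    mean = sum(samples) / n
    return [sum((s - mean) ** k for s in samples) / n for k in orders]

# e.g. features of a crude two-level (FSK-like) demodulated sequence:
print(central_moments([1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0]))
```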

Journal ArticleDOI
TL;DR: In this article, a methodology to predict the performance of a photovoltaic generator based on long-term climatological data and expected cell performance is described, which is used to calculate the probability distribution function parameters for each hour of a typical day of any season, week, or day.
Abstract: This paper describes a methodology to predict the performance of a photovoltaic (PV) generator based on long-term climatological data and expected cell performance. The methodology uses long-term historical data on insolation to calculate the probability distribution function parameters for each hour of a typical day of any season, week, or day. Once the probability distribution function parameters are calculated, they are used to evaluate the predicted hourly, daily, weekly, and seasonal capacity factors of a particular design of a PV panel/array at a particular site. Long-term insolation data from Sterling, Virginia have been utilized with Solarex SX-110 panel designs to predict PV array performance.
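
A sketch of the two-step workflow under simplifying assumptions: long-term insolation records are reduced to per-hour distribution parameters (here just mean and standard deviation; the distribution family used in the paper is not reproduced), and a hypothetical linear panel model converts mean insolation into an expected capacity factor. The panel parameters are illustrative, not Solarex SX-110 specifications.

```python
import statistics

def hourly_params(insolation_by_hour):
    """Reduce long-term insolation records (W/m^2), keyed by hour of a
    typical day, to per-hour distribution parameters (mean, std dev)."""
    return {h: (statistics.fmean(v), statistics.pstdev(v))
            for h, v in insolation_by_hour.items()}

def expected_capacity_factor(params, rating_w, efficiency=0.10, area_m2=1.0):
    """Capacity factor from hourly mean insolation, assuming output =
    efficiency * area * insolation, capped at the panel rating.  This
    linear cell model is a placeholder for the expected-cell-performance
    model in the paper."""
    powers = [min(efficiency * area_m2 * mu, rating_w)
              for mu, _sd in params.values()]
    return sum(powers) / (len(powers) * rating_w)

records = {9: [300, 420, 380], 12: [820, 900, 760], 15: [500, 540, 610]}
print(expected_capacity_factor(hourly_params(records), rating_w=110.0))
```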

Journal ArticleDOI
TL;DR: The averaging process is modeled as a linear system whose low-pass filter characteristics are determined by the degree of temporal misalignment of signals, which demonstrates that alignment errors can both add and subtract signal components.
Abstract: The averaging process is modeled as a linear system whose low-pass filter characteristics are determined by the degree of temporal misalignment of signals. Assuming the errors in temporal alignment of successive cardiac cycles are random, the model transfer function is equivalent to the probability density function. The response of the model to a step input is equivalent to the probability distribution function, which can be readily quantified. To validate the model, a high-resolution ECG amplifier and QRS recognition system was constructed that synchronizes a step input with a point on the QRS. Design criteria for optimal amplification, filtering, and triggering of the ECG are determined. Tests of the model reveal a close correspondence between observed and predicted step responses. From the average step response, the recording fidelity of any average can be determined rapidly while the alignment is adjusted for optimal precision. Using ECG signals from patients, our model system demonstrates that alignment errors can both add and subtract signal components. Methods for estimating the extent of signal distortion induced by averaging as well as criteria for minimizing it are presented.
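
A small simulation sketch of the modeled effect: averaging many copies of a waveform under random alignment errors smears it with the jitter's probability density, i.e. low-pass filters it. Gaussian integer-sample jitter and circular shifting are assumptions made for illustration.

```python
import random

def jittered_average(signal, n_beats=200, jitter_sd=3.0):
    """Average n_beats copies of `signal`, each circularly shifted by a
    random integer alignment error ~ round(N(0, jitter_sd)).  The
    average equals the signal convolved with the jitter density, so
    sharp features are attenuated as jitter_sd grows."""
    n = len(signal)
    acc = [0.0] * n
    for _ in range(n_beats):
        shift = round(random.gauss(0.0, jitter_sd))
        for i in range(n):
            acc[i] += signal[(i - shift) % n]
    return [a / n_beats for a in acc]

# A narrow spike is broadened into (roughly) the jitter's bell shape:
spike = [0.0] * 64
spike[32] = 1.0
print(max(jittered_average(spike)))   # well below 1.0 for jitter_sd = 3
```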