
Showing papers on "Conditional probability distribution published in 1972"


Book
01 Jan 1972
TL;DR: This text provides an introduction to elementary probability theory and stochastic processes, and shows how probability theory can be applied to the study of phenomena in fields such as engineering, computer science, management science, the physical and social sciences, and operations research.
Abstract: Ross's classic bestseller, Introduction to Probability Models, has been used extensively by professionals and as the primary text for a first undergraduate course in applied probability. It provides an introduction to elementary probability theory and stochastic processes, and shows how probability theory can be applied to the study of phenomena in fields such as engineering, computer science, management science, the physical and social sciences, and operations research. With the addition of several new sections relating to actuaries, this text is highly recommended by the Society of Actuaries. New material includes: a new section (3.7) on COMPOUND RANDOM VARIABLES, which can be used to establish a recursive formula for computing probability mass functions for a variety of common compounding distributions; a new section (4.11) on HIDDEN MARKOV CHAINS, including the forward and backward approaches for computing the joint probability mass function of the signals, as well as the Viterbi algorithm for determining the most likely sequence of states; a simplified approach for analyzing nonhomogeneous Poisson processes; additional results on queues relating to (a) the conditional distribution of the number found by an M/M/1 arrival who spends a time t in the system, (b) the inspection paradox for M/M/1 queues, and (c) the M/G/1 queue with server breakdown; and many new examples and exercises.
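The Viterbi algorithm mentioned for the new hidden-Markov-chain section can be sketched in a few lines of log-space dynamic programming; the HMM below (transition matrix A, emission matrix B, initial distribution pi) is purely illustrative and not an example from the book.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for an HMM, computed in log space.

    obs: observed symbol indices; pi: initial state distribution;
    A[i, j]: P(next state j | state i); B[i, k]: P(symbol k | state i).
    """
    T, n = len(obs), len(pi)
    delta = np.empty((T, n))            # best log-probability of a path ending in each state
    psi = np.zeros((T, n), dtype=int)   # back-pointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        for j in range(n):
            scores = delta[t - 1] + np.log(A[:, j])
            psi[t, j] = int(np.argmax(scores))
            delta[t, j] = scores[psi[t, j]] + np.log(B[j, obs[t]])
    path = [int(np.argmax(delta[-1]))]  # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```

Working in log space avoids the underflow that plagues the direct product-of-probabilities recursion on long observation sequences.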

326 citations



Journal ArticleDOI
TL;DR: In this paper, a precise definition of identifiability of a parameter is given in terms of consistency in probability for the parameter estimate, and necessary and sufficient conditions for the unknown parameter to be identifiable are established.
Abstract: A precise definition of identifiability of a parameter is given in terms of consistency in probability for the parameter estimate. Under some mild uniformity assumptions on the conditional density parameterized by the unknown parameter, necessary and sufficient conditions for the unknown parameter to be identifiable are established. The assumptions and identifiability criteria are expressed in terms of the density of individual observations, conditioned upon all past observations. The results are applied to linear system identification problems.

113 citations


Journal ArticleDOI
TL;DR: In this article, a time-dependent conditional phase-space distribution function for rigid ensembles of rigid molecules undergoing collision-interrupted free rotation is derived for the J•diffusion and the M•Diffusion models.
Abstract: Time‐dependent conditional phase‐space distribution functions are derived for classical ensembles of rigid molecules undergoing collision‐interrupted free rotation. The J‐diffusion and the M‐diffusion models proposed by Gordon, [J. Chem. Phys. 44, 1830 (1966)] are explored in detail. The expressions for the conditional distribution functions are evaluated for these models in terms of multiple time integrals. It is then shown that Fourier transform techniques can be used to express the integrals as convolutions which are analogous to Fixman and Rider's expressions [J. Chem. Phys. 51, 2425 (1969)]. A number of numerical results are presented.

70 citations


Journal ArticleDOI
TL;DR: In this article, a conditional Poisson process whose intensity is a function of a Markov process is characterized, and the jump times and sizes of these processes are analyzed, as well as their limiting behavior.
Abstract: A conditional Poisson process (often called a double stochastic Poisson process) is characterized as a random time transformation of a Poisson process with unit intensity. This characterization is used to exhibit the jump times and sizes of these processes, and to study their limiting behavior. A conditional Poisson process, whose intensity is a function of a Markov process, is discussed. Results similar to those presented can be obtained for any process with conditional stationary independent increments.
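The random-time-change characterization above suggests a direct way to simulate such a process: run a unit-intensity Poisson process on the clock Λ(t) = ∫₀ᵗ λ(s) ds and map its arrival times back through Λ⁻¹. A minimal sketch, assuming a grid discretization of one realized intensity path (function names, the grid step, and the constant-intensity example are illustrative):

```python
import numpy as np

def doubly_stochastic_poisson(intensity, t_end, rng, dt=1e-3):
    """Event times of a conditional (doubly stochastic) Poisson process on
    [0, t_end], via the random-time-change characterization: arrivals of a
    unit-rate Poisson process, mapped back through the inverse of the
    cumulative intensity Lambda(t)."""
    grid = np.arange(0.0, t_end, dt)
    lam = intensity(grid)                                # one realized intensity path
    Lam = np.concatenate(([0.0], np.cumsum(lam) * dt))   # Lambda on the grid
    events = []
    s = rng.exponential()                                # first unit-rate arrival time
    while s < Lam[-1]:
        idx = np.searchsorted(Lam, s)                    # first grid time with Lambda(t) >= s
        events.append(min(idx * dt, t_end))
        s += rng.exponential()                           # next unit-rate interarrival
    return np.array(events)
```

With a constant intensity this reduces to an ordinary Poisson process, which makes a convenient sanity check.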

52 citations


Journal ArticleDOI
TL;DR: In this article, a Poisson point process with an intensity parameter forming a Markov chain with continuous time and finite state space is considered and an estimate of the current intensity, optimal in the least-squares sense, is computed from this distribution.
Abstract: Consider a Poisson point process with an intensity parameter forming a Markov chain with continuous time and finite state space. A system of ordinary differential equations is derived for the conditional distribution of the Markov chain given observations of the point process. An estimate of the current intensity, optimal in the least-squares sense, is computed from this distribution. Applications to reliability and replacement theory are given. A special case with two states, corresponding to a process in control and out of control, is discussed at length. Adjustment rules, based on the conditional probability of the out of control state, are studied. Regarded as a function of time, this probability forms a Markov process with the unit interval as state space. For the distribution of this process, integro-differential equations are derived. They are used to compute the average long run cost of adjustment rules.
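A discrete-time sketch of the kind of filter the derived ODE system describes: Euler steps for the conditional distribution of the chain between events, and a multiplicative Bayes update at each observed event. The two-state generator and intensities used in the usage note are illustrative, not the paper's.

```python
import numpy as np

def intensity_filter(event_times, t_end, Q, lam, p0, dt=1e-3):
    """Approximate conditional distribution p(t_end) of a finite-state Markov
    chain driving a Poisson intensity, given the observed event times.

    Q: generator matrix (rows sum to 0); lam: intensity in each state;
    p0: initial distribution.  Euler steps between events; multiplicative
    Bayes update p_i <- lam_i * p_i (renormalized) at each event."""
    p = np.asarray(p0, float).copy()
    events = iter(np.append(np.asarray(event_times, float), np.inf))
    next_ev = next(events)
    t = 0.0
    while t < t_end:
        while next_ev <= t:              # an event occurred: reweight by intensities
            p = lam * p
            p /= p.sum()
            next_ev = next(events)
        # between events: chain motion plus the information carried by 'no event yet'
        dp = Q.T @ p - (lam - lam @ p) * p
        p = np.clip(p + dp * dt, 1e-12, None)
        p /= p.sum()
        t += dt
    return p
```

The estimate of the current intensity that is optimal in the least-squares sense is then the posterior mean `lam @ p`.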

35 citations


Journal ArticleDOI
TL;DR: In this paper, the authors give a general theorem on characterization by conditional expectations; one special form of the theorem characterizes the Weibull distribution, and another leads to a characterization of the uniform distribution.
Abstract: In a recent paper, Shanbhag [3] gave characterizations for the exponential and geometric distributions in terms of conditional expectations. The present note gives a general theorem on characterization by conditional expectations. A special form of the theorem characterizes the Weibull distribution (and hence Shanbhag's result for the exponential distribution). Another interesting special form of the theorem leads to a characterization of the uniform distribution. Applications of these characterizations are also indicated.
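For reference, Shanbhag's exponential case takes the familiar constant-mean-residual-life form (a standard statement of that special case, not the paper's general theorem): for a nonnegative random variable $X$ with finite positive mean,

```latex
% Constant mean residual life characterizes the exponential law:
E[\,X - t \mid X > t\,] = E[X] \ \text{for all } t \ge 0
\quad\Longleftrightarrow\quad
P(X > t) = e^{-t/E[X]}, \quad t \ge 0 .
```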

35 citations



Journal ArticleDOI
TL;DR: In this article, the authors describe methods of fitting prior distributions to equipment MTBF θ, show the priors fitted to different equipments, establish data criteria for fitting prior distributions to θ, and present the results of a robustness analysis performed on the fitted priors.
Abstract: This paper describes methods of fitting prior distributions to equipment MTBF = θ, shows the priors fitted to different equipments, establishes data criteria for fitting prior distributions to θ, and presents the results of a robustness analysis performed on the fitted priors. Systematic procedures for fitting priors are shown for Type 1 data (number of failures in fixed time T) and Type 2 data (observed MTBF, number of failures not the same for all equipments), and specific data criteria, in the form of minimum values of n (number of equipments) and K (number of failures), are presented. The inverted-gamma prior distributions were derived from operational failure data obtained from Tinker AFB. The equipments are primarily electronic; therefore, the time-to-failure distribution was assumed to be exponential. However, the methods are generally applicable whatever the form of the conditional distribution. The robustness analysis shows the effects of errors in estimating the parameters of the prior on the posterior distribution. In general, the effect of errors in estimating parameters of the prior was practically negligible for large values of K.
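Under the exponential time-to-failure assumption, an inverted-gamma prior on θ is conjugate, which is what makes this kind of fitting and robustness study tractable. A minimal sketch of the conjugate update for Type-1-style data; the parameterization IG(a, b) with density proportional to θ^-(a+1) e^(-b/θ), and the numbers in the test, are assumptions, not the paper's.

```python
def posterior_mtbf(a, b, k, total_time):
    """Conjugate update for an inverted-gamma prior IG(a, b) on the MTBF theta
    (density proportional to theta**-(a + 1) * exp(-b / theta)) with
    exponential times-to-failure: observing k failures in total test time T
    gives posterior IG(a + k, b + T).

    Returns the posterior parameters and the posterior mean (b + T)/(a + k - 1)."""
    a_post, b_post = a + k, b + total_time
    mean = b_post / (a_post - 1) if a_post > 1 else float("inf")
    return a_post, b_post, mean
```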

22 citations



Journal ArticleDOI
TL;DR: In this paper, the authors study generalized models consisting of a class $C_1$ of probability density functions and a class $C_2$ of transformations from the variation space to the response space, and determine the probability of what is identified as having occurred, that is, the likelihood function, when $C_2$ is no longer effectively a group.
Abstract: The traditional model of statistics is a class of probability measures for a response variable. Under reasonable continuity this can be given as a class $C$ of probability density functions relative to an atom-free measure. With a realized value of the response variable, the model $C$ gives the possible probabilities for that realized value--it gives the likelihood function. The likelihood function can be accepted alone or in conjunction with the distribution of possible likelihood functions. In a variety of applications, the variation in a response variable can be traced to a well-defined source having a known probability distribution. The model then is not a class of probability measures but is a single probability measure and a class of random variables. Under moderate conditions this can be given as a probability density function and a class $C_2$ of transformations from the variation space to the response space. And if the distribution for variation is not completely known, the model becomes a class $C_1$ of probability density functions and a class $C_2$ of transformations from the variation space to the response space. With an observed response value, the component $C_2$ identifies a set, the set of possible values for the realized variation. If $C_2$ is a transformation group, then $C_2$ identifies a set in a partition on the variation space. Standard probability argument using $C_1$ then gives the probability of what has been "observed," and the conditional distribution of what has not been "observed": it gives the likelihood function from the identified set, and the conditional density within the identified set. The likelihood function alone or with its distribution gives the information concerning the parameter of $C_1$; and for any assumed value of that parameter the conditional density gives the information concerning possible values for the realized variation, and accordingly gives the information concerning the parameter of $C_2$, it being what stands between the realized variation and the observed response. The probability of what is identified as having occurred--the likelihood function--is a fundamental output of a model involving density functions. The determination of this probability can however involve certain complexities as soon as the class $C_2$ of random variables is no longer effectively a group. Certainly the class $C_2$ identifies a set on the variation space. But in moderately general cases the range of alternatives--a partition on the variation space--depends on the element of $C_2$. Thus an 'event' is identified but the range of possible 'events' depends on the parameter of $C_2$. For two kinds of generalized model $(C_1, C_2)$ this paper explores the determination of the probability of what is identified as having occurred--it explores the determination of the likelihood function. In Section 1 the notation and results are summarized for the special model $(C_1, C_2)$ with $C_2$ a transformation group. Two generalizations are examined in Section 2: first, the class $C_2$ is a group but its application as a transformation group has an additional parameter; second, the class $C_2$ is a class of expression transformations $L$ applied to a group of transformations $G$, i.e. $C_2 = LG$. These two generalizations are not as distinct as they may at first appear, but they are quite distinct in context. The transformed regression model is the central example. Several formulas for volume change in subspaces are recorded in Section 3 and used in Section 4 to make four determinations of likelihood for the generalized model $(C_1, C_2)$. These are applied to the transformed regression model in Section 5 and compared by means of examples in Section 6. The effect of initial variable on the likelihood functions is examined in Section 7 and two compensating routes for analysis are proposed. The class $L$ of expression transformations is examined in Section 8 and shown to be a group under mild consistency conditions. A corresponding invariant likelihood is determined in Section 9, and a transit likelihood in Section 10; the power-transformed regression model is examined in Section 11. In Section 12 the transit likelihood is shown to be the natural likelihood when the semi-direct product $LG$ is itself a group.

Journal ArticleDOI
TL;DR: In this paper, a method for solving linear programming problems under the assumption that input-output, restraint, and functional coefficients follow a discrete joint probability distribution is presented for solving a farm problem.
Abstract: A method is presented for solving linear programming problems under the assumption that input-output, restraint, and functional coefficients follow a discrete joint probability distribution. This may be a rather plausible assumption in some uncertain farm planning contexts. The objective function is formulated in terms of variance and/or expectation. Special attention is given to the most adverse outcomes with respect to both the functional value and side constraints. Parametric analysis can be used to determine trade-offs among the functional value, the adversity level, the tolerance probability, and the probability of infeasibility. Finally, the proposed method is applied to a farm problem.
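A plan's risk profile under a discrete joint distribution of coefficients can be evaluated scenario by scenario. The sketch below computes the expectation and variance of the functional value and the probability of infeasibility for a fixed candidate plan; the data layout and names are illustrative, and the optimization step over plans (the paper's actual LP machinery) is omitted.

```python
import numpy as np

def plan_risk(x, scenarios, probs):
    """Evaluate a candidate plan x under a discrete joint distribution of
    LP coefficients.  Each scenario is a tuple (c, A, b): objective vector,
    input-output matrix, restraint vector.  Returns the expectation and
    variance of the functional value c'x and the probability that the plan
    violates a side constraint (infeasibility)."""
    values, infeas = [], 0.0
    for (c, A, b), p in zip(scenarios, probs):
        values.append(float(np.dot(c, x)))
        if np.any(A @ x > b):            # plan violates a restraint in this scenario
            infeas += p
    values = np.array(values)
    mean = float(np.dot(probs, values))
    var = float(np.dot(probs, (values - mean) ** 2))
    return mean, var, infeas
```

Sweeping x over candidate plans and trading mean against variance and infeasibility probability mirrors the parametric analysis described in the abstract.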

01 May 1972
TL;DR: In this article, a comprehensive treatment of numerical approaches to the solution of Bayes Law is presented, describing numerical methods, computational algorithms, two example problems, and extensive numerical results.
Abstract: A comprehensive treatment of numerical approaches to the solution of Bayes Law has been included, describing numerical methods, computational algorithms, two example problems, and extensive numerical results. Bayes Law is an integral equation describing the evolution of the conditional probability distribution of the state of a Markov process, conditioned on the past noisy observations. The Bayes Law is, in fact, the general solution to the discrete nonlinear estimation problem. This research represents one of the first successful attempts to approximate the conditional probability densities numerically and evaluate the Bayes integral by quadratures. The methods of density representation studied most thoroughly include orthogonal polynomials, point masses, Gaussian sums, and Fourier series.
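The point-mass representation can be sketched as a grid-based recursion: Chapman-Kolmogorov quadrature for the prediction step, then multiplication by the observation likelihood and renormalization for the Bayes update. Everything below (equal grid spacing, the function signatures, the Gaussian example in the test) is an assumed minimal setup, not the report's algorithm.

```python
import numpy as np

def grid_bayes_filter(grid, prior, transition_pdf, likelihood, observations):
    """Point-mass approximation to the Bayes-law recursion: the conditional
    density of a Markov state given past noisy observations, evaluated on a
    fixed, equally spaced grid with rectangle-rule quadrature.

    transition_pdf(x_next, x_prev) and likelihood(z, x) are vectorized
    densities supplied by the caller."""
    dx = grid[1] - grid[0]
    p = np.asarray(prior, float)
    p = p / (p.sum() * dx)                              # normalize the prior density
    K = transition_pdf(grid[None, :], grid[:, None])    # K[i, j] = p(x_j | x_i)
    for z in observations:
        p = (p @ K) * dx                 # prediction: Chapman-Kolmogorov quadrature
        p = p * likelihood(z, grid)      # correction: multiply by p(z | x)
        p = p / (p.sum() * dx)           # renormalize (Bayes-law denominator)
    return p
```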

Journal ArticleDOI
TL;DR: In this paper, it was shown that there is a translation-invariant, finitely additive extension of Lebesgue measure to all sets of reals, for which a conditional probability P(A, B) is defined for all pairs A, B of sets of reals.



Journal ArticleDOI
TL;DR: In this paper, a dam model is proposed for which it is assumed that the conditional distributions of the inputs, given the past, are known only to lie in some class M. For selected M, bounds are derived on various quantities of interest such as the mean time to first emptiness.
Abstract: A dam model is proposed for which it is assumed that the conditional distributions of the inputs, given the past, are known only to lie in some class M. For selected M, bounds are derived on various quantities of interest such as the mean time to first emptiness. The case of normal inputs is treated in greater detail and a release rule is discussed. The techniques used are similar to those used in the theory of gambling as developed by Dubins and Savage (1965).

Journal ArticleDOI
TL;DR: In this paper, the authors show that by introducing conditional expectations (regression curves) at the same time as independence, conditional distributions, expectation, and covariance, these concepts can be linked in a unified approach.
Abstract: In many introductory probability and statistics courses the concepts of independent random variables, conditional random variables, expectation and covariance are discussed when the topic of bivariate random variables is presented. The result that independence implies zero covariance, and an example to show that the converse does not necessarily follow, are invariably included in the course. In this note we show that by introducing conditional expectations (regression curves) at the same time we are able to link these concepts and present a unified approach. The author does not claim that any of the examples included in this note are original; however, it is considered that their assembly together adds to their usefulness.
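The standard counterexample behind "zero covariance does not imply independence" fits this regression-curve viewpoint: with X symmetric about zero and Y = X², Cov(X, Y) = E[X³] = 0, yet the regression curve E[Y | X] = X² is nonconstant, so the variables are dependent. A quick numerical check (not one of the note's own examples):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.choice([-1.0, 0.0, 1.0], size=100_000)   # X symmetric about zero
y = x ** 2                                       # Y is a function of X: dependent

# Cov(X, Y) = E[XY] - E[X]E[Y] = E[X^3] = 0 by symmetry
cov = np.mean(x * y) - np.mean(x) * np.mean(y)

# dependence: P(Y = 1 | X = 1) = 1, while unconditionally P(Y = 1) = 2/3,
# i.e. the regression curve E[Y | X] = X^2 is not constant
```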

01 Sep 1972
TL;DR: In this article, it was shown that if the conditional distribution of a linear function of n independent random variables X i , 1 ≤ i ≤ n, given another linear function of the same set of random variables, is symmetric, then each X i is normally distributed or degenerate; a similar result is proved for the Wiener process.
Abstract: Recently Heyde (1970) has proved that if the conditional distribution of a linear function of n independent random variables X i , 1 ≤ i ≤ n, given another linear function of the same set of random variables, is symmetric, then each of the X i , 1 ≤ i ≤ n, is normally distributed or degenerate under some conditions. We prove a similar result for the Wiener process by means of stochastic integrals defined in probability.

Journal ArticleDOI
TL;DR: In this paper, the authors derive expressions for the joint and conditional distribution functions in the microcanonical ensemble of two-dimensional harmonic oscillators in the limit of an infinite system.

Journal ArticleDOI
TL;DR: In this article, it was shown that the regularity assumption ensuring a common regular conditional probability cannot be replaced by the assumption that each measure is approximable by a fixed compact system; in particular, a common regular conditional probability, given a sufficient σ-field, need not exist.
Abstract: If a σ-field 𝓑 is sufficient for a family 𝓟 of probability measures defined on a σ-field 𝓐, then there exist regular determinations of the conditional probability of P, given 𝓑, which are independent of the special measure P ∈ 𝓟, provided that 𝓟 satisfies a suitable regularity condition. A counterexample shows that this regularity cannot be replaced by the assumption that each P ∈ 𝓟 is approximable by a fixed compact system. In particular, if a σ-field 𝓑 is sufficient for a family 𝓟 of probability measures defined on a separable σ-field and if each P ∈ 𝓟 admits a regular conditional probability, given 𝓑, a common regular conditional probability, given 𝓑, need not exist.

Journal ArticleDOI
TL;DR: In this article, it is shown that the conditional expectations E_{P_k}(f_n | A_m) converge in P_0-measure to E_{P_0}(f_0 | A_0) as k, n, m → ∞, provided the measures converge uniformly and the functions converge in P_0-measure.
Abstract: Let P_n, n ∈ ℕ ∪ {0}, be probability measures on a σ-field A; f_n, n ∈ ℕ ∪ {0}, be a family of uniformly bounded A-measurable functions; and A_n, n ∈ ℕ, be a sequence of sub-σ-fields of A, increasing or decreasing to the σ-field A_0. It is shown in this paper that the conditional expectations E_{P_k}(f_n | A_m) converge in P_0-measure to E_{P_0}(f_0 | A_0) as k, n, m → ∞, if P_n|A, n ∈ ℕ, converges uniformly to P_0|A and f_n, n ∈ ℕ, converges in P_0-measure to f_0.

Journal ArticleDOI
TL;DR: In this article, a model whose underlying assumption is the "Ornstein-Uhlenbeck" process is applied to this problem; it uses the antecedent quantitatively, without loss of information and with surprising simplicity.
Abstract: Previous models for estimating the conditional probability of an event have used, as the condition, an initial categorized event such as no rain or overcast at time zero. But initial conditions frequently are observed and known in greater detail, and these observed values can replace the categories in determining conditional probabilities. A model that has as its underlying assumption the “Ornstein-Uhlenbeck” process is applicable to this problem. It uses the antecedent quantitatively without loss of information and with surprising simplicity.
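The conditional distribution underlying such an Ornstein-Uhlenbeck model is Gaussian, with mean decaying geometrically from the observed antecedent toward the long-run level and variance relaxing to the stationary value. A sketch with illustrative parameter names (mu, beta, sigma are not the paper's notation):

```python
import math

def ou_conditional(x0, t, mu=0.0, beta=1.0, sigma=1.0):
    """Conditional distribution of an Ornstein-Uhlenbeck process at time t,
    given the observed antecedent value x0: Gaussian with mean decaying
    toward the long-run level mu at rate beta, and variance relaxing to the
    stationary value sigma**2 / (2 * beta).  Returns (mean, variance)."""
    decay = math.exp(-beta * t)
    mean = mu + (x0 - mu) * decay
    var = sigma ** 2 * (1.0 - decay ** 2) / (2.0 * beta)
    return mean, var
```

The conditional probability of exceeding a threshold c is then just the Gaussian tail probability at (c - mean) / sqrt(var), which is what lets the observed antecedent replace a coarse category.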



Journal ArticleDOI
TL;DR: This paper generalizes earlier work to include a larger class of conditional distributions; the theory is then applied to the problem of adjusting the gain of a positional servomechanism.
Abstract: The Empirical Bayes approach to parameter estimation problems is discussed; in particular, two classes of distribution functions are presented for which it is always possible to find a consistent sequence of estimators for E(θ^k | z = (z_1, z_2, …, z_n)). The advantage of the Empirical Bayes approach is that it does not assume a specific prior distribution. This paper generalizes earlier work to include a larger class of conditional distributions. The theory is then applied to the problem of adjusting the gain of a positional servomechanism; the problem reduces to finding a consistent sequence to estimate φ(z) = e^{−2Tθ}. It is shown that in the following three prior-conditional cases: (1) gamma-Poisson; (2) beta-negative binomial; (3) gamma-exponential; the Empirical Bayes estimator of φ(z) approaches the Bayes estimator φ_G(z) as the number of past observations gets large.
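For the gamma-Poisson case, the Bayes estimator of φ(z) = e^{−cθ} (with c = 2T) has a closed form via the gamma posterior's moment generating function; the empirical Bayes estimator converges to this target. A sketch of the Bayes target only, under an assumed rate parameterization gamma(α, β) for the prior:

```python
def bayes_phi(z, alpha, beta, c):
    """Bayes estimate of phi(theta) = exp(-c * theta) under a gamma(alpha, beta)
    prior (rate parameterization) and a Poisson(theta) observation z: the
    posterior is gamma(alpha + z, beta + 1), so E[exp(-c*theta) | z] is the
    posterior moment generating function evaluated at -c."""
    a_post, b_post = alpha + z, beta + 1.0
    return (b_post / (b_post + c)) ** a_post
```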



