
Showing papers on "Conditional probability distribution" published in 1986


Journal ArticleDOI
TL;DR: In this paper, a natural generalization of the ARCH (Autoregressive Conditional Heteroskedastic) process introduced in 1982 is proposed, allowing past conditional variances to enter the current conditional variance equation.

17,555 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the current research in building models of conditional variances using the Autoregressive Conditional Heteroskedastic (ARCH) and Generalized ARCH (GARCH) formulations.
Abstract: This paper will discuss the current research in building models of conditional variances using the Autoregressive Conditional Heteroskedastic (ARCH) and Generalized ARCH (GARCH) formulations. The discussion will be motivated by a simple asset pricing theory which is particularly appropriate for examining futures contracts with risk averse agents. A new class of models defined to be integrated in variance is then introduced. This new class of models includes the variance analogue of a unit root in the mean as a special case. The models are argued to be both theoretically important for the asset pricing models and empirically relevant. The conditional density is then generalized from a normal to a Student-t with unknown degrees of freedom. By estimating the degrees of freedom, implications about the conditional kurtosis of these models and time aggregated models can be drawn. A further generalization allows the conditional variance to be a non-linear function of the squared innovations. Throughout empirical...

2,055 citations
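
As a rough, self-contained illustration of the generalization described in the two entries above (past conditional variances entering the current conditional variance equation), here is a minimal Python sketch of a GARCH(1,1) simulation; the function name simulate_garch11 and the parameter values are illustrative choices, not taken from either paper.

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """Simulate a GARCH(1,1) process: sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}."""
    rng = np.random.default_rng(seed)
    eps = np.zeros(n)
    sigma2 = np.zeros(n)
    sigma2[0] = omega / (1.0 - alpha - beta)   # unconditional variance as a starting value
    eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
    for t in range(1, n):
        # Current conditional variance depends on the lagged squared innovation (the ARCH term)
        # and on the lagged conditional variance (the GARCH generalization).
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return eps, sigma2

returns, cond_var = simulate_garch11(1000)
print(returns.var(), cond_var.mean())  # both should be near omega/(1 - alpha - beta) = 1.0
```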


Journal ArticleDOI
TL;DR: In this article, the dependence between individuals in a group is modelled by a group specific quantity, which can be interpreted as an unobserved covariate common to the individuals in the group and assumed to follow a positive stable distribution.
Abstract: A class of continuous multivariate lifetime distributions is proposed. The dependence between individuals in a group is modelled by a group specific quantity, which can be interpreted as an unobserved covariate common to the individuals in the group and assumed to follow a positive stable distribution. It is possible to include covariates in the model and discuss whether the dependence is still present after specific covariates are taken into account. If the conditional hazards are proportional, then the hazards in the marginal distributions are also proportional, but with different constants of proportionality. Also the hazard for the minimum in a group is proportional to the marginal hazards. If the conditional distributions given the group quantity are Weibull then the marginal distributions are also Weibull. This class can be used to test the hypothesis of independence of litter mates in the proportional hazards model.

650 citations
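
The closure property stated in the abstract (Weibull conditional distributions imply Weibull marginal distributions) follows directly from the Laplace transform of a positive stable frailty. A short sketch of that step, in a standard parameterization that may differ from the paper's, is:

```latex
% Frailty Z is positive stable with Laplace transform E[exp(-sZ)] = exp(-s^alpha), 0 < alpha < 1.
% Conditional on Z, survival is Weibull: S(t | Z) = exp(-Z * lambda * t^gamma).
S(t) \;=\; \mathbb{E}\!\left[e^{-Z\lambda t^{\gamma}}\right]
     \;=\; \exp\!\left\{-(\lambda t^{\gamma})^{\alpha}\right\}
     \;=\; \exp\!\left\{-\lambda^{\alpha} t^{\gamma\alpha}\right\},
% which is again a Weibull survival function, now with shape gamma*alpha; proportional conditional
% hazards with constants c_j likewise give proportional marginal hazards with constants c_j^alpha.
```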


Journal ArticleDOI
TL;DR: In this article, the persistence of conditional variance is modelled as a deterministic process, and the persistence is modeled as a linear function of the number of variables in the model.
Abstract: (1986). Modeling the Persistence of Conditional Variances: A Comment. Econometric Reviews: Vol. 5, No. 1, pp. 51-56.

572 citations


Journal ArticleDOI
TL;DR: In this article, a broad class of latent variable models, namely the monotone unidimensional models, is studied, in which the latent variable is a scalar, the observable variables are conditionally independent given the latent variable, and the conditional distribution of the observables given the latent variable is stochastically increasing in the latent variable.
Abstract: Latent variable models represent the joint distribution of observable variables in terms of a simple structure involving unobserved or latent variables, usually assuming the conditional independence of the observable variables given the latent variables. These models play an important role in educational measurement and psychometrics, in sociology and in population genetics, and are implicit in some work on systems reliability. We study a broad class of latent variable models, namely the monotone unidimensional models, in which the latent variable is a scalar, the observable variables are conditionally independent given the latent variable and the conditional distribution of the observables given the latent variable is stochastically increasing in the latent variable. All models in this class imply a new strong form of positive dependence among the observable variables, namely conditional (positive) association. This positive dependence condition may be used to test whether any model in this class can provide an adequate fit to observed data. Various applications, generalizations and a numerical example are discussed.

321 citations
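
A small simulation makes the positive-dependence (conditional association) claim concrete: with a scalar latent variable, conditionally independent binary items, and item probabilities increasing in the latent variable, the items come out positively correlated marginally. The logistic item model and parameter values below are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50_000, 4
theta = rng.standard_normal(n)                 # scalar latent variable
difficulty = np.array([-1.0, -0.3, 0.3, 1.0])  # illustrative item parameters

# Conditional on theta, the items are independent Bernoulli variables whose success
# probabilities increase in theta (a monotone unidimensional model).
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulty[None, :])))
x = (rng.random((n, k)) < p).astype(int)

# Marginally, the items are positively associated: all pairwise covariances are positive.
print(np.cov(x, rowvar=False).round(3))
```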


Journal ArticleDOI
TL;DR: In this article, the conditional density of Y given X is estimated by the kernel method and it is shown that the (empirically determined) mode of the kernel estimate is uniformly convergent to the conditional mode function when the process is Φ-mixing.

103 citations
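
A minimal sketch of the estimator referred to in the TL;DR, under assumed Gaussian kernels and arbitrary bandwidths: estimate the conditional density of Y given X = x0 by a ratio of kernel sums and take the maximizing point on a grid as the conditional mode. The function name conditional_mode and the synthetic data are illustrative only.

```python
import numpy as np

def conditional_mode(x_obs, y_obs, x0, y_grid, hx=0.3, hy=0.3):
    """Kernel estimate of f(y | x0) on y_grid, and the y maximizing it (the conditional mode)."""
    kx = np.exp(-0.5 * ((x_obs - x0) / hx) ** 2)                       # kernel weights in x
    ky = np.exp(-0.5 * ((y_grid[:, None] - y_obs[None, :]) / hy) ** 2)  # kernel weights in y
    f_cond = (ky * kx[None, :]).sum(axis=1) / (hy * np.sqrt(2 * np.pi) * kx.sum())
    return y_grid[np.argmax(f_cond)], f_cond

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 2000)
y = np.sin(x) + 0.3 * rng.standard_normal(2000)   # true conditional mode is sin(x)
mode, _ = conditional_mode(x, y, x0=1.0, y_grid=np.linspace(-2, 2, 401))
print(mode)  # should be close to sin(1.0) ~ 0.84
```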


Journal ArticleDOI
TL;DR: In this paper, the distribution of Pearson's statistic and of the likelihood-ratio goodness-of-fit statistic for discrete data in the important case where the data are extensive but sparse is considered.
Abstract: I consider the distribution of Pearson's statistic and of the likelihood-ratio goodness-of-fit statistic for discrete data in the important case where the data are extensive but sparse. It is argued that the appropriate reference distribution is conditional on the sufficient statistic for the unknown regression parameters, β. The first three conditional asymptotic cumulants are derived by Edgeworth expansion, and these are used for the computation of tail probabilities. The principal advantage of the limit considered here, as opposed to the more usual χ² limit, is that the cell counts need not be large.

96 citations
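
For reference, the two statistics being compared are simple to compute; the sketch below evaluates both on an arbitrary sparse table (it does not implement the paper's conditional Edgeworth tail approximation).

```python
import numpy as np

def pearson_and_lr(observed, expected):
    """Pearson's X^2 and the likelihood-ratio statistic G^2 for a set of cell counts."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    x2 = ((observed - expected) ** 2 / expected).sum()
    # Cells with observed == 0 contribute nothing to G^2 (0 * log 0 := 0).
    mask = observed > 0
    g2 = 2.0 * (observed[mask] * np.log(observed[mask] / expected[mask])).sum()
    return x2, g2

obs = np.array([0, 1, 0, 2, 1, 0, 0, 1, 3, 0])   # extensive-but-sparse counts
exp = np.full(10, obs.sum() / 10)                # expected counts under a uniform fit
print(pearson_and_lr(obs, exp))
```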


Journal ArticleDOI
TL;DR: In this article, a partially observed linear system is considered with arbitrary non-Gaussian initial conditions and the corresponding (nonlinear) filtering problem is investigated; an explicit formula is obtained for the conditional expectation of an arbitrary function of the current state given past observations; a set of sufficient statistics is shown to exist which are recursively computable as outputs of a finite-dimensional dynamical system.
Abstract: In this paper, a partially observed linear system is considered with arbitrary non-Gaussian initial conditions and the corresponding (nonlinear) filtering problem is investigated. An explicit formula is obtained for the conditional expectation of an arbitrary function of the current state given past observations; a set of sufficient statistics is shown to exist which are recursively computable as outputs of a finite-dimensional dynamical system. The basic results are specialized to purely complex exponentials and to indicator functions of Borel sets, and yield formulae for the conditional characteristic function and probability law of the current state given past observations. A special case, when some covariance matrix is invertible, is also studied and a sharpening of the basic results is obtained as the existence and form of a conditional density is established. The method of analysis is probabilistic and relies on Girsanov's Theorem, basic results in linear filtering theory, and some easy facts for Gau...

55 citations


Journal ArticleDOI
TL;DR: The disjunctive kriging method described in this paper produces a nonlinear unbiased estimator with the characteristic minimum variance of errors, which is as good as, or better than, linear estimators in the sense of reduced kriging variance and exactness of estimation.
Abstract: The disjunctive kriging method described in this paper produces a nonlinear unbiased estimator with the characteristic minimum variance of errors. Disjunctive kriging is as good as, or better than, linear estimators in the sense of reduced kriging variance and exactness of estimation. It does not suffer from the difficulties associated with computing the conditional expectation and can be thought of as its estimator. Disjunctive kriging also provides an estimate of the conditional probability that a random variable located at a point or averaged over a block in two-dimensional space is above some specified cutoff or tolerance level, and this can be written in terms of the probability distribution or the density function. The method has important implications in aiding management decisions by providing a quantitative input (which is not readily obtained from the linear kriging estimators), based on the available data, which is the best nonlinear unbiased estimator short of the conditional expectation. A major disadvantage in using disjunctive kriging is the increased computational time. This, however, is mitigated by increased information about the estimate.

53 citations


Journal ArticleDOI
TL;DR: In this article, the strong consistency of regression quantile statistics in linear models with iid errors is established; for the proposed estimate of the conditional distribution function of Y, no regularity conditions on the error distribution are required for uniform strong convergence, thus establishing a Glivenko-Cantelli type theorem for this estimator.
Abstract: The strong consistency of regression quantile statistics (Koenker and Bassett [4]) in linear models with iid errors is established. Mild regularity conditions on the regression design sequence and the error distribution are required. Strong consistency of the associated empirical quantile process (introduced in Bassett and Koenker [1]) is also established under analogous conditions. However, for the proposed estimate of the conditional distribution function of Y, no regularity conditions on the error distribution are required for uniform strong convergence, thus establishing a Glivenko-Cantelli-type theorem for this estimator.

48 citations
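
The conditional distribution function estimator mentioned in the abstract can be assembled from regression quantile fits over a grid of quantile levels: F(y | x) is estimated by the fraction of levels τ whose fitted quantile at x does not exceed y. The sketch below is in that spirit, assuming the statsmodels package for the quantile regression fits; the data and the helper cond_cdf are illustrative, not the authors' code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.standard_normal(n)      # linear model with iid standard normal errors
X = sm.add_constant(x)

# Fit regression quantiles on a grid of levels tau = 0.05, 0.10, ..., 0.95.
taus = np.arange(0.05, 1.0, 0.05)
betas = np.vstack([sm.QuantReg(y, X).fit(q=t).params for t in taus])

def cond_cdf(x0, y0):
    """Estimate F(y0 | x0) as the fraction of quantile levels whose fitted quantile is <= y0."""
    fitted_quantiles = betas @ np.array([1.0, x0])
    return (fitted_quantiles <= y0).mean()

# The true F(y0 | x0) here is Phi(y0 - 1 - 0.5*x0); at x0 = 4, y0 = 3 this is Phi(0) = 0.5.
print(cond_cdf(4.0, 3.0))
```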


Journal ArticleDOI
TL;DR: In this article, an "observed" parallel to the differential geometric theory of parametric statistics is established, using observed information and an "observed skewness" tensor instead of the corresponding expected quantities.
Abstract: In the differential geometric approach to parametric statistics, developed by Chentsov, Efron, Amari, and others, the parameter space is set up as a differentiable manifold with expected information as metric tensor and with a family of affine connections, the α-connections, determined from the expected information and the skewness tensor of the score vector. The usefulness of this approach is particularly notable in connection with Edgeworth expansions of estimators. Motivated by the conditionality viewpoint, an "observed" parallel to that theory is established in the present paper using observed information and an "observed skewness" tensor instead of the above expected quantities. The formula $c\,|\hat{\jmath}|^{1/2}\bar{L}$ for the conditional distribution of the maximum likelihood estimator is expanded (to third order) asymptotically and the "observed geometries" are shown to have a role in this type of expansion similar to that of the "expected geometries" in the Edgeworth expansions mentioned above. In these new developments "mixed derivatives of the log model function," defined by means of an auxiliary statistic complementing the maximum likelihood estimator, take the place of moments of derivatives of the log likelihood function.
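For readability, the conditional density approximation referred to above (Barndorff-Nielsen's p* formula), written here in a common notation that may differ slightly from the paper's, is:

```latex
p^{*}\!\left(\hat{\omega} \mid a;\, \omega\right) \;=\; c\,\lvert \hat{\jmath}\rvert^{1/2}\,\bar{L},
\qquad
\bar{L} \;=\; \frac{L(\omega)}{L(\hat{\omega})},
```

where $\hat{\jmath}$ is the observed information evaluated at the maximum likelihood estimate $\hat{\omega}$, $a$ is an (approximately) ancillary statistic complementing $\hat{\omega}$, and $c$ is a norming constant.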

Journal ArticleDOI
TL;DR: A detailed study of first and second-order approximations to the first passage time conditional probability density function (p.d.f.) of a stationary Gaussian process with differentiable sample paths is provided both theoretically and numerically.

Journal Article
TL;DR: In this paper, the conditional density needed to construct a similar test for the mean of a gamma distribution with unknown shape parameter is approximated using saddlepoint techniques for approximating an inversion integral, together with an expansion of the distribution function that reduces to an F-distribution in certain limits.
Abstract: The construction of a similar test region for the hypothesis that the mean of a gamma distribution has a specified value requires numerical integration of a conditional density. This conditional density is not explicitly known and we give here a simple and accurate approximation to the density, using saddlepoint techniques for approximating an inversion integral. To avoid the numerical integration of the approximative density we also derive a simple expansion of the distribution function, which reduces to an F-distribution in certain limits. For the case of several samples with the same shape parameter, we consider the hypothesis that the means are equal. The problems that arise in the construction of a similar test region are the same as for the one sample case, and the solutions given are basically the same too. In this paper we study inference for the mean of a gamma distribution with unknown shape parameter. Both the one sample case and the multisample case will be examined. In the one sample case we consider the hypothesis that the mean has a specified value and in the multisample case we assume that the shape parameter is the same for all samples and consider the hypothesis that all the means are equal. In both cases an exact (or similar) test can be obtained by conditioning on the minimal sufficient statistic under the null hypothesis. However, the conditional distributions needed in the construction of the tests involve functions that are not known in any explicit form. It is the purpose of this paper to obtain simple and accurate approximations to these functions, so that an approximation to the exact tests can be calculated without the use of simulation. If we let y denote the mean of the gamma distribution and let w be the shape parameter, the gamma distribution has density

Journal ArticleDOI
TL;DR: In this article, the authors compared Fisher's exact test, difference in proportions, log odds ratio, Pearson's chi-squared, and likelihood ratio for testing independence of two dichotomous factors when the associated p values are computed by using the conditional distribution given the marginals.
Abstract: Fisher's exact test, difference in proportions, log odds ratio, Pearson's chi-squared, and likelihood ratio are compared as test statistics for testing independence of two dichotomous factors when the associated p values are computed by using the conditional distribution given the marginals. The statistics listed above that can be used for a one-sided alternative give identical p values. For a two-sided alternative, many of the above statistics lead to different p values. The p values are shown to differ only by which tables in the opposite tail from the observed table are considered more extreme than the observed table.
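The point about two-sided p-values can be checked directly: conditioning on both margins makes one cell of the 2×2 table hypergeometric, and the statistics differ only in how tables in the opposite tail are ranked as "more extreme". The sketch below (assuming scipy for the hypergeometric pmf; the example table and function name are illustrative) compares the probability-based ordering with a difference-in-proportions ordering.

```python
import numpy as np
from scipy.stats import hypergeom

def two_sided_pvalues(a, b, c, d):
    """Conditional two-sided p-values for the 2x2 table [[a, b], [c, d]] under two orderings."""
    n1, n2, m1 = a + b, c + d, a + c                  # fixed row and column margins
    support = np.arange(max(0, m1 - n2), min(n1, m1) + 1)
    probs = hypergeom.pmf(support, n1 + n2, n1, m1)   # conditional distribution of cell (1,1)
    obs_prob = hypergeom.pmf(a, n1 + n2, n1, m1)

    # Ordering 1 ("Fisher"): tables at least as extreme = conditional probability <= observed.
    p_fisher = probs[probs <= obs_prob + 1e-12].sum()

    # Ordering 2: tables at least as extreme = |difference in proportions| >= observed.
    diff = np.abs(support / n1 - (m1 - support) / n2)
    obs_diff = abs(a / n1 - c / n2)
    p_diff = probs[diff >= obs_diff - 1e-12].sum()
    return p_fisher, p_diff

print(two_sided_pvalues(1, 9, 6, 4))   # the two orderings need not agree
```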

Journal ArticleDOI
TL;DR: In this article, a new decomposition of the Brier score is described, based on conditional distributions of forecast probabilities given observed events, and, as a result, it differs in a fundamental way from most previous partitions of quadratic verification measures.
Abstract: A new decomposition of the Brier score is described. This decomposition is based on conditional distributions of forecast probabilities given observed events, and, as a result, it differs in a fundamental way from most previous partitions of quadratic verification measures. The new decomposition consists of 1) a term involving the variances of the conditional distributions and 2) a term related to the mean errors in the forecasts, which involves the squared differences between the means of the conditional distributions and the respective mean observations (the latter are necessarily either zero or one). Decreases in these variances and/or mean errors generally lead to improvements in the Brier score. The decomposition may be useful in verification studies, since it appears to provide additional insight into the quality of probabilistic forecasts.
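The decomposition is easy to verify numerically: stratify the forecast probabilities by the observed event (0 or 1); the Brier score then equals the frequency-weighted sum of the within-stratum variance plus the squared difference between the stratum mean forecast and the observation. The data below are made up and the function name is illustrative.

```python
import numpy as np

def brier_decomposition(p, o):
    """Decompose the Brier score by conditioning the forecasts p on the observed events o in {0, 1}."""
    p, o = np.asarray(p, float), np.asarray(o, int)
    bs = np.mean((p - o) ** 2)
    parts = []
    for event in (0, 1):
        px = p[o == event]
        weight = px.size / p.size
        variance = px.var()                    # variance of the forecasts given this event
        mean_error = (px.mean() - event) ** 2  # squared difference of the conditional mean from 0 or 1
        parts.append(weight * (variance + mean_error))
    return bs, sum(parts)                      # the two numbers coincide

rng = np.random.default_rng(0)
o = rng.integers(0, 2, 1000)
p = np.clip(0.6 * o + 0.2 + 0.2 * rng.random(1000), 0, 1)   # crude synthetic forecasts
print(brier_decomposition(p, o))
```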

Book ChapterDOI
01 Jan 1986
TL;DR: The aim of this paper is to provide methods to construct a probability distribution from a possibility distribution using the maximum entropy principle; this leads to a fuzzy mathematical programming problem, which is solved in several examples.
Abstract: Let X be a variable taking its values on a finite set U. If we have fuzzy information about these values, then we may represent this information by means of a possibility distribution; but there are some cases in which we need a probability distribution. The aim of this paper is to provide methods to construct such a probability distribution. The link between these two kinds of information, possibilistic and probabilistic, is given by the concept of possibility-probability consistency. The maximum entropy principle is used, yielding a fuzzy mathematical programming problem, which is solved in several examples.

Journal ArticleDOI
TL;DR: In this article, a methodology to infer the spatial distribution of the statistical moments of a stationary normal random spatial function based on a finite set of measured values is described, and the uncertainty of estimation of the parameters characterizing the probability density function (pdf) is incorporated in the process of inferring and conditioning.
Abstract: A methodology to infer the spatial distribution of the statistical moments of a stationary normal random spatial function based on a finite set of measured values is described. The uncertainty of estimation of the parameters characterizing the probability density function (pdf) is incorporated in the process of inferring and conditioning. Relationships generalizing the traditional approach of stochastic interpolation and simulation have been derived. They yield: (i) an estimate of the conditional expectation; (ii) an estimate of the conditional multivariate normal (MVN) covariance and variance; and (iii) a new quantity, the variance of the estimated variance. The method is illustrated for a two-dimensional synthetic example with both estimated mean and variance regarded as random. A set of 60 points was generated within a square of sides equal to unity from a MVN population with zero mean, unity variance and exponential covariance with a linear integral scale of 0.3. Contour maps of the inferred expected values and variances are drawn and compared with maps of kriged values and maps of kriged variances. Contour maps of variances of estimation of the variance are given to demonstrate the uncertainty of variance estimates. The unconditional and conditional statistical moments of the space average (block value) were also evaluated. The reduction of the uncertainty of the inferred random variable as the number of measured points increases is demonstrated with the same synthetic example.
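The conditioning step at the core of this methodology is the standard multivariate normal update. The sketch below computes the conditional mean and covariance at unsampled locations given measured values under an exponential covariance with integral scale 0.3 (as in the synthetic example); it does not include the paper's extension in which the estimated mean and variance are themselves treated as random. Function names and the placeholder data are illustrative.

```python
import numpy as np

def exp_cov(points_a, points_b, variance=1.0, integral_scale=0.3):
    """Exponential covariance between two sets of 2-D points."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return variance * np.exp(-d / integral_scale)

def condition_mvn(x_pred, x_obs, y_obs, mean=0.0):
    """Conditional mean and covariance of the field at x_pred given observations y_obs at x_obs."""
    c_oo = exp_cov(x_obs, x_obs) + 1e-10 * np.eye(len(x_obs))   # small jitter for numerical stability
    c_po = exp_cov(x_pred, x_obs)
    c_pp = exp_cov(x_pred, x_pred)
    cond_mean = mean + c_po @ np.linalg.solve(c_oo, y_obs - mean)
    cond_cov = c_pp - c_po @ np.linalg.solve(c_oo, c_po.T)
    return cond_mean, cond_cov

rng = np.random.default_rng(0)
x_obs = rng.random((60, 2))                      # 60 points in the unit square, as in the example
y_obs = rng.standard_normal(60)                  # placeholder data, not a true MVN field draw
x_pred = np.array([[0.5, 0.5], [0.1, 0.9]])
m, c = condition_mvn(x_pred, x_obs, y_obs)
print(m, np.diag(c))
```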

Journal ArticleDOI
TL;DR: In this article, a non-parametric maximum likelihood estimate of the distribution F of (x, z) = (choice, attributes) from choice-based samples is derived, which can be thought of as the bias-corrected empirical distribution of (choice, attributes) obtained from the biased data.

Journal ArticleDOI
TL;DR: In this article, the authors provide programs in the BASIC language for computing the Goodman-Kruskal gamma coefficient in three situations: (1) when the data consist of two scores for each of N individual persons/items, as in a correlational situation; (2) when the data are arrayed in an ordered 2×2 contingency table, as in a cross-classification situation; and (3) when the data consist of two conditional probabilities (e.g., the conditional probability of saying "old" given that the actual state is old, and the conditional probability of saying "old" given that the actual state is new), as in an absolute-judgment or detection situation.
Abstract: In this article, I provide programs in the BASIC language for computing the Goodman-Kruskal gamma coefficient in three situations: (1) when the data consist of two scores for each of N individual persons/items, as in a correlational situation; (2) when the data are arrayed in an ordered 2×2 contingency table, as in a cross-classification situation; or (3) when the data consist of two conditional probabilities (e.g., conditional probability of saying “old” given that the actual state is old, and conditional probability of saying “old” given that the actual state is new), as in an absolute-judgment or detection situation.
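Since the article is itself about short programs for this coefficient, here is an equivalent sketch in Python for the first situation (two scores per individual): count concordant and discordant pairs and form their normalized difference. The quadratic pair loop is fine for small N; the example data are arbitrary.

```python
import numpy as np

def goodman_kruskal_gamma(x, y):
    """Goodman-Kruskal gamma: (C - D) / (C + D) over concordant and discordant pairs; ties are ignored."""
    x, y = np.asarray(x), np.asarray(y)
    concordant = discordant = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            s = np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

print(goodman_kruskal_gamma([1, 2, 3, 4, 5], [2, 1, 4, 3, 5]))   # 8 concordant, 2 discordant -> 0.6
```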

Book ChapterDOI
01 Jan 1986
TL;DR: A novel approach to image data compression is proposed which uses a stochastic learning automaton to predict the conditional probability distribution of the adjacent pixels and these conditional probabilities are used to code the gray level values using a Huffman coder.
Abstract: A novel approach to image data compression is proposed which uses a stochastic learning automaton to predict the conditional probability distribution of the adjacent pixels. These conditional probabilities are used to code the gray level values using a Huffman coder. The system achieves a 4/1.7 compression ratio. This performance is achieved without any degradation to the received image.
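The coding gain in a scheme like this comes from conditioning each pixel's distribution on already-decoded neighbors. The sketch below estimates the conditional distribution of a pixel given its left neighbor by simple frequency counts (not a learning automaton) on a synthetic image, and compares unconditional with conditional entropy, which bounds the expected code length a Huffman coder driven by those conditional probabilities could approach. Names and data are illustrative.

```python
import numpy as np

def entropies(img, levels=8):
    """Unconditional entropy of pixel values vs. entropy conditioned on the left neighbor."""
    left, cur = img[:, :-1].ravel(), img[:, 1:].ravel()
    joint = np.zeros((levels, levels))
    for a, b in zip(left, cur):
        joint[a, b] += 1                         # empirical joint counts (left neighbor, current pixel)
    joint /= joint.sum()
    p_cur = joint.sum(axis=0)
    p_left = joint.sum(axis=1)
    h_uncond = -np.sum(p_cur[p_cur > 0] * np.log2(p_cur[p_cur > 0]))
    h_left = -np.sum(p_left[p_left > 0] * np.log2(p_left[p_left > 0]))
    h_joint = -np.sum(joint[joint > 0] * np.log2(joint[joint > 0]))
    return h_uncond, h_joint - h_left            # H(cur), H(cur | left)

rng = np.random.default_rng(0)
img = np.cumsum(rng.integers(-1, 2, (64, 64)), axis=1) % 8    # smooth-ish synthetic "image"
print(entropies(img))                            # the conditional entropy should be noticeably smaller
```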


Journal ArticleDOI
TL;DR: In this article, the formula for an overall collision or transition probability is shown to have a topological invariance with respect to distortions of the ABC zigzag into V, A, or C shapes in either space-time or momentum-energy space.
Abstract: This is for prediction. Retrodictively, the same formula holds, with |A⟩ and ⟨C| denoting the final occupation numbers of the final states. In this, we have an instance of Loschmidt's reversibility. Considering the shape of the ABC zigzag, where B denotes the collision, either in space-time or in the momentum-energy space, we can say that equation 2 is the same for A and V shapes of the zigzag. Equation 2 also holds if A denotes the initial state and C denotes the final state of a molecule; then ⟨A|C⟩ denotes the intrinsic transition probability, |A⟩ denotes the initial occupation number of the initial state, ⟨C| denotes the final occupation number of the final state, and |A⟩⟨C| denotes the overall transition probability. The classics, including Boltzmann, multiplied ⟨A|C⟩ by |A⟩, but not by ⟨C|. This, however, was "intrinsically illogical" because multiplication by |A⟩ implies "statistical indistinguishability," and, if so, there are ⟨C| ways in which a transiting molecule can reach the C state. Physics, of course, vindicates equation 2, thus revealing that there are two sorts of particles: bosons, which are such that |A⟩ = ⟨C| = 0, 1, 2, ..., and fermions, which are such that |A⟩ = ⟨C| = 0, 1. Let us say that in this case, the ABC zigzag has a ( or a C shape. What has been shown is that, in statistical mechanics, equation 2 for an overall collision or transition probability has a topological invariance with respect to distortions of the ABC zigzag into V, A, or C shapes in either space-time or the momentum-energy space. |A⟩⟨C| = |A⟩⟨A|C⟩⟨C| = |C⟩⟨C|A⟩⟨A|.

Book ChapterDOI
01 Jan 1986
TL;DR: Theoretical statistics makes abundant use of a certain concept named “sufficiency” which is traditionally introduced as follows.
Abstract: Theoretical statistics makes abundant use of a certain concept named “sufficiency.” It is traditionally introduced as follows.

Journal ArticleDOI
TL;DR: In this article, the disjunctive kriging (DK) method was extended to account for more than one random function, and the results indicated that the DCK procedure produces a better estimator than ordinary cokriging in terms of reduced variance of errors and exactness of estimation.
Abstract: The disjunctive kriging (DK) method described in the first paper of this series is extended to account for more than one random function. In the derivation contained herein, two random functions are considered, but this is easily generalized to any number. An example is presented using disjunctive cokriging (DCK) where the surface gravimetric moisture content is estimated using the bare soil temperature as an auxiliary random function. The results indicate that the DCK procedure produces a better estimator than ordinary cokriging in terms of reduced variance of errors and exactness of estimation. Also, using DCK, an estimate of the conditional probability that the level of a property is greater than a known cutoff value can be obtained. In general, this conditional probability is better than the DK probability by virtue of the additional information contained in the second, auxiliary random function.

Patent
27 Mar 1986
TL;DR: In this article, the amplitude-density-function generator is coupled with a memory for storing the amplitude density function of the signal x, and the memory is coupled to the processor input so that the memory can be addressed by the input signal y.
Abstract: A signal processor receives an input signal y. The input signal y contains a signal x and has a conditional density function, given that the signal x is present, p(y|x). The signal processor includes a memory for storing the amplitude-density-function of the signal x, p(x). The memory is coupled to the processor input so that the memory is addressed by the input signal y. An amplitude-density-function generator, coupled to the processor input and to the memory, generates the product of the conditional density function p(y|x) and the density function p(x). The processor of the invention also includes apparatus for detecting the peak of that product.
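In estimation terms, the claimed apparatus evaluates the product p(y|x)·p(x) over candidate values of x and detects its peak, i.e., a maximum a posteriori estimate up to the normalizing constant p(y). A software analogue, with an assumed Gaussian likelihood and an illustrative standard normal prior (neither taken from the patent), might look like this:

```python
import numpy as np

def map_estimate(y, x_grid, prior_pdf, noise_sigma=0.5):
    """Peak of p(y | x) * p(x) over x_grid, with p(y | x) a Gaussian likelihood (an assumed model)."""
    likelihood = np.exp(-0.5 * ((y - x_grid) / noise_sigma) ** 2)   # p(y | x), up to a constant
    posterior = likelihood * prior_pdf(x_grid)                      # the stored-and-multiplied product
    return x_grid[np.argmax(posterior)]                             # peak detection

x_grid = np.linspace(-5, 5, 2001)
prior = lambda x: np.exp(-0.5 * x ** 2)          # p(x): standard normal shape, for illustration
print(map_estimate(y=2.0, x_grid=x_grid, prior_pdf=prior))   # shrinks the observation toward the prior mean
```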

Journal ArticleDOI
TL;DR: In this paper, an alternative model for the joint distribution of Xi and Yi is developed, which allows for large-scale heterogeneity in the spatial structure under study and may well be of general interest in spatial studies of biological material.
Abstract: Recently, model-based estimation of parameters defined for a three-dimensional object, from observations on plane sections through the object, has been proposed by Cruz-Orive (1980, Biometrics 36, 595-605). More specifically, the use of linear unbiased estimators of ratios has been suggested in this spatial context if the data (Xi, Yi), i = 1, . . . , n, can be described by a linear regression model through the origin. In the present paper, Cruz-Orive's model proposal for the conditional distribution of Yi given Xi is discussed both empirically and theoretically. Prompted by an analysis of data, an alternative model for the joint distribution of Xi and Yi is developed. The model allows for large-scale heterogeneity in the spatial structure under study and may well be of general interest in spatial studies of biological material.

Journal ArticleDOI
TL;DR: In this paper, the conditional probability of the ith coordinate of an infinite-dimensional diffusion process with respect to the others is characterized in a robust form as the law of a stochastic differential equation with smooth and bounded drift and initial measure.
Abstract: In this paper we characterize the conditional law of the ith coordinate of an infinite-dimensional diffusion process with respect to the others. If the interaction is given by a smooth gradient system of finite range, the conditional probability is determined in a robust form as the law of a stochastic differential equation with smooth and bounded drift and initial measure. Additionally, the conditional law is shown to be Lipschitz continuous with respect to the Vasserstein metric on C[iX 1].


Posted Content
TL;DR: In this article, a general equilibrium asset price model is proposed to describe an economy with actual output generated by a Markovian latent process of technological shocks, in which agents make use of the entire observed history to make inference about the latent technological shocks.
Abstract: The paper has two major parts. The first part focuses on the theoretical properties of a general equilibrium asset price model describing an economy with actual output stochastically generated by a Markovian latent process of technological shocks. With a concealed state space economy, agents make use of the entire observed history to make inference about the latent technological shock. Instead of focusing on the entire history of output, past events are summarized by a conditional probability distribution defined on the space of all possible states of technology. Bayesian updating reestablishes Markovian recursive dynamics, and allows one to exploit the analytical tools introduced by Lucas (1978) in solving for equilibrium asset prices. The second part of the paper deals with the econometric implications of the model. The consumption and portfolio decisions can be expressed as time-invariant functions defined on the transformed state space, i.e., the space of conditional probability distributions on the state of nature at any point in time. This does not necessarily imply that the co-movements of consumption, portfolio decisions, output and asset prices are stationary. We formulate a Gaussian model, very similar to Hansen and Singleton (1983), and estimate it via a state space representation which incorporates the rational expectations equilibrium cross-equation restriction.
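The Bayesian updating step that restores Markovian recursive dynamics is the standard filtering recursion for a hidden Markov state: propagate the conditional state distribution through the transition matrix, reweight by the likelihood of the new observation, and renormalize. The two-state sketch below uses made-up numbers and a generic likelihood vector rather than the paper's Gaussian specification.

```python
import numpy as np

def update_beliefs(belief, transition, obs_likelihood):
    """One step of Bayesian updating of the conditional distribution over latent states."""
    predicted = transition.T @ belief            # prior for the next period's state
    posterior = predicted * obs_likelihood       # reweight by p(observation | state)
    return posterior / posterior.sum()           # renormalize to a probability distribution

transition = np.array([[0.9, 0.1],
                       [0.2, 0.8]])              # latent technology-state transition probabilities
belief = np.array([0.5, 0.5])                    # initial conditional distribution over the states

# Likelihood of each period's observed output under the two states (illustrative numbers).
for obs_likelihood in [np.array([0.8, 0.3]), np.array([0.7, 0.4]), np.array([0.2, 0.9])]:
    belief = update_beliefs(belief, transition, obs_likelihood)
    print(belief)
```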

Journal ArticleDOI
TL;DR: In this paper, an exact distributional framework is developed for analysing an I×J×K contingency table, and expressions for the covariance matrices are presented in terms of Kronecker products of matrices.
Abstract: In this paper an exact distributional framework is developed for analysing an I×J×K contingency table. It is shown that for the case of the hypotheses $H_0: p_{ijk} = p_{i\cdot\cdot}\,p_{\cdot j\cdot}/K$ and $H_0: p_{ijk} = p_{i\cdot\cdot}\,p_{\cdot j\cdot}\,p_{\cdot\cdot k}$ the exact distributional results do not follow as simple extensions of the corresponding results obtained for an I×J table under the hypothesis of independence. From the factorial moment generating functions, expressions for the covariance matrices, in terms of the Kronecker products of matrices, are presented. These expressions give indications of whether or not Pearson's chi-square statistic should be corrected by the factor (n−1)/n. Marginal and conditional distributions are considered briefly and important differences with regard to the results for marginal and conditional distributions for an I×J table are mentioned.