
Showing papers on "Conditional probability distribution published in 1991"


Journal ArticleDOI
TL;DR: In this paper, the authors developed a discrete state space solution method for a class of nonlinear rational expectations models by using numerical quadrature rules to approximate the integral operators that arise in stochastic intertemporal models.
Abstract: The paper develops a discrete state space solution method for a class of nonlinear rational expectations models. The method works by using numerical quadrature rules to approximate the integral operators that arise in stochastic intertemporal models. The method is particularly useful for approximating asset pricing models and has potential applications in other problems as well. An empirical application uses the method to study the relationship between the risk premium and the conditional variability of the equity return under an ARCH endowment process. NONLINEAR DYNAMIC RATIONAL EXPECTATIONS MODELS rarely admit explicit solutions. Techniques like the method of undetermined coefficients or forward-looking expansions, which often work well for linear models, rarely provide explicit solutions for nonlinear models. The lack of explicit solutions complicates the tasks of analyzing the dynamic properties of such models and generating simulated realizations for applied policy work and other purposes. This paper develops a discrete state-space approximation method for a specific class of nonlinear rational expectations models. The class of models is distinguished by two features: first, the solution functions for the endogenous variables are functions of at most a finite number of lags of an exogenous stationary state vector; second, the expectational equations of the model take the form of integral equations, or more precisely, Fredholm equations of the second kind. The key component of the method is a technique, based on numerical quadrature, for forming a discrete approximation to a general time series conditional density. More specifically, the technique provides a means for calibrating a Markov chain, with a discrete state space, whose probability distribution closely approximates the distribution of a given time series. The quality of the approximation can be expected to improve as the discrete state space is made finer.
The term "discrete" is used here in reference to the range space of the random variables and not to the time index; time is always discrete in our analysis. The discretization technique is primarily useful for taking a discrete approximation to the conditional density of the strictly exogenous variables of a model. The specification of this conditional density could be based on a variety of
¹ Financial support under NSF Grants SES-8520244 and SES-8810357 is acknowledged. We thank the co-editor and referees of earlier drafts for many helpful comments that substantially improved the manuscript.
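The quadrature-based Markov-chain calibration described in the abstract can be sketched as follows. This is a minimal illustration specialized to a Gaussian AR(1) state process; the function names and this specialization are assumptions for illustration, not the paper's general construction:

```python
import numpy as np

def npdf(x, mean, sd):
    # normal density, vectorized
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def quadrature_markov_chain(rho, sigma, n):
    """Discretize the conditional density of an AR(1)
    y' = rho*y + sigma*eps, eps ~ N(0,1), into an n-state Markov chain
    via Gauss-Hermite quadrature (Tauchen-Hussey style)."""
    z, w = np.polynomial.hermite.hermgauss(n)  # Gauss-Hermite nodes/weights
    x = np.sqrt(2.0) * sigma * z               # quadrature grid for the state
    base = npdf(x, 0.0, sigma)                 # weighting density at the nodes
    P = np.empty((n, n))
    for i in range(n):
        # transition weights proportional to f(x_j | x_i) / w(x_j)
        P[i] = (w / np.sqrt(np.pi)) * npdf(x, rho * x[i], sigma) / base
        P[i] /= P[i].sum()                     # normalize so each row sums to one
    return x, P
```

Refining the grid (larger `n`) tightens the approximation of the conditional density, matching the abstract's claim that quality improves as the state space is made finer.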

955 citations


Journal ArticleDOI
TL;DR: In this paper, a semiparametric autoregressive conditional heteroscedasticity (ARCH) model is proposed, which has conditional first and second moments given by autoregressive moving average and ARCH parametric formulations but a conditional density that is assumed only to be sufficiently smooth to be approximated by a nonparametric density estimator.
Abstract: This article introduces a semiparametric autoregressive conditional heteroscedasticity (ARCH) model that has conditional first and second moments given by autoregressive moving average and ARCH parametric formulations but a conditional density that is assumed only to be sufficiently smooth to be approximated by a nonparametric density estimator. For several particular conditional densities, the relative efficiency of the quasi-maximum likelihood estimator is compared with maximum likelihood under correct specification. These potential efficiency gains for a fully adaptive procedure are compared in a Monte Carlo experiment with the observed gains from using the proposed semiparametric procedure, and it is found that the estimator captures a substantial proportion of the potential. The estimator is applied to daily stock returns from small firms that are found to exhibit conditional skewness and kurtosis and to the British pound to dollar exchange rate.

494 citations


Journal ArticleDOI
TL;DR: In this paper, different specifications of conditional expectations are compared with nonparametric techniques that make no assumptions about the distribution of the data, and the conditional mean and variance of the NYSE market return are examined.
Abstract: This paper explores different specifications of conditional expectations. The most common specification, linear least squares, is contrasted with nonparametric techniques that make no assumptions about the distribution of the data. Nonparametric regression is successful in capturing some nonlinearities in financial data, in particular, asymmetric responses of security returns to the direction and magnitude of market returns. The technique is ideally suited for empirically modeling returns of securities that have complicated embedded options. The conditional mean and variance of the NYSE market return are also examined. Forecasts of market returns are not improved with the nonparametric techniques, which suggests that linear conditional expectations are a reasonable approximation in conditional asset pricing research. However, the linear model produces a disturbing number of negative expected excess returns. My results also indicate that the relation between the conditional mean and variance depends on the specification of the conditional variance. Furthermore, a linear model relating mean to variance is rejected, and these tests are not sensitive to the expectation-generating mechanism or the conditioning information. Rejections are driven by the distinct countercyclical variation in the ratio of the conditional mean to variance. A revised version of this paper was published in the Journal of Empirical Finance in 2001.
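As a concrete illustration of the nonparametric techniques contrasted with linear least squares here, a minimal Nadaraya-Watson kernel regression sketch (the function name and Gaussian-kernel choice are illustrative assumptions, not the paper's exact estimator):

```python
import numpy as np

def nw_regression(x_grid, X, Y, h):
    """Nadaraya-Watson kernel estimate of E[Y | X = x] on a grid,
    with a Gaussian kernel and bandwidth h."""
    # kernel weights: rows index grid points, columns index observations
    K = np.exp(-0.5 * ((x_grid[:, None] - X[None, :]) / h) ** 2)
    # locally weighted average of Y at each grid point
    return (K * Y[None, :]).sum(axis=1) / K.sum(axis=1)
```

Because no functional form is imposed, such an estimator can pick up asymmetric responses of returns to the direction and magnitude of market moves, the kind of nonlinearity the abstract describes.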

287 citations


Journal ArticleDOI
TL;DR: In this article, regression-based conditional mean and conditional variance diagnostics are proposed for nonlinear models of conditional means and conditional variances for cross-section or time-series data, and the distinguishing feature of the current approach, which builds on already popular residual-based procedures, is that no auxiliary assumptions are imposed at any testing stage.

282 citations


Journal ArticleDOI
TL;DR: In this article, two nonparametric maximum likelihood methods based on the EM algorithm are presented for the analysis of the distribution of parameters in pharmacokinetic models based on population data.

163 citations


Journal ArticleDOI
TL;DR: In this article, a class of new parametric models on the unit simplex in R^m is introduced, the distributions in question being obtained as conditional distributions of m independent generalized inverse Gaussian random variables given their sum.

136 citations


Journal ArticleDOI
TL;DR: In this paper, a truncated Hermite expansion with an ARCH leading term is used as the conditional density of the process and the method of maximum likelihood is used to fit it to data.
Abstract: In econometrics, seminonparametric (SNP) estimators originated in the consumer demand literature. The Fourier flexible form is a well-known example. The idea is to replace the consumer's indirect utility function with a truncated series expansion and then use a parametric procedure, such as nonlinear multivariate regression, to set a confidence interval on an elasticity. More recently, SNP estimators have been used in nonlinear time series analysis. A truncated Hermite expansion with an ARCH leading term is used as the conditional density of the process. The method of maximum likelihood is used to fit it to data.

113 citations


Journal ArticleDOI
TL;DR: In this paper, the authors extended the analysis of asset pricing when information is incomplete by relaxing some of the restrictive assumptions of the Gaussian model studied earlier in the literature, and used a separation theorem to produce a closed-form solution for the interest rate process when the investor's utility function is logarithmic.

94 citations


Journal ArticleDOI
J. Winnicki1
TL;DR: In this article, an estimation theory for the variances of the offspring and immigration distributions in a simple branching process with immigration is developed, analogous to the estimation theory given by Wei and Winnicki (1990).
Abstract: Estimation theory for the variances of the offspring and immigration distributions in a simple branching process with immigration is developed, analogous to the estimation theory for the means given by Wei and Winnicki (1990). Conditional and weighted conditional least squares estimators are considered and their asymptotic properties for the full range of parameters are studied. Nonexistence of consistent estimators in the critical case is established, which complements the analogous result of Wei and Winnicki for the supercritical case.

92 citations


Journal ArticleDOI
TL;DR: This article developed robust regression-based conditional moment tests for models estimated by quasi-maximum-likelihood using a density in the linear exponential family, which are relatively simple to compute, while being robust to distributional assumptions other than those being explicitly tested.

87 citations


Journal ArticleDOI
TL;DR: In this paper, the conditional sojourn time distribution is used to estimate the conditional transition probabilities of a switching process in a semi-Markov process with imperfect observations and changing structures.
Abstract: A switching process in which the switching probabilities depend on a random sojourn time is a class of semi-Markov processes and is encountered in target tracking, systems subject to failures, and also in the socioeconomic environment. In such a system, knowledge of the sojourn time is needed for the computation of the conditional transition probabilities. It is shown how one can infer the transition probabilities through the evaluation of the conditional distribution of the sojourn time. Subsequently, a recursive state estimation for such systems is obtained using the conditional sojourn time distribution for dynamic systems with imperfect observations and changing structures (models).

Book ChapterDOI
13 Jul 1991
TL;DR: In this paper, an approach to reasoning with default rules where the proportion of exceptions, or more generally the probability of encountering an exception, can be at least roughly assessed is presented.
Abstract: An approach to reasoning with default rules where the proportion of exceptions, or more generally the probability of encountering an exception, can be at least roughly assessed is presented. It is based on local uncertainty propagation rules which provide the best bracketing of a conditional probability of interest from the knowledge of the bracketing of some other conditional probabilities. A procedure that uses two such propagation rules repeatedly is proposed in order to estimate any simple conditional probability of interest from the available knowledge. The iterative procedure, that does not require independence assumptions, looks promising with respect to the linear programming method. Improved bounds for conditional probabilities are given when independence assumptions hold.

Journal ArticleDOI
TL;DR: In this article, it is shown that the conditional distributions from a GARCH-like process, which explicitly models the clustering of volatility and exhibits the fat-tail property as well, can be stable given suitable conditions.

Journal ArticleDOI
TL;DR: This paper presents an asymptotic approximation of marginal tail probabilities for a real-valued function of a random vector, where the function has continuous gradient that does not vanish at the mode of the joint density of the random vector.
Abstract: This paper presents an asymptotic approximation of marginal tail probabilities for a real-valued function of a random vector, where the function has a continuous gradient that does not vanish at the mode of the joint density of the random vector. This approximation has error O(n^{-3/2}) and improves upon a related standard normal approximation which has error O(n^{-1}). Derivation involves the application of a tail probability formula given by DiCiccio, Field & Fraser (1990) to an approximation of a marginal density derived by Tierney, Kass & Kadane (1989). The approximation can be applied for Bayesian and conditional inference as well as for approximating sampling distributions, and the accuracy of the approximation is illustrated through several numerical examples related to such applications. In the context of conditional inference, we develop refinements of the standard normal approximation to the distribution of two different signed root likelihood ratio statistics for a component of the natural parameter in exponential families.

Journal ArticleDOI
TL;DR: In this paper, it is shown that a non-normal bivariate distribution can have conditional distribution functions that are normal in both directions, which appear as Gaussian curves in the three-dimensional plots.
Abstract: It is possible for a nonnormal bivariate distribution to have conditional distribution functions that are normal in both directions. This article presents several examples, with graphs, including a counterintuitive bimodal joint density. The graphs simultaneously display the joint density and the conditional density functions, which appear as Gaussian curves in the three-dimensional plots.
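One well-known family in the spirit of these examples, with normal conditionals in both directions but a possibly nonnormal, even bimodal, joint density (the constants A, B, C_1, C_2 below are illustrative, not necessarily the authors' exact parameterization):

```latex
f(x,y) \;\propto\; \exp\Big\{-\tfrac{1}{2}\big(A x^2 y^2 + x^2 + y^2 - 2Bxy - 2C_1 x - 2C_2 y\big)\Big\},
```

Completing the square in x shows each conditional is Gaussian,

```latex
X \mid Y = y \;\sim\; N\!\left(\frac{By + C_1}{A y^2 + 1},\; \frac{1}{A y^2 + 1}\right),
```

and symmetrically for Y given X; with A > 0 the joint density is nonnormal, and suitable constants make it bimodal.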

Journal ArticleDOI
TL;DR: The stronger notion of uniform stochastic ordering is shown to be quite tractable in matters of statistical inference, and the proposed test is shown to be asymptotically distribution free.
Abstract: Stochastic ordering between probability distributions is a widely studied concept. It arises in numerous settings and has useful applications. Since it is often easy to make value judgments when such orderings exist, it is desirable to recognize their occurrence and to model distributional structure under such orderings. Unfortunately, the necessary theory for statistical inference procedures has not been developed for many problems involving stochastic ordering and this development seems to be a difficult task. We show in this paper that the stronger notion of uniform stochastic ordering (which is equivalent to failure rate ordering for continuous distributions) is quite tractable in matters of statistical inference. In particular, we consider nonparametric maximum likelihood estimation for $k$-population problems under uniform stochastic ordering restrictions. We derive closed-form estimates even with right-censored data by a reparameterization which reduces the problem to a well-known isotonic regression problem. We also derive the asymptotic distribution of the likelihood ratio statistic for testing equality of the $k$ populations against the uniform stochastic ordering restriction. This asymptotic distribution is of the chi-bar-square type as discussed by Robertson, Wright and Dykstra. These distributional results are obtained by appealing to elegant results from empirical process theory and showing that the proposed test is asymptotically distribution free. Recurrence formulas are derived for the weights of the chi-bar-square distribution for particular cases. The theory developed in this paper is illustrated by an example involving data for survival times for carcinoma of the oropharynx.

Journal ArticleDOI
TL;DR: In this article, a simple derivation of the probability distribution of the monopulse ratio is presented based upon a conditional distribution and considers both Rayleigh targets and simple non-Rayleigh cases.
Abstract: A simple derivation of the probability distribution of the monopulse ratio is presented. The derivation is based upon a conditional distribution and considers both Rayleigh targets and simple non-Rayleigh cases. The mean is obtained almost without calculation. The variance expression is given a completely general noise and glint interpretation. Analytical expressions for angle error mean and spread, including noise, target width, and unresolved targets, are presented as functions of antenna position, in simple and comprehensive diagrams.

Journal ArticleDOI
TL;DR: In this article, the authors investigate nonparametric curve estimation by kernel methods when the observed data satisfy a strong mixing condition and give precise asymptotic evaluations of these errors.

Journal ArticleDOI
TL;DR: In this article, a real application where this question is relevant: the use of gamma-camera imagery in the location of lesions is discussed based on real and simulated data, and physical modeling and algorithms are presented.
Abstract: After a brief review of the paradigm of Bayesian image restoration, we pose the question: If high-level prior information is available and usable, what is lost by modelling at the pixel level instead? Our discussion is based on a real application where this question is relevant: the use of gamma-camera imagery in the location of lesions. Procedures using 'global' prior information in the form of a structural model for the image are compared with those using more conventional 'local' priors, modelling only interactions among neighbouring pixels. We address in detail physical modelling and algorithms, and present results with both real and simulated data. Most modern work on statistical approaches to the extraction of information from digital images is based on the paradigm of Bayesian image analysis pioneered by Besag (1983, 1986) and Geman & Geman (1984). This aims to embrace a wide variety of image analysis tasks, arising in applications right across the applied sciences from geography to medicine, within a framework based on probabilistic modelling and statistical inference. The essence of this framework is as follows. 1. The observed image or record, a finite array of numbers representing the pixellated signals or intensities, is regarded as a realization of a random vector. 2. The true 'state of nature', about which the record provides partial information, is a realization of a random array (or function) called the true image or truth. 3. Information about the truth available prior to observation is represented by a probability distribution. 4. The process generating the record, typically incorporating various forms of degradation such as blur, noise, geometrical distortion and discretization, is represented by the record's conditional distribution given the truth. 5. It is required to make inference about the truth, or some function thereof: this will be based on the conditional distribution of the truth given the record. 
We will use the symbols x and y to denote the truth and the record respectively. All probability distributions will be expressed as densities with respect to appropriate measures, and will be denoted generically by p: thus p(x), p(y|x) and p(x|y) represent the three distributions mentioned above. Within this broad framework, there is considerable flexibility: we briefly elaborate on each of the items above. Even what constitutes the raw data y may be ambiguous when the image is recorded by modern instrumentation incorporating data

Journal ArticleDOI
01 Jan 1991
TL;DR: A theory of discrete-time optimal filtering and smoothing based on convex sets of probability distributions is presented and the resulting estimator is an exact solution to the problem of running an infinity of Kalman filters and fixed-interval smoothers.
Abstract: A theory of discrete-time optimal filtering and smoothing based on convex sets of probability distributions is presented. Rather than propagating a single conditional distribution as does conventional Bayesian estimation, a convex set of conditional distributions is evolved. For linear Gaussian systems, the convex set can be generated by a set of Gaussian distributions with equal covariance with means in a convex region of state space. The conventional point-valued Kalman filter is generalized to a set-valued Kalman filter consisting of equations of evolution of a convex set of conditional means and a conditional covariance. The resulting estimator is an exact solution to the problem of running an infinity of Kalman filters and fixed-interval smoothers, each with different initial conditions. An application is presented to illustrate and interpret the estimator results.
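For reference, the conventional point-valued Kalman recursion that the set-valued construction runs over a convex set of means (a standard textbook sketch, not the paper's set-valued equations; the function name is illustrative):

```python
import numpy as np

def kalman_step(m, P, y, F, Q, H, R):
    """One predict/update step of the point-valued Kalman filter for
    x' = F x + w, w ~ N(0,Q);  y = H x + v, v ~ N(0,R)."""
    # predict
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # update
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new
```

Because the gain and covariance recursion do not depend on the mean, a convex set of initial means stays convex under this map, which is the structural fact the set-valued filter exploits.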

Journal ArticleDOI
TL;DR: In this article, a class of conditional $U$-statistics, which generalize the Nadaraya-Watson estimate of a regression function, is introduced.
Abstract: We introduce a class of so-called conditional $U$-statistics, which generalize the Nadaraya-Watson estimate of a regression function in the same way as Hoeffding's classical $U$-statistics is a generalization of the sample mean. Asymptotic normality and weak and strong consistency are proved.

Journal ArticleDOI
TL;DR: In this article, the Strassen-Dudley theorem is used to obtain strong invariance principles for vector-valued martingales which, when properly normalized, converge in law to a mixture of Gaussian distributions.
Abstract: In this paper we focus on sequences of random vectors which do not admit a strong approximation of their partial sums by sums of independent random vectors. In the first part we prove conditional versions of the Strassen-Dudley theorem. We apply these in the second part of the paper to obtain strong invariance principles for vector-valued martingales which, when properly normalized, converge in law to a mixture of Gaussian distributions.

Journal ArticleDOI
TL;DR: In this paper, the saddlepoint method is used to construct a conditional density for a real parameter in an exponential linear model, using only a two-pass calculation on the observed likelihood function for the original data.
Abstract: For an exponential linear model, the saddlepoint method gives accurate approximations for the density of the minimal sufficient statistic or maximum likelihood estimate, and for the corresponding distribution functions. In this paper we describe a simple numerical procedure that constructs such approximations for a real parameter in an exponential linear model, using only a two-pass calculation on the observed likelihood function for the original data. Simple examples of the numerical procedure are discussed, but we take the general accuracy of the saddlepoint procedure as given. An immediate application of this is to exponential family models, where inference for a component of the canonical parameter is to be based on the conditional density of the corresponding component of the sufficient statistic, given the remaining components. This conditional density is also of exponential family form, but its functional form and cumulant-generating function may not be accessible. The procedure is applied to the corresponding likelihood, approximated as the full likelihood divided by an approximate marginal likelihood obtained from Barndorff-Nielsen's formula. A double saddlepoint approximation provides another means of bypassing this difficulty. The computational procedure is also examined as a numerical procedure for obtaining the saddlepoint approximation to the Fourier inversion of a characteristic function. As such it is a two-pass calculation on a table of the cumulant-generating function.
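The standard saddlepoint density approximation underlying these calculations: for a statistic with cumulant-generating function K,

```latex
\hat{f}(x) \;=\; \big(2\pi K''(\hat{s})\big)^{-1/2}
\exp\!\big\{K(\hat{s}) - \hat{s}\,x\big\},
\qquad \text{where } \hat{s} \text{ solves } K'(\hat{s}) = x .
```

The two-pass procedure in the paper amounts to tabulating K and its derivatives from the observed likelihood and then solving the saddlepoint equation at each x of interest.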

Journal ArticleDOI
TL;DR: In this paper, a recursive solution for optimal sequences of decisions given uncertainty in future weather events, and forecasts of those events, is presented, which incorporates a representation of the autocorrelation that is typically exhibited.
Abstract: A recursive solution for optimal sequences of decisions given uncertainty in future weather events, and forecasts of those events, is presented. The formulation incorporates a representation of the autocorrelation that is typically exhibited. The general finite-horizon dynamic decision–analytic framework is employed, with the weather forecast for the previous decision period included as a state variable. Serial correlation is represented through conditional probability distributions of the forecast for the current decision period, given the forecast for the previous period. Autocorrelation of the events is represented by proxy through the autocorrelation of the forecasts. The formulation is practical to implement operationally, and efficient in the sense that the weather component can be represented through a single state variable. A compact representation of the required conditional distributions, based on an autoregressive model for forecast autocorrelation, is presented for the em of 24-h prob...

Posted Content
TL;DR: In this paper, the authors estimate the conditional distribution of trade-to-trade price changes using ordered probit, a statistical model for discrete random variables, taking into account the fact that transaction price changes occur in discrete increments, typically eighths of a dollar, and occur at irregularly spaced time intervals.
Abstract: We estimate the conditional distribution of trade-to-trade price changes using ordered probit, a statistical model for discrete random variables. Such an approach takes into account the fact that transaction price changes occur in discrete increments, typically eighths of a dollar, and occur at irregularly spaced time intervals. Unlike existing continuous-time/discrete-state models of discrete transaction prices, ordered probit can capture the effects of other economic variables on price changes, such as volume, past price changes, and the time between trades. Using 1988 transactions data for over 100 randomly chosen U.S. stocks, we estimate the ordered probit model via maximum likelihood and use the parameter estimates to measure several transaction-related quantities, such as the price impact of trades of a given size, the tendency towards price reversals from one transaction to the next, and the empirical significance of price discreteness.
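The ordered probit likelihood described above maps a latent index into discrete price-change categories via cutpoints. A minimal sketch of the negative log-likelihood (the function names and parameter packing are illustrative assumptions, not the authors' implementation):

```python
import numpy as np
from math import erf, sqrt

def ncdf(z):
    # standard normal CDF, vectorized via the error function
    return 0.5 * (1.0 + np.vectorize(erf)(np.asarray(z, dtype=float) / sqrt(2.0)))

def ordered_probit_nll(params, X, y):
    """Negative log-likelihood of an ordered probit with K categories:
    P(y = k | x) = Phi(c_{k+1} - x'b) - Phi(c_k - x'b),
    params = (b, interior cutpoints), categories y in {0, ..., K-1}."""
    k = X.shape[1]
    beta, cuts = params[:k], np.sort(params[k:])
    c = np.concatenate(([-np.inf], cuts, [np.inf]))  # padded cutpoints
    z = X @ beta
    p = ncdf(c[y + 1] - z) - ncdf(c[y] - z)
    return -np.log(np.clip(p, 1e-300, None)).sum()
```

Minimizing this over the slope and cutpoint parameters (e.g., with a generic numerical optimizer) yields the maximum likelihood estimates, from which quantities like the price impact of a trade of given size can be computed.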

Journal ArticleDOI
TL;DR: A method of estimating the centiles of a conditional distribution using multi-dimensional kernel density estimation, to allow conditioning on the value of one or more covariates, is proposed.
Abstract: Observing a clinical measurement for an individual is of little value, unless it can be compared with measurements obtained from a healthy population, thought of as standard. The range of measurements observed will, in general, vary with age or some function of time. The usual approach is to assume a distributional form for the population density, but this is inappropriate for variables which do not follow a simple distribution. A method of estimating the centiles of a conditional distribution using multi-dimensional kernel density estimation, to allow conditioning on the value of one or more covariates, is proposed. By careful choice of the kernel used, the percentiles may easily be calculated using a Newton-Raphson procedure. The method is illustrated using kidney lengths and birthweights of a sample of newborn infants.


Book ChapterDOI
TL;DR: Bootstrap techniques naturally arise in the setting of nonparametric regression when we consider questions of smoothing parameter selection or error bar construction as mentioned in this paper, and they provide a simple-to-implement alternative to procedures based on asymptotic arguments.
Abstract: Bootstrap techniques naturally arise in the setting of nonparametric regression when we consider questions of smoothing parameter selection or error bar construction. The bootstrap provides a simple-to-implement alternative to procedures based on asymptotic arguments. In this paper we give an overview over the various bootstrap techniques that have been used and proposed in nonparametric regression. The bootstrap has to be adapted to the models and questions one has in mind. An interesting variant that we consider more closely is called the Wild Bootstrap. This technique has been used for construction of confidence bands and for comparison with competing parametric models.
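The Wild Bootstrap mentioned above resamples responses by multiplying each residual by an independent sign, preserving the heteroscedasticity pattern of the fit. A minimal sketch (Rademacher multipliers are one common choice; the function name is illustrative):

```python
import numpy as np

def wild_bootstrap(fitted, resid, n_boot, rng=None):
    """Wild-bootstrap responses y* = m_hat(x) + resid * V, with
    Rademacher multipliers V (+1 or -1 with probability 1/2), so that
    E[V] = 0 and E[V^2] = 1 and each resample keeps the original
    residual magnitudes at each design point."""
    rng = rng or np.random.default_rng()
    V = rng.choice([-1.0, 1.0], size=(n_boot, len(resid)))
    return fitted[None, :] + resid[None, :] * V
```

Each row is one bootstrap sample of responses; refitting the nonparametric regression to every row gives the distribution used for confidence bands.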

Journal ArticleDOI
TL;DR: In this article, a mixture of two multivariate normal populations is illustrated through the analytical expressions of its conditional distribution and moments, and a comparison of the mixture statistics with those predicted by traditional models ignoring the mixture reveals the inadequacy and inappropriateness of these traditional approaches.
Abstract: A simple example simulating a mixture of two normal populations results in some important observations: nonnormality and nonsymmetry of the mixture conditional pdf; nonlinearity of the conditional mean as a function of the conditioning data; heteroscedasticity of the conditional variance and its nonmonotonicity as a function of distance of the unknown to the conditioning data. A comparison of the mixture statistics with those predicted by traditional models ignoring the mixture reveals the inadequacy and inappropriateness of these traditional approaches. A mixture of two multivariate normal populations is illustrated through the analytical expressions of its conditional distribution and moments.
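The nonlinear conditional mean of a bivariate normal mixture can be computed in closed form: each component contributes its linear regression line, weighted by the posterior probability of that component given the conditioning value. A minimal sketch (function names and the tuple layout are illustrative assumptions):

```python
import numpy as np

def npdf(x, m, s):
    # univariate normal density
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def mixture_conditional_mean(x, comps):
    """E[Y | X = x] for a mixture of bivariate normals.
    comps: list of (weight, mu_x, mu_y, sd_x, sd_y, rho) per component."""
    num, den = 0.0, 0.0
    for w, mux, muy, sx, sy, rho in comps:
        wx = w * npdf(x, mux, sx)                 # posterior component weight
        cond_mean = muy + rho * sy / sx * (x - mux)  # within-component regression
        num += wx * cond_mean
        den += wx
    return num / den
```

With two well-separated components the posterior weights shift with x, producing exactly the nonlinear conditional mean and heteroscedastic conditional variance the example highlights; with a single component the formula reduces to the familiar linear regression.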

Journal ArticleDOI
TL;DR: This article presents an algorithm that calculates kernel-smoothed conditional quantiles with a cross-validation choice of bandwidth for X, which is computationally feasible for large data sets when X assumes a small number of values since it requires only one pass through the full data set.
Abstract: Quantiles of a variable Y conditional on another variable X, when plotted against X, can be a useful descriptive tool. These plots give a quick impression of the functional form of the relation between X and the location, spread, and shape of the conditional distribution of Y. If several Y are observed for each X, then sample quantiles could be calculated for each X. The resulting quantile plot may be quite noisy, however, and smoothing across X may be desired. This article presents an algorithm that calculates kernel-smoothed conditional quantiles with a cross-validation choice of bandwidth for X. It is computationally feasible for large data sets when X assumes a small number of values since it requires only one pass through the full data set. The cross-validation does not require a pass through the data because of simplifications arising from the L 1 loss function being based on absolute values. The technique is illustrated by plotting the conditional quantiles of the net wealth of a sample of...
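A kernel-smoothed conditional quantile of the kind described can be computed as a kernel-weighted sample quantile, exploiting the fact that the L1 check loss is minimized by a weighted order statistic (a minimal single-point sketch under a Gaussian kernel; the function name and details are illustrative, not the article's algorithm):

```python
import numpy as np

def kernel_quantile(x0, X, Y, h, tau):
    """Kernel-smoothed tau-th conditional quantile of Y given X = x0:
    a weighted sample quantile with Gaussian kernel weights in X."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # kernel weights around x0
    order = np.argsort(Y)
    cw = np.cumsum(w[order]) / w.sum()       # normalized cumulative weights
    return Y[order][np.searchsorted(cw, tau)]
```

Evaluating this over a grid of x0 and several tau values produces the smoothed conditional-quantile plots the article uses to display the location, spread, and shape of the conditional distribution.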