
Showing papers on "Probability distribution" published in 1999


Journal ArticleDOI
TL;DR: The theory of possibility described in this paper is related to the theory of fuzzy sets by defining the concept of a possibility distribution as a fuzzy restriction which acts as an elastic constraint on the values that may be assigned to a variable.

8,918 citations


Journal ArticleDOI
TL;DR: In this article, a replica-exchange method was proposed to overcome the multiple-minima problem by exchanging non-interacting replicas of the system at several temperatures, which allows the calculation of any thermodynamic quantity as a function of temperature in that range.

4,135 citations


Proceedings Article
29 Nov 1999
TL;DR: The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data and is regularized by controlling the length of the weight vector in an associated feature space.
Abstract: Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. We provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data.
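A minimal sketch of this ν-parameterized one-class approach using scikit-learn's OneClassSVM, which follows the same formulation; the Gaussian training data, kernel width, and ν value are illustrative assumptions, not taken from the paper.

```python
# Sketch: estimate a region S containing most of the probability mass of P,
# flagging roughly a nu-fraction of points as lying outside S.
# Data and parameter values are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # samples drawn from P

clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05)        # nu bounds the training-outlier fraction
clf.fit(X_train)

X_test = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
pred = clf.predict(X_test)                                 # +1 inside the estimated set S, -1 outside
print("fraction flagged as outliers:", np.mean(pred == -1))
```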

1,851 citations


Book
01 Nov 1999
TL;DR: Basic Concept of Reliability, Commonly Used Probability Distributions, and Determination of Distributions and Parameters from Observed Data.
Abstract: Basic Concept of Reliability. Mathematics of Probability. Modeling of Uncertainty. Commonly Used Probability Distributions. Determination of Distributions and Parameters from Observed Data. Randomness in Response Variables. Fundamentals of Reliability Analysis. Advanced Topics on Reliability Analysis. Simulation Techniques. Appendices. Conversion Factors. References. Index.

1,456 citations


Journal ArticleDOI
28 Oct 1999-Nature
TL;DR: It is shown that, when the target sites are sparse and can be visited any number of times, an inverse square power-law distribution of flight lengths, corresponding to Lévy flight motion, is an optimal strategy.
Abstract: We address the general question of what is the best statistical strategy to adapt in order to search efficiently for randomly located objects ('target sites'). It is often assumed in foraging theory that the flight lengths of a forager have a characteristic scale: from this assumption gaussian, Rayleigh and other classical distributions with well-defined variances have arisen. However, such theories cannot explain the long-tailed power-law distributions of flight lengths or flight times that are observed experimentally. Here we study how the search efficiency depends on the probability distribution of flight lengths taken by a forager that can detect target sites only in its limited vicinity. We show that, when the target sites are sparse and can be visited any number of times, an inverse square power-law distribution of flight lengths, corresponding to Lévy flight motion, is an optimal strategy. We test the theory by analysing experimental foraging data on selected insect, mammal and bird species, and find that they are consistent with the predicted inverse square power-law distributions.
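A small sketch of drawing flight lengths from an inverse square power law with a lower cutoff, via inverse-transform sampling; the cutoff scale and sample size are illustrative.

```python
# Sample flight lengths from P(l) = l_min / l^2 for l >= l_min (inverse square
# power law) using the inverse CDF l = l_min / u with u uniform on (0, 1).
import numpy as np

rng = np.random.default_rng(6)
l_min = 1.0                        # illustrative lower cutoff (detection-radius scale)
u = rng.random(100_000)
lengths = l_min / u                # inverse-transform sample of flight lengths

print("median flight length:", np.median(lengths))    # finite (= 2 * l_min in theory)
print("sample mean flight length:", lengths.mean())   # dominated by rare, very long flights
```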

1,416 citations


Proceedings Article
18 Jul 1999
TL;DR: Monte Carlo Localization is a version of Markov localization, a family of probabilistic approaches that have recently been applied with great practical success; it yields improved accuracy while requiring an order of magnitude less computation than previous approaches.
Abstract: This paper presents a new algorithm for mobile robot localization, called Monte Carlo Localization (MCL). MCL is a version of Markov localization, a family of probabilistic approaches that have recently been applied with great practical success. However, previous approaches were either computationally cumbersome (such as grid-based approaches that represent the state space by high-resolution 3D grids), or had to resort to extremely coarse-grained resolutions. Our approach is computationally efficient while retaining the ability to represent (almost) arbitrary distributions. MCL applies sampling-based methods for approximating probability distributions, in a way that places computation "where needed." The number of samples is adapted on-line, thereby invoking large sample sets only when necessary. Empirical results illustrate that MCL yields improved accuracy while requiring an order of magnitude less computation when compared to previous approaches. It is also much easier to implement.
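A toy one-dimensional particle filter conveying the sample-weight-resample cycle that MCL applies to robot pose distributions; the motion model, measurement model, noise levels, and landmark position are illustrative assumptions rather than the paper's robot setup.

```python
# Toy 1-D particle filter: represent the pose distribution by samples and
# update them with a motion step, a measurement weighting, and resampling.
# All models and noise levels here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N = 1000                                   # number of samples (particles)
particles = rng.uniform(0.0, 10.0, N)      # prior: uniform over a 10 m corridor
landmark = 7.0                             # assumed landmark position

def step(particles, control, measurement, motion_std=0.1, meas_std=0.3):
    # 1. Prediction: propagate each sample through the motion model.
    particles = particles + control + rng.normal(0.0, motion_std, particles.size)
    # 2. Correction: weight each sample by the range-measurement likelihood.
    expected = np.abs(landmark - particles)
    weights = np.exp(-0.5 * ((measurement - expected) / meas_std) ** 2)
    weights /= weights.sum()
    # 3. Resampling: draw a new sample set proportional to the weights.
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    return particles[idx]

particles = step(particles, control=1.0, measurement=4.0)
print("estimated position:", particles.mean())
```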

1,206 citations


Journal ArticleDOI
TL;DR: Azzalini and Dalla Valle as discussed by the authors have discussed the multivariate skew normal distribution which extends the class of normal distributions by the addition of a shape parameter, and a further extension is described which introduces a skewing factor of an elliptical density.
Abstract: Azzalini and Dalla Valle have recently discussed the multivariate skew normal distribution which extends the class of normal distributions by the addition of a shape parameter. The first part of the present paper examines further probabilistic properties of the distribution, with special emphasis on aspects of statistical relevance. Inferential and other statistical issues are discussed in the following part, with applications to some multivariate statistics problems, illustrated by numerical examples. Finally, a further extension is described which introduces a skewing factor of an elliptical density.
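As a small illustration of the building block, the univariate skew-normal density with shape parameter a is available in SciPy; the multivariate and elliptical extensions discussed in the paper are not shown here.

```python
# Univariate skew-normal sketch: density 2 * phi(x) * Phi(a * x), where the
# shape parameter a controls skewness (a = 0 recovers the normal distribution).
import numpy as np
from scipy.stats import skewnorm, norm

a = 4.0                                    # illustrative shape (skewness) parameter
x = np.linspace(-3.0, 3.0, 7)
print(skewnorm.pdf(x, a))                  # skew-normal density from SciPy
print(2 * norm.pdf(x) * norm.cdf(a * x))   # same density from its definition
```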

1,130 citations


Proceedings Article
13 Jul 1999
TL;DR: Preliminary experiments show that the BOA outperforms the simple genetic algorithm even on decomposable functions with tight building blocks as the problem size grows.
Abstract: In this paper, an algorithm based on the concepts of genetic algorithms that uses an estimation of a probability distribution of promising solutions in order to generate new candidate solutions is proposed. To estimate the distribution, techniques for modeling multivariate data by Bayesian networks are used. The proposed algorithm identifies, reproduces and mixes building blocks up to a specified order. It is independent of the ordering of the variables in the strings representing the solutions. Moreover, prior information about the problem can be incorporated into the algorithm. However, prior information is not essential. Preliminary experiments show that the BOA outperforms the simple genetic algorithm even on decomposable functions with tight building blocks as the problem size grows.
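The sketch below illustrates the general estimation-of-distribution loop the paper builds on, but with a univariate product-of-Bernoullis model (closer to UMDA than to the Bayesian-network model BOA actually learns) and the toy OneMax objective; population sizes and the clipping bound are illustrative.

```python
# Estimation-of-distribution loop: sample candidates from a model, select the
# promising ones, re-estimate the model, repeat. The model here is a simple
# product of Bernoullis, not BOA's Bayesian network; OneMax is the objective.
import numpy as np

rng = np.random.default_rng(2)
n, pop_size, n_select, n_gen = 30, 100, 50, 60

p = np.full(n, 0.5)                                    # initial bitwise model
for _ in range(n_gen):
    pop = (rng.random((pop_size, n)) < p).astype(int)  # sample candidate solutions
    fitness = pop.sum(axis=1)                          # OneMax: count of ones
    best = pop[np.argsort(fitness)[-n_select:]]        # select promising solutions
    p = best.mean(axis=0)                              # re-estimate the distribution
    p = np.clip(p, 0.02, 0.98)                         # keep the model from fixating

print("best fitness found:", fitness.max(), "out of", n)
```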

1,073 citations


Journal ArticleDOI
TL;DR: In this paper, a nonlinear filtering theory is applied to unify the data assimilation and ensemble generation problem and to produce superior estimates of the probability distribution of the initial state of the atmosphere (or ocean) on regional or global scales.
Abstract: Knowledge of the probability distribution of initial conditions is central to almost all practical studies of predictability and to improvements in stochastic prediction of the atmosphere. Traditionally, data assimilation for atmospheric predictability or prediction experiments has attempted to find a single “best” estimate of the initial state. Additional information about the initial condition probability distribution is then obtained primarily through heuristic techniques that attempt to generate representative perturbations around the best estimate. However, a classical theory for generating an estimate of the complete probability distribution of an initial state given a set of observations exists. This nonlinear filtering theory can be applied to unify the data assimilation and ensemble generation problem and to produce superior estimates of the probability distribution of the initial state of the atmosphere (or ocean) on regional or global scales. A Monte Carlo implementation of the fully n...

967 citations


01 Jan 1999
TL;DR: This paper uses maximum entropy techniques for text classification by estimating the conditional distribution of the class variable given the document; experiments comparing accuracy to naive Bayes show that maximum entropy is sometimes significantly better, but also sometimes worse.
Abstract: This paper proposes the use of maximum entropy techniques for text classification. Maximum entropy is a probability distribution estimation technique widely used for a variety of natural language tasks, such as language modeling, part-of-speech tagging, and text segmentation. The underlying principle of maximum entropy is that without external knowledge, one should prefer distributions that are uniform. Constraints on the distribution, derived from labeled training data, inform the technique where to be minimally non-uniform. The maximum entropy formulation has a unique solution which can be found by the improved iterative scaling algorithm. In this paper, maximum entropy is used for text classification by estimating the conditional distribution of the class variable given the document. In experiments on several text datasets we compare accuracy to naive Bayes and show that maximum entropy is sometimes significantly better, but also sometimes worse. Much future work remains, but the results indicate that maximum entropy is a promising technique for text classification.
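Conditional maximum entropy over bag-of-words features is equivalent to multinomial logistic regression, so a rough sketch can use scikit-learn's LogisticRegression (a different optimizer than the improved iterative scaling mentioned above); the tiny corpus is illustrative.

```python
# Maximum-entropy-style text classification sketch: estimate the conditional
# class distribution given word-count features. Equivalent model family to
# multinomial logistic regression; corpus and labels are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["the team won the game", "stocks fell sharply today",
        "the coach praised the players", "markets rallied after the report"]
labels = ["sports", "finance", "sports", "finance"]

vec = CountVectorizer()
X = vec.fit_transform(docs)                      # bag-of-words feature counts
clf = LogisticRegression(max_iter=1000).fit(X, labels)

print(clf.predict(vec.transform(["the players won the game"])))
print(clf.predict_proba(vec.transform(["the players won the game"])))  # P(class | document)
```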

945 citations


Journal ArticleDOI
TL;DR: This paper concerns the combination of experts' probability distributions in risk analysis, discussing a variety of combination methods and attempting to highlight the important conceptual and practical issues to be considered in designing a combination process in practice.
Abstract: This paper concerns the combination of experts' probability distributions in risk analysis, discussing a variety of combination methods and attempting to highlight the important conceptual and practical issues to be considered in designing a combination process in practice. The role of experts is important because their judgments can provide valuable information, particularly in view of the limited availability of “hard data” regarding many important uncertainties in risk analysis. Because uncertainties are represented in terms of probability distributions in probabilistic risk analysis (PRA), we consider expert information in terms of probability distributions. The motivation for the use of multiple experts is simply the desire to obtain as much information as possible. Combining experts' probability distributions summarizes the accumulated information for risk analysts and decision-makers. Procedures for combining probability distributions are often compartmentalized as mathematical aggregation methods or behavioral approaches, and we discuss both categories. However, an overall aggregation process could involve both mathematical and behavioral aspects, and no single process is best in all circumstances. An understanding of the pros and cons of different methods and the key issues to consider is valuable in the design of a combination process for a specific PRA. The output, a “combined probability distribution,” can ideally be viewed as representing a summary of the current state of expert opinion regarding the uncertainty of interest.
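One common mathematical aggregation rule in this literature is the linear opinion pool, a weighted mixture of the experts' distributions; the sketch below uses illustrative discretized expert densities and analyst-assigned weights.

```python
# Linear opinion pool: the combined distribution is a weighted average of the
# experts' probability distributions. Expert densities and weights are illustrative.
import numpy as np

x = np.linspace(0.0, 10.0, 201)                      # support of the uncertain quantity
dx = x[1] - x[0]

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

experts = [normal_pdf(x, 3.0, 1.0),                  # expert 1's distribution
           normal_pdf(x, 5.0, 2.0),                  # expert 2's distribution
           normal_pdf(x, 4.0, 0.5)]                  # expert 3's distribution
weights = np.array([0.5, 0.3, 0.2])                  # analyst-assigned weights, sum to 1

combined = sum(w * f for w, f in zip(weights, experts))   # combined probability distribution
print("combined mean:", np.sum(x * combined) * dx)
print("P(quantity > 6):", np.sum(combined[x > 6.0]) * dx)
```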

Journal ArticleDOI
TL;DR: The results suggest that the Kaufman initialization method gives the K-Means algorithm more desirable behaviour with respect to convergence speed than the random initialization method.

Journal ArticleDOI
TL;DR: In this paper, four measures of distinguishability for quantum-mechanical states are surveyed from the point of view of the cryptographer with a particular eye on applications in quantum cryptography.
Abstract: This paper, mostly expository in nature, surveys four measures of distinguishability for quantum-mechanical states. This is done from the point of view of the cryptographer with a particular eye on applications in quantum cryptography. Each of the measures considered is rooted in an analogous classical measure of distinguishability for probability distributions: namely, the probability of an identification error, the Kolmogorov distance, the Bhattacharyya coefficient, and the Shannon (1948) distinguishability (as defined through mutual information). These measures have a long history of use in statistical pattern recognition and classical cryptography. We obtain several inequalities that relate the quantum distinguishability measures to each other, one of which may be crucial for proving the security of quantum cryptographic key distribution. In another vein, these measures and their connecting inequalities are used to define a single notion of cryptographic exponential indistinguishability for two families of quantum states. This is a tool that may prove useful in the analysis of various quantum-cryptographic protocols.
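The four classical measures named above can be computed directly for two discrete distributions with equal prior probability; the distributions below are illustrative.

```python
# Classical distinguishability measures for two discrete distributions p and q
# occurring with equal prior probability. The distributions are illustrative.
import numpy as np

p = np.array([0.5, 0.3, 0.2])       # distribution under hypothesis 0
q = np.array([0.1, 0.4, 0.5])       # distribution under hypothesis 1

kolmogorov = 0.5 * np.abs(p - q).sum()        # Kolmogorov (variational) distance
error_prob = 0.5 * (1.0 - kolmogorov)         # optimal identification-error probability
bhattacharyya = np.sum(np.sqrt(p * q))        # Bhattacharyya coefficient (overlap)

def entropy(d):
    return -np.sum(d * np.log2(d))

m = 0.5 * (p + q)                             # average distribution seen by the receiver
shannon = entropy(m) - 0.5 * entropy(p) - 0.5 * entropy(q)   # mutual information, in bits

print(kolmogorov, error_prob, bhattacharyya, shannon)
```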

Journal ArticleDOI
29 Nov 1999
TL;DR: In this article, the authors analyzed belief propagation in networks with arbitrary topologies when the nodes in the graph describe jointly Gaussian random variables and gave sufficient conditions for convergence and showed that when belief propagation converges it gives the correct posterior means for all graph topologies, not just networks with a single loop.
Abstract: Local "belief propagation" rules of the sort proposed by Pearl [15] are guaranteed to converge to the correct posterior probabilities in singly connected graphical models. Recently, a number of researchers have empirically demonstrated good performance of "loopy belief propagation"- using these same rules on graphs with loops. Perhaps the most dramatic instance is the near Shannon-limit performance of "Turbo codes", whose decoding algorithm is equivalent to loopy belief propagation. Except for the case of graphs with a single loop, there has been little theoretical understanding of the performance of loopy propagation. Here we analyze belief propagation in networks with arbitrary topologies when the nodes in the graph describe jointly Gaussian random variables. We give an analytical formula relating the true posterior probabilities with those calculated using loopy propagation. We give sufficient conditions for convergence and show that when belief propagation converges it gives the correct posterior means for all graph topologies, not just networks with a single loop. The related "max-product" belief propagation algorithm finds the maximum posterior probability estimate for singly connected networks. We show that, even for non-Gaussian probability distributions, the convergence points of the max-product algorithm in loopy networks are maxima over a particular large local neighborhood of the posterior probability. These results help clarify the empirical performance results and motivate using the powerful belief propagation algorithm in a broader class of networks.
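A sketch of Gaussian belief propagation on a small, fully connected (hence loopy) model in information form; the precision matrix and potential vector are illustrative and chosen so that loopy propagation converges, at which point the means match the exact solution, as the paper shows.

```python
# Gaussian belief propagation sketch for p(x) proportional to
# exp(-x^T J x / 2 + h^T x). Messages i->j are carried as (precision, potential)
# pairs. The 3-node, fully connected J below is illustrative.
import numpy as np

J = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 5.0]])          # every pair of nodes connected -> loopy graph
h = np.array([1.0, 2.0, 0.5])
n = len(h)

P = np.zeros((n, n))                     # message precisions, P[i, j] is i -> j
H = np.zeros((n, n))                     # message potentials
for _ in range(50):
    for i in range(n):
        for j in range(n):
            if i == j or J[i, j] == 0.0:
                continue
            # Everything node i knows, excluding what came from j.
            P_ij = J[i, i] + sum(P[k, i] for k in range(n) if k not in (i, j))
            H_ij = h[i] + sum(H[k, i] for k in range(n) if k not in (i, j))
            P[i, j] = -J[i, j] ** 2 / P_ij
            H[i, j] = -J[i, j] * H_ij / P_ij

# Node marginals from all incoming messages.
prec = np.array([J[i, i] + sum(P[k, i] for k in range(n) if k != i) for i in range(n)])
mean = np.array([(h[i] + sum(H[k, i] for k in range(n) if k != i)) / prec[i] for i in range(n)])

print("loopy GaBP means:", mean)
print("exact means     :", np.linalg.solve(J, h))   # means agree at convergence
```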

Journal ArticleDOI
TL;DR: A phenomenological study of stock price fluctuations of individual companies, which finds that the tails of the distributions can be well described by a power-law decay, well outside the stable Lévy regime.
Abstract: We present a phenomenological study of stock price fluctuations of individual companies. We systematically analyze two different databases covering securities from the three major U.S. stock markets: (a) the New York Stock Exchange, (b) the American Stock Exchange, and (c) the National Association of Securities Dealers Automated Quotation stock market. Specifically, we consider (i) the trades and quotes database, for which we analyze 40 million records for 1000 U.S. companies for the 2-yr period 1994-95; and (ii) the Center for Research in Security Prices database, for which we analyze 35 million daily records for approximately 16 000 companies in the 35-yr period 1962-96. We study the probability distribution of returns over varying time scales Δt, where Δt varies by a factor of ≈10^5, from 5 min up to ≈4 yr. For time scales from 5 min up to approximately 16 days, we find that the tails of the distributions can be well described by a power-law decay,
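A sketch of estimating a power-law tail exponent with the Hill estimator on synthetic Pareto-tailed data; the tail cutoff and the synthetic sample stand in for the paper's databases.

```python
# Hill-estimator sketch for a power-law tail exponent, applied to synthetic
# Pareto-tailed "returns". The tail cutoff (largest 5% of observations) and
# the synthetic data are illustrative.
import numpy as np

rng = np.random.default_rng(3)
alpha_true = 3.0                                    # assumed tail exponent
returns = rng.pareto(alpha_true, size=100_000) + 1.0

tail = np.sort(returns)[-5000:]                     # largest 5% of observations
x_min = tail[0]                                     # tail cutoff
hill_alpha = 1.0 / np.mean(np.log(tail / x_min))    # Hill estimate of the exponent
print("estimated tail exponent:", hill_alpha)
```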

Journal ArticleDOI
TL;DR: A method has been developed for calculation of CTV-to-PTV margin size based on the assumption that the CTV should be adequately irradiated with a high probability, demonstrated to be fast and accurate for a prostate, cervix, and lung cancer case.
Abstract: Purpose: Following the ICRU-50 recommendations, geometrical uncertainties in tumor position during radiotherapy treatments are generally included in the treatment planning by adding a margin to the clinical target volume (CTV) to yield the planning target volume (PTV). We have developed a method for automatic calculation of this margin. Methods and Materials: Geometrical uncertainties of a specific patient group can normally be characterized by the standard deviation of the distribution of systematic deviations in the patient group (Σ) and by the average standard deviation of the distribution of random deviations (σ). The CTV of a patient to be planned can be represented in a 3D matrix in the treatment room coordinate system with voxel values one inside and zero outside the CTV. Convolution of this matrix with the appropriate probability distributions for translations and rotations yields a matrix of coverage probabilities (CPs), defined for each point as the probability of being covered by the CTV. The PTV can then be chosen as a volume corresponding to a certain iso-probability level. Separate calculations are performed for systematic and random deviations. Iso-probability volumes are selected in such a way that a high percentage of the CTV volume (on average > 99%) receives a high dose (> 95%). The consequences of systematic deviations on the dose distribution in the CTV can be estimated by calculation of dose histograms of the CP matrix for systematic deviations, resulting in a so-called dose probability histogram (DPH). A DPH represents the average dose volume histogram (DVH) for all systematic deviations in the patient group. The consequences of random deviations can be calculated by convolution of the dose distribution with the probability distributions for random deviations. Using the convolved dose matrix in the DPH calculation yields full information about the influence of geometrical uncertainties on the dose in the CTV. Results: The model is demonstrated to be fast and accurate for a prostate, cervix, and lung cancer case. A CTV-to-PTV margin which ensures at least 95% dose to (on average) 99% of the CTV appears to be equal to about 2Σ + 0.7σ for all three cases. Because rotational deviations are included, the resulting margins can be anisotropic, as shown for the prostate cancer case. Conclusion: A method has been developed for calculation of CTV-to-PTV margins based on the assumption that the CTV should be adequately irradiated with a high probability.
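The quoted margin recipe as a small helper function; the Σ and σ example values are illustrative, not taken from the paper.

```python
# Margin recipe quoted in the abstract: margin = 2*Sigma + 0.7*sigma, where
# Sigma is the SD of systematic deviations in the patient group and sigma the
# average SD of random deviations. Example values are illustrative.
def ctv_to_ptv_margin(sigma_sys_mm: float, sigma_rand_mm: float) -> float:
    """Margin (mm) aiming at >= 95% dose to, on average, 99% of the CTV."""
    return 2.0 * sigma_sys_mm + 0.7 * sigma_rand_mm

print(ctv_to_ptv_margin(sigma_sys_mm=3.0, sigma_rand_mm=2.0))  # -> 7.4 mm
```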

Reference BookDOI
27 Dec 1999
TL;DR: Introduction Summarizing Data Probability Functions of Random Variables Discrete Probability Distributions Continuous Probability Distributions Standard Normal Distribution Estimation Confidence Intervals Hypothesis Testing Regression Analysis Analysis of Variance Experimental Design Nonparametric Statistics Quality Control and Risk Analysis.
Abstract: Introduction Summarizing Data Probability Functions of Random Variables Discrete Probability Distributions Continuous Probability Distributions Standard Normal Distribution Estimation Confidence Intervals Hypothesis Testing Regression Analysis Analysis of Variance Experimental Design Nonparametric Statistics Quality Control and Risk Analysis General Linear Models Miscellaneous Topics Special Functions

Book
15 Nov 1999
TL;DR: In this paper, a theoretical framework for population codes which generalizes naturally to the important case where the population provides information about a whole probability distribution over an underlying quantity rather than just a single value is presented.
Abstract: We present a theoretical framework for population codes which generalizes naturally to the important case where the population provides information about a whole probability distribution over an underlying quantity rather than just a single value. We use the framework to analyze two existing models, and to suggest and evaluate a third model for encoding such probability distributions.

Journal ArticleDOI
TL;DR: This paper presents a time sequential Monte Carlo simulation technique which can be used in complex distribution system evaluation, and describes a computer program developed to implement this technique.
Abstract: Analytical techniques for distribution system reliability assessment can be effectively used to evaluate the mean values of a wide range of system reliability indices. This approach is usually used when teaching the basic concepts of distribution system reliability evaluation. The mean or expected value, however, does not provide any information on the inherent variability of an index. Appreciation of this inherent variability is an important parameter in comprehending the actual reliability experienced by a customer and should be recognized when teaching distribution system reliability evaluation. This paper presents a time sequential Monte Carlo simulation technique which can be used in complex distribution system evaluation, and describes a computer program developed to implement this technique. General distribution system elements, operating models and radial configurations are considered in the program. The results obtained using both analytical and simulation methods are compared. The mean values and the probability distributions for both load point and system indices are illustrated using a practical test system.
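A sketch of a time-sequential Monte Carlo simulation for a single load point fed by one component with exponential times to failure and repair; the failure rate, repair time, and run count are illustrative, and a real study would model the full radial feeder and accumulate load point and system indices.

```python
# Time-sequential Monte Carlo reliability sketch: simulate many one-year
# up/down histories for a single load point and look at the distribution of
# annual outage duration, not just its mean. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
failure_rate = 0.5            # failures per year
mean_repair = 8.0 / 8760.0    # 8 hours, expressed in years
years, n_runs = 1.0, 5000
annual_outage_hours = []

for _ in range(n_runs):
    t, outage = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / failure_rate)     # up time until next failure
        if t >= years:
            break
        repair = rng.exponential(mean_repair)        # down time for this failure
        outage += min(repair, years - t)
        t += repair
    annual_outage_hours.append(outage * 8760.0)

annual_outage_hours = np.array(annual_outage_hours)
print("mean annual outage (h):", annual_outage_hours.mean())
print("90th percentile (h):", np.percentile(annual_outage_hours, 90))
```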

Journal ArticleDOI
TL;DR: A Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models finds estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias.

Journal ArticleDOI
TL;DR: Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that the model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances.
Abstract: This paper presents a model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted. The algorithm is derived from a probability model of the source that generated the text. The fundamental structure of the model is specified abstractly so that the detailed component models of phonology, word-order, and word frequency can be replaced in a modular fashion. The model yields a language-independent, prior probability distribution on all possible sequences of all possible words over a given alphabet, based on the assumption that the input was generated by concatenating words from a fixed but unknown lexicon. The model is unusual in that it treats the generation of a complete corpus, regardless of length, as a single event in the probability space. Accordingly, the algorithm does not estimate a probability distribution on words; instead, it attempts to calculate the prior probabilities of various word sequences that could underlie the observed text. Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that our algorithm is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances.

Journal ArticleDOI
TL;DR: In this paper, the authors established the asymptotic distribution of an extremum estimator when the true parameter lies on the boundary of the parameter space, where the boundary may be linear, curved, and/or kinked.
Abstract: This paper establishes the asymptotic distribution of an extremum estimator when the true parameter lies on the boundary of the parameter space. The boundary may be linear, curved, and/or kinked. Typically the asymptotic distribution is a function of a multivariate normal distribution in models without stochastic trends and a function of a multivariate Brownian motion in models with stochastic trends. The results apply to a wide variety of estimators and models. Examples treated in the paper are: (i) quasi-ML estimation of a random coefficients regression model with some coefficient variances equal to zero and (ii) LS estimation of an augmented Dickey-Fuller regression with unit root and time trend parameters on the boundary of the parameter space.

Journal ArticleDOI
TL;DR: The Factorized Distribution Algorithm, an evolutionary algorithm which combines mutation and recombination by using a distribution, is extended to LFDA, which computes an approximate factorization using only the data, not the ADF structure.
Abstract: The Factorized Distribution Algorithm (FDA) is an evolutionary algorithm which combines mutation and recombination by using a distribution. The distribution is estimated from a set of selected points. In general, a discrete distribution defined for n binary variables has 2^n parameters. Therefore it is too expensive to compute. For additively decomposed discrete functions (ADFs) there exist algorithms which factor the distribution into conditional and marginal distributions. This factorization is used by FDA. The scaling of FDA is investigated theoretically and numerically. The scaling depends on the ADF structure and the specific assignment of function values. Difficult functions on a chain or a tree structure are solved in about O(n√n) operations. More standard genetic algorithms are not able to optimize these functions. FDA is not restricted to exact factorizations. It also works for approximate factorizations as is shown for a circle and a grid structure. By using results from Bayes networks, FDA is extended to LFDA. LFDA computes an approximate factorization using only the data, not the ADF structure. The scaling of LFDA is compared to the scaling of FDA.

Proceedings ArticleDOI
15 Mar 1999
TL;DR: A hidden Markov model based on multi-space probability distribution (MSD) can model pitch patterns without heuristic assumption and a reestimation algorithm is derived that can find a critical point of the likelihood function.
Abstract: This paper discusses a hidden Markov model (HMM) based on multi-space probability distribution (MSD). The HMMs are widely-used statistical models to characterize the sequence of speech spectra and have successfully been applied to speech recognition systems. From these facts, it is considered that the HMM is useful for modeling pitch patterns of speech. However, we cannot apply the conventional discrete or continuous HMMs to pitch pattern modeling since the observation sequence of the pitch pattern is composed of one-dimensional continuous values and a discrete symbol which represents "unvoiced". MSD-HMM includes discrete HMMs and continuous mixture HMMs as special cases, and further can model the sequence of observation vectors with variable dimension including zero-dimensional observations, i.e., discrete symbols. As a result, MSD-HMMs can model pitch patterns without heuristic assumption. We derive a reestimation algorithm for the extended HMM and show that it can find a critical point of the likelihood function.
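A sketch of the multi-space observation probability for a single state: the observation is either the discrete symbol "unvoiced" (a zero-dimensional space carrying only its weight) or a continuous log-F0 value scored by a one-dimensional Gaussian; the weights and Gaussian parameters are illustrative, not trained values.

```python
# Multi-space emission sketch for one HMM state: b(obs) is the weight of the
# observation's space times the density within that space. Parameters are
# illustrative, not trained values.
import math

def msd_state_likelihood(obs, w_voiced=0.8, mu=5.0, sigma=0.2):
    """b(obs) for a state with a zero-dimensional 'unvoiced' space and a 1-D Gaussian space."""
    if obs == "unvoiced":                    # zero-dimensional space: weight only
        return 1.0 - w_voiced
    gauss = math.exp(-0.5 * ((obs - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    return w_voiced * gauss                  # one-dimensional Gaussian space

print(msd_state_likelihood("unvoiced"))      # discrete "unvoiced" symbol
print(msd_state_likelihood(5.1))             # continuous log-F0 observation
```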

Journal ArticleDOI
TL;DR: In this article, a fit of the PDF of the bulk speed and magnetic field intensity fluctuations calculated in the solar wind, with a multiplicative cascade model, was performed, and the physical implications of the obtained values of the parameter as well as its scaling law were discussed.
Abstract: Intermittency in fluid turbulence can be emphasized through the analysis of Probability Distribution Functions (PDF) for velocity fluctuations, which display a strong non-gaussian behavior at small scales. Castaing et al. (1990) have introduced the idea that this behavior can be represented, in the framework of a multiplicative cascade model, by a convolution of gaussians whose variances are distributed according to a log-normal distribution. In this letter we have tried to test this conjecture on MHD solar wind turbulence by fitting the model to the PDFs of the bulk speed and magnetic field intensity fluctuations calculated in the solar wind. This fit allows us to calculate a parameter λ², depending on the scale, which represents the width of the log-normal distribution of the variances of the gaussians. The physical implications of the obtained values of the parameter, as well as its scaling law, are finally discussed.
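A numerical sketch of this Castaing-type PDF: a convolution of zero-mean Gaussians whose standard deviations follow a log-normal distribution of width λ; the parameter values are illustrative.

```python
# Castaing-type PDF sketch: P(dv) = integral over ln(s) of
# lognormal(ln s; ln sigma0, lam) * Gaussian(dv; 0, s). A large lam gives
# fat non-Gaussian tails; a small lam is nearly Gaussian. Values are illustrative.
import numpy as np

def castaing_pdf(dv, lam, sigma0=1.0, n_sigma=400):
    ln_s = np.linspace(np.log(sigma0) - 5 * lam, np.log(sigma0) + 5 * lam, n_sigma)
    s = np.exp(ln_s)
    lognorm = np.exp(-0.5 * ((ln_s - np.log(sigma0)) / lam) ** 2) / (lam * np.sqrt(2.0 * np.pi))
    gauss = np.exp(-0.5 * (dv[:, None] / s[None, :]) ** 2) / (s[None, :] * np.sqrt(2.0 * np.pi))
    return (gauss * lognorm[None, :]).sum(axis=1) * (ln_s[1] - ln_s[0])

dv = np.linspace(-10.0, 10.0, 201)
pdf_small_scale = castaing_pdf(dv, lam=0.8)      # intermittent, heavy-tailed
pdf_large_scale = castaing_pdf(dv, lam=0.1)      # close to a single Gaussian
print("normalization check:", pdf_small_scale.sum() * (dv[1] - dv[0]))  # close to 1
```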

Journal ArticleDOI
TL;DR: Two approximate methods for computational implementation of Bayesian hierarchical models that include unknown hyperparameters such as regularization constants and noise levels are examined, and the evidence framework is shown to introduce negligible predictive error under straightforward conditions.
Abstract: I examine two approximate methods for computational implementation of Bayesian hierarchical models, that is, models that include unknown hyperparameters such as regularization constants and noise levels. In the evidence framework, the model parameters are integrated over, and the resulting evidence is maximized over the hyperparameters. The optimized hyperparameters are used to define a gaussian approximation to the posterior distribution. In the alternative MAP method, the true posterior probability is found by integrating over the hyperparameters. The true posterior is then maximized over the model parameters, and a gaussian approximation is made. The similarities of the two approaches and their relative merits are discussed, and comparisons are made with the ideal hierarchical Bayesian solution. In moderately ill-posed problems, integration over hyperparameters yields a probability distribution with a skew peak, which causes significant biases to arise in the MAP method. In contrast, the evidence fram...
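A minimal sketch of the evidence-framework idea using scikit-learn's BayesianRidge, which sets the noise and weight precisions by maximizing the marginal likelihood and then forms a Gaussian predictive distribution; the synthetic regression data are illustrative.

```python
# Evidence-framework sketch: hyperparameters (noise precision alpha, weight
# precision lambda) are optimized by maximizing the marginal likelihood, and a
# Gaussian approximation gives predictive means and standard deviations.
# The synthetic data and true weights are illustrative.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, 0.0, -2.0, 0.5, 0.0])
y = X @ w_true + rng.normal(scale=0.3, size=100)

model = BayesianRidge().fit(X, y)
print("optimized noise precision alpha_:", model.alpha_)
print("optimized weight precision lambda_:", model.lambda_)

mean_pred, std_pred = model.predict(X[:3], return_std=True)   # Gaussian predictive distribution
print(mean_pred, std_pred)
```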

Journal ArticleDOI
01 Apr 1999-Geoderma
TL;DR: Nonlinear soil process models that are defined and calibrated at the point support cannot at the same time be valid at the block support; when model output is required at the block support, spatial aggregation should therefore take place after the model is run.

Journal ArticleDOI
TL;DR: Stochastic mathematical programs with equilibrium constraints (SMPEC), which generalize MPEC models by explicitly incorporating possible uncertainties in the problem data to obtain robust solutions to hierarchical problems, are introduced.

Journal ArticleDOI
TL;DR: In this paper, a conceptual framework is presented for a unified treatment of issues arising in a variety of predictability studies, including the predictive power (PP), a measure based on information-theoretical principles, lies at the center of this framework.
Abstract: A conceptual framework is presented for a unified treatment of issues arising in a variety of predictability studies. The predictive power (PP), a predictability measure based on information‐theoretical principles, lies at the center of this framework. The PP is invariant under linear coordinate transformations and applies to multivariate predictions irrespective of assumptions about the probability distribution of prediction errors. For univariate Gaussian predictions, the PP reduces to conventional predictability measures that are based upon the ratio of the rms error of a model prediction over the rms error of the climatological mean prediction. Since climatic variability on intraseasonal to interdecadal timescales follows an approximately Gaussian distribution, the emphasis of this paper is on multivariate Gaussian random variables. Predictable and unpredictable components of multivariate Gaussian systems can be distinguished by predictable component analysis, a procedure derived from discriminant analysis: seeking components with large PP leads to an eigenvalue problem, whose solution yields uncorrelated components that are ordered by PP from largest to smallest. In a discussion of the application of the PP and the predictable component analysis in different types of predictability studies, studies are considered that use either ensemble integrations of numerical models or autoregressive models fitted to observed or simulated data. An investigation of simulated multidecadal variability of the North Atlantic illustrates the proposed methodology. Reanalyzing an ensemble of integrations of the Geophysical Fluid Dynamics Laboratory coupled general circulation model confirms and refines earlier findings. With an autoregressive model fitted to a single integration of the same model, it is demonstrated that similar conclusions can be reached without resorting to computationally costly ensemble integrations.
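A sketch of the predictable-component idea as a generalized eigenproblem between a prediction-error covariance and a climatological covariance; the synthetic covariance matrices are illustrative, and the simple one-minus-variance-ratio summary below is a Gaussian shortcut rather than the exact information-theoretic PP.

```python
# Predictable component analysis sketch: find components minimizing the ratio
# of prediction-error variance to climatological variance via a generalized
# eigenproblem. Covariance matrices are synthetic and illustrative.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)
A = rng.normal(size=(5, 5))
clim_cov = A @ A.T + 5.0 * np.eye(5)              # climatological covariance
B = rng.normal(size=(5, 5))
err_cov = 0.1 * (B @ B.T) + 0.1 * np.eye(5)       # ensemble prediction-error covariance

# Solve err_cov v = w * clim_cov v; small w means a highly predictable component.
vals, vecs = eigh(err_cov, clim_cov)
predictability = 1.0 - vals                        # simple variance-ratio summary per component
order = np.argsort(predictability)[::-1]           # most to least predictable
print("component predictability, ordered:", predictability[order])
```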

Journal ArticleDOI
TL;DR: This article developed Bayesian methods for computing the exact finite-sample distribution of conditional forecasts in the vector autoregressive (VAR) framework and applied them to both structural and reduced-form VAR models.
Abstract: In the existing literature, conditional forecasts in the vector autoregressive (VAR) framework have not been commonly presented with probability distributions. This paper develops Bayesian methods for computing the exact finite-sample distribution of conditional forecasts. It broadens the class of conditional forecasts to which the methods can be applied. The methods work for both structural and reduced-form VAR models and, in contrast to common practices, account for parameter uncertainty in finite samples. Empirical examples under both a flat prior and a reference prior are provided to show the use of these methods.