Journal ArticleDOI

Two issues in using mixtures of polynomials for inference in hybrid Bayesian networks

01 Jul 2012 - International Journal of Approximate Reasoning (Elsevier Science Inc.) - Vol. 53, Iss. 5, pp. 847-866
TL;DR: A new method for finding MOP approximations based on Lagrange interpolating polynomials (LIP) with Chebyshev points is described, along with how the LIP method can be used to find efficient MOP approximations of PDFs.
About: This article was published in the International Journal of Approximate Reasoning on 2012-07-01 and is currently open access. It has received 29 citations to date. The article focuses on the topics of Taylor series and Normal distribution.
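The method summarized above lends itself to a compact illustration. Below is a minimal sketch (not the authors' code; the function names and the choice of 8 points per piece are illustrative) of building a MOP approximation of a PDF by fitting a Lagrange interpolating polynomial through Chebyshev points on each piece:

```python
# Minimal sketch of the LIP-with-Chebyshev-points idea: approximate a PDF
# by a mixture of polynomials (MOP), one Lagrange interpolating polynomial
# per interval, each fitted through the Chebyshev points of that interval.
import numpy as np
from scipy.stats import norm

def chebyshev_points(a, b, n):
    """The n Chebyshev points of the interval (a, b)."""
    k = np.arange(1, n + 1)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k - 1) * np.pi / (2 * n))

def mop_pieces(pdf, knots, n=8):
    """One degree-(n-1) polynomial per interval [knots[i], knots[i+1]]."""
    pieces = []
    for a, b in zip(knots[:-1], knots[1:]):
        x = chebyshev_points(a, b, n)
        # polyfit through n points with degree n-1 reproduces the LIP exactly
        pieces.append(((a, b), np.polyfit(x, pdf(x), n - 1)))
    return pieces

def mop_eval(pieces, x):
    """Evaluate the piecewise polynomial; zero outside the knot range."""
    for (a, b), coeffs in pieces:
        if a <= x <= b:
            return np.polyval(coeffs, x)
    return 0.0

# Example: a 6-piece MOP approximation of the standard normal PDF on [-3, 3]
pieces = mop_pieces(norm.pdf, knots=np.linspace(-3, 3, 7))
print(mop_eval(pieces, 0.5), norm.pdf(0.5))  # the two values should agree closely
```

Chebyshev points are preferred over equally spaced points because they suppress the oscillation of high-degree interpolants near interval endpoints (the Runge phenomenon).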
Citations
Journal ArticleDOI
TL;DR: This paper utilizes a kernel density estimate of the data to translate the data into a mixture of truncated basis functions (MoTBF) representation using a convex optimization technique, and proposes an alternative learning method that relies on the cumulative distribution function of the data.

32 citations
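As a rough illustration of the first half of that pipeline (not the cited paper's method: a plain least-squares polynomial fit stands in for the MoTBF representation and its convex optimization), one can smooth the data with a kernel density estimate and then fit a polynomial to it:

```python
# Toy sketch: smooth data with a kernel density estimate (KDE), then fit
# a polynomial to the KDE by least squares. The cited paper instead fits
# a mixture of truncated basis functions (MoTBF) via convex optimization,
# with an alternative method based on the cumulative distribution function.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = rng.normal(size=500)

kde = gaussian_kde(data)
grid = np.linspace(data.min(), data.max(), 200)
coeffs = np.polyfit(grid, kde(grid), deg=6)  # degree 6 is an arbitrary choice

print(np.max(np.abs(np.polyval(coeffs, grid) - kde(grid))))  # fit error vs. the KDE
```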

Journal ArticleDOI
TL;DR: An architecture for solving large general hybrid Bayesian networks (BNs) with deterministic conditionals for continuous variables using local computation is described, and an extended version of the crop problem that includes non-conditional-linear-Gaussian (non-CLG) distributions and non-linear deterministic functions is solved.

18 citations


Cites background or methods from "Two issues in using mixtures of pol..."

  • ...Another solution is to approximate all PDFs by mixtures of polynomials [Shenoy and West 2011, Shenoy 2010]....

  • ...Another way around the problem of integration of density functions is to approximate them using mixtures of polynomials (MOP) [Shenoy and West 2011, Shenoy 2010]....

  • ...For example, the family of MTE functions is not closed under transformations needed by linear deterministic functions involving two or more continuous parent variables [Shenoy 2010]....

Journal ArticleDOI
TL;DR: Results on real datasets show that the non-parametric Bayesian classifiers using MoPs are comparable to the kernel density-based Bayesian classifiers.

14 citations


Cites background or methods from "Two issues in using mixtures of pol..."

  • ...MoPs learned with B-spline interpolation were compared with MoPs using LIPs as proposed in [16] and with the MoTBF learning approach in [17]....

  • ...Later, Lagrange interpolating polynomials (LIPs) were used to obtain MoPs [16]....

  • ...• MoP approximation using Lagrange interpolating polynomials: The results were compared with the MoPs obtained by computing the LIP over the Chebyshev points defined in each interval independently [16]....

  • ...Later, Shenoy [16] proposed estimating p_lX(x) as the LIP over the Chebyshev points defined in A_lX....

  • ...Previous proposals for learning MoPs assume that the mathematical expression of the generating parametric density is known [9] or apply some interpolation technique using the true densities of a set of points [16]....

Book ChapterDOI
17 Sep 2014
TL;DR: This paper introduces the first auxiliary BN method (called MPL-EC) to tackle parameter learning with exterior constraints, and demonstrates its successful application to learning a real-world software defects BN with sparse data.
Abstract: Lack of relevant data is a major challenge for learning Bayesian networks (BNs) in real-world applications. Knowledge engineering techniques attempt to address this by incorporating domain knowledge from experts. The paper focuses on learning node probability tables using both expert judgment and limited data. To reduce the massive burden of eliciting individual probability table entries (parameters), it is often easier to elicit constraints on the parameters from experts. Constraints can be interior (between entries of the same probability table column) or exterior (between entries of different columns). In this paper we introduce the first auxiliary BN method (called MPL-EC) to tackle parameter learning with exterior constraints. The MPL-EC itself is a BN, whose nodes encode the data observations, exterior constraints and parameters in the original BN. MPL-EC also addresses (i) how to estimate target parameters with both data and constraints, and (ii) how to fuse the weights from different causal relationships in a robust way. Experimental results demonstrate the superiority of MPL-EC at various sparsity levels compared to conventional parameter learning algorithms and other state-of-the-art parameter learning algorithms with constraints. Moreover, we demonstrate its successful application to learning a real-world software defects BN with sparse data.

12 citations
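MPL-EC itself encodes data, constraints, and parameters in an auxiliary BN. As a far simpler illustration of the underlying task, the sketch below (hypothetical throughout, not the MPL-EC method) estimates two columns of a probability table by maximum likelihood subject to an exterior constraint linking them:

```python
# Toy illustration (not MPL-EC): maximum-likelihood estimation of two
# probability-table columns subject to an exterior constraint between them,
# e.g. P(defect | test=fail) >= P(defect | test=pass).
import numpy as np
from scipy.optimize import minimize

# Sparse counts: [defect, no defect] observed under each parent state
counts_fail = np.array([4, 2])   # test = fail
counts_pass = np.array([1, 5])   # test = pass

def neg_log_lik(theta):
    p_fail, p_pass = theta
    return -(counts_fail[0] * np.log(p_fail) + counts_fail[1] * np.log(1 - p_fail)
             + counts_pass[0] * np.log(p_pass) + counts_pass[1] * np.log(1 - p_pass))

res = minimize(
    neg_log_lik,
    x0=[0.5, 0.4],
    bounds=[(1e-6, 1 - 1e-6)] * 2,
    constraints=[{"type": "ineq", "fun": lambda t: t[0] - t[1]}],  # exterior constraint
    method="SLSQP",
)
print(res.x)  # constrained estimates of P(defect | fail) and P(defect | pass)
```

With sparse counts, unconstrained estimates can violate expert knowledge; the exterior constraint pulls them back into the feasible region.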

References
Journal ArticleDOI

16,176 citations


"Two issues in using mixtures of pol..." refers methods in this paper

  • ...We can use the Kullback-Leibler (KL) divergence [20] as a measure of the goodness of fit....
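A minimal sketch of that goodness-of-fit computation (assuming a standard normal target and a single-polynomial approximation on [-3, 3]; both choices are illustrative):

```python
# Minimal sketch: KL divergence D(f || g) = ∫ f(x) log(f(x)/g(x)) dx,
# used to score how well an approximation g fits a target PDF f.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

f = norm.pdf                                      # target: standard normal
x = np.linspace(-3, 3, 8)
coeffs = np.polyfit(x, f(x), 7)                   # a degree-7 polynomial fit
g = lambda t: max(np.polyval(coeffs, t), 1e-300)  # floored to keep log defined

kl, _ = quad(lambda t: f(t) * np.log(f(t) / g(t)), -3, 3)
print(kl)  # close to 0 when the fit is good
```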

Book
01 Jan 1978
TL;DR: This report describes the typical topics covered in a two-semester sequence in Numerical Analysis: algorithms that provide approximate solutions to numerical problems, and the analysis of their accuracy, efficiency, and robustness.
Abstract: Introduction. Mathematical approximations have been used since ancient times to estimate solutions, but with the rise of digital computing the field of numerical analysis has become a discipline in its own right. Numerical analysts develop and study algorithms that provide approximate solutions to various types of numerical problems, and they analyze the accuracy, efficiency and robustness of these algorithms. As technology becomes ever more essential for the study of mathematics, learning algorithms that provide approximate solutions to mathematical problems and understanding the accuracy of such approximations becomes increasingly important. This report contains a description of the typical topics covered in a two-semester sequence in Numerical Analysis.

7,315 citations


"Two issues in using mixtures of pol..." refers background in this paper

  • ...P(x) has the following properties [19]....

  • ...For example [19], consider f(x) = 1/x....

  • ...How should we choose the n points? For the interval (a, b), the n Chebyshev points are given by [19]:...
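The last quote is truncated before the formula. For completeness, the standard Chebyshev points for the interval (a, b), as given in numerical analysis texts such as [19], are:

```latex
x_i = \frac{a+b}{2} + \frac{b-a}{2}\,
      \cos\!\left(\frac{(2i-1)\pi}{2n}\right), \qquad i = 1, \dots, n
```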

01 Jan 1996

3,908 citations

Journal ArticleDOI
TL;DR: A propagation scheme for Bayesian networks with conditional Gaussian distributions that does not have the numerical weaknesses of the scheme derived in Lauritzen and Spiegelhalter is described.
Abstract: This article describes a propagation scheme for Bayesian networks with conditional Gaussian distributions that does not have the numerical weaknesses of the scheme derived in Lauritzen (Journal of the American Statistical Association 87: 1098–1108, 1992). The propagation architecture is that of Lauritzen and Spiegelhalter (Journal of the Royal Statistical Society, Series B 50: 157– 224, 1988). In addition to the means and variances provided by the previous algorithm, the new propagation scheme yields full local marginal distributions. The new scheme also handles linear deterministic relationships between continuous variables in the network specification. The computations involved in the new propagation scheme are simpler than those in the previous scheme and the method has been implemented in the most recent version of the HUGIN software.

285 citations
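To make the conditional Gaussian setting of the abstract above concrete, here is the standard marginalization fact that underlies exact CLG propagation (a textbook identity, not specific to this article): a linear Gaussian child of a Gaussian parent has an exactly computable Gaussian marginal.

```latex
X \sim \mathcal{N}(\mu, \tau^2), \qquad
Y \mid X = x \;\sim\; \mathcal{N}(a + bx, \sigma^2)
\;\;\Longrightarrow\;\;
Y \sim \mathcal{N}\!\left(a + b\mu,\; b^2\tau^2 + \sigma^2\right)
```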


"Two issues in using mixtures of pol..." refers methods in this paper

  • ...One exact solution to the integration problem proposed by Lauritzen and Jensen [5] is to restrict conditionals of continuous variables to the conditional linear Gaussian (CLG) family, and for discrete variables to not have continuous parents....

  • ...The resulting mixture of Gaussians BN is then solved using the Lauritzen-Jensen [5] algorithm....

  • ...Murphy’s and Lerner’s approach is then embedded in the Lauritzen-Jensen [5] algorithm to solve the resulting mixtures of Gaussians BN. Shenoy [9] proposes approximating non-CLG distributions by mixtures of Gaussians using a nonlinear optimization technique, and using arc reversals to ensure discrete variables do not have continuous parents....