
Showing papers on "Variable-order Bayesian network published in 2019"


Book
05 Sep 2019
TL;DR: This book discusses Mixed Effects Models with Missing Covariates, Joint Modeling for Longitudinal and Survival Data, and Bayesian Joint Models of Longitudinal and Survival Data.
Abstract: Table of contents:
Introduction: Longitudinal Data and Clustered Data; Some Examples; Regression Models; Mixed Effects Models; Complex or Incomplete Data; Software; Outline and Notation.
Mixed Effects Models: Linear Mixed Effects (LME) Models; Nonlinear Mixed Effects (NLME) Models; Generalized Linear Mixed Models (GLMMs); Nonparametric and Semiparametric Mixed Effects Models; Computational Strategies; Further Topics; Software.
Missing Data, Measurement Errors, and Outliers: Missing Data Mechanisms and Ignorability; General Methods for Missing Data; EM Algorithms; Multiple Imputation; General Methods for Measurement Errors; General Methods for Outliers; Software.
Mixed Effects Models with Missing Data: Mixed Effects Models with Missing Covariates; Approximate Methods; Mixed Effects Models with Missing Responses; Multiple Imputation Methods; Computational Strategies; Examples.
Mixed Effects Models with Covariate Measurement Errors: Measurement Error Models and Methods; Two-Step Methods and Regression Calibration Methods; Likelihood Methods; Approximate Methods; Measurement Error and Missing Data.
Mixed Effects Models with Censoring: Mixed Effects Models with Censored Responses; Mixed Effects Models with Censoring and Measurement Errors; Mixed Effects Models with Censoring and Missing Data; Appendix.
Survival Mixed Effects (Frailty) Models: Survival Models; Frailty Models; Survival and Frailty Models with Missing Covariates; Frailty Models with Measurement Errors.
Joint Modeling Longitudinal and Survival Data: Joint Modeling for Longitudinal Data and Survival Data; Two-Step Methods; Joint Likelihood Inference; Joint Models with Incomplete Data; Joint Modeling of Several Longitudinal Processes.
Robust Mixed Effects Models: Robust Methods; Mixed Effects Models with Robust Distributions; M-Estimators for Mixed Effects Models; Robust Inference for Mixed Effects Models with Incomplete Data.
Generalized Estimating Equations (GEEs): Marginal Models; Estimating Equations with Incomplete Data; Discussion.
Bayesian Mixed Effects Models: Bayesian Methods; Bayesian Mixed Effects Models; Bayesian Mixed Models with Missing Data; Bayesian Models with Covariate Measurement Errors; Bayesian Joint Models of Longitudinal and Survival Data.
Appendix: Background Materials: Likelihood Methods; The Gibbs Sampler and MCMC Methods; Rejection Sampling and Importance Sampling Methods; Numerical Integration and the Gauss-Hermite Quadrature Method; Optimization Methods and the Newton-Raphson Algorithm; Bootstrap Methods; Matrix Algebra and Vector Differential Calculus.
References. Index.
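The missing-covariate machinery the book treats at length (EM algorithms, multiple imputation, Rubin's rules) can be sketched in miniature. The Python example below is illustrative only: the simulated data, the normal imputation model for the covariate, and the choice of M = 20 imputations are assumptions for the sketch, not the book's own implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y depends on a covariate x that is partially missing.
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
miss = rng.random(n) < 0.3           # ~30% of x missing at random
obs = ~miss

# Imputation model: regress x on y using the complete cases only.
A = np.column_stack([np.ones(obs.sum()), y[obs]])
ab, res, *_ = np.linalg.lstsq(A, x[obs], rcond=None)
resid_sd = np.sqrt(res[0] / (obs.sum() - 2))

M = 20                               # number of imputations (assumed)
betas, variances = [], []
for _ in range(M):
    xi = x.copy()
    # Draw the missing covariates from the fitted conditional model x | y.
    xi[miss] = ab[0] + ab[1] * y[miss] + rng.normal(scale=resid_sd,
                                                    size=miss.sum())
    X = np.column_stack([np.ones(n), xi])
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = rss[0] / (n - 2)
    betas.append(beta)
    variances.append(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

# Rubin's rules: pool estimates and variances across the M imputations.
betas, variances = np.array(betas), np.array(variances)
pooled = betas.mean(axis=0)
total_var = variances.mean(axis=0) + (1 + 1 / M) * betas.var(axis=0, ddof=1)
print("pooled coefficients:", pooled)
```

The pooled slope recovers the true value (here 2) because the imputation model conditions on the response; imputing from the marginal distribution of x would attenuate it.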

244 citations


Journal ArticleDOI
TL;DR: A foundational Bayesian perspective based on agent opinion analysis theory defines a new framework for density forecast combination that encompasses several existing forecast pooling methods and develops a novel class of dynamic latent factor models for time series forecast synthesis.
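As a toy illustration of the pooling methods such a framework encompasses, here is a static linear opinion pool of two Gaussian density forecasts. The two "agent" forecasts, the fixed weights, and the evaluation grid are invented for the sketch; the paper's dynamic latent factor synthesis goes well beyond this special case.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

grid = np.linspace(-6.0, 6.0, 2001)
f1 = normal_pdf(grid, -1.0, 1.0)    # agent 1's forecast density
f2 = normal_pdf(grid, 2.0, 0.5)     # agent 2's forecast density

# Linear opinion pool: a convex combination of the agent densities.
w = np.array([0.3, 0.7])            # illustrative fixed weights
pool = w[0] * f1 + w[1] * f2

# The pool is itself a proper density (mass ~1 on this grid) and is
# generally bimodal, unlike either Gaussian component.
mass = pool.sum() * (grid[1] - grid[0])
print(round(mass, 4))
```

The pool's mean is the weight-averaged mean of the components (0.3·(-1) + 0.7·2 = 1.1), which is one reason combination weights matter for point as well as density forecasts.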

76 citations


Posted Content
TL;DR: It is shown that an embedded Bayesian network classifier (EBNC) is a special case of a softmax polynomial regression model; the authors also show how to identify a non-redundant set of parameters for an EBNC and describe an asymptotic approximation for learning the structure of Bayesian networks that contain EBNCs.
Abstract: Low-dimensional probability models for local distribution functions in a Bayesian network include decision trees, decision graphs, and causal independence models. We describe a new probability model for discrete Bayesian networks, which we call an embedded Bayesian network classifier or EBNC. The model for a node $Y$ given parents $\bf X$ is obtained from a (usually different) Bayesian network for $Y$ and $\bf X$ in which $\bf X$ need not be the parents of $Y$. We show that an EBNC is a special case of a softmax polynomial regression model. Also, we show how to identify a non-redundant set of parameters for an EBNC, and describe an asymptotic approximation for learning the structure of Bayesian networks that contain EBNCs. Unlike for the decision tree, decision graph, and causal independence models, we are unaware of a semantic justification for the use of these models. Experiments are needed to determine whether the models presented in this paper are useful in practice.
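The reduction the authors prove, that an EBNC is a special case of a softmax polynomial regression model, can be illustrated directly. The sketch below builds a softmax regression over polynomial features of two binary parent variables; the specific feature set (an interaction term) and the weight matrix are invented for illustration, not parameters derived from any particular EBNC.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)    # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Discrete parents X encoded as two binary features; the interaction
# x1*x2 supplies the "polynomial" term in the regression.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
phi = np.column_stack([np.ones(len(X)), X, X[:, 0] * X[:, 1]])

# Illustrative weights for a 3-state class variable Y: each column of W
# plays the role of the parameters induced for one state of Y.
W = np.array([
    [0.2, -0.1, 0.3],
    [1.0,  0.0, -1.0],
    [-0.5, 0.5,  0.0],
    [0.3, -0.3,  0.6],
])

P = softmax(phi @ W)    # P[i, k] = Pr(Y = k | X = X[i])
print(P.round(3))
```

Each row of `P` is a proper conditional distribution over the states of Y, which is exactly the local distribution an EBNC must supply for its node.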

15 citations


Journal ArticleDOI
TL;DR: This paper introduces a new factor structure suitable for modeling large realized covariance matrices with full likelihood-based estimation; parametric and nonparametric versions are introduced.
Abstract: This paper introduces a new factor structure suitable for modeling large realized covariance matrices with full likelihood-based estimation. Parametric and nonparametric versions are introduced. Because of the computational advantages of our approach, we can model the factor nonparametrically as a Dirichlet process mixture or as an infinite hidden Markov mixture, which leads to an infinite mixture of inverse-Wishart distributions. Applications to 10 assets and 60 assets show that the models perform well. By exploiting parallel computing the models can be estimated in a matter of a few minutes.
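A finite stand-in for the infinite mixture of inverse-Wishart distributions can be sketched with scipy. The two mixture components ("calm" and "volatile" regimes), their weights, and the degrees of freedom below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)
d = 3                                    # number of assets (illustrative)

# Two illustrative regimes, each an inverse-Wishart distribution over
# d x d covariance matrices with a different scale matrix.
scales = [np.eye(d) * 5.0, np.eye(d) * 20.0]
weights = np.array([0.8, 0.2])
df = d + 4                               # degrees of freedom (assumed)

def draw_realized_cov():
    k = rng.choice(2, p=weights)         # pick a mixture component
    return invwishart.rvs(df=df, scale=scales[k], random_state=rng)

draws = [draw_realized_cov() for _ in range(100)]
# Every draw is a valid covariance matrix: symmetric positive definite.
ok = all(np.allclose(S, S.T) and np.all(np.linalg.eigvalsh(S) > 0)
         for S in draws)
print(ok)
```

In the paper the component assignments and parameters are learned (via a Dirichlet process or infinite hidden Markov mixture) rather than fixed, but each component still contributes inverse-Wishart draws as above.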

14 citations


Journal ArticleDOI
01 Oct 2019
TL;DR: In this article, two local diagnostic procedures using curvature-based and slope-based methods are proposed from a Bayesian perspective for spatial autoregressive models with heteroscedasticity.
Abstract: This paper studies Bayesian local influence analysis for spatial autoregressive models with heteroscedasticity (heteroscedastic SAR models). Two local diagnostic procedures, one curvature-based and one slope-based, are proposed from a Bayesian perspective. The curvature-based diagnostics are obtained by maximizing the normal curvature of an influence graph based on a Kullback–Leibler divergence measure, while the slope-based diagnostics use the first-order derivative of a Bayes factor defined for the perturbation. Three perturbation schemes under the heteroscedastic SAR models are suggested, and the corresponding diagnostic measures are derived. The proposed diagnostic measures can be computed easily with a Markov chain Monte Carlo sampler. The methodologies are illustrated using two real examples.
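The curvature-based diagnostic can be illustrated on a toy Gaussian model: perturb a parameter by ω, measure the Kullback–Leibler divergence from the unperturbed model, and take the second derivative of that divergence at ω = 0. The Gaussian perturbation scheme and finite-difference step below are illustrative; the paper computes the analogous quantities from MCMC output for heteroscedastic SAR models.

```python
import numpy as np

def kl_normal(mu0, s0, mu1, s1):
    """KL( N(mu0, s0^2) || N(mu1, s1^2) )."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

def influence(omega):
    # Perturbation scheme: shift the mean of an N(0, 1) model by omega.
    return kl_normal(0.0 + omega, 1.0, 0.0, 1.0)

# Normal curvature at omega = 0 via a central finite difference.
h = 1e-4
curvature = (influence(h) - 2 * influence(0.0) + influence(-h)) / h**2
print(round(curvature, 3))   # KL is omega^2 / 2 here, so curvature = 1
```

Large curvature in some perturbation direction flags observations or model components whose perturbation moves the posterior the most; in the multiparameter case one maximizes this curvature over directions, as the abstract describes.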

9 citations


Journal ArticleDOI
TL;DR: A targeted Bayesian network learning method is proposed to account for a classification objective during the learning stage of the network model, which effectively manages the trade-off between the classification accuracy and the model complexity.
Abstract: A targeted Bayesian network learning (TBNL) method is proposed to account for a classification objective during the learning stage of the network model. The TBNL approximates the expected conditional probability distribution of the class variable. It effectively manages the trade-off between classification accuracy and model complexity by using a discriminative approach, constrained by information theory measurements. The proposed approach also provides a mechanism for maximizing accuracy via a Pareto frontier over a complexity-accuracy plane in cases of missing data in the data sets. A comparative study over a set of classification problems shows that the TBNL is competitive, mainly with respect to other graphical classifiers.
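The Pareto-frontier mechanism can be sketched independently of the network learning itself: given candidate models scored on the complexity-accuracy plane, keep those not dominated by any other candidate. The candidate scores below are made up for the sketch.

```python
# Candidate models as (complexity, accuracy) pairs; lower complexity and
# higher accuracy are both better. All scores are illustrative.
models = {
    "A": (3, 0.81),
    "B": (5, 0.88),
    "C": (5, 0.84),   # dominated by B: same complexity, lower accuracy
    "D": (9, 0.90),
    "E": (12, 0.89),  # dominated by D: more complex and less accurate
}

def pareto_frontier(models):
    """Return the models not dominated in the complexity-accuracy plane."""
    frontier = {}
    for name, (c, a) in models.items():
        dominated = any(
            (c2 <= c and a2 >= a) and (c2, a2) != (c, a)
            for c2, a2 in models.values()
        )
        if not dominated:
            frontier[name] = (c, a)
    return frontier

print(sorted(pareto_frontier(models)))   # ['A', 'B', 'D']
```

Choosing the final classifier then reduces to picking a point on this frontier according to how much accuracy one is willing to trade for a simpler network.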

2 citations


Posted Content
TL;DR: This work presents two hybrid models combining the Bayesian and frequentist versions of CART and neural networks, called Bayesian neural tree (BNT) models, which exploit the architecture of decision trees and have fewer parameters to tune than advanced neural networks.
Abstract: Frequentist and Bayesian methods differ in many aspects but share some basic optimal properties. In real-life classification and regression problems, situations exist in which a model based on one of the methods is preferable according to some subjective criterion. Nonparametric classification and regression techniques, such as decision trees and neural networks, have frequentist (classification and regression trees (CART) and artificial neural networks) as well as Bayesian (Bayesian CART and Bayesian neural networks) approaches to learning from data. In this work, we present two hybrid models combining the Bayesian and frequentist versions of CART and neural networks, which we call Bayesian neural tree (BNT) models. Both models exploit the architecture of decision trees and have fewer parameters to tune than advanced neural networks. Such models can simultaneously perform feature selection and prediction, are highly flexible, and generalize well in settings with a limited number of training observations. We study the consistency of the proposed models and derive the optimal value of an important model parameter. We also provide illustrative examples using a wide variety of real-life regression data sets.
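A much-simplified stand-in for the hybrid idea: a one-split regression tree whose leaves each fit their own model, so the tree supplies the architecture while a simple estimator plays the role of the leaf learner. Everything here, the depth-1 tree, the linear leaf models, and the simulated data, is an illustrative reduction of the BNT construction, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
# Piecewise-linear data with a regime change at x = 0.
x = rng.uniform(-2, 2, size=300)
y = np.where(x < 0, 1.0 + 0.5 * x, -1.0 + 2.0 * x) \
    + rng.normal(scale=0.1, size=300)

def fit_leaf(xs, ys):
    """Leaf model: ordinary least squares line (stand-in for a small net)."""
    A = np.column_stack([np.ones(len(xs)), xs])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

def sse_of_split(t):
    total = 0.0
    for mask in (x < t, x >= t):
        c = fit_leaf(x[mask], y[mask])
        total += ((y[mask] - (c[0] + c[1] * x[mask])) ** 2).sum()
    return total

# The "tree" part: pick the split that minimizes total leaf SSE.
candidates = np.quantile(x, np.linspace(0.1, 0.9, 17))
t = min(candidates, key=sse_of_split)

left_coef = fit_leaf(x[x < t], y[x < t])
right_coef = fit_leaf(x[x >= t], y[x >= t])
print("split at", round(float(t), 2))
```

The recovered split sits near the true regime change at 0, and each leaf's fitted line tracks its regime; the BNT models replace these leaf lines with (Bayesian or frequentist) neural networks and learn the tree structure accordingly.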

1 citation