Topic
Variable-order Bayesian network
About: Variable-order Bayesian network is a research topic. Over the lifetime, 5450 publications have been published within this topic receiving 265828 citations.
Papers published on a yearly basis
Papers
01 Dec 2005
TL;DR: The DNB model is built upon a Naive Bayesian network, a successful classifier for data with flattened (nonhierarchical) class labels, and the classification accuracy is significantly higher than it is with previous approaches.
Abstract: In this paper, we propose a Dynamic Naive Bayesian (DNB) network model for classifying data sets with hierarchical labels. The DNB model is built upon a Naive Bayesian (NB) network, a successful classifier for data with flattened (nonhierarchical) class labels. The problems with using flattened class labels for hierarchical classification are addressed in this paper. The DNB has a top-down structure with each level of the class hierarchy modeled as a random variable. We define augmenting operations to transform the class hierarchy into a form that satisfies the probability law. We present algorithms for efficient learning and inference with the DNB model. The learning algorithm can be used to estimate the parameters of the network. The inference algorithm is designed to find the optimal classification path in the class hierarchy. The methods are tested on yeast gene expression data sets, and the classification accuracy with the DNB classifier is significantly higher than with previous approaches, i.e., flattened classification using an NB classifier.
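The paper's top-down idea can be sketched as one Naive Bayes model per hierarchy level, with the lower level restricted to the children of the class chosen at the upper level. This is a minimal greedy sketch of that "optimal path" idea, not the paper's DNB implementation; the toy data, class names, and two-level hierarchy below are all invented for illustration.

```python
# Sketch: top-down hierarchical classification with one Naive Bayes
# model per level. Data, class names, and hierarchy are invented.
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """samples: list of (feature_tuple, label). Returns class priors and
    per-(label, position) feature-value counts."""
    priors = Counter(lbl for _, lbl in samples)
    feat_counts = defaultdict(Counter)          # (label, position) -> value counts
    for feats, lbl in samples:
        for i, v in enumerate(feats):
            feat_counts[(lbl, i)][v] += 1
    return priors, feat_counts

def nb_score(feats, lbl, priors, feat_counts, n):
    """Log joint score log P(lbl) + sum_i log P(feat_i | lbl)."""
    score = math.log(priors[lbl] / n)
    for i, v in enumerate(feats):
        c = feat_counts[(lbl, i)]
        # add-one (Laplace) smoothing over the two binary feature values
        score += math.log((c[v] + 1) / (sum(c.values()) + 2))
    return score

def predict_topdown(feats, level1, level2, children):
    """Pick the best level-1 class, then the best level-2 class restricted
    to the chosen class's children -- a greedy path down the hierarchy."""
    p1, f1 = level1
    n1 = sum(p1.values())
    top = max(p1, key=lambda l: nb_score(feats, l, p1, f1, n1))
    p2, f2 = level2
    n2 = sum(p2.values())
    sub = max(children[top], key=lambda l: nb_score(feats, l, p2, f2, n2))
    return top, sub

# Toy gene-like data: binary expression indicators with two-level labels.
data = [((1, 0), ("metab", "glyco")), ((1, 1), ("metab", "tca")),
        ((0, 1), ("transport", "ion")), ((0, 0), ("transport", "amino"))]
level1 = train_nb([(f, l1) for f, (l1, _) in data])
level2 = train_nb([(f, l2) for f, (_, l2) in data])
children = {"metab": ["glyco", "tca"], "transport": ["ion", "amino"]}
print(predict_topdown((1, 0), level1, level2, children))  # prints ('metab', 'glyco')
```

Restricting the second-level search to the chosen parent's children is what makes the predicted label pair a consistent path in the hierarchy, which is the property flattened classification loses.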
01 Jan 2006
TL;DR: This research concentrates on enhancing the Bayesian Network technique for learning gene networks, and early results show that PNL can be used to recover the gene networks of three subnetworks for S. cerevisiae with varying success.
Abstract: A gene network is a representation of gene interactions. A gene usually collaborates with other genes in order to function, and understanding these interactions is a crucial step towards understanding how our body functions. The Bayesian Network is a technique that was initially used in Expert Systems to represent expert knowledge. Since the pioneering work of Friedman et al., who applied this technique to analyse gene expression data, other researchers have enhanced the technique further. This research concentrates on enhancing the Bayesian Network technique for learning gene networks. In order to get better results, the Bayesian technique will be used with prior knowledge. The tool used to learn the gene networks is PNL (Probabilistic Network Library). Early results show that PNL can be used to recover the gene networks of three subnetworks for S. cerevisiae; these three subnetworks have been learned using PNL with varying success. The next step in this research is to learn the gene network from the dataset of 800 genes. The knowledge gained will be used to produce a better approach to learning gene networks using the Bayesian network technique.
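The structure-learning step this abstract describes can be illustrated with a small score-based sketch: compare two candidate structures for a pair of binary "genes" by BIC score. This is not the PNL implementation and the data and gene names are invented; real tools add search over full DAGs and the prior knowledge the abstract mentions.

```python
# Sketch: scoring candidate Bayesian-network structures for binary
# variables with BIC. Data and gene names are invented.
import math
from collections import Counter

def bic_family(data, child, parents):
    """BIC contribution of one node given its parent set (binary variables)."""
    n = len(data)
    counts = Counter((tuple(r[p] for p in parents), r[child]) for r in data)
    loglik = 0.0
    for pa in {pa for pa, _ in counts}:
        row = [counts[(pa, 0)], counts[(pa, 1)]]      # child counts for this config
        tot = sum(row)
        loglik += sum(c * math.log(c / tot) for c in row if c)
    n_params = (2 ** len(parents)) * (2 - 1)          # one free parameter per config
    return loglik - 0.5 * n_params * math.log(n)      # BIC penalty

# Toy expression data: gene A tends to switch gene B on.
rows = ([{"A": 1, "B": 1}] * 6 + [{"A": 0, "B": 0}] * 6
        + [{"A": 1, "B": 0}] + [{"A": 0, "B": 1}])

no_edge = bic_family(rows, "B", [])          # structure: A, B independent
with_edge = bic_family(rows, "B", ["A"])     # structure: A -> B
print(with_edge > no_edge)                   # prints True
```

A structure learner repeats this family-score comparison across many candidate edges and keeps the DAG with the best total score; prior knowledge can be folded in by constraining or rewarding particular edges.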
08 Oct 2000
TL;DR: This work proposes a meta-Gaussian approach that is appropriate for direct learning from general, continuous variables and can recover the network structure, provided that the variables of the network satisfy a fundamental monotonicity property.
Abstract: Most existing approaches to learning the structure of Bayesian networks assume that all variables are discrete or that all variables are continuously normally distributed. We propose a meta-Gaussian approach that is appropriate for direct learning from general, continuous variables. We first transform the original variables into standard normal variables. Under the assumption that the transformed variables are multivariate normally distributed, we then make use of existing algorithms to learn the network structure in the transformed space, and then project the results back into the original space. Preliminary experimental results show that this approach can recover the network structure, provided that the variables of the network satisfy a fundamental monotonicity property.
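The first step the abstract describes, transforming the original variables into standard normal variables, resembles a rank-based normal-scores transform. A minimal stdlib-only sketch under that assumption (sample data invented, ties ignored):

```python
# Sketch: rank-based normal-scores transform, mapping each variable to
# approximately N(0,1) via its empirical CDF. Sample data is invented.
from statistics import NormalDist

def normal_scores(xs):
    """Map samples through rank/(n+1) -> standard-normal inverse CDF.
    The map is monotone, so rank-order relations are preserved
    (ties are ignored for simplicity)."""
    n = len(xs)
    ranks = {v: r + 1 for r, v in enumerate(sorted(xs))}
    nd = NormalDist()
    return [nd.inv_cdf(ranks[x] / (n + 1)) for x in xs]

# Skewed (exponential-like) sample; the transform symmetrizes it.
sample = [0.1, 0.4, 0.2, 1.5, 3.0, 0.8, 0.05, 2.2, 0.6]
z = normal_scores(sample)
print([round(v, 2) for v in z])
```

Because the transform is monotone, structure learned among the transformed (jointly Gaussian, by assumption) variables can be mapped back to the original variables, which is where the abstract's monotonicity property matters.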
TL;DR: In this article, the authors consider the detection of abrupt changes in the transition matrix of a Markov chain from a Bayesian viewpoint. They derive Bayes factors and posterior probabilities for unknown numbers of change-points, as well as the positions of the change-points, assuming non-informative but proper priors on the parameters and a fixed upper bound.
Abstract: This paper considers the detection of abrupt changes in the transition matrix of a Markov chain from a Bayesian viewpoint. It derives Bayes factors and posterior probabilities for unknown numbers of change-points, as well as the positions of the change-points, assuming non-informative but proper priors on the parameters and a fixed upper bound. The Markov chain Monte Carlo approach proposed by Chib in 1998 for estimating multiple change-point models is adapted to the Markov chain model. It is especially useful when there are many possible change-points. The method can be applied in a wide variety of disciplines and is particularly relevant in the social and behavioural sciences, for analysing the effects of events on the attitudes of people.
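The Bayes-factor idea in this abstract can be sketched for the simplest case: a binary Markov chain, one candidate change position, and conjugate Dirichlet(1,1) priors on each transition-matrix row (so marginal likelihoods are available in closed form). The chain and the candidate position below are invented; the paper's MCMC machinery handles the general case of unknown numbers and positions of change-points.

```python
# Sketch: Bayes factor for "one change-point at k" vs "no change" in a
# binary Markov chain, with Dirichlet(1,1) priors on transition rows.
import math
from collections import Counter

def log_marginal(chain):
    """Log marginal likelihood of the observed transitions under
    independent Dir(1,1) priors on the two transition-matrix rows."""
    counts = Counter(zip(chain, chain[1:]))
    total = 0.0
    for i in (0, 1):
        row = [counts[(i, 0)], counts[(i, 1)]]
        total += math.lgamma(2) - math.lgamma(2 + sum(row))
        total += sum(math.lgamma(1 + c) - math.lgamma(1) for c in row)
    return total

# First half sticky (stays in its state), second half alternates.
chain = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1]
k = 8  # candidate change position (invented)

no_change = log_marginal(chain)
# Split shares the boundary state so each transition is counted once.
one_change = log_marginal(chain[:k + 1]) + log_marginal(chain[k:])
log_bf = one_change - no_change   # > 0 favours the change-point model
print(round(log_bf, 3))
```

Extending this to unknown positions means summing (or sampling) over k, which is exactly where Chib's MCMC approach becomes useful when many change-points are possible.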