Compiling Bayesian Network Classifiers into Decision Graphs.
Andy Shih, Arthur Choi, Adnan Darwiche
Vol. 33, Iss. 01, pp. 7966–7974
TLDR
An algorithm is proposed for compiling Bayesian network classifiers into decision graphs that mimic the input and output behavior of the classifiers, which are tractable and can be exponentially smaller in size than decision trees.

Abstract
We propose an algorithm for compiling Bayesian network classifiers into decision graphs that mimic the input and output behavior of the classifiers. In particular, we compile Bayesian network classifiers into ordered decision graphs, which are tractable and can be exponentially smaller in size than decision trees. This tractability facilitates reasoning about the behavior of Bayesian network classifiers, including the explanation of decisions they make. Our compilation algorithm comes with guarantees on the time of compilation and the size of compiled decision graphs. We apply our compilation algorithm to classifiers from the literature and discuss some case studies in which we show how to automatically explain their decisions and verify properties of their behavior.
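The core idea described in the abstract, an ordered decision graph that mimics a classifier's decisions, with equivalent sub-graphs merged so the result can be exponentially smaller than a decision tree, can be sketched as follows. This is not the paper's algorithm (which comes with compilation-time and size guarantees); it is a brute-force illustration over a hypothetical naive Bayes classifier, and the names, weights, and prior below are made up for the example.

```python
from itertools import product

# Hypothetical naive Bayes classifier over 3 binary features, expressed in
# log-odds form: a prior term plus one weight per feature value.
PRIOR = -1.0
WEIGHTS = [(0.5, 1.2), (0.3, -0.8), (0.1, 2.0)]  # (weight if 0, weight if 1)

def classify(bits):
    """The classifier's decision: positive iff the total log-odds exceed 0."""
    return PRIOR + sum(WEIGHTS[i][b] for i, b in enumerate(bits)) > 0

UNIQUE = {}  # unique table: shares structurally identical nodes

def mk(var, lo, hi):
    """Make a decision node; skip redundant tests, share isomorphic nodes."""
    if lo == hi:
        return lo
    return UNIQUE.setdefault((var, lo, hi), (var, lo, hi))

def compile_odg(i=0, acc=PRIOR):
    """Compile features i..n into an ordered decision graph by threading the
    accumulated log-odds down each branch; terminals are Boolean decisions."""
    if i == len(WEIGHTS):
        return acc > 0
    return mk(i,
              compile_odg(i + 1, acc + WEIGHTS[i][0]),
              compile_odg(i + 1, acc + WEIGHTS[i][1]))

def evaluate(node, bits):
    """Follow one root-to-terminal path; a terminal is a plain bool."""
    while isinstance(node, tuple):
        var, lo, hi = node
        node = hi if bits[var] else lo
    return node

root = compile_odg()
# The compiled graph mimics the classifier on every input assignment.
assert all(evaluate(root, bits) == classify(bits)
           for bits in product((0, 1), repeat=len(WEIGHTS)))
```

The unique table is what distinguishes the decision *graph* from a decision *tree*: branches whose remaining decision behavior is identical collapse into one shared node, which is the source of the exponential size savings the abstract mentions. Note that this sketch still explores all 2^n paths during compilation, whereas the paper's contribution includes guarantees on compilation time.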
Citations
Posted Content
On The Reasons Behind Decisions.
Adnan Darwiche, Auguste Hirth
TL;DR: A theory for unveiling the reasons behind the decisions made by Boolean classifiers is presented, and notions such as sufficient, necessary, and complete reasons behind decisions are defined, in addition to classifier and decision bias.
Book Chapter
From Contrastive to Abductive Explanations and Back Again.
TL;DR: In this paper, the authors establish a rigorous formal relationship between abductive and contrastive explanations, showing that for any given instance, contrastive explanations are minimal hitting sets of abductive explanations and vice versa.
Journal Article
Delivering Trustworthy AI through Formal XAI
TL;DR: This paper argues that the best-known eXplainable AI (XAI) approaches either fail to provide sound explanations or find explanations that can exhibit significant redundancy.
Proceedings Article
Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay
TL;DR: It is shown that the computation of one PI-explanation for a Naive Bayes Classifier (NBC) can be achieved in log-linear time, and that the same result also applies to the more general class of linear classifiers.
Proceedings Article
Three Modern Roles for Logic in AI
TL;DR: Three modern roles for logic in artificial intelligence are considered, which are based on the theory of tractable Boolean circuits: logic as a basis for computation, logic for learning from a combination of data and knowledge, and logic for reasoning about the behavior of machine learning systems.
References
Journal Article
Graph-Based Algorithms for Boolean Function Manipulation
TL;DR: In this paper, the authors present a data structure for representing Boolean functions and an associated set of manipulation algorithms, which have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large.
Journal Article
Bayesian Network Classifiers
TL;DR: Tree-Augmented Naive Bayes (TAN) is singled out, which outperforms naive Bayes, yet at the same time maintains the computational simplicity and robustness that characterize naive Bayes.
Proceedings Article
On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes
Andrew Y. Ng, Michael I. Jordan
TL;DR: It is shown, contrary to the widely held belief that discriminative classifiers are almost always to be preferred, that there can often be two distinct regimes of performance as the training-set size increases, one in which each algorithm does better.
Book Chapter
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
TL;DR: In this paper, the authors presented a scalable and efficient technique for verifying properties of deep neural networks (or providing counter-examples) based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function.
Book
Modeling and Reasoning with Bayesian Networks
TL;DR: This book provides an extensive discussion of techniques for building Bayesian networks that model real-world situations, including techniques for synthesizing models from design, learning models from data, and debugging models using sensitivity analysis.