Topic

Bayes classifier

About: Bayes classifier is a research topic. Over its lifetime, 2,332 publications have been published within this topic, receiving 68,962 citations.


Papers

Open access · Journal Article · DOI: 10.1023/A:1007465528199
Bayesian Network Classifiers
Nir Friedman, Dan Geiger, Moises Goldszmidt (3 institutions)
01 Nov 1997 · Machine Learning
Abstract: Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
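
The naive Bayes baseline that TAN extends is simple enough to sketch. Below is a minimal, illustrative Python version (not the authors' code): classification picks the class maximizing the log prior plus a sum of per-feature log likelihoods, which is exactly where the independence assumption enters; the Laplace smoothing parameter `alpha` is an assumed detail.

```python
# Minimal naive Bayes over discrete features, with Laplace smoothing.
# Illustrative sketch only, not the paper's implementation.
from collections import Counter, defaultdict
import math

def train_nb(X, y, alpha=1.0):
    """Estimate P(c) and P(x_i | c) from feature rows X and labels y."""
    class_counts = Counter(y)
    counts = defaultdict(Counter)              # counts[(c, i)][v]
    for row, c in zip(X, y):
        for i, v in enumerate(row):
            counts[(c, i)][v] += 1
    domains = [{row[i] for row in X} for i in range(len(X[0]))]
    return class_counts, counts, domains, len(y), alpha

def predict_nb(model, row):
    class_counts, counts, domains, n, alpha = model
    def score(c):
        s = math.log(class_counts[c] / n)      # log prior
        for i, v in enumerate(row):            # independence: per-feature terms add
            num = counts[(c, i)][v] + alpha
            den = class_counts[c] + alpha * len(domains[i])
            s += math.log(num / den)
        return s
    return max(class_counts, key=score)

model = train_nb([[1, 0], [1, 1], [0, 1], [0, 0]], ["a", "a", "b", "b"])
print(predict_nb(model, [1, 0]))               # -> "a"
```

TAN relaxes this model by giving each feature at most one extra parent feature, chosen via a maximum-weight spanning tree over class-conditional mutual information, so learning stays search-free while capturing pairwise dependencies.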


4,448 Citations


Open access · Journal Article · DOI: 10.1023/A:1007413511361
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss
Pedro Domingos, Michael J. Pazzani (1 institution)
01 Nov 1997 · Machine Learning
Abstract: The simple Bayesian classifier is known to be optimal when attributes are independent given the class, but the question of whether other sufficient conditions for its optimality exist has so far not been explored. Empirical results showing that it performs surprisingly well in many domains containing clear attribute dependences suggest that the answer to this question may be positive. This article shows that, although the Bayesian classifier's probability estimates are only optimal under quadratic loss if the independence assumption holds, the classifier itself can be optimal under zero-one loss (misclassification rate) even when this assumption is violated by a wide margin. The region of quadratic-loss optimality of the Bayesian classifier is in fact a second-order infinitesimal fraction of the region of zero-one optimality. This implies that the Bayesian classifier has a much greater range of applicability than previously thought. For example, in this article it is shown to be optimal for learning conjunctions and disjunctions, even though they violate the independence assumption. Further, studies in artificial domains show that it will often outperform more powerful classifiers for common training set sizes and numbers of attributes, even if its bias is a priori much less appropriate to the domain. This article's results also imply that detecting attribute dependence is not necessarily the best way to extend the Bayesian classifier, and this is also verified empirically.
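
A toy example (mine, not from the paper) makes the zero-one versus quadratic loss distinction concrete: duplicate a feature so independence is badly violated. Naive Bayes then double-counts the evidence and its posterior estimates are wrong (over-confident), yet the argmax decision, which is all zero-one loss measures, matches the true Bayes decision.

```python
# Two features (x, x): perfectly dependent, so the independence assumption fails.
p_c1 = 0.5                       # P(class = 1)
p_x1 = {0: 0.3, 1: 0.8}          # P(x = 1 | class)

def true_posterior(x):
    # The duplicated feature carries no extra information.
    num = p_c1 * (p_x1[1] if x else 1 - p_x1[1])
    den = num + (1 - p_c1) * (p_x1[0] if x else 1 - p_x1[0])
    return num / den

def naive_posterior(x):
    # Naive Bayes multiplies the same likelihood twice.
    like1 = (p_x1[1] if x else 1 - p_x1[1]) ** 2
    like0 = (p_x1[0] if x else 1 - p_x1[0]) ** 2
    num = p_c1 * like1
    return num / (num + (1 - p_c1) * like0)

for x in (0, 1):
    t, n = true_posterior(x), naive_posterior(x)
    print(f"x={x}: true={t:.3f} naive={n:.3f} same decision={(t > 0.5) == (n > 0.5)}")
# x=0: true=0.222 naive=0.075 same decision=True
# x=1: true=0.727 naive=0.877 same decision=True
```

The probability estimates diverge (poor under quadratic loss), but both fall on the same side of 0.5, so the classification under zero-one loss is still optimal.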


Topics: Margin classifier (63%), Bayesian average (59%), Quadratic classifier (59%)

3,018 Citations


Open access · Book
Bayes and Empirical Bayes Methods for Data Analysis
Bradley P. Carlin, Thomas A. Louis (1 institution)
15 May 1996
Abstract: Approaches for Statistical Inference. The Bayes Approach. The Empirical Bayes Approach. Performance of Bayes Procedures. Bayesian Computation. Model Criticism and Selection. Special Methods and Models. Case Studies. Appendices.
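
As a minimal illustration of the basic machinery behind "The Bayes Approach" (my example, not from the book), a conjugate beta-binomial update turns a Beta prior and binomial data into a Beta posterior in closed form:

```python
# Conjugate Bayes update: Beta(a, b) prior + binomial data -> Beta posterior.
# Hedged sketch of a standard textbook example, not the book's own code.
def beta_binomial_posterior(a, b, successes, failures):
    return a + successes, b + failures

a, b = beta_binomial_posterior(1, 1, successes=7, failures=3)   # uniform prior
print(f"posterior Beta({a}, {b}), posterior mean = {a / (a + b):.3f}")  # 0.667
```

Empirical Bayes, the book's other main thread, instead estimates the prior's hyperparameters (here a and b) from data pooled across many related problems.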


Topics: Bayes factor (71%), Bayes' theorem (69%), Bayes classifier (67%)

2,374 Citations


Open access
An Empirical Study of the Naive Bayes Classifier
Irina Rish (1 institution)
01 Jan 2001
Abstract: The naive Bayes classifier greatly simplifies learning by assuming that features are independent given the class. Although independence is generally a poor assumption, in practice naive Bayes often competes well with more sophisticated classifiers. Our broad goal is to understand the data characteristics which affect the performance of naive Bayes. Our approach uses Monte Carlo simulations that allow a systematic study of classification accuracy for several classes of randomly generated problems. We analyze the impact of the distribution entropy on the classification error, showing that low-entropy feature distributions yield good performance of naive Bayes. We also demonstrate that naive Bayes works well for certain nearly functional feature dependencies, thus reaching its best performance in two opposite cases: completely independent features (as expected) and functionally dependent features (which is surprising). Another surprising result is that the accuracy of naive Bayes is not directly correlated with the degree of feature dependencies measured as the class-conditional mutual information between the features. Instead, a better predictor of naive Bayes accuracy is the amount of information about the class that is lost because of the independence assumption.
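
A rough Monte Carlo sketch in the spirit of the study (assumed setup, not the paper's exact protocol): draw random discrete problems whose class-conditional feature distributions come from a Dirichlet, where a small concentration parameter gives low-entropy (peaked) distributions, then fit naive Bayes on one sample and measure zero-one accuracy on another.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_problem(n_features=5, n_values=3, concentration=1.0):
    # cpts[c, i] = P(x_i = . | class c): one categorical per feature per class.
    return rng.dirichlet([concentration] * n_values, size=(2, n_features))

def sample(cpts, n):
    n_classes, n_features, n_values = cpts.shape
    y = rng.integers(n_classes, size=n)
    X = np.array([[rng.choice(n_values, p=cpts[c, i]) for i in range(n_features)]
                  for c in y])
    return X, y

def nb_accuracy(cpts, n_train=200, n_test=200, alpha=1.0):
    Xtr, ytr = sample(cpts, n_train)
    Xte, yte = sample(cpts, n_test)
    n_classes, n_features, n_values = cpts.shape
    est = np.full_like(cpts, alpha)            # smoothed P(x_i = v | c) estimates
    for row, c in zip(Xtr, ytr):
        for i, v in enumerate(row):
            est[c, i, v] += 1
    est /= est.sum(axis=2, keepdims=True)
    # Log-likelihood per class under independence (classes are equiprobable here).
    logp = np.stack([np.log(est[c])[np.arange(n_features), Xte].sum(axis=1)
                     for c in range(n_classes)])
    return float((logp.argmax(axis=0) == yte).mean())

for conc in (0.1, 1.0, 10.0):                  # low -> high entropy
    accs = [nb_accuracy(random_problem(concentration=conc)) for _ in range(20)]
    print(f"concentration={conc}: mean accuracy ~= {np.mean(accs):.2f}")
```

Consistent with the paper's observation, the peaked (low-entropy) regime tends to give the highest naive Bayes accuracy in this kind of simulation.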


Topics: Naive Bayes classifier (71%), Bayes error rate (66%), Bayes classifier (66%)

2,014 Citations



Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022    1
2021    25
2020    36
2019    53
2018    54
2017    88

Top Attributes


Topic's top 5 most impactful authors

Robert Burduk: 15 papers, 44 citations
Majid Mojirsheibani: 11 papers, 119 citations
Giorgio Giacinto: 7 papers, 816 citations
Marco Zaffalon: 5 papers, 261 citations
Edward R. Dougherty: 5 papers, 96 citations

Network Information
Related Topics (5)

Dimensionality reduction: 21.9K papers, 579.2K citations (89% related)
Kernel method: 11.3K papers, 501K citations (89% related)
Mixture model: 18.1K papers, 588.3K citations (88% related)
Feature selection: 41.4K papers, 1M citations (88% related)
Support vector machine: 73.6K papers, 1.7M citations (87% related)