Wrappers for feature subset selection
Ron Kohavi, George H. John
TLDR
The wrapper method searches for a feature subset tailored to a particular induction algorithm and domain; the paper compares the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection.
This article was published in Artificial Intelligence on 1997-12-01 and is currently open access. It has received 8610 citations to date. The article focuses on the topics: Feature selection & Minimum redundancy feature selection.
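As a rough sketch of the wrapper idea (not the paper's exact procedure), the code below performs greedy forward selection, scoring each candidate subset by the held-out accuracy of the wrapped learner — here a hand-rolled 1-nearest-neighbour classifier. The function names, the 1-NN learner, and the single train/test split are illustrative assumptions; the paper itself evaluates subsets with cross-validation over the target induction algorithm.

```python
def accuracy(train, test, features):
    """Held-out accuracy of a 1-nearest-neighbour classifier that only
    looks at the given feature indices. Each example is (vector, label)."""
    correct = 0
    for x, y in test:
        nearest = min(train, key=lambda t: sum((t[0][f] - x[f]) ** 2 for f in features))
        correct += nearest[1] == y
    return correct / len(test)

def wrapper_forward_select(train, test, num_features):
    """Greedy forward selection driven by the wrapped learner's held-out
    accuracy; stops when no remaining feature improves the estimate."""
    selected, best_acc = [], 0.0
    while len(selected) < num_features:
        candidates = [f for f in range(num_features) if f not in selected]
        acc, f = max((accuracy(train, test, selected + [f]), f) for f in candidates)
        if acc <= best_acc:
            break  # no candidate improves the accuracy estimate
        selected.append(f)
        best_acc = acc
    return selected, best_acc
```

On a toy task where feature 0 separates the classes and feature 1 is noise, the search selects feature 0 alone and stops, since adding the noisy feature does not improve accuracy.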
Citations
Book
Data Mining: Concepts and Techniques
TL;DR: This book presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects, and provides a comprehensive, practical look at the concepts and techniques you need to get the most out of real business data.
Book
Data Mining: Practical Machine Learning Tools and Techniques
TL;DR: This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining.
Journal ArticleDOI
An introduction to variable and feature selection
Isabelle Guyon, André Elisseeff
TL;DR: The contributions of this special issue cover a wide range of aspects of variable selection: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods.
Journal ArticleDOI
Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy
TL;DR: In this article, the minimal-redundancy-maximal-relevance (mRMR) criterion was proposed to select good features, as a practical approximation to the maximal statistical dependency criterion based on mutual information, which is difficult to implement directly.
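The mRMR idea can be sketched in a few lines: greedily add the feature with the highest relevance (mutual information with the labels) minus its mean redundancy (mutual information with already-selected features). The sketch below assumes discrete features and uses a plug-in mutual-information estimate; all names are illustrative, not the authors' code.

```python
from collections import Counter
from math import log2

def mutual_info(xs, ys):
    """Plug-in estimate of I(X;Y) in bits for discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mrmr_select(features, labels, k):
    """Incrementally pick the feature maximizing relevance I(f; y)
    minus mean redundancy with the already-selected features."""
    selected = []
    while len(selected) < k:
        best, best_score = None, float("-inf")
        for i, f in enumerate(features):
            if i in selected:
                continue
            relevance = mutual_info(f, labels)
            redundancy = (sum(mutual_info(f, features[j]) for j in selected)
                          / len(selected)) if selected else 0.0
            if relevance - redundancy > best_score:
                best, best_score = i, relevance - redundancy
        selected.append(best)
    return selected
```

Given two informative features and an exact duplicate of the first, the redundancy penalty steers the second pick away from the duplicate.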
Journal ArticleDOI
Gene Selection for Cancer Classification using Support Vector Machines
TL;DR: In this article, a Support Vector Machine (SVM) method based on recursive feature elimination (RFE) was proposed to select a small subset of genes from broad patterns of gene expression data, recorded on DNA micro-arrays.
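Recursive feature elimination is simple to state: train a linear model, rank features by the magnitude of their weights, drop the lowest-ranked feature, and repeat until the desired number remain. To keep the sketch self-contained, a perceptron stands in for the linear SVM of the paper (an assumption, not the authors' method); function names are illustrative.

```python
def perceptron_weights(X, y, epochs=20):
    """Train a simple perceptron (stand-in for a linear SVM) and
    return its weight vector. Labels are expected in {-1, +1}."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) > 0 else -1
            if pred != yi:
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
    return w

def rfe(X, y, k):
    """Recursive feature elimination: retrain on the surviving features
    and drop the one with the smallest |weight| until k remain."""
    remaining = list(range(len(X[0])))
    while len(remaining) > k:
        Xr = [[row[j] for j in remaining] for row in X]
        w = perceptron_weights(Xr, y)
        drop = min(range(len(remaining)), key=lambda i: abs(w[i]))
        remaining.pop(drop)
    return remaining
```

With one strongly informative feature, one weaker one, and near-noise, the elimination order follows the weight magnitudes and the strongest feature survives.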
References
Journal Article
On the Connection between In-sample Testing and Generalization Error.
TL;DR: It is impossible to justify a correlation between reproduction of a training set and generalization error off of the training set using only a priori reasoning, and a novel formalism for addressing machine learning is developed.
Book
Readings in Artificial Intelligence
Bonnie Webber, Nils J. Nilsson
TL;DR: The selection covers representations of problems of reasoning about actions, a problem-similarity approach to devising heuristics, optimal search strategies for speech-understanding control, consistency in networks of relations, non-resolution theorem proving, using rewriting rules for connection graphs to prove theorems, and closed-world databases.
Dissertation
On the induction of decision trees for multiple concept learning
TL;DR: This dissertation makes four contributions to the theory and practice of the top-down non-backtracking induction of decision trees for multiple concept learning, and analyzes the merits and limitations of using the entropy measure (and others from the family of impurity measures) for attribute selection.
Journal ArticleDOI
Inductive Policy: The Pragmatics of Bias Selection
Foster Provost, Bruce G. Buchanan
TL;DR: A framework for representing and automatically selecting a wide variety of biases is presented and experiments with an instantiation of the framework addressing various pragmatic tradeoffs of time, space, accuracy, and the cost of errors are described.
Journal ArticleDOI
Explorations of an Incremental, Bayesian Algorithm for Categorization
John R. Anderson, Michael Matessa
TL;DR: An incremental categorization algorithm is described which, at each step, assigns the next instance to the most probable category, and Bayesian extensions to deal with nonindependent features are described and evaluated.