
Statistical classification

About: Statistical classification is a research topic. Over its lifetime, 18,068 publications have been published within this topic, receiving 316,046 citations.


Papers

Robust Face Recognition via Sparse Representation
Open access - Journal Article - DOI: 10.1109/TPAMI.2008.79
John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, +1 more
Abstract: We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
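The classification rule the abstract describes is compact enough to sketch directly: stack the training samples as columns of a dictionary A, recover a sparse coefficient vector for a test sample y via ℓ1-minimization, and assign the class whose columns best reconstruct y. Below is a minimal illustration on synthetic data, using scikit-learn's Lasso as a practical stand-in for an exact ℓ1 solver; the dictionary, dimensions, and regularization strength are hypothetical choices, not values from the paper.

```python
# Sketch of sparse-representation classification (SRC), with a Lasso
# penalty standing in for exact l1-minimization.  All data is synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical dictionary: 3 classes, 10 training samples per class,
# each sample a 64-dimensional feature vector stored as a column of A.
n_classes, n_per_class, dim = 3, 10, 64
A = rng.normal(size=(dim, n_classes * n_per_class))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns, as in SRC
labels = np.repeat(np.arange(n_classes), n_per_class)

# Synthetic test sample: a noisy combination of class-1 columns.
y = A[:, labels == 1] @ rng.random(n_per_class) + 0.01 * rng.normal(size=dim)

# l1-regularized regression as a proxy for: min ||x||_1  s.t.  Ax = y.
lasso = Lasso(alpha=1e-3, max_iter=10_000, fit_intercept=False)
lasso.fit(A, y)
x = lasso.coef_

# Assign the class whose training columns give the smallest residual.
residuals = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
             for c in range(n_classes)]
print("predicted class:", int(np.argmin(residuals)))   # expect 1
```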


Topics: Sparse approximation (64%), K-SVD (58%), Feature vector (58%)

9,039 Citations


An Introduction to Computing with Neural Nets
Journal Article - DOI: 10.1109/MASSP.1987.1165576
Richard P. Lippmann
01 Apr 1987 - IEEE ASSP Magazine
Abstract: Artificial neural net models have been studied for many years in the hope of achieving human-like performance in the fields of speech and image recognition. These models are composed of many nonlinear computational elements operating in parallel and arranged in patterns reminiscent of biological neural nets. Computational elements or nodes are connected via weights that are typically adapted during use to improve performance. There has been a recent resurgence in the field of artificial neural nets caused by new net topologies and algorithms, analog VLSI implementation techniques, and the belief that massive parallelism is essential for high performance speech and image recognition. This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification. These nets are highly parallel building blocks that illustrate neural net components and design principles and can be used to construct more complex systems. In addition to describing these nets, a major emphasis is placed on exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components. Single-layer nets can implement algorithms required by Gaussian maximum-likelihood classifiers and optimum minimum-error classifiers for binary patterns corrupted by noise. More generally, the decision regions required by any classification algorithm can be generated in a straightforward manner by three-layer feed-forward nets.
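The closing claim, that three-layer feed-forward nets (input, hidden, and output layers of nodes) can generate the decision regions required by any classification algorithm, is easy to illustrate on the classic XOR pattern, which no single-layer net can separate. A minimal sketch, assuming scikit-learn's MLPClassifier with hypothetical layer sizes:

```python
# Sketch: a small feed-forward net ("three-layer" in the paper's
# node-layer counting) learning a decision region no single-layer net
# can represent -- the XOR pattern is not linearly separable.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])            # XOR labels

net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict(X))                 # expect [0 1 1 0]
```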


Topics: Artificial neural network (57%), Statistical classification (53%), Madaline (52%)

7,595 Citations


Additive Logistic Regression: A Statistical View of Boosting
Open access - Journal Article - DOI: 10.1214/AOS/1016218223
Jerome H. Friedman, Trevor Hastie, Robert Tibshirani
Abstract: Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers thus produced. For many classification algorithms, this simple strategy results in dramatic improvements in performance. We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable to large-scale data mining applications.
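The "reweight and vote" loop the abstract summarizes can be written out in a few lines. Below is a minimal AdaBoost-style sketch (the discrete two-class variant, not the paper's LogitBoost derivation), using depth-1 decision trees from scikit-learn as the base classifier; the dataset, number of rounds, and clipping constant are hypothetical choices:

```python
# Sketch of the reweighting loop behind boosting (AdaBoost.M1 style),
# with depth-1 trees ("stumps") as the weak learner.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
y = 2 * y - 1                                   # labels in {-1, +1}
w = np.full(len(y), 1 / len(y))                 # uniform initial weights
stumps, alphas = [], []

for _ in range(25):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.clip(w @ (pred != y), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)       # this classifier's vote weight
    w *= np.exp(-alpha * y * pred)              # upweight the mistakes
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# Weighted majority vote of the sequence of classifiers.
F = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
print("training accuracy:", np.mean(np.sign(F) == y))
```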


Topics: BrownBoost (67%), Gradient boosting (66%), Boosting (machine learning) (61%)

6,068 Citations


A systematic analysis of performance measures for classification tasks
Journal Article - DOI: 10.1016/J.IPM.2009.03.002
Marina Sokolova, Guy Lapalme
Abstract: This paper presents a systematic analysis of twenty-four performance measures used in the complete spectrum of machine learning classification tasks, i.e., binary, multi-class, multi-labelled, and hierarchical. For each classification task, the study relates a set of changes in a confusion matrix to specific characteristics of data. The analysis then concentrates on the types of changes to a confusion matrix that do not change a measure and therefore preserve a classifier's evaluation (measure invariance). The result is a taxonomy of measure invariance with respect to all relevant label distribution changes in a classification problem. This formal analysis is supported by examples of applications where invariance properties of measures lead to a more reliable evaluation of classifiers. Several case studies from text classification supplement the discussion.
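A small worked example makes measure invariance concrete: precision, recall, and F1 are computed without reference to true negatives, so a change that only alters the true-negative count leaves them untouched while accuracy moves. The function and counts below are hypothetical, for illustration only:

```python
# Sketch: binary-classification measures from a confusion matrix, and
# one invariance of the kind the paper catalogues -- precision, recall,
# and F1 are unchanged when only the true-negative count changes.
def measures(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

print(measures(tp=40, fp=10, fn=5, tn=45))   # baseline
print(measures(tp=40, fp=10, fn=5, tn=945))  # only tn changed: precision,
# recall, and F1 are identical; accuracy shifts -- a measure invariance.
```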


3,030 Citations


Training Support Vector Machines: an Application to Face Detection
Proceedings Article - DOI: 10.1109/CVPR.1997.609310
E. Osuna, Robert M. Freund, F. Girosi
17 Jun 1997 - CVPR 1997
Abstract: We investigate the application of Support Vector Machines (SVMs) in computer vision. The SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs, 1985) that can be seen as a new method for training polynomial, neural network, or Radial Basis Function classifiers. The decision surfaces are found by solving a linearly constrained quadratic programming problem. This optimization problem is challenging because the quadratic form is completely dense and the memory requirements grow with the square of the number of data points. We present a decomposition algorithm that guarantees global optimality and can be used to train SVMs over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish stopping criteria for the algorithm. We present experimental results of our implementation of SVMs and demonstrate the feasibility of our approach on a face detection problem that involves a data set of 50,000 data points.
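Today the quadratic program described here is solved by standard libraries whose working-set solvers (e.g., LIBSVM's SMO) are descendants of decomposition methods of this kind. A minimal sketch on synthetic data, assuming scikit-learn; it also shows why decomposition is effective: only the support vectors, typically a small subset of the data, determine the decision surface.

```python
# Sketch: training an SVM classifier on synthetic two-class data.
# scikit-learn's SVC is backed by LIBSVM, which solves the same QP
# with a working-set (decomposition) method.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=500, centers=2, random_state=0)
clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Only the support vectors enter the decision surface, which is why
# iterating over small sub-problems can reach the global optimum.
print("support vectors:", clf.n_support_.sum(), "of", len(X))
print("training accuracy:", clf.score(X, y))
```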


2,696 Citations


Performance Metrics

Number of papers in the topic in previous years:

Year    Papers
2022        32
2021     1,542
2020     1,525
2019     1,382
2018     1,454
2017     1,363

Top Attributes


Topic's top 5 most impactful authors

Mengjie Zhang - 31 papers, 382 citations
Alex A. Freitas - 20 papers, 1.6K citations
Bing Xue - 20 papers, 246 citations
Jon Atli Benediktsson - 12 papers, 607 citations
Fadi Thabtah - 11 papers, 480 citations

Network Information
Related Topics (5)
Feature extraction - 111.8K papers, 2.1M citations - 96% related
Support vector machine - 73.6K papers, 1.7M citations - 96% related
Feature vector - 48.8K papers, 954.4K citations - 95% related
Convolutional neural network - 74.7K papers, 2M citations - 93% related
Data modeling - 29.6K papers, 470.1K citations - 93% related