scispace - formally typeset
Topic

Linear discriminant analysis

About: Linear discriminant analysis is a research topic. Over its lifetime, 18,361 publications have been published within this topic, receiving 603,195 citations. The topic is also known as LDA.
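As a quick orientation to the topic itself: LDA models each class as a Gaussian with a shared covariance and projects data onto the directions that best separate the class means. A minimal sketch using scikit-learn (the library choice and the toy two-class data are illustrative, not from any paper on this page):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two Gaussian classes in 4-D with a shared covariance (LDA's model assumption).
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),
               rng.normal(2.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# With two classes there is at most one discriminant direction.
lda = LinearDiscriminantAnalysis(n_components=1)
Z = lda.fit_transform(X, y)   # project onto the discriminant direction
accuracy = lda.score(X, y)    # training accuracy of the induced classifier
```

The same fitted object thus serves both as a dimensionality reducer (`transform`) and as a classifier (`predict`/`score`).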


Papers
Proceedings ArticleDOI
20 Jun 2005
TL;DR: This paper proposes a discriminant tensor criterion (DTC), whereby multiple interrelated lower-dimensional discriminative subspaces are derived for feature selection, together with an algorithm, discriminant analysis with tensor representation (DATER), which has the potential to outperform traditional subspace learning algorithms, especially in small-sample-size cases.
Abstract: In this paper, we present a novel approach to solving the supervised dimensionality reduction problem by encoding an image object as a general tensor of 2nd or higher order. First, we propose a discriminant tensor criterion (DTC), whereby multiple interrelated lower-dimensional discriminative subspaces are derived for feature selection. Then, a novel approach called k-mode cluster-based discriminant analysis is presented to iteratively learn these subspaces by unfolding the tensor along different tensor dimensions. We call this algorithm discriminant analysis with tensor representation (DATER), which has the following characteristics: 1) multiple interrelated subspaces can collaborate to discriminate different classes; 2) for classification problems involving higher-order tensors, the DATER algorithm can avoid the curse of dimensionality dilemma and overcome the small sample size problem; and 3) the computational cost in the learning stage is reduced to a large extent owing to the reduced data dimensions in generalized eigenvalue decomposition. We provide extensive experiments by encoding face images as 2nd or 3rd order tensors to demonstrate that the proposed DATER algorithm based on higher order tensors has the potential to outperform the traditional subspace learning algorithms, especially in the small sample size cases.

201 citations
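The core operation that DATER-style algorithms iterate over is the mode-k unfolding, which flattens a tensor into a matrix along one dimension so a subspace can be learned for that mode. A sketch of the unfolding and a mode-wise projection (the helper name, shapes, and rank are illustrative, not taken from the paper):

```python
import numpy as np

def unfold(tensor, mode):
    """Unfold a tensor along `mode`: that mode indexes rows, all others columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# A 2nd-order "image" tensor (e.g. a 4x5 gray image) and a 3rd-order tensor.
A = np.arange(20).reshape(4, 5)
T = np.arange(24).reshape(2, 3, 4)

# Projecting one mode onto a lower-dimensional basis: here an orthonormal
# 3x2 basis for mode 1, obtained from an SVD of the mode-1 unfolding.
U = np.linalg.svd(unfold(T, 1), full_matrices=False)[0][:, :2]
reduced = U.T @ unfold(T, 1)   # mode-1 fibers mapped into a 2-D subspace
```

Iterating this projection over every mode, with a discriminant (rather than variance-based) objective for each basis, is the pattern the DATER abstract describes.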

Journal ArticleDOI
TL;DR: A new kernel function, called the cosine kernel, is proposed to increase the discriminating capability of the original polynomial kernel function and a geometry-based feature vector selection scheme is adopted to reduce the computational complexity of KFDA.
Abstract: This work is a continuation and extension of our previous research where kernel Fisher discriminant analysis (KFDA), a combination of the kernel trick with Fisher linear discriminant analysis (FLDA), was introduced to represent facial features for face recognition. This work makes three main contributions to further improving the performance of KFDA. First, a new kernel function, called the cosine kernel, is proposed to increase the discriminating capability of the original polynomial kernel function. Second, a geometry-based feature vector selection scheme is adopted to reduce the computational complexity of KFDA. Third, a variant of the nearest feature line classifier is employed to enhance the recognition performance further, as it can produce virtual samples to make up for the shortage of training samples. Experiments have been carried out on a mixed database with 125 persons and 970 images, and they demonstrate the effectiveness of the improvements.

200 citations
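The cosine kernel described above normalizes the polynomial kernel so that every sample has unit self-similarity, i.e. it measures the cosine of the angle between samples in the kernel-induced feature space. A sketch under that reading (the degree and offset parameters are illustrative defaults, not the paper's settings):

```python
import numpy as np

def poly_kernel(X, Y, d=2, b=1.0):
    """Polynomial kernel (x . y + b)^d between the rows of X and Y."""
    return (X @ Y.T + b) ** d

def cosine_kernel(X, Y, d=2, b=1.0):
    """Polynomial kernel normalized so k(x, x) = 1, a cosine in feature space."""
    K = poly_kernel(X, Y, d, b)
    kx = np.diag(poly_kernel(X, X, d, b))   # k(x_i, x_i)
    ky = np.diag(poly_kernel(Y, Y, d, b))   # k(y_j, y_j)
    return K / np.sqrt(np.outer(kx, ky))

X = np.random.default_rng(1).normal(size=(5, 3))
K = cosine_kernel(X, X)   # symmetric, with ones on the diagonal
```

Normalizing this way removes the magnitude of the implicit feature vectors, which is the sense in which it can sharpen the discriminating behavior of the raw polynomial kernel.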

Journal ArticleDOI
TL;DR: In this article, a novel principal component analysis (PCA) technique directly based on original image matrices is developed for image feature extraction, which is more powerful and efficient than conventional PCA and Fisher linear discriminant (FLD) analysis.

199 citations
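The matrix-based PCA in the TL;DR builds a small image scatter matrix directly from the image matrices rather than vectorizing each image first, which keeps the eigenproblem tiny. A NumPy sketch of that idea (sizes and names are illustrative, and this is a reading of the abstract, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.normal(size=(30, 8, 6))   # 30 toy "images" of size 8x6
mean_img = images.mean(axis=0)

# Image scatter matrix: the average of (A_i - mean)^T (A_i - mean), a 6x6
# matrix, far smaller than the 48x48 covariance vectorized PCA would need.
G = np.mean([(A - mean_img).T @ (A - mean_img) for A in images], axis=0)

vals, vecs = np.linalg.eigh(G)        # eigh: G is symmetric
Xproj = vecs[:, ::-1][:, :2]          # top-2 eigenvectors as projection axes
features = images @ Xproj             # each image becomes an 8x2 feature matrix
```

The efficiency claim in the TL;DR follows from the size of `G`: its dimension depends only on the image width, not on the number of pixels.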

01 Jan 1997
TL;DR: This thesis proposes an alternate architecture that goes beyond the basilar-membrane model, and, using which, auditory features can be computed in real time, and presents a unified framework for the problem of dimension reduction and HMM parameter estimation by modeling the original features with reduced-rank HMM.
Abstract: Biologically motivated feature extraction algorithms have been found to provide significantly robust performance in speech recognition systems, in the presence of channel and noise degradation, when compared to standard features such as mel-cepstrum coefficients. However, auditory feature extraction is computationally expensive, which makes these features impractical for real-time speech recognition systems. In this thesis, I investigate the use of low-power techniques and custom analog VLSI for auditory feature extraction. I first investigated the basilar-membrane model and hair-cell model chips that were designed by Liu (Liu, 1992). I performed speech recognition experiments to evaluate how well these chips would perform as a front-end to a speech recognizer. Based on the experience gained from these experiments, I propose an alternate architecture that goes beyond the basilar-membrane model and with which auditory features can be computed in real time. These chips have been designed and tested, and consume only a few milliwatts of power, compared to general-purpose digital machines that consume several watts. I have also investigated Linear Discriminant Analysis (LDA) for dimension reduction of auditory features. Researchers have used Fisher-Rao linear discriminant analysis (LDA) to reduce the feature dimension. They model the low-dimensional features obtained from LDA as the outputs of a Markov process with hidden states (HMM). I present a unified framework for the problem of dimension reduction and HMM parameter estimation by modeling the original features with a reduced-rank HMM. This re-formulation also leads to a generalization of LDA that is consistent with the heteroscedastic state models used in HMMs, and gives better performance when tested on a digit recognition task.

199 citations
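The Fisher-Rao LDA step this thesis builds on reduces dimension by solving the generalized eigenproblem S_b w = lambda S_w w, where S_w and S_b are the within-class and between-class scatter matrices. A minimal sketch on toy data (variable names and the three-class setup are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
# Three toy classes in 5-D with shifted means.
X = np.vstack([rng.normal(m, 1.0, (40, 5)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 40)

mu = X.mean(axis=0)
Sw = np.zeros((5, 5))   # within-class scatter
Sb = np.zeros((5, 5))   # between-class scatter
for c in np.unique(y):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    Sb += len(Xc) * np.outer(mc - mu, mc - mu)

# Generalized symmetric eigenproblem Sb w = lambda Sw w; with C classes
# there are at most C-1 useful discriminant directions.
vals, vecs = eigh(Sb, Sw)
W = vecs[:, ::-1][:, :2]   # top-2 discriminant directions
Z = X @ W                  # reduced 2-D features
```

The heteroscedastic generalization the abstract mentions relaxes the shared-covariance assumption implicit in the single pooled S_w, at the cost of losing this closed-form eigen-solution.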

Journal ArticleDOI
TL;DR: It is confirmed that a combination of spectral and spatial information increases the accuracy of species classification, and that species mapping is tractable in tropical forests when using high-fidelity imaging spectroscopy.
Abstract: We identify canopy species in a Hawaiian tropical forest using supervised classification applied to airborne hyperspectral imagery acquired with the Carnegie Airborne Observatory-Alpha system. Nonparametric methods (linear and radial basis function support vector machine, artificial neural network, and k-nearest neighbor) and parametric methods (linear, quadratic, and regularized discriminant analysis) are compared for a range of species richness values and training sample sizes. We find a clear advantage in using regularized discriminant analysis, linear discriminant analysis, and support vector machines. No unique optimal classifier was found for all conditions tested, but we highlight the possibility of improving support vector machine classification with a better optimization of its free parameters. We also confirm that a combination of spectral and spatial information increases accuracy of species classification: we combine segmentation and species classification from regularized discriminant analysis to produce a map of the 17 discriminated species. Finally, we compare different methods to assess spectral separability and find a better ability of Bhattacharyya distance to assess separability within and among species. The results indicate that species mapping is tractable in tropical forests when using high-fidelity imaging spectroscopy.

199 citations
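The regularized discriminant analysis that performed well above stabilizes per-class covariance estimates; one common form (Friedman-style RDA) shrinks each class covariance toward the pooled covariance so small training samples still yield invertible estimates. A sketch of that shrinkage step only (the mixing weight `lam` and the toy data are illustrative):

```python
import numpy as np

def rda_covariances(X, y, lam=0.5):
    """Blend each class covariance with the pooled covariance.

    lam = 0 recovers QDA (per-class covariances);
    lam = 1 recovers LDA (one pooled covariance).
    """
    pooled = np.cov(X.T)
    covs = {}
    for c in np.unique(y):
        Sc = np.cov(X[y == c].T)
        covs[c] = (1 - lam) * Sc + lam * pooled
    return covs

rng = np.random.default_rng(3)
# Deliberately small classes (10 samples in 4-D) to mimic the small-sample
# regime where regularization matters.
X = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(2, 1, (10, 4))])
y = np.repeat([0, 1], 10)
covs = rda_covariances(X, y, lam=0.7)
```

Full RDA additionally shrinks toward a scaled identity with a second parameter; both weights are typically chosen by cross-validation, which is the "optimization of free parameters" flavor of tuning the abstract alludes to for SVMs.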


Network Information
Related Topics (5)
- Regression analysis: 31K papers, 1.7M citations, 85% related
- Artificial neural network: 207K papers, 4.5M citations, 80% related
- Feature extraction: 111.8K papers, 2.1M citations, 80% related
- Cluster analysis: 146.5K papers, 2.9M citations, 79% related
- Image segmentation: 79.6K papers, 1.8M citations, 79% related
Performance
Metrics
Number of papers in the topic in previous years:

Year    Papers
2025    1
2024    2
2023    756
2022    1,711
2021    678
2020    815