Journal Article

Discriminant Feature Extraction by Generalized Difference Subspace

TLDR
The discriminant ability of orthogonal projection of data onto a generalized difference subspace (GDS) is revealed both theoretically and experimentally, and two useful extensions of the method are discussed: a nonlinear extension via the kernel trick, and a combination with convolutional neural network (CNN) features.
Abstract
In this paper, we reveal the discriminant capacity of orthogonal data projection onto the generalized difference subspace (GDS), both theoretically and experimentally. In our previous work, we demonstrated that the GDS projection works as a quasi-orthogonalization of class subspaces, which is an effective feature extraction step for subspace-based classifiers. Here, we further show that GDS projection also works as a discriminant feature extraction, through a mechanism similar to that of Fisher discriminant analysis (FDA). A direct proof of the connection between GDS projection and FDA is difficult due to the significant difference in their formulations. To circumvent this complication, we first introduce geometrical Fisher discriminant analysis (gFDA), based on a simplified Fisher criterion. It is derived from a heuristic yet practically plausible assumption: the direction of the sample mean vector of a class is largely aligned with the first principal component vector of the class, given that principal component analysis (PCA) is applied without data centering. gFDA works stably even with few samples, bypassing the small sample size (SSS) problem of FDA. We then prove that gFDA is equivalent to GDS projection with a small correction term. This equivalence ensures that GDS projection inherits the discriminant ability of FDA via gFDA. Furthermore, we discuss two useful extensions of these methods: 1) a nonlinear extension by the kernel trick, and 2) a combination with CNN features. The equivalence and the effectiveness of the extensions have been verified through extensive experiments on the extended Yale B+, CMU face database, ALOI, ETH80, MNIST, and CIFAR10 datasets, mainly focusing on image recognition with small sample sizes.
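As a rough illustration of the construction described above, the following minimal sketch (Python with NumPy; the function names, dimensions, and number of removed directions are illustrative assumptions, not the paper's code) builds class subspaces by PCA without data centering and then removes the leading eigenvectors of the sum of the class projection matrices, which span the component common to all classes; the remaining directions form the GDS.

```python
# Minimal GDS-projection sketch; all names and sizes are illustrative.
import numpy as np

def class_subspace(X, dim):
    """Orthonormal basis of a class subspace: PCA *without* centering,
    i.e., the leading left singular vectors of the raw data matrix X (D x n)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :dim]

def gds_basis(bases, num_removed):
    """Eigendecompose the sum of the class projection matrices and drop the
    `num_removed` eigenvectors with the largest eigenvalues, which span the
    principal (class-common) component; the rest spans the GDS."""
    G = sum(U @ U.T for U in bases)
    w, V = np.linalg.eigh(G)            # eigenvalues in ascending order
    V = V[:, w > 1e-10]                 # restrict to the sum of the subspaces
    return V[:, : V.shape[1] - num_removed]

# Toy usage: three classes of 64-dimensional samples, 5-dimensional subspaces.
rng = np.random.default_rng(0)
bases = [class_subspace(rng.standard_normal((64, 20)), dim=5) for _ in range(3)]
H = gds_basis(bases, num_removed=2)             # GDS basis (13-dimensional here)
features = H.T @ rng.standard_normal((64, 10))  # discriminant features of new data
```

Projecting data onto the GDS in this way suppresses the directions shared by all classes, which is what gives the projection its quasi-orthogonalizing, discriminant effect.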


Citations
Proceedings Article

Environmental Sound Classification Based on CNN Latent Subspaces

TL;DR: The signal latent subspace (SLS) is proposed as an alternative sound classification method, achieving competitive results without requiring a huge amount of data.
Journal Article

Time-series Anomaly Detection based on Difference Subspace between Signal Subspaces

TL;DR: In this article, the authors propose a new method for anomaly detection in time-series data that incorporates the concept of a difference subspace into singular spectrum analysis (SSA); the difference subspace captures the whole structural difference between two signal subspaces in both magnitude and direction.
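For context, here is a minimal sketch of the underlying difference-subspace construction (assuming the standard definition: eigenvectors of the sum of two projection matrices whose eigenvalues lie strictly between 0 and 1); the window length, subspace dimension, and sum-of-eigenvalues score are illustrative assumptions, not the cited authors' exact method.

```python
# Illustrative difference-subspace sketch; not the cited paper's implementation.
import numpy as np

def signal_subspace(x, L, dim):
    """Signal subspace of a 1-D window: leading left singular vectors of the
    trajectory (Hankel) matrix, as in singular spectrum analysis (SSA)."""
    H = np.stack([x[i:i + L] for i in range(len(x) - L + 1)], axis=1)
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    return U[:, :dim]

def difference_subspace(U1, U2, eps=1e-10):
    """Eigenvectors of P1 + P2 with eigenvalues in (0, 1) span the difference
    subspace; the eigenvalues encode the canonical angles between U1 and U2."""
    w, V = np.linalg.eigh(U1 @ U1.T + U2 @ U2.T)
    mask = (w > eps) & (w < 1.0 - eps)
    return V[:, mask], w[mask]

# Toy usage: compare past and recent windows of a noisy sinusoid.
rng = np.random.default_rng(1)
x = np.sin(0.2 * np.arange(400)) + 0.1 * rng.standard_normal(400)
U_past = signal_subspace(x[:200], L=50, dim=3)
U_now = signal_subspace(x[200:], L=50, dim=3)
D, lam = difference_subspace(U_past, U_now)
score = lam.sum()   # one simple magnitude-style anomaly score (an assumption)
```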
Journal Article

Temporal-stochastic tensor features for action recognition

TL;DR: The Temporal-Stochastic Product Grassmann Manifold (TS-PGM) is proposed as an efficient method for tensor classification in tasks such as gesture and action recognition; it maps tensor modes to linear subspaces, where each subspace can be seen as a point on the Grassmann manifold of the corresponding mode.
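A rough sketch of the underlying Product Grassmann Manifold idea (the temporal and stochastic components of TS-PGM are not modeled here; function names, dimensions, and the similarity form are illustrative assumptions): each mode unfolding of a tensor is mapped to a linear subspace, i.e., a point on the Grassmann manifold of that mode, and tensors are compared by combining canonical correlations across modes.

```python
# Illustrative Product Grassmann Manifold similarity; not the TS-PGM code.
import numpy as np

def mode_unfold(T, mode):
    """Unfold tensor T along `mode` (mode fibers become columns)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_subspace(T, mode, dim):
    """Leading left singular vectors of the mode unfolding: a point on the
    Grassmann manifold associated with that tensor mode."""
    U, _, _ = np.linalg.svd(mode_unfold(T, mode), full_matrices=False)
    return U[:, :dim]

def pgm_similarity(T1, T2, dims):
    """Combine the mean squared canonical correlations of the corresponding
    mode subspaces over all tensor modes (a product, hence 'Product' PGM)."""
    sim = 1.0
    for mode, d in enumerate(dims):
        A, B = mode_subspace(T1, mode, d), mode_subspace(T2, mode, d)
        s = np.linalg.svd(A.T @ B, compute_uv=False)  # cosines of canonical angles
        sim *= np.mean(s ** 2)
    return sim

# Toy usage: two 3-mode tensors, e.g., height x width x frames of a clip.
rng = np.random.default_rng(0)
T1 = rng.standard_normal((16, 16, 30))
T2 = rng.standard_normal((16, 16, 30))
print(pgm_similarity(T1, T2, dims=(5, 5, 5)))
```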
References

Gradient-based learning applied to document recognition

TL;DR: This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task; convolutional neural networks are shown to outperform all other techniques.
Journal Article

From few to many: illumination cone models for face recognition under variable lighting and pose

TL;DR: A generative appearance-based method for recognizing human faces under variation in lighting and viewpoint, which exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images.