Topic

Feature vector

About: Feature vector is a research topic. Over its lifetime, 48,889 publications have been published within this topic, receiving 954,464 citations. The topic is also known as: feature space.


Papers

Open access · Journal Article · DOI: 10.1023/A:1022627411411
Corinna Cortes, Vladimir Vapnik
15 Sep 1995 - Machine Learning
Abstract: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. Here we extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.


Topics: Feature learning (63%), Active learning (machine learning) (62%), Feature vector (62%)

35,157 Citations
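The idea described above - a non-linear map into a high-dimensional feature space followed by a linear decision surface with a soft margin for non-separable data - is what modern kernel SVM implementations provide. A minimal sketch using scikit-learn's SVC with a polynomial kernel (an illustrative stand-in, not the authors' original implementation; the toy dataset and parameter values are assumptions):

```python
# Soft-margin SVM with a polynomial kernel on a toy two-class problem:
# a non-linear map into a feature space, then a linear decision surface there.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Non-separable two-class data (illustrative stand-in for real inputs).
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# kernel="poly" plays the role of the polynomial input transformation in the
# abstract; C is the soft-margin penalty that handles non-separable points.
clf = SVC(kernel="poly", degree=3, C=1.0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```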


Journal Article · DOI: 10.1109/TKDE.2009.191
Sinno Jialin Pan, Qiang Yang
Abstract: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but only have sufficient training data in another domain, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning, and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.


Topics: Semi-supervised learning (69%), Inductive transfer (68%), Multi-task learning (67%)

13,267 Citations
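One family of techniques covered by surveys like this one is instance re-weighting for covariate shift: the feature space is shared, but the input distribution differs between source and target. A minimal sketch on assumed toy data, using a domain classifier to estimate importance weights (an illustrative approach, not an algorithm proposed in the survey itself):

```python
# Instance re-weighting for covariate shift: weight each labeled source example
# by how "target-like" it looks, then fit the task model with those weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Source and target share a feature space but differ in distribution.
X_src = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)      # labels only on source
X_tgt = rng.normal(loc=1.0, scale=1.0, size=(500, 2))    # shifted, unlabeled target

# 1. Domain classifier: source (0) vs. target (1).
X_dom = np.vstack([X_src, X_tgt])
y_dom = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
dom_clf = LogisticRegression().fit(X_dom, y_dom)

# 2. Importance weight for each source point ~ p(target | x) / p(source | x).
p_tgt = dom_clf.predict_proba(X_src)[:, 1]
weights = p_tgt / (1.0 - p_tgt + 1e-12)

# 3. Fit the task model on source labels, re-weighted toward the target domain.
task_clf = LogisticRegression().fit(X_src, y_src, sample_weight=weights)
print("weight range:", weights.min(), weights.max())
```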


Journal Article · DOI: 10.1109/34.1000236
Dorin Comaniciu, Peter Meer
Abstract: A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and for delineating arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and to the robust M-estimators of location is also established. Algorithms for two low-level vision tasks - discontinuity-preserving smoothing and image segmentation - are described as applications. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.


Topics: Mean-shift (63%), Smoothing (58%), Estimator (56%)

11,014 Citations
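The recursive procedure the abstract refers to is simple to state: each point is repeatedly moved to the kernel-weighted mean of the data around it until it stops moving, at which point it sits on a mode of the estimated density. A minimal NumPy sketch with a Gaussian kernel (the bandwidth, toy data, and stopping rule are assumptions for illustration):

```python
# Mean shift: iterate each point toward the Gaussian-weighted mean of the data;
# the bandwidth is the single user-set "resolution" parameter.
import numpy as np

def mean_shift(X, bandwidth=0.5, n_iter=100, tol=1e-5):
    modes = X.copy()
    for _ in range(n_iter):
        # squared distances from current positions to all data points
        d2 = ((modes[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))                     # kernel weights
        new_modes = (w[:, :, None] * X[None, :, :]).sum(1) / w.sum(1, keepdims=True)
        if np.abs(new_modes - modes).max() < tol:                    # converged
            return new_modes
        modes = new_modes
    return modes

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)), rng.normal(3.0, 0.3, (100, 2))])
print(np.round(mean_shift(X)[:3], 2))   # points collapse onto one of the two modes
```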


Open access · Journal Article · DOI: 10.1109/TPAMI.2008.79
John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, et al.
Abstract: We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.


Topics: Sparse approximation (64%), K-SVD (58%), Feature vector (58%)

9,039 Citations
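The classification rule sketched in the abstract can be written in a few lines: code the test vector sparsely over all training vectors, then pick the class whose training vectors alone reconstruct it with the smallest residual. A minimal sketch using scikit-learn's Lasso as a convenient stand-in for the paper's ℓ1-minimization (the dictionary normalization, alpha value, and toy data are assumptions):

```python
# Sparse-representation classification (SRC) sketch: sparse code over the
# training set, then class-wise reconstruction residuals decide the label.
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(X_train, y_train, x_test, alpha=0.01):
    # Dictionary columns are unit-norm training feature vectors.
    A = X_train / np.linalg.norm(X_train, axis=1, keepdims=True)
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(A.T, x_test)                      # sparse coefficients over training samples
    coef = lasso.coef_
    residuals = {}
    for c in np.unique(y_train):
        mask = (y_train == c)
        recon = A[mask].T @ coef[mask]          # reconstruction from class-c atoms only
        residuals[c] = np.linalg.norm(x_test - recon)
    return min(residuals, key=residuals.get)    # smallest class-wise residual wins

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 30)), rng.normal(3, 1, (20, 30))])
y_train = np.array([0] * 20 + [1] * 20)
print(src_predict(X_train, y_train, rng.normal(3, 1, 30)))   # expected: 1
```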


Open access · Proceedings Article · DOI: 10.1109/CVPR.2015.7298682
07 Jun 2015 - CVPR 2015
Abstract: Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.


Topics: Three-dimensional face recognition (73%), Face detection (63%), Object-class detection (62%)

8,289 Citations
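Once an embedding network is trained, the abstract's point is that downstream tasks reduce to simple operations on feature vectors in Euclidean space. A minimal sketch of verification by thresholding embedding distance; embed() is a hypothetical placeholder for a trained FaceNet-style network, and the threshold value is an assumption:

```python
# Verification with learned embeddings: a small Euclidean distance between the
# two feature vectors is taken to mean "same identity".
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder only: a real system would run a trained deep network here.
    rng = np.random.default_rng(int(image.sum()) % (2 ** 32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)                 # unit-norm 128-d embedding

def same_person(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 1.1) -> bool:
    return float(np.linalg.norm(embed(img_a) - embed(img_b))) < threshold

img_a = np.zeros((160, 160, 3))
img_b = np.ones((160, 160, 3))
print(same_person(img_a, img_b))
```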


Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022        68
2021     2,723
2020     3,908
2019     4,489
2018     3,597
2017     3,020

Top Attributes


Topic's top 5 most impactful authors

David P. Casasent

35 papers, 512 citations

Dacheng Tao

34 papers, 3.5K citations

Nasser M. Nasrabadi

30 papers, 1.6K citations

Ioannis Pitas

27 papers, 629 citations

Tieniu Tan

24 papers, 1.8K citations

Network Information
Related Topics (5)
Feature extraction

111.8K papers, 2.1M citations

97% related
Support vector machine

73.6K papers, 1.7M citations

96% related
Convolutional neural network

74.7K papers, 2M citations

95% related
Deep learning

79.8K papers, 2.1M citations

95% related
Feature (computer vision)

128.2K papers, 1.7M citations

94% related