
Showing papers on "Euclidean distance published in 2021"


Book ChapterDOI
TL;DR: This chapter describes the siamese neural network architecture, outlines its main applications across a number of computational fields since its appearance in 1994, and lists the programming languages, software packages, tutorials, and guides that readers can use in practice to implement this powerful machine learning model.
Abstract: Similarity has always been a key aspect in computer science and statistics. Any time two element vectors are compared, many different similarity approaches can be used, depending on the final goal of the comparison (Euclidean distance, Pearson correlation coefficient, Spearman's rank correlation coefficient, and others). But if the comparison has to be applied to more complex data samples, with features having different dimensionality and types which might need compression before processing, these measures would be unsuitable. In these cases, a siamese neural network may be the best choice: it consists of two identical artificial neural networks, each capable of learning the hidden representation of an input vector. The two neural networks are both feedforward perceptrons and employ error back-propagation during training; they work in tandem, in parallel, comparing their outputs at the end, usually through a cosine distance. The output generated by a siamese neural network execution can be considered the semantic similarity between the projected representations of the two input vectors. In this overview we first describe the siamese neural network architecture, and then we outline its main applications in a number of computational fields since its appearance in 1994. Additionally, we list the programming languages, software packages, tutorials, and guides that can be practically used by readers to implement this powerful machine learning model.
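For readers who want a concrete starting point, here is a minimal sketch of the twin-network idea described above, assuming a PyTorch implementation with arbitrary layer sizes (none of these choices come from the chapter itself):

```python
# A minimal siamese setup: two weight-shared feedforward encoders whose
# outputs are compared with a cosine similarity. Layer sizes are illustrative
# assumptions, not taken from the chapter.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self, in_dim=128, hidden_dim=64, embed_dim=32):
        super().__init__()
        # One encoder, reused for both inputs -> identical ("twin") networks.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, x1, x2):
        z1 = self.encoder(x1)
        z2 = self.encoder(x2)
        # Cosine similarity of the two hidden representations, in [-1, 1].
        return F.cosine_similarity(z1, z2, dim=-1)

net = SiameseNet()
a, b = torch.randn(4, 128), torch.randn(4, 128)
print(net(a, b))  # one similarity score per input pair
```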

281 citations


Journal ArticleDOI
TL;DR: Findings indicate that the developed framework successfully distinguishes individuals who walk too close to one another and breach/violate social distancing; the transfer learning approach also boosts the overall efficiency of the model.

182 citations


Journal ArticleDOI
TL;DR: This paper presents a hybrid version of the Harris Hawks Optimization algorithm based on Bitwise operations and Simulated Annealing to solve the FS problem for classification purposes using wrapper methods, and it presents superior results compared to other algorithms.
Abstract: The significant growth of modern technology and smart systems has left a massive production of big data. Big data face not only dimensionality problems but also other emerging problems such as redundancy, irrelevance, and noise among the features. Therefore, feature selection (FS) has become an urgent need in the search for the optimal subset of features. This paper presents a hybrid version of the Harris Hawks Optimization algorithm based on Bitwise operations and Simulated Annealing (HHOBSA) to solve the FS problem for classification purposes using wrapper methods. Two bitwise operations (AND and OR) can randomly transfer the most informative features from the best solution to the others in the population to raise their quality. Simulated Annealing (SA) boosts the performance of the HHOBSA algorithm and helps it escape local optima. A standard wrapper method, K-nearest neighbors with the Euclidean distance metric, works as an evaluator for new solutions. A comparison between HHOBSA and other state-of-the-art algorithms is presented on 24 standard datasets and 19 artificial datasets, whose dimensions can reach thousands of features. The artificial datasets help to study the effects of different data dimensions, noise ratios, and sample sizes on the FS process. We employ several performance measures, including classification accuracy, fitness values, number of selected features, and computational time. We conduct two statistical significance tests of HHOBSA, paired-samples t and Wilcoxon signed ranks. The proposed algorithm presents superior results compared to other algorithms.
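For illustration, here is a minimal sketch of the wrapper-evaluation step only, assuming scikit-learn, an arbitrary dataset, and a simple fitness that weighs error rate against subset size; the hawk/bitwise/annealing search itself is not reproduced:

```python
# Hedged sketch: a candidate feature subset (here a random binary mask; in
# HHOBSA it would come from the optimizer) is scored with a K-nearest-neighbors
# classifier using the Euclidean metric. Dataset, k, and the alpha weighting
# are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

def evaluate_subset(mask, alpha=0.99):
    """Wrapper fitness: weighted error rate plus feature-count penalty (lower is better)."""
    if not mask.any():
        return 1.0  # an empty subset is the worst possible solution
    knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
    acc = cross_val_score(knn, X[:, mask], y, cv=5).mean()
    return alpha * (1 - acc) + (1 - alpha) * mask.mean()

rng = np.random.default_rng(0)
mask = rng.random(X.shape[1]) > 0.5   # stand-in for a hawk's binary position
print("fitness:", evaluate_subset(mask))
```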

101 citations


Journal ArticleDOI
TL;DR: A novel few-shot learning framework named hybrid inference network (HIN) is proposed to tackle the problem of SAR target recognition with only a few training samples by combining the inductive inference and the transductive inference methods.
Abstract: Synthetic aperture radar (SAR) automatic target recognition (ATR) plays an important role in SAR image interpretation. However, at least hundreds of training samples are usually required for each target type in the existing SAR ATR algorithms. In this article, a novel few-shot learning framework named hybrid inference network (HIN) is proposed to tackle the problem of SAR target recognition with only a few training samples. The recognition procedure of HIN consists of two main stages. In the first stage, an embedding network is utilized to map the SAR images into an embedding space. In the second stage, a hybrid inference strategy that combines the inductive inference and the transductive inference is adopted to classify the samples in the embedding space. In the inductive inference section, each sample is recognized independently according to a metric based on Euclidean distance. In the transductive inference section, all samples are recognized as a whole according to their manifold structures by label propagation. Finally, in the hybrid inference section, the classification result is obtained by combining the above two inference methods. To train the framework more effectively, a novel loss function named enhanced hybrid loss is proposed to constrain samples to gain better interclass separability in the embedding space. Experimental results on the moving and stationary target acquisition and recognition (MSTAR) benchmark data set illustrate that HIN performs well in few-shot SAR image classification.
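As a rough illustration of the inductive-inference stage (nearest class prototype under Euclidean distance), here is a NumPy sketch with made-up shapes; the embedding network and the transductive label-propagation stage are omitted:

```python
# Illustrative sketch of Euclidean-metric inductive inference only: each query
# embedding is assigned to the class whose prototype (mean support embedding)
# is nearest in Euclidean distance. Shapes are arbitrary assumptions.
import numpy as np

def nearest_prototype(support, support_labels, queries):
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    # Pairwise Euclidean distances between queries and class prototypes.
    d = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(0)
support = rng.normal(size=(15, 8))      # 3 classes x 5 shots, 8-dim embeddings
labels = np.repeat([0, 1, 2], 5)
queries = rng.normal(size=(4, 8))
print(nearest_prototype(support, labels, queries))
```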

69 citations



Journal ArticleDOI
TL;DR: A novel method called correntropy-based hypergraph regularized NMF (CHNMF) is proposed, and the half-quadratic technique is adopted to solve its complex optimization problem; extensive experimental results indicate that the proposed method is superior to other state-of-the-art methods for clustering and feature selection.
Abstract: Non-negative matrix factorization (NMF) has become one of the most powerful methods for clustering and feature selection. However, the performance of the traditional NMF method severely degrades when the data contain noises and outliers or the manifold structure of the data is not taken into account. In this article, a novel method called correntropy-based hypergraph regularized NMF (CHNMF) is proposed to solve the above problem. Specifically, we use the correntropy instead of the Euclidean norm in the loss term of CHNMF, which will improve the robustness of the algorithm. And the hypergraph regularization term is also applied to the objective function, which can explore the high-order geometric information in more sample points. Then, the half-quadratic (HQ) optimization technique is adopted to solve the complex optimization problem of CHNMF. Finally, extensive experimental results on multi-cancer integrated data indicate that the proposed CHNMF method is superior to other state-of-the-art methods for clustering and feature selection.
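A minimal sketch of the loss substitution follows, assuming a Gaussian-kernel correntropy (Welsch-type) loss with an arbitrary kernel width; the hypergraph regularizer and the half-quadratic solver are not shown:

```python
# Sketch of the loss substitution only (not the full CHNMF algorithm): the
# Frobenius/Euclidean reconstruction loss is replaced by a correntropy-induced
# loss, which saturates for large residuals and therefore down-weights outliers.
# The kernel width sigma is an illustrative assumption.
import numpy as np

def euclidean_loss(X, W, H):
    return np.sum((X - W @ H) ** 2)

def correntropy_loss(X, W, H, sigma=1.0):
    R = X - W @ H
    return np.sum(1.0 - np.exp(-(R ** 2) / (2.0 * sigma ** 2)))

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(20, 10)))
X[0, 0] += 50.0                       # a single outlier entry
W = np.abs(rng.normal(size=(20, 3)))
H = np.abs(rng.normal(size=(3, 10)))
# The Euclidean loss is dominated by the outlier; the correntropy loss is not.
print(euclidean_loss(X, W, H), correntropy_loss(X, W, H))
```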

59 citations


Journal ArticleDOI
TL;DR: A novel fault diagnosis approach is proposed based on an improved Manhattan distance in the Symmetrized Dot Pattern (SDP) image, and different vibration signals of rolling bearings are classified according to this improved Manhattan distance.

58 citations


Journal ArticleDOI
TL;DR: Extensive experimental results performed on the curated breast imaging subset of digital database of screening mammography dataset show that the proposed FFCL structure can achieve superior performances for both triple and multiclass classification in BI-RADS scoring, outperforming the state-of-the-art methods.
Abstract: Traditional deep learning methods are suboptimal in classifying ambiguous features, which often arise in noisy and hard-to-predict categories, especially when distinguishing semantic scores. Semantic scoring, which depends on semantic logic to implement evaluation, inevitably contains fuzzy descriptions and misses some concepts; for example, the ambiguous relationship between normal and probably normal always presents unclear boundaries (normal, more likely normal, probably normal). Thus, human error is common when annotating images. Differing from existing methods that focus on modifying the kernel structure of neural networks, this article proposes a dominant fuzzy fully connected layer (FFCL) for breast imaging reporting and data system (BI-RADS) scoring and validates the universality of this proposed structure. The proposed model aims to develop complementary properties of scoring for semantic paradigms, while constructing fuzzy rules based on analyzing human thought patterns, and in particular to reduce the influence of semantic conglutination. Specifically, a semantic-sensitive defuzzifier layer projects features occupied by relative categories into a semantic space, and a fuzzy decoder modifies the probabilities of the last output layer with reference to the global trend. Moreover, the ambiguous semantic space between two relative categories shrinks during the learning phases, as the positive and negative growth trends of one category appearing among its relatives are considered. We first used the Euclidean distance to measure the gap between the real scores and the predicted scores, and then employed a two-sample t-test to demonstrate the advantage of the FFCL architecture. Extensive experimental results performed on the curated breast imaging subset of the digital database for screening mammography dataset show that our FFCL structure can achieve superior performance for both triple and multiclass classification in BI-RADS scoring, outperforming state-of-the-art methods.

50 citations


Journal ArticleDOI
TL;DR: Li et al. as mentioned in this paper proposed a Bi-Similarity Network (BSNet) which consists of a single embedding module and a bi-similarity module of two similarity measures.
Abstract: Few-shot learning for fine-grained image classification has gained recent attention in computer vision. Among the approaches for few-shot learning, metric-based methods, due to their simplicity and effectiveness, achieve state-of-the-art performance on many tasks. Most of the metric-based methods assume a single similarity measure and thus obtain a single feature space. However, if samples can simultaneously be well classified via two distinct similarity measures, the samples within a class can be distributed more compactly in a smaller feature space, producing more discriminative feature maps. Motivated by this, we propose a so-called Bi-Similarity Network (BSNet) that consists of a single embedding module and a bi-similarity module of two similarity measures. After the support images and the query images pass through the convolution-based embedding module, the bi-similarity module learns feature maps according to two similarity measures of diverse characteristics. In this way, the model is enabled to learn more discriminative and less similarity-biased features from few shots of fine-grained images, such that the model generalization ability can be significantly improved. Through extensive experiments that slightly modify established metric/similarity based networks, we show that the proposed approach produces a substantial improvement on several fine-grained image benchmark datasets. Code is available at: https://github.com/PRIS-CV/BSNet.
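A rough sketch of the bi-similarity idea follows, using Euclidean and cosine similarity as the two (assumed) measures over one embedding; BSNet's actual learned similarity modules are more elaborate than this:

```python
# Toy illustration: a query embedding is scored against class representatives
# under two different similarity measures, and the scores are combined. The
# similarity choices and dimensions are assumptions for illustration only.
import numpy as np

def euclidean_sim(q, s):
    return -np.linalg.norm(q - s, axis=-1)          # larger = more similar

def cosine_sim(q, s):
    q = q / np.linalg.norm(q, axis=-1, keepdims=True)
    s = s / np.linalg.norm(s, axis=-1, keepdims=True)
    return np.sum(q * s, axis=-1)

rng = np.random.default_rng(0)
query = rng.normal(size=(1, 64))
support = rng.normal(size=(5, 64))                  # 5 class representatives
score = euclidean_sim(query, support) + cosine_sim(query, support)
print("predicted class:", score.argmax())
```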

48 citations


Journal ArticleDOI
01 Jan 2021
TL;DR: This letter presents a new deep learning-based framework for robust nonlinear estimation and control using the concept of a Neural Contraction Metric, and demonstrates how to exploit NCMs to design an online optimal estimator and controller for nonlinear systems with bounded disturbances utilizing their duality.
Abstract: This letter presents a new deep learning-based framework for robust nonlinear estimation and control using the concept of a Neural Contraction Metric (NCM). The NCM uses a deep long short-term memory recurrent neural network for a global approximation of an optimal contraction metric, the existence of which is a necessary and sufficient condition for exponential stability of nonlinear systems. The optimality stems from the fact that the contraction metrics sampled offline are the solutions of a convex optimization problem to minimize an upper bound of the steady-state Euclidean distance between perturbed and unperturbed system trajectories. We demonstrate how to exploit NCMs to design an online optimal estimator and controller for nonlinear systems with bounded disturbances utilizing their duality. The performance of our framework is illustrated through Lorenz oscillator state estimation and spacecraft optimal motion planning problems.

42 citations


Journal ArticleDOI
TL;DR: An original gravity model with effective distance for identifying influential nodes, based on information fusion and multi-level processing, is proposed; it comprehensively considers the global and local information of the complex network and utilizes the effective distance in place of the Euclidean distance.

Journal ArticleDOI
TL;DR: The significance and precision of the prediction rely on the fault indicator, which is computed from three distance measures, Mahalanobis distance, Euclidean distance, and angular distance, and thereby enables an effective health estimation of the circuit.
Abstract: Fault prediction in analog circuits is a serious problem to be addressed on an immediate basis because, traditionally, faults in analog circuits are diagnosed only after their occurrence. Since the outcome of faults creates highly expensive scenarios in the analog circuit industry, there is a need for an effective prediction model that keeps track of faults prior to their occurrence. Accordingly, this article focuses on a fault prediction model for analog circuits using a proposed deep model called Rider-deep-long short-term memory (LSTM). Here, the significance and precision of the prediction rely on the fault indicator, which is computed from three distance measures, namely Mahalanobis distance, Euclidean distance, and angular distance, and thereby enables an effective health estimation of the circuit. The estimation is effectively solved using the Rider-deep-LSTM, which integrates the proposed Rider-Adam algorithm into deep LSTM for training the model parameters. The proposed prediction method acquires Pearson correlation coefficients of 0.9973 and 0.9919 on circuits such as a solar power converter and a low-noise bipolar transistor amplifier.
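For illustration, the three distance measures named above can be computed between a test feature vector and a healthy reference as in the following SciPy-based sketch with synthetic data; how the paper fuses them into the final fault indicator is not reproduced:

```python
# Sketch of the three distance measures only (Mahalanobis, Euclidean, angular),
# computed between a test feature vector and the mean of a healthy reference set.
import numpy as np
from scipy.spatial import distance

rng = np.random.default_rng(0)
healthy = rng.normal(size=(200, 5))           # reference (fault-free) feature vectors
x = rng.normal(loc=0.5, size=5)               # features of the circuit under test
mu = healthy.mean(axis=0)
VI = np.linalg.inv(np.cov(healthy, rowvar=False))   # inverse covariance for Mahalanobis

d_euc = distance.euclidean(x, mu)
d_mah = distance.mahalanobis(x, mu, VI)
# Angular distance: angle (radians) recovered from the cosine distance.
d_ang = np.arccos(np.clip(1 - distance.cosine(x, mu), -1.0, 1.0))

print(d_euc, d_mah, d_ang)
```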

Journal ArticleDOI
TL;DR: This article proposes a similarity-based ranking (SR) strategy inspired by a density-based clustering algorithm, introduces structural similarity (SSIM) to measure the relationships between the bands, and demonstrates that SR-SSIM outperforms the other methods.
Abstract: Band selection (BS) is a commonly used dimension reduction technique for hyperspectral images. In this article, we propose a similarity-based ranking (SR) strategy inspired by a density-based clustering algorithm. The representativeness of a band is evaluated according to its ability to become a cluster center. We introduce structural similarity (SSIM) to measure the relationships between the bands; thus, our proposed ranking-based BS method is called SR-SSIM. We picked state-of-the-art BS methods as competitors and carried out classification experiments on different data sets. The results illustrate that SR-SSIM outperforms the other methods. It is demonstrated in this article that SSIM is more suitable for hyperspectral BS than the Euclidean distance, since SSIM can mine the spatial information contained in the band images. Furthermore, we discuss the application of BS methods to a deep learning classifier. We found that proper preprocessing by the BS method can effectively eliminate redundant information and avoid overfitting.
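As a small illustration of the band-to-band comparison, the sketch below contrasts SSIM with a plain Euclidean distance on two synthetic band images, assuming scikit-image; it is not the full SR-SSIM ranking procedure:

```python
# SSIM (sensitive to local spatial structure) versus a plain Euclidean distance
# between two band images. The random "bands" stand in for real hyperspectral data.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
band_a = rng.random((64, 64))
band_b = band_a + 0.05 * rng.random((64, 64))   # a highly correlated neighboring band

ssim = structural_similarity(band_a, band_b, data_range=1.0)
euclid = np.linalg.norm(band_a - band_b)
print(f"SSIM={ssim:.3f}, Euclidean distance={euclid:.3f}")
```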

Journal ArticleDOI
TL;DR: An improved iterative closest point (ICP) algorithm based on the curvature feature similarity of the point cloud is proposed to improve the performance of the classic ICP algorithm in unstructured environments in terms of alignment accuracy, robustness, and stability.

Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper proposed a novel ranking loss function, named Bi-directional Exponential Angular Triplet Loss, to help learn an angularly separable common feature space by explicitly constraining the included angles between embedding vectors.
Abstract: RGB-Infrared person re-identification (RGB-IR Re-ID) is a cross-modality matching problem in which the modality discrepancy is a big challenge. Most existing works use Euclidean metric based constraints to resolve the discrepancy between features of images from different modalities. However, these methods are incapable of learning angularly discriminative feature embeddings because Euclidean distance cannot effectively measure the included angle between embedding vectors. As an angularly discriminative feature space is important for classifying human images based on their embedding vectors, in this paper, we propose a novel ranking loss function, named Bi-directional Exponential Angular Triplet Loss, to help learn an angularly separable common feature space by explicitly constraining the included angles between embedding vectors. Moreover, to help stabilize and learn the magnitudes of embedding vectors, we adopt a common space batch normalization layer. Quantitative and qualitative experiments on the SYSU-MM01 and RegDB datasets support our analysis. On the SYSU-MM01 dataset, performance is improved from 7.40% / 11.46% to 38.57% / 38.61% in rank-1 accuracy / mAP compared with the baseline. The proposed method can be generalized to the task of single-modality Re-ID and improves the rank-1 accuracy / mAP from 92.0% / 81.7% to 94.7% / 86.6% on the Market-1501 dataset, and from 82.6% / 70.6% to 87.6% / 77.1% on the DukeMTMC-reID dataset.
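The sketch below shows a generic angle-based triplet constraint in PyTorch, purely to illustrate how an angular margin differs from a Euclidean one; it is not the paper's bi-directional exponential formulation:

```python
# Generic angular triplet constraint (illustrative assumption, not the paper's
# exact loss): the included angle between anchor and positive should be smaller
# than the angle between anchor and negative by a margin.
import torch
import torch.nn.functional as F

def angular_triplet_loss(anchor, positive, negative, margin=0.2):
    # Angles (radians) recovered from cosine similarities, clamped for numerical safety.
    ang_ap = torch.acos(F.cosine_similarity(anchor, positive).clamp(-1 + 1e-7, 1 - 1e-7))
    ang_an = torch.acos(F.cosine_similarity(anchor, negative).clamp(-1 + 1e-7, 1 - 1e-7))
    return F.relu(ang_ap - ang_an + margin).mean()

a, p, n = (torch.randn(8, 256) for _ in range(3))
print(angular_triplet_loss(a, p, n))
```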

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed a probabilistic graph convolutional network (GCN) based method to improve the similarity measurement in ReID, which regards the ReID task as a prediction problem of the link probability between node pairs.

Journal ArticleDOI
TL;DR: A strategy is presented that models the visual scene as a semantic sub-graph by preserving only the semantic and geometric information from object detection; the results indicate that this semantic graph-based representation, without extracting visual features, is feasible for loop-closure detection at potentially competitive precision.

Journal ArticleDOI
TL;DR: A secure and efficient scheme is proposed to find the exact nearest neighbor over encrypted medical images by computing a lower bound of the Euclidean distance that is related to the mean and standard deviation of the data.
Abstract: Medical imaging is crucial for medical diagnosis, and the sensitive nature of medical images necessitates rigorous security and privacy solutions to be in place. In a cloud-based medical system for Healthcare Industry 4.0, medical images should be encrypted prior to being outsourced. However, processing queries over encrypted data without first executing the decryption operation is challenging and impractical at present. In this paper, we propose a secure and efficient scheme to find the exact nearest neighbor over encrypted medical images. Instead of calculating the Euclidean distance, we reject candidates by computing the lower bound of the Euclidean distance that is related to the mean and standard deviation of data. Unlike most existing schemes, our scheme can obtain the exact nearest neighbor rather than an approximate result. We, then, evaluate our proposed approach to demonstrate its utility.
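As an illustration of lower-bound-based candidate rejection (shown here in the clear, without any encryption), one well-known bound relates the squared Euclidean distance to the per-vector means and standard deviations; whether this is the exact bound used in the paper is an assumption:

```python
# For d-dimensional vectors x and q:
#   ||x - q||^2 >= d * [(mu_x - mu_q)^2 + (sd_x - sd_q)^2]
# so a candidate whose cheap bound already exceeds the best distance found so far
# can be skipped without computing the full Euclidean distance. The result is
# still the exact nearest neighbor because the bound never overestimates.
import numpy as np

def lower_bound_sq(x_stats, q_stats, d):
    (mx, sx), (mq, sq) = x_stats, q_stats
    return d * ((mx - mq) ** 2 + (sx - sq) ** 2)

rng = np.random.default_rng(0)
db = rng.random((1000, 128))                      # stand-in image feature vectors
q = rng.random(128)
stats = [(v.mean(), v.std()) for v in db]         # precomputed per-vector statistics
q_stats, best, best_i = (q.mean(), q.std()), np.inf, -1

for i, v in enumerate(db):
    if lower_bound_sq(stats[i], q_stats, 128) >= best:
        continue                                  # rejected by the cheap bound
    dist = np.sum((v - q) ** 2)
    if dist < best:
        best, best_i = dist, i

print("exact nearest neighbour:", best_i)
```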

Proceedings ArticleDOI
01 Jun 2021
TL;DR: DotD, as discussed by the authors, is a new metric for tiny object detection, defined as the normalized Euclidean distance between the center points of two bounding boxes, and is applicable where anchor-based and anchor-free detectors are used.
Abstract: Object detection has achieved great progress with the development of anchor-based and anchor-free detectors. However, the detection of tiny objects is still challenging due to the lack of appearance information. In this paper, we observe that Intersection over Union (IoU), the most widely used metric in object detection, is sensitive to slight offsets between predicted bounding boxes and ground truths when detecting tiny objects. Although some new metrics such as GIoU, DIoU and CIoU are proposed, their performance on tiny object detection is still below the expected level by a large margin. In this paper, we propose a simple but effective new metric called Dot Distance (DotD) for tiny object detection where DotD is defined as normalized Euclidean distance between the center points of two bounding boxes. Extensive experiments on tiny object detection dataset show that anchor-based detectors’ performance is highly improved over their baselines with the application of DotD.
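A hedged sketch of a center-point metric of this kind is given below; the center distance is Euclidean, while the exponential normalization by an average object size is an assumption about the exact definition rather than a statement of the paper's formula:

```python
# Center-point distance metric sketch: Euclidean distance between box centers,
# mapped into (0, 1] by an assumed dataset-level size normalizer. Unlike IoU,
# it degrades gracefully for tiny boxes shifted by a few pixels.
import numpy as np

def dot_distance(box_a, box_b, avg_size):
    """Boxes as (x1, y1, x2, y2); avg_size is a dataset-level normalizer (assumed)."""
    ca = np.array([(box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2])
    cb = np.array([(box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2])
    d = np.linalg.norm(ca - cb)                 # Euclidean center distance
    return np.exp(-d / avg_size)                # similarity in (0, 1]

pred = (10, 10, 18, 18)          # an 8x8 "tiny" predicted box
gt = (12, 11, 20, 19)            # ground truth shifted by a couple of pixels
print(dot_distance(pred, gt, avg_size=12.0))
```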

Journal ArticleDOI
TL;DR: By employing VT and the largest standard cluster analysis (LSCA) to quantify the neighborhood between atoms in an MD-simulated cooling of liquid metal Ta, VNA cases are indeed found in the system, and it is shown that as the degree of disorder of a system increases, the probability of VNA increases.

Journal ArticleDOI
TL;DR: This article first constructs the Mahalanobis distance in the kernel space and then proposes a novel fuzzy clustering model with a kernelized Mahalanobis distance, namely KMD-FC, which outperforms the state-of-the-art methods in comparison.
Abstract: Data samples of complicated geometry and nonlinear separability are considered common challenges to clustering algorithms. In this article, we first construct the Mahalanobis distance in the kernel space and then propose a novel fuzzy clustering model with a kernelized Mahalanobis distance, namely KMD-FC. The key contributions of KMD-FC include: first, the construction of the KMD matrix is innovatively transformed from the Euclidean distance kernel matrix, which effectively avoids the "curse of dimensionality" posed by explicitly calculating the sample covariance matrix in the kernel space; second, for the first time, a kernelized Gustafson–Kessel (GK) fuzzy C-means algorithm is achieved, which is critically important for extending the applications of the GK algorithm to nonlinear classification tasks; finally, the overall distribution of samples in the kernel space after kernel mapping is taken into account to improve the generalizability of the proposed KMD-FC clustering method. Comprehensive experiments conducted on a wide range of datasets, including synthetic datasets and machine learning repository (UCI) datasets, validate that the proposed clustering algorithm outperforms the state-of-the-art methods in comparison.

Journal ArticleDOI
TL;DR: A new NDB generation algorithm is proposed that employs a new set of parameters to control the selection of different bits when generating NDB records, enabling fine-grained control of the accuracy of distance estimation, together with an approach specialized for estimating the Euclidean distance from the NDBs generated by the QK-hidden algorithm.

Journal ArticleDOI
Yiyuan Gao, Dejie Yu
TL;DR: Experimental results indicate that GSR-D is better and more stable than the standard convolutional neural network and support vector machine in rolling bearing fault diagnosis, and GSR-D has only two tuning parameters with a certain robustness.

Journal ArticleDOI
TL;DR: This paper aims to establish a squared Euclidean distance (SED)-based outranking approach and develop a novel PF LINMAP methodology for handling an MCDA problem under PF uncertainty; it derives a comprehensive dominance index to determine the overall dominance relation.
Abstract: Pythagorean fuzzy (PF) sets involving Pythagorean membership grades can befittingly manipulate inexact and equivocal information in real-life problems involving multiple criteria decision analysis (MCDA). The linear programming technique for multidimensional analysis of preference (LINMAP) is a prototypical compromising model, and it is widely used to carry out decision-making problems in many down-to-earth applications. In LINMAP, the use of squared Euclidean distances is a significant technique and an effective approach to fitting measurements. Taking advantage of a newly developed Euclidean distance model on the grounds of PF sets, this paper initiates a beneficial concept of squared PF Euclidean distances and studies its valuable and desirable properties. This paper aims to establish a squared Euclidean distance (SED)-based outranking approach and develop a novel PF LINMAP methodology for handling an MCDA problem under PF uncertainty. In the SED-based outranking approach, a novel SED-based dominance index is proposed to reflect an overall balance of a PF evaluative rating between the connection and remotest connection with positive- and negative-ideal ratings, respectively. The properties of the proposed index are also analyzed to exhibit its efficaciousness in determining the dominance relations for intracriterion comparisons. Moreover, this paper derives the comprehensive dominance index to determine the overall dominance relation and defines measurements of rank consistency for goodness of fit and rank inconsistency for poorness of fit. The PF LINMAP model is formulated to ascertain the optimal weight vector that maximizes the total comprehensive dominance index and minimizes the poorness of fit under consideration of the lowest acceptable level and specialized degenerate weighting issues. The practical application concerning bridge-superstructure construction methods is conducted to test the feasibility and practicability of the PF LINMAP model. Over and above that, a generalization of the proposed methodology, along with applications to green supplier selection and railway project investment, is investigated to deal with group decision-making issues. Several comparative studies are implemented to further validate its usefulness and advantages. The application and comparison results display the effectuality and flexibility of the developed PF LINMAP methodology. In the end, the directions for future research of this work are presented in the conclusion.

Journal ArticleDOI
TL;DR: This work presents the first efficient, scalable, and exact method to find time series motifs under Dynamic Time Warping, and shows that, in many domains, DTW-based motifs represent semantically meaningful conserved behavior that would escape the authors' attention using all existing Euclidean distance-based methods.
Abstract: In recent years, time series motif discovery has emerged as perhaps the most important primitive for many analytical tasks, including clustering, classification, rule discovery, segmentation, and summarization. In parallel, it has long been known that Dynamic Time Warping (DTW) is superior to other similarity measures such as Euclidean Distance under most settings. However, due to the computational complexity of both DTW and motif discovery, virtually no research efforts have been directed at combining these two ideas. The current best mechanisms to address their lethargy appear to be mutually incompatible. In this work, we present the first efficient, scalable and exact method to find time series motifs under DTW. Our method automatically performs the best trade-off of time-to-compute versus tightness-of-lower-bounds for a novel hierarchy of lower bounds that we introduce. As we shall show through extensive experiments, our algorithm prunes up to 99.99% of the DTW computations under realistic settings and is up to three to four orders of magnitude faster than the brute force search, and two orders of magnitude faster than the only other competitor algorithm. This allows us to discover DTW motifs in massive datasets for the first time. As we will show, in many domains, DTW-based motifs represent semantically meaningful conserved behavior that would escape our attention using all existing Euclidean distance-based methods.
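For readers unfamiliar with the two measures being contrasted, the sketch below compares a plain Euclidean distance with a minimal quadratic-time DTW on two phase-shifted sine waves; it has none of the lower-bounding machinery the paper introduces:

```python
# Minimal (unoptimized, O(n*m)) DTW alongside the Euclidean distance, to show
# why the warped measure tolerates local misalignment that Euclidean distance
# penalizes heavily.
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

t = np.linspace(0, 2 * np.pi, 100)
a = np.sin(t)
b = np.sin(t + 0.3)                    # same shape, slightly shifted in phase
print("Euclidean:", np.linalg.norm(a - b), " DTW:", dtw(a, b))
```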

Journal ArticleDOI
TL;DR: This work proposes the ADP (Adaptive-threshold Douglas-Peucker) algorithm, which no longer relies on ship static information and makes it easier to determine the threshold, something traditional algorithms cannot achieve.

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a hybrid deep learning framework, dynamic directed spatio-temporal graph convolution networks (DD-STGCN), to deal with space-time prediction in the continuous and dynamic wind-field.

Journal ArticleDOI
TL;DR: In this paper, a fixed-length quantizer for gradients in first order stochastic optimization is presented, which is easy to implement and involves only a Hadamard transform computation and adaptive uniform quantization with appropriately chosen dynamic ranges.
Abstract: We present Rotated Adaptive Tetra-iterated Quantizer (RATQ), a fixed-length quantizer for gradients in first order stochastic optimization. RATQ is easy to implement and involves only a Hadamard transform computation and adaptive uniform quantization with appropriately chosen dynamic ranges. For noisy gradients with almost surely bounded Euclidean norms, we establish an information theoretic lower bound for optimization accuracy using finite precision gradients and show that RATQ almost attains this lower bound. For mean square bounded noisy gradients, we use a gain-shape quantizer which separately quantizes the Euclidean norm and uses RATQ to quantize the normalized unit norm vector. We establish lower bounds for the performance of any optimization procedure and shape quantizer when used with a uniform gain quantizer. Finally, we propose an adaptive quantizer for gain which, when used with RATQ as the shape quantizer, outperforms uniform gain quantization and is, in fact, close to optimal. As a by-product, we show that our fixed-length quantizer RATQ has almost the same performance as the optimal variable-length quantizers for distributed mean estimation. Also, we obtain an efficient quantizer for Gaussian vectors which attains a rate very close to the Gaussian rate-distortion function and is, in fact, universal for sub-Gaussian input vectors.
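A toy sketch of the two ingredients named in the abstract, a randomized Hadamard rotation followed by uniform quantization in a fixed dynamic range, is shown below; the adaptive, tetra-iterated range selection that defines RATQ itself is not reproduced, and the parameter choices are arbitrary:

```python
# Randomized Hadamard rotation + uniform quantization sketch (illustrative only).
import numpy as np
from scipy.linalg import hadamard

def rotate_and_quantize(g, levels=16, dyn_range=4.0):
    d = len(g)                                   # assumed to be a power of two
    H = hadamard(d) / np.sqrt(d)                 # orthonormal Hadamard matrix
    D = np.diag(np.random.default_rng(0).choice([-1.0, 1.0], size=d))
    rotated = H @ D @ g                          # randomized rotation of the gradient
    step = 2 * dyn_range / (levels - 1)
    q = np.clip(np.round(rotated / step) * step, -dyn_range, dyn_range)
    return q, rotated

g = np.random.default_rng(1).normal(size=64)
q, rotated = rotate_and_quantize(g)
print("quantization MSE:", np.mean((q - rotated) ** 2))
```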

Journal ArticleDOI
TL;DR: A Euclidean distance based multiscale fuzzy entropy (EDM-Fuzzy) is proposed, which measures the similarity of two vectors with continuous values from zero to one based on the Euclidean distance of the two vectors, and obtains higher accuracy in detecting bearing faults than state-of-the-art entropy technologies.
Abstract: Sample entropy (SampEn) technologies have been widely applied in diagnosing the faults of industrial systems. However, there are two disadvantages of these technologies. First, all of these technologies measure the distance of two vectors solely based on the maximum distance between the corresponding elements in the two vectors, which is not able to fully reflect the distance of the two vectors. Second, these methodologies measure the similarity of two vectors with either zero or one, which may cause sudden changes in entropy values. Therefore, we proposed a Euclidean distance based multiscale fuzzy entropy (EDM-Fuzzy), which measures the similarity of two vectors with continuous values from zero to one based on the Euclidean distance of the two vectors. The results from the synthetic and real signals demonstrated that EDM-Fuzzy has higher accuracy in measuring the complexity of signals. As a result, EDM-Fuzzy obtains a higher accuracy in detecting the bearing faults than the state-of-the-art entropy technologies.
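A minimal sketch of the vector-similarity ingredient follows: a fuzzy membership in [0, 1] computed from the Euclidean distance between two template vectors, contrasted with SampEn's hard 0/1 match on the maximum elementwise difference; the Gaussian form and tolerance r are illustrative assumptions:

```python
# Hard SampEn-style match versus a smooth, Euclidean-distance-based fuzzy similarity.
import numpy as np

def sampen_match(u, v, r):
    return float(np.max(np.abs(u - v)) <= r)      # hard 0/1 similarity

def fuzzy_euclidean_similarity(u, v, r):
    d = np.linalg.norm(u - v) / np.sqrt(len(u))   # length-normalized Euclidean distance
    return np.exp(-(d ** 2) / (2 * r ** 2))       # smooth similarity in (0, 1]

u = np.array([0.1, 0.2, 0.3])
v = np.array([0.12, 0.19, 0.41])
# The hard match flips to 0 as soon as one element differs by more than r;
# the fuzzy similarity decreases gradually instead.
print(sampen_match(u, v, 0.1), fuzzy_euclidean_similarity(u, v, 0.1))
```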

Journal ArticleDOI
TL;DR: This paper proposes a locality-constrained sparse representation classifier (LSRC); experiments prove that the proposed LSRC outperforms other popular classifiers.