
Showing papers on "Sparse approximation published in 2018"


PatentDOI
TL;DR: Sparse coding algorithms are experimentally implemented in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors, enabling efficient pattern matching and lateral neuron inhibition and allowing input data to be sparsely encoded using neuron activities and stored dictionary elements.
Abstract: Sparse representation of information performs powerful feature extraction on high-dimensional data and is of interest for applications in signal processing, machine vision, object recognition, and neurobiology. Sparse coding is a mechanism by which biological neural systems can efficiently process complex sensory data while consuming very little power. Sparse coding algorithms in a bio-inspired approach can be implemented in a crossbar array of memristors (resistive memory devices). This network enables efficient implementation of pattern matching and lateral neuron inhibition, allowing input data to be sparsely encoded using neuron activities and stored dictionary elements. The reconstructed input can be obtained by performing a backward pass through the same crossbar matrix using the neuron activity vector as input. Different dictionary sets can be trained and stored in the same system, depending on the nature of the input signals. Using the sparse coding algorithm, natural image processing is performed based on a learned dictionary.
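To make the pattern-matching/inhibition/reconstruction loop above concrete, here is a minimal software sketch of a locally-competitive sparse coding iteration, a common algorithmic analogue of the crossbar scheme; the dictionary, sizes, and constants are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dictionary stored as crossbar conductances: columns are atoms.
D = rng.standard_normal((32, 8))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms

coef_true = np.zeros(8)
coef_true[1], coef_true[4] = 2.0, -1.5
x = D @ coef_true                               # input built from two atoms

# Locally competitive algorithm (LCA)-style dynamics: D.T @ x is the crossbar
# pattern-matching (forward) pass, and the D.T @ D coupling term implements
# lateral inhibition between neurons.
lam, dt = 0.2, 0.1
u = np.zeros(8)                                 # membrane potentials
G = D.T @ D - np.eye(8)                         # lateral inhibition weights
for _ in range(500):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # thresholded activities
    u += dt * (D.T @ x - u - G @ a)

a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
x_hat = D @ a                                   # backward pass: reconstruction
```

The activity vector `a` stays sparse (only the atoms actually present in the input survive the threshold), and the backward pass through the same dictionary matrix reproduces the input up to the shrinkage bias.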

484 citations


Journal ArticleDOI
TL;DR: A systematic review of the SR-based multi-sensor image fusion literature, highlighting the pros and cons of each category of approaches and evaluating the impact of three key algorithmic components on fusion performance across different applications.

297 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel multiple sparse representation framework for visual tracking which jointly exploits the shared and feature-specific properties of different features by decomposing multiple sparsity patterns and introduces a novel online multiple metric learning to efficiently and adaptively incorporate the appearance proximity constraint.
Abstract: The use of multiple features has been shown to be an effective strategy for visual tracking because of their complementary contributions to appearance modeling. The key problem is how to learn a fused representation from multiple features for appearance modeling. Different features extracted from the same object should share some commonalities in their representations while each feature should also have some feature-specific representation patterns which reflect its complementarity in appearance modeling. Different from existing multi-feature sparse trackers which only consider the commonalities among the sparsity patterns of multiple features, this paper proposes a novel multiple sparse representation framework for visual tracking which jointly exploits the shared and feature-specific properties of different features by decomposing multiple sparsity patterns. Moreover, we introduce a novel online multiple metric learning to efficiently and adaptively incorporate the appearance proximity constraint, which ensures that the learned commonalities of multiple features are more representative. Experimental results on tracking benchmark videos and other challenging videos demonstrate the effectiveness of the proposed tracker.

207 citations


Journal ArticleDOI
TL;DR: Various applications of sparse representation in wireless communications, with a focus on the most recent compressive sensing (CS)-enabled approaches, are discussed.
Abstract: Sparse representation can efficiently model signals in different applications to facilitate processing. In this article, we will discuss various applications of sparse representation in wireless communications, with a focus on the most recent compressive sensing (CS)-enabled approaches. With the help of the sparsity property, CS is able to enhance the spectrum efficiency (SE) and energy efficiency (EE) of fifth-generation (5G) and Internet of Things (IoT) networks.
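The CS recovery step that such approaches rely on can be sketched with a textbook greedy solver; everything below (sizes, sensing matrix, sparsity pattern) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 128, 256, 4                      # measurements, ambient dim, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)   # random Gaussian sensing matrix

x = np.zeros(m)
x[[3, 50, 120, 200]] = [1.5, -2.0, 1.0, -1.2]  # k-sparse signal
y = A @ x                                  # compressed measurements (n < m)

# Orthogonal matching pursuit (OMP): greedily pick the column most correlated
# with the residual, then re-fit by least squares on the selected support.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(m)
x_hat[support] = coef
```

With far fewer measurements than the ambient dimension, the sparse signal is recovered exactly; this is the mechanism behind the SE/EE gains mentioned above.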

200 citations


Journal ArticleDOI
TL;DR: Experimental results indicate that INLF acquires NLFs from HiDS matrices more efficiently than existing methods; its mapping function makes parameter training unconstrained and compatible with general training schemes while maintaining the nonnegativity constraints.
Abstract: High-dimensional and sparse (HiDS) matrices are commonly encountered in many big-data-related and industrial applications like recommender systems. When acquiring useful patterns from them, nonnegative matrix factorization (NMF) models have proven to be highly effective owing to their fine representativeness of the nonnegative data. However, current NMF techniques suffer from: 1) inefficiency in addressing HiDS matrices; and 2) constraints in their training schemes. To address these issues, this paper proposes to extract nonnegative latent factors (NLFs) from HiDS matrices via a novel inherently NLF (INLF) model. It bridges the output factors and decision variables via a single-element-dependent mapping function, thereby making the parameter training unconstrained and compatible with general training schemes on the premise of maintaining the nonnegativity constraints. Experimental results on six HiDS matrices arising from industrial applications indicate that INLF is able to acquire NLFs from them more efficiently than any existing method does.
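The single-element-dependent mapping idea can be illustrated with a deliberately tiny toy (not the paper's model): train unconstrained parameters z and expose nonnegative factors x = z**2, so ordinary gradient descent preserves nonnegativity without any projection step:

```python
import numpy as np

# Observed entries of a sparse 3x3 matrix: (row, col, value) triples.
rows = np.array([0, 1, 2])
cols = np.array([1, 2, 0])
vals = np.array([2.0, 6.0, 3.0])

zu = np.ones(3)          # unconstrained row-factor parameters
zv = np.ones(3)          # unconstrained column-factor parameters

lr = 0.01
for _ in range(2000):
    u, v = zu**2, zv**2                        # nonnegative by construction
    err = u[rows] * v[cols] - vals             # residuals on observed entries only
    gu, gv = np.zeros(3), np.zeros(3)
    np.add.at(gu, rows, err * v[cols] * 2 * zu[rows])   # chain rule through z**2
    np.add.at(gv, cols, err * u[rows] * 2 * zv[cols])
    zu -= lr * gu
    zv -= lr * gv

u, v = zu**2, zv**2
```

The optimizer never sees a constraint, yet the exposed factors stay nonnegative and fit the observed entries.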

187 citations


Book ChapterDOI
TL;DR: An overview of sparse methods for DOA estimation, with particular emphasis on the recently developed gridless sparse methods, e.g., those based on covariance fitting and the atomic norm.
Abstract: Direction-of-arrival (DOA) estimation refers to the process of retrieving the direction information of several electromagnetic waves/sources from the outputs of a number of receiving antennas that form a sensor array. DOA estimation is a major problem in array signal processing and has wide applications in radar, sonar, wireless communications, etc. With the development of sparse representation and compressed sensing, the last decade has witnessed a tremendous advance in this research topic. The purpose of this article is to provide an overview of these sparse methods for DOA estimation, with particular emphasis on the recently developed gridless sparse methods, e.g., those based on covariance fitting and the atomic norm. Several future research directions are also discussed.

184 citations


Journal ArticleDOI
Yi Qin1
TL;DR: This paper explores a new impulsive feature extraction method based on sparse representation and demonstrates its advantages in weak repetitive transient feature extraction.
Abstract: The localized faults of rolling bearings can be diagnosed by the extraction of the impulsive feature. However, the approximately periodic impulses may be submerged in strong interferences generated by other components and the background noise. To address this issue, this paper explores a new impulsive feature extraction method based on the sparse representation. According to the vibration model of an impulse generated by the bearing fault, a novel impulsive wavelet is constructed, which satisfies the admissibility condition. As a result, this family of model-based impulsive wavelets can form a Parseval frame. With the model-based impulsive wavelet basis and Fourier basis, a convex optimization problem is formulated to extract the repetitive impulses. Based on the splitting idea, an iterative thresholding shrinkage algorithm is proposed to solve this problem, and it has a fast convergence rate. Via the simulated signal and real vibration signals with bearing fault information, the performance of the proposed approach for repetitive impulsive feature extraction is validated and compared with the noted spectral kurtosis method, the optimized spectral kurtosis method based on simulated annealing, and the resonance-based signal decomposition method. The results demonstrate its advantage and superiority in weak repetitive transient feature extraction.
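The iterative thresholding shrinkage step described above is, in its generic form, ISTA; a minimal sketch follows, with a random stand-in dictionary in place of the paper's impulsive-wavelet + Fourier dictionary (all sizes and values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# A generic overcomplete basis standing in for the model-based dictionary.
A = rng.standard_normal((40, 20))
A /= np.linalg.norm(A, axis=0)

x_true = np.zeros(20)
x_true[2], x_true[11] = 3.0, -2.0               # two "repetitive impulses"
y = A @ x_true + 0.01 * rng.standard_normal(40)

# Iterative shrinkage-thresholding (ISTA) for  min_x 0.5*||Ax-y||^2 + lam*||x||_1:
# a gradient step on the data-fit term followed by soft thresholding.
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(20)
for _ in range(500):
    z = x - (A.T @ (A @ x - y)) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
```

The soft-threshold operator suppresses the noise-driven coefficients while the two impulsive components survive, which is the mechanism the paper exploits for weak transient extraction.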

183 citations


Journal ArticleDOI
TL;DR: DeepDenoiser uses a deep neural network to simultaneously learn a sparse representation of data in the time-frequency domain and a nonlinear function that maps this representation into masks that decompose input data into a signal of interest and noise.
Abstract: Denoising and filtering are widely used in routine seismic data processing to improve the signal-to-noise ratio (SNR) of recorded signals and thereby improve subsequent analyses. In this paper we develop a new denoising/decomposition method, DeepDenoiser, based on a deep neural network. This network is able to simultaneously learn a sparse representation of data in the time-frequency domain and a nonlinear function that maps this representation into masks that decompose input data into a signal of interest and noise (defined as any non-seismic signal). We show that DeepDenoiser achieves impressive denoising of seismic signals even when the signal and noise share a common frequency band. Our method properly handles a variety of colored noise and non-earthquake signals. DeepDenoiser can significantly improve the SNR with minimal changes in the waveform shape of interest, even in the presence of high noise levels. We demonstrate the effect of our method on improving earthquake detection. There are clear applications of DeepDenoiser to seismic imaging, micro-seismic monitoring, and preprocessing of ambient noise data. Our approach is not limited to earthquake data and can be adapted to diverse signals and applications in other settings.
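The decomposition-by-masking idea can be sketched with a hand-crafted frequency-domain mask in place of the learned one; the signal, noise level, and threshold below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

fs = 100.0
t = np.arange(0, 4, 1 / fs)                     # 4 s record
clean = np.sin(2 * np.pi * 5 * t)               # 5 Hz signal of interest
noisy = clean + 0.5 * rng.standard_normal(t.size)

# A hand-crafted stand-in for the learned mask (DeepDenoiser learns this mapping
# from data): keep only frequency bins that stand out from the noise floor.
spec = np.fft.rfft(noisy)
mask = np.abs(spec) > 3 * np.median(np.abs(spec))
denoised = np.fft.irfft(spec * mask, n=t.size)
```

Even this crude mask improves the SNR substantially; the paper's contribution is learning a far better mask that also works when signal and noise overlap in frequency.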

167 citations


Journal ArticleDOI
TL;DR: A new data preprocessing method is proposed to obtain training data; by reusing the data points between adjacent samples, the fault identification rate is significantly improved, and the hyperparameters are tuned by changing the number of units in each layer.

164 citations


Journal ArticleDOI
TL;DR: In this article, an image-based rendering technique built on light field reconstruction from a limited set of perspective views acquired by cameras is developed, which utilizes sparse representation of epipolar-plane images (EPI) in the shearlet transform domain.
Abstract: In this article we develop an image-based rendering technique built on light field reconstruction from a limited set of perspective views acquired by cameras. Our approach utilizes sparse representation of epipolar-plane images (EPI) in the shearlet transform domain. The shearlet transform has been specifically modified to handle the straight lines characteristic of EPI. The devised iterative regularization algorithm based on adaptive thresholding provides high-quality reconstruction results for relatively large disparities between neighboring views. The generated densely sampled light field of a given 3D scene is thus suitable for all applications which require light field reconstruction. The proposed algorithm compares favorably against state-of-the-art depth-image-based rendering techniques and shows superior performance specifically in reconstructing scenes containing semi-transparent objects.

157 citations


Journal ArticleDOI
TL;DR: A thorough set of performance comparisons indicates a very wide range of performance differences among the existing and proposed methods, and clearly identifies those that are the most effective.
Abstract: Convolutional sparse representations are a form of sparse representation with a dictionary that has a structure that is equivalent to convolution with a set of linear filters. While effective algorithms have recently been developed for the convolutional sparse coding problem, the corresponding dictionary learning problem is substantially more challenging. Furthermore, although a number of different approaches have been proposed, the absence of thorough comparisons between them makes it difficult to determine which of them represents the current state of the art. The present work both addresses this deficiency and proposes some new approaches that outperform existing ones in certain contexts. A thorough set of performance comparisons indicates a very wide range of performance differences among the existing and proposed methods, and clearly identifies those that are the most effective.

Journal ArticleDOI
TL;DR: An off-grid model for downlink channel sparse representation with arbitrary two-dimensional-array antenna geometry is introduced, and an efficient sparse Bayesian learning approach for the sparse channel recovery and off-grid refinement is proposed.
Abstract: This paper addresses the problem of downlink channel estimation in frequency-division duplexing massive multiple-input multiple-output systems. The existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, there are at least two shortcomings of these DFT-based methods: first, they are applicable to uniform linear arrays (ULAs) only, since the DFT basis requires a special structure of ULAs; and second, they always suffer from a performance loss due to the leakage of energy over some DFT bins. To deal with the above-mentioned shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary two-dimensional-array antenna geometry, and propose an efficient sparse Bayesian learning approach for the sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to consider the sampled grid points as adjustable parameters. Utilizing an inexact block majorization–minimization algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.

Journal ArticleDOI
TL;DR: A novel sparse representation framework that learns dictionaries based on the latent space of variational auto-encoder for large-scale data sets and outperforms competing algorithms in all kinds of anomaly detection tasks.
Abstract: Anomaly detection has a wide range of applications in security, such as network monitoring and smart city/campus construction. It has become an active research issue of great concern in recent years. However, most existing algorithms are powerless for large-scale and high-dimensional data, and the intermediate data extracted by some methods that can handle high-dimensional data consume lots of storage space. In this paper, we propose a novel sparse representation framework that learns dictionaries based on the latent space of a variational auto-encoder. For large-scale data sets, it can play the role of dimensionality reduction to obtain hidden information, and it extracts more high-level features than hand-crafted features. At the same time, the space cost of storing normal information can be greatly reduced. To verify the versatility and performance of the proposed learning algorithm, we have experimented on different types of anomaly detection tasks, including the KDD-CUP data set for network intrusion detection, the MNIST data set for image anomaly detection, and the UCSD pedestrian data set for abnormal event detection in surveillance videos. The experimental results demonstrate that the proposed algorithm outperforms competing algorithms in all kinds of anomaly detection tasks.

Proceedings ArticleDOI
17 Oct 2018
TL;DR: The results demonstrate the importance of sparsity in neural IR models and show that dense representations can be pruned effectively, giving new insights about essential semantic features and their distributions.
Abstract: The availability of massive data and computing power allowing for effective data driven neural approaches is having a major impact on machine learning and information retrieval research, but these models have a basic problem with efficiency. Current neural ranking models are implemented as multistage rankers: for efficiency reasons, the neural model only re-ranks the top ranked documents retrieved by a first-stage efficient ranker in response to a given query. Neural ranking models learn dense representations causing essentially every query term to match every document term, making it highly inefficient or intractable to rank the whole collection. The reliance on a first stage ranker creates a dual problem: First, the interaction and combination effects are not well understood. Second, the first stage ranker serves as a "gate-keeper" or filter, effectively blocking the potential of neural models to uncover new relevant documents. In this work, we propose a standalone neural ranking model (SNRM) by introducing a sparsity property to learn a latent sparse representation for each query and document. This representation captures the semantic relationship between the query and documents, but is also sparse enough to enable constructing an inverted index for the whole collection. We parameterize the sparsity of the model to yield a retrieval model as efficient as conventional term based models. Our model gains in efficiency without loss of effectiveness: it not only outperforms the existing term matching baselines, but also performs similarly to the recent re-ranking based neural models with dense representations. Our model can also take advantage of pseudo-relevance feedback for further improvements. More generally, our results demonstrate the importance of sparsity in neural IR models and show that dense representations can be pruned effectively, giving new insights about essential semantic features and their distributions.
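The core claim, that a sparse latent representation supports a classical inverted index, can be sketched as follows; the document vectors and scoring below are hypothetical stand-ins, not SNRM's learned representations:

```python
from collections import defaultdict

# Hypothetical sparse latent vectors (latent dimension -> weight) for three
# documents, standing in for SNRM's learned representations.
docs = {
    "d1": {3: 0.8, 17: 0.2},
    "d2": {3: 0.1, 42: 0.9},
    "d3": {17: 0.5, 42: 0.4},
}

# Build an inverted index over latent dimensions, exactly as one would over terms.
index = defaultdict(list)
for doc_id, vec in docs.items():
    for dim, w in vec.items():
        index[dim].append((doc_id, w))

def score(query_vec):
    """Dot-product retrieval touching only postings of the query's nonzero dims."""
    scores = defaultdict(float)
    for dim, qw in query_vec.items():
        for doc_id, dw in index.get(dim, []):
            scores[doc_id] += qw * dw
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranking = score({3: 1.0, 42: 0.5})
```

Because only the postings lists of the query's active dimensions are touched, retrieval cost scales with representation sparsity rather than collection size, which is exactly the efficiency argument made above.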

Journal ArticleDOI
TL;DR: A superposed linear representation classifier (SLRC) is developed to cast the recognition problem by representing the test image in terms of a superposition of the class centroids and the shared intra-class differences.
Abstract: Collaborative representation methods, such as sparse subspace clustering (SSC) and sparse representation-based classification (SRC), have achieved great success in face clustering and classification by directly utilizing the training images as the dictionary bases. In this paper, we reveal that the superior performance of collaborative representation relies heavily on the sufficiently large class separability of controlled face datasets such as Extended Yale B. On uncontrolled or undersampled datasets, however, collaborative representation suffers from the misleading coefficients of the incorrect classes. To address this limitation, inspired by the success of linear discriminant analysis (LDA), we develop a superposed linear representation classifier (SLRC) to cast the recognition problem by representing the test image in terms of a superposition of the class centroids and the shared intra-class differences. Despite its simplicity and approximation, the SLRC largely improves the generalization ability of collaborative representation, and competes well with more sophisticated dictionary learning techniques in experiments on the AR and FRGC databases. Enforced with the sparsity constraint, SLRC achieves state-of-the-art performance on the FERET database using a single sample per person.
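A minimal sketch of the superposed representation on synthetic data; the gallery, probe, and residual rule below are illustrative assumptions in the spirit of SLRC, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy gallery (hypothetical data): 3 classes, 4 samples each, in 10-D.
centroids_true = 3.0 * rng.standard_normal((10, 3))
X = np.hstack([centroids_true[:, [c]] + 0.3 * rng.standard_normal((10, 4))
               for c in range(3)])
labels = np.repeat(np.arange(3), 4)

# SLRC-style dictionary: class centroids plus the shared intra-class differences.
C = np.stack([X[:, labels == c].mean(axis=1) for c in range(3)], axis=1)
V = X - C[:, labels]                            # each sample minus its own centroid
D = np.hstack([C, V])

y = centroids_true[:, 1] + 0.3 * rng.standard_normal(10)   # probe from class 1

beta, *_ = np.linalg.lstsq(D, y, rcond=None)    # superposed linear representation
resid = [np.linalg.norm(y - C[:, c] * beta[c] - V @ beta[3:]) for c in range(3)]
pred = int(np.argmin(resid))
```

The probe is explained by its own class centroid plus the shared variation atoms, so the residual is smallest for the correct class.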

Journal ArticleDOI
TL;DR: This work builds a bridge between matrix factorization, sparse dictionary learning, and sparse autoencoders; it shows that training the filters is essential to allow for nontrivial signals in the model, and derives an online algorithm to learn the dictionaries from real data, effectively resulting in cascaded sparse convolutional layers.
Abstract: The recently proposed multilayer convolutional sparse coding (ML-CSC) model, consisting of a cascade of convolutional sparse layers, provides a new interpretation of convolutional neural networks (CNNs). Under this framework, the forward pass in a CNN is equivalent to a pursuit algorithm aiming to estimate the nested sparse representation vectors from a given input signal. Despite having served as a pivotal connection between CNNs and sparse modeling, a deeper understanding of the ML-CSC is still lacking. In this paper, we propose a sound pursuit algorithm for the ML-CSC model by adopting a projection approach. We provide new and improved bounds on the stability of the solution of such pursuit and we analyze different practical alternatives to implement this in practice. We show that the training of the filters is essential to allow for nontrivial signals in the model, and we derive an online algorithm to learn the dictionaries from real data, effectively resulting in cascaded sparse convolutional layers. Last, but not least, we demonstrate the applicability of the ML-CSC model for several applications in an unsupervised setting, providing competitive results. Our work represents a bridge between matrix factorization, sparse dictionary learning, and sparse autoencoders, and we analyze these connections in detail.

Journal ArticleDOI
TL;DR: A novel double-image compression-encryption algorithm is proposed by combining co-sparse representation with random pixel exchanging to enhance the confidentiality and the robustness of double image encryption algorithms.

Journal ArticleDOI
TL;DR: A hybrid quantum-classical algorithm for the time evolution of out-of-equilibrium thermal states, which classically computes a sparse approximation to the density matrix and time-evolves each matrix element via the quantum computer.
Abstract: We present a hybrid quantum-classical algorithm for the time evolution of out-of-equilibrium thermal states. The method depends on classically computing a sparse approximation to the density matrix and, then, time-evolving each matrix element via the quantum computer. For this exploratory study, we investigate a time-dependent Ising model with five spins on the Rigetti Forest quantum virtual machine and a one-spin system on the Rigetti 8Q-Agave quantum processor.

Journal ArticleDOI
TL;DR: Qualitative and quantitative results show that the proposed 3D feature constrained reconstruction (3D-FCR) algorithm can lead to a promising improvement of LDCT image quality.
Abstract: Low-dose computed tomography (LDCT) images are often highly degraded by amplified mottle noise and streak artifacts. Maintaining image quality under low-dose scan protocols is a well-known challenge. Recently, sparse representation-based techniques have been shown to be efficient in improving such CT images. In this paper, we propose a 3D feature constrained reconstruction (3D-FCR) algorithm for LDCT image reconstruction. The feature information used in the 3D-FCR algorithm relies on a 3D feature dictionary constructed from available high-quality standard-dose CT samples. The CT voxels and the sparse coefficients are sequentially updated using an alternating minimization scheme. The performance of the 3D-FCR algorithm was assessed through experiments conducted on phantom simulation data and clinical data. A comparison with previously reported solutions was also performed. Qualitative and quantitative results show that the proposed method can lead to a promising improvement of LDCT image quality.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed ECG signal representation using a sparse decomposition technique with PSO-optimized least-square twin SVM (the best classifier model among k-NN, PNN, and RBFNN) achieves higher classification accuracy than existing state-of-the-art methods.
Abstract: As per the report of the World Health Organization (WHO), the mortalities due to cardiovascular diseases (CVDs) have increased to 50 million worldwide. Therefore, it is essential to have an efficient diagnosis of CVDs to enhance healthcare in the clinical cardiovascular domain. The ECG signal analysis of a patient is a very popular tool to perform diagnosis of CVDs. However, due to the non-stationary nature of the ECG signal and the higher computational burden of existing signal processing methods, automated and efficient diagnosis remains a challenge. This paper presents a new feature extraction method using the sparse representation technique to efficiently represent different ECG signals for efficient analysis. The sparse method decomposes an ECG signal into elementary waves using an overcomplete Gabor dictionary. Four features, namely time delay, frequency, width parameter, and square of expansion coefficient, are extracted from each of the significant atoms of the dictionary. These features are concatenated and analyzed to determine the optimal length of the discriminative feature vector representing each ECG signal. The extracted features are then classified using machine learning techniques such as least-square twin SVM, k-NN, PNN, and RBFNN. Further, the learning parameters of the classifiers are optimized using ABC and PSO techniques. The experiments are carried out for the proposed methods using the benchmark MIT-BIH data and evaluated under category and personalized analysis schemes. Experimental results show that the proposed ECG signal representation with PSO-optimized least-square twin SVM (the best classifier model among k-NN, PNN, and RBFNN) achieves higher classification accuracy than existing state-of-the-art methods: 99.11% under the category scheme and 89.93% under the personalized scheme.
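The sparse decomposition stage, matching pursuit over a Gabor dictionary, can be sketched as follows; the dictionary grid and the synthetic two-atom signal are illustrative assumptions, not the paper's setup:

```python
import numpy as np

n = 256
t = np.arange(n)

# A small hypothetical Gabor dictionary: Gaussian-windowed cosines over a grid
# of centers, widths, and frequencies.
atoms, params = [], []
for center in range(0, n, 32):
    for width in (8.0, 16.0):
        for freq in (0.05, 0.1, 0.2):
            g = np.exp(-((t - center) ** 2) / (2 * width ** 2)) \
                * np.cos(2 * np.pi * freq * t)
            atoms.append(g / np.linalg.norm(g))
            params.append((center, width, freq))
D = np.array(atoms).T                      # columns are unit-norm atoms

# Synthetic signal built from two dictionary atoms (stand-in for ECG waves).
x = 2.0 * D[:, 5] - 1.0 * D[:, 20]

# Matching pursuit: repeatedly peel off the best-matching atom and record
# (atom parameters, expansion coefficient) as features.
r, features = x.copy(), []
for _ in range(2):
    k = int(np.argmax(np.abs(D.T @ r)))
    c = float(D[:, k] @ r)
    features.append((params[k], c))
    r = r - c * D[:, k]
```

Each recovered (center, width, frequency, coefficient) tuple corresponds to the four per-atom features described above.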

Journal ArticleDOI
TL;DR: Sparse representation theory (referred to as Sparseland) puts forward an emerging, highly effective, and universal model that describes data as a linear combination of a few atoms taken from a dictionary of such fundamental elements.
Abstract: Modeling data is the way we scientists believe that information should be explained and handled. Indeed, models play a central role in practically every task in signal and image processing and machine learning. Sparse representation theory (we shall refer to it as Sparseland) puts forward an emerging, highly effective, and universal model. Its core idea is the description of data as a linear combination of a few atoms taken from a dictionary of such fundamental elements.
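The Sparseland model statement fits in a few lines; the dictionary and coefficients below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)

# The Sparseland model: a dictionary D of atoms, and a signal that is a linear
# combination of only a few of them.
D = rng.standard_normal((50, 100))
D /= np.linalg.norm(D, axis=0)                  # 100 unit-norm atoms in 50-D

a = np.zeros(100)
a[[7, 31, 64]] = [1.0, -2.0, 0.5]               # only 3 of 100 atoms are active
y = D @ a                                       # the observed signal

sparsity = np.count_nonzero(a) / a.size         # fraction of active atoms
```

Everything else in the theory, pursuit algorithms, dictionary learning, stability guarantees, revolves around recovering the few active atoms `a` from the observation `y`.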

Journal ArticleDOI
TL;DR: A novel superpixel-based sparse representation (SSR) model is proposed for hyperspectral image (HSI) super-resolution, which first learns a spectral dictionary from the HSI and constructs a transformed dictionary corresponding to the multispectral image (MSI).

Journal ArticleDOI
TL;DR: A Bayes-optimal algorithm is devised for robust DOA estimation in the presence of additive outliers from the perspective of sparse Bayesian learning (SBL), achieving excellent performance in terms of resolution and accuracy.
Abstract: Conventional direction-of-arrival (DOA) estimation methods are sensitive to outlier measurements. Therefore, their performance may degrade substantially in the presence of impulsive noise. In this paper, we address the problem of DOA estimation in the presence of additive outliers from the perspective of sparse Bayesian learning (SBL). A Bayes-optimal algorithm is devised for robust DOA estimation, which can achieve excellent performance in terms of resolution and accuracy. To reduce the computational complexity of the SBL scheme, a fast alternating algorithm is also developed. New grid-refining procedures are further introduced into these two proposed algorithms to efficiently fix the off-grid gap. As our solutions do not require prior knowledge of the number of sources and can resolve highly correlated or coherent sources, it is expected that they have higher applicability. Simulation results verify the outlier-robust performance of the SBL approach.

Journal ArticleDOI
TL;DR: A whale optimization algorithm (WOA)-optimized orthogonal matching pursuit (OMP) with a combined time–frequency atom dictionary is proposed; detailed comparisons with the state of the art in the field highlight the advantages of the proposed method.

Journal ArticleDOI
TL;DR: A novel graph learning method named adaptive weighted nonnegative low-rank representation (AWNLRR) for data clustering, which imposes an adaptive weighted matrix on the data reconstruction errors to reinforce the role of the important features in the joint representation and thus a robust graph can be obtained.

Journal ArticleDOI
TL;DR: A novel method is proposed to extract fault features from non-stationary vibration signals of gearboxes using signal sparse decomposition and order tracking; an improved matching pursuit algorithm on segmented signals is designed to solve for sparse coefficients and reconstruct steady-type and impact-type fault components.

Journal ArticleDOI
TL;DR: A novel subspace clustering via learning an adaptive low-rank graph affinity matrix is proposed, where the affinity matrix and the representation coefficients are learned in a unified framework and the pre-computed graph regularizer is effectively obviated and better performance can be achieved.
Abstract: By using a sparse representation or low-rank representation of data, the graph-based subspace clustering has recently attracted considerable attention in computer vision, given its capability and efficiency in clustering data. However, the graph weights built using the representation coefficients are not the exact ones as the traditional definition is in a deterministic way. The two steps of representation and clustering are conducted in an independent manner, thus an overall optimal result cannot be guaranteed. Furthermore, it is unclear how the clustering performance will be affected by using this graph. For example, the graph parameters, i.e., the weights on edges, have to be artificially pre-specified while it is very difficult to choose the optimum. To this end, in this paper, a novel subspace clustering via learning an adaptive low-rank graph affinity matrix is proposed, where the affinity matrix and the representation coefficients are learned in a unified framework. As such, the pre-computed graph regularizer is effectively obviated and better performance can be achieved. Experimental results on several famous databases demonstrate that the proposed method performs better against the state-of-the-art approaches, in clustering.

Journal ArticleDOI
TL;DR: The foundation of compressive sensing is explained and the measurement process is highlighted by reviewing existing measurement matrices; a three-level classification is provided, and the results show that the Circulant, Toeplitz, and Hadamard matrices outperform the other measurement matrices.

Journal ArticleDOI
TL;DR: By vectorizing the sample covariance matrix, the generalized sum and difference coarray (GSDC) concept is defined for exploiting the advantages of proposed configuration for direction of arrival estimation.
Abstract: In this paper, we propose a generalized co-prime multiple-input multiple-output (MIMO) configuration for direction of arrival estimation. Compared with the conventional co-prime MIMO radar that requires prototype co-prime arrays for transmitter and receiver, the proposed configuration enlarges the inter-element spacing of the transmitter with an integer expansion factor. By vectorizing the sample covariance matrix, the generalized sum and difference coarray (GSDC) concept is defined for exploiting the advantages of the proposed configuration. The analytical expressions for the expansion factor, the maximum number of consecutive lags, and the number of unique lags are derived carefully. It is verified that the GSDC can obtain more degrees of freedom (DOFs) as the expansion factor increases. Specifically, with $\mathcal {O}(M+N)$ sensors, the GSDC can provide $\mathcal {O}(M^{2}N^{2})$ DOFs, whereas the conventional one only has $\mathcal {O}(MN)$ DOFs. Simulation results demonstrate the usefulness of the proposed configuration utilizing both spatial smoothing and sparse representation algorithms.
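The coarray idea, that the pairwise lags of O(M+N) physical sensors yield many more virtual lags, can be checked numerically for a plain co-prime pair; this is the classic construction, not the paper's generalized MIMO geometry:

```python
import numpy as np

def difference_coarray(pos):
    """All unique lags p_i - p_j generated by sensor positions pos."""
    pos = np.asarray(pos)
    return np.unique(pos[:, None] - pos[None, :])

# A plain co-prime pair (M, N) = (3, 5): M sensors at spacing N combined with
# N sensors at spacing M (they share the sensor at position 0).
M, N = 3, 5
pos = np.unique(np.concatenate([N * np.arange(M), M * np.arange(N)]))
lags = difference_coarray(pos)
```

Seven physical sensors generate 21 distinct lags here; the paper's generalized sum-and-difference coarray pushes this multiplication further via the transmit-side expansion factor.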

Journal ArticleDOI
TL;DR: A new classification method fusing CC and JSR is proposed, which uses the within-class similarity between training and test samples while decreasing the between-class interference.
Abstract: The joint sparse representation (JSR)-based classifier assumes that pixels in a local window can be jointly and sparsely represented by a dictionary constructed from the training samples. The class label of each pixel can be decided according to the representation residual. However, once the local window of a pixel includes pixels from different classes, the performance of the JSR classifier may be seriously degraded. Since the correlation coefficient (CC) is able to measure the spectral similarity among different pixels efficiently, this letter proposes a new classification method that fuses CC and JSR, attempting to use the within-class similarity between training and test samples while decreasing the between-class interference. First, the CCs among the training and test samples are calculated. Then, the JSR-based classifier is used to obtain the representation residuals of different pixels. Finally, a regularization parameter $\lambda $ is introduced to achieve a balance between the JSR and the CC. Experimental results obtained on the Indian Pines data set demonstrate the competitive performance of the proposed approach with respect to other widely used classifiers.
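A minimal sketch of fusing the two cues with a balance parameter; the normalization and decision rule here are illustrative assumptions, not the letter's exact formulation:

```python
import numpy as np

# Hypothetical fusion rule in the spirit of the letter: combine per-class JSR
# residuals (smaller = better) with per-class correlation coefficients
# (larger = better), balanced by a regularization parameter lam.
def fused_decision(residuals, correlations, lam=0.5):
    residuals = np.asarray(residuals, dtype=float)
    correlations = np.asarray(correlations, dtype=float)
    # Normalize both cues to [0, 1] so the trade-off is scale-free.
    r = (residuals - residuals.min()) / (np.ptp(residuals) + 1e-12)
    c = (correlations - correlations.min()) / (np.ptp(correlations) + 1e-12)
    score = lam * r + (1 - lam) * (1 - c)       # small score = likely class
    return int(np.argmin(score))

# Three candidate classes: class 1 has the lowest residual and highest CC.
pred = fused_decision([0.9, 0.4, 0.8], [0.2, 0.7, 0.3])
```

Setting `lam=1.0` recovers a pure JSR decision and `lam=0.0` a pure CC decision, mirroring the role of the letter's regularization parameter.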