
Sparse approximation

About: Sparse approximation is a research topic concerned with representing a signal as a linear combination of only a few atoms from a (typically overcomplete) dictionary. Over its lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.
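Concretely, sparse approximation seeks a coefficient vector x with few nonzeros such that Dx ≈ y for a dictionary D. A minimal greedy solver for this problem, orthogonal matching pursuit (OMP), can be sketched as follows (an illustrative implementation; the function name and details are this sketch's own):

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y as a
    k-sparse combination of the columns (atoms) of dictionary D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit coefficients on the chosen support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x
```

For an orthonormal dictionary, OMP recovers an exactly k-sparse signal in k steps; for general overcomplete dictionaries, recovery holds only under coherence conditions.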


Papers
Journal ArticleDOI
TL;DR: A novel approach called joint sparse principal component analysis (JSPCA) is proposed to jointly select useful features and improve robustness to outliers; experimental results demonstrate that the proposed approach is feasible and effective.

174 citations

Journal ArticleDOI
TL;DR: This work presents a family of iterative least-squares-based dictionary learning algorithms (ILS-DLA), including algorithms for designing signal-dependent block-based dictionaries and overlapping dictionaries, as generalizations of transforms and filter banks, respectively.

174 citations
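The least-squares dictionary update at the heart of ILS-DLA (closely related to the Method of Optimal Directions) admits a closed form: with codes fixed, minimizing ||Y - DC||_F² over D is an ordinary least-squares problem. A sketch, assuming a small ridge term for numerical stability (the function name and regularization are this sketch's own, not the paper's):

```python
import numpy as np

def ils_dictionary_update(Y, C, reg=1e-8):
    """One iterative-least-squares update: given training signals Y
    (d x n) and current sparse codes C (K x n), solve
    min_D ||Y - D C||_F^2 in closed form, then renormalize atoms."""
    G = C @ C.T + reg * np.eye(C.shape[0])  # ridge term for stability
    D = Y @ C.T @ np.linalg.inv(G)
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # unit-norm atoms
    return D
```

A full ILS-DLA iteration would alternate this update with a sparse-coding step (e.g., OMP) until convergence.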

Proceedings Article
25 Jul 2015
TL;DR: This paper proposes that the affinity between pairs of samples can be computed using sparse representation with proper constraints, and provides an efficient algorithm, built on improvements to existing algorithms, to solve the resulting optimization problem.
Abstract: The Laplacian matrix of a graph can be used in many areas of mathematical research and has a physical interpretation in various theories. However, there are a few open issues in Laplacian graph construction: (i) selecting the appropriate scale of analysis, (ii) selecting the appropriate number of neighbors, (iii) handling multiscale data, and (iv) dealing with noise and outliers. In this paper, we propose that the affinity between pairs of samples can be computed using sparse representation with proper constraints. This parameter-free setting automatically produces the Laplacian graph, significantly reduces computation cost, and improves robustness to outliers and noise. We further provide an efficient algorithm, built on improvements to existing algorithms, to solve the difficult optimization problem. To demonstrate our motivation, we conduct spectral clustering experiments against benchmark methods. Empirical experiments on 9 data sets demonstrate the effectiveness of our method.

173 citations
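The general idea — coding each sample over the remaining samples and using coefficient magnitudes as graph affinities — can be sketched as below. This is an illustrative reconstruction assuming a greedy k-atom coder for the sparse step; the paper's actual solver, constraints, and parameter-free formulation differ:

```python
import numpy as np

def sparse_affinity_graph(X, k=3):
    """Build an affinity matrix by sparsely coding each sample (row of X)
    over the remaining samples (greedy OMP with k atoms), then symmetrize.
    Returns (W, L), where L is the unnormalized graph Laplacian."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        others = [j for j in range(n) if j != i]
        D = X[others].T                 # columns = the other samples
        y = X[i]
        residual = y.copy()
        support = []
        coef = np.zeros(0)
        for _ in range(k):
            # greedily pick the most correlated remaining sample
            c = int(np.argmax(np.abs(D.T @ residual)))
            if c not in support:
                support.append(c)
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        for s, a in zip(support, coef):
            W[i, others[s]] = abs(a)    # coefficient magnitude as affinity
    W = (W + W.T) / 2                   # symmetrize the affinities
    L = np.diag(W.sum(axis=1)) - W      # unnormalized Laplacian
    return W, L
```

The resulting L is symmetric positive semidefinite with zero row sums, so it can be fed directly into standard spectral clustering.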

Proceedings ArticleDOI
31 May 2014
TL;DR: A cost-sensitive discriminative dictionary learning (CDDL) approach for software defect classification and prediction, which outperforms several representative state-of-the-art defect prediction methods.
Abstract: In order to improve the quality of a software system, software defect prediction aims to automatically identify defective software modules for efficient software testing. To predict software defects, classification methods based on static code attributes have attracted a great deal of attention. In recent years, machine learning techniques have been applied to defect prediction. Because similarity exists among different software modules, one software module can be approximately represented by a small proportion of other modules, and the representation coefficients over the pre-defined dictionary, which consists of historical software module data, are generally sparse. In this paper, we propose to use dictionary learning to predict software defects. Using the characteristics of the metrics mined from open source software, we learn multiple dictionaries (defective-module and defective-free-module sub-dictionaries and the total dictionary) and sparse representation coefficients. Moreover, we take misclassification cost into account, because misclassifying a defective module generally incurs a much higher risk cost than misclassifying a defective-free one. We thus propose a cost-sensitive discriminative dictionary learning (CDDL) approach for software defect classification and prediction. The widely used datasets from NASA projects are employed as test data to evaluate the performance of all compared methods. Experimental results show that CDDL outperforms several representative state-of-the-art defect prediction methods.

172 citations
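The reconstruction-error decision rule underlying this kind of per-class dictionary classifier can be sketched as follows. This is a simplified stand-in: the cost weights here only rescale residuals at test time, whereas CDDL builds cost sensitivity into dictionary training itself, and the function name and greedy coder are this sketch's own:

```python
import numpy as np

def src_classify(D_classes, y, k=2, costs=None):
    """Sparse-representation classification: code y over each class
    sub-dictionary and pick the class with the smallest (optionally
    cost-weighted) reconstruction error."""
    costs = costs or [1.0] * len(D_classes)
    errs = []
    for D, c in zip(D_classes, costs):
        residual = y.copy()
        support = []
        coef = np.zeros(0)
        for _ in range(min(k, D.shape[1])):
            # greedy k-atom sparse coding over this class's atoms
            j = int(np.argmax(np.abs(D.T @ residual)))
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        errs.append(c * np.linalg.norm(residual))  # cost-weighted error
    return int(np.argmin(errs))
```

Raising the cost weight for the defective class makes its residual look larger, biasing predictions away from the expensive error of missing a defective module.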

Proceedings ArticleDOI
06 Nov 2011
TL;DR: This work unifies the class distribution and appearance information into an objective function for learning a sparse dictionary of action attributes, and proposes a Gaussian Process (GP) model for sparse representation to optimize the dictionary objective function.
Abstract: We present an approach for dictionary learning of action attributes via information maximization. We unify the class distribution and appearance information into an objective function for learning a sparse dictionary of action attributes. The objective function maximizes the mutual information between what has been learned and what remains to be learned in terms of appearance information and class distribution for each dictionary item. We propose a Gaussian Process (GP) model for sparse representation to optimize the dictionary objective function. The sparse coding property allows a kernel with a compact support in GP to realize a very efficient dictionary learning process. Hence we can describe an action video by a set of compact and discriminative action attributes. More importantly, we can recognize modeled action categories in a sparse feature space, which can be generalized to unseen and unmodeled action categories. Experimental results demonstrate the effectiveness of our approach in action recognition applications.

172 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 93% related
Image segmentation: 79.6K papers, 1.8M citations, 92% related
Convolutional neural network: 74.7K papers, 2M citations, 92% related
Deep learning: 79.8K papers, 2.1M citations, 90% related
Image processing: 229.9K papers, 3.5M citations, 89% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371