Topic

Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: A self-paced joint sparse representation (SPJSR) model is proposed for the classification of hyperspectral images (HSIs) and is more accurate and robust than existing JSR methods, especially in the case of heavy noise.
Abstract: In this paper, a self-paced joint sparse representation (SPJSR) model is proposed for the classification of hyperspectral images (HSIs). It replaces the least-squares (LS) loss in the standard joint sparse representation (JSR) model with a weighted LS loss and adopts a self-paced learning (SPL) strategy to learn the weights for neighboring pixels. Rather than predefining a weight vector as in existing weighted JSR methods, both the weight and sparse representation (SR) coefficient associated with neighboring pixels are optimized by an alternating iterative strategy. According to the nature of SPL, in each iteration, neighboring pixels with nonzero weights (i.e., easy pixels) are included for the joint SR of a testing pixel. With the increase of iterations, the model size (i.e., the number of selected neighboring pixels) is enlarged and more neighboring pixels from easy to complex are gradually added into the JSR learning process. After several iterations, the algorithm can be terminated to produce a desirable model that includes easy homogeneous pixels and excludes complex inhomogeneous pixels. Experimental results on two benchmark hyperspectral data sets demonstrate that our proposed SPJSR is more accurate and robust than existing JSR methods, especially in the case of heavy noise.

103 citations
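
A minimal numerical sketch of the alternating scheme described in the abstract above, assuming hard (0/1) self-paced weights, a simultaneous-OMP coder for the joint sparse step, and a threshold that grows each pass to admit harder neighboring pixels; the function names (somp, spjsr) and parameters (k, lam, mu) are illustrative placeholders, not the authors' implementation:

```python
import numpy as np

def somp(Y, D, k):
    """Simultaneous OMP: one shared support of size k for all columns of Y."""
    residual = Y.copy()
    support = []
    X_s = np.zeros((0, Y.shape[1]))
    for _ in range(k):
        corr = np.linalg.norm(D.T @ residual, axis=1)   # joint correlation of each atom
        corr[support] = -np.inf                         # do not reselect atoms
        support.append(int(np.argmax(corr)))
        Ds = D[:, support]
        X_s, *_ = np.linalg.lstsq(Ds, Y, rcond=None)    # refit on the current support
        residual = Y - Ds @ X_s
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[support, :] = X_s
    return X

def spjsr(Y, D, k=5, n_iters=4, lam=0.5, mu=2.0):
    """Self-paced joint sparse coding of a neighborhood Y (bands x pixels):
    alternately code the currently 'easy' pixels and relax the threshold."""
    include = np.ones(Y.shape[1], dtype=bool)            # hard (0/1) self-paced weights
    for _ in range(n_iters):
        X_easy = somp(Y[:, include], D, k)               # joint SR of the easy pixels
        support = np.flatnonzero(np.any(X_easy != 0, axis=1))
        X = np.zeros((D.shape[1], Y.shape[1]))
        X[support, :], *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        loss = np.sum((Y - D @ X) ** 2, axis=0)          # per-pixel weighted-LS loss
        include = loss <= max(lam, loss.min())           # keep at least the best-fitting pixel
        lam *= mu                                        # enlarge the model each iteration
    return X, include

# toy usage: a random dictionary and a small neighborhood of noisy pixels
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 200)); D /= np.linalg.norm(D, axis=0)
Y = D[:, :5] @ rng.standard_normal((5, 9)) + 0.05 * rng.standard_normal((50, 9))
X, kept = spjsr(Y, D)
print("neighboring pixels kept:", int(kept.sum()), "of", Y.shape[1])
```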

Journal ArticleDOI
TL;DR: This lecture note describes the development of image prior modeling and learning techniques, including sparse representation models, low-rank models, and deep learning models.
Abstract: The use of digital imaging devices, ranging from professional digital cinema cameras to consumer grade smartphone cameras, has become ubiquitous. The acquired image is a degraded observation of the unknown latent image, while the degradation comes from various factors such as noise corruption, camera shake, object motion, resolution limit, hazing, rain streaks, or a combination of them. Image restoration (IR), as a fundamental problem in image processing and low-level vision, aims to reconstruct the latent high-quality image from its degraded observation. Image degradation is, in general, irreversible, and IR is a typical ill-posed inverse problem. Due to the large space of natural image contents, prior information on image structures is crucial to regularize the solution space and produce a good estimation of the latent image. Image prior modeling and learning then are key issues in IR research. This lecture note describes the development of image prior modeling and learning techniques, including sparse representation models, low-rank models, and deep learning models.

103 citations
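
As a concrete, minimal instance of the MAP formulation with a sparse-representation prior (not any specific method from the lecture note), the sketch below solves min_a 0.5*||y - D a||^2 + lam*||a||_1 with ISTA; the dictionary D, the number of iterations, and lam are illustrative placeholders:

```python
import numpy as np

def ista(y, D, lam=0.05, n_iters=300):
    """ISTA for the sparse-prior MAP problem  min_a  0.5*||y - D a||^2 + lam*||a||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2               # 1 / Lipschitz constant of the data term
    a = np.zeros(D.shape[1])
    for _ in range(n_iters):
        a = a - step * (D.T @ (D @ a - y))               # gradient step on the data-fit term
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)   # soft threshold (the l1 prior)
    return a

# toy usage: recover a sparse code from a noisy observation under a random dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128)); D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(128); a_true[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
y = D @ a_true + 0.02 * rng.standard_normal(64)
a_hat = ista(y, D)
print("recovered nonzeros:", int(np.count_nonzero(np.abs(a_hat) > 1e-3)))
```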

Journal ArticleDOI
TL;DR: Experimental results show that the proposed ECG signal representation using the sparse decomposition technique with a PSO-optimized least-squares twin SVM (the best classifier among k-NN, PNN, and RBFNN) achieves higher classification accuracy than existing state-of-the-art diagnosis methods.
Abstract: As per the report of the World Health Organization (WHO), mortalities due to cardiovascular diseases (CVDs) have increased to 50 million worldwide. Therefore, it is essential to have an efficient diagnosis of CVDs to enhance healthcare in the clinical cardiovascular domain. ECG signal analysis of a patient is a very popular tool for the diagnosis of CVDs. However, due to the non-stationary nature of the ECG signal and the higher computational burden of existing signal processing methods, automated and efficient diagnosis remains a challenge. This paper presents a new feature extraction method using the sparse representation technique to efficiently represent different ECG signals for efficient analysis. The sparse method decomposes an ECG signal into elementary waves using an overcomplete Gabor dictionary. Four features, namely time delay, frequency, width parameter, and the square of the expansion coefficient, are extracted from each of the significant atoms of the dictionary. These features are concatenated and analyzed to determine the optimal length of the discriminative feature vector representing each ECG signal. The extracted features representing the ECG signals are then classified using machine learning techniques such as the least-squares twin SVM, k-NN, PNN, and RBFNN. Further, the learning parameters of the classifiers are optimized using ABC and PSO techniques. The experiments are carried out for the proposed methods (i.e. feature extraction along with all classifiers) using the benchmark MIT-BIH data and evaluated under category and personalized analysis schemes. Experimental results show that the proposed ECG signal representation using the sparse decomposition technique with a PSO-optimized least-squares twin SVM (the best classifier among k-NN, PNN, and RBFNN) achieves higher classification accuracy, 99.11% in the category scheme and 89.93% in the personalized scheme, than existing state-of-the-art diagnosis methods.

103 citations
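
A minimal sketch of the decomposition stage described above: greedy matching pursuit over a small overcomplete Gabor dictionary, returning the four per-atom features named in the abstract (time delay, frequency, width, squared expansion coefficient). The dictionary sampling, the simplified atom parameterization (no phase term), and the function names are assumptions; the classifier and parameter-optimization stages are omitted:

```python
import numpy as np

def gabor_dictionary(n, widths, freqs, centers):
    """Overcomplete dictionary of Gaussian-windowed cosines (simplified Gabor atoms)."""
    t = np.arange(n)
    atoms, params = [], []
    for s in widths:
        for f in freqs:
            for u in centers:
                g = np.exp(-0.5 * ((t - u) / s) ** 2) * np.cos(2 * np.pi * f * t)
                atoms.append(g / (np.linalg.norm(g) + 1e-12))
                params.append((u, f, s))                 # time delay, frequency, width
    return np.array(atoms).T, params

def matching_pursuit(x, D, n_atoms=4):
    """Greedy sparse decomposition; returns (atom index, expansion coefficient) pairs."""
    r = x.astype(float).copy()
    picks = []
    for _ in range(n_atoms):
        c = D.T @ r
        j = int(np.argmax(np.abs(c)))
        picks.append((j, c[j]))
        r = r - c[j] * D[:, j]                           # remove the selected atom's contribution
    return picks

def atom_features(x, D, params, n_atoms=4):
    """Concatenate (time delay, frequency, width, coefficient^2) for each significant atom."""
    feats = []
    for j, coef in matching_pursuit(x, D, n_atoms):
        u, f, s = params[j]
        feats.extend([u, f, s, coef ** 2])
    return np.array(feats)

# toy usage on a synthetic beat-like waveform
n = 256
D, params = gabor_dictionary(n, widths=[4, 8, 16], freqs=[0.01, 0.03, 0.06],
                             centers=range(0, n, 16))
t = np.arange(n)
x = np.exp(-0.5 * ((t - 128) / 8) ** 2) * np.cos(2 * np.pi * 0.03 * t)
print(atom_features(x, D, params).round(3))
```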

Journal ArticleDOI
TL;DR: In this article, a new transient feature extraction technique is proposed for gearbox fault diagnosis based on sparse representation in a wavelet basis, which can extract both the impulse time and the period of transients.

103 citations
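
The TL;DR above only names the technique, but the core idea, a sparse approximation of the vibration signal in a wavelet basis that keeps the transient-bearing coefficients, can be sketched as follows. This assumes the PyWavelets package (pywt) for the wavelet transform; the wavelet choice, the kept-coefficient fraction, and the crude peak-picking for impulse times and period are illustrative assumptions, not the paper's method:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_sparse_approx(x, wavelet='db4', level=5, keep=0.02):
    """Keep only the largest `keep` fraction of wavelet coefficients (a k-sparse
    approximation); the reconstruction retains transients and suppresses noise."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    mags = np.concatenate([np.abs(c) for c in coeffs])
    thr = np.sort(mags)[-max(1, int(keep * mags.size))]  # k-th largest magnitude
    sparse = [np.where(np.abs(c) >= thr, c, 0.0) for c in coeffs]
    return pywt.waverec(sparse, wavelet)[:len(x)]

def impulse_times(y, fs, min_gap_s=0.05):
    """Crude impulse localization: onsets of strong bursts in the sparse reconstruction."""
    strong = np.flatnonzero(np.abs(y) > 0.4 * np.abs(y).max())
    times, last = [], -np.inf
    for i in strong:
        if i / fs - last >= min_gap_s:
            times.append(i / fs)
            last = i / fs
    return np.array(times)

# toy usage: periodic fault impulses buried in noise; the period is their mean spacing
fs = 2000
t = np.arange(0, 2.0, 1 / fs)
x = 0.1 * np.random.default_rng(0).standard_normal(t.size)
for t0 in np.arange(0.2, 1.9, 0.25):                     # an impulse every 0.25 s
    x += np.cos(2 * np.pi * 300 * (t - t0)) * np.exp(-200 * np.clip(t - t0, 0, None)) * (t >= t0)
times = impulse_times(wavelet_sparse_approx(x), fs)
print("impulse times (s):", times.round(3))
print("estimated period (s):", round(float(np.diff(times).mean()), 3))
```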

Journal ArticleDOI
TL;DR: Experiments demonstrate that the proposed image-deblocking algorithm combining SSR and QC outperforms the current state-of-the-art methods in both peak signal-to-noise ratio and visual perception.
Abstract: The block discrete cosine transform (BDCT) has been widely used in current image and video coding standards, owing to its good energy compaction and decorrelation properties. However, because of independent quantization of DCT coefficients in each block, BDCT usually gives rise to visually annoying blocking compression artifacts, especially at low bit rates. In this paper, to reduce blocking artifacts and obtain high-quality images, image deblocking is cast as an optimization problem within maximum a posteriori framework, and a novel algorithm for image deblocking by using structural sparse representation (SSR) prior and quantization constraint (QC) prior is proposed. The SSR prior is utilized to simultaneously enforce the intrinsic local sparsity and the nonlocal self-similarity of natural images, while QC is explicitly incorporated to ensure a more reliable and robust estimation. A new split Bregman iteration-based method with an adaptively adjusted regularization parameter is developed to solve the proposed optimization problem, which makes the entire algorithm more practical. Experiments demonstrate that the proposed image-deblocking algorithm combining SSR and QC outperforms the current state-of-the-art methods in both peak signal-to-noise ratio and visual perception.

103 citations
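
The SSR prior and the split Bregman solver do not reduce to a few lines, but the quantization constraint (QC) has a compact form: each block-DCT coefficient of the estimate must lie in the interval implied by the decoded coefficient and the quantization step. The numpy sketch of that projection step below is illustrative only; the block size, the quantization step, and the toy data are assumptions, not the paper's implementation:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix, the transform used by BDCT codecs."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def project_qc(img, dequantized, Q, block=8):
    """Quantization-constraint projection: clip each block-DCT coefficient of the
    current estimate into the cell implied by the decoded coefficient and step Q."""
    C = dct_matrix(block)
    out = img.copy()
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            b = C @ img[i:i+block, j:j+block] @ C.T       # forward block DCT
            ref = dequantized[i:i+block, j:j+block]
            b = np.clip(b, ref - Q / 2, ref + Q / 2)      # stay inside the quantization cell
            out[i:i+block, j:j+block] = C.T @ b @ C       # inverse block DCT
    return out

# toy usage: a smooth image, its dequantized BDCT coefficients, and one QC projection
rng = np.random.default_rng(0)
x = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64)) * 255
C = dct_matrix()
Q = 32.0
deq = np.zeros_like(x)
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        deq[i:i+8, j:j+8] = np.round((C @ x[i:i+8, j:j+8] @ C.T) / Q) * Q   # decoder-side coefficients
estimate = x + 5 * rng.standard_normal(x.shape)           # stand-in for the SSR-denoised estimate
print("mean abs error after QC projection:", round(float(np.abs(project_qc(estimate, deq, Q) - x).mean()), 3))
```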


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 93% related
Image segmentation: 79.6K papers, 1.8M citations, 92% related
Convolutional neural network: 74.7K papers, 2M citations, 92% related
Deep learning: 79.8K papers, 2.1M citations, 90% related
Image processing: 229.9K papers, 3.5M citations, 89% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371