Topic

Approximation algorithm

About: Approximation algorithm is a research topic. Over the lifetime, 23912 publications have been published within this topic receiving 654311 citations.


Papers
Book
01 Jan 2004
TL;DR: This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system, and shows that similar stability is also available using the basis pursuit and matching pursuit algorithms.
Abstract: Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis pursuit and matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.
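For concreteness, here is a minimal sketch of the greedy matching-pursuit idea the abstract refers to, written in NumPy. The dictionary, noise level eps, and residual-norm stopping rule are illustrative assumptions for demonstration, not the paper's exact algorithm or analysis.

```python
# Hypothetical sketch of matching pursuit over an overcomplete dictionary D
# (columns assumed unit-norm), stopping once the residual falls to an assumed
# noise level eps.
import numpy as np

def matching_pursuit(D, y, eps, max_iter=100):
    """Greedy sparse approximation of y over dictionary D (n x m)."""
    n, m = D.shape
    coeffs = np.zeros(m)
    residual = y.copy()
    for _ in range(max_iter):
        if np.linalg.norm(residual) <= eps:    # stop at the noise level
            break
        correlations = D.T @ residual          # match residual against atoms
        k = np.argmax(np.abs(correlations))    # best-correlated atom
        coeffs[k] += correlations[k]           # update its coefficient
        residual -= correlations[k] * D[:, k]  # remove its contribution
    return coeffs

# Example: a random 20 x 50 dictionary, a 3-sparse signal plus small noise.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(50); x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]
y = D @ x_true + 0.01 * rng.standard_normal(20)
x_hat = matching_pursuit(D, y, eps=0.05)
```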

2,365 citations

Journal ArticleDOI
TL;DR: The paper presents an SA algorithm that is based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures; the algorithm can be significantly more efficient than the standard algorithms in large-dimensional problems.
Abstract: The problem of finding a root of the multivariate gradient equation that arises in function minimization is considered. When only noisy measurements of the function are available, a stochastic approximation (SA) algorithm of the general Kiefer-Wolfowitz type is appropriate for estimating the root. The paper presents an SA algorithm that is based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures. Theory and numerical experience indicate that the algorithm can be significantly more efficient than the standard algorithms in large-dimensional problems.
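A minimal sketch of the simultaneous perturbation gradient estimate described above: only two noisy function evaluations are needed per step, regardless of dimension. The gain sequences, decay exponents, and test function below are illustrative choices, not the paper's prescriptions.

```python
import numpy as np

def spsa_minimize(f, theta0, n_iter=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101):
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        a_k = a / k**alpha
        c_k = c / k**gamma
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
        # Simultaneous perturbation gradient approximation: two evaluations only.
        g_hat = (f(theta + c_k * delta) - f(theta - c_k * delta)) / (2.0 * c_k * delta)
        theta -= a_k * g_hat
    return theta

# Example: minimize a noisy quadratic in 10 dimensions.
noisy_quadratic = lambda x: np.sum(x**2) + 0.01 * np.random.randn()
print(spsa_minimize(noisy_quadratic, np.ones(10)))
```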

2,149 citations

Journal ArticleDOI
TL;DR: Optimal inapproximability results, up to an arbitrary ε > 0, are proved for Max-Ek-Sat for k ≥ 3, for maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p, and for Set Splitting.
Abstract: We prove optimal, up to an arbitrary ε > 0, inapproximability results for Max-Ek-Sat for k ≥ 3, for maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p, and for Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously, in particular Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex Cover.
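The optimal threshold for Max-E3-Sat in these results is matched by the trivial randomized baseline: a uniformly random assignment satisfies each 3-literal clause with probability 1 - (1/2)^3 = 7/8. The small sketch below checks this empirically on random instances; the clause encoding and instance sizes are illustrative assumptions.

```python
import random

def random_assignment_ratio(clauses, n_vars, trials=1000):
    """Average fraction of clauses satisfied by uniformly random assignments."""
    total = 0.0
    for _ in range(trials):
        assign = [random.random() < 0.5 for _ in range(n_vars)]
        sat = sum(any(assign[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses)
        total += sat / len(clauses)
    return total / trials

# 200 random E3 clauses over 50 variables (literals as +/- 1-based indices).
n_vars = 50
clauses = [[random.choice([-1, 1]) * v for v in random.sample(range(1, n_vars + 1), 3)]
           for _ in range(200)]
print(random_assignment_ratio(clauses, n_vars))  # close to 7/8 = 0.875
```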

1,938 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: Experimental results clearly show that the proposed WNNM algorithm outperforms many state-of-the-art denoising algorithms such as BM3D in terms of both quantitative measure and visual perception quality.
Abstract: As a convex relaxation of the low rank matrix factorization problem, the nuclear norm minimization has been attracting significant research interest in recent years. The standard nuclear norm minimization regularizes each singular value equally to pursue the convexity of the objective function. However, this greatly restricts its capability and flexibility in dealing with many practical problems (e.g., denoising), where the singular values have clear physical meanings and should be treated differently. In this paper we study the weighted nuclear norm minimization (WNNM) problem, where the singular values are assigned different weights. The solutions of the WNNM problem are analyzed under different weighting conditions. We then apply the proposed WNNM algorithm to image denoising by exploiting the image nonlocal self-similarity. Experimental results clearly show that the proposed WNNM algorithm outperforms many state-of-the-art denoising algorithms such as BM3D in terms of both quantitative measure and visual perception quality.
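A minimal sketch of the weighted singular-value thresholding step that WNNM-style denoising builds on: each singular value is shrunk by its own weight, rather than by a single constant as in standard nuclear norm minimization. The weighting scheme below is an illustrative choice (heavier shrinkage of smaller singular values), not the paper's exact weights or iteration.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Shrink the singular values of Y by per-value weights (soft-thresholding)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)   # weighted soft-thresholding
    return U @ np.diag(s_shrunk) @ Vt

# Example: a noisy low-rank patch matrix; penalize small singular values more.
rng = np.random.default_rng(0)
Y = rng.standard_normal((64, 10)) @ rng.standard_normal((10, 40))
Y += 0.1 * rng.standard_normal(Y.shape)
s = np.linalg.svd(Y, compute_uv=False)
weights = 1.0 / (s + 1e-6)                    # illustrative reweighting
X_hat = weighted_svt(Y, weights)
```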

1,876 citations

Book
01 Jan 1966
TL;DR: This classic text develops Tchebycheff approximation by polynomials and other linear families, the Tchebycheff solution of inconsistent linear equations, least-squares approximation, and rational (Padé) approximation.
Abstract: Contents:
Introduction: 1. Examples and prospectus 2. Metric spaces 3. Normed linear spaces 4. Inner-product spaces 5. Convexity 6. Existence and unicity of best approximations 7. Convex functions.
The Tchebycheff Solution of Inconsistent Linear Equations: 1. Introduction 2. Systems of equations with one unknown 3. Characterization of the solution 4. The special case 5. Polya's algorithm 6. The ascent algorithm 7. The descent algorithm 8. Convex programming.
Tchebycheff Approximation by Polynomials and Other Linear Families: 1. Introduction 2. Interpolation 3. The Weierstrass theorem 4. General linear families 5. The unicity problem 6. Discretization errors: general theory 7. Discretization: algebraic polynomials; the inequalities of Markoff and Bernstein 8. Algorithms.
Least-squares Approximation and Related Topics: 1. Introduction 2. Orthogonal systems of polynomials 3. Convergence of orthogonal expansions 4. Approximation by series of Tchebycheff polynomials 5. Discrete least-squares approximation 6. The Jackson theorems.
Rational Approximation: 1. Introduction 2. Existence of best rational approximations 3. The characterization of best approximations 4. Unicity; continuity of best-approximation operators 5. Algorithms 6. Padé approximation and its generalizations 7. Continued fractions.
Some Additional Topics: 1. The Stone approximation theorem 2. The Muntz theorem 3. The converses of the Jackson theorems 4. Polygonal approximation and bases in C[a, b] 5. The Kharshiladze-Lozinski theorems 6. Approximation in the mean.
Notes. References. Index.
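A small illustration of two topics from the contents above, discrete least-squares approximation and approximation by series of Tchebycheff (Chebyshev) polynomials, using NumPy's Chebyshev utilities. The target function and degree are arbitrary choices for demonstration, not taken from the book.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

x = np.linspace(-1.0, 1.0, 200)
f = np.exp(x) * np.sin(3 * x)        # function to approximate

coeffs = C.chebfit(x, f, deg=8)      # discrete least-squares Chebyshev fit
approx = C.chebval(x, coeffs)        # evaluate the Chebyshev series

print("max error:", np.max(np.abs(f - approx)))
```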

1,854 citations


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations (88% related)
Graph (abstract data type): 69.9K papers, 1.2M citations (88% related)
Scheduling (computing): 78.6K papers, 1.3M citations (87% related)
Upper and lower bounds: 56.9K papers, 1.1M citations (87% related)
Server: 79.5K papers, 1.4M citations (86% related)
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2024    1
2023    166
2022    419
2021    1,229
2020    1,375
2019    1,274