Journal ISSN: 1069-5869

Journal of Fourier Analysis and Applications 

Springer Science+Business Media
About: Journal of Fourier Analysis and Applications is an academic journal published by Springer Science+Business Media. The journal publishes mainly in the areas of Fourier analysis and partial differential equations. Its ISSN is 1069-5869. Over its lifetime, 1397 papers have been published, receiving 42352 citations. The journal is also known as: Journal of fourier analysis and applications & Fourier analysis and applications.


Papers
Journal Article
TL;DR: A novel method for sparse signal recovery is studied that in many situations outperforms ℓ1 minimization, in the sense that substantially fewer measurements are needed for exact recovery.
Abstract: It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained l1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms l1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted l1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the l1 norm of the coefficient sequence as is common, but by reweighting the l1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as Compressive Sensing.
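The reweighted scheme lends itself to a compact implementation. The sketch below is a minimal illustration under stated assumptions, not the authors' code: each weighted l1 problem min Σ w_i|x_i| subject to Ax = b is cast as a linear program via scipy.optimize.linprog, and the weights for the next pass are set to 1/(|x_i| + eps). The function names, the matrix A, the measurements b, and the parameters eps and n_iter are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    """Solve min sum_i w_i*|x_i| s.t. Ax = b as an LP in variables (x, t) with |x_i| <= t_i."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])                      # objective: sum_i w_i * t_i
    A_eq = np.hstack([A, np.zeros((m, n))])                   # equality constraint Ax = b
    A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),     #  x - t <= 0
                      np.hstack([-np.eye(n), -np.eye(n)])])   # -x - t <= 0
    bounds = [(None, None)] * n + [(0, None)] * n             # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                  A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")
    return res.x[:n]

def reweighted_l1(A, b, n_iter=5, eps=0.1):
    """Iteratively reweighted l1: weights grow where the current solution is small."""
    w = np.ones(A.shape[1])
    for _ in range(n_iter):
        x = weighted_l1_min(A, b, w)
        w = 1.0 / (np.abs(x) + eps)   # emphasize small entries in the next pass
    return x
```

On a random measurement matrix with a sparse ground truth, a handful of reweighting passes typically sharpens the support estimate relative to a single unweighted l1 solve, which is the behavior the abstract reports.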

4,869 citations

Book Chapter
TL;DR: In this paper, a self-contained derivation of the lifting factorization is presented, built from basic principles such as the Euclidean algorithm, with a focus on applying it to wavelet filtering; the factorization asymptotically reduces the computational complexity of the transform by a factor of two.
Abstract: This article is essentially tutorial in nature. We show how any discrete wavelet transform or two-band subband filtering with finite filters can be decomposed into a finite sequence of simple filtering steps, which we call lifting steps but which are also known as ladder structures. This decomposition corresponds to a factorization of the polyphase matrix of the wavelet or subband filters into elementary matrices. That such a factorization is possible is well known to algebraists (and expressed by the formula SL(n; R[z, z⁻¹]) = E(n; R[z, z⁻¹])); it is also used in linear systems theory in the electrical engineering community. We present here a self-contained derivation, building the decomposition from basic principles such as the Euclidean algorithm, with a focus on applying it to wavelet filtering. This factorization provides an alternative to the lattice factorization, with the advantage that it can also be used in the biorthogonal, i.e., non-unitary case. Like the lattice factorization, the decomposition presented here asymptotically reduces the computational complexity of the transform by a factor of two. It has other applications, such as the possibility of defining a wavelet-like transform that maps integers to integers.
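To make the notion of lifting steps concrete, the sketch below (an illustration, not code from the chapter) implements the simplest case, the Haar filter: the signal is split into even and odd polyphase components, a predict step forms the detail, and an update step forms the coarse approximation. The function names and the assumption of an even-length input are mine.

```python
import numpy as np

def haar_lifting_forward(x):
    """Forward Haar transform via lifting; assumes len(x) is even."""
    s = x[0::2].astype(float)   # even polyphase component
    d = x[1::2].astype(float)   # odd polyphase component
    d -= s                      # predict: detail = odd - even
    s += d / 2                  # update: coarse = (even + odd) / 2
    return s, d

def haar_lifting_inverse(s, d):
    """Invert by undoing the lifting steps in reverse order."""
    s = s - d / 2
    d = d + s
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = s, d
    return x
```

Each step is invertible simply by running it backwards with the sign flipped, which is the structural point of the factorization; rounding the update term (e.g. using integer division on d) is one way to obtain the integer-to-integer variant mentioned at the end of the abstract.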

2,357 citations

Journal Article
TL;DR: This paper studies two iterative algorithms that minimise the cost functions of interest. The algorithms are also adapted, and one example shows that this adaptation can achieve results lying between those obtained with Matching Pursuit and those found with Orthogonal Matching Pursuit, while retaining the computational complexity of the Matching Pursuit algorithm.
Abstract: Sparse signal expansions represent or approximate a signal using a small number of elements from a large collection of elementary waveforms. Finding the optimal sparse expansion is known to be NP-hard in general, and non-optimal strategies such as Matching Pursuit, Orthogonal Matching Pursuit, Basis Pursuit and Basis Pursuit De-noising are often called upon. These methods show good performance in practical situations; however, they do not operate on the l0 penalised cost functions that are often at the heart of the problem. In this paper we study two iterative algorithms that minimise the cost functions of interest. Furthermore, each iteration of these strategies has computational complexity similar to a Matching Pursuit iteration, making the methods applicable to many real world problems. However, the optimisation problem is non-convex and the strategies are only guaranteed to find local solutions, so good initialisation becomes paramount. Here we study two approaches. The first approach uses the proposed algorithms to refine the solutions found with other methods, replacing the typically used conjugate gradient solver. The second strategy adapts the algorithms, and we show on one example that this adaptation can be used to achieve results that lie between those obtained with Matching Pursuit and those found with Orthogonal Matching Pursuit, while retaining the computational complexity of the Matching Pursuit algorithm.
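The abstract does not spell out the update rules, so the sketch below shows one well-known iteration in the spirit it describes: an l0-constrained scheme whose per-iteration cost is comparable to a Matching Pursuit step, namely a gradient step on the residual followed by hard thresholding to the k largest coefficients. The function name, the dictionary A, the measurements y, the sparsity level k, and the step-size choice are illustrative assumptions, not the paper's exact algorithms.

```python
import numpy as np

def iterative_hard_thresholding(A, y, k, n_iter=200, step=None):
    """Sketch of an l0-constrained iteration: gradient step, then keep the k largest entries."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step from the spectral norm
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)         # gradient step on ||y - Ax||^2 / 2
        idx = np.argsort(np.abs(x))[:-k]         # indices of all but the k largest magnitudes
        x[idx] = 0.0                             # hard threshold to sparsity k
    return x
```

Each iteration costs one multiplication by A and one by its transpose, which matches the Matching-Pursuit-like per-iteration complexity the abstract emphasises; as the abstract warns, the result depends on initialisation because the problem is non-convex.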

1,246 citations

Journal Article
TL;DR: The authors survey various mathematical aspects of the uncertainty principle, including Heisenberg's inequality and its variants, local uncertainty inequalities, logarithmic uncertainty inequalities, results relating to Wigner distributions, qualitative uncertainty principles, theorems on approximate concentration, and decompositions of phase space.
Abstract: We survey various mathematical aspects of the uncertainty principle, including Heisenberg’s inequality and its variants, local uncertainty inequalities, logarithmic uncertainty inequalities, results relating to Wigner distributions, qualitative uncertainty principles, theorems on approximate concentration, and decompositions of phase space.
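For reference, the classical Heisenberg inequality that anchors the survey can be stated as follows. The normalization of the Fourier transform below, $\hat f(\xi)=\int f(x)\,e^{-2\pi i x\xi}\,dx$, is an assumption on my part; the constant on the right changes with the convention used.

```latex
% Heisenberg's inequality for f in L^2(R), valid for all a, b in R:
\left( \int_{\mathbb{R}} (x-a)^2 \, |f(x)|^2 \, dx \right)
\left( \int_{\mathbb{R}} (\xi-b)^2 \, |\hat{f}(\xi)|^2 \, d\xi \right)
\;\ge\; \frac{\|f\|_2^4}{16\pi^2}
```

Informally, f and its Fourier transform cannot both be sharply concentrated; the variants, local and logarithmic inequalities listed in the abstract refine this statement in different directions.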

882 citations

Journal Article
TL;DR: In this paper, a randomized version of the Kaczmarz method for consistent, overdetermined linear systems is introduced, and it is shown to converge with an expected exponential rate.
Abstract: The Kaczmarz method for solving linear systems of equations is an iterative algorithm that has found many applications ranging from computer tomography to digital signal processing. Despite the popularity of this method, useful theoretical estimates for its rate of convergence are still scarce. We introduce a randomized version of the Kaczmarz method for consistent, overdetermined linear systems and we prove that it converges with expected exponential rate. Furthermore, this is the first solver whose rate does not depend on the number of equations in the system. The solver does not even need to know the whole system but only a small random part of it. It thus outperforms all previously known methods on general extremely overdetermined systems. Even for moderately overdetermined systems, numerical simulations as well as theoretical analysis reveal that our algorithm can converge faster than the celebrated conjugate gradient algorithm. Furthermore, our theory and numerical simulations confirm a prediction of Feichtinger et al. in the context of reconstructing bandlimited functions from nonuniform sampling.
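The randomized update is simple enough to state in a few lines. The sketch below illustrates the row-sampling rule described in the abstract: a row is chosen with probability proportional to its squared norm, and the iterate is projected onto the corresponding hyperplane. The function name, the iteration count, and the seed are illustrative assumptions.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=5000, seed=0):
    """Randomized Kaczmarz sketch for a consistent, overdetermined system Ax = b."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.sum(A * A, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()     # sample rows with probability ~ ||a_i||^2
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]   # project onto {z : a_i·z = b_i}
    return x
```

Each iteration touches only a single row of the system, which is why, as the abstract notes, the solver never needs access to the whole matrix at once.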

768 citations

Performance Metrics
No. of papers from the Journal in previous years
Year    Papers
2023    21
2022    106
2021    92
2020    89
2019    123
2018    59