Author

Kristian Bredies

Other affiliations: University of Bremen
Bio: Kristian Bredies is an academic researcher at the University of Graz. His research focuses on inverse problems and regularization (mathematics). He has an h-index of 35 and has co-authored 114 publications receiving 5,514 citations. Previous affiliations of Kristian Bredies include the University of Bremen.


Papers
Journal ArticleDOI
TL;DR: The novel concept of total generalized variation of a function $u$ is introduced, and some of its essential properties are proved.
Abstract: The novel concept of total generalized variation of a function $u$ is introduced, and some of its essential properties are proved. Unlike the bounded variation seminorm, the new concept involves higher-order derivatives of $u$. Numerical examples illustrate the high quality of this functional as a regularization term for mathematical imaging problems. In particular, this functional selectively regularizes on different regularity levels and, as a side effect, does not lead to the staircasing effect.
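
For reference, the functional has a compact dual definition, reproduced here as a sketch from the standard total generalized variation literature ($\mathrm{Sym}^k(\mathbb{R}^d)$ denotes symmetric $k$-tensors and $\alpha = (\alpha_0, \dots, \alpha_{k-1})$ are positive weights):

$$\mathrm{TGV}_{\alpha}^{k}(u) = \sup\left\{ \int_\Omega u \, \operatorname{div}^{k} v \, \mathrm{d}x \;:\; v \in C_c^{k}\!\left(\Omega, \mathrm{Sym}^{k}(\mathbb{R}^d)\right),\ \left\| \operatorname{div}^{l} v \right\|_\infty \le \alpha_l,\ l = 0, \dots, k-1 \right\}$$

For $k = 1$ this reduces to $\alpha_0$ times the total variation seminorm, which is the sense in which the concept generalizes TV.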

1,463 citations

Journal ArticleDOI
TL;DR: This work introduces the new concept of total generalized variation for magnetic resonance imaging, a new mathematical framework, which is a generalization of the total variation theory and which eliminates these restrictions.
Abstract: Total variation was recently introduced in many different magnetic resonance imaging applications. The assumption of total variation is that images consist of areas, which are piecewise constant. However, in many practical magnetic resonance imaging situations, this assumption is not valid due to the inhomogeneities of the exciting B1 field and the receive coils. This work introduces the new concept of total generalized variation for magnetic resonance imaging, a new mathematical framework, which is a generalization of the total variation theory and which eliminates these restrictions. Two important applications are considered in this article, image denoising and image reconstruction from undersampled radial data sets with multiple coils. Apart from simulations, experimental results from in vivo measurements are presented where total generalized variation yielded improved image quality over conventional total variation in all cases.
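
The variational model behind both applications has the generic form below (a sketch in our notation, not the paper's: $K_j$ stands for the undersampled Fourier sampling operator composed with the $j$-th coil sensitivity, $f_j$ for the measured k-space data of coil $j$, and $\lambda$ for a data-fidelity weight):

$$\min_{u}\; \mathrm{TGV}_\alpha^2(u) + \frac{\lambda}{2} \sum_{j=1}^{N_c} \left\| K_j u - f_j \right\|_2^2$$

Pure denoising corresponds to $N_c = 1$ with $K_1$ the identity; using second-order TGV instead of TV is what lets smooth intensity modulations, such as coil-sensitivity shading, survive regularization.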

557 citations

Journal ArticleDOI
TL;DR: In this article, a unified approach to iterative soft-thresholding algorithms for the solution of linear operator equations in infinite-dimensional Hilbert spaces is presented, together with a new convergence analysis.
Abstract: In this article a unified approach to iterative soft-thresholding algorithms for the solution of linear operator equations in infinite-dimensional Hilbert spaces is presented. We formulate the algorithm in the framework of generalized gradient methods and present a new convergence analysis. As the main result, we show that the algorithm converges at a linear rate as soon as the underlying operator satisfies the so-called finite basis injectivity property or the minimizer possesses a so-called strict sparsity pattern. Moreover, it is shown that the constants can be calculated explicitly in special cases (i.e., for compact operators). Furthermore, the techniques can also be used to establish linear convergence for related methods such as the iterative thresholding algorithm for joint sparsity and the accelerated gradient projection method.
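
In the finite-dimensional case the iteration being analysed is simple to state. The sketch below (function and parameter names are ours, and the step-size rule is a common heuristic rather than the paper's) applies it to $\min_x \frac{1}{2}\|Ax - b\|_2^2 + \lambda\|x\|_1$:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal map of tau * ||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # ensures step <= 1 / ||A^T A||
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the smooth data term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

The linear-rate result above says that, under the finite basis injectivity property or a strict sparsity pattern of the minimizer, the error of this iteration decays geometrically rather than at the generic $O(1/n)$ rate in function values.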

239 citations

Journal ArticleDOI
TL;DR: In this article, the authors considered the ill-posed problem of solving linear equations in the space of vector-valued finite Radon measures with Hilbert space data and obtained approximate solutions by minimizing the Tikhonov functional with a total variation penalty.
Abstract: The ill-posed problem of solving linear equations in the space of vector-valued finite Radon measures with Hilbert space data is considered. Approximate solutions are obtained by minimizing the Tikhonov functional with a total variation penalty. The well-posedness of this regularization method and further regularization properties are discussed. Furthermore, a flexible numerical minimization algorithm is proposed which converges subsequentially in the weak* sense and with rate $O(n^{-1})$ in terms of the functional values. Finally, numerical results for sparse deconvolution demonstrate the applicability for a finite-dimensional discrete data space and infinite-dimensional solution space.
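
In symbols, the problem treated is of the following form (a sketch; $\mathcal{M}(\Omega,\mathbb{R}^m)$ is the space of vector-valued finite Radon measures, $K$ a linear measurement operator into a Hilbert space $H$, and $\|\mu\|_{\mathcal{M}}$ the total variation norm of the measure):

$$\min_{\mu \in \mathcal{M}(\Omega, \mathbb{R}^m)}\; \frac{1}{2}\, \| K\mu - f \|_H^2 + \alpha\, \|\mu\|_{\mathcal{M}}$$

Because point masses have finite total variation norm, minimizers can be sums of Diracs, which is what makes this formulation natural for sparse deconvolution, as in the paper's numerical experiments.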

217 citations


Cited by
Journal ArticleDOI
TL;DR: A new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically.
Abstract: We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.
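
FISTA differs from ISTA by a single extrapolation (momentum) step; a minimal sketch for the $\ell_1$-regularized least-squares case (names and the simple spectral-norm step size are our choices):

```python
import numpy as np

def fista(A, b, lam, n_iter=500):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (illustrative sketch)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        v = y - A.T @ (A @ y - b) / L        # gradient step at extrapolated point
        x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x
```

The momentum sequence $t_k$ is what lifts the worst-case rate from $O(1/k)$ for ISTA to $O(1/k^2)$ in function values.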

11,413 citations

Book
27 Nov 2013
TL;DR: The many different interpretations of proximal operators and algorithms are discussed, their connections to many other topics in optimization and applied mathematics are described, some popular algorithms are surveyed, and a large number of examples of proximal operators that commonly arise in practice are provided.
Abstract: This monograph is about a class of optimization algorithms called proximal algorithms. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss the many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some popular algorithms, and provide a large number of examples of proximal operators that commonly arise in practice.
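
The base operation is the proximal operator $\mathrm{prox}_{\lambda f}(v) = \arg\min_x \big( f(x) + \frac{1}{2\lambda}\|x - v\|_2^2 \big)$. Two of the closed-form instances catalogued in the monograph look like this (an illustrative sketch, not the monograph's code):

```python
import numpy as np

def prox_l2_squared(v, lam):
    """prox of lam * 0.5*||x||^2: shrinkage of v toward the origin."""
    return v / (1.0 + lam)

def prox_box(v, lo, hi):
    """prox of the indicator of the box [lo, hi]^n: Euclidean projection,
    i.e. componentwise clipping (independent of lam)."""
    return np.clip(v, lo, hi)
```

The second example illustrates the generalization mentioned in the abstract: when $f$ is the indicator function of a convex set, its proximal operator is exactly the projection onto that set.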

3,627 citations

Journal ArticleDOI
TL;DR: A simple, costless modification to iterative thresholding is introduced, inspired by belief propagation in graphical models, that makes the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures.
Abstract: Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity–undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity–undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity–undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.
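
The "costless modification" is an extra correction term (the Onsager term) in the residual update. A sketch of the resulting iteration with soft-thresholding, using a common empirical threshold rule (parameter names are ours; the paper considers several threshold policies):

```python
import numpy as np

def amp(A, y, n_iter=30, tau=2.0):
    """Approximate message passing for y = A x with sparse x (sketch)."""
    n, N = A.shape
    delta = n / N                                      # undersampling ratio
    x, z = np.zeros(N), y.copy()
    for _ in range(n_iter):
        theta = tau * np.linalg.norm(z) / np.sqrt(n)   # empirical threshold
        pseudo = x + A.T @ z                           # pseudo-data to threshold
        x = np.sign(pseudo) * np.maximum(np.abs(pseudo) - theta, 0.0)
        # Onsager correction: without the last term this is plain iterative
        # thresholding, with a markedly worse sparsity-undersampling tradeoff.
        z = y - A @ x + (z / delta) * np.mean(np.abs(pseudo) > theta)
    return x
```

The state evolution formalism mentioned in the abstract tracks the effective noise level of $z$ across iterations, which is what makes the tradeoff analysis exact in the large-system limit.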

2,412 citations

Posted Content
Abstract: The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.
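
As one concrete instance of the splitting framework the paper reviews, here is a generic Douglas-Rachford iteration for $\min_x f(x) + g(x)$, written against user-supplied proximal maps (an illustrative sketch; signatures and names are ours):

```python
def douglas_rachford(prox_f, prox_g, x0, gamma=1.0, n_iter=200):
    """Douglas-Rachford splitting for min_x f(x) + g(x).

    prox_f and prox_g map (v, gamma) to prox_{gamma*f}(v) and
    prox_{gamma*g}(v), respectively.
    """
    x = x0.copy()
    for _ in range(n_iter):
        y = prox_f(x, gamma)                 # proximal half-step on f
        z = prox_g(2.0 * y - x, gamma)       # reflected proximal step on g
        x = x + (z - y)                      # relaxation parameter fixed at 1
    return prox_f(x, gamma)                  # the y-sequence tends to a minimizer
```

Plugging in, say, soft-thresholding for $f = \lambda\|\cdot\|_1$ and the prox of a quadratic data term for $g$ recovers one of the sparse-recovery solvers that this unifying framework captures.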

2,095 citations