
Showing papers in "Acta Numerica in 2016"


Journal ArticleDOI
TL;DR: Describes the state of the art in continuous optimization methods for imaging problems, with particular emphasis on optimal first-order schemes that can deal with the non-smooth, large-scale objective functions typical of imaging.
Abstract: A large number of imaging problems reduce to the optimization of a cost function with typical structural properties. The aim of this paper is to describe the state of the art in continuous optimization methods for such problems, and present the most successful approaches and their interconnections. We place particular emphasis on optimal first-order schemes that can deal with typical non-smooth and large-scale objective functions used in imaging problems. We illustrate and compare the different algorithms using classical non-smooth problems in imaging, such as denoising and deblurring. Moreover, we present applications of the algorithms to more advanced problems, such as magnetic resonance imaging, multilabel image segmentation, optical flow estimation, stereo matching, and classification.

477 citations


Journal ArticleDOI
TL;DR: This review describes how techniques from the analysis of partial differential equations can be used to devise good algorithms and to quantify their efficiency and accuracy.
Abstract: The objective of molecular dynamics computations is to infer macroscopic properties of matter from atomistic models via averages with respect to probability measures dictated by the principles of statistical physics. Obtaining accurate results requires efficient sampling of atomistic configurations, which are typically generated using very long trajectories of stochastic differential equations in high dimensions, such as Langevin dynamics and its overdamped limit. Depending on the quantities of interest at the macroscopic level, one may also be interested in dynamical properties computed from averages over paths of these dynamics. This review describes how techniques from the analysis of partial differential equations can be used to devise good algorithms and to quantify their efficiency and accuracy. In particular, a crucial role is played by the study of the long-time behaviour of the solution to the Fokker–Planck equation associated with the stochastic dynamics.
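As a minimal illustration of the sampling setting described above (not from the review), here is an Euler-Maruyama discretization of overdamped Langevin dynamics for a quadratic potential, chosen because its invariant measure is exactly Gaussian, so the long-time average along the path is easy to check:

```python
import numpy as np

def overdamped_langevin(grad_V, beta, dt, n_steps, x0, rng):
    """Euler-Maruyama discretization of dX_t = -grad V(X_t) dt + sqrt(2/beta) dW_t."""
    x = x0
    traj = np.empty(n_steps)
    for k in range(n_steps):
        x = x - grad_V(x) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
        traj[k] = x
    return traj

# Quadratic potential V(x) = x^2 / 2: the invariant measure is N(0, 1/beta),
# so for beta = 1 the long-time variance along the trajectory should be near 1.
beta = 1.0
traj = overdamped_langevin(lambda x: x, beta, dt=0.01, n_steps=100_000,
                           x0=0.0, rng=np.random.default_rng(1))
est_var = traj[20_000:].var()   # discard burn-in, then average along the path
```

The review's themes appear even in this toy: the accuracy of `est_var` is governed both by the time-step bias of the discretization and by how quickly the law of the process relaxes to equilibrium, i.e. the long-time behaviour of the associated Fokker-Planck equation.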

209 citations


Journal ArticleDOI
TL;DR: The goal of this survey article is to impart a working knowledge of the underlying theory and practice of sparse direct methods for solving linear systems and least-squares problems, and to provide an overview of the algorithms, data structures, and software available to solve these problems.
Abstract: Wilkinson defined a sparse matrix as one with enough zeros that it pays to take advantage of them. This informal yet practical definition captures the essence of the goal of direct methods for solving sparse matrix problems. They exploit the sparsity of a matrix to solve problems economically: much faster and using far less memory than if all the entries of a matrix were stored and took part in explicit computations. These methods form the backbone of a wide range of problems in computational science. A glimpse of the breadth of applications relying on sparse solvers can be seen in the origins of matrices in published matrix benchmark collections (Duff and Reid 1979a, Duff, Grimes and Lewis 1989a, Davis and Hu 2011). The goal of this survey article is to impart a working knowledge of the underlying theory and practice of sparse direct methods for solving linear systems and least-squares problems, and to provide an overview of the algorithms, data structures, and software available to solve these problems, so that the reader can both understand the methods and know how best to use them.
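A small taste of the survey's subject, assuming SciPy is available (its `splu` wraps the SuperLU sparse direct solver): factor a sparse 1-D Poisson matrix and solve, exploiting sparsity instead of storing and operating on all n² entries:

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import splu

# 1-D Poisson (tridiagonal) matrix: only about 3n of the n^2 entries are
# nonzero, so a sparse LU factorization is far cheaper than a dense one.
n = 1000
A = csc_matrix(diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)))
b = np.ones(n)

lu = splu(A)               # sparse LU with a fill-reducing column permutation
x = lu.solve(b)
residual = np.linalg.norm(A @ x - b)
```

The fill-reducing ordering chosen before factorization, one of the central topics of the survey, is what keeps the LU factors of a sparse matrix from becoming dense.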

183 citations


Journal ArticleDOI
TL;DR: Proves convergence of these algorithms to measure-valued solutions of the equations of compressible and incompressible inviscid fluid dynamics, and presents a large number of numerical experiments that provide convincing evidence for the viability of the new paradigm.
Abstract: A standard paradigm for the existence of solutions in fluid dynamics is based on the construction of sequences of approximate solutions or approximate minimizers. This approach faces serious obstacles, most notably in multi-dimensional problems, where the persistence of oscillations at ever finer scales prevents compactness. Indeed, these oscillations are an indication, consistent with recent theoretical results, of the possible lack of existence/uniqueness of solutions within the standard framework of integrable functions. It is in this context that Young measures – parametrized probability measures which can describe the limits of such oscillatory sequences – offer the more general paradigm of measure-valued solutions for these problems. We present viable numerical algorithms to compute approximate measure-valued solutions, based on the realization of approximate measures as laws of Monte Carlo sampled random fields. We prove convergence of these algorithms to measure-valued solutions for the equations of compressible and incompressible inviscid fluid dynamics, and present a large number of numerical experiments which provide convincing evidence for the viability of the new paradigm. We also discuss the use of these algorithms, and their extensions, in uncertainty quantification and contexts other than fluid dynamics, such as non-convex variational problems in materials science.
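The core algorithmic idea, realizing approximate measures as laws of Monte Carlo sampled random fields, can be sketched in a toy setting. The version below (Burgers' equation with a Lax-Friedrichs scheme and randomly perturbed initial data) is an illustration of the principle, not the authors' scheme:

```python
import numpy as np

def lax_friedrichs_burgers(u, dx, dt, n_steps):
    """Lax-Friedrichs scheme for u_t + (u^2/2)_x = 0 with periodic boundaries."""
    for _ in range(n_steps):
        f = 0.5 * u ** 2
        u = 0.5 * (np.roll(u, -1) + np.roll(u, 1)) \
            - dt / (2.0 * dx) * (np.roll(f, -1) - np.roll(f, 1))
    return u

rng = np.random.default_rng(2)
n, m = 200, 50                         # grid points, Monte Carlo samples
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = 1.0 / n
dt = 0.4 * dx                          # respects the CFL condition here

# Each sample evolves a random perturbation of the same initial datum; the
# ensemble at the final time approximates the law of the random field.
samples = np.empty((m, n))
for k in range(m):
    u0 = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal()
    samples[k] = lax_friedrichs_burgers(u0, dx, dt, n_steps=250)

mean = samples.mean(axis=0)   # first moment of the approximate measure
var = samples.var(axis=0)     # spread of the law at each grid point
```

Statistics of the ensemble (moments, histograms at a point) are what converge in the measure-valued framework, even when individual realizations keep oscillating under mesh refinement.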

98 citations


Journal ArticleDOI
TL;DR: Presents the state-of-the-art design and implementation practices for the acceleration of the predominant linear algebra algorithms on large-scale accelerated multicore systems, and emphasizes the development of innovative linear algebra algorithms using three technologies – mixed precision arithmetic, batched operations, and asynchronous iterations – that are currently of high interest for such systems.
Abstract: Many crucial scientific computing applications, ranging from national security to medical advances, rely on high-performance linear algebra algorithms and technologies, underscoring their importance and broad impact. Here we present the state-of-the-art design and implementation practices for the acceleration of the predominant linear algebra algorithms on large-scale accelerated multicore systems. Examples are given with fundamental dense linear algebra algorithms – from the LU, QR, Cholesky, and LDLT factorizations needed for solving linear systems of equations, to eigenvalue and singular value decomposition (SVD) problems. The implementations presented are readily available via the open-source PLASMA and MAGMA libraries, which represent the next-generation modernization of the popular LAPACK library for accelerated multicore systems. To generate the extreme level of parallelism needed for the efficient use of these systems, algorithms of interest are redesigned and then split into well-chosen computational tasks. The task execution is scheduled over the computational components of a hybrid system of multicore CPUs with GPU accelerators and/or Xeon Phi coprocessors, using either static scheduling or light-weight runtime systems. The use of light-weight runtime systems keeps scheduling overheads low, similar to static scheduling, while enabling the expression of parallelism through sequential-like code. This simplifies the development effort and allows exploration of the unique strengths of the various hardware components. Finally, we emphasize the development of innovative linear algebra algorithms using three technologies – mixed precision arithmetic, batched operations, and asynchronous iterations – that are currently of high interest for accelerated multicore systems.
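Of the three technologies highlighted, mixed precision arithmetic is the easiest to sketch in a few lines: do the expensive solve in low precision, then recover full accuracy by iterative refinement with residuals computed in high precision. The NumPy sketch below illustrates the idea only; it is not the PLASMA/MAGMA implementation:

```python
import numpy as np

def mixed_precision_solve(A, b, n_refine=3):
    """Solve Ax = b: low-precision solves plus double-precision residual correction."""
    A32 = A.astype(np.float32)
    # A real library would factor A32 once and reuse the factors; for brevity
    # this sketch just calls a single-precision solve at each step.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(n_refine):
        r = b - A @ x                                   # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))  # cheap correction step
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 200)) + 200.0 * np.eye(200)  # well conditioned
b = rng.standard_normal(200)
x = mixed_precision_solve(A, b)
rel_residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

The payoff on accelerators is that the O(n³) factorization runs at the much higher single-precision (or lower) throughput, while the O(n²) refinement steps restore double-precision accuracy for well-conditioned problems.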

14 citations


Journal ArticleDOI
TL;DR: Surveys, via a number of examples, how condition numbers have joined forces with probabilistic analysis to give rise to a form of condition-based analysis of algorithms.
Abstract: In recent decades, condition numbers have joined forces with probabilistic analysis to give rise to a form of condition-based analysis of algorithms. In this paper we survey how this analysis is done via a number of examples. We precede this catalogue of examples with short primers on both condition numbers and probabilistic analyses.
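A minimal numerical illustration of the probabilistic viewpoint (not an example from the paper): estimate the average of log κ(A) over random Gaussian matrices, the kind of average-case quantity that condition-based analyses bound, in contrast with the unbounded worst case:

```python
import numpy as np

# Sample random Gaussian matrices and estimate the mean of log(kappa(A)).
# Average-case results for Gaussian ensembles say this quantity grows only
# like log n plus a constant, even though kappa itself is unbounded.
rng = np.random.default_rng(4)
n, trials = 50, 200
log_kappas = np.array([
    np.log(np.linalg.cond(rng.standard_normal((n, n))))
    for _ in range(trials)
])
avg_log_kappa = log_kappas.mean()
```

Plugging such an average into a condition-based running-time or accuracy bound converts a statement "the algorithm is fast/accurate when the input is well conditioned" into "the algorithm is fast/accurate for most inputs".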

9 citations