Author

Andrea Walther

Bio: Andrea Walther is an academic researcher from the University of Paderborn. The author has contributed to research in the topics of automatic differentiation and the Jacobian matrix and determinant. The author has an h-index of 23, has co-authored 109 publications, and has received 5,497 citations. Previous affiliations of Andrea Walther include Dresden University of Technology and Humboldt University of Berlin.


Papers
Journal ArticleDOI
TL;DR: The storage problem arising in the numerical calculation of the adjoint equations is tackled by a low-storage approach based on optimal checkpointing, which achieves a remarkable memory reduction at the cost of a slight increase in run-time caused by repeated forward integrations.
Abstract: This paper discusses approximation schemes for adjoints in control of the instationary Navier–Stokes system. It tackles the storage problem arising in the numerical calculation of the appearing adjoint equations by proposing a low-storage approach which utilizes optimal checkpointing. For this purpose, a new proof of optimality is given. This new approach gives so far unknown properties of the optimal checkpointing strategies and thus provides new insights. The optimal checkpointing allows a remarkable memory reduction by accepting a slight increase in run-time caused by repeated forward integrations as illustrated by means of the Navier–Stokes equations. In particular, a memory reduction of two orders of magnitude causes only a slow down factor of 2–3 in run-time. Copyright © 2005 John Wiley & Sons, Ltd.

39 citations
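
To make the trade-off concrete, here is a minimal sketch of the checkpointing idea, assuming a uniform checkpoint stride rather than the optimal binomial placement proved in the paper; `step` and `adjoint_step` are hypothetical placeholders for one forward time step and its adjoint.

```python
# A sketch of checkpoint-and-recompute adjoint evaluation (uniform stride,
# not the paper's optimal binomial schedule). Memory drops from O(n_steps)
# stored states to O(n_steps/stride + stride), at the cost of roughly one
# extra forward sweep.

def adjoint_with_checkpoints(x0, n_steps, stride, step, adjoint_step, xbar):
    assert n_steps % stride == 0, "sketch assumes stride divides n_steps"

    # Forward sweep: keep a checkpoint only every `stride` steps.
    checkpoints = {0: x0}
    x = x0
    for i in range(n_steps):
        x = step(x, i)
        if (i + 1) % stride == 0:
            checkpoints[i + 1] = x

    # Reverse sweep: rebuild each segment's states from its checkpoint,
    # then run the adjoint steps backwards through the segment.
    for seg_end in range(n_steps, 0, -stride):
        seg_start = seg_end - stride
        states = [checkpoints[seg_start]]
        for i in range(seg_start, seg_end - 1):
            states.append(step(states[-1], i))
        for i in range(seg_end - 1, seg_start - 1, -1):
            xbar = adjoint_step(states[i - seg_start], xbar, i)
    return xbar
```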

Book ChapterDOI
02 Nov 2004
TL;DR: The more common multi-level checkpointing and the less well known binomial checkpointing are presented, and the approaches are compared with respect to the number of time steps whose adjoint can be calculated, the run-time needed for the adjoint calculation, and the memory requirement.
Abstract: Checkpointing techniques are becoming more and more necessary for the computation of adjoints. This paper presents the more common multi-level checkpointing as well as the less well known binomial checkpointing. The checkpointing approaches are compared with respect to the number of time steps whose adjoint can be calculated, the run-time needed for the adjoint calculation, and the memory requirement. Several examples illustrate the results.

37 citations
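
The number of time steps whose adjoint can be calculated has a well-known closed form for binomial checkpointing: with c checkpoints and at most r repeated forward evaluations of each step, beta(c, r) = C(c + r, c) steps can be reversed. A short illustration in Python:

```python
from math import comb

def reachable_steps(c, r):
    """Maximal number of time steps whose adjoint can be computed with c
    checkpoints when each step is advanced at most r times during the
    reverse sweep: beta(c, r) = C(c + r, c)."""
    return comb(c + r, c)

# 10 checkpoints and at most 3 repetitions already cover C(13, 10) = 286
# time steps; allowing r = 5 extends this to C(15, 10) = 3003.
print(reachable_steps(10, 3), reachable_steps(10, 5))
```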

Journal ArticleDOI
TL;DR: It will be shown both theoretically and numerically that methods based on the continuous adjoint equation require a careful choice of both the integrator and gradient assembly formulas in order to obtain a gradient consistent with the discretized control problem.
Abstract: This paper deals with the numerical solution of optimal control problems for ODEs. The methods considered here rely on some standard optimization code to solve a discretized version of the control problem under consideration. We aim to make available to the optimization software not only the discrete objective functional, but also its gradient. The objective gradient can be computed either from forward (sensitivity) information or backward (adjoint) information. The purpose of this paper is to discuss various ways of adjoint computation. It will be shown both theoretically and numerically that methods based on the continuous adjoint equation require a careful choice of both the integrator and the gradient assembly formulas in order to obtain a gradient consistent with the discretized control problem. Particular attention is given to automatic differentiation techniques, which automatically generate a suitable integrator.

37 citations
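
As a concrete illustration of the discrete ("discretize-then-optimize") adjoint discussed above, the following sketch uses a hypothetical toy problem, not one from the paper: explicit Euler applied to x' = -x + u with objective J = 0.5 x(T)^2. The reverse sweep returns a gradient consistent with the discretized problem to machine precision, which is exactly the consistency property the paper is concerned with.

```python
# Discrete adjoint of explicit Euler for a scalar toy control problem
# (hypothetical example): x_{k+1} = x_k + h * (-x_k + u_k), J = 0.5 * x_N^2.

def gradient_via_discrete_adjoint(x0, u, h):
    # Forward sweep, storing the trajectory (no checkpointing here).
    xs = [x0]
    for uk in u:
        xs.append(xs[-1] + h * (-xs[-1] + uk))

    # Reverse sweep: lam_k = dJ/dx_k, propagated through the Euler recursion.
    lam = xs[-1]                  # dJ/dx_N for J = 0.5 * x_N^2
    grad = [0.0] * len(u)
    for k in reversed(range(len(u))):
        grad[k] = lam * h         # d x_{k+1} / d u_k = h
        lam = lam * (1.0 - h)     # d x_{k+1} / d x_k = 1 - h
    return grad

def J(x0, u, h):
    x = x0
    for uk in u:
        x = x + h * (-x + uk)
    return 0.5 * x * x

# Finite-difference check of the first gradient component:
x0, h, u = 1.0, 0.1, [0.5, -0.2, 0.1]
g = gradient_via_discrete_adjoint(x0, u, h)
eps = 1e-7
fd = (J(x0, [u[0] + eps] + u[1:], h) - J(x0, u, h)) / eps
print(g[0], fd)                   # agree to roughly 1e-7
```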

Journal ArticleDOI
TL;DR: A new approach for computing a sparsity pattern for a Hessian is presented: nonlinearity information is propagated through the function evaluation yielding the nonzero structure.
Abstract: A new approach for computing the sparsity pattern of a Hessian is presented: nonlinearity information is propagated through the function evaluation, yielding the nonzero structure. A complexity analysis of the proposed algorithm is given. Once the sparsity pattern is available, coloring algorithms can be applied to compute a seed matrix. To evaluate the product of the Hessian and the seed matrix, a vector version for evaluating second-order adjoints is analysed. New ADOL-C drivers implementing the presented algorithms are provided. Run-time analyses are given for some problems from the CUTE collection.

35 citations
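
A minimal sketch, assuming a toy operator-overloading scheme rather than the ADOL-C implementation, of the nonlinearity propagation described above: each intermediate value carries the index set of independents it depends on, nonlinear operations record interacting pairs, and the pairs accumulate into the Hessian sparsity pattern.

```python
# Toy forward propagation of nonlinearity information (not ADOL-C).
# Additions are linear and create no second-order interaction;
# multiplications couple every dependency of one factor with every
# dependency of the other.

class Track:
    def __init__(self, deps, pattern=None):
        self.deps = frozenset(deps)        # independents this value depends on
        self.pattern = pattern or set()    # (i, j) pairs with H[i, j] != 0

    def __add__(self, other):
        return Track(self.deps | other.deps, self.pattern | other.pattern)

    def __mul__(self, other):
        pat = self.pattern | other.pattern
        pat |= {(i, j) for i in self.deps for j in other.deps}
        pat |= {(j, i) for i in self.deps for j in other.deps}
        return Track(self.deps | other.deps, pat)

# Hessian sparsity pattern of f(x0, x1, x2) = x0 * x1 + x2:
x = [Track({i}) for i in range(3)]
f = x[0] * x[1] + x[2]
print(sorted(f.pattern))   # [(0, 1), (1, 0)] -- only x0 and x1 interact
```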

Journal ArticleDOI
TL;DR: A technique based on algorithmic differentiation is presented which allows for a precise calculation of higher-order derivatives and can be applied widely, even in the case of implicit dependencies that are only numerically solvable and therefore rule out a semi-analytical calculation of the derivatives.

33 citations
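
One standard way algorithmic differentiation obtains higher-order derivatives is truncated Taylor arithmetic. The following toy sketch, which is not the paper's implementation, propagates Taylor coefficients through f(x) = x^3 + x and recovers the first three derivatives exactly.

```python
from math import factorial

# A value is a list of Taylor coefficients [f, f', f''/2!, ...] at the
# expansion point; arithmetic on the lists propagates all orders at once.

def t_add(a, b):
    return [x + y for x, y in zip(a, b)]

def t_mul(a, b):
    # Cauchy product of two truncated Taylor series of equal length.
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(len(a))]

# f(x) = x^3 + x at x = 2, derivatives up to order 3:
x = [2.0, 1.0, 0.0, 0.0]           # seed: value 2, dx/dx = 1
f = t_add(t_mul(t_mul(x, x), x), x)
derivs = [c * factorial(k) for k, c in enumerate(f)]
print(derivs)                       # [10.0, 13.0, 12.0, 6.0] = f, f', f'', f'''
```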


Cited by
28 Oct 2017
TL;DR: The automatic differentiation module of PyTorch, a library designed to enable rapid research on machine learning models, is described. The module differentiates purely imperative programs, with a focus on extensibility and low overhead.
Abstract: In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.

13,268 citations
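
A small usage example of the module described above, relying only on the standard public PyTorch API; the function being differentiated is arbitrary.

```python
import torch

# Build a computation imperatively; autograd records it on the fly.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sin().sum()

# Reverse-mode differentiation through the recorded graph.
y.backward()
print(x.grad)                # equals 2 * x * cos(x**2)
```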

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known especially about mud-dominated calciclastic submarine fan systems. This study presents a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations

Journal ArticleDOI
TL;DR: The brms package implements Bayesian multilevel models in R using the probabilistic programming language Stan, allowing users to fit linear, robust linear, binomial, Poisson, survival, ordinal, zero-inflated, hurdle, and even non-linear models, all in a multilevel context.
Abstract: The brms package implements Bayesian multilevel models in R using the probabilistic programming language Stan. A wide range of distributions and link functions are supported, allowing users to fit - among others - linear, robust linear, binomial, Poisson, survival, ordinal, zero-inflated, hurdle, and even non-linear models all in a multilevel context. Further modeling options include autocorrelation of the response variable, user defined covariance structures, censored data, as well as meta-analytic standard errors. Prior specifications are flexible and explicitly encourage users to apply prior distributions that actually reflect their beliefs. In addition, model fit can easily be assessed and compared with the Watanabe-Akaike information criterion and leave-one-out cross-validation.

4,353 citations

Journal ArticleDOI
TL;DR: This work considers approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters, and the response variables are non-Gaussian, and shows that very accurate approximations to the posterior marginals can be computed directly.
Abstract: Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage with our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged.

4,164 citations
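
The core building block is the Laplace approximation. As a stand-alone sketch, here is a plain one-dimensional version with a hypothetical likelihood and prior; this is not INLA itself, which nests such approximations over a latent Gaussian field.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_post(theta):
    # Hypothetical unnormalised posterior: Poisson likelihood with a
    # log link (observed count y = 7) and a N(0, 2^2) prior.
    y, prior_sd = 7, 2.0
    return -(y * theta - np.exp(theta)) + 0.5 * (theta / prior_sd) ** 2

# Laplace approximation: Gaussian centred at the posterior mode, with
# variance given by the inverse curvature of the negative log density.
mode = minimize_scalar(neg_log_post).x
h = 1e-5
curv = (neg_log_post(mode + h) - 2 * neg_log_post(mode)
        + neg_log_post(mode - h)) / h**2
print(f"approximate posterior: N(mean={mode:.3f}, sd={curv ** -0.5:.3f})")
```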

Journal ArticleDOI
TL;DR: A Bayesian calibration technique is presented which improves on the traditional approach in two respects: its predictions allow for all sources of uncertainty, and it attempts to correct for any inadequacy of the model that is revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values.
Abstract: We consider prediction and uncertainty analysis for systems which are approximated using complex mathematical models. Such models, implemented as computer codes, are often generic in the sense that by a suitable choice of some of the model's input parameters the code can be used to predict the behaviour of the system in a variety of specific applications. However, in any specific application the values of necessary parameters may be unknown. In this case, physical observations of the system in the specific context are used to learn about the unknown parameters. The process of fitting the model to the observed data by adjusting the parameters is known as calibration. Calibration is typically effected by ad hoc fitting, and after calibration the model is used, with the fitted input values, to predict the future behaviour of the system. We present a Bayesian calibration technique which improves on this traditional approach in two respects. First, the predictions allow for all sources of uncertainty, including the remaining uncertainty over the fitted parameters. Second, they attempt to correct for any inadequacy of the model which is revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values. The method is illustrated by using data from a nuclear radiation release at Tomsk, and from a more complex simulated nuclear accident exercise.

3,745 citations
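
As a crude stand-in for the paper's Gaussian-process formulation, the sketch below calibrates a single simulator parameter while approximating the model-discrepancy term by an inflated error variance; the simulator, the data, and all variances are hypothetical.

```python
import numpy as np

def simulator(theta, t):
    # Hypothetical computer model of the system under study.
    return np.exp(-theta * t)

t_obs = np.array([0.5, 1.0, 1.5, 2.0])
y_obs = np.array([0.62, 0.37, 0.22, 0.14])

# Observation noise plus a constant proxy for model discrepancy; the paper
# instead models the discrepancy as a Gaussian process over the inputs.
sigma2 = 0.01 ** 2 + 0.02 ** 2

# Grid posterior over theta with a flat prior on [0, 2]:
grid = np.linspace(0.0, 2.0, 2001)
loglik = np.array([-0.5 * np.sum((y_obs - simulator(th, t_obs)) ** 2) / sigma2
                   for th in grid])
post = np.exp(loglik - loglik.max())
dx = grid[1] - grid[0]
post /= post.sum() * dx
print(f"posterior mean of theta = {np.sum(grid * post) * dx:.3f}")
```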