Author

Andrea Walther

Bio: Andrea Walther is an academic researcher from the University of Paderborn. The author has contributed to research on automatic differentiation and the Jacobian matrix and determinant, has an h-index of 23, and has co-authored 109 publications receiving 5,497 citations. Previous affiliations of Andrea Walther include Dresden University of Technology and Humboldt University of Berlin.


Papers
Book ChapterDOI
21 Apr 2002
TL;DR: This paper presents an approach to reducing the memory requirement without increasing the wall clock time by using parallel computers.
Abstract: For computational purposes such as the computation of adjoints by the reverse mode of automatic differentiation, or for debugging, one may require the values computed during the evaluation of a function in reverse order. The naive approach is to store all information needed for the reversal and to read it backwards during the reversal. This leads to an enormous memory requirement, which is proportional to the computing time. The paper presents an approach that reduces the memory requirement without increasing the wall-clock time by using parallel computers. During the parallel computation, only a small, fixed number of intermediate states, called checkpoints, is stored. The data needed for the reversal is recomputed piecewise by restarting the evaluation procedure from the checkpoints. We explain how this technique can be used on a parallel computer with distributed memory. Different implementation strategies are shown, and some details with respect to resource optimality are discussed.
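The checkpointing idea described in the abstract can be sketched in a few lines. This is a toy illustration, not the paper's parallel scheme: a simple iterative forward computation is reversed while storing only checkpoints at regular intervals and recomputing each segment on demand (the function `step` and the evenly spaced placement are illustrative assumptions).

```python
def step(x):
    # one forward step of a toy computation (illustrative, not from the paper)
    return x * 1.5 + 1.0

def reverse_with_checkpoints(x0, n_steps, n_checkpoints):
    """Visit the intermediate states in reverse order while storing
    only a small, fixed number of checkpoints."""
    # Phase 1: forward sweep, keeping checkpoints at regular intervals.
    stride = max(1, n_steps // n_checkpoints)
    checkpoints = {}  # step index -> stored state
    x = x0
    for i in range(n_steps):
        if i % stride == 0:
            checkpoints[i] = x
        x = step(x)
    # Phase 2: reverse sweep; recompute each segment from its checkpoint.
    visited = []
    for i in reversed(range(n_steps)):
        base = (i // stride) * stride
        x = checkpoints[base]
        for _ in range(base, i):
            x = step(x)
        visited.append(x)  # the state just before step i, in reverse order
    return visited

states_rev = reverse_with_checkpoints(2.0, 8, 3)
```

The trade-off is exactly the one the abstract describes: memory drops from O(n_steps) to O(n_checkpoints), at the cost of recomputation, which the paper hides by running the recomputations on parallel processors.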

10 citations

Journal ArticleDOI
TL;DR: This algorithm avoids direct computation of exact Jacobians and has significant potential benefits on problems where Jacobian calculations are expensive, as demonstrated on numerical examples.
Abstract: A class of trust-region algorithms is developed and analyzed for the solution of optimization problems with nonlinear equality and inequality constraints. These algorithms are developed for problem classes where the constraints are not available in an open, equation-based form, and constraint Jacobians are of high dimension and are expensive to calculate. Based on composite-step trust region methods and a filter approach, the resulting algorithms do not require the computation of exact Jacobians; only Jacobian vector products are used along with approximate Jacobian matrices. With these modifications, we show that the algorithm is globally convergent. Also, as demonstrated on numerical examples, our algorithm avoids direct computation of exact Jacobians and has significant potential benefits on problems where Jacobian calculations are expensive.
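The matrix-free ingredient these algorithms rely on, a Jacobian-vector product computed without ever forming the Jacobian, can be sketched with a one-sided finite difference. The constraint function `c` and the step size `eps` below are illustrative assumptions, not the paper's algorithm.

```python
def jvp(c, x, v, eps=1e-6):
    # directional derivative: (c(x + eps*v) - c(x)) / eps approximates J(x) @ v
    cx = c(x)
    cxe = c([xi + eps * vi for xi, vi in zip(x, v)])
    return [(a - b) / eps for a, b in zip(cxe, cx)]

# toy constraints c(x) = (x0^2 + x1, x0 * x1); exact J(1,2) @ (1,0) = (2, 2)
c = lambda x: [x[0]**2 + x[1], x[0] * x[1]]
out = jvp(c, [1.0, 2.0], [1.0, 0.0])
```

Each such product costs one extra constraint evaluation, which is why methods built on them pay off when the full Jacobian is high-dimensional and expensive; in practice, automatic differentiation gives the same product without truncation error.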

9 citations

Journal ArticleDOI
TL;DR: A new method is presented that computes a local minimizer of the proximal model objective, satisfying the first-order condition known as criticality in nonsmooth optimization, and provides opportunities for structure exploitation, such as warm starts, in the context of the nonlinear outer loop.
Abstract: We previously derived first-order (KKT) and second-order (SOSC) optimality conditions for functions defined by evaluation programs involving smooth elementals and absolute values. For this class of...

8 citations

Journal ArticleDOI
TL;DR: Functions defined by evaluation programs involving smooth elementals and absolute values as well as the max and min operators are piecewise smooth.
Abstract: Functions defined by evaluation programs involving smooth elementals and absolute values as well as the max and min operators are piecewise smooth. They can be approximated by piecewise linear func...
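The piecewise-linear approximation mentioned here can be illustrated on a tiny example (the function below is an illustrative assumption, not taken from the paper): linearize the smooth elemental `sin` at a base point while keeping the nonsmooth `abs` intact, so the model retains the kink and still approximates the function to second order in the increment.

```python
import math

def f(x):
    # an abs-smooth function: smooth elemental sin plus an absolute value
    return abs(x) + math.sin(x)

def f_piecewise_linear(x0, dx):
    # tangent model of the smooth part at x0; abs is carried over unchanged,
    # so the approximation is piecewise linear in dx with a kink at dx = -x0
    return abs(x0 + dx) + math.sin(x0) + math.cos(x0) * dx

# the model matches f to second order in dx, even near the kink at 0
err = abs(f(0.1 + 0.05) - f_piecewise_linear(0.1, 0.05))
```

Because the kinks survive the linearization, the piecewise-linear model can be optimized or analyzed exactly, which is the basis for the optimality conditions and algorithms in this line of work.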

8 citations

Journal ArticleDOI
TL;DR: In this paper, the sensitivity of dielectric waveguides with respect to design parameters such as permittivity values or simple geometric dependencies is analyzed based on a discretization using the Finite Integration Technique.
Abstract: We analyze the sensitivity of dielectric waveguides with respect to design parameters such as permittivity values or simple geometric dependencies. Based on a discretization using the Finite Integration Technique, the eigenvalue problem for the wave number is shown to be non-Hermitian with possibly complex solutions even in the lossless case. Nevertheless, the sensitivity can be obtained with negligible numerical effort. Numerical examples demonstrate the validity of the approach.
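The "negligible numerical effort" comes from the standard eigenvalue sensitivity formula: for a non-Hermitian matrix A(p) with right eigenvector x and left eigenvector y, dλ/dp = yᵀ(dA/dp)x / (yᵀx), with no re-solve of the eigenproblem. A minimal sketch on a 2x2 stand-in (the matrix is an illustrative assumption; the paper works with Finite Integration Technique matrices):

```python
import math

# A(p) = [[2, p], [1, 3]] has eigenvalue lam = (5 + sqrt(1 + 4p)) / 2,
# so the exact sensitivity is d(lam)/dp = 1 / sqrt(1 + 4p).
p = 2.0
lam = (5.0 + math.sqrt(1.0 + 4.0 * p)) / 2.0

x = (lam - 3.0, 1.0)   # right eigenvector: A x = lam x
y = (1.0, lam - 2.0)   # left eigenvector:  y^T A = lam y^T
dA = ((0.0, 1.0),      # dA/dp: only the (0,1) entry depends on p
      (0.0, 0.0))

num = y[0] * (dA[0][0] * x[0] + dA[0][1] * x[1]) \
    + y[1] * (dA[1][0] * x[0] + dA[1][1] * x[1])
den = y[0] * x[0] + y[1] * x[1]
sens = num / den       # equals 1 / sqrt(1 + 4p) for this matrix
```

For complex solutions the same formula applies with conjugate transposes; the cost is one inner product per parameter, which is why the sensitivities are nearly free once the eigenpair is known.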

8 citations


Cited by
28 Oct 2017
TL;DR: An automatic differentiation module of PyTorch is described: a library designed to enable rapid research on machine learning models that focuses on differentiation of purely imperative programs, emphasizing extensibility and low overhead.
Abstract: In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.
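The "define-by-run" differentiation of imperative programs that the abstract contrasts with symbolic frameworks can be illustrated with a minimal reverse-mode sketch. This is a toy rendering of the idea, not PyTorch's actual implementation or API: each operation records its inputs and local derivatives as the program runs, and `backward` propagates adjoints along that recorded structure.

```python
class Var:
    """A value that records how it was computed, so adjoints can flow back."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # (parent_var, local_gradient) pairs
        self.grad = 0.0

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def backward(self, seed=1.0):
        # accumulate the adjoint, then push it to the parents
        # (plain recursion: fine for expression trees like this example)
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# the "tape" is just the Python program itself:
x = Var(3.0)
y = Var(4.0)
z = x * y + x      # built imperatively, no symbolic graph declared up front
z.backward()       # dz/dx = y + 1 = 5, dz/dy = x = 3
```

Because the graph is rebuilt on every execution, ordinary Python control flow (loops, branches) differentiates naturally, which is the extensibility benefit the abstract highlights.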

13,268 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveal a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These...

9,929 citations

Journal ArticleDOI
TL;DR: The brms package implements Bayesian multilevel models in R using the probabilistic programming language Stan, allowing users to fit linear, robust linear, binomial, Poisson, survival, ordinal, zero-inflated, hurdle, and even non-linear models all in a multilevel context.
Abstract: The brms package implements Bayesian multilevel models in R using the probabilistic programming language Stan. A wide range of distributions and link functions are supported, allowing users to fit - among others - linear, robust linear, binomial, Poisson, survival, ordinal, zero-inflated, hurdle, and even non-linear models all in a multilevel context. Further modeling options include autocorrelation of the response variable, user defined covariance structures, censored data, as well as meta-analytic standard errors. Prior specifications are flexible and explicitly encourage users to apply prior distributions that actually reflect their beliefs. In addition, model fit can easily be assessed and compared with the Watanabe-Akaike information criterion and leave-one-out cross-validation.

4,353 citations

Journal ArticleDOI
TL;DR: This work considers approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, in which the latent field is Gaussian and controlled by a few hyperparameters while the response variables are non-Gaussian, and shows that very accurate approximations to the posterior marginals can be computed directly.
Abstract: Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage with our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged.
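The core trick behind these fast approximations, fitting a Gaussian at the posterior mode, can be shown on a one-dimensional toy problem. This is a bare Laplace approximation, not the full INLA machinery, and the Beta-shaped density below is an illustrative assumption: the normalizing constant Z = ∫ f(t) dt is approximated by f(mode) · sqrt(2π / h), where h is the negative second derivative of log f at the mode.

```python
import math

# unnormalized density f(t) = t^(a-1) * (1-t)^(b-1) on (0, 1)
a, b = 30.0, 30.0
log_f = lambda t: (a - 1) * math.log(t) + (b - 1) * math.log(1 - t)

mode = (a - 1) / (a + b - 2)                       # argmax of log_f
# curvature: negative second derivative of log_f at the mode
h = (a - 1) / mode**2 + (b - 1) / (1 - mode)**2

# Laplace approximation to the normalizing constant
z_laplace = math.exp(log_f(mode)) * math.sqrt(2 * math.pi / h)
# exact value is the Beta function B(a, b)
z_exact = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
```

Here the approximation lands within a few percent of the exact answer in microseconds, which scales to the "seconds or minutes instead of hours or days" comparison with MCMC made in the abstract; the nested part of INLA then corrects such Gaussian approximations marginal by marginal.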

4,164 citations

Journal ArticleDOI
TL;DR: A Bayesian calibration technique is presented that improves on the traditional ad hoc approach in two respects: it allows for all sources of uncertainty, and it attempts to correct for any inadequacy of the model revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values.
Abstract: We consider prediction and uncertainty analysis for systems which are approximated using complex mathematical models. Such models, implemented as computer codes, are often generic in the sense that by a suitable choice of some of the model's input parameters the code can be used to predict the behaviour of the system in a variety of specific applications. However, in any specific application the values of necessary parameters may be unknown. In this case, physical observations of the system in the specific context are used to learn about the unknown parameters. The process of fitting the model to the observed data by adjusting the parameters is known as calibration. Calibration is typically effected by ad hoc fitting, and after calibration the model is used, with the fitted input values, to predict the future behaviour of the system. We present a Bayesian calibration technique which improves on this traditional approach in two respects. First, the predictions allow for all sources of uncertainty, including the remaining uncertainty over the fitted parameters. Second, they attempt to correct for any inadequacy of the model which is revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values. The method is illustrated by using data from a nuclear radiation release at Tomsk, and from a more complex simulated nuclear accident exercise.
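The effect of the discrepancy term can be seen even in a deterministic least-squares stand-in (an illustrative sketch, not the paper's Gaussian-process formulation): when the simulator misses a systematic offset, calibrating the simulator alone absorbs the model error into the parameter estimate, while an explicit discrepancy term recovers the true value.

```python
# toy data: truth = theta_true * x + offset, but the simulator is only theta * x
xs = [1.0, 2.0, 3.0, 4.0]
theta_true, offset = 2.0, 0.5
ys = [theta_true * x + offset for x in xs]

# calibration without discrepancy: fit y ~ theta * x
# (the offset leaks into theta, biasing it upward)
theta_naive = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# calibration with an additive discrepancy term: fit y ~ theta * x + delta
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
theta_disc = (n * sxy - sx * sy) / (n * sxx - sx * sx)
delta = (sy - theta_disc * sx) / n
```

In the paper the discrepancy is a flexible Gaussian process rather than a constant, and the fit is fully Bayesian, so the remaining parameter uncertainty and the model-inadequacy correction both propagate into the predictions.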

3,745 citations