
Showing papers by "Andrea Walther published in 2002"


Journal ArticleDOI
TL;DR: A new approach to constrained optimization is proposed, based on direct and adjoint vector-function evaluations in combination with secant updating; it avoids constraint Jacobian evaluations and reduces the linear algebra cost per iteration in the dense, unstructured case.
Abstract: In this article we propose a new approach to constrained optimization that is based on direct and adjoint vector-function evaluations in combination with secant updating. The main goal is the avoidance of constraint Jacobian evaluations and the reduction of the linear algebra cost per iteration to $ {\cal O}((n + m)^2) $ operations in the dense, unstructured case. A crucial building block is a transformation-invariant two-sided rank-one update (TR1) for approximations to the (active) constraint Jacobian. In this article we elaborate its basic properties and report preliminary numerical results for the new total quasi-Newton approach on some small equality constrained problems. A nullspace implementation under development is briefly described. The tasks of identifying active constraints, safeguarding convergence and many other important issues in constrained optimization are not addressed in detail.
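A minimal numerical sketch of the idea behind a two-sided rank-one update: a single rank-one correction can be chosen so that the Jacobian approximation satisfies both a direct secant condition (from a forward evaluation) and an adjoint secant condition (from a reverse-mode evaluation). The exact TR1 formula and safeguards in the paper may differ; this is an illustration under the stated assumptions, with names (`tr1_update`, `eps`) invented here.

```python
import numpy as np

def tr1_update(A, s, y, sigma, tau, eps=1e-12):
    """Rank-one correction of A that matches the direct secant pair
    (s, y = J s) and the adjoint secant pair (sigma, tau = J^T sigma).
    Illustrative sketch only; not the paper's full safeguarded scheme."""
    u = y - A @ s              # residual of the direct secant condition
    v = tau - A.T @ sigma      # residual of the adjoint secant condition
    denom = v @ s
    if abs(denom) < eps:       # safeguard: skip a near-singular update
        return A
    return A + np.outer(u, v) / denom

rng = np.random.default_rng(0)
m, n = 3, 4
J = rng.standard_normal((m, n))   # stand-in for the true constraint Jacobian
A = np.zeros((m, n))              # crude initial approximation
s = rng.standard_normal(n)        # direct direction
sigma = rng.standard_normal(m)    # adjoint direction
A1 = tr1_update(A, s, J @ s, sigma, J.T @ sigma)
# When both secant pairs come from the same J, the updated A1 satisfies
# A1 @ s == J @ s  and  A1.T @ sigma == J.T @ sigma  exactly.
```

Note that only matrix-vector products with J and its transpose are needed, which is what makes Jacobian-free (direct/adjoint) evaluations sufficient.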

88 citations


Book ChapterDOI
21 Apr 2002
TL;DR: This paper presents an approach to reducing the memory requirement without increasing the wall clock time by using parallel computers.
Abstract: For computational purposes such as the computation of adjoints, applying the reverse mode of automatic differentiation, or debugging, one may require the values computed during the evaluation of a function in reverse order. The naive approach is to store all information needed for the reversal and to read it backwards during the reversal. This technique leads to an enormous memory requirement, proportional to the computing time. The paper presents an approach to reducing the memory requirement without increasing the wall clock time by using parallel computers. During the parallel computation, only a fixed and small number of intermediate states, called checkpoints, is stored. The data needed for the reversal is recomputed piecewise by restarting the evaluation procedure from the checkpoints. We explain how this technique can be used on a parallel computer with distributed memory. Different implementation strategies are shown, and some details with respect to resource optimality are discussed.
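The store-and-recompute pattern the abstract describes can be sketched in a serial, simplified form: a forward sweep keeps only a few evenly spaced checkpoints, and the reverse sweep regenerates each intermediate state by rerunning the evaluation from the nearest preceding checkpoint. Uniform spacing and serial execution are simplifications of mine; the paper's parallel, resource-optimal checkpoint placement is not modeled here.

```python
def reverse_with_checkpoints(x0, step, n_steps, n_ckpts):
    """Return the states entering steps n_steps-1, ..., 0 in reverse order,
    storing only n_ckpts checkpoints instead of all n_steps states.
    Serial sketch of checkpointed reversal with uniform spacing."""
    stride = max(1, n_steps // n_ckpts)
    ckpts = {}
    x = x0
    # Forward sweep: store only every stride-th state as a checkpoint.
    for i in range(n_steps):
        if i % stride == 0:
            ckpts[i] = x
        x = step(x)
    # Reverse sweep: recompute each needed state from its checkpoint.
    states_reversed = []
    for i in reversed(range(n_steps)):
        j = (i // stride) * stride      # nearest checkpoint at or before i
        x = ckpts[j]
        for _ in range(j, i):           # replay forward from the checkpoint
            x = step(x)
        states_reversed.append(x)       # state entering step i
    return states_reversed

# Toy evaluation procedure: 8 increment steps, only 2 checkpoints kept.
rev = reverse_with_checkpoints(0, lambda x: x + 1, 8, 2)
```

Memory drops from all n_steps states to roughly n_ckpts checkpoints, at the price of recomputation; the parallel schemes in the paper hide that recomputation on idle processors so the wall clock time does not grow.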

10 citations