Author

Anders Forsgren

Bio: Anders Forsgren is an academic researcher at the Royal Institute of Technology. The author has contributed to research in topics: Matrix (mathematics) & Hessian matrix. The author has an h-index of 18 and has co-authored 65 publications receiving 1,868 citations.


Papers
Journal ArticleDOI
TL;DR: A condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization shows how their influence has transformed both the theory and practice of constrained optimization.
Abstract: Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar's widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
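
As a rough illustration of the classical barrier idea surveyed here, the sketch below solves a small inequality-constrained problem by minimizing a sequence of logarithmic barrier subproblems with a decreasing barrier parameter. The problem data and the use of scipy.optimize.minimize are assumptions made for illustration, not material from the paper.

```python
# Minimal sketch of a classical log-barrier method (illustrative only):
# minimize f(x) subject to c_i(x) >= 0 by solving a sequence of unconstrained
# subproblems  min f(x) - mu * sum_i log(c_i(x))  with mu -> 0.
import numpy as np
from scipy.optimize import minimize

def f(x):                      # example objective (assumed)
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def c(x):                      # example inequality constraints, c(x) >= 0 (assumed)
    return np.array([x[0], x[1], 1.0 - x[0] - 0.5 * x[1]])

def barrier(x, mu):
    cx = c(x)
    if np.any(cx <= 0):        # outside the strict interior: reject
        return np.inf
    return f(x) - mu * np.sum(np.log(cx))

x = np.array([0.2, 0.2])       # strictly feasible starting point
for mu in [1.0, 0.1, 0.01, 0.001]:
    x = minimize(lambda z: barrier(z, mu), x, method="Nelder-Mead").x
print("approximate solution:", x)
```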

693 citations

Journal ArticleDOI
TL;DR: Minimax optimization provides robust target coverage without sacrificing the sparing of healthy tissues, even in the presence of low density lung tissue and high density titanium implants.
Abstract: Purpose: Intensity modulated proton therapy (IMPT) is sensitive to errors, mainly due to high stopping power dependency and steep beam dose gradients. Conventional margins are often insufficient to ...
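
The minimax idea in the TL;DR can be sketched generically as worst-case optimization over a discrete set of error scenarios, written here in epigraph form and solved with SLSQP. The dose-influence matrix, scenarios, and objective below are hypothetical placeholders, not the clinical model used in the paper.

```python
# Minimal sketch of minimax (worst-case) optimization over error scenarios
# (illustrative; data and model are assumed, not the paper's).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
D_nominal = rng.uniform(0.0, 1.0, size=(8, 4))        # dose-influence matrix (assumed)
scenarios = [D_nominal * s for s in (0.9, 1.0, 1.1)]  # e.g. range/density error scenarios
d_target = np.ones(8)                                 # prescribed dose (assumed)

def worst_case(w):
    return max(np.sum((D @ w - d_target) ** 2) for D in scenarios)

# Epigraph form: minimize t subject to per-scenario objectives <= t, w >= 0.
def objective(z):            # z = [w, t]
    return z[-1]

constraints = [{"type": "ineq",
                "fun": (lambda z, D=D: z[-1] - np.sum((D @ z[:-1] - d_target) ** 2))}
               for D in scenarios]
bounds = [(0, None)] * 4 + [(None, None)]

z0 = np.concatenate([np.full(4, 0.5), [worst_case(np.full(4, 0.5))]])
res = minimize(objective, z0, method="SLSQP", bounds=bounds, constraints=constraints)
w_opt = res.x[:-1]
print("robust weights:", w_opt, " worst-case objective:", worst_case(w_opt))
```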

361 citations

Journal ArticleDOI
TL;DR: A primal-dual augmented penalty-barrier method is proposed for large-scale general (nonconvex) nonlinear programming when first and second derivatives of the objective and constraint functions are available; a sparse inertia-controlling factorization makes the method suitable for large problems.
Abstract: This paper concerns large-scale general (nonconvex) nonlinear programming when first and second derivatives of the objective and constraint functions are available. A method is proposed that is based on finding an approximate solution of a sequence of unconstrained subproblems parameterized by a scalar parameter. The objective function of each unconstrained subproblem is an augmented penalty-barrier function that involves both primal and dual variables. Each subproblem is solved with a modified Newton method that generates search directions from a primal-dual system similar to that proposed for interior methods. The augmented penalty-barrier function may be interpreted as a merit function for values of the primal and dual variables. An inertia-controlling symmetric indefinite factorization is used to provide descent directions and directions of negative curvature for the augmented penalty-barrier merit function. A method suitable for large problems can be obtained by providing a version of this factorization that will treat large sparse indefinite systems.
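
A minimal sketch of the primal-dual flavor of such methods is given below: one Newton step on the perturbed KKT conditions of a bound-constrained barrier subproblem, with a fraction-to-the-boundary rule. This is a generic textbook-style iteration, not the paper's augmented penalty-barrier method, and the objective, starting point, and parameter schedule are assumptions.

```python
# Sketch of a primal-dual Newton step for the barrier subproblem of
#   minimize f(x) subject to x >= 0,
# i.e. the perturbed KKT system  grad f(x) - z = 0,  X z = mu e,  x, z > 0.
# (Illustrative only; no merit function, line search, or inertia control.)
import numpy as np

def grad_f(x):                       # example gradient (assumed)
    return 2.0 * (x - np.array([1.0, -0.5]))

def hess_f(x):                       # example Hessian (assumed)
    return 2.0 * np.eye(2)

def primal_dual_step(x, z, mu):
    n = x.size
    r1 = grad_f(x) - z               # dual residual
    r2 = x * z - mu                  # perturbed complementarity residual
    # Assemble the primal-dual system  [H  -I; Z  X] [dx; dz] = -[r1; r2]
    K = np.block([[hess_f(x), -np.eye(n)],
                  [np.diag(z), np.diag(x)]])
    d = np.linalg.solve(K, -np.concatenate([r1, r2]))
    dx, dz = d[:n], d[n:]
    # Fraction-to-the-boundary rule keeps x and z strictly positive
    alpha = min(1.0, 0.995 * min((-x[dx < 0] / dx[dx < 0]).min(initial=np.inf),
                                 (-z[dz < 0] / dz[dz < 0]).min(initial=np.inf)))
    return x + alpha * dx, z + alpha * dz

x, z, mu = np.array([0.5, 0.5]), np.array([1.0, 1.0]), 0.1
for _ in range(20):
    x, z = primal_dual_step(x, z, mu)
    mu *= 0.5
print("x ~", x, " z ~", z)
```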

184 citations

Journal ArticleDOI
TL;DR: It is shown that diagonal ill-conditioning may be characterized by the property of strict $t$-diagonal dominance, which generalizes the idea of diagonal dominance to matrices whose diagonals are substantially larger in magnitude than the off-diagonals.
Abstract: Many interior methods for constrained optimization obtain a search direction as the solution of a symmetric linear system that becomes increasingly ill-conditioned as the solution is approached. In some cases, this ill-conditioning is characterized by a subset of the diagonal elements becoming large in magnitude. It has been shown that in this situation the solution can be computed accurately regardless of the size of the diagonal elements. In this paper we discuss the formulation of several interior methods that use symmetric diagonally ill-conditioned systems. It is shown that diagonal ill-conditioning may be characterized by the property of strict $t$-diagonal dominance, which generalizes the idea of diagonal dominance to matrices whose diagonals are substantially larger in magnitude than the off-diagonals. A perturbation analysis is presented that characterizes the sensitivity of $t$-diagonally dominant systems under a certain class of structured perturbations. Finally, we give a rounding-error analysis of the symmetric indefinite factorization when applied to $t$-diagonally dominant systems. This analysis resolves the (until now) open question of whether the class of perturbations used in the sensitivity analysis is representative of the rounding error made during the numerical solution of the barrier equations.
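
A small numerical demonstration of the benign nature of this kind of ill-conditioning is sketched below: a symmetric system whose large condition number comes only from one very large diagonal entry is typically solved far more accurately than the condition number alone would suggest. The matrix is an arbitrary assumed example, not the paper's $t$-diagonally dominant analysis.

```python
# Illustrative demo (assumed data): ill-conditioning caused only by a large
# diagonal entry (e.g. z_i / x_i near an active bound) is often benign.
import numpy as np

H = np.array([[4.0, 1.0],
              [1.0, 3.0]])               # well-conditioned base matrix (assumed)
K = H + np.diag([1e10, 0.0])             # one diagonal entry blows up
print("condition number: %.1e" % np.linalg.cond(K))

x_true = np.array([1.0, -1.0])
b = K @ x_true
x = np.linalg.solve(K, b)
print("relative error:   %.1e" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```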

69 citations

Journal ArticleDOI
TL;DR: This work investigates computational schemes that enable the computation of descent directions and directions of negative curvature without the need to know the null-space matrix.
Abstract: Newton methods for large-scale minimization subject to linear equality constraints are discussed. For large-scale problems, it may be prohibitively expensive to reduce the problem to an unconstrained problem in the null space of the constraint matrix. We investigate computational schemes that enable the computation of descent directions and directions of negative curvature without the need to know the null-space matrix. The schemes are based on factorizing a sparse symmetric indefinite matrix. Three different methods are proposed based on the schemes described for computing the search directions. Convergence properties for the methods are established.
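
The idea of avoiding an explicit null-space basis can be sketched as follows: a Newton-type direction for minimizing f(x) subject to Ax = b is obtained directly from the sparse symmetric indefinite KKT system. The data below and the use of a generic sparse LU solve (rather than an inertia-revealing symmetric indefinite factorization) are assumptions made for illustration.

```python
# Sketch: search direction from the KKT system
#   [ H  A^T ] [ p ]   [ -g ]
#   [ A   0  ] [ y ] = [  0 ]
# without forming a null-space basis for A.  (Illustrative, assumed data.)
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 6, 2
rng = np.random.default_rng(1)
H = sp.csc_matrix(np.diag(rng.uniform(1.0, 2.0, n)))   # Hessian at current iterate (assumed)
A = sp.csc_matrix(rng.standard_normal((m, n)))         # equality-constraint Jacobian (assumed)
g = rng.standard_normal(n)                             # gradient at a feasible iterate (assumed)

K = sp.bmat([[H, A.T], [A, None]], format="csc")       # symmetric indefinite KKT matrix
rhs = np.concatenate([-g, np.zeros(m)])
sol = spla.spsolve(K, rhs)
p = sol[:n]                                            # search direction with A p = 0
print("A @ p ~ 0:", np.allclose(A @ p, 0, atol=1e-10))
print("descent:  g.T @ p =", float(g @ p))
```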

61 citations


Cited by
Journal ArticleDOI
TL;DR: A comprehensive description of the primal-dual interior-point algorithm with a filter line-search method for nonlinear programming is provided, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix.
Abstract: We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.
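
A generic sketch of the inertia-correction idea mentioned in the abstract appears below: a multiple of the identity is added to the Hessian block until the KKT matrix has the inertia required for a descent direction. This is not IPOPT's actual heuristic or code, and the dense eigenvalue-based inertia check is an assumption made for simplicity.

```python
# Generic inertia-correction sketch (illustrative; not IPOPT's implementation).
import numpy as np

def inertia(M):
    eig = np.linalg.eigvalsh(M)
    return int(np.sum(eig > 0)), int(np.sum(eig < 0))

def corrected_kkt(H, A, delta0=1e-4, factor=10.0, max_tries=30):
    n, m = H.shape[0], A.shape[0]
    delta = 0.0
    for _ in range(max_tries):
        K = np.block([[H + delta * np.eye(n), A.T],
                      [A, np.zeros((m, m))]])
        if inertia(K) == (n, m):      # correct inertia -> descent direction available
            return K, delta
        delta = delta0 if delta == 0.0 else delta * factor
    raise RuntimeError("inertia correction failed")

# Example with an indefinite Hessian block (assumed data)
H = np.diag([1.0, -2.0, 0.5])
A = np.array([[1.0, 1.0, 1.0]])
K, delta = corrected_kkt(H, A)
print("regularization delta =", delta, " inertia =", inertia(K))
```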

7,966 citations

Journal ArticleDOI
TL;DR: This work presents a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold that has a number of advantages over other source localization techniques, including increased resolution and improved robustness to noise, to limited data, and to correlation of the sources.
Abstract: We present a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. We enforce sparsity by imposing penalties based on the $\ell_1$-norm. A number of recent theoretical results on sparsifying properties of $\ell_1$ penalties justify this choice. Explicitly enforcing the sparsity of the representation is motivated by a desire to obtain a sharp estimate of the spatial spectrum that exhibits super-resolution. We propose to use the singular value decomposition (SVD) of the data matrix to summarize multiple time or frequency samples. Our formulation leads to an optimization problem, which we solve efficiently in a second-order cone (SOC) programming framework by an interior point implementation. We propose a grid refinement method to mitigate the effects of limiting estimates to a grid of spatial locations and introduce an automatic selection criterion for the regularization parameter involved in our approach. We demonstrate the effectiveness of the method on simulated data by plots of spatial spectra and by comparing the estimator variance to the Cramér-Rao bound (CRB). We observe that our approach has a number of advantages over other source localization techniques, including increased resolution and improved robustness to noise, to limited amounts of data, and to correlation of the sources, as well as not requiring an accurate initialization.
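
The $\ell_1$-sparsity idea can be sketched with a simple proximal-gradient (ISTA) loop on an $\ell_1$-penalized least-squares problem, used here in place of the paper's SVD plus second-order cone programming formulation. The random overcomplete dictionary, source amplitudes, and regularization weight below are assumptions for illustration.

```python
# Generic l1-penalized sparse recovery via ISTA (illustrative; not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 100                                        # measurements vs. grid points
A = rng.standard_normal((m, n)) / np.sqrt(m)          # overcomplete dictionary (assumed)
s_true = np.zeros(n); s_true[[7, 42]] = [1.0, -0.8]   # two "sources" (assumed)
y = A @ s_true + 0.01 * rng.standard_normal(m)        # noisy measurements

lam = 0.05                                            # l1 regularization weight (assumed)
L = np.linalg.norm(A, 2) ** 2                         # Lipschitz constant of the gradient
s = np.zeros(n)
for _ in range(500):                                  # ISTA: gradient step + soft threshold
    grad = A.T @ (A @ s - y)
    z = s - grad / L
    s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("recovered support:", np.nonzero(np.abs(s) > 0.1)[0])
```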

2,288 citations

Journal ArticleDOI
TL;DR: A large selection of solution methods for linear systems in saddle point form are presented, with an emphasis on iterative methods for large and sparse problems.
Abstract: Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for this type of system. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
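
A minimal sketch of the kind of system surveyed: a symmetric indefinite saddle point system solved with a Krylov method (MINRES from SciPy). The blocks are small assumed examples and no preconditioning is used, which would be essential for realistic large problems.

```python
# Saddle point system  [A  B^T; B  0] [u; p] = [f; g]  solved with MINRES
# (illustrative; assumed blocks, no preconditioner).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres

n, m = 50, 10
rng = np.random.default_rng(2)
A = sp.diags(rng.uniform(1.0, 2.0, n))            # SPD (1,1) block (assumed)
B = sp.csr_matrix(rng.standard_normal((m, n)))    # constraint / divergence block (assumed)
K = sp.bmat([[A, B.T], [B, None]], format="csr")  # symmetric indefinite saddle point matrix

rhs = np.concatenate([rng.standard_normal(n), np.zeros(m)])
x, info = minres(K, rhs)
print("converged:", info == 0,
      " relative residual:", np.linalg.norm(K @ x - rhs) / np.linalg.norm(rhs))
```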

2,253 citations