
Showing papers by "John E. Dennis published in 2008"


Journal ArticleDOI
TL;DR: The authors introduce several derivative-free sampling methods, not yet considered in this field, for solving constrained optimization problems, and include a genetic algorithm for completeness.

111 citations


Journal ArticleDOI
TL;DR: A parallel space decomposition (PSD) technique for the mesh adaptive direct search (MADS) algorithm is described, and numerical results on problems with up to 500 variables illustrate the advantages and limitations of PSD-MADS.
Abstract: This paper describes a parallel space decomposition (PSD) technique for the mesh adaptive direct search (MADS) algorithm. MADS extends generalized pattern search to constrained nonsmooth optimization problems. The objective of the present work is to obtain good solutions to larger problems than those typically solved by MADS. The new method, PSD-MADS, is an asynchronous parallel algorithm in which the processes solve problems over subsets of variables. The convergence analysis, based on the Clarke calculus, is essentially the same as for the MADS algorithm. A practical implementation is described, and some numerical results on problems with up to 500 variables illustrate the advantages and limitations of PSD-MADS.
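The decomposition idea behind PSD-MADS can be illustrated with a deliberately simplified serial sketch (assumed Python; names are illustrative). In the actual algorithm each process runs MADS asynchronously on its own subproblem, rather than the plain coordinate poll over a fixed subset shown here:

```python
import numpy as np

def psd_poll_step(f, x, subset, delta):
    """One poll over a subset of the variables, the others held fixed.

    Serial caricature of the PSD idea: each parallel process would own
    such a subset. Tries the 2*|subset| trial points obtained by moving
    each owned coordinate by +/- delta and keeps the best.
    """
    best_x, best_f = x.copy(), f(x)
    for i in subset:
        for step in (+delta, -delta):
            trial = x.copy()
            trial[i] += step
            f_trial = f(trial)
            if f_trial < best_f:
                best_x, best_f = trial, f_trial
    return best_x, best_f
```

On a simple quadratic, polling only the owned coordinates already makes progress, which is the point of letting processes work on small subproblems of a large instance.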

77 citations


Journal ArticleDOI
TL;DR: In this article, simplex gradients of nonsmooth functions are analyzed in the context of direct search methods like the generalized pattern search and the mesh adaptive direct search, for which there exists a convergence analysis.
Abstract: It has been shown recently that the efficiency of direct search methods that use opportunistic polling in positive spanning directions can be improved significantly by reordering the poll directions according to descent indicators built from simplex gradients. The purpose of this paper is twofold. First, we analyse the properties of simplex gradients of nonsmooth functions in the context of direct search methods like the generalized pattern search and the mesh adaptive direct search, for which there exists a convergence analysis in the nonsmooth setting. Our analysis does not require continuous differentiability and can be seen as an extension of the accuracy properties of simplex gradients known for smooth functions. Second, we test the use of simplex gradients when pattern search is applied to nonsmooth functions, confirming the merit of the poll ordering strategy for such problems.
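The simplex gradient behind this ordering strategy is easy to compute: stack the simplex edge directions into a matrix and solve a linear system against the corresponding function differences. A minimal sketch (assumed Python; the function name is illustrative, not from the paper):

```python
import numpy as np

def simplex_gradient(points, fvals):
    """Simplex gradient from n+1 affinely independent points in R^n.

    Row i of S is y_i - y_0; the gradient g satisfies
    (y_i - y_0)^T g = f(y_i) - f(y_0) for each i.
    """
    pts = np.asarray(points, float)
    fv = np.asarray(fvals, float)
    S = pts[1:] - pts[0]
    df = fv[1:] - fv[0]
    return np.linalg.solve(S, df)
```

Poll directions can then be reordered so that directions with the most negative inner product with the simplex gradient (the strongest descent indicators) are tried first.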

65 citations


Journal ArticleDOI
TL;DR: A detailed algorithm is given for constructing the set of directions whether or not the constraints are degenerate, along with a new approach for handling nonredundant linearly dependent constraints that maintains GPS convergence properties without significantly increasing computational cost.
Abstract: This paper deals with generalized pattern search (GPS) algorithms for linearly constrained optimization. At each iteration, the GPS algorithm generates a set of directions that conforms to the geometry of any nearby linear constraints. This set is then used to construct trial points to be evaluated during the iteration. In a previous work, Lewis and Torczon developed a scheme for computing the conforming directions; however, the issue of degeneracy merits further investigation. The contribution of this paper is to provide a detailed algorithm for constructing the set of directions whether or not the constraints are degenerate. One difficulty in the degenerate case is the classification of constraints as redundant or nonredundant. We give a short survey of the main definitions and methods for treating redundancy and propose an approach to identify nonredundant ε-active constraints. We also introduce a new approach for handling nonredundant linearly dependent constraints, which maintains GPS convergence properties without significantly increasing computational cost. Some simple numerical tests illustrate the effectiveness of the algorithm.
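The two basic ingredients, identifying ε-active constraints and generating directions that conform to them, can be sketched for the easy nondegenerate case (assumed Python; the paper's algorithm additionally handles degenerate and redundant constraints, which this sketch does not):

```python
import numpy as np

def eps_active(A, b, x, eps):
    """Indices of constraints a_i^T x <= b_i whose slack b_i - a_i^T x
    is at most eps: the 'nearby' constraints GPS must conform to."""
    return np.where(b - A @ x <= eps)[0]

def conforming_directions(A_active, n):
    """Tangent directions for the nondegenerate case: an orthonormal
    nullspace basis of the eps-active constraint normals, plus its
    negatives, computed via SVD."""
    if len(A_active) == 0:
        D = np.eye(n)
    else:
        _, s, Vt = np.linalg.svd(np.atleast_2d(A_active))
        rank = int(np.sum(s > 1e-12))
        D = Vt[rank:].T  # columns span the nullspace of A_active
    return np.hstack([D, -D])
```

The returned directions are orthogonal to every ε-active constraint normal, so steps along them do not immediately violate the nearby constraints.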

17 citations


ReportDOI
TL;DR: The authors discuss methods for solving geophysical inverse problems, with an emphasis on newer approaches that have not yet become prominent in geophysics; the main results are brought together in a final summary and conclusions section.
Abstract: A fundamental part of geophysics is to make inferences about the interior of the earth on the basis of data collected at or near the surface of the earth. In almost all cases these measured data are only indirectly related to the properties of the earth that are of interest, so an inverse problem must be solved in order to obtain estimates of the physical properties within the earth. In February of 1999 the U.S. Department of Energy sponsored a workshop that was intended to examine the methods currently being used to solve geophysical inverse problems and to consider what new approaches should be explored in the future. The interdisciplinary area between inverse problems in geophysics and optimization methods in mathematics was specifically targeted as one where an interchange of ideas was likely to be fruitful. Thus about half of the participants were actively involved in solving geophysical inverse problems and about half were actively involved in research on general optimization methods. This report presents some of the topics that were explored at the workshop and the conclusions that were reached. In general, the objective of a geophysical inverse problem is to find an earth model, described by a set of physical parameters, that is consistent with the observational data. It is usually assumed that the forward problem, that of calculating simulated data for an earth model, is well enough understood so that reasonably accurate synthetic data can be generated for an arbitrary model. The inverse problem is then posed as an optimization problem, where the function to be optimized is variously called the objective function, misfit function, or fitness function. The objective function is typically some measure of the difference between observational data and synthetic data calculated for a trial model. 
However, because of incomplete and inaccurate data, the objective function often incorporates some additional form of regularization, such as a measure of smoothness or distance from a prior model. Various other constraints may also be imposed upon the process. Inverse problems are not restricted to geophysics, but can be found in a wide variety of disciplines where inferences must be made on the basis of indirect measurements. For instance, most imaging problems, whether in the field of medicine or non-destructive evaluation, require the solution of an inverse problem. In this report, however, the examples used for illustration are taken exclusively from the field of geophysics. The generalization of these examples to other disciplines should be straightforward, as all are based on standard second-order partial differential equations of physics. In fact, sometimes the non-geophysical inverse problems are significantly easier to treat (as in medical imaging) because the limitations on data collection, and in particular on multiple views, are not so severe as they generally are in geophysics. This report begins with an introduction to geophysical inverse problems by briefly describing four canonical problems that are typical of those commonly encountered in geophysics. Next the connection with optimization methods is made by presenting a general formulation of geophysical inverse problems. This leads into the main subject of this report, a discussion of methods for solving such problems with an emphasis upon newer approaches that have not yet become prominent in geophysics. A separate section is devoted to a subject that is not encountered in all optimization problems but is particularly important in geophysics, the need for a careful appraisal of the results in terms of their resolution and uncertainty. The impact on geophysical inverse problems of continuously improving computational resources is then discussed. 
The main results are then brought together in a final summary and conclusions section.
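The objective function described above, a data misfit plus a regularization term measuring distance from a prior model, can be written down concretely for a linear forward operator (an assumption made here for brevity; real geophysical forward models are typically nonlinear PDE solves, and all names below are illustrative):

```python
import numpy as np

def misfit(m, G, d_obs, m_prior, alpha):
    """Tikhonov-style objective: ||G m - d_obs||^2 + alpha ||m - m_prior||^2,
    i.e. data misfit plus distance-from-prior regularization."""
    r = G @ m - d_obs
    p = m - m_prior
    return float(r @ r + alpha * (p @ p))

def tikhonov_solve(G, d_obs, m_prior, alpha):
    """Closed-form minimizer for the linear case:
    (G^T G + alpha I) m = G^T d_obs + alpha m_prior."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha * np.eye(n),
                           G.T @ d_obs + alpha * m_prior)
```

For nonlinear forward models no such closed form exists, which is exactly why the report turns to general optimization methods.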

9 citations


01 Nov 2008
TL;DR: In this paper, the authors compare instantiations of mesh adaptive direct search algorithms under different strategies to handle constraints and conduct extensive numerical tests from feasible and/or infeasible starting points on three real engineering applications.
Abstract: The class of Mesh Adaptive Direct Search (Mads) algorithms is designed for the optimization of constrained black-box problems. The purpose of this paper is to compare instantiations of Mads under different strategies to handle constraints. Intensive numerical tests are conducted from feasible and/or infeasible starting points on three real engineering applications.
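One of the simplest constraint-handling strategies compared in studies like this one is the extreme barrier, which rejects infeasible points outright by assigning them an infinite objective value. A minimal sketch (assumed Python; the paper compares several strategies, not only this one):

```python
import math

def extreme_barrier(f, constraints):
    """Wrap a black-box objective with the extreme-barrier rule:
    a point violating any constraint c(x) <= 0 gets f = +inf,
    so the direct search never accepts it."""
    def f_barrier(x):
        if any(c(x) > 0 for c in constraints):
            return math.inf
        return f(x)
    return f_barrier
```

Alternatives such as the progressive barrier instead tolerate a controlled amount of constraint violation early on, which matters when only infeasible starting points are available.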

2 citations


01 Feb 2008
TL;DR: In this article, the authors introduce a new way of choosing directions for the mesh adaptive direct search (Mads) class of algorithms, which yields convex cones of missed directions at each iteration.
Abstract: The purpose of this paper is to introduce a new way of choosing directions for the mesh adaptive direct search (Mads) class of algorithms. The advantages of this new OrthoMads instantiation of Mads are that the polling directions are chosen deterministically, ensuring that the results of a given run are repeatable, and that they are orthogonal to each other, which yields convex cones of missed directions at each iteration that are minimal in a reasonable measure. Convergence results for OrthoMads follow directly from those already published for Mads, and they hold deterministically, rather than with probability one, as is the case for LtMads, the first Mads instance. Initial numerical results are quite good on the smooth and nonsmooth, constrained and unconstrained problems considered here.
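The orthogonality at the heart of OrthoMads can be sketched with a Householder-like construction: from a single nonzero direction v, the matrix H = (vᵀv)I − 2vvᵀ has mutually orthogonal columns, and polling can use the positive spanning set [H, −H]. A sketch (assumed Python; the published algorithm derives v deterministically from a Halton sequence and scales the directions to the mesh, both omitted here):

```python
import numpy as np

def ortho_poll_directions(v):
    """2n orthogonal poll directions from one direction v.

    H = (v.v) I - 2 v v^T satisfies H^T H = (v.v)^2 I, so its
    columns are mutually orthogonal; [H, -H] positively spans R^n.
    """
    v = np.asarray(v, float)
    n = len(v)
    H = (v @ v) * np.eye(n) - 2.0 * np.outer(v, v)
    return np.hstack([H, -H])
```

Because every column is orthogonal to the others, the cones of directions missed by a poll step are as small as such a set allows, which is the geometric advantage the abstract refers to.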

1 citation


27 May 2008
TL;DR: This paper characterizes a new class of optimization problems in which objective function values are correlated with the computational time required to obtain them, and makes use of surrogates based on the CPU times of previously evaluated points, rather than their function values, within the search step framework of mesh adaptive direct search algorithms.
Abstract: In this paper, we characterize a new class of optimization problems in which objective function values are correlated with the computational time required to obtain these values. That is, as the optimal solution is approached, the computational time required to compute an objective function value decreases significantly. This is motivated by an application in which each objective function evaluation requires both a numerical fluid dynamics simulation and an image registration process, and the goal is to find the parameter values of a predetermined reference image by comparing the flow dynamics from the numerical simulation and the reference image through the image comparison process. In designing an approach to numerically solve the more general class of problems in an efficient way, we make use of surrogates based on CPU times of previously evaluated points, rather than their function values, all within the search step framework of mesh adaptive direct search algorithms. Because of the expected CPU time correlation, a time cutoff parameter was added to the objective function evaluation to allow its termination during the comparison process if the computational time exceeds a specified threshold. The approach was tested using the NOMADm and DACE MATLAB software packages, and results are presented.
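The CPU-time surrogate idea can be caricatured with a nearest-neighbour predictor: order trial points so that those expected to evaluate quickly, and hence (given the correlation) to have good objective values, are tried first. A sketch (assumed Python; the paper actually fits a DACE kriging surrogate to the recorded CPU times, and `order_by_time_surrogate` is an illustrative name):

```python
import numpy as np

def order_by_time_surrogate(candidates, history):
    """Order candidate points by a nearest-neighbour prediction of CPU
    time, cheapest-looking first, from (point, seconds) history pairs.

    A crude stand-in for a kriging surrogate of evaluation times."""
    def predicted_seconds(x):
        x = np.asarray(x, float)
        _, t = min(history,
                   key=lambda h: np.linalg.norm(np.asarray(h[0]) - x))
        return t
    return sorted(candidates, key=predicted_seconds)
```

With opportunistic evaluation and a time cutoff, trying the predicted-cheap points first means most of the expensive (and likely poor) evaluations are skipped or truncated.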

1 citation