Open Access Journal Article

Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods

TLDR
This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited, then turns to a broad class of methods for which the underlying principles allow generalization to handle bound constraints and linear constraints.
Abstract
Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
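To make the flavor of these methods concrete, below is a minimal sketch of compass (coordinate) search, one of the classical direct search methods the review covers. It is an illustrative sketch, not the paper's specific algorithm: the polling order, step-halving rule, tolerance, and iteration cap are all assumed choices.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Compass (coordinate) search: poll the 2n points x +/- step * e_i,
    move to the first poll point that strictly decreases f, and halve the
    step after an unsuccessful poll. No derivatives are used."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    while step >= tol and max_iter > 0:
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                max_iter -= 1
                if ft < fx:                 # simple decrease is accepted
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                     # unsuccessful poll: contract
    return x, fx

# Usage: minimize a smooth quadratic without derivatives.
x_star, f_star = compass_search(lambda v: (v[0] - 1) ** 2 + 10 * (v[1] + 2) ** 2,
                                x0=[0.0, 0.0])
```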


Citations
Journal Article

Rich Models for Steganalysis of Digital Images

TL;DR: A novel general strategy for building steganography detectors for digital images is proposed: the noise component is modeled as a union of many diverse submodels formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and nonlinear high-pass filters.
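As a rough illustration of one ingredient of this strategy (not the authors' feature set), the sketch below computes a quantized, truncated first-order horizontal noise residual and summarizes pairs of neighboring residual samples in a normalized co-occurrence histogram; the filter choice, quantization step q, and truncation threshold T are assumed illustrative values.

```python
import numpy as np

def residual_cooccurrence(img, q=1.0, T=2):
    """One tiny submodel in the spirit of rich models: a linear high-pass
    residual (horizontal first difference), quantized and truncated to
    [-T, T], summarized by a histogram of horizontally adjacent pairs."""
    img = img.astype(float)
    r = img[:, 1:] - img[:, :-1]                        # high-pass residual
    r = np.clip(np.round(r / q), -T, T).astype(int)     # quantize + truncate
    pairs = (r[:, :-1] + T) * (2 * T + 1) + (r[:, 1:] + T)
    hist = np.bincount(pairs.ravel(), minlength=(2 * T + 1) ** 2)
    return hist / hist.sum()                            # feature vector
```

A rich model would assemble many such histograms, over diverse linear and nonlinear filters and scan directions, into one large feature vector for a classifier.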
Journal Article

Mesh Adaptive Direct Search Algorithms for Constrained Optimization

TL;DR: The main result of this paper is that the general MADS framework is flexible enough to allow the generation of an asymptotically dense set of refining directions along which the Clarke derivatives are nonnegative.
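The sketch below is a deliberately simplified, MADS-flavored poll step, not a faithful instance of the paper's framework. It only illustrates the key scaling idea: the mesh size delta_m shrinks faster than the poll size delta_p = sqrt(delta_m), so mesh-conforming poll directions can point in ever more directions as the mesh is refined. The random direction construction and the update rules here are toy assumptions.

```python
import numpy as np

def toy_mads_poll(f, x, delta_m, rng):
    """One toy poll step. Poll points lie on the mesh
    {x + delta_m * z : z an integer vector} at distance roughly
    delta_p = sqrt(delta_m) >= delta_m from x."""
    delta_p = np.sqrt(delta_m)
    # Random orthogonal basis; its +/- columns give 2n poll directions.
    Q, _ = np.linalg.qr(rng.standard_normal((x.size, x.size)))
    D = np.round(Q * (delta_p / delta_m)) * delta_m   # snap to the mesh
    fx = f(x)
    for d in np.hstack([D, -D]).T:
        if f(x + d) < fx:
            return x + d, min(1.0, 4 * delta_m)       # success: coarsen mesh
    return x, delta_m / 4                             # failure: refine mesh
```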
Journal Article

Derivative-free optimization: a review of algorithms and comparison of software implementations

TL;DR: It is found that the ability of all these solvers to obtain good solutions diminishes with increasing problem size, and that TOMLAB/MULTIMIN, TOMLAB/GLCCLUSTER, MCS, and TOMLAB/LGO are better, on average, than other derivative-free solvers in terms of solution quality within 2,500 function evaluations.
Journal Article

Survey of maneuvering target tracking. Part V. Multiple-model methods

TL;DR: This article presents a comprehensive survey of techniques for tracking maneuvering targets, without addressing the so-called measurement-origin uncertainty; the survey is centered around three generations of algorithms: autonomous, cooperating, and variable structure.
References
Journal Article

A simplex method for function minimization

TL;DR: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point.
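For reference, here is a compact sketch of the iteration this describes, with the standard reflection, expansion, contraction, and shrink coefficients; it is a simplified variant (inside contraction only, no restarts or safeguards), not a production implementation.

```python
import numpy as np

def nelder_mead(f, x0, scale=0.5, tol=1e-8, max_iter=5000):
    """Basic Nelder--Mead: order the n + 1 simplex vertices by f, then try
    to replace the worst vertex by reflection, expansion, or contraction,
    shrinking the whole simplex toward the best vertex as a last resort."""
    n = len(x0)
    simplex = [np.asarray(x0, dtype=float)]
    simplex += [simplex[0] + scale * e for e in np.eye(n)]
    fvals = [f(v) for v in simplex]
    for _ in range(max_iter):
        order = np.argsort(fvals)
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if abs(fvals[-1] - fvals[0]) < tol:
            break
        centroid = np.mean(simplex[:-1], axis=0)      # all but the worst
        xr = centroid + (centroid - simplex[-1])      # reflection
        fr = f(xr)
        if fr < fvals[0]:                             # try expanding further
            xe = centroid + 2.0 * (centroid - simplex[-1])
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fvals[-2]:                          # accept the reflection
            simplex[-1], fvals[-1] = xr, fr
        else:                                         # contract toward centroid
            xc = centroid + 0.5 * (simplex[-1] - centroid)
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                                     # shrink toward the best
                simplex = [simplex[0]] + [simplex[0] + 0.5 * (v - simplex[0])
                                          for v in simplex[1:]]
                fvals = [fvals[0]] + [f(v) for v in simplex[1:]]
    i = int(np.argmin(fvals))
    return simplex[i], fvals[i]
```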
Book

Optimization and nonsmooth analysis

TL;DR: This book develops the tools of nonsmooth analysis, most notably the generalized gradient of a locally Lipschitz function, and applies them to many aspects of analysis, including the calculus of variations and optimal control.
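For reference, the book's central objects for a locally Lipschitz function f, the generalized directional derivative and the generalized (Clarke) gradient, can be written as:

```latex
f^{\circ}(x; d) = \limsup_{y \to x,\; t \downarrow 0} \frac{f(y + t d) - f(y)}{t},
\qquad
\partial f(x) = \{\, \zeta \in \mathbb{R}^n : f^{\circ}(x; d) \ge \langle \zeta, d \rangle
\ \text{for all } d \in \mathbb{R}^n \,\}.
```

These are the Clarke derivatives that appear in the MADS convergence result cited above.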
Book

Iterative Solution of Nonlinear Equations in Several Variables

TL;DR: This book develops the theory of iterative methods for solving systems of nonlinear equations in several variables, with background material in linear algebra and results on the convergence of minimization methods and convergence under partial ordering.
Journal Article

Convergence Properties of the Nelder--Mead Simplex Method in Low Dimensions

TL;DR: This paper presents convergence properties of the Nelder--Mead algorithm applied to strictly convex functions in dimensions 1 and 2: it proves convergence to a minimizer in dimension 1 and establishes various limited convergence results in dimension 2.
Book

Numerical Methods for Unconstrained Optimization and Nonlinear Equations (Classics in Applied Mathematics, 16)

TL;DR: In this book, Dennis and Schnabel present a modular system of algorithms for unconstrained minimization and nonlinear equations, built up from Newton's method for solving one equation in one unknown and from the convergence of sequences of real numbers.
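A minimal sketch of the scalar Newton iteration the book builds from; the analytic derivative, tolerance, and iteration cap here are illustrative assumptions.

```python
def newton_1d(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method for one equation in one unknown:
    x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

# Usage: solve x**2 - 2 = 0 from x0 = 1 (converges to sqrt(2)).
root = newton_1d(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```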