Journal ISSN: 0022-3239

Journal of Optimization Theory and Applications 

Springer Science+Business Media
About: Journal of Optimization Theory and Applications is an academic journal published by Springer Science+Business Media. It publishes mainly in the areas of optimal control and theory of computation, under ISSN 0022-3239. Over its lifetime the journal has published 7,360 papers, which have received 193,082 citations.


Papers
Journal Article
TL;DR: It is conjectured that the analogy with thermodynamics can offer a new insight into optimization problems and can suggest efficient algorithms for solving them.
Abstract: We present a Monte Carlo algorithm to find approximate solutions of the traveling salesman problem. The algorithm generates randomly the permutations of the stations of the traveling salesman trip, with probability depending on the length of the corresponding route. Reasoning by analogy with statistical thermodynamics, we use the probability given by the Boltzmann-Gibbs distribution. Surprisingly enough, using this simple algorithm, one can get very close to the optimal solution of the problem or even find the true optimum. We demonstrate this on several examples. We conjecture that the analogy with thermodynamics can offer a new insight into optimization problems and can suggest efficient algorithms for solving them.

3,061 citations
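
The acceptance rule the abstract describes fits in a few lines of code. Below is a minimal Python sketch of such a Monte Carlo scheme: tours are perturbed by reversing a random segment, and a worse tour is accepted with the Boltzmann-Gibbs probability exp(-ΔL/T). The segment-reversal move, the geometric cooling schedule, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import math
import random

def tour_length(cities, tour):
    """Total length of a closed tour through the given city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(cities, temp=10.0, cooling=0.999, steps=100_000, seed=0):
    """Monte Carlo search: accept a perturbed tour with Boltzmann probability."""
    rng = random.Random(seed)
    tour = list(range(len(cities)))
    cur_len = tour_length(cities, tour)
    best, best_len = tour[:], cur_len
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(cities)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        cand_len = tour_length(cities, cand)
        # Boltzmann-Gibbs acceptance: always take improvements,
        # occasionally accept worsenings while the temperature is high.
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / temp):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        temp *= cooling                                       # geometric cooling
    return best, best_len

cities = [(random.random(), random.random()) for _ in range(30)]
tour, length = anneal_tsp(cities)
print(f"approximate tour length: {length:.3f}")
```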

Journal Article
TL;DR: The main purpose of this paper is to suggest a method for finding the minimum of a function f(x) subject to the constraint g(x) = 0. The method consists of replacing f by F = f + λg + (1/2)cg² and computing the appropriate value of the Lagrange multiplier.
Abstract: The main purpose of this paper is to suggest a method for finding the minimum of a function f(x) subject to the constraint g(x) = 0. The method consists of replacing f by F = f + λg + (1/2)cg², where c is a suitably large constant, and computing the appropriate value of the Lagrange multiplier. Only the simplest algorithm is presented. The remaining part of the paper is devoted to a survey of known methods for finding unconstrained minima, with special emphasis on the various gradient techniques that are available. This includes Newton's method and the method of conjugate gradients.

2,282 citations
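
A minimal sketch of the multiplier method the abstract outlines, on a toy equality-constrained problem of our own choosing: each outer iteration minimizes F = f + λg + (1/2)cg² without constraints (here via SciPy's BFGS), then updates λ by the standard rule λ ← λ + c·g(x). The paper presents only the simplest algorithm; so does this sketch.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem (our choice, not the paper's): minimize f(x) = x1^2 + 2*x2^2
# subject to g(x) = x1 + x2 - 1 = 0. Analytic solution: x = (2/3, 1/3).
f = lambda x: x[0]**2 + 2 * x[1]**2
g = lambda x: x[0] + x[1] - 1

def augmented_lagrangian(f, g, x0, c=10.0, iters=20):
    """Minimize F = f + lam*g + (c/2)*g^2 in x, then update the multiplier."""
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(iters):
        F = lambda x: f(x) + lam * g(x) + 0.5 * c * g(x)**2
        x = minimize(F, x, method="BFGS").x   # unconstrained inner solve
        lam += c * g(x)                       # standard multiplier update
    return x, lam

x, lam = augmented_lagrangian(f, g, x0=[0.0, 0.0])
print(x, lam, g(x))   # x -> (2/3, 1/3), lam -> -4/3, g(x) -> 0
```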

Journal Article
TL;DR: In this paper, linear programming duality theory is used to derive (i) the extremal value of the linear program as a function of the parameterizing vector and (ii) the set of values of the parameterizing vector for which the program is feasible.
Abstract: J. F. Benders devised a clever approach for exploiting the structure of mathematical programming problems with complicating variables (variables which, when temporarily fixed, render the remaining optimization problem considerably more tractable). For the class of problems specifically considered by Benders, fixing the values of the complicating variables reduces the given problem to an ordinary linear program, parameterized, of course, by the value of the complicating variables vector. The algorithm he proposed for finding the optimal value of this vector employs a cutting-plane approach for building up adequate representations of (i) the extremal value of the linear program as a function of the parameterizing vector and (ii) the set of values of the parameterizing vector for which the linear program is feasible. Linear programming duality theory was employed to derive the natural families of cuts characterizing these representations, and the parameterized linear program itself is used to generate what are usually deepest cuts for building up the representations.

2,133 citations
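
The cutting-plane loop is easiest to see on a tiny instance. The sketch below runs a Benders-style decomposition on a two-variable toy problem of our own construction (not from the paper) with complete recourse, so only optimality cuts arise; each subproblem's dual multiplier yields a cut that the master problem accumulates until the upper and lower bounds meet.

```python
from scipy.optimize import linprog

# Toy instance (ours, for illustration): choose y in [0, 10] to minimize
#   y + Q(y),  where Q(y) = min { 2x : x >= 5 - y, x >= 0 }.
# For fixed y the subproblem is an easy LP; its dual multiplier pi gives the
# Benders optimality cut  eta >= pi * (5 - y).

def subproblem(y):
    """Solve the LP in x for fixed y; return its value and a dual multiplier."""
    # min 2x  s.t.  -x <= -(5 - y),  x >= 0   (linprog uses <= constraints)
    res = linprog(c=[2.0], A_ub=[[-1.0]], b_ub=[-(5.0 - y)], bounds=[(0, None)])
    pi = 2.0 if 5.0 - y > 0 else 0.0   # dual of x >= 5 - y, read off analytically
    return res.fun, pi

cuts = []                     # each cut: eta >= pi_k * (5 - y)
y = 0.0
for _ in range(20):
    q, pi = subproblem(y)
    upper = y + q             # feasible objective value (upper bound)
    cuts.append(pi)
    # Master LP over (y, eta): min y + eta  s.t.  eta >= pi_k * (5 - y) for all k
    A = [[-pi_k, -1.0] for pi_k in cuts]          # pi_k*(5 - y) - eta <= 0
    b = [-5.0 * pi_k for pi_k in cuts]
    res = linprog(c=[1.0, 1.0], A_ub=A, b_ub=b, bounds=[(0, 10), (0, None)])
    y, lower = res.x[0], res.fun
    if upper - lower < 1e-8:  # bounds meet: optimal
        break

print(f"y* = {y:.4f}, optimal value = {lower:.4f}")   # expect y* = 5, value 5
```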

Journal Article
TL;DR: In this article, the Lipschitz constant is viewed as a weighting parameter that indicates how much emphasis to place on global versus local search, which accounts for the fast convergence of the new algorithm on the test functions.
Abstract: We present a new algorithm for finding the global minimum of a multivariate function subject to simple bounds. The algorithm is a modification of the standard Lipschitzian approach that eliminates the need to specify a Lipschitz constant. This is done by carrying out simultaneous searches using all possible constants from zero to infinity. On nine standard test functions, the new algorithm converges in fewer function evaluations than most competing methods. The motivation for the new algorithm stems from a different way of looking at the Lipschitz constant. In particular, the Lipschitz constant is viewed as a weighting parameter that indicates how much emphasis to place on global versus local search. In standard Lipschitzian methods, this constant is usually large because it must equal or exceed the maximum rate of change of the objective function. As a result, these methods place a high emphasis on global search and exhibit slow convergence. In contrast, the new algorithm carries out simultaneous searches using all possible constants, and therefore operates at both the global and local level. Once the global part of the algorithm finds the basin of convergence of the optimum, the local part of the algorithm quickly and automatically exploits it. This accounts for the fast convergence of the new algorithm on the test functions.

1,994 citations
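
The "all possible constants" idea can be caricatured in a few lines. The following 1-D sketch is a deliberate simplification of the paper's algorithm (none of its rectangle bookkeeping or convex-hull selection): it approximates the continuum of Lipschitz constants with a log-spaced grid, and in each round splits every interval that minimizes the Lipschitz lower bound for some constant on that grid, so large intervals (global search) and low-valued intervals (local search) are refined together. The test function and all parameters are our assumptions.

```python
import numpy as np

def direct_1d(f, lo, hi, iters=50):
    """Simplified 1-D take on Lipschitzian search without a fixed constant:
    keep intervals sampled at their midpoints, and split every interval that
    is most promising for SOME constant K on a log-spaced grid."""
    intervals = [(lo, hi)]
    values = [f((lo + hi) / 2)]
    for _ in range(iters):
        widths = np.array([b - a for a, b in intervals])
        vals = np.array(values)
        # For each trial K, the most promising interval minimizes the
        # Lipschitz lower bound  f(mid) - K * width / 2.
        chosen = {int(np.argmin(vals - K * widths / 2))
                  for K in np.logspace(-3, 3, 25)}
        for i in sorted(chosen, reverse=True):   # split each chosen interval
            a, b = intervals.pop(i)
            values.pop(i)
            third = (b - a) / 3                  # trisect, sample new midpoints
            for k in range(3):
                seg = (a + k * third, a + (k + 1) * third)
                intervals.append(seg)
                values.append(f((seg[0] + seg[1]) / 2))
    best = int(np.argmin(values))
    a, b = intervals[best]
    return (a + b) / 2, values[best]

x, fx = direct_1d(lambda x: np.sin(x) + np.sin(3 * x) / 3, 0.0, 2 * np.pi)
print(f"x = {x:.4f}, f(x) = {fx:.4f}")
```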

Journal Article
TL;DR: In this article, the convergence properties of a block coordinate descent method applied to minimize a nondifferentiable, nonconvex function f(x_1, ..., x_N) with certain separability and regularity properties were studied.
Abstract: We study the convergence properties of a (block) coordinate descent method applied to minimize a nondifferentiable (nonconvex) function f(x_1, ..., x_N) with certain separability and regularity properties. Assuming that f is continuous on a compact level set, the subsequence convergence of the iterates to a stationary point is shown when either f is pseudoconvex in every pair of coordinate blocks from among N-1 coordinate blocks or f has at most one minimum in each of N-2 coordinate blocks. If f is quasiconvex and hemivariate in every coordinate block, then the assumptions of continuity of f and compactness of the level set may be relaxed further. These results are applied to derive new (and old) convergence results for the proximal minimization algorithm, an algorithm of Arimoto and Blahut, and an algorithm of Han. They are applied also to a problem of blind source separation.

1,988 citations
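
For blocks of size one, the method in question reduces to cyclic coordinate minimization. Below is a minimal sketch on a smooth convex toy function of our own choosing (the paper's interest is the harder nondifferentiable, nonconvex case), using SciPy's scalar minimizer for each one-dimensional subproblem.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def block_coordinate_descent(f, x0, iters=100):
    """Cyclically minimize f over one coordinate at a time, holding the rest
    fixed (blocks of size 1 here; the paper allows general blocks)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        for i in range(len(x)):
            def along_i(t, i=i):          # f restricted to coordinate i
                y = x.copy()
                y[i] = t
                return f(y)
            x[i] = minimize_scalar(along_i).x
    return x

# Toy smooth example (ours): a strictly convex quadratic with coupled terms.
f = lambda x: (x[0] - 1)**2 + (x[1] + 2)**2 + 0.5 * x[0] * x[1]
print(block_coordinate_descent(f, [0.0, 0.0]))   # -> approx [ 1.6 -2.4 ]
```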

Performance
Metrics
No. of papers from the Journal in previous years
Year    Papers
2023    98
2022    201
2021    198
2020    191
2019    213
2018    177