
Showing papers in "Optimization Methods & Software in 2002"


Journal ArticleDOI
TL;DR: It is shown that, under mild regularity conditions, such a min-max problem generates a probability distribution on the set of permissible distributions, with the min-max problem being equivalent to the expected value problem with respect to the corresponding weighted distribution.
Abstract: In practical applications of stochastic programming the involved probability distributions are never known exactly. One can try to hedge against the worst expected value resulting from a considered set of permissible distributions. This leads to a min-max formulation of the corresponding stochastic programming problem. We show that, under mild regularity conditions, such a min-max problem generates a probability distribution on the set of permissible distributions with the min-max problem being equivalent to the expected value problem with respect to the corresponding weighted distribution. We consider examples of the news vendor problem, the problem of moments and problems involving unimodal distributions. Finally, we discuss the Monte Carlo sample average approach to solving such min-max problems.

258 citations


Journal ArticleDOI
TL;DR: A self-contained convergence analysis that uses the formalism of the theory of self-concordant functions, but for the main results, direct proofs based on the properties of the logarithmic function are given.
Abstract: We present a survey of nondifferentiable optimization problems and methods with special focus on the analytic center cutting plane method. We propose a self-contained convergence analysis that uses the formalism of the theory of self-concordant functions, but for the main results, we give direct proofs based on the properties of the logarithmic function. We also provide an in-depth analysis of two extensions that are very relevant to practical problems: the case of multiple cuts and the case of deep cuts. We further examine extensions to problems including feasible sets partially described by an explicit barrier function, and to the case of nonlinear cuts. Finally, we review several implementation issues and discuss some applications.

209 citations


Journal ArticleDOI
Jos F. Sturm1
TL;DR: This is the first article to provide an elaborate discussion of the implementation of the primal-dual interior point method for mixed semidefinite and second order cone optimization in SeDuMi.
Abstract: There is a large number of implementational choices to be made for the primal-dual interior point method in the context of mixed semidefinite and second order cone optimization. This paper presents such implementational issues in a unified framework, and compares the choices made by different research groups. It is also the first paper to provide an elaborate discussion of the implementation in SeDuMi.

204 citations


Journal ArticleDOI
TL;DR: A greedy randomized adaptive search procedure (GRASP), a variable neighborhood search (VNS), and a path-relinking (PR) intensification heuristic for MAX-CUT are proposed and tested, and computational results indicate that these randomized heuristics find near-optimal solutions.
Abstract: Given an undirected graph with edge weights, the MAX-CUT problem consists in finding a partition of the nodes into two subsets, such that the sum of the weights of the edges having endpoints in different subsets is maximized. It is a well-known NP-hard problem with applications in several fields, including VLSI design and statistical physics. In this article, a greedy randomized adaptive search procedure (GRASP), a variable neighborhood search (VNS), and a path-relinking (PR) intensification heuristic for MAX-CUT are proposed and tested. New hybrid heuristics that combine GRASP, VNS, and PR are also proposed and tested. Computational results indicate that these randomized heuristics find near-optimal solutions. On a set of standard test problems, new best known solutions were produced for many of the instances.

200 citations
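The MAX-CUT objective and the flavor of a randomized multistart heuristic can be sketched as follows. This is a minimal illustration (random construction plus single-flip local search), not the paper's full GRASP/VNS/PR machinery:

```python
import random

def cut_weight(edges, side):
    """Total weight of edges whose endpoints lie on different sides of the partition."""
    return sum(w for u, v, w in edges if side[u] != side[v])

def local_search(edges, side, nodes):
    """Flip single nodes while the cut improves (a basic MAX-CUT neighborhood)."""
    improved = True
    while improved:
        improved = False
        for u in nodes:
            # Gain of flipping u: uncut incident edges become cut and vice versa.
            gain = sum(w if side[u] == side[v] else -w
                       for a, b, w in edges for x, v in ((a, b), (b, a)) if x == u)
            if gain > 0:
                side[u] = 1 - side[u]
                improved = True
    return side

def grasp_maxcut(edges, nodes, iters=50, seed=0):
    """Multistart: randomized construction followed by local search, keep the best."""
    rng = random.Random(seed)
    best, best_w = None, float("-inf")
    for _ in range(iters):
        side = {u: rng.randint(0, 1) for u in nodes}
        side = local_search(edges, side, nodes)
        w = cut_weight(edges, side)
        if w > best_w:
            best, best_w = dict(side), w
    return best, best_w
```

On a unit-weight 4-cycle the optimal cut separates alternating nodes and has weight 4, which the multistart reliably finds.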


Journal ArticleDOI
TL;DR: Finite termination of a Newton method to the unique global solution, starting from any point in R^n, is shown; no stepsize is required if the function is well conditioned, and an Armijo stepsize is used otherwise.
Abstract: A fundamental classification problem of data mining and machine learning is that of minimizing a strongly convex, piecewise quadratic function on the n-dimensional real space R^n. We show finite termination of a Newton method to the unique global solution starting from any point in R^n. If the function is well conditioned, then no stepsize is required from the start, and if not, an Armijo stepsize is used. In either case, the algorithm finds the unique global minimum solution in a finite number of iterations.

170 citations
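The Newton/Armijo scheme in the abstract can be sketched generically. The smooth quadratic test objective below is an illustrative stand-in (the paper's objective is strongly convex and piecewise quadratic, and this sketch does not reproduce the finite-termination results):

```python
import numpy as np

def damped_newton(f, grad, hess, x0, tol=1e-10, beta=0.5, sigma=1e-4, max_iter=100):
    """Newton's method with an Armijo backtracking stepsize (generic sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)                   # Newton direction
        t = 1.0
        while f(x + t * d) > f(x) + sigma * t * (g @ d):   # Armijo condition
            t *= beta
        x = x + t * d
    return x

# Illustrative strongly convex quadratic: f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
hess = lambda x: A
x_star = damped_newton(f, grad, hess, np.zeros(2))
```

On a well-conditioned quadratic the full Newton step (t = 1) passes the Armijo test immediately and the method terminates in one iteration, mirroring the "no stepsize required" case.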


Journal ArticleDOI
TL;DR: An overview of the development and history of the bundle methods from the seventies to the present is given, which focuses on the convex unconstrained case with a single objective function.
Abstract: Bundle methods are at the moment the most efficient and promising methods for nonsmooth optimization. They have been successfully used in many practical applications, for example, in economics, mechanics, engineering and optimal control. The aim of this paper is to give an overview of the development and history of the bundle methods from the seventies to the present. For simplicity, we first concentrate on the convex unconstrained case with a single objective function. The methods are later extended to nonconvex, constrained and multicriteria cases.

164 citations


Journal ArticleDOI
TL;DR: It is shown how such approximations may be expressed through ordinary differential equations with coefficients given in explicit analytical form and this allows exact parametric representation of reach tubes through families of external and internal ellipsoidal tubes as compared with earlier methods based on constructing one or several isolated approximating tubes.
Abstract: The paper describes the calculation of the reach sets and tubes for linear control systems with time-varying coefficients and ellipsoidal hard bounds on the controls and initial states. This is achieved by parametrized families of external and internal ellipsoidal approximations constructed such that they touch the reach sets at every point of their boundary at any instant of time (both from outside and inside, respectively). The surface of the reach tube would then be entirely covered by curves that belong to the approximating tubes. It is further shown how such approximations may be expressed through ordinary differential equations with coefficients given in explicit analytical form. This allows exact parametric representation of reach tubes through families of external and internal ellipsoidal tubes as compared with earlier methods based on constructing one or several isolated approximating tubes. The approach opens new routes to the arrangement of efficient numerical algorithms. The present Part I dea...

116 citations


Journal ArticleDOI
TL;DR: The numerical experience with the probabilistic lot-sizing problem shows the potential of the solution approach and the efficiency of the algorithms implemented.
Abstract: Stochastic integer programming problems under probabilistic constraints are considered. Deterministic equivalent formulations of the original problem are obtained by using p-efficient points of the distribution function of the right hand side vector. A branch and bound solution method is proposed based on a partial enumeration of the set of these points. The numerical experience with the probabilistic lot-sizing problem shows the potential of the solution approach and the efficiency of the algorithms implemented.

109 citations


Journal ArticleDOI
TL;DR: A new approach that hybridizes direct search methods with the simulated annealing meta-heuristic for finding a global minimum of a nonlinear function with continuous variables; numerical experiments show that SSA and DSSA are promising in practice.
Abstract: In this article we give a new approach of hybrid direct search methods with meta-heuristics of simulated annealing for finding a global minimum of a nonlinear function with continuous variables. First, we suggest a Simple Direct Search (SDS) method, which comes from some ideas of other well-known direct search methods. Since our goal is to find global minima and the SDS method is still a local search method, we hybridize it with the standard simulated annealing to design a new method, called Simplex Simulated Annealing (SSA) method, which is expected to have some ability to look for a global minimum. To obtain faster convergence, we first accelerate the cooling schedule in SSA, and in the final stage, we apply Kelley's modification of the Nelder-Mead method on the best solutions found by the accelerated SSA method to improve the final results. We refer to this last method as the Direct Search Simulated Annealing (DSSA) method. The performance of SSA and DSSA is reported through extensive numerical experim...

103 citations
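The acceptance rule at the heart of simulated annealing, which SSA and DSSA embed into direct search, can be sketched as follows. The random-walk proposal, cooling rate, and test function are illustrative assumptions rather than the SDS/SSA/DSSA algorithms themselves:

```python
import math, random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.95, iters=2000, seed=1):
    """Plain simulated annealing with a geometric cooling schedule: a minimal
    sketch of the meta-heuristic component only."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x0), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        # Metropolis acceptance: always accept improvements, sometimes accept uphill moves.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = list(cand), fc
        t *= cooling   # cooling drives the search toward pure descent as t -> 0
    return best, fbest

# Multimodal test function with global minimum 0 at the origin (illustrative).
f = lambda x: sum(xi * xi - 2.0 * math.cos(3.0 * xi) + 2.0 for xi in x)
```

The uphill-acceptance probability exp(-Δf/t) is what gives the hybrid methods their ability to escape the local minima that a pure direct search would get stuck in.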


Journal ArticleDOI
TL;DR: This paper shows that the inexact Levenberg-Marquardt method (ILMM), which does not require computing exact search directions, has a superlinear rate of convergence under the same local error bound assumption and proposes the ILMM with Armijo's stepsize rule that has global convergence under mild conditions.
Abstract: In this paper, we consider convergence properties of the Levenberg-Marquardt method for solving nonlinear equations. It is well-known that the nonsingularity of Jacobian at a solution guarantees that the Levenberg-Marquardt method has a quadratic rate of convergence. Recently, Yamashita and Fukushima showed that the Levenberg-Marquardt method has a quadratic rate of convergence under the local error bound assumption, which is milder than the nonsingularity of Jacobian. In this paper, we show that the inexact Levenberg-Marquardt method (ILMM), which does not require computing exact search directions, has a superlinear rate of convergence under the same local error bound assumption. Moreover, we propose the ILMM with Armijo's stepsize rule that has global convergence under mild conditions.

101 citations
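The exact-direction Levenberg-Marquardt iteration discussed in the abstract can be sketched as below, with the regularization parameter taken as ||F(x_k)||^2 in the spirit of the local error bound analyses the abstract cites; the test system is an illustrative example (the paper's ILMM solves these linear systems only approximately):

```python
import numpy as np

def levenberg_marquardt(F, J, x0, tol=1e-10, max_iter=100):
    """Exact-direction Levenberg-Marquardt for F(x) = 0 (sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x)
        mu = Fx @ Fx        # regularization parameter mu_k = ||F(x_k)||^2
        d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ Fx)
        x = x + d
    return x

# Small test system: x^2 + y^2 = 1 and y = x, with roots at +-(1/sqrt(2), 1/sqrt(2)).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[1] - v[0]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [-1.0, 1.0]])
root = levenberg_marquardt(F, J, np.array([1.0, 0.5]))
```

The mu * I term keeps the linear system solvable even when the Gauss-Newton matrix J^T J is singular, which is the point of the method near solutions with singular Jacobian.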


Journal ArticleDOI
TL;DR: A new approach to constrained optimization is proposed that is based on direct and adjoint vector-function evaluations in combination with secant updating, whose main goals are the avoidance of constraint Jacobian evaluations and the reduction of the linear algebra cost per iteration in the dense, unstructured case.
Abstract: In this article we propose a new approach to constrained optimization that is based on direct and adjoint vector-function evaluations in combination with secant updating. The main goal is the avoidance of constraint Jacobian evaluations and the reduction of the linear algebra cost per iteration to $ {\cal O}(n + m)^2 $ operations in the dense, unstructured case. A crucial building block is a transformation invariant two-sided-rank-one update (TR1) for approximations to the (active) constraint Jacobian. In this article we elaborate its basic properties and report preliminary numerical results for the new total quasi-Newton approach on some small equality constrained problems. A nullspace implementation under development is briefly described. The tasks of identifying active constraints, safeguarding convergence and many other important issues in constrained optimization are not addressed in detail.

Journal ArticleDOI
TL;DR: It appears that the proposed technique may well work for nonellipsoidal, box-valued constraints; this broadens the range of applications of the approach and opens new routes to the arrangement of efficient numerical algorithms.
Abstract: Following Part I, this article continues to describe the calculation of the reach sets and tubes for linear control systems with time-varying coefficients and ellipsoidal hard bounds on the controls and initial states. It deals with parametrized families of internal ellipsoidal approximations constructed such that they touch the reach sets at every point of their boundary at any instant of time. The reach tubes are thus touched internally by ellipsoidal tubes along some curves. The ellipsoidal tubes are chosen here in such a way that the touching curves do not intersect and that the boundary of the reach tube would be entirely covered by such curves. This allows exact parametric representation of reach tubes through unions of tight internal ellipsoidal tubes as compared with earlier methods based on constructing one or several isolated approximating tubes. The method of external and internal ellipsoidal approximations is then propagated to systems with box-valued hard bounds on the controls and initial st...

Journal ArticleDOI
TL;DR: A new method for the unconstrained minimization of a function presented as a difference of two convex functions is proposed, based on continuous approximations to the Demyanov-Rubinov quasidifferential.
Abstract: In this paper, we propose a new method for the unconstrained minimization of a function presented as a difference of two convex functions. This method is based on continuous approximations to the Demyanov-Rubinov quasidifferential.

Journal ArticleDOI
TL;DR: This article presents a generic primal-dual interior-point algorithm for linear optimization in which the search direction depends on a univariate kernel function which is also used as proximity measure in the analysis of the algorithm.
Abstract: In this article we present a generic primal-dual interior-point algorithm for linear optimization in which the search direction depends on a univariate kernel function that is also used as a proximity measure in the analysis of the algorithm. We present some powerful tools for the analysis of the algorithm under the assumption that the kernel function satisfies three mild, easy-to-check conditions (i.e., exponential convexity, superconvexity and monotonicity of the second derivative). The approach is demonstrated by introducing a new kernel function and showing that the corresponding large-update algorithm improves the iteration complexity by a factor n^{1/4} when compared with the classical method, which is based on the use of the logarithmic barrier function.
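A concrete instance of a kernel function and its induced proximity measure is the classical logarithmic-barrier kernel, sketched below. The new kernel function introduced in the article is different, so this serves only as the baseline the article compares against:

```python
import math

def psi_log(t):
    """Logarithmic-barrier kernel psi(t) = (t^2 - 1)/2 - log t, the classical
    instance of the univariate kernel functions the article studies."""
    return (t * t - 1.0) / 2.0 - math.log(t)

def proximity(v):
    """Proximity measure Psi(v) = sum_i psi(v_i): zero exactly when all scaled
    complementarity products v_i equal 1 (i.e., on the central path), positive
    elsewhere."""
    return sum(psi_log(t) for t in v)
```

Since psi attains its minimum value 0 at t = 1, the measure Psi vanishes only on the central path, which is what makes it usable both as a proximity measure and as the source of the search direction.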

Journal ArticleDOI
TL;DR: It is shown that the least exact penalty parameter of an equivalent parametric optimization problem can be diminished and the Lipschitz penalty function with a small penalty parameter is more suitable for solving some nonconvex constrained problems than the classical penalty function.
Abstract: In this article, we study the nonlinear penalization of a constrained optimization problem and show that the least exact penalty parameter of an equivalent parametric optimization problem can be diminished. We apply the theory of increasing positively homogeneous (IPH) functions so as to derive a simple formula for computing the least exact penalty parameter for the classical penalty function through perturbation function. We establish that various equivalent parametric reformulations of constrained optimization problems lead to reduction of exact penalty parameters. To construct a Lipschitz penalty function with a small exact penalty parameter for a Lipschitz programming problem, we make a transformation to the objective function by virtue of an increasing concave function. We present results of numerical experiments, which demonstrate that the Lipschitz penalty function with a small penalty parameter is more suitable for solving some nonconvex constrained problems than the classical penalty function.
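The role of the least exact penalty parameter can be illustrated with the classical l1 penalty on a toy problem. The problem, grid minimizer, and parameter values below are illustrative assumptions; the article's construction uses IPH functions and Lipschitz penalties:

```python
def classical_penalty(f, g, c):
    """Classical l1 penalty for min f(x) s.t. g(x) <= 0 (toy sketch)."""
    return lambda x: f(x) + c * max(0.0, g(x))

# Toy problem: minimize x subject to 1 - x <= 0. The constrained minimizer is
# x = 1, and the least exact penalty parameter for the classical penalty is c = 1.
f = lambda x: x
g = lambda x: 1.0 - x

def grid_argmin(h, lo=-3.0, hi=3.0, n=6001):
    """Brute-force minimizer over an even grid (illustrative)."""
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return min(xs, key=h)

x_small = grid_argmin(classical_penalty(f, g, 0.5))   # c below the threshold: exactness fails
x_large = grid_argmin(classical_penalty(f, g, 2.0))   # c above the threshold: recovers x = 1
```

Below the least exact parameter the penalized problem is unbounded toward the infeasible side (the grid minimum sits at the box boundary), while above it the penalized minimizer coincides with the constrained one.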

Journal ArticleDOI
TL;DR: This article surveys some recent results on the generation of implicitly given hypergraphs and their applications in Boolean and integer programming, data mining, reliability theory, and combinatorics, and considers the problem of incrementally generating the family F_π of all minimal subsets satisfying a monotone property π.
Abstract: This article surveys some recent results on the generation of implicitly given hypergraphs and their applications in Boolean and integer programming, data mining, reliability theory, and combinatorics. Given a monotone property π over the subsets of a finite set V, we consider the problem of incrementally generating the family F_π of all minimal subsets satisfying property π, when π is given by a polynomial-time satisfiability oracle. For a number of interesting monotone properties, the family F_π turns out to be uniformly dual-bounded, allowing for the incrementally efficient enumeration of the members of F_π. Important applications include the efficient generation of minimal infrequent sets of a database (data mining), minimal connectivity ensuring collections of subgraphs from a given list (reliability theory), minimal feasible solutions to a system of monotone inequalities in integer variables (integer programming), minimal spanning collections of subspaces from a given list (linear algebra) and max...

Journal ArticleDOI
TL;DR: New lower and upper bounds are given for the probability of the union of events using the new concept of hypercherry trees, generalisations of Tomescu's bounds in the same sense as the upper bound by Bukszár and Prékopa was a generalisation of the Hunter bound.
Abstract: In this article new lower and upper bounds are given for the probability of the union of events. For this purpose the new concept of hypercherry trees has been introduced. Earlier, the concept of cherry trees and their application to bounding the probability of the union of events was introduced by Bukszár and Prékopa. The cherry tree bound is always an upper bound, and it can be regarded as a generalisation of the upper bound introduced by Hunter by means of maximum weight spanning trees. Later the Hunter bound was generalised by Tomescu, who used the concept of hypertrees in the framework of uniform hypergraphs; on the basis of these new hypergraph structures it became possible to define not only upper but also lower bounds on the probability of the union of events. The new bounds of the paper are generalisations of Tomescu's bounds in the same sense as the upper bound by Bukszár and Prékopa was a generalisation of the Hunter bound. The efficiency of the new bounds is illustrated on some test pro...
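The Hunter bound that the cherry tree and hypercherry tree bounds generalize can be sketched directly; the three-event example below is an illustrative construction, not from the article:

```python
def hunter_bound(singletons, pairs):
    """Hunter's spanning-tree upper bound on P(A_1 u ... u A_n): the sum of
    the P(A_i) minus the weight of a maximum-weight spanning tree on the
    graph whose edge (i, j) carries weight P(A_i n A_j)."""
    n = len(singletons)
    edges = sorted(pairs.items(), key=lambda kv: -kv[1])   # Kruskal, heaviest first
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    tree_weight = 0.0
    for (i, j), w in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                        # edge joins two components: keep it
            parent[ri] = rj
            tree_weight += w
    return sum(singletons) - tree_weight

# Illustrative events on two fair coin tosses: A0 = first toss heads,
# A1 = second toss heads, A2 = both tosses equal. Here P(union) = 1 exactly.
p = [0.5, 0.5, 0.5]
pair = {(0, 1): 0.25, (0, 2): 0.25, (1, 2): 0.25}
ub = hunter_bound(p, pair)
```

On this example the spanning tree removes 0.5 of pairwise overlap from the inclusion-exclusion sum 1.5, so the bound is tight.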

Journal ArticleDOI
TL;DR: This paper is focused on the case where the groups are defined in an ordinal way and a new approach is presented to determine the parameters of an outranking relation classification model using a training sample of alternatives.
Abstract: Classification problems involve the assignment of a discrete set of alternatives described over some criteria into predefined groups. This paper is focused on the case where the groups are defined in an ordinal way. The methodology proposed in the paper is based on the multicriteria decision aid (MCDA) approach and the outranking relations theory. A new approach is presented to determine the parameters of an outranking relation classification model using a training sample of alternatives. The proposed methodology is extensively validated in terms of its classification performance through a comprehensive Monte Carlo simulation. The results of the methodology are compared to linear, quadratic and logit analysis.

Journal ArticleDOI
TL;DR: It is shown that a Riccati-based Multistage Stochastic Programming solver for problems with separable convex linear/nonlinear objective developed in previous papers can be extended to solve more general Stochastic Programming problems.
Abstract: We show that a Riccati-based Multistage Stochastic Programming solver for problems with separable convex linear/nonlinear objective developed in previous papers can be extended to solve more general Stochastic Programming problems. With a Lagrangean relaxation approach, also local and global equality constraints can be handled by the Riccati-based primal interior point solver. The efficiency of the approach is demonstrated on a 10 staged stochastic programming problem containing both local and global equality constraints. The problem has 1.9 million scenarios, 67 million variables and 119 million constraints, and was solved in 97 min on a 32 node PC cluster.

Journal ArticleDOI
TL;DR: Modified versions of the rollout algorithms to solve deterministic optimization problems are proposed, defined in such a way as to limit the computational cost without worsening the quality of the final approximate solution obtained.
Abstract: Rollout algorithms are new computational approaches used to determine near-optimal solutions for deterministic and stochastic combinatorial optimization problems. They are built on a generic base heuristic with the aim of constructing another, hopefully improved, heuristic. However, rollout algorithms can be very expensive from the computational point of view, so their use for practical applications can be limited. In this article, we propose modified versions of the rollout algorithms to solve deterministic optimization problems, defined in such a way as to limit the computational cost without worsening the quality of the final approximate solution obtained.
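The rollout principle over a base heuristic can be sketched on a toy 0/1 knapsack instance. The greedy base heuristic and the instance below are illustrative assumptions, and the article's modified variants (which reduce how often such completions are evaluated) are not implemented here:

```python
def greedy_complete(values, weights, cap, k, taken_value):
    """Base heuristic: from item k on, take each item in order whenever it fits."""
    v = taken_value
    for i in range(k, len(values)):
        if weights[i] <= cap:
            cap -= weights[i]
            v += values[i]
    return v

def rollout_knapsack(values, weights, cap):
    """One-step rollout over the base heuristic: at each item, score both
    decisions (skip / take) by the value of a greedy completion and commit
    to the better one."""
    value = 0.0
    for k in range(len(values)):
        skip_score = greedy_complete(values, weights, cap, k + 1, value)
        take_score = float("-inf")
        if weights[k] <= cap:
            take_score = greedy_complete(values, weights, cap - weights[k], k + 1,
                                         value + values[k])
        if take_score >= skip_score:
            cap -= weights[k]
            value += values[k]
    return value

# Illustrative instance where rollout beats the plain greedy base heuristic.
values, weights, cap = [10.0, 7.0, 6.0], [5, 4, 3], 7
base = greedy_complete(values, weights, cap, 0, 0.0)   # plain greedy: 10.0
improved = rollout_knapsack(values, weights, cap)      # rollout: 13.0
```

Rollout inherits the base heuristic's quality guarantee (it can only match or improve it), but at the cost of one completion run per decision, which is the expense the article's modifications target.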

Journal ArticleDOI
TL;DR: This paper describes how to discover interesting characteristics of mathematical programs via sampling, and describes solutions to several difficulties that arise in practice.
Abstract: It is often important to know more about the characteristics of a mathematical program than the simple information that is returned by a solver. For example, to choose an appropriate solution algorithm, one may need to know something about the convexity of the functions in a nonlinear program, or about which constraints are redundant. For the complex model forms, particularly nonlinear programs and mixed-integer programs, random sampling can discover a great deal of information. This paper describes how to discover interesting characteristics of mathematical programs via sampling, and describes solutions to several difficulties that arise in practice. Several new techniques for discovering characteristics and for improving the accuracy of the characterizations are described.
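One simple instance of characteristic discovery by sampling is a randomized midpoint convexity test, sketched below. The sampling box, trial count, and tolerance are illustrative assumptions (the paper covers many more characteristics and techniques for improving the accuracy of the characterizations):

```python
import random

def looks_convex(f, dim, trials=2000, box=(-5.0, 5.0), seed=0, tol=1e-9):
    """Randomized midpoint test: report False as soon as a sampled pair violates
    f((x+y)/2) <= (f(x)+f(y))/2. Sampling can disprove convexity with a
    certificate, but can only suggest it, never prove it."""
    rng = random.Random(seed)
    lo, hi = box
    for _ in range(trials):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        y = [rng.uniform(lo, hi) for _ in range(dim)]
        mid = [(a + b) / 2.0 for a, b in zip(x, y)]
        if f(mid) > (f(x) + f(y)) / 2.0 + tol:
            return False          # certificate of nonconvexity found
    return True                   # no violation found: "apparently convex"

convex_f = lambda v: sum(a * a for a in v)          # convex
nonconvex_f = lambda v: sum(a * a * a for a in v)   # nonconvex on the box
```

The asymmetry in the returned answer (a violation is conclusive, its absence is not) is the basic caveat of all sampling-based characterization.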

Journal ArticleDOI
TL;DR: The article shows that the problem of minimizing the function F reduces to solving a family of minimax problems, and numerical methods for finding stationary points satisfying the necessary condition are proposed.
Abstract: The following problem is discussed: Find a (constrained or unconstrained) minimizer of the function $$ F(x) = \max _{y\in G_1}\min _{z\in G_2} \varphi (x,y,z),$$ where $ \varphi (x,y,z) $ is a function defined and continuous on $ R^n\times R^p\times R^q $, and $ G_1\subset R^p $ and $ G_2\subset R^q $ are compact sets in the respective spaces. The case where the function $ \varphi $ is continuously differentiable was studied earlier. It is well known that the problem of minimizing the function F is nonconvex and multiextremal. It is shown in the article that the problem is reduced to solving a family of minimax problems. The discrete case (where $ G_1 $ and $ G_2 $ contain a finite number of points) is discussed in more detail. In such a case the problem is reduced to solving a finite number of minimax problems. A necessary condition for a point to be a global minimizer and a sufficient condition for a point to be a local one are proved. Numerical methods for finding stationary points (i.e. points satisfying the necessary condition) are p...
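In the discrete case the function F is directly computable as finitely many max-min evaluations, which can be brute-forced on a tiny instance. The sets, the function φ, and the grid search below are illustrative; the article's minimax reductions and optimality conditions are not reproduced:

```python
def discrete_maximin(phi, Y, Z):
    """F(x) = max_{y in Y} min_{z in Z} phi(x, y, z) for finite sets Y and Z."""
    return lambda x: max(min(phi(x, y, z) for z in Z) for y in Y)

# Illustrative instance: phi(x, y, z) = (x - y + z)^2 with Y = Z = {-1, 1}.
phi = lambda x, y, z: (x - y + z) ** 2
F = discrete_maximin(phi, [-1.0, 1.0], [-1.0, 1.0])

# Brute-force minimization of F over a grid (F is nonconvex in general,
# so a global scan stands in for the article's numerical methods).
xs = [i / 100.0 for i in range(-200, 201)]
x_best = min(xs, key=F)
```

For this instance the inner minimization can always cancel the outer maximization at x = 0, so the global minimum value of F is 0.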

Journal ArticleDOI
TL;DR: A tailored Lagrangean heuristic that overcomes the drawbacks of classical procedures is described, able to provide good upper bounds for large-scale real-world applications, whereas general-purpose ILP solvers fail to determine even a feasible solution.
Abstract: This article addresses the Capacitated Plant Location Problem with Multiple Facilities in the Same Site (CPLPM), a special case of the classical Capacitated Plant Location Problem (CPLP) in which several facilities (possibly having different fixed costs and different capacities) can be opened in the same site. Applications of the CPLPM arise in a number of contexts, such as the location of polling stations. Although the CPLPM can be modelled and solved as a standard CPLP, this approach usually performs very poorly. In this article we describe a tailored Lagrangean heuristic that overcomes the drawbacks of classical procedures. Our algorithm was tested on a set of randomly generated instances and on a large real-world instance. Computational results indicate that our algorithm is able to provide high quality lower and upper bounds in a reasonable amount of time. In particular, our technique is able to provide good upper bounds for large-scale real-world applications, whereas general-purpose ILP solvers fail to determine even a feasible solution.

Journal ArticleDOI
TL;DR: This work proposes a more lenient stopping rule for the line search that is suitable for objective univariate functions that are not necessarily convex in the bracketed search interval and describes a remedy to special cases where the minimum point of the cubic interpolant constructed in each line search iteration is very close to zero.
Abstract: An iterative univariate minimizer (line search) is often used to generate a steplength in each step of a descent method for minimizing a multivariate function. The line search performance strongly depends on the choice of the stopping rule enforced. This termination criterion and other algorithmic details also affect the overall efficiency of the multivariate minimization procedure. Here we propose a more lenient stopping rule for the line search that is suitable for objective univariate functions that are not necessarily convex in the bracketed search interval. We also describe a remedy to special cases where the minimum point of the cubic interpolant constructed in each line search iteration is very close to zero. Results in the context of the truncated Newton package TNPACK for 18 standard test functions, as well as molecular potential functions, show that these strategies can lead to modest performance improvements in general, and significant improvements in special cases.
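The cubic interpolation step used inside such line searches can be sketched with the standard two-point formula (function values and derivatives at the bracket ends). This is the textbook cubic-interpolant minimizer, not the paper's specific stopping rule or safeguard; the paper's remedy addresses the case where this point falls very close to zero:

```python
import math

def cubic_minimizer(a, fa, dfa, b, fb, dfb):
    """Minimizer of the cubic interpolating f(a), f'(a), f(b), f'(b):
    the standard two-point cubic interpolation formula used in line searches."""
    d1 = dfa + dfb - 3.0 * (fa - fb) / (a - b)
    d2 = math.copysign(math.sqrt(d1 * d1 - dfa * dfb), b - a)
    return b - (b - a) * (dfb + d2 - d1) / (dfb - dfa + 2.0 * d2)

# On f(t) = t^3 - 3t over [0, 2] the interpolant is f itself, so the formula
# returns the exact interior minimizer t = 1.
f  = lambda t: t**3 - 3.0 * t
df = lambda t: 3.0 * t**2 - 3.0
t_min = cubic_minimizer(0.0, f(0.0), df(0.0), 2.0, f(2.0), df(2.0))
```

Because the formula is exact for cubics (and therefore for quadratics), a line search built on it converges quickly once the bracket is small, which is why the stopping rule, rather than the interpolation itself, dominates overall efficiency.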

Journal ArticleDOI
TL;DR: This article first exploits some interesting properties of a self-regular proximity function, proposed recently by the authors of this work and Roos, and uses it to define a neighborhood of the central path of linear optimization (LO), and shows that this self-regularity-based IPM can also predict precisely the change of the duality gap as the standard IPM does.
Abstract: Primal-dual interior-point methods (IPMs) have shown their power in solving large classes of optimization problems. However, at present there is still a gap between the practical behavior of these algorithms and their theoretical worst-case complexity results, with respect to the strategies of updating the duality gap parameter in the algorithm. The so-called small-update IPMs enjoy the best known theoretical worst-case iteration bound but work very poorly in practice, while the so-called large-update IPMs have superior practical performance but with relatively weaker theoretical results. In this article, by restricting ourselves to linear optimization (LO), we first exploit some interesting properties of a self-regular proximity function, proposed recently by the authors of this work and Roos, and use it to define a neighborhood of the central path. These simple but interesting properties of the proximity function indicate that, when the current iterate is in a large neighborhood of the central path, then the l...

Journal ArticleDOI
TL;DR: This paper recalls these rules, but also summarises the different possible strategies for the differentiation of a multi-level program in reverse mode, and presents complexity measures of the adjoint codes generated using these strategies in terms of execution time and memory requirement.
Abstract: Some papers present the rules to apply to a straight line program to differentiate it in reverse mode, as well as theoretical complexity measures. This paper recalls these rules, but also summarises the different possible strategies for the differentiation of a multi-level program in reverse mode. We focus on the reverse mode, because the computation of derivatives in reverse order (w.r.t. the computation of original variables) makes the problem much more complicated. Many strategies can be applied to generate an adjoint code: these strategies are applied within hand-coded discrete adjoints or within automatically generated adjoints. But they are not necessarily known by both communities, which is why we describe several of them in this paper. Until now, the comparison of these strategies on a code has been difficult because no complexity measure has been associated with them. This paper presents complexity measures of the adjoint codes generated using these strategies in terms of execution time and memory req...

Journal ArticleDOI
TL;DR: A new approach to globalizing the Josephy-Newton algorithm for solving the monotone variational inequality problem is proposed, based on a linesearch in the regularized Josephy-Newton direction which finds a trial point and a proximal point subproblem, for which this trial point is an acceptable approximate solution.
Abstract: We propose a new approach to globalizing the Josephy-Newton algorithm for solving the monotone variational inequality problem. Known globalization strategies rely either on minimization of a suitable merit function, or on a projection-type approach. The technique proposed here is based on a linesearch in the regularized Josephy-Newton direction which finds a trial point and a proximal point subproblem (i.e., subproblem with suitable parameters), for which this trial point is an acceptable approximate solution. We emphasize that this requires only checking a certain approximation criterion, and in particular, does not entail actually solving any nonlinear proximal point subproblems. The method converges globally under very mild assumptions. Furthermore, an easy modification of the method secures the local superlinear rate of convergence under standard conditions.

Journal ArticleDOI
TL;DR: It is shown that the closed loop system is a Riesz spectral system and as consequences, the exponential stability, the observability and the controllability of the system are concluded.
Abstract: A linear feedback control is designed regardless of dissipativity of the system for the stabilization of a flexible beam with a tip rigid body. The Riesz basis approach is adopted in the investigation. It is shown that the closed loop system is a Riesz spectral system and as consequences, the exponential stability, the observability and the controllability of the system are concluded. Finally, some numerical results are also presented.

Journal ArticleDOI
TL;DR: This article focuses on slice models arising from Data Envelopment Analysis (DEA), and describes techniques that are able both to solve DEA problems with large numbers of units and to evaluate the confidence intervals.
Abstract: Slice models are collections of mathematical programs with the same structure but different data. Because they involve multiple problems, slice models tend to be data-intensive and time consuming to solve. However, by incorporating additional information in the solution process, such as the common structure and shared data, we are able to solve these models much more efficiently. In addition because of the efficiency we achieve, we are able to process much larger real-world problems and extend slice model results through the application of more computationally-intensive procedures. In this article, we focus on slice models arising from Data Envelopment Analysis (DEA). In DEA problems, slice models are used to evaluate the efficiency of production units. Using a smoothed bootstrap technique, confidence intervals can be obtained for the resulting efficiency measurements, however at such a high computational cost that often this analysis cannot be done. Under the techniques that we describe for improving the...

Journal ArticleDOI
TL;DR: Using shadow prices (Lagrange multipliers) on aggregate endowments, one may identify side-payments that yield core solutions to cooperative production games.
Abstract: Stochastic programming offers handy instruments to analyze exchange of goods and risks. Absent efficient markets for some of those items, such programming may imitate or synthesize market-like transfers among concerned parties. Specifically, using shadow prices (Lagrange multipliers) on aggregate endowments, one may identify side-payments that yield core solutions to cooperative production games.