
Showing papers in "Computational Optimization and Applications in 2003"


Journal ArticleDOI
TL;DR: Two new versions of forward and backward type algorithms are presented for computing such optimally reduced probability measures approximately for convex stochastic programs with an (approximate) initial probability distribution P having finite support supp P.
Abstract: We consider convex stochastic programs with an (approximate) initial probability distribution P having finite support supp P, i.e., finitely many scenarios. The behaviour of such stochastic programs is stable with respect to perturbations of P measured in terms of a Fortet-Mourier probability metric. The problem of optimal scenario reduction consists in determining a probability measure that is supported by a subset of supp P of prescribed cardinality and is closest to P in terms of such a probability metric. Two new versions of forward and backward type algorithms are presented for computing such optimally reduced probability measures approximately. Compared to earlier versions, the computational performance (accuracy, running time) of the new algorithms has been improved considerably. Numerical experience is reported for different instances of scenario trees with computable optimal lower bounds. The test examples also include a ternary scenario tree representing the weekly electrical load process in a power management model.

851 citations
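The forward-selection principle behind such reduction algorithms can be sketched in a few lines. This is a plain greedy illustration under an arbitrary user-supplied distance function, not the paper's improved algorithm; the `reduce_scenarios` name and the toy data below are our own:

```python
def reduce_scenarios(scenarios, probs, dist, m):
    """Greedy forward selection: keep m scenarios so that the
    probability-weighted distance of the deleted scenarios to their
    nearest kept scenario is small, then move each deleted scenario's
    probability mass to its nearest kept scenario."""
    n = len(scenarios)
    kept = set()
    while len(kept) < m:
        best, best_cost = None, float("inf")
        for j in sorted(set(range(n)) - kept):
            trial = kept | {j}
            cost = sum(
                probs[i] * min(dist(scenarios[i], scenarios[k]) for k in trial)
                for i in range(n) if i not in trial
            )
            if cost < best_cost:
                best, best_cost = j, cost
        kept.add(best)
    # Optimal redistribution: deleted mass goes to the nearest kept scenario.
    new_probs = {k: probs[k] for k in kept}
    for i in range(n):
        if i not in kept:
            nearest = min(kept, key=lambda k: dist(scenarios[i], scenarios[k]))
            new_probs[nearest] += probs[i]
    return kept, new_probs
```

For example, reducing four equally likely points `[0.0, 0.1, 5.0, 5.2]` on the line to two scenarios keeps one point from each cluster and assigns it that cluster's total probability.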


Journal ArticleDOI
TL;DR: This work presents a detailed computational study of the application of the SAA method to solve three classes of stochastic routing problems and finds provably near-optimal solutions to these difficult Stochastic programs using only a moderate amount of computation time.
Abstract: The sample average approximation (SAA) method is an approach for solving stochastic optimization problems by using Monte Carlo simulation. In this technique the expected objective function of the stochastic problem is approximated by a sample average estimate derived from a random sample. The resulting sample average approximating problem is then solved by deterministic optimization techniques. The process is repeated with different samples to obtain candidate solutions along with statistical estimates of their optimality gaps. We present a detailed computational study of the application of the SAA method to solve three classes of stochastic routing problems. These stochastic problems involve an extremely large number of scenarios and first-stage integer variables. For each of the three problem classes, we use decomposition and branch-and-cut to solve the approximating problem within the SAA scheme. Our computational results indicate that the proposed method is successful in solving problems with up to 21694 scenarios to within an estimated 1.0% of optimality. Furthermore, a surprising observation is that the number of optimality cuts required to solve the approximating problem to optimality does not significantly increase with the size of the sample. Therefore, the observed computation times needed to find optimal solutions to the approximating problems grow only linearly with the sample size. As a result, we are able to find provably near-optimal solutions to these difficult stochastic programs using only a moderate amount of computation time.

461 citations
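The SAA loop itself is simple to illustrate. The sketch below applies it to a toy newsvendor problem (our own example, not one of the paper's stochastic routing instances) whose sample-average problem happens to be solvable in closed form as a demand quantile:

```python
import random
import statistics

def saa_newsvendor(c=1.0, p=2.0, n_samples=500, n_replications=5, seed=0):
    """SAA sketch: order quantity x at unit cost c, selling price p,
    demand D ~ Uniform(50, 150).  Each replication draws a sample,
    solves the sample-average problem exactly (a demand quantile), and
    the spread of candidates/values across replications gives the
    statistical estimate of the optimality gap."""
    rng = random.Random(seed)
    candidates, values = [], []
    for _ in range(n_replications):
        demand = sorted(rng.uniform(50, 150) for _ in range(n_samples))
        # The sample-average profit is maximized at the (1 - c/p)-quantile.
        x_hat = demand[int((1 - c / p) * n_samples)]
        profit = statistics.mean(p * min(x_hat, d) - c * x_hat for d in demand)
        candidates.append(x_hat)
        values.append(profit)
    return candidates, values
```

With `c=1, p=2` the true optimum is the median demand, 100; the replications cluster tightly around it.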


Journal ArticleDOI
TL;DR: This work presents an algorithm that produces a discrete joint distribution consistent with specified values of the first four marginal moments and correlations, constructed by decomposing the multivariate problem into univariate ones, and using an iterative procedure that combines simulation, Cholesky decomposition and various transformations to achieve the correct correlations.
Abstract: In stochastic programming models we always face the problem of how to represent the random variables. This is particularly difficult with multidimensional distributions. We present an algorithm that produces a discrete joint distribution consistent with specified values of the first four marginal moments and correlations. The joint distribution is constructed by decomposing the multivariate problem into univariate ones, and using an iterative procedure that combines simulation, Cholesky decomposition and various transformations to achieve the correct correlations without changing the marginal moments. With the algorithm, we can generate 1000 one-period scenarios for 12 random variables in 16 seconds, and for 20 random variables in 48 seconds, on a Pentium III machine.

401 citations
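The Cholesky step that induces the target correlations can be shown in isolation. This is only the mixing step for two variables with an assumed target correlation `rho`; the paper's full algorithm iterates such transformations while also restoring the first four marginal moments:

```python
import math
import random

def correlated_samples(rho, n=20000, seed=1):
    """Mix independent standardized samples with the Cholesky factor of
    the target correlation matrix.  For [[1, rho], [rho, 1]] the factor
    is [[1, 0], [rho, sqrt(1 - rho**2)]]."""
    rng = random.Random(seed)
    a, b = rho, math.sqrt(1.0 - rho * rho)
    xs, ys = [], []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)   # independent N(0, 1)
        xs.append(z1)
        ys.append(a * z1 + b * z2)                  # corr(xs, ys) -> rho
    return xs, ys
```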


Journal ArticleDOI
TL;DR: Algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform are described and large sample-average approximations of problems from the literature are presented.
Abstract: We describe algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform. In particular, we examine serial and asynchronous versions of the L-shaped method and a trust-region method. The parallel platform of choice is the dynamic, heterogeneous, opportunistic platform provided by the Condor system. The algorithms are of master-worker type (with the workers being used to solve second-stage problems), and the MW runtime support library (which supports master-worker computations) is key to the implementation. Computational results are presented on large sample-average approximations of problems from the literature.

181 citations


Journal ArticleDOI
TL;DR: It is shown that the squared smoothing function is strongly semismooth and a new proof is provided, based on a penalized natural complementarity function, for the solution set of the second-order-cone complementarity problem being bounded.
Abstract: Two results on the second-order-cone complementarity problem are presented. We show that the squared smoothing function is strongly semismooth. Under monotonicity and strict feasibility we provide a new proof, based on a penalized natural complementarity function, for the solution set of the second-order-cone complementarity problem being bounded. Numerical results of squared smoothing Newton algorithms are reported.

167 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that SDP and SOCP relaxations provide exact optimal solutions for a class of non-convex quadratic optimization problems with off-diagonal coefficient matrices.
Abstract: We show that SDP (semidefinite programming) and SOCP (second order cone programming) relaxations provide exact optimal solutions for a class of nonconvex quadratic optimization problems. It is a generalization of the results by S. Zhang for a subclass of quadratic maximization problems that have nonnegative off-diagonal coefficient matrices of quadratic objective functions and diagonal coefficient matrices of quadratic constraint functions. A new SOCP relaxation is proposed for the class of nonconvex quadratic optimization problems by extracting valid quadratic inequalities for positive semidefinite cones. Its effectiveness to obtain optimal values is shown to be the same as the SDP relaxation theoretically. Numerical results are presented to demonstrate that the SOCP relaxation is much more efficient than the SDP relaxation.

158 citations


Journal ArticleDOI
TL;DR: A convex nonlinear relaxation of the nonlinear convex GDP problem is proposed; it relies on the convex hull of each of the disjunctions, obtained by variable disaggregation and reformulation of the inequalities.
Abstract: Generalized Disjunctive Programming (GDP) has been introduced recently as an alternative to mixed-integer programming for representing discrete/continuous optimization problems. The basic idea of GDP consists of representing these problems in terms of sets of disjunctions in the continuous space, and logic propositions in terms of Boolean variables. In this paper we consider GDP problems involving convex nonlinear inequalities in the disjunctions. Based on the work by Stubbs and Mehrotra [21] and Ceria and Soares [6], we propose a convex nonlinear relaxation of the nonlinear convex GDP problem that relies on the convex hull of each of the disjunctions that is obtained by variable disaggregation and reformulation of the inequalities. The proposed nonlinear relaxation is used to formulate the GDP problem as a Mixed-Integer Nonlinear Programming (MINLP) problem that is shown to be tighter than the conventional “big-M” formulation. A disjunctive branch and bound method is also presented, and numerical results are given for a set of test problems.

156 citations


Journal ArticleDOI
TL;DR: The performance of evolution strategies is compared empirically with that of several other direct optimization strategies in the noisy, spherical environment that the theoretical results have been obtained in and it is seen that for low levels of noise, most of the strategies exhibit similar degrees of efficiency.
Abstract: Evolution strategies are general, nature-inspired heuristics for search and optimization. Due to their use of populations of candidate solutions and their advanced adaptation schemes, there is a common belief that evolution strategies are especially useful for optimization in the presence of noise. Empirical evidence as well as a number of theoretical findings with respect to the performance of evolution strategies on a class of spherical objective functions disturbed by Gaussian noise support that belief. However, little is known with respect to the capabilities in the presence of noise of evolution strategies relative to those of other direct optimization strategies. In the present paper, theoretical results with respect to the performance of evolution strategies in the presence of Gaussian noise are summarized and discussed. Then, the performance of evolution strategies is compared empirically with that of several other direct optimization strategies in the noisy, spherical environment that the theoretical results have been obtained in. Due to the simplicity of that environment, the results are easily interpretable and can serve to reveal the respective strengths and weaknesses of the algorithms. It is seen that for low levels of noise, most of the strategies exhibit similar degrees of efficiency. For higher levels of noise, their step length adaptation scheme affords evolution strategies a greater degree of robustness than the other algorithms tested.

105 citations


Journal ArticleDOI
TL;DR: It turns out the quadratic programming scheme outperforms the other three approaches for this problem in a computational experiment.
Abstract: Given a set of circles C = {c1, …, cn} on the Euclidean plane with centers {(a1, b1), …, (an, bn)} and radii {r1, …, rn}, the smallest enclosing circle (of fixed circles) problem is to find the circle of minimum radius that encloses all circles in C. We survey four known approaches for this problem, including a second order cone reformulation, a subgradient approach, a quadratic programming scheme, and a randomized incremental algorithm. For the last algorithm we also give some implementation details. It turns out that the quadratic programming scheme outperforms the other three in our computational experiment.

92 citations
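As a rough illustration of the subgradient approach among the four surveyed, note that the enclosing radius at a trial center x is f(x) = max_i (‖x − a_i‖ + r_i), a convex function whose subgradient points from the farthest circle's center toward x. A minimal sketch with a diminishing stepsize (our own simplification, not the paper's implementation):

```python
import math

def smallest_enclosing_circle(circles, iters=2000):
    """Subgradient descent on f(x) = max_i (||x - a_i|| + r_i).
    circles is a list of (a, b, r) triples."""
    cx = sum(a for a, b, r in circles) / len(circles)   # start at centroid
    cy = sum(b for a, b, r in circles) / len(circles)
    for k in range(1, iters + 1):
        # Circle attaining the max defines the subgradient direction.
        fx, (ax, ay) = max(
            (math.hypot(cx - a, cy - b) + r, (a, b)) for a, b, r in circles
        )
        d = math.hypot(cx - ax, cy - ay)
        if d < 1e-12:
            break
        step = 1.0 / k            # diminishing stepsize
        cx -= step * (cx - ax) / d
        cy -= step * (cy - ay) / d
    radius = max(math.hypot(cx - a, cy - b) + r for a, b, r in circles)
    return (cx, cy), radius
```

For two unit-separated circles of radius 0.5 centered at (±1, 0), the optimal enclosing circle is centered at the origin with radius 1.5.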


Journal ArticleDOI
TL;DR: This contribution deals with an efficient method for the numerical realization of the exterior and interior Bernoulli free boundary problems based on a shape optimization approach.
Abstract: This contribution deals with an efficient method for the numerical realization of the exterior and interior Bernoulli free boundary problems. It is based on a shape optimization approach. The state problems are solved by a fictitious domain solver using boundary Lagrange multipliers.

69 citations


Journal ArticleDOI
TL;DR: While being capable of discovering all putative global optima in the range considered, the method proposed improves by more than two orders of magnitude the speed and the percentage of success in finding the global optima of clusters of 75, 98, 102 atoms.
Abstract: A stochastic global optimization method is applied to the challenging problem of finding the minimum energy conformation of a cluster of identical atoms interacting through the Lennard-Jones potential. The method proposed incorporates within an already existing and quite successful method, monotonic basin hopping, a two-phase local search procedure which is capable of significantly enlarging the basin of attraction of the global optimum. The experiments reported confirm the considerable advantages of this approach, in particular for all those cases which are considered in the literature as the most challenging ones, namely 75, 98, 102 atoms. While being capable of discovering all putative global optima in the range considered, the method proposed improves by more than two orders of magnitude the speed and the percentage of success in finding the global optima of clusters of 75, 98, 102 atoms.
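The monotonic basin hopping framework itself fits in a few lines. The sketch below uses a crude normalized-step finite-difference descent as the local search (a stand-in for the paper's two-phase procedure) and a tiny 3-atom cluster, whose global minimum is an equilateral triangle with energy −3; the parameter choices are ours:

```python
import math
import random

def lj_energy(x):
    """Total Lennard-Jones energy; x is a flat list of 3-D coordinates."""
    n = len(x) // 3
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((x[3 * i + k] - x[3 * j + k]) ** 2 for k in range(3))
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

def local_search(x, steps=600, h=1e-5):
    """Monotone descent with finite-difference gradients, a normalized
    search direction and an adaptive, capped steplength."""
    x, e, lr = list(x), lj_energy(x), 0.1
    for _ in range(steps):
        g = []
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            g.append((lj_energy(xp) - lj_energy(xm)) / (2 * h))
        gn = math.sqrt(sum(gi * gi for gi in g)) or 1.0
        trial = [xi - lr * gi / gn for xi, gi in zip(x, g)]
        e_trial = lj_energy(trial)
        if e_trial < e:
            x, e, lr = trial, e_trial, min(lr * 1.5, 0.1)
        else:
            lr *= 0.5
    return x, e

def monotonic_basin_hopping(n_atoms=3, hops=15, seed=2):
    """Perturb the current minimum, re-minimize, and accept only if the
    energy does not increase (the monotonic acceptance rule)."""
    rng = random.Random(seed)
    x, e = local_search([rng.uniform(0.0, 1.5) for _ in range(3 * n_atoms)])
    for _ in range(hops):
        trial, e_trial = local_search([xi + rng.gauss(0, 0.3) for xi in x])
        if e_trial <= e:          # never go uphill
            x, e = trial, e_trial
    return x, e
```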

Journal ArticleDOI
TL;DR: Based on the survival curve analysis, it is suggested that the patients in the Good group should not receive chemotherapy while those in the Intermediate group should; this is the first instance of a classifiable group of breast cancer patients for which chemotherapy can possibly enhance survival.
Abstract: The identification of breast cancer patients for whom chemotherapy could prolong survival time is treated here as a data mining problem. This identification is achieved by clustering 253 breast cancer patients into three prognostic groups: Good, Poor and Intermediate. Each of the three groups has a significantly distinct Kaplan-Meier survival curve. Of particular significance is the Intermediate group, because patients with chemotherapy in this group do better than those without chemotherapy in the same group. This is the reverse case to that of the overall population of 253 patients for which patients undergoing chemotherapy have worse survival than those who do not. We also prescribe a procedure that utilizes three nonlinear smooth support vector machines (SSVMs) for classifying breast cancer patients into the three above prognostic groups. These results suggest that the patients in the Good group should not receive chemotherapy while those in the Intermediate group should receive chemotherapy based on our survival curve analysis. To our knowledge this is the first instance of a classifiable group of breast cancer patients for which chemotherapy can possibly enhance survival.

Journal ArticleDOI
TL;DR: This paper deals with the portfolio selection problem of risky assets with a diagonal covariance matrix, upper bounds on all assets and transactions costs, and an algorithm for its solution is formulated which terminates in a number of iterations that is at most three times the number of assets.
Abstract: This paper deals with the portfolio selection problem of risky assets with a diagonal covariance matrix, upper bounds on all assets and transactions costs. An algorithm for its solution is formulated which terminates in a number of iterations that is at most three times the number of assets. The efficient portfolios, under appropriate assumptions, are shown to have the following structure. As the risk tolerance parameter increases, an asset's holdings increases to its target, then stays there for a while, then increases to its upper bound, reaches it and stays there. Then the holdings of the asset with the next highest expected return proceeds in a similar way and so on.

Journal ArticleDOI
TL;DR: Piecewise smooth equations are increasingly important in the numerical treatment of complementarity problems and models of equilibrium; this note brings out a property of the functions that enter such equations, for instance through penalty expressions.
Abstract: Piecewise smooth equations are increasingly important in the numerical treatment of complementarity problems and models of equilibrium. This note brings out a property of the functions that enter such equations, for instance through penalty expressions.

Journal ArticleDOI
TL;DR: It is shown how the Cauchy point, which is often computed in trust region methods, must be modified so that the feasible method is effective for problems containing both equality and inequality constraints.
Abstract: A slack-based feasible interior point method is described which can be derived as a modification of infeasible methods. The modification is minor for most line search methods, but trust region methods require special attention. It is shown how the Cauchy point, which is often computed in trust region methods, must be modified so that the feasible method is effective for problems containing both equality and inequality constraints. The relationship between slack-based methods and traditional feasible methods is discussed. Numerical results using the KNITRO package show the relative performance of feasible versus infeasible interior point methods.

Journal ArticleDOI
TL;DR: Several well-known quasi-Newton methods, such as BFGS and DFP, are proved to exhibit local and superlinear convergence when applied to mathematical programs with equilibrium constraints.
Abstract: Quasi-Newton methods in conjunction with the piecewise sequential quadratic programming are investigated for solving mathematical programs with equilibrium constraints, in particular for problems with complementarity constraints. Local convergence as well as superlinear convergence of these quasi-Newton methods can be established under suitable assumptions. In particular, several well-known quasi-Newton methods such as BFGS and DFP are proved to exhibit local and superlinear convergence.

Journal ArticleDOI
TL;DR: The proposed variance reduction method based on sensitivity derivatives is shown to accelerate convergence of the Monte Carlo method.
Abstract: A general framework is proposed for what we call the sensitivity derivative Monte Carlo (SDMC) solution of optimal control problems with a stochastic parameter. This method employs the residual in the first-order Taylor series expansion of the cost functional in terms of the stochastic parameter rather than the cost functional itself. A rigorous estimate is derived for the variance of the residual, and it is verified by numerical experiments involving the generalized steady-state Burgers equation with a stochastic coefficient of viscosity. Specifically, the numerical results show that for a given number of samples, the present method yields an order of magnitude higher accuracy than a conventional Monte Carlo method. In other words, the proposed variance reduction method based on sensitivity derivatives is shown to accelerate convergence of the Monte Carlo method. As the sensitivity derivatives are computed only at the mean values of the relevant parameters, the related extra cost of the proposed method is a fraction of the total time of the Monte Carlo method.
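The variance reduction idea can be demonstrated on a scalar toy cost (we use j(θ) = sin θ with a Gaussian parameter; the paper's cost functional comes from the Burgers equation). The estimator averages the Taylor residual and adds back j(μ), which is exact for the linear term since E[j′(μ)(θ − μ)] = 0:

```python
import math
import random
import statistics

def sdmc_estimate(j, jprime, mu, sigma, n=2000, seed=3):
    """Average the residual of the first-order Taylor expansion of j
    instead of j itself.  The residual has O(sigma**4) variance versus
    O(sigma**2) for j, so far fewer samples are needed."""
    rng = random.Random(seed)
    thetas = [rng.gauss(mu, sigma) for _ in range(n)]
    residuals = [j(t) - (j(mu) + jprime(mu) * (t - mu)) for t in thetas]
    plain = statistics.mean(j(t) for t in thetas)          # conventional MC
    sdmc = j(mu) + statistics.mean(residuals)              # SDMC estimator
    var_ratio = (statistics.variance(residuals)
                 / statistics.variance([j(t) for t in thetas]))
    return plain, sdmc, var_ratio
```

For θ ~ N(1, 0.1²), the exact value E[sin θ] = sin(1)·exp(−σ²/2) lets the accuracy gain be checked directly.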

Journal ArticleDOI
TL;DR: A primal-dual interior point method of the “feasible” type, with the additional property that the objective function decreases at each iteration, and the initial point is allowed to lie on the boundary of the feasible set.
Abstract: We propose and analyze a primal-dual interior point method of the “feasible” type, with the additional property that the objective function decreases at each iteration. A distinctive feature of the method is the use of different barrier parameter values for each constraint, with the purpose of better steering the constructed sequence away from non-KKT stationary points. Assets of the proposed scheme include relative simplicity of the algorithm and of the convergence analysis, strong global and local convergence properties, and good performance in preliminary tests. In addition, the initial point is allowed to lie on the boundary of the feasible set.

Journal ArticleDOI
TL;DR: Stability properties of the recourse function as a function of the first-stage decision, as well as of the underlying probability distribution of random parameters, are established, which leads to stability results for the optimal solution of the minimum risk problem when the underlying probabilities distribution is subjected to perturbations.
Abstract: In the setting of stochastic recourse programs, we consider the problem of minimizing the probability of total costs exceeding a certain threshold value. The problem is referred to as the minimum risk problem and is posed in order to obtain a more adequate description of risk aversion than that of the accustomed expected value problem. We establish continuity properties of the recourse function as a function of the first-stage decision, as well as of the underlying probability distribution of random parameters. This leads to stability results for the optimal solution of the minimum risk problem when the underlying probability distribution is subjected to perturbations. Furthermore, an algorithm for the minimum risk problem is elaborated and we present results of some preliminary computational experiments.

Journal ArticleDOI
TL;DR: The study achieves an improved understanding and description of the sampling phenomenon based on concepts from fractal geometry, and incorporates knowledge of the sampling accuracy (fractal model) into the stochastic optimization framework, thereby automating and improving the combinatorial optimization algorithm.
Abstract: The generalized approach to stochastic optimization involves two computationally intensive recursive loops: (1) the outer optimization loop, (2) the inner sampling loop. Furthermore, inclusion of discrete decision variables adds to the complexity. The focus of the current endeavor is to reduce the computational intensity of the two recursive loops. The study achieves these goals through an improved understanding and description of the sampling phenomena based on the concepts of fractal geometry, incorporating the knowledge of the accuracy of the sampling (fractal model) in the stochastic optimization framework, thereby automating and improving the combinatorial optimization algorithm. The efficiency of the algorithm is presented in the context of a large scale real world problem, related to the nuclear waste at Hanford, involving discrete and continuous decision variables, and uncertainties. These new developments reduced the computational intensity for solving this problem from an estimated 20 days of CPU time on a dedicated Alpha workstation to 18 hours of CPU time on the same machine.

Journal ArticleDOI
TL;DR: This paper proposes a stepsize rule for EKF and establishes global convergence of the algorithm under the boundedness of the generated sequence and appropriate assumptions on the objective function and reports some numerical results, which demonstrate that the proposed method is promising.
Abstract: In this paper, we consider the Extended Kalman Filter (EKF) for solving nonlinear least squares problems. EKF is an incremental iterative method based on Gauss-Newton method that has nice convergence properties. Although EKF has the global convergence property under some conditions, the convergence rate is only sublinear under the same conditions. One of the reasons why EKF shows slow convergence is the lack of explicit stepsize. In the paper, we propose a stepsize rule for EKF and establish global convergence of the algorithm under the boundedness of the generated sequence and appropriate assumptions on the objective function. A notable feature of the stepsize rule is that the stepsize is kept greater than or equal to 1 at each iteration, and increases at a linear rate of k under an additional condition. Therefore, we can expect that the proposed method converges faster than the original EKF. We report some numerical results, which demonstrate that the proposed method is promising.

Journal ArticleDOI
TL;DR: Upper bounds for the condition number of the preconditioned matrices used in the solution of systems of linear equations defining the algorithm search directions are derived.
Abstract: We study and compare preconditioners available for network interior point methods. We derive upper bounds for the condition number of the preconditioned matrices used in the solution of systems of linear equations defining the algorithm search directions. The preconditioners are tested using PDNET, a state-of-the-art interior point code for the minimum cost network flow problem. A computational comparison using a set of standard problems improves the understanding of the effectiveness of preconditioners in network interior point methods.
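For context, the sketch below is textbook preconditioned conjugate gradients with a diagonal (Jacobi) preconditioner, the simplest choice such comparisons include; PDNET's preconditioners exploit network structure, which is not shown here. The dense list-of-lists matrix keeps the sketch dependency-free:

```python
def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for SPD A with a diagonal
    preconditioner M (passed as the inverse diagonal).  A better
    conditioned M^{-1}A means fewer iterations per search direction."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                                        # residual b - A x
    z = [mi * ri for mi, ri in zip(M_inv_diag, r)]     # preconditioned residual
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = [mi * ri for mi, ri in zip(M_inv_diag, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```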

Journal ArticleDOI
TL;DR: This paper provides a characterization of the solution of the optimal control problem for piecewise affine discrete-time systems with a quadratic cost function, and a simple method for solving this and the previously solved ℓ∞ problem.
Abstract: This paper makes two contributions: firstly, it provides a characterization of the solution of the optimal control problem for piecewise affine discrete-time systems with a quadratic cost function (the generally preferred option) and, secondly, it provides a simple method (reverse transformation) for solving this and the previously solved ℓ∞ problem. The characterization is useful for on-line implementation.

Journal ArticleDOI
TL;DR: Numerical experiments for continuous, discrete and mixed discrete optimization problems were performed, and the numerical results show that the approach is effective for solving these problems.
Abstract: This paper presents a unified gradient flow approach to nonlinear constrained optimization problems. This method is based on a continuous gradient flow reformulation of constrained optimization problems and on a two-level time discretization of the gradient flow equation with a splitting parameter t. The convergence of the scheme is analyzed, and it is shown that the scheme becomes first order when t ∈ [0, 1] and second order when t = 1 and the time discretization step length is sufficiently large. Numerical experiments for continuous, discrete and mixed discrete optimization problems were performed, and the numerical results show that the approach is effective for solving these problems.

Journal ArticleDOI
TL;DR: It is shown that f is a strongly semismooth function if g is continuous and B is affine with respect to t and strongly semismooth with respect to x, i.e., B(x, t) = u(x)t + v(x), where u and v are two strongly semismooth functions in ℝn.
Abstract: As shown by an example, the integral function f : ℝn → ℝ, defined by f(x) = ∫_a^b [B(x, t)]_+ g(t) dt, may not be a strongly semismooth function, even if g(t) ≡ 1 and B is a quadratic polynomial with respect to t and infinitely many times smooth with respect to x. We show that f is a strongly semismooth function if g is continuous and B is affine with respect to t and strongly semismooth with respect to x, i.e., B(x, t) = u(x)t + v(x), where u and v are two strongly semismooth functions in ℝn. We also show that f is not a piecewise smooth function if u and v are two linearly independent linear functions, g is continuous and g ≢ 0 in [a, b], and n ≥ 2. We apply the first result to the edge convex minimum norm network interpolation problem, which is a two-dimensional interpolation problem.

Journal ArticleDOI
TL;DR: It is proved that stationary points, local minimizers and global minimizers of the exact augmented Lagrangian function correspond exactly to KKT pairs, local solutions and global solutions of the constrained problem.
Abstract: This paper is aimed toward the definition of a new exact augmented Lagrangian function for two-sided inequality constrained problems. The distinguishing feature of this augmented Lagrangian function is that it employs only one multiplier for each two-sided constraint. We prove that stationary points, local minimizers and global minimizers of the exact augmented Lagrangian function correspond exactly to KKT pairs, local solutions and global solutions of the constrained problem.

Journal ArticleDOI
TL;DR: A Lagrange-Newton-SQP method is analyzed for the optimal control of the Burgers equation and the convergence of the method is proved in appropriate Banach spaces based on a second-order sufficient optimality condition and the theory of Newton methods for generalized equations inBanach spaces.
Abstract: A Lagrange-Newton-SQP method is analyzed for the optimal control of the Burgers equation. Boundary controls are given, which are restricted by pointwise lower and upper bounds. The convergence of the method is proved in appropriate Banach spaces. This proof is based on a second-order sufficient optimality condition and the theory of Newton methods for generalized equations in Banach spaces. For the numerical realization a primal-dual active set strategy is applied. To illustrate the theoretical investigations, numerical examples are included. Moreover, a globalization technique for the SQP method is tested numerically.

Journal ArticleDOI
TL;DR: It is shown that local epi-sub-Lipschitz continuity of the function-valued mapping associated with a perturbed optimization problem yields the local Lipschitz continuity of the inf-projections (= marginal functions, = infimal functions).
Abstract: It is shown that local epi-sub-Lipschitz continuity of the function-valued mapping associated with a perturbed optimization problem yields the local Lipschitz continuity of the inf-projections (= marginal functions, = infimal functions). The use of the theorem is illustrated by considering perturbed nonlinear optimization problems with linear constraints.

Journal ArticleDOI
TL;DR: An infeasible interior point algorithm for convex minimization problems is described and global convergence under standard conditions on the problem data is proved, without any assumption on the behavior of the algorithm.
Abstract: We describe an infeasible interior point algorithm for convex minimization problems. The method uses quasi-Newton techniques for approximating the second derivatives and providing superlinear convergence. We propose a new feasibility control of the iterates by introducing shift variables and by penalizing them in the barrier problem. We prove global convergence under standard conditions on the problem data, without any assumption on the behavior of the algorithm.

Journal ArticleDOI
TL;DR: A new active set Newton-type algorithm for the solution of inequality constrained minimization problems is proposed; preliminary computational results show the viability of the approach in large scale problems having only a limited number of constraints.
Abstract: A new active set Newton-type algorithm for the solution of inequality constrained minimization problems is proposed. The algorithm possesses the following favorable characteristics: (i) global convergence under mild assumptions; (ii) superlinear convergence of primal variables without strict complementarity; (iii) a Newton-type direction computed by means of a truncated conjugate gradient method. Preliminary computational results are reported to show viability of the approach in large scale problems having only a limited number of constraints.