
Showing papers in "Computational Optimization and Applications in 2007"


Journal ArticleDOI
TL;DR: This paper addresses the asymptotic convergence of the SDRE estimator and the compensated system, discusses numerical methods for approximating the solution of the SDRE, and illustrates the effectiveness and potential of the SDRE technique for the design of nonlinear compensator-based feedback controllers.
Abstract: State-dependent Riccati equation (SDRE) techniques are rapidly emerging as general design and synthesis methods of nonlinear feedback controllers and estimators for a broad class of nonlinear regulator problems. In essence, the SDRE approach involves mimicking the standard linear quadratic regulator (LQR) formulation for linear systems. In particular, the technique consists of using direct parameterization to bring the nonlinear system to a linear structure having state-dependent coefficient matrices. Theoretical advances have been made regarding the nonlinear regulator problem and the asymptotic stability properties of the system with full state feedback. However, there have been no theoretical results regarding the asymptotic convergence of the estimator and the compensated system. This paper addresses these two issues as well as discussing numerical methods for approximating the solution to the SDRE. The Taylor series numerical method works only for a certain class of systems, namely those with constant control coefficient matrices, and only in small regions. The interpolation numerical method can be applied globally to a much larger class of systems. Examples are provided to illustrate the effectiveness and potential of the SDRE technique for the design of nonlinear compensator-based feedback controllers.

225 citations
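The LQR-mimicking structure described above is easy to prototype: freeze the state-dependent coefficient matrix A(x) at the current state, solve the resulting algebraic Riccati equation, and apply the LQR-style gain. The sketch below does exactly that for an illustrative two-state system using SciPy's CARE solver; the dynamics, weights, and SDC parameterization are assumptions for demonstration, not taken from the paper.

```python
# A minimal SDRE state-feedback sketch (illustrative, not the paper's design).
# System: x1' = x2, x2' = -x1 + x1^3 + u, written as x' = A(x)x + Bu.
import numpy as np
from scipy.linalg import solve_continuous_are

B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state weight (illustrative choice)
R = np.array([[1.0]])  # control weight

def A_of_x(x):
    # One of many valid SDC parameterizations: -x1 + x1^3 = (-1 + x1^2) * x1
    return np.array([[0.0, 1.0],
                     [-1.0 + x[0]**2, 0.0]])

def sdre_control(x):
    # Solve the Riccati equation frozen at the current state, then
    # apply the LQR-like feedback u = -R^{-1} B^T P(x) x.
    P = solve_continuous_are(A_of_x(x), B, Q, R)
    return -np.linalg.solve(R, B.T @ P) @ x

# Closed-loop simulation with explicit Euler (coarse, for illustration only)
x, dt = np.array([1.0, 0.0]), 1e-3
for _ in range(5000):
    u = sdre_control(x)
    x = x + dt * (A_of_x(x) @ x + (B @ u).ravel())
print(x)  # should approach the origin
```

Note that the SDC parameterization A(x) is not unique; different factorizations of the same dynamics generally yield different SDRE controllers.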


Journal ArticleDOI
TL;DR: This work presents combinatorial methods to preprocess these matrices to establish more favorable numerical properties for the subsequent factorization in a sparse direct LDL^T factorization method where the pivoting is restricted to static supernode data structures.
Abstract: Interior-point methods are among the most efficient approaches for solving large-scale nonlinear programming problems. At the core of these methods, highly ill-conditioned symmetric saddle-point problems have to be solved. We present combinatorial methods to preprocess these matrices in order to establish more favorable numerical properties for the subsequent factorization. Our approach is based on symmetric weighted matchings and is used in a sparse direct LDL^T factorization method where the pivoting is restricted to static supernode data structures. In addition, we will dynamically expand the supernode data structure in cases where additional fill-in helps to select better numerical pivot elements. This technique can be seen as an alternative to the more traditional threshold pivoting techniques. We demonstrate the competitiveness of this approach within an interior-point method on a large set of test problems from the CUTE and COPS sets, as well as large optimal control problems based on partial differential equations. The largest nonlinear optimization problem solved has more than 12 million variables and 6 million constraints.

186 citations


Journal ArticleDOI
TL;DR: This work investigates equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B|x| = b and shows that this absolute value equation is NP-hard to solve, and that solving it with B = I solves the general linear complementarity problem.
Abstract: We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B|x| = b, where A and B are arbitrary m × n real matrices. We show that this absolute value equation is NP-hard to solve, and that solving it with B = I solves the general linear complementarity problem. We give sufficient optimality conditions and duality results for absolute value programs as well as theorems of the alternative for absolute value inequalities. We also propose concave minimization formulations for absolute value equations that are solved by a finite succession of linear programs. These algorithms terminate at a local minimum that solves the absolute value equation in almost all solvable random problems tried.

185 citations
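For intuition about the equation Ax + B|x| = b, the sketch below uses a simple sign-fixing fixed-point iteration, assuming square A: guess the sign pattern of x, solve the linearized system, and repeat until the signs are consistent. This heuristic is only illustrative; it is not the paper's concave-minimization method based on a finite succession of linear programs, and it carries no convergence guarantee.

```python
# Illustrative sign-fixing heuristic for Ax + B|x| = b (square A assumed):
# if s = sign(x), then |x| = diag(s) x, so the equation becomes linear,
# (A + B diag(s)) x = b. Iterate until the guessed signs are consistent.
import numpy as np

def solve_ave(A, B, b, max_iter=50):
    n = A.shape[1]
    s = np.ones(n)  # initial sign guess
    for _ in range(max_iter):
        x = np.linalg.solve(A + B @ np.diag(s), b)
        s_new = np.where(x >= 0, 1.0, -1.0)
        if np.array_equal(s_new, s):
            return x  # signs consistent: Ax + B|x| = b holds exactly
        s = s_new
    return None  # no consistent sign pattern found

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 4 * np.eye(n)  # dominant A keeps solves stable
B = 0.5 * rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true + B @ np.abs(x_true)
x = solve_ave(A, B, b)
if x is not None:
    print(np.allclose(A @ x + B @ np.abs(x), b))
```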


Journal ArticleDOI
TL;DR: A new scaled conjugate gradient algorithm is presented and analyzed, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions, which substantially outperforms the spectral conjugate gradient algorithm SCG.
Abstract: In this work we present and analyze a new scaled conjugate gradient algorithm and its implementation, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions. The best spectral conjugate gradient algorithm SCG by Birgin and Martinez (2001), which is mainly a scaled variant of Perry's (1977), is modified in such a manner as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded in the restart philosophy of Beale-Powell. The parameter scaling the gradient is selected as the spectral gradient or in an anticipative manner by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Preliminary computational results, for a set consisting of 500 unconstrained optimization test problems, show that this new scaled conjugate gradient algorithm substantially outperforms the spectral conjugate gradient SCG algorithm.

162 citations
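The flavor of such scaled CG methods can be captured in a few lines: a spectral (Barzilai-Borwein-like) scaling of the gradient, a Perry-type direction update, a Wolfe line search, and a restart whenever descent is lost. The sketch below follows this generic pattern; the exact scaling choices, BFGS-based modification, and Beale-Powell restarts of the paper's algorithm are not reproduced.

```python
# A generic spectral-scaled CG sketch in the spirit of SCG-type methods.
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def scaled_cg(f, grad, x, iters=200, tol=1e-6):
    g = grad(x)
    d = -g
    for _ in range(iters):
        alpha = line_search(f, grad, x, d, gfk=g)[0]  # Wolfe line search
        if alpha is None:          # line search failed: fall back to a short step
            alpha, d = 1e-4, -g
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        theta = (s @ s) / (s @ y)  # spectral (Barzilai-Borwein-like) scaling
        beta = ((theta * y - s) @ g_new) / (s @ y)  # Perry-like parameter
        d = -theta * g_new + beta * s
        if d @ g_new >= 0:         # not a descent direction: restart
            d = -g_new
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x

print(scaled_cg(rosen, rosen_der, np.array([-1.2, 1.0])))  # near (1, 1)
```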


Journal ArticleDOI
TL;DR: The eigenvalue complementarity problem (EiCP) is studied and is shown to be equivalent to a Nonlinear Complementarity Problem, a Mathematical Programming Problem with Complementarity Constraints and a Global Optimization Problem.
Abstract: In this paper an eigenvalue complementarity problem (EiCP) is studied, which finds its origins in the solution of a contact problem in mechanics. The EiCP is shown to be equivalent to a Nonlinear Complementarity Problem, a Mathematical Programming Problem with Complementarity Constraints and a Global Optimization Problem. A finite Reformulation-Linearization Technique (RLT)-based tree search algorithm is introduced for processing the EiCP via the lattermost of these formulations. Computational experience is included to highlight the efficacy of the above formulations and corresponding techniques for the solution of the EiCP.

90 citations


Journal ArticleDOI
TL;DR: The attraction-repulsion concept of electromagnetism-like (EM) algorithm is used to boost the mutation operation of the original differential evolution, and results presented show the potential of this new approach.
Abstract: Differential evolution (DE) has gained a lot of attention from the global optimization research community. It has proved to be a very robust algorithm for solving non-differentiable and non-convex global optimization problems. In this paper, we propose some modifications to the original algorithm. Specifically, we use the attraction-repulsion concept of electromagnetism-like (EM) algorithm to boost the mutation operation of the original differential evolution. We carried out a numerical study using a set of 50 test problems, many of which are inspired by practical applications. Results presented show the potential of this new approach.

81 citations
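One plausible reading of the modification is sketched below: a standard DE/rand/1 mutation augmented by an EM-style attraction-repulsion force computed from fitness-based charges. The charge formula and the use of a single random peer are illustrative assumptions, not the authors' exact operator.

```python
# Hedged sketch: differential evolution with an electromagnetism-like
# attraction-repulsion term added to the mutation step.
import numpy as np

def de_em(f, lo, hi, pop_size=30, F=0.5, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(gens):
        # EM-like charges: lower (better) objective values get larger charge
        q = np.exp(-dim * (fit - fit.min()) / (fit.max() - fit.min() + 1e-12))
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 3, replace=False)
            # attraction toward a better peer, repulsion from a worse one
            sign = 1.0 if fit[a] < fit[i] else -1.0
            force = sign * q[i] * q[a] * (pop[a] - pop[i])
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]) + force, lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # ensure at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:             # greedy one-to-one replacement
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x**2))
x_best, f_best = de_em(sphere, np.full(5, -5.0), np.full(5, 5.0))
print(x_best, f_best)
```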


Journal ArticleDOI
TL;DR: This paper investigates the use of a primal-dual penalty approach to overcome the inability of interior-point methods to efficiently re-optimize by solving closely related problems after a warmstart; exactness and convergence are proved and encouraging numerical results are shown.
Abstract: One perceived deficiency of interior-point methods in comparison to active set methods is their inability to efficiently re-optimize by solving closely related problems after a warmstart. In this paper, we investigate the use of a primal-dual penalty approach to overcome this problem. We prove exactness and convergence and show encouraging numerical results on a set of linear and mixed integer programming problems.

78 citations


Journal ArticleDOI
TL;DR: Issues of indefinite preconditioning of reduced Newton systems arising in optimization with interior point methods are addressed, and an approximate constraint preconditioner is considered in which a sparse approximation of the Jacobian is used instead of the complete matrix.
Abstract: Issues of indefinite preconditioning of reduced Newton systems arising in optimization with interior point methods are addressed in this paper. Constraint preconditioners have shown much promise in this context. However, there are situations in which an unfavorable sparsity pattern of the Jacobian matrix may adversely affect the preconditioner and make its inverse representation unacceptably dense and hence too expensive to be used in practice. A remedy to such situations is proposed in this paper. An approximate constraint preconditioner is considered in which a sparse approximation of the Jacobian is used instead of the complete matrix. Spectral analysis of the preconditioned matrix is performed and bounds on its non-unit eigenvalues are provided. Preliminary computational results are encouraging.

78 citations


Journal ArticleDOI
TL;DR: This paper presents the integration schemes that are automatically generated when differentiating the discretization of the state equation using Automatic Differentiation (AD), and shows that they can be seen as discretization methods for the sensitivity and adjoint differential equations of the underlying control problem.
Abstract: This paper considers the numerical solution of optimal control problems based on ODEs. We assume that an explicit Runge-Kutta method is applied to integrate the state equation in the context of a recursive discretization approach. To compute the gradient of the cost function, one may employ Automatic Differentiation (AD). This paper presents the integration schemes that are automatically generated when differentiating the discretization of the state equation using AD. We show that they can be seen as discretization methods for the sensitivity and adjoint differential equation of the underlying control problem. Furthermore, we prove that the convergence rate of the scheme automatically derived for the sensitivity equation coincides with the convergence rate of the integration scheme for the state equation. Under mild additional assumptions on the coefficients of the integration scheme for the state equation, we show a similar result for the scheme automatically derived for the adjoint equation. Numerical results illustrate the presented theoretical results.

74 citations
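The paper's central observation can be checked numerically on a scalar example: propagating the sensitivity ODE S' = (∂f/∂x)S + (∂f/∂p) with the same Runge-Kutta scheme used for the state reproduces what forward-mode AD of the discretized recurrence would compute. A minimal sketch, assuming the toy dynamics x' = p·x, for which the exact sensitivity dx(T)/dp is known in closed form:

```python
# Propagate state and forward sensitivity jointly with RK4 for x' = p*x;
# the exact sensitivity is dx(T)/dp = T * exp(p*T) * x0.
import numpy as np

def rk4_step(F, z, h):
    k1 = F(z)
    k2 = F(z + 0.5 * h * k1)
    k3 = F(z + 0.5 * h * k2)
    k4 = F(z + h * k3)
    return z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

p, x0, T, n = 0.7, 1.0, 1.0, 100
h = T / n

def augmented(z):
    x, S = z
    return np.array([p * x,        # state equation   x' = p x
                     p * S + x])   # sensitivity ODE  S' = (df/dx) S + df/dp

z = np.array([x0, 0.0])            # S(0) = 0
for _ in range(n):
    z = rk4_step(augmented, z, h)

print(z[1], T * np.exp(p * T) * x0)  # RK4 sensitivity vs. exact derivative
```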


Journal ArticleDOI
TL;DR: This paper describes a parallel implementation of the primal–dual method on a shared memory system and results are presented, including the solution of some large scale problems with over 50,000 constraints.
Abstract: Primal-dual interior point methods and the HKM method in particular have been implemented in a number of software packages for semidefinite programming. These methods have performed well in practice on small to medium sized SDPs. However, primal-dual codes have had some trouble in solving larger problems because of the storage requirements and required computational effort. In this paper we describe a parallel implementation of the primal-dual method on a shared memory system. Computational results are presented, including the solution of some large scale problems with over 50,000 constraints.

71 citations


Journal ArticleDOI
TL;DR: The numerical experiments reveal that the iterative hybrid approach works better than Cholesky factorization on some classes of large-scale problems.
Abstract: We devise a hybrid approach for solving linear systems arising from interior point methods applied to linear programming problems. These systems are solved by a preconditioned conjugate gradient method that works in two phases. During phase I it uses a kind of incomplete Cholesky preconditioner such that fill-in can be controlled in terms of available memory. As the optimal solution of the problem is approached, the linear systems become highly ill-conditioned and the method changes to phase II. In this phase a preconditioner based on the LU factorization is found to work better near a solution of the LP problem. The numerical experiments reveal that the iterative hybrid approach works better than Cholesky factorization on some classes of large-scale problems.

Journal ArticleDOI
TL;DR: It is shown that the preconditioner initially developed for multicommodity flows applies to any primal block-angular problem, although its efficiency depends on the particular structure of the linking constraints.
Abstract: Multicommodity flows belong to the class of primal block-angular problems. An efficient interior-point method has already been developed for linear and quadratic network optimization problems. It solved normal equations, using sparse Cholesky factorizations for diagonal blocks, and a preconditioned conjugate gradient for linking constraints. In this work we extend this procedure, showing that the preconditioner initially developed for multicommodity flows applies to any primal block-angular problem, although its efficiency depends on the particular structure of the linking constraints. We discuss the conditions under which the preconditioner is effective. The procedure is implemented in a user-friendly package in the MATLAB environment. Computational results are reported for four primal block-angular problems: multicommodity flows, nonoriented multicommodity flows, minimum-distance controlled tabular adjustment for statistical data protection, and the minimum congestion problem. The results show that this procedure holds great potential for solving large primal block-angular problems efficiently.

Journal ArticleDOI
TL;DR: Numerical results on a set of standard test problems show that the proposed techniques can be of value in the solution of large-dimensional systems of equations.
Abstract: In this paper we study nonmonotone globalization techniques, in connection with iterative derivative-free methods for solving a system of nonlinear equations in several variables. First we define and analyze a class of nonmonotone derivative-free linesearch techniques for unconstrained minimization of differentiable functions. Then we introduce a globalization scheme, which combines nonmonotone watchdog rules and nonmonotone linesearches, and we study the application of this scheme to some recent extensions of the Barzilai-Borwein gradient method and to hybrid stabilization algorithms employing linesearches along coordinate directions. Numerical results on a set of standard test problems show that the proposed techniques can be of value in the solution of large-dimensional systems of equations.
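A minimal rendition of the nonmonotone ingredients is given below: a Grippo-Lampariello-Lucidi-style acceptance test against the maximum of recent merit values, combined with a Barzilai-Borwein (spectral) steplength on the residual, in the spirit of derivative-free spectral residual methods. The watchdog rules and coordinate-direction hybrids of the paper are omitted, and the test system is an assumption.

```python
# Nonmonotone derivative-free linesearch sketch for F(x) = 0 with merit
# f(x) = ||F(x)||^2: accept a step if it improves on the worst of the
# last M merit values (GLL-style), with a BB-scaled residual direction.
import numpy as np

def nonmonotone_bb(F, x, iters=500, M=10, gamma=1e-4, tol=1e-8):
    f = lambda z: float(np.dot(F(z), F(z)))
    hist = [f(x)]
    alpha = 1.0
    for _ in range(iters):
        d = -alpha * F(x)                  # BB-scaled residual direction
        f_ref = max(hist[-M:])             # nonmonotone reference value
        lam = 1.0
        while f(x + lam * d) > f_ref - gamma * lam**2 * np.dot(d, d):
            lam *= 0.5
            if lam < 1e-12:
                break
        s = lam * d
        y = F(x + s) - F(x)
        x = x + s
        hist.append(f(x))
        sy = np.dot(s, y)
        alpha = abs(np.dot(s, s) / sy) if sy != 0 else 1.0  # BB steplength
        if hist[-1] < tol:
            break
    return x

F = lambda z: np.array([z[0]**2 + z[1] - 3.0, z[0] + z[1]**2 - 5.0])
print(nonmonotone_bb(F, np.array([1.0, 1.0])))  # a root is (1, 2)
```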

Journal ArticleDOI
TL;DR: The dual simplex algorithm has become a strong contender in solving large scale linear and mixed integer programs; this paper surveys methods for obtaining a dual feasible starting basis and addresses the connection between dual feasibility and LP-preprocessing.
Abstract: The dual simplex algorithm has become a strong contender in solving large scale LP problems. One key problem of any dual simplex algorithm is to obtain a dual feasible basis as a starting point. We give an overview of methods which have been proposed in the literature and present new stable and efficient ways to combine them within a state-of-the-art optimization system for solving real world linear and mixed integer programs. Furthermore, we address implementation aspects and the connection between dual feasibility and LP-preprocessing. Computational results are given for a large set of large scale LP problems, which show our dual simplex implementation to be superior to the best existing research and open-source codes and competitive with the leading commercial code on many of our most difficult problem instances.

Journal ArticleDOI
TL;DR: This paper focuses on the use of preconditioned iterative techniques to solve the KKT system arising at each iteration of a Potential Reduction method for convex Quadratic Programming.
Abstract: Iterative solvers appear to be very promising in the development of efficient software, based on Interior Point methods, for large-scale nonlinear optimization problems. In this paper we focus on the use of preconditioned iterative techniques to solve the KKT system arising at each iteration of a Potential Reduction method for convex Quadratic Programming. We consider the augmented system approach and analyze the behaviour of the Constraint Preconditioner with the Conjugate Gradient algorithm. Comparisons with a direct solution of the augmented system and with MOSEK show the effectiveness of the iterative approach on large-scale sparse problems.

Journal ArticleDOI
TL;DR: A two-phase iterative algorithm that starts by solving the normal equations with PCG in each IPM iteration, and then switches to solve the preconditioned reduced augmented system with symmetric quasi-minimal residual (SQMR) method when it is advantageous to do so is proposed.
Abstract: We propose to compute the search direction at each interior-point iteration for a linear program via a reduced augmented system that typically has a much smaller dimension than the original augmented system. This reduced system is potentially less susceptible to the ill-conditioning effect of the elements in the (1,1) block of the augmented matrix. A preconditioner is then designed by approximating the block structure of the inverse of the transformed matrix to further improve the spectral properties of the transformed system. The resulting preconditioned system is likely to become better conditioned toward the end of the interior-point algorithm. Capitalizing on the special spectral properties of the transformed matrix, we further propose a two-phase iterative algorithm that starts by solving the normal equations with PCG in each IPM iteration, and then switches to solve the preconditioned reduced augmented system with the symmetric quasi-minimal residual (SQMR) method when it is advantageous to do so. The experimental results have demonstrated that our proposed method is competitive with direct methods in solving large-scale LP problems and a set of highly degenerate LP problems.

Journal ArticleDOI
TL;DR: The new rule for sequentially selecting the subproblems appears to be suited to tackle large scale problems, while the introduction of the proximal point term allows us to ensure the global convergence of the algorithm for the general case of nonconvex objective function.
Abstract: In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular we are interested in solving problems where the number of variables is so huge that traditional optimization methods cannot be directly applied. Many interesting real world problems lead to the solution of large scale constrained problems with this structure. For example, the special subclass of problems with convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a technique for machine learning problems. For this particular subclass of convex quadratic problems, some convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from the previous methods in the rule for the choice of the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears to be suited to tackle large scale problems, while the introduction of the proximal point term allows us to ensure the global convergence of the algorithm for the general case of a nonconvex objective function. Furthermore, we report some preliminary numerical results on support vector classification problems with up to 100 thousand variables.

Journal ArticleDOI
TL;DR: The error of the MEKF algorithm is proved to be exponentially bounded and the algorithm is used to optimize the parameters in certain nonlinear input–output mappings.
Abstract: The solution of nonlinear least-squares problems is investigated. The asymptotic behavior is studied and conditions for convergence are derived. To deal with such problems in a recursive and efficient way, an algorithm is proposed that is based on a modified extended Kalman filter (MEKF). The error of the MEKF algorithm is proved to be exponentially bounded. Batch and iterated versions of the algorithm are also given. As an application, the algorithm is used to optimize the parameters in certain nonlinear input-output mappings. Simulation results on interpolation of real data and prediction of chaotic time series are shown.
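As a rough illustration of EKF-based recursive least squares (without the paper's specific modification or its boundedness analysis), the sketch below treats the parameters as the filter state with a random-walk model and updates them one data pair at a time; the model, noise levels, and priors are illustrative assumptions.

```python
# Generic extended-Kalman-filter sketch for recursive nonlinear least
# squares: parameters theta are the filter state, each (u, y) pair is a
# scalar measurement y = g(theta, u) + noise.
import numpy as np

def ekf_fit(g, dg, theta0, P0, data, q=1e-6, r=1e-2):
    theta, P = theta0.copy(), P0.copy()
    for u, y in data:
        P = P + q * np.eye(theta.size)        # random-walk prediction step
        H = dg(theta, u)                      # 1 x n Jacobian of g wrt theta
        S = H @ P @ H.T + r                   # innovation variance
        K = (P @ H.T) / S                     # Kalman gain, n x 1
        theta = theta + K.ravel() * (y - g(theta, u))
        P = P - np.outer(K, H @ P)            # covariance update (I - KH)P
    return theta

# Fit y = a * exp(b * u) from noisy samples with a, b unknown.
rng = np.random.default_rng(1)
a_true, b_true = 2.0, -0.5
us = rng.uniform(0, 4, 200)
ys = a_true * np.exp(b_true * us) + 0.05 * rng.standard_normal(200)

g = lambda th, u: th[0] * np.exp(th[1] * u)
dg = lambda th, u: np.array([[np.exp(th[1] * u),
                              th[0] * u * np.exp(th[1] * u)]])
theta = ekf_fit(g, dg, np.array([1.0, 0.0]), np.eye(2), zip(us, ys))
print(theta)  # should be close to (2.0, -0.5)
```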

Journal ArticleDOI
TL;DR: This paper proposes simple extensions of existing formulations, based on the concept of regularization introduced within the context of statistical learning theory; experimental results demonstrate the improved performance of the proposed formulations over the ones traditionally used in preference disaggregation analysis.
Abstract: Disaggregation methods have been extensively used in multiple criteria decision making to infer preferential information from reference examples, using linear programming techniques. This paper proposes simple extensions of existing formulations, based on the concept of regularization which has been introduced within the context of statistical learning theory. The properties of the resulting new formulations are analyzed for both ranking and classification problems and experimental results are presented demonstrating the improved performance of the proposed formulations over the ones traditionally used in preference disaggregation analysis.

Journal ArticleDOI
TL;DR: The results presented here indicate clustering of eigenvalues and, hence, faster convergence of Krylov subspace iterative methods when the entries of C are small, a situation that arises naturally in interior point methods for optimization.
Abstract: The problem of finding good preconditioners for the numerical solution of a certain important class of indefinite linear systems is considered. These systems are of a 2 by 2 block (KKT) structure in which the (2,2) block (denoted by -C) is assumed to be nonzero. In "Constraint preconditioning for indefinite linear systems", SIAM J. Matrix Anal. Appl. 21 (2000), Keller, Gould and Wathen introduced the idea of using constraint preconditioners that have a specific 2 by 2 block structure for the case of C being zero. We shall give results concerning the spectrum and form of the eigenvectors when a preconditioner of the form considered by Keller, Gould and Wathen is used but the system we wish to solve may have C ≠ 0. In particular, the results presented here indicate clustering of eigenvalues and, hence, faster convergence of Krylov subspace iterative methods when the entries of C are small; such situations arise naturally in interior point methods for optimization, and we present results for such problems which validate our conclusions.

Journal ArticleDOI
TL;DR: An algorithm for the solution of a semismooth system of equations with box constraints is described: an affine-scaling trust-region method that has strong global and local convergence properties under suitable assumptions.
Abstract: An algorithm for the solution of a semismooth system of equations with box constraints is described. The method is an affine-scaling trust-region method. All iterates generated by this method are strictly feasible. In this way, possible domain violations outside or on the boundary of the box are avoided. The method is shown to have strong global and local convergence properties under suitable assumptions, in particular, when the method is used with a special scaling matrix. Numerical results are presented for a number of problems arising from different areas.

Journal ArticleDOI
TL;DR: Numerical results on randomly generated problems suggest that the proposed algorithms may be of great practical interest, and global and local quadratic convergence are proved under nondegeneracy assumptions for both algorithms.
Abstract: Two interior-point algorithms are proposed and analyzed, for the (local) solution of (possibly) indefinite quadratic programming problems. They are of the Newton-KKT variety in that (much like in the case of primal-dual algorithms for linear programming) search directions for the "primal" variables and the Karush-Kuhn-Tucker (KKT) multiplier estimates are components of the Newton (or quasi-Newton) direction for the solution of the equalities in the first-order KKT conditions of optimality or a perturbed version of these conditions. Our algorithms are adapted from previously proposed algorithms for convex quadratic programming and general nonlinear programming. First, inspired by recent work by P. Tseng based on a "primal" affine-scaling algorithm (a la Dikin) [J. of Global Optimization, 30 (2004), no. 2, 285-300], we consider a simple Newton-KKT affine-scaling algorithm. Then, a "barrier" version of the same algorithm is considered, which reduces to the affine-scaling version when the barrier parameter is set to zero at every iteration, rather than to the prescribed value. Global and local quadratic convergence are proved under nondegeneracy assumptions for both algorithms. Numerical results on randomly generated problems suggest that the proposed algorithms may be of great practical interest.

Journal ArticleDOI
TL;DR: It is shown that a choice of the sampling based on low-discrepancy sequences makes it possible to achieve, under suitable hypotheses, an almost linear sample complexity, thus contributing to mitigate the curse of dimensionality of the approximate DP procedure.
Abstract: Dynamic Programming (DP) is known to be a standard optimization tool for solving Stochastic Optimal Control (SOC) problems, either over a finite or an infinite horizon of stages. Under very general assumptions, commonly employed numerical algorithms are based on approximations of the cost-to-go functions, by means of suitable parametric models built from a set of sampling points in the d-dimensional state space. Here the problem of sample complexity, i.e., how "fast" the number of points must grow with the input dimension in order to have an accurate estimate of the cost-to-go functions in typical DP approaches such as value iteration and policy iteration, is discussed. It is shown that a choice of the sampling based on low-discrepancy sequences, commonly used for efficient numerical integration, makes it possible to achieve, under suitable hypotheses, an almost linear sample complexity, thus contributing to mitigate the curse of dimensionality of the approximate DP procedure.
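The sampling idea is easy to reproduce with SciPy's quasi-Monte Carlo module: draw the state-space sample from a scrambled Sobol sequence rather than i.i.d. uniform points, then fit the cost-to-go model on those points. The one-step DP model below is a deliberately trivial placeholder; only the sampling strategy reflects the paper's point.

```python
# Fitted value-iteration sweep on a low-discrepancy (Sobol) state sample.
import numpy as np
from scipy.stats import qmc

d = 4                                       # state-space dimension
sampler = qmc.Sobol(d, scramble=True, seed=0)
X = sampler.random_base2(m=8)               # 2^8 = 256 Sobol points in [0,1)^d

# Toy Bellman update: J_new(x) = min_u [ c(x, u) + J_old(clip(x + u)) ]
J_old = lambda s: np.sum((s - 0.5) ** 2, axis=-1)   # previous cost-to-go model
c = lambda s, u: np.sum(s ** 2, axis=-1) + 10.0 * u ** 2
controls = np.linspace(-0.1, 0.1, 5)
targets = np.min([c(X, u) + J_old(np.clip(X + u, 0.0, 1.0)) for u in controls],
                 axis=0)

# Fit a simple quadratic-in-coordinates model of the new cost-to-go
# on the Sobol sample by linear least squares.
Phi = np.hstack([np.ones((len(X), 1)), X, X ** 2])
coef, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
print(coef.round(3))
```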

Journal ArticleDOI
TL;DR: This paper proposes an iterative method, based on a planar Conjugate Gradient scheme, for the computation of negative curvature directions of an objective function within large scale optimization frameworks.
Abstract: In this paper we deal with the iterative computation of negative curvature directions of an objective function, within large scale optimization frameworks. In particular, suitable directions of negative curvature of the objective function represent an essential tool to guarantee convergence to second order critical points. However, an "adequate" negative curvature direction is often required to have a good resemblance to an eigenvector corresponding to the smallest eigenvalue of the Hessian matrix. Thus, its computation may be a very difficult task on large scale problems. Several strategies proposed in the literature compute such a direction relying on matrix factorizations, so that they may be inefficient or even impracticable in a large scale setting. On the other hand, the iterative methods proposed either need to store a large matrix, or they need to rerun the recurrence. Along these lines, in this paper we propose the use of an iterative method, based on a planar Conjugate Gradient scheme. Under mild assumptions, we provide theory for using the latter method to compute adequate negative curvature directions, within optimization frameworks. Our proposal avoids any matrix storage, along with any additional rerun of the recurrence.

Journal ArticleDOI
TL;DR: An algorithm for solving nonlinear optimization problems with general equality and box constraints based on smoothing of the exact l1-penalty function and solving the resulting problem by any box-constraint optimization method is introduced.
Abstract: We introduce an algorithm for solving nonlinear optimization problems with general equality and box constraints. The proposed algorithm is based on smoothing of the exact l 1-penalty function and solving the resulting problem by any box-constraint optimization method. We introduce a general algorithm and present theoretical results for updating the penalty and smoothing parameter. We apply the algorithm to optimization problems for nonlinear traffic network models and report on numerical results for a variety of network problems and different solvers for the subproblems.
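The basic mechanism is easy to sketch: replace each |h_i(x)| in the exact l1 penalty by a smooth surrogate such as sqrt(h_i(x)^2 + eps), minimize the smoothed penalty function with any box-constrained solver, then increase the penalty and tighten the smoothing. The parameter-update rules below are simple placeholders for the paper's updating theory, and the toy problem is an assumption.

```python
# Smoothed exact-l1-penalty sketch: fold the equality constraint into the
# objective via sqrt(h^2 + eps) and hand the box-constrained problem to
# L-BFGS-B (any box solver would do, as the abstract notes).
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2   # objective
h = lambda x: x[0] + x[1] - 2.0                   # equality constraint h(x) = 0
bounds = [(0.0, 3.0), (0.0, 3.0)]                 # box constraints

x = np.array([0.5, 0.5])
mu, eps = 1.0, 1e-2
for _ in range(8):
    penalized = lambda z: f(z) + mu * np.sqrt(h(z) ** 2 + eps)
    x = minimize(penalized, x, method="L-BFGS-B", bounds=bounds).x
    if abs(h(x)) < 1e-8:
        break
    mu *= 2.0        # strengthen the penalty (placeholder update rule)
    eps *= 0.1       # tighten the smoothing (placeholder update rule)
print(x, h(x))       # expect x near (1.5, 0.5) with h(x) ~ 0
```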

Journal ArticleDOI
TL;DR: The proposed approach derives powerful classification rules by accurately learning from few data, solving a new mixed integer programming model that extends the notion of discrete support vector machines in order to derive an optimal set of separating hyperplanes for binary classification problems.
Abstract: In the context of learning theory many efforts have been devoted to developing classification algorithms able to scale up with massive data problems. In this paper the complementary issue is addressed, aimed at deriving powerful classification rules by accurately learning from few data. This task is accomplished by solving a new mixed integer programming model that extends the notion of discrete support vector machines, in order to derive an optimal set of separating hyperplanes for binary classification problems. According to the cardinality of the set of hyperplanes, the classification region may take the form of a convex polyhedron or a polytope in the original space where the examples are defined. Computational tests on benchmark datasets highlight the effectiveness of the proposed model, that yields the greatest accuracy when compared to other classification approaches.

Journal ArticleDOI
TL;DR: Some preliminary empirical results are presented that illustrate MVE clustering as an appropriate method for clustering data from mixtures of “ellipsoidal” distributions and compare its performance with the k-means clustering algorithm as well as the MCLUST algorithm, available in the statistical package R.
Abstract: We propose minimum volume ellipsoids (MVE) clustering as an alternative clustering technique to k-means for data clusters with ellipsoidal shapes and explore its value and practicality. MVE clustering allocates data points into clusters in a way that minimizes the geometric mean of the volumes of each cluster's covering ellipsoids. Motivations for this approach include its scale-invariance, its ability to handle asymmetric and unequal clusters, and our ability to formulate it as a mixed-integer semidefinite programming problem that can be solved to global optimality. We present some preliminary empirical results that illustrate MVE clustering as an appropriate method for clustering data from mixtures of "ellipsoidal" distributions and compare its performance with the k-means clustering algorithm as well as the MCLUST algorithm (which is based on a maximum likelihood EM algorithm) available in the statistical package R.
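The covering-ellipsoid building block is computable on its own: for a fixed cluster, the minimum-volume enclosing ellipsoid can be found with Khachiyan's classical algorithm, sketched below. This is only the single-cluster subproblem; the paper's mixed-integer semidefinite programming formulation of the full clustering problem is not reproduced here.

```python
# Khachiyan's algorithm for the minimum-volume enclosing ellipsoid
# {x : (x - c)^T A (x - c) <= 1} of a point set (rows of P).
import numpy as np

def mvee(P, tol=1e-4):
    n, d = P.shape
    Q = np.hstack([P, np.ones((n, 1))]).T     # lifted points, (d+1) x n
    u = np.full(n, 1.0 / n)                   # weights on the points
    while True:
        X = Q @ (u[:, None] * Q.T)
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)
        j = M.argmax()
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        if np.linalg.norm(new_u - u) < tol:
            u = new_u
            break
        u = new_u
    c = P.T @ u                               # ellipsoid center
    A = np.linalg.inv(P.T @ (u[:, None] * P) - np.outer(c, c)) / d
    return A, c

rng = np.random.default_rng(0)
pts = rng.standard_normal((40, 2)) @ np.array([[2.0, 0.5], [0.0, 1.0]])
A, c = mvee(pts)
vals = np.einsum('ij,jk,ik->i', pts - c, A, pts - c)
print(vals.max())  # ~1 at support points; all points satisfy vals <= 1 + O(tol)
```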

Journal ArticleDOI
TL;DR: This paper attempts to eliminate the need for defining a priori the proper penalty parameter in GA search for optimal pipe network designs, using the ratio of the best feasible and infeasible designs at each generation to guide the search towards the boundary of the feasible domain by automatically adjusting the value of the penalty parameter.
Abstract: Commercial application of genetic algorithms (GAs) to engineering design problems, including optimal design of pipe networks, could be facilitated by the development of algorithms that require the least amount of parameter tuning. This paper presents an attempt to eliminate the need for defining a priori the proper penalty parameter in GA search for optimal pipe network designs. The method is based on the assumption that the optimal solution of a pipe network design problem lies somewhere on, or near, the boundary of the feasible region. The proposed method uses the ratio of the best feasible and infeasible designs at each generation to guide the direction of the search towards the boundary of the feasible domain by automatically adjusting the value of the penalty parameter. The value of the ratio greater than unity is interpreted as the search being performed in the feasible region and vice versa. The new adapted value of the penalty parameter at each generation is therefore calculated as the product of its current value and the aforementioned ratio. The genetic search so constructed is shown to converge to the boundary of the feasible region irrespective of the starting value of the constraint violation penalty parameter. The proposed method is described here in the context of pipe network optimisation problems but is equally applicable to any other constrained optimisation problem. The effectiveness of the method is illustrated with a benchmark pipe network optimization example from the literature.
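The adaptive rule itself fits in one line of a GA loop: multiply the current penalty parameter by the ratio of the best feasible to the best infeasible objective value in the generation. The sketch below embeds this rule in a deliberately generic real-coded GA on a toy constrained problem; all operators and the test problem are assumptions standing in for a pipe-network design model.

```python
# Self-adjusting penalty inside a generic GA loop, on the toy problem
# minimize x1 + x2 subject to x1*x2 >= 1, 0 <= x <= 4 (optimum at (1, 1),
# on the feasibility boundary). The penalty update mirrors the abstract:
# new penalty = current penalty * (best feasible / best infeasible cost).
import numpy as np

rng = np.random.default_rng(0)
cost = lambda x: x.sum(axis=1)
violation = lambda x: np.maximum(0.0, 1.0 - x[:, 0] * x[:, 1])

pop = rng.uniform(0, 4, (40, 2))
p = 1.0                                   # arbitrary starting penalty
for gen in range(100):
    c, v = cost(pop), violation(pop)
    feas, infeas = v == 0, v > 0
    if feas.any() and infeas.any():
        p *= c[feas].min() / c[infeas].min()   # the adaptive ratio rule
    fitness = c + p * v
    # generic operators: binary tournament, blend crossover, Gaussian mutation
    idx = rng.integers(0, len(pop), (len(pop), 2))
    winners = np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]],
                       idx[:, 0], idx[:, 1])
    parents = pop[winners]
    w = rng.random((len(pop), 1))
    pop = w * parents + (1 - w) * parents[rng.permutation(len(pop))]
    pop = np.clip(pop + 0.1 * rng.standard_normal(pop.shape), 0, 4)

best = pop[(cost(pop) + p * violation(pop)).argmin()]
print(best, best.prod())  # expect x1*x2 close to 1, i.e. on the boundary
```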

Journal ArticleDOI
TL;DR: It is proved that the optimal configuration cannot be two-dimensional, and an upper bound for the distance to the nearest neighbour of every particle, which depends on the position of this particle, is established.
Abstract: We establish new lower bounds on the distance between two points of a minimum energy configuration of N points in $\mathbb{R}^{3}$ interacting according to a pairwise potential function. For the Lennard-Jones case, this bound is 0.67985 (and 0.7633 in the planar case). A similar argument yields an estimate for the minimal distance in Morse clusters, which improves previously known lower bounds. Moreover, we prove that the optimal configuration cannot be two-dimensional, and establish an upper bound for the distance to the nearest neighbour of every particle, which depends on the position of this particle. On the boundary of the optimal configuration polytope this bound is unity, while in the interior it depends on the potential function. In the Lennard-Jones case, we get the value $\sqrt[6]{11/5}\approx 1.1404$. Also, denoting by $V_N$ the global minimum in an N point minimum energy configuration, we prove in Lennard-Jones clusters $\frac{V_{N}}{N}\ge-41.66$ for all $N\ge 2$, while asymptotically $\lim_{N\to\infty}\frac{V_{N}}{N}\le-8.611$ holds (as opposed to $\frac{V_{N}}{N}\ge-8.22$ in the planar case, confirming non-planarity for large N).
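For readers who want to experiment with the quantities being bounded, the snippet below evaluates the total Lennard-Jones energy and the minimum interparticle distance of a configuration, using the reduced pair potential r^-12 - 2r^-6 (minimum -1 at unit distance); the paper's normalization may differ by a scaling.

```python
# Evaluate the total Lennard-Jones energy and minimum pairwise distance
# of an N-point configuration, using the reduced potential r^-12 - 2 r^-6.
import numpy as np
from scipy.spatial.distance import pdist

def lj_energy_and_min_dist(points):
    r = pdist(points)                    # all pairwise distances
    return np.sum(r**-12 - 2.0 * r**-6), r.min()

# A regular tetrahedron with unit edges: all 6 pairs sit at the pair minimum.
tet = np.array([[0, 0, 0], [1, 0, 0],
                [0.5, np.sqrt(3) / 2, 0],
                [0.5, np.sqrt(3) / 6, np.sqrt(2.0 / 3.0)]])
E, dmin = lj_energy_and_min_dist(tet)
print(E, dmin)   # -6.0 and 1.0
```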

Journal ArticleDOI
TL;DR: A stopping criterion deriving from the convergence theory of inexact Potential Reduction methods is analyzed and the possibility of relaxing it is investigated in order to reduce as much as possible the overall computational cost.
Abstract: We focus on the use of adaptive stopping criteria in iterative methods for KKT systems that arise in Potential Reduction methods for quadratic programming. The aim of these criteria is to relate the accuracy in the solution of the KKT system to the quality of the current iterate, to get computational efficiency. We analyze a stopping criterion deriving from the convergence theory of inexact Potential Reduction methods and investigate the possibility of relaxing it in order to reduce as much as possible the overall computational cost. We also devise computational strategies to face a possible slowdown of convergence when an insufficient accuracy is required.