
Showing papers on "Nonlinear programming published in 1996"


Book
01 Jan 1996
TL;DR: This book reviews mathematical background (methods of proof, vector spaces and matrices, calculus), then covers set-constrained and unconstrained optimization, linear programming, and nonlinear constrained optimization, including problems with equality and inequality constraints.
Abstract: Preface. MATHEMATICAL REVIEW. Methods of Proof and Some Notation. Vector Spaces and Matrices. Transformations. Concepts from Geometry. Elements of Calculus. UNCONSTRAINED OPTIMIZATION. Basics of Set-Constrained and Unconstrained Optimization. One-Dimensional Search Methods. Gradient Methods. Newton's Method. Conjugate Direction Methods. Quasi-Newton Methods. Solving Ax = b. Unconstrained Optimization and Neural Networks. Genetic Algorithms. LINEAR PROGRAMMING. Introduction to Linear Programming. Simplex Method. Duality. Non-Simplex Methods. NONLINEAR CONSTRAINED OPTIMIZATION. Problems with Equality Constraints. Problems with Inequality Constraints. Convex Optimization Problems. Algorithms for Constrained Optimization. References. Index.

3,283 citations


Journal ArticleDOI
TL;DR: Difficulties connected with solving the general nonlinear programming problem are discussed; several approaches that have emerged in the evolutionary computation community are surveyed; and a set of 11 interesting test cases is provided that may serve as a handy reference for future methods.
Abstract: Evolutionary computation techniques have received a great deal of attention regarding their potential as optimization techniques for complex numerical functions. However, they have not produced a significant breakthrough in the area of nonlinear programming due to the fact that they have not addressed the issue of constraints in a systematic way. Only recently have several methods been proposed for handling nonlinear constraints by evolutionary algorithms for numerical optimization problems; however, these methods have several drawbacks, and the experimental results on many test cases have been disappointing. In this paper we (1) discuss difficulties connected with solving the general nonlinear programming problem; (2) survey several approaches that have emerged in the evolutionary computation community; and (3) provide a set of 11 interesting test cases that may serve as a handy reference for future methods.
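A static penalty is the simplest of the constraint-handling approaches surveyed. A minimal sketch of the idea, using a (1+1) evolution strategy on a toy problem (the problem, penalty weight, and step size are illustrative, not taken from the paper):

```python
import numpy as np

# Constraint handling via a static penalty added to the objective.
# Problem: minimize f(x) = x0^2 + x1^2  subject to g(x) = 1 - x0 - x1 <= 0.
def penalized(x, r=1e3):
    f = x[0]**2 + x[1]**2
    g = max(0.0, 1.0 - x[0] - x[1])   # constraint violation
    return f + r * g**2               # quadratic penalty on the violation

rng = np.random.default_rng(0)

# (1+1) evolution strategy: mutate, keep the better of parent and child.
x = rng.uniform(-2, 2, size=2)
for _ in range(5000):
    child = x + rng.normal(scale=0.1, size=2)
    if penalized(child) < penalized(x):
        x = child

# The constrained optimum is x = (0.5, 0.5) with f = 0.5.
print(x, x[0]**2 + x[1]**2)
```

The penalty weight trades feasibility against objective quality, which is exactly the tuning difficulty the paper discusses.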

1,620 citations


ReportDOI
01 Mar 1996
TL;DR: An algorithm for solving large nonlinear optimization problems with simple bounds is described; it is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function.
Abstract: An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
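The bound-constrained algorithm described here underlies the widely used L-BFGS-B code, a descendant of which SciPy exposes through `scipy.optimize.minimize`. A minimal usage sketch on a bound-constrained test problem:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize the 2-D Rosenbrock function under simple bounds with L-BFGS-B
# (gradient projection plus a limited-memory BFGS Hessian approximation).
def rosen(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosen_grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

res = minimize(rosen, x0=[0.0, 0.0], jac=rosen_grad,
               method="L-BFGS-B", bounds=[(-1.5, 1.5), (-0.5, 2.5)])
print(res.x)   # the unconstrained optimum (1, 1) lies inside the bounds
```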

1,581 citations


Journal ArticleDOI
TL;DR: A problem-specific genetic algorithm (GA) is developed and demonstrated to analyze series-parallel systems and to determine the optimal design configuration when there are multiple component choices available for each of several k-out-of-n:G subsystems.
Abstract: A problem-specific genetic algorithm (GA) is developed and demonstrated to analyze series-parallel systems and to determine the optimal design configuration when there are multiple component choices available for each of several k-out-of-n:G subsystems. The problem is to select components and redundancy levels to optimize some objective function, given system-level constraints on reliability, cost, and/or weight. Previous formulations of the problem have implicit restrictions concerning the type of redundancy allowed, the number of available component choices, and whether mixing of components is allowed. The GA is a robust evolutionary optimization search technique with very few restrictions concerning the type or size of the design problem. The solution approach is to solve the dual of a nonlinear optimization problem by using a dynamic penalty function. The GA performs very well on two types of problems: (1) the redundancy allocation problem originally proposed by Fyffe, Hines, and Lee; and (2) randomly generated problems with more complex k-out-of-n:G configurations.
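For independent, identical components, the reliability of a single k-out-of-n:G subsystem is a binomial tail sum (the subsystem works iff at least k of its n components work). A sketch with illustrative numbers:

```python
from math import comb

# Reliability of a k-out-of-n:G subsystem with n independent components,
# each of reliability p: P(at least k of n work), a binomial tail sum.
def k_out_of_n_reliability(k, n, p):
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# A 2-out-of-3:G subsystem with p = 0.9 per component:
# R = 3 * 0.9^2 * 0.1 + 0.9^3 = 0.972
print(k_out_of_n_reliability(2, 3, 0.9))
```

Mixing component types within a subsystem, as the GA formulation allows, replaces this closed form with a sum over component-type combinations.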

777 citations


Journal ArticleDOI
TL;DR: An improved genetic algorithm (GA) formulation for pipe network optimization is developed; it found a solution for the New York tunnels problem that is the lowest-cost feasible discrete-size solution yet presented in the literature.
Abstract: An improved genetic algorithm (GA) formulation for pipe network optimization has been developed. The new GA uses variable power scaling of the fitness function. The exponent introduced into the fitness function is increased in magnitude as the GA computer run proceeds. In addition to the more commonly used bitwise mutation operator, an adjacency or creeping mutation operator is introduced. Finally, Gray codes rather than binary codes are used to represent the set of decision variables which make up the pipe network design. Results are presented comparing the performance of the traditional or simple GA formulation and the improved GA formulation for the New York City tunnels problem. The case study results indicate the improved GA performs significantly better than the simple GA. In addition, the improved GA performs better than previously used traditional optimization methods such as linear, dynamic, and nonlinear programming methods and an enumerative search method. The improved GA found a solution for the New York tunnels problem which is the lowest-cost feasible discrete-size solution yet presented in the literature.
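The Gray-code encoding mentioned above can be sketched in a few lines; adjacent integers differ in exactly one bit, which keeps a creeping (adjacency) mutation of a decision variable local in the chromosome:

```python
# Gray-code conversion, as used in the improved GA encoding.
def to_gray(n: int) -> int:
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:          # fold the bits back down to recover the integer
        n ^= g
        g >>= 1
    return n

# Consecutive values 3 (binary 011) and 4 (binary 100) differ in three bits
# in binary but in exactly one bit in Gray code: 010 vs 110.
print(bin(to_gray(3)), bin(to_gray(4)))
assert all(from_gray(to_gray(i)) == i for i in range(64))
```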

507 citations


Journal ArticleDOI
TL;DR: Physical programming is a new approach to realistic design optimization that may be appealing to the design engineer in an industrial setting; it provides the means to reliably employ optimization with minimal prior knowledge thereof.
Abstract: A new effective and computationally efficient approach for design optimization, hereby entitled physical programming, is developed. This new approach is intended to substantially reduce the computational intensity of large problems and to place the design process into a more flexible and natural framework. Knowledge of the desired attributes of the optimal design is judiciously exploited. For each attribute of interest to the designer (each criterion), regions are defined that delineate degrees of desirability : unacceptable, highly undesirable, undesirable, tolerable, desirable, and highly desirable. This approach completely eliminates the need for iterative weight setting, which is the object of the typical computational bottleneck in large design optimization problems. Two key advantages of physical programming are 1) once the designer's preferences are articulated, obtaining the corresponding optimal design is a noniterative process-in stark contrast to conventional weight-based methods and 2) it provides the means to reliably employ optimization with minimal prior knowledge thereof. The mathematical infrastructure that supports the physical programming design optimization framework is developed, and a numerical example provided. Physical programming is a new approach to realistic design optimization that may be appealing to the design engineer in an industrial setting.
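The degrees-of-desirability idea can be illustrated with a toy classification for one smaller-is-better criterion (the boundary values are hypothetical; in physical programming proper, these regions are mapped into a preference function rather than merely labeled):

```python
import bisect

# One criterion's axis partitioned into preference regions by designer-
# supplied boundary values (all numbers here are hypothetical).
REGIONS = ["highly desirable", "desirable", "tolerable",
           "undesirable", "highly undesirable", "unacceptable"]
BOUNDARIES = [10.0, 20.0, 30.0, 40.0, 50.0]   # smaller-is-better criterion

def classify(value: float) -> str:
    # bisect_left finds which region the criterion value falls into.
    return REGIONS[bisect.bisect_left(BOUNDARIES, value)]

print(classify(12.0))   # desirable
print(classify(55.0))   # unacceptable
```

Articulating preferences this way, per criterion, is what removes the iterative weight tuning the abstract describes.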

496 citations


Journal ArticleDOI
TL;DR: Numerical integration, collocation, direct transcription, and differential inclusion are categorized in this paper by their numerical integration technique, the order of the integration technique, and the unknowns of the parameter optimization problem.
Abstract: Several methods exist for converting optimal control problems into parameter optimization problems. Numerical integration, collocation, direct transcription, and differential inclusion are examples of these conversion methods. Because of the similarities of these methods, they are categorized in this paper by their numerical integration technique, the order of the integration technique, and the unknowns of the parameter optimization problem. The integration techniques are divided into explicit and implicit approaches, and the unknowns are some combination of design parameters, control parameters, and state parameters. The method called numerical integration has the controls as unknowns and uses explicit integration. Collocation and direct transcription have the controls and states as unknowns and use implicit integration. Differential inclusion has the states as unknowns and uses implicit integration.
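A minimal direct-transcription sketch in the paper's taxonomy (implicit trapezoidal integration, states and controls as unknowns), applied to a standard double-integrator energy-minimization problem; the discretization choices and solver are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Direct transcription of:  min ∫ u(t)^2 dt,  x' = v, v' = u,
# x(0) = v(0) = 0, x(1) = 1, v(1) = 0.  Analytic optimum: u = 6 - 12t, cost 12.
N = 11                  # grid nodes
h = 1.0 / (N - 1)

def split(z):
    return z[:N], z[N:2*N], z[2*N:]      # states x, v and control u

def objective(z):
    _, _, u = split(z)
    w = np.full(N, h); w[0] = w[-1] = h / 2   # trapezoidal quadrature weights
    return np.sum(w * u**2)

def defects(z):
    x, v, u = split(z)
    dx = x[1:] - x[:-1] - h / 2 * (v[1:] + v[:-1])   # trapezoidal defects
    dv = v[1:] - v[:-1] - h / 2 * (u[1:] + u[:-1])
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]            # boundary conditions
    return np.concatenate([dx, dv, bc])

res = minimize(objective, np.zeros(3 * N), method="SLSQP",
               constraints={"type": "eq", "fun": defects},
               options={"maxiter": 200})
print(res.fun)   # close to the analytic cost 12 (up to discretization error)
```

The defect constraints are the "implicit integration" of the abstract: the dynamics enter the nonlinear program as equality constraints rather than being integrated forward explicitly.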

372 citations


Journal ArticleDOI
TL;DR: Valid inequalities and range contraction techniques that can be used to reduce the size of the search space of global optimization problems are presented and incorporated within the branch-and-bound framework to result in a branch-and-reduce global optimization algorithm.
Abstract: This paper presents valid inequalities and range contraction techniques that can be used to reduce the size of the search space of global optimization problems. To demonstrate the algorithmic usefulness of these techniques, we incorporate them within the branch-and-bound framework. This results in a branch-and-reduce global optimization algorithm. A detailed discussion of the algorithm components and theoretical properties is provided. Specialized algorithms for polynomial and multiplicative programs are developed. Extensive computational results are presented for engineering design problems, standard global optimization test problems, univariate polynomial programs, linear multiplicative programs, mixed-integer nonlinear programs and concave quadratic programs. For the problems solved, the computer implementation of the proposed algorithm provides very accurate solutions in modest computational time.

343 citations


01 Jan 1996
TL;DR: The formulation of the primal-dual interior-point method for linear programming is studied in detail and extended to general nonlinear programming; the extended algorithm is shown to be locally and Q-quadratically convergent under only the standard Newton method assumptions, and a global convergence theory is established.
Abstract: In this work, we first study in detail the formulation of the primal-dual interior-point method for linear programming. We show that, contrary to popular belief, it cannot be viewed as a damped Newton method applied to the Karush-Kuhn-Tucker conditions for the logarithmic barrier function problem. Next, we extend the formulation to general nonlinear programming, and then validate this extension by demonstrating that this algorithm can be implemented so that it is locally and Q-quadratically convergent under only the standard Newton method assumptions. We also establish a global convergence theory for this algorithm and include promising numerical experimentation.

337 citations


Journal ArticleDOI
TL;DR: This work studies in detail the formulation of the primal-dual interior-point method for linear programming, extends the formulation to general nonlinear programming, and proves that this algorithm can be implemented so that it is locally and Q-quadratically convergent under only the standard Newton method assumptions.
Abstract: In this work, we first study in detail the formulation of the primal-dual interior-point method for linear programming. We show that, contrary to popular belief, it cannot be viewed as a damped Newton method applied to the Karush-Kuhn-Tucker conditions for the logarithmic barrier function problem. Next, we extend the formulation to general nonlinear programming, and then validate this extension by demonstrating that this algorithm can be implemented so that it is locally and Q-quadratically convergent under only the standard Newton method assumptions. We also establish a global convergence theory for this algorithm and include promising numerical experimentation.
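The perturbed KKT system at the heart of such methods can be sketched with a toy path-following iteration on a two-variable LP (illustrative code only, not the algorithm analyzed in the paper):

```python
import numpy as np

# Primal-dual path following for  min c'x  s.t.  Ax = b, x >= 0:
# damped Newton steps on the perturbed KKT system
#   Ax = b,  A'y + s = c,  x_i * s_i = sigma * mu.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])          # optimum: x = (1, 0), objective 1
m, n = A.shape

x = np.ones(n); s = np.ones(n); y = np.zeros(m)
for _ in range(30):
    mu = x @ s / n                 # duality measure
    if mu < 1e-10:
        break
    sigma = 0.1                    # centering parameter
    # Assemble and solve the full Newton system in (dx, dy, ds).
    K = np.zeros((2 * n + m, 2 * n + m))
    K[:m, :n] = A
    K[m:m + n, n:n + m] = A.T
    K[m:m + n, n + m:] = np.eye(n)
    K[m + n:, :n] = np.diag(s)
    K[m + n:, n + m:] = np.diag(x)
    rhs = np.concatenate([b - A @ x,
                          c - A.T @ y - s,
                          sigma * mu - x * s])
    d = np.linalg.solve(K, rhs)
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    # Fraction-to-boundary damping keeps x and s strictly positive.
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
    x += alpha * dx; y += alpha * dy; s += alpha * ds

print(x, c @ x)   # approaches x = (1, 0) with objective 1
```

The damping here is a step-length safeguard, not the damped Newton interpretation that the paper shows is actually invalid for the barrier-KKT system.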

328 citations


Book
30 Nov 1996
TL;DR: This book treats the two-level mathematical programming problem and the Stackelberg problem in its general and its linear and convex cases, building on differentiable and nondifferentiable nonlinear programming, linear programming, and optimal-value functions.
Abstract: Preface. 1. Introduction. 2. Mathematical Preliminaries. 3. Differentiable Nonlinear Programming. 4. Nondifferentiable Nonlinear Programming. 5. Linear Programming. 6. Optimal-Value Functions. 7. Two-Level Mathematical Programming Problem. 8. Large-Scale Nonlinear Programming: Decomposition Methods. 9. Min-Max Problem. 10. Satisfaction Optimization Problem. 11. Two-Level Design Problem (Mathematical Programming with Optimal-Value Functions). 12. General Resource Allocation Problem for Decentralized Systems. 13. Min-Max Type Multi-Objective Programming Problem. 14. Best Approximation Problem by Chebyshev Norm. 15. The Stackelberg Problem: General Case. 16. The Stackelberg Problem: Linear and Convex Case. References. Index.

Journal ArticleDOI
TL;DR: Linear and nonlinear variational inequality problems over a polyhedral convex set are analyzed parametrically and Robinson's notion of strong regularity, as a criterion for the solution set to be a singleton depending Lipschitz continuously on the parameters, is characterized in terms of a new "critical face" condition.
Abstract: Linear and nonlinear variational inequality problems over a polyhedral convex set are analyzed parametrically. Robinson's notion of strong regularity, as a criterion for the solution set to be a singleton depending Lipschitz continuously on the parameters, is characterized in terms of a new "critical face" condition and in other ways. The consequences for complementarity problems are worked out as a special case. Application is also made to standard nonlinear programming problems with parameters that include the canonical perturbations. In that framework a new characterization of strong regularity is obtained for the variational inequality associated with the Karush-Kuhn-Tucker conditions.

Journal ArticleDOI
TL;DR: In this article, a new algorithm for computing quantile regression estimates for problems in which the response function is nonlinear in parameters is described, and the algorithm is closely related to recent developments on interior point methods for solving linear programs.


Journal ArticleDOI
TL;DR: Three nonlinear optimization algorithms, namely, the Hooke and Jeeves' method, the quasi-Newton, and conjugate gradient search procedures are investigated for solving a radio communication system design problem that seeks an optimal location of a single transmitter, or that of multiple transmitters, in order to serve a specified distribution of receivers.
Abstract: This paper is concerned with the mathematical modeling and analysis of a radio communication system design problem that seeks an optimal location of a single transmitter, or that of multiple transmitters, in order to serve a specified distribution of receivers. The problem is modeled by discretizing the radio coverage region into a grid of receiver locations and by specifying a function that estimates the path-loss or signal attenuation for each receiver location, given a particular location for a transmitter that communicates with it. The resulting model is a nonlinear programming problem having an implicitly defined objective function of minimizing a measure of weighted path-losses. Specializations of three nonlinear optimization algorithms, namely, the Hooke and Jeeves' method, the quasi-Newton, and conjugate gradient search procedures are investigated for solving this problem. The technique described here is intended to interact with various propagation prediction models and may be used in a CAD system for radio communication system design.
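The modeling idea can be sketched as follows, with a log-distance path-loss stand-in and Nelder-Mead replacing the Hooke and Jeeves pattern search (the receiver grid, weights, and propagation exponent are all illustrative, not the paper's propagation models):

```python
import numpy as np
from scipy.optimize import minimize

# Discretized coverage region: receiver locations with traffic weights.
rng = np.random.default_rng(1)
receivers = rng.uniform(0, 100, size=(50, 2))   # receiver grid points (m)
weights = rng.uniform(0.5, 1.5, size=50)        # per-receiver weights

def path_loss_db(tx, rx, n_exp=3.5):
    d = max(np.linalg.norm(tx - rx), 1.0)       # avoid log(0) at the site
    return 10 * n_exp * np.log10(d)             # log-distance path-loss model

def weighted_loss(tx):
    # Implicitly defined objective: weighted sum of per-receiver path losses.
    return sum(w * path_loss_db(tx, r) for w, r in zip(weights, receivers))

# Derivative-free search for a single transmitter location.
res = minimize(weighted_loss, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print(res.x)   # transmitter lands near the weighted "center" of the receivers
```

Swapping `path_loss_db` for a real propagation prediction model is exactly the interaction with CAD tools the abstract envisions.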

Journal ArticleDOI
TL;DR: In this paper, an efficient Monte-Carlo method for locating the critical slip surface is presented; the procedure is articulated in a sequence of stages, where each new slip surface is randomly generated by an appropriate technique.
Abstract: The search for the critical slip surface in slope-stability analysis is performed by means of a minimization of the safety factor. The procedures most widely used are deterministic methods of nonlinear programming, and random search methods have been neglected, since they are considered to be generally less efficient. In this paper, an efficient Monte-Carlo method for locating the critical slip surface is presented. The procedure is articulated in a sequence of stages, where each new slip surface is randomly generated by an appropriate technique. From a comparative analysis, the proposed method provides solutions of the same quality as the best nonlinear programming methods. However, the structure of the presented method is very simple, and it can be more easily programmed, integrated, and modified for particular exigencies.
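The stagewise random-generation idea can be sketched generically; here `f` is a toy objective, where the slope-stability application would instead evaluate the safety factor of a candidate slip surface:

```python
import numpy as np

# Generic stagewise Monte-Carlo minimization: at each stage, sample
# candidates around the current best point from a neighborhood that
# shrinks as the stages proceed.
def monte_carlo_search(f, x0, stages=20, samples=200, radius=2.0, seed=0):
    rng = np.random.default_rng(seed)
    best_x, best_f = np.asarray(x0, float), f(x0)
    for stage in range(stages):
        r = radius * 0.7**stage                  # shrinking neighborhood
        for _ in range(samples):
            cand = best_x + rng.uniform(-r, r, size=best_x.size)
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
    return best_x, best_f

# Toy objective with known minimum at (1, -2).
f = lambda x: (x[0] - 1)**2 + (x[1] + 2)**2
x, fx = monte_carlo_search(f, [0.0, 0.0])
print(x, fx)
```

As the abstract notes, the appeal of such a scheme is not raw efficiency but how easily it is programmed and modified for problem-specific generation rules.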

Journal ArticleDOI
TL;DR: Two formulations of a nonlinear model predictive control scheme based on the second-order Volterra series model are presented and the first formulation determines the control action using successive substitution, and the second method directly solves a fourth-order nonlinear programming problem on-line.

Journal ArticleDOI
01 Mar 1996
TL;DR: A novel global minimization method, called NOVEL (Nonlinear Optimization via External Lead), is proposed, and its superior performance on neural network learning problems is demonstrated.
Abstract: We propose a novel global minimization method, called NOVEL (Nonlinear Optimization via External Lead), and demonstrate its superior performance on neural network learning problems. The goal is improved learning of application problems that achieves either smaller networks or less error-prone networks of the same size. This training method combines global and local searches to find a good local minimum. In benchmark comparisons against the best global optimization algorithms, it demonstrates superior performance improvement.
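The global-plus-local combination can be illustrated with a plain multistart (this is not the NOVEL trace-based method itself, only the generic idea of coupling global exploration with local refinement):

```python
import numpy as np
from scipy.optimize import minimize

# Multistart: global phase = random starting points, local phase = BFGS.
def multistart(f, bounds, n_starts=20, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(bounds[:, 0], bounds[:, 1])  # global exploration
        res = minimize(f, x0, method="BFGS")          # local refinement
        if best is None or res.fun < best.fun:
            best = res
    return best

# Rastrigin function: highly multimodal, global minimum 0 at the origin.
f = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
bounds = np.array([[-5.12, 5.12]] * 2)
best = multistart(f, bounds)
print(best.x, best.fun)
```

NOVEL's contribution is to make the global phase systematic (a trace leading the search) rather than independent random restarts as here.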

Journal ArticleDOI
TL;DR: The experimental analysis indicates that ignoring the transaction costs results in inefficient portfolios, and that there is no statistically significant difference in portfolio performance among different methods of estimating the expected return of securities when transaction costs are included in the portfolio return.
Abstract: Transaction costs are a source of concern for portfolio managers. Due to the nonlinearity of the cost function, the ordinary quadratic programming solution technique cannot be applied. This paper addresses the portfolio optimization problem subject to transaction costs. The transaction cost is assumed to be a V-shaped function of the difference between an existing and a new portfolio. A nonlinear programming solution technique is used to solve the proposed problem. A portfolio optimization system called POSTRAC (Portfolio Optimization System with TRAnsaction Costs) is proposed. The experimental analysis indicates that ignoring the transaction costs results in inefficient portfolios. It is also shown that there is no statistically significant difference in portfolio performance among different methods of estimating the expected return of securities when transaction costs are included in the portfolio return.
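The formulation can be sketched with a general nonlinear programming solver: a mean-variance objective plus a V-shaped (proportional) cost on the trade |x - x_old|, which is nonsmooth and hence outside ordinary quadratic programming. All data below are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])              # expected returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])           # return covariance
x_old = np.array([1/3, 1/3, 1/3])              # existing portfolio
lam, cost = 2.0, 0.01                          # risk aversion, cost rate

def objective(x):
    # V-shaped transaction cost |x - x_old| charged against the return.
    ret = mu @ x - cost * np.sum(np.abs(x - x_old))
    risk = x @ cov @ x
    return lam * risk - ret                    # minimize risk minus net return

cons = [{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}]   # fully invested
res = minimize(objective, x_old, method="SLSQP",
               bounds=[(0, 1)] * 3, constraints=cons)
print(res.x, res.x.sum())   # rebalanced weights summing to 1
```

In practice the V-shaped term is often split into buy/sell variables to restore smoothness; the direct nonsmooth form above is kept for brevity.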

Book
16 Aug 1996
TL;DR: In this paper, the authors present dynamic network models for urban transportation networks and propose a solution algorithm for an ideal route choice model based on the Frank-Wolfe method, in which an LP subproblem is solved at each iteration.
Abstract: I Dynamic Transportation Network Analysis.- 1 Introduction.- 1.1 Requirements for Dynamic Modeling.- 1.2 Urban Transportation Network Analysis.- 1.3 Overview of Dynamic Network Models.- 1.4 Hierarchy of Dynamic Network Models.- 1.5 Notes.- II Mathematical Background.- 2 Variational Inequalities and Continuous Optimal Control.- 2.1 Variational Inequality Problems.- 2.1.1 Definitions.- 2.1.2 Existence and Uniqueness.- 2.1.3 Relaxation Algorithm.- 2.2 Continuous Optimal Control Problems.- 2.2.1 Definitions.- 2.2.2 No Constraints.- 2.2.3 Equality and Inequality Constraints.- 2.2.4 Equality and Nonnegativity Constraints.- 2.3 Hierarchical Optimal Control Problems.- 2.3.1 Static Two-Person Games.- 2.3.2 Dynamic Games.- 2.3.3 Bilevel Optimal Control Problems.- 2.4 Notes.- 3 Discrete Optimal Control and Nonlinear Programming.- 3.1 Discrete Optimal Control Problems.- 3.1.1 No Constraints.- 3.1.2 Equality and Inequality Constraints.- 3.1.3 Equality and Nonnegativity Constraints.- 3.2 Nonlinear Programming Problems.- 3.2.1 Unconstrained Minimization.- 3.2.2 General Constraints.- 3.2.3 Linear Equality and Nonnegativity Constraints.- 3.2.4 Discrete Optimal Control and Nonlinear Programs.- 3.3 Solution Algorithms.- 3.3.1 One Dimensional Minimization.- 3.3.2 Frank-Wolfe Algorithm.- 3.4 Notes.- III Deterministic Dynamic Route Choice.- 4 Network Flow Constraints and Definitions of Travel Times.- 4.1 Flow Conservation Constraints.- 4.2 Definitions.- 4.3 Flow Propagation Constraints.- 4.3.1 Type I.- 4.3.2 Type II.- 4.3.3 Type III.- 4.4 First-In-First-Out Constraints.- 4.5 Link Capacity and Oversaturation.- 4.5.1 Maximal Number of Vehicles on a Link.- 4.5.2 Maximal Exit Flow from a Link.- 4.5.3 Constraints for Spillback.- 4.6 Notes.- 5 Ideal Dynamic Route Choice Models.- 5.1 An Example with Two Parallel Routes.- 5.2 Definition of an Ideal State.- 5.3 A Route-Time-Based Model.- 5.3.1 Route-Time-Based Conditions.- 5.3.2 Dynamic Network Constraints.- 5.3.3 The Variational 
Inequality Problem.- 5.4 A Link-Time-Based Model.- 5.4.1 Link-Time-Based Conditions.- 5.4.2 The Variational Inequality Problem.- 5.5 A Numerical Example.- 5.6 A Multi-Class Route-Cost-Based Model.- 5.6.1 Multi-Class Route-Cost-Based Conditions.- 5.6.2 Dynamic Network Constraints.- 5.6.3 The Variational Inequality Problem.- 5.7 A Multi-Class Link-Cost-Based Model.- 5.7.1 Multi-Class Link-Cost-Based Conditions.- 5.7.2 The Variational Inequality Problem.- 5.8 Notes.- 6 A Solution Algorithm for an Ideal Route Choice Model.- 6.1 Statement of the Algorithm.- 6.1.1 Discrete VI Model for the Link-Time-Based Case.- 6.1.2 Relaxation Procedure and Optimization Problem.- 6.1.3 The Frank-Wolfe Method.- 6.2 Solving the LP Subproblem.- 6.3 Computational Experience.- 6.4 Notes.- 7 Instantaneous Dynamic Route Choice Models.- 7.1 Definition of an Instantaneous State.- 7.2 A Route-Time-Based Model.- 7.2.1 Route-Time-Based Conditions.- 7.2.2 Dynamic Network Constraints.- 7.2.3 The Variational Inequality Problem.- 7.3 A Link-Time-Based Model.- 7.3.1 Link-Time-Based Conditions.- 7.3.2 The Variational Inequality Problem.- 7.4 Solution Algorithm.- 7.4.1 Discrete VI Model for the Link-Time-Based Case.- 7.4.2 Relaxation Procedure and Optimization Program.- 7.4.3 The Frank-Wolfe Method.- 7.4.4 Numerical Example.- 7.5 Notes.- 8 Extensions of Instantaneous Route Choice Models.- 8.1 Optimal Control Model 1.- 8.1.1 Model Formulation.- 8.1.2 Optimality Conditions.- 8.1.3 DUO Equivalence Analysis.- 8.2 Optimal Control Model 2.- 8.2.1 Model Formulation.- 8.2.2 Optimality Conditions.- 8.3 A Multi-Class Route-Cost-Based Model.- 8.3.1 Multi-Class Route-Cost-Based Conditions.- 8.3.2 Dynamic Network Constraints.- 8.3.3 The Variational Inequality Problem.- 8.4 A Multi-Class Link-Cost-Based Model.- 8.4.1 Multi-Class Link-Cost-Based Conditions.- 8.4.2 The Variational Inequality Problem.- 8.5 Notes.- IV Stochastic Dynamic Route Choice.- 9 Ideal Stochastic Dynamic Route Choice Models.- 9.1 Redefinition of 
Dynamic Travel Times.- 9.2 Formulation of the Model.- 9.2.1 Network Constraints.- 9.2.2 Stochastic Route Choice and the Ideal SDUO State.- 9.2.3 Two Popular Route Choice Functions.- 9.2.4 Ideal Route Choice Conditions and VI Problem.- 9.2.5 Analysis of Dispersed Route Choice.- 9.3 Solution Algorithm.- 9.3.1 The Discrete Variational Inequality Problem.- 9.3.2 The Relaxation Method.- 9.3.3 Method of Successive Averages.- 9.3.4 Summary of the Solution Algorithm.- 9.3.5 A Logit-Based Ideal Stochastic Loading.- 9.3.6 Proof of the Algorithm.- 9.4 Numerical Example.- 9.5 Notes.- 10 Instantaneous Stochastic Dynamic Route Choice Models.- 10.1 Formulation of the Model.- 10.1.1 Network Constraints.- 10.1.2 Definition of an Instantaneous SDUO State.- 10.1.3 Instantaneous Route Choice Conditions and VI Problem.- 10.2 Solution Algorithm.- 10.2.1 The Discrete Variational Inequality Problem.- 10.2.2 The Relaxation Method.- 10.2.3 Method of Successive Averages.- 10.2.4 Summary of the Solution Algorithm.- 10.2.5 A Logit-Based Instantaneous Stochastic Loading.- 10.2.6 Proof of the Algorithm.- 10.3 An Instantaneous Optimal Control Model.- 10.4 Numerical Example.- 10.5 Notes.- V General Dynamic Travel Choices.- 11 Combined Departure Time/Route Choice Models.- 11.1 Additional Network Constraints.- 11.2 A Route-Based Model.- 11.2.1 Route-Based Conditions.- 11.2.2 Dynamic Network Constraints.- 11.2.3 The Variational Inequality Problem.- 11.3 A Link-Based Model.- 11.3.1 Link-Based Conditions.- 11.3.2 The Variational Inequality Problem.- 11.4 Solution Algorithm and An Example.- 11.4.1 Discrete Variational Inequality Problem.- 11.4.2 Relaxation Procedure and Optimization Problem.- 11.4.3 Numerical Example.- 11.5 Notes.- 12 Combined Mode/Departure Time/Route Choice Models.- 12.1 The Combined Travel Choice Problem.- 12.2 Individual Travel Choice Problems.- 12.2.1 Mode Choice Problem.- 12.2.2 Departure Time/Route Choice for Motorists.- 12.3 The Link-Time-Based Model.- 12.3.1 Network 
Constraints.- 12.3.2 The Variational Inequality Problem.- 12.4 Notes.- VI Implications for ITS.- 13 Link Travel Time Functions for Dynamic Network Models.- 13.1 Functions for Various Purposes.- 13.2 Stochastic Link Travel Time Functions.- 13.2.1 Moving Queue Concept.- 13.2.2 Cruise Time.- 13.2.3 Delay and Link Travel Time Functions.- 13.3 Deterministic Link Travel Time Functions.- 13.4 Implications of the Proposed Functions.- 13.4.1 Number of Link Flow Variables.- 13.4.2 Physical Constraints for Link Traffic Flow.- 13.4.3 Notes on Functions for Arterial Links.- 13.5 Functions for Freeway Segments.- 13.6 Notes.- 14 Implementation in Intelligent Transportation Systems.- 14.1 Implementation Issues.- 14.1.1 Traffic Prediction.- 14.1.2 Dynamic Route Guidance.- 14.1.3 Integrated Traffic Control/Information System.- 14.1.4 Incident Management.- 14.1.5 Congestion Pricing.- 14.1.6 Operations and Control for AHS.- 14.1.7 Transportation Planning.- 14.2 Practical Considerations.- 14.2.1 Rolling Horizon Implementation.- 14.2.2 Traveler Knowledge of Information.- 14.2.3 Response to Current and Anticipated Conditions.- 14.2.4 Flow-Based vs. Vehicle-Based Models.- 14.2.5 Different Types of Travelers.- 14.3 Data Requirements.- 14.3.1 Time-Dependent O-D Matrices.- 14.3.2 Network Geometry and Control Data.- 14.3.3 Traffic Flow Data.- 14.3.4 Traveler Information.- 14.4 Notes.- References.- Author Index.- List of Figures.- List of Tables.

Journal ArticleDOI
TL;DR: A restricted DP heuristic (a generalization of the nearest-neighbor heuristic) is presented that can include all the above considerations and solve much larger problems, but it cannot guarantee optimality.

Journal ArticleDOI
TL;DR: This paper presents two general algorithms for simulated annealing that have been applied to the job shop scheduling problem and the traveling salesman problem; it is observed that superlinear speedups can be achieved using the algorithms.

Journal ArticleDOI
TL;DR: It is shown that the original problem is equivalent to a convex minimization problem with simple linear constraints, and that a special problem of minimizing a concave quadratic function subject to finitely many convex quadratic constraints is equivalent to a minimax convex problem.
Abstract: We consider the problem of minimizing an indefinite quadratic objective function subject to two-sided indefinite quadratic constraints. Under a suitable simultaneous diagonalization assumption (which trivially holds for trust region type problems), we prove that the original problem is equivalent to a convex minimization problem with simple linear constraints. We then consider a special problem of minimizing a concave quadratic function subject to finitely many convex quadratic constraints, which is also shown to be equivalent to a minimax convex problem. In both cases we derive the explicit nonlinear transformations which allow for recovering the optimal solution of the nonconvex problems via their equivalent convex counterparts. Special cases and applications are also discussed. We outline interior-point polynomial-time algorithms for the solution of the equivalent convex programs.

Proceedings ArticleDOI
08 Nov 1996
TL;DR: This talk discusses the stochastic counterpart (sample path) method where a relatively large sample is generated and the expected value function is approximated by the corresponding average function, and the obtained approximation problem is solved by deterministic methods of nonlinear programming.
Abstract: In this talk we consider a problem of optimizing an expected value function by Monte Carlo simulation methods. We discuss in some detail the stochastic counterpart (sample path) method, where a relatively large sample is generated and the expected value function is approximated by the corresponding average function. Consequently the obtained approximation problem is solved by deterministic methods of nonlinear programming. One of the advantages of this approach, compared with the classical stochastic approximation method, is that statistical inference can be incorporated into optimization algorithms. This makes it possible to develop a validation analysis, stopping rules and variance reduction techniques which in some cases considerably enhance the numerical performance of the stochastic counterpart method.
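The stochastic counterpart idea can be sketched in a few lines: draw one large sample once, then minimize the resulting deterministic average function by an ordinary solver (toy model with a known minimizer):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sample average approximation of  min_x E[(x - xi)^2],  xi ~ N(3, 1);
# the true minimizer is x = 3 (the mean of xi).
rng = np.random.default_rng(0)
xi = rng.normal(loc=3.0, scale=1.0, size=100_000)   # one fixed sample

def sample_average(x):
    # Deterministic once the sample is fixed: solvable by standard NLP methods.
    return np.mean((x - xi)**2)

res = minimize_scalar(sample_average, bounds=(-10, 10), method="bounded")
print(res.x)   # close to 3 (it equals the sample mean of xi)
```

Because the sample is fixed, standard statistical inference on the sample-average solution (confidence intervals, stopping rules) applies, which is the advantage over stochastic approximation noted in the abstract.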

Journal ArticleDOI
TL;DR: In this paper, a mathematical programming-based approach to computer-aided molecular design is presented. Using a set of structural groups, the problem is formulated as a mixed-integer nonlinear program in which discrete variables represent the number of each type of structural group present in the candidate compound.

Journal ArticleDOI
Yifan Tang1
TL;DR: In this paper, a new approach for the systemized optimization of power distribution systems is presented, in which reliability is modelled in the optimization objective function via outage costs and costs of switching devices, along with the nonlinear costs of investment, maintenance and energy losses of both the substations and the feeders.
Abstract: A new approach for the systemized optimization of power distribution systems is presented in this paper. Distribution system reliability is modelled in the optimization objective function via outage costs and costs of switching devices, along with the nonlinear costs of investment, maintenance and energy losses of both the substations and the feeders. The optimization model established is multi-stage, mixed-integer and nonlinear, which is solved by a network-flow programming algorithm. A multi-stage interlacing strategy and a nonlinearity iteration method are also designed. Supported by an extensive database, the planning software tool has been applied to optimize the power distribution system of a developing city.

BookDOI
01 Jan 1996
TL;DR: This volume collects papers on nonlinear optimization, including a discrete Newton method with memory for large-scale optimization, an algorithm using quadratic interpolation for unconstrained derivative-free optimization, and interior-point methods for complementarity problems.
Abstract: Towards a Discrete Newton Method with Memory for Largescale Optimization R.H. Byrd, et al. On Regularity for Generalized Systems and Applications M. Castellani, et al. An Algorithm Using Quadratic Interpolation for Unconstrained Derivative Free Optimization A.R. Conn, P.L. Toint. Massively Parallel Solution of Large Scale Network Flow Problems R. De Leone, et al. Correlation Theorems for Nonsmooth Systems V.F. Dem'yanov. Successive Projection Methods for the Solution of Overdetermined Nonlinear Systems M.A. Diniz-Erhardt, J.M. Martinez. On Exact Augmented Lagrangian Functions in Nonlinear Programming G. Di Pillo, S. Lucidi. Spacetransformation Technique: The State of the Art Y.G. Evtushenko, G.Z. Vitali. Semismoothness and Superlinear Convergence in Nonsmooth Optimization and Nonsmooth Equations H. Jiang, et al. On the Solution of the Monotone and Nonmonotone Linear Complementarity Problem by an Infeasible Interior Point Algorithm J. Judice, et al. Ergodic Results in Subgradient Optimization T. Larsson, et al. Protoderivatives and the Geometry of Solution Mappings in Nonlinear Programming A.B. Levy, R.T. Rockafellar. Index.

Dissertation
01 Jan 1996
Abstract: This thesis introduces and analyzes a family of trust-region interior-point (TRIP) reduced sequential quadratic programming (SQP) algorithms for the solution of minimization problems with nonlinear equality constraints and simple bounds on some of the variables. These nonlinear programming problems appear in applications in control, design, parameter identification, and inversion. In particular they often arise in the discretization of optimal control problems. The TRIP reduced SQP algorithms treat states and controls as independent variables. They are designed to take advantage of the structure of the problem. In particular they do not rely on matrix factorizations of the linearized constraints, but use solutions of the linearized state and adjoint equations. These algorithms result from a successful combination of a reduced SQP algorithm, a trust-region globalization, and a primal-dual affine scaling interior-point method. The TRIP reduced SQP algorithms have very strong theoretical properties. It is shown in this thesis that they converge globally to points satisfying first and second order necessary optimality conditions, and in a neighborhood of a local minimizer the rate of convergence is quadratic. Our algorithms and convergence results reduce to those of Coleman and Li for box-constrained optimization. An inexact analysis is presented to provide a practical way of controlling residuals of linear systems and directional derivatives. Complementing this theory, numerical experiments for two nonlinear optimal control problems are included showing the robustness and effectiveness of these algorithms. Another topic of this dissertation is a specialized analysis of these algorithms for equality-constrained optimization problems. 
The important feature of the way this family of algorithms specializes for these problems is that they do not require the computation of normal components for the step or an orthogonal basis for the null space of the Jacobian of the equality constraints. An extension of Moré and Sorensen's result for unconstrained optimization is presented, showing global convergence for these algorithms to a point satisfying the second-order necessary optimality conditions.
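In standard form, the problem class these algorithms target can be written as follows (notation assumed here, not quoted from the thesis):

```latex
\min_{y,\,u} \; f(y,u)
\quad \text{subject to} \quad
C(y,u) = 0, \qquad a \le u \le b,
```

where $y$ denotes the states, $u$ the controls, $C$ the nonlinear equality constraints (e.g. a discretized state equation), and $a \le u \le b$ the simple bounds. Treating $y$ and $u$ as independent variables is what lets the reduced SQP steps reuse solutions of the linearized state and adjoint equations instead of factoring the full constraint Jacobian.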

Book ChapterDOI
Andrew R. Conn1, Philippe L. Toint
01 Jan 1996
TL;DR: This paper explores the use of multivariate interpolation techniques in the context of methods for unconstrained optimization that do not require derivatives of the objective function, and proposes a new algorithm that uses quadratic models in a trust region framework.
Abstract: This paper explores the use of multivariate interpolation techniques in the context of methods for unconstrained optimization that do not require derivatives of the objective function. A new algorithm is proposed that uses quadratic models in a trust region framework. The algorithm is constructed to require few evaluations of the objective function and is designed to be relatively insensitive to noise in the objective function values. Its performance is analyzed on a set of 20 examples, both with and without noise.
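To give a flavor of the idea, here is a minimal one-dimensional sketch (not the authors' algorithm, which uses multivariate interpolation): a quadratic model is fitted through sampled function values only, minimized within a trust region, and the step is accepted or rejected by comparing actual versus predicted reduction.

```python
# Derivative-free trust-region sketch in 1-D: the quadratic model
# m(s) = f(x) + c1*s + c2*s^2 is built purely by interpolating f at
# x - delta, x, x + delta; no derivatives of f are ever evaluated.

def dfo_trust_region(f, x0, delta=1.0, tol=1e-8, max_iter=100):
    x = x0
    for _ in range(max_iter):
        f_m, f_0, f_p = f(x - delta), f(x), f(x + delta)
        c1 = (f_p - f_m) / (2 * delta)                # interpolated slope
        c2 = (f_p - 2 * f_0 + f_m) / (2 * delta**2)   # interpolated curvature / 2
        # Minimize the model over the trust region |s| <= delta
        if c2 > 0:
            s = max(-delta, min(delta, -c1 / (2 * c2)))
        else:
            s = -delta if c1 > 0 else delta           # nonconvex model: boundary
        pred = -(c1 * s + c2 * s * s)                 # predicted reduction
        if pred < tol:
            break
        actual = f_0 - f(x + s)                       # actual reduction
        if actual > 0.1 * pred:                       # successful: accept step
            x = x + s
            if actual > 0.75 * pred:
                delta *= 2                            # very successful: expand
        else:
            delta *= 0.5                              # poor model: shrink region
    return x
```

On a smooth quadratic such as `lambda x: (x - 3)**2`, the iteration converges to the minimizer at 3 in a handful of function evaluations; the paper's contribution is making this interpolation-based model-building work in many dimensions while staying frugal with evaluations and robust to noise.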

Journal ArticleDOI
Defeng Sun1
TL;DR: In this paper, a class of globally convergent iterative methods for solving nonlinear projection equations is provided under a continuity condition of the mapping F. When F is pseudomonotone, a necessary and sufficient condition on the nonemptiness of the solution set is obtained.
Abstract: A class of globally convergent iterative methods for solving nonlinear projection equations is provided under a continuity condition of the mapping F. When F is pseudomonotone, a necessary and sufficient condition on the nonemptiness of the solution set is obtained.
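For context, the nonlinear projection equation seeks x with x = P_C(x - F(x)), where P_C is the projection onto a closed convex set C. The sketch below (illustrative only, not the paper's methods) runs the basic projected fixed-point iteration x_{k+1} = P_C(x_k - alpha*F(x_k)) for a hypothetical strongly monotone affine F over the nonnegative orthant, where it is a contraction for small enough alpha.

```python
# Projected fixed-point iteration for x = P_C(x - F(x)), C = nonnegative orthant.
# Converges when F is strongly monotone and alpha is small enough; the paper's
# methods handle the weaker (pseudomonotone) case, which this sketch does not.

def solve_projection_equation(F, x0, alpha=0.1, tol=1e-10, max_iter=10000):
    x = list(x0)
    for _ in range(max_iter):
        # Project x - alpha*F(x) onto C = R^n_+ (componentwise max with 0)
        y = [max(0.0, xi - alpha * fi) for xi, fi in zip(x, F(x))]
        if max(abs(a - b) for a, b in zip(x, y)) < tol:
            return y
        x = y
    return x

# Hypothetical example: F(x) = M x + q with M symmetric positive definite,
# so F is strongly monotone. The solution is x = (1, 0): there F(x) = (0, 6),
# which satisfies the complementarity conditions x >= 0, F(x) >= 0, x.F(x) = 0.
def F(x):
    M = [[4.0, 1.0], [1.0, 3.0]]
    q = [-4.0, 5.0]
    return [sum(M[i][j] * x[j] for j in range(2)) + q[i] for i in range(2)]
```

Over C = R^n_+ this projection equation is exactly the nonlinear complementarity problem, which connects this entry to the linear complementarity work elsewhere on this page.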