
Showing papers on "Nonlinear programming published in 2008"


Journal ArticleDOI
TL;DR: This paper presents a detailed overview of the basic concepts of PSO and its variants, and provides a comprehensive survey on the power system applications that have benefited from the powerful nature of PSO as an optimization technique.
Abstract: Many areas in power systems require solving one or more nonlinear optimization problems. While analytical methods might suffer from slow convergence and the curse of dimensionality, heuristics-based swarm intelligence can be an efficient alternative. Particle swarm optimization (PSO), part of the swarm intelligence family, is known to effectively solve large-scale nonlinear optimization problems. This paper presents a detailed overview of the basic concepts of PSO and its variants. Also, it provides a comprehensive survey on the power system applications that have benefited from the powerful nature of PSO as an optimization technique. For each application, technical details that are required for applying PSO, such as its type, particle formulation (solution representation), and the most efficient fitness functions are also discussed.
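To make the basic PSO update concrete, a minimal global-best PSO sketch in Python follows; the inertia and acceleration coefficients (w, c1, c2), the bounds, and the Rosenbrock test function are illustrative choices, not values taken from the paper.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO for minimizing f: R^dim -> R."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Illustrative use: minimize the 2-D Rosenbrock function
rosen = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
print(pso(rosen, dim=2))
```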

2,147 citations


Journal ArticleDOI
TL;DR: A class of hybrid algorithms, of which branch-and-bound and polyhedral outer approximation are the two extreme cases, is proposed and implemented, and computational results that demonstrate the effectiveness of this framework are reported.

891 citations


Journal ArticleDOI
TL;DR: This work introduces a decentralized scheme for least-squares and best linear unbiased estimation (BLUE) and establishes its convergence in the presence of communication noise and introduces a method of multipliers in conjunction with a block coordinate descent approach to demonstrate how the resultant algorithm can be decomposed into a set of simpler tasks suitable for distributed implementation.
Abstract: We deal with distributed estimation of deterministic vector parameters using ad hoc wireless sensor networks (WSNs). We cast the decentralized estimation problem as the solution of multiple constrained convex optimization subproblems. Using the method of multipliers in conjunction with a block coordinate descent approach we demonstrate how the resultant algorithm can be decomposed into a set of simpler tasks suitable for distributed implementation. Different from existing alternatives, our approach does not require the centralized estimator to be expressible in a separable closed form in terms of averages, thus allowing for decentralized computation even of nonlinear estimators, including maximum likelihood estimators (MLE) in nonlinear and non-Gaussian data models. We prove that these algorithms have guaranteed convergence to the desired estimator when the sensor links are assumed ideal. Furthermore, our decentralized algorithms exhibit resilience in the presence of receiver and/or quantization noise. In particular, we introduce a decentralized scheme for least-squares and best linear unbiased estimation (BLUE) and establish its convergence in the presence of communication noise. Our algorithms also exhibit potential for higher convergence rate with respect to existing schemes. Corroborating simulations demonstrate the merits of the novel distributed estimation algorithms.
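As a rough illustration of the method-of-multipliers idea behind such decentralized estimators, the following sketch solves a consensus least-squares problem with local per-sensor updates and a first-order multiplier update. It is a generic ADMM-style formulation under ideal links, not the paper's exact scheme; the function name, the penalty parameter rho, and the synthetic data are all illustrative.

```python
import numpy as np

def decentralized_ls(H_list, y_list, rho=1.0, iters=100):
    """Consensus least squares via the method of multipliers.
    Each sensor k holds (H_k, y_k); its local estimate s_k is driven to a
    common value z, which converges to the centralized LS solution."""
    K, d = len(H_list), H_list[0].shape[1]
    s = [np.zeros(d) for _ in range(K)]      # local estimates
    lam = [np.zeros(d) for _ in range(K)]    # multipliers
    z = np.zeros(d)                          # consensus variable
    for _ in range(iters):
        for k in range(K):                   # local, per-sensor updates
            A = H_list[k].T @ H_list[k] + rho * np.eye(d)
            b = H_list[k].T @ y_list[k] + rho * z - lam[k]
            s[k] = np.linalg.solve(A, b)
        # consensus update (in practice done by in-network averaging)
        z = np.mean([s[k] + lam[k] / rho for k in range(K)], axis=0)
        for k in range(K):
            lam[k] += rho * (s[k] - z)       # first-order multiplier update
    return z

# Synthetic example with 5 sensors observing the same parameter vector
rng = np.random.default_rng(1)
s_true = np.array([1.0, -2.0, 0.5])
H = [rng.standard_normal((20, 3)) for _ in range(5)]
y = [Hk @ s_true + 0.05 * rng.standard_normal(20) for Hk in H]
print(decentralized_ls(H, y))
```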

740 citations


Journal ArticleDOI
TL;DR: In this article, the optimal power flow (OPF) problem is reformulated as a semidefinite programming (SDP) model, and an interior point method (IPM) algorithm for solving the SDP is developed.

576 citations


Journal ArticleDOI
TL;DR: A nonlinear optimization algorithm for solving the problem of phase retrieval with transverse translation diversity, where the diverse far-field intensity measurements are taken after translating the object relative to a known illumination pattern, achieves superior reconstructions.
Abstract: We develop and test a nonlinear optimization algorithm for solving the problem of phase retrieval with transverse translation diversity, where the diverse far-field intensity measurements are taken after translating the object relative to a known illumination pattern. Analytical expressions for the gradient of a squared-error metric with respect to the object, illumination and translations allow joint optimization of the object and system parameters. This approach achieves superior reconstructions, with respect to a previously reported technique [H. M. L. Faulkner and J. M. Rodenburg, Phys. Rev. Lett. 93, 023903 (2004)], when the system parameters are inaccurately known or in the presence of noise. Applicability of this method for samples that are smaller than the illumination pattern is explored.

491 citations


Journal ArticleDOI
TL;DR: The car-following behavior of individual drivers in real city traffic is studied on the basis of trajectory data sets recorded by a vehicle equipped with a radar sensor and it is found that intradriver variability rather than interdriver variability accounts for a large part of the calibration errors.
Abstract: The car-following behavior of individual drivers in real city traffic is studied on the basis of (publicly available) trajectory data sets recorded by a vehicle equipped with a radar sensor. By means of a nonlinear optimization procedure based on a genetic algorithm, the intelligent driver model and the velocity difference model are calibrated by minimizing the deviations between the observed driving dynamics and the simulated trajectory in following the same leading vehicle. The reliability and robustness of the nonlinear fits are assessed by applying different optimization criteria, that is, different measures for the deviations between two trajectories. The obtained errors are between 11% and 29%, which is consistent with typical error ranges obtained in previous studies. It is also found that the calibrated parameter values of the velocity difference model depend strongly on the optimization criterion, whereas the intelligent driver model is more robust. The influence of a reaction time is investigated by applying an explicit delay to the model input; remarkably, its influence is found to be negligible, indicating that drivers compensate for their reaction time by anticipation.
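For reference, the Intelligent Driver Model acceleration function and one plausible gap-error measure are sketched below; the parameter values are illustrative defaults, not calibrated ones, and the genetic-algorithm search itself (for which a stand-in such as scipy.optimize.differential_evolution could be used) is omitted.

```python
import numpy as np

def idm_acceleration(v, dv, s, v0=15.0, T=1.2, a=1.5, b=2.0, s0=2.0):
    """Intelligent Driver Model acceleration.
    v: follower speed, dv: approach rate (v - v_lead), s: gap to the leader.
    Parameter values here are illustrative defaults, not calibrated ones."""
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a * b))
    return a * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)

def relative_gap_error(sim_gap, obs_gap):
    """One possible calibration criterion: a relative error measure between
    simulated and observed gap trajectories (other measures are possible)."""
    sim_gap, obs_gap = np.asarray(sim_gap), np.asarray(obs_gap)
    return np.sqrt(np.mean((sim_gap - obs_gap) ** 2 / (sim_gap * obs_gap)))
```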

378 citations


Posted Content
TL;DR: The fourth edition of this classic textbook covers practical optimization techniques in three parts (linear programming, unconstrained optimization, and constrained optimization) and adds a new chapter on conic linear programming together with an accelerated steepest descent method and its convergence proof.
Abstract: This new edition covers the central concepts of practical optimization techniques, with an emphasis on methods that are both state-of-the-art and popular. One major insight is the connection between the purely analytical character of an optimization problem and the behavior of algorithms used to solve a problem. This was a major theme of the first edition of this book and the fourth edition expands and further illustrates this relationship. As in the earlier editions, the material in this fourth edition is organized into three separate parts. Part I is a self-contained introduction to linear programming. The presentation in this part is fairly conventional, covering the main elements of the underlying theory of linear programming, many of the most effective numerical algorithms, and many of its important special applications. Part II, which is independent of Part I, covers the theory of unconstrained optimization, including both derivations of the appropriate optimality conditions and an introduction to basic algorithms. This part of the book explores the general properties of algorithms and defines various notions of convergence. Part III extends the concepts developed in the second part to constrained optimization problems. Except for a few isolated sections, this part is also independent of Part I. It is possible to go directly into Parts II and III omitting Part I, and, in fact, the book has been used in this way in many universities. New to this edition is a chapter devoted to Conic Linear Programming, a powerful generalization of Linear Programming. Indeed, many conic structures are possible and useful in a variety of applications. It must be recognized, however, that conic linear programming is an advanced topic, requiring special study. Another important topic is an accelerated steepest descent method that exhibits superior convergence properties, and for this reason, has become quite popular. The proofs of the convergence properties for both the standard and accelerated steepest descent methods are presented in Chapter 8. As in previous editions, end-of-chapter exercises appear for all chapters. From the reviews of the Third Edition: "This very well-written book is a classic textbook in Optimization. It should be present in the bookcase of each student, researcher, and specialist from the host of disciplines from which practical optimization applications are drawn." (Jean-Jacques Strodiot, Zentralblatt MATH, Vol. 1207, 2011)
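The accelerated steepest descent idea mentioned above can be sketched as follows; this is a Nesterov-style momentum scheme shown only on a quadratic test problem, with illustrative step sizes, not the book's exact presentation.

```python
import numpy as np

def steepest_descent(grad, x0, lr=0.05, iters=500):
    """Plain steepest descent with a fixed step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

def accelerated_descent(grad, x0, lr=0.05, iters=500):
    """Nesterov-style accelerated gradient method (one common form of the
    accelerated steepest descent discussed in the text)."""
    x = np.asarray(x0, dtype=float)
    y, t = x.copy(), 1.0
    for _ in range(iters):
        x_next = y - lr * grad(y)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + (t - 1.0) / t_next * (x_next - x)   # momentum step
        x, t = x_next, t_next
    return x

# Quadratic test: f(x) = 0.5 x'Ax - b'x, so grad f(x) = Ax - b
A = np.diag([1.0, 10.0]); b = np.array([1.0, 1.0])
g = lambda x: A @ x - b
print(steepest_descent(g, [0.0, 0.0]), accelerated_descent(g, [0.0, 0.0]))
```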

364 citations


Journal ArticleDOI
TL;DR: In this paper, a method is suggested for solving the nonlinear interval number programming problem with uncertain coefficients in both the nonlinear objective function and the nonlinear constraints, based on an order relation of interval numbers.

321 citations


Journal ArticleDOI
TL;DR: In this work a methodology is presented for the rigorous optimization of nonlinear programming problems in which the objective function and (or) some constraints are represented by noisy implicit black box functions.
Abstract: In this work, a methodology is presented for the rigorous optimization of nonlinear programming problems in which the objective function and (or) some constraints are represented by noisy implicit black box functions. The special application considered is the optimization of modular process simulators in which the derivatives are not available and some unit operations introduce noise preventing the calculation of accurate derivatives. The black box modules are substituted by metamodels based on a kriging interpolation that assumes that the errors are not independent but a function of the independent variables. A kriging metamodel uses a non-Euclidean measure of distance to avoid sensitivity to the units of measure. It includes adjustable parameters that weigh the importance of each variable for obtaining a good model representation, and it allows calculating errors that can be used to establish stopping criteria and provide a solid base to deal with “possible infeasibility” due to inaccuracies in the metamodel representation of objective function and constraints. The algorithm continues with a refining stage and successive bound contraction in the domain of independent variables with or without kriging recalibration until an acceptable accuracy in the metamodel is obtained. The procedure is illustrated with several examples. © 2008 American Institute of Chemical Engineers AIChE J, 2008
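A stripped-down version of such a kriging metamodel (zero-trend, Gaussian correlation, per-variable weights theta, and a predictive error estimate) might look like the following; it is a sketch under those simplifying assumptions, not the formulation used in the paper, and all names and values are illustrative.

```python
import numpy as np

def fit_kriging(X, y, theta):
    """Zero-trend (simple) kriging with a Gaussian correlation model.
    theta weighs each input dimension, mimicking the anisotropic,
    non-Euclidean distance mentioned above."""
    diff = X[:, None, :] - X[None, :, :]
    R = np.exp(-(theta * diff ** 2).sum(axis=2)) + 1e-10 * np.eye(len(X))
    Rinv = np.linalg.inv(R)
    w = Rinv @ y
    sigma2 = float(y @ w) / len(y)              # process-variance estimate
    return dict(X=X, theta=np.asarray(theta), Rinv=Rinv, w=w, sigma2=sigma2)

def predict_kriging(m, x):
    """Return the prediction and its estimated mean-squared error; the error
    estimate is what supports stopping criteria and infeasibility handling
    in surrogate-based optimization."""
    r = np.exp(-(m["theta"] * (m["X"] - x) ** 2).sum(axis=1))
    mean = float(r @ m["w"])
    mse = m["sigma2"] * max(0.0, 1.0 - float(r @ m["Rinv"] @ r))
    return mean, mse

# Tiny 1-D illustration
X = np.array([[0.0], [0.3], [0.6], [1.0]])
y = np.sin(2 * np.pi * X[:, 0])
model = fit_kriging(X, y, theta=np.array([10.0]))
print(predict_kriging(model, np.array([0.45])))
```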

239 citations


Journal ArticleDOI
TL;DR: A new derivative-free algorithm, ORBIT, is presented for unconstrained local optimization of computationally expensive functions; it employs a trust-region framework with interpolating radial basis function (RBF) models, which often interpolate nonlinear functions using fewer function evaluations than the polynomial models considered by present techniques.
Abstract: We present a new derivative-free algorithm, ORBIT, for unconstrained local optimization of computationally expensive functions. A trust-region framework using interpolating Radial Basis Function (RBF) models is employed. The RBF models considered often allow ORBIT to interpolate nonlinear functions using fewer function evaluations than the polynomial models considered by present techniques. Approximation guarantees are obtained by ensuring that a subset of the interpolation points is sufficiently poised for linear interpolation. The RBF property of conditional positive definiteness yields a natural method for adding additional points. We present numerical results on test problems to motivate the use of ORBIT when only a relatively small number of expensive function evaluations are available. Results on two very different application problems, calibration of a watershed model and optimization of a PDE-based bioremediation plan, are also encouraging and support ORBIT's effectiveness on blackbox functions for which no special mathematical structure is known or available.
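The flavor of the RBF models used in such trust-region methods can be conveyed with a cubic RBF plus linear polynomial tail interpolant; this is only a sketch, and solvability of the linear system requires the sample points to be poised (unisolvent) for linear interpolation, echoing the condition in the abstract.

```python
import numpy as np

def fit_cubic_rbf(X, f):
    """Interpolate f-values at points X (n x d) with a cubic RBF plus a
    linear polynomial tail."""
    n, d = X.shape
    Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2) ** 3
    P = np.hstack([np.ones((n, 1)), X])                  # linear tail basis
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    coef = np.linalg.solve(A, np.concatenate([f, np.zeros(d + 1)]))
    lam, c = coef[:n], coef[n:]
    def model(x):
        x = np.asarray(x, dtype=float)
        phi = np.linalg.norm(X - x, axis=1) ** 3
        return float(phi @ lam + c[0] + c[1:] @ x)
    return model

# Illustrative use on a handful of "expensive" function samples
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
f = np.array([np.sin(a) + b ** 2 for a, b in X])
m = fit_cubic_rbf(X, f)
print(m([0.25, 0.75]))
```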

230 citations


Journal ArticleDOI
TL;DR: In this paper, the authors address the optimization of supply chain design and planning under both responsiveness and economic criteria in the presence of demand uncertainty, using a probabilistic model for stock-out.

Journal ArticleDOI
TL;DR: This neural network is capable of solving a large class of quadratic programming problems and is proven to be globally stable and to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints.
Abstract: In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable and the output variables are proven to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.
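The neurodynamic idea can be illustrated with a simpler, classical projection network for a box-constrained QP, integrated by forward Euler; this is not the paper's one-layer discontinuous network, just a sketch of optimization by neural dynamics with illustrative step size and iteration count.

```python
import numpy as np

def projection_nn_qp(Q, q, lo, hi, dt=0.01, steps=20000):
    """Continuous-time projection neural network for
    min 0.5 x'Qx + q'x subject to lo <= x <= hi,
    integrated with forward Euler."""
    x = np.zeros(len(q))
    for _ in range(steps):
        proj = np.clip(x - (Q @ x + q), lo, hi)   # P_Omega(x - grad f(x))
        x = x + dt * (proj - x)                   # dx/dt = -x + P_Omega(...)
    return x

Q = np.diag([2.0, 2.0])
q = np.array([-2.0, -5.0])
print(projection_nn_qp(Q, q, lo=0.0, hi=1.0))     # expect approximately [1, 1]
```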

Journal ArticleDOI
TL;DR: Convergence rates for the error between the direct transcription solution and the true solution of an unconstrained optimal control problem using Gauss-Radau quadrature are presented.
Abstract: We present convergence rates for the error between the direct transcription solution and the true solution of an unconstrained optimal control problem. The problem is discretized using collocation at Radau points (aka Gauss-Radau or Legendre-Gauss-Radau quadrature). The precision of Radau quadrature is the highest after Gauss (aka Legendre-Gauss) quadrature, and it has the added advantage that the end point is one of the abscissas where the function, to be integrated, is evaluated. We analyze convergence from a Nonlinear Programming (NLP)/matrix algebra perspective. This enables us to predict the norms of various constituents of a matrix that is "close" to the KKT matrix of the discretized problem. We present the convergence rates for the various components, for a sufficiently small discretization size, as functions of the discretization size and the number of collocation points. We illustrate this using several test examples. This also leads to an adjoint estimation procedure, given the Lagrange multipliers for the large scale NLP.
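A schematic form of the transcription, in generic notation (assuming a single interval mapped to [-1, 1], with D_kj the differentiation matrix of the Lagrange basis at the Legendre-Gauss-Radau points tau_1 < ... < tau_N = 1, which include the endpoint):

```latex
\min_{x_0,\dots,x_N,\;u_1,\dots,u_N} \; \Phi(x_N)
\qquad \text{s.t.} \qquad
\sum_{j=0}^{N} D_{kj}\, x_j \;=\; \frac{t_f - t_0}{2}\, f(x_k, u_k),
\qquad k = 1,\dots,N .
```

The resulting finite-dimensional problem is the NLP whose KKT matrix the paper analyzes.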

Journal ArticleDOI
TL;DR: In this paper, a nonlinear optimization method for large-scale 3D elastic full-waveform seismic inversion is presented, combining outer Gauss-Newton nonlinear iterations with inner conjugate gradient linear iterations, globalized by an Armijo backtracking line search, solved on a sequence of finer grids and higher frequencies to remain in the vicinity of the global optimum, inexactly terminated to prevent oversolving.
Abstract: We present a nonlinear optimization method for large-scale 3D elastic full-waveform seismic inversion. The method combines outer Gauss–Newton nonlinear iterations with inner conjugate gradient linear iterations, globalized by an Armijo backtracking line search, solved on a sequence of finer grids and higher frequencies to remain in the vicinity of the global optimum, inexactly terminated to prevent oversolving, preconditioned by L-BFGS/Frankel, regularized by a total variation operator to capture sharp interfaces, finely discretized by finite elements in the Lame parameter space to provide flexibility and avoid bias, implemented in matrix-free fashion with adjoint-based computation of reduced gradient and reduced Hessian-vector products, checkpointed to avoid full spacetime waveform storage, and partitioned spatially across processors to parallelize the solutions of the forward and adjoint wave equations and the evaluation of gradient-like information. Several numerical examples demonstrate the grid independence of linear and nonlinear iterations, the effectiveness of the preconditioner, the ability to solve inverse problems with up to 17 million inversion parameters on up to 2048 processors, the effectiveness of multiscale continuation in keeping iterates in the basin of attraction of the global minimum, and the ability to fit the observational data while reconstructing the model with reasonable resolution and capturing sharp interfaces.

Journal ArticleDOI
TL;DR: In this paper, the car-following behavior of individual drivers in real city traffic is studied on the basis of (publicly available) trajectory datasets recorded by a vehicle equipped with a radar sensor.
Abstract: The car-following behavior of individual drivers in real city traffic is studied on the basis of (publicly available) trajectory datasets recorded by a vehicle equipped with a radar sensor. By means of a nonlinear optimization procedure based on a genetic algorithm, we calibrate the Intelligent Driver Model and the Velocity Difference Model by minimizing the deviations between the observed driving dynamics and the simulated trajectory when following the same leading vehicle. The reliability and robustness of the nonlinear fits are assessed by applying different optimization criteria, i.e., different measures for the deviations between two trajectories. The obtained errors are in the range between 11% and 29%, which is consistent with typical error ranges obtained in previous studies. In addition, we found that the calibrated parameter values of the Velocity Difference Model strongly depend on the optimization criterion, while the Intelligent Driver Model is more robust in this respect. By applying an explicit delay to the model input, we investigated the influence of a reaction time. Remarkably, we found a negligible influence of the reaction time, indicating that drivers compensate for their reaction time by anticipation. Furthermore, the parameter sets calibrated to a certain trajectory are applied to the other trajectories allowing for model validation. The results indicate that "intra-driver variability" rather than "inter-driver variability" accounts for a large part of the calibration errors. The results are used to suggest some criteria towards a benchmarking of car-following models.

Journal ArticleDOI
TL;DR: The current state of the art of interior point methods (IPMs) for convex, conic, and general nonlinear optimization is described in this paper, where the authors discuss the theory, outline the algorithms, and comment on the applicability of this class of methods.
Abstract: This article describes the current state of the art of interior-point methods (IPMs) for convex, conic, and general nonlinear optimization. We discuss the theory, outline the algorithms, and comment on the applicability of this class of methods, which have revolutionized the field over the last twenty years.
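The basic logarithmic-barrier mechanism underlying IPMs can be sketched as follows; real interior-point codes take Newton steps on a perturbed KKT system, whereas a derivative-free inner solver is used here only to keep the sketch short, and the example problem, barrier schedule, and function names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def barrier_solve(f, g_ineq, x0, mu0=1.0, shrink=0.2, outer=8):
    """Logarithmic-barrier scheme: minimize f(x) - mu * sum(log(-g_i(x)))
    for a decreasing sequence of barrier parameters mu."""
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(outer):
        def phi(z, mu=mu):
            gz = np.asarray(g_ineq(z), dtype=float)
            if np.any(gz >= 0.0):
                return np.inf                      # keep iterates strictly feasible
            return f(z) - mu * np.sum(np.log(-gz))
        x = minimize(phi, x, method="Nelder-Mead").x
        mu *= shrink                               # tighten the barrier
    return x

# Illustrative problem: minimize (x - 2)^2 subject to x - 1 <= 0
print(barrier_solve(lambda z: (z[0] - 2.0) ** 2,
                    lambda z: [z[0] - 1.0],
                    x0=[0.0]))
```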

Journal ArticleDOI
TL;DR: A class of trust-region methods is presented for solving unconstrained nonlinear and possibly nonconvex discretized optimization problems, like those arising in systems governed by partial differential equations, leading to true multilevel/multiscale optimization methods reminiscent of multigrid methods in linear algebra and the solution of partial differential equations.
Abstract: A class of trust-region methods is presented for solving unconstrained nonlinear and possibly nonconvex discretized optimization problems, like those arising in systems governed by partial differential equations. The algorithms in this class make use of the discretization level as a means of speeding up the computation of the step. This use is recursive, leading to true multilevel/multiscale optimization methods reminiscent of multigrid methods in linear algebra and the solution of partial differential equations. A simple algorithm of the class is then described and its numerical performance is shown to be promising. This observation then motivates a proof of global convergence to first-order stationary points on the fine grid that is valid for all algorithms in the class.

Proceedings ArticleDOI
13 Apr 2008
TL;DR: This paper investigates how to design a distributed algorithm for a future multi-hop CR network, with the objective of maximizing data rates for a set of user communication sessions via a cross-layer optimization approach.
Abstract: Cognitive radio (CR) is a revolution in radio technology and is viewed as an enabling technology for dynamic spectrum access. This paper investigates how to design a distributed algorithm for a future multi-hop CR network, with the objective of maximizing data rates for a set of user communication sessions. We study this problem via a cross-layer optimization approach, with joint consideration of power control, scheduling, and routing. The main contribution of this paper is the development of a distributed optimization algorithm that iteratively increases data rates for user communication sessions. During each iteration, there are two separate processes, a Conservative Iterative Process (CIP) and an Aggressive Iterative Process (AIP). For both CIP and AIP, we describe our design of routing, minimalist scheduling, and power control/scheduling modules. To evaluate the performance of the distributed optimization algorithm, we compare it to an upper bound of the objective function, since the exact optimal solution to the objective function cannot be obtained via its mixed integer nonlinear programming (MINLP) formulation. Since the achievable performance via our distributed algorithm is close to the upper bound and the optimal solution (unknown) lies between the upper bound and the feasible solution obtained by our distributed algorithm, we conclude that the results obtained by our distributed algorithm are very close to the optimal solution.

Journal ArticleDOI
TL;DR: In this article, a process optimization method for the design of reverse osmosis (RO) processes is developed for a brackish water reverse osmosis (BWRO) desalination project, for which the optimal design is characterized depending on the economic conditions.

Journal ArticleDOI
TL;DR: In this article, an efficient procedure to compute strict upper and lower bounds for the exact collapse multiplier in limit analysis is presented, with a formulation that explicitly considers the exact convex yield condition.
Abstract: An efficient procedure to compute strict upper and lower bounds for the exact collapse multiplier in limit analysis is presented, with a formulation that explicitly considers the exact convex yield condition. The approach consists of two main steps. First, the continuous problem, under the form of the static principle of limit analysis, is discretized twice (one per bound) using particularly chosen finite element spaces for the stresses and velocities that guarantee the attainment of an upper or a lower bound. The second step consists of solving the resulting discrete non-linear optimization problems. These are reformulated as second-order cone programs, which allows for the use of primal–dual interior point methods that optimally exploit the convexity and duality properties of the limit analysis model. To benefit from the fact that collapse mechanisms are typically highly localized, a novel method for adaptive meshing is introduced. The method first decomposes the total bound gap as the sum of positive contributions from each element in the mesh and then refines those elements with higher contributions. The efficiency of the methodology is illustrated with applications in plane stress and plane strain problems. Copyright © 2008 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, the authors proposed an inverse reliability analysis method that can be used to obtain accurate probability of failure calculation without requiring the second-order sensitivities for reliability-based design optimization (RBDO) of nonlinear and multi-dimensional systems.

Journal ArticleDOI
TL;DR: In this article, the authors present a study of the implementation of the new load flow equations format in an optimal power flow (OPF) program which accounts for control devices such as tap-changing transformers, phase-shifting transformers and unified power flow controllers.
Abstract: Recent research has shown that the load flow equations describing the steady-state conditions in a meshed network can be placed in extended conic quadratic (ECQ) format. This paper presents a study of the implementation of the new load flow equations format in an optimal power flow (OPF) program which accounts for control devices such as tap-changing transformers, phase-shifting transformers, and unified power flow controllers. The proposed OPF representation retains the advantages of the ECQ format: 1) it can be easily integrated within optimization routines that require the evaluation of second-order derivatives, 2) it can be efficiently solved for using primal-dual interior-point methods, and 3) it can make use of linear programming scaling techniques for improving numerical conditioning. The ECQ-OPF program is employed to solve the economic dispatch and active power loss minimization problems. Numerical testing is used to validate the proposed approach by comparing against solution methods and results of standard test systems.

Proceedings Article
08 Dec 2008
TL;DR: A sharp bound on the excess risk of the output of an online algorithm is established in terms of the average regret; this allows one to use recent algorithms with logarithmic cumulative regret guarantees to achieve fast convergence rates for the excess risk with high probability.
Abstract: This paper examines the generalization properties of online convex programming algorithms when the loss function is Lipschitz and strongly convex. Our main result is a sharp bound, that holds with high probability, on the excess risk of the output of an online algorithm in terms of the average regret. This allows one to use recent algorithms with logarithmic cumulative regret guarantees to achieve fast convergence rates for the excess risk with high probability. As a corollary, we characterize the convergence rate of PEGASOS (with high probability), a recently proposed method for solving the SVM optimization problem.
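For concreteness, a minimal Pegasos-style stochastic sub-gradient solver for the regularized hinge-loss SVM objective, (lam/2)*||w||^2 plus the average hinge loss, is sketched below; the bias term and the optional projection step are omitted for brevity, and the synthetic data are illustrative.

```python
import numpy as np

def pegasos(X, y, lam=0.1, iters=20000, seed=0):
    """Pegasos: stochastic sub-gradient descent on the SVM objective.
    Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    w = np.zeros(X.shape[1])
    for t in range(1, iters + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)                   # decreasing step size
        if y[i] * (X[i] @ w) < 1.0:             # margin violated: full sub-gradient
            w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
        else:                                   # only the regularizer contributes
            w = (1.0 - eta * lam) * w
    return w

# Synthetic, linearly separable data for illustration
rng = np.random.default_rng(1)
y = rng.choice([-1.0, 1.0], size=200)
X = y[:, None] * np.array([2.0, 2.0]) + rng.standard_normal((200, 2))
w = pegasos(X, y)
print(w, np.mean(np.sign(X @ w) == y))          # weights and training accuracy
```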

Journal ArticleDOI
TL;DR: In this article, the authors address the problem of optimizing corn-based bioethanol plants through the use of heat integration and mathematical programming techniques, which can reduce the operating costs of the plant.
Abstract: In this work, we address the problem of optimizing corn-based bioethanol plants through the use of heat integration and mathematical programming techniques. The goal is to reduce the operating costs of the plant. Capital cost, energy usage, and yields all contribute to production cost. Yield and energy usage also influence the viability of corn-based ethanol as a sustainable fuel. We first propose a limited superstructure of alternative designs including the various process units and utility streams involved in ethanol production. Our objective is to determine the connections in the network and the flow in each stream in the network such that we minimize the energy requirement of the overall plant. This is accomplished through the formulation of a mixed-integer nonlinear programming problem involving short-cut models for mass and energy balances for all the units in the system, where the model is solved through two nonlinear programming subproblems. We then perform a heat integration study on the resulting flowsheet; the modified flowsheet includes multieffect distillation columns that further reduce energy consumption. The results indicate that it is possible to reduce the current steam consumption required in the transformation of corn into fuel grade ethanol by more than 40% compared to the initial basic design. © 2008 American Institute of Chemical Engineers AIChE J, 2008

01 Jan 2008
TL;DR: It is now realistic to solve NLPs on the order of a million variables, for instance, with the IPOPT algorithm, and the recent NLP sensitivity extension to IPOPT quickly computes approximate solutions of perturbed NLPs, allowing on-line computations to be drastically reduced.
Abstract: Integration of real-time optimization and control with higher level decision-making (scheduling and planning) is an essential goal for profitable operation in a highly competitive environment. While integrated large-scale optimization models have been formulated for this task, their size and complexity remains a challenge to many available optimization solvers. On the other hand, recent development of powerful, large-scale solvers leads to a reconsideration of these formulations, in particular, through development of efficient large-scale barrier methods for nonlinear programming (NLP). As a result, it is now realistic to solve NLPs on the order of a million variables, for instance, with the IPOPT algorithm. Moreover, the recent NLP sensitivity extension to IPOPT quickly computes approximate solutions of perturbed NLPs. This allows on-line computations to be drastically reduced, even when large nonlinear optimization models are considered. These developments are demonstrated on dynamic real-time optimization strategies that can be used to merge and replace the tasks of (steady-state) real-time optimization and (linear) model predictive control. We consider a recent case study of a low density polyethylene (LDPE) process to illustrate these concepts.

Journal ArticleDOI
TL;DR: Semidefinite programming relaxations that are bounded, and hence suitable for use with finite KKT-branching, are proposed and studied; computational results demonstrate the practical effectiveness of the method.
Abstract: Existing global optimization techniques for nonconvex quadratic programming (QP) branch by recursively partitioning the convex feasible set and thus generate an infinite number of branch-and-bound nodes. An open question of theoretical interest is how to develop a finite branch-and-bound algorithm for nonconvex QP. One idea, which guarantees a finite number of branching decisions, is to enforce the first-order Karush-Kuhn-Tucker (KKT) conditions through branching. In addition, such an approach naturally yields linear programming (LP) relaxations at each node. However, the LP relaxations are unbounded, a fact that precludes their use. In this paper, we propose and study semidefinite programming relaxations, which are bounded and hence suitable for use with finite KKT-branching. Computational results demonstrate the practical effectiveness of the method, with a particular highlight being that only a small number of nodes are required.
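As a generic illustration of a bounded SDP relaxation for nonconvex QP (not the exact relaxation developed in the paper), the following cvxpy sketch builds a Shor-type relaxation of a box-constrained QP; it assumes cvxpy with an SDP-capable solver such as SCS is installed, and the function name and test data are illustrative.

```python
import numpy as np
import cvxpy as cp

def qp_sdp_bound(Q, c):
    """Shor-type SDP relaxation of the nonconvex box-QP
    min x'Qx + c'x subject to 0 <= x <= 1."""
    n = len(c)
    Y = cp.Variable((n + 1, n + 1), PSD=True)      # Y models [1, x'; x, X]
    x, X = Y[0, 1:], Y[1:, 1:]
    constraints = [Y[0, 0] == 1,
                   x >= 0, x <= 1,
                   cp.diag(X) <= x]                # lifted form of x_i^2 <= x_i
    prob = cp.Problem(cp.Minimize(cp.trace(Q @ X) + c @ x), constraints)
    prob.solve()
    return prob.value                              # lower bound on the QP optimum

# Illustrative indefinite (nonconvex) instance
Q = np.array([[0.0, 3.0], [3.0, -2.0]])
c = np.array([1.0, -1.0])
print(qp_sdp_bound(Q, c))
```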

Journal ArticleDOI
TL;DR: It is shown that the proposed neural network is stable at a Karush-Kuhn-Tucker point in the sense of Lyapunov, that its output trajectory is globally convergent to a minimum solution, and that there is no restriction on the initial point.
Abstract: This paper presents a novel recurrent neural network for solving nonlinear optimization problems with inequality constraints. Under the condition that the Hessian matrix of the associated Lagrangian function is positive semidefinite, it is shown that the proposed neural network is stable at a Karush-Kuhn-Tucker point in the sense of Lyapunov and its output trajectory is globally convergent to a minimum solution. Compared with variety of the existing projection neural networks, including their extensions and modification, for solving such nonlinearly constrained optimization problems, it is shown that the proposed neural network can solve constrained convex optimization problems and a class of constrained nonconvex optimization problems and there is no restriction on the initial point. Simulation results show the effectiveness of the proposed neural network in solving nonlinearly constrained optimization problems.

Journal ArticleDOI
TL;DR: A fast moving horizon estimation (MHE) algorithm is presented that overcomes the bottleneck of solving dynamic optimization problems on-line, and it is shown that highly accurate state estimates can be obtained in large-scale MHE applications with negligible on-line computational costs.

Journal ArticleDOI
TL;DR: An additional hybrid algorithm is defined, in which the interior-point method is replaced by the Newtonian resolution of a Karush-Kuhn-Tucker (KKT) system identified by the augmented Lagrangian algorithm.
Abstract: Optimization methods that employ the classical Powell-Hestenes-Rockafellar augmented Lagrangian are useful tools for solving nonlinear programming problems. Their reputation decreased in the last 10 years due to the comparative success of interior-point Newtonian algorithms, which are asymptotically faster. In this research, a combination of both approaches is evaluated. The idea is to produce a competitive method, being more robust and efficient than its 'pure' counterparts for critical problems. Moreover, an additional hybrid algorithm is defined, in which the interior-point method is replaced by the Newtonian resolution of a Karush-Kuhn-Tucker (KKT) system identified by the augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page: http://www.ime.usp.br/~egbirgin/tango/
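A bare-bones version of the classical augmented Lagrangian loop, restricted to equality constraints for brevity, might look like the sketch below; scipy's BFGS stands in for the subproblem solver (the hybrid schemes in the paper would switch to an interior-point or KKT Newton solver), and the penalty parameter and example problem are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, rho=10.0, outer=10):
    """Augmented Lagrangian method for min f(x) subject to h(x) = 0."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(np.atleast_1d(h(x))))
    for _ in range(outer):
        def L(z):                                  # augmented Lagrangian
            hz = np.atleast_1d(h(z))
            return f(z) + lam @ hz + 0.5 * rho * hz @ hz
        x = minimize(L, x, method="BFGS").x        # inner subproblem
        lam = lam + rho * np.atleast_1d(h(x))      # first-order multiplier update
    return x, lam

# Illustrative problem: min x^2 + y^2 subject to x + y - 1 = 0 (solution x = y = 0.5)
sol, mult = augmented_lagrangian(lambda z: z @ z,
                                 lambda z: np.array([z[0] + z[1] - 1.0]),
                                 x0=np.zeros(2))
print(sol, mult)
```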

Journal ArticleDOI
TL;DR: In this article, the authors propose piecewise MILP under- and overestimators for bilinear programs, developed via three systematic approaches and two segmentation schemes.
Abstract: Many practical problems of interest in chemical engineering and other fields can be formulated as bilinear programs (BLPs). For such problems, a local nonlinear programming solver often provides a suboptimal solution or even fails to locate a feasible one. Numerous global optimization algorithms devised for bilinear programs rely on linear programming (LP) relaxation, which is often weak, and, thus, slows down the convergence rate of the global optimization algorithm. An interesting recent development is the idea of using an ab initio partitioning of the search domain to improve the relaxation quality, which results in a relaxation problem that is a mixed-integer linear program (MILP) rather than an LP, called piecewise MILP relaxation. However, much work is needed to fully exploit the potential of such an approach. Several novel formulations are developed for piecewise MILP under- and overestimators for BLPs via three systematic approaches and two segmentation schemes. The superiority of the novel models is demonstrated and evaluated using a variety of examples. In addition, metrics are defined to measure the effectiveness of piecewise MILP relaxation within a two-level-relaxation framework, and several theoretical results are presented, as well as valuable insights into the properties of such relaxations, which may prove useful in developing global optimization algorithms. © 2008 American Institute of Chemical Engineers AIChE J, 2008
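For reference, the standard single-segment McCormick envelope, the usual source of the LP relaxation mentioned above, replaces each bilinear term w = xy, with x in [x^L, x^U] and y in [y^L, y^U], by the four linear inequalities below; piecewise MILP relaxations partition one variable's range into segments and use binary variables to activate the envelope of the active segment.

```latex
w \ge x^{L}y + x\,y^{L} - x^{L}y^{L}, \qquad w \ge x^{U}y + x\,y^{U} - x^{U}y^{U},\\
w \le x^{U}y + x\,y^{L} - x^{U}y^{L}, \qquad w \le x^{L}y + x\,y^{U} - x^{L}y^{U}.
```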