
Showing papers on "Nonlinear programming published in 1987"


Book
01 Jan 1987
TL;DR: This book develops large, sparse nonlinear programming methods for the numerical solution of optimal control problems, proceeding from an introduction to nonlinear programming through optimal control preliminaries to the optimal control problem itself and a set of worked examples, with accompanying software.
Abstract: Preface 1. Introduction to nonlinear programming 2. Large, sparse nonlinear programming 3. Optimal control preliminaries 4. The optimal control problem 5. Optimal control examples Appendix A. Software Bibliography, Index.

1,541 citations


Journal ArticleDOI
TL;DR: A special class of indefinite quadratic programs is constructed, with simple constraints and integer data, and it is shown that checking (a) or (b) on this class is NP-complete.
Abstract: In continuous variable, smooth, nonconvex nonlinear programming, we analyze the complexity of checking whether(a)a given feasible solution is not a local minimum, and(b)the objective function is not bounded below on the set of feasible solutions. We construct a special class of indefinite quadratic programs, with simple constraints and integer data, and show that checking (a) or (b) on this class is NP-complete. As a corollary, we show that checking whether a given integer square matrix is not copositive, is NP-complete.

1,117 citations
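For reference, copositivity (the property referenced in the corollary above) has a standard definition, restated here together with the equivalent negativity test; this is textbook material, not specific to the paper's construction.

```latex
\[
Q \text{ is copositive} \;\iff\; x^{\top} Q x \ge 0 \ \text{ for all } x \ge 0,
\qquad\text{so}\qquad
Q \text{ is not copositive} \;\iff\; \exists\, x \ge 0 :\; x^{\top} Q x < 0 .
\]
```

Deciding the second statement for a given integer symmetric matrix is exactly the kind of simply constrained indefinite quadratic question analyzed in the paper.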


Journal ArticleDOI
TL;DR: In this article, an algorithm for the direct numerical solution of an optimal control problem is given, which employs cubic polynomials to represent state variables, linearly interpolates control variables, and uses collocation to satisfy the differential equations.
Abstract: An algorithm for the direct numerical solution of an optimal control problem is given. The method employs cubic polynomials to represent state variables, linearly interpolates control variables, and uses collocation to satisfy the differential equations. This representation transforms the optimal control problem to a mathematical programming problem which is solved by sequential quadratic programming. The method is easy to program for a very general trajectory optimization problem and is shown to be very efficient for several sample problems. Results are compared with solutions obtained with other methods.

1,100 citations
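The transcription described in this abstract can be illustrated on a toy problem. The sketch below is a minimal, hypothetical example (not the paper's software): Hermite-cubic state segments, piecewise-linear control, midpoint collocation defects as equality constraints, and SciPy's SLSQP as the sequential quadratic programming solver.

```python
# Illustrative direct-collocation sketch (not the paper's code).
# Toy problem: minimize  integral_0^1 u(t)^2 dt
#              subject to x'(t) = u(t),  x(0) = 0,  x(1) = 1.
import numpy as np
from scipy.optimize import minimize

N = 10                      # number of segments
h = 1.0 / N                 # segment length
f = lambda x, u: u          # dynamics x' = f(x, u)

def unpack(z):
    return z[:N + 1], z[N + 1:]          # node states, node controls

def objective(z):
    # Simpson quadrature of u^2 on each segment (control is linear in t).
    _, u = unpack(z)
    um = 0.5 * (u[:-1] + u[1:])
    return np.sum(h / 6.0 * (u[:-1] ** 2 + 4.0 * um ** 2 + u[1:] ** 2))

def defects(z):
    # Hermite-Simpson collocation: the cubic state interpolant must satisfy
    # the ODE at the segment midpoints.
    x, u = unpack(z)
    fk, fk1 = f(x[:-1], u[:-1]), f(x[1:], u[1:])
    xm = 0.5 * (x[:-1] + x[1:]) + h / 8.0 * (fk - fk1)
    um = 0.5 * (u[:-1] + u[1:])
    fm = f(xm, um)
    return x[1:] - x[:-1] - h / 6.0 * (fk + 4.0 * fm + fk1)

def boundary(z):
    x, _ = unpack(z)
    return np.array([x[0] - 0.0, x[-1] - 1.0])

z0 = np.zeros(2 * (N + 1))               # crude initial guess
res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}])
x_opt, u_opt = unpack(res.x)
print(res.fun)     # should be close to 1.0
print(u_opt)       # should be close to a constant 1.0
```

The analytic optimum of the toy problem is u(t) = 1 with cost 1, which the transcription should reproduce to discretization accuracy.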


Book
01 Jan 1987
TL;DR: A comprehensive introduction to the subject, this book shows in detail how optimization problems arising in many different fields can be solved.

1,046 citations


Book
01 Jan 1987
TL;DR: This book presents optimization models and fundamentals, linear programming (its geometry, the simplex method, duality and sensitivity, network problems, and computational complexity), unconstrained nonlinear optimization, and constrained nonlinear programming, including penalty, barrier, and interior point methods.
Abstract: I Basics 1 Optimization Models 2 Fundamentals of Optimization 3 Representation of Linear Constraints II Linear Programming 4 Geometry of Linear Programming 5 The Simplex Method 6 Duality and Sensitivity 7 Enhancements of the Simplex Method 8 Network Problems 9 Computational Complexity of Linear Programming III Unconstrained Nonlinear Optimization 10 Basics of Unconstrained Optimization 11 Methods for Unconstrained Optimization 12 Low-Storage Methods 13 Nonlinear Least-Squares IV Nonlinear Programming 14 Optimality Conditions for Constrained Problems 15 Feasible-Point Methods 16 Penalty and Barrier Methods 17 Interior Point Methods Appendixes

821 citations


Book
01 Oct 1987
TL;DR: This book surveys global optimization methods for nonconvex quadratic problems, covering optimality conditions, combinatorial problems that can be formulated as nonconvex quadratic problems, enumerative, cutting plane, branch and bound, and bilinear programming methods, large scale problems, and test problems for global nonconvex quadratic programming algorithms.
Abstract: Convex sets and functions.- Optimality conditions in nonlinear programming.- Combinatorial optimization problems that can be formulated as nonconvex quadratic problems.- Enumerative methods in nonconvex programming.- Cutting plane methods.- Branch and bound methods.- Bilinear programming methods for nonconvex quadratic problems.- Large scale problems.- Global minimization of indefinite quadratic problems.- Test problems for global nonconvex quadratic programming algorithms.

472 citations


Journal ArticleDOI
TL;DR: In this article, an equality relaxation variant to the outer-approximation algorithm for solving mixed-integer nonlinear programming (MINLP) problems that arise in structural optimization of process flowsheets is presented.
Abstract: "This paper presents an Equality Relaxation variant to the Outer-Approximation algorithm for solving mixed-integer nonlinear programming (MINLP) problems that arise in structural optimization of process flowsheets. The propsed algorithm has the important capability of being able to explicitly handle nonlinear equations within MINLP formulations that have linear integer variables and linear/nonlinear continuous varibales. It is shown that through the explicit treatment of nonlinear equations, the proposed algorithm avoids computational difficulties (e.g. singularities, destruction of sparsity) that are experienced with algebraic or numerical elimination schemes.Also, theoretical properties of the Equality-Relaxation algorithm are discussed, and its performance is demonstrated with a planning problem and a flowsheet synthesis problem. Finally, a simple procedure for structural sensitivity analysis is presented."

266 citations
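As a reference point for the problem class treated above, a generic MINLP with linear binary variables and explicit nonlinear equations can be written as follows; this is a generic statement of the structure described in the abstract, not the paper's exact notation.

```latex
\[
\begin{aligned}
\min_{x,\,y}\quad & c^{\top} y + f(x) \\
\text{s.t.}\quad  & h(x) = 0 \quad \text{(nonlinear process equations)} \\
                  & g(x) \le 0 \\
                  & A y + B x \le b \\
                  & x \in \mathbb{R}^{n}, \;\; y \in \{0,1\}^{m}.
\end{aligned}
\]
```

The Equality Relaxation variant is aimed at handling the equations h(x) = 0 directly inside the outer-approximation framework rather than eliminating them algebraically or numerically.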


Journal ArticleDOI
TL;DR: In this paper, the problem of minimizing the L1-norm of the error impulse response for SISO continuous-time systems was studied, where irrational solutions are obtained even when the problem data are rational.
Abstract: Previous work has been concerned with minimizing the l1-norm of an error pulse response for discrete-time SISO [1] and MIMO [2] systems. In this paper we study the problem of minimizing the L1-norm of the error impulse response for SISO continuous-time systems. This problem is quite different from the discrete-time problem in that irrational solutions are obtained even when the problem data are rational. Two methods are suggested for the solution of the continuous-time problem: an exact method which leads to a finite-dimensional nonlinear programming problem, and an approximate method which leads to a linear programming problem.

227 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compared four different conjugate-gradient methods, representative of up-to-date available scientific software, by applying them to two different meteorological problems of interest using criteria of computational economy and accuracy.
Abstract: During the last few years new meteorological variational analysis methods have evolved, requiring large-scale minimization of a nonlinear objective function described in terms of discrete variables. The conjugate-gradient method was found to represent a good compromise in convergence rates and computer memory requirements between simpler and more complex methods of nonlinear optimization. In this study different available conjugate-gradient algorithms are presented with the aim of assessing their use in large-scale typical minimization problems in meteorology. Computational efficiency and accuracy are our principal criteria. Four different conjugate-gradient methods, representative of up-to-date available scientific software, were compared by applying them to two different meteorological problems of interest using criteria of computational economy and accuracy. Conclusions are presented as to the adequacy of the different conjugate-gradient algorithms for large-scale minimization problems in different meteorological applications.

194 citations
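For readers unfamiliar with the algorithm family being compared, the sketch below shows a generic nonlinear conjugate-gradient iteration (Polak-Ribière update with an Armijo backtracking line search). It is illustrative only and is not one of the four library codes evaluated in the paper.

```python
# Generic nonlinear conjugate-gradient sketch (Polak-Ribiere + Armijo
# backtracking).  Illustrative only.
import numpy as np

def conjugate_gradient(fun, grad, x0, max_iter=1000, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:                   # safeguard: ensure a descent direction
            d = -g
        # Armijo backtracking line search along d
        alpha, fx, slope = 1.0, fun(x), g.dot(d)
        while fun(x + alpha * d) > fx + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Polak-Ribiere coefficient, reset to steepest descent if negative
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage: strictly convex quadratic f(x) = 0.5 x'Ax - b'x, minimizer A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
quad = lambda x: 0.5 * x.dot(A).dot(x) - b.dot(x)
quad_grad = lambda x: A.dot(x) - b
print(conjugate_gradient(quad, quad_grad, np.zeros(2)))   # ~ [0.2, 0.4]
```

The method stores only a few vectors of the problem dimension, which is the low memory requirement the abstract cites as its advantage for large-scale problems.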


Book ChapterDOI
01 Jan 1987
TL;DR: In this article, the authors exploit the tools developed in the earlier parts to obtain detailed information about local optimizers in the non-degenerate case, and show that these optimizers obey a weak type of differentiability.
Abstract: This paper continues the local analysis of nonlinear programming problems begun in Parts I and II. In this part we exploit the tools developed in the earlier parts to obtain detailed information about local optimizers in the nondegenerate case. We show, for example, that these optimizers obey a weak type of differentiability and we compute their derivatives in this weak sense.

173 citations


Journal ArticleDOI
TL;DR: Two new versions of the controlled random search procedure for global optimization (CRS) are described, intended to drive an optimizing accelerator, based on a concurrent processing architecture, which can be attached to a workstation to achieve a significant increase in speed.
Abstract: This paper describes two new versions of the controlled random search procedure for global optimization (CRS). Designed primarily to suit the user of a CAD workstation, these algorithms can also be used effectively in other contexts. The first, known as CRS3, speeds the final convergence of the optimization by combining a local optimization algorithm with the global search procedure. The second, called CCRS, is a concurrent version of CRS3. This algorithm is intended to drive an optimizing accelerator, based on a concurrent processing architecture, which can be attached to a workstation to achieve a significant increase in speed. The results are given of comparative trials which involve both unconstrained and constrained optimization.
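The base procedure that CRS3 and CCRS build on, Price's controlled random search, can be sketched compactly. The outline below is illustrative only: it shows the basic population-plus-random-simplex move, not the local-search acceleration of CRS3 or the concurrent structure of CCRS.

```python
# Minimal sketch of a basic controlled random search (CRS) iteration for
# box-constrained global minimization.  Illustrative outline only.
import numpy as np

def crs(fun, lower, upper, pop_size=50, max_iter=5000, rng=None):
    rng = np.random.default_rng(rng)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = lower.size
    # Initial population: random points in the box, with their objective values.
    pts = lower + rng.random((pop_size, n)) * (upper - lower)
    vals = np.array([fun(p) for p in pts])
    for _ in range(max_iter):
        worst = np.argmax(vals)
        # Pick n + 1 distinct population members at random; reflect the last
        # one through the centroid of the first n (a random simplex move).
        idx = rng.choice(pop_size, size=n + 1, replace=False)
        centroid = pts[idx[:-1]].mean(axis=0)
        trial = 2.0 * centroid - pts[idx[-1]]
        if np.all(trial >= lower) and np.all(trial <= upper):
            f_trial = fun(trial)
            if f_trial < vals[worst]:        # replace the current worst point
                pts[worst], vals[worst] = trial, f_trial
    best = np.argmin(vals)
    return pts[best], vals[best]

# Usage: a simple two-dimensional test function with minimum at (1, 3).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 3.0) ** 2
x_best, f_best = crs(f, lower=[-5, -5], upper=[5, 5])
print(x_best, f_best)
```

In CRS3 this pure random search would be supplemented by a local optimizer to speed final convergence, and in CCRS the trial-point evaluations would be distributed over concurrent processors.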

Book ChapterDOI
01 Jun 1987
TL;DR: This is a partial survey of results on the complexity of the linear programming problem since the ellipsoid method, including polynomial and strongly polynomial algorithms, probabilistic analysis of simplex algorithms, and recent interior point methods.
Abstract: This is a partial survey of results on the complexity of the linear programming problem since the ellipsoid method. The main topics are polynomial and strongly polynomial algorithms, probabilistic analysis of simplex algorithms, and recent interior point methods. Introduction. Our purpose here is to survey theoretical developments in linear programming, starting from the ellipsoid method, mainly from the viewpoint of computational complexity. The survey does not attempt to be complete and naturally reflects the author's perspective, which may differ from the viewpoints of others. Linear programming is perhaps the most successful discipline of Operations Research. The standard form of the linear programming problem is to maximize a linear function c^T x (c, x ∈ R^n) over all vectors x such that Ax = b and x ≥ 0. We denote such a problem by (A, b, c). Currently, the main tool for solving the linear programming problem in practice is the class of simplex algorithms proposed and developed by Dantzig [43]. However, applications of nonlinear programming methods, inspired by Karmarkar's work [79], may also become practical tools for certain classes of linear programming problems. Complexity-based questions about linear programming and related parameters of polyhedra (see, e.g., [66]) have been raised since the 1950s, before the field of computational complexity started to develop. The practical performance of the simplex algorithms has always seemed surprisingly good. In particular, the number of iterations seemed polynomial and even linear in the dimensions of problems being solved. Exponential examples were constructed only in the early 1970s, starting with the work of Klee and Minty [85].
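The standard form mentioned in the abstract, written out as a display for reference:

```latex
\[
\max_{x \in \mathbb{R}^{n}} \; c^{\top} x
\qquad \text{subject to} \qquad A x = b, \quad x \ge 0,
\]
```

a problem instance being denoted (A, b, c) as above.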

Journal ArticleDOI
TL;DR: A method is proposed, using piecewise constant functions for the independent variables, that combines the technologies of quasi-Newton optimization algorithms and global spline collocation to simultaneously optimize and integrate systems described by differential/algebraic equations.

Journal ArticleDOI
TL;DR: This unified method demonstrates that nonlinear formulations (of the sort reported) allow more synergistic activity and, in contrast to linear formulations, allow antagonistic activity.
Abstract: Estimating forces in muscles and joints during locomotion requires formulations consistent with available methods of solving the indeterminate problem. Direct comparisons of results between differing optimization methods proposed in the literature have been difficult owing to widely varying model formulations, algorithms, input data, and other factors. We present an application of a new optimization program which includes linear and nonlinear techniques allowing a variety of cost functions and greater flexibility in problem formulation. Unified solution methods such as the one demonstrated here offer direct evaluations of such factors as optimization criteria and constraints. This unified method demonstrates that nonlinear formulations (of the sort reported) allow more synergistic activity and, in contrast to linear formulations, allow antagonistic activity. Concurrence of EMG activity and predicted forces is better with nonlinear predictions than with linear predictions. The prediction of synergistic and antagonistic activity expectedly leads to higher joint force predictions. Relaxation of the requirement that muscles resolve the entire intersegmental moment maintains muscle synergism in the nonlinear formulation while relieving muscle antagonism and reducing the predicted joint contact force. Such unified methods allow more possibilities for exploring new optimization formulations and for comparing the solutions to previously reported formulations.
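One commonly used nonlinear cost in this literature is a power of muscle stress; the formulation below is an assumed illustration of the kind of nonlinear criterion discussed, not necessarily one of the specific cost functions examined in the paper.

```latex
\[
\begin{aligned}
\min_{F}\quad & \sum_{i=1}^{m} \left( \frac{F_i}{A_i} \right)^{p}, \qquad p > 1, \\
\text{s.t.}\quad & \sum_{i=1}^{m} r_{ij} F_i = M_j \quad \text{for each joint moment } j, \\
                 & F_i \ge 0, \quad i = 1, \dots, m,
\end{aligned}
\]
```

where the F_i are muscle forces, the A_i cross-sectional areas, the r_ij moment arms, and the M_j intersegmental moments. Relaxing the equality constraints, so the muscles need not resolve the entire intersegmental moment, corresponds to the relaxation discussed at the end of the abstract.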

Journal ArticleDOI
TL;DR: In this article, a differential dynamic programming (DDP) algorithm is used for unsteady, nonlinear, groundwater management problems, and the dimensionality problems associated with embedding the hydraulic equations as constraints in the management model are significantly reduced.
Abstract: Optimal groundwater management models are based on the hydraulic equations of the aquifer system. These equations relate the state variables of the groundwater system, the head, and the decision variables that control the magnitude, location, and timing of pumping, or artificial recharge. For the unconfined aquifer these management models are large-scale, nonlinear programming problems. A differential dynamic programming (DDP) algorithm is used for unsteady, nonlinear, groundwater management problems. Due to the stagewise decomposition of DDP, the dimensionality problems associated with embedding the hydraulic equations as constraints in the management model are significantly reduced. In addition, DDP shows a linear growth in computing effort with respect to the number of stages or planning periods, and quadratic convergence. Several example problems illustrate the application of DDP to the optimal control of nonlinear groundwater hydraulics.
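The stagewise structure that DDP exploits can be stated generically; in this application the transition map comes from the discretized hydraulic equations of the aquifer, and the decisions are pumping and artificial recharge rates (generic form, not the paper's notation).

```latex
\[
\min_{q_1, \dots, q_T} \; \sum_{t=1}^{T} g_t(h_t, q_t)
\qquad \text{s.t.} \qquad h_{t+1} = \Phi_t(h_t, q_t), \quad t = 1, \dots, T-1,
\]
```

where h_t is the vector of aquifer heads (the state) in planning period t and q_t the pumping/recharge decisions. Because DDP sweeps backward and forward over the stages, the hydraulic equations are handled stage by stage instead of being embedded as one large constraint set, which is the dimensionality reduction described above.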

Journal ArticleDOI
TL;DR: The linear programming network of Tank and Hopfield is shown to obey the same unifying stationary cocontent theorem as the canonical nonlinear programming circuit of Chua and Lin.
Abstract: The linear programming network of Tank and Hopfield is shown to obey the same unifying stationary cocontent theorem as the canonical nonlinear programming circuit of Chua and Lin. Application of this theorem highlights an error in the design of Tank and Hopfield and suggests how this can be corrected to guarantee the existence of a bounded solution. The accuracy of the solution is maximized when the circuit of Tank and Hopfield reduces to the simplest special case of the canonical nonlinear programming circuit.

Journal ArticleDOI
TL;DR: In this article, an extension of the procedure proposed by Floudas and Grossmann is presented for the synthesis of heat exchanger networks under multi-period operation; the synthesis is performed with a nonlinear programming formulation based on a superstructure representation that includes all possible structural options for a given set of matches predicted for the different time periods.

Journal ArticleDOI
TL;DR: In this article, a simple reliability apportionment example with a budgetary constraint for a 2-component series structure is analyzed and generalized into a more realistic problem with multiple components and constraints.
Abstract: The concept of reliability apportionment is general and has even been applied to the allocation of man-machine reliability. Typically, however, the process of optimally apportioning individual component reliability to meet some desired system reliability level subject, perhaps, to constraints on cost, volume, weight, etc. has always been imprecise and vague at best. In real problems, the resource constraints are no more sacred than the objective system reliability; they are frequently flexible. In view of the inherent vagueness of the reliability objective as well as constraints in a typical ill-structured reliability apportionment problem, this paper formulates the nonlinear optimization problem in the fuzzy-set theoretic perspective. To illustrate the philosophy, a simple reliability apportionment example with a budgetary constraint for a 2-component series structure is analyzed. Then the concept is generalized into a more realistic problem with multiple components and constraints. This tutorial paper is an easy introduction for the newcomer to fuzzy-system theory.
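The crisp two-component version of the example can be written down directly; the cost functions c_i below are illustrative placeholders, and in the fuzzy formulation both the reliability goal and the budget are softened.

```latex
\[
\begin{aligned}
\max_{R_1, R_2}\quad & R_s = R_1 R_2 && \text{(series system: both components must work)} \\
\text{s.t.}\quad & c_1(R_1) + c_2(R_2) \le B && \text{(budgetary constraint)} \\
 & 0 < R_i < 1, \quad i = 1, 2.
\end{aligned}
\]
```

In the fuzzy-set formulation, statements such as "system reliability should be roughly R_0" and "spending should not appreciably exceed B" are each represented by a membership function, and the apportionment seeks the highest degree to which all of them can be satisfied simultaneously.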

Journal ArticleDOI
TL;DR: A new method for the design of digital all-pass filters using the Chebyshev criterion, based on a phase approximation algorithm for polynomial transfer functions, has the advantage that it finds the best uniform phase approximation to an arbitrarily specified phase response without any initial guess of the solution.
Abstract: A new method for the design of digital all-pass filters using the Chebyshev criterion is introduced. It is based on a phase approximation algorithm for polynomial transfer functions. The algorithm exploits a scheme of iteratively linearizing the nonlinear constraints in a nonlinear programming problem and is theoretically convergent. The design method has the advantage that it finds the best uniform phase approximation to an arbitrarily specified phase response without any initial guess of the solution. Design examples of orders up to 80, obtained on an IBM-PC/XT personal computer, are given to show the practicability of the method.

Journal ArticleDOI
TL;DR: In this article, a more comprehensive study on the optimization of a three-phase induction motor design was performed, including the relationship between motor cost, efficiency, and power factor; the effect of the properties of the electrical steel; and other effects as they occur in an optimal design.
Abstract: This two-part paper deals with the optimization of induction motor designs with respect to cost and efficiency. Most studies on the design of an induction motor using optimization techniques are concerned with the minimization of the motor cost and describe the optimization technique that was employed, giving the results of a single (or several) optimal design(s). In the present paper, a more comprehensive study on the optimization of a three-phase induction motor design was performed. This includes the relationship between motor cost, efficiency, and power factor; the effect of the properties of the electrical steel; and other effects as they occur in an optimal design. In addition, the optimization procedure that was used in this paper includes a design program, where some of the secondary parameters (which are here called variable constants) are modified according to the optimal results, in contrast to other studies where these parameters remain constant for the entire optimization. In this part, a new mathematical formulation of the optimization problem of the induction motor is presented.

Journal ArticleDOI
TL;DR: This work shows how to localize the concept of epi-continuity, and how to apply these localized ideas to ensure persistence and stability of local optimizing sets.
Abstract: One of the fundamental questions in nonlinear optimization is how optimization problems behave when the functions defining them change (e.g., by continuous deformation). Recently the study of epi-continuity has somewhat unified the results in this area. Here we show how to localize the concept of epi-continuity, and how to apply these localized ideas to ensure persistence and stability of local optimizing sets. We also show how these conditions follow from known properties of nonlinear programming problems.

Journal ArticleDOI
TL;DR: A duality theory is developed in which a general relation between φ-divergence and utility functions is revealed, via the conjugate transform, and a new type of certainty equivalent concept emerges.
Abstract: The paper considers stochastically constrained nonlinear programming problems. A penalty function is constructed in terms of a "distance" between random variables, defined in terms of the φ-divergence functional, a generalization of the relative entropy. A duality theory is developed in which a general relation between φ-divergence and utility functions is revealed, via the conjugate transform, and a new type of certainty equivalent concept emerges.
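For reference, the φ-divergence between discrete distributions p and q is the standard Csiszár functional, with relative entropy as a special case:

```latex
\[
I_{\varphi}(p, q) \;=\; \sum_{i} q_i \, \varphi\!\left( \frac{p_i}{q_i} \right),
\qquad \varphi \text{ convex}, \;\; \varphi(1) = 0,
\]
```

so that the choice φ(t) = t log t recovers the relative entropy (Kullback-Leibler divergence) \(\sum_i p_i \log (p_i / q_i)\).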

Journal ArticleDOI
TL;DR: The nonlinear parametric programming problem is reformulated as a closed system of nonlinear equations so that numerical continuation and bifurcation techniques can be used to investigate the dependence of the optimal solution on the system parameters.
Abstract: The nonlinear parametric programming problem is reformulated as a closed system of nonlinear equations so that numerical continuation and bifurcation techniques can be used to investigate the dependence of the optimal solution on the system parameters. This system, which is motivated by the Fritz John first-order necessary conditions, contains all Fritz John and all Karush-Kuhn-Tucker points as well as local minima and maxima, saddle points, feasible and nonfeasible critical points. Necessary and sufficient conditions for a singularity to occur in this system are characterized in terms of the loss of a complementarity condition, the linear dependence of the gradients of the active constraints, and the singularity of the Hessian of the Lagrangian on a tangent space. Any singularity can be placed in one of seven distinct classes depending upon which subset of these three conditions hold true at a solution. For problems with one parameter, we analyze simple and multiple bifurcation of critical points from a singularity arising from the loss of the complementarity condition, and then develop a set of conditions which guarantees the unique persistence of a minimum through this singularity.
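For orientation, the Fritz John first-order conditions for the underlying problem \(\min f(x)\) subject to \(g_i(x) \le 0\) read as follows (generic statement; the paper's closed system additionally carries the parameter dependence and a normalization of the multipliers):

```latex
\[
u_0 \nabla f(x) + \sum_{i=1}^{m} u_i \nabla g_i(x) = 0,
\qquad u_i \, g_i(x) = 0 \;\; (i = 1, \dots, m),
\qquad (u_0, u) \ge 0, \;\; (u_0, u) \ne 0.
\]
```

Karush-Kuhn-Tucker points are the Fritz John points with u_0 > 0, and the "loss of a complementarity condition" above refers to an active constraint g_i(x) = 0 whose multiplier u_i vanishes as well.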

Journal ArticleDOI
TL;DR: A specialization of the primal truncated Newton algorithm for solving nonlinear optimization problems on networks with gains, able to capitalize on the special structure of the constraints.
Abstract: We describe a specialization of the primal truncated Newton algorithm for solving nonlinear optimization problems on networks with gains. The algorithm and its implementation are able to capitalize on the special structure of the constraints. Extensive computational tests show that the algorithm is capable of solving very large problems. Tests of numerous tactical issues are described, including maximal basis, projected line search, and pivot strategies. Comparisons with NLPNET, a nonlinear network code, and MINOS, a general-purpose nonlinear programming code, are also included.

Journal ArticleDOI
TL;DR: The ellipsoid algorithm was used in this article to refine the conformations of peptides in a constrained nonlinear optimization problem with NMR distance constraints, where the dihedral angles about single bonds were used as variables to keep the dimensionality low.
Abstract: A new method for constrained nonlinear optimization known as the ellipsoid algorithm is evaluated as a means of determining and refining the conformations of peptides. Advantages of the ellipsoid algorithm over conventional optimization methods include that it avoids many local minima that other methods would be trapped by, and that it is sometimes able to find optimum solutions in which the constraints are satisfied exactly. The dihedral angles about single bonds were used as variables to keep the dimensionality low (the rate of convergence decreases rapidly with increasing dimensionality of the problem). The method is evaluated on problems involving distance constraints, and for minimization of conformational energy functions. In an initial application, conformations consistent with an experimental set of NMR distance constraints were obtained in a problem involving 48 variable dihedral angles.
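The core ellipsoid update is compact enough to sketch. The code below is an illustrative central-cut ellipsoid method for a generic constrained minimization (feasibility cuts on violated constraints, objective cuts otherwise); it is not the authors' peptide-refinement implementation, and the toy problem at the end is hypothetical.

```python
# Minimal sketch of the ellipsoid algorithm for constrained minimization:
#   minimize f(x)  subject to  g_j(x) <= 0 for all j.
# The ellipsoid {z : (z - x)' P^{-1} (z - x) <= 1} is cut along the gradient
# of a violated constraint, or along the objective gradient when x is feasible.
import numpy as np

def ellipsoid(f, grad_f, constraints, x0, P0, max_iter=2000):
    """constraints: list of (g_j, grad_g_j) pairs."""
    n = x0.size
    x, P = x0.astype(float), P0.astype(float)
    best_x, best_f = None, np.inf
    for _ in range(max_iter):
        g = None
        for gj, grad_gj in constraints:
            if gj(x) > 0:                  # violated constraint: feasibility cut
                g = grad_gj(x)
                break
        if g is None:                      # feasible point: optimality cut
            if f(x) < best_f:
                best_x, best_f = x.copy(), f(x)
            g = grad_f(x)
        # Central-cut ellipsoid update of center x and shape matrix P.
        Pg = P @ g
        denom = np.sqrt(g @ Pg)
        if denom < 1e-12:
            break
        gt = Pg / denom
        x = x - gt / (n + 1.0)
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1.0)) * np.outer(gt, gt))
    return best_x, best_f

# Toy usage: minimize (x - 2)^2 + (y - 1)^2 subject to x + y <= 2.
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
grad_f = lambda x: np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])
cons = [(lambda x: x[0] + x[1] - 2.0, lambda x: np.array([1.0, 1.0]))]
x_star, f_star = ellipsoid(f, grad_f, cons, x0=np.zeros(2), P0=25.0 * np.eye(2))
print(x_star, f_star)    # optimum near (1.5, 0.5), value 0.5
```

Each iteration shrinks the ellipsoid's volume by a fixed, dimension-dependent factor, so the shape matrix P must stay small in dimension, which is consistent with the paper's use of dihedral angles to keep the variable count low.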

Journal ArticleDOI
TL;DR: The Chow-Yorke algorithm is a homotopy method that has been proved globally convergent for Brouwer fixed-point problems, certain classes of zero finding and nonlinear programming problems, and two-point boundary-value approximations based on shooting and finite differences as discussed by the authors.

Journal ArticleDOI
TL;DR: In this article, an efficient structural optimization methodology is presented for the design of minimum weight space frames subject to multiple natural frequency constraints, which is implemented in an automated structural optimization system which has been applied to solve a variety of space frame optimization problems.
Abstract: An efficient structural optimization methodology is presented for the design of minimum weight space frames subject to multiple natural frequency constraints. A powerful class of generalized hybrid constraint approximations which require only the first-order constraint function derivatives has been developed to overcome the inherent nonlinearity of the frequency constraint. The generalized hybrid constraint functions are shown to be relatively conservative, separable and convex in the region bounded by the move limits based on the formula described in this paper. The optimization methodology is implemented in an automated structural optimization system which has been applied to solve a variety of space frame optimization problems. Numerical results obtained for three example problems indicate that the optimization methodology requires fewer than 10 complete normal modes analyses to generate a near optimum solution.

Book ChapterDOI
Ron S. Dembo
01 Jan 1987
TL;DR: A new, convergent, primal-feasible algorithm for linearly constrained optimization that is capable of rapid asymptotic behavior and has relatively low storage requirements is described.
Abstract: We describe a new, convergent, primal-feasible algorithm for linearly constrained optimization. It is capable of rapid asymptotic behavior and has relatively low storage requirements. Its application to large-scale nonlinear network optimization is discussed and computational results on problems of over 2000 variables and 1000 constraints are presented. Indications are that it could prove to be significantly better than known methods for this class of problems.

Journal ArticleDOI
TL;DR: In this paper, first and second-order conditions are given which must necessarily be satisfied by local minimizers for certain finite-dimensional nonsmooth nonlinear programming problems, which are of standard form, having a finite number of equality and inequality constraints.
Abstract: First- and second-order conditions are given which must necessarily be satisfied by local minimizers for certain finite-dimensional nonsmooth nonlinear programming problems. The problems considered are of standard form, having a finite number of equality and inequality constraints. The principal result does not require a constraint qualification, but does require that the functions be semismooth at the minimizer. The necessary conditions are stated in terms of the generalized gradients of nonsmooth analysis and certain second-order directional derivatives.

Journal ArticleDOI
TL;DR: In this article, the authors investigate methods for solving high-dimensional nonlinear optimization problems which typically occur in the daily scheduling of electricity production: problems with a nonlinear, separable cost function, lower and upper bounds on the variables, and an equality constraint to satisfy the demand.
Abstract: We investigate methods for solving high-dimensional nonlinear optimization problems which typically occur in the daily scheduling of electricity production: problems with a nonlinear, separable cost function, lower and upper bounds on the variables, and an equality constraint to satisfy the demand. If the cost function is quadratic, we use a modified Lagrange multiplier technique. For a nonquadratic cost function (a penalty function combining the original cost function and certain fuel constraints, so that it is generally not separable), we compare the performance of the gradient-projection method and the reduced-gradient method, both with conjugate search directions within facets of the feasible set. Numerical examples at the end of the paper demonstrate the effectiveness of the gradient-projection method to solve problems with hundreds of variables by exploitation of the special structure.
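For the quadratic-cost case described above, the Lagrange multiplier technique reduces to a one-dimensional search for the multiplier of the demand constraint. The sketch below is a generic illustration of that idea (clipped stationarity conditions plus bisection on the demand residual); it is not the authors' code, and the unit data in the usage example are made up.

```python
# Minimal sketch: separable quadratic costs, box bounds, one demand equality.
#   minimize   sum_i (a_i + b_i * x_i + c_i * x_i**2)
#   subject to sum_i x_i = D,   l_i <= x_i <= u_i,   c_i > 0.
# The Lagrangian separates, so each x_i is a clipped linear function of the
# multiplier lam, and lam is found by bisection on the demand residual.
# Assumes the problem is feasible: sum(lo) <= demand <= sum(hi).
import numpy as np

def dispatch(b, c, lo, hi, demand, tol=1e-10):
    b, c, lo, hi = map(np.asarray, (b, c, lo, hi))

    def x_of(lam):
        # Stationarity of the Lagrangian: b_i + 2 c_i x_i = lam, then clip to bounds.
        return np.clip((lam - b) / (2.0 * c), lo, hi)

    # Bracket the multiplier so the total output brackets the demand.
    lam_lo = np.min(b + 2.0 * c * lo)
    lam_hi = np.max(b + 2.0 * c * hi)
    for _ in range(200):
        lam = 0.5 * (lam_lo + lam_hi)
        total = x_of(lam).sum()
        if abs(total - demand) < tol:
            break
        if total < demand:
            lam_lo = lam
        else:
            lam_hi = lam
    return x_of(lam), lam

# Usage: three hypothetical units sharing a demand of 10.
x, lam = dispatch(b=[1.0, 2.0, 3.0], c=[0.5, 0.4, 0.3],
                  lo=[0.0, 0.0, 0.0], hi=[6.0, 6.0, 6.0], demand=10.0)
print(x, x.sum(), lam)
```

The same structure, each variable an explicit monotone function of a single multiplier, is what keeps problems with hundreds of variables tractable; the paper's nonquadratic case is instead handled with the gradient-projection and reduced-gradient methods it compares.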