
Showing papers on "Nonlinear programming published in 2002"


Journal ArticleDOI
TL;DR: An SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems is discussed.
Abstract: Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available and that the constraint gradients are sparse. We discuss an SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems. SNOPT is a particular implementation that makes use of a semidefinite QP solver. It is based on a limited-memory quasi-Newton approximation to the Hessian of the Lagrangian and uses a reduced-Hessian algorithm (SQOPT) for solving the QP subproblems. It is designed for problems with many thousands of constraints and variables but a moderate number of degrees of freedom (say, up to 2000). An important application is to trajectory optimization in the aerospace industry. Numerical results are given for most problems in the CUTE and COPS test collections (about 900 examples).
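SNOPT's machinery (limited-memory quasi-Newton Hessians, the SQOPT reduced-Hessian QP solver, infeasibility handling) is far beyond a few lines, but the core SQP idea, obtaining a search direction by solving a QP subproblem built from the Lagrangian, can be sketched on a toy equality-constrained problem. The problem, names, and dense KKT solve below are our illustration, not the paper's code:

```python
def solve_linear(A, b):
    """Solve the square system A x = b by Gaussian elimination with pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= factor * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def sqp_step(x):
    """One SQP step for: minimize x1^2 + x2^2  subject to  x1 + x2 - 1 = 0."""
    g = [2.0 * x[0], 2.0 * x[1]]     # gradient of the objective
    c = x[0] + x[1] - 1.0            # equality-constraint residual
    # QP subproblem's KKT system: [H A^T; A 0] [p; lam] = [-g; -c],
    # with H = Hessian of the Lagrangian (= 2I here, since c is linear).
    kkt = [[2.0, 0.0, 1.0],
           [0.0, 2.0, 1.0],
           [1.0, 1.0, 0.0]]
    p1, p2, lam = solve_linear(kkt, [-g[0], -g[1], -c])
    return [x[0] + p1, x[1] + p2], lam

x, lam = [2.0, -1.0], 0.0
for _ in range(5):
    x, lam = sqp_step(x)             # converges to x = (0.5, 0.5), lam = -1
```

A production SQP code would add a merit function and line search between steps; for this convex quadratic toy the plain Newton-KKT iteration already converges.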

2,831 citations


Journal ArticleDOI
TL;DR: This paper deals with a certain class of optimization methods, based on conservative convex separable approximations (CCSA), for solving inequality-constrained nonlinear programming problems, and it is proved that the sequence of iteration points converges toward the set of Karush--Kuhn--Tucker points.
Abstract: This paper deals with a certain class of optimization methods, based on conservative convex separable approximations (CCSA), for solving inequality-constrained nonlinear programming problems. Each generated iteration point is a feasible solution with lower objective value than the previous one, and it is proved that the sequence of iteration points converges toward the set of Karush–Kuhn–Tucker points. A major advantage of CCSA methods is that they can be applied to problems with a very large number of variables (say 10^4–10^5) even if the Hessian matrices of the objective and constraint functions are dense.
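The conservative idea can be seen already in one dimension: build a convex model of f around the current point, stiffen it until it overestimates f at its own minimizer, and then move there, which guarantees the monotone descent the abstract describes. A minimal sketch of that inner/outer loop, with a test function and parameter names of our own choosing:

```python
def f(x):
    return (x - 3.0) ** 2 + 1.0

def fprime(x):
    return 2.0 * (x - 3.0)

def ccsa_1d(x, rho=1.0, iters=50):
    """Minimize f by conservative convex separable approximations (1-D sketch)."""
    for _ in range(iters):
        while True:
            d = -fprime(x) / rho                      # minimizer of the model
            model = f(x) + fprime(x) * d + 0.5 * rho * d * d
            if f(x + d) <= model + 1e-12:             # model is conservative here
                break
            rho *= 2.0                                # stiffen the model, retry
        x += d                                        # guaranteed descent step
        rho = max(0.5 * rho, 1e-6)                    # relax between iterations
    return x
```

In the real method the approximations are separable in many variables (which is why dense Hessians never appear); the stiffen-until-conservative loop is the same.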

1,015 citations


Journal ArticleDOI
TL;DR: The aim of the present work is to promote global convergence without the need to use a penalty function, so a new concept of a “filter” is introduced which allows a step to be accepted if it reduces either the objective function or the constraint violation function.
Abstract: In this paper the solution of nonlinear programming problems by a Sequential Quadratic Programming (SQP) trust-region algorithm is considered. The aim of the present work is to promote global convergence without the need to use a penalty function. Instead, a new concept of a “filter” is introduced which allows a step to be accepted if it reduces either the objective function or the constraint violation function. Numerical tests on a wide range of test problems are very encouraging and the new algorithm compares favourably with LANCELOT and an implementation of Sl1QP.
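The filter itself is just a list of (constraint violation h, objective f) pairs with a dominance test: a trial point is acceptable if no stored pair is at least as good in both measures. A minimal sketch of that bookkeeping (function names are ours):

```python
def dominated(point, filter_set):
    """(h, f) is dominated if some filter entry is no worse in both measures."""
    h, f = point
    return any(h2 <= h and f2 <= f for (h2, f2) in filter_set)

def try_accept(point, filter_set):
    """Accept a trial point iff it is not dominated; prune entries it dominates."""
    if dominated(point, filter_set):
        return False
    h, f = point
    filter_set[:] = [(h2, f2) for (h2, f2) in filter_set
                     if not (h <= h2 and f <= f2)]
    filter_set.append(point)
    return True
```

A trial step that increases the objective can still be accepted if it reduces the violation, and vice versa, which is exactly how the filter replaces a penalty function.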

879 citations


Journal ArticleDOI
TL;DR: A condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization shows how their influence has transformed both the theory and practice of constrained optimization.
Abstract: Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar's widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
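The classical barrier scheme the article surveys fits in a few lines: replace a constraint such as x >= 1 by a -mu*log(x - 1) term, minimize for a decreasing sequence of mu, and follow the central path to the constrained solution. A sketch on a toy problem of our own (minimize f(x) = x subject to x >= 1):

```python
def barrier_solve(mu0=1.0, shrink=0.2, tol=1e-8):
    """Log-barrier method for: minimize x subject to x >= 1."""
    mu, x = mu0, 2.0                       # strictly feasible start
    while mu > tol:
        for _ in range(50):                # Newton on phi(x) = x - mu*log(x - 1)
            g = 1.0 - mu / (x - 1.0)       # phi'(x)
            h = mu / (x - 1.0) ** 2        # phi''(x)
            step = g / h
            while x - step <= 1.0:         # damp to stay strictly feasible
                step *= 0.5
            x -= step
            if abs(g) < 1e-12:
                break
        mu *= shrink                       # follow the central path as mu -> 0
    return x
```

Each barrier subproblem has minimizer x(mu) = 1 + mu here, so the iterates trace the central path toward the solution x = 1 as mu shrinks.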

693 citations


Book
01 Jan 2002
TL;DR: A comprehensive handbook, edited by Pardalos and Resende, surveying algorithms, applications, and software for applied optimization, from linear, nonlinear, and integer programming to combinatorial, network, and stochastic optimization.
Abstract: Preface. Panos M. Pardalos and Mauricio G. C. Resende: Introduction. Part One: Algorithms 1: Linear Programming 1.1: Tamas Terlaky: Introduction 1.2: Tamas Terlaky: Simplex-Type Algorithms 1.3: Kees Roos: Interior-Point Methods for Linear Optimization 2: Henry Wolkowicz: Semidefinite Programming 3: Combinatorial Optimization 3.1: Panos M. Pardalos and Mauricio G. C. Resende: Introduction 3.2: Eva K. Lee: Branch-and-Bound Methods 3.3: John E. Mitchell: Branch-and-Cut Algorithms for Combinatorial Optimization Problems 3.4: Augustine O. Esogbue: Dynamic Programming Approaches 3.5: Mutsunori Yagiura and Toshihide Ibaraki: Local Search 3.6: Metaheuristics 3.6.1: Bruce L. Golden and Edward A. Wasil: Introduction 3.6.2: Eric D. Taillard: Ant Systems 3.6.3: John E. Beasley: Population Heuristics 3.6.4: Pablo Moscato: Memetic Algorithms 3.6.5: Leonidas S. Pitsoulis and Mauricio G. C. Resende: Greedy Randomized Adaptive Search Procedures 3.6.6: Manuel Laguna: Scatter Search 3.6.7: Fred Glover and Manuel Laguna: Tabu Search 3.6.8: E. H. L. Aarts and H. M. M. Ten Eikelder: Simulated Annealing 3.6.9: Pierre Hansen and Nenad Mladenović: Variable Neighborhood Search 4: Yinyu Ye: Quadratic Programming 5: Nonlinear Programming 5.1: Gianni Di Pillo and Laura Palagi: Introduction 5.2: Gianni Di Pillo and Laura Palagi: Unconstrained Nonlinear Programming 5.3: Gianni Di Pillo and Laura Palagi: Constrained Nonlinear Programming 5.4: Manlio Gaudioso: Nonsmooth Optimization 6: Christodoulos A. Floudas: Deterministic Global Optimization and Its Applications 7: Philippe Mahey: Decomposition Methods for Mathematical Programming 8: Network Optimization 8.1: Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin: Introduction 8.2: Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin: Maximum Flow Problem 8.3: Edith Cohen: Shortest-Path Algorithms 8.4: S.
Thomas McCormick: Minimum-Cost Single-Commodity Flow 8.5: Pierre Chardaire and Abdel Lisser: Minimum-Cost Multicommodity Flow 8.6: Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin: Minimum Spanning Tree Problem 9: Integer Programming 9.1: Nelson Maculan: Introduction 9.2: Nelson Maculan: Linear 0-1 Programming 9.3: Yves Crama and Peter L. Hammer: Pseudo-Boolean Optimization 9.4: Christodoulos A. Floudas: Mixed-Integer Nonlinear Optimization 9.5: Monique Guignard: Lagrangian Relaxation 9.6: Arne Løkketangen: Heuristics for 0-1 Mixed-Integer Programming 10: Theodore B. Trafalis and Suat Kasap: Artificial Neural Networks in Optimization and Applications 11: John R. Birge: Stochastic Programming 12: Hoang Tuy: Hierarchical Optimization 13: Michael C. Ferris and Christian Kanzow: Complementarity and Related Problems 14: Jose H. Dula: Data Envelopment Analysis 15: Yair Censor and Stavros A. Zenios: Parallel Algorithms in Optimization 16: Sanguthevar Rajasekaran: Randomization in Discrete Optimization: Annealing Algorithms Part Two: Applications 17: Problem Types 17.1: Chung-Yee Lee and Michael Pinedo: Optimization and Heuristics of Scheduling 17.2: John E. Beasley, Abilio Lucena, and Marcus Poggi de Aragao: The Vehicle Routing Problem 17.3: Ding-Zhu Du: Network Designs: Approximations for Steiner Minimum Trees 17.4: Edward G. Coffman, Jr., Janos Csirik, and Gerhard J. Woeginger: Approximate Solutions to Bin Packing Problems 17.5: Rainer E. Burkard: The Traveling Salesman Problem 17.6: Dukwon Kim and Boghos D. Sivazlian: Inventory Management 17.7: Zvi Drezner: Location 17.8: Jun Gu, Paul W. Purdom, John Franco, and Benjamin W. Wah: Algorithms for the Satisfiability (SAT) Problem 17.9: Eranda Cela: Assignment Problems 18: Application Areas 18.1: Warren B. Powell: Transportation and Logistics 18.2: Gang Yu and Benjamin G. Thengvall: Airline Optimization 18.3: Alexandra M. Newman, Linda K.
Nozick, and Candace Arai Yano: Optimization in the Rail Industry 18.4: Andres Weintraub Pohorille and John Hof: Forestry Industry 18.5: Stephen C. Graves: Manufacturing Planning and Control 18.6: Robert C. Leachman: Semiconductor Production Planning 18.7: Matthew E. Berge, John T. Betts, Sharon K. Filipowski, William P. Huffman, and David P. Young: Optimization in the Aerospace Industry 18.8: Energy 18.8.1: Gerson Couto de Oliveira, Sergio Granville, and Mario Pereira: Optimization in Electrical Power Systems 18.8.2: Roland N. Horne: Optimization Applications in Oil and Gas Recovery 18.8.3: Roger Z. Rios-Mercado: Natural Gas Pipeline Optimization 18.9: G. Anandalingam: Optimization of Telecommunications Networks 18.10: Stanislav Uryasev: Optimization of Test Intervals in Nuclear Engineering 18.11: Hussein A. Y. Etawil and Anthony Vannelli: Optimization in VLSI Design: Target Distance Models for Cell Placement 18.12: Michael Florian and Donald W. Hearn: Optimization Models in Transportation Planning 18.13: Guoliang Xue: Optimization in Computational Molecular Biology 18.14: Anna Nagurney: Optimization in the Financial Services Industry 18.15: J. B. Rosen, John H. Glick, and E. Michael Gertz: Applied Large-Scale Nonlinear Optimization for Optimal Control of Partial Differential Equations and Differential Algebraic Equations 18.16: Kumaraswamy Ponnambalam: Optimization in Water Reservoir Systems 18.17: Ivan Dimov and Zahari Zlatev: Optimization Problems in Air-Pollution Modeling 18.18: Charles B. Moss: Applied Optimization in Agriculture 18.19: Petra Mutzel: Optimization in Graph Drawing 18.20: G. E. Stavroulakis: Optimization for Modeling of Nonlinear Interactions in Mechanics Part Three: Software 19: Emmanuel Fragniere and Jacek Gondzio: Optimization Modeling Languages 20: Stephen J. Wright: Optimization Software Packages 21: Andreas Fink, Stefan Voß, and David L. Woodruff: Optimization Software Libraries 22: John E.
Beasley: Optimization Test Problem Libraries 23: Simone de L. Martins, Celso C. Ribeiro, and Noemi Rodriguez: Parallel Computing Environment 24: Catherine C. McGeoch: Experimental Analysis of Optimization Algorithms 25: Andreas Fink, Stefan Voß, and David L. Woodruff: Object-Oriented Programming 26: Michael A. Trick: Optimization and the Internet Directory of Contributors Index

631 citations


Journal ArticleDOI
TL;DR: In this article, a unified overview and derivation of mixed-integer nonlinear programming (MINLP) techniques, such as Branch and Bound, Outer-Approximation, Generalized Benders and Extended Cutting Plane methods, as applied to nonlinear discrete optimization problems that are expressed in algebraic form is presented.
Abstract: The major objective of this paper is to present a unified overview and derivation of mixed-integer nonlinear programming (MINLP) techniques (Branch and Bound, Outer-Approximation, Generalized Benders, and Extended Cutting Plane methods) as applied to nonlinear discrete optimization problems that are expressed in algebraic form. The solution of MINLP problems with convex functions is presented first, followed by a brief discussion on extensions for the nonconvex case. The solution of logic-based representations, known as generalized disjunctive programs, is also described. Theoretical properties are presented, and numerical comparisons are given on a small process network problem.
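Of the techniques surveyed, Branch and Bound is the easiest to sketch: solve a continuous relaxation at each node, prune nodes whose lower bound cannot beat the incumbent, and branch on fractional variables. A one-integer-variable toy of our own construction (the relaxation has a closed-form minimizer, so no NLP solver is needed):

```python
def branch_and_bound(lo, hi, best=(float("inf"), None)):
    """Minimize (x - 2.3)^2 over integers x in [lo, hi] by branch and bound."""
    if lo > hi:
        return best
    xr = min(max(2.3, lo), hi)          # minimizer of the continuous relaxation
    bound = (xr - 2.3) ** 2             # valid lower bound for this node
    if bound >= best[0]:
        return best                     # prune: node cannot beat the incumbent
    if abs(xr - round(xr)) < 1e-9:      # relaxed solution is already integral
        return min(best, ((round(xr) - 2.3) ** 2, round(xr)))
    k = int(xr)                         # branch: x <= k  or  x >= k + 1
    best = branch_and_bound(lo, k, best)
    return branch_and_bound(k + 1, hi, best)
```

On this instance the root relaxation gives x = 2.3; the left child returns the incumbent x = 2, and the right child (bound 0.49) is pruned immediately.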

625 citations


BookDOI
01 Jan 2002
TL;DR: A monograph on convexification-based global optimization for nonlinear and mixed-integer nonlinear programs, covering convex extensions, relaxations of factorable programs, domain reduction, and branch-and-bound implementation, with applications such as the pooling problem and a GAMS/BARON tutorial.
Abstract: Preface. Acknowledgements. List of Figures. List of Tables. 1. Introduction. 2. Convex Extensions. 3. Project Disaggregation. 4. Relaxations of Factorable Programs. 5. Domain Reduction. 6. Node Partitioning. 7. Implementation. 8. Refrigerant Design Problem. 9. The Pooling Problem. 10. Miscellaneous Problems. 11. GAMS/BARON: A Tutorial. A: GAMS Model for Pooling Problems. Bibliography. Index. Author Index.

562 citations


Journal ArticleDOI
TL;DR: A Chebyshev pseudospectral method is presented in this paper for directly solving a generic optimal control problem with state and control constraints and yields more accurate results than those obtained from the traditional collocation methods.
Abstract: We present a Chebyshev pseudospectral method for directly solving a generic Bolza optimal control problem with state and control constraints. This method employs Nth-degree Lagrange polynomial approximations for the state and control variables with the values of these variables at the Chebyshev-Gauss-Lobatto (CGL) points as the expansion coefficients. This process yields a nonlinear programming problem (NLP) with the state and control values at the CGL points as unknown NLP parameters. Numerical examples demonstrate that this method yields more accurate results than those obtained from the traditional collocation methods.
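The CGL points and the associated differentiation matrix are the key discretization ingredients: differentiating the Lagrange interpolant at the nodes turns the dynamics into algebraic constraints on the NLP unknowns. A sketch of both (the negative-row-sum trick for the diagonal is a standard numerical safeguard, not something specific to this paper):

```python
import math

def cgl_points(N):
    """Chebyshev-Gauss-Lobatto points x_k = cos(pi*k/N), k = 0..N."""
    return [math.cos(math.pi * k / N) for k in range(N + 1)]

def cheb_diff_matrix(N):
    """Differentiation matrix D with (D f)_k ~ f'(x_k) at the CGL points."""
    x = cgl_points(N)
    c = [2.0 if k in (0, N) else 1.0 for k in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(N + 1):
        # rows of an exact differentiation matrix sum to zero (derivative of a
        # constant is zero), so the diagonal is the negated off-diagonal sum
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return x, D
```

Applied to samples of a polynomial of degree at most N, this matrix differentiates exactly (up to roundoff), which is the source of the method's accuracy advantage over low-order collocation.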

484 citations


Journal ArticleDOI
TL;DR: This paper describes a new formulation, based on linear finite elements and non-linear programming, for computing rigorous lower bounds in 1, 2 and 3 dimensions, and is shown to be vastly superior to an equivalent formulation that is based on a linearized yield surface and linear programming.
Abstract: This paper describes a new formulation, based on linear finite elements and non-linear programming, for computing rigorous lower bounds in 1, 2 and 3 dimensions. The resulting optimization problem is typically very large and highly sparse and is solved using a fast quasi-Newton method whose iteration count is largely independent of the mesh refinement. For two-dimensional applications, the new formulation is shown to be vastly superior to an equivalent formulation that is based on a linearized yield surface and linear programming. Although it has been developed primarily for geotechnical applications, the method can be used for a wide range of plasticity problems including those with inhomogeneous materials, complex loading, and complicated geometry. Copyright © 2002 John Wiley & Sons, Ltd.

453 citations


01 Jan 2002
TL;DR: Experiments on eleven test cases show that Particle Swarm Optimization, with a feasibility-preserving strategy employed to deal with constraints, is an efficient and general method for most nonlinear optimization problems with nonlinear inequality constraints.
Abstract: This paper presents a Particle Swarm Optimization (PSO) algorithm for constrained nonlinear optimization problems. In PSO, the potential solutions, called particles, are "flown" through the problem space by learning from the current optimal particle and its own memory. In this paper, preserving feasibility strategy is employed to deal with constraints. PSO is started with a group of feasible solutions and a feasibility function is used to check if the new explored solutions satisfy all the constraints. All particles keep only those feasible solutions in their memory. Eleven test cases were tested and showed that PSO is an efficient and general solution to solve most nonlinear optimization problems with nonlinear inequality constraints.
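A minimal version of the preserving-feasibility strategy: initialize the swarm with feasible particles only, and let a particle's memory (and hence the global best) be updated only by feasible points. The toy constrained problem and all parameter values below are our own choices, not the paper's:

```python
import random

def pso_feasible(n=40, iters=600, seed=1):
    """PSO for min x^2 + y^2 s.t. x + y >= 1, preserving feasibility."""
    random.seed(seed)
    feasible = lambda p: p[0] + p[1] >= 1.0
    f = lambda p: p[0] ** 2 + p[1] ** 2

    swarm, vel, pbest = [], [], []
    while len(swarm) < n:                      # start from feasible particles only
        p = [random.uniform(0.0, 2.0), random.uniform(0.0, 2.0)]
        if feasible(p):
            swarm.append(p)
            vel.append([0.0, 0.0])
            pbest.append(list(p))
    gbest = min(pbest, key=f)

    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - p[d])
                             + 1.5 * random.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            # memories keep only feasible solutions
            if feasible(p) and f(p) < f(pbest[i]):
                pbest[i] = list(p)
        gbest = min(pbest, key=f)
    return gbest
```

Infeasible excursions are allowed during flight but never remembered, so the swarm is steadily pulled back toward the feasible region; the true optimum here is (0.5, 0.5) on the constraint boundary.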

446 citations


Journal ArticleDOI
TL;DR: An improved interior-point-based algorithm for simultaneous strategies in dynamic optimization is developed, together with a reliable and efficient scheme that adjusts finite elements to track breakpoints in the optimal control profile and to ensure accurate state and control profiles.

Journal ArticleDOI
TL;DR: In this paper, a new method for computing rigorous upper bounds on the limit loads for one-, two-and three-dimensional continua is described, which is based on linear finite elements.
Abstract: A new method for computing rigorous upper bounds on the limit loads for one-, two- and three-dimensional continua is described. The formulation is based on linear finite elements, permits kinematically admissible velocity discontinuities at all interelement boundaries, and furnishes a kinematically admissible velocity field by solving a non-linear programming problem. In the latter, the objective function corresponds to the dissipated power (which is minimized) and the unknowns are subject to linear equality constraints as well as linear and non-linear inequality constraints. Provided the yield surface is convex, the optimization problem generated by the upper bound method is also convex and can be solved efficiently by applying a two-stage, quasi-Newton scheme to the corresponding Kuhn–Tucker optimality conditions. A key advantage of this strategy is that its iteration count is largely independent of the mesh size. Since the formulation permits non-linear constraints on the unknowns, no linearization of the yield surface is necessary and the modelling of three-dimensional geometries presents no special difficulties. The utility of the proposed upper bound method is illustrated by applying it to a number of two- and three-dimensional boundary value problems. For a variety of two-dimensional cases, the new scheme is up to two orders of magnitude faster than an equivalent linear programming scheme which uses yield surface linearization. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This work gives a pattern search method for nonlinearly constrained optimization that is an adaptation of a bound constrained augmented Lagrangian method first proposed by Conn, Gould, and Toint and is the first provably convergent direct search method for general nonlinear programming.
Abstract: We give a pattern search method for nonlinearly constrained optimization that is an adaptation of a bound constrained augmented Lagrangian method first proposed by Conn, Gould, and Toint [SIAM J. Numer. Anal., 28 (1991), pp. 545--572]. In the pattern search adaptation, we solve the bound constrained subproblem approximately using a pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of the subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. As far as we know, this is the first provably convergent direct search method for general nonlinear programming.
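The pattern-size stopping idea is visible even in the simplest coordinate pattern search: poll steps of size delta along each coordinate (projected onto the bounds), shrink delta after an unsuccessful poll, and stop when delta is small, with no derivatives anywhere. A sketch on a bound-constrained quadratic of our own choosing (this is the generic poll-and-shrink loop, not the authors' augmented Lagrangian algorithm):

```python
def pattern_search(f, x, lo, hi, delta=0.5, tol=1e-6):
    """Coordinate pattern search under bound constraints; the mesh size delta
    doubles as the derivative-free stopping criterion."""
    n = len(x)
    fx = f(x)
    while delta > tol:
        improved = False
        for i in range(n):
            for s in (+delta, -delta):
                y = list(x)
                y[i] = min(max(y[i] + s, lo[i]), hi[i])  # project onto bounds
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
        if not improved:
            delta *= 0.5           # unsuccessful poll: refine the mesh
    return x, fx
```

In the paper's method this loop runs on each bound-constrained augmented Lagrangian subproblem, with the final delta certifying inexact stationarity in place of a gradient norm.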

Journal ArticleDOI
TL;DR: A mechanism for proving global convergence in SQP--filter methods for nonlinear programming (NLP) is described, and the main point of interest is to demonstrate how convergence for NLP can be induced without forcing sufficient descent in a penalty-type merit function.
Abstract: A mechanism for proving global convergence in SQP--filter methods for nonlinear programming (NLP) is described. Such methods are characterized by their use of the dominance concept of multiobjective optimization, instead of a penalty parameter whose adjustment can be problematic. The main point of interest is to demonstrate how convergence for NLP can be induced without forcing sufficient descent in a penalty-type merit function. The proof relates to a prototypical algorithm, within which is allowed a range of specific algorithm choices associated with the Hessian matrix representation, updating the trust region radius, and feasibility restoration.

Proceedings ArticleDOI
25 Jul 2002
TL;DR: A fuzzy-GA method to resolve dispersed generator placement for distribution systems using the proposed genetic algorithm without any transformation for this nonlinear problem to a linear model or other methods.
Abstract: This paper presents a fuzzy-GA method to resolve dispersed generator placement for distribution systems. The problem formulation considers an objective to reduce power loss costs of distribution systems, together with constraints on the number and size of dispersed generators and on the deviation of the bus voltage. The main idea of solving fuzzy nonlinear goal programming is to transform the original objective function and constraints into equivalent multi-objective functions with fuzzy sets to evaluate their imprecise nature, and to solve the problem using the proposed genetic algorithm without transforming this nonlinear problem into a linear model or resorting to other methods. Moreover, this algorithm proposes a satisfying method to solve the constrained multiple objective problem. Analyzing the results and updating the expected value of each objective function allows the dispatcher to obtain the compromised or satisfied solution efficiently.

01 Jan 2002
TL;DR: In this article, a Chebyshev pseudospectral method is presented for directly solving a generic optimal control problem with state and control constraints, which yields more accurate results than those obtained from the traditional collocation methods.
Abstract: A Chebyshev pseudospectral method is presented in this paper for directly solving a generic optimal control problem with state and control constraints. This method employs Nth-degree Lagrange polynomial approximations for the state and control variables with the values of these variables at the Chebyshev-Gauss-Lobatto (CGL) points as the expansion coefficients. This process yields a nonlinear programming problem (NLP) with the state and control values at the CGL points as unknown NLP parameters. Numerical examples demonstrate that this method yields more accurate results than those obtained from the traditional collocation methods.

01 Jan 2002
TL;DR: SNOPT is a set of Fortran subroutines for minimizing a smooth function subject to constraints, which may include simple bounds on the variables, linear constraints and smooth nonlinear constraints; it is a general-purpose optimizer designed to find locally optimal solutions for models involving smooth nonlinear functions.
Abstract: SNOPT is a set of Fortran subroutines for minimizing a smooth function subject to constraints, which may include simple bounds on the variables, linear constraints and smooth nonlinear constraints. SNOPT is a general-purpose optimizer, designed to find locally optimal solutions for models involving smooth nonlinear functions. Such solutions are often more widely useful. (For example, local optima are often global solutions, and discontinuities in the function gradients can often be tolerated if they are not too close to an optimum.) Ideally, users should provide gradients; unknown components are estimated by finite differences. SNOPT incorporates a sequential quadratic programming (SQP) method that obtains search directions from a sequence of quadratic programming subproblems. Each QP subproblem minimizes a quadratic model of a certain Lagrangian function subject to a linearization of the constraints. An augmented Lagrangian merit function is reduced along each search direction to ensure convergence from any starting point. SNOPT is most efficient if only some of the variables enter nonlinearly, or if the number of active constraints (including simple bounds) is nearly as large as the number of variables. SNOPT requires relatively few evaluations of the problem functions. Hence it is especially effective if the objective or constraint functions are expensive to evaluate. The source code for SNOPT is suitable for any machine with a reasonable amount of memory and a Fortran compiler. SNOPT may be called from a driver program (typically in Fortran, C or MATLAB).
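The finite-difference fallback mentioned above is straightforward: each unknown gradient component costs one extra function evaluation. A generic sketch of forward differencing (our illustration, not SNOPT's actual code, which also scales the perturbation per variable):

```python
def fd_gradient(f, x, h=1e-6):
    """Estimate unknown gradient components by forward differences, as an NLP
    solver might when the user supplies no derivatives."""
    fx = f(x)
    grad = []
    for i in range(len(x)):
        y = list(x)
        y[i] += h            # perturb one coordinate at a time
        grad.append((f(y) - fx) / h)
    return grad
```

The truncation error is O(h), which is why solvers prefer user-supplied gradients when the functions are expensive or noisy.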

Journal ArticleDOI
TL;DR: A survey of algorithms and applications for the nonlinear knapsack problem, a nonlinear optimization problem with just one constraint, bounds on the variables, and a set of specially structured constraints such as generalized upper bounds (GUBs), is presented.

Journal ArticleDOI
TL;DR: In this paper, a new formulation for reactive power (VAr) planning problem including the allocation of flexible ac transmission systems (FACTS) devices is proposed, which directly takes into account the expected cost for voltage collapse and corrective controls, where the control effects by the devices to be installed are evaluated together with the other controls such as load shedding in contingencies to compute an optimal VAr planning.
Abstract: This paper proposes a new formulation for reactive power (VAr) planning problem including the allocation of flexible ac transmission systems (FACTS) devices. A new feature of the formulation lies in the treatment of security issues. Different from existing formulations, we directly take into account the expected cost for voltage collapse and corrective controls, where the control effects by the devices to be installed are evaluated together with the other controls such as load shedding in contingencies to compute an optimal VAr planning. The inclusion of load shedding into the formulation guarantees the feasibility of the problem. The optimal allocation by the proposed method implies that the investment is optimized, taking into account its effects on security in terms of the cost for power system operation under possible events occurring probabilistically. The problem is formulated as a mixed integer nonlinear programming problem of a large dimension. The Benders decomposition technique is tested where the original problem is decomposed into multiple subproblems. The numerical examinations are carried out using the AEP-14 bus system to demonstrate the effectiveness of the proposed method.
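The Benders loop alternates a master problem over the integer decisions with a subproblem that evaluates a fixed decision and returns a cut. A deliberately tiny toy of our own (min 0.6y + x s.t. x >= 1 - y, x >= 0, y binary), with the master solved by enumeration instead of a MIP solver:

```python
def subproblem(y):
    """Recourse problem: min x s.t. x >= 1 - y, x >= 0. Value and dual u."""
    if 1.0 - y > 0.0:
        return 1.0 - y, 1.0        # constraint x >= 1 - y is active, dual = 1
    return 0.0, 0.0

def benders():
    """Benders loop for: min 0.6*y + x s.t. x >= 1 - y, x >= 0, y in {0, 1}."""
    cuts, ub = [], float("inf")
    while True:
        def master_value(y):       # master: min 0.6*y + eta, eta >= u*(1 - y)
            eta = max([u * (1.0 - y) for u in cuts] + [0.0])
            return 0.6 * y + eta
        y = min((0, 1), key=master_value)    # toy master solved by enumeration
        lb = master_value(y)
        v, u = subproblem(y)
        ub = min(ub, 0.6 * y + v)            # feasible solution: upper bound
        if ub - lb <= 1e-9:
            return y, ub
        cuts.append(u)             # add the Benders optimality cut
```

Two iterations close the gap here; in the paper the same decomposition separates the investment decisions from the per-contingency operation subproblems.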

Journal ArticleDOI
TL;DR: This work examines a representative class of MDO problem formulations known as collaborative optimization, and discusses an alternative problem formulation, distributed analysis optimization, that yields a more tractable computational optimization problem.
Abstract: Analytical features of multidisciplinary optimization (MDO) problem formulations have significant practical consequences for the ability of nonlinear programming algorithms to solve the resulting computational optimization problems reliably and efficiently. We explore this important but frequently overlooked fact using the notion of disciplinary autonomy. Disciplinary autonomy is a desirable goal in formulating and solving MDO problems; however, the resulting system optimization problems are frequently difficult to solve. We illustrate the implications of MDO problem formulation for the tractability of the resulting design optimization problem by examining a representative class of MDO problem formulations known as collaborative optimization. We also discuss an alternative problem formulation, distributed analysis optimization, that yields a more tractable computational optimization problem.

Journal ArticleDOI
TL;DR: This paper considers simultaneous fitting of multiple curves and surfaces to 3D measured data captured as part of a reverse engineering process, where constraints exist between the parameters of the curves or surfaces.

Journal ArticleDOI
TL;DR: Some cases are described and analyzed in which special structure in large-scale linear and nonlinear optimization problems can be used, at very little cost, to obtain search directions from decomposed subproblems.
Abstract: The efficient solution of large-scale linear and nonlinear optimization problems may require exploiting any special structure in them in an efficient manner. We describe and analyze some cases in which this special structure can be used with very little cost to obtain search directions from decomposed subproblems. We also study how to correct these directions using (decomposable) preconditioned conjugate gradient methods to ensure local convergence in all cases. The choice of appropriate preconditioners results in a natural manner from the structure in the problem. Finally, we conduct computational experiments to compare the resulting procedures with direct methods.
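The correction step rests on preconditioned conjugate gradients, which in generic form takes the preconditioner as a callable standing in for whatever decomposed-subproblem operator the structure supplies. A self-contained sketch (the 2-by-2 system and Jacobi preconditioner in the test are ours, chosen only for illustration):

```python
def pcg(A, b, precond, tol=1e-16, max_iter=100):
    """Preconditioned conjugate gradients for A x = b, A symmetric positive
    definite; precond(r) applies M^{-1}, e.g. built from decomposed blocks."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                              # residual b - A x for x = 0
    z = precond(r)
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

The choice of `precond` is where the paper's point lives: a preconditioner that mirrors the problem's block structure makes each application cheap while keeping the iteration count low.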

Journal ArticleDOI
TL;DR: These methods exploit the fact that the dominant dynamics of highly dissipative PDE systems are low dimensional in nature, and lead to approximate optimization problems of significantly lower order than those obtained from spatial discretization using finite-difference and finite-element techniques, so they can be solved with significantly smaller computational demand.

Book ChapterDOI
28 May 2002
TL;DR: A general framework is presented which allows for a novel set of linear solutions to the pose estimation problem for both n points and n lines and an analysis of the sensitivity of the algorithms to image noise is presented.
Abstract: Estimation of camera pose from an image of n points or lines with known correspondence is a thoroughly studied problem in computer vision. Most solutions are iterative and depend on nonlinear optimization of some geometric constraint, either on the world coordinates or on the projections to the image plane. For real-time applications we are interested in linear or closed-form solutions free of initialization. We present a general framework which allows for a novel set of linear solutions to the pose estimation problem for both n points and n lines. We present a number of simulations which compare our results to two other recent linear algorithms as well as to iterative approaches. We conclude with tests on real imagery in an augmented reality setup. We also present an analysis of the sensitivity of our algorithms to image noise.

Journal ArticleDOI
Feng Cheng1, Markus Ettl1, Grace Lin1, David D. Yao1
TL;DR: A nonlinear optimization model with multiple constraints, reflecting the service levels offered to different market segments is developed, and an exact algorithm for the important case of demand in each market segment having (at least) one unique component is developed.
Abstract: This study is motivated by a process-reengineering problem in personal computer (PC) manufacturing, i.e., to move from a build-to-stock operation that is centered around end-product inventory towards a configure-to-order (CTO) operation that eliminates end-product inventory. In fact, CTO has made irrelevant the notion of preconfigured machine types and focuses instead on maintaining the right amount of inventory at the components. CTO appears to be the ideal operational model that provides both mass customization and a quick response time to order fulfillment. To quantify the inventory-service trade-off in the CTO environment, we develop a nonlinear optimization model with multiple constraints, reflecting the service levels offered to different market segments. To solve the optimization problem, we develop an exact algorithm for the important case of demand in each market segment having (at least) one unique component, and a greedy heuristic for the general (nonunique component) case. Furthermore, we show how to use sensitivity analysis, along with simulation, to fine-tune the solutions. The performance of the model and the solution approach is examined by extensive numerical studies on realistic problem data. We present the major findings in applying our model to study the inventory-service impacts in the reengineering of a PC manufacturing process.

Journal ArticleDOI
TL;DR: An equivalent multi-objective linear programming form of the problem is formulated in the proposed methodology using a fuzzy set theory approach, and the proposed solution procedure is used to solve numerical examples.

Journal ArticleDOI
TL;DR: This paper proposes a method which uses nonlinear optimization and is based on direct differentiations of value functions and is then applied to general switched linear quadratic (GSLQ) problems.
Abstract: This paper presents an approach for solving optimal control problems of switched systems. In general, in such problems one needs to find both optimal continuous inputs and optimal switching sequences, since the system dynamics vary before and after every switching instant. After formulating a general optimal control problem, we propose a two stage optimization methodology. Since many practical problems only concern optimization where the number of switchings and the sequence of active subsystems are given, we concentrate on such problems and propose a method which uses nonlinear optimization and is based on direct differentiations of value functions. The method is then applied to general switched linear quadratic (GSLQ) problems. Examples illustrate the results.

Journal ArticleDOI
TL;DR: A novel control algorithm, probabilistically constrained predictive control, is proposed to deal with the uncertainties of system disturbances; it is formulated under the assumption of a linear system and solved with a nonlinear programming solver.

Journal ArticleDOI
TL;DR: In this article, a new approach is proposed to solve a kind of nonlinear optimization problem under uncertainty, in which some dependent variables are to be constrained with a predefined probability.
Abstract: Optimization under uncertainty is considered necessary for robust process design and operation. In this work, a new approach is proposed to solve a class of nonlinear optimization problems under uncertainty, in which some dependent variables are to be constrained with a predefined probability. Such problems are called optimization under chance constraints. By exploiting the monotonicity of these variables with respect to one of the uncertain variables, the output feasible region is mapped to a region of the uncertain input variables. Thus, the probability of holding the output constraints can be obtained simply by integrating the probability density function of the multivariate uncertain variables. Collocation on finite elements is used for the numerical integration, through which sensitivities of the chance constraints can be computed as well. The proposed approach is applied to the optimization of two process engineering problems under various uncertainties.
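The paper integrates the input density deterministically via collocation; a cruder but common way to evaluate a chance constraint is Monte Carlo sampling, sketched here for a linear constraint with Gaussian uncertainty (the constraint g and all numbers are our example, not the authors' method):

```python
import math
import random

def chance_constraint_prob(x, samples=100_000, seed=42):
    """Monte Carlo estimate of P(g(x, xi) <= 0) for g = x + xi - 2, xi ~ N(0,1)."""
    random.seed(seed)
    hits = sum(1 for _ in range(samples)
               if x + random.gauss(0.0, 1.0) - 2.0 <= 0.0)
    return hits / samples

def normal_cdf(t):
    """Standard normal CDF; the analytic value here is P = normal_cdf(2 - x)."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

For this linear g the estimate can be checked against the closed form; the sampling error shrinks only as O(1/sqrt(samples)), which is one motivation for the paper's exact integration approach.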