
Showing papers on "Nonlinear programming published in 2011"


Book
01 Jan 2011
TL;DR: This book surveys classical and modern optimization techniques, including linear programming (Chapters 3 and 4) and nonlinear programming methods for one-dimensional minimization, unconstrained optimization, and constrained optimization (Chapters 5 through 7), along with geometric, dynamic, integer, and stochastic programming and modern metaheuristics.
Abstract: Preface. 1 Introduction to Optimization. 1.1 Introduction. 1.2 Historical Development. 1.3 Engineering Applications of Optimization. 1.4 Statement of an Optimization Problem. 1.5 Classification of Optimization Problems. 1.6 Optimization Techniques. 1.7 Engineering Optimization Literature. 1.8 Solution of Optimization Problems Using MATLAB. References and Bibliography. Review Questions. Problems. 2 Classical Optimization Techniques. 2.1 Introduction. 2.2 Single-Variable Optimization. 2.3 Multivariable Optimization with No Constraints. 2.4 Multivariable Optimization with Equality Constraints. 2.5 Multivariable Optimization with Inequality Constraints. 2.6 Convex Programming Problem. References and Bibliography. Review Questions. Problems. 3 Linear Programming I: Simplex Method. 3.1 Introduction. 3.2 Applications of Linear Programming. 3.3 Standard Form of a Linear Programming Problem. 3.4 Geometry of Linear Programming Problems. 3.5 Definitions and Theorems. 3.6 Solution of a System of Linear Simultaneous Equations. 3.7 Pivotal Reduction of a General System of Equations. 3.8 Motivation of the Simplex Method. 3.9 Simplex Algorithm. 3.10 Two Phases of the Simplex Method. 3.11 MATLAB Solution of LP Problems. References and Bibliography. Review Questions. Problems. 4 Linear Programming II: Additional Topics and Extensions. 4.1 Introduction. 4.2 Revised Simplex Method. 4.3 Duality in Linear Programming. 4.4 Decomposition Principle. 4.5 Sensitivity or Postoptimality Analysis. 4.6 Transportation Problem. 4.7 Karmarkar's Interior Method. 4.8 Quadratic Programming. 4.9 MATLAB Solutions. References and Bibliography. Review Questions. Problems. 5 Nonlinear Programming I: One-Dimensional Minimization Methods. 5.1 Introduction. 5.2 Unimodal Function. ELIMINATION METHODS. 5.3 Unrestricted Search. 5.4 Exhaustive Search. 5.5 Dichotomous Search. 5.6 Interval Halving Method. 5.7 Fibonacci Method. 5.8 Golden Section Method. 5.9 Comparison of Elimination Methods. INTERPOLATION METHODS. 5.10 Quadratic Interpolation Method. 5.11 Cubic Interpolation Method. 5.12 Direct Root Methods. 5.13 Practical Considerations. 5.14 MATLAB Solution of One-Dimensional Minimization Problems. References and Bibliography. Review Questions. Problems. 6 Nonlinear Programming II: Unconstrained Optimization Techniques. 6.1 Introduction. DIRECT SEARCH METHODS. 6.2 Random Search Methods. 6.3 Grid Search Method. 6.4 Univariate Method. 6.5 Pattern Directions. 6.6 Powell's Method. 6.7 Simplex Method. INDIRECT SEARCH (DESCENT) METHODS. 6.8 Gradient of a Function. 6.9 Steepest Descent (Cauchy) Method. 6.10 Conjugate Gradient (Fletcher-Reeves) Method. 6.11 Newton's Method. 6.12 Marquardt Method. 6.13 Quasi-Newton Methods. 6.14 Davidon-Fletcher-Powell Method. 6.15 Broyden-Fletcher-Goldfarb-Shanno Method. 6.16 Test Functions. 6.17 MATLAB Solution of Unconstrained Optimization Problems. References and Bibliography. Review Questions. Problems. 7 Nonlinear Programming III: Constrained Optimization Techniques. 7.1 Introduction. 7.2 Characteristics of a Constrained Problem. DIRECT METHODS. 7.3 Random Search Methods. 7.4 Complex Method. 7.5 Sequential Linear Programming. 7.6 Basic Approach in the Methods of Feasible Directions. 7.7 Zoutendijk's Method of Feasible Directions. 7.8 Rosen's Gradient Projection Method. 7.9 Generalized Reduced Gradient Method. 7.10 Sequential Quadratic Programming. INDIRECT METHODS. 7.11 Transformation Techniques. 7.12 Basic Approach of the Penalty Function Method. 7.13 Interior Penalty Function Method. 
7.14 Convex Programming Problem. 7.15 Exterior Penalty Function Method. 7.16 Extrapolation Techniques in the Interior Penalty Function Method. 7.17 Extended Interior Penalty Function Methods. 7.18 Penalty Function Method for Problems with Mixed Equality and Inequality Constraints. 7.19 Penalty Function Method for Parametric Constraints. 7.20 Augmented Lagrange Multiplier Method. 7.21 Checking the Convergence of Constrained Optimization Problems. 7.22 Test Problems. 7.23 MATLAB Solution of Constrained Optimization Problems. References and Bibliography. Review Questions. Problems. 8 Geometric Programming. 8.1 Introduction. 8.2 Posynomial. 8.3 Unconstrained Minimization Problem. 8.4 Solution of an Unconstrained Geometric Programming Problem Using Differential Calculus. 8.5 Solution of an Unconstrained Geometric Programming Problem Using Arithmetic-Geometric Inequality. 8.6 Primal-Dual Relationship and Sufficiency Conditions in the Unconstrained Case. 8.7 Constrained Minimization. 8.8 Solution of a Constrained Geometric Programming Problem. 8.9 Primal and Dual Programs in the Case of Less-Than Inequalities. 8.10 Geometric Programming with Mixed Inequality Constraints. 8.11 Complementary Geometric Programming. 8.12 Applications of Geometric Programming. References and Bibliography. Review Questions. Problems. 9 Dynamic Programming. 9.1 Introduction. 9.2 Multistage Decision Processes. 9.3 Concept of Suboptimization and Principle of Optimality. 9.4 Computational Procedure in Dynamic Programming. 9.5 Example Illustrating the Calculus Method of Solution. 9.6 Example Illustrating the Tabular Method of Solution. 9.7 Conversion of a Final Value Problem into an Initial Value Problem. 9.8 Linear Programming as a Case of Dynamic Programming. 9.9 Continuous Dynamic Programming. 9.10 Additional Applications. References and Bibliography. Review Questions. Problems. 10 Integer Programming. 10.1 Introduction. INTEGER LINEAR PROGRAMMING. 10.2 Graphical Representation. 10.3 Gomory's Cutting Plane Method. 10.4 Balas' Algorithm for Zero-One Programming Problems. INTEGER NONLINEAR PROGRAMMING. 10.5 Integer Polynomial Programming. 10.6 Branch-and-Bound Method. 10.7 Sequential Linear Discrete Programming. 10.8 Generalized Penalty Function Method. 10.9 Solution of Binary Programming Problems Using MATLAB. References and Bibliography. Review Questions. Problems. 11 Stochastic Programming. 11.1 Introduction. 11.2 Basic Concepts of Probability Theory. 11.3 Stochastic Linear Programming. 11.4 Stochastic Nonlinear Programming. 11.5 Stochastic Geometric Programming. References and Bibliography. Review Questions. Problems. 12 Optimal Control and Optimality Criteria Methods. 12.1 Introduction. 12.2 Calculus of Variations. 12.3 Optimal Control Theory. 12.4 Optimality Criteria Methods. References and Bibliography. Review Questions. Problems. 13 Modern Methods of Optimization. 13.1 Introduction. 13.2 Genetic Algorithms. 13.3 Simulated Annealing. 13.4 Particle Swarm Optimization. 13.5 Ant Colony Optimization. 13.6 Optimization of Fuzzy Systems. 13.7 Neural-Network-Based Optimization. References and Bibliography. Review Questions. Problems. 14 Practical Aspects of Optimization. 14.1 Introduction. 14.2 Reduction of Size of an Optimization Problem. 14.3 Fast Reanalysis Techniques. 14.4 Derivatives of Static Displacements and Stresses. 14.5 Derivatives of Eigenvalues and Eigenvectors. 14.6 Derivatives of Transient Response. 14.7 Sensitivity of Optimum Solution to Problem Parameters. 14.8 Multilevel Optimization.
14.9 Parallel Processing. 14.10 Multiobjective Optimization. 14.11 Solution of Multiobjective Problems Using MATLAB. References and Bibliography. Review Questions. Problems. A Convex and Concave Functions. B Some Computational Aspects of Optimization. B.1 Choice of Method. B.2 Comparison of Unconstrained Methods. B.3 Comparison of Constrained Methods. B.4 Availability of Computer Programs. B.5 Scaling of Design Variables and Constraints. B.6 Computer Programs for Modern Methods of Optimization. References and Bibliography. C Introduction to MATLAB(R). C.1 Features and Special Characters. C.2 Defining Matrices in MATLAB. C.3 Creating m-Files. C.4 Optimization Toolbox. Answers to Selected Problems. Index.

3,283 citations


Journal ArticleDOI
TL;DR: NOMAD is software that implements the Mesh Adaptive Direct Search algorithm for blackbox optimization under general nonlinear constraints and aims for the best possible solution with a small number of evaluations.
Abstract: NOMAD is software that implements the Mesh Adaptive Direct Search (MADS) algorithm for blackbox optimization under general nonlinear constraints. Blackbox optimization is about optimizing functions that are usually given as costly programs with no derivative information and no function values returned for a significant number of calls attempted. NOMAD is designed for such problems and aims for the best possible solution with a small number of evaluations. The objective of this article is to describe the underlying algorithm, the software’s functionalities, and its implementation.
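
As a rough illustration of the derivative-free setting NOMAD targets, the sketch below implements a much simpler compass (coordinate) search with an extreme-barrier treatment of constraints: infeasible points are discarded by assigning them an infinite value, and the step size shrinks when a poll fails. This is not the MADS algorithm itself, and all function names and parameter values are illustrative.

```python
import numpy as np

def compass_search(f, constraints, x0, step=1.0, tol=1e-6, max_evals=500):
    """Minimize a blackbox function f subject to constraints g(x) <= 0 using a
    simple compass (coordinate) search with an extreme barrier: infeasible
    points get the value +inf.  A toy stand-in for MADS, not NOMAD itself."""
    def barrier(x):
        if any(g(x) > 0 for g in constraints):
            return np.inf
        return f(x)

    x = np.asarray(x0, dtype=float)
    fx, evals, n = barrier(x), 1, len(x)
    while step > tol and evals < max_evals:
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):   # poll the 2n compass directions
            trial = x + step * d
            ft = barrier(trial)
            evals += 1
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5                                # refine the "mesh" when the poll fails
    return x, fx, evals

# Example: smooth objective with a nonlinear (disk) constraint
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
g = [lambda x: x[0] ** 2 + x[1] ** 2 - 4.0]            # feasible set: disk of radius 2
print(compass_search(f, g, x0=[0.0, 0.0]))
```

MADS differs from this toy version in that its poll directions become asymptotically dense on a mesh, which is what gives NOMAD its convergence guarantees on nonsmooth constrained problems.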

665 citations


Journal ArticleDOI
TL;DR: A new optimization approach that employs an artificial bee colony (ABC) algorithm to determine the optimal DG-unit's size, power factor, and location in order to minimize the total system real power loss.
Abstract: Distributed generation (DG) has been utilized in some electric power networks. Power loss reduction, environmental friendliness, voltage improvement, postponement of system upgrading, and increasing reliability are some advantages of DG-unit application. This paper presents a new optimization approach that employs an artificial bee colony (ABC) algorithm to determine the optimal DG-unit's size, power factor, and location in order to minimize the total system real power loss. The ABC algorithm is a new metaheuristic, population-based optimization technique inspired by the intelligent foraging behavior of the honeybee swarm. To reveal the validity of the ABC algorithm, sample radial distribution feeder systems are examined with different test cases. Furthermore, the results obtained by the proposed ABC algorithm are compared with those attained via other methods. The outcomes verify that the ABC algorithm is efficient, robust, and capable of handling mixed integer nonlinear optimization problems. The ABC algorithm has only two parameters to be tuned. Therefore, the updating of the two parameters towards the most effective values has a higher likelihood of success than in other competing metaheuristic methods.
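
The paper couples the ABC search with a distribution power-flow model to evaluate DG size, power factor, and location; reproducing that requires feeder data, so the sketch below only illustrates the basic ABC mechanics (employed, onlooker, and scout phases) on a generic box-constrained test function. All parameter values and names are illustrative.

```python
import numpy as np

def abc_minimize(f, bounds, n_sources=20, limit=30, max_iter=200, seed=0):
    """Minimal artificial bee colony (ABC) sketch for box-constrained minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, (n_sources, dim))          # food sources
    cost = np.apply_along_axis(f, 1, X)
    trials = np.zeros(n_sources, dtype=int)

    def neighbour(i):
        k = rng.choice([s for s in range(n_sources) if s != i])
        j = rng.integers(dim)
        v = X[i].copy()
        v[j] += rng.uniform(-1.0, 1.0) * (X[i, j] - X[k, j])
        return np.clip(v, lo, hi)

    def greedy(i, v):
        cv = f(v)
        if cv < cost[i]:
            X[i], cost[i], trials[i] = v, cv, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_sources):                     # employed-bee phase
            greedy(i, neighbour(i))
        fit = 1.0 / (1.0 + cost - cost.min())          # onlooker selection probabilities
        prob = fit / fit.sum()
        for _ in range(n_sources):                     # onlooker-bee phase
            i = rng.choice(n_sources, p=prob)
            greedy(i, neighbour(i))
        for i in np.where(trials > limit)[0]:          # scout-bee phase: abandon stale sources
            X[i] = rng.uniform(lo, hi)
            cost[i], trials[i] = f(X[i]), 0

    best = cost.argmin()
    return X[best], cost[best]

# Example: 5-dimensional sphere function
print(abc_minimize(lambda x: float(np.sum(x ** 2)), bounds=[(-5.0, 5.0)] * 5))
```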

652 citations


Journal ArticleDOI
TL;DR: In this article, a nonlinear mixed-integer programming model with inter-temporal constraints is proposed to solve the problem of virtual power plant (VPP) bidding in a joint market of energy and spinning reserve service.
Abstract: This paper addresses the bidding problem faced by a virtual power plant (VPP) in a joint market of energy and spinning reserve service. The proposed bidding strategy is a non-equilibrium model based on the deterministic price-based unit commitment (PBUC) which takes the supply-demand balancing constraint and security constraints of the VPP itself into account. The presented model creates a single operating profile from a composite of the parameters characterizing each distributed energy resource (DER) that is a component of the VPP, and incorporates network constraints into its description of the capabilities of the portfolio. The presented model is a nonlinear mixed-integer program with inter-temporal constraints and is solved by a genetic algorithm (GA).

433 citations


BookDOI
01 Dec 2011
TL;DR: Mixed-integer nonlinear programming (MINLP) problems combine the numerical difficulties of handling nonlinear functions with the challenge of optimizing in the context of nonconvex functions and discrete variables.
Abstract: Many engineering, operations, and scientific applications include a mixture of discrete and continuous decision variables and nonlinear relationships involving the decision variables that have a pronounced effect on the set of feasible and optimal solutions. Mixed-integer nonlinear programming (MINLP) problems combine the numerical difficulties of handling nonlinear functions with the challenge of optimizing in the context of nonconvex functions and discrete variables. MINLP is one of the most flexible modeling paradigms available for optimization; but because its scope is so broad, in the most general cases it is hopelessly intractable. Nonetheless, an expanding body of researchers and practitioners including chemical engineers, operations researchers, industrial engineers, mechanical engineers, economists, statisticians, computer scientists, operations managers, and mathematical programmers are interested in solving large-scale MINLP instances.
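
As a toy illustration of the discrete/continuous coupling described above, the sketch below solves a tiny convex MINLP by enumerating its single integer variable and solving each continuous subproblem with SciPy's SLSQP solver. Real MINLP solvers rely on branch-and-bound, outer approximation, or related machinery rather than brute-force enumeration; the model and all numbers here are invented.

```python
from scipy.optimize import minimize

# Toy MINLP:  min (x - 2.5)^2 + (y - 1.7)^2
#             s.t. x + y <= 4,  x in {0, 1, 2, 3},  y >= 0 continuous
def solve_nlp_for_fixed_x(x_int):
    """Continuous NLP subproblem in y for a fixed integer value of x."""
    obj = lambda y: (x_int - 2.5) ** 2 + (y[0] - 1.7) ** 2
    cons = [{"type": "ineq", "fun": lambda y: 4.0 - x_int - y[0]}]  # x + y <= 4
    res = minimize(obj, x0=[0.0], bounds=[(0.0, None)], constraints=cons, method="SLSQP")
    return res.fun, res.x[0]

# Enumerate the integer variable and keep the best subproblem solution
best = min((solve_nlp_for_fixed_x(x) + (x,) for x in range(4)), key=lambda t: t[0])
print("optimal value %.4f at x=%d, y=%.4f" % (best[0], best[2], best[1]))
```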

323 citations


Book
28 Jul 2011
TL;DR: The author covers adjoint-based derivative computation and the efficient solution of Newton systems by multigrid and preconditioned iterative methods.
Abstract: Semismooth Newton methods are a modern class of remarkably powerful and versatile algorithms for solving constrained optimization problems with partial differential equations (PDEs), variational inequalities, and related problems. This book provides a comprehensive presentation of these methods in function spaces, striking a balance between thoroughly developed theory and numerical applications. Although largely self-contained, the book also covers recent developments in the field, such as state-constrained problems and offers new material on topics such as improved mesh independence results. The theory and methods are applied to a range of practically important problems, including optimal control of semilinear elliptic differential equations, obstacle problems, and flow control of instationary Navier-Stokes fluids. In addition, the author covers adjoint-based derivative computation and the efficient solution of Newton systems by multigrid and preconditioned iterative methods. Audience: This book is appropriate for researchers and practitioners in PDE-constrained optimization, nonlinear optimization, and numerical analysis, as well as engineers interested in the current theory and methods for solving variational inequalities. It is also suitable as a text for an advanced graduate-level course in the aforementioned topics or applied functional analysis. Contents: Notation; Preface; Chapter One: Introduction; Chapter Two: Elements of Finite-Dimensional Nonsmooth Analysis; Chapter Three: Newton Methods for Semismooth Operator Equations; Chapter Four: Smoothing Steps and Regularity Conditions; Chapter Five: Variational Inequalities and Mixed Problems; Chapter Six: Mesh Independence; Chapter Seven: Trust-Region Globalization; Chapter Eight: State-Constrained and Related Problems; Chapter Nine: Several Applications; Chapter Ten: Optimal Control of Incompressible Navier-Stokes Flow; Chapter Eleven: Optimal Control of Compressible Navier-Stokes Flow; Appendix; Bibliography; Index.
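
A finite-dimensional taste of the semismooth Newton idea is the primal-dual active set method for a bound-constrained quadratic program, which is known to coincide with a semismooth Newton iteration applied to a nonsmooth reformulation of the complementarity conditions. The sketch below applies it to a small obstacle-type problem; the discretization and data are invented, and the book treats the much harder function-space setting.

```python
import numpy as np

def primal_dual_active_set(A, b, c=1.0, max_iter=50):
    """Semismooth Newton / primal-dual active set method for the QP
    min 0.5*x'Ax - b'x  s.t.  x >= 0, written via the nonsmooth system
    A x - b - lam = 0,  min(lam, c*x) = 0 (componentwise)."""
    n = len(b)
    x, lam = np.zeros(n), np.zeros(n)
    active_prev = None
    for _ in range(max_iter):
        active = lam - c * x > 0                 # components predicted to sit on the bound x_i = 0
        inactive = ~active
        x = np.zeros(n)
        if inactive.any():                       # reduced linear system (lam = 0 on inactive set)
            x[inactive] = np.linalg.solve(A[np.ix_(inactive, inactive)], b[inactive])
        lam = A @ x - b
        lam[inactive] = 0.0
        if active_prev is not None and np.array_equal(active, active_prev):
            break                                # active set settled: KKT point found
        active_prev = active
    return x, lam

# Example: obstacle-type problem with a 1D Laplacian and a sign-changing load
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h**2
t = np.linspace(h, 1 - h, n)
b = 50 * np.sin(4 * np.pi * t)                   # pushes the solution onto the bound x >= 0 in places
x, lam = primal_dual_active_set(A, b)
print("min(x) =", x.min(), " active components:", int((x < 1e-12).sum()))
```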

314 citations


Journal ArticleDOI
TL;DR: A novel method for solving the three-phase DOPF model by transforming the mixed-integer nonlinear programming problem into a nonlinear programming problem is proposed, which reduces the computational burden and facilitates its practical implementation and application.
Abstract: This paper presents a generic and comprehensive distribution optimal power flow (DOPF) model that can be used by local distribution companies (LDCs) to integrate their distribution system feeders into a Smart Grid. The proposed three-phase DOPF framework incorporates detailed modeling of distribution system components and considers various operating objectives. Phase specific and voltage dependent modeling of customer loads in the three-phase DOPF model allows LDC operators to determine realistic operating strategies that can improve the overall feeder efficiency. The proposed distribution system operation objective is based on the minimization of the energy drawn from the substation while seeking to minimize the number of switching operations of load tap changers and capacitors. A novel method for solving the three-phase DOPF model by transforming the mixed-integer nonlinear programming problem to a nonlinear programming problem is proposed which reduces the computational burden and facilitates its practical implementation and application. Two practical case studies, including a real distribution feeder test case, are presented to demonstrate the features of the proposed methodology. The results illustrate the benefits of the proposed DOPF in terms of reducing energy losses while limiting the number of switching operations.

302 citations


Book ChapterDOI
12 Jun 2011
TL;DR: A nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization (NDWPSO) is presented to address the tendency of linearly decreasing inertia weight PSO (LDWPSO) to get stuck at local minima and converge slowly on complex nonlinear optimization problems.
Abstract: A nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization (NDWPSO) is presented to address the tendency of the linearly decreasing inertia weight PSO (LDWPSO) to get stuck at local minima and to converge slowly on complex nonlinear optimization problems. The rate of particle evolution change is introduced in this new algorithm, and the inertia weight is formulated as a function of this factor according to its impact on the search performance of the swarm. In each iteration, the weight is changed dynamically based on the current rate of evolutionary change, which provides the algorithm with effective dynamic adaptability. The LDWPSO and NDWPSO algorithms were tested on three benchmark functions. The experiments show that the convergence speed of NDWPSO is significantly superior to that of LDWPSO, and the convergence accuracy is improved.
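
The paper's NDWPSO derives the inertia weight from an evolution-rate factor measured during the run; that exact rule is not reproduced here. The sketch below shows where such a rule plugs into standard PSO by using a generic nonlinearly (quadratically) decreasing inertia weight instead; all coefficients are illustrative.

```python
import numpy as np

def pso_nonlinear_inertia(f, bounds, n_particles=30, max_iter=200, seed=0):
    """Plain PSO with a nonlinearly decreasing inertia weight (a generic
    stand-in for the paper's adaptive evolution-rate-based schedule)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0
    for k in range(max_iter):
        # Nonlinear (quadratic) inertia-weight schedule instead of a linear ramp
        w = w_min + (w_max - w_min) * (1 - k / max_iter) ** 2
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: Rastrigin function in 10 dimensions
rastrigin = lambda z: 10 * len(z) + float(np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z)))
print(pso_nonlinear_inertia(rastrigin, bounds=[(-5.12, 5.12)] * 10))
```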

267 citations


Journal ArticleDOI
TL;DR: Novel methods are presented for regularized MRI reconstruction from undersampled sensitivity-encoded data (SENSE reconstruction) using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems.
Abstract: Magnetic resonance image (MRI) reconstruction using SENSitivity Encoding (SENSE) requires regularization to suppress noise and aliasing effects. Edge-preserving and sparsity-based regularization criteria can improve image quality, but they demand computation-intensive nonlinear optimization. In this paper, we present novel methods for regularized MRI reconstruction from undersampled sensitivity-encoded data (SENSE reconstruction) using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems. We first formulate regularized SENSE reconstruction as an unconstrained optimization task and then convert it to a set of (equivalent) constrained problems using variable splitting. We then attack these constrained versions in an AL framework using an alternating minimization method, leading to algorithms that can be implemented easily. The proposed methods are applicable to a general class of regularizers that includes popular edge-preserving (e.g., total-variation) and sparsity-promoting (e.g., $\ell_1$-norm of wavelet coefficients) criteria and combinations thereof. Numerical experiments with synthetic and in vivo human data illustrate that the proposed AL algorithms converge faster than both general-purpose optimization algorithms such as nonlinear conjugate gradient (NCG) and state-of-the-art MFISTA.
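
The paper's AL scheme is tailored to SENSE reconstruction with edge-preserving and wavelet-sparsity regularizers; as a simpler stand-in that shows the same variable-splitting and alternating-minimization structure, the sketch below applies an augmented Lagrangian (ADMM) iteration to an $\ell_1$-regularized least-squares problem. Problem sizes and parameters are invented.

```python
import numpy as np

def admm_l1_ls(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Variable splitting + augmented Lagrangian (ADMM) for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1, split as x / z with x = z."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))             # factor once, reuse every iteration
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))     # quadratic (data-fit) subproblem
        z = soft(x + u, lam / rho)                            # shrinkage (regularizer) subproblem
        u = u + x - z                                         # multiplier (dual) update
    return z

# Example: sparse recovery from noisy measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = admm_l1_ls(A, b, lam=0.1)
print("nonzeros recovered:", int((np.abs(x_hat) > 1e-3).sum()))
```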

244 citations


Journal ArticleDOI
Deng-Feng Li
01 Jun 2011
TL;DR: A closeness coefficient based nonlinear programming method for solving multiattribute decision making problems in which ratings of alternatives on attributes are expressed using interval-valued intuitionistic fuzzy (IVIF) sets and preference information on attributes is incomplete is developed.
Abstract: The aim of this paper is to develop a closeness coefficient based nonlinear programming method for solving multiattribute decision making problems in which ratings of alternatives on attributes are expressed using interval-valued intuitionistic fuzzy (IVIF) sets and preference information on attributes is incomplete. In this methodology, nonlinear programming models are constructed on the concept of the closeness coefficient, which is defined as a ratio of the square of the weighted Euclidean distance between an alternative and the IVIF negative ideal solution (IVIFNIS) to the sum of the squares of the weighted Euclidean distances between the alternative and the IVIF positive ideal solution (IVIFPIS) as well as the IVIFNIS. Simpler nonlinear programming models are deduced to calculate closeness intuitionistic fuzzy sets of alternatives to the IVIFPIS, which are used to estimate the optimal degrees of membership and thereby generate the ranking order of the alternatives. The derived auxiliary nonlinear programming models are shown to be flexible with different information structures and decision environments. The proposed method is validated and compared with other methods. A real example is examined to demonstrate the applicability of the proposed method.
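
The paper's models operate on interval-valued intuitionistic fuzzy ratings and treat the attribute weights as unknowns of a nonlinear program; as a much simpler crisp illustration of the closeness coefficient itself, the sketch below computes the ratio of squared weighted Euclidean distances to the negative and positive ideal solutions for a small, invented decision matrix with fixed weights.

```python
import numpy as np

# Crisp illustration of the closeness coefficient:
#   CC_i = d(a_i, NIS)^2 / ( d(a_i, PIS)^2 + d(a_i, NIS)^2 )
# with weighted Euclidean distances.  The paper instead works with IVIF
# ratings and optimizes the weights; all numbers here are invented.
ratings = np.array([[0.7, 0.5, 0.8],      # alternative 1 on three benefit attributes
                    [0.6, 0.9, 0.4],      # alternative 2
                    [0.8, 0.6, 0.6]])     # alternative 3
w = np.array([0.5, 0.3, 0.2])             # fixed attribute weights (sum to 1)

pis = ratings.max(axis=0)                 # positive ideal solution (benefit attributes)
nis = ratings.min(axis=0)                 # negative ideal solution
d2_pis = ((ratings - pis) ** 2 * w).sum(axis=1)
d2_nis = ((ratings - nis) ** 2 * w).sum(axis=1)
cc = d2_nis / (d2_pis + d2_nis)           # larger closeness coefficient = better alternative

for i, c in enumerate(cc, start=1):
    print(f"alternative {i}: closeness coefficient = {c:.3f}")
print("ranking (best first):", (1 + np.argsort(-cc)).tolist())
```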

233 citations


Journal ArticleDOI
TL;DR: In this paper, a variable-order adaptive pseudospectral method is presented for solving optimal control problems; the method adjusts both the mesh spacing and the degree of the polynomial on each mesh interval until a specified error tolerance is satisfied.
Abstract: A variable-order adaptive pseudospectral method is presented for solving optimal control problems. The method developed in this paper adjusts both the mesh spacing and the degree of the polynomial on each mesh interval until a specified error tolerance is satisfied. In regions of relatively high curvature, convergence is achieved by refining the mesh, while in regions of relatively low curvature, convergence is achieved by increasing the degree of the polynomial. An efficient iterative method is then described for accurately solving a general nonlinear optimal control problem. Using four examples, the adaptive pseudospectral method described in this paper is shown to be more efficient than either a global pseudospectral method or a fixed-order method.
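
The method above adapts both the mesh and the polynomial degree; the sketch below shows only the underlying transcription idea, converting a double-integrator optimal control problem into a nonlinear program with fixed-order trapezoidal collocation and solving it with a general-purpose NLP solver. The example problem and all tolerances are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Direct transcription of a double-integrator optimal control problem:
#   minimize  int_0^1 u(t)^2 dt
#   s.t.      x1' = x2,  x2' = u,  x(0) = (0, 0),  x(1) = (1, 0)
# using fixed-order trapezoidal collocation (the paper adapts mesh and degree).
N = 20                      # number of intervals
h = 1.0 / N

def unpack(z):
    x1, x2, u = z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):]
    return x1, x2, u

def objective(z):
    _, _, u = unpack(z)
    return h * np.sum((u[:-1] ** 2 + u[1:] ** 2) / 2)       # trapezoidal quadrature of u^2

def defects(z):
    x1, x2, u = unpack(z)
    d1 = x1[1:] - x1[:-1] - h * (x2[:-1] + x2[1:]) / 2      # collocation defects for x1' = x2
    d2 = x2[1:] - x2[:-1] - h * (u[:-1] + u[1:]) / 2        # collocation defects for x2' = u
    bc = [x1[0], x2[0], x1[-1] - 1.0, x2[-1]]               # boundary conditions
    return np.concatenate([d1, d2, bc])

z0 = np.zeros(3 * (N + 1))
res = minimize(objective, z0, constraints=[{"type": "eq", "fun": defects}], method="SLSQP")
x1, x2, u = unpack(res.x)
print("discretized optimal cost ~", round(res.fun, 3), "(analytic optimum is 12)")
print("u(0) ~", round(u[0], 2), " u(1) ~", round(u[-1], 2))
```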

Journal ArticleDOI
TL;DR: This paper considers the application of gradient-based distributed algorithms to an approximation of the multiuser problem, focusing on instances where user decisions are coupled both in the objective and through nonlinear coupling constraints.
Abstract: Traditionally, a multiuser problem is a constrained optimization problem characterized by a set of users, an objective given by a sum of user-specific utility functions, and a collection of linear constraints that couple the user decisions. The users do not share the information about their utilities, but do communicate values of their decision variables. The multiuser problem is to maximize the sum of the user-specific utility functions subject to the coupling constraints, while abiding by the informational requirements of each user. In this paper, we focus on generalizations of convex multiuser optimization problems where the objective and constraints are not separable by user and instead consider instances where user decisions are coupled, both in the objective and through nonlinear coupling constraints. To solve this problem, we consider the application of gradient-based distributed algorithms on an approximation of the multiuser problem. Such an approximation is obtained through a Tikhonov regularization.
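
As a small illustration of the kind of regularized, gradient-based scheme the paper analyzes (not its exact algorithm), the sketch below runs a primal-dual projected gradient iteration on a Tikhonov-regularized two-user problem with a coupled objective and a shared budget constraint. The utilities, regularization parameter, and step sizes are all invented.

```python
import numpy as np

# Distributed primal-dual gradient sketch on a Tikhonov-regularized two-user
# problem with a coupled (non-separable) objective and a shared constraint:
#   maximize  U(x) = log(1 + x1 + 0.5*x2) + log(1 + x2 + 0.5*x1)
#   subject to x1 + x2 <= 1,  x >= 0
eps, alpha, lam = 0.01, 0.05, 0.0          # Tikhonov parameter, step size, coupling price
x = np.array([0.0, 0.0])

def grad_U(x):
    a = 1.0 + x[0] + 0.5 * x[1]
    b = 1.0 + x[1] + 0.5 * x[0]
    return np.array([1.0 / a + 0.5 / b, 0.5 / a + 1.0 / b])

for _ in range(2000):
    # Each user updates its own decision using only the shared price lam
    x = np.maximum(0.0, x + alpha * (grad_U(x) - lam - eps * x))
    # The constraint agent updates the price on the coupling constraint
    lam = max(0.0, lam + alpha * (x.sum() - 1.0))

print("x =", np.round(x, 3), " price =", round(lam, 3), " budget used =", round(x.sum(), 3))
```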

Journal ArticleDOI
TL;DR: It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution of the original optimization problem by using the Lagrangian saddle-point theorem.
Abstract: A recurrent neural network is proposed for solving non-smooth convex optimization problems with convex inequality and linear equality constraints. Since the objective function and inequality constraints may not be smooth, Clarke's generalized gradients of the objective function and inequality constraints are employed to describe the dynamics of the proposed neural network. It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution of the original optimization problem by using the Lagrangian saddle-point theorem. Under weak conditions, the proposed neural network is proved to be stable, and the state of the neural network converges to one of its equilibrium points. Compared with the existing neural network models for non-smooth optimization problems, the proposed neural network can deal with a larger class of constraints and is not based on the penalty method. Finally, the proposed neural network is used to solve the identification problem of genetic regulatory networks, which can be transformed into a non-smooth convex optimization problem. The simulation results show satisfactory identification accuracy, which demonstrates the effectiveness and efficiency of the proposed approach.

Journal ArticleDOI
TL;DR: Approximate Karush–Kuhn–Tucker and approximate gradient projection conditions are analysed and implications between different conditions and counter-examples will be shown.
Abstract: Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Approximate KKT and Approximate Gradient Projection conditions are analyzed in this work. These conditions are not necessarily equivalent. Implications between different conditions and counter-examples will be shown. Algorithmic consequences will be discussed.
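
Sequential optimality conditions are checked along the iterates a solver produces; the sketch below computes the stationarity, complementarity, and feasibility residuals one might test against a tolerance at a given primal-dual pair, using one common complementarity measure. The example problem, iterate, and multiplier are invented and are not from the paper.

```python
import numpy as np

# Approximate-KKT-style residuals at a primal-dual pair (x, lam) for
#   min f(x)  s.t.  g(x) <= 0.
# A solver would declare approximate optimality when all residuals are small.
f_grad = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] - 2)])   # f = (x1-1)^2 + (x2-2)^2
g      = lambda x: np.array([x[0] + x[1] - 2])                  # g = x1 + x2 - 2 <= 0
g_jac  = lambda x: np.array([[1.0, 1.0]])

def akkt_residuals(x, lam):
    stationarity = np.linalg.norm(f_grad(x) + g_jac(x).T @ lam, ord=np.inf)
    complementarity = np.linalg.norm(np.minimum(-g(x), lam), ord=np.inf)  # one common measure
    feasibility = np.linalg.norm(np.maximum(g(x), 0.0), ord=np.inf)
    return stationarity, complementarity, feasibility

# Iterate close to the true solution x* = (0.5, 1.5) with multiplier lam* = 1
x_k, lam_k = np.array([0.501, 1.499]), np.array([0.998])
print("residuals (stationarity, complementarity, feasibility):", akkt_residuals(x_k, lam_k))
```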

Book
19 Dec 2011
TL;DR: In this paper, the authors propose a general discretization method for ODEs and DAEs, based on local minimum principles for index-1 and index-2 problems.
Abstract: 1 Introduction 2 Basics from Functional Analysis 2.1 Vector Spaces 2.2 Mappings, Dual Spaces, and Properties 2.3 Function Spaces 2.4 Stieltjes Integral 2.5 Set Arithmetic 2.6 Separation Theorems 2.7 Derivatives 2.8 Variational Equalities and Inequalities 3 Infinite and Finite Dimensional Optimization Problems 3.1 Problem Classes 3.2 Existence of a Solution 3.3 Conical Approximation of Sets 3.4 First Order Necessary Conditions of Fritz-John Type 3.5 Constraint Qualifications 3.6 Necessary and Sufficient Conditions in Finite Dimensions 3.7 Perturbed Nonlinear Optimization Problems 3.8 Numerical Methods 3.9 Duality 3.10 Mixed-Integer Nonlinear Programs and Branch&Bound 4 Local Minimum Principles 4.1 Local Minimum Principles for Index-2 Problems 4.2 Local Minimum Principles for Index-1 Problems 5 Discretization Methods for ODEs and DAEs 5.1 General Discretization Theory 5.2 Backward Differentiation Formulae (BDF) 5.3 Implicit Runge-Kutta Methods 5.4 Linearized Implicit Runge-Kutta Methods 6 Discretization of Optimal Control Problems 6.1 Direct Discretization Methods 6.2 Calculation of Gradients 6.3 Numerical Example 6.4 Discrete Minimum Principle and Approximation of Adjoints 6.5 Convergence 7 Selected Applications and Extensions 7.1 Mixed-Integer Optimal Control 7.2 Open-Loop-Real-Time Control 7.3 Dynamic Parameter Identification.

Journal ArticleDOI
TL;DR: In this paper, an efficient optimization procedure based on the clonal selection algorithm (CSA) is proposed for the solution of the short-term hydrothermal scheduling problem. CSA, a new algorithm from the family of evolutionary computation, is simple, fast, and a robust optimization tool for real, complex hydrothermal scheduling problems. The results of the proposed approach are compared with those of gradient search (GS), simulated annealing (SA), evolutionary programming (EP), dynamic programming (DP), nonlinear programming (NLP), genetic algorithm (GA), improved fast EP (IFEP),

Book ChapterDOI
26 Mar 2011
TL;DR: Using a new version of parametric probabilistic model checking, it is shown how the Model Repair problem can be reduced to a nonlinear optimization problem with a minimal-cost objective function, thereby yielding a solution technique.
Abstract: We introduce the problem of Model Repair for Probabilistic Systems as follows. Given a probabilistic system M and a probabilistic temporal logic formula φ such that M fails to satisfy φ, the Model Repair problem is to find an M′ that satisfies φ and differs from M only in the transition flows of those states in M that are deemed controllable. Moreover, the cost associated with modifying M's transition flows to obtain M′ should be minimized. Using a new version of parametric probabilistic model checking, we show how the Model Repair problem can be reduced to a nonlinear optimization problem with a minimal-cost objective function, thereby yielding a solution technique. We demonstrate the practical utility of our approach by applying it to a number of significant case studies, including a DTMC reward model of the Zeroconf protocol for assigning IP addresses, and a CTMC model of the highly publicized Kaminsky DNS cache-poisoning attack.
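
As a minimal, hand-rolled illustration of casting repair as nonlinear optimization (not the paper's parametric probabilistic model checking), the sketch below perturbs the controllable transition probabilities of a three-state DTMC so that the probability of reaching a goal state meets a threshold, at minimal squared cost. The chain, the threshold, and the cost function are invented.

```python
import numpy as np
from scipy.optimize import minimize

# A 3-state DTMC starts in s0 and moves to a goal state with probability p,
# to a failure state with probability q, and stays in s0 otherwise.  The
# probability of eventually reaching the goal is p / (p + q).  We "repair"
# (p, q) at minimal squared cost so that this probability reaches 0.6.
p0, q0 = 0.2, 0.3                     # original (controllable) transition probabilities
target = 0.6

cost = lambda v: (v[0] - p0) ** 2 + (v[1] - q0) ** 2
cons = [
    {"type": "ineq", "fun": lambda v: v[0] / (v[0] + v[1]) - target},  # reachability >= target
    {"type": "ineq", "fun": lambda v: 1.0 - v[0] - v[1]},              # keep a valid distribution
]
res = minimize(cost, x0=[p0, q0], bounds=[(1e-6, 1.0)] * 2, constraints=cons, method="SLSQP")
p, q = res.x
print(f"repaired p={p:.3f}, q={q:.3f}, reachability={p / (p + q):.3f}, cost={res.fun:.4f}")
```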

Journal ArticleDOI
TL;DR: Inspired by recent work, a formulation for the piecewise linear relaxation of bilinear functions with a logarithmic number of binary variables is introduced, and the performance of this new formulation is computationally compared to the best-performing piecewise relaxations with a linear number of binary variables.

Journal ArticleDOI
TL;DR: An optimal wideband spectrum sensing framework which identifies secondary transmission opportunities over multiple nonoverlapping narrowband channels is presented and it is demonstrated that the problem can be solved by convex optimization if certain practical constraints are applied.
Abstract: An optimal wideband spectrum sensing framework which identifies secondary transmission opportunities over multiple nonoverlapping narrowband channels is presented. The framework, which is referred to as multiband sensing-time-adaptive joint detection, improves the overall secondary user performance while protecting the primary network and keeping the harmful interference below a desired low level. Considering a periodic sensing scheme, the detection problem is formulated as a joint optimization problem to maximize the aggregate achievable secondary throughput capacity given a bound on the aggregate interference imposed on the primary network. It is demonstrated that the problem can be solved by convex optimization if certain practical constraints are applied. Simulation results attest that the proposed wideband spectrum sensing framework achieves superior performance compared to contemporary frameworks. An efficient iterative algorithm which solves the optimization problem with much lower complexity compared to other numerical methods is presented. It is established that the iteration-complexity and the complexity-per-iteration of the proposed algorithm increases linearly as the number of optimization variables (i.e., the number of narrowband channels) increases. The algorithm is evaluated via simulation and is shown to obtain the optimal solution very quickly and efficiently.

Journal ArticleDOI
TL;DR: In this article, a two-layer architecture for dynamic real-time optimization with an economic objective is presented, where the solution of the dynamic optimization problem is computed on two time-scales.

Journal ArticleDOI
TL;DR: An improved particle swarm optimization (IPSO) was proposed in this paper to solve the problem that the linearly decreasing inertia weight (LDIW) of particle Swarm optimization algorithm cannot adapt to the complex and nonlinear optimization process.
Abstract: An improved particle swarm optimization (IPSO) is proposed in this paper to solve the problem that the linearly decreasing inertia weight (LDIW) of the particle swarm optimization algorithm cannot adapt to the complex and nonlinear optimization process. A strategy of nonlinearly decreasing inertia weight based on a concave function is used in this algorithm, and an aggregation degree factor of the swarm is introduced. In each iteration, the weight is changed dynamically based on the current aggregation degree factor and the iteration count, which provides the algorithm with dynamic adaptability. Experiments on three classical functions show that the convergence speed of IPSO is significantly superior to that of LDIWPSO, and the convergence accuracy is increased.

Proceedings ArticleDOI
05 Jun 2011
TL;DR: A modified FA approach combined with chaotic sequences (FAC) applied to reliability-redundancy optimization is introduced and is found to outperform the previously best-known solutions available.
Abstract: The reliability-redundancy allocation problem can be approached as a mixed-integer programming problem. It has been solved by using optimization techniques such as dynamic programming, integer programming, and mixed-integer nonlinear programming. On the other hand, a broad class of meta-heuristics has been developed for reliability-redundancy optimization. Recently, a new meta-heuristic called the firefly algorithm (FA) has emerged. The FA is a stochastic meta-heuristic approach based on the idealized behavior of the flashing characteristics of fireflies. In FA, the flashing light can be formulated in such a way that it is associated with the objective function to be optimized, which makes it possible to formulate the firefly algorithm. This paper introduces a modified FA approach combined with chaotic sequences (FAC) applied to reliability-redundancy optimization. In this context, an example of mixed-integer programming in the reliability-redundancy design of an overspeed protection system for a gas turbine is evaluated. In this application domain, FAC was found to outperform the previously best-known solutions available.
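
Reproducing the reliability-redundancy design study requires the overspeed-protection system model; the sketch below only shows the generic firefly movement rule with a logistic chaotic map modulating the randomization weight, applied to a continuous test function. All constants are illustrative, and the chaotic coupling is a simplified take on the FAC idea.

```python
import numpy as np

def firefly_chaotic(f, bounds, n_fireflies=20, max_iter=100, seed=0):
    """Basic firefly algorithm with a logistic chaotic map modulating the
    random step size (a simplified sketch of the FAC idea)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim, scale = len(lo), hi - lo
    X = rng.uniform(lo, hi, (n_fireflies, dim))
    I = np.apply_along_axis(f, 1, X)                  # light intensity = objective (minimize)
    beta0, gamma, alpha, c = 1.0, 1.0, 0.2, 0.7       # c is the chaotic state
    for _ in range(max_iter):
        c = 4.0 * c * (1.0 - c)                       # logistic map drives the random step size
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if I[j] < I[i]:                       # firefly i moves toward brighter firefly j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    step = alpha * c * (rng.random(dim) - 0.5) * scale
                    X[i] = np.clip(X[i] + beta * (X[j] - X[i]) + step, lo, hi)
                    I[i] = f(X[i])
    best = I.argmin()
    return X[best], I[best]

# Example: minimize a shifted sphere function
sphere = lambda z: float(np.sum((z - 1.5) ** 2))
print(firefly_chaotic(sphere, bounds=[(-5.0, 5.0)] * 4))
```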

Journal ArticleDOI
TL;DR: A new strategic bidding optimization technique which applies bilevel programming and swarm intelligence and develops a particle-swarm-optimization-based algorithm to solve the problem defined in the MLNB decision model.
Abstract: Competitive strategic bidding optimization is now a key issue in electricity generator markets. Digital ecosystems provide a powerful technological foundation and support for the implementation of the optimization. This paper presents a new strategic bidding optimization technique which applies bilevel programming and swarm intelligence. In this paper, we first propose a general multileader-one-follower nonlinear bilevel (MLNB) optimization concept and related definitions based on the generalized Nash equilibrium. By analyzing the strategic bidding behavior of generating companies, we create a specific MLNB decision model for day-ahead electricity markets. The MLNB decision model allows each generating company to choose its biddings to maximize its individual profit, and a market operator can find its minimized purchase electricity fare, which is determined by the output power of each unit and the uniform marginal prices. We then develop a particle-swarm-optimization-based algorithm to solve the problem defined in the MLNB decision model. The experiment results on a strategic bidding problem for a day-ahead electricity market have demonstrated the validity of the proposed decision model and algorithm.

Journal ArticleDOI
TL;DR: It is shown that in the reasonable case when the penalty parameters are bounded, the complexity of reaching within $\epsilon$ of a KKT point is at most $\mathcal{O}(\epsilon^{-2})$ problem evaluations, which is the same in order as the function-evaluation complexity of steepest-descent methods applied to unconstrained, nonconvex smooth optimization.
Abstract: We estimate the worst-case complexity of minimizing an unconstrained, nonconvex composite objective with a structured nonsmooth term by means of some first-order methods. We find that it is unaffected by the nonsmoothness of the objective in that a first-order trust-region or quadratic regularization method applied to it takes at most $\mathcal{O}(\epsilon^{-2})$ function evaluations to reduce the size of a first-order criticality measure below $\epsilon$. Specializing this result to the case when the composite objective is an exact penalty function allows us to consider the objective- and constraint-evaluation worst-case complexity of nonconvex equality-constrained optimization when the solution is computed using a first-order exact penalty method. We obtain that in the reasonable case when the penalty parameters are bounded, the complexity of reaching within $\epsilon$ of a KKT point is at most $\mathcal{O}(\epsilon^{-2})$ problem evaluations, which is the same in order as the function-evaluation complexity of steepest-descent methods applied to unconstrained, nonconvex smooth optimization.

Journal ArticleDOI
01 Jan 2011
TL;DR: A new technique for combined physical system and control design (co-design) is explored, based on a simultaneous dynamic optimization approach known as direct transcription, which transforms infinite-dimensional control design problems into finite-dimensional nonlinear programming problems.
Abstract: Design of physical systems and associated control systems are coupled tasks; design methods that manage this interaction explicitly can produce system-optimal designs, whereas conventional sequential processes may not. Here we explore a new technique for combined physical system and control design (co-design) based on a simultaneous dynamic optimization approach known as direct transcription, which transforms infinite-dimensional control design problems into finite-dimensional nonlinear programming problems. While direct transcription problem dimension is often large, sparse problem structures and fine-grained parallelism (among other advantageous properties) can be exploited to yield computationally efficient implementations. Extension of direct transcription to co-design gives rise to new problem structures and new challenges. Here we illustrate direct transcription for co-design using a new automotive active suspension design example developed specifically for testing co-design methods. This example builds on prior active suspension problems by incorporating a more realistic physical design component that includes independent design variables and a broad set of physical design constraints, while maintaining linearity of the associated differential equations.

Journal ArticleDOI
TL;DR: The experimental results suggest that IWO holds immense promise as an efficient metaheuristic for multi-objective optimization.

Journal ArticleDOI
TL;DR: In this paper, an analytical point-wise stationary approximation model is proposed to analyze time-dependent truck queuing processes with stochastic service time distributions at gates and yards of a port terminal.
Abstract: An analytical point-wise stationary approximation model is proposed to analyze time-dependent truck queuing processes with stochastic service time distributions at gates and yards of a port terminal. A convex nonlinear programming model is developed which minimizes the total truck turn time and discomfort due to shifted arrival times. A two-phase optimization approach is used to first compute a system-optimal truck arrival pattern, and then find a desirable pattern of time-varying tolls that leads to the optimal arrival pattern. Numerical experiments are conducted to test the computational efficiency and accuracy of the proposed optimization models.

Journal ArticleDOI
TL;DR: A mathematical characterization of the joint relationship among the physical, link, and network layers under the SINR model offers quantitative understanding of the interaction of power control, scheduling, and flow routing in a CRN and provides a performance benchmark for any other algorithms developed for practical implementation.
Abstract: Cognitive radio networks (CRNs) have the potential to utilize spectrum efficiently and are positioned to be the core technology for the next-generation multihop wireless networks. An important problem for such networks is their capacity. We study this problem for CRNs in the SINR (signal-to-interference-and-noise-ratio) model, which is considered to be a better characterization of interference (but also more difficult to analyze) than the disk graph model. The main difficulties of this problem are two-fold. First, SINR is a nonconvex function of transmission powers; an optimization problem in the SINR model is usually a nonconvex program and NP-hard in general. Second, in the SINR model, scheduling feasibility and the maximum allowed flow rate on each link are determined by SINR at the physical layer. To maximize capacity, it is essential to follow a cross-layer approach, but joint optimization at the physical (power control), link (scheduling), and network (flow routing) layers with the SINR function is inherently difficult. In this paper, we give a mathematical characterization of the joint relationship among these layers. We devise a solution procedure that provides a $(1-\varepsilon)$ optimal solution to this complex problem, where $\varepsilon$ is the required accuracy. Our theoretical result offers a performance benchmark for any other algorithms developed for practical implementation. Using numerical results, we demonstrate the efficacy of the solution procedure and offer quantitative understanding on the interaction of power control, scheduling, and flow routing in a CRN.

Journal ArticleDOI
TL;DR: It is demonstrated that the branch-and-bound based $\epsilon$-optimal algorithm obtains a globally optimal solution with the predetermined relative optimality tolerance $\epsilon$ in a finite number of iterations.

Journal ArticleDOI
TL;DR: The Monte Carlo importance sampling (MCIS) technique is used to find an approximate global solution to the source localization problem using time-difference-of-arrival (TDOA) measurements in sensor networks; a Gaussian distribution is constructed and its probability density function is chosen as the importance function.
Abstract: We consider the source localization problem using time-difference-of-arrival (TDOA) measurements in sensor networks. The maximum likelihood (ML) estimation of the source location can be cast as a nonlinear/nonconvex optimization problem, and its global solution is hardly obtained. In this paper, we resort to the Monte Carlo importance sampling (MCIS) technique to find an approximate global solution to this problem. To obtain an efficient importance function that is used in the technique, we construct a Gaussian distribution and choose its probability density function (pdf) as the importance function. In this process, an initial estimate of the source location is required. We reformulate the problem as a nonlinear robust least squares (LS) problem, and relax it as a second-order cone programming (SOCP), the solution of which is used as the initial estimate. Simulation results show that the proposed method can achieve the Cramer-Rao bound (CRB) accuracy and outperforms several existing methods.
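
The paper's MCIS method builds a Gaussian importance function around an initial estimate obtained from an SOCP relaxation; the sketch below only sets up the underlying TDOA nonlinear least-squares objective and solves it locally from a rough initial guess, to show the problem the global method targets. The sensor geometry, noise level, and initial guess are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# TDOA source localization as a nonlinear least-squares problem.
rng = np.random.default_rng(3)
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, -5.0]])
source = np.array([3.0, 7.0])

ranges = np.linalg.norm(sensors - source, axis=1)
# Range differences with respect to reference sensor 0, plus measurement noise
tdoa = (ranges[1:] - ranges[0]) + 0.05 * rng.standard_normal(len(sensors) - 1)

def residuals(x):
    d = np.linalg.norm(sensors - x, axis=1)
    return (d[1:] - d[0]) - tdoa

# Local solve from a rough initial guess; the paper's MCIS/SOCP machinery
# is aimed at avoiding the local minima such a solver can fall into.
est = least_squares(residuals, x0=np.array([5.0, 5.0]))
print("true source:", source, " estimate:", np.round(est.x, 3))
```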