Topic
Stochastic programming
About: Stochastic programming is a research topic. Over its lifetime, 12,343 publications have been published within this topic, receiving 421,049 citations.
Papers published on a yearly basis
Papers
11 Sep 2006
TL;DR: A hybrid strategy combining genetic algorithms and quadratic programming is designed to provide an accurate and efficient solution to the problem of selecting an optimal portfolio within the standard mean-variance framework, extended to include constraints of practical interest.
Abstract: We consider the problem of selecting an optimal portfolio within the standard mean-variance framework extended to include constraints of practical interest, such as limits on the number of assets that can be included in the portfolio and on the minimum and maximum investments per asset and/or groups of assets. The introduction of these realistic constraints transforms the selection of the optimal portfolio into a mixed integer quadratic programming problem. This optimization problem, which we prove to be NP-hard, is difficult to solve, even approximately, by standard optimization techniques. A hybrid strategy that makes use of genetic algorithms and quadratic programming is designed to provide an accurate and efficient solution to the problem.
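The hybrid idea can be sketched as a genetic algorithm searching over asset subsets, with a continuous inner solve allocating weights within each subset. The sketch below is a hypothetical minimal version, not the authors' implementation: it substitutes a closed-form minimum-variance solve (budget constraint only) for the paper's full quadratic program with bounds, and all function names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def min_var_weights(cov):
    """Closed-form minimum-variance weights under the budget constraint sum(w)=1.
    (A stand-in for the paper's full QP, which also handles bound constraints.)"""
    w = np.linalg.solve(cov, np.ones(len(cov)))
    return w / w.sum()

def fitness(mask, cov):
    """Portfolio variance of the minimum-variance weights on the chosen assets."""
    idx = np.flatnonzero(mask)
    sub = cov[np.ix_(idx, idx)]
    w = min_var_weights(sub)
    return float(w @ sub @ w)

def repair(mask, k):
    """Flip random bits until exactly k assets are selected (cardinality limit)."""
    mask = mask.copy()
    while mask.sum() > k:
        mask[rng.choice(np.flatnonzero(mask))] = False
    while mask.sum() < k:
        mask[rng.choice(np.flatnonzero(~mask))] = True
    return mask

def ga_portfolio(cov, k, pop_size=30, gens=40):
    """GA over asset subsets; continuous weights come from the inner solve."""
    n = len(cov)
    pop = [repair(rng.random(n) < 0.5, k) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(m, cov))
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            i, j = rng.choice(len(parents), size=2, replace=False)
            child = np.where(rng.random(n) < 0.5, parents[i], parents[j])
            flip = rng.integers(n)
            child[flip] = not child[flip]       # point mutation
            children.append(repair(child, k))
        pop = parents + children
    best = min(pop, key=lambda m: fitness(m, cov))
    return best, fitness(best, cov)
```

The division of labor mirrors the abstract: the GA handles the NP-hard combinatorial part (which assets), while the convex continuous part (how much of each) is delegated to an exact solver.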
106 citations
TL;DR: In this article, the authors investigate market power issues in bid-based hydrothermal scheduling: market power is simulated with a single-stage Nash-Cournot equilibrium model, and market power assessment for multiple stages is carried out through a stochastic dynamic programming scheme.
Abstract: The objective of this paper is to investigate market power issues in bid-based hydrothermal scheduling. Initially, market power is simulated with a single-stage Nash-Cournot equilibrium model. Market power assessment for multiple stages is then carried out through a stochastic dynamic programming scheme. The decision in each stage and state is the equilibrium of a multi-agent game. Thereafter, mitigation measures, especially bilateral contracts, are investigated. Case studies with data taken from the Brazilian system are presented and discussed.
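The single-stage building block can be illustrated with a textbook Cournot game: each firm repeatedly plays a best response to the others' output until the quantities stop moving. This is a hypothetical stand-alone sketch under linear inverse demand and constant marginal costs, not the paper's hydrothermal model.

```python
def cournot_equilibrium(a, b, costs, iters=200):
    """Gauss-Seidel best-response iteration for an n-firm Cournot game with
    inverse demand p = a - b*Q and constant marginal costs (hypothetical data)."""
    n = len(costs)
    q = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            others = sum(q) - q[i]
            # Best response of firm i to the others' total output.
            q[i] = max(0.0, (a - costs[i] - b * others) / (2.0 * b))
    return q
```

For two symmetric firms (a = 10, b = 1, c = 1) the iteration converges to the textbook equilibrium q_i = (a - c)/(3b) = 3. In the paper's stochastic dynamic programming scheme, an equilibrium of this kind is computed at every stage and state.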
105 citations
TL;DR: A network design model is formulated in which traffic flows satisfy dynamic user equilibrium conditions for a single destination; it is observed that not accounting for demand uncertainty explicitly yields sub-optimal solutions to the DUE NDP problem.
Abstract: In this paper we formulate a network design model in which the traffic flows satisfy dynamic user equilibrium conditions for a single destination. The model presented here incorporates the Cell Transmission Model (CTM), a traffic flow model capable of capturing shockwaves and link spillovers. Comparisons are made between the properties of the Dynamic User Equilibrium Network Design Problem (DUE NDP) and an existing Dynamic System Optimal (DSO) NDP formulation. Both network design models have different objective functions with similar constraint sets, which are linear and convex. Numerical experiments on multiple networks demonstrate the efficacy of the model and highlight important differences between the DUE and DSO NDP approaches. In addition, the flexibility of the approach is demonstrated by extending the formulation to account for demand uncertainty. This is formulated as a stochastic programming problem, and initial results are reported on test networks. It is observed that not accounting for demand uncertainty explicitly yields sub-optimal solutions to the DUE NDP problem.
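The demand-uncertainty extension follows the standard two-stage recipe: a first-stage design decision is fixed before demand is known, and scenario-weighted recourse costs enter the objective. A toy capacity-sizing version with hypothetical numbers and names (not the paper's CTM-based formulation):

```python
def expected_cost(capacity, scenarios, build_cost, penalty):
    """First-stage cost plus probability-weighted second-stage (recourse) cost;
    unmet demand in each scenario incurs a linear penalty."""
    recourse = sum(p * penalty * max(0.0, d - capacity) for d, p in scenarios)
    return build_cost * capacity + recourse

def best_capacity(scenarios, build_cost, penalty, grid):
    """Pick the first-stage decision minimising expected total cost over a grid."""
    return min(grid, key=lambda c: expected_cost(c, scenarios, build_cost, penalty))
```

With scenarios [(10, 0.5), (20, 0.5)], unit build cost 1, and penalty 3, the stochastic optimum builds capacity 20 (cost 20), while sizing for the mean demand of 15 costs 22.5 — a small-scale echo of the abstract's observation that ignoring demand uncertainty is sub-optimal.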
105 citations
TL;DR: A nonlinear programming algorithm is described that exploits the matrix sparsity arising when sequential quadratic programming is applied to large problems, and it is used to solve trajectory optimization problems with nonlinear equality and inequality constraints.
Abstract: One of the most effective numerical techniques for solving nonlinear programming problems is the sequential quadratic programming approach. Many large nonlinear programming problems arise naturally in data fitting and when discretization techniques are applied to systems described by ordinary or partial differential equations. Problems of this type are characterized by matrices which are large and sparse. This paper describes a nonlinear programming algorithm which exploits the matrix sparsity produced by these applications. Numerical experience is reported for a collection of trajectory optimization problems with nonlinear equality and inequality constraints.
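The core SQP step can be sketched in a few lines: at each iterate, solve the (typically large and sparse) KKT system for the step and the multipliers. Below is our own minimal dense sketch for equality constraints only; production codes for problems like these factor the sparse KKT matrix directly and handle inequalities via an active set or interior point.

```python
import numpy as np

def sqp_equality(f_grad, f_hess, c, c_jac, x0, iters=20, tol=1e-10):
    """Newton-KKT (SQP) iteration for min f(x) subject to c(x) = 0.
    Each step solves the KKT system [[H, A^T], [A, 0]] [dx; lam] = [-g; -c]."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for _ in range(iters):
        g, H = f_grad(x), f_hess(x)
        cv, A = c(x), c_jac(x)
        m = len(cv)
        kkt = np.block([[H, A.T], [A, np.zeros((m, m))]])
        rhs = -np.concatenate([g, cv])
        step = np.linalg.solve(kkt, rhs)   # sparse factorization in real solvers
        x = x + step[:n]
        if np.linalg.norm(step[:n]) < tol:
            break
    return x
```

On a convex quadratic with a linear constraint — e.g. min x₁² + 2x₂² subject to x₁ + x₂ = 1 — a single Newton-KKT step lands on the exact solution (2/3, 1/3), which is what makes SQP so effective near a solution of a general nonlinear program.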
105 citations
TL;DR: A new computational algorithm is presented for the solution of discrete time linearly constrained stochastic optimal control problems decomposable in stages and is much more efficient than the conventional way based on enumeration or iterative methods with linear rate of convergence.
Abstract: A new computational algorithm is presented for the solution of discrete time linearly constrained stochastic optimal control problems decomposable in stages. The algorithm, designated gradient dynamic programming, is a backward moving stagewise optimization. The main innovations over conventional discrete dynamic programming (DDP) are in the functional representation of the cost-to-go function and the solution of the single-stage problem. The cost-to-go function (assumed to be of requisite smoothness) is approximated within each element defined by the discretization scheme by the lowest-order polynomial which preserves its values and the values of its gradient with respect to the state variables at all nodes of the discretization grid. The improved accuracy of this Hermitian interpolation scheme reduces the effect of discretization error and allows the use of coarser grids, which reduces the dimensionality of the problem. At each stage, the optimal control is determined on each node of the discretized state space using a constrained Newton-type optimization procedure which has a quadratic rate of convergence. The set of constraints which act as equalities is determined from an active set strategy which converges under lenient convexity requirements. This method of solving the single-stage optimization is much more efficient than the conventional way based on enumeration or iterative methods with a linear rate of convergence. Once the optimal control is determined, the cost-to-go function and its gradient with respect to the state variables are calculated to be used at the next stage. The proposed technique permits the efficient optimization of stochastic systems whose high dimensionality does not permit solution under the conventional DDP framework and for which successive approximation methods are not directly applicable due to stochasticity. Results for a four-reservoir example are presented.
The purpose of this paper is to present a new computational algorithm for the stochastic optimization of sequential decision problems. One important and extensively studied class of such problems in the area of water resources is the discrete time optimal control of multireservoir systems under stochastic inflows. Other applications include the optimal design and operation of sewer systems [e.g., Mays and Wenzel, 1976; Labadie et al., 1980], the optimal conjunctive utilization of surface and groundwater resources [e.g., Buras, 1972], and the minimum cost water quality maintenance in rivers [e.g., Dracup and Fogarty, 1974; Chang and Yeh, 1973], to mention only a few of the water resources applications and pertinent references. An extensive review of dynamic programming applications in water resources can be found in the works by Yakowitz [1982] and Yeh [1985].
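The Hermitian interpolation at the heart of gradient dynamic programming can be illustrated in one dimension: on each element, a cubic reproduces the cost-to-go value and its derivative at both nodes. A minimal one-dimensional sketch in our own notation (the paper uses a multidimensional tensor-product version of the same idea):

```python
def hermite(x0, x1, f0, f1, g0, g1, x):
    """Cubic Hermite interpolant on [x0, x1] matching the function values
    (f0, f1) and the derivatives (g0, g1) at the two endpoints."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = 2 * t**3 - 3 * t**2 + 1      # standard Hermite basis polynomials
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f0 + h10 * h * g0 + h01 * f1 + h11 * h * g1
```

Because the interpolant matches gradients as well as values at the nodes, it reproduces cubics exactly — e.g. for f(x) = x³ on [0, 2], hermite(0, 2, 0, 8, 0, 12, 1) returns 1.0 — which is why the scheme tolerates coarser grids than value-only interpolation.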
105 citations