Topic
Stochastic programming
About: Stochastic programming is a research topic. Over its lifetime, 12,343 publications have been published within this topic, receiving 421,049 citations.
Papers
TL;DR: This work uses dynamic Bayesian networks (with decision trees representing the local families of conditional probability distributions) to represent stochastic actions in an MDP, together with a decision-tree representation of rewards, and develops versions of standard dynamic programming algorithms that directly manipulate decision-tree representations of policies and value functions.
443 citations
TL;DR: A comprehensive review of studies in the fields of SCND and reverse logistics network design under uncertainty and existing optimization techniques for dealing with uncertainty such as recourse-based stochastic programming, risk-averse stochastics, robust optimization, and fuzzy mathematical programming are explored.
442 citations
TL;DR: This paper derives an equivalent reformulation for DCC, showing that it is equivalent to a classical chance constraint with a perturbed risk level, and analyzes the relationship between the conservatism of DCC and the size of historical data, which can help indicate the value of data.
Abstract: In this paper, we study data-driven chance constrained stochastic programs, or more specifically, stochastic programs with distributionally robust chance constraints (DCCs) in a data-driven setting to provide robust solutions for the classical chance constrained stochastic program facing ambiguous probability distributions of random parameters. We consider a family of density-based confidence sets based on a general $\phi$-divergence measure, and formulate DCC from the perspective of robust feasibility by allowing the ambiguous distribution to run adversely within its confidence set. We derive an equivalent reformulation for DCC and show that it is equivalent to a classical chance constraint with a perturbed risk level. We also show how to evaluate the perturbed risk level by using a bisection line search algorithm for general $\phi$-divergence measures. In several special cases, our results can be strengthened such that we can derive closed-form expressions for the perturbed risk levels. In addition, we show that the conservatism of DCC vanishes as the size of historical data goes to infinity. Furthermore, we analyze the relationship between the conservatism of DCC and the size of historical data, which can help indicate the value of data. Finally, we conduct extensive computational experiments to test the performance of the proposed DCC model and compare various $\phi$-divergence measures based on a capacitated lot-sizing problem with a quality-of-service requirement.
437 citations
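The bisection line search described in the abstract above can be sketched generically: given a map from a nominal risk level to the worst-case risk over the $\phi$-divergence confidence set, bisect for the largest nominal level whose worst-case risk still meets the target. This is a minimal sketch, assuming the worst-case map is monotone; `worst_case_risk` is a hypothetical stand-in supplied by the caller, not the paper's actual $\phi$-divergence formula:

```python
from typing import Callable

def perturbed_risk_level(worst_case_risk: Callable[[float], float],
                         alpha: float, tol: float = 1e-8) -> float:
    """Find the perturbed risk level alpha' <= alpha such that the
    worst-case risk over the ambiguity set equals the target alpha.
    Assumes worst_case_risk is nondecreasing and >= its argument."""
    lo, hi = 0.0, alpha  # robustness can only tighten the risk level
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_case_risk(mid) <= alpha:
            lo = mid  # still feasible: try a larger nominal level
        else:
            hi = mid  # too risky under the worst case: shrink
    return lo

# Toy illustration (hypothetical worst-case map that doubles the
# nominal risk): a 5% target chance constraint tightens to ~2.5%.
alpha_prime = perturbed_risk_level(lambda a: 2.0 * a, alpha=0.05)
```

The tightened level `alpha_prime` would then replace the original risk level in the classical chance constraint, as the reformulation result suggests.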
05 Mar 2007
TL;DR: This systematic study is able to find a minimum frequency of change allowed in a problem for two dynamic EMO procedures to adequately track Pareto-optimal frontiers on-line, and suggests an automatic decision-making procedure for arriving at a dynamic single optimal solution on-line.
Abstract: Most real-world optimization problems involve objectives, constraints, and parameters which constantly change with time. Treating such problems as stationary optimization problems demands knowledge of the pattern of change a priori, and even then the procedure can be computationally expensive. Although dynamic consideration using evolutionary algorithms has been made for single-objective optimization problems, there has been only lukewarm interest in formulating and solving dynamic multi-objective optimization problems. In this paper, we modify the commonly-used NSGA-II procedure to track a new Pareto-optimal front as soon as there is a change in the problem. The introduction of a few random solutions or a few mutated solutions is investigated in detail. The approaches are tested and compared on a test problem and on a real-world hydro-thermal power scheduling problem. This systematic study is able to find a minimum frequency of change allowed in a problem for two dynamic EMO procedures to adequately track Pareto-optimal frontiers on-line. Based on these results, this paper also suggests an automatic decision-making procedure for arriving at a dynamic single optimal solution on-line.
434 citations
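The two diversity-injection strategies compared in the abstract above (replacing part of the population with fresh random solutions versus mutated copies after a detected change) can be sketched as follows. This is a hypothetical illustration covering only the population-refresh step, not the paper's full NSGA-II procedure; the function name `refresh_population` and the parameters `fraction` and `mutation_scale` are invented for the sketch:

```python
import random

def refresh_population(population, fraction, lower, upper,
                       mode="random", mutation_scale=0.1):
    """After a change in the problem is detected, replace a given
    fraction of the population either with random solutions drawn
    uniformly within the variable bounds, or with Gaussian-mutated
    copies of existing solutions (clipped to the bounds)."""
    pop = [list(ind) for ind in population]
    k = int(fraction * len(pop))
    for i in random.sample(range(len(pop)), k):
        if mode == "random":
            pop[i] = [random.uniform(lo, hi)
                      for lo, hi in zip(lower, upper)]
        else:  # mode == "mutated"
            pop[i] = [min(hi, max(lo, x + random.gauss(0.0, mutation_scale * (hi - lo))))
                      for x, lo, hi in zip(pop[i], lower, upper)]
    return pop
```

The refreshed population would then be passed back into the usual NSGA-II selection-and-variation loop; the study's finding is essentially about how often the problem may change before either refresh strategy fails to keep up.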
01 Jan 1989
TL;DR: In the long history of mathematics, stochastic optimal control is a rather recent development using Bellman's Principle of Optimality along with measure-theoretic and functional-analytic methods.
Abstract: In the long history of mathematics, stochastic optimal control is a rather recent development. Using Bellman's Principle of Optimality along with measure-theoretic and functional-analytic methods, several mathematicians such as H. Kushner, W. Fleming, R. Rishel, W.M. Wonham and J.M. Bismut, among many others, made important contributions to this new area of mathematical research during the 1960s and early 1970s. For a complete mathematical exposition of the continuous time case see Fleming and Rishel (1975), and for the discrete time case see Bertsekas and Shreve (1978).
415 citations