Topic

Stochastic programming

About: Stochastic programming is a research topic. Over its lifetime, 12,343 publications have been published within this topic, receiving 421,049 citations.


Papers
Journal ArticleDOI
TL;DR: Arguments from stability analysis indicate that Fortet-Mourier-type probability metrics may serve as such canonical metrics, and efficient algorithms are developed that approximately determine optimal reduced measures.
Abstract: Given a convex stochastic programming problem with a discrete initial probability distribution, the problem of optimal scenario reduction is stated as follows: Determine a scenario subset of prescribed cardinality and a probability measure based on this set that is the closest to the initial distribution in terms of a natural (or canonical) probability metric. Arguments from stability analysis indicate that Fortet-Mourier type probability metrics may serve as such canonical metrics. Efficient algorithms are developed that determine optimal reduced measures approximately. Numerical experience is reported for reductions of electrical load scenario trees for power management under uncertainty. For instance, it turns out that after 50% reduction of the scenario tree the optimal reduced tree still has about 90% relative accuracy.

615 citations
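As a rough illustration of the reduction idea, and not the algorithm from the paper, the sketch below implements a greedy backward-reduction heuristic: it repeatedly deletes the scenario whose removal is cheapest and transfers its probability to the nearest surviving scenario. Plain Euclidean distance is used as a stand-in for a Fortet-Mourier-type metric, and the function name and data layout are assumptions made for this example.

```python
import numpy as np

def reduce_scenarios(scenarios, probs, n_keep):
    """Greedy backward scenario reduction (illustrative sketch).

    scenarios : (n, d) array of scenario vectors
    probs     : (n,) array of probabilities summing to 1
    n_keep    : number of scenarios to retain (>= 1)
    """
    scenarios = np.asarray(scenarios, dtype=float)
    probs = np.asarray(probs, dtype=float).copy()
    n = len(probs)
    # Pairwise distances; Euclidean distance is a simple stand-in
    # for a Fortet-Mourier-type cost between scenarios.
    dist = np.linalg.norm(scenarios[:, None, :] - scenarios[None, :, :], axis=2)
    kept = set(range(n))
    while len(kept) > n_keep:
        best, best_cost = None, np.inf
        for j in kept:
            others = [k for k in kept if k != j]
            # Cost of deleting j: its probability times the distance
            # to the closest remaining scenario.
            cost = probs[j] * min(dist[j, k] for k in others)
            if cost < best_cost:
                best, best_cost = j, cost
        # Redistribute the deleted scenario's probability to its
        # nearest surviving neighbour, then remove it.
        others = [k for k in kept if k != best]
        nearest = min(others, key=lambda k: dist[best, k])
        probs[nearest] += probs[best]
        probs[best] = 0.0
        kept.remove(best)
    idx = sorted(kept)
    return scenarios[idx], probs[idx]
```

Calling reduce_scenarios(scenarios, probs, n_keep=len(probs) // 2) would mimic the 50% reduction mentioned in the abstract; how much accuracy the reduced tree retains depends on the problem.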

Book
01 Aug 1991
TL;DR: This book is useful to researchers in artificial intelligence and control theory, and others concerned with the design of complex applications in robotics, automated manufacturing, and time-critical decision support.
Abstract: "Planning and Control" explores planning and control by reformulating the two areas in a common control framework, developing the corresponding techniques side-by-side, and identifying opportunities for integrating their ideas and methods. This book is organized around the central roles of prediction, observation, and computation control. The first three chapters deal with predictive models of physical systems based on temporal logic and the differential calculus. Chapter 4 and 5 present some basic concepts in planning and control, including controllability, observability, stability, feedback control, task reduction, conditional plans, and the relationship between goals and preferences. Chapters 6 and 7 consider issues of uncertainty, covering state estimation and the Kalman filter, stochastic dynamic programming, probabilistic modeling, and graph-based decision models. The remaining chapters investigate selected topics in time-critical decision making, adaptive control, and hybrid control architectures. Throughout, the reader is led to consider critical tradeoffs involving the accuracy of prediction, the availability of information from observation, and the costs and benefits of computation in dynamic environments. This book is useful to researchers in artificial intelligence and control theory, and others concerned with the design of complex applications in robotics, automated manufacturing, and time-critical decision support.

613 citations
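As a generic companion to the state-estimation material covered in Chapters 6 and 7 (this is not code from the book), the sketch below shows one predict/update cycle of a linear Kalman filter; all matrices and dimensions are assumed inputs.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter (illustrative sketch).

    x, P : prior state estimate and covariance
    z    : new measurement
    A, H : state-transition and observation matrices
    Q, R : process and measurement noise covariances
    """
    # Predict the state forward one step.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the new measurement.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```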

Journal ArticleDOI
TL;DR: Decomposition and partitioning methods are developed for solving multistage stochastic linear programs, which model problems in financial planning, dynamic traffic assignment, economic policy analysis, and many other applications.
Abstract: Multistage stochastic linear programs model problems in financial planning, dynamic traffic assignment, economic policy analysis, and many other applications. Equivalent representations of such problems as deterministic linear programs are, however, excessively large. This paper develops decomposition and partitioning methods for solving these problems and reports on computational results on a set of practical test problems.

608 citations
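The paper's point of departure is that the deterministic equivalent of a multistage stochastic linear program is excessively large, which is what motivates decomposition and partitioning. The hedged sketch below builds the deterministic equivalent of a toy two-stage problem to make that growth concrete: each scenario adds one recourse variable and one constraint, and a multistage scenario tree grows far faster. The costs and demand scenarios are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical two-stage problem: choose capacity x now, pay a recourse
# cost q per unit of unmet demand y_s in each scenario s.
c, q = 1.0, 5.0                         # first-stage and recourse unit costs
demand = np.array([30.0, 50.0, 80.0])   # demand scenarios (assumed data)
prob = np.array([0.3, 0.4, 0.3])        # scenario probabilities
S = len(demand)

# Variables: [x, y_1, ..., y_S]; the deterministic equivalent has one
# recourse variable and one constraint per scenario, so it grows with S.
obj = np.concatenate(([c], prob * q))
# Constraint y_s >= d_s - x  rewritten as  -x - y_s <= -d_s.
A_ub = np.hstack((-np.ones((S, 1)), -np.eye(S)))
b_ub = -demand
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (S + 1), method="highs")
print("optimal capacity:", res.x[0], "expected cost:", res.fun)
```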

Book
01 Jan 1976
TL;DR: A major revision of the second volume of a textbook on the far-ranging algorithmic methodology of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization.
Abstract: A major revision of the second volume of a textbook on the far-ranging algorithmic methodology of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization. The second volume is oriented towards mathematical analysis and computation, and treats infinite horizon problems extensively. New features of the 3rd edition are: 1) A major enlargement in size and scope: the length has increased by more than 50%, and most of the old material has been restructured and/or revised. 2) Extensive coverage (more than 100 pages) of recent research on simulation-based approximate dynamic programming (neuro-dynamic programming), which allows the practical application of dynamic programming to large and complex problems. 3) An in-depth development of the average cost problem (more than 100 pages), including a full analysis of multichain problems, and an extensive analysis of infinite-space problems. 4) An introduction to infinite state space stochastic shortest path problems. 5) Expansion of the theory and use of contraction mappings in infinite state space problems and in neuro-dynamic programming. 6) A substantive appendix on the mathematical measure-theoretic issues that must be addressed for a rigorous theory of stochastic dynamic programming. Much supplementary material can be found in the book's web page: http://www.athenasc.com/dpbook.html

606 citations
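As a small, generic illustration of the discounted infinite-horizon problems the book analyzes (not an excerpt from the book), the following sketch runs value iteration on a finite Markov decision problem; the transition and reward arrays are assumed inputs.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8, max_iter=10_000):
    """Value iteration for a finite discounted MDP (illustrative sketch).

    P : (A, S, S) array of transition probabilities P[a, s, s']
    R : (A, S) array of expected one-step rewards R[a, s]
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Bellman backup: Q[a, s] = R[a, s] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    # Greedy policy with respect to the (approximately) optimal values.
    policy = (R + gamma * (P @ V)).argmax(axis=0)
    return V, policy
```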

Journal ArticleDOI
TL;DR: The randomized stochastic gradient (RSG) method introduced in this paper is a stochastic approximation type algorithm for nonlinear (possibly nonconvex) stochastic programming problems, and it attains a nearly optimal rate of convergence when the problem is convex.
Abstract: In this paper, we introduce a new stochastic approximation type algorithm, namely, the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method possesses a nearly optimal rate of convergence if the problem is convex. We discuss a variant of the algorithm which consists of applying a postoptimization phase to evaluate a short list of solutions generated by several independent runs of the RSG method, and we show that such modification allows us to improve significantly the large-deviation properties of the algorithm. These methods are then specialized for solving a class of simulation-based optimization problems in which only stochastic zeroth-order information is available.

599 citations
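A minimal sketch of the randomized-iterate idea described above, assuming a constant stepsize and a user-supplied unbiased gradient oracle: run stochastic gradient steps and return an iterate drawn at random rather than the last one. The toy quadratic oracle is purely illustrative and not taken from the paper.

```python
import numpy as np

def rsg(grad_oracle, x0, stepsize, n_iter, rng=None):
    """Randomized stochastic gradient sketch: run fixed-stepsize SGD and
    return one iterate chosen uniformly at random as the candidate
    approximate stationary point.

    grad_oracle(x, rng) must return an unbiased stochastic gradient at x.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    iterates = []
    for _ in range(n_iter):
        iterates.append(x.copy())
        x = x - stepsize * grad_oracle(x, rng)
    # Return a randomly selected iterate rather than the final one.
    return iterates[rng.integers(n_iter)]

# Toy usage on a noisy quadratic (assumed example):
oracle = lambda x, rng: 2.0 * x + 0.1 * rng.standard_normal(x.shape)
x_hat = rsg(oracle, x0=np.ones(3), stepsize=0.05, n_iter=2000)
```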


Network Information
Related Topics (5)

Topic                    Papers   Citations   Relatedness
Optimization problem     96.4K    2.1M        86%
Scheduling (computing)   78.6K    1.3M        85%
Optimal control          68K      1.2M        84%
Supply chain             84.1K    1.7M        83%
Markov chain             51.9K    1.3M        79%
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    175
2022    423
2021    526
2020    598
2019    578
2018    532