
Showing papers on "Stochastic programming published in 1995"


Book
01 Jan 1995

2,312 citations


Journal ArticleDOI
TL;DR: This paper characterizes the desirable properties of a solution to optimization models when the problem data are described by a set of scenarios rather than by point estimates, and develops a general model formulation, called robust optimization (RO), that explicitly incorporates the conflicting objectives of solution and model robustness.
Abstract: Mathematical programming models with noisy, erroneous, or incomplete data are common in operations research applications. Difficulties with such data are typically dealt with reactively, through sensitivity analysis, or proactively, through stochastic programming formulations. In this paper, we characterize the desirable properties of a solution to models when the problem data are described by a set of scenarios rather than by point estimates. A solution to an optimization model is defined as: solution robust if it remains "close" to optimal for all scenarios of the input data, and model robust if it remains "almost" feasible for all data scenarios. We then develop a general model formulation, called robust optimization (RO), that explicitly incorporates the conflicting objectives of solution and model robustness. Robust optimization is compared with the traditional approaches of sensitivity analysis and stochastic linear programming. The classical diet problem illustrates the issues. Robust optimization models are then developed for several real-world applications: power capacity expansion; matrix balancing and image reconstruction; air-force airline scheduling; scenario immunization for financial planning; and minimum weight structural design. We also comment on the suitability of parallel and distributed computer architectures for the solution of robust optimization models.
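The solution/model robustness trade-off described above can be sketched numerically. The toy below (a single capacity decision x, scenario demands, and the weights lam and omega, all hypothetical) scores a candidate by expected cost, cost variability, and expected constraint violation:

```python
def ro_objective(x, demands, probs, c=1.0, q=1.5, lam=1.0, omega=10.0):
    """Score a candidate capacity x over demand scenarios:
    expected cost + lam * cost variance (solution robustness)
    + omega * expected shortfall (model robustness penalty)."""
    costs = [c * x + q * max(0.0, d - x) for d in demands]  # per-scenario cost
    short = [max(0.0, d - x) for d in demands]              # infeasibility
    mean = sum(p * v for p, v in zip(probs, costs))
    var = sum(p * (v - mean) ** 2 for p, v in zip(probs, costs))
    eshort = sum(p * s for p, s in zip(probs, short))
    return mean + lam * var + omega * eshort

demands, probs = [80, 100, 120], [0.3, 0.4, 0.3]
best = min(range(201), key=lambda x: ro_objective(x, demands, probs))
# the shortfall and variance terms push best up to the worst-case demand
```

Raising omega favors model robustness (feasibility across scenarios); raising lam favors solution robustness (low cost variability).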

1,793 citations


Book
01 Jan 1995

1,224 citations


Journal ArticleDOI
TL;DR: This paper demonstrates practical approaches for determining relative parameter sensitivity with respect to a model's optimal objective function value, decision variables, and other analytic functions of a solution.
Abstract: In applications of operations research models, decision makers must assess the sensitivity of outputs to imprecise values for some of the model's parameters. Existing analytic approaches for classic optimization models rely heavily on duality properties for assessing the impact of local parameter variations, parametric programming for examining systematic variations in model coefficients, or stochastic programming for ascertaining a robust solution. This paper accommodates extensive simultaneous variations in any of an operations research model's parameters. For constrained optimization models, the paper demonstrates practical approaches for determining relative parameter sensitivity with respect to a model's optimal objective function value, decision variables, and other analytic functions of a solution. Relative sensitivity is assessed by assigning a portion of variation in an output value to each parameter that is imprecisely specified. The computing steps encompass optimization, Monte Carlo sampling, ...
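The variance-attribution idea can be illustrated with a small Monte Carlo sketch; the squared-correlation attribution below assumes independent, near-linearly acting parameters and is not the paper's exact procedure:

```python
import random

def relative_sensitivity(model, samplers, n=20000, seed=0):
    """Assign each imprecise parameter a share of output variance via
    squared Pearson correlation (a sketch for independent, near-linear
    parameter effects)."""
    rng = random.Random(seed)
    draws = {k: [s(rng) for _ in range(n)] for k, s in samplers.items()}
    out = [model({k: draws[k][i] for k in draws}) for i in range(n)]
    mo = sum(out) / n
    vo = sum((y - mo) ** 2 for y in out) / n
    shares = {}
    for k, xs in draws.items():
        mx = sum(xs) / n
        cov = sum((xv - mx) * (y - mo) for xv, y in zip(xs, out)) / n
        vx = sum((xv - mx) ** 2 for xv in xs) / n
        shares[k] = cov * cov / (vx * vo)  # fraction of output variance
    return shares

# output = 3a + b with unit-variance inputs: "a" contributes 9x more variance
shares = relative_sensitivity(lambda p: 3 * p["a"] + p["b"],
                              {"a": lambda r: r.gauss(0, 1),
                               "b": lambda r: r.gauss(0, 1)})
```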

958 citations


Journal ArticleDOI
TL;DR: A novel parallel decomposition algorithm for large, multistage stochastic optimization problems that decomposes the problem into subproblems corresponding to scenarios and shows promise for solving stochastic programs that lie outside current capabilities.
Abstract: A novel parallel decomposition algorithm is developed for large, multistage stochastic optimization problems. The method decomposes the problem into subproblems that correspond to scenarios. The subproblems are modified by separable quadratic terms to coordinate the scenario solutions. Convergence of the coordination procedure is proven for linear programs. Subproblems are solved using a nonlinear interior point algorithm. The approach adjusts the degree of decomposition to fit the available hardware environment. Initial testing on a distributed network of workstations shows that an optimal number of computers depends upon the work per subproblem and its relation to the communication capacities. The algorithm has promise for solving stochastic programs that lie outside current capabilities.
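The coordination step, scenario subproblems augmented with separable quadratic terms, can be sketched on a toy problem where scenario s wants to minimize (x - t_s)^2 but all scenarios must agree on x. The progressive-hedging-style loop below is illustrative, not the paper's algorithm:

```python
def progressive_hedging(targets, probs, rho=1.0, iters=200):
    """Scenario decomposition sketch: each subproblem minimizes
    (x - t_s)^2 + w_s * x + (rho/2) * (x - xbar)^2 in closed form;
    multipliers w_s enforce the consensus x_s = xbar."""
    x = list(targets)                 # per-scenario solutions
    w = [0.0] * len(targets)          # multipliers on nonanticipativity
    xbar = sum(p * xi for p, xi in zip(probs, x))
    for _ in range(iters):
        # stationarity: 2(x - t) + w + rho(x - xbar) = 0
        x = [(2 * t - wi + rho * xbar) / (2 + rho)
             for t, wi in zip(targets, w)]
        xbar = sum(p * xi for p, xi in zip(probs, x))
        w = [wi + rho * (xi - xbar) for wi, xi in zip(w, x)]
    return xbar

# converges to the probability-weighted mean of the scenario targets
consensus = progressive_hedging([1.0, 2.0, 6.0], [0.5, 0.25, 0.25])
```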

397 citations


Book ChapterDOI
01 Jan 1995
TL;DR: In this paper, the authors present a general framework for formulating and solving stochastic, dynamic network problems, including shortest paths, traveling salesman-type problems and vehicle routing.
Abstract: This chapter discusses stochastic and dynamic networks and routing, covering a priori optimization in routing, shortest paths, traveling salesman-type problems, and vehicle routing. These problems arise when decisions must be made before random outcomes (typically customer demands) are known. The chapter covers dynamic models of problems arising in transportation and logistics, and includes a discussion of important modeling issues, as well as a summary of dynamic models for a number of key problem areas. Dynamic networks provide an important foundation for addressing many problems in logistics planning. Algorithms that have been specialized for dynamic networks are presented. Results for solving infinite networks, including both exact results for stationary infinite networks and model truncation techniques, are briefly discussed. The chapter presents basic results and concepts from the field of stochastic programming, oriented toward their application to network problems. This discussion provides a general framework for formulating and solving stochastic, dynamic network problems, which is used to present two stochastic programming models.

319 citations


Journal ArticleDOI
TL;DR: The stochastic vehicle routing problem, where each customer has a known probability of presence and a random demand, is considered and is solved for the first time to optimality by means of an integer L-shaped method.
Abstract: In this article, the following stochastic vehicle routing problem is considered. Each customer has a known probability of presence and a random demand. This problem arises in several contexts, e.g., in the design of less-than-truckload collection routes. Because of uncertainty, it may not be possible to follow vehicle routes as planned. Using a stochastic programming framework, the problem is solved in two stages. In the first stage, planned collection routes are designed. In the second stage, when the set of present customers is known, these routes are followed as planned by skipping the absent customers. Whenever the vehicle capacity is attained or exceeded, the vehicle returns to the depot and resumes its collections along the planned route. This generates a penalty. The problem is to design a first-stage solution that minimizes the expected total cost of the second-stage solution. This is formulated as a stochastic integer program and solved for the first time to optimality by means of an integer L-shaped method.
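The second-stage recourse described above lends itself to Monte Carlo evaluation. The sketch below is a hypothetical helper (the restocking trip is charged as a simple there-and-back detour) that estimates the expected cost of one planned route:

```python
import math
import random

def expected_route_cost(depot, stops, p_present, demand_rv, capacity,
                        n=4000, seed=2):
    """Estimate the expected second-stage cost of a planned route:
    absent customers are skipped; when accumulated demand reaches the
    capacity, the vehicle unloads at the depot and resumes the route."""
    rng = random.Random(seed)
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    total = 0.0
    for _ in range(n):
        pos, load, cost = depot, 0.0, 0.0
        for c in stops:
            if rng.random() >= p_present:
                continue                      # absent customer: skip
            cost += dist(pos, c)
            load += demand_rv(rng)
            pos = c
            if load >= capacity:              # capacity reached: unload
                cost += 2 * dist(pos, depot)  # depot trip and back
                load = 0.0
        cost += dist(pos, depot)              # return to depot
        total += cost
    return total / n
```

With presence probability 1 and zero demand this reduces to the deterministic tour length, which makes the simulator easy to sanity-check.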

284 citations


Journal ArticleDOI
TL;DR: Heuristics, based on the properties of the optimal solutions, are developed to find "good" solutions for the general problem, and upper bounds are derived which are useful when evaluating the performance of the heuristics.
Abstract: In this paper we study optimal strategies for renting hotel rooms when there is a stochastic and dynamic arrival of customers from different market segments. We formulate the problem as a stochastic and dynamic programming model and characterize the optimal policies as functions of the capacity and the time left until the end of the planning horizon. We consider three features that enrich the problem: we make no assumptions concerning the particular order between the arrivals of different classes of customers; we allow for multiple types of rooms and downgrading; and we consider requests for multiple nights. We also consider implementations of the optimal policy. The properties we derive for the optimal solution significantly reduce the computational effort needed to solve the problem, yet in the multiple product and/or multiple night case this is often not enough. Therefore, heuristics, based on the properties of the optimal solutions, are developed to find "good" solutions for the general problem. We also derive upper bounds which are useful when evaluating the performance of the heuristics. Computational experiments show a satisfactory performance of the heuristics in a variety of scenarios using real data from a medium size hotel.
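The accept/reject structure of the optimal policy can be sketched for a much simpler setting: one room type and at most one single-night request per period. The DP below accepts a request iff its fare covers the marginal value of a room; all names and parameters are illustrative, not the paper's model:

```python
from functools import lru_cache

def booking_policy(capacity, horizon, fares, probs):
    """Finite-horizon DP sketch: V(c, t) is the expected revenue with c
    rooms left and periods t..horizon-1 to go, with class k arriving in
    a period with probability probs[k] and paying fares[k]."""
    @lru_cache(maxsize=None)
    def V(c, t):
        if t == horizon or c == 0:
            return 0.0
        cont = V(c, t + 1)
        marginal = cont - V(c - 1, t + 1)   # value of one room
        # accept a class-k request iff fares[k] beats the marginal value
        return cont + sum(p * max(0.0, f - marginal)
                          for f, p in zip(fares, probs))
    def accept(fare, c, t):
        return fare >= V(c, t + 1) - V(c - 1, t + 1)
    return V, accept
```

With one room, two periods, and two classes (fares 100 and 40, each arriving with probability 0.5), the policy rejects the low fare early because the room's marginal value is 70.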

242 citations


Journal ArticleDOI
TL;DR: This paper presents the theoretical development of a novel decomposition-based approach for the optimization of design models involving stochastic parameters, handling linear and convex nonlinear models with uncertainty described by arbitrary probability distribution functions.

201 citations


Book ChapterDOI
01 Jan 1995
TL;DR: In the strict sense, a stochastic programming model specifies the assumptions made concerning the system in mathematical terms and identifies system parameters with mathematical objects; a problem is then formulated and solved, and the result is used for descriptive or operative purposes.
Abstract: When formulating a stochastic programming problem, we usually start from a deterministic problem that we call the underlying deterministic problem. Then, observing that some of the parameters are random, we formulate another problem, the stochastic programming problem, by taking into account the probability distribution of the random elements in the underlying problem. When a decision can, or has to, be made in the presence of randomness in one single step, i.e., without waiting for the occurrence of any event or the realization of some random variable(s), we say that the stochastic programming model is static. The two words, model and problem, are used as synonyms. In the strict sense, the model specifies the assumptions made concerning the system in mathematical terms and identifies system parameters with mathematical objects. Having these, we formulate a problem to be solved and use the obtained result for descriptive or operative purposes.

170 citations


Journal ArticleDOI
TL;DR: In this article, reservoir operating policies for the Shasta-Trinity system are derived using stochastic dynamic programming (SDP) with different hydrologic state variables; the authors compare how well SDP models predict their policies will perform and how well those policies actually perform when simulated.
Abstract: Reservoir operating policies can be derived using stochastic dynamic programming (SDP) with different hydrologic state variables. This paper considers several choices for such hydrologic state variables for SDP models of the Shasta-Trinity system in northern California, for three different benefit functions. We compare how well SDP models predict their policies will perform, as well as how well these policies performed when simulated. For a benefit function stressing energy maximization, all policies did nearly as well, and the choice of the hydrologic state variable mattered very little. For a benefit function with larger water and firm power targets and severe penalties on corresponding shortages, predicted performance significantly overestimated simulated performance, and policies that employed more complete hydrologic information performed significantly better.
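A minimal SDP of this kind, with storage as the only state variable and an independent inflow distribution, can be sketched as a backward recursion. This toy version is illustrative only; the paper's models also carry hydrologic information in the state:

```python
def reservoir_sdp(capacity, inflows, probs, benefit, horizon):
    """Backward-recursion SDP for a single reservoir with integer
    storage states: choose release r <= storage s to maximize
    benefit(r) + E[V(next storage)], spilling water above capacity."""
    V = [0.0] * (capacity + 1)
    policy = []
    for _ in range(horizon):
        newV, pol = [], []
        for s in range(capacity + 1):
            best, best_r = None, 0
            for r in range(s + 1):                   # release decision
                val = benefit(r)
                for q, p in zip(inflows, probs):
                    nxt = min(capacity, s - r + q)   # spill above capacity
                    val += p * V[nxt]
                if best is None or val > best:
                    best, best_r = val, r
            newV.append(best)
            pol.append(best_r)
        V, policy = newV, pol
    return V, policy
```

With a linear benefit and a one-period horizon the policy releases everything, a quick correctness check; concave benefits over longer horizons produce hedging releases.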

Journal ArticleDOI
TL;DR: It is shown that convergence properties of the decomposition method are heavily dependent on sparsity of the linking constraints and application to large-scale linear programming and stochastic programming is discussed.
Abstract: A decomposition method for large-scale convex optimization problems with block-angular structure and many linking constraints is analysed. The method is based on a separable approximation of the augmented Lagrangian function. Weak global convergence of the method is proved and speed of convergence analysed. It is shown that convergence properties of the method are heavily dependent on sparsity of the linking constraints. Application to large-scale linear programming and stochastic programming is discussed.

Journal ArticleDOI
TL;DR: A linear time on-line algorithm is proposed for which the expected difference between the optimum and the approximate solution value is O(log^{3/2} n); an Ω(1) lower bound on the expected difference between the optimum and the solution found by any on-line algorithm is also shown to hold.
Abstract: Different classes of on-line algorithms are developed and analyzed for the solution of {0, 1} and relaxed stochastic knapsack problems, in which both profit and size coefficients are random variables. In particular, a linear time on-line algorithm is proposed for which the expected difference between the optimum and the approximate solution value is O(log^{3/2} n). An Ω(1) lower bound on the expected difference between the optimum and the solution found by any on-line algorithm is also shown to hold.
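The on-line setting can be illustrated with a simple one-pass density-threshold rule, a sketch of the general idea rather than the specific algorithm analysed in the paper:

```python
def online_knapsack(items, capacity, threshold):
    """One-pass {0,1} knapsack rule: accept an arriving (profit, size)
    item iff its profit density meets the threshold and it still fits.
    O(n) time, decisions irrevocable -- the on-line setting."""
    value, used, taken = 0.0, 0.0, []
    for profit, size in items:
        if profit / size >= threshold and used + size <= capacity:
            value += profit
            used += size
            taken.append((profit, size))
    return value, taken
```

Choosing the threshold well (e.g., from the known coefficient distributions) is exactly where the analysis of such algorithms lives.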

Journal ArticleDOI
TL;DR: A review of the methods for global optimization reveals that most methods have been developed for unconstrained problems and need to be extended to general constrained problems because most of the engineering applications have constraints.
Abstract: A review of the methods for global optimization reveals that most methods have been developed for unconstrained problems. They need to be extended to general constrained problems because most of the engineering applications have constraints. Some of the methods can be easily extended while others need further work. It is also possible to transform a constrained problem to an unconstrained one by using penalty or augmented Lagrangian methods and solve the problem that way. Some of the global optimization methods find all the local minimum points while others find only a few of them. In any case, all the methods require a very large number of calculations. Therefore, the computational effort to obtain a global solution is generally substantial. The methods for global optimization can be divided into two broad categories: deterministic and stochastic. Some deterministic methods are based on certain assumptions on the cost function that are not easy to check. These methods are not very useful since they are not applicable to general problems. Other deterministic methods are based on certain heuristics which may not lead to the true global solution. Several stochastic methods have been developed as some variation of the pure random search. Some methods are useful for only discrete optimization problems while others can be used for both discrete and continuous problems. Main characteristics of each method are identified and discussed. The selection of a method for a particular application depends on several attributes, such as types of design variables, whether or not all local minima are desired, and availability of gradients of all the functions.
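A common stochastic strategy of the kind surveyed, multistart as a variation of pure random search, can be sketched as random initial points followed by local descent, keeping the best minimum found. The function and parameters below are illustrative:

```python
import random

def multistart_descent(f, grad, bounds, starts=30, step=0.01,
                       iters=500, seed=3):
    """Multistart global optimization sketch: random starting points in
    the box, plain gradient descent from each, best local minimum kept."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(starts):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        for _ in range(iters):
            x = [xi - step * gi for xi, gi in zip(x, grad(x))]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# two local minima near x = -1 and x = +1; the tilt makes -1 global
f = lambda x: (x[0] ** 2 - 1) ** 2 + 0.1 * x[0]
grad = lambda x: [4 * x[0] * (x[0] ** 2 - 1) + 0.1]
xmin, fmin = multistart_descent(f, grad, [(-2.0, 2.0)])
```

As the survey notes, such methods trade a large number of function evaluations for the chance of escaping a single basin of attraction.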

Journal ArticleDOI
TL;DR: In this article, a multi-period, dynamic portfolio optimization model is developed to address the problem faced by managers of portfolios of the new fixed-income securities, which carry various forms of uncertainty in addition to the usual interest rate changes.

Book
30 Sep 1995
TL;DR: This book develops the theory of stochastic differential equations and applies it to the analysis, estimation, and optimal control of linear stochastic control systems in discrete and continuous time.
Abstract: Preface Introduction From Deterministic to Stochastic Linear Control Systems Text Organization and Reading Suggestion MATHEMATICAL PRELIMINARIES Probability and Random Processes Probability, Measure, and Integration Convergence of Random Sequences Random Vectors and Conditional Expectations Second Order Processes and Calculus in Mean Square Exercises References Ito Integrals and Stochastic Differential Equations Markov Processes Orthogonal Increments Processes and the Wiener-Levy Process Ito Integrals and Stochastic Differential Equations Exercises References LINEAR STOCHASTIC CONTROL SYSTEMS: THE DISCRETE-TIME CASE Analysis of Discrete-Time Linear Stochastic Control Systems Analysis of Discrete-Time Causal LTI Systems Analysis of Causal LTI Stochastic Control Systems Analysis of the "State" Description of Controlled Markov Chains State Space Systems and ARMA Models Mathematical Modeling and Applications Exercises References Optimal Estimation for Discrete-Time Linear Stochastic Systems Optimal State Estimation Recursive Optimal Estimation and Kalman Filtering Modified Kalman Filtering Algorithms Exercises References Optimal Control of Discrete-Time Linear Stochastic Systems Introduction Dynamic Programming and LQC Control Problems LQC Optimal Control Problems Adaptive Stochastic Control Exercises References LINEAR STOCHASTIC CONTROL SYSTEMS: THE CONTINUOUS-TIME CASE Continuous-Time Linear Stochastic Control Systems Analysis of Continuous-Time Causal LTI Systems Further Discussion of Markov Processes Dynamic Programming and LQ Control Problems Exercises References Optimal Control of Continuous-Time Linear Stochastic Systems The Continuous-Time LQ Stochastic Control Problem Stochastic Dynamic Programming Innovation Processes and the Kalman-Bucy Filter Optimal Prediction and Smoothing The Separation Principle Exercises References Stability Analysis of Stochastic Differential Equations Stability of Deterministic Systems Stability of Stochastic Systems 
Stability of Moments Exercises References Appendix Fundamental Real and Functional Analysis Fundamental Matrix Theory and Vector Calculations Martingales References Index

Journal ArticleDOI
TL;DR: In this paper, a new methodology for maintenance scheduling that takes into account inter-area transfer limitations and stochastic reliability constraints is presented, where the optimization model is based on the Benders decomposition technique and the objective is to determine a minimum cost maintenance schedule, subject to technological and system reliability constraints.
Abstract: This paper presents a new methodology for maintenance scheduling that takes into account inter-area transfer limitations and stochastic reliability constraints. The optimization model is based on the Benders decomposition technique. The objective of this model is to determine a minimum cost maintenance schedule, subject to technological and system reliability constraints. The model may also be used to assess the adequacy of existing transmission limits under a given set of planned outages of generating units. Case studies with a realistic three-area power system are presented and discussed.

Journal ArticleDOI
TL;DR: Conclusions are presented on the stability of optimal values and optimal solutions of the two-stage stochastic program when the underlying probability measure is subjected to perturbations.
Abstract: For two-stage stochastic programs with integrality constraints in the second stage, we study continuity properties of the expected recourse as a function both of the first-stage policy and the integrating probability measure.

Journal ArticleDOI
TL;DR: In this article, the dynamic programming principle for a multidimensional singular stochastic control problem is established; assuming Lipschitz continuity of the data, the value function is shown to be continuous and the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation.
Abstract: The dynamic programming principle for a multidimensional singular stochastic control problem is established in this paper. When assuming Lipschitz continuity on the data, it is shown that the value function is continuous and is the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation.

Journal ArticleDOI
TL;DR: The method of cutting planes from analytic centers applied to similar formulations is investigated, finding that new cutting planes will be generated not from the optimum of the linear programming relaxation, but from the analytic center of the set of localization.
Abstract: The stochastic linear programming problem with recourse has a dual block-angular structure. It can thus be handled by Benders' decomposition or by Kelley's method of cutting planes; equivalently, the dual problem has a primal block-angular structure and can be handled by Dantzig-Wolfe decomposition: the two approaches are in fact identical by duality. Here we investigate the method of cutting planes from analytic centers applied to similar formulations. The only significant difference from the aforementioned methods is that new cutting planes (or columns, by duality) are generated not from the optimum of the linear programming relaxation, but from the analytic center of the localization set.
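Kelley's method, which the analytic-center approach modifies, can be sketched in one dimension: accumulate gradient cuts and re-query the minimizer of the piecewise-linear model. The analytic-center variant would instead query the center of the localization set; this code is illustrative only:

```python
def kelley(f, df, lo, hi, iters=40):
    """Kelley's cutting-plane method on a 1-D convex f over [lo, hi]:
    each cut is the tangent line f(x0) + f'(x0) * (x - x0); the next
    query point minimizes the piecewise-linear lower model max of cuts."""
    cuts = []                       # (slope, intercept) pairs
    x = lo
    for _ in range(iters):
        fx, g = f(x), df(x)
        cuts.append((g, fx - g * x))
        model = lambda z: max(a * z + b for a, b in cuts)
        # candidate minimizers: interval ends and pairwise cut crossings
        cand = [lo, hi]
        for a1, b1 in cuts:
            for a2, b2 in cuts:
                if a1 != a2:
                    z = (b2 - b1) / (a1 - a2)
                    if lo <= z <= hi:
                        cand.append(z)
        x = min(cand, key=model)
    return x
```

In the recourse setting, each cut would come from solving the second-stage subproblems at the current first-stage point rather than from a gradient oracle.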

Journal ArticleDOI
TL;DR: In this paper, the authors considered nonstationary stochastic periodic review inventory problems with proportional costs in industrial settings with seasonal patterns, trends, business cycles, and limited life items.
Abstract: Nonstationary stochastic periodic review inventory problems with proportional costs occur in a number of industrial settings with seasonal patterns, trends, business cycles, and limited life items. Myopic policies for such problems order as if the salvage value in the current period for ending inventory were the full purchase price, so that information about the future is not needed. They have been shown in the literature to be optimal when demand "is increasing over time," and to provide upper bounds for the stationary finite horizon problem and in some other situations. Some results are also known, given special salvaging assumptions, about lower bounds on the optimal policy which are near-myopic. Here analogous but stronger bounds are derived for the general finite horizon problem, without such special assumptions. The best upper bound is an extension of the heuristic used by industry for some years for end-of-season (EOS) problems; the lower bound is an extension of earlier analytic methods. Four heuristics were tested against the optimum obtained by stochastic dynamic programming for 969 problems. The simplest heuristic is the myopic heuristic itself: it is good especially for moderately varying problems without heavy end-of-season salvage costs and averages only 2.75% in cost over the optimum. However, the best of the heuristics exceeds the optimum in cost by an average of only 0.02%, at about 0.5% of the computational cost of dynamic programming.
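Each period of a myopic policy reduces to a single-period, newsvendor-style problem; the critical-fractile order-up-to computation below is a sketch of that building block, not the paper's bounds:

```python
def base_stock_level(demand_pmf, underage, overage):
    """Single-period order-up-to level for a discrete demand pmf:
    the smallest S with F(S) >= cu / (cu + co), the critical fractile.
    A myopic policy solves one such problem each period."""
    target = underage / (underage + overage)
    cum = 0.0
    for d, p in sorted(demand_pmf.items()):
        cum += p
        if cum >= target:
            return d
    return max(demand_pmf)
```

Under nonstationary demand the pmf (and hence the level) changes period by period, which is exactly what makes the myopic shortcut attractive.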

Journal ArticleDOI
TL;DR: The basic results show that replacing the original distribution by an empirical distribution provides an effective tool for solving stochastic programming problems.
Abstract: Several exponential bounds are derived by means of the theory of large deviations for the convergence of approximate solutions of stochastic optimization problems. The basic results show that the solutions obtained by replacing the original distribution by an empirical distribution provide an effective tool for solving stochastic programming problems.
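Replacing the original distribution by an empirical one is the sample average approximation idea; for a newsvendor the resulting deterministic problem is solved by an empirical quantile, so convergence is easy to see. The example values below are hypothetical:

```python
import math
import random

def saa_newsvendor(sample, price, cost):
    """Sample average approximation of the newsvendor: solve the problem
    under the empirical distribution of the sample, whose exact solution
    is the empirical quantile at the critical fractile (price-cost)/price."""
    q = (price - cost) / price
    s = sorted(sample)
    return s[max(0, math.ceil(q * len(s)) - 1)]  # smallest x with F_n(x) >= q

# with many samples the SAA solution approaches the true 0.9-quantile (90)
rng = random.Random(0)
demand_sample = [rng.uniform(0, 100) for _ in range(20000)]
order = saa_newsvendor(demand_sample, price=10.0, cost=1.0)
```

The exponential bounds in the paper quantify how fast such empirical solutions concentrate around the true optimum as the sample grows.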

Journal ArticleDOI
TL;DR: In this article, the authors estimate the effect of financial incentives for reenlistment on military retention rates using a stochastic dynamic programming model and show that the computational burden of the model is relatively low even when estimated on panel data with unobserved heterogeneity.
Abstract: We estimate the effect of financial incentives for reenlistment on military retention rates using a stochastic dynamic programming model. We show that the computational burden of the model is relatively low even when estimated on panel data with unobserved heterogeneity. The estimates of the model show strong effects of military compensation, especially of retirement pay, on retention rates. We also compare our model with simpler-to-compute models and find that all give approximately the same fit but that our dynamic programming model gives more plausible predictions of policy measures that affect military and civilian compensation at future dates.

Journal ArticleDOI
TL;DR: A deterministic optimal control problem is obtained that is equivalent to the stochastic production planning problem under consideration, and the optimal feedback control policy is derived in terms of the directional derivatives of the value function.

Journal Article
TL;DR: The traditional deterministic optimization models are limited in practical applications because the model's parameters (future demands, interest rates, water inflows, resources, etc.) are not completely known when a decision is needed.
Abstract: Mathematical modeling of economic, ecological and other complex systems with the goal to analyze them and to find optimal decisions has been studied for many years. The challenge in running market economies, in realistic approaches to environmental protection, etc., is that decisions must be made under uncertainty. Therefore the traditional deterministic optimization models are limited in practical applications because the model's parameters (future demands, interest rates, water inflows, resources, etc.) are not completely known when some decision is needed. A typical approach of substituting expected values for all random parameters can lead to inferior solutions that discredit both the model designer and the use of optimization methods. Moreover, in controlling or analyzing complex systems, various levels of uncertainty have to be taken into account: besides requirements for proper treatment of nonhomogeneity of raw input materials, volatility of prices, demands or water inflows, one must cope with the future development of factors essential for running the system, such as interest rates or technological innovations, and hedge against legislative changes and complete or partial changes of economic and other

Journal ArticleDOI
TL;DR: Criteria for selecting appropriate probability distribution functions for stochastic variables are reviewed, the most important being any a priori knowledge about the nature of a variable and the Central Limit Theorem of probability theory applied to sums and products of stochastic variables.
Abstract: One of the main steps in an uncertainty analysis is the selection of appropriate probability distribution functions for all stochastic variables. In this paper, criteria for such selections are reviewed, the most important among them being any a priori knowledge about the nature of a stochastic variable, and the Central Limit Theorem of probability theory applied to sums and products of stochastic variables. In applications of these criteria, it is shown that many of the popular selections, such as the uniform distribution for a poorly known variable, require far more knowledge than is actually available. However, the knowledge available is usually sufficient to make use of other, more appropriate distributions. Next, functions of stochastic variables and the selection of probability distributions for their arguments as well as the use of different methods of error propagation through these functions are discussed. From these evaluations, priorities can be assigned to determine which of the stochastic variables in a function need the most care in selecting the type of distribution and its parameters. Finally, a method is proposed to assist in the assignment of an appropriate distribution which is commensurate with the total information on a particular stochastic variable, and is based on the scientific method. Two examples are given to elucidate the method for cases of little or almost no information.

01 Jan 1995
TL;DR: In this paper, the authors consider two-stage stochastic linear programming models with integer recourse and show that these models are at the intersection of two different branches of mathematical programming.
Abstract: In this thesis we consider two-stage stochastic linear programming models with integer recourse. Such models are at the intersection of two different branches of mathematical programming. On the one hand some of the model parameters are random, which places the problem in the field of stochastic programming. On the other hand some decision variables are required to be integers, so that these models also belong to the field of integer programming.

Journal ArticleDOI
TL;DR: In this paper, an approximation formula for constructing two linear objective functions based on the nonlinear objective function of the equivalent deterministic form (EDF) of the stochastic programming model is presented.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of finding an optimal strategy for an organism in a large population when the environment fluctuates from year to year and cannot be predicted beforehand.
Abstract: We consider the problem of finding an optimal strategy for an organism in a large population when the environment fluctuates from year to year and cannot be predicted beforehand. In fluctuating environments, geometric mean fitness is the appropriate measure and individual optimization fails. Consequently, optimal strategies cannot be found by stochastic dynamic programming alone. We consider a simplified model in which each year is divided into two non-overlapping time intervals. In the first interval, environmental conditions are the same each year; in the second, they fluctuate from year to year. During the first interval, which ends at time of year T, population members do not reproduce. The state and time dependent strategy employed during the interval determines the probability of survival till T and the probability distribution of possible states at T given survival. In the interval following T, population members reproduce. The state of an individual at T and the ensuing environmental conditions determine the number of surviving descendants left by the individual next year. In this paper, we give a general characterization of optimal dynamic strategies over the first time interval. We show that an optimal strategy is the equilibrium solution of a (non-fluctuating environment) dynamic game. As a consequence, the behaviour of an optimal individual over the first time interval maximizes the expected value of a reward R* obtained at the end of the interval. However, R* cannot be specified in advance and can only be found once an optimal strategy has been determined. We illustrate this procedure with an example based on the foraging decisions of a parasitoid.
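The role of geometric mean fitness is easy to demonstrate: a strategy with a higher arithmetic mean number of offspring can still have long-run growth rate 1, while a steadier strategy grows. The numbers below are illustrative:

```python
import math

def geometric_mean_fitness(offspring_by_year):
    """Long-run per-year growth factor over a cycle of environments:
    the geometric, not arithmetic, mean of yearly offspring numbers."""
    logs = [math.log(w) for w in offspring_by_year]
    return math.exp(sum(logs) / len(logs))

# a risky strategy with the higher arithmetic mean merely breaks even:
risky = [4.0, 0.25]   # arithmetic mean 2.125, geometric mean 1.0
safe = [1.5, 1.5]     # arithmetic mean 1.5,  geometric mean 1.5
```

This is why, as the abstract notes, individual expected-value optimization (and hence stochastic dynamic programming alone) fails in fluctuating environments.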

Book
30 Nov 1995
TL;DR: This book discusses dynamic programming for scheduling a batch processor, periodic scheduling problems, a multi-objective approach to machine loading, evaluative models, and more.
Abstract: Manufacturing Systems Modeling The Nature of Mathematical Models of Manufacturing Systems Dynamic Models of Manufacturing Systems An Overview of Management Problems in Manufacturing Systems Plan of the Book For Further Reading Optimization Models and Model Solving Classes of Optimization Models An Overview of Optimization Methods Complexity of Optimization Problems Good and Bad Model Formulations Developing Heuristics from Mathematical Models The Theory of NP-Completeness An Outlook on Multi-Objective Optimization For Further Reading Discrete Time Models Aggregate Production Planning The Capacitated Lot-Sizing Problem The Discrete Lot-Sizing and Scheduling Problem Continuous Flow Models for Production Scheduling Discussion: The Flexibility of Discrete Time Models Strong Formulations for Lot-Sizing Problems The Single Item DLSP Problem: A Dynamic Programming Approach For Further Reading DEDS Models for Scheduling Problems Classical Machine Scheduling Theory Classification of Scheduling Problems Polynomial Complexity Scheduling Problems Dynamic Programming Approaches A Modeling Framework Based on Node Potential Assignment MILP Models for Scheduling Problems Branch and Bound Methods Heuristic Scheduling Methods Discussion Dynamic Programming for Scheduling a Batch Processor Periodic Scheduling Problems A Multi-Objective Approach to Machine Loading Evaluative Models Introduction to Queueing Models Queueing Networks Computational Methods for Closed Networks Approximate Analysis of Non-Product Form Queueing Networks Petri Net Models Product Forms and Local Balance Equations Putting Things Together Integrating Optimization Methods and Evaluative Models Optimal Control of Failure-Prone Manufacturing Systems Model Management and Modeling Languages Appendices Fundamentals of Mathematical Programming Linear Programming and Network Optimization Enumerative and Heuristic Methods for Discrete Optimization Dynamic Programming Stochastic Modeling Problems Bibliography 
Index Each Chapter Also Includes a "For Further Reading" Section.