
Showing papers on "Stochastic programming published in 1979"


Journal ArticleDOI

210 citations


Journal ArticleDOI
TL;DR: In this paper, the stochastic regulator problems and optimal stationary control as well as stability are studied for infinite dimensional systems with state and control dependent noise, and the model is described by a semigroup and Wiener processes in Hilbert space.
Abstract: In this paper stochastic regulator problems and optimal stationary control as well as stability are studied for infinite dimensional systems with state and control dependent noise. The stochastic model is described by a semigroup and Wiener processes in Hilbert space and Wonham’s approach using differential generators and dynamic programming is extended to infinite dimensions.

100 citations



Journal ArticleDOI
TL;DR: Monte Carlo methods utilizing a new network concept, Uniformly Directed Cutsets (UDCs), are presented for analyzing directed, acyclic networks with probabilistic arc durations, providing estimates for project completion time distributions, criticality indices, minimum time distributions and path optimality indices.

97 citations


Book ChapterDOI
01 Jan 1979
TL;DR: In this paper, a general problem involving the single-pass, single-point turning operation is introduced and a multiple criteria machining problem is formulated and solved using goal programming techniques.
Abstract: In this paper, a general problem involving the single-pass, single-point turning operation is introduced. Different mathematical models and solution approaches for solving various single objective problems are described. The mathematical properties of the minimization of cost and maximization of production rate solutions are discussed in detail. The solution approaches used are differential calculus, linear programming, and geometric programming. Finally, a multiple criteria machining problem is formulated and solved using goal programming techniques.

49 citations


Journal ArticleDOI
TL;DR: In this article, a generalization of the stochastic multilocation problem of inventory theory is considered and a qualitative analysis of the problem is presented and it is shown that optimal policies have a certain geometric form.
Abstract: This paper examines a convex programming problem that arises in several contexts. In particular, the formulation was motivated by a generalization of the stochastic multilocation problem of inventory theory. The formulation also subsumes some “active” models of stochastic programming. A qualitative analysis of the problem is presented and it is shown that optimal policies have a certain geometric form. Properties of the optimal policy and of the optimal value function are described.

36 citations



Journal ArticleDOI
TL;DR: In this paper, the authors present an application by giving an optimal control method for the regulation of the water level of Lake Balaton in Hungary, where one decision period is one month.

30 citations


Journal ArticleDOI
TL;DR: In this paper, stochastic dynamic programming (DP) is used to find the operating policy with the least expected steady state cost for a water supply system consisting of a reservoir and an alternative source.
Abstract: Stochastic dynamic programming (DP) is used to find the operating policy with the least expected steady state cost for a water supply system consisting of a reservoir and an alternative source. The set of possible decisions consists of a number of release rules, each expressing release as a function of storage, rather than a number of discrete releases, as in the conventional DP approach. A flexible procedure is developed which permits inflow to be described by piecewise linear probability density functions, and removes the constraint that inflow and release must be multiples of the discrete unit of storage. The techniques are applied to a reservoir-river system, and through simulation, the results are compared with the solution found by conventional DP.

27 citations
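The rule-based decision set described above can be sketched with a small value-iteration example. All numbers here are hypothetical and not taken from the paper: decisions are release rules (each a function of storage), not discrete releases, and a shortfall is met at a cost from the alternative source.

```python
# Minimal sketch (hypothetical numbers): stochastic DP over a discretized
# reservoir where each decision is a release *rule*, a function of storage.
CAP = 4                                  # reservoir capacity (storage units)
STORAGE = range(CAP + 1)                 # feasible storage states
INFLOW = {0: 0.2, 1: 0.5, 2: 0.3}        # discrete inflow pmf
DEMAND = 2                               # target supply per period

RULES = [
    lambda s: min(s, 1),                 # conservative rule: release at most 1
    lambda s: min(s, DEMAND),            # aggressive rule: release up to demand
]

def shortage_cost(release):
    """Cost of drawing the shortfall from the alternative source."""
    return 10.0 * max(0, DEMAND - release)

def value_iteration(beta=0.9, iters=500):
    """For each storage state, pick the rule minimizing expected cost."""
    v = {s: 0.0 for s in STORAGE}
    for _ in range(iters):
        new_v, policy = {}, {}
        for s in STORAGE:
            best = None
            for k, rule in enumerate(RULES):
                r = rule(s)
                # expectation over random inflow; spill over capacity is lost
                future = sum(p * v[min(s - r + q, CAP)]
                             for q, p in INFLOW.items())
                total = shortage_cost(r) + beta * future
                if best is None or total < best[0]:
                    best = (total, k)
            new_v[s], policy[s] = best
        v = new_v
    return v, policy

values, policy = value_iteration()
```

With these toy numbers the aggressive rule is selected whenever storage can cover demand, and the cost-to-go decreases as storage rises.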


Journal ArticleDOI
TL;DR: In this article, a system for planning facilities and resources in distribution networks is presented, which is capable of evaluating alternative management strategies in the face of fluctuating costs and population movements over an extended time horizon.
Abstract: A system for planning facilities and resources in distribution networks is presented. It was applied at Cahill May Roberts, one of Ireland's largest pharmaceutical companies, having a turnover of $35 million and employing over 300 people. The planning system, a three-stage stochastic programming model, is capable of evaluating alternative management strategies in the face of fluctuating costs and population movements over an extended time horizon. Apart from defining unique territories for the company's distribution centres and optimal customer servicing schedules within those territories, the planning system evaluated alternative locations for distribution centres under the twin uncertainties of cost and demand. The planning model contained approximately 2,000 variables and 300 equations, whilst the undecomposed route planning model encompassed approximately 300 towns/villages and 1,200 customers. Both models therefore required substantial data inputs as successive management objectives and assumptions were examined. To facilitate this data preparation and to achieve solutions in a realistic time, a special matrix generator was developed which had the capability of producing data in the required format and inputting it directly into both planning models. The generator controls the entire optimizing process and permits a three hundred-fold reduction in data preparation. In effect the generator permits an optimization procedure to be used in a simulation mode. The model achieved savings in delivery and transport costs of 23.3% and 20% respectively, and increased customer service levels by 60%.

23 citations


Journal ArticleDOI
TL;DR: A new model structure, two-stage linear goal programming, is developed and compared to the other structures and found to provide additional useful information from a decision-making perspective.
Abstract: Recently a number of mathematical programming models have been developed to assist banks in their portfolio (balance sheet) management decision making. Generally, the model structures used may be classified as either linear, linear goal, or two-stage linear programming. Of these, linear programming models are the most common. The purpose of this paper is to discuss the optimal bank portfolio management solutions produced by each of the above programming structures. In addition, a new model structure, two-stage linear goal programming, is developed and compared to the other structures. From a decision-making perspective, this new model structure is found to provide additional useful information.
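The goal-programming structure referred to above can be illustrated with a toy balance-sheet example. Everything here is hypothetical (single-stage, two assets, two goals), and a coarse grid search stands in for an LP solver; the defining feature is that goals are soft, so weighted under-achievement deviations are minimized rather than imposed as hard constraints. A two-stage version would add recourse variables for each deposit-flow scenario.

```python
# Illustrative single-stage goal program (hypothetical numbers): allocate a
# budget between loans and securities, minimizing weighted deviations from
# a profit goal and a liquidity goal.
BUDGET = 100.0               # funds to allocate between loans and securities
PROFIT_GOAL = 7.0            # target portfolio return
LIQUIDITY_GOAL = 30.0        # target securities holding
W_PROFIT, W_LIQ = 2.0, 1.0   # priority weights on the two goals

def weighted_deviation(loans, securities):
    profit = 0.08 * loans + 0.05 * securities
    d_profit = max(0.0, PROFIT_GOAL - profit)      # profit under-achievement
    d_liq = max(0.0, LIQUIDITY_GOAL - securities)  # liquidity shortfall
    return W_PROFIT * d_profit + W_LIQ * d_liq

# Grid search over integer loan allocations in place of a simplex solve.
penalty, loans = min(
    ((weighted_deviation(x, BUDGET - x), x) for x in range(101)),
    key=lambda t: t[0],
)
```

For these numbers any allocation with roughly 67 to 70 units in loans meets both goals, so the minimal weighted deviation is zero.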

Journal ArticleDOI
TL;DR: A method of generating efficient and properly efficient solutions of a multiple criteria mathematical programming problem and computational procedures which reduce multiple criteria problems into scalar criterion problems are discussed.
Abstract: A method of generating efficient and properly efficient solutions of a multiple criteria mathematical programming problem is considered. The method is based on the principle of optimality in dynamic programming. Assuming the separability and monotonicity of the problem, a generalized functional equation of dynamic programming is derived. Moreover, computational procedures which reduce multiple criteria problems into scalar criterion problems are also discussed.

Journal ArticleDOI
TL;DR: Presents an application of a stochastic integer-programming formulation to a portfolio of projects (each of which was planned with the aid of a decision-tree structure); follow-up studies undertaken one year later are described to assess the accuracy of the data and the adequacy of the model in practice.
Abstract: Presents an application of a stochastic integer-programming formulation to a portfolio of projects, each of which was planned with the aid of a decision-tree structure. Follow-up studies undertaken one year later are described in an attempt to assess the accuracy of the data and the adequacy of the model in practice.

Journal ArticleDOI
TL;DR: Conditions are presented that ensure there is an equivalent deterministic convex program with a directionally differentiable objective function, so that Hogan's modification of the Frank-Wolfe algorithm can be used in the solution of these two-stage stochastic convex programs.
Abstract: Stochastic programs are said to have simple recourse if the state vector in each period is uniquely determined once all previous decision and random vectors are known. This paper considers two-period problems of this nature. A number of important business and economic problems such as those concerned with inventory management, portfolio revision, cash balance management, and pension fund management can be formulated effectively as problems in this class. We present conditions that ensure that there is an equivalent deterministic convex program that has a directionally differentiable objective function. Detailed expressions enabling one to calculate the directional derivative are derived. Thus it is possible to use Hogan's modification of the Frank-Wolfe algorithm, which applies for two-stage convex programs, in the solution of these two-stage stochastic convex programs. Easily calculated bounds on the optimal objective value and the problem's Kuhn-Tucker conditions are presented. We consider the important...
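The simple-recourse structure and the directional derivatives discussed above can be made concrete with a one-variable sketch. The example is illustrative, not from the paper: one first-stage decision x and a discrete random demand, so the shortage/surplus state is uniquely determined once x and the demand are known, and the deterministic equivalent is piecewise linear with one-sided derivatives at the kinks.

```python
# Minimal simple-recourse example (hypothetical data): the deterministic
# equivalent f(x) = c*x + E[recourse cost] is piecewise linear, hence
# directionally differentiable, with a kink at each demand atom.
C = 1.0                                # first-stage unit cost
Q_SHORT, Q_OVER = 5.0, 0.5             # recourse penalties per unit
DEMAND = {8: 0.25, 10: 0.5, 12: 0.25}  # demand pmf

def objective(x):
    recourse = sum(
        p * (Q_SHORT * max(0.0, d - x) + Q_OVER * max(0.0, x - d))
        for d, p in DEMAND.items()
    )
    return C * x + recourse

def directional_derivative(x, direction, eps=1e-6):
    """One-sided derivative of the piecewise-linear objective at x."""
    return (objective(x + eps * direction) - objective(x)) / eps

# At x = 10 (a demand atom) the two one-sided derivatives differ.
right = directional_derivative(10.0, +1.0)
left = directional_derivative(10.0, -1.0)
```

Here the slope jumps from -2.625 (left) to +0.125 (right) at x = 10, which is exactly the kind of kink that rules out ordinary gradients but still permits directional-derivative-based methods such as the modified Frank-Wolfe step.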


Journal ArticleDOI
TL;DR: In this article, the problem of optimal design of mechanisms for minimum mechanical and structural error is formulated as a stochastic programming problem, where the nominal link lengths, the tolerances on link lengths and the clearances in joints are considered as design parameters.

Journal ArticleDOI
TL;DR: In this paper, the authors used stochastic dynamic programming to schedule the annual expansion of a water supply facility which is being used to supplement a catchment reservoir of fixed storage capacity, and the optimal plant construction program over a planning period of many years is that which optimises the total expected discounted cost, which is made up of investment cost, operating cost, and shortfall penalty cost.
Abstract: Stochastic dynamic programming is used to schedule the annual expansion of a water supply facility which is being used to supplement a catchment reservoir of fixed storage capacity. The optimal plant construction programme over a planning period of many years is that which optimises the total expected discounted cost, which is made up of investment cost, operating cost, and shortfall penalty cost. Reservoir storage is used as the state variable, and the stochastic structure of the system is derived from the probability distributions of monthly inflows. Storage carryover from year to year is incorporated into the model as an important feature.

Journal ArticleDOI
01 May 1979
TL;DR: The results of optimal control theory applied to deterministic fishery models are extended to the continuous stochastic case by singly considering the logistic parameters and the fishing harvest as random processes.
Abstract: The results of optimal control theory applied to deterministic fishery models are extended to the continuous stochastic case. The simple logistic model is generalized to stochastic models by singly considering the logistic parameters and the fishing harvest as random processes. Approximate probability density functions are calculated for one model. The stochastic models are applied to the Atlantic sea-scallop fishery. Dynamic programming is used to obtain control solutions.
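The stochastic logistic model described above can be sketched as a discrete-time simulation. Parameters are hypothetical: logistic growth with a constant harvest rate, with the growth term perturbed by noise in an Euler-Maruyama step.

```python
import random

# Illustrative simulation (hypothetical parameters) of logistic growth with
# constant harvest and multiplicative noise on the stock.
R, K = 0.5, 100.0       # intrinsic growth rate and carrying capacity
HARVEST = 8.0           # constant harvest rate
SIGMA = 0.05            # noise intensity
DT = 0.1                # time step

def simulate(n0, steps, rng):
    n = n0
    path = [n]
    for _ in range(steps):
        drift = R * n * (1.0 - n / K) - HARVEST
        noise = SIGMA * n * rng.gauss(0.0, 1.0) * DT ** 0.5
        n = max(0.0, n + drift * DT + noise)  # stock cannot go negative
        path.append(n)
    return path

rng = random.Random(42)
path = simulate(60.0, 400, rng)
```

For these parameters the noise-free model has equilibria at N = 20 (unstable) and N = 80 (stable), so a path started at 60 should hover near 80.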

Journal ArticleDOI
TL;DR: In this paper, a class of linear programming problems is analyzed, where some of the parameters are estimated by methods like least squares and are therefore stochastic, and some aspects of the validation problems involving the risk aversion parameter are also analyzed.
Abstract: A class of linear programming problems is analyzed here, where some of the parameters are estimated by methods like least squares and are therefore stochastic. Since the decision vector may also be stochastic in this framework, this leads to bilinear problems in stochastic programming in suitable cases. Further, the conventional methods for applying programming models using regression estimates for its parameters appear to raise questions of validation of such models. Some aspects of the validation problems involving the risk aversion parameter are also analyzed here.

Journal ArticleDOI
TL;DR: In this paper, Heilmann proved that a certain class of stochastic linear programs possesses an optimal solution which depends on the random parameter in a measurable way, and that the optimal value is measurable.
Abstract: Recently W. Heilmann proved that a certain class of stochastic linear programs possesses an optimal solution which depends on the random parameter in a measurable way, and that the optimal value is measurable. We prove a result of this type for much more general problems, including stochastic nonlinear programming and stochastic optimal control problems.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated a time-discrete stochastic dynamic programming problem with countable state and action spaces and introduced an approximation procedure for a numerical solution by decomposition of the state and also of the action space.
Abstract: In this note we investigate a time-discrete stochastic dynamic programming problem with countable state and action spaces. We introduce an approximation procedure for a numerical solution by decomposition of the state and also of the action space. The minimal value functions and the optimal policies of the Markovian Decision Processes constructed by clustering of both spaces are calculated by dynamic programming. Bounds for the minimal value functions are obtained and convergence theorems are proved.
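The bounding idea behind the clustering procedure can be illustrated on a toy MDP. The MDP, the clustering, and the bounding scheme here are chosen for exposition and are not the authors' construction: running DP on the clustered process with the best-case member of each cluster gives a lower bound on the minimal value function, and with the worst-case member an upper bound.

```python
# Toy 4-state discounted MDP with deterministic transitions (hypothetical),
# plus lower/upper bounding DP runs on a 2-cluster state aggregation.
BETA = 0.9
STATES = [0, 1, 2, 3]
ACTIONS = [0, 1]                 # action 0 moves right, action 1 moves left
COST = {(s, a): abs(s - 2) + 0.5 * a for s in STATES for a in ACTIONS}
NEXT = {(s, a): (min(s + 1, 3) if a == 0 else max(s - 1, 0))
        for s in STATES for a in ACTIONS}

def value_iteration(iters=300):
    v = {s: 0.0 for s in STATES}
    for _ in range(iters):
        v = {s: min(COST[s, a] + BETA * v[NEXT[s, a]] for a in ACTIONS)
             for s in STATES}
    return v

v_exact = value_iteration()

CLUSTER = {0: 0, 1: 0, 2: 1, 3: 1}   # state -> cluster id

def clustered_bound(pick):
    """DP on clusters; pick=min gives a lower bound, pick=max an upper."""
    v = {c: 0.0 for c in (0, 1)}
    for _ in range(300):
        new = {}
        for c in (0, 1):
            members = [s for s in STATES if CLUSTER[s] == c]
            new[c] = pick(
                min(COST[s, a] + BETA * v[CLUSTER[NEXT[s, a]]]
                    for a in ACTIONS)
                for s in members
            )
        v = new
    return v

v_lo = clustered_bound(min)
v_hi = clustered_bound(max)
```

By induction on the iterations, the optimistic run stays below every member's exact value and the pessimistic run above it, so the two clustered solutions bracket the minimal value function.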

ReportDOI
01 Feb 1979
TL;DR: A new approach to modeling and analysis of systems is presented that exploits the underlying structure of the system, and several standard systems notions are demonstrated to have interesting interpretations when analyzed via descriptor-variable theory.
Abstract: A new approach to modeling and analysis of systems is presented that exploits the underlying structure of the system. The development of the approach focuses on a new modeling form, called 'descriptor variable' systems, that was first introduced in this research. Key concepts concerning the classification and solution of descriptor-variable systems are identified, and theories are presented for the linear case, the time-invariant linear case, and the nonlinear case. Several standard systems notions are demonstrated to have interesting interpretations when analyzed via descriptor-variable theory. The approach developed also focuses on the optimization of large-scale systems. Descriptor variable models are convenient representations of subsystems in an interconnected network, and optimization of these models via dynamic programming is described. A general procedure for the optimization of large-scale systems, called spatial dynamic programming, is presented where the optimization is spatially decomposed in the way standard dynamic programming temporally decomposes the optimization of dynamical systems. Applications of this approach to large-scale economic markets and power systems are discussed.

Journal ArticleDOI
TL;DR: An integrated framework is presented for handling dependent random variables in a large class of stochastic management models, a class that includes stochastic break-even analysis and stochastic present-value analysis.

01 Jan 1979
TL;DR: In this article, a stochastic dynamic programming model that explicitly examines the incentives to retire from the military is developed and numerically evaluated, which includes the most significant institutional factors affecting an Air Force officer's retirement decision; actual data on promotion probabilities, officer's pay and allowances, and retirement pay are embedded in the model.
Abstract: A stochastic dynamic programming model that explicitly examines the incentives to retire from the military is developed and numerically evaluated. The dynamic program includes the most significant institutional factors affecting an Air Force officer's retirement decision; actual data on promotion probabilities, officer's pay and allowances, and retirement pay are embedded in the model. The note is a progress report; research generalizing the model presented in this note will be presented in a future report. (Author)


Journal ArticleDOI
TL;DR: In this paper, the decision-making process associated with the scheduling of burley tobacco harvesting operations was formulated as a multi-stage decision process, and solved using a procedure called dynamic programming.
Abstract: The decision-making process associated with the scheduling of burley tobacco harvesting operations was formulated as a multi-stage decision process, and solved using a procedure called dynamic programming. The solution of a stochastic dynamic programming model provides a set of optimal decision rules, that is, a strategy. When certain user-specified parameters are provided, the decision model provides information concerning the optimal date to start harvesting, the optimal number of hours to harvest on each day, the optimal date to introduce hired labor, and the optimal number of workers which should be hired. The solution of the dynamic programming model makes it possible to compute a timeliness cost, which is defined as the amount of the expected total return which is lost because of delaying harvest initiation beyond the optimal starting day. Thus, a decision-maker can consult tabulated strategy solutions in any situation during the harvesting season and make decisions with the aid of timeliness cost information.
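The timeliness-cost idea above can be sketched with a stripped-down Monte Carlo model. All numbers are hypothetical and the model is far simpler than the paper's: random weather blocks some workdays, crop value decays once harvest could have begun, and the timeliness cost of starting on day d is the expected return forgone relative to the best starting day.

```python
import random

# Stripped-down illustration (hypothetical numbers) of timeliness cost:
# expected total return as a function of the harvest starting day.
FIELD = 20.0            # hectares to harvest
RATE = 2.0              # hectares harvested per workable day
P_WORKABLE = 0.7        # chance a given day is fit for harvesting
VALUE_PER_HA = 100.0    # crop value at maturity
DECAY = 1.0             # per-day value loss after maturity

def expected_return(start_day, trials=2000, seed=0):
    rng = random.Random(seed)   # common random numbers across start days
    total = 0.0
    for _ in range(trials):
        remaining, day, revenue = FIELD, start_day, 0.0
        while remaining > 0:
            if rng.random() < P_WORKABLE:
                cut = min(RATE, remaining)
                revenue += cut * max(0.0, VALUE_PER_HA - DECAY * day)
                remaining -= cut
            day += 1
        total += revenue
    return total / trials

returns = {d: expected_return(d) for d in range(15)}
best_day = max(returns, key=returns.get)
timeliness_cost = {d: returns[best_day] - returns[d] for d in returns}
```

In this toy version value only decays over time, so the optimal start is day 0 and timeliness cost grows with every day of delay; the paper's tabulated strategies play the role of the `returns` table here.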

Journal ArticleDOI
TL;DR: It is shown that such a class of models has wide applicability in economic systems, e.g., portfolio models, market strategies and economy-wide input-output models, and may lead to a random eigenvalue problem which is non-linear in some cases.
Abstract: A class of bilinear models of the form max f(x) where f(x) = min g(x, c) is a scalar function of vectors x and c is analysed here in the framework of stochastic linear programming. It is shown that such a class of models has wide applicability in economic systems, e.g., portfolio models, market strategies and economy-wide input-output models. It is further shown that such a class may lead to a random eigenvalue problem which is non-linear in some cases.
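The max-min structure max f(x) with f(x) = min g(x, c) can be illustrated numerically. The payoff matrix and interpretation (a portfolio mix facing a worst-case scenario) are hypothetical, and a coarse grid over the simplex stands in for a proper solver; since g is bilinear, the inner minimum over the scenario simplex is attained at a vertex, i.e., at one of the rows.

```python
# Small numerical sketch of max_x min_c g(x, c) with bilinear g = c^T A x
# (hypothetical data): x mixes two assets, c picks the worst scenario.
A = [  # scenario-by-asset returns
    [4.0, 1.0],
    [2.0, 3.0],
    [1.0, 5.0],
]

def worst_case_return(x):
    """f(x) = min over the scenario simplex of c^T A x = min over rows."""
    return min(sum(row[j] * x[j] for j in range(2)) for row in A)

# Grid over the two-asset simplex in place of an LP/game solver.
best = max(
    ((worst_case_return((w / 100.0, 1 - w / 100.0)), w / 100.0)
     for w in range(101)),
    key=lambda t: t[0],
)
f_star, weight_asset_1 = best
```

For this matrix the maximin mix is an even split, with guaranteed return 2.5; the concave, piecewise-linear shape of f is what connects this class to stochastic linear programming rather than to a single LP.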

Journal ArticleDOI
TL;DR: The main aim of the present paper is to demonstrate that a similar linear programming approach is feasible even in the non-stationary case; a programming problem (D∗) is formulated that is equivalent to the problem of finding a p-optimal policy for the stochastic dynamic program.
Abstract: In some recent publications it was shown that certain stationary stochastic dynamic programming problems with general state and action spaces can be solved by generalized linear programming. It is the main aim of the present paper to demonstrate that a similar linear programming approach is feasible even in the non-stationary case. To this end, we formulate a programming problem (D∗) and show that (D∗) is equivalent to the problem of finding a p-optimal policy for the stochastic dynamic program, whereas a modification of (D∗) turns out to be the dual program of a pair of general linear programs.

Journal ArticleDOI
TL;DR: It is shown that the optimal control gains for the linear-exponential-Gaussian (LEG) problem can be computed without the "future observation program" at each time step.
Abstract: In this correspondence, it is shown that the optimal control gains for the linear-exponential-Gaussian (LEG) problem can be computed without the "future observation program" at each time step. Furthermore, the connection between the closed-loop optimal (CLO) control and the open-loop feedback optimal (OLFO) control is discussed.

01 May 1979
TL;DR: The Future Automobile Population Stochastic Model (FAPS Model) as discussed by the authors is a model of new car sales developed by Wharton Econometric Forecasting Associates, revised to incorporate the new vehicle survival model.
Abstract: The model, which is called the Future Automobile Population Stochastic Model (FAPS Model), consists of two major components: (1) Model of new car sales. The model of new car sales is the model of automobile demand developed by Wharton Econometric Forecasting Associates, revised to incorporate the new vehicle survival model that was developed. (2) A procedure for specifying future planned and unplanned events. This procedure, which specifies the future values of exogenous parameters of the model, incorporates the uncertainty of these parameters into the model. A computer program of the FAPS Model was written and is documented in the report.