
Showing papers on "Dynamic programming" published in 1969



Journal ArticleDOI
TL;DR: A broad class of problems can be formulated as minimizing a concave function over the solution set of a Leontief substitution system; the class includes deterministic single- and multi-facility economic lot size, lot-size smoothing, warehousing, product-assortment, batch-queuing, capacity-expansion, investment-consumption, and reservoir-control problems with concave cost functions.
Abstract: The paper shows that a broad class of problems can be formulated as minimizing a concave function over the solution set of a Leontief substitution system. The class includes deterministic single- and multi-facility economic lot size, lot-size smoothing, warehousing, product-assortment, batch-queuing, capacity-expansion, investment-consumption, and reservoir-control problems with concave cost functions. Because in such problems an optimum occurs at an extreme point of the solution set, we can utilize the characterization of the extreme points given in a companion paper to obtain most existing qualitative characterizations of optimal policies for inventory models with concave costs in a unified manner. Dynamic programming recursions for searching the extreme points to find an optimal point are given in a number of cases. We give only algorithms whose computational effort increases algebraically (rather than exponentially) with the size of the problem.
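
The single-facility economic lot size case gives the flavor of the recursions involved. Because an optimum occurs at an extreme point, production happens only when inventory is zero, so each order covers a consecutive run of demands. A minimal sketch under that assumption (the demands and the fixed-charge cost form are illustrative, not taken from the paper):

```python
# Sketch of a Wagner-Whitin style recursion for the single-facility
# economic lot size problem with concave (here, fixed-charge) costs.
# Demands and cost parameters below are illustrative placeholders.

def lot_size(d, setup, unit, hold):
    """d[t]: demand in period t; producing q > 0 in a period costs
    setup + unit * q; hold: holding cost per unit per period."""
    n = len(d)
    INF = float("inf")
    # f[t] = minimum cost of meeting demands d[t:] starting with zero stock.
    f = [INF] * (n + 1)
    f[n] = 0.0
    for t in range(n - 1, -1, -1):
        for j in range(t, n):  # produce d[t..j] in period t (zero-inventory property)
            q = sum(d[t:j + 1])
            carry = sum(hold * (k - t) * d[k] for k in range(t, j + 1))
            cost = (setup + unit * q) if q > 0 else 0.0
            f[t] = min(f[t], cost + carry + f[j + 1])
    return f[0]

print(lot_size([20, 30, 0, 40], setup=50.0, unit=1.0, hold=0.5))
```

The double loop makes the effort grow as the square of the number of periods, i.e. algebraically rather than exponentially, in line with the abstract's claim.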

235 citations


Journal ArticleDOI
TL;DR: This paper describes an algorithm for a ship scheduling problem, obtained from a Swedish shipowning company, that uses the Dantzig-Wolfe decomposition method for linear programming and some integer programming experiments have been made.
Abstract: This paper describes an algorithm for a ship scheduling problem obtained from a Swedish shipowning company. The algorithm uses the Dantzig-Wolfe decomposition method for linear programming. The subprograms are simple network flow problems that are solved by dynamic programming. The master program in the decomposition algorithm is an LP problem with only zero-one elements in the matrix and the right-hand side. Integer solutions are not guaranteed, but generation and solution of a large number of problems indicate that the frequency of fractional solutions is as small as 1–2 per cent. Problems with about 40 ships and 50 cargoes are solved in about 25 minutes on an IBM 7090. In order to resolve the fractional cases, some integer programming experiments have been made. The results will be reported in a forthcoming paper.
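
For flavor, the single-ship subprograms the abstract mentions are best-value path problems on an acyclic network, which dynamic programming solves directly. A hedged sketch (the graph below is an invented placeholder, not the paper's network):

```python
# Best-value schedule for one ship as a most-profitable path in a DAG;
# nodes are assumed topologically ordered by their integer labels.
def best_path(succ, source, sink):
    """succ[u] = list of (v, profit) arcs with u < v for every arc."""
    best = {source: 0.0}
    choice = {}
    for u in sorted(succ):
        if u not in best:          # unreachable from the source
            continue
        for v, profit in succ[u]:
            if best.get(v, float("-inf")) < best[u] + profit:
                best[v] = best[u] + profit
                choice[v] = u
    path, node = [sink], sink      # recover the schedule
    while node != source:
        node = choice[node]
        path.append(node)
    return best[sink], path[::-1]

arcs = {0: [(1, 5.0), (2, 3.0)], 1: [(3, 4.0)], 2: [(3, 7.0)], 3: []}
print(best_path(arcs, source=0, sink=3))   # (10.0, [0, 2, 3])
```

In the decomposition, the arc profits would come from the master program's dual prices, and each call would generate one candidate schedule (column) for a ship.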

166 citations


Journal ArticleDOI
TL;DR: A dynamic programming approach is presented that reduces the amount of redundant transitional calculations implicit in a total enumeration approach to partitioning N entities into M disjoint and nonempty subsets (clusters).
Abstract: This paper considers the problem of partitioning N entities into M disjoint and nonempty subsets (clusters). Except when both N and N-M are very small, a search for the optimal solution by total enumeration of all clustering alternatives is quite impractical. The paper presents a dynamic programming approach that reduces the amount of redundant transitional calculations implicit in a total enumeration approach. A comparison of the number of calculations required under each approach is presented in Appendix A. Unlike most clustering approaches used in practice, the dynamic programming algorithm will always converge on the best clustering solution. The efficiency of the dynamic programming approach depends upon the rapid-access computer memory available. A numerical example is given in Appendix B.
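
One standard way to organize such a recursion is a bitmask DP over subsets; whether this matches the paper's exact formulation is an assumption, and the cost function and data are invented. Pinning the lowest-numbered element of each remaining set inside the "last" cluster removes the redundant calculations that a total enumeration would repeat:

```python
def best_partition(points, M, cluster_cost):
    """Minimal-cost partition of points into M nonempty clusters."""
    n = len(points)
    INF = float("inf")
    # Precompute the cost of every nonempty subset (encoded as bitmask S).
    cost = [0.0] * (1 << n)
    for S in range(1, 1 << n):
        cost[S] = cluster_cost([points[i] for i in range(n) if S >> i & 1])
    # dp[m][S]: best cost of splitting subset S into m nonempty clusters.
    dp = [[INF] * (1 << n) for _ in range(M + 1)]
    dp[0][0] = 0.0
    for m in range(1, M + 1):
        for S in range(1, 1 << n):
            low = S & -S              # S's lowest element joins the last cluster
            T = S ^ low
            C = T
            while True:               # enumerate all subsets C of T
                last = C | low
                if dp[m - 1][S ^ last] + cost[last] < dp[m][S]:
                    dp[m][S] = dp[m - 1][S ^ last] + cost[last]
                if C == 0:
                    break
                C = (C - 1) & T
    return dp[M][(1 << n) - 1]

def ssq(members):                     # within-cluster sum of squares (1-D data)
    mu = sum(members) / len(members)
    return sum((x - mu) ** 2 for x in members)

print(best_partition([1.0, 1.2, 5.0, 5.1, 9.0], M=2, cluster_cost=ssq))
```

Memory is the binding resource here (arrays of size 2^N), which is consistent with the abstract's remark about rapid-access computer memory.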

122 citations


Book
01 Jan 1969

99 citations


Journal ArticleDOI
TL;DR: Several specialized dynamic programming techniques applicable to water system problems are introduced, including successive approximations, forward dynamic programming, dynamic programming for stochastic control, and iteration in policy space.
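
As a concrete illustration of the last technique listed, here is a hedged sketch of iteration in policy space (Howard's policy iteration) on a small finite-state model of the kind a discretized reservoir yields; the transition matrices, costs, and discount factor are invented:

```python
import numpy as np

def policy_iteration(P, c, beta=0.95):
    """P[a][s, s']: transition probabilities under action a;
    c[a][s]: stage cost; beta: discount factor."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - beta * P_pi) v = c_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        c_pi = np.array([c[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - beta * P_pi, c_pi)
        # Policy improvement: switch an action only on strict improvement.
        Q = np.array([c[a] + beta * (P[a] @ v) for a in range(n_actions)])
        current = Q[policy, np.arange(n_states)]
        new_policy = np.where(Q.min(axis=0) < current - 1e-9,
                              Q.argmin(axis=0), policy)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

# Two actions ("release", "store") on three storage levels; numbers invented.
P = [np.array([[0.6, 0.4, 0.0], [0.3, 0.5, 0.2], [0.0, 0.4, 0.6]]),
     np.array([[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]])]
c = [np.array([2.0, 1.0, 0.5]), np.array([1.5, 1.2, 2.0])]
print(policy_iteration(P, c))
```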

73 citations


Journal ArticleDOI
TL;DR: An optimality condition for singular control problems is derived using differential dynamic programming to obtain an expression for the change in cost produced by a control variation.
Abstract: An optimality condition for singular control problems is derived using differential dynamic programming to obtain an expression for the change in cost produced by a control variation.

66 citations


Book
01 Feb 1969

61 citations


Journal ArticleDOI
TL;DR: Optimal controls for stochastic saturating systems are computed via the elliptic differential equation of dynamic programming, with application to attitude control and tracking system design.
Abstract: Optimal controls for stochastic saturating systems are computed via the elliptic differential equation of dynamic programming, with application to attitude control and tracking system design.

59 citations


Journal ArticleDOI
TL;DR: Control of linear stochastic systems with unknown parameters is accomplished by means of an approximate solution of the associated functional equation of dynamic programming, which leads to an adaptive control system that is linear in an expanded vector of state estimates, with feedback gains that are explicit functions of a posteriori parameter probabilities.

34 citations


Journal ArticleDOI
TL;DR: A sample problem is presented and analyzed in detail in order to demonstrate an application of the model, and the procedure was coded in Fortran IV and used to obtain the sample problem results.
Abstract: This article considers the problem of facility location-allocation. The problem is to allocate K facilities in M facility locations and assign N demands such that the total cost is minimized. The model is formulated as a mathematical programming problem and decomposed into the recursive equations of dynamic programming. A sample problem is presented and analyzed in detail in order to demonstrate an application of the model. The procedure was coded in Fortran IV and used to obtain the sample problem results.

Journal ArticleDOI
TL;DR: Using an accelerated exhaustive search technique as an alternative to dynamic programming, optimum fuel cycle costs of a three-zone, scatter-loaded nuclear power reactor can be found with nominal computer memory and time requirements.


Journal ArticleDOI
TL;DR: In this article, a network of road links is constructed and the traffic demand is modeled as a relation between two nodes in the network; the overall system is decomposed into a net of network problems, where each node represents the road system in one time period.

Journal ArticleDOI
TL;DR: In this article, a new property of nonserial dynamic programming is presented that allows a considerable reduction in the computational effort required for solving the secondary optimization problem.

Journal ArticleDOI
TL;DR: In this article, a dynamic programming model for selecting an optimal combination of transportation modes over a multiperiod planning horizon is presented. The model is formulated as an optimal discrete time stochastic control problem in which the cost is quadratic and the dynamic equations are linear in the state and control variables.
Abstract: This paper develops a dynamic programming model for selecting an optimal combination of transportation modes over a multiperiod planning horizon. The formulation explicitly incorporates uncertainty regarding future requirements or demands for a number of commodity classes. In addition to determining the optimal modes to employ, the model assigns individual commodity classes to various modes, determines which supply points serve which destinations, and reroutes carriers from destinations to alternative sources where they will be most effective. The model is formulated as an optimal discrete time stochastic control problem where cost is quadratic and dynamic equations linear in the state and control variables. This model may be solved in closed form by an efficient dynamic programming algorithm that permits the treatment of relatively large scale systems. Also developed is an alternative, generally suboptimal method of solution, based upon solving a sequence of convex programming problems over time. This te...
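
In the linear-quadratic case, the closed form referred to is a backward Riccati recursion whose output is a sequence of linear feedback gains; with additive noise, certainty equivalence leaves the gains unchanged. A minimal sketch (the matrices are placeholders, not the paper's transportation model):

```python
import numpy as np

def lq_gains(A, B, Q, R, T):
    """Backward dynamic programming for x_{t+1} = A x_t + B u_t (+ noise)
    with stage cost x'Qx + u'Ru; returns gains K_t so that u_t = -K_t x_t."""
    P = Q.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]          # gains[t] applies at stage t

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = lq_gains(A, B, Q=np.eye(2), R=np.array([[1.0]]), T=50)
print(K[0])
```

The work per stage is a few small matrix products and one linear solve, which is why this kind of recursion scales to relatively large systems.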

01 Jun 1969
TL;DR: The structure of OPTLOC, a computer-aided system for the optimal location of rural highway links, is outlined; the system uses a modified dynamic programming algorithm to generate alignments and profiles that are optimal with respect to a defined set of construction, maintenance, and user costs.
Abstract: This paper outlines the structure of OPTLOC, a computer-aided system for the optimal location of rural highway links. The system uses a modified dynamic programming algorithm to generate alignments and profiles which are optimum in terms of a defined set of construction, maintenance and user costs, and feasible with regard to specified constraints on grades, curvatures and location. It is emphasised that the system is not an attempt to produce fully-automated solutions to highway location problems.


Journal ArticleDOI
TL;DR: In this paper, the authors use the quasilinearization technique to overcome the dimensionality difficulties of dynamic programming in solving important optimization problems, such as optimal control problems with continuous independent variables.

Journal ArticleDOI
TL;DR: The algorithm presented here should be thought of as replacing the basic algorithms in [3] and [4], and the important theoretical work developed in both should be used to calculate the knapsack function for large arguments.
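
For context, the knapsack function (in the Gilmore-Gomory sense) maps each capacity x to the best value obtainable from unlimited copies of the given pieces. The basic recursion that such algorithms accelerate looks like the following sketch (piece lengths and values invented):

```python
def knapsack_function(lengths, values, x_max):
    """g[x] = max total value of pieces (repetition allowed) whose total
    length is at most x; the classic one-dimensional DP."""
    g = [0.0] * (x_max + 1)
    for x in range(1, x_max + 1):
        g[x] = max((g[x - l] + v for l, v in zip(lengths, values) if l <= x),
                   default=g[x - 1])   # no piece fits: carry the value forward
    return g

g = knapsack_function([3, 5, 7], [4.0, 7.0, 11.0], x_max=20)
print(g[20])
```

Computing g this way costs one pass per unit of capacity, which is exactly what becomes expensive for large arguments.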

Journal ArticleDOI
TL;DR: Control problems arising in stochastic service systems are treated by a decomposition technique involving the methods of queuing theory and dynamic programming to find the optimal operation of the surgical facilities in a hospital.
Abstract: Control problems arising in stochastic service systems are treated by a decomposition technique involving the methods of queuing theory and dynamic programming. This approach is applied to the optimal operation of the surgical facilities in a hospital. The generalization of the method to other stochastic service systems is immediate.

Journal ArticleDOI
01 May 1969
TL;DR: A brief outline of how the method of dynamic programming extends to the control of a Markov process is given and some of the difficulties which arise when information about the state of the process is statistical are mentioned.
Abstract: The aim of this paper is to give a brief outline of how the method of dynamic programming extends to the control of a Markov process and to mention some of the difficulties which arise when information about the state of the process is statistical. The development of the subject has been rapid and, whilst this is encouraging, the corresponding growth in the literature is alarming. There will be no attempt here even to do justice to the main contributions. Our discussion will be limited to a comparison of various types of model and some of the different techniques which may be useful in formulating and solving problems of optimization. The application of Pontryagin's maximum principle to deterministic systems depends on the existence of a unique optimal path between the present state and some terminal state, since the solution is developed by integration backwards along this path. But, for stochastic systems, no such path can be defined and because of this, attempts to generalize the principle do not seem to have been very successful: see Kushner (1965). On the other hand, a Markovian point of view is already implicit in Bellman's principle of optimality: immediate costs should be minimized with future costs in mind, but the past may be neglected. There is no difficulty in extending this to any Markov process, simply by treating future expectations rather than actual costs. But an inductive method of constructing optimal policies is no more than a first step towards understanding the relation between policy and expectation. The determination of more explicit solutions depends a great deal on the simplifying assumptions made in choosing a particular mathematical model. A discussion of simple diffusion models is included here to illustrate the advantages obtained when we can rely on the methods of differential calculus. The detailed example of an insurance model is an oversimplified representation of any real business, but it reflects some features of insurance sensibly and, because of its simplicity, it is open to more informed criticism and hence, perhaps, to more realistic development. Stochastic control theory is concerned with decision-making when there is uncertainty about the effect of particular decisions on the future course of events. The need for special care in model building becomes even more apparent when we admit also that decisions must be based on approximate information. The investigation of optimal decisions depends critically on whether this information can be represented conveniently; for example, in terms of a small number of sufficient statistics. Again, it seems necessary to adopt a Bayesian approach to unknown parameters, if we are to retain the Markovian structure of the decision process. The statistical aspects of control theory deserve more attention. So far, progress has perhaps been limited by a static view of unknown parameters and one purpose of the remarks at the end of this paper is to suggest a re-examination of this view.
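
A minimal sketch of the inductive construction described, minimizing immediate expected cost with future expected costs in mind, for a finite controlled Markov chain over a finite horizon; every number here is an invented example:

```python
import numpy as np

def backward_induction(P, c, T):
    """P[a][s, s']: transition probabilities under action a;
    c[a][s]: immediate cost; T: horizon. Returns rules and cost-to-go."""
    n_states = P[0].shape[0]
    v = np.zeros(n_states)                 # terminal cost taken as zero
    rules = []
    for _ in range(T):
        Q = np.array([c[a] + P[a] @ v for a in range(len(P))])
        rules.append(Q.argmin(axis=0))     # optimal decision rule at this stage
        v = Q.min(axis=0)                  # expected cost-to-go
    rules.reverse()                        # rules[t] is for stage t
    return rules, v

P = [np.array([[0.8, 0.2], [0.4, 0.6]]), np.array([[0.5, 0.5], [0.1, 0.9]])]
c = [np.array([1.0, 3.0]), np.array([2.0, 1.0])]
rules, v = backward_induction(P, c, T=10)
print(rules[0], v)
```

Replacing actual costs by expectations is the only change relative to the deterministic recursion, which is the point made in the abstract.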



Journal ArticleDOI
TL;DR: In this article, a fairly realistic nonlinear model of a water reservoir system with multiple uses has been developed based on available data, and the optimum of the system based on the developed model has been determined by the combined use of dynamic programming and the pattern search techniques.
Abstract: A fairly realistic nonlinear model of a water reservoir system with multiple uses has been developed based on available data, and the optimum of the system based on the developed model has been determined by the combined use of dynamic programming and the pattern search techniques. Both the simplex search and the Hooke and Jeeves pattern search have been used. The approach in modeling and optimization can treat complex inequality constraints. The benefits or losses resulting from four purposes or uses of water, namely, urban water supply, hydroelectric power generation, irrigation, and recreation, are taken into account in the profit function. Other uses such as flood control, navigation, and fish and wildlife enhancement are considered indirectly by the use of inequality constraints. It appears that the approach developed in this work can treat a water resource allocation problem involving complex inequality constraints.
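
Of the two direct-search methods named, the Hooke and Jeeves pattern search is easy to sketch: it needs only function values, which is what makes it easy to combine with a simulation-style reservoir model. A generic sketch (objective and step schedule invented, not the authors' implementation; constraints would in practice be handled by penalties or by rejecting infeasible trial points):

```python
def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6):
    """Minimize f by coordinate-wise exploratory moves plus pattern
    (extrapolation) moves; derivative-free."""
    def explore(point, s):
        x = list(point)
        for i in range(len(x)):
            for d in (s, -s):
                trial = x[:]
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    best = list(x0)
    while step > tol:
        x = explore(best, step)
        if f(x) >= f(best):
            step *= shrink            # exploration failed: refine the mesh
            continue
        while f(x) < f(best):
            # pattern move: extrapolate from best through x, then explore
            best, x = x, explore([2 * a - b for a, b in zip(x, best)], step)
    return best

print(hooke_jeeves(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0]))
```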

ReportDOI
01 Aug 1969
TL;DR: The problem is that of optimally testing a coherent system to learn some characteristic of it, for example whether it is operating or not; branch-and-bound and dynamic programming solutions are given, together with a comparison of their computation times.
Abstract: The problem is that of optimally testing a coherent system to learn some characteristic of it, for example, whether it is operating or not. A branch and bound and a dynamic programming solution are given, as well as a comparison of computer computation times for both. Several specific models with analytical solutions are also presented.


Journal ArticleDOI
TL;DR: In this article, a method for directly converting an optimal control problem to a Cauchy problem is presented, and no use is made of the Euler equations, Pontryagin's maximum principle, or dynamic programming in the derivation.
Abstract: A method for directly converting an optimal control problem to a Cauchy problem is presented. No use is made of the Euler equations, Pontryagin's maximum principle, or dynamic programming in the derivation. The initial-value problem, in addition to being desirable from the computational point of view, possesses stable characteristics. The results are directly applicable in the study of guidance and control and are particularly useful for obtaining numerical solutions to control problems.

Journal ArticleDOI
TL;DR: This paper uses an "illegal" modification of the dynamic programming process to obtain an upper bound on the max-min value; a second, legal application of dynamic programming to the minimization part of the problem, for a fixed maximizing vector, gives a lower bound.
Abstract: The historic max-min problem is examined as a discrete process rather than in its more usual continuous mode. Since the practical application of the max-min model usually involves discrete objects such as ballistic missiles, the discrete formulation of the problem seems quite appropriate. This paper uses an illegal modification to the dynamic programming process to obtain an upper bound to the max-min value. Then a second but legal application of dynamic programming to the minimization part of the problem for a fixed maximizing vector will give a lower bound to the max-min value. Concepts of optimal stopping rules may be applied to indicate when sufficiently near optimal solutions have been obtained.

Journal ArticleDOI
TL;DR: In this article, an adaptive control system that is linear in an expanded vector of state estimates, with feedback gains that are explicit functions of a posteriori parameter probabilities, is presented. The performance of this controller is illustrated with a simple example.
Abstract: Control of linear stochastic systems with unknown parameters is accomplished by means of an approximate solution of the associated functional equation of dynamic programming. The approximation is based on repeated linearizations of a quadratic weighting matrix appearing in the optimal cost function for the control process. This procedure leads to an adaptive control system which is linear in an expanded vector of state estimates with feedback gains which are explicit functions of a posteriori parameter probabilities. The performance of this controller is illustrated with a simple example.
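
A hedged sketch of the structure described: keep a posteriori probabilities over a finite set of candidate parameter values, updated by Bayes' rule from one-step prediction residuals, and make the feedback gain an explicit function of those probabilities. The probability-weighted blend below is an assumption about the exact functional form, and all numbers are illustrative:

```python
import numpy as np

def posterior_update(probs, residuals, variances):
    """Bayes update from scalar one-step prediction residuals under
    each candidate parameter model (Gaussian likelihoods assumed)."""
    likes = np.exp(-0.5 * residuals ** 2 / variances) / np.sqrt(variances)
    post = probs * likes
    return post / post.sum()

def blended_gain(gains, probs):
    """Feedback gain as an explicit function of posterior probabilities."""
    return sum(p * K for p, K in zip(probs, gains))

probs = np.array([0.5, 0.5])            # prior over two candidate models
probs = posterior_update(probs, residuals=np.array([0.2, 1.5]),
                         variances=np.array([1.0, 1.0]))
K = blended_gain([np.array([1.0, 0.3]), np.array([2.0, 0.1])], probs)
print(probs, K)                          # posteriors and the blended gain
```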