
Showing papers on "Dynamic programming published in 1974"


Journal ArticleDOI
TL;DR: A technique is developed which improves all of the dynamic programming methods by a square root factor and can be incorporated into the more general 0-1 knapsack problem, obtaining a square root improvement in the asymptotic behavior.
Abstract: Given r numbers s1, …, sr, algorithms are investigated for finding all possible combinations of these numbers which sum to M. This problem is a particular instance of the 0-1 unidimensional knapsack problem. All of the usual algorithms for this problem are investigated in terms of both asymptotic computing times and storage requirements, as well as average computing times. We develop a technique which improves all of the dynamic programming methods by a square root factor. Empirical studies indicate this new algorithm to be generally superior to all previously known algorithms. We then show how this improvement can be incorporated into the more general 0-1 knapsack problem, obtaining a square root improvement in the asymptotic behavior. A new branch and search algorithm that is significantly faster than the Greenberg and Hegerich algorithm is also presented. The results of extensive empirical studies comparing these knapsack algorithms are given.
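
A minimal sketch of a two-list ("split and merge") enumeration that yields this kind of square-root saving, written in present-day Python as an illustration rather than the authors' implementation; the function name and the tiny example are assumptions.

```python
from itertools import combinations

def subsets_summing_to(numbers, target):
    # Two-list search: enumerate each half separately, then pair left-half sums
    # with complementary right-half sums. Each list has O(2^(r/2)) entries
    # instead of O(2^r) -- the square-root factor.
    half = len(numbers) // 2
    left, right = numbers[:half], numbers[half:]

    def sums_of(items):
        # Map each achievable sum to the index subsets of `items` that reach it.
        table = {}
        for k in range(len(items) + 1):
            for combo in combinations(range(len(items)), k):
                table.setdefault(sum(items[i] for i in combo), []).append(combo)
        return table

    left_sums, right_sums = sums_of(left), sums_of(right)
    solutions = []
    for s, left_combos in left_sums.items():
        for right_combo in right_sums.get(target - s, []):
            for left_combo in left_combos:
                solutions.append([left[i] for i in left_combo] +
                                 [right[i] for i in right_combo])
    return solutions

# All combinations of 1, 3, 4, 5 that sum to 8: [3, 5] and [1, 3, 4]
print(subsets_summing_to([1, 3, 4, 5], 8))
```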

570 citations


Journal ArticleDOI
TL;DR: A branch-and-bound algorithm for identifying an optimal solution to the following problem: select plant sites from a given set of sites and choose their production and distribution levels to meet known demand at discrete points at minimum cost.
Abstract: The following problem is considered: select plant sites from a given set of sites and choose their production and distribution levels to meet known demand at discrete points at minimum cost. The construction and operating cost of each plant is assumed to be a concave function of the total production at that plant, and the distribution cost between each plant and demand point is assumed to be a concave function of the amount shipped. There may be capacity restrictions on the plants. A branch-and-bound algorithm for identifying an optimal solution is described; it is equivalent to the solution of a finite sequence of transportation problems. The algorithm is developed as a particular case of a simplified algorithm for minimizing separable concave functions over linear polyhedra. Computational results are cited for a computer code implementing the algorithm.

159 citations


Journal ArticleDOI
TL;DR: A new method is proposed to solve a single-stage expansion problem for a transmission network, given future generation and load patterns, and alternative types of lines available, subject to overload, reliability and right-of-way constraints.
Abstract: A new method is proposed in this paper to solve a single-stage expansion problem for a transmission network, given future generation and load patterns, and alternative types of lines available, subject to overload, reliability and right-of-way constraints. The problem is formulated as a series of zero-one integer programs which are solved by an efficient branch-and-bound algorithm. Complexity is reduced by the concepts of optimal cost-capacity curves and screening algorithms. A sample study is shown and the method is implemented in a computer program.

153 citations


Journal ArticleDOI
01 Jul 1974
TL;DR: In this paper, the authors review recent literature on mathematical optimization formulations and solutions of power system planning and operation problems, and point out some specific areas where more work needs to be done.
Abstract: Important power system planning and operation problems have been formulated as mathematical optimization problems. Such problems as the economic dispatch, in many of its facets; var scheduling and allocation; pollution dispatch; maximum interchange; hydrothermal unit commitment and dispatch; generation, transmission, and distribution expansion planning; maintenance scheduling and substation switching, have been formulated and solved. Modern mathematical optimization techniques, such as nonlinear, quadratic, linear, integer and dynamic programming and their many combinations and extensions, have been exploited. Some of the formulations and solutions to these problems as presented in the recent literature within the power systems field are reviewed. The large number of papers available is a measure of the current immense activity in this area. Attempts are made to point out some specific areas where more work needs to be done.

127 citations


Journal ArticleDOI
TL;DR: In this paper, a control scheme for the immunisation of susceptibles in the Kermack-McKendrick epidemic model for a closed population is proposed, which uses Dynamic Programming and Pontryagin's Maximum Principle.
Abstract: A control scheme for the immunisation of susceptibles in the Kermack-McKendrick epidemic model for a closed population is proposed. The bounded control appears linearly in both the dynamics and the integral cost functionals, and any optimal policies are of the "bang-bang" type. The approach uses Dynamic Programming and Pontryagin's Maximum Principle and allows one, for certain values of the cost and removal rates, to apply necessary and sufficient conditions for optimality and show that a one-switch candidate is the optimal control. In the remaining cases we are still able to show that an optimal control, if it exists, has at most one switch.

123 citations


Journal ArticleDOI
TL;DR: The nonserial problem of synthesizing an energy integrated separation system is solved by decomposing the original problem so that a serial structure results.
Abstract: The problem of synthesizing an optimal multicomponent separation system which is energy integrated is solved by a combined decomposition and dynamic programming technique. Dynamic programming is an optimization technique which allows the solution by decomposition of a multistage or serial optimization problem. Whenever the special serial structure is absent, again decomposition can be tried, but in this case it is by no means obvious how to decompose effectively the given problem into subproblems. In this paper the nonserial problem of synthesizing an energy integrated separation system is solved by decomposing the original problem so that a serial structure results.

113 citations


Journal ArticleDOI
TL;DR: It is proposed that the real valued objective function be replaced by preference relations and sufficient conditions are given on the structure of the preference relations to insure that the recursive dynamic programming procedure yields an optimal sequence of decisions.
Abstract: The dynamic programming recursive procedure has provided an efficient method for solving a variety of multi-stage decision problems in which the objective is measured by a real-valued utility function. In this paper we propose that the real-valued objective function be replaced by preference relations. Sufficient conditions are given on the structure of the preference relations to insure that the recursive dynamic programming procedure yields an optimal sequence of decisions. The solution method is well adapted to an interactive mode of implementation in which there is a dialogue between the decision maker and a source of information and analysis, e.g., a computer. The “computer” collects, analyzes, and presents information on a set of alternatives to the decision maker, who then communicates to the “computer” his choice of the best alternatives in the set. The process is repeated, stage by stage, thus generating an optimal sequence of decisions. The approach should be particularly useful in dealing with multi-stage decision problems involving design and/or operation of facilities and multi-period public projects where a variety of desiderata must be considered, i.e., where a simple cost or benefit function is inadequate.

98 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of locating a given number of plants along a road network so as to minimize the total distance between plants and warehouses assigned to them is modeled as an integer programming problem and the relaxed LP is solved by decomposition.
Abstract: The problem is to locate a given number of plants along a road network so as to minimize the total distance between plants and warehouses assigned to them. It is modeled as an integer programming problem and the relaxed LP is solved by decomposition. In cases of noninteger termination, the ILP is attacked using group theoretics and a dynamic programming recursion. Computational results are given, which contrast this procedure to a branch-and-bound approach.

89 citations


Journal ArticleDOI
TL;DR: A computation scheme based directly on the dynamic programming formulation is proposed, a time-sharing computer program is discussed, and the results of an example problem are presented.
Abstract: A method for determining where to locate the inspection stations in a multistage production process with imperfect inspection is presented. Dynamic programming is used to establish that the optimal expected total cost function at every stage is piecewise linear and concave. While the optimal policy at every stage usually consists of one “inspect” region and one “do not inspect” region, this policy structure is found not to hold in general. A computation scheme based directly on the dynamic programming formulation is proposed, a time-sharing computer program is discussed, and the results of an example problem are presented.

82 citations


Journal ArticleDOI
TL;DR: In this article, the assortment problem was extended to consider probabilistic demands and multiple periods, and the problem can be approached by using dynamic programming thus finding the shortest route through a network.
Abstract: The early work done on the assortment problem assumed known demands and additive and proportional substitution cost functions for single-period problems. The extension of this problem to consider probabilistic demands and multiple periods complicates the problem considerably. By making some assumptions about the pattern of demands and the form of a reasonable solution, the problem can be approached by using dynamic programming, thus finding the shortest route through a network.

78 citations


Journal ArticleDOI
01 Nov 1974
TL;DR: In this paper, the problem of decentralized control of stochastic discrete-time dynamic systems with one-step-delay sharing information structure is treated, and the problem can be decomposed into several static team problems.
Abstract: This paper treats problems of decentralized control of stochastic discrete-time dynamic systems with a one-step-delay sharing information structure. It is shown that, by applying the dynamic programming technique, the problem can be decomposed into several static team problems. For the LQG case, this approach gives a solution which clearly shows the relation between estimation and control functions in the optimal control policy. Also, the form of this solution is convenient for numerical computation of the optimal control.

Journal ArticleDOI
TL;DR: An efficient dynamic programming formulation is developed for each of the two different flow shop scheduling problems that arise when, in a two machine problem, one machine is characterized by sequence dependent setup times.
Abstract: This paper considers the two different flow shop scheduling problems that arise when, in a two-machine problem, one machine is characterized by sequence-dependent setup times. The objective is to determine a schedule that minimizes makespan. After establishing the optimality of permutation schedules for both of these problems, an efficient dynamic programming formulation is developed for each of them. Each of these formulations is shown to be comparable, from a computational standpoint, to the corresponding formulation of the traveling salesman problem. Then, the relative merits of the dynamic programming and branch and bound approaches to these two scheduling problems are discussed.
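
For reference, the traveling salesman formulation used above as a computational yardstick is the classic subset dynamic program (Bellman-Held-Karp). The sketch below shows that yardstick, not the paper's own scheduling recursion, and the distance matrix is made up.

```python
from itertools import combinations

def held_karp(dist):
    # Classic subset DP for the traveling salesman problem: O(n^2 * 2^n) time.
    # dp[(S, j)] = cheapest way to leave city 0, visit exactly the set S, end at j.
    n = len(dist)
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                dp[(S, j)] = min(dp[(S - {j}, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# Four-city example with a made-up (asymmetric) distance matrix.
d = [[0, 2, 9, 10],
     [1, 0, 6, 4],
     [15, 7, 0, 8],
     [6, 3, 12, 0]]
print(held_karp(d))  # length of the cheapest tour that returns to city 0
```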

Journal ArticleDOI
TL;DR: An expository derivation of the square-root information filter/smoother is given, based on the recursive least-squares method, which is easier to grasp, interpret and generalize than are the dynamic programming arguments previously used.

Journal ArticleDOI
TL;DR: In this article, the author formulates and obtains optimal gambling strategies for certain gambling models by setting these models within the framework of dynamic programming (also referred to as Markovian decision processes) and then using results in this field.
Abstract: In this paper the author formulates and obtains optimal gambling strategies for certain gambling models. This is done by setting these models within the framework of dynamic programming (also referred to as Markovian decision processes) and then using results in this field.


Journal ArticleDOI
TL;DR: This paper presents an optimization procedure which would offer a much simpler and faster procedure than dynamic programming in reaching optimal solutions for a special class of resource allocation problems.
Abstract: This paper presents an optimization procedure which would offer a much simpler and faster procedure than dynamic programming in reaching optimal solutions for a special class of resource allocation problems. The solution method is based upon an incremental analysis and does not require further computation beyond the conversion of a payoff table to a table of marginal payoffs by simple subtractions. The optimality of the incremental solution will be demonstrated by a heuristic proof with several examples; and a numerical problem to illustrate the use of incremental analysis as well as to compare it with the solution procedure of dynamic programming will also be given.
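
A hedged sketch of this kind of incremental analysis: convert each payoff column to marginal payoffs and repeatedly fund the largest remaining increment. This coincides with the dynamic programming optimum only under the diminishing-marginal-payoff structure assumed for this special class of problems; the payoff numbers below are invented.

```python
import heapq

def incremental_allocation(budget, payoffs):
    # payoffs[i][x] = (invented) payoff of giving x units to activity i.
    # Keep the largest remaining marginal payoff on a heap and fund it,
    # one unit at a time, until the budget is exhausted.
    heap = []
    for i, col in enumerate(payoffs):
        if len(col) > 1:
            heapq.heappush(heap, (-(col[1] - col[0]), i, 0))
    alloc = [0] * len(payoffs)
    for _ in range(budget):
        if not heap:
            break
        _neg_gain, i, x = heapq.heappop(heap)
        alloc[i] = x + 1
        if x + 2 < len(payoffs[i]):
            heapq.heappush(heap, (-(payoffs[i][x + 2] - payoffs[i][x + 1]), i, x + 1))
    return alloc

# Two activities, three units of budget, diminishing marginal payoffs.
print(incremental_allocation(3, [[0, 4, 6, 7], [0, 3, 5, 6]]))  # -> [2, 1]
```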

Journal ArticleDOI
TL;DR: In this paper, an optimization algorithm that uses value iteration dynamic programming and simulation in conjunction with penalty costs to derive long-term operating policies for water resource systems is described, illustrated by numerical examples.
Abstract: An optimization algorithm is described that uses value iteration dynamic programming and simulation in conjunction with penalty costs to derive long-term operating policies for water resource systems. Very efficient value iteration procedures are developed for two types of problems, illustrated by numerical examples. An indication is given of how the methods described can be applied to multireservoir systems by using the concept of an equivalent reservoir.
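
As a generic illustration of the value iteration scheme (not the paper's reservoir model), the following sketch computes a stationary operating policy for a small finite-state model; the transition and reward arrays are hypothetical stand-ins, e.g. negative penalty costs on storage or release violations.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    # P[a, s, s'] : transition probabilities, R[a, s] : expected one-step rewards
    # (hypothetical stand-ins; for a reservoir these could be negative penalty costs).
    n_states = P.shape[1]
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)        # Q[a, s]: one-step lookahead value of action a
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V_new, Q.argmax(axis=0)     # values and a stationary operating policy

# Tiny two-state, two-action illustration with invented numbers.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.4, 0.6]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
print(value_iteration(P, R))
```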

Journal ArticleDOI
TL;DR: The subject of this paper is the application of stochastic control theory to resource allocation under uncertainty in the context of the general problem of allocating resources to repair machines where it is possible to perform a limited number of diagnostic experiments to learn more about potential failures.
Abstract: The subject of this paper is the application of stochastic control theory to resource allocation under uncertainty. In these problems it is assumed that the results of a given allocation of resources are not known with certainty, but that a limited number of experiments can be performed to reduce the uncertainty. The problem is to develop a policy for performing experiments and allocating resources on the basis of the outcome of the experiments such that a performance index is optimized. The problem is first analyzed using the basic stochastic dynamic programming approach. A computationally practical algorithm for obtaining an approximate solution is then developed. This algorithm preserves the "closed-loop" feature of the dynamic programming solution in that the resulting decision policy depends both on the results of past experiments and on the statistics of the outcomes of future experiments. In other words, the present decision takes into account the value of future information. The concepts are discussed in the context of the general problem of allocating resources to repair machines where it is possible to perform a limited number of diagnostic experiments to learn more about potential failures. Illustrative numerical results are given.

Journal ArticleDOI
TL;DR: In this article, a discrete model representation of turbo-charged diesel engines is recast in state-space form and used in an optimal control study, which leads to extensions and simplification of known theoretical results.
Abstract: Recently proposed discrete-model representations of turbo-charged diesel engines are recast in state-space form and used in an optimal control study. It is known that the turbo-charger has considerable influence on the dynamical behaviour of the engine, and this is accounted for by incorporating a weighting of the air-fuel ratio in the performance index. This leads to a rather unusual form of the quadratic performance functional, and the optimal controls are determined using both dynamic programming and discrete minimum principle techniques. This leads to extensions and some simplification of known theoretical results.

Journal ArticleDOI
TL;DR: It is shown that, providing the authors admit mixed policies, these gaps can be filled in and that, furthermore, the dynamic programming calculations may, in some general circumstances, be carried out initially in terms of pure policies, and optimal mixed policies can be generated from these.
Abstract: This note deals with the manner in which dynamic problems, involving probabilistic constraints, may be tackled using the ideas of Lagrange multipliers and efficient solutions. Both the infinite and finite time horizon are considered. Under very general conditions, Lagrange-multiplier and efficient-solution methods will readily produce, via the dynamic-programming formulations, classes of optimal solutions. However there may be gaps in the constraint levels thus generated. It is shown that, providing we admit mixed policies, these gaps can be filled in and that, furthermore, the dynamic programming calculations may, in some general circumstances, be carried out initially in terms of pure policies, and optimal mixed policies can be generated from these. The probabilistic constraints are treated in two ways, viz., by considering situations in which constraints are placed on the probabilities with which systems enter into specific states, and by considering situations in which minimum variances of performance are required subject to constraints on mean performance. Finally the mean/variance problem is viewed from the point of view of efficient solution theory. It is seen that some of the main variance-minimization theorems may be related to this more general theory, and that efficient solutions may also be obtained using dynamic-programming methods.

Journal ArticleDOI
TL;DR: The objective of the study is to determine the size of the canal system, the surface reservoir, and the ground water pumping facilities, such that when the system is operated optimally, the capital, operation, and maintenance costs of meeting given irrigation water requirements are minimized.
Abstract: A mathematical model is presented as a tool in facilitating the analysis and optimization of the conjunctive use of surface and ground water resources of a subsystem of the Indus Basin irrigation system in Pakistan. The objective of the study is to determine the size of the canal system, the surface reservoir, and the ground water pumping facilities, such that when the system is operated optimally, the capital, operation, and maintenance costs of meeting given irrigation water requirements are minimized. A mathematical model of the Marala Ravi link canal subsystem, which is uncoupled from other areas of the large, complex Indus Basin irrigation system, is developed. The mathematical programming problem is large scale and must be decomposed into subproblems. The subproblems are solved by dynamic programming whereas the design or outer problem is solved by an efficient direct search technique.

Journal ArticleDOI
TL;DR: Two types of representation theorems and properties of sets of optimal policies are investigated in detail for r-sdp and r-msdp, and a subclass of r-msdp, r-imsdp, is introduced in the last half of this paper.

Journal ArticleDOI
TL;DR: This paper gives an algorithm for finding optimal stationary policies in the dynamic programming with the recursive additive system in the case of finite state and action spaces and gives several interesting examples with numerical computations to obtain optimal policies.
Abstract: In the paper [5], N. Furukawa and S. Iwamoto defined Markovian decision processes with a new, broad class of reward systems, namely recursive reward functions, and studied the existence and properties of optimal policies. Under some conditions on the reward functions, they proved that there exists a (p, s)-optimal stationary policy and that, in the case of a finite action space, there exists an optimal stationary policy. These results generalize results of D. Blackwell [3]. In this paper the author defines a dynamic programming problem with a recursive additive system, which is one type of Markovian decision process with recursive reward functions as defined by the previous authors [5]. The paper gives an algorithm for finding optimal stationary policies in dynamic programming with the recursive additive system in the case of finite state and action spaces. Furthermore, several interesting examples with numerical computations of optimal policies are given. The motivation for considering the dynamic programming problem with the recursive additive system is the following: if we interpret the "reward" in a narrow sense, for instance money in economic systems or loss in statistical decision problems, it is appropriate to accept the total sum of stage-wise rewards as a performance index; this is the so-called additive reward system. Many practical problems in engineering, however, allow us to interpret the "reward" in a wider sense, and in those problems we often encounter complicated reward systems that are more than additive. We have an interesting class of such complicated reward systems in which we can find a common feature named "recursive additive". By treating the various reward systems belonging to this class at the same time, we can make clear, as a dynamic programming problem, an important common property within the class. Our proofs are partly based on Blackwell [2].
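
For orientation, the standard additive discounted case mentioned above can be solved by Howard-style policy iteration. The sketch below covers only that familiar special case, not the paper's recursive-additive generalization, and its transition and reward numbers are made up.

```python
import numpy as np

def policy_iteration(P, R, beta=0.9):
    # Standard additive discounted case only (not the recursive-additive system):
    # P[a, s, s'] transition probabilities, R[a, s] expected rewards, beta discount.
    n_states = P.shape[1]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - beta * P_pi) v = r_pi for the current policy.
        P_pi = P[policy, np.arange(n_states)]
        r_pi = R[policy, np.arange(n_states)]
        v = np.linalg.solve(np.eye(n_states) - beta * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead.
        new_policy = (R + beta * (P @ v)).argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return v, policy
        policy = new_policy

# Tiny two-state, two-action example with made-up numbers.
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.6, 0.4], [0.1, 0.9]]])
R = np.array([[1.0, 0.5],
              [0.2, 1.5]])
print(policy_iteration(P, R))
```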


Book ChapterDOI
Art Lew
20 Aug 1974
TL;DR: The problem of optimally allocating limited resources among competing processes may be formulated as a problem in finding the shortest path in a directed graph, provided a quantitative measure of the performance of each process as a function of its resource allocation is suitably defined.
Abstract: The problem of optimally allocating limited resources among competing processes may be formulated as a problem in finding the shortest path in a directed graph, provided a quantitative measure of the performance of each process as a function of its resource allocation can be suitably defined. If this measure is also a function of time, scheduling problems arise, so that optimal allocations become time-varying and may depend upon various precedence relations or constraints among the processes. Dynamic programming approaches to such allocation and scheduling problems are presented in the context of parallel processing.
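
A small sketch of the allocation-as-path view, under the assumption of integer resource units and a known payoff table: the stagewise recursion below walks a layered graph whose nodes are (process, remaining resource) pairs. The function name and payoff numbers are illustrative.

```python
def allocate(budget, payoffs):
    # Stage k holds best[b] = best total payoff using the first k processes
    # with b units of resource; equivalently, the best path through a layered
    # (process, remaining-resource) graph.
    # payoffs[k][x] = hypothetical payoff of giving x units to process k.
    best = [0.0] * (budget + 1)
    choices = []
    for col in payoffs:
        new_best = [float("-inf")] * (budget + 1)
        new_choice = [0] * (budget + 1)
        for b in range(budget + 1):
            for x in range(min(b, len(col) - 1) + 1):   # units given to this process
                value = best[b - x] + col[x]
                if value > new_best[b]:
                    new_best[b], new_choice[b] = value, x
        best = new_best
        choices.append(new_choice)
    # Recover the allocation by walking the stored choices backwards.
    alloc, b = [], budget
    for new_choice in reversed(choices):
        alloc.append(new_choice[b])
        b -= new_choice[b]
    return best[budget], list(reversed(alloc))

# Three processes, four units of resource, made-up payoff columns.
print(allocate(4, [[0, 5, 8, 9], [0, 4, 7, 9], [0, 3, 5, 6]]))  # -> (15.0, [2, 2, 0])
```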

Journal ArticleDOI
TL;DR: The modified application of dynamic programming is made for a single reservoir system illustrating the technique and the achievement of near optimum performance.
Abstract: Reduction of computation in reservoir operation optimization problems can be made through a modification of the optimization technique instead of limiting development of the system models. Considerations are presented herein which lead to the development of a modified application of deterministic optimization techniques. The modification enables reduction of computation to take place while achieving results that approximate the optimum. The modified application of dynamic programming is made for a single reservoir system illustrating the technique and the achievement of near optimum performance.


Journal ArticleDOI
TL;DR: The simplex method is specialized for a special class of networks with gains arising in discounted deterministic Markov decision models.
Abstract: The simplex method is specialized for a special class of networks with gains arising in discounted deterministic Markov decision models.

Journal ArticleDOI
01 Feb 1974 - INFOR
TL;DR: An optimal algorithm for scheduling a project network when the tasks are operating in a resource pool which has a finite upper bound on each resource type is presented.
Abstract: This paper presents an optimal algorithm for scheduling a project network when the tasks are operating in a resource pool which has a finite upper bound on each resource type. A state-space description of task processing time is used in the dynamic programming formulation. A finite processing-time interval and a penalty cost rate are assigned to each task to cost the system in the optimal search.