
Showing papers on "Dynamic programming" published in 1980


Journal ArticleDOI
TL;DR: An investigation of the single-vehicle, many-to-many, immediate-request dial-a-ride problem is developed, using a Dynamic Programming approach whose computational effort is asymptotically lower than that of the classical Dynamic Programming algorithm applied to a Traveling Salesman Problem of the same size.
Abstract: An investigation of the single-vehicle, many-to-many, immediate-request dial-a-ride problem is developed in two parts, I and II. Part I focuses on the "static" case of the problem. In this case, intermediate requests that may appear during the execution of the route are not considered. A generalized objective function is examined: the minimization of a weighted combination of the time to service all customers and of the total degree of "dissatisfaction" experienced by them while waiting for service. This dissatisfaction is assumed to be a linear function of the waiting and riding times of each customer. Vehicle capacity constraints and special priority rules are part of the problem. A Dynamic Programming approach is developed. The algorithm exhibits a computational effort which, although an exponential function of the size of the problem, is asymptotically lower than the corresponding effort of the classical Dynamic Programming algorithm applied to a Traveling Salesman Problem of the same size. Part II extends this approach to solving the equivalent "dynamic" case. In this case, new customer requests are automatically eligible for consideration at the time they occur. The procedure is an open-ended sequence of updates, each following every new customer request. The algorithm optimizes only over known inputs and does not anticipate future customer requests. Indefinite deferment of a customer's request is prevented by the priority rules introduced in Part I. Examples in both "static" and "dynamic" cases are presented.
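
For comparison, the classical Dynamic Programming baseline the abstract refers to is the Held-Karp recursion for the Traveling Salesman Problem. Below is a minimal sketch of that baseline only (not the paper's dial-a-ride algorithm, whose state space is further restricted by precedence, capacity, and priority rules):

```python
from itertools import combinations

def held_karp(dist):
    """Classical Held-Karp TSP dynamic program: O(n^2 * 2^n) time.

    dist[i][j] is the travel cost from stop i to stop j; the tour
    starts and ends at stop 0.  This is the baseline whose effort the
    dial-a-ride recursion improves on asymptotically.
    """
    n = len(dist)
    # best[(S, j)] = cheapest cost of starting at 0, visiting every
    # stop in frozenset S exactly once, and ending at j (j in S).
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                best[(S, j)] = min(
                    best[(S - {j}, k)] + dist[k][j] for k in subset if k != j
                )
    full = frozenset(range(1, n))
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

# Example: a small symmetric instance.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
print(held_karp(d))  # 23 for this instance
```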

592 citations


01 Aug 1980
TL;DR: This paper describes the design schema of experienced software designers and illustrates its operation by considering thinking-aloud protocols collected from both expert and novice designers.
Abstract: A design task involves a complex set of processes. Starting from a global statement of the problem, a designer must develop a precise plan for a solution that can be realized in some concrete way. Software design, which is investigated in this paper, is the process of translating a set of task requirements into a structural description of a computer program that will perform the task. Through experience, designers acquire knowledge concerning the overall structure of a good design and of the processes of generating one. Using this knowledge, they direct their actions to ensure that their designs will satisfy these constraints. We call this abstract knowledge about designs and design processes, along with a set of procedures which implement these processes, a 'design schema'. This paper describes the design schema of experienced software designers and illustrates its operation by considering thinking-aloud protocols collected from both expert and novice designers. (Author)

396 citations


Book ChapterDOI
01 Jan 1980
TL;DR: In this article, the authors consider two questions arising in the analysis of heuristic algorithms: (i) is there a general procedure involved when analysing a particular problem heuristic, and (ii) how can heuristic procedures be incorporated into optimising algorithms such as branch and bound?
Abstract: We consider two questions arising in the analysis of heuristic algorithms. (i) Is there a general procedure involved when analysing a particular problem heuristic? (ii) How can heuristic procedures be incorporated into optimising algorithms such as branch and bound?

241 citations


Journal ArticleDOI
TL;DR: This paper presents and compares two possible manipulation methods for optimizing the weekly operating policy of multireservoir hydroelectric power systems and shows that the suboptimal global feedback operating policy gives better results than the optimal local feedback operating policy.
Abstract: The optimization of the weekly operating policy of multireservoir hydroelectric power systems is a stochastic nonlinear programing problem. For small systems this problem can be solved by dynamic programing, but for large systems there is yet no method of solving this problem directly, so that one must resort to mathematical manipulations in order to solve it. This paper presents and compares two possible manipulation methods for solving this problem. The first, called the one-at-a-time method, consists in breaking up the original multivariable problem into a series of one-state variable subproblems that are solved by dynamic programing. The final result is an optimal local feedback operating policy for each reservoir. The second method, called the aggregation/decomposition method, consists in breaking up the original n-state variable stochastic optimization problem into n stochastic optimization subproblems of two-state variables that are also solved by dynamic programing. The final result is a suboptimal global feedback operating policy for the system of n reservoirs. The two methods are then applied to a network of six reservoir-hydroplant complexes, and the results obtained are reported. It is shown that the suboptimal global feedback operating policy gives better results than the optimal local feedback operating policy.

223 citations


Journal ArticleDOI
TL;DR: A Dynamic Programming approach for sequencing a given set of jobs on a single machine is developed, so that the total processing cost is minimized; the algorithm offers savings in computational effort as compared to the classical Dynamic Programming approach to sequencing problems.
Abstract: A Dynamic Programming approach for sequencing a given set of jobs on a single machine is developed, so that the total processing cost is minimized. It is assumed that there are N distinct groups of jobs, where the jobs within each group are identical. A very general, yet additive, cost function is assumed. This function includes the overall completion time minimization problem as well as the total weighted completion time minimization problem as special cases. Priority considerations are included; no job may be shifted by more than a prespecified number of positions from its initial, First Come-First Served position in a prescribed sequence. The running time and the storage requirement of the Dynamic Programming algorithm are both polynomial functions of the maximum number of jobs per group, and exponential functions of the number of groups N. This makes the approach practical for real-world problems in which this latter number is small. More importantly, the algorithm offers savings in computational effort as compared to the classical Dynamic Programming approach to sequencing problems, savings which are due solely to taking advantage of group classifications. Specific cost functions, as well as a real-world problem for which the algorithm is particularly well-suited, are examined. The problem application is the optimal sequencing of aircraft landings at an airport. A numerical example as well as suggestions on possible extensions to the model are also presented.
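
To illustrate the group-based state space, here is a minimal sketch of such a recursion for one of the special cases named in the abstract, total weighted completion time. The job data and the memoized formulation are illustrative assumptions, not the paper's algorithm or its priority-shift constraints:

```python
from functools import lru_cache

def group_sequencing(counts, proc, weight):
    """DP over group counts rather than individual jobs.

    counts[g] identical jobs exist in group g, each with processing
    time proc[g] and weight weight[g]; the objective here is total
    weighted completion time.  The state is the tuple of jobs already
    sequenced from each group, so the table has prod(counts[g]+1)
    entries instead of factorially many sequences.
    """
    N = len(counts)

    @lru_cache(maxsize=None)
    def f(done):
        # done[g] jobs of group g are already finished; the elapsed time
        # is fully determined by 'done', so it need not be in the state.
        elapsed = sum(done[g] * proc[g] for g in range(N))
        best = 0 if done == tuple(counts) else float("inf")
        for g in range(N):
            if done[g] < counts[g]:
                nxt = list(done)
                nxt[g] += 1
                finish = elapsed + proc[g]
                best = min(best, weight[g] * finish + f(tuple(nxt)))
        return best

    return f(tuple(0 for _ in range(N)))

# Two groups: 3 short urgent jobs and 2 long cheap jobs.
print(group_sequencing([3, 2], proc=[1, 4], weight=[5, 1]))  # 48
```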

211 citations


Journal ArticleDOI
Paolo Toth
TL;DR: New dynamic programming algorithms for the solution of the Zero-One Knapsack Problem are developed and extensive computational results indicate that the algorithms proposed are superior to the best branch and bound and dynamic programming methods.
Abstract: New dynamic programming algorithms for the solution of the Zero-One Knapsack Problem are developed. Original recursive procedures for the computation of the Knapsack Function are presented and the utilization of bounds to eliminate states not leading to optimal solutions is analyzed. The proposed algorithms, according to the nature of the problem to be solved, automatically determine the most suitable procedure to be employed. Extensive computational results showing the efficiency of the new and the most commonly utilized algorithms are given. The results indicate that, for difficult problems, the algorithms proposed are superior to the best branch and bound and dynamic programming methods.
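
For context, the underlying Knapsack Function recursion that such algorithms refine is the textbook 0/1 knapsack dynamic program; a minimal sketch follows (Toth's bound-based elimination of states is not reproduced here):

```python
def knapsack(values, weights, capacity):
    """Textbook 0/1 knapsack dynamic program.

    f[c] holds the best value achievable with capacity c using the
    items considered so far (the 'Knapsack Function' of the abstract).
    """
    f = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Traverse capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            f[c] = max(f[c], f[c - w] + v)
    return f[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```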

155 citations


Proceedings ArticleDOI
28 Apr 1980
TL;DR: A quadrangle inequality condition for obtaining speed-ups is given; it follows immediately from this general condition that the construction of optimal binary search trees may be speeded up from O(n³) steps to O(n²), a result first obtained by Knuth using a different and rather complicated argument.
Abstract: Dynamic programming is one of several widely used problem-solving techniques in computer science and operations research. In applying this technique, one always seeks to find speed-up by taking advantage of special properties of the problem at hand. However, in the current state of the art, ad hoc approaches for speeding up seem to be characteristic; few general criteria are known. In this paper we give a quadrangle inequality condition for rendering speed-up. This condition is easily checked, and can be applied to several apparently different problems. For example, it follows immediately from our general condition that the construction of optimal binary search trees may be speeded up from O(n³) steps to O(n²), a result that was first obtained by Knuth using a different and rather complicated argument.
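
A sketch of the speed-up in question: the standard O(n²) optimal binary search tree construction, where each root search is restricted to the interval that the quadrangle inequality guarantees (successful-search frequencies only, for brevity):

```python
def optimal_bst(freq):
    """Optimal binary search tree in O(n^2) via Knuth's root bounds.

    freq[i] is the access frequency of key i.  cost[i][j] is the least
    weighted search cost for keys i..j and root[i][j] an optimal root;
    the quadrangle inequality guarantees
    root[i][j-1] <= root[i][j] <= root[i+1][j], which is what shrinks
    the inner loop from O(n) to amortised O(1) per entry.
    """
    n = len(freq)
    pre = [0] * (n + 1)                      # prefix sums of frequencies
    for i, f in enumerate(freq):
        pre[i + 1] = pre[i] + f

    cost = [[0] * n for _ in range(n)]
    root = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]
        root[i][i] = i

    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            w = pre[j + 1] - pre[i]
            best, best_r = float("inf"), -1
            # Knuth's bounds on the candidate roots.
            for r in range(root[i][j - 1], root[i + 1][j] + 1):
                left = cost[i][r - 1] if r > i else 0
                right = cost[r + 1][j] if r < j else 0
                if left + right < best:
                    best, best_r = left + right, r
            cost[i][j] = w + best
            root[i][j] = best_r
    return cost[0][n - 1]

print(optimal_bst([4, 2, 6, 3]))  # 26: weighted search cost of the best tree
```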

116 citations


Proceedings ArticleDOI
01 Dec 1980
TL;DR: In this article, the authors consider the time-optimal control of a dynamic system with jump parameters and derive optimality conditions using a dynamic programming approach, which is used for solution of a simple example.
Abstract: We consider the time-optimal control of a dynamic system with jump parameters. The motivating example is that of a parts-manufacturing system in which machines fail and are repaired according to known Markov processes. It is desired to obtain a feedback control of parts-routing to machines, which minimizes the expected completion time of a given production target. Using a Dynamic Programming approach, we derive optimality conditions. These are used for the solution of a simple example. It is seen that closed-form solutions would be very hard to obtain for large problems, so alternative approaches are also discussed.

111 citations


Journal ArticleDOI
TL;DR: A technique for finding MINSUM and MINMAX solutions to multi-criteria decision problems, called Multi Objective Dynamic Programming, capable of handling a wide range of linear, nonlinear, deterministic and stochastic multi-criteria decision problems, is presented and illustrated.
Abstract: A technique for finding MINSUM and MINMAX solutions to multi-criteria decision problems, called Multi Objective Dynamic Programming, capable of handling a wide range of linear, nonlinear, deterministic and stochastic multi-criteria decision problems, is presented and illustrated. Multiple objectives are considered by defining an adjoint state space and solving an (N + 1)-terminal optimisation problem. The method efficiently generates both individual (criterion) optima and multiple criteria solutions in a single pass. Sensitivity analysis on weights over the various objectives is easily performed.

83 citations


Journal ArticleDOI
TL;DR: Concepts of dynamic programming are used within a discrete time Markovian model for the development of a graded population and optimal recruitment and transition patterns are determined.
Abstract: In this paper, concepts of dynamic programming are used within a discrete time Markovian model for the development of a graded population. Optimal recruitment and transition patterns are determined by minimizing expected discrepancies between actual states and preferred goals.

54 citations


Journal ArticleDOI
TL;DR: Two solution methods are offered for the no-shortage stock control problem under linearly increasing demand; the heuristic of the first, "myopic" method is also exploited in the second method, which is based on a dynamic programming formulation.
Abstract: Two solution methods are offered for the no-shortage stock control problem under linearly increasing demand. The heuristic of the first "myopic" method is also exploited in the second method, which is based on a dynamic programming formulation. The DP formulation is not only trivial to solve computationally, but also offers ready-made sensitivity analyses. Unlike the other method, it also readily extends to more complicated models.
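
A minimal sketch of the style of replenishment-point dynamic program the abstract alludes to, under an assumed demand rate a + b·t, a fixed cost K per order, and a holding cost h per unit per unit time; the uniform time grid and the parameter names are illustrative assumptions, not the paper's formulation:

```python
import math

def replenishment_dp(a, b, K, h, horizon, n_grid=200):
    """Replenishment-point DP for no-shortage inventory with demand
    rate a + b*t (linearly increasing over the planning horizon).

    An order placed at time s covers demand exactly until the next
    replenishment at time e, so the plan is determined by the set of
    replenishment times, chosen here from a uniform grid.
    """
    ts = [horizon * k / n_grid for k in range(n_grid + 1)]

    def cum(t):                      # cumulative demand through time t
        return a * t + b * t * t / 2.0

    def hold(s, e):
        # holding cost of ordering at s exactly enough to last until e:
        # h * integral_s^e (cum(e) - cum(t)) dt
        integral = cum(e) * (e - s) - (a * (e * e - s * s) / 2.0
                                       + b * (e ** 3 - s ** 3) / 6.0)
        return h * integral

    f = [math.inf] * (n_grid + 1)
    f[n_grid] = 0.0                  # no demand left after the horizon
    for i in range(n_grid - 1, -1, -1):
        f[i] = min(K + hold(ts[i], ts[j]) + f[j]
                   for j in range(i + 1, n_grid + 1))
    return f[0]                      # first order is placed at time 0

print(round(replenishment_dp(a=5.0, b=2.0, K=100.0, h=1.0, horizon=12.0), 2))
```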

Journal ArticleDOI
TL;DR: The problem of the daily control of a water distribution network, including pumping devices and storage capacities, in order to supply the consumers at the lowest cost, is formulated as a constrained optimal control problem.

Journal ArticleDOI
TL;DR: In this article, the general inverse problem of optimal control is considered from a dynamic programming point of view, and necessary and sufficient conditions are developed which two integral criteria must satisfy if they are to yield the same optimal feedback law, the dynamics being fixed.
Abstract: The general inverse problem of optimal control is considered from a dynamic programming point of view. Necessary and sufficient conditions are developed which two integral criteria must satisfy if they are to yield the same optimal feedback law, the dynamics being fixed. Specializing to the linear-quadratic case, it is shown how the general results given here recapture previously obtained results for quadratic criteria with linear dynamics.

Journal ArticleDOI
TL;DR: Extensions of the finite "secretary problem" in dynamic programming are presented which incorporate not only a parameter for the uncertain availability of previous alternatives, but also one for the uncertainty of the alternative currently being observed.



Journal ArticleDOI
TL;DR: In this paper, the authors present an application of a method for finding the global solution to a problem in integers with a separable objective function of a very general form, and show that there is a relationship between an integer problem with a nonlinear objective function and many constraints and a series of nonlinear problems with only a single constraint, each of which can be solved sequentially using dynamic programming.
Abstract: This paper presents an application of a method for finding the global solution to a problem in integers with a separable objective function of a very general form. This report shows that there is a relationship between an integer problem with a separable nonlinear objective function and many constraints and a series of nonlinear problems with only a single constraint, each of which can be solved sequentially using dynamic programming. The first solution to any of the individual smaller problems that, in addition, satisfies the original constraints will be the optimal solution to the multiply-constrained problem.
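
A sketch of the kind of single-constraint subproblem the reduction produces, solved by dynamic programming over the right-hand side of that constraint; the maximization orientation and the example functions are illustrative assumptions:

```python
def separable_dp(fs, gs, bounds, budget):
    """Single-constraint separable integer program by DP:

        max  sum_j fs[j](x_j)
        s.t. sum_j gs[j](x_j) <= budget,  0 <= x_j <= bounds[j], integer,

    where each gs[j] returns a nonnegative integer amount of the
    constraint's right-hand side consumed.
    """
    NEG = float("-inf")
    # value[c] = best objective over the variables handled so far
    # using at most c units of the right-hand side.
    value = [0.0] * (budget + 1)
    for f, g, ub in zip(fs, gs, bounds):
        new = [NEG] * (budget + 1)
        for c in range(budget + 1):
            for x in range(ub + 1):
                used = g(x)
                if used <= c and value[c - used] + f(x) > new[c]:
                    new[c] = value[c - used] + f(x)
        value = new
    return value[budget]

# Three integer activities with nonlinear returns and costs.
fs = [lambda x: 10 * x - x * x, lambda x: 6 * x, lambda x: 8 * (x ** 0.5)]
gs = [lambda x: 2 * x, lambda x: 3 * x, lambda x: x]
print(separable_dp(fs, gs, bounds=[4, 4, 4], budget=10))  # 37.0
```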

Journal ArticleDOI
TL;DR: In this article, a set of optimal stand densities over one rotation of an even-aged stand is derived mathematically using a discrete time, continuous state dynamic programming model for forestry.
Abstract: A set of optimal stand densities over one rotation of an even-aged stand is derived mathematically using a discrete time, continuous state dynamic programming model. The use of the calculus approach to search for optimal solutions stage by stage is new for forestry. The criterion for optimization used is the maximum physical harvest over one rotation. When the stand growth model has the biologically reasonable form suggested here, it is easy to determine the optimal stand density over any number of growth periods. This, in turn, makes it easy to determine the optimal rotation age by sensitivity analysis of total return on the number of stages in the decision process.


Journal ArticleDOI
TL;DR: A framework is developed, taking the form of a network, which permits sequences of management decisions to be conveniently formulated and their associated costs and benefits specified; the study shows how to select suitable decision sequences, and what proportion of the resource to manage with each selected sequence, so as to optimize some specified objective and meet the constraints imposed on management of the resource.
Abstract: This paper deals with a mathematical model designed to provide guidelines for managing a land resource over an extended period of time. We develop a framework which permits sequences of management decisions to be conveniently formulated, and their associated costs and benefits specified. This takes the form of a network. Each path in the network represents a possible decision sequence. We study how to select suitable decision sequences and what proportion of the resource to manage with each selected sequence, so as to optimize some specified objective and meet the constraints imposed on management of the resource. An LP model is formulated. The solution strategy decomposes the LP matrix using Dantzig-Wolfe decomposition and solves the subproblems efficiently by dynamic programming or a network flow algorithm. Computational aspects are discussed, and the concepts and procedures are illustrated in the Appendix for forest management.
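
A sketch of the network subproblem in isolation: finding a cheapest decision sequence (path) through an acyclic decision network by dynamic programming. The node names and costs are illustrative assumptions, and the Dantzig-Wolfe master problem is not shown:

```python
def best_sequence(arcs, source, sink):
    """Cheapest decision sequence in an acyclic decision network.

    arcs maps each node to a list of (successor, cost) pairs; every
    source-to-sink path is one management-decision sequence.  Negative
    costs can stand in for revenues, since the network is acyclic.
    """
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def f(node):
        if node == sink:
            return 0.0, (sink,)
        best = (float("inf"), ())
        for nxt, cost in arcs.get(node, []):
            sub_cost, sub_path = f(nxt)
            if cost + sub_cost < best[0]:
                best = (cost + sub_cost, (node,) + sub_path)
        return best

    return f(source)

# Two periods of "thin", "harvest", or "defer" decisions on a toy network.
arcs = {
    "start":     [("thin_1", 40.0), ("defer_1", 0.0)],
    "thin_1":    [("harvest_2", -120.0), ("defer_2", 0.0)],
    "defer_1":   [("harvest_2", -90.0), ("defer_2", 0.0)],
    "harvest_2": [("end", 0.0)],
    "defer_2":   [("end", 0.0)],
}
cost, path = best_sequence(arcs, "start", "end")
print(cost, path)  # -90.0 along start -> defer_1 -> harvest_2 -> end
```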

Journal ArticleDOI
TL;DR: In this article, the variance constraint is incorporated as a penalty term in a nonseparable Lagrangian problem which is solved by a two-stage procedure, and the optimal solution to the non-separable problem is found by a simple search algorithm.
Abstract: A dynamic programing solution procedure is developed for a class of variance-constrained reservoir control problems. The variance constraint is incorporated as a penalty term in a nonseparable Lagrangian problem which is solved by a two-stage procedure. First, the difficulty associated with the nonseparability of the problem is resolved by the consideration of a modified simpler separable problem which is solved by a simple additive dynamic programing recursion. Then the optimal solution to the nonseparable problem is found by a simple search algorithm. The entire discussion is devoted to the mathematical aspects of the problem and no consideration is given to the applicability of variance constraints as far as water resources management problems are concerned. The procedure is applied to a simple numerical example.

Journal ArticleDOI
TL;DR: A new dynamic programming algorithm is presented which utilizes orthogonal polynomials to decompose the multidimensional problem into a series of one-dimensional problems.
Abstract: A particularly challenging problem is the development of appropriate optimal control logic for adjustable valves, gates and regulators that includes consideration of unsteady sewer flows and backwater effects. Dynamic programming appears to be a particularly attractive optimization approach, but the multidimensional character of the control problem creates an enormous computational burden. A new dynamic programming algorithm is presented which utilizes orthogonal polynomials to decompose the multidimensional problem into a series of one-dimensional problems. The algorithm is linked with an unsteady flow model which is solved numerically by a fully implicit scheme that uses an interrupted double sweep technique to deal with supercritical flow, abrupt conduit transitions, and control gate linearization errors. The algorithm is applied to the planned San Francisco North Shore Outfalls Consolidation Project.

Journal ArticleDOI
TL;DR: This paper compares the performance of dynamic programming, the maximum principle, the quasilinearization technique, the functional conjugate gradient method and the sequential conjugate gradient restoration algorithm in computing optimal control policies for multistage constrained nonlinear systems.

Journal ArticleDOI
TL;DR: A well-known sufficient condition for optimality involving the Bellman equation of dynamic programming is refined, via a partial differential inequality (PDI), into a condition that is necessary as well as sufficient for optimality.
Abstract: A well-known sufficient condition for optimality involving the Bellman equation of dynamic programming applies only in exceptional circumstances when the Bellman equation has a smooth solution. We give a nontechnical presentation, with examples, of a refinement of this condition involving a partial differential inequality which is necessary as well as sufficient for optimality.
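
For reference, the smooth-case sufficient condition the abstract starts from can be stated as follows (a standard textbook form; the paper's inequality refinement is not reproduced here):

```latex
% Problem: minimize  \int_0^T L(x,u)\,dt + g(x(T))
% subject to  \dot{x} = f(x,u), \quad x(0) = x_0.
%
% Verification condition: if W(t,x) is C^1 and satisfies the Bellman equation
\[
  \min_{u}\Big[\, W_t(t,x) + W_x(t,x)\,f(x,u) + L(x,u) \,\Big] = 0,
  \qquad W(T,x) = g(x),
\]
% and some admissible control u^*(\cdot) attains the minimum along its own
% trajectory, then u^* is optimal and W(0,x_0) is the optimal cost.  The
% difficulty addressed by the paper is that W is rarely this smooth.
```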


Journal ArticleDOI
TL;DR: This paper recasts the multiple data path assignment problem solved by Torng and Wilhelm by the dynamic programming method into a minimal covering problem following a switching theoretic approach and achieves minimal cost solutions by assigning weights to the bus-compatible sets present in the feasible solutions.
Abstract: This paper recasts the multiple data path assignment problem solved by Torng and Wilhelm by the dynamic programming method [1] into a minimal covering problem following a switching theoretic approach. The concept of bus compatibility for the data transfers is used to obtain the various ways of interconnecting the circuit modules with the minimum number of buses that allow concurrent data transfers. These have been called the feasible solutions of the problem. The minimal cost solutions are obtained by assigning weights to the bus-compatible sets present in the feasible solutions. Minimization of the cost of the solution by increasing the number of buses is also discussed.

Journal ArticleDOI
Ellis L. Johnson
TL;DR: A general framework of subadditive lifting methods for partitioning, covering, and packing problems is described, along with several existing methods, including linear programming using the dual simplex method over the convex hull of solutions, the group method, branch-and-bound, shortest paths, and dynamic programming.

Journal ArticleDOI
TL;DR: The dynamic programming algorithm is examined with a view to solving it on a multiprocessor cluster; the two methods of control-space division and state-space division are proposed, and their efficiency is analysed in detail.
Abstract: The dynamic programming algorithm is examined with a view to solving it on a multiprocessor cluster. The two methods of control-space division and state-space division are proposed, and their efficiency is analysed in detail. A practical example is given to illustrate the two methods using a two-processor system, and the results are projected to find the minimum achievable time and optimum number of processors for this example.
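
A toy sketch of the state-space division idea for one backward DP stage: the stage's states are split into blocks and each worker updates its block while reading the shared previous-stage table. Python threads stand in for the processors of the paper's multiprocessor cluster, so this only illustrates the division of work, not an actual speed-up; all data and functions are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_stage(states, prev, step_cost, successors, workers=2):
    """One backward DP stage with state-space division.

    Each worker computes new_value[s] = min over decisions of the step
    cost plus the previous-stage value prev[t], for the states s in its
    block, reading the shared prev table.
    """
    def update_block(block):
        return {s: min(step_cost(s, a) + prev[t] for a, t in successors(s))
                for s in block}

    blocks = [states[i::workers] for i in range(workers)]
    new_value = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for part in pool.map(update_block, blocks):
            new_value.update(part)
    return new_value

# Tiny illustration: states 0..9, decision a in {1, 2} moves to min(s + a, 10).
prev = {s: float(10 - s) for s in range(11)}           # previous-stage values
succ = lambda s: [(a, min(s + a, 10)) for a in (1, 2)]
cost = lambda s, a: 1.0 if a == 1 else 1.8
print(parallel_stage(list(range(10)), prev, cost, succ))
```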

Journal ArticleDOI
TL;DR: In this article, the authors studied properties of invariant problems when the state space is arbitrary and the action space is finite and obtained optimal policies for this case when the optimality criterion is that of maximizing the average reward per unit time.

Book ChapterDOI
01 Jan 1980
TL;DR: The compromise programming method is extended to dynamic multicriteria problems, and the Lp-metric is used as the measure of "closeness", providing the closest solution to the ideal one.
Abstract: The compromise programming method is extended to dynamic multicriteria problems. Compromise control minimizes the measure of distance, providing the closest solution to the ideal one. As the measure of "closeness", the Lp-metric is used. The choice of the metric parameter p enables either a maximization of additive group utility or a minimization of maximum individual regret. To avoid difficulties in the application of dynamic programming, the stated problem is transformed into an appropriate form. This problem is solved by a modified dynamic programming algorithm with two computational levels.
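
A minimal sketch of the Lp-metric used to score candidate solutions against the ideal point, with p = 1 corresponding to additive group utility and p = infinity to minimax individual regret; the criteria values and weights are illustrative assumptions:

```python
def lp_distance(point, ideal, weights, p):
    """Weighted Lp distance from a candidate outcome to the ideal point.

    p = 1 adds up weighted regrets across criteria, while
    p = float('inf') keeps only the single worst regret, which is the
    trade-off the choice of p controls in compromise programming.
    """
    regrets = [w * abs(x - star) for x, star, w in zip(point, ideal, weights)]
    if p == float("inf"):
        return max(regrets)
    return sum(r ** p for r in regrets) ** (1.0 / p)

# Two candidate controls scored on three criteria against the ideal point.
ideal = [100.0, 0.0, 1.0]
candidates = {"A": [90.0, 3.0, 0.7], "B": [97.0, 8.0, 0.9]}
for p in (1, 2, float("inf")):
    best = min(candidates,
               key=lambda k: lp_distance(candidates[k], ideal, [1, 1, 1], p))
    print(p, best)
```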