
Advanced Techniques for Dynamic Programming

01 Jan 2013
TL;DR: This chapter contains an extensive discussion of dynamic programming speedup, with additional focus on online algorithms and work functions.
Abstract: This is an overview of dynamic programming with an emphasis on advanced methods. Problems discussed include path problems, construction of search trees, scheduling problems, applications of dynamic programming to sorting problems, server problems, and others. This chapter contains an extensive discussion of dynamic programming speedup. There exist several general techniques in the literature for speeding up naive implementations of dynamic programming. Two of the best known are the Knuth-Yao quadrangle-inequality speedup and the SMAWK/LARSCH algorithms for finding the row minima of totally monotone matrices. The chapter includes "ready to implement" descriptions of the SMAWK and LARSCH algorithms. Another focus is on dynamic programming, online algorithms, and work functions.
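As a concrete illustration of the Knuth-Yao quadrangle-inequality speedup the abstract mentions, here is a minimal Python sketch (my own, not from the chapter) applied to the classic minimum-cost sequential merging recurrence, where the speedup reduces the naive O(n^3) interval DP to O(n^2):

```python
def merge_cost(a):
    """Minimum total cost of repeatedly merging adjacent runs of a,
    where merging a[i:k] and a[k:j] costs sum(a[i:j]).
    Knuth-Yao speedup: the optimal split of [i, j) lies between the
    optimal splits of [i, j-1) and [i+1, j), so total work is O(n^2)."""
    n = len(a)
    pre = [0] * (n + 1)                          # prefix sums
    for i, x in enumerate(a):
        pre[i + 1] = pre[i] + x
    INF = float("inf")
    dp = [[0] * (n + 1) for _ in range(n + 1)]   # dp[i][j]: cost of a[i:j]
    opt = [[0] * (n + 1) for _ in range(n + 1)]  # opt[i][j]: best split k
    for i in range(n):
        opt[i][i + 1] = i + 1                    # base case for the bounds
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            dp[i][j] = INF
            # scan only the sandwiched range of candidate splits
            for k in range(opt[i][j - 1], min(opt[i + 1][j], j - 1) + 1):
                cand = dp[i][k] + dp[k][j] + pre[j] - pre[i]
                if cand < dp[i][j]:
                    dp[i][j] = cand
                    opt[i][j] = k
    return dp[0][n]
```

The function name and interval conventions are assumptions for illustration; the crucial line is the restricted `range(...)` over splits, which is exactly what the quadrangle inequality licenses.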
Citations
01 Jan 1998
TL;DR: This paper provides an O(n^(max(α,β))) time algorithm for finding a minimal-cost prefix-free code in which the encoding alphabet consists of unequal-cost (length) letters, with lengths α and β.
Abstract: In this paper we discuss a variation of the classical Huffman coding problem: finding optimal prefix-free codes for unequal letter costs. Our problem consists of finding a minimal cost prefix-free code in which the encoding alphabet consists of unequal cost (length) letters, with lengths α and β. The most efficient algorithm known previously required O(n^(2+max(α,β))) time to construct such a minimal-cost set of n codewords. In this paper we provide an O(n^(max(α,β))) time algorithm. Our improvement comes from the use of a more sophisticated modeling of the problem combined with the observation that the problem possesses a "Monge property" and that the SMAWK algorithm on monotone matrices can therefore be applied.
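The SMAWK row-minima routine that the abstract applies can be sketched as follows. This is a generic textbook-style implementation of mine, not the authors' code; the matrix is supplied as a callable `M(r, c)` and must be totally monotone (which every Monge matrix is):

```python
def smawk(rows, cols, M):
    """Return {row: argmin column} over the given column list for a
    totally monotone matrix accessed via the function M(r, c).
    Runs in O(len(rows) + len(cols)) matrix probes."""
    if not rows:
        return {}
    # REDUCE: discard columns that cannot contain any row minimum,
    # keeping at most len(rows) survivors.
    stack = []
    for c in cols:
        while stack:
            r = rows[len(stack) - 1]
            if M(r, stack[-1]) <= M(r, c):
                break
            stack.pop()
        if len(stack) < len(rows):
            stack.append(c)
    cols = stack
    # Recurse on the odd-indexed rows.
    result = smawk(rows[1::2], cols, M)
    # INTERPOLATE: each even row's minimum lies between the minima
    # already found for its odd neighbours, so one left-to-right
    # sweep over the surviving columns suffices.
    j = 0
    for i in range(0, len(rows), 2):
        row = rows[i]
        stop = result[rows[i + 1]] if i + 1 < len(rows) else cols[-1]
        best = cols[j]
        while cols[j] != stop:
            j += 1
            if M(row, cols[j]) < M(row, best):
                best = cols[j]
        result[row] = best
    return result
```

For example, `M(r, c) = (r - c)**2 + 2*c` is Monge (a Monge term plus a column-only term), so `smawk` recovers its row minima without probing most entries.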

22 citations

Posted Content
TL;DR: This paper describes how to reduce the time for filling in the DP tables by two orders of magnitude, down to $O(n^3)$, by introducing a grouping technique that separates the $\Theta(n^3)$-space tables into $\Theta(n)$ groups, each of size $O(n^2)$, and then using Two-Dimensional Range-Minimum Queries (RMQs) to fill in each group's table entries in $O(n^2)$ time.
Abstract: AIFV-$2$ codes are a new method for constructing lossless codes for memoryless sources that provide better worst-case redundancy than Huffman codes. They do this by using two code trees instead of one and also allowing some bounded delay in the decoding process. Known algorithms for constructing AIFV-codes are iterative; at each step they replace the current code tree pair with a "better" one. The current state of the art for performing this replacement is a pair of Dynamic Programming (DP) algorithms that use $O(n^5)$ time to fill in two tables, each of size $O(n^3)$ (where $n$ is the number of different characters in the source). This paper describes how to reduce the time for filling in the DP tables by two orders of magnitude, down to $O(n^3)$. It does this by introducing a grouping technique that permits separating the $\Theta(n^3)$-space tables into $\Theta(n)$ groups, each of size $O(n^2)$, and then using Two-Dimensional Range-Minimum Queries (RMQs) to fill in that group's table entries in $O(n^2)$ time. This RMQ speedup technique seems to be new and might be of independent interest.
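The paper relies on two-dimensional RMQs; the one-dimensional sparse table below only illustrates the underlying idea of constant-time range-minimum queries after near-linear preprocessing (the class name and interface are mine, not the paper's):

```python
class SparseTableRMQ:
    """Static range-minimum queries: O(n log n) preprocessing,
    O(1) per query. The 2-D variant tiles the same doubling idea
    over both dimensions."""

    def __init__(self, a):
        n = len(a)
        self.log = [0] * (n + 1)          # log[i] = floor(log2(i))
        for i in range(2, n + 1):
            self.log[i] = self.log[i // 2] + 1
        self.t = [list(a)]                # level j holds minima of 2^j blocks
        for j in range(1, self.log[n] + 1):
            prev = self.t[-1]
            half = 1 << (j - 1)
            self.t.append([min(prev[i], prev[i + half])
                           for i in range(n - (1 << j) + 1)])

    def query(self, lo, hi):
        """Minimum of a[lo:hi] (half-open, requires hi > lo): cover the
        range with two overlapping power-of-two blocks."""
        j = self.log[hi - lo]
        return min(self.t[j][lo], self.t[j][hi - (1 << j)])
```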

7 citations


Cites background from "Advanced Techniques for Dynamic Pro..."

  • ...[2] provides a recent overview of the techniques available....


Journal ArticleDOI
TL;DR: AIFV-2 codes are a new method for constructing lossless codes for memoryless sources that provide better worst-case redundancy than Huffman codes, achieved by using two code trees instead of one and allowing some bounded delay in the decoding process.
Abstract: AIFV-2 codes are a new method for constructing lossless codes for memoryless sources that provide better worst-case redundancy than Huffman codes. They do this by using two code trees instead of one and also allowing some bounded delay in the decoding process. Known algorithms for constructing AIFV-codes are iterative; at each step they replace the current code tree pair with a "better" one. The current state of the art for performing this replacement is a pair of Dynamic Programming (DP) algorithms that use O(n^5) time to fill in two tables, each of size O(n^3) (where n is the number of different characters in the source). This paper describes how to reduce the time for filling in the DP tables by two orders of magnitude, down to O(n^3). It does this by introducing a grouping technique that permits separating the Θ(n^3)-space tables into Θ(n) groups, each of size O(n^2), and then using Two-Dimensional Range-Minimum Queries (RMQs) to fill in that group's table entries in O(n^2) time.

5 citations

01 Jan 2021
TL;DR: This work presents a simple algorithm for computing optimal search trees with two-way comparisons that extends directly to the standard full variant of the problem, which also allows unsuccessful queries and for which no polynomial-time algorithm was previously known.
Abstract: We present a simple O(n^4)-time algorithm for computing optimal search trees with two-way comparisons. The only previous solution to this problem, by Anderson et al., has the same running time but is significantly more complicated and is restricted to the variant where only successful queries are allowed. Our algorithm extends directly to solve the standard full variant of the problem, which also allows unsuccessful queries and for which no polynomial-time algorithm was previously known. The correctness proof of our algorithm relies on a new structural theorem for two-way-comparison search trees.
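For contrast with the two-way-comparison setting of this paper, the standard three-way-comparison optimal search tree problem has a well-known naive interval DP, sketched below (the function name and restriction to successful-search probabilities are my own simplifications):

```python
from functools import lru_cache

def obst_cost(p):
    """Expected search cost of an optimal binary search tree with
    three-way comparisons, for key access probabilities p[0..n-1].
    Naive O(n^3) recurrence over key ranges [i, j):
        cost(i, j) = w(i, j) + min_r cost(i, r) + cost(r+1, j)
    where w(i, j) = p[i] + ... + p[j-1] accounts for every key in the
    range descending one extra level under the chosen root r."""
    n = len(p)
    pre = [0.0] * (n + 1)
    for i, x in enumerate(p):
        pre[i + 1] = pre[i] + x

    @lru_cache(maxsize=None)
    def cost(i, j):
        if i >= j:
            return 0.0
        w = pre[j] - pre[i]
        return w + min(cost(i, r) + cost(r + 1, j) for r in range(i, j))

    return cost(0, n)
```

The Anderson et al. and the O(n^4) algorithm above it address a harder model (comparisons answer only "less" or "not less"), where this simple recurrence no longer applies directly.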

5 citations

Posted Content
TL;DR: In this paper, the authors present a self-contained analysis of a particular family of metrics over the set of nonnegative integers, defined through a nested sequence of optimal transport problems, which provide tight estimates for general Krasnosel'skii-Mann fixed point iterations for non-expansive maps.
Abstract: We present a self-contained analysis of a particular family of metrics over the set of non-negative integers. We show that these metrics, which are defined through a nested sequence of optimal transport problems, provide tight estimates for general Krasnosel'skii-Mann fixed point iterations for non-expansive maps. We also describe some of their very special properties, including their monotonicity and the so-called "convex quadrangle inequality" that yields a greedy algorithm to compute them efficiently.
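For a fully defined square cost matrix, the convex quadrangle inequality mentioned in the abstract can be verified by checking only adjacent index pairs, since the general case follows by summing adjacent ones. The helper below is an illustrative sketch of mine, not code from the paper:

```python
def satisfies_quadrangle_inequality(d):
    """Check the convex quadrangle inequality
        d[i][j] + d[i'][j'] <= d[i][j'] + d[i'][j]  for all i <= i', j <= j'
    on a square matrix (list of lists). For a fully defined matrix it
    suffices to test adjacent rows and columns."""
    n = len(d)
    return all(d[i][j] + d[i + 1][j + 1] <= d[i][j + 1] + d[i + 1][j]
               for i in range(n - 1) for j in range(n - 1))
```

For instance, d[i][j] = (i - j)^2 satisfies the inequality, while d[i][j] = i*j violates it.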

1 citation

References
Book
01 Jan 1990
TL;DR: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures and presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Abstract: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.

21,651 citations

01 Jan 2005

19,250 citations

Book
21 Oct 1957
TL;DR: The more the authors study the information processing aspects of the mind, the more perplexed and impressed they become, and it will be a very long time before they understand these processes sufficiently to reproduce them.
Abstract: From the Publisher: An introduction to the mathematical theory of multistage decision processes, this text takes a functional equation approach to the discovery of optimum policies. Written by a leading developer of such policies, it presents a series of methods, uniqueness and existence theorems, and examples for solving the relevant equations. The text examines existence and uniqueness theorems, the optimal inventory equation, bottleneck problems in multistage production processes, a new formalism in the calculus of variation, strategies behind multistage games, and Markovian decision processes. Each chapter concludes with a problem set that Eric V. Denardo of Yale University, in his informative new introduction, calls a rich lode of applications and research topics. 1957 edition. 37 figures.
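Bellman's functional-equation approach can be illustrated by solving a discounted Markovian decision process through fixed-point (value) iteration on the optimality equation V(s) = max_a [r(s,a) + γ Σ_s' P(s'|s,a) V(s')]. The toy interface and model below are my own, not taken from the book:

```python
def value_iteration(states, actions, reward, trans, gamma=0.9, tol=1e-9):
    """Iterate the Bellman optimality operator until the value function
    stops changing (the operator is a gamma-contraction, so this
    converges). trans(s, a) yields (next_state, probability) pairs."""
    V = {s: 0.0 for s in states}
    while True:
        newV = {
            s: max(reward(s, a) + gamma * sum(p * V[s2]
                                              for s2, p in trans(s, a))
                   for a in actions(s))
            for s in states
        }
        if max(abs(newV[s] - V[s]) for s in states) < tol:
            return newV
        V = newV
```

As a sanity check, in a two-state chain where only state 1 pays reward 1 and actions deterministically "stay" or "go" to the other state, with γ = 0.5 the fixed point is V(1) = 2 and V(0) = 1.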

14,187 citations

Book
01 May 1995
TL;DR: The leading and most up-to-date textbook on the far-ranging algorithmic methodology of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization.
Abstract: The leading and most up-to-date textbook on the far-ranging algorithmic methodology of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization. The treatment focuses on basic unifying themes and conceptual foundations. It illustrates the versatility, power, and generality of the method with many examples and applications from engineering, operations research, and other fields. It also addresses extensively the practical application of the methodology, possibly through the use of approximations, and provides an extensive treatment of the far-reaching methodology of Neuro-Dynamic Programming/Reinforcement Learning.

10,834 citations

Journal ArticleDOI
TL;DR: The upper bound is obtained for a specific probabilistic nonsequential decoding algorithm which is shown to be asymptotically optimum for rates above R_{0} and whose performance bears certain similarities to that of sequential decoding algorithms.
Abstract: The probability of error in decoding an optimal convolutional code transmitted over a memoryless channel is bounded from above and below as a function of the constraint length of the code. For all but pathological channels the bounds are asymptotically (exponentially) tight for rates above R_{0} , the computational cutoff rate of sequential decoding. As a function of constraint length the performance of optimal convolutional codes is shown to be superior to that of block codes of the same length, the relative improvement increasing with rate. The upper bound is obtained for a specific probabilistic nonsequential decoding algorithm which is shown to be asymptotically optimum for rates above R_{0} and whose performance bears certain similarities to that of sequential decoding algorithms.

6,804 citations