Journal ArticleDOI

Random access to Fibonacci encoded files

30 Oct 2016-Discrete Applied Mathematics (North-Holland)-Vol. 212, pp 115-128
TL;DR: The Wavelet tree is adapted, in this paper, to Fibonacci codes, so that in addition to supporting direct access to the Fibonacci-encoded file, it also increases the compression savings when compared to the original Fibonacci-compressed file.
About: This article is published in Discrete Applied Mathematics. The article was published on 2016-10-30 and is currently open access. It has received 27 citations to date. The article focuses on the topics: Fibonacci search technique & Fibonacci number.
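The TL;DR above can be made concrete. A Fibonacci code represents each positive integer by its Zeckendorf decomposition (a sum of non-adjacent Fibonacci numbers) and appends a 1, so every codeword ends in '11' and contains no other pair of adjacent 1-bits. Below is a minimal sketch of the underlying code, under the usual textbook construction — it does not reproduce the paper's wavelet-tree structure:

```python
def fib_encode(n: int) -> str:
    """Fibonacci code of n >= 1: mark which Fibonacci numbers
    (F2=1, F3=2, F4=3, ...) appear in n's Zeckendorf decomposition,
    least significant first, then append a terminating 1 so that
    every codeword ends in '11'."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()                        # now the last entry is the largest Fibonacci number <= n
    bits = [0] * len(fibs)
    for i in range(len(fibs) - 1, -1, -1):
        if fibs[i] <= n:              # greedy choice yields non-adjacent 1s
            bits[i] = 1
            n -= fibs[i]
    return ''.join(map(str, bits)) + '1'

def fib_decode(code: str) -> int:
    """Inverse of fib_encode: sum the Fibonacci numbers whose bits are set."""
    fibs = [1, 2]
    while len(fibs) < len(code) - 1:
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for f, b in zip(fibs, code[:-1]) if b == '1')
```

For example, 4 = 3 + 1 encodes as "1011"; because every codeword ends in "11", a concatenated stream can be decoded left to right without length fields.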
Citations
Journal ArticleDOI
TL;DR: This work follows a compact data structure approach to design a storage structure for raster data, which is commonly used to represent attributes of the space (temperatures, pressure, elevation measures, etc.) in geographical information systems.

30 citations


Cites background from "Random access to Fibonacci encoded ..."

  • ...Therefore, a family of compression methods arises having this property [51, 52, 53, 54, 55, 56, 57]....


Journal ArticleDOI
TL;DR: The pruning procedure is improved, and empirical evidence is given that when memory storage is of main concern, the suggested data structure outperforms other direct access techniques such as those due to Külekci, DACs, and sampling, with a slowdown as compared to DACs and fixed-length encoding.

15 citations

Journal ArticleDOI
TL;DR: The notion of context-sensitive rewriting codes is introduced, and an analysis of the compression performance of a family of such codes, based on generalizations of the Fibonacci sequence, is presented.
Abstract: Writing data on flash memory is asymmetric in the sense that it is possible to change a 0-bit into a 1-bit, but erasing a 1 back to 0 is much more expensive and can only be done in blocks. This has triggered the development of rewriting codes in which new data can overwrite the old, subject to the constraint of never changing a 1 into a 0. The notion of context-sensitive rewriting codes is introduced, and we present an analysis of the compression performance of a family of such codes, based on generalizations of the Fibonacci sequence. The analysis is then compared with experimental results.

11 citations


Cites methods from "Random access to Fibonacci encoded ..."

  • ...This property of the appearance of a 1-bit implying that the following bit, if it exists, must be a zero, has been exploited in several useful applications: robustness to errors [13], the design of Fibonacci codes [14], direct access [15], fast decoding and compressed search [10, 16], compressed matching in dictionaries [17], faster modular exponentiation [18], etc....

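The property quoted in the excerpt — a 1-bit implies that the following bit, if it exists inside a codeword, must be a 0 — is exactly what makes a concatenation of Fibonacci codewords self-delimiting: the first '11' after each starting position marks a codeword boundary. A small illustrative sketch (the helper name is hypothetical, not from the paper):

```python
def split_fib_codewords(bits: str) -> list:
    """Split a concatenation of Fibonacci codewords.  Inside a codeword
    no two adjacent bits are both 1, so the first occurrence of '11'
    after each starting position marks that codeword's end."""
    words, start, i = [], 0, 1
    while i < len(bits):
        if bits[i - 1] == '1' and bits[i] == '1':
            words.append(bits[start:i + 1])
            start, i = i + 1, i + 2   # skip past the '11' terminator
        else:
            i += 1
    return words
```

For instance, the stream "111011" splits into "11" and "1011" — no lengths or separators are needed, which is the basis for the direct-access and fast-decoding applications the excerpt lists.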

Book ChapterDOI
01 Apr 2021
TL;DR: A new dynamic Huffman encoding approach is proposed that provably always performs at least as well as static Huffman coding, and may be better than the standard dynamic Huffman coding for certain files.
Abstract: Huffman coding is known to be optimal, yet its dynamic version may yield smaller compressed files. The best known bound is that the number of bits used by dynamic Huffman coding in order to encode a message of n characters is at most n bits larger than the number of bits required by static Huffman coding. In particular, dynamic Huffman coding can also generate a larger encoded file than the static variant, though in practice the file might often, but not always, be smaller. We propose here a new dynamic Huffman encoding approach that provably always performs at least as well as static Huffman coding, and may be better than the standard dynamic Huffman coding for certain files. This is achieved by reversing the direction of the references of the encoded elements to those forming the model of the encoding, from pointing backwards to looking into the future.

8 citations
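For contrast with the dynamic scheme described in that abstract, here is a minimal static Huffman baseline that computes optimal codeword lengths from empirical character frequencies. This is standard static Huffman coding, not the paper's forward-looking dynamic variant:

```python
import heapq
from collections import Counter

def huffman_code_lengths(text: str) -> dict:
    """Static Huffman baseline: optimal prefix-code lengths for the
    empirical character frequencies of `text`."""
    freq = Counter(text)
    if len(freq) == 1:                # degenerate one-symbol alphabet
        return {next(iter(freq)): 1}
    # (frequency, tiebreak, {char: depth}) triples; dicts are never compared
    heap = [(f, i, {c: 0}) for i, (c, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)   # merge the two lightest subtrees,
        f2, _, d2 = heapq.heappop(heap)   # pushing every leaf one level deeper
        merged = {c: depth + 1 for c, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]
```

For "aaabbc" this yields lengths a:1, b:2, c:2, i.e. 3·1 + 2·2 + 1·2 = 9 bits in total; the dynamic variants discussed above adapt such lengths as the message is read instead of fixing them in advance.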

References
01 Jan 2001
TL;DR: Here the authors haven’t even started the project yet, and already they’re forced to answer many questions: what will this thing be named, what directory will it be in, what type of module is it, how should it be compiled, and so on.
Abstract: Writers face the blank page, painters face the empty canvas, and programmers face the empty editor buffer. Perhaps it’s not literally empty—an IDE may want us to specify a few things first. Here we haven’t even started the project yet, and already we’re forced to answer many questions: what will this thing be named, what directory will it be in, what type of module is it, how should it be compiled, and so on.

6,547 citations


"Random access to Fibonacci encoded ..." refers background or methods in this paper

  • ...We shall come back to the differences between FWTs and Knuth’s Fibonacci trees later....


  • ...They are related, but not quite identical, to the trees defined by Knuth [22] as Fibonacci trees....


  • ...(2) Note that the Fibonacci tree by Knuth [22] is based on a similar recursion, but with a different layout: the right subtree of the root of Th+1 would be Th−1....


Journal ArticleDOI
TL;DR: An application is the construction of a uniformly universal sequence of codes for countable memoryless sources, in which the nth code has a ratio of average codeword length to source rate bounded by a function of n for all sources with positive rate.
Abstract: Countable prefix codeword sets are constructed with the universal property that assigning messages in order of decreasing probability to codewords in order of increasing length gives an average codeword length, for any message set with positive entropy, less than a constant times the optimal average codeword length for that source. Some of the sets also have the asymptotically optimal property that the ratio of average codeword length to entropy approaches one uniformly as entropy increases. An application is the construction of a uniformly universal sequence of codes for countable memoryless sources, in which the nth code has a ratio of average codeword length to source rate bounded by a function of n for all sources with positive rate; the bound is less than two for n = 0 and approaches one as n increases.

1,306 citations


"Random access to Fibonacci encoded ..." refers background in this paper

  • ...universal, if the expected length of its codewords, for any finite probability distribution P, is within a constant factor of the expected length of an optimal code for P [9]....

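Elias's gamma code is one of the universal codeword sets this reference constructs: a unary prefix announces the codeword length, so assigning shorter codewords to more probable integers keeps the expected length within a constant factor of optimal. A short sketch:

```python
def elias_gamma(n: int) -> str:
    """Elias gamma code for n >= 1: as many leading zeros as there are
    bits after the leading 1 of n's binary form, followed by that form."""
    b = bin(n)[2:]
    return '0' * (len(b) - 1) + b

def elias_gamma_decode(code: str) -> int:
    """Count the leading zeros z; the value occupies the next z + 1 bits."""
    z = len(code) - len(code.lstrip('0'))
    return int(code[z:2 * z + 1], 2)
```

So 1 encodes as "1", 2 as "010", and 5 as "00101": every codeword for n uses 2·⌊lg n⌋ + 1 bits, independent of the source distribution.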

Journal ArticleDOI
TL;DR: PATRICIA as mentioned in this paper is an algorithm which provides a flexible means of storing, indexing, and retrieving information in a large file, which is economical of index space and of reindexing time.
Abstract: PATRICIA is an algorithm which provides a flexible means of storing, indexing, and retrieving information in a large file, which is economical of index space and of reindexing time. It does not require rearrangement of text or index as new material is added. It requires a minimum restriction of format of text and of keys; it is extremely flexible in the variety of keys it will respond to. It retrieves information in response to keys furnished by the user with a quantity of computation which has a bound which depends linearly on the length of keys and the number of their proper occurrences and is otherwise independent of the size of the library. It has been implemented in several variations as FORTRAN programs for the CDC-3600, utilizing disk file storage of text. It has been applied to several large information-retrieval problems and will be applied to others.

887 citations

Proceedings ArticleDOI
12 Jan 2003
TL;DR: A novel implementation of compressed suffix arrays exhibiting new tradeoffs between search time and space occupancy for a given text (or sequence) of n symbols over an alphabet σ, where each symbol is encoded by lg|σ| bits.
Abstract: We present a novel implementation of compressed suffix arrays exhibiting new tradeoffs between search time and space occupancy for a given text (or sequence) of n symbols over an alphabet σ, where each symbol is encoded by lg |σ| bits. We show that compressed suffix arrays use just nHh + σ bits, while retaining full text indexing functionalities, such as searching any pattern sequence of length m in O(m lg |σ| + polylog(n)) time. The term Hh ≤ lg |σ| denotes the hth-order empirical entropy of the text, which means that our index is nearly optimal in space apart from lower-order terms, achieving asymptotically the empirical entropy of the text (with a multiplicative constant 1). If the text is highly compressible so that Hh = o(1) and the alphabet size is small, we obtain a text index with o(m) search time that requires only o(n) bits. Further results and tradeoffs are reported in the paper.

818 citations


"Random access to Fibonacci encoded ..." refers background in this paper

  • ...[14], which allows direct access to any codeword, and in fact recodes the compressed file into an alternative form....


Proceedings ArticleDOI
30 Oct 1989
TL;DR: Data structures that represent static unlabeled trees and planar graphs are developed, and there is no other structure that encodes n-node trees with fewer bits per node, as n grows without bound.
Abstract: Data structures that represent static unlabeled trees and planar graphs are developed. The structures are more space efficient than conventional pointer-based representations, but (to within a constant factor) they are just as time efficient for traversal operations. For trees, the data structures described are asymptotically optimal: there is no other structure that encodes n-node trees with fewer bits per node, as n grows without bound. For planar graphs (and for all graphs of bounded page number), the data structure described uses linear space: it is within a constant factor of the most succinct representation.

759 citations


"Random access to Fibonacci encoded ..." refers methods in this paper

  • ...Time/Space tradeoffs for rank and select: As mentioned before, Jacobson [19] showed that rank, on a bit-vector of length n, can be computed in O(1) time using n + O(n log log n / log n) = n + o(n) bits....
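The rank result quoted above can be illustrated with a simplified two-level idea: precompute cumulative popcounts at block boundaries, then finish a query with a short scan. This toy version spends O(n) extra bits rather than Jacobson's o(n) (which needs a second level of sub-blocks and lookup tables), but it shows the shape of the structure:

```python
def build_rank(bits: str, block: int = 32):
    """Return rank1(i) = number of 1s in bits[:i].  Cumulative popcounts
    are stored every `block` positions, so a query is one table lookup
    plus a scan of fewer than `block` bits -- a simplified, non-succinct
    sketch of Jacobson's constant-time rank structure."""
    pre = [0]
    for j in range(0, len(bits), block):
        pre.append(pre[-1] + bits[j:j + block].count('1'))

    def rank1(i: int) -> int:
        b, r = divmod(i, block)
        return pre[b] + bits[b * block:b * block + r].count('1')

    return rank1
```

With a fixed block size the scan cost is constant per query; Jacobson's structure additionally shrinks the tables to o(n) bits while keeping O(1) time.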