
Showing papers on "Hash function published in 1971"


Journal ArticleDOI
TL;DR: Tradeoff curves are developed to show the minimal cost of file usage, obtained by grouping various partially combined indices, under conditions of file usage with different fractions of retrieval and update.
Abstract: In a paper in the November 1970 Communications of the ACM, V.Y. Lum introduced a technique of file indexing named combined indices. This technique permitted decreased retrieval time at the cost of increased storage space. This paper examines combined indices under conditions of file usage with different fractions of retrieval and update. Tradeoff curves are developed to show minimal cost of file usage by grouping various partially combined indices.

128 citations


Journal ArticleDOI
Vincent Y. Lum1, P. S. T. Yuen1, M. Dodd1
TL;DR: The intricacy of the generalized radix transformation method has motivated further study of the technique; the authors find it more appropriate to change the label of that transformation in [1] from "Lin's method" to "generalized radix transformation method" and use this term here.
Abstract: In an earlier paper by Lum, Yuen, and Dodd [1], experimental results comparing six commonly used key-to-address transformation techniques were presented. One transformation in that study, referred to as "Lin's method," is an elaborate technique based on radix transformation. Andrew Lin has since pointed out to the authors that his method of transformation [2] consists of not just a radix transformation algorithm but also the specific ways the values of p and q are chosen, as well as a hardware implementation to carry out the steps of this transformation in an efficient manner. Since our study was intended for general radix transformations rather than Lin's specific implementation, we think it is more appropriate to change the label of that transformation in [1] from "Lin's method" to "generalized radix transformation method," and we use this term here. The intricacy of the generalized radix transformation method has motivated us to conduct further studies of the technique. The additional results are presented after a brief description of the basic algorithm used in this generalized radix transformation method. As reported in [1], a key is expressed in radix p and the result taken modulo q^m, where p and q are relatively prime and m is a positive integer. A given key is first written as a simple binary bit string. These bits are then grouped to form p-nary digits. The result is expressed as a decimal number which, taken modulo q^m, gives the address. To simplify the selection of p, q, and m, p was set equal to q + 1, and m was chosen so that q^m approximates the number of addresses available.
Further analysis of this generalized radix transformation technique shows that in the process of grouping bits to obtain a decimal number on the basis of radix p, alternatives exist. In the case of p = 8, as given in [1], 3 bits will be grouped. Thus the binary string 100101110101 becomes the decimal number 5 + 6 × 8 + 5 × 8² + 4 × 8³ = 2421. However, if p = 19, then it is not clear how many bits are to be grouped. Grouping by …
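The algorithm the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the key is assumed to arrive already written as a binary bit string, and when its length is not a multiple of the digit width it is padded on the left with zeros (one of the grouping alternatives the paper discusses).

```python
def radix_transform_hash(bits: str, q: int, m: int) -> int:
    """Generalized radix transformation sketch: read the key's bits as
    base-p digits with p = q + 1, then reduce modulo q**m."""
    p = q + 1                        # paper's simplification: p = q + 1
    width = (p - 1).bit_length()     # bits per p-nary digit, e.g. 3 for p = 8
    # Left-pad so the bit string splits evenly into digit groups
    # (an assumption; the paper notes that alternatives exist here).
    bits = bits.zfill(-(-len(bits) // width) * width)
    value = 0
    for i in range(0, len(bits), width):
        value = value * p + int(bits[i:i + width], 2)
    return value % (q ** m)
```

With the worked example above (q = 7, so p = 8), the string 100101110101 groups into the digits 4, 5, 6, 5, giving 2421 before the final reduction modulo q^m. Note that for p = 19 the digit width would be 5 bits, so "digits" can exceed 18 — exactly the ambiguity the paper goes on to examine.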

125 citations


Proceedings ArticleDOI
11 Nov 1971

29 citations


Journal ArticleDOI
C. E. Price1
TL;DR: Methods described are: sequential search, merge search, binary search, estimated entry, and direct entry; these are considered basic methodology for table searching in computer programming.
Abstract: Consideration is given to the basic methodology for table searching in computer programming. Only static tables are treated, but references are made to techniques for handling dynamic tables. Methods described are: sequential search, merge search, binary search, estimated entry, and direct entry. The rationale of key transformation is discussed, with some consideration of methods of “hash addressing.” A general guide to technique selection is given in conclusion.
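Of the static-table methods the survey names, binary search is the easiest to state precisely. A minimal sketch (the survey itself gives no code; this is an illustration of the standard technique on a sorted table):

```python
def binary_search(table, key):
    """Return the index of key in a sorted table, or -1 if absent."""
    lo, hi = 0, len(table) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # probe the middle entry
        if table[mid] == key:
            return mid
        elif table[mid] < key:
            lo = mid + 1           # discard the lower half
        else:
            hi = mid - 1           # discard the upper half
    return -1
```

Each probe halves the remaining table, so a static table of n entries is searched in at most about log2(n) + 1 probes — the property that distinguishes it from the sequential methods in the survey.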

26 citations



Journal ArticleDOI
TL;DR: Given a transformation algorithm and data to be transformed, it is possible to characterize certain qualities of the algorithm that relate to retrieval problems.
Abstract: Frequently it is useful to abbreviate or otherwise transform keys used for the retrieval of information. These transformations include the compression of long keys into a fixed field length by operations on characters or groups of characters, hash or random transformations in order to obtain a direct address, or phonetic coding in order to group together keys that are in some way similar. The various transformations have differing effects on file retrieval schemes. Given a transformation algorithm and data to be transformed, it is possible to characterize certain qualities of the algorithm that relate to retrieval problems. This paper is concerned with some measures of effectiveness of such transformation algorithms.
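One simple quality measure of the kind the abstract alludes to is the fraction of keys whose transformed value collides with another key's value. The measure below and the truncation transform used to exercise it are illustrative assumptions, not the paper's specific measures:

```python
from collections import Counter

def collision_rate(keys, transform):
    """Fraction of keys whose transformed value collides with another
    key's value -- one simple measure of a key transformation's quality."""
    counts = Counter(transform(k) for k in keys)
    colliding = sum(c for c in counts.values() if c > 1)
    return colliding / len(keys)

# Example: compressing keys by truncation to 3 characters (a crude
# fixed-field-length compression of the sort the abstract mentions).
keys = ["hashing", "hashed", "table", "tables", "search"]
rate = collision_rate(keys, lambda k: k[:3])
```

Here "hashing"/"hashed" and "table"/"tables" each collide under truncation, so 4 of the 5 keys are non-unique and the rate is 0.8 — a concrete way such a transform can be judged against retrieval needs.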

5 citations



Patent
07 Dec 1971
TL;DR: In an address translation system for translating a virtual address into a real address, the virtual address is hashed to form a hash address TADDR and a residue, RESIDUE as mentioned in this paper.
Abstract: In an address translation system for translating a virtual address into a real address, the virtual address is hashed to form a hash address TADDR and a residue, RESIDUE. The hash address is used to access a memory, 10, each location of which contains a real address, RADDR, a tag, TAG, and a next address NADDR. The next address points to another location, thereby linking the locations together to form a number of circular chains. The tag of the addressed location is compared with the residue and, if they match, the address translation is successful. If they do not match, the memory is accessed again, this time using the next address value from the currently addressed location. In this way, a chain of locations is scanned until either a match is found, or else the first location in the chain is returned to, indicating that the translation has been unsuccessful.
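The chain scan the patent describes can be sketched as a table walk. The memory layout (a list of entries holding a real address, a tag, and a next pointer) and the split of the hashed virtual address into TADDR and RESIDUE are illustrative assumptions; the patent describes hardware, not software:

```python
from typing import List, NamedTuple, Optional

class Entry(NamedTuple):
    raddr: int   # real address stored at this location (RADDR)
    tag: int     # residue recorded when the entry was loaded (TAG)
    naddr: int   # next location in the circular chain (NADDR)

def translate(memory: List[Entry], taddr: int, residue: int) -> Optional[int]:
    """Scan the circular chain starting at the hash address until a tag
    matches the residue; return None if the chain wraps around unmatched."""
    addr = taddr
    while True:
        entry = memory[addr]
        if entry.tag == residue:
            return entry.raddr       # tags match: translation succeeds
        addr = entry.naddr           # follow the chain to the next location
        if addr == taddr:            # back at the start: translation failed
            return None
```

Because every chain is circular, returning to the starting location is an unambiguous termination condition — no separate chain-length counter is needed.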

1 citation


Proceedings ArticleDOI
01 Jan 1971
TL;DR: In this paper, a real-time information storage and retrieval (IS&R) system is presented, in which a hashing function is used to optimize the input/output (I/O) time versus disk and memory storage.
Abstract: Presented here is a paper on the “Design and Implementation of a Real Time (On Line) Information Storage and Retrieval System”. Guidance is given into the design and testing of a hashing function for a volatile set of alphanumeric keys. The interrelationship between the hashing function and the Information Storage and Retrieval (IS&R) System is examined with respect to design criteria. Finally, the design of the IS&R System itself is presented in terms of an attempt to optimize the Input/Output (I/O) time versus disk and memory storage for a volatile set of records. Built into the IS&R System design is the flexibility to optimize the system's performance under both the ideal and worst case hashing function design or performance. Emphasis, in the design, is placed on the I/O time required in updating, or inquiring on, the file for a matched record retrieval. The following techniques were combined to form the original IS&R System: data structures, directory (dictionary) concept, bit directories, link listing (chaining), hashing function, and minimization of disk head movement. The theory of the IS&R System is derived in general and is applicable to a variety of applications as long as the application possesses the following requirements: a highly volatile set of records, a real time (or on line) retrieval requirement, difficulty in designing (or choosing) a hashing function due to the volatile set of keys, and heavy activity on updating, or inquiring on, the file. Detailed algorithms of the system design are presented for this purpose.
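The combination of a hashing function with link listing (chaining) that the abstract names can be sketched as a small in-memory table. The division-method hash and the bucket count are assumptions for illustration; the paper's actual hashing function is tuned to its volatile key set and its disk layout:

```python
class ChainedTable:
    """Hashing plus link listing (chaining): colliding keys share a bucket."""

    def __init__(self, nbuckets: int = 101):
        self.buckets = [[] for _ in range(nbuckets)]

    def _hash(self, key: str) -> int:
        # Division method over the key's character codes (an assumption).
        return sum(ord(c) for c in key) % len(self.buckets)

    def insert(self, key, record):
        chain = self.buckets[self._hash(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, record)   # update the existing record
                return
        chain.append((key, record))        # chain on collision or new key

    def lookup(self, key):
        for k, record in self.buckets[self._hash(key)]:
            if k == key:
                return record
        return None
```

Chaining degrades gracefully when the hashing function performs poorly on a volatile key set — lookups fall back to a short sequential scan of one chain — which matches the paper's stated goal of tolerating both ideal and worst-case hash behavior.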

1 citation