
Showing papers on "Feature hashing published in 1992"


Journal ArticleDOI
TL;DR: The discussion is confined to the problem of recognizing dot patterns embedded in a scene after they have undergone translation, rotation, and scale changes.
Abstract: The parallelizability of geometric hashing is explored, and two algorithms are presented. Geometric hashing uses the collection of models in a preprocessing phase (executed off line) to build a hash table data structure. The data structure encodes the model information in a highly redundant, multiple-viewpoint way. During the recognition phase, when presented with a scene and extracted features, the hash table data structure indexes geometric properties of the scene features to candidate models. The first algorithm uses parallel hypercube techniques to route information through a series of maps and building-block parallel algorithms. The second algorithm uses the Connection Machine's large memory resources and achieves parallelism through broadcast facilities from the front end. The discussion is confined to the problem of recognizing dot patterns embedded in a scene after they have undergone translation, rotation, and scale changes.
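For readers unfamiliar with the basic (serial) scheme the abstract builds on, here is a minimal sketch of geometric hashing for 2-D dot patterns under translation, rotation, and scale. The data layout, quantization step, and function names are illustrative assumptions, not the paper's parallel formulation.

```python
import numpy as np
from collections import defaultdict

def invariant_coords(p, b0, b1):
    """Coordinates of point p in the frame of basis pair (b0, b1);
    invariant to translation, rotation, and uniform scaling."""
    v, u = b1 - b0, p - b0
    s = float(np.dot(v, v))
    # Projection onto the basis vector and the 2-D cross product,
    # both normalized by |v|^2, give similarity-invariant coordinates.
    return (float(np.dot(u, v)) / s, float(v[0] * u[1] - v[1] * u[0]) / s)

def quantize(c, step=0.1):
    return (round(c[0] / step), round(c[1] / step))

def build_table(models, step=0.1):
    """Preprocessing: for every model and ordered basis pair, hash the
    invariant coordinates of all remaining points."""
    table = defaultdict(list)
    for m_id, pts in models.items():
        for i in range(len(pts)):
            for j in range(len(pts)):
                if i == j:
                    continue
                for k in range(len(pts)):
                    if k in (i, j):
                        continue
                    key = quantize(invariant_coords(pts[k], pts[i], pts[j]), step)
                    table[key].append((m_id, (i, j)))
    return table

def recognize(table, scene, b0, b1, step=0.1):
    """Recognition: probe with one scene basis pair and tally votes for
    (model, basis) hypotheses; the winner suggests a match."""
    votes = defaultdict(int)
    for k in range(len(scene)):
        if k in (b0, b1):
            continue
        key = quantize(invariant_coords(scene[k], scene[b0], scene[b1]), step)
        for entry in table.get(key, []):
            votes[entry] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None
```

In practice the recognition phase retries with different scene basis pairs until some hypothesis accumulates enough votes; the paper's contribution is parallelizing exactly these table-construction and voting loops.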

44 citations


Journal ArticleDOI
TL;DR: It has been found that distinguishing between different types of features in a model or scene results in a very efficient implementation of geometric hashing using a multidimensional hash table, and the filtering ratio of this scheme turns out to be high enough to allow reliable recognition with the correct feature correspondence between model and scene.
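The abstract is not shown here, but the summary suggests extending the hash key with feature-type dimensions so a probe only returns entries whose types agree with the scene. A toy sketch of that filtering idea, with a hypothetical key layout and type labels (assumptions, not the paper's exact scheme):

```python
from collections import defaultdict

table = defaultdict(list)

def insert(model_id, basis_types, feature_type, coords):
    # The key carries the types of the basis features and of the hashed
    # feature as extra dimensions, alongside the quantized coordinates.
    table[(basis_types, feature_type, coords)].append(model_id)

def probe(basis_types, feature_type, coords):
    # Entries with mismatched feature types live in different buckets,
    # so they are filtered out before any voting takes place.
    return table.get((basis_types, feature_type, coords), [])

insert("wrench", ("corner", "hole"), "corner", (3, -1))
probe(("corner", "hole"), "corner", (3, -1))   # -> ["wrench"]
probe(("corner", "hole"), "hole", (3, -1))     # -> [] (type mismatch)
```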

43 citations


Proceedings ArticleDOI
01 Jun 1992
TL;DR: A multi-directory hashing scheme, called fast search multi-directory hashing, and its generalization, called controlled search multi-directory hashing, are presented; both methods achieve an expected directory size that grows linearly with the number of records.
Abstract: The objective of this paper is to develop and analyze high-performance, hash-based search methods for main memory databases. We define optimal search in main memory databases as the search that requires at most one key comparison to locate a record. Existing hashing techniques become impractical when they are adapted to yield optimal search in main memory databases because of their large directory size. Multi-directory hashing techniques can provide significantly improved directory utilization over single-directory hashing techniques. A multi-directory hashing scheme, called fast search multi-directory hashing, and its generalization, called controlled search multi-directory hashing, are presented. Both methods achieve linearly increasing expected directory size with the number of records. Their performance is compared to existing alternatives.
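As a rough illustration of the multi-directory idea (not the authors' exact fast search or controlled search schemes), the sketch below gives each directory its own hash function; a lookup spends at most one key comparison per directory probed.

```python
import random

class MultiDirectoryHash:
    """Toy multi-directory hash: d directories, each with an independent
    hash function; a record goes in the first directory with a free slot."""

    def __init__(self, num_dirs=2, size=1024):
        self.dirs = [dict() for _ in range(num_dirs)]   # slot -> (key, record)
        self.seeds = [random.randrange(1 << 30) for _ in range(num_dirs)]
        self.size = size

    def _slot(self, i, key):
        return hash((self.seeds[i], key)) % self.size

    def insert(self, key, record):
        for i, d in enumerate(self.dirs):
            s = self._slot(i, key)
            if s not in d:
                d[s] = (key, record)
                return True
        return False   # all candidate slots taken; a real scheme would grow

    def search(self, key):
        for i, d in enumerate(self.dirs):
            entry = d.get(self._slot(i, key))
            if entry and entry[0] == key:   # one key comparison per probe
                return entry[1]
        return None

mdh = MultiDirectoryHash()
mdh.insert("alice", {"age": 30})
mdh.search("alice")   # -> {"age": 30}
```

Spreading records over several small directories is what keeps the total directory size growing linearly with the record count, rather than blowing up the way a single extendible directory can.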

18 citations


H. G. Dietz
01 Jan 1992
TL;DR: This paper discusses the construction and use of customized hash functions to consistently improve execution speed and reduce memory usage for such constructs, and suggests that adding a population count instruction to the instruction set of a processor would greatly improve its hashing performance.
Abstract: In most modern languages, there is a construct that allows the programmer to directly represent a multiway branch based on the value of an expression. In Pascal, it is the case statement; in C, it is the switch; and in Fortran 90, the SELECT. However, it is quite common that the efficiency of these constructs is far worse than one might reasonably expect. This paper discusses the construction and use of customized hash functions to consistently improve execution speed and reduce memory usage for such constructs. Performance results are given, including some that lead to the suggestion that adding a population count instruction to the instruction set of a processor will greatly improve its hashing performance.
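A toy rendition of the idea in Python (the label set, the brute-force mask search, and the hash form are illustrative assumptions; the paper targets compiled switch/case code, where the popcount would be a single machine instruction):

```python
LABELS = [3, 5, 14, 27, 62]   # a sparse set of case labels

def popcount_hash(x, mask):
    """Population count of the masked bits of x."""
    return bin(x & mask).count("1")

def find_mask(labels, limit=1 << 14):
    """Brute-force a mask whose popcount hash is collision-free on labels.
    A real generator would try richer hash families when this fails."""
    for mask in range(1, limit):
        if len({popcount_hash(v, mask) for v in labels}) == len(labels):
            return mask
    raise ValueError("no separating mask in range")

MASK = find_mask(LABELS)
TABLE = {popcount_hash(v, MASK): v for v in LABELS}
HANDLERS = {v: (lambda v=v: f"case {v}") for v in LABELS}

def switch(x, default=lambda: "default"):
    """One hash plus at most one key comparison replaces a compare chain."""
    label = TABLE.get(popcount_hash(x, MASK))
    return HANDLERS[label]() if label == x else default()

switch(27)   # -> "case 27"
switch(7)    # -> "default"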

6 citations


Journal ArticleDOI
01 Sep 1992
TL;DR: Some heuristics are given for computing the character weights in a Cichelli-style minimal perfect hashing function, and an example using the names of the fifty United States illustrates how the weights are determined.
Abstract: Some heuristics for computing the character weights in a Cichelli-style, minimal perfect hashing function are given. These ideas should perform best when applied to relatively small, static sets of character strings, and they can be used as the foundation for a large programming assignment. An example using the names of the fifty United States is given to illustrate how the weights are determined.
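A minimal backtracking search for Cichelli-style weights, assuming the classic hash form h(w) = len(w) + g(w[0]) + g(w[-1]); the paper's own heuristics for ordering and pruning the search are not reproduced here.

```python
def cichelli_weights(words, max_weight=20):
    """Backtracking search for character weights g such that
    h(w) = len(w) + g(w[0]) + g(w[-1]) is minimal perfect: the n hash
    values are distinct and form a contiguous block of n integers."""
    n = len(words)
    chars = sorted({w[0] for w in words} | {w[-1] for w in words})
    g = {}

    def partial_ok():
        # Check only words whose first and last characters are assigned:
        # values must be distinct and span fewer than n integers.
        vals = [len(w) + g[w[0]] + g[w[-1]]
                for w in words if w[0] in g and w[-1] in g]
        return (len(vals) == len(set(vals))
                and (not vals or max(vals) - min(vals) < n))

    def assign(i):
        if i == len(chars):
            return partial_ok()
        for weight in range(max_weight + 1):
            g[chars[i]] = weight
            if partial_ok() and assign(i + 1):
                return True
        del g[chars[i]]
        return False

    return dict(g) if assign(0) else None

# e.g. cichelli_weights(["june", "july", "march", "may"]) returns a weight
# table if one exists; subtracting the minimum hash value then yields
# table indices 0..n-1.
```

The search space explodes with the character set, which is why the heuristics the paper discusses (such as assigning weights to the most frequently occurring characters first) matter even for sets as small as the fifty state names.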

4 citations