
Stefan Berchtold

Researcher at Augsburg College

Publications -  37
Citations -  5423

Stefan Berchtold is an academic researcher from Augsburg College. He has contributed to research topics including nearest neighbor search and search engine indexing, has an h-index of 24, and has co-authored 37 publications receiving 5335 citations. Previous affiliations of Stefan Berchtold include AT&T and AT&T Labs.

Papers
Book Chapter

The X-tree: an index structure for high-dimensional data

TL;DR: A new organization of the directory is introduced which uses an overlap-minimizing split algorithm and additionally utilizes the concept of supernodes, keeping the directory as hierarchical as possible while avoiding splits that would result in high overlap.
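The split-versus-supernode decision described above can be sketched as follows. This is a hypothetical, simplified illustration: the overlap threshold, the sort-based candidate splits, and all helper names are assumptions for the sketch, not the X-tree paper's actual algorithm (which uses R*-tree topological splits and a split-history-based overlap-free split).

```python
import math

# Assumed threshold: fraction of union volume the two halves may share.
MAX_OVERLAP = 0.2

def mbr(entries):
    """Minimum bounding rectangle of boxes given as ((lo, hi), ...) per dim."""
    d = len(entries[0])
    return [(min(e[j][0] for e in entries), max(e[j][1] for e in entries))
            for j in range(d)]

def volume(box):
    return math.prod(max(hi - lo, 0.0) for lo, hi in box)

def overlap_volume(a, b):
    return math.prod(max(min(ah, bh) - max(al, bl), 0.0)
                     for (al, ah), (bl, bh) in zip(a, b))

def split_or_supernode(entries):
    """Try the axis split with minimal relative overlap; if even the best
    split overlaps too much, keep the node as a supernode instead."""
    best = None
    for axis in range(len(entries[0])):
        ordered = sorted(entries, key=lambda e: e[axis][0])
        half = len(ordered) // 2
        left, right = ordered[:half], ordered[half:]
        ov = overlap_volume(mbr(left), mbr(right))
        union = volume(mbr(left)) + volume(mbr(right)) - ov
        rel = ov / union if union else 0.0
        if best is None or rel < best[0]:
            best = (rel, left, right)
    if best[0] <= MAX_OVERLAP:
        return ("split", best[1], best[2])
    return ("supernode", entries)  # avoid a split that causes high overlap
```

The key idea survives the simplification: a supernode (an enlarged, unsplit node) is preferable when every available split would produce directory regions that overlap so much that queries must descend both halves anyway.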
Journal Article

Searching in high-dimensional spaces: Index structures for improving the performance of multimedia databases

TL;DR: An overview of the current state of the art in querying multimedia databases is provided, describing the index structures and algorithms for efficient query processing in high-dimensional spaces.
Proceedings Article

The pyramid-technique: towards breaking the curse of dimensionality

TL;DR: The results of experiments demonstrate that the Pyramid-Technique outperforms the X-tree and the Hilbert R-tree by a factor of up to 14 in number of page accesses and up to 2500 in total elapsed time for range queries.
Proceedings Article

A cost model for nearest neighbor search in high-dimensional data space

TL;DR: A new cost model for nearest neighbor search in high-dimensional data space is developed. Because it takes boundary effects into account, it remains accurate in high dimensions, and it is applicable to different data distributions and index structures.
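Why boundary effects matter can be seen from the classic boundary-free estimate of the nearest-neighbor radius, which solves n · Vol(sphere(r)) = 1 for uniform data in the unit cube. A small sketch (function name assumed) shows that in high dimensions this radius exceeds the extent of the data space itself, so the query sphere is necessarily clipped by the cube boundary and a model ignoring that effect misestimates the cost:

```python
import math

def naive_nn_radius(n, d):
    """Expected nearest-neighbor radius under the boundary-free model:
    solve n * Vol(d-sphere(r)) = 1, data uniform in [0,1]^d."""
    unit_ball = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # Vol of unit d-ball
    return (1.0 / (n * unit_ball)) ** (1.0 / d)

# For one million uniform points: tiny in 2-D, but for d = 64 the predicted
# radius is larger than 1, i.e. larger than the whole data space -- the
# boundary effect the cost model must account for.
for d in (2, 16, 64):
    print(d, naive_nn_radius(1_000_000, d))
```

This is only the motivating observation, not the paper's cost model itself, which integrates the clipped sphere volume against the data distribution and the index's page regions.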
Proceedings Article

Independent quantization: an index compression technique for high-dimensional data spaces

TL;DR: This work develops a compressed index, called the IQ-tree, with a three-level structure, together with a cost model and an optimization algorithm based on that model which determine the degree of compression for each second-level page independently so as to minimize expected query cost.