
Thomas Seidl

Researcher at Ludwig Maximilian University of Munich

Publications - 401
Citations - 8782

Thomas Seidl is an academic researcher at Ludwig Maximilian University of Munich. He has contributed to research topics including Cluster analysis and Nearest neighbor search, has an h-index of 47, and has co-authored 387 publications receiving 8,320 citations. Previous affiliations of Thomas Seidl include Alpen-Adria-Universität Klagenfurt and Technische Universität München.

Papers
Book Chapter

3D Shape Histograms for Similarity Search and Classification in Spatial Databases

TL;DR: 3D shape histograms are introduced as an intuitive and powerful similarity model for 3D objects; the model efficiently supports similarity search based on quadratic form distance functions and achieves high classification accuracy with good performance.
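The quadratic form distance mentioned here compares two histograms through a bin-similarity matrix, so that similar but non-identical bins still contribute to the distance. A minimal sketch follows; the 8-bin histograms and the Gaussian-decay similarity matrix are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def quadratic_form_distance(p, q, A):
    """Quadratic form distance d_A(p, q) = sqrt((p - q)^T A (p - q)).

    A is a positive definite bin-similarity matrix; A = I reduces the
    measure to the plain Euclidean distance between the histograms.
    """
    d = p - q
    return float(np.sqrt(d @ A @ d))

# Hypothetical example: two 8-bin shape histograms (normalized bin counts).
rng = np.random.default_rng(0)
p = rng.random(8); p /= p.sum()
q = rng.random(8); q /= q.sum()

# Assumed similarity matrix: nearby bins count as more similar (Gaussian decay).
idx = np.arange(8)
A = np.exp(-0.5 * (idx[:, None] - idx[None, :]) ** 2)

print(quadratic_form_distance(p, q, A))
```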
Proceedings Article

Optimal multi-step k-nearest neighbor search

TL;DR: This work presents a novel multi-step algorithm that is guaranteed to produce the minimum number of candidates for k-nearest neighbor search and demonstrates a significant performance gain over the previous solution.
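The key idea is to rank objects by a cheap filter distance that lower-bounds the exact distance, refine candidates with the exact distance, and stop as soon as the k-th best exact distance no longer exceeds the next candidate's filter distance. A minimal sketch, assuming an in-memory sort in place of the index-based incremental ranking such an algorithm would use in practice:

```python
import heapq

def multi_step_knn(query, objects, k, filter_dist, exact_dist):
    """Multi-step k-NN with an optimal stopping condition.

    filter_dist must lower-bound exact_dist for every object.  Candidates
    are visited in ascending filter-distance order; refinement stops once
    the current k-th exact distance is <= the next candidate's filter
    distance, because no later candidate can then enter the result.
    """
    ranking = sorted(objects, key=lambda o: filter_dist(query, o))
    heap = []  # max-heap over exact distances: (-dist, tie_breaker, obj)
    for i, obj in enumerate(ranking):
        if len(heap) == k and -heap[0][0] <= filter_dist(query, obj):
            break  # stopping condition: result can no longer improve
        d = exact_dist(query, obj)
        heapq.heappush(heap, (-d, i, obj))
        if len(heap) > k:
            heapq.heappop(heap)  # keep only the k closest refined objects
    return sorted((-nd, obj) for nd, _, obj in heap)
```

With, for example, a distance on dimensionality-reduced feature vectors as the filter and the full exact distance for refinement, this loop only refines objects that could still belong to the k-nearest-neighbor result.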
Journal Article

Evaluating clustering in subspace projections of high dimensional data

TL;DR: In this paper, the authors take a systematic approach to evaluating the major clustering paradigms in a common framework and provide a benchmark set of results on a large variety of real-world and synthetic data sets.
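Benchmarks of this kind score a found clustering against hidden ground-truth clusters with object-based quality measures. A minimal sketch of one such measure, an F1-style score with a simplified best-match mapping of hidden to found clusters (the mapping strategy is an assumption here, not the paper's exact definition):

```python
def f1_value(found_clusters, hidden_clusters):
    """Average, over the hidden (ground-truth) clusters, of the best F1
    score any found cluster achieves for it.  Clusters are given as sets
    of object ids."""
    def f1(found, hidden):
        tp = len(found & hidden)          # objects correctly grouped
        if tp == 0:
            return 0.0
        precision = tp / len(found)
        recall = tp / len(hidden)
        return 2 * precision * recall / (precision + recall)

    return sum(max(f1(f, h) for f in found_clusters)
               for h in hidden_clusters) / len(hidden_clusters)

# Hypothetical example: two found clusters against three hidden ones.
found = [{0, 1, 2, 3}, {4, 5, 6}]
hidden = [{0, 1, 2}, {3, 4, 5}, {6, 7}]
print(f1_value(found, hidden))
```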
Journal Article

The ClusTree: indexing micro-clusters for anytime stream mining

TL;DR: This work proposes a parameter-free algorithm that automatically adapts to the speed of the data stream and makes best use of the time available under the current constraints to provide a clustering of the objects seen up to that point.
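The micro-clusters the ClusTree indexes are compact summary statistics that can absorb new stream objects in constant time, which is what makes the anytime adaptation possible. A minimal sketch of the micro-cluster statistics alone, with the tree structure and anytime descent omitted and the exponential aging factor included as an assumption:

```python
import numpy as np

class MicroCluster:
    """Cluster feature vector (n, LS, SS) supporting incremental insertion
    and derived statistics, as commonly used in stream clustering."""

    def __init__(self, dim, decay=0.01):
        self.n = 0.0              # (weighted) number of absorbed points
        self.ls = np.zeros(dim)   # linear sum of the points
        self.ss = np.zeros(dim)   # squared sum of the points
        self.decay = decay        # assumed exponential aging rate

    def insert(self, x, dt=1.0):
        fade = 2.0 ** (-self.decay * dt)  # age existing statistics
        self.n *= fade; self.ls *= fade; self.ss *= fade
        self.n += 1.0; self.ls += x; self.ss += x * x

    @property
    def centroid(self):
        return self.ls / self.n

    @property
    def variance(self):
        return self.ss / self.n - self.centroid ** 2
```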

MOA: Massive Online Analysis, a framework for stream classification and clustering.

TL;DR: MOA is a software environment for implementing algorithms and running experiments for online learning from evolving data streams; it is designed to scale implementations of state-of-the-art algorithms to real-world dataset sizes.
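MOA itself is a Java framework; the evaluation protocol it popularized for stream classification is prequential (test-then-train) evaluation, where each instance is first used to test the current model and then to update it. A minimal sketch of that protocol follows; the predict/learn interface is hypothetical and not MOA's actual API:

```python
def prequential_evaluation(stream, model, n_instances):
    """Test-then-train evaluation on an evolving data stream.

    `stream` yields (x, y) pairs; `model` exposes predict(x) and
    learn(x, y) (assumed interface).  Accuracy is measured over time
    without a held-out set, so it tracks adaptation to concept drift.
    """
    correct = 0
    for i, (x, y) in enumerate(stream):
        if i >= n_instances:
            break
        if model.predict(x) == y:   # test on the instance first...
            correct += 1
        model.learn(x, y)           # ...then train on it
    return correct / n_instances
```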