Author

Reinhard Klein

Bio: Reinhard Klein is an academic researcher at the University of Bonn. He has contributed to research in topics including Rendering (computer graphics) and Point cloud, has an h-index of 45, and has co-authored 315 publications receiving 9,572 citations. Previous affiliations of Reinhard Klein include the Fraunhofer Society and the University of Tübingen.


Papers
Journal ArticleDOI
TL;DR: An automatic algorithm, based on random sampling, that detects basic shapes (planes, spheres, cylinders, cones and tori) in unorganized point clouds and, for models composed of these shapes, obtains a representation consisting solely of shape proxies.
Abstract: In this paper we present an automatic algorithm to detect basic shapes in unorganized point clouds. The algorithm decomposes the point cloud into a concise, hybrid structure of inherent shapes and a set of remaining points. Each detected shape serves as a proxy for a set of corresponding points. Our method is based on random sampling and detects planes, spheres, cylinders, cones and tori. For models with surfaces composed of these basic shapes only, for example, CAD models, we automatically obtain a representation solely consisting of shape proxies. We demonstrate that the algorithm is robust even in the presence of many outliers and a high degree of noise. The proposed method scales well with respect to the size of the input point cloud and the number and size of the shapes within the data. Even point sets with several million samples are robustly decomposed within less than a minute. Moreover, the algorithm is conceptually simple and easy to implement. Application areas include measurement of physical parameters, scan registration, surface compression, hybrid rendering, shape classification, meshing, simplification, approximation and reverse engineering.
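The random-sampling idea behind the method can be illustrated with a minimal RANSAC loop for the simplest primitive, a plane. This sketch omits the paper's localized sampling, octree-accelerated scoring and the other primitive types; all names and parameters here are illustrative:

```python
import numpy as np

def fit_plane(p0, p1, p2):
    """Plane through three points: unit normal n and offset d with n.x + d = 0."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm < 1e-12:          # degenerate (collinear) sample
        return None
    n = n / norm
    return n, -np.dot(n, p0)

def ransac_plane(points, iters=200, eps=0.01, rng=None):
    """Return (normal, d, inlier_mask) of the best-supported plane."""
    rng = rng or np.random.default_rng(0)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(iters):
        i, j, k = rng.choice(len(points), size=3, replace=False)
        plane = fit_plane(points[i], points[j], points[k])
        if plane is None:
            continue
        n, d = plane
        inliers = np.abs(points @ n + d) < eps   # distance-to-plane test
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# Toy cloud: a noisy z = 0 plane plus uniform outliers.
rng = np.random.default_rng(1)
plane_pts = np.column_stack([rng.uniform(-1, 1, (500, 2)),
                             rng.normal(0, 0.002, 500)])
outliers = rng.uniform(-1, 1, (100, 3))
pts = np.vstack([plane_pts, outliers])
n, d, mask = ransac_plane(pts)
print(mask.sum())  # most of the 500 plane samples are recovered
```

In the full method this loop would be repeated, removing the inliers of each accepted shape, until only the residual points remain.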

1,800 citations

Proceedings ArticleDOI
29 Jul 2006
TL;DR: A progressive, lossless compression method for point-sampled models that is specifically suited to densely sampled surface geometry; additional point attributes, such as color, can be well integrated and efficiently encoded in the same framework.
Abstract: In this paper we present a progressive compression method for point-sampled models that is specifically suited to densely sampled surface geometry. The compression is lossless and therefore also suitable for storing the unfiltered, raw scan data. Our method is based on an octree decomposition of space. The point cloud is encoded in terms of occupied octree cells. To compress the octree we employ novel prediction techniques that were specifically designed for point-sampled geometry and are based on local surface approximations, achieving high compression rates that outperform previous progressive coders for point-sampled geometry. Moreover, we demonstrate that additional point attributes, such as color, which are of great importance for point-sampled geometry, can be well integrated and efficiently encoded in this framework.
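The octree occupancy coding that the method builds on can be sketched as follows. This toy version emits one raw byte of child-occupancy flags per nonempty cell and omits the paper's surface-based prediction and entropy coding, which is where the actual compression gains come from; assume points lie in the unit cube:

```python
import numpy as np

def encode_octree(points, depth):
    """Encode points in [0,1)^3 as breadth-first 8-bit occupancy codes."""
    fine = {tuple(int(v) for v in c)
            for c in np.floor(points * (1 << depth)).astype(int)}
    # Occupied cells at every level are the finest cells, right-shifted.
    occ = [{tuple(v >> (depth - l) for v in cell) for cell in fine}
           for l in range(depth + 1)]
    codes = []
    for l in range(depth):
        for cell in sorted(occ[l]):      # canonical order, shared with the decoder
            byte = 0
            for b in range(8):           # bit b marks child offset (b&1, b>>1&1, b>>2&1)
                child = tuple(2 * cell[i] + ((b >> i) & 1) for i in range(3))
                if child in occ[l + 1]:
                    byte |= 1 << b
            codes.append(byte)
    return codes

def decode_octree(codes, depth):
    """Recover finest-level cell centers from the occupancy codes."""
    cells, stream = [(0, 0, 0)], iter(codes)
    for _ in range(depth):
        nxt = []
        for cell in sorted(cells):
            byte = next(stream)
            for b in range(8):
                if byte & (1 << b):
                    nxt.append(tuple(2 * cell[i] + ((b >> i) & 1) for i in range(3)))
        cells = nxt
    return [tuple((v + 0.5) / (1 << depth) for v in cell) for cell in cells]

rng = np.random.default_rng(0)
pts = rng.random((50, 3))
codes = encode_octree(pts, depth=4)
centers = decode_octree(codes, depth=4)
print(len(codes), len(centers))
```

The stream is progressive: decoding a prefix of the codes yields the same geometry at a coarser octree level.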

406 citations

Proceedings ArticleDOI
16 Jun 2003
TL;DR: This paper advocates the use of so-called 3D Zernike invariants as descriptors for content-based 3D shape retrieval and provides a practical analysis of these invariants along with algorithms and computational details.
Abstract: Content-based 3D shape retrieval for broad domains like the World Wide Web has recently gained considerable attention in the Computer Graphics community. One of the main challenges in this context is the mapping of 3D objects into compact canonical representations referred to as descriptors, which serve as search keys during the retrieval process. The descriptors should have certain desirable properties like invariance under scaling, rotation and translation. Very importantly, they should possess descriptive power providing a basis for a similarity measure between three-dimensional objects which is close to the human notion of resemblance. In this paper we advocate the use of so-called 3D Zernike invariants as descriptors for content-based 3D shape retrieval. The basis polynomials of this representation facilitate computation of invariants under the above transformations. Some theoretical results have already been summarized in the past from the perspective of pattern recognition and shape analysis. We provide a practical analysis of these invariants along with algorithms and computational details. Furthermore, we give a detailed discussion of the influence of algorithm parameters such as the type and resolution of the conversion into a volumetric function, the number of utilized coefficients, etc. As our study reveals, the 3D Zernike descriptors are natural extensions of spherical-harmonics-based descriptors, which are reported to be among the most successful representations at present. We compare 3D Zernike descriptors against these with regard to computational aspects and shape retrieval performance.

337 citations

01 Jan 2003
TL;DR: This paper introduces an enhanced 3D version of the recently introduced 2D Shape Contexts that serves as a fast, intuitive and powerful similarity model for measuring 3D shape similarity.
Abstract: Content-based 3D shape retrieval for broad domains like the World Wide Web has recently gained considerable attention in the Computer Graphics community. One of the main challenges in this context is the mapping of 3D objects into compact canonical representations, referred to as descriptors or feature vectors, which serve as search keys during the retrieval process. The descriptors should have certain desirable properties like invariance under scaling, rotation and translation, as well as a descriptive power providing a basis for a similarity measure between three-dimensional objects which is close to the human notion of resemblance. In this paper we introduce an enhanced 3D version of the recently introduced 2D Shape Contexts that serves as a fast, intuitive and powerful similarity model for 3D objects. The Shape Context at a point captures the distribution of the relative positions of the other shape points and thus summarizes global shape in a rich, local descriptor. Shape Contexts greatly simplify recovery of correspondences between points of two given shapes. Moreover, the Shape Context leads to a robust score for measuring shape similarity once shapes are aligned.
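The 3D Shape Context at a point, a binned histogram of the relative positions of the other points, can be sketched roughly like this. Bin counts and radial range are illustrative choices, not the paper's:

```python
import numpy as np

def shape_context_3d(points, center, r_bins=5, theta_bins=4, phi_bins=8,
                     r_min=0.01, r_max=2.0):
    """Histogram of the other points' offsets in log-radius x polar x azimuth bins."""
    d = points - center
    r = np.linalg.norm(d, axis=1)
    d, r = d[r > r_min], r[r > r_min]        # drop the center point itself
    # Log-spaced radial edges emphasize nearby structure, as in 2D Shape Contexts.
    edges = np.logspace(np.log10(r_min), np.log10(r_max), r_bins + 1)
    ri = np.clip(np.searchsorted(edges, r) - 1, 0, r_bins - 1)
    theta = np.arccos(np.clip(d[:, 2] / r, -1.0, 1.0))    # polar angle in [0, pi]
    phi = np.arctan2(d[:, 1], d[:, 0]) + np.pi            # azimuth in [0, 2*pi]
    ti = np.clip((theta / np.pi * theta_bins).astype(int), 0, theta_bins - 1)
    ai = np.clip((phi / (2 * np.pi) * phi_bins).astype(int), 0, phi_bins - 1)
    hist = np.zeros((r_bins, theta_bins, phi_bins))
    np.add.at(hist, (ri, ti, ai), 1)
    return hist / max(len(r), 1)             # normalize to compare across shapes

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
h = shape_context_3d(pts, pts[0])
print(h.shape)
```

Matching two shapes then reduces to comparing such histograms (e.g. with a chi-squared distance) across corresponding sample points.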

300 citations

Journal ArticleDOI
TL;DR: A practical analysis of 3D Zernike invariants along with algorithms and computational details, and a detailed discussion of the influence of algorithm parameters such as the conversion into a volumetric function, the number of utilized coefficients, etc.
Abstract: We advocate the use of 3D Zernike invariants as descriptors for 3D shape retrieval. The basis polynomials of this representation facilitate computation of invariants under rotation, translation and scaling. Some theoretical results have already been summarized in the past from the perspective of pattern recognition and shape analysis. We provide a practical analysis of these invariants along with algorithms and computational details. Furthermore, we give a detailed discussion of the influence of algorithm parameters such as the conversion into a volumetric function, the number of utilized coefficients, etc. As our study reveals, the 3D Zernike descriptors are natural extensions of the recently introduced spherical-harmonics-based descriptors. We compare 3D Zernike descriptors against these with regard to computational aspects and shape retrieval performance, using several quality measures and experiments on the Princeton Shape Benchmark.
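The invariance properties these descriptors are built around can be demonstrated with a much simpler stand-in: center the points (translation invariance), normalize to unit RMS radius (scale invariance), then take a radial-distance histogram, which is trivially rotation invariant. This is not the Zernike basis, only an illustration of the invariance mechanism:

```python
import numpy as np

def invariant_descriptor(points, bins=16):
    """Translation/scale-normalize, then take a rotation-invariant radial histogram."""
    p = points - points.mean(axis=0)               # remove translation
    p = p / np.sqrt((p ** 2).sum(axis=1).mean())   # normalize RMS radius to 1
    r = np.linalg.norm(p, axis=1)                  # radii are unchanged by rotation
    hist, _ = np.histogram(r, bins=bins, range=(0, 3), density=True)
    return hist

rng = np.random.default_rng(0)
pts = rng.random((300, 3))
# Apply a random rotation (orthogonal Q from QR), plus arbitrary scale and shift.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
pts2 = 5.0 * pts @ Q.T + np.array([10.0, -3.0, 7.0])
print(np.allclose(invariant_descriptor(pts), invariant_descriptor(pts2)))  # True
```

The Zernike invariants achieve the same rotational invariance far more descriptively, by taking norms over the rotation-equivariant moment coefficients rather than discarding angular information entirely.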

270 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: A personal reflection on i, the square root of minus one, which at first seems an odd beast: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Posted Content
TL;DR: ShapeNet is a collection of datasets of 3D models drawn from a multitude of semantic categories and organized under the WordNet taxonomy, providing rich per-model semantic annotations such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes and keywords, as well as other planned annotations.
Abstract: We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans.

3,707 citations

01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either have practical significance or are of theoretical importance, and describes numerous important application areas such as image-based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful through first-hand experience, along with a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, the book covers essential topics that either have practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. The book is appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

Journal ArticleDOI
TL;DR: The random forest is clearly the best family of classifiers (3 of the 5 best classifiers are RF), followed by SVM (4 classifiers in the top 10), neural networks and boosting ensembles (5 and 3 members in the top 20, respectively).
Abstract: We evaluate 179 classifiers arising from 17 families (discriminant analysis, Bayesian, neural networks, support vector machines, decision trees, rule-based classifiers, boosting, bagging, stacking, random forests and other ensembles, generalized linear models, nearest-neighbors, partial least squares and principal component regression, logistic and multinomial regression, multiple adaptive regression splines and other methods), implemented in Weka, R (with and without the caret package), C and Matlab, including all the relevant classifiers available today. We use 121 data sets, which represent the whole UCI database (excluding the large-scale problems) plus other real-world problems of our own, in order to reach significant conclusions about classifier behavior that do not depend on the data set collection. The classifiers most likely to be the best are the random forest (RF) versions, the best of which (implemented in R and accessed via caret) achieves 94.1% of the maximum accuracy, exceeding 90% in 84.3% of the data sets. However, the difference is not statistically significant with respect to the second best, the SVM with Gaussian kernel implemented in C using LibSVM, which achieves 92.3% of the maximum accuracy. A few models are clearly better than the remaining ones: random forest, SVM with Gaussian and polynomial kernels, extreme learning machine with Gaussian kernel, C5.0 and avNNet (a committee of multi-layer perceptrons implemented in R with the caret package). The random forest is clearly the best family of classifiers (3 of the 5 best classifiers are RF), followed by SVM (4 classifiers in the top 10), neural networks and boosting ensembles (5 and 3 members in the top 20, respectively).
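A miniature version of such an evaluation, comparing a random forest against a Gaussian (RBF) kernel SVM with cross-validation on a single UCI-style data set, might look like the following. This uses scikit-learn rather than the Weka/R/C/Matlab implementations the study evaluates, and a single data set cannot support the paper's conclusions; it only shows the shape of the protocol:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # one small UCI-style data set
models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    # Feature scaling matters for kernel SVMs, hence the pipeline.
    "rbf svm": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
results = {}
for name, model in models.items():
    results[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold accuracy {results[name]:.3f}")
```

The full study repeats this over 121 data sets and ranks families by how close each classifier comes to the per-data-set maximum accuracy.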

2,616 citations