Author

Marburg

Bio: Marburg is an academic researcher. The author has contributed to research in the topics Atomic packing factor and Sphere packing. The author has an h-index of 1 and has co-authored 1 publication receiving 1,440 citations.

Papers

Cited by
Monograph (DOI)
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, and extends it to planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.

6,340 citations
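To make the planning-under-uncertainty material mentioned in the abstract concrete, here is a minimal value-iteration sketch for a small Markov decision process; the toy transition model, rewards, and discount factor below are invented for illustration and are not taken from the book.

import numpy as np

# Toy MDP: 3 states, 2 actions. P[a][s][t] is the probability of moving from
# state s to state t under action a; R[s][a] is the immediate reward.
# All numbers are made up for illustration.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],   # action 1
])
R = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
gamma = 0.95

V = np.zeros(3)
for _ in range(1000):
    # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)   # greedy policy with respect to the converged values
print("V* =", V, "policy =", policy)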

Book
01 Jan 1996
TL;DR: The Bayes error and Vapnik-Chervonenkis theory are applied as a guide for empirical classifier selection, together with explicit specification and enforcement of the maximum likelihood principle.
Abstract: Preface * Introduction * The Bayes Error * Inequalities and alternate distance measures * Linear discrimination * Nearest neighbor rules * Consistency * Slow rates of convergence * Error estimation * The regular histogram rule * Kernel rules * Consistency of the k-nearest neighbor rule * Vapnik-Chervonenkis theory * Combinatorial aspects of Vapnik-Chervonenkis theory * Lower bounds for empirical classifier selection * The maximum likelihood principle * Parametric classification * Generalized linear discrimination * Complexity regularization * Condensed and edited nearest neighbor rules * Tree classifiers * Data-dependent partitioning * Splitting the data * The resubstitution estimate * Deleted estimates of the error probability * Automatic kernel rules * Automatic nearest neighbor rules * Hypercubes and discrete spaces * Epsilon entropy and totally bounded sets * Uniform laws of large numbers * Neural networks * Other error estimates * Feature extraction * Appendix * Notation * References * Index

3,598 citations
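One of the rules the book analyzes at length is the k-nearest neighbor rule; a minimal sketch of it follows, where the toy training data and the choice of k are invented for illustration.

import numpy as np

def knn_classify(X_train, y_train, x, k=3):
    # Classify x by majority vote among its k nearest training points
    # (Euclidean distance); ties are broken in favor of the lower label index.
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = np.bincount(y_train[nearest])
    return int(np.argmax(votes))

# Toy training set: two classes in the plane (values invented for illustration).
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 1, 1, 1])
print(knn_classify(X, y, np.array([0.8, 0.9]), k=3))   # expected: 1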

Journal Article (DOI)
TL;DR: The novelty of the approach is that it does not use a model selection criterion to choose one among a set of pre-estimated candidate models; instead, it seamlessly integrates estimation and model selection in a single algorithm.
Abstract: This paper proposes an unsupervised algorithm for learning a finite mixture model from multivariate data. The adjective "unsupervised" is justified by two properties of the algorithm: 1) it is capable of selecting the number of components and 2) unlike the standard expectation-maximization (EM) algorithm, it does not require careful initialization. The proposed method also avoids another drawback of EM for mixture fitting: the possibility of convergence toward a singular estimate at the boundary of the parameter space. The novelty of our approach is that we do not use a model selection criterion to choose one among a set of pre-estimated candidate models; instead, we seamlessly integrate estimation and model selection in a single algorithm. Our technique can be applied to any type of parametric mixture model for which it is possible to write an EM algorithm; in this paper, we illustrate it with experiments involving Gaussian mixtures. These experiments attest to the good performance of our approach.

2,182 citations
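The paper folds model selection into the EM iterations themselves. As a much simpler stand-in for that idea, one can fit Gaussian mixtures with several component counts and compare them by BIC; the scikit-learn sketch below is not the paper's algorithm, and the synthetic data are invented for illustration.

import numpy as np
from sklearn.mixture import GaussianMixture

# Toy 1-D data drawn from two Gaussians (illustrative only).
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.5, 300),
                       rng.normal(3.0, 1.0, 300)]).reshape(-1, 1)

# Fit mixtures with 1..6 components and keep the one with the lowest BIC.
# Note: the cited paper instead prunes components inside a single EM run.
best = min(
    (GaussianMixture(n_components=k, random_state=0).fit(data) for k in range(1, 7)),
    key=lambda m: m.bic(data),
)
print("selected number of components:", best.n_components)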

Journal Article (DOI)
TL;DR: An efficient closest point search algorithm, based on the Schnorr-Euchner (1995) variation of the Pohst (1981) method, is implemented and is shown to be substantially faster than other known methods.
Abstract: In this semitutorial paper, a comprehensive survey of closest point search methods for lattices without a regular structure is presented. The existing search strategies are described in a unified framework, and differences between them are elucidated. An efficient closest point search algorithm, based on the Schnorr-Euchner (1995) variation of the Pohst (1981) method, is implemented. Given an arbitrary point x ∈ ℝ^m and a generator matrix for a lattice Λ, the algorithm computes the point of Λ that is closest to x. The algorithm is shown to be substantially faster than other known methods, by means of a theoretical comparison with the Kannan (1983, 1987) algorithm and an experimental comparison with the Pohst (1981) algorithm and its variants, such as the Viterbo-Boutros (see ibid. vol. 45, p. 1639-42, 1999) decoder. Modifications of the algorithm are developed to solve a number of related search problems for lattices, such as finding a shortest vector, determining the kissing number, computing the Voronoi (1908)-relevant vectors, and finding a Korkine-Zolotareff (1873) reduced basis.

1,616 citations
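The closest point problem itself is easy to state even though efficient search is not. A brute-force sketch over a bounded set of integer coefficients illustrates it; this is not the Schnorr-Euchner search implemented in the paper, and the generator matrix and search range below are invented for illustration.

import itertools
import numpy as np

def closest_lattice_point(B, x, coeff_range=range(-5, 6)):
    # Brute-force search over integer coefficient vectors u: each lattice point
    # is B @ u, and we keep the one minimizing ||x - B @ u||. This only
    # illustrates the problem; the cited paper's Schnorr-Euchner search prunes
    # this enumeration and scales to lattices of practical dimension.
    best, best_d = None, np.inf
    for u in itertools.product(coeff_range, repeat=B.shape[1]):
        p = B @ np.array(u)
        d = np.linalg.norm(x - p)
        if d < best_d:
            best, best_d = p, d
    return best

B = np.array([[1.0, 0.5], [0.0, 0.866]])   # made-up generator matrix (hexagonal-like lattice)
x = np.array([2.3, 1.7])
print(closest_lattice_point(B, x))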

Journal Article (DOI)
TL;DR: The key to successful quantization is the selection of an error criterion – such as entropy and signal-to-noise ratio – and the development of optimal quantizers for this criterion.
Abstract: Quantization is a process that maps a continuous or discrete set of values into approximations that belong to a smaller set. Quantization is lossy: some information about the original data is lost in the process. The key to successful quantization is therefore the selection of an error criterion – such as entropy and signal-to-noise ratio – and the development of optimal quantizers for this criterion.

1,574 citations
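As a small illustration of the error-criterion point made above, here is a sketch of a uniform scalar quantizer together with the signal-to-noise ratio of its reconstruction; the signal model and step size are arbitrary choices for illustration, not taken from the paper.

import numpy as np

# Uniform scalar quantizer with step size delta, plus the signal-to-noise
# ratio of the reconstruction -- one of the error criteria mentioned above.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 10_000)
delta = 0.25

quantized = delta * np.round(signal / delta)    # map each value to the nearest level
noise = signal - quantized
snr_db = 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))
print(f"SNR of the quantized signal: {snr_db:.1f} dB")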