Author

Norbert Pfeifer

Bio: Norbert Pfeifer is an academic researcher from Vienna University of Technology. The author has contributed to research in topics: Point cloud & Laser scanning. The author has an h-index of 49, has co-authored 249 publications, and has received 8,855 citations. Previous affiliations of Norbert Pfeifer include University of Vienna & University of Innsbruck.


Papers
Journal ArticleDOI
TL;DR: In this article, the characteristics of laser scanning are compared to photogrammetry with reference to a large pilot project. The results are in accordance with the expectations; however, the geomorphologic quality of the contours, computed from a terrain model derived from laser scanning, still needs to be improved.
Abstract: Large-scale terrain measurement in wooded areas was an unsolved problem up to now. Laser scanning solves this problem to a large extent. In this article, the characteristics of laser scanning will be compared to photogrammetry with reference to a big pilot project. Laser scanning supplies data with a skew distribution of errors because a portion of the supplied points is not on the terrain but on the treetops. Thus, the usual interpolation and filtering has to be adapted to this new data type. We will report on the implementation of this new method. The results are in accordance with the expectations. The geomorphologic quality of the contours, computed from a terrain model derived from laser scanning, needs to be improved. Solutions are still to be found.
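
The skewed error distribution described above suggests a filter that down-weights echoes lying above the fitted surface. The following Python sketch illustrates that idea with an iterative, asymmetrically weighted surface fit; it is a simplified illustration, not the interpolation actually implemented in the project, and the single-plane surface model and the parameters g and w are assumptions chosen only for demonstration.

import numpy as np

def fit_weighted_plane(points, weights):
    # weighted least-squares plane z = a*x + b*y + c (a deliberately crude
    # stand-in for the terrain interpolation used in practice)
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    sw = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], points[:, 2] * sw, rcond=None)
    return A @ coef

def filter_ground(points, iterations=5, g=-0.5, w=2.0):
    # points: (N, 3) array of x, y, z laser echoes
    weights = np.ones(len(points))
    for _ in range(iterations):
        surface_z = fit_weighted_plane(points, weights)
        residuals = points[:, 2] - surface_z
        # asymmetric weighting: echoes far above the current surface
        # (likely vegetation) get weights near zero, echoes on or below
        # it keep full weight; g and w are illustrative parameters
        weights = np.where(residuals < g, 1.0,
                           1.0 / (1.0 + ((residuals - g) / w) ** 4))
    return weights  # high weight ~ likely terrain point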

1,196 citations

Journal ArticleDOI
TL;DR: In this paper, two different methods, data-driven and model-driven correction, are presented for correcting laser scanning intensity data for known influences, resulting in a value proportional to the reflectance of the scanned surface.
Abstract: Most airborne and terrestrial laser scanning systems additionally record the received signal intensity for each measurement. Multiple studies show the potential of this intensity value for a great variety of applications (e.g. strip adjustment, forestry, glaciology), but also state problems if using the original recorded values. Three main factors, a) spherical loss, b) topographic and c) atmospheric effects, influence the backscatter of the emitted laser power, which leads to a noticeably heterogeneous representation of the received power. This paper describes two different methods for correcting the laser scanning intensity data for these known influences resulting in a value proportional to the reflectance of the scanned surface. The first approach – data-driven correction – uses predefined homogeneous areas to empirically estimate the best parameters (least-squares adjustment) for a given global correction function accounting for all range-dependent influences. The second approach – model-driven correction – corrects each intensity independently based on the physical principle of radar systems. The evaluation of both methods, based on homogeneous reflecting areas acquired at different heights in different missions, indicates a clear reduction of intensity variation, to 1/3.5 of the original variation, and offsets between flight strips to 1/10. The presented correction methods establish a great potential for laser scanning intensity to be used for surface classification and multi-temporal analyses.
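
As a rough illustration of the model-driven idea, a correction in the spirit of the radar range equation can scale each intensity value by range, incidence angle, and an atmospheric term. The Python sketch below is a simplified version: the reference range and attenuation coefficient are assumed values, and the paper's exact formulation may differ.

import numpy as np

def correct_intensity(intensity, range_m, incidence_deg,
                      ref_range_m=1000.0, atm_att_db_per_km=0.2):
    # a) spherical loss: received power falls off with the squared range,
    #    so normalise to an (assumed) reference range
    spherical = (range_m / ref_range_m) ** 2
    # b) topographic effect: compensate the incidence angle on the surface
    topographic = 1.0 / np.cos(np.radians(incidence_deg))
    # c) atmospheric effect: two-way attenuation in dB converted back to a
    #    linear factor (atm_att_db_per_km is an assumed coefficient)
    atmospheric = 10.0 ** (2.0 * range_m * atm_att_db_per_km / 10000.0)
    return intensity * spherical * topographic * atmospheric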

509 citations

Journal ArticleDOI
17 Nov 2008 - Sensors
TL;DR: This article proposes a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data, based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces.
Abstract: Three dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are either based on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling require, generally, a high degree of human interaction and most automated approaches described in literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects.
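
The planar-face assumption can be illustrated with a simple RANSAC plane search. The paper relies on its own reliable 3D segmentation algorithm, so the Python sketch below (with assumed distance threshold and iteration count) is only a stand-in for the idea of extracting planar roof faces from a building point cloud.

import numpy as np

def ransac_plane(points, n_iter=500, dist_thresh=0.05, seed=0):
    # points: (N, 3) array; returns a boolean mask of the largest planar face
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                     # nearly collinear sample, skip
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# repeated application (remove the inliers, search again) decomposes a roof
# point cloud into a set of planar faces for the subsequent modeling step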

327 citations

Journal ArticleDOI
TL;DR: A comparison of the evaluation techniques shows that they highlight different properties of the building detection results, and a comprehensive evaluation strategy involving quality metrics derived by different methods is proposed.
Abstract: In this paper, different methods for the evaluation of building detection algorithms are compared. Whereas pixel-based evaluation gives estimates of the area that is correctly classified, the results are distorted by errors at the building outlines. These distortions can be on the order of 30%. Object-based evaluation techniques are less affected by such errors. However, the performance metrics thus delivered are sometimes considered to be less objective, because the definition of a "correct detection" is not unique. Based on a critical review of existing performance metrics, selected methods for the evaluation of building detection results are presented. These methods are used to evaluate the results of two different building detection algorithms in two test sites. A comparison of the evaluation techniques shows that they highlight different properties of the building detection results. As a consequence, a comprehensive evaluation strategy involving quality metrics derived by different methods is proposed.
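
The quality metrics underlying both pixel-based and object-based evaluation can be written down compactly. The Python sketch below shows the standard completeness/correctness/quality ratios; the example counts are made-up numbers for illustration only.

def detection_metrics(tp, fp, fn):
    # tp, fp, fn: true positives, false positives, false negatives, counted
    # either as pixels/area (pixel-based) or as buildings (object-based)
    completeness = tp / (tp + fn)      # share of the reference that was found
    correctness = tp / (tp + fp)       # share of detections that are correct
    quality = tp / (tp + fp + fn)      # combined measure
    return completeness, correctness, quality

# object-based toy example: 47 buildings detected correctly, 5 false
# detections, 3 reference buildings missed (illustrative numbers only)
print(detection_metrics(47, 5, 3))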

311 citations

Journal ArticleDOI
TL;DR: In this article, the segmentation of airborne laser scanning data is based on cluster analysis in a feature space, and a recently proposed neighborhood system, called slope adaptive, is utilized to improve the quality of the computed attributes.
Abstract: This paper presents an algorithm for the segmentation of airborne laser scanning data. The segmentation is based on cluster analysis in a feature space. To improve the quality of the computed attributes, a recently proposed neighborhood system, called slope adaptive, is utilized. Key parameters of the laser data, e.g., point density, measurement accuracy, and horizontal and vertical point distribution, are used for defining the neighborhood among the measured points. Accounting for these parameters facilitates the computation of accurate and reliable attributes for the segmentation irrespective of point density and the 3D content of the data (step edges, layered surfaces, etc.). The segmentation with these attributes reveals more of the information that exists in the airborne laser scanning data.
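
To make the feature-space idea concrete, the Python sketch below computes per-point attributes (a PCA-based surface normal plus height) from a local neighborhood and then clusters the points in that feature space. Plain k-nearest-neighbor queries stand in for the slope-adaptive neighborhood, and k-means stands in for the paper's cluster analysis; k and n_segments are illustrative choices.

import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

def segment_by_features(points, k=20, n_segments=8):
    # points: (N, 3) array of x, y, z coordinates
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty((len(points), 3))
    for i, nb in enumerate(idx):
        centered = points[nb] - points[nb].mean(axis=0)
        # the right-singular vector of the smallest singular value
        # approximates the local surface normal
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        n = vt[-1]
        normals[i] = n if n[2] >= 0 else -n   # fix the sign ambiguity
    features = np.c_[normals, points[:, 2]]   # normal components + height
    return KMeans(n_clusters=n_segments, n_init=10).fit_predict(features)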

235 citations


Cited by
Journal ArticleDOI

6,278 citations

Journal ArticleDOI
01 May 1981
TL;DR: This work discusses detecting influential observations and outliers, detecting and assessing collinearity, and the corresponding applications and remedies.
Abstract: 1. Introduction and Overview. 2. Detecting Influential Observations and Outliers. 3. Detecting and Assessing Collinearity. 4. Applications and Remedies. 5. Research Issues and Directions for Extensions. Bibliography. Author Index. Subject Index.

4,948 citations

Journal ArticleDOI
TL;DR: The random forest is clearly the best family of classifiers (3 out of the 5 best classifiers are RF), followed by SVM (4 classifiers in the top 10), neural networks and boosting ensembles (5 and 3 members in the top 20, respectively).
Abstract: We evaluate 179 classifiers arising from 17 families (discriminant analysis, Bayesian, neural networks, support vector machines, decision trees, rule-based classifiers, boosting, bagging, stacking, random forests and other ensembles, generalized linear models, nearest-neighbors, partial least squares and principal component regression, logistic and multinomial regression, multiple adaptive regression splines and other methods), implemented in Weka, R (with and without the caret package), C and Matlab, including all the relevant classifiers available today. We use 121 data sets, which represent the whole UCI database (excluding the large-scale problems) and other real problems of our own, in order to achieve significant conclusions about classifier behavior that do not depend on the data set collection. The classifiers most likely to be the best are the random forest (RF) versions, the best of which (implemented in R and accessed via caret) achieves 94.1% of the maximum accuracy, exceeding 90% in 84.3% of the data sets. However, the difference is not statistically significant with respect to the second best, the SVM with Gaussian kernel implemented in C using LibSVM, which achieves 92.3% of the maximum accuracy. A few models are clearly better than the remaining ones: random forest, SVM with Gaussian and polynomial kernels, extreme learning machine with Gaussian kernel, C5.0 and avNNet (a committee of multi-layer perceptrons implemented in R with the caret package). The random forest is clearly the best family of classifiers (3 out of the 5 best classifiers are RF), followed by SVM (4 classifiers in the top 10), neural networks and boosting ensembles (5 and 3 members in the top 20, respectively).
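
The headline figures appear to rest on normalizing each classifier's accuracy by the best accuracy any classifier achieved on that data set, then averaging across data sets. The Python sketch below shows one possible reading of that normalization; the accuracy values are invented for illustration.

import numpy as np

def pct_of_max_accuracy(acc):
    # acc: (n_classifiers, n_datasets) accuracy matrix; normalise each column
    # by the best accuracy on that data set, then average per classifier
    pct = 100.0 * acc / acc.max(axis=0, keepdims=True)
    return pct.mean(axis=1)

# toy example: 3 classifiers on 4 data sets (accuracies are invented)
acc = np.array([[0.90, 0.85, 0.70, 0.95],
                [0.88, 0.86, 0.72, 0.93],
                [0.80, 0.70, 0.65, 0.90]])
print(pct_of_max_accuracy(acc))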

2,616 citations

Journal ArticleDOI
TL;DR: This paper reviews remote sensing implementations of support vector machines (SVMs), a promising machine learning methodology that is particularly appealing in the remote sensing field due to their ability to generalize well even with limited training samples.
Abstract: A wide range of methods for analysis of airborne- and satellite-derived imagery continues to be proposed and assessed. In this paper, we review remote sensing implementations of support vector machines (SVMs), a promising machine learning methodology. This review is timely due to the exponentially increasing number of works published in recent years. SVMs are particularly appealing in the remote sensing field due to their ability to generalize well even with limited training samples, a common limitation for remote sensing applications. However, they also suffer from parameter assignment issues that can significantly affect obtained results. A summary of empirical results is provided for various applications of over one hundred published works (as of April, 2010). It is our hope that this survey will provide guidelines for future applications of SVMs and possible areas of algorithm enhancement.
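
Since the review highlights parameter assignment as the main practical pitfall, a common remedy is cross-validated grid search over the SVM's C and gamma. The Python sketch below uses scikit-learn with a stand-in digits data set and an assumed parameter grid; in a remote sensing application the feature vectors would come from image bands and the labels from reference data.

from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# stand-in data; real applications would use per-pixel or per-object
# spectral/geometric features with reference labels
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# cross-validated grid search over C and gamma of an RBF-kernel SVM;
# the grid values are illustrative assumptions
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(model, {"svc__C": [1, 10, 100],
                            "svc__gamma": ["scale", 0.01, 0.001]}, cv=5)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))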

2,546 citations