Author

David A. Landgrebe

Bio: David A. Landgrebe is an academic researcher at Purdue University. He has contributed to research on multispectral imaging and multispectral pattern recognition, has an h-index of 48, and has co-authored 177 publications receiving 14,075 citations. His previous affiliations include DuPont and the Rochester Institute of Technology.


Papers
Journal ArticleDOI
01 Jun 1991
TL;DR: The subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed; the relation between decision trees and neural networks (NN) is also discussed.
Abstract: A survey is presented of current methods for decision tree classifier (DTC) design and the various existing issues. After considering potential advantages of DTCs over single-stage classifiers, the subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed. The relation between decision trees and neural networks (NN) is also discussed.

3,176 citations
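
To make the per-node design choices the survey discusses concrete (feature selection at each internal node, then a decision rule), here is a minimal sketch of one internal node's split search. The exhaustive threshold scan and the Gini criterion are generic illustrative choices, not the specific methods the survey covers.

```python
# Minimal sketch of one internal node of a decision tree classifier (DTC):
# pick the single feature and threshold that minimize weighted Gini impurity.
# The Gini criterion here is a generic choice, not the specific rule surveyed.
import numpy as np

def gini(labels):
    """Gini impurity of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Exhaustive search over features and observed values for the best binary split."""
    n, d = X.shape
    best = (None, None, np.inf)  # (feature index, threshold, impurity)
    for j in range(d):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (j, t, score)
    return best

# Toy usage: two features, two classes separable on feature 0.
X = np.array([[0.1, 5.0], [0.2, 4.0], [0.9, 5.5], [0.8, 4.5]])
y = np.array([0, 0, 1, 1])
print(best_split(X, y))  # expect feature 0, threshold near 0.2, impurity 0.0
```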

01 Jan 1978
TL;DR: Keyword-only catalog record for a 1978 work on remote sensing; no abstract is available.
Abstract: Keywords: remote sensing; data processing (traitement de données).

1,149 citations

Journal ArticleDOI
TL;DR: The article includes an example of an image space representation, using three bands to simulate a color IR photograph of an airborne hyperspectral data set over the Washington, DC, Mall.
Abstract: The fundamental basis for space-based remote sensing is that information is potentially available from the electromagnetic energy field arising from the Earth's surface and, in particular, from the spatial, spectral, and temporal variations in that field. Rather than focusing on the spatial variations, which imagery perhaps best conveys, why not look at how the spectral variations might be used? The idea was to enlarge the size of a pixel until it includes an area that is characteristic, from a spectral response standpoint, of the surface cover to be discriminated. The article includes an example of an image space representation, using three bands to simulate a color IR photograph of an airborne hyperspectral data set over the Washington, DC, Mall.

1,007 citations
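
As a rough illustration of the image-space representation described above, the sketch below maps three bands of a hyperspectral cube (near-IR, red, green) onto the R, G, and B display channels to mimic a color IR photograph. The band indices are hypothetical placeholders; real values depend on the sensor's band centers.

```python
# Sketch: build a color-IR-style composite from a hyperspectral cube by mapping
# three bands to RGB (NIR -> red, red -> green, green -> blue). The band
# indices below are hypothetical; real ones depend on the sensor's band centers.
import numpy as np

def color_ir_composite(cube, nir_band=50, red_band=30, green_band=20):
    """cube: (rows, cols, bands) array of radiance/reflectance values."""
    composite = np.stack(
        [cube[:, :, nir_band], cube[:, :, red_band], cube[:, :, green_band]],
        axis=-1,
    ).astype(np.float64)
    # Stretch each channel independently to [0, 1] for display.
    lo = composite.min(axis=(0, 1), keepdims=True)
    hi = composite.max(axis=(0, 1), keepdims=True)
    return (composite - lo) / np.maximum(hi - lo, 1e-12)

# Toy usage with a random cube standing in for real hyperspectral data.
cube = np.random.rand(100, 100, 200)
rgb = color_ir_composite(cube)
print(rgb.shape, rgb.min(), rgb.max())  # (100, 100, 3), values in [0, 1]
```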

Book
31 Jan 2003
TL;DR: In this book, the author presents the signal theory and pattern recognition methods used to analyze conventional multispectral and hyperspectral image data, covering sensor systems, classifier training, feature definition, and a data analysis paradigm with examples.
Abstract: Contents: Preface. Part I, Introduction: 1. Introduction and Background. Part II, The Basics for Conventional Multispectral Data: 2. Radiation and Sensor Systems in Remote Sensing; 3. Pattern Recognition in Remote Sensing. Part III, Additional Details: 4. Training a Classifier; 5. Hyperspectral Data Characteristics; 6. Feature Definition; 7. A Data Analysis Paradigm and Examples; 8. Use of Spatial Variations; 9. Noise in Remote Sensing Systems; 10. Multispectral Image Data Preprocessing. Appendix: An Outline of Probability Theory. Exercises. Index.

889 citations

Journal ArticleDOI
01 Sep 1994
TL;DR: By using additional unlabeled samples that are available at no extra cost, classifier performance may be improved, the Hughes phenomenon mitigated, and more representative parameter estimates obtained.
Abstract: The authors study the use of unlabeled samples in reducing the problem of small training sample size, which can severely affect the recognition rate of classifiers when the dimensionality of the multispectral data is high. They show that by using additional unlabeled samples, available at no extra cost, performance may be improved and the Hughes phenomenon thereby mitigated. Furthermore, they show by experiment that using additional unlabeled samples yields more representative estimates. They also propose a semiparametric method for incorporating the training (i.e., labeled) and unlabeled samples simultaneously into the parameter estimation process.

547 citations
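
The paper's semiparametric estimator is more involved, but the core idea of folding unlabeled samples into parameter estimation can be sketched with a simple EM procedure for two Gaussian classes: labeled samples keep fixed (one-hot) class responsibilities, while unlabeled samples receive soft responsibilities that are re-estimated each iteration. Everything below is an illustrative assumption, not the paper's method.

```python
# Minimal sketch (not the paper's semiparametric method): maximum-likelihood
# estimation of two 1-D Gaussian class densities from labeled AND unlabeled
# samples via EM. Labeled points have fixed one-hot responsibilities;
# unlabeled points receive soft responsibilities each iteration.
import numpy as np

def em_labeled_unlabeled(x_lab, y_lab, x_unl, n_iter=50):
    x = np.concatenate([x_lab, x_unl])
    # Responsibilities r[i, k] = P(class k | x_i); labeled rows are one-hot.
    r = np.zeros((len(x), 2))
    r[np.arange(len(x_lab)), y_lab] = 1.0
    r[len(x_lab):] = 0.5  # uninformative start for unlabeled samples
    for _ in range(n_iter):
        # M-step: weighted priors, means, variances over ALL samples.
        w = r.sum(axis=0)
        pi = w / w.sum()
        mu = (r * x[:, None]).sum(axis=0) / w
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / w
        # E-step: update soft responsibilities for unlabeled samples only.
        lik = pi * np.exp(-0.5 * (x[len(x_lab):, None] - mu) ** 2 / var) / np.sqrt(var)
        r[len(x_lab):] = lik / lik.sum(axis=1, keepdims=True)
    return pi, mu, var

# Toy usage: few labeled points per class, many unlabeled ones.
rng = np.random.default_rng(0)
x_lab = np.array([-2.1, -1.9, 2.0, 2.2]); y_lab = np.array([0, 0, 1, 1])
x_unl = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
print(em_labeled_unlabeled(x_lab, y_lab, x_unl))  # means near -2 and +2
```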


Cited by
Journal ArticleDOI
TL;DR: The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.
Abstract: The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.

6,527 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, covering motion planning from discrete spaces through planning under uncertainty to the differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.

6,340 citations

Proceedings Article
04 Dec 2017
TL;DR: It is proved that, since data instances with larger gradients play a more important role in the computation of information gain, GOSS can obtain quite accurate estimates of the information gain from a much smaller data sample; the resulting GBDT implementation, which combines GOSS with Exclusive Feature Bundling, is called LightGBM.
Abstract: Gradient Boosting Decision Tree (GBDT) is a popular machine learning algorithm, and has quite a few effective implementations such as XGBoost and pGBRT. Although many engineering optimizations have been adopted in these implementations, the efficiency and scalability are still unsatisfactory when the feature dimension is high and data size is large. A major reason is that for each feature, they need to scan all the data instances to estimate the information gain of all possible split points, which is very time consuming. To tackle this problem, we propose two novel techniques: Gradient-based One-Side Sampling (GOSS) and Exclusive Feature Bundling (EFB). With GOSS, we exclude a significant proportion of data instances with small gradients, and only use the rest to estimate the information gain. We prove that, since the data instances with larger gradients play a more important role in the computation of information gain, GOSS can obtain quite accurate estimation of the information gain with a much smaller data size. With EFB, we bundle mutually exclusive features (i.e., they rarely take nonzero values simultaneously), to reduce the number of features. We prove that finding the optimal bundling of exclusive features is NP-hard, but a greedy algorithm can achieve quite good approximation ratio (and thus can effectively reduce the number of features without hurting the accuracy of split point determination by much). We call our new GBDT implementation with GOSS and EFB LightGBM. Our experiments on multiple public datasets show that, LightGBM speeds up the training process of conventional GBDT by up to over 20 times while achieving almost the same accuracy.

4,977 citations
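
A minimal sketch of the GOSS step as the abstract describes it: keep the top fraction a of instances by gradient magnitude, randomly sample a fraction b of the remainder, and up-weight the sampled small-gradient instances by (1 - a) / b so that information-gain estimates stay roughly unbiased. The a/b fractions follow the paper's notation; the rest of the code is illustrative, not LightGBM's implementation.

```python
# Sketch of Gradient-based One-Side Sampling (GOSS): keep the top-a fraction
# of instances by |gradient|, randomly sample a b fraction of the remainder,
# and up-weight the sampled small-gradient instances by (1 - a) / b to keep
# information-gain estimates roughly unbiased.
import numpy as np

def goss_sample(gradients, a=0.2, b=0.1, rng=None):
    rng = rng or np.random.default_rng()
    n = len(gradients)
    order = np.argsort(-np.abs(gradients))  # descending by |gradient|
    n_top = int(a * n)
    top_idx = order[:n_top]            # always keep large-gradient instances
    rest = order[n_top:]
    rand_idx = rng.choice(rest, size=int(b * n), replace=False)
    idx = np.concatenate([top_idx, rand_idx])
    weights = np.ones(len(idx))
    weights[n_top:] = (1.0 - a) / b    # compensate for down-sampling
    return idx, weights

# Toy usage: 10,000 instances with normally distributed gradients.
g = np.random.default_rng(1).standard_normal(10_000)
idx, w = goss_sample(g)
print(len(idx), w[:3], w[-3:])  # 3,000 kept; top weights 1.0, sampled 8.0
```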

Journal ArticleDOI
Julian Besag
TL;DR: In this paper, the author assumes that the local characteristics of the true scene can be represented by a non-degenerate Markov random field (MRF) and proposes a simple iterative reconstruction method that avoids the computational burden of exact Bayesian estimation and does not depend on the large-scale properties of the field.
Abstract: A continuous two-dimensional region is partitioned into a fine rectangular array of sites or "pixels", each pixel having a particular "colour" belonging to a prescribed finite set. The true colouring of the region is unknown but, associated with each pixel, there is a possibly multivariate record which conveys imperfect information about its colour according to a known statistical model. The aim is to reconstruct the true scene, with the additional knowledge that pixels close together tend to have the same or similar colours. In this paper, it is assumed that the local characteristics of the true scene can be represented by a non-degenerate Markov random field. Such information can be combined with the records by Bayes' theorem, and the true scene can be estimated according to standard criteria. However, the computational burden is enormous and the reconstruction may reflect undesirable large-scale properties of the random field. Thus, a simple, iterative method of reconstruction is proposed, which does not depend on these large-scale characteristics. The method is illustrated by computer simulations in which the original scene is not directly related to the assumed random field. Some complications, including parameter estimation, are discussed. Potential applications are mentioned briefly.

4,490 citations
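
The simple iterative method proposed here is known as iterated conditional modes (ICM); the sketch below illustrates the idea for a binary scene with Gaussian records and an Ising-style smoothing prior. The noise level sigma and smoothing weight beta are illustrative assumptions, not values from the paper.

```python
# Sketch in the spirit of the paper's iterative reconstruction (ICM): each
# pixel is set to the colour that maximizes its conditional posterior given
# its record and the current labels of its 4-neighbours. Binary colours,
# Gaussian records; sigma and beta are illustrative assumptions.
import numpy as np

def icm_binary(record, sigma=0.5, beta=1.5, n_sweeps=5):
    labels = (record > 0.5).astype(int)  # start from the ML classification
    rows, cols = record.shape
    for _ in range(n_sweeps):
        for i in range(rows):
            for j in range(cols):
                nbrs = [labels[i2, j2]
                        for i2, j2 in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= i2 < rows and 0 <= j2 < cols]
                best, best_energy = labels[i, j], np.inf
                for c in (0, 1):
                    # Energy = negative log posterior up to a constant:
                    # Gaussian data term plus Ising disagreement penalty.
                    data_term = (record[i, j] - c) ** 2 / (2 * sigma ** 2)
                    prior_term = beta * sum(1 for v in nbrs if v != c)
                    if data_term + prior_term < best_energy:
                        best, best_energy = c, data_term + prior_term
                labels[i, j] = best
    return labels

# Toy usage: noisy observation of a half-black, half-white scene.
rng = np.random.default_rng(2)
truth = np.zeros((32, 32), dtype=int); truth[:, 16:] = 1
record = truth + rng.normal(0, 0.5, truth.shape)
restored = icm_binary(record)
print((restored == truth).mean())  # fraction of correctly restored pixels
```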