
Showing papers by "David A. Landgrebe published in 1995"


01 Jan 1995
TL;DR: Algorithms are developed to analyze high dimensional multispectral data that enable the data analyst to classify high dimensional data more accurately and efficiently than is possible with standard pattern recognition techniques.
Abstract: New sensor technology has made it possible to gather multispectral images in hundreds and potentially thousands of spectral bands, whereas current sensors typically gather images in 12 or fewer bands. This tremendous increase in spectral resolution should provide a wealth of detailed information, but the techniques used to analyze lower dimensional data often perform poorly on high dimensional data. In this thesis, algorithms are developed to analyze high dimensional multispectral data. In particular a method for gathering training samples is demonstrated, the effect of atmospheric adjustments on classification accuracy is explored, a new method for estimating the covariance matrix of a class is presented, and a new method for estimating the number of clusters in a data cloud is developed. These techniques enable the data analyst to classify high dimensional data more accurately and efficiently than is possible with standard pattern recognition techniques.

23 citations


Proceedings ArticleDOI
22 Oct 1995
TL;DR: Using a technique called projection pursuit, a pre-processing dimensional reduction method has been developed based on the optimization of a projection index and a method to estimate an initial value that can more quickly lead to the global maximum is presented.
Abstract: Supervised classification techniques use labeled samples to train the classifier. Often the number of such samples is limited, thus limiting the precision with which class characteristics can be estimated. As the number of spectral bands becomes large, the limitation on performance imposed by the limited number of training samples can become severe. Such consequences suggest the value of reducing the dimensionality by a pre-processing method that takes advantage of the asymptotic normality of projected data. Using a technique called projection pursuit, a pre-processing dimensional reduction method has been developed based on the optimization of a projection index. A method to estimate an initial value that can more quickly lead to the global maximum is presented for projection pursuit using the Bhattacharyya distance as the projection index.
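The projected Bhattacharyya index can be sketched numerically. The following is a minimal illustration, not the paper's algorithm: it optimizes a single 1-D projection of two Gaussian classes by maximizing the Bhattacharyya distance of the projected statistics, using the class-mean difference as a cheap initial value (the paper's initialization scheme is more involved). All class statistics here are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def bhattacharyya_1d(a, mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian classes after
    projecting onto the direction a (normalized internally)."""
    a = a / np.linalg.norm(a)
    m1, m2 = a @ mu1, a @ mu2
    s1, s2 = a @ cov1 @ a, a @ cov2 @ a
    return (0.25 * (m1 - m2) ** 2 / (s1 + s2)
            + 0.5 * np.log((s1 + s2) / (2.0 * np.sqrt(s1 * s2))))

def pursue_projection(mu1, cov1, mu2, cov2):
    """Find a 1-D projection maximizing the Bhattacharyya index,
    initialized at the class-mean difference."""
    a0 = mu1 - mu2
    res = minimize(lambda a: -bhattacharyya_1d(a, mu1, cov1, mu2, cov2), a0)
    return res.x / np.linalg.norm(res.x)

# synthetic 10-band statistics for two classes
rng = np.random.default_rng(0)
d = 10
mu1, mu2 = np.zeros(d), 0.5 * np.ones(d)
A = rng.standard_normal((d, d))
cov1 = A @ A.T / d + np.eye(d)
cov2 = 1.5 * cov1
a_star = pursue_projection(mu1, cov1, mu2, cov2)
```

Because the index is invariant to the scale of `a`, the optimizer works on the raw vector while the index normalizes internally; the returned direction is unit length.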

21 citations


Proceedings ArticleDOI
10 Jul 1995
TL;DR: Two parametric projection pursuit algorithms have been presented, and an iterative procedure of the sequential approach that mitigates the computation time problem is shown.
Abstract: Supervised classification techniques use labeled samples in order to train the classifier. Usually the number of such samples is limited, and as the number of bands available increases, this limitation becomes more severe and can come to dominate the added value of having the additional bands available. This suggests the need to reduce the dimensionality via a preprocessing method; such reduction should enable feature extraction parameters to be estimated more accurately. Using a technique referred to as projection pursuit, two parametric projection pursuit algorithms have been developed: parallel parametric projection pursuit and sequential parametric projection pursuit. In the present paper both methods are presented, and an iterative procedure for the sequential approach that mitigates the computation time problem is shown.
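A rough sense of the "sequential" idea can be conveyed with a sketch. The following is a hypothetical illustration, not the algorithm from the paper: it partitions adjacent bands into fixed-width groups and, one group at a time, computes a single linear combination of that group's bands, substituting a closed-form Fisher-type two-class criterion for the paper's projection index. The grouping width and the criterion are assumptions made for brevity.

```python
import numpy as np

def sequential_group_projection(X1, X2, group_size):
    """Reduce dimensionality by projecting each group of adjacent
    bands onto one feature, processing groups sequentially.
    X1, X2: (samples x bands) arrays for two training classes."""
    d = X1.shape[1]
    feats1, feats2, weights = [], [], []
    for start in range(0, d, group_size):
        sl = slice(start, min(start + group_size, d))
        A1, A2 = X1[:, sl], X2[:, sl]
        mu_diff = A1.mean(axis=0) - A2.mean(axis=0)
        # pooled within-class scatter of this group's bands
        S = (np.atleast_2d(np.cov(A1, rowvar=False))
             + np.atleast_2d(np.cov(A2, rowvar=False)))
        # closed-form Fisher direction (regularized for stability)
        a = np.linalg.solve(S + 1e-8 * np.eye(S.shape[0]), mu_diff)
        a /= np.linalg.norm(a)
        weights.append(a)
        feats1.append(A1 @ a)
        feats2.append(A2 @ a)
    return np.column_stack(feats1), np.column_stack(feats2), weights

# synthetic 12-band data, two classes; groups of 4 bands -> 3 features
rng = np.random.default_rng(1)
X1 = rng.standard_normal((40, 12))
X2 = rng.standard_normal((40, 12)) + 0.8
F1, F2, w = sequential_group_projection(X1, X2, group_size=4)
```

Each group contributes one feature, so 12 bands reduce to 3 features; parameter estimation in the reduced space then needs far fewer training samples.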

14 citations


Proceedings ArticleDOI
10 Jul 1995
TL;DR: In this article, a new covariance estimator is presented that selects an appropriate mixture of the sample covariance and the common covariance estimates, in the sense that it maximizes the average likelihood of training samples not used in the estimates.
Abstract: When classifying data with the Gaussian maximum likelihood classifier, the mean vector and covariance matrix of each class usually are not known and must be estimated from training samples. For p-dimensional data, the sample covariance estimate is singular, and therefore unusable, if fewer than p+1 training samples from each class are available, and it is a poor estimate of the true covariance unless many more than p+1 samples are available. Since inaccurate estimates of the covariance matrix lead to lowered classification accuracy and labeling training samples can be difficult and expensive in remote sensing applications, having too few training samples is a major impediment in using the Gaussian maximum likelihood classifier with high dimensional remote sensing data. In the paper, a new covariance estimator is presented that selects an appropriate mixture of the sample covariance and the common covariance estimates. The mixture deemed appropriate is the one that provides the best fit to the training samples in the sense that it maximizes the average likelihood of training samples not used in the estimates. When the number of training samples is limited or when the covariance matrices of the classes are similar, this estimator tends to select an estimate close to the common covariance, otherwise it favors the sample covariance estimate. Since it is non-singular whenever the common covariance estimate is non-singular, the new estimator can be used even when some of the sample covariance matrices are singular.
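The selection principle can be sketched in a few lines. The following is a simplified illustration of the idea, not the estimator from the paper: it considers only convex mixtures of each class's sample covariance and the common covariance over a fixed grid of mixing weights, and scores each candidate by the leave-one-out log-likelihood of that class's training samples.

```python
import numpy as np

def gauss_logpdf(x, mu, cov):
    """Log density of a multivariate Gaussian; -inf if cov is singular."""
    sign, logdet = np.linalg.slogdet(cov)
    if sign <= 0:
        return -np.inf
    diff = x - mu
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

def mixture_covariances(classes, alphas=np.linspace(0.0, 1.0, 11)):
    """For each class, pick the mixture of its sample covariance and the
    common covariance that maximizes the leave-one-out log-likelihood
    of that class's training samples."""
    sample_covs = {c: np.cov(X, rowvar=False) for c, X in classes.items()}
    common = np.mean(list(sample_covs.values()), axis=0)
    chosen = {}
    for c, X in classes.items():
        best_ll, best = -np.inf, None
        for alpha in alphas:
            ll = 0.0
            for i in range(len(X)):
                rest = np.delete(X, i, axis=0)  # hold out sample i
                mix = ((1 - alpha) * np.cov(rest, rowvar=False)
                       + alpha * common)
                ll += gauss_logpdf(X[i], rest.mean(axis=0), mix)
            if ll > best_ll:
                best_ll, best = ll, alpha
        chosen[c] = (best, (1 - best) * sample_covs[c] + best * common)
    return chosen

# 5 bands but only 4 samples per class: each sample covariance is singular
rng = np.random.default_rng(2)
classes = {0: rng.standard_normal((4, 5)),
           1: rng.standard_normal((4, 5)) + 1.0}
result = mixture_covariances(classes)
```

With fewer than p+1 samples per class the pure sample covariance is singular and scores negative infinity, so the procedure is forced toward the common covariance, mirroring the behavior described in the abstract.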

3 citations


01 Jan 1995
TL;DR: The objective of this thesis is to address the issues and provide some solutions to the problem of inference and decision making with imprecise or partially known priors and sampling distributions, and to prove Bayes' theorem for 2-Choquet capacity classes.
Abstract: Bayesian inference and decision making requires elicitation of prior probabilities and sampling distributions. In many applications, such as exploratory data analysis, however, it may not be possible to construct the prior probabilities or the sampling distributions precisely. The objective of this thesis is to address the issues and provide some solutions to the problem of inference and decision making with imprecise or partially known priors and sampling distributions. More specifically, we will address the following three interrelated problems: (1) how to describe imprecise priors and sampling distributions, (2) how to proceed from approximate priors and sampling distributions to approximate posteriors and posterior-related quantities, and (3) how to make decisions with imprecise posterior probabilities. When the priors and/or sampling distributions are not known precisely, a natural approach is to consider a class or a neighborhood of priors, and classes or collections of sampling distributions. This approach leads naturally to consideration of upper and lower probabilities, or interval-valued probabilities. We examine the various approaches to representation of imprecision in priors and sampling distributions. We observe that many useful classes, either for the priors or for the sampling distributions, are conveniently described in terms of 2-Choquet capacities. We prove Bayes' theorem (or conditioning) for the 2-Choquet capacity classes. Since the classes of imprecise probabilities described by the Dempster-Shafer theory are ∞-Choquet capacities (and therefore 2-Choquet capacities), our result provides another proof of the inconsistency of Dempster's rule. We address the problem of combining various sources of information and the requirements for a reasonable combination rule. Here, we also examine the issue of independence of the sources of information, which is crucial when combining them.
We consider three methods to combine imprecise information. In the first method, we utilize the extreme-point representations of the imprecise priors and/or sampling distributions to obtain the extreme points of the class of posteriors. This method is usually computationally very demanding. Therefore, we propose a simple iterative procedure that allows direct computation not only of the posterior probabilities, but also of many useful posterior-related quantities, such as the posterior mean, the predictive probability that the next observation lies in a given set, and the posterior expected loss of a decision or action. Finally, by considering the joint space of observations and parameters, we show that if the class of joint probabilities is a 2-Choquet capacity class, we can apply the Bayes' theorem established earlier to obtain the posterior probabilities; this last approach is computationally the most efficient. We then address the problem of decision making with imprecise posteriors obtained from imprecise priors and sampling distributions. Even though allowing imprecision is a natural way to represent lack of information, it sometimes leads to complications in decision making, and even to indeterminacies. We suggest a few ad hoc rules to resolve the remaining indeterminacies; the ultimate solution in such cases is simply to gather more data.
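For reference, the 2-monotonicity property at the heart of this abstract can be stated compactly. These are the standard definitions from the imprecise-probability literature; the thesis' exact formulation may differ.

```latex
% A lower probability \underline{P} is a 2-Choquet capacity (2-monotone) if
\underline{P}(A \cup B) + \underline{P}(A \cap B)
  \;\ge\; \underline{P}(A) + \underline{P}(B),
% with conjugate upper probability
\overline{P}(A) = 1 - \underline{P}(A^{c}).
% For 2-monotone lower probabilities, the lower envelope of the
% conditional probabilities has the closed form
\underline{P}(A \mid B) =
  \frac{\underline{P}(A \cap B)}
       {\underline{P}(A \cap B) + \overline{P}(A^{c} \cap B)},
% provided the denominator is positive.
```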

3 citations


Proceedings ArticleDOI
10 Jul 1995
TL;DR: In the discussion which follows, some of the aspects significant to the further development of this technology are discussed.
Abstract: The study of multispectral reflectance characteristics for use in remotely identifying and measuring Earth surface cover types began in earnest in the early 1960s. The analysis methods which are appropriate have been controlled by the available sensor technology over the time since then. In recent years, sensor developments have made possible multispectral data of much greater quality and complexity. The discussion which follows considers some of the aspects significant to the further development of this technology.

2 citations


Proceedings ArticleDOI
12 Jun 1995
TL;DR: In this article, a focused effort was begun a few years ago to advance the technology of multispectral analysis to a more effective level, and especially to prepare suitable methods to yield full information extraction capabilities from the new hyperspectral data.
Abstract: Analysis methods for multispectral data have been under study for at least three decades. In spite of that fact, the state of the technology is still far from satisfactory for conventional multispectral data, and the advent of hyperspectral sensor systems raises the challenge substantially. Thus a focused effort was begun a few years ago to advance the technology of multispectral analysis to a more effective level, and especially to prepare suitable methods to yield full information extraction capabilities from the new hyperspectral data. This paper outlines some of what has been learned about this problem from the research effort.

2 citations