
Behzad M. Shahshahani

Researcher at Purdue University

Publications -  7
Citations -  1118

Behzad M. Shahshahani is an academic researcher from Purdue University. The author has contributed to research in topics: unsupervised learning and statistical modeling. The author has an h-index of 5 and has co-authored 7 publications receiving 1076 citations.

Papers
Journal ArticleDOI

The effect of unlabeled samples in reducing the small sample size problem and mitigating the Hughes phenomenon

TL;DR: By using additional unlabeled samples that are available at no extra cost, classifier performance may be improved, more representative parameter estimates can be obtained, and the Hughes phenomenon can thereby be mitigated.

The Effect of Unlabeled Samples in Reducing the Small Sample Size Problem and Mitigating the Hughes Phenomenon

TL;DR: This paper studies the use of unlabeled samples in reducing the problem of small training sample size, which can severely affect the recognition rate of classifiers when the dimensionality of the multispectral data is high.

Classification of multispectral data by joint supervised-unsupervised learning

TL;DR: It is shown that by adding the unlabeled samples to the classifier design process, better estimates for the discriminant functions can be obtained, and the peaking phenomenon observed in the performance-versus-dimensionality curves can be mitigated.
Proceedings ArticleDOI

Using Partially Labeled Data For Normal Mixture Identification With Application To Class Definition

TL;DR: This paper addresses the problem of estimating the parameters of a normal mixture density when, in addition to unlabeled samples, sets of partially labeled samples are available.
Proceedings ArticleDOI

Asymptotic improvement of supervised learning by utilizing additional unlabeled samples: normal mixture density case

TL;DR: It is shown that, under a normal mixture density assumption for the probability density function of the feature space, combined supervised-unsupervised learning is always superior to supervised learning alone in achieving better estimates.
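The recurring idea in the papers above is combined supervised-unsupervised estimation: labeled samples fix the class identities while unlabeled samples sharpen the mixture-parameter estimates via EM. The following is a minimal sketch of that idea for a two-component 1-D normal mixture; the function name and setup are illustrative only and do not come from the papers themselves.

```python
import numpy as np

def semi_supervised_em(x_lab, y_lab, x_unl, n_iter=100):
    """Estimate a 2-component 1-D normal mixture from labeled and
    unlabeled samples jointly (semi-supervised EM sketch)."""
    # Initialize from the labeled samples alone (the supervised estimates).
    mu = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
    var = np.array([x_lab[y_lab == k].var() + 1e-6 for k in (0, 1)])
    pi = np.array([np.mean(y_lab == k) for k in (0, 1)])
    for _ in range(n_iter):
        # E-step: posterior class probabilities for the unlabeled samples.
        dens = (np.exp(-0.5 * (x_unl[:, None] - mu) ** 2 / var)
                / np.sqrt(2.0 * np.pi * var))
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # Labeled samples keep hard (0/1) responsibilities.
        hard = np.eye(2)[y_lab]
        R = np.vstack([hard, resp])
        X = np.concatenate([x_lab, x_unl])
        # M-step: weighted parameter updates over all samples together.
        Nk = R.sum(axis=0)
        mu = (R * X[:, None]).sum(axis=0) / Nk
        var = (R * (X[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-6
        pi = Nk / Nk.sum()
    return mu, var, pi

# Usage: a few labeled samples per class plus many unlabeled ones.
rng = np.random.default_rng(0)
x0 = rng.normal(-2.0, 1.0, 200)
x1 = rng.normal(2.0, 1.0, 200)
x_lab = np.concatenate([x0[:10], x1[:10]])
y_lab = np.array([0] * 10 + [1] * 10)
x_unl = np.concatenate([x0[10:], x1[10:]])
mu, var, pi = semi_supervised_em(x_lab, y_lab, x_unl)
```

With only 10 labeled samples per class, the labeled-only mean estimates are noisy; folding in the 380 unlabeled samples pulls `mu` close to the true component means of -2 and 2, which is the effect the papers quantify.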