Journal ArticleDOI

Probabilistic neural networks

01 Jan 1990-Neural Networks (Elsevier Science Ltd.)-Vol. 3, Iss: 1, pp 109-118
TL;DR: A probabilistic neural network is formed that can compute nonlinear decision boundaries approaching the Bayes optimal, and a four-layer neural network of the type proposed can map any input pattern to any number of classifications.
About: This article is published in Neural Networks. The article was published on 1990-01-01. It has received 3,772 citations to date. The article focuses on the topics: Probabilistic neural network & Activation function.
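
In essence, the PNN estimates a Parzen-window density for each class and assigns a pattern to the class with the largest prior-weighted density, which is what lets the decision boundary approach the Bayes rule as training data grow. Below is a minimal sketch of that decision rule, assuming a Gaussian kernel with smoothing parameter sigma and a NumPy implementation; it illustrates the idea rather than reproducing the paper's exact four-layer network.

import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5, priors=None):
    """Parzen-window classifier in the spirit of a PNN: one Gaussian
    'pattern unit' per training example, summed per class, then argmax."""
    classes = np.unique(y_train)
    if priors is None:
        priors = {c: float(np.mean(y_train == c)) for c in classes}
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]                                  # pattern units of class c
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)   # squared distances to stored patterns
        dens = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)        # class-conditional density estimate
        scores.append(priors[c] * dens)
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]

Training here is a single pass that simply stores the examples; only sigma needs tuning, and classification cost grows with the number of stored patterns.
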
Citations
Book
01 Jan 1995
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Abstract: From the Publisher: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modelling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.

19,056 citations


Cites methods from "Probabilistic neural networks"

  • ...This is known as the Nadaraya-Watson estimator (Nadaraya, 1964; Watson, 1964), and has been re-discovered relatively recently in the context of neural networks (Specht, 1990; Schiøler and Hartmann, 1992)....

    [...]

Journal ArticleDOI
09 Jun 2005-Nature
TL;DR: A new, bead-based flow cytometric miRNA expression profiling method is used to present a systematic expression analysis of 217 mammalian miRNAs from 334 samples, including multiple human cancers, and finds the miRNA profiles are surprisingly informative, reflecting the developmental lineage and differentiation state of the tumours.
Abstract: Recent work has revealed the existence of a class of small non-coding RNA species, known as microRNAs (miRNAs), which have critical functions across various biological processes. Here we use a new, bead-based flow cytometric miRNA expression profiling method to present a systematic expression analysis of 217 mammalian miRNAs from 334 samples, including multiple human cancers. The miRNA profiles are surprisingly informative, reflecting the developmental lineage and differentiation state of the tumours. We observe a general downregulation of miRNAs in tumours compared with normal tissues. Furthermore, we were able to successfully classify poorly differentiated tumours using miRNA expression profiles, whereas messenger RNA profiles were highly inaccurate when applied to the same samples. These findings highlight the potential of miRNA profiling in cancer diagnosis.

9,470 citations

Book
01 Jan 1996
TL;DR: Professor Ripley brings together two crucial ideas in pattern recognition: statistical methods and machine learning via neural networks, in this self-contained account.
Abstract: From the Publisher: Pattern recognition has long been studied in relation to many different (and mainly unrelated) applications, such as remote sensing, computer vision, space research, and medical imaging. In this book Professor Ripley brings together two crucial ideas in pattern recognition: statistical methods and machine learning via neural networks. Unifying principles are brought to the fore, and the author gives an overview of the state of the subject. Many examples are included to illustrate real problems in pattern recognition and how to overcome them. This is a self-contained account, ideal both as an introduction for non-specialist readers and as a handbook for the more expert reader.

5,632 citations

Journal ArticleDOI
TL;DR: The general regression neural network (GRNN) is a one-pass learning algorithm with a highly parallel structure that provides smooth transitions from one observed value to another.
Abstract: A memory-based network that provides estimates of continuous variables and converges to the underlying (linear or nonlinear) regression surface is described. The general regression neural network (GRNN) is a one-pass learning algorithm with a highly parallel structure. It is shown that, even with sparse data in a multidimensional measurement space, the algorithm provides smooth transitions from one observed value to another. The algorithmic form can be used for any regression problem in which an assumption of linearity is not justified.
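
The GRNN output is a normalized, kernel-weighted average of the stored training targets (a Nadaraya-Watson style estimator), which is why a single pass over the data suffices and predictions vary smoothly between observed values. A minimal sketch under an assumed Gaussian kernel, for illustration rather than as the article's exact formulation:

import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma=0.5):
    """General regression: each prediction is the kernel-weighted average
    of stored targets, so nearby observations dominate the estimate."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)  # squared distances to stored samples
    w = np.exp(-d2 / (2.0 * sigma ** 2))                            # kernel weights
    return (w @ y_train) / (w.sum(axis=1) + 1e-12)                  # smooth interpolation of targets
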

4,091 citations

MonographDOI
02 Jul 2004
TL;DR: A monograph, Combining Pattern Classifiers: Methods and Algorithms, covering methods and algorithms for building and combining ensembles of pattern classifiers.

2,667 citations


Cites methods from "Probabilistic neural networks"

  • ...The real value of the Parzen classifier lies in the fact that it is the statistical counterpart of several important classification methods such as radial basis function networks [41,42], the probabilistic neural network (PNN) [43], and a number of fuzzy classifiers [44–46]....

    [...]

References
Journal ArticleDOI
TL;DR: The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points; in this sense it may be said that half the classification information in an infinite sample set is contained in the nearest neighbor.
Abstract: The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points. This rule is independent of the underlying joint distribution on the sample points and their classifications, and hence the probability of error R of such a rule must be at least as great as the Bayes probability of error R^{\ast}, the minimum probability of error over all decision rules taking the underlying probability structure into account. However, in a large sample analysis, we will show in the M-category case that R^{\ast} \leq R \leq R^{\ast}(2 - MR^{\ast}/(M-1)), where these bounds are the tightest possible, for all suitably smooth underlying distributions. Thus for any number of categories, the probability of error of the nearest neighbor rule is bounded above by twice the Bayes probability of error. In this sense, it may be said that half the classification information in an infinite sample set is contained in the nearest neighbor.
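
Written out, the large-sample bound and its two-category specialization are:

R^{\ast} \leq R \leq R^{\ast}\left(2 - \frac{M R^{\ast}}{M - 1}\right), \qquad \text{so for } M = 2:\quad R \leq 2R^{\ast}(1 - R^{\ast}) \leq 2R^{\ast}.

For example, a Bayes error of R^{\ast} = 0.05 with M = 2 caps the nearest neighbor error at 2(0.05)(0.95) = 0.095.
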

12,243 citations


"Probabilistic neural networks" refers methods in this paper

  • ...The nearest neighbor decision rule has been investigated in detail by Cover and Hart (1967). In general, neither limiting case provides optimal separation of the two distributions. A degree of averaging of nearest neighbors, dictated by the density of training samples, provides better generalization than basing the decision on a single nearest neighbor. The network proposed is similar in effect to the k-nearest neighbor classifier. Specht (1966) contains an involved discussion of how one should choose a value of the smoothing parameter, σ, as a function of the dimension of the problem, p, and the number of training patterns, n. However, it has been found that in practical problems it is not difficult to find a good value of σ, and that the misclassification rate does not change dramatically with small changes in σ. Specht (1967b) describes an experiment in which electrocardiograms were classified as normal or abnormal using the two-category classification of eqns (1) and (12)....

    [...]

Journal ArticleDOI
TL;DR: The problems of estimating a probability density function and of determining the mode of the density are discussed; only estimates which are consistent and asymptotically normal are constructed.
Abstract: Given a sequence of independent, identically distributed random variables with a common probability density function, the problems of estimating the probability density function and of determining its mode are discussed. Only estimates which are consistent and asymptotically normal are constructed. (Author)
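
The construction referenced here is the kernel (Parzen-window) density estimate from which the PNN builds its class-conditional densities: an average of identical windows centred on the observed samples, consistent wherever the true density is continuous. A minimal one-dimensional sketch, with the Gaussian window and the bandwidth h as illustrative assumptions:

import numpy as np

def parzen_density(samples, x, h=0.3):
    """Kernel density estimate: average of Gaussian windows of width h
    centred on the observed samples, evaluated at the points x."""
    u = (x[:, None] - samples[None, :]) / h                  # scaled distances to each sample
    return np.exp(-0.5 * u ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

Shrinking h as the sample size grows (slowly enough that each window still covers many samples) gives the consistency and asymptotic normality discussed in the abstract.
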

10,114 citations


"Probabilistic neural networks" refers background in this paper

  • ...Alternate estimators suggested by Cacoullos (1966) and Parzen (1962) are given in Table 1....

    [...]

  • ...Parzen (1962) showed how one may construct a family of estimates of f(X), eqn (4), which is consistent at all points X at which the PDF....

    [...]

  • ...Further application of Cacoullos (1966), Theorem 4.1, to other univariate kernels suggested by Parzen (1962) yields the following multivariate estimators (which are products of univariate kernels), eqn (18)....

    [...]

  • ...In his classic paper, Parzen (1962) showed that a class of PDF estimators asymptotically approaches the underlying parent density provided only that it is continuous....

    [...]

Book
01 Jan 1963
TL;DR: A statistics text covering probability, random variables and distributions, sampling, parametric point and interval estimation, hypothesis testing, linear models, and nonparametric methods, including a tabular summary of parametric families of distributions.
Abstract: Contents: 1. Probability; 2. Random variables, distribution functions, and expectation; 3. Special parametric families of univariate distributions; 4. Joint and conditional distributions, stochastic independence, more expectation; 5. Distributions of functions of random variables; 6. Sampling and sampling distributions; 7. Parametric point estimation; 8. Parametric interval estimation; 9. Tests of hypotheses; 10. Linear models; 11. Nonparametric methods. Appendix A: Mathematical addendum; Appendix B: Tabular summary of parametric families of distributions; Appendix C: References and related reading; Appendix D: Tables.

4,571 citations

Journal ArticleDOI
TL;DR: A statistics text covering probability, random variables and distributions, sampling, parametric point and interval estimation, hypothesis testing, linear models, and nonparametric methods, including a tabular summary of parametric families of distributions.
Abstract: Contents: 1. Probability; 2. Random variables, distribution functions, and expectation; 3. Special parametric families of univariate distributions; 4. Joint and conditional distributions, stochastic independence, more expectation; 5. Distributions of functions of random variables; 6. Sampling and sampling distributions; 7. Parametric point estimation; 8. Parametric interval estimation; 9. Tests of hypotheses; 10. Linear models; 11. Nonparametric methods. Appendix A: Mathematical addendum; Appendix B: Tabular summary of parametric families of distributions; Appendix C: References and related reading; Appendix D: Tables.

3,211 citations

Journal ArticleDOI

684 citations


"Probabilistic neural networks" refers background in this paper

  • ...Cacoullos (1966) has also extended Parzen's results to cover the multivariate case. Theorem 4.1 in Cacoullos (1966) indicates how the Parzen results can be extended to estimates in the special case that the multivariate kernel is a product of univariate kernels....

    [...]

  • ...Alternate estimators suggested by Cacoullos (1966) and Parzen (1962) are given in Table 1....

    [...]

  • ...Further application of Cacoullos (1966), Theorem 4.1, to other univariate kernels suggested by Parzen (1962) yields the following multivariate estimators (which are products of univariate kernels), eqn (18)....

    [...]