Journal ArticleDOI

Pattern Recognition and Machine Learning

01 Aug 2007-Technometrics (Taylor & Francis)-Vol. 49, Iss: 3, pp 366-366
TL;DR: This book covers a broad range of topics for regular factorial designs and presents all of the material in very mathematical fashion and will surely become an invaluable resource for researchers and graduate students doing research in the design of factorial experiments.
Abstract: (2007). Pattern Recognition and Machine Learning. Technometrics: Vol. 49, No. 3, pp. 366-366.
Citations
Journal ArticleDOI
TL;DR: A novel fuzzy support vector machine (FSVM) and a variety of artificial neural networks (ANNs) are applied in this paper and the classification results reveal that FSVM significantly outperforms a number of ANN algorithms.
Abstract: Partial discharge (PD) source classification aims to identify the types of defects causing discharges in high voltage (HV) equipment. This paper presents a comprehensive study of applying pattern recognition techniques to automatic PD source classification. Three challenging issues are investigated in this paper. The first issue is feature extraction for obtaining representative attributes from the original PD measurement data. Several approaches, including stochastic neighbour embedding (SNE), principal component analysis (PCA), kernel principal component analysis (KPCA), discrete wavelet transform (DWT), and conventional statistical operators, are adopted for feature extraction. The second issue is the pattern recognition algorithms for identifying various types of PD sources. A novel fuzzy support vector machine (FSVM) and a variety of artificial neural networks (ANNs) are applied in the paper. The third issue is the identification of multiple PD sources, which may occur in HV equipment simultaneously. Two approaches are proposed to address this issue. To evaluate the performance of the various algorithms, extensive laboratory experiments on a number of artificial PD models are conducted. The classification results reveal that FSVM significantly outperforms a number of ANN algorithms. Practical PD source classification for HV equipment is a considerably more complicated problem; therefore, this paper also discusses issues in the meaningful application of the proposed pattern recognition techniques to practical PD source classification for HV equipment.
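The KPCA step named in this abstract can be sketched generically. The snippet below is a minimal RBF-kernel PCA in NumPy, illustrating the nonlinear feature-extraction idea on arbitrary data; it is an assumption-laden sketch (the `gamma` value and the input data are placeholders), not the paper's implementation, which operates on PD measurement data.

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Project X onto its leading nonlinear principal components
    using an RBF kernel (a generic KPCA sketch, not the paper's code)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    K = np.exp(-gamma * d2)                          # RBF kernel matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one       # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)                  # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]      # keep the largest
    return vecs[:, idx] * np.sqrt(vals[idx])         # projected features
```

Each row of the result is a low-dimensional nonlinear feature vector of the corresponding input sample, which could then be fed to a downstream classifier such as an SVM.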

119 citations


Cites background or methods from "Pattern Recognition and Machine Lea..."

  • ...For the detailed derivations of these algorithms, readers may refer to [24, 26, 29]....


  • ...KPCA is one of the nonlinear feature extraction methods [29]....


  • ...In a Bayesian classifier, the probability that a particular data point belongs to a given class is computed as in [24, 29]...


  • ...And the computation of output layer weights is formulated as a quadratic optimization problem [29]....


  • ...RBF network aims to derive an input-output mapping using M radial basis functions [29]...

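The last two excerpts describe a standard RBF-network construction: build a design matrix of M radial basis functions, then obtain the output-layer weights by solving a quadratic (regularized least-squares) problem. The sketch below illustrates that formulation under assumed Gaussian bases; the centres, width, and regularization strength are illustrative choices, not values from the cited work.

```python
import numpy as np

def rbf_design_matrix(X, centres, width):
    # Gaussian bases phi_m(x) = exp(-||x - c_m||^2 / (2 * width^2))
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width**2))

def fit_output_weights(X, y, centres, width, lam=1e-3):
    """Output-layer weights solve the quadratic problem
    min_w ||Phi w - y||^2 + lam ||w||^2, via its closed form."""
    Phi = rbf_design_matrix(X, centres, width)
    M = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ y)
```

Because the objective is quadratic in the weights, the solution is unique whenever `lam > 0`, which is the point made in the excerpt.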

Journal ArticleDOI
TL;DR: A Bayesian approach that regularizes the RNN-LM and applies it to continuous speech recognition by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior.
Abstract: A language model (LM) computes the probability of a word sequence and provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in the continuous space. However, training an RNN-LM is an ill-posed problem because of the large number of parameters arising from a large dictionary and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it to continuous speech recognition. We aim to penalize an overly complex RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter through maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance when applying the rapid BRNN-LM under different conditions.
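The "regularized cross-entropy error function" described here is cross-entropy plus a penalty contributed by a zero-mean Gaussian prior, and minimizing it gives the maximum a posteriori (MAP) parameters. The toy below illustrates that connection on a simple logistic model rather than an RNN-LM; the data, learning rate, and prior precision `alpha` are all illustrative assumptions.

```python
import numpy as np

def regularized_xent(w, X, y, alpha):
    """Cross-entropy of a logistic model plus the Gaussian-prior term
    (alpha/2)||w||^2; minimizing this is MAP estimation."""
    p = np.clip(1.0 / (1.0 + np.exp(-X @ w)), 1e-12, 1 - 1e-12)
    xent = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return xent + 0.5 * alpha * w @ w

def map_estimate(X, y, alpha, lr=0.01, steps=1000):
    # Plain gradient descent on the regularized objective
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) + alpha * w   # data term + prior term
        w -= lr * grad
    return w
```

A larger `alpha` (a tighter Gaussian prior) shrinks the learned weights toward zero, which is the sparsifying, complexity-penalizing effect the abstract describes for the BRNN-LM.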

119 citations

Journal ArticleDOI
TL;DR: This paper proposes a method for binary classification that not only produces a prediction of the class of a query instance but also quantifies the two sources of uncertainty: aleatoric uncertainty and epistemic uncertainty.
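One common way to separate the two sources of uncertainty for a query instance is the entropy decomposition over an ensemble of predictors: total predictive uncertainty is the entropy of the averaged prediction, the aleatoric part is the average entropy of the individual predictions, and the epistemic part is the difference. This is a generic sketch, not necessarily the construction used in the cited paper.

```python
import numpy as np

def entropy(p):
    # Binary entropy in nats, numerically safe near 0 and 1
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def uncertainty_decomposition(member_probs):
    """Split predictive uncertainty for one query instance, given each
    ensemble member's positive-class probability."""
    member_probs = np.asarray(member_probs, dtype=float)
    total = entropy(member_probs.mean())       # entropy of the mean
    aleatoric = entropy(member_probs).mean()   # mean of the entropies
    return total, aleatoric, total - aleatoric # epistemic = the gap
```

Members that agree (e.g. all near 0.9) yield almost no epistemic uncertainty, while members that disagree (e.g. 0.1 vs. 0.9) yield a large epistemic term even though each member is individually confident.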

119 citations


Cites methods from "Pattern Recognition and Machine Lea..."

  • ...Indeed, the classification problem is often formalized within the framework of Bayesian decision theory [4]....


Journal ArticleDOI
TL;DR: Findings indicate differential contributions of MTL subregions to event representation via a distributed code along the anterior-posterior axis of MTL that depends on the nature of event content.
Abstract: Current theories of medial temporal lobe (MTL) function focus on event content as an important organizational principle that differentiates MTL subregions. Perirhinal and parahippocampal cortices may play content-specific roles in memory, whereas hippocampal processing is alternately hypothesized to be content specific or content general. Despite anatomical evidence for content-specific MTL pathways, empirical data for content-based MTL subregional dissociations are mixed. Here, we combined functional magnetic resonance imaging with multiple statistical approaches to characterize MTL subregional responses to different classes of novel event content (faces, scenes, spoken words, sounds, visual words). Univariate analyses revealed that responses to novel faces and scenes were distributed across the anterior-posterior axis of MTL cortex, with face responses distributed more anteriorly than scene responses. Moreover, multivariate pattern analyses of perirhinal and parahippocampal data revealed spatially organized representational codes for multiple content classes, including nonpreferred visual and auditory stimuli. In contrast, anterior hippocampal responses were content general, with less accurate overall pattern classification relative to MTL cortex. Finally, posterior hippocampal activation patterns consistently discriminated scenes more accurately than other forms of content. Collectively, our findings indicate differential contributions of MTL subregions to event representation via a distributed code along the anterior-posterior axis of MTL that depends on the nature of event content.
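The "multivariate pattern analyses" referred to here classify content classes from distributed voxel activation patterns. A minimal, hypothetical sketch of one classic variant is a correlation-based template classifier: average the training patterns per class, then assign a new pattern to the class whose template it correlates with best. The synthetic "voxel" data below are placeholders, not fMRI data.

```python
import numpy as np

def train_templates(patterns, labels):
    """Average voxel patterns per content class (faces, scenes, ...)."""
    return {c: patterns[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(pattern, templates):
    # Assign the class whose mean template correlates best with the pattern
    corr = {c: np.corrcoef(pattern, t)[0, 1] for c, t in templates.items()}
    return max(corr, key=corr.get)
```

Classification accuracy above chance on held-out patterns is then taken as evidence that the region carries a distributed code for that content class, which is the logic behind the subregional comparisons in the abstract.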

119 citations

Journal ArticleDOI
TL;DR: A general class of multivariate priors for group-sparse modeling within the Bayesian framework is presented, showing the flexibility of this modeling by considering several extensions such as multiple measurements, within-group correlations, and overlapping groups.
Abstract: In this paper, we present a general class of multivariate priors for group-sparse modeling within the Bayesian framework. We show that special cases of this class correspond to multivariate versions of several classical priors used for sparse modeling. Hence, this general prior formulation is helpful in analyzing the properties of different modeling approaches and their connections. We derive the estimation procedures with these priors using variational inference for fully Bayesian estimation. In addition, we discuss the differences between the proposed inference and deterministic inference approaches with these priors. Finally, we show the flexibility of this modeling by considering several extensions such as multiple measurements, within-group correlations, and overlapping groups.
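For a concrete instance of the group-sparse estimation this abstract discusses: under an orthogonal design, MAP estimation with a multivariate-Laplace-type group prior reduces to a block soft-threshold that shrinks each coefficient group by its norm, setting whole groups exactly to zero. This is a standard illustration of the group-sparsity mechanism, not the paper's variational inference procedure.

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Block soft-threshold: the proximal/MAP operation for a
    group-sparse penalty lam * sum_g ||w_g||, orthogonal design."""
    out = np.zeros_like(w, dtype=float)
    for g in groups:
        v = w[g]
        norm = np.linalg.norm(v)
        if norm > lam:
            out[g] = (1.0 - lam / norm) * v  # keep the group, shrunk
        # else: the entire group is set exactly to zero
    return out
```

Groups whose joint norm falls below the threshold vanish together, which is the within-group coupling that a multivariate prior adds over independent univariate sparse priors.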

119 citations


Cites background or methods from "Pattern Recognition and Machine Lea..."

  • ...The use of conjugate priors significantly simplifies the form of posterior distributions....


  • ...However, as in many multidimensional problems, the Bayesian model defined with the joint distribution in (30) does not allow for exact inference as the marginal distribution is intractable....
