Journal ArticleDOI

Pattern Recognition and Machine Learning

01 Aug 2007-Technometrics (Taylor & Francis)-Vol. 49, Iss: 3, pp 366-366
TL;DR: This book covers a broad range of topics for regular factorial designs, presents all of the material in a very mathematical fashion, and will surely become an invaluable resource for researchers and graduate students doing research in the design of factorial experiments.
Abstract: (2007). Pattern Recognition and Machine Learning. Technometrics: Vol. 49, No. 3, pp. 366-366.
Citations
Journal ArticleDOI
TL;DR: This study predicts changes in battery capacity over time using a Bayesian non-parametric approach based on Gaussian process regression; the resulting model can be integrated against an arbitrary input sequence to predict capacity fade in a variety of usage scenarios, forming a generalised health model.
Abstract: Accurately predicting the future health of batteries is necessary to ensure reliable operation, minimise maintenance costs, and calculate the value of energy storage investments. The complex nature of degradation renders data-driven approaches a promising alternative to mechanistic modelling. Here we show that a Bayesian non-parametric approach, using Gaussian process regression, can predict capacity fade in a variety of usage scenarios, forming a generalised health model. Our results are demonstrated on the open-source NASA Randomised Battery Usage Dataset, with data of 26 cells aged under widely varying operational conditions. Using half of the cells for training, and half for validation, we can accurately predict long term capacity fade, with a best case normalised root mean square error of 4.3%, including accurate estimation of the uncertainty of the prediction.

129 citations


Cites background from "Pattern Recognition and Machine Lea..."

  • ...Minimising the NLML automatically performs a trade-off between bias and variance, and hence ameliorates over-fitting to the data [26]....

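The quoted point about the negative log marginal likelihood (NLML) can be made concrete. The sketch below, in plain NumPy on hypothetical capacity-fade numbers (not the NASA dataset used in the paper), computes the NLML and the GP predictive mean and variance for a squared-exponential kernel; the kernel hyperparameters and data values are illustrative assumptions, not fitted quantities:

```python
import numpy as np

# Toy capacity-fade data: cycle number vs. normalised capacity
# (hypothetical values, not the NASA Randomised Battery Usage Dataset).
X = np.array([0.0, 100.0, 200.0, 300.0, 400.0])[:, None]
y = np.array([1.00, 0.97, 0.93, 0.90, 0.85])

def rbf_kernel(A, B, length_scale=150.0, variance=0.05):
    """Squared-exponential covariance between two column vectors of inputs."""
    d2 = (A - B.T) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

noise = 1e-4
K = rbf_kernel(X, X) + noise * np.eye(len(X))

# NLML: the quantity whose minimisation over hyperparameters performs the
# bias/variance trade-off the quote refers to.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
nlml = (0.5 * y @ alpha
        + np.sum(np.log(np.diag(L)))
        + 0.5 * len(X) * np.log(2 * np.pi))

# Predictive mean and variance at new cycle counts (interpolation at 250,
# extrapolation at 500).
Xs = np.array([[250.0], [500.0]])
Ks = rbf_kernel(X, Xs)
Kss = rbf_kernel(Xs, Xs) + noise * np.eye(len(Xs))
mean = Ks.T @ alpha
v = np.linalg.solve(L, Ks)
var = np.diag(Kss - v.T @ v)
```

Minimising `nlml` over `length_scale` and `variance` (e.g. with a gradient-based optimiser) is the step the quoted sentence describes; the predictive `var` grows for the extrapolated point, which is the uncertainty estimation the abstract mentions.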

Journal ArticleDOI
TL;DR: Several methods of spike sorting are introduced and the accuracy and robustness of their performance are compared using publicly available data of simultaneous extracellular and intracellular recordings of neuronal activity.
Abstract: Simultaneous recordings with multi-channel electrodes are widely used for studying how multiple neurons are recruited for information processing. The recorded signals contain the spike events of a number of adjacent or distant neurons and must be sorted correctly into spike trains of individual neurons. Several mathematical methods have been proposed for spike sorting but the process is difficult in practice, as extracellularly recorded signals are corrupted by biological noise. Moreover, spike sorting is often time-consuming, as it usually requires corrections by human operators. Methods are needed to obtain reliable spike clusters without heavy manual operation. Here, we introduce several methods of spike sorting and compare the accuracy and robustness of their performance using publicly available data of simultaneous extracellular and intracellular recordings of neuronal activity. The best performance was obtained when a newly proposed filter for spike detection was combined with the wavelet transform and variational Bayes for a finite mixture of Student's t-distributions, namely, robust variational Bayes. The wavelet transform extracts features that are characteristic of the detected spike waveforms, and robust variational Bayes categorizes the extracted features into clusters corresponding to spikes of the individual neurons. The use of Student's t-distributions makes this categorization robust against noisy data points. Some other new methods also exhibited reasonably good performance. We implemented all of the proposed methods in a C++ code named 'EToS' (Efficient Technology of Spike sorting), which is freely available on the Internet.

129 citations


Cites background from "Pattern Recognition and Machine Lea..."


  • ...The difficulty arising from the high-dimensionality of the data space is called ‘the curse of dimensionality’ (Bishop, 2006) and it should be mitigated by eliminating redundant data information....

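Once low-dimensional features have been extracted (mitigating the curse of dimensionality the quote describes), the clustering step assigns each detected spike to a putative neuron. A minimal sketch of that step on synthetic two-dimensional features, using scikit-learn's plain Gaussian mixture as a simpler stand-in for the robust variational Bayes over Student's t-distributions described above; the cluster centres and spreads are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic 2-D "spike features" from two hypothetical neurons, standing in
# for wavelet coefficients of detected spike waveforms.
neuron_a = rng.normal([0.0, 0.0], 0.3, size=(200, 2))
neuron_b = rng.normal([2.0, 2.0], 0.3, size=(200, 2))
feats = np.vstack([neuron_a, neuron_b])

# Fit a two-component mixture and assign each spike to a cluster. The paper's
# method uses variational Bayes with Student's t components for robustness to
# noisy points; a maximum-likelihood Gaussian mixture is the simpler analogue.
gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
labels = gmm.predict(feats)
```

Replacing the Gaussian components with Student's t-distributions is what gives the paper's method its robustness: heavy tails prevent a few outlying noise points from dragging a cluster's mean and covariance.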

Proceedings ArticleDOI
28 Nov 2016
TL;DR: Data-driven methods serve an increasingly important role in discovering geometric, structural, and semantic relationships between shapes as discussed by the authors, in contrast to traditional approaches that process shapes in isolation of each other.
Abstract: Data-driven methods serve an increasingly important role in discovering geometric, structural, and semantic relationships between shapes. In contrast to traditional approaches that process shapes in isolation of each other, data-driven methods aggregate information from 3D model collections to improve the analysis, modeling and editing of shapes. Through reviewing the literature, we provide an overview of the main concepts and components of these methods, as well as discuss their application to classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.

128 citations

Journal ArticleDOI
TL;DR: Xid+ as mentioned in this paper is a prior-based source extraction tool for the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources that uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates.
Abstract: We have developed a new prior-based source extraction tool, xid+, to carry out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. xid+ is developed using a probabilistic Bayesian framework that naturally incorporates prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates. In this paper, we discuss the details of xid+ and demonstrate its basic capabilities and performance by running it on simulated SPIRE maps resembling the COSMOS field, and comparing to the current prior-based source extraction tool desphot. Not only do we show that xid+ performs better on metrics such as flux accuracy and flux uncertainty accuracy, but we also illustrate how obtaining the posterior probability distribution can help overcome some of the issues inherent in maximum-likelihood-based source extraction routines. We run xid+ on the COSMOS SPIRE maps from the Herschel Multi-Tiered Extragalactic Survey using a 24-μm catalogue as a positional prior, and a uniform flux prior ranging from 0.01 to 1000 mJy. We show the marginalized SPIRE colour–colour plot and marginalized contribution to the cosmic infrared background at the SPIRE wavelengths. xid+ is a core tool arising from the Herschel Extragalactic Legacy Project (HELP) and we discuss how additional work within HELP providing prior information on fluxes can and will be utilized. The software is available at https://github.com/H-E-L-P/XID_plus. We also provide the data product for COSMOS. We believe this is the first time that the full posterior probability of galaxy photometry has been provided as a data product.

128 citations
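The benefit of a full posterior over a point estimate can be sketched in a few lines: a grid-evaluated posterior for a single source's flux under a Gaussian likelihood and the uniform 0.01–1000 mJy prior quoted in the abstract. The observed flux and noise level are hypothetical, and a dense grid stands in for the Stan sampling that xid+ actually uses:

```python
import numpy as np

# Hypothetical measurement for one source: observed flux and map noise (mJy).
obs, sigma = 12.0, 2.0

# Uniform prior from 0.01 to 1000 mJy, as in the paper; the flat prior means
# the posterior on the grid is proportional to the Gaussian likelihood,
# truncated to the prior's support.
grid = np.linspace(0.01, 1000.0, 200_000)
log_post = -0.5 * ((grid - obs) / sigma) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior mean and a 68% credible interval from the posterior CDF — the
# kind of summary a maximum-likelihood point estimate cannot provide.
mean = np.sum(grid * post)
cdf = np.cumsum(post)
lo = grid[np.searchsorted(cdf, 0.16)]
hi = grid[np.searchsorted(cdf, 0.84)]
```

In the real tool the posterior is joint over all sources in a map, so blended neighbours show up as correlated (degenerate) flux posteriors rather than as silently biased point estimates.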

Journal ArticleDOI
TL;DR: A practical framework to detect termites nondestructively by extracting their acoustic signals is proposed, which helps maintain the quality of wood products and prevent more severe termite attacks.
Abstract: Termites are the most destructive pests and their attacks significantly impact the quality of wooden buildings. Due to their cryptic behavior, it is rarely apparent from visual observation that a termite infestation is active and that wood damage is occurring. Based on the phenomenon of acoustic signals generated by termites when attacking wood, we propose a practical framework to detect termites nondestructively, i.e., by extracting their acoustic signals. This method has the advantage of maintaining the quality of wood products and preventing more severe termite attacks. In this work, we inserted 220 subterranean termites into a pine wood sample for feeding activity and monitored its acoustic signal. Two acoustic features derived from the time domain (energy and entropy) were used for this study's analysis. The support vector machine (SVM) algorithm with different kernel functions (linear, radial basis function, sigmoid and polynomial) was employed to recognize the termites' acoustic signal. In addition, the area under the receiver operating characteristic curve (AUC) was adopted to analyze the performance results. Based on the numerical analysis, the SVM with polynomial kernel function achieves the best classification accuracy of 0.9188.

128 citations


Cites background from "Pattern Recognition and Machine Lea..."

  • ...The optimization of this margin to its support vectors can be converted into a constrained quadratic programming problem as seen in Equation (5) [32]....


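The margin optimisation quoted above is the constrained quadratic programme that an off-the-shelf SVM solver carries out internally. A minimal scikit-learn sketch on synthetic stand-ins for the paper's two time-domain features (energy and entropy), with the polynomial kernel that performed best in the paper; the feature values and class separations are invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic (energy, entropy) features: "clean" recordings (class 0) vs.
# "infested" recordings (class 1). Values are hypothetical.
clean = rng.normal([0.2, 0.5], 0.1, size=(150, 2))
infested = rng.normal([0.6, 0.3], 0.1, size=(150, 2))
X = np.vstack([clean, infested])
y = np.array([0] * 150 + [1] * 150)

# Fitting SVC solves the constrained quadratic programme for the maximum
# margin; the polynomial kernel mirrors the best-performing choice among the
# four kernels compared in the paper.
clf = SVC(kernel="poly", degree=3, coef0=1.0).fit(X, y)

# AUC from the signed margin distances, as in the paper's evaluation.
scores = clf.decision_function(X)
auc = roc_auc_score(y, scores)
```

In practice the evaluation would use held-out recordings rather than the training set, and the kernel and its hyperparameters would be chosen by cross-validation, which is how the paper arrives at the polynomial kernel's 0.9188 accuracy.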