scispace - formally typeset

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have appeared within this topic, receiving 158,033 citations.


Papers
Journal ArticleDOI
TL;DR: It is shown that the coarse-grained and fine-grained localization problems for ad hoc sensor networks can be posed and solved as a pattern recognition problem using kernel methods from statistical learning theory, and a simple and effective localization algorithm is derived.
Abstract: We show that the coarse-grained and fine-grained localization problems for ad hoc sensor networks can be posed and solved as a pattern recognition problem using kernel methods from statistical learning theory. This stems from an observation that the kernel function, which is a similarity measure critical to the effectiveness of a kernel-based learning algorithm, can be naturally defined in terms of the matrix of signal strengths received by the sensors. Thus we work in the natural coordinate system provided by the physical devices. This not only allows us to sidestep the difficult ranging procedure required by many existing localization algorithms in the literature, but also enables us to derive a simple and effective localization algorithm. The algorithm is particularly suitable for networks with densely distributed sensors, most of whose locations are unknown. The computations are initially performed at the base sensors, and the computation cost depends only on the number of base sensors. The localization step for each sensor of unknown location is then performed locally in linear time. We present an analysis of the localization error bounds, and provide an evaluation of our algorithm on both simulated and real sensor networks.
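The abstract's central idea is that a kernel can be defined directly on the vectors of signal strengths the sensors receive, so classification replaces explicit ranging. The sketch below is a minimal illustration of that idea, not the paper's algorithm: a linear kernel on toy received-signal-strength (RSS) vectors drives a kernel nearest-neighbor rule for coarse-grained region classification; all data and names are hypothetical.

```python
import numpy as np

# Hypothetical setup: each sensor is described by the vector of signal
# strengths it receives from two base sensors. The kernel is defined on
# these RSS vectors directly, so no RSS-to-distance ranging is needed.

def rss_kernel(s1, s2):
    # Linear kernel on signal-strength vectors (one simple valid choice).
    return float(np.dot(s1, s2))

def classify_region(rss_unknown, rss_train, labels):
    # Kernel nearest-neighbor: pick the label of the training sensor whose
    # RSS vector is closest in the kernel-induced distance
    # d(x, y)^2 = k(x, x) - 2 k(x, y) + k(y, y).
    d2 = [rss_kernel(rss_unknown, rss_unknown)
          - 2 * rss_kernel(rss_unknown, s)
          + rss_kernel(s, s) for s in rss_train]
    return labels[int(np.argmin(d2))]

# Toy data: region-0 sensors hear base 0 strongly, region-1 sensors base 1.
rss_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
labels = [0, 0, 1, 1]
print(classify_region(np.array([0.85, 0.15]), rss_train, labels))  # 0
```

Because the similarity is computed purely in the coordinate system of received signal strengths, the same pattern scales to the paper's setting where only a small set of base sensors has known locations.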

198 citations

Journal ArticleDOI
TL;DR: Comparisons between the SVM model and the classical radial basis function (RBF) network demonstrate that the SVM is superior to the conventional RBF network in predicting air quality parameters across different time series, and offers better generalization performance than the RBF model.

197 citations

Journal ArticleDOI
TL;DR: A new cost-sensitive algorithm (CSMLP) is presented to improve the discrimination ability of (two-class) MLPs and it is theoretically demonstrated that the incorporation of prior information via the cost parameter may lead to balanced decision boundaries in the feature space.
Abstract: Traditional learning algorithms applied to complex and highly imbalanced training sets may not give satisfactory results when distinguishing between examples of the classes. The tendency is to yield classification models that are biased towards the overrepresented (majority) class. This paper investigates this class imbalance problem in the context of multilayer perceptron (MLP) neural networks. The consequences of the equal cost (loss) assumption on imbalanced data are formally discussed from a statistical learning theory point of view. A new cost-sensitive algorithm (CSMLP) is presented to improve the discrimination ability of (two-class) MLPs. The CSMLP formulation is based on a joint objective function that uses a single cost parameter to distinguish the importance of class errors. The learning rule extends the Levenberg-Marquardt rule, ensuring the computational efficiency of the algorithm. In addition, it is theoretically demonstrated that the incorporation of prior information via the cost parameter may lead to balanced decision boundaries in the feature space. Based on the statistical analysis of results on real data, our approach shows a significant improvement of the area under the receiver operating characteristic curve and G-mean measures of regular MLPs.
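The core mechanism described above, a single cost parameter that re-weights class errors inside a joint objective, can be sketched in a few lines. This is an illustrative toy, not the paper's CSMLP update rule: a cost-weighted squared-error loss where errors on the minority class (label 1) are scaled by `cost`, so a model that ignores the minority class is penalized more as `cost` grows.

```python
import numpy as np

# Illustrative sketch (not the paper's exact CSMLP formulation): one cost
# parameter re-weights minority-class errors in a squared-error objective.

def weighted_sq_loss(y_true, y_pred, cost):
    # cost > 1 makes errors on minority-class (label 1) examples costlier.
    w = np.where(y_true == 1, cost, 1.0)
    return float(np.mean(w * (y_true - y_pred) ** 2))

y_true = np.array([0, 0, 0, 0, 1])            # imbalanced: one minority example
y_pred = np.array([0.1, 0.1, 0.1, 0.1, 0.2])  # model ignores the minority class
loss_equal = weighted_sq_loss(y_true, y_pred, cost=1.0)
loss_costly = weighted_sq_loss(y_true, y_pred, cost=4.0)
print(loss_equal < loss_costly)  # True: the biased model is penalized more
```

Minimizing such a weighted objective pushes the decision boundary away from the minority class, which is the balancing effect the paper demonstrates theoretically.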

195 citations

Journal ArticleDOI
TL;DR: A novel SVM classification system for voltage disturbances achieves high classification accuracy when trained on data from one power network and tested on unseen data from another, but lower accuracy when the SVM classifier is trained on synthetic data and tested on data from a real power network.
Abstract: The support vector machine (SVM) is a powerful method for statistical classification of data used in a number of different applications. However, the usefulness of the method in a commercially available system depends heavily on whether the SVM classifier can be pretrained at the factory, since it is not realistic to require customers to train the SVM classifier themselves before it can be used. This paper proposes a novel SVM classification system for voltage disturbances. The performance of the proposed SVM classifier is investigated when the voltage disturbance data used for training and testing originate from different sources. The data used in the experiments were obtained both from real disturbances recorded in two different power networks and from synthetic data. The experimental results showed high accuracy in classification with training data from one power network and unseen testing data from another. High accuracy was also achieved when the SVM classifier was trained on data from a real power network and tested on synthetic data. A lower accuracy resulted when the SVM classifier was trained on synthetic data and tested on data from a real power network.
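The experimental design above, training on one data source and evaluating on another, can be mimicked with a toy cross-source evaluation. In this sketch a nearest-centroid classifier stands in for the SVM and the "sources" are synthetic Gaussian clusters with shifted statistics; none of this is the paper's data or model, it only illustrates why accuracy can drop when training and test data come from different origins.

```python
import numpy as np

# Sketch of cross-source evaluation (nearest-centroid stands in for the SVM;
# all data is toy, not the paper's disturbance recordings).
rng = np.random.default_rng(0)

def train_centroids(X, y):
    # Fit one centroid per class label.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def accuracy(centroids, X, y):
    # Classify each point by its nearest class centroid.
    pred = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
            for x in X]
    return float(np.mean(np.array(pred) == y))

# "Source A": two disturbance classes, well separated.
Xa = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
ya = np.array([0] * 50 + [1] * 50)
# "Source B": same labels, but shifted feature statistics.
Xb = Xa + np.array([2.0, 2.0])
yb = ya

model = train_centroids(Xa, ya)
acc_same = accuracy(model, Xa, ya)   # trained and tested on the same source
acc_cross = accuracy(model, Xb, yb)  # tested on the shifted source
print(acc_same > acc_cross)
```

The gap between `acc_same` and `acc_cross` is the toy analogue of the paper's finding that a classifier trained on synthetic data loses accuracy on real network recordings.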

195 citations

Journal ArticleDOI
TL;DR: A multi-layer SVM classifier is applied to fault diagnosis of power transformers for the first time in this paper; results show that the classifier delivers excellent performance in training speed and reliability.

193 citations


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 86% related
Cluster analysis: 146.5K papers, 2.9M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Optimization problem: 96.4K papers, 2.1M citations, 80% related
Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47