Topic
Statistical learning theory
About: Statistical learning theory is a research topic. In total, 1,618 publications have been published within this topic, receiving 158,033 citations.
Papers published on a yearly basis
Papers
08 Oct 2007
TL;DR: A new fault diagnosis approach for electro-hydraulic position closed-loop systems based on support vector regression; training reduces to solving a convex optimization problem, and the model complexity follows directly from this solution.
Abstract: A fault diagnosis method is proposed for electro-hydraulic position closed-loop systems based on support vector regression (SVR), a new class of kernel-based techniques introduced within statistical learning theory and structural risk minimization. This new fault diagnosis approach leads to solving convex optimization problems, and the model complexity follows directly from this solution. Using the measurable parameters, the total fault diagnosis model and the partial fault diagnosis model of SVR for the electro-hydraulic position closed-loop system are established. The simulation results show that this method can effectively detect faults in an electro-hydraulic position closed-loop system.
5 citations
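The paper's core idea — fit an SVR model of the nominal system from measurable parameters, then flag a fault when new measurements deviate from the model — can be sketched as below. This is a hedged illustration, not the authors' implementation: the sinusoidal "nominal response", the scikit-learn SVR hyperparameters, and the residual threshold are all assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in for measurable loop parameters: command input vs.
# nominal (fault-free) position response.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(2.0 * X[:, 0])

# Epsilon-insensitive SVR with an RBF kernel; training is a convex QP.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

# Fault detection: compare a new measurement against the learned nominal model.
x_new = np.array([[0.3]])
measured = np.sin(0.6)   # a healthy measurement at x = 0.3
residual = abs(model.predict(x_new)[0] - measured)
threshold = 0.2          # illustrative fault threshold, not from the paper
is_fault = residual > threshold
print(f"residual={residual:.3f} fault={is_fault}")
```

In this sketch the "total" model of the paper would correspond to one SVR over all measurable parameters, while "partial" models would each cover a subsystem.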
22 Aug 2000
TL;DR: It is shown that fusion of features, of soft decisions, and of hard decisions each yields improved performance with respect to the individual sensors, and fusion decreases the overall error rate.
Abstract: A method is described to improve the performance of sensor fusion algorithms. Data sets available for training fusion algorithms are often smaller than desired, since the sensor suite used for data acquisition is always limited by the slowest, least reliable sensor. In addition, the fusion process expands the dimension of the data, which increases the requirement for training data. By using structural risk minimization, a technique of statistical learning theory, a classifier of optimal complexity can be obtained, leading to improved performance. A technique for jointly optimizing the local decision thresholds is also described for hard-decision fusion. The procedure is demonstrated for EMI, GPR and MWIR data acquired at the US Army mine lanes at Fort AP Hill, VA, Site 71A. It is shown that fusion of features, soft decisions, and hard decisions each yields improved performance with respect to the individual sensors. Fusion decreases the overall error rate from roughly 20 percent for the best single sensor to roughly 10 percent for the best fused result.
5 citations
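Structural risk minimization, as used above to obtain a classifier of optimal complexity, can be approximated in practice by sweeping model capacity and keeping the setting with the best validated performance. A minimal sketch, assuming scikit-learn and synthetic data in place of the sensor-fusion features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the fused sensor features.
X, y = make_classification(n_samples=300, n_features=6, random_state=0)

# SRM in spirit: sweep the classifier's capacity (here, the SVM's C) and
# keep the setting with the best cross-validated accuracy.
best_C, best_score = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0]:
    score = cross_val_score(SVC(kernel="rbf", C=C), X, y, cv=5).mean()
    if score > best_score:
        best_C, best_score = C, score
print(best_C, round(best_score, 3))
```

Cross-validation here plays the role of the capacity-control term: small C under-fits, large C over-fits the limited training set the abstract warns about.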
01 Jan 2006
TL;DR: Tissue conductivity for each layer of a 2-D head model is estimated effectively by MSVM, which not only obtains higher learning accuracy but also has greater generalization ability and faster computing speed.
Abstract: Estimating head tissue conductivity for each layer is a high-dimensional, non-linear and ill-posed problem that forms part of the Electrical Impedance Tomography (EIT) inverse problem. Traditional methods have many difficulties in resolving this problem. The Support Vector Machine (SVM), based on Statistical Learning Theory (SLT), is a new kind of learning method comprising Support Vector Classification (SVC) and Support Vector Regression (SVR). A new method named Multi-SVM (MSVM), which uses SVR in a multi-input, multi-output setting, is proposed to solve this problem. Tissue conductivity for each layer of a 2-D head model is estimated effectively by MSVM. Compared with a wavelet neural network method, MSVM not only obtains higher learning accuracy but also has greater generalization ability and faster computing speed, as our experiment demonstrates.
5 citations
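The MSVM idea — support vector regression extended to a multi-input, multi-output system — can be approximated with one SVR per output channel. A hedged sketch assuming scikit-learn and a synthetic stand-in for the conductivity-estimation problem (the 4-measurement/3-layer dimensions and the tanh forward map are illustrative, not from the paper):

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Hypothetical inverse problem: map 4 boundary measurements to 3 layer
# conductivities through a smooth nonlinear forward model.
X = rng.normal(size=(150, 4))
W = rng.normal(size=(4, 3))
Y = np.tanh(X @ W)

# Multi-input, multi-output regression: one SVR fitted per output channel.
msvr = MultiOutputRegressor(SVR(kernel="rbf", C=5.0)).fit(X, Y)
pred = msvr.predict(X[:5])
print(pred.shape)  # one row per query, one column per estimated layer
```

Fitting independent SVRs per output is the simplest multi-output extension; the paper's MSVM may couple the outputs differently.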
25 Jul 2004
TL;DR: This work employs ideas from statistical learning theory to conjecture the existence of such faster-than-exponential behavior and designs a new approach to implementing the training steps that exploits this behavior to systematically test the generalization level during the training process.
Abstract: In training a learning machine (LM) with unlimited data samples available in the training set, it is important to be able to determine when the LM has attained an adequate level of generalization in order to stop the training process. This problem has not yet achieved a satisfactory solution, but one observation aids the determination of the generalization level: as the LM becomes consistent and reaches an acceptable generalization threshold, it becomes more infrequent to find samples from the training set that would make the system fail and trigger a new cycle of the training algorithm. In a statistical sense, the number of samples that can be tested as carrying no new information (i.e. information not already learnt from training cycles already completed) between two successive triggers of training events asymptotically displays faster-than-exponential growth, which in turn provides a telltale sign of an LM reaching consistency and thus attaining a desired generalization level. This work employs some ideas taken from statistical learning theory to conjecture the existence of this behavior and designs a new approach to implementing the training steps that can exploit it in order to systematically test the generalization level during the training process. Examples of nonlinear regression problems are included to illustrate the ideas and to validate the methods. The obtained results are general and are independent of the configuration of the LM, its architecture, and the specific training algorithm used; hence, they are applicable to a broad class of supervised learning problems.
5 citations
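The stopping rule described above — count the samples between successive training triggers and stop once that inter-trigger gap grows large — can be sketched with a toy online learner. Everything here (the 1-D threshold classifier, the halfway update rule, the gap of 200) is an illustrative assumption, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)

theta = 0.0        # current decision boundary of a toy 1-D classifier
true_theta = 0.42  # target concept the machine is trying to learn
gap = 0            # samples seen since the last training trigger
stop_gap = 200     # stop once this many consecutive samples need no update

while gap < stop_gap:
    x = rng.uniform(-1.0, 1.0)
    label = x > true_theta
    if (x > theta) != label:        # sample falsifies the current machine
        theta += 0.5 * (x - theta)  # training trigger: update the boundary
        gap = 0                     # reset the inter-trigger counter
    else:
        gap += 1                    # consistent sample: the gap keeps growing

print(round(theta, 3))
```

As the boundary approaches the target concept, falsifying samples become rare and the gap between triggers grows, which is exactly the telltale sign the abstract describes.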
TL;DR: In this article, the authors propose an extension of the Singular Information Criterion (SingIC) to many singular machines and evaluate its efficiency on Gaussian mixtures; the results offer an effective strategy for selecting the optimal size of singular machines.
Abstract: Deciding the optimal size of a learning machine is a central issue in statistical learning theory, which is why theoretical criteria such as the BIC have been developed. However, these criteria cannot be applied to singular machines, and it is known that many practical learning machines, e.g. mixture models, hidden Markov models, and Bayesian networks, are singular. Recently, we proposed the Singular Information Criterion (SingIC), which allows us to select the optimal size of singular machines. The SingIC is based on an analysis of the learning coefficient, so the machines to which the SingIC can be applied are still limited. In this paper, we propose an extension of this criterion that enables us to apply it to many singular machines, and we evaluate its efficiency on Gaussian mixtures. The results offer an effective strategy for selecting the optimal size.
5 citations
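Size selection by an information criterion, the setting SingIC addresses, can be illustrated with plain BIC on a Gaussian mixture. This is a scikit-learn sketch, not SingIC itself: the paper's point is precisely that BIC's penalty is not justified for singular models, and SingIC would replace it with one derived from the learning coefficient.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Data drawn from a two-component 1-D Gaussian mixture.
X = np.concatenate([rng.normal(-2.0, 1.0, 300),
                    rng.normal(2.0, 1.0, 300)]).reshape(-1, 1)

# Fit k = 1..4 components and keep the k with the lowest penalized score.
# Plain BIC is used here; SingIC swaps BIC's penalty for one based on the
# learning coefficient, which is the correct object for singular models.
scores = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
          for k in range(1, 5)}
best_k = min(scores, key=scores.get)
print(best_k)
```

With well-separated components the two criteria agree; SingIC matters in the near-singular regime, where extra components collapse onto each other and BIC's effective-parameter count is wrong.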