Topic
Statistical learning theory
About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications on this topic have been published, receiving 158,033 citations.
Papers published on a yearly basis
Papers
01 Jan 2004
TL;DR: Statistical learning theory and stochastic optimization.
Abstract: Statistical learning theory and stochastic optimization.
326 citations
TL;DR: Support vector machines are a family of machine learning methods originally introduced for the problem of classification and later generalized to various other situations, and are currently used in various domains of application, including bioinformatics, text categorization, and computer vision.
Abstract: Support vector machines (SVMs) are a family of machine learning methods, originally introduced for the problem of classification and later generalized to various other situations. They are based on principles of statistical learning theory and convex optimization, and are currently used in various domains of application, including bioinformatics, text categorization, and computer vision. Copyright © 2009 John Wiley & Sons, Inc.
323 citations
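The SVM formulation summarized above can be sketched in a few lines. This is a minimal illustration using scikit-learn and synthetic data (both assumptions, not tied to the article itself): the RBF-kernel SVM below solves the convex quadratic margin-maximization problem that statistical learning theory motivates.

```python
# Minimal SVM classification sketch (hypothetical data, scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An RBF-kernel SVM: fitting maximizes the margin by solving a convex
# (quadratic) optimization problem.
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

The fitted model exposes the support vectors via `clf.support_vectors_`, the training points that determine the maximum-margin decision boundary.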
TL;DR: This paper investigates several state-of-the-art machine-learning methods for automated classification of clustered microcalcifications (MCs), formulating differentiation of malignant from benign MCs as a supervised learning problem and applying these learning methods to develop the classification algorithm.
Abstract: In this paper, we investigate several state-of-the-art machine-learning methods for automated classification of clustered microcalcifications (MCs). The classifier is part of a computer-aided diagnosis (CADx) scheme aimed at assisting radiologists in making more accurate diagnoses of breast cancer on mammograms. The methods we considered were: support vector machine (SVM), kernel Fisher discriminant (KFD), relevance vector machine (RVM), and committee machines (ensemble averaging and AdaBoost), most of which have been developed recently in statistical learning theory. We formulated differentiation of malignant from benign MCs as a supervised learning problem, and applied these learning methods to develop the classification algorithm. As input, these methods used image features automatically extracted from clustered MCs. We tested these methods using a database of 697 clinical mammograms from 386 cases, which included a wide spectrum of difficult-to-classify cases. We analyzed the distribution of the cases in this database using the multidimensional scaling technique, which reveals that in the feature space the malignant cases are not trivially separable from the benign ones. We used receiver operating characteristic (ROC) analysis to evaluate and compare classification performance of the different methods. In addition, we investigated how to combine information from multiple-view mammograms of the same case so that the best decision can be made by a classifier. In our experiments, the kernel-based methods (i.e., SVM, KFD, and RVM) yielded the best performance (A_z = 0.85 for SVM), significantly outperforming a well-established, clinically proven CADx approach based on a neural network (A_z = 0.80).
305 citations
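The head-to-head comparison reported above (kernel methods versus a neural network, scored by ROC area) can be mimicked on synthetic data. A hedged sketch using scikit-learn (an assumption; this is not the authors' code or mammogram data):

```python
# Compare an SVM and a small neural network by ROC AUC on synthetic data.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# probability=True enables predict_proba, needed for ROC analysis.
svm = SVC(probability=True, random_state=1).fit(X_tr, y_tr)
mlp = MLPClassifier(max_iter=1000, random_state=1).fit(X_tr, y_tr)

auc_svm = roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1])
auc_mlp = roc_auc_score(y_te, mlp.predict_proba(X_te)[:, 1])
```

The area under the ROC curve plays the role of the A_z statistic used in the paper's evaluation.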
24 Jul 2000
TL;DR: The main purpose of the paper is to compare the support vector machine (SVM) developed by Cortes and Vapnik (1995) with other techniques, such as backpropagation and radial basis function (RBF) networks, for financial forecasting applications.
Abstract: The main purpose of the paper is to compare the support vector machine (SVM) developed by Cortes and Vapnik (1995) with other techniques such as backpropagation and radial basis function (RBF) networks for financial forecasting applications. The theory of the SVM algorithm is based on statistical learning theory. Training of SVMs leads to a quadratic programming (QP) problem. Preliminary computational results for stock price prediction are also presented.
303 citations
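The forecasting setup described above, SVM regression on lagged price features, can be sketched as follows. This is a toy illustration with a synthetic random-walk series standing in for stock prices (an assumption), using scikit-learn's epsilon-SVR, whose training likewise reduces to a quadratic programming (QP) problem:

```python
# Toy SVM-regression forecast on a synthetic "price" series.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=120))  # random walk standing in for prices

# Build lag features: predict series[t] from the previous 5 values.
lags = 5
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

# epsilon-SVR: training solves a QP, as noted in the abstract.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[:100], y[:100])
preds = model.predict(X[100:])
```

With 120 observations and 5 lags there are 115 supervised examples; the last 15 are held out for prediction.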
TL;DR: In this article, the authors reformulate coarse-graining as a supervised machine learning problem and use statistical learning theory to decompose the coarsegraining error and cross-validation to select and compare the performance of different models.
Abstract: Atomistic or ab initio molecular dynamics simulations are widely used to predict thermodynamics and kinetics and relate them to molecular structure. A common approach to go beyond the time- and length-scales accessible with such computationally expensive simulations is the definition of coarse-grained molecular models. Existing coarse-graining approaches define an effective interaction potential to match defined properties of high-resolution models or experimental data. In this paper, we reformulate coarse-graining as a supervised machine learning problem. We use statistical learning theory to decompose the coarse-graining error and cross-validation to select and compare the performance of different models. We introduce CGnets, a deep learning approach that learns coarse-grained free energy functions and can be trained by a force-matching scheme. CGnets maintain all physically relevant invariances and allow one to incorporate prior physics knowledge to avoid sampling of unphysical structures. We show tha...
298 citations
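The model-selection idea in the abstract above, using cross-validation to compare candidate coarse-grained models fit to force data, can be sketched in one dimension. Everything here is a hypothetical toy, not CGnets: forces are sampled from a double-well potential U(x) = x^4 - x^2, so the target force is F(x) = -dU/dx = -4x^3 + 2x, and cross-validated R^2 scores compare a linear against a kernel model.

```python
# Cross-validation to compare candidate models on toy force-matching data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
x = rng.uniform(-1.5, 1.5, size=200).reshape(-1, 1)
# Noisy forces from the double-well potential: F(x) = -4x^3 + 2x.
f = (-4 * x**3 + 2 * x).ravel() + rng.normal(scale=0.1, size=200)

# Mean cross-validated R^2 for each candidate coarse-grained model.
linear_cv = cross_val_score(Ridge(), x, f, cv=5).mean()
kernel_cv = cross_val_score(KernelRidge(kernel="rbf", gamma=1.0), x, f, cv=5).mean()
```

Because the true force is a cubic, the nonlinear kernel model should score higher under cross-validation, which is exactly the kind of comparison the abstract describes.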