Patent

Relevance vector machine

TLDR
The relevance vector machine (RVM) described here is a probabilistic basis model given a Bayesian treatment, in which a prior governed by a set of hyperparameters is introduced over the weights.
Abstract
A relevance vector machine (RVM) for data modeling is disclosed. The RVM is a probabilistic basis model. Sparsity is achieved through a Bayesian treatment, in which a prior governed by a set of hyperparameters is introduced over the weights. Compared with a Support Vector Machine (SVM), the non-zero weights in the RVM correspond to more prototypical examples of the classes, termed relevance vectors. The trained RVM uses far fewer basis functions than the corresponding SVM and typically achieves superior test performance. No additional validation of parameters (such as C) is necessary to specify the model, apart from those associated with the basis.
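To make the abstract's description concrete, here is a minimal sketch of the sparse Bayesian regression idea behind the RVM, using the usual evidence-based hyperparameter re-estimation. The Gaussian RBF basis, the pruning threshold, and the names rbf_design and fit_rvm are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch of RVM-style sparse Bayesian regression via hyperparameter
# re-estimation. Illustrative only: basis choice, pruning threshold and
# function names are assumptions, not taken from the patent.
import numpy as np

def rbf_design(X, centres, width=1.0):
    """Gaussian RBF design matrix with one basis function per centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rvm(Phi, t, n_iter=500, prune_at=1e6):
    """Iterate evidence-based updates for the per-weight precisions alpha
    and the noise precision beta, pruning basis functions whose alpha
    diverges (their weight posterior collapses onto zero)."""
    N, M = Phi.shape
    alpha = np.ones(M)          # precision of the zero-mean prior on each weight
    beta = 1.0                  # noise precision
    keep = np.arange(M)         # indices of surviving (relevance) basis functions
    mu = np.zeros(M)
    for _ in range(n_iter):
        # Posterior over the remaining weights.
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ t
        # Re-estimate hyperparameters.
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = gamma / (mu ** 2 + 1e-12)
        beta = (N - gamma.sum()) / (np.sum((t - Phi @ mu) ** 2) + 1e-12)
        # Prune basis functions whose prior precision has blown up.
        mask = alpha < prune_at
        Phi, alpha, mu, keep = Phi[:, mask], alpha[mask], mu[mask], keep[mask]
    return mu, keep             # sparse weights and surviving basis indices

# Toy usage: noisy sinc data; the surviving `keep` indices play the role of
# the relevance vectors and are typically a small fraction of the candidates.
rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(100, 1))
t = np.sinc(X[:, 0] / np.pi) + 0.1 * rng.standard_normal(100)
Phi = rbf_design(X, X, width=2.0)
mu, keep = fit_rvm(Phi, t)
print(f"{len(keep)} relevance vectors out of {len(X)} candidates")
```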


Citations
Journal ArticleDOI

Using Support Vector Machine to identify imaging biomarkers of neurological and psychiatric disease: A critical review

TL;DR: The Support Vector Machine has been successfully applied to disease diagnosis, transition prediction and treatment prognosis using both structural and functional neuroimaging data; studies applying it to the investigation of Alzheimer's disease, schizophrenia, major depression, bipolar disorder, presymptomatic Huntington's disease and autistic spectrum disorder are reviewed.
Journal ArticleDOI

Disorders of consciousness after acquired brain injury: the state of the science

TL;DR: The state of the science with regard to clinical management of patients with prolonged disorders of consciousness is described, and consciousness-altering pathophysiological mechanisms, specific clinical syndromes, and novel diagnostic and prognostic applications of advanced neuroimaging and electrophysiological procedures are reviewed.
Proceedings Article

Variational Relevance Vector Machines

TL;DR: This paper shows how the RVM can be formulated and solved within a completely Bayesian paradigm through the use of variational inference, thereby giving a posterior distribution over both parameters and hyperparameters.
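As a rough sketch of what a posterior over both parameters and hyperparameters typically looks like in this variational setting, the factorisation below follows the standard mean-field treatment; the specific prior forms are an assumption here, not a quotation from the paper.

```latex
% Mean-field factorisation commonly used for a variational RVM (assumed sketch).
p(\mathbf{w}, \boldsymbol{\alpha}, \beta \mid \mathbf{t}) \approx
q(\mathbf{w})\, q(\boldsymbol{\alpha})\, q(\beta), \qquad
q(\mathbf{w}) = \mathcal{N}(\mathbf{w} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}), \quad
q(\boldsymbol{\alpha}) = \prod_i \operatorname{Gam}(\alpha_i \mid a_i, b_i), \quad
q(\beta) = \operatorname{Gam}(\beta \mid c, d).
```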
Journal ArticleDOI

PRoNTo: Pattern Recognition for Neuroimaging Toolbox

TL;DR: The goal of this work was to build a toolbox comprising all the functionality necessary for multivariate analyses of neuroimaging data based on machine learning models, and to facilitate novel contributions from developers, aiming to improve the interaction between the neuroimaging and machine learning communities.
Journal ArticleDOI

Coma and consciousness: paradigms (re)framed by neuroimaging.

TL;DR: Advances in measurement support a model of consciousness as the emergent property of the collective behavior of widespread frontoparietal network connectivity modulated by specific forebrain circuit mechanisms.
References
Book

Statistical Decision Theory and Bayesian Analysis

TL;DR: An overview of statistical decision theory, which emphasizes the use and application of the philosophical ideas and mathematical structure of decision theory.
Journal ArticleDOI

Sparse Bayesian learning and the relevance vector machine

TL;DR: It is demonstrated that by exploiting a probabilistic Bayesian learning framework, the 'relevance vector machine' (RVM) can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages.
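For orientation, the model this summary refers to can be written as below; the notation (weights w, basis functions φ_i, per-weight precisions α_i) follows the usual sparse Bayesian learning presentation and is assumed here rather than taken from the listing.

```latex
% Linear-in-the-parameters model with an individual Gaussian prior per weight.
% Evidence maximisation drives many \alpha_i \to \infty, forcing the
% corresponding w_i to zero; the surviving basis functions are the
% relevance vectors.
y(\mathbf{x}; \mathbf{w}) = \sum_{i=1}^{M} w_i \,\phi_i(\mathbf{x}), \qquad
p(\mathbf{w} \mid \boldsymbol{\alpha}) = \prod_{i=1}^{M}
\mathcal{N}\!\left(w_i \mid 0, \alpha_i^{-1}\right).
```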
Book

Fast training of support vector machines using sequential minimal optimization

TL;DR: In this article, the authors propose a new algorithm for training Support Vector Machines (SVMs), called SMO (Sequential Minimal Optimization), which breaks the large quadratic programming (QP) problem arising in SVM training into a series of smallest-possible QP problems.
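The following is a compact, simplified sketch of the SMO idea this summary describes: repeatedly pick a pair of Lagrange multipliers, solve the resulting two-variable QP analytically, and clip to the box constraints. The pair-selection heuristic, the linear kernel, and the name smo_train are simplifying assumptions; Platt's full algorithm uses more elaborate working-set heuristics.

```python
# Simplified SMO sketch for a soft-margin SVM with a linear kernel.
import numpy as np

def smo_train(X, y, C=1.0, tol=1e-3, max_passes=10, rng=None):
    """Train a linear soft-margin SVM with a simplified SMO loop.
    y must hold labels in {-1, +1}. Returns (alphas, b)."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    K = X @ X.T                       # linear kernel matrix
    a = np.zeros(n)                   # Lagrange multipliers
    b = 0.0
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (a * y) @ K[:, i] + b - y[i]
            # Only touch multipliers that violate the KKT conditions.
            if (y[i] * Ei < -tol and a[i] < C) or (y[i] * Ei > tol and a[i] > 0):
                j = rng.integers(n - 1)
                j = j + 1 if j >= i else j      # pick a second index j != i
                Ej = (a * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = a[i], a[j]
                # Box constraints for the pair keep 0 <= a <= C feasible.
                if y[i] != y[j]:
                    L, H = max(0, a[j] - a[i]), min(C, C + a[j] - a[i])
                else:
                    L, H = max(0, a[i] + a[j] - C), min(C, a[i] + a[j])
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if eta >= 0:
                    continue
                # Analytic solution of the two-variable subproblem, then clip.
                aj_new = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(aj_new - aj_old) < 1e-5:
                    continue
                a[j] = aj_new
                a[i] = ai_old + y[i] * y[j] * (aj_old - a[j])
                # Update the threshold b from whichever multiplier is interior.
                b1 = b - Ei - y[i] * (a[i] - ai_old) * K[i, i] - y[j] * (a[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (a[i] - ai_old) * K[i, j] - y[j] * (a[j] - aj_old) * K[j, j]
                b = b1 if 0 < a[i] < C else (b2 if 0 < a[j] < C else (b1 + b2) / 2)
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return a, b
```

Predictions then use f(x) = sum_i a_i y_i <x_i, x> + b; the examples with non-zero a_i are the support vectors, the quantity the RVM's relevance vectors are contrasted with above.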
Journal ArticleDOI

Artificial neural networks: a tutorial

TL;DR: The article discusses the motivations behind the development of ANNs, describes the basic biological neuron and the artificial computational model, outlines network architectures and learning processes, and presents some of the most commonly used ANN models.
Journal ArticleDOI

Input space versus feature space in kernel-based methods

TL;DR: The geometry of feature space is reviewed, and the connection between feature space and input space is discussed by addressing the question of how, given some vector in feature space, one can find a pre-image in input space.
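As a concrete illustration of the pre-image question raised here, below is a sketch of the classic fixed-point iteration for approximating a pre-image under a Gaussian (RBF) kernel; the name rbf_preimage and the starting-point choice are assumptions for illustration, not necessarily the setup used in the review.

```python
# Fixed-point iteration for an approximate pre-image under a Gaussian kernel:
# given an expansion Psi = sum_i gamma[i] * phi(X[i]) in feature space, seek a
# point z in input space whose image phi(z) is close to Psi.
import numpy as np

def rbf_preimage(X, gamma, sigma=1.0, n_iter=100):
    """Approximate pre-image of sum_i gamma[i] * phi(X[i]) for the kernel
    k(x, z) = exp(-||x - z||^2 / (2 sigma^2)). X has shape (n, d)."""
    z = X[np.argmax(np.abs(gamma))].copy()    # start from the dominant point
    for _ in range(n_iter):
        w = gamma * np.exp(-np.sum((X - z) ** 2, axis=1) / (2 * sigma ** 2))
        denom = w.sum()
        if abs(denom) < 1e-12:                # iteration undefined; stop early
            break
        z = (w[:, None] * X).sum(axis=0) / denom
    return z
```

With gamma set to, for example, kernel-PCA reconstruction coefficients, the returned z serves as an approximate input-space counterpart of a feature-space vector, which is the pre-image problem the summary refers to.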