Author

Jan Luts

Bio: Jan Luts is an academic researcher from Katholieke Universiteit Leuven. The author has contributed to research in topics: Support vector machine & Semiparametric regression. The author has an h-index of 18 and has co-authored 45 publications receiving 1369 citations. Previous affiliations of Jan Luts include the University of Technology, Sydney and The Catholic University of America.
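The h-index quoted above has a simple definition: it is the largest h such that h of the author's publications each have at least h citations. A minimal sketch in Python (the citation list below is an arbitrary toy example, not this author's actual record):

# Minimal sketch: computing an h-index from per-paper citation counts.
# An h-index of 18 means at least 18 papers have 18 or more citations each.
def h_index(citations):
    """Return the largest h such that h papers have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([273, 132, 92, 81, 40, 12, 3]))  # -> 6 for this toy list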

Papers
Journal ArticleDOI
TL;DR: This tutorial provides a concise overview of support vector machines and several closely related techniques for pattern classification, illustrated with three real-life applications in the fields of metabolomics, genetics, and proteomics.

273 citations
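As a rough companion to the tutorial summarized above, here is a minimal kernel-SVM classification sketch. It uses scikit-learn as one concrete implementation and synthetic data as a stand-in for the omics applications; none of this is taken from the tutorial itself.

# Illustrative sketch only: a kernel support vector machine for binary
# pattern classification, using scikit-learn (one possible implementation;
# the tutorial is not tied to this library).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-in for an omics dataset: 200 samples, 50 features, 2 classes.
X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize features, then fit an RBF-kernel SVM; C and gamma would
# normally be tuned by cross-validation, as such tutorials discuss.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))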

Journal ArticleDOI
TL;DR: The prediction of the tumor type of in-vivo MRS is possible using classifiers developed from previously acquired data, in different hospitals with different instrumentation under the same acquisition protocols.
Abstract: Justification: Automatic brain tumor classification by MRS has been under development for more than a decade. Nonetheless, to our knowledge, there are no published evaluations of predictive models with unseen cases that are subsequently acquired in different centers. The multicenter eTUMOUR project (2004–2009), which built upon previous expertise from the INTERPRET project (2000–2002), has allowed such an evaluation to take place.

132 citations
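The evaluation design described above, training on previously acquired data and then predicting unseen cases acquired elsewhere, can be sketched in a few lines. Everything below uses hypothetical stand-in data and a generic classifier, not the eTUMOUR pipeline or its data:

# Hedged sketch of external validation only: fit a classifier on spectra
# acquired earlier at one center, score it on unseen cases from another.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical stand-ins: rows are MR spectra, labels are tumor types.
X_center_a = rng.normal(size=(100, 30))
y_center_a = rng.integers(0, 3, 100)
X_center_b = rng.normal(size=(40, 30))
y_center_b = rng.integers(0, 3, 40)

clf = LinearDiscriminantAnalysis().fit(X_center_a, y_center_a)  # past data
print("accuracy on unseen center:", clf.score(X_center_b, y_center_b))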

Journal ArticleDOI
TL;DR: It is demonstrated that binary LS-SVMs can be extended to a multiclass classifier system obtaining class probabilities by Bayesian techniques and pairwise coupling.

92 citations
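The pairwise-coupling step mentioned in the summary above can be illustrated as follows: given pairwise probability estimates r[i, j] ≈ P(y = i | y in {i, j}) from the binary classifiers, multiclass probabilities are recovered by a small constrained least-squares solve. This sketch follows the coupling method of Wu, Lin and Weng (2004); the paper's exact Bayesian variant may differ:

# Sketch of pairwise coupling: turn pairwise probabilities r[i, j] into a
# single vector of multiclass probabilities p, by minimizing
# sum_{i != j} (r[j, i] * p[i] - r[i, j] * p[j])^2 subject to sum(p) = 1.
import numpy as np

def pairwise_coupling(r):
    k = r.shape[0]
    Q = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if i == j:
                Q[i, i] = sum(r[s, i] ** 2 for s in range(k) if s != i)
            else:
                Q[i, j] = -r[j, i] * r[i, j]
    # KKT linear system for the equality-constrained least squares.
    A = np.zeros((k + 1, k + 1))
    A[:k, :k] = Q
    A[:k, k] = 1.0   # Lagrange multiplier column
    A[k, :k] = 1.0   # sum-to-one constraint row
    b = np.zeros(k + 1)
    b[k] = 1.0
    p = np.linalg.solve(A, b)[:k]
    p = np.clip(p, 0.0, None)      # guard against tiny negatives
    return p / p.sum()

# Toy pairwise estimates for 3 classes (r[i, j] + r[j, i] = 1).
r = np.array([[0.0, 0.7, 0.8],
              [0.3, 0.0, 0.6],
              [0.2, 0.4, 0.0]])
print(pairwise_coupling(r))  # class probabilities summing to 1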

Journal ArticleDOI
TL;DR: This study assesses intra- and interobserver agreement of routinely performed measurements, crown–rump length (CRL) and mean gestational sac diameter (MSD), used to assess the likelihood of miscarriage in the first trimester of pregnancy with transvaginal sonography.
Abstract: Objectives: To assess intra- and interobserver agreement of routinely performed measurements – crown–rump length (CRL) and mean gestational sac diameter (MSD) – for assessing the likelihood of miscarriage in the first trimester of pregnancy using transvaginal sonography.

Methods: A cross-sectional study of CRL and gestational sac measurements in first-trimester pregnancies was conducted in a fetal medicine referral center with a predominantly Caucasian population. Gestational age ranged from 6 to 9 weeks. All patients underwent a transvaginal ultrasound examination using a high-resolution ultrasound machine. Two measurements of CRL and measurements of three diameters of the gestational sac were obtained by two observers. Agreement within and between observers for CRL and between observers for MSD was analyzed using 95% prediction intervals, Bland–Altman plots with 95% limits of agreement and the intraclass correlation coefficient (ICC).

Results: In total, 54 patients were included in the study, with measurements obtained by both observers in 44 of these. Intra- and interobserver ICCs were high for CRL measurements, with values of 0.992 and 0.993 for intraobserver agreement and 0.993 for interobserver agreement. For the MSD, the interobserver ICC was 0.952. Limits of agreement were ±8.91% and ±11.37% for intraobserver agreement of CRL and ±14.64% for interobserver agreement of CRL. For MSD, the interobserver limits of agreement were ±18.78%. For an MSD measurement of 20 mm by the first observer, the prediction interval for the second observer was 16.8–24.5 mm. For a CRL measurement of 6 mm, the prediction interval for the second observer was 5.4–6.7 mm.

Conclusion: For dating purposes, there is reasonable reproducibility of CRL measurements using transvaginal ultrasonography at 6–9 weeks' gestation. When diagnosing miscarriage based on measurements of CRL, care must be taken for values close to any decision boundary. The higher interobserver variability that we observed for MSD has implications for the diagnosis of miscarriage based on this measurement in the absence of a visible embryo or yolk sac. Copyright © 2011 ISUOG. Published by John Wiley & Sons, Ltd.

81 citations
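The 95% limits of agreement reported in the abstract above follow the standard Bland–Altman recipe: mean difference ± 1.96 standard deviations of the paired differences, here expressed as percentages of the pairwise mean. A minimal sketch with made-up measurements (not the study's data):

# Sketch of the Bland-Altman limits-of-agreement computation; the
# measurements below are illustrative only.
import numpy as np

def limits_of_agreement(obs1, obs2):
    """95% limits of agreement for paired measurements, as percentages
    of the pairwise mean (the abstract reports percentage limits)."""
    obs1, obs2 = np.asarray(obs1, float), np.asarray(obs2, float)
    pct_diff = 100.0 * (obs1 - obs2) / ((obs1 + obs2) / 2.0)
    bias = pct_diff.mean()
    half_width = 1.96 * pct_diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical CRL measurements (mm) by two observers on the same patients.
crl_obs1 = [6.1, 7.4, 9.0, 12.2, 15.8, 20.1]
crl_obs2 = [5.9, 7.7, 8.8, 12.6, 15.2, 20.9]
bias, (lo, hi) = limits_of_agreement(crl_obs1, crl_obs2)
print(f"bias {bias:.2f}%, 95% limits of agreement ({lo:.2f}%, {hi:.2f}%)")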


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods and sparse kernel machines, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: Variational inference, as discussed by the authors, approximates probability densities through optimization; it has been used in many applications and tends to be faster than classical methods such as Markov chain Monte Carlo sampling.
Abstract: One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this article, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find a member of that family which is close to the target density. Closeness is measured by Kullback–Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this article is to catalyze statistical research on this class of algorithms.

3,421 citations
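The optimization described in this abstract rests on one identity: because the log evidence log p(x) is constant in q, minimizing the Kullback–Leibler divergence from q(z) to the posterior p(z | x) is equivalent to maximizing the evidence lower bound (ELBO). Written in LaTeX, with z the latent variables and x the data:

% log evidence = KL divergence + ELBO, so minimizing the KL term over a
% family of densities q is the same as maximizing the ELBO.
\log p(x) = \mathrm{KL}\big(q(z)\,\|\,p(z \mid x)\big)
          + \underbrace{\mathbb{E}_{q}\big[\log p(x, z) - \log q(z)\big]}_{\mathrm{ELBO}(q)}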

Journal ArticleDOI

3,152 citations

Journal ArticleDOI
TL;DR: This article discusses how combining standard clinical-pathological markers with the information provided by these genomic entities might deepen understanding of the biological complexity of this disease, increase the efficacy of current and novel therapies, and ultimately improve outcomes for breast cancer patients.

1,236 citations

Journal ArticleDOI
TL;DR: Variational inference (VI), a method from machine learning that approximates probability densities through optimization, is reviewed, and a variant that uses stochastic optimization to scale up to massive data is derived.
Abstract: One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.

852 citations
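The full example mentioned in this abstract, a Bayesian mixture of Gaussians, is fit by coordinate ascent variational inference (CAVI) with two closed-form update loops. A minimal sketch under simplifying assumptions (univariate components, unit observation variance, a N(0, sigma2) prior on each component mean), mirroring the structure of that example rather than reproducing any code from the paper:

# Sketch of CAVI for a Bayesian mixture of K univariate Gaussians with
# unit observation variance and prior mu_k ~ N(0, sigma2). Illustrative only.
import numpy as np

def cavi_gmm(x, K, sigma2=10.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    m = rng.normal(size=K)          # variational means of q(mu_k)
    s2 = np.ones(K)                 # variational variances of q(mu_k)
    for _ in range(iters):
        # Update responsibilities q(c_i) = Categorical(phi_i):
        # phi_ik proportional to exp(E[mu_k] x_i - E[mu_k^2] / 2).
        log_phi = np.outer(x, m) - 0.5 * (s2 + m**2)
        log_phi -= log_phi.max(axis=1, keepdims=True)   # stabilize exp
        phi = np.exp(log_phi)
        phi /= phi.sum(axis=1, keepdims=True)
        # Update q(mu_k) = N(m_k, s2_k) in closed form.
        denom = 1.0 / sigma2 + phi.sum(axis=0)
        m = (phi * x[:, None]).sum(axis=0) / denom
        s2 = 1.0 / denom
    return m, s2, phi

# Toy data drawn from three well-separated components.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-5, 1, 100), rng.normal(0, 1, 100),
                    rng.normal(5, 1, 100)])
m, s2, phi = cavi_gmm(x, K=3)
print("inferred component means:", np.sort(m))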