Institution

Institut de Mathématiques de Toulouse

Facility
About: Institut de Mathématiques de Toulouse is a research facility based in Toulouse, France. It is known for its research contributions in the topics: Estimator & Bounded function. The organization has 723 authors who have published 3373 publications receiving 50822 citations. The organization is also known as: Institut de Mathematiques de Toulouse & Toulouse Mathematics Institute.


Papers
Book
27 Nov 2013
TL;DR: This book develops the analysis and geometry of Markov semigroups around three model functional inequalities (Poincaré, logarithmic Sobolev and Sobolev) and their isoperimetric and transportation counterparts, with appendices on semigroups of bounded operators on a Banach space, stochastic calculus, and basic differential and Riemannian geometry.
Abstract (contents): Introduction. Part I, Markov semigroups, basics and examples: 1. Markov semigroups; 2. Model examples; 3. General setting. Part II, Three model functional inequalities: 4. Poincaré inequalities; 5. Logarithmic Sobolev inequalities; 6. Sobolev inequalities. Part III, Related functional, isoperimetric and transportation inequalities: 7. Generalized functional inequalities; 8. Capacity and isoperimetry-type inequalities; 9. Optimal transportation and functional inequalities. Part IV, Appendices: A. Semigroups of bounded operators on a Banach space; B. Elements of stochastic calculus; C. Some basic notions in differential and Riemannian geometry. Notations and list of symbols; Bibliography; Index.
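
For context, two of the model inequalities studied in Part II can be written as follows; this is the standard formulation in terms of the invariant measure μ and the carré du champ operator Γ, sketched here from the general literature rather than quoted from the book.

```latex
% Two model functional inequalities for a Markov semigroup (background sketch):
% \mu denotes the invariant measure and \Gamma the carre du champ operator,
% with \operatorname{Ent}_\mu(g) = \int g\log g\,d\mu
%   - \bigl(\int g\,d\mu\bigr)\log\int g\,d\mu.
\begin{align*}
  \text{Poincar\'e:}          &\quad \operatorname{Var}_{\mu}(f) \;\le\; C_P \int \Gamma(f)\, d\mu, \\
  \text{Logarithmic Sobolev:} &\quad \operatorname{Ent}_{\mu}\!\left(f^{2}\right) \;\le\; 2\,C_{LS} \int \Gamma(f)\, d\mu.
\end{align*}
```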

1,169 citations

Journal ArticleDOI
TL;DR: This article considers the Dirichlet Laplacian −∆ on a curved quantum guide in R^n (n = 2, 3) whose reference curve is asymptotically straight, and gives uniqueness results for the inverse problem of reconstructing the curvature from either observations of spectral data or a bootstrapping method.
Abstract: In this paper, we consider the Dirichlet Laplacian operator −∆ on a curved quantum guide in R^n (n = 2, 3) with an asymptotically straight reference curve. We give uniqueness results for the inverse problem associated with the reconstruction of the curvature, using either observations of spectral data or a bootstrapping method. Keywords: Inverse Problem, Quantum Guide, Curvature.
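
For background, the link between curvature and spectrum that makes this inverse problem natural comes from the standard thin-waveguide reduction, in which the curvature acts as an attractive potential; the operator below is a sketch from the quantum-waveguide literature, not a formula taken from this paper.

```latex
% Thin-waveguide heuristic (background, not from the paper): for a planar guide
% of small width d built around a curve with curvature \kappa(s), the Dirichlet
% Laplacian, after subtracting the transverse threshold \pi^2/d^2, is
% effectively governed by the one-dimensional operator
\[
  h \;=\; -\,\frac{d^{2}}{ds^{2}} \;-\; \frac{\kappa(s)^{2}}{4}
  \qquad \text{on } L^{2}(\mathbb{R}),
\]
% so spectral data carry information about the curvature \kappa.
```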

1,148 citations

Journal Article
TL;DR: This work introduces generic notions of complexity for the two dominant frameworks considered in the literature: fixed-budget and fixed-confidence settings, and provides the first known distribution-dependent lower bound on the complexity that involves information-theoretic quantities and holds when m ≥ 1 under general assumptions.
Abstract: The stochastic multi-armed bandit model is a simple abstraction that has proven useful in many different contexts in statistics and machine learning. Whereas the achievable limit in terms of regret minimization is now well known, our aim is to contribute to a better understanding of the performance in terms of identifying the m best arms. We introduce generic notions of complexity for the two dominant frameworks considered in the literature: fixed-budget and fixed-confidence settings. In the fixed-confidence setting, we provide the first known distribution-dependent lower bound on the complexity that involves information-theoretic quantities and holds when m ≥ 1 under general assumptions. In the specific case of two-armed bandits, we derive refined lower bounds in both the fixed-confidence and fixed-budget settings, along with matching algorithms for Gaussian and Bernoulli bandit models. These results show in particular that the complexity of the fixed-budget setting may be smaller than the complexity of the fixed-confidence setting, contradicting the familiar behavior observed when testing fully specified alternatives. In addition, we provide improved sequential stopping rules that have guaranteed error probabilities and shorter average running times. The proofs rely on two technical results that are of independent interest: a deviation lemma for self-normalized sums (Lemma 7) and a novel change of measure inequality for bandit models (Lemma 1).
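
The change-of-measure inequality invoked at the end is usually stated along the following lines in the bandit literature; this is a sketch for context, not a quotation of the paper's Lemma 1.

```latex
% Generic change-of-measure inequality for bandit models (background sketch):
% \nu and \nu' are two bandit models, N_a(\tau) the number of draws of arm a up
% to the stopping time \tau, E any event determined by the observations up to
% \tau, and d(x,y) the binary relative entropy.
\[
  \sum_{a=1}^{K} \mathbb{E}_{\nu}\!\bigl[N_a(\tau)\bigr]\,
    \mathrm{KL}\!\left(\nu_a,\nu'_a\right)
  \;\ge\;
  d\!\left(\mathbb{P}_{\nu}(E),\,\mathbb{P}_{\nu'}(E)\right),
  \qquad
  d(x,y) = x\log\frac{x}{y} + (1-x)\log\frac{1-x}{1-y}.
\]
% Distribution-dependent lower bounds on the sample complexity follow by
% choosing E as an error event and \nu' as an alternative model in which the
% set of m best arms differs.
```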

1,061 citations

Book ChapterDOI
TL;DR: In this article, a review of global sensitivity analysis methods for model output is presented within a complete methodological framework, in which three kinds of methods are distinguished: screening (coarse sorting of the most influential inputs among a large number), measures of importance (quantitative sensitivity indices), and deep exploration of the model behaviour (measuring the effects of inputs over their whole range of variation).
Abstract: This chapter reviews, within a complete methodological framework, various methods for global sensitivity analysis of model output. Numerous statistical and probabilistic tools (regression, smoothing, tests, statistical learning, Monte Carlo, …) aim at determining the model input variables which contribute most to a quantity of interest depending on the model output. This quantity can be, for instance, the variance of an output variable. Three kinds of methods are distinguished: screening (coarse sorting of the most influential inputs among a large number), measures of importance (quantitative sensitivity indices), and deep exploration of the model behaviour (measuring the effects of inputs over their whole range of variation). A progressive application methodology is illustrated on a pedagogical example. A synthesis is given that places each method along several axes, mainly the cost in number of model evaluations, the model complexity, and the nature of the information provided.
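
As a concrete instance of the "measures of importance" category, the variance-based (Sobol') sensitivity indices are commonly defined as follows; this is the standard definition, given here for context rather than quoted from the chapter.

```latex
% Variance-based (Sobol') sensitivity indices for Y = f(X_1, ..., X_d) with
% independent inputs (standard definition, not quoted from the chapter):
\[
  S_i \;=\; \frac{\operatorname{Var}\bigl(\mathbb{E}[\,Y \mid X_i\,]\bigr)}{\operatorname{Var}(Y)},
  \qquad
  S_{T_i} \;=\; 1 \;-\;
    \frac{\operatorname{Var}\bigl(\mathbb{E}[\,Y \mid X_{\sim i}\,]\bigr)}{\operatorname{Var}(Y)},
\]
% where X_{\sim i} denotes all inputs except X_i: S_i measures the first-order
% (main) effect of X_i, while the total index S_{T_i} also accounts for all
% interactions involving X_i.
```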

744 citations

Journal ArticleDOI
TL;DR: A simple extension of a sparse PLS exploratory approach is proposed to perform variable selection in a multiclass classification framework and has a classification performance similar to other wrapper or sparse discriminant analysis approaches on public microarray and SNP data sets.
Abstract: Variable selection on high-throughput biological data, such as gene expression or single nucleotide polymorphisms (SNPs), is essential to select relevant information and, therefore, to better characterize diseases or assess genetic structure. There are different ways to perform variable selection in large data sets. Statistical tests are commonly used to identify differentially expressed features for explanatory purposes, whereas machine learning wrapper approaches can be used for predictive purposes. In the case of multiple highly correlated variables, another option is to use multivariate exploratory approaches to give more insight into cell biology, biological pathways or complex traits. A simple extension of a sparse PLS exploratory approach, sPLS-DA, is proposed to perform variable selection in a multiclass classification framework. sPLS-DA has a classification performance similar to other wrapper or sparse discriminant analysis approaches on public microarray and SNP data sets. More importantly, sPLS-DA is clearly competitive in terms of computational efficiency and superior in terms of interpretability of the results via valuable graphical outputs. sPLS-DA is available in the R package mixOmics, which is dedicated to the analysis of large biological data sets.
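
The reference implementation is the R package mixOmics mentioned above; purely as an illustration of the underlying idea (PLS on indicator-coded classes, followed by variable selection on the loadings), here is a minimal Python sketch using scikit-learn. It is not the sPLS-DA implementation: the l1-penalized sparse loadings of sPLS-DA are imitated by simple hard thresholding.

```python
# Minimal sketch of the PLS-DA idea with ad-hoc variable selection.
# NOT the sPLS-DA algorithm of mixOmics: sparsity is only imitated here by
# keeping the variables with the largest absolute PLS loadings.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 60, 500, 3
X = rng.normal(size=(n_samples, n_features))      # e.g. gene expression matrix
y = rng.integers(0, n_classes, size=n_samples)    # class labels
Y = np.eye(n_classes)[y]                          # one-hot (dummy) coding

pls = PLSRegression(n_components=2)
pls.fit(X, Y)

# "Select" the 20 variables with the largest absolute loading on component 1
# (hard thresholding stands in for the l1 penalty used by sPLS-DA).
selected = np.argsort(np.abs(pls.x_loadings_[:, 0]))[::-1][:20]
print("selected variable indices:", selected)

# Classify by assigning each sample to the class with the largest predicted
# indicator value.
pred = pls.predict(X).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```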

672 citations


Authors

Showing the top entries of 748 author results, sorted by h-index

Name                        H-index   Papers   Citations
Pierre Degond               55        387      11773
Jean B. Lasserre            53        459      15913
Michel Ledoux               44        135      13193
Gilles Celeux               42        151      11591
Philippe Vieu               38        134      6259
Teodor Banica               37        188      4539
Dominikus Noll              36        157      4075
Philippe Laurençot          36        277      5011
Pierre Raphaël              36        92       4184
Charles Bordenave           36        124      3082
Kim-Anh Lê Cao              34        134      6296
Vincent Guedj               33        108      4046
Jean-Michel Roquejoffre     32        120      3595
Laurent Manivel             32        144      3311
Jean-Marc Schlenker         30        131      2492
Network Information
Related Institutions (5)
Courant Institute of Mathematical Sciences
7.7K papers, 439.7K citations

91% related

École Polytechnique
39.2K papers, 1.2M citations

86% related

École normale supérieure de Lyon
15.5K papers, 585.1K citations

83% related

Institut Universitaire de France
9K papers, 309.8K citations

82% related

Performance Metrics
No. of papers from the Institution in previous years
Year    Papers
2023    7
2022    47
2021    258
2020    289
2019    301
2018    274