Robert Tibshirani
Researcher at Stanford University
Publications - 620
Citations - 359457
Robert Tibshirani is an academic researcher from Stanford University. The author has contributed to research on topics including Lasso (statistics) and gene expression profiling. The author has an h-index of 147 and has co-authored 593 publications receiving 326580 citations. Previous affiliations of Robert Tibshirani include the University of Toronto and the University of California.
Papers
Journal ArticleDOI
The Radiogenomic Risk Score: Construction of a Prognostic Quantitative, Noninvasive Image-based Molecular Assay for Renal Cell Carcinoma.
Neema Jamshidi, Eric Jonasch, Matthew A. Zapala, Ronald L. Korn, Lejla Aganovic, Hongjuan Zhao, Raviprakash T. Sitaram, Robert Tibshirani, Sudeep Banerjee, James D. Brooks, Börje Ljungberg, Michael D. Kuo +12 more
TL;DR: A SOMA for the CCRCC-specific SPC prognostic gene signature, predictive of disease-specific survival and independent of stage, was constructed and validated, confirming that SOMA construction is feasible.
Posted Content
Exact Post-Selection Inference for Sequential Regression Procedures
TL;DR: In this paper, the authors propose new inference tools for forward stepwise regression, least angle regression, and the lasso, which can be used to perform valid inference after any selection event that can be characterized as y falling into a polyhedral set.
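Forward stepwise regression, one of the selection procedures this inference framework applies to, can be sketched in a few lines of NumPy. This is an illustrative greedy-selection sketch, not the paper's inference machinery; the function name is mine.

```python
import numpy as np

def forward_stepwise(X, y, k):
    # Greedy forward stepwise selection: at each step, add the variable
    # most correlated with the current residual, then refit least squares
    # on the active set. Post-selection inference asks how to do valid
    # inference on beta after such a data-driven selection.
    active, r = [], y.astype(float).copy()
    for _ in range(k):
        cors = np.abs(X.T @ r)
        cors[active] = -np.inf          # never re-select an active variable
        active.append(int(np.argmax(cors)))
        Xa = X[:, active]
        beta = np.linalg.lstsq(Xa, y, rcond=None)[0]
        r = y - Xa @ beta               # residual for the next step
    return active, beta
```

The key point of the paper is that the selection event ("variable 2 entered first") can be written as the response vector y falling into a polyhedral set, which makes exact conditional inference tractable.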
Journal ArticleDOI
Discussion: The Dantzig selector: Statistical estimation when p is much larger than n
TL;DR: In this paper, the choice of predictor variables in large-scale linear models is discussed and the relationship between the Dantzig Selector (DS) and the Lasso algorithm is explored.
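The lasso side of that comparison can be sketched with plain coordinate descent and soft-thresholding. This is a minimal NumPy sketch of the lasso, not the Dantzig selector and not the discussion's own code; the function names are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    # S(z, t) = sign(z) * max(|z| - t, 0), the lasso's elementwise shrinkage
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Coordinate descent for: minimize (1/2n)||y - Xb||^2 + lam * ||b||_1.
    # Each pass updates one coefficient against the partial residual.
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]       # residual excluding variable j
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return b
```

Both the lasso and the Dantzig selector shrink coefficients toward zero via an L1 mechanism; the discussion explores when their solution paths coincide.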
Book ChapterDOI
Additive Models, Trees, and Related Methods
TL;DR: This chapter begins the discussion of some specific methods for supervised learning by describing five related techniques: generalized additive models, trees, multivariate adaptive regression splines, the patient rule induction method, and hierarchical mixtures of experts.
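The tree-based methods in that chapter can be illustrated with the smallest possible regression tree: a one-split "stump" fit by exhaustive search over split points. This is a toy NumPy sketch under my own naming, not the book's implementation.

```python
import numpy as np

def fit_stump(x, y):
    # Fit a one-split regression tree: try every candidate split point s,
    # predict the mean of y on each side, and keep the split with the
    # smallest total squared error.
    best = (np.inf, None, 0.0, 0.0)
    for s in np.unique(x)[:-1]:
        left, right = y[x <= s], y[x > s]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    return best[1:]  # (split point, left-leaf mean, right-leaf mean)
```

Full tree methods such as CART grow many such splits recursively and then prune; the stump is the single-split special case.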
Discussion of Boosting Papers
TL;DR: His results imply that boosting-like methods can reasonably be expected to converge to Bayes classifiers under sufficient regularity conditions (such as the requirement that trees with at least p + 1 terminal nodes are used, where p is the number of variables in the model).
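The boosting behavior discussed here can be sketched with a toy gradient-boosting loop under squared loss: each round fits a one-split tree ("stump") to the current residuals and adds a shrunken copy to the ensemble. This is an illustrative, self-contained NumPy sketch, not the method analyzed in the papers; function names are mine.

```python
import numpy as np

def best_stump(x, r):
    # Exhaustive search for the one-split regression tree fit to residuals r.
    best = (np.inf, None, 0.0, 0.0)
    for s in np.unique(x)[:-1]:
        l, rt = r[x <= s], r[x > s]
        sse = ((l - l.mean()) ** 2).sum() + ((rt - rt.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, s, l.mean(), rt.mean())
    return best[1:]

def boost(x, y, rounds=100, lr=0.1):
    # Gradient boosting for squared loss: residuals are the negative
    # gradient, so each round fits a stump to y - pred and adds a
    # learning-rate-scaled step to the running prediction.
    pred = np.zeros_like(y, dtype=float)
    model = []
    for _ in range(rounds):
        s, lm, rm = best_stump(x, y - pred)
        pred += lr * np.where(x <= s, lm, rm)
        model.append((s, lr * lm, lr * rm))
    return pred, model
```

The shrinkage factor lr is what makes convergence gradual; the consistency results discussed concern whether such ensembles approach the Bayes-optimal classifier as the number of rounds grows.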