scispace - formally typeset

Trevor Hastie

Researcher at Stanford University

Publications -  428
Citations -  230646

Trevor Hastie is an academic researcher at Stanford University. He has contributed to research on topics including the lasso (statistics) and feature selection. He has an h-index of 124 and has co-authored 412 publications receiving 202,592 citations. Previous affiliations of Trevor Hastie include the University of Waterloo and the University of Toronto.

Papers
Proceedings Article

The Entire Regularization Path for the Support Vector Machine

TL;DR: The authors argue that the choice of the SVM cost parameter can be critical, and derive an algorithm that fits the entire path of SVM solutions for every value of the cost parameter with essentially the same computational cost as fitting a single SVM model.
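A brute-force way to see why the cost parameter matters is to refit the SVM at several values of C and watch how many points end up on or inside the margin. This is exactly the repeated work the paper's exact-path algorithm avoids. The sketch below is a hypothetical numpy-only illustration using Pegasos-style sub-gradient descent, not the paper's path algorithm:

```python
import numpy as np

def fit_linear_svm(X, y, C, epochs=5000):
    # Pegasos-style full-batch sub-gradient descent on
    #   (lam/2)*||w||^2 + (1/n)*sum_i max(0, 1 - y_i * w.x_i),
    # with lam = 1/(C*n), so larger C makes margin violations costlier.
    # No intercept: the toy boundary below passes through the origin.
    n, p = X.shape
    lam = 1.0 / (C * n)
    w = np.zeros(p)
    for t in range(1, epochs + 1):
        eta = 1.0 / (lam * t)               # standard Pegasos step size
        viol = y * (X @ w) < 1              # points on/inside the margin
        grad = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        w -= eta * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=200))

n_margin = {C: int((y * (X @ fit_linear_svm(X, y, C)) < 1).sum())
            for C in (0.01, 1.0)}
print(n_margin)   # smaller C -> wider margin -> more points inside it
```

Each extra value of C costs a full refit here; the paper shows the whole solution path in C can be traced for roughly the price of one such fit.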
Book Chapter

Linear Methods for Classification

TL;DR: This chapter revisits the classification problem and focuses on linear methods for classification: while the boundaries of the predicted-class regions can in general be rough or smooth depending on the prediction function, for this class of procedures the decision boundaries are linear.
Journal Article

Effective degrees of freedom: a flawed metaphor

TL;DR: This work exhibits and theoretically explores various fitting procedures for which the degrees of freedom is not monotonic in the model complexity parameter, and can exceed the total dimension of the ambient space even in very simple settings.
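The "effective degrees of freedom" in question is the covariance definition, df = (1/σ²) Σᵢ Cov(ŷᵢ, yᵢ). As a sanity check of that definition (not the paper's counterexamples, which use richer fitting procedures), the Monte-Carlo sketch below estimates it for ordinary least squares, where it should recover the number of fitted coefficients:

```python
import numpy as np

# Monte-Carlo estimate of df = sum_i Cov(yhat_i, y_i) / sigma^2 for OLS.
rng = np.random.default_rng(42)
n, p, sigma, reps = 50, 5, 1.0, 4000
X = rng.normal(size=(n, p))
mu = X @ rng.normal(size=p)                    # fixed true mean vector

Y = mu + sigma * rng.normal(size=(reps, n))    # reps independent responses
H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix (symmetric)
Yhat = Y @ H.T                                 # row-wise OLS fitted values

# Sample covariance of (yhat_i, y_i) across replications, summed over i.
df = ((Yhat - Yhat.mean(0)) * (Y - Y.mean(0))).sum() / (reps - 1) / sigma**2
print(round(df, 2))   # close to p = 5, i.e. trace of the hat matrix
```

For OLS, df = trace(H) = p exactly, so the estimate is monotone in model size; the paper's point is that for other procedures (e.g. best-subset selection) this quantity can behave far less intuitively.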
Posted Content

Learning interactions through hierarchical group-lasso regularization

Michael Lim, +1 more · 12 Aug 2013
TL;DR: A method for learning pairwise interactions in a manner that satisfies strong hierarchy: whenever an interaction is estimated to be nonzero, both of its associated main effects are also included in the model, yielding interpretable interaction models.
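To make the strong-hierarchy constraint concrete, here is a small hypothetical helper (not the paper's group-lasso algorithm) that checks whether a fitted sparse model obeys it, i.e. that every nonzero interaction coefficient is accompanied by both of its main effects:

```python
# Hypothetical check of the strong-hierarchy property on a fitted model,
# represented as coefficient dictionaries (names and structure assumed).
def satisfies_strong_hierarchy(main, interactions):
    """main: {variable: coef}; interactions: {(var_a, var_b): coef}."""
    return all(
        main.get(a, 0.0) != 0.0 and main.get(b, 0.0) != 0.0
        for (a, b), coef in interactions.items()
        if coef != 0.0
    )

# Obeys hierarchy: the x1:x2 interaction has both main effects in the model.
good = satisfies_strong_hierarchy({"x1": 1.2, "x2": -0.4}, {("x1", "x2"): 0.7})
# Violates it: x3 enters only through an interaction, with no main effect.
bad = satisfies_strong_hierarchy({"x1": 1.2}, {("x1", "x3"): 0.5})
print(good, bad)   # True False
```

The paper's contribution is a regularizer whose solutions satisfy this property by construction, rather than checking it after the fact.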
Journal Article

Assessing the significance of global and local correlations under spatial autocorrelation: A nonparametric approach

TL;DR: A Monte-Carlo method to test the correlation of two random fields when both are spatially autocorrelated. The approach permutes one of the variables to destroy its correlation with the other, then smooths and rescales the permuted variable so that its initial autocorrelation is maintained.
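The permute-and-compare core of such a test can be sketched in a few lines. This is a simplified illustration under an assumption of no autocorrelation; the paper's method additionally smooths and rescales the permuted variable to restore its spatial autocorrelation before recomputing the null correlations:

```python
import numpy as np

# Basic permutation test for the correlation of two variables (the
# spatial smoothing/scaling step of the paper's method is omitted).
rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)
y = x + rng.normal(size=n)                  # genuinely correlated with x

obs = np.corrcoef(x, y)[0, 1]               # observed correlation
null = np.array([np.corrcoef(x, rng.permutation(y))[0, 1]
                 for _ in range(999)])      # correlations after permuting y
pval = (1 + (np.abs(null) >= abs(obs)).sum()) / (999 + 1)
print(round(obs, 2), pval)                  # strong correlation, small p
```

With autocorrelated fields, this naive permutation null is too narrow (permutation destroys the autocorrelation along with the cross-correlation), which is precisely the problem the paper's smoothing-and-scaling step addresses.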