An introduction to statistical learning
Citations
Cites background or methods from "An introduction to statistical learning"
...ont is used to find parsimonious models in a large family of candidate models. In this work, we re-envision the dynamical system discovery problem from an entirely new perspective of sparse regression [42, 14, 18] and compressed sensing [12, 8, 9, 7, 2, 43]. In particular, we leverage the fact that most physical systems have only a few relevant terms that define the dynamics, making the governing equations spar...
[...]
...ntributions in each element. However, if sparsity of $\xi$ is desired, so that most of the entries are zero, then it is possible to add an L1 regularization term to the regression, resulting in the LASSO [14, 18, 42]:

$$\xi = \operatorname*{argmin}_{\xi'} \|\Theta\xi' - y\|_2 + \lambda\|\xi'\|_1 \tag{2}$$

The parameter $\lambda$ weights the sparsity constraint. This formulation is closely related to the compressive sensing framework, which allows for the sparse vector $\xi$ to...
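A minimal sketch of the sparse regression in Eq. (2), solved with ISTA (proximal gradient descent) on the common squared-loss variant of the LASSO. The candidate-term matrix `Theta`, the weight `lam`, and the toy data below are illustrative assumptions, not part of the cited work:

```python
import numpy as np

def lasso_ista(Theta, y, lam=0.1, n_iter=500):
    """Approximate argmin_xi ||Theta @ xi - y||_2^2 + lam * ||xi||_1 via ISTA."""
    # Step size: inverse Lipschitz constant of the quadratic term's gradient.
    step = 1.0 / np.linalg.norm(Theta, 2) ** 2
    xi = np.zeros(Theta.shape[1])
    for _ in range(n_iter):
        grad = Theta.T @ (Theta @ xi - y)          # gradient of the LS term
        z = xi - step * grad                        # gradient step
        # Soft-thresholding: the proximal operator of the L1 penalty.
        xi = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return xi

# Sparse ground truth: only 2 of 10 candidate terms are active.
rng = np.random.default_rng(0)
Theta = rng.normal(size=(100, 10))
xi_true = np.zeros(10)
xi_true[2], xi_true[7] = 1.5, -2.0
y = Theta @ xi_true

xi_hat = lasso_ista(Theta, y, lam=0.5)
```

The L1 penalty drives the inactive coefficients toward exactly zero, which is why the LASSO is used here to select the few relevant terms among many candidates.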
[...]
Cites background from "An introduction to statistical learning"
...This is the main reason why a training-testing approach is used when dealing with ML problems [162, 163]....
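A minimal sketch of the training-testing approach mentioned above: hold out part of the data, fit on the rest, and estimate generalization error on the unseen portion. The synthetic data and split sizes are illustrative assumptions:

```python
import numpy as np

# Synthetic regression data (assumed for illustration).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)

# Random 75/25 train-test split.
idx = rng.permutation(len(y))
train, test = idx[:150], idx[150:]

# Fit a linear model on the training set only.
w_hat, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

# Training error can be optimistic; the held-out test error
# estimates performance on unseen data.
train_mse = np.mean((X[train] @ w_hat - y[train]) ** 2)
test_mse = np.mean((X[test] @ w_hat - y[test]) ** 2)
```

Evaluating on data the model never saw is what distinguishes genuine generalization from memorization of the training set.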
[...]
...Put differently, a linear model is deemed transparent because its error surface can be understood and reasoned about, allowing the user to understand how the model will act in every situation it may face [163]....
[...]