
Preetum Nakkiran

Researcher at Harvard University

Publications: 55
Citations: 2212

Preetum Nakkiran is an academic researcher at Harvard University. His research spans computer science and artificial neural networks. He has an h-index of 15 and has co-authored 40 publications receiving 1128 citations. His previous affiliations include Google and the University of California, Berkeley.

Papers
Proceedings Article

Deep Double Descent: Where Bigger Models and More Data Hurt

TL;DR: The authors define a new complexity measure, called effective model complexity, and conjecture a generalized double descent with respect to it; this notion identifies regimes where increasing the number of training samples actually hurts test performance.
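As an illustration of the phenomenon the TL;DR describes, here is a minimal sketch of model-wise double descent in a random-features regression toy model. This is not the paper's experimental setup; the teacher model, noise level, and width grid are all illustrative assumptions.

```python
# Minimal sketch (an assumption, not the paper's experiments): as width
# crosses the interpolation threshold (n_features ~ n_train), the test error
# of a minimum-norm fit typically spikes, then falls again -- double descent.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 20

# Ground-truth linear teacher with label noise on the training set.
w_star = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ w_star + 0.5 * rng.normal(size=n_train)
y_test = X_test @ w_star

def random_features(X, W):
    """Random ReLU features: phi(x) = max(W^T x, 0)."""
    return np.maximum(X @ W, 0.0)

for n_features in [10, 50, 90, 100, 110, 200, 500, 2000]:
    W = rng.normal(size=(d, n_features)) / np.sqrt(d)
    Phi_tr, Phi_te = random_features(X_train, W), random_features(X_test, W)
    # Minimum-norm least squares (interpolates once n_features >= n_train).
    coef = np.linalg.pinv(Phi_tr) @ y_train
    test_mse = np.mean((Phi_te @ coef - y_test) ** 2)
    print(f"width={n_features:5d}  test MSE={test_mse:8.3f}")
```

Printing rather than plotting keeps the sketch self-contained; the test MSE is generally non-monotone in width, peaking near width = 100 here.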
Proceedings Article (DOI)

Having your cake and eating it too: jointly optimal erasure codes for I/O, storage and network-bandwidth

TL;DR: This paper designs erasure codes that are simultaneously optimal in terms of I/O, storage, and network bandwidth, building on a class of powerful practical codes called product-matrix MSR codes.
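For background on the erasure-coding idea underlying this line of work, here is a minimal sketch using plain XOR parity. This is emphatically not the product-matrix-MSR construction (which additionally optimizes repair bandwidth and I/O); function names are illustrative.

```python
# Minimal erasure-coding sketch (XOR parity, NOT product-matrix-MSR):
# k data blocks plus one parity block tolerate the loss of any single block.
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def encode(data_blocks):
    """Append one parity block equal to the XOR of all data blocks."""
    return data_blocks + [xor_blocks(data_blocks)]

def recover(blocks, lost_index):
    """Rebuild the single missing block by XOR-ing all survivors."""
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    return xor_blocks(survivors)

data = [b"abcd", b"efgh", b"ijkl"]
coded = encode(data)
assert recover(coded, 1) == b"efgh"  # lose block 1, rebuild it from the rest
```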
Posted Content

SGD on Neural Networks Learns Functions of Increasing Complexity

TL;DR: Key to the work is a new measure, based on conditional mutual information, of how well one classifier explains the performance of another; this measure helps explain why SGD-learned classifiers tend to generalize well even in the over-parameterized regime.
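As a rough illustration of the quantity the TL;DR mentions, here is a sketch of a plug-in estimate of the conditional mutual information I(F; G | Y) between two classifiers' predictions given the true label. This is an assumption about the general quantity, not the paper's exact estimator, and all names are illustrative.

```python
# Sketch (not the paper's estimator): plug-in conditional mutual information
# I(F; G | Y) = sum p(f,g,y) * log[ p(f,g,y) p(y) / (p(f,y) p(g,y)) ],
# estimated by counting over discrete prediction/label samples.
import numpy as np
from collections import Counter

def conditional_mutual_information(f, g, y):
    """Plug-in estimate of I(F; G | Y) in nats from discrete samples."""
    n = len(y)
    p_fgy = Counter(zip(f, g, y))
    p_fy = Counter(zip(f, y))
    p_gy = Counter(zip(g, y))
    p_y = Counter(y)
    cmi = 0.0
    for (fi, gi, yi), count in p_fgy.items():
        joint = count / n
        cmi += joint * np.log(
            (joint * (p_y[yi] / n))
            / ((p_fy[(fi, yi)] / n) * (p_gy[(gi, yi)] / n))
        )
    return cmi

# Toy usage: G copies F on half the points, so I(F; G | Y) is well above zero.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=10_000)
f = np.where(rng.random(10_000) < 0.8, y, 1 - y)  # F agrees with Y 80% of the time
g = np.where(rng.random(10_000) < 0.5, f, rng.integers(0, 2, size=10_000))
print(conditional_mutual_information(f, g, y))
```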
Posted Content

Deep Double Descent: Where Bigger Models and More Data Hurt

TL;DR: In this paper, the authors show that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon in which, as model size increases, performance first gets worse and then gets better.