Michael Young
Researcher at Google
Publications - 6
Citations - 1977
Michael Young is an academic researcher at Google. He has contributed to research on topics including technical debt and memory footprint. He has an h-index of 5, having co-authored 5 publications that have received 1,503 citations.
Papers
Proceedings Article
Ad click prediction: a view from the trenches
H. Brendan McMahan, Gary Holt, D. Sculley, Michael Young, Dietmar Ebner, Julian Paul Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, Sharat Chikkerur, Dan Liu, Martin Wattenberg, Arnar Mar Hrafnkelsson, Tom Boulos, J. Kubica +15 more
TL;DR: The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of the challenges that arise when applying traditional machine learning methods in a complex dynamic system.
Proceedings Article
Hidden technical debt in Machine learning systems
D. Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo, Dan Dennison +9 more
TL;DR: The authors find that real-world ML systems commonly incur massive ongoing maintenance costs, and they explore several ML-specific risk factors to account for in system design.
Machine Learning: The High Interest Credit Card of Technical Debt
D. Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young +7 more
TL;DR: The goal of this paper is to highlight several machine-learning-specific risk factors and design patterns to be avoided or refactored where possible, including boundary erosion, entanglement, hidden feedback loops, undeclared consumers, data dependencies, changes in the external world, and a variety of system-level anti-patterns.
Proceedings Article
Large-Scale Learning with Less RAM via Randomization
TL;DR: In this paper, the weight vector is projected onto a coarse discrete set using randomized rounding, reducing memory usage by more than 50% during training and by up to 95% when making predictions from a fixed model, with almost no loss in accuracy.
Posted Content
Large-Scale Learning with Less RAM via Randomization
TL;DR: This work reduces the memory footprint of popular large-scale online learning methods by projecting the weight vector onto a coarse discrete set using randomized rounding, and proves that these memory-saving methods achieve regret guarantees similar to those of their exact variants.
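The core idea described above can be sketched in a few lines: round each weight to a coarse grid, choosing the grid point below or above at random so the result is unbiased in expectation. This is a minimal illustration, not the paper's exact encoding; the function name `randomized_round` and the grid resolution are illustrative, and the per-coordinate adaptive grids used in the actual work are omitted.

```python
import math
import random

def randomized_round(x, eps):
    """Round x to a multiple of eps, picking floor or ceiling at random
    so that the expected value of the result equals x (unbiased)."""
    lo = math.floor(x / eps) * eps
    p_up = (x - lo) / eps  # fractional distance to the next grid point
    return lo + eps if random.random() < p_up else lo

# Example: quantize a small weight vector to a grid of spacing 2**-3.
# Each coordinate then needs only enough bits to index the grid,
# rather than a full floating-point value.
random.seed(0)
weights = [0.137, -0.52, 0.9]
quantized = [randomized_round(w, 2.0 ** -3) for w in weights]
```

Because the rounding is unbiased, averaging many independent roundings of the same value recovers that value, which is what allows training and prediction to proceed with almost no loss in accuracy despite the coarser representation.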