Yaoliang Yu

Researcher at University of Waterloo

Publications: 113
Citations: 3338

Yaoliang Yu is an academic researcher at the University of Waterloo. His research spans topics including Robustness (computer science) and Estimator. He has an h-index of 25 and has co-authored 111 publications receiving 2742 citations. His previous affiliations include Fudan University and Carnegie Mellon University.

Papers
Journal Article

Petuum: A New Platform for Distributed Machine Learning on Big Data

TL;DR: This work proposes a general-purpose framework, Petuum, that systematically addresses data- and model-parallel challenges in large-scale ML, by observing that many ML programs are fundamentally optimization-centric and admit error-tolerant, iterative-convergent algorithmic solutions.
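The sketch below is a minimal, hypothetical illustration of the "error-tolerant, iterative-convergent" idea in this TL;DR: SGD that still converges while reading parameter snapshots a few steps stale, as under bounded-staleness consistency. It is not the Petuum API; the names (stale_read, STALENESS) and the toy regression setup are assumptions made for illustration.

```python
# Toy illustration (NOT the Petuum API) of error-tolerant,
# iterative-convergent updates: gradients computed from parameters
# up to STALENESS steps old, as under bounded-staleness consistency.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

STALENESS = 3        # a worker may see parameters up to 3 steps old
history = []         # one parameter snapshot per iteration
w = np.zeros(5)
lr = 0.05

def stale_read(t):
    """Return a parameter snapshot at most STALENESS steps old."""
    lag = int(rng.integers(0, STALENESS + 1))
    return history[max(0, t - lag)]

for t in range(300):
    history.append(w.copy())
    w_stale = stale_read(t)                 # error-tolerant: use a stale view
    i = rng.integers(0, len(X))
    grad = (X[i] @ w_stale - y[i]) * X[i]   # SGD step on one sample
    w = w - lr * grad                       # converges despite staleness

print("parameter error:", np.linalg.norm(w - w_true))
```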
Journal Article

Semantic Pooling for Complex Event Analysis in Untrimmed Videos

TL;DR: This work defines a novel notion of semantic saliency that assesses the relevance of each shot to the event of interest, and proposes a new isotonic regularizer that exploits the constructed semantic ordering information.
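As a rough illustration of the semantic-saliency idea, the sketch below scores each shot against an event prototype and pools shot features with weights that are non-increasing in saliency rank, which is the ordering the isotonic regularizer enforces. The shot features and scoring function are toy assumptions, not the authors' implementation.

```python
# Toy sketch of semantic-saliency pooling: rank shots by relevance to
# the event, then pool with weights that decrease with saliency rank.
import numpy as np

rng = np.random.default_rng(1)
shots = rng.normal(size=(8, 16))         # 8 shots, 16-dim features each
event_prototype = rng.normal(size=16)    # stand-in for event semantics

# Semantic saliency: cosine similarity between each shot and the event.
saliency = shots @ event_prototype
saliency /= np.linalg.norm(shots, axis=1) * np.linalg.norm(event_prototype)

order = np.argsort(-saliency)            # most event-relevant shots first
weights = np.maximum(1.0 - 0.1 * np.arange(len(shots)), 0.0)
weights /= weights.sum()                 # non-increasing in saliency rank

video_feature = weights @ shots[order]   # saliency-aware pooled feature
print(video_feature.shape)               # (16,)
```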
Posted Content

Petuum: A New Platform for Distributed Machine Learning on Big Data

TL;DR: In this article, the authors propose a general-purpose framework that systematically addresses data- and model-parallel challenges in large-scale ML by observing that many ML programs are fundamentally optimization-centric and admit error-tolerant, iterative-convergent algorithmic solutions.
Proceedings Article

Convex Multi-view Subspace Learning

TL;DR: This paper develops an efficient algorithm that first recovers an optimal data reconstruction by exploiting an implicit convex regularizer, and then jointly and optimally recovers the corresponding latent representation and reconstruction model.
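The sketch below illustrates the convex route this TL;DR describes, under the assumption that the implicit convex regularizer can be stood in for by a nuclear-norm penalty: stack both views, recover a low-rank joint reconstruction by singular-value soft-thresholding, and read a shared latent representation off its factorization. This is an illustration, not the paper's exact algorithm.

```python
# Toy sketch: convex low-rank recovery for two views via the prox of
# the nuclear norm (singular-value soft-thresholding), as a stand-in
# for the paper's implicit convex regularizer.
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=(3, 50))              # shared latent signal, rank 3
view1 = rng.normal(size=(10, 3)) @ Z      # two views generated from Z
view2 = rng.normal(size=(12, 3)) @ Z
X = np.vstack([view1, view2]) + 0.05 * rng.normal(size=(22, 50))

# Singular-value soft-thresholding = proximal operator of the nuclear norm.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
tau = 1.0
s_shrunk = np.maximum(s - tau, 0.0)
X_hat = (U * s_shrunk) @ Vt               # low-rank joint reconstruction

rank = int((s_shrunk > 0).sum())
latent = np.diag(s_shrunk[:rank]) @ Vt[:rank]   # shared representation
print("recovered rank:", rank)
```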
Proceedings Article

DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference

TL;DR: This work proposes DeeBERT, a simple but effective method for accelerating BERT inference that allows samples to exit early without passing through the entire model, providing new ideas for efficiently applying deep transformer-based models to downstream tasks.
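The sketch below shows the early-exit mechanism behind this idea: attach a small classifier after each layer and stop as soon as the prediction's entropy falls below a threshold, so easy inputs skip the remaining layers. The layers, heads, and threshold are toy stand-ins, not the DeeBERT code or any real BERT API.

```python
# Toy sketch of entropy-based early exiting: per-layer exit heads,
# stop at the first head whose prediction is confident enough.
import numpy as np

rng = np.random.default_rng(3)
NUM_LAYERS, DIM, NUM_CLASSES = 12, 32, 2
layers = [rng.normal(scale=0.1, size=(DIM, DIM)) for _ in range(NUM_LAYERS)]
exit_heads = [rng.normal(scale=0.1, size=(DIM, NUM_CLASSES))
              for _ in range(NUM_LAYERS)]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum()

def early_exit_forward(x, threshold=0.3):
    """Run layer by layer; return (probs, layers_used) at first confident head."""
    h = x
    for i, (W, head) in enumerate(zip(layers, exit_heads)):
        h = np.tanh(h @ W + h)            # toy residual block
        p = softmax(h @ head)
        if entropy(p) < threshold:        # confident enough: exit early
            return p, i + 1
    return p, NUM_LAYERS                  # fell through to the last layer

probs, used = early_exit_forward(rng.normal(size=DIM))
print(f"exited after {used} of {NUM_LAYERS} layers")
```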