Hamed Hassani

Researcher at University of Pennsylvania

Publications -  142
Citations -  2593

Hamed Hassani is an academic researcher at the University of Pennsylvania. The author has contributed to research in topics including computer science and submodular set functions. The author has an h-index of 23 and has co-authored 99 publications receiving 1591 citations. Previous affiliations of Hamed Hassani include ETH Zurich and the University of Texas at Austin.

Papers
Proceedings Article

FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization.

TL;DR: FedPAQ, a communication-efficient Federated Learning method with Periodic Averaging and Quantization, is presented; it achieves near-optimal theoretical guarantees for strongly convex and non-convex loss functions, and the communication-computation tradeoff provided by the method is demonstrated empirically.
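
The method combines three ingredients: each participating client runs several local SGD steps between communication rounds (periodic averaging), only a subset of clients participates in each round, and the uploaded model updates are quantized. Below is a minimal sketch of one such training loop, assuming a toy least-squares objective, a generic unbiased stochastic quantizer, and illustrative hyperparameters; the helper names (stochastic_quantize, local_sgd, fedpaq_round) are invented for this sketch and are not taken from the paper's code.

```python
import numpy as np

np.random.seed(0)

def stochastic_quantize(v, levels=4):
    """Unbiased stochastic quantizer: randomly round |v|/||v|| onto a uniform grid."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    scaled = np.abs(v) / norm * levels
    lower = np.floor(scaled)
    rounded = lower + (np.random.rand(*v.shape) < scaled - lower)
    return norm * np.sign(v) * rounded / levels

def local_sgd(x, A, b, steps, lr):
    """A few SGD steps on one client's local least-squares loss."""
    for _ in range(steps):
        i = np.random.randint(len(b))
        x = x - lr * (A[i] @ x - b[i]) * A[i]
    return x

def fedpaq_round(x_server, clients, r, tau, lr):
    """One round: sample r clients, run tau local steps on each, quantize the
    model difference, and average the quantized updates on the server."""
    chosen = np.random.choice(len(clients), size=r, replace=False)
    updates = [stochastic_quantize(local_sgd(x_server.copy(), A, b, tau, lr) - x_server)
               for A, b in (clients[c] for c in chosen)]
    return x_server + np.mean(updates, axis=0)

# toy usage: 10 clients, each holding a shard of a linear-regression problem
d = 20
x_true = np.random.randn(d)
clients = []
for _ in range(10):
    A = np.random.randn(50, d)
    clients.append((A, A @ x_true + 0.01 * np.random.randn(50)))

x = np.zeros(d)
for _ in range(100):
    x = fedpaq_round(x, clients, r=4, tau=10, lr=0.01)
print("distance to x_true:", np.linalg.norm(x - x_true))
```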
Proceedings Article

Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks

TL;DR: In this article, the authors present a convex optimization framework for computing guaranteed upper bounds on the Lipschitz constant of deep neural networks both accurately and efficiently, in which activation functions are interpreted as gradients of convex potential functions.
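
For a single-hidden-layer network f(x) = W1 φ(W0 x) with an activation slope-restricted on [α, β] (ReLU corresponds to [0, 1]), the framework's certificate can be posed as a small semidefinite program: if a nonnegative diagonal multiplier T and a scalar ρ make a certain block matrix negative semidefinite, then √ρ upper-bounds the Lipschitz constant. The cvxpy sketch below is one possible rendering of that single-layer condition; the random weights and the choice of the SCS solver are illustrative assumptions, not the authors' setup.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, h, m = 4, 6, 3                  # input, hidden, and output widths
W0 = rng.normal(size=(h, n))
W1 = rng.normal(size=(m, h))

alpha, beta = 0.0, 1.0             # ReLU is slope-restricted on [0, 1]
t = cp.Variable(h, nonneg=True)    # one multiplier per hidden neuron
T = cp.diag(t)
rho = cp.Variable(nonneg=True)     # rho plays the role of L^2

# If M(rho, T) is negative semidefinite, then sqrt(rho) is a Lipschitz bound.
M = cp.bmat([
    [-2 * alpha * beta * (W0.T @ T @ W0) - rho * np.eye(n), (alpha + beta) * (W0.T @ T)],
    [(alpha + beta) * (T @ W0), -2 * T + W1.T @ W1],
])
prob = cp.Problem(cp.Minimize(rho), [M << 0])
prob.solve(solver=cp.SCS)

print("SDP Lipschitz bound:", np.sqrt(rho.value))
print("naive layer-norm product:", np.linalg.norm(W1, 2) * np.linalg.norm(W0, 2))
```

On small random networks the SDP bound is typically noticeably tighter than the product of the layers' spectral norms, which is the accuracy-versus-cost trade-off the paper is concerned with.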
Proceedings Article

Fast and Provably Good Seedings for k-Means

TL;DR: This work proposes a simple yet fast seeding algorithm that produces *provably* good clusterings even *without assumptions* on the data, and allows for a favourable trade-off between solution quality and computational cost, speeding up k-means++ seeding by up to several orders of magnitude.
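
The speed-up comes from replacing the full D²-sampling pass of k-means++ with a short Metropolis-Hastings chain driven by a data-independent proposal distribution, so each new center touches only a small number of points. The sketch below follows that idea in the spirit of the paper's assumption-free MCMC seeding; the chain length m, the 50/50 proposal mixture, and the toy data are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def mcmc_seeding(X, k, m, rng):
    """Seed k centers with short MCMC chains instead of full D^2-sampling passes."""
    n = X.shape[0]
    centers = [X[rng.integers(n)]]                 # first center uniformly at random
    # proposal: mixture of D^2 w.r.t. the first center and the uniform distribution
    d2 = np.sum((X - centers[0]) ** 2, axis=1)
    q = 0.5 * d2 / d2.sum() + 0.5 / n

    def dist2(x):
        return min(np.sum((c - x) ** 2) for c in centers)

    for _ in range(k - 1):
        idx = rng.choice(n, p=q)
        x, dx, qx = X[idx], dist2(X[idx]), q[idx]
        for _ in range(m - 1):                     # Metropolis-Hastings targeting D^2
            jdx = rng.choice(n, p=q)
            y, dy, qy = X[jdx], dist2(X[jdx]), q[jdx]
            if rng.random() < (dy * qx) / (dx * qy + 1e-12):
                x, dx, qx = y, dy, qy
        centers.append(x)
    return np.array(centers)

# toy usage: three well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(100, 2)) for c in (0.0, 5.0, 10.0)])
print(mcmc_seeding(X, k=3, m=50, rng=rng))
```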
Journal Article

Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization

TL;DR: Stochastic conditional gradient methods are proposed as an alternative solution: gradients are approximated via a simple averaging technique that requires only a single stochastic gradient evaluation per iteration, and replacing the projection step of proximal methods with a linear program lowers the computational complexity of each iteration.
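
Concretely, each iteration keeps a running average of single-sample gradients and calls a linear minimization oracle in place of a projection. The sketch below shows one possible instantiation on a toy least-squares problem over an l1 ball, where the linear program has a closed-form solution; the averaging weight and step-size schedules are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, radius = 500, 30, 5.0
A = rng.normal(size=(n, d))
x_true = np.zeros(d); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=n)

def lmo_l1(g, r):
    """Linear minimization oracle over the l1 ball of radius r (no projection needed)."""
    i = np.argmax(np.abs(g))
    v = np.zeros_like(g)
    v[i] = -r * np.sign(g[i])
    return v

x = np.zeros(d)
d_avg = np.zeros(d)                              # averaged gradient estimate
for t in range(1, 2001):
    i = rng.integers(n)
    grad = (A[i] @ x - b[i]) * A[i]              # one stochastic gradient per iteration
    rho = 1.0 / t ** (2 / 3)                     # averaging weight
    d_avg = (1 - rho) * d_avg + rho * grad
    v = lmo_l1(d_avg, radius)                    # the "linear program" step
    x = x + (2.0 / (t + 2)) * (v - x)            # conditional gradient update

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```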
Posted Content

Gradient Methods for Submodular Maximization

TL;DR: In this paper, the authors show that projected gradient descent achieves a constant approximation guarantee for maximizing continuous submodular functions subject to convex constraints, where the function is defined as an expectation over a family of submodular functions with an unknown distribution.
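
As a rough illustration of this setting, the sketch below runs stochastic projected gradient ascent on a toy monotone DR-submodular objective (a probabilistic-coverage function written as an average of per-item terms) over the constraint set {x in [0,1]^d : sum(x) <= k}; the objective, the bisection-based projection, and the step sizes are assumptions made for this example, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 10, 40, 3.0
P = rng.uniform(0.0, 0.5, size=(d, m))      # coverage probabilities p_ij
w = rng.uniform(1.0, 2.0, size=m)           # item weights

def stochastic_grad(x):
    """Unbiased gradient estimate: sample one coverage term j uniformly at random."""
    j = rng.integers(m)
    residual = 1.0 - P[:, j] * x             # per-coordinate factors (1 - p_ij x_i)
    # d/dx_i of w_j (1 - prod_l (1 - p_lj x_l)) = w_j p_ij prod_{l != i} (1 - p_lj x_l)
    return m * w[j] * P[:, j] * np.prod(residual) / residual

def project(y, budget):
    """Euclidean projection onto {x in [0,1]^d : sum(x) <= budget} via bisection."""
    x = np.clip(y, 0.0, 1.0)
    if x.sum() <= budget:
        return x
    lo, hi = 0.0, float(y.max())
    for _ in range(50):
        lam = (lo + hi) / 2
        x = np.clip(y - lam, 0.0, 1.0)
        lo, hi = (lam, hi) if x.sum() > budget else (lo, lam)
    return x

x = np.zeros(d)
for t in range(1, 3001):
    x = project(x + (0.1 / np.sqrt(t)) * stochastic_grad(x), k)

value = w @ (1.0 - np.prod(1.0 - P * x[:, None], axis=0))
print("final value:", value, "budget used:", x.sum())
```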