
Ayush Sekhari

Researcher at Cornell University

Publications: 22
Citations: 200

Ayush Sekhari is an academic researcher from Cornell University. The author has contributed to research in the topics of computer science and stationary points. The author has an h-index of 5 and has co-authored 13 publications receiving 95 citations.

Papers
Proceedings Article

Uniform Convergence of Gradients for Non-Convex Learning and Optimization

TL;DR: This work investigates the rate at which refined properties of the empirical risk, in particular its gradients, converge to their population counterparts in standard non-convex learning tasks, along with the consequences of this convergence for optimization, and proposes vector-valued Rademacher complexities as a simple, composable, and user-friendly tool for deriving dimension-free uniform convergence bounds for gradients in non-convex learning problems.
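As a rough illustration of the quantity this line of work controls (the notation below is assumed for the sketch, not quoted from the paper), uniform convergence of gradients asks that the empirical gradient track the population gradient uniformly over the parameter set:

```latex
% Illustrative sketch only; \hat{L}_n is the empirical risk on n samples,
% L is the population risk, and \mathcal{W} is the parameter set.
\[
  \sup_{w \in \mathcal{W}} \bigl\| \nabla \hat{L}_n(w) - \nabla L(w) \bigr\| \;\le\; \varepsilon(n),
\]
% with \varepsilon(n) bounded through vector-valued Rademacher complexities,
% yielding dimension-free rates for standard non-convex learning tasks.
```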
Proceedings Article

The Complexity of Making the Gradient Small in Stochastic Convex Optimization

TL;DR: It is shown that in the global oracle/statistical learning model, only logarithmic dependence on smoothness is required to find a near-stationary point, whereas polynomial dependence on smoothness is necessary in the local stochastic oracle model.
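For context, a hedged sketch of the target notion (notation assumed here, not taken from the paper): for a smooth convex objective F, a near-stationary point is an output whose gradient is small in norm.

```latex
% Sketch of the near-stationarity criterion; F is smooth and convex,
% \hat{x} is the algorithm's output, and \epsilon is the target accuracy.
\[
  \mathbb{E}\,\bigl\| \nabla F(\hat{x}) \bigr\| \;\le\; \epsilon .
\]
```

The contrast in the TL;DR then concerns how the number of oracle queries needed to reach this criterion scales with the smoothness constant in the two oracle models.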
Posted Content

Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations

TL;DR: An algorithm is designed which finds an $\epsilon$-approximate stationary point using stochastic gradients and Hessian-vector products, and a lower bound is proved establishing that this rate is optimal and cannot be improved by stochastic $p$th-order methods for any $p \ge 2$, even when the first $p$ derivatives of the objective are Lipschitz.
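A minimal sketch of the two oracles mentioned in the TL;DR, gradients and Hessian-vector products, written here with JAX autodiff; the objective f and all names below are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch: gradient and Hessian-vector-product oracles via JAX autodiff.
# The objective f is an illustrative stand-in, not taken from the paper.
import jax
import jax.numpy as jnp

def f(x):
    # A simple smooth non-convex test objective (assumed for the sketch).
    return jnp.sum(jnp.cos(x) + 0.1 * x ** 4)

def hvp(fun, x, v):
    # Hessian-vector product H(x) @ v, computed forward-over-reverse,
    # without ever forming the full Hessian.
    return jax.jvp(jax.grad(fun), (x,), (v,))[1]

x = jnp.ones(5)
v = jnp.linspace(0.0, 1.0, 5)
grad_x = jax.grad(f)(x)   # gradient oracle
hv = hvp(f, x, v)         # Hessian-vector-product oracle
print(grad_x, hv)
```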
Proceedings Article

Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient

TL;DR: This work considers a hybrid reinforcement learning setting (Hybrid RL), in which an agent has access to an offline dataset as well as the ability to collect experience via real-world online interaction, and adapts the classical Q-learning/iteration algorithm to the hybrid setting, calling the result Hy-Q.
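A schematic sketch of the hybrid idea described above, in simplified Python; fit_q, act_greedily, and env are assumed, user-supplied components, and this is not the paper's exact Hy-Q procedure.

```python
# Schematic sketch of hybrid fitted Q-iteration: each round fits a Q-function on the
# union of a fixed offline dataset and freshly collected online transitions.
# fit_q, act_greedily, and env are assumed, user-supplied components.
def hybrid_fitted_q(offline_data, env, fit_q, act_greedily, num_iters=50, rollout_len=100):
    online_data = []
    q = fit_q(offline_data)                      # warm-start from the offline dataset
    for _ in range(num_iters):
        # Collect online experience with the current greedy policy.
        state = env.reset()
        for _ in range(rollout_len):
            action = act_greedily(q, state)
            next_state, reward, done = env.step(action)
            online_data.append((state, action, reward, next_state))
            state = env.reset() if done else next_state
        # Refit the Q-function on offline plus online data.
        q = fit_q(offline_data + online_data, prev_q=q)
    return q
```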
Posted Content

The Complexity of Making the Gradient Small in Stochastic Convex Optimization

TL;DR: In this paper, the authors give nearly matching upper and lower bounds on the oracle complexity of finding near-stationary points in stochastic convex optimization with respect to the global oracle/statistical learning model.