Qinqing Zheng
Researcher at University of Pennsylvania
Publications - 20
Citations - 503
Qinqing Zheng is an academic researcher from the University of Pennsylvania. The author has contributed to research in the topics of computer science and differential privacy, has an h-index of 5, and has co-authored 14 publications receiving 352 citations. Previous affiliations of Qinqing Zheng include the University of Chicago and Facebook.
Papers
Proceedings Article
A convergent Gradient descent algorithm for rank minimization and semidefinite programming from random linear measurements
Qinqing Zheng, John Lafferty +1 more
TL;DR: A simple, scalable, and fast gradient descent algorithm is proposed to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs.
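The TL;DR above describes factored gradient descent for recovering a low-rank matrix from random linear measurements. A minimal sketch of that general idea follows; it is not the paper's exact algorithm, and the dimensions, measurement count, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 3000            # dimension, rank, number of measurements

# Well-conditioned rank-r PSD ground truth X* = U* U*^T
Q, _ = np.linalg.qr(rng.normal(size=(n, r)))
U_star = Q * np.array([2.0, 1.5])          # singular values of U*
X_star = U_star @ U_star.T

# Random symmetric Gaussian measurements y_i = <A_i, X*>
A = rng.normal(size=(m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2
y = np.einsum('kij,ij->k', A, X_star)

# Spectral initialization: top-r eigenpairs of (1/m) sum_i y_i A_i ~= X*
M = np.einsum('k,kij->ij', y, A) / m
vals, vecs = np.linalg.eigh(M)             # eigenvalues in ascending order
U = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0.0))

# Gradient descent directly on the factor U for the nonconvex objective
# f(U) = (1/4m) sum_i (<A_i, U U^T> - y_i)^2
step = 0.2 / vals[-1]
for _ in range(400):
    resid = np.einsum('kij,ij->k', A, U @ U.T) - y
    U -= step * np.einsum('k,kij->ij', resid, A) @ U / m

rel_err = np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star)
```

Working over the n×r factor U instead of the full n×n matrix is what makes this family of methods scalable: each iteration costs only measurement evaluations and a thin matrix product, with no projection onto the PSD cone.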
Posted Content
Convergence Analysis for Rectangular Matrix Completion Using Burer-Monteiro Factorization and Gradient Descent
Qinqing Zheng, John Lafferty +1 more
TL;DR: This work addresses the rectangular matrix completion problem by lifting the unknown matrix to a positive semidefinite matrix in higher dimension, and optimizing a nonconvex objective over the semidefinite factor using a simple gradient descent scheme.
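For intuition, here is a minimal matrix-completion sketch in the same Burer-Monteiro spirit, using the common asymmetric factorization M ≈ UVᵀ rather than the paper's PSD lifting; the problem sizes, observation probability, and step-size rule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, r, p = 30, 25, 2, 0.5    # matrix sizes, rank, observation probability

# Well-conditioned rank-r target M = U* V*^T
Qu, _ = np.linalg.qr(rng.normal(size=(n1, r)))
Qv, _ = np.linalg.qr(rng.normal(size=(n2, r)))
M = (Qu * np.array([3.0, 2.0])) @ Qv.T
mask = rng.random((n1, n2)) < p            # which entries are observed

# Spectral initialization from the zero-filled, rescaled observations
Us, s, Vts = np.linalg.svd(mask * M / p, full_matrices=False)
U = Us[:, :r] * np.sqrt(s[:r])
V = Vts[:r].T * np.sqrt(s[:r])

# Gradient descent on the squared loss over observed entries
step = 0.2 / s[0]
for _ in range(1000):
    R = mask * (U @ V.T - M) / p           # residual on observed entries only
    U, V = U - step * (R @ V), V - step * (R.T @ U)

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
```

Starting both factors from a balanced spectral initialization is a standard trick: it places the iterates near the basin of the global optimum, so plain gradient descent suffices despite the nonconvexity.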
Proceedings Article
Online Decision Transformer
TL;DR: Online Decision Transformer (ODT), an RL algorithm based on sequence modeling that blends offline pretraining with online finetuning in a unified framework, is proposed and shown to be competitive with the state of the art in absolute performance on the D4RL benchmark.
Posted Content
A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements
Qinqing Zheng, John Lafferty +1 more
TL;DR: In this article, a simple, scalable, and fast gradient descent algorithm was proposed to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs.
Posted Content
A Theorem of the Alternative for Personalized Federated Learning
TL;DR: This paper shows how the excess risks of personalized federated learning with a smooth, strongly convex loss depend on data heterogeneity from a minimax point of view, and reveals a surprising theorem of the alternative for personalized federated learning.