Rina Foygel Barber
Researcher at University of Chicago
Publications - 121
Citations - 3990
Rina Foygel Barber is an academic researcher at the University of Chicago. Her research focuses on topics including the false discovery rate and statistical inference. She has an h-index of 28 and has co-authored 103 publications receiving 2,699 citations.
Papers
Posted Content · DOI
Fast and Flexible Estimation of Effective Migration Surfaces
TL;DR: Fast Estimation of Effective Migration Surfaces (FEEMS) uses a Gaussian Markov Random Field in a penalized likelihood framework, allowing efficient optimization and output of effective migration surfaces, and expands users' ability to quickly visualize and interpret spatial structure in their data.
Journal Article · DOI
Fast and flexible estimation of effective migration surfaces.
TL;DR: Fast Estimation of Effective Migration Surfaces (FEEMS) uses a Gaussian Markov Random Field model in a penalized likelihood framework that allows for efficient optimization and output of effective migration surfaces.
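To make the penalized-likelihood framing concrete, below is a minimal, hypothetical sketch of a GMRF penalized log-likelihood on a tiny graph: edge weights parameterize a graph-Laplacian precision matrix, and a smoothness penalty discourages neighboring edges from differing. The graph, the penalty form, and all names are illustrative assumptions, not FEEMS's actual model.

```python
import numpy as np

# Tiny illustrative GMRF penalized log-likelihood on a 4-node path graph.
# NOTE: the parameterization and penalty below are illustrative only.
edges = [(0, 1), (1, 2), (2, 3)]

def precision(w, n=4, jitter=1e-3):
    """Graph-Laplacian precision matrix Q(w) built from edge weights w."""
    Q = np.zeros((n, n))
    for (i, j), wij in zip(edges, w):
        Q[i, i] += wij; Q[j, j] += wij
        Q[i, j] -= wij; Q[j, i] -= wij
    return Q + jitter * np.eye(n)  # small jitter makes Q positive definite

def penalized_loglik(w, y, lam=1.0):
    """Gaussian log-likelihood of y under N(0, Q(w)^-1), minus a
    smoothness penalty on log-weights of consecutive edges."""
    Q = precision(w)
    _, logdet = np.linalg.slogdet(Q)
    loglik = 0.5 * logdet - 0.5 * y @ Q @ y
    smooth = sum((np.log(w[k + 1]) - np.log(w[k])) ** 2
                 for k in range(len(w) - 1))
    return loglik - lam * smooth

y = np.array([0.2, -0.1, 0.3, 0.0])
print(penalized_loglik(np.array([1.0, 1.0, 1.0]), y))
```

In a FEEMS-style setting one would maximize this objective over the edge weights; here we only evaluate it, since the optimizer and the exact model are beyond this sketch.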
Journal Article
MOCCA: mirrored convex/concave optimization for nonconvex composite functions
Rina Foygel Barber, Emil Y. Sidky, +1 more
TL;DR: The MOCCA (mirrored convex/concave) algorithm is proposed, a primal/dual optimization approach that takes a local convex approximation to each term at every iteration, and offers theoretical guarantees for convergence when the overall problem is approximately convex.
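MOCCA itself is a primal/dual scheme; as a much simpler illustration of the "local convex approximation at each iteration" idea, here is a convex-concave-procedure sketch on a toy one-dimensional difference-of-convex objective. The objective and every detail below are illustrative and not from the paper.

```python
import numpy as np

# Toy difference-of-convex objective: f(x) = x^4 - 3x^2, split as a
# convex part x^4 minus a convex part 3x^2 (whose gradient is 6x).
f = lambda x: x**4 - 3 * x**2

x = 2.0  # illustrative starting point
for _ in range(100):
    # Replace the concave term -3x^2 by its linearization at the current x,
    # yielding a convex surrogate; its minimizer solves 4y^3 = 6x.
    x = np.cbrt(6.0 * x / 4.0)

# f'(x) = 4x^3 - 6x = 0 at x = sqrt(1.5), a stationary point of f.
print(x, f(x))
```

Each iteration minimizes a convex surrogate that upper-bounds f, so the objective value is non-increasing; here the iterates converge to the stationary point x = sqrt(1.5).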
Posted Content
Discretized conformal prediction for efficient distribution-free inference
TL;DR: In this paper, the authors developed discretized conformal prediction algorithms that are guaranteed to cover the target value with the desired probability, and that offer a tradeoff between computational cost and prediction accuracy.
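As a rough illustration of the discretized approach, here is a hedged sketch of split conformal prediction evaluated over a finite grid of candidate labels. The absolute-residual score, the toy linear predictor, and all names are assumptions for illustration, not the paper's algorithms.

```python
import numpy as np

def discretized_conformal_interval(x_train, y_train, x_test,
                                   alpha=0.1, grid_size=200):
    """Split conformal prediction over a discretized grid of candidate
    y values: fit on half the data, score the other half with absolute
    residuals, and keep every grid point whose score is below the
    (1 - alpha) conformal quantile. Illustrative only."""
    n = len(y_train)
    fit_idx, cal_idx = np.arange(n // 2), np.arange(n // 2, n)

    # Toy predictor: least-squares line y = a*x + b on the fitting half.
    a, b = np.polyfit(x_train[fit_idx], y_train[fit_idx], 1)
    predict = lambda x: a * x + b

    # Calibration scores and the conformal quantile.
    scores = np.abs(y_train[cal_idx] - predict(x_train[cal_idx]))
    k = int(np.ceil((1 - alpha) * (len(cal_idx) + 1)))
    qhat = np.sort(scores)[min(k, len(scores)) - 1]

    # Discretized candidate grid: keep candidates with score <= qhat.
    grid = np.linspace(y_train.min(), y_train.max(), grid_size)
    return grid[np.abs(grid - predict(x_test)) <= qhat]

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2 * x + rng.normal(0, 1, 200)
interval = discretized_conformal_interval(x, y, x_test=5.0)
print(interval.min(), interval.max())
```

A coarser grid cheapens the candidate sweep at the cost of a blockier prediction set, which is the computation/accuracy tradeoff the summary alludes to.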
Posted Content
An equivalence between critical points for rank constraints versus low-rank factorizations
TL;DR: It is shown that all second-order stationary points of the factorized objective function correspond to stationary points of projected gradient descent run on the original problem (where the projection step enforces the rank constraint), which makes it possible to unify many existing optimization guarantees proved specifically in either the rank-constrained or the factorized setting.
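As a toy numerical illustration of this correspondence (not the paper's argument), the sketch below runs projected gradient descent with a rank projection, and plain gradient descent on a factorized objective, for a random rank-constrained least-squares instance; both settle at the best rank-r approximation. Step sizes, iteration counts, and the instance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(8, 6))
r = 2

def project_rank(X, r):
    """Project onto the set of rank-<= r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# (1) Projected gradient descent on f(X) = 0.5*||X - M||_F^2, with the
# rank constraint enforced by the projection step.
X = np.zeros_like(M)
for _ in range(200):
    X = project_rank(X - 0.5 * (X - M), r)

# (2) Plain gradient descent on the factorized objective
# g(U, V) = 0.5*||U V^T - M||_F^2, from a small random start.
U = 0.1 * rng.normal(size=(8, r))
V = 0.1 * rng.normal(size=(6, r))
for _ in range(20000):
    R = U @ V.T - M
    U, V = U - 0.02 * (R @ V), V - 0.02 * (R.T @ U)

# Both land (numerically) at the same stationary point: the best
# rank-r approximation of M.
print(np.linalg.norm(X - U @ V.T))
```

For this simple least-squares objective the shared limit is the truncated SVD of M (Eckart-Young), which is why comparing the two products is a meaningful check.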