Nicholas Boyd

Researcher at University of California, Berkeley

Publications - 12
Citations - 766

Nicholas Boyd is an academic researcher from the University of California, Berkeley. The author has contributed to research on topics including the Frank–Wolfe algorithm and convex optimization. The author has an h-index of 8 and has co-authored 12 publications receiving 690 citations. Previous affiliations of Nicholas Boyd include Amazon.com.

Papers
Posted Content

Streaming Variational Bayes

TL;DR: SDA-Bayes is a framework for streaming and distributed computation of a Bayesian posterior, which makes streaming updates to the estimated posterior according to a user-specified batch approximation primitive.
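
The streaming-update idea in the summary above can be illustrated with a minimal sketch. Everything in it is an assumption chosen for brevity, not taken from the paper or this listing: the unknown is a scalar Gaussian mean with known observation noise, the batch approximation primitive is an exact conjugate update, and batch contributions are combined additively in natural-parameter space.

```python
# Minimal sketch of a streaming posterior update, under the assumptions
# stated above (scalar Gaussian mean, exact conjugate update as the
# batch primitive). Not the paper's implementation.
import numpy as np

def batch_primitive(prior_nat, batch, obs_var=1.0):
    """One call to the batch approximation primitive: here, an exact conjugate
    update of the Gaussian natural parameters (eta1, eta2) for the unknown mean."""
    eta1, eta2 = prior_nat
    eta1 = eta1 + np.sum(batch) / obs_var          # add sufficient statistics
    eta2 = eta2 - len(batch) / (2.0 * obs_var)
    return np.array([eta1, eta2])

def streaming_posterior(minibatches, prior_nat):
    """Fold minibatches into the posterior one at a time: each update adds that
    batch's contribution (batch posterior minus prior) in natural-parameter space."""
    prior_nat = np.asarray(prior_nat, dtype=float)
    post_nat = prior_nat.copy()
    for batch in minibatches:
        local = batch_primitive(prior_nat, batch)
        post_nat += local - prior_nat
    return post_nat

# Example: stream noisy observations of an unknown mean in 10 minibatches.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=1000)
prior = [0.0, -0.5]                                 # standard-normal prior on the mean
post = streaming_posterior(np.array_split(data, 10), prior)
mean, var = -post[0] / (2 * post[1]), -1.0 / (2 * post[1])
print(f"posterior mean ~ {mean:.3f}, posterior variance ~ {var:.5f}")
```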
Journal Article

The Alternating Descent Conditional Gradient Method for Sparse Inverse Problems

TL;DR: This paper proposes a variant of the classical conditional gradient method (CGM) for sparse inverse problems with differentiable measurement models; the method retains the global optimality guarantees and stopping conditions of convex optimization while offering the performance and modeling flexibility associated with nonconvex optimization.
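
A rough sketch of the kind of conditional-gradient loop the summary describes, under illustrative assumptions that are not from the paper: a 1D deconvolution model with Gaussian point sources, a grid search standing in for the linear-minimization oracle, least squares for the weights, and a few gradient steps on the continuous source locations as the local "alternating descent" refinement.

```python
# Illustrative conditional-gradient loop for a toy sparse inverse problem.
# All model choices here (Gaussian PSF, grid-search oracle, least-squares
# weights, simple gradient refinement) are assumptions for this sketch.
import numpy as np

grid = np.linspace(0.0, 1.0, 200)             # measurement grid
SIGMA = 0.03                                  # point-spread width

def psf(loc):
    """Response of a unit-weight source at continuous location `loc`."""
    return np.exp(-(grid - loc) ** 2 / (2 * SIGMA ** 2))

def fit_weights(locs, y):
    """Fully corrective step: least-squares weights for the current sources."""
    A = np.stack([psf(t) for t in locs], axis=1)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def adcg_like(y, n_iters=3, n_refine=25, step=1e-4):
    locs = []
    candidates = np.linspace(0.0, 1.0, 1000)
    for _ in range(n_iters):
        # Conditional-gradient step: add the candidate source whose response
        # is most correlated with the current residual (grid search stands in
        # for the linear-minimization oracle).
        if locs:
            r = y - np.stack([psf(t) for t in locs], axis=1) @ fit_weights(locs, y)
        else:
            r = y
        scores = np.array([abs(psf(c) @ r) for c in candidates])
        locs.append(candidates[np.argmax(scores)])
        # Alternating descent: refit the weights, then nudge the continuous
        # source locations along the gradient of the squared residual.
        for _ in range(n_refine):
            w = fit_weights(locs, y)
            r = y - np.stack([psf(t) for t in locs], axis=1) @ w
            grads = [-2.0 * wj * (((grid - t) / SIGMA ** 2) * psf(t)) @ r
                     for wj, t in zip(w, locs)]
            locs = [t - step * g for t, g in zip(locs, grads)]
    return locs, fit_weights(locs, y)

# Example: recover two point sources from a blurred, noisy observation.
rng = np.random.default_rng(1)
truth = [(0.30, 1.0), (0.62, 0.7)]
y = sum(a * psf(t) for t, a in truth) + 0.01 * rng.normal(size=grid.size)
locs, weights = adcg_like(y)
print(sorted(zip(np.round(locs, 3), np.round(weights, 2))))
```

The refit of all weights after each new source is one common design choice for conditional-gradient methods; the local refinement of the continuous locations is what the "alternating descent" in the title refers to, simplified here to plain gradient steps.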
Proceedings Article

Streaming Variational Bayes

TL;DR: SDA-Bayes is presented, a framework for streaming and distributed computation of a Bayesian posterior with variational Bayes (VB) as the primitive, and the usefulness of the framework is demonstrated by fitting the latent Dirichlet allocation model to two large-scale document collections.
Posted Content

DeepLoco: Fast 3D Localization Microscopy Using Neural Networks

TL;DR: This paper introduces a new algorithm for a critical part of the SMLM process: estimating the number and locations of the fluorophores in a single frame. It trains a neural network to minimize the Bayes risk under a generative model for single SMLM frames.
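
The training idea in the summary, fitting a network to minimize expected loss under a generative model for frames, can be sketched as follows. The frame model, network, and loss below are placeholders chosen for brevity (a single emitter per 16x16 frame, a Gaussian point-spread function, an MLP, squared error), not the paper's actual setup.

```python
# Illustrative simulation-based training loop: sample frames from a simple
# generative model and regress emitter locations, so the average training loss
# approximates an expected (Bayes-style) risk. All modeling choices here are
# assumptions for this sketch.
import torch
import torch.nn as nn

SIZE = 16
yy, xx = torch.meshgrid(torch.arange(SIZE, dtype=torch.float32),
                        torch.arange(SIZE, dtype=torch.float32), indexing="ij")

def simulate(batch):
    """Sample emitter locations and render noisy frames from the generative model."""
    loc = torch.rand(batch, 2) * (SIZE - 1)                  # (x, y) in pixels
    frames = torch.exp(-((xx - loc[:, 0, None, None]) ** 2 +
                         (yy - loc[:, 1, None, None]) ** 2) / (2 * 1.5 ** 2))
    frames = frames + 0.05 * torch.randn_like(frames)        # sensor noise
    return frames.reshape(batch, -1), loc

net = nn.Sequential(nn.Linear(SIZE * SIZE, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    frames, loc = simulate(64)
    loss = ((net(frames) - loc) ** 2).mean()   # squared error stands in for the risk
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    frames, loc = simulate(4)
    print(torch.cat([net(frames), loc], dim=1))  # predicted (x, y) next to true (x, y)
```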
Proceedings Article

The alternating descent conditional gradient method for sparse inverse problems

TL;DR: This paper proposes a variant of the classical conditional gradient method (CGM) for sparse inverse problems with differentiable measurement models; the method retains the global optimality guarantees and stopping conditions of convex optimization while offering the performance and modeling flexibility associated with nonconvex optimization.