Lawrence K. Saul
Researcher at University of California, San Diego
Publications - 138
Citations - 40154
Lawrence K. Saul is an academic researcher at the University of California, San Diego. He has contributed to research on topics including hidden Markov models and nonlinear dimensionality reduction. He has an h-index of 49 and has co-authored 133 publications receiving 37,255 citations. Previous affiliations of Lawrence K. Saul include the Massachusetts Institute of Technology and the University of Pennsylvania.
Papers
Proceedings ArticleDOI
Learning curve bounds for a Markov decision process with undiscounted rewards
Lawrence K. Saul, Satinder Singh +1 more
TL;DR: This work studies how an agent's performance in Markov decision processes depends on the allowed exploration time, and computes a lower bound on the return of policies that appear optimal based on imperfect statistics.
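The paper's bounds are theoretical; as a loose illustration of how estimates sharpen with exploration time, here is a toy two-state Markov chain in which the empirical average reward approaches the true stationary value as the number of exploration steps grows. The chain, rewards, and step counts are invented for illustration and are not from the paper.

```python
import random

def true_average_reward(p_stay, rewards):
    # Stationary distribution of a 2-state chain where each state
    # stays with probability p_stay[s] and otherwise switches.
    p01, p10 = 1 - p_stay[0], 1 - p_stay[1]
    pi0 = p10 / (p01 + p10)
    return pi0 * rewards[0] + (1 - pi0) * rewards[1]

def estimate_average_reward(p_stay, rewards, n_steps, seed=0):
    # Run the chain for n_steps and average the observed rewards.
    rng = random.Random(seed)
    s, total = 0, 0.0
    for _ in range(n_steps):
        total += rewards[s]
        if rng.random() >= p_stay[s]:
            s = 1 - s
    return total / n_steps

p_stay, rewards = [0.9, 0.5], [1.0, 0.0]
exact = true_average_reward(p_stay, rewards)            # 5/6
rough = estimate_average_reward(p_stay, rewards, 100)   # short exploration
fine = estimate_average_reward(p_stay, rewards, 100000) # long exploration
```

With only 100 steps the estimate can be noticeably off; with 100,000 steps it lands very close to the exact value of 5/6, which is the flavor of dependence on exploration time that the paper bounds rigorously.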
Proceedings Article
Visualization of low dimensional structure in tonal pitch space
TL;DR: This work references the toroidal structure commonly associated with harmonic space but stops short of presenting an explicit embedding of that torus: the visualization yields a more complex structure than the standard toroidal model has heretofore assumed.
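The visualization in this paper draws on nonlinear dimensionality reduction, an area where Saul co-developed locally linear embedding (LLE). As a sketch of that family of methods, the following numpy-only barebones LLE embeds a toy closed curve; the data, neighborhood size, and regularization constant are illustrative choices, not the paper's.

```python
import numpy as np

def lle(X, n_neighbors=8, n_components=2, reg=1e-3):
    """Barebones locally linear embedding (LLE)."""
    n = len(X)
    # Pairwise squared distances -> k nearest neighbors (excluding self).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                # neighbors centered on x_i
        C = Z @ Z.T                          # local Gram matrix
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()          # reconstruction weights sum to 1
    # Embedding: bottom eigenvectors of (I - W)^T (I - W),
    # skipping the constant eigenvector with eigenvalue ~0.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

# Toy data: a closed curve living in 3-D; LLE returns a 2-D embedding.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
X = np.column_stack([np.cos(t), np.sin(t), 0.3 * np.cos(2 * t)])
Y = lle(X)
```

The toroidal pitch-space data in the paper is higher-dimensional, but the mechanics of recovering low-dimensional structure are the same.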
Proceedings ArticleDOI
Sparse decomposition of mixed audio signals by basis pursuit with autoregressive models
Youngmin Cho, Lawrence K. Saul +1 more
TL;DR: A framework is developed to detect when certain sounds are present in a mixed audio signal; the required optimizations are derived, and experimental results on combinations of periodic and aperiodic sources are presented.
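Basis pursuit recovers sparse coefficients by l1-penalized reconstruction. The sketch below solves a plain basis pursuit denoising problem with iterative soft-thresholding (ISTA) over a random dictionary; it omits the paper's autoregressive modeling, and the dictionary, penalty, and iteration count are assumptions for illustration.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def basis_pursuit_denoise(D, y, lam=0.1, n_iters=500):
    """ISTA for min_w 0.5*||y - D w||^2 + lam*||w||_1."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ w - y)
        w = soft_threshold(w - grad / L, lam / L)
    return w

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 16))
D /= np.linalg.norm(D, axis=0)         # unit-norm dictionary atoms
w_true = np.zeros(16)
w_true[[2, 9]] = [1.5, -2.0]           # only two sources are "present"
y = D @ w_true                         # the observed mixture
w_hat = basis_pursuit_denoise(D, y, lam=0.05)
```

The recovered `w_hat` is dominated by the two active atoms, which is the detection signal the framework exploits: a nonzero coefficient marks a source as present in the mixture.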
Proceedings ArticleDOI
Learning dictionaries of stable autoregressive models for audio scene analysis
Youngmin Cho, Lawrence K. Saul +1 more
TL;DR: This work characterizes the acoustic variability of individual sources by autoregressive models of their time-domain waveforms and shows how to estimate stable models by substituting a simple convex optimization for a difficult eigenvalue problem.
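As a rough sketch of the modeling step, the following fits an AR(p) model by least squares, measures stability by the largest pole magnitude of the companion matrix (the eigenvalue problem mentioned above), and, when needed, shrinks the poles radially. The shrinkage is a simple heuristic standing in for the paper's convex optimization; the signal, order, and stability margin are illustrative.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: x[t] ~ sum_k a[k] * x[t-1-k]."""
    T = len(x)
    X = np.column_stack([x[p - 1 - k:T - 1 - k] for k in range(p)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

def spectral_radius(a):
    """Largest pole magnitude, via companion-matrix eigenvalues."""
    p = len(a)
    C = np.zeros((p, p))
    C[0] = a
    C[1:, :-1] = np.eye(p - 1)
    return np.abs(np.linalg.eigvals(C)).max()

def stabilize(a, margin=0.99):
    """If unstable, scale a[k] by rho**(k+1), which moves every pole
    radially by rho. A heuristic, not the paper's convex program."""
    r = spectral_radius(a)
    if r <= margin:
        return a
    rho = margin / r
    return a * rho ** np.arange(1, len(a) + 1)

# Demo: simulate a stable AR(2) process and refit it from data.
rng = np.random.default_rng(1)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.standard_normal()
a = stabilize(fit_ar(x, 2))
```

With enough clean data the least-squares fit is already stable; the projection matters when short or noisy segments push the estimated poles outside the unit circle.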
Proceedings Article
Latent Variable Models for Predicting File Dependencies in Large-Scale Software Development
TL;DR: This work evaluates different latent variable models (LVMs) for file dependency detection, including Bernoulli mixture models, exponential family PCA, restricted Boltzmann machines, and fully Bayesian approaches, and finds that LVMs improve related-file prediction over current leading methods.
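One of the evaluated model families, the Bernoulli mixture, can be sketched with a short EM loop over binary "co-change" rows (events x files). The toy data, number of components, and smoothing constants below are illustrative assumptions, not the paper's setup or dataset.

```python
import numpy as np

def bernoulli_mixture_em(X, K=2, n_iters=50, seed=0):
    """EM for a mixture of Bernoullis over the binary rows of X (n x d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)               # mixing weights
    mu = rng.uniform(0.25, 0.75, (K, d))   # per-component Bernoulli means
    for _ in range(n_iters):
        # E-step: posterior responsibility of each component for each row.
        log_p = (X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights and means, with light smoothing
        # to keep the means strictly inside (0, 1).
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X + 0.5) / (nk[:, None] + 1.0)
    return pi, mu, r

# Toy co-change data: files 0-2 tend to change together, as do files 3-5.
rng = np.random.default_rng(2)
A = (rng.random((40, 6)) < [0.9, 0.9, 0.9, 0.1, 0.1, 0.1]).astype(float)
B = (rng.random((40, 6)) < [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]).astype(float)
X = np.vstack([A, B])
pi, mu, r = bernoulli_mixture_em(X, K=2)
```

The learned component means recover the two co-change groups, so given a partial change set, the most responsible component's mean scores which other files are likely to need editing, which is the related-file prediction task above.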