
Lawrence K. Saul

Researcher at University of California, San Diego

Publications -  138
Citations -  40154

Lawrence K. Saul is an academic researcher from the University of California, San Diego. The author has contributed to research in topics: Hidden Markov model & Nonlinear dimensionality reduction. The author has an h-index of 49 and has co-authored 133 publications receiving 37255 citations. Previous affiliations of Lawrence K. Saul include Massachusetts Institute of Technology & University of Pennsylvania.

Papers
Proceedings Article

Boltzmann Chains and Hidden Markov Models

TL;DR: A statistical mechanical framework for the modeling of discrete time series is proposed, and maximum likelihood estimation is done via Boltzmann learning in one-dimensional networks with tied weights, which motivates new architectures that address particular shortcomings of HMMs.

Learning High Dimensional Correspondences from Low Dimensional Manifolds

TL;DR: This paper generalizes three unsupervised learning methods (principal components analysis, factor analysis, and locally linear embedding) to discover subspaces and manifolds that provide common low dimensional representations of different high dimensional data sets, and uses the shared representations discovered by these algorithms to put high dimensional examples from different data sets into correspondence.
Journal Article

Maximum likelihood and minimum classification error factor analysis for automatic speech recognition

TL;DR: It is found that modeling feature correlations by factor analysis leads to significantly increased likelihoods and word accuracies, and that the rate of improvement with model size often exceeds that observed in conventional HMMs.
Proceedings Article

Hidden-Unit Conditional Random Fields

TL;DR: This paper explores a generalization of conditional random fields (CRFs) in which binary stochastic hidden units appear between the data and the labels, and derives efficient algorithms for inference and learning in these models by observing that the hidden units are conditionally independent given the data and the labels.
Proceedings Article

Comparison of Large Margin Training to Other Discriminative Methods for Phonetic Recognition by Hidden Markov Models

TL;DR: This paper compares three frameworks for discriminative training of continuous-density hidden Markov models (CD-HMMs) and proposes a new framework based on margin maximization, which yields significantly lower error rates than both conditional maximum likelihood (CML) and minimum classification error (MCE) training.