Lawrence K. Saul

Researcher at University of California, San Diego

Publications -  138
Citations -  40154

Lawrence K. Saul is an academic researcher at the University of California, San Diego. His research focuses on topics including hidden Markov models and nonlinear dimensionality reduction. He has an h-index of 49 and has co-authored 133 publications receiving 37255 citations. His previous affiliations include the Massachusetts Institute of Technology and the University of Pennsylvania.

Papers
Proceedings Article

Latent Coincidence Analysis: A Hidden Variable Model for Distance Metric Learning

TL;DR: This paper describes a latent variable model for supervised dimensionality reduction and distance metric learning, shows that inference in the model is completely tractable, and derives an Expectation-Maximization (EM) algorithm for parameter estimation.
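To make this concrete, here is a minimal, hypothetical sketch of the kind of pairwise score such a latent coincidence model might compute: the probability that two inputs "coincide" is modeled as a Gaussian function of their distance after a learned linear map. The function name, the parameterization, and the fixed bandwidth sigma are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def coincidence_probability(x, x_prime, W, sigma=1.0):
    # Probability that x and x_prime "coincide" after the learned linear
    # map W, modeled as a Gaussian of their Euclidean distance in the
    # mapped space (illustrative parameterization, not the paper's).
    diff = W @ x - W @ x_prime
    return float(np.exp(-(diff @ diff) / (2.0 * sigma ** 2)))

# Toy usage: random inputs, identity map standing in for a learned W.
rng = np.random.default_rng(0)
x, x_prime = rng.normal(size=3), rng.normal(size=3)
print(coincidence_probability(x, x_prime, np.eye(3)))
```

In an EM treatment, the E step would infer the posterior over the latent coincidence variable for each pair and the M step would re-estimate W and sigma; the sketch above only shows the pairwise score.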

Large margin training of acoustic models for speech recognition

TL;DR: This work presents algorithms for the large margin training of Gaussian mixture models, both as multiway classifiers in their own right and as components of larger models (e.g., the observation models in continuous-density hidden Markov models, CD-HMMs).
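As a rough illustration of what "large margin" means in this setting, the training criterion typically requires the correct class to beat every competing class by at least a unit margin and penalizes violations with a hinge loss. The notation below is schematic, not taken from the paper:

$$ \mathcal{L} \;=\; \sum_{n} \sum_{c \neq y_n} \max\!\Big(0,\; 1 + d_{y_n}(x_n) - d_{c}(x_n)\Big), $$

where $d_c(x)$ denotes a (Mahalanobis-style) distance of example $x$ under the Gaussian mixture for class $c$ and $y_n$ is the true label of $x_n$; an example incurs no loss once its distance to the correct class is smaller than its distance to every other class by at least one.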

Convex Optimizations for Distance Metric Learning and Pattern Classification

TL;DR: The authors describe two algorithms, based on convex optimization, for learning distance metrics that measure the dissimilarity between feature vectors in a multidimensional vector space.
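A common ingredient in such convex formulations is a Mahalanobis metric parameterized by a positive semidefinite matrix M, which keeps the learning problem convex in M. The Python sketch below (function names hypothetical) shows the distance itself and a projection onto the PSD cone, a typical step in projected-gradient solvers; it is not the authors' specific algorithm.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    # Squared distance under a learned metric M (assumed symmetric PSD):
    # d_M(x, y) = (x - y)^T M (x - y).
    d = x - y
    return float(d @ M @ d)

def project_psd(M):
    # Project a symmetric matrix onto the PSD cone by zeroing out its
    # negative eigenvalues -- a common step in projected-gradient
    # metric-learning solvers.
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return (V * np.clip(w, 0.0, None)) @ V.T
```
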
Proceedings Article

Using Machine Learning to Predict Path-Based Slack from Graph-Based Timing Analysis

TL;DR: A machine learning model based on bigrams of path stages is proposed to predict expensive path-based analysis (PBA) results from relatively inexpensive graph-based analysis (GBA) results, with the potential to substantially reduce pessimism while retaining the lower turnaround time of GBA.
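A hedged sketch of the feature extraction this summary alludes to: each timing path is reduced to counts of bigrams of consecutive stage labels, which can then feed a regression model that predicts the PBA slack from the GBA slack plus these counts. The stage labels and function below are illustrative, not the paper's exact encoding.

```python
from collections import Counter

def bigram_features(path_stages):
    # Count bigrams of consecutive stage labels along a timing path
    # (the labels here are hypothetical cell names).
    return Counter(zip(path_stages, path_stages[1:]))

# Toy path: yields counts of consecutive (stage, stage) pairs.
print(bigram_features(["NAND2", "INV", "INV", "DFF"]))
```
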
Proceedings Article

Fast Learning by Bounding Likelihoods in Sigmoid Type Belief Networks

TL;DR: This work proposes to sidestep the intractable E step by bounding likelihoods instead of computing them exactly, and shows that estimation of the network parameters can be made fast by performing it in either of the alternative domains.
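The "bounding" here is the standard variational idea: replace an intractable log-likelihood with a tractable lower bound and optimize that instead. In generic notation (not the paper's), with visible units $v$, hidden units $h$, and any tractable distribution $q(h)$, Jensen's inequality gives

$$ \log p(v) \;=\; \log \sum_{h} p(v, h) \;\ge\; \sum_{h} q(h) \, \log \frac{p(v, h)}{q(h)}, $$

so the E step can maximize this bound over a restricted family of $q$ rather than compute the exact posterior.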