
E.W.D. Whittaker

Researcher at University of Cambridge

Publications: 6
Citations: 286

E.W.D. Whittaker is an academic researcher from the University of Cambridge. The author has contributed to research on the topics of word error rate and perplexity, has an h-index of 5, and has co-authored 6 publications receiving 284 citations.

Papers
Proceedings ArticleDOI

The 1998 HTK system for transcription of conversational telephone speech

TL;DR: The 1998 HTK large vocabulary speech recognition system for conversational telephone speech was used in the NIST 1998 Hub5E evaluation. The system includes reduced-bandwidth analysis, side-based cepstral feature normalisation, vocal tract length normalisation (VTLN), triphone and quinphone hidden Markov models (HMMs) built using speaker adaptive training (SAT), maximum likelihood linear regression (MLLR) speaker adaptation, and confidence-score-based system combination.
Proceedings ArticleDOI

Comparison of part-of-speech and automatically derived category-based language models for speech recognition

TL;DR: This paper compares various category-based language models used in conjunction with a word-based trigram via linear interpolation, and finds the largest improvement with a model using automatically determined categories.
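The linear-interpolation scheme described in this abstract can be sketched as follows. This is a minimal illustration only: the probability tables, category map, and the interpolation weight of 0.7 below are toy values, not taken from the paper's models.

```python
# Sketch of linearly interpolating a word-based model with a
# category-based model:
#   P(w | h) = lam * P_word(w | h)
#            + (1 - lam) * P_cat(c(w) | c(h)) * P(w | c(w))
# All tables below are toy values for illustration only.

def interpolated_prob(word, history, p_word, p_cat_seq, p_word_in_cat,
                      category, lam=0.7):
    """Interpolated probability of `word` given the `history` tuple."""
    c_w = category[word]                          # category of the word
    c_h = tuple(category[h] for h in history)     # categories of the history
    class_term = (p_cat_seq.get((c_h, c_w), 0.0)
                  * p_word_in_cat.get((c_w, word), 0.0))
    return lam * p_word.get((history, word), 0.0) + (1 - lam) * class_term

# Toy example: trigram history ("the", "quick") followed by "fox".
category = {"the": "DET", "quick": "ADJ", "fox": "NOUN"}
p_word = {(("the", "quick"), "fox"): 0.2}         # word trigram P(w | h)
p_cat_seq = {(("DET", "ADJ"), "NOUN"): 0.5}       # category trigram
p_word_in_cat = {("NOUN", "fox"): 0.1}            # P(w | c(w))

p = interpolated_prob("fox", ("the", "quick"),
                      p_word, p_cat_seq, p_word_in_cat, category)
# 0.7 * 0.2 + 0.3 * (0.5 * 0.1) = 0.155
```

The interpolation weight would in practice be optimised on held-out data; here it is simply fixed for the sketch.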

The 1997 HTK broadcast news transcription system

TL;DR: The recent development of the HTK broadcast news transcription system is presented. The system uses data for which no manual preclassification or segmentation is available, so automatic segmentation techniques and compatible acoustic modelling strategies are required.

The 1998 HTK broadcast news transcription system: development and results

TL;DR: Changes to the HTK broadcast news transcription system for the November 1998 Hub4 evaluation reduced the error rate by 13% on the 1997 evaluation data; the final system had an overall word error rate of 13.8% on the 1998 evaluation data sets.
Proceedings ArticleDOI

Efficient class-based language modelling for very large vocabularies

TL;DR: This paper investigates the perplexity and word error rate performance of two different forms of class model, together with the respective data-driven algorithms for obtaining automatic word classifications, and shows that both models, when interpolated with a word model, perform similarly well.
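As a rough illustration of the perplexity measure used in this comparison, here is a minimal sketch of a class-based bigram model, in which each word probability decomposes into a category-transition probability times a within-category word probability. The category map and all probabilities are made-up toy values, not the paper's models or results.

```python
import math

# Toy class-based bigram:
#   P(w_i | w_{i-1}) = P(c(w_i) | c(w_{i-1})) * P(w_i | c(w_i))
# The category map and probabilities are illustrative only.
category = {"the": "DET", "cat": "NOUN", "sat": "VERB"}
p_cat = {("DET", "NOUN"): 0.6, ("NOUN", "VERB"): 0.5}         # category bigram
p_word_in_cat = {("NOUN", "cat"): 0.2, ("VERB", "sat"): 0.3}  # P(w | c(w))

def class_bigram_prob(word, prev):
    """Class-decomposed bigram probability P(word | prev)."""
    return (p_cat[(category[prev], category[word])]
            * p_word_in_cat[(category[word], word)])

def perplexity(words):
    """Perplexity = exp(-(1/N) * sum_i log P(w_i | w_{i-1}))."""
    log_sum = sum(math.log(class_bigram_prob(w, prev))
                  for prev, w in zip(words, words[1:]))
    return math.exp(-log_sum / (len(words) - 1))

ppl = perplexity(["the", "cat", "sat"])
```

Lower perplexity indicates that the model assigns higher probability to the test text, which is why it is used alongside word error rate when comparing the class models in this paper.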