
Amit Juneja

Researcher at University of Maryland, College Park

Publications: 19
Citations: 450

Amit Juneja is an academic researcher from the University of Maryland, College Park. The author has contributed to research in topics including Hidden Markov models and features (machine learning). The author has an h-index of 11 and has co-authored 18 publications receiving 442 citations.

Papers
Proceedings Article

Landmark-based speech recognition: report of the 2004 Johns Hopkins summer workshop

TL;DR: Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines (SVMs), dynamic Bayesian networks, and maximum entropy classification) to implement, in the form of an ASR system, current theories of human speech perception and phonology.
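
As a rough illustration of one ingredient named above, a support vector machine producing soft landmark scores that downstream models can consume, here is a minimal Python sketch. The synthetic acoustic parameters, the two-class landmark/non-landmark setup, and the probability-style output are illustrative assumptions, not the workshop systems themselves.

```python
# Minimal sketch: an SVM scoring frames as landmark vs. non-landmark.
# All data here are synthetic; real systems use acoustic-phonetic parameters.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Fake frame-level acoustic parameters (e.g., energies in coarse bands).
X_landmark = rng.normal(loc=1.0, scale=0.5, size=(200, 4))
X_other = rng.normal(loc=-1.0, scale=0.5, size=(200, 4))
X = np.vstack([X_landmark, X_other])
y = np.array([1] * 200 + [0] * 200)  # 1 = landmark frame, 0 = non-landmark

# probability=True yields soft scores that later stages can combine.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

test_frame = rng.normal(loc=1.0, scale=0.5, size=(1, 4))
print("P(landmark | frame) =", clf.predict_proba(test_frame)[0, 1])
```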
Proceedings Article

Speech segmentation using probabilistic phonetic feature hierarchy and support vector machines

TL;DR: The method overcomes the disadvantage of traditional acoustic-phonetic methods, in which errors are carried down the hierarchy, and performs considerably better than a context-dependent hidden Markov model (HMM) based approach that uses 39 mel-cepstrum-based parameters.
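
To make the "errors are not carried down the hierarchy" idea concrete, here is a toy sketch in which each phonetic-feature node contributes a posterior and manner classes are scored by products of soft decisions along their paths, with no hard thresholding at any node. The feature names, tree shape, and posterior values are assumptions for illustration and do not reproduce the paper's classifiers.

```python
# Toy probabilistic feature hierarchy: score manner classes from soft
# binary-feature posteriors instead of hard per-node decisions.

# Hypothetical binary-feature posteriors for one analysis frame,
# e.g. as produced by per-feature SVM classifiers.
posteriors = {"sonorant": 0.9, "syllabic": 0.7, "continuant": 0.2}

# Path of (feature, required value) decisions leading to each manner class.
paths = {
    "vowel":              [("sonorant", 1), ("syllabic", 1)],
    "sonorant_consonant": [("sonorant", 1), ("syllabic", 0)],
    "fricative":          [("sonorant", 0), ("continuant", 1)],
    "stop":               [("sonorant", 0), ("continuant", 0)],
}

def path_score(path):
    """Product of soft decisions along the path; no hard thresholding."""
    score = 1.0
    for feature, value in path:
        p = posteriors[feature]
        score *= p if value == 1 else (1.0 - p)
    return score

scores = {cls: path_score(path) for cls, path in paths.items()}
print(max(scores, key=scores.get), scores)
```

Because every node contributes a probability rather than a verdict, a single uncertain feature lowers a class score without eliminating the class outright.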
Dissertation

Speech recognition based on phonetic features and acoustic landmarks

TL;DR: The probabilistic framework makes the acoustic-phonetic approach to speech recognition suitable for practical recognition tasks as well as compatible with probabilistic pronunciation and language models.
Journal Article

A probabilistic framework for landmark detection based on phonetic features for automatic speech recognition.

TL;DR: A probabilistic framework for a landmark-based approach to speech recognition is presented; it obtains multiple landmark sequences in continuous speech and uses manner-class pronunciation models for isolated word recognition with a known vocabulary.
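
A toy sketch of the scoring idea suggested by this abstract: words in a known vocabulary carry manner-class pronunciation models, and the recognizer compares them against multiple hypothesized landmark sequences, keeping the best-scoring match per word. The vocabulary, sequences, and scores below are invented for illustration and are not the paper's models.

```python
# Toy isolated-word scoring over multiple candidate landmark sequences.

# Candidate landmark sequences with acoustic scores (e.g., products of
# per-landmark posteriors from a detection stage).
candidates = [
    (("stop", "vowel", "fricative"), 0.42),
    (("stop", "vowel", "stop"), 0.31),
]

# Manner-class pronunciation models for a known vocabulary.
vocabulary = {
    "cease": ("fricative", "vowel", "fricative"),
    "dose":  ("stop", "vowel", "fricative"),
    "dot":   ("stop", "vowel", "stop"),
}

def word_score(pron):
    """Best acoustic score among candidate sequences matching the pronunciation."""
    matches = [score for seq, score in candidates if seq == pron]
    return max(matches, default=0.0)

scores = {word: word_score(pron) for word, pron in vocabulary.items()}
print(max(scores, key=scores.get), scores)
```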
Proceedings Article

Segmentation of continuous speech using acoustic-phonetic parameters and statistical learning

TL;DR: This paper presents a methodology for combining acoustic-phonetic knowledge with statistical learning for automatic segmentation and classification of continuous speech, and achieves better segmentation performance on continuous speech than an HMM-based approach that uses 39 cepstrum-based speech parameters.
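
To sketch how frame-level statistical decisions could be turned into a segmentation, the example below places segment boundaries wherever the most likely manner class changes between frames. The classes, posteriors, and 10 ms frame shift are assumed for illustration only and are not taken from the paper.

```python
# Toy segmentation: convert per-frame class posteriors into labeled segments.
import numpy as np

classes = ["silence", "stop", "vowel", "fricative"]

# Hypothetical per-frame class posteriors (rows: frames, columns: classes).
frame_posteriors = np.array([
    [0.90, 0.05, 0.03, 0.02],
    [0.10, 0.70, 0.10, 0.10],
    [0.05, 0.10, 0.80, 0.05],
    [0.05, 0.05, 0.85, 0.05],
    [0.10, 0.10, 0.10, 0.70],
])

best = frame_posteriors.argmax(axis=1)
frame_shift_ms = 10  # assumed analysis frame rate

# Emit (start_ms, end_ms, class) wherever the winning class changes.
segments, start = [], 0
for i in range(1, len(best) + 1):
    if i == len(best) or best[i] != best[start]:
        segments.append((start * frame_shift_ms, i * frame_shift_ms, classes[best[start]]))
        start = i
print(segments)
```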