Liang Dong
Researcher at National University of Singapore
Publications - 7
Citations - 115
Liang Dong is an academic researcher from the National University of Singapore. The author has contributed to research on topics including hidden Markov models and visemes, has an h-index of 4, and has co-authored 7 publications receiving 104 citations.
Papers
Journal ArticleDOI
Recognition of visual speech elements using adaptively boosted hidden Markov models
Say Wei Foo, Yong Lian, Liang Dong +2 more
TL;DR: A novel approach for recognition of visual speech elements is presented that combines adaptive boosting with hidden Markov models (HMMs) to build an AdaBoost-HMM classifier, which outperforms traditional HMM classifiers in accuracy, especially for visemes extracted from contexts.
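The boosting half of such a classifier follows the standard AdaBoost weight-update loop. Below is a minimal self-contained sketch of that loop: the weak learners here are simple threshold stumps standing in for the HMMs the paper uses, and the data, thresholds, and function names are illustrative assumptions, not taken from the paper.

```python
import math

# Made-up 1-D samples with labels in {-1, +1}; in AdaBoost-HMM the weak
# learners would be per-viseme HMMs, here threshold stumps stand in.
X = [0.1, 0.4, 0.35, 0.8, 0.9, 0.6]
y = [-1, -1, -1, 1, 1, 1]

def stump(threshold):
    """Weak learner: predict +1 if x > threshold, else -1."""
    return lambda x: 1 if x > threshold else -1

def adaboost(X, y, thresholds, rounds=3):
    n = len(X)
    w = [1.0 / n] * n                       # sample weights, start uniform
    ensemble = []                           # (alpha, learner) pairs
    for _ in range(rounds):
        # Pick the stump with the lowest weighted training error.
        best_err, best_h = None, None
        for t in thresholds:
            h = stump(t)
            err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
            if best_err is None or err < best_err:
                best_err, best_h = err, h
        eps = max(best_err, 1e-10)          # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - eps) / eps)   # learner's vote weight
        ensemble.append((alpha, best_h))
        # Re-weight: misclassified samples gain weight for the next round.
        w = [wi * math.exp(-alpha * yi * best_h(xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all weak learners."""
    score = sum(alpha * h(x) for alpha, h in ensemble)
    return 1 if score >= 0 else -1

model = adaboost(X, y, thresholds=[0.3, 0.5, 0.7])
```

The key design point carried over to AdaBoost-HMM is the re-weighting step: each round, samples the current ensemble gets wrong receive more weight, so the next weak model concentrates on the hard cases.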
Book ChapterDOI
Recognition of Visual Speech Elements Using Hidden Markov Models
Say Wei Foo, Liang Dong +1 more
TL;DR: A novel subword lip reading system using continuous Hidden Markov Models (HMMs) configured according to the statistical features of lip motion and trained with the Baum-Welch method is presented.
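HMM-based recognition of this kind typically scores an observation sequence against each candidate viseme model and picks the model with the highest likelihood, computed by the forward algorithm. A minimal sketch follows; the two-state discrete HMMs, their probabilities, and the observation sequence are illustrative assumptions, not parameters from the paper.

```python
def forward_likelihood(obs, pi, A, B):
    """Forward algorithm: P(obs | model) for a discrete-output HMM.

    pi[i]   initial probability of state i
    A[i][j] transition probability from state i to state j
    B[i][k] probability that state i emits symbol k
    """
    n_states = len(pi)
    # Initialisation with the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n_states)]
    # Induction over the remaining observations.
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n_states)) * B[j][o]
                 for j in range(n_states)]
    return sum(alpha)

# Two toy 2-state viseme models sharing pi and A (numbers are made up).
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B1 = [[0.9, 0.1], [0.2, 0.8]]   # hypothetical model for viseme "a"
B2 = [[0.1, 0.9], [0.8, 0.2]]   # hypothetical model for viseme "b"

obs = [0, 0, 1]                  # quantised lip-feature sequence
scores = {"a": forward_likelihood(obs, pi, A, B1),
          "b": forward_likelihood(obs, pi, A, B2)}
best = max(scores, key=scores.get)
```

Baum-Welch training, mentioned in the summary above, iteratively re-estimates `pi`, `A`, and `B` using these same forward quantities together with their backward counterparts; the scoring step shown here is unchanged once the models are trained.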
Journal ArticleDOI
A two-channel training algorithm for hidden Markov model and its application to lip reading
Liang Dong, Say Wei Foo, Yong Lian +2 more
TL;DR: Results of experiments on identifying a group of confusable visemes indicate that the proposed approach to discriminative training of HMM is able to increase the recognition accuracy by an average of 20% compared with the conventional HMMs that are trained with the Baum-Welch estimation.
Proceedings ArticleDOI
A boosted multi-HMM classifier for recognition of visual speech elements
Say Wei Foo, Liang Dong +1 more
TL;DR: A novel boosted classifier using multiple hidden Markov models (HMMs) is reported that is significantly better in terms of accuracy and robustness than the traditional single HMM classifier.
Proceedings ArticleDOI
Modeling continuous visual speech using boosted viseme models
Liang Dong, Say Wei Foo, Yong Lian +2 more
TL;DR: A novel connected-viseme approach for modeling continuous visual speech is presented that adopts AdaBoost-HMMs as the viseme models; experimental results indicate that the proposed method outperforms the conventional approach.