
Viseme

About: Viseme is a research topic. Over the topic's lifetime, 865 publications have been published within it, receiving 17,889 citations.


Papers
Journal ArticleDOI
TL;DR: The best multimodal system, which combines the two acoustic cues with the visual cue, improves recognition of the place-of-articulation (POA) and manner-of-articulation (MOA) categories by 3% and of vowels by 2%.

10 citations

Book ChapterDOI
11 Apr 2002
TL;DR: A new system for the recognition of visual speech is proposed, based on support vector machines, which have proved to be powerful classifiers in other visual tasks; the use of viseme models rather than entire-word models allows easy generalization to large-vocabulary recognition tasks.
Abstract: Speech recognition based on visual information is an emerging research field. We propose here a new system for the recognition of visual speech based on support vector machines, which have proved to be powerful classifiers in other visual tasks. We use support vector machines to recognize the mouth shape corresponding to the different phones produced. To model the temporal character of speech, we employ Viterbi decoding in a network of support vector machines. The recognition rate obtained is higher than those reported earlier when the same features were used. The proposed solution offers the advantage of easy generalization to large-vocabulary recognition tasks due to the use of viseme models, as opposed to entire-word models.
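The abstract's core idea — per-frame classifier scores combined with Viterbi decoding to recover the most likely viseme sequence over time — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frame scores stand in for SVM outputs converted to log-scores, and both the score values and the transition probabilities are invented for the example.

```python
import numpy as np

# Hypothetical per-frame viseme scores (stand-ins for SVM decision values
# converted to probabilities); rows = time frames, columns = viseme classes.
frame_scores = np.log(np.array([
    [0.90, 0.05, 0.05],
    [0.80, 0.10, 0.10],
    [0.10, 0.80, 0.10],
    [0.05, 0.90, 0.05],
]))

# Assumed transition probabilities between viseme classes (self-transitions
# favored, to model that a viseme persists across several video frames).
trans = np.log(np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
]))

def viterbi(frame_scores, trans):
    """Return the highest-scoring class sequence through the frames."""
    T, N = frame_scores.shape
    delta = np.zeros((T, N))          # best log-score ending in class j at time t
    back = np.zeros((T, N), dtype=int)  # backpointers to the previous class
    delta[0] = frame_scores[0]
    for t in range(1, T):
        for j in range(N):
            cand = delta[t - 1] + trans[:, j]
            back[t, j] = int(np.argmax(cand))
            delta[t, j] = cand[back[t, j]] + frame_scores[t, j]
    # Trace the best path backwards from the final frame.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

print(viterbi(frame_scores, trans))  # -> [0, 0, 1, 1]
```

With these numbers the decoder recovers two frames of class 0 followed by two of class 1; the self-transition bias is what smooths over noisy individual frames.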

10 citations

Posted Content
TL;DR: The authors use a structured approach for devising speaker-dependent viseme classes, which enables the creation of a set of phoneme-to-viseme maps where each has a different quantity of visemes ranging from two to 45.
Abstract: In machine lip-reading there is continued debate and research around the correct classes to be used for recognition. In this paper we use a structured approach for devising speaker-dependent viseme classes, which enables the creation of a set of phoneme-to-viseme maps where each has a different quantity of visemes, ranging from two to 45. Viseme classes are based upon the mapping of articulated phonemes, which have been confused during phoneme recognition, into viseme groups. Using these maps with the LiLIR dataset, we show the effect of changing the viseme map size in speaker-dependent machine lip-reading, measured by word recognition correctness, and so demonstrate that word recognition with phoneme classifiers is not just possible, but often better than word recognition with viseme classifiers. Furthermore, there are intermediate units between visemes and phonemes which are better still.
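The phoneme-to-viseme maps discussed above are, at their simplest, many-to-one lookups from phoneme labels to viseme classes. The sketch below illustrates the idea only: the groupings shown are a generic textbook-style example (bilabials, labiodentals, etc.), not the confusion-derived, speaker-dependent maps the paper constructs, and the class names are invented.

```python
# Toy phoneme-to-viseme map (illustrative grouping only; the paper derives
# its maps from phoneme-recognition confusions, per speaker).
phoneme_to_viseme = {
    "p": "V1", "b": "V1", "m": "V1",            # bilabials look alike on the lips
    "f": "V2", "v": "V2",                       # labiodentals
    "t": "V3", "d": "V3", "s": "V3", "z": "V3", # alveolars
    "k": "V4", "g": "V4",                       # velars
}

def phonemes_to_visemes(phonemes):
    """Collapse a phoneme sequence into its (coarser) viseme sequence.

    Phonemes missing from the map fall through to a placeholder class.
    """
    return [phoneme_to_viseme.get(p, "V?") for p in phonemes]

print(phonemes_to_visemes(["b", "a", "t"]))  # -> ['V1', 'V?', 'V3']
```

Varying how many distinct classes appear on the right-hand side of such a map is exactly the "viseme map size" the abstract sweeps from two to 45.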

10 citations

Posted Content
TL;DR: This paper shows that computer lip-reading is not heavily constrained by video resolution, pose, lighting, and other practical factors, but that visemes, which were defined over a century ago, are unlikely to be optimal units for a modern computer lip-reading system.
Abstract: In the quest for greater computer lip-reading performance there are a number of tacit assumptions which are either present in the datasets (high resolution for example) or in the methods (recognition of spoken visual units called visemes for example). Here we review these and other assumptions and show the surprising result that computer lip-reading is not heavily constrained by video resolution, pose, lighting and other practical factors. However, the working assumption that visemes, which are the visual equivalent of phonemes, are the best unit for recognition does need further examination. We conclude that visemes, which were defined over a century ago, are unlikely to be optimal for a modern computer lip-reading system.

10 citations



Network Information
Related Topics (5)

Topic                      Papers    Citations   Relatedness
Vocabulary                 44.6K     941.5K      78%
Feature vector             48.8K     954.4K      76%
Feature extraction         111.8K    2.1M        75%
Feature (computer vision)  128.2K    1.7M        74%
Unsupervised learning      22.7K     1M          73%
Performance
Metrics
No. of papers in the topic in previous years
Year   Papers
2023   7
2022   12
2021   13
2020   39
2019   19
2018   22