Topic

Viseme

About: Viseme is a research topic. Over the lifetime, 865 publications have been published within this topic receiving 17889 citations.


Papers
Proceedings Article
18 Oct 2012
TL;DR: A framework that helps people with hearing disabilities learn to articulate correctly in Romanian and that also serves as a training assistant for Romanian-language lip-reading.
Abstract: In this paper, we propose a 3D facial animation model for simulating visual speech production in the Romanian language. Using a set of existing 3D key shapes representing facial animation visemes, fluid animations describing facial activity during speech pronunciation are provided, taking into account the Romanian-language coarticulation effects discussed in this paper. A novel mathematical model for defining efficient viseme coarticulation functions for 3D facial animations is also provided. The 3D tongue activity can be closely observed in real time while different words are pronounced in Romanian, by allowing transparency for the 3D head model, thus making the tongue, teeth, and the entire oral cavity visible. The purpose of this work is to provide a framework that helps people with hearing disabilities learn to articulate correctly in Romanian and that also serves as a training assistant for Romanian-language lip-reading.

2 citations
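The abstract above describes blending viseme key shapes with coarticulation functions so that neighbouring visemes influence each other during speech. The paper's actual mathematical model is not given here, so the following is only a minimal sketch of one well-known approach in this family, dominance-function blending in the spirit of Cohen and Massaro's coarticulation model; the function names and the negative-exponential dominance shape are illustrative assumptions, not the paper's method.

```python
import math

def dominance(t, center, magnitude=1.0, rate=6.0):
    # Negative-exponential dominance: a viseme's influence peaks at its
    # center time and decays symmetrically on both sides (an assumed shape).
    return magnitude * math.exp(-rate * abs(t - center))

def blend_visemes(t, visemes):
    """Return normalized blend weights for each viseme key shape at time t.

    visemes: list of (center_time, key_shape) pairs; only the center times
    are used to compute weights here.
    """
    doms = [dominance(t, center) for center, _ in visemes]
    total = sum(doms) or 1.0
    # Normalizing makes the weights a convex combination, so the blended
    # face stays inside the span of the key shapes.
    return [d / total for d in doms]
```

Evaluated at one viseme's center time, that viseme receives the largest weight, while nearby visemes still contribute, which is the coarticulation effect the abstract refers to.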

Proceedings ArticleDOI
27 Aug 2007
TL;DR: The experimental results show that the proposed lip movement model can synthesize the individualized speech animation with high naturalness at different speech rates.
Abstract: A novel lip movement model related to speech rate is proposed in this paper. The model is constructed based on the research results on the viscoelasticity of skin-muscle tissue and the quantitative relationship between lip muscle force and speech rate. In order to show the validity of the model, we have applied it to our Chinese speech animation system. The experimental results show that our system can synthesize the individualized speech animation with high naturalness at different speech rates.

2 citations
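The second abstract grounds its lip-movement model in the viscoelasticity of skin-muscle tissue and the relation between lip muscle force and speech rate. The paper's quantitative model is not reproduced on this page, so here is only a generic sketch of the simplest viscoelastic element, a Kelvin-Voigt spring-damper integrated with forward Euler; the parameter values and the function name are illustrative assumptions.

```python
def lip_displacement(force, stiffness, damping, dt, steps):
    """Kelvin-Voigt sketch: damping * x' + stiffness * x = force.

    Integrates the displacement x with forward Euler and returns the
    trajectory; higher driving force (as at faster speech rates) yields
    a larger steady-state displacement force / stiffness.
    """
    x = 0.0
    trajectory = []
    for _ in range(steps):
        x += dt * (force - stiffness * x) / damping
        trajectory.append(x)
    return trajectory
```

With a constant force the displacement relaxes exponentially toward force / stiffness, which is the characteristic creep behaviour of viscoelastic tissue that the abstract invokes.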

Proceedings ArticleDOI
14 May 2017
TL;DR: This work investigates how speech sound quality in early learning affects the model's capability to learn new vowel sounds and shows that new vowels can be acquired although they were not included in early learning.
Abstract: Many studies emphasize the importance of infant-directed speech: more strongly articulated, higher-quality speech helps infants to better distinguish different speech sounds. This effect has been widely investigated in terms of the infant's perceptual capabilities, but few studies have examined whether infant-directed speech has an effect on articulatory learning. In earlier studies, we developed a model that learns articulatory control for a 3D vocal tract model via goal babbling. Exploration is organized in the space of outcomes. This so-called goal space is generated from a set of ambient speech sounds. Similarly to how speech from the environment shapes an infant's speech perception, the data from which the goal space is learned shapes the later learning process: it determines which sounds the model is able to discriminate, and thus which sounds it can eventually learn to produce. We investigate how speech sound quality in early learning affects the model's capability to learn new vowel sounds. The model is trained either on hyperarticulated (tense) or on hypoarticulated (lax) vowels. Then we retrain the model with vowels from the other set. Results show that new vowels can be acquired although they were not included in early learning. There is, however, an effect of learning order: models first trained on the more strongly articulated tense vowels accommodate more easily to new vowel sounds later on.

2 citations
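The abstract above describes goal babbling: exploration is driven by sampling targets in an outcome (goal) space, inverting the current sensorimotor mapping to propose articulatory parameters, and perturbing them. The paper's 3D vocal tract model is far richer than anything shown here, so the following is only a toy sketch of the exploration loop; the linear `forward` stand-in for the synthesizer and all parameter values are illustrative assumptions.

```python
import random

def forward(params):
    # Stand-in for a vocal-tract synthesizer: maps articulatory
    # parameters to a 1-D acoustic outcome (hypothetical weights).
    return sum(p * w for p, w in zip(params, (0.5, -0.3, 0.8)))

def goal_babbling(goals, steps=200, noise=0.05):
    """Minimal goal-babbling loop.

    Each step samples a goal in outcome space, uses a nearest-neighbour
    inverse model over past attempts to propose parameters, perturbs
    them, executes them, and stores the (params, outcome) pair.
    """
    memory = [([0.0, 0.0, 0.0], forward([0.0, 0.0, 0.0]))]
    for _ in range(steps):
        goal = random.choice(goals)
        # Nearest-neighbour inverse model: reuse the attempt whose
        # outcome was closest to the sampled goal.
        params, _ = min(memory, key=lambda m: abs(m[1] - goal))
        trial = [p + random.gauss(0.0, noise) for p in params]
        memory.append((trial, forward(trial)))
    return memory
```

Because exploration is organized around goals rather than random motor commands, the set of goals (here, the abstract's ambient speech sounds) determines which regions of outcome space the model ever learns to reach, which is exactly the dependence on early input that the study examines.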


Network Information
Related Topics (5)
Vocabulary
44.6K papers, 941.5K citations
78% related
Feature vector
48.8K papers, 954.4K citations
76% related
Feature extraction
111.8K papers, 2.1M citations
75% related
Feature (computer vision)
128.2K papers, 1.7M citations
74% related
Unsupervised learning
22.7K papers, 1M citations
73% related
Performance Metrics
No. of papers in the topic in previous years
Year  Papers
2023  7
2022  12
2021  13
2020  39
2019  19
2018  22