Open Access

Making a thinking-talking head

TLDR
The Thinking-Talking Head is described: an interdisciplinary project that sits between and draws upon engineering/computer science and behavioural/cognitive science, research and performance, and implementation and evaluation. Its head functions will be tuned and evaluated using various paradigms, including an imitation paradigm.
Abstract
This paper describes the Thinking-Talking Head, an interdisciplinary project that sits between and draws upon engineering/computer science and behavioural/cognitive science; research and performance; implementation and evaluation. The project involves collaboration between computer scientists, engineers, language technologists and cognitive scientists, and its aim is twofold: (a) to create a 3-D computer animation of a human head that will interact in real time with human agents, and (b) to serve as a research platform to drive research in the contributing disciplines, and in talking-head research in general. The thinking-talking head will emulate elements of face-to-face conversation through speech (including intonation), gaze and gesture. It must therefore have an active sensorium that accurately reflects the properties of its immediate environment, and must be able to generate appropriate communicative signals to feed back to the interlocutor. Here we describe the current implementation and outline how we are tackling issues concerning both the outputs from the head (synthetic voice, visual speech, facial expressiveness and naturalness) and the inputs to it (auditory-visual speech recognition, emotion recognition, auditory-visual speaker localization). We describe how these head functions will be tuned and evaluated using various paradigms, including an imitation paradigm.
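The perceive-and-respond loop the abstract describes (localize and recognize the interlocutor, then drive voice, gaze and gesture outputs) can be sketched as below. All class and function names here are illustrative assumptions, not the project's actual architecture or API; the sensorium and output stages are stubs.

```python
# Hypothetical sketch of the head's input/output loop described in the
# abstract. Function and class names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Percept:
    speaker_angle: float   # auditory-visual speaker localization
    utterance: str         # auditory-visual speech recognition
    emotion: str           # emotion recognition

def perceive(audio, video) -> Percept:
    # Stub sensorium: a real system would fuse the audio and video
    # streams here to localize and understand the interlocutor.
    return Percept(speaker_angle=12.5, utterance="hello", emotion="neutral")

def respond(p: Percept) -> dict:
    # Stub output stage covering the channels the abstract lists:
    # synthetic voice, gaze, and facial expressiveness.
    return {
        "gaze": p.speaker_angle,              # orient toward the speaker
        "speech": f"You said: {p.utterance}",  # synthetic voice content
        "expression": p.emotion,               # mirror detected affect
    }

percept = perceive(audio=None, video=None)
action = respond(percept)
```

A real implementation would run this loop continuously and in real time, which is what makes the active sensorium requirement demanding.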


Citations
Proceedings ArticleDOI

Language teaching in a mixed reality games environment

TL;DR: This paper explores the methodology the team is developing to independently control for degree of language knowledge and degree of world experience.
Proceedings ArticleDOI

PETA: a pedagogical embodied teaching agent

TL;DR: A hybrid real and virtual system that monitors and teaches children in an everyday classroom environment without requiring any special virtual reality set ups or any knowledge that there is a computer involved is described.
Journal ArticleDOI

Automatic Deictic Gestures for Animated Pedagogical Agents

TL;DR: A system that automatically generates deictic gestures for animated pedagogical agents (APAs) takes audio and text as input, which define what the APA has to say, and generates animated gestures based on a set of rules.
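The rule-based generation this TL;DR mentions might look like the sketch below: scan the agent's timed words for deictic terms and emit pointing-gesture events aligned to them. The rule, word set, and timings are invented assumptions, not the cited system's actual rules.

```python
# Illustrative rule-based deictic-gesture pass, in the spirit of the
# TL;DR above. The deictic word list and the 0.2 s anticipation rule
# are assumptions for the sketch, not the paper's rule set.
DEICTIC_WORDS = {"this", "that", "here", "there"}

def deictic_gestures(timed_words):
    """timed_words: list of (word, start_time_in_seconds) pairs.
    Returns one pointing-gesture event per deictic word."""
    gestures = []
    for word, t in timed_words:
        if word.lower() in DEICTIC_WORDS:
            # Rule: the pointing stroke starts slightly before the word,
            # so gesture and speech appear synchronized.
            gestures.append({"type": "point", "start": max(0.0, t - 0.2)})
    return gestures

events = deictic_gestures([("look", 0.0), ("here", 0.5), ("now", 1.0)])
```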
Proceedings ArticleDOI

Digital learning activities delivered by eloquent instructor avatars: scaling with problem instance

TL;DR: An approach for achieving scalable authoring of digital learning activities, without sacrificing delivery eloquence, by adapting the targets of the deictic gestures, the speech, and the synchronization between speech and gestures is presented.
References
Proceedings ArticleDOI

A morphable model for the synthesis of 3D faces

TL;DR: A new technique for modeling textured 3D faces by transforming the shape and texture of the examples into a vector space representation, which regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance.
Proceedings ArticleDOI

A muscle model for animating three-dimensional facial expression

Keith Waters
TL;DR: Describes the development of a parameterized facial muscle process that incorporates a model for creating realistic facial animation, allowing a richer vocabulary and a more general approach to modelling the primary facial expressions.
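The core of such a muscle model can be sketched as below: vertices inside a muscle's zone of influence are pulled toward its attachment point, with the pull fading with distance. The linear falloff and 2-D setup here are simplifications assumed for illustration, not Waters' exact formulation.

```python
# Rough sketch of a linear facial muscle acting on mesh vertices.
# The falloff function and parameters are illustrative assumptions.
import math

def apply_muscle(vertices, attachment, contraction, radius):
    """Pull each 2-D vertex toward the attachment point, scaled by a
    linear falloff inside the zone of influence (dist < radius)."""
    moved = []
    for x, y in vertices:
        dx, dy = attachment[0] - x, attachment[1] - y
        dist = math.hypot(dx, dy)
        if 0 < dist < radius:
            falloff = 1.0 - dist / radius          # fade with distance
            x += contraction * falloff * dx / dist  # unit vector * pull
            y += contraction * falloff * dy / dist
        moved.append((x, y))
    return moved

verts = [(1.0, 0.0), (5.0, 0.0)]
out = apply_muscle(verts, attachment=(0.0, 0.0), contraction=0.5, radius=3.0)
```

Here the first vertex (distance 1, inside the radius-3 zone) moves toward the attachment, while the second (distance 5) is outside the zone and stays put.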
Journal ArticleDOI

Physically‐based facial modelling, analysis, and animation

TL;DR: A new 3D hierarchical model of the human face is developed that incorporates a physically-based approximation to facial tissue and a set of anatomically-motivated facial muscle actuators and is efficient enough to produce facial animation at interactive rates on a high-end graphics workstation.
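A physically-based tissue approximation like the one this TL;DR mentions is typically built from mass-spring elements. Below is a one-element sketch using semi-implicit Euler integration; the constants and single-spring setup are assumptions for illustration, not the paper's model.

```python
# One damped spring-mass element, stepped with semi-implicit Euler:
# a toy stand-in for the facial-tissue approximation. Constants invented.
def spring_step(x, v, rest_len, k, damping, mass, dt):
    """Point mass on a 1-D spring anchored at the origin."""
    force = -k * (x - rest_len) - damping * v  # Hooke's law + damping
    v += (force / mass) * dt                   # update velocity first
    x += v * dt                                # then position (symplectic)
    return x, v

x, v = 1.5, 0.0   # tissue point stretched past its rest length of 1.0
for _ in range(1000):
    x, v = spring_step(x, v, rest_len=1.0, k=10.0, damping=0.5,
                       mass=1.0, dt=0.01)
# After 10 simulated seconds the damped element settles near rest length.
```

The claim of interactive rates in the TL;DR corresponds to stepping many such elements per frame, which simple integrators like this make cheap.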
Journal ArticleDOI

Can computer personalities be human personalities?

TL;DR: The findings demonstrate that personality does not require richly defined agents, sophisticated pictorial representations, natural language processing, or artificial intelligence; rather, even the most superficial manipulations are sufficient to exhibit personality, with powerful effects.