
Savant Karunaratne

Researcher at University of Sydney

Publications: 8
Citations: 32

Savant Karunaratne is an academic researcher from the University of Sydney. The author has contributed to research in the topics of computer facial animation and computer animation. The author has an h-index of 4 and has co-authored 8 publications receiving 32 citations.

Papers
Journal Article (DOI)

Modelling and combining emotions, visual speech and gestures in virtual head models

TL;DR: The blending algorithm enables animators to script their animations at a higher, more user-friendly level, or to use the results of artificial intelligence and computational psychology methods to generate and manage expressive, autonomous or near-autonomous virtual characters, without relying on performance-based methods.
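As a rough illustration of the kind of channel blending described above (not the paper's actual algorithm), a weighted combination of separate emotion, visual-speech and gesture parameter tracks might look like the following sketch; all parameter names and weights are hypothetical.

```python
# A minimal sketch (not the authors' algorithm) of blending separate animation
# channels -- emotion, visual speech and gesture -- into one set of facial
# animation parameter values via per-channel weights.

def blend_channels(channels, weights):
    """Combine per-channel facial parameter dicts into one weighted pose."""
    blended = {}
    for name, params in channels.items():
        w = weights.get(name, 0.0)
        for param, value in params.items():
            blended[param] = blended.get(param, 0.0) + w * value
    return blended

emotion = {"brow_raise": 0.8, "mouth_corner_up": 0.6}   # e.g. "surprise"
speech  = {"jaw_open": 0.5, "lip_round": 0.3}           # current viseme
gesture = {"head_nod": 0.4}

pose = blend_channels(
    {"emotion": emotion, "speech": speech, "gesture": gesture},
    {"emotion": 0.7, "speech": 1.0, "gesture": 0.5},
)
print(pose)
```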
Proceedings Article (DOI)

3D animated movie actor training using fuzzy logic

TL;DR: An expert system based on fuzzy knowledge bases is presented that helps move towards automating the task of animating the heads and faces of expressive, talking, acting humanoids and other characters inhabiting virtual worlds.
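For readers unfamiliar with fuzzy knowledge bases, the sketch below shows, under simplified assumptions, how a handful of fuzzy rules with triangular membership functions could map a scene descriptor such as an anger level to a frown intensity; the rules and variable names are illustrative only and are not taken from the paper.

```python
# A minimal fuzzy-inference sketch: three rules relate "anger" to "frown"
# intensity; defuzzification is a weighted average of rule outputs.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_frown(anger):
    rules = [
        (tri(anger, 0.0, 0.0, 0.5), 0.1),   # IF anger is low  THEN frown is slight
        (tri(anger, 0.2, 0.5, 0.8), 0.5),   # IF anger is med  THEN frown is moderate
        (tri(anger, 0.5, 1.0, 1.0), 0.9),   # IF anger is high THEN frown is strong
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(infer_frown(0.7))  # a frown intensity between moderate and strong
```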
Journal Article (DOI)

A new efficient expression generation and automatic cloning method for multimedia actors

TL;DR: A new and efficient method for facial expression generation on cloned synthetic head models that has real-time performance, is less computationally expensive than physically based models, and has greater anatomical correspondence than rational free-form deformation or spline-based techniques.
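The general idea behind expression cloning, transferring per-vertex expression displacements from a source head onto a cloned head through a precomputed correspondence, can be sketched as follows; the meshes and identity correspondence used here are toy placeholders rather than the paper's method.

```python
# A minimal sketch of expression transfer onto a cloned head via a
# vertex-correspondence map. Real systems also rescale and reorient the
# displacements to fit the target geometry; that step is omitted here.

import numpy as np

def clone_expression(src_neutral, src_expr, dst_neutral, correspondence):
    """Re-apply per-vertex expression displacements on a cloned head."""
    displacements = src_expr - src_neutral            # motion of each source vertex
    dst_expr = dst_neutral.copy()
    for dst_i, src_i in correspondence.items():       # target vertex -> source vertex
        dst_expr[dst_i] += displacements[src_i]
    return dst_expr

src_neutral = np.zeros((4, 3))
src_smile   = src_neutral + np.array([0.0, 0.1, 0.0])  # all vertices shift upward
dst_neutral = np.ones((4, 3))
mapping     = {i: i for i in range(4)}                  # identity correspondence

print(clone_expression(src_neutral, src_smile, dst_neutral, mapping))
```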

A fuzzy rule-based interactive methodology for training multimedia actors

TL;DR: This paper presents an expert system based on fuzzy knowledge bases that helps move towards automating the task of animating virtual human heads and faces in virtual worlds.
Proceedings Article (DOI)

Interactive emotional response computation for scriptable multimedia actors

TL;DR: A virtual actor framework developed by the authors aids animators in automating the modelling and animation of emotive virtual human heads with visual speech and gestures. The 'situation processor' component of this system uses the OCC cognitive-emotional theory to intelligently query a user and determine the emotional response of a synthetic character in a movie scene.
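The OCC model appraises events by their desirability, by whether an anticipated prospect was confirmed, and by whose goals the consequences affect. The sketch below is a heavily simplified appraisal of that kind, illustrating a few OCC event-based categories; it is not the paper's 'situation processor', and the question set is an assumption.

```python
# A simplified OCC-style appraisal: pick an event-based emotion category from
# three answers an animator (or the situation processor) might supply.

def occ_event_emotion(desirable, prospect_confirmed, about_self):
    """Map (desirability, prospect outcome, whose goals) to an OCC category."""
    if about_self:
        if prospect_confirmed is None:                # no prior expectation held
            return "joy" if desirable else "distress"
        if desirable:                                 # a hoped-for prospect...
            return "satisfaction" if prospect_confirmed else "disappointment"
        return "fears-confirmed" if prospect_confirmed else "relief"
    # consequences affect another (liked) character
    return "happy-for" if desirable else "sorry-for"

print(occ_event_emotion(desirable=False, prospect_confirmed=None, about_self=True))
# -> "distress"
```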