scispace - formally typeset
Author

Ken Anjyo

Bio: Ken Anjyo is an academic researcher from Victoria University of Wellington. The author has contributed to research on topics including animation and computer animation. The author has an h-index of 16 and has co-authored 69 publications receiving 1337 citations. Previous affiliations of Ken Anjyo include Wellington Management Company & Hitachi.


Papers
Proceedings ArticleDOI
15 Sep 1995
TL;DR: A method for modeling human figure locomotion with emotions: Fourier expansions of measured human motion data serve as a basis from which the method can interpolate or extrapolate locomotion, and an individual's character or mood, appearing during the motion, is extracted by the method.
Abstract: This paper describes a method for modeling human figure locomotion with emotions. Fourier expansions of experimental data of actual human behaviors serve as a basis from which the method can interpolate or extrapolate the human locomotion. This means, for instance, that a transition from a walk to a run is smoothly and realistically performed by the method. Moreover, an individual's character or mood, appearing during the human behaviors, is also extracted by the method. For example, the method gets "briskness" from the experimental data for a "normal" walk and a "brisk" walk. Then the "brisk" run is generated by the method, using another Fourier expansion of the measured data of running. The superposition of these human behaviors is shown as an efficient technique for generating rich variations of human locomotion. In addition, step-length, speed, and hip position during the locomotion are also modeled, and then interactively controlled to get a desired animation.
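The Fourier-expansion idea in the abstract can be sketched with a toy example. This is a minimal illustration, not the paper's implementation: the "walk" and "run" joint-angle cycles below are synthetic sinusoids, and the blend weight stands in for the paper's interpolation/extrapolation control.

```python
import numpy as np

def fourier_coeffs(signal, n_harmonics=3):
    """Fit a truncated Fourier series to one cycle of a periodic joint-angle signal."""
    n = len(signal)
    t = np.arange(n) / n  # normalized phase in [0, 1)
    # Design matrix: DC term plus cosine/sine pairs for each harmonic.
    cols = [np.ones(n)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * t))
        cols.append(np.sin(2 * np.pi * k * t))
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return coeffs

def blend(coeffs_a, coeffs_b, w):
    """Interpolate (0 <= w <= 1) or extrapolate (w outside [0, 1]) in coefficient space."""
    return (1 - w) * coeffs_a + w * coeffs_b

# Toy "walk" and "run" knee-angle cycles (synthetic, for illustration only).
t = np.linspace(0, 1, 64, endpoint=False)
walk = 20 + 15 * np.sin(2 * np.pi * t)
run = 35 + 30 * np.sin(2 * np.pi * t)

cw, cr = fourier_coeffs(walk), fourier_coeffs(run)
half = blend(cw, cr, 0.5)   # a motion "between" walk and run
brisk = blend(cw, cr, 1.5)  # extrapolation beyond "run"
```

Because the blend happens in coefficient space, the reconstructed motion stays periodic and smooth for any weight, which is what makes walk-to-run transitions and "briskness" extrapolation work in the paper's framework.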

500 citations

Proceedings ArticleDOI
01 Jan 2014
TL;DR: It is shown that, despite the simplicity of the blendshape approach, there remain open problems associated with this fundamental technique.
Abstract: “Blendshapes”, a simple linear model of facial expression, is the prevalent approach to realistic facial animation. It has driven animated characters in Hollywood films, and is a standard feature of commercial animation packages. The blendshape approach originated in industry, and became a subject of academic research relatively recently. This report describes the published state of the art in this area, covering both literature from the graphics research community, and developments published in industry forums. We show that, despite the simplicity of the blendshape approach, there remain open problems associated with this fundamental technique.

226 citations

Journal ArticleDOI
Yeongho Seol, John P. Lewis, Jaewoo Seo, Byungkuk Choi, Ken Anjyo, Junyong Noh
TL;DR: This article presents a novel spacetime facial animation retargeting method for blendshape face models that provides spacetime editing for the popular blendshape representation of facial models, exhibiting smooth and controlled propagation of user edits across surrounding frames.
Abstract: The goal of a practical facial animation retargeting system is to reproduce the character of a source animation on a target face while providing room for additional creative control by the animator. This article presents a novel spacetime facial animation retargeting method for blendshape face models. Our approach starts from the basic principle that the source and target movements should be similar. By interpreting movement as the derivative of position with time, and adding suitable boundary conditions, we formulate the retargeting problem as a Poisson equation. Specified (e.g., neutral) expressions at the beginning and end of the animation as well as any user-specified constraints in the middle of the animation serve as boundary conditions. In addition, a model-specific prior is constructed to represent the plausible expression space of the target face during retargeting. A Bayesian formulation is then employed to produce target animation that is consistent with the source movements while satisfying the prior constraints. Since the preservation of temporal derivatives is the primary goal of the optimization, the retargeted motion preserves the rhythm and character of the source movement and is free of temporal jitter. More importantly, our approach provides spacetime editing for the popular blendshape representation of facial models, exhibiting smooth and controlled propagation of user edits across surrounding frames.

94 citations

Journal ArticleDOI
TL;DR: This work proposes a new highlight shader for the 3D objects used in cel animation that makes highlight shapes and animations in a cartoon style, starting from Blinn's (1977) traditional specular model.
Abstract: We propose a new highlight shader for the 3D objects used in cel animation. Without using a texture-mapping technique, our shader makes highlight shapes and animations in a cartoon style. Our shader makes an initial highlight shape using Blinn's (1977) traditional specular model. It then interactively modifies the initial shape through geometric, stylistic, and Boolean transformations for the highlight until we get our final desired shape. Moreover, once these operations specify highlight shapes for each keyframe, our shader automatically generates the highlight animation. In other words, our shader offers a new definition of highlighting 3D objects for cel animation.
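The starting point of the shader, Blinn's specular term hardened into a flat cartoon highlight, can be sketched as follows. This shows only the initial-shape step; the threshold value and vectors are illustrative, and the paper's geometric, stylistic, and Boolean shape edits are not modeled here.

```python
import numpy as np

def blinn_specular(n, l, v, shininess=32.0):
    """Blinn's specular term: (N . H)^p, with H the normalized half vector."""
    h = (l + v) / np.linalg.norm(l + v)
    return max(float(np.dot(n, h)), 0.0) ** shininess

def toon_highlight(n, l, v, threshold=0.5):
    """Hard-threshold the specular term to get a flat, cel-style highlight:
    1.0 inside the highlight region, 0.0 outside, with a crisp edge."""
    return 1.0 if blinn_specular(n, l, v) > threshold else 0.0

n = np.array([0.0, 0.0, 1.0])              # surface normal
v = np.array([0.0, 0.0, 1.0])              # view direction
l = np.array([0.0, 0.0, 1.0])              # light straight on: highlight is on
on = toon_highlight(n, l, v)
l2 = np.array([1.0, 0.0, 0.2])
l2 /= np.linalg.norm(l2)                   # grazing light: highlight is off
off = toon_highlight(n, l2, v)
```

Thresholding is what replaces the smooth specular falloff with the sharp-edged shape that the paper's interactive transformations then stylize.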

57 citations

Journal ArticleDOI
TL;DR: This work presents new algorithms for the compatible embedding of 2D shapes that exhibit a combination of simplicity, speed, and accuracy that has not been achieved in previous work.
Abstract: We present new algorithms for the compatible embedding of 2D shapes. Such embeddings offer a convenient way to interpolate shapes having complex, detailed features. Compared to existing techniques, our approach requires less user input, and is faster, more robust, and simpler to implement, making it ideal for interactive use in practical applications. Our new approach consists of three parts. First, our boundary matching algorithm locates salient features using the perceptually motivated principles of scale-space and uses these as automatic correspondences to guide an elastic curve matching algorithm. Second, we simplify boundaries while maintaining their parametric correspondence and the embedding of the original shapes. Finally, we extend the mapping to shapes' interiors via a new compatible triangulation algorithm. The combination of our algorithms allows us to demonstrate 2D shape interpolation with instant feedback. The proposed algorithms exhibit a combination of simplicity, speed, and accuracy that has not been achieved in previous work.
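Once a compatible embedding provides a one-to-one vertex correspondence, the simplest interpolant is a linear blend of corresponding positions. The sketch below shows only that final blending step on toy boundary polygons; the paper's actual contributions (scale-space feature matching, compatible simplification and triangulation) are what make such correspondences available in the first place.

```python
import numpy as np

def interpolate_shapes(src, dst, t):
    """Linear interpolation between two compatibly parameterized boundaries.

    src, dst: (n, 2) arrays of corresponding boundary vertices. A compatible
    embedding guarantees this one-to-one correspondence; blending corresponding
    vertices then yields an in-between shape for any t in [0, 1]."""
    return (1 - t) * src + t * dst

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
diamond = np.array([[0.5, -0.2], [1.2, 0.5], [0.5, 1.2], [-0.2, 0.5]])
mid = interpolate_shapes(square, diamond, 0.5)
```

Per-vertex blending is cheap enough to run every frame, which is what allows the "instant feedback" interpolation the abstract demonstrates.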

54 citations


Cited by
Journal ArticleDOI

6,278 citations

Proceedings Article
01 Jan 1999

2,010 citations

Proceedings ArticleDOI
01 Jul 2002
TL;DR: This paper shows that a motion database can be preprocessed for flexibility in behavior and efficient search and exploited for real-time avatar control and demonstrates the flexibility of the approach through four different applications.
Abstract: Real-time control of three-dimensional avatars is an important problem in the context of computer games and virtual environments. Avatar animation and control is difficult, however, because a large repertoire of avatar behaviors must be made available, and the user must be able to select from this set of behaviors, possibly with a low-dimensional input device. One appealing approach to obtaining a rich set of avatar behaviors is to collect an extended, unlabeled sequence of motion data appropriate to the application. In this paper, we show that such a motion database can be preprocessed for flexibility in behavior and efficient search and exploited for real-time avatar control. Flexibility is created by identifying plausible transitions between motion segments, and efficient search through the resulting graph structure is obtained through clustering. Three interface techniques are demonstrated for controlling avatar motion using this data structure: the user selects from a set of available choices, sketches a path through an environment, or acts out a desired motion in front of a video camera. We demonstrate the flexibility of the approach through four different applications and compare the avatar motion to directly recorded human motion.
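The transition-identification step in the abstract can be sketched with a naive pairwise search. This is a bare-bones stand-in: the paper additionally clusters the resulting graph for efficient search, and real pose distances account for joint weighting and velocity, not just Euclidean feature distance. The toy pose signal below is synthetic.

```python
import numpy as np

def find_transitions(poses, threshold, min_gap=10):
    """Find plausible transition points between frames of a motion database.

    poses: (n, d) per-frame pose features. Two frames whose features are
    within `threshold` (Euclidean distance) and at least `min_gap` frames
    apart become candidate edges of the motion graph."""
    n = len(poses)
    edges = []
    for i in range(n):
        for j in range(i + min_gap, n):
            if np.linalg.norm(poses[i] - poses[j]) < threshold:
                edges.append((i, j))
    return edges

# Toy 1-D "pose" signal: a repeating cycle, so distant frames can match.
t = np.arange(60)
poses = np.column_stack([np.sin(2 * np.pi * t / 20)])
edges = find_transitions(poses, threshold=0.05, min_gap=15)
```

Each edge lets playback jump between otherwise distant parts of the database, which is the flexibility that makes real-time avatar control from unlabeled motion data possible.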

983 citations

01 Jan 2009
TL;DR: This thesis builds a human-assisted motion annotation system to obtain ground-truth motion, missing in the literature, for natural video sequences, and proposes SIFT flow, which enables a new framework for image parsing by transferring metadata from the images in a large database to an unknown query image.
Abstract: The focus of motion analysis has been on estimating a flow vector for every pixel by matching intensities. In my thesis, I will explore motion representations beyond the pixel level and new applications to which these representations lead. I first focus on analyzing motion from video sequences. Traditional motion analysis suffers from the inappropriate modeling of the grouping relationship of pixels and from a lack of ground-truth data. Using layers as the interface for humans to interact with videos, we build a human-assisted motion annotation system to obtain ground-truth motion, missing in the literature, for natural video sequences. Furthermore, we show that with the layer representation, we can detect and magnify small motions to make them visible to human eyes. Then we move to a contour representation to analyze the motion for textureless objects under occlusion. We demonstrate that simultaneous boundary grouping and motion analysis can solve challenging data, where the traditional pixel-wise motion analysis fails. In the second part of my thesis, I will show the benefits of matching local image structures instead of intensity values. We propose SIFT flow that establishes dense, semantically meaningful correspondence between two images across scenes by matching pixel-wise SIFT features. Using SIFT flow, we develop a new framework for image parsing by transferring the metadata information, such as annotation, motion and depth, from the images in a large database to an unknown query image. We demonstrate this framework using new applications such as predicting motion from a single image and motion synthesis via object transfer. Based on SIFT flow, we introduce a nonparametric scene parsing system using label transfer, with very promising experimental results suggesting that our system outperforms state-of-the-art techniques based on training classifiers.
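The label-transfer idea can be sketched as nearest-neighbor matching of per-pixel descriptors. This is a bare-bones stand-in for the thesis's approach, which matches dense SIFT descriptors and warps labels through the SIFT flow field; the two-descriptor "database" and the label names here are purely illustrative.

```python
import numpy as np

def label_transfer(query_feats, db_feats, db_labels):
    """Per-pixel label transfer: for each query descriptor, copy the label of
    the nearest database descriptor (Euclidean distance). A real system would
    match dense SIFT descriptors and regularize the correspondence spatially."""
    out = np.empty(len(query_feats), dtype=db_labels.dtype)
    for i, f in enumerate(query_feats):
        d = np.linalg.norm(db_feats - f, axis=1)
        out[i] = db_labels[np.argmin(d)]
    return out

# Illustrative 2-D "descriptors" and labels.
db_feats = np.array([[0.0, 0.0], [1.0, 1.0]])
db_labels = np.array(["sky", "road"])
query = np.array([[0.1, 0.0], [0.9, 1.1]])
labels = label_transfer(query, db_feats, db_labels)
```

Because no classifier is trained, the quality of the parse depends entirely on the database and the matching, which is the nonparametric character the abstract emphasizes.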

899 citations

Journal ArticleDOI
TL;DR: A framework is developed that transforms biological motion into a representation allowing for analysis using linear methods from statistics and pattern recognition, and reveals that the dynamic part of the motion contains more information about gender than motion-mediated structural cues.
Abstract: Biological motion contains information about the identity of an agent as well as about his or her actions, intentions, and emotions. The human visual system is highly sensitive to biological motion and capable of extracting socially relevant information from it. Here we investigate the question of how such information is encoded in biological motion patterns and how such information can be retrieved. A framework is developed that transforms biological motion into a representation allowing for analysis using linear methods from statistics and pattern recognition. Using gender classification as an example, simple classifiers are constructed and compared to psychophysical data from human observers. The analysis reveals that the dynamic part of the motion contains more information about gender than motion-mediated structural cues. The proposed framework can be used not only for analysis of biological motion but also to synthesize new motion patterns. A simple motion modeler is presented that can be used to visualize and exaggerate the differences in male and female walking patterns.
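Once motions live in a common vector representation, "linear methods from statistics and pattern recognition" apply directly. The sketch below fits a least-squares linear classifier to synthetic two-feature "walker" vectors; the features, class means, and data are invented for illustration and are not the paper's motion representation.

```python
import numpy as np

def fit_linear_classifier(X, y):
    """Least-squares linear classifier: find w so sign(X_b @ w) predicts
    labels y in {-1, +1}, with a bias column appended to X."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    Xb = np.column_stack([X, np.ones(len(X))])
    return np.sign(Xb @ w)

# Synthetic, well-separated "walker" feature vectors for two classes.
rng = np.random.default_rng(0)
a = rng.normal([1.0, 0.0], 0.1, size=(20, 2))  # class +1
b = rng.normal([0.0, 1.0], 0.1, size=(20, 2))  # class -1
X = np.vstack([a, b])
y = np.concatenate([np.ones(20), -np.ones(20)])
w = fit_linear_classifier(X, y)
acc = np.mean(predict(w, X) == y)
```

The same linear machinery runs both ways: the weight vector classifies new motions, and moving along it in motion space exaggerates class differences, which is how the paper's motion modeler caricatures male and female walks.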

866 citations