Author

Yangzhou Du

Bio: Yangzhou Du is an academic researcher from Intel. The author has contributed to research in topics: Facial expression & Animation. The author has an h-index of 20 and has co-authored 51 publications receiving 1,150 citations.

Papers published on a yearly basis

Papers
Patent
Wenlong Li, Yangzhou Du, Xiaofeng Tong
04 Jun 2013
TL;DR: In this article, a video recording of an individual may be encoded utilizing an avatar that is driven by the facial expression(s) of the individual, and the resultant avatar animation may accurately mimic facial expressions of the recorded individual.
Abstract: Techniques are disclosed for performing avatar-based video encoding. In some embodiments, a video recording of an individual may be encoded utilizing an avatar that is driven by the facial expression(s) of the individual. In some such cases, the resultant avatar animation may accurately mimic facial expression(s) of the recorded individual. Some embodiments can be used, for example, in video sharing via social media and networking websites. Some embodiments can be used, for example, in video-based communications (e.g., peer-to-peer video calls; videoconferencing). In some instances, use of the disclosed techniques may help to reduce communications bandwidth use, preserve the individual's anonymity, and/or provide enhanced entertainment value (e.g., amusement) for the individual, for example.

87 citations
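The bandwidth argument in the abstract above can be illustrated with a toy sketch: instead of transmitting raw pixels, the sender transmits a small expression-parameter vector per frame and the receiver drives a shared avatar model with it. All sizes and names below are illustrative assumptions, not the patent's actual format.

```python
# Toy sketch of avatar-based encoding (all names/sizes are assumptions).

FRAME_PIXELS = 640 * 480 * 3         # one raw RGB frame, in bytes
EXPRESSION_PARAMS = 32               # e.g. blendshape weights per frame
PARAM_BYTES = EXPRESSION_PARAMS * 4  # float32 per parameter

def encode_frame(face_params):
    """Sender side: keep only the tracked expression parameters."""
    assert len(face_params) == EXPRESSION_PARAMS
    return list(face_params)

def decode_frame(avatar, params):
    """Receiver side: animate the shared avatar with the parameters."""
    return {"avatar": avatar, "pose": params}

# Per-frame payload shrinks by orders of magnitude versus raw video.
compression_ratio = FRAME_PIXELS / PARAM_BYTES
```

Because only parameters cross the wire, the individual's actual appearance never leaves the sender, which is also how the anonymity benefit arises.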

Patent
09 Aug 2011
TL;DR: In this article, a multi-view stereo process is used to generate a dense avatar mesh using the camera parameters and sparse key points, which can then be used to reconstruct a 3D face model.
Abstract: Systems, devices and methods are described including recovering camera parameters and sparse key points for multiple 2D facial images and applying a multi-view stereo process to generate a dense avatar mesh using the camera parameters and sparse key points. The dense avatar mesh may then be used to generate a 3D face model and multi-view texture synthesis may be applied to generate a texture image for the 3D face model.

79 citations

Patent
11 Apr 2011
TL;DR: In this paper, a method and apparatus for capturing and representing the 3D wireframe, color, and shading of facial expressions are provided, wherein the method includes the following steps: storing a plurality of feature data sequences, each corresponding to one of a plurality of facial expressions; retrieving one of these sequences based on user facial feature data; and mapping the retrieved feature data sequence to an avatar face.
Abstract: A method and apparatus for capturing and representing 3D wireframe, color and shading of facial expressions are provided, wherein the method includes the following steps: storing a plurality of feature data sequences, each of the feature data sequences corresponding to one of the plurality of facial expressions; retrieving one of the feature data sequences based on user facial feature data; and mapping the retrieved feature data sequence to an avatar face. The method may advantageously provide improvements in execution speed and communications bandwidth.

70 citations
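The store/retrieve/map steps in the abstract above can be sketched in a few lines. This is a hypothetical illustration: "feature data" is reduced to short float vectors, and all names are assumptions rather than the patent's actual structures.

```python
import math

# Step 1: store one feature data sequence per facial expression.
SEQUENCES = {
    "smile": [[0.0, 0.9], [0.1, 1.0]],
    "frown": [[0.0, -0.8], [0.1, -1.0]],
}

def retrieve(user_features):
    """Step 2: retrieve the stored sequence whose first frame is
    nearest to the user's current facial feature data."""
    return min(SEQUENCES, key=lambda k: math.dist(SEQUENCES[k][0], user_features))

def map_to_avatar(name):
    """Step 3: map the retrieved sequence onto the avatar face
    (returned here simply as the avatar's keyframe list)."""
    return SEQUENCES[name]
```

Replaying a short pre-stored sequence instead of recomputing a full 3D expression per frame is one plausible source of the speed and bandwidth gains the abstract claims.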

Patent
Anbang Yao, Yangzhou Du, Xiaofeng Tong, Tao Wang, Yurong Chen, Jianguo Li, Jianbo Ye, Wenlong Li, Yimin Zhang
13 Dec 2013
TL;DR: In this paper, various modifications to the shape regression technique for use in real-time applications, and methods, systems, and machine readable mediums which utilize the resulting facial landmark tracking methods are discussed.
Abstract: Disclosed in some examples are various modifications to the shape regression technique for use in real-time applications, and methods, systems, and machine readable mediums which utilize the resulting facial landmark tracking methods.

66 citations

Patent
Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson, Yunzhen Wang
29 Mar 2013
TL;DR: In this paper, the set of facial motion data is modified in response to a specific condition, such as buffer overflow condition and tracking failure condition, and an avatar animation is initiated based on the modified set of motion data.
Abstract: Systems and methods may provide for detecting a condition with respect to one or more frames of a video signal associated with a set of facial motion data and modifying, in response to the condition, the set of facial motion data to indicate that the one or more frames lack facial motion data. Additionally, an avatar animation may be initiated based on the modified set of facial motion data. In one example, the condition is one or more of a buffer overflow condition and a tracking failure condition.

53 citations
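The condition-handling logic described above can be sketched as follows. This is a minimal illustration under assumed names: frames whose capture hit a buffer overflow or tracking failure are marked as lacking facial motion data, and the avatar animation is driven only by the remaining valid frames.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Frame:
    index: int
    motion_data: Optional[List[float]]  # per-frame facial motion parameters
    buffer_overflow: bool = False
    tracking_failed: bool = False

def mark_invalid_frames(frames):
    """Modify the motion data set: frames that hit an error condition
    are flagged as having no facial motion data."""
    for f in frames:
        if f.buffer_overflow or f.tracking_failed:
            f.motion_data = None  # indicate the frame lacks motion data
    return frames

def animate_avatar(frames):
    """Drive the avatar only from frames that still carry motion data;
    an invalid frame simply holds the previous pose."""
    poses, last = [], [0.0]
    for f in frames:
        if f.motion_data is not None:
            last = f.motion_data
        poses.append(last)
    return poses
```

Holding the last valid pose is one simple recovery policy; the point of the sketch is only that flagged frames are excluded from driving the animation.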


Cited by
Proceedings Article
01 Jan 1999

2,010 citations

Patent
Daehwan Kim, Yoonki Hong
22 Apr 2013
TL;DR: In this paper, a mobile terminal and a control method thereof, which can obtain an image, are provided: a camera unit obtains image information, a pictogram extraction unit extracts at least one pictogram from the obtained image, and a controller detects information related to the extracted pictogram and displays it overlapped with the image.
Abstract: A mobile terminal and a control method thereof, which can obtain an image, are provided. A mobile terminal (100) includes a camera unit (121), a pictogram extraction unit (182) and a controller (180). The camera unit (121) obtains image information corresponding to at least one of a still image and a moving image. The pictogram extraction unit (182) extracts at least one pictogram from the obtained image information. The controller (180) detects information related to the extracted pictogram, and displays the detected information to be overlapped with the obtained image information. In the mobile terminal, the controller (180) includes, as the detected information, at least one of previously recorded information and currently searched information related to the extracted pictogram.

482 citations

Journal ArticleDOI
TL;DR: This paper proposes to use another form of supervision information for feature selection, i.e. pairwise constraints, which specify whether a pair of data samples belongs to the same class (must-link constraints) or to different classes (cannot-link constraints).
Abstract: Feature selection is an important preprocessing step in mining high-dimensional data. Generally, supervised feature selection methods with supervision information are superior to unsupervised ones without supervision information. In the literature, nearly all existing supervised feature selection methods use class labels as supervision information. In this paper, we propose to use another form of supervision information for feature selection, i.e. pairwise constraints, which specify whether a pair of data samples belongs to the same class (must-link constraints) or to different classes (cannot-link constraints). Pairwise constraints arise naturally in many tasks and are more practical and inexpensive than class labels. This topic has not yet been addressed in feature selection research. We call our pairwise-constraint-guided feature selection algorithm Constraint Score and compare it with the well-known Fisher Score and Laplacian Score algorithms. Experiments are carried out on several high-dimensional UCI and face data sets. Experimental results show that, with very few pairwise constraints, Constraint Score achieves similar or even higher performance than Fisher Score with full class labels on the whole training data, and significantly outperforms Laplacian Score.

205 citations
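The score described above admits a compact sketch. Assuming the ratio form of Constraint Score (squared feature differences summed over must-link pairs, divided by the same sum over cannot-link pairs, with lower being better), a per-feature implementation might look like:

```python
import numpy as np

def constraint_score(X, must_link, cannot_link):
    """Per-feature Constraint Score, ratio form: a discriminative
    feature varies little within a class (small must-link spread) and
    much across classes (large cannot-link spread), so LOWER scores are
    better. This sketches the idea, not the paper's exact code."""
    X = np.asarray(X, dtype=float)
    ml = sum((X[i] - X[j]) ** 2 for i, j in must_link)    # within-class spread
    cl = sum((X[i] - X[j]) ** 2 for i, j in cannot_link)  # between-class spread
    return ml / cl                                        # one score per feature

# Feature 0 separates the two must-linked pairs while feature 1 does
# not, so feature 0 receives the lower (better) score.
X = [[0.0, 1.0], [0.1, 3.0], [1.0, 1.1], [1.1, 2.9]]
scores = constraint_score(X, must_link=[(0, 1), (2, 3)],
                          cannot_link=[(0, 2), (1, 3)])
```

Note that only four pairwise constraints are supplied here, consistent with the paper's point that very few constraints can already rank features usefully.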