Visual Comput (2008) 24: 577–585
DOI 10.1007/s00371-008-0238-8
ORIGINAL ARTICLE
Rıfat Aras
Barkın Başarankut
Tolga Çapın
Bülent Özgüç
3D Hair sketching for real-time dynamic & key frame animations
Published online: 5 June 2008
© Springer-Verlag 2008
Electronic supplementary material
The online version of this article
(doi:10.1007/s00371-008-0238-8) contains
supplementary material, which is available
to authorized users.
R. Aras (✉) · B. Başarankut · T. Çapın · B. Özgüç
Department of Computer Engineering,
Bilkent University, Ankara, Turkey
{arifat, barkin, tcapin, ozguc}@cs.bilkent.edu.tr
Abstract Physically based simulation of human hair is a well-studied and well-known problem. But the "pure" physically based representation of hair (and other animation elements) is not the only concern of the animators, who want to "control" the creation and animation phases of the content. This paper describes a sketch-based tool, with which a user can both create hair models with different styling parameters and produce animations of these created hair models using physically and key frame-based techniques. The model creation and animation production tasks are all performed with direct manipulation techniques in real-time.
Keywords Sketching · Direct manipulation · Key frame · Hair animation
1 Introduction

As one of the hardest parts of overall character animation, realistic hair is also one of the most important elements for producing convincing virtual human/animal agents. Physically based simulation of hair is a well-studied and well-known subject, but for animators and artists who create computer animation content, physical reality is not the only concern. They have to be provided with intuitive interfaces to create such content. Therefore, direct manipulation techniques for user interfaces, which are emerging as a major technique for user interaction, can be used as a means of 3D content creation.

In this paper, we propose such a sketch-based tool, with which an artist can create hair models, including the stylistic properties of the hair, intuitively with direct manipulation. With the proposed tool, it is also possible to create physical and key frame animations in a short time with minimum user interference. The key frame and physically based animations are realized effectively by using GPU programming, thus enabling the created animations to be controlled interactively. Another property worth mentioning is that the created hair content is subject to no extra mapping process or database lookups (except the gesture recognition phase, used to resolve ill-defined input). This property ensures that the created hair model looks and behaves as closely as possible to the sketched one.
2 Previous work

Different hair modeling techniques have been proposed to serve different purposes. Individual particle-based methods [1, 5, 11], real-time animation solutions [8, 10, 16], detailed representations of the interactions of hair strands with each other [12], and interactive hairstyling systems [2, 7] have all addressed different parts of the hair modeling and animation problem. Our proposed tool deals with three different aspects of the problem: (1) modeling the hair along with its stylistic properties, (2) creating and controlling the animation of the hair model, and (3) performing these tasks with a direct manipulation interface. Therefore, it is appropriate to examine the previous work relevant to our method with respect to these different aspects.

Hair modeling and animation. Choe et al. [2] present a wisp-based technique that produces static hairstyles by employing wisp parameters such as length distribution, deviation radius function, and strand-shape fuzziness value. On top of this statistical model, a constraint-based styler is used to model artificial features such as hairpins. Although the generated styles are realistic, real-time operation is unavailable due to excessive calculations. Oshita presents a physically based dynamic wisp model [10] that supports hair dynamics in a coarse model and then extends it to a fine model. In the dynamic wisp model, the shape of a wisp and the shapes of the individual strands are geometrically controlled based on the velocity of the particles in the coarse model. The model is designed to work on GPUs, with operations performed in real-time.
Controlling animations. Physically based modeling of hair and other natural phenomena creates very realistic animations, but controlling these animations to match designers' unique needs has recently become an important topic. In Shi and Yu's work [14], liquids are controlled to match rapidly changing target shapes that represent regular non-fluid objects. Two different external force fields are applied for controlling the liquid: a feedback force field and the gradient field of a potential function defined by the shape and skeleton of the target object. Like water, controlled smoke animations have also been studied. Fattal and Lischinski [3] drive smoke towards a given sequence of target smoke states. This control is achieved by two extra terms added to the standard flow equations: (1) a driving force term used to carry smoke towards a target, and (2) a smoke gathering term that prevents the smoke from diffusing too much. Treuille et al. [15] use a continuous quasi-Newton optimization to solve for wind forces to be applied to the underlying velocity field throughout the simulation to match user-defined key frames. Physically based hair animation has also been a subject of animation control. In Petrovic's work [11], hair is represented as a volume of particles. To control hair, this method employs a simulation force based on the volumetric hair density difference between the current and target hair shapes, which directs a group of connected hair particles towards a desired shape.
Sketch-based interaction. Creating 3D content in an intuitive way has recently become an active research area, and sketch-based techniques have gained popularity for achieving this task. A number of researchers have proposed sketch-based techniques for creating hair animations. Wither et al. [17] present a sketching interface for physically based hair styling. This approach consists of extracting geometric and elastic parameters of individual hair strands. The 3D, physically based strands are inferred from 2D sketches by first cutting the 2D strokes into half-helical segments and then fitting half helices to these segments. After sketching a certain number of guide strands, a volume stroke is drawn to set the hair volume and adapt the hair cut. Finally, other strands are interpolated from the guide strands and the volume stroke. Because of the mentioned fitting process, it is not possible to obtain a resulting physically based strand that exactly matches the user's input stroke. Another physically based hair creation technique is proposed by Hernandez et al. [6]. In this work, the painting interface can only create density and length maps; therefore, hairstyle parameters such as curliness and fuzziness cannot be created easily with this technique.

In contrast to these physically based sketching tools, Malik [9] describes a tablet-based hair sketching user interface to create non-physically based hairstyles. In this approach, hair is represented as wisps, and parameters such as density, twist, and frizziness of a wisp are used to define the style of the hair. Additionally, with the help of gesture recognition algorithms and virtual tools such as a virtual comb or hair-pin, the style of drawn hair can be changed interactively. Another non-physically based hairstyle design system is proposed by Fu et al. [4]. Their design system is equipped with a sketching interface and a fast vector field solver. The user-drawn strokes are used to depict the global shape of the hairstyle. The sketching system employs the following style primitive types: stream curve, dividing curve, and ponytail. In this system, it is hard to provide local control over the hairstyle without losing the real-time property.
2.1 Our contribution

In this paper, we propose an easy-to-use sketch-based system for the creation and animation of hair models, using both physically based and key frame-based animation techniques. In contrast to the previous work, our system is capable of completely preserving the drawn properties of the hair without any intermediate mapping process. As a result, all types of hair drawings (e.g., curly, wavy hair) can be represented. Dynamic and key frame animations of hair can also be created and edited in real-time with a sketching interface. Statistical wisp parameters that have previously been employed in a static context [2] (such as fuzziness and closeness) are employed in a dynamic wisp model. Physically based constraints are used in conjunction with key frame animations to create hybrid animations. Finally, a wide range of hair animation effects, such as growing hair, hairstyle changes, etc., are supported by the key framing interface via the proposed wisp matching and hair mass sampling techniques.
3 Hairstyle modeling

In our tool, hair is represented as a group of wisps, and styling of the hair is achieved by manipulating wisp parameters such as fuzziness and closeness [2]. Hand-drawn 2D strokes are used as the means of input. The process of converting 2D input values into 3D styling parameters consists of locating, recording, and forming steps. The details of these steps are explained in the following subsections. The flow diagram of the process is given in Fig. 1.
3.1 Skeleton strand root positioning

First, we define skeleton strands as the master strands in a wisp, which are responsible for the style and movement of the other strands located on that wisp. The goal of the first step of the hair sketching tool is then locating the root point of the skeleton strand. To achieve this goal, we fit a Catmull–Rom patch on the surface of the head model [7]. The Catmull–Rom patch structure is used to hold location information on the head. The patch representation allows us to reduce the dimension of the location problem from 3D to 2D space. The underlying patch structure also makes it easy to find the neighborhood information within a wisp, which is used to distribute the imitator strands around the skeleton strand. When the tool is in the idle state (i.e., a wisp is not being sketched), the user's first input point is considered a root candidate. If the point is on the patch, it is registered as a skeleton strand root. The 3D coordinates of the root point are converted to the uv coordinate system of the patch, in order to be used at a later stage, during the wisp formation phase.
3.2 Skeleton strand control point position recording

After the root of the skeleton strand is determined, the other control points of the strand are recorded. Because users can only interact with the 2D display, these 2D points have to be converted to 3D coordinates. This is accomplished by employing OpenGL's depth buffers and invisible planar elements [13].
Fig. 1. Flow diagram of the hairstyle capture process
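The cited depth-buffer approach [13] amounts to inverting the viewing transform. As a minimal sketch (not the paper's implementation; NumPy assumed, function name ours), the same math as OpenGL's gluUnProject maps a screen point plus a sampled depth value back to 3D:

```python
import numpy as np

def unproject(win_x, win_y, depth, modelview, projection, viewport):
    """Map a 2D screen point plus a depth-buffer value back to a 3D
    world-space point (same math as gluUnProject)."""
    vx, vy, vw, vh = viewport
    # Normalized device coordinates in [-1, 1]
    ndc = np.array([
        2.0 * (win_x - vx) / vw - 1.0,
        2.0 * (win_y - vy) / vh - 1.0,
        2.0 * depth - 1.0,
        1.0,
    ])
    # Invert the combined projection * modelview transform
    inv = np.linalg.inv(projection @ modelview)
    world = inv @ ndc
    return world[:3] / world[3]  # perspective divide

# With identity matrices, the viewport centre at depth 0.5 maps to the origin.
p = unproject(400, 300, 0.5, np.eye(4), np.eye(4), (0, 0, 800, 600))
```

In the tool itself the depth would come from the depth buffer or from the invisible planar helper elements, rather than a constant.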
3.3 Style recording

The style of an individual wisp is represented by a number of style parameters, such as closeness c_i,j, fuzziness f_i,j, number of strands in a wisp n_i, and strand thickness distribution t_i,j (where i is the id of a particular wisp and j is the index of a recorded control point on that wisp).

Although these parameters can be recorded by different means of input, in our tool a tablet stylus pen is used to record them with a direct manipulation interface. For example, the pressure information from the stylus pen is mapped to the closeness parameter c_i,j (as the applied pressure increases, the closeness value also increases), and the tilt angle information is used to represent the fuzziness parameter. These parameters, except the number of strands and the thickness distribution, are recorded for each control point of the skeleton strand, so that it is possible to vary them within a single wisp, as can be seen in Fig. 2.
3.4 Gesture recognition

The gesture recognition step operates on the drawn hair wisps and detects if there are any loops. These loops define curling hairstyles and are used to create 3D curling hair. Gesture recognition represents hair strands as a list of segments: a hair strand is formed by n mass nodes, forming n - 1 segments. These segments may intersect with any other segment drawn on the same wisp. By calculating the respective positions of each segment on the 2D viewport, we detect these possible intersections as follows. The intersections that have the potential to form curly hair are selected if they satisfy the following two constraints:

1. A hair segment might be intersected by more than one segment. If such a condition occurs, the nearest intersecting segment is chosen and the others are discarded.
2. After the intersecting segment is chosen, if the segments between the chosen pair do not produce a convex polygon, the loop does not represent a curling hair and is discarded.
Fig. 2a–d. The effect of wisp parameters. a Constant closeness
value. b Increasing closeness value. c Decreasing closeness value.
d The effect of fuzziness value

Fig. 3. The segments S3 and S12 intersect; therefore, the mass nodes forming the loop are mapped to a helix via the found axis of rotation

If all these constraints are satisfied, the drawing is detected as a loop. The focal point of the loop is found, and a principal 3D axis of rotation is established at the focal point. With this axis, the mass nodes in this loop are mapped to a helical structure, thus producing a 3D curly hair (Fig. 3).
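The self-intersection test underlying the loop detection can be sketched as follows (a minimal version in plain Python; function names and the orientation-test formulation are ours, and the nearest-segment and convexity constraints of the paper are omitted):

```python
def segments_intersect(p1, p2, p3, p4):
    """True if 2D segments p1-p2 and p3-p4 properly cross
    (orientation tests; touching endpoints do not count)."""
    d = lambda a, b, c: (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    d1, d2 = d(p3, p4, p1), d(p3, p4, p2)
    d3, d4 = d(p1, p2, p3), d(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def find_loops(points):
    """Return index pairs (i, j) of non-adjacent intersecting segments
    of a polyline; each pair encloses a candidate curl loop."""
    segs = list(zip(points[:-1], points[1:]))
    loops = []
    for i in range(len(segs)):
        for j in range(i + 2, len(segs)):  # skip adjacent segments
            if segments_intersect(*segs[i], *segs[j]):
                loops.append((i, j))
    return loops

# A stroke that crosses itself once: segment 0 intersects segment 2.
stroke = [(0, 0), (2, 2), (2, 0), (0, 2)]
# find_loops(stroke) -> [(0, 2)]
```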
3.5 Wisp formation

After the recording of the segments is completed, an individual wisp is formed according to the recorded control points and wisp style parameters. The captured control points and parameters are fed into a GPU vertex shader to create Catmull–Rom splines of the skeleton and imitator strands. The imitator strand root points are distributed uniformly around the skeleton strand root, using the employed patch structure and Archimedes' spiral (Fig. 4).
Archimedes’ spiral is a spiral with polar equation:
r(θ) =αθ, (1)
where α controls the distance between successive turnings
that matches our closeness style parameter. If we adapt
this to our model, the equation becomes:
r(θ) =c
i, j
θ. (2)
Because a patch structure is employed for locating strands
in a wisp, we can map patch coordinates to polar coordi-
nates as follows:
u =r cos θ
v =r sin θ (3)
Replacing r with Eq. 2, we get the patch parameters as
follows:
u =c
i, j
θ cos θ
v =c
i, j
θ sin θ (4)
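The sampling of Eqs. 1–4 can be sketched as follows (a minimal NumPy version; the function name and the 30-degree step, matching Fig. 4, are ours, and the closeness value is used directly as the spiral coefficient, as in Eq. 2):

```python
import numpy as np

def spiral_roots(closeness, n_points, step_deg=30.0):
    """Distribute imitator-strand root points on the patch uv plane
    along an Archimedes spiral r(theta) = c * theta, sampled every
    `step_deg` degrees (Eq. 4)."""
    thetas = np.radians(step_deg) * np.arange(1, n_points + 1)
    u = closeness * thetas * np.cos(thetas)
    v = closeness * thetas * np.sin(thetas)
    return np.column_stack([u, v])

# A smaller coefficient packs the roots tighter around the skeleton root,
# as in the two spirals of Fig. 4.
tight = spiral_roots(closeness=0.05, n_points=12)
loose = spiral_roots(closeness=0.20, n_points=12)
```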
Fig. 4. The Archimedes spiral on the Catmull–Rom patch uv space. The points are obtained with 30 degree increments. The left-hand spiral's closeness parameter c_i,j is greater than the right-hand spiral's closeness parameter
Fig. 5. The blue vector represents the offset distance for that control point. According to the fuzziness parameter, the corresponding control point of the imitator strand is perturbed by randomly relocating it within the volumes defined by the perturbation spheres
The distances between the skeleton strand root point and the distributed imitator strand roots define the offset distances of the remaining control points of the imitator strands from the control points of the skeleton strand. In other words, if closeness is kept constant within a wisp and no fuzziness is applied to the remaining control points, the imitator strands keep these offset distances.

The role of the fuzziness style parameter is to produce uneven-looking wisps. An increased fuzziness value results in more strongly perturbed imitator strand control point locations (Fig. 5).
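The perturbation-sphere idea of Fig. 5 can be sketched as follows (a minimal NumPy version under our own assumptions: the function name is ours, and we take the sphere radius to equal the fuzziness value, which the paper does not specify):

```python
import numpy as np

def perturb_control_point(base_point, fuzziness, rng):
    """Offset an imitator control point by a random vector drawn
    uniformly from a perturbation sphere of radius `fuzziness`."""
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    # Cube-root sampling makes the offset uniform over the ball's volume
    radius = fuzziness * rng.random() ** (1.0 / 3.0)
    return np.asarray(base_point, dtype=float) + radius * direction

rng = np.random.default_rng(0)
p = perturb_control_point([1.0, 2.0, 3.0], fuzziness=0.1, rng=rng)
# p stays within 0.1 of the original control point.
```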
4 Animation creation

4.1 Dynamic model

Our model separates the dynamical properties of our skeleton strand structure from the stylistic properties, using Choe et al.'s approach [2]. We decompose the master strand into two components, an outline component and a detail component, in order to separate the intrinsic geometry of the strand from the deformations applied to it.
4.1.1 Dynamic representation of skeleton strand

When a skeleton strand is drawn, it is processed by the dynamic model in order to extract its physical and detail representative components. The component extraction process consists of a linear regression model, as described below, in which physical representative components are aligned with the axis of regression, and detail representative components become the vertical distance vectors between the axis of regression and the skeleton strand points (Fig. 6).

1. After the skeleton strand is drawn, the axis-of-regression vector, starting from the root and ending at the last control point, is found.
2. Each control point of the skeleton strand is projected onto this axis, thus forming the physical masses of the strand.
3. Vectors starting from the physical masses and ending at the corresponding control points make up the detail components, and vectors connecting neighboring physical masses make up the physical components.
4. Physical components are used to preserve the distance between their connected physical masses.
5. Once the above steps are complete, when simulating the created hair model using physically based techniques, input forces act on the created physical masses.
Fig. 6a–c. Extraction of physical and detail representative components from a skeleton strand. a The sketched skeleton strand. b The extracted components: red rods are physical components and yellow rods are detail components. c The generated wisp
4.1.2 Global simulation force stroke

Besides providing full control for creating hairstyles, our tool also aims at providing control while animating the created hairstyle. We propose two approaches in this paper. The first method is the global simulation force stroke (GSFS). The second method is the key frame model, which is discussed in the next section. A GSFS enables the user to intuitively manipulate the physical environment via the drawing interface. When the tool is in the physical animation mode, a stroke drawn on the screen is recognized as a GSFS, thus creating a virtual force field following the GSFS's pattern. Creating a force field requires an underlying grid structure. Field elements are calculated and stored in grid nodes, which are later accessed by the physical masses located inside them (Fig. 7).
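Storing stroke-derived forces in a grid can be sketched as follows (a minimal 2D NumPy version under our own assumptions: the function name, the dense segment sampling, and writing the local stroke direction into each crossed cell are ours, not the paper's exact construction from Fig. 7):

```python
import numpy as np

def rasterize_stroke_forces(stroke, grid_shape, cell_size, strength=1.0):
    """Store a force vector in each grid cell the stroke passes through,
    pointing along the local stroke direction."""
    field = np.zeros(grid_shape + (2,))
    for a, b in zip(stroke[:-1], stroke[1:]):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        direction = b - a
        norm = np.linalg.norm(direction)
        if norm == 0:
            continue
        direction = strength * direction / norm
        # Sample the segment densely and mark the cells it crosses
        for t in np.linspace(0.0, 1.0, 16):
            i, j = ((a + t * (b - a)) // cell_size).astype(int)
            if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
                field[i, j] = direction
    return field

# A horizontal stroke fills the cells underneath it with a rightward force.
field = rasterize_stroke_forces([(0.5, 0.5), (3.5, 0.5)], (4, 4), cell_size=1.0)
```

During simulation, each physical mass would then look up the force stored in the cell containing it.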
4.2 Key frame model

We also propose a key frame animation interface for creating hair simulations. Hair wisps are drawn on the 3D head model and their positions are recorded as key frames. After key frame creation is finished, the in-betweening stage operates.

The in-betweening stage is responsible for calculating the transition functions and the mapping of wisps between the key frames. It is the most crucial stage of key frame-based hair animation, since it fills the gaps between the key frames provided to the interface with correctly calculated in-between frames.

The stage consists of three steps: wisp matching between key frames, wisp mass sampling, and path function creation.
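Once wisps are matched and sampled, in-between frames follow from the path functions. As a minimal sketch (the paper's path functions may be more elaborate; linear interpolation and the function name are our assumptions):

```python
import numpy as np

def inbetween(key_a, key_b, t):
    """Linearly interpolate matched wisp control points between two
    key frames; t in [0, 1] selects the in-between frame."""
    a = np.asarray(key_a, dtype=float)
    b = np.asarray(key_b, dtype=float)
    return (1.0 - t) * a + t * b

# One wisp with three control points moving between two key frames.
key1 = [(0, 0, 0), (0, 1, 0), (0, 2, 0)]
key2 = [(1, 0, 0), (1, 1, 0), (1, 2, 0)]
mid = inbetween(key1, key2, 0.5)
```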
4.2.1 Wisp matching between key frames

There can be any number of key frames provided to the tool. Each key frame can also consist of up to hundreds of individual hair wisps. Hair wisps of each key frame should be correctly mapped to the hair wisps on the next key

Fig. 7. The intersection points of the GSFS and the walls of the grid are found, and a force vector between these points is formed
