
Journal ArticleDOI

3D Hair sketching for real-time dynamic & key frame animations

Rıfat Aras1, Barkın Başarankut1, Tolga Capin1, Bülent Özgüç1 
02 Jul 2008-The Visual Computer (Springer-Verlag)-Vol. 24, Iss: 7, pp 577-585

TL;DR: This paper describes a sketch-based tool, with which a user can both create hair models with different styling parameters and produce animations of these created hair models using physically and key frame-based techniques.

Abstract: Physically based simulation of human hair is a well studied and well known problem. But the “pure” physically based representation of hair (and other animation elements) is not the only concern of the animators, who want to “control” the creation and animation phases of the content. This paper describes a sketch-based tool, with which a user can both create hair models with different styling parameters and produce animations of these created hair models using physically based and key frame-based techniques. The model creation and animation production tasks are all performed with direct manipulation techniques in real-time.

Topics: Physically based animation (61%), Animation (58%), Key frame (53%)

Summary (1 min read)

1 Introduction

  • As one of the hardest parts of the overall character animation, realistic hair is also one of the most important elements for producing convincing virtual human/animal agents.
  • Physically based simulation of hair is a well-studied and well-known subject, but for animators and artists who create computer animation content, physical reality is not the only concern.
  • They have to be provided with intuitive interfaces to create such content.
  • With the proposed tool, it is also possible to create physical and key frame animations in a short time with minimum user interference.
  • Another property to be mentioned is that the created hair content is subject to no extra mapping process or database lookups (except the gesture recognition phase to solve ill-defined problems).

2 Previous work

  • Different hair modeling techniques have been proposed to serve for different purposes.
  • The proposed tool deals with three different aspects of the problem: (1) modeling the hair along with its stylistic properties, (2) creating and controlling the animation of the hair model, and (3) performing these tasks with a direct manipulation interface.
  • Like water, controlled smoke animations have also been studied.
  • Sketch-based techniques have gained popularity to achieve this task.
  • Hair is represented as wisps, and parameters such as density, twist, and frizziness of a wisp are used to define the style of the hair.

3 Hairstyle modeling

  • The process of converting 2D input values into 3D styling parameters consists of locating, recording and forming steps.
  • The distances between the skeleton strand root point and the distributed imitator strand roots define the offset distances of the remaining control points of the imitator strands from the control points of the skeleton strand.
  • The stage consists of three steps: wisp matching between key frames, wisp mass sampling, and path function creation.
  • Therefore, between two key frames, the authors select the hair strand with fewer mass nodes and resample its mass nodes to match the second key frame.
  • To further enhance the quality of animation and provide smooth and realistic transitions, slow-in and slow-out systems are introduced, defined below.

5 Results and conclusion

  • The authors proposed a sketching tool to create and animate hair models intuitively.
  • Both of the animation techniques can be prototyped rapidly.
  • One limitation with their tool is that it is not possible to use artistic techniques like hachure and shading.
  • The authors were able to obtain the results shown in Table 1, inclusive of the full hair-head collision detection procedures.


Visual Comput (2008) 24: 577–585
DOI 10.1007/s00371-008-0238-8
ORIGINAL ARTICLE
Rıfat Aras
Barkın Başarankut
Tolga Çapın
Bülent Özgüç
3D Hair sketching for real-time dynamic &
key frame animations
Published online: 5 June 2008
© Springer-Verlag 2008
Electronic supplementary material
The online version of this article
(doi:10.1007/s00371-008-0238-8) contains
supplementary material, which is available
to authorized users.
R. Aras (✉) · B. Başarankut · T. Çapın · B. Özgüç
Department of Computer Engineering,
Bilkent University, Ankara, Turkey
{arifat, barkin, tcapin, ozguc}@cs.bilkent.edu.tr
Abstract Physically based simulation of human hair is a well-studied and well-known problem. But the “pure” physically based representation of hair (and other animation elements) is not the only concern of the animators, who want to “control” the creation and animation phases of the content. This paper describes a sketch-based tool, with which a user can both create hair models with different styling parameters and produce animations of these created hair models using physically based and key frame-based techniques. The model creation and animation production tasks are all performed with direct manipulation techniques in real-time.

Keywords Sketching · Direct manipulation · Key frame · Hair animation
1 Introduction
As one of the hardest parts of the overall character animation, realistic hair is also one of the most important elements for producing convincing virtual human/animal agents. Physically based simulation of hair is a well-studied and well-known subject, but for animators and artists who create computer animation content, physical reality is not the only concern. They have to be provided with intuitive interfaces to create such content. Therefore, direct manipulation techniques for user interfaces, which are emerging as a major technique for user interaction, can be used as a means of 3D content creation.

In this paper, we propose such a sketch-based tool, with which an artist can create hair models, including the stylistic properties of hair, intuitively with direct manipulation. With the proposed tool, it is also possible to create physical and key frame animations in a short time with minimum user interference. The key frame and physically based animations are realized effectively by using GPU programming, thus enabling the created animations to be controlled interactively. Another property to be mentioned is that the created hair content is subject to no extra mapping process or database lookups (except the gesture recognition phase to solve ill-defined problems). With this property, it is ensured that the created hair model looks and behaves as closely as possible to the sketched one.
2 Previous work
Different hair modeling techniques have been proposed to serve different purposes. Individual particle-based methods [1, 5, 11], real-time animation solutions [8, 10, 16], representing detailed interactions of hair strands with each other [12], and interactive hairstyling systems [2, 7] have all addressed different parts of the hair modeling and animation problem. Our proposed tool deals with three different aspects of the problem: (1) modeling the hair along with its stylistic properties, (2) creating and controlling the animation of the hair model, and (3) performing these tasks with a direct manipulation interface. Therefore, it would be appropriate to examine the previous work relevant to our method with respect to these different aspects.

Hair modeling and animation. Choe et al. [2] present a wisp-based technique that produces static hairstyles by employing wisp parameters such as length distribution, deviation radius function and strand-shape fuzziness value. On top of this statistical model, a constraint-based styler is used to model artificial features such as hairpins. Although the generated styles are realistic, real-time operation is unavailable due to excessive calculations. Oshita presents a physically based dynamic wisp model [10] that supports hair dynamics in a coarse model and then extends it to a fine model. In the dynamic wisp model, the shape of a wisp and the shapes of the individual strands are geometrically controlled based on the velocity of the particles in the coarse model. The model is designed to work on GPUs with operations performed in real-time.
Controlling animations. Physically based modeling of hair and other natural phenomena creates very realistic animations, but controlling these animations to match designers’ unique needs has recently become an important topic. In Shi and Yu’s work [14], liquids are controlled to match rapidly changing target shapes that represent regular non-fluid objects. Two different external force fields are applied for controlling the liquid: a feedback force field and the gradient field of a potential function that is defined by the shape and skeleton of the target object. Like water, controlled smoke animations have also been studied. Fattal and Lischinski [3] drive the smoke towards a given sequence of target smoke states. This control is achieved by two extra terms added to the standard flow equations: (1) a driving force term used to carry smoke towards a target, and (2) a smoke gathering term that prevents the smoke from diffusing too much. Treuille et al. [15] use a continuous quasi-Newton optimization to solve for wind forces to be applied to the underlying velocity field throughout the simulation to match the user-defined key frames. Physically based hair animation has also been a subject of animation control. In Petrovic’s work [11], hair is represented as a volume of particles. To control hair, this method employs a simulation force based on the volumetric hair density difference between the current and target hair shapes, which directs a group of connected hair particles towards a desired shape.
Sketch-based interaction. Creating 3D content in an intuitive way has recently become an active research area. Sketch-based techniques have gained popularity for achieving this task. A number of researchers have proposed sketch-based techniques for creating hair animations. Wither et al. [17] present a sketching interface for physically based hair styling. This approach consists of extracting geometric and elastic parameters of individual hair strands. The 3D, physically based strands are inferred from 2D sketches by first cutting 2D strokes into half helical segments and then fitting these half helices to segments. After sketching a certain number of guide strands, a volume stroke is drawn to set the hair volume and adapt the hair cut. Finally, other strands are interpolated from the guide strands and the volume stroke. Because of the mentioned fitting process, it is not possible to obtain a resultant physically based strand that matches the user’s input stroke. Another physically based hair creation technique is proposed by Hernandez et al. [6]. In this work, the painting interface can only create density and length maps; therefore, hairstyle parameters such as curliness and fuzziness cannot be created easily with this technique.

In contrast to these physically based sketching tools, Malik [9] describes a tablet-based hair sketching user interface to create non-physically based hairstyles. In this approach, hair is represented as wisps, and parameters such as density, twist, and frizziness of a wisp are used to define the style of the hair. Additionally, with the help of gesture recognition algorithms and virtual tools such as a virtual comb or hair-pin, the style of drawn hair can be changed interactively. Another non-physically based hairstyle design system is proposed by Fu et al. [4]. Their design system is equipped with a sketching interface and a fast vector field solver. The user-drawn strokes are used to depict the global shape of the hairstyle. The sketching system employs the following style primitive types: stream curve, dividing curve and ponytail. In this system, it is hard to provide local control over the hairstyle without losing the real-time property.
2.1 Our contribution
In this paper, we propose an easy-to-use sketch-based system for the creation and animation of hair models, using both physically based and key frame-based animation techniques. In contrast to the previous work, our system is capable of completely preserving the drawn properties of the hair without any intermediate mapping process. As a result, all types of hair drawings (e.g., curly, wavy hair) can be represented. Dynamic and key frame animations of hair can also be created and edited in real-time with a sketching interface. Statistical wisp parameters that have been previously employed in a static context [2] (such as fuzziness and closeness) are employed in a dynamic wisp model. Physically based constraints are used in conjunction with key frame animations to create hybrid animations. Finally, a wide range of hair animation effects, such as growing hair, hairstyle changes, etc., are supported by the key framing interface via the proposed wisp matching and hair mass sampling techniques.
3 Hairstyle modeling
In our tool, hair is represented as a group of wisps, and styling of the hair is achieved by manipulating wisp parameters such as fuzziness and closeness [2]. Hand-drawn 2D strokes are used as a means of input. The process of converting 2D input values into 3D styling parameters consists of locating, recording and forming steps. The details of the steps of the process are explained in the following subsections. The flow diagram of the process is given in Fig. 1.
3.1 Skeleton strand root positioning
First, we define skeleton strands as the master strands in a wisp, which are responsible for the style and movement of the other strands located on that wisp. The goal of the first step of the hair sketching tool is then locating the root point of the skeleton strand. To achieve this goal, we fit a Catmull–Rom patch on the surface of the head model [7]. The Catmull–Rom patch structure is used to hold location information on the head. The patch representation allows us to decrease the dimension of the location problem from 3D to 2D space. The underlying patch structure also makes it easy to find the neighborhood information within a wisp, which is used to distribute the imitator strands around the skeleton strand. When the tool is in the idle state (i.e., a wisp is not being sketched), the user’s first input point is considered as a root candidate. If the point is on the patch, the point is registered as a skeleton strand root. The 3D coordinates of the root point are converted to the uv coordinate system of the patch, in order to be used at a later stage, during the wisp formation phase.
3.2 Skeleton strand control point position recording
After the root of the skeleton strand is determined, the other control points of the strand are recorded. Because users can only interact with the 2D display, these 2D points have to be converted to 3D coordinates. This is accomplished by employing OpenGL’s depth buffers and invisible planar elements [13].
Fig. 1. Flow diagram of the hairstyle capture process
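The 2D-to-3D conversion described above can be illustrated with a ray–plane intersection; this is a simplified stand-in (hypothetical function, fixed unrotated camera) for the paper's actual depth-buffer and invisible-plane machinery:

```python
import math

def screen_to_world_on_plane(sx, sy, width, height, fov_y, aspect,
                             cam_pos, plane_point, plane_normal):
    """Convert a 2D screen point to 3D by intersecting its view ray
    with an invisible plane -- a stand-in for the depth-buffer-based
    approach.  Camera looks down -z from cam_pos with no rotation."""
    # Normalized device coordinates in [-1, 1]
    ndc_x = 2.0 * sx / width - 1.0
    ndc_y = 1.0 - 2.0 * sy / height
    # Ray direction in camera space from the vertical field of view
    tan_half = math.tan(math.radians(fov_y) / 2.0)
    ray = (ndc_x * tan_half * aspect, ndc_y * tan_half, -1.0)
    # Ray-plane intersection: t = n.(p0 - o) / n.d
    n, o, p0 = plane_normal, cam_pos, plane_point
    denom = sum(n[i] * ray[i] for i in range(3))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the plane
    t = sum(n[i] * (p0[i] - o[i]) for i in range(3)) / denom
    if t < 0:
        return None  # plane behind the camera
    return tuple(o[i] + t * ray[i] for i in range(3))
```

Reading back the OpenGL depth buffer instead would recover depth for strokes drawn over existing geometry; the invisible planar elements cover strokes drawn off the model.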
3.3 Style recording
The style of an individual wisp is represented by a number of style parameters, such as closeness c_{i,j}, fuzziness f_{i,j}, number of strands in a wisp n_i, and strand thickness distribution t_{i,j} (where i is the id of a particular wisp and j indexes the recorded control points on that wisp).

Although these parameters can be recorded by different means of input, in our tool a tablet stylus pen is used for their recording with a direct manipulation interface. For example, the pressure information from the stylus pen is mapped to the closeness parameter c_{i,j} (as the applied pressure increases, the closeness value also increases), and the tilt angle information is used to represent the fuzziness parameter. These parameters, except the number of strands and the thickness distribution, are recorded for each control point of the skeleton strand, so that it is possible to vary them within a single wisp, as can be seen in Fig. 2.
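As a rough illustration of the per-control-point recording, stylus pressure and tilt could be mapped to the closeness and fuzziness parameters like this (the linear mapping and the numeric ranges are assumptions of this sketch, not the paper's calibration):

```python
def record_style(pressure, tilt_deg,
                 c_range=(0.05, 1.0), f_range=(0.0, 1.0)):
    """Map stylus input to per-control-point wisp style parameters:
    pressure in [0, 1] -> closeness c_ij (higher pressure gives a
    larger closeness value, as in the paper), tilt in [0, 60] deg ->
    fuzziness f_ij.  The linear ranges are illustrative assumptions."""
    pressure = min(max(pressure, 0.0), 1.0)
    tilt = min(max(tilt_deg, 0.0), 60.0) / 60.0
    c = c_range[0] + pressure * (c_range[1] - c_range[0])
    f = f_range[0] + tilt * (f_range[1] - f_range[0])
    return c, f
```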
3.4 Gesture recognition
The gesture recognition step operates on the drawn hair wisps and detects if there are any loops. These loops define curling hairstyles, and are used to create 3D curling hair. Gesture recognition represents hair strands as a list of segments: a hair strand is formed by n mass nodes, forming n − 1 segments. These segments may intersect with any other segment drawn on the same wisp. By calculating the respective positions of each segment on the 2D viewport, we detect these possible intersections as follows. The intersections that have a potential to form curly hair are selected if they satisfy the following two constraints:

1. A hair segment might be intersected by more than one segment. If such a condition occurs, the segment that is nearest is chosen and the others are discarded.
2. After the intersecting segment is chosen, if the segments between the chosen pair do not produce a convex polygon, this means that the loop does not represent a curling hair and shall be discarded.
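The loop test above rests on a plain 2D segment-intersection scan; a minimal sketch (the nearest-pair selection of constraint 1 and the convexity check of constraint 2 are left out):

```python
def _orient(a, b, c):
    """Signed area of triangle abc; sign gives the turn direction."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """Proper-crossing test for 2D segments p1p2 and q1q2."""
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def find_loop(strand):
    """Scan a polyline (list of 2D mass-node positions) for the first
    pair of non-adjacent intersecting segments; return their indices
    or None if the strand draws no loop."""
    segs = list(zip(strand, strand[1:]))
    for i in range(len(segs)):
        for j in range(i + 2, len(segs)):  # skip adjacent segments
            if segments_intersect(*segs[i], *segs[j]):
                return i, j
    return None
```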
Fig. 2a–d. The effect of wisp parameters. a Constant closeness value. b Increasing closeness value. c Decreasing closeness value. d The effect of fuzziness value

Fig. 3. The segments S3 and S12 are intersecting. Therefore, mass nodes forming the loop are mapped to a helix via the found axis of rotation
If all these constraints are satisfied, the drawing is detected as a loop. The focal point of the loop is found, and a principal 3D axis of rotation is established at the focal point. With this axis, the mass nodes in this loop are mapped to a helical structure, thus producing a 3D curly hair (Fig. 3).
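The helix mapping can be illustrated as follows; the radius, pitch, and vertical axis here are placeholder assumptions, since in the tool they would be derived from the drawn loop's size and its found axis of rotation:

```python
import math

def loop_to_helix(n_nodes, focal, radius, pitch, turns=1.0):
    """Place n_nodes mass nodes on a helix around a vertical axis
    through `focal` -- a simplified stand-in for mapping detected
    loop nodes onto a 3D helical structure."""
    pts = []
    for k in range(n_nodes):
        t = turns * 2.0 * math.pi * k / max(n_nodes - 1, 1)
        pts.append((focal[0] + radius * math.cos(t),
                    focal[1] + radius * math.sin(t),
                    focal[2] + pitch * t / (2.0 * math.pi)))
    return pts
```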
3.5 Wisp formation
After the recording of the segments is completed, an individual wisp is formed according to the recorded control points and wisp style parameters. The captured control points and parameters are fed into a GPU vertex shader to create Catmull–Rom splines of the skeleton and imitator strands. The imitator strand root points are distributed around the skeleton strand root uniformly, using the employed patch structure and Archimedes’ spiral (Fig. 4).
Archimedes’ spiral is a spiral with polar equation:

r(θ) = αθ,  (1)

where α controls the distance between successive turnings, which matches our closeness style parameter. If we adapt this to our model, the equation becomes:

r(θ) = c_{i,j} θ.  (2)
Because a patch structure is employed for locating strands in a wisp, we can map patch coordinates to polar coordinates as follows:

u = r cos θ,  v = r sin θ.  (3)

Replacing r with Eq. 2, we get the patch parameters as follows:

u = c_{i,j} θ cos θ,  v = c_{i,j} θ sin θ.  (4)
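Equations 1–4 translate directly into code for placing imitator roots in the patch's uv space; this sketch samples at fixed 30° increments as in Fig. 4 (the function name and the sampling step are illustrative):

```python
import math

def imitator_roots(root_uv, closeness, n_strands, step_deg=30.0):
    """Distribute imitator-strand roots around the skeleton root in
    the patch's uv space along an Archimedes spiral r = c * theta
    (Eqs. 1-4), sampled at fixed angular increments."""
    roots = []
    for k in range(1, n_strands + 1):
        theta = math.radians(step_deg) * k
        r = closeness * theta                             # Eq. 2
        roots.append((root_uv[0] + r * math.cos(theta),   # Eq. 4, u
                      root_uv[1] + r * math.sin(theta)))  # Eq. 4, v
    return roots
```

Successive roots spiral outward from the skeleton root, with the closeness parameter controlling the spacing between turns.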
Fig. 4. The Archimedes spiral on the Catmull–Rom patch uv space. The points are obtained with 30 degree increments. The left-hand spiral closeness parameter c_{i,j} is greater than the right-hand spiral closeness parameter
Fig. 5. The blue vector represents the offset distance for that control point. According to the fuzziness parameter, the corresponding control point of the imitator strand is perturbed by randomly replacing it in the volumes defined by perturbation spheres
The distances between the skeleton strand root point and the distributed imitator strand roots define the offset distances of the remaining control points of the imitator strands from the control points of the skeleton strand. In other words, if closeness is kept constant within a wisp and no fuzziness is applied to the remaining control points, the imitator strands keep these offset distances.

The role of the fuzziness style parameter is to produce uneven-looking wisps. An increased value of the fuzziness parameter results in a more perturbed imitator strand control point location (Fig. 5).
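A sketch of the perturbation-sphere idea: each imitator control point is the corresponding skeleton control point plus the imitator's root offset, displaced randomly inside a sphere. Using the fuzziness value f_{i,j} directly as the sphere radius is an assumption of this sketch:

```python
import random

def perturb_control_point(base, offset, fuzziness, rng=None):
    """Place an imitator-strand control point: skeleton control point
    plus root offset, then a random displacement inside a perturbation
    sphere.  Taking the sphere radius equal to the fuzziness value is
    an illustrative assumption."""
    rng = rng or random.Random(0)
    # Rejection-sample a displacement inside the unit ball
    while True:
        d = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        if sum(x * x for x in d) <= 1.0:
            break
    return tuple(base[i] + offset[i] + fuzziness * d[i] for i in range(3))
```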
4 Animation creation
4.1 Dynamic model
Our model separates the dynamical properties of our skeleton strand structure from the stylistic properties, using Choe et al.’s approach [2]. We decompose the master strand into two components, outline and detail components, in order to separate the intrinsic geometry of the strand from the deformations applied to it.
4.1.1 Dynamic representation of skeleton strand
When a skeleton strand is drawn, it is processed by the dynamic model, in order to extract its physical and detail representative components. The component extraction process consists of a linear regression model as described below, in which physical representative components are aligned with the axis of regression, and detail representative components become the vertical distance vectors between the axis of regression and the skeleton strand points (Fig. 6).

1. After the skeleton strand is drawn, the axis of regression vector, starting from the root and ending at the last control point, is found.
2. Each control point of the skeleton strand is projected onto this axis, thus forming the physical masses of the strand.
3. Vectors starting from physical masses and ending at the corresponding control points make the detail components, and vectors connecting neighboring physical masses make the physical components.
4. Physical components are used to preserve the distance between their connected physical masses.
5. Once the above steps are complete, when simulating the created hair model using physically based techniques, input forces act on the created physical masses.
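The steps above amount to projecting each control point onto the root-to-tip axis; a minimal sketch:

```python
def decompose_strand(points):
    """Split a sketched skeleton strand into physical and detail
    components: project each control point onto the root-to-tip axis
    to get the physical masses; the detail components are the residual
    vectors from each mass to its control point."""
    root, tip = points[0], points[-1]
    axis = [tip[i] - root[i] for i in range(3)]
    L2 = sum(a * a for a in axis)  # squared axis length
    masses, details = [], []
    for p in points:
        t = sum(axis[i] * (p[i] - root[i]) for i in range(3)) / L2
        m = tuple(root[i] + t * axis[i] for i in range(3))
        masses.append(m)
        details.append(tuple(p[i] - m[i] for i in range(3)))
    return masses, details
```

Adding each detail vector back to its physical mass reconstructs the sketched strand exactly, which is what lets the simulation move only the masses while the styling survives.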
Fig. 6a–c. Extraction of physical and detail representative components from a skeleton strand. a The sketched skeleton strand. b The extracted components: red rods are physical components and yellow rods are detail components. c The generated wisp
4.1.2 Global simulation force stroke
Our tool, besides providing full control for creating hairstyles, also aims at providing control while animating the created hairstyle. We propose two approaches in this paper. The first method is the global simulation force stroke (GSFS). The second method is the key frame model, which will be discussed in the next section. GSFS enables the user to intuitively manipulate the physical environment via the drawing interface. When the tool is in the physical animation mode, a drawn stroke on the screen is recognized as a GSFS, thus creating a virtual force field following the GSFS’s pattern. Creating a force field requires a grid structure underneath. Field elements are calculated and stored in grid nodes, which will later be accessed by the physical masses that are located inside them (Fig. 7).
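One simple way to realize a stroke-driven force grid is to accumulate each stroke segment's direction into the grid cell containing its midpoint; this is a simplification of the paper's wall-intersection construction (Fig. 7), with hypothetical function names:

```python
def stroke_to_force_field(stroke, grid_n, cell):
    """Rasterize a GSFS stroke (list of 2D points) into a sparse
    grid of force vectors: each stroke segment's direction vector is
    accumulated in the cell containing the segment midpoint."""
    field = {}
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        key = (min(int(mx // cell), grid_n - 1),
               min(int(my // cell), grid_n - 1))
        fx, fy = field.get(key, (0.0, 0.0))
        field[key] = (fx + (x1 - x0), fy + (y1 - y0))
    return field

def force_at(field, pos, cell):
    """Force seen by a physical mass: the vector stored in its cell."""
    return field.get((int(pos[0] // cell), int(pos[1] // cell)), (0.0, 0.0))
```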
4.2 Key frame model
We also propose a key frame animation interface for creating hair simulations. Hair wisps are drawn on the 3D head model and their positions are recorded as key frames. After key frame creation is finished, the in-betweening stage operates.

The in-betweening stage is responsible for calculating the transition functions and the mapping of wisps between the key frames. It is the most crucial stage of key frame-based hair animation, since it fills the gaps between the key frames provided to the interface with correctly calculated in-between frames.

The stage consists of three steps: wisp matching between key frames, wisp mass sampling, and path function creation.
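The wisp mass sampling step, which resamples the strand with fewer mass nodes so that both key frames have matching node counts, can be sketched as uniform arc-length resampling of a polyline (an assumed placement strategy; the paper does not spell out the sampling rule):

```python
import math

def resample_strand(nodes, n):
    """Resample a strand's mass nodes to exactly n nodes, uniformly
    spaced along arc length, so that two key-frame strands can be put
    into one-to-one correspondence for in-betweening."""
    # Cumulative arc length at each input node
    cum = [0.0]
    for a, b in zip(nodes, nodes[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1]
    out, j = [], 0
    for k in range(n):
        s = total * k / (n - 1)           # target arc length
        while j < len(cum) - 2 and cum[j + 1] < s:
            j += 1                         # advance to containing segment
        seg = cum[j + 1] - cum[j]
        t = 0.0 if seg == 0 else (s - cum[j]) / seg
        a, b = nodes[j], nodes[j + 1]
        out.append(tuple(a[i] + t * (b[i] - a[i]) for i in range(len(a))))
    return out
```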
4.2.1 Wisp matching between key frames
There can be any number of key frames provided to the tool. Each key frame can also consist of up to hundreds of individual hair wisps. Hair wisps of each key frame should be correctly mapped to the hair wisps on the next key

Fig. 7. The intersection points of the GSFS and the walls of the grid are found, and a force vector between these points is formed

Citations

Proceedings ArticleDOI
17 Oct 2019
TL;DR: This work proposes an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools, and provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR).
Abstract: While hair is an essential component of virtual humans, it is also one of the most challenging digital assets to create. Existing automatic techniques lack the generality and flexibility to create rich hair variations, while manual authoring interfaces often require considerable artistic skills and efforts, especially for intricate 3D hair structures that can be difficult to navigate. We propose an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools. Modelers, including novice users, can focus on the overall hairstyles and local hair deformations, as our system intelligently suggests the desired hair parts. Our method combines the flexibility of manual authoring and the convenience of data-driven automation. Since hair contains intricate 3D structures such as buns, knots, and strands, they are inherently challenging to create using traditional 2D interfaces. Our system provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR). Users can draw high-level guide strips, from which our system predicts the most plausible hairstyles via a deep neural network trained from a professionally curated dataset. Each hairstyle in our dataset is composed of multiple variations, serving as blend-shapes to fit the user drawings via global blending and local deformation. The fitted hair models are visualized as interactive suggestions that the user can select, modify, or ignore. We conducted a user study to confirm that our system can significantly reduce manual labor while improve the output quality for modeling a variety of head and facial hairstyles that are challenging to create via existing techniques.

11 citations


Cites background from "3D Hair sketching for real-time dyn..."

  • ...sketchbased for creation [45, 15] or posing [33]) without requiring detailed inputs such as individual strands [2] or clusters [24]....



Journal ArticleDOI
Yongtang Bao1, Yue Qi1
TL;DR: This paper proposes a novel approach to construct a realistic 3D hair model from a hybrid orientation field and demonstrates that this approach can preserve structural details of 3Dhair models.
Abstract: Image-based hair modeling methods enable artists to produce abundant 3D hair models. However, the reconstructed hair models could not preserve the structural details, such as uniformly distributed hair roots, interior strands growing in line with real distribution and exterior strands similar to images. In this paper, we propose a novel approach to construct a realistic 3D hair model from a hybrid orientation field. Our hybrid orientation field is generated from four fields. The first field makes the surface structure of a hairstyle be similar to the input images as much as possible. The second field makes the hair roots and interior hair strands be consistent with actual distribution. The tracing hair strands can be confined to the hair volume according to the third field. And the fourth field makes the growing direction of one point at a strand be compatible with its predecessor. To generate these fields, we construct high-confidence 3D strand segments from the orientation field of point cloud and 2D traced strands. Hair strands automatically grow from uniformly distributed hair roots according to the hybrid orientation field. We use energy minimization strategy to optimize the entire 3D hair model. We demonstrate that our approach can preserve structural details of 3D hair models.

8 citations


Cites methods from "3D Hair sketching for real-time dyn..."

  • ...[1] introduced a sketch-based tool for hair modeling, while it was cumbersome for a large number of hair....



Journal ArticleDOI
Yongtang Bao1, Yue Qi1
TL;DR: This paper surveys the state of the art in the major topics of image-based techniques for hair modeling, including single-viewhair modeling, static hair modeling from multiple images, video-based dynamic hair modeled, and the editing and reusing of hair modeling results.
Abstract: With the tremendous performance increase of today’s graphics technologies, visual details of digital humans in games, online virtual worlds, and virtual reality applications are becoming significantly more demanding. Hair is a vital component of a person’s identity and can provide strong cues about age, background, and even personality. More and more researchers focus on hair modeling in the fields of computer graphics and virtual reality. Traditional methods are physics-based simulation by setting different parameters. The computation is expensive, and the constructing process is non-intuitive, difficult to control. Conversely, image-based methods have the advantages of fast modeling and high fidelity. This paper surveys the state of the art in the major topics of image-based techniques for hair modeling, including single-view hair modeling, static hair modeling from multiple images, video-based dynamic hair modeling, and the editing and reusing of hair modeling results. We first summarize the single-view approaches, which can be divided into the orientation-field and data-driven-based methods. The static methods from multiple images and dynamic methods are then reviewed in Sections III and IV . In Section V , we also review the editing and reusing of hair modeling results. The future development trends and challenges of image-based methods are proposed in the end.

6 citations


Cites methods from "3D Hair sketching for real-time dyn..."

  • ...[78] described a sketch-based tool to generate hair models with different styling parameters....



References

Book
06 Aug 1999
Abstract: From the Publisher: OpenGL is a powerful software interface used to produce high-quality computer-generated images and interactive applications using 2D and 3D objects, color bitmaps, and images. The OpenGL Programming Guide, Third Edition, provides definitive and comprehensive information on OpenGL and the OpenGL Utility Library. This book discusses all OpenGL functions and their syntax and shows how to use those functions to create interactive applications and realistic color images. You will find clear explanations of OpenGL functionality and many basic computer graphics techniques, such as building and rendering 3D models; interactively viewing objects from different perspective points; and using shading, lighting, and texturing effects for greater realism. In addition, this book provides in-depth coverage of advanced techniques, including texture mapping, antialiasing, fog and atmospheric effects, NURBS, image processing, and more. The text also explores other key topics such as enhancing performance, OpenGL extensions, and cross-platform techniques. This third edition has been extensively updated to include the newest features of OpenGL, Version 1.2, including: 3D texture mapping; multitexturing; new pixel storage formats, including packed and reversed (BGRA) formats; specular lighting after texturing; the OpenGL imaging subset; and new GLU routines and functionality. Numerous code examples are provided to demonstrate practical programming techniques. The color plate section illustrates the power and sophistication of the newest version of OpenGL. The OpenGL Technical Library provides tutorial and reference books for OpenGL. The library enables programmers to gain a practical understanding of OpenGL and shows them how to unlock its full potential.
The OpenGL Technical Library was originally developed by SGI and continues to evolve under the auspices of the OpenGL Architecture Review Board (ARB), an industry consortium responsible for guiding the evolution of OpenGL and related technologies. The OpenGL ARB is composed of industry leaders, such as 3Dlabs, Compaq, Evans & Sutherland, Hewlett-Packard, IBM, Intel, Intergraph, Microsoft, NVIDIA, and SGI. The OpenGL Programming Guide, Third Edition was written by Mason Woo, Jackie Neider, Tom Davis, and Dave Shreiner.

712 citations


Proceedings ArticleDOI
01 Jul 2003
TL;DR: A continuous quasi-Newton optimization solves for appropriate "wind" forces to be applied to the underlying velocity field throughout the simulation, and a multiple-shooting approach splits large problems into smaller overlapping subproblems to greatly speed up the optimization process while avoiding certain local minima.
Abstract: We describe a method for controlling smoke simulations through user-specified keyframes. To achieve the desired behavior, a continuous quasi-Newton optimization solves for appropriate "wind" forces to be applied to the underlying velocity field throughout the simulation. The cornerstone of our approach is a method to efficiently compute exact derivatives through the steps of a fluid simulation. We formulate an objective function corresponding to how well a simulation matches the user's keyframes, and use the derivatives to solve for force parameters that minimize this function. For animations with several keyframes, we present a novel multiple-shooting approach. By splitting large problems into smaller overlapping subproblems, we greatly speed up the optimization process while avoiding certain local minima.

258 citations


"3D Hair sketching for real-time dyn..." refers methods in this paper

  • ...[15] use a continuous quasi-Newton optimization to solve for wind forces to be applied to the underlying velocity field throughout the simulation to match the user-defined key frames....

    [...]
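The force-optimization idea above can be sketched in a few lines. The following is a hypothetical, heavily simplified 1D stand-in: a single point mass replaces the smoke state, and plain finite-difference gradient descent replaces the paper's quasi-Newton solver with exact derivatives. All function names and parameters here are illustrative, not from the cited work.

```python
# Sketch: optimize per-step "wind" forces so a simulated trajectory
# hits user keyframes. 1D point mass stands in for the smoke state.

def simulate(forces, x0=0.0, v0=0.0, dt=0.1):
    """Integrate a unit point mass under per-step forces; return positions."""
    x, v = x0, v0
    xs = []
    for f in forces:
        v += f * dt
        x += v * dt
        xs.append(x)
    return xs

def objective(forces, keyframes, smooth=1e-4):
    """Squared keyframe error plus a small force-magnitude penalty."""
    xs = simulate(forces)
    err = sum((xs[t] - target) ** 2 for t, target in keyframes.items())
    return err + smooth * sum(f * f for f in forces)

def optimize(keyframes, n_steps=20, iters=500, lr=1.5, h=1e-4):
    """Finite-difference gradient descent on the force sequence."""
    forces = [0.0] * n_steps
    for _ in range(iters):
        base = objective(forces, keyframes)
        grad = []
        for i in range(n_steps):
            forces[i] += h
            grad.append((objective(forces, keyframes) - base) / h)
            forces[i] -= h
        forces = [f - lr * g for f, g in zip(forces, grad)]
    return forces

keyframes = {9: 1.0, 19: -1.0}   # reach x=1 at step 9, x=-1 at step 19
forces = optimize(keyframes)
xs = simulate(forces)
```

The quadratic objective makes this toy problem easy for gradient descent; the paper's contribution is making the same idea tractable for full fluid simulations via exact derivatives and multiple shooting.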


Journal ArticleDOI
TL;DR: This paper develops an elaborate model for the stiffness and inertial dynamics of an individual hair strand, which is numerically stable and fast, and unifies the continuum interaction dynamics and the individual hair's stiffness dynamics.
Abstract: In this paper we address the difficult problem of hair dynamics, particularly hair-hair and hair-air interactions. To model these interactions, we propose to consider hair volume as a continuum. Subsequently, we treat the interaction dynamics to be fluid dynamics. This proves to be a strong as well as viable approach for an otherwise very complex phenomenon. However, we retain the individual character of hair, which is vital to visually realistic rendering of hair animation. For that, we develop an elaborate model for the stiffness and inertial dynamics of an individual hair strand. Being a reduced coordinate formulation, the stiffness dynamics is numerically stable and fast. We then unify the continuum interaction dynamics and the individual hair’s stiffness dynamics.

186 citations


"3D Hair sketching for real-time dyn..." refers methods in this paper

  • ...Individual particle-based methods [1, 5, 11], real-time animation solutions [8, 10, 16], representing detailed interactions of the hair with each other [12] and interactive hairstyling systems [2, 7] have all addressed different parts of the hair modeling and animation problem....

    [...]
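As a rough illustration of per-strand dynamics (not the cited paper's reduced-coordinate stiffness formulation, and without the continuum hair-hair coupling), a toy damped mass-spring strand can be integrated as below; all names and constants are invented for the sketch.

```python
# Toy hair strand: a chain of unit point masses linked by stiff springs,
# fixed at the root, integrated with damped semi-implicit Euler.

def step_strand(pos, vel, rest_len, dt=0.002, k=2000.0, damping=4.0, g=-9.8):
    n = len(pos)
    forces = [[0.0, g] for _ in range(n)]        # gravity on each particle
    for i in range(n - 1):                       # spring between i and i+1
        dx = pos[i + 1][0] - pos[i][0]
        dy = pos[i + 1][1] - pos[i][1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
        f = k * (dist - rest_len)                # Hooke's law along the segment
        fx, fy = f * dx / dist, f * dy / dist
        forces[i][0] += fx;     forces[i][1] += fy
        forces[i + 1][0] -= fx; forces[i + 1][1] -= fy
    for i in range(1, n):                        # particle 0 is the fixed root
        vel[i][0] += (forces[i][0] - damping * vel[i][0]) * dt
        vel[i][1] += (forces[i][1] - damping * vel[i][1]) * dt
        pos[i][0] += vel[i][0] * dt              # position uses updated velocity
        pos[i][1] += vel[i][1] * dt

# Strand of 5 particles starting horizontally; it swings down under
# gravity and settles hanging below the root.
n, rest = 5, 0.25
pos = [[i * rest, 0.0] for i in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
for _ in range(20000):
    step_strand(pos, vel, rest)
```

Plain particle chains like this need small time steps for stiff springs, which is exactly the problem the paper's reduced-coordinate formulation avoids.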


Journal ArticleDOI
01 Aug 2004
TL;DR: Given a sequence of target smoke states, this paper generates a smoke simulation in which the smoke is driven towards each of these targets in turn, while exhibiting natural-looking, interesting smoke-like behavior.
Abstract: In this paper we present a new method for efficiently controlling animated smoke. Given a sequence of target smoke states, our method generates a smoke simulation in which the smoke is driven towards each of these targets in turn, while exhibiting natural-looking interesting smoke-like behavior. This control is made possible by two new terms that we add to the standard flow equations: (i) a driving force term that causes the fluid to carry the smoke towards a particular target, and (ii) a smoke gathering term that prevents the smoke from diffusing too much. These terms are explicitly defined by the instantaneous state of the system at each simulation timestep. Thus, no expensive optimization is required, allowing complex smoke animations to be generated with very little additional cost compared to ordinary flow simulations.

173 citations


"3D Hair sketching for real-time dyn..." refers background in this paper

  • ...Fattal and Lischinski [3] drive the smoke towards a given sequence of target smoke states....

    [...]
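The feedback-control idea can be sketched in 1D. This hypothetical simplification keeps only a driving term, here a velocity along the gradient of the log of the blurred target so that distant density still feels a pull, and drops the pressure solve and the paper's separate gathering term; all names and constants are illustrative.

```python
import math

def blur(a, passes=8):
    """Repeated 1-2-1 smoothing of a 1D field (clamped edges)."""
    for _ in range(passes):
        a = [(a[max(i - 1, 0)] + 2 * a[i] + a[min(i + 1, len(a) - 1)]) / 4.0
             for i in range(len(a))]
    return a

def drive_step(rho, target, dt=0.25, strength=1.0, eps=1e-9):
    """Move density up the log-gradient of the blurred target field."""
    n = len(rho)
    t = [max(x, eps) for x in blur(target)]
    v = [strength * (math.log(t[min(i + 1, n - 1)]) -
                     math.log(t[max(i - 1, 0)])) / 2.0 for i in range(n)]
    new = [0.0] * n
    for i in range(n):                   # mass-conserving upwind transport
        frac = max(-0.5, min(0.5, v[i] * dt))   # CFL-style cap
        j = i + (1 if frac > 0 else -1)
        if 0 <= j < n:
            new[j] += rho[i] * abs(frac)
            new[i] += rho[i] * (1.0 - abs(frac))
        else:
            new[i] += rho[i]             # boundary: mass stays put
    return new

n = 16
rho = [0.0] * n; rho[2] = 1.0            # smoke initially at cell 2
target = [0.0] * n; target[11] = 1.0     # keyframe target at cell 11
for _ in range(80):
    rho = drive_step(rho, target)
```

Because the driving velocity is an explicit function of the current state and target, each step costs little more than an ordinary simulation step, which is the point of the method over optimization-based control.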


Book
01 Aug 2005

162 citations


"3D Hair sketching for real-time dyn..." refers methods in this paper

  • ...This is accomplished by employing OpenGL’s depth buffers and invisible planar elements [13]....

    [...]
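The depth-buffer trick referred to above can be illustrated with a tiny software rasterizer: the invisible plane is written to the depth buffer with color writes disabled (the analogue of glColorMask in OpenGL), so later fragments behind it fail the depth test and are hidden, while the plane itself never appears in the image. This is a schematic sketch, not the tool's actual rendering code.

```python
# Software depth-buffer sketch of the invisible-occluder technique.

W, H = 4, 4
depth = [[1.0] * W for _ in range(H)]    # cleared to the far plane
color = [["bg"] * W for _ in range(H)]   # cleared to background

def draw(x, y, z, c, write_color=True):
    """Emit one fragment with a standard less-than depth test."""
    if z < depth[y][x]:
        depth[y][x] = z
        if write_color:                  # False mimics glColorMask(FALSE,...)
            color[y][x] = c

# 1. Invisible occluder plane at z=0.5 over the left half, depth-only.
for y in range(H):
    for x in range(W // 2):
        draw(x, y, 0.5, "plane", write_color=False)

# 2. Hair fragments at z=0.8: occluded on the left, visible on the right.
for y in range(H):
    for x in range(W):
        draw(x, y, 0.8, "hair")
```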