Proceedings ArticleDOI

HairBrush for Immersive Data-Driven Hair Modeling

TL;DR: This work proposes an interactive hair modeling system that helps create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools, and provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR).
Abstract: While hair is an essential component of virtual humans, it is also one of the most challenging digital assets to create. Existing automatic techniques lack the generality and flexibility to create rich hair variations, while manual authoring interfaces often require considerable artistic skills and efforts, especially for intricate 3D hair structures that can be difficult to navigate. We propose an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools. Modelers, including novice users, can focus on the overall hairstyles and local hair deformations, as our system intelligently suggests the desired hair parts. Our method combines the flexibility of manual authoring and the convenience of data-driven automation. Since hair contains intricate 3D structures such as buns, knots, and strands, they are inherently challenging to create using traditional 2D interfaces. Our system provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR). Users can draw high-level guide strips, from which our system predicts the most plausible hairstyles via a deep neural network trained from a professionally curated dataset. Each hairstyle in our dataset is composed of multiple variations, serving as blend-shapes to fit the user drawings via global blending and local deformation. The fitted hair models are visualized as interactive suggestions that the user can select, modify, or ignore. We conducted a user study to confirm that our system can significantly reduce manual labor while improving the output quality for modeling a variety of head and facial hairstyles that are challenging to create via existing techniques.

Summary (8 min read)

INTRODUCTION

  • Recent advances in modeling and rendering of digital humans have provided unprecedented realism for a variety of real-time applications, as exemplified in “Meet Mike” [33], “Siren” [34], and “Soul Machines” [1].
  • The resulting strip-based hair models can also be converted to other formats such as strands.
  • To simulate realistic usage scenarios, the authors train the network using varying numbers of sparse representative strokes.
  • The authors thus provide an immersive authoring interface in virtual reality (VR), which can facilitate 3D painting and sculpting for freeform design.
  • Users can naturally model any hairstyles using a variety of brush types in 3D without physical limitations.

Manual Hair Modeling

  • To generate photorealistic CG characters, sophisticated design tools have been proposed to model, simulate, and render human hair.
  • Manual modeling offers complete freedom in creating the desired hairstyles [13, 6, 48], but requires significant expertise and labor, especially for human hairstyles with rich and complex structures.
  • The authoring interface needs to be easy to use (e.g. sketch-based for creation [44, 15] or posing [32]) without requiring detailed inputs such as individual strands [2] or clusters [23].
  • The authors present a system that produces realistic outputs in real-time given only a few sparse user strokes to facilitate interactive exploration.

3D Deep Learning

  • Recent methods such as [50, 31] applied deep learning to reconstruct hair from a single image.
  • Saito et al. [31] represent hair geometry using a 3D volumetric field, which can be efficiently encoded and learned through a volumetric variational autoencoder (VAE).
  • Their methods are limited to hair reconstruction from images and do not offer intuitive control to modify hair details.
  • The authors' work is inspired by the seminal work on point cloud recognition [30].
  • Though simple, the architecture has the nice property of being invariant to the number and order of points.

Modeling in VR

  • Even with the assistance of data and/or machine learning, traditional 2D interfaces such as [18] present inherent challenges for authoring 3D content, especially those with intricate 3D structures such as human hair buns, knots, and wisps.
  • Recent advances in VR modeling [39, 16, 26] have offered a new venue for interactive 3D modeling.
  • None of the existing VR platforms supports direct editing of hair geometry, let alone an automated system that can predict and complete a complex hair model from sparse painting strokes.
  • The authors introduce the first hair modeling tool that leverages the unprecedented authoring freedom in VR and provides intelligent online hairstyle suggestion and manual authoring assistance.

DESIGN GOALS

  • To build a powerful, flexible, and easy-to-use hair authoring system, the authors have the following design goals in mind.
  • Wide audience: Both novices and professionals can easily and quickly create high-quality hair models.
  • Flexible input: Only high-level, sparse input gestures are required to indicate intended hairstyles; users can choose to provide more inputs for finer control.
  • Assistance: Based on sparse user inputs, their system interactively suggests complete hair structures, which can be composed of parts from different hairstyles.
  • Interactivity: Suggestions should be dynamically updated in real time during user interaction and scale well to different numbers of query strips.
  • Immersion: Complex 3D structures, such as buns, knots, and strands, should be easy to specify.
  • Quality output: The output models should be complete and realistic, regardless of the amounts and types of user inputs.

Basic user interaction

  • As shown in Figure 2b and the supplementary video, their system supports basic user interactions such as brushing strips, loading models, undo/redo, changing stroke colors and hair textures, etc.
  • As the user draws intended hair strips, their system predicts the most plausible hairstyles, rendered as transparent hints.
  • Whenever a new user strip is detected, the interface updates and displays modeling suggestions in real time that fit the user inputs.
  • To distinguish user strokes from system suggestions, the hints are visualized with transparency.

Hybrid hairstyle

  • The authors' system supports the creation of heterogeneous hairstyles by merging multiple hairstyles into a single output.
  • Once a hairstyle is accepted or manually drawn in a local region, the user can continue to draw new guide strips, and the system will update the suggestions taking into account both the current guide strips and previously accepted/drawn ones, enabling a smooth blend of styles.

Brush modes

  • It can be very difficult to manually draw complicated hair structures, such as curls and braids.
  • Thus, their system provides two special hair brushes that allow users to draw spin curves and braids with simple gestures.
  • For the spin brush, the spin size and curliness can be adjusted via the touchpad of the freeform drawing controller.
  • The braid brush can guide their system to provide automatic suggestions of complex braids.
  • The authors also provide an attach brush that automatically projects the whole stroke onto the scalp surface, which is useful when creating the inner hair layer.

Manual mode

  • Apart from auto-completion, manual mode is also supported in their system.
  • Users can manually draw either from scratch or on top of system suggestions.
  • If automatic anchoring is activated, the strip root will be automatically anchored onto the scalp surface.
  • Collision avoidance, if turned on, automatically pushes the part of a stroke inside the head outward to avoid strip-head collisions.
  • To allow symmetric drawing on both sides of the head, the authors also provide a mirror tool so users only need to draw on one side.

Hairstyle prediction

  • Given sparse input user gestures as key strips, the authors estimate the user-intended hairstyle based on a nonlinear classifier trained by a deep neural network.
  • The predicted hairstyle will match the input strips at a high level, providing the best-matched class label while being robust to local perturbations that may be introduced by novice users.

Hair strip generation

  • The authors' system then synthesizes full-scale, detailed hair strips that conform to the input key strips.
  • The estimated hair class will direct the remaining algorithm steps to the corresponding set of hair templates/blendshapes with the same topology but variations in size, length, and shape.
  • Due to the limited expressiveness of linear blending, the result is prone to under-fitting.
  • The authors therefore non-rigidly deform the matching strips in the output so that their geometry better matches the corresponding key strips.
  • Once selected, the hair strips will stay fixed unless they are manually deformed for further refinement.

REPRESENTATION

  • With detailed texture maps and sophisticated rendering techniques, a single strip can realistically depict multiple strands of hair.
  • To make the surface normal less flat, their system adopts a U-shaped strip geometry, where the user can control the cross curvature of the strip.

Medial axis representation

  • The authors represent each hair strip with a fixed number of samples (30 in their implementation) evenly spaced along its medial axis (a resampling sketch follows below).
  • The mesh geometry can be represented and reconstructed via the local frames at these samples.
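The paper does not spell out the resampling step itself; the following is a minimal sketch (Python/NumPy), assuming the medial axis arrives as an ordered 3D polyline, of how 30 evenly spaced samples could be obtained by arc length:

```python
import numpy as np

def resample_strip(points, n_samples=30):
    """Resample an ordered (m, 3) polyline to n_samples points evenly
    spaced by arc length, matching the 30-sample strip representation."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arclen = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, arclen[-1], n_samples)
    # Interpolate each coordinate independently over arc length.
    return np.stack([np.interp(targets, arclen, points[:, k])
                     for k in range(3)], axis=1)
```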

Shape matching distance

  • Given a query strip, the system searches for its matching strip in the database that has the closest geometry.
  • In Equation (2), l_i stands for the length of the i-th strip, and d(P_i, P_j) measures the piecewise distance between the corresponding points of P_i and P_j.
  • In sum, d_SM penalizes large differences between the lengths or shapes of the input strips (a hedged sketch follows below).
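Equations (1)-(2) are not reproduced in this summary, so the following is only a hedged sketch combining the two stated ingredients, a length penalty and a piecewise point distance; the actual weighting in the paper may differ:

```python
import numpy as np

def d_sm(P_i, P_j, w_len=1.0):
    """Hypothetical shape-matching distance between two resampled strips
    (each an (n, 3) array): penalizes length and shape differences."""
    l_i = np.linalg.norm(np.diff(P_i, axis=0), axis=1).sum()
    l_j = np.linalg.norm(np.diff(P_j, axis=0), axis=1).sum()
    shape = np.linalg.norm(P_i - P_j, axis=1).mean()  # piecewise d(P_i, P_j)
    return w_len * abs(l_i - l_j) + shape
```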

DATABASE CONSTRUCTION

  • The authors construct three separate databases for scalp hairs, beards and mustaches, and eyebrows.
  • Since facial hairs tend to have simpler structures than scalp hairs, the authors created 5 different styles for both the beard/mustache and eyebrow databases, and each hairstyle consists of 5 different variations.
  • UI-wise, users only need to draw outer strips while their system can automatically generate globally coherent inner layers.
  • The segmentation is utilized to extract a sparse set of representative hair strips for learning high-level hairstyle features (Section 8).

HAIRSTYLE PREDICTION

  • The authors need to bridge the gap between sparse strips {Pi} (Section 6) and complete hair models {Hj} (Section 7).
  • To bridge this gap, the authors compare similarity in a latent feature space, which is more robust and invariant to spatial locations, drawing orders, and strip numbers.
  • To resemble real application scenarios, the authors extract only a random number (ranging from 1 to 10) of sparse strips from each hair model during training.
  • To consider both cases, the authors asked different users to segment the entire set of hair strips into 5 representative clusters (Section 7), with each cluster containing at most 10 nearest strips.
  • The training data are generated by sampling from these clusters in a combinatorial way (sketched below).
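A minimal sketch of this sampling step, assuming strips are stored as indices grouped into the 5 representative clusters; the exact combinatorial enumeration used by the authors is not specified:

```python
import random

def sample_sparse_strips(clusters):
    """Draw one sparse training example: 1-10 representative strips taken
    across a hair model's clusters (the counts follow the paper)."""
    pool = [strip for cluster in clusters for strip in cluster]
    k = random.randint(1, min(10, len(pool)))  # simulate 1-10 user strokes
    return random.sample(pool, k)
```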

Network details

  • Each input strip is represented with 30 uniformly sampled points, leading to a 180-dimensional (30×3×2) vector for each strip pair.
  • The classification module consists of two fully-connected layers and a softmax layer that outputs a 30-dimensional vector which encodes the probability of each hairstyle.
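A PointNet-style sketch of the described classifier in PyTorch; the hidden layer widths are assumptions, but the 180-d strip-pair input, shared-parameter encoding, max-pooling, two fully-connected layers, and 30-way softmax follow the text:

```python
import torch
import torch.nn as nn

class HairstyleClassifier(nn.Module):
    """Encode strip pairs with shared weights, max-pool into one global
    feature, and classify into 30 hairstyles."""
    def __init__(self, n_styles=30, feat_dim=256):
        super().__init__()
        self.pair_encoder = nn.Sequential(      # shared across all pairs
            nn.Linear(180, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Sequential(        # two FC layers + softmax
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, n_styles),
        )

    def forward(self, pairs):                   # pairs: (batch, n_pairs, 180)
        f = self.pair_encoder(pairs)            # per-pair latent features
        g, _ = f.max(dim=1)                     # invariant to pair count/order
        return torch.softmax(self.classifier(g), dim=-1)
```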

Run time usage

  • At run time, up to the 6 latest strips are fed into the network for hairstyle prediction.
  • The prediction accuracy of their network is 85% for top-1 and 94% for top-5 classification.
  • Once users accept system suggestions, existing guide strips will be removed.
  • As their system provides real-time feedback, users can accept a system suggestion whenever they find it appropriate.
  • Discarding older operations therefore enables the algorithm to provide more accurate updates according to the latest user inputs.

Blending

  • Given the target hairstyle H_i classified by the network in Figure 8, the authors use its corresponding hair models {H_i^j} (Section 7) as "blendshapes" [7] to fit the input key strips {P_ℓ}.
  • The method in [7] treats each complete face model as a blend shape.
  • Applying the same approach by treating entire hair meshes as linear bases does not work well as hairstyles can have more shape variations than faces.
  • The authors thus perform local blending at the hairstrip level for better expressiveness and accuracy.
  • The hair models {Hji} from the same hairstyle Hi have the same size and topology.

Strip matching

  • The purpose of strip matching is to find a suitable blending basis for each key strip P_ℓ.
  • Every strip in Hji has corresponding strips in all other hair models that belong to the same hairstyle Hi. Suppose hairstyle Hi has n strips.
  • The similarity between the strips is measured using the shape matching metric dSM defined in Equation (1).
  • If the matched strip M_i belongs to the j-th basis {c_j^k}, then {c_j^k} becomes P_ℓ's blending basis in the following steps.
  • The matching process can be accelerated via a kd-tree search (sketched below).
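A sketch of such an acceleration with SciPy's kd-tree, flattening each 30x3 strip into a 90-d vector; since d_SM is not a kd-tree metric, Euclidean neighbors here only prune candidates for re-ranking:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_strip_index(db_strips):
    """Index database strips (each (30, 3)) by their flattened coordinates."""
    return cKDTree(np.asarray([s.reshape(-1) for s in db_strips]))

def match_candidates(tree, query_strip, k=8):
    """Return k nearest candidate indices, to be re-ranked with d_SM."""
    _, idx = tree.query(query_strip.reshape(-1), k=k)
    return np.atleast_1d(idx)
```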

Key strips fitting

  • As the linear model restricts the coefficient to stay between 0 and 1 to avoid unstable extrapolations, it can under-fit large variations and produce non-smooth, rigid results.
  • The additional degrees of freedom would help smooth the outcome geometry while providing more capability in representing largely deformed structures.
  • The authors optimize the set of weights t_i, α_i, and s_i to minimize a fitting error of the form arg min_{t_i, α_i, s_i} ‖·‖ using a real-time L-BFGS solver (a reduced sketch follows below).
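A reduced sketch of the fit with SciPy's L-BFGS-B solver, optimizing only the blending coefficients with the stated [0, 1] bounds; the paper's full objective also involves the per-strip terms t_i and s_i, omitted here:

```python
import numpy as np
from scipy.optimize import minimize

def fit_key_strip(key_strip, basis):
    """Fit bounded blending weights for one key strip.
    `key_strip`: (30, 3); `basis`: (n_basis, 30, 3) corresponding strips."""
    n = len(basis)

    def fitting_error(alpha):
        blend = np.tensordot(alpha, basis, axes=1)  # weighted strip blend
        return np.linalg.norm(blend - key_strip) ** 2

    res = minimize(fitting_error, x0=np.full(n, 1.0 / n),
                   method="L-BFGS-B", bounds=[(0.0, 1.0)] * n)
    return res.x
```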

Coefficient propagation

  • The authors then propagate the computed coefficients to the remaining bases.
  • In particular, suppose {ci} is the set of bases that have received blending coefficients.
  • The coefficient propagation can be computed efficiently since dSM (Bi, Bj) can be pre-computed after the database construction.
  • After all the blending coefficients are solved, each strip can be synthesized from its blending basis (a hedged propagation sketch follows below).
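Equation (7) is not reproduced in this summary; the following is a hedged sketch of a distance-weighted propagation over the precomputed d_SM table, where the Gaussian falloff is an assumption:

```python
import numpy as np

def propagate_coefficients(solved, n_bases, d_sm_table, sigma=1.0):
    """Spread coefficients from fitted bases (dict: index -> weight vector)
    to the remaining bases, weighting donors by shape similarity."""
    src = list(solved)
    out = dict(solved)
    for j in range(n_bases):
        if j in out:
            continue
        w = np.exp(-np.array([d_sm_table[i, j] for i in src]) ** 2 / sigma ** 2)
        w /= w.sum()
        out[j] = sum(wi * solved[i] for wi, i in zip(w, src))
    return out
```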

Deformation

  • It is prone to under-fit due to limited variations in the linear bases.
  • Therefore the authors further deform the blending result towards key strips.
  • Given the key strips {P_i} and the linear blending results {V_i}, the authors compute the target displacement for each V_i as ∆d_i = k_i(P_i − V_i) (Equation 8), where ∆d_i consists of point-wise translation vectors and k_i is a weight that goes smoothly from 0 (the hair root) to 1 (the hair tip), making sure the hair roots are always fixed.
  • The authors then propagate the displacement vectors computed at the {Vi} to the remaining strips, analogous to the coefficient propagation for blending in Equation (7).
  • Compared to optimization-based mesh deformation [36, 35], this simple one-pass method deforms hair strips in real time with sufficient quality (Equation (8) is sketched below).
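Equation (8) in code, with a smoothstep falloff standing in for the unspecified smooth root-to-tip weight:

```python
import numpy as np

def deform_toward_key(V_i, P_i):
    """Displace a blended strip V_i ((n, 3)) toward its key strip P_i.
    The weight k rises smoothly from 0 at the root to 1 at the tip,
    keeping hair roots fixed (smoothstep is an assumption)."""
    t = np.linspace(0.0, 1.0, len(V_i))
    k = t * t * (3.0 - 2.0 * t)             # smoothstep
    return V_i + k[:, None] * (P_i - V_i)   # Equation (8): delta = k * (P - V)
```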

Merging

  • The authors initialize a dense volumetric grid that is large enough to encompass any hair strip anchored on the scalp with all cell values set to 0.
  • For each sample point from an existing hair strip, the authors set the occupied grid cell to 1, and propagate the occupancy to its neighboring cells using a Gaussian kernel.
  • For the newly suggested hair model, the authors remove strips that overlap with the existing strips, i.e., those with more than 60% of their points located in grid cells with positive occupancy values (sketched below).
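A sketch of the occupancy test, with a Gaussian blur standing in for the per-sample Gaussian-kernel propagation; grid resolution and sigma are assumptions, and strips are assumed to lie inside the grid (the paper's grid encompasses any anchored strip):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_occupancy(points, origin, cell, shape, sigma=1.0):
    """Splat existing strip samples into a dense grid, then propagate
    occupancy to neighboring cells with a Gaussian kernel."""
    grid = np.zeros(shape)
    idx = np.floor((points - origin) / cell).astype(int)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return gaussian_filter(grid, sigma)

def overlaps(strip, grid, origin, cell, thresh=0.6):
    """Reject a suggested strip if >60% of its samples fall in cells with
    positive occupancy values."""
    idx = np.floor((strip - origin) / cell).astype(int)
    return (grid[idx[:, 0], idx[:, 1], idx[:, 2]] > 0.0).mean() > thresh
```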

Hair Geometry Reconstruction

  • The authors first resolve the strip-head collision.
  • To accelerate the computation, the authors pre-compute a dense volumetric level-set field with respect to the scalp surface.
  • For each grid cell, its direction and distance to the scalp are also precomputed and stored.
  • With the calculated sample locations, the authors proceed to reconstruct the full geometry of hair details.
  • In particular, the authors build the Bishop frame [5] for each strip, and use the parallel transport method to calculate the transformation at each sample point.
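A sketch of the Bishop-frame construction via parallel transport [5]: the running normal is rotated by the minimal rotation that maps each tangent to the next (the initial normal choice is arbitrary):

```python
import numpy as np

def bishop_frames(samples):
    """Build twist-free frames (tangent, normal, binormal) along an (n, 3)
    strip by parallel transport of the normal."""
    t = np.diff(samples, axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    n = np.cross(t[0], [0.0, 1.0, 0.0])        # any vector orthogonal to t[0]
    if np.linalg.norm(n) < 1e-8:
        n = np.cross(t[0], [1.0, 0.0, 0.0])
    n /= np.linalg.norm(n)
    frames = [(t[0], n, np.cross(t[0], n))]
    for i in range(1, len(t)):
        axis = np.cross(t[i - 1], t[i])
        s, c = np.linalg.norm(axis), float(np.dot(t[i - 1], t[i]))
        if s > 1e-8:
            axis /= s
            # Rodrigues rotation of the normal about the transport axis.
            n = n * c + np.cross(axis, n) * s + axis * np.dot(axis, n) * (1 - c)
        n -= np.dot(n, t[i]) * t[i]            # re-orthogonalize
        n /= np.linalg.norm(n)
        frames.append((t[i], n, np.cross(t[i], n)))
    return frames
```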

EVALUATION

  • The authors' system is able to model highly complex and diverse hairstyles, such as curly hair, ponytails, braids, afro braids, buns, and beards.
  • These are extremely difficult or time-consuming to create from scratch using existing modeling techniques (e.g., [23, 15, 18, 20, 31]) and commercial systems (e.g., XGen, Ornatrix).
  • The authors evaluate their system via sample outcomes, comparisons with prior methods, and a pilot user study.

Results and Analysis

  • The authors show how their system can help author high-quality hair models with large variations.
  • Figure 11 shows the modeling results created with very sparse sets of guide strips.
  • As seen from the first and second columns, their prediction network is capable of capturing high-level features of the input strips, such as hair length and curliness.
  • The authors then present the effect of blending and deformation in fitting the hair models to the input key strips (third column).
  • The authors evaluate the merging operation in Figure 10.

Comparisons

  • The authors first compare their system with professional hair modeling tools, such as Maya XGen, to demonstrate that their system can be applied to real-world AAA game hair asset creation.
  • The authors then compare their method with a wide range of prior techniques: semi-automatic image-based hair modeling [18], and fully automatic hair reconstruction approaches [20, 31].
  • While previous hair modeling approaches mainly focus on generating hair models from 2D images, their framework enables hair creation and authoring in 3D space.
  • Since none of these previous systems can handle facial hairs, the authors only compare scalp hair authoring.

Professional tools

  • In Figure 9, the authors compare the scalp and facial hair models generated using their system with those created using the modern hair modeling software deployed in most industrial studios.
  • For AAA games, one hair model could easily take days for a professional artist to simply place the strips, as manipulating the mesh geometry (vertices, edges, and faces) could be extremely tedious in a 2D interface.
  • It can take additional days or even weeks to iterate on the geometry and texture refinement before reaching a satisfying rendering.
  • In contrast, their VR tool provides the full expressiveness to directly place the strips in 3D, as well as real-time immersive feedback of the final rendering.
  • With system suggestion turned on, the authoring time can be further reduced to only a few minutes (bottom row).

Sketching from photos

  • The authors also compare their approach with the semi-automatic image-based hair modeling algorithm [18].
  • Driven by a hair database, that approach retrieves the best-matched hair examples based on the reference image and user strokes drawn in 2D.
  • Due to the ambiguity of 3D projection and the view occlusion, the 2D strokes may not faithfully reflect the 3D structures.
  • The authors' method, in contrast, allows users to directly sketch key hair strips in 3D, enabling more accurate retrieval of hairstyles and higher-quality detailed hair geometry, as demonstrated in Figure 13.
  • Moreover, the output of their blendshape method always falls in the plausible space, while the deformation-based method may cause artifacts.

Synthesis from photos

  • The authors further compare their algorithm with the state-of-the-art automatic hair reconstruction approach [20] in Figure 14.
  • The authors' result is then automatically generated without any manual refinement.
  • The deviation of local details may be due either to an inaccurate retrieval of hairstyle and deformation or to the limited coverage of the dataset.
  • The authors also compare their approach with [31], a more recent deep learning-based method for reconstructing 3D hair from a single image.
  • In contrast to both approaches, their approach offers a significantly more accurate approximation of the input image, allowing users to create sophisticated hairstyles with just a few gestures.

User study

  • The authors conducted a preliminary user study to evaluate the usability of their suggestive system.
  • The authors recruited 1 experienced hair modeling artist and 8 novice users with different levels of sketching experience as participants.

Procedure

  • The study consisted of three sessions: warm-up (10 min), target session (60 min), open session and interview (20 min).
  • For the target session, the participants were given a reference portrait image (loaded into VR) and asked to create the hair model via (1) manual drawing and (2) suggestive modeling in their VR system.
  • The order of the conditions was counter-balanced among all participants.
  • For the open session, the goal is to let the participants explore the full functionality of their system and uncover potential usability issues.

Outcome

  • The authors measured the completion time and stroke counts for the target session.
  • The results show that using their suggestive system could save both the authoring time (average/min/max time: 6/3/10 minutes) and number of strokes (average/min/max: 22/11/32 strokes), compared to the manual drawing (average/min/max time: 18/15/25 minutes, average/min/max: 71/58/95 strokes).
  • The reported time of their suggestive approach includes both the manual drawing and editing operations as refinement.
  • The total stroke counts include those undone and deleted by the users.

Feedback

  • Overall, the participants found their system novel and useful, and liked the auto-complete function of their system.
  • The participants reported that the real-time suggestions were gratifying and accurate, providing helpful visual guidance for VR drawing.
  • The artist pointed out that controlling and adjusting the strip position and angle is very time-consuming in professional hair modeling software like Maya XGen.
  • Figure 16 shows the results of their study participants.
  • The authors' interactive system, on the other hand, suggests complete hair models to match sparse user strokes, and allows merging and deformation of different hairstyles with ease.

VR interface

  • Both the VR interface and the autocomplete function of their system can help improve usability.
  • The authors omitted the evaluation of the VR interface against traditional desktop interfaces, as the benefits of VR interaction are not unique to this work and have been demonstrated in other platforms such as VR brushing and painting.
  • Instead, the authors asked three professional hair modeling artists about the effort needed to recreate the paper's results with their desktop tools, and all of them said that achieving the same complexity and quality of strip placement would take more than 2 days on average.

ACKNOWLEDGMENTS

  • This research was funded in part by Adobe, the ONR YIP grant N00014-17-S-FO14, the CONIX Research Center, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA, the Andrew and Erna Viterbi Early Career Chair, the U.S. Army Research Laboratory (ARL) under contract number W911NF-14-D-0005, and Sony.
  • Hao Li is affiliated with USC, USC/ICT, and Pinscreen.
  • This project was not funded by nor conducted at Pinscreen.
  • The authors would like to thank Aviral Agarwal for his help and professional advice on hair modeling, Liwen Hu for providing us the code for strip-to-strand conversion, Emily O’Brien and Mike Seymour for the scanned head models, and the anonymous reviewers for their valuable suggestions.
  • The content of the information does not necessarily reflect the position or the policy of the government, and no official endorsement should be inferred.



HairBrush for Immersive Data-Driven Hair Modeling
Jun Xing (USC Institute for Creative Technologies), Koki Nagano (Pinscreen), Weikai Chen (USC Institute for Creative Technologies), Haotian Xu (Wayne State University), Li-Yi Wei (Adobe Research), Yajie Zhao (USC Institute for Creative Technologies), Jingwan Lu (Adobe Research), Byungmoon Kim (Adobe Research), Hao Li (USC Institute for Creative Technologies, Pinscreen)
ABSTRACT

While hair is an essential component of virtual humans, it is also one of the most challenging digital assets to create. Existing automatic techniques lack the generality and flexibility to create rich hair variations, while manual authoring interfaces often require considerable artistic skills and efforts, especially for intricate 3D hair structures that can be difficult to navigate. We propose an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools. Modelers, including novice users, can focus on the overall hairstyles and local hair deformations, as our system intelligently suggests the desired hair parts. Our method combines the flexibility of manual authoring and the convenience of data-driven automation. Since hair contains intricate 3D structures such as buns, knots, and strands, they are inherently challenging to create using traditional 2D interfaces. Our system provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR). Users can draw high-level guide strips, from which our system predicts the most plausible hairstyles via a deep neural network trained from a professionally curated dataset. Each hairstyle in our dataset is composed of multiple variations, serving as blend-shapes to fit the user drawings via global blending and local deformation. The fitted hair models are visualized as interactive suggestions that the user can select, modify, or ignore. We conducted a user study to confirm that our system can significantly reduce manual labor while improving the output quality for modeling a variety of head and facial hairstyles that are challenging to create via existing techniques.

UIST '19, October 20–23, 2019, New Orleans, LA, USA. © 2019 ACM. ISBN 978-1-4503-6816-2/19/10. DOI: https://doi.org/10.1145/3332165.3347876
CCS Concepts

Human-centered computing → Virtual reality; Computing methodologies → Neural networks; Shape modeling.

Author Keywords

hair, modeling, virtual reality, data-driven, machine learning, user interface
INTRODUCTION

Recent advances in modeling and rendering of digital humans have provided unprecedented realism for a variety of real-time applications, as exemplified in "Meet Mike" [33], "Siren" [34], and "Soul Machines" [1]. Compared to other human components, compelling 3D hair models are particularly challenging to create, render, and animate, due to their variety in styles and shapes.

In production, 3D hair models are still created manually with the help of advanced design software, such as XGen, Ornatrix, or Hairfarm. While these solutions incorporate a wide range of cutting-edge tools for intuitive shape and procedural strand manipulation, they are developed for highly trained and experienced digital artists. Even for a skilled user, compelling and realistic hairstyles can easily take days or weeks to produce. Human hair is volumetric and often consists of highly intricate 3D structures, such as strands, wisps, buns, and braids, which are difficult to design with traditional 2D interfaces. 3D hair digitization and data-driven techniques can reduce the need for manual labor [27, 11, 10, 25, 19, 9], but afford limited control for real production environments.

We propose a practical hair design system that combines the intuition and immersion of VR-based 3D interactions with the efficiency and automation of data-driven modeling.

Figure 1: Immersive hairstyle authoring with our system. Users can draw high-level hair gestures (green) in VR (a), based on which our system predicts the most plausible hairstyles (b). Our system can also help create facial hairs such as beards and eyebrows, as shown in (c) and (d). Users can interact with the suggestions to maintain full control, including deforming hair structures and merging multiple hairstyles. The hair model produced by our system is composed of strips that can be rendered in high quality and in real time. The final outcome (e) visualizes the underlying strips (left) and rendered hairs (right), and was completed by a novice in 10 minutes with 382 suggested and 71 manually-drawn strips. Please refer to the accompanying video for live actions.

In an immersive virtual environment, users can interactively express their intentions via natural gestures, such as brushes for braids and spins for afro-curls, and receive realistic 3D hairstyle suggestions from our system, similar to prior autocomplete techniques for 2D sketching [18, 45, 47] and 3D modeling [12, 28]. The suggested high-quality hairstyles are learned and computed from a hair model database created by professional artists. Our interface allows users to accept, modify, or ignore these suggestions to maintain full control over the design process. Figure 1 provides an example.
We represent our hair models as textured polygonal strips, which are widely adopted in AAA real-time games such as Final Fantasy 15 and Uncharted 4, and in state-of-the-art real-time performance-driven CG characters such as the "Siren" demo shown at GDC 2018 [34]. Hair strips are flexible to model and highly efficient to render [24], suitable for authoring, animating, and rendering multiple virtual characters. The resulting strip-based hair models can also be converted to other formats such as strands. In contrast, with strand-based models, barely a single character can be rendered on a high-end machine, and existing approaches for converting strands into poly-strips tend to cause adverse rendering effects due to the lack of consideration of appearances and textures during optimization.
To connect imprecise manual interactions with detailed hairstyles, we design a deep neural network architecture to classify sparse and varying numbers of input strokes to a matching hairstyle in a database. We first ask hair modeling artists to manually create a strip-based hair database with diverse styles and structures. We then expand our initial hair database using non-rigid deformations so that the deformed hair models share the same topology (e.g. ponytail) but vary in lengths and shapes. To simulate realistic usage scenarios, we train the network using varying numbers of sparse representative strokes. To amplify the training power of the limited data set and to enhance robustness of classification, the network maps pairs instead of individual strips into a latent feature space. The mapping stage has shared-parameter layers and max-pooling, ensuring our network scales well to arbitrary numbers of user strokes.
As each hairstyle consists of multiple geometry variations, we treat the retrieved hair models as blend-shapes [7] to fit the input strokes via a combination of global linear blending and local non-linear deformation. Instead of taking an entire hair model as the blending basis, we blend at the strip level to facilitate better expressiveness for each hair strip combination. Following the blending operation, we perform real-time deformation to coherently propagate local details of key strips to a global scale. Our system also supports the creation of heterogeneous hairstyles by interactive merging of multiple hairstyles.
Even with a suggestive system, hair can still be difficult to create via conventional 2D interfaces due to its complex and volumetric 3D structures. We thus provide an immersive authoring interface in virtual reality (VR), which can facilitate 3D painting and sculpting for freeform design. Users can naturally model any hairstyle using a variety of brush types in 3D without physical limitations. When interacting freely in space, new challenges arise, such as the difficulty of accurately perceiving depth and position and of aligning objects [3]. We further propose new interface techniques to enable precise and intuitive interactions between the user and the hair model. We use our VR prototype for database creation, training data collection, and subsequent user studies.
Experiments demonstrate that our system can save a significant amount of manual effort, while providing full degrees of freedom to create a large variation of compelling hairstyles that can be difficult to produce via existing methods, such as ponytails (Figure 1b), braids (Figure 3), facial hairs (Figure 1d), afro-curls (bottom row of Figure 11), hair buns, and long hair with small curls (Figure 17). Even novice users can create complex hairstyles with intricate geometry, texture, and shading in minutes that would otherwise take days for experts.

The contributions of this paper are:

  • An immersive and suggestive VR-based interface for intuitive and interactive hair authoring;
  • A deep neural network that accurately predicts and suggests high-level hairstyles from sparse, imprecise brush strokes;
  • A hair synthesis method that combines both global blending and local deformation;
  • A new hair dataset with a wide diversity of hairstyles and structures that are manually created by professional artists.
RELATED WORK
Manual Hair Modeling

To generate photorealistic CG characters, sophisticated design tools have been proposed to model, simulate, and render human hair. We refer readers to [41] for an extensive overview. Manual modeling offers complete freedom in creating the desired hairstyles [13, 6, 48], but requires significant expertise and labor, especially for human hairstyles with rich and complex structures. The authoring interface needs to be easy to use (e.g. sketch-based for creation [44, 15] or posing [32]) without requiring detailed inputs such as individual strands [2] or clusters [23]. We present a system that produces realistic outputs in real-time given only a few sparse user strokes to facilitate interactive exploration.
Hair Capture

Production-level capture typically requires controlled environments, manual tuning, and sophisticated acquisition devices, e.g. multi-view stereo rigs [14, 21, 25, 27]. To popularize hair capture for end-users, existing methods offer various tradeoffs among setup, quality, and robustness, e.g. thermal imaging [17], capturing from multiple views [19] versus a single view [9], or requiring different amounts and types of user inputs [11, 10, 42]. Such methods can reduce manual edits but also limit the output effects to the captured data at hand.

Despite the large body of work on hair capture, facial hair reconstruction remains largely unexplored. Facial hairs can have more varied shape, density, and length than scalp hairs. In [4], a 3D reconstruction method is proposed to recover both the geometry of sparse facial hair and its underlying skin surface. Other recent works [22, 8, 29, 38] focus on generating or editing facial hair in 2D. To the best of our knowledge, our method provides the first suggestive system for intelligent 3D modeling of both scalp and facial hairs.
Data-driven Hair Modeling

Instead of using only data captured at the current session, a database or a set of exemplars can enrich the scope and diversity of the output hairs [40, 18, 49]. Inspired by these data-driven approaches and auto-complete authoring systems [12, 45, 47, 28, 37], we provide an auto-complete hair-modeling interface that suggests potential detailed hair structures from a database based on sparse user inputs. Instead of using hand-crafted metrics (e.g. [18]) that often fail to handle the large style variations and complexity of hairstyles (e.g., ponytail vs. braid), we propose a deep neural network for database retrieval that is able to exploit high-level hairstyle features. Moreover, a deep neural network is much faster, and scales well with the dataset size, as later shown by [20].
3D Deep Learning

Recent methods such as [50, 31] applied deep learning to reconstruct hair from a single image. In particular, Zhou et al. [50] take the 2D orientation field of a hair image as input and synthesize strand features that are evenly distributed on a parameterized 2D scalp. Saito et al. [31] represent hair geometry using a 3D volumetric field, which can be efficiently encoded and learned through a volumetric variational autoencoder (VAE). However, their methods are limited to hair reconstruction from images and do not offer intuitive control to modify hair details. Different from these generation networks that rely on fixed and finite-dimensional input representations, our network is only used for nearest hairstyle retrieval via the hair strokes represented as point sequences. Our work is inspired by the seminal work on point cloud recognition [30]. PointNet uses a multi-layer perceptron to encode individual 3D points into feature vectors and aggregates them into a global feature vector by max pooling. Though simple, the architecture has the nice property of being invariant to the number and order of points.
Modeling in VR

Even with the assistance of data and/or machine learning, traditional 2D interfaces such as [18] present inherent challenges for authoring 3D content, especially content with intricate 3D structures such as human hair buns, knots, and wisps. Recent advances in VR modeling [39, 16, 26] have offered a new venue for interactive 3D modeling. However, none of the existing VR platforms supports direct editing of hair geometry, let alone an automated system that can predict and complete a complex hair model from sparse painting strokes. We introduce the first hair modeling tool that leverages the unprecedented authoring freedom in VR and provides intelligent online hairstyle suggestions and manual authoring assistance.
DESIGN GOALS

To build a powerful, flexible, and easy-to-use hair authoring system, we have the following design goals in mind:

Wide audience: Both novices and professionals can easily and quickly create high-quality hair models with our system.

Flexible input: Only high-level, sparse input gestures are required to indicate intended hairstyles. Users can choose to provide more inputs for finer control.

Assistance: Based on sparse user inputs, our system interactively suggests complete hair structures, which can be composed of parts from different hairstyles.

Interactivity: The suggestions should be dynamically updated in real time during user interaction and scale well to different numbers of query strips.

Immersion: Complex 3D structures, such as buns, knots, and strands, should be easy to specify.

Quality output: The output models should be complete and realistic, regardless of the amounts and types of user inputs.
Figure 2: System setup and user interface. One controller is used as the brush for freeform drawing, and the other as a UI panel from which the users can pinpoint to select different tools and change parameters (e.g. brush, color, and models).
USER INTERFACE

To meet the design goals in Section 3, we have built an assistive VR authoring system to help users easily and quickly create high-quality hair models. Users draw high-level, sparse hair strips to indicate intended hairstyles. Based on user inputs, our system interactively suggests complete, detailed, and realistic hair structures. Our interface design is inspired by systems in VR brushing (e.g., Tilt-Brush [16]) and geometry autocompletion (e.g., [12, 28]).
Basic user interaction

As shown in Figure 2b and the supplementary video, our system supports basic user interactions such as brushing strips, loading models, undo/redo, changing stroke colors and hair textures, etc. With VR handles (HTC Vive in our prototype), users can freely rotate, bend, and stretch hair strips in 3D (Figure 2a), as well as change the brush sizes and shapes via the touchpad of the freeform drawing controller.
Online modeling suggestion and hint update

As the user draws intended hair strips, our system predicts the most plausible hairstyles, rendered as transparent hints. Whenever a new user strip is detected, the interface updates and displays modeling suggestions in real time that fit the user inputs.

Figure 3: Examples of user interaction and system assistance. The user starts drawing one guide strip at the lower back of the head in a mirror mode (a), and our system predicts the best matching hairstyle visualized in transparent green (b). The transparency can be changed by moving the controller closer (more opaque) or farther (more transparent) from the head (c). The user can ignore the suggestion and draw more gestures, and a different hairstyle is retrieved and deformed to fit the current set of user interactions (d). The user can accept part of the suggestion using the selection brush (e), and the accepted suggestion will be adjusted to the current brush color (f). When the user draws a new gesture on the top of the head, our system provides a new suggestion taking into account both the current guide strip and the existing bun (g). Then the user accepts the suggestions and draws another gesture in the front (h), and context-aware suggestions are updated. (i) shows the merged result.
Figure 4: Brush modes. (a) shows the original gestures drawn by the user, which can have different effects under different brush modes, including the attach brush that attaches the strokes onto the scalp (b), and the spin (c) and braid (d) brushes for more complex structures.

Citations
Proceedings ArticleDOI
14 Jun 2020
TL;DR: A neural network pipeline is employed that synthesizes realistic and detailed images of facial hair directly in the target image in under one second, controlled by simple and sparse guide strokes from the user defining the general structural and color properties of the target hairstyle.
Abstract: We present an interactive approach to synthesizing realistic variations in facial hair in images, ranging from subtle edits to existing hair to the addition of complex and challenging hair in images of clean-shaven subjects. To circumvent the tedious and computationally expensive tasks of modeling, rendering and compositing the 3D geometry of the target hairstyle using the traditional graphics pipeline, we employ a neural network pipeline that synthesizes realistic and detailed images of facial hair directly in the target image in under one second. The synthesis is controlled by simple and sparse guide strokes from the user defining the general structural and color properties of the target hairstyle. We qualitatively and quantitatively evaluate our chosen method compared to several alternative approaches. We show compelling interactive editing results with a prototype user interface that allows novice users to progressively refine the generated image to match their desired hairstyle, and demonstrate that our approach also allows for flexible and high-fidelity scalp hair synthesis.

31 citations


Cites background from "HairBrush for Immersive Data-Driven..."

  • ...More recently, Hairbrush [69] demonstrates an immersive data-driven modeling system for 3D strip-based hair and beard models....


Journal ArticleDOI
03 Apr 2021-Sensors
TL;DR: In this article, the authors present an overview and analysis of existing work in Human-Centered Machine Learning (HCML) related to DL, and identify the topology of the HCML landscape by identifying research gaps, highlighting conflicting interpretations, addressing current challenges and presenting future HCML research opportunities.
Abstract: After Deep Learning (DL) regained popularity recently, the Artificial Intelligence (AI) or Machine Learning (ML) field is undergoing rapid growth concerning research and real-world application development. Deep Learning has generated complexities in algorithms, and researchers and users have raised concerns regarding the usability and adoptability of Deep Learning systems. These concerns, coupled with the increasing human-AI interactions, have created the emerging field that is Human-Centered Machine Learning (HCML). We present this review paper as an overview and analysis of existing work in HCML related to DL. Firstly, we collaborated with field domain experts to develop a working definition for HCML. Secondly, through a systematic literature review, we analyze and classify 162 publications that fall within HCML. Our classification is based on aspects including contribution type, application area, and focused human categories. Finally, we analyze the topology of the HCML landscape by identifying research gaps, highlighting conflicting interpretations, addressing current challenges, and presenting future HCML research opportunities.

29 citations

Journal ArticleDOI
TL;DR: DeepSketchHair, a deep learning based tool for modeling of 3D hair from 2D sketches, takes as input a user-drawn sketch, and automatically generates a3D hair model, matching the input sketch.
Abstract: We present DeepSketchHair , a deep learning based tool for modeling of 3D hair from 2D sketches. Given a 3D bust model as reference, our sketching system takes as input a user-drawn sketch (consisting of hair contour and a few strokes indicating the hair growing direction within a hair region), and automatically generates a 3D hair model, matching the input sketch. The key enablers of our system are three carefully designed neural networks, namely, S2ONet , which converts an input sketch to a dense 2D hair orientation field; O2VNet , which maps the 2D orientation field to a 3D vector field; and V2VNet , which updates the 3D vector field with respect to the new sketches, enabling hair editing with additional sketches in new views. All the three networks are trained with synthetic data generated from a 3D hairstyle database. We demonstrate the effectiveness and expressiveness of our tool using a variety of hairstyles and also compare our method with prior art.

15 citations

Journal ArticleDOI
TL;DR: Complex 3D curves can be created by directly drawing mid-air in immersive environments (Augmented and Virtual Realities); drawing mid-air strokes precisely on the surface of a 3D virtual object, however, is difficult and necessitates projecting the stroke onto the user-intended surface curve.
Abstract: Complex 3D curves can be created by directly drawing mid-air in immersive environments (Augmented and Virtual Realities). Drawing mid-air strokes precisely on the surface of a 3D virtual object, however, is difficult, necessitating a projection of the mid-air stroke onto the user "intended" surface curve. We present the first detailed investigation of the fundamental problem of 3D stroke projection in VR. An assessment of the design requirements of real-time drawing of curves on 3D objects in VR is followed by the definition and classification of multiple techniques for 3D stroke projection. We analyze the advantages and shortcomings of these approaches both theoretically and via practical pilot testing. We then formally evaluate the two most promising techniques, spraycan and mimicry, with 20 users in VR. The study shows a strong qualitative and quantitative user preference for our novel stroke mimicry projection algorithm. We further illustrate the effectiveness and utility of stroke mimicry to draw complex 3D curves on surfaces for various artistic and functional design applications.

11 citations

Proceedings ArticleDOI
01 Mar 2021
TL;DR: In this article, the authors explore the use of mid-air finger 3D sketching in VR for tree modeling and demonstrate the ease-of-use, efficiency, and flexibility in tree modelling and overall shape control.
Abstract: 2D sketch-based tree modeling cannot guarantee to generate plausible depth values and full 3D tree shapes. With the advent of virtual reality (VR) technologies, 3D sketching enables a new form for 3D tree modeling. However, it is labor-intensive and difficult to create realistically-looking 3D trees with complicated geometry and lots of detailed twigs with a reasonable amount of effort. In this paper, we explore the use of mid-air finger 3D sketching in VR for tree modeling. We present a hybrid approach that integrates freehand 3D sketches with an automatic population of branch geometries. The user only needs to draw a few 3D strokes in mid-air to define the envelope of the foliage (denoted as lobes) and main branches. Our algorithm then automatically generates a full 3D tree model based on these stroke inputs. Additionally, the shape of the 3D tree model can be modified by freely dragging, squeezing, or moving lobes in mid-air. We demonstrate the ease-of-use, efficiency, and flexibility in tree modeling and overall shape control. We perform user studies and show a variety of realistic tree models generated instantaneously from 3D finger sketching.

8 citations

References
Proceedings ArticleDOI
21 Jul 2017
TL;DR: This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
Abstract: Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.

9,457 citations


"HairBrush for Immersive Data-Driven..." refers background or methods in this paper

  • ...PointNet uses multi-layer perceptron to encode individual 3D points into feature vectors and aggregate them into a global feature vector by max pooling....


  • ...In order to handle arbitrary number of strips, we aggregate all the partial features {Fi,j } via a max-pooling layer to obtain a global feature F of the hairstyle, similar to PointNet [31]....


  • ...Our works is inspired by the seminal work on point clouds recognition [31]....



Proceedings ArticleDOI
01 Jul 1999
TL;DR: A new technique for modeling textured 3D faces by transforming the shape and texture of the examples into a vector space representation, which regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance.
Abstract: In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.

4,514 citations


"HairBrush for Immersive Data-Driven..." refers methods in this paper

  • ...The method in [7] treats each complete face model as a blend shape....


  • ...Given the target hairstyle Hi classified by the network in Figure 8, we use its corresponding hair models {H } i (Section 7) as ”blendshapes” [7] to fit the input key strips {P`}....


  • ...As each hairstyle consists of multiple geometry variations, we treat the retrieved hair models as blend-shapes [7] to fit the input strokes via a combination of global linear blending and local non-linear deformation....


Proceedings ArticleDOI
08 Jul 2004
TL;DR: In this paper, the Laplacian of the mesh is enhanced to be invariant to locally linearized rigid transformations and scaling, which can be used to perform surface editing at interactive rates.
Abstract: Surface editing operations commonly require geometric details of the surface to be preserved as much as possible. We argue that geometric detail is an intrinsic property of a surface and that, consequently, surface editing is best performed by operating over an intrinsic surface representation. We provide such a representation of a surface, based on the Laplacian of the mesh, by encoding each vertex relative to its neighborhood. The Laplacian of the mesh is enhanced to be invariant to locally linearized rigid transformations and scaling. Based on this Laplacian representation, we develop useful editing operations: interactive free-form deformation in a region of interest based on the transformation of a handle, transfer and mixing of geometric details between two surfaces, and transplanting of a partial surface mesh onto another surface. The main computation involved in all operations is the solution of a sparse linear system, which can be done at interactive rates. We demonstrate the effectiveness of our approach in several examples, showing that the editing operations change the shape while respecting the structural geometric detail.

1,143 citations

Proceedings ArticleDOI
04 Jul 2007
TL;DR: This work argues that defining a modeling operation by asking for rigidity of the local transformations is useful in various settings, and devise a simple iterative mesh editing scheme based on this principle, that leads to detail-preserving and intuitive deformations.
Abstract: Modeling tasks, such as surface deformation and editing, can be analyzed by observing the local behavior of the surface. We argue that defining a modeling operation by asking for rigidity of the local transformations is useful in various settings. Such formulation leads to a non-linear, yet conceptually simple energy formulation, which is to be minimized by the deformed surface under particular modeling constraints. We devise a simple iterative mesh editing scheme based on this principle, that leads to detail-preserving and intuitive deformations. Our algorithm is effective and notably easy to implement, making it attractive for practical modeling applications.

1,028 citations


"HairBrush for Immersive Data-Driven..." refers methods in this paper

  • ...Compared to optimization-based mesh deformation [37, 36], this simple one-pass method deforms hair strips in real time with sufficient quality....


Journal ArticleDOI
01 Aug 2008
TL;DR: In this article, a discrete treatment of adapted framed curves, parallel transport, and holonomy is presented for a discrete geometric model of thin flexible rods with arbitrary cross section and undeformed configuration.
Abstract: We present a discrete treatment of adapted framed curves, parallel transport, and holonomy, thus establishing the language for a discrete geometric model of thin flexible rods with arbitrary cross section and undeformed configuration. Our approach differs from existing simulation techniques in the graphics and mechanics literature both in the kinematic description---we represent the material frame by its angular deviation from the natural Bishop frame---as well as in the dynamical treatment---we treat the centerline as dynamic and the material frame as quasistatic. Additionally, we describe a manifold projection method for coupling rods to rigid-bodies and simultaneously enforcing rod inextensibility. The use of quasistatics and constraints provides an efficient treatment for stiff twisting and stretching modes; at the same time, we retain the dynamic bending of the centerline and accurately reproduce the coupling between bending and twisting modes. We validate the discrete rod model via quantitative buckling, stability, and coupled-mode experiments, and via qualitative knot-tying comparisons.

572 citations


"HairBrush for Immersive Data-Driven..." refers methods in this paper

  • ...In particular, we build the Bishop frame [5] for each strip, and use the parallel transport method to calculate the transformation at each sample point....


Frequently Asked Questions (14)
Q1. What are the contributions in "HairBrush for Immersive Data-Driven Hair Modeling"?

Existing automatic techniques lack the generality and flexibility to create rich hair variations, while manual authoring interfaces often require considerable artistic skills and efforts, especially for intricate 3D hair structures that can be difficult to navigate. The authors propose an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools. Their method combines the flexibility of manual authoring and the convenience of data-driven automation. Modelers, including novice users, can focus on the overall hairstyles and local hair deformations, as the system intelligently suggests the desired hair parts. The fitted hair models are visualized as interactive suggestions that the user can select, modify, or ignore. The authors conducted a user study to confirm that their system can significantly reduce manual labor while improving the output quality.

Investigating how to arrange hair geometries based on the texture to produce more realistic rendering would be interesting future work. Another direction is to allow fine-level control of the strip geometry. A further potential direction is to add efficient secondary animation authoring [46, 43] to their hair geometry modeling.

Since their system can be used for efficient and high-quality strip-based hair modeling, it has practical value for the scalable production of games and immersive content.

To popularize hair capture for end-users, existing methods offer various tradeoffs among setup, quality, and robustness, e.g. thermal imaging [17], capturing from multiple views [19] versus single view [9], or requiring different amounts and types of user inputs [11, 10, 42]. 

The authors introduce the first hair modeling tool that leverages the unprecedented authoring freedom in VR and provides intelligent online hairstyle suggestion and manual authoring assistance. 

Preparing a high-quality dataset with large variations in both geometry and texture still requires a huge amount of artistic effort, especially for curly hairs.

To make the surface normal less flat, their system adopts a U-shaped strip geometry (Figure 6a), where the user can control the cross curvature of the strip.

Therefore discarding older operations enables the algorithm to provide more accurate updates according to the latest user inputs. 

For AAA games, one hair model could easily take days for a professional artist to simply place the strips, as manipulating the mesh geometry (vertices, edges, and faces) could be extremely tedious in a 2D interface. 

The rationale behind the use of strip pairs as the basic feature is that they can capture more structural information than individual strips (as illustrated in Figure 7), yet achieve accuracy comparable to denser strip groupings (e.g., 3 or more) at less computational cost.

The user can control the hint transparency by moving the brush closer to or farther from the head and move the head for different views. 

The additional degrees of freedom would help smooth the outcome geometry while providing more capability in representing largely deformed structures. 

In order to handle an arbitrary number of strips, the authors aggregate all the partial features {F_i,j} via a max-pooling layer to obtain a global feature F of the hairstyle, similar to PointNet [30].

The scalp hair database D contains 30 different styles {Hi} with various hair length (long, middle and short), hairline (left, right, middle or none), and hair parts (bun and ponytail).