scispace - formally typeset
Author

Barkın Başarankut

Bio: Barkın Başarankut is an academic researcher from Bilkent University. The author has contributed to research in topics: Physically based animation & Key frame. The author has an h-index of 1 and has co-authored 1 publication receiving 3 citations.

Papers
Journal ArticleDOI
TL;DR: This paper describes a sketch-based tool with which a user can both create hair models with different styling parameters and produce animations of these created hair models using physically based and key-frame-based techniques.
Abstract: Physically based simulation of human hair is a well-studied and well-known problem. But a “pure” physically based representation of hair (and other animation elements) is not the only concern of animators, who also want to “control” the creation and animation phases of the content. This paper describes a sketch-based tool, with which a user can both create hair models with different styling parameters and produce animations of these created hair models using physically based and key-frame-based techniques. The model creation and animation production tasks are all performed with direct manipulation techniques in real time.
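The mix of physical simulation and key-frame control described above can be illustrated with a toy sketch. Everything below (the strand representation, the explicit-Euler step, and the fixed blending weight) is a hypothetical stand-in, not the paper's actual method:

```python
import numpy as np

def physics_step(points, velocities, dt=0.016, gravity=-9.8, damping=0.98):
    """One explicit-Euler step on a hair strand's control points (toy model)."""
    velocities = velocities * damping          # simple velocity damping
    velocities[:, 1] += gravity * dt           # gravity acts along the y axis
    return points + velocities * dt, velocities

def blend_with_keyframe(points, target, weight):
    """Pull the simulated strand toward a key-framed target pose."""
    return (1.0 - weight) * points + weight * target

# One strand with 5 control points, initially at rest.
points = np.zeros((5, 3))
velocities = np.zeros((5, 3))
# Hypothetical key-frame pose: the strand curls down and back.
target = np.array([[0.0, -0.1 * i, 0.2 * i] for i in range(5)])

points, velocities = physics_step(points, velocities)
points = blend_with_keyframe(points, target, weight=0.5)
```

The weight acts as the animator's control knob: 0 gives pure physics, 1 snaps the strand to the key frame, and intermediate values keep physical motion while steering toward the keyed pose.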

3 citations


Cited by
Proceedings ArticleDOI
17 Oct 2019
TL;DR: This work proposes an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools, and provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR).
Abstract: While hair is an essential component of virtual humans, it is also one of the most challenging digital assets to create. Existing automatic techniques lack the generality and flexibility to create rich hair variations, while manual authoring interfaces often require considerable artistic skills and efforts, especially for intricate 3D hair structures that can be difficult to navigate. We propose an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools. Modelers, including novice users, can focus on the overall hairstyles and local hair deformations, as our system intelligently suggests the desired hair parts. Our method combines the flexibility of manual authoring and the convenience of data-driven automation. Since hair contains intricate 3D structures such as buns, knots, and strands, it is inherently challenging to create using traditional 2D interfaces. Our system provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR). Users can draw high-level guide strips, from which our system predicts the most plausible hairstyles via a deep neural network trained from a professionally curated dataset. Each hairstyle in our dataset is composed of multiple variations, serving as blend-shapes to fit the user drawings via global blending and local deformation. The fitted hair models are visualized as interactive suggestions that the user can select, modify, or ignore. We conducted a user study to confirm that our system can significantly reduce manual labor while improving the output quality for modeling a variety of head and facial hairstyles that are challenging to create via existing techniques.
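The global-blending step the abstract mentions, fitting hairstyle variations as blend shapes to a user drawing, can be sketched as a least-squares problem. The data below is a synthetic stand-in; the actual system also involves a trained network and local deformation, which this sketch omits:

```python
import numpy as np

# Hypothetical data: 3 hairstyle variations ("blend shapes"), each a set of
# 4 guide-strip sample points in 3D, plus a user-drawn stroke to fit.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(3, 4, 3))                # (variations, points, xyz)
true_w = np.array([0.2, 0.5, 0.3])
user_stroke = np.tensordot(true_w, shapes, axes=1) # synthetic "drawing"

# Global blending: least-squares weights so the blend matches the stroke.
A = shapes.reshape(3, -1).T                        # (12, 3) design matrix
b = user_stroke.ravel()
weights, *_ = np.linalg.lstsq(A, b, rcond=None)

fitted = np.tensordot(weights, shapes, axes=1)     # blended hairstyle
```

Because the stroke here is generated from the shapes themselves, the solver recovers the blending weights exactly; with a real hand-drawn stroke the residual of the fit would drive the subsequent local-deformation stage.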

15 citations

Journal ArticleDOI
Yongtang Bao, Yue Qi
TL;DR: This paper surveys the state of the art in the major topics of image-based techniques for hair modeling, including single-view hair modeling, static hair modeling from multiple images, video-based dynamic hair modeling, and the editing and reusing of hair modeling results.
Abstract: With the tremendous performance increase of today’s graphics technologies, visual details of digital humans in games, online virtual worlds, and virtual reality applications are becoming significantly more demanding. Hair is a vital component of a person’s identity and can provide strong cues about age, background, and even personality. More and more researchers focus on hair modeling in the fields of computer graphics and virtual reality. Traditional methods rely on physics-based simulation driven by hand-set parameters; the computation is expensive, and the construction process is non-intuitive and difficult to control. Conversely, image-based methods have the advantages of fast modeling and high fidelity. This paper surveys the state of the art in the major topics of image-based techniques for hair modeling, including single-view hair modeling, static hair modeling from multiple images, video-based dynamic hair modeling, and the editing and reusing of hair modeling results. We first summarize the single-view approaches, which can be divided into orientation-field-based and data-driven methods. The static methods from multiple images and the dynamic methods are then reviewed in Sections III and IV. In Section V, we also review the editing and reusing of hair modeling results. The future development trends and challenges of image-based methods are discussed in the end.
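As a minimal illustration of the orientation-field idea the survey classifies, per-pixel hair orientation can be estimated from an image's structure tensor. This is a standard building block rather than any specific surveyed method, and the sketch omits the spatial smoothing and confidence weighting that real pipelines apply:

```python
import numpy as np

def strand_orientation(img):
    """Per-pixel strand orientation in [0, pi), estimated as the direction
    perpendicular to the image gradient via the 2x2 structure tensor."""
    gy, gx = np.gradient(img.astype(float))
    jxx, jyy, jxy = gx * gx, gy * gy, gx * gy
    grad_theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)  # gradient direction
    return np.mod(grad_theta + 0.5 * np.pi, np.pi)       # strands run across it

# Synthetic "hair" image: horizontal stripes, so the strands should read as
# horizontal (orientation ~ 0) wherever the gradient is well defined.
stripes = np.tile(np.sin(np.linspace(0.0, 6.0 * np.pi, 32))[:, None], (1, 32))
theta = strand_orientation(stripes)
```

Single-view methods of this family lift such a 2D orientation map into a 3D field over the hair volume and then trace strands through it.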

11 citations

Journal ArticleDOI
Yongtang Bao, Yue Qi
TL;DR: This paper proposes a novel approach to construct a realistic 3D hair model from a hybrid orientation field and demonstrates that this approach can preserve structural details of 3D hair models.
Abstract: Image-based hair modeling methods enable artists to produce abundant 3D hair models. However, the reconstructed models often fail to preserve structural details such as uniformly distributed hair roots, interior strands that grow in line with the real distribution, and exterior strands that match the input images. In this paper, we propose a novel approach that constructs a realistic 3D hair model from a hybrid orientation field generated from four component fields. The first field makes the surface structure of the hairstyle match the input images as closely as possible. The second keeps the hair roots and interior strands consistent with the actual distribution. The third confines traced strands to the hair volume, and the fourth keeps the growth direction at each point of a strand compatible with its predecessor. To generate these fields, we construct high-confidence 3D strand segments from the orientation field of the point cloud and from 2D traced strands. Hair strands then grow automatically from uniformly distributed hair roots according to the hybrid orientation field, and an energy-minimization strategy optimizes the entire 3D hair model. We demonstrate that our approach preserves the structural details of 3D hair models.
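The strand-growing idea, stepping from a root along a blended orientation field while keeping each step compatible with its predecessor, can be sketched as follows. The two component fields and the blend weights below are toy stand-ins for the paper's four fields:

```python
import numpy as np

def blend_fields(fields, weights, p):
    """Weighted blend of several direction fields evaluated at point p."""
    d = sum(w * f(p) for f, w in zip(fields, weights))
    n = np.linalg.norm(d)
    return d / n if n > 0 else d

def grow_strand(root, fields, weights, steps=50, h=0.05):
    """Grow a strand from a root by stepping along the blended field,
    keeping each step's direction compatible with its predecessor."""
    pts = [np.asarray(root, float)]
    prev_dir = np.array([0.0, -1.0, 0.0])   # initial growth guess: downward
    for _ in range(steps):
        d = blend_fields(fields, weights, pts[-1])
        if np.dot(d, prev_dir) < 0:         # resolve the orientation ambiguity
            d = -d
        pts.append(pts[-1] + h * d)
        prev_dir = d
    return np.array(pts)

# Two toy fields: one pulls straight down, one swirls around the y axis.
down  = lambda p: np.array([0.0, -1.0, 0.0])
swirl = lambda p: np.array([-p[2], -0.2, p[0]])
strand = grow_strand([1.0, 2.0, 0.0], [down, swirl], [0.7, 0.3])
```

The predecessor check mirrors the paper's fourth field: orientation fields are direction-ambiguous (a strand can be traced either way along them), so each step is sign-flipped to stay consistent with the previous growth direction.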

10 citations