Author

Melanie Montagnol

Bio: Melanie Montagnol is an academic researcher from the University of Geneva. The author has contributed to research on the topics Haptic technology and Rendering (computer graphics), has an h-index of 4, and has co-authored 4 publications receiving 28 citations.

Papers
Book ChapterDOI
26 Jun 2006
TL;DR: Various technical advancements and achievements that have been made in formulating key techniques to handle the different challenging issues involved in simulation of hair at interactive rates are discussed.
Abstract: Despite tremendous work in hair simulation, a unified framework for creating realistically simulated hairstyles at interactive rates is not yet available; the main reason is that the complex dynamic and optical behavior of hair is computationally expensive to simulate. To build such a framework, it is essential to find optimized solutions, especially for the various physics-based tasks that form the main bottleneck in the simulation. In this paper, we discuss the technical advancements and achievements that have been made in formulating key techniques to handle the different challenging issues involved in simulating hair at interactive rates. Effort has been put into all three modules of the hair simulation: hair shape modeling, hair dynamics, and hair rendering.

9 citations

Journal ArticleDOI
TL;DR: The visuo-haptic hair interaction framework consists of two layers which handle the response to the user’s interaction at a local level and at a global level, and can be used to efficiently address the specific requirements of haptics and vision.
Abstract: Over the last fifteen years, research on hair simulation has made great advances in the domains of modeling, animation and rendering, and is now moving towards more innovative interaction modalities. The combination of visual and haptic interaction within a virtual hairstyling simulation framework represents an important concept evolving in this direction. Our visuo-haptic hair interaction framework consists of two layers which handle the response to the user’s interaction at a local level (around the contact area), and at a global level (on the full hairstyle). Two distinct simulation models compute individual and collective hair behavior. Our multilayered approach can be used to efficiently address the specific requirements of haptics and vision. Haptic interaction with both models has been tested with virtual hairstyling tools.
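As a rough illustration of the two-layer idea described in this abstract, the sketch below separates a local model that reacts around the contact area at haptic rates from a global model that updates the full hairstyle at visual rates. The class names (LocalWispModel, GlobalHairModel), the placeholder forces, and the 16:1 update ratio are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of a two-layer visuo-haptic interaction loop (illustrative only).
import numpy as np

class LocalWispModel:
    """Individual hair behaviour around the contact area (haptic-rate layer)."""
    def __init__(self, n_wisps=32):
        self.positions = np.zeros((n_wisps, 3))

    def respond(self, tool_position, tool_force):
        # Push nearby wisps away from the tool and return a reaction force
        # for the haptic device (placeholder spring-like response).
        offsets = self.positions - tool_position
        dists = np.linalg.norm(offsets, axis=1) + 1e-6
        nearby = dists < 0.05
        self.positions[nearby] += 0.001 * offsets[nearby] / dists[nearby, None]
        return -0.1 * tool_force

class GlobalHairModel:
    """Collective hair behaviour on the full hairstyle (visual-rate layer)."""
    def update(self, local_positions, dt):
        pass  # propagate the local deformation to the whole hairstyle here

def simulation_step(local, global_model, tool_pos, tool_force, dt_visual=1 / 60):
    # Haptics typically needs ~1 kHz updates, visuals ~60 Hz, so the local
    # layer runs several times per visual frame.
    for _ in range(16):
        feedback = local.respond(tool_pos, tool_force)
    global_model.update(local.positions, dt_visual)
    return feedback

feedback = simulation_step(LocalWispModel(), GlobalHairModel(),
                           np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(feedback)
```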

7 citations

Journal ArticleDOI
TL;DR: Presents a unified framework that combines the key techniques developed for specific tasks in hair simulation to realize a 'virtual hair-dressing room' that is simple to use yet effective for generating hairstyles quickly.
Abstract: Hair design is one of the crucial components of hair simulation. The efficiency of hair modeling is largely determined by the interactivity and ease of use of the design tools within an application. This paper presents a unified framework that combines the various key techniques developed for specific tasks in hair simulation to realize the ultimate goal of a 'virtual hair-dressing room' that is simple to use yet quite effective for generating hairstyles quickly. Successful attempts have been made to handle the different challenging issues involved in simulating hair at interactive rates, and effort has been put into developing methodologies for hair shape modeling, hair dynamics and hair rendering. A user-friendly interface controlled by a haptic device facilitates the designer's interaction with the hairstyles. Furthermore, the designer's visualization is enhanced by real-time animation and interactive rendering. Animation is done using a modified Free Form Deformation (FFD) technique that has been effectively adapted to various hairstyles. Hair rendering is performed using an efficient scattering-based technique that displays hair with its various optical effects.
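To make the FFD-based animation step more concrete, here is a minimal sketch of lattice-based free-form deformation applied to hair guide points using trilinear interpolation in a single deformed cell. It only illustrates the general FFD idea; the paper's modified FFD and its adaptation to hairstyles are not reproduced, and all names (trilinear_ffd, the 2x2x2 lattice) are illustrative.

```python
# Trilinear FFD sketch: deform hair guide points embedded in a control lattice.
import numpy as np

def trilinear_ffd(points, box_min, box_max, lattice):
    """Deform points using a single 2x2x2 control lattice.

    points  : (N, 3) hair guide vertices inside the box.
    lattice : (2, 2, 2, 3) displaced control-point positions.
    """
    local = (points - box_min) / (box_max - box_min)   # map to [0, 1]^3
    s, t, u = local[:, 0:1], local[:, 1:2], local[:, 2:3]
    out = np.zeros_like(points)
    for i in (0, 1):
        ws = s if i else 1 - s
        for j in (0, 1):
            wt = t if j else 1 - t
            for k in (0, 1):
                wu = u if k else 1 - u
                out += ws * wt * wu * lattice[i, j, k]
    return out

# Usage: shear the top of the lattice and deform a straight vertical strand.
box_min, box_max = np.zeros(3), np.ones(3)
lattice = np.array([[[(i, j, k) for k in (0, 1)] for j in (0, 1)]
                    for i in (0, 1)], dtype=float)
lattice[:, :, 1, 0] += 0.3                      # shift x of the top control points
strand = np.column_stack([np.full(10, 0.5), np.full(10, 0.5),
                          np.linspace(0, 1, 10)])
print(trilinear_ffd(strand, box_min, box_max, lattice))
```

Deforming the coarse lattice moves every embedded guide point coherently, which is why FFD-style techniques are attractive for animating whole wisps at interactive rates.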

7 citations

Proceedings ArticleDOI
24 Oct 2007
TL;DR: This paper focuses on adaptive visuo-haptic simulation of hair using force-feedback haptic devices and proposes an easy-to-use interactive hair modelling interface, exploring ways of integrating visual hair simulation and haptics into one multirate, multilayer, multithread application that allows intuitive interactive hair modeling.
Abstract: In this paper, we focus on adaptive visuo-haptic simulation of hair using force-feedback haptic devices, and propose an easy-to-use interactive hair modelling interface. The underlying idea is to explore ways of integrating visual hair simulation and haptics into one multirate, multilayer, multithread application allowing for intuitive interactive hair modeling. The user is allowed to interact with the simulated hair on a virtual human's head through a haptic interface. By adding the sense of touch to the proposed system, we enter the domain of multimodal perception and stimulate both the vision and touch of the user. This allows the user to see a realistic hair simulation performing at interactive rates and to easily use virtual tools to model the hairstyle. The proposed research tackles many significant challenges in the domains of multimodal simulation, collision detection, hair simulation and haptic rendering.
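The multirate aspect can be sketched as two loops sharing state: a fast loop serving the haptic device and a slower loop updating and drawing the full hairstyle. The class, the rates (roughly 1 kHz for haptics, 60 Hz for visuals), and the placeholder force law are assumptions for illustration, not the application's actual architecture.

```python
# Sketch of a multirate, multithread visuo-haptic loop (illustrative only).
import threading, time

class SharedHairState:
    def __init__(self):
        self.lock = threading.Lock()
        self.tool_position = (0.1, 0.0, 0.0)
        self.feedback_force = (0.0, 0.0, 0.0)

def haptic_loop(state, stop, rate_hz=1000):
    # Force feedback typically needs ~1 kHz updates for stable rendering.
    while not stop.is_set():
        with state.lock:
            # Local collision response around the tool would go here.
            state.feedback_force = tuple(-0.1 * c for c in state.tool_position)
        time.sleep(1.0 / rate_hz)

def visual_loop(state, stop, rate_hz=60):
    # The full hairstyle simulation and rendering run at display rate.
    while not stop.is_set():
        with state.lock:
            force = state.feedback_force
        # ... update and draw the global hair model using `force` ...
        time.sleep(1.0 / rate_hz)

state, stop = SharedHairState(), threading.Event()
threads = [threading.Thread(target=haptic_loop, args=(state, stop)),
           threading.Thread(target=visual_loop, args=(state, stop))]
for t in threads: t.start()
time.sleep(0.5); stop.set()
for t in threads: t.join()
print(state.feedback_force)
```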

5 citations


Cited by
10 Jun 2005
TL;DR: This work focuses on the design of a new approximation algorithm that reduces the cost of function evaluations while attaining a higher order than the classical ERK methods.
Abstract: During the last decade, great progress has been achieved in the analysis and numerical treatment of Initial Value Problems (IVPs) in Differential Algebraic Equations (DAEs) and Ordinary Differential Equations (ODEs). In spite of the rich variety of results available in the literature, there are still many specific problems that require special attention. Two of these, which are considered in this work, are the optimization of the order of accuracy and the reduction of the cost of function evaluations of Explicit Runge-Kutta (ERK) methods. Traditionally, the maximum attainable order p of an s-stage ERK method for advancing the solution of an IVP satisfies p(s) = s only for s ≤ 4. In 1999, Goeken presented an s-stage ERK method of order p(s) = s + 1, s > 2. This work focuses on the design of a new approximation algorithm that reduces the cost of function evaluations and yet attains an order higher than those of Goeken, of Jonhson [94], and of the classical ERK methods. The order p of the new scheme, called Multiderivative Explicit Runge-Kutta (MERK) methods, satisfies p(s) > s for s ≥ 2. The stability, convergence and implementation for the optimization of IVPs in DAE and ODE systems are also considered.
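For context on the function-evaluation cost this abstract is concerned with, here is a worked example of a classical four-stage explicit Runge-Kutta (RK4) step: the classical method spends s = 4 evaluations of f per step to reach order 4. This is standard RK4, not the MERK scheme proposed in the paper.

```python
# Classical RK4 step: four function evaluations per step, order 4.
def rk4_step(f, t, y, h):
    k1 = f(t, y)                       # evaluation 1
    k2 = f(t + h / 2, y + h / 2 * k1)  # evaluation 2
    k3 = f(t + h / 2, y + h / 2 * k2)  # evaluation 3
    k4 = f(t + h, y + h * k3)          # evaluation 4
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Usage: integrate y' = -y, y(0) = 1, and compare with exp(-1) at t = 1.
import math
y, t, h = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(y, math.exp(-1.0))
```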

665 citations

Journal ArticleDOI
01 Aug 2008
TL;DR: In this paper, a new altitude spring model is proposed to prevent collapse in the simulation of volumetric tetrahedra, and it is also applicable both to bending in cloth and torsion in hair.
Abstract: Our goal is to simulate the full hair geometry, consisting of approximately one hundred thousand hairs on a typical human head. This will require scalable methods that can simulate every hair as opposed to only a few guide hairs. Novel to this approach is that the individual hair/hair interactions can be modeled with physical parameters (friction, static attraction, etc.) at the scale of a single hair as opposed to clumped or continuum interactions. In this vein, we first propose a new altitude spring model for preventing collapse in the simulation of volumetric tetrahedra, and we show that it is also applicable both to bending in cloth and torsion in hair. We demonstrate that this new torsion model for hair behaves in a fashion similar to more sophisticated models with significantly reduced computational cost. For added efficiency, we introduce a semi-implicit discretization of standard springs that makes them truly linear in multiple spatial dimensions and thus unconditionally stable without requiring Newton-Raphson iteration. We also simulate complex hair/hair interactions including sticking and clumping behavior, collisions with objects (e.g. head and shoulders) and self-collisions. Notably, in line with our goal to simulate the full head of hair, we do not generate any new hairs at render time.
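The altitude-spring concept can be sketched on a single triangle: a spring acts along the altitude from one vertex to the line through the opposite edge, resisting collapse when the element flattens. This is a simplified reduction for illustration, not the paper's full tetrahedral formulation or its semi-implicit discretization; the function name and constants are assumptions.

```python
# Simplified altitude-spring force on one triangle vertex (illustrative only).
import numpy as np

def altitude_spring_force(p, e0, e1, rest_altitude, stiffness):
    """Force on vertex p from an altitude spring to the edge (e0, e1)."""
    edge = e1 - e0
    t = np.dot(p - e0, edge) / np.dot(edge, edge)
    foot = e0 + t * edge                 # closest point on the edge's line
    altitude = p - foot
    length = np.linalg.norm(altitude)
    if length < 1e-9:
        return np.zeros(3)               # degenerate element: direction undefined
    direction = altitude / length
    # Push/pull along the altitude toward the rest height, resisting collapse.
    return -stiffness * (length - rest_altitude) * direction

# Usage: a nearly collapsed triangle gets pushed back toward its rest shape.
p  = np.array([0.5, 0.01, 0.0])          # vertex almost on the opposite edge
e0 = np.array([0.0, 0.0, 0.0])
e1 = np.array([1.0, 0.0, 0.0])
print(altitude_spring_force(p, e0, e1, rest_altitude=0.5, stiffness=100.0))
```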

224 citations

Journal ArticleDOI
27 Jul 2009
TL;DR: This paper presents a hybrid Eulerian/Lagrangian approach to handling both self and body collisions with hair efficiently while still maintaining detail, which has the efficiency of continuum/guide based hair models with the high detail of Lagrangian self-collision approaches.
Abstract: Hair simulation remains one of the most challenging aspects of creating virtual characters. Most research focuses on handling the massive geometric complexity of hundreds of thousands of interacting hairs, accomplished either by brute-force simulation or by reducing degrees of freedom with guide hairs. This paper presents a hybrid Eulerian/Lagrangian approach to handling both self and body collisions with hair efficiently while still maintaining detail. Bulk interactions and hair volume preservation are handled efficiently and effectively with a FLIP-based fluid solver, while intricate hair-hair interaction is handled with Lagrangian self-collisions. Thus the method has the efficiency of continuum/guide-based hair models with the high detail of Lagrangian self-collision approaches.
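The FLIP transfer at the heart of such hybrid solvers can be sketched in one dimension: splat particle velocities to a grid, modify them there (the bulk response), then add the grid-velocity change back to the particles. The grid force here is a placeholder and the function is an illustrative assumption, not the paper's solver.

```python
# Minimal 1D FLIP particle/grid velocity transfer (illustrative only).
import numpy as np

def flip_step(x, v, n_cells, dx, grid_force, dt):
    # 1) Particle-to-grid transfer with linear weights.
    grid_v = np.zeros(n_cells)
    grid_w = np.zeros(n_cells)
    i = np.clip((x / dx).astype(int), 0, n_cells - 2)
    f = x / dx - i                       # fractional position inside the cell
    for idx, w in ((i, 1 - f), (i + 1, f)):
        np.add.at(grid_v, idx, w * v)
        np.add.at(grid_w, idx, w)
    grid_v_old = np.divide(grid_v, grid_w, out=np.zeros_like(grid_v),
                           where=grid_w > 0)
    # 2) Update velocities on the grid (bulk forces, volume preservation, ...).
    grid_v_new = grid_v_old + dt * grid_force(grid_v_old)
    # 3) FLIP: interpolate the *change* back to the particles.
    delta = grid_v_new - grid_v_old
    return v + (1 - f) * delta[i] + f * delta[i + 1]

# Usage: 100 particles on [0, 1), an 8-node grid, a constant body force.
x = np.random.rand(100)
v = np.zeros(100)
v = flip_step(x, v, n_cells=8, dx=1.0 / 7,
              grid_force=lambda g: -9.8 + 0 * g, dt=0.01)
print(v[:5])
```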

65 citations

Journal ArticleDOI
TL;DR: In this article, the authors used Vroom's expectancy theory of motivation to understand consumer motivation to use Artificial Intelligence (AI) tools such as chatbots, voice assistants and augmented reality in shopping.
Abstract: The purpose of this paper is to understand the motivation of young consumers to use artificial intelligence (AI) tools such as chatbots, voice assistants and augmented reality in shopping, by generating Vroom's expectancy theory of motivation using a grounded theory approach. Data were initially collected through participant interviews using theoretical sampling, then analyzed and coded using a three-step process of open coding, axial coding and selective coding; the categories created during coding were integrated to generate Vroom's expectancy theory of motivation. The findings indicate that Vroom's expectancy theory can explain the motivation of young consumers to use AI tools as an aid in making shopping decisions. The motivation may be intrinsic, extrinsic or forced-choice. Expectancy represents the ease of using the tools, instrumentality represents the competence of the tools in performing the desired tasks, while valence represents satisfaction, rewarding experience and trust in using the tools. The findings are based on a grounded theory approach, which is inductive; alternate research methodologies, both inductive and deductive, need to be employed to strengthen external validity and generalize the results. The study is limited to the shopping motives of young consumers in India, and no comparison with other consumer motivation studies has been made, so no claim is made regarding the advantage of Vroom's theory over other motivational theories. The study has strong implications for retailers in developing countries, which are seen as an emerging market for retail and have introduced AI tools in recent years; Vroom's expectancy theory will help retailers understand consumer motivation in using AI tools for shopping. The originality of the work lies in generating Vroom's expectancy theory to understand consumer motivation to use AI tools in shopping through the grounded theory approach.

55 citations

Proceedings ArticleDOI
19 Jul 2013
TL;DR: This work introduces a method for stably computing a frame along the hair curve, essential for stable simulation of curly hair, and addresses performance concerns often associated with handling hair-hair contact interactions by efficiently parallelizing the simulation.
Abstract: Artistic simulation of hair presents many challenges, ranging from incorporating artistic control to dealing with extreme motions of characters. Additionally, in a production environment, the simulation needs to be fast and results need to be usable "out of the box" (without extensive parameter modifications) in order to produce content efficiently. These challenges are only increased when simulating curly, stylized hair. We present a method for stably simulating stylized curly hair that addresses these artistic needs and performance demands. To satisfy the artistic requirement of maintaining the curl's helical shape during motion, we propose a hair model based upon an extensible elastic rod. We introduce a method for stably computing a frame along the hair curve, essential for stable simulation of curly hair. Our hair model uses a spring for controlling the bending of the curl and another for maintaining the helical shape during extension. We also address performance concerns often associated with handling hair-hair contact interactions by efficiently parallelizing the simulation. To do so, we present a technique for pruning both hair-hair contact pairs and hair particles. Our method has been used on two full-length feature films and has proven to be robust and stable over a wide range of animated motion and on a variety of hair styles, from straight to wavy to curly. It has proven invaluable in providing controllable, stable and efficient simulation, allowing our artists to achieve their desired performance even when facing strict scheduling demands.
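One common way to propagate a stable frame along a hair curve is parallel transport: rotate the previous frame by the rotation that maps the previous tangent onto the current one. The paper presents its own frame computation; the sketch below only illustrates the kind of quantity involved, and all function names are illustrative.

```python
# Parallel-transport frames along a polyline hair curve (illustrative only).
import numpy as np

def rotate_about_axis(v, axis, angle):
    """Rodrigues' rotation of vector v about a unit axis."""
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1 - np.cos(angle)))

def parallel_transport_frames(points):
    tangents = np.diff(points, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    # Pick any normal perpendicular to the first tangent.
    ref = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(ref, tangents[0])) > 0.9:
        ref = np.array([1.0, 0.0, 0.0])
    normal = np.cross(tangents[0], ref)
    normal /= np.linalg.norm(normal)
    frames = [(tangents[0], normal)]
    for t_prev, t_cur in zip(tangents[:-1], tangents[1:]):
        axis = np.cross(t_prev, t_cur)
        s = np.linalg.norm(axis)
        if s > 1e-9:
            angle = np.arctan2(s, np.dot(t_prev, t_cur))
            normal = rotate_about_axis(normal, axis / s, angle)
        frames.append((t_cur, normal))
    return frames

# Usage: frames along a helix, the shape a curly-hair model must preserve.
s = np.linspace(0, 4 * np.pi, 50)
helix = np.column_stack([np.cos(s), np.sin(s), 0.1 * s])
for tangent, normal in parallel_transport_frames(helix)[:3]:
    print(tangent, normal)
```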

45 citations