
Showing papers in "Computing and Visualization in Science in 2011"


Journal ArticleDOI
TL;DR: A novel variance reduction technique for the standard Monte Carlo method, called the multilevel Monte Carlo method, is described, and its superiority is demonstrated numerically.
Abstract: We consider the numerical solution of elliptic partial differential equations with random coefficients. Such problems arise, for example, in uncertainty quantification for groundwater flow. We describe a novel variance reduction technique for the standard Monte Carlo method, called the multilevel Monte Carlo method, and demonstrate its superiority numerically. The asymptotic cost of solving the stochastic problem with the multilevel method is always significantly lower than that of the standard method and, in certain circumstances, grows only proportionally to the cost of solving the deterministic problem. Numerical calculations demonstrating the effectiveness of the method for one- and two-dimensional model problems arising in groundwater flow are presented.

571 citations
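The core of the multilevel idea is the telescoping sum E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}], where coupled samples make the correction terms cheap to estimate. A minimal Python sketch follows; the "level-l solver" here (an integrand rounded to a grid of spacing 2^-l) is a stand-in of our own, not the paper's PDE solver, and the sample counts are illustrative.

```python
import numpy as np

def mlmc(sample_level, L, N, seed=0):
    """Multilevel Monte Carlo estimator of E[P_L] via the telescoping sum
    E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}]."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for l in range(L + 1):
        u = rng.random(N[l])              # the SAME samples drive both levels
        fine = sample_level(l, u)
        coarse = sample_level(l - 1, u) if l > 0 else np.zeros_like(u)
        est += np.mean(fine - coarse)     # coupled corrections have small variance
    return est

def toy_level(l, u):
    # Stand-in "level-l solver": evaluates u^2 rounded to a grid of spacing
    # 2^-l, mimicking a PDE solve whose bias shrinks with the mesh width.
    h = 2.0 ** (-l)
    return (np.floor(u / h) * h) ** 2

# Many cheap samples on coarse levels, few expensive ones on fine levels.
estimate = mlmc(toy_level, L=6, N=[40000, 10000, 4000, 1600, 600, 300, 150])
```

Because most samples are drawn on the cheap coarse levels, the total cost is far below that of sampling the finest level directly, which is exactly the effect the paper quantifies for PDEs with random coefficients.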


Journal ArticleDOI
TL;DR: The calculus of variations in tensor representations with a special focus on tensor networks is discussed and applied to functionals of practical interest and the rate of convergence in numerical tests is demonstrated.
Abstract: We discuss the calculus of variations in tensor representations with a special focus on tensor networks and apply it to functionals of practical interest. The survey provides all necessary ingredients for applying minimization methods in a general setting. The important cases of target functionals which are linear and quadratic with respect to the tensor product are discussed, and combinations of these functionals are presented in detail. As an example, we consider the representation rank compression in tensor networks. For the numerical treatment, we use the nonlinear block Gauss–Seidel method. We demonstrate the rate of convergence in numerical tests.

88 citations
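For the simplest tensor format, a rank-one representation a ⊗ b ⊗ c, a nonlinear block Gauss–Seidel iteration of the kind discussed above reduces to alternating least squares: each factor block is updated in closed form while the others are frozen. The sketch below is our own illustration of that principle (the paper treats far more general tensor networks and functionals).

```python
import numpy as np

def rank1_als(T, iters=30, seed=1):
    """Nonlinear block Gauss-Seidel (alternating least squares) for the best
    rank-one fit a (x) b (x) c of a 3d tensor: each factor is updated in
    closed form while the other two blocks are frozen."""
    rng = np.random.default_rng(seed)
    b, c = rng.random(T.shape[1]), rng.random(T.shape[2])
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# On an exactly rank-one tensor the iteration recovers the factors.
T = np.einsum('i,j,k->ijk', [1.0, 2.0], [3.0, 4.0, 5.0], [1.0, 1.0, 2.0])
a, b, c = rank1_als(T)
residual = np.linalg.norm(T - np.einsum('i,j,k->ijk', a, b, c))
```

Each block update is a linear least-squares problem with a closed-form solution, which is what makes the block Gauss–Seidel structure attractive for quadratic target functionals.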


Journal ArticleDOI
TL;DR: It is concluded that one can predict the impediment to diffusion of many molecules of practical importance and also use studies of the diffusion of selected molecular probes to reveal the barrier properties of the ECS.
Abstract: The extracellular space (ECS) consists of the narrow channels between brain cells together with their geometrical configuration and contents. Despite being only 20–60 nm in width, the ECS typically occupies 20% of the brain volume. Numerous experiments over the last 50 years have established that molecules moving through the ECS obey the laws of diffusion but with an effective diffusion coefficient reduced by a factor of about 2.6 compared to free diffusion. This review considers the origins of the diffusion barrier arising from the ECS and its properties. The paper presents a brief overview of software for implementing two point-source paradigms for measurements of localized diffusion properties: the real-time iontophoresis or pressure method for small ions and the integrative optical imaging method for macromolecules. Selected results are presented. This is followed by a discussion of the application of the MCell Monte Carlo simulation program to determining the importance of geometrical constraints, especially dead-space microdomains, and the possible role of interaction with the extracellular matrix. It is concluded that we can predict the impediment to diffusion of many molecules of practical importance and also use studies of the diffusion of selected molecular probes to reveal the barrier properties of the ECS.

83 citations
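The roughly 2.6-fold hindrance quoted above can be illustrated with a toy Monte Carlo model in the spirit of the MCell simulations: random walkers on a lattice with randomly blocked sites spread measurably more slowly than free walkers. All parameters below are illustrative and not fitted to brain tissue.

```python
import numpy as np

def msd_ratio(p_block, steps=300, walkers=3000, size=201, seed=1):
    """Random walk on a 2D lattice with randomly blocked sites -- a crude
    stand-in for diffusion hindered by the tortuous extracellular space.
    Returns <r^2>/steps, i.e. D_eff/D_free (a free walk has <r^2> = steps)."""
    rng = np.random.default_rng(seed)
    blocked = rng.random((size, size)) < p_block
    mid = size // 2
    blocked[mid, mid] = False                 # keep the release site open
    pos = np.full((walkers, 2), mid)
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    for _ in range(steps):
        trial = pos + moves[rng.integers(4, size=walkers)]
        ok = ~blocked[trial[:, 0] % size, trial[:, 1] % size]
        pos[ok] = trial[ok]                   # moves into obstacles are rejected
    return ((pos - mid) ** 2).sum(axis=1).mean() / steps

free = msd_ratio(p_block=0.0)      # ~1: unobstructed diffusion
hindered = msd_ratio(p_block=0.3)  # < 1: obstacles slow the spread
```

The ratio of the two mean-square displacements plays the role of D*/D; dead-space microdomains, as discussed in the review, would reduce it further by transiently trapping walkers.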


Journal ArticleDOI
TL;DR: This work investigates topology optimization based on the solid isotropic material with penalization approach on compute unified device architecture enabled graphics cards in three dimensions and finds the GPU code to be extremely efficient, being faster than a 48-core shared-memory CPU system.
Abstract: We investigate topology optimization based on the solid isotropic material with penalization approach on compute unified device architecture enabled graphics cards in three dimensions. Linear elasticity is solved entirely on the GPU by a matrix-free conjugate gradient method using finite elements. Due to the unique requirements of the single instruction, multiple data stream processors, special attention is given to the procedural generation of matrix–vector products entirely on the graphics card. The GPU code is found to be extremely efficient, being faster than a 48-core shared-memory CPU system. CPU and GPU implementations show different performance bottlenecks. The sources are available at http://www.mathematik.uni-trier.de/~schmidt/gputop.

73 citations
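The key ingredient, a conjugate gradient solver that only ever calls a routine applying the stiffness operator, can be sketched independently of the GPU details. Below is a serial Python analogue with a 1D Laplace stencil standing in for the elasticity operator; the 3D elasticity assembly and CUDA kernels of the paper are well beyond this sketch.

```python
import numpy as np

def cg_matrix_free(apply_A, b, tol=1e-8, maxit=2000):
    """Conjugate gradients where the operator is only available as a
    function applying A to a vector -- no matrix is ever stored."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def laplace_1d(u):
    # On-the-fly stencil (-u[i-1] + 2 u[i] - u[i+1]) / h^2, zero Dirichlet ends.
    n = u.size
    v = 2.0 * u.copy()
    v[:-1] -= u[1:]
    v[1:] -= u[:-1]
    return v * (n + 1) ** 2

# -u'' = pi^2 sin(pi x) on (0, 1) has the exact solution u(x) = sin(pi x).
n = 100
xs = np.linspace(1.0 / (n + 1), n / (n + 1.0), n)
u = cg_matrix_free(laplace_1d, np.pi ** 2 * np.sin(np.pi * xs))
```

Since CG touches the operator only through `apply_A`, the same structure maps naturally onto a GPU kernel that regenerates the stencil procedurally instead of reading a stored sparse matrix.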


Journal ArticleDOI
TL;DR: This article investigates an application to pre-surgical planning of a total hip replacement, where it is desirable to select an optimal implant for a specific patient, and proposes methods by which the FCM can be made computationally efficient enough for patient-specific, interactive bone simulations.
Abstract: Numerous numerical methods have been developed in an effort to accurately predict stresses in bones. The largest group are variants of the h-version of the finite element method (h-FEM), where low-order Ansatz functions are used. By contrast, we investigate a combination of high-order FEM and a fictitious domain approach, the finite cell method (FCM). While the FCM has been verified and validated in previous publications, this article proposes methods by which the FCM can be made computationally efficient to the extent that it can be used for patient-specific, interactive bone simulations. This approach is called computational steering and allows input parameters such as the position of an implant, materials, or loads to be changed with an almost instantaneous change in the output (stress lines, deformations). This direct feedback gives users an immediate impression of the impact of their actions to an extent which is otherwise hard to obtain by the use of classical non-interactive computations. Specifically, we investigate an application to pre-surgical planning of a total hip replacement, where it is desirable to select an optimal implant for a specific patient. Herein, optimal is meant in the sense that the expected post-operative stress distribution in the bone closely resembles that before the operation.

48 citations


Journal ArticleDOI
TL;DR: For elliptic distributed control problems it is shown that the convergence rates of multigrid methods with collective point smoothers are bounded independently of the grid size and the regularization parameter.
Abstract: In this paper we consider multigrid methods for solving saddle point problems. The choice of an appropriate smoothing strategy is a key issue in this case. Here we focus on the widely used class of collective point smoothers. These methods are constructed by a point-wise grouping of the unknowns leading to, e.g., collective Richardson, Jacobi or Gauss-Seidel relaxation methods. Their smoothing properties are well-understood for scalar problems in the symmetric and positive definite case. In this work the analysis of these methods is extended to a special class of saddle point problems, namely to the optimality system of optimal control problems. For elliptic distributed control problems we show that the convergence rates of multigrid methods with collective point smoothers are bounded independently of the grid size and the regularization (or cost) parameter.

45 citations
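A collective point smoother can be sketched on a simplified 1D model: eliminating the control from min ½‖y − y_d‖² + (ν/2)‖u‖² subject to Ly = u yields a saddle point system of the form [L, −(1/ν)I; I, L][y; p] = [0; y_d], and the smoother updates the pair (y_i, p_i) at each grid point by inverting the local 2×2 block. The derivation, signs, and scaling below are a schematic model of our own, not the paper's setting; notably, the contraction rate of this toy iteration is bounded independently of ν, echoing the robustness result above.

```python
import numpy as np

def collective_jacobi(n=20, nu=0.01, sweeps=60):
    """Collective Jacobi smoothing for the optimality system of a 1D model
    control problem:  [L, -(1/nu) I; I, L] [y; p] = [0; y_d].
    The unknowns (y_i, p_i) at grid point i are updated collectively by
    inverting the local 2x2 diagonal block."""
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    I = np.eye(n)
    M = np.block([[L, -(1.0 / nu) * I], [I, L]])
    rhs = np.concatenate([np.zeros(n),
                          np.sin(np.pi * np.arange(1, n + 1) / (n + 1))])
    Dinv = np.linalg.inv(np.array([[2.0, -1.0 / nu],
                                   [1.0, 2.0]]))     # one 2x2 block per point
    x = np.zeros(2 * n)
    for _ in range(sweeps):
        r = rhs - M @ x
        ry, rp = r[:n], r[n:]
        x[:n] += Dinv[0, 0] * ry + Dinv[0, 1] * rp   # collective update of y_i
        x[n:] += Dinv[1, 0] * ry + Dinv[1, 1] * rp   # ... and of p_i
    return np.linalg.norm(rhs - M @ x) / np.linalg.norm(rhs)

relative_residual = collective_jacobi()
```

In a multigrid cycle only a few such sweeps would be used per level; here the stand-alone iteration is run long enough to show that grouping the unknowns keeps it contractive even for small ν.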


Journal ArticleDOI
TL;DR: An algorithm to refine space–time finite element meshes as needed for the numerical solution of parabolic initial boundary value problems based on a decomposition of the space–time cylinder into finite elements, which also allows a rather general and flexible discretization in time.
Abstract: In this paper we present an algorithm to refine space–time finite element meshes as needed for the numerical solution of parabolic initial boundary value problems. The approach is based on a decomposition of the space–time cylinder into finite elements, which also allows a rather general and flexible discretization in time. This also includes adaptive finite element meshes which move in time. For the handling of three-dimensional spatial domains, and therefore of a four-dimensional space–time cylinder, we describe a refinement strategy to decompose pentatopes into smaller ones. For the discretization of the initial boundary value problem we use an interior penalty Galerkin approach in space, and an upwind technique in time. A numerical example for the transient heat equation confirms the order of convergence as expected from the theory. First numerical results for the transient Navier–Stokes equations and for an adaptive mesh moving in time underline the applicability and flexibility of the presented approach.

43 citations


Journal ArticleDOI
TL;DR: An image-processing pipeline for the automated segmentation of yeast cells in microscopy images is proposed and takes the varying quality and highly heterogeneous characteristics of cells in transmission images into account, and can be used to extract quantitative cell-based data from transmission/fluorescence image pairs.
Abstract: An image-processing pipeline for the automated segmentation of yeast cells in microscopy images is proposed. The method is suitable for the non-invasive detection of individual cells in transmission data which can be acquired simultaneously with fluorescence data. It moreover takes the varying quality and highly heterogeneous characteristics of cells in transmission images into account, is capable of processing images with dense yeast populations, and can be used to extract quantitative cell-based data from transmission/fluorescence image pairs. Applicability and performance of the method are evaluated on a data set of 523 different yeast deletion mutant strains.

33 citations


Journal ArticleDOI
TL;DR: An extension of the finite element immersed boundary method is provided based on a model for the membrane that additionally accounts for bending energy, and inflow/outflow conditions for the external fluid flow are also considered.
Abstract: We study the mathematical modeling and numerical simulation of the motion of red blood cells (RBC) and vesicles subject to an external incompressible flow in a microchannel. RBC and vesicles are viscoelastic bodies consisting of a deformable elastic membrane enclosing an incompressible fluid. We provide an extension of the finite element immersed boundary method by Boffi and Gastaldi (Comput Struct 81:491–501, 2003), Boffi et al. (Math Mod Meth Appl Sci 17:1479–1505, 2007), Boffi et al. (Comput Struct 85:775–783, 2007) based on a model for the membrane that additionally accounts for bending energy and also consider inflow/outflow conditions for the external fluid flow. The stability analysis requires both the approximation of the membrane by cubic splines (instead of linear splines without bending energy) and an upper bound on the inflow velocity. In the fully discrete case, the resulting CFL-type condition on the time step size is also more restrictive. We perform numerical simulations for various scenarios including the tank treading motion of vesicles in microchannels, the behavior of ‘healthy’ and ‘sick’ RBC which differ by their stiffness, and the motion of RBC through thin capillaries. The simulation results are in very good agreement with experimentally available data.

27 citations


Journal ArticleDOI
TL;DR: A numerical scheme capable of solving the fully coupled microscopic SNPP system and also the corresponding averaged systems is proposed, and their approximation errors in terms of $$\varepsilon$$ are estimated numerically.
Abstract: We consider charged transport within a porous medium, which at the pore scale can be described by the non-stationary Stokes–Nernst–Planck–Poisson (SNPP) system. We state three different homogenization results using the method of two-scale convergence. In addition to the averaged macroscopic equations, auxiliary cell problems are solved in order to provide closed-form expressions for effective coefficients. Our aim is to study numerically the convergence of the models for vanishing microstructure, i.e., the behavior for $$\varepsilon \rightarrow 0$$, where $$\varepsilon$$ is the characteristic ratio between pore diameter and size of the porous medium. To this end, we propose a numerical scheme capable of solving the fully coupled microscopic SNPP system and also the corresponding averaged systems. The discretization is performed fully implicitly in time using mixed finite elements in two space dimensions. The averaged models are evaluated using simulation results and their approximation errors in terms of $$\varepsilon$$ are estimated numerically.

23 citations


Journal Article
Abstract: The USEWOD2011 workshop investigates combinations of usage data with semantics and the web of data. The analysis of usage data may be enhanced using semantic information. Now that more and more explicit knowledge is represented on the Web, the question arises how these semantics can be used to aid large scale web usage analysis and mining. Conversely, usage data analysis can enhance semantic resources as well as Semantic Web applications. Traces of users can be used to evaluate, adapt or personalize Semantic Web applications. Also, new ways of accessing information enabled by the Web of Data imply the need to develop or adapt algorithms, methods, and techniques to analyze and interpret the usage of Web data instead of Web pages. The USEWOD2011 program includes a challenge to the workshop participants: three months before the workshop two datasets consisting of server log files of Linked Open Data sources were released. Participants are invited to come up with interesting analyses, applications, alignments, etc. for these datasets.

Journal ArticleDOI
TL;DR: A fast method is presented where the expectation value of the objective is minimized with respect to a reduced POD basis of the space of controls of PDE models with random-field coefficients.
Abstract: A strategy for the fast computation of robust controls of PDE models with random-field coefficients is presented. A robust control is defined as the control function that minimizes the expectation value of the objective over all coefficient configurations. A straightforward application of the adjoint method on this problem results in a very large optimality system. In contrast, a fast method is presented where the expectation value of the objective is minimized with respect to a reduced POD basis of the space of controls. Comparison of the POD scheme with the full optimization procedure in the case of elliptic control problems with random reaction terms and with random diffusivity demonstrates the superior computational performance of the POD method.
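The reduced basis in such an approach is typically obtained by proper orthogonal decomposition of a snapshot matrix: the columns collect sampled controls or states for different coefficient realizations, and the leading left singular vectors span the reduced space. The sketch below uses synthetic snapshot data, not the paper's elliptic control problems.

```python
import numpy as np

def pod_basis(snapshots, k):
    """POD: the k leading left singular vectors of the snapshot matrix."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k], s

# Synthetic snapshots with hidden 3-dimensional structure plus small noise.
rng = np.random.default_rng(0)
n, m = 200, 30
modes = np.linalg.qr(rng.standard_normal((n, 3)))[0]
snaps = modes @ rng.standard_normal((3, m)) + 1e-3 * rng.standard_normal((n, m))
basis, s = pod_basis(snaps, k=3)
# Optimizing only over span(basis) loses little when snapshots are near low rank.
err = np.linalg.norm(snaps - basis @ (basis.T @ snaps)) / np.linalg.norm(snaps)
```

The rapid singular value decay is what justifies minimizing the expected objective over the low-dimensional POD space instead of solving the full adjoint-based optimality system.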

Journal ArticleDOI
TL;DR: This work provides fast boundary element methods which are able to deal with multiply connected computational domains, with large magnetic permeabilities, and with complicated structures with small gaps.
Abstract: For the solution of magnetostatic field problems we discuss and compare several boundary integral formulations with respect to their accuracy, their efficiency, and their robustness. We provide fast boundary element methods which are able to deal with multiply connected computational domains, with large magnetic permeabilities, and with complicated structures with small gaps. The numerical comparison is based on several examples, including a controllable reactor as a real-world problem.

Journal ArticleDOI
TL;DR: A nested multigrid method is presented to optimize time-periodic parabolic partial differential equations (PDEs), considering a quadratic tracking objective with a linear parabolic PDE constraint.
Abstract: We present a nested multigrid method to optimize time-periodic parabolic partial differential equations (PDEs). We consider a quadratic tracking objective with a linear parabolic PDE constraint. The first-order optimality conditions, given by a coupled system of boundary value problems, can be rewritten as a Fredholm integral equation of the second kind, which is solved by a multigrid method of the second kind. The evaluation of the integral operator consists of sequentially solving a boundary value problem for the state and for the adjoint, respectively. Both problems are solved efficiently by a time-periodic space-time multigrid method.

Journal ArticleDOI
TL;DR: It is demonstrated that fluid dynamics strongly influence the fate of deposited nanoparticles in mucus: sedimentation, impaction and diffusion were shown to be unlikely to contribute to particle translocation; however, intrinsic plasticity of mucus slabs and collision of such slabs may enhance particle translocation towards the pulmonary epithelium.
Abstract: Interactions of nanoparticles with respiratory fluids such as airway mucus are currently under investigation and are involved in a variety of applications. The clearance processes of those nanoparticles are still not fully understood. This study presents an approach to describe deposition, sedimentation and clearance of nanoparticles within mucus with numerical and analytical models: particle deposition as well as motility, sedimentation and clearance were simulated with Computational Fluid Dynamics (CFD) and described analytically. Furthermore, mucus plasticity as a pathway for complex particle translocation was simulated using grid-free CFD methods. We could demonstrate that fluid dynamics strongly influence the fate of deposited nanoparticles in mucus: sedimentation, impaction and diffusion were shown to be unlikely to contribute to particle translocation. However, intrinsic plasticity of mucus slabs and collision of such slabs may enhance particle translocation towards the pulmonary epithelium.
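The claim that sedimentation hardly contributes can be made plausible with the classical Stokes settling velocity v = Δρ g d² / (18 μ). The parameter values below are illustrative assumptions of ours, not data from the study; with them, a 100 nm particle in high-viscosity mucus settles at only picometres per second.

```python
# Stokes settling velocity v = (rho_p - rho_f) * g * d^2 / (18 * mu).
# All parameter values are illustrative assumptions, not study data.
g = 9.81        # gravitational acceleration, m/s^2
d = 100e-9      # particle diameter: 100 nm
drho = 1.0e3    # particle/mucus density contrast, kg/m^3
mu = 1.0        # assumed mucus bulk viscosity, Pa*s (far above water)

v = drho * g * d ** 2 / (18 * mu)   # on the order of picometres per second
```

The quadratic dependence on diameter is the point: halving the particle size cuts the settling speed fourfold, so gravity is negligible for nanoscale particles on physiological time scales.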

Journal ArticleDOI
TL;DR: This work addresses an optimal control approach for a model problem in cardiac electrophysiology, with the goal of extinguishing a reentry phenomenon, through the derivation of the optimality system and the description of its discretization.
Abstract: This work addresses an optimal control approach for a model problem in cardiac electrophysiology with the goal of extinction of a reentry phenomenon. After the introduction of the mathematical model, the derivation of the optimality system, the description of its discretization and a numerical feasibility study in a parallel environment are provided.

Journal ArticleDOI
TL;DR: This paper presents an implementation of this time-consuming filter process on a cluster of Nvidia Tesla high-performance computing processors, which can be applied to very large amounts of data in only a few minutes.
Abstract: The scheme of inertia-based anisotropic diffusion is a very powerful noise-reducing and structure-preserving image processing operator. This paper presents an implementation of this time-consuming filter process on a cluster of Nvidia Tesla high-performance computing processors, which can be applied to very large amounts of data in only a few minutes. Applying the inertia-based diffusion filter to high-resolution image stacks of neuron cells provides fully automatic geometric reconstructions of these images on a scale of <1 μm. Such a high-throughput and automatic image processing tool has great impact on various research areas, in particular the fast-growing field of computational neuroscience, where one encounters increasing amounts of microscopy data that need to be processed.
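The inertia-based scheme steers diffusion by the local structure tensor; as a much simpler scalar stand-in, the classical Perona–Malik filter below illustrates the shared principle of edge-stopping diffusion — smoothing flat, noisy regions while strong edges block the flux and stay sharp. This is an illustration only, not the paper's operator.

```python
import numpy as np

def edge_stopping_diffusion(u, iters=20, kappa=0.1, dt=0.2):
    """Perona-Malik diffusion: the flux to each neighbour is damped by
    g(d) = exp(-(d/kappa)^2), so weak (noise) gradients are smoothed while
    strong edges suppress the diffusion and remain sharp."""
    u = u.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(iters):
        dn = np.roll(u, -1, 0) - u; dn[-1] = 0      # zero-flux boundaries
        ds = np.roll(u, 1, 0) - u;  ds[0] = 0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, 1) - u;  dw[:, 0] = 0
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step edge: the flat parts are denoised, the jump survives.
rng = np.random.default_rng(0)
clean = np.zeros((40, 40)); clean[:, 20:] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
smooth = edge_stopping_diffusion(noisy)
```

Every pixel is updated independently from its four neighbours, which is why this class of filters parallelizes so well on GPU clusters.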

Journal ArticleDOI
TL;DR: A mathematical diffusion model describing the transient transdermal penetration of two non-volatile substances, the lipophilic flufenamic acid and the hydrophilic caffeine, after finite dosing in an aqueous vehicle system is presented.
Abstract: In this paper we present a mathematical diffusion model describing the transient transdermal penetration of two non-volatile substances, the lipophilic flufenamic acid and the hydrophilic caffeine, after finite dosing in an aqueous vehicle system. A striking feature of this microscopic diffusion model is its ability to predict concentration-depth profiles. Relevant input parameters are obtained from a previously published infinite dose study (Naegel et al in Eur J Pharm Biopharm 68:368–379, 2008; Hansen et al in Eur J Pharm Biopharm 68:352–367, 2008). The quality of the model has been evaluated by comparing the concentration-depth profiles in stratum corneum (SC) and deeper skin layers of the experiment with those of the simulation. The results from the experiment and the simulation are in good agreement. The study addresses benefits and shortcomings of the model, and discusses future perspectives such as incorporating different morphological regions of the SC.
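The kind of concentration-depth profile such a model predicts can be sketched with a one-substance, one-layer finite difference model: Fickian diffusion into a slab with the surface held at the vehicle concentration and no flux at the bottom. Parameters and geometry below are generic placeholders, far simpler than the two-substance skin model of the paper.

```python
import numpy as np

def penetration_profile(D=1e-3, depth=1.0, nz=50, t_end=50.0):
    """Explicit finite differences for dC/dt = D d2C/dz2 in a skin slab:
    surface held at the vehicle concentration, zero flux at the bottom."""
    dz = depth / nz
    dt = 0.4 * dz ** 2 / D            # CFL-stable time step (r = 0.4 < 0.5)
    c = np.zeros(nz)
    c[0] = 1.0                        # surface node in contact with the vehicle
    for _ in range(int(t_end / dt)):
        lap = np.zeros(nz)
        lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
        lap[-1] = c[-2] - c[-1]       # reflecting (no-flux) bottom boundary
        c[1:] += (D * dt / dz ** 2) * lap[1:]
    return c

profile = penetration_profile()       # concentration vs. depth
```

The monotonically decaying profile is the discrete analogue of the concentration-depth curves the paper compares against tape-stripping experiments; a finite-dose model would additionally deplete the surface boundary condition over time.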

Journal ArticleDOI
TL;DR: This paper proposes and implements a hybrid convection-diffusion-shape model for simulating and predicting what has been validated medically, with respect to some aberrant colonic crypt morphogenesis, and demonstrates crypt fission, in which a single crypt starts dividing into two crypts when there is an increase of proliferative cells.
Abstract: It is generally accepted that colorectal cancer is initiated in the small pits, called crypts, that line the colon. Normal crypts exhibit a regular pit pattern, similar in two dimensions to a U-shape, but aberrant crypts display different patterns, and in some cases show bifurcation. According to several medical articles, there is an interest in correlating pit patterns and the cellular kinetics, namely of proliferative and apoptotic cells, in colonic crypts. This paper proposes and implements a hybrid convection-diffusion-shape model for simulating and predicting what has been validated medically, with respect to some aberrant colonic crypt morphogenesis. The model demonstrates crypt fission, in which a single crypt starts dividing into two crypts, when there is an increase of proliferative cells. The overall model couples the cell movement and proliferation equations with the crypt geometry. It relies on classical continuum transport/mass conservation laws and the changes in the crypt shape are driven by the pressure exerted by the cells on the crypt wall. This pressure is related to the cell velocity by a Darcy-type law. Numerical simulations are conducted and comparisons with the medical results are shown.

Journal ArticleDOI
TL;DR: A preconditioned GMRES solver and a Newton-based solver are presented for fluid-structure interaction problems employing a nearly incompressible elasticity model in a classical mixed displacement-pressure formulation.
Abstract: In this paper, we present some analysis and numerical studies on two partitioned fluid-structure interaction solvers, a preconditioned GMRES solver and a Newton-based solver, for fluid-structure interaction problems employing a nearly incompressible elasticity model in a classical mixed displacement-pressure formulation. Both rely heavily on robust and efficient solvers for the fluid and structure sub-problems obtained from an extended and stabilized finite element discretization on hybrid meshes. A special algebraic multigrid method capable of handling such general saddle point systems for the incompressible and nearly incompressible models is investigated.

Journal ArticleDOI
TL;DR: A heuristic hp-refinement indicator based on the ratio between the two quantities on each element is introduced, together with nodal basis functions for special elements where the polynomial degree along edges is allowed to differ from the overall element degree.
Abstract: In this paper, we present a new approach to hp-adaptive finite element methods. Our a posteriori error estimates and hp-refinement indicator are inspired by the work on gradient/derivative recovery of Bank and Xu (SIAM J Numer Anal 41:2294–2312, 2003; SIAM J Numer Anal 41:2313–2332, 2003). For an element τ of degree p, $$R(\partial^p u_{hp})$$, the (piecewise linear) recovered function of $$\partial^p u_{hp}$$, is used to approximate $$|\varepsilon|_{1,\tau} = |\hat{u}_{p+1} - u_{p}|_{1,\tau}$$, which serves as our local error indicator. Under sufficient conditions on the smoothness of u, it can be shown that $$\|\partial^{p}(\hat{u}_{p+1} - u_{p})\|_{0,\Omega}$$ is a superconvergent approximation of $$\|(I - R)\partial^p u_{hp}\|_{0,\Omega}$$. Based on this, we develop a heuristic hp-refinement indicator based on the ratio between the two quantities on each element. Also in this work, we introduce nodal basis functions for special elements where the polynomial degree along edges is allowed to differ from the overall element degree. Several numerical examples are provided to show the effectiveness of our approach.

Journal ArticleDOI
TL;DR: The numerical results obtained for a variety of benchmark problems show the validity of the numerical approach, in comparison with other numerical models, and make it possible to investigate numerically the non-hydrostatic three-dimensional effects, in particular for the usual test cases where hydrostatic approximations are known analytically.
Abstract: The numerical simulation of three-dimensional dam break flows is discussed. A non-hydrostatic numerical model for free-surface flows is considered, which is based on the incompressible Navier–Stokes equations coupled with a volume-of-fluid approach. The numerical results obtained for a variety of benchmark problems show the validity of the numerical approach, in comparison with other numerical models, and make it possible to investigate numerically the non-hydrostatic three-dimensional effects, in particular for the usual test cases where hydrostatic approximations are known analytically. The numerical experiments on actual topographies, in particular the Malpasset dam break and the (hypothetical) break of the Grande-Dixence dam in Switzerland, also illustrate the capabilities of the method for large-scale simulations and real-life visualization.

Journal ArticleDOI
TL;DR: A generalized version of the Cross Approximation for 3d-tensors, with the main focus on theoretical issues of the construction, such as the desired interpolation property and the explicit formulas for the vectors in the decomposition.
Abstract: In this article we present a generalized version of the Cross Approximation for 3d-tensors. The given tensor $${a\in\mathbb{R}^{n\times n\times n}}$$ is represented as a matrix of vectors and 2d adaptive Cross Approximation is applied in a nested way to get the tensor decomposition. The main focus lies on theoretical issues of the construction such as the desired interpolation property or the explicit formulas for the vectors in the decomposition. The computational complexity of the proposed algorithm is shown to be linear in n.
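The 2d building block of the method, cross approximation of a matrix, can be sketched as repeated rank-1 "skeleton" updates through a pivot entry; the paper applies this 2d routine in a nested way to the tensor. The small sketch below uses full pivoting for clarity (practical adaptive variants use cheaper partial pivoting), and is our illustration rather than the paper's algorithm.

```python
import numpy as np

def cross_approximation(A, max_rank):
    """Matrix (2d) cross approximation with full pivoting: each step
    subtracts the rank-1 cross through the entry of largest magnitude.
    The approximation interpolates A exactly on every chosen row/column."""
    R = A.astype(float).copy()
    approx = np.zeros_like(R)
    for _ in range(max_rank):
        i, j = np.unravel_index(np.abs(R).argmax(), R.shape)
        if abs(R[i, j]) < 1e-14:      # remainder is (numerically) zero
            break
        cross = np.outer(R[:, j], R[i, :]) / R[i, j]
        approx += cross
        R -= cross
    return approx

# A rank-2 matrix is reproduced exactly by two cross steps.
A = (np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0, 7.0])
     + np.outer([0.0, 1.0, 0.0], [1.0, 0.0, 2.0, 0.0]))
A2 = cross_approximation(A, 2)
```

Each step only reads one row and one column of the remainder, which is what makes the nested 3d construction achievable in complexity linear in n.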

Journal ArticleDOI
TL;DR: Results of numerical experiments validate the proposed optimal control formulation and demonstrate the effectiveness of the staggered-grids multigrid solution procedure.
Abstract: The formulation of optimal control problems governed by Cauchy-Riemann equations is presented. A distributed control mechanism through divergence and curl sources is considered with boundary conditions of mixed type. A Lagrange multiplier framework is introduced to characterize the solution to Cauchy-Riemann optimal control problems as the solution of an optimality system of four first-order partial differential equations and two optimality conditions. To solve the optimality system, staggered grids and multigrid methods are investigated. Staggered grids turn out to provide a natural collocation of the optimization variables, and second-order accurate solutions are obtained. The proposed multigrid scheme is based on a coarsening by a factor of three that results in a nested hierarchy of staggered grids. On these grids a distributed Gauss-Seidel and gradient-based smoothing scheme is employed. Results of numerical experiments validate the proposed optimal control formulation and demonstrate the effectiveness of the staggered-grids multigrid solution procedure.

Journal ArticleDOI
TL;DR: This contribution reports on different lifting operators for a hybrid simulation that combines a lattice Boltzmann model—a special discretization of the BoltZmann equation—with a diffusion partial differential equation and focuses on the numerical comparison of various lifting strategies.
Abstract: Mathematical models based on kinetic equations are ubiquitous in the modeling of granular media, population dynamics of biological colonies, chemical reactions and many other scientific problems. These individual-based models are computationally very expensive because the evolution takes place in the phase space. Hybrid simulations can bring down this computational cost by replacing locally in the domain—in the regions where it is justified—the kinetic model with a more macroscopic description. This splits the computational domain into subdomains. The question is how to couple these models in a mathematically correct way with a lifting operator that maps the variables of the macroscopic partial differential equation to those of the kinetic model. Indeed, a kinetic model has typically more variables than a model based on a macroscopic partial differential equation and at each interface we need the missing data. In this contribution we report on different lifting operators for a hybrid simulation that combines a lattice Boltzmann model—a special discretization of the Boltzmann equation—with a diffusion partial differential equation. We focus on the numerical comparison of various lifting strategies.
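The simplest lifting operator maps the macroscopic density to the local equilibrium, f_i = w_i ρ; higher-order liftings add gradient corrections. Below is a minimal D1Q3 toy of our own (not one of the paper's lifting strategies): for relaxation parameter ω = 1 the lifted scheme reduces to a weighted random walk with diffusion coefficient D = c_s²(1/ω − 1/2) = 1/6, so the variance of a point release grows as 2Dt.

```python
import numpy as np

def lift(rho, w=(1 / 6, 2 / 3, 1 / 6)):
    """Lowest-order lifting: macroscopic density -> equilibrium
    distributions f_i = w_i * rho (D1Q3 lattice, velocities -1, 0, +1)."""
    return [wi * rho for wi in w]

def lbm_diffusion(rho0, steps, omega=1.0):
    f = lift(rho0)                            # initialize via the lifting operator
    for _ in range(steps):
        feq = lift(f[0] + f[1] + f[2])        # re-lift the current density
        f = [fi + omega * (fe - fi) for fi, fe in zip(f, feq)]   # BGK collision
        f = [np.roll(f[0], -1), f[1], np.roll(f[2], 1)]          # streaming
    return f[0] + f[1] + f[2]

n = 401
rho0 = np.zeros(n); rho0[n // 2] = 1.0        # point release in the middle
rho = lbm_diffusion(rho0, steps=100)
x = np.arange(n) - n // 2
var = (rho * x ** 2).sum() / rho.sum()        # grows as 2*D*t with D = 1/6
```

At a hybrid interface, `lift` would supply the missing kinetic variables from the macroscopic diffusion solution; the paper's point is precisely that such lowest-order lifting is only one of several choices with different coupling errors.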

Journal ArticleDOI
TL;DR: A new adaptive method for solving the one-dimensional Helmholtz equation is introduced in an algebraic multigrid framework and combines techniques from the wave-ray geometric solver and from the adaptive Bootstrap algebraic approach.
Abstract: The paper introduces a new adaptive method for solving the one-dimensional Helmholtz equation. It is implemented in an algebraic multigrid framework and combines techniques from the wave-ray geometric solver and from the adaptive Bootstrap algebraic approach. The solver exhibits optimal efficiency for different, including discontinuous, wave numbers, as confirmed by numerical experiments. The approach is extendable, though not straightforwardly, to higher dimensions as briefly discussed in the concluding section.

Journal ArticleDOI
TL;DR: This paper presents three implementations of a partitioning algorithm for multi-channel images, which extends an original algorithm for single-channel images presented in the early 1990s, and shows that the algorithm's suitability for the GPU makes it a competitive alternative to modern partitioning algorithms (multi-label Graph-Cuts).
Abstract: The GPU programmability opens a new perspective for algorithms that have not been studied and used for real applications on commodity state-of-the-art hardware due to their computational expenses. In this paper, we present three implementations of a partitioning algorithm for multi-channel images, which extends an original algorithm for single-channel images presented in the early 1990s. The segmentation algorithm is based on the information theory concept of minimum description length, which leads to the formulation of an energy functional. The optimal solution is obtained by minimizing the functional. The minimization approach follows a graduated non-convexity approach, which leads to a fully explicit scheme. As the scheme is applied to all pixels of the image simultaneously, it is naturally parallelizable. Besides the optimized sequential implementation in C++ we developed a GLSL version of the algorithm using vertex and fragment shaders as well as a CUDA version using global memory, shared memory, and texture memory. We compare the performance of the implementations, discuss the implementation details, and show that the algorithm's suitability for the GPU makes it a competitive alternative to modern partitioning algorithms (multi-label Graph-Cuts).

Journal ArticleDOI
TL;DR: This paper solves linear parabolic problems using a three-stage algorithm in which the time discretization uses the Laplace transformation method, which is both parallel in time (and can be in space, too) and of extremely high-order convergence.
Abstract: In this paper we solve linear parabolic problems using a three-stage algorithm. First, the time discretization is approximated using the Laplace transformation method, which is both parallel in time (and can be in space, too) and of extremely high-order convergence. Second, higher-order compact schemes of order four and six are used for the spatial discretization. Finally, the discretized linear algebraic systems are solved using multigrid, and the actual convergence rates are shown for numerical examples, which are compared to other numerical solution methods.

Journal ArticleDOI
TL;DR: The three-dimensional solver for the two-phase incompressible Navier-Stokes equations, NaSt3DGPF, is coupled with Autodesk Maya, yielding the first published results of the full integration of a physics-oriented, high-order, grid-based parallel two-phase fluid solver in Maya.
Abstract: We have coupled the three-dimensional solver for the two-phase incompressible Navier-Stokes equations, NaSt3DGPF, with Autodesk Maya. Maya is the industry standard software framework for the creation of three-dimensional animations. The parallel level-set-based fluid solver NaSt3DGPF simulates the interaction of two fluids like air and water. It uses high-order finite difference discretization methods that are designed for physics applications. By coupling both applications, we are now able to set up scientific fluid simulations in an easy-to-use user interface. Moreover, the rendering techniques provided by Maya allow us to create photorealistic visualizations for computational fluid dynamics problems and support the creation of highly visually realistic fluid simulations for animation movies. Altogether, we obtain an easy-to-use, fully coupled fluid animation toolkit for two-phase fluid simulations. To our knowledge, these are the first published results of the full integration of a physics-oriented, high-order, grid-based parallel two-phase fluid solver in Maya.