Other affiliations: National Autonomous University of Mexico
Bio: J.C. Cajas is an academic researcher from the Barcelona Supercomputing Center. The author has contributed to research in topics: Reynolds number & Heat transfer. The author has an h-index of 9, has co-authored 34 publications receiving 262 citations. Previous affiliations of J.C. Cajas include the National Autonomous University of Mexico.
TL;DR: This work presents a fully coupled fluid‐electro‐mechanical model of a 50th percentile human heart, implemented on Alya, the BSC multi‐physics parallel code, capable of running efficiently in supercomputers.
Abstract: In this work, we present a fully coupled fluid-electro-mechanical model of a 50th percentile human heart. The model is implemented on Alya, the BSC multi-physics parallel code, capable of running efficiently in supercomputers. Blood in the cardiac cavities is modeled by the incompressible Navier-Stokes equations and an arbitrary Lagrangian-Eulerian (ALE) scheme. Electrophysiology is modeled with a monodomain scheme and the O'Hara-Rudy cell model. Solid mechanics is modeled with a total Lagrangian formulation for discrete strains using the Holzapfel-Ogden cardiac tissue material model. The three problems are simultaneously and bidirectionally coupled through an electromechanical feedback and a fluid-structure interaction scheme. In this paper, we present the scheme in detail and propose it as a computational cardiac workbench.
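The staggered, bidirectional coupling described above can be sketched as a single time-stepping loop. The following is a minimal illustrative sketch only, not Alya's actual API: every function name and its toy dynamics is a hypothetical stand-in for the real monodomain, solid-mechanics, and ALE fluid kernels.

```python
# Illustrative staggered loop for the bidirectional fluid-electro-mechanical
# coupling. All functions and their toy dynamics are hypothetical stand-ins
# for the actual monodomain, solid-mechanics and ALE fluid kernels.

def solve_electrophysiology(V, dt):
    # A monodomain + cell model would update the transmembrane potential here.
    return V - 0.1 * V * dt                  # toy relaxation dynamics

def active_stress(V):
    # Electromechanical feedback: the potential drives active tissue stress.
    return 0.5 * V

def solve_solid(d, Ta, dt):
    # Total Lagrangian solid step driven by the active stress Ta (toy model).
    return d + Ta * dt

def solve_fluid(mesh_velocity, dt):
    # ALE Navier-Stokes step on the deforming cardiac cavity (placeholder).
    return mesh_velocity

V, d, dt = 1.0, 0.0, 0.01
for step in range(10):
    V = solve_electrophysiology(V, dt)   # 1) electrophysiology
    Ta = active_stress(V)                # 2) feedback to mechanics
    d_new = solve_solid(d, Ta, dt)       # 3) tissue deformation
    w = (d_new - d) / dt                 # 4) mesh velocity for the ALE scheme
    solve_fluid(w, dt)                   # 5) blood flow (FSI)
    d = d_new
```

Each physics is advanced once per step and the deformation increment feeds the ALE mesh velocity, which is the essence of the bidirectional scheme.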
TL;DR: In this paper, low-dissipative spatial and temporal finite element discretisation schemes are implemented and combined with the Wall-Adapting Local-Eddy viscosity (WALE) subgrid-scale model to solve the filtered incompressible flow equations.
Abstract: The vortex-induced vibration phenomenon of a low mass ratio (m* = 2.6) two-degree-of-freedom circular cylinder at subcritical Reynolds numbers (Re = 3900, 5300, 11,000) has been investigated by means of large-eddy simulations. Low-dissipative spatial and temporal finite element discretisation schemes have been implemented and combined with the Wall-Adapting Local-Eddy viscosity (WALE) subgrid-scale model to solve the filtered incompressible flow equations. Several values of the reduced velocity in the range 3 ⩽ U* ⩽ 12 have been considered. The numerical results are extensively compared with available experimental and numerical data. Particular interest has been placed on the region of maximum cross-flow amplitudes, the super-upper branch, where previous high-fidelity numerical simulations have underestimated the peak amplitudes compared with experimental results. The transition between the super-upper and lower branches is also shown and described. The numerical simulations successfully reproduce the maximum oscillation amplitudes of the three-branch response and the associated vortex formation modes. The 2T vortex formation mode, i.e. two triplets of vortices per oscillation period, has been observed to occur in the super-upper branch for the three values of the Reynolds number investigated. These results contradict the claim made in previous works that the vortex formation mode in the super-upper branch is Reynolds number dependent. Beats are observed to appear prior to the transition from the super-upper to the lower branch. It is argued that they may be related to the coherence and strength of the third vortex shed at the shoulder of the cylinder each half-cycle, which is finally suppressed in the transition to the lower branch.
TL;DR: This paper presents a fast and scalable geometric partitioner based on Space Filling Curves (SFC) as an alternative to the standard graph partitioning approach; the algorithm avoids any computing or memory bottlenecks.
Abstract: Larger supercomputers allow the simulation of more complex phenomena with increased accuracy. Eventually this requires finer and thus also larger geometric discretizations. In this context, and extrapolating to the Exascale paradigm, meshing operations such as generation, deformation, adaptation/regeneration, or partition/load balance become a critical issue within the simulation workflow. In this paper we focus on mesh partitioning. In particular, we present a fast and scalable geometric partitioner based on Space Filling Curves (SFC), as an alternative to the standard graph partitioning approach. We have avoided any computing or memory bottleneck in the algorithm, while imposing that the solution achieved is independent (up to round-off errors) of the number of parallel processes used to compute it. The performance of the presented SFC-based partitioner has been demonstrated using up to 4096 CPU cores on the Blue Waters supercomputer.
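The core idea of an SFC-based geometric partitioner can be sketched in a few lines: encode each element centroid as a key on a space-filling curve (here a Morton/Z-order curve, chosen as one common option), sort by key, and cut the sorted sequence into equally sized chunks. This serial sketch is only illustrative; the partitioner in the paper is parallel and specifically engineered to avoid the bottlenecks a naive sort would introduce.

```python
# Illustrative serial SFC partitioner: Morton (Z-order) keys + balanced cuts.
# The actual algorithm in the paper is parallel and bottleneck-free.

def morton2d(x, y, bits=16):
    """Interleave the bits of integer coordinates x, y into a Morton key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def sfc_partition(centroids, nparts, bits=16):
    """Assign each centroid (coordinates in [0,1)^2) to one of nparts parts."""
    scale = (1 << bits) - 1
    order = sorted(range(len(centroids)),
                   key=lambda i: morton2d(int(centroids[i][0] * scale),
                                          int(centroids[i][1] * scale), bits))
    part = [0] * len(centroids)
    chunk = -(-len(centroids) // nparts)   # ceiling division for balance
    for rank, i in enumerate(order):
        part[i] = rank // chunk
    return part

# Nearby centroids land on nearby curve positions, hence in the same part:
pts = [(0.1, 0.1), (0.9, 0.9), (0.15, 0.2), (0.85, 0.8)]
parts = sfc_partition(pts, 2)
```

Because the curve preserves spatial locality, geometrically close elements tend to end up in the same partition, which is what makes the approach a viable alternative to graph partitioning.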
TL;DR: In this paper, the authors introduce mesh-based, equation-based, and algebraic-based domain composition methods, focusing on the algebraic-based variant, which has many advantages over the others: it is relatively problem independent; its implicit implementation can be hidden in the iterative solver operations; and it can be implemented in a multi-code environment.
Abstract: Domain composition methods (DCM) consist in obtaining the solution to a problem from formulations of that problem expressed on various subdomains. These methods therefore have the opposite objective to domain decomposition methods (DDM). Indeed, in contrast to DCM, the latter techniques are usually applied to matching meshes, as their purpose consists mainly in distributing the work in parallel environments. However, they are sometimes based on the same methodology, since after decomposing, DDM have to recompose. As a consequence, in the literature the term DDM has often been used in place of DCM. DCM are powerful techniques that can be used for different purposes: to simplify the meshing of a complex geometry by decomposing it into meshable pieces; to perform local refinement to adapt to local mesh requirements; to treat subdomains in relative motion (Chimera, sliding mesh); to solve multiphysics or multiscale problems; etc. The term DCM is generic and does not give any clue about how the fragmented solutions on the different subdomains are composed into a global one. In the literature, many methodologies have been proposed: they are mesh-based, equation-based, or algebraic-based. In mesh-based formulations, the coupling is achieved at the mesh level, before the governing equations are assembled into an algebraic system (mesh conforming, Shear-Slip Mesh Update, HERMESH). The equation-based counterpart recomposes the solution from the strong or weak formulation itself and is implemented during the assembly of the algebraic system on the subdomain meshes. The different coupling techniques can be formulated for the strong formulation at the continuous level, or for the weak formulation either at the continuous or at the discrete level (iteration-by-subdomains, mortar element, mesh-free interpolation).
Although the different methods usually lead to the same solutions at the continuous level, which usually coincide with the solution of the problem on the original domain, they have very different behaviours at the discrete level and can be implemented in many different ways. Finally, algebraic-based formulations treat the composition of the solutions directly on the matrix and right-hand side of the individual subdomain algebraic systems. The present work introduces mesh-based, equation-based, and algebraic-based DCM. It focuses, however, on algebraic-based domain composition methods, which have many advantages over the others: they are relatively problem independent; their implicit implementation can be hidden in the iterative solver operations, which avoids intensive code rewriting; and they can be implemented in a multi-code environment.
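As a toy illustration of the algebraic viewpoint, the sketch below composes two 1-D Poisson subdomain systems that share one interface degree of freedom by summing their matrix and right-hand-side contributions at the shared dof. The problem (linear finite elements for -u'' = 1 on (0,1) with homogeneous Dirichlet conditions, split at the midpoint) is chosen purely for illustration and is not taken from the paper.

```python
# Algebraic composition of two subdomain systems sharing one interface dof.
# Toy problem: -u'' = 1 on (0,1), u(0)=u(1)=0, linear FEM with h = 0.25.
import numpy as np

h = 0.25  # global nodes 0..4; interior dofs 1, 2, 3; node 2 is the interface
# Subdomain 1 (elements on [0, 0.5]) contributes to dofs (1, 2):
A1 = (1.0 / h) * np.array([[2.0, -1.0], [-1.0, 1.0]])
b1 = h * np.array([1.0, 0.5])
# Subdomain 2 (elements on [0.5, 1]) contributes to dofs (2, 3):
A2 = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 2.0]])
b2 = h * np.array([0.5, 1.0])

# Compose directly on matrix and right-hand side: sum at the shared dof.
dofs1, dofs2 = [0, 1], [1, 2]          # positions in the composed system
A = np.zeros((3, 3)); b = np.zeros(3)
A[np.ix_(dofs1, dofs1)] += A1; b[dofs1] += b1
A[np.ix_(dofs2, dofs2)] += A2; b[dofs2] += b2

u = np.linalg.solve(A, b)
# Linear FEM is nodally exact here: u(x) = x(1 - x)/2 at the nodes.
```

The composition step never touches meshes or weak forms, only the assembled matrices and right-hand sides, which is what makes the algebraic approach comparatively problem independent.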
TL;DR: A dynamic load balancing library is used on top of OpenMP pragmas in order to continuously exploit all the resources available at the node level, thus increasing the load balance and the efficiency of the parallelisation.
Abstract: This work presents a parallel numerical strategy to transport Lagrangian particles in a fluid using a dynamic load balance strategy. Both fluid and particle solvers are parallel, with two levels of parallelism. The first level is based on a substructuring technique and uses the Message Passing Interface (MPI) as the communication library; the second level consists of OpenMP pragmas for loop parallelisation at the node level. When dealing with transient flows, there exist two main alternatives to address the coupling of these solvers. On the one hand, a single-code approach consists in solving the particle equations once the fluid solution has been obtained at the end of a time step, using the same instance of the same code. On the other hand, a multi-code approach enables one to overlap the transport of the particles with the next time-step solution of the fluid equations, and thus obtain asynchronism. In this case, different codes or two instances of the same code can be used. Both approaches will be presented. In addition, a dynamic load balancing library is used on top of the OpenMP pragmas in order to continuously exploit all the resources available at the node level, thus increasing the load balance and the efficiency of the parallelisation.
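The multi-code asynchronism described above can be sketched with two concurrent workers: while the particle solver transports particles using the velocity field of step n, the fluid solver already advances to step n+1. In this illustrative sketch, threads and placeholder physics stand in for the two MPI code instances used in practice.

```python
# Sketch of the multi-code overlap: the fluid advance for step n+1 runs
# concurrently with the particle transport that uses the field of step n.
# Threads stand in for the two code instances; the real setup uses MPI.
from concurrent.futures import ThreadPoolExecutor

def fluid_step(n):
    # Placeholder fluid solve returning the velocity field of step n.
    return [0.1 * n] * 4

def transport_particles(field, particles):
    # Placeholder Lagrangian advection with the given velocity field.
    return [p + field[0] for p in particles]

particles = [0.0, 1.0]
with ThreadPoolExecutor(max_workers=2) as pool:
    field = fluid_step(0)
    for n in range(1, 4):
        fut = pool.submit(fluid_step, n)                   # next fluid step...
        particles = transport_particles(field, particles)  # ...overlaps transport
        field = fut.result()                               # synchronise
```

The single-code alternative is the same loop without `pool.submit`: the particle transport then simply runs after each fluid solve, with no overlap.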
01 Jan 1998
TL;DR: A discrete technique of the Schwarz alternating method is presented to combine the Ritz-Galerkin and finite element methods; it is well suited for solving singularity problems in parallel.
Abstract: A discrete technique of the Schwarz alternating method is presented in this last chapter, to combine the Ritz-Galerkin and finite element methods. This technique is well suited for solving singularity problems in parallel, and requires a little more computation for large overlap of subdomains. The convergence rate of the iterative procedure, which depends upon overlap of subdomains, will be studied. Also a balance strategy will be proposed to couple the iteration number with the element size used in the FEM. For the crack-infinity problem of singularity the total CPU time by the technique in this chapter is much less than that by the nonconforming combination in Chapter 12.
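The dependence of the convergence rate on the overlap can already be seen in a scalar model problem: for -u'' = 0 on (0,1) with u(0)=0, u(1)=1 (exact solution u(x)=x) and overlapping subdomains (0,b) and (a,1) with a < b, each subdomain solve is just linear interpolation between its boundary values, so the alternating iteration reduces to a recursion for the interface values. The sketch below is a minimal illustration of this effect and is unrelated to the chapter's specific Ritz-Galerkin/FEM combination.

```python
# Schwarz alternating method for -u'' = 0 on (0,1), u(0)=0, u(1)=1,
# with overlapping subdomains (0, b) and (a, 1), a < b. Exact: u(x) = x.
# Each subdomain solve interpolates linearly between its boundary values,
# so the iteration is a scalar recursion for the interface values u(a), u(b).

def schwarz_error(a, b, iters, ub=0.0):
    for _ in range(iters):
        ua = ub * a / b                              # solve on (0,b): u(0)=0
        ub = ua + (1.0 - ua) * (b - a) / (1.0 - a)   # solve on (a,1): u(1)=1
    return abs(ub - b)          # interface error; per-sweep contraction factor
                                # is (a/b) * (1-b)/(1-a), smaller for more overlap

small_overlap = schwarz_error(0.45, 0.55, 5)
large_overlap = schwarz_error(0.30, 0.70, 5)
```

The per-sweep error contraction factor (a/b)(1-b)/(1-a) shrinks as the overlap grows, which is the convergence-rate dependence on overlap studied in the chapter.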
TL;DR: This article addresses the numerical modeling of many aspects of heart function, including the interaction of the cardiac electrophysiology system with contractile muscle tissue, the sub-cellular activation–contraction mechanisms, as well as the hemodynamics inside the heart chambers.
Abstract: Mathematical modeling of the human heart and its function can expand our understanding of various cardiac diseases, which remain the most common cause of death in the developed world. Like other physiological systems, the heart can be understood as a complex multiscale system involving interacting phenomena at the molecular, cellular, tissue, and organ levels. This article addresses the numerical modeling of many aspects of heart function, including the interaction of the cardiac electrophysiology system with contractile muscle tissue, the sub-cellular activation–contraction mechanisms, as well as the hemodynamics inside the heart chambers. Resolution of each of these sub-systems requires separate mathematical analysis and specially developed numerical algorithms, which we review in detail. By using specific sub-systems as examples, we also look at systemic stability, and explain for example how physiological concepts such as microscopic force generation in cardiac muscle cells, translate to coupled systems of differential equations, and how their stability properties influence the choice of numerical coupling algorithms. Several numerical examples illustrate three fundamental challenges of developing multiphysics and multiscale numerical models for simulating heart function, namely: (i) the correct upscaling from single-cell models to the entire cardiac muscle, (ii) the proper coupling of electrophysiology and tissue mechanics to simulate electromechanical feedback, and (iii) the stable simulation of ventricular hemodynamics during rapid valve opening and closure.
TL;DR: A deep learning model based on a convolutional neural network that predicts optimal metamaterial designs and non-iteratively optimizes metamaterials for either maximizing the bulk modulus, maximizing the shear modulus, or minimizing the Poisson's ratio.
Abstract: Data-driven models are rising as an auspicious method for the geometrical design of materials and structural systems. Nevertheless, existing data-driven models customarily address the optimization of structural designs rather than metamaterial designs. Metamaterials are emerging as promising materials exhibiting tailorable and unprecedented properties for a wide spectrum of applications. In this paper, we develop a deep learning (DL) model based on a convolutional neural network (CNN) that predicts optimal metamaterial designs. The developed DL model non-iteratively optimizes metamaterials for either maximizing the bulk modulus, maximizing the shear modulus, or minimizing the Poisson's ratio (including negative values). The data are generated by solving a large set of inverse homogenization boundary value problems with geometrical features randomly drawn from a specific distribution. Such a data-driven model can play a vital role in accelerating more computationally expensive design problems, such as multiscale metamaterial systems.
TL;DR: Alya's main features are introduced, with particular focus on its solvers and its performance up to 100,000 processors in Blue Waters, the NCSA supercomputer, with selected multi-physics tests that are representative of the engineering world.
Abstract: Alya is a multi-physics simulation code developed at the Barcelona Supercomputing Center (BSC). From its inception, Alya has been designed using advanced High Performance Computing programming techniques to solve coupled problems efficiently on supercomputers. The target domain is engineering, with all its particular features: complex geometries and unstructured meshes, coupled multi-physics with exotic coupling schemes and physical models, ill-posed problems, flexibility needs for rapidly including new models, etc. Since its beginnings in 2004, Alya has scaled well on an increasing number of processors when solving single-physics problems such as fluid mechanics, solid mechanics, acoustics, etc. Over time, we have made a concerted effort to maintain and even improve scalability for multi-physics problems. This poses challenges on multiple fronts, including: numerical models, parallel implementation, physical coupling models, algorithms and solution schemes, meshing process, etc. In this paper, we introduce Alya's main features and focus particularly on its solvers. We present Alya's performance up to 100,000 processors in Blue Waters, the NCSA supercomputer, with selected multi-physics tests that are representative of the engineering world. The tests are incompressible flow in a human respiratory system, a low-Mach combustion problem in a kiln furnace, and the coupled electro-mechanical contraction of the heart. We show scalability plots for all cases and discuss all aspects of such simulations, including solver convergence.