
Showing papers in "Advanced Modeling and Simulation in Engineering Sciences in 2021"


Journal ArticleDOI
TL;DR: In this paper, a physics-based and data-supported phenomenological microstructure model for Ti-6Al-4V is proposed that is suitable for the part-scale simulation of laser powder bed fusion processes.
Abstract: The elasto-plastic material behavior, material strength and failure modes of metals fabricated by additive manufacturing technologies are significantly determined by the underlying process-specific microstructure evolution. In this work a novel physics-based and data-supported phenomenological microstructure model for Ti-6Al-4V is proposed that is suitable for the part-scale simulation of laser powder bed fusion processes. The model predicts spatially homogenized phase fractions of the most relevant microstructural species, namely the stable β-phase, the stable α_s-phase as well as the metastable martensite α_m-phase, in a physically consistent manner. In particular, the modeled microstructure evolution, in the form of diffusion-based and non-diffusional transformations, is a pure consequence of energy and mobility competitions among the different species, without the need for heuristic transformation criteria as often applied in existing models. The mathematically consistent formulation of the evolution equations in rate form renders the model suitable for the practically relevant scenarios of temperature- or time-dependent diffusion coefficients, arbitrary temperature profiles, and multiple coexisting phases. Due to its physically motivated foundation, the proposed model requires only a minimal number of free parameters, which are determined in an inverse identification process considering a broad experimental data basis in the form of time-temperature transformation diagrams. Subsequently, the predictive ability of the model is demonstrated by means of continuous cooling transformation diagrams, showing that experimentally observed characteristics such as critical cooling rates emerge naturally from the proposed microstructure model, instead of being enforced as heuristic transformation criteria.
Eventually, the proposed model is exploited to predict the microstructure evolution for a realistic selective laser melting application scenario and for the cooling/quenching process of a Ti-6Al-4V cube of practically relevant size. Numerical results confirm experimental observations that martensite is the dominant microstructural species in regimes of high cooling rates, e.g., due to highly localized heat sources or in near-surface domains, while a proper manipulation of the temperature field, e.g., by preheating the base plate in selective laser melting, can suppress the formation of this metastable phase.
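The abstract's notion of evolution equations in rate form can be illustrated with a minimal sketch: a single phase fraction driven toward a temperature-dependent equilibrium with an Arrhenius-like mobility, integrated with forward Euler. All functional forms and parameter values below are invented for illustration; they are not the paper's calibrated Ti-6Al-4V relations.

```python
import numpy as np

def simulate_phase_fraction(temps, times, k_ref=0.5, T_ref=900.0, activation=5000.0):
    """Integrate a generic diffusional phase-fraction evolution in rate form,
    dX/dt = k(T) * (X_eq(T) - X), with forward Euler.
    The mobility k(T) and equilibrium fraction X_eq(T) are placeholders."""
    X = 0.0
    fractions = [X]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        T = temps[i]
        k = k_ref * np.exp(-activation * abs(1.0 / T - 1.0 / T_ref))  # Arrhenius-like mobility
        X_eq = 1.0 / (1.0 + np.exp((T - T_ref) / 50.0))               # smooth equilibrium curve
        X += dt * k * (X_eq - X)                                       # rate-form update
        fractions.append(X)
    return np.array(fractions)

# Slow cooling from 1200 K to 400 K over 200 s
times = np.linspace(0.0, 200.0, 2001)
temps = np.linspace(1200.0, 400.0, 2001)
X = simulate_phase_fraction(temps, times)
```

With this structure, the transformed fraction grows only where the mobility and the driving force overlap, so a sufficiently fast quench leaves the transformation incomplete, which is the qualitative mechanism behind critical cooling rates.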

11 citations


Journal ArticleDOI
TL;DR: In this article, a simple numerical method is proposed to solve a simplified model of metabolic pathways in astrocytes, which can deal with arbitrarily complex cell morphologies and enables rapid and simple modification of the model equations by users without deep knowledge of the numerical methods involved.
Abstract: A thorough understanding of brain metabolism is essential to tackle neurodegenerative diseases. Astrocytes are glial cells which play an important metabolic role by supplying neurons with energy. In addition, astrocytes provide scaffolding and homeostatic functions to neighboring neurons and contribute to the blood–brain barrier. Recent investigations indicate that the complex morphology of astrocytes impacts upon their function, and in particular the efficiency with which these cells metabolize nutrients and provide neurons with energy, but a systematic understanding is still elusive. Modelling and simulation represent an effective framework to address this challenge and to deepen our understanding of brain energy metabolism. This requires solving a set of metabolic partial differential equations on complex domains, which remains a challenge. In this paper, we propose, test and verify a simple numerical method to solve a simplified model of metabolic pathways in astrocytes. The method can deal with arbitrarily complex cell morphologies and enables rapid and simple modification of the model equations by users without deep knowledge of the numerical methods involved. The results obtained with the new method (CutFEM) are as accurate as the finite element method (FEM), whilst CutFEM disentangles the cell morphology from its discretisation, enabling us to deal with arbitrarily complex morphologies in two and three dimensions.

8 citations


Journal ArticleDOI
TL;DR: In this paper, a boundary element method (BEM) algorithm was developed to model and simulate the nonlinear thermal stresses problems in micropolar functionally graded anisotropic composites with temperature-dependent properties.
Abstract: The main aim of this article is to develop a new boundary element method (BEM) algorithm to model and simulate the nonlinear thermal stress problems in micropolar functionally graded anisotropic (FGA) composites with temperature-dependent properties. Some interior points are chosen to treat the nonlinear terms and domain integrals. An integral formulation based on the Kirchhoff transformation is first used to simplify the transient heat conduction governing equation. The residual nonlinear terms are then handled within the current formulation. The domain integrals can be effectively treated by applying the Cartesian transformation method (CTM). In the proposed BEM technique, the nonlinear temperature is computed on the boundary and at selected interior points. The nonlinear displacements can then be calculated at each time step. With the calculated temperature and displacement distributions, we can obtain the values of the nonlinear thermal stresses. The efficiency of our proposed methodology has been improved by using the communication-avoiding Arnoldi (CA-Arnoldi) preconditioner for solving the resulting linear systems arising from the BEM, to reduce the number of iterations and the computation time. The numerical outcomes establish the influence of temperature-dependent properties on the nonlinear temperature distribution, and investigate the effect of the functionally graded parameter on the nonlinear displacements and thermal stresses through the micropolar FGA composites with temperature-dependent properties. These numerical outcomes also confirm the validity, precision and effectiveness of the proposed modeling and simulation methodology.
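The Kirchhoff transformation mentioned in the abstract replaces the temperature by θ(T) = ∫_{T0}^{T} k(τ) dτ, which turns the nonlinear conduction term div(k(T) grad T) into a plain Laplacian of θ. A minimal sketch for an assumed linear conductivity k(T) = k0(1 + βT); the values are illustrative and not taken from the paper:

```python
import numpy as np

def kirchhoff_transform(T, k0=50.0, beta=1e-3, T0=0.0):
    """Kirchhoff variable theta(T) = integral of k(tau) from T0 to T
    for the illustrative conductivity k(T) = k0 * (1 + beta*T)."""
    return k0 * ((T - T0) + 0.5 * beta * (T**2 - T0**2))

def inverse_kirchhoff(theta, k0=50.0, beta=1e-3, T0=0.0):
    """Invert theta back to temperature (positive root of the quadratic
    0.5*beta*T^2 + T - c = 0 with c = theta/k0 + T0 + 0.5*beta*T0^2)."""
    c = theta / k0 + T0 + 0.5 * beta * T0**2
    return (-1.0 + np.sqrt(1.0 + 2.0 * beta * c)) / beta

T = np.linspace(20.0, 500.0, 5)
theta = kirchhoff_transform(T)
T_back = inverse_kirchhoff(theta)
```

One solves the (linear) equation for θ and then recovers the physical temperature through the inverse map, which is exact here because k(T) is linear.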

8 citations


Journal ArticleDOI
TL;DR: In this paper, a semi-analytical modeling strategy is proposed, that is based on an initial finite element computation, followed by the use of analytical electromagnetic equations to solve the coupled electromagnetic-thermal problem.
Abstract: The numerical simulation of the induction heating process can be computationally expensive, especially if ferromagnetic materials are studied. There are several analytical models that describe the electromagnetic phenomena. However, these are very limited by the geometry of the coil and the workpiece. Thus, the usual method for computing more complex systems is to use the finite element method to solve the set of equations in the multiphysical system, but this easily becomes very time consuming. This paper deals with the problem of solving a coupled electromagnetic-thermal problem with higher computational efficiency. For this purpose, a semi-analytical modeling strategy is proposed that is based on an initial finite element computation, followed by the use of analytical electromagnetic equations to solve the coupled electromagnetic-thermal problem. The usage of the simplified model is restricted to simple geometrical features such as flat or curved surfaces with a large curvature-to-skin-depth ratio. Numerical and experimental validation of the model shows an average error between 0.9% and 4.1% in the prediction of the temperature evolution, reaching a greater accuracy than other commercial software packages analyzed. A 3D case of a double-row large-size ball bearing is also presented, fully validating the proposed approach in terms of computational time and accuracy for complex industrial cases.
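The curvature-to-skin-depth ratio that restricts the simplified model involves the classical skin depth, δ = sqrt(ρ / (π f μ)). A short sketch with illustrative material data (the values below are generic for a carbon steel, not taken from the paper):

```python
import math

def skin_depth(resistivity, rel_permeability, frequency):
    """Electromagnetic skin depth delta = sqrt(rho / (pi * f * mu)),
    the length scale against which surface curvature is compared."""
    mu = 4e-7 * math.pi * rel_permeability  # absolute permeability [H/m]
    return math.sqrt(resistivity / (math.pi * frequency * mu))

# Illustrative values: rho = 2e-7 ohm*m, mu_r = 100, f = 10 kHz
delta = skin_depth(resistivity=2e-7, rel_permeability=100.0, frequency=10e3)
```

At induction-heating frequencies this yields sub-millimetre depths, so a workpiece surface whose radius of curvature is many skin depths can reasonably be treated as locally flat, which is the regime the semi-analytical model targets.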

7 citations


Journal ArticleDOI
TL;DR: In this paper, a POD-based surrogate model with Radial Basis Function interpolation is used to model high-dimensional FE data fields and the effect of (pre)processing methods on the accuracy of the result field is systematically investigated.
Abstract: Surrogate modelling is a powerful tool to replace computationally expensive nonlinear numerical simulations, with fast representations thereof, for inverse analysis, model-based control or optimization. For some problems, it is required that the surrogate model describes a complete output field. To construct such surrogate models, proper orthogonal decomposition (POD) can be used to reduce the dimensionality of the output data. The accuracy of the surrogate models strongly depends on the (pre)processing actions that are used to prepare the data for the dimensionality reduction. In this work, POD-based surrogate models with Radial Basis Function interpolation are used to model high-dimensional FE data fields. The effect of (pre)processing methods on the accuracy of the result field is systematically investigated. Different existing methods for surrogate model construction are compared with a novel method. Special attention is given to data fields consisting of several physical meanings, e.g. displacement, strain and stress. A distinction is made between the errors due to truncation and due to interpolation of the data. It is found that scaling the data per physical part substantially increases the accuracy of the surrogate model.
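The POD-plus-RBF construction described above can be sketched on synthetic data: snapshots are compressed with a truncated SVD and the reduced coordinates are interpolated over the parameter space. The data and truncation rank below are invented for illustration, and scipy's RBFInterpolator stands in for whichever RBF implementation the authors used.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Synthetic stand-in for FE snapshot data: 50 parameter points, each
# producing an output "field" of dimension 200 (values invented here).
params = rng.uniform(0.0, 1.0, size=(50, 2))
x = np.linspace(0.0, 1.0, 200)
snapshots = np.array([np.sin(2 * np.pi * (x + p[0])) * (1.0 + p[1]) for p in params])

# POD: truncated SVD of the centered snapshot matrix
mean = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 5                                   # truncation rank
basis = Vt[:r]                          # POD modes, shape (r, n_dof)
coeffs = (snapshots - mean) @ basis.T   # reduced coordinates, shape (n_samples, r)

# RBF interpolation of the reduced coordinates over the parameter space
surrogate = RBFInterpolator(params, coeffs)

# Evaluate at a new parameter point and lift back to the full field
p_new = np.array([[0.3, 0.7]])
field_pred = surrogate(p_new) @ basis + mean
field_true = np.sin(2 * np.pi * (x + 0.3)) * 1.7
```

The paper's preprocessing finding would enter just before the SVD step: when the snapshot vector stacks fields of different physical meaning (displacement, strain, stress), each part is scaled separately so that no single field dominates the decomposition.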

7 citations


Journal ArticleDOI
TL;DR: In this paper, a NURBS-based geometry representation is combined with a fully separated representation for allying efficiency and generality by addressing complex geometries, and some numerical examples are considered to prove the potential of the proposed methodology.
Abstract: Space separation within the Proper Generalized Decomposition—PGD—rationale allows solving high dimensional problems as a sequence of lower dimensional ones. In our former works, different geometrical transformations were proposed for addressing complex shapes and spatially non-separable domains. Efficient implementation of separated representations needs expressing the domain as a product of characteristic functions involving the different space coordinates. In the case of complex shapes, more sophisticated geometrical transformations are needed to map the complex physical domain into a regular one where computations are performed. This paper aims at proposing a very efficient route for accomplishing such space separation. A NURBS-based geometry representation, usual in computer aided design—CAD—, is retained and combined with a fully separated representation for allying efficiency (ensured by the fully separated representations) and generality (by addressing complex geometries). Some numerical examples are considered to prove the potential of the proposed methodology.

5 citations


Journal ArticleDOI
TL;DR: In this article, a multiscale model for real-time simulation of terrain dynamics is explored, which combines the description of soil as a continuous solid, as distinct particles and as rigid multibodies.
Abstract: A multiscale model for real-time simulation of terrain dynamics is explored. To represent the dynamics on different scales the model combines the description of soil as a continuous solid, as distinct particles and as rigid multibodies. The models are dynamically coupled to each other and to the earthmoving equipment. Agitated soil is represented by a hybrid of contacting particles and continuum solid, with the moving equipment and resting soil as geometric boundaries. Each zone of active soil is aggregated into distinct bodies, with the proper mass, momentum and frictional-cohesive properties, which constrain the equipment's multibody dynamics. The particle model parameters are pre-calibrated to the bulk mechanical parameters for a wide range of different soils. The result is a computationally efficient model for earthmoving operations that resolves the motion of the soil, using a fast iterative solver, and provides realistic forces and dynamics for the equipment, using a direct solver for high numerical precision. Numerical simulations of excavation and bulldozing operations are performed to test the model and measure the computational performance. Reference data is produced using coupled discrete element and multibody dynamics simulations at relatively high resolution. The digging resistance and soil displacements of the real-time multiscale model agree with the reference model to within 10–25%, and it runs more than three orders of magnitude faster.

5 citations


Journal ArticleDOI
TL;DR: In this paper, an energy-consistent formulation for the classical 1D model of blood flow in an arterial segment is presented, which is shown to be suitable for coupling with a lumped (0D) heart model that incorporates a reduced formulation of the actin-myosin interaction.
Abstract: In this work we provide a novel energy-consistent formulation for the classical 1D formulation of blood flow in an arterial segment. The resulting reformulation is shown to be suitable for the coupling with a lumped (0D) model of the heart that incorporates a reduced formulation of the actin-myosin interaction. Since the coupling is consistent with energy balances, we provide a complete heart-circulation model that is compatible with thermodynamics, hence numerically stable and physiologically informative. These latter two properties are verified by numerical experiments.

4 citations


Journal ArticleDOI
TL;DR: The presented work proposes to couple Bayesian inference with attractive and advanced numerical techniques so that real-time and sequential assimilation can be envisioned, and the synthesis of control laws in a stochastic context is investigated within the DDDAS framework.
Abstract: This research work deals with the implementation of so-called Dynamic Data-Driven Application Systems (DDDAS) in structural mechanics activities. It aims at designing a real-time numerical feedback loop between a physical system of interest and its numerical simulator, so that (i) the simulation model is dynamically updated from sequential and in situ observations on the system; (ii) the system is appropriately driven and controlled in service using predictions given by the simulator. In order to build such a feedback loop and take various uncertainties into account, a suitable stochastic framework is considered for both data assimilation and control, with the propagation of these uncertainties from model updating up to command synthesis by using a specific and attractive sampling technique. Furthermore, reduced order modeling based on the Proper Generalized Decomposition (PGD) technique is used all along the process in order to reach the real-time constraint. This permits fast multi-query evaluations and predictions, by means of the parametrized physics-based model, in the online phase of the feedback loop. The control of a fusion welding process under various scenarios is considered to illustrate the proposed methodology and to assess the performance of the associated numerical architecture.
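The sequential assimilation step at the core of such a feedback loop can be reduced to its simplest instance: a scalar Gaussian (Kalman-type) Bayesian update repeated as observations stream in. This is only a toy illustration of the updating principle, not the specific sampling technique or PGD machinery used in the paper.

```python
import numpy as np

def kalman_update(m_prior, v_prior, y_obs, v_noise):
    """One scalar Bayesian assimilation step: fuse a Gaussian prior
    N(m_prior, v_prior) with an observation y ~ N(state, v_noise)."""
    gain = v_prior / (v_prior + v_noise)
    m_post = m_prior + gain * (y_obs - m_prior)
    v_post = (1.0 - gain) * v_prior
    return m_post, v_post

# Sequentially assimilate noisy observations of a constant parameter
# whose true value is 2.0 (synthetic data, invented for illustration)
rng = np.random.default_rng(1)
m, v = 0.0, 10.0  # vague prior
for _ in range(200):
    y = 2.0 + rng.normal(0.0, 0.5)
    m, v = kalman_update(m, v, y, v_noise=0.25)
```

Each in-situ observation tightens the posterior, and the updated state then feeds the model-based prediction used for control, which is the loop the abstract describes at full scale.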

3 citations


Journal ArticleDOI
TL;DR: In this article, a robust digital pipeline from CT images to the simulation of contact between multiple bodies is presented, which relies on a recently developed immersed finite element algorithm that is capable of simulating unilateral contact between solids without meshing.
Abstract: This paper presents a robust digital pipeline from CT images to the simulation of contact between multiple bodies. The proposed strategy relies on a recently developed immersed finite element algorithm that is capable of simulating unilateral contact between solids without meshing (Claus and Kerfriden in Int J Numer Methods Eng 113(6):938–966, 2018). It was shown that such an approach reduces the difficulties associated with the digital flow of information from analytically defined geometries to mechanical simulations. We now propose to extend our approach to include geometries, which are not defined mathematically but instead are obtained from images, and encoded in 3D arrays of voxels. This paper introduces two novel elements. Firstly, we reformulate our contact algorithm into an extension of an augmented Lagrangian CutFEM algorithm. Secondly, we develop an efficient algorithm to convert the surface data generated by standard segmentation tools used in medical imaging into level-set functions. These two elements give rise to a robust digital pipeline with minimum user intervention. We demonstrate the capabilities of our algorithm on a hip joint geometry with contact between the femur and the hip bone.
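The second element above, converting segmentation output into level-set functions, can be approximated in its simplest form by a signed distance transform of the binary voxel mask. This sketch uses scipy's Euclidean distance transform and a synthetic sphere; it is a simplification of, not a reproduction of, the authors' surface-based algorithm.

```python
import numpy as np
from scipy import ndimage

def mask_to_level_set(mask, voxel_size=1.0):
    """Convert a binary voxel mask (as produced by segmentation tools)
    into a signed distance function: negative inside, positive outside,
    with the zero level set approximating the segmented surface."""
    dist_outside = ndimage.distance_transform_edt(~mask, sampling=voxel_size)
    dist_inside = ndimage.distance_transform_edt(mask, sampling=voxel_size)
    return dist_outside - dist_inside

# Synthetic "segmentation": a solid sphere of radius 10 voxels in a 32^3 grid
z, y, x = np.mgrid[:32, :32, :32]
mask = (x - 16)**2 + (y - 16)**2 + (z - 16)**2 <= 10**2
phi = mask_to_level_set(mask)
```

The resulting field phi can then be used directly by an immersed/CutFEM discretization, with the contact surface recovered as the zero isosurface.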

3 citations


Journal ArticleDOI
TL;DR: In this article, a time-separated representation within the standard Proper Generalized Decomposition (PGD) is proposed, where the time coordinate is transformed into a multi-dimensional time through new separated coordinates, each representing one scale.
Abstract: Solutions of partial differential equations can exhibit multiple time scales. Standard discretization techniques are constrained to capture the finest scale to accurately predict the response of the system. In this paper, we provide an alternative route to circumvent prohibitive meshes arising from the necessity of capturing fine-scale behaviors. The proposed methodology is based on a time-separated representation within the standard Proper Generalized Decomposition, where the time coordinate is transformed into a multi-dimensional time through new separated coordinates, each representing one scale, while continuity is ensured in the scale coupling. For instance, when considering two different time scales, the governing Partial Differential Equation is recast as a nonlinear system that iterates between the so-called microtime and macrotime, so that the time coordinate can be viewed as a 2D time. The macroscale effects are taken into account by means of a finite element-based macro-discretization, whereas the microscale effects are handled with unidimensional parent spaces that are replicated throughout the time domain. The resulting separated representation allows for a very fine time discretization without impacting the computational efficiency. The proposed formulation is explored and numerically verified on thermal and elastodynamic problems.
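The premise of the multi-dimensional time representation, that a two-scale signal becomes low-rank when the time axis is folded into (macrotime, microtime) coordinates, can be checked numerically: reshape a 1D signal into a 2D array and inspect its singular values. The signal below is invented for illustration; for this choice (a slow envelope times a fast oscillation commensurate with the folding) the folded array is exactly rank two.

```python
import numpy as np

# A signal with two well-separated time scales: slow decay times fast oscillation
n_macro, n_micro = 100, 64            # macro steps x micro steps per macro step
t = np.linspace(0.0, 1.0, n_macro * n_micro, endpoint=False)
u = np.exp(-3.0 * t) * np.sin(2 * np.pi * 640 * t)   # 640 fast periods overall

# Fold the 1D time axis into a 2D (macrotime, microtime) array
U2d = u.reshape(n_macro, n_micro)

# A separated representation u(t) ~ sum_i a_i(t_macro) * b_i(t_micro)
# corresponds to a low-rank factorization of this array
s = np.linalg.svd(U2d, compute_uv=False)
rank2_error = np.sqrt((s[2:]**2).sum() / (s**2).sum())  # relative error of a two-mode approximation
```

A standard 1D discretization would need all 6400 time points, whereas the separated form only stores a handful of macro and micro functions, which is the source of the efficiency claimed in the abstract.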

Journal ArticleDOI
TL;DR: In this article, the authors proposed an approach for fluid-solid and contact interaction problems including thermo-mechanical coupling and reversible phase transitions, where all fields are spatially discretized using the method of smoothed particle hydrodynamics (SPH).
Abstract: The present work proposes an approach for fluid–solid and contact interaction problems including thermo-mechanical coupling and reversible phase transitions. The solid field is assumed to consist of several arbitrarily-shaped, undeformable but mobile rigid bodies, that are evolved in time individually and allowed to get into mechanical contact with each other. The fluid field generally consists of multiple liquid or gas phases. All fields are spatially discretized using the method of smoothed particle hydrodynamics (SPH). This approach is especially suitable in the context of continually changing interface topologies and dynamic phase transitions without the need for additional methodological and computational effort for interface tracking as compared to mesh- or grid-based methods. Proposing a concept for the parallelization of the computational framework, in particular concerning a computationally efficient evaluation of rigid body motion, is an essential part of this work. Finally, the accuracy and robustness of the proposed framework is demonstrated by several numerical examples in two and three dimensions, involving multiple rigid bodies, two-phase flow, and reversible phase transitions, with a focus on two potential application scenarios in the fields of engineering and biomechanics: powder bed fusion additive manufacturing (PBFAM) and disintegration of food boluses in the human stomach. The efficiency of the parallel computational framework is demonstrated by a strong scaling analysis.

Journal ArticleDOI
TL;DR: In this article, a weakly-invasive reformulation of the LATIN-PGD method is proposed, which is intended to be directly embedded into Simcenter Samcef (SSA) finite element software.
Abstract: Digital Twins, which tend to intervene over the entire life cycle of products, from the early design phase to predictive maintenance through optimization processes, are increasingly emerging as an essential component in the future of industries. To reduce the computational time, reduced-order modeling (ROM) methods can be useful. However, the spread of ROM methods at an industrial level is currently hampered by the difficulty of introducing them into commercial finite element software, due to the strong intrusiveness of the associated algorithms, which prevents obtaining robust and reliable tools all integrated in a certified product. This work tries to circumvent this issue by introducing a weakly-invasive reformulation of the LATIN-PGD method which is intended to be directly embedded into the Simcenter Samcef™ finite element software. The originality of this approach lies in its remarkably general formulation, allowing the PGD method to deal not only with a particular application but with all facilities already included in such software—any non-linearities, any element types, any boundary conditions...—and thus providing a new high-performance all-inclusive non-linear solver.

Journal ArticleDOI
TL;DR: In this article, the authors focus on quantitatively comparing the cardiac model with various types of data that can be extracted from tagged-MRI after an initial image processing step, such as 3D displacement fields, deforming tag planes or grids, or apparent 2D displacements. This quantitative comparison is then used to feed a sequential data assimilation procedure.
Abstract: Tagged Magnetic Resonance images (tagged-MRI) are generally considered to be the gold standard of medical imaging in cardiology. By imaging spatially-modulated magnetizations of the deforming tissue, indeed, this modality enables an assessment of intra-myocardial deformations over the heart cycle. The objective of the present work is to incorporate the most valuable information contained in tagged-MRI in a data assimilation framework, in order to perform joint state-parameter estimation for a complete biomechanical model of the heart. This type of estimation is the second major step, after initial anatomical personalization, for obtaining a genuinely patient-specific model that integrates the individual characteristics of the patient, an essential prerequisite for benefitting from the model predictive capabilities. Here, we focus our attention on proposing adequate means of quantitatively comparing the cardiac model with various types of data that can be extracted from tagged-MRI after an initial image processing step, namely, 3D displacement fields, deforming tag planes or grids, or apparent 2D displacements. This quantitative comparison—called discrepancy measure—is then used to feed a sequential data assimilation procedure. In the state estimation stage of this procedure, we also propose a new algorithm based on the prediction–correction paradigm, which provides increased flexibility and effectiveness in the solution process. The complete estimation chain is eventually assessed with synthetic data, produced by running a realistic model simulation representing an infarcted heart characterized by increased stiffness and reduced contractility in a given region of the myocardium. From this simulation we extract the 3D displacements, tag planes and grids, and apparent 2D displacements, and we assess the estimation with each corresponding discrepancy measure. We demonstrate that—via regional estimation of the above parameters—the data assimilation procedure allows us to quantitatively estimate the biophysical parameters with good accuracy, thus simultaneously providing the location of the infarct and characterizing its seriousness. This shows great potential for combining a biomechanical heart model with tagged-MRI in order to extract valuable new indices in clinical diagnosis.

Journal ArticleDOI
TL;DR: In this paper, a methodology based on Lie analysis is proposed to investigate the mechanical behavior of materials exhibiting experimental master curves, based on the idea that the mechanical response of materials is associated with hidden symmetries reflected in the form of the energy functional and the dissipation potential leading to constitutive laws written in the framework of the thermodynamics of irreversible processes.
Abstract: A methodology based on Lie analysis is proposed to investigate the mechanical behavior of materials exhibiting experimental master curves. It is based on the idea that the mechanical response of materials is associated with hidden symmetries reflected in the form of the energy functional and the dissipation potential, leading to constitutive laws written in the framework of the thermodynamics of irreversible processes. In constitutive modeling, symmetry analysis lets one formulate the response of a material in terms of so-called master curves, and construct rheological models based on a limited number of measurements. The application of symmetry methods leads to model reduction in a double sense: in reducing large amounts of measurement data to a form exploitable for the construction of constitutive models, and in exploiting equivalence transformations extending point symmetries to efficiently reduce the number of significant parameters, and thus the computational cost of solving boundary value problems (BVPs). The symmetry framework and related conservation law analysis provide invariance properties of the constitutive models, allowing one to predict the influence of a variation of the model parameters on the material response or on the solution of BVPs posed over spatial domains. The first part of the paper is devoted to the presentation of the general methodology proposed in this contribution. Examples of the construction of rheological models based on experimental data are given for setting up a reduced model of the uniaxial creep and rupture behaviour of a Chrome-Molybdenum alloy (9Cr1Mo) at different temperatures and stress levels. Constitutive equations for creep and rupture master responses are identified for this alloy, and validated based on experimental data. Equivalence transformations are exemplified in the context of parameter reduction in fully nonlinear anisotropic fiber-reinforced elastic solids.

Journal ArticleDOI
TL;DR: In this article, the behavior of reinforced concrete structural members at high temperatures in case of fire was evaluated through the testing of 27 concrete cylinders, starting at room temperature and increasing up to 260 °C. The concrete material behavior was implemented into the ABAQUS software, and a finite element simulation of reinforced concrete beams exposed to actual fire conditions was conducted.
Abstract: The danger of fire is present always and everywhere. The imminent danger depends upon the actual type and length of fire exposure. Reinforced concrete structural members are load-bearing components in built-up structures and are therefore at high risk, since the entire structure might potentially collapse upon their failure. Thus, it is imperative to comprehend the behavior of reinforced concrete members at high temperatures in case of fire. In this study, the mechanical properties of concrete exposed to high temperatures were experimentally determined through the testing of 27 concrete cylinders, starting at room temperature and increasing up to 260 °C. The concrete material behavior was implemented into the ABAQUS software and finite element simulations of reinforced concrete beams exposed to actual fire conditions were conducted. The finite element models compared favorably with the available experimental results, thus providing a valuable tool that allows for the prediction of failure in the case of a fire event.

Journal ArticleDOI
TL;DR: In this article, the authors propose a framework to automatically construct 3D quadrilateral meshes of complicated geometry and arbitrary topology adapted for parametric studies using a triangulation of the solid 3D model's boundary provided from B-Rep CAD models or scanned geometry.
Abstract: This paper presents an effective framework to automatically construct 3D quadrilateral meshes of complicated geometry and arbitrary topology adapted for parametric studies. The input is a triangulation of the solid 3D model's boundary provided from B-Rep CAD models or scanned geometry. The triangulated mesh is decomposed into a set of cuboids in two steps: pants decomposition and cuboid decomposition. This workflow includes an integration of a geometry-feature-aware pants-to-cuboids decomposition algorithm. This set of cuboids perfectly replicates the input surface topology. Using aligned global parameterization, patches are re-positioned on the surface in a way to achieve low overall distortion, and alignment to principal curvature directions and sharp features. Based on the cuboid decomposition and global parameterization, a 3D quadrilateral mesh is extracted. For different parametric instances with the same topology but different geometries, the MEG-IsoQuad method allows one to have the same representation: isotopological meshes holding the same connectivity, where each point on a mesh has an analogous one in all other meshes. Faithful 3D numerical charts of parametric geometries are then built using standard data-based techniques. Geometries are then evaluated in real-time. The efficiency and the robustness of the proposed approach are illustrated through a few parametric examples.

Journal ArticleDOI
TL;DR: In this paper, a coupled thermomechanical finite element model tailored to the part-scale simulation of metal additive manufacturing processes such as selective laser melting is proposed, where a consistent constitutive law is derived on the basis of a Voigt-type spatial homogenization procedure across the relevant phases, powder, melt and solid.
Abstract: This article proposes a coupled thermomechanical finite element model tailored to the part-scale simulation of metal additive manufacturing processes such as selective laser melting. A first focus lies on the derivation of a consistent constitutive law on the basis of a Voigt-type spatial homogenization procedure across the relevant phases: powder, melt and solid. The proposed constitutive law accounts for the irreversibility of phase change and consistently represents thermally induced residual stresses. In particular, the incorporation of a reference strain term, formulated in rate form, allows for consistently enforcing a stress-free configuration for newly solidifying material at melt temperature. Application to elementary test cases demonstrates the validity of the proposed constitutive law and allows for a comparison with analytical and reference solutions. Moreover, these elementary solidification scenarios give detailed insights and foster understanding of basic mechanisms of residual stress generation in melting and solidification problems with localized, moving heat sources. As a second methodological aspect, dual mortar meshtying strategies are proposed for the coupling of successively applied powder layers. This approach allows for very flexible mesh generation for complex geometries. As compared to collocation-type coupling schemes, e.g., based on hanging nodes, these mortar methods enforce the coupling conditions between non-matching meshes in an L2-optimal manner. The combination of the proposed constitutive law and mortar meshtying approach is validated on realistic three-dimensional examples, representing a first step towards part-scale predictions.
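The Voigt-type homogenization named in the abstract amounts, in its simplest scalar form, to a phase-fraction-weighted arithmetic average of the phase properties. A minimal sketch with invented property values (the fractions and moduli below are illustrative, not the paper's material data):

```python
import numpy as np

def voigt_average(fractions, properties):
    """Voigt-type spatial homogenization: the effective property is the
    phase-fraction-weighted arithmetic mean over powder, melt and solid."""
    fractions = np.asarray(fractions, dtype=float)
    properties = np.asarray(properties, dtype=float)
    assert np.isclose(fractions.sum(), 1.0), "phase fractions must sum to one"
    return float(fractions @ properties)

# Illustrative Young's moduli [GPa] for powder, melt, solid (values invented)
E_eff = voigt_average([0.2, 0.1, 0.7], [1.0, 0.01, 110.0])
```

In the actual model the same weighting idea is applied consistently within the constitutive law as the local phase fractions evolve with temperature.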

Journal ArticleDOI
TL;DR: In this paper, the authors apply the goal-oriented error estimator to shell elements and propagate the resulting error field into the continuum sub-model; the methodology is tested on a simplified aluminium beam section with four different local feature designs.
Abstract: A major challenge with modern aircraft design is the occurrence of structural features of varied length scales. Structural stiffness can be accurately represented using homogenisation; however, aspects such as the onset of failure may require information on a more refined length scale for both metallic and composite components. This work considers the errors encountered in the coarse global models due to the mesh size and how these are propagated into detailed local sub-models. The error is calculated by a goal-oriented error estimator, formulated by solving dual problems and using Zienkiewicz-Zhu smooth field recovery. Specifically, the novel concept of this work is applying the goal-oriented error estimator to shell elements and propagating this error field into the continuum sub-model. This methodology is tested on a simplified aluminium beam section with four different local feature designs, thereby illustrating the sensitivity to various local features with a common global setting. The simulations show that when the feature models only contained holes in the flange section, there was little sensitivity of the von Mises stress to the design modifications. However, when holes were added to the webbing section, there were large stress concentrations that predicted yielding. Despite this increase in nominal stress, the maximum error does not significantly change; however, the error field does change near the holes. A Monte Carlo simulation utilising marginal distributions is performed to show the robustness of the multi-scale analysis to uncertainty in the global error estimation, as would be expected in experimental measurements. This shows a trade-off, when investigating the response variance, between Saint-Venant's principle applied to the loading and the stress concentrations on the feature model.
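The Zienkiewicz-Zhu recovery step used by the estimator can be sketched in one dimension: the piecewise-constant element gradients of a linear finite element field are averaged to the nodes, and the mismatch between raw and recovered gradients drives an element-wise error indicator. A simplified sketch; the paper works with shell elements and dual problems, whereas this only illustrates the recovery idea:

```python
import numpy as np

def zz_error_indicator(x, u):
    """1D sketch of Zienkiewicz-Zhu recovery: average the element gradients
    of a linear FE field to the nodes, then measure the mismatch between the
    raw and the recovered gradient on each element."""
    grads = np.diff(u) / np.diff(x)                   # one gradient per element
    recovered = np.zeros_like(np.asarray(x, dtype=float))
    recovered[1:-1] = 0.5 * (grads[:-1] + grads[1:])  # simple nodal averaging
    recovered[0], recovered[-1] = grads[0], grads[-1]
    h = np.diff(x)
    mid = 0.5 * (recovered[:-1] + recovered[1:])      # recovered gradient at element midpoints
    return np.sqrt(h) * np.abs(mid - grads)           # element-wise indicator
```

For a linear field the indicator vanishes, while for a field with curvature it flags the elements where the finite element gradient is least accurate.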

Journal ArticleDOI
TL;DR: In this paper, the authors propose a new method for structuring graph data, which is commonly used for the physically based simulation of soft tissue during surgery, and of deformable objects in general.
Abstract: Simulators for virtual surgery training need to perform complex calculations very quickly to provide realistic haptic and visual interactions with a user. The complexity is further increased by the addition of cuts to virtual organs, such as would be needed for performing tumor resection. A common method for achieving large performance improvements is to make use of the graphics hardware (GPU) available on most general-use computers. Programming GPUs requires data structures that are more rigid than those used on conventional processors (CPUs), making that data more difficult to update. We propose a new method for structuring graph data, which is commonly used for the physically based simulation of soft tissue during surgery, and of deformable objects in general. Our method aligns all nodes of the graph in memory, independently of the number of edges they contain, allowing for local modifications that do not affect the rest of the structure. Our method also groups memory transfers so as to avoid updating the entire graph every time a small cut is introduced in a simulated organ. We implemented our data structure as part of a simulator based on a meshless method. Our tests show that the new GPU implementation, making use of the new graph structure, achieves a tenfold improvement in computation times compared to the previous CPU implementation. The grouping of data transfers into batches allows for an 80–90% reduction in the amount of data transferred for each graph update, but accounts for only a small improvement in performance. The data structure itself is simple to implement and allows simulating increasingly complex models that can be cut at interactive rates.
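The layout described above can be sketched with a fixed per-node neighbour capacity and a padding value of -1, so that cutting an edge touches only the two endpoint slots and batched transfers only need to copy the modified rows. The class, its names and the capacity assumption are illustrative, not the authors' implementation:

```python
import numpy as np

class AlignedGraph:
    """Sketch of a GPU-friendly graph layout: every node owns a fixed-size
    slot of neighbour indices, padded with -1, so local edits (e.g. cuts)
    never shift the memory of other nodes."""

    def __init__(self, n_nodes, max_neighbors):
        self.adj = np.full((n_nodes, max_neighbors), -1, dtype=np.int32)
        self.dirty = set()  # nodes whose slots would need re-uploading to the GPU

    def add_edge(self, a, b):
        for u, v in ((a, b), (b, a)):
            slot = np.where(self.adj[u] == -1)[0][0]  # assumes capacity not exceeded
            self.adj[u, slot] = v
            self.dirty.add(u)

    def cut_edge(self, a, b):
        for u, v in ((a, b), (b, a)):
            self.adj[u, self.adj[u] == v] = -1  # only this node's slot changes
            self.dirty.add(u)

    def flush(self):
        """Batch transfer: only the dirty rows would be copied to the GPU."""
        rows = sorted(self.dirty)
        self.dirty.clear()
        return rows
```

After a small cut, `flush` returns only the two affected rows, which mirrors the 80–90% reduction in transferred data reported in the abstract.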

Journal ArticleDOI
TL;DR: In this paper, an isogeometric B-Rep mortar-based mapping method for geometry models stemming directly from Computer-Aided Design (CAD) is systematically augmented and applied to partitioned FSI simulations.
Abstract: In this study the isogeometric B-Rep mortar-based mapping method for geometry models stemming directly from Computer-Aided Design (CAD) is systematically augmented and applied to partitioned Fluid-Structure Interaction (FSI) simulations. Thus, the newly proposed methodology is applied to geometries described by their Boundary Representation (B-Rep) in terms of trimmed multipatch Non-Uniform Rational B-Spline (NURBS) discretizations, as is standard in modern CAD. The proposed isogeometric B-Rep mortar-based mapping method is herein extended for the transformation of fields between a B-Rep model and a low-order discrete surface representation of the geometry, which typically results when the Finite Volume Method (FVM) or the Finite Element Method (FEM) is employed. This enables the transformation of such fields as tractions and displacements along the FSI interface when Isogeometric B-Rep Analysis (IBRA) is used for the structural discretization and the FVM is used for the fluid discretization. The latter allows for diverse discretization schemes between the structural and the fluid Boundary Value Problem (BVP), taking into consideration the special properties of each BVP separately while the constraints along the FSI interface are satisfied in an iterative manner within partitioned FSI. The proposed methodology can be exploited in FSI problems with an IBRA structural discretization or in FSI problems with a standard FEM structural discretization in the frame of the Exact Coupling Layer (ECL), where the interface fields are smoothed using the underlying B-Rep parametrization, thus taking advantage of the smoothness that the NURBS basis functions offer. All new developments are systematically investigated and demonstrated by FSI problems with lightweight structures whereby the underlying geometric parametrizations are directly taken from real-world CAD models, thus extending IBRA into coupled problems of the FSI type.
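The core of such a mortar-type transfer is an L2 projection between non-matching discretizations: solve M q = b, with M the mass matrix of the receiving mesh and b the load vector of the donor field. A one-dimensional sketch with linear elements; the paper performs this on trimmed NURBS surfaces, and here `np.interp` stands in for evaluating the donor field:

```python
import numpy as np

def l2_project(x_fine, u_fine, x_coarse):
    """L2-optimal transfer of a nodal field from a fine 1D mesh onto a
    non-matching coarse 1D mesh with linear elements: solve M q = b with
    M the coarse mass matrix and b_i = integral of phi_i * u_fine."""
    n = len(x_coarse)
    M = np.zeros((n, n))
    b = np.zeros(n)
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # 2-point Gauss rule per element
    gw = np.array([1.0, 1.0])
    for e in range(n - 1):
        x0, x1 = x_coarse[e], x_coarse[e + 1]
        jac = 0.5 * (x1 - x0)
        for xi, w in zip(gp, gw):
            x = 0.5 * (x0 + x1) + jac * xi
            phi = np.array([0.5 * (1.0 - xi), 0.5 * (1.0 + xi)])
            uf = np.interp(x, x_fine, u_fine)   # donor field at the quadrature point
            idx = [e, e + 1]
            M[np.ix_(idx, idx)] += w * jac * np.outer(phi, phi)
            b[idx] += w * jac * phi * uf
    return np.linalg.solve(M, b)
```

Unlike collocation at nodes, this projection is optimal in the L2 norm, which is the property the mortar method above provides on surfaces.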

Journal ArticleDOI
TL;DR: In this paper, a nonparametric method is presented that assesses the error and variability margins of solutions expressed in separated form using experimental results; the outcome is a real-time PGD solution enhanced with the solution variability and confidence intervals.
Abstract: A nonparametric method assessing the error and variability margins in solutions depicted in a separated form using experimental results is illustrated in this work. The method assesses the total variability of the solution, including the modeling error and the truncation error, when experimental results are available. The illustrated method is based on the use of PGD separated-form solutions, enriched by transforming a part of the PGD basis vectors into probabilistic ones. The constructed probabilistic vectors are restricted to the Stiefel manifold of the physical solution. The result is a real-time parametric PGD solution enhanced with the solution variability and the confidence intervals.
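Restricting the probabilistic vectors to the Stiefel manifold amounts to keeping each sampled basis orthonormal. One standard way to do this, sketched below, is the polar retraction via an SVD; this is an illustrative choice, and the paper may parametrize the manifold differently:

```python
import numpy as np

def retract_to_stiefel(V):
    """Map a perturbed basis matrix V back to the Stiefel manifold of
    orthonormal frames via the polar decomposition V = U S W^T -> U W^T,
    the closest orthonormal matrix to V in the Frobenius norm."""
    U, _, Wt = np.linalg.svd(V, full_matrices=False)
    return U @ Wt
```

A randomly perturbed PGD basis retracted this way satisfies Q^T Q = I exactly, so every probabilistic sample remains a valid orthonormal basis.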

Journal ArticleDOI
TL;DR: In this article, a tree-based tensor format was used for the estimation of the wall pressure for a flow over a cylinder for a range of low to medium Reynolds numbers, for which two flow regimes are observed: a laminar flow with periodic vortex shedding and a laminar boundary layer with a turbulent wake (subcritical regime).
Abstract: The paper deals with approximations of periodic functions that play a significant role in harmonic analysis. The approach revisits trigonometric polynomials, seen as combinations of functions, and proposes to extend the class of models for the combined functions to a wider class of functions. The key here is to use structured functions that have low complexity, with a suitable functional representation and adapted parametrizations for the approximation. Such a representation makes it possible to approximate multivariate functions with few, possibly random, samples. The new parametrization is determined automatically with a greedy procedure, and a low-rank format is used for the approximation associated with each new parametrization. A supervised learning algorithm is used for the approximation of a function of multiple random variables in tree-based tensor format, here the particular Tensor-Train format. Adaptive strategies using statistical error estimates are proposed for the selection of the underlying tensor bases and the ranks of the Tensor-Train format. The method is applied to the estimation of the wall pressure for a flow over a cylinder for a range of low to medium Reynolds numbers, for which we observe two flow regimes: a laminar flow with periodic vortex shedding and a laminar boundary layer with a turbulent wake (subcritical regime). The automatic re-parametrization makes it possible to take into account the specific periodic feature of the pressure.
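The Tensor-Train construction itself follows the standard TT-SVD sweep: unfold the tensor mode by mode and split off one core per mode with a truncated SVD. A plain numpy sketch with a fixed rank; the paper selects the ranks adaptively from statistical error estimates:

```python
import numpy as np

def tt_svd(A, rank):
    """Standard TT-SVD sweep: split off one TT core per mode by a
    truncated SVD of the current matrix unfolding of A."""
    dims = A.shape
    cores, r = [], 1
    M = A.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r_new = min(rank, len(s))
        cores.append(U[:, :r_new].reshape(r, dims[k], r_new))
        # carry the remainder to the next mode's unfolding
        M = (s[:r_new, None] * Vt[:r_new]).reshape(r_new * dims[k + 1], -1)
        r = r_new
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=(out.ndim - 1, 0))
    return out.reshape([c.shape[1] for c in cores])
```

A tensor whose true TT ranks are below the chosen truncation rank is reproduced exactly, which is what makes the format attractive for the low-complexity structured functions discussed above.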

Journal ArticleDOI
TL;DR: In this paper, the Brinkman volume penalization technique is used in the context of a high-order Discontinuous Galerkin method to model moving wall boundaries for compressible fluid flow simulations.
Abstract: In this work we investigate the Brinkman volume penalization technique in the context of a high-order Discontinuous Galerkin method to model moving wall boundaries for compressible fluid flow simulations. High-order approximations are especially of interest as they require few degrees of freedom to represent smooth solutions accurately. This reduced memory consumption is attractive on modern computing systems where the memory bandwidth is a limiting factor. Due to their low dissipation and dispersion they are also of particular interest for aeroacoustic problems. However, a major problem for the high-order discretization is the appropriate representation of wall geometries. In this work we look at the Brinkman penalization technique, which addresses this problem and allows the representation of geometries without modifying the computational mesh. The geometry is modelled as an artificial porous medium and embedded in the equations. As the mesh is independent of the geometry with this method, it is not only well suited for high-order discretizations but also for problems where the obstacles are moving. We look into the deployment of this strategy by briefly discussing the Brinkman penalization technique and its application in our solver, and investigate its behavior in fundamental one-dimensional setups, such as shock reflection at a moving wall and the formation of a shock in front of a piston. This is followed by the application to setups in two and three dimensions, illustrating the method in the presence of curved surfaces.
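The essence of the method is a mask-weighted source term added to the governing equations: inside the obstacle the solution is driven toward the wall state on a fast time scale, and outside the equations are untouched. A one-dimensional relaxation sketch; the solver in the paper couples this term to the compressible flow equations, whereas here it acts on a stand-alone velocity field with illustrative parameter values:

```python
import numpy as np

def penalized_rhs(u, chi, u_wall, eta=1e-4):
    """Brinkman penalization source term: inside the obstacle (mask chi = 1)
    the velocity is driven toward the wall velocity u_wall on the fast time
    scale eta; outside (chi = 0) the term vanishes."""
    return -(chi / eta) * (u - u_wall)

# explicit relaxation of a velocity field toward a (possibly moving) wall
x = np.linspace(0.0, 1.0, 101)
chi = (x > 0.6).astype(float)   # obstacle occupies x > 0.6, mesh unchanged
u = np.ones_like(x)             # initial fluid velocity everywhere
u_wall = 0.2                    # velocity of the moving wall
dt = 1e-5                       # must resolve the eta time scale for stability
for _ in range(2000):
    u = u + dt * penalized_rhs(u, chi, u_wall)
```

Because the obstacle enters only through the mask `chi`, moving the wall means re-evaluating a function, not re-meshing, which is exactly why the technique suits moving geometries and high-order discretizations.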

Journal ArticleDOI
TL;DR: In this paper, a projection-based model order reduction (MOR) framework is proposed to reduce the computational cost in a many-query context arising from model uncertainties in two material parameters and one geometric parameter, together with a reduced-order basis construction technique combining the concept of subspace angles and greedy maximin distance sampling.
Abstract: This work proposes a framework for projection-based model order reduction (MOR) of computational models aiming at a mechanical analysis of abdominal aortic aneurysms (AAAs). The underlying full-order model (FOM) is patient-specific, stationary and nonlinear. The quantities of interest are the von Mises stress and the von Mises strain field in the AAA wall, which result from loading the structure to the level of diastolic blood pressure at a fixed, imaged geometry (prestressing stage) and subsequently loading it to the level of systolic blood pressure with associated deformation of the structure (deformation stage). Prestressing is performed with the modified updated Lagrangian formulation (MULF) approach. The proposed framework aims at a reduction of the computational cost in a many-query context resulting from model uncertainties in two material parameters and one geometric parameter. We apply projection-based MOR to the MULF prestressing stage, which has not been presented to date. Additionally, we propose a reduced-order basis construction technique combining the concept of subspace angles and greedy maximin distance sampling. To further achieve computational speedup, the reduced-order model (ROM) is equipped with the energy-conserving mesh sampling and weighting hyper-reduction method. The accuracy of the ROM is numerically tested in terms of the quantities of interest within given bounds of the parameter domain, and the performance of the proposed ROM in the many-query context is demonstrated by comparing ROM and FOM statistics built from Monte Carlo sampling for three different patient-specific AAAs.
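The greedy maximin distance sampling used for the basis construction can be sketched in a few lines: starting from one parameter point, repeatedly add the candidate farthest from all points chosen so far, spreading the snapshots over the parameter domain. A generic sketch; the combination with subspace angles is specific to the paper and not reproduced here:

```python
import numpy as np

def greedy_maximin(candidates, n_pick, start=0):
    """Greedy maximin distance sampling: repeatedly add the candidate
    parameter point with the largest distance to the already-chosen set."""
    chosen = [start]
    # distance of every candidate to the current chosen set
    dists = np.linalg.norm(candidates - candidates[start], axis=1)
    while len(chosen) < n_pick:
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(candidates - candidates[nxt], axis=1))
    return chosen
```

On a parameter box this picks well-spread points (e.g. opposite corners first), which keeps the number of expensive FOM snapshot solves small for a given coverage of the three-dimensional parameter domain.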

Journal ArticleDOI
TL;DR: In this paper, a 3D combustion simulation of a diesel engine was performed using a combination of the Transition SST turbulence model, the eddy dissipation model, the Moss–Brookes model for soot calculation and the Zeldovich mechanism for NO production.
Abstract: The current study aims at simulating the in-cylinder combustion process in a diesel engine and investigating the engine performance and pollutant formation. The combustion simulation is performed on a 3D sector employing appropriate models for the various physical and chemical processes contributing to the combustion phenomenon. The overall model includes the Transition SST turbulence model, the eddy dissipation model for turbulence-chemistry interaction, the Moss–Brookes model for soot calculation and the Zeldovich mechanism for NO production, in addition to the usual transport equations. The numerical solutions are based on the finite volume discretization of the governing partial differential equations. Engine performance has been studied in terms of pressure, temperature and heat release rate, while pollutant formation has been investigated in terms of soot and thermal NO production. The results show that the ignition delay is quite short and that the injection timing may be successfully employed to control the combustion behavior. The simulation results are quite consistent with the expected behavior of the target variables, indicating that CFD analysis can be successfully employed in diesel engine design. The validity of the results is supported by a mesh-independence test, comparison with the literature and justification of the chosen models.
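Under the usual quasi-steady assumptions, the thermal NO source from the Zeldovich mechanism reduces to a single Arrhenius-controlled rate. A sketch with illustrative textbook-order constants, not necessarily the values used in the simulation:

```python
import math

def thermal_no_rate(T, O_conc, N2_conc, A=7.6e13, Ta=38000.0):
    """Rate-limiting step of the Zeldovich mechanism:
    d[NO]/dt ~= 2 * k1 * [O] * [N2], with k1 = A * exp(-Ta / T).
    A and Ta here are illustrative textbook-order values."""
    k1 = A * math.exp(-Ta / T)  # cm^3 / (mol s), Arrhenius form
    return 2.0 * k1 * O_conc * N2_conc
```

The strong exponential temperature dependence of k1 is what makes thermal NO formation so sensitive to peak in-cylinder temperatures, and hence to the injection timing discussed in the abstract.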