
Showing papers in "Advanced Modeling and Simulation in Engineering Sciences in 2016"


Journal ArticleDOI
TL;DR: A novel model reduction approach for the approximation of nonlinear hyperbolic equations in the scalar and the system cases that has the potential of being accurate when involving only very few modes, generating physically acceptable, oscillation-free solutions.
Abstract: We propose a novel model reduction approach for the approximation of nonlinear hyperbolic equations in the scalar and the system cases. The approach relies on an offline computation of a dictionary of solutions together with an online $$L^1$$-norm minimization of the residual. It is shown why this is a natural framework for hyperbolic problems and tested on nonlinear problems such as Burgers’ equation and the one-dimensional Euler equations involving shocks and discontinuities. Efficient algorithms are presented for the computation of the $$L^1$$-norm minimizer, in the cases of both linear and nonlinear residuals. Results indicate that the method has the potential of being accurate when involving only very few modes, generating physically acceptable, oscillation-free solutions.

71 citations
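
As an illustration of the kind of online step described above, the following sketch casts the minimization of the $$L^1$$-norm of a linear residual over a small dictionary of modes as a linear program. The operator, dictionary and right-hand side are synthetic stand-ins, not the authors' data or implementation.

```python
# Hypothetical sketch: L1-norm minimization of a *linear* residual over a
# dictionary of precomputed solution modes, cast as a linear program.
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    """Return x minimizing ||A x - b||_1 via the standard LP reformulation."""
    m, n = A.shape
    # Variables z = [x, t]; minimize sum(t) subject to -t <= A x - b <= t.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(0)
D = rng.standard_normal((200, 8))        # dictionary of 8 offline "modes" (illustrative)
A_op = rng.standard_normal((200, 200))   # stand-in for a discretized operator
b = A_op @ (D @ np.array([1.0, -0.5, 0, 0, 0, 0, 0, 0]))  # synthetic right-hand side
coeffs = l1_fit(A_op @ D, b)             # online L1 minimization of the residual
print(coeffs.round(3))
```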


Journal ArticleDOI
TL;DR: A non-intrusive procedure is developed in which the 4D sensitivity fields are obtained with a commercial finite element code, allowing for great versatility in meshing and in the incorporation of complex constitutive laws.
Abstract: The goal of the present study is to illustrate the full integration of sensor and imaging data into numerical procedures for the purpose of identification of constitutive laws and their validation. The feasibility of such approaches is proven in the context of in situ tests monitored by tomography. The bridging tool consists of spatiotemporal (i.e., 4D) analyses with dedicated (integrated) correlation algorithms. A tensile test on a nodular graphite cast iron sample is performed within a lab tomograph. The reconstructed volumes are registered via integrated digital volume correlation (DVC) that incorporates a finite element modeling of the test, thereby performing a mechanical integration in the 4D registration of a series of 3D images. In the present case a non-intrusive procedure is developed in which the 4D sensitivity fields are obtained with a commercial finite element code, allowing for great versatility in meshing and in the incorporation of complex constitutive laws. Convergence studies can thus be performed in which the quality of the discretization is controlled both for the simulation and the registration. Incremental DVC analyses are carried out with the scans acquired during the in situ mechanical test. For DVC, the mesh size results from a compromise between measurement uncertainties and spatial resolution. Conversely, a numerically good mesh may prove too fine for the considered material microstructure. With the integrated framework proposed herein, 4D registrations can be performed, and missing boundary conditions of the reference state as well as mechanical parameters of an elastoplastic constitutive law are determined under conditions that are fair to both DVC and simulation.

64 citations


Journal ArticleDOI
TL;DR: This work highlights the fact that the Euclidean distance is suboptimal and that local MOR procedures can be improved by the use of a metric directly related to the projections underlying the reduction.
Abstract: Projection-based model order reduction (MOR) using local subspaces is becoming an increasingly important topic in the context of the fast simulation of complex nonlinear models. Most approaches rely on multiple local spaces constructed using parameter, time or state-space partitioning. State-space partitioning is usually based on Euclidean distances. This work highlights the fact that the Euclidean distance is suboptimal and that local MOR procedures can be improved by the use of a metric directly related to the projections underlying the reduction. More specifically, scale-invariances of the underlying model can be captured by the use of a true projection error as a dissimilarity criterion instead of the Euclidean distance. The capability of the proposed approach to construct local and compact reduced subspaces is illustrated by approximation experiments of several data sets and by the model reduction of two nonlinear systems.

37 citations
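
A minimal sketch of the dissimilarity criterion discussed above: a state is assigned to the local subspace with the smallest true projection error rather than to the nearest point in the Euclidean sense. The bases and the state below are random illustrations, not the paper's data.

```python
import numpy as np

def assign_by_projection_error(x, bases):
    """Pick the local subspace V minimizing the true projection error
    ||x - V V^T x|| / ||x||, which is invariant to a rescaling of x
    (a Euclidean distance to cluster centroids is not)."""
    errs = [np.linalg.norm(x - V @ (V.T @ x)) / np.linalg.norm(x) for V in bases]
    return int(np.argmin(errs))

rng = np.random.default_rng(1)
V1, _ = np.linalg.qr(rng.standard_normal((50, 3)))   # local reduced basis 1
V2, _ = np.linalg.qr(rng.standard_normal((50, 3)))   # local reduced basis 2
x = 10.0 * (V1 @ rng.standard_normal(3))             # strongly scaled state in span(V1)
print(assign_by_projection_error(x, [V1, V2]))       # -> 0, regardless of the scaling
```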


Journal ArticleDOI
TL;DR: The focus of this paper is the development of a numerically efficient hybrid surrogate modelling strategy that combines gappy proper orthogonal decomposition (GPOD) and recurrent neural networks (RNN) into a hybrid RNN-GPOD surrogate model.
Abstract: Realistic 3D simulations of the tunnelling process are increasingly required to investigate the interactions between machine-driven tunnel construction and the surrounding soil in order to provide reliable estimates of the expected settlements and associated risks of damage for existing structures, in particular in urban tunnelling projects. To accomplish the step from large-scale computational analysis to real-time predictions of expected settlements during tunnel construction, this paper focuses on the development of a numerically efficient hybrid surrogate modelling strategy combining gappy proper orthogonal decomposition (GPOD) and recurrent neural networks (RNN). In this hybrid RNN-GPOD surrogate model, the RNN is employed to extrapolate the time-variant settlements at several monitoring points within an investigated surface area, and GPOD is utilised to predict the whole field of surface settlements based on the RNN predictions and a POD radial basis functions approximation. Both parts of the surrogate model are created from the results of finite element simulations, with geotechnical and process parameters varied within the intervals given in the design stage of a tunnel project. In the construction stage, the hybrid surrogate model is applied for real-time reliability analyses of the mechanised tunnelling process to support the machine operator in steering the tunnel boring machine.

35 citations
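
The following is a small, hedged sketch of the POD plus radial-basis-function ingredient mentioned above: snapshot fields are compressed with POD and the POD coefficients are interpolated over process parameters with an RBF. The fields, parameters and mode count are synthetic assumptions, not the surrogate built in the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
params = rng.uniform(0.0, 1.0, size=(30, 2))        # e.g. two process parameters (illustrative)
snapshots = np.array([np.sin(np.linspace(0, 3, 100) * (1 + p[0])) * p[1]
                      for p in params])             # 30 synthetic "settlement" fields

U, s, Vt = np.linalg.svd(snapshots.T, full_matrices=False)
Phi = U[:, :4]                                       # POD basis (4 modes)
coeffs = snapshots @ Phi                             # POD coefficients per snapshot

rbf = RBFInterpolator(params, coeffs)                # map parameters -> POD coefficients
new_param = np.array([[0.4, 0.7]])
predicted_field = (rbf(new_param) @ Phi.T).ravel()   # online full-field prediction
print(predicted_field.shape)
```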


Journal ArticleDOI
TL;DR: In this paper a mixed method is developed that can handle anisotropic materials with inextensible fibers and can be relaxed to extensible fiber behaviour; a classical ansatz, known from the modeling of volume constraints, is adopted, leading to stable elements that can be used in the finite strain regime.
Abstract: Anisotropic materials with inextensible fibers introduce constraints in the mathematical formulation. This is always the case when fibers with high stiffness in a certain direction are present and a relatively weak matrix material is supporting these fibers. In numerical solution methods like the finite element method, the presence of constraints—in this case associated with possible fiber inextensibility compared to the matrix—leads to so-called locking phenomena. This can be overcome by special interpolation schemes, as has been discussed extensively for volume constraints like incompressibility as well as for contact constraints. For anisotropic material behaviour the most severe case is related to inextensible fibers. In this paper a mixed method is developed that can handle anisotropic materials with inextensible fibers and can be relaxed to extensible fiber behaviour. For this purpose a classical ansatz, known from the modeling of volume constraints, is adopted, leading to stable elements that can be used in the finite strain regime.

28 citations


Journal ArticleDOI
TL;DR: It is shown numerically and analytically that the introduced cut-element stabilisation technique provides an effective bound on the size of the Nitsche parameters and, in turn, leads to well-conditioned system matrices.
Abstract: We present an immersed finite element technique for boundary-value and interface problems from nonlinear solid mechanics. Its key features are the implicit representation of domain boundaries and interfaces, the use of Nitsche’s method for the incorporation of boundary conditions, accurate numerical integration based on marching tetrahedrons, and cut-element stabilisation by means of extrapolation. For discretisation, structured and unstructured background meshes with Lagrange basis functions are considered. We show numerically and analytically that the introduced cut-element stabilisation technique provides an effective bound on the size of the Nitsche parameters and, in turn, leads to well-conditioned system matrices. In addition, we introduce a novel approach for representing and analysing geometries with sharp features (edges and corners) using an implicit geometry representation. This allows the computation of typical engineering parts composed of solid primitives without the need for boundary-fitted meshes.

28 citations


Journal ArticleDOI
TL;DR: It is shown how Bayes’s theory is the proper mathematical background for this identification process and how this theoretical background can be used in an actual numerical procedure; various numerical approximations are briefly discussed.
Abstract: When a mathematical or computational model is used to analyse some system, it is usual that some parameters (or functions or fields) in the model are not known, and hence uncertain. These parametric quantities are then identified by actual observations of the response of the real system. In a probabilistic setting, Bayes’s theory is the proper mathematical background for this identification process. The possibility of being able to compute a conditional expectation turns out to be crucial for this purpose. We show how this theoretical background can be used in an actual numerical procedure, and briefly discuss various numerical approximations.

28 citations
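
One elementary instance of computing such a conditional expectation is the linear Gaussian case, where the update reduces to a Kalman-type formula. The sketch below illustrates this special case only; the observation operator, covariances and data are invented for illustration and are not taken from the paper.

```python
import numpy as np

def gauss_conditional_update(q_mean, C_q, H, C_noise, y_obs):
    """Posterior mean/covariance for a Gaussian prior on q observed via y = H q + noise,
    using the gain K = C_q H^T (H C_q H^T + C_n)^-1 (linear conditional expectation)."""
    S = H @ C_q @ H.T + C_noise
    K = C_q @ H.T @ np.linalg.inv(S)
    q_post = q_mean + K @ (y_obs - H @ q_mean)
    C_post = C_q - K @ H @ C_q
    return q_post, C_post

q_mean = np.array([1.0, 0.5])                # prior mean of uncertain parameters (assumed)
C_q = np.diag([0.2, 0.1])                    # prior covariance (assumed)
H = np.array([[1.0, 2.0], [0.5, 1.0]])       # linear(ized) observation operator (assumed)
C_noise = 0.01 * np.eye(2)                   # measurement noise covariance (assumed)
y_obs = np.array([2.1, 1.05])                # observed system response (synthetic)
print(gauss_conditional_update(q_mean, C_q, H, C_noise, y_obs)[0])
```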


Journal ArticleDOI
TL;DR: A dynamic data-driven adaptation approach that adapts the reduced model from incomplete sensor data obtained from the system during the online computations and derives updates to the reduced models directly from the incomplete data, without recourse to the full model.
Abstract: This work presents a data-driven online adaptive model reduction approach for systems that undergo dynamic changes. Classical model reduction constructs a reduced model of a large-scale system in an offline phase and then keeps the reduced model unchanged during the evaluations in an online phase; however, if the system changes online, the reduced model may fail to predict the behavior of the changed system. Rebuilding the reduced model from scratch is often too expensive in time-critical and real-time environments. We introduce a dynamic data-driven adaptation approach that adapts the reduced model from incomplete sensor data obtained from the system during the online computations. The updates to the reduced models are derived directly from the incomplete data, without recourse to the full model. Our adaptivity approach approximates the missing values in the incomplete sensor data with gappy proper orthogonal decomposition. These approximate data are then used to derive low-rank updates to the reduced basis and the reduced operators. In our numerical examples, incomplete data with 30–40 % known values are sufficient to recover the reduced model that would be obtained via rebuilding from scratch.

27 citations
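
A minimal, self-contained sketch of the gappy POD step described above: POD coefficients are fitted in a least-squares sense to the values known at a few sensor locations and the full state is then reconstructed. The basis, sensor mask and field are synthetic and only illustrate the idea, not the paper's adaptation scheme.

```python
import numpy as np

def gappy_pod(Phi, mask, values):
    """Least-squares POD coefficients from partial data, then full-state reconstruction."""
    coeffs, *_ = np.linalg.lstsq(Phi[mask, :], values, rcond=None)
    return Phi @ coeffs

rng = np.random.default_rng(3)
x_grid = np.linspace(0, 1, 200)
snapshots = np.array([np.sin(2 * np.pi * k * x_grid) for k in (1, 2, 3)]).T
Phi, _, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = Phi[:, :3]                                   # POD basis of rank 3

truth = snapshots @ np.array([0.7, -0.2, 0.1])     # full (unknown) state
mask = rng.choice(200, size=70, replace=False)     # roughly 35 % of entries are "sensed"
reconstruction = gappy_pod(Phi, mask, truth[mask])
print(np.linalg.norm(reconstruction - truth))
```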


Journal ArticleDOI
TL;DR: The proposed reduced order model combines efficient shape deformation and accurate and stable velocity and pressure approximation for incompressible viscous flows, computed with a reduced order method, and is applied to computational fluid dynamics problems considering a Stokes flow model.
Abstract: In this work we provide a combination of isogeometric analysis with reduced order modelling techniques, based on proper orthogonal decomposition, to guarantee computational reduction for the numerical model, and with free-form deformation, for versatile geometrical parametrization. We apply it to computational fluid dynamics problems considering a Stokes flow model. The proposed reduced order model combines efficient shape deformation and accurate and stable velocity and pressure approximation for incompressible viscous flows, computed with a reduced order method. Efficient offline–online computational decomposition is guaranteed in view of repetitive calculations for parametric design and optimization problems. Numerical test cases show the efficiency and accuracy of the proposed reduced order model.

26 citations


Journal ArticleDOI
TL;DR: This paper proposes the off-line calculation of rich parametric solutions that can then be explored on-line in real time in order to perform inverse analysis, process optimisation or on-line simulation-based control of automated fibre placement.
Abstract: Automated fibre placement (AFP) is an incipient manufacturing process for composite structures. Despite its conceptual simplicity, it involves many complexities related to the need to melt the thermoplastic at the tape-substrate interface, to ensure consolidation, which requires the diffusion of molecules, and to control the build-up of residual stresses responsible for the residual deformations of the formed parts. The optimisation of the process and the determination of the process window require a plethora of simulations because there are many parameters involved in the characterization of the material and the process. The exploration of the design space cannot be envisaged by using standard simulation techniques. In this paper we propose the off-line calculation of rich parametric solutions that can then be explored on-line in real time in order to perform inverse analysis, process optimisation or on-line simulation-based control. In particular, in the present work, and in continuity with our former works, we consider two main extra parameters, the first related to the line acceleration and the second to the number of plies laid up.

22 citations


Journal ArticleDOI
TL;DR: This work proposes to apply the proper orthogonal decomposition combined with the (Discrete) empirical interpolation method in order to reduce the computation time required to study the start-up of an electrical machine until it reaches the steady state.
Abstract: Model order reduction (MOR) methods are more and more applied in many different fields of physics in order to reduce the number of unknowns and thus the computational time of large-scale systems. However, their application is quite recent in the field of computational electromagnetics. In the case of electrical machines, the numerical model has to take into account the nonlinear behaviour of ferromagnetic materials, motion of the rotor, circuit equations and mechanical coupling. In this context, we propose to apply the proper orthogonal decomposition combined with the (discrete) empirical interpolation method in order to reduce the computation time required to study the start-up of an electrical machine until it reaches the steady state. An empirical offline/online approach based on electrical engineering is proposed in order to build an efficient reduced model that is accurate on the whole operating range. Finally, a 2D example of a synchronous machine is studied with a reduced model deduced from the proposed approach.
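
To make the combination above more concrete, here is a small sketch of the classical (discrete) empirical interpolation idea: interpolation indices are selected greedily from a basis of nonlinearity snapshots, and the nonlinear term is then approximated from its values at those indices only. The snapshot family and dimensions are illustrative assumptions, not the machine model from the paper.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection from a nonlinear-snapshot basis U (n x m)."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c          # residual of interpolating column j at chosen points
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

def deim_approx(U, idx, f_at_idx):
    """Approximate f ~ U (P^T U)^{-1} P^T f using only the sampled entries of f."""
    return U @ np.linalg.solve(U[idx, :], f_at_idx)

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 300)
F = np.array([np.exp(-mu * x) * np.sin(5 * x) for mu in np.linspace(1, 4, 20)]).T
U, _, _ = np.linalg.svd(F, full_matrices=False)
U = U[:, :5]                                        # basis of the nonlinear term (5 modes)
idx = deim_indices(U)
f_new = np.exp(-2.3 * x) * np.sin(5 * x)            # "new" nonlinear term to approximate
print(np.linalg.norm(deim_approx(U, idx, f_new[idx]) - f_new))
```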

Journal ArticleDOI
TL;DR: In this paper, a non-intrusive technique is proposed to substitute the simplified connectors by a precise 3D nonlinear model, without modifying the global plate model, and each computation can be conducted on independent optimized software.
Abstract: Most large engineering structures are described as assemblies of plates and shells, and they are computed as such using ad hoc finite element packages. In fact their computation in 3D would be much too costly. In this framework, the connections between the parts are often modeled by means of simplified tying models. In order to improve the reliability of such simulations, we propose to apply a non-intrusive technique so as to virtually substitute a precise 3D nonlinear model for the simplified connectors, without modifying the global plate model. Moreover, each computation can be conducted with independent, optimized software. After a description of the method, examples are used to analyze its performance, and to draw some conclusions on the validity and limitations of both the modeling of junctions by rigid connectors and the use of submodeling techniques for the estimation of the carrying capacity of bolted plates.

Journal ArticleDOI
TL;DR: The present contribution demonstrates that multi-level hp-adaptive schemes can be efficiently driven by an explicit a-posteriori error estimator; the classical residual-based error estimator is adopted.
Abstract: Recently, a multi-level hp-version of the finite element method (FEM) was proposed to ease the difficulties of treating hanging nodes, while providing full hp-approximation capabilities. In the original paper, the refinement procedure made use of a-priori knowledge of the solution. However, adaptive procedures can produce discretizations which are more effective than an intuitive choice of element sizes h and polynomial degree distributions p. This is particularly prominent when a-priori knowledge of the solution is only vague or unavailable. The present contribution demonstrates that multi-level hp-adaptive schemes can be efficiently driven by an explicit a-posteriori error estimator. To this end, we adopt the classical residual-based error estimator. The main insight here is that its extension to multi-level hp-FEM is possible by considering the refined-most overlay elements as integration domains. We demonstrate on several two- and three-dimensional examples that exponential convergence rates can be obtained.

Journal ArticleDOI
TL;DR: These numerical experiments provide a justification of the dynamic gradient damage model along with its current implementation, when it is used as a phase-field model for complex real-world dynamic fracture problems.
Abstract: Gradient damage models can be acknowledged as a unified framework of dynamic brittle fracture. As a phase-field approach to fracture, they have gained popularity over the last few years in the computational mechanics community. This paper concentrates on a better understanding of these models. We will highlight their properties during the initiation and propagation phases of defect evolution. The variational ingredients of the dynamic gradient damage model are recalled. Temporal discretization based on the Newmark-$$\beta$$ scheme is performed. Several energy release rates in gradient damage models are introduced to bridge the link from damage to fracture. An antiplane tearing numerical experiment is considered. It is found that the phase-field crack tip is governed by the asymptotic Griffith’s law. In the absence of unstable crack propagation, the dynamic gradient damage model converges to the quasi-static one. The defect evolution is in quantitative accordance with the linear elastic fracture mechanics predictions. These numerical experiments provide a justification of the dynamic gradient damage model along with its current implementation, when it is used as a phase-field model for complex real-world dynamic fracture problems.

Journal ArticleDOI
TL;DR: Application of the TPWL method to a swing dynamics model shows that the method provides accurate reduced order models for non-linear transient problems.
Abstract: This paper concerns the application of reduced order modeling techniques to power grid simulation. Swing dynamics is a complex non-linear phenomenon, which makes model order reduction of these problems intricate. A multi-point linearization based model reduction technique, the trajectory piecewise linearization (TPWL) method, is adopted to address the problem of approximating the nonlinear term in swing models. The method combines proper orthogonal decomposition with TPWL in order to build a suitable reduced order model that can accurately predict the swing dynamics. The method consists of two stages: an offline stage where model reduction and the selection of linearization points are performed, and an online stage where the reduced order multi-point linear simulation is performed. An improvement of the strategy for point selection is also proposed. Application of the TPWL method to a swing dynamics model shows that the method provides accurate reduced order models for non-linear transient problems.
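
As a simplified illustration of one offline ingredient (not the paper's improved selection strategy), the sketch below records a new linearization point whenever the trajectory has moved a prescribed relative distance away from the last recorded point; the trajectory is synthetic.

```python
import numpy as np

def select_linearization_points(trajectory, delta=0.1):
    """Return indices of snapshots chosen as linearization points along a training trajectory."""
    points = [0]
    for i in range(1, trajectory.shape[0]):
        ref = trajectory[points[-1]]
        # New point when the state has drifted by more than delta relative to the reference.
        if np.linalg.norm(trajectory[i] - ref) > delta * max(np.linalg.norm(ref), 1e-12):
            points.append(i)
    return points

t = np.linspace(0, 10, 500)
traj = np.column_stack([np.sin(t), np.cos(3 * t)])   # synthetic stand-in for swing states
print(select_linearization_points(traj, delta=0.2))
```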

Journal ArticleDOI
TL;DR: An immersed finite element method for solid–fluid interaction is presented with application focus on highly deformable elastic bodies in a Stokes flow environment, motivated by microfluidic experimentation in mechanobiology.
Abstract: An immersed finite element method for solid–fluid interaction is presented with application focus on highly deformable elastic bodies in a Stokes flow environment. The method is based on a global balance equation which combines the solid and fluid momentum balances, the fluid mass balance and, in weak form, the interface conditions. By means of an Updated Lagrangian description for finite elasticity, only one analysis mesh is used, where the solid particles are backtracked in order to preserve the deformation history. The method results in a full coupling of the solid-fluid system which is solved by an exact Newton method. The location of the material interface is captured by a signed distance function and updated according to the computed displacement increments, with the help of an explicit surface parameterisation; no body-fitted volume meshes are needed. Special emphasis is placed on the accurate integration of finite elements traversed by the interface and the related numerical stability of the shape function basis. A number of applications for compressible Neo-Hookean solids subject to creeping flow are presented, motivated by microfluidic experimentation in mechanobiology.

Journal ArticleDOI
TL;DR: The goal of the present paper is to significantly improve computational efficiency by using model reduction; the computation of the consistent tangent operator based on the reduced stiffness matrix of the microstructure is also achieved.
Abstract: Many applications are based on the use of materials with heterogeneous microstructure. Prominent examples are fiber-reinforced composites, multi-phase steels or soft tissue, to name only a few. The modeling of structures composed of such materials is suitably carried out at different scales. At the micro scale, the detailed microstructure is taken into account, whereas the modeling at the macro scale serves to include sophisticated structural geometries with complex boundary conditions. The procedure is crucially based on an intelligent bridging between the scales. One of the methods derived for this purpose is the meanwhile well established FE$$^2$$ method which, however, leads to a very high computational effort. Unfortunately, this impedes the use of the FE$$^2$$ method and similar methodologies for practically relevant problems as they occur e.g. in production or medical technology. The goal of the present paper is to significantly improve computational efficiency by using model reduction. The suggested procedure is very generally applicable. It holds for large deformations as well as for all relevant types of inelasticity. An important merit of the work is the computation of the consistent tangent operator based on the reduced stiffness matrix of the microstructure. In this way a very fast (in most cases quadratic) convergence within the Newton iteration at the macro level is achieved.

Journal ArticleDOI
TL;DR: This paper presents a methodology for conducting a 3-D static fracture analysis with applications to a gas turbine compressor blade and demonstrates that for the applied loading condition, a mixed mode crack propagation is expected.
Abstract: This paper presents a methodology for conducting a 3-D static fracture analysis with applications to a gas turbine compressor blade. An open crack model is considered in the study and crack-tip driving parameters are estimated by using 3-D singular crack-tip elements in ANSYS®. The static fracture analysis is verified with a special purpose fracture code (FRANC3D). Once the crack front is perfectly defined and validated, a free vibration study is conducted by analyzing the natural frequencies and mode shapes for both a single blade and a bladed disk system. Taking advantage of high performance computing resources, a high fidelity finite element model is considered in the parametric investigation. In the fracture simulation, the influence of the size of a single-edge crack as well as the rotational velocity on fracture parameters (stress intensity factors and J-integral) is evaluated. Results demonstrate that for the applied loading condition, a mixed mode crack propagation is expected. In the modal analysis study, increasing the depth of the crack leads to a decrease in the natural frequencies of both the single blade and the bladed disk system, while increasing the rotational velocity increases the natural frequencies. The presence of a crack also leads to mode localization for all mode families, a phenomenon that cannot be captured by a single blade analysis.

Journal ArticleDOI
TL;DR: The work deals with the computational design of structural materials by resorting to computational homogenization and topological optimization techniques; the approach results in a large reduction of the computational cost, which makes the proposed methodology affordable for multiscale computational material design.
Abstract: The work deals with the computational design of structural materials by resorting to computational homogenization and topological optimization techniques. The goal is then to minimize the structural (macro-scale) compliance by appropriately designing the material distribution (microstructure) at a lower scale (micro-scale), which, in turn, rules the mechanical properties of the material. The specific features of the proposed approach are: (1) The cost function to be optimized (structural stiffness) is defined at the macro-scale, whereas the design variables defining the micro-structural topology lie on the lower scale. Therefore a coupled, two-scale (macro/micro) optimization problem is solved, unlike in classical, single-scale topological optimization problems. (2) To overcome the exorbitant computational cost stemming from the multiplicative character of the aforementioned multiscale approach, a specific strategy, based on the consultation of a discrete material catalog of micro-scale optimized topologies (Computational Vademecum), is used. The Computational Vademecum is computed in an offline process, which is performed only once for every constitutive material, and it can be subsequently consulted as many times as desired in the online design process. This results in a large reduction of the computational cost, which makes the proposed methodology affordable for multiscale computational material design. Some representative examples assess the performance of the considered approach.
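
A toy sketch of the offline/online catalog idea described above: a set of micro-topologies is precomputed offline for a discrete grid of descriptors, and the online design loop only looks up the nearest entry. The descriptors and "topologies" here are random placeholders, not the result of any topology optimization.

```python
import numpy as np

# --- offline: precompute catalog entries (random stand-ins for optimized topologies) ---
rng = np.random.default_rng(5)
descriptors = np.array([[c1, c2] for c1 in np.linspace(-1, 1, 5)
                                 for c2 in np.linspace(-1, 1, 5)])   # assumed descriptors
catalog = {i: rng.integers(0, 2, size=(16, 16)) for i in range(len(descriptors))}

# --- online: consult the catalog for the closest precomputed micro-topology ------------
def consult_vademecum(query):
    i = int(np.argmin(np.linalg.norm(descriptors - query, axis=1)))
    return catalog[i]

micro_topology = consult_vademecum(np.array([0.3, -0.7]))
print(micro_topology.shape)
```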

Journal ArticleDOI
TL;DR: An a posteriori hyper-reduction method is specifically considered in order to deal with the cost associated with the non-linearity of the problems.
Abstract: Inhomogeneous essential boundary conditions must be carefully treated in the formulation of Reduced Order Models (ROMs) for non-linear problems. In order to investigate this issue, two methods are analysed: one in which the boundary conditions are imposed in a strong way, and a second one in which a weak imposition of boundary conditions is made. The ideas presented in this work apply to the broad realm of a posteriori ROMs. Nevertheless, an a posteriori hyper-reduction method is specifically considered in order to deal with the cost associated with the non-linearity of the problems. Applications to nonlinear transient heat conduction problems with temperature-dependent thermophysical properties and time-dependent essential boundary conditions are studied. However, the strategies introduced in this work are of general application.

Journal ArticleDOI
TL;DR: It is shown that a PGD formulation can facilitate the construction of so-called admissible fields, which is a technical key point of CRE, a concept that has been widely used over the last 40 years for verification and validation of computational mechanics models.
Abstract: The paper deals with the constitutive relation error (CRE) concept, which has been widely used over the last 40 years for verification and validation of computational mechanics models. It more specifically focuses on the beneficial use of model reduction based on the proper generalized decomposition (PGD) within this CRE concept. Indeed, it is shown that a PGD formulation can facilitate the construction of so-called admissible fields, which is a technical key point of CRE. Numerical illustrations, addressing both model verification and model updating, are presented to assess the performance of the proposed approach.

Journal ArticleDOI
TL;DR: A damage driven remeshing strategy towards fracture allows one to simulate arbitrary crack paths in three-dimensional evolving geometries and has a significant potential for a wide range of industrial applications.
Abstract: A crack initiation and propagation algorithm driven by non-local ductile damage is proposed in a three-dimensional finite strain framework. The evolution of plastic strain and stress triaxiality govern a non-local ductile damage field via constitutive equations. When the damage reaches a critical threshold, a discontinuity in the form of a crack surface is inserted into the three-dimensional continuum. The location and direction of the introduced discontinuity directly result from the damage field. Crack growth is also determined by the evolution of damage at the crack tip and the crack surface is adaptively extended in the computed direction. Frequent remeshing is used to computationally track the initiation and propagation of cracks, as well as to simultaneously maintain a good quality of the finite elements undergoing large deformations. This damage driven remeshing strategy towards fracture allows one to simulate arbitrary crack paths in three-dimensional evolving geometries. It has a significant potential for a wide range of industrial applications. Numerical examples are solved to demonstrate the ability of the proposed framework.

Journal ArticleDOI
TL;DR: The numerical results show that the square-root formulation performs efficiently in the tested benchmark problems at high-Weissenberg number flows, such as the lid-driven cavity flow, the flow around a confined cylinder, the cross-slot flow and the impacting drop free surface problem.
Abstract: We present a numerical study of a stabilization method for computing confined and free-surface flows of highly elastic viscoelastic fluids. In this approach, the constitutive equation based on the conformation tensor, which is used to define the viscoelastic model, is modified introducing an evolution equation for the square-root conformation tensor. Both confined and free-surface flows are considered, using two different numerical codes. A finite volume method is used for confined flows and a finite difference code developed in the context of the marker-and-cell method is used for confined and free-surface flows. The implementation of the square-root formulation was performed in both numerical schemes and discussed in terms of its ability and efficiency to compute steady and transient viscoelastic fluid flows. The numerical results show that the square-root formulation performs efficiently in the tested benchmark problems at high-Weissenberg number flows, such as the lid-driven cavity flow, the flow around a confined cylinder, the cross-slot flow and the impacting drop free surface problem.
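
A small sketch of the kinematic core of such a square-root formulation: the symmetric square root b of a symmetric positive-definite conformation tensor A (so that A = b b) can be obtained from its eigendecomposition. The tensor below is an arbitrary SPD example, not data from the paper.

```python
import numpy as np

def spd_sqrt(A):
    """Symmetric square root of a symmetric positive-definite matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

A = np.array([[4.0, 1.0], [1.0, 3.0]])     # illustrative SPD conformation tensor
b = spd_sqrt(A)
print(np.allclose(b @ b, A))               # True: b is the square-root tensor
```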

Journal ArticleDOI
TL;DR: A novel finite element formulation that significantly reduces the number of degrees of freedom necessary to obtain reasonably accurate approximations of the low-frequency component of the deformation in boundary-value problems is proposed.
Abstract: We propose a novel finite element formulation that significantly reduces the number of degrees of freedom necessary to obtain reasonably accurate approximations of the low-frequency component of the deformation in boundary-value problems. In contrast to the standard Ritz–Galerkin approach, the shape functions are defined on a Lie algebra—the logarithmic space—of the deformation function. We construct a deformation function based on an interpolation of transformations at the nodes of the finite element. In the case of the geometrically exact planar Bernoulli beam element presented in this work, these transformation functions at the nodes are given as rotations. However, due to an intrinsic coupling between rotational and translational components of the deformation function, the formulation provides for a good approximation of the deflection of the beam, as well as of the resultant forces and moments. As both the translational and the rotational components of the deformation function are defined on the logarithmic space, we propose to refer to the novel approach as the “Logarithmic finite element method”, or “LogFE” method.

Journal ArticleDOI
TL;DR: A strategy to reduce the computational costs of the simulation of dynamic fracture processes in quasi-brittle materials, based on a combination of domain decomposition (DD) and model order reduction (MOR) techniques is discussed.
Abstract: In this paper, we discuss a strategy to reduce the computational costs of the simulation of dynamic fracture processes in quasi-brittle materials, based on a combination of domain decomposition (DD) and model order reduction (MOR) techniques. Fracture processes are simulated by means of three-dimensional finite element models in which use is made of cohesive elements, introduced on-the-fly wherever a cracking criterion is attained. The body is initially subdivided into sub-domains; for each sub-domain, MOR is obtained through a proper orthogonal decomposition (POD) of the equations governing its evolution, until it starts cracking. After crack inception within a sub-domain, the solution is switched back to the original full-order model for that sub-domain only. The computational gain attained through the coupled use of DD and POD thus depends on the geometry of the body, on the topology of the sub-domains and, above all, on the spreading of cracking induced by the load conditions. Numerical examples concerning well-established fracture tests are used for validation, and the attainable reduction of the computing time is discussed for varying decompositions into sub-domains, even in the absence of a full exploitation of parallel computing potentialities.

Journal ArticleDOI
TL;DR: Numerical experiments show that the recursive POD representation is computationally more accurate than the Proper Generalized Decomposition for multivariate functions, and an exponential rate of convergence is proved when it is applied to the solution of the reaction-diffusion partial differential equation.
Abstract: This paper focuses on the low-dimensional representation of multivariate functions. We study a recursive POD representation, based upon the use of the power iterate algorithm to recursively expand the modes retained in the previous step. We obtain general error estimates for the truncated expansion, and prove that the recursive POD representation provides a quasi-optimal approximation in $$L^2$$ norm. We also prove an exponential rate of convergence, when applied to the solution of the reaction-diffusion partial differential equation. Some relevant numerical experiments show that the recursive POD is computationally more accurate than the Proper Generalized Decomposition for multivariate functions. We also recover the theoretical exponential convergence rate for the solution of the reaction-diffusion equation.
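
To give a flavour of the building block mentioned above, the sketch below computes a rank-one term by power iteration and then deflates and repeats, yielding a greedy low-rank expansion of a bivariate function sampled on a grid. The function, grid and rank are illustrative assumptions, not the paper's test cases.

```python
import numpy as np

def rank_one_power_iteration(A, iters=200):
    """Dominant singular triplet of A via power iteration on A^T A."""
    v = np.random.default_rng(6).standard_normal(A.shape[1])
    for _ in range(iters):
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    u = A @ v
    sigma = np.linalg.norm(u)
    return u / sigma, sigma, v

def recursive_rank_one_expansion(A, rank):
    """Greedy expansion A ~ sum_k sigma_k u_k v_k^T by repeated deflation."""
    terms, R = [], A.copy()
    for _ in range(rank):
        u, s, v = rank_one_power_iteration(R)
        terms.append((u, s, v))
        R = R - s * np.outer(u, v)          # deflate the captured mode
    return terms

x = np.linspace(0, 1, 80)[:, None]
y = np.linspace(0, 1, 60)[None, :]
A = np.exp(-(x - y) ** 2) + 0.3 * np.sin(4 * x) * np.cos(3 * y)   # sampled bivariate function
terms = recursive_rank_one_expansion(A, rank=3)
approx = sum(s * np.outer(u, v) for u, s, v in terms)
print(np.linalg.norm(A - approx) / np.linalg.norm(A))
```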

Journal ArticleDOI
TL;DR: It is shown that, given a low rank approximation, its SVD can be reached at convergence, and the efficiency of such a compression method is demonstrated.
Abstract: The proper generalized decomposition (PGD) aims at finding the solution of generic problems in the form of a low rank approximation. In contrast to the singular value decomposition (SVD), such a low rank approximation is generally not the optimal one, leading to memory issues and a loss of computational efficiency. Nonetheless, the computational cost of the SVD is generally prohibitive. In this paper, the authors suggest an algorithm to address this issue. First, the algorithm is described and studied in detail. It consists of a cheap iterative method that compresses a low rank expansion. It is shown that, given a low rank approximation, its SVD can be reached at convergence. The behavior of the method is exhibited in a numerical application. Second, the algorithm is embedded into a general space-time PGD solver to compress the iterated separated form of the solution. An application to a quasi-static frictional contact problem is illustrated. The efficiency of such a compression method is then demonstrated.
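
The paper's compression algorithm is iterative; purely as a point of reference, the sketch below shows a standard non-iterative way to recompress a separated (PGD-like) expansion stored as factor matrices, via QR factorizations of the factors and an SVD of the small core. This is explicitly not the authors' algorithm, only a related reference technique applied to synthetic data.

```python
import numpy as np

def recompress(U, V, rank):
    """Compress the low-rank factorization U V^T to the given rank without forming U V^T."""
    Qu, Ru = np.linalg.qr(U)
    Qv, Rv = np.linalg.qr(V)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)        # SVD of the small core matrix
    Uc = Qu @ W[:, :rank] * s[:rank]           # absorb the retained singular values
    Vc = Qv @ Zt[:rank, :].T
    return Uc, Vc

rng = np.random.default_rng(7)
U = rng.standard_normal((500, 40))             # redundant 40-term separated expansion
V = rng.standard_normal((300, 40))
Uc, Vc = recompress(U, V, rank=10)
err = np.linalg.norm(U @ V.T - Uc @ Vc.T) / np.linalg.norm(U @ V.T)
print(Uc.shape, Vc.shape, round(err, 3))
```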

Journal ArticleDOI
TL;DR: A new simulation protocol for finite-element model calibration in plasticity-induced fatigue, based on the hyper-reduction of mechanical models, is illustrated; it allows fewer nonlinear mechanical equations to be solved during the calibration of mechanical parameters.
Abstract: Many mechanical experiments in plasticity-induced fatigue are prepared with recourse to finite element simulations. Usual simulation outputs, like local stress estimations or lifetime predictions, are useful to choose boundary conditions and the shape of a specimen. In practice, many other numerical data are also generated by these simulations. Unfortunately, these data are ignored, although they can facilitate the calibration procedure. The focus of this paper is to illustrate a new simulation protocol for finite-element model calibration. Through the hyper-reduction of mechanical models, more data science is involved in the proposed protocol, in order to solve fewer nonlinear mechanical equations during the calibration of mechanical parameters. Usually, the location of the crack initiation is very sensitive to the heterogeneities in the material. The proposed protocol is versatile enough to focus the hyper-reduced predictions where the first crack is initiated during the fatigue test. In this paper, we restrict our attention to elastoplasticity or elastoviscoplasticity without damage or crack propagation. We propose to take advantage of the duration of both the experiment design and the experimental protocol to collect numerical data aiming to reduce the computational complexity of the calibration procedure. Until experimental data are available, we have time to prepare the calibration by substituting numerical data for nonlinear equations. This substitution is performed by recourse to the hyper-reduction method (Ryckelynck in J Comput Phys 202(1):346–366, 2005; Int J Numer Method Eng 77(1):75–89, 2009). A hyper-reduced order model involves a reduced basis for the displacement approximation, a reduced basis for stress predictions and a reduced integration domain for the setting of the reduced governing equations. The reduced integration domain incorporates a zone of interest that covers the location of the crack initiation. This zone of interest is updated according to experimental observations performed during the fatigue test. Bending experiments have been performed to study the influence of a grain boundary on AM1 superalloy oligocyclic fatigue at high temperature. The proposed hyper-reduction framework is shown to be relevant for the modeling of these experiments. To account for the microstructure generated by a real industrial casting process, the specimen has been machined from a turbine blade. The model calibration aims to identify the loading condition applied to the specimen in order to estimate the stress at the point where the first crack is initiated, before crack propagation. The model parameters are related to the load distribution on the specimen. The calibration speed-up obtained by hyper-reduction is almost 1000, including the update of the reduced integration domain focused on the experimental location of the crack initiation. The related electric-energy saving is 99.9 %.

Journal ArticleDOI
TL;DR: Though existing methods are improved by an order of magnitude, preparing them for use with a full car model, they still remain heuristic approaches that need to be supervised by experienced engineers.
Abstract: Car crash simulations need a lot of computation time. Model reduction can be applied in order to gain time savings. Due to the highly nonlinear nature of a crash, an automatic separation into parts behaving linearly and nonlinearly is valuable for the subsequent model reduction. We analyze existing preprocessing and clustering methods like k-means and spectral clustering for their suitability in identifying nonlinear behavior. Based on these results, we improve existing algorithms and develop new ones which are especially suited for crash simulations. They are objectively rated with measures and compared with engineering experience. In future work, this analysis can be used to choose appropriate model reduction techniques for specific parts of a car. A crossmember of a 2001 Ford Taurus finite element model serves as an industrial-sized example. Since a non-intrusive black box approach is assumed, only heuristic approaches are possible. We show that our methods are superior in terms of simplicity, quality and speed. They also free the user from arbitrarily setting parameters in clustering algorithms. Though we improved existing methods by an order of magnitude, preparing them for use with a full car model, they still remain heuristic approaches that need to be supervised by experienced engineers.
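
As a toy illustration of the kind of clustering-based separation discussed above (not the paper's improved algorithms), the sketch below derives a simple nonlinearity feature from synthetic nodal time histories and clusters it with k-means; the feature choice and data are assumptions made for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 50)
linear_nodes = np.outer(rng.uniform(0.5, 1.5, 200), t)            # ~linearly behaving nodes
nonlinear_nodes = np.outer(rng.uniform(0.5, 1.5, 50), t ** 3)     # strongly nonlinear nodes
histories = np.vstack([linear_nodes, nonlinear_nodes])

def nonlinearity_feature(h):
    """Relative deviation of a time history from its best straight-line fit."""
    coeff = np.polyfit(t, h, 1)
    return np.linalg.norm(h - np.polyval(coeff, t)) / (np.linalg.norm(h) + 1e-12)

features = np.array([[nonlinearity_feature(h)] for h in histories])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels[:200]), np.bincount(labels[200:]))        # cluster counts per group
```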

Journal ArticleDOI
TL;DR: An overview of the possibilities of the finite increment calculus (FIC) approach for deriving computational methods in mechanics with improved numerical properties for stability and accuracy is presented.
Abstract: In this paper we present an overview of the possibilities of the finite increment calculus (FIC) approach for deriving computational methods in mechanics with improved numerical properties for stability and accuracy. The basic concepts of the FIC procedure are presented in its application to problems of advection-diffusion-reaction, fluid mechanics and fluid-structure interaction solved with the finite element method (FEM). Examples of the good features of the FIC/FEM technique for solving some of these problems are given. A brief outline of the possibilities of the FIC/FEM approach for error estimation and mesh adaptivity is given.