Journal ISSN: 2213-7467

Advanced Modeling and Simulation in Engineering Sciences 

SpringerOpen
About: Advanced Modeling and Simulation in Engineering Sciences is an open-access academic journal published by SpringerOpen. The journal publishes mainly in the areas of the finite element method and computer science. Its ISSN is 2213-7467. Over its lifetime it has published 253 papers, which have received 2945 citations.

Papers published on a yearly basis

Papers
Journal Article · DOI
TL;DR: The essential differences between a model and a digital twin are highlighted, some of the key benefits of using digital twins are outlined, and directions for further research are suggested to fully exploit the potential of the approach.
Abstract: “When I use a word, it means whatever I want it to mean”: Humpty Dumpty in Through the Looking-Glass, Lewis Carroll. “Digital twin” is currently a term applied in a wide variety of ways. Some differences are variations from sector to sector, but definitions within a sector can also vary significantly. Within engineering, claims are made regarding the benefits of using digital twinning for design, optimisation, process control, virtual testing, predictive maintenance, and lifetime estimation. In many of its usages, the distinction between a model and a digital twin is not made clear. The danger of this variety and vagueness is that a poor or inconsistent definition and explanation of a digital twin may lead people to reject it as just hype, so that once the hype and the inevitable backlash are over, the final level of interest and use (the “plateau of productivity”) may fall well below the maximum potential of the technology. The basic components of a digital twin (essentially a model and some data) are generally comparatively mature and well understood. Many of the aspects of using data in models are similarly well understood, from long experience in model validation and verification and from the development of boundary, initial, and loading conditions from measured values. However, many interesting open questions exist: some connected with the volume and speed of data, some connected with reliability and uncertainty, and some to do with dynamic model updating. In this paper we highlight the essential differences between a model and a digital twin, outline some of the key benefits of using digital twins, and suggest directions for further research to fully exploit the potential of the approach.

236 citations
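The abstract above characterizes a digital twin as essentially a model plus data, with dynamic model updating among the open questions. As a purely illustrative sketch (not from the paper), the following Python snippet contrasts a plain model, whose parameters are fixed at design time, with a twin that assimilates sensor readings from the physical asset; the spring model, the true stiffness, and the update gain are all hypothetical.

```python
import random

# A plain model: fixed parameters, no connection to the physical asset.
class Model:
    def __init__(self, stiffness: float):
        self.stiffness = stiffness  # set once at design time

    def predict(self, load: float) -> float:
        # linear spring: displacement = load / stiffness
        return load / self.stiffness

# A digital twin (in the paper's sense, sketched loosely): the same model,
# plus a live data feed and dynamic updating of its parameters.
class DigitalTwin(Model):
    def assimilate(self, load: float, measured_displacement: float,
                   gain: float = 0.5) -> None:
        # naive update: nudge the stiffness so predictions track the sensor
        implied = load / measured_displacement
        self.stiffness += gain * (implied - self.stiffness)

twin = DigitalTwin(stiffness=100.0)
for _ in range(20):
    load = 50.0
    # stand-in for a sensor reading from an asset whose true stiffness is 80
    measured = load / 80.0 + random.gauss(0.0, 1e-4)
    twin.assimilate(load, measured)
print(f"updated stiffness: {twin.stiffness:.1f}")  # drifts toward ~80
```

The point of the sketch is the distinction the paper draws: the model alone never changes, while the twin stays synchronized with the measured state of its physical counterpart.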

Journal Article · DOI
TL;DR: This paper presents an algorithmic subdivision approach that aims to be as robust as the spacetree decomposition even for close-to-degenerate cases—but remains geometrically accurate at the same time.
Abstract: One question in the context of immersed boundary or fictitious domain methods is how to compute discontinuous integrands in cut elements accurately. A frequently used method is to apply a composed Gaussian quadrature based on a spacetree subdivision. Although this approach works robustly on any geometry, the resulting integration mesh yields a low order representation of the boundary. If high order shape functions are employed to approximate the solution, this lack of geometric approximation power prevents exponential convergence in the asymptotic range. In this paper we present an algorithmic subdivision approach that aims to be as robust as the spacetree decomposition even for close-to-degenerate cases—but remains geometrically accurate at the same time. Based on 2D numerical examples, we will show that optimal convergence rates can be obtained with a nearly optimal number of integration points.

82 citations
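For context, a baseline version of the composed Gaussian quadrature that the paper improves upon can be sketched in a few lines of Python: a quadtree recursively splits cut cells, and a tensor-product Gauss rule is applied on each leaf. The disk geometry, the 2-point rule, and the maximum depth are arbitrary choices for the illustration; the paper's geometrically accurate subdivision is not reproduced here.

```python
import numpy as np

# 2-point Gauss-Legendre rule on [-1, 1], tensorized per cell
gp, gw = np.polynomial.legendre.leggauss(2)

def inside(x, y):
    # fictitious-domain indicator: a disk of radius 0.6 centred at the origin
    return x**2 + y**2 <= 0.36

def is_cut(x0, y0, x1, y1):
    corners = [inside(x, y) for x in (x0, x1) for y in (y0, y1)]
    return any(corners) and not all(corners)

def integrate(f, x0, y0, x1, y1, depth=0, max_depth=6):
    # spacetree subdivision: split cut cells until max_depth is reached
    if is_cut(x0, y0, x1, y1) and depth < max_depth:
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        return sum(integrate(f, a, b, c, d, depth + 1, max_depth)
                   for a, b, c, d in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                                      (x0, ym, xm, y1), (xm, ym, x1, y1)])
    # composed Gauss quadrature on the (possibly tiny) leaf cell
    hx, hy = 0.5 * (x1 - x0), 0.5 * (y1 - y0)
    return sum(gw[i] * gw[j] * hx * hy
               * f(x0 + hx * (gp[i] + 1.0), y0 + hy * (gp[j] + 1.0))
               for i in range(len(gp)) for j in range(len(gp)))

# discontinuous integrand: 1 inside the disk, 0 outside
area = integrate(lambda x, y: 1.0 if inside(x, y) else 0.0, 0.0, 0.0, 1.0, 1.0)
print(area, "vs quarter-disk area", np.pi * 0.36 / 4)
```

As the abstract notes, this kind of axis-aligned subdivision is robust but yields only a low-order representation of the boundary, which is exactly the limitation the paper's algorithmic subdivision addresses.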

Journal Article · DOI
TL;DR: A novel model reduction approach is proposed for the approximation of nonlinear hyperbolic equations in the scalar and system cases; it has the potential to be accurate with only very few modes, generating physically acceptable, oscillation-free solutions.
Abstract: We propose a novel model reduction approach for the approximation of nonlinear hyperbolic equations in the scalar and system cases. The approach relies on an offline computation of a dictionary of solutions together with an online $L^1$-norm minimization of the residual. It is shown why this is a natural framework for hyperbolic problems, and the method is tested on nonlinear problems such as Burgers' equation and the one-dimensional Euler equations involving shocks and discontinuities. Efficient algorithms are presented for the computation of the $L^1$-norm minimizer, for both linear and nonlinear residuals. Results indicate that the method has the potential to be accurate with only very few modes, generating physically acceptable, oscillation-free solutions.

71 citations
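The paper's dedicated algorithms for the $L^1$ minimizer are not reproduced here, but the core idea, selecting dictionary coefficients by minimizing the $L^1$ norm of a linear residual, can be sketched with the textbook linear-programming reformulation: minimize the sum of slack variables t subject to -t <= Ax - b <= t. The dictionary, target state, and shock-like deviation below are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    # Minimize ||A x - b||_1 via the LP:  min sum(t)  s.t.  -t <= A x - b <= t
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])        # objective: sum of t
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)] * m)
    return res.x[:n]

# dictionary of 'snapshots' (columns) and a target state with a sharp outlier
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 5))        # stand-in for stored offline solutions
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
u = D @ x_true
u[10] += 5.0                             # a localized, shock-like deviation
print(l1_fit(D, u))                      # close to x_true despite the outlier
```

The robustness of the $L^1$ norm to such localized deviations is one reason the abstract calls it a natural framework for solutions with shocks and discontinuities.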

Journal Article · DOI
TL;DR: A non-intrusive procedure is developed in which the 4D sensitivity fields are obtained with a commercial finite element code, allowing great versatility in meshing and in the incorporation of complex constitutive laws.
Abstract: The goal of the present study is to illustrate the full integration of sensor and imaging data into numerical procedures for the identification of constitutive laws and their validation. The feasibility of such approaches is proven in the context of in situ tests monitored by tomography. The bridging tool consists of spatiotemporal (i.e., 4D) analyses with dedicated (integrated) correlation algorithms. A tensile test on a nodular graphite cast iron sample is performed within a lab tomograph. The reconstructed volumes are registered via integrated digital volume correlation (DVC) that incorporates a finite element model of the test, thereby performing a mechanical integration in the 4D registration of a series of 3D images. In the present case a non-intrusive procedure is developed in which the 4D sensitivity fields are obtained with a commercial finite element code, allowing great versatility in meshing and in the incorporation of complex constitutive laws. Convergence studies can thus be performed in which the quality of the discretization is controlled for both the simulation and the registration. Incremental DVC analyses are carried out with the scans acquired during the in situ mechanical test. For DVC, the mesh size results from a compromise between measurement uncertainties and spatial resolution. Conversely, a numerically good mesh may prove too fine for the considered material microstructure. With the integrated framework proposed herein, 4D registrations can be performed, and the missing boundary conditions of the reference state as well as the mechanical parameters of an elastoplastic constitutive law are determined under fair conditions both for DVC and for the simulation.

64 citations
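Integrated DVC couples finite element shape functions with 3D grey-level images; as a drastically reduced illustration of the registration idea only, the sketch below solves a one-degree-of-freedom 1D analogue with Gauss-Newton iterations on the correlation residual. The signal, the imposed shift, and the tolerance are invented for the example.

```python
import numpy as np

# 1D stand-ins for the grey-level volumes: f = reference scan, g = deformed scan
x = np.linspace(0.0, 1.0, 400)
f = np.exp(-((x - 0.5) / 0.08) ** 2)     # reference 'image'
u_true = 0.02
g = np.interp(x - u_true, x, f)          # deformed image: g(x) = f(x - u_true)

# single-dof registration: find u minimizing sum_x (f(x) - g(x + u))^2
dfdx = np.gradient(f, x)                 # image gradient plays the sensitivity role
u = 0.0
for _ in range(50):                      # Gauss-Newton iterations
    r = f - np.interp(x + u, x, g)       # correlation residual
    du = np.sum(dfdx * r) / np.sum(dfdx * dfdx)
    u += du
    if abs(du) < 1e-12:
        break
print(f"measured displacement u = {u:.4f} (true {u_true})")
```

In the paper's integrated variant, the single scalar u is replaced by the degrees of freedom of a finite element displacement field, with the 4D sensitivity fields supplied by a commercial FE code rather than by the image gradient alone.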

Journal Article · DOI
TL;DR: It is shown that the percentage of recombined hexahedra strongly depends on the location of the vertices in the initial 3D mesh, that the execution times are reasonable, and that non-conformal quadrilateral faces adjacent to triangular faces remain present in the final meshes.
Abstract: Indirect quad mesh generation methods rely on an initial triangular mesh. So-called triangle-merge techniques are then used to recombine the triangles of the initial mesh into quadrilaterals. This way, high-quality full-quad meshes suitable for finite element calculations can be generated for arbitrary two-dimensional geometries. In this paper, a similar indirect approach is applied to the three-dimensional case, i.e., a method to recombine tetrahedra into hexahedra. Contrary to the 2D case, a 100% recombination rate is seldom attained in 3D. Instead, some of the remaining tetrahedra are combined into prisms and pyramids, eventually yielding a mixed mesh. We show that the percentage of recombined hexahedra strongly depends on the location of the vertices in the initial 3D mesh. If the vertices are placed at random, less than 50% of the tetrahedra will be combined into hexahedra. In order to reach larger ratios, the vertices of the initial mesh need to be organized beforehand into a lattice-like structure. This can be achieved with a frontal algorithm, which is applicable to both the two- and three-dimensional cases. The quality of the vertex alignment inside the volumes relies on the quality of the alignment on the surfaces. Once the vertex placement process is completed, the region is tetrahedralized with a Delaunay kernel. A maximum number of tetrahedra are then merged into hexahedra using the algorithm of Yamakawa-Shimada. Non-uniform mixed meshes obtained with our approach show a volume percentage of hexahedra that usually exceeds 80%. The execution times are reasonable. However, non-conformal quadrilateral faces adjacent to triangular faces are present in the final meshes.

55 citations
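The 3D recombination algorithm of Yamakawa-Shimada is involved; the toy sketch below only illustrates the 2D triangle-merge idea mentioned at the start of the abstract, greedily pairing triangles that share an edge into quadrilaterals on a small structured patch. Quality-driven matching, which real indirect methods rely on, is deliberately omitted, and the mesh is hypothetical.

```python
import itertools

# triangulated 3x3 structured patch: vertices indexed i + 4*j on a 4x4 grid
tris = []
for j in range(3):
    for i in range(3):
        a, b, c, d = i + 4*j, (i+1) + 4*j, (i+1) + 4*(j+1), i + 4*(j+1)
        tris += [(a, b, c), (a, c, d)]   # two triangles per grid cell

def merge(t1, t2):
    # If t1 and t2 share an edge, return the merged quad's vertex loop, else None.
    shared = set(t1) & set(t2)
    if len(shared) != 2:
        return None
    e0, e1 = shared
    p1 = next(v for v in t1 if v not in shared)
    p2 = next(v for v in t2 if v not in shared)
    return (p1, e0, p2, e1)              # loop alternating around the shared edge

# greedy triangle-merge: pair each unused triangle with some neighbour
used, quads = set(), []
for t1, t2 in itertools.combinations(tris, 2):
    if t1 in used or t2 in used:
        continue
    q = merge(t1, t2)
    if q is not None:
        quads.append(q)
        used.update([t1, t2])

print(f"{len(quads)} quads from {len(tris)} triangles; "
      f"{len(tris) - 2 * len(quads)} triangles left over")
```

On a vertex lattice like this one, every triangle finds a partner; as the abstract explains, achieving a comparably favourable vertex layout in 3D is exactly what the frontal placement algorithm is for.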

Performance Metrics
No. of papers from the Journal in previous years
Year    Papers
2023    8
2022    40
2021    26
2020    47
2019    12
2018    32