
Showing papers in "International Journal for Numerical Methods in Engineering in 2013"


Journal ArticleDOI
TL;DR: In this paper, a phase-field model for cohesive fracture is developed by casting the cohesive zone approach in an energetic framework suitable for phase-field approximations. Particular emphasis is placed on the Dirichlet boundary conditions that arise in the phase-field approximation, on the sensitivity to the parameter that balances the field and boundary contributions, and on the relation to gradient-enhanced damage models.
Abstract: In this paper, a phase-field model for cohesive fracture is developed. After casting the cohesive zone approach in an energetic framework, which is suitable for incorporation in phase-field approaches, the phase-field approach to brittle fracture is recapitulated. The approximation to the Dirac function is discussed with particular emphasis on the Dirichlet boundary conditions that arise in the phase-field approximation. The accuracy of the discretisation of the phase field, including the sensitivity to the parameter that balances the field and the boundary contributions, is assessed by means of a simple example. The relation to gradient-enhanced damage models is highlighted, and some comments on the similarities and the differences between phase-field approaches to fracture and gradient-damage models are made. A phase-field representation for cohesive fracture is elaborated, starting from the aforementioned energetic framework. The strong as well as the weak formats are presented, the latter being the starting point for the ensuing finite element discretisation, which involves three fields: the displacement field, an auxiliary field that represents the jump in the displacement across the crack, and the phase field. Compared to phase-field approaches for brittle fracture, the modelling of the jump of the displacement across the crack is a complication, and the current work provides evidence that an additional constraint is needed: the auxiliary field must be constant in the direction orthogonal to the crack. The sensitivity of the results with respect to the numerical parameter needed to enforce this constraint is investigated, as well as how the results depend on the orders of the discretisation of the three fields. Finally, examples are given that demonstrate grid insensitivity for adhesive and for cohesive failure, the latter example being somewhat limited because only straight crack propagation is considered.
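As background for the brittle phase-field formulation that the paper recapitulates before extending it to the cohesive case, the regularised crack surface and the associated energy functional can be sketched as follows (a standard textbook form shown for orientation only; the length scale \ell and the degradation function g(d) = (1-d)^2 are the usual choices rather than the paper's specific ones):

\[
\Gamma_\ell(d) = \int_\Omega \left( \frac{d^2}{2\ell} + \frac{\ell}{2}\,|\nabla d|^2 \right) \mathrm{d}V,
\qquad
E(\mathbf{u}, d) = \int_\Omega g(d)\,\psi_e\!\left(\boldsymbol{\varepsilon}(\mathbf{u})\right)\mathrm{d}V + G_c\,\Gamma_\ell(d),
\]

where d is the phase field (d = 1 on the crack), \psi_e the elastic energy density, and G_c the fracture toughness.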

309 citations


Journal ArticleDOI
TL;DR: In this article, a simple and efficient algorithm for FEM-based computational fracture of plates and shells is proposed, based on edge rotation and load control; the modified node positions maximize the mesh quality while complying with the predicted crack path (which depends on the specific propagation theory in use).
Abstract: We propose a simple and efficient algorithm for FEM-based computational fracture of plates and shells (cf. [1]) with both brittle and ductile materials based on edge rotation and load control. Rotation axes are the crack front nodes, and each crack front edge in surface discretizations affects the position of only one or two nodes. Modified positions of the entities maximize the mesh quality while complying with the predicted crack path (which depends on the specific propagation theory in use). Compared with XFEM or with classical tip remeshing, the proposed solution has algorithmic and generality advantages. The propagation algorithm is simpler than the aforementioned alternatives, and the approach is independent of the underlying element used for discretization. For history-dependent materials, some transfer of relevant quantities between elements is still required. However, diffusion of results is more limited than with tip or full remeshing. To illustrate the advantages of our approach, three prototype models are used: tip energy dissipation (LEFM), cohesive-zone approaches and ductile fracture. Both the Sutton crack path criterion and the path estimated by the Eshelby tensor are employed. Traditional fracture benchmarks, including one with plastic hinges, and newly proposed verification tests are solved. The results were found to be very good in terms of crack path and load/deflection accuracy.

241 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a computational framework for diffusive fracture for dynamic problems that allows the simulation of complex evolving crack topologies based on the introduction of a local history field that contains a maximum reference energy obtained in the deformation history.
Abstract: SUMMARY The numerical modeling of dynamic failure mechanisms in solids due to fracture based on sharp crack discontinuities suffers in situations with complex crack topologies and demands the formulation of additional branching criteria. This drawback can be overcome by a diffusive crack modeling, which is based on the introduction of a crack phase field. Following our recent works on quasi-static modeling of phase-field-type brittle fracture, we propose in this paper a computational framework for diffusive fracture for dynamic problems that allows the simulation of complex evolving crack topologies. It is based on the introduction of a local history field that contains a maximum reference energy obtained in the deformation history, which may be considered as a measure of the maximum tensile strain in the history. This local variable drives the evolution of the crack phase field. Its introduction provides a very transparent representation of the balance equation that governs the diffusive crack topology. In particular, it allows for the construction of a very robust algorithmic treatment for elastodynamic problems of diffusive fracture. Here, we extend the recently proposed operator split scheme from quasi-static to dynamic problems. In a typical time step, it successively updates the history field, the crack phase field, and finally the displacement field. We demonstrate the performance of the phase field formulation of fracture by means of representative numerical examples, which show the evolution of complex crack patterns under dynamic loading. Copyright © 2012 John Wiley & Sons, Ltd.
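The role of the local history field described above can be summarised with the following relations (a generic sketch of this class of formulations, not necessarily the paper's exact notation): the history field stores the maximum tensile part of the elastic energy attained so far, and it drives the phase-field evolution,

\[
\mathcal{H}(\mathbf{x}, t) = \max_{s \in [0, t]} \psi^{+}\!\left(\boldsymbol{\varepsilon}(\mathbf{x}, s)\right),
\qquad
\frac{G_c}{\ell}\left( d - \ell^2 \Delta d \right) = 2\,(1 - d)\,\mathcal{H},
\]

so that in the operator split a time step first updates \mathcal{H}, then solves this equation for the crack phase field d, and finally updates the displacement field.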

216 citations


Journal ArticleDOI
TL;DR: In this article, an extension of Nitsche's method for elasticity problems in the framework of higher order and higher continuity approximation schemes such as the B-spline and non-uniform rational basis spline version of the finite cell method or isogeometric analysis on trimmed geometries is presented.
Abstract: SUMMARY Enforcing essential boundary conditions plays a central role in immersed boundary methods. Nitsche's idea has proven to be a reliable concept to weakly satisfy boundary and interface constraints. We formulate an extension of Nitsche's method for elasticity problems in the framework of higher order and higher continuity approximation schemes such as the B-spline and non-uniform rational basis spline version of the finite cell method or isogeometric analysis on trimmed geometries. Furthermore, we illustrate a significant improvement of the flexibility and applicability of this extension in the modeling process of complex 3D geometries. With several benchmark problems, we demonstrate the overall good convergence behavior of the proposed method and its good accuracy. We provide extensive studies on the stability of the method, its influence parameters and numerical properties, and a rearrangement of the numerical integration concept that in many cases reduces the numerical effort by a factor of two. A newly composed boundary integration concept further enhances the modeling process and allows a flexible, discretization-independent introduction of boundary conditions. Finally, we present our strategy in the framework of the modeling and isogeometric analysis process of trimmed non-uniform rational basis spline geometries. Copyright © 2013 John Wiley & Sons, Ltd.
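For orientation, the symmetric Nitsche formulation for weakly imposing a Dirichlet condition \mathbf{u} = \mathbf{g} on \Gamma_D in linear elasticity can be sketched as (a textbook form with stabilisation parameter \beta \sim 1/h, not the paper's specific variant for trimmed geometries): find \mathbf{u} such that for all test functions \mathbf{v}

\[
\int_\Omega \boldsymbol{\sigma}(\mathbf{u}) : \boldsymbol{\varepsilon}(\mathbf{v})\,\mathrm{d}V
- \int_{\Gamma_D} \left(\boldsymbol{\sigma}(\mathbf{u})\mathbf{n}\right)\cdot\mathbf{v}\,\mathrm{d}A
- \int_{\Gamma_D} \left(\boldsymbol{\sigma}(\mathbf{v})\mathbf{n}\right)\cdot(\mathbf{u}-\mathbf{g})\,\mathrm{d}A
+ \beta \int_{\Gamma_D} (\mathbf{u}-\mathbf{g})\cdot\mathbf{v}\,\mathrm{d}A
= \int_\Omega \mathbf{f}\cdot\mathbf{v}\,\mathrm{d}V ,
\]

where the consistency and symmetry terms keep the formulation variationally consistent and the penalty-like term with \beta restores coercivity.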

177 citations


Journal ArticleDOI
TL;DR: In this article, a new method for the numerical integration over curved surfaces and volumes defined by a level set function is proposed; it is based on the solution of a small linear system arising from a simplified variant of the moment-fitting equations.
Abstract: We introduce a new method for the numerical integration over curved surfaces and volumes defined by a level set function. The method is based on the solution of a small linear system arising from a simplified variant of the moment-fitting equations. Numerical experiments suggest that the accuracy of the resulting quadrature rules exceeds the accuracy of traditional methods by orders of magnitude. Using moments up to an order of p, the measured experimental orders of convergence exceed h^p. Consequently, their construction is very efficient because only coarse computational grids are required. The conceptual simplicity allows for the application to very general grid types, which is demonstrated by numerical experiments on quadrilateral, triangular and hexahedral grids.
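The moment-fitting idea can be illustrated with a deliberately simple one-dimensional sketch (all names and the cut location a are hypothetical): weights at fixed points inside the "cut" part of a cell are obtained by solving a small linear system that forces the rule to reproduce the exact moments of a monomial basis.

```python
import numpy as np
from scipy.integrate import quad

# 1-D illustration of moment fitting (a sketch, not the paper's algorithm):
# find weights w_j at fixed points x_j inside the cut cell [0, a] of [0, 1]
# such that the rule reproduces the exact monomial moments over [0, a].
a = 0.37                                    # location of the level-set cut
p = 4                                       # highest monomial order used
x = np.linspace(0.0, a, p + 1)              # fixed quadrature points in the cut region

# Moment-fitting system: sum_j x_j^i * w_j = int_0^a x^i dx = a^(i+1)/(i+1)
A = np.vander(x, p + 1, increasing=True).T  # row i: monomial x^i at the points
b = np.array([a**(i + 1) / (i + 1) for i in range(p + 1)])
w = np.linalg.lstsq(A, b, rcond=None)[0]    # small (least-squares) solve

# Check: the resulting rule integrates a smooth test function over [0, a]
f = lambda t: np.exp(t) * np.cos(3.0 * t)
exact, _ = quad(f, 0.0, a)
print(abs(w @ f(x) - exact))                # small error for this smooth integrand
```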

172 citations


Journal ArticleDOI
TL;DR: In this article, a Chebyshev inclusion function based on the truncated Chebyshev series is proposed to achieve sharper and tighter bounds for meaningful solutions of interval functions and to effectively handle the overestimation caused by the wrapping effect that is intrinsic to interval computations.
Abstract: This study proposes a new uncertainty analysis method for the multibody dynamics of mechanical systems based on Chebyshev inclusion functions. The interval model accounts for the uncertainties in multibody mechanical systems comprising uncertain-but-bounded parameters, which only requires lower and upper bounds of the uncertain parameters, without having to know their probability distributions. A Chebyshev inclusion function based on the truncated Chebyshev series, rather than the Taylor inclusion function, is proposed to achieve sharper and tighter bounds for meaningful solutions of interval functions and to effectively handle the overestimation caused by the wrapping effect, which is intrinsic to interval computations. The Mehler integral is used to evaluate the coefficients of the Chebyshev polynomials in the numerical implementation. The multibody dynamics of mechanical systems are governed by index-3 differential algebraic equations (DAEs), comprising a combination of differential equations and algebraic equations responsible for the dynamics of the system subject to certain constraints. The proposed interval method with Chebyshev inclusion functions is applied to solve the DAEs in association with appropriate numerical solvers. This study employs HHT-I3 as the numerical solver to transform the DAEs into a series of nonlinear algebraic equations at each integration time step, which are solved further by using the Newton–Raphson iterative method at the current time step. Two typical multibody dynamic systems with interval parameters, the slider crank and double pendulum mechanisms, are employed to demonstrate the effectiveness of the proposed methodology. The results show that the proposed methodology can supply sufficient numerical accuracy with a reasonable computational cost and is able to effectively handle the wrapping effect, as cosine functions are incorporated to sharpen the range of non-monotonic interval functions. Copyright © 2013 John Wiley & Sons, Ltd.
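A minimal sketch of the Chebyshev inclusion idea (illustrative only; the test function, degree and node count below are assumptions): the coefficients of a truncated Chebyshev series are evaluated with the discrete form of the Mehler (Gauss-Chebyshev) integral, and an enclosure of the function over the interval follows from |T_j| <= 1.

```python
import numpy as np

def chebyshev_inclusion(f, lo, hi, degree=8, n_nodes=64):
    """Bound f over [lo, hi] by a truncated Chebyshev series (sketch).

    Coefficients come from the discrete form of the Mehler integral; the
    returned enclosure ignores the (small) truncation error of the series.
    """
    k = np.arange(n_nodes)
    theta = (2 * k + 1) * np.pi / (2 * n_nodes)        # Gauss-Chebyshev angles
    x = 0.5 * (hi + lo) + 0.5 * (hi - lo) * np.cos(theta)
    fx = f(x)
    # c_j = (2/pi) * int_0^pi f(cos th) cos(j th) dth, evaluated by quadrature
    c = np.array([(2.0 / n_nodes) * np.sum(fx * np.cos(j * theta))
                  for j in range(degree + 1)])
    center = 0.5 * c[0]                                 # T_0 carries weight 1/2
    radius = np.sum(np.abs(c[1:]))                      # since |T_j(t)| <= 1
    return center - radius, center + radius

# Example: enclose a non-monotonic response over an interval parameter
print(chebyshev_inclusion(lambda x: np.sin(x) + 0.3 * x**2, 0.0, 3.0))
```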

167 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the solution errors of PDEs due to quadrature inaccuracy can be significantly reduced when the variationally inconsistent methods are corrected with the proposed method, and consequently the optimal convergence rate can be either partially or fully restored.
Abstract: The rate of convergence in Galerkin methods for solving boundary value problems is determined by the order of completeness in the trial space and the order of accuracy in the domain integration. If insufficiently accurate domain integration is employed, the optimal convergence rate cannot be attained. For meshfree methods, accurate domain integration is difficult to achieve without costly high order quadrature, and the lack of accuracy in the domain integration may lead to sub-optimal convergence, or even solutions that diverge with refinement. The difficulty in domain integration is due to the overlap of the shape function supports and the rational nature of the shape functions themselves. This dissertation introduces a general framework to achieve the optimal order of convergence consistent with the order of the trial space without high order quadrature. First, the conditions for achieving arbitrary order exactness in a boundary value problem using the Galerkin approximation with quadrature are derived. The conditions are derived in a general form and are applicable to all types of problems: the test function gradients in the Galerkin approximation should be consistent with the chosen numerical integration, and this is termed variational consistency. Specifically, integration by parts of the inner product between the test function and the differential operator acting on the desired exact solution should hold when evaluated with the chosen quadrature. Specific problems are then considered with the conditions explicitly stated, including elasticity, the Euler-Bernoulli beam, the Kirchhoff-Love plate, and the non-linear formulation of solid mechanics. Treating the type of numerical integration as a given, test function gradients are then constructed to satisfy this condition. The resulting method is arbitrarily high order exact and applicable to all types of integration. The method is then used as a correction to several commonly used numerical integration methods and applied to the various boundary value problems. It is demonstrated that the error induced by numerical integration is greatly reduced, and optimal convergence is either partially or fully restored. Further, it is shown that the variationally consistent integration methods are more effective than their standard counterparts in terms of computing time and solution error.
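In its lowest-order (linear exactness) form, the variational consistency condition reduces to the well-known integration constraint, shown here for orientation (with \hat{\int} denoting the chosen quadrature rule and \Psi_I the test functions):

\[
\hat{\int}_{\Omega} \nabla \Psi_I \,\mathrm{d}\Omega \;=\; \hat{\int}_{\partial\Omega} \Psi_I\, \mathbf{n}\,\mathrm{d}\Gamma \qquad \text{for all } I,
\]

that is, the divergence theorem must hold exactly for each test function when both sides are evaluated with the same numerical integration; higher-order exactness generalises this requirement to higher-order polynomial fields.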

137 citations


Journal ArticleDOI
TL;DR: The proposed method turns out to be particularly useful for a variety of earthquake engineering problems, such as the modeling of dynamic soil-structure and site-city interaction effects, where accounting for multiscale wave propagation phenomena as well as sharp discontinuities in the mechanical properties of the media is crucial.
Abstract: SUMMARY This work presents a new high performance open-source numerical code, namely SPectral Elements in Elastodynamics with Discontinuous Galerkin, for seismic wave propagation analysis in visco-elastic heterogeneous three-dimensional media at both local and regional scales. Based on non-conforming high-order techniques, such as the discontinuous Galerkin spectral approximation, along with efficient and scalable algorithms, the code allows one to deal with a non-uniform polynomial degree distribution as well as a locally varying mesh size. Validation benchmarks are illustrated to check the accuracy, stability, and performance features of the parallel kernel, whereas illustrative examples are discussed to highlight the engineering applications of the method. The proposed method turns out to be particularly useful for a variety of earthquake engineering problems, such as the modeling of dynamic soil-structure and site-city interaction effects, where accounting for multiscale wave propagation phenomena as well as sharp discontinuities in the mechanical properties of the media is crucial. Copyright © 2013 John Wiley & Sons, Ltd.

120 citations


Journal ArticleDOI
TL;DR: In this paper, the CPDI2 algorithm is proposed to more accurately track particle domains as quadrilaterals in 2-D (hexahedra in 3-D) by removing overlaps or gaps between particle domains.
Abstract: SUMMARY Convected particle domain interpolation (CPDI) is a recently developed extension of the material point method, in which the shape functions on the overlay grid are replaced with alternative shape functions, which (by coupling with the underlying particle topology) facilitate efficient and algorithmically straightforward evaluation of grid node integrals in the weak formulation of the governing equations. In the original CPDI algorithm, herein called CPDI1, particle domains are tracked as parallelograms in 2-D (or parallelepipeds in 3-D). In this paper, the CPDI method is enhanced to more accurately track particle domains as quadrilaterals in 2-D (hexahedra in 3-D). This enhancement will be referred to as CPDI2. Not only does this minor revision remove overlaps or gaps between particle domains, it also provides flexibility in choosing particle domain shape in the initial configuration and sets a convenient conceptual framework for enrichment of the fields to accurately solve weak discontinuities in the displacement field across a material interface that passes through the interior of a grid cell. The new CPDI2 method is demonstrated, with and without enrichment, using one-dimensional and two-dimensional examples. Copyright © 2013 John Wiley & Sons, Ltd.
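The central idea of tracking a particle domain by its corners and weighting grid nodes by the average of their shape functions over that quadrilateral can be sketched as follows (a simplified illustration under assumed bilinear grid functions and Gauss quadrature, not the closed-form CPDI2 expressions of the paper):

```python
import numpy as np

def hat(x, xn, h):
    """1-D linear grid shape function centred at node xn with spacing h."""
    return np.maximum(0.0, 1.0 - np.abs(x - xn) / h)

def particle_node_weight(corners, node, h, ngauss=2):
    """Average of the bilinear grid basis of 'node' over the quadrilateral
    particle domain 'corners' (4x2, counter-clockwise), by Gauss quadrature
    on the mapped reference square."""
    gp, gw = np.polynomial.legendre.leggauss(ngauss)
    total, area = 0.0, 0.0
    for xi, wx in zip(gp, gw):
        for eta, wy in zip(gp, gw):
            # bilinear map from the reference square to the physical quad
            N = 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                                 (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])
            dNdxi = 0.25 * np.array([-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)])
            dNdeta = 0.25 * np.array([-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)])
            x = N @ corners
            J = np.array([dNdxi @ corners, dNdeta @ corners])   # 2x2 Jacobian
            dA = abs(np.linalg.det(J)) * wx * wy
            total += hat(x[0], node[0], h) * hat(x[1], node[1], h) * dA
            area += dA
    return total / area

# A deformed (non-parallelogram) particle domain tracked by its four corners
corners = np.array([[0.10, 0.10], [0.60, 0.20], [0.70, 0.80], [0.05, 0.60]])
print(particle_node_weight(corners, node=(0.0, 0.0), h=1.0))
```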

119 citations


Journal ArticleDOI
TL;DR: In this paper, a comparison of high-order and linear Galerkin methods for implicit solvers is presented, and it is shown that the high order methods are more efficient than linear ones.
Abstract: SUMMARY To evaluate the computational performance of high-order elements, a comparison based on operation count is proposed instead of runtime comparisons. More specifically, linear versus high-order approximations are analyzed for an implicit solver under a standard set of hypotheses for the mesh and the solution. Continuous and discontinuous Galerkin methods are considered in two-dimensional and three-dimensional domains for simplices and parallelotopes. Moreover, both element-wise and global operations arising from different Galerkin approaches are studied. The operation count estimates show that, for implicit solvers, high-order methods are more efficient than linear ones. Copyright © 2013 John Wiley & Sons, Ltd.

103 citations


Journal ArticleDOI
TL;DR: In this article, an iterative method to treat the inverse problem of detecting cracks and voids in two-dimensional piezoelectric structures is proposed; the method involves solving the forward problem for various flaw configurations, and at each iteration the mismatch between the computed response at known specific points along the boundary and the measured data is minimized.
Abstract: SUMMARY An iterative method to treat the inverse problem of detecting cracks and voids in two-dimensional piezoelectric structures is proposed. The method involves solving the forward problem for various flaw configurations, and at each iteration the mismatch between the computed response of the piezoelectric material at known specific points along the boundary and the measured data is minimized. The extended finite element method (XFEM) is employed for solving the forward problem, as it allows the use of a single regular mesh for a large number of iterations with different flaw geometries. The minimization of the cost function is performed by the multilevel coordinate search (MCS) method. The algorithm is intermediate between purely heuristic methods and methods that allow an assessment of the quality of the minimum obtained, and it is in spirit similar to the DIRECT method for global optimization. In this paper, the XFEM-MCS methodology is applied to two-dimensional electromechanical problems where the flaws considered are straight cracks and elliptical voids. The results show that this methodology can be effectively employed for damage detection in piezoelectric materials. Copyright © 2013 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, a new structural topology optimization method using a dual-level point-wise density approximant and meshless Galerkin weak forms is proposed, based entirely on a set of arbitrarily scattered field nodes to discretize the design domain.
Abstract: This paper proposes a new structural topology optimization method using a dual-level point-wise density approximant and the meshless Galerkin weak-forms, totally based on a set of arbitrarily scattered field nodes to discretize the design domain. The moving least squares (MLS) method is used to construct shape functions with compactly supported weight functions, to achieve meshless approximations of system state equations. The MLS shape function with the zero-order consistency will degenerate to the well-known ‘Shepard function’, while the MLS shape function with the first-order consistency refers to the widely studied ‘MLS shape function’. The Shepard function is then applied to construct a physically meaningful dual-level density approximant, because of its non-negative and range-restricted properties. First, in terms of the original set of nodal density variables, this study develops a nonlocal nodal density approximant with enhanced smoothness by incorporating the Shepard function into the problem formulation. The density at any node can be evaluated according to the density variables located inside the influence domain of the current node. Second, in the numerical implementation, we present a point-wise density interpolant via the Shepard function method. The density of any computational point is determined by the surrounding nodal densities within the influence domain of the concerned point. According to a set of generic design variables scattered at field nodes, an alternative solid isotropic material with penalization model is thus established through the proposed dual-level density approximant. The Lagrangian multiplier method is included to enforce the essential boundary conditions because of the lack of the Kronecker delta function property of MLS meshless shape functions. Two benchmark numerical examples are employed to demonstrate the effectiveness of the proposed method, in particular its applicability in eliminating numerical instabilities. Copyright © 2012 John Wiley & Sons, Ltd.
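The non-negative, range-restricted character of the Shepard-type density approximant described above can be illustrated with a small sketch (the compactly supported weight below is an assumed Wendland-type function, and all names are hypothetical):

```python
import numpy as np

def shepard_density(x, nodes, rho_nodes, radius):
    """Point-wise density as a Shepard (partition-of-unity) weighted average of
    nearby nodal densities; the result stays within the range of nodal values."""
    d = np.linalg.norm(nodes - x, axis=1) / radius
    w = np.where(d < 1.0, (1.0 - d) ** 4 * (4.0 * d + 1.0), 0.0)  # compact C2 weight
    if w.sum() == 0.0:
        raise ValueError("point lies outside every nodal influence domain")
    return np.dot(w, rho_nodes) / w.sum()

rng = np.random.default_rng(0)
nodes = rng.random((50, 2))          # scattered field nodes in the design domain
rho = rng.random(50)                 # nodal design densities in [0, 1]
print(shepard_density(np.array([0.5, 0.5]), nodes, rho, radius=0.3))
```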

Journal ArticleDOI
TL;DR: In this paper, the authors introduce spectral coarse spaces for the balanced domain decomposition and the finite element tearing and interconnecting methods, specifically designed for the two-level methods to be scalable and robust with respect to the coefficients in the equation and the choice of the decomposition.
Abstract: We introduce spectral coarse spaces for the balanced domain decomposition and the finite element tearing and interconnecting methods. These coarse spaces are specifically designed for the two-level methods to be scalable and robust with respect to the coefficients in the equation and the choice of the decomposition. We achieve this by solving generalized eigenvalue problems on the interfaces between subdomains to identify the modes that slow down convergence. Theoretical bounds for the condition numbers of the preconditioned operators, which depend only on a chosen threshold and the maximal number of neighbors of a subdomain, are presented and proved. For the finite element tearing and interconnecting method, there are two versions of the two-level method: one based on the full Dirichlet preconditioner and the other on the cheaper lumped preconditioner. Some numerical tests confirm these results. Copyright © 2013 John Wiley & Sons, Ltd.
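The mode-selection step can be sketched as follows (an illustrative fragment in which random symmetric positive definite matrices stand in for the actual interface operators, which are not specified here): a generalized eigenvalue problem is solved per interface and the eigenvectors below a user-chosen threshold are kept as coarse-space vectors.

```python
import numpy as np
from scipy.linalg import eigh

def spectral_coarse_space(A, B, threshold):
    """Keep the generalized eigenvectors of A v = lambda B v whose eigenvalues
    fall below 'threshold'; these slow modes span the local coarse space."""
    lam, V = eigh(A, B)          # symmetric-definite generalized eigenproblem
    keep = lam < threshold
    return V[:, keep], lam[keep]

rng = np.random.default_rng(1)
n = 40
M = rng.random((n, n)); A = M @ M.T + 1e-3 * np.eye(n)   # stand-in interface operators
N = rng.random((n, n)); B = N @ N.T + n * np.eye(n)
Z, lam = spectral_coarse_space(A, B, threshold=0.1)
print(Z.shape, lam)              # number of coarse vectors is set by the threshold
```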

Journal ArticleDOI
TL;DR: In this paper, an isogeometric solid-like shell formulation is proposed in which B-spline basis functions are used to construct the mid-surface of the shell, in combination with a linear Lagrange shape function in the thickness direction.
Abstract: An isogeometric solid-like shell formulation is proposed in which B-spline basis functions are used to construct the mid-surface of the shell. In combination with a linear Lagrange shape function in the thickness direction, this yields a complete three-dimensional representation of the shell. The proposed shell element is implemented in a standard finite element code using Bezier extraction. The formulation is verified using different benchmark tests.

Journal ArticleDOI
TL;DR: In this article, a model that combines interface debonding and frictional contact is presented, where the onset of fracture is explicitly modeled using the well-known cohesive approach, and the debonding process is controlled by a new extrinsic traction separation law, which accounts for mode mixity, and yields two separate values for energy dissipation in mode I and mode II loading.
Abstract: We present a model that combines interface debonding and frictional contact. The onset of fracture is explicitly modeled using the well-known cohesive approach. Whereas the debonding process is controlled by a new extrinsic traction separation law, which accounts for mode mixity and yields two separate values for energy dissipation in mode I and mode II loading, the impenetrability condition is enforced with a contact algorithm. We resort to the classical law of unilateral contact and Coulomb friction. The contact algorithm is coupled to the cohesive approach in order to obtain a continuous transition from crack nucleation to the pure frictional state after complete decohesion. We validate our model by simulating a shear test on a masonry wallette and by reproducing an experimental test on a masonry wall loaded in compression and shear. Copyright © 2012 John Wiley & Sons, Ltd.


Journal ArticleDOI
TL;DR: In this paper, the numerical manifold space of the Hermitian form is constructed to solve Kirchhoff's thin plate problem, and the mixed primal formulation and the penalized formulation fitted to the NMM are derived from the minimum potential principle.
Abstract: SUMMARY For second-order problems, where the behavior is described by second-order partial differential equations, the numerical manifold method (NMM) has gained great success. Because of difficulties in the construction of the H²-regular Lagrangian partition of unity subordinate to the finite element cover, however, few applications of the NMM have been found for fourth-order problems such as Kirchhoff's thin plate problems. Parallel to the finite element methods, this study constructs the numerical manifold space of the Hermitian form to solve fourth-order problems. From the minimum potential principle, meanwhile, the mixed primal formulation and the penalized formulation fitted to the NMM for Kirchhoff's thin plate problems are derived. The typical examples indicate that by the proposed procedures, even the earliest developed elements in finite element history, such as Zienkiewicz's plate element, regain their vigor. Copyright © 2013 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors considered the linear static analysis of composite cylindrical structures by means of a shell finite element with variable through-the-thickness kinematic.
Abstract: The present paper considers the linear static analysis of composite cylindrical structures by means of a shell finite element with variable through-the-thickness kinematics. The refined models used are grouped in the Unified Formulation by Carrera (CUF), and they permit an accurate description of the distribution of displacements and stresses along the thickness of the multilayered shell. The shell element has nine nodes, and the mixed interpolation of tensorial components method is employed to counteract the membrane and shear locking phenomena. Different composite cylindrical shells are analyzed, with various laminations and thickness ratios. The governing equations are derived from the principle of virtual displacement in order to apply the finite element method. The results, obtained with different theories contained in the CUF, are compared with both the elasticity solutions given in the literature and the analytical solutions obtained using Navier's method. From the analysis, one can conclude that the shell element based on the CUF is very efficient, and its use, rather than that of the classical models, is mandatory in the study of composite structures.


Journal ArticleDOI
TL;DR: In this article, a pseudo-third dimension is used to represent a pseudo third dimension in two-dimensional problems to facilitate new hole insertion, and the update of the secondary function is connected to the primary level set function forming a meaningful link between boundary optimization and hole creation.
Abstract: SUMMARY Structural shape and topology optimization using level set functions is becoming increasingly popular. However, traditional methods do not naturally allow for new hole creation and solutions can be dependent on the initial design. Various methods have been proposed that enable new hole insertion; however, the link between hole insertion and boundary optimization can be unclear. The new method presented in this paper utilizes a secondary level set function that represents a pseudo third dimension in two-dimensional problems to facilitate new hole insertion. The update of the secondary function is connected to the primary level set function forming a meaningful link between boundary optimization and hole creation. The performance of the method is investigated to identify suitable parameters that produce good solutions for a range of problems. Copyright © 2012 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: An algorithm is presented that computes solution spaces for arbitrary non-linear high-dimensional systems by performing Monte Carlo sampling and Bayesian statistics on a multi-dimensional box with permissible intervals for each design parameter.
Abstract: SUMMARY In some engineering problems, tolerance to variation of design parameters is essential. In an early development phase of a distributed development process, for example, the system performance should reach the design goal even under large variations of uncertain component properties. The tolerance to parameter variations may be measured by the size of a solution space on which the system is guaranteed to deliver the required performance. In order to decouple dimensions, the solution space is described as a multi-dimensional box with permissible intervals for each design parameter. An algorithm is presented that computes solution spaces for arbitrary non-linear high-dimensional systems. Starting from a design point with required performance, a candidate box is iteratively evaluated and modified. The evaluation is performed by Monte Carlo sampling and Bayesian statistics. The modification algorithm drives the evolution toward increasing box size. Robustness and reliability with respect to the required performance can be assessed without knowledge of the particular kind of uncertainty. Sensitivity to design parameters can be quantified by the widths of solution intervals. Designs failing to meet the performance requirement can be improved by adjusting parameter values to lie within the solution space. The approach is motivated and illustrated by automotive crash design problems. Copyright © 2012 John Wiley & Sons, Ltd.
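The box-evaluation step can be illustrated with a toy sketch (the performance surrogate, limit, and sample size below are hypothetical placeholders for the actual crash simulation): a candidate box is sampled by Monte Carlo, and the estimated fraction of designs meeting the requirement decides whether the box may be enlarged.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_box(perf, lower, upper, limit, n_samples=2000):
    """Estimate the fraction of designs inside the box [lower, upper] whose
    performance satisfies perf(x) <= limit (Monte Carlo evaluation step)."""
    x = rng.uniform(lower, upper, size=(n_samples, len(lower)))
    return np.mean(perf(x) <= limit)

# Toy two-parameter surrogate of a required performance measure
perf = lambda x: 3.0 * x[:, 0] ** 2 + x[:, 1]
lower, upper = np.array([0.0, 0.0]), np.array([0.5, 0.5])

frac = evaluate_box(perf, lower, upper, limit=1.0)
print(frac)   # enlarge the candidate box if frac is close to 1, shrink it otherwise
```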

Journal ArticleDOI
TL;DR: The novelty of this paper includes a new approach to generating snapshots, the application of POD to large-scale eigenvalue calculations, and the application of reduced-order models in reactor physics.
Abstract: SUMMARY A reduced-order model based on proper orthogonal decomposition (POD) has been presented and applied to solving eigenvalue problems. The model is constructed via the method of snapshots, which is based upon the singular value decomposition of a matrix containing the characteristics of a solution as it evolves through time. Part of the novelty of this work lies in how the snapshot data are generated, namely through the recasting of the eigenvalue problem, which is time independent, into a time-dependent form. Instances of time-dependent eigenfunction solutions are therefore used to construct the snapshot matrix. The reduced-order model's capabilities in efficiently resolving eigenvalue problems that typically become computationally expensive (using standard full-model discretisations) have been demonstrated. Although the approach can be adapted to most general eigenvalue problems, the examples presented here are based on calculating dominant eigenvalues in reactor physics applications. The approach is shown to reconstruct both the eigenvalues and eigenfunctions accurately using a significantly reduced number of unknowns in comparison with 'full' models based on finite element discretisations. The novelty of this paper therefore includes a new approach to generating snapshots, the application of POD to large-scale eigenvalue calculations, and the application of reduced-order models in reactor physics. Copyright © 2013 John Wiley & Sons, Ltd.
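The basic POD-projection workflow for an eigenvalue problem can be sketched as follows (a generic illustration with random placeholder data, not the paper's reactor-physics operators or its time-dependent snapshot generation):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_snap, r = 500, 40, 10

A = rng.random((n, n)); A = 0.5 * (A + A.T)        # stand-in symmetric full operator
snapshots = rng.random((n, n_snap))                # placeholder snapshot matrix

# Method of snapshots: dominant left singular vectors form the reduced basis
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]

# Galerkin projection of the eigenvalue problem A x = lambda x onto the POD basis
A_r = Phi.T @ A @ Phi
lam_r, X_r = np.linalg.eigh(A_r)                   # small, cheap eigenproblem
x_dominant = Phi @ X_r[:, -1]                      # reconstructed dominant mode
print(lam_r[-1], x_dominant.shape)
```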

Journal ArticleDOI
TL;DR: In this paper, the authors developed some effective numerical techniques for designing stiff structures with less stress concentrations, which is achieved by introducing some specific stress measures, which are sensitive to the existence of high local stresses, in the problem formulation and resolving the corresponding optimization problem numerically in a level set framework.
Abstract: Although the phenomenon of stress concentration is of paramount importance to engineers when they are designing load-carrying structures, stiffness is often used as the sole objective or constraint function in studies of optimal topology design of continuum structures. Sometimes this leads to optimal designs with severe stress concentrations, which may be highly responsible for the fracture, creep and fatigue of structures. Thus, considering stress-related objective or constraint functions in topology optimization problems is very important from both theoretical and application perspectives. It has been known, however, that this kind of problem is very challenging, since several difficulties must be overcome in order to solve it effectively. The first difficulty stems from the fact that stress-constrained topology optimization problems always suffer from the so-called singularity problem. The second difficulty in stress-related optimization is the high computational cost caused by the large number of local stress constraints. The conventional treatment of this difficulty with the use of so-called global stress measures cannot give adequate control of the magnitude of the local stress level. The third difficulty is related to the accuracy of the stress computation, which is greatly influenced by the local geometry of the structure. The aim of the present work is to develop effective numerical techniques for designing stiff structures with reduced stress concentrations. This is achieved by introducing specific stress measures, which are sensitive to the existence of high local stresses, into the problem formulation and by resolving the corresponding optimization problem numerically in a level set framework. In the first global stress measure, local geometry information such as boundary curvature is introduced, while in the second global stress measure, the stress gradient is employed to locate the hot spots of high local stresses automatically. Our study indicates that with the use of the proposed numerical schemes and global stress measures, the intrinsic difficulties mentioned above in stress-related topology optimization of continuum structures can be overcome in a natural way.

Journal ArticleDOI
TL;DR: A port (interface) approximation and a posteriori error bound framework for a general component‐based static condensation method in the context of parameter‐dependent linear elliptic partial differential equations is introduced.
Abstract: SUMMARY We introduce a port (interface) approximation and a posteriori error bound framework for a general component-based static condensation method in the context of parameter-dependent linear elliptic partial differential equations. The key ingredients are as follows: (i) efficient empirical port approximation spaces—the dimensions of these spaces may be chosen small to reduce the computational cost associated with formation and solution of the static condensation system; and (ii) a computationally tractable a posteriori error bound realized through a non-conforming approximation and associated conditioner—the error in the global system approximation, or in a scalar output quantity, may be bounded relatively sharply with respect to the underlying finite element discretization. Our approximation and a posteriori error bound framework is of particular computational relevance for the static condensation reduced basis element (SCRBE) method. We provide several numerical examples within the SCRBE context, which serve to demonstrate the convergence rate of our port approximation procedure as well as the efficacy of our port reduction error bounds. Copyright © 2013 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, a gradient reproducing kernel approximation is proposed to solve second-order PDEs with strong form collocation; the authors also show that the same order of convergence rates in the primary unknown and its first-order derivative is achieved, owing to the imposition of gradient reproducing conditions.
Abstract: SUMMARY The earlier work in the development of direct strong form collocation methods, such as the reproducing kernel collocation method (RKCM), addressed the domain integration issue in the Galerkin type meshfree method, such as the reproducing kernel particle method, but with increased computational complexity because of taking higher order derivatives of the approximation functions and the need for using a large number of collocation points for optimal convergence. In this work, we intend to address the computational complexity in RKCM while achieving optimal convergence by introducing a gradient reproduction kernel approximation. The proposed gradient RKCM reduces the order of differentiation to the first order for solving second-order PDEs with strong form collocation. We also show that, different from the typical strong form collocation method where a significantly large number of collocation points than the number of source points is needed for optimal convergence, the same number of collocation points and source points can be used in gradient RKCM. We also show that the same order of convergence rates in the primary unknown and its first-order derivative is achieved, owing to the imposition of gradient reproducing conditions. The numerical examples are given to verify the analytical prediction. Copyright © 2012 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A method is described for building triangular meshes that are better suited for recombination into quadrangles; this is achieved by using the infinity norm to compute distances in the meshing process.
Abstract: A new indirect way of producing all-quad meshes is presented. The method takes advantage of a well-known algorithm of graph theory, namely the Blossom algorithm, which computes the minimum cost perfect matching in a graph in polynomial time. The triangulation itself is then tailored with the aim of producing right triangles in the domain. This is done by using the infinity norm to compute distances in the meshing process. The alignment of the triangles is controlled by a cross field defined on the domain. Meshes constructed this way have their points aligned with the cross field directions and their triangles are close to right triangles almost everywhere. Recombination with our Blossom-based approach then yields quadrilateral meshes of excellent quality.
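The recombination step can be sketched with a toy example (assumed data; networkx's Blossom-based matching is used as a stand-in for the cost setup described in the paper): triangles become graph vertices, an edge links two triangles sharing a mesh edge, and a minimum-cost perfect matching pairs every triangle into a quadrilateral.

```python
import networkx as nx

# Toy adjacency costs: each key pairs two triangles that share an edge, and the
# value scores how far the merged quadrilateral is from an ideal one (0 = perfect).
costs = {("t0", "t1"): 0.1, ("t1", "t2"): 0.9,
         ("t2", "t3"): 0.2, ("t3", "t0"): 0.8}

G = nx.Graph()
for (a, b), c in costs.items():
    G.add_edge(a, b, weight=-c)     # negate: max-weight matching == min-cost matching

# Blossom algorithm; maxcardinality=True forces a perfect matching when one exists
matching = nx.max_weight_matching(G, maxcardinality=True)
print(sorted(tuple(sorted(pair)) for pair in matching))   # triangle pairs -> quads
```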

Journal ArticleDOI
TL;DR: A high-order hybridizable discontinuous Galerkin method is presented for solving elliptic interface problems in which the solution and gradient are nonsmooth because of jump conditions across the interface; superparametric elements are proposed at the interface to recover the optimal convergence.
Abstract: We present a high-order hybridizable discontinuous Galerkin method for solving elliptic interface problems in which the solution and gradient are nonsmooth because of jump conditions across the interface. The hybridizable discontinuous Galerkin method is endowed with several distinct characteristics. First, they reduce the globally coupled unknowns to the approximate trace of the solution on element boundaries, thereby leading to a significant reduction in the global degrees of freedom. Second, they provide, for elliptic problems with polygonal interfaces, approximations of all the variables that converge with the optimal order of k+1 in the L2(Ω)-norm, where k denotes the polynomial order of the approximation spaces. Third, they possess some superconvergence properties that allow the use of an inexpensive element-by-element postprocessing to compute a new approximate solution that converges with order k+2. However, for elliptic problems with finite jumps in the solution across the curvilinear interface, the approximate solution and gradient do not converge optimally if the elements at the interface are isoparametric. The discrepancy between the exact geometry and the approximate triangulation near the curved interfaces results in lower order convergence. To recover the optimal convergence for the approximate solution and gradient, we propose to use superparametric elements at the interface. Copyright © 2012 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this work, an innovative numerical approach is proposed, which combines the simplicity of low-order finite element connectivity with the geometric flexibility of meshless methods; the natural neighbour concept is applied to enforce the nodal connectivity.
Abstract: SUMMARY In this work, an innovative numerical approach is proposed, which combines the simplicity of low-order finite element connectivity with the geometric flexibility of meshless methods. The natural neighbour concept is applied to enforce the nodal connectivity. Resorting to the Delaunay triangulation, a background integration mesh is constructed, completely dependent on the nodal mesh. The nodal connectivity is imposed through nodal sets of reduced size, significantly reducing the test function construction cost. The interpolation functions, constructed using Euclidean norms, are easily obtained. To prove the good behaviour of the proposed interpolation functions, several data-fitting examples and first-order partial differential equations are solved. The proposed numerical method is also extended to elastostatic analysis, where classical solid mechanics benchmark examples are solved. Copyright © 2013 John Wiley & Sons, Ltd.
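The natural-neighbour connectivity and the Delaunay-based background mesh can be sketched with SciPy (an illustration of the concept only; the node coordinates are random placeholders):

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
nodes = rng.random((30, 2))          # scattered field nodes
tri = Delaunay(nodes)                # doubles as the background integration mesh

def natural_neighbours(i, tri):
    """Nodes sharing at least one Delaunay simplex with node i: the small nodal
    set used to build the interpolation function of node i."""
    indptr, indices = tri.vertex_neighbor_vertices
    return indices[indptr[i]:indptr[i + 1]]

print(natural_neighbours(0, tri))    # compact connectivity of node 0
print(tri.simplices.shape)           # triangles of the background mesh
```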

Journal ArticleDOI
TL;DR: Inspired by the generalized-α time integration method for smooth flexible multibody dynamics, a nonsmooth generalized-α method is proposed, which allows a consistent treatment of the nonsmooth phenomena induced by unilateral constraints and an accurate description of the structural vibrations during free motions.
Abstract: SUMMARY Mechanical systems are usually subjected not only to bilateral constraints but also to unilateral constraints. Inspired by the generalized-α time integration method for smooth flexible multibody dynamics, this paper presents a nonsmooth generalized-α method, which allows a consistent treatment of the nonsmooth phenomena induced by unilateral constraints and an accurate description of the structural vibrations during free motions. Both the algorithm and the implementation are illustrated in detail. Numerical tests are given for both rigid and flexible body models, accounting for both linear and nonlinear systems and comprising both unilateral and bilateral constraints. The extended nonsmooth generalized-α method is verified through comparison to the traditional Moreau–Jean method and the fully implicit Newmark method. Results show that the nonsmooth generalized-α method benefits from the accuracy and stability properties of the classical generalized-α method with controllable numerical damping. In particular, when it comes to the analysis of flexible systems, the nonsmooth generalized-α method shows much better accuracy than the other two methods. Copyright © 2013 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A novel algorithm based on the extended finite element method (XFEM) and an enhanced artificial bee colony (EABC) algorithm to detect and quantify multiple flaws in structures is presented.
Abstract: SUMMARY We present a novel algorithm based on the extended finite element method (XFEM) and an enhanced artificial bee colony (EABC) algorithm to detect and quantify multiple flaws in structures. The concept is based on recent work that has shown the excellent synergy between XFEM, used to model the forward problem, and a genetic-type algorithm to solve an inverse identification problem and converge to the 'best' flaw parameters. In this paper, an adaptive algorithm that can detect multiple flaws without any knowledge of the number of flaws beforehand is proposed. The algorithm is based on the introduction of topological variables into the search space, used to adaptively activate/deactivate flaws during run time until convergence is reached. The identification is based on a limited number of strain sensors assumed to be attached to the structure surface boundaries. Each flaw is approximated by a circular void with the following three variables: center coordinates (xc, yc) and radius (rc), within the XFEM framework. In addition, the proposed EABC scheme is improved by a guided-to-best solution updating strategy and a local search (LS) operator of the Nelder–Mead simplex type, which show fast convergence and superior global/LS abilities compared with the standard ABC or classic genetic algorithms. Several numerical examples, with increasing level of difficulty, are studied in order to evaluate the proposed algorithm. In particular, we consider the identification of multiple flaws without a priori information on the number of flaws (which makes the inverse problem harder), the proximity of flaws, flaws having irregular shapes (similar to artificial noise), and the effect of structured/unstructured meshes. The results show that the proposed XFEM–EABC algorithm is able to converge on all test problems and accurately identify flaws. Hence, this methodology is found to be robust and efficient for nondestructive detection and quantification of multiple flaws in structures. Copyright © 2013 John Wiley & Sons, Ltd.
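The structure of the inverse-identification loop can be sketched as follows (everything here is an assumption for illustration: a smooth surrogate stands in for the XFEM forward solve, SciPy's differential evolution stands in for the EABC optimiser, and a single flaw is identified):

```python
import numpy as np
from scipy.optimize import differential_evolution

sensors = np.linspace(0.1, 0.9, 8)            # positions of boundary strain sensors

def simulate_strains(xc, yc, rc):
    """Hypothetical smooth surrogate of the forward (XFEM) strain response to a
    circular flaw with centre (xc, yc) and radius rc."""
    return np.exp(-((sensors - xc) ** 2 + yc ** 2) / (0.05 + rc ** 2))

measured = simulate_strains(0.4, 0.2, 0.08)   # synthetic "measurements"

def cost(p):
    """Misfit between simulated and measured sensor strains for flaw parameters p."""
    return np.sum((simulate_strains(*p) - measured) ** 2)

result = differential_evolution(cost, bounds=[(0.0, 1.0), (0.0, 1.0), (0.01, 0.3)],
                                seed=1)
print(result.x)                               # recovered (xc, yc, rc)
```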