scispace - formally typeset

Showing papers on "Basis function published in 2013"


Journal ArticleDOI
TL;DR: The extension of the previously developed domain based local pair-natural orbital (DLPNO) based singles- and doubles coupled cluster ( DLPNO-CCSD) method to perturbatively include connected triple excitations is reported and the first CCSD(T) level calculation on an entire protein, Crambin with 644 atoms, and more than 6400 basis functions is demonstrated.
Abstract: In this work, the extension of the previously developed domain based local pair-natural orbital (DLPNO) based singles- and doubles coupled cluster (DLPNO-CCSD) method to perturbatively include connected triple excitations is reported. The development is based on the concept of triples-natural orbitals that span the joint space of the three pair natural orbital (PNO) spaces of the three electron pairs that are involved in the calculation of a given triple-excitation contribution. The truncation error is very smooth and can be significantly reduced through extrapolation to the zero threshold. However, the extrapolation procedure does not improve relative energies. The overall computational effort of the method is asymptotically linear with the system size O(N). Actual linear scaling has been confirmed in test calculations on alkane chains. The accuracy of the DLPNO-CCSD(T) approximation relative to semicanonical CCSD(T0) is comparable to that of the previously developed DLPNO-CCSD method relative to canonical CCSD. Relative energies are predicted with an average error of approximately 0.5 kcal/mol for a challenging test set of medium sized organic molecules. The triples correction typically adds 30%-50% to the overall computation time. Thus, very large systems can be treated on the basis of the current implementation. In addition to the linear C150H302 (452 atoms, >8800 basis functions) we demonstrate the first CCSD(T) level calculation on an entire protein, Crambin with 644 atoms and more than 6400 basis functions.

1,151 citations


Journal ArticleDOI
TL;DR: This work extends the definition of analysis-suitable T-splines to encompass unstructured control grids and develops basis functions which are smooth (rational) polynomials defined in terms of the Bezier extraction framework and which pass standard patch tests.

366 citations


Journal ArticleDOI
TL;DR: The proposed blind compressive sensing scheme is more robust to local minima compared to K-SVD method, which relies on greedy sparse coding, and the utility of the BCS scheme in accelerating contrast enhanced dynamic data is demonstrated.
Abstract: We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme with current low rank methods is the nonorthogonal nature of the dictionary basis functions. Since the number of degrees-of-freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity-promoting l1 prior on the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the l1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the l0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements.
We also observe that the proposed scheme is more robust to local minima compared to K-SVD method, which relies on greedy sparse coding. Our phase transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary aware setting. Since the overhead in additionally estimating the dictionary is low, this method can be very useful in dynamic magnetic resonance imaging applications, where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast enhanced dynamic data. We observe superior reconstruction performance with the BCS scheme in comparison to existing low rank and compressed sensing schemes.
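The alternating structure the abstract describes can be sketched on a toy, fully sampled problem (this is an illustrative simplification, not the authors' undersampled BCS algorithm): a majorize-minimize (ISTA) step on the sparse coefficients alternates with a least-squares dictionary update, and a final Frobenius-norm rescaling removes the scale ambiguity between the two factors.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm (soft shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def blind_sparse_factorization(X, n_atoms, lam=0.01, n_iter=300, seed=0):
    """Toy alternating minimization for the model X ~ C @ D with sparse C.

    C-update: one ISTA (majorize-minimize) step on the l1-penalized fit;
    D-update: ridge-regularized least squares. The final rescaling fixes
    the scale ambiguity, loosely mirroring the Frobenius-norm constraint.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    D = rng.standard_normal((n_atoms, n_features))
    C = np.zeros((n_samples, n_atoms))
    for _ in range(n_iter):
        # gradient step on the smooth term, then shrinkage on C
        L = np.linalg.norm(D @ D.T, 2) + 1e-12   # Lipschitz constant
        C = soft_threshold(C - ((C @ D - X) @ D.T) / L, lam / L)
        # dictionary update given the current coefficients
        D = np.linalg.solve(C.T @ C + 1e-8 * np.eye(n_atoms), C.T @ X)
    s = np.linalg.norm(D, 'fro')                 # remove scale ambiguity
    return C * s, D / s

# quick demonstration on synthetic, exactly factorizable data
rng = np.random.default_rng(1)
C_true = rng.standard_normal((100, 5)) * (rng.random((100, 5)) < 0.3)
D_true = rng.standard_normal((5, 30))
X = C_true @ D_true
C_est, D_est = blind_sparse_factorization(X, n_atoms=5)
rel_err = np.linalg.norm(C_est @ D_est - X) / np.linalg.norm(X)
```

All names here are illustrative; the real method additionally enforces data consistency against undersampled k-space measurements.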

234 citations


Journal ArticleDOI
TL;DR: The control parameterization method is a popular numerical technique for solving optimal control problems; as described in this paper, it discretizes the control space by approximating the control function by a linear combination of basis functions.
Abstract: The control parameterization method is a popular numerical technique for solving optimal control problems. The main idea of control parameterization is to discretize the control space by approximating the control function by a linear combination of basis functions. Under this approximation scheme, the optimal control problem is reduced to an approximate nonlinear optimization problem with a finite number of decision variables. This approximate problem can then be solved using nonlinear programming techniques. The aim of this paper is to introduce the fundamentals of the control parameterization method and survey its various applications to non-standard optimal control problems. Topics discussed include gradient computation, numerical convergence, variable switching times, and methods for handling state constraints. We conclude the paper with some suggestions for future research.
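The reduction the abstract describes can be sketched on a hypothetical toy problem (not from the paper): steer dx/dt = u(t) from x(0) = 0 to x(1) = 1 while minimizing the control energy. The control is parameterized as piecewise constant on N subintervals (the simplest basis choice), the terminal condition is handled with a quadratic penalty, and the resulting finite-dimensional program is passed to a standard NLP solver.

```python
import numpy as np
from scipy.optimize import minimize

N = 10          # number of control subintervals (basis functions)
h = 1.0 / N     # subinterval length
rho = 100.0     # penalty weight on the terminal constraint

def objective(u):
    x_final = h * np.sum(u)                     # x(1) for dx/dt = u
    energy = h * np.sum(u ** 2)                 # discretized control energy
    return energy + rho * (x_final - 1.0) ** 2  # penalized cost

res = minimize(objective, np.zeros(N), method='BFGS')
u_opt = res.x   # each entry should be close to the analytic value rho/(1+rho)
```

For this convex toy problem each piecewise-constant coefficient converges to rho/(1+rho), illustrating how the penalty trades off constraint satisfaction against control effort.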

226 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider series estimators for the conditional mean in light of four new ingredients: sharp LLNs for matrices derived from the non-commutative Khinchin inequalities, bounds on the Lebesgue factor that controls the ratio between the L∞- and L2-norms of approximation errors, maximal inequalities for processes whose entropy integrals diverge at some rate, and strong approximations to series-type processes.

208 citations


Journal ArticleDOI
TL;DR: The present algorithm uses the idea of finding a numerically well-conditioned basis function set in the same function space as is spanned by the ill-conditioned near-flat original Gaussian RBFs; it transpires that the change of basis can be achieved without dealing with any infinite expansions.
Abstract: Traditional finite difference (FD) methods are designed to be exact for low degree polynomials. They can be highly effective on Cartesian-type grids, but may fail for unstructured node layouts. Radial basis function-generated finite difference (RBF-FD) methods overcome this problem and, as a result, provide a much improved geometric flexibility. The calculation of RBF-FD weights involves a shape parameter ε. Small values of ε (corresponding to near-flat RBFs) often lead to particularly accurate RBF-FD formulas. However, the most straightforward way to calculate the weights (RBF-Direct) becomes then numerically highly ill-conditioned. In contrast, the present algorithm remains numerically stable all the way into the ε → 0 limit. Like the RBF-QR algorithm, it uses the idea of finding a numerically well-conditioned basis function set in the same function space as is spanned by the ill-conditioned near-flat original Gaussian RBFs. By exploiting some properties of the incomplete gamma function, it transpires that the change of basis can be achieved without dealing with any infinite expansions. Its strengths and weaknesses compared with the Contour-Padé, RBF-RA, and RBF-QR algorithms are discussed.
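The straightforward RBF-Direct computation the abstract refers to can be sketched in one dimension (this is the naive, ill-conditioned approach, not the paper's stable algorithm): the weights for a differentiation stencil solve A w = b, where A collects RBF values between nodes and b collects the operator applied to each RBF at the evaluation point. In the near-flat limit the weights approach the classical FD weights while the condition number of A blows up.

```python
import numpy as np

def rbf_fd_weights_1d(x_nodes, x_eval, eps):
    """RBF-Direct weights for d/dx at x_eval using Gaussian RBFs
    phi(r) = exp(-(eps*r)^2). Returns (weights, cond(A))."""
    d = x_nodes[:, None] - x_nodes[None, :]
    A = np.exp(-(eps * d) ** 2)                  # A_ij = phi(|x_i - x_j|)
    r = x_eval - x_nodes
    b = -2.0 * eps ** 2 * r * np.exp(-(eps * r) ** 2)   # d/dx phi at x_eval
    return np.linalg.solve(A, b), np.linalg.cond(A)

x = np.array([-1.0, 0.0, 1.0])
w_flat, cond_flat = rbf_fd_weights_1d(x, 0.0, eps=0.1)   # near-flat RBFs
w_steep, cond_steep = rbf_fd_weights_1d(x, 0.0, eps=1.0) # well-localized RBFs
# w_flat approaches the central-difference weights [-1/2, 0, 1/2], while
# cond_flat >> cond_steep shows why RBF-Direct fails as eps -> 0
```

Fully flat (ε = 0) is unreachable this way because A becomes exactly singular, which is precisely the regime the paper's stable algorithm targets.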

194 citations


Journal ArticleDOI
TL;DR: This paper proposes a sparse approximation to a robust vector field learning method, sparse vector field consensus (SparseVFC), and derives a statistical learning bound on the speed of the convergence, and applies SparseVFC to the mismatch removal problem.

178 citations


Journal ArticleDOI
TL;DR: In this paper, a tensor-product B-spline representation is used to represent the density field, and the design space is restricted to the B-spline space without extraneous filtering or penalty.

151 citations


Posted Content
TL;DR: In this article, the authors present a new approach to value determination that uses a simple closed-form computation to directly compute a least-squares decomposed approximation to the value function for any weights.
Abstract: Many large MDPs can be represented compactly using a dynamic Bayesian network. Although the structure of the value function does not retain the structure of the process, recent work has shown that value functions in factored MDPs can often be approximated well using a decomposed value function: a linear combination of restricted basis functions, each of which refers only to a small subset of variables. An approximate value function for a particular policy can be computed using approximate dynamic programming, but this approach (and others) can only produce an approximation relative to a distance metric which is weighted by the stationary distribution of the current policy. This type of weighted projection is ill-suited to policy improvement. We present a new approach to value determination that uses a simple closed-form computation to directly compute a least-squares decomposed approximation to the value function for any weights. We then use this value determination algorithm as a subroutine in a policy iteration process. We show that, under reasonable restrictions, the policies induced by a factored value function are compactly represented, and can be manipulated efficiently in a policy iteration process. We also present a method for computing error bounds for decomposed value functions using a variable-elimination algorithm for function optimization. The complexity of all of our algorithms depends on the factorization of system dynamics and of the approximate value function.
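The closed-form computation the abstract mentions can be illustrated in the flat (non-factored) case as a sketch: with a linear value function V = Φw for a fixed policy with transition matrix P and rewards R, minimizing the Bellman residual ||Φw − (R + γPΦw)|| over w is an ordinary least-squares problem. The paper's contribution is performing the analogous computation compactly for factored MDPs; the code below only shows the flat version.

```python
import numpy as np

def ls_value_determination(P, R, Phi, gamma):
    """Weights w minimizing || (Phi - gamma * P @ Phi) w - R ||_2,
    i.e. an unweighted least-squares fit of the Bellman equation."""
    w, *_ = np.linalg.lstsq(Phi - gamma * (P @ Phi), R, rcond=None)
    return w

# sanity example: with Phi = I this reproduces exact policy evaluation,
# V = (I - gamma P)^{-1} R
rng = np.random.default_rng(0)
n, gamma = 6, 0.9
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
R = rng.standard_normal(n)
w = ls_value_determination(P, R, np.eye(n), gamma)
```

With a restricted basis (fewer columns in Φ than states), the same call returns the best unweighted least-squares fit rather than the exact value function.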

147 citations


Journal ArticleDOI
TL;DR: In this article, a discontinuous Galerkin surface integral equation (IEDG) method is proposed for time harmonic electromagnetic wave scattering from nonpenetrable targets, which allows the implementation of the combined field integral equation using square-integrable trial and test functions without any considerations of continuity requirements across element boundaries.
Abstract: We present a discontinuous Galerkin surface integral equation method, herein referred to as IEDG, for time harmonic electromagnetic wave scattering from nonpenetrable targets. The proposed IEDG algorithm allows the implementation of the combined field integral equation (CFIE) using square-integrable trial and test functions without any considerations of continuity requirements across element boundaries. Due to the local characteristics of the basis functions, it is possible to employ nonconformal surface discretizations of the targets. Furthermore, it enables the possibility to mix different types of elements and employ different orders of basis functions within the same discretization. Therefore, the proposed IEDG method is highly flexible with respect to adaptation techniques. Numerical results are included to validate the accuracy and demonstrate the versatility of the proposed IEDG method. In addition, a complex large-scale simulation is conducted to illustrate the potential benefits offered by the proposed method for modeling multiscale electrically large targets.

144 citations


Journal ArticleDOI
TL;DR: The reduced basis approximation and a posteriori error estimation for steady Stokes flows in affinely parametrized geometries are extended, focusing on the role played by the Brezzi’s and Babuška's stability constants.
Abstract: In this paper we review and extend the reduced basis approximation and a posteriori error estimation for steady Stokes flows in affinely parametrized geometries, focusing on the role played by the Brezzi and Babuška stability constants. The crucial ingredients of the methodology are a Galerkin projection onto a low-dimensional space of properly selected basis functions, an affine parametric dependence enabling a competitive Offline-Online splitting in the computational procedure, and a rigorous a posteriori error estimation on field variables. The combination of these three factors yields substantial computational savings which are at the basis of an efficient model order reduction, ideally suited for real-time simulation and many-query contexts (e.g. optimization, control or parameter identification). In particular, in this work we focus on (i) the stability of the reduced basis approximation based on Brezzi's saddle point theory and the introduction of a supremizer operator on the pressure terms, (ii) a rigorous a posteriori error estimation procedure for velocity and pressure fields based on Babuška's inf-sup constant (including residual calculations), (iii) the computation of a lower bound of the stability constant, and (iv) different options for the reduced basis space construction. We present some illustrative results for both interior and external steady Stokes flows in parametrized geometries representing two classical parametrized Poiseuille and Couette flows, a channel contraction and a simple flow control problem around a curved obstacle.
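The Offline-Online splitting enabled by an affine parametric dependence can be sketched with synthetic matrices (this is a generic illustration, not a Stokes solver): writing A(μ) = Σ_q θ_q(μ) A_q, each parameter-independent block A_q is projected onto the reduced basis V once offline; online, assembling and solving the small reduced system costs only O(N²)-O(N³) per parameter, independent of the full dimension n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 200, 5                                   # full / reduced dimensions
A_blocks = [np.eye(n), 0.01 * rng.standard_normal((n, n))]  # A_0, A_1
f = rng.standard_normal(n)
V, _ = np.linalg.qr(rng.standard_normal((n, N)))  # orthonormal reduced basis

# Offline: project each affine block and the right-hand side once
A_red_blocks = [V.T @ Aq @ V for Aq in A_blocks]
f_red = V.T @ f

def solve_reduced(mu):
    """Online stage: assemble and solve the small N x N system."""
    theta = [1.0, mu]                           # affine coefficients theta_q(mu)
    A_N = sum(t * Aq for t, Aq in zip(theta, A_red_blocks))
    return V @ np.linalg.solve(A_N, f_red)      # lift back to the full space

u_N = solve_reduced(0.3)
```

By construction the lifted solution satisfies the Galerkin orthogonality condition: the full residual f − A(μ)u_N is orthogonal to the reduced space.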

Journal ArticleDOI
TL;DR: An isogeometric Reissner-Mindlin shell derived from the continuum theory is presented and the improved accuracy yields considerable savings in computation cost for a predefined error bound.

Journal ArticleDOI
TL;DR: A novel technique is presented to facilitate the implementation of hierarchical b-splines and their interfacing with conventional finite element implementations and is applied to convergence studies of linear and geometrically nonlinear problems in one, two and three space dimensions.

Journal ArticleDOI
TL;DR: A novel algorithm based on a hybrid Gaussian and Plane Waves approach with the resolution-of-identity (RI) approximation is developed for MP2, scaled opposite-spin MP2 (SOS-MP2), and direct-RPA (dRPA) correlation energies of finite and extended system.
Abstract: The second-order Moller-Plesset perturbation energy (MP2) and the Random Phase Approximation (RPA) correlation energy are increasingly popular post-Kohn-Sham correlation methods. Here, a novel algorithm based on a hybrid Gaussian and Plane Waves (GPW) approach with the resolution-of-identity (RI) approximation is developed for MP2, scaled opposite-spin MP2 (SOS-MP2), and direct-RPA (dRPA) correlation energies of finite and extended systems. The key feature of the method is that the three-center electron repulsion integrals (μν|P) necessary for the RI approximation are computed by direct integration between the products of Gaussian basis functions μν and the electrostatic potential arising from the RI fitting densities P. The electrostatic potential is obtained in a plane-wave basis set after solving the Poisson equation in Fourier space. This scheme is highly efficient for condensed-phase systems and offers a particularly easy route to parallel implementation. The RI approximation speeds up the MP2 energy calculations by a factor of 10 to 15 compared to the canonical implementation but still requires O(N^5) operations. On the other hand, the combination of RI with a Laplace approach in SOS-MP2 and an imaginary-frequency integration in dRPA reduces the computational effort to O(N^4) in both cases. In addition, our implementations have low memory requirements and display excellent parallel scalability up to tens of thousands of processes. Furthermore, exploiting graphics processing units (GPUs), a further speedup by a factor of ∼2 is observed compared to the standard CPU-only implementations. In this way, RI-MP2, RI-SOS-MP2, and RI-dRPA calculations for condensed-phase systems containing hundreds of atoms and thousands of basis functions can be performed within minutes employing a few hundred hybrid nodes.
In order to validate the presented methods, various molecular crystals have been employed as benchmark systems to assess the performance, while solid LiH has been used to study the convergence with respect to the basis set and system size in the case of RI-MP2 and RI-dRPA.
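The RI contraction underlying these methods can be sketched with synthetic arrays (random numbers standing in for real integrals): given three-center quantities T[μ,ν,P] = (μν|P) and the auxiliary metric V[P,Q] = (P|Q), the four-index repulsion integrals are approximated as (μν|λσ) ≈ Σ_PQ (μν|P) [V⁻¹]_PQ (Q|λσ), usually via a Cholesky factorization of the metric.

```python
import numpy as np

rng = np.random.default_rng(0)
nao, naux = 6, 15
T = rng.standard_normal((nao, nao, naux))
T = 0.5 * (T + T.transpose(1, 0, 2))         # enforce (mu nu|P) = (nu mu|P)
M = rng.standard_normal((naux, naux))
V = M @ M.T + naux * np.eye(naux)            # symmetric positive-definite metric

# Factor once: B = T L^{-T} with V = L L^T, so eri = B contracted with itself
L = np.linalg.cholesky(V)
B = np.linalg.solve(L, T.reshape(-1, naux).T).T.reshape(nao, nao, naux)
eri = np.einsum('mnP,lsP->mnls', B, B)       # RI-approximated (mu nu|la si)
```

Storing B instead of the full four-index tensor is what reduces memory from O(N⁴) to O(N² · N_aux), and the factored form makes the bra-ket symmetry of the result explicit.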

Journal ArticleDOI
TL;DR: The methodology for parametrized quadratic optimization problems with elliptic equations as constraint and infinite dimensional control variable is developed and recast the optimal control problem in the framework of saddle-point problems in order to take advantage of the already developed RB theory for Stokes-type problems.
Abstract: We propose a suitable model reduction paradigm -- the certified reduced basis (RB) method -- for the rapid and reliable solution of parametrized optimal control problems governed by partial differential equations (PDEs). In particular, we develop the methodology for parametrized quadratic optimization problems with elliptic equations as constraint and infinite-dimensional control variable. Firstly, we recast the optimal control problem in the framework of saddle-point problems in order to take advantage of the already developed RB theory for Stokes-type problems. Then, the usual ingredients of the RB methodology are called into play: a Galerkin projection onto a low-dimensional space of basis functions properly selected by an adaptive procedure; an affine parametric dependence enabling a competitive Offline-Online splitting in the computational procedure; and an efficient and rigorous a posteriori error estimate on the state, control and adjoint variables as well as on the cost functional. Finally, we address some numerical tests that confirm our theoretical results and show the efficiency of the proposed technique.

Journal ArticleDOI
TL;DR: A nonintrusive reduced‐order modeling method based on the notion of space‐time‐parameter proper orthogonal decomposition (POD) for approximating the solution of nonlinear parametrized time‐dependent partial differential equations that leads to reduced‐ order models that accurately capture the behavior of the field variables as a function of the spatial coordinates, the parameter vector and time.
Abstract: We propose a nonintrusive reduced-order modeling method based on the notion of space-time-parameter proper orthogonal decomposition (POD) for approximating the solution of nonlinear parametrized time-dependent partial differential equations. A two-level POD method is introduced for constructing spatial and temporal basis functions with special properties such that the reduced-order model satisfies the boundary and initial conditions by construction. A radial basis function approximation method is used to estimate the undetermined coefficients in the reduced-order model without resorting to Galerkin projection. This nonintrusive approach enables the application of our approach to general problems with complicated nonlinearity terms. Numerical studies are presented for the parametrized Burgers' equation and a parametrized convection-reaction-diffusion problem. We demonstrate that our approach leads to reduced-order models that accurately capture the behavior of the field variables as a function of the spatial coordinates, the parameter vector and time. © 2013 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 2013
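The two building blocks of this nonintrusive approach can be sketched on synthetic snapshot data (a parametrized Gaussian bump here, not the paper's PDEs): a POD basis is extracted from an SVD of the snapshot matrix, and the POD coefficients are then interpolated over the parameter with radial basis functions, so no Galerkin projection of the governing equations is required.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

x = np.linspace(0.0, 1.0, 50)
mus = np.linspace(0.2, 0.8, 10)                        # training parameters
S = np.array([np.exp(-50 * (x - m) ** 2) for m in mus]).T  # snapshots (50 x 10)

U, s, Vt = np.linalg.svd(S, full_matrices=False)
r = 10                                                 # retained POD modes
Ur = U[:, :r]                                          # POD basis
coeffs = Ur.T @ S                                      # coefficients (r x 10)

# interpolate each POD coefficient as a function of the parameter mu
rom = RBFInterpolator(mus[:, None], coeffs.T)

def predict(mu):
    a = rom(np.array([[mu]]))[0]                       # coefficients at mu
    return Ur @ a                                      # reconstructed field
```

Because the RBF interpolant passes through the training data and all modes are retained here, the surrogate reproduces the training snapshots exactly; in practice r is truncated and new parameter values are queried between the training points.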

Journal ArticleDOI
TL;DR: An extensive numerical study showing the effects of shear rate, surface tension, and the geometry of the domain on the phase evolution of the binary fluid and a new periodic Bezier extraction operator are presented.

Journal ArticleDOI
TL;DR: In this article, a finite-difference time-domain (FDTD)-based method is developed to analyze 3D microwave circuits with uncertain parameters, such as variability and tolerances in the physical dimensions and geometry introduced by manufacturing processes.
Abstract: A novel finite-difference time-domain (FDTD)-based method is developed to analyze 3-D microwave circuits with uncertain parameters, such as variability and tolerances in the physical dimensions and geometry introduced by manufacturing processes. The proposed method incorporates geometrical variation into the FDTD algorithm by appropriately parameterizing and distorting the rectilinear and curvilinear computational lattices. Generalized polynomial chaos is used to expand the time-domain electric and magnetic fields in terms of orthogonal polynomial chaos basis functions of the uncertain mesh parameters. The technique is validated by modeling several microstrip circuits with uncertain physical dimensions and geometry. The computed S-parameters are compared against Monte Carlo simulations, and good agreement for the statistics is observed over 0-25 GHz. A considerable computational advantage over the Monte Carlo method is also achieved.
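The generalized polynomial chaos machinery can be sketched on a toy scalar output (not an FDTD field): expand y(ξ) = (1 + 0.1ξ)² with a standard normal uncertain parameter ξ in probabilists' Hermite polynomials He_k, computing the coefficients by Gauss-Hermite quadrature; the mean and variance then follow directly from the coefficients, with no Monte Carlo sampling.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def output(xi):
    """Toy uncertain output, e.g. a squared dimension with tolerance."""
    return (1.0 + 0.1 * xi) ** 2

nodes, weights = He.hermegauss(8)           # quadrature for weight exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)    # normalize to the Gaussian pdf

degree = 4
coeffs = []
for k in range(degree + 1):
    Hk = He.hermeval(nodes, [0.0] * k + [1.0])    # He_k at the nodes
    norm_k = float(math.factorial(k))             # E[He_k^2] = k!
    coeffs.append(np.sum(weights * output(nodes) * Hk) / norm_k)

mean = coeffs[0]                                  # E[y] = c_0
variance = sum(c * c * math.factorial(k)          # Var[y] = sum_{k>0} c_k^2 k!
               for k, c in enumerate(coeffs) if k > 0)
```

For this polynomial output the expansion is exact: c_0 = 1.01, c_1 = 0.2, c_2 = 0.01 and all higher coefficients vanish, giving Var[y] = 0.0402.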

Journal ArticleDOI
TL;DR: The results of numerical experiments are compared with the analytical solution, finite difference (FD) method and some published methods to confirm the accuracy and efficiency of the new scheme presented in this paper.

Journal ArticleDOI
TL;DR: In this article, the authors propose a self-regularizing integral equation that avoids the shortcomings of existing formulations: ill-conditioning at low frequencies or high discretization densities, applicability limited to the quasi-static regime, the need to search for global topological loops, and numerical cancellation at very low frequencies.
Abstract: All known integral equation techniques for simulating scattering and radiation from arbitrarily shaped, perfectly electrically conducting objects suffer from one or more of the following shortcomings: (i) they give rise to ill-conditioned systems when the frequency is low and/or (ii) when the discretization density is high, (iii) their applicability is limited to the quasi-static regime, (iv) they require a search for global topological loops, and (v) they suffer from numerical cancellations in the solution when the frequency is very low. This work presents an equation that does not suffer from any of the above drawbacks when applied to smooth and closed objects. The new formulation is obtained starting from a Helmholtz decomposition of two discretizations of the electric field integral operator obtained by using RWGs and dual bases, respectively. The new decomposition does not leverage Loop and Star/Tree basis functions, but projectors that derive from them. Following the decomposition, the two discretizations are combined in a Calderon-like fashion, resulting in a new overall equation that is shown to exhibit self-regularizing properties without suffering from the limitations of existing formulations. Numerical results show the usefulness of the proposed method both for closed and open structures.

Journal ArticleDOI
TL;DR: In this article, a series of numerically tabulated atom-centered orbital (NAO) basis sets with valence correlation consistency (VCC), termed NAO- VCC-nZ, is presented.
Abstract: We present a series of numerically tabulated atom-centered orbital (NAO) basis sets with valence-correlation consistency (VCC), termed NAO-VCC-nZ. Here the index 'nZ' refers to the number of basis functions used for the valence shell with n = 2, 3, 4, 5. These basis sets are constructed analogously to Dunning's cc-pVnZ, but utilize the more flexible shape of NAOs. Moreover, an additional group of (sp) basis functions, called the enhanced minimal basis, is established in NAO-VCC-nZ, increasing the contribution of the s and p functions to achieve the valence-correlation consistency. NAO-VCC-nZ basis sets are generated by minimizing frozen-core random-phase approximation (RPA) total energies of individual atoms from H to Ar. We demonstrate that NAO-VCC-nZ basis sets are suitable for converging electronic total-energy calculations based on valence-only (frozen-core) correlation methods which contain explicit sums over unoccupied states (e.g. the RPA or second-order Møller-Plesset perturbation theory).

Journal ArticleDOI
TL;DR: The new concept of a spatially varying photometric scale factor is introduced which will be important for DIA applied to wide-field imaging data in order to adapt to transparency and airmass variations across the field-of-view.
Abstract: We present a general framework for matching the point-spread function (PSF), photometric scaling and sky background between two images, a subject which is commonly referred to as difference image analysis (DIA). We introduce the new concept of a spatially varying photometric scale factor which will be important for DIA applied to wide-field imaging data in order to adapt to transparency and airmass variations across the field-of-view. Furthermore, we demonstrate how to separately control the degree of spatial variation of each kernel basis function, the photometric scale factor and the differential sky background. We discuss the common choices for kernel basis functions within our framework, and we introduce the mixed-resolution delta basis functions to address the problem of the size of the least-squares problem to be solved when using delta basis functions. We validate and demonstrate our algorithm on simulated and real data. We also describe a number of useful optimizations that may be capitalized on during the construction of the least-squares matrix and which have not been reported previously. We pay special attention to presenting a clear notation for the DIA equations which are set out in a way that will hopefully encourage developers to tackle the implementation of DIA software.
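The core least-squares structure of DIA with delta basis functions can be sketched in one dimension on synthetic data (a toy stand-in for image data; constant kernel and background, no spatial variation): the target signal is modeled as a short convolution kernel applied to the reference plus a background, and the kernel pixels and background are found by linear least squares against shifted copies of the reference.

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.standard_normal(200)                   # reference "image"
k_true = np.array([0.2, 0.9, 0.1])               # true 3-pixel kernel
bkg_true = 0.3
target = np.convolve(ref, k_true, mode='same') + bkg_true

# design matrix: one shifted copy of the reference per kernel pixel
# (the delta basis functions), plus a constant column for the background;
# edge samples are trimmed to avoid wrap-around from np.roll
cols = [np.roll(ref, -1), ref, np.roll(ref, 1), np.ones_like(ref)]
A = np.column_stack(cols)[1:-1]
sol, *_ = np.linalg.lstsq(A, target[1:-1], rcond=None)
k_fit, bkg_fit = sol[:3], sol[3]
```

A spatially varying kernel or photometric scale factor, as in the paper, would multiply each column by low-order polynomials of position, enlarging the same least-squares system rather than changing its structure.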

Journal ArticleDOI
TL;DR: This work presents a priori and a posteriori error analyses of a new multiscale hybrid-mixed method (MHM) for an elliptic model, and proposes a face-residual a posteriori error estimator that controls the error of both variables in the natural norms.
Abstract: This work presents a priori and a posteriori error analyses of a new multiscale hybrid-mixed method (MHM) for an elliptic model. Specially designed to incorporate multiple scales into the construction of basis functions, this finite element method relaxes the continuity of the primal variable through the action of Lagrange multipliers, while assuring the strong continuity of the normal component of the flux (dual variable). As a result, the dual variable, which stems from a simple postprocessing of the primal variable, preserves local conservation. We prove existence and uniqueness of a solution for the MHM method as well as optimal convergence estimates of any order in the natural norms. Also, we propose a face-residual a posteriori error estimator, and prove that it controls the error of both variables in the natural norms. Several numerical tests assess the theoretical results.

Journal ArticleDOI
TL;DR: In this paper, an isogeometric solid-like shell formulation is proposed in which B-spline basis functions are used to construct the mid-surface of the shell, in combination with a linear Lagrange shape function in the thickness direction.
Abstract: An isogeometric solid-like shell formulation is proposed in which B-spline basis functions are used to construct the mid-surface of the shell. In combination with a linear Lagrange shape function in the thickness direction, this yields a complete three-dimensional representation of the shell. The proposed shell element is implemented in a standard finite element code using Bezier extraction. The formulation is verified using different benchmark tests.
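The B-spline basis functions used in formulations like this one are defined by the Cox-de Boor recursion; a generic sketch (not the paper's shell element) shows that, on an open (clamped) knot vector, the cubic basis forms a partition of unity inside the knot range.

```python
def bspline_basis(i, p, knots, x):
    """Value of the i-th B-spline basis function of degree p at x,
    via the Cox-de Boor recursion (half-open interval convention)."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + p] > knots[i]:                  # skip zero-width spans
        val += (x - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, knots, x)
    if knots[i + p + 1] > knots[i + 1]:
        val += (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
               * bspline_basis(i + 1, p - 1, knots, x)
    return val

# cubic basis on an open (clamped) knot vector over [0, 4]
knots = [0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4]
n_basis = len(knots) - 3 - 1                     # 7 cubic basis functions
```

Isogeometric codes typically evaluate these functions through Bezier extraction, as in the abstract, precisely so that a conventional element-by-element finite element loop can be reused.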

Proceedings ArticleDOI
01 Jan 2013
TL;DR: This work presents a new method of inverse optimal control based on minimizing the extent to which observed trajectories violate first-order necessary conditions for optimality, which is more computationally efficient than prior methods, performs similarly to prior approaches under large perturbations to the system, and better learns the true cost function under small perturbation.
Abstract: Inverse optimal control is the problem of computing a cost function with respect to which observed state and input trajectories are optimal. We present a new method of inverse optimal control based on minimizing the extent to which observed trajectories violate first-order necessary conditions for optimality. We consider continuous-time deterministic optimal control systems with a cost function that is a linear combination of known basis functions. We compare our approach with three prior methods of inverse optimal control. We demonstrate the performance of these methods by performing simulation experiments using a collection of nominal system models. We compare the robustness of these methods by analysing how they perform under perturbations to the system. To this purpose, we consider two scenarios: one in which we exactly know the set of basis functions in the cost function, and another in which the true cost function contains an unknown perturbation. Results from simulation experiments show that our new method is more computationally efficient than prior methods, performs similarly to prior approaches under large perturbations to the system, and better learns the true cost function under small perturbations.

Journal ArticleDOI
TL;DR: In this paper, an efficient numerical technique is developed to approximate the solution of two-dimensional cubic nonlinear Schrodinger equations, which is based on the nonsymmetric radial basis function collocation method (Kansa's method), within an operator Newton algorithm.
Abstract: In this paper, an efficient numerical technique is developed to approximate the solution of two-dimensional cubic nonlinear Schrodinger equations. The method is based on the nonsymmetric radial basis function collocation method (Kansa's method) within an operator Newton algorithm. In the proposed process, three-dimensional radial basis functions (in particular, three-dimensional multiquadric (MQ) and inverse multiquadric (IMQ) functions) are used as the basis functions. For solving the resulting nonlinear system, an algorithm based on the Newton approach is constructed and applied. In the multilevel Newton algorithm, to overcome the instability of standard methods on the resulting ill-conditioned system, an efficient technique based on Tikhonov regularization with the generalized cross-validation (GCV) method is used. Finally, the presented method is applied to several examples of the governing problem. The comparison between the obtained numerical solutions and the exact solutions demonstrates the reliability, accuracy and efficiency of the method.
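Kansa's nonsymmetric collocation can be sketched on a 1-D linear toy problem (a simplified stand-in for the paper's 2-D nonlinear Schrodinger solver): for u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, interior rows collocate the differential operator applied to multiquadric basis functions, boundary rows collocate the function values, and the resulting nonsymmetric system is solved directly.

```python
import numpy as np

n, c = 20, 0.2                                   # nodes and MQ shape parameter
xs = np.linspace(0.0, 1.0, n)
f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)    # so u_exact = sin(pi * x)

r = xs[:, None] - xs[None, :]
phi = np.sqrt(r ** 2 + c ** 2)                   # MQ basis phi_j at node x_i
phi_xx = c ** 2 / phi ** 3                       # second derivative of MQ

A = phi_xx.copy()
rhs = f(xs)
A[[0, -1]] = phi[[0, -1]]                        # boundary collocation rows
rhs[[0, -1]] = 0.0
coef = np.linalg.solve(A, rhs)                   # nonsymmetric Kansa system

u = phi @ coef                                   # numerical solution at nodes
err = np.max(np.abs(u - np.sin(np.pi * xs)))
```

The conditioning of A deteriorates rapidly as the shape parameter c grows or the nodes refine, which is exactly why the paper replaces the direct solve with Tikhonov regularization in its Newton iterations.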

Journal ArticleDOI
TL;DR: In this article, the improved element-free Galerkin (IEFG) method is applied to study the partial differential equations that control the heat flow in three-dimensional space, and the traditional difference method for two-point boundary value problems is selected for the time discretization.
Abstract: With the improved moving least-squares (IMLS) approximation, an orthogonal function system with a weight function is used as the basis function. The combination of the element-free Galerkin (EFG) method and the IMLS approximation leads to the development of the improved element-free Galerkin (IEFG) method. In this paper, the IEFG method is applied to study the partial differential equations that control the heat flow in three-dimensional space. With the IEFG technique, the Galerkin weak form is employed to develop the discretized system equations, and the penalty method is applied to impose the essential boundary conditions. The traditional difference method for two-point boundary value problems is selected for the time discretization. As the transient heat conduction equations and the boundary and initial conditions are time dependent, the scaling parameter, number of nodes and time step length are considered in a convergence study.

Journal ArticleDOI
TL;DR: The novelty of the contribution lies in the way the authors handle the information flow from output to input space, and the way they handle the effect of the input space partition upon network's performance.

Journal ArticleDOI
TL;DR: In this article, a new family of systematically constructed near-singularity cancellation transformations is presented, yielding quadrature rules for integrating nearsingular kernels over triangular surfaces, which can be applied to higher-order basis functions and curvilinear settings.
Abstract: A new family of systematically constructed near-singularity cancellation transformations is presented, yielding quadrature rules for integrating near-singular kernels over triangular surfaces. This family results from a structured augmentation of the well-known Duffy transformation. The benefits of near-singularity cancellation quadrature are that no analytical integral evaluations are required and that it is applicable in higher-order basis function and curvilinear settings. Six specific transformations are constructed for near-singularities of orders one, two and three. Two of these transformations are found to be equivalent to existing ones. The performance of the new schemes is thoroughly assessed and compared with that of existing schemes. Results for the gradient of the scalar Green function are also presented. For simplicity, static-kernel results are shown. The new schemes are competitive with and in some cases superior to the existing schemes considered.
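The basic Duffy transformation that this family augments can be sketched on the singular (rather than near-singular) case: integrating the 1/R kernel over the triangle with vertices (0,0), (1,0), (1,1), singular at (0,0). The map x = u, y = uv sends the unit square to the triangle with Jacobian u, which cancels the 1/R singularity and leaves a smooth integrand for ordinary Gauss-Legendre quadrature.

```python
import numpy as np

g, w = np.polynomial.legendre.leggauss(8)
t, wt = 0.5 * (g + 1.0), 0.5 * w                 # Gauss rule mapped to [0, 1]

total = 0.0
for u, wu in zip(t, wt):
    for v, wv in zip(t, wt):
        x, y = u, u * v                          # Duffy map: square -> triangle
        jac = u                                  # its Jacobian
        total += wu * wv * jac / np.hypot(x, y)  # (1/R) * |J| is smooth in (u, v)

exact = np.arcsinh(1.0)                          # = ln(1 + sqrt(2)) ~ 0.8814
```

After the change of variables the integrand reduces to 1/sqrt(1 + v²), so a small tensor Gauss rule is already accurate to many digits; the paper's contribution is constructing analogous maps that also cancel near-singularities of higher order.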

Journal ArticleDOI
TL;DR: In this article, the non-linear dynamics of an axially moving beam with time-dependent axial speed were investigated, including numerical results for the nonlinear resonant response of the system in the sub-critical speed regime and global dynamical behavior.
Abstract: This paper investigates the non-linear dynamics of an axially moving beam with time-dependent axial speed, including numerical results for the non-linear resonant response of the system in the sub-critical speed regime and global dynamical behavior. Using Galerkin's technique, the non-linear partial differential equation of motion is discretized and reduced to a set of ordinary differential equations (ODEs) by choosing the basis functions to be eigenfunctions of a stationary beam. The set of ODEs is solved by the pseudo-arclength continuation technique, for the system in the sub-critical axial speed regime, and by direct time integration to investigate the global dynamics. Results are shown through frequency–response curves as well as bifurcation diagrams of the Poincare maps. Points of interest in the parameter space in the form of time traces, phase-plane portraits, Poincare maps, and fast Fourier transforms (FFTs) are also highlighted. Numerical results indicate that the system displays a wide variety of rich and interesting dynamical behavior.
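The Galerkin reduction step can be sketched on the linear, stationary Euler-Bernoulli beam u_tt = -u_xxxx with simply supported ends (a simplified stand-in for the paper's axially moving, nonlinear beam): projecting onto the stationary-beam eigenfunctions sin(nπx) decouples the PDE into modal ODEs q_n'' = -(nπ)⁴ q_n, which are then integrated in time as a first-order system.

```python
import numpy as np
from scipy.integrate import solve_ivp

M = 4                                             # number of retained modes
omega2 = np.array([(n * np.pi) ** 4 for n in range(1, M + 1)])

def rhs(t, z):
    """First-order form of the modal equations q_n'' = -omega2[n] * q_n."""
    q, qdot = z[:M], z[M:]
    return np.concatenate([qdot, -omega2 * q])

z0 = np.zeros(2 * M)
z0[0] = 1.0                                       # excite the first mode only
sol = solve_ivp(rhs, (0.0, 1.0), z0, rtol=1e-9, atol=1e-12)

q1_end = sol.y[0, -1]                             # should equal cos(pi^2 * t) at t = 1
```

With axial motion and nonlinearity, as in the paper, the projected terms couple the modal coordinates, so the same reduction yields a coupled ODE system that is then handed to continuation and time-integration tools.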