
Showing papers on "Basis function published in 2020"


Journal ArticleDOI
TL;DR: FEMa, a finite element machine classifier for supervised learning problems, is presented; each training sample is the center of a basis function, and the whole training set is modeled as a probabilistic manifold for classification purposes.
Abstract: Machine learning has played an essential role in the past decades and has been in lockstep with the main advances in computer technology. Given the massive amount of data generated daily, there is a need for even faster and more effective machine learning algorithms that can provide updated models for real-time applications and on-demand tools. This paper presents FEMa—a finite element machine classifier—for supervised learning problems, where each training sample is the center of a basis function, and the whole training set is modeled as a probabilistic manifold for classification purposes. FEMa has its theoretical basis in the finite element method, which is widely used for numerical analysis in engineering problems. It is shown that FEMa is parameterless and has quadratic complexity for both the training and classification phases when basis functions satisfying certain properties are used. The proposed classifier yields very competitive results when compared to some state-of-the-art supervised pattern recognition techniques.
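The abstract does not fix a concrete basis family, so the following minimal sketch is only one plausible reading of the idea: Shepard-style inverse-distance basis functions centered at the training samples, with class scores obtained by summing the weights of same-class centers. The basis choice, the exponent k, and the function names are assumptions made for illustration.

```python
import numpy as np

def fema_predict(X_train, y_train, X_test, k=2, eps=1e-12):
    """FEMa-style classification sketch: every training sample is the center of
    a Shepard (inverse-distance) basis function, and each class is scored by the
    summed basis weights of the training samples belonging to it."""
    X_train, y_train = np.asarray(X_train), np.asarray(y_train)
    classes = np.unique(y_train)
    preds = []
    for x in np.asarray(X_test):
        d = np.linalg.norm(X_train - x, axis=1) + eps   # distances to all centers
        w = d ** (-k)
        w /= w.sum()                                    # partition-of-unity weights
        scores = [w[y_train == c].sum() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```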

161 citations


Journal ArticleDOI
TL;DR: In this article, an approximate series expansion of the covariance function in terms of an eigenfunction expansion of the Laplace operator in a compact subset of $\mathbb{R}^d$ is proposed.
Abstract: This paper proposes a novel scheme for reduced-rank Gaussian process regression. The method is based on an approximate series expansion of the covariance function in terms of an eigenfunction expansion of the Laplace operator in a compact subset of $\mathbb{R}^d$. On this approximate eigenbasis, the eigenvalues of the covariance function can be expressed as simple functions of the spectral density of the Gaussian process, which allows the GP inference to be solved under a computational cost scaling as $\mathcal{O}(nm^2)$ (initial) and $\mathcal{O}(m^3)$ (hyperparameter learning) with $m$ basis functions and $n$ data points. Furthermore, the basis functions are independent of the parameters of the covariance function, which allows for very fast hyperparameter learning. The approach also allows for rigorous error analysis with Hilbert space theory, and we show that the approximation becomes exact when the size of the compact subset and the number of eigenfunctions go to infinity. We also show that the convergence rate of the truncation error is independent of the input dimensionality provided that the differentiability order of the covariance function increases appropriately, and for the squared exponential covariance function it is always bounded by $\sim 1/m$ regardless of the input dimensionality. The expansion generalizes to Hilbert spaces with an inner product which is defined as an integral over a specified input density. The method is compared to previously proposed methods theoretically and through empirical tests with simulated and real data.
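As a concrete illustration of the expansion described above, the sketch below implements the one-dimensional case on the interval [-L, L], where the Laplacian eigenpairs and the spectral density of the squared exponential covariance have simple closed forms. The domain size, number of basis functions, and hyperparameter values are placeholders, not values from the paper.

```python
import numpy as np

def hilbert_gp_fit_predict(x, y, x_star, L=5.0, m=32,
                           sigma_f=1.0, ell=0.5, sigma_n=0.1):
    """Reduced-rank GP regression sketch on [-L, L] with a squared-exponential
    covariance, using Laplacian eigenfunctions as basis functions."""
    j = np.arange(1, m + 1)
    sqrt_lam = np.pi * j / (2.0 * L)                      # sqrt of Laplacian eigenvalues
    # spectral density of the SE covariance evaluated at sqrt(lambda_j)
    S = sigma_f**2 * np.sqrt(2.0 * np.pi) * ell * np.exp(-0.5 * (ell * sqrt_lam)**2)

    def phi(t):                                           # Laplacian eigenfunctions
        return np.sin(np.outer(t + L, sqrt_lam)) / np.sqrt(L)

    Phi, Phi_s = phi(x), phi(x_star)                      # n x m and n* x m
    A = Phi.T @ Phi + sigma_n**2 * np.diag(1.0 / S)       # m x m system: O(n m^2) cost
    return Phi_s @ np.linalg.solve(A, Phi.T @ y)          # approximate posterior mean
```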

138 citations


Proceedings ArticleDOI
01 May 2020
TL;DR: This paper translates the result from the behavioral context to the classical state-space control framework and extends it to certain classes of nonlinear systems, which are linear in suitable input-output coordinates, and shows how this extension can be applied to the data-driven simulation problem, where it introduces kernel-methods to obtain a rich set of basis functions.
Abstract: The vector space of all input-output trajectories of a discrete-time linear time-invariant (LTI) system is spanned by time-shifts of a single measured trajectory, given that the respective input signal is persistently exciting. This fact, which was proven in the behavioral control framework, shows that a single measured trajectory can capture the full behavior of an LTI system and might therefore be used directly for system analysis and controller design, without explicitly identifying a model. In this paper, we translate the result from the behavioral context to the classical state-space control framework and we extend it to certain classes of nonlinear systems, which are linear in suitable input-output coordinates. Moreover, we show how this extension can be applied to the data-driven simulation problem, where we introduce kernel methods to obtain a rich set of basis functions.
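A minimal single-input single-output sketch of the data-driven simulation idea is given below, using the standard construction behind the cited result: time-shifted windows of the measured data collected into Hankel matrices. The trajectory lengths, the least-squares choice of the combination vector g, and all names are illustrative assumptions, and persistency of excitation of the measured input is taken for granted.

```python
import numpy as np

def hankel(w, L):
    """Hankel matrix of depth L built from a 1-D signal w (columns = windows)."""
    T = len(w)
    return np.column_stack([w[i:i + L] for i in range(T - L + 1)])

def data_driven_sim(u_d, y_d, u_ini, y_ini, u_new):
    """Data-driven simulation sketch for a SISO LTI system: (u_d, y_d) is a
    measured trajectory with persistently exciting input; (u_ini, y_ini) fixes
    the initial condition; the response to u_new is read off the same linear
    combination of data columns."""
    T_ini, T_f = len(u_ini), len(u_new)
    L = T_ini + T_f
    Hu, Hy = hankel(u_d, L), hankel(y_d, L)
    A = np.vstack([Hu, Hy[:T_ini]])                  # constraints: inputs and past outputs
    b = np.concatenate([u_ini, u_new, y_ini])
    g, *_ = np.linalg.lstsq(A, b, rcond=None)        # one valid combination of data columns
    return Hy[T_ini:] @ g                            # predicted future outputs
```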

109 citations


Journal ArticleDOI
TL;DR: CRYSTAL is a periodic ab initio code that uses a Gaussian-type basis set to express crystalline orbitals (i.e., Bloch functions) and can be used efficiently on high performance computing machines up to thousands of cores.
Abstract: CRYSTAL is a periodic ab initio code that uses a Gaussian-type basis set to express crystalline orbitals (i.e., Bloch functions). The use of atom-centered basis functions allows treating 3D (crystals), 2D (slabs), 1D (polymers), and 0D (molecules) systems on the same grounds. In turn, all-electron calculations are inherently permitted along with pseudopotential strategies. A variety of density functionals are implemented, including global and range-separated hybrids of various natures and, as an extreme case, Hartree–Fock (HF). The cost for HF or hybrids is only about 3–5 times higher than when using the local density approximation or the generalized gradient approximation. Symmetry is fully exploited at all steps of the calculation. Many tools are available to modify the structure as given in input and simplify the construction of complicated objects, such as slabs, nanotubes, molecules, and clusters. Many tensorial properties can be evaluated by using a single input keyword: elastic, piezoelectric, photoelastic, dielectric, first and second hyperpolarizabilities, etc. The calculation of infrared and Raman spectra is available, and the intensities are computed analytically. Automated tools are available for the generation of the relevant configurations of solid solutions and/or disordered systems. Three versions of the code exist: serial, parallel, and massive-parallel. In the second one, the most relevant matrices are duplicated on each core, whereas in the third one, the Fock matrix is distributed for diagonalization. All the relevant vectors are dynamically allocated and deallocated after use, making the code very agile. CRYSTAL can be used efficiently on high performance computing machines up to thousands of cores.

106 citations


Posted Content
TL;DR: Numerical results indicate that DL-ROMs whose dimension is equal to the intrinsic dimensionality of the PDE solutions manifold are able to efficiently approximate the solution of parametrized PDEs, especially in cases for which a huge number of POD modes would have been necessary to achieve the same degree of accuracy.
Abstract: Traditional reduced order modeling techniques such as the reduced basis (RB) method (relying, e.g., on proper orthogonal decomposition (POD)) suffer from severe limitations when dealing with nonlinear time-dependent parametrized PDEs, because of the fundamental assumption of linear superimposition of modes they are based on. For this reason, in the case of problems featuring coherent structures that propagate over time such as transport, wave, or convection-dominated phenomena, the RB method usually yields inefficient reduced order models (ROMs) if one aims at obtaining reduced order approximations sufficiently accurate compared to the high-fidelity, full order model (FOM) solution. To overcome these limitations, in this work, we propose a new nonlinear approach to set reduced order models by exploiting deep learning (DL) algorithms. In the resulting nonlinear ROM, which we refer to as DL-ROM, both the nonlinear trial manifold (corresponding to the set of basis functions in a linear ROM) as well as the nonlinear reduced dynamics (corresponding to the projection stage in a linear ROM) are learned in a non-intrusive way by relying on DL algorithms; the latter are trained on a set of FOM solutions obtained for different parameter values. In this paper, we show how to construct a DL-ROM for both linear and nonlinear time-dependent parametrized PDEs; moreover, we assess its accuracy on test cases featuring different parametrized PDE problems. Numerical results indicate that DL-ROMs whose dimension is equal to the intrinsic dimensionality of the PDE solutions manifold are able to approximate the solution of parametrized PDEs in situations where a huge number of POD modes would be necessary to achieve the same degree of accuracy.
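As a rough illustration of the architecture described above, the PyTorch-style sketch below pairs a decoder (the learned nonlinear trial manifold) with a small feed-forward net for the reduced dynamics mapping time and parameters to latent coordinates. Layer sizes, activations, and the training loss noted in the comments are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DLROM(nn.Module):
    """Schematic DL-ROM: the decoder defines the nonlinear trial manifold and a
    small feed-forward net learns the reduced dynamics (time and parameters
    mapped to latent coordinates). All sizes are illustrative."""
    def __init__(self, N_h, n_latent=4, n_params=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_h, 128), nn.ELU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ELU(),
                                     nn.Linear(128, N_h))
        self.dynamics = nn.Sequential(nn.Linear(1 + n_params, 64), nn.ELU(),
                                      nn.Linear(64, n_latent))

    def forward(self, u_h, t_mu):
        z_enc = self.encoder(u_h)      # latent code of the FOM snapshot
        z_dyn = self.dynamics(t_mu)    # latent code predicted from (t, mu)
        return self.decoder(z_dyn), z_enc, z_dyn

# Training would minimize ||decoder(z_dyn) - u_h||^2 + ||z_dyn - z_enc||^2 over the
# FOM snapshots; online, only `dynamics` and `decoder` are evaluated for new (t, mu).
```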

99 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the relationship between deep neural networks (DNN) with ReLU function as the activation function and continuous piecewise linear (CPWL) functions, especially CPWL functions from the simplicial linear finite element method (FEM).
Abstract: In this paper, we investigate the relationship between deep neural networks (DNN) with rectified linear unit (ReLU) function as the activation function and continuous piecewise linear (CPWL) functions, especially CPWL functions from the simplicial linear finite element method (FEM). We first consider the special case of FEM. By exploring the DNN representation of its nodal basis functions, we present a ReLU DNN representation of CPWL in FEM. We theoretically establish that at least $2$ hidden layers are needed in a ReLU DNN to represent any linear finite element function in $\Omega \subseteq \mathbb{R}^d$ when $d\ge2$. Consequently, for $d=2,3$, which are often encountered in scientific and engineering computing, two hidden layers are necessary and sufficient for any CPWL function to be represented by a ReLU DNN. We then give a detailed account of how a general CPWL function in $\mathbb{R}^d$ can be represented by a ReLU DNN with at most $\lceil\log_2(d+1)\rceil$ hidden layers, and we also give an estimate of the number of neurons in the DNN that are needed in such a representation. Furthermore, using the relationship between DNN and FEM, we theoretically argue that a special class of DNN models with low bit-width are still expected to have adequate representation power in applications. Finally, as a proof of concept, we present some numerical results for using ReLU DNNs to solve a two-point boundary value problem, to demonstrate the potential of applying DNNs for the numerical solution of partial differential equations.
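A worked one-dimensional instance of the FEM-to-ReLU connection discussed above: the nodal "hat" basis function on a uniform grid is exactly a one-hidden-layer ReLU network with three neurons (the grid spacing and node location below are arbitrary). The paper's point is that for $d\ge2$ one hidden layer no longer suffices and at least two are needed.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def hat_via_relu(x, xi, h):
    """1-D linear FEM nodal ('hat') basis function at node xi on a uniform grid
    with spacing h, written as a one-hidden-layer ReLU network with 3 neurons."""
    return (relu(x - (xi - h)) - 2.0 * relu(x - xi) + relu(x - (xi + h))) / h

x = np.linspace(0.0, 2.0, 9)
print(hat_via_relu(x, xi=1.0, h=0.5))   # equals 1 at the node, 0 outside [0.5, 1.5]
```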

90 citations


Posted Content
TL;DR: It is shown that self-distillation iterations modify regularization by progressively limiting the number of basis functions that can be used to represent the solution, which implies that while a few rounds of self- Distillation may reduce over-fitting, further rounds may lead to under-fitting and thus worse performance.
Abstract: Knowledge distillation introduced in the deep learning context is a method to transfer knowledge from one architecture to another. In particular, when the architectures are identical, this is called self-distillation. The idea is to feed in predictions of the trained model as new target values for retraining (and iterate this loop possibly a few times). It has been empirically observed that the self-distilled model often achieves higher accuracy on held out data. Why this happens, however, has been a mystery: the self-distillation dynamics does not receive any new information about the task and solely evolves by looping over training. To the best of our knowledge, there is no rigorous understanding of this phenomenon. This work provides the first theoretical analysis of self-distillation. We focus on fitting a nonlinear function to training data, where the model space is a Hilbert space and fitting is subject to $\ell_2$ regularization in this function space. We show that self-distillation iterations modify regularization by progressively limiting the number of basis functions that can be used to represent the solution. This implies (as we also verify empirically) that while a few rounds of self-distillation may reduce over-fitting, further rounds may lead to under-fitting and thus worse performance.
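The mechanism described above can be made concrete with a toy kernel ridge regression loop, which is consistent with (but not identical to) the paper's Hilbert-space setting; the RBF kernel, regularization strength, data, and thresholds below are arbitrary choices for illustration. Refitting the model on its own predictions repeatedly shrinks the coefficients along small-eigenvalue basis directions, so fewer basis functions effectively remain in play each round.

```python
import numpy as np

def rbf_kernel(X, Y, ell=0.5):
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * ell ** 2))

def self_distill(X, y, rounds=5, lam=1e-2):
    """Kernel ridge regression refit on its own predictions. In the kernel
    eigenbasis, round t multiplies the i-th coefficient by (d_i/(d_i+lam))**t,
    so small-eigenvalue directions are progressively suppressed."""
    K = rbf_kernel(X, X)
    d, U = np.linalg.eigh(K)
    targets = y.copy()
    for t in range(1, rounds + 1):
        alpha = np.linalg.solve(K + lam * np.eye(len(y)), targets)
        targets = K @ alpha                     # predictions become next-round targets
        coeffs = U.T @ targets                  # expansion in the kernel eigenbasis
        kept = int(np.sum(np.abs(coeffs) > 1e-3 * np.abs(coeffs).max()))
        print(f"round {t}: {kept} basis directions still carry signal")
    return alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 1))
y = np.sin(3 * X[:, 0]) + 0.3 * rng.standard_normal(40)
self_distill(X, y)
```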

86 citations


Journal ArticleDOI
TL;DR: The formalism of the linearised muffin-tin orbital (LMTO) method is revisited in detail and developed further by the introduction of short-ranged tight-binding basis functions for full-potential calculations.

81 citations


Journal ArticleDOI
TL;DR: A novel model reduction method based on proper orthogonal decomposition and a temporal convolutional neural network is presented, which depends only on the flow-field solution to construct the reduced-order model.
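Only a TL;DR is available for this entry. As background for its POD ingredient, the sketch below extracts a POD basis from a snapshot matrix via the SVD; the temporal convolutional network that evolves the reduced coefficients is not shown, and the energy diagnostic is an illustrative addition rather than part of the paper.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Extract the first r POD basis functions from a snapshot matrix whose
    columns are flow-field solutions at successive time instants; a temporal
    network would then learn to evolve the reduced coefficients U[:, :r].T @ u."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    print(f"first {r} modes capture {100 * energy[r - 1]:.1f}% of the snapshot energy")
    return U[:, :r]                      # columns = POD modes (basis functions)
```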

68 citations


Journal ArticleDOI
TL;DR: The main goal of this paper is to design multiscale basis functions within the GMsFEM framework such that the convergence of the method is independent of the contrast and decreases linearly with respect to the mesh size if the oversampling size is appropriately chosen.

64 citations


Proceedings Article
13 Feb 2020
TL;DR: In this article, the authors provide a theoretical analysis of self-distillation in the context of fitting a nonlinear function to training data, where the model space is Hilbert space and fitting is subject to regularization in this function space.
Abstract: Knowledge distillation introduced in the deep learning context is a method to transfer knowledge from one architecture to another. In particular, when the architectures are identical, this is called self-distillation. The idea is to feed in predictions of the trained model as new target values for retraining (and iterate this loop possibly a few times). It has been empirically observed that the self-distilled model often achieves higher accuracy on held out data. Why this happens, however, has been a mystery: the self-distillation dynamics does not receive any new information about the task and solely evolves by looping over training. To the best of our knowledge, there is no rigorous understanding of this phenomenon. This work provides the first theoretical analysis of self-distillation. We focus on fitting a nonlinear function to training data, where the model space is a Hilbert space and fitting is subject to $\ell_2$ regularization in this function space. We show that self-distillation iterations modify regularization by progressively limiting the number of basis functions that can be used to represent the solution. This implies (as we also verify empirically) that while a few rounds of self-distillation may reduce over-fitting, further rounds may lead to under-fitting and thus worse performance.

Journal ArticleDOI
TL;DR: A general procedure is introduced which generates sparse sampling points in time and frequency from compact orthogonal basis representations, such as Chebyshev polynomials and intermediate representation (IR) basis functions, which accurately resolve the information contained in the Green's function.
Abstract: Efficient ab initio calculations of correlated materials at finite temperatures require compact representations of the Green's functions both in imaginary time and in Matsubara frequency. In this paper, we introduce a general procedure which generates sparse sampling points in time and frequency from compact orthogonal basis representations, such as Chebyshev polynomials and intermediate representation basis functions. These sampling points accurately resolve the information contained in the Green's function, and efficient transforms between different representations are formulated with minimal loss of information. As a demonstration, we apply the sparse sampling scheme to diagrammatic $GW$ and second-order Green's function theory calculations of a hydrogen chain, of noble gas atoms, and of a silicon crystal.

Journal ArticleDOI
TL;DR: Experimental results show that attribute compression using higher order volumetric functions is an improvement over the first-order functions used in the emerging MPEG point cloud compression standard.
Abstract: Compression of point clouds has so far been confined to coding the positions of a discrete set of points in space and the attributes of those discrete points. We introduce an alternative approach based on volumetric functions that are functions defined not just on a finite set of points but throughout space. As in regression analysis, volumetric functions are continuous functions that are able to interpolate values on a finite set of points as linear combinations of continuous basis functions. Using a B-spline wavelet basis, we are able to code volumetric functions representing both geometry and attributes. Geometry compression is addressed in Part II of this paper, while attribute compression is addressed in Part I. Attributes are represented by a volumetric function whose coefficients can be regarded as a critically sampled orthonormal transform that generalizes the recent successful Region-Adaptive Hierarchical (or Haar) Transform to higher orders. Experimental results show that attribute compression using higher order volumetric functions is an improvement over the first-order functions used in the emerging MPEG point cloud compression standard.

Journal ArticleDOI
TL;DR: Experimental results show that geometry compression using volumetric functions improves over the methods used in the emerging MPEG Point Cloud Compression (G-PCC) standard.
Abstract: Compression of point clouds has so far been confined to coding the positions of a discrete set of points in space and the attributes of those discrete points. We introduce an alternative approach based on volumetric functions, which are functions defined not just on a finite set of points, but throughout space. As in regression analysis, volumetric functions are continuous functions that are able to interpolate values on a finite set of points as linear combinations of continuous basis functions. Using a B-spline wavelet basis, we are able to code volumetric functions representing both geometry and attributes. Attribute compression is addressed in Part I of this paper, while geometry compression is addressed in Part II. Geometry is represented implicitly as the level set of a volumetric function (the signed distance function or similar). Experimental results show that geometry compression using volumetric functions improves over the methods used in the emerging MPEG Point Cloud Compression (G-PCC) standard.

Journal ArticleDOI
TL;DR: Novel orthogonal fractional-order Legendre-Fourier moments are proposed for pattern recognition applications, and the new descriptors are found to be superior to existing ones in terms of accuracy, stability, noise resistance, invariance to similarity transformations, recognition rates, and computational times.

Journal ArticleDOI
TL;DR: Inverse scattering problems, i.e., reconstructions of physical properties of a medium from boundary measurements, are substantially challenging; this work verifies, on experimental data, the performance of a newly developed convexification method for a 3D coefficient inverse problem with objects buried in a sandbox, a fixed frequency, and a point source moving along an interval of a straight line.
Abstract: Inverse scattering problems of the reconstructions of physical properties of a medium from boundary measurements are substantially challenging ones. This work aims to verify the performance on experimental data of a newly developed convexification method for a 3D coefficient inverse problem for the case of objects buried in a sandbox, a fixed frequency, and the point source moving along an interval of a straight line. Using a special Fourier basis, the method of this work strongly relies on a new derivation of a boundary value problem for a system of coupled quasilinear elliptic equations. This problem, in turn, is solved via the minimization of a Tikhonov-like functional weighted by a Carleman Weight Function. The global convergence of the numerical procedure is established analytically. The numerical verification is performed using experimental data, which are raw backscatter data of the electric field. These data were collected using a microwave scattering facility at The University of North Carolina at Charlotte.

Journal ArticleDOI
TL;DR: A mesh-free method to solve interface problems using a deep learning approach is proposed, where two types of PDEs are considered: an elliptic PDE with a discontinuous and high-contrast coefficient, and a linear elasticity equation with a discontinuous stress tensor.

Journal ArticleDOI
TL;DR: It is stressed that more challenges await in obtaining accurate and numerically stable THC factorization for wavefunction amplitudes as well as for the space spanned by virtual orbitals in large basis sets, and in implementing sparsity-aware THC-RI algorithms.
Abstract: We present a systematically improvable tensor hypercontraction (THC) factorization based on interpolative separable density fitting (ISDF). We illustrate algorithmic details to achieve this within the framework of Becke's atom-centered quadrature grid. A single ISDF parameter cISDF controls the trade-off between accuracy and cost. In particular, cISDF sets the number of interpolation points used in THC, NIP = cISDF × NX with NX being the number of auxiliary basis functions. In conjunction with the resolution-of-the-identity (RI) technique, we develop and investigate the THC-RI algorithms for cubic-scaling exact exchange for Hartree-Fock and range-separated hybrids (e.g., ωB97X-V) and quartic-scaling second- and third-order Moller-Plesset theory (MP2 and MP3). These algorithms were evaluated over the W4-11 thermochemistry (atomization energy) set and A24 noncovalent interaction benchmark set with standard Dunning basis sets (cc-pVDZ, cc-pVTZ, aug-cc-pVDZ, and aug-cc-pVTZ). We demonstrate the convergence of THC-RI algorithms to numerically exact RI results using ISDF points. Based on these, we make recommendations on cISDF for each basis set and method. We also demonstrate the utility of THC-RI exact exchange and MP2 for larger systems such as water clusters and C20. We stress that more challenges await in obtaining accurate and numerically stable THC factorization for wave function amplitudes as well as for the space spanned by virtual orbitals in large basis sets and implementing sparsity-aware THC-RI algorithms.
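For readers unfamiliar with the THC format referenced above, the snippet below only illustrates the factorized form of the two-electron integrals using random placeholder arrays; it does not perform the ISDF fitting or any of the THC-RI algorithms from the paper, and all array names and sizes are arbitrary.

```python
import numpy as np

# Illustrative shapes only: n orbitals, n_ip interpolation points.
n, n_ip = 8, 20
X = np.random.rand(n, n_ip)          # collocation matrix: orbitals at interpolation points
V = np.random.rand(n_ip, n_ip)
V = 0.5 * (V + V.T)                  # symmetric THC core matrix

# THC form of the two-electron integrals:
#   (pq|rs) ~= sum_{PQ} X[p,P] X[q,P] V[P,Q] X[r,Q] X[s,Q]
eri_thc = np.einsum('pP,qP,PQ,rQ,sQ->pqrs', X, X, V, X, X, optimize=True)
print(eri_thc.shape)                 # (8, 8, 8, 8), built from factors of size O(n * n_ip)
```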

Journal ArticleDOI
TL;DR: A method to obtain from medical-image data in discrete form an anatomically realistic NURBS representation of the lumen motion, without sudden, unrealistic changes introduced by the higher-order representation is presented.
Abstract: Patient-specific computational flow analysis of coronary arteries with time-dependent medical-image data can provide valuable information to doctors making treatment decisions. Reliable computational analysis requires a good core method, high-fidelity space and time discretizations, and an anatomically realistic representation of the lumen motion. The space–time variational multiscale (ST-VMS) method has a good track record as a core method. The ST framework, in a general context, provides higher-order accuracy. The VMS feature of the ST-VMS addresses the computational challenges associated with the multiscale nature of the unsteady flow in the artery. The moving-mesh feature of the ST framework enables high-resolution flow computation near the moving fluid–solid interfaces. The ST isogeometric analysis is a superior discretization method. With IGA basis functions in space, it enables more accurate representation of the lumen geometry and increased accuracy in the flow solution. With IGA basis functions in time, it enables a smoother representation of the lumen motion and a mesh motion consistent with that. With cubic NURBS in time, we obtain a continuous acceleration from the lumen-motion representation. Here we focus on making the lumen-motion representation anatomically realistic. We present a method to obtain from medical-image data in discrete form an anatomically realistic NURBS representation of the lumen motion, without sudden, unrealistic changes introduced by the higher-order representation. In the discrete projection from the medical-image data to the NURBS representation, we supplement the least-squares terms with two penalty terms, corresponding to the first and second time derivatives of the control-point trajectories. The penalty terms help us avoid the sudden unrealistic changes. The computation we present demonstrates the effectiveness of the method.
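A minimal sketch of the penalized least-squares projection described above is given below, with simple finite-difference operators standing in for the first- and second-time-derivative penalty terms on the control-point trajectories; the matrix names and penalty weights are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def penalized_projection(A, d, alpha=1e-2, beta=1e-2):
    """Penalized least-squares projection sketch: A collocates the temporal
    basis functions at the image time instants, d holds the image-based
    positions, and the result is the smoothed control-point trajectory.
    alpha and beta weight the first- and second-difference penalties."""
    m = A.shape[1]
    D1 = np.diff(np.eye(m), 1, axis=0)        # first-difference operator
    D2 = np.diff(np.eye(m), 2, axis=0)        # second-difference operator
    lhs = A.T @ A + alpha * D1.T @ D1 + beta * D2.T @ D2
    return np.linalg.solve(lhs, A.T @ d)
```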

Journal ArticleDOI
03 Apr 2020
TL;DR: In this paper, a pixel-aware deep function-mixture network is proposed to learn a complicated mapping function from the RGB image to the HSI counterpart using a deep convolutional neural network.
Abstract: Spectral super-resolution (SSR) aims at generating a hyperspectral image (HSI) from a given RGB image. Recently, a promising direction is to learn a complicated mapping function from the RGB image to the HSI counterpart using a deep convolutional neural network. This essentially involves mapping the RGB context within a size-specific receptive field centered at each pixel to its spectrum in the HSI. The focus thereon is to appropriately determine the receptive field size and establish the mapping function from RGB context to the corresponding spectrum. Due to their differences in category or spatial position, pixels in HSIs often require different-sized receptive fields and distinct mapping functions. However, few efforts have been invested to explicitly exploit this prior. To address this problem, we propose a pixel-aware deep function-mixture network for SSR, which is composed of a new class of modules, termed function-mixture (FM) blocks. Each FM block is equipped with some basis functions, i.e., parallel subnets of different-sized receptive fields. Besides, it incorporates an extra subnet as a mixing function to generate pixel-wise weights, and then linearly mixes the outputs of all basis functions with those generated weights. This enables us to pixel-wisely determine the receptive field size and the mapping function. Moreover, we stack several such FM blocks to further increase the flexibility of the network in learning the pixel-wise mapping. To encourage feature reuse, intermediate features generated by the FM blocks are fused at a late stage, which proves to be effective for boosting the SSR performance. Experimental results on three benchmark HSI datasets demonstrate the superiority of the proposed method.
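The function-mixture idea in the abstract can be sketched in PyTorch as follows: parallel convolutions with different kernel sizes stand in for the basis-function subnets, and a 1x1 convolution produces pixel-wise mixing weights. Channel counts, kernel sizes, and the softmax normalization are assumptions made for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FMBlock(nn.Module):
    """Sketch of a function-mixture block: several parallel 'basis function'
    subnets with different receptive fields, plus a mixing subnet predicting
    pixel-wise weights over them."""
    def __init__(self, channels=64, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        self.basis = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernel_sizes)
        self.mixer = nn.Conv2d(channels, len(kernel_sizes), 1)   # pixel-wise weights

    def forward(self, x):
        w = torch.softmax(self.mixer(x), dim=1)                  # B x K x H x W
        outs = torch.stack([b(x) for b in self.basis], dim=1)    # B x K x C x H x W
        return (w.unsqueeze(2) * outs).sum(dim=1)                # pixel-wise mixture
```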

Journal ArticleDOI
TL;DR: In this article, the authors introduced a novel class of nonlinear optimal control problems generated by dynamical systems involved with variable-order fractional derivatives in the Atangana-Baleanu-Caputo sense.
Abstract: This paper introduces a novel class of nonlinear optimal control problems generated by dynamical systems involved with variable-order fractional derivatives in the Atangana–Baleanu–Caputo sense. A computational method based on the Chebyshev cardinal functions and their operational matrix of variable-order fractional derivative (which is generated for the first time in the present study) is proposed for the numerical solution of this class of problems. The presented method is based on transforming the main problem into a system of nonlinear algebraic equations. To do this, the state and control variables are expanded in terms of the Chebyshev cardinal functions with unknown coefficients; then the cardinal property of these basis functions together with their operational matrix is employed to generate a constrained extremum problem, which is solved by the Lagrange multipliers method. The applicability and accuracy of the established method are investigated through some numerical examples. The reported results confirm that the established scheme is highly accurate in providing acceptable results.

Journal ArticleDOI
TL;DR: The proposed FrPHTs and FrQPHTs outperform the classical polar harmonic transforms, the quaternion polar harmonic transforms, and the existing fractional orthogonal transforms in terms of accuracy and numerical stability, digital image reconstruction, RST invariance, robustness to noise, and computational efficiency.
Abstract: A novel set of fractional orthogonal polar harmonic transforms for gray-scale and color image analysis is presented in this paper. These transforms are divided into two groups. The first group contains fractional polar complex exponential transforms (FrPCETs), fractional polar cosine transforms (FrPCTs), and fractional polar sine transforms (FrPSTs) for gray-scale images. The second group contains the fractional quaternion polar complex exponential transforms (FrQPCETs), fractional quaternion polar cosine transforms (FrQPCTs), and fractional quaternion polar sine transforms (FrQPSTs) for color images. All mathematical formulae for the basis functions, orthogonality relations, and reconstruction forms are derived, and their validity is proved. The required mathematical forms for invariance to rotation, scaling, and translation (RST) are derived. A series of experiments is performed to test the validity of the proposed fractional polar harmonic transforms (FrPHTs) and the fractional quaternion polar harmonic transforms (FrQPHTs). The proposed FrPHTs and FrQPHTs outperform the classical polar harmonic transforms, the quaternion polar harmonic transforms, and the existing fractional orthogonal transforms in terms of accuracy and numerical stability, digital image reconstruction, RST invariance, robustness to noise, and computational efficiency.

Journal ArticleDOI
TL;DR: A fully Bayesian framework for function-on-scalars regression with many predictors is developed, which incorporates shrinkage priors that effectively remove unimportant scalar covariates from the model and reduce sensitivity to the number of (unknown) basis functions.
Abstract: We develop a fully Bayesian framework for function-on-scalars regression with many predictors. The functional data response is modeled nonparametrically using unknown basis functions, which produce...

Journal ArticleDOI
TL;DR: A dynamical low-rank approximation method is developed for the time-dependent radiation transport equation in 1-D and 2-D Cartesian geometries, and it is shown that the low-rank algorithm can obtain high-fidelity results by increasing the number of basis functions while keeping the rank fixed.

Posted Content
TL;DR: This work proposes a graph neural network-based representation learning framework for heterogeneous hypergraphs, an extension of conventional graphs, which can well characterize multiple non-pairwise relations, and shows that relationships beyond pairwise ones are also advantageous in spammer detection.
Abstract: Recently, graph neural networks have been widely used for network embedding because of their prominent performance in pairwise relationship learning. In the real world, a more natural and common situation is the coexistence of pairwise relationships and complex non-pairwise relationships, which is, however, rarely studied. In light of this, we propose a graph neural network-based representation learning framework for heterogeneous hypergraphs, an extension of conventional graphs, which can well characterize multiple non-pairwise relations. Our framework first projects the heterogeneous hypergraph into a series of snapshots, and then we take the Wavelet basis to perform localized hypergraph convolution. Since the Wavelet basis is usually much sparser than the Fourier basis, we develop an efficient polynomial approximation to the basis to replace the time-consuming Laplacian decomposition. Extensive evaluations have been conducted, and the experimental results show the superiority of our method. In addition to standard network embedding evaluation tasks such as node classification, we also apply our method to the task of spammer detection, and the superior performance of our framework shows that relationships beyond pairwise ones are also advantageous in spammer detection.

Journal ArticleDOI
TL;DR: A new Multi-material Isogeometric Topology Optimization (M-ITO) method is presented for optimizing the distribution of multiple materials, where an improved Multi-Material Interpolation model, namely the N-MMI, is developed using Non-Uniform Rational B-splines (NURBS).

Journal ArticleDOI
TL;DR: A low-scaling G0W0 algorithm for molecules using pair atomic density fitting (PADF) and an imaginary time representation of the Green's function is derived, and its implementation in the Slater type orbital (STO)-based Amsterdam density functional (ADF) electronic structure code is described.
Abstract: We derive a low-scaling G0W0 algorithm for molecules using pair atomic density fitting (PADF) and an imaginary time representation of the Green's function and describe its implementation in the Slater type orbital (STO)-based Amsterdam density functional (ADF) electronic structure code. We demonstrate the scalability of our algorithm on a series of water clusters with up to 432 atoms and 7776 basis functions and observe asymptotic quadratic scaling with realistic threshold qualities controlling distance effects and basis sets of triple-ζ (TZ) plus double polarization quality. Also, owing to a very small prefactor, a G0W0 calculation for the largest of these clusters takes only 240 CPU hours with these settings. We assess the accuracy of our algorithm for HOMO and LUMO energies in the GW100 database. With errors of 0.24 eV for HOMO energies on the quadruple-ζ level, our implementation is less accurate than canonical all-electron implementations using the larger def2-QZVP GTO-type basis set. Apart from basis set errors, this is related to the well-known shortcomings of the GW space-time method using analytical continuation techniques as well as to numerical issues of the PADF approach of accurately representing diffuse atomic orbital (AO) products. We speculate that these difficulties might be overcome by using optimized auxiliary fit sets with more diffuse functions of higher angular momenta. Despite these shortcomings, for subsets of medium and large molecules from the GW5000 database, the error of our approach using basis sets of TZ and augmented double-ζ (DZ) quality is decreasing with system size. On the augmented DZ level, we reproduce canonical, complete basis set limit extrapolated reference values with an accuracy of 80 meV on average for a set of 20 large organic molecules. We anticipate our algorithm, in its current form, to be very useful in the study of single-particle properties of large organic systems such as chromophores and acceptor molecules.

Journal ArticleDOI
TL;DR: It is shown that, using the Szász-Mirakyan operator as basis functions and tuning the polynomial coefficients by the adaptive laws calculated in the stability analysis, uniformly ultimately bounded stability can be assured.
Abstract: In the present work, impedance control of robot manipulators is enhanced. The controller is designed by using the Szasz–Mirakyan operator as a universal approximator. Although the Szasz–Mirakyan operator has been extensively used for the approximation of nonlinear functions, the main novelty of this paper is presenting a completely different application of it: in robust or adaptive control, the nonlinear function which should be approximated is unknown, so the Lyapunov theorem must be used to tune its adjustable parameters. In accordance with the universal approximation theorem, the Szasz–Mirakyan operator, which is an extended version of the Bernstein polynomial, is able to approximate uncertainties including un-modeled dynamics and external disturbances. This fact is discussed in detail in this paper. It is shown that, using the Szasz–Mirakyan operator as basis functions and tuning the polynomial coefficients by the adaptive laws calculated in the stability analysis, uniformly ultimately bounded stability can be assured. The transient performance of the controller has also been analyzed. Numerical simulations on an electrically driven manipulator are provided. Simulation results verify that the role of the Szasz–Mirakyan operator in uncertainty compensation and in reducing the tracking error is undeniable.
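For reference, the Szasz–Mirakyan operator used as the approximator above has the standard explicit form $S_n(f; x) = e^{-nx} \sum_{k=0}^{\infty} \frac{(nx)^k}{k!} f(k/n)$, i.e. a linear combination of Poisson-type basis functions with the sampled values $f(k/n)$ as coefficients. The sketch below evaluates a truncated version to show convergence as $n$ grows; the truncation length and test function are arbitrary choices, unrelated to the paper's controller.

```python
import numpy as np
from scipy.stats import poisson

def szasz_mirakyan(f, n, x, k_max=200):
    """Truncated Szasz-Mirakyan operator S_n(f; x); k_max cuts off the sum."""
    k = np.arange(k_max + 1)
    weights = poisson.pmf(k, n * x)          # e^{-nx} (nx)^k / k!
    return np.sum(weights * f(k / n))

# Approximating f(x) = x^2 at x = 0.7 improves as n grows (exact value 0.49).
for n in (5, 20, 80):
    print(n, szasz_mirakyan(lambda t: t**2, n, 0.7))
```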

Journal ArticleDOI
TL;DR: In this paper, a geometrically nonlinear continuum shell element using a NURBS-based isogeometric analysis (IGA) approach is presented for the analysis of functionally graded material (FGM) structures.

Journal ArticleDOI
TL;DR: A new approach is developed for shape optimization of two-dimensional time-harmonic wave propagation (Helmholtz equation) in acoustic problems, and the obtained results are compared against previously published numerical methods based on sensitivity analysis and genetic algorithms to verify the efficiency of the proposed approaches.
Abstract: In this paper, a new approach is developed for applications of shape optimization on the two-dimensional time harmonic wave propagation (Helmholtz equation) in acoustic problems. The particle swarm optimization (PSO) algorithm, a gradient-free optimization method avoiding sensitivity analysis, is coupled with two boundary element methods (BEM) and isogeometric analysis (IGA). The first method is the conventional isogeometric boundary element method (IGABEM). The second method is the eXtended IGABEM (XIBEM) enriched with the partition-of-unity expansion using a set of plane waves. In both methods, the computational domain is parameterized and the unknown solution is approximated using non-uniform rational B-spline (NURBS) basis functions. In the optimization models, the advantage of IGA is the feature of representing the three models, i.e. shape design/analysis/optimization, using a set of control points, which also represent control variables and optimization parameters, making communication between the three models easy and straightforward. A numerical example is considered for the duct problem to validate the presented techniques against the analytical solution. Furthermore, two different applications for various frequencies are studied: the vertical noise barrier and the horn problems. The obtained results are compared against previously published numerical methods using sensitivity analysis and genetic algorithms to verify the efficiency of the proposed approaches.
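A generic sketch of the gradient-free PSO loop of the kind coupled with IGABEM/XIBEM above is given below; in such a setting the objective would wrap an acoustic BEM solve over the NURBS control-point coordinates. All swarm parameters are typical defaults, and the function and variable names are assumptions for illustration, not values from the paper.

```python
import numpy as np

def pso_minimize(obj, lb, ub, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: obj maps a vector of design
    (e.g. control-point) variables to a cost; lb/ub bound the search space."""
    rng = np.random.default_rng(0)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)                       # keep particles in bounds
        f = np.array([obj(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()
```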