
Showing papers on "Basis function published in 2007"


Journal ArticleDOI
TL;DR: A library of Gaussian basis sets is presented that has been specifically optimized for accurate molecular calculations based on density functional theory; the basis sets can be used in first principles molecular dynamics simulations and are well suited for linear scaling calculations.
Abstract: We present a library of Gaussian basis sets that has been specifically optimized to perform accurate molecular calculations based on density functional theory. It targets a wide range of chemical environments, including the gas phase, interfaces, and the condensed phase. These generally contracted basis sets, which include diffuse primitives, are obtained by minimizing a linear combination of the total energy and the condition number of the overlap matrix for a set of molecules with respect to the exponents and contraction coefficients of the full basis. Typically, for a given accuracy in the total energy, significantly fewer basis functions are needed in this scheme than in the usual split valence scheme, leading to a speedup for systems where the computational cost is dominated by diagonalization. More importantly, binding energies of hydrogen bonded complexes are of similar quality as the ones obtained with augmented basis sets, i.e., have a small (down to 0.2 kcal/mol) basis set superposition error, and the monomers have dipoles within 0.1 D of the basis set limit. However, contrary to typical augmented basis sets, there are no near linear dependencies in the basis, so that the overlap matrix is always well conditioned, also in the condensed phase. The basis can therefore be used in first principles molecular dynamics simulations and is well suited for linear scaling calculations.
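
As a rough illustration of the optimization target described above (not the authors' code), the sketch below combines a placeholder energy term with a penalty on the logarithm of the overlap-matrix condition number for same-center s-type Gaussian primitives; `mock_energy`, the penalty weight, and the exponent values are hypothetical stand-ins.

```python
# Sketch only: couples a placeholder "energy" with a penalty on the overlap
# condition number, mirroring the form of the objective described above.
import numpy as np

def overlap_matrix(exponents):
    """Closed-form overlap of normalized s-type Gaussians sharing one center."""
    a = np.asarray(exponents, dtype=float)[:, None]
    b = np.asarray(exponents, dtype=float)[None, :]
    return (2.0 * np.sqrt(a * b) / (a + b)) ** 1.5

def mock_energy(exponents):
    # Hypothetical stand-in: a real implementation would call a DFT code here.
    return -np.sum(1.0 / np.asarray(exponents, dtype=float))

def objective(exponents, weight=0.01):
    S = overlap_matrix(exponents)
    return mock_energy(exponents) + weight * np.log(np.linalg.cond(S))

# Nearly degenerate exponents inflate the condition number (and the penalty),
# which is exactly what the optimization is designed to avoid.
print(objective([0.1, 0.5, 2.0, 8.0]))
print(objective([0.1, 0.102, 2.0, 8.0]))
```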

2,700 citations


Journal ArticleDOI
TL;DR: In this article, the effects of the smoothness of basis functions on solution accuracy within the isogeometric analysis framework are investigated. The authors conclude that the potential of the k-method is high, but that smoothness is an issue that is not well understood, owing to the historical dominance of C0-continuous finite elements.

606 citations


Journal ArticleDOI
TL;DR: In this paper, a NURBS-based variational formulation for the Cahn-Hilliard equation is tested on two-dimensional and three-dimensional problems, and steady-state solutions are presented in two dimensions and, for the first time, in three dimensions.

604 citations


Journal ArticleDOI
TL;DR: A more efficient representation is introduced here as an orthogonal set of basis functions that localizes the spectrum and retains the advantageous phase properties of the S-transform, and can perform localized cross spectral analysis to measure phase shifts between each of multiple components of two time series.

363 citations


Journal Article
TL;DR: A novel spectral framework for solving Markov decision processes (MDPs) by jointly learning representations and optimal policies is introduced, and several strategies for scaling the proposed framework to large MDPs are outlined.
Abstract: This paper introduces a novel spectral framework for solving Markov decision processes (MDPs) by jointly learning representations and optimal policies. The major components of the framework described in this paper include: (i) a general scheme for constructing representations or basis functions by diagonalizing symmetric diffusion operators; (ii) a specific instantiation of this approach in which global basis functions called proto-value functions (PVFs) are formed using the eigenvectors of the graph Laplacian on an undirected graph formed from state transitions induced by the MDP; (iii) a three-phased procedure called representation policy iteration (RPI), comprising a sample collection phase, a representation learning phase that constructs basis functions from samples, and a final parameter estimation phase that determines an (approximately) optimal policy within the (linear) subspace spanned by the (current) basis functions; (iv) a specific instantiation of the RPI framework using least-squares policy iteration (LSPI) as the parameter estimation method; (v) several strategies for scaling the proposed approach to large discrete and continuous state spaces, including the Nystrom extension for out-of-sample interpolation of eigenfunctions, and the use of Kronecker sum factorization to construct compact eigenfunctions in product spaces such as factored MDPs; and (vi) a series of illustrative discrete and continuous control tasks, which both illustrate the concepts and provide a benchmark for evaluating the proposed approach. Many challenges remain to be addressed in scaling the proposed framework to large MDPs, and several elaborations of the proposed framework are briefly summarized at the end.
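
A minimal sketch of component (ii), proto-value functions, assuming a simple deterministic chain MDP; the transition list and the number of retained eigenvectors are illustrative choices, not the paper's benchmarks.

```python
import numpy as np

# Build an undirected state graph from observed transitions of a 20-state chain.
n_states = 20
transitions = [(s, s + 1) for s in range(n_states - 1)]

W = np.zeros((n_states, n_states))
for s, s_next in transitions:
    W[s, s_next] = W[s_next, s] = 1.0      # symmetrize: the graph is undirected

D = np.diag(W.sum(axis=1))
L = D - W                                   # combinatorial graph Laplacian

# The k smoothest eigenvectors of L serve as proto-value functions.
eigvals, eigvecs = np.linalg.eigh(L)
k = 5
pvfs = eigvecs[:, :k]

# A value function is then approximated as V ≈ pvfs @ w, with w estimated by
# LSPI or another parameter-estimation step (not shown here).
print(pvfs.shape)
```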

336 citations


Journal ArticleDOI
TL;DR: This paper devises an efficient and isotropy-preserving construction algorithm, namely the lattice-point algorithm, to generate realizations of materials from their two-point correlation functions based on the Yeong-Torquato technique, and proves that S2(r) alone cannot completely specify a two-phase heterogeneous material.
Abstract: Heterogeneous materials abound in nature and man-made situations. Examples include porous media, biological materials, and composite materials. Diverse and interesting properties exhibited by these materials result from their complex microstructures, which also make it difficult to model the materials. Yeong and Torquato [Phys. Rev. E 57, 495 (1998)] introduced a stochastic optimization technique that enables one to generate realizations of heterogeneous materials from a prescribed set of correlation functions. In this first part of a series of two papers, we collect the known necessary conditions on the standard two-point correlation function S2(r) and formulate a conjecture. In particular, we argue that given a complete two-point correlation function space, S2(r) of any statistically homogeneous material can be expressed through a map on a selected set of bases of the function space. We provide examples of realizable two-point correlation functions and suggest a set of analytical basis functions. We also discuss an exact mathematical formulation of the (re)construction problem and prove that S2(r) cannot completely specify a two-phase heterogeneous material alone. Moreover, we devise an efficient and isotropy-preserving construction algorithm, namely, the lattice-point algorithm to generate realizations of materials from their two-point correlation functions based on the Yeong-Torquato technique. Subsequent analysis can be performed on the generated images to obtain desired macroscopic properties. These developments are integrated here into a general scheme that enables one to model and categorize heterogeneous materials via two-point correlation functions. We will mainly focus on basic principles in this paper. The algorithmic details and applications of the general scheme are given in the second part of this series of two papers.
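
The sketch below is a toy Yeong-Torquato-style reconstruction, not the lattice-point algorithm itself: pixels of a random binary image are swapped under simulated annealing until its row-wise S2(r) matches that of a striped reference image. The image size, cooling schedule, and row-only sampling of S2 are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rmax = 32, 16

def s2_rows(img, rmax):
    """Two-point correlation of phase 1 along rows, periodic boundaries."""
    return np.array([np.mean(img * np.roll(img, r, axis=1)) for r in range(rmax)])

# Target statistics from a striped reference microstructure.
target_img = np.tile((np.arange(n) % 8 < 3).astype(float), (n, 1))
target = s2_rows(target_img, rmax)

# Start from a random image with the same volume fraction and anneal.
img = (rng.random((n, n)) < target_img.mean()).astype(float)
energy = np.sum((s2_rows(img, rmax) - target) ** 2)
T = 1e-4
for step in range(10000):
    i1, i0 = tuple(rng.integers(0, n, 2)), tuple(rng.integers(0, n, 2))
    if img[i1] == img[i0]:
        continue                                   # swap must exchange phases
    img[i1], img[i0] = img[i0], img[i1]
    new_energy = np.sum((s2_rows(img, rmax) - target) ** 2)
    if new_energy < energy or rng.random() < np.exp((energy - new_energy) / T):
        energy = new_energy                        # accept the swap
    else:
        img[i1], img[i0] = img[i0], img[i1]        # reject: undo the swap
    T *= 0.999

print(f"final S2 mismatch: {energy:.3e}")
```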

322 citations


Journal ArticleDOI
TL;DR: An application of the reduced basis method for Stokes equations in domains with affine parametric dependence is presented; the approach is ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.

296 citations


Journal ArticleDOI
TL;DR: The proposed LTS algorithm for ADER-DG is very general and does not need any temporal synchronization between the elements; it is computationally much more efficient for problems with strongly varying element size or material parameters, since it considerably reduces the total number of element updates.
Abstract: This article describes the extension of the arbitrary high-order Discontinuous Galerkin (ADER-DG) method to treat locally varying polynomial degrees of the basis functions, so-called p-adaptivity, as well as locally varying time steps that may be different from one element to another. The p-adaptive version of the scheme is useful in complex 3-D models with small-scale features which have to be meshed with reasonably small elements to capture the necessary geometrical details of interest. Using a constant high polynomial degree of the basis functions in the whole computational domain can lead to an unreasonably high CPU effort, since good spatial resolution at the surface may already be obtained by the fine mesh. Therefore, it can be more appropriate in some cases to use a lower order method in the small elements to reduce the CPU effort without losing much accuracy. To further increase computational efficiency, we present a new local time stepping (LTS) algorithm. For usual explicit time stepping schemes, the element with the smallest time step resulting from the stability criterion of the method will dictate its time step to all the other elements of the computational domain. In contrast, by using local time stepping, each element can use its optimal time step given by the local stability condition. Our proposed LTS algorithm for ADER-DG is very general and does not need any temporal synchronization between the elements. Due to the ADER approach, accurate time interpolation is automatically provided at the element interfaces, such that the computational overhead is very small and the method maintains the uniform high order of accuracy in space and time as in the usual ADER-DG schemes with a globally constant time step. However, the LTS ADER-DG method is computationally much more efficient for problems with strongly varying element size or material parameters, since it allows the total number of element updates to be reduced considerably. This holds especially for unstructured tetrahedral meshes that contain strongly degenerate elements, so-called slivers. We show numerical convergence results and CPU times for LTS ADER-DG schemes up to sixth order in space and time on irregular tetrahedral meshes containing elements of very different size and also on tetrahedral meshes containing slivers. Further validation of the algorithm is provided by results obtained for the layer over half-space (LOH.1) benchmark problem proposed by the Pacific Earthquake Engineering Research Center. Finally, we present a realistic application on earthquake modelling and ground motion prediction for the alpine valley of Grenoble.

273 citations


Journal ArticleDOI
TL;DR: An innovative procedure is presented that allows the method of moments (MoM) analysis of large and complex antenna and scattering problems at a reduced memory and CPU cost, bounded within the resources provided by a standard (32 bit) personal computer.
Abstract: An innovative procedure is presented that allows the method of moments (MoM) analysis of large and complex antenna and scattering problems at a reduced memory and CPU cost, bounded within the resources provided by a standard (32 bit) personal computer. The method is based on the separation of the overall geometry into smaller portions, called blocks, and on the degrees of freedom of the field. The blocks need not be electrically unconnected. On each block, basis functions are generated with support on the entire block; these are subsequently used as basis functions for the analysis of the complete structure. Only a small number of these functions is required to obtain an accurate solution; therefore, the overall number of unknowns is drastically reduced, with consequent impact on storage and solution time. These entire-domain basis functions are called synthetic functions; they are generated from the solution of the electromagnetic problem for the block in isolation, under excitation by suitably defined sources. The synthetic functions are obtained from the responses to all sources via a procedure based on the singular-value decomposition. Because of the strong reduction of the global number of unknowns, one can store the MoM matrix and afford a direct solution. The method is kernel-free, and can be implemented on existing MoM codes.
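
A schematic of the synthetic-function generation step (not the MoM code itself): the isolated block is solved for many excitations and the dominant left singular vectors of the response matrix are kept as entire-block basis functions. The block "impedance matrix" and excitations below are random stand-ins, with low-rank structure injected so that the truncation is visible.

```python
import numpy as np

rng = np.random.default_rng(1)
n_block, n_sources = 200, 60

# Hypothetical block "impedance" matrix (random stand-in for the MoM matrix).
Z_block = rng.standard_normal((n_block, n_block)) + 1j * rng.standard_normal((n_block, n_block))

# Excitations with synthetic low-rank structure, mimicking the limited number
# of degrees of freedom of the field radiated onto the block.
excitations = rng.standard_normal((n_block, 8)) @ rng.standard_normal((8, n_sources))

responses = np.linalg.solve(Z_block, excitations)   # block solved in isolation

U, s, _ = np.linalg.svd(responses, full_matrices=False)
keep = s / s[0] > 1e-6                               # truncate by singular value
synthetic_functions = U[:, keep]

print(f"{synthetic_functions.shape[1]} synthetic functions from {n_sources} source responses")
```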

267 citations


Journal ArticleDOI
TL;DR: A new method is introduced, RBF-QR, which entirely eliminates such ill-conditioning in the special case when the data points are distributed over the surface of a sphere, and it allows the RBF shape parameter to be optimized without the limitations imposed by stability concerns.
Abstract: When radial basis functions (RBFs) are made increasingly flat, the interpolation error typically decreases steadily until some point when Runge-type oscillations either halt or reverse this trend. Because the most obvious method to calculate an RBF interpolant becomes a numerically unstable algorithm for a stable problem in the case of near-flat basis functions, there will typically also be a separate point at which disastrous ill-conditioning enters. We introduce here a new method, RBF-QR, which entirely eliminates such ill-conditioning, and we apply it in the special case when the data points are distributed over the surface of a sphere. This algorithm works even for thousands of node points, and it allows the RBF shape parameter to be optimized without the limitations imposed by stability concerns. Since interpolation in the flat RBF limit on a sphere is found to coincide with spherical harmonics interpolation, new insights are gained as to why the RBF approach (with nonflat basis functions) often is the more accurate of the two methods.
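
A short sketch of the ill-conditioning that motivates RBF-QR (the RBF-QR algorithm itself is not reproduced here): with the direct approach, the Gaussian RBF interpolation matrix on scattered sphere nodes becomes catastrophically ill-conditioned as the shape parameter tends to zero, even though the interpolation problem itself remains well posed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scattered nodes on the unit sphere.
x = rng.standard_normal((40, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)
r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)   # pairwise distances

# Direct ("RBF-Direct") interpolation matrix for Gaussian RBFs.
for eps in (2.0, 0.5, 0.1, 0.02):
    A = np.exp(-(eps * r) ** 2)
    print(f"eps = {eps:5.2f}   cond(A) = {np.linalg.cond(A):.2e}")
```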

245 citations


Journal ArticleDOI
TL;DR: The explicitly correlated coupled-cluster method CCSD(T)(R12) is extended to include F12 geminal basis functions that decay exponentially with the interelectronic distance and reproduce the form of the average Coulomb hole more accurately than linear r12.
Abstract: The explicitly correlated coupled-cluster method CCSD(T)(R12) is extended to include F12 geminal basis functions that decay exponentially with the interelectronic distance and reproduce the form of the average Coulomb hole more accurately than linear r12. Equations derived using the Ansatz 2 strong orthogonality projector are presented. The convergence of the correlation energy with the orbital basis set for the new CCSD(T)(F12) method is studied and found to be rapid: 98% of the basis set limit correlation energy is typically recovered using triple-ζ orbital basis sets. The performance for reaction enthalpies is assessed via a test set of 15 reactions involving 23 molecules. The title statement is found to hold equally true for total and relative correlation energies.

Journal ArticleDOI
TL;DR: A class of new finite-element methods, called immersed-interface finite-element methods, is developed to solve elliptic interface problems with nonhomogeneous jump conditions, providing fast simulation of interface dynamics that does not require remeshing.
Abstract: In this work, a class of new finite-element methods, called immersed-interface finite-element methods, is developed to solve elliptic interface problems with nonhomogeneous jump conditions. Simple non-body-fitted meshes are used. A single function that satisfies the same nonhomogeneous jump conditions is constructed using a level-set representation of the interface. With such a function, the discontinuities across the interface in the solution and flux are removed, and an equivalent elliptic interface problem with homogeneous jump conditions is formulated. Special finite-element basis functions are constructed for nodal points near the interface to satisfy the homogeneous jump conditions. Error analysis and numerical tests are presented to demonstrate that such methods have an optimal convergence rate. These methods are designed as an efficient component of the finite-element level-set methodology for fast simulation of interface dynamics that does not require remeshing.

Journal ArticleDOI
TL;DR: In this article, a divergence-free displacement field is computed from a scalar potential by means of a "stream-function" formulation, such that the displacement field is automatically locking-free in the presence of the incompressibility constraint.

Journal ArticleDOI
TL;DR: In this article, a 4DVAR approach based on proper orthogonal decomposition (POD) is proposed to dramatically reduce both the dimension of the control space and the size of the dynamical model.
Abstract: Four-dimensional variational data assimilation (4DVAR) is a powerful tool for data assimilation in meteorology and oceanography. However, a major hurdle in the use of 4DVAR for realistic general circulation models is the dimension of the control space (generally equal to the size of the model state variable and typically of order 10^7-10^8) and the high computational cost of evaluating the cost function and its gradient, which requires integrating the model and its adjoint model. In this paper, we propose a 4DVAR approach based on proper orthogonal decomposition (POD). POD is an efficient way to carry out reduced order modelling by identifying the few most energetic modes in a sequence of snapshots from a time-dependent system, and providing a means of obtaining a low-dimensional description of the system's dynamics. The POD-based 4DVAR not only reduces the dimension of the control space, but also reduces the size of the dynamical model, both in dramatic ways. The novelty of our approach also consists in the inclusion of adaptability, applied when, in the process of iterative control, the new control variables depart significantly from the ones on which the POD model was based. In addition, these approaches also allow the adjoint model to be constructed conveniently. The proposed POD-based 4DVAR methods are tested and demonstrated using a reduced gravity wave ocean model in a Pacific domain in the context of identical twin data assimilation experiments. A comparison with data assimilation experiments in the full model space shows that, with an appropriate selection of the basis functions, the optimization in the POD space is able to provide accurate results at a reduced computational cost. The POD-based 4DVAR methods have the potential to approximate the performance of full order 4DVAR with less than 1/100 of the computer time of the full order 4DVAR. The HFTN (Hessian-free truncated-Newton) algorithm benefits most from the order reduction (see (Int. J. Numer. Meth. Fluids, in press)) since computational savings are achieved both in the outer and inner iterations of this method.
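
A minimal method-of-snapshots POD sketch with synthetic data: the leading left singular vectors of the mean-removed snapshot matrix form the low-dimensional basis onto which the 4DVAR control variables can be projected; the snapshot dimensions and the 99% energy cutoff are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_state, n_snapshots = 10_000, 50

# Synthetic snapshots: a low-dimensional signal plus small noise.
X = rng.standard_normal((n_state, 3)) @ rng.standard_normal((3, n_snapshots))
X += 0.01 * rng.standard_normal((n_state, n_snapshots))

X_mean = X.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(X - X_mean, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1       # modes capturing 99% of the energy
pod_basis = U[:, :k]

# Reduced-order control: state ≈ X_mean[:, 0] + pod_basis @ alpha, so the 4DVAR
# minimization runs over the k-dimensional alpha instead of the full state.
print(f"retained {k} POD modes from {n_snapshots} snapshots")
```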

Journal ArticleDOI
TL;DR: Two fast sparse approximation schemes for the least squares support vector machine (LS-SVM) are presented to overcome the limitation that LS-SVM is not applicable to large data sets and to improve test speed.
Abstract: In this paper, we present two fast sparse approximation schemes for the least squares support vector machine (LS-SVM), named FSALS-SVM and PFSALS-SVM, to overcome the limitation that LS-SVM is not applicable to large data sets and to improve test speed. FSALS-SVM iteratively builds the decision function by adding one basis function from a kernel-based dictionary at a time. The process is terminated by using a flexible and stable epsilon-insensitive stopping criterion. A probabilistic speedup scheme is employed to further improve the speed of FSALS-SVM, and the resulting classifier is named PFSALS-SVM. Our algorithms have two compelling features: low complexity and sparse solutions. Experiments on benchmark data sets show that our algorithms obtain sparse classifiers at a rather low cost without sacrificing generalization performance.
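
A greedy forward-selection sketch in the spirit of FSALS-SVM, not the exact algorithm: kernel columns centred on training points act as the dictionary, the column most correlated with the current residual is added at each step, and the least-squares fit is recomputed until the residual improvement falls below a tolerance. The data, kernel width, and stopping tolerance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X, X)                     # kernel-based dictionary
selected, residual, tol = [], y.copy(), 1e-3
for _ in range(50):
    scores = np.abs(K.T @ residual)
    scores[selected] = -np.inf           # do not pick the same centre twice
    j = int(np.argmax(scores))
    selected.append(j)
    coef, *_ = np.linalg.lstsq(K[:, selected], y, rcond=None)
    new_residual = y - K[:, selected] @ coef
    if np.linalg.norm(residual) - np.linalg.norm(new_residual) < tol:
        break
    residual = new_residual

print(f"sparse model uses {len(selected)} of {len(X)} candidate basis functions")
```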

Journal ArticleDOI
TL;DR: The WFS representation is a data smoothing technique that provides the explicit smooth functional estimation of unknown cortical boundary as a linear combination of basis functions and is applied in quantifying the amount of gray matter in a group of high functioning autistic subjects.
Abstract: We present a novel weighted Fourier series (WFS) representation for cortical surfaces. The WFS representation is a data smoothing technique that provides the explicit smooth functional estimation of an unknown cortical boundary as a linear combination of basis functions. The basic properties of the representation are investigated in connection with a self-adjoint partial differential equation and the traditional spherical harmonic (SPHARM) representation. To reduce steep computational requirements, a new iterative residual fitting (IRF) algorithm is developed. Its computational and numerical implementation issues are discussed in detail. The computer codes are also available at http://www.stat.wisc.edu/~mchung/softwares/weighted-SPHARM/weighted-SPHARM.html . As an illustration, the WFS is applied in quantifying the amount of gray matter in a group of high functioning autistic subjects. Within the WFS framework, cortical thickness and gray matter density are computed and compared.

Journal ArticleDOI
TL;DR: A sharper error estimate than previously obtained is presented, and a formula for a finite, optimal c value that minimizes the solution error for a given grid size is obtained.
Abstract: The multiquadric (MQ) collocation method is highly efficient for solving partial differential equations due to its exponential error convergence rate. A special feature of the method is that the error can be reduced by increasing the value of the shape constant c in the MQ basis function, without refining the grid. It is believed that, in a numerical solution without roundoff error, infinite accuracy can be achieved by letting c → ∞. Using arbitrary precision computation, this paper tests the above conjecture. A sharper error estimate than previously obtained is presented. A formula for a finite, optimal c value that minimizes the solution error for a given grid size is obtained. Using residual errors, the constants in the error estimate and the optimal c formula can be obtained. These results are supported by numerical examples.
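
A small sketch of MQ collocation for a 1D model problem, u''(x) = -sin(x) on [0, π] with homogeneous Dirichlet boundary conditions (exact solution sin x), showing how the error varies with the shape constant c on a fixed grid; the grid size and the values of c tried are arbitrary choices.

```python
import numpy as np

n = 25
x = np.linspace(0.0, np.pi, n)
dx = x[:, None] - x[None, :]                    # x_i - x_j

def max_error(c):
    phi = np.sqrt(dx**2 + c**2)                 # MQ basis evaluated at the nodes
    A = c**2 / phi**3                           # phi''(x_i): rows enforce the PDE ...
    A[0, :], A[-1, :] = phi[0, :], phi[-1, :]   # ... boundary rows enforce u = 0
    rhs = -np.sin(x)
    rhs[0] = rhs[-1] = 0.0
    coef = np.linalg.solve(A, rhs)
    u = phi @ coef                              # collocation solution at the nodes
    return np.max(np.abs(u - np.sin(x)))

for c in (0.5, 1.0, 2.0, 4.0):
    print(f"c = {c:4.1f}   max error = {max_error(c):.2e}")
```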

Journal ArticleDOI
TL;DR: The estimates prove very precisely the previously made empirical observation that the use of low-energy coarse spaces can lead to preconditioners that are robust even for large coefficient variation inside domains, where the classical method fails to be robust.
Abstract: We consider additive Schwarz domain decomposition preconditioners for piecewise linear finite element approximations of elliptic PDEs with highly variable coefficients. In contrast to standard analyses, we do not assume that the coefficients can be resolved by a coarse mesh. This situation arises often in practice, for example in the computation of flows in heterogeneous porous media, in both the deterministic and (Monte–Carlo simulated) stochastic cases. We consider preconditioners which combine local solves on general overlapping subdomains together with a global solve on a general coarse space of functions on a coarse grid. We perform a new analysis of the preconditioned matrix, which shows rather explicitly how its condition number depends on the variable coefficient in the PDE as well as on the coarse mesh and overlap parameters. The classical estimates for this preconditioner with linear coarsening guarantee good conditioning only when the coefficient varies mildly inside the coarse grid elements. By contrast, our new results show that, with a good choice of subdomains and coarse space basis functions, the preconditioner can still be robust even for large coefficient variation inside domains, when the classical method fails to be robust. In particular our estimates prove very precisely the previously made empirical observation that the use of low-energy coarse spaces can lead to robust preconditioners. We go on to consider coarse spaces constructed from multiscale finite elements and prove that preconditioners using this type of coarsening lead to robust preconditioners for a variety of binary (i.e., two-scale) media model problems. Moreover numerical experiments show that the new preconditioner has greatly improved performance over standard preconditioners even in the random coefficient case. We show also how the analysis extends in a straightforward way to multiplicative versions of the Schwarz method.

Proceedings ArticleDOI
20 Jun 2007
TL;DR: A simple, Bellman-error-based approach to generating basis functions for value-function approximation is analyzed and it is shown that it generates orthogonal basis functions that provably tighten approximation error bounds.
Abstract: We analyze a simple, Bellman-error-based approach to generating basis functions for value-function approximation. We show that it generates orthogonal basis functions that provably tighten approximation error bounds. We also illustrate the use of this approach in the presence of noise on some sample problems.
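
A sketch of the Bellman-error idea for policy evaluation on a known random Markov chain (not the paper's experimental setup): starting from a constant feature, the Bellman residual of the current least-squares fixed-point approximation is repeatedly appended as the next basis function.

```python
import numpy as np

rng = np.random.default_rng(5)
n, gamma = 30, 0.9
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)   # random Markov chain
R = rng.random(n)
V_true = np.linalg.solve(np.eye(n) - gamma * P, R)

Phi = np.ones((n, 1))                                        # start with a constant feature
for k in range(6):
    # least-squares fixed-point weights for the current basis
    w = np.linalg.solve(Phi.T @ (Phi - gamma * P @ Phi), Phi.T @ R)
    V = Phi @ w
    bellman_error = R + gamma * P @ V - V
    print(f"{Phi.shape[1]} bases: ||V - V_true|| = {np.linalg.norm(V - V_true):.3e}")
    Phi = np.column_stack([Phi, bellman_error / np.linalg.norm(bellman_error)])
```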

Journal ArticleDOI
TL;DR: This paper presents a computationally viable implementation of the steered response power (SRP) source localization method and shows that by only including a few basis functions per microphone pair, the SRP map is quite accurately represented.
Abstract: The process of locating an acoustic source given measurements of the sound field at multiple microphones is of significant interest as both a classical array signal processing problem and, more recently, as a solution to the problems of automatic camera steering, teleconferencing, hands-free processing, and others. Despite the proven efficacy of steered-beamformer approaches to localization in harsh conditions, their practical application to real-time settings is hindered by undesirably high computational demands. This paper presents a computationally viable implementation of the steered response power (SRP) source localization method. The conventional approach is generalized by introducing an inverse mapping that maps relative delays to sets of candidate locations. Instead of traversing the three-dimensional location space, the one-dimensional relative delay space is traversed; at each lag, all locations which are inverse mapped by that delay are updated. This means that the computation of the SRP map is no longer performed sequentially in space. Most importantly, by subsetting the space of relative delays to only those that achieve a high level of cross-correlation, the required number of algorithm updates is drastically reduced without compromising localization accuracy. The generalization is scalable in the sense that the level of subsetting is an algorithm parameter. It is shown that this generalization may be viewed as a spatial decomposition of the SRP energy map into weighted basis functions; in this context, it becomes evident that the full SRP search considers all basis functions (even the ones with very low weighting). On the other hand, it is shown that by only including a few basis functions per microphone pair, the SRP map is quite accurately represented. As a result, in a real environment, the proposed generalization achieves virtually the same anomaly rate as the full SRP search while performing only 10% of the algorithm updates of the full search.

Proceedings Article
19 Jul 2007
TL;DR: Shift-invariant sparse coding (SISC) is an extension of sparse coding that reconstructs a (usually time-series) input using all of the basis functions in all possible shifts.
Abstract: Sparse coding is an unsupervised learning algorithm that learns a succinct high-level representation of the inputs given only unlabeled data; it represents each input as a sparse linear combination of a set of basis functions. Originally applied to modeling the human visual cortex, sparse coding has also been shown to be useful for self-taught learning, in which the goal is to solve a supervised classification task given access to additional unlabeled data drawn from different classes than those in the supervised learning problem. Shift-invariant sparse coding (SISC) is an extension of sparse coding which reconstructs a (usually time-series) input using all of the basis functions in all possible shifts. In this paper, we present an efficient algorithm for learning SISC bases. Our method is based on iteratively solving two large convex optimization problems. The first, which computes the linear coefficients, is an L1-regularized linear least squares problem with potentially hundreds of thousands of variables; existing methods typically use a heuristic to select a small subset of the variables to optimize, but we present a way to efficiently compute the exact solution. The second, which solves for the bases, is a constrained linear least squares problem. By optimizing over complex-valued variables in the Fourier domain, we reduce the coupling between the different variables, allowing the problem to be solved efficiently. We show that SISC's learned high-level representations of speech and music provide useful features for classification tasks within those domains. When applied to classification, under certain conditions the learned features outperform state-of-the-art spectral and cepstral features.
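
A small sketch of the coefficient step under the shift-invariant model, assuming fixed bases and using plain ISTA rather than the paper's exact solver: the signal is modelled as a sum of circular convolutions of bases with sparse coefficient vectors, and all convolutions and correlations are carried out in the Fourier domain, where the coupling between variables is reduced.

```python
import numpy as np

rng = np.random.default_rng(6)
T, n_bases, lam = 256, 4, 0.1

bases = rng.standard_normal((n_bases, 16))
D = np.fft.rfft(bases, n=T, axis=1)                   # bases zero-padded to length T

# Synthesize a signal from a few shifted copies of the bases.
true_a = np.zeros((n_bases, T))
true_a[rng.integers(0, n_bases, 6), rng.integers(0, T, 6)] = 1.0
x = np.fft.irfft((D * np.fft.rfft(true_a, axis=1)).sum(axis=0), n=T)

a = np.zeros((n_bases, T))
L = np.max(np.sum(np.abs(D) ** 2, axis=0))            # Lipschitz constant of the gradient
for _ in range(300):
    recon = np.fft.irfft((D * np.fft.rfft(a, axis=1)).sum(axis=0), n=T)
    grad = -np.fft.irfft(np.conj(D) * np.fft.rfft(x - recon), n=T, axis=1)
    a = a - grad / L                                  # gradient step
    a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold (L1)

recon = np.fft.irfft((D * np.fft.rfft(a, axis=1)).sum(axis=0), n=T)
print(f"relative reconstruction error: {np.linalg.norm(x - recon) / np.linalg.norm(x):.3f}")
```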

Proceedings ArticleDOI
29 Jul 2007
TL;DR: An extension to Lagrangian finite element methods to allow for large plastic deformations of solid materials and an enhanced plasticity model that preserves volume and includes creep and work hardening/softening are presented.
Abstract: We present an extension to Lagrangian finite element methods to allow for large plastic deformations of solid materials. These behaviors are seen in such everyday materials as shampoo, dough, and clay as well as in fantastic gooey and blobby creatures in special effects scenes. To account for plastic deformation, we explicitly update the linear basis functions defined over the finite elements during each simulation step. When these updates cause the basis functions to become ill-conditioned, we remesh the simulation domain to produce a new high-quality finite-element mesh, taking care to preserve the original boundary. We also introduce an enhanced plasticity model that preserves volume and includes creep and work hardening/softening. We demonstrate our approach with simulations of synthetic objects that squish, dent, and flow. To validate our methods, we compare simulation results to videos of real materials.

Journal ArticleDOI
01 Sep 2007
TL;DR: This work presents a method for animating deformable objects using a novel finite element discretization on convex polyhedra, and uses an elasticity model based on Cauchy strain and stiffness warping for fast and robust computations.
Abstract: We present a method for animating deformable objects using a novel finite element discretization on convex polyhedra. Our finite element approach draws upon recently introduced 3D mean value coordinates to define smooth interpolants within the elements. The mathematical properties of our basis functions guarantee convergence. Our method is a natural extension to linear interpolants on tetrahedra: for tetrahedral elements, the methods are identical. For fast and robust computations, we use an elasticity model based on Cauchy strain and stiffness warping. This more flexible discretization is particularly useful for simulations that involve topological changes, such as cutting or fracture. Since splitting convex elements along a plane produces convex elements, remeshing or subdivision schemes used in simulations based on tetrahedra are not necessary, leading to fewer elements after such operations. We propose various operators for cutting the polyhedral discretization. Our method can handle arbitrary cut trajectories, and there is no limit on how often elements can be split.

Journal ArticleDOI
TL;DR: This paper presents a new formulation for recovering the fiber tract geometry within a voxel from diffusion weighted magnetic resonance imaging (MRI) data, in the presence of single or multiple neuronal fibers, and defines a discrete set of diffusion basis functions.
Abstract: In this paper, we present a new formulation for recovering the fiber tract geometry within a voxel from diffusion weighted magnetic resonance imaging (MRI) data, in the presence of single or multiple neuronal fibers. To this end, we define a discrete set of diffusion basis functions. The intravoxel information is recovered at voxels containing fiber crossings or bifurcations via the use of a linear combination of the above mentioned basis functions. Then, the parametric representation of the intravoxel fiber geometry is a discrete mixture of Gaussians. Our synthetic experiments depict several advantages by using this discrete schema: the approach uses a small number of diffusion weighted images (23) and relatively small b values (1250 s/mm2 ), i.e., the intravoxel information can be inferred at a fraction of the acquisition time required for datasets involving a large number of diffusion gradient orientations. Moreover our method is robust in the presence of more than two fibers within a voxel, improving the state-of-the-art of such parametric models. We present two algorithmic solutions to our formulation: by solving a linear program or by minimizing a quadratic cost function (both with non-negativity constraints). Such minimizations are efficiently achieved with standard iterative deterministic algorithms. Finally, we present results of applying the algorithms to synthetic as well as real data.
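
A schematic of the diffusion-basis-function idea rather than the paper's exact discretization: the DW-MRI signal is modelled as a non-negative combination of Gaussian basis functions oriented along a discrete set of directions, and the mixture weights are recovered with non-negative least squares. The diffusivities, the number of basis orientations, and the synthetic two-fiber signal are assumptions for illustration; the b value and the 23 gradient directions follow the abstract.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
b = 1250.0                                    # s/mm^2, as in the paper
lam_par, lam_perp = 1.7e-3, 0.3e-3            # assumed axial/radial diffusivities (mm^2/s)

def tensor(direction):
    d = direction / np.linalg.norm(direction)
    return lam_perp * np.eye(3) + (lam_par - lam_perp) * np.outer(d, d)

gradients = rng.standard_normal((23, 3))      # 23 DW directions, as in the paper
gradients /= np.linalg.norm(gradients, axis=1, keepdims=True)
basis_dirs = rng.standard_normal((40, 3))     # discrete basis orientations (assumed)
basis_dirs /= np.linalg.norm(basis_dirs, axis=1, keepdims=True)

# Design matrix: A[i, j] = exp(-b * g_i^T D_j g_i)
A = np.exp(-b * np.einsum('ik,jkl,il->ij', gradients,
                          np.stack([tensor(d) for d in basis_dirs]), gradients))

# Synthetic two-fiber signal plus noise.
signal = 0.6 * A[:, 3] + 0.4 * A[:, 27] + 0.01 * rng.standard_normal(23)

alpha, _ = nnls(A, signal)                    # non-negative mixture weights
print("dominant basis directions:", np.argsort(alpha)[-2:])
```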

Journal ArticleDOI
TL;DR: An overview of the construction of meshfree basis functions is presented, with particular emphasis on moving least-squares approximants, natural neighbour-based polygonal interpolants, and entropy approximants.
Abstract: In this paper, an overview of the construction of meshfree basis functions is presented, with particular emphasis on moving least-squares approximants, natural neighbour-based polygonal interpolants, and entropy approximants. The use of information-theoretic variational principles to derive approximation schemes is a recent development. In this setting, data approximation is viewed as an inductive inference problem, with the basis functions being synonymous with a discrete probability distribution and the polynomial reproducing conditions acting as the linear constraints. The maximization (minimization) of the Shannon–Jaynes entropy functional (relative entropy functional) is used to unify the construction of globally and locally supported convex approximation schemes. A JAVA applet is used to visualize the meshfree basis functions, and comparisons and links between different meshfree approximation schemes are presented. Copyright © 2006 John Wiley & Sons, Ltd.
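
A 1D sketch of entropy-based meshfree basis functions (local maximum-entropy approximants with a Gaussian prior weight), illustrating the variational construction described above; the node layout and locality parameter beta are arbitrary. The single Lagrange multiplier enforcing linear reproduction is found by Newton iteration.

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)
beta = 40.0                                   # locality parameter (assumed)

def maxent_basis(x, tol=1e-12):
    w = np.exp(-beta * (nodes - x) ** 2)      # Gaussian prior weights
    lam = 0.0
    for _ in range(50):
        p = w * np.exp(-lam * (nodes - x))
        p /= p.sum()                          # zeroth-order (partition of unity)
        r = p @ (nodes - x)                   # first-order constraint residual
        var = p @ (nodes - x) ** 2 - r ** 2
        lam += r / var                        # Newton step on the dual variable
        if abs(r) < tol:
            break
    return p

x = 0.37
p = maxent_basis(x)
print("partition of unity:", p.sum(), "  linear reproduction:", p @ nodes, "vs", x)
```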

Journal ArticleDOI
TL;DR: An approach based on the empirical mode decomposition technique and proper orthogonal decomposition is proposed to examine dynamic trends and phase relationships between key system signals from measured data; it improves the ability of the EMD technique to capture abrupt changes in the observed data.
Abstract: An approach based on the empirical mode decomposition (EMD) technique and proper orthogonal decomposition is proposed to examine dynamic trends and phase relationships between key system signals from measured data. Drawing on the EMD approach and the method of snapshots, a technique based on the notion of proper orthogonal modes is used to express an ensemble of measured data as a linear combination of basis functions or modes. This approach improves the ability of the EMD technique to capture abrupt changes in the observed data. Analytical criteria to describe the energy relationships in the observed oscillations are derived, and a physical interpretation of the system modes is suggested. It is shown that, in addition to providing estimates of time-dependent mode shapes, the analysis also provides a method to identify the modes with the most energy embedded in the underlying signals. The method is applied to conduct post-mortem analysis of measured data of a real event in northern Mexico and to transient stability data.

Journal ArticleDOI
TL;DR: This paper proposes to finesse the Bayesian framework by specifying spatial priors using Sparse Spatial Basis Functions (SSBFs), defined via a hierarchical probabilistic model which, when inverted, automatically selects an appropriate subset of basis functions.

Journal ArticleDOI
TL;DR: In this paper, a response surface model is developed using radial basis functions, producing a model whose objective function values match those of the original system at all sampled data points. Interpolation to any other point is easily accomplished and generates a model that represents the system over the entire parameter space.

Journal ArticleDOI
TL;DR: The proposed method provides an approximation to the complete probabilistic description of the eigensolution and circumvents the dependence of the statistical solution on the quality of the underlying random number generator.
Abstract: A new procedure for characterizing the solution of the eigenvalue problem in the presence of uncertainty is presented. The eigenvalues and eigenvectors are described through their projections on the polynomial chaos basis. An efficient method for estimating the coefficients with respect to this basis is proposed. The method uses a Galerkin-based approach by orthogonalizing the residual in the eigenvalue–eigenvector equation to the subspace spanned by the basis functions used for approximation. In this way, the stochastic problem is framed as a system of deterministic non-linear algebraic equations. This system of equations is solved using a Newton–Raphson algorithm. Although the proposed approach is not based on statistical sampling, the efficiency of the proposed method can be significantly enhanced by initializing the non-linear iterative process with a small statistical sample synthesized through a Monte Carlo sampling scheme. The proposed method offers a number of advantages over existing methods based on statistical sampling. First, it provides an approximation to the complete probabilistic description of the eigensolution. Second, it reduces the computational overhead associated with solving the statistical eigenvalue problem. Finally, it circumvents the dependence of the statistical solution on the quality of the underlying random number generator. Copyright © 2007 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: It is demonstrated that the new network can lead to a parsimonious model with much better generalization properties compared with traditional single-width RBF networks.