
Showing papers on "Basis function" published in 2000


01 Jan 2000
TL;DR: In this article, a self-adaptive mesh scheme is presented in the context of the quasi-static and full-wave analysis of general anisotropic multiconductor arbitrarily shaped waveguiding structures.
Abstract: This keynote presents a summary of the development of the Finite Element Method in the field of Electromagnetic Engineering, together with a description of several contributions of the authors to the Finite Element Method and its application to the solution of electromagnetic problems. First, a self-adaptive mesh scheme is presented in the context of the quasi-static and full-wave analysis of general anisotropic multiconductor arbitrarily shaped waveguiding structures. Two a posteriori error estimates are compared: the first is based on the complete residual of the differential equations defining the problem, while the second is based on a recovery or smoothing technique applied to the electromagnetic field. Next, an implementation by the authors of the first family of Nedelec's curl-conforming elements is outlined; its features are highlighted and compared with other curl-conforming elements. A presentation of an iterative procedure using a numerically exact radiation condition for the analysis of open (scattering and radiation) problems follows. Other contributions of the authors, such as the use of wavelet-like basis functions and an implementation of a Time Domain Finite Element Method for two-dimensional scattering problems, are only mentioned due to lack of space.

2,311 citations


Book
01 Jan 2000
TL;DR: Second Quantization Spin in Second Quantization Orbital Rotations Exact and Approximate Wave Functions The Standard Models Atomic Basis Functions Short-range Interactions and Orbital Expansions Gaussian Basis Sets Molecular Integral Evaluation Hartree-Fock Theory Configuration-Interaction Theory Multiconfigurational Self-Consistent Field Theory Coupled-Cluster Theory Perturbation Theory Calibration of the Electronic-Structure Models List of Acronyms Index
Abstract: Second Quantization Spin in Second Quantization Orbital Rotations Exact and Approximate Wave Functions The Standard Models Atomic Basis Functions Short-Range Interactions and Orbital Expansions Gaussian Basis Sets Molecular Integral Evaluation Hartree-Fock Theory Configuration-Interaction Theory Multiconfigurational Self-Consistent Field Theory Coupled-Cluster Theory Perturbation Theory Calibration of the Electronic-Structure Models List of Acronyms Index

1,740 citations


Journal ArticleDOI
TL;DR: A new implementation of the approximate coupled cluster singles and doubles method CC2 is reported, which is suitable for large scale integral-direct calculations and employs the resolution of the identity (RI) approximation for two-electron integrals to reduce the CPU time needed for calculation and I/O of these integrals.
Abstract: A new implementation of the approximate coupled cluster singles and doubles method CC2 is reported, which is suitable for large scale integral-direct calculations. It employs the resolution of the identity (RI) approximation for two-electron integrals to reduce the CPU time needed for calculation and I/O of these integrals. We use a partitioned form of the CC2 equations which eliminates the need to store double excitation cluster amplitudes. In combination with the RI approximation this formulation of the CC2 equations leads to a reduced scaling of memory and disk space requirements with the number of correlated electrons (n) and basis functions (N) to, respectively, O(N²) and O(nN²), compared to O(n²N²) in previous implementations. The reduced CPU, memory and disk space requirements make it possible to perform CC2 calculations with accurate basis sets on large molecules, which would not be accessible with conventional implementations of the CC2 method. We present an application to vertical excitation ene...

1,326 citations


Journal ArticleDOI
TL;DR: In this article, a new basis set for a full-potential treatment of crystal electronic structures is presented and its efficiency is compared with that of the well-known linearized augmented plane-wave (LAPW) method.

854 citations


Journal ArticleDOI
TL;DR: Based on the theory of approximation, this paper presents a unified analysis of interpolation and resampling techniques and shows that, contrary to the common belief, those that perform best are not interpolating.
Abstract: Based on the theory of approximation, this paper presents a unified analysis of interpolation and resampling techniques. An important issue is the choice of adequate basis functions. The authors show that, contrary to the common belief, those that perform best are not interpolating. In contrast to traditional interpolation, the authors call their use generalized interpolation; it involves a prefiltering step when correctly applied. The authors explain why the approximation order inherent in any basis function is important to limit interpolation artifacts. The decomposition theorem states that any basis function endowed with approximation order can be expressed as the convolution of a B-spline of the same order with another function that has none. This motivates the use of splines and spline-based functions as a tunable way to keep artifacts in check without any significant cost penalty. The authors discuss implementation and performance issues, and they provide experimental evidence to support their claims.
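The prefiltering step that distinguishes generalized interpolation from naive spline evaluation can be illustrated with standard SciPy routines. The sketch below is not the authors' code; the signal and sampling positions are made up, and it only shows the coefficient/prefilter idea with cubic B-splines.

```python
# Minimal sketch of generalized interpolation with cubic B-splines (assumed setup).
import numpy as np
from scipy import ndimage

samples = np.sin(np.linspace(0.0, 2.0 * np.pi, 32))        # 1-D test signal

# Step 1: prefilter the samples into B-spline coefficients (the step naive schemes skip).
coeffs = ndimage.spline_filter(samples, order=3)

# Step 2: evaluate the cubic B-spline model at arbitrary fractional positions.
x_new = np.linspace(0.0, 31.0, 301)
resampled = ndimage.map_coordinates(coeffs, [x_new], order=3, prefilter=False)

# Skipping the prefilter (treating samples as coefficients) smooths the data
# instead of interpolating it, which is the kind of artifact the paper analyzes.
blurred = ndimage.map_coordinates(samples, [x_new], order=3, prefilter=False)
print(float(abs(resampled[0] - samples[0])), float(abs(blurred[0] - samples[0])))
```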

842 citations


Journal ArticleDOI
TL;DR: In the original mortar approach, matching at the interface is realized by enforcing an orthogonality relation between the jump and a modified trace space that serves as a space of Lagrange multipliers; here, that multiplier space is replaced by a dual space without losing the optimality of the method.
Abstract: The mortar finite element method allows the coupling of different discretization schemes and triangulations across subregion boundaries. In the original mortar approach the matching at the interface is realized by enforcing an orthogonality relation between the jump and a modified trace space which serves as a space of Lagrange multipliers. In this paper, this Lagrange multiplier space is replaced by a dual space without losing the optimality of the method. The advantage of this new approach is that the matching condition is much easier to realize. In particular, all the basis functions of the new method are supported in only a few elements. The mortar map can be represented by a diagonal matrix, whereas in the standard mortar method a linear system of equations must be solved. The problem is considered in a positive definite nonconforming variational formulation as well as in an equivalent saddle-point formulation.

576 citations


Journal ArticleDOI
TL;DR: An integrated method for clustering of QRS complexes is presented which combines basis function representation with self-organizing neural networks (NNs), and which outperforms both a published supervised learning method and a conventional template cross-correlation clustering method.
Abstract: An integrated method for clustering of QRS complexes is presented which includes basis function representation and self-organizing neural networks (NNs). Each QRS complex is decomposed into Hermite basis functions, and the resulting coefficients and width parameter are used to represent the complex. By means of this representation, unsupervised self-organizing NNs are employed to cluster the data into 25 groups. Using the MIT-BIH arrhythmia database, the resulting clusters are found to exhibit a very low degree of misclassification (1.5%). The integrated method outperforms, on the MIT-BIH database, both a published supervised learning method and a conventional template cross-correlation clustering method.
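As a rough illustration of the representation step only, the sketch below fits a toy QRS-shaped waveform with a handful of orthonormal Hermite basis functions via least squares. The waveform, width parameter, and number of basis functions are illustrative assumptions, not the paper's settings.

```python
# Sketch: compact Hermite-coefficient representation of a QRS-like waveform.
import numpy as np
from math import factorial, pi, sqrt
from scipy.special import eval_hermite

def hermite_basis(t, n, sigma):
    """Orthonormal Hermite basis function of order n with width sigma."""
    norm = 1.0 / sqrt(sigma * (2 ** n) * factorial(n) * sqrt(pi))
    return norm * eval_hermite(n, t / sigma) * np.exp(-t ** 2 / (2 * sigma ** 2))

t = np.linspace(-0.1, 0.1, 200)                      # 200 ms window around the R peak
qrs = np.exp(-(t / 0.02) ** 2) - 0.3 * np.exp(-((t - 0.03) / 0.02) ** 2)  # toy QRS shape

sigma = 0.02                                         # width parameter (illustrative)
n_basis = 6
Phi = np.column_stack([hermite_basis(t, n, sigma) for n in range(n_basis)])
coeffs, *_ = np.linalg.lstsq(Phi, qrs, rcond=None)

# The few coefficients (plus sigma) form the feature vector that a
# self-organizing map could then cluster.
print(coeffs)
```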

555 citations


Journal ArticleDOI
TL;DR: This paper explores several techniques, each of which improves the conditioning of the coefficient matrix and the solution accuracy, and recommends combining what has been learned from FEM practitioners with what has been learned in RBF simulations to form a flexible, hybrid approach to solving complex multidimensional problems.
Abstract: Madych and Nelson [1] proved that multiquadric (MQ) mesh-independent radial basis functions (RBFs) enjoy exponential convergence. The primary disadvantage of the MQ scheme is that it is global; hence, the coefficient matrices obtained from this discretization scheme are full. Full matrices tend to become progressively more ill-conditioned as the rank increases. In this paper, we explore several techniques, each of which improves the conditioning of the coefficient matrix and the solution accuracy. The methods investigated are: (1) replacement of global solvers by block-partitioned LU decomposition schemes; (2) matrix preconditioners; (3) variable MQ shape parameters based upon the local radius of curvature of the function being solved; (4) a truncated MQ basis function having a finite, rather than a full, bandwidth; (5) multizone methods for large simulation problems; and (6) knot adaptivity that minimizes the total number of knots required in a simulation problem. The hybrid combination of these methods contributes to very accurate solutions. Even though the FEM gives rise to sparse coefficient matrices, these matrices can in practice become very ill-conditioned. We recommend combining what has been learned from FEM practitioners with what has been learned in RBF simulations to form a flexible, hybrid approach to solving complex multidimensional problems.
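A minimal sketch of global MQ interpolation, assuming scattered 1-D data and an illustrative shape parameter c, shows both the accuracy and the full, ill-conditioned coefficient matrix that motivates the techniques listed above.

```python
# Sketch: global multiquadric (MQ) RBF interpolation on scattered 1-D data.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 40))              # scattered knots
f = np.sin(2.0 * np.pi * x)                          # sampled test function

c = 0.1                                              # MQ shape parameter (illustrative)
r = x[:, None] - x[None, :]
A = np.sqrt(r ** 2 + c ** 2)                         # full multiquadric matrix

print("condition number:", np.linalg.cond(A))        # grows quickly with n and c
alpha = np.linalg.solve(A, f)                        # expansion coefficients

# Evaluate the interpolant on a fine grid and check the error.
xe = np.linspace(0.0, 1.0, 200)
B = np.sqrt((xe[:, None] - x[None, :]) ** 2 + c ** 2)
fe = B @ alpha
print("max interpolation error:", np.max(np.abs(fe - np.sin(2.0 * np.pi * xe))))
```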

461 citations


Journal ArticleDOI
TL;DR: In this paper, a reduced-order modeling approach for active control of fluid dynamical systems based on proper orthogonal decomposition (POD) is presented, which allows the extraction of a reduced set of basis functions, perhaps just a few from a computational or experimental database through an eigenvalue analysis.
Abstract: In this article, a reduced-order modeling approach, suitable for active control of fluid dynamical systems, based on proper orthogonal decomposition (POD) is presented. The rationale behind the reduced-order modeling is that numerical simulation of Navier–Stokes equations is still too costly for the purpose of optimization and control of unsteady flows. The possibility of obtaining reduced-order models that reduce the computational complexity associated with the Navier–Stokes equations is examined while capturing the essential dynamics by using the POD. The POD allows the extraction of a reduced set of basis functions, perhaps just a few, from a computational or experimental database through an eigenvalue analysis. The solution is then obtained as a linear combination of this reduced set of basis functions by means of Galerkin projection. This makes it attractive for optimal control and estimation of systems governed by partial differential equations (PDEs). It is used here in active control of fluid flows governed by the Navier–Stokes equations. In particular, flow over a backward-facing step is considered. Reduced-order models/low-dimensional dynamical models for this system are obtained using POD basis functions (global) from the finite element discretizations of the Navier–Stokes equations. Their effectiveness in flow control applications is shown on a recirculation control problem using blowing on the channel boundary. Implementational issues are discussed and numerical experiments are presented.
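The POD step itself reduces to a singular value decomposition of a snapshot matrix. The sketch below uses synthetic snapshots rather than the paper's backward-facing-step data, and only extracts the modes and the reduced coordinates that a Galerkin projection would then evolve.

```python
# Sketch: POD modes from a snapshot matrix via SVD (synthetic data).
import numpy as np

x = np.linspace(0.0, 1.0, 256)
t = np.linspace(0.0, 1.0, 80)
# Each column is one "snapshot" of a travelling-wave-like field on a 1-D grid.
snapshots = np.array([np.sin(2 * np.pi * (x - 0.5 * tk)) + 0.1 * tk * np.sin(6 * np.pi * x)
                      for tk in t]).T                       # shape (256, 80)

# POD modes are the left singular vectors of the mean-subtracted snapshots.
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

r = 4                                                        # keep just a few modes
modes = U[:, :r]
energy = (s[:r] ** 2).sum() / (s ** 2).sum()
print(f"energy captured by {r} modes: {energy:.4f}")

# Reduced coordinates: each snapshot as a linear combination of the POD modes.
a = modes.T @ (snapshots - mean_flow)                        # shape (r, 80)
reconstruction = mean_flow + modes @ a
print("reconstruction error:", np.linalg.norm(reconstruction - snapshots))
```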

398 citations


Journal ArticleDOI
TL;DR: In this paper, the use of basis functions as a flexible intermediate representation for sensorimotor transformations is reviewed, and this formalism is shown to provide a unifying account of computation, learning and short-term memory that is consistent with the responses of cortical neurons.
Abstract: Behaviors such as sensing an object and then moving your eyes or your hand toward it require that sensory information be used to help generate a motor command, a process known as a sensorimotor transformation. Here we review models of sensorimotor transformations that use a flexible intermediate representation that relies on basis functions. The use of basis functions as an intermediate is borrowed from the theory of nonlinear function approximation. We show that this approach provides a unifying insight into the neural basis of three crucial aspects of sensorimotor transformations, namely, computation, learning and short-term memory. This mathematical formalism is consistent with the responses of cortical neurons and provides a fresh perspective on the issue of frames of reference in spatial representations.

368 citations


Journal ArticleDOI
TL;DR: An energy decomposition scheme based on the block-localized wave function (BLW) method is proposed in this paper; its key is the definition and full optimization of the diabatic state wave function, in which charge transfer among the interacting molecules is deactivated.
Abstract: An energy decomposition scheme based on the block-localized wave function (BLW) method is proposed. The key of this scheme is the definition and the full optimization of the diabatic state wave function, where the charge transfer among interacting molecules is deactivated. The present energy decomposition (ED), BLW-ED, method is similar to the Morokuma decomposition scheme in the definition of the energy terms, but differs in implementation and the computational algorithm. In addition, in the BLW-ED approach, the basis set superposition error is fully taken into account. The application of this scheme to the water dimer and the lithium cation–water clusters reveals that there is minimal charge transfer effect in hydrogen-bonded complexes. At the HF/aug-cc-pVTZ level, the electrostatic, polarization, and charge-transfer effects contribute 65%, 24%, and 11%, respectively, to the total bonding energy (−3.84 kcal/mol) in the water dimer. On the other hand, charge transfer effects are shown to be significant in Lewis acid–base complexes such as H3NSO3 and H3NBH3. In this work, the effect of the basis sets used on the energy decomposition analysis is addressed, and the results show that the present energy decomposition scheme is stable with a modest size of basis functions.

Journal ArticleDOI
TL;DR: A recently developed general theory for basis construction, which generalizes the classical Laguerre theory, is presented; it particularly exploits the property that basis function models are linearly parametrized.

Journal ArticleDOI
TL;DR: This paper addresses, through several numerical tests, the optimal selection of the norm used in POD for the compressible Navier–Stokes equations, namely the H¹ norm, and finds that low order modeling of relatively complex flow simulations provides good qualitative results compared with reference computations.
Abstract: Fluid flows are very often governed by the dynamics of a small number of coherent structures, i.e., fluid features which keep their individuality during the evolution of the flow. The purpose of this paper is to study a low order simulation of the Navier–Stokes equations on the basis of the evolution of such coherent structures. One way to extract basis functions which can be interpreted as coherent structures from flow simulations is Proper Orthogonal Decomposition (POD). Then, by means of a Galerkin projection, it is possible to find the system of ODEs which approximates the problem in the finite-dimensional space spanned by the POD basis functions. It is found that low order modeling of relatively complex flow simulations, such as laminar vortex shedding from an airfoil at incidence and turbulent vortex shedding from a square cylinder, provides good qualitative results compared with reference computations. In this respect, it is shown that the accuracy of numerical schemes based on simple Galerkin projection is insufficient and numerical stabilization is needed. To conclude, we approach the issue of the optimal selection of the norm, namely the H¹ norm, used in POD for the compressible Navier–Stokes equations by several numerical tests.

Journal ArticleDOI
TL;DR: In this article, the authors propose a rigorous and practical methodology for the derivation of accurate finite-dimensional approximations and the synthesis of nonlinear output feedback controllers for non-linear parabolic PDE systems for which the manipulated inputs, the controlled and measured outputs are distributed in space.
Abstract: This article proposes a rigorous and practical methodology for the derivation of accurate finite-dimensional approximations and the synthesis of non-linear output feedback controllers for non-linear parabolic PDE systems in which the manipulated inputs and the controlled and measured outputs are distributed in space. The method consists of three steps: first, the Karhunen-Loeve expansion is used to derive empirical eigenfunctions of the non-linear parabolic PDE system; then, the empirical eigenfunctions are used as basis functions within a Galerkin and approximate inertial manifold model reduction framework to derive low-order ODE systems that accurately describe the dominant dynamics of the PDE system; and finally, these ODE systems are used for the synthesis of non-linear output feedback controllers that guarantee stability and enforce output tracking in the closed-loop system. The proposed method is used to perform model reduction and synthesize a non-linear dynamic output feedback controller for a rapi...

Journal ArticleDOI
TL;DR: This article investigates an alternative optimization approach based on block coordinate relaxation (BCR) for sets of basis functions that are the finite union of sets of orthonormal basis functions (e.g., wavelet packets). The BCR algorithm is shown to be globally convergent and, empirically, is faster than the IP algorithm for a variety of signal denoising problems.
Abstract: An important class of nonparametric signal processing methods entails forming a set of predictors from an overcomplete set of basis functions associated with a fast transform (e.g., wavelet packets). In these methods, the number of basis functions can far exceed the number of sample values in the signal, leading to an ill-posed prediction problem. The "basis pursuit" denoising method of Chen, Donoho, and Saunders regularizes the prediction problem by adding an ℓ1 penalty term on the coefficients for the basis functions. Use of an ℓ1 penalty instead of ℓ2 has significant benefits, including higher resolution of signals close in time/frequency and a more parsimonious representation. The ℓ1 penalty, however, poses a challenging optimization problem that was solved by Chen, Donoho and Saunders using a novel application of interior-point algorithms (IP). This article investigates an alternative optimization approach based on block coordinate relaxation (BCR) for sets of basis functions that are th...
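For a single orthonormal block, the ℓ1-penalized subproblem that BCR solves has a closed-form soft-thresholding solution. The sketch below uses an orthonormal DCT as a stand-in for a wavelet-packet block and a synthetic signal; it illustrates only the per-block update, not the authors' full BCR implementation.

```python
# Sketch: soft-thresholding solves the l1-penalized fit for one orthonormal basis.
import numpy as np
from scipy.fft import dct, idct

def soft_threshold(c, thr):
    """Closed-form minimizer of 0.5*(c0 - c)^2 + thr*|c|, applied per coefficient."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

rng = np.random.default_rng(1)
n = 512
t = np.arange(n) / n
clean = np.cos(2 * np.pi * 12 * t) + 0.5 * np.cos(2 * np.pi * 40 * t)
noisy = clean + 0.5 * rng.standard_normal(n)

# With an orthonormal transform, the penalized problem decouples coefficient-wise;
# BCR cycles this update over the blocks of a union of orthonormal bases.
lam = 1.5
coeffs = dct(noisy, norm="ortho")
denoised = idct(soft_threshold(coeffs, lam), norm="ortho")

print("noisy error:   ", np.linalg.norm(noisy - clean))
print("denoised error:", np.linalg.norm(denoised - clean))
```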

Journal ArticleDOI
TL;DR: In this article, an all-electron implementation of the Gaussian and augmented plane wave density functional method (GAPW) is presented, which allows ab-initio density functional calculations for periodic and non-periodic systems.
Abstract: We present an all-electron implementation of the Gaussian and augmented plane wave density functional method (GAPW method), which allows ab-initio density functional calculations for periodic and non-periodic systems. The GAPW method uses a Gaussian basis set to expand the Kohn–Sham orbitals, whereas an augmented plane wave basis set is introduced as an auxiliary basis set to expand the electronic charge density. The results of the all-electron calculations for a representative set of small molecules are reported to demonstrate the accuracy and reliability of the GAPW method. Furthermore, its performance is shown for some larger systems, including calculations on unbranched alkane chains up to n-C100H202 with 1804 basis functions and a fully hydrated crystalline RNA duplex (sodium guanylyl-3′-5′-cytidine nonahydrate) with 368 atoms and 3168 basis functions. Finally, as a first test an all-electron ab-initio molecular dynamics (MD) run was performed for 32 water molecules in a simple cubic box under ambient conditions. A standard single processor workstation (IBM 397) was used for all the presented calculations.

Journal ArticleDOI
TL;DR: A lattice structure for an M-channel linear-phase perfect reconstruction filter bank (LPPRFB) based on the singular value decomposition (SVD) is introduced, which can be proven to use a minimal number of delay elements and to completely span a large class of LPPRFBs.
Abstract: A lattice structure for an M-channel linear-phase perfect reconstruction filter bank (LPPRFB) based on the singular value decomposition (SVD) is introduced. The lattice can be proven to use a minimal number of delay elements and to completely span a large class of LPPRFBs: all analysis and synthesis filters have the same FIR length, sharing the same center of symmetry. The lattice also structurally enforces both linear-phase and perfect reconstruction properties, is capable of providing fast and efficient implementation, and avoids the costly matrix inversion problem in the optimization process. From a block transform perspective, the new lattice can be viewed as representing a family of generalized lapped biorthogonal transform (GLBT) with an arbitrary number of channels M and arbitrarily large overlap. The relaxation of the orthogonal constraint allows the GLBT to have significantly different analysis and synthesis basis functions, which can then be tailored appropriately to fit a particular application. Several design examples are presented along with a high-performance GLBT-based progressive image coder to demonstrate the potential of the new transforms.

Journal ArticleDOI
TL;DR: In this article, a coupled surface-volume integral equation approach is presented for the calculation of electromagnetic scattering from conducting objects coated with materials, which can be easily accelerated using fast solvers such as the multilevel fast multipole algorithm.
Abstract: A coupled surface-volume integral equation approach is presented for the calculation of electromagnetic scattering from conducting objects coated with materials. The free-space Green's function is used in the formulation of both integral equations. In the method of moments (MoM) solution to the integral equations, the target is discretized using triangular patches for conducting surfaces and tetrahedral cells for the dielectric volume. General roof-top basis functions are used to expand the surface and volume currents, respectively. This approach is applicable to inhomogeneous material coating and, because of the use of the free-space Green's function, it can be easily accelerated using fast solvers such as the multilevel fast multipole algorithm.

Proceedings Article
30 Jun 2000
TL;DR: This work presents a new approach to value determination that uses a simple closed-form computation to compute a least-squares decomposed approximation to the value function for any weights directly, and then uses this value determination algorithm as a subroutine in a policy iteration process.
Abstract: Many large MDPs can be represented compactly using a dynamic Bayesian network. Although the structure of the value function does not retain the structure of the process, recent work has suggested that value functions in factored MDPs can often be approximated well using a factored value function: a linear combination of restricted basis functions, each of which refers only to a small subset of variables. An approximate factored value function for a particular policy can be computed using approximate dynamic programming, but this approach (and others) can only produce an approximation relative to a distance metric which is weighted by the stationary distribution of the current policy. This type of weighted projection is ill-suited to policy improvement. We present a new approach to value determination that uses a simple closed-form computation to compute a least-squares decomposed approximation to the value function for any weights directly. We then use this value determination algorithm as a subroutine in a policy iteration process. We show that, under reasonable restrictions, the policies induced by a factored value function can be compactly represented as a decision list, and can be manipulated efficiently in a policy iteration process. We also present a method for computing error bounds for decomposed value functions using a variable elimination algorithm for function optimization. The complexity of all of our algorithms depends on the factorization of the system dynamics and of the approximate value function.
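For a small MDP with an explicit transition matrix, the closed-form least-squares fixed point for a linear value-function approximation can be written in a few lines. The sketch below uses uniform weights and enumerates states explicitly, so it only illustrates the idea of solving directly for the basis-function weights; it is not the paper's factored algorithm, which avoids enumerating states.

```python
# Sketch: closed-form least-squares value determination for a fixed policy.
import numpy as np

n_states, gamma = 5, 0.9
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(n_states), size=n_states)     # policy transition matrix (rows sum to 1)
R = rng.uniform(0.0, 1.0, n_states)                      # expected reward per state

# Linear value-function approximation V ~= Phi @ w over a few basis functions.
Phi = np.column_stack([np.ones(n_states),
                       np.arange(n_states),
                       np.arange(n_states) ** 2])

# Least-squares fixed point (uniform weights): Phi^T (Phi - gamma * P @ Phi) w = Phi^T R.
A = Phi.T @ (Phi - gamma * P @ Phi)
w = np.linalg.solve(A, Phi.T @ R)

V_exact = np.linalg.solve(np.eye(n_states) - gamma * P, R)
print("approx:", Phi @ w)
print("exact :", V_exact)
```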

Journal ArticleDOI
TL;DR: In this paper, a multi-coefficient correlation method based on quadratic configuration interaction with single and double excitations (MC-QCISD) and basis sets using segmented contraction and having the same exponential parameters in the s and p spaces.
Abstract: This paper presents a multi-coefficient correlation method based on quadratic configuration interaction with single and double excitations (MC-QCISD) and basis sets using segmented contraction and having the same exponential parameters in the s and p spaces. The results are comparable to a previous multi-coefficient correlation method based on coupled cluster theory with less efficient correlation-consistent basis sets, and they are better than a previous multi-coefficient correlation method based on Møller-Plesset fourth order perturbation theory with single, double, and quadruple excitations with correlation-consistent basis functions. The mean unsigned error per bond of the MC-QCISD method is 0.72 kcal/mol. The new method should be very efficient for computing geometries of open-shell transition states.

01 May 2000
TL;DR: This work presents an approach in which suitable reduced order models are derived successively, and global convergence results are given.
Abstract: The proper orthogonal decomposition (POD) is a model reduction technique for the simulation of physical processes governed by partial differential equations, e.g. fluid flows. It can also be used to develop reduced order control models. Fundamental is the computation of POD basis functions that represent the influence of the control action on the system in order to get a suitable control model. We present an approach where suitable reduced order models are derived successively and give global convergence results.

Journal ArticleDOI
TL;DR: In this paper, a new computational algorithm that accounts for tunneling effects is introduced and tested against exact solution of the Schrödinger equation for two multi-dimensional model problems.
Abstract: Quantum mechanical tunneling effects are investigated using an extension of the full multiple spawning (FMS) method. The FMS method uses a multiconfigurational frozen Gaussian ansatz for the wave function and allows for dynamical expansion of the basis set during the simulation. Basis set growth is controlled by allowing this expansion only when the dynamics signals impending failure of classical mechanics, e.g., nonadiabatic and/or tunneling effects. Previous applications of the FMS method have emphasized the modeling of nonadiabatic effects. Here, a new computational algorithm that accounts for tunneling effects is introduced and tested against exact solution of the Schrödinger equation for two multi-dimensional model problems. The algorithm first identifies the tunneling events and then determines the initial conditions for the newly spawned basis functions. Quantitative agreement in expectation values, tunneling doublets and tunneling splitting is demonstrated for a wide range of conditions.

Journal ArticleDOI
TL;DR: In this article, the problem of choosing appropriate atomic orbital basis sets for ab initio calculations on dipole-bound anions has been examined, and the issue of designing and centering the extra diffuse basis functions for the excess electron has also been studied.
Abstract: The problem of choosing appropriate atomic orbital basis sets for ab initio calculations on dipole-bound anions has been examined. Such basis sets are usually constructed as a combination of a standard valence-type basis, designed to describe the neutral molecular core, and an extra diffuse set designed to describe the charge distribution of the extra electron. As part of the present work, it has been found that the most commonly used valence-type basis sets (e.g., 6-31++G or 6-311+G), when so augmented, can unpredictably under- or overestimate electron binding energies for dipole-bound anions. In contrast, when the aug-cc-pVDZ, aug-cc-pVTZ, or other medium-size polarized (MSP) basis sets are so augmented, more reliable binding energies are obtained, especially when the electron binding energy is calculated at the CCSD(T) level of theory. The issue of designing and centering the extra diffuse basis functions for the excess electron has also been studied, and our findings are discussed here.

Journal ArticleDOI
TL;DR: An efficient algorithm combining the adaptive integral method and the discrete complex-image method (DCIM) is presented in this paper for analyzing large-scale microstrip structures.
Abstract: An efficient algorithm combining the adaptive integral method and the discrete complex-image method (DCIM) is presented in this paper for analyzing large-scale microstrip structures. The arbitrarily shaped microstrips are discretized using triangular elements with Rao-Wilton-Glisson basis functions. These basis functions are then projected onto a rectangular grid, which enables the calculation of the resultant matrix-vector product using the fast Fourier transform. The method retains the advantages of the well-known conjugate-gradient fast-Fourier-transform method, as well as the excellent modeling capability offered by triangular elements. The resulting algorithm has the memory requirement proportional to O(N) and the operation count for the matrix-vector multiplication proportional to O(N log N), where N denotes the number of unknowns. The required spatial Green's functions are computed efficiently using the DCIM, which further speeds up the algorithm. Numerical results for some microstrip circuits and a microstrip antenna array are presented to demonstrate the efficiency and accuracy of this method.

Journal ArticleDOI
TL;DR: The Hamilton-Jacobi-Bellman (HJB) equation associated with the robust/H∞ filter (as well as the Mortensen filter) is considered.
Abstract: The Hamilton-Jacobi-Bellman (HJB) equation associated with the robust/H∞ filter (as well as the Mortensen filter) is considered. These filters employ a model where the disturbances have finite power. The HJB equation for the filter information state is a first-order equation with a term that is quadratic in the gradient. Yet the solution operator is linear in the max-plus algebra. This property is exploited by the development of a numerical algorithm where the effect of the solution operator on a set of basis functions is computed off-line. The precomputed solutions are stored as vectors of coefficients of the basis functions. These coefficients are then used directly in the real-time computations.

Proceedings ArticleDOI
05 Jun 2000
TL;DR: New speech features obtained by applying independent component analysis to human speech are proposed; the adapted basis functions resemble Gabor-like features and give much better recognition rates than conventional mel-frequency cepstral features.
Abstract: In this paper, we propose new speech features obtained by applying independent component analysis to human speech. When independent component analysis is applied to speech signals for efficient encoding, the adapted basis functions resemble Gabor-like features. The trained basis functions have some redundancy, so we select a subset of them by a reordering method. The basis functions are almost ordered from the low-frequency basis vector to the high-frequency basis vector, which is compatible with the fact that human speech signals carry much more information in the low-frequency range. These features can be used in automatic speech recognition systems, and the proposed method gives much better recognition rates than conventional mel-frequency cepstral features.
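A rough sketch of the feature-learning step, using scikit-learn's FastICA on short waveform frames as a stand-in for the authors' ICA training; the signal, frame sizes, and number of components are illustrative assumptions. On real speech, the learned columns tend to be localized, Gabor-like waveforms.

```python
# Sketch: learning ICA basis functions from short frames of a waveform.
import numpy as np
from sklearn.decomposition import FastICA

# Stand-in "speech": replace with real audio samples (e.g. 16 kHz recordings).
rng = np.random.default_rng(3)
signal = rng.laplace(size=160_000)

# Cut the waveform into short overlapping frames (illustrative sizes).
frame_len, hop = 128, 64
frames = np.array([signal[i:i + frame_len]
                   for i in range(0, len(signal) - frame_len, hop)])

# Learn the ICA basis; the columns of the mixing matrix are the basis functions,
# which can then be reordered (e.g. by dominant frequency) and used as features.
ica = FastICA(n_components=32, whiten="unit-variance", random_state=0, max_iter=500)
ica.fit(frames)
basis = ica.mixing_                      # shape (frame_len, n_components)
print(basis.shape)
```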

Journal ArticleDOI
TL;DR: The general framework, concepts, and procedures of anatomically informed basis functions (AIBF), a new method for the analysis of functional magnetic resonance imaging (fMRI) data, are introduced, and it is shown that the approach offers several desirable features, particularly in terms of superresolution and localization.

Journal ArticleDOI
TL;DR: In this article, the feasibility of using a basis set approach to the study of quantum dissipative dynamics is investigated for the spin-boson model, a system of two discrete states linearly coupled to a harmonic bath.
Abstract: The feasibility of using a basis set approach to the study of quantum dissipative dynamics is investigated for the spin-boson model, a system of two discrete states linearly coupled to a harmonic bath. The infinite Hamiltonian is discretized to a finite number of degrees of freedom. A traditional basis set approach, in a multiconfiguration time-dependent Hartree context, is used to solve the time-dependent Schrödinger equation by explicitly including all the degrees of freedom ("system" + "bath"). Quantities such as the reduced density matrix are then evaluated via a quadrature summation/Monte Carlo procedure over a certain number of time-dependent wave functions. Numerically exact results are obtained by systematically increasing the number of bath modes used to represent the condensed phase environment, as well as other variational parameters (number of basis functions, configurations, etc.). The potential of the current method is briefly discussed.

Patent
Michael E. Tipping
01 Sep 2000
TL;DR: The relevance vector machine (RVM) as mentioned in this paper is a probabilistic basis model with a Bayesian treatment, where a prior is introduced over the weights governed by a set of hyperparameters.
Abstract: A relevance vector machine (RVM) for data modeling is disclosed. The RVM is a probabilistic basis model. Sparsity is achieved through a Bayesian treatment, where a prior is introduced over the weights, governed by a set of hyperparameters. As compared to a support vector machine (SVM), the non-zero weights in the RVM represent more prototypical examples of classes, which are termed relevance vectors. The trained RVM utilizes many fewer basis functions than the corresponding SVM and typically achieves superior test performance. No additional validation of parameters (such as C) is necessary to specify the model, except those associated with the basis.
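As a loose, non-equivalent illustration of sparse Bayesian learning over a set of basis functions, the sketch below fits scikit-learn's ARDRegression on a Gaussian-basis design matrix. The data and kernel width are made up, and this is not the patented RVM implementation; it only shows how per-weight hyperparameters prune most basis functions.

```python
# Sketch: ARD (RVM-like) sparse Bayesian regression over Gaussian basis functions.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(-3.0, 3.0, 100))
y = np.sinc(x) + 0.05 * rng.standard_normal(x.size)

# Design matrix of Gaussian basis functions centred on the training inputs
# (the same kind of basis an RVM or SVM would use); the width is illustrative.
width = 0.5
Phi = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * width ** 2))

# ARD places an individual precision hyperparameter on each weight, so most
# weights are driven to (near) zero, leaving a small set of "relevant" bases.
model = ARDRegression(threshold_lambda=1e4)
model.fit(Phi, y)

relevant = int(np.sum(np.abs(model.coef_) > 1e-3))
print(f"{relevant} of {Phi.shape[1]} basis functions kept")
```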

Journal ArticleDOI
TL;DR: In this article, a new extended version of the R-matrix method for the calculation of continuum properties is presented, in which non-orthogonal orbitals are extensively used for describing both the target states and the R-matrix basis functions.
Abstract: We present a new extended version of the R-matrix method for the calculation of continuum properties in which non-orthogonal orbitals are extensively used for describing both the target states and the R-matrix basis functions. In particular, a B-spline basis is used for the description of continuum states in the inner region, and the target states may be obtained from independent calculations. This leads to a generalized eigenvalue problem but has the advantage of requiring much smaller bases for an accurate representation of the target wavefunctions and for achieving convergence in the close-coupling expansion. The present approach and its code are both applicable to a general atom, and their efficiency for low-energy scattering processes is demonstrated by calculating the photoionization of Li. A detailed analysis of the resonance structure is given. Very good agreement with experimental data has been obtained, and considerable improvement in the description of resonances has been achieved in comparison with standard R-matrix calculations.