scispace - formally typeset

Showing papers on "Basis (linear algebra) published in 2010"


Journal ArticleDOI
25 Jan 2010-Analyst
TL;DR: The increasing interest in Support Vector Machines (SVMs) over the past 15 years is described, including its application to multivariate calibration, and why it is useful when there are outliers and non-linearities.
Abstract: The increasing interest in Support Vector Machines (SVMs) over the past 15 years is described. Methods are illustrated using simulated case studies, and 4 experimental case studies, namely mass spectrometry for studying pollution, near infrared analysis of food, thermal analysis of polymers and UV/visible spectroscopy of polyaromatic hydrocarbons. The basis of SVMs as two-class classifiers is shown with extensive visualisation, including learning machines, kernels and penalty functions. The influence of the penalty error and radial basis function radius on the model is illustrated. Multiclass implementations including one vs. all, one vs. one, fuzzy rules and Directed Acyclic Graph (DAG) trees are described. One-class Support Vector Domain Description (SVDD) is described and contrasted to conventional two- or multi-class classifiers. The use of Support Vector Regression (SVR) is illustrated including its application to multivariate calibration, and why it is useful when there are outliers and non-linearities.
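The penalty error and radial basis function radius that the abstract highlights can be made concrete with a small numpy sketch (not the paper's code): an RBF kernel plus a toy linear two-class SVM trained by hinge-loss subgradient descent, where the parameter C stands in for the penalty error and all data are made up for illustration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - y_j||^2), the radial basis function kernel;
    # gamma controls the RBF 'radius' discussed in the abstract
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Toy linear two-class SVM via hinge-loss subgradient descent.
    y must be in {-1, +1}; larger C weights misclassification more heavily
    relative to the margin (regularization) term."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:            # inside margin: hinge is active
                w -= lr * (w / C - yi * xi)
                b += lr * yi
            else:
                w -= lr * w / C                  # outside margin: only shrink w
    return w, b

# made-up, linearly separable two-class data
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 2.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
w, b = train_svm(X, y, C=1.0)
```

In a full kernel SVM the decision function would be built from `rbf_kernel` evaluations against the support vectors; the linear trainer above only illustrates how the penalty term enters the optimization.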

1,899 citations


Journal ArticleDOI
TL;DR: A fitting basis for molecular potential energy surfaces that is invariant under permutation of like atoms is developed, illustrated for several classes of molecules, and used to obtain a new PES for H3O(+) based on roughly 62 000 ab initio energies.
Abstract: We describe a procedure to develop a fitting basis for molecular potential energy surfaces (PESs) that is invariant with respect to permutation of like atoms. The method is based on a straightforward symmetrization of a primitive monomial basis and illustrated for several classes of molecules. A numerically efficient method to evaluate the resulting expression for the PES is also described. The fitting basis is used to obtain a new PES for H3O(+) based on roughly 62 000 ab initio energies.
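The symmetrization of a primitive monomial basis can be sketched in a few lines (illustrative only, not the authors' code): a monomial in the distance variables is averaged over the permutation group of like atoms, here the two-element group appropriate to a hypothetical A2B molecule, so the resulting basis function is unchanged when the two like A atoms are swapped.

```python
import numpy as np

# permutation group for an A2B molecule with distance variables ordered
# (r_AA, r_A1B, r_A2B): swapping the two like A atoms swaps the last two
PERMS_A2B = [(0, 1, 2), (0, 2, 1)]

def symmetrized_monomial(y, powers, perms):
    """Average the primitive monomial prod_i y[i]**powers[i] over a
    permutation group acting on the distance variables, giving a basis
    function invariant under permutation of like atoms."""
    return float(np.mean([np.prod([y[p[i]] ** powers[i] for i in range(len(y))])
                          for p in perms]))

# invariance check: swapping the two A-B distances leaves the value unchanged
v = symmetrized_monomial((1.0, 2.0, 3.0), (1, 2, 1), PERMS_A2B)
v_swapped = symmetrized_monomial((1.0, 3.0, 2.0), (1, 2, 1), PERMS_A2B)
```

A PES fit would then be a linear least-squares expansion of the ab initio energies in many such symmetrized monomials, typically in Morse-type variables of the internuclear distances.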

226 citations


Journal ArticleDOI
TL;DR: In this article, Shi et al. presented a tree-tensor-network-based method to study strongly correlated systems with nonlocal interactions in higher dimensions, where the local sites can be coupled to more than two neighboring auxiliary subspaces.
Abstract: We present a tree-tensor-network-based method to study strongly correlated systems with nonlocal interactions in higher dimensions. Although the momentum-space and quantum-chemistry versions of the density-matrix renormalization group (DMRG) method have long been applied to such systems, the spatial topology of DMRG-based methods allows efficient optimizations to be carried out with respect to one spatial dimension only. Extending the matrix-product-state picture, we formulate a more general approach by allowing the local sites to be coupled to more than two neighboring auxiliary subspaces. Following [Y. Shi, L. Duan, and G. Vidal, Phys. Rev. A 74, 022320 (2006)], we treat a treelike network ansatz with arbitrary coordination number $z$, where the $z=2$ case corresponds to the one-dimensional (1D) scheme. For this ansatz, the long-range correlation deviates from the mean-field value polynomially with distance, in contrast to the matrix-product ansatz, which deviates exponentially. The computational cost of the tree-tensor-network method is significantly smaller than that of previous DMRG-based attempts, which renormalize several blocks into a single block. In addition, we investigate the effect of unitary transformations on the local basis states and present a method for optimizing such transformations. For the 1D interacting spinless fermion model, the optimized transformation interpolates smoothly between real space and momentum space. Calculations carried out on small quantum chemical systems support our approach.

206 citations


Proceedings Article
06 Dec 2010
TL;DR: Given upper bounds on the dimension, volume, and curvature, it is shown that Empirical Risk Minimization can produce a nearly optimal manifold using a number of random samples that is independent of the ambient dimension of the space in which data lie.
Abstract: The hypothesis that high dimensional data tends to lie in the vicinity of a low dimensional manifold is the basis of a collection of methodologies termed Manifold Learning. In this paper, we study statistical aspects of the question of fitting a manifold with a nearly optimal least squared error. Given upper bounds on the dimension, volume, and curvature, we show that Empirical Risk Minimization can produce a nearly optimal manifold using a number of random samples that is independent of the ambient dimension of the space in which data lie. We obtain an upper bound on the required number of samples that depends polynomially on the curvature, exponentially on the intrinsic dimension, and linearly on the intrinsic volume. For constant error, we prove a matching minimax lower bound on the sample complexity that shows that this dependence on intrinsic dimension, volume and curvature is unavoidable. Whether the known lower bound of O(k/ε² + log(1/δ)/ε²) for the sample complexity of Empirical Risk Minimization on k-means applied to data in a unit ball of arbitrary dimension is tight has been an open question since 1997 [3]. Here ε is the desired bound on the error and δ is a bound on the probability of failure. We improve the best currently known upper bound [14] of O(k²/ε² + log(1/δ)/ε²) to O((k/ε²) min(k, log⁴(k/ε)) + log(1/δ)/ε²). Based on these results, we devise a simple algorithm for k-means and another that uses a family of convex programs to fit a piecewise linear curve of a specified length to high dimensional data, where the sample complexity is independent of the ambient dimension.
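The empirical risk minimized in the k-means part of the paper is the mean squared distance from each sample to the nearest of k centers; Lloyd's algorithm is the standard heuristic for minimizing it. A small numpy sketch (illustrative only, not the authors' algorithm):

```python
import numpy as np

def kmeans_risk(X, centers):
    # empirical risk: mean squared distance to the nearest of the k centers
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

def lloyd(X, k, iters=20, seed=0):
    """Lloyd's algorithm: alternate assignment to the nearest center with
    recomputation of each center as the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

# two made-up, well-separated clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [10.0, 10.0], [10.1, 10.0], [10.0, 10.1]])
centers = lloyd(X, 2)
```

The sample-complexity bounds in the abstract concern how many random samples are needed before the empirical risk of the minimizer tracks the population risk; the code only defines that risk and a minimization heuristic.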

198 citations


Journal ArticleDOI
TL;DR: An efficient algorithm is proposed for the a priori construction of separated representations of square integrable vector-valued functions defined on a high-dimensional probability space, which are the solutions of systems of stochastic algebraic equations.
Abstract: Uncertainty quantification and propagation in physical systems appear as a critical path for the improvement of the prediction of their response. Galerkin-type spectral stochastic methods provide a general framework for the numerical simulation of physical models driven by stochastic partial differential equations. The response is searched in a tensor product space, which is the product of deterministic and stochastic approximation spaces. The computation of the approximate solution requires the solution of a very high dimensional problem, whose calculation costs are generally prohibitive. Recently, a model reduction technique, named Generalized Spectral Decomposition method, has been proposed in order to reduce these costs. This method belongs to the family of Proper Generalized Decomposition methods. It takes advantage of the tensor product structure of the solution function space and allows the a priori construction of a quasi optimal separated representation of the solution, which has nearly the same convergence properties as a posteriori Hilbert Karhunen-Loeve decompositions. The associated algorithms only require the solution of a few deterministic problems and a few stochastic problems on deterministic reduced basis (algebraic stochastic equations), these problems being uncoupled. However, this method does not circumvent the "curse of dimensionality" which is associated with the dramatic increase in the dimension of stochastic approximation spaces, when dealing with high stochastic dimension. In this paper, we propose a marriage between the Generalized Spectral Decomposition algorithms and a separated representation methodology, which exploits the tensor product structure of stochastic function spaces.
An efficient algorithm is proposed for the a priori construction of separated representations of square integrable vector-valued functions defined on a high-dimensional probability space, which are the solutions of systems of stochastic algebraic equations.

186 citations


Journal ArticleDOI
TL;DR: An active basis model, a shared sketch algorithm, and a computational architecture of sum-max maps for representing, learning, and recognizing deformable templates are proposed.
Abstract: This article proposes an active basis model, a shared sketch algorithm, and a computational architecture of sum-max maps for representing, learning, and recognizing deformable templates. In our generative model, a deformable template is in the form of an active basis, which consists of a small number of Gabor wavelet elements at selected locations and orientations. These elements are allowed to slightly perturb their locations and orientations before they are linearly combined to generate the observed image. The active basis model, in particular, the locations and the orientations of the basis elements, can be learned from training images by the shared sketch algorithm. The algorithm selects the elements of the active basis sequentially from a dictionary of Gabor wavelets. When an element is selected at each step, the element is shared by all the training images, and the element is perturbed to encode or sketch a nearby edge segment in each training image. The recognition of the deformable template from an image can be accomplished by a computational architecture that alternates the sum maps and the max maps. The computation of the max maps deforms the active basis to match the image data, and the computation of the sum maps scores the template matching by the log-likelihood of the deformed active basis.
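The Gabor wavelet elements that make up the active basis can be sketched directly (a minimal illustration, not the paper's implementation; all parameter values are made up): a cosine carrier at a chosen orientation under a Gaussian envelope, normalized to unit norm.

```python
import numpy as np

def gabor(size, wavelength, theta, sigma):
    """One Gabor wavelet element: a cosine carrier at orientation theta under
    a Gaussian envelope, normalized to unit l2 norm."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the carrier
    g = (np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
         * np.cos(2 * np.pi * xr / wavelength))
    return g / np.linalg.norm(g)

# a small dictionary over several orientations, as in the shared sketch setting
dictionary = [gabor(8, 4.0, th, 2.0)
              for th in np.linspace(0, np.pi, 8, endpoint=False)]
```

In the paper's shared sketch algorithm, elements like these are selected sequentially from such a dictionary and locally perturbed in position and orientation to sketch edge segments in each training image.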

181 citations


Journal ArticleDOI
TL;DR: The joint application of the density matrix renormalization group and canonical transformation theory to multireference quantum chemistry provides the ability to describe static correlation in large active spaces, and provides a high-order description of the dynamic correlation effects.
Abstract: We describe the joint application of the density matrix renormalization group and canonical transformation theory to multireference quantum chemistry. The density matrix renormalization group provides the ability to describe static correlation in large active spaces, while the canonical transformation theory provides a high-order description of the dynamic correlation effects. We demonstrate the joint theory in two benchmark systems designed to test the dynamic and static correlation capabilities of the methods, namely, (i) total correlation energies in long polyenes and (ii) the isomerization curve of the [Cu2O2]^(2+) core. The largest complete active spaces and atomic orbital basis sets treated by the joint DMRG-CT theory in these systems correspond to a (24e,24o) active space and 268 atomic orbitals in the polyenes and a (28e,32o) active space and 278 atomic orbitals in [Cu2O2]^(2+).

170 citations


Journal Article
TL;DR: In this paper, a hierarchical partition of the parameter domain into smaller parameter subdomains is proposed, based on proximity to judiciously chosen parameter anchor points within each subdomain.
Abstract: We present a new “$hp$” parameter multidomain certified reduced basis (RB) method for rapid and reliable online evaluation of functional outputs associated with parametrized elliptic partial differential equations. We propose, and provide theoretical justification for, a new procedure for adaptive partition (“$h$”-refinement) of the parameter domain into smaller parameter subdomains: we pursue a hierarchical splitting of the parameter (sub)domains based on proximity to judiciously chosen parameter anchor points within each subdomain. Subsequently, we construct individual standard RB approximation spaces (“$p$”-refinement) over each subdomain. Greedy parameter sampling procedures and a posteriori error estimation play important roles in both the “$h$”-type and “$p$”-type stages of the new algorithm. We present illustrative numerical results for a convection-diffusion problem: the new “$hp$”-approach is considerably faster (respectively, more costly) than the standard “$p$”-type reduced basis method in the online (respectively, offline) stage.

141 citations


Journal ArticleDOI
TL;DR: A best basis extension of compressed sensing recovery is proposed that makes use of sparsity in a tree-structured dictionary of orthogonal bases and improves the recovery with respect to fixed sparsity priors.
Abstract: This paper proposes a best basis extension of compressed sensing recovery. Instead of regularizing the compressed sensing inverse problem with a sparsity prior in a fixed basis, our framework makes use of sparsity in a tree-structured dictionary of orthogonal bases. A new iterative thresholding algorithm performs both the recovery of the signal and the estimation of the best basis. The resulting reconstruction from compressive measurements optimizes the basis to the structure of the sensed signal. Adaptivity is crucial to capture the regularity of complex natural signals. Numerical experiments on sounds and geometrical images indeed show that this best basis search improves the recovery with respect to fixed sparsity priors.
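A crude sketch of the idea (not the authors' algorithm): iterative soft-thresholding where, at each step, the estimate is thresholded in whichever basis currently gives the smallest l1 norm. A dictionary of just two orthobases, the identity and an orthonormal DCT, stands in for the paper's tree-structured search, and all sizes and parameters are made up.

```python
import numpy as np

def dct_basis(n):
    # orthonormal DCT-II matrix; rows are the basis vectors
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    B = np.sqrt(2.0 / n) * np.cos(np.pi * (i + 0.5) * k / n)
    B[0] /= np.sqrt(2.0)
    return B

def soft(c, t):
    # soft-thresholding, the proximal operator of the l1 norm
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def best_basis_ista(y, Phi, bases, lam=0.05, iters=200):
    """Iterative thresholding that re-selects, at every step, the basis in
    which the current estimate is sparsest (smallest l1 norm)."""
    x = np.zeros(Phi.shape[1])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(iters):
        x = x + step * Phi.T @ (y - Phi @ x)        # gradient step on data fit
        B = min(bases, key=lambda Q: np.abs(Q @ x).sum())   # sparsest view
        x = B.T @ soft(B @ x, lam)                  # threshold in that basis
    return x

rng = np.random.default_rng(0)
m, n = 20, 32
Phi = rng.normal(size=(m, n)) / np.sqrt(m)          # random sensing matrix
B = dct_basis(n)
coeffs = np.zeros(n)
coeffs[[2, 5, 9]] = 1.0                             # 3-sparse in the DCT basis
x0 = B.T @ coeffs
y = Phi @ x0
x_hat = best_basis_ista(y, Phi, [np.eye(n), B], lam=0.02)
```

Because the synthetic signal is sparse in the DCT rather than in the sample domain, the l1 criterion steers the thresholding toward the DCT view, which is the adaptivity the abstract describes.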

140 citations


Journal ArticleDOI
TL;DR: The utility of the QMC method to provide FCI energies for realistic systems and basis sets is demonstrated, and the anomalous case of Na suggests that its basis set may be improvable.
Abstract: A new quantum Monte Carlo (QMC) method is used to calculate exact, full configuration-interaction (FCI) energies of the neutral and cationic elements from Li to Mg, in a family of commonly used basis sets. Annihilation processes between positive and negative walkers enable the exact N-electron wave function to emerge as a linear superposition of the (factorially large) space of Slater determinants, with individual determinants being stochastically sampled. As a result, extremely large spaces (exceeding 10^15 determinants) become accessible for FCI calculations. No fixed-node approximation is necessary, and the only remaining source of error is the one-electron basis set, which can be systematically reduced by enlargement of the basis set. We have investigated the family of aug-cc-pVXZ Dunning basis sets up to X=5. The resulting ionization potentials are, with one exception (Na), consistently accurate to within chemical accuracy. The anomalous case of Na suggests that its basis set may be improvable. Extrapol...

136 citations


Journal ArticleDOI
TL;DR: A coupling of the reduced basis methods and free-form deformations for shape optimization and design of systems modelled by elliptic PDEs is presented, which gives a parameterization of the shape that is independent of the mesh, the initial geometry, and the underlying PDE model.

Journal ArticleDOI
TL;DR: The relativistic double-zeta (dz) and triple-zeta (tz) basis sets for the 5d elements Hf-Hg have been revised for consistency with the recently optimized 4f basis sets, as discussed by the authors.
Abstract: The relativistic double-zeta (dz) and triple-zeta (tz) basis sets for the 5d elements Hf-Hg have been revised for consistency with the recently optimized 4f basis sets. The new dz basis sets have 24 s functions instead of 22 s functions, and the new tz basis sets have 30 s functions instead of 29 s functions. New contraction patterns have been determined, including the 6p orbital.

Journal ArticleDOI
TL;DR: Vertical electronic excitation energies and one-electron properties of 28 medium-sized molecules from a previously proposed benchmark set are revisited using the augmented correlation-consistent triple-zeta aug-cc-pVTZ basis set in CC2, CCSDR(3), and CC3 calculations.
Abstract: Vertical electronic excitation energies and one-electron properties of 28 medium-sized molecules from a previously proposed benchmark set are revisited using the augmented correlation-consistent triple-zeta aug-cc-pVTZ basis set in CC2, CCSDR(3), and CC3 calculations. The results are compared to those obtained previously with the smaller TZVP basis set. For each of the three coupled cluster methods, a correlation coefficient greater than 0.994 is found between the vertical excitation energies computed with the two basis sets. The deviations of the CC2 and CCSDR(3) results from the CC3 reference values are very similar for both basis sets, thus confirming previous conclusions on the intrinsic accuracy of CC2 and CCSDR(3). This similarity justifies the use of CC2- or CCSDR(3)-based corrections to account for basis set incompleteness in CC3 studies of vertical excitation energies. For oscillator strengths and excited-state dipole moments, CC2 calculations with the aug-cc-pVTZ and TZVP basis sets give correla...

Journal ArticleDOI
TL;DR: This paper treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ1-minimization, and shows that sufficiently incoherent bases are locally identifiable with high probability.
Abstract: This paper treats the problem of learning a dictionary providing sparse representations for a given signal class, via l1-minimization. The problem can also be seen as factorizing a d × N matrix Y = (y1 . . . yN), yn ∈ ℝd of training signals into a d × K dictionary matrix Φ and a K × N coefficient matrix X = (x1 . . . xN), xn ∈ ℝK, which is sparse. The exact question studied here is when a dictionary coefficient pair (Φ, X) can be recovered as local minimum of a (nonconvex) l1-criterion with input Y = Φ X. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialized to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows up to a logarithmic factor only linearly with the signal dimension, i.e., N ≈ CK log K, in contrast to previous approaches requiring combinatorially many samples.

Journal ArticleDOI
TL;DR: Comparisons with literature potentials indicate that the present ab initio interaction potential is the most accurate representation of the argon-argon interaction to date.
Abstract: A new ab initio interaction potential for the electronic ground state of argon dimer has been developed. The potential is a sum of contributions corresponding to various levels of the coupled-cluster theory up to the full coupled-cluster method with single, double, triple, and quadruple excitations. All contributions have been calculated in larger basis sets than used in the development of previous Ar2 potentials, including basis sets optimized by us up to the septuple(sextuple)-zeta level for the frozen-core (all-electron) energy. The diffuse augmentation functions have also been optimized. The effects of the frozen-core approximation and the relativistic effects have been computed at the CCSD(T) level. We show that some basis sets used in the literature to compute these corrections may give qualitatively wrong results. Our calculations also show that the effects of high excitations do not necessarily converge significantly faster (in absolute values) with basis set size than the effects of lower excitations, as often assumed in the literature. Extrapolations to the complete basis set limits have been used for most terms. Careful examination of the basis set convergence patterns enabled us to determine uncertainties of the ab initio potential. The interaction energy at the near-minimum interatomic distance of 3.75 Å amounts to −99.291±0.32 cm⁻¹. The ab initio energies were fitted to an analytic potential which predicts a minimum at 3.762 Å with a depth of 99.351 cm⁻¹. Comparisons with literature potentials indicate that the present one is the most accurate representation of the argon-argon interaction to date.

Journal ArticleDOI
TL;DR: The proposed methodology, which combines the Ensemble Kalman filter (EnKF) with a level set parameterization for history matching of facies distributions, is demonstrated to capture the main features of the reference facies distributions.

Proceedings ArticleDOI
18 Jul 2010
TL;DR: The contribution reviews the technique of EMD and related algorithms and discusses illustrative applications.
Abstract: Due to external stimuli, biomedical signals are in general non-linear and non-stationary. Empirical Mode Decomposition in conjunction with a Hilbert spectral transform, together called the Hilbert-Huang Transform, is ideally suited to extract essential components which are characteristic of the underlying biological or physiological processes. The method is fully adaptive and derives the basis used to represent the data solely from the data themselves. The basis functions, called Intrinsic Mode Functions (IMFs), represent a complete set of locally orthogonal basis functions whose amplitude and frequency may vary over time. The contribution reviews the technique of EMD and related algorithms and discusses illustrative applications.
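The sifting step at the heart of EMD can be sketched with numpy alone (a simplification, not a faithful implementation: practical EMD uses cubic-spline envelopes and a convergence criterion, whereas this toy uses linear envelopes and a fixed number of sifting passes):

```python
import numpy as np

def sift_once(x):
    """One simplified sifting pass: subtract the mean of the upper and lower
    envelopes, here built by linear interpolation through local extrema."""
    n = np.arange(len(x))
    maxima = [i for i in range(1, len(x) - 1) if x[i] >= x[i - 1] and x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i] <= x[i - 1] and x[i] < x[i + 1]]
    if len(maxima) < 2 or len(minima) < 2:
        return x, np.zeros_like(x)          # monotone residue: nothing to sift
    upper = np.interp(n, maxima, x[maxima])
    lower = np.interp(n, minima, x[minima])
    return x - (upper + lower) / 2, (upper + lower) / 2

def emd(x, n_imfs=2, sift_iters=10):
    """Extract n_imfs Intrinsic Mode Functions; the IMFs plus the residue
    reconstruct the signal exactly by construction."""
    imfs, residue = [], x.astype(float)
    for _ in range(n_imfs):
        h = residue
        for _ in range(sift_iters):
            h, _ = sift_once(h)
        imfs.append(h)
        residue = residue - h
    return imfs, residue

# made-up test signal: a fast oscillation riding on a slow trend
t = np.linspace(0.0, 1.0, 200)
x = np.sin(2 * np.pi * 20 * t) + t
imfs, residue = emd(x, n_imfs=2)
```

The first IMF should absorb most of the fast oscillation, leaving a smoother residue, which is the adaptive, data-derived basis the abstract describes.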

Journal ArticleDOI
TL;DR: In this article, the Bose-Hubbard model is used to illustrate exact diagonalization techniques in a pedagogical way and the Lanczos algorithm is applied to solve low lying eigenstates and eigenvalues.
Abstract: We take the Bose–Hubbard model to illustrate exact diagonalization techniques in a pedagogical way. We follow the route of first generating all the basis vectors, then setting up the Hamiltonian matrix with respect to this basis and finally using the Lanczos algorithm to solve low lying eigenstates and eigenvalues. Emphasis is placed on how to enumerate all the basis vectors and how to use the hashing trick to set up the Hamiltonian matrix or matrices corresponding to other quantities. Although our route is not necessarily the most efficient one in practice, the techniques and ideas introduced are quite general and may find use in many other problems.
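The route the abstract describes (enumerate the basis, hash it, build the Hamiltonian, diagonalize) can be sketched for a tiny open Bose-Hubbard chain; this is an illustrative toy, not the paper's code, and for small dimensions full diagonalization stands in for the Lanczos step.

```python
import numpy as np
from itertools import product

def basis_states(n_sites, n_bosons):
    # enumerate all occupation-number tuples with the given total boson number
    return [s for s in product(range(n_bosons + 1), repeat=n_sites)
            if sum(s) == n_bosons]

def bose_hubbard(n_sites, n_bosons, t=1.0, U=4.0):
    states = basis_states(n_sites, n_bosons)
    index = {s: a for a, s in enumerate(states)}     # the 'hashing trick'
    H = np.zeros((len(states), len(states)))
    for a, s in enumerate(states):
        # on-site interaction: (U/2) * sum_i n_i (n_i - 1)
        H[a, a] = 0.5 * U * sum(n * (n - 1) for n in s)
        # nearest-neighbour hopping on an open chain: -t (b_i^+ b_j + h.c.)
        for i in range(n_sites - 1):
            j = i + 1
            for src, dst in ((i, j), (j, i)):
                if s[src] > 0:
                    ns = list(s)
                    ns[src] -= 1
                    ns[dst] += 1
                    amp = np.sqrt(s[src] * (s[dst] + 1))  # bosonic matrix element
                    H[index[tuple(ns)], a] += -t * amp
    return H, states

H, states = bose_hubbard(2, 1, t=1.0, U=4.0)
ground_energy = np.linalg.eigvalsh(H)[0]   # Lanczos would replace this at scale
```

For two sites and one boson the Hamiltonian is the 2x2 hopping matrix with ground energy -t, a quick correctness check before scaling up to dimensions where sparse storage and Lanczos become necessary.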

Journal ArticleDOI
TL;DR: The sparsity regularization approach using l1-norm minimization leads to a better-posed inverse problem that alleviates the non-uniqueness of the history matching solutions and promotes solutions that are, according to the prior belief, sparse in the transform domain.
Abstract: In this paper, we present a new approach for estimating spatially-distributed reservoir properties from scattered nonlinear dynamic well measurements by promoting sparsity in an appropriate transform domain where the unknown properties are believed to have a sparse approximation. The method is inspired by recent advances in sparse signal reconstruction that are formalized under the celebrated compressed sensing paradigm. Here, we use a truncated low-frequency discrete cosine transform (DCT) to approximate the spatial parameters with a sparse set of coefficients that are identified and estimated using available observations while imposing sparsity on the solution. The intrinsic continuity in geological features lends itself to sparse representations using selected low-frequency DCT basis elements. By recasting the inversion in the DCT domain, the problem is transformed into identification of significant basis elements and estimation of the values of their corresponding coefficients. To find these significant DCT coefficients, a relatively large number of DCT basis vectors (without any preferred orientation) are initially included in the approximation. Available measurements are combined with a sparsity-promoting penalty on the DCT coefficients to identify coefficients with significant contribution and eliminate the insignificant ones. Specifically, minimization of a least-squares objective function augmented by an l1-norm of the DCT coefficients is used to implement this scheme. The sparsity regularization approach using l1-norm minimization leads to a better-posed inverse problem that alleviates the non-uniqueness of the history matching solutions and promotes solutions that are, according to the prior belief, sparse in the transform domain.
The approach is related to basis pursuit (BP) and least absolute shrinkage and selection operator (LASSO) methods, and it extends the application of compressed sensing to inverse modeling with nonlinear dynamic observations. While the method appears to be generally applicable for solving dynamic inverse problems involving spatially-distributed parameters with sparse representation in any linear complementary basis, in this paper its suitability is demonstrated using a low-frequency DCT basis and synthetic waterflooding experiments.

Journal ArticleDOI
Qi Wu
TL;DR: A hybrid-load-forecasting model based on g-SVM and embedded chaotic particle swarm optimization (ECPSO) is put forward and the results of application of load forecasting indicate that the hybrid model is effective and feasible.
Abstract: Load forecasting is an important subject for power distribution systems and has been studied from different points of view. This paper addresses the Gaussian noise component of load series, which the standard v-support vector regression machine with its ε-insensitive loss function cannot deal with effectively. The relation between Gaussian noise and the loss function is established. On this basis, a new v-support vector machine (v-SVM) with a Gaussian loss function, named g-SVM, is proposed. To seek the optimal unknown parameters of g-SVM, a chaotic particle swarm optimization is also proposed. A hybrid load-forecasting model based on g-SVM and embedded chaotic particle swarm optimization (ECPSO) is then put forward. The results of its application to load forecasting indicate that the hybrid model is effective and feasible.

Book ChapterDOI
12 Oct 2010
TL;DR: Reduced basis approximation and a posteriori error estimation for general linear parabolic equations and subsequently for a nonlinear parabolic equation, the incompressible Navier-- Stokes equations are presented.
Abstract: In this paper we consider reduced basis approximation and a posteriori error estimation for linear functional outputs of affinely parametrized linear and non-linear parabolic partial differential equations. The essential ingredients are: Galerkin projection onto a low-dimensional space associated with a smooth "parametric manifold" (dimension reduction); efficient and effective Greedy and POD-Greedy sampling methods for identification of optimal and numerically stable approximations (rapid convergence); rigorous and sharp a posteriori error bounds, and associated stability factors, for the linear-functional outputs of interest (certainty); and Offline-Online computational decomposition strategies (minimum marginal cost for high performance in the real-time/embedded, e.g., parameter estimation and control, and many-query, e.g., design optimization, uncertainty quantification, and multiscale, contexts). In this paper we first present reduced basis approximation and a posteriori error estimation for general linear parabolic equations and subsequently for a nonlinear parabolic equation, the incompressible Navier-Stokes equations. We then present results for the application of our (parabolic) reduced basis methods to Bayesian parameter estimation: detection and characterization of a delamination crack by transient thermal analysis.
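The Greedy sampling idea common to the reduced basis papers in this listing can be caricatured in a few lines (a sketch under strong simplifications, not any paper's algorithm): repeatedly add the snapshot whose projection error onto the current reduced space is largest. The true projection error stands in here for the a posteriori error estimator used in practice, and the "solver" is a made-up map whose outputs span a two-dimensional subspace.

```python
import numpy as np

def greedy_rb(solve, params, max_basis, tol=1e-8):
    """Greedy reduced-basis sampling sketch: at each step, orthonormalize and
    append the worst-approximated snapshot; stop when the manifold of
    solutions is captured to tolerance."""
    V = np.empty((len(solve(params[0])), 0))       # reduced basis, column-wise
    for _ in range(max_basis):
        errs = [np.linalg.norm(solve(mu) - V @ (V.T @ solve(mu)))
                for mu in params]                  # projection error per parameter
        u = solve(params[int(np.argmax(errs))])    # worst-approximated snapshot
        r = u - V @ (V.T @ u)                      # Gram-Schmidt against V
        if np.linalg.norm(r) < tol:
            break
        V = np.column_stack([V, r / np.linalg.norm(r)])
    return V

# toy 'solver' whose outputs lie in a two-dimensional subspace of R^5
solve = lambda mu: np.array([1.0, mu, 2.0 * mu, 3.0, mu + 1.0])
params = [0.0, 0.5, 1.0, 2.0]
V = greedy_rb(solve, params, max_basis=3)
```

Because the solution set here is exactly two-dimensional, the greedy loop terminates with two basis vectors, after which every parametric solution is reproduced by projection; real reduced basis methods add the Offline-Online split and rigorous error bounds on top of this skeleton.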

Journal ArticleDOI
TL;DR: The main features of the method are the following: rapid convergence on the entire representative set of parameters, rigorous a posteriori error estimators for the output, and a parameter-independent off-line phase together with a computationally very efficient on-line phase that enables the rapid solution of many-query problems arising in control, optimization, and design.
Abstract: We propose certified reduced basis methods for the efficient and reliable evaluation of a general output that is implicitly connected to a given parameterized input through the harmonic Maxwell's equations. The truth approximation and the development of the reduced basis through a greedy approach is based on a discontinuous Galerkin approximation of the linear partial differential equation. The formulation allows the use of different approximation spaces for solving the primal and the dual truth approximation problems to respect the characteristics of both problem types, leading to an overall reduction in the off-line computational effort. The main features of the method are the following: (i) rapid convergence on the entire representative set of parameters, (ii) rigorous a posteriori error estimators for the output, and (iii) a parameter independent off-line phase and a computationally very efficient on-line phase to enable the rapid solution of many-query problems arising in control, optimization, and design. The versatility and performance of this approach is shown through a numerical experiment, illustrating the modeling of material variations and problems with resonant behavior.

Journal ArticleDOI
TL;DR: This paper focuses on sparse coding of data vectors as sparse linear combinations of basis elements in the context of discrete-time reinforcement learning.
Abstract: Sparse coding, that is, modelling data vectors as sparse linear combinations of basis elements, is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses...

Journal ArticleDOI
TL;DR: The accuracy and simultaneous cost savings of the F12b approach are such that it should enable high quality property calculations to be performed on chemical systems that are too large for standard CCSD(T).
Abstract: Explicitly correlated CCSD(T)-F12a/b methods combined with basis sets specifically designed for this technique have been tested for their ability to reproduce standard CCSD(T) benchmark data covering 16 small molecules composed of hydrogen and carbon. The standard method calibration set was obtained with very large one-particle basis sets, including some aug-cc-pV7Z and aug-cc-pV8Z results. Whenever possible, the molecular properties (atomization energies, structures, and harmonic frequencies) were extrapolated to the complete basis set limit in order to facilitate a direct comparison of the standard and explicitly correlated approaches without ambiguities arising from the use of different basis sets. With basis sets of triple-ζ quality or better, the F12a variant was found to overshoot the presumed basis set limit, while the F12b method converged rapidly and uniformly. Extrapolation of F12b energies to the basis set limit was found to be very effective at reproducing the best standard method atomization energies. Even extrapolations based on the small cc-pVDZ-F12/cc-pVTZ-F12 combination proved capable of a mean absolute deviation of 0.20 kcal/mol. The accuracy and simultaneous cost savings of the F12b approach are such that it should enable high quality property calculations to be performed on chemical systems that are too large for standard CCSD(T).
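The basis-set extrapolations discussed here can be illustrated with the standard two-point inverse-cubic formula for correlation energies (an assumption for illustration; the paper's extrapolations follow its own protocol, and all energies below are synthetic):

```python
import numpy as np

def cbs_extrapolate(e_x, e_y, x, y):
    """Two-point inverse-cubic extrapolation of correlation energies computed
    with basis-set cardinal numbers x < y, assuming E(X) = E_CBS + A / X**3."""
    return (y ** 3 * e_y - x ** 3 * e_x) / (y ** 3 - x ** 3)

# synthetic energies generated exactly from the assumed model,
# with E_CBS = -1.0 hartree and A = 0.5
e_dz = -1.0 + 0.5 / 2 ** 3      # cardinal number X = 2 ('double-zeta')
e_tz = -1.0 + 0.5 / 3 ** 3      # cardinal number X = 3 ('triple-zeta')
e_cbs = cbs_extrapolate(e_dz, e_tz, 2, 3)
```

On model data the formula recovers the basis-set limit exactly; on real F12b energies it is the kind of small-basis extrapolation (e.g., cc-pVDZ-F12/cc-pVTZ-F12) whose accuracy the abstract quantifies.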

Journal ArticleDOI
TL;DR: In this paper, spectral pollution for self-adjoint operators with a gap in their essential spectrum occurring in Quantum Mechanics is investigated, with a focus on how the choice of Galerkin basis determines the polluted spectrum.
Abstract: This paper, devoted to the study of spectral pollution, contains both abstract results and applications to some self-adjoint operators with a gap in their essential spectrum occurring in Quantum Mechanics. First we consider Galerkin bases which respect the decomposition of the ambient Hilbert space into a direct sum $H=PH\oplus(1-P)H$, given by a fixed orthogonal projector $P$, and we localize the polluted spectrum exactly. This is followed by applications to periodic Schrödinger operators (pollution is absent in a Wannier-type basis), and to the Dirac operator (several natural decompositions are considered). In the second part, we add the constraint that within the Galerkin basis there is a certain relation between vectors in $PH$ and vectors in $(1-P)H$. Abstract results are proved and applied to several practical methods like the famous "kinetic balance" of relativistic Quantum Mechanics.

Journal ArticleDOI
Frank Jensen1
TL;DR: An atomic counterpoise method is proposed to calculate estimates of inter- and intramolecular basis set superposition errors and can be applied for both independent particle and electron correlation models.
Abstract: An atomic counterpoise method is proposed to calculate estimates of inter- and intramolecular basis set superposition errors. The method estimates the basis set superposition error as a sum of atomic contributions and can be applied for both independent particle and electron correlation models. It is shown that the atomic counterpoise method provides results very similar to the molecular counterpoise method for intermolecular basis set superposition errors at both the HF and MP2 levels of theory with a sequence of increasingly larger basis sets. The advantage of the atomic counterpoise method is that it can be applied with equal ease to estimate intramolecular basis set superposition errors, for which few other methods exist. The atomic counterpoise method is computationally quite efficient, typically requiring about twice the computer time needed for the uncorrected energy.
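For reference, the standard molecular (Boys-Bernardi) counterpoise scheme that the atomic variant is benchmarked against reduces to simple energy bookkeeping: each monomer is re-evaluated in the full dimer basis, with the partner's atoms as ghost centers. A minimal sketch (the hartree values below are illustrative placeholders, not results from the paper):

```python
def cp_interaction_energy(e_dimer, e_a_in_dimer_basis, e_b_in_dimer_basis):
    """Counterpoise-corrected interaction energy: each monomer energy is
    computed in the full dimer basis (partner atoms as ghost centers)."""
    return e_dimer - e_a_in_dimer_basis - e_b_in_dimer_basis

def bsse_estimate(e_a_own, e_a_in_dimer_basis, e_b_own, e_b_in_dimer_basis):
    """BSSE = artificial stabilization each monomer gains by borrowing the
    partner's basis functions (non-negative for variational methods)."""
    return (e_a_own - e_a_in_dimer_basis) + (e_b_own - e_b_in_dimer_basis)

# Hypothetical energies (hartree) for a small dimer, illustrative only:
e_ab = -152.4103
e_a_own, e_a_ghost = -76.2041, -76.2052   # monomer A: own basis vs dimer basis
e_b_own, e_b_ghost = -76.1987, -76.1995   # monomer B: own basis vs dimer basis

e_int_raw = e_ab - e_a_own - e_b_own
e_int_cp = cp_interaction_energy(e_ab, e_a_ghost, e_b_ghost)
bsse = bsse_estimate(e_a_own, e_a_ghost, e_b_own, e_b_ghost)
```

By construction the corrected interaction energy equals the raw one plus the BSSE estimate, so the correction always weakens the apparent binding; the atomic scheme of the paper replaces the two ghost-basis monomer calculations with a sum of per-atom contributions.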

Journal ArticleDOI
Qi Wu1
TL;DR: The results of applying the approach to car sale series forecasting show that the hybrid PSOWv-SVM model is effective and feasible. A comparison between the proposed method and others shows that, for the discussed example, it outperforms the hybrid PSOv-SVM model and other traditional methods.

Book ChapterDOI
05 Sep 2010
TL;DR: The optimal rank-(R1, R2, ..., Rn) tensor decomposition model proposed in this paper can automatically explore the low-dimensional structure of the tensor data, seeking the optimal dimension and basis for each mode and separating the irregular patterns.
Abstract: Confronted with high-dimensional tensor-like visual data, we derive a method for decomposing an observed tensor into a low-dimensional structure plus unbounded but sparse irregular patterns. The optimal rank-(R1, R2, ..., Rn) tensor decomposition model that we propose in this paper can automatically explore the low-dimensional structure of the tensor data, seeking the optimal dimension and basis for each mode and separating the irregular patterns. Consequently, our method accounts for the implicit multi-factor structure of tensor-like visual data in an explicit and concise manner. In addition, the optimal tensor decomposition is formulated as a convex optimization through a relaxation technique. We then develop a block coordinate descent (BCD) based algorithm to solve the problem efficiently. In experiments, we show several applications of our method in computer vision, and the results are promising.
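Two basic ingredients of such convex low-rank-plus-sparse tensor formulations are the mode-n unfolding (on which the per-mode rank penalties act) and the soft-thresholding operator that updates the sparse part inside a BCD iteration. A minimal sketch for a 3-way tensor stored as nested lists (the column ordering below is one common convention, not necessarily the paper's):

```python
def unfold(t, mode):
    """Mode-n unfolding of a 3-way tensor (nested lists).

    Rows index `mode`; columns run over the two remaining modes, with the
    lower-numbered remaining mode varying fastest.
    """
    dims = (len(t), len(t[0]), len(t[0][0]))
    others = [d for d in range(3) if d != mode]
    rows, cols = dims[mode], dims[others[0]] * dims[others[1]]
    M = [[0.0] * cols for _ in range(rows)]
    for i in range(dims[0]):
        for j in range(dims[1]):
            for k in range(dims[2]):
                idx = (i, j, k)
                c = idx[others[0]] + dims[others[0]] * idx[others[1]]
                M[idx[mode]][c] = t[i][j][k]
    return M

def soft_threshold(x, tau):
    """Shrinkage operator: proximal map of tau * |.|, used on the sparse part."""
    return max(x - tau, 0.0) - max(-x - tau, 0.0)

# Small worked example: t[i][j][k] = i + 10*j + 100*k, shape (2, 3, 4).
t = [[[float(i + 10 * j + 100 * k) for k in range(4)]
      for j in range(3)] for i in range(2)]
M0 = unfold(t, 0)   # 2 x 12 matrix
M1 = unfold(t, 1)   # 3 x 8 matrix
```

In a full BCD solver, each mode's basis update would act on the corresponding unfolding (e.g. via singular value thresholding of M0, M1, ...), while soft_threshold is applied entrywise to the residual to isolate the sparse irregular patterns.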

Journal ArticleDOI
TL;DR: In this article, the authors derived a local error representation for exponential operator splitting methods when applied to evolutionary problems that involve critical parameters, including Schrodinger equations and parabolic initial-boundary value problems with high spatial gradients.
Abstract: In this paper, we are concerned with the derivation of a local error representation for exponential operator splitting methods when applied to evolutionary problems that involve critical parameters. Employing an abstract formulation of differential equations on function spaces, our framework includes Schrodinger equations in the semi-classical regime as well as parabolic initial-boundary value problems with high spatial gradients. We illustrate the general mechanism on the basis of the first-order Lie splitting and the second-order Strang splitting method. Further, we specify the local error representation for a fourth-order splitting scheme by Yoshida. From the given error estimate it is concluded that higher-order exponential operator splitting methods are favourable for the time-integration of linear Schrodinger equations in the semi-classical regime with critical parameter 0
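The first- and second-order convergence of the Lie and Strang splittings is easy to check numerically on a small non-commuting linear system u' = (A + B)u (a generic 2x2 illustration of the splitting orders, not the semi-classical regime analysed in the paper):

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def scale(a, s):
    return [[s * x for x in row] for row in a]

def expm(a, terms=40):
    """Matrix exponential via truncated Taylor series (adequate for the
    small-norm 2x2 matrices used here)."""
    n = len(a)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in out]
    for k in range(1, terms):
        term = scale(matmul(term, a), 1.0 / k)
        out = [[out[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return out

A = [[0.0, 1.0], [-1.0, 0.0]]     # skew part (oscillation)
B = [[-0.5, 0.0], [0.0, -0.2]]    # diagonal damping; note [A, B] != 0

EXACT = expm([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])  # t = 1

def global_error(step_matrix, n):
    """Error of n steps of the one-step propagator against exp(A + B) at t=1."""
    prop = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        prop = matmul(step_matrix, prop)
    return max(abs(prop[i][j] - EXACT[i][j]) for i in range(2) for j in range(2))

def lie_step(h):       # first order: e^{hA} e^{hB}
    return matmul(expm(scale(A, h)), expm(scale(B, h)))

def strang_step(h):    # second order: e^{hA/2} e^{hB} e^{hA/2}
    half = expm(scale(A, h / 2))
    return matmul(half, matmul(expm(scale(B, h)), half))

# Halving h should halve the Lie error and quarter the Strang error.
lie_ratio = global_error(lie_step(0.1), 10) / global_error(lie_step(0.05), 20)
strang_ratio = global_error(strang_step(0.1), 10) / global_error(strang_step(0.05), 20)
```

The observed ratios sit near 2 and 4 respectively, matching the classical orders; the point of the paper is that in singularly perturbed regimes the error constants hidden in these estimates can blow up with the critical parameter, which is what the refined local error representation makes explicit.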

Journal ArticleDOI
TL;DR: This paper develops a new theory for directly measuring BRDFs in a basis representation by projecting incident light as a sequence of basis functions from a spherical zone of directions, and derives an orthonormal basis over spherical zones that is ideally suited for this task.
Abstract: Realistic descriptions of surface reflectance have long been a topic of interest in both computer vision and computer graphics research. In this paper, we describe a novel high speed approach for the acquisition of bidirectional reflectance distribution functions (BRDFs). We develop a new theory for directly measuring BRDFs in a basis representation by projecting incident light as a sequence of basis functions from a spherical zone of directions. We derive an orthonormal basis over spherical zones that is ideally suited for this task. BRDF values outside the zonal directions are extrapolated by re-projecting the zonal measurements into a spherical harmonics basis, or by fitting analytical reflection models to the data. For specular materials, we experiment with alternative basis acquisition approaches such as compressive sensing with a random subset of the higher-order orthonormal zonal basis functions, as well as measuring the response to a basis defined by an analytical model as a way of optically fitting the BRDF to such a representation. We verify this approach with a compact optical setup that requires no moving parts and only a small number of image measurements. Using this approach, a BRDF can be measured in just a few minutes.
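Re-projection into a spherical harmonics basis amounts to computing inner products of the measured response against the basis functions over the sphere. A minimal numerical sketch for real harmonics of degree at most 1, using a midpoint quadrature rule (our generic illustration, not the authors' zonal basis or optical pipeline):

```python
import math

def y00(theta, phi):
    return 0.5 / math.sqrt(math.pi)

def y10(theta, phi):
    return math.sqrt(3.0 / (4.0 * math.pi)) * math.cos(theta)

def y11(theta, phi):
    # real spherical harmonic of degree 1, order 1
    return math.sqrt(3.0 / (4.0 * math.pi)) * math.sin(theta) * math.cos(phi)

def project(f, basis_fn, n=200):
    """Coefficient <f, Y> = integral of f*Y over the sphere, midpoint rule
    on an n x n (theta, phi) grid with the sin(theta) area weight."""
    dth, dph = math.pi / n, 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * dth
        for j in range(n):
            ph = (j + 0.5) * dph
            total += f(th, ph) * basis_fn(th, ph) * math.sin(th) * dth * dph
    return total

f = lambda th, ph: math.cos(th)   # a simple zonally symmetric "response"
c00 = project(f, y00)
c10 = project(f, y10)
c11 = project(f, y11)
# cos(theta) is exactly sqrt(4*pi/3) * Y_10, so c10 should recover that value
# while the other coefficients vanish by orthogonality.
```

A measured BRDF slice would replace the toy function f, and the recovered coefficient vector then serves as the compact basis representation from which values outside the measured zone are extrapolated.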