
Showing papers on "Basis (linear algebra)" published in 2003


Journal ArticleDOI
TL;DR: This work proposes here that time, space and quantity are part of a generalized magnitude system and outlines A Theory Of Magnitude (ATOM) as a conceptually new framework within which to re-interpret the cortical processing of these elements of the environment.

1,651 citations


Journal ArticleDOI
TL;DR: In this paper, a one-parameter hybrid density functional method optimized for sugars and sugar-like molecules, called MPW1S, was presented, based on the modified Perdew−Wang density functional.
Abstract: The addition of diffuse functions to a double-ζ basis set is shown to be more important than increasing to a triple-ζ basis when calculating reaction energies, reaction barrier heights, and conformational energies with density functional theory, in particular with the modified Perdew−Wang density functional. It is shown that diffuse basis functions are vital to describe the relative energies between reactants, products, and transition states in isogyric reactions, and they provide enormous improvement in accuracy for conformational equilibria, using 1,2-ethanediol and butadiene as examples. As a byproduct of the present study, we present a one-parameter hybrid density functional method optimized for sugars and sugar-like molecules; this is called MPW1S.

671 citations


Book
01 Jan 2003
TL;DR: This book develops multiscale methods from basic examples such as the Haar system and the Schauder hierarchical basis through multiresolution approximation, multilevel preconditioning, and nonlinear (adaptive) approximation in Besov spaces.
Abstract: Introduction. Notations. 1. Basic examples. 1.1 Introduction. 1.2 The Haar system. 1.3 The Schauder hierarchical basis. 1.4 Multivariate constructions. 1.5 Adaptive approximation. 1.6 Multilevel preconditioning. 1.7 Conclusions. 1.8 Historical notes. 2. Multiresolution approximation. 2.1 Introduction. 2.2 Multiresolution analysis. 2.3 Refinable functions. 2.4 Subdivision schemes. 2.5 Computing with refinable functions. 2.6 Wavelets and multiscale algorithms. 2.7 Smoothness analysis. 2.8 Polynomial exactness. 2.9 Duality, orthonormality and interpolation. 2.10 Interpolatory and orthonormal wavelets. 2.11 Wavelets and splines. 2.12 Bounded domains and boundary conditions. 2.13 Point values, cell averages, finite elements. 2.14 Conclusions. 2.15 Historical notes. 3. Approximation and smoothness. 3.1 Introduction. 3.2 Function spaces. 3.3 Direct estimates. 3.4 Inverse estimates. 3.5 Interpolation and approximation spaces. 3.6 Characterization of smoothness classes. 3.7 Lp-unstable approximation and 0 < p < 1. 3.8 Negative smoothness and Lp-spaces. 3.9 Bounded domains. 3.10 Boundary conditions. 3.11 Multilevel preconditioning. 3.12 Conclusions. 3.13 Historical notes. 4. Adaptivity. 4.1 Introduction. 4.2 Nonlinear approximation in Besov spaces. 4.3 Nonlinear wavelet approximation in Lp. 4.4 Adaptive finite element approximation. 4.5 Other types of nonlinear approximations. 4.6 Adaptive approximation of operators. 4.7 Nonlinear approximation and PDE's. 4.8 Adaptive multiscale processing. 4.9 Adaptive space refinement. 4.10 Conclusions. 4.11 Historical notes. References. Index.

547 citations


Journal ArticleDOI
TL;DR: Signal space separation (SSS) provides an elegant method to remove external disturbances and can be used in transforming the interesting signals to virtual sensor configurations, and enables physiological DC phenomena to be recorded using voluntary head movements.
Abstract: Multichannel measurement with hundreds of channels oversamples a curl-free vector field, like the magnetic field in a volume free of sources. This is based on the constraint imposed by Laplace's equation for the magnetic scalar potential; outside of the source volume the signals are spatially band limited. A functional solution of Laplace's equation enables one to separate the signals arising from the sphere enclosing the interesting sources, e.g. the currents in the brain, from the magnetic interference. Signal space separation (SSS) is accomplished by calculating individual basis vectors for each term of the functional expansion to create a signal basis covering all measurable signal vectors. Because the SSS basis is linearly independent for all practical sensor arrangements, any signal vector has a unique SSS decomposition with separate coefficients for the interesting signals and signals coming from outside the interesting volume. Thus, the SSS basis provides an elegant method to remove external disturbances. The device-independent SSS coefficients can be used in transforming the interesting signals to virtual sensor configurations. This can also be used in compensating for distortions caused by movement of the object by modeling it as movement of the sensor array around a static object. The device-independence of the decomposition also enables physiological DC phenomena to be recorded using voluntary head movements. When used with a properly designed sensor array, SSS does not affect the morphology or the signal-to-noise ratio of the interesting signals.

508 citations


Journal ArticleDOI
TL;DR: An efficient method for optimizing single-determinant wave functions of medium and large systems is presented, based on a minimization of the energy functional using a new set of variables to perform orbital transformations.
Abstract: An efficient method for optimizing single-determinant wave functions of medium and large systems is presented. It is based on a minimization of the energy functional using a new set of variables to perform orbital transformations. With this method convergence of the wave function is guaranteed. Preconditioners with different computational cost and efficiency have been constructed. Depending on the preconditioner, the method needs a number of iterations that is very similar to the established diagonalization–DIIS approach, in cases where the latter converges well. Diagonalization of the Kohn–Sham matrix can be avoided and the sparsity of the overlap and Kohn–Sham matrix can be exploited. If sparsity is taken into account, the method scales as O(MN2), where M is the total number of basis functions and N is the number of occupied orbitals. The relative performance of the method is optimal for large systems that are described with high quality basis sets, and for which the density matrices are not yet sparse....

462 citations


Journal ArticleDOI
TL;DR: It is proved that bound entangled states cannot help increase the distillable entanglement of a state beyond its regularized entanglement of formation assisted by bound entanglement.
Abstract: We report new results and generalizations of our work on unextendible product bases (UPB), uncompletable product bases and bound entanglement. We present a new construction for bound entangled states based on product bases which are only completable in a locally extended Hilbert space. We introduce a very useful representation of a product basis, an orthogonality graph. Using this representation we give a complete characterization of unextendible product bases for two qutrits. We present several generalizations of UPBs to arbitrary high dimensions and multipartite systems. We present a sufficient condition for sets of orthogonal product states to be distinguishable by separable superoperators. We prove that bound entangled states cannot help increase the distillable entanglement of a state beyond its regularized entanglement of formation assisted by bound entanglement.

348 citations


Journal ArticleDOI
TL;DR: This work focuses on mixtures of factor analyzers from the perspective of a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data.

284 citations


Journal ArticleDOI
TL;DR: In this article, anisotropic Hardy spaces associated with general discrete groups of dilations were introduced, Calderon-Zygmund singular integral operators were identified as the natural class of operators acting on them, and regular multiwavelets were shown to form an unconditional basis for the anisotropic Hardy space H^p_A.
Abstract: In this paper, motivated in part by the role of discrete groups of dilations in wavelet theory, we introduce and investigate the anisotropic Hardy spaces associated with very general discrete groups of dilations. This formulation includes the classical isotropic Hardy space theory of Fefferman and Stein and the parabolic Hardy space theory of Calderon and Torchinsky. Given a dilation A, that is, an n × n matrix all of whose eigenvalues λ satisfy |λ| > 1, define the radial maximal function M_φ f(x) := sup_{k∈Z} |(f ∗ φ_k)(x)|, where φ_k(x) = |det A|^k φ(A^k x). Here φ is any test function in the Schwartz class with ∫φ ≠ 0. For 0 < p < ∞ we introduce the corresponding anisotropic Hardy space H^p_A as a space of tempered distributions f such that M_φ f belongs to L^p(R^n). Anisotropic Hardy spaces enjoy the basic properties of the classical Hardy spaces. For example, it turns out that this definition does not depend on the choice of the test function φ as long as ∫φ ≠ 0. These spaces can be equivalently introduced in terms of grand, tangential, or nontangential maximal functions. We prove the Calderon-Zygmund decomposition, which enables us to show the atomic decomposition of H^p_A. As a consequence of the atomic decomposition we obtain the description of the dual of H^p_A in terms of Campanato spaces. We provide a description of the natural class of operators acting on H^p_A, i.e., Calderon-Zygmund singular integral operators. We also give a full classification of dilations generating the same space H^p_A in terms of spectral properties of A. In the second part of this paper we show that for every dilation A preserving some lattice and satisfying a particular expansiveness property there is a multiwavelet in the Schwartz class. We also show that for a large class of dilations (lacking this property) all multiwavelets must be combined minimally supported in frequency, and thus far from being regular.
We show that r-regular (tight frame) multiwavelets form an unconditional basis (tight frame) for the anisotropic Hardy space H^p_A. We also describe the sequence space characterizing wavelet coefficients of elements of the anisotropic Hardy space. 2000 Mathematics Subject Classification. Primary 42B30, 42C40; Secondary 42B20, 42B25.

213 citations


Posted Content
TL;DR: In this article, an ordered pair of linear transformations on a vector space of finite positive dimension over a field, called a Leonard pair, is considered.
Abstract: Let $K$ denote a field and let $V$ denote a vector space over $K$ with finite positive dimension. We consider an ordered pair of linear transformations $A:V\to V$ and $A^*:V\to V$ that satisfy conditions (i), (ii) below. (i) There exists a basis for $V$ with respect to which the matrix representing $A$ is irreducible tridiagonal and the matrix representing $A^*$ is diagonal. (ii) There exists a basis for $V$ with respect to which the matrix representing $A$ is diagonal and the matrix representing $A^*$ is irreducible tridiagonal. We call such a pair a Leonard pair on $V$. We give an overview of the theory of Leonard pairs.

212 citations


Journal ArticleDOI
TL;DR: A contracted basis-iterative method for calculating numerically exact vibrational energy levels of methane (a 9D calculation) using products of eigenfunctions of bend and stretch Hamiltonians obtained by freezing coordinates at equilibrium.
Abstract: We present a contracted basis-iterative method for calculating numerically exact vibrational energy levels of methane (a 9D calculation). The basis functions we use are products of eigenfunctions of bend and stretch Hamiltonians obtained by freezing coordinates at equilibrium. The basis functions represent the desired wavefunctions well, yet are simple enough that matrix-vector products may be evaluated efficiently. We use Radau polyspherical coordinates. The bend functions are computed in a nondirect product finite basis representation [J. Chem. Phys. 118, 6956 (2003)] and the stretch functions are computed in a product potential optimized discrete variable (PODVR) basis. The memory required to store the bend basis is reduced by a factor of ten by storing it on a compacted grid. The stretch basis is optimized by discarding PODVR functions with high potential energies. The size of the primitive basis is 33 billion. The size of the product contracted basis is six orders of magnitude smaller. Parity symmetr...

179 citations



Patent
30 Jan 2003
TL;DR: In this article, a two-dimensional yield map for a device, such as an integrated circuit, in a fabrication facility is computed and associated with layout data for the device in a hierarchical and/or instance-based layout file.
Abstract: A two-dimensional yield map for a device, such as an integrated circuit, in a fabrication facility is computed and associated with layout data for the device in a hierarchical and/or instance-based layout file. The device has a layout including a pattern characterizable by a combination of members of a set of basis shapes. A set of basis pre-images include yield map data representing an interaction of respective members of the set of basis shapes with a defect model. A yield map for the pattern is created by combining basis pre-images corresponding to basis shapes in the combination of members that characterize the pattern to provide a combination result. The output may be displayed as a two dimensional map to an engineer performing yield analysis, or otherwise processed.

Journal ArticleDOI
TL;DR: In this paper, it was shown that if a function f is in BV, its coefficient sequence in a normalized wavelet basis satisfies a class of weak-ℓ1 type estimates.
Abstract: We establish new results on the space BV of functions with bounded variation. While it is well known that this space admits no unconditional basis, we show that it is "almost" characterized by wavelet expansions in the following sense: if a function f is in BV, its coefficient sequence in a BV-normalized wavelet basis satisfies a class of weak-ℓ1 type estimates. These weak estimates can be employed to prove many interesting results. We use them to identify the interpolation spaces between BV and Sobolev or Besov spaces, and to derive new Gagliardo-Nirenberg-type inequalities.
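For orientation, the weak-ℓ1 estimate referred to above can be written roughly as follows, where (c_λ) denotes the coefficient sequence of f in a BV-normalized wavelet basis and C is a constant (this notation is supplied here for illustration, not taken from the listing):

```latex
\#\{\lambda : |c_\lambda| > \varepsilon\} \;\le\; \frac{C\,\|f\|_{BV}}{\varepsilon}
\qquad \text{for all } \varepsilon > 0 .
```

That is, the coefficients belong to weak-ℓ1 with quasi-norm controlled by the BV norm, which is only slightly weaker than an ℓ1 bound.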

Journal ArticleDOI
TL;DR: It is shown that the electronic wavefunction does not need to be fully optimized in the earlier stages of geometry optimization when using the partitioned rational function optimizer (P-RFO) and L-BFGS.

Journal ArticleDOI
TL;DR: In this article, the dependency of the semi-empirical fits to a given basis set for a generalized gradient approximation and a hybrid functional is investigated, and the resulting functionals are then tested for other basis sets, evaluating their errors and transferability.
Abstract: When developing and assessing density functional theory methods, a finite basis set is usually employed. In most cases, however, the issue of basis set dependency is neglected. Here, we assess several basis sets and functionals. In addition, the dependency of the semiempirical fits to a given basis set for a generalized gradient approximation and a hybrid functional is investigated. The resulting functionals are then tested for other basis sets, evaluating their errors and transferability.

Journal ArticleDOI
TL;DR: In this article, the authors derived a new cone beam transform inversion formula, which is explicitly based on Grangeat's formula (1990) and the classical 3D Radon Transform inversion.
Abstract: Given a rather general weight function n0, we derive a new cone beam transform inversion formula. The derivation is explicitly based on Grangeat’s formula (1990) and the classical 3D Radon transform inversion. The new formula is theoretically exact and is represented by a 2D integral. We show that if the source trajectory C is complete in the sense of Tuy (1983) (and satisfies two other very mild assumptions), then substituting the simplest weight n0 ≡ 1 gives a convolution-based FBP algorithm. However, this easy choice is not always optimal from the point of view of practical applications. The weight n0 ≡ 1 works well for closed trajectories, but the resulting algorithm does not solve the long object problem if C is not closed. In the latter case one has to use the flexibility in choosing n0 and find the weight that gives an inversion formula with the desired properties. We show how this can be done for spiral CT. It turns out that the two inversion algorithms for spiral CT proposed earlier by the author are particular cases of the new formula. For general trajectories the choice of weight should be done on a case-by-case basis.

Proceedings ArticleDOI
18 Jun 2003
TL;DR: It is shown that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace from just one image taken under arbitrary illumination conditions.
Abstract: We propose a new approach for face recognition under arbitrary illumination conditions, which requires only one training image per subject (if there is no pose variation) and no 3D shape information. Our method is based on the result of Basri and Jacobs (2001), which demonstrated that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace. In this paper, we show that we can recover basis images spanning this space from just one image taken under arbitrary illumination conditions. First, using a bootstrap set consisting of 3D face models, we compute a statistical model for each basis image. During training, given a novel face image under arbitrary illumination, we recover a set of images for this face. We prove that these images are the set of basis images with maximum probability. During testing, we recognize the face for which there exists a weighted combination of basis images that is the closest to the test face image. We provide a series of experiments that achieve high recognition rates, under a wide range of illumination conditions, including multiple sources of illumination. Our method achieves comparable levels of accuracy with methods that have much more onerous training data requirements.

Journal ArticleDOI
TL;DR: In this paper, a modification was proposed to the 6-31G* basis set, which has recently been extended to cover first-row transition metals [Rassolov et al., J. Chem. Phys. 109, 1223 (1998)]: the d-shell exponents and coefficients are reoptimized by a two-step procedure, keeping the rest of the basis unchanged.
Abstract: We propose a modification to the popular 6-31G* basis set, which has recently been extended to cover first-row transition metals [Rassolov et al., J. Chem. Phys. 109, 1223 (1998)]. As demonstrated by a number of calculations, the existing basis performs poorly for many transition metals, particularly those toward the end of the series (Co, Ni, and especially Cu). The reason for this lies primarily with the 3d shell, which lacks a sufficiently diffuse exponent. A reoptimization of the d-shell exponents and coefficients by a two-step procedure, keeping the rest of the basis unchanged, corrects the problem, giving a basis set that performs uniformly well across the entire first-row transition metal series from scandium to copper.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the Feynman integrals needed to compute two-loop self-energy functions for general masses and external momenta, and derived the derivatives of these basis functions with respect to all squared-mass arguments, the renormalization scale, and the external momentum invariant.
Abstract: I study the Feynman integrals needed to compute two-loop self-energy functions for general masses and external momenta. A convenient basis for these functions consists of the four integrals obtained at the end of Tarasov's recurrence relation algorithm. The basis functions are modified here to include one-loop and two-loop counterterms to render them finite; this simplifies the presentation of results in practical applications. I find the derivatives of these basis functions with respect to all squared-mass arguments, the renormalization scale, and the external momentum invariant, and express the results algebraically in terms of the basis. This allows all necessary two-loop self-energy integrals to be efficiently computed numerically using the differential equation in the external momentum invariant. I also use the differential equations method to derive analytic forms for various special cases, including a four-propagator integral with three distinct nonzero masses.

Book ChapterDOI
TL;DR: This paper presents various forms of set approximations via the unifying concept of modal-style operators, indicating the usefulness of this approach in qualitative data analysis.
Abstract: A large part of qualitative data analysis is concerned with approximations of sets on the basis of relational information. In this paper, we present various forms of set approximations via the unifying concept of modal–style operators. Two examples indicate the usefulness of the approach.

Journal ArticleDOI
TL;DR: This paper writes the domain or manifold on which the operator equation is posed as an overlapping union of subdomains, each of them being the image under a smooth parametrization of the hypercube, and proves that this adaptive method has optimal computational complexity.
Abstract: In "Adaptive wavelet methods II---Beyond the elliptic case" of Cohen, Dahmen, and DeVore [Found. Comput. Math., 2 (2002), pp. 203--245], an adaptive method has been developed for solving general operator equations. Using a Riesz basis of wavelet type for the energy space, the operator equation is transformed into an equivalent matrix-vector system. This system is solved iteratively, where the application of the infinite stiffness matrix is replaced by an adaptive approximation. Assuming that the stiffness matrix is sufficiently compressible, i.e., that it can be sufficiently well approximated by sparse matrices, it was proved that the adaptive method has optimal computational complexity in the sense that it converges with the same rate as the best N-term approximation for the solution, assuming that the latter would be explicitly available. The condition concerning compressibility requires that, dependent on their order, the wavelets have sufficiently many vanishing moments, and that they be sufficiently smooth. However, except on tensor product domains, wavelets that satisfy this smoothness requirement are not easy to construct. In this paper we write the domain or manifold on which the operator equation is posed as an overlapping union of subdomains, each of them being the image under a smooth parametrization of the hypercube. By lifting wavelets on the hypercube to the subdomains, we obtain a frame for the energy space. With this frame the operator equation is transformed into a matrix-vector system, after which this system is solved iteratively by an adaptive method similar to the one from the work of Cohen, Dahmen, and DeVore. With this approach, frame elements that have sufficiently many vanishing moments and are sufficiently smooth, something needed for the compressibility, are easily constructed.
By handling additional difficulties due to the fact that a frame gives rise to an underdetermined matrix-vector system, we prove that this adaptive method has optimal computational complexity.

Journal ArticleDOI
TL;DR: A new basis is constructed for the space Γn = span{1, t, t^2, ..., t^(n−2), sin t, cos t} by an integral approach, and it is shown that this basis and the corresponding curves share the same properties as the Bernstein basis and Bézier curves in polynomial spaces, respectively.

Journal ArticleDOI
TL;DR: In this article, the authors consider several greedy conditions for bases in Banach spaces that arise naturally in the study of the thresholding greedy algorithm and show that almost greedy bases are essentially optimal for n-term approximation when the TGA is modified to include a Chebyshev approximation.
Abstract: We consider several greedy conditions for bases in Banach spaces that arise naturally in the study of the Thresholding Greedy Algorithm (TGA). In particular, we continue the study of almost greedy bases begun in [3]. We show that almost greedy bases are essentially optimal for n-term approximation when the TGA is modified to include a Chebyshev approximation. We prove that if a Banach space X has a basis and contains a complemented subspace with a symmetric basis and finite cotype then X has an almost greedy basis. We show that c0 is the only L∞ space to have a quasi-greedy basis. The Banach spaces which contain almost greedy basic sequences are characterized.
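For readers unfamiliar with the Thresholding Greedy Algorithm, its basic step can be sketched as follows (a schematic illustration only; the function name and the list representation of the coefficient sequence are assumptions, not from the paper). Given the coefficients of f in a basis, the m-term greedy approximant keeps the m coefficients of largest absolute value:

```python
def greedy_approximation(coeffs, m):
    """m-term thresholding greedy approximation of f = sum_i c_i e_i.

    Keeps the m coefficients of largest absolute value and discards the
    rest; returns the surviving (index, coefficient) pairs. For a greedy
    basis this realizes, up to a constant, the best m-term approximation.
    """
    # Rank indices by decreasing magnitude of the coefficient.
    order = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)
    # Keep the m largest, reported in index order.
    kept = sorted(order[:m])
    return [(i, coeffs[i]) for i in kept]
```

The paper's Chebyshev modification would replace the kept coefficients by the best approximation from the span of the selected basis elements, rather than reusing the raw coefficients.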

Journal ArticleDOI
01 Jun 2003
TL;DR: An analysis of the learning capabilities and a comparison of the net performances with other approaches have been performed and it is shown that the resulting network improves the approximation results.
Abstract: In this paper a neural network for solving partial differential equations (PDE) is described. The activation functions of the hidden nodes are radial basis functions (RBF) whose parameters are learnt by a two-stage gradient descent strategy. A new growing radial basis function node insertion strategy with different radial basis functions is used in order to improve the net performance. The learning strategy is able to save computational time and memory space because of the selective growing of nodes whose activation functions consist of different radial basis functions. An analysis of the learning capabilities and a comparison of the net performance with other approaches have been performed. It is shown that the resulting network improves the approximation results.
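A minimal sketch of the RBF-network idea follows, simplified to a one-stage gradient-descent fit of the output weights only, with fixed Gaussian centers and widths (all names, parameters, and the 1D target are illustrative assumptions; the paper's actual two-stage strategy also adapts the RBF parameters and grows nodes):

```python
import math

def rbf(x, c, width):
    # Gaussian radial basis function centered at c.
    return math.exp(-((x - c) / width) ** 2)

def fit_rbf_weights(xs, ys, centers, width=0.5, lr=0.1, epochs=3000):
    """Fit the output weights of a Gaussian RBF network by gradient descent.

    The network is f(x) = sum_j w_j * exp(-((x - c_j)/width)^2) with fixed
    centers c_j; only the linear weights w_j are learnt here.
    """
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = [rbf(x, c, width) for c in centers]
            err = sum(wj * pj for wj, pj in zip(w, phi)) - y
            # LMS update on each output weight.
            w = [wj - lr * err * pj for wj, pj in zip(w, phi)]
    return w

def predict(x, w, centers, width=0.5):
    return sum(wj * rbf(x, c, width) for wj, c in zip(w, centers))
```

In the PDE setting the loss would instead penalize the residual of the differential operator at collocation points, but the weight update has the same structure.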

Journal ArticleDOI
TL;DR: In this article, the semi-smooth Newton method is used for a class of variational inequalities in infinite dimensions and it is shown that they are equivalent to certain active set strategies.
Abstract: Semi-smooth Newton methods are analyzed for a class of variational inequalities in infinite dimensions. It is shown that they are equivalent to certain active set strategies. Global and local superlinear convergence are proved. To overcome the phenomenon of finite speed of propagation of discretized problems, a penalty version is used as the basis for a continuation procedure to speed up convergence. The choice of the penalty parameter can be made on the basis of an L∞ estimate for the penalized solutions. Unilateral as well as bilateral problems are considered.

Journal ArticleDOI
TL;DR: It is shown that primitive data swaps or moves are the only moves that have to be included in a Markov basis that links all the contingency tables having a set of fixed marginals when this set of marginals induces a decomposable independence graph.
Abstract: We show that primitive data swaps or moves are the only moves that have to be included in a Markov basis that links all the contingency tables having a set of fixed marginals when this set of marginals induces a decomposable independence graph. We give formulae that fully identify such Markov bases and show how to use these formulae to dynamically generate random moves.
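To make the primitive data-swap move concrete, here is a minimal sketch (the function name and the list-of-lists table representation are illustrative assumptions, not from the paper): each move adds the pattern +1/−1 to a randomly chosen 2 × 2 subtable, which leaves all row and column totals unchanged, so a chain of such moves walks through the contingency tables with fixed marginals.

```python
import random

def primitive_swap(table, rng=random):
    """Apply one primitive data-swap move to a contingency table in place.

    Picks two rows i, j and two columns k, l and adds the pattern
    [[+s, -s], [-s, +s]] (s = +1 or -1) to the corresponding 2x2
    subtable, preserving all row and column margins. The move is
    rejected (table left unchanged) if it would create a negative cell.
    """
    n_rows, n_cols = len(table), len(table[0])
    i, j = rng.sample(range(n_rows), 2)
    k, l = rng.sample(range(n_cols), 2)
    s = rng.choice([1, -1])  # random orientation of the swap
    if (table[i][k] + s < 0 or table[j][l] + s < 0
            or table[i][l] - s < 0 or table[j][k] - s < 0):
        return table  # would leave the set of nonnegative tables
    table[i][k] += s
    table[j][l] += s
    table[i][l] -= s
    table[j][k] -= s
    return table
```

The paper's result is that for a decomposable independence graph, moves of exactly this primitive form (applied within the appropriate marginal slices) already connect all tables with the given fixed marginals.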

Journal Article
TL;DR: In this article, the authors presented an algorithm that, given a lattice basis b1, ..., bn, finds in O(n²(·)^(k/4)) average time a vector shorter than b1, provided that b1 is at most (·)^(n/(2k)) times longer than the length of the shortest nonzero lattice vector.
Abstract: We present a novel practical algorithm that, given a lattice basis b1, ..., bn, finds in O(n²(·)^(k/4)) average time a vector shorter than b1, provided that b1 is (·)^(n/(2k)) times longer than the length of the shortest nonzero lattice vector. We assume that the given basis b1, ..., bn has an orthogonal basis that is typical for worst-case lattice bases. The new reduction method samples short lattice vectors in high-dimensional sublattices, and it advances in sporadic big jumps. It decreases the approximation factor achievable in a given time by known methods to less than its fourth root. We further speed up the new method by the simple and the general birthday method.

Journal ArticleDOI
TL;DR: In this article, a set of algorithms for fast and reliable numerical integration of one-loop multi-leg (up to six) Feynman diagrams with special attention to the behavior around singular points in phase space is presented.

Journal ArticleDOI
TL;DR: In this article, a simple extrapolation formula of (X+γ)^−3, which fits correlation energies with correlation consistent basis sets to estimate the basis set limit, was devised by varying the parameter γ according to basis set quality and correlation level; it is suitable for calculations at the second-order Møller–Plesset perturbation theory and the single and double excitation coupled cluster theory with perturbative triples correction level.
Abstract: A simple extrapolation formula of (X+γ)^−3, which fits correlation energies with correlation consistent (aug-)cc-pVXZ and (aug-)cc-pV(X+1)Z [X = D(2), T(3), Q(4)] basis sets to estimate the basis set limit, was devised by varying the parameter γ according to basis set quality and correlation level. The explicit extrapolation formulas suitable for calculations at the second-order Møller–Plesset perturbation theory and the single and double excitation coupled cluster theory with perturbative triples correction level are presented, and applications are made to estimate the basis set limit binding energies of various hydrogen-bonded and van der Waals clusters. A comparison of the results by this formula with the reference basis set limit results and the results by other extrapolation methods reveals that the extrapolation formulas proposed here can yield reliable basis set limit estimates even with small basis sets and could be used effectively for investigating large weakly bound complexes.
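The two-point extrapolation described above can be sketched as follows, a minimal illustration under the stated model E_corr(X) ≈ E_CBS + A·(X+γ)^−3 (the function name and signature are assumptions, and actual γ values must be taken from the paper's fits for the given basis family and correlation level):

```python
def cbs_limit(e_x, e_x1, x, gamma=0.0):
    """Two-point (X + gamma)^-3 extrapolation of the correlation energy.

    Assumes E_corr(X) ~ E_CBS + A * (X + gamma)**-3 and solves for E_CBS
    from correlation energies e_x and e_x1 obtained with cc-pVXZ and
    cc-pV(X+1)Z basis sets (cardinal numbers X and X+1).
    """
    a = (x + gamma) ** 3
    b = (x + 1 + gamma) ** 3
    # Eliminate the unknown prefactor A between the two equations.
    return (e_x1 * b - e_x * a) / (b - a)
```

For example, with γ = 0 this reduces to the familiar Helgaker-style X^−3 two-point formula; the paper's contribution is tuning γ per basis set quality and correlation method.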