
Showing papers on "Basis (linear algebra) published in 1993"


Journal ArticleDOI
TL;DR: In this paper, a set of seven-component f-type polarization functions has been optimized for use with the pseudo-potentials of Hay and Wadt at the CISD level of theory for the transition metals Sc-Cu, Y-Ag, and La-Au in the energetically lowest-lying $s^1d^n$ electronic state.

1,927 citations


Journal ArticleDOI
TL;DR: The lexicographical Gröbner basis can be obtained by applying this algorithm after a total degree Gröbner basis computation: it is usually much faster to compute the basis this way than with a direct application of Buchberger's algorithm.
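A minimal SymPy sketch (not the authors' implementation) of the two orderings involved: the total-degree (grevlex) basis is typically cheap to compute, while the direct lexicographic computation is the expensive step that a change-of-ordering algorithm avoids. The example system F is arbitrary, and the comment about an fglm helper is an assumption about recent SymPy versions.

```python
# Hypothetical illustration: compare a total-degree Groebner basis with a
# directly computed lexicographic one for a small polynomial system.
from sympy import groebner, symbols

x, y, z = symbols('x y z')
F = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]

G_grevlex = groebner(F, x, y, z, order='grevlex')   # usually the cheap computation
G_lex = groebner(F, x, y, z, order='lex')           # direct lex computation (slower in general)

print(G_grevlex)
print(G_lex)

# The paper's point: instead of rerunning Buchberger's algorithm for the lex order,
# one can convert the total-degree basis by linear algebra.  Recent SymPy versions
# appear to expose such a change-of-ordering step as G_grevlex.fglm('lex') for
# zero-dimensional ideals (availability may vary by version).
```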

708 citations


Journal ArticleDOI
TL;DR: Mallat's heuristic, which says that wavelet bases are optimal for representing functions containing singularities even when there is an arbitrary number of arbitrarily distributed singularities, is formalized and proved.

470 citations


Journal ArticleDOI
TL;DR: In this paper, a simple method is developed for computing the interactions among various factors influencing the atmospheric circulations, and numerical simulations can be utilized to obtain the pure contribution of any factor to any predicted field, as well as the contributions due to the mutual interactions among two or more factors.
Abstract: A simple method is developed for computing the interactions among various factors influencing the atmospheric circulations. It is shown how numerical simulations can be utilized to obtain the pure contribution of any factor to any predicted field, as well as the contributions due to the mutual interactions among two or more factors. The mathematical basis for n factors is developed, and it is shown that $2^n$ simulations are required for the separation of the contributions and their possible interactions. The method is demonstrated with two central factors, the topography and surface fluxes, and their effect on the rainfall distribution for a cyclone evolution in the Mediterranean.
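For concreteness, the standard factor-separation bookkeeping for the two-factor case described above (topography T and surface fluxes F); the symbols f_0, f_T, f_F, f_{TF} for the $2^2 = 4$ simulations are ours, not the paper's.

```latex
% Two factors => 2^2 = 4 simulations: f_0 (neither factor), f_T (topography only),
% f_F (fluxes only), f_{TF} (both).  The separated contributions are then
\begin{align*}
  \hat{f}_T    &= f_T - f_0,                 \\ % pure topography contribution
  \hat{f}_F    &= f_F - f_0,                 \\ % pure surface-flux contribution
  \hat{f}_{TF} &= f_{TF} - f_T - f_F + f_0,  \\ % mutual interaction of the two factors
  f_{TF}       &= f_0 + \hat{f}_T + \hat{f}_F + \hat{f}_{TF}. % consistency check
\end{align*}
```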

408 citations


Posted Content
TL;DR: The wavelet transform as mentioned in this paper maps each $f(x)$ to its coefficients with respect to an orthogonal basis of piecewise constant functions, constructed by dilation and translation.
Abstract: This note is a very basic introduction to wavelets. It starts with an orthogonal basis of piecewise constant functions, constructed by dilation and translation. The ``wavelet transform'' maps each $f(x)$ to its coefficients with respect to this basis. The mathematics is simple and the transform is fast (faster than the Fast Fourier Transform, which we briefly explain), but approximation by piecewise constants is poor. To improve this first wavelet, we are led to dilation equations and their unusual solutions. Higher-order wavelets are constructed, and it is surprisingly quick to compute with them --- always indirectly and recursively. We comment informally on the contest between these transforms in signal processing, especially for video and image compression (including high-definition television). So far the Fourier Transform --- or its 8 by 8 windowed version, the Discrete Cosine Transform --- is often chosen. But wavelets are already competitive, and they are ahead for fingerprints. We present a sample of this developing theory.
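A short NumPy sketch of the piecewise-constant (Haar) case the note starts from: one analysis/synthesis level built from averages and differences, with exact reconstruction because the basis is orthonormal. The function names are ours.

```python
import numpy as np

def haar_step(f):
    """One level of the Haar wavelet transform of a length-2m array:
    coarse averages and detail differences (orthonormal scaling)."""
    f = np.asarray(f, dtype=float)
    avg = (f[0::2] + f[1::2]) / np.sqrt(2.0)    # coarse approximation
    diff = (f[0::2] - f[1::2]) / np.sqrt(2.0)   # wavelet coefficients
    return avg, diff

def haar_step_inverse(avg, diff):
    """Invert one Haar step exactly (the basis is orthonormal)."""
    f = np.empty(2 * len(avg))
    f[0::2] = (avg + diff) / np.sqrt(2.0)
    f[1::2] = (avg - diff) / np.sqrt(2.0)
    return f

f = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_step(f)
assert np.allclose(haar_step_inverse(a, d), f)  # perfect reconstruction
```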

311 citations


Journal ArticleDOI
TL;DR: In this article, a Gaussian basis set leading to wave functions with atomic total energies within 1 m$E_\mathrm{h}$ of the Hartree-Fock values was prepared using the well-tempered formula for atoms Ga through Rn.

256 citations


Book ChapterDOI
01 Jan 1993
TL;DR: A general strategy is proposed for solving the motion planning problem for real analytic, controllable systems without drift, by computing a control that provides an exact solution of the original problem when the given system is nilpotent.
Abstract: We propose a general strategy for solving the motion planning problem for real analytic, controllable systems without drift. The procedure starts by computing a control that steers the given initial point to the desired target point for an extended system, in which a number of Lie brackets of the system vector fields are added to the right-hand side. The main point then is to use formal calculations based on the product expansion relative to a P. Hall basis, to produce another control that achieves the desired result on the formal level. It then turns out that this control provides an exact solution of the original problem if the given system is nilpotent. When the system is not nilpotent, one can still produce an iterative algorithm that converges very fast to a solution. Using the theory of feedback nilpotentization, one can find classes of non-nilpotent systems for which the algorithm, in cascade with a precompensator, produces an exact solution in a finite number of steps. We also include results of simulations which illustrate the effectiveness of the procedure.

215 citations


Journal ArticleDOI
TL;DR: The local convergence properties of the two techniques designed for the efficient computation of all roots of a system of nonlinear polynomial equations in n variables which lie within an n-dimensional domain are examined.

210 citations


Journal ArticleDOI
TL;DR: The authors apply an F-test and an AIC-based approach for multiresolution analysis of TV systems, advocate the use of a wavelet basis because of its flexibility in capturing the signal's characteristics at different scales, and discuss how to choose the optimal wavelet basis for a given system trajectory.
Abstract: Parametric identification of time-varying (TV) systems is possible if each TV coefficient can be expanded onto a finite set of basis sequences. The problem then becomes time invariant with respect to the parameters of the expansion. The authors address the question of selecting this set of basis sequences. They advocate the use of a wavelet basis because of its flexibility in capturing the signal's characteristics at different scales, and discuss how to choose the optimal wavelet basis for a given system trajectory. They also develop statistical tests to keep only the basis sequences that significantly contribute to the description of the system's time-variation. By formulating the problem as a regressor selection problem, they apply an F-test and an AIC-based approach for multiresolution analysis of TV systems. The resulting algorithm can estimate TV AR or ARMAX models and determine their orders. They apply this algorithm to both synthetic and real speech data and compare it with the Kalman filtering TV parameter estimator.
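A toy NumPy sketch of the basis-expansion idea: a time-varying AR(1) coefficient is expanded onto a small set of basis sequences, turning the identification into an ordinary time-invariant least-squares problem. The crude piecewise-constant basis below is a stand-in for the wavelet basis the paper advocates.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512
t = np.arange(N)

# Simulate y(t) = a(t) * y(t-1) + e(t) with a slowly varying AR(1) coefficient a(t)
a_true = np.where(t < N // 2, 0.8, 0.2)
y = np.zeros(N)
for n in range(1, N):
    y[n] = a_true[n] * y[n - 1] + 0.1 * rng.standard_normal()

# Basis sequences b_m(t): disjoint piecewise-constant indicators, a crude
# stand-in for the wavelet basis discussed in the paper
M = 8
seg = N // M
B = np.zeros((N, M))
for m in range(M):
    B[m * seg:(m + 1) * seg, m] = 1.0

# Expanding a(t) = sum_m theta_m b_m(t) turns the TV-AR model into a linear
# regression in the time-invariant parameters theta:
#   y(t) = sum_m theta_m [b_m(t) y(t-1)] + e(t)
X = B[1:] * y[:-1, None]
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
a_hat = B @ theta                              # estimated time-varying coefficient
print(float(np.max(np.abs(a_hat - a_true))))   # close to zero
```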

197 citations


Journal ArticleDOI
TL;DR: This paper gives an affirmative answer to a conjecture given in [10]: the Bernstein basis has optimal shape preserving properties among all normalized totally positive bases for the space of polynomials of degree less than or equal to n over a compact interval.
Abstract: This paper gives an affirmative answer to a conjecture given in [10]: the Bernstein basis has optimal shape preserving properties among all normalized totally positive bases for the space of polynomials of degree less than or equal to n over a compact interval. There is also a simple test to recognize normalized totally positive bases (which have good shape preserving properties), and the corresponding corner cutting algorithm to generate the Bézier polygon is also included. Among other properties, it is also proved that the Wronskian matrix of a totally positive basis on an interval [a, ∞) is also totally positive.
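A small NumPy illustration (ours, not the paper's) of the Bernstein basis and the corner-cutting evaluation it supports: de Casteljau's algorithm evaluates the curve by repeated convex combinations of the Bézier polygon and agrees with direct evaluation in the Bernstein basis.

```python
import numpy as np
from math import comb

def bernstein(n, k, t):
    """Bernstein basis polynomial B_{k,n}(t) on [0, 1]."""
    return comb(n, k) * t**k * (1.0 - t)**(n - k)

def de_casteljau(control_points, t):
    """Corner-cutting evaluation of the Bezier curve defined by the control polygon."""
    P = np.array(control_points, dtype=float)
    while len(P) > 1:
        P = (1.0 - t) * P[:-1] + t * P[1:]   # cut each corner by a convex combination
    return P[0]

pts = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
t = 0.3
direct = sum(bernstein(3, k, t) * pts[k] for k in range(4))
assert np.allclose(de_casteljau(pts, t), direct)
```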

180 citations


Journal ArticleDOI
TL;DR: An axiomatic basis is developed for the relationship between conditional independence and graphical models in statistical analysis, and it is shown that unconditional independence relative to normal models can be axiomatized with a finite set of axioms.
Abstract: This article develops an axiomatic basis for the relationship between conditional independence and graphical models in statistical analysis. In particular, the following relationships are established: (1) every axiom for conditional independence is an axiom for graph separation, (2) every graph represents a consistent set of independence and dependence constraints, (3) all binary factorizations of strictly positive probability models can be encoded and determined in polynomial time using their correspondence to graph separation, (4) binary factorizations of non-strictly positive probability models can also be derived in polynomial time albeit less efficiently and (5) unconditional independence relative to normal models can be axiomatized with a finite set of axioms.
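A tiny sketch (not from the article) of the graph-separation side of the correspondence described above: in an undirected graph, X and Y are separated by Z exactly when every path from X to Y passes through Z, which can be checked in polynomial time by removing Z and testing reachability.

```python
from collections import deque

def separated(graph, X, Y, Z):
    """True iff the node set Z separates X from Y in the undirected graph
    (given as an adjacency dict).  Mirrors reading 'X independent of Y given Z'
    off a graph via separation."""
    X, Y, Z = set(X), set(Y), set(Z)
    seen, queue = set(X), deque(X)
    while queue:
        u = queue.popleft()
        if u in Y:
            return False              # found a path from X to Y avoiding Z
        for v in graph.get(u, ()):
            if v not in seen and v not in Z:
                seen.add(v)
                queue.append(v)
    return True

# Chain a - b - c: a and c are separated by {b} but not by the empty set.
g = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
print(separated(g, {'a'}, {'c'}, {'b'}))   # True
print(separated(g, {'a'}, {'c'}, set()))   # False
```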

Journal ArticleDOI
TL;DR: It is shown here that a necessary condition for achieving this goal is that the truncated system inherit the symmetry properties of the original infinite-dimensional system, leading to efficient finite truncations.
Abstract: The proper orthogonal decomposition (POD) (also called Karhunen–Loeve expansion) has been recently used in turbulence to derive optimally fast converging bases of spatial functions, leading to efficient finite truncations. Whether a finite number of these modes can be used in numerical simulations to derive an “accurate” finite set of ordinary differential equations, over a certain range of bifurcation parameter values, still remains an open question. It is shown here that a necessary condition for achieving this goal is that the truncated system inherit the symmetry properties of the original infinite-dimensional system. In most cases, this leads to a systematic involvement of the symmetry group in deriving a new expansion basis called the symmetric POD basis. The Kuramoto–Sivashinsky equation with periodic boundary conditions is used as a paradigm to illustrate this point of view. However, the conclusion is general and can be applied to other equations, such as the Navier–Stokes equations, the complex G...

Journal ArticleDOI
TL;DR: Four algorithms, A-D, were developed to align two groups of biological sequences, which are designed to evaluate the cost for a deletion/insertion more accurately when internal gaps are present in either or both groups of sequences.
Abstract: Four algorithms, A-D, were developed to align two groups of biological sequences. Algorithm A is equivalent to the conventional dynamic programming method widely used for aligning ordinary sequences, whereas algorithms B-D are designed to evaluate the cost for a deletion/insertion more accurately when internal gaps are present in either or both groups of sequences. Rigorous optimization of the 'sum of pairs' (SP) score is achieved by algorithm D, whose average performance is close to $O(MNL^2)$, where M and N are numbers of sequences included in the two groups and L is the mean length of the sequences. Algorithm B uses some approximations to cope with profile-based operations, whereas algorithm C is a simpler variant of algorithm D. These group-to-group alignment algorithms were applied to multiple sequence alignment with two iterative strategies: a progressive method based on a given binary tree and a randomized grouping-realignment method. The advantages and disadvantages of the four algorithms are discussed on the basis of the results of examinations of several protein families.
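For orientation, a compact version of the conventional dynamic-programming method that algorithm A reduces to for two single sequences (global alignment with a linear gap cost); the scoring values are arbitrary and this is not the authors' group-to-group code.

```python
def align(a, b, match=1, mismatch=-1, gap=-2):
    """Global pairwise alignment score by dynamic programming (Needleman-Wunsch)."""
    n, m = len(a), len(b)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + s,   # substitution
                          D[i - 1][j] + gap,     # deletion
                          D[i][j - 1] + gap)     # insertion
    return D[n][m]

print(align("GATTACA", "GCATGCA"))
```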

Journal ArticleDOI
TL;DR: In this paper, the symmetry-adapted perturbation theory of intermolecular forces offers an independent reference point to determine efficacy of some computational approaches aiming at elimination of BSSE.
Abstract: The basis set extension (BSE) effects such as primary and secondary basis set superposition errors (BSSE) are discussed on the formal and numerical ground. The symmetry‐adapted perturbation theory of intermolecular forces offers an independent reference point to determine efficacy of some computational approaches aiming at elimination of BSSE. The formal and numerical results support the credibility of the function counterpoise method which dictates that the dimer energy calculated within a supermolecular approach decomposes into monomer energies reproduced with the dimer centered basis set and the interaction energy term which also takes advantage of the full dimer basis. Another consistent approach was found to be Cullen’s ‘‘strictly monomer molecular orbital’’ SCF method [J. M. Cullen, Int. J. Quantum Chem. Symp. 25, 193 (1991)] in which all BSE effects are a priori eliminated. This approach misses, however, the charge transfer component of the interaction energy. The SCF and MP2 results obtained withi...
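The function counterpoise prescription referred to above, written in the usual Boys-Bernardi form (standard notation, not reproduced from this paper): each monomer energy is evaluated in the full dimer-centered basis at the dimer geometry.

```latex
% Counterpoise-corrected interaction energy of a dimer AB:
% E_X^{\,Y}(Z) = energy of system X at the dimer geometry Z, computed in basis set Y.
\begin{equation*}
  \Delta E_{\mathrm{int}}^{\mathrm{CP}}
    = E_{AB}^{AB}(AB) \;-\; E_{A}^{AB}(AB) \;-\; E_{B}^{AB}(AB)
\end{equation*}
% i.e. the dimer energy decomposes into monomer energies reproduced with the
% dimer-centered basis plus an interaction term that also uses the full dimer basis.
```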

Journal ArticleDOI
TL;DR: Building on the roadmap algorithm described in [3, 4], which worked only for compact, regularly stratified sets, the algorithm computes a one-dimensional semi-algebraic subset R(S) of S.
Abstract: In this paper we study the problem of determining whether two points lie in the same connected component of a semi-algebraic set S. Although we are mostly concerned with sets $S \subseteq \mathbb{R}^k$, our algorithm can also decide if points in an arbitrary set $S \subseteq R^k$ can be joined by a semi-algebraic path, for any real closed field R. Our algorithm computes a one-dimensional semi-algebraic subset R(S) of S (actually of an embedding of S in a space ${R'}^k$ for a certain real extension field R' of the given field R). R(S) is called the roadmap of S. The basis of this work is the roadmap algorithm described in [3, 4], which worked only for compact, regularly stratified sets.

Journal ArticleDOI
TL;DR: In this paper, the relative merits of using both global and local transforms to depict cold-frontal features are explored, and an antisymmetric wavelet basis set is shown to resolve the characteristics of the transition zone, and associated wave and/or eddy activity, with a relatively small number of members of the basis set.
Abstract: Atmospheric cold fronts observed in the boundary layer represent relatively sharp transition zones between air masses of disparate physical characteristics. Further, wavelike features and/or eddy structures are often observed in conjunction with the passage of a frontal zone. The relative merits of using both global and local (with respect to the span of a basis element) transforms to depict cold-frontal features are explored. The data represent both tower and aircraft observations of cold fronts. An antisymmetric wavelet basis set is shown to resolve the characteristics of the transition zone, and associated wave and/or eddy activity, with a relatively small number of members of the basis set. In contrast, the Fourier transformation assigns a significant amplitude to a large number of members of the basis set to resolve a frontal-type feature. In principle, empirical orthogonal functions provide an optimal decomposition of the variance. The observed transition zone, however, has to be phase alig...

Journal ArticleDOI
TL;DR: The locally dense basis set approach to the calculation of nuclear magnetic resonance shieldings is capable of determining chemical shieldings nearly as well as a calculation with a balanced basis set of quality equal to the locally dense set, but with considerable savings of CPU time.
Abstract: The locally dense basis set approach to the calculation of nuclear magnetic resonance shieldings is one in which a sufficiently large or dense set of basis functions is used for an atom or molecular fragment containing the resonant nucleus or nuclei of interest and fewer or attenuated sets of basis functions employed elsewhere. Provided the dense set is of sufficient size, this approach is capable of determining chemical shieldings nearly as well as a calculation with a balanced basis set of quality equal to the locally dense set, but with considerable savings of CPU time

Journal ArticleDOI
TL;DR: In this article, the Painleve analysis is used to retrieve the four famous solutions of Nozaki and Bekki, represented by the constant coefficients of two linear partial differential equations and a finite set of constants.

Journal ArticleDOI
TL;DR: In this paper, the higher spin analogs of the six-vertex model were studied on the basis of their symmetry under the quantum affine algebra, and the space of states, transfer matrix, vacuum, creation/annihilation operators of particles, and local operators were formulated purely in the language of representation theory.
Abstract: We study the higher spin analogs of the six-vertex model on the basis of its symmetry under the quantum affine algebra. Using the method developed recently for the XXZ spin chain, we formulate the space of states, transfer matrix, vacuum, creation/annihilation operators of particles, and local operators, purely in the language of representation theory. We find that, regardless of the level of the representation involved, the particles have spin 1/2, and that the n-particle space has an RSOS type structure rather than a simple tensor product of the one-particle space. This agrees with the picture proposed earlier by Reshetikhin.

Journal ArticleDOI
TL;DR: A frequency-domain algorithm for motion estimation based on overlapped transforms of the image data is developed as an alternative to block matching methods, and gives comparable or smaller prediction errors than standard models using exhaustive search block matching.
Abstract: A frequency-domain algorithm for motion estimation based on overlapped transforms of the image data is developed as an alternative to block matching methods. The complex lapped transform (CLT) is first defined by extending the lapped orthogonal transform (LOT) to have complex basis functions. The CLT basis functions decay smoothly to zero at their end points, and overlap by 2:1 when a data sequence is transformed. A method for estimating cross-correlation functions in the CLT domain is developed. This forms the basis of a motion estimation algorithm that calculates vectors for overlapping, windowed regions of data. The overlapping data window used has no block edge discontinuities and results in smoother motion fields. Furthermore, when motion compensation is performed using similar overlapping regions, the algorithm gives comparable or smaller prediction errors than standard models using exhaustive search block matching, and computational load is lower for larger displacement ranges and block sizes.
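As a point of reference, a generic NumPy sketch of frequency-domain displacement estimation via the cross-power spectrum (whole-block phase correlation). It is not the paper's CLT-based algorithm, which uses overlapped, smoothly windowed transforms, but it makes the cross-correlation step concrete.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Return the integer (dy, dx) such that moved is approximately np.roll(ref, (dy, dx)).
    Plain whole-block phase correlation via the cross-power spectrum."""
    F_ref, F_mov = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = np.conj(F_ref) * F_mov
    cross /= np.abs(cross) + 1e-12                  # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]   # wrap to signed shifts
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
img = rng.random((64, 64))
print(estimate_shift(img, np.roll(img, (3, -5), axis=(0, 1))))   # -> (3, -5)
```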

Journal ArticleDOI
TL;DR: In this paper, it was shown that when the same procedure is applied to biorthogonal wavelet bases, not all the resulting wavelet packets lead to Riesz bases for $L^2 (\mathbb{R})$.
Abstract: Starting from a multiresolution analysis and the corresponding orthonormal wavelet basis, Coifman and Meyer have constructed wavelet packets, a library from which many different orthonormal bases can be picked. This paper proves that when the same procedure is applied to biorthogonal wavelet bases, not all the resulting wavelet packets lead to Riesz bases for $L^2 (\mathbb{R})$.

Patent
27 Dec 1993
TL;DR: In this article, a reference table is provided for variable length encoding motion vectors based on a particular value range and degree of accuracy, where the value of a motion vector to be encoded is divided to form a quotient and a remainder.
Abstract: In connection with compression-coding of video signals on the basis of inter-frame correlation, a single reference table is used for variable length encoding of inter-frame motion vectors established on the basis of various motion vector value ranges and degrees of accuracy. A reference table is provided for variable length encoding motion vectors based on a particular value range and degree of accuracy. In order to use the same table for motion vectors based on a larger value range than that for which the table was designed, the value of a motion vector to be encoded is divided to form a quotient and a remainder. An addition bit code is formed on the basis of the remainder and is appended to a variable length code which corresponds in the reference table to the quotient so that a variable length code value is formed for the motion vector based on the larger range. As to motion vectors based on a finer degree of accuracy than was provided for in the reference table, the motion vector value is multiplied by an appropriate factor and the resulting product is used to obtain a corresponding variable length code value from the reference table. Decoding is performed using the same table and by means of decoding operations that are the inverse of the encoding operations.
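A toy Python sketch of the quotient/remainder trick described above; the five-bit code table, the range factor, and the bit widths are invented for illustration and are not the patent's actual tables.

```python
# Hypothetical base table: motion-vector values -8..7 -> codes.  Real systems use
# variable length codes; fixed 5-bit stand-ins keep the toy decoder simple.
BASE_TABLE = {v: format(i, '04b') + '1' for i, v in enumerate(range(-8, 8))}

def encode_mv(value, range_factor=1, extra_bits=0):
    """Encode a motion vector that may exceed the base table's value range:
    the value is split into a quotient (looked up in the table) and a remainder
    (appended as a fixed-width additional bit code)."""
    if range_factor == 1:
        return BASE_TABLE[value]
    quotient, remainder = divmod(value, range_factor)
    return BASE_TABLE[quotient] + format(remainder, f'0{extra_bits}b')

def decode_mv(bits, range_factor=1, extra_bits=0):
    """Inverse operation using the same table."""
    inverse = {code: v for v, code in BASE_TABLE.items()}
    if range_factor == 1:
        return inverse[bits]
    code, rem = bits[:-extra_bits], bits[-extra_bits:]
    return inverse[code] * range_factor + int(rem, 2)

# A vector from a range 4x larger than the table was designed for:
bits = encode_mv(23, range_factor=4, extra_bits=2)
assert decode_mv(bits, range_factor=4, extra_bits=2) == 23
```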

Journal ArticleDOI
TL;DR: In this paper, a second-order Møller-Plesset perturbation theory with linear $r_{12}$ terms (MP2-R12) is presented. The program exploits direct integral evaluation and the full use of molecular point group symmetry, which together allow the use of very large Gaussian basis sets.
Abstract: A principal source of error in electronic structure calculations is the inability of conventional CI (configuration interaction) expansions to describe the electron-electron cusp. This manifests itself in the slow convergence of correlation treatments with finite basis sets which are commonly applied in traditional ab initio quantum chemistry. In this paper we describe results obtained by adding special n-particle functions, which have terms linear in the interelectronic coordinate $r_{12}$, to the usual trial wave function, which is an expansion in terms of Slater determinants. A vectorized and efficient computer program has been written for putting into practice second-order Møller-Plesset perturbation theory with linear $r_{12}$ terms (MP2-R12): the sore program. It exploits both direct integral evaluation strategies and techniques that permit the full (also nonabelian) use of molecular point group symmetry. These two ingredients to the program allow for the use of very large Gaussian basis sets in conjunction ...

Journal ArticleDOI
TL;DR: It is proved that if an orthonormal basis can be generated using a tree structure, it can be generated specifically by a paraunitary tree.
Abstract: The known result that a binary-tree-structured filter bank with the same paraunitary polyphase matrix on all levels generates an orthonormal basis is generalized to binary trees having different paraunitary matrices on each level. A converse result that every orthonormal wavelet basis can be generated by a tree-structured filter bank having paraunitary polyphase matrices is then proved. The concept of orthonormal bases is extended to generalized (nonbinary) tree structures, and it is seen that a close relationship exists between orthonormality and paraunitariness. It is proved that a generalized tree structure with paraunitary polyphase matrices produces an orthonormal basis. Since not all bases can be generated by tree-structured filter banks, it is proved that if an orthonormal basis can be generated using a tree structure, it can be generated specifically by a paraunitary tree.

Posted Content
TL;DR: The so-called non-standard form (which achieves decoupling among the scales) and the associated fast numerical algorithms are considered, and examples of non-standard forms of several basic operators (e.g. derivatives) will be computed explicitly.
Abstract: Wavelet based algorithms in numerical analysis are similar to other transform methods in that vectors and operators are expanded into a basis and the computations take place in this new system of coordinates. However, due to the recursive definition of wavelets, their controllable localization in both space and wave number (time and frequency) domains, and the vanishing moments property, wavelet based algorithms exhibit new and important properties. For example, the multiresolution structure of the wavelet expansions brings about an efficient organization of transformations on a given scale and of interactions between different neighbouring scales. Moreover, wide classes of operators which naively would require a full (dense) matrix for their numerical description, have sparse representations in wavelet bases. For these operators sparse representations lead to fast numerical algorithms, and thus address a critical numerical issue. We note that wavelet based algorithms provide a systematic generalization of the Fast Multipole Method (FMM) and its descendents. These topics will be the subject of the lecture. Starting from the notion of multiresolution analysis, we will consider the so-called non-standard form (which achieves decoupling among the scales) and the associated fast numerical algorithms. Examples of non-standard forms of several basic operators (e.g. derivatives) will be computed explicitly.
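A small NumPy experiment (ours) showing the sparsity phenomenon described above, using the simpler standard form: a dense matrix with a smooth off-diagonal kernel is conjugated by an orthonormal Haar transform and thresholded, after which only a small fraction of entries remains significant. The paper's non-standard form is a different, multi-scale organization that this sketch does not implement.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet transform matrix of size n (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    H = haar_matrix(n // 2)
    top = np.kron(H, [1.0, 1.0]) / np.sqrt(2.0)                  # averages
    bot = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2.0)    # details
    return np.vstack([top, bot])

n = 256
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
A = 1.0 / (np.abs(i - j) + 0.5)          # dense matrix, smooth away from the diagonal

W = haar_matrix(n)
B = W @ A @ W.T                          # "standard form" of A in the Haar basis
kept = np.abs(B) > 1e-4 * np.abs(B).max()
print(f"entries kept after thresholding: {kept.mean():.1%}")
```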

Book ChapterDOI
13 Sep 1993
TL;DR: This work presents an approach in which the map manifold is built from a small set of basis manifolds, and the resulting parametrized self-organizing maps (“PSOM”) require only very few reference vectors for their construction.
Abstract: The Self-organizing Map creates a dimension reducing mapping from a feature space onto a non-linear map manifold. In the basic mapping algorithm, this mapping is discretized and for higher-dimensional map manifolds a very large number of reference vectors must be used to obtain a good resolution. To overcome this limitation, we present an approach in which the map manifold is built from a small set of basis manifolds. The resulting parametrized self-organizing maps (“PSOM”) require only very few reference vectors for their construction. They can be considered as recurrent networks with a continuous attractor manifold. We illustrate the approach with the construction of a PSOM for the six-dimensional configuration manifold of a puma manipulator, embedded in a 15-dimensional variable space.

Book
01 Nov 1993
TL;DR: This book clarifies how to exploit the differences in optimizing implementations of multi-dimensional Fourier transforms in a form that is convenient for writing highly efficient code on a variety of vector and parallel computers.
Abstract: The main emphasis of this book is the development of algorithms for processing multi-dimensional digital signals, particularly algorithms for multi-dimensional Fourier transforms, in a form that is convenient for writing highly efficient code on a variety of vector and parallel computers. The rapidly increasing power of computing chips, the increased availability of vector and array processors, and the increased size of data sets to be analyzed make code-writing a difficult task. By emphasizing the unified basis for the various approaches to multidimensional Fourier transforms, this book also clarifies how to exploit the differences in optimizing implementations.

Journal ArticleDOI
TL;DR: In this article, the theory of mono-and multifractal sets is presented and a geometric and thermodynamic description of the multifractal set is developed. But the focus is mainly on the application of the fractal concept for a thermodynamic system with partial memory loss, turbulent fluid flow, hierarchically coordinated set of statistical ensembles, Anderson's transition, and incommensurable and quasicrystalline structures.
Abstract: Basic information about the theory of mono- and multifractal sets is presented. Geometric and thermodynamic descriptions are developed. The geometric picture is presented on the basis of the simplest examples of the Koch and Cantor fractal sets. An ultrametric space, representing the metric of a fractal set, is introduced on the basis of Cayley's hierarchical tree. The spectral characteristics of a multifractal formation are described. Attention is focused mainly on the application of the fractal concept for a thermodynamic system with partial memory loss, turbulent fluid flow, hierarchically coordinated set of statistical ensembles, Anderson's transition, and incommensurable and quasicrystalline structures.
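To make the geometric picture concrete, the similarity dimensions of the two textbook sets mentioned above (standard values, not taken from the article):

```latex
% A self-similar set made of N copies, each scaled by a factor r, has
% similarity dimension D = ln N / ln(1/r).
\begin{align*}
  \text{Cantor set: } & N = 2,\; r = \tfrac{1}{3} \;\Rightarrow\;
      D = \frac{\ln 2}{\ln 3} \approx 0.631, \\
  \text{Koch curve: } & N = 4,\; r = \tfrac{1}{3} \;\Rightarrow\;
      D = \frac{\ln 4}{\ln 3} \approx 1.262.
\end{align*}
```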

Journal ArticleDOI
TL;DR: Fink and Rheinboldt as mentioned in this paper developed a priori local error estimates for the scalar-parameter case of the reduced basis method by considering the method in a differential-geometric setting.
Abstract: In an earlier paper (ZAMM 63, 1983, 21), J. P. Fink and W. C. Rheinboldt developed a priori local error estimates for the scalar-parameter case of the reduced basis method by considering the method in a differential-geometric setting. Here it is shown that an analogous setting can be used for the analysis of the method applied to problems with a multidimensional parameter vector and that this leads to a corresponding local error theory also in this general case.

Journal ArticleDOI
TL;DR: In this paper, an efficient reanalysis method for the topological optimization of structures is presented, based on combining the computed terms of a series expansion, used as high quality basis vectors, and coefficients of a reduced basis expression.
Abstract: An efficient reanalysis method for the topological optimization of structures is presented. The method is based on combining the computed terms of a series expansion, used as high quality basis vectors, and coefficients of a reduced basis expression. The advantage is that the efficiency of local approximations and the improved quality of global approximations are combined to obtain an effective solution procedure.
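A minimal NumPy sketch in the spirit of the reanalysis idea described above: the terms of a series expansion for the modified system serve as high-quality basis vectors for a small reduced (Rayleigh-Ritz) problem. The matrices, sizes, and number of basis vectors are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 200, 4                         # full problem size, number of basis vectors

# Original SPD stiffness-like system K0 u0 = f, and a moderate modification dK
M = rng.standard_normal((n, n))
K0 = M @ M.T + n * np.eye(n)
A = rng.standard_normal((n, n))
dK = 0.3 * (A + A.T)
f = rng.standard_normal(n)

K = K0 + dK                           # modified system we want to (re)analyze
u0 = np.linalg.solve(K0, f)

# Basis vectors from the series expansion u = (I + K0^{-1} dK)^{-1} u0:
#   r_1 = u0,  r_{i+1} = -K0^{-1} dK r_i
# (in practice one reuses a factorization of K0; here we simply call solve)
R = np.empty((n, s))
R[:, 0] = u0
for i in range(1, s):
    R[:, i] = -np.linalg.solve(K0, dK @ R[:, i - 1])

# Reduced-basis (Rayleigh-Ritz) solution of the modified system
y = np.linalg.solve(R.T @ K @ R, R.T @ f)
u_rb = R @ y

u_exact = np.linalg.solve(K, f)
print(np.linalg.norm(u_rb - u_exact) / np.linalg.norm(u_exact))  # small relative error
```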