
Showing papers on "Basis (linear algebra)" published in 1999


Book ChapterDOI
Miklós Ajtai
11 Jul 1999
TL;DR: It is shown that lattices of the same random class can be generated not only together with a short vector in them, but also together with a short basis, which may make the construction more applicable for cryptographic protocols.
Abstract: A class of random lattices is given in [1] so that (a) a random lattice can be generated in polynomial time together with a short vector in it, and (b) assuming that certain worst-case lattice problems have no polynomial time solutions, there is no polynomial time algorithm which finds a short vector in a random lattice with a polynomially large probability. In this paper we show that lattices of the same random class can be generated not only together with a short vector in them, but also together with a short basis. The existence of a known short basis may make the construction more applicable for cryptographic protocols.

410 citations


Journal ArticleDOI
TL;DR: In this article, the authors used Almlof and Haser's Laplace transform idea to eliminate the energy denominator in second-order perturbation theory (MP2) and obtain an energy expression in the atomic orbital basis.
Abstract: We have used Almlof and Haser’s Laplace transform idea to eliminate the energy denominator in second-order perturbation theory (MP2) and obtain an energy expression in the atomic orbital basis. We show that the asymptotic computational cost of this method scales quadratically with molecular size. We then define atomic orbital domains such that selective pairwise interactions can be neglected using well-defined thresholding criteria based on the power law decay properties of the long-range contributions. For large molecules, our scheme yields linear scaling computational cost as a function of molecular size. The errors can be controlled in a precise manner and our method reproduces canonical MP2 energies. We present benchmark calculations of polyglycine chains and water clusters containing up to 3040 basis functions.
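For context, the Laplace-transform idea referenced above rests on a standard identity (the notation below is generic quantum-chemistry shorthand, not taken from the abstract): the MP2 orbital-energy denominator is rewritten as an integral over exponentials, which a few quadrature points approximate well and which lets the energy be assembled directly from atomic-orbital quantities.

$$\frac{1}{\epsilon_a+\epsilon_b-\epsilon_i-\epsilon_j}=\int_0^{\infty} e^{-(\epsilon_a+\epsilon_b-\epsilon_i-\epsilon_j)\,t}\,dt \;\approx\; \sum_{k=1}^{n} w_k\, e^{-(\epsilon_a+\epsilon_b-\epsilon_i-\epsilon_j)\,t_k},$$

where $i,j$ label occupied and $a,b$ virtual orbital energies, and $(t_k, w_k)$ are quadrature points and weights.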

388 citations


Journal ArticleDOI
TL;DR: In this article, the Hartree-Fock and correlation contributions to the interaction energy of the hydrogen-bonded complexes were computed in conventional calculations employing the aug-cc-pVXZ series of basis sets at the levels of second-order perturbation theory, and coupled-cluster theory with single and double excitations augmented by a perturbative triples correction.
Abstract: The Hartree-Fock and correlation contributions to the interaction energy of the hydrogen-bonded complexes (HF)2, (HCl)2, H2OHF, HCNHF, and (H2O)2 are computed in conventional calculations employing the aug-cc-pVXZ series of basis sets at the levels of Hartree-Fock theory, second-order perturbation theory, and coupled-cluster theory with single and double excitations augmented by a perturbative triples correction. The basis set convergence of the interaction energy is examined by comparison with results obtained with an explicitly correlated wave function model. The counterpoise-corrected and uncorrected Hartree-Fock interaction energies both converge very unsystematically. The convergence of the uncorrected correlation contribution is also very unsystematic because the basis set superposition error and the error from the incomplete description of the electronic Coulomb cusp both are present. Once the former has been effectively removed by the counterpoise correction, the cusp dominates and the convergence...
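For reference, the counterpoise correction discussed above is the standard Boys–Bernardi prescription (notation here is generic, not taken from the abstract): each monomer energy entering the interaction energy is recomputed in the full dimer basis, at the dimer geometry, so that the basis set superposition error largely cancels,

$$\Delta E_{\mathrm{int}}^{\mathrm{CP}} \;=\; E_{AB}(\mathcal{B}_{AB}) \;-\; E_{A}(\mathcal{B}_{AB}) \;-\; E_{B}(\mathcal{B}_{AB}),$$

where $\mathcal{B}_{AB}$ denotes the combined basis set of both monomers.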

347 citations


Book ChapterDOI
01 Jan 1999
TL;DR: The best-known description of image statistics is that their Fourier spectra take the form of a power law; coupled with a constraint of translation invariance, this suggests that the Fourier transform is an appropriate PCA representation, and Fourier and related representations are indeed widely used in image processing applications.
Abstract: The use of multi-scale decompositions has led to significant advances in representation, compression, restoration, analysis, and synthesis of signals. The fundamental reason for these advances is that the statistics of many natural signals, when decomposed in such bases, are substantially simplified. Choosing a basis that is adapted to statistical properties of the input signal is a classical problem. The traditional solution is principal components analysis (PCA), in which a linear decomposition is chosen to diagonalize the covariance structure of the input. The most well-known description of image statistics is that their Fourier spectra take the form of a power law [e.g., 1, 2, 3]. Coupled with a constraint of translation-invariance, this suggests that the Fourier transform is an appropriate PCA representation. Fourier and related representations are widely used in image processing applications. For example, the classical solution to the noise removal problem is the Wiener filter, which can be derived by assuming a signal model of decorrelated Gaussian-distributed coefficients in the Fourier domain.
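As a concrete illustration of the closing remark about the Wiener filter, the sketch below applies coefficient-wise shrinkage in the Fourier domain; the power-law prior and noise level are hypothetical choices, not parameters from the chapter.

```python
import numpy as np

def wiener_denoise(noisy, signal_power, noise_power):
    """Wiener filtering of a 1-D signal: shrink each Fourier coefficient by the
    ratio of (assumed) signal power to total power at that frequency.
    signal_power and noise_power must use the same per-coefficient scaling."""
    coeffs = np.fft.fft(noisy)
    gain = signal_power / (signal_power + noise_power)
    return np.real(np.fft.ifft(gain * coeffs))

# Hypothetical example with a 1/f^2 (power-law) signal prior.
n = 512
freq = np.abs(np.fft.fftfreq(n)) + 1.0 / n
signal_power = 1.0 / freq**2                      # assumed power-law prior
noisy = np.cumsum(np.random.randn(n)) + 0.5 * np.random.randn(n)
# 128 = n * sigma^2 for white noise with sigma = 0.5 under NumPy's unnormalized FFT
denoised = wiener_denoise(noisy, signal_power, noise_power=128.0)
```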

261 citations


Journal ArticleDOI
TL;DR: In this article, the number of primitive Gaussians used to define the basis functions is not fixed but adjusted, based on a total energy criterion, and all basis functions share the same set of exponents.
Abstract: We introduce a scheme for the optimization of Gaussian basis sets for use in density-functional calculations. It is applicable to both all-electron and pseudopotential methodologies. In contrast to earlier approaches, the number of primitive Gaussians (exponents) used to define the basis functions is not fixed but adjusted, based on a total-energy criterion. Furthermore, all basis functions share the same set of exponents. The numerical results for the scaling of the shortest-range Gaussian exponent as a function of the nuclear charge are explained by analytical derivations. We have generated all-electron basis sets for H, B through F, Al, Si, Mn, and Cu. Our results show that they efficiently and accurately reproduce structural properties and binding energies for a variety of clusters and molecules for both local and gradient-corrected density functionals.

252 citations


Journal ArticleDOI
TL;DR: An algorithm for finding the convex cone boundaries is presented, and applications to unsupervised unmixing and classification are demonstrated with simulated data as well as experimental data from the hyperspectral digital imagery collection experiment (HYDICE).
Abstract: A new approach to multispectral and hyperspectral image analysis is presented. This method, called convex cone analysis (CCA), is based on the fact that some physical quantities such as radiance are nonnegative. The vectors formed by discrete radiance spectra are linear combinations of nonnegative components, and they lie inside a nonnegative, convex region. The object of CCA is to find the boundary points of this region, which can be used as endmember spectra for unmixing or as target vectors for classification. To implement this concept, the authors find the eigenvectors of the sample spectral correlation matrix of the image. Given the number of endmembers or classes, they select as many eigenvectors corresponding to the largest eigenvalues. These eigenvectors are used as a basis to form linear combinations that have only nonnegative elements, and thus they lie inside a convex cone. The vertices of the convex cone will be those points whose spectral vector contains as many zero elements as the number of eigenvectors minus one. Accordingly, a mixed pixel can be decomposed by identifying the vertices that were used to form its spectrum. An algorithm for finding the convex cone boundaries is presented, and applications to unsupervised unmixing and classification are demonstrated with simulated data as well as experimental data from the hyperspectral digital imagery collection experiment (HYDICE).
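A minimal sketch of the eigenvector step described above (variable names are illustrative, not from the paper): form the sample spectral correlation matrix, keep the leading eigenvectors, and then search over their linear combinations for nonnegative vertices with the prescribed number of zero entries.

```python
import numpy as np

def cca_basis(X, n_endmembers):
    """Leading eigenvectors of the sample spectral correlation matrix.

    X : (n_pixels, n_bands) array of radiance spectra (nonnegative).
    """
    R = X.T @ X / X.shape[0]              # sample spectral correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)  # eigenvalues in ascending order
    return eigvecs[:, -n_endmembers:]     # eigenvectors of the largest eigenvalues

# A candidate endmember is a combination c = V @ a that is nonnegative and has
# (n_endmembers - 1) zero entries; a full implementation would search band-index
# combinations for such vertices of the convex cone.
```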

245 citations


Journal ArticleDOI
TL;DR: A best basis algorithm for signal enhancement in white Gaussian noise is proposed; an estimator of the mean-square error, based on a heuristic argument, is introduced, and the reconstruction performance based upon it is compared to that based on the Stein (1981) unbiased risk estimator.
Abstract: We propose a best basis algorithm for signal enhancement in white Gaussian noise. The best basis search is performed in families of orthonormal bases constructed with wavelet packets or local cosine bases. We base our search for the "best" basis on a criterion of minimal reconstruction error of the underlying signal. This approach is intuitively appealing, because the enhanced or estimated signal has an associated measure of performance, namely, the resulting mean-square error. Previous approaches in this framework have focused on obtaining the most "compact" signal representations, which consequently contribute to effective denoising. These approaches, however, do not possess the inherent measure of performance which our algorithm provides. We first propose an estimator of the mean-square error, based on a heuristic argument and subsequently compare the reconstruction performance based upon it to that based on the Stein (1981) unbiased risk estimator. We compare the two proposed estimators by providing both qualitative and quantitative analyses of the bias term. Having two estimators of the mean-square error, we incorporate these cost functions into the search for the "best" basis, and subsequently provide a substantiating example to demonstrate their performance.
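To make the comparison concrete, here is the standard Stein unbiased risk estimate for soft-thresholding coefficients in a single orthonormal basis (a well-known formula, shown only to illustrate the kind of risk estimate the paper compares against; it is not the authors' heuristic estimator).

```python
import numpy as np

def sure_soft_threshold(coeffs, t, sigma):
    """Stein's unbiased risk estimate of the MSE incurred by soft-thresholding
    orthonormal-basis coefficients contaminated by white Gaussian noise (std sigma)."""
    n = coeffs.size
    clipped = np.minimum(np.abs(coeffs), t)
    return (n * sigma**2
            - 2.0 * sigma**2 * np.sum(np.abs(coeffs) <= t)
            + np.sum(clipped**2))

# Given several candidate orthonormal bases (e.g. wavelet-packet nodes), one would
# transform the noisy signal into each basis, evaluate the estimated risk, and keep
# the basis (and threshold) with the smallest estimated mean-square error.
```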

220 citations


Journal ArticleDOI
TL;DR: Two methods to extend the density matrix renormalization group (DMRG) to calculations of dynamical properties are investigated; in the Lanczos vector method, the DMRG basis is optimized to represent Lanczos vectors, which are then used to calculate the spectra.
Abstract: The density matrix renormalization group (DMRG) method allows for very precise calculations of ground state properties in low-dimensional strongly correlated systems. We investigate two methods to expand the DMRG to calculations of dynamical properties. In the Lanczos vector method the DMRG basis is optimized to represent Lanczos vectors, which are then used to calculate the spectra. This method is fast and relatively easy to implement, but the accuracy at higher frequencies is limited. Alternatively, one can optimize the basis to represent a correction vector for a particular frequency. The correction vectors can be used to calculate the dynamical correlation functions at these frequencies with high accuracy. By separately calculating correction vectors at different frequencies, the dynamical correlation functions can be interpolated and pieced together from these results. For systems with open boundaries we discuss how to construct operators for specific wave vectors using filter functions.

219 citations


Proceedings ArticleDOI
Jerome E. Lengyel
26 Apr 1999
TL;DR: Methods for coding a time-dependent geometry stream include a geometric transform coder, a basis decomposition coder, a column/row prediction coder, and a space-time level-of-detail coder.
Abstract: Methods for coding a time-dependent geometry stream (164, 165) include geometric transform coder, a basis decomposition coder, a column/row prediction coder, and space-time level of detail coder. The basis decomposition coder uses principal component analysis to decompose a time dependent geometry matrix into basis vectors and weights. The weights and basis vectors are coded separately. Optionally, the residual between a mesh constructed from the weight and basis vectors and the original mesh can be encoded as well. The column/row predictor exploits coherence in a matrix of time dependent geometry by encoding differences among neighboring rows and columns (166). Row and column sorting (163) optimizes this form of coding by re-arranging rows and columns to improve similarity among neighboring rows/columns. The space-time level of detail coder converts a time-dependent geometry stream into a hierarchical structure, including levels of detail in the space and time dimensions, and expansion records. The expansion records specify how to reconstruct (172) a mesh from deltas representing differences between levels of detail.
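A minimal sketch of the basis-decomposition coder's core step (illustrative only; the actual coder also quantizes the weights and basis vectors and may code a residual): stack the animated vertex positions into a time-by-geometry matrix and keep a truncated principal-component decomposition.

```python
import numpy as np

def decompose_geometry(frames, n_basis):
    """frames : (n_frames, 3 * n_vertices) matrix of vertex positions over time."""
    mean = frames.mean(axis=0)
    U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
    basis = Vt[:n_basis]                    # basis vectors (coded once)
    weights = U[:, :n_basis] * S[:n_basis]  # per-frame weights (coded per frame)
    return mean, basis, weights

def reconstruct(mean, basis, weights):
    return mean + weights @ basis           # approximate mesh for every frame
```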

209 citations


Journal ArticleDOI
TL;DR: The AINV algorithm of Benzi, Meyer, and Tůma is introduced to linear scaling electronic structure theory, and found to be essential in transformations between orthogonal and nonorthogonal representations.
Abstract: A simplified version of the Li, Nunes and Vanderbilt [Phys. Rev. B 47, 10891 (1993)] and Daw [Phys. Rev. B 47, 10895 (1993)] density matrix minimization is introduced that requires four fewer matrix multiplies per minimization step relative to previous formulations. The simplified method also exhibits superior convergence properties, such that the bulk of the work may be shifted to the quadratically convergent McWeeny purification, which brings the density matrix to idempotency. Both orthogonal and nonorthogonal versions are derived. The AINV algorithm of Benzi, Meyer, and Tůma [SIAM J. Sci. Comp. 17, 1135 (1996)] is introduced to linear scaling electronic structure theory, and found to be essential in transformations between orthogonal and nonorthogonal representations. These methods have been developed with an atom-blocked sparse matrix algebra that achieves sustained megafloating point operations per second rates as high as 50% of theoretical, and implemented in the MondoSCF suite of linear scaling SCF programs. For the first time, linear scaling Hartree–Fock theory is demonstrated with three-dimensional systems, including water clusters and estane polymers. The nonorthogonal minimization is shown to be uncompetitive with minimization in an orthonormal representation. An early onset of linear scaling is found for both minimal and double zeta basis sets, and crossovers with a highly optimized eigensolver are achieved. Calculations with up to 6000 basis functions are reported. The scaling of errors with system size is investigated for various levels of approximation.
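The McWeeny purification step referred to above is the standard cubic iteration (a textbook formula, not specific to this paper):

$$P_{k+1} \;=\; 3P_k^2 \;-\; 2P_k^3,$$

which converges quadratically near the fixed points, driving eigenvalues of the approximate density matrix below 1/2 toward 0 and those above 1/2 toward 1, and hence the matrix toward idempotency ($P^2 = P$).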

208 citations


Book ChapterDOI
01 Jan 1999
TL;DR: Linear models form the core of classical statistics and are still the basis of much of statistical practice; many modern modelling and analytical techniques build on the methodology developed for linear models.
Abstract: Linear models form the core of classical statistics and are still the basis of much of statistical practice; many modern modelling and analytical techniques build on the methodology developed for linear models.

Book ChapterDOI
01 Jan 1999
TL;DR: This chapter lays categorical foundations for topology and fuzzy topology in which the basis of a space—the lattice of membership values—is allowed to change from one object to another within the same category.
Abstract: This chapter lays categorical foundations for topology and fuzzy topology in which the basis of a space—the lattice of membership values—is allowed to change from one object to another within the same category (the basis of a space being distinguished from the basis of the topology of a space). It is the goal of this chapter to create foundations which answer all the following questions in the affirmative:

Journal ArticleDOI
TL;DR: A method of detecting periodicities in data that exploits a series of projections onto "periodic subspaces" and finds its own set of nonorthogonal basis elements (based on the data), rather than assuming a fixed predetermined basis as in the Fourier, Gabor, and wavelet transforms.
Abstract: This paper presents a method of detecting periodicities in data that exploits a series of projections onto "periodic subspaces". The algorithm finds its own set of nonorthogonal basis elements (based on the data), rather than assuming a fixed predetermined basis as in the Fourier, Gabor, and wavelet transforms. A major strength of the approach is that it is linear-in-period rather than linear-in-frequency or linear-in-scale. The algorithm is derived and analyzed, and its output is compared to that of the Fourier transform in a number of examples. One application is the finding and grouping of rhythms in a musical score, another is the separation of periodic waveforms with overlapping spectra, and a third is the finding of patterns in astronomical data. Examples demonstrate both the strengths and weaknesses of the method.
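The projection onto a "periodic subspace" mentioned above has a simple closed form for period p: average the signal over its p phases and tile the result. The sketch below (illustrative, not the authors' code) uses that projection to score candidate periods; the full method also removes each detected periodic component before continuing.

```python
import numpy as np

def project_periodic(x, p):
    """Orthogonal projection of x onto the subspace of p-periodic signals."""
    n = (len(x) // p) * p             # truncate to a whole number of periods
    template = x[:n].reshape(-1, p).mean(axis=0)   # average over all periods
    return np.tile(template, n // p), n

def periodicity_scores(x, max_period):
    """Fraction of signal energy captured by each candidate period
    (note the search is linear in period, not in frequency)."""
    scores = {}
    for p in range(2, max_period + 1):
        proj, n = project_periodic(x, p)
        scores[p] = np.sum(proj**2) / np.sum(x[:n]**2)
    return scores
```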

Journal ArticleDOI
01 Oct 1999
TL;DR: Experimental results are presented which demonstrate that the ORMP method is the best procedure in terms of its ability to give the most compact signal representation, followed by MMP and then BMP which gives the poorest results.
Abstract: The problem of signal representation in terms of basis vectors from a large, over-complete, spanning dictionary has been the focus of much research. Achieving a succinct, or 'sparse', representation is known as the problem of best basis representation. Methods are considered which seek to solve this problem by sequentially building up a basis set for the signal. Three distinct algorithm types have appeared in the literature which are here termed basic matching pursuit (BMP), order recursive matching pursuit (ORMP) and modified matching pursuit (MMP). The algorithms are first described and then their computation is closely examined. Modifications are made to each of the procedures which improve their computational efficiency. The complexity of each algorithm is considered in two contexts; one where the dictionary is variable (time-dependent) and the other where the dictionary is fixed (time-independent). Experimental results are presented which demonstrate that the ORMP method is the best procedure in terms of its ability to give the most compact signal representation, followed by MMP and then BMP which gives the poorest results. Finally, weighing the performance of each algorithm, its computational complexity and the type of dictionary available, recommendations are made as to which algorithm should be used for a given problem.
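For concreteness, here is a minimal basic matching pursuit (BMP) loop over a generic dictionary (a sketch of the usual formulation, not the authors' optimized implementations of ORMP or MMP): at each step the atom most correlated with the residual is selected and its contribution subtracted.

```python
import numpy as np

def basic_matching_pursuit(signal, dictionary, n_atoms):
    """dictionary : (dim, n_total_atoms) matrix with unit-norm columns."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = np.argmax(np.abs(correlations))      # best-matching atom
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual
```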

Proceedings ArticleDOI
20 Sep 1999
TL;DR: A new algorithm for simulating the eikonal equation is introduced, which offers a number of computational and conceptual advantages over earlier methods when it comes to shock tracking, together with a very efficient algorithm for shock detection.
Abstract: The eikonal equation and variants of it are of significant interest for problems in computer vision and image processing. It is the basis for continuous versions of mathematical morphology, stereo, shape-from-shading and for recent dynamic theories of shape. Its numerical simulation can be delicate, owing to the formation of singularities in the evolving front, and is typically based on level set methods. However, there are more classical approaches rooted in Hamiltonian physics, which have received little consideration in computer vision. In this paper we first introduce a new algorithm for simulating the eikonal equation, which offers a number of computational and conceptual advantages over the earlier methods when it comes to shock tracking. Next, we introduce a very efficient algorithm for shock detection, where the key idea is to measure the net outward flux of a vector field per unit volume, and to detect locations where a conservation of energy principle is violated. We illustrate the approach with several numerical examples including skeletons of complex 2D and 3D shapes.
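A crude grid-based sketch of the flux-based shock detector described above (parameter names and the distance-map input are illustrative assumptions, not the authors' formulation): normalize the gradient field and flag points where its divergence, which approximates the net outward flux per unit area, is strongly negative.

```python
import numpy as np

def shock_points(distance_map, flux_threshold=-0.5):
    """Flag likely shock/skeleton points of a 2-D distance map by thresholding
    the divergence of its normalized gradient field."""
    gy, gx = np.gradient(distance_map)             # gradients along rows, columns
    norm = np.sqrt(gx**2 + gy**2) + 1e-12
    nx, ny = gx / norm, gy / norm
    # Strongly negative divergence (inward net flux) marks colliding fronts.
    divergence = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
    return divergence < flux_threshold
```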

Journal ArticleDOI
TL;DR: This work is concerned with developing the hierarchical basis for meshless methods: an intrinsic pseudo-spectral basis that can remain a partition of unity in a local region, because the discrete wavelet kernels form a 'partition of nullity'.
Abstract: This work is concerned with developing the hierarchical basis for meshless methods. A reproducing kernel hierarchical partition of unity is proposed in the framework of continuous representation as well as its discretized counterpart. To form such hierarchical partition, a class of basic wavelet functions are introduced. Based upon the built-in consistency conditions, the differential consistency conditions for the hierarchical kernel functions are derived. It serves as an indispensable instrument in establishing the interpolation error estimate, which is theoretically proven and numerically validated. For a special interpolant with different combinations of the hierarchical kernels, a synchronized convergence effect may be observed. Being different from the conventional Legendre function based p-type hierarchical basis, the new hierarchical basis is an intrinsic pseudo-spectral basis, which can remain as a partition of unity in a local region, because the discrete wavelet kernels form a ‘partition of nullity’. These newly developed kernels can be used as the multi-scale basis to solve partial differential equations in numerical computation as a p-type refinement. Copyright © 1999 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors used geometric quantization to obtain a Hilbert space of states for the quantum tetrahedron in 3 and 4 dimensions, where the basis of states is labeled by the areas of the tetrahedron's faces.
Abstract: Recent work on state sum models of quantum gravity in 3 and 4 dimensions has led to interest in the `quantum tetrahedron'. Starting with a classical phase space whose points correspond to geometries of the tetrahedron in R^3, we use geometric quantization to obtain a Hilbert space of states. This Hilbert space has a basis of states labeled by the areas of the faces of the tetrahedron together with one more quantum number, e.g. the area of one of the parallelograms formed by midpoints of the tetrahedron's edges. Repeating the procedure for the tetrahedron in R^4, we obtain a Hilbert space with a basis labelled solely by the areas of the tetrahedron's faces. An analysis of this result yields a geometrical explanation of the otherwise puzzling fact that the quantum tetrahedron has more degrees of freedom in 3 dimensions than in 4 dimensions.


Journal ArticleDOI
TL;DR: This paper discusses the implementation and properties of an orthogonal DWT, with two zero moments and with improved time localization, with a piecewise linear basis that is reminiscent of the slant transform.
Abstract: The discrete wavelet transform (DWT) is usually carried out by filterbank iteration; however, for a fixed number of zero moments, this does not yield a discrete-time basis that is optimal with respect to time localization. This paper discusses the implementation and properties of an orthogonal DWT, with two zero moments and with improved time localization. The basis is not based on filterbank iteration; instead, different filters are used for each scale. For coarse scales, the support of the discrete-time basis functions approaches two thirds that of the corresponding functions obtained by filterbank iteration. This basis, which is a special case of a class of bases described by Alpert (1992, 1993), retains the octave-band characteristic and is piecewise linear (but discontinuous). Closed-form expressions for the filters are given, an efficient implementation of the transform is described, and improvement in a denoising example is shown. This basis, being piecewise linear, is reminiscent of the slant transform, to which it is compared.

Journal ArticleDOI
TL;DR: A new method is proposed for classifying sets of a variable number of points and curves in a multidimensional space as time series by examining a fixed number of questions like “how many points are in a certain range of a certain dimension”, and converting the corresponding answers into a binary vector with a fixed length.


Journal ArticleDOI
TL;DR: An approach to solid-state electronic-structure calculations based on the finite-element method that combines the significant advantages of both real-space-grid and basis-oriented approaches and so promises to be particularly well suited for large, accurate calculations.
Abstract: We present an approach to solid-state electronic-structure calculations based on the finite-element method. In this method, the basis functions are strictly local, piecewise polynomials. Because the basis is composed of polynomials, the method is completely general and its convergence can be controlled systematically. Because the basis functions are strictly local in real space, the method allows for variable resolution in real space; produces sparse, structured matrices, enabling the effective use of iterative solution methods; and is well suited to parallel implementation. The method thus combines the significant advantages of both real-space-grid and basis-oriented approaches and so promises to be particularly well suited for large, accurate ab initio calculations. We develop the theory of our approach in detail, discuss advantages and disadvantages, and report initial results, including electronic band structures and details of the convergence of the method. © 1999 The American Physical Society

Book ChapterDOI
01 Jan 1999
TL;DR: In this article, a survey of the native spaces associated with basis functions is given, starting from reproducing kernel Hilbert spaces and invariance properties, and general construction of native spaces is carried out for both the unconditionally and the conditionally positive definite case.
Abstract: This contribution gives a partial survey over the native spaces associated to (not necessarily radial) basis functions. Starting from reproducing kernel Hilbert spaces and invariance properties, the general construction of native spaces is carried out for both the unconditionally and the conditionally positive definite case. The definitions of the latter are based on finitely supported functionals only. Fourier or other transforms are not required. The dependence of native spaces on the domain is studied, and criteria for functions and functionals to be in the native space are given. Basic facts on optimal recovery, power functions, and error bounds are included.

Book ChapterDOI
01 Jan 1999
TL;DR: The two-dimensional Fourier transform is a straightforward extension of the one-dimensional Fourier transform and is briefly reviewed here for image processing applications; the Fourier transform diagonalizes all linear time-invariant operators and is therefore the basis of all further developments in signal processing.
Abstract: Fourier presented a memoir to the Institut de France in 1807 in which he claimed that any periodic function can be represented as a series of harmonically related sinusoids. This idea had a profound impact on mathematical analysis, physics, and engineering, but it took one and a half centuries to understand the convergence of Fourier series and complete the theory of Fourier integrals. Fourier was motivated by the study of heat diffusion, which is governed by a linear differential equation. However, the Fourier transform diagonalizes all linear time-invariant operators, which are the building blocks of signal processing. It is, therefore, not only the starting point of our exploration but the basis of all further developments. The two-dimensional Fourier transform is a straightforward extension of the one-dimensional Fourier transform, and the two-dimensional case is briefly reviewed for image processing applications.
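As a small concrete illustration of how the two-dimensional transform extends the one-dimensional one (standard NumPy calls; this example is not from the chapter), a 2-D FFT factors into 1-D FFTs applied first along the rows and then along the columns.

```python
import numpy as np

image = np.random.rand(8, 8)

# Direct 2-D FFT ...
F2 = np.fft.fft2(image)

# ... equals 1-D FFTs along rows (axis 1), then along columns (axis 0).
F_sep = np.fft.fft(np.fft.fft(image, axis=1), axis=0)

assert np.allclose(F2, F_sep)
```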

Journal ArticleDOI
TL;DR: In this paper, a wavelet Petrov-Galerkin procedure is proposed to stabilize the computation of some pathological problems in numerical computation, such as advection-diffusion problems and Stokes' flow problems.
Abstract: In this part of the work, the meshless hierarchical partition of unity proposed in [1], referred to here as Part I, is used as a multiple scale basis in numerical computations to solve practical problems. The applications discussed in the present work fall into two categories: (1) a wavelet adaptivity refinement procedure; and (2) a so-called wavelet Petrov–Galerkin procedure. In the applications of wavelet adaptivity, the hierarchical reproducing kernels are used as a multiple scale basis to compute the numerical solutions of the Helmholtz equation, a model equation of wave propagation problems, and to simulate shear band formation in an elasto-viscoplastic material, a problem dictated by the presence of high gradient deformation. In both numerical experiments, numerical solutions with high resolution are obtained by inserting the wavelet-like basis into the primary interpolation function basis, a process that may be viewed as a spectral p-type refinement. By using the interpolant that has the synchronized convergence property as a weighting function, a wavelet Petrov–Galerkin procedure is proposed to stabilize computations of some pathological problems in numerical computation, such as advection–diffusion problems and Stokes' flow problem; it offers an alternative procedure in stabilized methods and also provides some insight, or new interpretation, of the method. Detailed analysis has been carried out on the stability and convergence of the wavelet Petrov–Galerkin method. Copyright © 1999 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: From the theoretical and experimental study, it is seen that the recognition rates increase as the number of speakers in the training set increases, and it is shown that the common vector obtained from Criterion 2 represents the common properties of a spoken word better than the common or average vector obtained from Criterion 1.
Abstract: A voice signal contains the psychological and physiological properties of the speaker as well as dialect differences, acoustical environment effects, and phase differences. For these reasons, the same word uttered by different speakers can be very different. In this paper, two theories are developed by considering two optimization criteria applied to both the training set and the test set. The first theory is well known and uses what is called Criterion 1 here; it ends up with the average of all vectors belonging to the words in the training set. The second theory is a novel approach that uses what is called Criterion 2 here, and it is used to extract the common properties of all vectors belonging to the words in the training set. It is shown that Criterion 2 is superior to Criterion 1 where the training set is concerned. In Criterion 2, the individual differences are obtained by subtracting a reference vector from the other vectors, and the individual difference vectors are used to obtain an orthogonal vector basis by means of the Gram-Schmidt orthogonalization method. The common vector is obtained by subtracting the projections of any vector of the training set onto the orthogonal vectors from this same vector. It is proved that this common vector is unique for any word class in the training set and independent of the chosen reference vector. This common vector is used in isolated word recognition, and it is also shown that Criterion 2 is superior to Criterion 1 for the test set. From the theoretical and experimental study, it is seen that the recognition rates increase as the number of speakers in the training set increases. This means that the common vector obtained from Criterion 2 represents the common properties of a spoken word better than the common or average vector obtained from Criterion 1.
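A compact sketch of the Criterion 2 construction described above (illustrative; variable names are not the authors'): form difference vectors against a reference utterance, orthonormalize them, and subtract the projection onto their span to obtain the common vector.

```python
import numpy as np

def common_vector(word_vectors, ref_index=0):
    """word_vectors : (n_speakers, dim) feature vectors of one word class."""
    ref = word_vectors[ref_index]
    diffs = np.delete(word_vectors, ref_index, axis=0) - ref  # individual differences
    # Orthonormal basis of the difference subspace (Gram-Schmidt result via QR)
    q, _ = np.linalg.qr(diffs.T)
    # Remove from the reference vector its projection onto the difference subspace;
    # the result is the same for every choice of reference vector.
    return ref - q @ (q.T @ ref)
```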

Journal ArticleDOI
TL;DR: The results show that BSD can simultaneously determine both the basis spectra and their distribution, and in principle, BSD should solve this bilinear problem for any dataset which results from multiplication of matrices representing positive additive distributions if the data overdetermine the solutions.


Book ChapterDOI
12 Aug 1999
TL;DR: In this paper, the authors present algorithms for conversion to and from dual of polynomial and dual of normal bases, as well as algorithms to convert to a normal basis which involve the dual of the basis.
Abstract: Conversion of finite field elements from one basis representation to another representation in a storage-efficient manner is crucial if these techniques are to be carried out in hardware for cryptographic applications. We present algorithms for conversion to and from dual of polynomial and dual of normal bases, as well as algorithms to convert to a polynomial or normal basis which involve the dual of the basis. This builds on work by Kaliski and Yin presented at SAC '98.