scispace - formally typeset

Showing papers on "Matrix (mathematics)" published in 1993


Book
08 Mar 1993
TL;DR: Experimental Designs in Linear Models Optimal Designs for Scalar Parameter Systems Information Matrices Loewner Optimality Real Optimality Criteria Matrix Means The General Equivalence Theorem Optimal Moment Matrices and Optimal Designs D-, A-, E-, T-Optimality Admissibility of Moment and Information Matrices Bayes Designs and Discrimination Designs Efficient Designs for Finite Sample Sizes Invariant Design Problems Kiefer Optimality Rotatability and Response Surface Designs Comments and References Biographies Bibliography Index as discussed by the authors
Abstract: Experimental Designs in Linear Models Optimal Designs for Scalar Parameter Systems Information Matrices Loewner Optimality Real Optimality Criteria Matrix Means The General Equivalence Theorem Optimal Moment Matrices and Optimal Designs D-, A-, E-, T-Optimality Admissibility of Moment and Information Matrices Bayes Designs and Discrimination Designs Efficient Designs for Finite Sample Sizes Invariant Design Problems Kiefer Optimality Rotatability and Response Surface Designs Comments and References Biographies Bibliography Index.

1,823 citations


Journal ArticleDOI
TL;DR: In this paper, a new approach based on representing the weights of each configuration in the steady state as a product of noncommuting matrices is presented, and the whole solution of the fully asymmetric exclusion problem is reduced to finding two matrices and two vectors which satisfy very simple algebraic rules.
Abstract: Several recent works have shown that the one-dimensional fully asymmetric exclusion model, which describes a system of particles hopping in a preferred direction with hard core interactions, can be solved exactly in the case of open boundaries. Here the authors present a new approach based on representing the weights of each configuration in the steady state as a product of noncommuting matrices. With this approach the whole solution of the problem is reduced to finding two matrices and two vectors which satisfy very simple algebraic rules. They obtain several explicit forms for these non-commuting matrices which are, in the general case, infinite-dimensional. Their approach allows exact expressions to be derived for the current and density profiles. Finally they discuss briefly two possible generalizations of their results: the problem of partially asymmetric exclusion and the case of a mixture of two kinds of particles.

1,333 citations


Proceedings ArticleDOI
08 Sep 1993
TL;DR: Here I show how to compute a matrix that is optimized for a particular image, and custom matrices for a number of images show clear improvement over image-independent matrices.
Abstract: This presentation describes how a vision model incorporating contrast sensitivity, contrast masking, and light adaptation is used to design visually optimal quantization matrices for Discrete Cosine Transform image compression. The Discrete Cosine Transform (DCT) underlies several image compression standards (JPEG, MPEG, H.261). The DCT is applied to 8x8 pixel blocks, and the resulting coefficients are quantized by division and rounding. The 8x8 'quantization matrix' of divisors determines the visual quality of the reconstructed image; the design of this matrix is left to the user. Since each DCT coefficient corresponds to a particular spatial frequency in a particular image region, each quantization error consists of a local increment or decrement in a particular frequency. After adjustments for contrast sensitivity, local light adaptation, and local contrast masking, this coefficient error can be converted to a just-noticeable-difference (jnd). The jnd's for different frequencies and image blocks can be pooled to yield a global perceptual error metric. With this metric, we can compute for each image the quantization matrix that minimizes the bit-rate for a given perceptual error, or perceptual error for a given bit-rate. Implementation of this system demonstrates its advantages over existing techniques. A unique feature of this scheme is that the quantization matrix is optimized for each individual image. This is compatible with the JPEG standard, which requires transmission of the quantization matrix.
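The quantize/dequantize round trip described above is mechanical once a quantization matrix is chosen; a minimal sketch using SciPy's DCT, with a flat placeholder matrix standing in for a perceptually optimized one:

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, q):
    """8x8 DCT followed by division by the quantization matrix and rounding."""
    return np.round(dctn(block, norm='ortho') / q)

def dequantize_block(qcoeffs, q):
    """Multiply back by the quantization matrix and invert the DCT."""
    return idctn(qcoeffs * q, norm='ortho')

# A flat matrix of divisors stands in for a perceptually optimized one;
# larger entries mean coarser quantization of that spatial frequency.
q = np.full((8, 8), 16.0)
block = np.outer(np.arange(8.0), np.arange(8.0))
restored = dequantize_block(quantize_block(block, q), q)
# Each DCT coefficient error is at most half a quantization step (here 8).
```

A perceptual optimizer would replace the flat `q` with per-frequency divisors chosen so the pooled just-noticeable-difference metric meets a target.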

776 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented a short proof of localization under the conditions of either strong disorder (λ > λ₀) or extreme energies for a wide class of self-adjoint operators with random matrix elements, acting in ℓ² spaces.
Abstract: The work presents a short proof of localization under the conditions of either strong disorder (λ > λ₀) or extreme energies for a wide class of self-adjoint operators with random matrix elements, acting in ℓ² spaces. A prototypical example is the discrete Schrödinger operator H = −Δ + U₀(x) + λV_x on Z^d, d ≥ 1, with U₀(x) a specified background potential and {V_x} generated as random variables. The general results apply to operators with −Δ replaced by a non-local self-adjoint operator T whose matrix elements satisfy ∑_y |T_{x,y}|^s ≤ const., uniformly in x, for some s < 1. Localization means here that within a specified energy range the spectrum of H is of the pure-point type, or equivalently — the wave functions do not spread indefinitely under the unitary time evolution generated by H. The effect is produced by strong disorder in either the potential or in the off-diagonal matrix elements T_{x,y}. Under rapid decay of T_{x,y}, the corresponding eigenfunctions are also proven to decay exponentially. The method is based on resolvent techniques. The central technical ideas include the use of low moments of the resolvent kernel, i.e. ⟨|G_E(x, y)|^s⟩ with s small enough (< 1) to avoid the divergence caused by the distribution's Cauchy tails, and an effective use of the simple form of the dependence of G_E(x, y) on the individual matrix elements of H in elucidating the implications of the fundamental equation (H − E)G_E(x, x₀) = δ_{x,x₀}. This approach simplifies previous derivations of localization results, avoiding the small-denominator difficulties which have hitherto been encountered in the subject. It also yields some new results which include localization under the following sets of conditions: i) potentials with an inhomogeneous non-random part U₀(x), ii) the Bethe lattice, iii) operators with very slow decay in the off-diagonal terms (T_{x,y} ≈ 1/|x−y|^{d+ε}), and iv) localization produced by disordered boundary conditions.

701 citations


Journal Article
TL;DR: In this paper, the authors generalize the Bi-CGSTAB algorithm further and overcome some shortcomings of BiCGStab2; in some sense, the new algorithm combines GMRES(l) and Bi-CG and profits from both.
Abstract: For a number of linear systems of equations arising from realistic problems, using the Bi-CGSTAB algorithm of van der Vorst [17] to solve these equations is very attractive. Unfortunately, for a large class of equations, where, for instance, Bi-CG performs well, the convergence of Bi-CGSTAB stagnates. This was observed specifically in the case of discretized advection-dominated PDEs. The stagnation is due to the fact that for this type of equations the matrix has almost pure imaginary eigenvalues. With his BiCGStab2 algorithm Gutknecht [5] attempted to avoid this stagnation. Here, we generalize the Bi-CGSTAB algorithm further, and overcome some shortcomings of BiCGStab2. In some sense, the new algorithm combines GMRES(l) and Bi-CG and profits from both.
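For readers who want to experiment with this family of methods, SciPy ships a Bi-CGSTAB implementation; a minimal sketch on a nonsymmetric tridiagonal system (mimicking an advection-diffusion stencil, not the paper's generalized algorithm):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Nonsymmetric tridiagonal system resembling a discretized
# advection-diffusion operator.
n = 200
A = diags([-1.2, 2.5, -0.8], offsets=[-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

x, info = bicgstab(A, b, maxiter=1000)   # info == 0 signals convergence
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

On strongly advection-dominated problems (eigenvalues near the imaginary axis) this is exactly where plain Bi-CGSTAB can stagnate and the generalized variants pay off.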

566 citations


Journal ArticleDOI
TL;DR: This article surveys iterative domain decomposition techniques that have been developed in recent years for solving several kinds of partial differential equations, including elliptic, parabolic, and differential systems such as the Stokes problem and mixed formulations of elliptic problems.
Abstract: Domain decomposition (DD) has been widely used to design parallel efficient algorithms for solving elliptic problems. In this thesis, we focus on improving the efficiency of DD methods and applying them to more general problems. Specifically, we propose efficient variants of the vertex space DD method and minimize the complexity of general DD methods. In addition, we apply DD algorithms to coupled elliptic systems, singular Neumann boundary problems and linear algebraic systems. We successfully improve the vertex space DD method of Smith by replacing the exact edge, vertex dense matrices by approximate sparse matrices. It is extremely expensive to calculate, invert and store the exact vertex and edge Schur complement dense sub-matrices in the vertex space DD algorithm. We propose several approximations for these dense matrices, by using Fourier approximation and an algebraic probing technique. Our numerical and theoretical results show that these variants retain the fast convergence rate and greatly reduce the computational cost. We develop a simple way to reduce the overall complexity of domain decomposition methods through choosing the coarse grid size. For sub-domain solvers with different complexities, we derive the optimal coarse grid size $H\sb{opt},$ which asymptotically minimizes the total computational cost of DD methods under the sequential and parallel environments. The overall complexity of DD methods is significantly reduced by using this optimal coarse grid size. We apply the additive and multiplicative Schwarz algorithms to solving coupled elliptic systems. Using the Dryja-Widlund framework, we prove that their convergence rates are independent of both the mesh and the coupling parameters. We also construct several approximate interface sparse matrices by using Sobolev inequalities, Fourier analysis and probe technique. We further discuss the application of DD to the singular Neumann boundary value problems. 
We extend the general framework to these problems and show how to deal with the null space in practice. Numerical and theoretical results show that these modified DD methods still have optimal convergence rate. By using the DD methodology, we propose algebraic additive and multiplicative Schwarz methods to solve general sparse linear algebraic systems. We analyze the eigenvalue distribution of the iterative matrix of each algebraic DD method to study the convergence behavior.

550 citations


Journal ArticleDOI
01 Sep 1993-Proteins
TL;DR: Matrices derived directly from either sequence-based or structure-based alignments of distantly related proteins performed much better overall than extrapolated matrices based on the Dayhoff evolutionary model.
Abstract: Several choices of amino acid substitution matrices are currently available for searching and alignment applications. These choices were evaluated using the BLAST searching program, which is extremely sensitive to differences among matrices, and the Prosite catalog, which lists members of hundreds of protein families. Matrices derived directly from either sequence-based or structure-based alignments of distantly related proteins performed much better overall than extrapolated matrices based on the Dayhoff evolutionary model. Similar results were obtained with the FASTA searching program. Improved performance appears to be general rather than family-specific, reflecting improved accuracy in scoring alignments. An implementation of a multiple matrix strategy was also tested. While no combination of three matrices performed as well as the single best matrix, BLOSUM 62, good results were obtained using a combination of sequence-based and structure-based matrices. This hybrid set of matrices is likely to be useful in certain situations. Our results illustrate the importance of matrix selection and the value of a comprehensive approach to evaluation of protein comparison tools. © 1993 Wiley-Liss, Inc.
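The role a substitution matrix plays in scoring, and the multiple-matrix strategy of keeping the best score over several matrices, can be sketched with a made-up two-letter matrix (not BLOSUM 62):

```python
# Toy substitution scores over a two-letter alphabet (invented for
# illustration, not BLOSUM 62).
SCORES = {('A', 'A'): 4, ('A', 'G'): -1, ('G', 'A'): -1, ('G', 'G'): 5}

def alignment_score(s, t, matrix):
    """Sum of per-position substitution scores of an ungapped alignment."""
    return sum(matrix[(a, b)] for a, b in zip(s, t))

def best_matrix_score(s, t, matrices):
    """Multiple-matrix strategy: score with each matrix, keep the best."""
    return max(alignment_score(s, t, m) for m in matrices)

score = alignment_score("AAG", "AGG", SCORES)   # 4 - 1 + 5 = 8
```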

471 citations



Journal ArticleDOI
TL;DR: The Singular Value Decomposition of the equilibrium matrix makes it possible to answer any question of a static, kinematic, or static/kinematic nature for any structural assembly, within a unified computational framework as discussed by the authors.

361 citations


Journal ArticleDOI
TL;DR: A fast and memory‐saving PLS regression algorithm for matrices with large numbers of objects is presented and a condensed matrix algebra version of the kernel algorithm is given together with the MATLAB code.
Abstract: A fast and memory-saving PLS regression algorithm for matrices with large numbers of objects is presented. It is called the kernel algorithm for PLS. Long (meaning having many objects, N) matrices X (N × K) and Y (N × M) are condensed into a small (K × K) square 'kernel' matrix XᵀYYᵀX of size equal to the number of X-variables. Using this kernel matrix XᵀYYᵀX together with the small covariance matrices XᵀX (K × K), XᵀY (K × M) and YᵀY (M × M), it is possible to estimate all necessary parameters for a complete PLS regression solution with some statistical diagnostics. The new developments are presented in equation form. A comparison of consumed floating point operations is given for the kernel and the classical PLS algorithm. As appendices, a condensed matrix algebra version of the kernel algorithm is given together with the MATLAB code.
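The condensation step itself is one line of linear algebra; a sketch with random data (the sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 10_000, 5, 2               # many objects, few variables
X = rng.standard_normal((N, K))
Y = rng.standard_normal((N, M))

# Small cross-product matrices replace the long N-row matrices; all
# later PLS computations work on these condensed arrays.
XtX = X.T @ X                        # (K, K)
XtY = X.T @ Y                        # (K, M)
YtY = Y.T @ Y                        # (M, M)
kernel = XtY @ XtY.T                 # XᵀY YᵀX, (K, K), symmetric PSD
```

Note that nothing of size N survives the condensation, which is the source of the memory savings.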

345 citations


Proceedings ArticleDOI
01 Jan 1993
TL;DR: The method treats each DCT coefficient as an approximation to the local response of a visual "channel" and estimates the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for agiven bit rate.
Abstract: Many image compression standards (JPEG, MPEG, H.261) are based on the Discrete Cosine Transform (DCT). However, these standards do not specify the actual DCT quantization matrix. We have previously provided mathematical formulae to compute a perceptually lossless quantization matrix. Here I show how to compute a matrix that is optimized for a particular image. The method treats each DCT coefficient as an approximation to the local response of a visual 'channel'. For a given quantization matrix, the DCT quantization errors are adjusted by contrast sensitivity, light adaptation, and contrast masking, and are pooled non-linearly over the blocks of the image. This yields an 8x8 'perceptual error matrix'. A second non-linear pooling over the perceptual error matrix yields total perceptual error. With this model we may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.

Journal ArticleDOI
TL;DR: MatMan is a program for performing a variety of ethological analyses of frequency (interaction) matrices and transition matrices, including the calculation of expected and residual values in transition matrices with defined or undefined diagonal.
Abstract: MatMan is a program for performing a variety of ethological analyses of frequency (interaction) matrices and transition matrices. These analyses include linear hierarchy indices for dominance matrices (APPLEBY, 1983), reorganization of a dominance matrix such that the subjects are in rank order, matrix correlation methods such as Mantel's test (MANTEL, 1967) and rowwise matrix correlation (DE VRIES, 1993), methods based on information theory (STEINBERG, 1977), and the calculation of expected and residual values in transition matrices with defined or undefined diagonal. In addition, MatMan offers some useful options for manipulating matrices. Import of matrices from The Observer (NOLDUS, 1991) and SAS is, within certain limitations, possible. Export of matrices is possible to the programs CORAN (1985), Vegrow (FRESCO, 1989), NCSS (HINTZE, 1987), SAS and SPSSPC.
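Of the methods listed, Mantel's test is simple enough to sketch; this permutation version on off-diagonal entries is illustrative only, not MatMan's implementation:

```python
import numpy as np

def mantel(a, b, n_perm=999, seed=0):
    """One-sided permutation test for association between two square
    matrices: correlate off-diagonal entries, then re-correlate under
    random simultaneous row/column permutations of the first matrix."""
    rng = np.random.default_rng(seed)
    n = a.shape[0]
    off = ~np.eye(n, dtype=bool)
    r_obs = np.corrcoef(a[off], b[off])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        r = np.corrcoef(a[np.ix_(p, p)][off], b[off])[0, 1]
        hits += r >= r_obs
    return r_obs, (hits + 1) / (n_perm + 1)
```

Two perfectly associated matrices should give a correlation near 1 and a small p-value.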

Journal ArticleDOI
TL;DR: The current response to oscillating electric or magnetic fields acting on the carriers in the probes of a multichannel, multilead conductor is investigated and a self-consistent potential method is used to include Coulomb interactions.
Abstract: The current response to oscillating electric or magnetic fields acting on the carriers in the probes of a multichannel, multilead conductor is investigated. For a noninteracting system we find a frequency-dependent admittance matrix which is expressed in terms of scattering matrices. A self-consistent potential method is used to include Coulomb interactions. The low-frequency departure of the admittance away from the dc conductance is discussed in terms of phase-delay times and RC times.

Journal ArticleDOI
TL;DR: In this article, a wavelet expansion can adaptively fit itself to the various length scales associated with the scatterer by distributing the localized functions near the discontinuities and the more spatially diffused ones over the smooth expanses of the SCA.
Abstract: An approach which incorporates the theory of wavelet transforms in method-of-moments solutions for electromagnetic wave interaction problems is presented. The unknown field or response is expressed as a twofold summation of shifted and dilated forms of a properly chosen basis function, which is often referred to as the mother wavelet. The wavelet expansion can adaptively fit itself to the various length scales associated with the scatterer by distributing the localized functions near the discontinuities and the more spatially diffused ones over the smooth expanses of the scatterer. The approach is thus best suited for the analysis of scatterers which contain a broad spectrum of length scales ranging from a subwavelength to several wavelengths. Using a Galerkin method and subsequently applying a threshold procedure, the moment-method matrix is rendered sparsely populated. The structure of the matrix reveals the localized scale-fitting distribution long before the matrix equation is solved. The performance of the proposed discretization scheme is illustrated by a numerical study of electromagnetic coupling through a double-slot aperture.

31 Dec 1993
TL;DR: A heuristic is presented that helps to improve the quality of the bisection returned by the Kernighan-Lin and greedy graph bisection algorithms and helps to reduce the amount of fill-in produced by separator-based algorithms that reorder a matrix before factorization.
Abstract: We present a heuristic that helps to improve the quality of the bisection returned by the Kernighan-Lin and greedy graph bisection algorithms. This in turn helps to reduce the amount of fill-in produced by separator-based algorithms that reorder a matrix before factorization. We also describe the performance of our heuristic on graphs from the Harwell-Boeing collection of sparse matrix test problems, and compare them with known results by other methods on the same graphs.
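A much-simplified, one-swap greedy relative of Kernighan-Lin (not the authors' heuristic) conveys the flavor of bisection improvement:

```python
def cut_size(adj, side):
    """Number of edges crossing the partition."""
    return sum(1 for u in adj for v in adj[u] if u < v and side[u] != side[v])

def greedy_bisection(adj, iters=100):
    """Start from an arbitrary balanced split, then repeatedly perform the
    single cross-partition swap that most reduces the cut."""
    nodes = sorted(adj)
    side = {u: i < len(nodes) // 2 for i, u in enumerate(nodes)}
    for _ in range(iters):
        cur, best, pair = cut_size(adj, side), 0, None
        for u in nodes:
            for v in nodes:
                if side[u] and not side[v]:
                    side[u], side[v] = side[v], side[u]
                    gain = cur - cut_size(adj, side)
                    side[u], side[v] = side[v], side[u]
                    if gain > best:
                        best, pair = gain, (u, v)
        if pair is None:
            break
        u, v = pair
        side[u], side[v] = side[v], side[u]
    return side

# Two triangles {0, 3, 4} and {1, 2, 5} joined by the single edge 4-5;
# the initial split {0, 1, 2} vs {3, 4, 5} cuts four edges.
ADJ = {0: {3, 4}, 3: {0, 4}, 4: {0, 3, 5},
       1: {2, 5}, 2: {1, 5}, 5: {1, 2, 4}}
```

Kernighan-Lin improves on this by tentatively committing whole sequences of swaps and keeping the best prefix, which lets it escape some local minima this greedy pass gets stuck in.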

Journal ArticleDOI
TL;DR: The nonnegative rank of a nonnegative matrix is the smallest number of nonnegative rank-one matrices into which the matrix can be decomposed additively.
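A concrete instance of the definition: the 3 × 2 matrix below has ordinary rank 2, and an explicit additive decomposition into two nonnegative rank-one matrices shows its nonnegative rank is also 2 (in general the nonnegative rank can strictly exceed the ordinary rank):

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])

# Two nonnegative rank-one terms that sum to A, witnessing
# nonnegative rank <= 2; since rank(A) = 2, it is exactly 2.
t1 = np.outer([1., 0., 1.], [1., 0.])
t2 = np.outer([0., 1., 1.], [0., 1.])
```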

Journal ArticleDOI
TL;DR: In this article, a Gaussian basis set leading to wave functions with atomic total energies within 1 mE_h of the Hartree-Fock values was prepared using the well-tempered formula for atoms Ga through Rn.

Journal ArticleDOI
TL;DR: In this article, a general method for constructing supersaturated two-level designs with a large number of interaction columns was proposed, and the efficiency of the constructed designs was studied by using three criteria.
Abstract: An N × N Hadamard matrix can be used to construct a saturated two-level design with N runs and N − 1 factors. Furthermore, if an interaction column for two of the columns of the matrix is not fully aliased with any column of the matrix, this interaction column can be used as a supplementary column for studying an additional factor. For some small Hadamard matrices studied by Plackett & Burman (1946), the number of such interaction columns is very large, thus allowing the construction of supersaturated designs whose number of factors far exceeds the number of runs. A general method of construction along these lines is proposed. The efficiency of the constructed designs is studied by using three criteria.
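The aliasing check at the heart of the construction is a correlation computation; a sketch with SciPy's Sylvester-type Hadamard matrix (for which, unlike the Plackett-Burman matrices the paper exploits, every interaction column is fully aliased, so the check comes out at its extreme value):

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)              # ±1 Sylvester-type Hadamard matrix
design = H[:, 1:]            # saturated design: 8 runs, 7 two-level factors

# Aliasing check: correlate an interaction column (elementwise product of
# two factor columns) with every design column.
inter = design[:, 0] * design[:, 1]
alias = design.T @ inter / 8
```

Here max |alias| is 1: in a Sylvester-type matrix every interaction column coincides (up to sign) with a main-effect column. The partially aliased interaction columns usable as supplementary factors come from other Hadamard matrices, such as the Plackett-Burman N = 12 matrix.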

Journal ArticleDOI
TL;DR: This paper presents a new approach to generating multidimensional Gaussian random fields over a regular sampling grid that is both exact and computationally very efficient, with cost comparable to that of a spectral method also implemented using the FFT.
Abstract: To generate multidimensional Gaussian random fields over a regular sampling grid, hydrogeologists can call upon essentially two approaches. The first approach covers methods that are exact but computationally expensive, e.g., matrix factorization. The second covers methods that are approximate but that have only modest computational requirements, e.g., the spectral and turning bands methods. In this paper, we present a new approach that is both exact and computationally very efficient. The approach is based on embedding the random field correlation matrix R in a matrix S that has a circulant/block circulant structure. We then construct products of the square root S^(1/2) with white noise random vectors. Appropriate subvectors of this product have correlation matrix R, and so are realizations of the desired random field. The only conditions that must be satisfied for the method to be valid are that (1) the mesh of the sampling grid be rectangular, (2) the correlation function be invariant under translation, and (3) the embedding matrix S be nonnegative definite. These conditions are mild and turn out to be satisfied in most practical hydrogeological problems. Implementation of the method requires only knowledge of the desired correlation function. Furthermore, if the sampling grid is a d-dimensional rectangular mesh containing n points in total and the correlation between points on opposite sides of the rectangle is vanishingly small, the computational requirements are only those of a fast Fourier transform (FFT) of a vector of dimension 2^d n per realization. Thus the cost of our approach is comparable with that of a spectral method also implemented using the FFT. In summary, the method is simple to understand, easy to implement, and is fast.
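A one-dimensional sketch of the circulant embedding idea (the paper treats the general d-dimensional rectangular grid): embed the correlation vector in a symmetric circulant, diagonalize it with the FFT, and color complex white noise.

```python
import numpy as np

def gaussian_field_1d(n, corr, seed=0):
    """Exact sampling of a stationary Gaussian field on n grid points by
    circulant embedding of the correlation matrix R."""
    r = corr(np.arange(n))                        # first row of R
    c = np.concatenate([r, r[-2:0:-1]])           # symmetric circulant, 2n-2
    lam = np.fft.fft(c).real                      # eigenvalues of embedding
    if lam.min() < -1e-10:
        raise ValueError("embedding matrix S is not nonnegative definite")
    m = len(c)
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    y = np.fft.fft(np.sqrt(np.maximum(lam, 0.0)) * z) / np.sqrt(m)
    return y.real[:n]                             # subvector has covariance R

field = gaussian_field_1d(50, lambda h: np.exp(-h / 10.0))
```

For the exponential correlation used here the embedding is nonnegative definite, so the sampler is exact; correlations that decay too slowly relative to the grid can violate condition (3) and trip the check.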

Journal ArticleDOI
TL;DR: This paper formalizes this concept by defining a scoring system that is sensitive at all detectable evolutionary distances, and shows that for a typical protein database search, estimating the originally unknown evolutionary distance appropriate to each alignment costs slightly over two bits of information, or somewhat less than a factor of five in statistical significance.
Abstract: Protein sequence alignments generally are constructed with the aid of a “substitution matrix” that specifies a score for aligning each pair of amino acids. Assuming a simple random protein model, it can be shown that any such matrix, when used for evaluating variable-length local alignments, is implicitly a “log-odds” matrix, with a specific probability distribution for amino acid pairs to which it is uniquely tailored. Given a model of protein evolution from which such distributions may be derived, a substitution matrix adapted to detecting relationships at any chosen evolutionary distance can be constructed. Because in a database search it generally is not known a priori what evolutionary distances will characterize the similarities found, it is necessary to employ an appropriate range of matrices in order not to overlook potential homologies. This paper formalizes this concept by defining a scoring system that is sensitive at all detectable evolutionary distances. The statistical behavior of this scoring system is analyzed, and it is shown that for a typical protein database search, estimating the originally unknown evolutionary distance appropriate to each alignment costs slightly over two bits of information, or somewhat less than a factor of five in statistical significance. A much greater cost may be incurred, however, if only a single substitution matrix, corresponding to the wrong evolutionary distance, is employed.
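The log-odds construction is mechanical once target and background frequencies are fixed; a toy two-letter example (frequencies invented for illustration), with the relative entropy H giving the information per aligned pair in bits:

```python
import numpy as np

p = np.array([0.6, 0.4])            # background letter frequencies
q = np.array([[0.5, 0.1],
              [0.1, 0.3]])          # target aligned-pair frequencies

# Implicit log-odds scores in bits: s_ij = log2(q_ij / (p_i * p_j)).
s = np.log2(q / np.outer(p, p))

# Relative entropy: expected score per aligned pair, in bits; this is the
# quantity that shrinks as the target distribution drifts toward background
# at larger evolutionary distances.
H = float((q * s).sum())
```

Matrices tuned to different evolutionary distances correspond to different target distributions q, which is why a single matrix cannot be optimal at all distances.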

Journal ArticleDOI
TL;DR: In this article, a study is made of the extreme points of the convex set of doubly stochastic completely positive maps of the matrix algebra M_n, and a tilde operation is defined on the linear maps of M_n to give an elementary derivation of a result of Kummerer and Maassen.

01 Feb 1993
TL;DR: The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate.
Abstract: The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
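An SVD-based solution to Wahba's problem (Markley's formulation; the paper's own fast method differs) fits in a few lines, and with two noise-free non-collinear observations it recovers the attitude exactly:

```python
import numpy as np

def wahba_svd(body, ref, weights):
    """Attitude matrix minimizing Wahba's loss, via the SVD of the
    attitude profile matrix B = sum_i a_i b_i r_i^T."""
    B = sum(a * np.outer(b, r) for a, b, r in zip(weights, body, ref))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)
    return U @ np.diag([1.0, 1.0, d]) @ Vt   # proper rotation, det = +1

# Two noise-free vector observations determine the attitude uniquely.
c, s = np.cos(0.5), np.sin(0.5)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
ref = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
body = [R @ r for r in ref]
A = wahba_svd(body, ref, [1.0, 1.0])
```

The determinant correction in the last factor guards against reflections when B is rank-deficient, which is exactly the two-observation special case the abstract analyzes.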

Journal ArticleDOI
TL;DR: The theory of matrix models is reviewed from the point of view of its relation to integrable hierarchies in this article, where discrete 1-matrix, 2-matrix and Kontsevich models are considered in some detail, together with the Ward identities, determinantal formulas and continuum limits.
Abstract: The theory of matrix models is reviewed from the point of view of its relation to integrable hierarchies. Discrete 1-matrix, 2-matrix, ``conformal'' (multicomponent) and Kontsevich models are considered in some detail, together with the Ward identities (``W-constraints''), determinantal formulas and continuum limits, taking one kind of model into another. Subtle points and directions of future research are also discussed.


Journal ArticleDOI
TL;DR: In this article, a 3 × 3 upper triangular matrix with three off-diagonal terms (Γ_xy, Γ_xz, and Γ_yz) was used to decompose the deformation matrix into a series of incremental deformation matrices.

Journal ArticleDOI
TL;DR: The single fracture dispersion model (SFDM) as discussed by the authors is a solution to the transport equation in which fractured rock is represented by a single fracture and the tracer is allowed to diffuse into the matrix, and the model is easily calibrated yielding as good, or better, fits as did the models applied in original works, which were based on an assumption of negligible matrix diffusion and were characterized by larger numbers of fitting parameters.
Abstract: Several published tracer tests are reexamined using the single fracture dispersion model (SFDM), i.e., a solution to the transport equation in which fractured rock is represented by a single fracture and the tracer is allowed to diffuse into the matrix. The model has three fitting parameters, i.e., the mean time of flow, dispersivity, and a diffusion parameter which combines such physical parameters as matrix porosity, coefficient of matrix diffusion, and adjusted fracture aperture. The model is easily calibrated yielding as good, or better, fits as did the models applied in original works, which were based on an assumption of negligible matrix diffusion and were characterized by larger numbers of fitting parameters. The SFDM better describes the short-term transport in investigated systems and/or their parameters than models applied so far. Validation of the model was obtained by showing that the values of its physical parameters either agree with those known from other methods or are within expected ranges. Another validation was achieved for tests performed in the same pair of wells with two tracers characterized by distinctly different coefficients of molecular diffusion. In spite of distinct differences in the shapes of the experimental curves, the SFDM yielded the same values of the fracture aperture, matrix porosity, and intrinsic dispersivity. The ratio of the diffusion coefficients in the matrix was close to that known from the diffusion coefficients in free water.

Journal ArticleDOI
TL;DR: In this paper, a two-loop 10 × 10 anomalous dimension matrix O(� 2 ) involving current-current operators, QCD penguin operators, and electroweak penguin opera-tors was calculated for S = 1 weak non-leptonic decays, but also for B = 1 decays.

Book
01 Mar 1993
TL;DR: This chapter discusses C-XSC, a Programming Environment for Scientific Computing with Result Verification, together with an introduction to the programming languages C and C++ on which it is built.
Abstract: 1 Introduction.- 1.1 Typography.- 1.2 C-XSC: A Class Library in the Programming Language C++.- 1.3 C-XSC: A Programming Environment for Scientific Computing with Result Verification.- 1.4 Survey of C-XSC.- 2 The Programming Languages C and C++.- 2.1 A Short Introduction to C.- 2.1.1 Overview.- 2.1.2 Data Types, Operators, and Expressions.- 2.1.3 Control Flow.- 2.1.4 Functions.- 2.1.5 The Structure of a C Program.- 2.1.6 External Variables.- 2.1.7 The Scope of Variables and Functions.- 2.1.8 The C Preprocessor.- 2.1.9 Pointers.- 2.1.10 Pointers and Function Arguments.- 2.1.11 Arithmetic with Pointers and Arrays.- 2.1.12 Structures.- 2.1.13 The C Standard Library.- 2.2 Additional Features in C++.- 2.2.1 Overview.- 2.2.2 A Sample Program.- 2.2.3 Comments.- 2.2.4 Classes.- 2.2.5 Member Functions.- 2.2.6 Friend Functions.- 2.2.7 Reference Variables.- 2.2.8 Constructors and Destructors.- 2.2.9 The Structure of a C++ Program.- 2.2.10 Inline Functions.- 2.2.11 Overloaded Operators and Functions.- 2.2.12 Input and Output with Streams.- 2.2.13 Memory Management Operators.- 2.2.14 Type Casting.- 2.2.15 Additional Features of C++.- 3 C-XSC Reference.- 3.1 Constants, Data Types, and Variables.- 3.1.1 Constants.- 3.1.2 Variables.- 3.1.3 Scalar Data Types.- 3.1.4 Vector Data Types.- 3.1.5 Matrix Data Types.- 3.1.6 Dot precision Data Types.- 3.1.7 Multiple-Precision Data Types.- 3.1.8 User-Defined Data Types.- 3.2 Expressions.- 3.2.1 Implicit Type Casting.- 3.2.2 Explicit Type Casting.- 3.2.3 Arithmetic Operators.- 3.2.4 Relational Operators.- 3.2.5 Standard Functions.- 3.3 Statements.- 3.3.1 Assignments.- 3.3.2 Manipulation of Index Bounds.- 3.3.3 Resize of Vectors and Matrices.- 3.3.4 Addition of a Product to a Dotprecision Accumulator.- 3.3.5 Rounding of Dotprecision Accumulators.- 3.3.6 Input and Output.- 3.4 Error Handling.- 3.5 Pitfalls for Programming with C - XSC.- A Syntax Diagrams.- A.1 Data Types.- A.2 Management of Vectors and Matrices.- A.3 Definition of 
Variables.- A.4 Expressions.- A.5 Logical Expressions.- A.6 Type Castings.- A.7 Assignments.- A.8 Arithmetic Standard Functions.- A.9 Other Functions.- A.10 Input and Output.- A.11 Extension of a Syntax Diagram of C++.- B The Structure of the C - XSC Package.- B.1 Header Files.- B.2 Module Libraries.- C Error List.- D Sample Programs.- D.1 Rounding Control of Arithmetic Operations.- D.2 Rounding Control of Input and Output.- D.3 Scalar Product.- D.4 Transpose of a Matrix.- D.5 Trace of a Product Matrix.- D.6 Inverse of a Matrix.- D.7 Multiple-Precision Arithmetic.- D.8 Interval Newton Method.- D.9 Runge-Kutta Method.- D.10 Complex Polynomial Arithmetic.- D.11 Automatic Differentiation.- E Scientific Computation with Verified Results.- E.1 Evaluation of Polynomials.- E.2 Matrix Inversion.- E.3 Linear Systems of Equations.- E.4 Eigenvalues of Symmetric Matrices.- E.5 Fast Fourier Transform.- E.6 Zeros of a Nonlinear Equation.- E.7 System of Nonlinear Equations.- E.8 Ordinary Differential Equations.

Journal ArticleDOI
01 Jun 1993-Analysis
TL;DR: In this paper, the authors characterize the matrix class (ίίχ ΠΧ, Y) for certain sequence spaces X and Υ, where stx is the set of all statistically convergent sequences defined by a non-negative regular matrix A.
Abstract: We characterize the matrix class (ίίχ ΠΧ, Y) for certain sequence spaces X and Υ , where stx is the set of all statistically convergent sequences defined by a non-negative regular matrix A. AMS-classification: 40A05, 40C05, 40D25, 40F05
