
Showing papers on "Matrix (mathematics)" published in 1994


Journal ArticleDOI
TL;DR: In this paper, a new variant of factor analysis, positive matrix factorization (PMF), is described, in which the problem is solved in the weighted least squares sense: G and F are determined so that the Frobenius norm of E divided (element-by-element) by σ is minimized.
Abstract: A new variant ‘PMF’ of factor analysis is described. It is assumed that X is a matrix of observed data and σ is the known matrix of standard deviations of elements of X. Both X and σ are of dimensions n × m. The method solves the bilinear matrix problem X = GF + E where G is the unknown left hand factor matrix (scores) of dimensions n × p, F is the unknown right hand factor matrix (loadings) of dimensions p × m, and E is the matrix of residuals. The problem is solved in the weighted least squares sense: G and F are determined so that the Frobenius norm of E divided (element-by-element) by σ is minimized. Furthermore, the solution is constrained so that all the elements of G and F are required to be non-negative. It is shown that the solutions by PMF are usually different from any solutions produced by the customary factor analysis (FA, i.e. principal component analysis (PCA) followed by rotations). Usually PMF produces a better fit to the data than FA. Also, the result of PMF is guaranteed to be non-negative, while the result of FA often cannot be rotated so that all negative entries would be eliminated. Different possible application areas of the new method are briefly discussed. In environmental data, the error estimates of data can be widely varying and non-negativity is often an essential feature of the underlying models. Thus it is concluded that PMF is better suited than FA or PCA in many environmental applications. Examples of successful applications of PMF are shown in companion papers.
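Because the weighted problem above is separable by rows of G and columns of F, one way to sketch it is alternating weighted non-negative least squares. This is an illustrative sketch only, not Paatero and Tapper's actual PMF algorithm (which updates G and F jointly); it uses scipy's nnls on σ-scaled subproblems.

```python
import numpy as np
from scipy.optimize import nnls

def pmf_als(X, sigma, p, iters=200, seed=0):
    """Illustrative PMF via alternating weighted NNLS.

    Minimizes || (X - G F) / sigma ||_F^2 subject to G, F >= 0.
    Not the original PMF algorithm, which solves for G and F jointly.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, p))
    F = rng.random((p, m))
    for _ in range(iters):
        # Each row i of G solves a weighted NNLS problem:
        # min_{g >= 0} || D_i (F^T g - x_i) ||, with D_i = diag(1 / sigma_i)
        for i in range(n):
            D = 1.0 / sigma[i]            # shape (m,)
            G[i], _ = nnls(F.T * D[:, None], X[i] * D)
        # Each column j of F solves the symmetric problem.
        for j in range(m):
            D = 1.0 / sigma[:, j]         # shape (n,)
            F[:, j], _ = nnls(G * D[:, None], X[:, j] * D)
    return G, F
```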

4,797 citations


Journal ArticleDOI
TL;DR: In this paper, the authors survey some of the many results known for the Laplacian matrices of graphs.

1,498 citations


Journal ArticleDOI
TL;DR: The existence conditions are equivalent to Scherer's results but are obtained by a more elementary derivation, and the set of all H∞ controllers is explicitly parametrized in the state space using the positive definite solutions to the LMIs.
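For context, the γ-iteration underlying H∞ synthesis rests on a standard Hamiltonian test: for a stable system (A, B, C) with D = 0, the H∞ norm lies below γ exactly when a certain Hamiltonian matrix has no imaginary-axis eigenvalues. A minimal numpy sketch of the resulting bisection (background material, not the paper's controller parametrization):

```python
import numpy as np

def hinf_norm(A, B, C, tol=1e-6):
    """H-infinity norm of G(s) = C (sI - A)^{-1} B (A Hurwitz, D = 0)
    by bisection on the Hamiltonian eigenvalue test."""
    def below(gamma):
        H = np.block([[A, (B @ B.T) / gamma],
                      [-(C.T @ C) / gamma, -A.T]])
        eig = np.linalg.eigvals(H)
        # ||G||_inf < gamma iff H has no purely imaginary eigenvalues
        return not np.any(np.abs(eig.real) < 1e-9)
    lo, hi = 1e-9, 1.0
    while not below(hi):              # grow the upper bound first
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if below(mid):
            hi = mid
        else:
            lo = mid
    return hi

# G(s) = 1/(s+1) has H-infinity norm 1
print(hinf_norm(np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])))
```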

1,253 citations


Journal ArticleDOI
TL;DR: In this paper, the authors determine distribution functions as matrix elements of light-ray operators and derive the evolution equations for these distribution functions on the basis of the renormalization group equation of the considered operators.
Abstract: The widely used nonperturbative wave functions and distribution functions of QCD are determined as matrix elements of light-ray operators. These operators appear as large momentum limit of nonlocal hadron operators or as summed up local operators in light-cone expansions. Nonforward one-particle matrix elements of such operators lead to new distribution amplitudes describing both hadrons simultaneously. These distribution functions depend, besides other variables, on two scaling variables. They are applied for the description of exclusive virtual Compton scattering in the Bjorken region near forward direction and the two meson production process. The evolution equations for these distribution amplitudes are derived on the basis of the renormalization group equation of the considered operators. In particular, the evolution kernels also follow from the anomalous dimensions of these operators. Relations between different evolution kernels (especially the Altarelli-Parisi and the Brodsky-Lepage kernels) are derived and explicitly checked for the existing two-loop calculations of QCD. The technical basis of these results is the support and analyticity properties of the anomalous dimensions of light-ray operators obtained with the help of the $\alpha$-representation of Green's functions.

967 citations


Journal ArticleDOI
TL;DR: The problem of separating n linearly superimposed uncorrelated signals and determining their mixing coefficients is reduced to an eigenvalue problem which involves the simultaneous diagonalization of two symmetric matrices whose elements are measurable time delayed correlation functions.
Abstract: The problem of separating n linearly superimposed uncorrelated signals and determining their mixing coefficients is reduced to an eigenvalue problem which involves the simultaneous diagonalization of two symmetric matrices whose elements are measurable time delayed correlation functions. The diagonalization matrix can be determined from a cost function whose number of minima is equal to the number of degenerate solutions. Our approach also offers the possibility of separating nonlinear mixtures of signals.
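A minimal sketch of the eigenvalue reduction for the linear case: with equal-time and time-delayed correlation matrices C0 and Cτ estimated from the mixtures, the mixing directions appear as generalized eigenvectors of the pair (Cτ, C0). The function names and the symmetrization step are illustrative assumptions, not the paper's notation.

```python
import numpy as np
from scipy.linalg import eig

def separate(x, tau=1):
    """Separate linear mixtures x (channels x samples) via simultaneous
    diagonalization of two time-delayed correlation matrices."""
    x = x - x.mean(axis=1, keepdims=True)
    T = x.shape[1] - tau
    C0 = x @ x.T / x.shape[1]
    Ct = x[:, :T] @ x[:, tau:T + tau].T / T
    Ct = 0.5 * (Ct + Ct.T)            # symmetrize the lagged correlation
    vals, W = eig(Ct, C0)             # generalized eigenproblem
    return np.real(W).T @ x           # rows: recovered sources (up to scale)

# demo: two sources with distinct autocorrelations, mixed linearly
t = np.arange(20000)
s = np.vstack([np.sin(0.05 * t), np.sign(np.sin(0.013 * t))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
recovered = separate(A @ s)
```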

837 citations


Posted Content
TL;DR: The underlying motivation for maximum-likelihood estimation is explored, the interpretation of the MLE for misspecified probability models is treated, and the conditions under which parameters of interest can be consistently estimated despite misspecification are given.
Abstract: This book examines the consequences of misspecifications ranging from the fundamental to the nonexistent for the interpretation of likelihood-based methods of statistical estimation and inference. Professor White first explores the underlying motivation for maximum-likelihood estimation, treats the interpretation of the maximum-likelihood estimator (MLE) for misspecified probability models, and gives the conditions under which parameters of interest can be consistently estimated despite misspecification, as well as the consequences of misspecification for hypothesis testing and for estimating the asymptotic covariance matrix of the parameters. Although the theory presented in the book is motivated by econometric problems, its applicability is by no means restricted to economics. Subject to defined limitations, the theory applies to any scientific context in which statistical analysis is conducted using approximate models.
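The asymptotic covariance matrix in question is the "sandwich" A⁻¹BA⁻¹ form, which remains valid under misspecification. A toy numpy sketch for a normal working likelihood fit to non-normal data (an illustration of the idea, not an example from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=5000)   # true model is not normal

# QMLE of mu under a N(mu, 1) working likelihood: the sample mean
mu = x.mean()
scores = x - mu                  # per-observation score of the working model
A = -1.0                         # expected Hessian of the working log-density
B = np.mean(scores ** 2)         # variance of the score

naive_var = (-1.0 / A) / x.size            # inverse-Hessian variance: 1/n
sandwich_var = (B / A ** 2) / x.size       # robust variance: Var(x)/n

print(naive_var, sandwich_var)   # sandwich is ~4x larger here (Var(x) = 4)
```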

806 citations


Book
01 Feb 1994
TL;DR: This book develops matrix eigenvalue methods, double bracket isospectral flows, and the singular value decomposition from the viewpoint of dynamical systems and optimization.
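The double bracket flow named in the contents is the isospectral ODE Ḣ = [H, [H, N]], which drives a symmetric matrix toward a diagonal matrix whose entries are ordered according to the diagonal target N. A small sketch under those standard definitions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def bracket(X, Y):
    return X @ Y - Y @ X

n = 5
rng = np.random.default_rng(0)
S = rng.standard_normal((n, n))
H0 = 0.5 * (S + S.T)                     # symmetric initial matrix
N = np.diag(np.arange(n, dtype=float))   # target ordering

def flow(t, h):
    H = h.reshape(n, n)
    return bracket(H, bracket(H, N)).ravel()

sol = solve_ivp(flow, (0.0, 50.0), H0.ravel(), rtol=1e-9, atol=1e-12)
H_inf = sol.y[:, -1].reshape(n, n)

# The flow is isospectral: diag(H_inf) approaches the eigenvalues of H0,
# sorted consistently with the diagonal of N, and off-diagonals vanish.
print(np.round(np.diag(H_inf), 6))
print(np.sort(np.linalg.eigvalsh(H0)))
```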
Abstract: Contents: Matrix Eigenvalue Methods.- Double Bracket Isospectral Flows.- Singular Value Decomposition.- Linear Programming.- Approximation and Control.- Balanced Matrix Factorizations.- Invariant Theory and System Balancing.- Balancing via Gradient Flows.- Sensitivity Optimization.- Linear Algebra.- Dynamical Systems.- Global Analysis.

800 citations


Journal ArticleDOI
TL;DR: A phylogenetic tree, derived from marsupial brain morphology data, is compared to trees depicting the evolution of diet, sociability, locomotion, and habitat in these animals, as well as their taxonomy and geographical relationships.
Abstract: This paper has two complementary purposes: first, to present a method to perform multiple regression on distance matrices, with permutation testing appropriate for path-length matrices representing evolutionary trees, and then, to apply this method to study the joint evolution of brain, behavior and other characteristics in marsupials. To understand the computation method, consider that the dependent matrix is unfolded as a vector y; similarly, consider X to be a table containing the independent matrices, also unfolded as vectors. A multiple regression is computed to express y as a function of X. The parameters of this regression (R2 and partial regression coefficients) are tested by permutations, as follows. When the dependent matrix variable y represents a simple distance or similarity matrix, permutations are performed in the same manner as the Mantel permutational test. When it is an ultrametric matrix representing a dendrogram, we use the double-permutation method (Lapointe and Legendre 1990, 1991). When it is a path-length matrix representing an additive tree (cladogram), we use the triple-permutation method (Lapointe and Legendre 1992). The independent matrix variables in X are kept fixed with respect to one another during the permutations. Selection of predictors can be accomplished by forward selection, backward elimination, or a stepwise procedure. A phylogenetic tree, derived from marsupial brain morphology data (28 species), is compared to trees depicting the evolution of diet, sociability, locomotion, and habitat in these animals, as well as their taxonomy and geographical relationships. A model is derived in which brain evolution can be predicted from taxonomy, diet, sociability and locomotion (R2 = 0.75). A new tree, derived from the "predicted" data, shows a lot of similarity to the brain evolution tree. The meanings of the taxonomy, diet, sociability, and locomotion predictors are discussed and conclusions are drawn about the evolution of brain and behavior in marsupials.
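A minimal sketch of the core procedure for the simple (Mantel-like) case: unfold the distance matrices, regress, and build the null distribution of R2 by permuting the objects of the dependent matrix while keeping the predictors fixed. The double- and triple-permutation variants for ultrametric and path-length matrices are not reproduced here.

```python
import numpy as np

def matrix_regression(D_y, D_xs, n_perm=999, seed=0):
    """Multiple regression on distance matrices with a Mantel-style
    permutation test of R^2. D_y: (n, n) dependent distance matrix;
    D_xs: list of (n, n) independent distance matrices."""
    rng = np.random.default_rng(seed)
    n = D_y.shape[0]
    iu = np.triu_indices(n, k=1)         # unfold the upper triangles
    X = np.column_stack([np.ones(iu[0].size)] + [D[iu] for D in D_xs])

    def r2(y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

    r2_obs = r2(D_y[iu])
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)           # permute objects, not matrix cells
        count += r2(D_y[np.ix_(p, p)][iu]) >= r2_obs
    return r2_obs, (count + 1) / (n_perm + 1)
```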

446 citations


Journal ArticleDOI
TL;DR: It is shown that all conceivable variance matrices can be generated through squeezed thermal states of the n-mode system and their symplectic transforms; the analysis is developed in both the real and the complex forms for variance matrices.
Abstract: We present a complete analysis of variance matrices and quadrature squeezing for arbitrary states of quantum systems with any finite number of degrees of freedom. Basic to our analysis is the recognition of the crucial role played by the real symplectic group Sp(2n,ℝ) of linear canonical transformations on n pairs of canonical variables. We exploit the transformation properties of variance (noise) matrices under symplectic transformations to express the uncertainty-principle restrictions on a general variance matrix in several equivalent forms, each of which is manifestly symplectic invariant. These restrictions go beyond the classically adequate reality, symmetry, and positivity conditions. Towards developing a squeezing criterion for n-mode systems, we distinguish between photon-number-conserving passive linear optical systems and active ones. The former correspond to elements in the maximal compact U(n) subgroup of Sp(2n,ℝ), the latter to noncompact elements outside U(n). Based on this distinction, we motivate and state a U(n)-invariant squeezing criterion applicable to any state of an n-mode system, and explore alternative ways of expressing it. The set of all possible quantum-mechanical variance matrices is shown to contain several interesting subsets or subfamilies, whose definitions are related to the fact that a general variance matrix is not diagonalizable within U(n). Definitions, characterizations, and canonical forms for variance matrices in these subfamilies, as well as general ones, and their squeezing nature, are established. It is shown that all conceivable variance matrices can be generated through squeezed thermal states of the n-mode system and their symplectic transforms. Our formulas are developed in both the real and the complex forms for variance matrices, and ways to pass between them are given.
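The symplectic-invariant restriction alluded to is commonly written V + (i/2)Ω ⪰ 0, with Ω the symplectic metric on n pairs of canonical variables (ħ = 1). A small numerical check under that convention:

```python
import numpy as np

def is_valid_variance_matrix(V, tol=1e-12):
    """Check the uncertainty-principle condition V + (i/2) Omega >= 0
    for a (2n x 2n) variance matrix in (q1, p1, ..., qn, pn) ordering."""
    n = V.shape[0] // 2
    omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    M = V + 0.5j * omega             # Hermitian by construction
    return np.linalg.eigvalsh(M).min() >= -tol

# vacuum state of one mode: V = I/2 saturates the bound
print(is_valid_variance_matrix(0.5 * np.eye(2)))   # True
print(is_valid_variance_matrix(0.2 * np.eye(2)))   # False: shrunk in both quadratures
```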

444 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider Fredholm determinants of such integral operators in the case where the underlying set is a union of intervals, with the determinants thought of as functions of the end-points of the intervals.
Abstract: Orthogonal polynomial random matrix models of N×N hermitian matrices lead to Fredholm determinants of integral operators with kernel of the form $(\varphi(x)\psi(y)-\psi(x)\varphi(y))/(x-y)$. This paper is concerned with the Fredholm determinants of integral operators having kernel of this form and where the underlying set is the union of intervals $J = \cup_{j=1}^{m} (a_{2j-1}, a_{2j})$. The emphasis is on the determinants thought of as functions of the end-points $a_k$.
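For orientation, such determinants can be evaluated numerically by a Nyström-type method: discretize each interval with Gauss-Legendre quadrature and take det(I − K) of the symmetrically weighted kernel matrix. A sketch using the sine kernel, the best-known instance of the form above:

```python
import numpy as np

def fredholm_det(kernel, intervals, n=60):
    """det(I - K) for an integral operator on a union of intervals,
    via Gauss-Legendre quadrature (Nystrom-type discretization)."""
    xs, ws = [], []
    for a, b in intervals:
        t, w = np.polynomial.legendre.leggauss(n)
        xs.append(0.5 * (b - a) * t + 0.5 * (a + b))
        ws.append(0.5 * (b - a) * w)
    x, w = np.concatenate(xs), np.concatenate(ws)
    sw = np.sqrt(w)
    K = sw[:, None] * kernel(x[:, None], x[None, :]) * sw[None, :]
    return np.linalg.det(np.eye(x.size) - K)

def sine_kernel(x, y):
    d = x - y
    return np.sinc(d / np.pi) / np.pi    # sin(d)/(pi d), handles the diagonal

# gap probability of no eigenvalue in (0, 1) u (2, 3) for the sine process
print(fredholm_det(sine_kernel, [(0.0, 1.0), (2.0, 3.0)]))
```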

415 citations


Journal ArticleDOI
TL;DR: The author presents an algorithm for solving polynomial equations using the combination of multipolynomial resultants and matrix computations, which is efficient, robust and accurate.
Abstract: Geometric and solid modelling deal with the representation and manipulation of physical objects. Currently most geometric objects are formulated in terms of polynomial equations, thereby reducing many application problems to manipulating polynomial systems. Solving systems of polynomial equations is a fundamental problem in these geometric computations. The author presents an algorithm for solving polynomial equations. The combination of multipolynomial resultants and matrix computations underlies this efficient, robust and accurate algorithm.
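The resultant machinery can be illustrated in its simplest univariate form: two polynomials share a root exactly when the determinant of their Sylvester matrix vanishes, and the multivariate "hidden variable" trick reduces system solving to eigen-computations on such matrices. A minimal numpy sketch of the univariate building block (not the paper's multipolynomial construction):

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of polynomials p, q given as coefficient lists,
    highest degree first. Its determinant is the resultant of p and q."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                 # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                 # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return S

# p = (x - 1)(x - 2), q = (x - 2)(x + 3): common root x = 2
p = [1.0, -3.0, 2.0]
q = [1.0, 1.0, -6.0]
print(np.linalg.det(sylvester(p, q)))   # ~0, so p and q share a root

q2 = [1.0, 0.0, -6.0]                    # no common root with p
print(np.linalg.det(sylvester(p, q2)))  # nonzero
```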

Posted Content
TL;DR: A noncommutative theory of symmetric functions, based on the notion of quasi-determinant, is presented; it allows the resulting algebra to be endowed with a Hopf structure, which leads to a new method for computing in descent algebras.
Abstract: This paper presents a noncommutative theory of symmetric functions, based on the notion of quasi-determinant. We begin with a formal theory, corresponding to the case of symmetric functions in an infinite number of independent variables. This allows us to endow the resulting algebra with a Hopf structure, which leads to a new method for computing in descent algebras. It also gives a unified reinterpretation of a number of classical constructions. Next, we study the noncommutative analogs of symmetric polynomials. One arrives at different constructions, according to the particular kind of application under consideration. For example, when a polynomial with noncommutative coefficients in one central variable is decomposed as a product of linear factors, the roots of these factors differ from those of the expanded polynomial. Thus, according to whether one is interested in the construction of a polynomial with given roots or in the expansion of a product of linear factors, one has to consider two distinct specializations of the formal symmetric functions. A third type appears when one looks for a noncommutative generalization of applications related to the notion of characteristic polynomial of a matrix. This construction can be applied, for instance, to the noncommutative matrices formed by the generators of the universal enveloping algebra $U(gl_n)$ or of the quantum group $GL_q(n)$.

Journal ArticleDOI
TL;DR: In this paper, an elementary proof is given for localization for linear operators H = Ho + λV, with Ho translation invariant, or periodic, and V (·) a random potential, in energy regimes which for weak disorder (λ → 0) are close to the unperturbed spectrum σ (Ho).
Abstract: An elementary proof is given of localization for linear operators H = Ho + λV, with Ho translation invariant, or periodic, and V (·) a random potential, in energy regimes which for weak disorder (λ → 0) are close to the unperturbed spectrum σ (Ho). The analysis is within the approach introduced in the recent study of localization at high disorder by Aizenman and Molchanov [4]; the localization regimes discussed in the two works being supplementary. Included also are some general auxiliary results enhancing the method, which now yields uniform exponential decay for the matrix elements of the spectrally filtered unitary time evolution operators, with [a, b] in the relevant range.

Journal ArticleDOI
TL;DR: The theory of matrix models is reviewed from the point of view of its relation to integrable hierarchies; discrete 1-matrix, 2-matrix and Kontsevich models are considered in some detail, together with the Ward identities ('W-constraints'), determinantal formulas and continuum limits, taking one kind of model into another.
Abstract: The theory of matrix models is reviewed from the point of view of its relation to integrable hierarchies. Discrete 1-matrix, 2-matrix, 'conformal' (multicomponent) and Kontsevich models are considered in some detail, together with the Ward identities ('W-constraints'), determinantal formulas and continuum limits, taking one kind of model into another. Subtle points and directions of future research are also discussed.

Journal ArticleDOI
TL;DR: A kernel algorithm is presented based on eigenvectors of the ‘kernel’ matrix XX^T YY^T, which is a square, non-symmetric matrix of size N × N, where N is the number of objects.
Abstract: A fast PLS regression algorithm dealing with large data matrices with many variables (K) and fewer objects (N) is presented. For such data matrices the classical algorithm is computer-intensive and memory-demanding. Recently, Lindgren et al. (J. Chemometrics, 7, 45–49 (1993)) developed a quick and efficient kernel algorithm for the case with many objects and few variables. The present paper is focused on the opposite case, i.e. many variables and fewer objects. A kernel algorithm is presented based on eigenvectors of the ‘kernel’ matrix XX^T YY^T, which is a square, non-symmetric matrix of size N × N, where N is the number of objects. Using the kernel matrix and the association matrices XX^T (N × N) and YY^T (N × N), it is possible to calculate all score and loading vectors and hence conduct a complete PLS regression, including diagnostics such as R2. This is done without returning to the original data matrices X and Y. The algorithm is presented in equation form, with proofs of some new properties, and as MATLAB code.
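A sketch of the first step under the abstract's definitions: the X-score t is the leading eigenvector of the N × N kernel matrix XX^T YY^T. The deflation shown and the omission of loading bookkeeping are illustrative simplifications, not the published algorithm in full.

```python
import numpy as np

def kernel_pls_scores(X, Y, n_components=2):
    """Extract PLS score vectors from the N x N kernel matrix
    XX^T YY^T (many variables, few objects). Illustrative sketch;
    the published algorithm also tracks loadings for prediction."""
    XXt, YYt = X @ X.T, Y @ Y.T
    T = []
    for _ in range(n_components):
        vals, vecs = np.linalg.eig(XXt @ YYt)
        t = np.real(vecs[:, np.argmax(np.abs(vals))])
        t /= np.linalg.norm(t)
        T.append(t)
        # deflate the association matrices with the score just found
        P = np.eye(t.size) - np.outer(t, t)
        XXt = P @ XXt @ P
        YYt = P @ YYt @ P
    return np.column_stack(T)
```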

Journal ArticleDOI
01 Oct 1994
TL;DR: An algorithm and implementation for efficient inverse kinematics of a general six-revolute (6R) manipulator is presented, making use of the algebraic properties of the system and a symbolic formulation to reduce the problem to solving a univariate polynomial.
Abstract: In this paper, we present an algorithm and implementation for efficient inverse kinematics for a general six-revolute (6R) manipulator. When stated mathematically, the problem reduces to solving a system of multivariate equations. We make use of the algebraic properties of the system and the symbolic formulation used for reducing the problem to solving a univariate polynomial. However, the polynomial is expressed as a matrix determinant and its roots are computed by reducing to an eigenvalue problem. The other roots of the multivariate system are obtained by computing eigenvectors and substitution. The algorithm involves symbolic preprocessing, matrix computations and a variety of other numerical techniques. The average running time of the algorithm, for most cases, is 11 milliseconds on an IBM RS/6000 workstation. This approach is applicable to inverse kinematics of all serial manipulators.
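The step "the polynomial is expressed as a matrix determinant and its roots are computed by reducing to an eigenvalue problem" can be sketched generically: the roots of det M(x) for a matrix polynomial M(x) = M0 + M1 x + M2 x² are the eigenvalues of a block companion pencil. A small numpy/scipy sketch (a generic linearization, not the paper's 6R-specific construction):

```python
import numpy as np
from scipy.linalg import eig

def detpoly_roots(M0, M1, M2):
    """Roots of det(M0 + x*M1 + x^2*M2) via the linearization
    A v = x B v with a block companion pencil."""
    n = M0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-M0, -M1]])
    B = np.block([[I, Z], [Z, M2]])
    vals = eig(A, B, right=False)
    return vals[np.isfinite(vals)]       # drop eigenvalues at infinity

# sanity check: scalar case det(2 - 3x + x^2) has roots 1 and 2
r = detpoly_roots(np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]]))
print(np.sort(r.real))
```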

Journal ArticleDOI
TL;DR: The theory of neuronal correlation functions in large networks comprising several highly connected subpopulations and obeying stochastic dynamic rules is developed and extended to networks with random connectivity, such as randomly dilute networks.
Abstract: One of the main experimental tools in probing the interactions between neurons has been the measurement of the correlations in their activity. In general, however, the interpretation of the observed correlations is difficult since the correlation between a pair of neurons is influenced not only by the direct interaction between them but also by the dynamic state of the entire network to which they belong. Thus a comparison between the observed correlations and the predictions from specific model networks is needed. In this paper we develop a theory of neuronal correlation functions in large networks comprising several highly connected subpopulations and obeying stochastic dynamic rules. When the networks are in asynchronous states, the cross correlations are relatively weak, i.e., their amplitude relative to that of the autocorrelations is of order 1/N, N being the size of the interacting populations. Using the weakness of the cross correlations, general equations that express the matrix of cross correlations in terms of the mean neuronal activities and the effective interaction matrix are presented. The effective interactions are the synaptic efficacies multiplied by the gain of the postsynaptic neurons. The time-delayed cross-correlation matrix can be expressed as a sum of exponentially decaying modes that correspond to the (nonorthogonal) eigenvectors of the effective interaction matrix. The theory is extended to networks with random connectivity, such as randomly dilute networks. This allows for a comparison between the contribution from the internal common input and that from the direct interactions to the correlations of monosynaptically coupled pairs. A closely related quantity is the linear response of the neurons to external time-dependent perturbations. We derive the form of the dynamic linear response function of neurons in the above architecture in terms of the eigenmodes of the effective interaction matrix. The behavior of the correlations and the linear response when the system is near a bifurcation point is analyzed. Near a saddle-node bifurcation, the correlation matrix is dominated by a single slowly decaying critical mode. Near a Hopf bifurcation the correlations exhibit weakly damped sinusoidal oscillations. The general theory is applied to the case of a randomly dilute network consisting of excitatory and inhibitory subpopulations, using parameters that mimic the local circuit of 1 mm³ of the rat neocortex. Both the effect of dilution as well as the influence of a nearby bifurcation to an oscillatory state are demonstrated.
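The link between the correlation matrix and the effective interaction matrix can be sketched for a linearized rate network: fluctuations obeying δṙ = (−I + J)δr + noise have a stationary covariance given by a Lyapunov equation, and time-delayed correlations decay along the eigenmodes of −I + J. A toy sketch under these assumed linear dynamics (not the paper's full stochastic model):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

rng = np.random.default_rng(0)
N = 50
J = rng.standard_normal((N, N)) / np.sqrt(N) * 0.5   # effective interactions
A = -np.eye(N) + J                                    # stable linearized dynamics
D = np.eye(N)                                         # noise covariance

# stationary covariance C0 solves A C0 + C0 A^T + D = 0
C0 = solve_continuous_lyapunov(A, -D)

# time-delayed correlations: C(tau) = expm(A * tau) @ C0 for tau >= 0,
# a sum of exponentially decaying modes of the effective interaction matrix
tau = 2.0
C_tau = expm(A * tau) @ C0
```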

Journal ArticleDOI
TL;DR: In this paper, an efficient scheme for the numerical calculation of hydrodynamic interactions of many spheres in Stokes flow is presented; both the friction and mobility matrices are found from the solution of a set of coupled equations.
Abstract: An efficient scheme is presented for the numerical calculation of hydrodynamic interactions of many spheres in Stokes flow. The spheres may have various sizes, and are freely moving or arranged in rigid arrays. Both the friction and mobility matrices are found from the solution of a set of coupled equations. The Stokesian dynamics of many spheres and the friction and mobility tensors of polymers and proteins may be calculated accurately at a modest expense of computer memory and time. The transport coefficients of suspensions can be evaluated by use of periodic boundary conditions.
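For orientation, the far-field part of a mobility matrix can be written down directly with the Rotne-Prager-Yamakawa tensor for equal, non-overlapping spheres. This sketch builds that matrix only; the paper's scheme includes higher multipoles and many-body corrections that are not reproduced here.

```python
import numpy as np

def rpy_mobility(centers, a, eta):
    """Far-field Rotne-Prager-Yamakawa mobility matrix (3N x 3N) for
    N equal spheres of radius a in a fluid of viscosity eta.
    Valid for non-overlapping spheres; a sketch, not the paper's method."""
    N = len(centers)
    M = np.zeros((3 * N, 3 * N))
    I3 = np.eye(3)
    for i in range(N):
        M[3*i:3*i+3, 3*i:3*i+3] = I3 / (6 * np.pi * eta * a)   # self-mobility
        for j in range(i + 1, N):
            r = centers[i] - centers[j]
            d = np.linalg.norm(r)
            rr = np.outer(r, r) / d**2
            blk = (1 + 2*a**2 / (3*d**2)) * I3 + (1 - 2*a**2 / d**2) * rr
            blk /= 8 * np.pi * eta * d
            M[3*i:3*i+3, 3*j:3*j+3] = blk
            M[3*j:3*j+3, 3*i:3*i+3] = blk
    return M

# velocities from forces: v = M f (three spheres settling under gravity)
centers = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 6.0, 0.0]])
M = rpy_mobility(centers, a=1.0, eta=1.0)
v = M @ np.tile([0.0, 0.0, -1.0], 3)
```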

Journal ArticleDOI
TL;DR: The classical (CGS) and modified (MGS) Gram-Schmidt orthogonalizations are among the fundamental procedures in linear algebra; they are equivalent to the factorization A = Q1R, where Q1 ∈ R^{m×n} has orthonormal columns and R is upper triangular.
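A compact sketch of the modified variant, which is the numerically preferred one of the two:

```python
import numpy as np

def mgs(A):
    """Modified Gram-Schmidt: A (m x n, full column rank) -> Q1, R
    with A = Q1 @ R, Q1 having orthonormal columns, R upper triangular."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        # orthogonalize the remaining columns against q_k immediately:
        # this reordering is what distinguishes MGS from CGS numerically
        R[k, k+1:] = Q[:, k] @ A[:, k+1:]
        A[:, k+1:] -= np.outer(Q[:, k], R[k, k+1:])
    return Q, R

A = np.random.default_rng(0).standard_normal((6, 4))
Q, R = mgs(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(4)))
```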

Journal ArticleDOI
TL;DR: A method of solving large sparse systems of homogeneous linear equations over GF(2), the field with two elements, is proposed by modifying an algorithm due to Wiedemann; the result is competitive with structured Gaussian elimination in terms of time and has much lower space requirements.
Abstract: We propose a method of solving large sparse systems of homogeneous linear equations over GF(2), the field with two elements. We modify an algorithm due to Wiedemann. A block version of the algorithm allows us to perform 32 matrix-vector operations for the cost of one. The resulting algorithm is competitive with structured Gaussian elimination in terms of time and has much lower space requirements. It may be useful in the last stage of integer factorization. We address here the problem of solving a large sparse system of homogeneous linear equations over GF(2), the field with two elements. One important application, which motivates the present work, arises in integer factorization. During the last stage of most integer factorization algorithms, we are presented with a large sparse integer matrix and are asked to find linear combinations of the columns of this matrix which vanish modulo 2. For example [7], the matrix may have 100,000 columns, with an average of 15 nonzero entries per column. For this application we would like to obtain several solutions, because a given solution will lead to a nontrivial factorization with probability 1/2; with n independent solutions, our probability of finding a factorization rises to 1 − 2^{−n}. Structured Gaussian elimination can be used [7], but as problems get larger, it may become infeasible to store the matrices obtained in the intermediate stages of Gaussian elimination. The Wiedemann algorithm [9, 7] has smaller storage requirements (one need only store a few vectors and an encoding of a sparse matrix, not a dense matrix as occurs in Gaussian elimination after fill-in), and it may have fewer computational steps (since one takes advantage of the sparseness of the matrix). But its efficiency is hampered by the fact that the algorithm acts on only one bit at a time. In the present paper we work with blocks of vectors at a single time. By treating 32 vectors at a time (on a machine with 32-bit words), we can perform 32 matrix-vector products at once, thus considerably decreasing the cost of indexing. This can be viewed as a block Wiedemann algorithm. The main technical difficulty is in obtaining the correct generalization of the Berlekamp-Massey algorithm to a block version, namely, a multidimensional version of the extended Euclidean algorithm.
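The "32 matrix-vector products for the cost of one" observation is easy to sketch: store 32 vectors over GF(2) as one machine word per coordinate, so a sparse matrix-vector product becomes word-wide XORs. A Python sketch with uint32 words standing in for the machine words (data layout invented for illustration):

```python
import numpy as np

def block_matvec(rows, v):
    """Apply a sparse GF(2) matrix to 32 vectors at once.
    rows[i] lists the nonzero column indices of row i; v[j] is a
    32-bit word whose bit b is coordinate j of vector b."""
    out = np.zeros(len(rows), dtype=np.uint32)
    for i, cols in enumerate(rows):
        acc = np.uint32(0)
        for j in cols:
            acc ^= v[j]            # one XOR advances all 32 products
        out[i] = acc
    return out

# a tiny sparse matrix over GF(2): row i lists its nonzero columns
rows = [[0, 2], [1], [0, 1, 2]]
rng = np.random.default_rng(0)
v = rng.integers(0, 2**32, size=3, dtype=np.uint32)   # 32 packed vectors
w = block_matvec(rows, v)
```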

Journal ArticleDOI
TL;DR: This new mutation data matrix is found to be very different from matrices calculated from general sequence sets which are biased towards water‐soluble globular proteins, and the differences are discussed in the context of specific structural requirements of membrane spanning segments.

Journal ArticleDOI
TL;DR: In this article, the authors extended the study of Wishart and multivariate beta distributions to the singular case, where the rank is below the dimensionality and the usual conjugacy is extended to this case.
Abstract: This paper extends the study of Wishart and multivariate beta distributions to the singular case, where the rank is below the dimensionality. The usual conjugacy is extended to this case. A volume element on the space of positive semidefinite $m \times m$ matrices of rank $n < m$ is introduced and some transformation properties established. The density function is found for all rank-$n$ Wishart distributions as well as the rank-1 multivariate beta distribution. To do that, the Jacobian for the transformation to the singular value decomposition of general $m \times n$ matrices is calculated. The results in this paper are useful in particular for updating a Bayesian posterior when tracking a time-varying variance-covariance matrix.
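A quick illustration of the object in question: with n < m normal draws, the scatter matrix is Wishart of deficient rank n, living on the space of positive semidefinite m × m matrices of rank n.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3                      # dimension m, degrees of freedom n < m
Sigma = np.eye(m)

# W = Z Z^T with Z an (m x n) matrix whose columns are N(0, Sigma)
Z = np.linalg.cholesky(Sigma) @ rng.standard_normal((m, n))
W = Z @ Z.T

print(np.linalg.matrix_rank(W))          # n = 3: a singular Wishart draw
print(np.linalg.eigvalsh(W).round(6))    # m - n zero eigenvalues
```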

Journal ArticleDOI
TL;DR: In this article, a generalized form of the method-of-moments technique is presented for a diverse class of arbitrarily shaped three-dimensional scatterers, which may be totally or partially penetrable.
Abstract: We outline a generalized form of the method-of-moments technique. Integral equation formulations are developed for a diverse class of arbitrarily shaped three-dimensional scatterers. The scatterers may be totally or partially penetrable. Specific cases examined are scatterers with surfaces that are perfectly conducting, dielectric, resistive, or magnetically conducting or that satisfy the Leontovich (impedance) boundary condition. All the integral equation formulations are transformed into matrix equations expressed in terms of five general Galerkin (matrix) operators. This allows a unified numerical solution procedure to be implemented for the foregoing hierarchy of scatterers. The operators are general and apply to any arbitrarily shaped three-dimensional body. The operator calculus of the generalized approach is independent of geometry and basis or testing functions used in the method-of-moments approach. Representative numerical results for a number of scattering geometries modeled by triangularly faceted surfaces are given to illustrate the efficacy and the versatility of the present approach.
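The method-of-moments discretization pattern the abstract describes can be seen in its simplest classical instance: a straight thin wire held at fixed potential, pulse basis functions, and point matching, giving a dense matrix equation for the charge density. This is an illustrative textbook example, not the paper's triangular-facet formulation.

```python
import numpy as np

# charged thin wire of length L, radius a, held at V = 1 volt
L, a, N = 1.0, 1e-3, 100
eps0 = 8.854e-12
dx = L / N
xm = (np.arange(N) + 0.5) * dx           # segment midpoints (match points)

# MoM matrix: potential at x_m due to unit charge density on segment n
Z = np.empty((N, N))
for m_ in range(N):
    r = np.abs(xm[m_] - xm)
    Z[m_] = dx / (4 * np.pi * eps0 * np.where(r > 0, r, 1.0))
    # self term: exact potential at the surface of a uniformly charged segment
    Z[m_, m_] = 2 * np.arcsinh(dx / (2 * a)) / (4 * np.pi * eps0)

rho = np.linalg.solve(Z, np.ones(N))      # line charge density per segment
print(rho.sum() * dx)                     # total charge = capacitance (V = 1)
```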

Journal ArticleDOI
TL;DR: The current study deals mainly with the problem of mathematically classifying the conversion rates into balanceable and calculable rates, given the subset of measured rates, and shows that a simple matrix equation can be derived that contains the vector of measured conversion rates and the redundancy matrix R.
Abstract: Measurements provide the basis for process monitoring and control as well as for model development and validation. Systematic approaches to increase the accuracy and credibility of the empirical data set are therefore of great value. In (bio)chemical conversions, linear conservation relations such as the balance equations for charge, enthalpy, and/or chemical elements, can be employed to relate conversion rates. In a practical situation, some of these rates will be measured (in effect, be calculated directly from primary measurements of, e.g., concentrations and flow rates), while others can or cannot be calculated from the measured ones. When certain measured rates can also be calculated from other measured rates using the set of equations, the accuracy and credibility of the measured rates can indeed be improved by, respectively, balancing and gross error diagnosis. The balanced conversion rates are more accurate, and form a consistent set of data, which is more suitable for further application (e.g., to calculate nonmeasured rates) than the raw measurements. Such an approach has drawn attention in previous studies. The current study deals mainly with the problem of mathematically classifying the conversion rates into balanceable and calculable rates, given the subset of measured rates. The significance of this problem is illustrated with some examples. It is shown that a simple matrix equation can be derived that contains the vector of measured conversion rates and the redundancy matrix R. Matrix R plays a predominant role in the classification problem. In supplementary articles, the significance of the redundancy matrix R for an improved gross error diagnosis approach will be shown. In addition, efficient equations have been derived to calculate the balanceable and/or calculable rates. The method is completely based on matrix algebra (principally different from the graph-theoretical approach), and it is easily implemented into a computer program.
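A sketch of the central construction under standard element-balance notation (assumed here, not taken verbatim from the paper): split the conservation relations E r = 0 into measured and unmeasured parts, eliminate the unmeasured rates, and the rows that remain form a redundancy matrix R, with R r_m ≈ 0 as the consistency check used for balancing and gross error diagnosis.

```python
import numpy as np

def redundancy_matrix(Em, Eu, tol=1e-10):
    """Redundancy matrix for conservation relations Em r_m + Eu r_u = 0.
    Rows of R span the constraints on measured rates alone: R r_m = 0.
    Standard element-balance construction; notation assumed."""
    # eliminate unmeasured rates: project onto the left null space of Eu
    U, s, _ = np.linalg.svd(Eu)
    rank = np.sum(s > tol)
    K = U[:, rank:].T          # K @ Eu = 0
    return K @ Em

# toy network: 4 rates, 2 element balances, rates 0-2 measured, rate 3 not
E = np.array([[1.0, -1.0, 0.0, 1.0],
              [0.0,  1.0, -1.0, 1.0]])
Em, Eu = E[:, :3], E[:, 3:]
R = redundancy_matrix(Em, Eu)
r_m = np.array([1.0, 2.0, 3.0])     # measured rates (consistent here)
print(R @ r_m)                      # residual ~0: no gross error detected
```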

Journal ArticleDOI
TL;DR: Calculations showed that a single nuclear magnetic resonance (NMR) experiment with 13C-labeled glucose should be able to uniquely determine two important TCA cycle flux ratios, demonstrating how in vitro NMR can be used to study cellular metabolism or to validate flux estimates obtained from other methods.
Abstract: The modeling of isotope distributions can be used to evaluate intracellular fluxes and to investigate cellular metabolism. A method of modeling isotope distributions in biochemical networks that addresses some of the shortcomings of conventional methods is presented. Matrix equations representing steady state isotope balances are formulated for each metabolite and solved iteratively via computer. The key feature of this method is the use of atom mapping matrices, which decouple the generation of the steady state equations from the details of the transfer of carbon atoms from reactants to products. The use of atom mapping matrices results in a clear, intuitive description of metabolic networks that is easy to develop, check, and modify. A network representing energy metabolism in a hybridoma cell line was developed and presented as an example of the method. A program that uses the atom mapping matrix method to calculate the isotope distribution in the network as a function of the intracellular fluxes was written. Calculations showed that a single nuclear magnetic resonance (NMR) experiment with 13C-labeled glucose should be able to uniquely determine two important TCA cycle flux ratios. These results demonstrate how in vitro NMR can be used to study cellular metabolism or to validate the flux estimates obtained from other methods.
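The atom mapping matrix idea can be sketched on a toy network invented for illustration: each reaction carries a 0/1 matrix mapping reactant carbon positions to product positions, and the steady-state fractional labeling of each metabolite is the flux-weighted average of its mapped inputs, solved by fixed-point iteration.

```python
import numpy as np

# hypothetical 2-carbon toy network: substrate S -> A -> B, with some
# B exchanging back to A; the A -> B step swaps the two carbon atoms
swap = np.array([[0.0, 1.0], [1.0, 0.0]])   # atom mapping matrix of A -> B
ident = np.eye(2)                            # mappings with no rearrangement
vs, ve = 1.0, 0.5                            # substrate flux, exchange flux

x_S = np.array([1.0, 0.0])                   # S labeled at carbon 1 only
x_A = np.zeros(2)
x_B = np.zeros(2)

# iterate the steady-state isotope balances until self-consistent:
#   (vs + ve) x_A = vs * ident @ x_S + ve * ident @ x_B
#   x_B = swap @ x_A
for _ in range(200):
    x_A = (vs * ident @ x_S + ve * ident @ x_B) / (vs + ve)
    x_B = swap @ x_A

print(x_A, x_B)   # the label spreads over both carbons via the cycle
```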

Book
01 Sep 1994
Polynomial and Matrix Computations, Vol. 1: Fundamental Algorithms.

Journal ArticleDOI
TL;DR: A recurrence relation is presented for the computation of a basis for the corresponding linear solution space of these approximants; these methods result in fast (and superfast) reliable algorithms for the inversion of striped Hankel, layered Hankel, and (rectangular) block-Hankel matrices.
Abstract: Recently, a uniform approach was given by B. Beckermann and G. Labahn [Numer. Algorithms, 3 (1992), pp. 45-54] for different concepts of matrix-type Pade approximants, such as descriptions of vector and matrix Pade approximants along with generalizations of simultaneous and Hermite Pade approximants. The considerations in this paper are based on this generalized form of the classical scalar Hermite Pade approximation problem, power Hermite Pade approximation. In particular, this paper studies the problem of computing these new approximants. A recurrence relation is presented for the computation of a basis for the corresponding linear solution space of these approximants. This recurrence also provides bases for particular subproblems. This generalizes previous work by Van Barel and Bultheel and, in a more general form, by Beckermann. The computation of the bases has complexity ${\cal O}(\sigma^{2})$, where $\sigma$ is the order of the desired approximant and requires no conditions on the input data. A second algorithm using the same recurrence relation along with divide-and-conquer methods is also presented. When the coefficient field allows for fast polynomial multiplication, this second algorithm computes a basis in the superfast complexity ${\cal O}(\sigma \log^{2} \sigma)$. In both cases the algorithms are reliable in exact arithmetic. That is, they never break down, and the complexity depends neither on any normality assumptions nor on the singular structure of the corresponding solution table. As a further application, these methods result in fast (and superfast) reliable algorithms for the inversion of striped Hankel, layered Hankel, and (rectangular) block-Hankel matrices.
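In the scalar special case, the objects being computed reduce to classical Pade approximants, which are determined by a small Toeplitz-structured linear system on the series coefficients. A minimal numpy sketch of that base case (the paper's contribution, the reliable recurrence over all Hermite Pade subproblems, is not reproduced):

```python
import numpy as np
from math import factorial

def pade(c, m, n):
    """[m/n] Pade approximant p/q of the series sum_k c[k] x^k,
    normalized so q(0) = 1. Requires len(c) >= m + n + 1."""
    # order conditions at x^{m+1}..x^{m+n} determine the denominator q
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        for l in range(1, n + 1):
            if 0 <= m + i - l < len(c):
                A[i - 1, l - 1] = c[m + i - l]
    b = -np.array([c[m + i] for i in range(1, n + 1)])
    q = np.concatenate([[1.0], np.linalg.solve(A, b)])
    # numerator p from the low-order coefficients of c(x) * q(x)
    p = np.array([sum(c[k - j] * q[j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return p, q

c = [1.0 / factorial(k) for k in range(8)]        # series of exp(x)
p, q = pade(c, 3, 3)
x = 0.3
print(np.polyval(p[::-1], x) / np.polyval(q[::-1], x), np.exp(x))
```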

01 Jan 1994
TL;DR: A unified approach to phase and cross-talk calibration of polarimetric data which can be applied to calibrating scattering matrix data or to extraction of the descriptors of distributed targets is described, suggesting that current methods of symmetrization are not optimal.
Abstract: A unified approach to phase and cross-talk calibra- tion of polarimetric data which can be applied to calibrating scattering matrix data or to extraction of the descriptors of distributed targets is described. It relies on the scene being dominated by targets with uncorrelated like and cross-polarized backscattering coefficients, but provides cross-talk calibration of targets for which this is not true. The algorithm needs un- symmetrized data, but uses only quantities derived from the covariance matrix of large areas. It makes no assumptions about system reciprocity, permits ready interpretation of the terms in the calibration procedure, allows comparison of the relative magnitude of the system-induced mixing of terms in the observed covariance matrix, is noniterative, and produces indicators which allow testing of whether it meets its own underlying assumptions. The linear distortion model is shown to lead to an inconsis- tent system of equations; this inconsistency can be removed by introducing an extra parameter which has properties expected of system noise. The modulus of the copolarized correlation coefficient, which is important in polarimetric classification and as a phase descriptor, is shown to be invariant under all effects embodied in the linear distortion model. Calibration of the scattering matrix data is based on a minimum least squares principle. This suggests that current methods of symmetrization are not optimal. The same analysis shows that estimates of parameters needed to form an equivalent reciprocal system are also nonoptimal. The method is more general than the well-known van Zyl algorithm for cross-talk removal, and permits an analysis of the conditions under which the van Zyl algorithm will yield valid results. Correction of phase distortion induced by channel imbalance is treated as an optional extra step relying on a known HH-VV phase difference in some region of the image. Results from the algorithm are discussed using scattering matrix data from the 1989 MAESTRO campaign.

Journal ArticleDOI
TL;DR: The Parallel Universal Matrix Multiplication Algorithms (PUMMA) described in this paper provide a parallel implementation of the Level 3 BLAS matrix multiplication routine on distributed memory concurrent computers, including both non-transposed and transposed multiplication routines.
Abstract: This paper describes the Parallel Universal Matrix Multiplication Algorithms (PUMMA) on distributed memory concurrent computers. The PUMMA package includes not only the non-transposed matrix multiplication routine C = A·B, but also transposed multiplication routines C = A^T·B, C = A·B^T, and C = A^T·B^T, for a block scattered data distribution. The routines perform efficiently for a wide range of processor configurations and block sizes. The PUMMA together provide the same functionality as the Level 3 BLAS routine xGEMM. Details of the parallel implementation of the routines are given, and results are presented for runs on the Intel Touchstone Delta computer.
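The computational pattern behind such algorithms can be emulated serially: the multiply proceeds as a sequence of rank-k updates, each built from a column panel of A and a row panel of B, which are the pieces broadcast among processors in the parallel version. A toy sketch of that skeleton (illustrative only; PUMMA's block-scattered distribution and transpose variants are not modeled):

```python
import numpy as np

def panel_matmul(A, B, kb=32):
    """C = A @ B as a sequence of rank-kb outer-product updates, the
    serial skeleton of distributed matrix multiplication: each step
    uses one column panel of A and one row panel of B."""
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n))
    for s in range(0, k, kb):
        C += A[:, s:s + kb] @ B[s:s + kb, :]   # rank-kb update
    return C

rng = np.random.default_rng(0)
A, B = rng.standard_normal((100, 70)), rng.standard_normal((70, 90))
print(np.allclose(panel_matmul(A, B), A @ B))
```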

Journal ArticleDOI
01 May 1994-Infor
TL;DR: In this paper, rank reversal in the Analytic Hierarchy Process (AHP) is avoided when the output of the process is properly defined as a weight-ratio matrix (rather than a normalized-weight vector) and multiplicative procedures are used.
Abstract: We analyze the Belton and Gear rank reversal problem within an axiomatic framework for deriving consistent weight ratios from pairwise ratio matrices and aggregating weights and ratio matrices. We show that rank reversal in the Analytic Hierarchy Process (AHP) is avoided when the output of the process is properly redefined as a weight-ratio matrix (rather than a normalized-weight vector) and multiplicative procedures – the geometric mean and the weighted-geometric-mean aggregation rule – which preserve the underlying mathematical structures are used.
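A sketch of the multiplicative procedure the abstract advocates: derive weights from a pairwise ratio matrix by row geometric means and report the weight-ratio matrix w_i/w_j, which needs no normalization and is unchanged when alternatives are added or deleted, the property behind the avoidance of Belton-Gear rank reversal. This is a schematic illustration of the claim, not the paper's axiomatic development.

```python
import numpy as np

def weight_ratios(P):
    """Geometric-mean weights and weight-ratio matrix from a pairwise
    ratio matrix P (P[i, j] estimates w_i / w_j)."""
    w = np.exp(np.log(P).mean(axis=1))     # row geometric means
    return w, w[:, None] / w[None, :]      # ratios need no normalization

# three alternatives compared pairwise
P = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
w, W = weight_ratios(P)
print(W)        # W[i, j] = w_i / w_j; the ordering is read off directly

# deleting alternative 1 leaves the remaining ratios unchanged
w2, W2 = weight_ratios(P[np.ix_([0, 2], [0, 2])])
print(W2)       # same 4:1 ratio between alternatives 0 and 2
```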