
Showing papers in "ACM Transactions on Mathematical Software in 1982"


Journal ArticleDOI
TL;DR: Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned.
Abstract: An iterative method is given for solving Ax = b and min ‖Ax − b‖₂, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned.
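
LSQR lives on as scipy.sparse.linalg.lsqr, so the interface sketched in the abstract (stopping tolerances, condition estimate) can be exercised directly. A minimal sketch on a random sparse least-squares problem of our own, not a test case from the paper:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = sparse_random(1000, 200, density=0.01, random_state=rng, format="csr")
b = rng.standard_normal(1000)

# lsqr reports the diagnostics the paper derives: istop encodes which
# stopping criterion fired, and acond estimates the condition number of A.
result = lsqr(A, b, atol=1e-8, btol=1e-8)
x, istop, itn, acond = result[0], result[1], result[2], result[6]
print(istop, itn, acond)
```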

4,189 citations


Journal ArticleDOI
TL;DR: This work was supported by Natural Sciences and Engineering Research Council of Canada Grant A8652, by the New Zealand Department of Scientific and Industrial Research, and by the Department of Energy under Contract DE-AT03-76ER72018.
Abstract: Received 4 June 1980; revised 23 September 1981; accepted 28 February 1982. This work was supported by Natural Sciences and Engineering Research Council of Canada Grant A8652; by the New Zealand Department of Scientific and Industrial Research; by U.S. National Science Foundation Grants MCS-7926009 and ECS-8012974; by the Department of Energy under Contract AM03-76SF00326, PA No. DE-AT03-76ER72018; by the Office of Naval Research under Contract N00014-75-C-0267; and by the Army Research Office under Contract DAA29-79-C-0110. Authors' addresses: C. C. Paige, School of Computer Science, McGill University, Montreal, Quebec, Canada H3A 2K6; M. A. Saunders, Department of Operations Research, Stanford University, Stanford, CA 94305.

745 citations


Journal ArticleDOI
Tony F. Chan
TL;DR: An improved version of the original GR-SVD algorithm is presented, which works best for matrices with m >> n, but is more efficient even when m is only slightly greater than n and in some cases can achieve as much as 50 percent savings.
Abstract: The most well-known and widely used algorithm for computing the Singular Value Decomposition (SVD) A = UΣVᵀ of an m × n rectangular matrix A is the Golub-Reinsch algorithm (GR-SVD). In this paper, an improved version of the original GR-SVD algorithm is presented. The new algorithm works best for matrices with m >> n, but is more efficient even when m is only slightly greater than n (usually when m ≳ 2n) and in some cases can achieve as much as 50 percent savings. If the matrix U is explicitly desired, then n² extra storage locations are required, but otherwise no extra storage is needed. The two main modifications are: (1) first triangularizing A by Householder transformations before bidiagonalizing it (this idea seems to be widely known among some researchers in the field, but as far as can be determined, neither a detailed analysis nor an implementation has been published before), and (2) accumulating the left Givens transformations in GR-SVD on an n × n array instead of on an m × n array. A PFORT-verified FORTRAN implementation is included. Comparisons with the EISPACK SVD routine are given.
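
A minimal NumPy sketch of modification (1): triangularize A by Householder QR first, then run Golub-Reinsch (here NumPy's svd) on the small triangular factor. The test dimensions and matrix are our own:

```python
import numpy as np

def svd_tall(A):
    """SVD of a tall m x n matrix (m >> n) via prior triangularization."""
    Q, R = np.linalg.qr(A)           # Householder QR; R is n x n
    Ur, s, Vt = np.linalg.svd(R)     # bidiagonalize and iterate on R only
    return Q @ Ur, s, Vt             # left vectors accumulated on an n x n array

A = np.random.default_rng(1).standard_normal((2000, 50))
U, s, Vt = svd_tall(A)
print(np.allclose(A, (U * s) @ Vt))  # True: A = U diag(s) V^T
```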

241 citations


Journal ArticleDOI
TL;DR: The Taylor series method executes faster and yields a more accurate answer than the standard methods for most of the problems in the test set, and is most attractive for small systems and for stringent accuracy tolerances.
Abstract: Taylor series methods compute a solution to an initial value problem in ordinary differential equations by expanding each component of the solution in a long series. A portable translator program accepts statements of the system of differential equations and produces a portable FORTRAN object code which is then run to solve the system. At each step of the integration, the object program generates the series for each component of the solution, analyzes that series to determine the optimal step, and extends the solution by analytic continuation. The translator is easy to use, yet it is powerful and flexible. The computer time required by this approach consists of time to run the translator plus time to run the object code; CPU time and storage requirements depend on the size and complexity of the system of ODEs. Theoretical estimates and empirical test results are given for Hull's test problems, and comparisons with DVERK and DGEAR from IMSL are given. The computer time for all preprocessing, compilation, and linking is included. The Taylor series method executes faster and yields a more accurate answer than the standard methods for most of the problems in the test set. The Taylor series method is most attractive for small systems and for stringent accuracy tolerances.
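
A hand-coded sketch of what the translator's generated object code does, for the single equation y' = y², y(0) = 1 (exact solution 1/(1 − t)): build the Taylor coefficients by a recurrence, then advance by analytic continuation. A fixed step and order stand in for the paper's adaptive choices, and Python stands in for the generated FORTRAN:

```python
def taylor_step(c0, h, order=20):
    """One analytic-continuation step: build series coefficients, evaluate at h."""
    c = [c0]
    for k in range(order):                       # recurrence from y' = y^2:
        conv = sum(c[i] * c[k - i] for i in range(k + 1))   # Cauchy product (y^2)_k
        c.append(conv / (k + 1))                 # (k+1) c_{k+1} = (y^2)_k
    return sum(ck * h**k for k, ck in enumerate(c))

y, h = 1.0, 0.05
for _ in range(10):                              # integrate to t = 0.5
    y = taylor_step(y, h)
print(y, 1.0 / (1.0 - 0.5))                      # both close to 2.0
```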

220 citations


Journal ArticleDOI
TL;DR: An implementation of sparse ${LDL}^T$ and LU factorization and back-substitution, based on a new scheme for storing sparse matrices, is presented and appears to be as efficient in terms of work and storage as existing schemes.
Abstract: An implementation of sparse ${LDL}^T$ and LU factorization and back-substitution, based on a new scheme for storing sparse matrices, is presented. The new method appears to be as efficient in terms of work and storage as existing schemes. It is more amenable to efficient implementation on fast pipelined scientific computers.
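
The paper's storage scheme is its own; as a hedged stand-in, SciPy's SuperLU interface shows the same factor-once, back-substitute-often pattern on a small sparse system of our own:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# a small sparse system stored in compressed-column form
A = csc_matrix(np.array([[4., 1., 0.],
                         [1., 4., 1.],
                         [0., 1., 4.]]))
lu = splu(A)                 # factor once ...
x = lu.solve(np.ones(3))     # ... then back-substitute per right-hand side
print(np.allclose(A @ x, np.ones(3)))
```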

202 citations


Journal ArticleDOI
TL;DR: ITPACK 2C is a collection of seven FORTRAN subroutines for solving large sparse linear systems by adaptive accelerated iterative algorithms that are the most successful in solving systems with symmetric positive definite or mildly nonsymmetric coefficient matrices.
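
ITPACK's adaptive Chebyshev and conjugate-gradient accelerations are FORTRAN subroutines; as a rough modern stand-in (our own test matrix, not the package itself), a Jacobi-preconditioned conjugate-gradient solve with SciPy shows the kind of accelerated iteration being automated:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

n = 100
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # 1-D Laplacian
M = LinearOperator((n, n), matvec=lambda v: v / A.diagonal())   # Jacobi scaling
x, info = cg(A, np.ones(n), M=M)
print(info)   # 0 signals convergence
```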

160 citations


Journal ArticleDOI
TL;DR: A heuristic algorithm is presented for fast and accurate evaluation of complex Hankel transforms of orders 0 and 1, where minimum relative errors are approximately 10⁻⁶ using 32-bit floating-point words.
Abstract: A heuristic algorithm is presented for fast and accurate evaluation of complex Hankel transforms of orders 0 and 1. Concepts using linear digital convolution are introduced, where Bessel function evaluations are not required. Related convolution is defined over a set of transforms whose integrands are related algebraically. Given any transform argument range, fast lagged convolution is performed in place within a predesigned digital filter. By arranging related and lagged convolutions in matrix form, a whole matrix of complex Hankel transforms is evaluated with a minimum of integrand function calls. For each point in the matrix, an adaptive convolution algorithm (based on function behavior and a given truncation tolerance) further reduces unnecessary evaluations along the decreasing digital-filter tails. The class of integrands (excluding the Bessel factor) must be continuous monotonic decreasing complex (or real) functions of a real variable defined in (0, ∞). Higher integer-order Hankel transforms may be expressed in terms of orders 0 and 1 by recursion, and can be rapidly evaluated by related convolution. Several elementary and practical examples are given to demonstrate convolution accuracy and efficiency (in terms of total function evaluations), where minimum relative errors are approximately 10⁻⁶ using 32-bit floating-point words. (A double-precision real version is also provided, which yields minimum relative errors of approximately 10⁻¹².) Some suggestions are made for future algorithm enhancements. The algorithm is written in portable FORTRAN IV.
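
The recursion mentioned near the end of the abstract is the standard three-term Bessel recurrence, which reduces any integer order to combinations of orders 0 and 1:

```latex
J_{n+1}(x) = \frac{2n}{x} J_n(x) - J_{n-1}(x)
```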

121 citations


Journal ArticleDOI
TL;DR: It is shown that for regular graphs of order n the expected value of the total length of a minimum fundamental-cycle set does not exceed O(n²).
Abstract: The following problem is considered: Given an undirected, connected graph G, find a spanning tree in G such that the sum of the lengths of the fundamental cycles (with respect to this tree) is minimum. This problem, besides being interesting in its own right, is useful in a variety of situations. It is shown that this problem is NP-complete. A number of polynomial-time, heuristic algorithms which yield "good" suboptimal solutions are presented and their performances are discussed. Finally, it is shown that for regular graphs of order n the expected value of the total length of a minimum fundamental-cycle set does not exceed O(n²).
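
The paper's heuristics are more elaborate; the sketch below, a simple heuristic of the same flavor (our own construction, not one of the paper's), tries the BFS spanning tree rooted at each vertex and keeps the smallest total fundamental-cycle length, measuring each cycle by climbing from the endpoints of a non-tree edge to their lowest common ancestor:

```python
from collections import deque

def bfs_tree(adj, root):
    parent, depth = {root: None}, {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in depth:
                parent[v], depth[v] = u, depth[u] + 1
                q.append(v)
    return parent, depth

def total_cycle_length(adj, edges, root):
    parent, depth = bfs_tree(adj, root)
    total = 0
    for u, v in edges:
        if parent.get(u) == v or parent.get(v) == u:
            continue                     # tree edge: no fundamental cycle
        a, b, length = u, v, 1           # the non-tree edge itself
        while a != b:                    # climb to the lowest common ancestor
            if depth[a] < depth[b]:
                a, b = b, a
            a = parent[a]
            length += 1
        total += length
    return total

# 4-cycle plus a chord
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
adj = {i: [] for i in range(4)}
for u, v in edges:
    adj[u].append(v); adj[v].append(u)
print(min(total_cycle_length(adj, edges, r) for r in adj))
```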

115 citations


Journal ArticleDOI
TL;DR: Using efficient subprograms for generating uniform, exponential, and normal deviates, the new algorithm is much faster than all competing methods.
Abstract: Samples from Poisson distributions of mean μ ≥ 10 are generated by truncating suitable normal deviates and applying a correction with low probability. For μ < 10, inversion is substituted. The method is accurate and it can cope with changing parameters μ. Using efficient subprograms for generating uniform, exponential, and normal deviates, the new algorithm is much faster than all competing methods.
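
A hedged sketch of the small-mean branch only: Poisson sampling by inversion (sequential search of the CDF). The paper's normal-truncation branch for μ ≥ 10 and its low-probability correction are omitted:

```python
import math
import random

def poisson_inversion(mu, rng=random):
    """Inversion by sequential search; suitable for small mu (say mu < 10)."""
    u = rng.random()
    k, p = 0, math.exp(-mu)    # p = P(X = 0)
    cdf = p
    while u > cdf:
        k += 1
        p *= mu / k            # P(X = k) from P(X = k - 1)
        cdf += p
    return k

samples = [poisson_inversion(4.2) for _ in range(100000)]
print(sum(samples) / len(samples))   # close to 4.2
```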

105 citations


Journal ArticleDOI
TL;DR: A new FORTRAN implementation of the Gibbs-King method, ACM Algorithm 582, which is portable, faster, more reliable, and which uses less storage, is described.
Abstract: Two effective reordering algorithms for sparse linear systems are ACM Algorithm 508, an implementation of the Gibbs-Poole-Stockmeyer method to reduce matrix bandwidth, and ACM Algorithm 509, an implementation of the Gibbs-King method to reduce the profile of a matrix. Reduction of matrix profile is more often required than bandwidth reduction, but Algorithm 509 is substantially slower than 508. Consequently, the Gibbs-Poole-Stockmeyer algorithm has been recommended and used in contexts where the better profile reduction of the Gibbs-King algorithm is more appropriate. A new FORTRAN implementation of these algorithms, ACM Algorithm 582, which is portable, faster, more reliable, and which uses less storage, is described. In addition, certain unnecessary restrictions on the problem size present in Algorithms 508 and 509 are removed. The new implementation of the Gibbs-King method is much faster than Algorithm 509, generally slightly faster than Algorithm 508, and nearly as fast as the new implementation of the Gibbs-Poole-Stockmeyer method.
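
Neither the Gibbs-Poole-Stockmeyer nor the Gibbs-King method ships with SciPy; reverse Cuthill-McKee, a related bandwidth-reducing reordering that SciPy does provide, shows the effect under discussion on a small matrix of our own:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

A = csr_matrix(np.array([[1, 0, 0, 1],
                         [0, 1, 1, 0],
                         [0, 1, 1, 1],
                         [1, 0, 1, 1]], dtype=float))
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]                 # symmetrically reordered matrix
rows, cols = B.nonzero()
print(np.max(np.abs(rows - cols)))   # bandwidth after reordering
```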

90 citations


Journal ArticleDOI
TL;DR: An item of mathematical software based on a Rosenbrock formula is described; it selects automatically, at every step, an explicit Runge-Kutta formula or a Rosenbrock formula according to the stiffness of the problem, and a cheap, effective estimate of the condition of the linear systems is derived.
Abstract: Rosenbrock formulas have shown promise in research codes for the solution of initial-value problems for stiff systems of ordinary differential equations (ODEs). To help assess their practical value, the author wrote an item of mathematical software based on such a formula. This required a variety of algorithmic and software developments. Those of general interest are reported in this paper. Among them is a way to select automatically, at every step, an explicit Runge-Kutta formula or a Rosenbrock formula according to the stiffness of the problem. Solving linear systems is important to methods for stiff ODEs, and is rather special for Rosenbrock methods. A cheap, effective estimate of the condition of the linear systems is derived. Some numerical results are presented to illustrate the developments.
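
The simplest member of the Rosenbrock family is the one-stage Rosenbrock-Euler method, which replaces the nonlinear solve of implicit Euler by a single linear solve with I - h*gamma*J; the paper's formula, error control, and stiffness switching are far more elaborate. A NumPy sketch on a stiff scalar test problem of our own:

```python
import numpy as np

def rosenbrock_euler_step(f, J, y, h, gamma=1.0):
    """y_{n+1} = y_n + h*k, where (I - h*gamma*J(y_n)) k = f(y_n)."""
    n = y.size
    k = np.linalg.solve(np.eye(n) - h * gamma * J(y), f(y))
    return y + h * k

# stiff test problem y' = -1000 (y - 1), taken with a step far beyond
# the explicit-Euler stability limit
f = lambda y: np.array([-1000.0 * (y[0] - 1.0)])
J = lambda y: np.array([[-1000.0]])
y = np.array([0.0])
for _ in range(20):
    y = rosenbrock_euler_step(f, J, y, h=0.1)
print(y)   # approaches the equilibrium value 1.0
```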

Journal ArticleDOI
TL;DR: The "state of the art" of mathematical software that solves systems of nonlinear equations is evaluated based on a comparison of eight readily available FORTRAN codes.
Abstract: The "state of the art" of mathematical software that solves systems of nonlinear equations is evaluated. The evaluation is based on a comparison of eight readily available FORTRAN codes. Theoretmal and software aspects of the codes, as well as their performance on a carefully designed set of test problems, are used m the evaluation.

Journal ArticleDOI
TL;DR: This subprogram solves a linearly constrained least squares problem: equality constraints EX = F are folded into the least squares system AX = B using a heavy weight (penalty parameter) WT, subject to nonnegativity of the trailing components.
Abstract: This subprogram solves a linearly constrained least squares problem. Suppose there are given matrices E and A of respective dimensions ME by N and MA by N, and vectors F and B of respective lengths ME and MA. The subroutine solves the problem

EX = F (equations to be exactly satisfied),
AX = B (equations to be approximately satisfied, in the least squares sense),
subject to components L+1, ..., N nonnegative.

Any values ME ≥ 0, MA ≥ 0, and 0 ≤ L ≤ N are permitted. The problem is reposed as problem WNNLS:

(WT*E) X = (WT*F)
(  A  )     (  B ) (least squares),
subject to components L+1, ..., N nonnegative.

The subprogram chooses the heavy weight (or penalty parameter) WT. The parameters for WNNLS are
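
A hedged sketch of the reposed weighted problem using scipy.optimize.lsq_linear in place of WNNLS: the equality rows are scaled by a heavy weight WT, and the trailing components are kept nonnegative through bounds (all data below are our own toy values):

```python
import numpy as np
from scipy.optimize import lsq_linear

E = np.array([[1.0, 1.0, 1.0]]);  f = np.array([1.0])      # exact row(s)
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]]);  b = np.array([0.5, 0.8])  # least-squares rows
WT = 1e8                                                    # heavy weight
C = np.vstack([WT * E, A])
d = np.concatenate([WT * f, b])
L = 1                                  # components L+1..N must be nonnegative
lb = np.array([-np.inf] * L + [0.0] * (3 - L))
res = lsq_linear(C, d, bounds=(lb, np.inf))
print(res.x, E @ res.x)                # E x is (nearly) exactly f
```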


Journal ArticleDOI
TL;DR: The algorithm realized by GPSKCA provides the same mathematical capabilities as provided by REDUCE, and removes some implicit restrictions on the matrices that can be reordered.
Abstract: Given the structure of a symmetric or structurally symmetric sparse matrix, GPSKCA attempts to find a symmetric reordering of the matrix that produces a smaller bandwidth or profile. References [1], [4], [5], and [6] explain in detail the algorithms realized by GPSKCA. This algorithm provides the same mathematical capabilities as provided by REDUCE, Algorithms 508 and 509, but requires less memory and time and removes some implicit restrictions on the matrices that can be reordered. A description of the differences in the implementation and their effects is given in [7]; GPSKCA and REDUCE produce the same bandwidth and profile on all problems for which REDUCE executes successfully. The package of subroutines is invoked by the FORTRAN statement

Journal ArticleDOI
Philip Wolfe
TL;DR: Many numerical methods call for repeated calculation of the first partial derivatives of a given function of several variables; errors in a program for that calculation can be hard to discover.
Abstract: Many numerical methods call for repeated calculation of the first partial derivatives of a given function of several variables. Errors in a program for that calculation can be hard to discover. A procedure is offered for testing the program and locating its errors.
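
Wolfe's procedure is more subtle than plain differencing, but a minimal finite-difference check (our own sketch, not the paper's algorithm) conveys the idea: compare the programmed partials against central differences and report the worst disagreement:

```python
import numpy as np

def check_gradient(f, grad, x, h=1e-6):
    """Worst disagreement between grad(x) and central differences."""
    g = np.asarray(grad(x), dtype=float)
    worst = 0.0
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        fd = (f(x + e) - f(x - e)) / (2.0 * h)
        worst = max(worst, abs(fd - g[i]))
    return worst

f = lambda x: x[0]**2 + np.sin(x[1])
grad = lambda x: np.array([2.0 * x[0], np.cos(x[1])])
print(check_gradient(f, grad, np.array([1.3, 0.7])))   # tiny: partials are right
```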

Journal ArticleDOI
P. Van Dooren
TL;DR: This work was supported by the National Science Foundation under Grant ENG78-10003 and by the U.S. Air Force under Grant AFOSR-79-0094.
Abstract: Received 11 November 1981; revised 2 June 1982; accepted 2 August 1982. This work was supported by the National Science Foundation under Grant ENG78-10003 and by the U.S. Air Force under Grant AFOSR-79-0094. Author's present address: Philips Research Laboratory, Avenue Van Becelaere 2, Box 9, B-1170 Brussels, Belgium.

Journal ArticleDOI
TL;DR: UIMP is a matrix-generator report-writer system designed to aid the realization (generation) of mathematical programming models and also the analysis and reporting of the solutions of such models.
Abstract: UIMP is a matrix-generator report-writer system designed to aid the realization (generation) of mathematical programming models and also the analysis and reporting of the solutions of such models. The data-structure facility of the system allows the underlying structure of a user model to be captured and helps to define such models. This data-structure feature is not only a powerful modeling aid; it also finds use in the analysis of solutions and report generation. The experience of using the system, its shortcomings, and possible extensions are also discussed.

Journal ArticleDOI
Tony F. Chan
TL;DR: The improved algorithm first computes the QR factorization of A using Householder transformations, and then uses the Golub-Reinsch algorithm on R.
Abstract: The algorithm computes the singular value decomposition A = UWVᵀ, where U is an m × min(m, n) matrix containing the left singular vectors, W is a diagonal matrix of size min(m, n) containing the singular values, and V is an n × min(m, n) matrix containing the right singular vectors. Note that m is allowed to be greater than or less than n. For ease of presentation, we assume m to be greater than or equal to n in the following discussion. The algorithm is an improvement of the Golub-Reinsch algorithm [4], which is implemented in subroutines SVD and MINFIT in EISPACK [3] and in subroutine SSVDC in LINPACK [2]. It should be more efficient than the Golub-Reinsch algorithm when m is approximately larger than 2n, as is the case in many least squares applications. The algorithm has a hybrid nature. When m is about equal to n, the Golub-Reinsch algorithm is employed. When the ratio m/n is larger than a threshold value, which is determined by detailed operation counts [1], the improved algorithm is used. The improved algorithm first computes the QR factorization of A using Householder transformations, and then uses the Golub-Reinsch algorithm on R. A further improvement over the Golub-Reinsch algorithm is when the left singular

Journal ArticleDOI
TL;DR: Given a list of triangles whose union is T, together with an approximate integral and error estimate for each triangle, identify the triangle Tk with largest error estimate.
Abstract: (1) Compute an approximate integral A and error estimate E for the original triangle T. (2) If the current values of A and E satisfy E < max{ε, |A|ε}, where ε is the requested tolerance, or if further computation will exceed specified limits, exit. (3) Given a list of triangles whose union is T, together with an approximate integral and error estimate for each triangle, identify the triangle Tk with largest error estimate. (4) Remove Tk from the data list and subtract its contributions from the current values of A and E. (5) Divide Tk into four congruent triangles, append them to the data list, and compute an approximate integral and error estimate for each. Add these contributions to A and E.
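
A toy realization of steps (1)-(5), with rules and tolerances of our own choosing: the three-midpoint rule supplies each triangle's approximate integral, its difference from the centroid rule serves as the error estimate, and a heap picks the triangle with the largest error:

```python
import heapq
import numpy as np

def rules(f, tri):
    a, b, c = tri
    area = 0.5 * abs((b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0]))
    mid = area * (f((a+b)/2) + f((b+c)/2) + f((c+a)/2)) / 3.0  # exact for quadratics
    cen = area * f((a + b + c) / 3.0)
    return mid, abs(mid - cen)       # approximate integral A_k, error estimate E_k

def adaptive_triangle(f, tri, eps=1e-5, limit=10000):
    A, E = rules(f, tri)                                     # step (1)
    heap, uid = [(-E, 0, tri, A, E)], 1
    while E > max(eps, eps * abs(A)) and len(heap) < limit:  # step (2)
        _, _, t, Ak, Ek = heapq.heappop(heap)                # steps (3)-(4)
        A, E = A - Ak, E - Ek
        a, b, c = t
        ab, bc, ca = (a+b)/2, (b+c)/2, (c+a)/2
        for child in [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]:
            Ac, Ec = rules(f, child)                         # step (5)
            heapq.heappush(heap, (-Ec, uid, child, Ac, Ec))
            uid += 1
            A, E = A + Ac, E + Ec
    return A

tri = tuple(np.array(p, dtype=float) for p in [(0, 0), (1, 0), (0, 1)])
print(adaptive_triangle(lambda p: np.exp(p[0] + p[1]), tri))  # exact value is 1
```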

Journal ArticleDOI
TL;DR: SICEDR is a FORTRAN subroutine for improving the accuracy of a computed real eigenvalue and improving or computing the associated eigenvector in O(n²) work, where n is the order of the matrix A.
Abstract: SICEDR is a FORTRAN subroutine for improving the accuracy of a computed real eigenvalue and improving or computing the associated eigenvector. It is first used to generate information during the determination of the eigenvalues by the Schur decomposition technique. In particular, the Schur decomposition technique results in an orthogonal matrix Q and an upper quasi-triangular matrix T, such that A = QTQᵀ. Matrices A, Q, and T and the approximate eigenvalue, say λ, are then used in the improvement phase. SICEDR uses an iterative method similar to iterative improvement for linear systems to improve the accuracy of λ and improve or compute the eigenvector x in O(n²) work, where n is the order of the matrix A.
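
SICEDR exploits the precomputed Schur factors to refine in O(n²) per sweep; as a simpler stand-in for the idea of iteratively refining an eigenpair (at O(n³) dense cost, and not the paper's scheme), Rayleigh quotient iteration sharpens an approximate pair in a couple of sweeps:

```python
import numpy as np

def refine_eigenpair(A, lam, x, sweeps=2):
    """Rayleigh quotient iteration from an approximate eigenpair (lam, x)."""
    n = A.shape[0]
    x = x / np.linalg.norm(x)
    for _ in range(sweeps):
        y = np.linalg.solve(A - lam * np.eye(n), x)   # inverse iteration step
        x = y / np.linalg.norm(y)
        lam = x @ A @ x                               # updated Rayleigh quotient
    return lam, x

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)); A = A + A.T          # symmetric test matrix
w, V = np.linalg.eigh(A)
lam, x = refine_eigenpair(A, w[0] + 1e-4, V[:, 0] + 1e-4)
print(abs(lam - w[0]))                                # near machine precision
```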

Journal ArticleDOI
TL;DR: A generalization of the Neville-Aitken algorithm for extrapolation has been found by Håvie; this scheme has been called the E-algorithm, or the BH-protocol, and with different initializations it yields the Mühlbach-Neville-Aitken (MNA) algorithm, which can be used for the general interpolation problem.
Abstract: Recently a generalization of the Neville-Aitken algorithm for extrapolation has been found by Håvie [10]. This general extrapolation scheme has also been studied by Brezinski [3], who showed that it includes most of the convergence acceleration methods actually known. This algorithm has been called the E-algorithm, or the BH-protocol [15, Chaps. 10 and 11, pp. 175-209]. Of course, after that, the next step was to look for a general interpolation scheme for interpolation by a linear combination of functions forming a Chebyshev system. Such an algorithm was in fact found some years ago by Mühlbach [12, 13]. Except for the initializations, it is the same as the E-algorithm. This algorithm has been called the Mühlbach-Neville-Aitken algorithm (MNA-algorithm). The same algorithm can also be used for the general interpolation problem [4] as described, for example, by Davis [7]. The E-algorithm and its programming are described in Section 2. A FORTRAN subroutine is given with an application. Section 3 is devoted to the MNA-algorithm, while Section 4 deals with least squares approximation and extrapolation.
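
With the classical initializations the E-algorithm reduces to the Neville-Aitken scheme; that special case fits in a few lines and shows the tableau recurrence being generalized:

```python
def neville(xs, ys, t):
    """Value at t of the polynomial interpolating the points (xs[i], ys[i])."""
    p = list(ys)
    n = len(xs)
    for k in range(1, n):                 # column k of the Neville tableau
        for i in range(n - k):
            p[i] = ((t - xs[i]) * p[i + 1] - (t - xs[i + k]) * p[i]) \
                   / (xs[i + k] - xs[i])
    return p[0]

print(neville([0.0, 1.0, 2.0], [1.0, 3.0, 7.0], 1.5))  # 4.75 for 1 + x + x^2
```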

Journal ArticleDOI
TL;DR: An uncorrected version of HQR3 will yield a division by zero in line HQR 1060 of [2] as a result of the test in (1), where the criterion is the sum of the magnitudes of the two diagonals.
Abstract: An uncorrected version of HQR3 will yield a division by zero in line HQR 1060 of [2] as a result of the test in (1). In checking for a zero subdiagonal element, the criterion is the sum of the magnitudes of the two diagonals. As the example shows, when both diagonals are zero, the .LT. test fails for an exactly zero subdiagonal element. Without correction (2), when exchanging two upper real roots with a lower 2 × 2 block, only the bottom root is tested and exchanged, leaving the possibility of misordering of the top root. The example above also demonstrates this problem. Without correction (3), the program would give the wrong diagonals in an unusual case where part of a matrix was being triangularized and the routine failed to converge. Correction (4) is necessary when the superdiagonal element is zero between two real roots, leaving the possibility that the maximum of (P, Q) is P = 0 for Q negative. (If (P, Q) are both zero, no exchange is required and the program properly returns.)

Journal ArticleDOI
TL;DR: This work considers the stability of the system of leveling equations for the two choices of basis in polynomial Chebyshev approximation by the Remez algorithm.
Abstract: In polynomial Chebyshev approximation on an interval by the Remez algorithm, one can use a power basis or a Chebyshev polynomial basis for polynomials. More generally, in rational Chebyshev approximation by the Fraser-Hart-Remez algorithm, one can use a power basis or a Chebyshev polynomial basis for polynomials. We consider in this paper the stability of the system of leveling equations for the two choices of basis. It should be noted that Fraser and Hart [6] and "Computer Approximations" [8, pp. 55-56] advocate the Chebyshev polynomial basis, whereas the ALGOL procedure of Cody, Fraser, and Hart [3] uses a power basis. The Chebyshev polynomials Tk are discussed in the texts of Cheney [1], Davis [4], and Rivlin [9], and by Clenshaw [2]. They are used for approximation on [−1, 1]. For approximation on [0, 1], the shifted Chebyshev polynomials Tk*, given by Clenshaw [2], are used instead. We will assume that approximation problems have been converted, by change of variable if necessary, to approximation on a standard interval. For a square matrix A the condition number cond(A) is defined to be ‖A‖ ‖A⁻¹‖, where ‖·‖ is a matrix norm. Some relations suggesting that the stability of solution of a linear system with matrix A depends on cond(A) are given by Wilkinson [11, formulas (102) and (107)]. The L1 and L∞ matrix norms are given by Wilkinson [11, formulas (12) and (13)].
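
The stability question can be made concrete numerically: compare condition numbers of the collocation matrices for the two bases at Chebyshev points (the size and points below are our own choice):

```python
import numpy as np
from numpy.polynomial import chebyshev

n = 12
x = np.cos(np.pi * (np.arange(n) + 0.5) / n)     # Chebyshev points in [-1, 1]
V_power = np.vander(x, n, increasing=True)       # columns 1, x, ..., x^{n-1}
V_cheb = chebyshev.chebvander(x, n - 1)          # columns T_0, ..., T_{n-1}
print(np.linalg.cond(V_power), np.linalg.cond(V_cheb))
# the Chebyshev-basis matrix is dramatically better conditioned
```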

Journal ArticleDOI
TL;DR: The focus of this study is on one relatively small body of data from selected measurements by K. E. Hillstrom on five nonlinear optimization routines in solving one test problem, starting from each of twenty randomly chosen starting points.
Abstract: Investigations into the comparative performance of mathematical software often involve collection and analysis of data under circumstances where the behavior of the software is not understood and unexpected results are likely to arise. The recently developed statistical techniques of exploratory data analysis are well suited to exposing important regularities of such data. Several of these techniques are explained and illustrated. Nonlinear optimization algorithms are complicated by nature, and data on the performance of some of them provide an opportunity to apply exploratory techniques and other techniques of modern data analysis. The focus of this study is on one relatively small body of data from selected measurements by K. E. Hillstrom on five nonlinear optimization routines in solving one test problem, starting from each of twenty randomly chosen starting points. The variability of performance across optimizers is described, and the effect of starting points is exposed. The concluding discussion examines some aspects of the design issues in studying behavior of mathematical software and related data analysis problems.


Journal ArticleDOI
TL;DR: This discrepancy can be resolved only by changing the computation of z, as z = 1 can result both when c = 0 (when a = 0 and b ≠ 0) and when c = 1 (when a = b = 0).
Abstract: The companion [5] to Algorithm 539 (the BLAS) contains two errors which we discovered in the preparation of [1]. The more serious error occurs in the mathematical specifications for subroutines SROTG and DROTG, which construct Givens plane rotations. For convenience of reference, we include the relevant specifications from [5]. Given a and b, each of these subroutines computes

sigma = sgn(a) if |a| > |b|, or sgn(b) if |b| ≥ |a|; (1a)
r = sigma (a² + b²)^(1/2); (1b)
c = a/r if r ≠ 0, or 1 if r = 0; (1c)
s = b/r if r ≠ 0, or 0 if r = 0; (1d)
z = s if |a| > |b|, 1/c if |b| ≥ |a| and c ≠ 0, or 1 if c = 0. (1e)

If the user later wishes to reconstruct c and s from z, it can be done as follows:

If z = 1, set c = 0 and s = 1.
If |z| < 1, set c = (1 − z²)^(1/2) and s = z. (2)
If |z| > 1, set c = 1/z and s = (1 − c²)^(1/2).

The problem occurs in the computation of z. In particular, when a = b = 0, (1a)-(1e) yield the results r = 0, c = 1, s = 0, and z = 1. However, when c and s are reconstructed from z using (2), the incorrect values c = 0 and s = 1 result. This discrepancy can be resolved only by changing the computation of z, as z = 1 can result both when c = 0 (when a = 0 and b ≠ 0) and when c = 1 (when a = b = 0). Stewart [7] intended that z represent the smaller in magni…
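
A sketch of one consistent repair, assuming the convention z = 0 for the degenerate case a = b = 0 (the published correction should be consulted for the official fix); with this choice the reconstruction (2) returns the identity rotation:

```python
import math

def rotg(a, b):
    """Construct a Givens rotation (c, s) and the single-number encoding z."""
    if a == b == 0.0:
        return 1.0, 0.0, 0.0          # c, s, z: identity rotation, z = 0 (assumed fix)
    sigma = math.copysign(1.0, a) if abs(a) > abs(b) else math.copysign(1.0, b)
    r = sigma * math.hypot(a, b)
    c, s = a / r, b / r
    z = s if abs(a) > abs(b) else (1.0 / c if c != 0.0 else 1.0)
    return c, s, z

def reconstruct(z):
    """The reconstruction (2) from the specification."""
    if z == 1.0:
        return 0.0, 1.0                       # c, s
    if abs(z) < 1.0:
        return math.sqrt(1.0 - z * z), z
    c = 1.0 / z
    return c, math.sqrt(1.0 - c * c)

c, s, z = rotg(0.0, 0.0)
print((c, s), reconstruct(z))   # both now give the identity rotation
```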

Journal ArticleDOI
TL;DR: The initial AARDVARK relied principally on balanced analysis-of-variance algorithms, and approximate statistical methods were applied to treat unbalanced data; an iterative AOV algorithm was later developed by Hemmerle.
Abstract: Some of the work to be described is reminiscent of AARDVARK [5], an analysis-of-variance package, at Iowa State University in the early 1960s. This program was used for most of the large number of analyses processed at Iowa State at that time; however, the program was definitely not transportable, being written in a mix of assembler and FORTRAN for a nonstandard IBM 7074. Subsequent versions, lacking some of its initial capabilities but adding others, were prepared for other machines, including the IBM 360 (see for example [7]). One distinctive feature of the program that greatly aided in its usability was algebraic specification of the statistical model by the user [14], [18]. This facility has now become common in statistical packages. The initial AARDVARK relied principally on balanced analysis-of-variance (AOV) algorithms, and approximate statistical methods were applied to treat unbalanced data. An iterative AOV algorithm was later developed by Hemmerle [12], which permits using balanced analysis-of-variance algorithms to obtain exact statistical results for unbalanced data. This algorithm along with subsequent


Journal ArticleDOI
TL;DR: A new implementation of the Gibbs-Poole-Stockmeyer and Gibbs-King algorithms is available as Algorithm 582, which is faster, more robust, and requires less storage than the previous implementation.
Abstract: A new implementation of the Gibbs-Poole-Stockmeyer and Gibbs-King algorithms is available as Algorithm 582. This new implementation is faster, more robust, and requires less storage than the previous implementation of the Gibbs-Poole-Stockmeyer algorithm [1], distributed as Algorithm 508, and of the Gibbs-King algorithm [2], distributed as Algorithm 509. The mathematical capabilities of Algorithm 582 are identical to those of Algorithms 508 and 509. References [3] and [4] give the implementation details and documentation for the new implementation.