
Showing papers in "Computing in 1987"


Journal ArticleDOI
TL;DR: A shortest augmenting path algorithm for the linear assignment problem that contains new initialization routines and a special implementation of Dijkstra's shortest path method is developed.
Abstract: We develop a shortest augmenting path algorithm for the linear assignment problem. It contains new initialization routines and a special implementation of Dijkstra's shortest path method. For both dense and sparse problems computational experiments show this algorithm to be uniformly faster than the best algorithms from the literature. A Pascal implementation is presented.

1,196 citations
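As an illustration of the shortest augmenting path idea (this is not the paper's Pascal code), the sketch below solves a tiny dense assignment problem with SciPy's `linear_sum_assignment`, a modern solver in the same shortest-augmenting-path family, and cross-checks the optimum by brute force. The cost matrix is an arbitrary example.

```python
# A minimal sketch (not the paper's implementation): solve a small dense linear
# assignment problem with SciPy's linear_sum_assignment and verify the optimum
# by brute-force enumeration over all permutations.
from itertools import permutations

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)        # optimal row -> column assignment
opt = cost[rows, cols].sum()

# Brute force (only feasible for tiny instances).
brute = min(sum(cost[i, p[i]] for i in range(3)) for p in permutations(range(3)))
assert opt == brute
print(rows, cols, opt)                          # e.g. [0 1 2] [1 0 2] 5
```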


Journal ArticleDOI
TL;DR: It is shown that tabu search techniques provide almost optimal colorings of graphs with up to 1000 nodes, with efficiency significantly superior to the well-known simulated annealing method.
Abstract: Tabu search techniques are used for moving step by step towards the minimum value of a function. A tabu list of forbidden moves is updated during the iterations to avoid cycling and becoming trapped in local minima. Such techniques are adapted to graph coloring problems. We show that they provide almost optimal colorings of graphs with up to 1000 nodes and that their efficiency is significantly superior to that of the well-known simulated annealing method.

654 citations
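The following is a minimal tabu-search sketch for k-coloring in the spirit of the approach described above: keep a full (possibly conflicting) k-coloring, repeatedly recolor a conflicting vertex, and temporarily forbid reversing recent moves. The tenure, iteration limit, and test graph are illustrative choices, not the paper's.

```python
# Tabu-search sketch for k-coloring (illustrative parameters, not the paper's).
# Vertices are assumed to be numbered 0..n-1; adj maps a vertex to its neighbours.
import random

def tabu_coloring(adj, k, iters=10_000, tenure=7, seed=0):
    rng = random.Random(seed)
    n = len(adj)
    color = [rng.randrange(k) for _ in range(n)]
    tabu = {}                      # (vertex, color) -> iteration until which it is tabu

    def conflicts(c):
        return sum(1 for u in range(n) for v in adj[u] if v > u and c[u] == c[v])

    best, best_val = list(color), conflicts(color)
    for it in range(iters):
        if best_val == 0:
            break
        # vertices currently involved in a conflict
        conflicted = [u for u in range(n) if any(color[u] == color[v] for v in adj[u])]
        u = rng.choice(conflicted)
        # pick the non-tabu recoloring of u with the fewest conflicts among its neighbours
        cand = []
        for c in range(k):
            if c == color[u] or tabu.get((u, c), -1) >= it:
                continue
            cand.append((sum(1 for v in adj[u] if color[v] == c), c))
        if not cand:
            continue
        _, c = min(cand)
        tabu[(u, color[u])] = it + tenure      # forbid moving u back for a while
        color[u] = c
        val = conflicts(color)
        if val < best_val:
            best, best_val = list(color), val
    return best, best_val

# Example: a 5-cycle is 3-colorable but not 2-colorable.
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(tabu_coloring(cycle5, 3))
```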


Journal ArticleDOI
TL;DR: Three methods for refining estimates of invariant subspaces are shown, by a change of variables, to solve the same equation, the Riccati equation; this viewpoint yields a common convergence theory and suggests a hybrid algorithm combining the advantages of all three.
Abstract: We compare three methods for refining estimates of invariant subspaces, due to Chatelin, Dongarra/Moler/Wilkinson, and Stewart. Even though these methods all apparently solve different equations, we show by changing variables that they all solve the same equation, the Riccati equation. The benefit of this point of view is threefold. First, the same convergence theory applies to all three methods, yielding a single criterion under which the last two methods converge linearly, and a slightly stronger criterion under which the first algorithm converges quadratically. Second, it suggests a hybrid algorithm combining advantages of all three. Third, it leads to algorithms (and convergence criteria) for the generalized eigenvalue problem. These techniques are compared to techniques used in the control systems community.

87 citations
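For orientation, the Riccati equation in question can be stated in a standard block form (the notation below is assumed for illustration, not quoted from the paper): partition A conformally with an approximate invariant subspace and look for a correction P.

```latex
% Standard Riccati formulation for invariant-subspace refinement (block
% notation assumed here for illustration).
\[
A=\begin{pmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{pmatrix},\qquad
A\begin{pmatrix}I\\ P\end{pmatrix}
=\begin{pmatrix}I\\ P\end{pmatrix}\bigl(A_{11}+A_{12}P\bigr)
\;\Longleftrightarrow\;
P A_{11}-A_{22}P = A_{21}-P A_{12}P .
\]
% The residual block A_{21} measures the departure from invariance; the three
% refinement methods differ in how they (approximately) solve this equation for P.
```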


Journal ArticleDOI
TL;DR: A necessary and sufficient criterion is presented under which the property of positivity carries over from the data set to rational quadratic spline interpolants.
Abstract: A necessary and sufficient criterion is presented under which the property of positivity carries over from the data set to rational quadratic spline interpolants. The criterion can always be satisfied if the occurring parameters are properly chosen.

58 citations


Journal ArticleDOI
TL;DR: A branch and bound algorithm is proposed for finding the global optimum of large-scale indefinite quadratic problems over a polytope using separable programming and techniques from concave optimization to obtain approximate solutions.
Abstract: A branch and bound algorithm is proposed for finding the global optimum of large-scale indefinite quadratic problems over a polytope. The algorithm uses separable programming and techniques from concave optimization to obtain approximate solutions. Results on error bounding are given and preliminary computational results using the Cray 1S supercomputer are reported.

54 citations


Journal ArticleDOI
TL;DR: A new programming language called FORTRAN-SC is presented which is closely related to FORTRAN 8x and is particularly suitable for the development of numerical algorithms which deliver highly accurate and automatically verified results.
Abstract: FORTRAN-SC. A Study of a FORTRAN Extension for Engineering/Scientific Computation with Access to ACRITH. A new programming language called FORTRAN-SC is presented which is closely related to FORTRAN 8x. FORTRAN-SC is a FORTRAN extension with emphasis on engineering and scientific computation. It is particularly suitable for the development of numerical algorithms which deliver highly accurate and automatically verified results. The language allows the declaration of functions with arbitrary result type, operator overloading and definition, as well as dynamic arrays. It provides a large number of predefined numerical data types and operators. Programming experiences with the existing compiler have been very encouraging. FORTRAN-SC greatly facilitates programming and in particular the use of the ACRITH subroutine library [14], [15].

45 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a new approximation algorithm for the two-dimensional bin-packing problem based on two one-dimensional bin-packing algorithms, which can also be used in those cases where the output is required to be on-line.
Abstract: We present a new approximation algorithm for the two-dimensional bin-packing problem. The algorithm is based on two one-dimensional bin-packing algorithms. Since the algorithm is of next-fit type, it can also be used in those cases where the output is required to be on-line (e.g. once we open a new bin, we can no longer pack elements into previously opened bins). We give a tight bound for its worst-case behavior and show that this bound depends on the maximal sizes of the items to be packed. Moreover, we also present a probabilistic analysis of this algorithm.

44 citations
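The sketch below shows one simple shelf-based next-fit heuristic in the same spirit as the description above (it is not claimed to be the authors' exact algorithm): one next-fit rule packs items left to right into the current shelf, and a second next-fit rule stacks shelves bottom up into the current bin. Items are (width, height) pairs with sides in (0, 1]; bins are unit squares.

```python
# Shelf-based next-fit sketch for 2-D bin packing (illustrative, not the paper's
# exact algorithm). Items are (width, height) with sides in (0, 1]; bins are unit squares.
def shelf_next_fit(items):
    bins = []                               # each bin: list of shelves; each shelf: list of items
    shelf, shelf_w, shelf_h = [], 0.0, 0.0  # current shelf
    used_h = 0.0                            # height already occupied in the current bin

    def close_shelf():
        nonlocal shelf, shelf_w, shelf_h, used_h
        if not shelf:
            return
        if not bins or used_h + shelf_h > 1.0:
            bins.append([])                 # next-fit on the bin level: never revisit old bins
            used_h = 0.0
        bins[-1].append(shelf)
        used_h += shelf_h
        shelf, shelf_w, shelf_h = [], 0.0, 0.0

    for (w, h) in items:
        # next-fit on the shelf level: if the item does not fit, close the shelf
        if shelf and (shelf_w + w > 1.0 or h > shelf_h):
            close_shelf()
        shelf.append((w, h))
        shelf_w += w
        shelf_h = max(shelf_h, h)
    close_shelf()
    return bins

items = [(0.5, 0.4), (0.3, 0.3), (0.4, 0.6), (0.7, 0.2), (0.6, 0.5)]
packing = shelf_next_fit(items)
print(len(packing), "bins:", packing)
```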


Journal ArticleDOI
Luc Devroye
TL;DR: A short algorithm that can be used to generate random integers with a log-concave distribution (such as the binomial, Poisson, hypergeometric, negative binomial, geometric, logarithmic series or Polya-Eggenberger distributions) is given.
Abstract: We give a short algorithm that can be used to generate random integers with a log-concave distribution (such as the binomial, Poisson, hypergeometric, negative binomial, geometric, logarithmic series or Polya-Eggenberger distributions). The expected time is uniformly bounded over all these distributions. The algorithm can be implemented in a few lines of high level language code.

40 citations


Journal ArticleDOI
TL;DR: A cutting plane algorithm based on Edmonds' complete description of the perfect 2-matching polytope, using the simplex algorithm for the arising LP relaxations, is described; with it, 2-matching problems on complete graphs with up to 1000 nodes have been solved in less than 1 hour of CPU time on a medium speed computer.
Abstract: We describe an implementation of a cutting plane algorithm for the minimum weight perfect 2-matching problem. This algorithm is based on Edmonds' complete description of the perfect 2-matching polytope and uses the simplex algorithm for solving the LP relaxations that arise. Cutting planes are determined by fast heuristics or, if these fail, by an efficient implementation of the Padberg-Rao procedure, specialized for 2-matching constraints. With this algorithm, 2-matching problems on complete graphs with up to 1000 nodes (i.e., 499,500 variables) have been solved in less than 1 hour of CPU time on a medium speed computer.

37 citations


Journal ArticleDOI
Z. Shen, Y. Zhu
TL;DR: Using the “bisection rule” of Moore, a simple algorithm is given which is an interval version of Shubert's iterative method for seeking the global maximum of a function of a single variable defined on a closed interval.
Abstract: Using the “bisection rule” of Moore, a simple algorithm is given which is an interval version of Shubert's iterative method for seeking the global maximum of a function of a single variable defined on a closed interval [a, b]. The algorithm, which is always convergent, can easily be extended to the higher dimensional case. It seems much simpler than, and produces results comparable to, the methods proposed by Shubert and Basso.

31 citations
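A minimal sketch of the interval bisection idea follows; the inclusion function and test problem below are illustrative choices, not taken from the paper. We look for the global maximum of f(x) = 4x - x² on [0, 5], using an interval extension F(a, b) that is guaranteed to enclose the range of f over [a, b] (valid here because a ≥ 0).

```python
# Interval branch-and-bound sketch for a 1-D global maximum (illustrative
# inclusion function and test problem, not the paper's).
import heapq

def F(a, b):
    # natural interval extension of 4x - x^2, valid for 0 <= a <= b
    return (4*a - b*b, 4*b - a*a)

def f(x):
    return 4*x - x*x

def interval_maximize(a, b, tol=1e-8):
    best = f((a + b) / 2)                       # guaranteed lower bound on the maximum
    heap = [(-F(a, b)[1], a, b)]                # max-heap on the interval upper bounds
    while heap:
        neg_ub, lo, hi = heapq.heappop(heap)
        if -neg_ub <= best + tol:               # no remaining box can beat the incumbent
            break
        mid = (lo + hi) / 2
        best = max(best, f(mid))                # improve the lower bound
        for lo2, hi2 in ((lo, mid), (mid, hi)): # "bisection rule": split the most promising box
            ub2 = F(lo2, hi2)[1]
            if ub2 > best + tol:
                heapq.heappush(heap, (-ub2, lo2, hi2))
    return best

print(interval_maximize(0.0, 5.0))              # ~4.0, attained at x = 2
```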


Journal ArticleDOI
TL;DR: A newly developed and implemented method for computing guaranteed error bounds for the solution of hyperbolic initial value problems is described, requiring no a priori knowledge of Lipschitz constants, monotonicity properties or additional error analysis.
Abstract: This article describes a newly developed and implemented method for computing guaranteed error bounds for the solution of hyperbolic initial value problems. The basic concepts (modified fixed point theorems and approximated operators) allow an automatic a posteriori error estimation. Therefore, no a priori knowledge of Lipschitz constants, monotonicity properties or additional error analysis is necessary.

Journal ArticleDOI
TL;DR: In a Monte Carlo simulation experiment, 31 gradient pivot choice criteria for the Simplex-method are tested, and the quality of the (most commonly used) steepest unit ascent method is analysed and compared with the results of other criteria.
Abstract: In a Monte Carlo simulation experiment we test 31 gradient pivot choice criteria for the Simplex-method. Among the several norms used, we look for the one that is best with respect to the required number of iterations and computing time. In particular, the quality of the (most commonly used) steepest unit ascent method is analysed and compared with the results of other criteria.

Journal ArticleDOI
TL;DR: The methods are applied to the computation of acoustic surface waves in layered systems of piezoelectric media where they prove to be powerful.
Abstract: Numerical aspects of the generalized eigenvalue problem A(μ)c = 0 are discussed, where A(μ) is a parameter-dependent matrix. Instead of determinants, the use of the smallest diagonal element of R (resp. U) in the QR (resp. LU) decomposition is recommended. This idea was first used by Kublanovskaya [7]. The behaviour of this smallest diagonal element is considered, and several iterative procedures are constructed and discussed. The methods are applied to the computation of acoustic surface waves in layered systems of piezoelectric media, where they prove to be powerful. The special case where A(μ) is a matrix polynomial is also mentioned.
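The small NumPy illustration below shows the basic idea (the example A(μ) is ours): instead of det A(μ), track the smallest |r_ii| of the QR factorization of A(μ); it dips towards zero exactly where A(μ)c = 0 has a nontrivial solution.

```python
# Illustration of tracking the smallest |r_ii| of the QR factorization of A(mu)
# (example matrix chosen by us, not from the paper).
import numpy as np

A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
B  = np.eye(2)
A  = lambda mu: A0 - mu * B          # with B = I the singular parameters are eig(A0)

def smallest_r(mu):
    R = np.linalg.qr(A(mu), mode='r')
    return np.abs(np.diag(R)).min()

mus  = np.linspace(0.0, 5.0, 2001)
vals = np.array([smallest_r(m) for m in mus])
# local minima of the scan mark candidate parameter values (to be refined further)
idx = np.where((vals[1:-1] < vals[:-2]) & (vals[1:-1] < vals[2:]))[0] + 1
print("candidates:", mus[idx])               # ~1.382 and ~3.618
print("reference :", np.linalg.eigvals(A0))  # (5 ± sqrt(5))/2
```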

Journal ArticleDOI
TL;DR: This paper is devoted to the design of an orthogonal systolic array of n(n+1) elementary processors which can solve any instance of the Algebraic Path Problem within only 5n−2 time steps, compared with the 7n−2 time steps of the hexagonal systolic array of Rote.
Abstract: This paper is devoted to the design of an orthogonal systolic array of n(n+1) elementary processors which can solve any instance of the Algebraic Path Problem within only 5n−2 time steps, compared with the 7n−2 time steps of the hexagonal systolic array of Rote [8].

Journal ArticleDOI
TL;DR: Runge-Kutta-Nyström formulas applicable to the general second order vector initial value problem y″ = f(x, y, y′) are presented and seem to be competitive with some of the best Runge-Kutta methods currently in use.
Abstract: Runge-Kutta-Nyström formulas applicable to the general second order vector initial value problem y″ = f(x, y, y′) are presented. Two families of computational methods requiring five and six evaluations of the function f per integration step are derived. The methods consist of embedded formulas of adjacent order for the solution and its first derivative. This permits a stepping strategy based on error estimates of all components of the numerical solution. From each family of methods a member considered to have good numerical properties has been selected. Some comparisons of these specific new methods with conventional Runge-Kutta techniques have been made, and the Nyström methods studied here seem to be competitive with some of the best Runge-Kutta methods currently in use.

Journal ArticleDOI
TL;DR: Convergence and higher-order convergence rates are obtained, and the algorithms are numerically illustrated by an example of degree 10, with satisfactory numerical results.
Abstract: In this paper we derive five kinds of algorithms for simultaneously finding the zeros of a complex polynomial. Convergence and higher-order convergence rates are established. The algorithms are numerically illustrated by an example of degree 10, and the numerical results are satisfactory.
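For context, the sketch below implements one classical simultaneous-root iteration, the Weierstrass (Durand-Kerner) method; the paper derives five algorithms of this general kind, and this sketch is not claimed to be any particular one of them. It is tested, like the paper's experiment, on a degree-10 polynomial (here z¹⁰ − 1, an illustrative choice).

```python
# Weierstrass / Durand-Kerner simultaneous-root iteration (a classical method of
# this general type, not necessarily one of the paper's five algorithms).
import numpy as np

def weierstrass(coeffs, iters=100, tol=1e-12):
    """coeffs: polynomial coefficients, highest degree first (monic assumed)."""
    n = len(coeffs) - 1
    p = np.poly1d(coeffs)
    # distinct starting points on a spiral (a common, non-critical choice)
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)
    for _ in range(iters):
        w = np.array([p(z[i]) / np.prod(z[i] - np.delete(z, i)) for i in range(n)])
        z = z - w
        if np.max(np.abs(w)) < tol:
            break
    return z

roots = weierstrass([1] + [0]*9 + [-1])        # z^10 - 1 = 0
print(np.sort_complex(roots))                  # the ten 10th roots of unity
print(np.max(np.abs(roots**10 - 1)))           # residual check (small)
```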

Journal ArticleDOI
TL;DR: A method of calculating characteristic polynomials that are optimal with respect to statistical independence of pairs of successive pseudorandom numbers is described.
Abstract: The digital multistep method generates uniform pseudorandom numbers by transforming sequences of integers obtained by multistep recursions. The statistical independence properties of these pseudorandom numbers depend on the characteristic polynomial of the recursion. We describe a method of calculating characteristic polynomials that are optimal with respect to statistical independence of pairs of successive pseudorandom numbers. Tables of such optimal characteristic polynomials for degrees ≤64 are included.

Journal ArticleDOI
TL;DR: In this article, the authors give an overview of the problem and show how to modify an existing algorithm so that it is also useful from a practical point of view; they further show that this algorithm is suitable for parallel computing and demonstrate by means of an example that it can compete with the Bird algorithm, which is almost always applied by engineers but has only a heuristic basis.
Abstract: We give an overview of the problem and then show how to modify an existing algorithm so that it is also useful from a practical point of view. Further, we show that this algorithm is suitable for parallel computing and demonstrate by means of an example that it can compete with the Bird algorithm, which is almost always applied by engineers but has only a heuristic basis.

Journal ArticleDOI
TL;DR: A general method of computer generation of random vectors based on transformations of uniformly distributed vectors is discussed, then applied to build up efficient algorithms for generating q-exponential random vectors and multivariate normal or t-distributed vectors.
Abstract: The paper discusses a general method of computer generation of random vectors based on transformations of uniformly distributed vectors. This method is then applied to build up efficient algorithms for generating q-exponential random vectors and multivariate normal or t-distributed vectors. Comparisons and connections with other similar algorithms are also presented.
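Two textbook instances of the transformation approach are sketched below (a Cholesky factor applied to i.i.d. normals for the multivariate normal, plus a chi-square scaling for the multivariate t); the paper's own algorithms, e.g. for the q-exponential vectors, are not reproduced here.

```python
# Textbook transformation-based generators (illustrative, not the paper's algorithms).
import numpy as np

rng = np.random.default_rng(0)
mu    = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
L = np.linalg.cholesky(Sigma)            # Sigma = L L^T

def mvnormal(n):
    z = rng.standard_normal((n, 2))      # i.i.d. N(0,1) components
    return mu + z @ L.T                  # affine transformation => N(mu, Sigma)

def mvt(n, df):
    z = mvnormal(n) - mu                 # centred normal vectors
    s = np.sqrt(rng.chisquare(df, size=(n, 1)) / df)
    return mu + z / s                    # multivariate t with df degrees of freedom

x = mvnormal(100_000)
print(np.cov(x, rowvar=False))           # should be close to Sigma
```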

Journal ArticleDOI
TL;DR: For the problem of approximating given data sets by convex cubic C1-splines, a dual program is constructed which is unconstrained, so that an efficient computational treatment is possible.
Abstract: In the present paper the problem of approximating given data sets by convex cubic C1-splines is considered. For this programming problem a dual program is constructed which is unconstrained. Therefore an efficient computational treatment is possible.

Journal ArticleDOI
TL;DR: A method for computing “useful coefficients” is given, where the computation of these coefficients is immediate and the computing time is practically negligible for any s and N.
Abstract: The computation of optimal coefficients for higher dimensions s and larger moduli N by means of the methods known hitherto leads to practically insurmountable problems regarding the computing time needed. In this note we give a method for computing “useful coefficients”, where the computation of these coefficients is immediate and the computing time is practically negligible for any s and N. Whereas the theoretical efficiency of these “useful coefficients” is, roughly speaking, half the efficiency of the best possible coefficients, all practical tests indicate that our methods lead to optimal performance as well. A series of computational comparisons between the “useful coefficients” and the optimal ones is included.
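To show how such coefficients are used, the sketch below evaluates a rank-1 lattice rule with modulus N and coefficient vector z, averaging the integrand over the node set {frac(k·z/N)}. The Korobov-type vector below is an arbitrary illustration, not one of the paper's “useful coefficients”.

```python
# Rank-1 lattice (number-theoretic) quadrature sketch; the coefficient vector is
# an arbitrary Korobov-type illustration, not one of the paper's coefficients.
import numpy as np

def lattice_rule(f, z, N):
    k = np.arange(N).reshape(-1, 1)
    nodes = (k * np.asarray(z) % N) / N        # N points in [0,1)^s
    return np.mean(f(nodes))

# Example: s = 4, integrand with known integral 1 over the unit cube.
f = lambda x: np.prod(2.0 * x, axis=1)         # since the integral of 2t over [0,1] is 1
N = 1009                                       # a prime modulus
a = 76                                         # illustrative Korobov parameter
z = [pow(a, j, N) for j in range(4)]           # z = (1, a, a^2, a^3) mod N
print(lattice_rule(f, z, N))                   # close to 1
```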

Journal ArticleDOI
TL;DR: This paper presents a formulation and a study of an interpolatory quartic spline which interpolates the first and second derivatives of a given function and which can be applied to quadratures.
Abstract: This paper presents a formulation and a study of an interpolatory quartic spline which interpolates the first and second derivatives of a given function. This formulation can be applied, in particular, to quadratures.

Journal ArticleDOI
TL;DR: Sylvester's theorem of 1853 is presented, which makes these simple divisibility properties clear for normal prs's; the proof given here is a modification of Sylvester's original proof.
Abstract: Given two univariate polynomials with integer coefficients, it has been rediscovered [2] that the reduced polynomial remainder sequence (prs) algorithm can be used mainly to compute over the integers the members of a normal prs, keeping the coefficient growth under control and avoiding greatest common divisor (gcd) computations of the coefficients. The validity proof of this algorithm as presented in the current literature [2] is very involved and has obscured simple divisibility properties. In this note, we present Sylvester's theorem of 1853 [4], which makes these simple divisibility properties clear for normal prs's. The proof presented here is a modification of Sylvester's original proof.

Journal ArticleDOI
TL;DR: The interpolation procedure is demonstrated using two recently developed Runge-Kutta-Nyström methods and numerical experimentation reveals some aspects of the performance of these interpolants.
Abstract: Hermite interpolation polynomials are developed for use with low order Runge-Kutta-Nyström methods in producing continuous approximate solutions to second order initial value problems. Polynomials of degrees five and four, which interpolate the solution and derivative components, respectively, are easily computable and require no extra stages. Interpolants of the next higher orders are obtained at the cost of two additional stages per integration step. The interpolation procedure is demonstrated using two recently developed Runge-Kutta-Nyström methods. Numerical experimentation reveals some aspects of the performance of these interpolants.

Journal ArticleDOI
TL;DR: A Quasi-Newton type method, which applies to large and sparse nonlinear systems of equations, and uses the Q-R factorization of the approximate Jacobians, which belongs to a more general class of algorithms for which a local convergence theorem is proved.
Abstract: In this paper we present a Quasi-Newton type method, which applies to large and sparse nonlinear systems of equations, and uses the Q-R factorization of the approximate Jacobians. This method belongs to a more general class of algorithms for which we prove a local convergence theorem. Some numerical experiments seem to confirm that the new algorithm is reliable.
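For orientation, the sketch below is a plain dense Broyden-type quasi-Newton iteration; the paper's method additionally exploits sparsity and works with a QR factorization of the approximate Jacobian, which this toy version does not attempt to do. The test system is an arbitrary example.

```python
# Dense Broyden quasi-Newton sketch (the paper's sparse QR-based variant is not
# reproduced here).
import numpy as np

def broyden(F, x0, B0=None, tol=1e-10, maxit=100):
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x)) if B0 is None else np.asarray(B0, dtype=float)
    Fx = F(x)
    for _ in range(maxit):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)            # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)  # Broyden's rank-one update
        x, Fx = x_new, F_new
    return x

# Example: F(x, y) = (x^2 + y^2 - 4, x*y - 1); one root near (1.93, 0.52).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
print(broyden(F, [2.0, 0.5]))
```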

Journal ArticleDOI
TL;DR: The Taylor expansion proposed by Hansen is generalized to degree m, estimates are given for the number of zero entries in the remainder, and a Taylor form is defined for the range of a function over an interval.
Abstract: The Taylor expansion proposed by Hansen [3] is generalized to degree m and estimates are given for the number of zero entries in the remainder. The expansion is then used to define a Taylor form for the range of a function over an interval, and estimates are given for the number of interval variables replaced by real variables due to the special Taylor expansion. The Taylor form is then implemented for factorable functions. Some numerical results are given.

Journal ArticleDOI
C. R. Traas
TL;DR: A computable function, defined over the sphere, is constructed which is of class C1 at least and which approximates a given set of data; its approximation accuracy is compared with that of the expansion in terms of spherical harmonics.
Abstract: A computable function, defined over the sphere, is constructed which is of class C1 at least and which approximates a given set of data. The construction is based upon tensor product spline basis functions, while at the poles of the spherical coordinate system modified basis functions, suggested by the spherical harmonics expansion, are introduced to recover the continuity order at these points. Convergence experiments, refining the grid, are performed and results are compared with similar results available in the literature. The approximation accuracy is compared with that of the expansion in terms of spherical harmonics. The use of piecewise approximation with locally supported basis functions versus approximation with spherical harmonics is discussed.

Journal ArticleDOI
TL;DR: In this article, a new approach to derive optimal B-convergence results is presented; optimal B-convergence of order 2 for the implicit midpoint rule and the implicit trapezoidal rule, and of order 1.5 for the two stage Lobatto IIIC scheme, is then established.
Abstract: In this paper a new approach to derive optimal B-convergence results is presented; optimal B-convergence of order 2 for the implicit midpoint rule and the implicit trapezoidal rule and of order 1.5 for the two stage Lobatto IIIC scheme is then established.
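To make the setting concrete, the sketch below applies the implicit midpoint rule to a stiff scalar test problem of Prothero-Robinson type (problem and parameters are illustrative choices, not the paper's); each step solves the implicit relation with a few Newton iterations.

```python
# Implicit midpoint rule on a stiff Prothero-Robinson-type test problem
# (illustrative problem and parameters).
import math

lam  = -1.0e4
f    = lambda t, y: lam * (y - math.cos(t)) - math.sin(t)   # exact solution: cos(t)
dfdy = lam                                                  # df/dy, constant here

def implicit_midpoint(y0, t0, t1, n):
    h, t, y = (t1 - t0) / n, t0, y0
    for _ in range(n):
        ynew = y                                  # initial guess for Newton
        for _ in range(10):
            g  = ynew - y - h * f(t + h/2, (y + ynew) / 2)
            dg = 1.0 - h * dfdy / 2
            ynew -= g / dg
            if abs(g) < 1e-14:
                break
        t, y = t + h, ynew
    return y

y_end = implicit_midpoint(1.0, 0.0, 1.0, 50)      # h = 0.02, very stiff: h*|lam| = 200
print(y_end, math.cos(1.0))                       # the two values agree closely
```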

Journal ArticleDOI
TL;DR: A combinatorial optimization model is proposed whose solution yields an approximation of a polygonal complex from its vertices; the model is solved using simulated annealing.
Abstract: Many known materials possess polycrystalline structure. The images produced by plane cuts through such structures are polygonal complexes. The problem of finding the edges, when only the vertices of a given polygonal complex are known, is considered. A combinatorial optimization model is proposed whose solution yields an approximation of the complex. The problem itself is solved using simulated annealing. Encouraging first experiments are presented.
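A generic simulated-annealing skeleton of the kind used above is sketched below; the toy objective (tour length for a few random points) is ours, and the paper's energy function for edge reconstruction is not reproduced.

```python
# Generic simulated-annealing skeleton with an illustrative toy objective
# (not the paper's edge-reconstruction model).
import math, random

def anneal(state, energy, neighbour, T0=1.0, alpha=0.995, steps=20_000, seed=0):
    rng = random.Random(seed)
    best = cur = state
    best_e = cur_e = energy(cur)
    T = T0
    for _ in range(steps):
        cand = neighbour(cur, rng)
        e = energy(cand)
        if e < cur_e or rng.random() < math.exp((cur_e - e) / T):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand, e
        T *= alpha                      # geometric cooling schedule (illustrative)
    return best, best_e

# Toy instance: order 8 random points into a short closed tour.
rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(8)]

def tour_len(order):
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def swap_two(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    new = list(order)
    new[i], new[j] = new[j], new[i]
    return new

print(anneal(list(range(8)), tour_len, swap_two))
```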

Journal ArticleDOI
TL;DR: A natural matroid structure associated with the optimal topological sortings under consideration is exhibited, which permits solving the weighted case.
Abstract: The defect of a (partial) order relation P is defined to be the rank of the kernel of the associated incidence matrix. Gierz and Poguntke [7] have shown that the defect provides a lower bound for the number of incomparable adjacent pairs in an arbitrary topological sorting of P. We show that this bound is sharp for interval orders without odd crowns. Furthermore, an efficient algorithm for topological sortings of such orders is presented which achieves the bound. We finally exhibit a natural matroid structure associated with the optimal topological sortings under consideration, which permits solving the weighted case.