
Showing papers in "Bit Numerical Mathematics in 1990"


Journal ArticleDOI
TL;DR: In this article, a necessary condition for obtaining good regularized solutions is that the Fourier coefficients of the right-hand side, when expressed in terms of the generalized SVD associated with the regularization problem, on the average decay to zero faster than the generalized singular values.
Abstract: We investigate the approximation properties of regularized solutions to discrete ill-posed least squares problems. A necessary condition for obtaining good regularized solutions is that the Fourier coefficients of the right-hand side, when expressed in terms of the generalized SVD associated with the regularization problem, on the average decay to zero faster than the generalized singular values. This is the discrete Picard condition. We illustrate the importance of this condition theoretically as well as experimentally.

307 citations
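The discrete Picard condition can be checked numerically. The sketch below is our own illustration (problem setup and names are assumptions, not the paper's): for a consistent right-hand side of a small discretized smoothing operator, the Fourier coefficients |uᵢᵀb| of the ordinary SVD decay at least as fast as the singular values, so the ratios stay bounded.

```python
import numpy as np

# Hypothetical illustration: a small discrete ill-posed problem built
# from a smooth kernel, with a smooth exact solution x_true and
# noise-free data b = A x_true.
n = 16
t = np.linspace(0, 1, n)
A = np.exp(-8.0 * np.abs(t[:, None] - t[None, :])) / n   # smoothing kernel
x_true = np.sin(np.pi * t)
b = A @ x_true

U, s, Vt = np.linalg.svd(A)
fourier = np.abs(U.T @ b)       # Fourier coefficients |u_i^T b|

# Discrete Picard condition (plain-SVD analogue): the coefficients
# decay faster than the singular values, so the ratios are bounded;
# here b = A x_true gives ratios_i = |(V^T x_true)_i| <= ||x_true||.
ratios = fourier / s
```

For noisy data the ratios eventually blow up at the small singular values, which is exactly why regularization is needed.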


Journal ArticleDOI
TL;DR: In this paper, the growth of the condition number of the Newton form when the interpolation points are Leja points for compact sets K in the complex plane has been investigated, and it has been shown that if K is an interval, then the points are distributed roughly like Chebyshev points.
Abstract: The Newton form is a convenient representation for interpolation polynomials. Its sensitivity to perturbations depends on the distribution and ordering of the interpolation points. The present paper bounds the growth of the condition number of the Newton form when the interpolation points are Leja points for fairly general compact sets K in the complex plane. Because the Leja points are defined recursively, they are attractive to use with the Newton form. If K is an interval, then the Leja points are distributed roughly like Chebyshev points. Our investigation of the Newton form defined by interpolation at Leja points suggests an ordering scheme for arbitrary interpolation points.

127 citations
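The recursive definition of Leja points is easy to sketch. The construction below (our own code, with a discrete candidate grid standing in for the compact set K = [−1, 1]) picks a point of maximal modulus first, then repeatedly picks the candidate maximizing the product of distances to the points already chosen.

```python
import numpy as np

# Hedged sketch, not the paper's code: Leja points on a candidate grid
# approximating K = [-1, 1].
def leja_points(num_points, grid=None):
    if grid is None:
        grid = np.linspace(-1.0, 1.0, 1001)
    pts = [grid[np.argmax(np.abs(grid))]]      # start at maximal modulus
    for _ in range(num_points - 1):
        # product of distances from each candidate to the chosen points
        prod = np.ones_like(grid)
        for p in pts:
            prod *= np.abs(grid - p)
        pts.append(grid[np.argmax(prod)])      # recursive Leja step
    return np.array(pts)

pts = leja_points(8)
```

Because each new point maximizes the product of distances to its predecessors, using the points in this order keeps the Newton basis functions well scaled, which is the conditioning property the paper quantifies.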


Journal ArticleDOI
TL;DR: A new algorithmic scheme is proposed for finding a common point of finitely many closed convex sets using weighted averages of relaxed projections onto approximating halfspaces that generalizes Cimmino and Auslender's methods as well as more recent versions developed by Iusem & De Pierro and Aharoni & Censor.
Abstract: A new algorithmic scheme is proposed for finding a common point of finitely many closed convex sets. The scheme uses weighted averages (convex combinations) of relaxed projections onto approximating halfspaces. By varying the weights we generalize Cimmino's and Auslender's methods as well as more recent versions developed by Iusem & De Pierro and Aharoni & Censor. Our approach offers great computational flexibility and encompasses a wide variety of known algorithms as special instances. Also, since it is “block-iterative”, it lends itself to parallel processing.

70 citations
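The flavor of the scheme can be sketched for the simplest case where the approximating halfspaces are the sets themselves. The code below is a minimal Cimmino-style illustration under our own assumptions (uniform weights, unit relaxation, a toy feasibility problem), not the paper's general algorithm.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto the halfspace {y : a.y <= b}."""
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - viol * a / (a @ a)

def cimmino_step(x, A, b, weights, relax=1.0):
    """One weighted average (convex combination) of relaxed projections."""
    projs = np.array([project_halfspace(x, ai, bi) for ai, bi in zip(A, b)])
    avg = weights @ projs
    return x + relax * (avg - x)

# Hypothetical test problem: the unit square as four halfspaces.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
w = np.full(4, 0.25)

x = np.array([3.0, -2.0])
for _ in range(200):
    x = cimmino_step(x, A, b, w)
```

Since each projection is independent of the others, the inner loop is trivially parallel, which is the "block-iterative" property the abstract highlights.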


Journal ArticleDOI
TL;DR: Huber's M-estimator for robust linear regression is analyzed, Newton type methods for solving the problem are defined and analyzed, and finite convergence is proved.
Abstract: In this paper Huber's M-estimator for robust linear regression is analyzed. Newton type methods for solution of the problem are defined and analyzed, and finite convergence is proved. Numerical experiments with a large number of test problems demonstrate efficiency and indicate that this kind of approach may be useful also in solving the ℓ1 problem.

50 citations
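For orientation, the objective being minimized can be illustrated with the much simpler IRLS fixed-point iteration rather than the paper's finitely convergent Newton methods. Everything below (data, threshold, helper names) is our own assumption, meant only to show Huber regression downweighting an outlier.

```python
import numpy as np

# Illustration only -- NOT the paper's Newton method: iteratively
# reweighted least squares for Huber's M-estimator.
def huber_irls(X, y, delta=1.0, iters=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # least-squares start
    for _ in range(iters):
        r = y - X @ beta
        # Huber weights: 1 for small residuals, delta/|r| for large ones
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta

# Hypothetical data: a clean line y = 2 + 3t with one gross outlier.
t = np.linspace(0, 1, 20)
X = np.column_stack([np.ones_like(t), t])
y = 2.0 + 3.0 * t
y[5] += 50.0
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
beta_hub = huber_irls(X, y, delta=0.5)
```

The Huber fit stays near the true coefficients while plain least squares is dragged toward the outlier, which is the robustness the M-estimator buys.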


Journal ArticleDOI
TL;DR: In this paper, an interval method is used to solve the minimax problem of a twice continuously differentiable function, providing bounds on both the minimax value and the localizations of the minimax points.
Abstract: Interval methods are used to solve the minimax problem of a twice continuously differentiable function f(y, z), y ∈ ℝ^m, z ∈ ℝ^n, of m+n variables over an (m+n)-dimensional interval. The method provides bounds on both the minimax value of the function and the localizations of the minimax points. Numerical examples, arising in both mathematics and physics, show that the method works well.

49 citations


Journal ArticleDOI
Anders Barrlund1
TL;DR: Strict bounds are presented on the perturbations ΔM, ΔH of M and H respectively, when A is perturbed by ΔA; the bounds on ΔM can also be applied to the orthogonal Procrustes problem.
Abstract: The polar decomposition of an n × n matrix A takes the form A = MH, where M is orthogonal and H is symmetric and positive semidefinite. This paper presents strict bounds (with no order terms) on the perturbations ΔM, ΔH of M and H respectively, when A is perturbed by ΔA. The bounds on ΔM can also be applied to the orthogonal Procrustes problem.

40 citations
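The polar factors themselves are cheap to compute from the SVD: if A = UΣVᵀ, then M = UVᵀ is orthogonal and H = VΣVᵀ is symmetric positive semidefinite with A = MH. This is a standard construction (the example matrix below is our own), useful for experimenting with the perturbation bounds.

```python
import numpy as np

def polar(A):
    """Polar decomposition A = M H via the SVD: M = U V^T, H = V S V^T."""
    U, s, Vt = np.linalg.svd(A)
    M = U @ Vt                      # orthogonal factor
    H = Vt.T @ np.diag(s) @ Vt      # symmetric positive semidefinite factor
    return M, H

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
M, H = polar(A)
```

M = UVᵀ is also the nearest orthogonal matrix to A in the Frobenius norm, which is why the bounds on ΔM transfer to the orthogonal Procrustes problem.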


Journal ArticleDOI
TL;DR: An O(n^2) algorithm is presented, which is a modified version of Irving's algorithm, that finds a maximum stable matching, i.e., a maximum number of disjoint pairs of persons such that these pairs are stable among themselves.
Abstract: The stable roommates problem is that of matching n people into n/2 disjoint pairs so that no two persons, who are not paired together, both prefer each other to their respective mates under the matching. Such a matching is called “a complete stable matching”. It is known that a complete stable matching may not exist. Irving proposed an O(n^2) algorithm that would find one complete stable matching if there is one, or would report that none exists. Since there may not exist any complete stable matching, it is natural to consider the problem of finding a maximum stable matching, i.e., a maximum number of disjoint pairs of persons such that these pairs are stable among themselves. In this paper, we present an O(n^2) algorithm, which is a modified version of Irving's algorithm, that finds a maximum stable matching.

38 citations


Journal ArticleDOI
C. Gurwitz1
TL;DR: Three algorithms for the weighted median problem are presented, the relationships between them are discussed, and their use in the context of multivariate L1 approximation is reported on.
Abstract: The weighted median problem arises as a subproblem in certain multivariate optimization problems, including L1 approximation. Three algorithms for the weighted median problem are presented and the relationships between them are discussed. We report on computational experience with these algorithms and on their use in the context of multivariate L1 approximation.

35 citations
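The problem itself is simple to state in code. The sketch below is the naive O(n log n) sort-and-scan solution (our own baseline, not one of the paper's three algorithms): sort by value and return the first element at which the cumulative weight reaches half the total.

```python
# Naive weighted median: the element v such that the total weight of
# values below v and above v are each at most half the total weight.
def weighted_median(values, weights):
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= total / 2.0:
            return v

wm = weighted_median([1.0, 2.0, 3.0, 10.0], [1.0, 1.0, 1.0, 10.0])
```

With all weights equal this reduces to the ordinary median; the refined algorithms in the paper avoid the full sort, which matters when the subproblem is solved repeatedly inside an L1 fitting iteration.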


Journal ArticleDOI
TL;DR: Two variants of partition trees are designed that can be used for storing arbitrarily oriented line segments in the plane in an efficient way and it is shown how to use these structures for solving line segment intersection queries, triangle stabbing queries and ray shooting queries in reasonably efficient ways.
Abstract: We design two variants of partition trees, called segment partition trees and interval partition trees, that can be used for storing arbitrarily oriented line segments in the plane in an efficient way. The raw structures use O(n log n) and O(n) storage, respectively, and their construction time is O(n log n). In our applications we augment these structures by certain (simple) auxiliary structures, which may increase the storage and preprocessing time by a polylogarithmic factor. It is shown how to use these structures for solving line segment intersection queries, triangle stabbing queries and ray shooting queries in reasonably efficient ways. If we use the conjugation tree as the underlying partition tree, the query time for all problems is O(n^γ), where γ = log2(1+√5) − 1 ≈ 0.695. The techniques are fairly simple and easy to understand.

25 citations


Journal ArticleDOI
TL;DR: Solutions of equations with nondifferentiable operators are approximated and recent error estimates are improved.
Abstract: In this note we approximate solutions of equations with nondifferentiable operators and improve recent error estimates.

20 citations


Journal ArticleDOI
TL;DR: The case where the ‘black box’ is a solver not for A but for a matrix close to A is analysed; this is of interest for numerical continuation methods.
Abstract: Linear systems with a fairly well-conditioned (n+1) × (n+1) matrix M of the bordered form M = [A b; c d], where A is n × n, for which a ‘black box’ solver for A is available, can be accurately solved by the standard process of Block Elimination, followed by just one step of Iterative Refinement, no matter how singular A may be, provided the ‘black box’ has a property that is possessed by LU- and QR-based solvers with very high probability. The resulting Algorithm BE + 1 is simpler and slightly faster than T.F. Chan's Deflation Method, and just as accurate. We analyse the case where the ‘black box’ is a solver not for A but for a matrix close to A. This is of interest for numerical continuation methods.

Journal ArticleDOI
TL;DR: A hardware-oriented algorithm for generating permutations is presented that takes as a theoretic base an iterative decomposition of the symmetric group S_n into cosets and generates permutations in a new order.
Abstract: A hardware-oriented algorithm for generating permutations is presented that takes as a theoretic base an iterative decomposition of the symmetric group S_n into cosets. It generates permutations in a new order. Simple ranking and unranking algorithms are given. The construction of a permutation generator is proposed which contains a cellular permutation network as a main component. The application of the permutation generator for solving a class of combinatorial problems on parallel computers is suggested.

Journal ArticleDOI
TL;DR: A universally applicable algorithm for generating minimal perfect hashing functions that has (worst case) polynomial time complexity in units of bit operations and an adjunct algorithm for reducing parameter magnitudes in the generated hash functions is given.
Abstract: We present a universally applicable algorithm for generating minimal perfect hashing functions. The method has (worst case) polynomial time complexity in units of bit operations. An adjunct algorithm for reducing parameter magnitudes in the generated hash functions is given. This probabilistic method makes hash function parameter magnitudes independent of argument (input key) magnitudes.

Journal ArticleDOI
TL;DR: A cost-optimal parallel algorithm is described for enumerating all partitions (equivalence relations) of the set {1, ..., n} in lexicographic order, designed to be executed on a linear array of processors.
Abstract: We describe a cost-optimal parallel algorithm for enumerating all partitions (equivalence relations) of the set {1, ..., n}, in lexicographic order. The algorithm is designed to be executed on a linear array of processors. It uses n processors, each having constant size memory and each being responsible for producing one element of a given set partition. Set partitions are generated with constant delay, leading to an O(B_n) time solution, where B_n is the total number of set partitions. The same method can be generalized to enumerate some other combinatorial objects such as variations. The algorithm can be made adaptive, i.e. to run on any prespecified number of processors. To illustrate the model of parallel computation, a simple case of enumerating subsets of the set {1, ..., n} having at most m (≤ n) elements is also described.
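The enumeration itself (leaving aside the paper's parallel schedule) can be sketched serially via restricted growth strings, a standard encoding of set partitions: c[0] = 0 and each c[i] is at most one more than the maximum of the preceding entries, with c[i] = j meaning element i+1 goes in block j. Generating the strings in lexicographic order enumerates all partitions.

```python
# Serial sketch of set-partition enumeration via restricted growth
# strings (the paper's contribution, the linear-array parallel
# schedule, is not reproduced here).
def set_partitions(n):
    def rec(prefix, mx):
        if len(prefix) == n:
            yield tuple(prefix)
            return
        for c in range(mx + 2):               # allowed codes: 0 .. max+1
            yield from rec(prefix + [c], max(mx, c))
    yield from rec([0], 0)                    # element 1 is always in block 0

parts = list(set_partitions(4))
```

For n = 4 this yields the Bell number B₄ = 15 partitions, from (0,0,0,0) (one block) to (0,1,2,3) (all singletons), in lexicographic order.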

Journal ArticleDOI
TL;DR: It is proved that the optimal placement for two-headed disk systems is the “camel” arrangement, which may be viewed as two consecutive organ-pipe arrangements, and that the total number of these optimal camel arrangements is 2^(N/2+1).
Abstract: A problem inherent to the performance of disk systems is the placement of data in cylinders in such a way that the seek time is minimized. If successive searches are independent, then the optimal placement for conventional one-headed disk systems is the organ-pipe arrangement. According to this arrangement the most frequent cylinder is placed in the central location, while the less frequent cylinders are placed right and left alternately. This paper proves that the optimal placement for two-headed disk systems is the “camel” arrangement, which may be viewed as two consecutive organ-pipe arrangements. It is also proved that, for a two-headed disk system with N = 2(2n+1) cylinders, the total number of these optimal camel arrangements is 2^(N/2+1).
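The organ-pipe construction is easy to code. The sketch below is our own illustration: the most frequent cylinder goes in the middle and the rest alternate right and left. The camel helper shows the "two consecutive organ-pipe arrangements" shape using one plausible split of the frequencies between the halves; the paper characterizes exactly which splits are optimal, which we do not reproduce.

```python
# Hedged illustration of the arrangements discussed in the paper.
def organ_pipe(freqs):
    """Most frequent value in the centre, the rest alternating outward."""
    order = sorted(freqs, reverse=True)
    slots = []
    for i, f in enumerate(order):
        if i % 2 == 0:
            slots.append(f)        # place to the right of the centre...
        else:
            slots.insert(0, f)     # ...then to the left, alternately
    return slots

def camel(freqs):
    """Two consecutive organ-pipe halves (split is our own assumption)."""
    order = sorted(freqs, reverse=True)
    return organ_pipe(order[0::2]) + organ_pipe(order[1::2])

arr = organ_pipe([5, 1, 4, 2, 3])
hump = camel([6, 5, 4, 3, 2, 1])
```

The two local maxima of the camel arrangement sit under the two heads' home positions, mirroring how the single organ-pipe peak sits under a one-headed disk's home position.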

Journal ArticleDOI
TL;DR: In this paper, an algorithm for semi-infinite programming using sequential quadratic programming techniques together with an L∞ exact penalty function is presented, and global convergence is shown.
Abstract: An algorithm for semi-infinite programming using sequential quadratic programming techniques together with an L∞ exact penalty function is presented, and global convergence is shown. An important feature of the convergence proof is that it does not require an implicit function theorem to be applicable to the semi-infinite constraints; a much weaker assumption concerning the finiteness of the number of global maximizers of each semi-infinite constraint is sufficient. In contrast to proofs based on an implicit function theorem, this result is also valid for a large class of C^1 problems.

Journal ArticleDOI
TL;DR: An algorithm is described that stably sorts an array of n elements using only a linear number of data movements and constant extra space, albeit in quadratic time.
Abstract: In this paper, we describe an algorithm to stably sort an array of n elements using only a linear number of data movements and constant extra space, albeit in quadratic time. It was not known previously whether such an algorithm existed. When the input contains only a constant number of distinct values, we present a sequence of in situ stable sorting algorithms making O(n lg^(k+1) n + kn) comparisons (lg^(k) means lg iterated k times, and lg* the number of times the logarithm must be taken to give a result ≤ 0) and O(kn) data movements for any fixed value k, culminating in one that makes O(n lg* n) comparisons and data movements. Stable versions of quicksort follow from these algorithms.

Journal ArticleDOI
TL;DR: It is shown that for weight functions of the form w(t) = (1 − t^2)^(1/2)/s_m(t), where s_m is a polynomial of degree m which is positive on [−1, +1], successive Kronrod extension of a certain class of N-point interpolation quadrature formulas, including the N-point Gauss formula, is always possible.
Abstract: In this note it is shown that for weight functions of the form w(t) = (1 − t^2)^(1/2)/s_m(t), where s_m is a polynomial of degree m which is positive on [−1, +1], successive Kronrod extension of a certain class of N-point interpolation quadrature formulas, including the N-point Gauss formula, is always possible and that each Kronrod extension has the positivity and interlacing property.

Journal ArticleDOI
TL;DR: The algorithm for the computation of the divided differences is shown to be numerically stable and does not require equidistant points, precomputation, or the fast Fourier transform, and can be very useful for very high-order interpolation.
Abstract: We present parallel algorithms for the computation and evaluation of interpolating polynomials. The algorithms use parallel prefix techniques for the calculation of divided differences in the Newton representation of the interpolating polynomial. For n+1 given input pairs, the proposed interpolation algorithm requires only 2⌈log(n+1)⌉+2 parallel arithmetic steps and circuit size O(n^2), reducing the best known circuit size for parallel interpolation by a factor of log n. The algorithm for the computation of the divided differences is shown to be numerically stable and does not require equidistant points, precomputation, or the fast Fourier transform. We report on numerical experiments comparing this with other serial and parallel algorithms. The experiments indicate that the method can be very useful for very high-order interpolation, which is made possible for special sets of interpolation nodes.
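For reference, the underlying serial computation is short: build the divided-difference table in place to get the Newton coefficients, then evaluate with a Horner-style nested recurrence. This is the standard textbook algorithm, shown only to fix notation; the paper's contribution is its parallel prefix formulation, not reproduced here.

```python
# Serial divided differences and Newton-form evaluation (standard
# algorithm; the example data are our own).
def divided_differences(xs, ys):
    n = len(xs)
    d = list(ys)
    for j in range(1, n):
        # update in place from the bottom so lower-order entries survive
        for i in range(n - 1, j - 1, -1):
            d[i] = (d[i] - d[i - 1]) / (xs[i] - xs[i - j])
    return d                                  # Newton coefficients

def newton_eval(xs, coeffs, t):
    """Horner-style evaluation of the Newton form at t."""
    acc = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        acc = acc * (t - xs[i]) + coeffs[i]
    return acc

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 2 + 1.0 for x in xs]               # interpolates x^2 + 1 exactly
coef = divided_differences(xs, ys)
```

Since the data come from a quadratic, the cubic coefficient is zero and the interpolant reproduces x² + 1 at every point, not just the nodes.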

Journal ArticleDOI
TL;DR: An abstract characterisation of SIMD computation and a simple proof theory for SIMD programs are provided and a soundness result is stated and the consequences of the result are analysed.
Abstract: The aims of this article are to provide (i) an abstract characterisation of SIMD computation and (ii) a simple proof theory for SIMD programs. A soundness result is stated and the consequences of the result are analysed. The use of the axiomatic theory is illustrated by a proof of a parallel implementation of Euclid's GCD algorithm.

Journal ArticleDOI
TL;DR: The FastHull algorithm runs faster than any currently known 2D convex hull algorithm for many input point patterns and has linear time performance for many kinds of input patterns.
Abstract: An efficient and numerically correct program called FastHull for computing the convex hulls of finite point sets in the plane is presented. It is based on the Akl-Toussaint algorithm and the MergeHull algorithm. Numerical correctness of the FastHull procedure is ensured by using special routines for interval arithmetic and multiple precision arithmetic. The FastHull algorithm guarantees O(N log N) running time in the worst case and has linear time performance for many kinds of input patterns. It appears that the FastHull algorithm runs faster than any currently known 2D convex hull algorithm for many input point patterns.

Journal ArticleDOI
TL;DR: In this article, partitioned linearly implicit Runge-Kutta methods are studied as applied to approximate the smooth solution of a perturbed problem with stepsizes larger than the stiffness parameter ε.
Abstract: This paper studies partitioned linearly implicit Runge-Kutta methods as applied to approximate the smooth solution of a perturbed problem with stepsizes larger than the stiffness parameter ε. Conditions are supplied for construction of methods of arbitrary order. The local and global error are analyzed and the limiting case ε → 0 considered, yielding a partitioned linearly implicit Runge-Kutta method for differential-algebraic equations of index one. Finally, some numerical experiments demonstrate our theoretical results.

Journal ArticleDOI
TL;DR: A constant average-time algorithm for generating all alternating permutations in lexicographic order is presented and Ranking and unranking algorithms are also derived.
Abstract: A permutation π1 π2 ... πn is alternating if π1 < π2 > π3 < π4 > .... We present a constant average-time algorithm for generating all alternating permutations in lexicographic order. Ranking and unranking algorithms are also derived.
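The definition is easy to verify by brute force. The filter below is only an illustration of what is being generated (the paper's algorithm achieves constant average time per permutation; this exhaustive filter does not): since itertools.permutations yields tuples in lexicographic order, so does the filtered sequence.

```python
from itertools import permutations

# pi is alternating if pi1 < pi2 > pi3 < pi4 > ...
def is_alternating(p):
    return all((p[i] < p[i + 1]) if i % 2 == 0 else (p[i] > p[i + 1])
               for i in range(len(p) - 1))

def alternating_permutations(n):
    # itertools.permutations over a sorted input is lexicographic,
    # so the filtered list is in lexicographic order too.
    return [p for p in permutations(range(1, n + 1)) if is_alternating(p)]

alts = alternating_permutations(4)
```

The counts are the Euler (zigzag) numbers: for n = 4 there are 5 alternating permutations, from (1,3,2,4) up to (3,4,1,2).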

Journal ArticleDOI
TL;DR: A LINPACK downdating algorithm is modified by interleaving its two phases, the forward solution of a triangular system and the backward sweep of Givens rotations, to yield a new forward method for finding the Cholesky decomposition of R^T R − zz^T.
Abstract: A LINPACK downdating algorithm is modified by interleaving its two different phases, the forward solution of a triangular system and the backward sweep of Givens rotations, to yield a new forward method for finding the Cholesky decomposition of R^T R − zz^T. By showing that the new algorithm saves forty percent of the original's operations, which are purely redundant, better stability properties are expected. In addition, various other downdating algorithms are rederived and analyzed under a uniform framework.

Journal ArticleDOI
TL;DR: In this paper, the authors give necessary and sufficient conditions for the solution set of a system of linear interval equations to be nonconvex, and derive some consequences for the resulting solution.
Abstract: We give necessary and sufficient conditions for the solution set of a system of linear interval equations to be nonconvex and derive some consequences.

Journal ArticleDOI
TL;DR: This work presents a parallel algorithm for finding the convex hull of a sorted set of points in the plane using O(n log log n/log n) processors in the Common CRCW PRAM computational model, which is shown to be time and cost optimal.
Abstract: We present a parallel algorithm for finding the convex hull of a sorted set of points in the plane. Our algorithm runs in O(log n/log log n) time using O(n log log n/log n) processors in the Common CRCW PRAM computational model, which is shown to be time and cost optimal. The algorithm is based on n^(1/3) divide-and-conquer and uses a simple pointer-based data structure.
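The reason sortedness matters is that the remaining sequential work is linear: once the points are ordered by x, Andrew's monotone chain builds the hull in one scan per side. The sketch below shows that underlying sequential structure (a standard algorithm, not the paper's parallel scheme).

```python
# Monotone chain convex hull for points pre-sorted by x (then y).
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; >0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_of_sorted(points):
    """Convex hull, counter-clockwise, of points sorted by (x, y)."""
    def half(pts):
        h = []
        for p in pts:
            # pop while the last two kept points and p do not turn left
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower = half(points)
    upper = half(points[::-1])
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints

pts = sorted([(0, 0), (1, 1), (2, 0), (1, 2), (0, 2), (2, 2), (1, 0.5)])
hull = hull_of_sorted(pts)
```

Each point is pushed and popped at most once, so the scan is O(n); the paper's contribution is distributing this work optimally across CRCW PRAM processors.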

Journal ArticleDOI
TL;DR: This work focuses on the notion of outer join and finds some reasonable conditions to guarantee that outer join will also preserve the lossless join property for two relations.
Abstract: In the relational model of data, Rissanen's Theorem provides the basis for the usual normalization process based on decomposition of relations. However, many difficulties occur if information is incomplete in databases and nulls are required to represent missing or unknown data. We concentrate here on the notion of outer join and find some reasonable conditions to guarantee that outer join will also preserve the lossless join property for two relations. Next we provide a generalization of this result to several relations.


Journal ArticleDOI
TL;DR: It is proved that a spline difference scheme for singularly perturbed self-adjoint problems, derived using exponential cubic splines at mid-points, has second order uniform convergence in the small perturbation parameter ε.
Abstract: It is proved that a spline difference scheme for a singularly perturbed self-adjoint problem, derived by using exponential cubic splines at mid-points, has second order uniform convergence in a small parameter ε. Numerical experiments are presented to confirm the theoretical predictions.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the convergence rate of the parametric cubic spline approximation of a plane curve is of order four instead of order three, for the first and second derivatives, respectively.
Abstract: The aim of the present paper is to show that the convergence rate of the parametric cubic spline approximation of a plane curve is of order four instead of order three. For the first and second derivatives, the rates are of order three and two, respectively. Finally some numerical examples are given to illustrate the predicted error behaviour.