
Showing papers on "Upper and lower bounds published in 1977"


Book ChapterDOI
TL;DR: Several polynomial time algorithms finding “good,” but not necessarily optimal, tours for the traveling salesman problem are considered, and the closeness of a tour is measured by the ratio of the obtained tour length to the minimal tour length.
Abstract: Several polynomial time algorithms finding “good,” but not necessarily optimal, tours for the traveling salesman problem are considered. We measure the closeness of a tour by the ratio of the obtained tour length to the minimal tour length. For the nearest neighbor method, we show the ratio is bounded above by a logarithmic function of the number of nodes. We also provide a logarithmic lower bound on the worst case. A class of approximation methods we call insertion methods are studied, and these are also shown to have a logarithmic upper bound. For two specific insertion methods, which we call nearest insertion and cheapest insertion, the ratio is shown to have a constant upper bound of 2, and examples are provided that come arbitrarily close to this upper bound. It is also shown that for any n≥8, there are traveling salesman problems with n nodes having tours which cannot be improved by making n/4 edge changes, but for which the ratio is 2(1−1/n).
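As a concrete illustration of the nearest neighbor method analyzed in the paper, a minimal sketch in Python (the Euclidean point list and helper names are illustrative assumptions, not the paper's construction):

```python
import math

def nearest_neighbor_tour(points):
    # Greedy nearest-neighbor heuristic: starting from city 0, always
    # visit the closest unvisited city next.
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    # Length of the closed tour, returning to the start city.
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

The paper's result says the ratio of such a tour's length to the optimum is bounded above by a logarithmic function of the number of cities, and that worst-case instances with logarithmic growth exist.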

815 citations


Journal ArticleDOI
TL;DR: With the Delsarte-MacWilliams inequalities as a starting point, an upper bound is obtained on the rate of a binary code as a function of its minimum distance, which is asymptotically less than Levenshtein's bound and so also Elias's.
Abstract: With the Delsarte-MacWilliams inequalities as a starting point, an upper bound is obtained on the rate of a binary code as a function of its minimum distance. This upper bound is asymptotically less than Levenshtein's bound, and so also Elias's.

473 citations


Book ChapterDOI
TL;DR: A conceptually straightforward method for generating sharp lower bounds constitutes the basic element in a family of efficient branch and bound algorithms for solving simple (uncapacitated) plant location problems and special versions hereof including set covering and set partitioning.
Abstract: A conceptually straightforward method for generating sharp lower bounds constitutes the basic element in a family of efficient branch and bound algorithms for solving simple (uncapacitated) plant location problems and special versions hereof including set covering and set partitioning. After an introductory discussion of the problem formulation, a theorem on lower bounds is established and exploited in a heuristic procedure for maximizing lower bounds. For cases where an optimal solution cannot be derived directly from the final tableau upon determination of the first lower bound, a branch and bound algorithm is presented together with a report on computational experience. The lower bound generation procedure was originally developed by the authors in 1967. In the period 1967–69 experiments were performed with various algorithms for solving both plant location and set covering problems. All results appeared in a series of research reports in Danish and attracted accordingly limited attention outside Scandinavia. However, due to their simplicity and high standard of performance, the algorithms are still competitive with more recent approaches. Furthermore, they have appeared to be quite powerful for solving problems of moderate size by hand.

229 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the isostatic response function for the continental United States, computed by Lewis & Dorman (1970), is incompatible with any local compensation model that involves only negative density contrasts beneath topographic loads.
Abstract: Summary. Using the techniques of linear and quadratic programming, it can be shown that the isostatic response function for the continental United States, computed by Lewis & Dorman (1970), is incompatible with any local compensation model that involves only negative density contrasts beneath topographic loads. We interpret the need for positive densities as indicating that compensation is regional rather than local. The regional compensation model that we investigate treats the outer shell of the Earth as a thin elastic plate, floating on the surface of a liquid. The response of such a model can be inverted to yield the absolute density gradient in the plate, provided the flexural rigidity of the plate and the density contrast between mantle and topography are specified. If only positive density gradients are allowed, such a regional model fits the United States response data provided the flexural rigidity of the plate lies between 10²¹ and 10²² N m. The fit of the model is insensitive to the mantle/load density contrast, but certain bounds on the density structure can be established if the model is assumed correct. In particular, the maximum density increase within the plate at depths greater than 34 km must not exceed 470 kg m⁻³; this can be regarded as an upper bound on the density contrast at the Mohorovicic discontinuity. The permitted values of the flexural rigidity correspond to plate thicknesses in the range 5–10 km, yet deformations at depths greater than 20 km are indicated by other geophysical data. We conclude that the plate cannot be perfectly elastic; its effective elastic moduli must be much smaller than the seismically determined values. Estimates of the stress-differences produced in the Earth by topographic loads that use the elastic plate model, together with seismically determined elastic parameters, will be too large by a factor of four or more.

213 citations


Journal ArticleDOI
TL;DR: Several approaches for the evaluation of upper and lower bounds on error probability of asynchronous spread spectrum multiple access communication systems are presented, utilizing an isomorphism theorem in the theory of moment spaces.
Abstract: Several approaches for the evaluation of upper and lower bounds on error probability of asynchronous spread spectrum multiple access communication systems are presented. These bounds are obtained by utilizing an isomorphism theorem in the theory of moment spaces. From this theorem, we generate closed, compact, and convex bodies, where one of the coordinates represents error probability, while the other coordinate represents a generalized moment of the multiple access interference random variable. Derivations for the second moment, fourth moment, single exponential moment, and multiple exponential moment are given in terms of the partial cross correlations of the codes used in the system. Error bounds based on the use of these moments are obtained. By using a sufficient number of terms in the multiple exponential moment, upper and lower error bounds can be made arbitrarily tight. In that case, the error probability equals the multiple exponential moment of the multiple access interference random variable. An example using partial cross correlations based on codes generated from Gold's method is presented.

172 citations


Journal ArticleDOI
TL;DR: A level algorithm is given that constructs optimal preemptive schedules on identical processors when the task system is a tree or when there are only two processors available, and an upper bound on its performance is derived in terms of the speeds of the processors.
Abstract: Muntz and Coffman give a level algorithm that constructs optimal preemptive schedules on identical processors when the task system is a tree or when there are only two processors available. Their algorithm is adapted here to handle processors of different speeds. The new algorithm is optimal for independent tasks on any number of processors and for arbitrary task systems on two processors, but not on three or more processors, even for trees. By taking the algorithm as a heuristic on m processors and using the ratio of the lengths of the constructed and optimal schedules as a measure, an upper bound on its performance is derived in terms of the speeds of the processors. It is further shown that 1.23√m is an upper bound over all possible processor speeds and that the 1.23√m bound can be improved at most by a constant factor, by giving an example of a system for which the bound 0.35√m can be approached asymptotically.

150 citations


Journal ArticleDOI
TL;DR: A branch and bound algorithm is proposed, based on the above mentioned upper bound and on original backtracking and forward schemes, which is superior to the fastest algorithms known at present.

148 citations


Book ChapterDOI
TL;DR: In this paper, a Lagrangian dual for obtaining an upper bound and heuristics for obtaining a lower bound on the value of an optimal solution are introduced, and the main results are analytical worst case analyses of these bounds.
Abstract: The problem of optimally locating bank accounts to maximize clearing times is discussed. The importance of this problem depends in part on its mathematical relationship to the well-known uncapacitated plant location problem. A Lagrangian dual for obtaining an upper bound and heuristics for obtaining a lower bound on the value of an optimal solution are introduced. The main results are analytical worst case analyses of these bounds. In particular it is shown that the relative error of the dual bound and a “greedy” heuristic never exceeds [(K − 1)/K]^K ≤ 1/e for a problem in which at most K locations are to be chosen. An interchange heuristic is shown to have a worst case relative error of (K − 1)/(2K − 1).
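The greedy heuristic's [(K − 1)/K]^K worst-case guarantee can be illustrated with a small sketch, assuming a profit matrix in which each client is served from its best chosen location (the names and data layout are illustrative assumptions, not from the paper):

```python
def greedy_locations(profit, K):
    # profit[j][i]: nonnegative profit of serving client i from location j.
    # Greedily add the location with the largest marginal gain; the
    # relative error is at most ((K - 1) / K) ** K <= 1/e.
    n_clients = len(profit[0])
    chosen, best = [], [0.0] * n_clients
    for _ in range(K):
        def gain(j):
            return sum(max(profit[j][i] - best[i], 0.0) for i in range(n_clients))
        j_star = max(range(len(profit)), key=gain)
        chosen.append(j_star)
        best = [max(best[i], profit[j_star][i]) for i in range(n_clients)]
    return chosen, sum(best)
```

Each round adds the location whose marginal contribution over the current best-service profile is largest, which is exactly the structure the worst-case analysis exploits.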

140 citations


Journal ArticleDOI
TL;DR: In this article, a unitary group formulation of the many-electron problem is employed to give explicit representations of state vectors which are convenient for the discussion of properties derived from propagator calculations.
Abstract: A unitary group formulation of the many-electron problem is employed to give explicit representations of state vectors which are convenient for the discussion of properties derived from propagator calculations. New results are obtained concerning the nature of various random-phase-like approximations, and ground state representatives are generated from consistency requirements for the spectral resolution of the polarization propagator. The explicit solution admits the calculation of ground state average values for arbitrary operators and a variational upper bound to the ground state energy.

132 citations


Journal ArticleDOI
TL;DR: In this paper, Guttman's bounds are reviewed, a method is established to determine whether his λ4 (maximum split-half coefficient alpha) is the greatest lower bound in any instance, and three new bounds are discussed.
Abstract: Let Σ x be the (population) dispersion matrix, assumed well-estimated, of a set of non-homogeneous item scores. Finding the greatest lower bound for the reliability of the total of these scores is shown to be equivalent to minimizing the trace of Σ x by reducing the diagonal elements while keeping the matrix non-negative definite. Using this approach, Guttman's bounds are reviewed, a method is established to determine whether his λ4 (maximum split-half coefficient alpha) is the greatest lower bound in any instance, and three new bounds are discussed. A geometric representation, which sheds light on many of the bounds, is described.

121 citations


Journal ArticleDOI
TL;DR: In this article, finding the greatest lower bound for the reliability of the total score on a test comprising n non-homogeneous items with dispersion matrix Σx is shown to be equivalent to maximizing the trace of a diagonal matrix.
Abstract: Finding the greatest lower bound for the reliability of the total score on a test comprising n non-homogeneous items with dispersion matrix Σx is equivalent to maximizing the trace of a diagonal matrix ΣE with elements θi, subject to ΣE and ΣT = Σx − ΣE being non-negative definite. The cases n = 2 and n = 3 are solved explicitly. A computer search in the space of the θi is developed for the general case. When Guttman's λ4 (maximum split-half coefficient alpha) is not the g.l.b., the maximizing set of θi makes the rank of ΣT less than n − 1. Numerical examples of various bounds are given.

Proceedings ArticleDOI
01 Jan 1977
TL;DR: This paper describes a system which checks correctness of array accesses automatically without any inductive assertions or human interaction and creates logical assertions immediately before array elements such that these assertions must be true whenever the control passes the assertion in order for the access to be valid.
Abstract: This paper describes a system which checks the correctness of array accesses automatically, without any inductive assertions or human interaction. For each array access in the program, a condition that the subscript is greater than or equal to the lower bound and a condition that the subscript is smaller than or equal to the upper bound are checked, and results indicating within the bound, out of bound, or undetermined are produced. The system can check ordinary programs at about fifty lines per ten seconds, and it shows linear time complexity behavior. It has long been discussed whether program verification will ever become practical. The main argument against program verification is that it is very hard for a programmer to write assertions about programs. Even if he can supply enough assertions, he must have some knowledge about logic in order to prove the lemmas (or verification conditions) obtained from the verifier. However, there are some assertions about programs which must always be true no matter what the programs do, and yet which cannot be checked for all cases. These assertions include: integer values do not overflow, array subscripts are within range, pointers do not fall off NIL, cells are not reclaimed if they are still pointed to, and uninitialized variables are not used. Since these conditions cannot be completely checked, many compilers produce dynamic checking code so that if a condition fails, the program terminates with proper diagnostics. This dynamic checking code sometimes takes up much computation time. It is better to have some checking so that unexpected overwriting of data will not occur, but it is still very awkward that the computation stops because of an error. Moreover, these errors can often be traced back to other errors in the program.
If we can find out whether these conditions will be met before actually running the program, we benefit both by being able to generate efficient code and by being able to produce more reliable programs through careful examination of errors in the programs. Similar techniques can be used to detect semantically equivalent subexpressions or redundant statements to do more elaborate code movement optimization. The system we have constructed runs fast enough to be used as a preprocessor of a compiler. The system first creates logical assertions immediately before array accesses such that these assertions must be true whenever control passes the assertion in order for the access to be valid. These assertions are proved using techniques similar to inductive assertion methods. If an array access lies inside a loop or after a loop, a loop invariant is synthesized. A theorem prover was created which has decision capabilities for a subset of arithmetic formulas. We can use this prover to prove some valid formulas, but we can also use it to generalize non-valid formulas so that we can hypothesize more general loop invariants. Theoretical considerations on the automatic synthesis of loop invariants have been taken into account, and a complete formula for loop invariants was obtained. We reduced the problem of loop invariant synthesis to the computation of this formula. This new approach to the synthesis of loop invariants will probably give a firmer basis for the automatic generation of loop invariants in general purpose verifiers.
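The checker's three verdicts can be mimicked with a toy interval comparison. This stand-in assumes the subscript's provable range has already been computed (e.g. by the loop-invariant synthesis the paper describes), which is the hard part the real system automates:

```python
def check_access(index_range, lower, upper):
    # index_range = (lo_i, hi_i): the provable range of the subscript.
    # Returns one of the three verdicts the system produces.
    lo_i, hi_i = index_range
    if lower <= lo_i and hi_i <= upper:
        return "within"        # access is always in bounds
    if hi_i < lower or lo_i > upper:
        return "out"           # access is always out of bounds
    return "undetermined"      # range straddles a bound
```

Only the "undetermined" cases would then need dynamic checking code, which is the efficiency gain the abstract describes.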

Journal ArticleDOI
TL;DR: In this article, a study of stability and differential stability in nonconvex programming with equality and inequality constraints is presented, where upper and lower bounds for the potential directional derivatives of the perturbation function (or the extremal-value function) are obtained with the help of a constraint qualification which is shown to be necessary and sufficient to have bounded multipliers.
Abstract: This paper consists of a study of stability and differential stability in nonconvex programming. For a program with equality and inequality constraints, upper and lower bounds are estimated for the potential directional derivatives of the perturbation function (or the extremal-value function). These results are obtained with the help of a constraint qualification which is shown to be necessary and sufficient to have bounded multipliers. New results on the continuity of the perturbation function are also obtained.

Journal ArticleDOI
TL;DR: In this paper, the problem of the prediction of the effective electrical conductivity of a polycrystal from the conductivities of a single crystal is considered, and it is shown that the average of the principal conductivities is the best upper bound on effective conductivity that can possibly be found.
Abstract: The problem of the prediction of the effective electrical conductivity of a polycrystal from the conductivity of a single crystal is considered. If the only information known about phase geometry is that the aggregate is statistically homogeneous and isotropic, it is shown that the average of the principal conductivities of the single crystal is the best upper bound on effective conductivity that can possibly be found. A new rigorous lower bound is found for the case of axially symmetric crystals. An exact solution is found for the case of a two-dimensional polycrystal.

Journal ArticleDOI
01 Jan 1977-Topology
TL;DR: If M is a compact manifold supporting a nonsingular flow in which each orbit is periodic, is there an upper bound on the lengths of the orbits? This paper shows that the answer is no.

Journal ArticleDOI
TL;DR: The convergence properties of the Fermi-hypernetted-chain method as originated by Fantoni and Rosati are investigated; for not too high densities and not too long-ranged correlation functions the convergence to an upper bound for the ground-state energy is excellent, but for higher densities and/or long-ranged correlation functions it is easily possible to underestimate the upper bound if one does not apply certain convergence criteria and associated error estimates.
Abstract: The convergence properties of the Fermi-hypernetted-chain method as originated by Fantoni and Rosati are investigated. Numerical results are reported for liquid ³He and two model fermion liquids. It turns out that for not too high densities and not too long-ranged correlation functions the convergence to an upper bound for the ground-state energy is excellent, but that for higher densities and/or long-ranged correlation functions it is easily possible to underestimate the upper bound if one does not apply certain convergence criteria and associated error estimates.

Journal ArticleDOI
TL;DR: A new approach to approximating, in a computationally efficient way, the total duration distribution function of a program, using PERT networks and CPM to propose bounds for the different moments of the distribution function.
Abstract: PERT and critical path techniques have exceptionally wide applications. These techniques and their applications have contributed significantly to better planning, control, and general organization of many programs. This paper is concerned with a technical improvement in PERT methodology by introducing a new approach to approximating, in a computationally efficient way, the total duration distribution function of a program. It is assumed that the activity durations are independent random variables and have a finite range. The first part of the paper considers PERT networks and deals with the lower bound approximation to the total duration distribution function of the program. Then we use this approximation and CPM to propose bounds for the different moments of the distribution function. We illustrate this with numerical examples. In the second part of the paper we adapt our results to PERT decision networks.
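The role of CPM in the bounds above can be sketched as follows: running the critical-path recursion on the mean activity durations underestimates the expected project duration, since the project duration is a convex (max-plus) function of the activity durations and Jensen's inequality applies. A minimal sketch, where the dictionary-based data layout is an assumption for the example:

```python
def critical_path_length(duration, preds):
    # Earliest finish of an activity = its duration plus the latest
    # earliest finish among its predecessors; the project duration is
    # the maximum earliest finish over all activities.
    finish = {}
    def ef(a):
        if a not in finish:
            finish[a] = duration[a] + max((ef(p) for p in preds.get(a, [])),
                                          default=0.0)
        return finish[a]
    return max(ef(a) for a in duration)
```

Evaluating this with mean durations gives the deterministic CPM value, a lower bound on the expected duration when durations are independent random variables.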

Journal ArticleDOI
TL;DR: In this article, the moments of the delay distribution and other measures of performance for a multi-channel queue are shown to be bounded above by the corresponding moments of the delay distribution for a single-channel queue.
Abstract: Moments of the delay distribution and other measures of performance for a multi-channel queue are shown to be bounded above by the corresponding quantities for a single-channel queue.

Journal ArticleDOI
TL;DR: It is shown that, of the three strategies which have been suggested for dealing with equal keys, the method of always stopping the scanning pointers on keys equal to the partitioning element performs best.
Abstract: This paper considers the problem of implementing and analyzing a Quicksort program when equal keys are likely to be present in the file to be sorted. Upper and lower bounds are derived on the average number of comparisons needed by any Quicksort program when equal keys are present. It is shown that, of the three strategies which have been suggested for dealing with equal keys, the method of always stopping the scanning pointers on keys equal to the partitioning element performs best.
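The winning strategy, stopping both scanning pointers on keys equal to the partitioning element so that runs of equal keys are split between the two subfiles, can be sketched as follows (an illustrative reconstruction, not the paper's program):

```python
def quicksort(a, lo=0, hi=None):
    # Hoare-style partition that stops both scans on keys equal to the
    # pivot -- the strategy found best when many equal keys are present.
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[hi]
    i, j = lo - 1, hi
    while True:
        i += 1
        while a[i] < pivot:              # stops when a[i] == pivot
            i += 1
        j -= 1
        while j > lo and a[j] > pivot:   # stops when a[j] == pivot
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[i], a[hi] = a[hi], a[i]            # put the pivot in its final place
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
    return a
```

Stopping on equal keys causes extra swaps of equal elements, but keeps the partition balanced on files with few distinct keys, which is why it wins on average.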

Journal ArticleDOI
TL;DR: Sharp upper and lower bounds on the Laplace-Stieltjes transform of the corresponding distribution function are derived and can prove useful in producing conservative estimates of a system's performance and in judging the information content of a partial characterization.
Abstract: Several partial characterizations of positive random variables (e.g., certain moments) are considered. For each characterization, sharp upper and lower bounds on the Laplace-Stieltjes transform of the corresponding distribution function are derived. These bounds are then shown to be applicable to several problems in queueing and traffic theory. The results can prove useful in producing conservative estimates of a system's performance, in judging the information content of a partial characterization and in providing insight into approximations.

Journal ArticleDOI
TL;DR: The foundations are laid for a theory of multiplicative complexity of algebras and it is shown how “multiplication problems” such as multiplication of matrices, polynomials, quaternions, etc., are instances of this theory.
Abstract: The foundations are laid for a theory of multiplicative complexity of algebras and it is shown how “multiplication problems” such as multiplication of matrices, polynomials, quaternions, etc., are instances of this theory. The usefulness of the theory is then demonstrated by utilizing algebraic ideas and results to derive complexity bounds. In particular linear upper and lower bounds for the complexity of certain types of algebras are established.

Journal ArticleDOI
TL;DR: In this paper, the authors show that the assumption that the probability of survival is the product of the probabilities of survival of the structure for the principal stresses applied individually is generally unconservative and therefore the approximation serves as a lower bound to the failure probability.
Abstract: A frequently used approximate treatment of fracture statistics for polyaxial stress states assumes that the probability of survival is the product of the probabilities of survival of the structure for the principal stresses applied individually. The present paper shows that this assumption is generally unconservative and therefore the approximation serves as a lower bound to the failure probability. A simple technique is given for finding an upper bound in cases of biaxial tension provided the uniaxial fracture behavior is described satisfactorily by Weibull's two-parameter formula. The upper bound is a good approximation when in high stress regions the stresses are equibiaxial, or nearly so, as in laterally loaded or spinning disks.

Journal ArticleDOI
TL;DR: Variational functionals of Braun and Rebane for the imaginary-frequency polarizability α(iω) have been generalized by the method of Gramian inequalities to give rigorous upper and lower bounds, permitting a comparative assessment of competing theoretical methods at this level of accuracy.
Abstract: Variational functionals of Braun and Rebane (1972) for the imaginary-frequency polarizability (IFP) have been generalized by the method of Gramian inequalities to give rigorous upper and lower bounds, valid even when the true (but unknown) unperturbed wavefunction must be represented by a variational approximation. Using these formulas in conjunction with flexible variational trial functions, tight error bounds are computed for the IFP and the associated two- and three-body van der Waals interaction constants of the ground 1¹S and metastable 2¹,³S states of He and Li⁺. These bounds generally establish the ground-state properties to within a fraction of a per cent and metastable properties to within a few per cent, permitting a comparative assessment of competing theoretical methods at this level of accuracy. Unlike previous 'error bounds' for these properties, the present results have a completely a priori theoretical character, with no empirical input data.

Journal ArticleDOI
TL;DR: In this paper, it was shown that there is an upper bound to the rate of growth of the Ricci curvature near a singularity.
Abstract: It is shown that there is an upper bound to the rate of growth of the Ricci curvature near a singularity.


Journal ArticleDOI
TL;DR: Sharp bounds on the value of perfect information for static and dynamic simple recourse stochastic programming problems are presented and some recent extensions of Jensen's upper bound and the Edmundson-Madansky lower bound are used.
Abstract: We present sharp bounds on the value of perfect information for static and dynamic simple recourse stochastic programming problems. The bounds are sharper than the available bounds based on Jensen's inequality. The new bounds use some recent extensions of Jensen's upper bound and the Edmundson-Madansky lower bound on the expectation of a concave function of several random variables. Bounds are obtained for nonlinear return functions and linear and strictly increasing concave utility functions for static and dynamic problems. When the random variables are jointly dependent, the Edmundson-Madansky type bound must be replaced by a less sharp "feasible point" bound. Bounds that use constructs from mean-variance analysis are also presented. With independent random variables the calculation of the bounds generally involves several simple univariate numerical integrations and the solution of several similar nonlinear programs. These bounds may be made as sharp as desired with increasing computational effort. The bounds are illustrated on a well-known problem in the literature and on a portfolio selection problem.


Journal ArticleDOI
TL;DR: This paper generalizes the ideas by defining total matchings and total coverings, and shows that these sets, whose elements in general consist of both vertices and edges, provide a way to unify these concepts.
Abstract: In graph theory, the related problems of deciding when a set of vertices or a set of edges constitutes a maximum matching or a minimum covering have been extensively studied. In this paper we generalize these ideas by defining total matchings and total coverings, and show that these sets, whose elements in general consist of both vertices and edges, provide a way to unify these concepts. Parameters denoting the maximum and the minimum cardinality of these sets are introduced and upper and lower bounds depending only on the order of the graph are obtained for the number of elements in arbitrary total matchings and total coverings. Precise values of all the parameters are found for several general classes of graphs, and these are used to establish the sharpness of most of the bounds. In addition, variations of some well known equalities due to Gallai relating covering and matching numbers are obtained.

Journal ArticleDOI
TL;DR: Several procedures based on (not necessarily regular) resolution for checking whether a formula in CF3 is contradictory are considered, and the exponential lower bounds do not follow directly from Tseitin's lower bound for regular resolution since these procedures also allow nonregular resolution trees.
Abstract: Several procedures based on (not necessarily regular) resolution for checking whether a formula in CF3 is contradictory are considered. The procedures use various methods of bounding the size of the clauses which are generated. The following results are obtained:1. All of the proposed procedures which are forced to run in polynomial time do not always work—i.e., they do not identify all contradictory formulas.2. Those which always work must run in exponential time. The exponential lower bounds for these procedures do not follow directly from Tseitin’s lower bound for regular resolution since these procedures also allow nonregular resolution trees.

Journal ArticleDOI
TL;DR: In this article, it was shown that failure to allow for variation of the field within each cell limits the maximum usable electrical size of the cells in moment-method solutions to the problem.
Abstract: When pulse functions are used in moment-method solutions, failure to allow for variation of the field within each cell limits the maximum usable electrical size of the cells. Appreciable error is expected for |k|l ≥ 2 in one or two dimensions and |k|l ≥ √6 in the three-dimensional problem, where l is the side of a cell and k is the propagation constant in the material.