
Showing papers on "Upper and lower bounds" published in 1981


Journal ArticleDOI
TL;DR: For systems with negligible self-gravity, the bound follows from application of the second law of thermodynamics to a gedanken experiment involving a black hole as discussed by the authors, and it is shown that black holes have the maximum entropy for given mass and size which is allowed by quantum theory and general relativity.
Abstract: We present evidence for the existence of a universal upper bound of magnitude $\frac{2\pi R}{\hbar c}$ to the entropy-to-energy ratio $\frac{S}{E}$ of an arbitrary system of effective radius $R$. For systems with negligible self-gravity, the bound follows from application of the second law of thermodynamics to a gedanken experiment involving a black hole. Direct statistical arguments are also discussed. A microcanonical approach of Gibbons illustrates for simple systems (gravitating and not) the reason behind the bound, and the connection of $R$ with the longest dimension of the system. A more general approach establishes the bound for a relativistic field system contained in a cavity of arbitrary shape, or in a closed universe. Black holes also comply with the bound; in fact they actually attain it. Thus, as long suspected, black holes have the maximum entropy for given mass and size which is allowed by quantum theory and general relativity.
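As a rough sense of scale (a back-of-the-envelope evaluation with standard constants, not taken from the paper): for a system of effective radius $R = 1\,\mathrm{m}$ whose total energy, rest mass included, is $E = 1\,\mathrm{J}$, the bound reads

$$S \le \frac{2\pi R E}{\hbar c}\,k_B = \frac{2\pi\,(1\,\mathrm{m})(1\,\mathrm{J})}{(1.055\times 10^{-34}\,\mathrm{J\,s})(2.998\times 10^{8}\,\mathrm{m/s})}\,k_B \approx 2\times 10^{26}\,k_B,$$

far above the entropy of any ordinary system carrying so little total energy; as the abstract notes, only black holes actually attain the bound.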

1,079 citations


Proceedings ArticleDOI
28 Oct 1981
TL;DR: A super-polynomial lower bound is given for the size of circuits of fixed depth computing the parity function and connections are given to the theory of programmable logic arrays and to the relativization of the polynomial-time hierarchy.
Abstract: A super-polynomial lower bound is given for the size of circuits of fixed depth computing the parity function. Introducing the notion of polynomial-size, constant-depth reduction, similar results are shown for the majority, multiplication, and transitive closure functions. Connections are given to the theory of programmable logic arrays and to the relativization of the polynomial-time hierarchy.

718 citations


Proceedings ArticleDOI
11 May 1981
TL;DR: Using the red-blue pebble game formulation, a number of lower bound results for the I/O requirement are proven; these results may provide insight into the difficult task of balancing I/O and computation in special-purpose system designs.
Abstract: Using the red-blue pebble game formulation, it is shown that when the n-point FFT is computed with a fast memory holding only S values, Ω(n log n / log S) time is needed for the I/O. Similar results are obtained for algorithms for several other problems. All of the lower bounds presented are the best possible in the sense that they are achievable by certain decomposition schemes. Results of this paper may provide insight into the difficult task of balancing I/O and computation in special-purpose system designs. For example, for the n-point FFT, the lower bound on I/O time implies that an S-point device achieving a speed-up ratio of order log S over the conventional O(n log n) time implementation is all one can hope for.
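Spelling out the speed-up remark (a short reading of the bound with constants suppressed, not text from the paper): the conventional implementation does $O(n \log n)$ work, while I/O alone already costs $\Omega(n \log n / \log S)$ on an S-point device, so

$$\text{speed-up} \le \frac{O(n\log n)}{\Omega\!\left(n\log n/\log S\right)} = O(\log S),$$

no matter how fast the device computes internally.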

514 citations


Journal ArticleDOI
TL;DR: For a Coulomb system of particles of charge e, it has been shown that the indirect part of the repulsive Coulomb energy (exchange plus correlation energy) has a lower bound of the form $-Ce^{2/3}\int \rho(x)^{4/3}\,dx$, where $\rho$ is the single particle charge density.
Abstract: For a Coulomb system of particles of charge e, it has previously been shown that the indirect part of the repulsive Coulomb energy (exchange plus correlation energy) has a lower bound of the form $-Ce^{2/3}\int \rho(x)^{4/3}\,dx$, where $\rho$ is the single particle charge density. Here we lower the constant C from the 8.52 previously given to 1.68. We also show that the best possible C is greater than 1.23.
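To show how such a bound is evaluated in practice, here is a minimal numerical sketch in Hartree atomic units ($e = 1$); the Gaussian density and the particle number are invented for the example and are not from the paper:

```python
import numpy as np
from scipy.integrate import quad

# Lower bound on the indirect (exchange + correlation) Coulomb energy:
#     E_ind >= -C * e^(2/3) * integral rho(x)^(4/3) d^3x,   C = 1.68
# (constant from the abstract above).  Atomic units, so e = 1.
C = 1.68
N, sigma = 10.0, 2.0   # hypothetical: 10 electrons in a Gaussian cloud of width 2 bohr

def rho(r):
    """Spherically symmetric Gaussian charge density integrating to N."""
    return N * np.exp(-r**2 / (2 * sigma**2)) / ((2 * np.pi) ** 1.5 * sigma**3)

# Radial integral of rho^(4/3) over all space.
integral, _ = quad(lambda r: 4 * np.pi * r**2 * rho(r) ** (4.0 / 3.0), 0, np.inf)

print(f"integral of rho^(4/3): {integral:.4f}")
print(f"indirect Coulomb energy >= {-C * integral:.4f} hartree")
```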

472 citations


Proceedings ArticleDOI
29 Jun 1981
TL;DR: Upper and lower bounds for delay that are computationally simple are presented here to certify that a circuit is "fast enough", given both the maximum delay and the voltage threshold.
Abstract: In MOS integrated circuits, signals may propagate between stages with fanout. The MOS interconnect may be modeled by an RC tree. Exact calculation of signal delay through such networks is difficult. However, upper and lower bounds for delay that are computationally simple are presented here. The results can be used (1) to bound the delay, given the signal threshold; or (2) to bound the signal voltage, given a delay time; or (3) to certify that a circuit is "fast enough", given both the maximum delay and the voltage threshold.
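For a flavor of the kind of computationally simple, tree-structured quantity such bounds are built from, the sketch below computes the classic Elmore-style RC sum for a small fanout tree. This is a related simple estimate, not the paper's upper and lower bounds themselves, and the topology and R/C values are invented:

```python
# Elmore-style RC-tree delay sum: one simple quantity of the kind the paper's
# upper/lower bounds are built around -- not the paper's bounds themselves.
# node: (parent, resistance of wire from parent, capacitance at node); root has parent None.
tree = {
    "root": (None, 0.0, 0.0),
    "a":    ("root", 100.0, 0.05e-12),   # 100 ohm wire, 0.05 pF load
    "b":    ("a",    150.0, 0.10e-12),
    "c":    ("a",    200.0, 0.08e-12),   # fanout: b and c both hang off a
}

def path_resistances(node):
    """Edge resistances along the path root->node, keyed by the child node of each edge."""
    out = {}
    while tree[node][0] is not None:
        out[node] = tree[node][1]
        node = tree[node][0]
    return out

def elmore_delay(sink):
    """T_D(sink) = sum over nodes k of C_k times the resistance shared by root->sink and root->k."""
    p_sink = path_resistances(sink)
    total = 0.0
    for k, (_, _, c_k) in tree.items():
        p_k = path_resistances(k)
        r_common = sum(r for e, r in p_sink.items() if e in p_k)
        total += c_k * r_common
    return total

for sink in ("b", "c"):
    print(f"Elmore delay to {sink}: {elmore_delay(sink) * 1e12:.3f} ps")
```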

357 citations


Journal ArticleDOI
TL;DR: In this paper, the authors give an upper bound for the Betti numbers of a compact Riemannian manifold in terms of its diameter and the lower bound of the sectional curvatures.
Abstract: We give an upper bound for the Betti numbers of a compact Riemannian manifold in terms of its diameter and the lower bound of the sectional curvatures. This estimate in particular shows that most manifolds admit no metrics of non-negative sectional curvature.

344 citations


Journal ArticleDOI
TL;DR: A recursive formula is presented to compute the exact reliability of a consecutive-k-out-of-n:F system, which fails whenever k consecutive components are failed, and sharp upper and lower bounds on the system reliability are given.
Abstract: A reliability diagram with n components in sequence is called consecutive-k-out-of-n:F system if the system fails whenever k consecutive components are failed. This paper presents a recursive formula to compute the exact system reliability, and gives sharp upper and lower bounds for it.
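For the special case of independent, identical components, a standard recursion of this kind is short enough to code directly. The sketch below uses the textbook i.i.d. recursion; the paper's formula and its sharp bounds cover the general case:

```python
# Exact reliability of a consecutive-k-out-of-n:F system with i.i.d. components,
# via a standard recursion for this special case.  p = probability a component
# works, q = 1 - p.
#
#   R(m) = R(m-1) - p * q**k * R(m-k-1)   for m > k,
#   R(m) = 1 for m < k,   R(k) = 1 - q**k.

def consecutive_k_out_of_n_F(n: int, k: int, p: float) -> float:
    q = 1.0 - p
    R = [1.0] * (n + 1)          # R[m] = 1 for m < k
    if n >= k:
        R[k] = 1.0 - q**k
    for m in range(k + 1, n + 1):
        R[m] = R[m - 1] - p * q**k * R[m - k - 1]
    return R[n]

# Example: 10 components in sequence, system fails if 3 consecutive fail, p = 0.9.
print(consecutive_k_out_of_n_F(10, 3, 0.9))   # ~0.993
```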

280 citations


Journal ArticleDOI
TL;DR: In this paper, an upper bound on the rate at which information can be transferred in terms of the message energy is inferred from thermodynamic and causality considerations, which is consistent with Shannon's bounds for a band-limited channel.
Abstract: From thermodynamic and causality considerations a general upper bound on the rate at which information can be transferred in terms of the message energy is inferred. This bound is consistent with Shannon's bounds for a band-limited channel. It prescribes the minimum energy cost for information transferred over a given time interval. As an application, a fundamental upper bound of ${10}^{15}$ operations/sec on the speed of an ideal digital computer is established.

227 citations


Journal ArticleDOI
TL;DR: A simple algorithm is proposed for the design of small-diameter networks; for a given number of nodes n and degree d, it constructs a directed graph with diameter ⌈log_d n⌉, which is at most one larger than the lower bound.
Abstract: This paper proposes a simple algorithm for the design of small-diameter networks. For a given number of nodes n and degree d, the algorithm constructs a directed graph with diameter ⌈log_d n⌉, which is at most one larger than the lower bound ⌈log_d(n(d − 1) + 1)⌉ − 1. Its average distance is also close to the lower bound. For large n, the algorithm constructs directed graphs with smaller diameter than conventional methods.
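For a concrete feel of such constructions, the sketch below builds the closely related generalized de Bruijn digraph with edges $i \to (i \cdot d + j) \bmod n$, which also achieves diameter at most ⌈log_d n⌉, and checks the diameter by brute-force BFS. Whether this is exactly the paper's construction rule is not claimed here:

```python
from collections import deque
from math import ceil, log

def gen_de_bruijn(n: int, d: int):
    """Directed graph on nodes 0..n-1 with edges i -> (i*d + j) mod n, j = 0..d-1."""
    return {i: [(i * d + j) % n for j in range(d)] for i in range(n)}

def diameter(adj):
    """Longest shortest-path distance over all ordered pairs (BFS from every node)."""
    worst = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        worst = max(worst, max(dist.values()))
    return worst

n, d = 50, 3
g = gen_de_bruijn(n, d)
print("diameter:", diameter(g), "  ceil(log_d n):", ceil(log(n, d)))
```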

223 citations



Journal ArticleDOI
Dana Angluin
TL;DR: The number of queries required to identify a regular set, given an oracle for the set and some auxiliary information about it, is considered; the problem considered by Pao and Carr (1978) is shown to be solvable with a polynomial number of queries.
Abstract: We consider the number of queries required to identify a regular set given an oracle for the set and some auxiliary information about the set. If the auxiliary information is n , the number of states of the canonical finite state acceptor for the language, then the upper and lower bounds on the number of queries are exponential in n . If the auxiliary information consists of a set of strings guaranteed to reach every live state of the canonical acceptor for the language, then the upper and lower bounds are polynomial in n and the size of the given set of strings. As a corollary, the problem considered by Pao and Carr (1978) is shown to be solvable in a polynomial number of queries.

Journal ArticleDOI
TL;DR: This paper presents a general methodology to carry out the reduction in numbers of degrees of freedom in path integral algorithms, and shows how to use discretized path integrals to compute rigorous upper and lower bounds to the free energy for nontrivial quantum systems.
Abstract: In the path integral representation of quantum theory, a few body quantum problem becomes a classical many body problem. To exploit this isomorphism, it becomes necessary to develop methods by which degrees of freedom can be explicitly removed from consideration. The interactions among the remaining relevant variables are described by effective interactions. In this paper, we present a general methodology to carry out the reduction in numbers of degrees of freedom. Certain path integral algorithms are shown to correspond to reference systems for the full isomorphic classical many body problem. The correspondence allows one to determine systematic corrections to the algorithms by low order perturbation approximations familiar in the theory of simple classical fluids. We show how to use discretized path integrals to compute rigorous upper and lower bounds to the free energy for nontrivial quantum systems, and we discuss how to optimize the upper bounds with variational theories. Several illustrative example...

Proceedings ArticleDOI
11 May 1981
TL;DR: This paper extends the model and the class of functions for which non-trivial bounds can be proved, and shows that previous lower bound results also apply even when the model is extended to allow nondeterminism, randomness, and multiple arrivals.
Abstract: Increased use of Very Large Scale Integration (VLSI) for the fabrication of digital circuits has led to increased interest in complexity results on the inherent VLSI difficulty of various problems. Lower bounds have been obtained for problems such as integer multiplication [1,2], matrix multiplication [7], sorting [8], and discrete Fourier transform [9], all within VLSI models similar to one originally developed by Thompson [8,9]. The lower bound results all pertain to a space-time trade-off measure that arises naturally within this model. In this paper, we extend the model and the class of functions for which non-trivial bounds can be proved. In Section 2, we give a more general model than has been proposed previously. In Section 3 we show how to reduce the derivation of lower bounds within the model to a problem in distributed computing. In Section 4, we consider lower bounds for a number of predicates: n-input, 1-output functions (as contrasted with the n-input, n-output functions which have been studied previously). In Section 5, we show that previous lower bound results (for n-input, n-output functions) also apply even when the model is extended to allow nondeterminism, randomness, and multiple arrivals. Finally, the full details of the results presented here will appear in the final version of this paper.

Journal ArticleDOI
TL;DR: It is shown that the sequence $\{2^0, 2^1, \cdots, 2^{n-1}, 2^n - 1\}$ can be computed with $n + 2.13\sqrt{n} + \log n$ additions and that $n + \sqrt{n} - 2$ additions are necessary; this lower bound result is applied to show that the addition-sequence problem is NP-complete.
Abstract: Given a sequence $n_1, \cdots, n_m$ of positive integers, what is the smallest number of additions needed to compute all m integers starting with 1? This generalization of the addition chain ($m = 1$) problem will be called the addition-sequence problem. We show that the sequence $\{2^0, 2^1, \cdots, 2^{n-1}, 2^n - 1\}$ can be computed with $n + 2.13\sqrt{n} + \log n$ additions, and that $n + \sqrt{n} - 2$ is a lower bound. This lower bound result is applied to show that the addition-sequence problem is NP-complete.
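A small worked example (not from the paper) makes the cost model concrete: for $n = 4$ the targets are $\{1, 2, 4, 8, 15\}$; repeated doubling gives 2, 4, 8 in 3 additions, and $15 = 8 + 4 + 2 + 1$ takes 3 more, so the obvious scheme uses $2n - 2 = 6$ additions. The paper's point is asymptotic: $n + 2.13\sqrt{n} + \log n = n + o(n)$ additions suffice, improving on the naive $2n - 2$, while at least $n + \sqrt{n} - 2$ are necessary.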

Journal ArticleDOI
TL;DR: In this paper, a distribution planning model is formulated which considers existing and potential substation locations, their capacities and costs, together with the primary feeder network represented by small area demand locations to represent non-uniform loads, and feeder segments having variable distribution costs and limited capacities.
Abstract: A distribution planning model is formulated which considers existing and potential substation locations, their capacities and costs, together with the primary feeder network represented by small area demand locations to represent non-uniform loads, and feeder segments having variable distribution costs and limited capacities. A branch and bound search method is described which utilizes a shortest path table to obtain lower bounds and solutions from a transshipment linear programming model for upper bounds. The solution of a small example is presented in detail, and computational results for several larger problems are summarized.

Journal ArticleDOI
TL;DR: In this paper, the authors generalized Hill's theory of bifurcation and stability in solids obeying normality to include a non-associated flow law, and a one-parameter family of linear comparison solids has been found that admits a potential and has the property that if uniqueness is certain for the comparison solid, then instability is precluded for the underlying elastic-plastic solid.
Abstract: In the present paper, Hill's theory of bifurcation and stability in solids obeying normality is generalized to include a non-associated flow law. A one-parameter family of linear comparison solids has been found that admits a potential and has the property that if uniqueness is certain for the comparison solid then bifurcation and instability are precluded for the underlying elastic-plastic solid. The uniqueness criterion derived may be used as a device to determine lower bounds to the magnitudes of primary bifurcation and instability stresses which are ordinarily unknown. A second linear solid is introduced whose constitutive relations have the same form as the elastic-plastic solid “in loading”. The first eigenstate of this solid gives an upper bound to the primary bifurcation state of the underlying elastic-plastic solid. The search for the genuine primary bifurcation state is therefore replaced by a search for upper and lower bounds in the situation when normality fails to hold. The theory is applied to problems of homogeneous stress states.

Journal ArticleDOI
TL;DR: An algorithm for the asymmetric traveling salesman problem (TSP) using a new, restricted Lagrangean relaxation based on the assignment problem (AP) that can be adapted to the symmetric TSP by using the 2-matching problem instead of AP is described.
Abstract: We describe an algorithm for the asymmetric traveling salesman problem (TSP) using a new, restricted Lagrangean relaxation based on the assignment problem (AP). The Lagrange multipliers are constrained so as to guarantee the continued optimality of the initial AP solution, thus eliminating the need for repeatedly solving AP in the process of computing multipliers. We give several polynomially bounded procedures for generating valid inequalities and taking them into the Lagrangean function with a positive multiplier without violating the constraints, so as to strengthen the current lower bound. Upper bounds are generated by a fast tour-building heuristic. When the bound-strengthening techniques are exhausted without matching the upper with the lower bound, we branch by using two different rules, according to the situation: the usual subtour breaking disjunction, and a new disjunction based on conditional bounds. We discuss computational experience on 120 randomly generated asymmetric TSP's with up to 325 cities, the maximum time used for any single problem being 82 seconds. This is a considerable improvement upon earlier methods. Though the algorithm discussed here is for the asymmetric TSP, the approach can be adapted to the symmetric TSP by using the 2-matching problem instead of AP.
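The basic bounding skeleton underneath such algorithms (AP relaxation for a lower bound, a fast tour heuristic for an upper bound) fits in a few lines. The sketch below uses an invented random instance and does not reproduce the paper's restricted Lagrangean, cut generation, or branching rules:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 30
cost = rng.integers(1, 100, size=(n, n)).astype(float)
np.fill_diagonal(cost, 1e9)            # large penalty: forbid i -> i in the assignment

# Lower bound: the assignment problem (AP) relaxation of the asymmetric TSP.
rows, cols = linear_sum_assignment(cost)
ap_lower_bound = cost[rows, cols].sum()

# Upper bound: a fast nearest-neighbour tour-building heuristic.
tour, unvisited = [0], set(range(1, n))
while unvisited:
    nxt = min(unvisited, key=lambda j: cost[tour[-1], j])
    tour.append(nxt)
    unvisited.remove(nxt)
tour_length = sum(cost[tour[i], tour[(i + 1) % n]] for i in range(n))

print(f"AP lower bound: {ap_lower_bound:.0f}   nearest-neighbour upper bound: {tour_length:.0f}")
```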

Journal ArticleDOI
TL;DR: It is shown here that Ω(n(log n)^d) is a lower bound on the inherent worst-case time required to process a sequence of n intermixed insertions, deletions, and range queries, which implies that the Lueker and Willard data structures are in some sense optimal.
Abstract: Let S be an arbitrary commutative semigroup (set of elements closed under a commutative and associative addition operation, +). Given a set of records with d-dimensional key vectors over an ordered key space, such that each record has associated with it a value in S, an orthogonal range query is a request for the sum of the values associated with each record in some specified hypercube (cross product of intervals). Data structures which accommodate insertions and deletions of records and orthogonal range queries, such that an arbitrary sequence of n such operations takes time O(n(log n)^d), have been presented by G. Lueker and D. Willard. It is shown here that Ω(n(log n)^d) is a lower bound on the inherent worst-case time required to process a sequence of n intermixed insertions, deletions, and range queries, which implies that the Lueker and Willard data structures are in some sense optimal.
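For intuition about where log factors per operation come from, here is the simplest dynamic structure of this flavor: a one-dimensional Fenwick (binary indexed) tree over integer values under +. This is only an illustration; the paper's model is a commutative semigroup (no subtraction) in d dimensions, and the Lueker and Willard structures it refers to are more general:

```python
class Fenwick:
    """1-D dynamic prefix sums: O(log n) point update, O(log n) prefix query."""
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def add(self, i, delta):          # point update at position i (1-based)
        while i <= self.n:
            self.t[i] += delta
            i += i & (-i)

    def prefix(self, i):              # sum of positions 1..i
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & (-i)
        return s

    def range_sum(self, lo, hi):      # uses subtraction, hence needs a group, not just a semigroup
        return self.prefix(hi) - self.prefix(lo - 1)

f = Fenwick(16)
f.add(3, 5)      # "insert" a record with value 5 at key 3
f.add(7, 2)
f.add(3, -5)     # "delete" it again
print(f.range_sum(1, 8))   # -> 2
```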

Journal ArticleDOI
TL;DR: The size and position of spurious peaks in the autocorrelation functions are discussed and the results are extended to narrowband ambiguity functions.
Abstract: Time-frequency hop codes are developed based upon the theory of linear congruences. These codes can be used for multiuser radar and asynchronous spread spectrum communications systems. A uniform upper bound is placed on the cross-correlation function between any two elements of the code set. The upper bound is minimized by choice of time-bandwidth product and is shown to diminish as 2/N, where N is the number of elements in the code set. The size and position of spurious peaks in the autocorrelation functions are discussed. The results are extended to narrowband ambiguity functions.

Journal ArticleDOI
TL;DR: It is proved that $R(N) = N^{1/4+o(1)}$, thus showing that Roth's original lower bound was essentially best possible; the notion of discrepancy of hypergraphs is introduced and an upper bound is derived from which this result follows.
Abstract: Let g be a coloring of the set {1, ..., N} = [1, N] in red and blue. For each arithmetic progression A in [1, N], consider the absolute value of the difference of the numbers of red and of blue members of A. Let R(g) be the maximum of this number over all arithmetic progressions (the discrepancy of g). Set $$R(N) = \mathop{\min}\limits_g R(g)$$ over all two-colorings g. A remarkable result of K. F. Roth gives $R(N) \gg N^{1/4}$. On the other hand, Roth observed that $R(N) \ll N^{1/2+\varepsilon}$ and suggested that this bound was nearly sharp. A. Sárközy disproved this by proving $R(N) \ll N^{1/3+\varepsilon}$. We prove that $R(N) = N^{1/4+o(1)}$, thus showing that Roth's original lower bound was essentially best possible. Our result is more general. We introduce the notion of discrepancy of hypergraphs and derive an upper bound from which the above result follows.
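To make the definitions concrete, the brute-force check below computes R(N) for very small N by trying every two-coloring and every arithmetic progression in the interval; it is purely illustrative and says nothing about the asymptotics:

```python
from itertools import product

def discrepancy(coloring):
    """Max over arithmetic progressions in {0,...,N-1} of |#red - #blue|, coloring given as +/-1."""
    N, worst = len(coloring), 0
    for start in range(N):
        for step in range(1, N + 1):
            s = 0
            for x in range(start, N, step):
                s += coloring[x]
                worst = max(worst, abs(s))   # every prefix of an AP is itself an AP
    return worst

def R(N):
    """Minimum AP-discrepancy over all two-colorings of {0, ..., N-1}."""
    return min(discrepancy(c) for c in product((-1, 1), repeat=N))

for N in range(1, 13):
    print(N, R(N))    # exact minimum discrepancy for small N
```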

Journal ArticleDOI
TL;DR: The Barankin bound is used to examine the effect of ambiguity on mean-square measurement error and the relative magnitude of the bounds in that region depends critically on the ratio of signal center frequency to signal bandwidth.
Abstract: Array processing of narrow-band Gaussian signals is studied with emphasis on delay estimation. The Barankin bound is used to examine the effect of ambiguity on mean-square measurement error. When the bound is plotted as a function of signal-to-noise ratio one observes a distinct threshold. Above the critical signal-to-noise ratio the lower bound on mean-square error is given by the Cramer-Rao inequality, which is approached by the Barankin inequality under these conditions. Below the threshold the Barankin bound can exceed the Cramer-Rao bound by large factors. The relative magnitude of the bounds in that region depends critically on the ratio of signal center frequency to signal bandwidth.

Journal ArticleDOI
TL;DR: In this paper, the bearing capacities of slopes loaded on top surfaces were calculated by using the upper bound theorem, and the results were compared with bearing capacities obtained by conventional circular arc methods and by Kotter's stress characteristics equations.

Journal ArticleDOI
TL;DR: In this paper, the eigenvalues of the Laplace operator on a compact Riemannian manifold, with either Dirichlet or Neumann boundary condition, are investigated.
Abstract: Let $M^n$ be an n-dimensional compact Riemannian manifold with possibly empty boundary, ∂M. We consider the eigenvalues of the Laplace operator on M with either Dirichlet or Neumann boundary condition. The famous asymptotic formulas of H. Weyl give ...
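For reference, since the abstract is cut off mid-sentence here: the standard statement of Weyl's asymptotic law (supplied from general knowledge, and possibly differing in normalization from the display the authors intended) is

$$N(\lambda) = \#\{k : \lambda_k \le \lambda\} \sim \frac{\omega_n}{(2\pi)^n}\,\mathrm{Vol}(M)\,\lambda^{n/2} \quad (\lambda \to \infty),$$

where $\omega_n$ is the volume of the unit ball in $\mathbb{R}^n$; equivalently $\lambda_k \sim 4\pi^2\bigl(k/(\omega_n\,\mathrm{Vol}(M))\bigr)^{2/n}$, with the same leading term for either boundary condition.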

Journal ArticleDOI
TL;DR: In this article, an upper bound on the probability distribution for the strength of composite materials is obtained based on the occurrence of two or more adjacent broken fibers in a bundle, and local load sharing is assumed for the non-failed fiber elements in each bundle.
Abstract: An upper bound is obtained on the probability distribution for the strength of composite materials. The analysis is based on the chain-of-bundles probability model, and local load sharing is assumed for the nonfailed fiber elements in each bundle. The bound is based on the occurrence of two or more adjacent broken fibers in a bundle. This event is necessary but not sufficient for the failure of the material. Two distributions are assumed for fiber strength: the usual Weibull distribution and a more realistic double version which has much the effect of putting a ceiling on fiber strength. For large composite materials, the upper bound becomes a Weibull distribution but with a shape parameter which is twice that for the individual fibers. The bound is always conservative, but it is extremely tight when the variability in fiber strength is low. In typical cases, the use of the double Weibull distribution for fiber strength is shown not to affect the behavior of the bound significantly. In view of the additional experimental and computational labor involved, its use in practice may not be justified in such cases. However, its use does shed light on fracture processes in composite materials.
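A one-line heuristic (consistent with the abstract, though not a derivation from the paper) shows where the doubled shape parameter comes from: if a single fiber element has Weibull strength distribution $F(s) \approx (s/s_0)^{\rho}$ in its lower tail, then the probability that two adjacent fiber elements are both weaker than $s$ scales like $F(s)^2 \approx (s/s_0)^{2\rho}$, i.e. Weibull-like with shape parameter $2\rho$.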


Proceedings ArticleDOI
11 May 1981
TL;DR: In this paper, the authors proved matching upper and lower bounds on minimax edge length for four planar embedding problems for complete binary trees, which imply general performance limits due to propagation delay.
Abstract: Information is not transferred instantaneously; there is always a propagation delay before an output is available as an input to the next computational step. Propagation delay is a function of wire length, so we study the length of edges in planar graphs. We prove matching (to within a constant factor) upper and lower bounds on minimax edge length for four planar embedding problems for complete binary trees. (The results are summarized in Table 1.) Because trees are often subcircuits of larger circuits, these results imply general performance limits due to propagation delay. The results give important information for the popular technique of pipelining.

Journal ArticleDOI
TL;DR: In this paper, an asymptotic lower bound is obtained for the integrated relative squared error of autoregressive spectral estimate when the order of auto-gression is selected, and the bound is attained in the limit by the same selection as has been proposed for prediction.
Abstract: An asymptotic lower bound is obtained for the integrated relative squared error of autoregressive spectral estimate when the order of autoregression is selected. The bound is attained in the limit by the same selection as has been proposed for prediction.

Journal ArticleDOI
TL;DR: In this paper, the Green's function Monte Carlo method, which yields an exact ground-state solution to the Schroedinger equation for Bose systems, is applied to liquid ³He, a fermion system.
Abstract: The Green's function Monte Carlo method, which yields an exact ground-state solution to the Schroedinger equation for Bose systems, is applied to liquid ³He, a fermion system. With use of a technique that projects out states of selected symmetry, strict upper bounds to the ground-state energy of many-fermion systems are constructed. Such an upper bound for liquid ³He is found to be −2.20 ± 0.05 °K at the experimental equilibrium density. This energy is more than 1 °K below Jastrow variational results, and much nearer the experimental value of −2.47 ± 0.01 °K.

Journal ArticleDOI
TL;DR: In this paper, the convergence properties of several algorithms for computing the greatest lower bound to reliability or the constrained minimum trace communality solution in factor analysis have been examined, and it is shown that a slightly modified version of one method suggested by Bentler and Woodward can safely be applied to any set of data.
Abstract: In the last decade several algorithms for computing the greatest lower bound to reliability or the constrained minimum-trace communality solution in factor analysis have been developed. In this paper convergence properties of these methods are examined. Instead of using Lagrange multipliers a new theorem is applied that gives a sufficient condition for a symmetric matrix to be Gramian. Whereas computational pitfalls for two methods suggested by Woodhouse and Jackson can be constructed it is shown that a slightly modified version of one method suggested by Bentler and Woodward can safely be applied to any set of data. A uniqueness proof for the solution desired is offered.

Journal ArticleDOI
TL;DR: This work defines a particular tree-search technique for the ILP, which makes use of a lower bound to determine the branches to follow in the decision tree, and applies it to the solution of the Zero-One Multiple Knapsack Problem.