
Showing papers on "Upper and lower bounds published in 1990"


Journal ArticleDOI
TL;DR: In this article, the authors provide integer linear programming formulations for the selective travelling salesman problem and derive upper and lower bounds for the problem, which are then embedded in exact enumerative algorithms.

388 citations


Journal ArticleDOI
TL;DR: It is shown that the minimal distance d of a binary self-dual code of length n ≥ 74 is at most 2⌊(n+6)/10⌋.
Abstract: It is shown that the minimal distance d of a binary self-dual code of length n ≥ 74 is at most 2⌊(n+6)/10⌋. This bound is a consequence of some new conditions on the weight enumerator of a self-dual code obtained by considering a particular translate of the code, called its shadow. These conditions also enable one to find the highest possible minimal distance of a self-dual code for all n ≤ 60; to show that self-dual codes with d ≥ 6 exist precisely for n ≥ 22, with d ≥ 8 exist precisely for n = 24, 32 and n ≥ 36, and with d ≥ 10 exist precisely for n ≥ 46; and to show that there are exactly eight self-dual codes of length 32 with d = 8. Several of the self-dual codes of length 34 have trivial group (this appears to be the smallest length where this can happen).
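In display form, the headline bound reads (floor brackets restored from the scraped text; otherwise this is just a restatement of the abstract):

```latex
d \;\le\; 2\left\lfloor \frac{n+6}{10} \right\rfloor, \qquad n \ge 74.
```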

384 citations


Journal ArticleDOI
TL;DR: Upper and lower bounds are presented for extremal problems defined for arrangements of lines, circles, spheres, and alike; in particular, it is proved that the maximum number of edges bounding m cells in an arrangement of n lines is Θ(m^{2/3}n^{2/3} + n), and that it is O(m^{2/3}n^{2/3}s(n) + n) for n unit-circles.
Abstract: We present upper and lower bounds for extremal problems defined for arrangements of lines, circles, spheres, and alike. For example, we prove that the maximum number of edges bounding m cells in an arrangement of n lines is Θ(m^{2/3}n^{2/3} + n), and that it is O(m^{2/3}n^{2/3}s(n) + n) for n unit-circles, where s(n) (and later s(m, n)) is a function that depends on the inverse of Ackermann's function and grows extremely slowly. If we replace unit-circles by circles of arbitrary radii the upper bound goes up to O(m^{3/5}n^{4/5}s(n) + n). The same bounds (without the s(n)-terms) hold for the maximum sum of degrees of m vertices. In the case of vertex degrees in arrangements of lines and of unit-circles our bounds match previous results, but our proofs are considerably simpler than the previous ones. The maximum sum of degrees of m vertices in an arrangement of n spheres in three dimensions is O(m^{4/7}n^{9/7}s(m, n) + n^2), in general, and O(m^{3/4}n^{3/4}s(m, n) + n) if no three spheres intersect in a common circle. The latter bound implies that the maximum number of unit-distances among m points in three dimensions is O(m^{3/2}s(m)), which improves the best previous upper bound on this problem. Applications of our results to other distance problems are also given.

362 citations


Journal ArticleDOI
TL;DR: A Chapman-Robbins form of the Barankin bound is used to derive a multiparameter Cramer-Rao (CR) type lower bound on estimator error covariance when the parameter theta in R^n is constrained to lie in a subset of the parameter space.
Abstract: A Chapman-Robbins form of the Barankin bound is used to derive a multiparameter Cramer-Rao (CR) type lower bound on estimator error covariance when the parameter theta in R^n is constrained to lie in a subset of the parameter space. A simple form for the constrained CR bound is obtained when the constraint set Theta_C can be expressed as a smooth functional inequality constraint. It is shown that the constrained CR bound is identical to the unconstrained CR bound at the regular points of Theta_C, i.e. where no equality constraints are active. On the other hand, at those points theta in Theta_C where pure equality constraints are active, the full-rank Fisher information matrix in the unconstrained CR bound must be replaced by a rank-reduced Fisher information matrix obtained as a projection of the full-rank Fisher matrix onto the tangent hyperplane of the constraint set at theta. A necessary and sufficient condition involving the forms of the constraint and the likelihood function is given for the bound to be achievable, and examples for which the bound is achieved are presented. In addition to providing a useful generalization of the CR bound, the results permit analysis of the gain in achievable MSE performance due to the imposition of particular constraints on the parameter space without the need for a global reparameterization.
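Schematically, and in generic notation rather than the paper's own (the particular projector and pseudoinverse below are assumptions made for illustration), the structure described in the abstract is:

```latex
% F is the Fisher information, P_theta the orthogonal projector onto the tangent
% hyperplane of the active constraints at theta, and (.)^+ a pseudoinverse.
\operatorname{cov}_\theta(\hat\theta) \;\succeq\;
\begin{cases}
F(\theta)^{-1}, & \text{at regular points of } \Theta_C \text{ (no active equality constraints)},\\[4pt]
\bigl(P_\theta\, F(\theta)\, P_\theta\bigr)^{+}, & \text{at points where equality constraints are active.}
\end{cases}
```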

350 citations


Book
01 Jan 1990
TL;DR: In this article, the authors discuss the validity of the upper bound work (or energy) method of limit analysis in a form that can be appreciated by a practicing soil engineer, and provide a compact and up-to-date summary of recent advances in the applications of upper bound analysis to earthquake-induced stability problems in soil mechanics.
Abstract: During the last ten years, our understanding of the perfect plasticity and the associated flow rule assumption on which limit analysis is based has increased considerably. Many extensions and advances have been made in applications of limit analysis to the area of soil dynamics, in particular, to earthquake-induced slope failure and landslide problems and to earthquake-induced lateral earth pressures on rigid retaining structures. The purpose of the book therefore is in part to discuss the validity of the upper bound work (or energy) method of limit analysis in a form that can be appreciated by a practicing soil engineer, and in part to provide a compact and up-to-date summary of recent advances in the applications of limit analysis to earthquake-induced stability problems in soil mechanics.

316 citations


Journal ArticleDOI
TL;DR: Set-membership techniques for estimating parameters from uncertain data are reviewed, and a suitable characterization is sought of the set of all parameter vectors consistent with the model structure, the data, and the bounds on the errors.
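A minimal sketch of the kind of computation set-membership (bounded-error) estimation involves, assuming a linear-in-parameters model y_i = phi_i' theta + e_i with |e_i| <= eps; the function and variable names are illustrative, and this is only one of several characterizations such surveys cover:

```python
# With bounded errors, the feasible parameter set is the polytope
# {theta : -eps <= y - Phi @ theta <= eps}; its tightest axis-aligned box
# can be found coordinate-by-coordinate with linear programming.
import numpy as np
from scipy.optimize import linprog

def parameter_bounds(Phi, y, eps):
    """Per-parameter lower/upper bounds of the feasible polytope."""
    n = Phi.shape[1]
    A_ub = np.vstack([Phi, -Phi])            # Phi theta <= y + eps
    b_ub = np.concatenate([y + eps, eps - y])  # -Phi theta <= eps - y
    lo, hi = np.empty(n), np.empty(n)
    for j in range(n):
        c = np.zeros(n)
        c[j] = 1.0                            # minimize / maximize theta_j
        res_min = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n)
        res_max = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n)
        lo[j], hi[j] = res_min.fun, -res_max.fun
    return lo, hi

# Tiny usage example with simulated bounded noise.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(30, 2))
theta_true = np.array([1.0, -2.0])
y = Phi @ theta_true + rng.uniform(-0.1, 0.1, size=30)
print(parameter_bounds(Phi, y, eps=0.1))
```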

312 citations


Journal ArticleDOI
TL;DR: In this paper, the relationship between a detailed power system dynamic model and a standard load-flow model is examined to show how the load flow Jacobian appears in the system dynamic-state Jacobian for evaluating steady-state stability.
Abstract: The relationship is presented between a detailed power system dynamic model and a standard load-flow model. The linearized dynamic model is examined to show how the load-flow Jacobian appears in the system dynamic-state Jacobian for evaluating steady-state stability. Two special cases are given for the situation when singularity of the load-flow Jacobian implies singularity of the system dynamic-state Jacobian. The standard load-flow Jacobian can provide information about the existence of a steady-state equilibrium point for a specified level of loading or interchange. There are two very special cases when the determinant of the standard load-flow Jacobian implies something about the steady-state stability of a dynamic model. Both of these cases involve very drastic assumptions about the synchronous machines and their control systems. The load level which produces a singular load-flow Jacobian should be considered an optimistic upper bound on maximum loadability.
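A schematic way to see where a load-flow-type Jacobian enters the dynamic analysis, in generic notation that is not the paper's (the elimination step assumes the algebraic block D is invertible): linearizing the differential-algebraic model gives

```latex
\begin{bmatrix}\Delta\dot{x}\\ 0\end{bmatrix}
=
\begin{bmatrix} A & B\\ C & D\end{bmatrix}
\begin{bmatrix}\Delta x\\ \Delta y\end{bmatrix},
\qquad
A_{\mathrm{sys}} = A - B D^{-1} C,
\qquad
\det\!\begin{bmatrix} A & B\\ C & D\end{bmatrix} = \det(D)\,\det\!\bigl(A - B D^{-1} C\bigr),
```

so statements tying singularity of a load-flow-like algebraic block D to singularity of the dynamic-state Jacobian A_sys rest on additional modeling assumptions of the kind the paper spells out.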

295 citations


Journal ArticleDOI
TL;DR: Lower bounds and a dominance criterion are presented, a reduction algorithm is derived, and an experimental analysis of both the lower bounds and the reduction algorithm is provided.

283 citations


Journal ArticleDOI
TL;DR: It is proved here that every monotone circuit which tests $st$-connectivity of an undirected graph on n nodes has depth $\Omega(\log^2 n)$.
Abstract: It is proved here that every monotone circuit which tests $st$-connectivity of an undirected graph on n nodes has depth $\Omega(\log^2 n)$. This implies a superpolynomial $(n^{\Omega(\log n)})$ lower bound on the size of any monotone formula for $st$-connectivity. The proof draws intuition from a new characterization of circuit depth in terms of communication complexity. Within the same framework, a very simple and intuitive proof is given of a depth analogue of a theorem of Khrapchenko concerning formula size lower bounds.
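For context, the communication-complexity characterization of depth alluded to is usually quoted in the following form for monotone functions (a standard textbook statement, which may differ cosmetically from the paper's):

```latex
% Monotone depth equals the communication complexity of the relation R_f,
% in which one player holds x with f(x)=1, the other holds y with f(y)=0,
% and they must agree on a coordinate i with x_i = 1 and y_i = 0.
\mathrm{depth}_{\mathrm{mon}}(f) \;=\; \mathrm{CC}(R_f),
\qquad
R_f \;=\; \{(x,y,i) \;:\; f(x)=1,\ f(y)=0,\ x_i=1,\ y_i=0\}.
```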

280 citations


Journal ArticleDOI
TL;DR: The heuristic, developed to obtain an initial lower bound, finds an optimal solution for most of the random test problems; an extension to the basic problem that allows for preselected points, which may correspond to existing facility locations, is also considered.

267 citations


Proceedings ArticleDOI
22 Oct 1990
TL;DR: It is proved that any language in ACC can be approximately computed by two-level circuits of size 2^{(log n)^k}, with a symmetric-function gate at the top and only AND gates on the first level, giving the first nontrivial upper bound on the computing power of ACC circuits.
Abstract: It is proved that any language in ACC can be approximately computed by two-level circuits of size 2^{(log n)^k}, with a symmetric-function gate at the top and only AND gates on the first level. This implies that any language in ACC can be recognized by depth-3 threshold circuits of that size. This result gives the first nontrivial upper bound on the computing power of ACC circuits.

Journal ArticleDOI
Svante Janson
TL;DR: Upper and lower bounds are given for P(S ≤ k), 0 ≤ k ≤ ES, where S is a sum of indicator variables with a special structure, which appears, for example, in subgraph counts in random graphs.
Abstract: Upper and lower bounds are given for P(S ≤ k), 0 ≤ k ≤ ES, where S is a sum of indicator variables with a special structure, which appears, for example, in subgraph counts in random graphs. In typical cases, these bounds are close to the corresponding probabilities for a Poisson distribution with the same mean as S. There are no corresponding general bounds for P(S ≥ k), k > ES, but some partial results are given.

Journal ArticleDOI
TL;DR: Three distinct upper bounds on the size of an OOC are presented that, for many values of the parameter set (n, omega, lambda), improve upon the tightest previously known bound.
Abstract: A technique for constructing optimal OOCs (optical orthogonal codes) is presented. It provides the only known family of optimal (with respect to family size) OOCs having lambda = 2. The parameters (n, omega, lambda) are respectively (p^{2m} - 1, p^m + 1, 2), where p is any prime and the family size is p^m - 2. Three distinct upper bounds on the size of an OOC are presented that, for many values of the parameter set (n, omega, lambda), improve upon the tightest previously known bound.
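For orientation, bounds of this kind are usually measured against the classical Johnson-type bound on the maximum size Φ(n, ω, λ) of an (n, ω, λ) OOC; the form below is the one commonly quoted in the OOC literature and is given here for context only, not as one of the paper's three new bounds:

```latex
\Phi(n,\omega,\lambda) \;\le\;
\left\lfloor \frac{1}{\omega}
\left\lfloor \frac{n-1}{\omega-1}
\left\lfloor \frac{n-2}{\omega-2} \cdots
\left\lfloor \frac{n-\lambda}{\omega-\lambda} \right\rfloor \cdots
\right\rfloor \right\rfloor \right\rfloor .
```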

Journal ArticleDOI
TL;DR: The goal in this paper is to generalize the STS method and to study some of its basic properties; in particular, a lower bound is obtained for the expected length of the asymptotic (as the run size becomes large) STS confidence intervals.
Abstract: The method of standardized time series (STS) was proposed by Schruben as an approach for constructing asymptotic confidence intervals for the steady-state mean from a single simulation run. The STS method "cancels out" the variance constant while other methods attempt to consistently estimate the variance constant. Our goal in this paper is to generalize the STS method and to study some of its basic properties. Starting from a functional central limit theorem (FCLT) for the sample mean of the simulated process, a class of mappings of C[0,1] to ℝ is identified, each of which leads to a STS confidence interval. One of these mappings leads to the batch means method. A lower bound is obtained for the expected length of the asymptotic (as the run size becomes large) STS confidence intervals. This lower bound is not attained, but can be approached arbitrarily closely, by STS confidence intervals. Methods that consistently estimate the variance constant do realize this lower bound. The variance of the length of a STS confidence interval is of larger order in the run length than is that for the regenerative method.
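The abstract notes that one of the mappings recovers the batch means method; as a point of reference, here is a minimal sketch (not from the paper) of that classical construction, with the number of batches and the confidence level chosen arbitrarily for illustration:

```python
# Batch means: split one long run into b batches, treat the batch means as
# approximately i.i.d. normal, and form a t-interval for the steady-state mean.
import numpy as np
from scipy import stats

def batch_means_ci(x, n_batches=20, alpha=0.05):
    x = np.asarray(x, dtype=float)
    m = len(x) // n_batches                      # batch size (remainder truncated)
    batches = x[: m * n_batches].reshape(n_batches, m).mean(axis=1)
    center = batches.mean()
    half = stats.t.ppf(1 - alpha / 2, n_batches - 1) * batches.std(ddof=1) / np.sqrt(n_batches)
    return center - half, center + half

# Example on a correlated AR(1)-like sequence.
rng = np.random.default_rng(1)
x = np.empty(100_000); x[0] = 0.0
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.normal()
print(batch_means_ci(x))
```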

Journal ArticleDOI
TL;DR: The main result of this paper is showing that the class of polynomial threshold functions is strictly contained in the class of Boolean functions that can be computed by a depth-2, unbounded fan-in, polynomial-size circuit of linear threshold gates.
Abstract: The analysis of linear threshold Boolean functions has recently attracted the attention of those interested in circuit complexity as well as of those interested in neural networks. Here a generalization of linear threshold functions is defined, namely, polynomial threshold functions, and its relation to the class of linear threshold functions is investigated. A Boolean function is polynomial threshold if it can be represented as a sign function of a polynomial that consists of a polynomial (in the number of variables) number of terms. The main result of this paper is showing that the class of polynomial threshold functions (which is called PT1) is strictly contained in the class of Boolean functions that can be computed by a depth 2, unbounded fan-in, polynomial size circuit of linear threshold gates (which is called LT2). Harmonic analysis of Boolean functions is used to derive a necessary and sufficient condition for a function to be an S-threshold function for a given set S of monomials. This condition is used to show that the number of different S-threshold functions, for a given S, is at most 2^{(n+1)|S|}. Based on the necessary and sufficient condition, a lower bound is derived on the number of terms in a threshold function. The lower bound is expressed in terms of the spectral representation of a Boolean function. It is found that Boolean functions having an exponentially small spectrum are not polynomial threshold. A family of functions is exhibited that has an exponentially small spectrum; they are called "semibent" functions. A function is constructed that is both semibent and symmetric to prove that PT1 is properly contained in LT2.
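In symbols, and paraphrasing the abstract's definition (the notation is illustrative, not the paper's): a Boolean function f on {−1, 1}^n is a polynomial threshold function if there are real weights w_s and a set S of monomials, with |S| polynomial in n, such that

```latex
f(x) \;=\; \operatorname{sgn}\Bigl(\sum_{s \in S} w_s \prod_{i \in s} x_i\Bigr),
\qquad x \in \{-1,1\}^n,\quad |S| = n^{O(1)}.
```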

Journal ArticleDOI
TL;DR: In this paper, the results of numerical simulations of the lattice Boltzmann equation in three-dimensional porous geometries constructed by the random positioning of penetrable spheres of equal radii are presented.
Abstract: The results of numerical simulations of the lattice‐Boltzmann equation in three‐dimensional porous geometries constructed by the random positioning of penetrable spheres of equal radii are presented. Numerical calculations of the permeability are compared with previously established rigorous variational upper bounds. The numerical calculations approach the variational bounds from below at low solid fractions and are always within one order of magnitude of the best upper bound at high solid fractions ranging up to 0.98. At solid fractions less than 0.2 the calculated permeabilities compare well with the predictions of Brinkman’s effective‐medium theory, whereas at higher solid fractions a good fit is obtained with a Kozeny–Carman equation.
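For reference, a commonly used form of the Kozeny–Carman relation for a bed of spheres of diameter d at porosity ε is given below; the constant 180 is the usual empirical choice for sphere packs, and the paper's fitted form and constant may differ:

```latex
k \;=\; \frac{\varepsilon^{3}\, d^{2}}{180\,(1-\varepsilon)^{2}} .
```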

Journal ArticleDOI
TL;DR: An upper bound is proved for the L^p norm of Woodward's ambiguity function in radar signal analysis and of the Wigner distribution in quantum mechanics when p > 2; equality is achieved in the L^p bounds if and only if the functions f and g that enter the definition are both Gaussians.
Abstract: An upper bound is proved for the L^p norm of Woodward's ambiguity function in radar signal analysis and of the Wigner distribution in quantum mechanics when p > 2. A lower bound is proved for 1 ≤ p < 2. In addition, a lower bound is proved for the entropy. These bounds set limits to the sharpness of the peaking of the ambiguity function or Wigner distribution. The bounds are best possible and equality is achieved in the L^p bounds if and only if the functions f and g that enter the definition are both Gaussians.

Journal ArticleDOI
TL;DR: An upper bound is given for the posterior probability of a measurable set A when the prior lies in a class of probability measures P; the bound is a rational function of two Choquet integrals and, when P is weakly compact and closed with respect to majorization, it is sharp if and only if the upper prior probability is 2-alternating.
Abstract: We give an upper bound for the posterior probability of a measurable set $A$ when the prior lies in a class of probability measures $\mathscr{P}$. The bound is a rational function of two Choquet integrals. If $\mathscr{P}$ is weakly compact and is closed with respect to majorization, then the bound is sharp if and only if the upper prior probability is 2-alternating. The result is used to compute bounds for several sets of priors used in robust Bayesian inference. The result may be regarded as a characterization of 2-alternating Choquet capacities.

Journal ArticleDOI
TL;DR: Some criteria for obtaining lower bounds for the formula size of Boolean functions are presented, and the bound n^{Ω(log n)} for the function "MINIMUM COVER" is obtained using methods considerably simpler than all previously known.
Abstract: We present some criteria for obtaining lower bounds for the formula size of Boolean functions. In the monotone case we get the bound n^{Ω(log n)} for the function "MINIMUM COVER" using methods considerably simpler than all previously known. In the general case we are only able to prove that the criteria yield an exponential lower bound when applied to almost all functions. Some connections with graph complexity and communication complexity are also given.

Journal ArticleDOI
TL;DR: Lower bounds on the complexity of orthogonal range searching in the static case are established, along with a lower bound on the time required for executing inserts and queries.
Abstract: Lower bounds on the complexity of orthogonal range searching in the static case are established. Specifically, we consider the following dominance search problem: Given a collection of n weighted points in d-space and a query point q, compute the cumulative weight of the points dominated (in all coordinates) by q. It is assumed that the weights are chosen in a commutative semigroup and that the query time measures only the number of arithmetic operations needed to compute the answer. It is proved that if m units of storage are available, then the query time is at least proportional to (log n/log(2m/n))^{d-1} in both the worst and average cases. This lower bound is provably tight for m = Ω(n(log n)^{d-1+ε}) and any fixed ε > 0. A lower bound of Ω(n(log n/log log n)^d) on the time required for executing n inserts and queries is also established.

Proceedings ArticleDOI
01 Jan 1990
TL;DR: In this paper, the authors examined the expected complexity of boundary problems on a set of n points in K-space, where the points are chosen from a probability distribution in which each component of a point is chosen independently of all other components.
Abstract: This paper examines the expected complexity of boundary problems on a set of N points in K-space. We assume that the points are chosen from a probability distribution in which each component of a point is chosen independently of all other components. We present an algorithm to find the maximal points using KN + O(N^{1-1/K} log^{1/K} N) expected scalar comparisons, for fixed K ≥ 2. A lower bound shows that the algorithm is optimal in the leading term. We describe a simple maxima algorithm that is easy to code, and present experimental evidence that it has similar running time. For fixed K ≥ 2, an algorithm computes the convex hull of the set in 2KN + O(N^{1-1/K} log^{1/K} N) expected scalar comparisons. The history of the algorithms exhibits interesting interactions among consulting, algorithm design, data analysis, and mathematical analysis of algorithms.
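A minimal sketch in the spirit of the "simple maxima algorithm that is easy to code" mentioned in the abstract; this is a generic version written for illustration, not the paper's exact algorithm or its KN + O(...) variant:

```python
# A point p is maximal if no other point dominates it in every coordinate.
# Processing points in decreasing order of coordinate sum means a later point
# can never dominate an earlier one, so a single pass against the current
# maxima list is enough.
import numpy as np

def dominates(a, b):
    """True if a >= b componentwise with at least one strict inequality."""
    return np.all(a >= b) and np.any(a > b)

def maxima(points):
    pts = sorted((tuple(p) for p in points), key=sum, reverse=True)
    result = []
    for p in map(np.array, pts):
        if not any(dominates(q, p) for q in result):
            result.append(p)
    return result

rng = np.random.default_rng(2)
pts = rng.random((1000, 3))          # N points in K = 3 dimensions
print(len(maxima(pts)), "maximal points out of", len(pts))
```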

Journal ArticleDOI
01 Feb 1990
TL;DR: In this paper, upper and lower bounds are obtained for Pr(Σ ε_n x_n > t), where x = (x_n) ∈ l_2 is a sequence of real numbers, and the answer is expressed in terms of the K-interpolation norm from the theory of interpolation of Banach spaces.
Abstract: We find upper and lower bounds for Pr(Σ ε_n x_n > t), where x_1, x_2, ... are real numbers. We express the answer in terms of the K-interpolation norm from the theory of interpolation of Banach spaces. INTRODUCTION: Throughout this paper, we let ε_1, ε_2, ... be independent Bernoulli random variables (that is, Pr(ε_n = 1) = Pr(ε_n = -1) = 1/2). We are going to look for upper and lower bounds for Pr(Σ ε_n x_n > t), where x_1, x_2, ... is a sequence of real numbers such that x = (x_n) ∈ l_2. Our first upper bound is well known (see, for example, Chapter II, §59 of [5]): Pr(Σ ε_n x_n > t ||x||_2) ≤ e^{-t^2/2}. To look for lower bounds, we might first consider using some version of the central limit theorem (for example, Theorem 7.1.4 of [2]).

Proceedings ArticleDOI
01 Apr 1990
TL;DR: The above quantitative form of Steinitz's theorem gives a notion of efficiency for closure grasps by an m-fingered robot hand, and some efficient algorithms are presented for these problems, especially in the two-dimensional case.
Abstract: We prove the following quantitative form of a classical theorem of Steinitz: Let m be sufficiently large. If the convex hull of a subset S of Euclidean d-space contains a unit ball centered on the origin, then there is a subset of S with at most m points whose convex hull contains a solid ball also centered on the origin and having residual radius $$1 - 3d\left( {\frac{{2d^2 }}{m}} \right)^{2/(d - 1)} .$$ The case m = 2d was first considered by Barany et al. [1]. We also show an upper bound on the achievable radius: the residual radius must be less than $$1 - \frac{1}{{17}}\left( {\frac{{2d^2 }}{m}} \right)^{2/(d - 1)} .$$ These results have applications in the problem of computing the so-called closure grasps by an m-fingered robot hand. The above quantitative form of Steinitz's theorem gives a notion of efficiency for closure grasps. The theorem also gives rise to some new problems in computational geometry. We present some efficient algorithms for these problems, especially in the two-dimensional case.

Journal ArticleDOI
TL;DR: It is shown that the message complexity of broadcast depends on the exact complexity measure; in particular, if one counts messages of bounded length, then broadcast requires Θ(|E|) messages.
Abstract: This paper concerns the message complexity of broadcast in arbitrary point-to-point communication networks. Broadcast is a task initiated by a single processor that wishes to convey a message to all processors in the network. The widely accepted model of communication networks, in which each processor initially knows the identity of its neighbors but does not know the entire network topology, is assumed. Although it seems obvious that the number of messages required for broadcast in this model equals the number of links, no proof of this basic fact has been given before. It is shown that the message complexity of broadcast depends on the exact complexity measure. If messages of unbounded length are counted at unit cost, then broadcast requires Θ(|V|) messages, where V is the set of processors in the network. It is proved that, if one counts messages of bounded length, then broadcast requires Θ(|E|) messages, where E is the set of edges in the network. Assuming an intermediate model in which each vertex knows the topology of the network in radius r ≥ 1 from itself, matching upper and lower bounds of Θ(min{|E|, |V|^{1+Θ(1)/r}}) are proved on the number of messages of bounded length required for broadcast. Both the upper and lower bounds hold for both synchronous and asynchronous network models. The same results hold for the construction of spanning trees, and various other global tasks.

Journal ArticleDOI
TL;DR: A lower bound on the number of processors and finish time for the problem of scheduling precedence graphs with communication costs is presented, and a derivation of the minimum time increase over the earliest completion time is proposed.
Abstract: A lower bound on the number of processors and finish time for the problem of scheduling precedence graphs with communication costs is presented. The notion of the earliest starting time of a task is formulated for the context of lower bounds. A lower bound on the completion time is proposed. A task delay which does not increase the earliest completion time of a schedule is defined. Each task can then be scheduled within a time interval without affecting the lower bound performance on the finish time. This leads to definition of a new lower bound on the number of processors required to process the task graph. A derivation of the minimum time increase over the earliest completion time is also proposed for the case of a smaller number of processors. A lower bound on the minimum number of interprocessor communication links required to achieve optimum performance is proposed. Evaluation was carried out using a set of 360 small graphs. The bound on the finish time deviates at most by 5% from the optimum solution in 96% of the cases and performs well with respect to the minimum number of processors and communication links.
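As a point of reference, the crudest lower bound of this kind is the critical-path length of the task graph with communication costs and processor limits ignored; the sketch below is a generic illustration with made-up task data, not the paper's (sharper) bound:

```python
# Longest path through the precedence graph = earliest possible finish time
# with unlimited processors, hence a lower bound on any schedule's finish time.
from functools import lru_cache

tasks = {"a": 2, "b": 3, "c": 2, "d": 4}                    # task -> processing time
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}  # precedence constraints

@lru_cache(maxsize=None)
def earliest_finish(t):
    """Earliest completion time of task t, ignoring resource limits."""
    return tasks[t] + max((earliest_finish(p) for p in preds[t]), default=0)

finish_time_lb = max(earliest_finish(t) for t in tasks)
print(finish_time_lb)   # 9 = 2 + 3 + 4 along the path a -> b -> d
```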

Journal ArticleDOI
01 Oct 1990-Networks
TL;DR: In this article, a branch-and-bound algorithm for the time-dependent traveling salesman problem is presented, based on a lower bounding scheme and computationally tested on an instance of the problem known as the traveling deliveryman problem.
Abstract: We consider a scheme to derive lower bounds for the time-dependent traveling salesman problem. It involves splitting lower bounds into a number of components and optimizing each of these components. The lower bounds thus derived are shown to be at least as sharp as the ones previously suggested for the problem. We describe a branch-and-bound algorithm based on our lower bounding scheme and computationally test it for an instance of the problem known as the traveling deliveryman problem.

Journal ArticleDOI
TL;DR: In this article, a nonlinear observer for state observation of a manipulator with three revolute elastic joints is proposed, which asymptotically reconstructs all the robot state variables.
Abstract: The problem of state observation of robots that have elastic joints is discussed. Outputs are assumed to be the global link coordinates and their time derivatives, and a nonlinear observer is proposed which asymptotically reconstructs all the robot state variables. The dynamic behaviour of the observation algorithm is illustrated by simulation tests referred to a manipulator with three revolute elastic joints. To verify the observer robustness, the previous simulation tests were repeated by using the same observer, designed for a nominal payload of 5 kg, and actual robot payloads of 0 and 10 kg. The differences with respect to the nominal case are not appreciable. However, the steady-state joint errors, which were about 10^{-6} rad in the nominal case, became about 10^{-4} rad.

Journal ArticleDOI
TL;DR: In stability problems, the failure probability associated with the critical slip surface is known to be smaller than that for the system that comprises all potential slip surfaces. The difference depends on the correlation between the failure probabilities of different slip surfaces. Calculations of the upper bound of the probability of system failure were made for the Congress Street cut in Chicago, as mentioned in this paper.
Abstract: In stability problems, the failure probability associated with the critical slip surface is known to be smaller than that for the system that comprises all potential slip surfaces. The difference depends on the correlation between the failure probabilities of the different slip surfaces. Calculations of the upper bound of the probability of system failure were made for the Congress Street cut, in Chicago. The computed upper bound of the probability of system failure is about twice the failure probability for the critical slip surface. The failure probabilities for the slip surfaces that pass through the two clay layers near the bottom of the cut have about the same value. Because of the assumption that the strengths of these layers are statistically independent, the safety factor of a slip surface that passes through the upper clay layer but not the lower one is poorly correlated with that of a slip surface that passes through both layers. This leads to a large difference between the failure probability of the system and that for the critical slip surface.
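For context, the elementary first-order system-reliability bounds that such calculations are usually framed around are stated below; this is a generic statement, not the specific bounds computed in the paper. For failure events F_1, ..., F_m of the individual slip surfaces,

```latex
\max_{i} P(F_i) \;\le\; P\Bigl(\bigcup_{i=1}^{m} F_i\Bigr) \;\le\; \sum_{i=1}^{m} P(F_i),
```

where the lower end is attained when the failure modes are perfectly correlated and the upper end when they are mutually exclusive, which is why the correlation structure between slip surfaces controls how far the system failure probability exceeds that of the critical surface.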

Journal ArticleDOI
TL;DR: It is shown that if a two-way probabilistic finite-state automaton (2pfa) M recognizes a nonregular language L with error probability bounded below $\frac{1}{2}$, then there is a positive constant b such that, for infinitely many inputs x, the expected running time of M on input x must exceed $2^{n^{b}}$ where n is the length of x.
Abstract: It is shown that if a two-way probabilistic finite-state automaton (2pfa) M recognizes a nonregular language L with error probability bounded below $\frac{1}{2}$, then there is a positive constant b (depending on M) such that, for infinitely many inputs x, the expected running time of M on input x must exceed $2^{n^{b}}$ where n is the length of x. This complements a result of Freivalds showing that 2pfa’s can recognize certain nonregular languages in exponential expected time. It also establishes a time complexity gap for 2pfa’s, since any regular language can be recognized by some 2pfa in linear time. Other results give roughly exponential upper and lower bounds on the worst-case increase in the number of states when converting a polynomial-time 2pfa to an equivalent two-way nondeterministic finite-state automaton or to an equivalent one-way deterministic finite-state automaton.

Journal ArticleDOI
TL;DR: It is shown that the inequalities are "strong" in the sense that, in one special case, they suffice to describe the convex hull of solutions; for other models, violated inequalities can be obtained by constraint aggregation followed by the above separation procedure.