
Showing papers on "Upper and lower bounds" published in 1979


Journal ArticleDOI
Allen Gersho
TL;DR: A heuristic argument is given that generalizes Bennett's formula to block quantization, where a vector of random variables is quantized, leading to a rigorous method for obtaining upper bounds on the minimum distortion for block quantizers.
Abstract: In 1948 W. R. Bennett used a companding model for nonuniform quantization and proposed the formula D = \frac{1}{12N^{2}} \int p(x)\,[E'(x)]^{-2}\,dx for the mean-square quantizing error, where N is the number of levels, p(x) is the probability density of the input, and E'(x) is the slope of the compressor curve. The formula, an approximation based on the assumption that the number of levels is large and overload distortion is negligible, is a useful tool for analytical studies of quantization. This paper gives a heuristic argument generalizing Bennett's formula to block quantization where a vector of random variables is quantized. The approach is again based on the asymptotic situation where N, the number of quantized output vectors, is very large. Using the resulting heuristic formula, an optimization is performed leading to an expression for the minimum quantizing noise attainable for any block quantizer of a given block size k. The results are consistent with Zador's results and specialize to known results for the one- and two-dimensional cases and for the case of infinite block length (k \rightarrow \infty). The same heuristic approach also gives an alternate derivation of a bound of Elias for multidimensional quantization. Our approach leads to a rigorous method for obtaining upper bounds on the minimum distortion for block quantizers. In particular, for k = 3 we give a tight upper bound that may in fact be exact. The idea of representing a block quantizer by a block "compressor" mapping followed with an optimal quantizer for uniformly distributed random vectors is also explored. It is not always possible to represent an optimal quantizer with this block companding model.
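As a quick illustration of Bennett's high-resolution formula (a minimal numerical sketch, not from the paper): for a uniform input on [0, 1) and the identity compressor E(x) = x, the integral collapses to exactly 1/(12N²), which a Monte Carlo estimate of a uniform N-level quantizer's mean-square error should reproduce.

```python
# Minimal sketch (assumptions: uniform input on [0, 1), identity
# compressor E(x) = x), checking Bennett's approximation
# D ≈ (1/12N²) ∫ p(x) [E'(x)]⁻² dx, which here equals 1/(12N²).
import numpy as np

N = 64                                  # number of quantizer levels
rng = np.random.default_rng(0)
x = rng.random(1_000_000)               # samples from p(x) = 1 on [0, 1)

q = (np.floor(x * N) + 0.5) / N         # uniform quantizer, cell midpoints
mse = np.mean((x - q) ** 2)

print(f"empirical D  = {mse:.3e}")
print(f"Bennett's D  = {1 / (12 * N**2):.3e}")
```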

936 citations


Journal ArticleDOI
01 Jan 1979
TL;DR: In this article, a simple geometrical interpretation of the correlation between mode safety margins, combined with a well-known geometrical interpretation of the single mode reliability index, makes the practical calculation of the system reliability bounds easy, particularly when the set of basic variables is jointly normally distributed.
Abstract: For structural systems that may fail in any one of several possible modes, reliability analysis is greatly simplified by use of upper and lower bound techniques. General bounds based on all the single mode failure probabilities and all the pairwise mode intersection failure probabilities are established. For systems where the single mode limit state surfaces are hyperplanes in the space of basic variables, a simple geometrical interpretation of the correlation between mode safety margins combined with a well-known geometrical interpretation of the single mode reliability index makes the practical calculation of the system reliability bounds easy. This is particularly true when the set of basic variables is jointly normally distributed. Examples show very narrow bounds which, in the practically important domain of high reliability, are almost coincident.
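A minimal sketch of second-order bounds of the kind described here (often quoted as the Ditlevsen bounds; all the probabilities below are invented for illustration): given single-mode failure probabilities p_i sorted in decreasing order, and pairwise intersection probabilities p_ij, the series-system bounds can be accumulated mode by mode.

```python
# Hedged sketch of pairwise (second-order) series-system bounds of the
# kind described above; the numbers are made up for illustration.
def series_system_bounds(p, p2):
    """p[i]: single-mode failure probabilities (largest first);
    p2[i][j]: pairwise intersection probabilities, j < i."""
    lower = upper = p[0]
    for i in range(1, len(p)):
        lower += max(0.0, p[i] - sum(p2[i][j] for j in range(i)))
        upper += p[i] - max(p2[i][j] for j in range(i))
    return lower, upper

p = [1e-3, 8e-4, 5e-4]                       # hypothetical mode probabilities
p2 = {1: {0: 2e-4}, 2: {0: 1e-4, 1: 5e-5}}   # hypothetical intersections
lo, hi = series_system_bounds(p, p2)
print(f"{lo:.2e} <= P(system failure) <= {hi:.2e}")   # 1.95e-03 ... 2.00e-03
```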

876 citations


Journal ArticleDOI
TL;DR: Average depolarization factors are derived for various orientational distribution functions, demonstrating the effects of different mechanisms for reorientation of the luminophores, and it is shown that in general the static averaging regime does not lend itself to determinations of R.

720 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the requirement that no interaction becomes strong and no vacuum instability develops up to the unification energy implies upper and lower bounds to the fermion masses as well as the Higgs boson mass.

507 citations


Proceedings ArticleDOI
30 Apr 1979
TL;DR: The complexity of the Discrete Fourier Transform is studied with respect to a new model of computation appropriate to VLSI technology, which focuses on two key parameters, the amount of silicon area and time required to implement a DFT on a single chip.
Abstract: The complexity of the Discrete Fourier Transform (DFT) is studied with respect to a new model of computation appropriate to VLSI technology. This model focuses on two key parameters, the amount of silicon area and time required to implement a DFT on a single chip. Lower bounds on area (A) and time (T) are related to the number of points (N) in the DFT: AT^2 ≥ N^2/16. This inequality holds for any chip design based on any algorithm, and is nearly tight when T = Θ(N^{1/2}) or T = Θ(log N). A more general lower bound is also derived: AT^x = Ω(N^{1+x/2}), for 0 ≤ x ≤ 2.
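To see what the tradeoff means in practice (a toy calculation, with units left abstract and the time budgets chosen arbitrarily): fixing the number of DFT points and a time budget forces a minimum chip area.

```python
# Toy calculation (units abstract): with the bound AT² ≥ N²/16,
# fixing N and a time budget T forces a minimum silicon area.
def min_area(n_points, t):
    return n_points**2 / (16 * t**2)

for t in (32, 64, 128):                 # hypothetical time budgets
    print(f"N=1024, T={t}: A >= {min_area(1024, t):.0f} area units")
```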

441 citations


Journal ArticleDOI
TL;DR: In this article, an upper bound on expected average interconnection length, based on partitioning results, is given for linear and square arrays of gates, which gives significantly lower interconnection lengths than the bound based upon random placement.
Abstract: The length of the interconnections for a placement of logic gates is an important variable in the estimation of wiring space requirements, delay values, and power dissipation. A formula for an upper bound on expected average interconnection length, based on partitioning results, is given for linear and square arrays of gates. This upper bound gives significantly lower interconnection lengths than the bound based upon random placement. Actual placements give average interconnection lengths of about half the upper bound given by theory.

416 citations


Journal ArticleDOI
TL;DR: It is shown that the existence of a strict local minimum satisfying the constraint qualification of [16] or McCormick's second order sufficient optimality condition implies the existence of a class of exact local penalty functions (that is, ones with a finite value of the penalty parameter) for a nonlinear programming problem.
Abstract: It is shown that the existence of a strict local minimum satisfying the constraint qualification of [16] or McCormick's [12] second order sufficient optimality condition implies the existence of a class of exact local penalty functions (that is, ones with a finite value of the penalty parameter) for a nonlinear programming problem. A lower bound to the penalty parameter is given by a norm of the optimal Lagrange multipliers which is dual to the norm used in the penalty function.
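A toy instance of the multiplier threshold (invented for illustration, not from the paper): for min x² subject to x ≥ 1, the optimum is x* = 1 with Lagrange multiplier λ* = 2, so an l1 penalty is exact precisely when its parameter exceeds 2.

```python
# Toy instance: min x² s.t. x ≥ 1 has optimum x* = 1 with multiplier
# λ* = 2, so the l1 penalty P(x) = x² + c·max(0, 1 − x) is exact
# precisely when c > 2.
import numpy as np

xs = np.linspace(-1.0, 3.0, 40001)
for c in (1.0, 3.0):                    # one value below, one above λ*
    penalty = xs**2 + c * np.maximum(0.0, 1.0 - xs)
    print(f"c = {c}: penalty minimized near x = {xs[np.argmin(penalty)]:.3f}")
# c = 1.0 gives x ≈ 0.5 (infeasible); c = 3.0 recovers the optimum x ≈ 1.0.
```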

324 citations


Journal ArticleDOI
TL;DR: In this paper, Delsarte's linear programming bound is compared with Lovász's θ-function bound (an upper bound on the Shannon capacity of a graph).
Abstract: Delsarte's linear programming bound (an upper bound on the cardinality of cliques in association schemes) is compared with Lovász's θ-function bound (an upper bound on the Shannon capacity of a graph). The two bounds can be treated in a uniform fashion. Delsarte's linear programming bound can be generalized to a bound θ'(G) on the independence number α(G) of an arbitrary graph G, such that θ'(G) ≤ θ(G). On the other hand, if the edge set of G is a union of classes of a symmetric association scheme, θ(G) may be calculated by linear programming. For such graphs the product θ(G)·θ(Ḡ), where Ḡ is the complement of G, is equal to the number of vertices of G.
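For a concrete feel for θ (a side sketch using the closed form for odd cycles from Lovász's 1979 paper, not a computation from this one): the pentagon C₅ is self-complementary, so θ(C₅)·θ(C̄₅) = √5·√5 = 5, matching the number of vertices.

```python
# Side sketch: for odd cycles Lovász's number has the closed form
# θ(C_n) = n·cos(π/n) / (1 + cos(π/n)); for the pentagon this gives
# θ(C₅) = √5, and θ(C₅)·θ(C̄₅) = 5, the number of vertices.
import math

def theta_odd_cycle(n):
    c = math.cos(math.pi / n)
    return n * c / (1 + c)

print(theta_odd_cycle(5), math.sqrt(5))   # both ≈ 2.2360679...
```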

249 citations


Journal ArticleDOI
TL;DR: A class of functions is given, where the choice of two parameters results in good upper bounds, lower bounds and approximations to Q(x), suitable for programmable pocket calculators.
Abstract: Simple analytical upper bounds, approximations and lower bounds on the error function Q(x) are presented and analyzed. A class of functions is given, where the choice of two parameters results in good upper bounds, lower bounds and approximations to Q(x). The results are given in formulas, parameter tables and graphs. The approximations are suitable for programmable pocket calculators.
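For orientation, a minimal sketch using the classical parameter-free sandwich bounds on the Gaussian tail (not the paper's two-parameter family):

```python
# Classical bounds: for x > 0,
#   φ(x)·x/(1 + x²) < Q(x) < φ(x)/x,  with φ(x) = exp(−x²/2)/√(2π).
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))   # exact Gaussian tail

for x in (1.0, 2.0, 4.0):
    lo, hi = phi(x) * x / (1 + x * x), phi(x) / x
    print(f"x={x}: {lo:.4e} < Q(x)={q(x):.4e} < {hi:.4e}")
```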

235 citations


Book
01 Jan 1979
TL;DR: A lower bound on the theories of pairing functions and some additional lower bounds are given in this article, along with a technique for writing short formulas defining complicated properties.
Abstract: Contents: Introduction and background; Ehrenfeucht games and decision procedures; Integer addition, an example of an Ehrenfeucht game decision procedure; Some additional upper bounds; Direct products of theories; Lower bound preliminaries; A technique for writing short formulas defining complicated properties; A lower bound on the theories of pairing functions; Some additional lower bounds.

222 citations


Proceedings ArticleDOI
29 Oct 1979
TL;DR: Several new classes of hash functions with certain desirable properties are exhibited, and two novel applications for hashing which make use of these functions are introduced, including a provably secure authentication technique for sending messages over insecure lines.
Abstract: In this paper we exhibit several new classes of hash functions with certain desirable properties, and introduce two novel applications for hashing which make use of these functions. One class of functions is small, yet is almost universal_2. If the functions hash n-bit long names into m-bit indices, then specifying a member of the class requires only O((m + log_2 log_2(n)) log_2(n)) bits as compared to O(n) bits for earlier techniques. For long names, this is about a factor of m larger than the lower bound of m + log_2(n) − log_2(m) bits. An application of this class is a provably secure authentication technique for sending messages over insecure lines. A second class of functions satisfies a much stronger property than universal_2. We present the application of testing sets for equality. The authentication technique allows the receiver to be certain that a message is genuine. An 'enemy', even one with infinite computer resources, cannot forge or modify a message without detection. The set equality technique allows the operations 'add member to set', 'delete member from set' and 'test two sets for equality' to be performed in expected constant time and with less than a specified probability of error.
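A minimal sketch of a universal_2 family in the Carter–Wegman style (illustrative of the kind of class studied; the prime and the specific construction here are assumptions, not the paper's):

```python
# Sketch of a universal_2 family: h(x) = ((a·x + b) mod P) mod m.
import random

P = (1 << 61) - 1                # a Mersenne prime exceeding all keys used

def make_hash(m, rng=random):
    """Draw one member h of the family at random."""
    a = rng.randrange(1, P)      # a ≠ 0
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = make_hash(m=1024)
print(h(42), h(43))              # any fixed pair collides w.p. about 1/m
```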

Journal ArticleDOI
TL;DR: In this article, a new technique for determining the terminal reliability of probabilistic networks is derived and discussed, which uses set-theoretic concepts to partition the space of all graph realizations in a way which permits extremely fast evaluation of the source-to-terminal probability.
Abstract: A new technique for determining the terminal reliability of probabilistic networks is derived and discussed. The technique uses set-theoretic concepts to partition the space of all graph realizations in a way which permits extremely fast evaluation of the source-to-terminal probability. If not allowed to run to completion, the algorithm yields rapidly converging upper and lower bounds on that probability. Comparison with algorithms in the recent literature shows a decrease of one or two orders of magnitude in required CPU time.
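A toy illustration of the bounding idea (exhaustive state enumeration on a 4-node example, not the paper's partitioning scheme): each processed state moves probability mass into "known connected" or "known disconnected", so stopping early leaves valid lower and upper bounds on the source-to-terminal probability.

```python
from itertools import product

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # small example network
p_up = 0.9                                          # per-edge availability

def connected(up_edges, s=0, t=3):
    """Depth-first search over working edges."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for a, b in up_edges:
            if a == u and b not in seen:
                seen.add(b); stack.append(b)
            elif b == u and a not in seen:
                seen.add(a); stack.append(a)
    return t in seen

lower, dis = 0.0, 0.0
for state in product((True, False), repeat=len(edges)):
    prob = 1.0
    for up in state:
        prob *= p_up if up else 1.0 - p_up
    if connected([e for e, up in zip(edges, state) if up]):
        lower += prob                               # mass known connected
    else:
        dis += prob                                 # mass known disconnected
print(f"{lower:.4f} <= R <= {1.0 - dis:.4f}")       # coincide when complete
```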

Journal ArticleDOI
TL;DR: In this article, the authors consider search games in which the searcher moves along a continuous trajectory in a set Q until he captures the hider, where Q is either a network or a two-dimensional region.
Abstract: We consider search games in which the searcher moves along a continuous trajectory in a set Q until he captures the hider, where Q is either a network or a two (or more) dimensional region. We distinguish between two types of games: in the first type, considered in the first part of the paper, the hider is immobile, while in the second type, considered in the rest of the paper, the hider is mobile. A complete solution is presented for some of the games, while for others only upper and lower bounds are given, and some open problems associated with those games are presented for further research.

Journal ArticleDOI
TL;DR: A study of a class of multiple-user channels including general multiple-access channels with many correlated sources and many simultaneous receivers is presented, establishing a simple characterization of the capacity region on the basis of the polymatroidal structure of a set of (conditional) mutual informations.
Abstract: A study of a class of multiple-user channels including general multiple-access channels with many correlated sources and many simultaneous receivers is presented. The main result is summarized as Theorems 4.1 and 5.1, which establish a simple characterization of the capacity region on the basis of the polymatroidal structure of a set of (conditional) mutual informations. The results include, as special cases, the result of Slepian and Wolf (1973) as well as Ulrey's (1975). These may be regarded as further developments along the line shown by Ahlswede (1971) and Liao (1972). Furthermore, a finite upper bound for the cardinalities of the ranges of auxiliary variables is given in Theorems 4.2 and 5.2. Finally, the relation between the Slepian–Wolf formalism and ours is clarified.

Journal ArticleDOI
TL;DR: Asymptotically coincident upper and lower bounds on the exponent of the largest possible probability of the correct decoding of block codes are given for all rates above capacity.
Abstract: Asymptotically coincident upper and lower bounds on the exponent of the largest possible probability of the correct decoding of block codes are given for all rates above capacity. The lower bound sharpens Omura's bound. The upper bound is proved by a new and simple combinatorial argument.

Journal ArticleDOI
TL;DR: In this paper, an algorithm based on the use of probability weighted moments allows estimation of the parameters, hence quantiles, of the Wakeby distribution, for the case where the lower bound is not known.
Abstract: An algorithm based on the use of probability weighted moments allows estimation of the parameters, hence quantiles, of the Wakeby distribution. For the case where the lower bound is not known, the performance of the algorithm using unbiased estimates of the probability weighted moments is compared to that using biased estimates. The choice of estimating algorithm, determined as that which minimizes the root mean square error of the quantiles, appears to be unimportant when the upper (flood) quantiles are of interest and the lower bound is not known, in contrast to the lower (drought) quantiles.
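A minimal sketch of the unbiased probability-weighted-moment estimators that drive fits of this kind (the standard textbook form for b_r ≈ E[X·F(X)^r]; the data are made up, and this is the generic estimator, not the paper's Wakeby-fitting algorithm):

```python
import math

def pwm_unbiased(sample, r):
    """Unbiased estimator of the r-th probability weighted moment."""
    x = sorted(sample)                   # order statistics x_(1) <= ... <= x_(n)
    n = len(x)
    total = 0.0
    for i, xi in enumerate(x, start=1):
        w = math.prod((i - k) / (n - k) for k in range(1, r + 1))
        total += w * xi
    return total / n

data = [3.1, 0.4, 1.7, 2.2, 5.9, 1.1, 0.9, 4.3]   # made-up observations
print([round(pwm_unbiased(data, r), 3) for r in range(4)])  # b0 is the mean
```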

Journal ArticleDOI
TL;DR: In this article, the Efimov effect is demonstrated in a model consisting of two heavy particles and a light one when the light-heavy interaction leads to a zero-energy two-body bound state.

Journal ArticleDOI
01 Jan 1979
TL;DR: A fixed point theorem for nonexpansive mappings in dual Banach spaces is proved in this article, and applications in certain Banach lattices are given.
Abstract: A fixed point theorem for nonexpansive mappings in dual Banach spaces is proved. Applications in certain Banach lattices are given. 1. Suppose K is a subset of a Banach space X and T: K → K is a nonexpansive mapping, i.e. ‖T(x) − T(y)‖ ≤ ‖x − y‖ for x, y ∈ K. A well-known theorem due to Kirk [1] states that, if K is convex weakly compact (weak* compact when X is a dual space) and has normal structure, then T has a fixed point in K; in particular this holds if X = L^p (1 < p < ∞). Here and in the sequel ∨ and ∧ denote the least upper bound and the greatest lower bound respectively. X is said to be order complete if each set A ⊂ X with an upper bound has a least upper bound. A complex AM-space is defined as the complexification of an AM-space. Suppose X is an order complete AM-space with unit (i.e. an element e such that the unit ball at zero is the order interval [−e, e]); then X is isometrically lattice isomorphic to the space C_R(S) of all continuous real-valued functions defined on a compact Stonian space S. For these and other facts about Banach lattices we refer to Schaefer's book [4].

Journal ArticleDOI
TL;DR: A derivation of unified statistical theory is presented that emphasizes the dynamical and statistical assumptions that are the foundation of the theory; a lower bound on the reaction probability is derived that is significantly more accurate, for the H+H2 reaction, than either transition state theory or variational transition state theory.
Abstract: Miller's unified statistical theory for bimolecular chemical reactions is tested on the collinear H+H2 exchange reaction, treated classically. The reaction probability calculated from unified statistical theory is more accurate than that calculated from ordinary transition state theory or from variational transition state theory; in particular, unified statistical theory predicts the high-energy falloff of the reaction probability, which transition state theory does not. A derivation of unified statistical theory is presented that emphasizes the dynamical and statistical assumptions that are the foundation of the theory. We show how these assumptions unambiguously define the "collision complex" in unified statistical theory, and we test these assumptions in detail on the H+H2 reaction. Finally, a lower bound on the reaction probability is derived; this bound complements the upper bound provided by transition state theory and is significantly more accurate, for the H+H2 reaction, than either transition state theory or variational transition state theory.

Journal ArticleDOI
TL;DR: In this article, it was shown that 5/14 is the best possible lower bound for the independence ratio of graphs with maximum degree 3 containing no triangles; a graph of Fajtlowicz shows that this bound cannot be improved.
Abstract: If each of k, m, and n is a positive integer, there is a smallest positive integer r = r_k(m, n) with the property that each graph G with at least r vertices, and with maximum degree not exceeding k, has either a complete subgraph with m vertices, or an independent subgraph with n vertices. In this paper we determine r_3(3, n) = r(n), for all n. As a corollary we obtain the largest possible lower bound for the independence ratio of graphs with maximum degree three containing no triangles. From the work of Brooks [2] it follows that if G is a graph with maximum degree k containing no complete graph on k + 1 vertices, then the independence ratio of G is at least 1/k. In case G has no complete graph on k vertices, Albertson, Bollobás, and Tucker [1] proved this ratio is larger than 1/k, with only two exceptions. And they conjectured that for k = 3, with the additional assumption of planarity, this ratio is bounded away from 1/3. Fajtlowicz [3] verified their conjecture, even without assuming planarity, showing that each cubic graph without triangles has independence ratio at least 12/35. In addition, he displayed a graph in which the independence ratio is exactly 5/14. It follows from our main theorem that 5/14 is a lower bound for the independence ratio in the case k = 3, and in light of Fajtlowicz' graph, 5/14 is the best possible lower bound. In what follows, all graphs will be finite symmetric graphs with no loops and no multiple edges. If G is a graph, then v(G) and e(G) will be the numbers of vertices and edges of G. If M is a set of vertices of G, no two of which are joined by an edge, then M is called independent. The number of vertices in a largest independent vertex set in G will be denoted i(G). A cycle with n vertices will be denoted C_n. Proposition I. If G is a graph in which each vertex has degree two or degree three, and if v(G) is odd, then either there is a vertex of degree two both of whose neighbors are of degree two, or else there is a vertex of degree two both of whose neighbors are of degree three.
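To make the quantity concrete (a brute-force sketch on a small triangle-free example; C_7 has degree two, so it only illustrates the independence ratio itself, not the cubic case of the theorem):

```python
from itertools import combinations

def independence_number(n, edges):
    """Brute-force i(G) by testing candidate sets, largest first."""
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            s = set(subset)
            if all(a not in s or b not in s for a, b in edges):
                return size
    return 0

n = 7
edges = [(i, (i + 1) % n) for i in range(n)]  # the 7-cycle C7, triangle-free
alpha = independence_number(n, edges)
print(alpha, alpha / n, 5 / 14)               # 3, 0.4285..., 0.3571...
```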


Proceedings ArticleDOI
29 Oct 1979
TL;DR: It is demonstrated that any algorithm for sorting n inputs which is based on comparisons of individual inputs requires time-space product proportional to n², and uniform and non-uniform sorting algorithms are presented which show that this lower bound is nearly tight.
Abstract: A model of computation is introduced which permits the analysis of both the time and space requirements of non-oblivious programs. Using this model, it is demonstrated that any algorithm for sorting n inputs which is based on comparisons of individual inputs requires time-space product proportional to n². Uniform and non-uniform sorting algorithms are presented which show that this lower bound is nearly tight.

Journal ArticleDOI
Robert Erdahl
TL;DR: A theorem giving necessary and sufficient conditions for the optimum of the central optimization problem of the lower bound method of reduced density matrix theory is developed.

Proceedings ArticleDOI
30 Apr 1979
TL;DR: A positive answer is given by exhibiting a problem, the multivariate polynomial associated with perfect matchings in planar graphs, for which an exponential speedup can be attained using {+,−,×} rather than just {+,×} as operations.
Abstract: Among the most remarkable algorithms in algebra are Strassen's algorithm for the multiplication of matrices and the Fast Fourier Transform method for the convolution of vectors. For both of these problems the definition suggests an obvious algorithm that uses just the monotone operations + and ×. Schnorr [18] has shown that these algorithms, which use Θ(n³) and Θ(n²) operations respectively, are essentially optimal among algorithms that use only these monotone operations. By using subtraction as an additional operation and exploiting cancellations of computed terms in a very intricate way, Strassen showed that a faster algorithm requiring only O(n^{2.81}) operations is possible. The FFT method for convolution achieves O(n log n) complexity in a similar fashion. The question arises as to whether we can expect even greater gains in computational efficiency by such judicious use of cancellations. In this paper we give a positive answer to this, by exhibiting a problem for which an exponential speedup can be attained using {+,−,×} rather than just {+,×} as operations. The problem in question is the multivariate polynomial associated with perfect matchings in planar graphs. For this a fast algorithm is implicit in the Pfaffian technique of Fisher and Kasteleyn [6,8]. The main result we provide here is the exponential lower bound in the monotone case.
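A small reminder of the convolution side of the story (a sketch; numpy is assumed): the FFT route reproduces the obvious monotone O(n²) convolution exactly while doing O(n log n) work, exploiting cancellations in complex arithmetic.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

direct = np.convolve(a, b)                    # the monotone schoolbook method

m = len(a) + len(b) - 1                       # full output length (zero-padded)
via_fft = np.fft.irfft(np.fft.rfft(a, m) * np.fft.rfft(b, m), m)

print(direct)                                 # [ 4. 13. 28. 27. 18.]
print(np.round(via_fft, 10))                  # identical, via the FFT
```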

Journal ArticleDOI
TL;DR: An upper bound on the rate distortion function is obtained for source coding with partial side information at the decoder; previous results were for complete side information, i.e. full knowledge of Y_n.
Abstract: An upper bound on the rate distortion function is obtained for source coding with partial side information at the decoder. Previous results were for complete side information, i.e. full knowledge of Y_n. The bound is given in the paper, and a diagram given there helps to describe the problem.

Journal ArticleDOI
TL;DR: A method of deriving lower bounds for the constants c_k is given, the bounds obtained improving known lower bounds for k > 2, and the rate of decrease of c_k with k is shown to be no faster than 1/√k, contrasting with P{B_{1i} − B_{2i}} = 1/k.

Journal ArticleDOI
TL;DR: In this paper, the authors use pseudo-hyper-ellipsoids to describe the nonlinear behavior of the given inverse problem, and upper and lower bounds can be added to those parameters of which some independent knowledge is available.
Abstract: For constrained inversion of potential field data within the framework of generalized inversion, an analysis of data error variances leads to confidence limits for the model parameters. For that purpose pseudo-hyper-ellipsoids can be used to describe the nonlinear behaviour of the given inverse problem, and upper and lower bounds can be added to those parameters of which some independent knowledge is available. A gravity example is treated to show the application of the method.

Journal ArticleDOI
Sönke Albers
TL;DR: In this paper, a special purpose algorithm called PROPOSAS is proposed to solve the problem of optimal product positioning in an attribute space, which works under simplified assumptions: Euclidean metric, equally weighted dimensions of the attribute space and equal sales per customer.

Journal ArticleDOI
TL;DR: In this article, an upper bound for the mean value of a non-negative submultiplicative function by R. Hall's inequality is sharpened and generalised, and this aspect of Hall's result is exploited in deriving good lower bounds for π(x) via the sieve.

Journal ArticleDOI
TL;DR: In this paper, upper and lower bounds for the transmission coefficient of a chain of random masses were derived and it was shown that the heat conduction in such a chain does not obey Fourier's law.
Abstract: We find upper and lower bounds for the transmission coefficient of a chain of random masses. Using these bounds we show that the heat conduction in such a chain does not obey Fourier's law: for different temperatures at the ends of a chain containing N particles, the energy flux falls off like N^{−1/2} rather than N^{−1}.