
Showing papers on "Upper and lower bounds published in 1986"


Journal ArticleDOI
TL;DR: The results show that the proposed multiuser detectors afford important performance gains over conventional single-user systems, in which the signal constellation carries the entire burden of complexity required to achieve a given performance level.
Abstract: Consider a Gaussian multiple-access channel shared by K users who transmit asynchronously independent data streams by modulating a set of assigned signal waveforms. The uncoded probability of error achievable by optimum multiuser detectors is investigated. It is shown that the K-user maximum-likelihood sequence detector consists of a bank of single-user matched filters followed by a Viterbi algorithm whose complexity per binary decision is $O(2^K)$. The upper bound analysis of this detector follows an approach based on the decomposition of error sequences. The issues of convergence and tightness of the bounds are examined, and it is shown that the minimum multiuser error probability is equivalent in the low-noise region to that of a single-user system with reduced power. These results show that the proposed multiuser detectors afford important performance gains over conventional single-user systems, in which the signal constellation carries the entire burden of complexity required to achieve a given performance level.

2,300 citations
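The exponential complexity in K is easy to see in a stripped-down setting. The sketch below is illustrative code (not from the paper): it performs joint maximum-likelihood detection for a *synchronous* toy channel by exhaustively scoring all 2^K bit hypotheses; the paper's Viterbi formulation organizes the same O(2^K)-per-decision search over asynchronous symbol sequences.

```python
from itertools import product

def ml_multiuser_detect(y, S, A):
    """Jointly detect K antipodal bits b in {-1,+1}^K from a received
    vector y = S @ (A * b) + noise by exhaustive search over all 2^K
    hypotheses.  S holds the sampled signature waveforms (rows = samples,
    columns = users); A holds the received amplitudes."""
    K = len(A)
    best, best_metric = None, float("inf")
    for b in product((-1, 1), repeat=K):
        # noiseless hypothesis: sum_k A[k] * b[k] * s_k
        x = [sum(S[i][k] * A[k] * b[k] for k in range(K)) for i in range(len(y))]
        metric = sum((yi - xi) ** 2 for yi, xi in zip(y, x))
        if metric < best_metric:
            best, best_metric = b, metric
    return best
```

With non-orthogonal signatures this joint search outperforms per-user matched-filter decisions, which is the performance gain the abstract refers to.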


Journal ArticleDOI
TL;DR: Answering a question of Vera Sós, it is shown how Lovász’ lattice reduction can be used to find a point of a given lattice, nearest within a factor ofcd (c = const.) to a given point in Rd.
Abstract: Answering a question of Vera Sós, we show how Lovász' lattice reduction can be used to find a point of a given lattice, nearest within a factor of $c^d$ ($c$ = const.) to a given point in $R^d$. We prove that each of two straightforward fast heuristic procedures achieves this goal when applied to a lattice given by a Lovász-reduced basis. The verification of one of them requires proving a geometric feature of Lovász-reduced bases: a $c_1$ lower bound on the angle between any member of the basis and the hyperplane generated by the other members, where $c_1 = \sqrt{2/3}$. As an application, we obtain a solution to the nonhomogeneous simultaneous diophantine approximation problem, optimal within a factor of $C^d$. In another application, we improve the Grötschel-Lovász-Schrijver version of H. W. Lenstra's integer linear programming algorithm. The algorithms, when applied to rational input vectors, run in polynomial time.

1,030 citations
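One natural "straightforward fast heuristic" of the kind the abstract analyzes is plain rounding of the target's coordinates in the lattice basis. A minimal 2-D sketch (my own illustrative code, assuming the basis is already reduced; the approximation guarantee in the paper depends on that reduction):

```python
def babai_round(basis, target):
    """Approximate closest lattice vector in 2-D by rounding the exact
    coordinates of `target` in the given basis (rows are basis vectors).
    The answer is only provably near-optimal for a reduced basis."""
    (a, b), (c, d) = basis
    t0, t1 = target
    det = a * d - b * c
    # exact coordinates (x, y) with target = x*(a,b) + y*(c,d)
    x = (t0 * d - c * t1) / det
    y = (a * t1 - b * t0) / det
    xr, yr = round(x), round(y)
    return (xr * a + yr * c, xr * b + yr * d)
```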


Journal ArticleDOI
TL;DR: In this article, the stochastic complexity of a string of data, relative to a class of probabilistic models, is defined to be the fewest number of binary digits with which the data can be encoded by taking advantage of the selected models.
Abstract: As a modification of the notion of algorithmic complexity, the stochastic complexity of a string of data, relative to a class of probabilistic models, is defined to be the fewest number of binary digits with which the data can be encoded by taking advantage of the selected models. The computation of the stochastic complexity produces a model, which may be taken to incorporate all the statistical information in the data that can be extracted with the chosen model class. This model, for example, allows for optimal prediction, and its parameters are optimized both in their values and their number. A fundamental theorem is proved which gives a lower bound for the code length and, therefore, for prediction errors as well. Finally, the notions of "prior information" and the "useful information" in the data are defined in a new way, and a related construct gives a universal test statistic for hypothesis testing.

1,004 citations
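A concrete miniature of the idea, using the Bernoulli model class (illustrative code, not Rissanen's exact formula): the description length is the bits needed for the data under the maximum-likelihood parameter plus roughly (1/2) log2 n bits per parameter, which is the per-parameter cost appearing in the paper's lower bound.

```python
from math import log2

def two_part_code_length(bits):
    """Crude two-part code length of a binary string under the Bernoulli
    model class: -log2 of the maximized likelihood plus (1/2) log2 n bits
    to encode the single parameter.  A didactic approximation of the
    stochastic-complexity idea, not the exact quantity in the paper."""
    n = len(bits)
    k = sum(bits)
    if k in (0, n):
        data_bits = 0.0          # degenerate ML model is deterministic
    else:
        p = k / n
        data_bits = -(k * log2(p) + (n - k) * log2(1 - p))
    return data_bits + 0.5 * log2(n)
```

Comparing such code lengths across model classes is exactly the "optimized both in their values and their number" criterion described above.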


Proceedings ArticleDOI
01 Nov 1986
TL;DR: Improved lower bounds for the size of small depth circuits computing several functions are given, and it is shown that there are functions computable in polynomial size and depth k but requiring exponential size when the depth is restricted to k - 1.
Abstract: We give improved lower bounds for the size of small depth circuits computing several functions. In particular we prove almost optimal lower bounds for the size of parity circuits. Further we show that there are functions computable in polynomial size and depth k but requiring exponential size when the depth is restricted to k - 1. Our Main Lemma, which is of independent interest, states that by using a random restriction we can convert an AND of small ORs to an OR of small ANDs and conversely. Warning: Essentially this paper has been published in Advances for Computing and is hence subject to copyright restrictions. It is for personal use only.

667 citations


Journal ArticleDOI
TL;DR: In this paper, a variant of the Byzantine Generals problem is considered, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal.
Abstract: This paper considers a variant of the Byzantine Generals problem, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal. Algorithms are presented to reach approximate agreement in asynchronous, as well as synchronous, systems. The asynchronous agreement algorithm is an interesting contrast to a result of Fischer et al., who show that exact agreement with guaranteed termination is not attainable in an asynchronous system with as few as one faulty process. The algorithms work by successive approximation, with a provable convergence rate that depends on the ratio between the number of faulty processes and the total number of processes. Lower bounds on the convergence rate for algorithms of this form are proved, and the algorithms presented are shown to be optimal.

531 citations
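The successive-approximation style of these algorithms is easy to sketch. The code below is an illustrative synchronous round (not the paper's exact asynchronous protocol): each process sorts the values it received, discards the t smallest and t largest, which removes anything the t faulty processes could have injected at the extremes, and averages the remainder.

```python
def approx_agreement_round(values, t):
    """One synchronous round of approximate agreement with at most t
    faulty processes: trim the t smallest and t largest received values,
    then average the surviving core.  Repeating rounds contracts the
    spread of correct processes' values."""
    v = sorted(values)
    core = v[t:len(v) - t]
    return sum(core) / len(core)
```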


Journal ArticleDOI
TL;DR: An algorithm is presented that constructs a representation for the cell complex defined by n hyperplanes in optimal $O(n^d)$ time in d dimensions, which is shown to lead to new methods for computing $\lambda$-matrices, constructing all higher-order Voronoi diagrams, halfspace range estimation, degeneracy testing, and finding minimum measure simplices.
Abstract: A finite set of lines partitions the Euclidean plane into a cell complex. Similarly, a finite set of $(d-1)$-dimensional hyperplanes partitions d-dimensional Euclidean space. An algorithm is presented that constructs a representation for the cell complex defined by n hyperplanes in optimal $O(n^d)$ time in d dimensions. It relies on a combinatorial result that is of interest in its own right. The algorithm is shown to lead to new methods for computing $\lambda$-matrices, constructing all higher-order Voronoi diagrams, halfspace range estimation, degeneracy testing, and finding minimum measure simplices. In all five applications, the new algorithms are asymptotically faster than previous results, and in several cases are the only known methods that generalize to arbitrary dimensions. The algorithm also implies an upper bound of $2^{cn^d}$, c a positive constant, for the number of combinatorially distinct arrangements of n hyperplanes in $E^d$.

447 citations
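The $\Theta(n^d)$ size that makes the running time optimal comes from the classical cell-count formula for an arrangement in general position. A small helper (standard formula, my own code, not from the paper):

```python
from math import comb

def cells_in_arrangement(n, d):
    """Number of d-dimensional cells cut out by n hyperplanes in general
    position in E^d: sum_{i=0}^{d} C(n, i).  For fixed d this grows as
    Theta(n^d), matching the construction time of the algorithm above."""
    return sum(comb(n, i) for i in range(d + 1))
```

For example, 3 lines in general position split the plane into 7 regions, and 4 lines into 11.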


Journal ArticleDOI
TL;DR: It is shown that an upper bound for the convergence time is the classical mean-square-error time constant, and examples are given to demonstrate that for broad signal classes the convergence time is reduced by a factor of up to 50 in noise canceller applications for the proper selection of variable step parameters.
Abstract: In recent work, a new version of an LMS algorithm has been developed which implements a variable feedback constant μ for each weight of an adaptive transversal filter. This technique has been called the VS (variable step) algorithm and is an extension of earlier ideas in stochastic approximation for varying the step size in the method of steepest descents. The method may be implemented in hardware with only modest increases in complexity ( \approx 15 percent) over the LMS Widrow-Hoff algorithm. It is shown that an upper bound for the convergence time is the classical mean-square-error time constant, and examples are given to demonstrate that for broad signal classes (both narrow-band and broad-band) the convergence time is reduced by a factor of up to 50 in noise canceller applications for the proper selection of variable step parameters. Finally, the VS algorithm is applied to an IIR filter and simulations are presented for applications of the VS FIR and IIR adaptive filters.

398 citations
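The per-weight step idea can be sketched in a few lines. The code below is an illustrative reconstruction, not the paper's exact VS rule: each tap keeps its own step size, which grows on consecutive gradient-sign agreements and shrinks on alternations, clipped to an assumed [mu_min, mu_max] range.

```python
def vs_lms(x, d, n_taps, mu0=0.05, mu_min=0.005, mu_max=0.2):
    """Adaptive transversal filter identified by LMS with a separate,
    data-driven step per tap.  Sign agreement of successive gradient
    estimates doubles that tap's step; alternation halves it."""
    w = [0.0] * n_taps
    mu = [mu0] * n_taps
    prev_grad = [0.0] * n_taps
    for k in range(n_taps - 1, len(x)):
        tap_in = [x[k - i] for i in range(n_taps)]
        err = d[k] - sum(wi * xi for wi, xi in zip(w, tap_in))
        for i in range(n_taps):
            g = err * tap_in[i]
            if g * prev_grad[i] > 0:
                mu[i] = min(mu[i] * 2.0, mu_max)
            elif g * prev_grad[i] < 0:
                mu[i] = max(mu[i] / 2.0, mu_min)
            w[i] += mu[i] * g
            prev_grad[i] = g
    return w
```

On a noiseless system-identification task the taps settle to the true filter coefficients; the variable steps mainly speed up the transient, which is the effect quantified in the abstract.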


Journal ArticleDOI
TL;DR: It is shown that even if the authors allow nonuniform algorithms, an arbitrary number of processors, and arbitrary instruction sets, $\Omega (\log n)$ is a lower bound on the time required to compute various simple functions, including sorting n keys and finding the logical “or” of n bits.
Abstract: One of the frequently used models for a synchronous parallel computer is that of a parallel random access machine, where each processor can read from and write into a common random access memory. Different processors may read the same memory location at the same time, but simultaneous writing is disallowed. We show that even if we allow nonuniform algorithms, an arbitrary number of processors, and arbitrary instruction sets, $\Omega(\log n)$ is a lower bound on the time required to compute various simple functions, including sorting n keys and finding the logical "or" of n bits. We also prove a surprising time upper bound of $0.72\log_2 n$ steps for these functions, which beats the obvious algorithms requiring $\log_2 n$ steps. If simultaneous writes are allowed, there are simple algorithms to compute these functions in a constant number of steps.

356 citations
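The "obvious algorithm" the bound is measured against is a binary-tree reduction. A sequential simulation of its CREW rounds (illustrative code; the paper's $0.72\log_2 n$ algorithm is cleverer than this):

```python
def crew_or_rounds(bits):
    """Simulate the straightforward CREW PRAM algorithm for the OR of n
    bits: pairwise combination per round, i.e. ceil(log2 n) rounds in
    total.  Returns (result, number_of_rounds)."""
    vals = list(bits)
    rounds = 0
    while len(vals) > 1:
        vals = [vals[i] | (vals[i + 1] if i + 1 < len(vals) else 0)
                for i in range(0, len(vals), 2)]
        rounds += 1
    return vals[0], rounds
```

The theorem above says no exclusive-write algorithm, however exotic its instruction set, can beat this by more than a constant factor.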


Journal ArticleDOI
TL;DR: In this paper, a method for adaptive stabilization without a minimum-phase assumption and without knowledge of the sign of the high-frequency gain is developed, which leads to a guarantee of Lyapunov stability and an exponential rate of convergence for the state.
Abstract: In this paper, we develop a method for adaptive stabilization without a minimum-phase assumption and without knowledge of the sign of the high-frequency gain. In contrast to recent work by Martensson [8], we include a compactness requirement on the set of possible plants and assume that an upper bound on the order of the plant is known. Under these additional hypotheses, we generate a piecewise linear time-invariant switching control law which leads to a guarantee of Lyapunov stability and an exponential rate of convergence for the state. One of the main objectives in this paper is to eliminate the possibility of "large state deviations" associated with a search over the space of gain matrices which is required in [8].

318 citations


Book
01 Nov 1986
TL;DR: In this book, the Shadow-Vertex algorithm is analyzed under a sign-invariance stochastic model and for problems with nonnegativity constraints, and an integral formula for the expected number of pivot steps is given.
Abstract: Table of contents:
0 Introduction
  Formulation of the problem and basic notation
  1 The problem
  A Historical Overview
  2 The gap between worst case and practical experience
  3 Alternative algorithms
  4 Results of stochastic geometry
  5 The results of the author
  6 The work of Smale
  7 The paper of Haimovich
  8 Quadratic expected number of steps for sign-invariance model
  Discussion of different stochastic models
  9 What is the "Real World Model"?
  Outline of Chapters 1-5
  10 The basic ideas and the methods of this book
  11 The results of this book
  12 Conclusion and conjectures
1 The Shadow-Vertex Algorithm
  1 Primal interpretation
  2 Dual interpretation
  3 Numerical realization of the algorithm
  4 The algorithm for Phase I
2 The Average Number of Pivot Steps
  1 The probability space
  2 An integral formula for the expected number of S
  3 A transformation of coordinates
  4 Generalizations
3 The Polynomiality of the Expected Number of Steps
  1 Comparison of two integrals
  2 An application of Cavalieri's Principle
  3 The influence of the distribution
  4 Evaluation of the quotient
  5 The average number of steps in our complete Simplex-Method
4 Asymptotic Results
  1 An asymptotic upper bound in integral form
  2 Asymptotic results for certain classes of distributions
  3 Special distributions with bounded support
  4 Asymptotic bounds under uniform distributions
  5 Asymptotic bounds under Gaussian distribution
5 Problems with Nonnegativity Constraints
  1 The geometry
  2 The complete solution method
  3 A simplification of the boundary-condition
  4 Explicit formulation of the intersection-condition
  5 Componentwise sign-independence and the intersection condition
  6 The average number of pivot steps
6 Appendix
  1 Gammafunction and Betafunction
  2 Unit ball and unit sphere
  3 Estimations under variation of the weights
References

234 citations


Journal ArticleDOI
TL;DR: The proposed picture compressibility is shown to possess the properties that one would expect and require of a suitably defined concept of two-dimensional entropy for arbitrary probabilistic ensembles of infinite pictures.
Abstract: Distortion-free compressibility of individual pictures, i.e., two-dimensional arrays of data, by finite-state encoders is investigated. For every individual infinite picture I, a quantity $\rho(I)$ is defined, called the compressibility of I, which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for I by any finite-state information-lossless encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, might also provide useful criteria for finite and practical data-compression tasks. The proposed picture compressibility is also shown to possess the properties that one would expect and require of a suitably defined concept of two-dimensional entropy for arbitrary probabilistic ensembles of infinite pictures. While the definition of $\rho(I)$ allows the use of different machines for different pictures, the constructive coding theorem leads to a universal compression scheme that is asymptotically optimal for every picture. The results are readily extendable to data arrays of any finite dimension.

Journal ArticleDOI
TL;DR: Two selection protocols that run on multiple access channels in log-logarithmic expected time are proposed, and a complementary lower bound is established showing that the first protocol falls within an additive constant of optimality and that the second differs from optimality by less than any multiplicative factor infinitesimally greater than 1 as the size of the problem approaches infinity.
Abstract: We propose two selection protocols that run on multiple access channels in log-logarithmic expected time, and establish a complementary lower bound showing that the first protocol falls within an additive constant of optimality and that the second differs from optimality by less than any multiplicative factor infinitesimally greater than 1 as the size of the problem approaches infinity. It is difficult to second-guess the fast-changing electronics industry, but our mathematical analysis could be relevant outside the traditional interests of communications protocols to semaphore-like problems.

Journal ArticleDOI
TL;DR: Shannon's self-information of a string is generalized to its complexity relative to the class of finite-state-machine (FSM) defined sources by a theorem stating that, asymptotically, the mean complexity provides a tight lower bound for the mean length of all so-called regular codes.
Abstract: Shannon's self-information of a string is generalized to its complexity relative to the class of finite-state-machine (FSM) defined sources. Unlike an earlier generalization, the new one is valid for both short and long strings. The definition is justified in part by a theorem stating that, asymptotically, the mean complexity provides a tight lower bound for the mean length of all so-called regular codes. This also generalizes Shannon's noiseless coding theorem. For a large subclass of FSM sources a simple algorithm is described for computing the complexity.

Journal ArticleDOI
TL;DR: A new lower bound for the minimum distance of cyclic codes that includes earlier bounds (i.e., BCH bound, HT bound, Roos bound) is created and can be even stronger than the first one.
Abstract: The main result is a new lower bound for the minimum distance of cyclic codes that includes earlier bounds (i.e., BCH bound, HT bound, Roos bound). This bound is related to a second method for bounding the minimum distance of a cyclic code, which we call shifting. This method can be even stronger than the first one. For all binary cyclic codes of length (with two exceptions), we show that our methods yield the true minimum distance. The two exceptions at the end of our list are a code and its even-weight subcode. We treat several examples of cyclic codes of length $\geq 63$.
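For reference, the oldest of the bounds the paper subsumes is easy to compute. The sketch below (my own illustration, not from the paper) evaluates the BCH bound from the exponent set of the generator's zeros; the HT bound, the Roos bound, and the new bound all refine this idea of exploiting consecutive zeros.

```python
def bch_bound(zeros, n):
    """BCH bound for a length-n cyclic code whose generator polynomial
    has zeros alpha^z for z in `zeros` (alpha a primitive n-th root of
    unity): one more than the longest cyclically consecutive run of
    exponents in the zero set."""
    zset = set(z % n for z in zeros)
    best = 0
    for start in range(n):
        run = 0
        while (start + run) % n in zset and run < n:
            run += 1
        best = max(best, run)
    return best + 1
```

For the binary Hamming code of length 7 (zero set {1, 2, 4}) this gives the true minimum distance 3.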

Journal ArticleDOI
TL;DR: Lower and upper bounds on the trace of the positive semidefinite solution of the algebraic matrix Riccati and Lyapunov equations are derived; the upper trace bound in many cases is tighter than the upper bound for the maximal eigenvalue.
Abstract: Lower and upper bounds on the trace of the positive semidefinite solution of the algebraic matrix Riccati and Lyapunov equation are derived. The upper trace bound obtained in this note in many cases results in a tighter bound as compared to the upper bound for the maximal eigenvalue proposed in [1] and [2].

Journal ArticleDOI
TL;DR: In this article, the authors studied the response of phase-transforming steels to variations of the applied stress (i.e. the Σ-term of the classical plastic strain rate defined in Part I) both theoretically and numerically for ideal-plastic individual phases.
Abstract: The response of phase-transforming steels to variations of the applied stress (i.e. the Σ-term of the classical plastic strain rate $\dot{E}^{cp}$ defined in Part I) is studied both theoretically and numerically for ideal-plastic individual phases. It is found theoretically that though the stress-strain curve contains no elastic portion, it is nevertheless initially tangent to the elastic line with slope equal to Young's modulus. Moreover an explicit formula for the beginning of the curve is derived for medium or high proportions of the harder phase, and a simple upper bound is given for the ultimate stress (maximum von Mises stress). The finite element simulation confirms and completes these results, especially concerning the ultimate stress, whose discrepancy with the theoretical upper bound is found to be maximum for low proportions of the harder phase. Based on these results, a complete model is proposed for the Σ-term of the classical plastic strain rate $\dot{E}^{cp}$ in the case of ideal-plastic phases.

Journal ArticleDOI
TL;DR: In this paper, the authors show that the total error is expressible as a linear combination of three terms: measurement error; modeling errors caused by inadequacy of the travel-time tables; and a nonlinear term.
Abstract: For conventional single-event, nonlinear, least-squares hypocentral estimates, I show that the total error is expressible as a linear combination of three terms: (1) measurement error; (2) modeling errors caused by inadequacy of the travel-time tables; and (3) a nonlinear term. Errors in calculating travel-time partial derivatives are shown to have no effect, provided a stable solution can be found. This is in contrast to linear problems where errors in calculating matrix elements can distort the solution drastically. The error appraisal technique developed here examines each of the three error terms independently. The first can be analyzed by standard confidence ellipsoids with critical values based on measurement error statistics. The second can cause conventional error ellipsoid calculations that derive a critical value from an estimate based on rms residuals, to give misleading results. I introduce an alternative extremal bound procedure for appraising such errors. Travel-time modeling errors are bounded as the product of ray arc length and an estimate of the nominal scale of slowness errors along the ray path. These are used to derive an upper bound on systematic errors in each hypocentral coordinate based on a novel bounding criterion. Finally, I show that, for errors of a reasonable scale, the nonlinear error term can be estimated adequately using a second-order approximation. Given an upper bound on the total location error, bounds on the travel-time error induced by nonlinearity can be calculated from the spectral norm of the Hessian for each measured arrival time. The systematic errors in each hypocentral coordinate due to nonlinearity can then be bounded using the same criterion used for constructing modeling error bounds. This overall procedure is complete because it allows one to independently appraise the relative importance of all sources of hypocentral errors. It is practical because the required computational effort is small.

Journal ArticleDOI
TL;DR: The logarithmic lower bound on communication complexity is applied to obtain an Ω(n log n) bound on the time of 1-tape unbounded error probabilistic Turing machines, believed to be the first nontrivial lower bound obtained for such machines.

Journal ArticleDOI
TL;DR: It follows as a corollary of the first result that there are no more than $n^{d(d+1)n}$ combinatorially distinct labeled simplicial polytopes in $R^d$ with $n$ vertices, which improves the best previous upper bound of $n^{cn^{d/2}}$.
Abstract: We give a new upper bound of $n^{d(d+1)n}$ on the number of realizable order types of simple configurations of $n$ points in $R^d$, and of $n^{2d^2n}$ on the number of realizable combinatorial types of simple configurations. It follows as a corollary of the first result that there are no more than $n^{d(d+1)n}$ combinatorially distinct labeled simplicial polytopes in $R^d$ with $n$ vertices, which improves the best previous upper bound of $n^{cn^{d/2}}$.

Journal ArticleDOI
A. Wojnar
TL;DR: The general error-rate formula holds for coherent and noncoherent detection of PSK or FSK signals and a lower bound on error rates is established for some diversity systems.
Abstract: Optimum performance of communication in Nakagami channels is reexamined. With novel functionals of the $m/\gamma$-distribution, analyses of signal interference and of coherent binary detection are greatly simplified. The general error-rate formula holds for coherent and noncoherent detection of PSK or FSK signals. A lower bound on error rates is established for some diversity systems.

Journal ArticleDOI
TL;DR: A number of upper and lower bounds are obtained for K(n, R) , the minimal number of codewords in any binary code of length n and covering radius R, and an upper bound is given for the density of a covering code over any alphabet.
Abstract: A number of upper and lower bounds are obtained for $K(n, R)$, the minimal number of codewords in any binary code of length n and covering radius R. Several new constructions are used to derive the upper bounds, including an amalgamated direct sum construction for nonlinear codes. This construction works best when applied to normal codes, and we give some new and stronger conditions which imply that a linear code is normal. An upper bound is given for the density of a covering code over any alphabet, and it is shown that $K(n+2, R+1) \leq K(n, R)$ holds for sufficiently large n.
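The baseline lower bound that any such improvement must beat is the sphere-covering bound. A small brute-force illustration (my own code, not from the paper):

```python
from itertools import product
from math import comb, ceil

def covering_radius(code, n):
    """Largest Hamming distance from any word of F_2^n to the code,
    computed by brute force (feasible only for small n)."""
    return max(
        min(sum(a != b for a, b in zip(w, c)) for c in code)
        for w in product((0, 1), repeat=n)
    )

def sphere_covering_bound(n, r):
    """Classical lower bound K(n, R) >= 2^n / V(n, R), where V(n, R) is
    the volume of a Hamming ball of radius R."""
    return ceil(2 ** n / sum(comb(n, i) for i in range(r + 1)))
```

For n = 3, R = 1 the bound gives 2, and the repetition code {000, 111} attains it, so K(3, 1) = 2.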

Journal ArticleDOI
TL;DR: In this paper, the spectrum of the Laplacian in a bounded open domain with a rough boundary was considered and upper and lower bounds for the second term of the expansion of the partition function were given.
Abstract: We consider the spectrum of the Laplacian in a bounded open domain of $\mathbb{R}^n$ with a rough boundary (i.e., with possibly non-integer dimension), and we discuss a conjecture by M. V. Berry generalizing Weyl's conjecture. Then, using ideas Mark Kac developed in his famous study of the drum, we give upper and lower bounds for the second term of the expansion of the partition function. The main thesis of the paper is to show that the relevant measure of the roughness of the boundary should be based on Minkowski dimensions and on Minkowski measures rather than on Hausdorff ones.

Journal ArticleDOI
TL;DR: It is proved that the proposed algorithm can find k nearest neighbors in a constant expected time and is distribution free, and only 4.6 distance calculations were required to find a nearest neighbor among 10 000 samples drawn from a bivariate normal distribution.
Abstract: We propose a fast nearest neighbor finding algorithm, tentatively named the ordered partition, based on the ordered lists of the training samples on each projection axis. The ordered partition has two properties: ordering, to bound the search region, and partitioning, to reject the unwanted samples without actual distance computations. It is proved that the proposed algorithm can find k nearest neighbors in a constant expected time. Simulations show that the algorithm is rather distribution free, and only 4.6 distance calculations, on the average, were required to find a nearest neighbor among 10 000 samples drawn from a bivariate normal distribution.
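The "ordering" property alone can be shown in a few lines. The sketch below is illustrative code for a one-axis version (the paper's full method also partitions, which is what yields the constant expected time): scan outward from the query's position in a list sorted on the first coordinate, and stop once the projection gap alone exceeds the best distance found.

```python
import bisect

def nn_ordered(points, q):
    """Nearest neighbour via an ordered list on the first coordinate.
    Candidates are visited in order of projection distance; the search
    stops when no unvisited point can be closer than the current best."""
    pts = sorted(points)                      # ordered by x-coordinate
    xs = [p[0] for p in pts]
    i = bisect.bisect_left(xs, q[0])
    best, best_d2 = None, float("inf")
    lo, hi = i - 1, i
    while lo >= 0 or hi < len(pts):
        dlo = (q[0] - xs[lo]) ** 2 if lo >= 0 else float("inf")
        dhi = (xs[hi] - q[0]) ** 2 if hi < len(pts) else float("inf")
        if min(dlo, dhi) >= best_d2:
            break                             # no closer point can exist
        if dlo <= dhi:
            p = pts[lo]; lo -= 1
        else:
            p = pts[hi]; hi += 1
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        if d2 < best_d2:
            best, best_d2 = p, d2
    return best
```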

Journal ArticleDOI
TL;DR: An Ω(n log n) lower bound is proved for these problems under appropriate models of computation for a set of n demand points with weight $W_i$, $i = 1, 2, \ldots, n$, in the plane.
Abstract: Given a set of n demand points with weight $W_i$, $i = 1, 2, \ldots, n$, in the plane, we consider several geometric facility location problems. Specifically we study the complexity of the Euclidean 1-line center problem, the discrete 1-point center problem, and a competitive location problem. The Euclidean 1-line center problem is to locate a line which minimizes the maximum weighted distance from the line (or the center) to the demand points. The discrete 1-point center problem is to locate one of the demand points so as to minimize the maximum unweighted distance from the point to other demand points. The competitive location problem studied is to locate a new facility point to compete against an existing facility so that a certain objective function is optimized. An Ω(n log n) lower bound is proved for these problems under appropriate models of computation. Efficient algorithms for these problems that achieve the lower bound and other related problems are also given.

Journal ArticleDOI
TL;DR: An algebraic limitation on the maximum number of directions of arrival of plane waves that can be resolved by a uniform linear sensor array is studied and the upper bounds indicate the potential for resolving more signals than by present methods of array processing.
Abstract: An algebraic limitation on the maximum number of directions of arrival of plane waves that can be resolved by a uniform linear sensor array is studied. Achievable lower and upper bounds are derived on that number as a function of the number of elements in the array, number of snapshots, and the rank of the source sample-correlation matrix. The signals are assumed narrow-band and of identical and known center frequency. The results are also applicable in the coherent signal case and when directions of arrival are estimated from few snapshots. While in the multiple snapshot case the lower bounds coincide with known asymptotic results, the upper bounds indicate the potential for resolving more signals than by present methods of array processing.

Journal ArticleDOI
01 Jul 1986
TL;DR: It is shown that the time lower bound for computing the inverse dynamics of an n-link robot manipulator in parallel using p processors is $O(k_1\lceil n/p\rceil + k_2\lceil\log_2 p\rceil)$, where $k_1$ and $k_2$ are constants.
Abstract: It is shown that the time lower bound for computing the inverse dynamics of an n-link robot manipulator in parallel using p processors is $O(k_1\lceil n/p\rceil + k_2\lceil\log_2 p\rceil)$, where $k_1$ and $k_2$ are constants. A novel parallel algorithm for computing the inverse dynamics using the Newton-Euler equations of motion was developed to be implemented on a single-instruction-stream multiple-data-stream computer with p processors to achieve the time lower bound. When p = n, the proposed parallel algorithm achieves Minsky's time lower bound $O(\lceil\log_2 n\rceil)$, which is the conjecture of parallel evaluation. The proposed p-fold parallel algorithm can be best described as consisting of p parallel blocks with pipelined elements within each parallel block. The results from the computations in the p blocks form a new homogeneous linear recurrence of size p, which can be computed using the recursive doubling algorithm. A modified inverse perfect shuffle interconnection scheme was suggested to interconnect the p processors. Furthermore, the proposed parallel algorithm is susceptible to a systolic pipelined architecture, requiring three floating-point operations per complete set of joint torques.
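The recursive-doubling step that merges the p blocks can be sketched for a scalar recurrence. The code below is illustrative (assuming the block outputs have already been reduced to a first-order linear recurrence x_i = a_i x_{i-1} + b_i): it is a parallel-prefix scan over affine maps, taking ceil(log2 p) combining rounds.

```python
def recurrence_by_doubling(a, b, x0):
    """Solve x_i = a_i x_{i-1} + b_i for all i by recursive doubling:
    each position holds an affine map (A, B) meaning x_i = A * x0 + B
    over its prefix, and maps are pairwise composed with stride
    1, 2, 4, ... until every prefix is complete."""
    p = len(a)
    f = list(zip(a, b))                      # f[i] maps x_{i-1} -> x_i
    shift = 1
    while shift < p:
        f = [
            (f[i][0] * f[i - shift][0],
             f[i][0] * f[i - shift][1] + f[i][1])
            if i >= shift else f[i]
            for i in range(p)
        ]
        shift *= 2
    return [A * x0 + B for A, B in f]
```

In hardware each round is one parallel composition step, which is where the $\lceil\log_2 p\rceil$ term of the time bound comes from.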

Journal ArticleDOI
TL;DR: An algorithm is presented for solving a set of linear equations on the nonnegative orthant which can be made equivalent to the maximization of a simple concave function subject to a similar set oflinear equations and bounds on the variables.
Abstract: An algorithm is presented for solving a set of linear equations on the nonnegative orthant. This problem can be made equivalent to the maximization of a simple concave function subject to a similar set of linear equations and bounds on the variables. A Newton method can then be used which enforces a uniform lower bound which increases geometrically with the number of iterations. The basic steps are a projection operation and a simple line search. It is shown that this procedure either proves in at most $O(n^2m^2L)$ operations that there is no solution or else computes an exact solution in at most $O(n^3m^2L)$ operations.

Journal ArticleDOI
TL;DR: An upper bound on the string tension is obtained from the requirement that the gravitational radiation produced by string loops not interfere with primordial nucleosynthesis.
Abstract: The evolution of a system of cosmic strings is studied using an extended version of an analytic formalism introduced by Kibble. It is shown that, in a radiation-dominated universe, the fate of the string system depends sensitively on the fate of the closed loops that are produced by the interactions of very long strings. The strings can be prevented from dominating the energy density of the Universe only if there is a large probability (≳ 50%) that a closed loop will intersect itself and break up into smaller loops. A comparison with the numerical simulations of Albrecht and Turok indicates that the probability of self-intersection is indeed large enough to allow the energy density in strings to stabilize at a small fraction of the radiation density, but there is a potential problem with the gravitational radiation that is produced by the strings. If the string tension is too large, then the gravitational radiation will be so copious that it interferes with primordial nucleosynthesis. By assuming that the probability of self-intersection is 85%, as the comparison with the results of Albrecht and Turok indicates, an upper bound on the string tension is obtained: $G\mu \lesssim 2 \times 10^{-6}$, which is close to the value predicted for the cosmic-string theory of galaxy formation. This bound would become significantly lower if the probability of intersection is less than 85%.

Journal ArticleDOI
TL;DR: The lower bound from the Lagrangian dual of this approach to the generalized assignment problem (GAP) is shown to be at least as strong as that from the best of the traditional Lagrangian relaxation approaches.

Journal ArticleDOI
TL;DR: An upper bound for the number of metastable states in the Hopfield model is calculated as a function of the Hamming fraction from an input pattern, which implies that there is a gap between a set of states close to the input pattern and another set centred around the Hamming fraction 0.5 from it.
Abstract: An upper bound for the number of metastable states in the Hopfield model is calculated as a function of the Hamming fraction from an input pattern. For all finite values of α, the ratio of the number of patterns to nodes, the Hamming distance from the input pattern to the nearest metastable state is infinite. When α < 0.113, the bound also implies that there is a gap between a set of states close to the input pattern and another set centred around the Hamming fraction 0.5 from it.
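For small networks the metastable states the bound counts can simply be enumerated. An illustrative brute-force sketch (my own code, not from the paper):

```python
from itertools import product

def metastable_states(W):
    """Enumerate all states s in {-1,+1}^N that are stable under
    single-spin Hopfield updates, i.e. every local field agrees in sign
    with the spin (zero fields count as stable).  Brute force over 2^N
    states, so only feasible for small N."""
    N = len(W)
    stable = []
    for s in product((-1, 1), repeat=N):
        h = [sum(W[i][j] * s[j] for j in range(N)) for i in range(N)]
        if all(h[i] * s[i] >= 0 for i in range(N)):
            stable.append(s)
    return stable
```

With a Hebbian coupling matrix built from a single pattern, exactly the pattern and its negation are stable, which is the α → 0 limit of the counting problem the paper bounds.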