
Showing papers on "Upper and lower bounds" published in 1993


Proceedings Article
29 Nov 1993
TL;DR: It is shown that the recognition weights of an autoencoder can be used to compute an approximation to the Boltzmann distribution and that this approximation gives an upper bound on the description length.
Abstract: An autoencoder network uses a set of recognition weights to convert an input vector into a code vector. It then uses a set of generative weights to convert the code vector into an approximate reconstruction of the input vector. We derive an objective function for training autoencoders based on the Minimum Description Length (MDL) principle. The aim is to minimize the information required to describe both the code vector and the reconstruction error. We show that this information is minimized by choosing code vectors stochastically according to a Boltzmann distribution, where the generative weights define the energy of each possible code vector given the input vector. Unfortunately, if the code vectors use distributed representations, it is exponentially expensive to compute this Boltzmann distribution because it involves all possible code vectors. We show that the recognition weights of an autoencoder can be used to compute an approximation to the Boltzmann distribution and that this approximation gives an upper bound on the description length. Even when this bound is poor, it can be used as a Lyapunov function for learning both the generative and the recognition weights. We demonstrate that this approach can be used to learn factorial codes.
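
The bound itself is the standard variational free-energy inequality and can be checked numerically on a toy discrete code space. The following Python sketch (made-up energies, not an actual autoencoder) verifies that any recognition distribution q over code vectors gives a description length at least as large as the Boltzmann value, with equality when q is the Boltzmann distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy code space: 8 possible code vectors with arbitrary energies E(c | input).
E = rng.normal(size=8)

# Exact Boltzmann description length (in nats): -log sum_c exp(-E(c)).
exact_dl = -np.log(np.sum(np.exp(-E)))

def free_energy(q, E):
    """F(q) = E_q[E] - H(q); an upper bound on the Boltzmann description length."""
    return float(np.sum(q * E) + np.sum(q * np.log(q)))

q_uniform = np.full(8, 1.0 / 8)                 # a crude recognition distribution
q_boltzmann = np.exp(-E) / np.sum(np.exp(-E))   # the optimal choice

print(exact_dl)                     # the Boltzmann description length
print(free_energy(q_uniform, E))    # strictly larger: a valid upper bound
print(free_energy(q_boltzmann, E))  # equals exact_dl (up to rounding)
```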

1,114 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the second eigenvalue β_1 of the symmetric exclusion process by comparison with a second reversible Markov chain on the r-sets of {1, 2, ..., n} with uniform stationary distribution.
Abstract: By symmetry, P has eigenvalues 1 = β_0 > β_1 ≥ ... ≥ β_{|X|-1} ≥ -1. This paper develops methods for getting upper and lower bounds on β_1 by comparison with a second reversible chain on the same state space. This extends the ideas introduced in Diaconis and Saloff-Coste (1993), where random walks on finite groups were considered. The bounds involve geometric properties such as the diameter and covering number of an associated graph, along the lines of Diaconis and Stroock (1991). The main application gives a sharp upper bound on the second eigenvalue of the symmetric exclusion process. Thus, let G be a connected undirected graph with n vertices. For simplicity, we assume in this introduction that G is regular. To start, r unlabelled particles are placed in an initial configuration, 1 < r < n. At each step, a particle is chosen at random; then one of the neighboring sites of this particle is chosen at random. If the neighboring site is unoccupied, the chosen particle is moved there; if the neighboring site is occupied, the system stays as it was. This is a reversible Markov chain on the r-sets of {1, 2, ..., n} with uniform stationary distribution. Liggett (1985) gives background and motivation (he focuses on infinite systems). Fill (1991) gives bounds on the second eigenvalue of the labeled exclusion process on the finite circle Z_n. We study this chain by comparison with a second Markov chain on r-sets that proceeds by picking a particle at random, picking an unoccupied site at random (not necessarily a neighboring site) and moving the particle to the unoccupied site. This is a well-studied chain (the Bernoulli-Laplace model for diffusion). Its eigenvalues are known. We show that the comparison techniques apply to give upper bounds on the eigenvalues of the exclusion process.
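
The dynamics just described (pick a particle, pick a neighbor, move only if the neighboring site is free) are straightforward to simulate. Below is a minimal Python sketch of one step of the symmetric exclusion process on an arbitrary undirected graph; the graph, particle count, and step count are placeholders, not taken from the paper.

```python
import random

def exclusion_step(occupied, neighbors, rng=random):
    """One step of the symmetric exclusion process.

    occupied  -- set of vertices currently holding a particle
    neighbors -- dict: vertex -> list of adjacent vertices
    """
    v = rng.choice(sorted(occupied))   # pick a particle at random
    w = rng.choice(neighbors[v])       # pick one of its neighboring sites
    if w not in occupied:              # move only if that site is unoccupied
        occupied.remove(v)
        occupied.add(w)
    return occupied

# Example: 3 particles on the cycle Z_6.
nbrs = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
config = {0, 2, 4}
for _ in range(1000):
    config = exclusion_step(config, nbrs)
print(config)
```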

481 citations


Journal ArticleDOI
17 Jan 1993
TL;DR: In contrast to the classical matched decoding case, under the mismatched decoding regime the highest achievable rate depends on whether the performance criterion is the bit error rate or the message error probability, and on whether the coding strategy is deterministic or randomized.
Abstract: Reliable transmission over a discrete-time memoryless channel with a decoding metric that is not necessarily matched to the channel (mismatched decoding) is considered. It is assumed that the encoder knows both the true channel and the decoding metric. The lower bound on the highest achievable rate found by Csiszar and Korner (1981) and by Hui (1983) for DMCs, hereafter denoted C_LM, is shown to bear some interesting information-theoretic meanings. The bound C_LM turns out to be the highest achievable rate in the random coding sense, namely, the random coding capacity for mismatched decoding. It is also demonstrated that the ε-capacity associated with mismatched decoding cannot exceed C_LM. New bounds and some properties of C_LM are established and used to find relations to the generalized mutual information and to the generalized cutoff rate. The expression for C_LM is extended to a certain class of memoryless channels with continuous input and output alphabets, and is used to calculate C_LM explicitly for several examples of theoretical and practical interest. Finally, it is demonstrated that, in contrast to the classical matched decoding case, under the mismatched decoding regime the highest achievable rate depends on whether the performance criterion is the bit error rate or the message error probability, and on whether the coding strategy is deterministic or randomized.
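
For reference, the Csiszar-Korner/Hui lower bound C_LM is commonly written as the following max-min of mutual informations; the precise constraint set varies slightly between formulations, so this should be read as a schematic statement rather than the exact expression used in the paper:

$$
C_{\mathrm{LM}} \;=\; \max_{P_X}\ \min_{\substack{V:\ (P_X V)_Y = (P_X W)_Y,\\ \mathbb{E}_{P_X V}[d(X,Y)] \,\le\, \mathbb{E}_{P_X W}[d(X,Y)]}} I(P_X, V),
$$

where W is the true channel, d the decoding metric, and I(P_X, V) the mutual information induced by input distribution P_X and auxiliary channel V.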

444 citations


Journal ArticleDOI
TL;DR: It is shown that, by solving appropriate local residual-type problems, one can obtain upper bounds on the error in the energy norm; in the special case of adaptive h-p finite element analysis, the estimator also gives a realistic estimate of the error.
Abstract: This paper deals with the problem of obtaining numerical estimates of the accuracy of approximations to solutions of elliptic partial differential equations. It is shown that, by solving appropriate local residual-type problems, one can obtain upper bounds on the error in the energy norm. Moreover, in the special case of adaptive h-p finite element analysis, the estimator will also give a realistic estimate of the error. A key feature of this is the development of a systematic approach to the determination of boundary conditions for the local problems. The work extends and combines several existing methods to the case of full h-p finite element approximation on possibly irregular meshes with elements of non-uniform degree. As a special case, the analysis proves a conjecture made by Bank and Weiser [Some A Posteriori Error Estimators for Elliptic Partial Differential Equations, Math. Comput. 44, 283-301 (1985)].
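
Schematically, estimators of this residual type produce one indicator per element by solving a small local problem, and the guaranteed bound has the form (notation assumed here, not taken from the paper):

$$
\|u - u_{hp}\|_E^2 \;\le\; \sum_{K \in \mathcal{T}} \eta_K^2,
$$

where u_{hp} is the h-p finite element approximation, the sum runs over the elements K of the mesh, and η_K is the energy norm of the solution of the local residual problem posed on K with the boundary conditions determined by the systematic approach mentioned above.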

411 citations


Journal ArticleDOI
G. Poltyrev1
17 Jan 1993
TL;DR: The author derives exponential upper and lower bounds for the decoding error probability of an IC, expressed in terms of the normalized logarithmic density (NLD), and shows that the exponent of the random coding bound can be attained by linear ICs (lattices), implying that lattices play the same role with respect to the AWGN channel as linear codes do with respect to a discrete symmetric channel.
Abstract: Many coded modulation constructions, such as lattice codes, are visualized as restricted subsets of an infinite constellation (IC) of points in the n-dimensional Euclidean space. The author regards an IC as a code without restrictions employed for the AWGN channel. For an IC the concept of coding rate is meaningless and the author uses, instead of coding rate, the normalized logarithmic density (NLD). The maximum value C_∞ such that, for any NLD less than C_∞, it is possible to construct an IC with arbitrarily small decoding error probability is called the generalized capacity of the AWGN channel without restrictions. The author derives exponential upper and lower bounds for the decoding error probability of an IC, expressed in terms of the NLD. The upper bound is obtained by means of a random coding method and it is very similar to the usual random coding bound for the AWGN channel. The exponents of these upper and lower bounds coincide for high values of the NLD, thereby enabling derivation of the generalized capacity of the AWGN channel without restrictions. It is also shown that the exponent of the random coding bound can be attained by linear ICs (lattices), implying that lattices play the same role with respect to the AWGN channel as linear codes do with respect to a discrete symmetric channel.
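
For an AWGN channel with noise variance σ² per dimension, the generalized capacity is usually quoted as

$$
C_\infty \;=\; \tfrac{1}{2}\,\ln\frac{1}{2\pi e\,\sigma^2}
$$

nats per dimension; equivalently, reliable decoding of an IC requires its point density to stay below (2πeσ²)^{-n/2} per unit volume in dimension n. (This is the standard statement of the result; the paper's own normalization may differ.)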

324 citations


Journal ArticleDOI
TL;DR: In this article, a branch-and-bound method for scheduling thermal generating units is presented, where a simple rule is defined to compute the lower bound of each candidate schedule for interim computation usage, and the branching process takes place on the subschedule with the lowest lower bound.
Abstract: A branch-and-bound method for scheduling thermal generating units is presented. The decision variables are the start and stop times and the generation levels of the units. A simple rule is defined to compute the lower bound of each candidate schedule for interim computation use, and the branching process takes place on the subschedule with the lowest lower bound. The heap data storage structure and space-saving encoded data representations for partially fulfilled unit commitment schedules are utilized to facilitate the branch-and-bound procedure. By successive branching and bounding, the unit commitment schedule with the minimum cost can be obtained. Two examples, a 10-unit, 24-h case and a 20-unit, 36-h case, are shown to illustrate the effectiveness of the proposed algorithm.
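
The control flow sketched in the abstract — always branch on the subschedule with the lowest lower bound, keeping candidates in a heap — is a best-first branch-and-bound search. The Python skeleton below shows that flow generically; the bounding rule, schedule encoding, and cost model are placeholders, not the ones used in the paper.

```python
import heapq

def branch_and_bound(root, lower_bound, is_complete, children, cost):
    """Best-first branch and bound over partial schedules.

    root         -- initial (empty) partial schedule
    lower_bound  -- partial schedule -> lower bound on any completion's cost
    is_complete  -- partial schedule -> bool
    children     -- partial schedule -> iterable of extended partial schedules
    cost         -- complete schedule -> exact cost
    """
    best_cost, best_schedule = float("inf"), None
    heap = [(lower_bound(root), 0, root)]        # (bound, tie-breaker, node)
    counter = 1
    while heap:
        bound, _, node = heapq.heappop(heap)     # subschedule with lowest bound
        if bound >= best_cost:
            continue                             # prune: cannot beat the incumbent
        if is_complete(node):
            c = cost(node)
            if c < best_cost:
                best_cost, best_schedule = c, node
        else:
            for child in children(node):
                b = lower_bound(child)
                if b < best_cost:
                    heapq.heappush(heap, (b, counter, child))
                    counter += 1
    return best_cost, best_schedule
```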

320 citations


Journal ArticleDOI
TL;DR: In this paper, a method for bounding the overall properties of a class of composite materials in terms of the properties of individual phases and of their arrangement is proposed, which applies to power law materials and, as a special case, to rigid ideally plastic materials.
Abstract: A method is proposed for bounding the overall properties of a class of composite materials in terms of the properties of the individual phases and of their arrangement. It applies to power-law materials and, as a special case, to rigid ideally plastic materials. A link between the overall potential of a nonlinear composite and the overall energy of a fictitious linear composite is presented with no assumptions on the arrangement of the phases. With this method, any upper bound available for linear materials can easily be transposed to nonlinear materials. A new characterization of the extremal surface of ideally plastic composites is given. The possible applications of these bounds are illustrated in a study on two-phase isotropic composites, and the predictions of the bounds are compared with Finite Element cell calculations.

281 citations


Journal ArticleDOI
TL;DR: In this paper, the Schrodinger equation with quartic anharmonic and symmetric double-well potentials of the form V(A,B) = Ax^2/2 + Bx^4 (B ≥ 0) was studied.
Abstract: Rigorous and remarkably accurate lower bounds to the lower eigenvalue spectrum of the Schrodinger equation with quartic anharmonic and symmetric double-well potentials of the form V(A,B) = Ax^2/2 + Bx^4 (B ≥ 0) are presented. This procedure exploits some exactly soluble model potentials and appears to be of quite general utility.
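
As a brute-force reference point for such bounds, the low-lying eigenvalues of V(A,B) = Ax^2/2 + Bx^4 can be computed by direct discretization. The Python sketch below (ħ = m = 1, arbitrary values of A and B) uses a second-order finite-difference Hamiltonian; it is only a numerical check, not the bounding procedure developed in the paper.

```python
import numpy as np

def low_eigenvalues(A=1.0, B=0.1, L=8.0, n=1200, k=4):
    """Lowest k eigenvalues of H = -(1/2) d^2/dx^2 + A x^2/2 + B x^4,
    discretized by second-order finite differences on [-L, L]."""
    x, h = np.linspace(-L, L, n, retstep=True)
    V = A * x**2 / 2 + B * x**4
    # Kinetic term: -(1/2) (psi[i-1] - 2 psi[i] + psi[i+1]) / h^2
    main = 1.0 / h**2 + V
    off = -0.5 / h**2 * np.ones(n - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:k]

print(low_eigenvalues())   # ground state slightly above 0.5 for A=1, B=0.1
```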

277 citations


Journal ArticleDOI
TL;DR: In this article, a Gaussian upper bound for the iterated kernels of Markov chains is obtained under some natural conditions, which applies in particular to simple random walks on any locally compact unimodular group $G$ which is compactly generated.
Abstract: A Gaussian upper bound for the iterated kernels of Markov chains is obtained under some natural conditions. This result applies in particular to simple random walks on any locally compact unimodular group $G$ which is compactly generated. Moreover, if $G$ has polynomial volume growth, the Gaussian upper bound can be complemented with a similar lower bound. Various applications are presented. In the process, we offer a new proof of Varopoulos' results relating the uniform decay of convolution powers to the volume growth of $G$.
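
Gaussian upper bounds of this kind are typically of the form (schematic, with C a constant depending on the group and the driving measure):

$$
k_n(x,y) \;\le\; \frac{C}{V(\sqrt{n})}\,\exp\!\Big(\!-\frac{d(x,y)^2}{C\,n}\Big),
$$

where k_n is the n-step transition kernel, d a word metric attached to a compact generating set, and V(r) the Haar volume of the ball of radius r.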

264 citations



Proceedings ArticleDOI
01 Jun 1993
TL;DR: This paper shows how to software pipeline a loop for minimal register pressure without sacrificing the loop's minimum execution time, and empirical results indicate near-optimal performance.
Abstract: This paper shows how to software pipeline a loop for minimal register pressure without sacrificing the loop's minimum execution time. This novel bidirectional slack-scheduling method has been implemented in a FORTRAN compiler and tested on many scientific benchmarks. The empirical results—when measured against an absolute lower bound on execution time, and against a novel schedule-independent absolute lower bound on register pressure—indicate near-optimal performance.

Book ChapterDOI
01 Jan 1993
TL;DR: It is proved that a sequence set corresponding to a binary linear code achieves Welch’s bound with equality if and only if the dual code contains no codewords of Hamming weight two.
Abstract: Welch’s bound for a set of M complex equi-energy sequences is considered as a lower bound on the sum of the squares of the magnitudes of the inner products between all pairs of these sequences. It is shown that, when the sequences are binary (±1-valued) sequences assigned to the M users in a synchronous code-division multiple-access (S-CDMA) system, precisely such a sum determines the sum of the variances of the interuser interference seen by the individual users. It is further shown that Welch’s bound, in the general case, holds with equality if and only if the array having the M sequences as rows has orthogonal and equi-energy columns. For the case of binary (±1-valued) sequences that meet Welch’s bound with equality, it is shown that the sequences are uniformly good in the sense that, when used in an S-CDMA system, the variance of the interuser interference is the same for all users. It is proved that a sequence set corresponding to a binary linear code achieves Welch’s bound with equality if and only if the dual code contains no codewords of Hamming weight two. Transformations and combinations of sequence sets that preserve equality in Welch’s bound are given and used to illustrate the design and analysis of sequence sets for non-synchronous CDMA systems.
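
The equality condition (orthogonal, equi-energy columns) is easy to verify numerically. The Python sketch below builds M = N = 8 binary ±1 sequences as the rows of a Sylvester-Hadamard matrix, whose columns are orthogonal and equi-energy, and checks that the sum of squared inner products meets Welch's lower bound (ME)²/N exactly. It is an illustrative check only, not a construction taken from this chapter.

```python
import numpy as np

def sylvester_hadamard(k):
    """2^k x 2^k Hadamard matrix via the Sylvester construction."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

X = sylvester_hadamard(3)            # 8 binary (+/-1) sequences of length 8
M, N = X.shape
E = N                                # energy of each +/-1 sequence of length N

gram = X @ X.T                       # all pairwise inner products
total = np.sum(gram.astype(float) ** 2)

welch_lower_bound = (M * E) ** 2 / N
print(total, welch_lower_bound)      # equal: columns are orthogonal, equi-energy
```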

Book ChapterDOI
01 Jan 1993
TL;DR: The paper is presented in two parts: the first, appearing here, summarizes the major results and treats the case of high transmission rates in detail; the second, to appear in the subsequent issue, treats the case of low transmission rates.
Abstract: New lower bounds are presented for the minimum error probability that can be achieved through the use of block coding on noisy discrete memoryless channels. Like previous upper bounds, these lower bounds decrease exponentially with the block length N. The coefficient of N in the exponent is a convex function of the rate. From a certain rate of transmission up to channel capacity, the exponents of the upper and lower bounds coincide. Below this particular rate, the exponents of the upper and lower bounds differ, although they approach the same limit as the rate approaches zero. Examples are given and various incidental results and techniques relating to coding theory are developed. The paper is presented in two parts: the first, appearing here, summarizes the major results and treats the case of high transmission rates in detail; the second, to appear in the subsequent issue, treats the case of low transmission rates.
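
The best known bound of this type is the sphere-packing bound; in one common formulation (schematic, with the o(1) terms depending on the exact statement):

$$
P_e(N,R) \;\ge\; \exp\{-N\,[E_{sp}(R) + o(1)]\},\qquad
E_{sp}(R) \;=\; \sup_{\rho>0}\Big[\max_{P} E_0(\rho,P) - \rho R\Big],
$$
$$
E_0(\rho,P) \;=\; -\ln \sum_{y}\Big(\sum_{x} P(x)\,W(y\mid x)^{1/(1+\rho)}\Big)^{1+\rho},
$$

where W is the channel transition matrix and P an input distribution.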

Journal ArticleDOI
TL;DR: The simple head model, the low power noise and the few strong dipoles were all selected in this study as optimistic conditions to establish possibly fundamental resolution limits for any localization effort.

Journal ArticleDOI
TL;DR: An exponential lower bound on the size of bounded-depth Frege proofs for the pigeonhole principle (PHP) is proved, and an Ω(log log n)-depth lower bound for any polynomial-size Frege proof of the pigeonhole principle is obtained.
Abstract: In this paper we prove an exponential lower bound on the size of bounded-depth Frege proofs for the pigeonhole principle (PHP). We also obtain an Ω(log log n)-depth lower bound for any polynomial-size Frege proof of the pigeonhole principle. Our theorem nearly completes the search for the exact complexity of the PHP, as S. Buss has constructed polynomial-size, log n-depth Frege proofs for the PHP. The main lemma in our proof can be viewed as a general Hastad-style Switching Lemma for restrictions that are partial matchings. Our lower bounds for the pigeonhole principle improve on previous superpolynomial lower bounds.

Journal ArticleDOI
TL;DR: In this paper, the authors prove four results on randomized incremental constructions (RICs): an analysis of the expected behavior under insertions and deletions, a fully dynamic data structure for convex hull maintenance in arbitrary dimensions, a tail estimate for the space complexity of RICs, and a lower bound on the complexity of a game related to RICs.
Abstract: We prove four results on randomized incremental constructions (RICs): an analysis of the expected behavior under insertions and deletions, a fully dynamic data structure for convex hull maintenance in arbitrary dimensions, a tail estimate for the space complexity of RICs, and a lower bound on the complexity of a game related to RICs.

Journal ArticleDOI
TL;DR: This paper considers two previously proposed measures and gives two computationally efficient multiple alignment methods whose deviation from the optimal value is guaranteed to be less than a factor of two, together with a related randomized method which gives, with high probability, multiple alignments with fairly small error bounds.
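
A classical route to a factor-two guarantee for sum-of-pairs multiple alignment is the center-star heuristic: choose the string with the smallest total distance to the others and align everything against it. Whether this coincides with the methods in the paper is an assumption on my part; the Python sketch below shows only the center-selection step.

```python
def edit_distance(a, b):
    """Standard dynamic-programming (Levenshtein) edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution / match
        prev = cur
    return prev[-1]

def center_string(strings):
    """String minimizing the sum of edit distances to all the others."""
    totals = [sum(edit_distance(s, t) for t in strings) for s in strings]
    return strings[totals.index(min(totals))]

seqs = ["ACGTAC", "ACGAC", "AGGTAC", "ACTTAC"]
print(center_string(seqs))
```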

Proceedings ArticleDOI
01 Sep 1993
TL;DR: It is shown that for any randomized broadcast protocol for radio networks, there exists a network in which the expected time to broadcast a message is Ω(D log(N/D)), where D is the diameter of the network and N is the number of nodes.
Abstract: We show that for any randomized broadcast protocol for radio networks, there exists a network in which the expected time to broadcast a message is Ω(D log(N/D)), where D is the diameter of the network and N is the number of nodes. This implies a tight lower bound of Ω(D log N) for all D ≤ N^(1-ε), where ε > 0 is any constant.

Journal ArticleDOI
TL;DR: The theory of the Cramer-Rao lower bound (CRLB) and maximum-likelihood (ML) estimators is summarized in the context of a heterodyne lidar and the asymptotic bounds developed in the radar literature should not be used as approximations for the correct expression in lidar applications at intermediate signal levels.
Abstract: The theory of the Cramer-Rao lower bound (CRLB) and maximum-likelihood (ML) estimators is summarized in the context of a heterodyne lidar. Numerical experiments are described that indicate the scaling of this CRLB with parameters such as the signal bandwidth and the level of noise. This CRLB is also compared with the CRLB of a highly idealized noiseless direct detection system using photon counting. It is found that the asymptotic bounds developed in the radar literature for the heterodyne CRLB should not be used as approximations for the correct expression in lidar applications at intermediate signal levels. Moreover, the variance of the ML estimator may be greater or even less than the heterodyne CRLB, depending on the mechanism leading to the departure from the bound.
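
For a scalar parameter θ estimated from data y with likelihood p(y; θ), the bound referred to above is

$$
\operatorname{var}(\hat\theta) \;\ge\; \frac{1}{I(\theta)},\qquad
I(\theta) \;=\; -\,\mathbb{E}\!\left[\frac{\partial^2 \ln p(y;\theta)}{\partial\theta^2}\right],
$$

for any unbiased estimator; the heterodyne-lidar bounds discussed above come from evaluating the Fisher information I(θ) under the corresponding signal-plus-noise model.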

Proceedings ArticleDOI
02 Jun 1993
TL;DR: In this paper, a new design technique for a robust model predictive controller using an uncertainty description expressed in the time-domain is proposed using a set of Finite Impulse Response (FIR) models, and necessary and sufficient conditions for asymptotic stability are stated.
Abstract: A new design technique for a robust model predictive controller is proposed using an uncertainty description expressed in the time-domain. Robust stability of the resulting closed-loop system is guaranteed for a set of Finite Impulse Response (FIR) models. Both necessary and sufficient conditions for asymptotic stability are stated. If the uncertainty is described as lower and upper bounds on impulse response coefficients, then the resulting optimization problem can be cast as a linear program of moderate size.
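
One reason interval bounds on impulse-response coefficients keep the optimization tractable is that the worst case over the model set can be taken coefficientwise, depending only on the sign of the corresponding input sample, so the resulting min-max constraints can be handled by a linear program. The Python sketch below only evaluates the worst-case output bounds for a fixed, known input window (made-up numbers, not the paper's formulation).

```python
import numpy as np

def output_bounds(h_lo, h_hi, u_past):
    """Tight lower/upper bounds on y = sum_i h_i * u_past[i]
    when each h_i ranges independently over [h_lo[i], h_hi[i]]."""
    h_lo, h_hi, u = map(np.asarray, (h_lo, h_hi, u_past))
    y_min = np.sum(np.where(u >= 0, h_lo * u, h_hi * u))
    y_max = np.sum(np.where(u >= 0, h_hi * u, h_lo * u))
    return y_min, y_max

# FIR model with uncertain coefficients and a given past input window.
h_lo = [0.4, 0.2, 0.05]
h_hi = [0.6, 0.3, 0.15]
u_past = [1.0, -0.5, 2.0]
print(output_bounds(h_lo, h_hi, u_past))
```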

Journal ArticleDOI
TL;DR: In this paper, the authors investigate conditions under which dilation occurs and study some of its implications in robust Bayesian inference and in the theory of upper and lower probabilities, and characterize dilation immune neighborhoods of the uniform measure.
Abstract: Suppose that a probability measure $P$ is known to lie in a set of probability measures $M$. Upper and lower bounds on the probability of any event may then be computed. Sometimes, the bounds on the probability of an event $A$ conditional on an event $B$ may strictly contain the bounds on the unconditional probability of $A$. Surprisingly, this might happen for every $B$ in a partition $\mathscr{B}$. If so, we say that dilation has occurred. In addition to being an interesting statistical curiosity, this counterintuitive phenomenon has important implications in robust Bayesian inference and in the theory of upper and lower probabilities. We investigate conditions under which dilation occurs and we study some of its implications. We characterize dilation immune neighborhoods of the uniform measure.
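
A standard toy example of dilation (not necessarily one studied in the paper): let X be a fair coin, let Z be an independent coin whose bias is completely unknown, and observe Y = X XOR Z. Unconditionally P(X = 1) = 1/2 whatever the bias of Z, yet conditioning on either value of Y stretches the probability bounds out toward [0, 1]. The Python sketch below sweeps the unknown bias to exhibit this.

```python
import numpy as np

def conditional_prob_x1(p, y):
    """P(X = 1 | Y = y) when X ~ Bernoulli(1/2), Z ~ Bernoulli(p),
    X and Z independent, and Y = X xor Z."""
    pz = lambda z: p if z == 1 else 1 - p
    num = 0.5 * pz(y ^ 1)                # P(X = 1, Y = y)
    den = 0.5 * pz(y ^ 1) + 0.5 * pz(y)  # P(Y = y)
    return num / den

ps = np.linspace(0.001, 0.999, 999)      # sweep the unknown bias of Z
for y in (0, 1):
    vals = [conditional_prob_x1(p, y) for p in ps]
    print(y, min(vals), max(vals))       # bounds approach [0, 1] for each y
# Compare: the unconditional probability P(X = 1) is exactly 1/2 for every p.
```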

Journal ArticleDOI
TL;DR: Lower and upper bounds are derived for the decay and transitions of quantum states, evolving under a time-dependent Hamiltonian, in terms of the energy uncertainty of the initial and final state.
Abstract: Lower and upper bounds are derived for the decay and transitions of quantum states, evolving under a time-dependent Hamiltonian, in terms of the energy uncertainty of the initial and final state. The bounds are simultaneously a rigorous version of Fermi's golden rule and of the time-energy uncertainty relation. They are sharp, refer to short times, and are compared with recent long-time results for time-independent Hamiltonians. Illustrations for tunneling systems, laser-driven processes, and neutron interferometry in time-dependent magnetic fields are given.

Journal ArticleDOI
TL;DR: An explicit expression is given for the worst-case H_2 norm when the disturbance system is allowed to vary over all nonlinear, time-varying and possibly noncausal systems with bounded L_2-induced operator norm.
Abstract: The worst-case effect of a disturbance system on the H_2 norm of the system is analyzed. An explicit expression is given for the worst-case H_2 norm when the disturbance system is allowed to vary over all nonlinear, time-varying and possibly noncausal systems with bounded L_2-induced operator norm. An upper bound for this measure, which is equal to the worst-case H_2 norm if the exogenous input is scalar, is defined. Some further analysis of this upper bound is done, and a method to design controllers which minimize this upper bound over all robustly stabilizing controllers is given. The latter is done by relating this upper bound to a parameterized version of the auxiliary cost function studied in the literature.

Journal ArticleDOI
TL;DR: Lower bounds on the complexity of any implementation of Carter-Wegman universal hashing are given: a quadratic AT² bound for VLSI implementation; an Ω(log n) parallel time bound on a CREW PRAM; and exponential size for constant-depth circuits.

Journal ArticleDOI
TL;DR: In this paper, the exact upper and lower bounds on the information entropies of the three spin-1/2 operators S_x, S_y, S_z are derived, for a set of more than two observables.

Journal ArticleDOI
TL;DR: A mixed H2/H∞ control problem for discrete-time systems is considered for both state-feedback and output-feedback cases, and it is shown that these problems can be effectively solved by reducing them to convex programming problems.

Journal ArticleDOI
01 Apr 1993
TL;DR: A sliding-mode control algorithm combined with an adaptive scheme, which is used to estimate the unknown parameter bounds, is developed for the trajectory control of robot manipulators; the robustness analysis shows that, in the presence of uncertainties that are assumed to be unbounded and rapidly varying, the closed-loop system can still be stabilized.
Abstract: A sliding-mode control algorithm combined with an adaptive scheme, which is used to estimate the unknown parameter bounds, is developed for the trajectory control of robot manipulators. The major contribution of this methodology lies in the use of a special matrix, called the regressor, which makes it possible to isolate the unknown parameters from the robotic dynamics. Based on the upper bounds of those unknown parameters, which are estimated by a simple adaptive law, the proposed VSS (variable-structure-system) controller guarantees the stability of the closed-loop system. The robustness analysis shows that in the presence of the uncertainties, which are assumed to be unbounded and rapidly varying, the closed-loop system can still be stabilized. Chattering is reduced by using the boundary layer technique. Simulation results show the validity of the proposed algorithm.
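
A minimal illustration of the two ingredients highlighted above — adaptive estimation of an uncertainty bound and a boundary layer to reduce chattering — on a single-degree-of-freedom toy plant in Python. This is a generic adaptive sliding-mode sketch with made-up gains and disturbance, not the paper's manipulator controller with its regressor matrix.

```python
import numpy as np

# Toy plant: x_ddot = u + d(t), with a disturbance d of unknown magnitude.
lam, eta, gamma, phi = 2.0, 0.5, 5.0, 0.05   # surface slope, margin, adaptation gain, boundary layer
dt, T = 1e-3, 10.0

def sat(s, phi):
    return np.clip(s / phi, -1.0, 1.0)       # smooth switching inside the boundary layer

x, xdot, k_hat = 1.0, 0.0, 0.0               # initial state and bound estimate
xd = lambda t: np.sin(t)                     # desired trajectory and its derivatives
xd_dot = lambda t: np.cos(t)
xd_ddot = lambda t: -np.sin(t)
d = lambda t: 1.5 * np.sign(np.sin(3 * t))   # unknown, bounded disturbance

for i in range(int(T / dt)):
    t = i * dt
    e, edot = x - xd(t), xdot - xd_dot(t)
    s = edot + lam * e                        # sliding variable
    k_hat += gamma * abs(s) * dt              # adapt the uncertainty-bound estimate
    u = xd_ddot(t) - lam * edot - (k_hat + eta) * sat(s, phi)
    xddot = u + d(t)
    xdot += xddot * dt                        # explicit Euler integration
    x += xdot * dt

print(abs(x - xd(T)), k_hat)   # small tracking error; k_hat grows until it dominates the disturbance
```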

Journal ArticleDOI
TL;DR: In this article, it was shown that there is no constant ε > 0 for which this problem can be approximated within a factor of n^(1-ε) in polynomial time, unless P = NP.

Proceedings ArticleDOI
19 Jun 1993
TL;DR: It is proved that the satisfiability problem for set constraints is complete for NEXPTIME, and that this problem has a lower bound of NTIME(c^(n/log n)) for some c > 0.
Abstract: The authors investigate the relationship between set constraints and the monadic class of first-order formulas and show that set constraints are essentially equivalent to the monadic class. From this equivalence, they infer that the satisfiability problem for set constraints is complete for NEXPTIME. More precisely, it is proved that this problem has a lower bound of NTIME(c^(n/log n)) for some c > 0. The relationship between set constraints and the monadic class also gives decidability and complexity results for certain practically useful extensions of set constraints, in particular "negative" projections and subterm equality tests.

Journal Article
TL;DR: It is shown that there is no constant ε > 0 for which this problem can be approximated within a factor of n^(1-ε) in polynomial time unless P = NP, which is the strongest lower bound the authors are aware of for polynomial-time approximation of an unweighted NP-complete graph problem.
Abstract: We consider the problem of approximating the size of a minimum non-extendible independent set of a graph, also known as the minimum dominating independence number. We strengthen a result of Irving to show that there is no constant ε > 0 for which this problem can be approximated within a factor of n^(1-ε) in polynomial time, unless P = NP. This is the strongest lower bound we are aware of for polynomial-time approximation of an unweighted NP-complete graph problem.