
Showing papers on "Upper and lower bounds published in 1987"


Journal ArticleDOI
TL;DR: It is argued here that in universes that do not recollapse, the only bound on the cosmological constant Λ is that it should not be so large as to prevent the formation of gravitationally bound states, and it turns out that the bound is quite large.
Abstract: In recent cosmological models, there is an "anthropic" upper bound on the cosmological constant Λ. It is argued here that in universes that do not recollapse, the only such bound on Λ is that it should not be so large as to prevent the formation of gravitationally bound states. It turns out that the bound is quite large. A cosmological constant that is within 1 or 2 orders of magnitude of its upper bound would help with the missing-mass and age problems, but may be ruled out by galaxy number counts. If so, we may conclude that anthropic considerations do not explain the smallness of the cosmological constant.

1,132 citations


Journal ArticleDOI
TL;DR: In this article, the stability of discrete-time sliding mode control systems is investigated and a new sliding mode condition is suggested, and it is shown that the control must have upper and lower bounds.
Abstract: The stability of discrete-time sliding mode control systems is investigated and a new sliding mode condition is suggested. It is shown that the control must have upper and lower bounds. A numerical example is discussed as an illustration.

730 citations


Book
01 Jan 1987
TL;DR: The techniques described in "Computational Limitations for Small Depth Circuits" can be used to demonstrate almost optimal lower bounds on the size of small depth circuits computing several different functions, such as parity and majority.
Abstract: Proving lower bounds on the amount of resources needed to compute specific functions is one of the most active branches of theoretical computer science. Significant progress has been made recently in proving lower bounds in two restricted models of Boolean circuits. One is the model of small depth circuits, and in this book Johan Torkel Hastad has developed very powerful techniques for proving exponential lower bounds on the size of small depth circuits computing functions. The techniques described in "Computational Limitations for Small Depth Circuits" can be used to demonstrate almost optimal lower bounds on the size of small depth circuits computing several different functions, such as parity and majority. The main tool used in the proof of the lower bounds is a lemma, stating that any AND of small fanout OR gates can be converted into an OR of small fanout AND gates with high probability when random values are substituted for the variables. Hastad also applies this tool to relativized complexity, and discusses in great detail the computation of parity and majority in small depth circuits. Contents: Introduction. Small Depth Circuits. Outline of Lower Bound Proofs. Main Lemma. Lower Bounds for Small Depth Circuits. Functions Requiring Depth k to Have Small Circuits. Applications to Relativized Complexity. How Well Can We Compute Parity in Small Depth? Is Majority Harder than Parity? Conclusions. Johan Hastad is a postdoctoral fellow in the Department of Mathematics at MIT. "Computational Limitations of Small Depth Circuits" is a winner of the 1986 ACM Doctoral Dissertation Award.

589 citations


Proceedings ArticleDOI
12 Oct 1987
TL;DR: The problem of finding a sequence of commanded velocities which is guaranteed to move the point to the goal is shown to be non-deterministic exponential time hard, making it the first provably intractable problem in robotics.
Abstract: We present new techniques for establishing lower bounds in robot motion planning problems. Our scheme is based on path encoding and uses homotopy equivalence classes of paths to encode state. We first apply the method to the shortest path problem in 3 dimensions. The problem is to find the shortest path under an L_p metric (e.g., the Euclidean metric) between two points amid polyhedral obstacles. Although this problem has been extensively studied, there were no previously known lower bounds. We show that there may be exponentially many shortest path classes in single-source multiple-destination problems, and that the single-source single-destination problem is NP-hard. We use a similar proof technique to show that two dimensional dynamic motion planning with bounded velocity is NP-hard. Finally we extend the technique to compliant motion planning with uncertainty in control. Specifically, we consider a point in 3 dimensions which is commanded to move in a straight line, but whose actual motion may differ from the commanded motion, possibly involving sliding against obstacles. Given that the point initially lies in some start region, the problem of finding a sequence of commanded velocities which is guaranteed to move the point to the goal is shown to be non-deterministic exponential time hard, making it the first provably intractable problem in robotics.

575 citations



Journal ArticleDOI
TL;DR: An algorithm is presented for computing a column permutation Π and a QR-factorization of an m by n (m ≥ n) matrix A such that a possible rank deficiency of A will be revealed in the triangular factor R having a small lower right block.

525 citations


Journal ArticleDOI
TL;DR: A lower bound is provided for the regret associated with any uniformly good scheme, and a scheme which attains the lower bound for every configuration C is constructed; the lower bound is given explicitly in terms of the Kullback-Leibler number between pairs of distributions.
Abstract: At each instant of time we are required to sample a fixed number m \geq 1 out of N i.i.d. processes whose distributions belong to a family suitably parameterized by a real number \theta. The objective is to maximize the long run total expected value of the samples. Following Lai and Robbins, the learning loss of a sampling scheme corresponding to a configuration of parameters C = (\theta_{1},..., \theta_{N}) is quantified by the regret R_{n}(C). This is the difference between the maximum expected reward at time n that could be achieved if C were known and the expected reward actually obtained by the sampling scheme. We provide a lower bound for the regret associated with any uniformly good scheme, and construct a scheme which attains the lower bound for every configuration C. The lower bound is given explicitly in terms of the Kullback-Leibler number between pairs of distributions. Part II of this paper considers the same problem when the reward processes are Markovian.

407 citations
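The Lai-Robbins form of this bound can be made concrete for Bernoulli reward processes (a special case chosen here purely for illustration; the paper treats a general one-parameter family and m ≥ 1 samples per instant). A minimal sketch:

```python
import math

def kl_bernoulli(p, q):
    """Kullback-Leibler number K(p, q) between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def regret_lower_bound(means, n):
    """Asymptotic lower bound on the regret R_n(C) for m = 1: every uniformly
    good scheme incurs at least (mu* - mu_i) / K(mu_i, mu*) * log(n) regret
    from each suboptimal process i."""
    mu_star = max(means)
    return sum((mu_star - mu) / kl_bernoulli(mu, mu_star) * math.log(n)
               for mu in means if mu < mu_star)

# The bound grows logarithmically in the horizon n.
b1 = regret_lower_bound([0.9, 0.6, 0.5], 100)
b2 = regret_lower_bound([0.9, 0.6, 0.5], 10_000)
```

Doubling the exponent of the horizon doubles the bound, which is the sense in which the achievable regret is logarithmic in n.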


Journal ArticleDOI
TL;DR: An expression for the output of the receiver is obtained for the case of random signature sequences, and the corresponding characteristic function is determined to study the density function of the multiple-access interference and to determine arbitrarily tight upper and lower bounds on the average probability of error.
Abstract: Binary direct-sequence spread-spectrum multiple-access communications, an additive white Gaussian noise channel, and a coherent correlation receiver are considered. An expression for the output of the receiver is obtained for the case of random signature sequences, and the corresponding characteristic function is determined. The expression is used to study the density function of the multiple-access interference and to determine arbitrarily tight upper and lower bounds on the average probability of error. The bounds, which are obtained without making a Gaussian approximation, are compared to results obtained using a Gaussian approximation. The effects of transmitter power, the length of the signature sequences, and the number of interfering transmitters are illustrated. Each transmitter is assumed to have the same power, although the general approach can accommodate the case of transmitters with unequal powers.

402 citations
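For comparison, the Gaussian approximation the abstract refers to has a well-known closed form for K equal-power users and length-N random signature sequences. The sketch below implements that approximation only (not the paper's exact bounds), treating the multiple-access interference as extra Gaussian noise:

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_gaussian_approx(K, N, eb_n0_db):
    """Standard Gaussian approximation to the average error probability for
    binary DS spread-spectrum multiple access: the interference from K - 1
    equal-power users with length-N random signatures is modeled as added
    Gaussian noise of variance (K - 1) / (3N)."""
    eb_n0 = 10.0 ** (eb_n0_db / 10.0)
    return qfunc(((K - 1) / (3.0 * N) + 1.0 / (2.0 * eb_n0)) ** -0.5)
```

With a single transmitter (K = 1) the expression reduces to the usual BPSK error probability Q(√(2Eb/N0)); the paper's arbitrarily tight bounds quantify how far this approximation can stray from the true error probability.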


Journal ArticleDOI
TL;DR: A branch and bound algorithm for project scheduling with resource constraints based on the idea of using disjunctive arcs for resolving conflicts that are created whenever sets of activities have to be scheduled whose total resource requirements exceed the resource availabilities in some periods is described.

387 citations


Journal ArticleDOI
TL;DR: The arguments of Razborov are modified to obtain exponential lower bounds for circuits, and the best lower bound for an NP function of n variables is exp(Ω(n^{1/4}·(log n)^{1/2})), improving a recent result of exp(Ω(n^{1/8−ε})) due to Andreev.
Abstract: Recently, Razborov obtained superpolynomial lower bounds for monotone circuits that detect cliques in graphs. In particular, Razborov showed that detecting cliques of size s in a graph with m vertices requires monotone circuits of size Ω(m^s/(log m)^{2s}) for fixed s, and size m^{Ω(log m)} for s = ⌊m/4⌋. In this paper we modify the arguments of Razborov to obtain exponential lower bounds for circuits. In particular, detecting cliques of size (1/4)(m/log m)^{2/3} requires monotone circuits of size exp(Ω((m/log m)^{1/3})). For fixed s, any monotone circuit that detects cliques of size s requires Ω((m/(log m)^2)^s) AND gates. We show that even a very rough approximation of the maximum clique of a graph requires superpolynomial-size monotone circuits, and give lower bounds for some Boolean functions. Our best lower bound for an NP function of n variables is exp(Ω(n^{1/4}·(log n)^{1/2})), improving a recent result of exp(Ω(n^{1/8−ε})) due to Andreev.

382 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the condition number κ satisfies one or both of the differential inequalities m·κ² ≤ ∥Dκ∥ ≤ M·κ², where ∥Dκ∥ is the norm of the gradient of κ.
Abstract: The condition number of a problem measures the sensitivity of the answer to small changes in the input. We call the problem ill-posed if its condition number is infinite. It turns out that for many problems of numerical analysis, there is a simple relationship between the condition number of a problem and the shortest distance from that problem to an ill-posed one: the shortest distance is proportional to the reciprocal of the condition number (or bounded by the reciprocal of the condition number). This is true for matrix inversion, computing eigenvalues and eigenvectors, finding zeros of polynomials, and pole assignment in linear control systems. In this paper we explain this phenomenon by showing that in all these cases, the condition number κ satisfies one or both of the differential inequalities m·κ² ≤ ∥Dκ∥ ≤ M·κ², where ∥Dκ∥ is the norm of the gradient of κ. The lower bound on ∥Dκ∥ leads to an upper bound 1/(mκ(x)) on the distance from x to the nearest ill-posed problem, and the upper bound on ∥Dκ∥ leads to a lower bound 1/(Mκ(x)) on the distance. The attraction of this approach is that it uses local information (the gradient of a condition number) to answer a global question: how far away is the nearest ill-posed problem? The above differential inequalities also have a simple interpretation: they imply that computing the condition number of a problem is approximately as hard as computing the solution of the problem itself. In addition to deriving many of the best known bounds for matrix inversion, eigendecompositions and polynomial zero finding, we derive new bounds on the distance to the nearest polynomial with multiple zeros and a new perturbation result on pole assignment.
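For matrix inversion the relationship is in fact exact: with the spectral norm, κ(A) = ‖A‖‖A⁻¹‖, and the 2-norm distance from A to the nearest singular matrix is the smallest singular value σ_min(A) = ‖A‖/κ(A). A small NumPy check (the example matrix is arbitrary, chosen only to be nearly singular):

```python
import numpy as np

# A nearly singular example matrix (chosen arbitrarily for illustration).
A = np.array([[1.0, 2.0],
              [1.0, 2.001]])

kappa = np.linalg.cond(A, 2)                        # spectral condition number
sigma_min = np.linalg.svd(A, compute_uv=False)[-1]  # 2-norm distance to the
                                                    # nearest singular matrix
norm_A = np.linalg.norm(A, 2)

# Distance to ill-posedness equals ||A|| times the reciprocal of the
# condition number: sigma_min = ||A|| / kappa.
assert np.isclose(sigma_min, norm_A / kappa)
```

The gradient inequalities in the abstract generalize exactly this picture to problems where the distance is only proportional to, or bounded by, 1/κ.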

Proceedings ArticleDOI
12 Oct 1987
TL;DR: A model of Hierarchical Memory with Block Transfer is introduced: like a random access machine, except that access to location x takes time f(x), and a block of consecutive locations can be copied from memory to memory, taking one unit of time per element after the initial access time.
Abstract: In this paper we introduce a model of Hierarchical Memory with Block Transfer (BT for short). It is like a random access machine, except that access to location x takes time f(x), and a block of consecutive locations can be copied from memory to memory, taking one unit of time per element after the initial access time. We first study the model with f(x) = x^α for 0 < α < 1. A tight bound of Θ(n log log n) is shown for many simple problems: reading each input, dot product, shuffle exchange, and merging two sorted lists. The same bound holds for transposing a √n × √n matrix; we use this to compute an FFT graph in optimal Θ(n log n) time. An optimal Θ(n log n) sorting algorithm is also shown. Some additional issues considered are: maintaining data structures such as dictionaries, DAG simulation, and connections with PRAMs. Next we study the model f(x) = x. Using techniques similar to those developed for the previous model, we show tight bounds of Θ(n log n) for the simple problems mentioned above, and provide a new technique that yields optimal lower bounds of Ω(n log²n) for sorting, computing an FFT graph, and for matrix transposition. We also obtain optimal bounds for the model f(x) = x^α with α > 1. Finally, we study the model f(x) = log x and obtain optimal bounds of Θ(n log* n) for the simple problems mentioned above and of Θ(n log n) for sorting, computing an FFT graph, and for some permutations.
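The model's two primitives can be sketched directly: a single access to location x costs f(x), while a block copy pays the access latency once plus one unit per element. Moving b consecutive far-away elements one at a time therefore costs about b·f(x), against f(x) + b for one block transfer. The toy cost functions below illustrate only this cost accounting, not the paper's recursive algorithms:

```python
def f(x, alpha=0.5):
    """Access time to location x under f(x) = x**alpha, 0 < alpha < 1."""
    return x ** alpha

def elementwise_move_cost(start, b, alpha=0.5):
    """Move b elements starting at `start` one at a time:
    every element pays the full access latency."""
    return sum(f(start + i, alpha) for i in range(b))

def block_move_cost(start, b, alpha=0.5):
    """Move the same b elements with one block transfer: pay the initial
    access time once, then one unit of time per element."""
    return f(start + b, alpha) + b

start, b = 1_000_000, 1_000
assert block_move_cost(start, b) < elementwise_move_cost(start, b)
```

It is this gap between per-element latency and amortized block transfer that the Θ(n log log n) and Θ(n log n) bounds quantify once the copying has to be organized across the entire memory hierarchy.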

Journal ArticleDOI
TL;DR: Results on the packet error probability for frequency-hop and direct-sequence spread spectrum systems are presented and it is shown that the assumption of independent errors out of the decoder also gives an upper bound.
Abstract: The use of convolutional coding in packet radio systems introduces problems in evaluating performance, since the errors out of the decoder are not independent. A new bound on the packet error probability out of a Viterbi decoder may be used to evaluate these systems. We present results on the packet error probability for frequency-hop and direct-sequence spread-spectrum systems. In addition, we show that the assumption of independent errors out of the decoder gives an upper bound that is not as tight as our new bound. Comparisons are made with frequency-hop systems using Reed-Solomon codes.

Journal ArticleDOI
TL;DR: Two corollaries to the result are (1) hidden lines can be removed in optimal O(n²) time, and (2) the portion of a 3-D polyhedron visible from a given interior point is constructible in optimal O(n²) time.
Abstract: An O(n²) hidden-surface removal algorithm is shown. This is an improvement over the previous best worst-case performance of O(n² log n). It has been established that the hidden-line and hidden-surface problems have an Ω(n²) worst-case lower bound, so the algorithm is optimal. However, the algorithm is not output-size sensitive. Two corollaries to the result are (1) hidden lines can be removed in optimal O(n²) time, and (2) the portion of a 3-D polyhedron visible from a given interior point is constructible in optimal O(n²) time.

Journal ArticleDOI
TL;DR: It is shown how a complete network of processors can deterministically simulate one step of a PRAM (the most convenient parallel model to program); a simple consequence of the upper bound is that an Ultracomputer (the currently feasible general-purpose parallel machine) can simulate one PRAM step in O((log n)²·log log n) steps.
Abstract: The power of shared memory in models of parallel computation is studied, and a novel distributed data structure that eliminates the need for shared memory without significantly increasing the run time of the parallel computation is described. More specifically, it is shown how a complete network of processors can deterministically simulate one PRAM step in O(log n/(log log n)²) time when both models use n processors and the size of the PRAM's shared memory is polynomial in n. (The best previously known upper bound was the trivial O(n).) It is established that this upper bound is nearly optimal, and it is proved that an on-line simulation of T PRAM steps by a complete network of processors requires Ω(T·log n/log log n) time. A simple consequence of the upper bound is that an Ultracomputer (the currently feasible general-purpose parallel machine) can simulate one step of a PRAM (the most convenient parallel model to program) in O((log n)²·log log n) steps.

Journal ArticleDOI
James Renegar
TL;DR: It is shown that with respect to a certain model of computation, the worst-case computational complexity of obtaining an ε-approximation either to one, or to each, zero of arbitrary f ∈ P_d(R) is Θ(log log(R/ε)); that is, both upper and lower bounds are proved.

Journal ArticleDOI
TL;DR: These results show that in each case the high signal-to-noise maximum-likelihood rules have a performance nearly equal to that of the maximum-likelihood rules over a wide range of practically interesting signal-to-noise ratios (SNRs).
Abstract: The problem of locating a periodically inserted frame synchronization pattern in random data for an M-ary digital communication system operating over the additive white Gaussian noise channel is considered. The optimum maximum-likelihood decision rule, high signal-to-noise approximate maximum-likelihood decision rule, and ordinary correlation decision rule for frame synchronization are derived for both coherent and noncoherent phase demodulation. A general lower bound on synchronization probability is derived for the coherent correlation rule. Monte Carlo computer simulations of all three decision rules, along with evaluations of the lower bound for the coherent correlation rule, were performed for the coherent MPSK, coherent and noncoherent M-ary orthogonal, and 16-QAM signaling schemes. These results show that in each case the high signal-to-noise maximum-likelihood rules have a performance nearly equal to that of the maximum-likelihood rules over a wide range of practically interesting signal-to-noise ratios (SNRs). These high-SNR decision rules also provide significant performance improvement over the simple correlation rules. Moreover, they are much simpler to implement than the maximum-likelihood decision rules and, in fact, are no more complex than the correlation rules.

Proceedings Article
01 Jan 1987
TL;DR: In this paper, the authors give a model-theoretic method for establishing the k-variable property, involving a restricted Ehrenfeucht-Fraïssé game in which each player has only k pebbles.
Abstract: A theory satisfies the k-variable property if every first-order formula is equivalent to a formula with at most k bound variables (possibly reused). Gabbay has shown that a model of temporal logic satisfies the k-variable property for some k if and only if there exists a finite basis for the temporal connectives over that model. We give a model-theoretic method for establishing the k-variable property, involving a restricted Ehrenfeucht-Fraïssé game in which each player has only k pebbles. We use the method to unify and simplify results in the literature for linear orders. We also establish new k-variable properties for various theories of bounded-degree trees, and in each case obtain tight upper and lower bounds on k. This gives the first finite basis theorems for branching-time models of temporal logic.

Journal ArticleDOI
TL;DR: In this paper, the spectral density function of the stochastic field takes a limiting shape, and the upper and lower bounds on the response variability are derived for statically determinate structures.
Abstract: Dealing with the issues of response variability for statically determinate structures, this study analyzes the response variability in two cases in which the spectral density function of the stochastic field takes limiting shapes. In these limiting cases, the spectral density, having a constant total area, concentrates sharply around the origin in one case and spreads thinly throughout in the other. Also, the present study derives the upper and lower bounds on the response variability. These results provide important physical as well as numerical insight into the response variability issue, whether we solve the problem by exact or numerical integration of equations of equilibrium or motion, or by other numerical methods. It is rather difficult to estimate experimentally the autocorrelation or spectral density function for the stochastic variation of material properties. In view of this, the upper bound results are particularly important, since the bounds derived here do not require knowledge of the autocorrelation function.

Journal ArticleDOI
TL;DR: Lower bounds on the asymptotic variance for regular distribution-free estimators of the parameters of the binary choice model and the censored regression (Tobit) model were derived in this paper.
Abstract: We derive lower bounds on the asymptotic variances for regular distribution-free estimators of the parameters of the binary choice model and the censored regression (Tobit) model. A distribution-free (or semiparametric) estimator is one that does not require any assumption about the distribution of the stochastic error term in the model, apart from regularity conditions. For the binary choice model, we obtain an explicit lower bound for the asymptotic variance for the slope parameters, or more generally the parameters of a nonlinear regression function in the underlying latent variable model, but we find that there is no regular semiparametric estimator of the constant term (identified by requiring the error distribution to have zero median). Lower bounds are also obtained under the further assumption that the error distribution is symmetric, and in this case there is a finite lower bound for the constant term too. Comparison of the bounds with those for the classical parametric problem shows the loss of information due to lack of a priori knowledge of the functional form of the error distribution. We give the conditions for equality of the parametric and semiparametric lower bounds (in which case adaptive estimation may be possible), both with and without the assumption of a symmetric error distribution. In general, adaptive estimation is not possible, but one special case where these conditions hold is when the regression function is linear and the explanatory variables have a multivariate normal distribution. The Tobit model considered here is the censored nonlinear regression model, with a fixed censoring point. We again give an explicit lower bound for the asymptotic variance for the regression parameters, this time including a constant term (if the error term has zero median). Comparison with the corresponding lower bound for the parametric case shows that adaptive estimation is in general not possible for this model.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the class of all unimodal densities defined on some interval of length $L$ and bounded by $H$ and study the minimax risk over this class, when they estimate using $n$ i.i.d. observations, the loss being measured by the $\mathbb{L}^1$ distance between the estimator and the true density.
Abstract: Let us consider the class of all unimodal densities defined on some interval of length $L$ and bounded by $H$; we shall study the minimax risk over this class, when we estimate using $n$ i.i.d. observations, the loss being measured by the $\mathbb{L}^1$ distance between the estimator and the true density. We shall prove that if $S = \operatorname{Log}(HL + 1)$, upper and lower bounds for the risk are of the form $C(S/n)^{1/3}$ and the ratio between those bounds is smaller than 44 when $S/n$ is smaller than $220^{-1}$.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the long-time behavior of solutions to the Ginzburg-Landau partial differential equation and showed that a finite-dimensional attractor captures all the solutions.

Journal ArticleDOI
TL;DR: It is shown that the problem is NP-hard but polynomially solvable in the preemptive case and several lower bounds are introduced through definition of a special class of graphs for which the maximum clique problem is shown to be polynomial.
Abstract: We consider a generalization of the fixed job schedule problem where a bound is imposed on the total working time of each processor. It is shown that the problem is NP-hard but polynomially solvable in the preemptive case. We introduce several lower bounds. One is determined through definition of a special class of graphs, for which the maximum clique problem is shown to be polynomial. Lower bounds and dominance criteria are exploited in a branch-and-bound algorithm for optimal solution of the problem. The effectiveness of the algorithm is analyzed through computational experiments.

Journal ArticleDOI
TL;DR: In this article, the upper bound method of limit analysis of perfect plasticity is applied to stability problems of slopes with a general nonlinear failure criterion, and a numerical procedure is suggested, which converts the complex system of differential equations to an initial value problem.
Abstract: The upper bound method of limit analysis of perfect plasticity is applied to stability problems of slopes with a general nonlinear failure criterion. Based on the upper bound method, a numerical procedure is suggested, which converts the complex system of differential equations to an initial value problem. Using this numerical procedure, an effective numerical method, called the inverse method, suitable for the solution of slope stability problems in soil mechanics with a general nonlinear failure criterion, is presented. A general nonlinear failure criterion for soils is also suggested, from which the effects of nonlinear failure parameters on the stability of slopes are discussed.

Journal ArticleDOI
TL;DR: In this article, the four-dimensional O(4) Φ⁴ scalar theory is investigated in the broken phase at different values of the quartic coupling λ, and the scalar mass, the field expectation value, and the wave-function renormalization constant are calculated.

Journal ArticleDOI
TL;DR: For every polynomial-time algorithm which gives an upper bound and a lower bound for the volume of a convex set K ⊆ R^d, the ratio between the two bounds is at least (cd/log d)^d for some convex set.
Abstract: For every polynomial time algorithm which gives an upper bound $\overline{vol}(K)$ and a lower bound $\underline{vol}(K)$ for the volume of a convex set $K \subseteq R^d$, the ratio $\overline{vol}(K)/\underline{vol}(K)$ is at least $(cd/\log d)^d$ for some convex set $K \subseteq R^d$.

Book ChapterDOI
13 Apr 1987
TL;DR: The period, the distribution of short patterns, and a lower bound for the linear complexity of the sequences generated by an ASG are determined, and the frequency of all short patterns as well as the autocorrelations turn out to be ideal.
Abstract: The alternating step generator (ASG) is a new generator of pseudo-random sequences which is closely related to the stop-and-go generator. It shares all the good properties of this latter generator without possessing its weaknesses. The ASG consists of three subgenerators K, M, and M̄. The main characteristic of its structure is that the output of one of the subgenerators, K, controls the clock of the two others, M and M̄. In the present contribution, we determine the period, the distribution of short patterns, and a lower bound for the linear complexity of the sequences generated by an ASG. The proof of the lower bound is greatly simplified by assuming that K generates a de Bruijn sequence. Under this and other not very restrictive assumptions, the period and the linear complexity are found to be proportional to the period of the de Bruijn sequence. Furthermore, the frequency of all short patterns as well as the autocorrelations turn out to be ideal. This means that the sequences generated by the ASG are provably secure against the standard attacks.
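The control structure can be shown in a toy implementation, with short LFSRs standing in for the subgenerators (register lengths, taps, and seeds below are arbitrary illustrations, not parameters from the paper; real instances would use much larger registers):

```python
def lfsr_step(state, taps, nbits):
    """One step of a Fibonacci LFSR: emit the low bit, shift in the feedback."""
    out = state & 1
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return out, (state >> 1) | (fb << (nbits - 1))

def asg(n, k=0b101, m1=0b0110, m2=0b11001):
    """Alternating step generator sketch: K's output bit selects which of the
    two subgenerators M (m1) and M-bar (m2) is clocked at each step; the
    keystream bit is the XOR of their current output bits."""
    bits = []
    b1, b2 = m1 & 1, m2 & 1
    for _ in range(n):
        kb, k = lfsr_step(k, (0, 1), 3)
        if kb:
            b1, m1 = lfsr_step(m1, (0, 1), 4)
        else:
            b2, m2 = lfsr_step(m2, (0, 2), 5)
        bits.append(b1 ^ b2)
    return bits
```

Because only one of M and M̄ advances per step, an attacker observing the keystream cannot tell which register moved; the paper's results on period, pattern distribution, and linear complexity formalize why this irregular clocking is hard to unwind.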

Journal ArticleDOI
Jorma Rissanen
TL;DR: A search for the stochastic complexity of the observed data, as the greatest lower bound with which the data can be encoded, represents a global maximum likelihood principle, which permits comparison of models regardless of the number of parameters in them.
Abstract: The search for the stochastic complexity of the observed data, as the greatest lower bound with which the data can be encoded, represents a global maximum likelihood principle which permits comparison of models regardless of the number of parameters in them. For important special classes, such as the Gaussian and the multinomial models, formulas for the stochastic complexity give new and powerful model selection criteria, while in the general case approximations can be computed with the MDL principle. Once a model is found with which the stochastic complexity is reached, there is nothing further to learn from the data with the proposed models. The basic notions are reviewed and numerical examples are given.

Journal ArticleDOI
TL;DR: Upper and lower bounds on the maximum diameter of the graph obtained by deleting k edges from a graph G with diameter D are proved, exact bounds of Θ(√k) and 2k + 2 are derived for directed graphs with D = 1 and D = 2, and several related problems are proved NP-complete.
Abstract: We consider the following problem: Given positive integers k and D, what is the maximum diameter of the graph obtained by deleting k edges from a graph G with diameter D, assuming that the resulting graph is still connected? For undirected graphs G we prove an upper bound of (k + 1)D and a lower bound of (k + 1)D − k for even D and of (k + 1)D − 2k + 2 for odd D ⩾ 3. For the special cases of k = 2 and k = 3, we derive the exact bounds of 3D − 1 and 4D − 2, respectively. For D = 2 we prove exact bounds of k + 2 and k + 3, for k ⩽ 4 and k = 6, and k = 5 and k ⩾ 7, respectively. For the special case of D = 1 we derive an exact bound on the resulting maximum diameter of order Θ(√k). For directed graphs G, the bounds depend strongly on D: for D = 1 and D = 2 we derive exact bounds of Θ(√k) and of 2k + 2, respectively, while for D ⩾ 3 the resulting diameter is in general unbounded in terms of k and D. Finally, we prove several related problems NP-complete.
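The undirected upper bound (k + 1)D is easy to verify exhaustively on a small example. The sketch below deletes every pair of edges (k = 2) from the 3-dimensional hypercube, which has diameter D = 3 and remains connected under any two deletions, and checks the bound; the choice of graph is ours, purely for illustration:

```python
from collections import deque
from itertools import combinations

def diameter(n, edges):
    """Diameter by BFS from every vertex; None if the graph is disconnected."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    worst = 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        if len(dist) < n:
            return None
        worst = max(worst, max(dist.values()))
    return worst

# 3-dimensional hypercube: 8 vertices, 12 edges, diameter 3.
n = 8
edges = [(u, u | (1 << b)) for u in range(n) for b in range(3)
         if not u & (1 << b)]
k, D = 2, diameter(n, edges)
assert D == 3

for removed in combinations(edges, k):
    rest = [e for e in edges if e not in removed]
    d = diameter(n, rest)
    if d is not None:                # still connected
        assert d <= (k + 1) * D      # the (k + 1) * D upper bound
```

The exact worst case for k = 2 is 3D − 1 by the abstract, so on this graph no deletion of two edges can push the diameter past 8, comfortably within the (k + 1)D = 9 bound checked here.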

Journal ArticleDOI
TL;DR: The present work establishes a direct connection, as suggested by Sears, Parr, and Dinur, between the quantum-mechanical kinetic energy and the information entropy in position space, and derives an upper bound to the information-entropy sum in complementary spaces.
Abstract: An uncertainty-type lower bound [I. Bialynicki-Birula and J. Mycielski, Commun. Math. Phys. 44, 129 (1975)] to the information-entropy sum in complementary spaces has recently been reformulated by Gadre et al. [Phys. Rev. A 32, 2602 (1985)] in terms of the respective one-particle probability densities. This bound has been exploited to derive rigorous upper as well as lower bounds to the information entropies and their sum in terms of the corresponding second moments of their distributions. Thus the present work establishes a direct connection, as suggested by Sears, Parr, and Dinur [Israel J. Chem. 19, 165 (1980)], between the quantum-mechanical kinetic energy and information entropy in position space. It has also been demonstrated that given at least one arbitrary moment-type constraint in each space, it is possible to derive an upper bound to the information entropy sum in complementary spaces.