
Showing papers on "Upper and lower bounds" published in 1989


Proceedings ArticleDOI
05 Dec 1989
TL;DR: An exact characterization of the ability of the rate monotonic scheduling algorithm to meet the deadlines of a periodic task set and a stochastic analysis which gives the probability distribution of the breakdown utilization of randomly generated task sets are presented.

Abstract: An exact characterization of the ability of the rate monotonic scheduling algorithm to meet the deadlines of a periodic task set is presented. In addition, a stochastic analysis which gives the probability distribution of the breakdown utilization of randomly generated task sets is presented. It is shown that as the task set size increases, the task computation times become of little importance, and the breakdown utilization converges to a constant determined by the task periods. For uniformly distributed tasks, a breakdown utilization of 88% is a reasonable characterization. A case is shown in which the average-case breakdown utilization reaches the worst-case lower bound of C.L. Liu and J.W. Layland (1973).
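As a quick illustration of the quantities discussed above, the following sketch (plain Python, with a hypothetical task set; not the paper's exact characterization, which is more involved) checks a task set against the classic Liu-Layland worst-case utilization bound n(2^(1/n) − 1):

def liu_layland_bound(n):
    """Worst-case utilization bound for n periodic tasks under
    rate monotonic scheduling (Liu & Layland, 1973)."""
    return n * (2 ** (1.0 / n) - 1)

def utilization(tasks):
    """tasks: list of (computation_time, period) pairs."""
    return sum(c / t for c, t in tasks)

tasks = [(1, 4), (1, 5), (2, 10)]          # hypothetical task set
u = utilization(tasks)                      # 0.25 + 0.20 + 0.20 = 0.65
bound = liu_layland_bound(len(tasks))       # 3*(2^(1/3)-1) ~ 0.7798

# If u <= bound the set is guaranteed schedulable. The paper shows the
# average-case "breakdown" utilization is typically ~88%, well above this
# worst-case bound (which tends to ln 2 ~ 0.693 as n grows).
print(f"U = {u:.3f}, bound = {bound:.4f}, schedulable: {u <= bound}")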

1,582 citations



Journal ArticleDOI
TL;DR: The results support the existence of long-range antiferromagnetic order in the ground state at half-filling and its absence at quarter-filling; evidence is found for an attractive effective d-wave pairing interaction near half-filling, but not for a phase transition to a superconducting state.
Abstract: We report on a numerical study of the two-dimensional Hubbard model and describe two new algorithms for the simulation of many-electron systems. These algorithms allow one to carry out simulations within the grand canonical ensemble at significantly lower temperatures than had previously been obtained and to calculate ground-state properties with fixed numbers of electrons. We present results for the two-dimensional Hubbard model with half- and quarter-filled bands. Our results support the existence of long-range antiferromagnetic order in the ground state at half-filling and its absence at quarter-filling. Results for the magnetic susceptibility and the momentum occupation along with an upper bound to the spin-wave spectrum are given. We find evidence for an attractive effective d-wave pairing interaction near half-filling but have not found evidence for a phase transition to a superconducting state.

609 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented Chen's results in a form that is easy to use and gave a multivariable extension that yields an upper bound on the total variation distance between a sequence of dependent indicator functions and a Poisson process with the same intensity.
Abstract: Convergence to the Poisson distribution, for the number of occurrences of dependent events, can often be established by computing only first and second moments, but not higher ones. This remarkable result is due to Chen (1975). The method also provides an upper bound on the total variation distance to the Poisson distribution, and succeeds in cases where third and higher moments blow up. This paper presents Chen's results in a form that is easy to use and gives a multivariable extension, which gives an upper bound on the total variation distance between a sequence of dependent indicator functions and a Poisson process with the same intensity. A corollary of this is an upper bound on the total variation distance between a sequence of dependent indicator variables and the process having the same marginals but independent coordinates.
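The flavor of the result can be seen numerically. The sketch below (illustrative only; the example and all parameters are my own, not from the paper) simulates a classic dependent-indicator setting, the number of fixed points of a random permutation, and measures its total variation distance to Poisson(1), the quantity the Chen-Stein method bounds from first and second moments alone:

import math
import random
from collections import Counter

def tv_distance_to_poisson(counts, n_samples, lam=1.0, support=40):
    pois = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(support)]
    emp = [counts.get(k, 0) / n_samples for k in range(support)]
    return 0.5 * sum(abs(p - q) for p, q in zip(pois, emp))

random.seed(0)
n, trials = 20, 100_000
counts = Counter()
for _ in range(trials):
    perm = random.sample(range(n), n)           # uniform random permutation
    counts[sum(i == p for i, p in enumerate(perm))] += 1  # fixed points

# The dependent indicators 1{perm[i] == i} have a count that is
# approximately Poisson(1); the empirical TV distance is small.
print(f"empirical TV distance to Poisson(1): {tv_distance_to_poisson(counts, trials):.4f}")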

522 citations


Journal ArticleDOI
TL;DR: In this paper, a lower bound of Ω((1/ε)ln(1/δ) + VCdim(C)/ε) was shown for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and ε and δ are the accuracy and confidence parameters.

Abstract: We prove a lower bound of Ω((1/ε)ln(1/δ) + VCdim(C)/ε) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and ε and δ are the accuracy and confidence parameters. This improves the previous best lower bound of Ω((1/ε)ln(1/δ) + VCdim(C)) and comes close to the known general upper bound of O((1/ε)ln(1/δ) + (VCdim(C)/ε)ln(1/ε)) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor.
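For concreteness, the bounds compare as follows (a small calculator; the constants hidden by Ω and O are omitted, so these are orders of magnitude rather than exact sample sizes):

import math

def lower_bound(eps, delta, vcdim):
    """Omega((1/eps) ln(1/delta) + VCdim/eps): examples *required*."""
    return (1 / eps) * math.log(1 / delta) + vcdim / eps

def upper_bound(eps, delta, vcdim):
    """O((1/eps) ln(1/delta) + (VCdim/eps) ln(1/eps)): sufficient for
    consistent algorithms -- a ln(1/eps) factor above the lower bound."""
    return (1 / eps) * math.log(1 / delta) + (vcdim / eps) * math.log(1 / eps)

print(lower_bound(0.05, 0.01, 10))   # ~292
print(upper_bound(0.05, 0.01, 10))   # ~691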

410 citations


Proceedings ArticleDOI
01 Feb 1989
TL;DR: New lower and upper bounds on the time per operation are proved to implement solutions to some familiar dynamic data structure problems including list representation, subset ranking, partial sums, and the set union problem.
Abstract: Dynamic data structure problems involve the representation of data in memory in such a way as to permit certain types of modifications of the data (updates) and certain types of questions about the data (queries). This paradigm encompasses many fundamental problems in computer science. The purpose of this paper is to prove new lower and upper bounds on the time per operation to implement solutions to some familiar dynamic data structure problems including list representation, subset ranking, partial sums, and the set union problem. The main features of our lower bounds are: They hold in the cell probe model of computation (A. Yao [18]) in which the time complexity of a sequential computation is defined to be the number of words of memory that are accessed. (The number of bits b in a single word of memory is a parameter of the model.) All other computations are free. This model is at least as powerful as a random access machine and allows for unusual representations of data, indirect addressing, etc. This contrasts with most previous lower bounds, which are proved in models (e.g., algebraic, comparison, pointer manipulation) that require restrictions on the way data is represented and manipulated. The lower bound method presented here can be used to derive amortized complexities, worst case per operation complexities, and randomized complexities.

The results occasionally provide (nearly tight) tradeoffs between the number R of words of memory that are read per operation, the number W of memory words rewritten per operation, and the size b of each word. For the problems considered here there is a parameter n that represents the size of the data set being manipulated, and for these problems b = log n is a natural register size to consider. By letting b vary, our results illustrate the effect of register size on time complexity. For instance, one consequence of the results is that for some of the problems considered here, increasing the register size from log n to polylog(n) only reduces the time complexity by a constant factor. On the other hand, decreasing the register size from log n to 1 increases time complexity by a log n factor for one of the problems we consider and only a log log n factor for some other problems.

The first two specific data structure problems for which we obtain bounds are: List Representation. This problem concerns the representation of an ordered list of at most n (not necessarily distinct) elements from the universe U = {1, 2,…, n}. The operations to be supported are report(k), which returns the kth element of the list; insert(k, u), which inserts element u into the list between the elements in positions k − 1 and k; and delete(k), which deletes the kth item. Subset Rank. This problem concerns the representation of a subset S of U = {1, 2,…, n}. The operations that must be supported are the updates "insert item j into the set" and "delete item j from the set" and the queries rank(j), which returns the number of elements in S that are less than or equal to j.

The natural word size for these problems is b = log n, which allows an item of U or an index into the list to be stored in one register. One simple solution to the list representation problem is to maintain a vector v whose kth entry contains the kth item of the list. The report operation can be done in constant time, but the insert and delete operations may take time linear in the length of the list. Alternatively, one could store the items of the list with each element having a pointer to its predecessor and successor in the list. This allows for constant time updates (given a pointer to the appropriate location), but requires linear cost for queries. This problem can be solved much more efficiently by use of balanced trees (such as AVL trees). When b = log n, the worst case cost per operation using AVL trees is O(log n). If instead b = 1, so that each bit access costs 1, then the AVL tree solution requires O(log² n) per operation. It is not hard to find similar upper bounds for the subset rank problem (the algorithms for this problem are actually simpler than AVL trees). The question is: are these upper bounds best possible? Our results show that the upper bounds for the case of log n bit registers are within a log log n factor of optimal. On the other hand, somewhat surprisingly, for the case of single bit registers there are implementations for both of these problems that run in time significantly faster than O(log² n) per operation.

Let CPROBE(b) denote the cell probe computational model with register size b. Theorem 1. If b ≤ (log n)^t for some t, then any CPROBE(b) implementation of either list representation or subset rank requires Ω(log n/log log n) amortized time per operation. Theorem 2. Subset rank and list representation have CPROBE(1) implementations with respective complexities O((log n)(log log n)) and O((log n)(log log n)²) per operation. Paul Dietz (personal communication) has found an implementation of list representation with log n bit registers that requires only O(log n/log log n) time per operation, and thus the result of Theorem 1 is best possible.

The lower bounds of Theorem 1 are derived from lower bounds for a third problem: Partial sums mod k. An array A[1],…, A[n] of integers mod k is to be represented. Updates are add(i, d), which implements A[i] ← A[i] + d; queries are sum(j), which returns Σ_{i≤j} A[i] (mod k). This problem is denoted PS(n, k). Our main lower bound theorems provide tradeoffs between the number of register rewrites and register reads as a function of n, k, and b. Two corollaries of these results are: Theorem 3. Any CPROBE(b) implementation of PS(n, 2) (partial sums mod 2) requires Ω(log n/(log log n + log b)) amortized time per operation, and for b ≥ log n there is an implementation that achieves this. In particular, if b = Θ((log n)^c) for some constant c, then the optimal time complexity of PS(n, 2) is Θ(log n/log log n). Theorem 4. Any CPROBE(1) implementation of PS(n, n) with single bit registers requires Ω((log n/log log n)²) amortized time per operation, and there is an implementation that achieves O(log² n) time per operation.

It can be shown that a lower bound for PS(n, 2) is also a lower bound for both list representation and subset rank (the details, which are not difficult, are omitted from this report), and thus Theorem 1 follows from Theorem 3. The results of Theorem 4 make an interesting contrast with those of Theorem 2. For the three problems, list representation, subset rank, and PS(n, k), there are standard algorithms that can be implemented on a CPROBE(log n) in time O(log n) per operation, and their implementations on CPROBE(1) require O(log² n) time. Theorem 4 says that for the problem PS(n, n) this algorithm is essentially best possible, while Theorem 2 says that for list representation and rank, the algorithm can be significantly improved. In fact, the rank problem can be viewed as a special case of PS(n, n) where the variables take on values in {0, 1}, and apparently this specialization is enough to reduce the complexity on a CPROBE(1) by a factor of log n/log log n, even though on a CPROBE(log n) the complexities of the two problems differ by no more than a log log n factor.

The third problem we consider is the set union problem. This problem concerns the design of a data structure for the on-line manipulation of sets in the following setting. Initially, there are n singleton sets {1}, {2},…, {n}, with i chosen as the name of the set {i}. Our data structure is required to implement two operations, Find(j) and Union(A, B, C). The operation Find(j) returns the name of the set containing j. The operation Union(A, B, C) combines the sets with names A and B into a new set named C. The names of the existing sets at any moment must be unique and chosen to be integers in the range from 1 to 2n. The sets existing at any time are disjoint and define a partition of the elements into equivalence classes. A well known data structure for the set union problem represents the sets as trees and stores the name of a set in the root of its corresponding tree. A Union operation is performed by attaching the root of the smaller set as a child of the root of the larger set (weight rule). A Find operation is implemented by following the path from the appropriate node to the root of the tree containing it, and then redirecting to the root the parent pointers of the nodes encountered along this path (path compression). From now on we consider sequences of Union and Find operations consisting of n − 1 Union operations and m Find operations with m ≥ n. Tarjan [14] demonstrated that the above algorithm requires time Θ(mα(m, n)), where α(m, n) is an inverse of Ackermann's function, to execute n − 1 Union and m Find operations. In particular, if m = Θ(n), then the running time is almost, but not quite, linear. Tarjan conjectured [14] that no linear time algorithm exists for the set union problem, and provided significant evidence in favor of this conjecture (which we discuss in the following section). We affirm Tarjan's conjecture in the CPROBE(log n) model. Theorem 5. Any CPROBE(log n) implementation of the set union problem requires Ω(mα(m, n)) time to execute m Find's and n − 1 Union's, beginning with n singleton sets. N. Blum [2] has given a log n/log log n algorithm (worst case time per operation) for the set union problem. This algorithm is also optimal in the CPROBE(polylog n) model. The following section provides further discussion of these results, Section 3 outlines our lower bound method, and Section 4 contains some proofs.
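For the partial-sums problem PS(n, k) above, the classical O(log n)-per-operation upper bound that the paper's lower bounds are measured against is achieved by a standard Fenwick (binary indexed) tree. A minimal sketch follows (illustrating the classical upper bound only; the paper's cell-probe constructions and lower bound arguments are far more involved):

class FenwickMod:
    """Partial sums mod k: add(i, d) and sum(j) in O(log n) word operations."""

    def __init__(self, n, k):
        self.n, self.k = n, k
        self.tree = [0] * (n + 1)      # 1-indexed

    def add(self, i, d):               # A[i] <- A[i] + d (mod k)
        while i <= self.n:
            self.tree[i] = (self.tree[i] + d) % self.k
            i += i & (-i)              # climb to the next responsible node

    def sum(self, j):                  # sum of A[1..j] (mod k)
        s = 0
        while j > 0:
            s = (s + self.tree[j]) % self.k
            j -= j & (-j)              # strip the lowest set bit
        return s

ps = FenwickMod(n=8, k=2)              # PS(8, 2): prefix parities
ps.add(3, 1); ps.add(5, 1)
print(ps.sum(4), ps.sum(8))            # 1 0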

396 citations


Journal ArticleDOI
TL;DR: A simplified maximum-likelihood Gauss-Newton algorithm which provides asymptotically efficient estimates of these parameters is proposed, and initial estimates for this algorithm are obtained by a variation of the overdetermined Yule-Walker method and a periodogram-based procedure.

Abstract: The problem of estimating the frequencies, phases, and amplitudes of sinusoidal signals is considered. A simplified maximum-likelihood Gauss-Newton algorithm which provides asymptotically efficient estimates of these parameters is proposed. Initial estimates for this algorithm are obtained by a variation of the overdetermined Yule-Walker method and a periodogram-based procedure. Use of the maximum-likelihood Gauss-Newton algorithm is not, however, limited to this particular initialization method. Some other possibilities for obtaining suitable initial estimates are briefly discussed. An analytical and numerical study of the shape of the likelihood function associated with the sinusoids-in-noise process reveals its multimodal structure and clearly shows the importance of the initialization procedure. Some numerical examples are presented to illustrate the performance of the proposed estimation procedure. A comparison with the performance corresponding to the Cramer-Rao lower bound is also presented, using a simple expression for the asymptotic Cramer-Rao bound covariance matrix derived in the paper.
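A minimal sketch of the two-stage idea, simplified to a single real sinusoid with a plain FFT-peak initializer (the paper's initialization uses an overdetermined Yule-Walker variant; all signal parameters below are illustrative):

import numpy as np

rng = np.random.default_rng(1)
N = 256
t = np.arange(N)
y = 2.0 * np.cos(0.31 * t) + 1.0 * np.sin(0.31 * t) + 0.5 * rng.standard_normal(N)

# Stage 1: periodogram-based initial frequency estimate (FFT peak).
spec = np.abs(np.fft.rfft(y)) ** 2
k = np.argmax(spec[1:]) + 1                # skip the DC bin
omega, a, b = 2 * np.pi * k / N, 0.0, 0.0

# Stage 2: Gauss-Newton refinement of y ~ a*cos(omega*t) + b*sin(omega*t).
for _ in range(20):
    c, s = np.cos(omega * t), np.sin(omega * t)
    r = y - (a * c + b * s)                            # residual
    J = np.column_stack([c, s, -a * t * s + b * t * c])  # df/d(a, b, omega)
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    a, b, omega = a + step[0], b + step[1], omega + step[2]

print(f"omega = {omega:.4f} (true 0.31), a = {a:.2f}, b = {b:.2f}")

The FFT peak lands within half a bin of the true frequency, inside the main lobe of the likelihood, which is exactly why an initialization of this quality matters for the multimodal cost surface the abstract describes.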

376 citations


Journal ArticleDOI
TL;DR: In this article, a free quantum particle living on a curved planar strip Ω of a fixed width d with Dirichlet boundary conditions is studied, and a lower bound on the critical width is obtained using the Birman-Schwinger technique.
Abstract: A free quantum particle living on a curved planar strip Ω of a fixed width d with Dirichlet boundary conditions is studied. It can serve as a model for electrons in thin films on a cylinder‐type substrate, or in a curved quantum wire. Assuming that the boundary of Ω is infinitely smooth and its curvature decays fast enough at infinity, it is proved that a bound state with energy below the first transversal mode exists for all sufficiently small d. A lower bound on the critical width is obtained using the Birman–Schwinger technique.

350 citations


Proceedings ArticleDOI
13 Dec 1989
TL;DR: The singular perturbation approximation technique for model reduction is related to the direct truncation technique when the system model to be reduced is stable, minimal, and internally balanced.

Abstract: The singular perturbation approximation technique for model reduction is related to the direct truncation technique if the system model to be reduced is stable, minimal, and internally balanced. It is shown that these two methods constitute two fully compatible model reduction techniques for a continuous-time system, and both methods yield a stable, minimal, and internally balanced reduced-order system with the same L∞-norm error bound on the reduction. Although the upper bound for both reductions is the same, the direct truncation method tends to have smaller errors at high frequencies and larger errors at low frequencies, whereas the singular perturbation approximation method displays the opposite character. It is also shown that a certain bilinear mapping not only preserves the balanced structure between a continuous-time system and an associated discrete-time system, but also preserves the slow singular perturbation approximation structure. Hence, the continuous-time results on the singular perturbation approximation of balanced systems are easily extended to the discrete-time case. Examples are used to show the compatibility of and the differences in the two reduction techniques for a balanced system.
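The shared bound mentioned above is simple to state: for a balanced realization with Hankel singular values σ1 ≥ … ≥ σn, both reductions to order r satisfy ‖G − G_r‖∞ ≤ 2(σ_{r+1} + … + σ_n). A toy computation (the singular values are hypothetical):

# Shared L-infinity error bound for direct truncation and singular
# perturbation approximation of a balanced system: twice the sum of the
# discarded Hankel singular values. Values below are hypothetical.
sigmas = [2.1, 0.9, 0.12, 0.03, 0.004]
r = 2                                   # keep a second-order model
bound = 2 * sum(sigmas[r:])
print(f"L-infinity error bound for order {r}: {bound:.3f}")   # 0.308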

327 citations


Book
31 Oct 1989
TL;DR: A textbook development of source coding theory, covering distortion-rate theory, rate-distortion functions and their Shannon lower bounds, high-rate quantization, and uniform quantization noise.

Abstract:
1 Information Sources: Probability Spaces; Random Variables and Vectors; Random Processes; Expectation; Ergodic Properties; Exercises.
2 Codes, Distortion, and Information: Basic Models of Communication Systems; Code Structures; Code Rate; Code Performance; Optimal Performance; Information (Information and Entropy Rates); Limiting Properties; Related Reading; Exercises.
3 Distortion-Rate Theory: Introduction; Distortion-Rate Functions; Almost Noiseless Codes; The Source Coding Theorem for Block Codes (Block Codes; A Coding Theorem); Synchronizing Block Codes; Sliding-Block Codes; Trellis Encoding; Exercises.
4 Rate-Distortion Functions: Basic Properties; The Variational Equations; The Discrete Shannon Lower Bound; The Blahut Algorithm; Continuous Alphabets; The Continuous Shannon Lower Bound; Vectors and Processes (The Wyner-Ziv Lower Bound; The Autoregressive Lower Bound; The Vector Shannon Lower Bound); Norm Distortion; Exercises.
5 High Rate Quantization: Introduction; Asymptotic Distortion; The High Rate Lower Bound; High Rate Entropy; Lattice Vector Quantizers; Optimal Performance; Comparison of the Bounds; Optimized VQ vs. Uniform Quantization; Quantization Noise; Exercises.
6 Uniform Quantization Noise: Introduction; Uniform Quantization; PCM Quantization Noise: Deterministic Inputs; Random Inputs and Dithering; Sigma-Delta Modulation; Two-Stage Sigma-Delta Modulation; Delta Modulation; Exercises.

315 citations


Journal ArticleDOI
TL;DR: In this paper, a technique for computing rigorous upper bounds on limit loads under conditions of plane strain is described, which assumes a perfectly plastic soil model and employs finite elements in conjunction with the upper bound theorem of classical plasticity theory.
Abstract: This paper describes a technique for computing rigorous upper bounds on limit loads under conditions of plane strain. The method assumes a perfectly plastic soil model, which is either purely cohesive or cohesive-frictional, and employs finite elements in conjunction with the upper bound theorem of classical plasticity theory. The computational procedure uses three-noded triangular elements with the unknown velocities as the nodal variables. An additional set of unknowns, the plastic multiplier rates, is associated with each element. Kinematically admissible velocity discontinuities are permitted along specified planes within the grid. The finite element formulation of the upper bound theorem leads to a classical linear programming problem where the objective function, which is to be minimized, corresponds to the dissipated power and is expressed in terms of the velocities and plastic multiplier rates. The unknowns are subject to a set of linear constraints arising from the imposition of the flow rule and velocity boundary conditions. It is shown that the upper bound optimization problem may be solved efficiently by applying an active set algorithm to the dual linear programming problem. Since the computed velocity field satisfies all the conditions of the upper bound theorem, the corresponding limit load is a strict upper bound on the true limit load. Other advantages include the ability to deal with complicated loading, complex geometry and a variety of boundary conditions. Several examples are given to illustrate the effectiveness of the procedure.

Journal ArticleDOI
TL;DR: A proof is given that the maximum number of separable regions (M) in the input space is a function of both H and input space dimension (d).
Abstract: Recent results indicate that the number of hidden nodes (H) in a feedforward neural net depends only on the number of input training patterns (T). There appear to be conjectures that H is on the order of T − 1 and of log2 T. A proof is given that the maximum number of separable regions (M) in the input space is a function of both H and the input space dimension (d). The authors also show that H = M − 1 and H = log2 M are special cases of that formulation. M defines a lower bound on T, the number of input patterns that may be used for training. Applications to some experiments are investigated.
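The standard count of regions produced by H hyperplanes in general position in d dimensions is M(H, d) = Σ_{k=0}^{d} C(H, k), which is consistent with both special cases quoted above; a small check (the paper's exact formulation may differ in detail):

from math import comb

def max_regions(H, d):
    # comb(H, k) is 0 for k > H, so the sum collapses to 2**H when d >= H
    return sum(comb(H, k) for k in range(d + 1))

# d = 1: M = H + 1, i.e. H = M - 1 hidden nodes.
print(max_regions(4, 1))            # 5
# d >= H: M = 2**H, i.e. H = log2(M).
print(max_regions(4, 10), 2 ** 4)   # 16 16
# A generic planar case for comparison.
print(max_regions(6, 2))            # 1 + 6 + 15 = 22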

Journal ArticleDOI
TL;DR: A general technique is presented for the efficient implementation of lattice operations such as greatest lower bound, least upper bound, and relative complementation, based on an encoding method that takes into account idiosyncrasies of the topology of the poset being encoded that are quite likely to occur in practice.

Abstract: Lattice operations such as greatest lower bound (GLB), least upper bound (LUB), and relative complementation (BUTNOT) are becoming more and more important in programming languages supporting object inheritance. We present a general technique for the efficient implementation of such operations based on an encoding method. The effect of the encoding is to plunge the given ordering into a boolean lattice of binary words, leading to an almost constant time complexity of the lattice operations. A first method is described based on a transitive closure approach. Then a more space-efficient method minimizing code-word length is described. Finally a powerful grouping technique called modulation is presented, which drastically reduces code space while keeping all three lattice operations highly efficient. This technique takes into account idiosyncrasies of the topology of the poset being encoded that are quite likely to occur in practice. All methods are formally justified. We see this work as an original contribution towards using semantic (viz., in this case, taxonomic) information in the engineering pragmatics of storage and retrieval of (viz., partially or quasi-ordered) information.
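A minimal sketch of the transitive-closure encoding idea (one bit per element over a hypothetical five-element hierarchy; the paper's contribution is making such code words much shorter than this):

# Each element's code word is the bitset of elements at or below it, so
# x <= y iff code(x) is a subset of code(y), and GLB is a single bitwise AND.
children = {"top": ["a", "b"], "a": ["c", "d"], "b": ["c"], "c": [], "d": []}
index = {x: i for i, x in enumerate(children)}
code = {}

def down_code(x):                  # bitset of x and everything below it
    if x not in code:
        bits = 1 << index[x]
        for ch in children[x]:
            bits |= down_code(ch)
        code[x] = bits
    return code[x]

for x in children:
    down_code(x)

def leq(x, y):                     # x <= y iff code(x) subset of code(y)
    return code[x] & ~code[y] == 0

def glb(x, y):
    meet = code[x] & code[y]       # in a lower semilattice this word decodes
    matches = [z for z in code if code[z] == meet]   # to a unique element
    return matches[0] if matches else None

print(leq("c", "a"), leq("a", "b"))   # True False
print(glb("a", "b"))                  # c

LUB and BUTNOT are the analogous OR and AND-NOT on code words; the paper treats the decoding and the code-shortening (modulation) in full generality.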

Journal ArticleDOI
TL;DR: In this article, an estimator design problem is considered which involves both L2 (least squares) and H∞ (worst-case frequency-domain) aspects, and the goal of the problem is to minimize an L2 state-estimation error criterion subject to a prespecified H∞ constraint on the state-estimation error.

Proceedings ArticleDOI
01 Nov 1989
TL;DR: A system that derives time bounds automatically, as a function of input size, using abstract interpretation is described; the semantics-based setting makes it possible to prove the correctness of the time bound function.

Abstract: One way to analyse programs is to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of the input. We describe a system to derive such time bounds automatically using abstract interpretation. The semantics-based setting makes it possible to prove the correctness of the time bound function. The system can analyse programs in a first-order subset of Lisp, and we show how the system can also be used to analyse programs in other languages.
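By hand, the kind of result such a system derives looks like this: extract a step-counting recurrence from a first-order list function and solve it to a closed form (a sketch under a hypothetical unit-cost model, with SymPy solving the recurrence; the paper's system does this automatically and provably):

from sympy import Function, rsolve, symbols

n = symbols("n", integer=True, nonnegative=True)
T = Function("T")

# Hypothetical cost model for a Lisp-style (append xs ys): one unit per
# cons plus the recursive call, so T(0) = 1 and T(n) = T(n-1) + 1.
closed_form = rsolve(T(n) - T(n - 1) - 1, T(n), {T(0): 1})
print(closed_form)   # n + 1: a linear worst-case time bound in the input size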

Journal ArticleDOI
TL;DR: In this article, a lower bound on the top quark mass of the Higgs boson was derived by numerically solving the renormalization group equations to two-loop order.

Journal ArticleDOI
TL;DR: In this paper, a variational principle is developed for the linearized driftkinetic, Fokker-Planck equation, from which both upper and lower bounds for neoclassical transport coefficients can be calculated for plasmas in three-dimensional toroidal confinement geometries.
Abstract: A variational principle is developed for the linearized drift‐kinetic, Fokker–Planck equation, from which both upper and lower bounds for neoclassical transport coefficients can be calculated for plasmas in three‐dimensional toroidal confinement geometries. These bounds converge monotonically with the increasing phase‐space dimensionality of the assumed trial function. This property may be used to identify those portions of phase space that make dominant contributions to the transport process. A computer code based on this principle has been developed that uses Fourier–Legendre expansions for the poloidal, toroidal, and pitch‐angle dependences of the distribution function. Numerical calculations of transport coefficients for a plasma in the TJ‐II flexible heliac [Nucl. Fusion 28, 157 (1988)] are used to demonstrate the application of this procedure.

Journal ArticleDOI
TL;DR: In this paper, a simple queueing system, known as the fork-join queue, is considered with basic performance measure defined as the delay between the fork and join dates, and simple lower and upper bounds are derived for some of the statistics of this quantity.
Abstract: A simple queueing system, known as the fork-join queue, is considered with basic performance measure defined as the delay between the fork and join dates. Simple lower and upper bounds are derived for some of the statistics of this quantity. They are obtained, in both transient and steady-state regimes, by stochastically comparing the original system to other queueing systems with a structure simpler than the original system, yet with identical stability characteristics. In steady-state, under renewal assumptions, the computation reduces to standard GI/GI/1 calculations and the bounds constitute a first sizing-up of system performance. These bounds can also be used to show that for the homogeneous fork-join queue system, under these assumptions, the moments of the system response time grow logarithmically in the number of parallel processors provided the service time distribution has rational Laplace-Stieltjes transform. The bounding arguments combine ideas from the theory of stochastic ordering with the notion of associated random variables, and are of independent interest for the study of various other queueing systems with synchronization constraints. The paper is an abridged version of a more complete report on the matter [6].
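The logarithmic-growth claim is easy to observe by simulation. The sketch below (illustrative parameters of my choosing: Poisson arrivals, exponential service, utilization 0.5) estimates the mean fork-to-join delay for K parallel FCFS servers:

import numpy as np

rng = np.random.default_rng(0)

def fork_join_mean_delay(K, lam=0.5, mu=1.0, n_jobs=20_000):
    arrivals = np.cumsum(rng.exponential(1 / lam, n_jobs))
    done = np.zeros(K)                      # next-free time of each server
    join = np.empty(n_jobs)
    for j, a in enumerate(arrivals):
        svc = rng.exponential(1 / mu, K)    # each job forks into K tasks
        done = np.maximum(done, a) + svc    # per-server FCFS (Lindley step)
        join[j] = done.max()                # the join waits for the slowest
    return float(np.mean(join - arrivals))

# Mean delay grows roughly like log K, as the moment bounds predict.
for K in (1, 2, 4, 8, 16):
    print(K, round(fork_join_mean_delay(K), 2))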

Journal ArticleDOI
01 Aug 1989-Networks
TL;DR: Analysis of the scheduling problem for freight vehicles assigned to various different depots considers the NP-hard multiple depot case in which, in addition, one has to assign vehicles to depots; a strong dominance procedure derived from new dominance criteria is described.

Abstract: This article describes analyses carried out to solve the scheduling problem for freight vehicles assigned to various different depots. The vehicle scheduling problem concerns the assigning of a set of time-tabled trips to vehicles so as to minimize a given cost function. We consider the NP-hard multiple depot case in which, in addition, one has to assign vehicles to depots. Different lower bounds based on assignment relaxation and on connectivity constraints are presented and combined in an effective bounding procedure. A strong dominance procedure derived from new dominance criteria is also described. A branch and bound algorithm is finally proposed. Computational results are given.
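A sketch of the assignment relaxation mentioned above: ignore the depot structure and solve a pure trip-to-successor assignment problem, whose optimal cost cannot exceed the true optimum and is therefore a valid lower bound for branch and bound (the cost matrix is hypothetical; SciPy's Hungarian-algorithm solver does the matching):

import numpy as np
from scipy.optimize import linear_sum_assignment

INF = 1e9   # forbids time-infeasible trip-to-trip connections

# cost[i][j] = deadhead cost of running trip j immediately after trip i.
cost = np.array([
    [INF,   4,   7,   2],
    [  3, INF,   5, INF],
    [INF,   6, INF,   4],
    [  5, INF,   2, INF],
], dtype=float)

rows, cols = linear_sum_assignment(cost)
print("assignment relaxation lower bound:", cost[rows, cols].sum())   # 13.0

A real multiple-depot model would also include trip-to-depot arcs and depot capacities; dropping them is exactly what makes this a relaxation.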

Journal ArticleDOI
TL;DR: In this article, the problem of the slow viscous flow of a fluid through a random porous medium is considered, and the macroscopic Darcy's law, which defines the fluid permeability k, is derived in an ensemble-average formulation using the method of homogenization.
Abstract: The problem of the slow viscous flow of a fluid through a random porous medium is considered. The macroscopic Darcy's law, which defines the fluid permeability k, is first derived in an ensemble-average formulation using the method of homogenization. The fluid permeability is given explicitly in terms of a random boundary-value problem. General variational principles, different from those suggested earlier, are then formulated in order to obtain rigorous upper and lower bounds on k. These variational principles are applied by evaluating them for four different types of admissible fields. Each bound is generally given in terms of various kinds of correlation functions which statistically characterize the microstructure of the medium. The upper and lower bounds are computed for flow interior and exterior to distributions of spheres.

Journal ArticleDOI
TL;DR: In this paper, the Gutzwiller trace formula was used to approximate the scattering of a point particle from three hard discs in a plane, and a semiclassical limit upper bound was obtained on the lifetime of the scattering resonances.
Abstract: The scattering of a point particle from three hard discs in a plane is studied in the semiclassical approximation, using the Gutzwiller trace formula. Using a previously introduced coding of the classical dynamics, the needed summation over the classical periodic orbits is performed. The trace function is then given in terms of Ruelle zeta functions. A semiclassical limit upper bound is obtained on the lifetimes of the scattering resonances. This bound is larger than the classical lifetime when the classical repellor is chaotic but coincides with it when the repellor is periodic. We conclude that classical chaos dramatically influences the lifetimes of the scattering resonances. Our upper bound for the resonance lifetime is compared with the results of numerical calculation of the full quantum dynamics. The distribution of the imaginary parts of the complex wave numbers of the resonances is also calculated.

Journal ArticleDOI
TL;DR: The authors estimate the order of a finite Markov source based on empirically observed statistics and propose a universal asymptotically optimal test for the case where a given integer is known to be the upper bound of the true order.
Abstract: The authors estimate the order of a finite Markov source based on empirically observed statistics. The performance criterion adopted is to minimize the probability of underestimating the model order while keeping the overestimation probability exponent at a prescribed level. A universal asymptotically optimal test, in the sense just defined, is proposed for the case where a given integer is known to be an upper bound on the true order. For the case where such a bound is unavailable, an alternative rule based on the Lempel-Ziv data compression algorithm is shown to be asymptotically optimal as well and computationally more efficient.
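A simplified plug-in illustration of the problem (not the authors' asymptotically optimal test): estimate the order as the context length after which the empirical conditional entropy stops decreasing. Source, threshold, and data sizes are all illustrative:

import math
import random
from collections import Counter

def cond_entropy(seq, k):
    """Empirical H(X_t | previous k symbols), in bits."""
    ctx, joint = Counter(), Counter()
    for i in range(k, len(seq)):
        c = tuple(seq[i - k:i])
        ctx[c] += 1
        joint[c + (seq[i],)] += 1
    n = sum(joint.values())
    return -sum(m / n * math.log2(m / ctx[c[:-1]]) for c, m in joint.items())

random.seed(7)
# A second-order binary source: the next symbol repeats the symbol two
# steps back with probability 0.9.
seq = [0, 1]
for _ in range(50_000):
    seq.append(seq[-2] if random.random() < 0.9 else 1 - seq[-2])

for k in range(4):
    print(k, round(cond_entropy(seq, k), 3))
# The entropy drops sharply at k = 2 and flattens after: estimated order 2.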

Journal ArticleDOI
TL;DR: The upper bounds derived on the computational complexity of the algorithms above improve the upper bounds given by Kannan and Bachem in [SIAM J. Comput., 8 (1979), pp. 499–507].
Abstract: An $O(s^5 M(s^2))$ algorithm for computing the canonical structure of a finite Abelian group represented by an integer matrix of size s (this is the Smith normal form of the matrix) is presented. Moreover, an $O(s^3 M(s^2))$ algorithm for computing the Hermite normal form of an integer matrix of size s is given. The upper bounds derived on the computational complexity of the algorithms above improve the upper bounds given by Kannan and Bachem in [SIAM J. Comput., 8 (1979), pp. 499–507] and Chou and Collins in [SIAM J. Comput., 11 (1982), pp. 687–708].
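The objects in question are available off the shelf today; for example (assuming a SymPy version that exposes smith_normal_form, roughly 1.4 or later; the matrix is a standard textbook example):

from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[ 2,  4,   4],
            [-6,  6,  12],
            [10, -4, -16]])
# Expected diagonal: 2, 6, 12 -- so the finite Abelian group presented by A
# is Z/2 x Z/6 x Z/12, which is exactly the "canonical structure" above.
print(smith_normal_form(A, domain=ZZ))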

Journal ArticleDOI
15 Sep 1989-EPL
TL;DR: In this article, a lower bound for the spreading of a wave packet over one- and quasi-one-dimensional lattices was established, and it was shown that diffusive spread can take place only if the spectrum is singular continuous.

Abstract: The possibility of quantum diffusion over discrete lattices is discussed on the ground of general spectral theory, in connection with the problem of quantal suppression of classical chaos. Given the Hausdorff dimension of the spectrum, an asymptotic lower bound for the spreading of a wave packet can be established. This bound shows that diffusive spread over one- and quasi-one-dimensional lattices can take place only if the spectrum is singular continuous.

Journal ArticleDOI
TL;DR: In this paper, new upper bounds on the capacity region of the discrete memoryless single-output two-way channel were derived from the idea that no more dependence can be consumed than is produced.

Abstract: If in a transmission the inputs of a single-output two-way channel exhibit some interdependence, this dependence must have been created during earlier transmissions. The idea that no more dependence can be consumed than is produced is used to obtain new upper bounds to the capacity region of the discrete memoryless single-output two-way channel. With these upper bounds it is shown that C.E. Shannon's (1961) inner bound region is the capacity region for channels in a certain class, and the Zhang-Berger-Schalkwijk upper bound (1986) for Blackwell's multiplying channel is improved upon.

Book ChapterDOI
01 Jan 1989
TL;DR: It is shown that, if termination of R can be proved by polynomial interpretation, then dh_R is bounded from above by a doubly exponential function, whereas termination proofs by Knuth-Bendix ordering are possible even for systems where dh_R cannot be bounded by any primitive recursive function.

Abstract: The derivation height of a term t, relative to a set R of rewrite rules, dh_R(t), is the length of a longest derivation from t. We investigate in which way certain termination proof methods impose bounds on dh_R. In particular we show that, if termination of R can be proved by polynomial interpretation, then dh_R is bounded from above by a doubly exponential function, whereas termination proofs by Knuth-Bendix ordering are possible even for systems where dh_R cannot be bounded by any primitive recursive function. For both methods, conditions are given which guarantee a singly exponential upper bound on dh_R. Moreover, all upper bounds are tight.

Journal ArticleDOI
TL;DR: In this paper, the sensitivity with respect to a geometric or physical parameter and its use in an optimization process are presented; the quasi-Newton method is used to determine the direction of minimization, and the constraints of upper and lower bounds on parameters are treated by a penalty function.

Abstract: The calculation of the sensitivity with respect to a geometric or physical parameter and its use in an optimization process are presented. The 2-D finite-element method is used in the magnetic field calculation. The quasi-Newton method is used to determine the direction of minimization, and the constraints of upper and lower bounds on parameters are treated by a penalty function. A cubic approximation method is used in the unidimensional minimization. Minimization of the current density (J_c) while keeping the force desired for a relay model is presented as an example.
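A generic sketch of the constraint handling described above: bound constraints l ≤ x ≤ u folded into the objective with a quadratic penalty, then minimized with a quasi-Newton method (BFGS via SciPy; the objective is a toy stand-in for the magnetic device model, not the paper's formulation):

import numpy as np
from scipy.optimize import minimize

l, u = np.array([0.0, 1.0]), np.array([2.0, 3.0])   # hypothetical bounds

def objective(x):                        # toy stand-in for J_c(x)
    return (x[0] - 3.0) ** 2 + (x[1] - 0.5) ** 2

def penalized(x, mu=100.0):
    # Squared-hinge penalty: zero inside the box, quadratic outside.
    viol = np.maximum(l - x, 0) + np.maximum(x - u, 0)
    return objective(x) + mu * np.sum(viol ** 2)

res = minimize(penalized, x0=np.array([1.0, 2.0]), method="BFGS")
print(res.x)   # ~ [2.01, 0.99]: pushed to the bounds nearest the optimum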

Journal ArticleDOI
TL;DR: In this article, upper and lower bounds are presented that relate the expected cover time for a graph to the eigenvalues of the Markov chain that describes the random walk on it.

Abstract: Consider a particle that moves on a connected, undirected graph G with n vertices. At each step the particle goes from the current vertex to one of its neighbors, chosen uniformly at random. The cover time is the first time when the particle has visited all the vertices in the graph starting from a given vertex. In this paper, we present upper and lower bounds that relate the expected cover time for a graph to the eigenvalues of the Markov chain that describes the random walk above. An interesting consequence is that regular expander graphs have expected cover time Θ(n log n).
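The quantity being bounded is easy to estimate by simulation. For the n-cycle the expected cover time is known to be n(n − 1)/2 (expanders, by the result above, achieve the much smaller Θ(n log n)); a quick check with illustrative sizes:

import random

def cover_time(adj, start=0):
    """Steps until a simple random walk has visited every vertex."""
    seen, steps, v = {start}, 0, start
    while len(seen) < len(adj):
        v = random.choice(adj[v])
        seen.add(v)
        steps += 1
    return steps

random.seed(3)
n = 64
cycle = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
trials = [cover_time(cycle) for _ in range(200)]
print(sum(trials) / len(trials))   # ~ n*(n-1)/2 = 2016 for the 64-cycle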

Journal ArticleDOI
TL;DR: A general lower bound is derived for the variance of estimates based on high-order sample moments and the existence of an optimal weight matrix is proven.
Abstract: Recently, there has been a considerable interest in parametric estimation of non-Gaussian processes, based on high-order moments. Several researchers have proposed algorithms for estimating the parameters of AR, MA and ARMA processes, based on the third-order and fourth-order cumulants. These algorithms are capable of handling non-minimum phase processes, and some of them provide a good trade-off between computational complexity and statistical efficiency. This paper presents some results about the performance of algorithms based on high-order moments. A general lower bound is derived for the variance of estimates based on high-order sample moments. This bound, which is shown to be asymptotically tight, is neither the Cramer-Rao bound nor a trivial extension thereof. The performance of weighted least squares estimates of the type recently proposed in the literature is investigated. An expression for the variance of such estimates is derived and the existence of an optimal weight matrix is proven. The general formulae are specialized to MA and ARMA processes and used to analyse the performance of some algorithms in detail. The analytic results are verified by Monte Carlo simulations for some specific test cases. A by-product of this paper is the derivation of asymptotic formulae for the variances and covariances of the sample third-order moments of a certain class of processes.
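A Monte Carlo illustration of the kind of estimator being analysed (an MA(1) process driven by skewed noise; the identity b = c3(1,1)/c3(0,1) follows from the standard third-order moment formulas for MA models, and all parameter values are illustrative):

import numpy as np

rng = np.random.default_rng(42)
b_true, N = 0.6, 200_000
e = rng.exponential(1.0, N + 1) - 1.0       # zero-mean, skewed driving noise
x = e[1:] + b_true * e[:-1]                 # MA(1): x_t = e_t + b*e_{t-1}
x = x - x.mean()

def c3(x, i, j):
    """Sample third-order moment E[x_t x_{t+i} x_{t+j}]."""
    m = len(x) - max(i, j)
    return np.mean(x[:m] * x[i:i + m] * x[j:j + m])

# For MA(1), c3(1,1) = gamma3*b^2 and c3(0,1) = gamma3*b, so the ratio
# recovers b -- including its sign, which second-order (Gaussian) methods
# cannot resolve for non-minimum-phase systems.
b_hat = c3(x, 1, 1) / c3(x, 0, 1)
print(f"b_hat = {b_hat:.3f} (true {b_true})")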