
Showing papers on "Upper and lower bounds" published in 1996


Journal ArticleDOI
TL;DR: This paper presents a new approach for robust MPC synthesis that allows explicit incorporation of the description of plant uncertainty in the problem formulation, and shows that the feasible receding horizon state-feedback control design robustly stabilizes the set of uncertain plants.

2,329 citations


Journal ArticleDOI
TL;DR: An inequality is derived according to which the fringe visibility in a two-way interferometer sets an absolute upper bound on the amount of which-way information that is potentially stored in a which-way detector.
Abstract: An inequality is derived according to which the fringe visibility in a two-way interferometer sets an absolute upper bound on the amount of which-way information that is potentially stored in a which-way detector. In some sense, this inequality can be regarded as quantifying the notion of wave-particle duality. The derivation of the inequality does not make use of Heisenberg's uncertainty relation in any form.
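Stated symbolically (using now-standard notation that does not appear verbatim in the abstract: D for the which-way distinguishability and V for the fringe visibility), the duality relation takes the form:

```latex
\mathcal{D}^2 + \mathcal{V}^2 \le 1
```

Full which-way information (D = 1) forces the fringes to vanish (V = 0), while perfect fringe visibility (V = 1) leaves no which-way information, which is the sense in which the inequality quantifies wave-particle duality.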

729 citations


Journal ArticleDOI
TL;DR: In this article, a tight analysis of Grover's recent algorithm for quantum database searching is provided, giving the probability of success after any given number of iterations of the algorithm.
Abstract: We provide a tight analysis of Grover's recent algorithm for quantum database searching. We give a simple closed-form formula for the probability of success after any given number of iterations of the algorithm. This allows us to determine the number of iterations necessary to achieve almost certainty of finding the answer. Furthermore, we analyse the behaviour of the algorithm when the element to be found appears more than once in the table and we provide a new algorithm to find such an element even when the number of solutions is not known ahead of time. Using techniques from Shor's quantum factoring algorithm in addition to Grover's approach, we introduce a new technique for approximate quantum counting, which allows us to estimate the number of solutions. Finally we provide a lower bound on the efficiency of any possible quantum database searching algorithm and we show that Grover's algorithm nearly comes within a factor of 2 of being optimal in terms of the number of probes required in the table.
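The closed-form analysis described above can be sketched numerically. The function names are illustrative; the formulas are the standard ones from this line of work: success probability P(k) = sin²((2k+1)θ) after k iterations, with sin θ = √(t/N) for t solutions among N entries.

```python
import math

def grover_success_probability(N, t, k):
    """Probability that k Grover iterations find one of t marked
    entries in an unsorted table of N: P(k) = sin^2((2k+1)*theta)
    with sin(theta) = sqrt(t/N)."""
    theta = math.asin(math.sqrt(t / N))
    return math.sin((2 * k + 1) * theta) ** 2

def optimal_iterations(N, t):
    """Iteration count bringing the success probability near 1:
    the integer closest to pi/(4*theta) - 1/2."""
    theta = math.asin(math.sqrt(t / N))
    return round(math.pi / (4 * theta) - 0.5)

k = optimal_iterations(1024, 1)       # 1024 entries, one solution
print(k, grover_success_probability(1024, 1, k))
```

Note the quadratic speedup visible here: about 25 probes suffice for a 1024-entry table, versus roughly 512 expected classical probes.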

613 citations


Journal ArticleDOI
TL;DR: In this article, a branch-and-bound LP formulation for the single allocation p-hub median problem is presented, which requires fewer variables and constraints than those traditionally used in the literature.

514 citations


Proceedings ArticleDOI
14 Oct 1996
TL;DR: It is proved that spectral partitioning techniques can be used to produce separators whose ratio of vertices removed to edges cut is O(√n) for bounded-degree planar graphs and two-dimensional meshes and O(n^(1/d)) for well-shaped d-dimensional meshes.
Abstract: Spectral partitioning methods use the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the Laplacian matrix) to find a small separator of a graph. These methods are important components of many scientific numerical algorithms and have been demonstrated by experiment to work extremely well. In this paper, we show that spectral partitioning methods work well on bounded-degree planar graphs and finite element meshes, the classes of graphs to which they are usually applied. While naive spectral bisection does not necessarily work, we prove that spectral partitioning techniques can be used to produce separators whose ratio of vertices removed to edges cut is O(√n) for bounded-degree planar graphs and two-dimensional meshes and O(n^(1/d)) for well-shaped d-dimensional meshes. The heart of our analysis is an upper bound on the second-smallest eigenvalues of the Laplacian matrices of these graphs: we prove a bound of O(1/n) for bounded-degree planar graphs and O(1/n^(2/d)) for well-shaped d-dimensional meshes.
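The basic method analysed above can be sketched with dense linear algebra (fine for toy sizes; production codes use sparse eigensolvers). The median split shown here is one common variant; the instance is an invented toy example.

```python
import numpy as np

def fiedler_partition(adj):
    """Split a graph at the median of the Fiedler vector, the
    eigenvector of the second-smallest Laplacian eigenvalue."""
    lap = np.diag(adj.sum(axis=1)) - adj        # graph Laplacian L = D - A
    vals, vecs = np.linalg.eigh(lap)            # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler, fiedler <= np.median(fiedler)

# Path graph on 6 vertices: the Fiedler vector is monotone along the
# path, so the median split recovers the obvious middle separator.
n = 6
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1
vec, left = fiedler_partition(adj)
print(left)
```

Depending on the (arbitrary) sign of the eigenvector, `left` comes out as one contiguous half of the path or the other; either way it cuts a single edge.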

501 citations


Journal ArticleDOI
TL;DR: A branch-and-cut algorithm to solve quadratic programming problems where there is an upper bound on the number of positive variables and the algorithm solves the largest real-life problems in a few minutes of run-time.
Abstract: We present computational experience with a branch-and-cut algorithm to solve quadratic programming problems where there is an upper bound on the number of positive variables. Such problems arise in financial applications. The algorithm solves the largest real-life problems in a few minutes of run-time.

409 citations


Journal ArticleDOI
TL;DR: In this article, a second-order Taylor expansion for the nonlinear phase potentials is proposed to estimate the effective behavior of nonlinear composite materials with arbitrary phase contrast, and the results are compared with available bounds and numerical estimates, as well as with other nonlinear homogenization procedures.
Abstract: Motivated by previous small-contrast perturbation estimates, this paper proposes a new method for estimating the effective behavior of nonlinear composite materials with arbitrary phase contrast. The key idea is to write down a second-order Taylor expansion for the phase potentials, about appropriately defined phase average strains. The resulting estimates, which are exact to second order in the contrast, involve the “tangent” modulus tensors of the nonlinear phase potentials, and reduce the problem for the nonlinear composite to a linear problem for an anisotropic thermoelastic composite. Making use of a well-known result by Levin for two-phase thermoelastic composites, together with estimates of the Hashin-Shtrikman type for linear elastic composites, explicit results are generated for two-phase nonlinear composites with statistically isotropic particulate microstructures. Like the earlier small-contrast asymptotic results, the new estimates are found to depend on the determinant of the strain, but unlike the small-contrast results that diverge for shear loading conditions in the nonhardening limit, the new estimates remain bounded and reduce to the classical lower bound in this limiting case. The general method is applied to composites with power-law constitutive behavior and the results are compared with available bounds and numerical estimates, as well as with other nonlinear homogenization procedures. For the cases considered, the new estimates are found to satisfy the restrictions imposed by the bounds, to improve on the predictions of prior homogenization procedures and to be in excellent agreement with the results of the numerical simulations.

343 citations


Journal ArticleDOI
TL;DR: In this article, a process to manufacture single-crystal thermal actuators using silicon fusion bonding and electrochemical etch stop is presented, which permits simultaneous creation of in-plane and out-of-plane thermal actuator together with levers suitable for both directions of actuation.
Abstract: A process to manufacture single-crystal thermal actuators using silicon fusion bonding and electrochemical etch stop is presented. The process permits the simultaneous creation of in-plane and out-of-plane thermal actuators together with levers suitable for both directions of actuation. A final dry-release step is used, permitting the manufacture of MOS or bipolar devices in conjunction with actuators. Out-of-plane actuation of vertically levered devices has been demonstrated. The −3 dB response frequency of out-of-plane actuators is approximately 1000 Hz in air. Novel levered in-plane devices which achieve deflections of up to 200 μm have been fabricated. An estimate of the upper bound of thermal actuator efficiency is presented.

317 citations


Journal ArticleDOI
TL;DR: In this paper, the authors take a new approach to the problem of peak-to-peak gain minimization by minimizing the ∗-norm, the best upper bound on the induced L∞ norm obtainable by bounding the reachable set with inescapable ellipsoids.
Abstract: In this paper we take a new approach to the problem of peak-to-peak gain minimization (the L1 or induced L∞ problem). This is done in an effort to circumvent the complexity problems of other approaches. Instead of minimizing the induced L∞ norm, we minimize the ∗-norm, the best upper bound on the induced L∞ norm obtainable by bounding the reachable set with inescapable ellipsoids. Controller and filter synthesis for ∗-norm minimization reduces to minimizing a continuous function of a single real variable. This function can be evaluated, in the most complicated case, by solving a Riccati equation followed by an LMI eigenvalue problem. We contend that synthesis is practical now, but a key computational question (is the function to be minimized convex?) remains open. The filters and controllers that result from this approach are at most the same order as the plant, as in the case of LQG and H∞ design.

284 citations


Journal ArticleDOI
TL;DR: The purpose of these methods is to allow engineers to assess the current performance of the existing control systems, to monitor how this performance changes over time, to see which variables are well controlled and which are not, and to help in deciding whether or not there is sufficient incentive to reidentify the process model and/or redesign the controller.

284 citations


Journal ArticleDOI
TL;DR: A variational principle for upper bounds on the largest possible time averaged convective heat flux is derived from the Boussinesq equations of motion, from which nonlinear Euler-Lagrange equations for the optimal background fields are derived.
Abstract: Building on a method of analysis for the Navier-Stokes equations introduced by Hopf [Math. Ann. 117, 764 (1941)], a variational principle for upper bounds on the largest possible time averaged convective heat flux is derived from the Boussinesq equations of motion. When supplied with appropriate test background fields satisfying a spectral constraint, reminiscent of an energy stability condition, the variational formulation produces rigorous upper bounds on the Nusselt number (Nu) as a function of the Rayleigh number (Ra). For the case of vertical heat convection between parallel plates in the absence of sidewalls, a simplified (but rigorous) formulation of the optimization problem yields the large Rayleigh number bound Nu ≤ 0.167 Ra^(1/2) − 1. Nonlinear Euler-Lagrange equations for the optimal background fields are also derived, which allow us to make contact with the upper bound theory of Howard [J. Fluid Mech. 17, 405 (1963)] for statistically stationary flows. The structure of solutions of the Euler-Lagrange equations is elucidated from the geometry of the variational constraints, which sheds light on Busse's [J. Fluid Mech. 37, 457 (1969)] asymptotic analysis of general solutions to Howard's Euler-Lagrange equations. The results of our analysis are discussed in the context of theory, recent experiments, and direct numerical simulations. © 1996 The American Physical Society.

Journal ArticleDOI
TL;DR: The bounded-variance filtered estimation of the state of an uncertain, linear, discrete-time system, with an unknown norm-bounded parameter matrix, is considered and an upper bound on the variance of the estimation error is found.
Abstract: The bounded-variance filtered estimation of the state of an uncertain, linear, discrete-time system, with an unknown norm-bounded parameter matrix, is considered. An upper bound on the variance of the estimation error is found for all admissible systems, and estimators are derived that minimize the latter bound. We treat the finite-horizon, time-varying case and the infinite-time case, where the nominal system model is time invariant. In the special stationary case, where it is known that the uncertain system is time invariant, we provide a robust filter for all uncertainties that still keep the system asymptotically stable.

Proceedings ArticleDOI
01 Jul 1996
TL;DR: The first substantial improvement of the 20-year-old classical harmonic upper bound, H(m), of Johnson, Lovász, and Chvátal is provided, and the approximation guarantee for the greedy algorithm is shown to be better than the guarantee recently established by Srinivasan for the randomized rounding technique, thus improving the bounds on the integrality gap.
Abstract: We establish significantly improved bounds on the performance of the greedy algorithm for approximating set cover. In particular, we provide the first substantial improvement of the 20-year-old classical harmonic upper bound, H(m), of Johnson, Lovász, and Chvátal, by showing that the performance ratio of the greedy algorithm is, in fact, exactly ln m − ln ln m + Θ(1), where m is the size of the ground set. The difference between the upper and lower bounds turns out to be less than 1.1. This provides the first tight analysis of the greedy algorithm, as well as the first upper bound that lies below H(m) by a function going to infinity with m. We also show that the approximation guarantee for the greedy algorithm is better than the guarantee recently established by Srinivasan for the randomized rounding technique, thus improving the bounds on the integrality gap. Our improvements result from a new approach which might be generally useful for attacking other similar problems.
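The greedy algorithm analysed above is short enough to state in full; the instance data below is invented for illustration.

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: repeatedly pick the subset covering the
    most still-uncovered elements. Its performance ratio is the
    ln m - ln ln m + Theta(1) analysed in the paper."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            break                      # remaining elements are uncoverable
        cover.append(best)
        uncovered -= best
    return cover

# Toy instance: two sets suffice, and greedy finds both.
subsets = [{1, 2, 3, 4}, {4, 5}, {5, 6, 7}, {1, 6}]
print(len(greedy_set_cover(range(1, 8), subsets)))
```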

Journal ArticleDOI
TL;DR: The recovery of signals from indirect measurements, blurred by random noise, is considered under the assumption that prior knowledge regarding the smoothness of the signal is avialable and the general problem is embedded in an abstract Hilbert scale.
Abstract: The recovery of signals from indirect measurements, blurred by random noise, is considered under the assumption that prior knowledge regarding the smoothness of the signal is available. For greater flexibility the general problem is embedded in an abstract Hilbert scale. In the applications Sobolev scales are used. For the construction of estimators we employ preconditioning along with regularized operator inversion in the appropriate inner product, where the operator is bounded but not necessarily compact. A lower bound to certain minimax rates is included, and it is shown that in generic examples the proposed estimators attain the asymptotic minimax rate. Examples include errors-in-variables (deconvolution) and indirect nonparametric regression. Special instances of the latter are estimation of the source term in a differential equation and the estimation of the initial state in the heat equation.

Posted Content
TL;DR: In the present paper the problem of finding quantum-error-correcting codes is transformed into one of finding additive codes over the field GF(4) which are self-orthogonal with respect to a certain trace inner product.
Abstract: The problem of finding quantum error-correcting codes is transformed into the problem of finding additive codes over the field GF(4) which are self-orthogonal with respect to a certain trace inner product. Many new codes and new bounds are presented, as well as a table of upper and lower bounds on such codes of length up to 30 qubits.

Patent
Lester Lynn Shipman
22 Mar 1996
TL;DR: In this paper, a method and apparatus, using a computer model, is used to control a manufacturing or distribution process, which determines a demand forecast by using an optimized historical weighting factor, determines an upper and a lower bound of a planned inventory by explicitly accounting for the customer order lead time, and computes a production schedule at predetermined intervals to maintain an actual inventory between the upper and lower bounds of the planned inventory.
Abstract: A method and apparatus, using a computer model, to control a manufacturing or distribution process, which determines a demand forecast by using an optimized historical weighting factor, determines an upper and a lower bound of a planned inventory by explicitly accounting for the customer order lead time, and computes a production schedule at predetermined intervals to maintain an actual inventory between the upper and lower bounds of the planned inventory.
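As a rough illustration only (the abstract discloses no formulas; the safety-stock term, the smoothing-based forecast, and all parameter names below are invented for the sketch), the control loop might look like:

```python
def forecast(history, alpha=0.3):
    """Exponentially weighted demand forecast; alpha stands in for
    the patent's 'optimized historical weighting factor' (the value
    here is arbitrary, not an optimized one)."""
    f = history[0]
    for d in history[1:]:
        f = alpha * d + (1 - alpha) * f
    return f

def plan(actual_inventory, forecast_per_period, lead_time_periods,
         safety_stock=50.0):
    """Hypothetical sketch: keep inventory between a lower bound
    (expected demand over the customer order lead time plus safety
    stock) and an upper bound (one extra forecast period above that);
    schedule production only when the lower bound is breached."""
    lower = forecast_per_period * lead_time_periods + safety_stock
    upper = lower + forecast_per_period
    if actual_inventory < lower:
        return upper - actual_inventory   # produce back up to the upper bound
    return 0.0

print(plan(120.0, 100.0, 2))   # below the lower bound of 250, so production is scheduled
```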

Journal ArticleDOI
TL;DR: This work devises an algorithm for computing linear-fractional representations (LFRs), shows how to use this approach for static state-feedback synthesis, and generalizes the results to dynamic output-feedback synthesis in the case when f and g are linear in every state coordinate that is not measured.

Journal ArticleDOI
TL;DR: In this paper, a branch and bound algorithm for set covering, whose centerpiece is a new integrated upper bounding/lower bounding procedure called dynamic subgradient optimization (DYNSGRAD), is discussed.
Abstract: We discuss a branch and bound algorithm for set covering, whose centerpiece is a new integrated upper bounding/lower bounding procedure called dynamic subgradient optimization (DYNSGRAD). This new procedure, applied to a Lagrangean dual at every node of the search tree, combines the standard subgradient method with primal and dual heuristics that interact to change the Lagrange multipliers and tighten the upper and lower bounds, fix variables, and periodically restate the Lagrangean itself. Extensive computational testing is reported. As a stand-alone heuristic, DYNSGRAD performs significantly better than other procedures in terms of the quality of solutions obtainable with a certain computational effort. When incorporated into a branch-and-bound algorithm, DYNSGRAD considerably advances the state of the art in solving set covering problems.
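The "standard subgradient method" that DYNSGRAD builds on can be sketched as follows. This is only the plain Lagrangean dual ascent for set covering (DYNSGRAD's primal/dual heuristics, variable fixing, and restatement are not shown); the step-size schedule and iteration cap are arbitrary illustrative choices.

```python
def lagrangean_lower_bound(costs, subsets, universe, iters=200, step=1.0):
    """Subgradient ascent on the set-covering Lagrangean dual:
    L(u) = sum_i u_i + sum_j min(0, c_j - sum_{i in S_j} u_i),
    a lower bound on the optimal cover cost for every u >= 0."""
    u = {i: 0.0 for i in universe}            # Lagrange multipliers
    best = 0.0
    for t in range(iters):
        # relaxed problem: take column j iff its reduced cost is negative
        reduced = [costs[j] - sum(u[i] for i in s)
                   for j, s in enumerate(subsets)]
        x = [1 if r < 0 else 0 for r in reduced]
        bound = sum(u.values()) + sum(r for r in reduced if r < 0)
        best = max(best, bound)
        # subgradient: 1 - (coverage of each row by the chosen columns)
        g = {i: 1 - sum(x[j] for j, s in enumerate(subsets) if i in s)
             for i in universe}
        if all(v == 0 for v in g.values()):
            break
        s_t = step / (t + 1)                  # diminishing step size
        u = {i: max(0.0, u[i] + s_t * g[i]) for i in u}
    return best

# Three unit-cost sets pairwise covering {1,2,3}: the LP (and hence
# Lagrangean dual) bound is 1.5, below the integer optimum of 2.
print(lagrangean_lower_bound([1.0, 1.0, 1.0],
                             [{1, 2}, {2, 3}, {1, 3}], {1, 2, 3}))
```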

Journal ArticleDOI
TL;DR: A simple inverse kinematics procedure is proposed for a seven degree of freedom model of the human arm to provide an additional constraint leading to closed-form analytical equations with an upper bound of two or four solutions.
Abstract: A simple inverse kinematics procedure is proposed for a seven degree of freedom model of the human arm. Two schemes are used to provide an additional constraint leading to closed-form analytical equations with an upper bound of two or four solutions. Multiple solutions can be evaluated on the basis of their proximity to the rest angles or the previous configuration of the arm. Empirical results demonstrate that the procedure is well suited for real-time applications.

Proceedings Article
04 Aug 1996
TL;DR: The Russian Doll Search algorithm is introduced, which replaces one search by n successive searches on nested subproblems, records the results of each search and uses them later, when solving larger subproblems, in order to improve the lower bound on the global valuation of any partial assignment.
Abstract: Although the Constraint Satisfaction framework has been extended to deal with Constraint Optimization problems, optimization appears far more complex than satisfaction. One of the causes of the inefficiency of complete tree search methods, like Depth First Branch and Bound, lies in the poor quality of the lower bound on the global valuation of a partial assignment, even when using Forward Checking techniques. In this paper, we introduce the Russian Doll Search algorithm which replaces one search by n successive searches on nested subproblems (n being the number of problem variables), records the results of each search and uses them later, when solving larger subproblems, in order to improve the lower bound on the global valuation of any partial assignment. On small random problems and on large real scheduling problems, this algorithm yields surprisingly good results, which greatly improve as the problems get more constrained and the bandwidth of the used variable ordering diminishes.

Journal ArticleDOI
TL;DR: A very short proof is presented for the almost best upper bound for the size of an r-cover-free family over n elements.

Journal ArticleDOI
TL;DR: This paper proposes two simple lower bounds for quantifying the confidence probability that the selected designs contain at least one good design, and numerical testing is presented.
Abstract: Ordinal optimization concentrates on finding a subset of good designs, by approximately evaluating a parallel set of designs, and reduces the required simulation time dramatically for discrete-event simulation and optimization. The estimation of the confidence probability (CP) that the selected designs contain at least one good design is crucial to ordinal optimization. However, it is very difficult to estimate this probability in DES simulation, especially for complicated DES with a large number of designs. This paper proposes two simple lower bounds for quantifying the confidence probability. Numerical testing is presented.

Journal ArticleDOI
TL;DR: In this article, the authors present analytical upper and lower charging bounds for continuum charging, useful for checking the accuracy of charging models, and formulate an analytic charging rate model that remains within the bounds and gives good agreement with unipolar and bipolar charging data.

Journal ArticleDOI
Jeff Kahn
TL;DR: The “guided-random” method used in the proof is in the spirit of some earlier work and is thought to be of particular interest; one simple ingredient is a martingale inequality which ought to prove useful beyond the present context.

Journal ArticleDOI
TL;DR: Minimum relative entropy (MRE) yields exact expressions for the posterior probability density function and the expected value of the linear inverse problem and is able to produce any desired confidence intervals.
Abstract: In this paper we show that given prior information in terms of a lower and upper bound, a prior bias, and constraints in terms of measured data, minimum relative entropy (MRE) yields exact expressions for the posterior probability density function (pdf) and the expected value of the linear inverse problem. In addition, we are able to produce any desired confidence intervals. In numerical simulations, we use the MRE approach to recover the release and evolution history of a plume in a one-dimensional, constant known velocity and dispersivity system. For noise-free data, we find that the reconstructed plume evolution history is indistinguishable from the true history. An exact match to the observed data is evident. Two methods are chosen for dissociating signal from a noisy data set. The first uses a modification of MRE for uncertain data. The second method uses “presmoothing” by fast Fourier transforms and Butterworth filters to attempt to remove noise from the signal before the “noise-free” variant of MRE inversion is used. Both methods appear to work very well in recovering the true signal, and qualitatively appear superior to those of Skaggs and Kabala [1994]. We also solve for a degenerate case with a very high standard deviation in the noise. The recovered model indicates that the MRE inverse method did manage to recover the salient features of the source history. Once the plume source history has been developed, future behavior of a plume can then be cast in a probabilistic framework. For an example simulation, the MRE approach not only was able to resolve the source function from noisy data but also was able to correctly predict future behavior.

Journal ArticleDOI
TL;DR: In this paper, a simple and general algebraic technique for obtaining results in additive number theory was presented, and applied to derive various new extensions of the Cauchy-Davenport Theorem.

Proceedings ArticleDOI
28 Jan 1996
TL;DR: The expected performance ratio is studied, taking the worst-case multiset of items L, and assuming that the elements of L are inserted in random order, with all permutations equally likely, showing a lower bound of 1.08 and an upper bound of 1.5 on the random order performance ratio of Best-fit.
Abstract: Best-fit is the best known algorithm for on-line bin packing, in the sense that no algorithm is known to behave better both in the worst case (when Best-fit has performance ratio 1.7) and in the average uniform case, with items drawn uniformly in the interval [0, 1] (then Best-fit has expected wasted space O(n^(1/2)(log n)^(3/4))). In practical applications, Best-fit appears to perform within a few percent of optimal. In this paper, in the spirit of previous work in computational geometry, we study the expected performance ratio, taking the worst-case multiset of items L, and assuming that the elements of L are inserted in random order, with all permutations equally likely. We show a lower bound of 1.08… and an upper bound of 1.5 on the random order performance ratio of Best-fit. The upper bound contrasts with the result that in the worst case, any (deterministic or randomized) on-line bin-packing algorithm has performance ratio at least 1.54….
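The on-line Best-fit rule itself is simple to state in code; the item list below is an invented example, and the epsilon guard is just a float-comparison convenience.

```python
def best_fit(items, capacity=1.0):
    """On-line Best-fit bin packing: each arriving item goes into the
    fullest bin that still has room for it; a new bin is opened only
    if no existing bin fits."""
    bins = []                                   # current load of each bin
    for item in items:
        feasible = [i for i, load in enumerate(bins)
                    if load + item <= capacity + 1e-12]
        if feasible:
            j = max(feasible, key=lambda i: bins[i])   # fullest feasible bin
            bins[j] += item
        else:
            bins.append(item)
    return bins

print(len(best_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1])))
```

Since Best-fit is on-line, it never revisits earlier decisions, which is exactly why the order of arrival (worst-case versus random, as studied in the paper) matters.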

Proceedings ArticleDOI
28 Jan 1996
TL;DR: Empirical evidence is provided in support of using the HK bound as a stand-in for the optimal tour length when evaluating the quality of near-optimal tours, and data indicates that the HK bound can provide substantial "variance reduction" in experimental studies involving randomly generated instances.
Abstract: The Held-Karp (HK) lower bound is the solution to the linear programming relaxation of the standard integer programming formulation of the traveling salesman problem (TSP). For numbers of cities N up to 30,000 or more it can be computed exactly using the Simplex method and appropriate separation algorithms, and for N up to a million good approximations can be obtained via iterative Lagrangean relaxation techniques first suggested by Held and Karp. In this paper, we consider three applications of our ability to compute/approximate this bound. First, we provide empirical evidence in support of using the HK bound as a stand-in for the optimal tour length when evaluating the quality of near-optimal tours. We show that for a wide variety of randomly generated instance types the optimal tour length averages less than 0.8% over the HK bound, and even for the real-world instances in TSPLIB the gap is almost always less than 2%. Moreover, our data indicates that the HK bound can provide substantial "variance reduction" in experimental studies involving randomly generated instances. Second, we estimate the expected HK bound as a function of N for a variety of random instance types, based on extensive computations. For example, for random Euclidean instances it is known that the ratio of the Held-Karp bound to √N approaches a constant C_HK, and we estimate both that constant and the rate of convergence to it. Finally, we combine this information with our earlier results on expected HK gaps to obtain estimates for expected optimal tour lengths. For random Euclidean instances, we conclude that C_OPT, the limiting ratio of the optimal tour length to √N, is .7124 ± .0002, thus invalidating the commonly cited estimates of .749 and .765 and undermining many claims of good heuristic performance based on those estimates. For random distance matrices, the expected optimal tour length appears to be about 2.042, adding support to a conjecture.
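The iterative Lagrangean technique mentioned above can be sketched via the classic 1-tree bound: a minimum spanning tree on all cities but one, plus the two cheapest edges at the excluded city, computed with node penalties that are updated by a subgradient rule. The step-size schedule and iteration cap here are arbitrary illustrative choices, not the paper's tuned implementation.

```python
import math

def one_tree_bound(dist, pi):
    """Lagrangean 1-tree bound: MST on nodes 1..n-1 under penalized
    costs d[i][j] + pi[i] + pi[j], plus the two cheapest penalized
    edges at node 0, minus 2*sum(pi)."""
    n = len(dist)
    c = lambda i, j: dist[i][j] + pi[i] + pi[j]
    in_tree = {1}                       # Prim's MST over nodes 1..n-1
    degree = [0] * n
    cost = 0.0
    while len(in_tree) < n - 1:
        i, j = min(((i, j) for i in in_tree for j in range(1, n)
                    if j not in in_tree), key=lambda e: c(*e))
        in_tree.add(j)
        degree[i] += 1; degree[j] += 1
        cost += c(i, j)
    for j in sorted(range(1, n), key=lambda j: c(0, j))[:2]:
        degree[0] += 1; degree[j] += 1
        cost += c(0, j)
    return cost - 2 * sum(pi), degree

def held_karp_bound(dist, iters=100, step=1.0):
    """Subgradient ascent on the node penalties (Held & Karp style)."""
    n = len(dist)
    pi = [0.0] * n
    best = float("-inf")
    for t in range(iters):
        bound, degree = one_tree_bound(dist, pi)
        best = max(best, bound)
        if all(d == 2 for d in degree):   # the 1-tree is a tour: bound is tight
            break
        s = step / (t + 1)
        pi = [p + s * (d - 2) for p, d in zip(pi, degree)]
    return best

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]    # unit square: optimal tour length 4
dist = [[math.dist(a, b) for b in pts] for a in pts]
print(held_karp_bound(dist))
```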

Book ChapterDOI
21 May 1996
TL;DR: For the 15-Puzzle, iterative-deepening A* with pattern databases (N=8) reduces the total number of nodes searched on a standard problem set of 100 positions by over 1000-fold.
Abstract: The efficiency of A* searching depends on the quality of the lower bound estimates of the solution cost. Pattern databases enumerate all possible subgoals required by any solution, subject to constraints on the subgoal size. Each subgoal in the database provides a tight lower bound on the cost of achieving it. For a given state in the search space, all possible subgoals are looked up, with the maximum cost over all lookups being the lower bound. For sliding tile puzzles, the database enumerates all possible patterns containing N tiles and, for each one, contains a lower bound on the distance to correctly move all N tiles into their correct final location. For the 15-Puzzle, iterative-deepening A* with pattern databases (N=8) reduces the total number of nodes searched on a standard problem set of 100 positions by over 1000-fold.
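A pattern database in this spirit can be built for the 8-puzzle (the 15-Puzzle's 3×3 cousin) by a backward breadth-first search that distinguishes only the pattern tiles and the blank; the two-tile pattern and cell numbering below are illustrative choices, not the paper's N=8 configuration.

```python
from collections import deque

def neighbors3(p):
    """Cells adjacent to cell p on a 3x3 board (cells numbered 0..8)."""
    r, c = divmod(p, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            yield nr * 3 + nc

def build_pattern_db(goal_tiles, goal_blank=8):
    """BFS backwards from the goal over abstract states in which only
    the pattern tiles and the blank are distinguished. Each stored
    distance is an admissible lower bound on the moves needed to put
    those tiles into place in the full puzzle."""
    start = (tuple(goal_tiles), goal_blank)
    db = {start: 0}
    queue = deque([start])
    while queue:
        tiles, blank = queue.popleft()
        for nb in neighbors3(blank):
            if nb in tiles:            # a pattern tile slides into the blank
                i = tiles.index(nb)
                new_tiles = tiles[:i] + (blank,) + tiles[i + 1:]
            else:                      # a "don't care" tile moves instead
                new_tiles = tiles
            state = (new_tiles, nb)
            if state not in db:
                db[state] = db[(tiles, blank)] + 1
                queue.append(state)
    return db

db = build_pattern_db((0, 1))   # pattern = the two tiles whose goal cells are 0 and 1
print(len(db), db[((0, 1), 7)])
```

At search time the heuristic is the maximum over lookups in several such databases, which stays admissible because each is an exact distance in a relaxed puzzle.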

Journal ArticleDOI
TL;DR: It is found that most speech signals in the form of phoneme articulations are low dimensional, and the second‐order dynamical entropy of speech time series is found to be a lower bound of metric entropy.
Abstract: This paper reports results of the estimation of dynamical invariants, namely Lyapunov exponents, dimension, and metric entropy for speech signals. Two optimality criteria from dynamical systems literature, namely singular value decomposition method and the redundancy method, are used to reconstruct state space trajectories of speech and make observations. The positive values of the largest Lyapunov exponent of speech signals in the form of phoneme articulations show the average exponential divergence of nearby trajectories in the reconstructed state space. The dimension of a time series is a measure of its complexity and gives bounds on the number of state space variables needed to model it. It is found that most speech signals in the form of phoneme articulations are low dimensional. For comparison, a statistical model of a speech time series is also used to estimate the correlation dimension. The second‐order dynamical entropy (which is a lower bound of metric entropy) of speech time series is found to ...