
Showing papers on "Upper and lower bounds published in 1992"


Journal ArticleDOI
TL;DR: It is shown that, for inputs of length n, the probabilistic (bounded error) communication complexity of set intersection is $\Theta ( n )$.
Abstract: It is shown that, for inputs of length n, the probabilistic (bounded error) communication complexity of set intersection is $\Theta ( n )$. Since set intersection can be recognized nondeterministic...

600 citations


Journal ArticleDOI
TL;DR: A new upper bound on the mixing rate is presented, based on the solution to a multicommodity flow problem in the Markov chain viewed as a graph, and improved bounds are obtained for the runtimes of randomised approximation algorithms for various problems, including computing the permanent of a 0–1 matrix, counting matchings in graphs, and computing the partition function of a ferromagnetic Ising system.
Abstract: The paper is concerned with tools for the quantitative analysis of finite Markov chains whose states are combinatorial structures. Chains of this kind have algorithmic applications in many areas, including random sampling, approximate counting, statistical physics and combinatorial optimisation. The efficiency of the resulting algorithms depends crucially on the mixing rate of the chain, i.e., the time taken for it to reach its stationary or equilibrium distribution.The paper presents a new upper bound on the mixing rate, based on the solution to a multicommodity flow problem in the Markov chain viewed as a graph. The bound gives sharper estimates for the mixing rate of several important complex Markov chains. As a result, improved bounds are obtained for the runtimes of randomised approximation algorithms for various problems, including computing the permanent of a 0–1 matrix, counting matchings in graphs, and computing the partition function of a ferromagnetic Ising system. Moreover, solutions to the multicommodity flow problem are shown to capture the mixing rate quite closely: thus, under fairly general conditions, a Markov chain is rapidly mixing if and only if it supports a flow of low cost.

534 citations


Journal ArticleDOI
TL;DR: It is shown that Ω(n) variables are needed for first-order logic with counting to identify graphs on n vertices; the k-variable language with counting is equivalent to the (k−1)-dimensional Weisfeiler-Lehman method, and the lower bound is optimal up to multiplication by a constant.
Abstract: In this paper we show that Ω(n) variables are needed for first-order logic with counting to identify graphs on n vertices. The k-variable language with counting is equivalent to the (k−1)-dimensional Weisfeiler-Lehman method. We thus settle a long-standing open problem. Previously it was an open question whether or not 4 variables suffice. Our lower bound remains true over a set of graphs of color class size 4. This contrasts sharply with the fact that 3 variables suffice to identify all graphs of color class size 3, and 2 variables suffice to identify almost all graphs. Our lower bound is optimal up to multiplication by a constant because n variables obviously suffice to identify graphs on n vertices.
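The (k−1)-dimensional Weisfeiler-Lehman method referred to above generalises ordinary colour refinement. A minimal sketch of the 1-dimensional case (colour refinement) follows; the function and the example graphs are illustrative only, not taken from the paper.

```python
# Minimal sketch of 1-dimensional Weisfeiler-Lehman colour refinement, the
# simplest member of the hierarchy discussed above.  Graphs and names here
# are illustrative, not from the paper.
from collections import Counter

def wl_colour_refinement(adj, rounds=None):
    """adj: dict mapping each vertex to a set of neighbours."""
    colours = {v: 0 for v in adj}                    # start with a uniform colouring
    rounds = rounds if rounds is not None else len(adj)
    for _ in range(rounds):
        # new colour = old colour plus the multiset of neighbours' colours
        signatures = {
            v: (colours[v], tuple(sorted(Counter(colours[u] for u in adj[v]).items())))
            for v in adj
        }
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colours = {v: relabel[signatures[v]] for v in adj}
        if new_colours == colours:                   # stable colouring reached
            break
        colours = new_colours
    return colours

# Two non-isomorphic graphs that 1-WL cannot distinguish: the 6-cycle vs. two triangles.
c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
two_triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
print(sorted(Counter(wl_colour_refinement(c6).values()).values()))            # [6]
print(sorted(Counter(wl_colour_refinement(two_triangles).values()).values())) # [6]
```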

526 citations


Journal ArticleDOI
TL;DR: In this article, the authors apply the lower bound principle to the multinomial logistic regression model, where it becomes specifically attractive: the global lower bound is used in the Newton-Raphson iteration instead of the Hessian matrix, leading to a monotonically converging sequence of iterates.
Abstract: The lower bound principle (introduced in Bohning and Lindsay (1988, Ann. Inst. Statist. Math., 40, 641–663) and Bohning (1989, Biometrika, 76, 375–383)) consists of replacing the second derivative matrix by a global lower bound in the Loewner ordering. This bound is used in the Newton-Raphson iteration instead of the Hessian matrix, leading to a monotonically converging sequence of iterates. Here, we apply this principle to the multinomial logistic regression model, where it becomes specifically attractive.
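The principle is easiest to see in the binary logistic case, where the negative Hessian X'WX is dominated in the Loewner ordering by the fixed matrix X'X/4. The sketch below uses that fixed bound in place of the Hessian; it treats the binary rather than the multinomial model, and the data and function names are illustrative.

```python
# Sketch of the lower-bound principle for *binary* logistic regression
# (the paper treats the multinomial case; this toy version only illustrates
# the idea of replacing the Hessian by a fixed Loewner bound).
import numpy as np

def logistic_lower_bound_fit(X, y, iters=200):
    n, p = X.shape
    beta = np.zeros(p)
    # Global bound: the logistic weights w_i = p_i(1 - p_i) never exceed 1/4,
    # so X' W X <= X' X / 4 in the Loewner ordering.
    B_inv = np.linalg.inv(X.T @ X / 4.0)
    for _ in range(iters):
        probs = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - probs)              # gradient of the log-likelihood
        beta = beta + B_inv @ grad            # Newton-type step with the fixed bound
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true_beta = np.array([-0.5, 1.0, 2.0])
y = (rng.uniform(size=500) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
print(logistic_lower_bound_fit(X, y))         # should approach true_beta
```

Because the bound never changes, the inverse is computed once, and each step is guaranteed not to decrease the log-likelihood, which is the monotone convergence property mentioned above.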

481 citations


Journal ArticleDOI
TL;DR: An estimate of an upper bound of 1.75 bits for the entropy of characters in printed English is presented by constructing a word trigram model and then computing the cross-entropy between this model and a balanced sample of English text.
Abstract: We present an estimate of an upper bound of 1.75 bits for the entropy of characters in printed English, obtained by constructing a word trigram model and then computing the cross-entropy between this model and a balanced sample of English text. We suggest the well-known and widely available Brown Corpus of printed English as a standard against which to measure progress in language modeling and offer our bound as the first of what we hope will be a series of steadily decreasing bounds.
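The paper's bound comes from the cross-entropy between a word trigram model and a balanced sample of English. The toy sketch below only illustrates how such a bits-per-character cross-entropy is computed, using a character bigram model with add-one smoothing on made-up strings rather than the paper's word trigram model and the Brown Corpus.

```python
# Toy illustration of estimating bits-per-character cross-entropy: train a
# character bigram model with add-one smoothing and evaluate it on held-out
# text.  (The paper uses a word *trigram* model; the tiny strings below are
# placeholders.)
import math
from collections import Counter

train = "the quick brown fox jumps over the lazy dog " * 50
test = "the dog jumps over the brown fox"

alphabet = sorted(set(train))
bigrams = Counter(zip(train, train[1:]))
unigrams = Counter(train[:-1])

def prob(prev, ch):
    # add-one smoothed conditional probability P(ch | prev)
    return (bigrams[(prev, ch)] + 1) / (unigrams[prev] + len(alphabet))

bits = -sum(math.log2(prob(p, c)) for p, c in zip(test, test[1:]))
print(f"cross-entropy: {bits / (len(test) - 1):.3f} bits per character")
```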

402 citations


Journal ArticleDOI
TL;DR: It is proved that almost every problem decidable in exponential space has essentially maximum circuit-size and space-bounded Kolmogorov complexity almost everywhere, and it is shown that infinite pseudorandom sequences have high nonuniform complexity almost everywhere.

291 citations


Journal ArticleDOI
TL;DR: In this paper, the authors prove lower bounds of the form Ω(n · c^(−k)), for the number of bits that need to be exchanged in order to compute some (explicitly given) polynomial time computable functions.

282 citations


Proceedings ArticleDOI
01 Jul 1992
TL;DR: This paper begins the investigation of the communication complexity of unconditionally secure multi-party computation, and its relation with various fault-tolerance models, and presents upper and lower bounds on communication, as well as tradeoffs among resources.
Abstract: A secret-ballot vote for a single proposition is an example of a secure distributed computation. The goal is for m participants to jointly compute the output of some n-ary function (in this case, the sum of the votes), while protecting their individual inputs against some form of misbehavior. In this paper, we initiate the investigation of the communication complexity of unconditionally secure multi-party computation, and its relation with various fault-tolerance models. We present upper and lower bounds on communication, as well as tradeoffs among resources. First, we consider the “direct sum problem” for communication complexity of perfectly secure protocols: Can the communication complexity of securely computing a single function f : F^n → F at k sets of inputs be smaller if all are computed simultaneously than if each is computed individually? We show that the answer depends on the failure model. A factor of O(n/log n) can be gained in the privacy model (where processors are curious but correct); specifically, when f is n-ary addition (mod 2), we show a lower bound of Ω(n^2 log n) for computing f O(n) times simultaneously. No gain is possible in a slightly stronger fault model (fail-stop mode); specifically, when f is n-ary addition over GF(q), we show an exact bound of Θ(kn^2 log q) for computing f at k sets of inputs simultaneously (for any k ≥ 1). However, if one is willing to pay an additive cost in fault tolerance (from t to t − k + 1), then a variety of known non-cryptographic protocols (including “provably unparallelizable” protocols from above!) can be systematically compiled to compute one function at k sets of inputs with no increase in communication complexity. Our compilation technique is based on a new compression idea of polynomial-based multi-secret sharing. Lastly, we show how to compile private protocols into error-detecting protocols at a big savings of a factor of O(n^3) (up to a log factor) over the best known error-correcting protocols. This is a new notion of fault-tolerant protocols, and is especially useful when malicious behavior is infrequent, since error-detection implies error-correction in this case.
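For the opening secret-ballot example, the standard way to sum private inputs is additive secret sharing modulo q. A minimal sketch follows; it is only the textbook sub-protocol, not the paper's direct-sum results or its polynomial-based multi-secret sharing compiler, and the modulus and votes are made up.

```python
# Minimal sketch of the secret-ballot example: m voters privately compute the
# sum of their 0/1 votes by additive secret sharing modulo q.
import random

q = 2_147_483_647          # a prime modulus (illustrative choice)
votes = [1, 0, 1, 1, 0]    # each party's private input
m = len(votes)

# Each party splits its vote into m random shares that sum to the vote mod q,
# and sends share j to party j.
shares = []
for v in votes:
    parts = [random.randrange(q) for _ in range(m - 1)]
    parts.append((v - sum(parts)) % q)
    shares.append(parts)

# Party j adds up the shares it received; the published per-party sums reveal
# only the total, not the individual votes.
partial_sums = [sum(shares[i][j] for i in range(m)) % q for j in range(m)]
total = sum(partial_sums) % q
print("vote total:", total)   # -> 3
```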

272 citations


Journal ArticleDOI
TL;DR: Two techniques for finding the discrete orthogonal wavelet of support less than or equal to some given integer that leads to the best approximation to a given finite support signal up to a desired scale are presented.
Abstract: Two techniques for finding the discrete orthogonal wavelet of support less than or equal to some given integer that leads to the best approximation to a given finite support signal up to a desired scale are presented. The techniques are based on optimizing certain cost functions. The first technique consists of minimizing an upper bound that is derived on the L2 norm of error in approximating the signal up to the desired scale. It is shown that a solution to the problem of minimizing that bound does exist and it is explained how the constrained minimization over the parameters that define discrete finite support orthogonal wavelets can be turned into an unconstrained one. The second technique is based on maximizing an approximation to the norm of the projection of the signal on the space spanned by translates and dilates of the analyzing discrete orthogonal wavelet up to the desired scale. Both techniques can be implemented much faster than the optimization of the L2 norm of either the approximation to the given signal up to the desired scale or that of the error in that approximation.

251 citations


Journal ArticleDOI
TL;DR: In this paper, a design procedure was developed that combines linear-quadratic optimal control with regional pole placement, in which the poles of the closed-loop system are constrained to lie in specified regions of the complex plane.
Abstract: A design procedure is developed that combines linear-quadratic optimal control with regional pole placement. Specifically, a static and dynamic output-feedback control problem is addressed in which the poles of the closed-loop system are constrained to lie in specified regions of the complex plane. These regional pole constraints are embedded within the optimization process by replacing the covariance Lyapunov equation by a modified Lyapunov equation whose solution, in certain cases, leads to an upper bound on the quadratic cost functional. The results include necessary and sufficient conditions for characterizing static output-feedback controllers with bounded performance and regional pole constraints. Sufficient conditions are also presented for the fixed-order (i.e. full- and reduced-order) dynamic output-feedback problem with regional pole constraints. Circular, elliptical, vertical strip, parabolic, and section regions are considered.

250 citations


Journal ArticleDOI
TL;DR: A hybrid system to predict the secondary structures of proteins achieved 66.4% accuracy, which may suggest an upper bound on the accuracy of secondary structure predictions based on local information from the currently available protein structures, and indicate places where non-local interactions may play a dominant role in conformation.

Journal ArticleDOI
TL;DR: A composite upper bound on the redundancy as a function of the quantizer resolution that leads to a tighter bound in the high rate (low distortion) case is presented.
Abstract: Uniform quantization with dither, or lattice quantization with dither in the vector case, followed by a universal lossless source encoder (entropy coder), is a simple procedure for universal coding with distortion of a source that may take continuously many values. The rate of this universal coding scheme is examined, and a general expression is derived for it. An upper bound for the redundancy of this scheme, defined as the difference between its rate and the minimal possible rate, given by the rate distortion function of the source, is derived. This bound holds for all distortion levels. Furthermore, a composite upper bound on the redundancy as a function of the quantizer resolution that leads to a tighter bound in the high rate (low distortion) case is presented.
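A one-dimensional sketch of the quantizer front end described above (uniform quantization with subtractive dither) is given below; the step size and source are arbitrary, and the universal entropy coder that follows the quantizer is omitted.

```python
# Sketch of uniform quantization with (subtractive) dither -- the front end of
# the universal coding scheme described above; the lossless entropy coder that
# would encode q_index is omitted.
import numpy as np

rng = np.random.default_rng(1)
delta = 0.25                                    # quantizer step size
x = rng.standard_normal(10_000)                 # source samples
d = rng.uniform(-delta / 2, delta / 2, x.size)  # dither known to encoder and decoder

q_index = np.round((x + d) / delta)             # integers the entropy coder would encode
x_hat = q_index * delta - d                     # reconstruction with dither subtracted

err = x_hat - x
print("max |error|:", np.abs(err).max())        # bounded by delta / 2
print("mean squared error:", np.mean(err**2))   # close to delta**2 / 12
```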

Journal ArticleDOI
TL;DR: A modified Hopfield neural network model for regularized image restoration is presented, which allows negative autoconnections for each neuron and allows a neuron to have a bounded time delay to communicate with other neurons.
Abstract: A modified Hopfield neural network model for regularized image restoration is presented. The proposed network allows negative autoconnections for each neuron. A set of algorithms using the proposed neural network model is presented, with various updating modes: sequential updates; n-simultaneous updates; and partially asynchronous updates. The sequential algorithm is shown to converge to a local minimum of the energy function after a finite number of iterations. Since an algorithm which updates all n neurons simultaneously is not guaranteed to converge, a modified algorithm is presented, which is called a greedy algorithm. Although the greedy algorithm is not guaranteed to converge to a local minimum, the l1 norm of the residual at a fixed point is bounded. A partially asynchronous algorithm is presented, which allows a neuron to have a bounded time delay to communicate with other neurons. Such an algorithm can eliminate the synchronization overhead of synchronous algorithms.

Journal ArticleDOI
TL;DR: In this paper, the authors consider non-preemptive single machine scheduling problems using time-indexed variables and derive a variety of valid inequalities, and show the role of constraint aggregation and the knapsack problem with generalised upper bound constraints as a way of generating such inequalities.
Abstract: We consider the formulation of non-preemptive single machine scheduling problems using time-indexed variables. This approach leads to very large models, but gives better lower bounds than other mixed integer programming formulations. We derive a variety of valid inequalities, and show the role of constraint aggregation and the knapsack problem with generalised upper bound constraints as a way of generating such inequalities. A cutting plane/branch-and-bound algorithm based on these inequalities has been implemented. Computational experience on small problems with 20/30 jobs and various constraints and objective functions is presented.
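As a concrete illustration of the time-indexed formulation, the sketch below builds the binary start-time variables x[j, t] and the two families of constraints for a toy instance using the PuLP modelling library; the instance data are invented, and none of the paper's valid inequalities or cutting-plane machinery is included.

```python
# Sketch of the time-indexed formulation: x[j, t] = 1 iff job j starts at time t.
# Toy data; the paper's valid inequalities and branch-and-bound code are not
# reproduced here.
import pulp

p = {1: 3, 2: 2, 3: 4}          # processing times (made up)
w = {1: 2, 2: 5, 3: 1}          # weights for total weighted completion time
jobs = list(p)
T = sum(p.values())             # planning horizon

prob = pulp.LpProblem("single_machine_time_indexed", pulp.LpMinimize)
x = pulp.LpVariable.dicts(
    "x", [(j, t) for j in jobs for t in range(T - p[j] + 1)], cat="Binary"
)

# Objective: total weighted completion time, with C_j = t + p_j if job j starts at t.
prob += pulp.lpSum(w[j] * (t + p[j]) * x[(j, t)]
                   for j in jobs for t in range(T - p[j] + 1))

# Each job starts exactly once.
for j in jobs:
    prob += pulp.lpSum(x[(j, t)] for t in range(T - p[j] + 1)) == 1

# Machine capacity: at most one job occupies the machine in each period t.
for t in range(T):
    prob += pulp.lpSum(
        x[(j, s)]
        for j in jobs
        for s in range(max(0, t - p[j] + 1), min(t, T - p[j]) + 1)
    ) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({j: t for j in jobs for t in range(T - p[j] + 1)
       if pulp.value(x[(j, t)]) > 0.5})         # start time of each job
```

The number of variables grows with the horizon T, which is why the paper describes these models as very large but notes that their linear relaxations give better lower bounds than other mixed integer formulations.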

Journal ArticleDOI
TL;DR: This paper analyzes a class of two-machine batching and scheduling problems in which the batch processor plays an important role, presents polynomial procedures for some problems, proposes a heuristic, and establishes an upper bound on the worst case performance ratio of the heuristic for the NP-complete problem.
Abstract: We consider a situation in which the manufacturing system is equipped with batch and discrete processors. Each batch processor can process a limited number of jobs simultaneously. Once the process begins, no job can be released from the batch processor until the entire batch is processed. In this paper, we analyze a class of two-machine batching and scheduling problems in which the batch processor plays an important role. Specifically, we consider two performance measures: the makespan and the sum of job completion times. We analyze the complexity of this class of problems, present polynomial procedures for some problems, propose a heuristic, and establish an upper bound on the worst case performance ratio of the heuristic for the NP-complete problem. In addition, we extend our analysis to the case of multiple families and to the case of three-machine batching.

Proceedings ArticleDOI
28 Jun 1992
TL;DR: This paper attempts to establish upper and lower bounds on the level of performance that can be expected in an evaluation, and finds that the upper bound is very dependent on the instructions given to the judges.
Abstract: We have recently reported on two new word-sense disambiguation systems, one trained on bilingual material (the Canadian Hansards) and the other trained on monolingual material (Roget's Thesaurus and Grolier's Encyclopedia). After using both the monolingual and bilingual classifiers for a few months, we have convinced ourselves that the performance is remarkably good. Nevertheless, we would really like to be able to make a stronger statement, and therefore, we decided to try to develop some more objective evaluation measures. Although there has been a fair amount of literature on sense-disambiguation, the literature does not offer much guidance in how we might establish the success or failure of a proposed solution such as the two systems mentioned in the previous paragraph. Many papers avoid quantitative evaluations altogether, because it is so difficult to come up with credible estimates of performance. This paper will attempt to establish upper and lower bounds on the level of performance that can be expected in an evaluation. An estimate of the lower bound of 75% (averaged over ambiguous types) is obtained by measuring the performance produced by a baseline system that ignores context and simply assigns the most likely sense in all cases. An estimate of the upper bound is obtained by assuming that our ability to measure performance is largely limited by our ability to obtain reliable judgments from human informants. Not surprisingly, the upper bound is very dependent on the instructions given to the judges. Jorgensen, for example, suspected that lexicographers tend to depend too much on judgments by a single informant and found considerable variation over judgments (only 68% agreement), as she had suspected. In our own experiments, we have set out to find word-sense disambiguation tasks where the judges can agree often enough so that we could show that they were outperforming the baseline system. Under quite different conditions, we have found 96.8% agreement over judges.
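The lower-bound baseline is easy to reproduce in a few lines: ignore context and always pick the most frequent sense seen in training. The sketch below uses an invented toy data set purely to show the computation.

```python
# Sketch of the "lower bound" baseline described above: ignore context and
# always assign the most frequent sense of each ambiguous word.  The tiny
# labelled data set is invented purely for illustration.
from collections import Counter, defaultdict

# (word, gold_sense) pairs for a handful of ambiguous tokens
train = [("bank", "river"), ("bank", "money"), ("bank", "money"),
         ("duty", "tax"), ("duty", "obligation"), ("duty", "tax")]
test = [("bank", "money"), ("bank", "river"), ("duty", "tax"), ("duty", "tax")]

counts = defaultdict(Counter)
for word, sense in train:
    counts[word][sense] += 1
baseline = {word: c.most_common(1)[0][0] for word, c in counts.items()}

correct = sum(baseline[word] == sense for word, sense in test)
print(f"baseline accuracy: {correct / len(test):.0%}")   # 75% on this toy set
```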

Proceedings ArticleDOI
01 Jul 1992
TL;DR: It is proved that the minimum degree over all the approximating polynomials of f is Θ(√(n(n − Γ(f)))).
Abstract: In this paper, we provide matching (up to a constant factor) upper and lower bounds on the degree of polynomials that represent symmetric boolean functions with an error 1/3. Let Γ(f) = min{|2k − n + 1| : f_k ≠ f_{k+1} and 0 ≤ k ≤ n − 1}, where f_i is the value of f on inputs with exactly i 1's. We prove that the minimum degree over all the approximating polynomials of f is Θ(√(n(n − Γ(f)))). We apply the techniques and tools from approximation theory to derive this result.
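A small sketch of the quantities in the theorem: compute Γ(f) from the value vector (f_0, …, f_n) of a non-constant symmetric function and form the estimate √(n(n − Γ(f))), with constant factors omitted.

```python
# Sketch: compute Gamma(f) for a (non-constant) symmetric Boolean function
# given by its value vector (f_0, ..., f_n), and the approximate-degree
# estimate sqrt(n * (n - Gamma(f))) from the theorem above (constants omitted).
import math

def gamma(f_values):
    n = len(f_values) - 1
    jumps = [abs(2 * k - n + 1)
             for k in range(n)
             if f_values[k] != f_values[k + 1]]
    return min(jumps)

def degree_estimate(f_values):
    n = len(f_values) - 1
    return math.sqrt(n * (n - gamma(f_values)))

n = 15
or_values = [0] + [1] * n                                # OR_n: single jump at k = 0
maj_values = [0] * ((n + 1) // 2) + [1] * (n // 2 + 1)   # MAJORITY_n

print(gamma(or_values), degree_estimate(or_values))    # Gamma = n - 1, estimate ~ sqrt(n)
print(gamma(maj_values), degree_estimate(maj_values))  # Gamma = 0 for odd n, estimate = n
```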

Journal ArticleDOI
01 Oct 1992-Oikos
TL;DR: This paper presents and tests a method whereby the upper bound slope can be measured by dividing assemblage data into size classes, and using the highest abundance in each class to estimate the regression slope.
Abstract: Plots of species body size versus abundance in natural assemblages of animals often reveal polygonal relationships. Theory predicts that the slope of the upper bound of such polygons will approximate to −0.75, but this has never been tested because of the lack of a statistical technique to estimate the slope. We present and test a method whereby the upper bound slope can be measured by dividing assemblage data into size classes, and using the highest abundance in each class to estimate the regression slope. The method accurately estimates slopes from simulated data of known upper bound, and estimates from real and simulated data vary in a similar manner when different numbers of size classes are used. We conclude that this method may be used with confidence to estimate upper bound slopes from polygonal relationships, and gives the first chance to test different hypotheses about the magnitude of such slopes.
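A sketch of the proposed estimator on synthetic data: bin log body sizes into size classes, keep the highest abundance in each class, and regress those maxima on size. The simulated assemblage below is illustrative only.

```python
# Sketch of the upper-bound regression described above: bin log body sizes into
# size classes, take the highest abundance in each class, and regress
# log(max abundance) on log(size).  Synthetic, illustrative data only.
import numpy as np

rng = np.random.default_rng(2)
n_species = 400
log_size = rng.uniform(0, 6, n_species)                   # log10 body sizes
log_upper = 5.0 - 0.75 * log_size                          # true upper bound, slope -0.75
log_abund = log_upper - rng.exponential(1.0, n_species)    # abundances scattered below it

n_classes = 10
edges = np.linspace(log_size.min(), log_size.max(), n_classes + 1)
cls = np.clip(np.digitize(log_size, edges) - 1, 0, n_classes - 1)

# highest abundance (and its size) within each size class
xs, ys = [], []
for c in range(n_classes):
    mask = cls == c
    if mask.any():
        i = np.argmax(log_abund[mask])
        xs.append(log_size[mask][i])
        ys.append(log_abund[mask][i])

slope, intercept = np.polyfit(xs, ys, 1)
print(f"estimated upper-bound slope: {slope:.2f}")   # should be close to -0.75
```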

Journal ArticleDOI
TL;DR: An improved lower bound of 1.540 is given for the parametric case where all items are smaller than or equal to 1/r, r ∈ N+, and this work gives improved lower bounds for the asymptotic worst case ratio.

Journal ArticleDOI
TL;DR: An algorithm which determines the number of integer points in a polyhedron to within a multiplicative factor of 1+ε in time polynomial in m, ϕ and 1/ε when the dimension n is fixed is described.
Abstract: We give an upper bound on the number of vertices of P_I, the integer hull of a polyhedron P, in terms of the dimension n of the space, the number m of inequalities required to describe P, and the size ϕ of these inequalities. For fixed n the bound is O(m^n ϕ^(n−1)). We also describe an algorithm which determines the number of integer points in a polyhedron to within a multiplicative factor of 1+ε in time polynomial in m, ϕ and 1/ε when the dimension n is fixed.

Journal ArticleDOI
TL;DR: In this article, it is shown that no online scheduling algorithm can guarantee a cumulative value greater than 1/4th the value obtainable by a clairvoyant scheduler, where, if a task request is successfully scheduled to completion, a value equal to the task's execution time is obtained, and otherwise no value is obtained.
Abstract: With respect to on-line scheduling algorithms that must direct the service of sporadic task requests, we quantify the benefit of clairvoyancy, i.e., the power of possessing knowledge of various task parameters of future events. Specifically, we consider the problem of preemptively scheduling sporadic task requests in both uni- and multi-processor environments. If a task request is successfully scheduled to completion, a value equal to the task's execution time is obtained; otherwise no value is obtained. We prove that no on-line scheduling algorithm can guarantee a cumulative value greater than 1/4th the value obtainable by a clairvoyant scheduler; i.e., we prove a 1/4th upper bound on the competitive factor of on-line real-time schedulers. We present an online uniprocessor scheduling algorithm TD 1 that actually has a competitive factor of 1/4; this bound is thus shown to be tight. We further consider the effect of restricting the amount of overloading permitted (the loading factor), and quantify the relationship between the loading factor and the upper bound on the competitive factor. Other results of a similar nature deal with the effect of value densities (measuring the importance or type of a task). Generalizations to dual-processor on-line scheduling are also considered. For the dual-processor case, we prove an upper bound of 1/2 on the competitive factor. This bound is shown to be tight in the special case when all the tasks have the same density and zero laxity.

Journal ArticleDOI
TL;DR: In this article, the authors derived finite time estimates for simulated annealing and gave a sharp upper bound for the probability that the energy is close to its minimum value, which involves a new constant, the difficulty of the energy landscape.
Abstract: Simulated annealing algorithms are time inhomogeneous controlled Markov chains used to search for the minima of energy functions defined on finite state spaces. The control parameters, the so-called cooling schedule, control the probability that the energy should increase during one step of the algorithm. Most of the studies on simulated annealing have dealt with limit theorems, such as characterizing convergence conditions on the cooling schedule, or giving an equivalent of the law of the process for one fixed cooling schedule. In this paper we derive finite time estimates. These estimates are uniform in the cooling schedule and in the energy function. With new technical tools, we gain a new insight into the algorithm. We give a sharp upper bound for the probability that the energy is close to its minimum value. Hence we characterize the optimal convergence rate. This involves a new constant, the "difficulty" of the energy landscape. We calculate two cooling schedules for which our bound is almost reached. In one case it is reached up to a multiplicative constant for one energy function. In the other case it is reached in the sense of logarithmic equivalence uniformly in the energy function. These two schedules are both triangular: There is one different schedule for each finite simulation time. For each fixed finite time the second schedule has the currently used but previously mathematically unjustified exponential form. Finally, the title is "Rough large deviation estimates" because we have computed sharper ones (i.e., with sharp multiplicative constants) in two other papers.
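For orientation, the sketch below runs finite-horizon simulated annealing with the exponential (geometric) form of cooling schedule mentioned above; the toy energy function and schedule constants are illustrative and unrelated to the paper's optimized triangular schedules.

```python
# Sketch of finite-horizon simulated annealing with an exponential (geometric)
# cooling schedule.  The energy function and constants are illustrative only.
import math
import random

def energy(state):
    # a toy rugged energy on the states 0..99
    return (state % 7) + 0.05 * abs(state - 60)

horizon = 5_000                       # fixed finite simulation time
T0, Tend = 5.0, 0.01
ratio = (Tend / T0) ** (1 / horizon)  # geometric cooling over the horizon

random.seed(0)
x = random.randrange(100)
best = x
T = T0
for _ in range(horizon):
    y = random.randrange(100)         # propose a uniformly random candidate state
    dE = energy(y) - energy(x)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        x = y                         # accept uphill moves with probability exp(-dE/T)
    if energy(x) < energy(best):
        best = x
    T *= ratio

print("best state:", best, "energy:", round(energy(best), 3))
```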

Journal ArticleDOI
TL;DR: For several NP-hard optimal linear labeling problems, including the bandwidth, the cutwidth, and the min-sum problem for graphs, a heuristic algorithm is proposed which finds approximate solutions to these problems in polynomial time.

Journal ArticleDOI
TL;DR: A complete proof is given of a quasipolynomial upper bound n^(log d + 2) on ∆(d, n), the maximal diameter of the graph of a d-dimensional polyhedron with n facets; a randomized pivot rule for linear programming with subexponential expected running time is also announced.
Abstract: The diameter of the graph of a d-dimensional polyhedron with n facets is at most n^(log d + 2). Let P be a convex polyhedron. The graph of P, denoted by G(P), is an abstract graph whose vertices are the extreme points of P, and two vertices u and v are adjacent if the interval [v, u] is an extreme edge (= 1-dimensional face) of P. The diameter of the graph of P is denoted by δ(P). Let ∆(d, n) be the maximal diameter of the graphs of d-dimensional polyhedra P with n facets. (A facet is a (d−1)-dimensional face.) Thus, P is the set of solutions of n linear inequalities in d variables. It is an old standing problem to determine the behavior of the function ∆(d, n). The value of ∆(d, n) is a lower bound for the number of iterations needed for Dantzig's simplex algorithm for linear programming with any pivot rule. In 1957 Hirsch conjectured [2] that ∆(d, n) ≤ n − d. Klee and Walkup [6] showed that the Hirsch conjecture is false for unbounded polyhedra. They proved that for n ≥ 2d, ∆(d, n) ≥ n − d + [d/5]. This is the best known lower bound for ∆(d, n). The statement of the Hirsch conjecture for bounded polyhedra is still open. For a recent survey on the Hirsch conjecture and its relatives, see [5]. In 1967 Barnette proved [1, 3] that ∆(d, n) ≤ n·3^(d−3). An improved upper bound, ∆(d, n) ≤ n·2^(d−3), was proved in 1970 by Larman [7]. Barnette's and Larman's bounds are linear in n but exponential in the dimension d. In 1990 the first author [4] proved a subexponential bound ∆(d, n) ≤ 2^(O(√(n−d))). The purpose of this paper is to announce and to give a complete proof of a quasipolynomial upper bound for ∆(d, n). Such a bound was proved by the first author in March 1991. The proof presented here is a substantial simplification that was subsequently found by the second author. See [4] for the original proof and related results. The existence of a polynomial (or even linear) upper bound for ∆(d, n) is still open. Recently, the first author found a randomized pivot rule for linear programming which requires an expected n^(O(√d)) (or fewer) arithmetic operations for every linear programming problem with d variables and n constraints.

Journal ArticleDOI
TL;DR: Two theorems are proved, which together extend the universal coding theorems to a large class of data generating densities and give an asymptotic upper bound for the code redundancy in the order of magnitude, achieved with a special predictive type of histogram estimator, which sharpens a related bound.
Abstract: The results by P. Hall and E.J. Hannan (1988) on optimization of histogram density estimators with equal bin widths by minimization of the stochastic complexity are extended and sharpened in two separate ways. As the first contribution, two generalized histogram estimators are constructed. The first has unequal bin widths which, together with the number of the bins, are determined by minimization of the stochastic complexity using dynamic programming. The other estimator consists of a mixture of equal bin width estimators, each of which is defined by the associated stochastic complexity. As the main contribution in the present work, two theorems are proved, which together extend the universal coding theorems to a large class of data generating densities. The first gives an asymptotic upper bound for the code redundancy in the order of magnitude, achieved with a special predictive type of histogram estimator, which sharpens a related bound. The second theorem states that this bound cannot be improved upon by any code whatsoever.

Journal ArticleDOI
TL;DR: New lower bounds for the quadratic assignment problem QAP are presented, based on the orthogonal relaxation of QAP, and an additional improvement is obtained by making efficient use of a tractable representation of orthogonal matrices having constant row and column sums.
Abstract: New lower bounds for the quadratic assignment problem QAP are presented. These bounds are based on the orthogonal relaxation of QAP. The additional improvement is obtained by making efficient use of a tractable representation of orthogonal matrices having constant row and column sums. The new bound is easy to implement and often provides high quality bounds under an acceptable computational effort.
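The starting point of such bounds is the classical eigenvalue bound obtained from the orthogonal relaxation of min_P trace(A P B Pᵀ): over all orthogonal matrices the minimum equals the minimal scalar product of the eigenvalues of A and B. The sketch below computes that basic bound on a random symmetric instance and compares it with the brute-force optimum; the paper's refinement via orthogonal matrices with constant row and column sums is not implemented here.

```python
# Sketch of the basic eigenvalue bound from the orthogonal relaxation of the
# QAP min_P trace(A P B P^T): for symmetric A, B the minimum over orthogonal
# matrices is the minimal scalar product of their eigenvalues.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.integers(0, 10, (n, n)).astype(float)
A = (A + A.T) / 2                                        # symmetric "flow" matrix
B = rng.integers(0, 10, (n, n)).astype(float)
B = (B + B.T) / 2                                        # symmetric "distance" matrix

eig_A = np.sort(np.linalg.eigvalsh(A))                   # ascending
eig_B = np.sort(np.linalg.eigvalsh(B))[::-1]             # descending
lower_bound = float(eig_A @ eig_B)                       # minimal scalar product

# Compare with the true optimum by brute force (n = 6 is small enough).
best = min(
    np.trace(A @ np.eye(n)[list(p)] @ B @ np.eye(n)[list(p)].T)
    for p in itertools.permutations(range(n))
)

print("eigenvalue lower bound:", round(lower_bound, 2))
print("optimal QAP value:     ", round(best, 2))
```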

Journal ArticleDOI
Gene Myers
TL;DR: This work places a new worst-case upper bound on regular expression pattern matching using a combination of the node-listing and “Four-Russians” paradigms and provides an implementation that is faster than existing software for small regular expressions.
Abstract: Given a regular expression R of length P and a word A of length N, the membership problem is to determine if A is in the language denoted by R. An O(PN/lg N) time algorithm is presented that is based on a lg N speedup of the standard O(PN) time simulation of R's nondeterministic finite automaton on A, using a combination of the node-listing and “Four-Russians” paradigms. This result places a new worst-case upper bound on regular expression pattern matching. Moreover, in practice the method provides an implementation that is faster than existing software for small regular expressions.
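The baseline that the lg N speedup applies to is the standard state-set simulation of the nondeterministic automaton. A minimal sketch is below, using a hand-built NFA for (a|b)*abb; Myers' node-listing/Four-Russians machinery is not reproduced.

```python
# Baseline O(PN)-style membership test: simulate a nondeterministic finite
# automaton by carrying a set of active states across the input.  The NFA
# below, for the language (a|b)*abb, is hand-built for illustration.
nfa = {  # state -> {symbol: set of successor states}
    0: {"a": {0, 1}, "b": {0}},
    1: {"b": {2}},
    2: {"b": {3}},
    3: {},
}
start, accepting = 0, {3}

def accepts(word):
    active = {start}
    for ch in word:
        active = {t for s in active for t in nfa[s].get(ch, set())}
    return bool(active & accepting)

print(accepts("abaabb"))   # True  (ends in "abb")
print(accepts("ababab"))   # False
```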

Journal ArticleDOI
TL;DR: In this paper, the smoothness of the solutions of dilation equations is studied and a sharp limit of the Sobolev exponent of the solution is given as a function of the spectral radius of an associated finite-dimensional positive operator.
Abstract: This work studies the smoothness of the solutions of dilation equations, which are encountered in the multiresolution analysis and iterative interpolation processes. A sharp limit of the Sobolev exponent of the solution is given as a function of the spectral radius of an associated finite-dimensional positive operator. In addition, tools are given to get good explicit upper and lower bounds for the exponent.

Patent
13 Nov 1992
TL;DR: In this article, the problem of comparing the relative phase of carrier signals received from GPS satellites to determine the roll, pitch and azimuth attitude of ships, aircraft, land vehicles, or survey instruments, accomplishes a maximum likelihood estimation (MLE) optimum solution over the full range of integers and vehicle attitudes.
Abstract: A method for comparing the relative phase of carrier signals received from GPS satellites to determine the roll, pitch and azimuth attitude of ships, aircraft, land vehicles, or survey instruments accomplishes a maximum likelihood estimation (MLE) optimum solution over the full range of integers and vehicle attitudes. The problem is formulated as an MLE optimization, where vehicle attitude and integers are regarded as parameters to be adjusted to maximize the probability of the first-difference carrier phase measurements that are actually generated by the hardware. The formulation results in the weighted-fit error W as the objective criterion to minimize. A Kalman filter is introduced, having the same objective criterion. Minimizing computation in the Kalman filter leads to a decision tree for the integers. Two ways are shown to prune the decision tree. The first is to exclude impossible combinations, such as those that produce an antenna upside down. The second is to generate a lower bound for W at each branch of the tree. A running sum is kept at each stage moving down the tree. When that sum exceeds a reasonableness bound or the current best W found elsewhere in the search, it is guaranteed that all subsequent integer combinations further down the current branch will produce a larger W, and the remainder of the current branch can be cut off, speeding up the search.
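The second pruning rule is a generic branch-and-bound idea: carry a running sum of nonnegative per-level costs down the decision tree and cut a branch as soon as it exceeds the best complete W found so far. The sketch below illustrates only that idea with made-up per-level costs; the carrier-phase measurement model and Kalman filter are not modelled.

```python
# Generic sketch of the decision-tree pruning idea: search over small integer
# combinations, keep a running sum of nonnegative per-level costs, and cut off
# a branch as soon as the partial sum exceeds the best complete fit error W
# found so far.  The per-level cost function is a stand-in for the
# carrier-phase weighted-fit error, which is not modelled here.
def branch_and_bound(costs_per_level, candidates=(-2, -1, 0, 1, 2)):
    best = {"W": float("inf"), "choice": None}

    def descend(level, partial_sum, chosen):
        if partial_sum >= best["W"]:          # running sum already too large: prune
            return
        if level == len(costs_per_level):
            best["W"], best["choice"] = partial_sum, tuple(chosen)
            return
        for n in candidates:                  # try each integer at this level
            cost = costs_per_level[level](n)
            descend(level + 1, partial_sum + cost, chosen + [n])

    descend(0, 0.0, [])
    return best

# Toy per-level costs: each level prefers a particular integer.
targets = [1, -2, 0, 2]
costs = [lambda n, t=t: (n - t) ** 2 for t in targets]
print(branch_and_bound(costs))   # {'W': 0.0, 'choice': (1, -2, 0, 2)}
```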

Journal ArticleDOI
TL;DR: Results show that the approach is applicable to general linearly elliptic systems, including unsymmetrical operators, and that the method is valid for broad classes of linear and non-linear problems.