
Showing papers in "Information & Computation in 1994"


Journal ArticleDOI
TL;DR: A simple and effective method, based on weighted voting, is introduced for constructing a compound algorithm, which is robust in the presence of errors in the data, and is called the Weighted Majority Algorithm.
Abstract: We study the construction of prediction algorithms in a situation in which a learner faces a sequence of trials, with a prediction to be made in each, and the goal of the learner is to make few mistakes. We are interested in the case where the learner has reason to believe that one of some pool of known algorithms will perform well, but the learner does not know which one. A simple and effective method, based on weighted voting, is introduced for constructing a compound algorithm in such a circumstance. We call this method the Weighted Majority Algorithm. We show that this algorithm is robust in the presence of errors in the data. We discuss various versions of the Weighted Majority Algorithm and prove mistake bounds for them that are closely related to the mistake bounds of the best algorithms of the pool. For example, given a sequence of trials, if there is an algorithm in the pool A that makes at most m mistakes then the Weighted Majority Algorithm will make at most c(log |A| + m) mistakes on that sequence, where c is a fixed constant.
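The voting-and-halving scheme can be sketched in a few lines (a minimal illustration, assuming a pool of expert functions, {0, 1} labels, and a halving factor beta = 1/2; the paper analyzes more general update factors):

```python
def weighted_majority(experts, trials, beta=0.5):
    """Predict by weighted vote over `experts`; multiply the weight of each
    expert that errs by `beta`. Returns the number of mistakes made."""
    weights = [1.0] * len(experts)
    mistakes = 0
    for x, label in trials:
        votes = [e(x) for e in experts]
        w1 = sum(w for w, v in zip(weights, votes) if v == 1)
        w0 = sum(w for w, v in zip(weights, votes) if v == 0)
        prediction = 1 if w1 >= w0 else 0
        if prediction != label:
            mistakes += 1
        # Penalize the experts that predicted wrongly on this trial.
        weights = [w * beta if v != label else w
                   for w, v in zip(weights, votes)]
    return mistakes
```

With a perfect expert in the pool, erring experts lose weight geometrically, which is the mechanism behind the c(log |A| + m) mistake bound.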

2,093 citations


Journal ArticleDOI
TL;DR: A new approach to proving type soundness for Hindley/Milner-style polymorphic type systems by an adaptation of subject reduction theorems from combinatory logic to programming languages and the use of rewriting techniques for the specification of the language semantics is presented.
Abstract: We present a new approach to proving type soundness for Hindley/Milner-style polymorphic type systems. The keys to our approach are (1) an adaptation of subject reduction theorems from combinatory logic to programming languages, and (2) the use of rewriting techniques for the specification of the language semantics. The approach easily extends from polymorphic functional languages to imperative languages that provide references, exceptions, continuations, and similar features. We illustrate the technique with a type soundness theorem for the core of Standard ML, which includes the first type soundness proof for polymorphic exceptions and continuations.

1,198 citations


Journal ArticleDOI
TL;DR: It is shown that the expressiveness of the timed μ-calculus is incomparable to the expressiveness of timed CTL, but this does not impair the symbolic verification of "implementable" real-time programs: those whose safety constraints are machine-closed with respect to diverging time and whose fairness constraints are restricted to finite upper bounds on clock values.
Abstract: We describe finite-state programs over real-numbered time in a guarded-command language with real-valued clocks or, equivalently, as finite automata with real-valued clocks. Model checking answers the question of which states of a real-time program satisfy a branching-time specification (given in an extension of CTL with clock variables). We develop an algorithm that computes this set of states symbolically as a fixpoint of a functional on state predicates, without constructing the state space. For this purpose, we introduce a μ-calculus on computation trees over real-numbered time. Unfortunately, many standard program properties, such as response for all nonzeno execution sequences (during which time diverges), cannot be characterized by fixpoints: we show that the expressiveness of the timed μ-calculus is incomparable to the expressiveness of timed CTL. Fortunately, this result does not impair the symbolic verification of "implementable" real-time programs: those whose safety constraints are machine-closed with respect to diverging time and whose fairness constraints are restricted to finite upper bounds on clock values. All timed CTL properties of such programs are shown to be computable as finitely approximable fixpoints in a simple decidable theory.

998 citations


Journal ArticleDOI
Moshe Y. Vardi, Pierre Wolper
TL;DR: This work investigates extensions of temporal logic by connectives defined by finite automata on infinite words and shows that they do not increase the expressive power of the logic or the complexity of the decision problem.
Abstract: We investigate extensions of temporal logic by connectives defined by finite automata on infinite words. We consider three different logics, corresponding to three different types of acceptance conditions (finite, looping, and repeating) for the automata. It turns out, however, that these logics all have the same expressive power and that their decision problems are all PSPACE-complete. We also investigate connectives defined by alternating automata and show that they do not increase the expressive power of the logic or the complexity of the decision problem.

928 citations


Journal ArticleDOI
TL;DR: It is shown that several d-unit delay constructs such as timeouts and watchdogs can be expressed in terms of the unit-delay operator and standard process algebra operators.
Abstract: The algebra of timed processes, ATP, uses a notion of discrete global time and suggests a conceptual framework for introducing time by extending untimed languages. The action vocabulary of ATP contains a special element representing the progress of time. The algebra has, apart from standard operators of process algebras such as prefixing by an action, alternative choice, and parallel composition, a primitive unit-delay operator. For two arguments, processes P and Q, this operator gives a process which behaves as P before the execution of a time event and behaves as Q afterwards. It is shown that several d-unit delay constructs such as timeouts and watchdogs can be expressed in terms of the unit-delay operator and standard process algebra operators. A sound and complete axiomatization for bisimulation semantics is studied and two examples illustrating the adequacy of the language for the description of timed systems are given. Finally we provide a comparison with existing timed process algebras.

286 citations


Journal ArticleDOI
TL;DR: Algorithms are given that, for any tree T, compute vs(T) in linear time and compute an optimal layout with respect to vertex separation in time O(n log n).
Abstract: We relate two concepts in graph theory and algorithmic complexity, namely the search number and the vertex separation of a graph. Let s(G) denote the search number and vs(G) denote the vertex separation of a connected, undirected graph G. We show that vs(G) ≤ s(G) ≤ vs(G) + 2 and we give a simple transformation from G to G′ such that vs(G′) = s(G). We characterize those trees having a given vertex separation and describe the smallest such trees. We also note that there exist trees for which the difference between search number and vertex separation is indeed 2. We give algorithms that, for any tree T, compute vs(T) in linear time and compute an optimal layout with respect to vertex separation in time O(n log n). Vertex separation has previously been related to progressive black/white pebble demand and has been shown to be identical to a variant of search number, node search number, and to path width, which has been related to gate matrix layout cost. All these properties are known to be computationally intractable. For fixed k, an O(n log² n) algorithm is known which decides whether a graph has path width at most k.
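The layout-based definition of vertex separation can be made concrete with an exponential brute force over all layouts (an illustration only; the function name and input encoding are assumptions, and the paper's algorithms for trees are far more efficient):

```python
from itertools import permutations

def vertex_separation(vertices, edges):
    """Brute-force vs(G): minimum over all layouts of the maximum, over cut
    positions i, of the number of vertices placed at position <= i that still
    have a neighbour placed at a position > i."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = len(vertices)
    for layout in permutations(vertices):
        pos = {v: i for i, v in enumerate(layout)}
        width = max(
            sum(1 for v in layout[:i + 1]
                if any(pos[w] > i for w in adj[v]))
            for i in range(len(layout)))
        best = min(best, width)
    return best
```

For example, a path has vertex separation 1, while a triangle has vertex separation 2, matching the definition above.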

249 citations


Journal ArticleDOI
TL;DR: This work defines both a dynamic and a static semantics for an ML-like language and proves that they are consistently related, and presents a reconstruction algorithm that computes the principal type and the minimal observable effect of expressions.
Abstract: The type and effect discipline is a new framework for reconstructing the principal type and the minimal effect of expressions in implicitly typed polymorphic functional languages that support imperative constructs. The type and effect discipline outperforms other polymorphic type systems. Just as types abstract collections of concrete values, effects denote imperative operations on regions. Regions abstract sets of possibly aliased memory locations. Effects are used to control type generalization in the presence of imperative constructs while regions delimit observable side-effects. The observable effects of an expression range over the regions that are free in its type environment and its type; effects related to local data structures can be discarded during type reconstruction. The type of an expression can be generalized with respect to the type variables that are not free in the type environment or in the observable effect. Introducing the type and effect discipline, we define both a dynamic and a static semantics for an ML-like language and prove that they are consistently related. We present a reconstruction algorithm that computes the principal type and the minimal observable effect of expressions. We prove its correctness with respect to the static semantics.

208 citations


Journal ArticleDOI
TL;DR: This paper builds the theoretical basis for a uniform and efficient method to automatically verify bisimulation-like relations between processes by means of model checking and demonstrates the expressive power of intuitionistically interpreted Hennessy-Milner Logic with mutual recursion.
Abstract: Characteristic formulae have been introduced by Graf and Sifakis to relate equational reasoning about processes to reasoning in a modal logic, and therefore to allow proofs about processes to be carried out in a logical framework. This work, which concerned finite processes and bisimulation-like equivalences, was later extended to finite-state processes and further equivalences. Based upon an intuitionistic understanding of Hennessy-Milner Logic (HML) with mutual recursion, we extend these results to cover bisimulation-like preorders, which are sensitive to liveness properties. This demonstrates the expressive power of intuitionistically interpreted HML with mutual recursion, and it builds the theoretical basis for a uniform and efficient method to automatically verify bisimulation-like relations between processes by means of model checking.

139 citations


Journal ArticleDOI
TL;DR: It is shown that maximal discrimination between pure λ-terms is obtained when all operators are considered, that this discrimination coincides with the equivalence induced by the encoding into the π-calculus, and that the adoption of certain non-deterministic operators is sufficient and necessary to induce it.
Abstract: The use of λ-calculus in richer settings, possibly involving parallelism, is examined in terms of the effect on the equivalence between λ-terms. We concentrate on Abramsky's lazy λ-calculus and we follow two directions. Firstly, the λ-calculus is studied within a process calculus by examining the equivalence [formula] induced by Milner's encoding into the π-calculus. We start from a characterization of [formula] presented in Sangiorgi's Ph.D. thesis (1992). We derive a few simpler operational characterisations, from which we prove full abstraction w.r.t. Lévy-Longo trees. Secondly, we examine Abramsky's applicative bisimulation when the λ-calculus is augmented with (well-formed) operators, that is, symbols equipped with reduction rules describing their behaviour. In this way, the maximal discrimination between pure λ-terms (i.e., the finest behavioural equivalence) is obtained when all operators are used. We prove that the presence of certain non-deterministic operators is sufficient and necessary to induce it and that it coincides with the discrimination given by [formula]. We conclude that the introduction of non-determinism into the λ-calculus is exactly what makes applicative bisimulation appropriate for reasoning about the functional terms when concurrent features are also present in the language, or when they are embedded into a concurrent language.

136 citations


Journal ArticleDOI
TL;DR: This model is related to Valiant's PAC learning model, but does not require the hypotheses used for prediction to be represented in any specified form, and shows how to construct prediction strategies that are optimal to within a constant factor for any reasonable class F of target functions.
Abstract: We consider the problem of predicting {0, 1}-valued functions on Rⁿ and smaller domains, based on their values on randomly drawn points. Our model is related to Valiant's PAC learning model, but does not require the hypotheses used for prediction to be represented in any specified form. In our main result we show how to construct prediction strategies that are optimal to within a constant factor for any reasonable class F of target functions. This result is based on new combinatorial results about classes of functions of finite VC dimension. We also discuss more computationally efficient algorithms for predicting indicator functions of axis-parallel rectangles, more general intersection-closed concept classes, and halfspaces in Rⁿ. These are also optimal to within a constant factor. Finally, we compare the general performance of prediction strategies derived by our method to that of those derived from methods in PAC learning theory.

134 citations


Journal ArticleDOI
TL;DR: This work gives a procedure for converting any GSOS language definition to a finite complete equational axiom system (possibly with one infinitary induction principle) which precisely characterizes strong bisimulation of processes.
Abstract: Many process algebras are defined by structural operational semantics (SOS). Indeed, most such definitions are nicely structured and fit the GSOS format of Bloom et al. (J. Assoc. Comput. Mach., to appear). We give a procedure for converting any GSOS language definition to a finite complete equational axiom system (possibly with one infinitary induction principle) which precisely characterizes strong bisimulation of processes.
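For a finite labelled transition system, strong bisimilarity itself can be computed as a greatest fixpoint by iterated refinement (a naive sketch under an assumed LTS encoding; this illustrates the relation being axiomatized, not the paper's conversion procedure):

```python
def strong_bisimilar(states, trans, p, q):
    """Greatest-fixpoint computation of strong bisimilarity on a finite LTS.
    `trans` maps each state to a set of (action, successor) pairs."""
    # Start from the full relation and delete pairs that fail the
    # bisimulation transfer condition until nothing changes.
    rel = {(s, t) for s in states for t in states}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            ok = all(any(a2 == a and (s2, t2) in rel
                         for (a2, t2) in trans[t])
                     for (a, s2) in trans[s]) \
                 and all(any(a2 == a and (s2, t2) in rel
                             for (a2, s2) in trans[s])
                         for (a, t2) in trans[t])
            if not ok:
                rel.discard((s, t))
                changed = True
    return (p, q) in rel
```

Each pass removes pairs whose moves cannot be matched, so the loop terminates on any finite system.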

Journal ArticleDOI
TL;DR: An algorithm for solving systems of linear Diophantine equations, based on a generalization of an algorithm for solving one equation due to Fortenbacher; it can solve a system as a whole, or be used incrementally when the system is a sequential accumulation of several subsystems.
Abstract: In this paper, we describe an algorithm for solving systems of linear Diophantine equations based on a generalization of an algorithm for solving one equation due to Fortenbacher. It can solve a system as a whole, or be used incrementally when the system is a sequential accumulation of several subsystems. The proof of termination of the algorithm is difficult, whereas the proofs of completeness and correctness are straightforward generalizations of Fortenbacher's proof.
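To make concrete what is being computed, here is a bounded brute force that enumerates the componentwise-minimal non-negative solutions of a single homogeneous linear Diophantine equation (an illustration of the solution basis, not the Fortenbacher-style algorithm of the paper; the entry bound is an assumption):

```python
from itertools import product

def minimal_solutions(coeffs, bound):
    """Enumerate non-negative integer solutions x of sum(c_i * x_i) == 0 with
    entries up to `bound`, keeping only the componentwise-minimal non-zero
    ones (the basis from which all bounded solutions are sums)."""
    sols = [x for x in product(range(bound + 1), repeat=len(coeffs))
            if any(x) and sum(c * xi for c, xi in zip(coeffs, x)) == 0]
    def dominates(a, b):
        # a is strictly above b componentwise, so a is not minimal.
        return all(ai >= bi for ai, bi in zip(a, b)) and a != b
    return sorted(x for x in sols
                  if not any(dominates(x, y) for y in sols))
```

For the equation x₁ − 2x₂ = 0 the only minimal solution is (2, 1); (4, 2) and higher multiples are dominated by it.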

Journal ArticleDOI
TL;DR: A reduction from the halting problem for two-counter Turing machines is used to show that the subtyping and typing relations of F≤ are undecidable.
Abstract: F≤ is a typed λ-calculus with subtyping and bounded second-order polymorphism. First introduced by Cardelli and Wegner, it has been widely studied as a core calculus for type systems with subtyping. We use a reduction from the halting problem for two-counter Turing machines to show that the subtyping and typing relations of F≤ are undecidable.

Journal ArticleDOI
TL;DR: This work extends the specification language of temporal logic, the corresponding verification framework, and the underlying computational model to deal with real-time properties of reactive systems, covering systems that communicate through shared variables or by message passing.
Abstract: We extend the specification language of temporal logic, the corresponding verification framework, and the underlying computational model to deal with real-time properties of reactive systems. The abstract notion of timed transition systems generalizes traditional transition systems conservatively: qualitative fairness requirements are replaced (and superseded) by quantitative lower-bound and upper-bound timing constraints on transitions. This framework can model real-time systems that communicate either through shared variables or by message passing and real-time issues such as timeouts, process priorities (interrupts), and process scheduling. We exhibit two styles for the specification of real-time systems. While the first approach uses time-bounded versions of the temporal operators, the second approach allows explicit references to time through a special clock variable. Corresponding to the two styles of specification, we present and compare two different proof methodologies for the verification of timing requirements that are expressed in these styles. For the bounded-operator style, we provide a set of proof rules for establishing bounded-invariance and bounded-response properties of timed transition systems. This approach generalizes the standard temporal proof rules for verifying invariance and response properties conservatively. For the explicit-clock style, we exploit the observation that every time-bounded property is a safety property and use the standard temporal proof rules for establishing safety properties.

Journal ArticleDOI
TL;DR: In this article, a new formal embodiment of J.-Y. Girard's geometry of interaction program is given, and the computational interpretation is sketched in terms of data-flow nets.
Abstract: A new formal embodiment of J.-Y. Girard's (1989) geometry of interaction program is given. The geometry of interaction interpretation considered is defined, and the computational interpretation is sketched in terms of dataflow nets. Some examples that illustrate the key ideas underlying the interpretation are given. The results, which include the semantic analogue of cut-elimination, stated in terms of a finite convergence property, are outlined.

Journal ArticleDOI
TL;DR: A numeration system based on a strictly increasing sequence of positive integers u_0 = 1, u_1, u_2, ... expresses a non-negative integer n as a sum n = ∑_{j=0}^{i} a_j u_j.
Abstract: A numeration system based on a strictly increasing sequence of positive integers u_0 = 1, u_1, u_2, ... expresses a non-negative integer n as a sum n = ∑_{j=0}^{i} a_j u_j. In this case we say the string a_i a_{i-1} ... a_1 a_0 is a representation for n.

Journal ArticleDOI
TL;DR: Some of the results show that logical definiability has different implications for NP maximization problems than it has for NP minimization problems, in terms of both expressive power and approximation properties.
Abstract: We investigate here NP optimization problems from a logical definability standpoint. We show that the class of optimization problems whose optimum is definable using first-order formulae coincides with the class of polynomially bounded NP optimization problems on finite structures. After this, we analyze the relative expressive power of various classes of optimization problems that arise in this framework. Some of our results show that logical definiability has different implications for NP maximization problems than it has for NP minimization problems, in terms of both expressive power and approximation properties.

Journal ArticleDOI
TL;DR: In this paper, the authors study the semantics of non-monotonic negation in probabilistic deductive databases and examine three natural notions of stability: stable formula functions, stable families of probabilistic interpretations, and stable probabilistic models.
Abstract: In this paper we study the semantics of non-monotonic negation in probabilistic deductive databases. Based on the stable semantics for classical logic programming, we examine three natural notions of stability: stable formula functions, stable families of probabilistic interpretations, and stable probabilistic models. We show that stable formula functions are minimal fixpoints of operators associated with probabilistic logic programs. We also prove that each member in a stable family of probabilistic interpretations is a probabilistic model of the program. Then we show that stable formula functions and stable families behave as duals of each other, tying together elegantly the fixpoint and model theories for probabilistic logic programs with negation. Furthermore, since a probabilistic logic program may not necessarily have a stable family of probabilistic interpretations, we provide a stable class semantics for such programs. Finally, we investigate the notion of stable probabilistic model. We show that this notion, though natural, is too weak in the probabilistic framework.

Journal ArticleDOI
TL;DR: Two λ-calculi are introduced and shown to be expressive for two canonical domains of parallel functions.
Abstract: We introduce two λ-calculi and show that they are expressive for two canonical domains of parallel functions. The first calculus is an enrichment of the lazy, call-by-name λ-calculus with call-by-value abstractions and parallel composition, while in the second the usual call-by-name abstractions are disallowed. The corresponding domains are respectively Abramsky's domain D = (D → D)⊥, a lifted function space, and D = (D →s D)⊥, a lifted domain of strict functions. These domains are lattices, and we show that the parallelism is adequately represented by the join operator, while call-by-value abstractions correspond to strict functions. The proofs of the results rely on a completeness theorem for the logical presentation of the semantics.

Journal ArticleDOI
TL;DR: An intermediary transition system for CCS is used, where the past is recorded; this appears to be a system of "trace computations," which provides another means of defining the same abstract domain of computations.
Abstract: We introduce three notions of computation for processes described as CCS (Calculus of Communicating Systems) terms. The first one uses an adaptation of the equivalence by permutations of Berry and Lévy. In this setting, a computation is an equivalence class of sequences of transitions, up to the permutation of independent steps. The second notion of computation is given by means of an interpretation of CCS into a new class of event structures, the flow event structures. This can be seen as a reformulation of Winskel's semantics for CCS by means of stable event structures. Here a computation is a configuration of an event structure. Finally, our third notion of computation is determined by an interpretation of CCS terms as Petri nets, and more precisely as flow nets. Here a computation is a set of events that are firable in sequence in the net. We then show that these three computation interpretations of CCS coincide, in the sense that for a given term, the three domains of computations are isomorphic. To this end we use an intermediary transition system for CCS, where the past is recorded; this appears to be a system of "trace computations," which provides another means of defining the same abstract domain of computations.

Journal ArticleDOI
TL;DR: This work proposes a set of transformation rules for first-order formulae whose atoms are either equations between terms or "membership constraints" t ∈ ζ, and provides a quantifier elimination procedure: for every regular tree language L, the first-order theory of some structure defining L is decidable.
Abstract: We propose a set of transformation rules for first order formulae whose atoms are either equations between terms or "membership constraints" t ∈ ζ. ζ can be interpreted as a regular tree language (ζ is called a sort in the algebraic specification community) or as a tree language in any class of languages which satisfies some adequate closure and decidability properties. This set of rules is proved to be correct, terminating, and complete. This provides a quantifier elimination procedure: for every regular tree language L, the first order theory of some structure defining L is decidable. This extends several previously published results. We also show how to apply our results to automatic inductive proofs in equational theories.

Journal ArticleDOI
Martín Abadi, Joseph Y. Halpern
TL;DR: It is shown that, although the two logics capture quite different intuitions about probability, there is a precise sense in which they are equi-expressive; in both cases, however, the validity problem is so highly undecidable that the logic cannot be axiomatized.
Abstract: We consider decidability and expressiveness issues for two first-order logics of probability. In one, the probability is on possible worlds, while in the other, it is on the domain. It turns out that in both cases it takes very little to make reasoning about probability highly undecidable. We show that when the probability is on the domain, if the language contains only unary predicates then the validity problem is decidable. However, if the language contains even one binary predicate, the validity problem is Π²₁-complete, as hard as elementary analysis with free predicate and function symbols. With equality in the language, even with no other symbol, the validity problem is at least as hard as that for elementary analysis, Π¹∞-hard. Thus, the logic cannot be axiomatized in either case. When we put the probability on the set of possible worlds, the validity problem is Π²₁-complete with as little as one unary predicate in the language, even without equality. With equality, we get Π¹∞-hardness with only a constant symbol. We then turn our attention to an analysis of what causes this overwhelming complexity. For example, we show that if we require rational probabilities then we drop from Π²₁ to Π¹₁. In many contexts it suffices to restrict attention to domains of bounded size; fortunately, the logics are decidable in this case. Finally, we show that, although the two logics capture quite different intuitions about probability, there is a precise sense in which they are equi-expressive.

Journal ArticleDOI
TL;DR: All other equivalences in the linear/branching time hierarchy are examined and it is shown that none of them are decidable for normed BPA processes.
Abstract: A recent theorem shows that strong bisimilarity is decidable for the class of normed BPA processes, which correspond to a class of context-free grammars generating the ϵ-free context-free languages. Huynh and Tian (Technical Report UTDCS-31-90, University of Texas at Dallas, 1990) have shown that readiness and failure equivalence are undecidable for BPA processes. In this paper we examine all other equivalences in the linear/branching time hierarchy and show that none of them are decidable for normed BPA processes.

Journal ArticleDOI
TL;DR: This paper focuses on a sequential extension of PCF that includes two classes of control operators: a possibly empty set of error generators and a collection of catch and throw constructs, and presents a fully abstract semantics for SPCF.
Abstract: One of the major challenges in denotational semantics is the construction of a fully abstract semantics for a higher-order sequential programming language. For the past fifteen years, research on this problem has focused on developing a semantics for PCF, an idealized functional programming language based on the typed λ-calculus. Unlike most practical languages, PCF has no facilities for observing and exploiting the evaluation order of arguments to procedures. Since we believe that these facilities play a crucial role in sequential computation, this paper focuses on a sequential extension of PCF, called SPCF, that includes two classes of control operators: a possibly empty set of error generators and a collection of catch and throw constructs. For each set of error generators, the paper presents a fully abstract semantics for SPCF. If the set of error generators is empty, the semantics interprets all procedures, including catch and throw, as Berry-Curien sequential algorithms. If the language contains error generators, procedures denote manifestly sequential functions. The manifestly sequential functions form a Scott domain that is isomorphic to a domain of decision trees, which is the natural extension of the Berry-Curien domain of sequential algorithms in the presence of errors.

Journal ArticleDOI
TL;DR: The correctness of the model is demonstrated by proving it equivalent to an operational semantics of inheritance based upon the method lookup algorithm of object-oriented languages.
Abstract: This paper presents a denotational model of inheritance. The model is based on an intuitive motivation of inheritance as a mechanism for deriving modified versions of recursive definitions. The correctness of the model is demonstrated by proving it equivalent to an operational semantics of inheritance based upon the method lookup algorithm of object-oriented languages.
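The idea of inheritance as a mechanism for deriving modified versions of recursive definitions can be sketched with generators and an explicit fixpoint (a toy rendering under assumed encodings; `fix`, `inherit`, and the dictionary-as-record representation are our own illustrative choices, not the paper's model):

```python
def fix(gen):
    """An object is the fixpoint of its generator: methods may call back into
    the finished record through the late-bound `self`."""
    record = {}
    record.update(gen(record))
    return record

def inherit(parent_gen, modification):
    """Derive a child generator: the modification sees both the late-bound
    `self` and `sup`, the parent's record built over the same self."""
    def child_gen(self):
        sup = parent_gen(self)
        methods = dict(sup)
        methods.update(modification(self, sup))
        return methods
    return child_gen

# A parent whose `double` method is defined recursively through self.
parent = lambda self: {'value': lambda: 1,
                       'double': lambda: 2 * self['value']()}
# The child overrides `value`; the inherited `double` sees the override,
# which is the essence of method lookup in the model.
child = inherit(parent, lambda self, sup: {'value': lambda: 5})
```

Here `fix(child)['double']()` yields 10 rather than 2: the recursive definition was modified before the knot was tied.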

Journal ArticleDOI
TL;DR: It is shown that inductive inference from positive data works successfully for their models as well as for their languages, and it follows that any class of logic programs corresponding to length-bounded EFSs can be inferred from positive facts.
Abstract: Inductive inference from positive data is shown to be remarkably powerful using the framework of elementary formal systems. An elementary formal system, EFS for short, is a kind of logic program on Σ+ consisting of finitely many axioms. Any context-sensitive language is definable by a restricted EFS, called a length-bounded EFS. Length-bounded EFSs with at most n axioms are considered, and it is shown that inductive inference from positive data works successfully for their models as well as for their languages. From this it follows that any class of logic programs, such as Prolog programs, corresponding to length-bounded EFSs can be inferred from positive facts.

Journal ArticleDOI
TL;DR: In this paper, modular embedding is used to define a preorder among concurrent constraint languages, representing different degrees of expressiveness, and it is shown that this preorder is not trivial (i.e., it does not collapse into one equivalence class).
Abstract: This paper addresses the problem of defining a formal tool to compare the expressive power of different concurrent constraint languages. We refine the notion of embedding by adding some "reasonable" conditions, suitable for concurrent frameworks. The new notion, called modular embedding, is used to define a preorder among these languages, representing different degrees of expressiveness. We show that this preorder is not trivial (i.e., it does not collapse into one equivalence class) by proving that Flat CP cannot be embedded into Flat GHC, and that Flat GHC cannot be embedded into a language without communication primitives in the guards, while the converses hold.

Journal ArticleDOI
TL;DR: The complexity of membership and inequivalence problems for regular expressions extended with the interleaving operator is investigated; without Kleene star, inequivalence is complete for the class Σ₂ᵖ at the second level of the polynomial-time hierarchy.
Abstract: We consider regular expressions extended with the interleaving operator, and investigate the complexity of membership and inequivalence problems for these expressions. For expressions using the operators union, concatenation, Kleene star, and interleaving, we show that the inequivalence problem (deciding whether two given expressions do not describe the same set of words) is complete for exponential space. Without Kleene star, we show that the inequivalence problem is complete for the class Σ₂ᵖ at the second level of the polynomial-time hierarchy. Certain cases of the membership problem (deciding whether a given word is in the language described by a given expression) are shown to be NP-complete. It is also shown that certain languages can be described exponentially more succinctly by using interleaving.
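Membership for the basic two-word case of interleaving is a standard dynamic programme (a sketch; the NP-completeness results in the paper concern richer expressions, not this polynomial special case):

```python
from functools import lru_cache

def is_interleaving(w, u, v):
    """Decide whether w is an interleaving (shuffle) of u and v, i.e. whether
    w can be split into two subsequences spelling u and v respectively."""
    if len(w) != len(u) + len(v):
        return False

    @lru_cache(maxsize=None)
    def go(i, j):
        # i letters of u and j letters of v have been consumed so far.
        if i == len(u) and j == len(v):
            return True
        k = i + j
        return (i < len(u) and w[k] == u[i] and go(i + 1, j)) or \
               (j < len(v) and w[k] == v[j] and go(i, j + 1))

    return go(0, 0)
```

The memoized recursion visits at most (|u|+1)(|v|+1) states, so the two-word case is quadratic.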

Journal ArticleDOI
TL;DR: A Process Algebra for the specification of concurrent, communicating processes which incorporates operators for the refinement of actions by processes, in addition to the usual operators for communication, nondeterminism, internal actions, and restrictions is presented and a suitable notion of semantic equivalence for it is studied.
Abstract: In this paper we present a Process Algebra for the specification of concurrent, communicating processes which incorporates operators for the refinement of actions by processes, in addition to the usual operators for communication, nondeterminism, internal actions, and restrictions, and study a suitable notion of semantic equivalence for it. We argue that action refinements should not, in some formal sense, interfere with the internal evolution of processes and their application to processes should consider the restriction operator as a "binder." We show that, under the above assumptions, the weak version of the refine equivalence introduced by Aceto and Hennessy (Inform. and Comput. 103 (1993), 204-269) is preserved by action refinements and, moreover, is the largest such equivalence relation contained in weak bisimulation equivalence. We also discuss an example showing that, contrary to what happens in Aceto and Hennessy (Inform. and Comput. 103 (1993), 204-269), refine equivalence and timed equivalence are different notions of equivalence over the language considered in this paper.

Journal ArticleDOI
TL;DR: The protocol shows that the lower bound for the multi-party communication complexity of the GIP function, given by Babai et al., cannot be improved significantly.
Abstract: We present a multi-party protocol which computes the Generalized Inner Product (GIP) function, introduced by Babai et al. (1989, in "Proceedings, 21st ACM STOC," pp. 1-11). Our protocol shows that the lower bound for the multi-party communication complexity of the GIP function, given by Babai et al., cannot be improved significantly.
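The GIP function itself is easy to state: it is the parity of the number of coordinates at which all k boolean vectors carry a 1 (a sketch under an assumed row-wise input encoding; for k = 2 it reduces to the inner product mod 2):

```python
def gip(rows):
    """Generalized Inner Product: parity of the number of positions where
    every one of the k boolean vectors has a 1."""
    return sum(all(col) for col in zip(*rows)) % 2
```

For example, with rows [1, 1, 0] and [1, 0, 1] only the first column is all-ones, so the value is 1, agreeing with the inner product 1·1 + 1·0 + 0·1 mod 2.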