
Showing papers in "Information & Computation in 1988"




Journal ArticleDOI
Luca Cardelli
TL;DR: Programming with taxonomically organized data is often called object-oriented programming, and has been advocated as an effective way of structuring programming environments, data bases, and large systems in general.
Abstract: There are two major ways of structuring data in programming languages. The first and common one, used for example in Pascal, can be said to derive from standard branches of mathematics. Data are organized as Cartesian products (i.e., record types), disjoint sums (i.e., unions or variant types), and function spaces (i.e., functions and procedures). The second method can be said to derive from biology and taxonomy. Data are organized in a hierarchy of classes and subclasses, and data at any level of the hierarchy inherit all the attributes of data higher up in the hierarchy. The top level of this hierarchy is usually called the class of all objects; every datum is an object and every datum inherits the basic properties of objects, e.g., the ability to tell whether two objects are the same or not. Functions and procedures are considered as local actions of objects, as opposed to global operations acting over objects. These different ways of structuring data have generated distinct classes of programming languages, and induced different programming styles. Programming with taxonomically organized data is often called object-oriented programming, and has been advocated as an effective way of structuring programming environments, data bases, and large systems in general. The notions of inheritance and object-oriented programming first appeared in Simula 67 (Dahl, 1966). In Simula, objects are grouped into classes and classes can be organized into a subclass hierarchy. Objects are similar to records with functions as components, and elements of a class can appear wherever elements of the respective superclasses are expected. Subclasses inherit all the attributes of their superclasses. In Simula, the issues are somewhat complicated by the use of objects as coroutines, so that communication between objects can be implemented as message passing between processes. Smalltalk (Goldberg, 1983) adopts and exploits the idea of inheritance, with some changes. While stressing the message-passing paradigm, a
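A minimal sketch of the contrast the abstract draws, written in Python rather than the paper's typed calculus; the shape types below are invented for illustration. The first half structures data as records and a disjoint sum with a global operation, the second as a class hierarchy whose subclasses inherit a local operation.

```python
# Hedged illustration only: Circle, Rect, ShapeObj, etc. are made-up names.
from dataclasses import dataclass
from typing import Union

# Style 1: "mathematical" structuring -- records (Cartesian products)
# and a disjoint sum (variant type), with operations as global functions.
@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    width: float
    height: float

Shape = Union[Circle, Rect]        # disjoint sum of two record types

def area(s: Shape) -> float:       # a global operation over the sum
    if isinstance(s, Circle):
        return 3.141592653589793 * s.radius ** 2
    return s.width * s.height

# Style 2: "taxonomic" structuring -- a class hierarchy in which subclasses
# inherit attributes, and operations are local to objects.
class ShapeObj:
    def describe(self) -> str:     # inherited by every subclass
        return f"a shape with area {self.area():.2f}"

    def area(self) -> float:
        raise NotImplementedError

class CircleObj(ShapeObj):
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return 3.141592653589793 * self.radius ** 2

# A CircleObj can appear wherever a ShapeObj is expected (subtyping).
print(area(Circle(1.0)), CircleObj(1.0).describe())
```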

706 citations


Journal ArticleDOI
TL;DR: The fast algorithm proposed in this paper also uses normal bases, and computes multiplicative inverses by iterating multiplications in GF(2^m).

Abstract: This paper proposes a fast algorithm for computing multiplicative inverses in GF(2^m) using normal bases. Normal bases have the following useful property: if an element x in GF(2^m) is represented in a normal basis, then raising x to the 2^k-th power can be carried out by k cyclic shifts of its vector representation. C. C. Wang et al. proposed an algorithm for computing multiplicative inverses using normal bases, which requires (m − 2) multiplications in GF(2^m) and (m − 1) cyclic shifts. The fast algorithm proposed in this paper also uses normal bases, and computes multiplicative inverses by iterating multiplications in GF(2^m). It requires at most 2⌊log2(m − 1)⌋ multiplications in GF(2^m) and (m − 1) cyclic shifts, which is far fewer than Wang's method requires. The same idea is applicable to general power operations in GF(2^m) and to the computation of multiplicative inverses in GF(q^m) (q = 2^n).
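A hedged illustration of the identity the algorithm exploits: inversion in GF(2^m) is the exponentiation x^(2^m − 2), so it reduces to squarings and multiplications. The sketch below uses a polynomial basis and the AES field (m = 8, an assumption chosen for convenience), not the paper's normal-basis representation, in which each squaring becomes a single cyclic shift.

```python
# Hedged sketch, not the paper's normal-basis algorithm.
M = 8
REDUCTION = 0b1_0001_1011   # x^8 + x^4 + x^3 + x + 1 (AES field, illustrative choice)

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication followed by reduction mod the field polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a >> M:              # degree overflow: reduce once
            a ^= REDUCTION
    return result

def gf_inv(x: int) -> int:
    """Inverse as x^(2^m - 2), by square-and-multiply (Fermat's little theorem)."""
    assert x != 0
    exponent = (1 << M) - 2
    acc, base = 1, x
    while exponent:
        if exponent & 1:
            acc = gf_mul(acc, base)
        base = gf_mul(base, base)   # in a normal basis this squaring is a cyclic shift
        exponent >>= 1
    return acc

assert gf_mul(0x53, gf_inv(0x53)) == 1   # sanity check in the AES field
```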

663 citations


Journal ArticleDOI
TL;DR: A communication protocol is described which uses k rooted spanning trees having the property that for every vertex v the paths from v to the root are edge-disjoint, and an algorithm to find two such trees in a 2-edge-connected graph that runs in time proportional to the number of edges in the graph.

Abstract: Consider a network of asynchronous processors communicating by sending messages over unreliable lines. There are many advantages to restricting all communications to a spanning tree. To overcome the possible failure of k′ < k edges, we describe a communication protocol which uses k rooted spanning trees having the property that for every vertex v the paths from v to the root are edge-disjoint. An algorithm to find two such trees in a 2-edge-connected graph is described that runs in time proportional to the number of edges in the graph. This algorithm has a distributed version which finds the two trees even when a single edge fails during their construction. The two trees may then be used to transform certain centralized algorithms to distributed, reliable, and efficient ones.
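A small sketch of the stated tree property, not of the paper's construction algorithm: given two rooted spanning trees as parent maps, it checks that every vertex's two root paths are edge-disjoint. The example graph (a 4-cycle with a chord) and its two trees are assumptions chosen for illustration.

```python
# Hedged illustration: verifies the property, does not build the trees.
def root_path_edges(parent, v, root):
    """Set of (undirected) edges on the tree path from v to the root."""
    edges = set()
    while v != root:
        edges.add(frozenset((v, parent[v])))
        v = parent[v]
    return edges

def independent(tree1, tree2, root):
    vertices = set(tree1) | {root}
    return all(
        root_path_edges(tree1, v, root).isdisjoint(root_path_edges(tree2, v, root))
        for v in vertices if v != root
    )

# 2-edge-connected example: the 4-cycle 0-1-2-3-0 with chord 1-3, rooted at 0.
tree_a = {1: 0, 2: 1, 3: 1}   # uses edges 01, 12, 13
tree_b = {3: 0, 2: 3, 1: 2}   # uses edges 03, 23, 12
print(independent(tree_a, tree_b, 0))   # True
```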

297 citations


Journal ArticleDOI
TL;DR: A new fault-tolerant algorithm for solving a variant of Lamport's clock synchronization problem for a system of distributed processes that communicate by sending messages that maintains synchronization to within a small constant, whose magnitude depends upon the rate of clock drift, the message delivery time and its uncertainty.
Abstract: We describe a new fault-tolerant algorithm for solving a variant of Lamport's clock synchronization problem. The algorithm is designed for a system of distributed processes that communicate by sending messages. Each process has its own read-only physical clock whose drift rate from real time is very small. By adding a value to its physical clock time, the process obtains its local time. The algorithm solves the problem of maintaining closely synchronized local times, assuming that processes' local times are closely synchronized initially. The algorithm is able to tolerate the failure of just under one-third of the participating processes. It maintains synchronization to within a small constant, whose magnitude depends upon the rate of clock drift, the message delivery time and its uncertainty, and the initial closeness of synchronization. We also give a characterization of how far the clocks drift from real time. Reintegration of a repaired process can be accomplished using a slight modification of the basic algorithm. A similar algorithm can also be used to achieve synchronization initially.
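A hedged sketch of one standard ingredient in fault-tolerant synchronization algorithms of this kind (not necessarily the paper's exact adjustment rule): discard the f largest and f smallest of the collected clock readings, so that up to f faulty values are masked, and adjust toward the midpoint of what remains.

```python
# Hedged illustration of a fault-tolerant averaging function.
def fault_tolerant_midpoint(readings, f):
    """Midpoint of the readings that survive discarding f extremes on each side."""
    if len(readings) <= 2 * f:
        raise ValueError("need more than 2f readings to mask f faults")
    surviving = sorted(readings)[f : len(readings) - f]
    return (surviving[0] + surviving[-1]) / 2

# Five processes, one wildly faulty reading; with f = 1 the outlier is masked.
print(fault_tolerant_midpoint([10.0, 10.2, 9.9, 1000.0, 10.1], f=1))   # 10.1
```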

281 citations


Journal ArticleDOI
TL;DR: A formalism for constructing and using axiomatic specifications in an arbitrary logical system is presented, and it is shown how to introduce free variables into the sentences of an arbitrary institution and how to add quantifiers which bind them.

Abstract: A formalism for constructing and using axiomatic specifications in an arbitrary logical system is presented. This builds on the framework provided by Goguen and Burstall’s work on the notion of an institution as a formalisation of the concept of a logical system for writing specifications. We show how to introduce free variables into the sentences of an arbitrary institution and how to add quantifiers which bind them. We use this foundation to define a set of primitive operations for building specifications in an arbitrary institution based loosely on those in the ASL kernel specification language. We examine the set of operations which results when the definitions are instantiated in institutions of total and partial first-order logic and compare these with the operations found in existing specification languages. We present proof rules which allow proofs to be conducted in specifications built using the operations we define. Finally, we introduce a simple mechanism for defining and applying parameterised specifications and briefly discuss the program development

251 citations


Journal ArticleDOI
John C. Mitchell
TL;DR: A general semantics of polymorphic type expressions over models of untyped lambda calculus and complete rules for inferring types for terms are presented.
Abstract: Type expressions may be used to describe the functional behavior of untyped lambda terms. We present a general semantics of polymorphic type expressions over models of untyped lambda calculus and give complete rules for inferring types for terms. Some simplified typing theories are studied in more detail, and containments between types are investigated.
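A hedged, minimal sketch of the syntactic side of the problem, assigning simple types to untyped lambda terms by unification; the paper's semantic account of polymorphic types over lambda models is far more general. All names below are illustrative.

```python
# Hedged illustration: simple (monomorphic) type inference by unification.
from collections import namedtuple

Var = namedtuple("Var", "name")
Lam = namedtuple("Lam", "param body")
App = namedtuple("App", "fn arg")

counter = 0
def fresh():
    """New type variable, represented as a string."""
    global counter
    counter += 1
    return f"t{counter}"

def resolve(t, subst):
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):                  # bind a type variable
        return {**subst, a: b}
    if isinstance(b, str):
        return {**subst, b: a}
    return unify(a[1], b[1], unify(a[0], b[0], subst))   # both arrow types

def infer(term, env, subst):
    if isinstance(term, Var):
        return env[term.name], subst
    if isinstance(term, Lam):
        tv = fresh()
        body_type, subst = infer(term.body, {**env, term.param: tv}, subst)
        return (tv, body_type), subst        # arrow type encoded as a pair
    fn_type, subst = infer(term.fn, env, subst)
    arg_type, subst = infer(term.arg, env, subst)
    result = fresh()
    return result, unify(fn_type, (arg_type, result), subst)

# \f. \x. f x ; resolving t1 in the result gives (t2 -> t3) -> (t2 -> t3).
ty, subst = infer(Lam("f", Lam("x", App(Var("f"), Var("x")))), {}, {})
print(ty, subst)
```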

221 citations


Journal ArticleDOI
TL;DR: First a particular algebraic theory is introduced and a representation theorem proved, which gives the authors a coherent framework in which to place the various other definitions of “category of partial maps”.
Abstract: This paper attempts to reconcile the various abstract notions of “category of partial maps” which appear in the literature. First a particular algebraic theory (p-categories) is introduced and a representation theorem proved. This gives the authors a coherent framework in which to place the various other definitions. Both algebraic theories and theories which make essential use of the poset-enriched structure of partial maps are discussed. Proofs of equivalence are given where possible and counterexamples where known. The paper concludes with brief sections on the representation of partial maps and on partial algebras.

153 citations


Journal ArticleDOI
TL;DR: Several lower and upper bounds for f(n, k) are derived such that the resulting gaps are "small", and relations between the directed chromatic index and the chromatic number are derived, which are of interest in their own right.
Abstract: The n-cube network is called faulty if it contains any faulty processor or any faulty link. For any number k we want to compute the minimum number f(n, k) of faults which is necessary for an adversary to make every (n − k)-dimensional subcube faulty. Reversely formulated: the existence of an (n − k)-dimensional non-faulty subcube can be guaranteed if there are fewer than f(n, k) faults in the n-cube. In this paper several lower and upper bounds for f(n, k) are derived such that the resulting gaps are small. For instance, if k ≥ 2 is constant, then f(n, k) = Θ(log n). In particular, for k = 2 and large n, f(n, 2) differs from α_n = log n + (1/2) log log n + 1/2 by at most a small additive constant. If k = ω(log log n), then 2^k < f(n, k) < 2^((1 + ε)k), with ε chosen arbitrarily small. The aforementioned upper bounds are obtained by analysing the behaviour of an adversary who makes worst-case distributions of a given number of faulty processors. For k = 2 the worst-case distribution is obtained constructively. In the general case the constructive methods presented in this paper lead to a (rather bad) upper bound which can be significantly improved by probabilistic arguments. The bounds mentioned above change if the notions are relativized with respect to some given parallel fault-checking procedure P. In this case only subcubes which are possible outputs of P must be made faulty by the adversary. The notion of directed chromatic index is defined in order to analyse the case k = 2. Relations between the directed chromatic index and the chromatic number are derived, which are of interest in their own right.
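A brute-force illustration of the quantity f(n, k) for tiny cases (nothing like the paper's bounds): enumerate the (n − k)-dimensional subcubes and search for the smallest set of faulty processors that hits every one of them.

```python
# Hedged illustration: exponential brute force, only feasible for very small n.
from itertools import combinations, product

def subcubes(n, k):
    """All (n-k)-dimensional subcubes: fix k coordinate positions to constants."""
    cubes = []
    for positions in combinations(range(n), k):
        for values in product((0, 1), repeat=k):
            fixed = dict(zip(positions, values))
            cubes.append({v for v in product((0, 1), repeat=n)
                          if all(v[i] == b for i, b in fixed.items())})
    return cubes

def f(n, k):
    vertices = list(product((0, 1), repeat=n))
    cubes = subcubes(n, k)
    for size in range(1, len(vertices) + 1):
        for faults in combinations(vertices, size):
            fset = set(faults)
            if all(cube & fset for cube in cubes):   # every subcube contains a fault
                return size

print(f(3, 1))   # 2: e.g. faults at 000 and 111 hit all six 2-dimensional faces
```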

116 citations


Journal ArticleDOI
TL;DR: A new notion of Probabilistic team inductive inference is introduced and compared with both probabilistic inference and team inference, and a subtle difference between probabilism and pluralism is revealed.
Abstract: A new notion of probabilistic team inductive inference is introduced and compared with both probabilistic inference and team inference. In many cases, but not all, probabilism can be traded for pluralism, and vice versa. Necessary and sufficient conditions are given describing when a team of deterministic or probabilistic learning machines can be coalesced into a single learning machine. A subtle difference between probabilism and pluralism is revealed.

99 citations


Journal ArticleDOI
TL;DR: Eggan (1963) assigned to each regular expression a star height, a nonnegative integer denoting the nesting depth of star operators in the expression.

Abstract: Eggan (1963) assigned to each regular expression a star height, a nonnegative integer denoting the nesting depth of star operators in the expression. The star height of a regular language is the minimum of the star heights of the regular expressions denoting the language. We remark that, in general, each regular language is denoted by infinitely many regular expressions. Eggan (1963) showed that for each nonnegative integer k, there exists a regular language of star height k, and posed the problem of determining the star height of any regular language. Dejean and Schützenberger (1966) showed that for each nonnegative integer k, there exists a regular language of star height k over the two-letter alphabet. McNaughton (1967) presented an algorithm for determining the loop complexity (i.e., the star height) of any regular language whose syntactic monoid is a group. Cohen (1970, 1971) and Cohen and Brzozowski (1970) investigated many properties of star height, some of which provide algorithms for determining the star height of any regular language of a certain reset-free type. Hashiguchi and Honda (1979) presented an algorithm for determining the star height of any reset-free language and any reset language. Hashiguchi (1982B) presented an algorithm for deciding whether or not an arbitrary language is of star height one. That result relies on the limitedness theorem on finite automata with distance functions (in short, D-automata) in Hashiguchi (1982A, 1983).
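A small sketch of the basic definition for expressions; computing the star height of a language, the subject of the papers surveyed above, is the hard problem and is not attempted here.

```python
# Hedged illustration: star height of an expression is just nesting depth of stars.
def star_height(regex):
    """regex is a tuple AST: ('sym', a), ('cat', r, s), ('alt', r, s), ('star', r)."""
    tag = regex[0]
    if tag == "sym":
        return 0
    if tag == "star":
        return 1 + star_height(regex[1])
    return max(star_height(regex[1]), star_height(regex[2]))

# (a | b*)* has star height 2, although the language it denotes has star height 1,
# since the same language is also denoted by (a | b)*.
print(star_height(("star", ("alt", ("sym", "a"), ("star", ("sym", "b"))))))   # 2
```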

Journal ArticleDOI
TL;DR: It is shown, using the example of a queue, that renamings enhance the defining power of concrete process algebra; a definition of the trace set of a process is given, and it is examined when equality of trace sets implies equality of processes.

Abstract: Renaming operators are introduced in concrete process algebra (concrete means that abstraction and silent moves are not considered). Examples of renaming operators are given: encapsulation, pre-abstraction, and localization. We show that renamings enhance the defining power of concrete process algebra by using the example of a queue. We give a definition of the trace set of a process, see when equality of trace sets implies equality of processes, and use trace sets to define the restriction of a process. Finally, we describe processes with actions that have a side effect on a state space and show how to use this for a translation of computer programs into process algebra.

Journal ArticleDOI
TL;DR: In this paper, a compositional denotational semantics for a real-time distributed language based on the linear history semantics for CSP is given, where concurrent execution is not modelled by interleaving but by an extension of the maximal parallelism model of Salwicki and Muldner, that allows for the modelling of transmission time for communications.
Abstract: We give a compositional denotational semantics for a real-time distributed language, based on the linear history semantics for CSP of Francez et al. Concurrent execution is not modelled by interleaving but by an extension of the maximal parallelism model of Salwicki and Muldner, that allows for the modelling of transmission time for communications. The importance of constructing a semantics (and, in general, a proof theory) for real-time is stressed by such different sources as the problem of formalizing the real-time aspects of Ada and the elimination of errors in the real-time flight control software of the NASA space shuttle (Comm. ACM 27 (1984)).

Journal ArticleDOI
TL;DR: In this framework unique satisfiability (and a variation of it called kSAT) is shown to be, in some sense, “close” to satisfiability, and it is shown that P ≠ NP iff every NP-complete set is pterse.

Abstract: Let A be a set and k ∈ N be such that we wish to know the answers to x1 ∈ A?, x2 ∈ A?, …, xk ∈ A? for various k-tuples 〈x1, x2, …, xk〉. If this problem requires k queries to A in order to be solved in polynomial time then A is called polynomial terse or pterse. We show the existence of both arbitrarily complex pterse and non-pterse sets; and that P ≠ NP iff every NP-complete set is pterse. We also show connections with p-immunity, p-selective sets, p-generic sets, and the boolean hierarchy. In our framework unique satisfiability (and a variation of it called kSAT) is, in some sense, “close” to satisfiability.

Journal ArticleDOI
TL;DR: It is shown that some well-known methods like first-fit-decreasing are P-complete, and hence it is very unlikely that they can be efficiently parallelized; on the other hand, an optimal NC algorithm is exhibited that achieves the same performance bound as does FFD.

Abstract: We study the parallel complexity of polynomial heuristics for the bin packing problem. We show that some well-known (and simple) methods like first-fit-decreasing (FFD) are P-complete, and hence it is very unlikely that they can be efficiently parallelized. On the other hand, we exhibit an optimal NC algorithm that achieves the same performance bound as does FFD. Finally, we discuss parallelization of polynomial approximation algorithms for bin packing based on discretization.
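A hedged sketch of the sequential first-fit-decreasing heuristic referred to above (the paper's NC algorithm is a different construction).

```python
# Hedged illustration of first-fit-decreasing (FFD).
def first_fit_decreasing(items, capacity):
    """Place each item, in decreasing size order, into the first bin with room."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])      # no existing bin fits: open a new one
    return bins

print(first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5], capacity=1.0))
```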

Journal ArticleDOI
TL;DR: Several new optimal or nearly optimal lower bounds are derived on the time needed to simulate queues, stacks, and tapes by one off-line single-head tape-unit with one-way input, for both the deterministic case and the nondeterministic case.

Abstract: Several new optimal or nearly optimal lower bounds are derived on the time needed to simulate queues, stacks (stack = pushdown store), and tapes by one off-line single-head tape-unit with one-way input, for both the deterministic case and the nondeterministic case. The techniques rely on algorithmic information theory (Kolmogorov complexity).

Journal ArticleDOI
TL;DR: NP-hardness of the satisfiability promise problem follows, and graph isomorphism hardness of a promise problem that derives from the graph isomorphism problem is proved.

Abstract: A general framework is given to obtain hardness results for promise problems that derive from self-reducible decision problems. The principal theorem is that if a set A is ≤_d^P-equivalent to a disjunctive-self-reducible set in NP, then the natural promise problem associated with A is as hard to solve as it is to recognize A. NP-hardness of the satisfiability promise problem follows, and graph isomorphism hardness of a promise problem that derives from the graph isomorphism problem is proved.

Journal ArticleDOI
TL;DR: An algorithm for satisfiability testing in the propositional calculus with a worst-case running time that grows at a rate less than 2^((.25+ε)L) is described, and it is shown that the Davis-Putnam procedure satisfies the same upper bound.

Abstract: An algorithm for satisfiability testing in the propositional calculus with a worst-case running time that grows at a rate less than 2^((.25+ε)L) is described, where L can be either the length of the input expression or the number of occurrences of literals (i.e., leaves) in it. This represents a new upper bound on the complexity of non-clausal satisfiability testing. The performance is achieved by using lemmas concerning assignments and pruning that preserve satisfiability, together with choosing a “good” variable upon which to recur. For expressions in clause form, it is shown that the Davis-Putnam procedure satisfies the same upper bound.
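A hedged, minimal DPLL-style test for clause-form satisfiability, a descendant of the Davis-Putnam procedure mentioned above; the paper's algorithm handles arbitrary non-clausal expressions and uses more refined pruning.

```python
# Hedged illustration: basic splitting with unit propagation.
def dpll(clauses):
    """clauses: list of lists of nonzero ints; -v means the negation of variable v."""
    if not clauses:
        return True                        # all clauses satisfied
    if any(len(c) == 0 for c in clauses):
        return False                       # an empty clause cannot be satisfied
    unit = next((c[0] for c in clauses if len(c) == 1), None)
    literal = unit if unit is not None else clauses[0][0]
    for value in (literal, -literal):      # try the literal true, then false
        reduced = [[l for l in c if l != -value]
                   for c in clauses if value not in c]
        if dpll(reduced):
            return True
        if unit is not None:               # a unit literal admits only one choice
            return False
    return False

# (x1 or x2) and (not x1 or x2) and (not x2 or x3) is satisfiable; (x1) and (not x1) is not.
print(dpll([[1, 2], [-1, 2], [-2, 3]]))    # True
print(dpll([[1], [-1]]))                   # False
```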

Journal ArticleDOI
TL;DR: Using a kernel language called Pebble based on the typed lambda calculus with bindings, declarations, dependent types, and types as compile-time values, it is shown how to build modules, interfaces and implementations, abstract data types, generic types, recursive types, and unions.

Abstract: A small set of constructs can simulate a wide variety of apparently distinct features in modern programming languages. Using a kernel language called Pebble, based on the typed lambda calculus with bindings, declarations, dependent types, and types as compile-time values, we show how to build modules, interfaces and implementations, abstract data types, generic types, recursive types, and unions. Pebble has a concise operational semantics given by inference rules.

Journal ArticleDOI
TL;DR: It is shown that a collecting (static) semantics exists, thus answering a problem left open by G. L. Burn, C. Hankin and S. Abramsky and showing the possibility of a general theory for the analysis of functional programs.
Abstract: A theory of abstract interpretation (P. Cousot and R. Cousot, in “Conf. Record, 4th ACM Symposium on Principles of Programming Languages,” 1977) is developed for a typed λ-calculus. The typed λ-calculus may be viewed as the “static” part of a two-level denotational metalanguage for which abstract interpretation was developed by F. Nielson (Ph.D. thesis, University of Edinburgh, 1984; in “Proceedings, STACS 1986,” Lecture Notes in Computer Science, Vol. 210, Springer-Verlag, New York/Berlin, 1986). The present development relaxes a condition imposed there and this suffices to make the framework applicable to strictness analysis for the λ-calculus. This shows the possibility of a general theory for the analysis of functional programs and it gives more insight into the relative precision of the various analyses. In particular it is shown that a collecting (static; P. Cousot and R. Cousot, in “Conf. Record, 6th ACM Symposium on Principles of Programming Languages,” 1979) semantics exists, thus answering a problem left open by G. L. Burn, C. L. Hankin and S. Abramsky (Sci. Comput. Programming 7 (1986), 249–278).
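A hedged toy instance of the kind of analysis the abstract refers to: strictness analysis of a first-order function by abstract interpretation over the two-point domain, which illustrates the general idea rather than the paper's typed λ-calculus framework.

```python
# Hedged illustration: 0 means "definitely diverges", 1 means "may terminate".
BOTTOM, TOP = 0, 1

def abs_strict_op(x, y):
    """Abstract version of an operator that needs both arguments (e.g., +)."""
    return min(x, y)

def abs_cond(c, t, e):
    """Abstract conditional: the condition is always needed, one branch may be."""
    return min(c, max(t, e))

# Concrete program being analysed: f(x, y, z) = if x == 0 then y + 1 else y + z
def abs_f(x, y, z):
    return abs_cond(x, abs_strict_op(y, TOP), abs_strict_op(y, z))

def strict_in(abs_fn, arity, i):
    """f is strict in argument i if feeding bottom there forces bottom out."""
    args = [TOP] * arity
    args[i] = BOTTOM
    return abs_fn(*args) == BOTTOM

print([strict_in(abs_f, 3, i) for i in range(3)])   # [True, True, False]
```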

Journal ArticleDOI
TL;DR: It is shown that the semantics of systems of recursive imperative procedures or of recursive applicative procedures computed with call-by-value or call- by-name can be expressed by an attribute grammar associating attributes with the nodes of the so-called trees of calls.
Abstract: An extension of the inductive assertion method allowing one to prove the partial correctness of an attribute grammar w.r.t. a specification is presented. It is complete in an abstract sense. It is also shown that the semantics of systems of recursive imperative procedures or of recursive applicative procedures computed with call-by-value or call-by-name can be expressed by an attribute grammar associating attributes with the nodes of the so-called trees of calls. Hence the proof methods for the partial correctness of attribute grammars can be applied to these recursive procedures. We show also how the proof method can be applied in logic programming.

Journal ArticleDOI
TL;DR: It is argued that the definition of RAM space, at least in the manner it is traditionally given in the literature, is inadequate for this purpose and can be validated only in a weak interpretation.
Abstract: In complexity theory the use of informal estimates can be justified by appealing to the Invariance Thesis, which states that all standard models of sequential computing devices are equivalent in the sense that the fundamental complexity classes do not depend on the precise model chosen for their definition. This thesis would require, among other things, that a RAM can be simulated by a Turing machine with constant factor overhead in space. We argue that the definition of RAM space, at least in the manner it is traditionally given in the literature, is inadequate for this purpose. The invariance thesis can be validated only in a weak interpretation. The rather complicated simulation which achieves the constant factor space overhead is based on a new method for condensing space and uses perfect hash functions with minimal program size.

Journal ArticleDOI
TL;DR: It is shown that postdictively consistent IIMs can be effectively replaced with postdictively complete IIMs that succeed to at least the same degree.

Abstract: Three kinds of restrictions on inductive inference machines (IIMs) are considered: postdictive completeness, postdictive consistency, and reliability. It is shown that postdictively consistent IIMs can be effectively replaced with postdictively complete IIMs that succeed to at least the same degree. Various loosenings of the notions of postdictive completeness and reliability are considered, and a pair of related triangular hierarchies is exhibited; IIMs higher (or to the right) in the hierarchies are less restricted and capable of learning more than IIMs lower or to the left. Various conjectures and older results are obtained as corollaries.

Journal ArticleDOI
TL;DR: A necessary and sufficient condition on the system such that every integer has a canonical representation is given and it is shown that this canonical representation can be computed from any representation by a rational function.
Abstract: A numeration system is a sequence of integers such that any integer can be represented by means of the sequence using integers of bounded size. We study numeration systems defined by linear recurrences of order two. We give a necessary and sufficient condition on the system such that every integer has a canonical representation. We show that this canonical representation can be computed from any representation by a rational function. This rational function is the composition of two subsequential functions that are simply obtained from the system. The addition of two integers represented in the system can be performed by a subsequential machine.
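A hedged, concrete instance of such a numeration system: the Fibonacci (Zeckendorf) system, given by the order-two recurrence u_{i+2} = u_{i+1} + u_i. The greedy algorithm below produces the canonical representation, in which no two adjacent digits are 1.

```python
# Hedged illustration of the Fibonacci (Zeckendorf) numeration system.
def fibonacci_digits(n):
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs[:-1]):        # most significant position first
        digits.append(1 if f <= n else 0)
        if f <= n:
            n -= f
    return digits

# 17 = 13 + 3 + 1, so the canonical digits are 1 0 0 1 0 1
# (positions weighted 13, 8, 5, 3, 2, 1).
print(fibonacci_digits(17))
```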

Journal ArticleDOI
TL;DR: The class of partial orders is shown to have 0-1 laws for first-order logic and for inductive fixed-point logic, a logic which properly contains first-order logic.

Abstract: The class of partial orders is shown to have 0-1 laws for first-order logic and for inductive fixed-point logic, a logic which properly contains first-order logic. This means that for every sentence in one of these logics the proportion of labeled (or unlabeled) partial orders of size n satisfying the sentence has a limit of either 0 or 1 as n goes to infinity. This limit, called the asymptotic probability of the sentence, is the same for labeled and unlabeled structures. The computational complexity of the set of sentences with asymptotic probability 1 is determined. For first-order logic, it is PSPACE-complete. For inductive fixed-point logic, it is EXPTIME-complete.

Journal ArticleDOI
TL;DR: Several design specifications for synthesizers of this kind are considered from a recursion-theoretic perspective and programs that accept descriptions of inductive inference problems and return machines that solve them are considered.
Abstract: We consider programs that accept descriptions of inductive inference problems and return machines that solve them. Several design specifications for synthesizers of this kind are considered from a recursion-theoretic perspective.

Journal ArticleDOI
TL;DR: The crucial feature is the consideration of only the pure simulations, which carry the pure states of the domain automaton to the pure states of the codomain automaton; essentially the same results are obtained for rooted bisimulation equivalence classes of automata with start states.

Abstract: We view CCS terms as defining nondeterministic automata. An algebraic representation of automata is given, and categories of automata and simulations between them are defined. The crucial feature is the consideration of only the pure simulations which carry the pure (actual, determined) states of the domain automaton to the pure states of the codomain automaton. The pure epimorphisms between the automata partition the category into bisimulation equivalence classes. There is a unique canonical representative for each bisimulation equivalence class. These results hold for weak bisimulation and hence for strong bisimulation. Essentially the same results are obtained with regard to rooted bisimulation equivalence classes of automata with start states.

Journal ArticleDOI
Martin Beaudry
TL;DR: An algorithm for membership testing, valid for arbitrary commutative semigroups, is obtained and it is shown that the complexity of the problem varies with the threshold of the semigroup.
Abstract: Given a finite set X of states, a finite set of commuting transformations of X (generators), and another transformation f of X, we analyze the complexity of deciding whether f can be obtained by composition of the generators. Looking first at the action of a commutative semigroup of transformations of a finite set, we obtain an algorithm for membership testing, valid for arbitrary commutative semigroups. We then show that the complexity of the problem varies with the threshold of the semigroup: polynomial-time (NC^3 in parallel) with threshold zero or one, and NP-complete otherwise.
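A hedged brute-force illustration of the membership problem itself (exponential in general; the paper's contribution is the finer complexity classification): close the generators under composition and look for the target transformation.

```python
# Hedged illustration: naive closure, not the paper's algorithm.
def compose(f, g):
    """Apply g first, then f; transformations are tuples indexed by state."""
    return tuple(f[g[x]] for x in range(len(g)))

def generated_by(generators, target):
    seen = set(generators)
    frontier = list(generators)
    while frontier:
        current = frontier.pop()
        if current == target:
            return True
        for g in generators:
            nxt = compose(g, current)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# X = {0, 1, 2, 3}; the two generators commute (they act on disjoint parts of X).
g1 = (1, 0, 2, 3)    # swaps 0 and 1
g2 = (0, 1, 3, 2)    # swaps 2 and 3
print(generated_by([g1, g2], (1, 0, 3, 2)))   # True: the composition of g1 and g2
print(generated_by([g1, g2], (2, 1, 0, 3)))   # False: not generated
```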

Journal ArticleDOI
Susan Landau
TL;DR: It is shown that θ is deterministic polynomial-time Turing reducible to ϕ, the Euler function, where n/θ(n) is the largest squarefree factor of n.

Abstract: Let n be a positive integer, and suppose n = Π p_i^{a_i} is its prime factorization. Let θ(n) = Π p_i^{a_i − 1}, so that n/θ(n) is the largest squarefree factor of n. We show that θ is deterministic polynomial-time Turing reducible to ϕ, the Euler function. We also show that θ is reducible to λ, the Carmichael function. We survey other recent work on computing the square part of an integer and give upper and lower bounds on the complexity of solving the problem.
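A small illustration of θ and the largest squarefree factor by trial-division factoring, which of course sidesteps the paper's point: the interest is in what can be computed without factoring, given oracles such as ϕ or λ.

```python
# Hedged illustration of the functions defined in the abstract.
def factorization(n):
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def theta(n):
    """theta(n) = product of p^(a-1) over the prime factorization of n."""
    result = 1
    for p, a in factorization(n).items():
        result *= p ** (a - 1)
    return result

n = 720                            # 720 = 2^4 * 3^2 * 5
print(theta(n), n // theta(n))     # 24 and 30; 30 = 2*3*5 is the largest squarefree factor
```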

Journal ArticleDOI
TL;DR: The (partial) cartesian closed category GEN of generalized numbered sets is defined, it is proved that it is a good extension of the category of numbered sets, and how it is related to the recursive topos is shown.
Abstract: This paper is divided into two parts. In the first we analyse, in great generality, data types in relation to partial morphisms. We introduce partial function spaces, partial cartesian closed categories and complete objects, motivate their introduction, and show some of their properties. In the second part we define the (partial) cartesian closed category GEN of generalized numbered sets, prove that it is a good extension of the category of numbered sets, and show how it is related to the recursive topos.