
Showing papers in "Information & Computation in 1996"


Journal ArticleDOI
TL;DR: This paper examines graph-based access structures, i.e., access structures in which any qualified set of participants contains at least an edge of a given graph whose vertices represent the participants of the scheme, and provides a novel technique for realizing threshold visual cryptography schemes.
Abstract: A visual cryptography scheme for a set P of n participants is a method of encoding a secret image SI into n shadow images called shares, where each participant in P receives one share. Certain qualified subsets of participants can "visually" recover the secret image, but other, forbidden, sets of participants have no information (in an information-theoretic sense) on SI. A "visual" recovery for a set X ⊆ P consists of xeroxing the shares given to the participants in X onto transparencies, and then stacking them. The participants in a qualified set X will be able to see the secret image without any knowledge of cryptography and without performing any cryptographic computation. In this paper we propose two techniques for constructing visual cryptography schemes for general access structures. We analyze the structure of visual cryptography schemes and we prove bounds on the size of the shares distributed to the participants in the scheme. We provide a novel technique for realizing k out of n threshold visual cryptography schemes. Our construction for k out of n visual cryptography schemes is better with respect to pixel expansion than the one proposed by M. Naor and A. Shamir (Visual cryptography, in "Advances in Cryptology: Eurocrypt '94" (A. De Santis, Ed.), Lecture Notes in Computer Science, Vol. 950, pp. 1–12, Springer-Verlag, Berlin, 1995) and for the case of 2 out of n is the best possible. Finally, we consider graph-based access structures, i.e., access structures in which any qualified set of participants contains at least an edge of a given graph whose vertices represent the participants of the scheme.
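The stacking-based recovery described above can be illustrated with a toy 2-out-of-2 scheme in the Naor–Shamir style (a minimal sketch, not the paper's general construction; the pixel expansion of 2 and the share patterns are standard textbook choices):

```python
import random

# Toy 2-out-of-2 visual cryptography: each secret pixel expands to two
# subpixels per share (pixel expansion m = 2); 1 marks an opaque subpixel.
PATTERNS = [(0, 1), (1, 0)]

def make_shares(secret_bit):
    """Split one secret pixel (0 = white, 1 = black) into two shares.
    Each share alone is a uniformly random pattern, so it leaks nothing."""
    p = random.choice(PATTERNS)
    if secret_bit == 0:
        return p, p                      # same pattern: stack is half black
    return p, tuple(1 - b for b in p)    # complementary: stack is all black

def stack(share1, share2):
    """Stacking transparencies acts as a pixelwise OR of opaque subpixels."""
    return tuple(a | b for a, b in zip(share1, share2))

s1, s2 = make_shares(1)
print(stack(s1, s2))        # (1, 1): a black pixel stacks fully opaque
w1, w2 = make_shares(0)
print(sum(stack(w1, w2)))   # 1: a white pixel keeps one clear subpixel
```

The contrast between fully opaque and half-opaque stacked pixels is what makes the secret image visible to the eye.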

639 citations


Journal ArticleDOI
Douglas J. Howe1
TL;DR: This work uses this method to show that some generalizations of Abramsky's applicative bisimulation are congruences whenever evaluation can be specified by a certain natural form of structured operational semantics.
Abstract: We give a method for proving congruence of bisimulation-like equivalences in functional programming languages. The method applies to languages that can be presented as a set of expressions together with an evaluation relation. We use this method to show that some generalizations of Abramsky's applicative bisimulation are congruences whenever evaluation can be specified by a certain natural form of structured operational semantics. One of the generalizations handles nondeterminism and diverging computations.

267 citations


Journal ArticleDOI
TL;DR: A Dichotomy Theorem is proved that if all logical relations involved in a generalized satisfiability counting problem are affine then the number of satisfying assignments of this problem can be computed in polynomial time, otherwise this function is #P-complete.
Abstract: The class of generalized satisfiability problems, introduced in 1978 by Schaefer, presents a uniform way of studying the complexity of satisfiability problems with special conditions. The complexity of each decision and counting problem in this class depends on the involved logical relations. In 1979, Valiant defined the class #P, the class of functions definable as the number of accepting computations of a polynomial-time nondeterministic Turing machine. Clearly, all satisfiability counting problems belong to this class #P. We prove a Dichotomy Theorem for generalized satisfiability counting problems. That is, if all logical relations involved in a generalized satisfiability counting problem are affine then the number of satisfying assignments of this problem can be computed in polynomial time, otherwise this function is #P-complete. This gives us a comparison between decision and counting generalized satisfiability problems. We can determine exactly the polynomial satisfiability decision problems whose number of solutions can be computed in polynomial time and also the polynomial satisfiability decision problems whose counting counterparts are already #P-complete. Moreover, taking advantage of a similar dichotomy result proved in 1978 by Schaefer for generalized satisfiability decision problems, we get as a corollary the implication that the counting counterpart of each NP-complete generalized satisfiability decision problem is #P-complete.
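The tractable side of the dichotomy can be made concrete: when every relation is affine, the instance is a linear system over GF(2), and Gaussian elimination counts its solutions as 2^(n − rank) in polynomial time. A minimal sketch (the bitmask encoding of equations is our own):

```python
def count_affine_solutions(rows, n):
    """Count solutions of an affine system over GF(2) with n variables.

    Each row is (mask, b): mask is an n-bit integer of variable coefficients,
    b the right-hand side; the equation is XOR of the selected variables == b.
    Returns 2**(n - rank) if consistent, else 0.
    """
    rows = list(rows)
    rank = 0
    for bit in range(n):
        # find a pivot row containing this variable
        pivot = next((i for i in range(rank, len(rows)) if rows[i][0] >> bit & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        pm, pb = rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][0] >> bit & 1:
                rows[i] = (rows[i][0] ^ pm, rows[i][1] ^ pb)
        rank += 1
    if any(m == 0 and b == 1 for m, b in rows):
        return 0  # reduced to the inconsistent equation 0 == 1
    return 2 ** (n - rank)

# x1 XOR x2 = 1 over two variables has 2 solutions; adding x1 = 0 leaves 1.
print(count_affine_solutions([(0b11, 1)], 2))             # 2
print(count_affine_solutions([(0b11, 1), (0b01, 0)], 2))  # 1
```

For any non-affine relation set, by the theorem above, no such polynomial-time counter exists unless #P collapses.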

236 citations


Journal ArticleDOI
TL;DR: A general automaton model for timing-based systems is presented and is used as the context for developing a variety of simulation proof techniques for such systems, including refinements, forward and backward simulations, hybrid forward-backward and backward-forward simulations, and history and prophecy relations.
Abstract: A general automaton model for timing-based systems is presented and is used as the context for developing a variety of simulation proof techniques for such systems. These techniques include (1) refinements, (2) forward and backward simulations, (3) hybrid forward-backward and backward-forward simulations, and (4) history and prophecy relations. Relationships between the different types of simulations, as well as soundness and completeness results, are stated and proved. These results are (with one exception) analogous to the results for untimed systems in Part I of this paper. In fact, many of the results for the timed case are obtained as consequences of the analogous results for the untimed case.

210 citations


Journal ArticleDOI
TL;DR: A new form of bisimulation, called context bisimulation, is proposed for higher-order process calculi; it yields a more satisfactory discriminating power, and a key role in the proofs is played by the factorisation theorem.
Abstract: A higher-order process calculus is a calculus for communicating systems which contains higher-order constructs like communication of terms. We analyse the notion of bisimulation in these calculi. We argue that both the standard definition of bisimulation (i.e., the one for CCS and related calculi), as well as higher-order bisimulation [E. Astesiano, A. Giovini, and G. Reggio, in "STACS '88," Lecture Notes in Computer Science, Vol. 294, pp. 207–226, Springer-Verlag, Berlin/New York, 1988; G. Boudol, in "TAPSOFT '89," Lecture Notes in Computer Science, Vol. 351, pp. 149–161, Springer-Verlag, Berlin/New York, 1989; B. Thomsen, Ph.D. thesis, Dept. of Computing, Imperial College, 1990] are in general unsatisfactory, because of their over-discrimination. We propose and study a new form of bisimulation for such calculi, called context bisimulation, which yields a more satisfactory discriminating power. A drawback of context bisimulation is the heavy use of universal quantification in its definition, which is hard to handle in practice. To resolve this difficulty we introduce triggered bisimulation and normal bisimulation, and we prove that they both coincide with context bisimulation. In the proof, we exploit the factorisation theorem: when comparing the behaviour of two processes, it allows us to "isolate" subcomponents which might give differences, so that the analysis can be concentrated on them.

181 citations


Journal ArticleDOI
TL;DR: In this paper, the cow-path problem is studied and the first randomized algorithm for the cow-path problem is presented. The algorithm is shown to be optimal for two paths (w = 2), with evidence of optimality for larger values of w.
Abstract: Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in "Proceedings, 5th ACM–SIAM Symposium on Discrete Algorithms," pp. 372–381, 1994) have shown that our algorithm is indeed optimal for all w ≥ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. For the performance of our algorithm, we also derive the asymptotic growth with respect to w; despite similar complexity results for related problems, it appears that this growth has never been analyzed.
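For intuition, the classical deterministic strategy for the two-path case sweeps out doubling distances in alternating directions; its worst-case competitive ratio of 9 is the benchmark that the paper's randomized algorithm beats in expectation. A minimal cost model (integer goal positions; the encoding is ours):

```python
def doubling_search_cost(goal):
    """Distance walked by the deterministic doubling strategy on two rays
    meeting at the origin: sweep 1, 2, 4, ... units, alternating rays, until
    the goal (a nonzero signed integer position) is reached. The worst-case
    cost is at most 9*|goal|."""
    cost, step, direction = 0, 1, 1
    while True:
        if direction * goal > 0 and abs(goal) <= step:
            return cost + abs(goal)      # found during this sweep
        cost += 2 * step                 # walk out `step` units and return
        direction, step = -direction, 2 * step

print(doubling_search_cost(3))   # 9
print(doubling_search_cost(-1))  # 3
```

The randomized algorithm of the paper improves the expected ratio below 9 by randomizing the starting direction and the sweep lengths.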

149 citations


Journal ArticleDOI
TL;DR: It is proved that, for each natural number n, there is a polynomial time computable query which is not definable in any extension of fixpoint logic by n-ary generalized quantifiers, which rules out the possibility of characterizing PTIME in terms of definability in fixpoint logic extended by a finite set of generalized quantifiers.
Abstract: We consider the problem of finding a characterization for polynomial time computable queries on finite structures in terms of logical definability. It is well known that fixpoint logic provides such a characterization in the presence of a built-in linear order, but without linear order even very simple polynomial time queries involving counting are not expressible in fixpoint logic. Our approach to the problem is based on generalized quantifiers. A generalized quantifier is n-ary if it binds any number of formulas, but at most n variables in each formula. We prove that, for each natural number n, there is a query on finite structures which is expressible in fixpoint logic, but not in the extension of first-order logic by any set of n-ary quantifiers. It follows that the expressive power of fixpoint logic cannot be captured by adding finitely many quantifiers to first-order logic. Furthermore, we prove that, for each natural number n, there is a polynomial time computable query which is not definable in any extension of fixpoint logic by n-ary quantifiers. In particular, this rules out the possibility of characterizing PTIME in terms of definability in fixpoint logic extended by a finite set of generalized quantifiers.

145 citations


Journal ArticleDOI
TL;DR: It is proved that every Turing machine can be simulated by a system based entirely on contextual insertions and deletions, and decidability results are obtained for the existence of solutions to equations involving these operations.
Abstract: We investigate two generalizations of insertion and deletion of words, that have recently become of interest in the context of molecular computing. Given a pair of words (x, y), called a context, the (x, y)-contextual insertion of a word v into a word u is performed as follows. For each occurrence of xy as a subword in u, we include in the result of the contextual insertion the words obtained by inserting v into u, between x and y. The (x, y)-contextual deletion operation is defined in a similar way. We study closure properties of the Chomsky families under the defined operations, contextual ins-closed and del-closed languages, and decidability of existence of solutions to equations involving these operations. Moreover, we prove that every Turing machine can be simulated by a system based entirely on contextual insertions and deletions.
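The two operations defined above are easy to prototype; the following sketch (our own string encoding of contexts) enumerates all results of a contextual insertion or deletion:

```python
def contextual_insert(u, v, x, y):
    """All words obtained by the (x, y)-contextual insertion of v into u:
    v is inserted between adjacent occurrences of x and y in u."""
    ctx = x + y
    results = set()
    i = u.find(ctx)
    while i != -1:
        results.add(u[:i + len(x)] + v + u[i + len(x):])
        i = u.find(ctx, i + 1)
    return results

def contextual_delete(u, v, x, y):
    """(x, y)-contextual deletion: erase an occurrence of v flanked by x and y."""
    pat = x + v + y
    results = set()
    i = u.find(pat)
    while i != -1:
        results.add(u[:i + len(x)] + u[i + len(x) + len(v):])
        i = u.find(pat, i + 1)
    return results

print(contextual_insert("abab", "c", "a", "b"))  # {'acbab', 'abacb'}
print(contextual_delete("acb", "c", "a", "b"))   # {'ab'}
```

Note the operations are generally nondeterministic: one (u, v) pair can yield several result words, one per occurrence of the context.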

134 citations


Journal ArticleDOI
TL;DR: This paper considers other possible unreliable behaviors of communication channels, viz., (a) duplication and (b) insertion errors, and also considers various combinations of duplication, insertion, and lossiness errors.
Abstract: We consider the problem of verifying correctness of finite state machines that communicate with each other over unbounded FIFO channels that are unreliable. Various problems of interest in verification of FIFO channels that can lose messages have been considered by Finkel and by Abdulla and Jonsson. We consider, in this paper, other possible unreliable behaviors of communication channels, viz., (a) duplication and (b) insertion errors. Furthermore, we also consider various combinations of duplication, insertion, and lossiness errors. Finite state machines that communicate over unbounded FIFO buffers are a model of computation that forms the backbone of the ISO standard protocol specification languages Estelle and SDL. While the assumption of a perfect communication medium is reasonable at the higher levels of the OSI protocol stack, the lower levels have to deal with an unreliable communication medium; hence our motivation for the present work. The verification problems that are of interest are reachability, unboundedness, deadlock, and model checking against CTL*. All of these problems are undecidable for machines communicating over reliable unbounded FIFO channels. So it is perhaps surprising that some of these problems become decidable when unreliable channels are modeled. The contributions of this paper are (a) an investigation of solutions to these problems for machines with insertion errors, duplication errors, or a combination of duplication, insertion, and lossiness errors, and (b) a comparison of the relative expressive power of the various errors.

133 citations


Journal ArticleDOI
TL;DR: The techniques can be applied to a much more general class of “sigmoidal-like” activation functions, suggesting that Turing universality is a relatively common property of recurrent neural network models.
Abstract: We investigate the computational power of recurrent neural networks that apply the sigmoid activation function σ(x) = [2/(1 + e^(−x))] − 1. These networks are extensively used in automatic learning of non-linear dynamical behavior. We show that in the noiseless model, there exists a universal architecture that can be used to compute any recursive (Turing) function. This is the first result of its kind for the sigmoid activation function; previous techniques only applied to linearized and truncated versions of this function. The significance of our result, besides the proving technique itself, lies in the popularity of the sigmoidal function both in engineering applications of artificial neural networks and in biological modelling. Our techniques can be applied to a much more general class of "sigmoidal-like" activation functions, suggesting that Turing universality is a relatively common property of recurrent neural network models.

126 citations


Journal ArticleDOI
TL;DR: A temporal logic for the polyadic π-calculus based on fixed point extensions of Hennessy–Milner logic is introduced, including the relativisation of correctness assertions to conditions on names.
Abstract: We introduce a temporal logic for the polyadic π-calculus based on fixed point extensions of Hennessy–Milner logic. Features are added to account for parametrisation, generation, and passing of names, including the use, following Milner, of dependent sum and product to account for (unlocalised) input and output, and explicit parametrisation on names using λ-abstraction and application. The latter provides a single name binding mechanism supporting all parametrisation needed. A proof system and decision procedure are developed based on Stirling and Walker's approach to model checking the modal μ-calculus using constants. One difficulty, for both conceptual and efficiency-based reasons, is to avoid the explicit use of the ω-rule for parametrised processes. A key idea, following Hennessy and Lin's approach to deciding bisimulation for certain types of value-passing processes, is the relativisation of correctness assertions to conditions on names. Based on this idea, a proof system and a decision procedure are obtained for arbitrary π-calculus processes with finite control, the π-calculus correlates of CCS finite-state processes, avoiding the use of parallel composition in recursively defined processes.

Journal ArticleDOI
TL;DR: It is shown that decision trees are not likely to be efficiently PAC-learnable, despite their widespread practical application.
Abstract: k-Decision lists and decision trees play important roles in learning theory as well as in practical learning systems. k-Decision lists generalize classes such as monomials, k-DNF, and k-CNF, and like these subclasses they are polynomially PAC-learnable [R. Rivest, Mach. Learning 2 (1987), 229–246]. This leaves open the question of whether k-decision lists can be learned as efficiently as k-DNF. We answer this question negatively in a certain sense, thus disproving a claim in a popular textbook [M. Anthony and N. Biggs, "Computational Learning Theory," Cambridge Univ. Press, Cambridge, UK, 1992]. Decision trees, on the other hand, are not even known to be polynomially PAC-learnable, despite their widespread practical application. We will show that decision trees are not likely to be efficiently PAC-learnable. We summarize our specific results. The following problems cannot be approximated in polynomial time within a factor of 2^(log^δ n) for any δ < 1, unless NP = P. Also, k-decision lists with l alternations cannot be approximated within a factor of log^l n unless NP ⊆ DTIME[n^O(log log n)] (providing an interesting comparison to the upper bound obtained by A. Dhagat and L. Hellerstein [in "FOCS '94," pp. 64–74]).

Journal ArticleDOI
TL;DR: It is shown that a set of pictures is recognized by a finite tiling system iff it is definable in existential monadic second-order logic, which generalizes finite-state recognizability over strings and also matches a natural logic.
Abstract: It is shown that a set of pictures (rectangular arrays of symbols) is recognized by a finite tiling system iff it is definable in existential monadic second-order logic. As a consequence, finite tiling systems constitute a notion of recognizability over two-dimensional inputs which at the same time generalizes finite-state recognizability over strings and also matches a natural logic. The proof is based on the Ehrenfeucht–Fraïssé technique for first-order logic and an implementation of "threshold counting" within tiling systems.

Journal ArticleDOI
TL;DR: This paper shows that the following problems are undecidable for lossy channel systems: the model checking problem in propositional temporal logics such as propositional linear time temporal logic (PTL) and computation tree logic (CTL).
Abstract: We consider the class of finite-state systems communicating through unbounded but lossy FIFO channels (called lossy channel systems). These systems have infinite state spaces due to the unboundedness of the channels. In an earlier paper, we showed that the problems of checking reachability, safety properties, and eventuality properties are decidable for lossy channel systems. In this paper, we show that the following problems are undecidable: the model checking problem in propositional temporal logics such as propositional linear time temporal logic (PTL) and computation tree logic (CTL); and the problem of deciding eventuality properties with fair channels: do all computations eventually reach a given set of states if the unreliable channels satisfy fairness assumptions? The results are obtained through reduction from a variant of the Post correspondence problem.

Journal ArticleDOI
TL;DR: The notion of weakly hyperbolic iterated function system (IFS) on a compact metric space, which generalises that of hyperbolic IFS, is introduced, and the existence and uniqueness of the attractor and invariant measure are proved.
Abstract: We introduce the notion of weakly hyperbolic iterated function system (IFS) on a compact metric space, which generalises that of hyperbolic IFS. Based on a domain-theoretic model, which uses the Plotkin power domain and the probabilistic power domain respectively, we prove the existence and uniqueness of the attractor of a weakly hyperbolic IFS and the invariant measure of a weakly hyperbolic IFS with probabilities, extending the classic results of Hutchinson for hyperbolic IFSs in this more general setting. We also present finite algorithms to obtain discrete and digitised approximations to the attractor and the invariant measure, extending the corresponding algorithms for hyperbolic IFSs. We then prove the existence and uniqueness of the invariant distribution of a weakly hyperbolic recurrent IFS and obtain an algorithm to generate the invariant distribution on the digitised screen. The generalised Riemann integral is used to provide a formula for the expected value of almost everywhere continuous functions with respect to this distribution. For hyperbolic recurrent IFSs and Lipschitz maps, one can estimate the integral up to any threshold of accuracy.
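The discrete approximation of an attractor can be illustrated with the standard "chaos game" on a hyperbolic IFS: iterating a randomly chosen contraction drives any starting point toward the attractor. The Sierpinski-triangle maps below are a textbook example chosen for the sketch, not one taken from the paper:

```python
import random

# Three contractions x -> (x + v)/2 toward the triangle's vertices; their
# attractor is the Sierpinski triangle.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(steps, seed=0):
    """Return `steps` orbit points of the random-iteration algorithm."""
    rng = random.Random(seed)
    x, y = 0.3, 0.3
    points = []
    for _ in range(steps):
        vx, vy = rng.choice(VERTICES)
        x, y = (x + vx) / 2, (y + vy) / 2   # apply one contraction
        points.append((x, y))
    return points

pts = chaos_game(10000)
# Every orbit point stays inside the unit square containing the attractor.
print(all(0 <= x <= 1 and 0 <= y <= 1 for x, y in pts))  # True
```

Plotting the returned points reproduces the familiar digitised picture of the attractor; the paper's algorithms extend this kind of approximation to weakly hyperbolic IFSs and to invariant measures.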

Journal ArticleDOI
TL;DR: This paper constructs for each TSS in tyft/tyxt format an equivalent TSS that consists of tree rules only, which yields an affirmative answer to an open question, namely whether the well-foundedness condition in the congruence theorem for tyft/tyxt can be dropped.
Abstract: Groote and Vaandrager introduced the tyft/tyxt format for Transition System Specifications (TSSs), and established that for each TSS in this format that is well-founded, the bisimulation equivalence it induces is a congruence. In this paper, we construct for each TSS in tyft/tyxt format an equivalent TSS that consists of tree rules only. As a corollary we can give an affirmative answer to an open question, namely whether the well-foundedness condition in the congruence theorem for tyft/tyxt can be dropped. These results extend to tyft/tyxt with negative premises and predicates.

Journal ArticleDOI
TL;DR: This work develops a 3-phase algorithm for 2-way SMT and a 2-phase algorithm for 2-way SMT, the first 2-phase algorithm for SMT whose communication and computation costs are polynomial in the number of wires connecting the sender and the receiver.
Abstract: We study perfectly secure message transmission (SMT) in general synchronous networks where processors and communication lines may be Byzantine faulty. Dolev et al. (J. Assoc. Comput. Mach. 40, No. 1, 17–47, Jan. 1993) first posed and solved the problem; our work significantly improves on their algorithms in the number of communication bits and the amount of local computation. Hence, our algorithms are better suited for traditional and fiber-optic networks than previous algorithms while requiring the same amount of connectivity. The algorithms we develop do not rely on any complexity theoretic assumptions and simultaneously achieve the three goals of perfect secrecy, perfect resiliency, and worst case time that is linear in the diameter of the network. Our algorithms assume that the containment assumption holds, i.e., there is effectively one adversary who controls and coordinates the activities of the faulty processors and lines. In SMT, a processor (Sender) wishes to transmit a secret message to another processor (Receiver) in such a way as to satisfy secrecy and resiliency requirements simultaneously. In 1-way SMT, Sender can send information to Receiver via the wires that connect them, but Receiver cannot send information to Sender. In 2-way SMT, Sender and Receiver can send information to each other via the wires. A phase is a send from Sender to Receiver or vice versa. First, we develop a 3-phase algorithm for 2-way SMT. Next, we present a 2-phase algorithm for 2-way SMT. To our knowledge, this is the first 2-phase algorithm for SMT that uses communication and computation costs that are polynomial in the number of wires that connect the sender and the receiver. The second algorithm uses less time and more communication bits than the first algorithm. Both the 2-phase and 3-phase algorithms employ new techniques to detect faulty paths. We also present a simple algorithm for 1-way SMT.

Journal ArticleDOI
TL;DR: A sound and complete proof system is introduced for symbolic bisimulation, which is more amenable to automatic manipulation and sheds light on the logical differences among different forms of bisimulations over algebras of name-passing processes.
Abstract: We use symbolic transition systems as a basis for providing the π-calculus with an alternative semantics. The latter is more amenable to automatic manipulation and sheds light on the logical differences among different forms of bisimulation over algebras of name-passing processes. Symbolic transitions have the form P −(φ, α)→ P′, where φ is a boolean combination of equalities on names that has to hold for the transition to take place, and α is a standard π-calculus action. On top of the symbolic transition system, a symbolic bisimulation is defined that captures the standard ones. Finally, a sound and complete proof system is introduced for symbolic bisimulation.

Journal ArticleDOI
TL;DR: Domain theoretic concepts are built upon an operational foundation, including the notions of directed set, least upper bound, complete partial order, monotonicity, continuity, finite element, ω-algebraicity, full abstraction, and least fixed point, and are used to construct a (strongly) fully abstract continuous model for the authors' language.
Abstract: This paper builds domain theoretic concepts upon an operational foundation. The basic operational theory consists of a single step reduction system from which an operational ordering and equivalence on programs are defined. The theory is then extended to include concepts from domain theory, including the notions of directed set, least upper bound, complete partial order, monotonicity, continuity, finite element, ω-algebraicity, full abstraction, and least fixed point properties. We conclude by using these concepts to construct a (strongly) fully abstract continuous model for our language. In addition we generalize a result of Milner and prove the uniqueness of such models.

Journal ArticleDOI
TL;DR: The techniques of Beigel, Reingold, and Spielman are extended to show that PP is closed under general polynomial-time truth-table reductions and is also shown to be closed under constant-round truth- table reductions.
Abstract: Beigel, Reingold, and Spielman (J. Comput. System Sci. 50, 191–202 (1995)) showed that PP is closed under intersection and a variety of special cases of polynomial-time truth-table closure. We extend their techniques to show that PP is closed under general polynomial-time truth-table reductions. We also show that PP is closed under constant-round truth-table reductions.

Journal ArticleDOI
TL;DR: It is argued that the use of the completeness result for branching congruence in obtaining the completeness result for weak congruence leads to a considerable simplification with respect to the only direct proof presented in the literature.
Abstract: Prefix iteration is a variation on the original binary version of the Kleene star operation P*Q, obtained by restricting the first argument to be an atomic action. The interaction of prefix iteration with silent steps is studied in the setting of Milner's basic CCS. Complete equational axiomatizations are given for four notions of behavioural congruence over basic CCS with prefix iteration, viz., branching congruence, η-congruence, delay congruence, and weak congruence. The completeness proofs for η-, delay, and weak congruence are obtained by reduction to the completeness theorem for branching congruence. It is also argued that the use of the completeness result for branching congruence in obtaining the completeness result for weak congruence leads to a considerable simplification with respect to the only direct proof presented in the literature. The preliminaries and the completeness proofs focus on open terms, i.e., terms that may contain process variables. As a by-product, the ω-completeness of the axiomatizations is obtained, as well as their completeness for closed terms.

Journal ArticleDOI
TL;DR: An O(n³) algorithm is given for each of the four type inference problems of the calculus of objects, and it is proved that all the problems are P-complete.
Abstract: M. Abadi and L. Cardelli have recently investigated a calculus of objects (1994). The calculus supports a key feature of object-oriented languages: an object can be emulated by another object that has more refined methods. Abadi and Cardelli presented four first-order type systems for the calculus. The simplest one is based on finite types and no subtyping, and the most powerful one has both recursive types and subtyping. Open until now is the question of type inference, and in the presence of subtyping "the absence of minimum typings poses practical problems for type inference." In this paper, we give an O(n³) algorithm for each of the four type inference problems and we prove that all the problems are P-complete. We also indicate how to modify the algorithms to handle functions and records.

Journal ArticleDOI
Farn Wang1
TL;DR: The algorithm presented here accepts timed transition system descriptions and parametric TCTL formulas with timing parameter variables of unknown sizes, and can give back general linear equations of timing parameter variables whose solutions make the systems work.
Abstract: We extend a TCTL model-checking problem to a parametric timing analysis problem for real-time systems and develop new techniques for solving it. The algorithm we present here accepts timed transition system descriptions and parametric TCTL formulas with timing parameter variables of unknown sizes and can give back general linear equations of timing parameter variables whose solutions make the systems work.

Journal ArticleDOI
TL;DR: An important result in this paper is the proof that every computable functional on real numbers is continuous w.r.t. the compact open topology on the function space.
Abstract: We present the different constructive definitions of real number that can be found in the literature. Using domain theory we analyse the notion of computability that is substantiated by these definitions and we give a definition of computability for real numbers and for functions acting on them. This definition of computability turns out to be equivalent to other definitions given in the literature using different methods. Domain theory is a useful tool to study higher order computability on real numbers. An interesting connection between Scott topology and the standard topologies on the real line and on the space of continuous functions on reals is stated. An important result in this paper is the proof that every computable functional on real numbers is continuous w.r.t. the compact open topology on the function space.
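One standard constructive representation of a real, in the spirit surveyed above, is a function that returns a rational approximation within 2^(−n) of the value on demand; computable operations then query their arguments at higher precision. A minimal sketch (our own encoding, illustrating the idea rather than the paper's domain-theoretic model):

```python
from fractions import Fraction

def add(x, y):
    """Sum of two computable reals, each given as n -> rational within 2**-n
    of the value: query both at precision n + 1 so the total error is below
    2**-n."""
    return lambda n: x(n + 1) + y(n + 1)

def sqrt2(n):
    """Dyadic approximation of sqrt(2) to within 2**-n, found by bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return lo

s = add(sqrt2, sqrt2)            # the computable real 2*sqrt(2)
approx = s(20)                   # rational within 2**-20 of 2*sqrt(2)
print(abs(approx * approx / 4 - 2) < Fraction(1, 1000))  # True
```

Only finitely many approximation queries are ever made, which is the operational core of higher order computability on the reals.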

Journal ArticleDOI
TL;DR: A systematic analysis of the amount of randomness needed by secret sharing schemes and secure key distribution schemes is given and a lower bound is provided, thus showing the optimality of a recently proposed key distribution protocol.
Abstract: Randomness is a useful computation resource due to its ability to enhance the capabilities of other resources. Its interaction with resources such as time, space, interaction with provers and its role in several areas of computer science has been extensively studied. In this paper we give a systematic analysis of the amount of randomness needed by secret sharing schemes and secure key distribution schemes. We give both upper and lower bounds on the number of random bits needed by secret sharing schemes. The bounds are tight for several classes of secret sharing schemes. For secure key distribution schemes we provide a lower bound on the amount of randomness needed, thus showing the optimality of a recently proposed key distribution protocol.
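A concrete data point for such randomness counts: the n-out-of-n XOR scheme shares one secret bit using exactly n − 1 uniformly random bits (a standard example; the paper's bounds cover general access structures):

```python
import secrets

def share_bit(secret_bit, n):
    """n-out-of-n XOR sharing of one bit: draw n-1 uniformly random bits and
    fix the last share so that all n shares XOR to the secret. The dealer
    consumes exactly n-1 random bits; any n-1 shares are uniformly random
    and reveal nothing about the secret."""
    shares = [secrets.randbelow(2) for _ in range(n - 1)]
    last = secret_bit
    for s in shares:
        last ^= s
    return shares + [last]

def reconstruct(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

sh = share_bit(1, 5)
print(len(sh), reconstruct(sh))  # 5 1
```

Counting the random bits a dealer must draw, as done here for one simple scheme, is exactly the quantity the paper's upper and lower bounds pin down for whole classes of schemes.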

Journal ArticleDOI
TL;DR: The paper provides an accurate analysis of the derivation mechanism and the expressive power of the SR formalism, which is necessary to fully exploit the capabilities of the model.
Abstract: A common approach to the formal description of pictorial and visual languages makes use of formal grammars and rewriting mechanisms. The present paper is concerned with the formalism of Symbol?Relation Grammars (SR grammars, for short). Each sentence in an SR language is composed of a set of symbol occurrences representing visual elementary objects, which are related through a set of binary relational items. The main feature of SR grammars is the uniform way they use context-free productions to rewrite symbol occurrences as well as relation items. The clearness and uniformity of the derivation process for SR grammars allow the extension of well-established techniques of syntactic and semantic analysis to the case of SR grammars. The paper provides an accurate analysis of the derivation mechanism and the expressive power of the SR formalism. This is necessary to fully exploit the capabilities of the model. The most meaningful features of SR grammars as well as their generative power are compared with those of well-known graph grammar families. In spite of their structural simplicity, variations of SR grammars have a generative power comparable with that of expressive classes of graph grammars, such as the edNCE and the N-edNCE classes.

Journal ArticleDOI
TL;DR: A uniform framework for the study of index data structures for a two-dimensional matrix TEXT whose entries are drawn from an ordered alphabet, and new algorithmic tools that yield a space-efficient implementation of the “naming scheme” of R. Karp et al.
Abstract: We provide a uniform framework for the study of index data structures for a two-dimensional matrix TEXT[1:n, 1:n] whose entries are drawn from an ordered alphabet Σ. An index for TEXT can be informally seen as the two-dimensional analog of the suffix tree for a string. It allows on-line searches and statistics to be performed on TEXT by representing compactly the Θ(n³) square submatrices of TEXT in optimal O(n²) space. We identify 4^(n−1) families of indices for TEXT, each containing ∏_{i=1}^{n}(2i−1)! isomorphic data structures. We also develop techniques leading to a single algorithm that efficiently builds any index in any family in O(n² log n) time and O(n²) space. Such an algorithm improves in various respects the algorithms for the construction of the PAT tree and the Lsuffix tree. The framework and the algorithm easily generalize to d > 2 dimensions. Moreover, as part of our algorithm, we provide new algorithmic tools that yield a space-efficient implementation of the “naming scheme” of R. Karp et al. (in “Proceedings, Fourth Symposium on Theory of Computing,” pp. 125–136) for strings and matrices.
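For the string case, the Karp–Miller–Rosenberg “naming scheme” mentioned above can be sketched as follows: equal substrings of length 2^k receive equal integer names, and names for length 2^(k+1) are obtained by ranking pairs of adjacent length-2^k names. This is a minimal sketch of the classic scheme only, not the paper's space-efficient variant for matrices:

```python
# Minimal sketch of KMR naming for a string s: names[k][i] is an integer name
# for the substring of length 2**k starting at position i, such that two
# substrings are equal iff their names are equal.

def kmr_names(s):
    n = len(s)
    names = {0: [ord(c) for c in s]}  # length-1 substrings named by character code
    k, length = 0, 1
    while 2 * length <= n:
        prev = names[k]
        # the name of a length-2*length substring is the rank of the pair of
        # names of its two length-`length` halves
        pairs = [(prev[i], prev[i + length]) for i in range(n - 2 * length + 1)]
        ranks = {p: r for r, p in enumerate(sorted(set(pairs)))}
        names[k + 1] = [ranks[p] for p in pairs]
        k, length = k + 1, 2 * length
    return names
```

Each doubling round costs a sort of O(n) pairs, giving the familiar O(n log n) naming of all power-of-two-length substrings.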

Journal ArticleDOI
TL;DR: The most important contribution of this work is to identify a time complexity measure for asynchronous concurrent programs that strikes a balance between being conceptually simple and having a tangible connection to real performance.
Abstract: We establish trade-offs between time complexity and write- and access-contention for solutions to the mutual exclusion problem. The write-contention (access-contention) of a concurrent program is the number of processes that may be simultaneously enabled to write (access by reading and/or writing) the same shared variable. Our notion of time complexity distinguishes between local and remote accesses of shared memory. We show that, for any N-process mutual exclusion algorithm, if write-contention is w, and if at most v remote variables can be accessed by a single atomic operation, then there exists an execution involving only one process in which that process executes Ω(log_{vw} N) remote operations for entry into its critical section. We further show that, among these operations, Ω(√(log_{vw} N)) distinct remote variables are accessed. For algorithms with access-contention c, we show that the latter bound can be improved to Ω(log_{vc} N). The last two of these bounds imply that a trade-off between contention and time complexity exists even if coherent caching techniques are employed. In most shared-memory multiprocessors, an atomic operation may access only a constant number of remote variables. In fact, most commonly available synchronization primitives (e.g., read, write, test-and-set, load-and-store, compare-and-swap, and fetch-and-add) access only one remote variable. In this case, the first and the last of our bounds are asymptotically tight. Our results have a number of important implications regarding specific concurrent programming problems. For example, the time bounds that we establish apply not only to the mutual exclusion problem, but also to a class of decision problems that includes the leader-election problem. Also, because the execution that establishes these bounds involves only one process, it follows that “fast mutual exclusion” requires arbitrarily high write-contention.
Although such conclusions are interesting in their own right, we believe that the most important contribution of our work is to identify a time complexity measure for asynchronous concurrent programs that strikes a balance between being conceptually simple and having a tangible connection to real performance.
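The local-versus-remote distinction behind these bounds can be illustrated with an array-based queue lock in which each process busy-waits on its own slot, so that spinning is local (cacheable) and only a single fetch-and-increment touches a remote variable. The following is a toy Python model of that idea (names and structure are our own; Python's GIL makes this a conceptual illustration, not a performance technique):

```python
import threading
from itertools import count

class ArrayLock:
    """Array-based queue lock: each waiter spins on its own flag slot.

    Under a coherent-cache or distributed-shared-memory cost model, the spin
    loop generates only local accesses; the one remote operation per entry is
    the fetch-and-increment on `tail`. `nslots` must be at least the number
    of competing threads."""
    def __init__(self, nslots):
        self.nslots = nslots
        self.flags = [False] * nslots
        self.flags[0] = True                 # slot 0 starts out "has the lock"
        self.tail = count()                  # serves as an atomic fetch-and-increment
        self.my_slot = threading.local()

    def acquire(self):
        slot = next(self.tail) % self.nslots  # the single remote operation
        self.my_slot.v = slot
        while not self.flags[slot]:           # local spinning on own slot
            pass
        self.flags[slot] = False              # consume the grant

    def release(self):
        self.flags[(self.my_slot.v + 1) % self.nslots] = True  # pass the lock on
```

Compare this with a test-and-set spin lock, where every waiter hammers the same remote variable: the queue lock bounds write-contention at the cost of one extra shared counter, exactly the kind of trade-off the paper quantifies.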

Journal ArticleDOI
TL;DR: This work presents a typed functional calculus that emphasizes the strong connection between the structures of whole pattern definitions and their types, and proves the basic properties connecting typing and evaluation: subject reduction and strong normalization.
Abstract: The theory of programming with pattern-matching function definitions has been studied mainly in the framework of first-order rewrite systems. We present a typed functional calculus that emphasizes the strong connection between the structures of whole pattern definitions and their types. In this calculus, type-checking guarantees the absence of runtime errors caused by non-exhaustive pattern-matching definitions. Its operational semantics is deterministic in a natural way, without the imposition of ad hoc solutions such as clause order or “best fit”. In the spirit of the Curry–Howard isomorphism, we design the calculus as a computational interpretation of the Gentzen sequent proofs for the intuitionistic propositional logic. We prove the basic properties connecting typing and evaluation: subject reduction and strong normalization. We believe that this calculus offers a rational reconstruction of the pattern-matching features found in successful functional languages.
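The exhaustiveness guarantee can be illustrated with a toy static check over flat constructor patterns. The datatype signatures and function below are our own sketch of the kind of property the calculus builds into type-checking, not the paper's formal system:

```python
# Hypothetical datatype signatures: each type name lists its constructors.
DATATYPES = {
    "list": ["Nil", "Cons"],
    "bool": ["True", "False"],
}

def exhaustive(datatype, clause_heads):
    """Accept a definition only if its clause patterns cover every constructor
    of the scrutinised datatype (a wildcard '_' covers everything).

    A checker enforcing this statically rules out the runtime error of a value
    reaching a pattern-matching definition that has no clause for it."""
    if "_" in clause_heads:
        return True
    return set(DATATYPES[datatype]) <= set(clause_heads)
```

For instance, a function over lists with clauses for both `Nil` and `Cons` passes, while one handling only `Cons` is rejected at type-checking time rather than failing at runtime.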

Journal ArticleDOI
TL;DR: This paper indicates that it is a nontrivial problem to obtain information about the leaf string of a nonbalanced computation tree and presents conditions under which it does not matter whether the computation tree is balanced or not.
Abstract: The computation tree of a nondeterministic machine M with input x gives rise to a leaf string formed by concatenating the outcomes of all the computations in the tree in lexicographical order. We may characterize problems by considering, for a particular “leaf language” Y, the set of all x for which the leaf string of M is contained in Y. In this way, in the context of polynomial time computation, leaf languages were shown to capture many complexity classes. In this paper, we study the expressibility of the leaf language mechanism in the contexts of logarithmic space and of logarithmic time computation. We show that logspace leaf languages yield a much finer classification scheme for complexity classes than polynomial time leaf languages, capturing also many classes within P. In contrast, logtime leaf languages basically behave like logtime reducibilities. Both cases are more subtle to handle than the polynomial time case. We also raise the issue of balanced versus nonbalanced computation trees underlying the leaf language. We indicate that it is a nontrivial problem to obtain information about the leaf string of a nonbalanced computation tree and present conditions under which it does not matter whether the computation tree is balanced or not.
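The leaf-language mechanism for balanced trees can be modeled concretely (this is our own toy encoding, not the paper's machinery): treat a nondeterministic machine as a function from an input and a sequence of binary choices to an outcome bit; the leaf string concatenates the outcomes over all choice sequences in lexicographic order, and acceptance is membership of that string in a fixed leaf language Y:

```python
from itertools import product

def leaf_string(machine, x, depth):
    """Leaf string of a balanced depth-`depth` binary computation tree:
    outcomes of machine(x, bits) over all choice sequences, lexicographically."""
    return "".join(str(machine(x, bits)) for bits in product((0, 1), repeat=depth))

def accepts_NP_style(machine, x, depth):
    """With Y = "strings containing at least one 1", the mechanism captures
    existential (NP-style) acceptance; Y = "all ones" would instead capture
    co-nondeterministic acceptance."""
    return "1" in leaf_string(machine, x, depth)
```

As a toy example, a "machine" whose choice bits select a subset of the weights 1, 2, 3 and whose outcome tests whether the subset sums to x accepts x = 3 (via the subsets {3} and {1, 2}) but rejects x = 7.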