
Showing papers in "Bulletin of The European Association for Theoretical Computer Science in 2013"


Journal Article
TL;DR: This paper proposes as the weakest version the class of vertical weak adhesive HLR categories, short M-adhesive categories, which are still sufficient to obtain most of the main results for graph transformation and HLR systems.
Abstract: Several variants of high-level replacement (HLR) and adhesive categories have been introduced in the literature as categorical frameworks for graph transformation and HLR systems based on the double pushout (DPO) approach. In addition to HLR, adhesive, and adhesive HLR categories, several weak variants, especially weak adhesive HLR categories with horizontal and vertical variants, as well as partial variants, including partial map adhesive and partial VK square adhesive categories, are reviewed and related to each other. We propose as the weakest version the class of vertical weak adhesive HLR categories, short M-adhesive categories, which are still sufficient to obtain most of the main results for graph transformation and HLR systems. The results in this paper are summarized in Fig. 1, showing a hierarchy of all these variants of adhesive, adhesive HLR, and M-adhesive categories, which can be considered as different categorical frameworks for graph transformation and HLR systems.

72 citations


Journal Article
TL;DR: This paper discusses a family of modal logics for reasoning about relational structures of intervals over (usually) linear orders, with modal operators associated with the various binary relations between such intervals, known as Allen's interval relations.
Abstract: We discuss a family of modal logics for reasoning about relational structures of intervals over (usually) linear orders, with modal operators associated with the various binary relations between such intervals, known as Allen's interval relations. The formulae of these logics are evaluated at intervals rather than points, and the main effect of that semantic feature is substantially higher expressiveness and computational complexity of the interval logics as compared to point-based ones. Without purporting to provide a comprehensive survey of the field, we take the reader on a journey through the main developments in it over the past 10 years and outline some landmark results on expressiveness and (un)decidability of the satisfiability problem for the family of interval logics.
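For concreteness, the thirteen basic relations can be enumerated mechanically. The following sketch (our illustration, not code from the paper; the function name is ours) classifies two intervals (a1, a2) and (b1, b2), with a1 < a2 and b1 < b2:

```python
# Illustrative helper (ours): Allen's 13 basic relations between intervals.

def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return "before"
    if a2 == b1: return "meets"
    if a1 < b1 < a2 < b2: return "overlaps"
    if a1 == b1 and a2 < b2: return "starts"
    if b1 < a1 and a2 < b2:  return "during"
    if b1 < a1 and a2 == b2: return "finishes"
    if a == b:   return "equals"
    # the six remaining relations are the inverses of the ones above
    inverse = {"before": "after", "meets": "met-by", "overlaps": "overlapped-by",
               "starts": "started-by", "during": "contains", "finishes": "finished-by"}
    return inverse[allen_relation(b, a)]

print(allen_relation((1, 3), (3, 5)))   # meets
print(allen_relation((2, 6), (1, 9)))   # during
```

The interval logics surveyed attach one modal operator to each of these relations; satisfiability is then evaluated over intervals rather than points.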

45 citations


Journal Article
TL;DR: The current review expands the contents of the original survey with updated results from the last ten years and contributes an extensive bibliography.
Abstract: In 2002, J. Diaz, M. Serna and the author published “A Survey of Graph Layout Problems”, which at the time gave a complete view of the state of the art of layout problems from an algorithmic point of view. The current review expands the contents of the original survey with updated results from the last ten years and contributes an extensive bibliography.

37 citations


Journal Article
TL;DR: This tutorial provides an introductory overview of online reconfiguration, including a description of the main technical challenges, as well as the various approaches that are used to address these challenges.
Abstract: We live in a world of Internet services such as email, social networks, web searching, and more, which must store increasingly larger volumes of data. These services must run on cheap infrastructure, hence they must use distributed storage systems; and they have to provide reliability of data for long periods as well as availability, hence they must support online reconfiguration to remove failed nodes and add healthy ones. The knowledge needed to implement online reconfiguration is subtle and simple techniques often fail to work well. This tutorial provides an introductory overview of this topic, including a description of the main technical challenges, as well as the various approaches that are used to address these challenges.

37 citations


Journal Article
Shantanu Das
TL;DR: This work focuses on the problem of exploring an initially unknown network with one or more mobile agents and the related problem of constructing a map of the environment being explored by the mobile agents.
Abstract: One of the recent paradigms in networked distributed computing is the use of mobile agents. Mobile agents are software robots that can autonomously migrate from node to node within a network. Although mobile agents can be easily implemented over a message passing network, they provide an abstraction for designing algorithms in a non-traditional way which can be quite natural for certain problems, such as searching, monitoring or intruder detection. A principal sub-task in most algorithms for mobile agents is the traversal of the network. We focus on this problem of exploring an initially unknown network with one or more mobile agents. We also consider the related problem of constructing a map of the environment being explored by the mobile agents.
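A minimal sketch (ours, not from the paper) of the folklore depth-first traversal on which many single-agent exploration algorithms build; it assumes, unlike the harder anonymous-network setting, that the agent can recognize nodes it has already visited:

```python
# DFS exploration sketch (ours). The agent sees only the locally numbered
# ports of its current node, can recognize visited nodes, and records each
# traversed tree edge to build a map of the graph.

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}  # port i = i-th neighbour

def explore(start):
    visited, tree_edges = {start}, []
    def dfs(u):
        for port, v in enumerate(graph[u]):      # try each port of u once
            if v not in visited:
                visited.add(v)
                tree_edges.append((u, port, v))  # move along the edge...
                dfs(v)                           # ...explore, then backtrack
    dfs(start)
    return visited, tree_edges

print(explore(0))  # all 4 nodes visited; the tree edges form the agent's map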

35 citations


Journal Article
TL;DR: The main historical development is presented and the basic concepts of descriptional complexity are addressed from a general abstract perspective; the representations by two-way finite automata, multi-head finite automata, and limited automata are then considered.
Abstract: Formal languages can be described by several means. A basic question is how succinctly a descriptional system can represent a formal language in comparison with other descriptional systems. What is the maximal size trade-off when changing from one system to another, and can it be achieved? Here, we select some recent trends in the descriptional complexity of formal languages and discuss the problems, results, and open questions. In particular, we present the main historical development and address the basic concepts of descriptional complexity from a general abstract perspective. Then we consider the representation by two-way finite automata, multi-head finite automata, and limited automata in more detail. Finally, we discuss a few further topics in note form. The results presented are not proved; we merely draw attention to the overall picture and some of the main ideas involved.
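The classic size trade-off between descriptional systems is the NFA-to-DFA blowup, which can be checked by running the subset construction. The sketch below (ours, not from the survey) does this for the textbook witness language L_n = {w in {0,1}* : the n-th symbol from the end is 1}, which has an (n+1)-state NFA while every DFA for it needs 2^n states:

```python
# Subset construction on the (n+1)-state NFA for L_n; counting the reachable
# subset-states exhibits the known 2^n blowup (ours, for illustration).

def dfa_states_for_Ln(n):
    # NFA states 0..n: state 0 loops on 0/1 and guesses the marked 1;
    # states 1..n-1 count further symbols; state n is accepting and dead.
    start, states = frozenset([0]), set()
    frontier = [start]
    while frontier:
        s = frontier.pop()
        if s in states:
            continue
        states.add(s)
        for a in (0, 1):
            nxt = set()
            for q in s:
                if q == 0:
                    nxt.add(0)
                    if a == 1:
                        nxt.add(1)
                elif q < n:
                    nxt.add(q + 1)
            frontier.append(frozenset(nxt))
    return len(states)

print([dfa_states_for_Ln(n) for n in (1, 2, 3, 4)])  # [2, 4, 8, 16]
```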

30 citations


DissertationDOI
TL;DR: In this paper, the authors introduce the process-algebraic language MAPA for modelling Markov automata (MA) and introduce five reduction techniques for MAPA specifications, which can be used to compute time-bounded reachability probabilities, expected times and long-run averages.
Abstract: Quantitative model checking is concerned with the verification of both quantitative and qualitative properties over models incorporating quantitative information. Increases in expressivity of these models allow more types of systems to be analysed, but also raise the difficulty of their efficient analysis. The recently introduced Markov automaton (MA) generalises probabilistic automata and interactive Markov chains, supporting nondeterminism, discrete probabilistic choice as well as stochastic timing. It can be used to compute time-bounded reachability probabilities, expected times and long-run averages. However, an efficient formalism for modelling and generating MAs was still lacking. Additionally, the omnipresent state space explosion always threatens their analysability. This thesis solves the first problem and contributes significantly to the solution of the second. First, we introduce the process-algebraic language MAPA for modelling MAs. It incorporates the use of static as well as dynamic data (such as lists), allowing systems to be modelled efficiently. Second, we introduce five reduction techniques for MAPA specifications. Constant elimination, expression simplification and summation elimination speed up state space generation by simplifying the specification, while dead variable reduction and confluence reduction speed up analysis by reductions in state space size. Since MAs generalise labelled transition systems, discrete-time Markov chains, continuous-time Markov chains, probabilistic automata and interactive Markov chains, our techniques and results are also applicable to all these subclasses. Third, we thoroughly compare confluence reduction to the ample set variant of partial order reduction in the context of probabilistic automata. We show that when preserving branching-time properties, confluence reduction strictly subsumes partial order reduction. Also, we compare the techniques in the practical setting of statistical model checking, demonstrating that the additional potential of confluence indeed may provide larger reductions. We developed the tool SCOOP, containing all our techniques and able to export to the IMCA model checker. Together, these tools for the first time allow the analysis of MAs. Case studies demonstrate the large variety of systems that can be modelled using MAPA. Experiments additionally show significant reductions by all our techniques, sometimes reducing state spaces to less than a percent of their original size: a major step forward in efficient quantitative verification.

19 citations


Journal Article
TL;DR: This paper surveys the failure detector-based approach in both asynchronous shared memory systems and asynchronous message passing systems and presents and discusses recent results and associated k-set agreement algorithms.
Abstract: In the k-set agreement problem, each process proposes a value and has to decide a value in such a way that a decided value is a proposed value and at most k different values are decided. This problem can easily be solved in synchronous systems, or in asynchronous systems prone to t process crashes when t < k. In contrast, it has been shown that k-set agreement cannot be solved in asynchronous systems when t ≥ k. Hence, for several years, the failure detector-based approach has been investigated to circumvent this impossibility. This approach consists in enriching the underlying asynchronous system with an additional module per process that provides it with information on failures. Hence, without becoming synchronous, the enriched system is no longer fully asynchronous. This paper surveys this approach in both asynchronous shared memory systems and asynchronous message passing systems. It presents and discusses recent results and associated k-set agreement algorithms.
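To see why the t < k case is easy, here is a toy rendering (ours, not one of the surveyed failure-detector algorithms) of the folklore shared-memory solution: write your proposal, wait until n - t proposals are visible, decide the smallest you see. Since a process misses at most t proposals, every decided value is among the t + 1 smallest ones, so at most t + 1 <= k distinct values are decided:

```python
# Toy k-set agreement sketch (ours); crashes are not actually simulated.
import threading

n, t = 5, 2                        # n processes, at most t crashes, t < k = 3
proposals = [None] * n             # one single-writer register per process
decisions = {}

def process(i, value):
    proposals[i] = value
    while sum(v is not None for v in proposals) < n - t:
        pass                       # busy-wait stands in for asynchrony here
    decisions[i] = min(v for v in proposals if v is not None)

threads = [threading.Thread(target=process, args=(i, 10 + i)) for i in range(n)]
for th in threads: th.start()
for th in threads: th.join()
print(decisions, "-> distinct decisions:", len(set(decisions.values())))
```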

16 citations


Journal Article
TL;DR: Basic language constructs and a type discipline are introduced as a foundation of structured communication-based concurrent programming, which offers a high-level type abstraction of interactive behaviours of programs as well as guaranteeing the compatibility of interaction patterns between processes in a well-typed program.
Abstract: We introduce basic language constructs and a type discipline as a foundation of structured communication-based concurrent programming. The constructs, which are easily translatable into the summation-less asynchronous π-calculus, allow programmers to organise programs as a combination of multiple flows of (possibly unbounded) reciprocal interactions in a simple and elegant way, subsuming the preceding communication primitives such as method invocation and rendez-vous. The resulting syntactic structure is exploited by a type discipline à la ML, which offers a high-level type abstraction of interactive behaviours of programs as well as guaranteeing the compatibility of interaction patterns between processes in a well-typed program. After presenting the formal semantics, the use of language constructs is illustrated through examples, and the basic syntactic results of the type discipline are established. Implementation concerns are also addressed.
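The compatibility guarantee can be pictured with a toy duality check (ours, far simpler than the paper's typing rules): two endpoints interact safely when each send on one side is matched by a receive of the same type on the other.

```python
# Toy session-duality check (ours): '!' sends a value of type t, '?' receives.

def dual(protocol):
    flip = {'!': '?', '?': '!'}
    return [(flip[op], t) for op, t in protocol]

client = [('!', 'int'), ('!', 'int'), ('?', 'bool')]   # send, send, receive
server = [('?', 'int'), ('?', 'int'), ('!', 'bool')]   # receive, receive, send

print(server == dual(client))   # True: the interaction patterns are compatible
```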

16 citations


Journal Article
TL;DR: In their excellent and detailed survey, the authors bring out the intricate structures involved in the reductions and the effectiveness of standard complexity classes in capturing the complexity of model checking.
Abstract: Mathematical logic and computational complexity have close connections that can be traced to the roots of computability theory and the classical decision problem. In the context of complexity, some well-known fundamental problems are: satisfiability testing of formulas (in some logic), proof complexity, and the complexity of checking if a given model satisfies a given formula. The Model Checking problem, which is the topic of the present article, is also of practical relevance, since efficient model checking algorithms for temporal/modal logics are useful in formal verification. In their excellent and detailed survey, the authors tell us about the complexity of model checking for various logics (temporal, modal and hybrid) and their many fragments. Their article brings out the intricate structures involved in the reductions and the effectiveness of standard complexity classes in capturing the complexity of model checking.

9 citations


Journal Article
TL;DR: This article shows that although it is hard to merge one's articles in an optimal way, it is easy to merge them in such a way that one's H-index increases, which suggests the need for an alternative scientific performance measure that is resistant to this type of manipulation.
Abstract: We prove two complexity results about the H-index concerned with the Google Scholar merge operation on one's scientific articles. The results show that, although it is hard to merge one's articles in an optimal way, it is easy to merge them in such a way that one's H-index increases. This suggests the need for an alternative scientific performance measure that is resistant to this type of manipulation.
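A small worked example (ours; for illustration we assume the simplest merge model, in which a merged article's citation count is the sum of its parts) shows how a single merge raises the H-index:

```python
# The h-index is the largest h such that at least h articles have >= h
# citations each; merging articles pools their citations (sum model assumed).

def h_index(citations):
    citations = sorted(citations, reverse=True)
    return max((h for h, c in enumerate(citations, 1) if c >= h), default=0)

papers = [5, 4, 2, 2, 1]
print(h_index(papers))          # 2: only two papers have >= 3 citations

merged = [5, 4, 2 + 2, 1]       # merge the two 2-citation articles
print(h_index(merged))          # 3: the merge raised the h-index
```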

Journal Article
TL;DR: In non-monotonic reasoning, unlike in usual (monotonic) reasoning where adding more axioms leads to potentially more possible conclusions, adding new facts to a knowledge base may prevent previously valid conclusions.
Abstract: Over the past few decades, non-monotonic reasoning has developed to be one of the most important topics in computational logic and artificial intelligence. The non-monotonicity here refers to the fact that, while in usual (monotonic) reasoning adding more axioms leads to potentially more possible conclusions, in non-monotonic reasoning adding new facts to a knowledge base may prevent previously valid conclusions. Different ways to introduce non-monotonic aspects to classical logic have been considered:

Journal Article
TL;DR: This paper is a short introduction to recursive algorithms that compute tasks in asynchronous distributed systems where communication is through atomic read/write registers, and any number of processes can commit crash failures.
Abstract: Recursion is a fundamental concept of sequential computing that allows for the design of simple and elegant algorithms. Recursion is also used in both parallel and distributed computing to operate on data structures, mainly by exploiting data independence (independent data being processed concurrently). This paper is a short introduction to recursive algorithms that compute tasks in asynchronous distributed systems where communication is through atomic read/write registers, and any number of processes can commit crash failures. In such a context, and differently from sequential and parallel recursion, the conceptual novelty lies in the fact that the aim of the recursion parameter is to allow each participating process to learn the number of processes that it sees as participating in the task computation.

Journal Article
TL;DR: A number of authors have proposed a quantitative approach, in which the amount of information added to an algorithm is studied in relation to the improvement of the quality or efficiency of the solution.
Abstract: In several areas of computer science the possibility and efficiency of a solution are determined by information that is not accessible to the algorithm. Traditionally, a qualitative approach to the study of this information has been pursued, in which the impact of enhancing the algorithm with various specific types of information has been studied. Recently, a number of authors have proposed a quantitative approach, where the amount of the added information is studied in relation to the improvement of the quality or efficiency of the solution. We survey several recent examples of this approach from the area of distributed and online computing.
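A standard toy instance of this quantitative view (ours, not taken from the survey) is ski rental: renting costs 1 per day, buying costs B, and a single advice bit, telling whether the season lasts at least B days, already closes the gap between the best deterministic online strategy and the offline optimum:

```python
# Ski rental with and without one bit of advice (illustrative sketch, ours).

def online_no_advice(days, B):     # break-even strategy: rent B-1 days, then buy
    return days if days < B else (B - 1) + B      # pays up to 2B-1 vs optimum B

def online_one_bit(days, B):       # advice bit answers: days >= B ?
    return B if days >= B else days                # always matches the optimum

B = 10
for days in (3, 9, 10, 30):
    print(days, online_no_advice(days, B), online_one_bit(days, B), min(days, B))
```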

Journal Article
TL;DR: A general method for converting derivability problems, from a broad range of deductive systems, into the derivability problem in a quite specific system, namely the Datalog fragment of universal Horn logic, which is developed in the first part of the paper.
Abstract: In the first part of the paper, we develop a general method for converting derivability problems, from a broad range of deductive systems, into the derivability problem in a quite specific system, namely the Datalog fragment of universal Horn logic. In this generality, the derivability problems may not be recursively solvable, let alone feasible; in particular, we may get Datalog “programs” with infinitely many rules. We then discuss what would be needed to obtain computationally useful results from this method. In the second part of the paper, we analyze a particular deductive system, primal infon logic with variables, which arose in the development of the authorization language DKAL. A consequence of our analysis of primal infon logic with variables is that its derivability problems can be translated into Datalog with only a quadratic increase of size.
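As a generic illustration of derivability-as-Datalog (ours; the paper's actual translation is far more general), here is naive bottom-up evaluation of the textbook transitive-closure program, path(X,Y) :- edge(X,Y) and path(X,Z) :- path(X,Y), edge(Y,Z):

```python
# Naive Datalog evaluation (ours): iterate the rules to a least fixpoint;
# a fact is derivable exactly when it appears in the fixpoint.

edges = {("a", "b"), ("b", "c"), ("c", "d")}
facts = {("path", x, y) for x, y in edges}              # rule 1

while True:
    new = {("path", x, z)
           for (_, x, y) in facts for (y2, z) in edges if y == y2} - facts
    if not new:
        break                                            # fixpoint reached
    facts |= new                                         # rule 2

print(sorted(facts))   # includes ("path", "a", "d"): a -> b -> c -> d
```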

Journal Article
TL;DR: This work discusses two questions about algorithmic completeness (in the operational sense) and proposes solutions based on Gurevich’s Abstract State Machines.
Abstract: We discuss two questions about algorithmic completeness (in the operational sense). First, how to get a mathematical characterization of the classes of algorithms associated to the diverse computation models? Second, how to define a robust and satisfactory notion of primitive recursive algorithm? We propose solutions based on Gurevich’s Abstract State Machines.

Journal Article
TL;DR: This paper describes a number of aspects of streams that the authors have encountered and studied in recent years.
Abstract: Just like diamonds, streams are forever. They are also ubiquitous, arising in theoretical computer science (formal languages and functional programming), mathematics (number theory), and engineering (signal processing). Streams are also important in business computing applications: here one can think of the streams of data queries flowing into search engines, or streams of financial data processed by financial organizations. In this paper, we describe a number of aspects of streams that we have encountered and studied in recent years.

Journal Article
TL;DR: In this article, the authors proposed a self-synchronized duty-cycling mechanism for sensor networks, which is based on the working and resting phases of natural ant colonies.
Abstract: The main contributions of this thesis are located in the domain of wireless sensor networks. In more detail, we introduce energy-aware algorithms and protocols in the context of the following topics: self-synchronized duty-cycling in networks with energy harvesting capabilities, distributed graph coloring, and minimum energy broadcasting with realistic antennas. In the following, we review the research conducted in each case. We propose a self-synchronized duty-cycling mechanism for sensor networks. This mechanism is based on the working and resting phases of natural ant colonies, which show self-synchronized activity phases. The main goal of duty-cycling methods is to save energy by efficiently alternating between different states. In the case at hand, we considered two different states: the sleep state, where communications are not possible and energy consumption is low; and the active state, where communications result in a higher energy consumption. In order to test the model, we conducted an extensive experimentation with synchronous simulations on mobile networks and static networks, also considering asynchronous networks. Later, we extended this work by assuming a broader point of view and including a comprehensive study of the parameters. In addition, thanks to a collaboration with the Technical University of Braunschweig, we were able to test our algorithm in the real sensor network simulator Shawn (http://shawn.sf.net). The second part of this thesis is devoted to the desynchronization of wireless sensor nodes and its application to the distributed graph coloring problem. In particular, our research is inspired by the calling behavior of Japanese tree frogs, whose males use their calls to attract females. Interestingly, as female frogs are only able to correctly localize the male frogs when their calls are not too close in time, groups of males that are located near each other desynchronize their calls. Based on a model of this behavior from the literature, we propose a novel algorithm with applications to the field of sensor networks. In more detail, we analyzed the ability of the algorithm to desynchronize neighboring nodes. Furthermore, we considered extensions of the original model, hereby improving its desynchronization capabilities. To illustrate the potential benefits of desynchronized networks, we then focused on distributed graph coloring. Later, we analyzed the algorithm more extensively and showed its performance on a larger set of benchmark instances. The classical minimum energy broadcast (MEB) problem in wireless ad hoc networks, which is well-studied in the scientific literature, considers an antenna model that allows the adjustment of the transmission power to any desired real value from zero up to the maximum transmission power level. However, when specifically considering sensor networks, a look at the currently available hardware shows that this antenna model is not very realistic. In this work we re-formulate the MEB problem for an antenna model that is realistic for sensor networks. In this antenna model, transmission power levels are chosen from a finite set of possible ones. A further contribution concerns the adaptation of an ant colony optimization algorithm (currently the state of the art for the classical MEB problem) to the more realistic problem version, the so-called minimum energy broadcast problem with realistic antennas (MEBRA).
The obtained results show that the advantage of ant colony optimization over classical heuristics even grows when the number of possible transmission power levels decreases. Finally we build a distributed version of the algorithm, which also compares quite favorably against centralized heuristics from the literature.
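To give the flavour of the desynchronization part, here is a generic phase-averaging sketch (ours, in the style of DESYNC-type algorithms, not the thesis' frog-call model): each node nudges its firing phase toward the midpoint of its two phase-neighbours, and the phases spread out evenly, the property that distributed graph coloring can then exploit.

```python
# Desynchronization toy (ours): n nodes on the unit phase circle converge
# to evenly spaced firing phases by repeated midpoint averaging.
import random

n, alpha = 8, 0.5
phase = sorted(random.random() for _ in range(n))

for _ in range(200):
    for i in range(n):
        prev, nxt = phase[(i - 1) % n], phase[(i + 1) % n]
        if i == 0:     prev -= 1.0          # wrap around the circle
        if i == n - 1: nxt  += 1.0
        phase[i] += alpha * ((prev + nxt) / 2 - phase[i])
    phase = sorted(p % 1.0 for p in phase)

gaps = [(phase[(i + 1) % n] - phase[i]) % 1.0 for i in range(n)]
print([round(g, 3) for g in gaps])          # gaps approach 1/n = 0.125 each
```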

Journal Article
TL;DR: This work addresses self-organizing systems that comply with the scale-free small world model, and extends the conditions of dynamic team constitution in eco-grammar systems to capture the behaviour of agents participating in network cluster formation.
Abstract: The concept and the reality of self-organizing networks have come to pervade modern society. But what exactly is a self-organizing network? Scientists from a range of disciplines have been pursuing questions on the particularities of self-organizing networks. Our work addresses self-organizing systems that comply with the scale-free small world model. We model self-organizing networks at the syntactical level as well as reveal some semantical and experimental aspects related to them. At the syntactical level, we use devices from grammar systems theory: in grammar systems theory the agents are represented by grammars and the generated strings describe the behaviour of the system. At the experimental level, we utilize the methods of selective learning and value estimation under evolutionary pressure. The selection is influenced by the ever changing external world and by the competing individuals. First, we model peer-to-peer networks with the aid of networks of parallel multiset string processors. We establish the connection between the growth of the number of strings present during the computation at the components of these networks of parallel multiset string processors and the growth function of a developmental system. We formalize security rules that conform to self-organizing dynamic systems and allow intra- and inter-community collaborations. Our approach guarantees quick and efficient local analysis of the security requirements, thus reducing the need for global verification. Secondly, we illustrate the great diversity of employing regulated rewriting devices in eco-grammar systems to describe the search strategy of Internet crawlers. We prove that if we ignore the aging of the web pages in the model, then systems with rather simple component grammars suffice to identify any recursively enumerable language. Whereas if the web pages may become obsolete, then the efficiency of the cooperation of the agents decreases considerably. We also examine, through simulations, the extent to which communication makes a goal-oriented community efficient in different graph topologies. Finally, we extend the conditions of dynamic team constitution in eco-grammar systems to capture the behaviour of agents participating in network cluster formation. From the language classes that these systems are capable of generating, we deduce the difficulty of the problems they can solve. Depending on the team constitution mode, different classes of languages can be obtained. For all self-organizing networks presented in this dissertation, we also propose some further research directions.

Journal ArticleDOI
TL;DR: This article provides an overview of the results on achieving secure computation in the presence of concurrent and physical attacks contained in the PhD thesis, with emphasis on the relation of these results to the state of the art.
Abstract: Secure computation enables many parties to jointly compute a function of their private inputs. The security requirement is that the input privacy of any honest party is preserved even if other parties participating in the protocol collude or deviate from the protocol. In concurrent and physical attacks, adversarial parties try to break the privacy of honest parties by exploiting the network connection or physical weaknesses of the honest parties' machines. This article provides an overview of the results on achieving secure computation in the presence of concurrent and physical attacks contained in the PhD thesis "Secure Computation under concurrent and physical attacks", with emphasis on the relation of these results to the state of the art.

Journal Article
TL;DR: This edition of WTTM was dedicated to the 60th birthday of Maurice Herlihy and to his foundational work on Transactional Memory, which was commemorated by Michael Scott in the concluding talk of the event.
Abstract: This year, the 6th edition of the Workshop on Theory of Transactional Memory (WTTM) was collocated with PODC 2014 in Paris, and took place on July 14. The objective of WTTM was to discuss new theoretical challenges and recent achievements in the area of transactional computing. Among the various recent developments in the area of Transactional Memory (TM), one of the most relevant was the support for Hardware TM (HTM) introduced in various commercial processors. Unsurprisingly, the recent advent of HTM in commercial CPUs has had a major impact also on the program of this edition of WTTM, which gathered several works addressing issues related to the programmability, efficiency, and correctness of HTM-based systems, as well as hybrid solutions combining software and hardware TM implementations (HyTM). As in its previous editions, WTTM could count on the generous support of the EuroTM COST Action (IC1001), and on a set of outstanding keynote talks delivered by some of the leading researchers in the area, namely Idit Keidar, Shlomi Dolev, Maged Michael and Michael Scott, who were invited to present their latest achievements. This edition was dedicated to the 60th birthday of Maurice Herlihy and to his foundational work on Transactional Memory, which was commemorated by Michael Scott in the concluding talk of the event. This report is intended to give the highlights of the problems discussed during the workshop.

Transactional Memory (TM) is a concurrency control mechanism for synchronizing concurrent accesses to shared memory by different threads. It has been proposed as an alternative to lock-based synchronization to simplify concurrent programming while exhibiting good performance. The sequential code is encapsulated in transactions, which are sequences of accesses to shared or local variables that should be executed atomically. A transaction ends either by committing, in which case all of its updates take effect, or by aborting, in which case all its updates are discarded.

1 TM Correctness and Universal Constructions

Idit Keidar opened the workshop with a talk presenting a joint work with Kfir Lev-Ari and Gregory Chockler on the characterization of correctness for shared data structures. The idea pursued in this work is to replace the classic and overly conservative read-set validation technique (which checks that all read variables have not changed since they were first read) with the verification of abstract conditions over the shared variables, called base conditions. Reading values that satisfy some base condition at every point in time implies correctness of read-only operations. The resulting correctness guarantee, however, is found not to be equivalent to linearizability, and can be captured through two new conditions: validity and regularity. The former requires that a read-only operation never reaches a state unreachable in a sequential execution; the latter generalizes Lamport's notion of regularity [17] for arbitrary data structures. An extended version of the work presented at WTTM has also appeared in the latest edition of DISC [18].

Claire Capdevielle presented her joint work with Colette Johnen and Alessia Milani on solo-fast universal constructions for deterministic abortable objects. These are objects that ensure that, if several processes contend to operate on them, a special abort response may be returned; such a response indicates that the operation failed and guarantees that an aborted operation does not take effect [13]. Operations that do not abort return a response which is legal with respect to the sequential specification of the object. The construction presented uses only read/write registers when there is no contention, and stronger synchronization primitives, e.g., CAS, when contention occurs [3]. They propose a construction with a lightweight helping mechanism that applies to objects that can return an abort event to indicate the failure of an operation.

Sandeep Hans presented a joint work with Hagit Attiya, Alexey Gotsman, and Noam Rinetzky on an evaluation of TMS1 as a consistency criterion necessary and sufficient for the case where local variables are rolled back upon transaction aborts [2]. The authors claim that TMS [9] is not trivially formulated; in particular, its formulation allows aborted and live transactions to have different views of the system state. Their proof reveals some natural, but subtle, assumptions on the TM required for the equivalence result.
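For readers unfamiliar with the read-set validation that Keidar's talk proposes to relax, a minimal sketch (ours) looks as follows: a read-only operation reads optimistically, then aborts and retries if any variable's version changed in the meantime.

```python
# Read-set validation toy (ours): each shared variable carries a version
# number; a read-only operation retries until every variable it read still
# has the version observed at first read (concurrent writers not shown).

memory   = {"x": 1, "y": 2}
versions = {"x": 0, "y": 0}        # writers would bump these on each update

def read_only(read_set):
    while True:
        seen = {v: (memory[v], versions[v]) for v in read_set}
        result = sum(val for val, _ in seen.values())       # the operation body
        if all(versions[v] == ver for v, (_, ver) in seen.items()):
            return result                                   # validation passed
        # otherwise: a variable changed concurrently, so abort and retry

print(read_only({"x", "y"}))   # 3
```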

Journal Article
TL;DR: This thesis provides a one-pass tableau method TTM that instead of a graph obtains a cyclic tree to decide whether a set of PLTL-formulas is satisfiable, and shows that the classical correspondence between tableaux and sequent calculi can be extended to temporal logic.
Abstract: In this thesis we propose a new approach to deduction methods for temporal logic. Our proposal is based on an inductive definition of eventualities that is different from the usual one. On the basis of this non-customary inductive definition for eventualities, we first provide dual systems of tableaux and sequents for Propositional Linear-time Temporal Logic (PLTL). Then, we adapt the deductive approach introduced by means of these dual tableau and sequent systems to the resolution framework and we present a clausal temporal resolution method for PLTL. Finally, we make use of this new clausal temporal resolution method for establishing logical foundations for declarative temporal logic programming languages. The key element in the deduction systems for temporal logic is to deal with eventualities and hidden invariants that may prevent the fulfillment of eventualities. Different ways of addressing this issue can be found in the works on deduction systems for temporal logic. Traditional tableau systems for temporal logic generate an auxiliary graph in a first pass. Then, in a second pass, unsatisfiable nodes are pruned. In particular, the second pass must check whether the eventualities are fulfilled. The one-pass tableau calculus introduced by S. Schwendimann requires an additional handling of information in order to detect cyclic branches that contain unfulfilled eventualities. Regarding traditional sequent calculi for temporal logic, the issue of eventualities and hidden invariants is tackled by making use of a kind of inference rules (mainly, invariant-based rules or infinitary rules) that complicates their automation. A remarkable consequence of using either a two-pass approach based on auxiliary graphs or a one-pass approach that requires an additional handling of information in the tableau framework, and either invariant-based rules or infinitary rules in the sequent framework, is that temporal logic fails to carry over the classical correspondence between tableaux and sequents. In this thesis, we first provide a one-pass tableau method TTM that, instead of a graph, obtains a cyclic tree to decide whether a set of PLTL-formulas is satisfiable. In TTM, tableaux are classical-like. For unsatisfiable sets of formulas, TTM produces tableaux whose leaves contain a formula and its negation. In the case of satisfiable sets of formulas, TTM builds tableaux where each fully expanded open branch characterizes a collection of models for the set of formulas in the root. The tableau method TTM is complete and yields a decision procedure for PLTL. This tableau method is directly associated to a one-sided sequent calculus called TTC. Since TTM is free from all the structural rules that hinder the mechanization of deduction, e.g. weakening and contraction, the resulting sequent calculus TTC is also free from this kind of structural rules. In particular, TTC is free of any kind of cut, including invariant-based cut. From the deduction system TTC, we obtain a two-sided sequent calculus GTC that preserves all these good freeness properties and is finitary, sound and complete for PLTL. Therefore, we show that the classical correspondence between tableaux and sequent calculi can be extended to temporal logic. The most fruitful approach in the literature on resolution methods for temporal logic, which was started with the seminal paper of M. Fisher, deals with PLTL and requires generating invariants for performing resolution on eventualities.
In this thesis, we present a new approach to resolution for PLTL. The main novelty of our approach is that we do not generate invariants for performing resolution on eventualities. Our method is based on the dual methods of tableaux and sequents for PLTL mentioned above. Our resolution method involves translation into a clausal normal form that is a direct extension of classical CNF. We first show that any PLTL-formula can be transformed into this clausal normal form. Then, we present our temporal resolution method, called TRS-resolution, that extends classical propositional resolution. Finally, we prove that TRS-resolution is sound and complete. In fact, it terminates for any input formula, deciding its satisfiability; hence it gives rise to a new decision procedure for PLTL. In the field of temporal logic programming, the declarative proposals that provide a completeness result do not allow eventualities, whereas the proposals that follow the imperative future approach either restrict the use of eventualities or deal with them by calculating an upper bound based on the small model property for PLTL. In the latter, when the length of a derivation reaches the upper bound, the derivation is given up and backtracking is used to try another possible derivation. In this thesis we present a declarative propositional temporal logic programming language, called TeDiLog, that is a combination of the temporal and disjunctive paradigms in Logic Programming. We establish the logical foundations of our proposal by formally defining operational and logical semantics for TeDiLog and by proving their equivalence. Since TeDiLog is, syntactically, a sublanguage of PLTL, the logical semantics of TeDiLog is supported by PLTL logical consequence. The operational semantics of TeDiLog is based on TRS-resolution. TeDiLog allows both eventualities and always-formulas to occur in clause heads and also in clause bodies. To the best of our knowledge, TeDiLog is the first declarative temporal logic programming language that achieves this high degree of expressiveness. Since the tableau method presented in this thesis is able to detect that the fulfillment of an eventuality is prevented by a hidden invariant without checking for it by means of an extra process, since our finitary sequent calculi do not include invariant-based rules and since our resolution method dispenses with invariant generation, we say that our deduction methods are invariant-free.
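For context, the customary inductive characterization of eventualities that the thesis departs from is the standard one-step fixpoint unfolding of "until" and "eventually" (shown below; the thesis' alternative definition is not reproduced here):

```latex
% Customary unfolding of PLTL eventualities (\bigcirc is the next-time operator):
\varphi \,\mathcal{U}\, \psi \;\equiv\; \psi \,\vee\, \bigl(\varphi \wedge \bigcirc(\varphi \,\mathcal{U}\, \psi)\bigr)
\qquad
\Diamond \psi \;\equiv\; \psi \,\vee\, \bigcirc \Diamond \psi
```

It is exactly this unfolding that forces traditional tableau and sequent systems to track unfulfilled eventualities across cycles, which is what the two-pass graphs, extra bookkeeping, and invariant rules mentioned above are for.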

Journal Article
TL;DR: The first result of this thesis is that TQBF, the problem of determining if a fully-quantified propositional CNF-formula is true, is PSPACE-complete even when restricted to instances of bounded tree-width, a parameter of structures that measures their similarity to a tree.
Abstract: Propositional Proof Complexity is the area of Computational Complexity that studies the length of proofs in propositional logic. One of its main questions is to determine which particular propositional formulas have short proofs in a given propositional proof system. In this thesis we present several results related to this question, all on proof systems that are extensions of the well-known resolution proof system. The first result of this thesis is that TQBF, the problem of determining if a fully-quantified propositional CNF-formula is true, is PSPACE-complete even when restricted to instances of bounded tree-width, a parameter of structures that measures their similarity to a tree. Instances of bounded tree-width of many NP-complete problems are tractable, e.g. SAT, the boolean satisfiability problem. We show that this does not scale up to TQBF. We also consider Q-resolution, a quantifier-aware version of resolution. On the negative side, our first result implies that, unless NP = PSPACE, the class of fully-quantified CNF-formulas of bounded tree-width does not have short proofs in any proof system (and in particular in Q-resolution). On the positive side, we show that instances with bounded respectful tree-width, a more restrictive condition, do have short proofs in Q-resolution. We also give a natural family of formulas with this property that have real-world applications. The second result concerns interpretability. Informally, we say that a first-order formula can be interpreted in another if the first one can be expressed using the vocabulary of the second, plus some extra features. We show that first-order formulas whose propositional translations have short R(const)-proofs, i.e. proofs in a generalized version of resolution with DNF-formulas of constant-size terms, are closed under a weaker form of interpretability (that with no extra features), called definability. Our main result is a similar result on interpretability. Also, we show some examples of interpretations and present a systematic technique to transform some Σ1-definitions into quantifier-free interpretations. The third and final result is about a relativized weak pigeonhole principle. This says that if at least 2n out of n^2 pigeons decide to fly into n holes, then some hole must be doubly occupied. We prove that the CNF encoding of this principle does not have polynomial-size DNF-refutations, i.e. refutations in the generalized version of resolution with unbounded DNF-formulas. For this proof we discuss the existence of unbalanced low-degree bipartite expanders satisfying a certain robustness condition.
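For orientation (ours; the thesis' principle additionally relativizes which pigeons participate), the standard CNF encoding of the pigeonhole principle with m pigeons and n holes uses variables x_{i,j} meaning "pigeon i sits in hole j":

```latex
% Pigeon axioms: every pigeon sits somewhere. Hole axioms: no hole is shared.
\mathrm{PHP}^{m}_{n} \;=\;
\bigwedge_{i=1}^{m} \Bigl(\,\bigvee_{j=1}^{n} x_{i,j}\Bigr)
\;\wedge\;
\bigwedge_{j=1}^{n} \;\bigwedge_{1 \le i < i' \le m} \bigl(\lnot x_{i,j} \vee \lnot x_{i',j}\bigr)
```

For m > n this CNF is unsatisfiable, and proof systems are compared by how long their refutations of it must be.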


Journal Article
TL;DR: It is shown that resolved systems of equations over sets of natural numbers can have non-ultimately periodic sets as their least solutions; equivalently, conjunctive grammars over a single-letter alphabet can generate non-regular languages, as opposed to context-free grammars.
Abstract: This thesis studies systems of equations ψ(X1, ..., Xn) = φ(X1, ..., Xn) over sets of natural numbers, in which the operations of union, intersection and addition are allowed. Such systems can be equally viewed as systems of language equations over a single-letter alphabet with the operations of union, intersection and concatenation. The first to be considered is the subclass of systems of equations over sets of numbers in the resolved form Xi = φi(X1, ..., Xn). Their counterparts among the language equations are the resolved systems of language equations over a single-letter alphabet, which can also be seen as conjunctive grammars over a single-letter alphabet. It is shown that resolved systems of equations over sets of natural numbers can have non-ultimately periodic sets as their least solutions. Equivalently, conjunctive grammars over a single-letter alphabet can generate non-regular languages, as opposed to context-free grammars. To this end, an explicit construction of a resolved system with a given set of numbers as the least solution is presented, provided that base-k positional notations of numbers from this set are recognised by a certain type of real-time cellular automaton. In the general case of systems of equations, it is shown that the class of unique (least, greatest) solutions of such systems coincides with the class of recursive (recursively enumerable, co-recursively enumerable, respectively) sets. This result holds even when only union and addition (or only intersection and addition) are allowed in the system. This generalises the known result for systems of language equations over a multiple-letter alphabet. Systems with addition as the only allowed operation are also considered, and it is shown that the obtained class of sets is computationally universal, in the sense that their unique (least, greatest) solutions can represent encodings of all recursive (recursively enumerable, co-recursively enumerable, respectively) sets. The computational complexity of decision problems for both formalisms is investigated. It is shown that the membership problem for resolved systems of equations is EXPTIME-hard. Many other decision problems for both types of systems are proved to be undecidable, and their exact undecidability level is settled. Most of these results hold even when the systems are restricted to the use of one equation with one variable.
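The flavour of the non-periodicity result can be checked by brute force. The sketch below (ours) iterates, to a fixpoint below a cut-off, a four-variable resolved system in the style of A. Jeż's well-known example (the exact rules are reconstructed from memory; running the iteration is what confirms them). Its least solution is X_i = {i * 4^n : n >= 0} for i in {1, 2, 3, 6}, so X_1 is not ultimately periodic:

```python
# Kleene iteration of a resolved system over sets of naturals (ours).
BOUND = 1025
plus = lambda A, B: {a + b for a in A for b in B if a + b < BOUND}

X1, X2, X3, X6 = {1}, {2}, {3}, set()
while True:
    n1 = (plus(X1, X3) & plus(X2, X2)) | {1}   # X1 = (X1+X3) ∩ (X2+X2) ∪ {1}
    n2 = (plus(X1, X1) & plus(X2, X6)) | {2}   # X2 = (X1+X1) ∩ (X2+X6) ∪ {2}
    n3 = (plus(X1, X2) & plus(X6, X6)) | {3}   # X3 = (X1+X2) ∩ (X6+X6) ∪ {3}
    n6 =  plus(X1, X2) & plus(X3, X3)          # X6 = (X1+X2) ∩ (X3+X3)
    if (n1, n2, n3, n6) == (X1, X2, X3, X6):
        break                                  # least fixpoint (below BOUND)
    X1, X2, X3, X6 = n1, n2, n3, n6

print(sorted(X1))   # [1, 4, 16, 64, 256, 1024]: the non-periodic set {4^n}
```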

Journal Article
TL;DR: A very general RAM model is defined, and then a “quasi-optimal” result on the simulation of machines in a suitably defined general model by successor RAMs is given.
Abstract: In many works in the fields of computational complexity, algorithmic number theory and mathematical cryptology, as well as in related areas, claims on the running times of algorithms are made. However, often no computational model is given and the analysis is performed in a more or less ad hoc way, intuitively counting “bit operations”. On the other hand, the computational model of a successor RAM with logarithmic cost function provides an adequate and formal basis for the analysis of the complexity of algorithms from a “bit oriented” point of view. This motivates the search for a result on the simulation of machines in a suitably defined general model by successor RAMs. In this work, a very general RAM model is defined, and then a “quasi-optimal” result on the simulation of such machines by successor RAMs is given.

Journal Article
TL;DR: This work studies the expressive power of a concurrent language, namely Constraint Handling Rules, that supports constraints as a primitive construct, showing which features of this language make it Turing powerful and what happens to its expressive power if priorities are introduced.
Abstract: Constraints can be used in concurrency theory to increase the conciseness and the expressive power of concurrent languages from a pragmatic point of view. In this work we study the expressive power of a concurrent language, namely Constraint Handling Rules, that supports constraints as a primitive construct. We show which features of this language make it Turing powerful and what happens to its expressive power if priorities are introduced.
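To convey what computing with CHR-style multiset rewriting looks like, here is a toy Python rendering (ours, not one of the encodings used in the expressiveness proofs) of the classic two-rule gcd program, gcd(0) <=> true and gcd(N) \ gcd(M) <=> N =< M | gcd(M-N):

```python
# Constraint store holding gcd/1 constraints; rules fire until a fixpoint.
store = [12, 18, 30]

changed = True
while changed:
    changed = False
    store = [m for m in store if m != 0]          # gcd(0) <=> true
    for i, n in enumerate(store):
        for j, m in enumerate(store):
            if i != j and 0 < n <= m:
                store[j] = m - n                  # gcd(N) \ gcd(M) <=> gcd(M-N)
                changed = True
print(store)   # [6], the gcd of 12, 18 and 30
```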

Journal Article
TL;DR: Mihai Pǎtraşcu, aged 29, passed away on Tuesday June 5, 2012, after a 1.5-year battle with brain cancer. His main research area was data structure lower bounds.
Abstract: Mihai Pǎtraşcu, aged 29, passed away on Tuesday June 5, 2012, after a 1.5-year battle with brain cancer. Mihai's academic career was short but explosive, full of rich and beautiful ideas as witnessed, e.g., in his 20 STOC/FOCS papers. His many interesting papers are available online at: http://people.csail.mit.edu/mip/papers/index.html. Mihai's talent showed early. In high school he received numerous medals at national (Romanian) and international olympiads, including prizes in informatics, physics and applied math. He received gold medals at the International Olympiad in Informatics (IOI) in both 2000 and 2001. He remained involved with olympiads and was an elected member of the International Scientific Committee of the International Olympiad in Informatics from 2010. Mihai's main research area was data structure lower bounds. In data structures we try to understand how we can efficiently represent, access, and update information. Mihai revolutionized and revitalized the lower bound side, in many cases matching known upper bounds. The lower bounds were proved in the powerful cell-probe model, which only charges for memory access and hence captures both RAM and external memory. Already in 2004 [17], as a second-year undergraduate student, with his supervisor Erik Demaine as non-alphabetic second author, he broke the Ω(log n/ log log n) lower bound barrier that had impeded dynamic lower bounds since 1989 [6], and showed the first logarithmic lower bound by an elegant short proof, a true combinatorial gem. The important conclusion was that binary search trees are optimal algorithms for the textbook problem of maintaining prefix sums in a dynamic array. They also proved an Ω(log n) lower bound for dynamic trees, matching Sleator and Tarjan's upper bound from 1983 [20]. In 2005 he received from the Computing Research Association (CRA) the Outstanding Undergraduate Award for the best undergraduate research in the US and Canada. I was myself lucky enough to meet Mihai in 2004, starting one of the most intense collaborations I have experienced in my career. It took us almost two years to find the first separation between near-linear and polynomial space in data structures [19]. What kept us going on this hard problem was that we always had lots of fun on the side: playing squash, going on long hikes, and having beers celebrating every potentially useful idea we found on the way. A strong friendship was formed. Mihai published more than 10 papers while pursuing his undergraduate studies at MIT from 2002 to 2006. Nevertheless he finished with a perfect 5.0/5.0 GPA. Over the next 2 years, he did his PhD at MIT. His thesis “Lower Bound Techniques for Data Structures” [11] is a must-read for researchers who want to get into data structure lower bounds. During Mihai's PhD, I got to be his mentor at AT&T, and in 2009, after a year as Raviv Postdoctoral Fellow at IBM Almaden, he joined me at AT&T. We continued our work on lower bounds, but I also managed to get him interested in hashing, which is of immense importance to real computing. We sought schemes that were both truly practical and theoretically powerful [15].


Journal Article
TL;DR: Congestion games are a widely studied class of non-cooperative games that constitute a framework with nice theoretical properties; many classes of congestion games guarantee a low price of anarchy, that is, the ratio between the worst Nash Equilibrium and the social optimum.
Abstract: Congestion games are a widely studied class of non-cooperative games. In fact, besides being able to model many practical settings, they constitute a framework with nice theoretical properties: congestion games always converge to pure Nash Equilibria by means of improvement moves performed by the players, and many classes of congestion games guarantee a low price of anarchy, that is, the ratio between the worst Nash Equilibrium and the social optimum. Unfortunately, the time of convergence to Nash Equilibria, even under best response moves of the players, can be very high, i.e., exponential in the number of players, and in many settings even computing a Nash Equilibrium is computationally hard.
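A minimal simulation (ours) of the convergence claim: in a singleton congestion game with identical resources, letting players make strictly improving best responses must terminate in a pure Nash Equilibrium, because each move decreases Rosenthal's potential (the sum, over resources, of 1 + 2 + ... + load):

```python
# Best-response dynamics in a singleton congestion game (illustrative, ours):
# n players each pick one of three identical resources; cost = resource load.
import random

n, resources = 6, [0, 1, 2]
choice = [random.choice(resources) for _ in range(n)]
load = lambda r: sum(1 for c in choice if c == r)

moved = True
while moved:                                   # terminates by the potential
    moved = False
    for i in range(n):
        current = load(choice[i])              # cost currently experienced
        for r in resources:
            if r != choice[i] and load(r) + 1 < current:
                choice[i] = r                  # strictly improving response
                moved = True
                break
print(choice, [load(r) for r in resources])    # balanced loads, e.g. [2, 2, 2]
```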