Showing papers in "Acta Informatica in 1997"


Journal ArticleDOI
TL;DR: An analysis of the development of each of the specialized stream-processing topics surveyed (dataflow, functional and logic programming with streams, reactive systems, signal processing, and hardware design) to determine whether a general theory of stream processing has emerged, including a comparison of the semantic models used to formalize stream-based computation.
Abstract: Stream processing is a term that is used widely in the literature to describe a variety of systems. We present an overview of the historical development of stream processing and a detailed discussion of the different languages and techniques for programming with streams that can be found in the literature. This includes an analysis of dataflow, specialized functional and logic programming with streams, reactive systems, signal processing systems, and the use of streams in the design and verification of hardware.

334 citations


Journal ArticleDOI
TL;DR: The minimum link measure, a new distance function on point sets that is more appealing than the other distance functions considered, is introduced together with the metric infimum method; the metric infimum of the minimum link measure is shown to be computable in NP for a broad class of instances and to be NP-hard for a natural problem class.
Abstract: We consider the problem of measuring the similarity or distance between two finite sets of points in a metric space, and computing the measure. This problem has applications in, e.g., computational geometry, philosophy of science, updating or changing theories, and machine learning. We review some of the distance functions proposed in the literature, among them the minimum distance link measure, the surjection measure, and the fair surjection measure, and supply polynomial time algorithms for the computation of these measures. Furthermore, we introduce the minimum link measure, a new distance function which is more appealing than the other distance functions mentioned. We also present a polynomial time algorithm for computing this new measure. We further address the issue of defining a metric on point sets. We present the metric infimum method that constructs a metric from any distance function on point sets. In particular, the metric infimum of the minimum link measure is quite intuitive. The computation of this measure is shown to be in NP for a broad class of instances; it is NP-hard for a natural problem class.
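
To make the flavor of these measures concrete, here is a brute-force sketch of the surjection measure (the minimum, over all surjections from the larger set onto the smaller, of the summed point distances) for tiny planar point sets. This is an illustrative exponential-time enumeration, not the paper's polynomial-time algorithm; the example point sets and the Euclidean metric are our own choices.

    import itertools
    import math

    def surjection_measure(A, B):
        """Brute-force surjection measure between finite point sets:
        minimize the summed Euclidean distance over all surjections
        from the larger set onto the smaller one."""
        if len(A) < len(B):
            A, B = B, A
        best = math.inf
        # Enumerate all maps f : A -> B and keep only the surjective ones.
        for image in itertools.product(range(len(B)), repeat=len(A)):
            if set(image) != set(range(len(B))):
                continue  # not surjective
            cost = sum(math.dist(A[i], B[j]) for i, j in enumerate(image))
            best = min(best, cost)
        return best

    print(surjection_measure([(0, 0), (1, 0), (2, 0)], [(0, 1), (2, 1)]))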

226 citations


Journal ArticleDOI
TL;DR: The decidability of the model checking problem is studied for linear and branching time logics over two models of concurrent computation, namely Petri nets and Basic Parallel Processes.
Abstract: We study the decidability of the model checking problem for linear and branching time logics, and two models of concurrent computation, namely Petri nets and Basic Parallel Processes.

162 citations


Journal ArticleDOI
TL;DR: Polynomial-time algorithms for the feedback vertex set problem are presented for two graph classes: cocomparability graphs and convex bipartite graphs.
Abstract: Polynomial-time algorithms for the feedback vertex set problem in cocomparability graphs and convex bipartite graphs are presented.

74 citations


Journal ArticleDOI
TL;DR: If G is an n-vertex maximal planar graph and δ ≤ 1/3, then the vertex set of G can be partitioned into three sets A, B, C such that neither A nor B has weight exceeding 1−δ, and C is a simple cycle with no more than 2√n+O(1) vertices.
Abstract: If G is an n-vertex maximal planar graph and δ ≤ 1/3, then the vertex set of G can be partitioned into three sets A, B, C such that neither A nor B contains more than (1−δ)n vertices, no edge from G connects a vertex in A to a vertex in B, and C is a cycle in G containing no more than (√(2δ)+√(2−2δ))√n+O(1) vertices. Specifically, when δ = 1/3, the separator C is of size (√(2/3)+√(4/3))√n+O(1), which is roughly 1.97√n. The constant 1.97 improves the best previously known bound, Miller's 2√2 ≈ 2.83. If non-negative weights adding to at most 1 are associated with the vertices of G, then the vertex set of G can be partitioned into three sets A, B, C such that neither A nor B has weight exceeding 1−δ, no edge from G connects a vertex in A to a vertex in B, and C is a simple cycle with no more than 2√n+O(1) vertices.
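
For concreteness, the constant claimed at δ = 1/3 follows by direct evaluation of the general bound: $$ \sqrt{2\delta} + \sqrt{2-2\delta} \;\Big|_{\delta=1/3} = \sqrt{2/3} + \sqrt{4/3} \approx 0.8165 + 1.1547 \approx 1.97 < 2\sqrt{2} \approx 2.83 . $$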

68 citations


Journal ArticleDOI
TL;DR: With the concatenation operation defined here and the sum operation on graphs, the class of context-free (or equational) graph languages is exactly the class of graph languages generated by HR grammars.
Abstract: An operation of concatenation is defined for graphs. This allows strings to be viewed as expressions denoting graphs, and string languages to be interpreted as graph languages. For a class \(K\) of string languages, \({\rm Int}(K)\) is the class of all graph languages that are interpretations of languages from \(K\). For the classes REG and LIN of regular and linear context-free languages, respectively, \({\rm Int}({\rm REG}) = {\rm Int}({\rm LIN})\). \({\rm Int}({\rm REG})\) is the smallest class of graph languages containing all singletons and closed under union, concatenation and star (of graph languages). \({\rm Int}({\rm REG})\) equals the class of graph languages generated by linear HR (= Hyperedge Replacement) grammars, and \({\rm Int}(K)\) is generated by the corresponding \(K\)-controlled grammars. Two characterizations are given of the largest class \(K'\) such that \({\rm Int}(K') = {\rm Int}(K)\). For the class CF of context-free languages, \({\rm Int}({\rm CF})\) lies properly in between \({\rm Int}({\rm REG})\) and the class of graph languages generated by HR grammars. The concatenation operation on graphs combines nicely with the sum operation on graphs. The class of context-free (or equational) graph languages, with respect to these two operations, is the class of graph languages generated by HR grammars.

63 citations


Journal ArticleDOI
TL;DR: It is shown how to formalise different kinds of loop constructs within the refinement calculus, and how to use this formalisation to derive general transformation rules for loop constructs, including transformation rules that have been found important in practical program derivations.
Abstract: We show here how to formalize different kinds of loop constructs within the refinement calculus, and how to use this formalization to derive general loop transformation rules. The emphasis is on using algebraic methods for reasoning about equivalence and refinement of loops, rather than looking at operational ways of reasoning about loops in terms of their execution sequences. We apply the algebraic reasoning techniques to derive a collection of different loop transformation rules that have been found important in practical program derivations: merging and reordering of loops, data refinement of loops with stuttering transitions and atomicity refinement of loops.

59 citations


Journal ArticleDOI
TL;DR: For a number of different classes of structures, it is shown that any structure can be represented as the intersection of its maximal extensions, which can be seen as a generalisation of Szpilrajn's theorem.
Abstract: We consider relational structures \((X,R_1,R_2)\) such that \(X\) is a set and \(R_1,R_2\) are two binary relations on \(X\). For a number of different classes of structures we show that any structure can be represented as the intersection of its maximal extensions. Such a property – called extension completeness – can be seen as a generalisation of Szpilrajn's theorem which states that each partial order is the intersection of its total order extensions. When \(R_1\) can be interpreted as causality and \(R_2\) as ‘weak’ causality we obtain a model of concurrent histories generalising that based on causal partial orders.
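
Szpilrajn's theorem itself is easy to verify mechanically in the finite case; the sketch below (our illustration of the cited theorem, not of the paper's generalisation to relational structures) recovers a small partial order as the intersection of all its total-order extensions.

    from itertools import permutations

    # A strict partial order on {0, 1, 2, 3}: 0 < 1, 0 < 2, 1 < 3, 2 < 3 (and 0 < 3).
    order = {(0, 1), (0, 2), (0, 3), (1, 3), (2, 3)}
    elems = [0, 1, 2, 3]

    def pairs(perm):
        """The strict total order (set of ordered pairs) induced by a permutation."""
        pos = {x: i for i, x in enumerate(perm)}
        return {(x, y) for x in perm for y in perm if pos[x] < pos[y]}

    # Total-order extensions: permutations whose induced order contains `order`.
    extensions = [pairs(p) for p in permutations(elems) if order <= pairs(p)]

    print(set.intersection(*extensions) == order)  # True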

44 citations


Journal ArticleDOI
TL;DR: It is shown that the problem of finding faster implementations of a specification is connected to the problem of finding more distributed implementations of the same specification.
Abstract: A preorder based on execution speed, called performance preorder, is introduced for a simple process algebra with durational actions. Two processes \(E\) and \(F\) are related, \(E \sqsubseteq_p F\), if they have the same functionality (in this case, we have chosen strong bisimulation equivalence) and \(E\) is at least as fast as \(F\). Hence, this preorder supports the stepwise refinement “from specification to implementation” by increasing efficiency while retaining the same functionality. We show that the problem of finding faster implementations for a specification is connected to the problem of finding more distributed implementations of the same specification. Both performance preorder and the induced equivalence, called competitive equivalence, are provided with sound and complete axiomatizations for finite agents.

39 citations


Journal ArticleDOI
TL;DR: A tool-based framework for generating language-specific software from specifications is presented; it supports new techniques to specify more language aspects in a static fashion, which improves the efficiency of generated software.
Abstract: The specification of realistic programming languages is difficult and expensive. One approach to make language specification more attractive is the development of techniques and systems for the generation of language-specific software from specifications. To contribute to this approach, a tool-based framework with the following features is presented: It supports new techniques to specify more language aspects in a static fashion. This improves the efficiency of generated software. It provides powerful interfaces to generated software components. This facilitates the use of these components as parts of language-specific software. It has a rather simple formal semantics. In the framework, static semantics is defined by a very general attribution technique enabling e.g. the specification of flow graphs. The dynamic semantics is defined by evolving algebra rules, a technique that has been successfully applied to realistic programming languages. After providing the formal background of the framework, an object-oriented programming language is specified to illustrate the central specification features. In particular, it is shown how parallelism can be handled. The relationship to attribute grammar extensions is discussed using a non-trivial compiler problem. Finally, the paper describes new techniques for implementing the framework and reports on experience gained so far with the implemented system.

35 citations


Journal ArticleDOI
Kim S. Larsen
TL;DR: This paper gives the first proof that amortized constant-time rebalancing is achievable in a relaxed binary search tree using only standard single and double rotations, speeding up request processing in main-memory databases.
Abstract: The idea of relaxed balance is to uncouple the rebalancing in search trees from the updating in order to speed up request processing in main-memory databases during bursts of updates. This paper contains the first proof that amortized constant time rebalancing can be obtained in a relaxed binary search tree using only standard single and double rotations.

Journal ArticleDOI
TL;DR: The underlying theory of BURS (bottom-up rewrite systems) is formalised, and an algorithm is derived that computes all pattern matches and terminates if the term rewrite system is finite.
Abstract: BURS theory provides a powerful mechanism to efficiently generate pattern matches in a given expression tree. BURS, which stands for bottom-up rewrite system, is based on term rewrite systems, to which costs are added. We formalise the underlying theory, and derive an algorithm that computes all pattern matches. This algorithm terminates if the term rewrite system is finite. We couple this algorithm with the well-known search algorithm A* that carries out pattern selection. The search algorithm is directed by a cost heuristic that estimates the minimum cost of code that has yet to be generated. The advantage of using a search algorithm is that we need to compute only those costs that may be part of an optimal rewrite sequence (and not the costs of all possible rewrite sequences as in dynamic programming). A system that implements the algorithms presented in this work has been built.
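
The pattern-selection step is ordinary best-first search: explore rewrite sequences in order of accumulated cost plus an admissible estimate of the cost still to come. The skeleton below is a generic A* over an abstract state space, our illustration of that mechanism rather than the paper's BURS-specific data structures.

    import heapq
    from itertools import count

    def a_star(start, is_goal, successors, h):
        """Generic A*: successors(state) yields (next_state, step_cost) pairs;
        h(state) is an admissible lower bound on the remaining cost, playing
        the role of the estimated minimum cost of code yet to be generated."""
        tie = count()  # tie-breaker so the heap never compares states directly
        frontier = [(h(start), next(tie), 0, start, [start])]
        best_g = {}
        while frontier:
            _, _, g, state, path = heapq.heappop(frontier)
            if is_goal(state):
                return g, path  # cheapest cost and one optimal sequence
            if state in best_g and best_g[state] <= g:
                continue  # already expanded at least as cheaply
            best_g[state] = g
            for nxt, step in successors(state):
                heapq.heappush(frontier,
                               (g + step + h(nxt), next(tie), g + step, nxt, path + [nxt]))
        return None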

Journal ArticleDOI
TL;DR: The problem of transforming an FBDD or a $\pi'$OBDD for a Boolean function f into the reduced $\pi$OBDD $Q$ for f is considered; it is important for improving given variable orderings, e.g., by simulated annealing or genetic algorithms, and for making incompatible representations of functions compatible.
Abstract: The problem of transforming an FBDD (free binary decision diagram) $P$ on $n$ variables, or a $\pi'$OBDD (ordered binary decision diagram with respect to the variable ordering $\pi'$) $P$, for the Boolean function $f$ into the reduced $\pi$OBDD $Q$ for $f$ is considered. The algorithms run in time $O(\vert P \vert \, \vert Q \vert \log \vert Q \vert)$ (where, e.g., $\vert P \vert$ is the size of $P$) and need space $O(\vert P \vert + n \cdot \vert Q \vert)$ if $P$ may be an FBDD, or $O(\vert P \vert + \vert Q \vert)$ if $P$ is known to be an OBDD. The problem is important for the improvement of given variable orderings, e.g., by simulated annealing or genetic algorithms, and in situations where incompatible representations of functions have to be made compatible.

Journal ArticleDOI
TL;DR: It is shown that SFP domains can be characterized as special kinds of rank-ordered cpo's, and the connection between the Lawson topology and the topology induced by the metric is discussed.
Abstract: In denotational semantics of programming languages, partial orders and metric spaces, respectively, have been used with great benefit in order to provide a meaning to recursive and repetitive constructs. This paper presents two methods to define a metric on a subset $M$ of a complete partial order $D$ such that $M$ is a complete metric space and the metric semantics on $M$ coincides with the partial order semantics on $D$ when the same semantic operators are used. The first method is to add a ‘length’ to a complete partial order, i.e. a function $\rho : D \to {\Bbb N} \cup \{\infty\}$ of increasing power. The second is based on the ideas of [11] and uses pseudo rank orderings, i.e. monotone sequences of monotone functions $\pi_n : D \to D$. We show that SFP domains can be characterized as special kinds of rank-ordered cpo's. We also discuss the connection between the Lawson topology and the topology induced by the metric.

Journal ArticleDOI
TL;DR: This paper investigates the concept of unconditional transfer within various forms of regulated grammars, such as programmed grammars, matrix grammars, grammars with regular control, grammars controlled by bicoloured digraphs, periodically time-variant grammars and variants thereof, especially regarding their descriptive capacity.
Abstract: In this paper, we investigate the concept of unconditional transfer within various forms of regulated grammars like programmed grammars, matrix grammars, grammars with regular control, grammars controlled by bicoloured digraphs, periodically time-variant grammars and variants thereof, especially regarding their descriptive capacity. In this way, we solve some problems from the literature. Furthermore, we correct a construction from the literature. Most of the results of the present paper have been announced in [11].

Journal ArticleDOI
TL;DR: A specific implementation of bucket sort is presented whose primary advantages are that linear average-time performance is achieved with an additional amount of storage equal to any fraction of the number of elements being sorted, and that no linked-list data structures are used (all sorting is done with arrays).
Abstract: Various methods, such as address-calculation sorts, distribution counting sorts, radix sorts, and bucket sorts, use the values of the numbers being sorted to increase efficiency but do so at the expense of requiring additional storage space. In this paper, a specific implementation of bucket sort is presented whose primary advantages are that (i) linear average-time performance is achieved with an additional amount of storage equal to any fraction of the number of elements being sorted and (ii) no linked-list data structures are used (all sorting is done with arrays). Analytical and empirical results show the trade-off between the additional storage space used and the improved computational efficiency obtained. Computer simulations show that for lists containing 1,000 to 30,000 uniformly distributed positive integers, the sort developed here is faster than both Quicksort and a standard implementation of bucket sort. Furthermore, the running time increases with size at a slower rate.
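
A minimal array-only rendition of the idea (ours, for illustration; the paper's implementation and analysis are more refined): one counting pass sizes the buckets, prefix sums turn counts into offsets, elements are placed by offset, and a final insertion-sort pass finishes the nearly sorted array. The number of buckets is a fraction alpha of n, mirroring the storage/time trade-off; keys are assumed normalized to [0, 1).

    def bucket_sort(a, alpha=0.1):
        """Bucket sort with arrays only: counts, prefix-sum offsets, placement,
        then insertion sort over the nearly sorted result. Uses a bucket array
        of size alpha*n (this sketch also uses an n-sized output array)."""
        n = len(a)
        m = max(1, int(alpha * n))            # number of buckets
        counts = [0] * (m + 1)
        for x in a:
            counts[min(int(x * m), m - 1) + 1] += 1
        for i in range(m):
            counts[i + 1] += counts[i]        # counts[b] = start offset of bucket b
        out = [0.0] * n
        pos = counts[:m]                      # next free slot per bucket
        for x in a:
            b = min(int(x * m), m - 1)
            out[pos[b]] = x
            pos[b] += 1
        for i in range(1, n):                 # insertion sort: cheap on nearly sorted data
            x, j = out[i], i - 1
            while j >= 0 and out[j] > x:
                out[j + 1] = out[j]
                j -= 1
            out[j + 1] = x
        return out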

Journal ArticleDOI
TL;DR: It is shown that semantics discriminating according to the spatial distribution of processes can also be formulated in a natural way within a general framework proposed by Degano, De Nicola and Montanari.
Abstract: A general framework proposed by Degano, De Nicola and Montanari has proved fruitful for defining, in a natural way, non-interleaving semantics based on causality for process description languages. The framework relies on a decomposition function used to obtain the set of its sequential processes from a parallel term, and on a set of distributed transition rules carrying information about the actions processes can perform and their location. In this paper we show that semantics discriminating according to the spatial distribution of processes can also be formulated in a natural way within this framework. Two new semantics are proposed. The first one is based on an alternative characterization of the locality equivalence of Boudol, Castellani, Hennessy and Kiehn. Over the latter, our equivalence has the advantage of not requiring the explicit introduction of an (infinite) space of locations; this makes it amenable to a mechanical treatment in the same vein as the classical bisimulation-based equivalences. The second semantics is proposed via a direct generalization of Castellani and Hennessy's distributed equivalence to languages with global scoping operators.

Journal ArticleDOI
TL;DR: For $m \ge 3$ processors, a computational approach is formulated based on a discretized model in which the failure law is the analogous geometric distribution; using a unimodality property of the optimal completion probability, the optimum can be computed in $O(m n \log n)$ time, where $n$ is the job running time.
Abstract: Suppose $m \ge 2$ identical processors, each subject to random failures, are available for running a single job of given duration $\tau$ . The failure law is operative only while a processor is active. To guard against the loss of accrued work due to a failure, checkpoints can be made, each requiring time $\delta$ ; a successful checkpoint saves the state of the computation, but failures can also occur during checkpoints. The problem is to determine how best to schedule checkpoints if the goal is to maximize the probability that the job finishes before all $m$ processors fail. We solve this problem first for $m=2$ and an exponential failure law. For given $\tau$ and $\delta$ we show how to determine an integer $k \ge 0$ and time intervals $I_1, \ldots, I_{k+1}$ such that an optimal procedure is to run the job on one processor, checkpointing at the end of each interval $I_j, j = 1, \ldots, k$ , until either the job is done or a failure occurs. In the latter case, the remaining processor resumes the job starting in the state saved by the last successful checkpoint; the job then runs until it completes or until the second processor also fails. We give an explicit formula for the maximum achievable probability of completing the job for any fixed $k \ge 0$ . An explicit result for $k_{opt}$ , the optimum value of $k$ , seems out of reach; however, we give upper and lower bounds on $k_{opt}$ that are remarkably tight; they show that only a few values of $k$ need to be tested in order to find $k_{opt}$ . With the failure rate normalized to 1, we also derive the asymptotic estimate $$ k_{opt} - \sqrt{2 \tau / \delta} = O(1)~~{\rm as}~~ \delta \to 0 ~, $$ and calculate conditional expected job completion times. For the more difficult problem with $m \ge 3$ processors, we formulate a computational approach based on a discretized model in which the failure law is the analogous geometric distribution. By proving a unimodality property of the optimal completion probability, we are able to describe a computation of this optimum that requires $O(m n \log n )$ time, where $n$ is the job running time. Several examples bring out behavioral details.
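
The asymptotic estimate already locates $k_{opt}$ well in concrete cases. For example, with the failure rate normalized to 1, a job of duration $\tau = 100$ and checkpoint cost $\delta = 0.5$ gives $$ \sqrt{2 \tau / \delta} = \sqrt{400} = 20 , $$ so only a few values of $k$ near 20 need to be tested to find $k_{opt}$.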

Journal ArticleDOI
TL;DR: Minimal equational representations (E, P) of congruential tree languages are studied via the cardinality vector (∣E∣, ∣P∣); for a language given by a deterministic bottom-up tree automaton, both a lexicographically minimal and an antilexicographically minimal representation are computed.
Abstract: A tree language is congruential if it is the union of finitely many classes of a finitely generated congruence on the term algebra. It is well known that congruential tree languages are the same as recognizable tree languages. An equational representation is an ordered pair (E, P), where E is either a ground term equation system or a ground term rewriting system, and P is a finite set of ground terms. We say that (E, P) represents the congruential tree language L which is the union of those ↔*E-classes containing an element of P, i.e., for which L = ⋃{[p]↔*E ∣ p∈P}. We define two sorts of minimality for equational representations. We introduce the cardinality vector (∣E∣, ∣P∣) of an equational representation (E, P). Let ≤l and ≤a denote the lexicographic and antilexicographic orders on the set of ordered pairs of nonnegative integers, respectively. Let L be a congruential tree language. An equational representation (E, P) of L with ≤l-minimal (≤a-minimal) cardinality vector is called ≤l-minimal (≤a-minimal). We compute, for an L given by a deterministic bottom-up tree automaton, both a ≤l-minimal and a ≤a-minimal equational representation of L.

Journal ArticleDOI
TL;DR: Uniquely parsable grammars (UPGs), phrase structure grammars with a restricted type of rewriting rules that can be parsed without backtracking, are introduced; together with three subclasses, they form a deterministic counterpart of the classical Chomsky hierarchy.
Abstract: We introduce a new class of grammars called uniquely parsable grammars (UPGs). A UPG is a kind of phrase structure grammar having a restricted type of rewriting rules, where parsing can be performed without backtracking. We show that, in spite of this restriction on the rules, UPGs are universal in their generating ability. We then define three subclasses of UPGs: M-UPGs (monotonic UPGs), RC-UPGs (UPGs with right-terminating and context-free-like rules), and REG-UPGs (regular UPGs). It is proved that the generating abilities of the classes of M-UPGs, RC-UPGs, and REG-UPGs are exactly characterized by the classes of deterministic linear-bounded automata, deterministic pushdown automata, and deterministic finite automata, respectively. In particular, the class of RC-UPGs gives a very simple grammatical characterization of the class of deterministic context-free languages. Thus, these four classes form a deterministic counterpart of the classical Chomsky hierarchy.

Journal ArticleDOI
TL;DR: Applications of the framework in the context of schema transformations and improved automated modeling support are discussed and an essential advantage is its “configurable semantics”.
Abstract: For successful information systems development, conceptual data modeling is essential. Nowadays a plethora of techniques for conceptual data modeling exist. Many of these techniques lack a formal foundation and a lot of theory, e.g. concerning updates or schema transformations, is highly data model specific. As such there is a need for a unifying formal framework providing a sufficiently high level of abstraction. In this paper, focus is on the applications of such a framework defined in category theory. Well-known conceptual data modeling concepts, such as relationship types, generalization, specialization, and collection types are defined from a categorical point of view in this framework and an essential advantage is its “configurable semantics”. Features such as null values, uncertainty, and temporal behavior can be added by selecting appropriate instance categories. The addition of these features usually requires a complete redesign of the formalization in traditional set-based approaches to semantics. Applications of the framework in the context of schema transformations and improved automated modeling support are discussed.

Journal ArticleDOI
TL;DR: It is shown that weak satisfaction of FDs is additive if and only if the set F of FDs is monodependent, and that monodependence can be checked in time polynomial in the size of F.
Abstract: Incomplete relations are relations which contain null values, whose meaning is “value is at present unknown”. A functional dependency (FD) is weakly satisfied in an incomplete relation if there exists a possible world of this relation in which the FD is satisfied in the standard way. Additivity is the property of equivalence of weak satisfaction of a set of FDs, say F, in an incomplete relation with the individual weak satisfaction of each member of F in the said relation. It is well known that satisfaction of FDs is not additive. The problem that arises is: under what conditions is weak satisfaction of FDs additive? We solve this problem by introducing a syntactic subclass of FDs, called monodependent FDs, which informally means that for each attribute, say A, there is a unique FD that functionally determines A, and in addition only trivial cycles involving A arise between any two FDs one of which functionally determines A. We show that weak satisfaction of FDs is additive if and only if the set F of FDs is monodependent and that monodependence can be checked in time polynomial in the size of F.
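
A brute-force check of weak satisfaction makes the definition concrete. In this sketch (ours; the paper's contribution is the polynomial syntactic test via monodependence), nulls range over the active domain plus one fresh symbol per null, which is enough candidate values since every possible world is isomorphic to one built from these.

    from itertools import product

    def satisfies(rel, X, Y):
        """Standard satisfaction of the FD X -> Y in a complete relation
        (a list of attribute->value dicts)."""
        seen = {}
        for t in rel:
            key = tuple(t[a] for a in X)
            val = tuple(t[a] for a in Y)
            if key in seen and seen[key] != val:
                return False
            seen[key] = val
        return True

    def weakly_satisfies(rel, X, Y):
        """Weak satisfaction: some possible world of rel (None = unknown)
        satisfies X -> Y. Exponential enumeration, for illustration only."""
        nulls = [(i, a) for i, t in enumerate(rel) for a in t if t[a] is None]
        adom = {t[a] for t in rel for a in t if t[a] is not None}
        candidates = list(adom) + [("fresh", j) for j in range(len(nulls))]
        for choice in product(candidates, repeat=len(nulls)):
            world = [dict(t) for t in rel]
            for (i, a), v in zip(nulls, choice):
                world[i][a] = v
            if satisfies(world, X, Y):
                return True
        return False

    r = [{"A": 1, "B": None}, {"A": 1, "B": 2}]
    print(weakly_satisfies(r, ["A"], ["B"]))  # True: set the null to 2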

Journal ArticleDOI
TL;DR: This algorithm improves the bound achieved in Next Fit Level (NFL) packing, by compressing the items packed on two successive levels of an NFL packing via on-line movement admissible under the Tetris constraint.
Abstract: Rectangles with dimensions independently chosen from a uniform distribution on [0, 1] are packed on-line into a unit width strip under a constraint like that of the Tetris game: rectangles arrive from the top and must be moved inside the strip to reach their place; once placed, they cannot be moved again. Cargo loading applications impose similar constraints. This paper assumes that rectangles must be moved without rotation. For n rectangles, the resulting packing height is shown to have an asymptotic expected value of at least (0.31382733 ... )n under any on-line packing algorithm. An on-line algorithm is presented that achieves an asymptotic expected height of (0.36976421 ... )n. This algorithm improves the bound achieved in Next Fit Level (NFL) packing, by compressing the items packed on two successive levels of an NFL packing via on-line movement admissible under the Tetris-like constraint.

Journal ArticleDOI
TL;DR: It is proved that the bag-containment $Q \leq_b Q^{\prime}$ can be tested on a finite set of canonical databases built from the body of $Q$, and a procedure is given that decides the bag-containment problem of conjunctive queries in a large number of cases.
Abstract: Under the bag-theoretic semantics relations are bags of tuples, that is, a tuple may have any number of duplicates. Under this semantics, a conjunctive query $Q$ is bag-contained in a conjunctive query $Q^{\prime}$, denoted $Q \leq_b Q^{\prime}$, if for all databases ${\cal D}$, $Q({\cal D})$, the result of applying $Q$ to ${\cal D}$, is a subbag of $Q^{\prime}({\cal D})$. It is not known whether testing $Q \leq_b Q^{\prime}$ is decidable. In this paper we prove that $Q \leq_b Q^{\prime}$ can be tested on a finite set of canonical databases built from the body of $Q$. Using that result we give a procedure that decides the bag-containment problem of conjunctive queries in a large number of cases.
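
Bag semantics is easy to see on a toy example. The sketch below (our illustration; it does not reproduce the paper's canonical-database construction) evaluates conjunctive queries as bags with collections.Counter and tests the subbag relation; the two example queries are equivalent under set semantics but not bag-contained in both directions.

    from collections import Counter
    from itertools import product

    def evaluate(head, body, db):
        """Bag-semantics evaluation of a conjunctive query. body is a list of
        atoms (relation_name, (term, ...)); string terms are variables, other
        terms are constants. Each consistent choice of one tuple per atom
        contributes one duplicate of the head tuple."""
        result = Counter()
        for rows in product(*(db[r] for r, _ in body)):
            env, ok = {}, True
            for (_, args), row in zip(body, rows):
                for term, value in zip(args, row):
                    if isinstance(term, str):
                        ok = ok and env.setdefault(term, value) == value
                    else:
                        ok = ok and term == value
            if ok:
                result[tuple(env[v] for v in head)] += 1
        return result

    def subbag(b1, b2):
        return all(b2[t] >= n for t, n in b1.items())

    # Q(x) :- R(x,y)  and  Q'(x) :- R(x,y), R(x,z): equivalent as sets,
    # but Q' squares multiplicities, so Q' is not bag-contained in Q.
    db = {"R": [(1, 2), (1, 3)]}
    q1 = evaluate(["x"], [("R", ("x", "y"))], db)                     # {(1,): 2}
    q2 = evaluate(["x"], [("R", ("x", "y")), ("R", ("x", "z"))], db)  # {(1,): 4}
    print(subbag(q1, q2), subbag(q2, q1))  # True False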

Journal ArticleDOI
TL;DR: A WFA-inference algorithm is developed which makes it possible to compute a close approximation of both the degree of self-similarity and the gray-tone fractal dimension, a generalization of the Minkowski dimension of compact sets, for any gray-tone image.
Abstract: We define two measures of “fractalness” of gray-tone images: the degree of self-similarity and the gray-tone fractal dimension, a generalization of the Minkowski dimension of compact sets. We show how to compute both these measures from the WFA-representation of a gray-tone image. Since we have developed a WFA-inference algorithm which computes a good approximation of any gray-tone image, we can compute a close approximation of both our measures of fractalness for any gray-tone image.

Journal ArticleDOI
TL;DR: It is proved that the system of word equations $x_1^i = y_1^i y_2^i \cdots y_n^i$, $i = 1, 2, \ldots, \lceil n/2 \rceil + 1$, has only cyclic solutions.
Abstract: It is proved that the system of word equations $x_1^i = y_1^i y_2^i \cdots y_n^i$, $i = 1, 2, \ldots, \lceil n/2 \rceil + 1$, has only cyclic solutions. Some sharpenings concerning the cases $n = 5, 7$ and $n \geq 9$ are derived, as well as results concerning the general system of equations $x_1^i x_2^i \cdots x_m^i = y_1^i y_2^i \cdots y_n^i$, $i = 1, 2, \ldots$. Applications to test sets of certain bounded languages are considered.
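
Here “cyclic” has its usual meaning for word equations (our gloss of the standard terminology): all unknowns are powers of one common word, $$ x_1 = z^{p}, \qquad y_j = z^{q_j} \ (j = 1, \ldots, n), $$ and such assignments satisfy every equation of the system as soon as $p = q_1 + \cdots + q_n$. The theorem states that $\lceil n/2 \rceil + 1$ consecutive exponents already force every solution into this form.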

Journal ArticleDOI
TL;DR: It is shown that dynamic LL(\(k\)) parsers are as powerful as LR(\(k\)) parsers, i.e. that they are capable of analyzing every deterministic context-free language while using only one symbol of lookahead.
Abstract: A new class of context-free grammars, called dynamic context-free grammars, is introduced. These grammars have the ability to change the set of production rules dynamically during the derivation of some terminal string. The notion of LL(\(k\)) parsing is adapted to this grammar model. We show that dynamic LL(\(k\)) parsers are as powerful as LR(\(k\)) parsers, i.e. that they are capable of analyzing every deterministic context-free language while using only one symbol of lookahead.

Journal ArticleDOI
TL;DR: A new normal form for first order logic is created which is amenable to storage in flat files and to efficient search and retrieval, eliminating the requirement to transport all constraints to main memory for testing.
Abstract: First order static database constraints are expressed as counterexamples, i.e., examples that violate the integrity of the database. Examples are data and as such they can be specified and stored as data, and structured into database files for efficient search and retrieval. To express all first order constraints as counterexamples, a new normal form for first order logic is created which, after some syntactic transformation, is amenable to storage in flat files, and efficient search and retrieval. The critical contribution is the ability to manage a large number of constraints on secondary storage devices, and eliminate the requirement to transport all constraints to main memory for testing.
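
A toy rendition of the storage idea (ours; the paper develops a full first-order normal form): each constraint is stored as a counterexample pattern, and the integrity check becomes retrieval, i.e. searching for data that matches a stored counterexample.

    # A counterexample pattern maps attributes to predicates; a row violates
    # the constraint exactly when it matches the pattern.
    def matches(row, pattern):
        return all(test(row[attr]) for attr, test in pattern.items())

    # Constraint "every salary is below 100000", stored as its counterexample:
    # a row whose salary is at least 100000.
    counterexamples = [
        {"salary": lambda v: v >= 100000},
    ]

    def violates(row):
        return any(matches(row, p) for p in counterexamples)

    print(violates({"name": "a", "salary": 120000}))  # True: integrity violated
    print(violates({"name": "b", "salary": 50000}))   # False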

Journal ArticleDOI
TL;DR: It is proved that the family of rational relations equals the family of vector languages of Generalized ITNCs, i.e. ITNCs in which the restriction of completeness is dropped.
Abstract: An Individual Token Net Controller (or ITNC) is a particular type of state-machine decomposable Petri net that can be used as a synchronization mechanism in concurrent systems consisting of a fixed number of sequential subsystems. In this paper the family of ITNC vector languages is compared to the well-known family of rational relations. On the one hand it is proved that the family of rational relations equals the family of vector languages of Generalized ITNCs, i.e. ITNCs in which the restriction of completeness is dropped. On the other hand a vector language property induced by completeness is identified that precisely characterizes the difference between ITNC vector languages and Generalized ITNC vector languages. In addition, the results are shown to carry over to the prefix-closed versions of the models.

Journal ArticleDOI
TL;DR: This work provides a method which allows insertion or deletion of a tuple over any relation scheme in a deterministic way and uses both inserted and deleted tuples in the authors' derivation algorithms.
Abstract: The traditional approach to database querying and updating treats insertions and deletions of tuples in an asymmetric manner: if a tuple \(t\) is inserted then, intuitively, we think of \(t\) as being true and we use this knowledge in query and update processing; in contrast, if a tuple \(t\) is deleted then we think of \(t\) as being false but we do not use this knowledge at all! In this paper, we present a new approach to database querying and updating in which insertions and deletions of tuples are treated in a symmetric manner. Contrary to the traditional approach, we use both inserted and deleted tuples in our derivation algorithms. Our approach works as follows: if the deletion of a tuple \(t\) is requested, then we mark \(t\) as being deleted without removing it from the database; if the insertion of a tuple \(t\) is requested, then we simply place \(t\) in the database and remove all its marked subtuples. Derivation of tuples is done using two derivation rules under one constraint: a tuple \(t\) is derived only if \(t\) has no marked subtuples in the database. The derivation rules reflect relational projection and relational join. The main contribution of our work is to provide a method which allows insertion or deletion of a tuple over any relation scheme in a deterministic way.
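
The marked-tuple discipline can be phrased directly as code. The sketch below (our reading of the operations described above; the projection/join derivation rules are not reproduced) models tuples as attribute dicts, with t1 a subtuple of t2 when its attribute/value pairs are a subset; recording a deletion of an absent tuple is our assumption, not the paper's.

    def is_subtuple(t1, t2):
        return t1.items() <= t2.items()

    class SymmetricDB:
        def __init__(self):
            self.rows = []  # (tuple, marked) pairs; marked = "deleted"

        def delete(self, t):
            # Mark t as deleted without removing it from the database.
            self.rows = [(u, m or u == t) for u, m in self.rows]
            if not any(u == t for u, _ in self.rows):
                self.rows.append((t, True))  # record the deletion (our assumption)

        def insert(self, t):
            # Place t in the database and remove all its marked subtuples.
            self.rows = [(u, m) for u, m in self.rows
                         if not (m and is_subtuple(u, t))]
            self.rows.append((t, False))

        def derivable(self, t):
            # Constraint from the paper: derive t only if it has no marked subtuples.
            return not any(m and is_subtuple(u, t) for u, m in self.rows)

    db = SymmetricDB()
    db.insert({"emp": "a", "dept": "d"})
    db.delete({"emp": "a"})
    print(db.derivable({"emp": "a", "dept": "d"}))  # False: a marked subtuple exists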