
Showing papers in "Fundamenta Informaticae in 2006"


Journal Article
TL;DR: In this article, the authors introduce a class of neural-like P systems which they call spiking neural P systems (in short, SN P systems), in which the result of a computation is the time between the moments when a specified neuron spikes.
Abstract: This paper proposes a way to incorporate the idea of spiking neurons into the area of membrane computing, and to this aim we introduce a class of neural-like P systems which we call spiking neural P systems (in short, SN P systems). In these devices, the time (when the neurons fire and/or spike) plays an essential role. For instance, the result of a computation is the time between the moments when a specified neuron spikes. Seen as number computing devices, SN P systems are shown to be computationally complete (both in the generating and accepting modes, in the latter case also when restricting to deterministic systems). If the number of spikes present in the system is bounded, then the power of SN P systems falls drastically, and we get a characterization of semilinear sets. A series of research topics and open problems are formulated.

589 citations
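
To make the spike-timing idea concrete, here is a minimal Python sketch of an SN P system under heavy simplifications of our own: rules fire on an exact spike count rather than a regular expression over spike counts, firing delays are ignored, and the computed number is read off as the distance between the first two firings of the output neuron. This only illustrates the mechanism; the paper's model is more general.

```python
class Neuron:
    def __init__(self, spikes, rules, targets):
        self.spikes = spikes    # current number of spikes in the neuron
        self.rules = rules      # list of (required, consumed, emitted) triples
        self.targets = targets  # indices of neurons receiving this neuron's spikes

def step(neurons):
    """One synchronized step: every neuron with an applicable rule fires."""
    emissions = []
    for n in neurons:
        for required, consumed, emitted in n.rules:
            if n.spikes == required:    # simplified rule guard (exact count)
                n.spikes -= consumed
                emissions.append((n.targets, emitted))
                break
    for targets, emitted in emissions:
        for t in targets:
            neurons[t].spikes += emitted

# Two neurons ping-ponging one spike; neuron 1 is the output neuron.
neurons = [Neuron(1, [(1, 1, 1)], [1]),
           Neuron(0, [(1, 1, 1)], [0])]
spike_times = []
for t in range(10):
    if neurons[1].spikes == 1:      # output neuron is about to fire
        spike_times.append(t)
    step(neurons)
print(spike_times)                  # result = spike_times[1] - spike_times[0]
```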


Journal Article
TL;DR: This paper can be regarded as a full realization of the proximity approach to the region-based theory of space and a lattice-theoretic generalization of methods and constructions from the theory of proximity spaces.
Abstract: This paper is the second part of the paper [2]. Both of them are in the field of region-based (or Whiteheadian) theory of space, which is an important subfield of Qualitative Spatial Reasoning (QSR). The paper can also be considered as an application of abstract algebra and topology to some problems arising in and motivated by Theoretical Computer Science and QSR. In [2], different axiomatizations for region-based theory of space were given. The most general one was introduced under the name "Contact Algebra". In this paper some categories defined in the language of contact algebras are introduced. It is shown that they are equivalent to the category of all semiregular T$_0$-spaces and their continuous maps and to its full subcategories having as objects all regular (respectively, completely regular; compact; locally compact) Hausdorff spaces. An algorithm for a direct construction of all, up to homeomorphism, finite semiregular T$_0$-spaces of rank n is found. An example of an RCC model which has no regular Hausdorff representation space is presented. The main method of investigation in both parts is a lattice-theoretic generalization of methods and constructions from the theory of proximity spaces. Proximity models for various kinds of contact algebras are given here. In this way, the paper can be regarded as a full realization of the proximity approach to the region-based theory of space.

115 citations


Journal Article
TL;DR: It is proved that non-deterministic systems of this type, using polynomial production functions, characterize the Turing computable sets of natural numbers, while deterministic systems, with polynomial production functions having non-negative coefficients, compute strictly more than semilinear sets of natural numbers.
Abstract: With inspiration from the economic reality, where numbers are basic entities to work with, we propose a genuinely new kind of P systems, where numerical variables evolve, starting from initial values, by means of production functions and repartition protocols. We prove that non-deterministic systems of this type, using polynomial production functions, characterize the Turing computable sets of natural numbers, while deterministic systems, with polynomial production functions having non-negative coefficients, compute strictly more than semilinear sets of natural numbers. A series of research topics to be addressed in this framework are mentioned.

112 citations
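
As a concrete illustration of "production functions and repartition protocols", here is a single evolution step in Python. The setup is our own simplification (one membrane, one program, a fractional split of the produced value), not the paper's general definition.

```python
def evolve(variables, production, repartition):
    """One step: evaluate the production function on the current values,
    reset the variables, then distribute the produced value according to
    the repartition coefficients."""
    produced = production(variables)
    total = sum(repartition.values())
    new_vars = {name: 0.0 for name in variables}
    for name, coeff in repartition.items():
        new_vars[name] += produced * coeff / total
    return new_vars

# Example program: F(x1, x2) = 2*x1 + x2, repartition x1|1 + x2|2.
vars0 = {"x1": 3.0, "x2": 4.0}
production = lambda v: 2 * v["x1"] + v["x2"]
print(evolve(vars0, production, {"x1": 1, "x2": 2}))
# {'x1': 3.333..., 'x2': 6.666...}: the produced value 10 split in ratio 1:2
```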


Journal Article
TL;DR: A new matrix view of the theory of rough sets is proposed, which starts with a binary relation and redefines a pair of lower and upper approximation operators using the matrix representation.
Abstract: The theory of rough sets deals with the approximation of an arbitrary subset of a universe by two definable or observable subsets called, respectively, the lower and the upper approximation. There are at least two methods for the development of this theory, the constructive and the axiomatic approaches. The rough set axiomatic system is the foundation of rough set theory. This paper proposes a new matrix view of the theory of rough sets: we start with a binary relation and redefine a pair of lower and upper approximation operators using the matrix representation. Different classes of rough set algebras are obtained from different types of binary relations. Various classes of rough set algebras are characterized by different sets of axioms. Axioms of upper approximation operations guarantee the existence of certain types of binary relations (or matrices) producing the same operators. The upper approximations of the Pawlak rough sets, rough fuzzy sets and rough sets of vectors over an arbitrary fuzzy lattice are characterized by the same independent axiomatic system.

98 citations
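
A small self-contained sketch of the matrix view (our own illustration of the idea, not the paper's formalism): with the relation as a 0/1 matrix R and a subset X as a 0/1 vector, the upper approximation collects the elements related to something in X, and the lower approximation the elements related only to things in X.

```python
import numpy as np

R = np.array([[1, 1, 0],   # R[i][j] == 1 iff element i is related to element j
              [0, 1, 0],
              [0, 1, 1]], dtype=bool)
x = np.array([0, 1, 0], dtype=bool)   # characteristic vector of the subset X

# Upper approximation: i belongs iff some j with R[i][j] lies in X.
upper = (R & x).any(axis=1)
# Lower approximation: i belongs iff every j with R[i][j] lies in X.
lower = ~(R & ~x).any(axis=1)

print(upper.astype(int))   # [1 1 1]
print(lower.astype(int))   # [0 1 0]
```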


Journal Article
TL;DR: This paper demonstrates how some simple graph counting operations on the ideal lattice representation of a partially ordered set (poset) P allow for the counting of the number of linear extensions of P, for the random generation of a linear extension of P, for the calculation of the rank probabilities for every x ∈ P, and for the calculation of the mutual rank probabilities Prob(x > y) for every (x,y) ∈ P$^2$.
Abstract: In this paper, we demonstrate how some simple graph counting operations on the ideal lattice representation of a partially ordered set (poset) P allow for the counting of the number of linear extensions of P, for the random generation of a linear extension of P, for the calculation of the rank probabilities for every x∈P, and, finally, for the calculation of the mutual rank probabilities Prob(x>y) for every (x,y)∈P$^2$. We show that all linear extensions can be counted and a first random linear extension can be generated in O(|I(P)|·w(P)) time, while every subsequent random linear extension can be obtained in O(|P|·w(P)) time, where |I(P)| denotes the number of ideals of the poset P and w(P) the width of the poset P. Furthermore, we show that all rank probability distributions can be computed in O(|I(P)|·w(P)) time, while the computation of all mutual rank probabilities requires O(|I(P)|·|P|·w(P)) time, to our knowledge the fastest exact algorithms currently known. It is well known that each of the four problems described above resides in the class of #P-complete counting problems, the counterpart of the NP-complete class for decision problems. Since recent research has indicated that the ideal lattice representation of a poset can be obtained in constant amortized time, the stated time complexity expressions also cover the time needed to construct the ideal lattice representation itself, clearly favouring the use of our approach over the standard approach consisting of the exhaustive enumeration of all linear extensions.

94 citations
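
A compact illustration of the counting idea: the number of linear extensions of a poset equals the number of maximal chains in its lattice of ideals (down-sets), which a memoized dynamic program can count by peeling off maximal elements. This is our own sketch of the principle, not the authors' O(|I(P)|·w(P)) algorithm.

```python
from functools import lru_cache

# Poset on {a, b, c, d} given by strict lower covers: a < c, b < c, b < d.
below = {"a": set(), "b": set(), "c": {"a", "b"}, "d": {"b"}}

@lru_cache(maxsize=None)
def extensions(remaining):
    """Number of linear extensions of the poset restricted to `remaining`."""
    if not remaining:
        return 1
    rem = set(remaining)
    total = 0
    for x in rem:
        # x can be placed last iff nothing in `remaining` lies above it.
        if all(x not in below[y] for y in rem):
            total += extensions(frozenset(rem - {x}))
    return total

print(extensions(frozenset("abcd")))  # 5 linear extensions
```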


Journal Article
TL;DR: In this article, the authors explore the design space of dynamic rules and their application to transformation problems, and formally define the technique by extending the operational semantics underlying the program transformation language Stratego.
Abstract: The applicability of term rewriting to program transformation is limited by the lack of control over rule application and by the context-free nature of rewrite rules. The first problem is addressed by languages supporting user-definable rewriting strategies. The second problem is addressed by the extension of rewriting strategies with scoped dynamic rewrite rules. Dynamic rules are defined at run-time and can access variables available from their definition context. Rules defined within a rule scope are automatically retracted at the end of that scope. In this paper, we explore the design space of dynamic rules, and their application to transformation problems. The technique is formally defined by extending the operational semantics underlying the program transformation language Stratego, and illustrated by means of several program transformations in Stratego, including constant propagation, bound variable renaming, dead code elimination, function inlining, and function specialization.

94 citations
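
To convey the flavour of scoped dynamic rules, here is a toy constant-propagation pass in Python: assignments define "rules" at run time, uses apply them, and rules defined inside a block are retracted when the block's scope ends. It only mimics the mechanism; Stratego's actual language and semantics are far richer, and the mini-AST below is our own.

```python
def propagate(stmts, env):
    out = []
    for s in stmts:
        if s[0] == "assign":                # ("assign", var, const)
            _, var, val = s
            env[var] = val                  # define a dynamic rule var -> val
            out.append(s)
        elif s[0] == "use":                 # ("use", var)
            _, var = s
            out.append(("use", env.get(var, var)))  # apply the rule if defined
        elif s[0] == "block":               # ("block", [stmts...]): a rule scope
            inner_env = dict(env)           # rules defined inside ...
            out.append(("block", propagate(s[1], inner_env)))
            # ... are retracted here: the outer `env` is left untouched.
    return out

prog = [("assign", "x", 1),
        ("block", [("assign", "y", 2), ("use", "x"), ("use", "y")]),
        ("use", "y")]                       # y's rule was scoped to the block
print(propagate(prog, {}))
# [('assign','x',1), ('block',[('assign','y',2),('use',1),('use',2)]), ('use','y')]
```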


Journal Article
TL;DR: The paper shows that a membrane calculus recently proposed can be encoded into CLS, and uses this calculus to model interactions among bacteria and bacteriophage viruses, and to reason on their properties.
Abstract: The paper presents the Calculus of Looping Sequences (CLS) suitable to describe microbiological systems and their evolution. The terms of the calculus are constructed by basic constituent elements and operators of sequencing, looping, containment and parallel composition. The looping operator allows tying up the ends of a sequence, thus creating a circular sequence which can represent a membrane. We show that a membrane calculus recently proposed can be encoded into CLS. We use our calculus to model interactions among bacteria and bacteriophage viruses, and to reason on their properties.

80 citations


Journal Article
TL;DR: An improved notion of graph constraints and application conditions is introduced, and it is shown under what conditions the basic results can be extended from graph transformation to high-level replacement systems.
Abstract: Graph constraints and application conditions are most important for graph grammars and transformation systems in a large variety of application areas. Although different approaches have already been presented in the literature, up to now there is no adequate theory which can be applied to different kinds of graphs and high-level structures. In this paper, we introduce a general notion of graph constraints and application conditions and show under what conditions the basic results can be extended from graph transformation to high-level replacement systems. In fact, we use the new framework of adhesive HLR categories recently introduced as a combination of HLR systems and adhesive categories. Our main results are the transformation of graph constraints into right application conditions and the transformation from right to left application conditions in this new framework. The transformations are illustrated by a railroad control system with rail net constraints and application conditions.

74 citations


Journal Article
TL;DR: A rigorous approach to typed attributed graph transformation is obtained, providing as fundamental results the Local Church-Rosser, Parallelism, Concurrency, Embedding and Extension Theorems and a Local Confluence Theorem, known as the Critical Pair Lemma in the literature.
Abstract: The concept of typed attributed graphs and graph transformation is most significant for modeling and meta modeling in software engineering and visual languages, but up to now there is no adequate theory for this important branch of graph transformation. In this article we give a new formalization of typed attributed graphs, which allows node and edge attribution. The first main result shows that the corresponding category is isomorphic to the category of algebras over a specific kind of attributed graph structure signature. This allows us to prove the second main result showing that the category of typed attributed graphs is an instance of "adhesive HLR categories". This new concept combines adhesive categories introduced by Lack and Sobociński with the well-known approach of high-level replacement (HLR) systems using a new simplified version of HLR conditions. As a consequence we obtain a rigorous approach to typed attributed graph transformation providing as fundamental results the Local Church-Rosser, Parallelism, Concurrency, Embedding and Extension Theorems and a Local Confluence Theorem, known as the Critical Pair Lemma in the literature.

72 citations


Journal Article
TL;DR: An approach to achieving a calculus of approximation spaces that provides a basis for approximate reasoning in distributed systems of cooperating agents is considered in this paper.
Abstract: This paper considers the problem of how to establish calculi of approximation spaces. Approximation spaces considered in the context of rough sets were introduced by Zdzislaw Pawlak more than two decades ago. In general, a calculus of approximation spaces is a system for combining, describing, measuring, reasoning about, and performing operations on approximation spaces. An approach to achieving a calculus of approximation spaces that provides a basis for approximate reasoning in distributed systems of cooperating agents is considered in this paper. Examples of basic concepts are given throughout this paper to illustrate how approximation spaces can be beneficially used in many settings, in particular for complex concept approximation. The contribution of this paper is the presentation of a framework for calculi of approximation spaces useful for approximate reasoning by cooperating agents.

69 citations


Journal Article
TL;DR: Most of the HLR properties, which had been introduced to generalize some basic results from the category of graphs to high-level structures, are valid already in adhesive HLR categories, which leads to a smooth categorical theory of HLR systems which can be applied to a large variety of graphs and other visual models.
Abstract: Adhesive high-level replacement (HLR) systems are introduced as a new categorical framework for graph transformation in the double pushout (DPO) approach, which combines the well-known concept of HLR systems with the new concept of adhesive categories introduced by Lack and Sobociński. In this paper we show that most of the HLR properties, which had been introduced to generalize some basic results from the category of graphs to high-level structures, are valid already in adhesive HLR categories. This leads to a smooth categorical theory of HLR systems which can be applied to a large variety of graphs and other visual models. As a main new result in a categorical framework we show the Critical Pair Lemma for the local confluence of transformations. Moreover we present a new version of embeddings and extensions for transformations in our framework of adhesive HLR systems.

Journal Article
TL;DR: It is proved that ideality of simplification is strictly related to query containment; in fact, an ideal simplification procedure can only exist in database languages for which query containment is decidable.
Abstract: Without proper simplification techniques, database integrity checking can be prohibitively time consuming. Several methods have been developed for producing simplified incremental checks for each update but none until now of sufficient quality and generality for providing a true practical impact, and the present paper is an attempt to fill this gap. On the theoretical side, a general characterization is introduced of the problem of simplification of integrity constraints and a natural definition is given of what it means for a simplification procedure to be ideal. We prove that ideality of simplification is strictly related to query containment; in fact, an ideal simplification procedure can only exist in database languages for which query containment is decidable. However, simplifications that do not qualify as ideal may also be relevant for practical purposes. We present a concrete approach based on transformation operators that apply to integrity constraints written in a rich DATALOG-like language with negation. The resulting procedure produces, at design-time, simplified constraints for parametric transaction patterns, which can then be instantiated and checked for consistency at run-time. These tests take place before the execution of the update, so that only consistency-preserving updates are eventually given to the database. The extension to more expressive languages and the application of the framework to other contexts, such as data integration and concurrent database systems, are also discussed. Our experiments show that the simplifications obtained with our method may give rise to much better performance than with previous methods and that further improvements are achieved by checking consistency before executing the update.

Journal Article
TL;DR: It is shown that the problem of finding the fastest possible black hole search scheme by two agents is NP-hard, and a 9.3-approximation is given for it.
Abstract: A black hole is a highly harmful stationary process residing in a node of a network and destroying all mobile agents visiting the node, without leaving any trace. We consider the task of locating a black hole in a (partially) synchronous network, assuming an upper bound on the time of any edge traversal by an agent. The minimum number of agents capable of identifying a black hole is two. For a given graph and given starting node we are interested in the fastest possible black hole search by two agents, under the general scenario in which some subset of nodes is safe and the black hole can be located in one of the remaining nodes. We show that the problem of finding the fastest possible black hole search scheme by two agents is NP-hard, and we give a 9.3-approximation for it.

Journal Article
TL;DR: Computational universality is obtained for several combinations of such possibilities for objects both in compartments and on membranes, with the objects from compartments evolving under the control of the proteins.
Abstract: This work is a continuation of the investigations aiming to bridge membrane computing (where in a compartmental cell-like structure the chemicals to evolve are placed in the compartments defined by the membranes) and brane calculi (where one considers again a compartmental cell-like structure with the chemicals/proteins placed on the membranes themselves). In the current paper we use objects both in compartments and on membranes (the latter are called proteins), with the objects from compartments evolving under the control of the proteins. Several possibilities are considered (objects only moved across membranes or also changed during this operation, with the proteins only assisting the move/change or also changing themselves). Somewhat expectedly, computational universality is obtained for several combinations of such possibilities.

Journal Article
TL;DR: It is shown how session types allow not only high level specifications of complex interactions, but also the definition of powerful interoperability tests at the protocol level, namely compatibility and substitutability of components.
Abstract: This paper proposes the use of session types to extend with behavioural information the simple descriptions usually provided by software component interfaces. We show how session types allow not only high level specifications of complex interactions, but also the definition of powerful interoperability tests at the protocol level, namely compatibility and substitutability of components. We present a decidable proof system to verify these notions, which makes our approach pragmatic in nature.
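
Under the simplest reading of the compatibility test (an assumption on our part: compatibility as exact duality, ignoring subtyping and branching), the check is a structural recursion over the two session types:

```python
def dual(s):
    """Dual of a session type given as ('send'|'recv', payload, cont) or 'end'."""
    if s == "end":
        return "end"
    op, payload, cont = s
    return ("recv" if op == "send" else "send", payload, dual(cont))

def compatible(s, t):
    return dual(s) == t

# A client sends an int, then receives a bool, then stops.
client = ("send", "int", ("recv", "bool", "end"))
server = ("recv", "int", ("send", "bool", "end"))
print(compatible(client, server))  # True
print(compatible(client, client))  # False: two senders would deadlock
```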

Journal ArticleDOI
TL;DR: The Weighted Suffix Tree is introduced, an efficient data structure for computing string regularities in weighted sequences of molecular data and some applications to problems taken from the Molecular Biology area such as pattern matching, repeats discovery, discovery of the longest common subsequence of two weighted sequences and computation of covers.
Abstract: In this paper we introduce the Weighted Suffix Tree, an efficient data structure for computing string regularities in weighted sequences of molecular data. Molecular Weighted Sequences can model important biological processes such as the DNA Assembly Process or the DNA-Protein Binding Process. Thus pattern matching or identification of repeated patterns in biological weighted sequences is a very important procedure in the translation of gene expression and regulation. We present time and space efficient algorithms for constructing the weighted suffix tree and some applications of the proposed data structure to problems taken from the Molecular Biology area such as pattern matching, repeats discovery, discovery of the longest common subsequence of two weighted sequences and computation of covers.
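
An illustration of what a weighted suffix tree indexes: in a weighted sequence every position holds a probability distribution over characters, and the tree stores the factors whose occurrence probability is at least 1/k. The naive enumeration below shows the underlying notion only; the paper's contribution is an efficient tree construction, not brute force, and the cutoff convention here is our own simplification.

```python
def heavy_factors(seq, k):
    """All (factor, probability) with probability >= 1/k, naively."""
    found = {}
    for i in range(len(seq)):
        frontier = [("", 1.0)]                 # factors starting at position i
        for dist in seq[i:]:
            frontier = [(w + c, p * q)
                        for w, p in frontier
                        for c, q in dist.items()
                        if p * q >= 1.0 / k]   # prune improbable extensions
            for w, p in frontier:
                found[w] = max(found.get(w, 0.0), p)
            if not frontier:
                break
    return found

# A length-3 weighted DNA sequence; position 1 is uncertain.
seq = [{"A": 1.0}, {"C": 0.6, "G": 0.4}, {"T": 1.0}]
print(heavy_factors(seq, k=2))   # e.g. "AC" has probability 0.6 >= 1/2
```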

Journal Article
TL;DR: This paper deals with the notion of M-unambiguity in connection with the Parikh matrix mapping introduced by Mateescu and others in [7], and several sufficient criteria for M-unambiguity are provided, nontrivially generalizing the criteria based on the γ-property introduced by Salomaa in [15].
Abstract: We deal with the notion of M-unambiguity [5] in connection with the Parikh matrix mapping introduced by Mateescu and others in [7]. M-unambiguity is studied both in terms of words and matrices and several sufficient criteria for M-unambiguity are provided in both cases, nontrivially generalizing the criteria based on the γ-property introduced by Salomaa in [15]. Also, the notion of M-unambiguity with respect to a word is defined in connection with the extended Parikh matrix morphism [16] and some of the M-unambiguity criteria are lifted from the classical setting to the extended one. This paper is a revised and extended version of [17].

Journal Article
TL;DR: This work models the protocol in terms of a network of communicating automata, verifies that the protocol meets the anonymity requirements specified, and evaluates two different model checking techniques for verifying the protocols.
Abstract: We analyse different versions of the Dining Cryptographers protocol by means of automatic verification via model checking. Specifically we model the protocol in terms of a network of communicating automata and verify that the protocol meets the anonymity requirements specified. Two different model checking techniques (ordered binary decision diagrams and SAT-based bounded model checking) are evaluated and compared to verify the protocols.
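
For readers unfamiliar with the protocol under verification, here is a plain Python simulation of one round of the classical Dining Cryptographers protocol (the protocol itself, not the authors' automata model): each adjacent pair shares a fair coin, everyone announces the XOR of the two coins they see, flipped if they paid, and the XOR of all announcements reveals whether a cryptographer paid without revealing who.

```python
import random

def dining_cryptographers(n, payer=None):
    coins = [random.randint(0, 1) for _ in range(n)]   # coin i: shared by i and i+1
    announcements = []
    for i in range(n):
        a = coins[i] ^ coins[(i - 1) % n]              # the two coins i can see
        if i == payer:
            a ^= 1                                     # the payer flips their bit
        announcements.append(a)
    return announcements

ann = dining_cryptographers(3, payer=1)
somebody_paid = 0
for a in ann:
    somebody_paid ^= a          # all coins cancel; only the payer's flip remains
print(ann, "-> somebody paid:", bool(somebody_paid))   # True, payer stays hidden
```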

Journal ArticleDOI
TL;DR: A rough set approach to reinforcement learning by swarms of cooperating agents is introduced and several viable alternatives to conventional reinforcement learning methods defined in the context of approximation spaces are presented.
Abstract: This paper introduces a rough set approach to reinforcement learning by swarms of cooperating agents. The problem considered in this paper is how to guide reinforcement learning based on knowledge of acceptable behavior patterns. This is made possible by considering behavior patterns of swarms in the context of approximation spaces. Rough set theory introduced by Zdzislaw Pawlak in the early 1980s provides a ground for deriving pattern-based rewards within approximation spaces. Both conventional and approximation space-based forms of reinforcement comparison and the actor-critic method as well as two forms of the off-policy Monte Carlo learning control method are investigated in this article. The study of swarm behavior by collections of biologically-inspired bots is carried out in the context of an artificial ecosystem testbed. This ecosystem has an ethological basis that makes it possible to observe and explain the behavior of biological organisms that carries over into the study of reinforcement learning by interacting robotic devices. The results of ecosystem experiments with six forms of reinforcement learning are given. The contribution of this article is the presentation of several viable alternatives to conventional reinforcement learning methods defined in the context of approximation spaces.

Journal Article
TL;DR: This paper proposes an efficient algorithm for logic synthesis based on the Incremental Boolean Satisfiability (SAT) approach and shows that this technique leads not only to huge memory savings when compared with the methods based on reachability graphs, but also to significant speedups in many cases, without affecting the quality of the solution.
Abstract: The behaviour of asynchronous circuits is often described by Signal Transition Graphs (STGs), which are Petri nets whose transitions are interpreted as rising and falling edges of signals. One of the crucial problems in the synthesis of such circuits is deriving equations for logic gates implementing each output signal of the circuit. This is usually done using reachability graphs. In this paper, we avoid constructing the reachability graph of an STG, which can lead to state space explosion, and instead use only the information about causality and structural conflicts between the events involved in a finite and complete prefix of its unfolding. We propose an efficient algorithm for logic synthesis based on the Incremental Boolean Satisfiability (SAT) approach. Experimental results show that this technique leads not only to huge memory savings when compared with the methods based on reachability graphs, but also to significant speedups in many cases, without affecting the quality of the solution.

Journal Article
TL;DR: A general definition of universality is proposed that applies to arbitrary discrete time symbolic dynamical systems, and it is conjectured that universal systems have an infinite number of subsystems.
Abstract: Many different definitions of computational universality for various types of dynamical systems have flourished since Turing's work. We propose a general definition of universality that applies to arbitrary discrete time symbolic dynamical systems. Universality of a system is defined as undecidability of a model-checking problem. For Turing machines, counter machines and tag systems, our definition coincides with the classical one. It yields, however, a new definition for cellular automata and subshifts. Our definition is robust with respect to initial condition, which is a desirable feature for physical realizability. We derive necessary conditions for undecidability and universality. For instance, a universal system must have a sensitive point and a proper subsystem. We conjecture that universal systems have an infinite number of subsystems. We also discuss the thesis according to which computation should occur at the 'edge of chaos' and we exhibit a universal chaotic system.

Journal Article
TL;DR: The soundness notion for workflow nets is extended to the workflow nets with resource constraints and some properties of sound resource-constrained workflow nets are proved; extra conditions concern the durability of resources.
Abstract: We study concurrent processes modelled as workflow Petri nets extended with resource constraints. Resources are durable units that can be neither created nor destroyed: they are claimed during the handling procedure and then released again. Typical kinds of resources are manpower, machinery, computer memory. We define structural criteria based on traps and siphons for the correctness of workflow nets with resource constraints. We also extend the soundness notion for workflow nets to the workflow nets with resource constraints; extra conditions concern the durability of resources. We prove some properties of sound resource-constrained workflow nets.
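
The structural criteria rest on traps and siphons; the sketch below checks these two notions on a toy claim/release net of our own devising. A set of places S is a siphon if every transition putting tokens into S also consumes from S, and a trap if every transition consuming from S also puts tokens back into S.

```python
# Each transition maps to (input places, output places).
transitions = {
    "claim":   ({"idle_resource", "task_ready"}, {"busy_resource"}),
    "release": ({"busy_resource"}, {"idle_resource", "task_done"}),
}

def is_siphon(places, transitions):
    return all(ins & places
               for ins, outs in transitions.values() if outs & places)

def is_trap(places, transitions):
    return all(outs & places
               for ins, outs in transitions.values() if ins & places)

# The resource places form both a siphon and a trap: the resource is
# durable -- it is claimed and released but never created or destroyed.
resource = {"idle_resource", "busy_resource"}
print(is_siphon(resource, transitions), is_trap(resource, transitions))  # True True
```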

Journal Article
TL;DR: A low-complexity but non-trivial distance between strings to be used in biology is exhibited; the experimental results, even if preliminary, are quite encouraging.
Abstract: We exhibit a low-complexity but non-trivial distance between strings to be used in biology. The experimental results we provide were obtained on a standard laptop and, even if preliminary, are quite encouraging.
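
The abstract leaves the distance itself unspecified, so as a stand-in we sketch a well-known measure in the same low-complexity, compression-based spirit: the normalized compression distance, approximated here with zlib. This illustrates the genre only and is not claimed to be the authors' distance.

```python
import zlib

def C(s: bytes) -> int:
    """Compressed size as a crude computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    return (C(x + y) - min(C(x), C(y))) / max(C(x), C(y))

a = b"ACGTACGTACGTACGT" * 8
b = b"ACGTACGTACGAACGT" * 8   # one mutation per repeat
c = b"TTGGCCAATTGGCCAA" * 8
# The similar pair should score lower than the dissimilar one.
print(round(ncd(a, b), 3), "<", round(ncd(a, c), 3))
```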

Journal Article
TL;DR: In this paper, the authors explore the notion of algebraicity in formal concept analysis from a category-theoretical perspective and provide a relatively comprehensive account of the representation theory of algebraic lattices in the framework of Stone duality.
Abstract: Formal concept analysis has grown from a new branch of the mathematical field of lattice theory to a widely recognized tool in Computer Science and elsewhere. In order to fully benefit from this theory, we believe that it can be enriched with notions such as approximation by computation or representability. The latter are commonly studied in denotational semantics and domain theory and captured most prominently by the notion of algebraicity, e.g. of lattices. In this paper, we explore the notion of algebraicity in formal concept analysis from a category-theoretical perspective. To this end, we build on the notion of approximable concept with a suitable category and show that the latter is equivalent to the category of algebraic lattices. At the same time, the paper provides a relatively comprehensive account of the representation theory of algebraic lattices in the framework of Stone duality, relating well-known structures such as Scott information systems with further formalisms from logic, topology, domains and lattice theory.
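
For readers new to formal concept analysis, the finite objects the paper generalizes are easy to compute directly: the formal concepts of a context, which ordered by inclusion form a complete lattice (the algebraic lattices studied above arise as the infinite-scale analogue). The brute-force enumeration and the example context below are our own.

```python
from itertools import combinations

objects = ["frog", "dog", "reed"]
attributes = {"frog": {"needs water", "lives in water"},
              "dog":  {"needs water"},
              "reed": {"needs water", "lives in water", "is a plant"}}

def common_attrs(objs):                     # the derivation A -> A'
    sets = [attributes[o] for o in objs]
    return set.intersection(*sets) if sets else set.union(*attributes.values())

def objects_with(attrs):                    # the derivation B -> B'
    return {o for o in objects if attrs <= attributes[o]}

concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        intent = common_attrs(objs)
        extent = objects_with(intent)       # close the object set
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "|", sorted(intent))
```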

Journal Article
TL;DR: In this paper, the authors developed an algebraic theory about threads and multi-threading based on the assumption that a deterministic interleaving strategy determines how threads are interleaved.
Abstract: In a previous paper, we developed an algebraic theory about threads and multi-threading based on the assumption that a deterministic interleaving strategy determines how threads are interleaved. The theory includes interleaving operators for a number of plausible deterministic interleaving strategies. The interleaving of different threads constitutes a multi-thread. Several multi-threads may exist concurrently on a single host in a network, several host behaviors may exist concurrently in a single network on the internet, etc. In the current paper, we assume that the above-mentioned kind of interleaving is also present at these other levels. We extend the theory developed so far with features to cover the multi-level case. We use the resulting theory to develop a simplified formal representation schema of systems that consist of several multi-threaded programs on various hosts in different networks. We also investigate the connections of the resulting theory with the algebraic theory of processes known as ACP.
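
A miniature rendering of one plausible deterministic strategy of the kind the theory axiomatizes, cyclic (round-robin) interleaving, applied at two levels as the paper suggests. Threads are plain finite action lists here, which is a drastic simplification of the algebraic setting.

```python
from collections import deque

def cyclic_interleave(threads):
    """Round-robin merge of finitely many finite threads."""
    queue = deque([deque(t) for t in threads if t])
    trace = []
    while queue:
        thread = queue.popleft()
        trace.append(thread.popleft())     # execute one action of this thread
        if thread:                         # not finished: requeue at the back
            queue.append(thread)
    return trace

# Two "hosts", each a multi-thread obtained by the same strategy one level
# down -- the same interleaving applied at several levels.
host1 = cyclic_interleave([["a1", "a2"], ["b1"]])
host2 = cyclic_interleave([["c1"], ["d1", "d2"]])
print(cyclic_interleave([host1, host2]))
# ['a1', 'c1', 'b1', 'd1', 'a2', 'd2']
```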

Journal Article
TL;DR: A basic ontology and a formal framework endorsing the viewpoint of coordination as a service are presented, whereby coordination media are characterised in terms of their interactive behaviour, and are seen as primary abstractions amenable to formal investigation.
Abstract: Coordination models like LINDA were first conceived in the context of closed systems, like high-performance parallel applications. There, all coordinated entities were known once and for all at design time, and coordination media were conceptually part of the coordinated application. Correspondingly, traditional formalisations of coordination models - where both coordinated entities and coordination media are uniformly represented as terms of a process algebra - endorse the viewpoint of coordination as a language for building concurrent systems. The complexity of today's application scenarios calls for a new approach to the formalisation of coordination models. Open systems, typically hosting a multiplicity of applications working concurrently, require coordination to be imposed through powerful abstractions that (i) persist through the whole engineering process - from design to execution time - and (ii) provide coordination services to applications by a shared infrastructure in the form of coordination media. As a unifying framework for a number of existing works on the semantics of coordination media, in this paper we present a basic ontology and a formal framework endorsing the viewpoint of coordination as a service. By this framework, coordination media are characterised in terms of their interactive behaviour, and are seen as primary abstractions amenable to formal investigation, promoting their exploitation at every step of the engineering process.

Journal Article
TL;DR: This paper uses asymmetry introduced by left-closedness to derive criteria ensuring that both equational and inequational versions of short cut fusion and related program transformations based on free theorems hold in the presence of seq.
Abstract: Parametric polymorphism constrains the behavior of pure functional programs in a way that allows the derivation of interesting theorems about them solely from their types, i.e., virtually for free. Unfortunately, standard parametricity results - including so-called free theorems - fail for nonstrict languages supporting a polymorphic strict evaluation primitive such as Haskell's seq. A folk theorem maintains that such results hold for a subset of Haskell corresponding to a Girard-Reynolds calculus with fixpoints and algebraic datatypes even when seq is present provided the relations which appear in their derivations are required to be bottom-reflecting and admissible. In this paper we show that this folklore is incorrect, but that parametricity results can be recovered in the presence of seq by restricting attention to left-closed, total, and admissible relations instead. The key novelty of our approach is the asymmetry introduced by left-closedness, which leads to "inequational" versions of standard parametricity results together with preconditions guaranteeing their validity even when seq is present. We use these results to derive criteria ensuring that both equational and inequational versions of short cut fusion and related program transformations based on free theorems hold in the presence of seq.

Journal Article
TL;DR: A new rule-action based term rewriting framework, called TermWare, is proposed and its application to software system analysis is described, providing better cost effectiveness of software maintenance under varied requirements and specifications of operation.
Abstract: In recent years light-weight formal methods for the construction and analysis of complex concurrent software systems have attracted growing interest. In this paper a new rule-action based term rewriting framework, called TermWare, is proposed and its application to software system analysis is described, providing better cost effectiveness of software maintenance under varied requirements and specifications of operation. The main advantage is a light-weight formal model based not on computational semantics but on particular properties of the software system to be analyzed. Such an approach eliminates the need for full formal analysis of the software system and allows extreme flexibility of application in two major respects: high adaptability to a changing environment, and easy reengineering and component reuse. The language and formal semantics of the system are defined. A new semantic model, called a term system with action, is proposed for TermWare. A case study with some representative examples in source code analysis and software development with the TermWare framework is presented.

Journal Article
TL;DR: It is proved that computable functions over the real numbers in the sense of recursive analysis can be characterized as the smallest class of functions that contains some basic functions and is closed under composition, linear integration, minimalization and the limit schema.
Abstract: Recently, using a limit schema, we presented an analog and machine-independent algebraic characterization of elementary functions over the real numbers in the sense of recursive analysis. In a different and orthogonal work, we proposed a minimalization schema that allows us to provide a class of real recursive functions that corresponds to extensions of computable functions over the integers. Mixing the two approaches we prove that computable functions over the real numbers in the sense of recursive analysis can be characterized as the smallest class of functions that contains some basic functions and is closed under composition, linear integration, minimalization and the limit schema.

Journal Article
TL;DR: This paper provides a semantic understanding of call redundancy, upon which an analysis is constructed for handling the tupling of functions with multiple recursion arguments, and provides a means to ensure termination of the tupling transformation.
Abstract: Redundant call elimination has been an important program optimisation process as it can produce super-linear speedup in optimised programs. In this paper, we investigate the use of the tupling transformation in achieving this optimisation over a first-order functional language. The standard tupling technique, as described in [6], works excellently in a restricted variant of the language, namely functions with a single recursion argument. We provide a semantic understanding of call redundancy, upon which we construct an analysis for handling the tupling of functions with multiple recursion arguments. The analysis provides a means to ensure termination of the tupling transformation. As the analysis is of polynomial complexity, tupling becomes suitable as a step in compiler optimisation.
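
The canonical single-recursion-argument instance of the transformation (standard textbook material rather than the paper's multi-argument analysis) is the tupling of Fibonacci, where returning a pair eliminates the redundant overlapping calls:

```python
def fib_naive(n):                 # exponential: fib(n-1) and fib(n-2) overlap
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_pair(n):                  # tupled: returns (fib(n), fib(n-1))
    if n == 1:
        return (1, 0)
    f1, f2 = fib_pair(n - 1)      # one recursive call instead of two
    return (f1 + f2, f1)

def fib_tupled(n):                # linear number of calls
    return n if n < 1 else fib_pair(n)[0]

assert [fib_tupled(i) for i in range(10)] == [fib_naive(i) for i in range(10)]
print(fib_tupled(30))             # 832040
```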