Showing papers in "Fundamenta Informaticae in 2005"


Journal Article
TL;DR: A comparative study of the quantitative relationship between basic rough set concepts such as attribute reduction, attribute significance and core, as defined from the algebra and information viewpoints, shows that the relationship between these concepts is one of inclusion rather than equivalence.
Abstract: Attribute reduction is an important issue in rough set theory and has been studied from both the algebra viewpoint and the information viewpoint of the theory. However, the concepts of attribute reduction based on these two viewpoints are not equivalent to each other. In this paper, we make a comparative study of the quantitative relationship between some basic concepts of rough set theory, such as attribute reduction, attribute significance and core, defined from these two viewpoints. The results show that the relationship between these concepts is one of inclusion rather than equivalence, because rough set theory discussed from the information point of view restricts attributes and decision tables more strictly than it does from the algebra point of view. The two viewpoints coincide only for consistent decision tables: the algebra viewpoint and the information viewpoint are equivalent for a consistent decision table, but differ for an inconsistent one. The results are significant for the design and development of information reduction methods.
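To make the consistency condition concrete, here is a minimal Python sketch (our illustration, not the paper's formalism; the attribute names are invented): a decision table is consistent exactly when objects that are indiscernible on all condition attributes always agree on the decision.

```python
# Toy decision table: consistency check via indiscernibility classes.
from collections import defaultdict

def indiscernibility_classes(table, attributes):
    """Group objects by their values on the given attributes."""
    classes = defaultdict(list)
    for obj in table:
        classes[tuple(obj[a] for a in attributes)].append(obj)
    return classes.values()

def is_consistent(table, condition_attrs, decision_attr):
    """True iff no indiscernibility class mixes decision values."""
    for cls in indiscernibility_classes(table, condition_attrs):
        if len({obj[decision_attr] for obj in cls}) > 1:
            return False
    return True

table = [
    {"headache": "yes", "temp": "high",   "flu": "yes"},
    {"headache": "yes", "temp": "high",   "flu": "yes"},
    {"headache": "no",  "temp": "normal", "flu": "no"},
    {"headache": "yes", "temp": "normal", "flu": "no"},
]
print(is_consistent(table, ["headache", "temp"], "flu"))  # True
```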

94 citations


Journal Article
TL;DR: A tutorial on the four main methods for proving properties of corecursive programs: fixpoint induction, the approximation (or take) lemma, coinduction, and fusion.
Abstract: Recursion is a well-known and powerful programming technique, with a wide variety of applications. The dual technique of corecursion is less well-known, but is increasingly proving to be just as useful. This article is a tutorial on the four main methods for proving properties of corecursive programs: fixpoint induction, the approximation (or take) lemma, coinduction, and fusion.
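As a small illustration of the take (approximation) lemma, here is a hedged Python sketch (our encoding: generators stand in for lazy streams). The lemma reduces equality of infinite streams to equality of all finite prefixes; we test one such prefix of the classic map/iterate law.

```python
# Corecursive streams as generators, with a take-lemma-style check.
from itertools import islice

def iterate(f, x):
    """Corecursive stream: x, f(x), f(f(x)), ..."""
    while True:
        yield x
        x = f(x)

def smap(f, s):
    """Map a function over an infinite stream."""
    for x in s:
        yield f(x)

def take(n, s):
    """The 'take' of the approximation lemma: a finite prefix."""
    return list(islice(s, n))

f = lambda n: 2 * n + 1
# Property: map f (iterate f x) == iterate f (f x), checked on a prefix.
assert take(10, smap(f, iterate(f, 3))) == take(10, iterate(f, f(3)))
```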

58 citations


Journal Article
TL;DR: This paper bridges the gap between qualitative and quantitative models, applies time Petri nets to the modelling and analysis of molecular biological systems, and demonstrates how to develop quantitative models of biochemical networks in a systematic manner, starting from the underlying qualitative ones.
Abstract: Biochemical networks are modelled at different abstraction levels. Basically, qualitative and quantitative models can be distinguished, and they are typically treated as separate ones. In this paper, we bridge the gap between qualitative and quantitative models and apply time Petri nets to the modelling and analysis of molecular biological systems. We demonstrate how to develop quantitative models of biochemical networks in a systematic manner, starting from the underlying qualitative ones. For this purpose we exploit the well-established structural Petri net analysis technique of transition invariants, which may be interpreted as a characterisation of the system's steady-state behaviour. For the analysis of the derived quantitative model, given as a time Petri net, we present structural techniques to decide the time-dependent realisability of a transition sequence and to calculate its shortest and longest time length. All steps of the demonstrated approach consider systems of integer linear inequalities. The crucial point is the total avoidance of any state space construction. Therefore, the presented technology may also be applied to infinite systems, i.e. unbounded Petri nets.
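As a toy illustration of the transition-invariant technique (not the paper's inequality systems): a T-invariant is a nonnegative integer solution x of C·x = 0, where C is the place-by-transition incidence matrix. The sketch below, which assumes the sympy library, computes one for a two-transition cycle.

```python
# T-invariants of a tiny Petri net as the integer nullspace of C.
from sympy import Matrix, lcm

# Incidence matrix of a 2-place, 2-transition cycle:
# t1 moves a token from p1 to p2, t2 moves it back.
C = Matrix([[-1,  1],
            [ 1, -1]])

for v in C.nullspace():
    denom = lcm([term.q for term in v])   # clear fractions
    iv = v * denom
    # For general nets, nonnegativity of the entries must be checked.
    print(list(iv))                       # [1, 1]: fire t1 and t2 once each
```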

53 citations


Journal Article
TL;DR: This work applies LR-based parsing methods to turn a nondeterministic program into a deterministic program, which improves the efficiency of the inverse programs and greatly expands the application range of the earlier method for program inversion.
Abstract: We describe a method for automatic program inversion of first-order functional programs based on methods of LR(0) parsing. We formalize the transformation and illustrate it with several example programs. We approach one of the main problems of automatic program inversion - the elimination of nondeterminism - by viewing an inverse program as a context-free grammar. We apply LR-based parsing methods to turn a nondeterministic program into a deterministic program. This improves the efficiency of the inverse programs and greatly expands the application range of our earlier method for program inversion.
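The flavour of the approach can be suggested, though only loosely (this is our toy, not the paper's LR(0) construction), by inverting a tree serializer: reading tokens left to right, one token of lookahead decides which rule applies, which makes the inverse parser deterministic.

```python
# Forward function: flatten a binary tree to a token list.
# Inverse: a deterministic parser with one symbol of lookahead.

def serialize(tree):
    """Leaf = ('leaf', value); internal node = ('node', left, right)."""
    if tree[0] == 'leaf':
        return ['L', tree[1]]
    return ['N'] + serialize(tree[1]) + serialize(tree[2])

def parse(tokens, i=0):
    """Deterministic inverse: the next token picks the rule."""
    if tokens[i] == 'L':
        return ('leaf', tokens[i + 1]), i + 2
    left, j = parse(tokens, i + 1)
    right, k = parse(tokens, j)
    return ('node', left, right), k

t = ('node', ('leaf', 1), ('node', ('leaf', 2), ('leaf', 3)))
assert parse(serialize(t))[0] == t
```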

44 citations


Journal Article
TL;DR: The probabilistic model checker PRISM is applied to the transition system derived from the model of this example, and the results of model checking some illustrative properties are shown.
Abstract: We introduce a model for molecular reactions based on probabilistic rewriting rules. We give a probabilistic algorithm for rule application as a semantics for the model, and we show how a probabilistic transition system can be derived from it. We use the algorithm in the development of an interpreter for the model, which we use to simulate the evolution of molecular systems. In particular, we show the results of the simulation of a real example of enzymatic activity. Moreover, we apply the probabilistic model checker PRISM to the transition system derived from the model of this example, and we show the results of model checking some illustrative properties.
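A minimal Python sketch in the spirit of probabilistic rule application (the molecule names, rates, and selection scheme here are our own assumptions, not the paper's semantics): each applicable rewriting rule is selected with probability proportional to its rate.

```python
# State = multiset of molecules; rules chosen with rate-proportional odds.
import random
from collections import Counter

# rule = (reactants, products, rate); here E + S <-> ES -> E + P
rules = [
    (Counter({'E': 1, 'S': 1}), Counter({'ES': 1}), 1.0),
    (Counter({'ES': 1}), Counter({'E': 1, 'S': 1}), 0.5),
    (Counter({'ES': 1}), Counter({'E': 1, 'P': 1}), 0.8),
]

def step(state):
    applicable = [(lhs, rhs, r) for lhs, rhs, r in rules
                  if all(state[m] >= n for m, n in lhs.items())]
    if not applicable:
        return state
    weights = [r for _, _, r in applicable]
    lhs, rhs, _ = random.choices(applicable, weights=weights)[0]
    return state - lhs + rhs

state = Counter({'E': 5, 'S': 20})
for _ in range(100):
    state = step(state)
print(state)   # typically mostly product P once the substrate is consumed
```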

31 citations


Journal Article
TL;DR: Different ways to approach the synchronization of rules running in parallel with unknown execution times are presented within the framework of membrane systems.
Abstract: Membrane systems are parallel computational devices inspired by the functioning of living cells. Since their original definition, a standard feature of membrane systems has been that each rule of the system is executed in exactly one time-unit. However, this hypothesis seems to have no counterpart in the real world: in cells, chemical reactions may take different times to be executed. A natural step is therefore to associate with each rule of a membrane system a certain execution time. We are interested in membrane systems, called time-free, that work independently of the assigned execution times of the rules. A basic and interesting problem in time-free membrane systems is the synchronization of different rules, running in parallel, with unknown execution times. Here we present different ways to approach this problem within the framework of membrane systems. Several research proposals are also suggested.
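The time-free requirement can be illustrated with a deliberately tiny Python sketch (entirely ours, with invented rules): two rules run in parallel with unknown durations, a third rule waits for both, and the halting configuration must not depend on the chosen durations.

```python
# r1: a -> b takes d1; r2: c -> d takes d2; r3: b, d -> e fires once
# both inputs exist, i.e. at time max(d1, d2).
from collections import Counter

def run(d1, d2):
    state = Counter({'a': 1, 'c': 1})
    for _t, consumed, produced in sorted([(d1, 'a', 'b'), (d2, 'c', 'd')]):
        state[consumed] -= 1          # rule completes at time _t
        state[produced] += 1
    if state['b'] and state['d']:     # r3 enabled at time max(d1, d2)
        state['b'] -= 1; state['d'] -= 1; state['e'] += 1
    return +state                     # halting configuration

configs = {tuple(sorted(run(d1, d2).items()))
           for d1 in (1, 5, 42) for d2 in (2, 9)}
print(configs)                        # {(('e', 1),)} for every assignment
```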

28 citations


Journal Article
TL;DR: This work develops a multimodal logic for formalizing order-of-magnitude qualitative reasoning, which is generally used, for instance, when one has a lot of data from a real-world example but the complexity of the numerical model suggests a qualitative approach.
Abstract: Non-classical logics have proven to be an adequate framework to formalize knowledge representation. In this paper we focus on a multimodal approach to formalize order-of-magnitude qualitative reasoning, extending the recently introduced system MQ, by means of a certain notion of negligibility relation which satisfies a number of intuitively plausible properties, as well as a minimal axiom system allowing for interaction among the different qualitative relations. The main aim is to show the completeness of the formal system introduced. Moreover, we consider some definability results and discuss possible directions for further research.

28 citations


Journal Article
TL;DR: The goal of the $-calculus is to propose a computational model with a built-in performance measure as its central element, one that not only allows the expression of solutions but also provides the means to incrementally construct solutions for computationally hard, real-life problems.
Abstract: This paper presents a novel model for resource-bounded computation based on process algebras, called the $-calculus (cost calculus). Resource-bounded computation attempts to find the best answer possible given operational constraints. The $-calculus provides a uniform representation for optimization in the presence of limited resources, using cost-optimization to find the best-quality solutions with a minimal amount of resources. A unique aspect of the approach is to propose a resource-bounded process algebra as a generic problem-solving paradigm targeting interactive AI applications. The goal of the $-calculus is to propose a computational model with a built-in performance measure as its central element. This measure not only allows the expression of solutions but also provides the means to incrementally construct solutions for computationally hard, real-life problems, in dramatic contrast with other models like Turing machines, the λ-calculus, or conventional process algebras. This highly expressive model must therefore be able to express approximate solutions. The paper describes the syntax and operational cost semantics of the calculus. A standard cost function is defined for strongly and weakly congruent cost expressions. Example optimization problems are given which take into account incomplete knowledge and the amount of resources used by an agent. The contributions of the paper are twofold. Firstly, some necessary conditions are found for achieving global optimization by performing local optimization in time and/or space; this addresses incomplete information and complexity during problem solving. Secondly, an algebra which expresses current practices, e.g., neural nets, cellular automata, dynamic programming, evolutionary computation, or mobile robotics, as limiting cases provides a tool for exploring the theoretical underpinnings of these methods. As a result, hybrid methods can be naturally expressed and developed using the algebra.

26 citations


Journal Article
TL;DR: This paper proposes two different approaches to creating an individual indiscernibility relation for a particular information system, assuming different semantics of missing attribute values, with this variability limited by the expressive power of formulas built from descriptors.
Abstract: The indiscernibility relation is a fundamental concept of rough set theory. The original definition of the indiscernibility relation does not capture situations where some of the attribute values are missing. This paper extends earlier work by proposing an individual treatment of missing values at the attribute or value level. Its main thesis is that not all missing values are semantically equal. We propose two different approaches to creating an individual indiscernibility relation for a particular information system. The first relation assumes variable, but fixed, semantics of missing attribute values in different columns. The second relation admits different semantics of missing attribute values, although this variability is limited by the expressive power of formulas built from descriptors. We also provide a comparison of flexible indiscernibility relations and missing-value imputation methods. Finally, we present a simple algorithm for inducing sub-optimal relations from data.
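A hedged sketch of the per-column idea (the attribute names and the two semantics labels are our own; the paper defines its relations via descriptor formulas): "do not care" missing values match anything, "lost" missing values match nothing, and each column is assigned one semantics.

```python
# Per-attribute semantics for missing values in an indiscernibility test.
MISSING = None

def match(u, v, semantics):
    if u == MISSING or v == MISSING:
        return semantics == 'dont_care'   # 'lost' values match nothing
    return u == v

def indiscernible(x, y, attrs, semantics):
    """x, y: attribute dicts; semantics: attr -> 'dont_care' | 'lost'."""
    return all(match(x[a], y[a], semantics[a]) for a in attrs)

x = {'color': 'red', 'size': MISSING}
y = {'color': 'red', 'size': 'big'}
sem1 = {'color': 'lost', 'size': 'dont_care'}
sem2 = {'color': 'lost', 'size': 'lost'}
print(indiscernible(x, y, ['color', 'size'], sem1))  # True
print(indiscernible(x, y, ['color', 'size'], sem2))  # False
```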

26 citations


Journal Article
TL;DR: A formal model and a semantics for tabular expressions are presented and the model covers most known types of tables used in software engineering, and admits precise classification and definition of new types of Tables.
Abstract: Tabular expressions (Parnas et al. [20, 28, 32, 33]) are a means to represent the complex relations that are used to specify or document software systems. A formal model and a semantics for tabular expressions are presented. The model covers most known types of tables used in software engineering, and admits precise classification and definition of new types of tables. The practical importance of the semantics of tabular expressions is also discussed.
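One-dimensional tables of this kind are easy to mimic in code; the sketch below is our toy encoding, not the paper's formal model: a table is a list of (guard, value-expression) rows that must be disjoint and complete.

```python
# Evaluating a one-dimensional decision table.

def evaluate(table, *args):
    """table: list of (predicate, function) rows; the rows must be
    pairwise disjoint and together cover the input domain."""
    matching = [f for p, f in table if p(*args)]
    assert len(matching) == 1, "table is not proper (disjoint + complete)"
    return matching[0](*args)

# abs(x) written as a tabular expression
abs_table = [
    (lambda x: x < 0,  lambda x: -x),
    (lambda x: x >= 0, lambda x: x),
]
print(evaluate(abs_table, -7))  # 7
```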

23 citations


Journal Article
TL;DR: It is demonstrated that the introduction of probability into Petri nets increases their power, in particular with respect to their reachability sets; to achieve this, the definition of languages accepted by probabilistic finite automata or probabilistic Turing machines with complexity restrictions has to be modified.
Abstract: It is demonstrated that the introduction of probability into Petri nets increases their power, in particular with respect to their reachability sets. To achieve this, the definition of languages accepted by probabilistic finite automata or probabilistic Turing machines with complexity restrictions has to be modified.

Journal Article
TL;DR: A new discretization for timed automata is introduced which enables SAT-based reachability analysis for timed automata in which comparisons between two clocks are allowed.
Abstract: This paper deals with the problem of checking reachability for timed automata with diagonal constraints. Such automata are needed in many applications, e.g., to model scheduling problems. We introduce a new discretization for timed automata which enables SAT-based reachability analysis for timed automata in which comparisons between two clocks are allowed. In our earlier papers, SAT-based reachability analysis was restricted to so-called diagonal-free timed automata, where only comparisons between clocks and constants are allowed.

Journal Article
TL;DR: There And Back Again (TABA) as discussed by the authors is a programming pattern where a recursive function defined over a data structure traverses another data structure at return time: the recursive calls get us "there" by traversing the first data structure, and the returns get us "back again" while traversing the second.
Abstract: We present a programming pattern where a recursive function defined over a data structure traverses another data structure at return time. The idea is that the recursive calls get us 'there' by traversing the first data structure and the returns get us 'back again' while traversing the second data structure. We name this programming pattern of traversing a data structure at call time and another data structure at return time "There And Back Again" (TABA). The TABA pattern directly applies to computing symbolic convolutions and to multiplying polynomials. It also blends well with other programming patterns such as dynamic programming and traversing a list at double speed. We illustrate TABA and dynamic programming with Catalan numbers. We illustrate TABA and traversing a list at double speed with palindromes, and we obtain a novel solution to this traditional exercise. Finally, through a variety of tree traversals, we show how to apply TABA to data structures other than lists. A TABA-based function written in direct style makes full use of an ALGOL-like control stack and needs no heap allocation. Conversely, in a TABA-based function written in continuation-passing style and recursively defined over a data structure (traversed at call time), the continuation acts as an iterator over a second data structure (traversed at return time). In general, the TABA pattern saves one from accumulating intermediate data structures at call time.
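The symbolic convolution example is easy to transcribe into Python (the paper itself works in functional notation): xs is traversed at call time and ys at return time, with no reversal and no auxiliary list.

```python
# TABA: traverse xs 'there' (call time) and ys 'back again' (return time).

def convolve(xs, ys):
    """Return list(zip(xs, reversed(ys))); assumes len(xs) == len(ys)."""
    def walk(rest):
        if not rest:
            return ys, []          # 'there': the end of xs is reached
        x, *more = rest
        remaining, acc = walk(more)
        y, *left = remaining       # 'back again': consume ys on return
        return left, [(x, y)] + acc
    return walk(xs)[1]

print(convolve([1, 2, 3], ['a', 'b', 'c']))  # [(1, 'c'), (2, 'b'), (3, 'a')]
```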

Journal Article
TL;DR: The test results demonstrate that the proposed method simplifies significantly the achieved solutions and shortens their evaluation time.
Abstract: Ant Colony Programming is a relatively new paradigm based on Genetic Programming and the Ant Colony System. We discuss the problem of eliminating introns in Ant Colony Programming. The approach is tested on several approximation problems. The test results demonstrate that the proposed method significantly simplifies the achieved solutions and shortens their evaluation time.

Journal Article
TL;DR: This study proposes an interactive scheme called GADAC to choose suitable parameters and adopt diverse radii for clustering, and demonstrates that the noise and all clusters of any data shape can be identified precisely in the proposed scheme.
Abstract: Density-based clustering can identify arbitrary data shapes and noise. Achieving good clustering performance necessitates regulating the appropriate parameters of the density-based clustering. To select suitable parameters successfully, this study proposes an interactive scheme called GADAC that chooses suitable parameters and adopts diverse radii for clustering. Adopting diverse radii is a novel idea in density-based clustering; the radii can be adjusted by the genetic algorithm to cover the clusters more accurately. Experimental results demonstrate that the noise and all clusters of any data shape can be identified precisely in the proposed scheme. Additionally, the shape covering in the proposed scheme is more accurate than that in DBSCAN.
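A greatly simplified sketch of the underlying idea (entirely ours: plain random search stands in for the genetic algorithm, and the fitness is just the amount of remaining noise): each cluster is grown by density-connectivity from a seed with its own radius.

```python
# Per-cluster radii for density-based region growing, tuned by search.
import random

def grow(points, seed, radius, taken):
    cluster, frontier = set(), [seed]
    while frontier:
        p = frontier.pop()
        if p in cluster or p in taken:
            continue
        cluster.add(p)
        frontier += [q for q in points
                     if abs(q[0] - p[0]) + abs(q[1] - p[1]) <= radius]
    return cluster

def cluster_all(points, seeds, radii):
    taken, clusters = set(), []
    for s, r in zip(seeds, radii):
        c = grow(points, s, r, taken)
        taken |= c
        clusters.append(c)
    return clusters, set(points) - taken     # clusters, noise

points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (9, 9)]
seeds = [(0, 0), (5, 5)]
best = min(([random.uniform(0.5, 3), random.uniform(0.5, 3)]
            for _ in range(200)),
           key=lambda radii: len(cluster_all(points, seeds, radii)[1]))
clusters, noise = cluster_all(points, seeds, best)
print(clusters, noise)   # two tight clusters; (9, 9) remains noise
```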

Journal Article
TL;DR: In this paper, a new algebraic semantics of rough set theory including additional meta aspects is proposed, which is based on enhancing the standard Rough Set Theory with notions of "relative ability of subsets of approximation spaces to approximate".
Abstract: In this research a new algebraic semantics of rough set theory including additional meta aspects is proposed. The semantics is based on enhancing the standard rough set theory with notions of 'relative ability of subsets of approximation spaces to approximate'. The eventual algebraic semantics is developed via many deep results in convexity in ordered structures. A new variation of rough set theory, namely 'ill-posed rough set theory' in which it may suffice to know some of the approximations of sets, is eventually introduced.

Journal Article
TL;DR: This work explores the open calculus of constructions as a uniform framework for programming, specification and interactive verification in an equational higher-order style, providing a foundation for a broad spectrum of applications ranging from executable mathematics to software and system engineering.
Abstract: The open calculus of constructions integrates key features of Martin-Löf's type theory, the calculus of constructions, membership equational logic, and rewriting logic into a single uniform language. The two key ingredients are dependent function types and conditional rewriting modulo equational theories. We explore the open calculus of constructions as a uniform framework for programming, specification and interactive verification in an equational higher-order style. By having equational logic and rewriting logic as executable sublogics we preserve the advantages of a first-order semantic and logical framework, and we provide a foundation for a broad spectrum of applications ranging from what could be called executable mathematics, involving symbolic computations and logical proofs, to software and system engineering applications, involving symbolic execution and analysis of nondeterministic and concurrent systems.

Journal Article
TL;DR: The problems of spatio-temporal reasoning in the context of hierarchical information maps and approximate reasoning networks (AR networks) are discussed and experiments with classifiers based on AR schemes using a road traffic simulator are briefly presented.
Abstract: We discuss the problems of spatio-temporal reasoning in the context of hierarchical information maps and approximate reasoning networks (AR networks). Hierarchical information maps are used for representations of domain knowledge about objects, their parts, and their dynamical changes. AR networks are patterns constructed over sensory measurements and they are discovered from hierarchical information maps and experimental data. They make it possible to approximate domain knowledge, i.e., complex spatio-temporal concepts and reasonings represented in hierarchical information maps. Experiments with classifiers based on AR schemes using a road traffic simulator are also briefly presented.

Journal Article
TL;DR: This paper revisits the transformation method initially proposed by Bird, but follows a pure point-free calculus that emphasizes the advantages of equational reasoning, and introduces several shortcut optimization rules that capture typical transformation patterns.
Abstract: Functional programs are particularly well suited to formal manipulation by equational reasoning. In particular, it is straightforward to use calculational methods for program transformation. Well-known transformation techniques, like tupling or the introduction of accumulating parameters, can be implemented using calculation through the use of the fusion (or promotion) strategy. In this paper we revisit this transformation method, but, unlike most of the previous work on this subject, we adhere to a pure point-free calculus that emphasizes the advantages of equational reasoning. We focus on the accumulation strategy initially proposed by Bird, where the transformed programs are seen as higher-order folds calculated systematically from a specification. The machinery of the calculus is expanded with higher-order point-free operators that simplify the calculations. A substantial number of examples (both classic and new) are fully developed, and we introduce several shortcut optimization rules that capture typical transformation patterns.
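The accumulation strategy itself is easy to show directly (here in Python rather than the paper's point-free notation): introduce an accumulating parameter via the specification reverse_acc(xs, acc) = reverse_naive(xs) ++ acc and calculate the efficient version from it.

```python
# Accumulating-parameter transformation: naive reverse vs derived version.

def reverse_naive(xs):
    if not xs:
        return []
    return reverse_naive(xs[1:]) + [xs[0]]      # repeated appends

def reverse_acc(xs, acc=None):
    """Satisfies reverse_acc(xs, acc) == reverse_naive(xs) + acc;
    in a functional language the cons-based version is linear-time."""
    if acc is None:
        acc = []
    if not xs:
        return acc
    return reverse_acc(xs[1:], [xs[0]] + acc)

assert reverse_naive([1, 2, 3]) == reverse_acc([1, 2, 3]) == [3, 2, 1]
```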

Journal Article
TL;DR: An approach to hierarchical modelling of complex patterns based on operations of sums with constraints on information systems is outlined, and it is shown that such operations can be treated as a universal tool in hierarchical modelling of complex patterns.
Abstract: We outline an approach to hierarchical modelling of complex patterns that is based on operations of sums with constraints on information systems. We show that such operations can be treated as a universal tool in hierarchical modelling of complex patterns.

Journal Article
TL;DR: The goal of this book is to collect recent outstanding studies worldwide on mining and learning within graphs, trees and sequences.
Abstract: Ever since the early days of machine learning and data mining, it has been realized that the traditional attribute-value and item-set representations are too limited for many practical applications in domains such as chemistry, biology, network analysis and text mining. This has triggered a lot of research on mining and learning within alternative and more expressive representation formalisms such as computational logic, relational algebra, graphs, trees and sequences. The motivation for using graphs, trees and sequences is that they are (1) more expressive than flat representations, and (2) potentially more efficient than multi-relational learning and mining techniques. At the same time, the data structures of graphs, trees and sequences are among the best understood and most widely applied representations within computer science. Thus these representations offer ideal opportunities for developing interesting contributions in data mining and machine learning that are both theoretically well-founded and widely applicable. The goal of this book is to collect recent outstanding studies worldwide on mining and learning within graphs, trees and sequences.

Journal Article
TL;DR: This paper presents a number of alternative approaches to rough satisfiability and meaning in approximation spaces, and touches upon derivative concepts of meaning and applicability of rules.
Abstract: In this paper, we study general notions of satisfiability and meaning of formulas and sets of formulas in approximation spaces. Rather than proposing one particular form of rough satisfiability and meaning, we present a number of alternative approaches. Approximate satisfiability and meaning are important, among others, for modelling of complex systems like systems of adaptive social agents. Finally, we also touch upon derivative concepts of meaning and applicability of rules.

Journal Article
TL;DR: A positive answer is given to the question of whether the characterization of recursively enumerable languages by insertion grammars of weight at least 7 can be improved: the weight of the insertion grammar used is decreased to 5.
Abstract: Insertion grammars have been introduced in [1] and their computational power has been studied in several places. In [7] it is proved that insertion grammars with weight at least 7 can characterize recursively enumerable languages (modulo a weak coding and an inverse morphism), and the question was formulated whether or not this result can be improved. In this paper, we come up with a positive answer to this question, by decreasing the weight of the insertion grammar used to 5. We also give a characterization of recursively enumerable languages in terms of right quotients of insertion languages.
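For readers unfamiliar with insertion grammars, here is a toy sketch of the basic operation (a weight-1 grammar, far below the weight-5 grammars of the paper): a rule (u, w, v) allows w to be inserted between a left context u and a right context v, and the weight is the maximal context length.

```python
# One derivation step of an insertion grammar.

def one_step_insertions(s, rules):
    """All strings reachable from s by a single insertion."""
    out = set()
    for u, w, v in rules:
        for i in range(len(s) + 1):
            if s[max(0, i - len(u)):i] == u and s[i:i + len(v)] == v:
                out.add(s[:i] + w + s[i:])
    return out

rules = [("a", "ab", "b")]          # weight 1
lang = {"ab"}
for _ in range(3):                  # a few derivation steps
    lang |= {t for s in lang for t in one_step_insertions(s, rules)}
print(sorted(lang))                 # a^n b^n for n = 1..4
```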

Journal Article
TL;DR: This paper proposes an improved protocol, secure against the known attacks, to enhance the security of OSPA.
Abstract: Password authentication, which requires a username and password before access to resources is granted, is a widely used authentication method and an important protocol. In 2001, Lin et al. proposed the optimal strong-password authentication protocol (OSPA), a one-time password method that verifies identity with a different verifier at every login. However, Chen and Ku, and Tsuji and Shimizu, pointed out that OSPA is vulnerable to the stolen-verifier attack and the impersonation attack, respectively. In this paper, we propose an improved protocol, secure against these known attacks, to enhance the security of OSPA.
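The paper's protocol is not reproduced here, but the hash-chain mechanism that many strong one-time-password schemes build on is easy to sketch. This is the classic Lamport scheme, not OSPA or the improved protocol: the server stores h^n(pw), each login reveals the previous chain element, and a stolen verifier never exposes a reusable password.

```python
# Lamport-style one-time passwords via a hash chain.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chain(pw: bytes, n: int) -> bytes:
    for _ in range(n):
        pw = h(pw)
    return pw

secret, n = b"correct horse", 1000
verifier = chain(secret, n)                 # stored by the server

# i-th login (i = 1, 2, ...): the client sends chain(secret, n - i)
token = chain(secret, n - 1)
assert h(token) == verifier                 # server checks one hash
verifier = token                            # and rolls the verifier forward
```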

Journal Article
TL;DR: It is shown that certain sets of processes form together with these operations categories with additional structures and special properties, that processes of a system can be represented as morphisms of such categories, and that independence of processes can be characterized in a natural, purely algebraic way.
Abstract: The paper is concerned with modelling distributed systems by specifying their states and processes. Processes are defined as activities in a universe of objects, each object with a set of possible internal states, each activity changing the states of some objects and establishing or destroying relations among objects. Partial operations of composing processes sequentially and in parallel are defined. It is shown that certain sets of processes form, together with these operations, categories with additional structures and special properties, that processes of a system can be represented as morphisms of such categories, and that independence of processes can be characterized in a natural, purely algebraic way.

Journal Article
Thierry Joly
TL;DR: In this article, it is shown that the Definability Problem is undecidable for any FTS over at least two ground elements, which also yields a particularly simple proof of the undecidability of the β-Pattern Matching Problem.
Abstract: The question whether a given functional of some full type structure (FTS for short) is λ-definable by a closed λ-term, known as the Definability Problem, was proved to be undecidable by R. Loader in 1993 (cf. [Loa01a]). Later on, R. Loader refined his first result by establishing that the problem is undecidable for every FTS over at least 3 ground elements (cf. [Loa01b]). Here we solve the remaining non-trivial case and show that the problem is undecidable whenever there are at least two ground elements. The proof is based on a direct encoding of the Halting Problem for register machines into the Definability Problem restricted to the functionals of the Monster type M=(((o→o)→o)→o)→(o→o). It follows that this new restriction of the Definability Problem, which is orthogonal to the ones considered so far, is also undecidable. The latter fact yields a particularly simple proof of the undecidability of the β-Pattern Matching Problem, recently established by R. Loader in [Loa03].

Journal Article
TL;DR: There are graph classes for which the optimal linear arrangement problem can be efficiently approximated using the genetic hillclimbing algorithm but not using simulated annealing based algorithms.
Abstract: The optimal linear arrangement problem is defined as follows: given a graph G, find a linear ordering of the vertices of G on a line such that the sum of the edge lengths is minimized over all orderings. The problem is NP-complete and has many applications in graph drawing and in VLSI circuit design. We introduce a genetic hillclimbing algorithm for the optimal linear arrangement problem. We compare the quality of the solutions and the running times of our algorithm to those obtained by simulated annealing algorithms. To obtain comparable results, we use a benchmark graph suite for the problem. Our experiments show that there are graph classes for which the optimal linear arrangement problem can be efficiently approximated using our genetic hillclimbing algorithm but not using simulated annealing based algorithms. For hypercubes, binary trees and bipartite graphs, the solution quality is better and the running times are shorter than with simulated annealing algorithms. Also the average results are better. On the other hand, there are also graph classes for which simulated annealing algorithms work better.
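The cost function and the basic move are simple to state; below is a bare-bones Python sketch (first-improvement hillclimbing only, without the genetic component of the paper's algorithm): the cost of an arrangement is the sum of edge lengths, and neighbours swap the positions of two vertices.

```python
# Hillclimbing for the optimal linear arrangement problem.
import random

def cost(order, edges):
    pos = {v: i for i, v in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)

def hillclimb(vertices, edges, steps=10000):
    order = list(vertices)
    random.shuffle(order)
    best = cost(order, edges)
    for _ in range(steps):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        c = cost(order, edges)
        if c < best:
            best = c
        else:                                   # revert a worse swap
            order[i], order[j] = order[j], order[i]
    return order, best

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]        # a 4-cycle
print(hillclimb(range(4), edges))               # optimal cost is 6
```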

Journal Article
TL;DR: In this paper, the authors extend the model underlying the PageRank algorithm by incorporating "back button" usage modeling in order to make the model less simplistic and explain the existence and uniqueness of the ranking induced by the extended model.
Abstract: The World Wide Web, with its billions of hyperlinked documents, is a huge and important resource of information, and filtering this information is a necessity. Link analysis of the Web graph has turned out to be a powerful tool for automatically identifying authoritative documents. One of the best examples is the PageRank algorithm used in Google [1] to rank search results. In this paper we extend the model underlying the PageRank algorithm by incorporating "back button" usage modelling in order to make the model less simplistic. We explain the existence and uniqueness of the ranking induced by the extended model. We also develop and implement an efficient approximation method for computing the novel ranking and present successful experimental results on 80- and 50-million-page samples of the real Web.
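A hedged sketch of one way to add a back button to the random surfer (our simplification; the paper's model and its existence and uniqueness analysis are more refined): track (previous, current) pairs, press back with probability b, otherwise follow a random outlink, and rank a page by the total stationary mass of states currently at it.

```python
# Power iteration for a back-button random surfer over (prev, cur) pairs.
from collections import defaultdict

links = {0: [1, 2], 1: [0], 2: [0, 1]}    # a tiny toy Web graph
b = 0.3                                    # back-button probability

pairs = [(i, j) for i in links for j in links[i]]
pi = {s: 1 / len(pairs) for s in pairs}

for _ in range(100):
    nxt = defaultdict(float)
    for (i, j), p in pi.items():
        nxt[(j, i)] += b * p               # press back: return to i
        for k in links[j]:                 # follow a random outlink of j
            nxt[(j, k)] += (1 - b) * p / len(links[j])
    pi = nxt

rank = defaultdict(float)
for (i, j), p in pi.items():
    rank[j] += p                           # rank of a page = mass at it
print(dict(rank))                          # ranks sum to 1
```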

Journal Article
TL;DR: The main result proved shows that the natural embedding of any recursively enumerable one-dimensional array language in the two-dimensional space can be characterized by the projection of a two- dimensional array language generated by a contextual array grammar working in the t-mode and with norm one.
Abstract: The main result proved in this paper shows that the natural embedding of any recursively enumerable one-dimensional array language in the two-dimensional space can be characterized by the projection of a two-dimensional array language generated by a contextual array grammar working in the t-mode and with norm one. Moreover, we show that any recursively enumerable one-dimensional array language can even be characterized by the projection of a two-dimensional array language generated by a contextual array grammar working in the t-mode where, in the selectors of the contextual array productions, only the ability to distinguish between blank and non-blank positions is necessary; in that case, the norm of the two-dimensional contextual array grammar working in the t-mode cannot be bounded.

Journal Article
TL;DR: It is known that for (right-) monotone deterministic one-way restarting automata, the use of auxiliary symbols does not increase the expressive power; here the same is shown for deterministic two-way restarting automata that are right-left-monotone, and a transformation of these automata into contextual grammars with regular selection is presented.
Abstract: It is known that for (right-) monotone deterministic one-way restarting automata, the use of auxiliary symbols does not increase the expressive power. Here we show that the same is true for deterministic two-way restarting automata that are right-left-monotone. Moreover, we present a transformation of this kind of restarting automata into contextual grammars with regular selection.