
Showing papers presented at "Computer Aided Systems Theory in 2001"


Book ChapterDOI
19 Feb 2001
TL;DR: The concept of Grobner bases is developed by studying uniqueness of polynomial division ("reduction"), and the crucial notion of S-polynomials is introduced, leading to the complete algorithmic solution of the construction problem.
Abstract: In this paper, we give a brief overview of Grobner bases theory, addressed to novices without prior knowledge in the field. After explaining the general strategy for solving problems via the Grobner approach, we develop the concept of Grobner bases by studying uniqueness of polynomial division ("reduction"). For explicitly constructing Grobner bases, the crucial notion of S-polynomials is introduced, leading to the complete algorithmic solution of the construction problem. The algorithm is applied to examples from polynomial equation solving and algebraic relations. After a short discussion of complexity issues, we conclude the paper with some historical remarks and references.
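For readers new to the topic, the central construction mentioned above can be stated compactly (this is the standard textbook form, not quoted from the paper): for polynomials $f$ and $g$ with leading terms $LT(f)$, $LT(g)$ and leading monomials $LM(f)$, $LM(g)$, the S-polynomial is
\[ S(f,g) = \frac{\mathrm{lcm}(LM(f),LM(g))}{LT(f)}\,f \;-\; \frac{\mathrm{lcm}(LM(f),LM(g))}{LT(g)}\,g, \]
and Buchberger's criterion states that a basis $G$ is a Grobner basis exactly when $S(f,g)$ reduces to $0$ modulo $G$ for all $f,g \in G$; the construction algorithm repeatedly adds non-zero reduced S-polynomials to the basis until this holds.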

72 citations


Book ChapterDOI
19 Feb 2001
TL;DR: After some theoretical hints on the "morphogenetic neuron", the paper proposes the use of this new technique to solve one of the most important themes in robotics: the representation of the manipulator kinematic structure and the subsequent solution of the inverse kinematics problem.
Abstract: After some theoretical hints on the "morphogenetic neuron", the paper proposes the use of this new technique to solve one of the most important themes in robotics: the representation of the manipulator kinematic structure and the subsequent solution of the inverse kinematics problem. Although the application has been completed and fully tested with success only on a two-degrees-of-freedom SCARA robot, the first results reported here, obtained on a more complex (spherical) manipulator, seem to confirm the effectiveness of the approach.

11 citations


Book ChapterDOI
19 Feb 2001
TL;DR: The idea is to join Java Virtual Machines on different computers into a single virtual machine for running functional programs; to test this virtual machine, a small Haskell-like functional language in which parallelism is expressed by some simple combinators has been implemented.
Abstract: We present in this paper the implementation, in the Java language, of a distributed environment for running functional programs. The idea is to join Java Virtual Machines (JVMs) running on different computers into a single virtual machine for running functional programs. To test this virtual machine we have implemented a small Haskell-like functional language in which parallelism is expressed by some simple combinators.
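As an illustration of the combinator style of parallelism the abstract refers to, here is a minimal Haskell sketch using par and pseq from GHC's Control.Parallel (in the parallel package). The paper's own combinators run over distributed JVMs, so this shared-memory example only conveys the programming model; the name parMap' is ours.

    import Control.Parallel (par, pseq)

    -- Spark the evaluation of each element in parallel, then return the list.
    -- (Illustrative only; the paper's combinators distribute work over JVMs.)
    parMap' :: (a -> b) -> [a] -> [b]
    parMap' _ []     = []
    parMap' f (x:xs) =
      let y  = f x
          ys = parMap' f xs
      in  y `par` (ys `pseq` (y : ys))

    main :: IO ()
    main = print (sum (parMap' (\n -> n * n) [1 .. 10000 :: Integer]))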

11 citations


Book ChapterDOI
19 Feb 2001
TL;DR: Results of the experiments conducted are discussed, which give evidence that, with the help of the computer algebra systems, the planner is able to solve problems for which it would otherwise fail to create a proof.
Abstract: We report on a case study on combining proof planning with computer algebra systems. We construct proofs for basic algebraic properties of residue classes as well as for isomorphisms between residue classes using different proving techniques, which are implemented as strategies in a multi-strategy proof planner. We show how these techniques help to successfully derive proofs in our domain and explain how the search space of the proof planner can be drastically reduced by employing computations of two computer algebra systems during the planning process. Moreover, we discuss the results of experiments we conducted which give evidence that with the help of the computer algebra systems the planner is able to solve problems for which it would fail to create a proof otherwise.

11 citations


Book ChapterDOI
19 Feb 2001
TL;DR: A new heuristic proving method for predicate logic, called the PCS method because it proceeds by cycling through phases of proving, computing, and solving, appears to be a valuable contribution for many of the routine proofs encountered in exploring mathematical theorems.
Abstract: In this paper, we present a new heuristic proving method for predicate logic, called the PCS method since it proceeds by cycling through various phases of proving (i.e. applying generic inference rules), computing (i.e. simplifying formulae), and solving (i.e. finding witness terms). Although not a complete proving calculus, it does produce very natural proofs for many propositions in elementary analysis like the limit theorems. Thus it appears to be a valuable contribution for many of the routine proofs encountered in exploring mathematical theorems.
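As a concrete instance of the "solving" phase (finding witness terms), consider the standard limit theorem $\lim_{x\to a}(f(x)+g(x)) = F+G$ when $\lim_{x\to a} f(x) = F$ and $\lim_{x\to a} g(x) = G$; the worked step below is ours, chosen to match the kind of elementary-analysis proofs the abstract mentions. Given $\varepsilon>0$, the prover must find a witness $\delta$ such that $0<|x-a|<\delta$ implies $|f(x)+g(x)-(F+G)|<\varepsilon$. Instantiating the two hypotheses with $\varepsilon/2$ yields $\delta_1$ and $\delta_2$, and the triangle inequality $|f(x)+g(x)-(F+G)| \le |f(x)-F| + |g(x)-G|$ suggests the witness $\delta := \min(\delta_1,\delta_2)$, which closes the proof.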

10 citations


Book ChapterDOI
19 Feb 2001
TL;DR: Nowadays, complexity analysis of functional and technical mechatronic systems is becoming more and more important, because complexity influences almost all phases of product design and system engineering.
Abstract: Nowadays, complexity analysis of functional and technical mechatronic systems is becoming more and more important. This is because complexity influences almost all phases of product design and system engineering. Therefore, there is a demand for a good assessment of complexity. For this purpose, several questions have to be discussed. → What is complexity and what are its characteristics? → What are the criteria for a good method to evaluate complexity? → How could complexity be evaluated, and which method fits the requirements best?

9 citations


Book ChapterDOI
19 Feb 2001
TL;DR: This work deals with hidden and coalgebraic methodologies through an institutional framework, giving a hidden specification for this aspect of the EAT system and proving the existence of final objects in convenient categories, which accurately model the E AT way of working.
Abstract: This paper is devoted to the formal study of the data structures appearing in a symbolic computation system, namely the EAT system. One of the main features of the EAT system is that it intensively uses functional programming techniques. This implies that some formalisms for the algebraic specification of systems must be adapted to this functional setting. Specifically, this work deals with hidden and coalgebraic methodologies through an institutional framework. As a byproduct, the new concept of coalgebraic institution associated to an institution is introduced. Then, the problem of modeling functorial relationships between data structures is tackled, giving a hidden specification for this aspect of the EAT system and proving the existence of final objects in convenient categories, which accurately model the EAT way of working.
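To give a flavour of the coalgebraic view of hidden state used here, the following Haskell sketch (our own, not taken from the EAT formalization) shows a coalgebra as an observation map on a state space, with streams as the classic example of a final object: every coalgebra of the functor F X = A × X has a unique map into the stream type.

    -- A coalgebra for a functor f: a state space s with an observation map.
    newtype Coalg f s = Coalg { observeStep :: s -> f s }

    -- Streams are (up to isomorphism) the final coalgebra of F x = (a, x).
    data Stream a = Cons a (Stream a)

    -- The unique map from any (a,)-coalgebra into the final coalgebra.
    unfoldStream :: (s -> (a, s)) -> s -> Stream a
    unfoldStream obs s = let (a, s') = obs s in Cons a (unfoldStream obs s')

    -- Example: the stream of natural numbers, observed from the state 0.
    nats :: Stream Integer
    nats = unfoldStream (\n -> (n, n + 1)) 0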

8 citations


Proceedings Article
01 Feb 2001
TL;DR: This paper illustrates the use of ASM specifications to test the conformance of implementations to their specifications on the example of Universal Plug and Play devices.
Abstract: One benefit of executable specifications is that they allow one to test the conformance of implementations to their specifications. We illustrate this on the example of Universal Plug and Play devices. The necessary test sequences are generated automatically from ASM specifications.

8 citations


Book ChapterDOI
19 Feb 2001
TL;DR: The integration of the design has to simultaneously consider the controller design and its implementation; the resulting effect on the time constraints of the control requirements should be considered and balanced against the availability of real-time computational resources.
Abstract: Similar to the integrated design of industrial control systems, which considers the process and the controller at once, the integration of the design has to consider the controller design and its implementation simultaneously. The resulting effect on the time constraints of the control requirements should be considered and balanced against the availability of real-time computational resources. An adequate design of the real-time system may reduce these undesirable effects. This interaction, as well as some techniques to develop a joint design of the control system and its real-time implementation, is analyzed. Some examples point out the effectiveness of this methodology.

8 citations


Book ChapterDOI
19 Feb 2001
TL;DR: It is shown that when the abstract syntax contains several categories, it is possible to define many-sorted algebras obtaining the same modularity.
Abstract: We present a Language Prototyping System that facilitates the modular development of interpreters from semantic specifications. The theoretical basis of our system is the integration of ideas from generic programming and modular monadic semantics. The system is implemented as a domain-specific language embedded in Haskell and contains an interactive framework for language prototyping. In the monadic approach, the semantic specification of a programming language is captured as a function Σ → M V, where Σ represents the abstract syntax, M the computational monad, and V the domain value. In order to obtain more extensibility, we use folds or catamorphisms over the fixpoint of non-recursive pattern functors that capture the structure of the abstract syntax. For each pattern functor F, the semantic specifications are defined as independent F-algebras whose carrier is M V, where M is the computational monad and V models the domain value. The computational monad M can itself be obtained from the composition of several monad transformers applied to a base monad, and the domain value V can be defined using extensible union types. In this paper, we also show that when the abstract syntax contains several categories, it is possible to define many-sorted algebras obtaining the same modularity.
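A minimal Haskell sketch of the machinery described above: a fixpoint of a pattern functor, a catamorphism, and an F-algebra whose carrier is M V. Here the Maybe monad stands in for the computational monad M, and the ExprF functor and all names are hypothetical, not those of the Language Prototyping System.

    {-# LANGUAGE DeriveFunctor #-}

    -- Fixpoint of a pattern functor and the generic fold (catamorphism).
    newtype Fix f = In (f (Fix f))

    cata :: Functor f => (f a -> a) -> Fix f -> a
    cata alg (In t) = alg (fmap (cata alg) t)

    -- A toy pattern functor for an expression language.
    data ExprF r = Lit Int | Add r r | Div r r
      deriving Functor

    -- An F-algebra with carrier M V, here Maybe Int: division by zero fails.
    evalAlg :: ExprF (Maybe Int) -> Maybe Int
    evalAlg (Lit n)   = Just n
    evalAlg (Add x y) = (+) <$> x <*> y
    evalAlg (Div x y) = do a <- x
                           b <- y
                           if b == 0 then Nothing else Just (a `div` b)

    -- The interpreter is obtained generically from the algebra.
    eval :: Fix ExprF -> Maybe Int
    eval = cata evalAlg

    -- e.g. eval (In (Div (In (Lit 6)) (In (Lit 0)))) == Nothing

With this decomposition, extending the language means adding a new pattern functor and its algebra rather than rewriting the interpreter, which is the modularity the abstract refers to (there achieved with extensible union types and monad transformers).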

7 citations


Book ChapterDOI
19 Feb 2001
TL;DR: Computational results are included to show that for large times and for large boundaries the first passage time probability density through an asymptotically periodic boundary is exponentially distributed to an excellent degree of approximation.
Abstract: A parallel algorithm is implemented to simulate sample paths of stationary normal processes possessing a Butterworth-type covariance, in order to investigate asymptotic properties of the first passage time probability densities for time-varying boundaries. After a self-contained outline of the simulation procedure, computational results are included to show that for large times and for large boundaries the first passage time probability density through an asymptotically periodic boundary is exponentially distributed to an excellent degree of approximation.
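For context (our notation, not quoted from the paper): with $X(t)$ the stationary normal process and $S(t)$ the time-varying boundary, the first passage time is $T = \inf\{t \ge 0 : X(t) \ge S(t)\}$, and the reported result is that, for large times and large boundaries, its density is well approximated by an exponential form $g(t) \approx \lambda e^{-\lambda t}$, where the rate $\lambda$ depends on the boundary and on the Butterworth-type covariance.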

Book ChapterDOI
19 Feb 2001
TL;DR: The Hyper-Automaton concept is briefly reviewed as a structural model for hypertext, and its use as the basis for the central core of the agents in a learning system is analyzed.
Abstract: This paper describes the conception and implementation of a learning system on Euclidean Geometry demonstrations and its knowledge base. We use the formalism of finite automata with output to represent and order the statements that constitute a geometric demonstration in the knowledge base. The system is built on the MOSCA learning protocol, based on learning with the assistance of examples and on interaction among five agents (Mestre, Oraculo, Sonda, Cliente and Aprendiz) involved in the learning process. We briefly review the Hyper-Automaton concept as a structural model for hypertext, and its use as the basis for the central core of the agents in a learning system is analyzed.
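A minimal Haskell sketch of a finite automaton with output (a Mealy machine), to make concrete the formalism the abstract relies on for ordering the statements of a demonstration; the types and names are ours, not the Hyper-Automaton implementation.

    -- A finite automaton with output: each input symbol drives a transition
    -- and emits an output (e.g. the next statement of a demonstration).
    data Mealy s i o = Mealy
      { start :: s
      , step  :: s -> i -> (s, o)
      }

    -- Run the automaton over an input word, collecting the emitted outputs.
    runMealy :: Mealy s i o -> [i] -> [o]
    runMealy m = go (start m)
      where
        go _ []       = []
        go s (i:rest) = let (s', o) = step m s i in o : go s' rest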

Book ChapterDOI
19 Feb 2001
TL;DR: This paper proposes a cleaner and more modular approach to the trace problem, which regards traces as observations of the program while preserving the original graph.
Abstract: The debugging of lazy functional programs is a problem that has not yet been satisfactorily solved. Different approaches have been proposed during the last years, all of them having a property in common: the graph produced by the traced program is different from the original graph, i.e. the one without traces. In this paper we propose a cleaner and more modular approach to the trace problem. We regard traces as observations of the program and at the same time we want to preserve the original graph. In this way, a clean separation between the trace and the program being observed is established. Consequently, there may be variables in the trace referencing parts of the graph (i.e. pointers from the trace to the graph), but not the other way around. By doing so, correctness is guaranteed, as the normal execution process is not altered. In order to reach this goal, a monadic approach is followed. The success of the approach is shown by simulating three up-to-date Haskell tracers.
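A minimal sketch of the monadic-observation idea, ours rather than the authors' implementation and much simpler than the Haskell tracers they simulate: the trace is accumulated in a Writer monad separately from the value being computed, so the trace refers to program values but the computation itself is left as it is. The name observe is hypothetical here.

    import Control.Monad.Writer (Writer, runWriter, tell)

    -- Record an observation of a value in the trace and pass the value through.
    observe :: Show a => String -> a -> Writer [String] a
    observe label x = do
      tell [label ++ " = " ++ show x]
      return x

    example :: Writer [String] Int
    example = do
      x <- observe "x" 3
      y <- observe "y" 4
      return (x + y)

    main :: IO ()
    main = print (runWriter example)   -- prints (7,["x = 3","y = 4"])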

Book ChapterDOI
19 Feb 2001
TL;DR: The anchor point milestones and the invariant definitions from the spiral model lifecycle for software development have been translated to CommonKADS methodology to develop a phytosanitary advisor system used by farmers in greenhouse crops.
Abstract: This work describes a proposal for software lifecycle and project management when developing knowledge based systems using CommonKADS methodology. To build this proposal, an analysis of the similarities between software engineering and knowledge engineering was made. As a result, the anchor point milestones and the invariant definitions from the spiral model lifecycle for software development have been translated to CommonKADS methodology. This proposal is being applied to a project in order to develop a phytosanitary advisor system used by farmers in greenhouse crops.

Book ChapterDOI
19 Feb 2001
TL;DR: A new approach to synthesize abstract machines for reactive programs that interact with processes in order to achieve some control requirements in the context of the Supervisory Control Theory is considered.
Abstract: We consider a new approach to synthesize abstract machines for reactive programs that interact with processes in order to achieve some control requirements in the context of the Supervisory Control Theory. The motivation behind our synthesis approach is related to the problem of scalability. Generally, synthesis procedures are based on a comparison of two state spaces (fixpoint calculation-based approach) or an exploration of a state space (search-based approach). In neither case do the synthesis procedures scale to specifications of realistic size. To circumvent this problem, we propose: i) to combine two formal notations for the representation of reactive programs in addition to the one for specifying control requirements; and ii) to integrate a synthesis procedure in a framework in which various transformations are applied with the sole aim of solving a smaller control problem from an abstract model.

Book ChapterDOI
19 Feb 2001
TL;DR: A categorical approach to cope with some questions originally studied within Computational Complexity Theory, aiming at characterising the structural properties of optimization problems, related to the approximative issue, by means of Category Theory.
Abstract: This work presents a categorical approach to cope with some questions originally studied within Computational Complexity Theory. It pursues research with a theoretical emphasis, aiming at characterising the structural properties of optimization problems, related to the approximative issue, by means of Category Theory. In order to achieve this, two new categories are defined: the OPT category, whose objects are optimization problems and whose morphisms are the reductions between them, and the APX category, which has approximation problems as objects and approximation-preserving reductions as morphisms. Following the basic idea of categorical shape theory, a comparison mechanism between these two categories is defined, and a hierarchical structure of approximation to each optimization problem can be modelled.

Book ChapterDOI
19 Feb 2001
TL;DR: The investigation contrasts and compares Java and two Haskell-based distributed functional languages, Eden and GdH, to investigate the effectiveness of higher-level abstraction for distributed co-ordination.
Abstract: Conventional distributed programming languages require the programmer to explicitly specify many aspects of distributed coordination, including resource location, task placement, communication and synchronisation. Functional languages aim to provide higher-level abstraction, and this paper investigates its effectiveness for distributed co-ordination. The investigation contrasts and compares Java and two Haskell-based distributed functional languages, Eden and GdH. Three distributed programs are used as case studies, and the performance and programming effort are reported.

Book ChapterDOI
19 Feb 2001
TL;DR: It is argued that software development is, in some sense, formally unpredictable, which suggests that software engineering is a scientific field characterized not only by the typical work of engineering but also by the methodology of the experimental sciences.
Abstract: Our main aim is to propose a new characterization of the software development process. We suggest that software development methodology has some limits. These limits are a clue that the software development process is more subjective and empirical than objective and formal. We use Kolmogorov complexity to develop the formal argument and to outline the informal conclusions. Kolmogorov complexity is based on the size in bits of the smallest effective description of an object and is a suitable quantitative measure of the object's information content. We try to show that this notion of complexity is a suitable measure and a tool for the characterization of the software development process. Following the paper's conclusions, the limits of formal methods typify the software development process as experimentally and heuristically based, like, for example, the scientific development in physics and chemistry. Moreover, by our approach, we argue that software development is, in some sense, formally unpredictable. These conclusions suggest that software engineering is a scientific field not totally characterized by the typical work of engineering, but also by the methodology of the experimental sciences.
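For reference (standard definition, not quoted from the paper): the Kolmogorov complexity of an object $x$ with respect to a universal machine $U$ is $K_U(x) = \min\{\,|p| : U(p) = x\,\}$, the length in bits of the shortest program that outputs $x$; it is this quantity that the paper uses as a measure of the information content of software artifacts and of the limits of their formal, predictable description.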

Book ChapterDOI
19 Feb 2001
TL;DR: For proofs of stability, the role of Liapunov functions as abstractions is emphasized by identifying conditions under which they define Galois connections, and the utility of a refinement notion based on trace inclusion is discussed.
Abstract: In order to promote a deeper understanding of hybrid, i.e. mixed discrete and continuous, systems, we introduce a set of important properties of such systems and classify them. For the properties of stability and attraction which are central for continuous systems we discuss their relationship to discrete systems usually studied in computer science. An essential result is that the meaning of these properties for discrete systems vitally depends on the used topologies. Based on the classification we discuss the utility of a refinement notion based on trace inclusion. Furthermore, for proofs of stability the role of Liapunov functions as abstractions is emphasized by identifying conditions under which they define Galois connections.
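For reference (standard definition, our wording rather than the paper's): a Galois connection between posets $(C,\le)$ and $(A,\sqsubseteq)$ is a pair of monotone maps $\alpha : C \to A$ and $\gamma : A \to C$ with $\alpha(c) \sqsubseteq a \iff c \le \gamma(a)$. Reading a Liapunov function $V$ as the abstraction map $\alpha$ from concrete states to an ordered domain of "energy" values is what allows stability proofs to be seen as abstractions, provided the conditions identified in the paper hold.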

Book ChapterDOI
19 Feb 2001
TL;DR: This paper uses the general purpose theorem prover PVS for the rigorous formalization and analysis of hybrid systems and implements a deductive assertional proof method within PVS to allow for machine-assisted proofs.
Abstract: Hybrid systems are a well-established mathematical model for embedded systems. Such systems, which combine discrete and continuous behavior, are increasingly used in safety-critical applications. To guarantee safe functioning, formal verification techniques are crucial. While research in this area concentrates on model checking, deductive techniques attracted less attention. In this paper we use the general purpose theorem prover PVS for the rigorous formalization and analysis of hybrid systems. To allow for machine-assisted proofs, we implement a deductive assertional proof method within PVS. The sound and complete proof system allows modular proofs in that it comprises a proof rule for the parallel composition. Besides hybrid systems and the proof system, a number of examples are formalized within PVS.

Book ChapterDOI
19 Feb 2001
TL;DR: This prover is designed for proving statements involving notions from set theory using natural deduction inference rules for set theory, and it applies the PCS paradigm for generating natural proofs, which has already been used in other provers in the Theorema system.
Abstract: In this paper, we present the Theorema Set Theory Prover. This prover is designed for proving statements involving notions from set theory using natural deduction inference rules for set theory. Moreover, it applies the PCS paradigm (Proving-Computing-Solving) for generating natural proofs that has already been used in other provers in the Theorema system, notably the prover for automated proofs in elementary analysis. We show some applications of this prover in a case study on equivalence relations and partitions, which also nicely shows the interplay between proving, computing, and solving during an exploration of some mathematical theory.
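As a concrete example of the kind of statement such a case study involves (a standard fact, not necessarily the paper's formulation): for an equivalence relation $\sim$ on a set $A$, the classes $[a]_\sim = \{x \in A : x \sim a\}$ form a partition $A/\!\sim\, = \{[a]_\sim : a \in A\}$ of $A$, and conversely every partition of $A$ induces an equivalence relation; proving the two directions exercises exactly the interplay of set-theoretic natural deduction, computing with concrete sets, and solving for witnesses described above.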

Book ChapterDOI
19 Feb 2001
TL;DR: This work proposes a specific non-analytical multiobjective function to perform allocations of qualified human resources within a dynamic environment, using a specific three-tier optimizer architecture with different methodologies: fuzzy sets, numeric non-gradient methods and evolutionary computation methods.
Abstract: Planning (creating optimized plans) is the key feature of every management system. The behavior of human resources, especially the description of cooperation in groups, is not easy to model. Generally, this kind of planning is a classic NP problem and, together with the need for variable project structures and project flows, it leads to a very complex computational task. To perform allocations of qualified human resources within a dynamic environment we propose a specific non-analytical multiobjective function. Our first target is to get optimal resource groups at any time, in such a way that the global project time is minimal and the constraints on costs are satisfied. To meet these requirements we use a specific three-tier optimizer architecture with different methodologies: fuzzy sets, numeric non-gradient methods and evolutionary computation methods.

Book ChapterDOI
Egon Börger
19 Feb 2001
TL;DR: In this article, Gurevich's Abstract State Machines (ASMs) are used for incremental and modular system design by unfolding, via appropriate ASM components, the architecture of the Java Virtual Machine (JVM).
Abstract: Gurevich's [26] Abstract State Machines (ASMs), characterized by the parallel execution of abstract atomic actions in a global state, have been equipped in [13] with a refinement by standard composition concepts for structuring large machines that allows reusing machine components. Among these concepts are parameterized (possibly recursive) sub-ASMs. Here we illustrate their power for incremental and modular system design by unfolding, via appropriate ASM components, the architecture of the Java Virtual Machine (JVM), resulting from the language layering in combination with the functional decomposition of the JVM into loader, verifier, and interpreter. We survey the ASM models for Java and the JVM that appear in [34], together with the mathematical and experimental analysis they support.

Book ChapterDOI
19 Feb 2001
TL;DR: A CAST approach is presented in order to develop a NO diffusion model, away from a biologically plausible morphology, that provides a formal framework for establishing the neural signalling capacity of NO in biological and artificial neural environments.
Abstract: At present, a new type of process for signalling between cells seems to be emerging: diffusion or volume transmission. Volume transmission is performed by means of a gas diffusion process, which is obtained with a diffusive type of signal (NO). We present in this paper a CAST approach in order to develop a NO diffusion model, away from a biologically plausible morphology, that provides a formal framework for establishing the neural signalling capacity of NO in biological and artificial neural environments. A study is also presented which shows the implications of volume transmission in the emergence of complex structures and self-organisation processes in both biological and artificial neural networks. Finally, we present the diffusion version of the Associative Network (AN) [6], the Diffusion Associative Network (DAN), where a more general framework of neural learning, based on synaptic and volume transmission, is considered.
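For orientation, volume transmission by a gas such as NO is typically modelled by a reaction-diffusion equation of the form $\partial C/\partial t = D\,\nabla^2 C - \lambda C + S(\mathbf{x},t)$, where $C$ is the NO concentration, $D$ the diffusion coefficient, $\lambda$ a decay rate and $S$ the synthesis (source) term; this generic form is our illustration, and the paper's actual diffusion model may differ in its details.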

Book ChapterDOI
19 Feb 2001
TL;DR: A general design model is proposed, which is represented using an extension of CommonKADS, as a framework for the development of design tools for different industrial objects, using alternative knowledge blocks selected dynamically using an ad-hoc mechanism.
Abstract: Current advancements of the design tools for industrial elements include the addition of knowledge based elements to the classic design tools. In this sense, we propose a general design model, which is represented using an extension of CommonKADS, as a framework for the development of design tools for different industrial objects. This model was used to develop the latest version of DAMOCIA-Design, a knowledge based design tool for greenhouse structures. One key point of our design framework is the use of alternative knowledge blocks, which are selected dynamically using an ad-hoc mechanism and assembled using sets of selection criteria associated with the methods.

Book ChapterDOI
19 Feb 2001
TL;DR: A working vision system for estimating the size, location and motion of an object by using a set of randomly distributed receptive fields on a retina, with refinement of the movement analysis implemented by lateral interaction and memory schemes in which direction and speed are used to build a trajectory.
Abstract: In this paper we present a working vision system for estimating the size, location and motion of an object by using a set of randomly distributed receptive fields on a retina. The approach used here differs from more conventional ones in which the receptive fields are arranged in a geometric pattern. From the input level, computations are performed in parallel in two different channels, one for purely spatial properties and the other for time-space analysis, and are then used at a subsequent level to yield estimates of the size and center of gravity of an object and of the speed and direction of motion. Refinement of the movement analysis is implemented by lateral interaction (spatial) and memory (temporal) schemes in which direction and speed are used to build a trajectory. The different parameters involved (receptive field size, memory weighting function, number of cells) are tested for different speeds and the results compared, yielding new insights into the functioning of the living retina and suggesting ideas for improving the artificial system.
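As an illustration of the spatial channel's estimate (our notation, not the paper's): with receptive field centres $\mathbf{x}_i$ and activations $w_i$, the object's centre of gravity can be approximated by the weighted mean $\bar{\mathbf{x}} = \sum_i w_i \mathbf{x}_i \,/\, \sum_i w_i$, and its size by the number (or total area) of activated fields; a speed and direction estimate can then be obtained by differencing successive values of $\bar{\mathbf{x}}$ over time.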

Book ChapterDOI
19 Feb 2001
TL;DR: This paper proposes a specialized ontology based on the Object-Oriented Approach and obtains some equivalent heterogeneous graphs which can be used to represent change propagation in the IS, DSS and Semantic Views.
Abstract: In order to use information stored in different repositories, it is necessary to improve the automation of the decision-making process. As a result, it would be necessary to share data. One of the main motivations of our emphasis on building Semantic Views is the possibility of sharing and reusing knowledge between the Information System, the Decision Support System and other external repositories. In this paper, we will try to show that Ontologies can be used in practice as a Semantic View. We also discuss some of the limitations of the Ontology described using Description Logics in a complex domain, and the limitations which we have discovered when the intrinsic evolutionary aspects of Information Systems and Decision Support Systems must be modeled. Another important problem is how a change in such systems may produce a propagation of changes which affect the levels of the system (data) and the metasystem (data structure). For this purpose, we propose a specialized ontology based on the Object-Oriented Approach. Subsequently, we have obtained some equivalent heterogeneous graphs, and these can be used to represent change propagation in the IS, DSS and Semantic Views. Most of the problems mentioned will be exemplified by means of a concrete case: a DSS for risk operations in financial institutions, the class structure of which is outlined.

Proceedings Article
01 Feb 2001
TL;DR: In this article, Boyer and Moore's integration schema is extended so that the decision procedure can use suitably selected instances of the induction hypotheses within a cover set induction prover.
Abstract: As many proof obligations in system verification require complex interleaving of pure logic and domain-specific steps, a crucial activity in the construction of a successful verification tool is the effective incorporation of reasoning specialists (such as decision procedures or unification algorithms) within more general inference modules (such as rewrite engines or inference procedures for reasoning by induction). Mathematical induction is a fundamental reasoning technique for system verification. In the last decades it has been thoroughly investigated, and several powerful techniques and heuristics for reasoning about inductively defined objects have been put forward \cite{bm79,KNZ91:jsc,JAR::BouhoulaR1995}. These techniques allow the automatic discharge of many non-trivial proof obligations. Yet, when applied to verification problems of practical significance, the level of automation provided by such techniques is still unsatisfactory. The seminal work by Boyer and Moore \cite{bm88} showed that a higher level of automation can be achieved by induction provers thanks to the incorporation of decision procedures. However, in Boyer and Moore's approach the interplay with the decision procedure is limited to the simplification engine, and therefore it does not exploit the potential of the cooperation between the decision procedure and the induction heuristics, which is best illustrated by means of the following example. Let us consider the following proof obligation occurring in the verification proof of the MJRTY algorithm \cite{bm91} given in \cite{str98b,str00b}: \begin{align} \label{eq:6} a \neq mc(p,i) \Rightarrow 2*(ml(p,i)+count(p,i,a)) < s(i+ml(p,i)) \end{align} Application of the cover set leads to the following sub-goal, which is obtained by first applying the substitution $\sigma=\{i\mapsto s(i'), a\mapsto \textit{Noname}\}$ to (\ref{eq:6}) and then normalizing the result with the available (conditional) rewrite rules: \begin{equation} \label{eq:8} \begin{split} \textit{Noname} \neq mc(p,i'),\ access(p,i') \neq mc(p,i'),\ 0 < ml(p,i'),\ \textit{Noname} = access(p,i') \Rightarrow \\ 2*((ml(p,i')-1)+s(count(p,i',\textit{Noname}))) < s(s(i')+(ml(p,i')-1)) \end{split} \end{equation} (\ref{eq:8}) is not an instance of (\ref{eq:6}) (which now plays the role of the induction hypothesis), nor can it be directly solved by arithmetic reasoning alone. However, (\ref{eq:8}) follows by simple arithmetic reasoning from the instance of (\ref{eq:6}) obtained by applying the substitution $\{a\mapsto\textit{Noname}, i \mapsto i'\}$. This example suggests a general approach to the integration of a decision procedure with the activity of selecting the induction hypothesis: \emph{(i)} normalize the induction conclusion with the available rewrite rules and \emph{(ii)} select an appropriate instance of the induction hypothesis which is likely to entail the normalized conclusion. In this paper we present an extension to Boyer and Moore's integration schema that enables the decision procedure to use suitably selected instances of the induction hypotheses.\footnote{An alternative approach to the incorporation of reasoning specialists in induction provers is presented in \cite{JAR::KapurS1996}. There the reasoning specialist is used to incorporate semantic information into the process of selection of the induction schemas. The proposed solution enables the use of induction hypotheses which otherwise could not be applied, thereby leading to increased automation. The approach is complementary to that described in this paper.} The induction proof method we consider is based on cover set induction \cite{red90} and is implemented in the SPIKE prover \cite{spike}.\footnote{Cover set induction combines the advantages of explicit induction with those of proof by consistency.} The interplay with the decision procedure is obtained by extending and incorporating into the induction proof method Constraint Contextual Rewriting. \emph{Constraint Contextual Rewriting} \cite{ftp98}, CCR(X) for short, is an abstract integration schema between rewriting and decision procedures inspired by Boyer and Moore's ideas. Similarly to Boyer and Moore's approach, CCR(X) empowers conditional rewriting with the services provided by the available decision procedure, X, and the decision procedure with additional facts derived from the available lemmas.\footnote{This capability, called \emph{augmentation} by Boyer and Moore, is crucial to the effectiveness of the integration schema. See \cite{bm88} for the details.} Unlike Boyer and Moore's simplifier, CCR(X) is independent from the theory decided by the decision procedure, is formally and concisely presented by means of a calculus \cite{ftp98}, and is guaranteed to terminate \cite{frocos2000}. This contrasts with the 40-page-long description available in \cite{bm88}, in which high-level design decisions are intermixed with optimization details. The extension of CCR(X) considered in this paper, \emph{Inductive Constraint Contextual Rewriting} or ICCR(X) for short, differs from CCR(X) in that the decision procedure is allowed to use suitably selected instances of the induction hypotheses. The main contribution of our work is twofold: \begin{itemize} \item SPIKE benefits from the flexibility and generality of CCR(X) as an off-the-shelf integration schema of decision procedures in formula simplification. To emphasize this, we call the resulting calculus SPIKE(X), where X is an abstract decision procedure. X can be instantiated in different ways. We have considered an instantiation for X, called LA+EQ, resulting from the combination of quantifier-free Linear Arithmetics (LA) and the quantifier-free theory of equality (EQ), but the scheme can be applied to many other theories as well. \item ICCR(X) is a simple but nevertheless significant extension to CCR(X) for induction through the use of induction hypotheses. \end{itemize} Computer experiments carried out with SPIKE(LA+EQ) on non-trivial verification problems have shown the effectiveness of our approach. The proof of MJRTY does not need user-defined tactics, as is the case with PVS and Nuprl. Also, in the proof of the ABR conformance algorithm \cite{rsk00} \cite{bbr96}, many of the about 80 user-defined lemmas require specific tactics with PVS, but more than half of them are relieved automatically with SPIKE(LA+EQ). A formal presentation and proofs of soundness of ICCR(X), SPIKE(X), and of the compound decision procedure LA+EQ are not given here for lack of space. They will be given in the final version of the paper.

Book ChapterDOI
19 Feb 2001
TL;DR: The tertiary level of a VoD server that is being implemented on top of a cheap Linux cluster, based on a hierarchical distributed architecture, using the functional programming language Erlang, is proposed in this paper.
Abstract: A Video-On-Demand (VoD) server provides video services to end users, who can request a piece of video at any time, without any previously established timetable. The growing demand for such services suggests the design of flexible and scalable VoD servers, both in storage capacity and in bandwidth. The tertiary level of a VoD server that is being implemented on top of a cheap Linux cluster, based on a hierarchical distributed architecture, using the functional programming language Erlang, is proposed in this paper.

Book ChapterDOI
19 Feb 2001
TL;DR: This work presents the classical interpretation of the fusion law for catamorphisms in a categorical context, analyses examples of fusion for programs defined with recursive types in Coq, and shows the corresponding optimisation theorems.
Abstract: The fusion theorem is a classical result that allows the simplification of morphisms among homogeneous structures [10]. We present this theorem and some generalizations in the context of the constructive proof assistant Coq [2], where we have dependent types and parametric polymorphism. The work is organised as follows: after the classical interpretation of the fusion law for catamorphisms in a categorical context, examples of fusion for programs defined with recursive types in Coq are analysed and the corresponding optimisation theorems are shown. Finally, a generalisation of the fusion law for inductive types is presented and applied to a specific case.
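For reference, the classical fusion law the abstract refers to can be stated as follows (standard formulation, not quoted from the paper): for a functor $F$ with algebras $\varphi : F\,A \to A$ and $\psi : F\,B \to B$, if $h \circ \varphi = \psi \circ F\,h$ then $h \circ \mathit{cata}\ \varphi = \mathit{cata}\ \psi$, i.e. post-composing a fold with $h$ can be fused into a single fold; this is the kind of optimisation the Coq development proves for programs defined with recursive types.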