
Showing papers presented at "Web Reasoning and Rule Systems in 2012"


Book ChapterDOI
10 Sep 2012
TL;DR: This work addresses Ontology-Based Data Access, which exploits the semantics expressed in ontologies while querying data, via the backward-chaining paradigm, in which the query is rewritten into a set of CQs.
Abstract: We address the issue of Ontology-Based Data Access which consists of exploiting the semantics expressed in ontologies while querying data. Ontologies are represented in the framework of existential rules, also known as Datalog+/-. We focus on the backward chaining paradigm, which involves rewriting the query (assumed to be a conjunctive query, CQ) into a set of CQs (seen as a union of CQs). The proposed algorithm accepts any set of existential rules as input and stops for so-called finite unification sets of rules (fus). The rewriting step relies on a graph notion, called a piece, which makes it possible to identify subsets of atoms from the query that must be processed together. We first show that our rewriting method computes a minimal set of CQs when this set is finite, i.e., the set of rules is a fus. We then focus on optimizing the rewriting step. First experiments are reported.
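The flavour of backward-chaining rewriting can be illustrated with a drastically simplified sketch: rules are restricted to atomic inclusions (body atom implies head atom), and a query atom unifying with a rule head is replaced by the rule body, saturating to a fixpoint. This is not the paper's piece-based algorithm over existential rules; the predicates and rules below are invented for illustration.

```python
# Toy backward-chaining rewriter for atomic inclusion rules
# (a heavy simplification of piece-based rewriting; all names
# are illustrative).

def rewrite(query, rules):
    """query: set of (pred, args) atoms; rules: list of
    (body_pred, head_pred) atomic inclusions body -> head.
    Returns the union of CQs reachable by rewriting."""
    ucq = {frozenset(query)}
    frontier = list(ucq)
    while frontier:
        cq = frontier.pop()
        for atom in cq:
            pred, args = atom
            for body_pred, head_pred in rules:
                if head_pred == pred:
                    # replace the atom by the rule body
                    new_cq = frozenset(cq - {atom} | {(body_pred, args)})
                    if new_cq not in ucq:
                        ucq.add(new_cq)
                        frontier.append(new_cq)
    return ucq

# Professor(x) -> Teacher(x),  Lecturer(x) -> Teacher(x)
rules = [("Professor", "Teacher"), ("Lecturer", "Teacher")]
result = rewrite({("Teacher", ("x",))}, rules)
# the UCQ contains the original CQ plus one CQ per specialisation
```

With existential rules the unification step is far subtler (hence the piece notion), but the fixpoint loop over a growing union of CQs is the same shape.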

49 citations


Book ChapterDOI
10 Sep 2012
TL;DR: This paper proposes a general mechanism of defining temporal query languages for time-stamped data in DLs, based on combinations of linear temporal logics with first-order queries, and advocates a controlled use of epistemic semantics in order to warrant practical query answering.
Abstract: Establishing a generic approach to representing and querying temporal data in the context of Description Logics (DLs) is an important, and still open challenge. The difficulty lies in that a proposed approach should reconcile a number of valuable contributions coming from diverse, yet relevant research lines, such as temporal databases and query answering in DLs, but also temporal DLs and Semantic Web practices involving rich temporal vocabularies. Within such a variety of influences, it is critical to carefully balance theoretical foundations with good prospects for reusing existing techniques, tools and methodologies. In this paper, we attempt to make first steps towards this goal. After providing a comprehensive overview of the background research and identifying the core requirements, we propose a general mechanism of defining temporal query languages for time-stamped data in DLs, based on combinations of linear temporal logics with first-order queries. Further, we advocate a controlled use of epistemic semantics in order to warrant practical query answering. We systematically motivate our proposal and highlight its basic theoretical and practical implications. Finally, we outline open problems and key directions for future research.

41 citations


Book ChapterDOI
10 Sep 2012
TL;DR: This paper shows how the technology recently developed for Ontology-Based Data Access, building on the first-order rewritability of queries over the system state that is typical of OBDA, can reformulate temporal properties at the conceptual level into temporal properties expressed over the underlying database.
Abstract: In this paper we show how one can use the technology developed recently for Ontology-Based Data Access (OBDA) to govern data-aware processes through ontologies. In particular, we consider processes executed over a relational database which issue calls to external services to acquire new information and update the data. We equip these processes with an OBDA system, in which an ontology modeling the domain of interest is connected through declarative mappings to the database, and that consequently allows one to understand and govern the manipulated information at the conceptual level. In this setting, we are interested in verifying first-order μ-calculus formulae specifying temporal properties over the evolution of the information at the conceptual level. Specifically, we show how, building on first-order rewritability of queries over the system state that is typical of OBDA, we are able to reformulate the temporal properties into temporal properties expressed over the underlying database. This allows us to adopt notable decidability results on verification of evolving databases that have been established recently.

36 citations


Book ChapterDOI
10 Sep 2012
TL;DR: It is shown how inferable knowledge--specifically that found through owl:sameAs and RDFS reasoning--can improve recall in this setting, evaluated over live queries covering different shapes and domains.
Abstract: Linked Data principles allow for processing SPARQL queries on-the-fly by dereferencing URIs. Link-traversal query approaches for Linked Data have the benefit of up-to-date results and decentralised execution, but operate only on explicit data from dereferenced documents, affecting recall. In this paper, we show how inferable knowledge--specifically that found through owl:sameAs and RDFS reasoning--can improve recall in this setting. We first analyse a corpus featuring 7 million Linked Data sources and 2.1 billion quadruples: we (1) measure expected recall by only considering dereferenceable information, (2) measure the improvement in recall given by considering rdfs:seeAlso links as previous proposals did. We further propose and measure the impact of additionally considering (3) owl:sameAs links, and (4) applying lightweight RDFS reasoning for finding more results, relying on static schema information. We evaluate different configurations for live queries covering different shapes and domains, generated from random walks over our corpus.
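One ingredient of the recall improvement is owl:sameAs closure: terms that are declared equal should match interchangeably in triple patterns. A common way to implement this is a union-find over the sameAs statements, canonicalising terms before matching. The sketch below uses made-up URIs and is not the authors' implementation.

```python
# Sketch: owl:sameAs closure via union-find, used to canonicalise
# terms before matching a triple pattern (URIs are illustrative).

class SameAs:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

triples = [
    ("ex:TimBL", "owl:sameAs", "dbp:Tim_Berners-Lee"),
    ("dbp:Tim_Berners-Lee", "foaf:made", "ex:WWW"),
]
sa = SameAs()
for s, p, o in triples:
    if p == "owl:sameAs":
        sa.union(s, o)

def canon(t):
    return sa.find(t)

# The pattern (?x, foaf:made, ex:WWW) now also matches facts stated
# about any alias of the queried resources:
matches = [s for s, p, o in triples
           if p == "foaf:made" and canon(o) == canon("ex:WWW")]
```

In a link-traversal setting the union-find would be fed incrementally as documents are dereferenced, so later-discovered aliases retroactively widen earlier patterns.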

23 citations


Book ChapterDOI
10 Sep 2012
TL;DR: This paper analyses the consistency and satisfiability problems in the description logic ${\mathcal{SHI}}$ with semantics based on a complete residuated De Morgan lattice, and provides upper complexity bounds that match the complexity of crisp reasoning.
Abstract: Fuzzy description logics can be used to model vague knowledge in application domains. This paper analyses the consistency and satisfiability problems in the description logic ${\mathcal{SHI}}$ with semantics based on a complete residuated De Morgan lattice. The problems are undecidable in the general case, but can be decided by a tableau algorithm when restricted to finite lattices. For some sublogics of ${\mathcal{SHI}}$, we provide upper complexity bounds that match the complexity of crisp reasoning.
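The finite lattices over which reasoning becomes decidable can be as simple as a finite chain of truth degrees with a t-norm and its residuum. As a concrete (and assumed, not taken from the paper) example, the Goedel t-norm on a three-element chain satisfies the residuation property that underpins such semantics:

```python
# Residuated operations on a finite chain {0, 0.5, 1} with the
# Goedel t-norm (minimum) and its residuum; an illustrative example
# of the finite lattices over which the tableau algorithm works.

def tnorm(a, b):        # Goedel t-norm: minimum
    return min(a, b)

def residuum(a, b):     # Goedel residuum: 1 if a <= b, else b
    return 1.0 if a <= b else b

chain = [0.0, 0.5, 1.0]
# residuation property: tnorm(a, c) <= b  iff  c <= residuum(a, b)
ok = all((tnorm(a, c) <= b) == (c <= residuum(a, b))
         for a in chain for b in chain for c in chain)
```

The exhaustive check over the chain confirms residuation, which is exactly what makes the pair (t-norm, residuum) a residuated lattice in the sense used by the semantics.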

22 citations


Book ChapterDOI
10 Sep 2012
TL;DR: Nominal schema is an expressive description logic (DL) construct that was proposed in recent efforts to integrate DLs and (logic programming) rule-based paradigms for the Semantic Web represented by two "diverging" W3C standards.
Abstract: Nominal schema is an expressive description logic (DL) construct that was proposed in recent efforts to integrate DLs and (logic programming) rule-based paradigms for the Semantic Web [1] represented by two "diverging" W3C standards: the DL-based Web Ontology Language (OWL) [2] whose major variant, OWL 2 DL, is based on the description logic (DL) ${\mathcal{SROIQ}}$ [3]; and the rule-based Rule Interchange Format (RIF) whose core variant, called RIF Core [4], is essentially Datalog, i.e., function-free Horn logic.

17 citations


Journal ArticleDOI
01 Jan 2012
TL;DR: From the 1980s onward, the Central American region underwent a process of profound socio-economic change that affected many dimensions of life.
Abstract: From the 1980s onward, the Central American region underwent a process of profound socio-economic change, with effects on many dimensions of life. Alongside these changes, Costa Rican economic power groups diversified and transnationalized, significantly modifying their operational logic, in which traditional mass media play an important role. This raises questions about the power of the info-communication media, together with a significant academic gap on the subject. This article problematizes power groups in terms of their interests in the media, and attempts to open new lines of research on the links between communication and the "new" economic model in the context of neoliberal globalization.

12 citations


Book ChapterDOI
10 Sep 2012
TL;DR: A new argumentation method is described for analysing opinion exchanges between on-line users, helping them to extract informative, structured and meaningful information.
Abstract: We describe a new argumentation method for analysing opinion exchanges between on-line users, helping them to extract informative, structured and meaningful information. Our method combines different factors, such as social support drawn from votes and attacking/supporting relations between opinions interpreted as abstract arguments. We show a prototype web application which puts into use this method to offer an intelligent business directory allowing users to engage in debate and aid them to extract the dominant, emerging public opinion.
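One simple way to combine vote-based social support with attack/support relations, sketched here as an assumption rather than the paper's actual semantics, is to give each opinion a base score from its votes and propagate strengths bottom-up over an acyclic debate graph:

```python
# Toy score propagation over a bipolar (attack/support) debate graph
# with vote-based base scores; a simplified stand-in for the paper's
# method, assuming the graph is acyclic.

def strength(node, base, attackers, supporters, memo=None):
    memo = {} if memo is None else memo
    if node in memo:
        return memo[node]
    s = base[node]
    s += sum(strength(a, base, attackers, supporters, memo)
             for a in supporters.get(node, []))
    s -= sum(strength(a, base, attackers, supporters, memo)
             for a in attackers.get(node, []))
    memo[node] = s
    return s

base = {"A": 0.6, "B": 0.4, "C": 0.5}   # e.g. normalised vote counts
attackers = {"A": ["B"]}                 # opinion B attacks A
supporters = {"A": ["C"]}                # opinion C supports A
dominant = max(base, key=lambda n: strength(n, base, attackers, supporters))
```

Here A's strength is 0.6 + 0.5 - 0.4 = 0.7, so A is reported as the dominant opinion; richer semantics would bound scores and handle cycles.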

10 citations


Book ChapterDOI
10 Sep 2012
TL;DR: This paper takes OWL 2 DL as a starting point, pursues the question of how features of rule-based formalisms can be added without jeopardizing decidability, and reports on incorporating the closed-world assumption and on reasoning algorithms.
Abstract: As part of the quest for a unifying logic for the Semantic Web Technology Stack, a central issue is finding suitable ways of integrating description logics based on the Web Ontology Language (OWL) with rule-based approaches based on logic programming. Such integration is difficult since naive approaches typically result in the violation of one or more desirable design principles. For example, while both OWL 2 DL and RIF Core (a dialect of the Rule Interchange Format RIF) are decidable, their naive union is not, unless carefully chosen syntactic restrictions are applied. We report on recent advances and ongoing work by the authors in integrating OWL and rules. We take an OWL-centric perspective, which means that we take OWL 2 DL as a starting point and pursue the question of how features of rule-based formalisms can be added without jeopardizing decidability. We also report on incorporating the closed world assumption and on reasoning algorithms. This paper essentially serves as an entry point to the original papers, to which we will refer throughout, where detailed expositions of the results can be found.

8 citations


Book ChapterDOI
10 Sep 2012
TL;DR: This paper considers ontologies based on members of the DL-Lite family, and shows that answering CQs with inequalities is decidable for ontologies expressed in DL-$Lite^{\mathcal{H}}_{core}$.
Abstract: One of the most prominent applications of description logic ontologies is their use for accessing data. In this setting, ontologies provide an abstract conceptual layer of the data schema, and queries over the ontology are then used to access the data. In this paper we focus on extensions of conjunctive queries (CQs) and unions of conjunctive queries (UCQs) with restricted forms of negations such as inequality and safe negation. In particular, we consider ontologies based on members of the DL-Lite family. We show that by extending UCQs with any form of negated atoms, the problem of query answering becomes undecidable even when considering ontologies expressed in the core fragment of DL-Lite. On the other hand, we show that answering CQs with inequalities is decidable for ontologies expressed in DL-$Lite^{\mathcal{H}}_{core}$. To this end, we provide an algorithm matching the known coNP lower bound on data complexity. Furthermore, we identify a setting in which conjunctive query answering with inequalities is tractable. We regain tractability by means of syntactic restrictions on the queries, but keeping the expressiveness of the ontology.

7 citations


Book ChapterDOI
10 Sep 2012
TL;DR: The interoperability of heterogeneous objects participating in a smart space is enhanced by publishing their behavioral rules as RDF triples, i.e., in the same way as any other information in the space, which enables the use of answer-set programming (ASP) as the underlying paradigm for rule-based reasoning.
Abstract: A smart space is an ecosystem of interacting computational objects embedded in some environment. The space seamlessly provides users with information and services using the best available resources. In this paper, the interoperability of heterogeneous objects participating in a smart space is enhanced by publishing their behavioral rules as RDF triples, i.e., in the same way as any other information in the space. This enables the use of answer-set programming (ASP) as the underlying paradigm for rule-based reasoning. The main idea of this paper is to apply meta programming techniques to reified ASP rules published in the smart space. Such techniques enable syntactic and semantic transformations of rules without essentially changing the underlying computational platform so that standard ASP tools can be used to implement inference over rules. These ideas are illustrated in several ways. In addition to basic meta evaluation tasks, we describe a meta grounder for ASP rules involving variables. Moreover, we demonstrate how the qualitative aspects of reasoning can be taken into account in our approach and how meta programming techniques are made available to users.
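The core idea of publishing rules as data can be sketched in a few lines: a rule is reified into subject-predicate-object triples and later reconstructed by a meta-level reader. The vocabulary (`sw:head`, `sw:posBody`, `sw:negBody`) below is invented for illustration, not taken from the paper.

```python
# Sketch of rule reification: an ASP rule is published as RDF-style
# triples and reconstructed by a meta-interpreter-style reader
# (the sw: vocabulary is made up for illustration).

def reify(rule_id, head, pos_body, neg_body):
    triples = [(rule_id, "rdf:type", "sw:Rule"),
               (rule_id, "sw:head", head)]
    triples += [(rule_id, "sw:posBody", b) for b in pos_body]
    triples += [(rule_id, "sw:negBody", b) for b in neg_body]
    return triples

def unreify(rule_id, triples):
    head = next(o for s, p, o in triples
                if s == rule_id and p == "sw:head")
    pos = [o for s, p, o in triples if s == rule_id and p == "sw:posBody"]
    neg = [o for s, p, o in triples if s == rule_id and p == "sw:negBody"]
    body = pos + ["not " + b for b in neg]
    return f"{head} :- {', '.join(body)}."

t = reify("r1", "light_on", ["switch_up"], ["broken"])
rule = unreify("r1", t)
```

Once rules live in the space as triples, syntactic and semantic transformations (the paper's meta grounding, qualitative preferences, and so on) become ordinary operations over data, and a standard ASP solver can be run on the reconstructed program.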

Book ChapterDOI
10 Sep 2012
TL;DR: Advances in remote sensing technologies have enabled public and commercial organizations to send an ever-increasing number of satellites into orbit around Earth; as the volume of data in satellite archives has grown, so have the scientific and commercial applications of EO data.
Abstract: Advances in remote sensing technologies have enabled public and commercial organizations to send an ever-increasing number of satellites into orbit around Earth. As a result, Earth Observation (EO) data has been constantly increasing in volume in the last few years, and is currently reaching petabytes in many satellite archives. For example, the multi-mission data archive of the TELEIOS partner German Aerospace Center (DLR) is expected to reach 2PB next year, while ESA estimates that it will be archiving 20PB of data before the year 2020. As the volume of data in satellite archives has been increasing, so have the scientific and commercial applications of EO data. Nevertheless, it is estimated that up to 95% of the data present in existing archives has never been accessed, so the potential for increased exploitation is substantial.

Book ChapterDOI
10 Sep 2012
TL;DR: This paper presents a prototypical reasoner for mobile devices, which leverages Semantic Web technologies to implement both standard and non-standard inferences for moderately expressive knowledge bases.
Abstract: Reasoning in pervasive computing has to face computational issues inherited by mobile platforms. This paper presents a prototypical reasoner for mobile devices, which leverages Semantic Web technologies to implement both standard (subsumption, satisfiability, classification) and non-standard (abduction, contraction) inferences for moderately expressive knowledge bases. System features are surveyed, followed by early performance analysis.

Journal ArticleDOI
30 Apr 2012
TL;DR: Monthly blood samples (from 30 days before calving to 90 days afterwards) were taken from 49 females of the Rubia Gallega breed in order to establish reference values for biochemical parameters.
Abstract: With the aim of establishing, in the future, reference values for various biochemical parameters in cows of the Rubia Gallega breed, monthly blood samples (from 30 days before calving to 90 days afterwards) were taken from 49 females of this breed. Serum concentrations of total cholesterol, triglycerides, non-esterified fatty acids, glucose, aspartate aminotransferase, alanine aminotransferase, total proteins, albumin, urea, calcium, phosphorus and magnesium were determined. The influence of season and parity on the mean levels of the parameters analysed was also assessed. All metabolites fell within the ranges reported in the literature for cattle. Moreover, the time of sampling relative to calving had a significant effect on all parameters except calcium. Serum levels of non-esterified fatty acids, glucose, aspartate aminotransferase, alanine aminotransferase and urea were affected neither by season nor by parity. Season significantly affected total cholesterol, triglycerides, albumin and phosphorus, with lower concentrations in spring-summer than in autumn-winter, except for phosphorus, where the opposite held. Finally, parity significantly influenced serum levels of total proteins, albumin, calcium, phosphorus and magnesium, which were in all cases, except total proteins, higher in heifers than in multiparous cows.

Book ChapterDOI
10 Sep 2012
TL;DR: Because a fixed source database may admit more than one target database satisfying a given mapping, query answering over the target data is inherently complex for general (non-positive) relational or aggregate queries.
Abstract: Data exchange is the problem of transforming data structured according to a source schema into data structured according to a target schema, via a mapping specified by means of rules in the form of source-to-target tuple-generating dependencies, that is, rules whose body is a conjunction of atoms over the source schema and whose head is a conjunction of atoms over the target schema, with possibly existential variables in the head. With this formalization, given a fixed source database, there might be more than one target database satisfying a given mapping. That is, the target database is actually an incomplete database represented by a set of possible databases. Therefore, the problem of query answering over the target data is inherently complex for general (non-positive) relational or aggregate queries.
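The existential variables in tgd heads are where the incompleteness comes from: materialising the target (via the chase) invents labelled nulls whose values are unknown. A one-rule sketch, with a schema invented for illustration:

```python
# A one-step naive chase for a single source-to-target tgd, using
# labelled nulls for existential variables (the Emp/Works/Dept
# schema and the rule are invented for illustration).

import itertools

_null = itertools.count()

def chase_tgd(source_emp):
    """Apply  Emp(e) -> exists d. Works(e, d) and Dept(d)  to every
    source fact, inventing a fresh labelled null for each d."""
    target = set()
    for e in source_emp:
        d = f"_:n{next(_null)}"          # fresh labelled null
        target.add(("Works", e, d))
        target.add(("Dept", d))
    return target

t = chase_tgd(["alice", "bob"])
# each employee works in *some* department, but the departments are
# unknown nulls -- hence the target is an incomplete database
```

Any target database obtained by substituting concrete values for the nulls satisfies the mapping, which is exactly why certain-answer semantics (truth in all such databases) is needed for non-positive queries.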

Book ChapterDOI
10 Sep 2012
TL;DR: This paper takes an important step in this direction by developing inconsistency-tolerant semantics for query answering in a probabilistic extension of Datalog+/- that is tractable modulo the cost of computing probabilities.
Abstract: The Datalog+/- family of ontology languages is especially useful for representing and reasoning over lightweight ontologies, and has many applications in the context of query answering and information extraction for the Semantic Web. It is widely accepted that it is necessary to develop a principled way to handle uncertainty in this domain. In addition to uncertainty as an inherent aspect of the Web, one must also deal with forms of uncertainty due to inconsistency. In this paper, we take an important step in this direction by developing inconsistency-tolerant semantics for query answering in a probabilistic extension of Datalog+/-. The main contributions of this paper are: (i) extension and generalization to probabilistic ontologies of the well-known concepts of repairs and consistent answers to queries from databases; (ii) complexity analysis for the problems of consistency checking, repair identification, and consistent query answering; and (iii) adaptation of the intersection semantics (a sound heuristic for consistent answers) to probabilistic ontologies, yielding a subset of probabilistic Datalog+/- that is tractable modulo the cost of computing probabilities.
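The database notions of repairs and intersection semantics that the paper generalizes can be shown on a tiny non-probabilistic example: repairs are the maximal consistent subsets of the facts, and under the intersection semantics a fact is entailed iff it survives in every repair. The facts and consistency test below are invented for illustration, with probabilities omitted.

```python
# Toy repairs and intersection semantics (non-probabilistic):
# repairs = maximal consistent subsets; a fact is entailed iff it
# lies in the intersection of all repairs.

from itertools import combinations

def repairs(facts, consistent):
    reps = []
    for k in range(len(facts), -1, -1):   # largest subsets first
        for sub in combinations(facts, k):
            s = set(sub)
            if consistent(s) and not any(s < r for r in reps):
                reps.append(s)            # maximal consistent subset
    return reps

facts = ["bird(tweety)", "penguin(tweety)", "flies(tweety)"]

def consistent(s):
    # invented constraint: penguins do not fly
    return not {"penguin(tweety)", "flies(tweety)"} <= s

reps = repairs(facts, consistent)
core = set.intersection(*reps)   # facts surviving every repair
```

Here the two repairs disagree on penguin(tweety) versus flies(tweety), so only bird(tweety) is a consistent answer; the paper lifts exactly this picture to probabilistic Datalog+/- ontologies.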

Book ChapterDOI
10 Sep 2012
TL;DR: The OWL 2 QL profile, which is based on DL-LiteR, has been designed so that query answering is possible using relational database technology via query rewriting, but the size of the rewritten query, Qo, which can be evaluated directly on the relational database, is worst-case exponential w.r.t. the size of Q and O.
Abstract: The OWL 2 QL profile, which is based on DL-LiteR, has been designed so that query answering is possible using relational database technology via query rewriting. Unfortunately, given a query Q posed in terms of an OWL 2 QL ontology O, the size of the rewritten query, Qo, which can be evaluated directly on the relational database, is worst-case exponential w.r.t. the size of Q and O [1]. This means that the computation and evaluation of Qo can be costly. Recent research focuses on creating rewriting algorithms that generate a smaller Qo [3].
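The exponential blow-up has a simple combinatorial core: if each of the n query atoms can independently be rewritten in c ways, the resulting UCQ has c^n disjuncts. The toy model below illustrates only this counting argument, not the actual OWL 2 QL rewriting algorithm.

```python
# Back-of-the-envelope illustration of the exponential blow-up in
# UCQ rewriting: one choice of rewriting per atom, enumerated as a
# cartesian product (a toy model, not a real rewriting algorithm).

from itertools import product

def ucq_size(alternatives_per_atom):
    """alternatives_per_atom maps each query atom to the list of its
    rewritings; the UCQ enumerates one choice per atom."""
    return len(list(product(*alternatives_per_atom)))

# 3 query atoms, each with itself plus 2 applicable inclusion axioms:
n_disjuncts = ucq_size([["A", "A1", "A2"]] * 3)   # 3 ** 3 = 27
```

Rewriting-minimization techniques attack exactly this product, e.g. by pruning subsumed disjuncts or by producing non-UCQ (datalog or joined) rewritings whose size stays polynomial.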

Journal ArticleDOI
31 Dec 2012
TL;DR: The process of struggle for land in the valley of Sixaola is explored using different written and oral sources, reflecting the impact that this conflict had on the recreation of the peasantry as a political subject and on the transformation of social relations that allowed the development of a community.
Abstract: The following paper explores the process of struggle for land in the valley of Sixaola using different written and oral sources. This case seeks to reflect the impact that this conflict had on the recreation of the peasantry as a political subject and on the transformation of social relations that allowed the development of a community. With this, we capture some reflections from a multi-year process of engagement with the local population and open for debate the prospects of a town that has been forgotten by the academic literature, offering a window onto the more complex problem of land struggles in Costa Rica.

Book ChapterDOI
10 Sep 2012
TL;DR: This paper presents a rule-based architecture that enables causal and temporal reasoning of events and supports their relevance assessment given the user's situation, in order to provide contextualized services for citizens inhabiting a smart city.
Abstract: This paper presents a rule-based architecture that enables causal and temporal reasoning of events and supports their relevance assessment given the user's situation, in order to provide contextualized services for citizens inhabiting a smart city. Our approach for context reasoning and assessment is illustrated by emergency scenarios.

Book ChapterDOI
10 Sep 2012
TL;DR: This proposal is concerned with the addition of a time stamp (a date) to the triples normally used in the representation of folksonomies.
Abstract: This proposal is concerned with the addition of a time stamp (a date) to the triples normally used in the representation of folksonomies. We motivate our approach by its usefulness for detecting trends in social networks.
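A minimal illustration of the idea: extending the usual (user, tag, resource) folksonomy triple with a date turns taggings into quadruples, from which a crude trend signal can be read off by grouping per period. The data and grouping below are invented for illustration.

```python
# Time-stamped folksonomy taggings: (user, tag, resource, date)
# quadruples instead of plain triples, enabling trend detection by
# grouping tag usage per period (example data is invented).

from collections import Counter
from datetime import date

taggings = [
    ("alice", "nosql", "post42", date(2012, 3, 1)),
    ("bob",   "nosql", "post99", date(2012, 3, 7)),
    ("carol", "xml",   "post17", date(2011, 6, 2)),
]

# tag frequency per year -- a crude trend signal
trend = Counter((tag, d.year) for _, tag, _, d in taggings)
```

Without the date component, the two "nosql" taggings and the older "xml" tagging would be indistinguishable in time, and no trend could be extracted.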

Book ChapterDOI
10 Sep 2012
TL;DR: This paper studies the problem of computing a rewriting for a CQ over an ontology that has been contracted, and presents a practical algorithm which is implemented and evaluated against other state-of-the-art systems, obtaining encouraging results.
Abstract: Conjunctive query (CQ) answering is a key reasoning service for ontology-based data access. One of the most prominent approaches to conjunctive query answering is query rewriting, for which a wide variety of systems have been proposed in recent years. All of them accept as input a fixed CQ q and ontology ${\mathcal O}$ and produce a rewriting for $q, {\mathcal O}$. However, in many real world applications ontologies are very often dynamic--that is, new axioms can be added or existing ones removed frequently. In this paper we study the problem of computing a rewriting for a CQ over an ontology that has been contracted (i.e., some of its axioms have been removed) given a rewriting for the input CQ and ontology. Our goal is to compute a rewriting directly from the input rewriting and avoid computing one from scratch. We study the problem theoretically and provide sufficient conditions under which this process is possible. Moreover, we present a practical algorithm which we implemented and evaluated against other state-of-the-art systems, obtaining encouraging results. Finally, axiom removal can also be relevant to ontology design. For each test ontology we study how much the removal of an axiom affects the size of the rewriting and the performance of systems. If the removal of a single axiom causes a significant decrease either in the size or in the computation time, then this part of the ontology can be re-modelled.

Book ChapterDOI
10 Sep 2012
TL;DR: A wealth of tools and formalisms are now available, including rather basic ones like databases or the more recent triple-stores, and more expressive ones like ontology languages.
Abstract: Research in knowledge representation and, more generally, information technology has produced a large variety of formats and languages for representing knowledge. A wealth of tools and formalisms is now available, including rather basic ones like databases or the more recent triple-stores, and more expressive ones like ontology languages (e.g., description logics), temporal and modal logics, nonmonotonic logics, or logic programs under answer set semantics, to name just a few.

Book ChapterDOI
10 Sep 2012
TL;DR: Techniques to enhance existing review management systems with (re)configuration facilities are presented and a practical evaluation is provided.
Abstract: Constraint-based configuration is, on the one hand, one of the classical problem domains both in AI and in industrial practice. Additional problems arise when configuration objects come from an open environment such as the Web, or in the case of reconfiguration. On the other hand, (re)configuration is a reasoning task largely ignored in the current (Semantic) Web reasoning literature, despite (i) the increased availability of structured data on the Web, particularly due to movements such as the Semantic Web and Linked Data, and (ii) the fact that numerous practically relevant tasks involving Web data require (re)configuration. To bridge these gaps, we discuss the challenges and possible approaches for reconfiguration in an open Web environment, based on a practical use case leveraging Linked Data as a "component catalog" for configuration. In this paper, we present techniques to enhance existing review management systems with (re)configuration facilities and provide a practical evaluation.

Book ChapterDOI
10 Sep 2012
TL;DR: An algorithm is defined that is able to compute a simplified Datalog+/- program P′ that is equivalent to P with respect to answering queries in ${\mathcal Q}$.
Abstract: In this paper we study query answering over ontologies expressed in Datalog+/-, i.e., datalog with existential variables in rule heads. Differently from previous proposals, we focus on subclasses of unions of conjunctive queries (UCQs), rather than on the whole class of UCQs. To identify subclasses of UCQs, we introduce the notion of conjunctive query pattern. Given a class of queries ${\mathcal Q}$ expressed by a conjunctive query pattern, we study decidability and complexity of answering queries in ${\mathcal Q}$ over a Datalog+/- program. In particular, we define an algorithm that, given a Datalog+/- program P and a class of queries ${\mathcal Q}$, is able to compute a simplified Datalog+/- program P′ that is equivalent to P with respect to answering queries in ${\mathcal Q}$. We show that such an algorithm constitutes both a theoretically and a practically interesting tool for studying query answering over ontologies expressed in terms of Datalog+/- rules.

Journal ArticleDOI
30 Apr 2012
TL;DR: In this paper, the authors describe floristic composition and structure of the tree component of riparian forests associated with Rio Queguay Grande (Paysandu, Uruguay).
Abstract: The aim of this study was to describe the floristic composition and structure of the tree component of the riparian forests associated with the Rio Queguay Grande (Paysandu, Uruguay). This site is of particular interest because of its large area of native forest and its current inclusion in the Sistema Nacional de Areas Protegidas. Quantitative sampling was performed in six transects perpendicular to the river. The community was described through frequency, density and dominance. The relative values of these parameters were used to calculate the Importance Value Index (IVI), which reveals the ecological importance of each species in a plant community. 405 individuals were surveyed, classified into 13 families and 18 species. The Shannon-Wiener diversity index was 2.46. The total forest density was 1313 ind./ha. The family Myrtaceae was the most represented, and we identified the four most important species in the community. Some species showed a preference for a particular forest region, and most had heights in the range of 1.5 to 7 m. All species sampled were native. We recorded a new species for the riparian Rio Queguay forests: Nectandra angustifolia (Schrad.) Nees & Mart. ex Nees, expanding its known geographic distribution to the east of the country.

Journal ArticleDOI
31 Dec 2012
TL;DR: The most important agricultural actors at the worldwide level (the United States of America, the European Union, and a large group of peripheral countries) are moving toward a progressive liberalization of world trade.
Abstract: The most important agricultural actors at the worldwide level (the United States of America, the European Union, and a large group of peripheral countries) are moving toward a progressive liberalization of world trade. The balance of liberalization and protection elements in agricultural policies constitutes the ideal framework for analyzing the problematic emergence of the free circulation of food, and can additionally characterize some socio-political impasses related to food security. This tendency towards liberalization helps explain why, for the international community, trade agreements are valued by the ways in which countries can use them as market-opening tools.

Book ChapterDOI
10 Sep 2012
TL;DR: In view of the practical deployment of OWL based on description logics, the importance of non-standard reasoning services for supporting ontology engineers was pointed out, for instance, in [8].
Abstract: In view of the practical deployment of OWL [9] based on description logics [2], the importance of non-standard reasoning services for supporting ontology engineers was pointed out, for instance, in [8]. An example of such reasoning services is that of uniform interpolation: given a theory using a certain vocabulary, and a subset Σ of "relevant terms" of that vocabulary, find a theory that uses only Σ terms and gives rise to the same consequences (expressible via Σ) as the original theory.

Journal ArticleDOI
30 Apr 2012
TL;DR: Field application, on affected chestnuts, of hypovirulent strains, which can transmit the virus to the virulent ones, is by far the best prospect for reducing and/or minimizing the damage that this pathogen causes.
Abstract: Chestnut blight, caused by Cryphonectria parasitica, is a widespread disease throughout the world. In Europe, it has been detected in most cultivated areas of Castanea sativa (European chestnut) in Mediterranean and Central European countries, and is considered a quarantine pathogen. There is no cultural or chemical method to control this fungus, nor any European chestnut cultivar tolerant or resistant to the disease. In recent years, research on chestnut blight control has focused on the development of biological methods. Cryphonectria parasitica has two types of strains: virulent, causing severe lesions to the tree, and hypovirulent, which cause hardly any damage because they carry a virus that attenuates virulence. Field application, on affected chestnuts, of hypovirulent strains, which can transmit the virus to the virulent ones, is by far the best prospect for reducing and/or minimizing the damage that this pathogen causes. The success of this biological control method for chestnut blight requires prior knowledge of the population structure of Cryphonectria parasitica (number and distribution of vegetative compatibility and sexual types) and of hypovirulent isolates compatible with the virulent strains that are dominant in an affected area.

Book ChapterDOI
10 Sep 2012
TL;DR: A suite of non-termination analysis algorithms, called Terminyzer, is proposed for automatic detection and explanation of non-termination in tabled logic engines with subgoal abstraction, and a cost-based query optimizer is implemented, which consists of a cost estimator and an optimizing unit.
Abstract: There have been many studies in termination analysis of logic programming but little has been done on analyzing non-termination of logic programs, which is even more important in our opinion. Non-termination analysis examines program execution history when non-termination is suspected and informs the programmer of non-termination causes and possible ways to fix them. In the first part of this thesis, we study the problem of non-termination in tabled logic engines with subgoal abstraction, such as XSB, and propose a suite of algorithms, called non-termination analyzer, $\texttt{Terminyzer}$, for automatic detection and explanation of non-termination. The second part of this thesis focuses on cost-based query optimization. Database query optimizers rely on data statistics in selecting query execution plans, and rule-based systems can greatly benefit from such optimizations as well. To this end, one first needs to collect data statistics for base predicates and propagate them to derived predicates. However, there are two difficulties: dependencies among arguments and recursion. To address these problems, we implement a cost-based query optimizer, $\texttt{Costimizer}$, which consists of a cost estimator and an optimizing unit. The optimizing unit performs a greedy search optimization based on predicate statistics computed by the cost estimator. We validate the effectiveness of $\texttt{Costimizer}$ on both size estimation and query optimization through experimental studies.

Book ChapterDOI
10 Sep 2012
TL;DR: Data exchange is a field of database theory that deals with transferring data between differently structured databases, with motivation coming from industry, and most of the results in the literature consider tuple generating dependencies (tgds) as the language to specify mappings.
Abstract: Data exchange is a field of database theory that deals with transferring data between differently structured databases, with motivation coming from industry [21,17]. The starting point of intensive investigation of the problem of data exchange was given in [14] where it was defined as, given data structured under a source schema and a mapping specifying how it should be translated to a target schema, to transform the source data into data structured under the target schema such that it accurately reflects the source data w.r.t. the mapping. This problem has been studied for different combinations of languages used to specify the source and target schema, and the mappings [8]. Most of the results in the literature consider tuple generating dependencies (tgds) as the language to specify mappings. Tgds allow one to express containment of conjunctive queries, and have been widely employed in other areas of database theory. Furthermore, once a target instance is materialized, one might want to perform query answering over it.