
Showing papers presented at "Web Reasoning and Rule Systems in 2011"


Book ChapterDOI
29 Aug 2011
TL;DR: It is shown that in such settings inconsistency-tolerant query answering has the same computational complexity as standard query answering and can rely on well-established relational database technology, as under standard DL semantics.
Abstract: In this paper we study the problem of obtaining meaningful answers to queries posed over inconsistent DL-Lite ontologies. We consider different variants of inconsistency-tolerant semantics and show that for some of these variants answering unions of conjunctive queries (UCQs) is first-order (FOL) rewritable, i.e., it can be reduced to standard evaluation of a FOL/SQL query over a database. Since FOL-rewritability of query answering for UCQs over consistent ontologies under first-order logic semantics is one of the distinguishing features of DL-Lite, in this paper we identify settings in which this property is preserved also under inconsistency-tolerant semantics. We therefore show that in such settings inconsistency-tolerant query answering has the same computational complexity as standard query answering and can rely on well-established relational database technology, as under standard DL semantics.
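The filtering performed by an inconsistency-tolerant rewriting can be made concrete with a small sketch. The Python fragment below is an illustration of one simple idea in this space (answering only from facts that participate in no constraint violation, in the spirit of intersection-of-repairs semantics) and is not necessarily one of the exact variants studied in the paper; the ontology, disjointness axiom, and data are invented for the example.

```python
# Hypothetical ABox facts and a single TBox disjointness axiom.
facts = {("Student", "ann"), ("Professor", "ann"), ("Student", "bob")}
disjoint = [("Student", "Professor")]  # Student and Professor are disjoint

# Mark every fact that participates in a constraint violation.
conflicting = set()
for a, b in disjoint:
    for concept, ind in facts:
        if concept == a and (b, ind) in facts:
            conflicting.add((a, ind))
            conflicting.add((b, ind))

# Answer atomic queries only over the surviving facts; this mirrors
# the clash-filtering a rewritten FOL/SQL query performs in the DBMS.
surviving = facts - conflicting

def answer(concept):
    return sorted(ind for c, ind in surviving if c == concept)

print(answer("Student"))  # ann is involved in a clash, so only bob survives
```

The point of FOL-rewritability is that this filtering does not need a separate reasoner: it can be folded into the query itself and evaluated by a standard relational engine.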

58 citations


Book ChapterDOI
29 Aug 2011
TL;DR: This paper focuses on the ontological query answering problem, which consists of querying data while taking ontological knowledge into account, and considers a logical framework based on existential rules, also called Tuple-Generating Dependencies or Datalog+/- rules.
Abstract: The need for an ontological layer on top of data, associated with advanced reasoning mechanisms able to exploit the semantics encoded in ontologies, has been acknowledged both in the database and knowledge representation communities. We focus in this paper on the ontological query answering problem, which consists of querying data while taking ontological knowledge into account. To tackle this problem, we consider a logical framework based on existential rules, also called Tuple-Generating Dependencies or Datalog+/- rules. This framework can also be defined in graph terms. Query entailment with existential rules is not decidable, thus a crucial issue is to define decidable classes of rules as large as possible. This paper is a survey of decidable classes of rules, including a review of the main complexity results. It mostly relies on previous work presented at IJCAI'2009 [BLMS09] and KR'2010 [BLM10] (and developed in a journal paper [BLMS11]), updated to include very recent results.

40 citations


Book ChapterDOI
29 Aug 2011
TL;DR: This work describes some of the implementation details of the log-linear reasoner ELOG, which employs database technology to dynamically transform inference problems into integer linear programs (ILPs), and uses a form of cutting plane inference to reduce the size and complexity of the ILPs.
Abstract: Log-linear description logics are probabilistic logics combining several concepts and methods from the areas of knowledge representation and reasoning and statistical relational AI. We describe some of the implementation details of the log-linear reasoner ELOG. The reasoner employs database technology to dynamically transform inference problems into integer linear programs (ILPs). In order to lower the size of the ILPs and reduce the complexity, we employ a form of cutting plane inference during reasoning.

32 citations


Book ChapterDOI
29 Aug 2011
TL;DR: A practical method is proposed to acquire knowledge about the conceptual-space representation of labels, starting from meta-data obtained from the music recommendation website last.fm.
Abstract: In most knowledge representation settings, atomic properties correspond to natural language labels. Although these labels are usually taken to be primitive, automating some forms of commonsense inference requires background knowledge on the cognitive meaning of these labels. We consider two such forms of commonsense reasoning, which we refer to as interpolative and extrapolative reasoning. In both cases, rule-based knowledge is augmented with knowledge about the geometric representation of labels in a conceptual space. Specifically, to support interpolative reasoning, we need to know which labels are conceptually between which other labels, considering that intermediary conditions tend to lead to intermediary conclusions. Extrapolative reasoning is based on information about the direction of change that is needed when replacing one label by another, taking the view that parallel changes in the conditions of rules tend to lead to parallel changes in the conclusions. In this paper, we propose a practical method to acquire such knowledge about the conceptual spaces representation of labels. We illustrate the method in the domain of music genres, starting from meta-data that was obtained from the music recommendation website last.fm.
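The betweenness relation that underpins interpolative reasoning has a simple geometric reading: a label lies between two others if its point sits on the segment connecting theirs. The sketch below illustrates this check; the genre coordinates are invented for the example (in the paper they would be derived from last.fm meta-data), and the collinearity test is one simple way to operationalise "conceptually between".

```python
import math

# Hypothetical 2-D conceptual-space coordinates for three genre labels.
genres = {
    "blues":      (0.0, 0.0),
    "rock":       (1.0, 1.0),
    "blues-rock": (0.5, 0.5),
}

def between(a, c, b, tol=1e-6):
    """True if point c lies on the segment a-b (within tolerance):
    the triangle inequality is tight exactly for collinear points."""
    return abs(math.dist(a, c) + math.dist(c, b) - math.dist(a, b)) < tol

# Interpolative reasoning: a conclusion holding for blues and rock
# can plausibly be extended to any genre between them.
print(between(genres["blues"], genres["blues-rock"], genres["rock"]))
```

Extrapolative reasoning would instead compare direction vectors between label points, checking whether two changes of condition are (approximately) parallel.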

31 citations


Book ChapterDOI
29 Aug 2011
TL;DR: Two algorithms are presented for answering conjunctive queries over a probabilistic extension of guarded Datalog+/- that uses Markov logic networks as the underlying probabilistic semantics.
Abstract: Datalog+/- is a recently developed family of ontology languages that is especially useful for representing and reasoning over lightweight ontologies, and is set to play a central role in the context of query answering and information extraction for the Semantic Web. It has recently become apparent that it is necessary to develop a principled way to handle uncertainty in this domain; in addition to uncertainty as an inherent aspect of the Web, one must also deal with forms of uncertainty due to inconsistency and incompleteness, uncertainty resulting from automatically processing Web data, as well as uncertainty stemming from the integration of multiple heterogeneous data sources. In this paper, we present two algorithms for answering conjunctive queries over a probabilistic extension of guarded Datalog+/- that uses Markov logic networks as the underlying probabilistic semantics. Conjunctive queries ask: "what is the probability that a given set of atoms holds?". These queries are especially relevant to Web information extraction, since extractors often work with uncertain rules and facts, and decisions must be made based on the likelihood that certain facts are inferred. The first algorithm for answering conjunctive queries is a basic one using classical forward chaining (known as the chase procedure), while the second one is a backward chaining algorithm and works on a specific subset of guarded Datalog+/-; it can be executed as an anytime algorithm for greater scalability.
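The chase the first algorithm builds on can be sketched in a few lines. The fragment below is an illustration, not the paper's algorithm: it fires a single hypothetical existential rule, person(X) -> hasParent(X, Y) ∧ person(Y), inventing a fresh labelled null for the existentially quantified Y each time the rule is unsatisfied.

```python
from itertools import count

nulls = count()  # generator of labelled nulls "_n0", "_n1", ...
facts = {("person", "alice")}

def chase_step(facts):
    """Fire the rule person(X) -> hasParent(X, Y), person(Y) once on
    every unsatisfied person-atom, adding a fresh null for Y."""
    new = set(facts)
    for f in facts:
        if f[0] == "person":
            x = f[1]
            if not any(g[0] == "hasParent" and g[1] == x for g in facts):
                y = f"_n{next(nulls)}"
                new.add(("hasParent", x, y))
                new.add(("person", y))
    return new

# The chase need not terminate on arbitrary existential rules
# (each null spawns another), so we bound the number of rounds.
for _ in range(2):
    facts = chase_step(facts)

print(sorted(facts))
```

Guardedness is precisely the kind of syntactic restriction that keeps reasoning over such chase expansions decidable; the probabilistic extension additionally weights the derived worlds via a Markov logic network.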

22 citations


Book ChapterDOI
29 Aug 2011
TL;DR: This paper proposes a generalization of the notion of rule dependency, namely k-dependency, that enlarges the set of known halting cases and can be used to compile the rule base, leading to improved query answering algorithms.
Abstract: We consider existential rules (also called Tuple-Generating Dependencies or Datalog+/- rules). These rules are particularly well-suited to the ontological query answering problem, which consists of querying data while taking terminological knowledge into account. Since this problem is not decidable in general, various conditions ensuring decidability have been proposed in the literature. In this paper, we focus on conditions that restrict the way rules may interact, so as to ensure that the forward chaining mechanism is finite. After a review of existing proposals, we propose a generalization of the notion of rule dependency, namely k-dependency, that enlarges the set of halting cases. It can also be used to compile the rule base, which leads to improved query answering algorithms.

18 citations


Book ChapterDOI
29 Aug 2011
TL;DR: A set of basic principles and features is proposed by which rules can be represented and shared over the web so that they may be effectively reused, and several methods of rule reuse are demonstrated.
Abstract: Ontologies are information models which provide vocabulary terms or terminologies and associated meanings to allow the modeling of a domain. They are shared conceptualizations; this has never been more true, because in recent years they have been developed by community efforts, often including experts from academia as well as industry. Those efforts have been complemented by the standardization of formats and languages, such as RDF, OWL, and SPARQL, for representing and (re)using ontologies and data on the (Semantic) Web. Rules, on the other hand, are (seldom) used for knowledge representation (i.e. to define the semantics or integrity constraints). Rules are also used for other intelligent reasoning tasks such as for defining business logic and policies. With the prevalence of shared information models, it is possible and may be necessary to share and reuse rules. Furthermore, with the advent of the Rule Interchange Format (RIF), rules can be shared across many rule systems. We propose a set of basic principles and features by which rules can be represented and shared over the web so that they may be effectively reused and demonstrate several methods of rule reuse. Finally, we discuss how some of these features work in practice in the N3-based AIR web rules language.

17 citations


Book ChapterDOI
29 Aug 2011
TL;DR: The Amber system replaces human interaction with a domain ontology applicable to all sites of a domain; it is implemented in logical rules to allow an explicit formulation of the heuristics and easy adaptation to different domains.
Abstract: Web extraction is the task of turning unstructured HTML into structured data. Previous approaches rely exclusively on detecting repeated structures in result pages. These approaches trade intensive user interaction for precision. In this paper, we introduce the Amber ("Adaptable Model-based Extraction of Result Pages") system that replaces the human interaction with a domain ontology applicable to all sites of a domain. It models domain knowledge about (1) records and attributes of the domain, (2) low-level (textual) representations of these concepts, and (3) constraints linking representations to records and attributes. Parametrized with these constraints, otherwise domain-independent heuristics exploit the repeated structure of result pages to derive attributes and records. Amber is implemented in logical rules to allow an explicit formulation of the heuristics and easy adaptation to different domains. We apply Amber to the UK real estate domain where we achieve near perfect accuracy on a representative sample of 50 agency websites.

16 citations


Book ChapterDOI
29 Aug 2011
TL;DR: A novel incremental algorithm computes changes to materialized views in logic databases like those used by rule-based reasoners, arguably at an acceptable price: a slight adaptation of semi-naive forward chaining.
Abstract: We present a novel incremental algorithm to compute changes to materialized views in logic databases like those used by rule-based reasoners. Such reasoners have to address the problem of changing axioms in the presence of materializations of derived atoms. Existing approaches have drawbacks: some require generating and evaluating large transformed programs that are in Datalog with negation while the source program is in plain Datalog and significantly smaller; others recompute the whole extension of a predicate even if only a small part of this extension is affected by the change. The method presented in this article overcomes both drawbacks, arguably at an acceptable price: a slight adaptation of semi-naive forward chaining.
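Semi-naive forward chaining, the mechanism the paper adapts, joins only the facts derived in the previous round against the base relations instead of re-deriving everything. The sketch below shows the standard loop on a hypothetical transitive-closure program (path(X,Y) :- edge(X,Y); path(X,Z) :- path(X,Y), edge(Y,Z)); the incremental view-maintenance algorithm of the paper propagates deltas caused by changing axioms through a loop of this shape rather than recomputing the materialization.

```python
# Hypothetical edge relation.
edges = {("a", "b"), ("b", "c"), ("c", "d")}

path = set(edges)   # materialization of the derived predicate
delta = set(edges)  # facts that were new in the last round

while delta:
    # The semi-naive trick: join only the *new* path-facts with edge/2,
    # so no derivation is repeated across rounds.
    derived = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
    delta = derived - path  # keep only genuinely new facts
    path |= delta

print(sorted(path))
```

The loop terminates when a round produces nothing new, at which point `path` is the full transitive closure of `edges`.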

15 citations


Book ChapterDOI
29 Aug 2011
TL;DR: An ontology (also referred to as a terminology or knowledge base) is an entity used to represent some domain (field of knowledge); the building blocks of an ontology include categories, relations, and objects.
Abstract: An ontology (also referred to as a terminology, knowledge base) is an entity used to represent some domain (field of knowledge). Usually the building blocks of an ontology include categories (concepts), relations (roles) and objects (individuals).

14 citations


Book ChapterDOI
29 Aug 2011
TL;DR: The prototype is based on a layered semantic architecture that facilitates semantic data access and integration using OWL, SWRL and SPARQL; the paper reports the lessons learned from its development.
Abstract: This paper describes the implementation of a Smart Campus application prototype that integrates heterogeneous data using semantic technologies. The prototype is based on a layered semantic architecture that facilitates semantic data access and integration using OWL, SWRL and SPARQL. The focus of the paper is on the prototype implementation and the lessons learned from its development.

Book ChapterDOI
29 Aug 2011
TL;DR: The approach the ONTORULE project proposes to business rule application development is introduced, together with the architecture and semantic technologies that are validated and demonstrated in two pilot applications.
Abstract: The objective of ONTORULE is to enable users, from business executives over business analysts to IT developers, to interact in their own way with the part of a business application that is relevant to them. This extended abstract describes the approach the ONTORULE project proposes to business rule application development, and it introduces the architecture and the semantic technologies that we develop for that purpose and that are validated and demonstrated in two pilot applications.

Book ChapterDOI
29 Aug 2011
TL;DR: This research proposes a new strategy, called inline evaluation, which compiles the whole hybrid KB into a new KB in a single formalism, so that a single reasoner can perform the reasoning tasks, improving the efficiency of hybrid reasoning.
Abstract: The deployment of knowledge representation formalisms to the Web has created the need for hybrid formalisms that combine heterogeneous knowledge bases. The aim of this research is to improve the reasoning efficiency over hybrid knowledge bases (KBs). The traditional way of reasoning over hybrid KBs is to use different underlying reasoners to access the different data sources, which causes overhead. To remedy this, we propose a new strategy, called inline evaluation, which compiles the whole hybrid KB into a new KB using only one single formalism. Hence we can use a single reasoner to do the reasoning tasks, and improve the efficiency of hybrid reasoning.

Book ChapterDOI
29 Aug 2011
TL;DR: Several approaches to optimise nested queries in XSPARQL are presented and compared, improving efficiency by up to two orders of magnitude in a practical evaluation.
Abstract: XSPARQL is a language to transform data between the tree-based XML format and the graph-based RDF format. XML is a widely adopted data exchange format which brings its own query language XQuery along. RDF is the standard data format of the Semantic Web with SPARQL being the corresponding query language. XSPARQL combines XQuery and SPARQL into a unified query language which provides a more intuitive and maintainable way to translate data between the two data formats. A naive implementation of XSPARQL can be inefficient when evaluating nested queries. However, such queries occur often in practice when dealing with XML data. We present and compare several approaches to optimise nested queries. By implementing these optimisations we improve efficiency by up to two orders of magnitude in a practical evaluation.

Book ChapterDOI
29 Aug 2011
TL;DR: This paper employs the four-valued logic proposed by Belnap to present a paraconsistent semantics for Hybrid MKNF knowledge bases, which can detect inconsistencies and handle them effectively, and transforms the proposed semantics into the stable model semantics via a linear transformation operator.
Abstract: Hybrid MKNF knowledge bases, originally based on the stable model semantics, are a mature method of combining rules and Description Logics (DLs). The well-founded semantics for such knowledge bases was proposed subsequently for better efficiency of reasoning. However, the integration of rules and DLs may give rise to inconsistencies, even if each is consistent on its own. Accordingly, reasoning systems based on the previous two semantics break down. In this paper, we employ the four-valued logic proposed by Belnap, and present a paraconsistent semantics for Hybrid MKNF knowledge bases, which can detect inconsistencies and handle them effectively. In addition, we transform our proposed semantics into the stable model semantics via a linear transformation operator, which indicates that the data complexity in our paradigm is not higher than that of classical reasoning. Moreover, we provide a fixpoint algorithm for computing paraconsistent MKNF models.

Book ChapterDOI
29 Aug 2011
TL;DR: A polynomial query rewriting for conjunctive queries under unary inclusion dependencies is presented, and it is shown that binary inclusion dependencies do not admit polynomial query rewriting algorithms.
Abstract: Ontology-based data access (OBDA) is widely accepted as an important ingredient of the new generation of information systems. In the OBDA paradigm, potentially incomplete relational data is enriched by means of ontologies, representing intensional knowledge of the application domain. We consider the problem of conjunctive query answering in OBDA. Certain ontology languages have been identified as FO-rewritable (e.g., DL-Lite and sticky-join sets of TGDs), which means that the ontology can be incorporated into the user's query, thus reducing OBDA to standard relational query evaluation. However, all known query rewriting techniques produce queries that are exponentially large in the size of the user's query, which can be a serious issue for standard relational database engines. In this paper, we present a polynomial query rewriting for conjunctive queries under unary inclusion dependencies. On the other hand, we show that binary inclusion dependencies do not admit polynomial query rewriting algorithms.
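The core idea of FO-rewriting under inclusion dependencies can be seen in a small sketch. The fragment below handles only atomic queries over a hypothetical ontology (every Student is a Person, every PhD a Student) and is far simpler than the paper's polynomial rewriting for full conjunctive queries: it just closes the query predicate under the dependencies, so that the union of the collected atoms, evaluated over the raw data, returns all certain answers.

```python
# Unary inclusion dependencies A ⊆ B, written as (A, B) pairs.
deps = [("Student", "Person"), ("PhD", "Student")]

def rewrite(query_pred):
    """Collect every predicate whose extension is contained in
    query_pred; the disjunction of these atoms is the rewriting."""
    result = {query_pred}
    changed = True
    while changed:
        changed = False
        for sub, sup in deps:
            if sup in result and sub not in result:
                result.add(sub)
                changed = True
    return result

# Query Person(x) rewrites to Person(x) ∨ Student(x) ∨ PhD(x).
print(sorted(rewrite("Person")))
```

Note that for atomic queries the rewriting grows only linearly in the number of dependencies; the exponential blow-up the paper combats arises when such disjunctions must be distributed across the atoms of a conjunctive query.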

Book ChapterDOI
29 Aug 2011
TL;DR: The experiences and lessons learned in building a large scale reasoning engine for Sindice are reported; the reasoning approach has been deployed, used and improved since 2008, and has enabled Sindice to reason over billions of triples.
Abstract: The Sindice Semantic Web index provides search capabilities over 260 million documents. Reasoning over web data makes explicit what would otherwise be implicit knowledge: it adds value to the information and enables Sindice to ultimately be more competitive in terms of precision and recall. However, due to the scale and heterogeneity of web data, a reasoning engine for the Sindice system must (1) scale out through parallelisation over a cluster of machines; and (2) cope with unexpected data usage. In this paper, we report our experiences and lessons learned in building a large scale reasoning engine for Sindice. The reasoning approach has been deployed, used and improved since 2008 within Sindice and has enabled Sindice to reason over billions of triples.

Book ChapterDOI
29 Aug 2011
TL;DR: This work combines production systems (PS) and Description Logics (DL) in a way that allows one to express both facts and rules using an ontology language, and proposes a generic semantics for such a combination.
Abstract: Production systems are an established paradigm in knowledge representation, while ontologies are widely used to model and reason about the domain of an application. Description logics, underlying for instance the Web ontology language OWL, are a well-studied formalism to express ontologies. In this work we combine production systems (PS) and Description Logics (DL) in a way that allows one to express both facts and rules using an ontology language. We explore the space of design options for combining the traditional closed world semantics of PS with the open world semantics of DL and propose a generic semantics for such a combination. We show how to encode our semantics in a fixpoint extension of first-order logic. We show that in special cases (monotonic and light PS) checking properties of the system such as termination is decidable.

Journal ArticleDOI
01 Jul 2011
TL;DR: When Intel started its production process in Costa Rica in 1998, the neoliberal strategy entered a new phase; a third phase, beginning in 2005, was characterized by the dominance of capital flows, mostly short-term capital, which not only led the evolution of the economy but also caused the real colon-dollar exchange rate to rise consistently through the period that started in 2005.
Abstract: When Intel started its production process in Costa Rica in 1998, the neoliberal strategy simultaneously entered a new phase. From 1985 to 1997 there was the "light exportation phase". After 1998 it was the turn of high technology corporations and transnational hotel chains to take part in that scenario. However, in 2005 a third phase started, characterized by the dominance of capital flows, mostly short-term capital. That capital not only led the evolution of the economy, but also caused the real colon-dollar exchange rate to rise consistently through the period that started in 2005. All of this brought new contradictions, unknown to the neoliberal strategy, mostly because it harmed the economic structure based on exportation and tourism. Moreover, it suggested that Costa Rica had fully entered financial globalization, which implied a very serious risk both in terms of the damage to that economic structure and in terms of the instability that it carried.

Book ChapterDOI
29 Aug 2011
TL;DR: This paper gives an overview of a formal semantic-based approach to modeling some regulations in the photovoltaic field, to support the delivery of technical assessments at the French scientific center on Building Industry (CSTB).
Abstract: This paper gives an overview of a formal semantic-based approach to modeling some regulations in the photovoltaic field, to support the delivery of technical assessments at the French scientific center on Building Industry (CSTB). Starting from regulatory texts, we first make SBVR rules explicit and then formalize them into ontology-based rules in the SPARQL language. These are exploited in modeling the compliance checking process required for the delivery of technical assessments.

Book ChapterDOI
29 Aug 2011
TL;DR: A new feature of the SEWEBAR-CMS system is demonstrated that allows any of the rules to be used as a query-by-example and, in one click, to discover whether this rule stands in some interesting relation to a rule already stored in a knowledge base.
Abstract: The long-pending research challenge of the association rule mining task is to identify, out of the multitude of discovered rules, the ones that are interesting for the domain expert. We will demonstrate a new feature of the SEWEBAR-CMS system that allows any of the rules to be used as a query-by-example and, in one click, to discover whether this rule stands in some interesting relation to a rule already stored in a knowledge base. New relations can be plugged in using a PMML-like XML-based declarative language through XSLT transformations.
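The basic interestingness measures that association rule systems like SEWEBAR-CMS build on are support and confidence. The sketch below computes both on an invented transaction set; it illustrates the standard definitions only, not SEWEBAR-CMS's query-by-example machinery.

```python
# Hypothetical market-basket transactions.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions containing the whole itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Conditional frequency of the consequent given the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

# Rule {bread} -> {milk}: how often does milk accompany bread?
print(support({"bread", "milk"}))        # 2 of 4 transactions
print(confidence({"bread"}, {"milk"}))   # 0.5 / 0.75
```

A "rule relation" in the sense of the paper would then compare two such rules, e.g. flagging one rule as an exception to or a refinement of another.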

Book ChapterDOI
29 Aug 2011
TL;DR: An improved local closed-world extension for description logics is presented; it is based on circumscription and deviates from previous circumscriptive description logics in that extensions of minimized predicates may contain only extensions of named individuals in the knowledge base.
Abstract: We present an improved local closed world extension for description logics. It is based on circumscription, and deviates from previous circumscriptive description logics [1,3] in that extensions of minimized predicates may contain only extensions of named individuals in the knowledge base. Besides an (arguably) higher intuitive appeal, the improved semantics is applicable to expressive description logics without loss of decidability.

Book ChapterDOI
29 Aug 2011
TL;DR: A sound and complete proof theory is developed for TRPAD (Transaction Logic with Partially Defined Actions)--an expressive formalism for reasoning about the effects of compound actions that is reducible to ordinary logic programming.
Abstract: This paper introduces TRPAD (Transaction Logic with Partially Defined Actions)--an expressive formalism for reasoning about the effects of compound actions. TRPAD is based on a subset of Transaction Logic, but extends it with special premise-formulas that generalize the data and transition formulas of the original Transaction Logic. We develop a sound and complete proof theory for TRPAD and illustrate the formalism on a number of non-trivial examples. In addition, we show that most of TRPAD is reducible to ordinary logic programming and that this reduction is sound and complete.

Book ChapterDOI
29 Aug 2011
TL;DR: This paper summarises the lessons learnt from the PhD Thesis Exploiting RDFS and OWL for Integrating Heterogeneous, Large-Scale, Linked Data Corpora and discusses the applicability and utility of the reasoning methods in such scenarios.
Abstract: In this paper, we summarise the lessons learnt from the PhD Thesis Exploiting RDFS and OWL for Integrating Heterogeneous, Large-Scale, Linked Data Corpora where we looked at three use-cases for reasoning over Linked Data: (i) translating data between different vocabulary terms; (ii) identifying and repairing noise in the form of inconsistency; and (iii) detecting and processing coreferent identifiers (identifiers which refer to the same thing). We summarise how we overcome the challenges of scalability and robustness faced when reasoning over Linked Data. We validate our methods against an open-domain corpus of 1.1 billion quadruples crawled from 4 million Linked Data documents, discussing the applicability and utility of our reasoning methods in such scenarios.

Journal ArticleDOI
01 Jul 2011
TL;DR: This work illustrates, through the use of Critical Discourse Analysis tools, how power is exercised by journalistic discourse on a sector of the city of San Jose known as "Dominican Land" as a way to account for the criminalization and racialization of this space and its foreign residents.
Abstract: This work illustrates, through the use of Critical Discourse Analysis tools, how power is exercised by journalistic discourse on a sector of the city of San Jose known as "Dominican Land" as a way to account for the criminalization and racialization of this space and its foreign residents.

Journal ArticleDOI
01 Jul 2011
TL;DR: The degree of improvement the country has achieved in mitigation and adaptation of the water resources sector to climate change is evaluated, offering an assessment of national public policies in relation to integral water resources planning and management.
Abstract: This article analyzes the implications of climate change for water resources. The point of departure is previous research by the authors which takes into consideration the impacts of climate change on water resources and some scenarios, to offer an assessment of national public policies in relation to integral water resources planning and management. To this end, official documents were analyzed, some events were attended, and experts were consulted to evaluate the degree of improvement the country has achieved in mitigation and adaptation of the water resources sector to climate change. To achieve pertinent actions it is necessary to strengthen institutions and political will.

Book ChapterDOI
29 Aug 2011
TL;DR: A sound reduction of a large fragment of L1 to TRPAD is provided, and it is shown that this reduction is complete with respect to the LP embedding of L1.
Abstract: Transaction Logic with Partially Defined Actions (TRPAD) is an expressive formalism for reasoning about the effects of actions and for declarative specification of state-changing transactions. The action language L1 is a well-known formalism to describe changing domains and for reasoning about actions. The purpose of this paper is to compare these two formalisms and identify their similarities and points of divergence in order to better understand their modeling and reasoning capabilities. We provide a sound reduction of a large fragment of L1 to TRPAD, and show that this reduction is complete with respect to the LP embedding of L1. We also explore how action planning is modeled in both languages and discuss the relationship to other languages for representing actions.

Book ChapterDOI
29 Aug 2011
TL;DR: This talk addresses the problem of exchanging information given by representation systems, which are essentially finite descriptions of (possibly infinite) sets of complete instances, and shows the robustness of the proposal by applying it to the problems of exchanging incomplete information and exchanging knowledge bases.
Abstract: In the traditional data exchange setting source instances are restricted to be complete, in the sense that every fact is either true or false in these instances. Although natural for a typical database translation scenario, this restriction is gradually becoming an impediment to the development of a wide range of applications that need to exchange objects that admit several interpretations. In particular, we are motivated by two specific applications that go beyond the usual data exchange scenario: exchanging incomplete information and exchanging knowledge bases. In this talk, we propose a general framework for data exchange that can deal with these two applications. More specifically, we address the problem of exchanging information given by representation systems, which are essentially finite descriptions of (possibly infinite) sets of complete instances, and then we show the robustness of our proposal by applying it to the problems of exchanging incomplete information and exchanging knowledge bases, which are both instantiations of the exchange problem for representation systems.

Book ChapterDOI
29 Aug 2011
TL;DR: A vocabulary expansion of RDFS is presented, which admits the assertion of rules that hold on associations among sets of RDFS classes, and an actual-world semantics is given to statements in the language expansion that properly captures the meaning commonly given to association rules w.r.t. context, support and confidence.
Abstract: We present a vocabulary expansion of RDFS, which admits the assertion of rules that hold on associations among sets of RDFS classes, and we give an actual-world semantics to statements in the language expansion that properly captures the meaning commonly given to association rules w.r.t. context, support and confidence, while interacting with class subsumptions and instance typing. In addition, we exhibit a sound and complete procedure for the association rule entailment problem.

Book ChapterDOI
Nguyen Thanh Binh
29 Aug 2011
TL;DR: This paper looks at decidability of unification in EL without top (EL-τ), proves that EL-τ unification is decidable, and constructs an NExpTime decision procedure for the problem.
Abstract: In recent years, the description logic EL has received significant interest. The description logic EL is a knowledge representation formalism used, e.g., in natural language processing, configuration of technical systems, databases and biomedical ontologies. Unification is used there as a tool to recognize equivalent concepts. It has been proven that unification in EL is NP-complete. This result was based on a locality property of certain EL unifiers. In fact, the large medical ontology SNOMED CT was built on a subset of the EL++ formalism, however without the top concept. It would therefore be interesting to investigate decidability of unification in extensions of EL without the top concept. In this paper, we look at decidability of unification in EL without top (EL-τ). We show that a similar locality holds for EL-τ, but decidability of EL-τ unification does not follow immediately from locality as it does in the case of unification in EL. However, by restricting the locality property further, we prove that EL-τ unification is decidable and construct an NExpTime decision procedure for the problem. Moreover, the procedure allows us to compute a specific set of solutions to the unification problem.