
Problems impacting the quality of automatically built ontologies

Toader GHERASIM¹, Giuseppe BERIO², Mounira HARZALLAH³ and Pascale KUNTZ⁴
Abstract. Building ontologies and debugging them is a time-consuming task. Over recent years, several approaches and tools for the automatic construction of ontologies from textual resources have been proposed. However, due to the limitations highlighted by experiments in real-life applications, several research efforts have focused on the identification and classification of the errors that affect ontology quality. These classifications are nevertheless incomplete, and the error descriptions are not yet standardized. In this paper we introduce a new framework providing standardized definitions, which leads to a new error classification that removes the ambiguities of previous ones. We then focus on the quality of automatically built ontologies and present the experimental results of our analysis of an ontology automatically built by Text2Onto for the domain of composite materials manufacturing.
1 Introduction
Since the pioneering works of Gruber [15], ontologies have played a major role in knowledge engineering, and their importance is growing with the rise of the Semantic Web. Today they are an essential component of numerous applications in various fields: e.g. information retrieval [22, 20], knowledge management [26], analysis of social semantic networks [8] and business intelligence [27]. However, despite the maturity level reached in ontology engineering, important problems remain open and are still widely discussed in the literature. The most challenging issues concern the automation of ontology construction and evaluation.
The increasing popularity of ontologies and the changes of scale of the last decade have motivated the development of ontology learning techniques, and promising results have been obtained [6, 5]. Although these techniques have often been shown experimentally to be insufficient for constructing ready-to-use ontologies [5], their interest is not questioned, in particular in technical domains [17]. A few recent works recommend integrating ontology learning techniques with manual intervention [27].
Whatever their use, it is essential to assess the quality of ontologies throughout their development. Several ontology quality criteria and different evaluation methods have been proposed in the literature [19, 4, 11, 21, 1]. However, as mentioned by [28], defining "a good ontology" remains a difficult problem, and the different approaches only make it possible to "recognize problematic parts of an ontology". From an operational point of view, error identification is a very important step for the integration of ontologies in real-life complex systems, and different research efforts have recently focused on this issue [13, 2, 24]. However, as far as we know, a generic standardized description of these errors does not yet exist, although it seems to be a preliminary step for the development of assisted construction methods.

¹ LINA, UMR 6241 CNRS, e-mail: toader.gherasim@univ-nantes.fr
² LABSTICC, UMR 6285 CNRS, e-mail: giuseppe.berio@univ-ubs.fr
³ LINA, UMR 6241 CNRS, e-mail: mounira.harzallah@univ-nantes.fr
⁴ LINA, UMR 6241 CNRS, e-mail: pascale.kuntz@polytech.univ-nantes.fr
In this paper, we focus on the most important errors that affect the quality of semi-automatically built ontologies. To get closer to operational concerns, we propose a detailed typology of the different types of problems that can be identified when evaluating an ontology. Our typology is inspired by a generic standardized description of the notion of quality in conceptual modeling [18], and our analysis is applied to a real-life situation concerning the manufacturing of parts made of composite materials for the aerospace industry.
The rest of this paper is organized as follows. Section 2 is a state of the art of ontology errors. Section 3 describes a framework which provides a standardized description of the errors and draws correspondences between our new classification and the main errors previously identified in the literature. Section 4 presents our experimental results in the domain of composite materials manufacturing. More precisely, we analyze the errors affecting an ontology produced by an automatic construction tool (here Text2Onto) from a set of technical textual resources.
2 State of the art on ontological errors
In the literature, the notion of "ontological error" is often used in a broad sense covering a wide variety of problems affecting ontology quality. From several studies published over the last decade, we have identified four major denominations associated with complementary definitions: (1) "taxonomic errors" [14, 13, 9, 2], (2) "design anomalies" or "deficiencies" [2, 3], (3) "anti-patterns" [7, 25, 23], and (4) "pitfalls" or "worst practices" [23, 24].
2.1 Taxonomic errors
Since the pioneering works of Gomez-Perez [14], the denomination "taxonomic error" has been used to refer to three types of errors that affect the taxonomic structure of ontologies: inconsistency, incompleteness and redundancy. Extensions to non-taxonomic properties have recently been proposed [3], but in this synthesis we focus on taxonomic errors.
Inconsistencies in the ontology may be logical or semantic. More precisely, three classes of inconsistencies in the taxonomic structure have been detailed: circularity errors (e.g. a concept that is a specialization or a generalization of itself), partitioning errors, which produce logical inconsistencies (e.g. a concept defined as a specialization of two disjoint concepts), and semantic errors (e.g. a taxonomic relationship between two concepts that is not consistent with the semantics of these concepts).

Incompleteness occurs when concepts or specialization relations are missing, or when some distributions of the instances of a concept among its children are not stated as exhaustive and/or disjoint.
Conversely, redundancy errors occur when a taxonomic relationship can be directly deduced by logical inference from the other relationships of the ontology, or when concepts with the same parent in the taxonomy do not share any common information (no instances, no children, no axioms, etc.) and can only be differentiated by their names.
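To make these notions concrete, the following sketch (our own illustration, not part of the paper or of any particular tool; the concept names are hypothetical) checks a toy "is-a" taxonomy for circularity errors and for redundant taxonomic relations:

```python
# Minimal sketch: detecting two taxonomic errors on a toy taxonomy given as a
# set of (child, parent) is-a assertions. Concept names are hypothetical.

from itertools import product

def closure(pairs):
    """Transitive closure of the is-a relation."""
    closed = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closed), repeat=2):
            if b == c and (a, d) not in closed:
                closed.add((a, d))
                changed = True
    return closed

def circularity_errors(pairs):
    """Concepts that are (directly or indirectly) specializations of themselves."""
    return {a for (a, b) in closure(pairs) if a == b}

def redundant_links(pairs):
    """Asserted is-a links already entailed by the remaining ones."""
    return {p for p in pairs if p in closure(pairs - {p})}

taxonomy = {("Prepreg", "Material"), ("Resin", "Material"),
            ("Epoxy", "Resin"), ("Epoxy", "Material"),   # redundant link
            ("Material", "Prepreg")}                      # creates a cycle

print(circularity_errors(taxonomy))  # {'Prepreg', 'Material'}
print(redundant_links(taxonomy))     # {('Epoxy', 'Material')}
```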
2.2 Design anomalies
Roughly speaking, design anomalies mainly concern ontology understanding and maintainability. They are not necessarily errors but undesirable situations. Five classes of design anomalies have been described: (1) "lazy concepts" (leaf concepts in the taxonomy not involved in any axiom and without any instances); (2) "chains of inheritance" (long chains composed of intermediate concepts with a single child); (3) "lonely disjoint" concepts (a superfluous disjointness axiom between distant concepts in the taxonomy, which may disrupt inference); (4) "over-specific property range" (a property range that is too specific and should be replaced by a coarser range that better fits the considered domain); (5) "property clumps" (duplication of the same properties over a large set of concepts instead of inheriting these properties from a more general concept).
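As an illustration (ours, not the authors'; the data structures and concept names are hypothetical), the sketch below flags the first two design anomalies, lazy concepts and chains of inheritance, on a toy single-inheritance taxonomy:

```python
# Minimal sketch: flagging "lazy concepts" and "chains of inheritance" on a toy
# taxonomy. is_a is a list of (child, parent) assertions; single inheritance is
# assumed for the chain detection.

def children_index(is_a):
    kids = {}
    for child, parent in is_a:
        kids.setdefault(parent, set()).add(child)
    return kids

def lazy_concepts(is_a, instances, axioms):
    """Leaf concepts with no instances that are not involved in any axiom."""
    kids = children_index(is_a)
    concepts = {c for pair in is_a for c in pair}
    return {c for c in concepts
            if not kids.get(c) and not instances.get(c) and c not in axioms}

def inheritance_chains(is_a, min_length=3):
    """Maximal chains in which every concept has exactly one child."""
    kids = children_index(is_a)
    parent = {child: p for child, p in is_a}       # assumes single inheritance
    starts = [c for c in kids
              if len(kids[c]) == 1
              and (c not in parent or len(kids.get(parent[c], ())) != 1)]
    chains = []
    for start in starts:
        chain, current = [start], start
        while len(kids.get(current, ())) == 1:
            current = next(iter(kids[current]))
            chain.append(current)
        if len(chain) >= min_length:
            chains.append(chain)
    return chains

is_a = [("Thermoset", "Resin"), ("Epoxy", "Thermoset"), ("DGEBA", "Epoxy")]
instances = {"DGEBA": []}          # no instances recorded
axioms = set()                     # no axioms mention these concepts

print(lazy_concepts(is_a, instances, axioms))   # {'DGEBA'}
print(inheritance_chains(is_a))                 # [['Resin', 'Thermoset', 'Epoxy', 'DGEBA']]
```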
2.3 Anti-patterns
Ontology design patterns (ODPs) are formal models of solutions commonly used by domain experts to solve recurrent modeling problems. Anti-patterns are ODPs that are known a priori to produce inconsistencies or unsuitable behaviors. [23] also calls anti-patterns the ad-hoc solutions specifically designed for a problem even when well-known ODPs are available. Three classes of anti-patterns have been described [7, 25, 23]: (1) "logical anti-patterns", which can be detected by logical reasoning; (2) "cognitive anti-patterns" (possible modeling errors due to misunderstanding of the logical consequences of the expression used); (3) "guidelines" (complex expressions that are valid from a logical and a cognitive point of view but for which simpler or more accurate alternatives exist).
2.4 Pitfalls
Pitfalls are complementary to ODPs. Their broad definition covers problems affecting ontology quality for which ODPs are not available. Poveda et al. [24] described 24 types of experimentally identified pitfalls, for instance forgetting to declare an inverse relation when it exists, or forgetting the range of an attribute. They also proposed a pitfall classification which follows the three evaluable dimensions of an ontology proposed by Gangemi et al. [11]: (1) the structural dimension (aspects related to syntax and logical properties), (2) the functional dimension (how well the ontology fits a predefined function), (3) the usability dimension (to what extent the ontology is easy to understand and use). Four pitfall classes correspond to the structural dimension: "modeling decisions" (MD, situations where OWL primitives are not used properly), "wrong inference" (WI, e.g. relationships or axioms that allow false reasoning), "no inference" (NI, gaps in the ontology which do not allow the inferences required to produce new desirable knowledge), and "real world modeling" (RWM, when commonsense knowledge is missing in the ontology). One class corresponds to the functional dimension: "requirement completeness" (RC, when the ontology does not cover its specifications). And two classes correspond to the usability dimension: "ontology understanding" (OU, information that makes understandability more difficult, e.g. concept label polysemy or label synonymy for distinct concepts, non-explicit declaration of inverse relations or equivalent properties) and "ontology clarity" (OC, e.g. variations of writing rules and typography for the labels).
It is easy to deduce from this classification that some pitfalls should belong to different classes associated with different dimensions (e.g. the fact that two inverse relations are not stated as inverse is both a "no inference" (NI) pitfall and an "ontology understanding" (OU) pitfall). Another attempt [24] proposed a classification of the 24 identified pitfalls into the three error classes (inconsistency, incompleteness and redundancy) given by Gomez-Perez et al. [14]. But these classes concern the ontology structure and content, and consequently four pitfalls associated with the ontology context do not fit this classification.
In order to highlight the links between the different classifications, Poveda et al. tried to define a mapping between the classification into 7 classes deduced from the dimensions defined by Gangemi et al. [11] and the 3 error classes proposed by Gomez-Perez et al. [14]. However, this task turned out to be very complex, and only four pitfall classes fit exactly with one of the error classes. For the others, there is either an overlap or no possible fit.
3 The framework
The state of the art briefly presented in the previous section shows that the terminology used for describing the different problems impacting the quality of ontologies is not yet standardized and that existing classifications do not cover the whole diversity of problems described in the literature.
In this section we present a framework providing standardized definitions for the quality problems of ontologies and leading to a new classification of these problems. The framework comprises two distinct and orthogonal dimensions: errors vs. unsuitable situations (first dimension) and logical facet vs. social facet of problems (second dimension).
Unsuitable situations identify problems which do not prevent the usage of an ontology (within a specific targeted domain and applications). Errors, on the contrary, identify problems preventing the usage of an ontology.
It is well known that an ontology has two distinct facets: it can be processed by machines (according to its logical specification) and it can be used by humans (which implicitly refers to social sharing).
The remainder of the section is organized along the second dimension (i.e. logical vs. social facet) and, within each facet, errors and unsuitable situations are defined. The framework is based on "natural" analogies between social and logical errors on the one hand, and social and logical unsuitable situations on the other.
3.1 Problem classification
3.1.1 Logical ground problems
The logical ground problems can be formally defined by considering the notions defined by Guarino et al. [16]: e.g. Interpretation (extensional first-order structure), Intended Model, Language, Ontology, and the two usual relations ⊨, ⊢ provided in any logical language. The relation ⊨ is used both to express that an interpretation I is a model of a logical theory T, written I ⊨ T (i.e. all the formulas in T are true in I: for each formula ϕ ∈ T, I ⊨ ϕ), and to express the logical consequence (i.e. that any model of a logical theory T is also a model of a formula ϕ, written T ⊨ ϕ). The relation ⊢ is used to express the logical calculus, i.e. the set of rules used to prove a theorem (i.e. any formula) ϕ starting from a theory T, written T ⊢ ϕ.
Examples and formalizations hereinafter are provided using a typical Description Logics notation (but they are easily transformable into first-order or other logics).
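As a minimal illustration of these two relations (our own toy example, not taken from [16]): for the theory T = { B ⊑ A, C ⊑ B }, every interpretation I with I ⊨ T satisfies C^I ⊆ B^I ⊆ A^I, hence T ⊨ C ⊑ A; a sound and complete calculus for this fragment also derives T ⊢ C ⊑ A.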
The usual logical ground errors are listed below.
1. Logical inconsistency, corresponding to ontologies containing logical contradictions for which no model exists (because the set of intended models is never empty, an ontology without models does not make sense anyway); formally, given an ontology O and the logical consequence relation ⊨ according to the logical language L used for building O, there is no interpretation I of O such that I ⊨ O. For example, if an ontology contains the axioms B ⊑ A (B is an A), A ⊓ B ⊑ ⊥ (A and B are disjoint) and c ∈ B (c is an instance of B), then c ∈ A and therefore c ∈ A ⊓ B, so there is a logical contradiction in the definition of this ontology (a small detection sketch for this example is given after this list);
2. Unadapted⁵ ontologies w.r.t. intended models⁶, i.e. an ontology for which something that is false in all (or some of) the intended models of L is true in the ontology; formally, there exists a formula ϕ such that for each (for some) intended model(s) of L, ϕ is false and O ⊨ ϕ. For example, if the ontology declares two concepts A and B as disjoint (O ⊨ A ⊓ B ⊑ ⊥) while in each intended model there exists an instance c common to A and B (i.e. c ∈ A ⊓ B), then the ontology is unadapted;
3. Incomplete ontologies w.r.t. intended models, i.e. an ontology for which something that is true in all the intended models of L is not necessarily true in all the models of O; formally, there exists a formula ϕ such that for each intended model of L, ϕ is true and O ⊭ ϕ. As an example, if in all the intended models C ⊔ B = A, and the ontology O only defines B ⊑ A and C ⊑ A, it is not possible to prove that C ⊔ B = A;
4. Incorrect (or unsound) reasoning w.r.t. the logical consequence, i.e. when some specific conclusions are derived by using suitable reasoning systems for targeted ontology applications even though these conclusions are not true in the intended models and must not be derived by any reasoning according to the targeted ontology applications (formally, when a specific formula ϕ, false in the intended models (O ⊭ ϕ), can be derived (O ⊢ ϕ) within any of those suitable reasoning systems);
5. Incomplete reasoning w.r.t. the logical consequence, i.e. when some specific conclusions cannot be derived by using suitable reasoning systems for targeted ontology applications even though these conclusions are true in the intended models and must be derived by some reasoning according to the targeted ontology applications (formally, when a specific formula ϕ, true in the intended models (O ⊨ ϕ), cannot be derived (O ⊬ ϕ) within those suitable reasoning systems).

⁵ We use the term "unadapted" instead of "incorrect" ontologies because it remains unclear whether intended models are defined for building the ontology or may also be defined independently. However, if intended models are defined for building the ontology, the term "incorrect" may be more appropriate.
⁶ Intended models should have been defined fully and independently, as in the case of models representing abstract structures or concepts such as numbers, processes, events, time and other "upper concepts", often defined according to their own properties. If intended models are not available, some specific entailments can be defined as facts that should necessarily be true in the targeted domain (or for targeted applications); specific counterexamples can also be defined instead of building entire intended models.
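Returning to error (1) above, the following sketch (our own illustration, independent of any ontology tool; the axiom encoding and concept names are hypothetical) propagates instance memberships along asserted subsumptions and reports individuals that end up belonging to two disjoint concepts, reproducing the contradiction of the example:

```python
# Minimal sketch: detecting the logical contradiction of error (1).
# The ontology asserts B ⊑ A, disjoint(A, B) and B(c); propagating memberships
# along subsumption exposes c ∈ A ⊓ B.

subclass_of = {("B", "A")}          # B ⊑ A
disjoint = {("A", "B")}             # A ⊓ B ⊑ ⊥
instance_of = {("c", "B")}          # c is an instance of B

def inferred_memberships(instance_of, subclass_of):
    """Propagate instance memberships upwards along asserted subsumptions."""
    members = set(instance_of)
    changed = True
    while changed:
        changed = False
        for (ind, concept) in list(members):
            for (sub, sup) in subclass_of:
                if concept == sub and (ind, sup) not in members:
                    members.add((ind, sup))
                    changed = True
    return members

def contradictions(instance_of, subclass_of, disjoint):
    """Individuals belonging to two concepts declared disjoint."""
    members = inferred_memberships(instance_of, subclass_of)
    return {(ind, a, b) for (a, b) in disjoint
            for (ind, ca) in members if ca == a
            for (ind2, cb) in members if ind2 == ind and cb == b}

print(contradictions(instance_of, subclass_of, disjoint))  # {('c', 'A', 'B')}
```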
The most common logical ground unsuitable situations are listed below. These situations negatively impact the "non-functional qualities" of ontologies such as reusability, maintainability and efficiency, as defined in the ISO 9126 standard for software quality.
6. Logical equivalence of distinct artifacts (concepts / relationships / instances), i.e. whenever two distinct artifacts are proved to be logically equivalent; for example, A and B are two distinct concepts in O and O ⊨ A = B (a small detection sketch for this situation and for situation (10) is given after this list);
7. Symmetrically, logically indistinguishable artifacts, i.e. whenever it is not possible to prove that two distinct artifacts are not equivalent from a logical point of view; in other words, when none of the following statements can be proved: (O ⊨ A = B), (O ⊨ A ⊓ B ⊑ ⊥) and (O ⊨ c ∈ A and O ⊨ c ∈ B); this case (7) can be partially covered by case (3) above whenever intended models provide precise information on the equivalence or the difference between A and B;
8. OR artifacts, i.e. an artifact A equivalent to a disjunction like C ⊔ S, A ≠ C, S, but for which, if applicable, there does not exist at least one common (non-optional) role / property for C and S, or for which C and S have common instances; in the first case, a simple formalization can be expressed by saying that there does not exist a (non-optional) role R such that O ⊨ (C ⊔ S) ⊑ ∃R.⊤; in the second case, an even simpler formalization is O ⊨ c ∈ C and O ⊨ c ∈ S, c being one constant not part of O; the first case targets potentially heterogeneous artifacts such as Car ⊔ Person, with probably no counterpart in the intended models, thus possibly leading to unadapted ontologies according to case (2) above; the second case targets potential ambiguities such as, for instance, one role (property) R logically equivalent to a disjunction (R₁ ⊔ R₂) with (R₁ ⊓ R₂) satisfiable;
9. AND artifacts, i.e. an artifact A equivalent to a conjunction like C ⊓ S, A ≠ C, S, but for which, if applicable, there does not exist at least one common (non-optional) role / property for C and S; this case is relevant to limit as much as possible some potentially heterogeneous artifacts such as Car ⊓ Person, possibly leading to artifact unsatisfiability;
10. While some cases of unsatisfiability of ontology artifacts (concepts, roles, properties etc.) can be covered by (2) because intended models may not contain void concepts, unsatisfiability tout court is not necessarily an error but a situation which is not suitable for ontology artifacts (i.e. given an ontology artifact A, O ⊨ A ⊑ ⊥); even if in ontologies it might be possible to define what must not be true (instead of what must be true), this practice is not encouraged;
11. High complexity of the reasoning task, i.e. whenever something is expressed in a way that complicates reasoning while simpler ways to express the same thing exist;
12. Ontology not minimal, i.e. whenever the ontology contains unnecessary information:
- unnecessary because it can be derived or built⁷: an example of such an unsuitable situation is the redundancy of taxonomic relations, e.g. whenever A ⊑ B, B ⊑ C and A ⊑ C are all ontology axioms, the last axiom can be derived from the first two;
- unnecessary because it is not part of the intended models: for instance, a concept A being part of the ontology (language) but not defined by the intended models.

⁷ Built means that the artifact can be defined by using other artifacts.
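The two situations flagged above, (6) logical equivalence of distinct artifacts and (10) unsatisfiability, can be checked on asserted axioms along the following lines (our own sketch; the axiom encoding and concept names are hypothetical):

```python
# Minimal sketch: flagging (6) logically equivalent distinct artifacts and
# (10) unsatisfiable artifacts from asserted subsumption and disjointness axioms.

def subsumption_closure(subclass_of):
    """Reflexive-transitive closure of the asserted subsumptions."""
    concepts = {c for pair in subclass_of for c in pair}
    closed = set(subclass_of) | {(c, c) for c in concepts}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closed):
            for (c, d) in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

def equivalent_pairs(subclass_of):
    """Distinct concepts that subsume each other, hence are logically equivalent."""
    closed = subsumption_closure(subclass_of)
    return {(a, b) for (a, b) in closed if a < b and (b, a) in closed}

def unsatisfiable_concepts(subclass_of, disjoint):
    """Concepts subsumed by two concepts declared disjoint."""
    closed = subsumption_closure(subclass_of)
    return {a for (a, b) in closed for (c, d) in disjoint
            if (a, c) in closed and (a, d) in closed}

subclass_of = {("Laminate", "Composite"), ("Composite", "Laminate"),  # situation (6)
               ("Hybrid", "Metal"), ("Hybrid", "Polymer")}
disjoint = {("Metal", "Polymer")}                                     # makes Hybrid empty

print(equivalent_pairs(subclass_of))                   # {('Composite', 'Laminate')}
print(unsatisfiable_concepts(subclass_of, disjoint))   # {'Hybrid'}
```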
3.1.2 Social ground problems
Social ground problems are related to the perception (interpretation) and the targeted usage of ontologies by social actors (humans, applications based on social artifacts like WordNet, etc.). Perception (interpretation) and usage may not be formalized at all. In some sense, the distinction between the social facet and the logical facet parallels the distinction between tacit and explicit knowledge.
There are four social ground errors:
1. Social contradiction i.e. the perception (interpretation) that the so-
cial actor gives to the ontology or to the ontology artifacts is in
contradiction with the ontology axioms and their consequences; a
natural analogy is with unadapted ontologies;
2. Perception of design errors, i.e. the social actor's perception accounts for some design errors, such as modeling instances as concepts; a natural analogy is with unadapted ontologies;
3. Socially meaningless, i.e. the social actor is unable to give any interpretation to the ontology or to ontology artifacts, as in the case of artificial labels such as "XYHG45"; a natural analogy is with unadapted ontologies;
4. Social incompleteness, i.e. the social actor's perception is that one or several artifacts (axioms and/or their consequences) are missing in the ontology; a natural analogy is with incomplete ontologies.
The social ground unsuitable situations are mostly related to the difficulties that a social actor has to overcome in order to use the ontology, especially due to limited understandability, learnability and compliance (as defined in ISO 9126). As for the logical ground unsuitable situations, it is difficult to draw up an exhaustive list; the most common and important ones are listed below.
5. Lack of or poor textual explanations, i.e. when there are few, no, or poor annotations; this prevents understanding by social actors; there are no natural analogies;
6. Potentially equivalent artifacts, i.e. the social actors may identify distinct artifacts as equivalent (similar), as in the case of synonymous or exactly identical labels assigned to distinct artifacts (a rough detection sketch is given after this list); a natural analogy is with logically equivalent artifacts;
7. Socially indistinguishable artifacts, i.e. the social actors would not be able to distinguish two distinct artifacts, as, for instance, in the case of polysemic labels assigned to distinct artifacts; a natural analogy is with logically indistinguishable artifacts;
8. Artifacts with polysemic labels, which may be interpreted as the union or the intersection of the several rather distinct meanings associated with their labels; a natural analogy is therefore with OR and AND artifacts;
9. Flatness of the ontology (or non-modularity), i.e. an ontology presented as a set of artifacts without any additional structure, especially if coupled with a large number of artifacts; a natural analogy is with high complexity of the reasoning task, but flatness also prevents effective learning and understanding by social actors;
10. Non-standard formalization of the ontology, using a very specific logic or theory, which requires a specific effort from social actors for understanding and learning the ontology but also for using the ontology in standard contexts (reduced compliance); there are no natural analogies;
11. Lack of adapted and certified versions of the ontology in various languages, which requires specific efforts from social actors for understanding and learning the ontology but also for using the ontology in specific standard contexts (limited compliance); there are no natural analogies;
12. Socially useless artifacts included in the ontology; a natural analogy is with ontology not minimal.
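As announced in item (6), a rough detection sketch for potentially equivalent artifacts follows (our own illustration; the labels and the tiny synonym table are hypothetical stand-ins for a lexical resource):

```python
# Minimal sketch: flagging social unsuitable situation (6), potentially
# equivalent artifacts, i.e. distinct artifacts carrying identical or
# synonymous labels.

from itertools import combinations

labels = {                       # artifact id -> labels attached to it
    "c1": {"mould"},
    "c2": {"mold"},
    "c3": {"curing"},
}

synonyms = {frozenset({"mould", "mold"})}   # tiny stand-in for a lexical resource

def potentially_equivalent(labels, synonyms):
    """Pairs of distinct artifacts sharing a label or carrying synonymous labels."""
    pairs = set()
    for (a, la), (b, lb) in combinations(labels.items(), 2):
        shared = bool(la & lb)
        syn = any(frozenset({x, y}) in synonyms for x in la for y in lb)
        if shared or syn:
            pairs.add((a, b))
    return pairs

print(potentially_equivalent(labels, synonyms))  # {('c1', 'c2')}
```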
3.2 Positioning state-of-the-art relevant problem classes into the proposed framework
The precise definitions of the proposed framework allow us to classify most of the ontology quality problems described in the literature. Table 1 presents our classification of the different problems mentioned in Section 2. Some of the problems described in the literature may correspond to more than one class of problems in our framework, as the definitions of these problems are often very broad and sometimes ambiguous.
Table 1 reveals, at first glance, that the proposed framework provides additional problems that, to our knowledge, are not directly pointed out in the current literature about ontology quality and evaluation (but may be mentioned elsewhere). These problems are: no adapted and certified ontology version, indistinguishable artifacts, socially meaningless, high complexity of the reasoning task, and incorrect reasoning. However, other problems, while covered, are in our opinion too narrowly defined in the existing literature about ontology quality and evaluation. For instance, no standard formalization refers to very simple situations, whereas we refer to complete non-standard theories.
A deeper analysis of Table 1 reveals that the "logical anti-patterns" presented in [7, 25] belong to the logical ground category and focus on the unadapted ontologies error and the unsatisfiability unsuitable situation. The "non-logical anti-patterns" presented in [7, 25] partially cover the logical ground unsuitable situations. The "guidelines" presented in [7, 25] span only unsuitable situations, from both the logical and the social ground categories.
What is qualified as "inconsistency" in [14] spans errors and unsuitable situations and also (as in the case of "semantic inconsistency") the two dimensions (logical and social), making, in our opinion, the terminology somewhat confusing. According to our framework, we perceive "circularity in taxonomies", as defined in [14], as an unsuitable situation (logical equivalence of distinct artifacts) because, from a logical point of view, it only means that artifacts are equivalent (not requiring a fixpoint semantics). However, "circularity in taxonomies" can also be seen as a social contradiction if actors assign distinct meanings to the various involved artifacts. The problems presented as "incompleteness errors" in [13] belong to the incomplete ontologies class of logical errors. The "redundancy errors" fit, in our classification, within the ontology not minimal class of logical unsuitable situations.
None of the "design anomalies" presented in [2] is perceived as a logical error. Two of them correspond to a logical unsuitable situation (logically indistinguishable artifacts), one to a social error (perception of design errors) and the last one to a social unsuitable situation (no standard formalization).
Concerning "pitfalls" [24], the most remarkable fact concerns what we call incomplete reasoning. Indeed, introducing ad-hoc relations such as is-a, instance-of, etc., replacing the "standard" relations such as subsumption, member-of, etc., should not be considered as a case of incomplete ontologies but as a case of incomplete reasoning. This is because accepting a specific ontological commit-

Table 1. Positioning state-of-the-art relevant problem classes into the proposed framework.

Logical ground / Errors
1. Logical inconsistency
   - inconsistency error: "partition errors - common instances in disjoint decomposition"
2. Unadapted ontologies
   - inconsistency errors: "partition errors - common classes in disjoint decomposition", "semantic inconsistency"
   - logical anti-patterns: "OnlynessIsLoneliness", "UniversalExistence", "AndIsOR", "EquivalenceIsDifference"
   - pitfalls: P5 (wrong inverse relationship, WI), P14 (misusing "allValuesFrom", MD), P15 (misusing "not some"/"some not", WI), P18 (specifying too much the domain / range, WI), P19 (swapping ∩ and ∪, WI)
3. Incomplete ontologies
   - incompleteness errors: "incomplete concept classification", "disjoint / exhaustive knowledge omission"
   - pitfalls: P3 ("is a" instead of "subclass-of", MD), P9 (missing basic information, RC & RWM), P10 (missing disjointness, RWM), P11 (missing domain / range in prop., NI & OU), P12 (missing equiv. prop., NI & OU), P13 (missing inv. rel., NI & OU), P16 (misusing primitive and defined classes, NI)
4. Incorrect reasoning
   - (none)
5. Incomplete reasoning
   - pitfalls: P3 (using "is a" instead of "subclass-of", MD), P24 (using recursive def., MD)

Logical ground / Unsuitable situations
6. Logical equivalence of distinct artifacts
   - inconsistency error: "circularity"
   - pitfall: P6 (cycles in the hierarchy, WI)
   - non-logical anti-pattern: "SynonymeOfEquivalence"
7. Logically indistinguishable artifacts
   - pitfall: P4 (unconnected ontology elements, RC)
   - design anomalies: "lazy concepts" and "chains of inheritance"
8. OR artifacts
   - pitfall: P7 (merging concepts to form a class, MD & OU)
9. AND artifacts
   - pitfall: P7 (merging concepts to form a class, MD & OU)
10. Unsatisfiability
   - inconsistency error: "partition errors - common classes in disjoint decomposition"
   - logical anti-patterns: "OnlynessIsLoneliness", "UniversalExistence", "AndIsOR", "EquivalenceIsDifference"
11. High complexity of the reasoning task
   - (none)
12. Ontology not minimal
   - redundancy error: "redundancy of taxonomic relations"
   - pitfalls: P3 (using "is a" instead of "subclass-of", MD), P7 (merging concepts to form a class, MD & OU), P21 (miscellaneous class, MD)
   - non-logical anti-pattern: "SomeMeansAtLeastOne"
   - guidelines: "Domain&CardinalityConstraints", "MinIsZero"

Social ground / Errors
1. Social contradiction
   - inconsistency error: "semantic inconsistency"
   - logical anti-pattern: "AndIsOR"
   - pitfalls: P1 (polysemic elements, MD), P5 (wrong inv. rel., WI), P14 (misusing "allValuesFrom", MD), P15 (misusing "not some"/"some not", WI), P19 (swapping ∩ and ∪, WI)
2. Perception of design errors
   - pitfalls: P17 (specializing too much the hierarchy, MD), P18 (specifying too much the domain / range, WI), P23 (using incorrectly ontology elements, MD)
   - non-logical anti-pattern: "SumOfSome"
   - design anomaly: "lonely disjoints"
3. Socially meaningless
   - (none)
4. Social incompleteness
   - pitfalls: P12 (missing equiv. prop., NI & OU), P13 (missing inv. rel., NI & OU), P16 (misusing primitive and defined classes, NI)

Social ground / Unsuitable situations
5. Lack of / poor textual explanations
   - pitfall: P8 (missing annotation, OC & OU)
6. Potentially equivalent artifacts
   - pitfall: P2 (synonyms as classes, MD & OU)
7. Indistinguishable artifacts
   - (none)
8. Polysemic labels
   - pitfall: P1 (polysemic elements, MD & OU)
9. Flatness of the ontology
   - (none)
10. No standard formalization
   - pitfalls: P20 (swapping label and comment, OU), P22 (using different naming criteria in the ontology, OC)
   - guidelines: "GroupAxioms", "DisjointnessOfComplement" and "Domain&CardinalityConstraints"
   - design anomaly: "property clumps"
11. No adapted and certified ontology version
   - (none)
12. Useless artifacts
   - pitfall: P21 (using a miscellaneous class, MD & OU)
