
Showing papers on "Knowledge representation and reasoning" published in 2010


Journal ArticleDOI
TL;DR: This paper provides an introduction to ontology-based information extraction and reviews the details of different OBIE systems developed so far, identifying a common architecture among these systems and classifying them based on different factors, which leads to a better understanding of their operation.
Abstract: Information extraction (IE) aims to retrieve certain types of information from natural language text by processing it automatically. For example, an IE system might retrieve information about geopolitical indicators of countries from a set of web pages while ignoring other types of information. Ontology-based information extraction (OBIE) has recently emerged as a subfield of information extraction. Here, ontologies - which provide formal and explicit specifications of conceptualizations - play a crucial role in the IE process. Because of the use of ontologies, this field is related to knowledge representation and has the potential to assist the development of the Semantic Web. In this paper, we provide an introduction to ontology-based information extraction and review the details of different OBIE systems developed so far. We attempt to identify a common architecture among these systems and classify them based on different factors, which leads to a better understanding of their operation. We also discuss the implementation details of these systems including the tools used by them and the metrics used to measure their performance. In addition, we attempt to identify the possible future directions for this field.
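
The common thread in these systems is that the ontology itself drives the extraction step. Below is a minimal sketch of that idea, assuming a toy ontology whose properties carry naive lexical patterns; the class, the patterns, and the example sentence are invented for illustration and come from no surveyed system.

    import re

    # Toy ontology: one class with datatype properties and naive lexical
    # patterns. All names and patterns here are illustrative assumptions.
    ONTOLOGY = {
        "Country": {
            "population": re.compile(r"population of ([\d,]+)"),
            "capital": re.compile(r"capital (?:city )?is (\w+)"),
        }
    }

    def extract(text):
        """Instantiate ontology classes from text, the core OBIE step."""
        instances = []
        for cls, props in ONTOLOGY.items():
            inst = {"class": cls}
            for prop, pattern in props.items():
                match = pattern.search(text)
                if match:
                    inst[prop] = match.group(1)
            if len(inst) > 1:   # keep instances with at least one property
                instances.append(inst)
        return instances

    print(extract("Its capital is Strelsau and it has a population of 1,500,000."))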

409 citations


Journal ArticleDOI
TL;DR: 3-D vision on humanoid robots with complex oculomotor systems is often difficult due to the modeling uncertainties, but it is shown that these uncertainties can be accounted for by the proposed approach.
Abstract: Acquisition of new sensorimotor knowledge by imitation is a promising paradigm for robot learning. To be effective, action learning should not be limited to direct replication of movements obtained during training but must also enable the generation of actions in situations a robot has never encountered before. This paper describes a methodology that enables the generalization of the available sensorimotor knowledge. New actions are synthesized by the application of statistical methods, where the goal and other characteristics of an action are utilized as queries to create a suitable control policy, taking into account the current state of the world. Nonlinear dynamic systems are employed as a motor representation. The proposed approach enables the generation of a wide range of policies without requiring an expert to modify the underlying representations to account for different task-specific features and perceptual feedback. The paper also demonstrates that the proposed methodology can be integrated with an active vision system of a humanoid robot. 3-D vision data are used to provide query points for statistical generalization. While 3-D vision on humanoid robots with complex oculomotor systems is often difficult due to the modeling uncertainties, we show that these uncertainties can be accounted for by the proposed approach.
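
As a rough illustration of the statistical-generalization step, the sketch below synthesizes policy parameters for an unseen goal by locally weighted averaging over demonstrated (query, parameters) pairs. This is a deliberately simplified stand-in for the methods in the paper; the data and bandwidth are invented placeholders.

    import numpy as np

    # Each demonstrated action is a (query, parameters) pair, e.g.
    # query = target position, parameters = weights of a motor primitive.
    queries = np.array([[0.1], [0.4], [0.7], [1.0]])
    params = np.array([[1.0, 0.2], [1.8, 0.5], [2.5, 0.9], [3.1, 1.4]])

    def generalize(q_new, bandwidth=0.2):
        """Locally weighted average of demonstrated parameters, a simple
        stand-in for the statistical generalization used in the paper."""
        w = np.exp(-((queries - q_new) ** 2).sum(axis=1) / (2 * bandwidth ** 2))
        w /= w.sum()
        return w @ params

    print(generalize(0.55))   # parameters for a goal never demonstrated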

334 citations


Journal ArticleDOI
17 Jun 2010
TL;DR: An image parsing to text description (I2T) framework that generates text descriptions of image and video content based on image understanding and uses automatic methods to parse image/video in specific domains and generate text reports that are useful for real-world applications.
Abstract: In this paper, we present an image parsing to text description (I2T) framework that generates text descriptions of image and video content based on image understanding. The proposed I2T framework follows three steps: 1) input images (or video frames) are decomposed into their constituent visual patterns by an image parsing engine, in a spirit similar to parsing sentences in natural language; 2) the image parsing results are converted into semantic representation in the form of the Web Ontology Language (OWL), which enables seamless integration with general knowledge bases; and 3) a text generation engine converts the results from previous steps into semantically meaningful, human readable, and query-able text reports. The centerpiece of the I2T framework is an and-or graph (AoG) visual knowledge representation, which provides a graphical representation serving as prior knowledge for representing diverse visual patterns and provides top-down hypotheses during the image parsing. The AoG embodies vocabularies of visual elements including primitives, parts, objects, scenes as well as a stochastic image grammar that specifies syntactic relations (i.e., compositional) and semantic relations (e.g., categorical, spatial, temporal, and functional) between these visual elements. Therefore, the AoG is a unified model of both categorical and symbolic representations of visual knowledge. The proposed I2T framework has two objectives. First, we use a semiautomatic method to parse images from the Internet in order to build an AoG for visual knowledge representation. Our goal is to make the parsing process more and more automatic using the learned AoG model. Second, we use automatic methods to parse image/video in specific domains and generate text reports that are useful for real-world applications. In the case studies at the end of this paper, we demonstrate two automatic I2T systems: a maritime and urban scene video surveillance system and a real-time automatic driving scene understanding system.
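
A minimal sketch of step 2, converting parse results into OWL-style triples; the parse-graph fragment and vocabulary below are invented placeholders, not the paper's AoG or its OWL serialization.

    # Invented parse-graph fragment: nodes with types and relations.
    parse_graph = {
        "obj1": {"type": "Boat", "inside": "region4"},
        "obj2": {"type": "Dock"},
        "rel1": {"type": "MovesToward", "subject": "obj1", "object": "obj2"},
    }

    def to_triples(graph):
        """Flatten parse nodes into (subject, predicate, object) triples."""
        triples = []
        for node, attrs in graph.items():
            triples.append((node, "rdf:type", attrs["type"]))
            for key, val in attrs.items():
                if key != "type":
                    triples.append((node, key, val))
        return triples

    for t in to_triples(parse_graph):
        print(t)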

322 citations


Proceedings ArticleDOI
03 Dec 2010
TL;DR: CRAM equips autonomous robots with lightweight reasoning mechanisms that can infer control decisions rather than requiring the decisions to be preprogrammed, which makes CRAM-programmed robots much more flexible, reliable, and general than control programs that lack such cognitive capabilities.
Abstract: This paper describes CRAM (Cognitive Robot Abstract Machine) as a software toolbox for the design, the implementation, and the deployment of cognition-enabled autonomous robots performing everyday manipulation activities. CRAM equips autonomous robots with lightweight reasoning mechanisms that can infer control decisions rather than requiring the decisions to be preprogrammed. This way CRAM-programmed autonomous robots are much more flexible, reliable, and general than control programs that lack such cognitive capabilities. CRAM does not require the whole domain to be stated explicitly in an abstract knowledge base. Rather, it grounds symbolic expressions in the knowledge representation into the perception and actuation routines and into the essential data structures of the control programs. In the accompanying video, we show complex mobile manipulation tasks performed by our household robot that were realized using the CRAM infrastructure.

246 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed approach can work effectively and that the menu can be provided as a reference for the involved diabetics after diet validation by domain experts.
Abstract: It has been widely pointed out that classical ontology is not sufficient to deal with imprecise and vague knowledge for some real-world applications like personal diabetic-diet recommendation. On the other hand, fuzzy ontology can effectively help to handle and process uncertain data and knowledge. This paper proposes a novel ontology model, which is based on interval type-2 fuzzy sets (T2FSs), called type-2 fuzzy ontology (T2FO), with applications to knowledge representation in the field of personal diabetic-diet recommendation. The T2FO is composed of 1) a type-2 fuzzy personal profile ontology (type-2 FPPO); 2) a type-2 fuzzy food ontology (type-2 FFO); and 3) a type-2 fuzzy-personal food ontology (type-2 FPFO). In addition, the paper also presents a T2FS-based intelligent diet-recommendation agent (IDRA), including 1) T2FS construction; 2) a T2FS-based personal ontology filter; 3) a T2FS-based fuzzy inference mechanism; 4) a T2FS-based diet-planning mechanism; 5) a T2FS-based menu-recommendation mechanism; and 6) a T2FS-based semantic-description mechanism. In the proposed approach, first, the domain experts plan the diet goal for the involved diabetics and create the nutrition facts of common Taiwanese food. Second, the involved diabetics are requested to routinely input eaten items. Third, the ontology-creating mechanism constructs a T2FO, including a type-2 FPPO, a type-2 FFO, and a set of type-2 FPFOs. Finally, the T2FS-based IDRA retrieves the built T2FO to recommend a personal diabetic meal plan. The experimental results show that the proposed approach can work effectively and that the menu can be provided as a reference for the involved diabetics after diet validation by domain experts.
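
To make the underlying representation concrete: in an interval type-2 fuzzy set, membership is not a single degree but an interval bounded by a lower and an upper type-1 membership function. The sketch below evaluates such an interval for a hypothetical "high sugar content" concept; the triangular parameters and the concept itself are illustrative assumptions, not the paper's T2FO definitions.

    import numpy as np

    def tri(x, a, b, c):
        """Ordinary (type-1) triangular membership function."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def it2_membership(x, lower=(20, 50, 80), upper=(10, 50, 90)):
        """Membership interval of x in the invented 'high sugar' concept;
        the lower function is contained in the upper one (the footprint
        of uncertainty)."""
        return tri(x, *lower), tri(x, *upper)

    lo, up = it2_membership(35.0)   # grams of sugar per serving
    print(f"high-sugar degree in [{lo:.2f}, {up:.2f}]")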

229 citations


Journal ArticleDOI
23 Apr 2010-Science
TL;DR: The Landscape Model captures the reading process and the influences of reader characteristics and text characteristics and suggests factors that can optimize—or jeopardize—learning science from text.
Abstract: Texts form a powerful tool in teaching concepts and principles in science. How do readers extract information from a text, and what are the limitations in this process? Central to comprehension of and learning from a text is the construction of a coherent mental representation that integrates the textual information and relevant background knowledge. This representation engenders learning if it expands the reader's existing knowledge base or if it corrects misconceptions in this knowledge base. The Landscape Model captures the reading process and the influences of reader characteristics (such as working-memory capacity, reading goal, prior knowledge, and inferential skills) and text characteristics (such as content/structure of presented information, processing demands, and textual cues). The model suggests factors that can optimize--or jeopardize--learning science from text.

229 citations


Journal ArticleDOI
TL;DR: This paper provides a learning algorithm based on refinement operators for the description logic ALCQ including support for concrete roles and shows that the approach is superior to other learning approaches on description logics, and is competitive with established ILP systems.
Abstract: With the advent of the Semantic Web, description logics have become one of the most prominent paradigms for knowledge representation and reasoning. Progress in research and applications, however, is constrained by the lack of well-structured knowledge bases consisting of a sophisticated schema and instance data adhering to this schema. It is paramount that suitable automated methods for their acquisition, maintenance, and evolution will be developed. In this paper, we provide a learning algorithm based on refinement operators for the description logic ALCQ including support for concrete roles. We develop the algorithm from thorough theoretical foundations by identifying possible abstract property combinations which refinement operators for description logics can have. Using these investigations as a basis, we derive a practically useful complete and proper refinement operator. The operator is then cast into a learning algorithm and evaluated using our implementation DL-Learner. The results of the evaluation show that our approach is superior to other learning approaches on description logics, and is competitive with established ILP systems.
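
The key ingredient is a downward refinement operator, which rewrites a concept into more specific concepts. Below is a toy sketch over a far simpler concept language than ALCQ; the class hierarchy and refinement cases are invented examples, not the paper's operator.

    # Toy atomic-class hierarchy (an invented example).
    SUBCLASSES = {"Person": ["Parent"], "Parent": ["Father", "Mother"]}

    def refine(concept):
        """Yield concepts that are more specific than the input."""
        if isinstance(concept, str):                 # atomic class
            for sub in SUBCLASSES.get(concept, []):
                yield sub                            # specialize downward
            yield ("and", concept, "Person")         # add a conjunct
        elif concept[0] == "and":
            _, left, right = concept
            for l in refine(left):
                yield ("and", l, right)
            for r in refine(right):
                yield ("and", left, r)

    print(list(refine("Parent")))

A learner in the style of DL-Learner would score each refinement against positive and negative examples and expand only the most promising candidates.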

223 citations


Journal ArticleDOI
TL;DR: An ontology model of a Product Data and Knowledge Management Semantic Object Model for PLM has been developed, with the aim of bringing the advantages and features of ontologies into the model.

177 citations


Proceedings ArticleDOI
11 Jul 2010
TL;DR: This paper discusses three paradigms ensuring decidability (chase termination, guardedness, and stickiness) for Datalog+/-, which extends plain Datalog by features such as existentially quantified rule heads while restricting the rule syntax so as to achieve decidability and tractability.
Abstract: This paper summarizes results on a recently introduced family of Datalog-based languages, called Datalog+/-, which is a new framework for tractable ontology querying, and for a variety of other applications. Datalog+/- extends plain Datalog by features such as existentially quantified rule heads and, at the same time, restricts the rule syntax so as to achieve decidability and tractability. In particular, we discuss three paradigms ensuring decidability: chase termination, guardedness, and stickiness.
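
The central technical device behind these languages is the chase: existential rules fire by inventing labelled nulls for the existentially quantified head variables. A minimal sketch of one chase step for a single guarded rule follows; the rule, relation names, and representation are assumptions for illustration.

    import itertools

    # Single guarded existential rule: person(X) -> exists Y. father(Y, X), person(Y).
    facts = {("person", "alice")}
    fresh = itertools.count()

    def chase_step(facts):
        """Fire the rule once wherever it is not yet satisfied."""
        new = set()
        for fact in facts:
            if fact[0] == "person":
                x = fact[1]
                if not any(f[0] == "father" and f[2] == x for f in facts):
                    y = f"_null{next(fresh)}"   # labelled null: a fresh value
                    new |= {("father", y, x), ("person", y)}
        return facts | new

    facts = chase_step(facts)
    print(sorted(facts))   # father(_null0, alice) and person(_null0) were added

Note that the new person(_null0) atom would let the rule fire again, ad infinitum; the paradigms the paper discusses are exactly about taming or avoiding such infinite chases while keeping query answering decidable and tractable.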

173 citations


Journal ArticleDOI
TL;DR: This work surveys the literature and identifies three aspects of eligibility criteria knowledge representation as essential constructs of a formal representation for eligibility criteria, which should inform the development and choice of these constructs toward cost-effective knowledge representation efforts.

173 citations


Book ChapterDOI
15 Jul 2010
TL;DR: HyperGraphDB is an embedded, transactional database designed as a universal data model for highly complex, large scale knowledge representation applications such as found in artificial intelligence, bioinformatics and natural language processing.
Abstract: We present HyperGraphDB, a novel graph database based on generalized hypergraphs where hyperedges can contain other hyperedges. This generalization automatically reifies every entity expressed in the database thus removing many of the usual difficulties in dealing with higher-order relationships. An open two-layered architecture of the data organization yields a highly customizable system where specific domain representations can be optimized while remaining within a uniform conceptual framework. HyperGraphDB is an embedded, transactional database designed as a universal data model for highly complex, large scale knowledge representation applications such as found in artificial intelligence, bioinformatics and natural language processing.
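
A toy in-memory model of the generalized-hypergraph idea (not HyperGraphDB's actual API): every atom, node or hyperedge, gets a handle, so a hyperedge can point at other hyperedges and higher-order statements need no extra reification machinery.

    import itertools

    _handles = itertools.count()
    atoms = {}

    def add(value, targets=()):
        """Store an atom; a non-empty target tuple makes it a hyperedge."""
        handle = next(_handles)
        atoms[handle] = (value, tuple(targets))
        return handle

    alice = add("Alice")
    bob = add("Bob")
    knows = add("knows", [alice, bob])       # ordinary relationship
    # A statement about the relationship itself: no reification step is
    # needed, because the edge already has a handle of its own.
    since = add(("since", 2010), [knows])

    print(atoms[knows], atoms[since])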

Journal ArticleDOI
TL;DR: In this article, the authors consider theories about processes of visual perception and perception-based knowledge representation (VPR) in order to explain difficulties encountered in figural processing in junior high school geometry tasks, and take advantage of the following perspectives of VPR: (1) perceptual organization: Gestalt principles; (2) recognition: bottom-up and top-down processing; and (3) representation of perception-based knowledge: verbal vs. pictorial representation, mental images and hierarchical structure of images.
Abstract: In this paper, we consider theories about processes of visual perception and perception-based knowledge representation (VPR) in order to explain difficulties encountered in figural processing in junior high school geometry tasks. In order to analyze such difficulties, we take advantage of the following perspectives of VPR: (1) Perceptual organization: Gestalt principles, (2) recognition: bottom-up and top-down processing; and (3) representation of perception-based knowledge: verbal vs. pictorial representation, mental images and hierarchical structure of images. Examples given in the paper were mostly taken from Gal's study (2005), which aimed at identifying and analyzing Problematic Learning Situations (after Gal & Linchevski, 2000) in junior high school geometry classes. Gal's study (2005) suggests that as this theoretical perspective became part of teachers' pedagogic content knowledge, the teachers became more aware of their students' thinking processes, and their ability to analyze and cope with their students' difficulties in geometry improved.

Proceedings Article
23 Aug 2010
TL;DR: The most mature of these novel languages are presented, showing how they can balance the disadvantages of natural languages and formal languages for knowledge representation, and discussing how domain specialists can be supported in writing specifications in controlled natural language.
Abstract: This paper presents a survey of research in controlled natural languages that can be used as high-level knowledge representation languages. Over the past 10 years or so, a number of machine-oriented controlled natural languages have emerged that can be used as high-level interface languages to various kinds of knowledge systems. These languages are relevant to the area of computational linguistics since they have two very interesting properties: firstly, they look informal like natural languages and are therefore easier to write and understand by humans than formal languages; secondly, they are precisely defined subsets of natural languages and can be translated automatically (and often deterministically) into a formal target language and then be used for automated reasoning. We present and compare the most mature of these novel languages, show how they can balance the disadvantages of natural languages and formal languages for knowledge representation, and discuss how domain specialists can be supported in writing specifications in controlled natural language.

Journal ArticleDOI
TL;DR: It is demonstrated that different types of minimal modules induced by inseparability relations can be automatically extracted from real-world medium-size DL-Lite ontologies by composing the known tractable syntactic locality-based module extraction algorithm with the authors' non-tractable extraction algorithms and using the multi-engine QBF solver aqme.

Book
15 Sep 2010
TL;DR: It is shown that the DL SROIQ -- the basis for the ongoing standardisation of OWL 2 -- can completely internalise DL rules, and DL rules enable us to significantly extend the tractable DLs EL++ and DLP.
Abstract: We introduce description logic (DL) rules as a new rule-based formalism for knowledge representation in DLs. As a fragment of the Semantic Web Rule Language SWRL, DL rules allow for a tight integration with DL knowledge bases. In contrast to SWRL, however, the combination of DL rules with expressive description logics remains decidable, and we show that the DL SROIQ -- the basis for the ongoing standardisation of OWL 2 -- can completely internalise DL rules. On the other hand, DL rules capture many expressive features of SROIQ that are not available in simpler DLs yet. While reasoning in SROIQ is highly intractable, it turns out that DL rules can be introduced to various lightweight DLs without increasing their worst-case complexity. In particular, DL rules enable us to significantly extend the tractable DLs EL++ and DLP.

Journal ArticleDOI
01 Dec 2010
TL;DR: It is concluded that when integrating a real-life application like BibSonomy into research, certain constraints have to be considered; but in general, the tight interplay between the scientific work and the running system has made BibSonomy a valuable platform for demonstrating and evaluating Web 2.0 research.
Abstract: Social resource sharing systems are central elements of the Web 2.0 and use the same kind of lightweight knowledge representation, called folksonomy. Their large user communities and ever-growing networks of user-generated content have made them an attractive object of investigation for researchers from different disciplines like Social Network Analysis, Data Mining, Information Retrieval or Knowledge Discovery. In this paper, we summarize and extend our work on different aspects of this branch of Web 2.0 research, demonstrated and evaluated within our own social bookmark and publication sharing system BibSonomy, which is currently among the three most popular systems of its kind. We structure this presentation along the different interaction phases of a user with our system, coupling the relevant research questions of each phase with the corresponding implementation issues. This approach reveals in a systematic fashion important aspects and results of the broad bandwidth of folksonomy research like capturing of emergent semantics, spam detection, ranking algorithms, analogies to search engine log data, personalized tag recommendations and information extraction techniques. We conclude that when integrating a real-life application like BibSonomy into research, certain constraints have to be considered; but in general, the tight interplay between our scientific work and the running system has made BibSonomy a valuable platform for demonstrating and evaluating Web 2.0 research.
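
At its core, a folksonomy is just a set of (user, tag, resource) assignments. The sketch below shows that data model together with a deliberately naive frequency-based tag recommender; the dataset is invented, and the paper's recommendation and ranking work is far more sophisticated than this.

    from collections import Counter

    # Tag assignments: (user, tag, resource) triples (invented examples).
    tas = [
        ("u1", "semantic-web", "r1"), ("u1", "ontology", "r1"),
        ("u2", "semantic-web", "r1"), ("u2", "folksonomy", "r2"),
        ("u3", "ontology", "r1"),     ("u3", "semantic-web", "r2"),
    ]

    def recommend_tags(resource, k=2):
        """Rank tags by how often they were assigned to this resource."""
        counts = Counter(tag for _, tag, res in tas if res == resource)
        return [tag for tag, _ in counts.most_common(k)]

    print(recommend_tags("r1"))   # ['semantic-web', 'ontology']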

Journal ArticleDOI
TL;DR: A conceptual framework, based on a taxonomy of the most important argumentation models, approaches and systems found in the literature, is proposed, which highlights the similarities and differences between these argumentation models.
Abstract: Understanding argumentation and its role in human reasoning has been a continuous subject of investigation for scholars from the ancient Greek philosophers to current researchers in philosophy, logic and artificial intelligence. In recent years, argumentation models have been used in different areas such as knowledge representation, explanation, proof elaboration, commonsense reasoning, logic programming, legal reasoning, decision making, and negotiation. However, these models address quite specific needs and there is a need for a conceptual framework that would organize and compare existing argumentation-based models and methods. Such a framework would be very useful especially for researchers and practitioners who want to select appropriate argumentation models or techniques to be incorporated in new software systems with argumentation capabilities. In this paper, we propose such a conceptual framework, based on a taxonomy of the most important argumentation models, approaches and systems found in the literature. This framework highlights the similarities and differences between these argumentation models. As an illustration of the practical use of this framework, we present a case study which shows how we used this framework to select and enrich an argumentation model in a knowledge acquisition project which aimed at representing argumentative knowledge contained in texts critiquing military courses of action.
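
As a concrete instance of one model family such a taxonomy covers, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework by a standard fixpoint; the arguments and attack relation are invented examples.

    # c attacks b, and b attacks a (invented example).
    arguments = {"a", "b", "c"}
    attacks = {("b", "a"), ("c", "b")}

    def grounded_extension(arguments, attacks):
        """Accept unattacked or defended arguments, reject the rest."""
        attackers = {x: {s for s, t in attacks if t == x} for x in arguments}
        accepted, rejected = set(), set()
        changed = True
        while changed:
            changed = False
            for x in arguments - accepted - rejected:
                if attackers[x] <= rejected:     # all attackers defeated
                    accepted.add(x)
                    changed = True
                elif attackers[x] & accepted:    # attacked by an accepted arg
                    rejected.add(x)
                    changed = True
        return accepted

    print(grounded_extension(arguments, attacks))   # {'a', 'c'}: c defends a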

Journal ArticleDOI
TL;DR: It is argued that reasoning for the Semantic Web should be understood as “shared inference,” which is not necessarily based on deductive methods.
Abstract: The realization of Semantic Web reasoning is central to substantiating the Semantic Web vision. However, current mainstream research on this topic faces serious challenges, which forces us to question established lines of research and to rethink the underlying approaches. We argue that reasoning for the Semantic Web should be understood as “shared inference,” which is not necessarily based on deductive methods. Model-theoretic semantics (and sound and complete reasoning based on it) functions as a gold standard, but applications dealing with large-scale and noisy data usually cannot afford the required runtimes. Approximate methods, including deductive ones, but also approaches based on entirely different methods like machine learning or nature-inspired computing need to be investigated, while quality assurance needs to be done in terms of precision and recall values (as in information retrieval) and not necessarily in terms of soundness and completeness of the underlying algorithms.
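
The proposed quality-assurance view is easy to state operationally: score the facts an approximate reasoner infers against a gold-standard closure by precision and recall. A minimal sketch with placeholder fact sets:

    gold = {"A(x)", "B(x)", "C(y)", "D(z)"}       # sound-and-complete closure
    inferred = {"A(x)", "B(x)", "E(y)"}            # approximate reasoner output

    precision = len(gold & inferred) / len(inferred)
    recall = len(gold & inferred) / len(gold)
    print(f"precision={precision:.2f} recall={recall:.2f}")   # 0.67, 0.50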

Proceedings ArticleDOI
01 Dec 2010
TL;DR: This paper presents KNOWROB-MAP, a system for building environment models for robots by combining spatial information about objects in the environment with encyclopedic knowledge about the types and properties of objects, with common-sense knowledge describing what the objects can be used for.
Abstract: Autonomous household robots are supposed to accomplish complex tasks like cleaning the dishes which involve both navigation and manipulation within the environment. For navigation, spatial information is mostly sufficient, but manipulation tasks raise the demand for deeper knowledge about objects, such as their types, their functions, or the way they can be used. We present KNOWROB-MAP, a system for building environment models for robots by combining spatial information about objects in the environment with encyclopedic knowledge about the types and properties of objects, with common-sense knowledge describing what the objects can be used for, and with knowledge derived from observations of human activities by learning statistical relational models. In this paper, we describe the concept and implementation of KNOWROB-MAP and present several examples demonstrating the range of information the system can provide to autonomous robots.

Journal ArticleDOI
TL;DR: This paper extends soft sets with DLs by using the concepts of DLs to act as the parameters of soft sets, and proves that certain De Morgan's laws hold in the extended soft set theory with respect to the operations defined for the extended soft sets.
Abstract: Molodtsov initiated the concept of soft set theory, which can be used as a generic mathematical tool for dealing with uncertainty. Description Logics (DLs) are a family of knowledge representation languages which can be used to represent the terminological knowledge of an application domain in a structured and formally well-understood way. The current research progress and the existing problems of soft set theory are analyzed. In this paper we extend soft sets with DLs, i.e., present an extended soft set theory by using the concepts of DLs to act as the parameters of soft sets. We define some operations for the extended soft sets. Moreover, we prove that certain De Morgan's laws hold in the extended soft set theory with respect to these operations.
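
A numeric sanity check of the De Morgan claim on ordinary soft sets (in the paper the parameters would be DL concepts; the universe and soft sets below are invented examples):

    # A soft set over universe U maps each parameter to a subset of U.
    U = {"a", "b", "c", "d"}
    F = {"cheap": {"a", "b"}, "modern": {"b", "c"}}
    G = {"cheap": {"b", "d"}, "modern": {"c"}}

    def soft_union(F, G):
        return {p: F[p] | G[p] for p in F}

    def soft_intersect(F, G):
        return {p: F[p] & G[p] for p in F}

    def soft_complement(F):
        return {p: U - F[p] for p in F}

    # De Morgan, parameter by parameter: (F union G)^c == F^c intersect G^c
    print(soft_complement(soft_union(F, G)) ==
          soft_intersect(soft_complement(F), soft_complement(G)))   # True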

Journal ArticleDOI
17 Jun 2010
TL;DR: Triads provided with such concept maps acquired more knowledge about the others’ knowledge structures and information, focused while collaborating mainly on problem-relevant information, and therefore solved the problems faster and more often correctly, compared to triads with no access to their collaborators’ maps.
Abstract: For collaboration in learning situations, it is important to know what the collaborators know. However, developing such knowledge is difficult, especially for newly formed groups participating in a computer-supported collaboration. The solution for this problem described in this paper is to provide group members with access to the knowledge structures and the information resources of their collaboration partners in the form of digital concept maps. In an empirical study, 20 triads having access to such maps and 20 triads collaborating without such maps are compared regarding their group performance in problem-solving tasks. Results showed that the triads provided with such concept maps acquired more knowledge about the others’ knowledge structures and information, focused while collaborating mainly on problem-relevant information, and therefore solved the problems faster and more often correctly, compared to triads with no access to their collaborators’ maps.

Journal ArticleDOI
TL;DR: In this article, the notion of treewidth has been applied to logic-based reasoning problems such as abduction, closed world reasoning, circumscription, and disjunctive logic programming.

Journal ArticleDOI
TL;DR: This paper presents the integration of methodologies with a model of knowledge for conceptual design in accordance with model-driven engineering; it extends the FBS model and presents its practical implementation through an ontology and a language such as SysML.

DissertationDOI
06 Nov 2010
TL;DR: The goal of this thesis is to give the research area of CNLs for knowledge representation a shift in perspective: from the present explorative and proof-of-concept-based approaches to a more engineering-focused point of view.
Abstract: Knowledge representation is a long-standing research area of computer science that aims at representing human knowledge in a form that computers can interpret. Most knowledge representation approaches, however, have suffered from poor user interfaces. It turns out to be difficult for users to learn and use the logic-based languages in which the knowledge has to be encoded. A new approach to design more intuitive but still reliable user interfaces for knowledge representation systems is the use of controlled natural language (CNL). CNLs are subsets of natural languages that are restricted in a way that allows their automatic translation into formal logic. A number of CNLs have been developed but the resulting tools are mostly just prototypes so far. Furthermore, nobody has yet been able to provide strong evidence that CNLs are indeed easier to understand than other logic-based languages. The goal of this thesis is to give the research area of CNLs for knowledge representation a shift in perspective: from the present explorative and proof-of-concept-based approaches to a more engineering-focussed point of view. For this reason, I introduce theoretical and practical building blocks for the design and application of controlled English for the purpose of knowledge representation. I first show how CNLs can be defined in an adequate and simple way by the introduction of a novel grammar notation and I describe efficient algorithms to process such grammars. I then demonstrate how these theoretical concepts can be implemented and how CNLs can be embedded in knowledge representation tools so that they provide intuitive and powerful user interfaces that are accessible even to untrained users. Finally, I discuss how the understandability of CNLs can be evaluated. I argue that the understandability of CNLs cannot be assessed reliably with existing approaches, and for this reason I introduce a novel testing framework. Experiments based on this framework show that CNLs are not only easier to understand than comparable languages but also need less time to be learned and are preferred by users.
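
The core CNL idea can be shown in a few lines: a deliberately restricted English pattern with a deterministic translation into logic. The single pattern and output syntax below are illustrative assumptions; the thesis introduces a full grammar notation and parsing algorithms rather than regular expressions.

    import re

    # One controlled-English pattern (an invented, tiny fragment).
    RULE = re.compile(r"^Every (\w+) is an? (\w+)\.$")

    def translate(sentence):
        """Deterministically map a controlled sentence to a logic formula."""
        match = RULE.match(sentence)
        if not match:
            raise ValueError("outside the controlled language")
        sub, sup = match.groups()
        return f"forall X: {sub}(X) -> {sup}(X)"

    print(translate("Every dog is an animal."))   # forall X: dog(X) -> animal(X)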

Proceedings ArticleDOI
10 May 2010
TL;DR: The primary target of this work is human-robot collaboration, especially for service robots in complicated application scenarios; a series of case studies conducted on Ke Jia gave positive results, verifying its ability to acquire knowledge through spoken dialog with users, to solve problems autonomously by virtue of acquired causal knowledge, and to plan autonomously for complex tasks.
Abstract: The primary target of this work is human-robot collaboration, especially for service robots in complicated application scenarios. Three assumptions and four requirements are identified. State-of-the-art, general-purpose Natural Language Processing (NLP), Commonsense Reasoning (in particular, ASP), and Robotics techniques are integrated in a layered architecture. The architecture and mechanisms have been implemented on a service robot, Ke Jia. Instead of command languages, small limited segments of natural languages are employed in spoken dialog between Ke Jia and its users. The information in the dialog is extracted, classified and transferred into an inner representation by Ke Jia's NLP mechanism, and further used autonomously in problem-solving and planning. A series of case studies was conducted on Ke Jia with positive results, verifying its ability to acquire knowledge through spoken dialog with users, to solve problems autonomously by virtue of acquired causal knowledge, and to plan autonomously for complex tasks.

Journal ArticleDOI
TL;DR: This paper presents the (abstract) syntax and semantics of a rather elementary fuzzy extension of OWL, creating fuzzy OWL (f-OWL), and uses this extension to provide an investigation of the semantics of several f-OWL axioms, more precisely those which, in classical DLs, can be expressed in different but equivalent ways.

Journal ArticleDOI
TL;DR: The design and evaluation results are presented for a system called AURA, which enables domain experts in physics, chemistry, and biology to author a knowledge base and then allows a different set of users to ask novel questions against that knowledge base.
Abstract: In the Winter 2004 issue of AI Magazine, we reported Vulcan Inc.'s first step toward creating a question-answering system called "Digital Aristotle." The goal of that first step was to assess the state of the art in applied Knowledge Representation and Reasoning (KRR) by asking AI experts to represent 70 pages from the advanced placement (AP) chemistry syllabus and to deliver knowledge-based systems capable of answering questions from that syllabus. This paper reports the next step toward realizing a Digital Aristotle: we present the design and evaluation results for a system called AURA, which enables domain experts in physics, chemistry, and biology to author a knowledge base and then allows a different set of users to ask novel questions against that knowledge base. These results represent a substantial advance over what we reported in 2004, both in the breadth of covered subjects and in the provision of sophisticated technologies in knowledge representation and reasoning, natural language processing, and question answering to domain experts and novice users.

Journal ArticleDOI
01 Dec 2010
TL;DR: Results of this study facilitate tacit knowledge storage, management and sharing, providing knowledge requesters with accurate and comprehensive empirical knowledge for problem solving and decision support.
Abstract: In the knowledge economy era of the 21st century [14,17], the competitive advantage of enterprises has shifted from visible equipment, capital and labor in the past to invisible knowledge nowadays. Knowledge can be distinguished into tacit knowledge and explicit knowledge. Tacit knowledge largely encompasses empirical knowledge that is difficult to document and generally hidden inside personal mental models. The inability to transfer tacit knowledge to organizational knowledge would cause it to disappear after knowledge workers leave their posts, ultimately losing important intellectual assets for enterprises. Therefore, enterprises attempting to create higher knowledge value are highly concerned with how to transfer the personal empirical knowledge inside an enterprise into organizational explicit knowledge by using a systematic method to manage and share such valuable empirical knowledge effectively. This study develops a method of ontology-based empirical knowledge representation and reasoning, which adopts OWL (Web Ontology Language) to represent empirical knowledge in a structural way in order to help knowledge requesters clearly understand empirical knowledge. An ontology reasoning method is subsequently adopted to deduce empirical knowledge in order to share and reuse relevant empirical knowledge effectively. Specifically, this study involves the following tasks: (i) analyze characteristics of empirical knowledge, (ii) design an ontology-based multi-layer empirical knowledge representation model, (iii) design an ontology-based empirical knowledge concept schema, (iv) establish an OWL-based empirical knowledge ontology, (v) design reasoning rules for ontology-based empirical knowledge, (vi) develop a reasoning algorithm for ontology-based empirical knowledge, and (vii) implement an ontology-based empirical knowledge reasoning mechanism. Results of this study facilitate tacit knowledge storage, management and sharing, providing knowledge requesters with accurate and comprehensive empirical knowledge for problem solving and decision support.
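
As an illustration of the reasoning step, the sketch below forward-chains a single rule over a toy triple store of empirical knowledge; the triples and the rule are invented placeholders, not the paper's OWL ontology or rule set.

    # Toy knowledge base of (subject, predicate, object) triples.
    kb = {
        ("overheating_fault", "hasSymptom", "high_temp_alarm"),
        ("overheating_fault", "solvedBy", "clean_cooling_fan"),
        ("case_17", "sameFaultAs", "overheating_fault"),
    }

    RULES = [
        # if X sameFaultAs Y and Y solvedBy Z, then X solvedBy Z
        (("?x", "sameFaultAs", "?y"), ("?y", "solvedBy", "?z"),
         ("?x", "solvedBy", "?z")),
    ]

    def forward_chain(kb):
        """Naive forward chaining: apply rules until no new facts appear."""
        changed = True
        while changed:
            changed = False
            for p1, p2, concl in RULES:
                for (x, r1, y1) in list(kb):
                    if r1 != p1[1]:
                        continue
                    for (y2, r2, z) in list(kb):
                        if r2 == p2[1] and y1 == y2:
                            fact = (x, concl[1], z)
                            if fact not in kb:
                                kb.add(fact)
                                changed = True
        return kb

    forward_chain(kb)
    print(("case_17", "solvedBy", "clean_cooling_fan") in kb)   # True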

Journal ArticleDOI
TL;DR: A fuzzy neural network is proposed to enhance the learning ability of FCMs; it combines the inference mechanism of conventional FCMs with the determination of membership functions and the quantification of causalities.
Abstract: The fuzzy cognitive map (FCM) has gradually emerged as a powerful paradigm for knowledge representation and a simulation mechanism that is applicable to numerous research and application fields. However, since efficient methods to determine the states of the investigated system and to quantify causalities that are the very foundations of FCM theory are lacking, constructing FCMs for complex causal systems greatly depends on expert knowledge. The manually developed models have a substantial shortcoming due to the model subjectivity and difficulties with assessing its reliability. In this paper, we propose a fuzzy neural network to enhance the learning ability of FCMs. Our approach combines the inference mechanism of conventional FCMs with the determination of membership functions and the quantification of causalities. In this manner, FCM models of the investigated systems can automatically be constructed from data and, therefore, operate with less human intervention. In the employed fuzzy neural network, the concept of mutual subsethood is used to describe the causalities, which provides a more transparent interpretation of causalities in FCMs. The effectiveness of the proposed approach in handling the prediction of time series is demonstrated through many numerical simulations.
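
For reference, conventional FCM inference, the mechanism whose ingredients (membership functions, causal weights) the proposed network learns, is an iterated squashed matrix product. One common update variant, with an invented weight matrix and initial state:

    import numpy as np

    # W[i, j] is the causal influence of concept i on concept j, in [-1, 1].
    W = np.array([[ 0.0,  0.6, -0.3],
                  [ 0.0,  0.0,  0.8],
                  [-0.5,  0.0,  0.0]])

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    state = np.array([0.7, 0.2, 0.1])      # initial concept activations
    for _ in range(10):                     # iterate toward a (possible) fixpoint
        state = sigmoid(state @ W)
    print(np.round(state, 3))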

Journal ArticleDOI
01 Jun 2010
TL;DR: This paper uses the Web as a massive learning corpus to retrieve data and to infer information distribution using highly contextualized queries aimed at improving the quality of the result.
Abstract: Class descriptors such as attributes, features or meronyms are rarely considered when developing ontologies. Even WordNet only includes a reduced amount of part-of relationships. However, these data are crucial for defining concepts such as those considered in classical knowledge representation models. Some attempts have been made to extract those relations from text using general meronymy detection patterns; however, there has been very little work on learning expressive class attributes (including associated domain, range or data values) at an ontological level. In this paper we take this background into consideration when proposing and implementing an automatic, non-supervised and domain-independent methodology to extend ontological classes in terms of learning concept attributes, data-types, value ranges and measurement units. In order to present a general solution and minimize the data sparseness of pattern-based approaches, we use the Web as a massive learning corpus to retrieve data and to infer information distribution using highly contextualized queries aimed at improving the quality of the result. This corpus is also automatically updated in an adaptive manner according to the knowledge already acquired and the learning throughput. Results have been manually checked by means of an expert-based concept-per-concept evaluation for several well-distinguished domains, showing reliable results and a reasonable learning performance.
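
A minimal sketch of the pattern-based core of such an approach: match lexical patterns against retrieved snippets and aggregate a distribution over candidate (attribute, unit) pairs. The snippets and the pattern are invented placeholders; the paper's contextualized web queries and statistical filtering go well beyond this.

    import re
    from collections import Counter

    # Stand-ins for search-engine snippets retrieved for one concept.
    snippets = [
        "the height of the tower is 320 m",
        "the height of the tower is 81 m",
        "the weight of the tower is unknown",
    ]

    PATTERN = re.compile(r"the (\w+) of the \w+ is ([\w.]+)\s*(m|kg|ft)?")

    attributes = Counter()
    for s in snippets:
        match = PATTERN.search(s)
        if match:
            attr, value, unit = match.groups()
            attributes[(attr, unit)] += 1

    # Distribution over candidate (attribute, unit) pairs for the concept.
    print(attributes.most_common())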