
Showing papers on "Knowledge representation and reasoning" published in 2004


Book
01 Jan 2004
TL;DR: This landmark text takes the central concepts of knowledge representation developed over the last 50 years and illustrates them in a lucid and compelling way, and offers the first true synthesis of the field in over a decade.
Abstract: Knowledge representation is at the very core of a radical idea for understanding intelligence. Instead of trying to understand or build brains from the bottom up, its goal is to understand and build intelligent behavior from the top down, putting the focus on what an agent needs to know in order to behave intelligently, how this knowledge can be represented symbolically, and how automated reasoning procedures can make this knowledge available as needed. This landmark text takes the central concepts of knowledge representation developed over the last 50 years and illustrates them in a lucid and compelling way. Each of the various styles of representation is presented in a simple and intuitive form, and the basics of reasoning with that representation are explained in detail. This approach gives readers a solid foundation for understanding the more advanced work found in the research literature. The presentation is clear enough to be accessible to a broad audience, including researchers and practitioners in database management, information retrieval, and object-oriented systems as well as artificial intelligence. This book provides the foundation in knowledge representation and reasoning that every AI practitioner needs. * Authors are well-recognized experts in the field who have applied the techniques to real-world problems * Presents the core ideas of KR&R in a simple, straightforward approach, independent of the quirks of research systems * Offers the first true synthesis of the field in over a decade. Table of Contents: 1 Introduction * 2 The Language of First-Order Logic * 3 Expressing Knowledge * 4 Resolution * 5 Horn Logic * 6 Procedural Control of Reasoning * 7 Rules in Production Systems * 8 Object-Oriented Representation * 9 Structured Descriptions * 10 Inheritance * 11 Numerical Uncertainty * 12 Defaults * 13 Abductive Reasoning * 14 Actions * 15 Planning * 16 A Knowledge Representation Tradeoff * Bibliography * Index

938 citations


Journal ArticleDOI
TL;DR: The authors present a parallel distributed processing implementation of this theory, in which semantic representations emerge from mechanisms that acquire the mappings between visual representations of objects and their verbal descriptions, and use it to understand the structure of impaired performance in patients with selective and progressive impairments of conceptual knowledge.
Abstract: Wernicke (1900, as cited in G. H. Eggert, 1977) suggested that semantic knowledge arises from the interaction of perceptual representations of objects and words. The authors present a parallel distributed processing implementation of this theory, in which semantic representations emerge from mechanisms that acquire the mappings between visual representations of objects and their verbal descriptions. To test the theory, they trained the model to associate names, verbal descriptions, and visual representations of objects. When its inputs and outputs are constructed to capture aspects of structure apparent in attribute-norming experiments, the model provides an intuitive account of semantic task performance. The authors then used the model to understand the structure of impaired performance in patients with selective and progressive impairments of conceptual knowledge. Data from 4 well-known semantic tasks revealed consistent patterns that find a ready explanation in the model. The relationship between the model and related theories of semantic representation is discussed.

847 citations


Journal ArticleDOI
TL;DR: The nonmonotonic causal logic defined in this paper can be used to represent properties of actions, including actions with conditional and indirect effects, nondeterministic actions, and concurrently executed actions.

507 citations


Book ChapterDOI
01 Jan 2004
TL;DR: Graphviz is a collection of software for viewing and manipulating abstract graphs that provides graph visualization for tools and web sites in domains such as software engineering, networking, databases, knowledge representation, and bioinformatics.
Abstract: Graphviz is a collection of software for viewing and manipulating abstract graphs. It provides graph visualization for tools and web sites in domains such as software engineering, networking, databases, knowledge representation, and bioinformatics. Hundreds of thousands of copies have been distributed under an open source license.

469 citations



Proceedings ArticleDOI
28 Mar 2004
TL;DR: This work proposes Appleseed, a novel approach to local group trust computation that borrows many ideas from spreading activation models in psychology and relates their concepts to trust evaluation in an intuitive fashion.
Abstract: Semantic Web endeavors have mainly focused on issues pertaining to knowledge representation and ontology design. However, besides understanding information metadata stated by subjects, knowing about their credibility becomes equally crucial. Hence, trust and trust metrics, conceived as computational means to evaluate trust relationships between individuals, come into play. Our major contributions to semantic Web trust management are twofold. First, we introduce our classification scheme for trust metrics along various axes and discuss advantages and drawbacks of existing approaches for semantic Web scenarios. Hereby, we devise our advocacy for local group trust metrics, guiding us to the second part which presents Appleseed, our novel proposal for local group trust computation. Compelling in its simplicity, Appleseed borrows many ideas from spreading activation models in psychology and relates their concepts to trust evaluation in an intuitive fashion.

330 citations
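The spreading-activation idea behind local group trust metrics can be conveyed with a small sketch: trust "energy" is injected at a source node and repeatedly distributed along weighted edges with a decay factor, accumulating a rank per node. This is only a toy illustration of the general idea, not the published Appleseed algorithm; the function name, decay factor, and example web of trust are invented.

```python
# A toy spreading-activation trust metric in the spirit of local group
# trust computation. This is NOT the published Appleseed algorithm;
# the function name, decay factor, and example graph are made up.

def propagate_trust(graph, source, energy=1.0, decay=0.85,
                    threshold=1e-4, max_iters=100):
    """graph: {node: {neighbor: trust_weight in (0, 1]}}.
    Returns an accumulated trust rank per node reachable from source."""
    rank = {source: 0.0}
    incoming = {source: energy}          # energy waiting to be spread
    for _ in range(max_iters):
        outgoing = {}
        for node, e in incoming.items():
            if e < threshold:
                continue
            rank[node] = rank.get(node, 0.0) + e
            edges = graph.get(node, {})
            total = sum(edges.values())
            if total == 0:
                continue
            for neigh, w in edges.items():
                share = decay * e * (w / total)
                outgoing[neigh] = outgoing.get(neigh, 0.0) + share
        if not outgoing or all(e < threshold for e in outgoing.values()):
            break
        incoming = outgoing
    return rank


if __name__ == "__main__":
    web_of_trust = {
        "alice": {"bob": 0.9, "carol": 0.6},
        "bob": {"dave": 0.8},
        "carol": {"dave": 0.4, "eve": 0.7},
    }
    print(propagate_trust(web_of_trust, "alice"))
```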


01 Jan 2004
TL;DR: This essay defends the thesis that ontologies developed for research in the natural sciences should be understood as having as their subject matter not concepts, but rather the universals and particulars which exist in reality and are captured in scientific laws.
Abstract: There is an assumption commonly embraced by ontological engineers, an assumption which has its roots in the discipline of knowledge representation, to the effect that it is concepts which form the subject-matter of ontology. The term 'concept' is hereby rarely precisely defined, and the intended role of concepts within ontology is itself subject to a variety of conflicting (and sometimes intrinsically incoherent) interpretations. It seems, however, to be widely accepted that concepts are in some sense the products of human cognition. The present essay is devoted to the application of ontology in support of research in the natural sciences. It defends the thesis that ontologies developed for such purposes should be understood as having as their subject matter, not concepts, but rather the universals and particulars which exist in reality and are captured in scientific laws. We outline the benefits of a view along these lines by showing how it yields rigorous formal definitions of the foundational relations used in many influential ontologies, illustrating our results by reference to examples drawn from the domain of the life sciences.

297 citations


Journal ArticleDOI
TL;DR: The Guideline Interchange Format (GLIF) as mentioned in this paper is a model for representation of sharable computer-interpretable guidelines, which can be used to represent a guideline at three levels: a conceptual flowchart, a computable specification, and an implementable specification intended to be incorporated into particular institutional information systems.

288 citations


Journal ArticleDOI
TL;DR: In spite of many remaining unsolved problems and the need for further research and development, the use of knowledge and semi-automation are the only viable alternatives for developing useful object extraction systems, as some commercial systems for building extraction and 3D city modelling, as well as advanced, practically oriented research, have shown.
Abstract: The paper focuses mainly on the extraction of important topographic objects, like buildings and roads, which have received much attention over the last decade. As main input data, aerial imagery is considered, although other data, such as laser scanner, SAR and high-resolution satellite imagery, can also be used. After a short review of recent image analysis trends, and of strategy and overall system aspects of knowledge-based image analysis, the paper focuses on aspects of knowledge that can be used for object extraction: types of knowledge, problems in using existing knowledge, knowledge representation and management, current and possible use of knowledge, and upgrading and augmenting of knowledge. Finally, an overview of commercial systems regarding automated object extraction and use of a priori knowledge is given. In spite of many remaining unsolved problems and the need for further research and development, the use of knowledge and semi-automation are the only viable alternatives for developing useful object extraction systems, as some commercial systems for building extraction and 3D city modelling, as well as advanced, practically oriented research, have shown.

277 citations


Proceedings ArticleDOI
05 Jan 2004
TL;DR: This work proposes to combine Bayesian networks (BN), a widely used graphical model for knowledge representation under uncertainty, with OWL, the de facto industry-standard ontology language recommended by the W3C, to support uncertain ontology representation as well as ontology reasoning and mapping.
Abstract: To support uncertain ontology representation and ontology reasoning and mapping, we propose to combine Bayesian networks (BN), a widely used graphical model for knowledge representation under uncertainty, with OWL, the de facto industry-standard ontology language recommended by the W3C. First, OWL is augmented to allow additional probabilistic markups, so probabilities can be attached to individual concepts and properties in an OWL ontology. Secondly, a set of translation rules is defined to convert this probabilistically annotated OWL ontology into the directed acyclic graph (DAG) of a BN. Finally, the BN is completed by constructing conditional probability tables (CPT) for each node in the DAG. Our probabilistic extension to OWL is consistent with OWL semantics, and the translated BN is associated with a joint probability distribution over the application domain. General Bayesian network inference procedures (e.g., belief propagation or junction tree) can be used to compute P(C|e): the degree of the overlap or inclusion between a concept C and a concept represented by a description e. We also provide a similarity measure that can be used to find the most similar concept that a given description belongs to.

262 citations
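The pipeline the abstract describes (probability annotations on concepts, a DAG whose structure follows the subclass links, a CPT per node, and computation of P(C|e)) can be illustrated with a deliberately simplified sketch. The translation rule used here (a subclass node is true with the annotated probability when its superclass is true, and false otherwise) and the toy ontology are assumptions made for illustration, not the translation rules defined in the paper.

```python
# A minimal, hypothetical illustration: probabilistically annotated concepts
# become Boolean nodes in a DAG (child = subclass, parent = superclass),
# each with a simple CPT, and P(C|e) is computed by brute-force enumeration.

from itertools import product

# concept -> (superclass or None, P(concept | superclass true))
ontology = {
    "Animal":  (None,     0.3),   # prior P(Animal)
    "Bird":    ("Animal", 0.2),   # P(Bird | Animal)
    "Penguin": ("Bird",   0.05),  # P(Penguin | Bird)
}

def prob_node(node, value, assignment):
    parent, p = ontology[node]
    if parent is None:
        p_true = p
    else:
        p_true = p if assignment[parent] else 0.0  # subclass implies superclass
    return p_true if value else 1.0 - p_true

def joint(assignment):
    prob = 1.0
    for node in ontology:
        prob *= prob_node(node, assignment[node], assignment)
    return prob

def posterior(query, evidence):
    """P(query=True | evidence), where evidence is {concept: bool}."""
    nodes = list(ontology)
    num = den = 0.0
    for values in product([False, True], repeat=len(nodes)):
        a = dict(zip(nodes, values))
        if any(a[k] != v for k, v in evidence.items()):
            continue
        p = joint(a)
        den += p
        if a[query]:
            num += p
    return num / den if den else 0.0

if __name__ == "__main__":
    print(posterior("Bird", {"Penguin": True}))    # 1.0: penguins are birds
    print(posterior("Penguin", {"Animal": True}))  # 0.2 * 0.05 = 0.01
```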


Journal ArticleDOI
TL;DR: In this paper, the authors classify the concepts used for knowledge representation into four broad ontological categories; static ontologies, for example, describe static aspects of the world, i.e., what things exist, their attributes and relationships.
Abstract: Knowledge management research focuses on concepts, methods, and tools supporting the management of human knowledge. The main objective of this paper is to survey basic concepts that have been used in computer science for the representation of knowledge and summarize some of their advantages and drawbacks. A secondary objective is to relate these techniques to information science theory and practice.The survey classifies the concepts used for knowledge representation into four broad ontological categories. Static ontologies describe static aspects of the world, i.e., what things exist, their attributes and relationships. A dynamic ontology, on the other hand, describes the changing aspects of the world in terms of states, state transitions and processes. Intentional ontologies encompass the world of things agents believe in, want, prove or disprove, and argue about. Finally, social ontologies cover social settings – agents, positions, roles, authority, permanent organizational structures or shifting networks of alliances and interdependencies.

Journal ArticleDOI
TL;DR: This paper proposes a new combination method which is computationally robust in the sense that the combination of decidable formalisms is again decidable, and which, nonetheless, allows non-trivial interactions between the combined components.

Journal ArticleDOI
01 Jul 2004
TL;DR: In this work, ontologies are proposed for modeling the high-level security requirements and capabilities of Web services and clients; this modeling helps to match a client's request with appropriate services, based on security criteria as well as functional descriptions.
Abstract: Web services will soon handle users' private information. They'll need to provide privacy guarantees to prevent this delicate information from ending up in the wrong hands. More generally, Web services will need to reason about their users' policies that specify who can access private information and under what conditions. These requirements are even more stringent for semantic Web services that exploit the semantic Web to automate their discovery and interaction because they must autonomously decide what information to exchange and how. In our previous work, we proposed ontologies for modeling the high-level security requirements and capabilities of Web services and clients.1 This modeling helps to match a client's request with appropriate services, those based on security criteria as well as functional descriptions.

Journal Article
TL;DR: The relationship between randomness and fuzziness is discussed, and a simple and effective way is proposed to simulate uncertainty by means of knowledge representation, which provides a basis for the automation of both logical thinking and thinking in images under uncertainty.
Abstract: Uncertainty exists widely in the subjective and objective world. Among all kinds of uncertainty, randomness and fuzziness are the most important and fundamental. In this paper, the relationship between randomness and fuzziness is discussed. Uncertain states and their changes can be measured by entropy and hyper-entropy respectively. Taking advantage of entropy and hyper-entropy, the uncertainty of chaos, fractals and complex networks, through their various evolution and differentiation, is further studied. A simple and effective way is proposed to simulate uncertainty by means of knowledge representation, which provides a basis for the automation of both logical thinking and thinking in images under uncertainty. AI (artificial intelligence) with uncertainty is a new cross-discipline, which covers computer science, physics, mathematics, brain science, psychology, cognitive science, biology and philosophy, and results in the automation of representation, processing and thinking for uncertain information and knowledge.

Proceedings Article
22 Aug 2004
TL;DR: It is shown that even admitting general concept inclusion (GCI) axioms and role hierarchies in ℇL terminologies preserves the polynomial time upper bound for subsumption; an implication of the first result is that reasoning over the widely used medical terminology SNOMED is possible in polynomial time.
Abstract: In the area of Description Logic (DL) based knowledge representation, research on reasoning w.r.t. general terminologies has mainly focused on very expressive DLs. Recently, though, it was shown for the DL ℇL, providing only the constructors conjunction and existential restriction, that the subsumption problem w.r.t. cyclic terminologies can be decided in polynomial time, a surprisingly low upper bound. In this paper, we show that even admitting general concept inclusion (GCI) axioms and role hierarchies in ℇL terminologies preserves the polynomial time upper bound for subsumption. We also show that subsumption becomes co-NP hard when adding one of the constructors number restriction, disjunction, and 'allsome', an operator used in the DL K-REP. An implication of the first result is that reasoning over the widely used medical terminology SNOMED is possible in polynomial time.
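The polynomial-time flavour of ℇL subsumption can be conveyed with a sketch of the standard completion (saturation) procedure over a TBox that is already in normal form. This follows the well-known completion rules rather than the paper's exact formulation; role hierarchies are omitted, and the normal-form encoding and example axioms below are assumptions for illustration.

```python
# A small saturation-style subsumption checker for EL with GCIs, assuming
# the TBox is already normalized into axioms of the forms:
#   ("sub",  A, B)        : A ⊑ B
#   ("conj", A1, A2, B)   : A1 ⊓ A2 ⊑ B
#   ("ex+",  A, r, B)     : A ⊑ ∃r.B
#   ("ex-",  r, A, B)     : ∃r.A ⊑ B
# This mirrors the well-known completion rules; it is a sketch, not the
# paper's exact algorithm, and role hierarchies are left out.

def classify(concepts, tbox):
    S = {A: {A, "TOP"} for A in concepts}   # S[A]: known subsumers of A
    R = set()                               # (A, r, B): derived role edges
    changed = True
    while changed:
        changed = False
        for ax in tbox:
            if ax[0] == "sub":
                _, A, B = ax
                for X in concepts:
                    if A in S[X] and B not in S[X]:
                        S[X].add(B); changed = True
            elif ax[0] == "conj":
                _, A1, A2, B = ax
                for X in concepts:
                    if A1 in S[X] and A2 in S[X] and B not in S[X]:
                        S[X].add(B); changed = True
            elif ax[0] == "ex+":
                _, A, r, B = ax
                for X in concepts:
                    if A in S[X] and (X, r, B) not in R:
                        R.add((X, r, B)); changed = True
            elif ax[0] == "ex-":
                _, r, A, B = ax
                for (X, r2, Y) in list(R):
                    if r2 == r and A in S[Y] and B not in S[X]:
                        S[X].add(B); changed = True
    return S

if __name__ == "__main__":
    concepts = ["Pericarditis", "Inflammation", "HeartDisease", "Heart"]
    tbox = [
        ("sub", "Pericarditis", "Inflammation"),
        ("ex+", "Pericarditis", "located_in", "Heart"),
        ("ex-", "located_in", "Heart", "HeartDisease"),
    ]
    S = classify(concepts, tbox)
    print("HeartDisease" in S["Pericarditis"])   # True
```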

Journal ArticleDOI
TL;DR: An explosion of interest has recently been seen in ontologies as artifacts to represent human knowledge and as critical components in knowledge management, the semantic Web, business-to-business applications, and several other application areas.
Abstract: Recently, we have seen an explosion of interest in ontologies as artifacts to represent human knowledge and as critical components in knowledge management, the semantic Web, business-to-business applications, and several other application areas. Various research communities commonly assume that ontologies are the appropriate modeling structure for representing knowledge. However, little discussion has occurred regarding the actual range of knowledge an ontology can successfully represent.

Book ChapterDOI
07 Nov 2004
TL;DR: The concept of a Traversal View is developed, a view where a user specifies the central concept or concepts of interest, the relationships to traverse to find other concepts to include in the view, and the depth of the traversal.
Abstract: One of the original motivations behind ontology research was the belief that ontologies can help with reuse in knowledge representation. However, many of the ontologies that are developed with reuse in mind, such as standard reference ontologies and controlled terminologies, are extremely large, while the users often need to reuse only a small part of these resources in their work. Specifying various views of an ontology enables users to limit the set of concepts that they see. In this paper, we develop the concept of a Traversal View, a view where a user specifies the central concept or concepts of interest, the relationships to traverse to find other concepts to include in the view, and the depth of the traversal. For example, given a large ontology of anatomy, a user may use a Traversal View to extract a concept of Heart and organs and organ parts that surround the heart or are contained in the heart. We define the notion of Traversal Views formally, discuss their properties, present a strategy for maintaining the view through ontology evolution and describe our tool for defining and extracting Traversal Views.
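The traversal idea described here (a start concept, a set of relationships to follow, and a depth bound) amounts to a bounded breadth-first search over the ontology graph, as in the sketch below. The data structures and example anatomy triples are illustrative assumptions, not the representation used by the authors' tool.

```python
# A minimal sketch of extracting a traversal view from an ontology
# represented as labelled edges. Names and data are illustrative only.

from collections import deque

def traversal_view(edges, start, relations, depth):
    """edges: iterable of (source, relation, target) triples.
    Returns the set of concepts reachable from `start` by following
    only `relations`, up to `depth` steps."""
    outgoing = {}
    for s, r, t in edges:
        outgoing.setdefault(s, []).append((r, t))
    view = {start}
    frontier = deque([(start, 0)])
    while frontier:
        concept, d = frontier.popleft()
        if d == depth:
            continue
        for r, t in outgoing.get(concept, []):
            if r in relations and t not in view:
                view.add(t)
                frontier.append((t, d + 1))
    return view

if __name__ == "__main__":
    anatomy = [
        ("Heart", "has_part", "LeftVentricle"),
        ("Heart", "has_part", "RightVentricle"),
        ("LeftVentricle", "has_part", "MitralValve"),
        ("Heart", "contained_in", "Thorax"),
    ]
    print(traversal_view(anatomy, "Heart", {"has_part"}, depth=1))
    # {'Heart', 'LeftVentricle', 'RightVentricle'}
```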

Journal ArticleDOI
TL;DR: This work proposes a common ontology called semantic conflict resolution ontology (SCROL) that addresses the inherent difficulties in the conventional approaches to semantic interoperability of heterogeneous databases, i.e., federated schema and domain ontology approaches.
Abstract: Establishing semantic interoperability among heterogeneous information sources has been a critical issue in the database community for the past two decades. Despite the critical importance, current approaches to semantic interoperability of heterogeneous databases have not been sufficiently effective. We propose a common ontology called semantic conflict resolution ontology (SCROL) that addresses the inherent difficulties in the conventional approaches, i.e., federated schema and domain ontology approaches. SCROL provides a systematic method for automatically detecting and resolving various semantic conflicts in heterogeneous databases. SCROL provides a dynamic mechanism of comparing and manipulating contextual knowledge of each information source, which is useful in achieving semantic interoperability among heterogeneous databases. We show how SCROL is used for detecting and resolving semantic conflicts between semantically equivalent schema and data elements. In addition, we present evaluation results to show that SCROL can be successfully used to automate the process of identifying and resolving semantic conflicts.

Proceedings ArticleDOI
27 Jun 2004
TL;DR: The results of a "Challenge Project on Video Event Taxonomy" sponsored by the Advanced Research and Development Activity (ARDA) of the U.S. Government in the summer and fall of 2003 are described, which resulted in the development of a formal language for describing an ontology of events, which is called VERL (Video Event Representation Language).
Abstract: Representation and recognition of events in a video is important for a number of tasks such as video surveillance, video browsing and content based video indexing. This paper describes the results of a "Challenge Project on Video Event Taxonomy" sponsored by the Advanced Research and Development Activity (ARDA) of the U.S. Government in the summer and fall of 2003. The project brought together more than 30 researchers in computer vision and knowledge representation and representatives of the user community. It resulted in the development of a formal language for describing an ontology of events, which we call VERL (Video Event Representation Language) and a companion language called VEML (Video Event Markup Language) to annotate instances of the events described in VERL. This paper provides a summary of VERL and VEML as well as the considerations associated with the specific design choices.

Journal ArticleDOI
01 Sep 2004
TL;DR: An overview of the CAMPaM field is presented and it is shown how transformations assume a central place and are explicitly modeled themselves by graph grammars.
Abstract: Modeling and simulation are quickly becoming the primary enablers for complex system design. They allow the representation of intricate knowledge at various levels of abstraction and allow automated analysis as well as synthesis. The heterogeneity of the design process, as much as of the system itself, however, requires a manifold of formalisms tailored to the specific task at hand. Efficient design approaches aim to combine different models of a system under study and maximally use the knowledge captured in them. Computer Automated Multi-Paradigm Modeling (CAMPaM) is the emerging field that addresses the issues involved and formulates a domain-independent framework along three dimensions: (1) multiple levels of abstraction, (2) multiformalism modeling, and (3) meta-modeling. This article presents an overview of the CAMPaM field and shows how transformations assume a central place. These transformations are, in turn, themselves explicitly modeled by graph grammars.

Journal ArticleDOI
01 Jan 2004
TL;DR: An automatic mechanism for selecting appropriate concepts that both describe and identify documents as well as language employed in user requests is described, and a scalable disambiguation algorithm that prunes irrelevant concepts and allows relevant ones to associate with documents and participate in query generation is proposed.
Abstract: Technology in the field of digital media generates huge amounts of nontextual information, audio, video, and images, along with more familiar textual information. The potential for exchange and retrieval of information is vast and daunting. The key problem in achieving efficient and user-friendly retrieval is the development of a search mechanism to guarantee delivery of minimal irrelevant information (high precision) while ensuring relevant information is not overlooked (high recall). The traditional solution employs keyword-based search. The only documents retrieved are those containing user-specified keywords. But many documents convey desired semantic information without containing these keywords. This limitation is frequently addressed through query expansion mechanisms based on the statistical co-occurrence of terms. Recall is increased, but at the expense of deteriorating precision. One can overcome this problem by indexing documents according to context and meaning rather than keywords, although this requires a method of converting words to meanings and the creation of a meaning-based index structure. We have solved the problem of an index structure through the design and implementation of a concept-based model using domain-dependent ontologies. An ontology is a collection of concepts and their interrelationships that provide an abstract view of an application domain. With regard to converting words to meaning, the key issue is to identify appropriate concepts that both describe and identify documents as well as language employed in user requests. This paper describes an automatic mechanism for selecting these concepts. An important novelty is a scalable disambiguation algorithm that prunes irrelevant concepts and allows relevant ones to associate with documents and participate in query generation. We also propose an automatic query expansion mechanism that deals with user requests expressed in natural language. This mechanism generates database queries with appropriate and relevant expansion through knowledge encoded in ontology form. Focusing on audio data, we have constructed a demonstration prototype. We have experimentally and analytically shown that our model, compared to keyword search, achieves a significantly higher degree of precision and recall. The techniques employed can be applied to the problem of information selection in all media types.
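The core idea (index documents by ontology concepts rather than keywords, and expand a query through concept relations before retrieval) can be sketched as follows. The word-to-concept map, the related-concept table, and the documents are invented for illustration; the paper's disambiguation algorithm is considerably more involved.

```python
# A toy illustration of concept-based indexing and ontology-driven query
# expansion, in the spirit of the abstract. All data here is invented.

word_to_concept = {"guitar": "StringInstrument", "violin": "StringInstrument",
                   "drums": "Percussion"}
ontology_related = {"StringInstrument": {"Instrument"},
                    "Percussion": {"Instrument"}}
# documents indexed by the concepts they are about, not by keywords
concept_index = {"StringInstrument": {"doc1", "doc3"},
                 "Percussion": {"doc2"},
                 "Instrument": {"doc4"}}

def expand_query(words):
    concepts = {word_to_concept[w] for w in words if w in word_to_concept}
    for c in list(concepts):
        concepts |= ontology_related.get(c, set())   # pull in related concepts
    return concepts

def retrieve(words):
    hits = set()
    for c in expand_query(words):
        hits |= concept_index.get(c, set())
    return hits

if __name__ == "__main__":
    # "violin" never appears in doc1/doc3/doc4, yet they are all retrieved
    print(retrieve(["violin"]))   # {'doc1', 'doc3', 'doc4'}
```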

Book ChapterDOI
20 Sep 2004
TL;DR: It is concluded that the flexibility of natural language makes it a highly suitable representation for achieving practical inferences over text, such as context finding, inference chaining, and conceptual analogy.
Abstract: ConceptNet is a very large semantic network of commonsense knowledge suitable for making various kinds of practical inferences over text. ConceptNet captures a wide range of commonsense concepts and relations like those in Cyc, while its simple semantic network structure lends it an ease-of-use comparable to WordNet. To meet the dual challenge of having to encode complex higher-order concepts, and maintaining ease-of-use, we introduce a novel use of semi-structured natural language fragments as the knowledge representation of commonsense concepts. In this paper, we present a methodology for reasoning flexibly about these semi-structured natural language fragments. We also examine the tradeoffs associated with representing commonsense knowledge in formal logic versus in natural language. We conclude that the flexibility of natural language makes it a highly suitable representation for achieving practical inferences over text, such as context finding, inference chaining, and conceptual analogy.
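Inference chaining over a network of natural-language fragments is, at its simplest, path search over labelled assertions, as in the sketch below. The assertions and relation names are invented examples, not actual ConceptNet data, and this is not the ConceptNet toolkit's API.

```python
# A tiny sketch of "inference chaining" over a semantic network whose nodes
# are semi-structured natural-language fragments. Data is invented.

assertions = [
    ("buy food", "UsedFor", "eat food"),
    ("eat food", "Causes", "feel full"),
    ("feel full", "Causes", "fall asleep"),
]

def chains(start, goal, max_hops=4):
    """Depth-first search for relation chains linking two fragments."""
    graph = {}
    for a, rel, b in assertions:
        graph.setdefault(a, []).append((rel, b))
    results, stack = [], [(start, [start], 0)]
    while stack:
        node, path, hops = stack.pop()
        if node == goal:
            results.append(path)
            continue
        if hops >= max_hops:
            continue
        for rel, nxt in graph.get(node, []):
            if nxt not in path:                      # avoid cycles
                stack.append((nxt, path + [f"--{rel}-->", nxt], hops + 1))
    return results

if __name__ == "__main__":
    for chain in chains("buy food", "fall asleep"):
        print(" ".join(chain))
```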

Journal ArticleDOI
TL;DR: It will be outlined how conceptual spaces can represent various kind of information and how they can be used to describe concept learning.
Abstract: I focus on the distinction between sensation and perception. Perceptions contain additional information that is useful for interpreting sensations. Following Grush, I propose that emulators can be seen as containing (or creating) hidden variables that generate perceptions from sensations. Such hidden variables could be used to explain further cognitive phenomena, for example, causal reasoning.

Proceedings ArticleDOI
19 Jul 2004
TL;DR: An approach to extending the BDI framework to create an enhanced framework for human modelling is described, drawing upon the folk psychological roots of the framework to creating the extension, maintaining the mapping between the knowledge representation in the framework and the natural means of expressing expert knowledge.
Abstract: BDI agents have been used with considerable success to model humans and create human-like characters in simulated environments. A key reason for this success is that the BDI paradigm is based in folk psychology, which means that the core concepts of the agent framework map easily to the language people use to describe their reasoning and actions in everyday conversation. However there are many generic aspects of human behaviour and reasoning that are not captured in the framework. While it is possible for the builder of a specific model or character to add these things to their model on a case by case basis, if many models are to be built it is highly desirable to integrate such generic aspects into the framework. This paper describes an approach to extending the BDI framework to create an enhanced framework for human modelling. It draws upon the folk psychological roots of the framework to create the extension, maintaining the mapping between the knowledge representation in the framework and the natural means of expressing expert knowledge. The application of this approach is illustrated with an extension to support human decision making.
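The generic BDI cycle that such an extension builds on (perceive, update beliefs, deliberate over desires, commit to an intention, act on a plan) can be sketched minimally, with a hook where a richer human decision-making model could be substituted. All class, method, and plan names are illustrative; this is not a sketch of any specific BDI platform.

```python
# A minimal sketch of a generic BDI-style deliberation loop with a hook
# where an extended decision-making model could be plugged in.

class Agent:
    def __init__(self, plan_library, decide=None):
        self.beliefs = set()
        self.intentions = []
        self.plan_library = plan_library          # goal -> list of plan steps
        # `decide` is the extension point: a richer human decision model
        # could replace this naive "take the first option" default
        self.decide = decide or (lambda options, beliefs: options[0])

    def perceive(self, percepts):
        self.beliefs |= set(percepts)

    def deliberate(self, desires):
        options = [d for d in desires if d in self.plan_library]
        if options:
            goal = self.decide(options, self.beliefs)
            self.intentions.append(list(self.plan_library[goal]))

    def act(self):
        if self.intentions and self.intentions[0]:
            step = self.intentions[0].pop(0)
            print("executing:", step)
        if self.intentions and not self.intentions[0]:
            self.intentions.pop(0)                 # plan finished

if __name__ == "__main__":
    agent = Agent({"get coffee": ["walk to kitchen", "brew coffee"]})
    agent.perceive({"tired"})
    agent.deliberate(["get coffee"])
    for _ in range(2):
        agent.act()
```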

Book ChapterDOI
23 Feb 2004
TL;DR: The interplay of FCA and ontologies is studied along the life cycle of an ontology: FCA can support the building of the ontology as a learning technique, and the ontologies may be used to improve an FCA application.
Abstract: Among many other knowledge representation formalisms, Ontologies and Formal Concept Analysis (FCA) aim at modeling ‘concepts’. We discuss how these two formalisms may complement one another from an application point of view. In particular, we will see how FCA can be used to support Ontology Engineering, and how ontologies can be exploited in FCA applications. The interplay of FCA and ontologies is studied along the life cycle of an ontology: (i) FCA can support the building of the ontology as a learning technique. (ii) The established ontology can be analyzed and navigated by using techniques of FCA. (iii) Last but not least, the ontology may be used to improve an FCA application.
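The FCA side of this interplay can be made concrete with a naive computation of the formal concepts of a tiny context: enumerate attribute subsets, close them via the extent/intent derivation operators, and keep the distinct (extent, intent) pairs. Real FCA tools use far more efficient algorithms (e.g., NextClosure); the context below is invented for illustration.

```python
# A naive sketch of Formal Concept Analysis on a tiny context.

from itertools import combinations

context = {                       # object -> set of attributes
    "duck":   {"flies", "swims"},
    "eagle":  {"flies", "hunts"},
    "salmon": {"swims"},
}
attributes = set().union(*context.values())

def extent(intent):
    """All objects that have every attribute in `intent`."""
    return {o for o, attrs in context.items() if intent <= attrs}

def intent(ext):
    """All attributes shared by every object in `ext`."""
    if not ext:
        return set(attributes)
    return set.intersection(*(context[o] for o in ext))

def formal_concepts():
    seen = set()
    for k in range(len(attributes) + 1):
        for combo in combinations(sorted(attributes), k):
            ext = extent(set(combo))
            itt = intent(ext)                 # closure of the chosen intent
            seen.add((frozenset(ext), frozenset(itt)))
    return seen

if __name__ == "__main__":
    for ext, itt in sorted(formal_concepts(), key=lambda c: -len(c[0])):
        print(set(ext) or "{}", "|", set(itt) or "{}")
```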

Book ChapterDOI
08 Nov 2004
TL;DR: The well-founded semantics for dl-programs is presented, and it is shown that it generalizes the well- founded semantics for ordinary normal programs.
Abstract: In previous work, towards the integration of rules and ontologies in the Semantic Web, we have proposed a combination of logic programming under the answer set semantics with the description logics \({\cal SHIF}({\mathbf{D}})\) and \({\cal SHOIN}({\mathbf{D}})\), which underlie the Web ontology languages OWL Lite and OWL DL, respectively. More precisely, we have introduced description logic programs (or dl-programs), which consist of a description logic knowledge base L and a finite set of description logic rules P, and we have defined their answer set semantics. In this paper, we continue this line of research. Here, as a central contribution, we present the well-founded semantics for dl-programs, and we analyze its semantic properties. In particular, we show that it generalizes the well-founded semantics for ordinary normal programs. Furthermore, we show that in the general case, the well-founded semantics of dl-programs is a partial model that approximates the answer set semantics, whereas in the positive and the stratified case, it is a total model that coincides with the answer set semantics. Finally, we also provide complexity results for dl-programs under the well-founded semantics.

Book ChapterDOI
01 Oct 2004
TL;DR: Key areas of research and development include current methods, architecture requirements, and the history of question answering on the Web; the development of systems to address new types of questions; interactivity; reuse of answers; advanced methods; and knowledge representation and reasoning used to support question answering.
Abstract: Question answering systems, which provide natural language responses to natural language queries, are the subject of rapidly advancing research encompassing both academic study and commercial applications, the most well-known of which is the search engine Ask Jeeves. Question answering draws on different fields and technologies, including natural language processing, information retrieval, explanation generation, and human computer interaction. Question answering creates an important new method of information access and can be seen as the natural step beyond such standard Web search methods as keyword query and document retrieval. This collection charts significant new directions in the field, including temporal, spatial, definitional, biographical, multimedia, and multilingual question answering. After an introduction that defines essential terminology and provides a roadmap to future trends, the book covers key areas of research and development. These include current methods, architecture requirements, and the history of question answering on the Web; the development of systems to address new types of questions; interactivity, which is often required for clarification of questions or answers; reuse of answers; advanced methods; and knowledge representation and reasoning used to support question answering. Each section contains an introduction that summarizes the chapters included and places them in context, relating them to the other chapters in the book as well as to the existing literature in the field and assessing the problems and challenges that remain.

Journal ArticleDOI
01 Nov 2004
TL;DR: A fuzzy user model is proposed to deal with vagueness in the user's knowledge description, which is used for user knowledge modeling in an adaptive educational system and provides a valuable, easy-to-use tool.
Abstract: Education is a dominating application area for adaptive hypermedia. Web-based adaptive educational systems incorporate complex intelligent tutoring techniques, which enable the system to recognize an individual user and their needs, and consequently adapt the instructional sequence. The personalization is done through the user model, which collects information about the user. Since the description of user knowledge and features also involves imprecision and vagueness, a user model has to be designed that is able to deal with this uncertainty. This paper presents a way of describing the uncertainty of user knowledge, which is used for user knowledge modeling in an adaptive educational system. The system builds on the concept domain model. A fuzzy user model is proposed to deal with vagueness in the user's knowledge description. The model uses fuzzy sets for knowledge representation and linguistic rules for model updating. The data from the fuzzy user model form the basis for the system adaptation, which implements various navigation support techniques. The evaluation of the presented educational system has shown that the system and its adaptation techniques provide a valuable, easy-to-use tool, which positively affects user knowledge acquisition and, therefore, leads to better learning results.
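The general mechanism described here (graded membership of a learner's knowledge in linguistic levels, updated by simple linguistic rules) can be sketched as below. The triangular membership functions and the update rule are invented stand-ins, not the paper's model.

```python
# A small sketch of fuzzy knowledge modelling: a student's knowledge of a
# concept has graded membership in linguistic levels, and a simple
# linguistic rule nudges the underlying score after each test answer.

def memberships(score):
    """Triangular fuzzy memberships of a knowledge score in [0, 1]."""
    low  = max(0.0, 1.0 - 2.0 * score)
    med  = max(0.0, 1.0 - abs(score - 0.5) * 2.0)
    high = max(0.0, 2.0 * score - 1.0)
    return {"low": low, "medium": med, "high": high}

def update_score(score, answered_correctly, step=0.15):
    """Linguistic rule: 'if the answer is correct, knowledge increases a
    little; otherwise it decreases a little', clipped to [0, 1]."""
    score += step if answered_correctly else -step
    return min(1.0, max(0.0, score))

if __name__ == "__main__":
    knowledge = 0.4
    for correct in [True, True, False, True]:
        knowledge = update_score(knowledge, correct)
        print(round(knowledge, 2), memberships(knowledge))
```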

Journal ArticleDOI
TL;DR: It is shown that POLE is a general model of function learning that accommodates both benchmark results and recent data on knowledge partitioning and makes the counterintuitive prediction that a person's distribution of responses to repeated test stimuli should be multimodal.
Abstract: Knowledge partitioning is a theoretical construct holding that knowledge is not always integrated and homogeneous but may be separated into independent parcels containing mutually contradictory information. Knowledge partitioning has been observed in research on expertise, categorization, and function learning. This article presents a theory of function learning (the population of linear experts model--POLE) that assumes people partition their knowledge whenever they are presented with a complex task. The authors show that POLE is a general model of function learning that accommodates both benchmark results and recent data on knowledge partitioning. POLE also makes the counterintuitive prediction that a person's distribution of responses to repeated test stimuli should be multimodal. The authors report 3 experiments that support this prediction.
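The flavour of a "population of linear experts" can be conveyed with a toy sketch: a few linear experts cover the stimulus range, a soft gate weights them by stimulus value, and a response is generated by sampling one expert in proportion to its gate weight, so repeated responses near a partition boundary fall into two modes. All parameters and the gating scheme below are invented, not the published model.

```python
# A toy sketch inspired by the "population of linear experts" idea; not the
# published POLE model. Near a partition boundary, repeated responses to the
# same stimulus cluster around two modes, one per expert.

import math
import random

experts = [
    {"slope": 2.0,  "intercept": 0.0,  "center": 2.0},  # expert for small x
    {"slope": -1.0, "intercept": 10.0, "center": 8.0},  # expert for large x
]

def gate_weights(x, width=2.0):
    """Soft gate: each expert's weight falls off with distance from its center."""
    raw = [math.exp(-((x - e["center"]) / width) ** 2) for e in experts]
    total = sum(raw)
    return [r / total for r in raw]

def respond(x, noise=0.2):
    """Sample one expert according to the gate, then respond linearly."""
    w = gate_weights(x)
    e = random.choices(experts, weights=w, k=1)[0]
    return e["slope"] * x + e["intercept"] + random.gauss(0.0, noise)

if __name__ == "__main__":
    random.seed(0)
    # responses to a boundary stimulus are bimodal (around 10 and around 5)
    print(sorted(round(respond(5.0), 1) for _ in range(10)))
```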

Journal ArticleDOI
01 May 2004
TL;DR: An overview and classification for approaches to dealing with preference is presented, followed by a set of desiderata that an approach might be expected to satisfy.
Abstract: In recent years, there has been a large amount of disparate work concerning the representation and reasoning with qualitative preferential information by means of approaches to nonmonotonic reasoning. Given the variety of underlying systems, assumptions, motivations, and intuitions, it is difficult to compare or relate one approach with another. Here, we present an overview and classification for approaches to dealing with preference. A set of criteria for classifying approaches is given, followed by a set of desiderata that an approach might be expected to satisfy. A comprehensive set of approaches is subsequently given and classified with respect to these sets of underlying principles.