
Showing papers on "Knowledge acquisition published in 1996"


Journal ArticleDOI
TL;DR: In this article, the authors examined organizational characteristics, structural mechanisms and contextual factors that influence knowledge acquisition from the foreign parent in international joint ventures (IJVs), and in turn related assessments of knowledge acquisition to IJV performance.
Abstract: In this paper, we examine organizational characteristics, structural mechanisms and contextual factors that influence knowledge acquisition from the foreign parent in international joint ventures (IJVs). We in turn relate assessments of knowledge acquisition to IJV performance. The data come from a survey of IJVs in the Hungarian context, where learning and knowledge acquisition from the foreign parent are thought to be particularly critical. Adaptation mechanisms, such as capacity to learn and articulated goals, and structural mechanisms, such as the provision of training, technology and managerial assistance by foreign parents, were all positively associated with the degree to which IJVs reported acquiring knowledge from their foreign parents. We also found limited support for the belief that cultural conflicts can impede knowledge acquisition, but only for two-party joint ventures with 50/50 equity arrangements. We also looked at the relationship between knowledge acquisition and different dimensions for evaluating IJV performance. The relationship between knowledge acquisition and performance was significant for all indicators of performance, though knowledge acquisition from the foreign parent and the organizational characteristics hypothesized to enhance IJV knowledge acquisition affected assessments of some dimensions of performance more than others. Our findings contribute to advancing knowledge about the relationship between organizational characteristics and organizational knowledge acquisition in IJVs, as well as the relationships between knowledge acquisition and different dimensions of IJV performance.

1,229 citations


Journal ArticleDOI
TL;DR: Evidence shows that scientific adaptive management relies excessively on the use of linear systems models, discounts nonscientific forms of knowledge, and pays inadequate attention to policy processes that promote the development of shared understandings among diverse stakeholders.
Abstract: Proponents of the scientific adaptive management approach argue that it increases knowledge acquisition rates, enhances information flow among policy actors, and provides opportunities for creating shared understandings. However, evidence from efforts to implement the approach in New Brunswick and British Columbia, Canada, and in the Columbia River Basin indicates that these promises have not been met. The data show that scientific adaptive management relies excessively on the use of linear systems models, discounts nonscientific forms of knowledge, and pays inadequate attention to policy processes that promote the development of shared understandings among diverse stakeholders. To be effective, new adaptive management efforts will need to incorporate knowledge from multiple sources, make use of multiple systems models, and support new forms of cooperation among stakeholders.

546 citations


Journal ArticleDOI
TL;DR: A knowledge-based framework for the creation of abstract, interval-based concepts from time-stamped clinical data, the knowledge-based temporal-abstraction (KBTA) method, is defined; the RESUME system implements the KBTA method.

291 citations


Journal ArticleDOI
TL;DR: The amount of transfer across shifts at a manufacturing facility is analyzed and whether knowledge acquired through learning by doing is cumulative and persists through time or whether it depreciates is studied.
Abstract: Does knowledge acquired through learning by doing on one shift transfer to a second shift when it is introduced at a manufacturing plant? The answer to this question has important theoretical implications about where knowledge is embedded in organizations and about sources of productivity growth. The answer also has important practical implications for managers planning to introduce additional facilities. This paper analyzes the amount of transfer across shifts at a manufacturing facility. Specifically, we analyze the amount of knowledge that is carried forward when the plant makes the transition from one to two shifts. We also investigate whether the rate of knowledge acquisition differs by shift, and we estimate the amount of transfer that occurs across shifts once both are in operation. In addition, we study transfer over time by analyzing whether knowledge acquired through learning by doing is cumulative and persists through time or whether it depreciates.
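The questions the abstract raises can be sketched with the conventional log-linear learning curve over a knowledge stock that may depreciate. This is a minimal illustration of that modeling idea, not the paper's estimated specification; the parameter names and the depreciation scheme are assumptions.

```python
def knowledge_stock(output, decay):
    """Knowledge stock where past experience depreciates by `decay` per period.

    decay = 1.0 reproduces the classic cumulative-output learning curve;
    decay < 1.0 lets knowledge acquired through learning by doing depreciate.
    `output` is the sequence of per-period production volumes.
    """
    k, stocks = 0.0, []
    for q in output:
        k = decay * k + q   # carry forward the depreciated stock, add new output
        stocks.append(k)
    return stocks

def unit_cost(stock, a, b):
    """Log-linear learning curve: unit cost falls as a power of the stock."""
    return a * stock ** b
```

With decay = 1.0 knowledge is cumulative and persists; estimating decay < 1.0 from plant data would indicate depreciation, which is the distinction the paper investigates empirically.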

288 citations


Journal ArticleDOI
TL;DR: In this article, the authors reported data on the acquisition of knowledge about astronomy in children from India and found that the cosmological models that children construct are influenced by both first-order and second-order constraints on knowledge acquisition.

162 citations


Proceedings ArticleDOI
26 Feb 1996
TL;DR: The TASA (Telecommunication Network Alarm Sequence Analyzer) system for discovering and browsing knowledge from large alarm databases is described, built on the basis of viewing knowledge discovery as an interactive and iterative process, containing data collection, pattern discovery, rule postprocessing, etc.
Abstract: A telecommunication network produces daily large amounts of alarm data. The data contains hidden valuable knowledge about the behavior of the network. This knowledge can be used in filtering redundant alarms, locating problems in the network, and possibly in predicting severe faults. We describe the TASA (Telecommunication Network Alarm Sequence Analyzer) system for discovering and browsing knowledge from large alarm databases. The system is built on the basis of viewing knowledge discovery as an interactive and iterative process, containing data collection, pattern discovery, rule postprocessing, etc. The system uses a novel framework for locating frequently occurring episodes from sequential data. The TASA system offers a variety of selection and ordering criteria for episodes, and supports iterative retrieval from the discovered knowledge. This means that a large part of the iterative nature of the KDD process can be replaced by iteration in the rule postprocessing stage. The user interface is based on dynamically generated HTML. The system is in experimental use, and the results are encouraging: some of the discovered knowledge is being integrated into the alarm handling software of telecommunication operators.
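The "frequently occurring episodes" idea can be sketched as follows: count the fraction of fixed-width time windows in which one alarm type is followed by another. This is a deliberately simplified illustration; TASA's actual episode-discovery algorithms are more elaborate, and the windowing scheme here is an assumption.

```python
def serial_episode_frequency(events, a, b, window):
    """Fraction of sliding windows in which alarm `a` occurs before alarm `b`.

    `events` is a list of (timestamp, alarm_type) pairs sorted by timestamp,
    with integer timestamps; `window` is the window width in time units.
    """
    if not events:
        return 0.0
    t_min, t_max = events[0][0], events[-1][0]
    total = hits = 0
    # Slide a window of fixed width across the sequence, one time unit at a time.
    t = t_min - window + 1
    while t <= t_max:
        in_window = [(ts, al) for ts, al in events if t <= ts < t + window]
        total += 1
        # The serial episode occurs if some `a` precedes some `b` in the window.
        for i, (_, al1) in enumerate(in_window):
            if al1 == a and any(al2 == b for _, al2 in in_window[i + 1:]):
                hits += 1
                break
        t += 1
    return hits / total
```

Episodes whose frequency exceeds a user-chosen threshold would then be candidates for the rule-postprocessing stage described above.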

157 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed an automated site layout system for construction temporary facilities (TFs) including a geographic information system (GIS) integrated with database management systems (DBMSs) for identifying suitable areas to locate TFs.
Abstract: This study develops an automated site layout system for construction temporary facilities (TFs). The system, ArcSite, which integrates a geographic information system (GIS) with database management systems (DBMSs), is a new tool to assist designers in identifying suitable areas in which to locate TFs. ArcSite consists of knowledge specific to construction site layout, TF databases, Arc/Info databases, and algorithms for integrating and automating TF layout design. The system proposes a method, the knowledge acquisition form, to systematically acquire and interpret experts' knowledge and experience in site planning. Using the concept of searching by elimination, the system develops a heuristic approach to model the process of human decision-making and generate the potential sites for each TF. Through both qualitative and quantitative modeling of facility relationships, an objective function called the proximity index is developed to determine the optimal site of each TF. ArcSite demonstrates that GIS is a promising ...

133 citations


Journal ArticleDOI
TL;DR: The authors provide a framework for discussing five approaches to understanding knowledge acquisition and representation, including behaviorism, schema theory, social perspective theories, connectionism, and situated cognition, and suggest that although each of these theories has merit in explaining certain aspects of knowledge acquisition, no approach adequately addresses the issues of consciousness, self-awareness, and self-reflection.
Abstract: The purpose of this paper is to provide a framework for discussing five approaches to understanding knowledge acquisition and representation. These approaches are behaviorism, schema theory, social perspective theories, connectionism, and situated cognition. We describe these approaches as lying on a continuum running from an experience-centered view of knowledge acquisition to a mind-centered view, with a more interactive view at the center. All five approaches are explicated in light of this continuum. Specifically, assumptions about knowledge acquisition and representation, the strengths and weaknesses of the approach, and the potential or actual impact on schooling are highlighted for each theory. We suggest that although each of these theories has merit in explaining certain aspects of knowledge acquisition, no approach adequately addresses the issues of consciousness, self-awareness, and self-reflection. Also, we argue that viewing cognitive functioning through the lenses of machine metaphors is nev...

129 citations



Journal ArticleDOI
TL;DR: This study shows that knowledge discovery substantially broadens the spectrum of intelligent query answering and may have deep implications on query answering in data- and knowledge-base systems.
Abstract: Knowledge discovery facilitates querying database knowledge and intelligent query answering in database systems. We investigate the application of discovered knowledge, concept hierarchies, and knowledge discovery tools for intelligent query answering in database systems. A knowledge-rich data model is constructed to incorporate discovered knowledge and knowledge discovery tools. Queries are classified into data queries and knowledge queries. Both types of queries can be answered directly by simple retrieval or intelligently by analyzing the intent of the query and providing generalized, neighborhood or associated information using stored or discovered knowledge. Techniques have been developed for intelligent query answering using discovered knowledge and/or knowledge discovery tools, which include generalization, data summarization, concept clustering, rule discovery, query rewriting, deduction, lazy evaluation, the application of multiple-layered databases, etc. Our study shows that knowledge discovery substantially broadens the spectrum of intelligent query answering and may have deep implications for query answering in data- and knowledge-base systems.
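The generalization idea can be illustrated with a toy concept hierarchy: when a data query has no exact answer, climb the hierarchy and return neighborhood information instead. The hierarchy, concept names, and fallback strategy below are assumptions for illustration, not the paper's data model.

```python
# Toy concept hierarchy mapping each concept to its parent.
HIERARCHY = {"espresso": "coffee", "latte": "coffee",
             "coffee": "beverage", "tea": "beverage"}

def ancestors(concept):
    """All more-general concepts above `concept` in the hierarchy."""
    out = []
    while concept in HIERARCHY:
        concept = HIERARCHY[concept]
        out.append(concept)
    return out

def intelligent_answer(query_value, records):
    """Answer directly when possible; otherwise generalize the query concept
    and return neighborhood information (records under the same parent)."""
    exact = [r for r in records if r == query_value]
    if exact:
        return ("direct", exact)
    for parent in ancestors(query_value):
        siblings = [r for r in records if parent in ancestors(r)]
        if siblings:
            return ("generalized to " + parent, siblings)
    return ("no answer", [])
```

A query for "espresso" against records containing only "latte" would thus be answered at the more general "coffee" level rather than returning an empty result.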

96 citations


Journal ArticleDOI
TL;DR: This paper discusses the advantages and drawbacks of new tutoring strategies, presents a new learning strategy which improves performance for good or intermediate learners, and describes an experiment with this strategy.
Abstract: Intelligent tutoring systems have recently evolved towards a co-operative approach between the learner and the system. Knowledge acquisition is facilitated by interaction with the system under the control of the learner. New tutoring strategies have been introduced to enhance the motivation of the learner by involving a second learner, or a companion who simulates the behaviour of a second learner, in the learning process. An inverted model called “learning by teaching”, in which the learner teaches the learning companion by giving explanations, has also been presented. In this paper we discuss the advantages and drawbacks of these strategies and present a new learning strategy which improves performance for good or intermediate learners. We describe an experiment with this strategy and compare its results with those obtained with the companion. We analyze and discuss the results obtained.

Proceedings Article
02 Aug 1996
TL;DR: A rule induction method is introduced, which extracts not only classification rules but also other medical knowledge needed for diagnosis from clinical cases, and is evaluated on a clinical database of headache.
Abstract: Automated knowledge acquisition is an important research issue in solving the bottleneck problem in developing expert systems. Although many inductive learning methods have been proposed for this purpose, most of the approaches focus only on inducing classification rules. However, medical experts also learn other information important for diagnosis from clinical cases. In this paper, a rule induction method is introduced which extracts not only classification rules but also other medical knowledge needed for diagnosis. The system is evaluated on a clinical database of headache cases; the experimental results show that the proposed method correctly induces diagnostic rules and estimates their statistical measures.
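The "statistical measures of rules" can be illustrated with two standard quantities from the rule-induction literature, accuracy and coverage. The case data and predicates below are hypothetical, and the paper's exact definitions may differ.

```python
def rule_measures(cases, condition, conclusion):
    """Accuracy: P(conclusion | condition); coverage: P(condition | conclusion)."""
    matched = [c for c in cases if condition(c)]
    concluded = [c for c in cases if conclusion(c)]
    both = [c for c in matched if conclusion(c)]
    accuracy = len(both) / len(matched) if matched else 0.0
    coverage = len(both) / len(concluded) if concluded else 0.0
    return accuracy, coverage

# Hypothetical headache cases with one symptom and a diagnosis.
cases = [
    {"throbbing": True,  "diagnosis": "migraine"},
    {"throbbing": True,  "diagnosis": "migraine"},
    {"throbbing": True,  "diagnosis": "tension"},
    {"throbbing": False, "diagnosis": "migraine"},
]
acc, cov = rule_measures(cases,
                         lambda c: c["throbbing"],
                         lambda c: c["diagnosis"] == "migraine")
```

A rule "if throbbing then migraine" is thus characterized not just by whether it classifies correctly but by how reliable (accuracy) and how comprehensive (coverage) it is over the case base.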

Journal ArticleDOI
TL;DR: This article examined the relationship among knowledge, affect, and environmental education that has emerged in the last 15 years from research on classroom-type settings and applications and found that an association between knowledge and affect has surfaced, along with prominent sex differences and a suggestion of ethnic variation.
Abstract: The current review examined the relationship among knowledge, affect, and environmental education that has emerged in the last 15 years from research on classroom-type settings and applications. Despite methodological and statistical problems, an association between knowledge and affect has surfaced, along with prominent sex differences and a suggestion of ethnic variation. However, the nature of this relationship is still unclear. Given that both knowledge and affect are necessary for active participation in environmental concerns, more research is needed to determine how existing attitudes influence knowledge acquisition and how knowledge influences attitudes. The potential of television, with its unique attitudinal properties, was also examined in relation to environmental education.

Journal ArticleDOI
TL;DR: The main goal of this paper is to describe in detail how PROTEGE-II was used to model the elevator-configuration task, and provide a starting point for comparison with other frameworks that use abstract problem-solving methods.
Abstract: This paper describes how we applied the PROTEGE-II architecture to build a knowledge-based system that configures elevators. The elevator-configuration task was solved originally with a system that employed the propose-and-revise problem-solving method (VT). A variant of this task, here named the Sisyphus-2 problem, is used by the knowledge-acquisition community for comparative studies. PROTEGE-II is a knowledge-engineering environment that focuses on the use of reusable ontologies and problem-solving methods to generate task-specific knowledge-acquisition tools and executable problem solvers. The main goal of this paper is to describe in detail how we used PROTEGE-II to model the elevator-configuration task. This description provides a starting point for comparison with other frameworks that use abstract problem-solving methods. Beginning with the textual description of the elevator-configuration task, we analysed the domain knowledge with respect to PROTEGE-II’s main goal: to build domain-specific knowledge-acquisition tools. We used PROTEGE-II’s suite of tools to construct a knowledge-based system, called ELVIS, that includes a reusable domain ontology, a knowledge-acquisition tool, and a propose-and-revise problem-solving method that is optimized to solve the elevator-configuration task. We entered domain-specific knowledge about elevator configuration into the knowledge base with the help of a task-specific knowledge-acquisition tool that PROTEGE-II generated from the ontologies. After we constructed mapping relations to connect the knowledge base with the method’s code, the final executable problem solver solved the test case provided with the Sisyphus-2 material. We have found that the development of ELVIS has afforded a valuable test case for evaluating PROTEGE-II’s suite of system-building tools. 
Only projects based on reasonably large problems, such as the Sisyphus-2 task, will allow us to improve the design of PROTEGE-II and its ability to produce reusable components.

Journal ArticleDOI
TL;DR: It is preliminarily concluded that the Maastricht Progress Test may not be suitable to solve the problem of assessment of individual international exchange students, but it may be helpful in identifying corresponding cognitive levels on, for example, basic sciences for students in different curricula.
Abstract: The increasing international mobility of medical students has inspired the search for an international assessment format. As one step along this line, kinetics of knowledge acquisition and final cognitive levels of students were compared among one Dutch, one German and four Italian medical faculties. For this comparison, the Maastricht Progress Test (MPT) was used. For four out of the six participating faculties, it was possible to compare the level of knowledge of sixth-year students. These data showed no significant differences on the test as a whole. On the other hand, as judged from cross-sectional data on students from all study years, the kinetics of knowledge acquisition showed different trends. In one school applying problem-based learning, acquisition of knowledge by students occurred almost linearly. In another school, over the first 2 years, acquisition of knowledge occurred only in the basic sciences but not in clinical or public health/behavioural sciences. In two other schools over that same period, students seemed to gain no knowledge at all. In some faculties, a marked boost in knowledge was noted with third- or fourth-year students. These findings may be explained by peculiarities of the respective curricula, selection of students during their studies, and national or local assessment procedures. It is preliminarily concluded that the different educational approaches and assessment systems in medical education in Europe seem to have only limited influence on the final level of knowledge of the graduates. On the other hand, these differences may influence the kinetics of knowledge acquisition, especially in distinct domains like basic or clinical sciences. Therefore, the MPT may not be suitable to solve the problem of assessment of individual international exchange students, but it may be helpful in identifying corresponding cognitive levels on, for example, basic sciences for students in different curricula.

Proceedings ArticleDOI
29 Mar 1996
TL;DR: A method for identifying and extracting business rules by means of data output identification and program stripping has been implemented in a reverse engineering tool SOFT-REDOC for COBOL programs.
Abstract: The paper reviews the state of the art on application knowledge acquisition from existing software systems and defines the role of business rules. It then goes on to present a method for identifying and extracting business rules by means of data output identification and program stripping. This method has been implemented in a reverse engineering tool, SOFT-REDOC, for COBOL programs. The results are intended to aid the business analyst in comprehending legacy programs.

Proceedings Article
04 Aug 1996
TL;DR: Evidence is provided supporting the need for explicit representations in building knowledge-based systems by representing problem-solving knowledge explicitly and deriving from the current knowledge base the knowledge gaps that must be resolved by the user during KA.
Abstract: Role-limiting approaches support knowledge acquisition (KA) by centering knowledge base construction on common types of tasks or domain-independent problem-solving strategies. Within a particular problem-solving strategy, domain-dependent knowledge plays specific roles. A KA tool then helps a user to fill these roles. Although role-limiting approaches are useful for guiding KA, they are limited because they only support users in filling knowledge roles that have been built in by the designers of the KA system. EXPECT takes a different approach to KA by representing problem-solving knowledge explicitly, and deriving from the current knowledge base the knowledge gaps that must be resolved by the user during KA. This paper contrasts role-limiting approaches and EXPECT's approach, using the propose-and-revise strategy as an example. EXPECT not only supports users in filling knowledge roles, but also provides support in making other modifications to the knowledge base, including adapting the problem-solving strategy. EXPECT's guidance changes as the knowledge base changes, providing a more flexible approach to knowledge acquisition. This work provides evidence supporting the need for explicit representations in building knowledge-based systems.


Proceedings Article
01 Aug 1996
TL;DR: SPIRIT is an expert system shell for probabilistic knowledge bases: knowledge acquisition is performed by processing facts and rules on discrete variables in a rich syntax, and the shell generates a probability distribution which respects all acquired facts and rules and which maximizes entropy.
Abstract: SPIRIT is an expert system shell for probabilistic knowledge bases. Knowledge acquisition is performed by processing facts and rules on discrete variables in a rich syntax. The shell generates a probability distribution which respects all acquired facts and rules and which maximizes entropy. The user-friendly devices of SPIRIT to define variables, formulate rules and create the knowledge base are described in detail. Inductive learning is possible. Medium-sized applications show the power of the system.

Journal ArticleDOI
TL;DR: A system called Wizard is developed and constructed to analyze databases for their inference problems, which can determine inference problems within single facets as well as some inference problems between two or more facets.
Abstract: The database inference problem is a well-known problem in database security and information system security in general. In order to prevent an adversary from inferring classified information from combinations of unclassified information, a database inference analyst must be able to detect and prevent possible inferences. Detecting database inference problems at database design time provides great power in reducing problems over the lifetime of a database. We have developed and constructed a system called Wizard to analyze databases for their inference problems. The system takes as input a database schema, its constituent instances (if available) and additional human-supplied domain information, and provides a set of associations between entities and/or activities that can be grouped by their potential severity of inference vulnerability. A knowledge acquisition process called microanalysis permits semantic knowledge of a database to be incorporated into the analysis using conceptual graphs. These graphs are then analyzed with respect to inference-relevant domains we call facets using tools we have developed. We can determine inference problems within single facets as well as some inference problems between two or more facets. The architecture of the system is meant to be general so that further refinements of inference information subdomains can be easily incorporated into the system.

Proceedings ArticleDOI
26 Feb 1996
TL;DR: The principle and experimental results of an attribute-oriented rough set approach for knowledge discovery in databases are described, and a prototype knowledge discovery system, DBROUGH, has been constructed.
Abstract: The principle and experimental results of an attribute-oriented rough set approach for knowledge discovery in databases are described. Our method integrates database operations, rough set theory and machine learning techniques. In this method the learning procedure consists of two phases: data generalization and data reduction. In the data generalization phase, attribute-oriented induction is performed attribute by attribute using attribute removal and concept ascension; attributes undesirable for the discovery task are removed and the primitive data is generalized to the desirable level, so that a set of tuples may be generalized to the same generalized tuple. This procedure substantially reduces the computational complexity of the database learning process. Subsequently, in the data reduction phase, the rough set method is applied to the generalized relation to find a minimal attribute set relevant to the learning task. The generalized relation is reduced further by removing those attributes which are irrelevant and/or unimportant to the learning task. Finally, the tuples in the reduced relation are transformed into different knowledge rules based on different knowledge discovery algorithms. Based upon these principles, a prototype knowledge discovery system, DBROUGH, has been constructed. In DBROUGH, a variety of knowledge discovery algorithms are incorporated, and different kinds of knowledge rules, such as characteristic rules, classification rules, decision rules and maximal generalized rules, can be discovered efficiently and effectively from large databases.
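A minimal sketch of the data-reduction phase on a toy decision table follows. The attribute names and the greedy elimination strategy are illustrative assumptions; full reduct computation in rough set theory is more involved than this.

```python
def partitions(rows, attrs):
    """Group row indices into indiscernibility classes by values on `attrs`."""
    groups = {}
    for i, row in enumerate(rows):
        groups.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return list(groups.values())

def consistent(rows, attrs, decision):
    """True if rows indiscernible on `attrs` always agree on the decision."""
    return all(
        len({rows[i][decision] for i in block}) == 1
        for block in partitions(rows, attrs)
    )

def reduce_attributes(rows, attrs, decision):
    """Greedily drop attributes whose removal keeps the table consistent."""
    kept = list(attrs)
    for a in attrs:
        trial = [x for x in kept if x != a]
        if trial and consistent(rows, trial, decision):
            kept = trial
    return kept
```

Attributes that survive the elimination are the ones actually relevant to the learning task; rules would then be induced over this reduced relation.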

Book
09 Mar 1996
TL;DR: A book on organizational learning and renewal, illustrated by the turnaround of Ace Clearwater Enterprises, covering knowledge acquisition strategies (buy, rent, or develop), learning from the best, building a knowledge network, and action-oriented teamwork.
Abstract: The Art of the Possible. From Two Strikes to a Home Run: The Renewal of Ace Clearwater Enterprises. Define, Then Align. Forming the Partnership with Top Management. Starting at the Top: Getting Everyone Moving in the Same Direction. Finding Your Starting Point. Buy, Rent, or Develop: Knowledge Acquisition Strategies. Learning from the Best. Building a Knowledge Network. Action-Oriented Teamwork. Bridging Two Worlds. Valuing Learning. Appendices. Endnotes. Index.

Book ChapterDOI
01 May 1996
TL;DR: Though the MIKE approach aims at supporting the building process of kbs, its principles and methods apply also to the development of non-knowledge-based software systems, e.g. information systems.
Abstract: The paper describes the MIKE (Model-based and Incremental Knowledge Engineering) approach for the development of knowledge-based systems (kbs). It integrates semiformal specification techniques, formal specification techniques, and prototyping into a coherent framework. This allows the domain and task model of a kbs to be described on different formalization levels. All activities in the building process are embedded in a cyclic life cycle model. For the semiformal representation we use a hypermedia-based formalism which serves as a communication basis between expert and knowledge engineer during knowledge acquisition. The semiformal knowledge representation is also the basis for formalization, resulting in a formal and executable model of expertise specified in the Knowledge Acquisition and Representation Language (KARL). Since KARL is executable the model of expertise can be developed and validated by prototyping. A smooth transition from a semiformal to a formal specification and further on to design is achieved as all the description techniques rely on the same conceptual model to describe the functional and non-functional aspects of the system. Thus, the system is thoroughly documented at different description levels, each of which focuses on a distinct aspect of the entire development effort. Traceability of requirements is supported by linking the different models to each other. Though the MIKE approach aims at supporting the building process of kbs, its principles and methods apply also to the development of non-knowledge-based software systems, e.g. information systems.

Journal ArticleDOI
TL;DR: The user interface of the expert system (ES) for tomato disease identification is enhanced with additional capabilities, based on a recently developed shell which allows the corresponding knowledge base to be manipulated as a database.

Journal ArticleDOI
TL;DR: This is the first method that offers assurance and sufficiency arguments that the mechanism is at least strong enough to protect the high data in the database from inference attacks that require low data.
Abstract: Database systems that contain information of varying degrees of sensitivity pose the threat that high data may be inferred from some of the low data. This study derives conditions sufficient to identify such inference threats. First, it is reasoned that a database can only control material implications, as specified in formal logic systems. These material implications are found using knowledge discovery techniques. Material implications allow reasoning about outside knowledge, and provide the first assurance that outside knowledge does not assist in circumventing the inference controls. Database queries specify the properties of sets of data and are compared to help determine inferences. These queries are grouped into equivalence classes based upon their inference characteristics. A unique graph-based model is developed for the equivalence classes that (1) makes such comparisons easy, and (2) allows implementation of an algorithm capable of finding those material implication rules where high data is inferred from low data. This is the first method that offers assurance and sufficiency arguments that the mechanism is at least strong enough to protect the high data in the database from inference attacks that require low data.

Journal ArticleDOI
TL;DR: Under the rubrics Learning Potential Assessment, Learning Test Concept, Testing the Limits, and Interactive or Dynamic Assessment, alternatives to the conventional static intelligence test are reviewed; in validation studies, the learning tests turn out to be superior to the static versions.
Abstract: Under the rubrics Learning Potential Assessment, Learning Test Concept, Testing the Limits, and Interactive or Dynamic Assessment, alternatives or supplements to the conventional static intelligence test are discussed and subjected to empirical scrutiny. Researchers hope that prompts built into the test will make it possible to determine the “zone of proximal development” (Vygotsky) and lead to a fairer and more valid assessment. The authors begin by reviewing conventional attempts at validation using external criteria such as school grades and teacher ratings. This is followed by a review of more recent research done in the construct validation tradition. In the latter, learning tests vs. conventional intelligence tests are compared with basic components of intelligence (measured by using elementary cognitive tasks) and knowledge acquisition processes (in complex problem solving scenarios). Although in these attempts at validation the learning tests turn out to be superior to the static versions, much mo...

Book
31 Mar 1996
TL;DR: This is the first book to provide a detailed process for planning, designing, implementing, and testing knowledge-based systems for natural resource management and demonstrates how knowledge can be effectively organized and administered, enabling natural resource professionals to respond intelligently to natural resource problems.
Abstract: This is the first book to provide a detailed process for planning, designing, implementing, and testing knowledge-based systems for natural resource management. It presents material on all these major aspects of building a deliverable system. Equipped with these techniques, managers and scientists will improve their ability to solve complex resource problems that are multidisciplinary in scope and for which mathematical approaches prove insufficient. Fully describing the various components of these systems, this important work includes discussions on system design, knowledge acquisition, prototyping, knowledge verification and validation, implementation, and system delivery. To further illuminate the material presented, it contains a tutorial on the knowledge-based programming environment PROLOG as well as many examples of expert system development, including one for forest management. Building Knowledge-Based Systems for Natural Resource Management demonstrates how knowledge can be effectively organized and administered, enabling natural resource professionals to respond intelligently to natural resource problems. This book also provides researchers and students with an essential resource for understanding this useful technology.

Proceedings ArticleDOI
08 Sep 1996
TL;DR: Fuzzy representation bridges the gap between symbolic and non-symbolic data by linking qualitative linguistic terms with quantitative data in decision trees and a few new inferences based on exemplar learning are proposed.
Abstract: Decision-tree algorithms provide one of the most popular methodologies for symbolic knowledge acquisition. The resulting knowledge, a symbolic decision tree along with a simple inference mechanism, has been praised for comprehensibility. The most comprehensible decision trees have been designed for perfect symbolic data. Over the years, additional methodologies have been investigated and proposed to deal with continuous or multi-valued data, and with missing or noisy features. Recently, with the growing popularity of fuzzy representation, a few researchers independently have proposed to utilize fuzzy representation in decision trees to deal with similar situations. Fuzzy representation bridges the gap between symbolic and non-symbolic data by linking qualitative linguistic terms with quantitative data. In this paper, we overview our fuzzy decision tree and propose a few new inferences based on exemplar learning.
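The core idea above, linking linguistic terms to quantitative data, can be sketched briefly. This is not the authors' algorithm; it only shows how a numeric attribute gets graded membership in overlapping linguistic terms, so a fuzzy tree node passes an example down several branches with degrees instead of making one hard split. The attribute name and term boundaries are invented.

```python
# Minimal sketch of fuzzy membership for a decision-tree attribute test.
# Term boundaries below are invented for illustration.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms for a hypothetical "temperature" attribute.
terms = {
    "cool": lambda t: triangular(t, -10, 5, 20),
    "warm": lambda t: triangular(t, 10, 25, 40),
}

t = 18.0
memberships = {name: round(f(t), 2) for name, f in terms.items()}
# An example at 18 degrees belongs partly to both branches rather than to one,
# which is what lets the tree defer the crisp decision until inference time.
print(memberships)
```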

Book ChapterDOI
14 May 1996
TL;DR: This paper describes a method for extracting knowledge from large corpora using conceptual relations such as definition and exemplification by combinatorial pattern-matching, embodied in a robust program which is capable of attempting extraction even in the absence of part-of-speech tags in the input text.
Abstract: This paper describes a method for extracting knowledge from large corpora using conceptual relations such as definition and exemplification. The two major steps in this process are the identification of specific relations using positive and negative triggering, and the extraction of the conceptual information by combinatorial pattern-matching. Validation of extracted candidate text is performed by analysis of part-of-speech tag patterns. The algorithms are embodied in a robust program which is capable of attempting extraction even in the absence of part-of-speech tags in the input text. Unlike many knowledge extraction systems, the KEP program is designed to be non-domain-specific. Intended applications include knowledge acquisition for automatic examination question setting and marking, and knowledge acquisition for the creation and updating of semantic nets used in a hypermedia-based tutoring system.
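The positive/negative triggering step can be illustrated with a toy sketch. This is not the KEP program: the trigger patterns and example sentences below are invented, and no part-of-speech validation is attempted, in keeping with the paper's note that extraction can proceed without tags.

```python
import re

# Invented triggers for a "definition" relation, illustrating positive and
# negative triggering: a sentence must match the positive pattern and must
# not match the negative one.
POSITIVE = re.compile(r"\b([A-Z][\w-]*) is (?:a|an) ([\w-]+(?: [\w-]+)*)")
NEGATIVE = re.compile(r"\bis not\b|\bfor example\b")

def extract_definitions(sentences):
    """Return (term, category) pairs from sentences that fire the positive
    trigger without firing the negative one."""
    out = []
    for s in sentences:
        if NEGATIVE.search(s):
            continue
        m = POSITIVE.search(s)
        if m:
            out.append((m.group(1), m.group(2)))
    return out

sents = [
    "Prolog is a logic programming language.",
    "This is not a definition.",
]
print(extract_definitions(sents))
```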

Proceedings ArticleDOI
15 Apr 1996
TL;DR: The results of this study indicate that the repertory grid analysis method generates all of the attributes produced by the other two methods, that it is easy to apply in the field, and that it is useful without complex analysis and re-interpretation of the results.
Abstract: In this paper we describe a case study comparing the effectiveness of three indirect knowledge elicitation techniques: repertory grid analysis, multi-dimensional scaling, and hierarchical clustering. These techniques are used in situations where it is difficult for experts to articulate their knowledge in response to direct questions. The techniques were compared in terms of the number of attributes elicited, the ease with which these data were obtained, and the degree of post-analysis and interpretation required. The study was conducted in the domain of airline safety inspections and the objective was to define inspection indicators. The results of this study indicate that the repertory grid analysis method generates all of the attributes produced by the other two methods, that it is easy to apply in the field, and that it is useful without complex analysis and re-interpretation of the results.
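The shape of a repertory grid can be sketched in a few lines. The elements, constructs, and ratings below are invented (they are not the study's data): elements are rated on elicited bipolar constructs, and a simple construct-similarity measure, here city-block distance, supplies the kind of structure the grid analysis then interprets.

```python
# Toy repertory grid: hypothetical inspection areas (elements) rated 1-5 on
# invented bipolar constructs; construct similarity via city-block distance.

elements = ["landing_gear", "fuselage", "engine", "cabin"]
grid = {
    # construct: ratings per element, 1 = left pole .. 5 = right pole
    "accessible-hidden":   [2, 1, 4, 1],
    "critical-routine":    [5, 3, 5, 2],
    "corrosion_prone-not": [4, 3, 5, 1],
}

def cityblock(a, b):
    """Sum of absolute rating differences between two constructs."""
    return sum(abs(x - y) for x, y in zip(a, b))

pairs = {}
names = list(grid)
for i, c1 in enumerate(names):
    for c2 in names[i + 1:]:
        pairs[(c1, c2)] = cityblock(grid[c1], grid[c2])

# The most similar construct pair is a candidate for merging or for probing
# the expert about whether the two constructs are really distinct.
closest = min(pairs, key=pairs.get)
print(closest)
```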