
Showing papers in "International Journal of Human-Computer Studies / International Journal of Man-Machine Studies in 1995"


Journal ArticleDOI
TL;DR: The role of ontologies in supporting knowledge-sharing activities is described, a set of criteria to guide the development of ontologies for these purposes is presented, and it is shown how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data.
Abstract: Recent work in Artificial Intelligence is exploring the use of formal ontologies as a way of specifying content-specific agreements for the sharing and reuse of knowledge among software entities. We take an engineering perspective on the development of such ontologies. Formal ontologies are viewed as designed artifacts, formulated for specific purposes and evaluated against objective design criteria. We describe the role of ontologies in supporting knowledge sharing activities, and then present a set of criteria to guide the development of ontologies for these purposes. We show how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data. Selected design decisions are discussed, and alternative representation choices are evaluated against the design criteria.

6,949 citations


Journal ArticleDOI
TL;DR: The notion of the ontological level is introduced, intermediate between the epistemological and the conceptual levels discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives.
Abstract: The purpose of this paper is to defend the systematic introduction of formal ontological principles in the current practice of knowledge engineering, to explore the various relationships between ontology and knowledge representation, and to present the recent trends in this promising research area. According to the "modelling view" of knowledge acquisition proposed by Clancey, the modelling activity must establish a correspondence between a knowledge base and two separate subsystems: the agent's behaviour (i.e. the problem-solving expertise) and its own environment (the problem domain). Current knowledge modelling methodologies tend to focus on the former subsystem only, viewing domain knowledge as strongly dependent on the particular task at hand: in fact, AI researchers seem to have been much more interested in the nature of reasoning than in the nature of the real world. Recently, however, the potential value of task-independent knowledge bases (or "ontologies") suitable to large-scale integration has been underlined in many ways. In this paper, we compare the dichotomy between reasoning and representation to the philosophical distinction between epistemology and ontology. We introduce the notion of the ontological level, intermediate between the epistemological and the conceptual levels discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives. We then discuss some formal ontological distinctions which may play an important role for this purpose.

1,140 citations


Journal ArticleDOI
TL;DR: The findings demonstrate that personality does not require richly defined agents, sophisticated pictorial representations, natural language processing, or artificial intelligence; rather, even the most superficial manipulations are sufficient to exhibit personality, with powerful effects.
Abstract: The claim that computer personalities can be human personalities was tested by demonstrating that (1) computer personalities can be easily created using a minimal set of cues, and (2) people will respond to these personalities in the same way they would respond to similar human personalities. The present study focused on the "similarity-attraction hypothesis," which predicts that people will prefer to interact with others who are similar in personality. In a 2 × 2, balanced, between-subjects experiment (n = 48), dominant and submissive subjects were randomly matched with a computer that was endowed with the properties associated with dominance or submissiveness. Subjects recognized the computer's personality type, distinct from friendliness and competence. In addition, subjects not only preferred the similar computer, but they were more satisfied with the interaction. The findings demonstrate that personality does not require richly defined agents, sophisticated pictorial representations, natural language processing, or artificial intelligence. Rather, even the most superficial manipulations are sufficient to exhibit personality, with powerful effects.

538 citations


Journal ArticleDOI
Andrew Odlyzko
TL;DR: This paper surveys the pressures that are leading to the impending change from print journals to electronic ones and makes predictions about the future of journals, publishers, and libraries, concluding that the new electronic publishing methods are likely to greatly improve scholarly communication, partially through more rapid publication, but also through wider dissemination and a variety of novel features that cannot be implemented with the present print system.
Abstract: Scholarly publishing is on the verge of a drastic change from print journals to electronic ones. Although this change has been predicted for a long time, trends in technology and growth in the literature are making this transition inevitable. It is likely to occur in a few years, and is likely to be sudden. This article surveys the pressures that are leading to the impending change, and makes predictions about the future of journals, publishers, and libraries. The new electronic publishing methods are likely to greatly improve scholarly communication, partially through more rapid publication, but also through wider dissemination and a variety of novel features that cannot be implemented with the present print system, such as references in a paper to later papers that cite it.

213 citations


Journal ArticleDOI
TL;DR: It is suggested that research move away from an exclusive focus on non-verbal communication, and begin to investigate these other uses of real-time video, as well as identify design implications and outstanding research questions derived from current findings.
Abstract: This paper re-assesses the role of real-time video as a technology to support interpersonal communication at a distance. We review three distinct hypotheses about the role of video in the coordination of conversational content and process. For each hypothesis, we identify design implications and outstanding research questions derived from current findings. We first evaluate the non-verbal communication hypothesis, namely the prevailing assumption that the role of video is to supplement speech, which is embodied in applications such as videoconferencing and the videophone. We conclude that previous work has overestimated the importance of video at the expense of audio. This finding has strong implications for the implementation of such systems, and we make recommendations about both synchronization and bandwidth allocation. Furthermore, our own recent studies of workplace interactions point to other communicative functions of video. Current systems have neglected another potentially vital role of visual information: supporting the process of achieving opportunistic connection. Rather than providing a supplement to audio information, video is used to assess the communication availability of others. Visual information therefore promotes the types of remote opportunistic communications that are prevalent in face-to-face settings. We discuss early experiments with such connection applications and identify outstanding design and implementation issues. Finally, we discuss another novel application of video: "video-as-data". Here the video image is used to transmit information about the work objects themselves, rather than information about interactants, creating a dynamic shared workspace and simulating a shared physical environment. In conclusion, we suggest that research move away from an exclusive focus on non-verbal communication and begin to investigate these other uses of real-time video.

199 citations


Journal ArticleDOI
TL;DR: OLAE is described as an assessment tool that collects data from students solving problems in introductory college physics, analyses the data with probabilistic methods that determine what knowledge the student is using, and flexibly presents the results of the analysis.
Abstract: We describe OLAE, an assessment tool that collects data from students solving problems in introductory college physics, analyses the data with probabilistic methods that determine what knowledge the student is using, and flexibly presents the results of the analysis. For each problem, OLAE automatically creates a Bayesian net that relates knowledge, represented as first-order rules, to particular actions, such as written equations. Using the resulting Bayesian network, OLAE observes a student's behavior and computes the probabilities that the student knows and uses each of the rules.

164 citations
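
As a toy illustration of the kind of inference OLAE performs, the sketch below updates belief in a single "knows rule" node from one observed action (a written equation) by Bayes' rule. The probabilities and the Python helper are invented for illustration; they are not values or code from OLAE itself.

def posterior_knows_rule(prior, p_act_if_known, p_act_if_unknown, action_seen):
    """Return P(student knows the rule | observed action) by Bayes' rule."""
    if action_seen:
        like_known, like_unknown = p_act_if_known, p_act_if_unknown
    else:
        like_known, like_unknown = 1 - p_act_if_known, 1 - p_act_if_unknown
    evidence = prior * like_known + (1 - prior) * like_unknown
    return prior * like_known / evidence

# The student writes the expected equation: belief in the rule rises.
print(posterior_knows_rule(0.5, 0.9, 0.1, True))    # 0.9
# The student omits it: belief falls.
print(posterior_knows_rule(0.5, 0.9, 0.1, False))   # 0.1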


Journal ArticleDOI
TL;DR: This paper surveys some of the ontological questions that arise in artificial intelligence and some answers that have been proposed by various philosophers, and applies the philosophical analysis to the clarification of some current issues in AI.
Abstract: Philosophers have spent 25 centuries debating ontological categories. Their insights are directly applicable to the analysis, design, and specification of the ontologies used in knowledge-based systems. This paper surveys some of the ontological questions that arise in artificial intelligence, some answers that have been proposed by various philosophers, and an application of the philosophical analysis to the clarification of some current issues in AI. Two philosophers who have developed the most complete systems of categories are Charles Sanders Peirce and Alfred North Whitehead. Their analyses suggest a basic structure of categories that can provide some guidelines for the design of AI systems.

162 citations


Journal ArticleDOI
TL;DR: A general concept mapping system is described that has an open architecture for integration with other systems, is scriptable to support arbitrary interactions and computations, and is customizable to emulate many styles of map.
Abstract: Concept mapping has a history of use in many disciplines as a formal or semi-formal diagramming technique. Concept maps have an abstract structure as typed hypergraphs, and computer support for concept mapping can associate visual attributes with node types to provide an attractive and consistent appearance. Computer support can also provide interactive interfaces allowing arbitrary actions to be associated with nodes, such as hypermedia links to other maps and documents. This article describes a general concept mapping system that has an open architecture for integration with other systems, is scriptable to support arbitrary interactions and computations, and is customizable to emulate many styles of map. The system supports collaborative development of concept maps across local area and wide area networks, and integrates with the World-Wide Web in both client helper and server gateway roles. A number of applications are illustrated, ranging through education, artificial intelligence, active documents, hypermedia indexing and concurrent engineering. It is proposed that concept maps be regarded as basic components of any hypermedia system, complementing text and images with formal and semi-formal active diagrams.

159 citations
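
The abstract structure described here can be sketched in a few lines of Python: a typed hypergraph whose node types carry visual attributes and whose nodes can carry actions such as hypermedia links. Names and attributes are illustrative only, not the system's actual API.

from dataclasses import dataclass, field

@dataclass
class NodeType:
    name: str
    shape: str        # visual attributes associated with the type,
    colour: str       # giving all nodes of the type a consistent look

@dataclass
class Node:
    label: str
    ntype: NodeType
    action: str = ""  # e.g. a hypermedia link to another map or document

@dataclass
class ConceptMap:
    nodes: list = field(default_factory=list)
    links: list = field(default_factory=list)  # (link type, [nodes]): typed hyperedges

concept = NodeType("concept", "oval", "yellow")
m = ConceptMap()
a = Node("concept map", concept, action="maps/overview.html")
b = Node("typed hypergraph", concept)
m.nodes += [a, b]
m.links.append(("has-structure", [a, b]))   # a hyperedge may span many nodes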


Journal ArticleDOI
TL;DR: Results are presented showing that blocking Gibbs sampling converges much faster than plain Gibbs sampling for very complex problems.
Abstract: We introduce a methodology for performing approximate computations in very complex probabilistic systems (e.g. huge pedigrees). Our approach, called blocking Gibbs, combines exact local computations with Gibbs sampling in a way that complements the strengths of both. The methodology is illustrated on a real-world problem involving a heavily inbred pedigree containing 20 000 individuals. We present results showing that blocking Gibbs sampling converges much faster than plain Gibbs sampling for very complex problems.

151 citations
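
The gain from blocking can be seen on a toy problem. In the numpy sketch below (a hypothetical 3-D Gaussian, not the paper's pedigree model), single-site Gibbs updates crawl along the nearly redundant pair x0, x1, while a blocked sampler draws that pair jointly and exactly, so its chain decorrelates almost immediately.

import numpy as np

rng = np.random.default_rng(0)
S = np.array([[1.0, 0.95, 0.0],     # x0 and x1 are almost redundant,
              [0.95, 1.0, 0.0],     # which is what slows plain Gibbs
              [0.0, 0.0, 1.0]])

def draw_block(x, block, S, rng):
    """Exact draw of x[block] | x[rest] for a zero-mean Gaussian."""
    rest = [i for i in range(len(x)) if i not in block]
    Sab = S[np.ix_(block, rest)]
    K = Sab @ np.linalg.inv(S[np.ix_(rest, rest)])
    mu = K @ x[rest]
    cov = S[np.ix_(block, block)] - K @ Sab.T
    x[block] = rng.multivariate_normal(mu, cov)

def acf1(v):
    """Lag-1 autocorrelation, a crude mixing diagnostic."""
    v = np.asarray(v) - np.mean(v)
    return float(v[:-1] @ v[1:] / (v @ v))

x, y = np.zeros(3), np.zeros(3)
plain, blocked = [], []
for _ in range(2000):
    for i in range(3):               # plain Gibbs: singleton blocks
        draw_block(x, [i], S, rng)
    plain.append(x[0])
    draw_block(y, [0, 1], S, rng)    # blocking Gibbs: joint draw for the
    draw_block(y, [2], S, rng)       # tightly coupled pair, then the rest
    blocked.append(y[0])

print(acf1(plain), acf1(blocked))    # high vs near-zero autocorrelation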


Journal ArticleDOI
Kyung S. Park, Soung Hie Kim
TL;DR: A fuzzy time cognitive map (FTCM), an FCM with time relationships introduced on its arrows, is proposed, together with a value-preserving method of translating an FTCM whose arrows carry different time lags into one with a single unit-time lag.
Abstract: Causal knowledge is often cyclic and fuzzy, and thus hard to represent in the form of trees. A fuzzy cognitive map (FCM) can represent causal knowledge as a signed directed graph with feedback. It provides an intuitive framework in which to form decision problems as perceived by decision makers and to incorporate the knowledge of experts. This paper proposes a fuzzy time cognitive map (FTCM), which is an FCM with time relationships introduced on its arrows. We first discuss the characteristics and basic assumptions of the FCM, and present a description of causal propagation in an FCM with causalities in the negative-positive-neutral interval [-1, 1]. We develop a value-preserving method of translating an FTCM whose arrows carry different time lags into one with a single unit-time lag. With the FTCM, we illustrate analysing how causalities among factors change with the lapse of time.

143 citations
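
A minimal numpy sketch of causal propagation under these definitions: each arrow carries a causal weight in [-1, 1] and an integer time lag, and activation arriving over lagged arrows is clipped back into [-1, 1]. The map, weights, lags and update rule are invented for illustration and simplified from the paper's formulation.

import numpy as np

edges = {               # (source, target): (causal weight, time lag)
    (0, 1): (0.8, 1),
    (1, 2): (-0.6, 2),
    (2, 0): (0.5, 1),
}
n, T = 3, 8
state = np.zeros((T, n))
state[0] = [1.0, 0.0, 0.0]          # initial stimulus on concept 0

for t in range(1, T):
    total = np.zeros(n)
    for (src, dst), (w, lag) in edges.items():
        if t >= lag:
            total[dst] += w * state[t - lag, src]
    state[t] = np.clip(total, -1.0, 1.0)   # causalities stay in [-1, 1]

print(np.round(state, 2))           # effects arrive only after their lags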


Journal ArticleDOI
TL;DR: It is argued that the default mode for truly expert designers is typically a top-down and breadth-first approach, since longer-term considerations of cost-effectiveness are more important for expert designers than short-term considerations of cognitive cost.
Abstract: We present a critical discussion of research into the nature of design expertise, in particular evaluating claims that opportunism is a major influence on the behaviour of expert designers. We argue that the notion of opportunism has been under-constrained, and as a consequence the existence of opportunism in expert design has been exaggerated. Much of what has been described as opportunistic design behaviour appears to reflect a mix of breadth-first and depth-first modes of solution development. Whilst acknowledging that opportunities can arise in the design process (e.g. serendipitous solution discovery), such events might equally confirm structured behaviour as cause unstructured behaviour. We argue that the default mode for truly expert designers is typically a top-down and breadth-first approach, since longer-term considerations of cost-effectiveness are more important for expert designers than short-term considerations of cognitive cost. However, there are situations (e.g. when faced with a highly unfamiliar design task) where it is cost-effective for experts to pursue a depth-first mode of solution development. The implications of our analysis for the development of methods and tools to support the design process are also discussed.

Journal ArticleDOI
TL;DR: The present paper draws on recent work in the fields of naive and qualitative physics, in perceptual and developmental psychology, and in cognitive anthropology, in order to consider in a new light these and related questions and to draw conclusions for the methodology and philosophical foundations of the cognitive sciences.
Abstract: Common sense is on the one hand a certain set of processes of natural cognition—of speaking, reasoning, seeing, and so on. On the other hand common sense is a system of beliefs (of folk physics and folk psychology). Over against both of these is the world of common sense, the world of objects to which the processes of natural cognition and the corresponding belief-contents standardly relate. What are the structures of this world and how does its scientific treatment relate to traditional and contemporary metaphysics and formal ontology? Can we embrace a thesis of common-sense realism to the effect that the world of common sense exists uniquely? Or must we adopt instead a position of cultural relativism which would assign distinct worlds of common sense to each group and epoch? The present paper draws on recent work in the fields of naive and qualitative physics, in perceptual and developmental psychology, and in cognitive anthropology, in order to consider in a new light these and related questions and to draw conclusions for the methodology and philosophical foundations of the cognitive sciences.

Journal ArticleDOI
TL;DR: This paper presents a classification of part-whole relations that is suitable for different cognitive tasks and gives proposals for the representation and processing of these relations.
Abstract: This paper deals with the conceptual part-whole relation as it occurs in language processing, visual perception, and general problem solving. One important long-term goal is to develop a naive or common sense theory of the mereological domain, that is the domain of parts and wholes and their relations. In this paper, we work towards such a theory by presenting a classification of part-whole relations that is suitable for different cognitive tasks and give proposals for the representation and processing of these relations. In order to be independent of specific tasks like language understanding or the recognition of objects, we use structural properties to develop our classification. The paper starts with a brief overview of the mereological research in different disciplines and two examples of the role of part-whole relations in linguistics (possessive constructions) and knowledge processing (reasoning about objects). In the second section, we discuss two important approaches to mereological problems: the "Classical Extensional Mereology" as presented by Simons and the meronymic system of part-whole relations proposed by Winston, Chaffin and Herrmann. Our own work is described in the third and last section. First, we discuss different kinds of wholes according to their inherent compositional structure: complexes, collections, and masses. Then partitions induced by or independent of the compositional structure of a whole are described, accompanied by proposals for their processing.

Journal ArticleDOI
TL;DR: Thirty-eight university students were tested for field-dependence/-independence using Riding's computer-administered Cognitive Styles Analysis (CSA) and learned using computerized versions of Pask and Scott's teaching materials designed to suit holist and serialist learning strategies.
Abstract: Thirty-eight university students were tested for field-dependence/-independence using Riding's computer-administered Cognitive Styles Analysis (CSA). They also learned using computerized versions of Pask and Scott's teaching materials designed to suit holist and serialist learning strategies. It was found that (a) students' holist and serialist competence could be predicted using CSA scores, (b) learning in matched conditions (using instructional materials structured to suit their learning styles) was significantly superior, for both holists and serialists, to learning in mismatched conditions, and (c) serialist instructional materials resulted in overall better learning performance and efficiency than did holist materials. Possible reasons for the lack of positive correlations reported in previous studies, along with implications for the development of user models to support the development of adaptive instructional systems, are discussed.

Journal ArticleDOI
Yoram Reich
TL;DR: This paper articulates two definitions of knowledge and their associated value measures; it stresses the issue of constructing meaningful measures rather than discussing some of the desirable properties of measures (e.g. reliability or validity).
Abstract: The quality of knowledge that a system has substantially influences its performance. Often, the terms "knowledge", its "quality", and how it is "measured" or "valuated" are left vague enough to accommodate several ad hoc interpretations. This paper articulates two definitions of knowledge and their associated value measures. The paper focuses on the theory underlying measurements and its application to knowledge valuation; it stresses the issue of constructing meaningful measures rather than discussing some of the desirable properties of measures (e.g. reliability or validity). A detailed example of knowledge valuation using the measures is described. The example demonstrates both the importance of knowledge valuation for system understanding and the difficulty of carrying it out. It shows the importance of employing several different measures simultaneously for a single valuation. The paper concludes by discussing the scope of and relationships between the measures.

Journal ArticleDOI
TL;DR: It is demonstrated that a single common-sense ontology produces plausible interpretations at all levels from parsing through reasoning, and some of the problems and tradeoffs for a method which has just one content ontology are explored.
Abstract: This paper defends the choice of a linguistically-based content ontology for natural language processing and demonstrates that a single common-sense ontology produces plausible interpretations at all levels from parsing through reasoning. The paper explores some of the problems and tradeoffs for a method which has just one content ontology. A linguistically-based content ontology represents the "world view" encoded in natural language. The content ontology (as opposed to the formal semantic ontology, which distinguishes events from propositions, and so on) is best grounded in the culture, rather than in the world itself or in the mind. By "world view" we mean naive assumptions about "what there is" in the world, and how it should be classified. These assumptions are time-worn and reflected in language at several levels: morphology, syntax and lexical semantics. The content ontology presented in the paper is part of a Naive Semantic lexicon. Naive Semantics is a lexical theory in which associated with each word sense is a naive theory (or set of beliefs) about the objects or events of reference. While naive semantic representations are not combinations of a closed set of primitives, they are also limited by a shallowness assumption. Included is just the information required to form a semantic interpretation incrementally, not all of the information known about objects. The Naive Semantic ontology is based upon a particular language, its syntax and its word senses. To the extent that other languages codify similar world views, we predict that their ontologies are similar. Applied in a computational natural language understanding system, this linguistically-motivated ontology (along with other naive semantic information) is sufficient to disambiguate words, disambiguate syntactic structure, disambiguate formal semantic representations, resolve anaphoric expressions and perform reasoning tasks with text.

Journal ArticleDOI
TL;DR: It is claimed that the structural differences between restricted domains are not based on different mereological concepts, but on different concepts of being a whole, which sheds more light on the specific nature of these domains, their similarities and differences.
Abstract: Classical Mereology, the formal theory of the concepts of part, overlap and sum as defined by Leśniewski, does not have any notion of being a whole. Because of this neutrality, the concepts of Mereology are applicable in each and every domain. This point of view is not generally accepted. But a closer look at domain-specific approaches defining non-classical (quasi-)mereological notions reveals that the question of whether something belongs to a restricted domain (and, thus, fulfills a certain criterion of integrity) has come to be mixed up with the question of whether it exists. We claim that the structural differences between restricted domains are not based on different mereological concepts, but on different concepts of being a whole. Taking Classical Mereology for granted in looking at different domains can shed more light on the specific nature of these domains, their similarities and differences. Three examples of axiomatic accounts dealing with restricted domains (linear orders of extended entities as they can be found in discussions of the ontology of time, topological structure and set theory) are discussed. We show that Classical Mereology is applicable to these domains as soon as they are seen as being embedded in a less restricted (or even the most comprehensive) domain. Each of the accounts may be axiomatically formulated by adding one non-mereological primitive to whatever concepts are chosen to develop Classical Mereology. These primitives are strongly related to the domain-specific notions of integrity or being a whole.
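
For reference, the core notions of Classical Mereology can be stated over a single parthood primitive P; the rendering below is the standard textbook one, not a quotation from the paper. A restricted domain, on the paper's account, then adds exactly one non-mereological primitive (for instance temporal precedence) on top of these definitions.

\begin{align*}
PP(x,y) &\equiv P(x,y) \land \lnot P(y,x)  && \text{proper part}\\
O(x,y)  &\equiv \exists z\,(P(z,x) \land P(z,y))  && \text{overlap}\\
\mathit{Sum}(w,\phi) &\equiv \forall z\,\bigl(O(z,w) \leftrightarrow \exists v\,(\phi(v) \land O(z,v))\bigr)  && \text{mereological sum}
\end{align*}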

Journal ArticleDOI
TL;DR: This paper describes a computational model of skilled use of an application with a graphical user interface based on Hutchins, Holland and Norman's analysis of direct manipulation and is implemented using Kintsch and Mannes's construction-integration theory of action planning.
Abstract: This paper describes a computational model of skilled use of an application with a graphical user interface. The model provides a principled explanation of action slips, errors made by experienced users. The model is based on Hutchins, Holland and Norman's analysis of direct manipulation and is implemented using Kintsch and Mannes's construction-integration theory of action planning. The model attends to a limited number of objects on the screen and then selects an action on one of them, such as moving the mouse cursor, clicking a mouse button, or typing letters, by integrating information from various sources. These sources include the display, task goals, expected display states, and knowledge about the interface and the application domain. The model simulates a graph drawing task. In addition, we describe how the model makes errors even when it is provided with knowledge sufficient to generate correct actions.
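
The selection step can be caricatured in a few lines of Python: candidate actions accumulate support from several knowledge sources and the most activated one wins, so merely re-weighting the sources lets a familiar but wrong action through (an action slip). The actions, sources and weights below are invented, not the model's actual parameters.

sources = {   # support each candidate action receives from each source
    "click Draw tool":   {"display": 0.6, "goal": 0.8, "expectation": 0.5},
    "click Save button": {"display": 0.9, "goal": 0.1, "expectation": 0.2},
    "type label":        {"display": 0.2, "goal": 0.4, "expectation": 0.6},
}

def select(weights):
    """Pick the action with the highest weighted activation."""
    score = lambda a: sum(weights[s] * v for s, v in sources[a].items())
    return max(sources, key=score)

# Balanced integration picks the correct next step...
print(select({"display": 1.0, "goal": 1.0, "expectation": 1.0}))  # click Draw tool
# ...but if the display captures attention and the goal is under-weighted,
# a well-practised but wrong action wins: an action slip.
print(select({"display": 2.5, "goal": 0.3, "expectation": 1.0}))  # click Save button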

Journal ArticleDOI
TL;DR: Parallel earcons are shown to be an effective means of increasing the presentation rates of audio messages without compromising recognition rates.
Abstract: This paper describes a method of presenting structured audio messages, earcons, in parallel so that they take less time to play and can better keep pace with interactions in a human-computer interface. The two component parts of a compound earcon are played in parallel so that the time taken is only that of a single part. An experiment was conducted to test the recall and recognition of parallel compound earcons as compared to serial compound earcons. Results showed that there are no differences in the rates of recognition between the two groups. Non-musicians are also shown to be equal in performance to musicians. Some extensions to the earcon creation guidelines of Brewster, Wright and Edwards are put forward based upon research into auditory stream segregation. Parallel earcons are shown to be an effective means of increasing the presentation rates of audio messages without compromising recognition rates.
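
The timing claim is easy to make concrete. In the numpy sketch below (arbitrary pitches and note lengths, not the earcons used in the experiment), the serial compound of two three-note parts lasts 0.9 s while the parallel compound lasts 0.45 s.

import numpy as np

RATE = 22050                      # samples per second

def motif(freqs, note_s=0.15):
    """A simple earcon part: a sequence of pure-tone notes."""
    t = np.linspace(0, note_s, int(RATE * note_s), endpoint=False)
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

family  = motif([440, 440, 660])              # one part of the compound
element = motif([880, 660, 880])              # the other part

serial   = np.concatenate([family, element])  # traditional serial compound
parallel = 0.5 * (family + element)           # both parts played at once

print(len(serial) / RATE, len(parallel) / RATE)   # 0.9 s vs 0.45 s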

Journal ArticleDOI
TL;DR: This paper develops a taxonomy of qualitative spatial relations for pairs of regions, which are all logically defined from two primitive (but axiomatized) notions, which allows many more relations to be defined.
Abstract: This paper develops a taxonomy of qualitative spatial relations for pairs of regions, which are all logically defined from two primitive (but axiomatized) notions. The first primitive is the notion of two regions being connected, which allows eight jointly exhaustive and pairwise disjoint relations to be defined. The second primitive is the convex hull of a region, which allows many more relations to be defined. We also consider the development of the useful notions of composition tables for the defined relations and networks specifying continuous transitions between pairs of regions. We conclude by discussing what kinds of criteria to apply when deciding how fine-grained a taxonomy to create.
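
The first primitive can be made concrete in a discrete toy model: take regions to be finite sets of grid cells and let connection C(x, y) hold when the regions share a cell or have 4-adjacent cells; the eight jointly exhaustive, pairwise disjoint relations can then be decided as below. This model and its set-theoretic shortcuts are ours, not the paper's axiomatization, and the convex-hull primitive and composition tables are omitted.

def neighbors(a):
    i, j = a
    return {(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)}

def C(x, y):                        # the connection primitive
    return bool(x & y) or any(n in y for a in x for n in neighbors(a))

def tangential(x, y):               # some cell of x borders the outside of y
    return any(n not in y for a in x for n in neighbors(a))

def rcc8(x, y):
    if not C(x, y):   return "DC"   # disconnected
    if x == y:        return "EQ"   # identical
    if not (x & y):   return "EC"   # externally connected (touching)
    if x < y:         return "TPP" if tangential(x, y) else "NTPP"
    if y < x:         return "TPPi" if tangential(y, x) else "NTPPi"
    return "PO"                     # partial overlap

square = {(i, j) for i in range(4) for j in range(4)}
core   = {(1, 1), (1, 2), (2, 1), (2, 2)}
corner = {(0, 0), (0, 1)}
print(rcc8(core, square), rcc8(corner, square), rcc8(square, core))
# NTPP TPP NTPPi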

Journal ArticleDOI
TL;DR: It is shown how, on one side, fuzzy logic can be used to support the construction of schedules that are robust with respect to changes due to certain types of event, and how a reaction can be restricted to a small environment by means of fuzzy constraints and a repair-based problem-solving strategy.
Abstract: Practical scheduling usually has to react to many unpredictable events and uncertainties in the production environment. Although often possible in theory, it is undesirable to reschedule from scratch in such cases. Since the surrounding organization will be prepared for the predicted schedule, it is important to change only those features of the schedule that are necessary. We show how, on one side, fuzzy logic can be used to support the construction of schedules that are robust with respect to changes due to certain types of event. On the other side, we show how a reaction can be restricted to a small environment by means of fuzzy constraints and a repair-based problem-solving strategy. We demonstrate the proposed representation and problem-solving method by introducing a scheduling application in a steelmaking plant. We construct a preliminary schedule by taking into account only the most likely duration of operations. This schedule is iteratively "repaired" until its evaluation reaches some threshold. A repair is found with a local search procedure based on Tabu Search. Finally, we show which events can lead to reactive scheduling and how this is supported by the repair strategy.
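
A minimal repair-based sketch in this spirit, with invented jobs, most-likely durations and due dates: the schedule is patched by local swaps rather than rebuilt, a one-move tabu memory stops a repair from being immediately undone, and search ends once the evaluation reaches a threshold. Plain tardiness stands in for the paper's fuzzy evaluation.

jobs = {"A": (4, 9), "B": (2, 2), "C": (3, 14)}   # job: (likely duration, due date)

def tardiness(seq):
    t = total = 0
    for j in seq:
        dur, due = jobs[j]
        t += dur
        total += max(0, t - due)
    return total

def swap(seq, m):
    s = list(seq)
    i, j = m
    s[i], s[j] = s[j], s[i]
    return s

def repair(seq, threshold=0, iters=50):
    seq, tabu = list(seq), []
    for _ in range(iters):
        if tardiness(seq) <= threshold:
            break                       # schedule is good enough; stop repairing
        moves = [(i, i + 1) for i in range(len(seq) - 1) if (i, i + 1) not in tabu]
        best = min(moves, key=lambda m: tardiness(swap(seq, m)))
        seq = swap(seq, best)
        tabu = [best]                   # forbid immediately undoing this repair
    return seq

print(repair(["A", "B", "C"]))          # ['B', 'A', 'C']: one local repair suffices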

Journal ArticleDOI
TL;DR: Considering the kinds of information and information organizations required for adequate accounts of natural language and for sophisticated natural language capabilities in computational systems, this paper distinguishes several different classes of "ontology", each with its own characteristics and principles.
Abstract: The design and construction of "ontologies" is currently a topic of great interest for diverse groups. Less clear is the extent to which these groups are addressing a common area of concern. By considering the kinds of information and information organizations that are required for adequate accounts of natural language and for sophisticated natural language capabilities in computational systems, this paper distinguishes several different classes of "ontology", each with its own characteristics and principles. A classification for these ontological "realms" is motivated on the basis of systemic-functional semiotics. The resulting stratified "meta-ontology" offers a unifying framework for relating distinct ontological realms while maintaining their individual orientations. In this context, formal ontology can be seen to provide a rather small (although important) component of the overall organization necessary. Claims for the sufficiency of formal ontology in AI and NLP then need to be treated with caution.

Journal ArticleDOI
TL;DR: This study investigates strategies in failure diagnosis at cutting machine tools with a verbal knowledge acquisition technique and shows that typical strategies of failure diagnosis are "Historical information", "Least effort", "Reconstruction", and "Sensory check".
Abstract: This study investigates strategies in failure diagnosis at cutting machine tools using a verbal knowledge acquisition technique. Sixty-nine semi-structured interviews were performed with mechanical and electrical maintenance technicians, and a protocol analysis was conducted. Strategies were analysed in relation to the technician's job experience, familiarity with the problem, and problem complexity. The technicians were categorized into three groups, novices, advanced, and experts, based upon level of experience. Results show that typical strategies of failure diagnosis are "Historical information", "Least effort", "Reconstruction", and "Sensory check". Strategies that lead to a binary reduction of the problem space, such as "Information uncertainty" and "Split half", play only a minor role in real-life failure diagnosis. Job experience and familiarity with the problem significantly influenced the occurrence of strategies. In addition to "Symptomatic search" and "Topographic search", results show frequent use of case-based strategies, particularly for routine failures. In novel situations, technicians usually used "Topographic search". A software design method, strategy-based software design (SSD), is proposed that uses strategies to derive decision support systems adaptive to the different working styles and changing levels of experience in user groups. The methodology is briefly described and illustrated by the development of an information support system for maintenance and repair.


Journal ArticleDOI
TL;DR: The model predicted four possible causes of join clause omission, and empirical testing revealed that all four contributed to the error, which is significant for understanding user errors in general and for developing new interfaces and training schemes for the task of writing database queries.
Abstract: This research reports on the experimental test of several causes of user errors while composing database queries. The query language under consideration is Structured Query Language (SQL), the industry standard language for querying databases. Unfortunately, users commit many errors when using SQL. To understand user errors, a model of query writing was developed that integrated a GOMS-type analysis of query writing with the characteristics of human cognition. This model revealed multiple cognitive causes of a frequent and troublesome error, join clause omission. This semantic user error returns answers from the database that may be undetectably wrong, affecting users, decision makers, and programmers. The model predicted four possible causes of join clause omission, and empirical testing revealed that all four contributed to the error. Specifically, the frequency of this error increased because (1) the load on working memory caused by writing intervening clauses made the users forget to include the join clause, (2) an explicit clue to write the join clause was absent from the problem statement, (3) users inappropriately reused the procedure appropriate for a single table query, which requires no join clause, when a join clause is indeed necessary, and (4) some users never learned the correct procedure. These results are significant for understanding user errors in general and for developing new interfaces and training schemes for the task of writing database queries.
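
The error itself is easy to reproduce. In the sqlite3 sketch below (schema and data invented for illustration), omitting the join clause silently returns a cross product, an answer of exactly the "undetectably wrong" kind described above.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE emp (name TEXT, dept_id INT);
    CREATE TABLE dept (id INT, dept TEXT);
    INSERT INTO emp VALUES ('Ann', 1), ('Bob', 2);
    INSERT INTO dept VALUES (1, 'Sales'), (2, 'R&D');
""")

correct = db.execute(
    "SELECT name, dept FROM emp, dept WHERE emp.dept_id = dept.id").fetchall()
omitted = db.execute(
    "SELECT name, dept FROM emp, dept").fetchall()   # join clause forgotten

print(correct)   # [('Ann', 'Sales'), ('Bob', 'R&D')]
print(omitted)   # four rows: every employee paired with every department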

Journal ArticleDOI
TL;DR: The major contributions of this work include a direct manipulation interaction paradigm for exploring webs of documents, using maps and an integrated graphical query language, and the ability to use the maps themselves as documents that can be customized, stored in a library and shared among users.
Abstract: Interactive dynamic maps (IDMs) help users interactively explore webs of hypermedia documents. IDMs provide automatically-generated abstract graphical views at different levels of granularity. Visual cues give users a better understanding of the content of the web, which results in better navigation control and more accurate and effective expressions of queries. IDMs consist of topic maps, which provide visual abstractions of the semantic content of a web of documents, and document maps, which provide visual abstractions of subsets of documents. The major contributions of this work include (1) automatic techniques for building maps directly from a web of documents, including extraction of semantic content and use of a spatial metaphor for generating layout and filling space, (2) a direct manipulation interaction paradigm for exploring webs of documents, using maps and an integrated graphical query language, and (3) the ability to use the maps themselves as documents that can be customized, stored in a library and shared among users.

Journal ArticleDOI
TL;DR: The PROCOPE formalism as discussed by the authors treats goals and procedures to reach goals as properties of objects, just as structural properties are; i.e. they are the functional properties of objects.
Abstract: Formalisms for the description of procedural knowledge, such as action grammars and production systems, do not allow for direct handling of the semantics of objects which are involved in the actions. In models focusing on rules, objects only appear through rule-triggering conditions, and hence preclude opportunities to make the overall semantic structure of the task world explicit. After a critical review of action grammars and their semantic extensions, the PROCOPE formalism is presented as an alternative way to describe know-how focusing on objects. Goals and procedures to reach goals are treated as properties of objects, just like structural properties; i.e. they are the functional properties of objects. Handled in this way, goals and procedures, because they categorize objects, are used to generate the class-inclusion semantic network which is the core of the PROCOPE description. A major advantage of PROCOPE over rule-based systems is its ability to express the part of cognitive complexity which is due, not to the number of procedures, but to the complexity of the overall structure generated by the way the objects involved in the actions share those procedures. Finally, we present the PROCOPE software and show how it has been put to practical use.
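
The object-centred idea can be caricatured in a few lines of Python, under loose assumptions of ours rather than PROCOPE's actual representation: if procedures are recorded as (functional) properties of objects, the subset ordering of shared procedures directly induces the class-inclusion network.

objects = {                       # object: the procedures it affords
    "text file":     {"open", "edit", "print"},
    "read-only doc": {"open", "print"},
    "spool job":     {"print"},
}

def inclusions(objs):
    """Pairs (a, b) where every procedure of a is shared by b."""
    return [(a, b) for a in objs for b in objs
            if a != b and objs[a] <= objs[b]]

for a, b in inclusions(objects):
    print(f"procedures({a}) ⊆ procedures({b})")
# The structure these shared procedures generate, not the number of
# procedures, is what carries the cognitive complexity.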

Journal ArticleDOI
TL;DR: This paper proposes that institutions all over the world should take full advantage of the new technologies available, and promote and coordinate such a global service as a virtual National Library system in order to make possible a really efficient management of human knowledge on a global scale.
Abstract: The Internet is like a new country, with a growing population of millions of well-educated citizens. If it wants to keep track of its own cultural achievements in real time, it will have to provide itself with an infostructure like a virtual National Library system. This paper proposes that institutions all over the world should take full advantage of the new technologies available, and promote and coordinate such a global service. This is essential in order to make really efficient management of human knowledge on a global scale possible.

Journal ArticleDOI
TL;DR: A general structure is proposed for an underlying conceptualization of the world that is particularly well suited to language understanding and consists of a set of core theories of a very abstract character that explicate the concepts of systems and the figure-ground relation, scales, change, causality, and goal-directed behavior.
Abstract: A general structure is proposed for an underlying conceptualization of the world that is particularly well suited to language understanding. It consists of a set of core theories of a very abstract character. Some of the most important of these are discussed, in particular, core theories that explicate the concepts of systems and the figure-ground relation, scales, change, causality, and goal-directed behavior. These theories are too abstract to impose many constraints on the entities and situations they are applied to; rather their main purpose is to provide the basis for a rich vocabulary for talking about entities and situations. The fact that the core theories apply so widely means that they provide a great many domains of discourse with a rich vocabulary.

Journal ArticleDOI
TL;DR: The general philosophy and rationale of CODE4 is described and the knowledge representation, specifically designed to meet the needs of flexible, interactive knowledge management, is discussed.
Abstract: CODE4 is a general-purpose knowledge management system, intended to assist with the common knowledge processing needs of anyone who desires to analyse, store, or retrieve conceptual knowledge in applications as varied as the specification, design and user documentation of computer systems; the construction of term banks; or the development of ontologies for natural language understanding. This paper provides an overview of CODE4 as follows: we first describe the general philosophy and rationale of CODE4 and relate it to other systems. Next, we discuss the knowledge representation, specifically designed to meet the needs of flexible, interactive knowledge management. The highly-developed user interface, which we believe to be critical for this type of system, is explained in some detail. We finally describe how CODE4 is being used in a number of applications.