
Showing papers on "Conceptual schema published in 1985"


01 Jan 1985
TL;DR: Conceptual modeling languages make information systems easier to design and maintain by using vocabularies that relate naturally and directly to the 'real world' of many computer applications.
Abstract: Conceptual modeling languages make information systems easier to design and maintain by using vocabularies that relate naturally and directly to the 'real world' of many computer applications.

96 citations


Book ChapterDOI
01 Jan 1985
TL;DR: This paper surveys the two basic approaches proposed to solve the view update problem; the first treats views as abstract data types so that the definition of a view includes all permissible view updates, together with their translations.
Abstract: A view is a logical subset of the data base conceptual schema. Providing a facility for supporting views simplifies the user interface, but creates the problem of translating updates on views into equivalent updates on the data base. The translation of a view update may not be unique, it may not even exist or be ill-defined, and it may create inconsistencies in the data base or have side effects on the view. This paper surveys the two basic approaches proposed to solve the view update problem. The first approach suggests treating views as abstract data types so that the definition of a view includes all permissible view updates, together with their translations. The second approach leads to general view update translators and is based either on an analysis of the conceptual schema dependencies or on the concept of view complement to disambiguate view update translations.

66 citations
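
To make the first of the two surveyed approaches concrete (a view as an abstract data type whose definition lists the permissible updates together with their translations), here is a minimal Python sketch; the base relation, the view, and the two update operations are invented examples, not taken from the paper.

```python
# Minimal sketch of a view defined as an abstract data type: the view lists
# its permissible updates and supplies their translation onto base relations.
# The base relation 'employees' and the view 'EngineersView' are invented.

class EngineersView:
    """View: employees restricted to dept == 'ENG', exposing (name, salary)."""

    def __init__(self, employees):
        self.employees = employees          # base relation: list of dicts

    def rows(self):
        return [{"name": e["name"], "salary": e["salary"]}
                for e in self.employees if e["dept"] == "ENG"]

    # Permissible view update #1: insert, translated to a base insert with
    # the selection predicate re-established (dept := 'ENG').
    def insert(self, name, salary):
        self.employees.append({"name": name, "dept": "ENG", "salary": salary})

    # Permissible view update #2: change a salary, translated one-to-one.
    def set_salary(self, name, salary):
        for e in self.employees:
            if e["dept"] == "ENG" and e["name"] == name:
                e["salary"] = salary

    # Any other update (e.g. changing 'dept' through this view) is simply not
    # part of the ADT interface, so no ambiguous translation can arise.

base = [{"name": "Ada", "dept": "ENG", "salary": 50_000},
        {"name": "Bob", "dept": "HR",  "salary": 40_000}]
v = EngineersView(base)
v.insert("Cleo", 55_000)
v.set_salary("Ada", 52_000)
print(v.rows())
```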


Proceedings Article
21 Aug 1985
TL;DR: Part of a "database administrator's assistant" is described - a computer system which can suggest modifications and additions to the current definitions and integrity constraints in the schema, based on techniques used in Machine Learning.
Abstract: To utilize DBMSs, a database designer must usually construct a schema, which is used to validate the data stored and help set up efficient access structures. Because database design is an art, and because the real world is irregular, unpredictable, and evolves, truly useful database systems must be tolerant of occasional deviations from the constraints imposed by the schema, including the semantic integrity constraints. We therefore examine the problems involved in accommodating exceptional information in a database, and outline techniques for resolving them. Furthermore, we consider ways in which the schema can be refined to better characterize reality as it is reflected in the data encountered, including the exceptions. For this purpose, we describe part of a "database administrator's assistant" - a computer system which can suggest modifications and additions to the current definitions and integrity constraints in the schema. This system makes generalizations from the currently encountered exceptions, and is based on techniques used in Machine Learning.

37 citations
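
As a rough illustration of the idea of tolerating exceptions and then generalizing from them, the following Python sketch records tuples that violate a constraint and proposes a relaxed constraint once several exceptions have accumulated; the constraint, threshold, and data are invented for the example and do not reflect the paper's learning techniques.

```python
# Toy sketch: a constraint that tolerates recorded exceptions, plus a naive
# "assistant" that proposes a generalized constraint once enough exceptions
# accumulate. The age limit and threshold are invented for illustration.

constraint = {"attribute": "age", "max": 65}
exceptions = []                      # tuples admitted despite violating the schema

def admit(record):
    if record["age"] <= constraint["max"]:
        return "stored"
    exceptions.append(record)        # tolerate the deviation, but remember it
    return "stored as exception"

def suggest_refinement():
    """Suggest relaxing the bound to cover the exceptions seen so far."""
    if len(exceptions) >= 3:         # arbitrary threshold for the sketch
        new_max = max(r["age"] for r in exceptions)
        return {"attribute": "age", "max": new_max}
    return None

for age in (70, 72, 68, 40):
    admit({"name": "emp", "age": age})
print(suggest_refinement())          # -> {'attribute': 'age', 'max': 72}
```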


Journal ArticleDOI
TL;DR: The interactive database design tool Gambit supports the whole database design process; it is based on an extended relational entity-relationship model, and constraints are formulated using the database programming language Modula/R.
Abstract: The design of a database is a rather complex and dynamic process that requires comprehensive knowledge and experience. There exist many manual design tools and techniques, but the step from a schema to an implementation is still a delicate subject. The interactive database design tool Gambit supports the whole process in an optimal way. It is based on an extended relational-entity relationship model. The designer is assisted in outlining and describing data structures and consistency preserving update transactions. The constraints are formulated using the database programming language Modula/R which is based upon first-order predicate calculus. The update transactions are generated automatically as Modula/R programs and include all defined integrity constraints. They are collected in so-called data modules that represent the only interface to the database apart from read operations. The prototype facility of Gambit allows the designer to test the design of the database. The results can be used as feedback leading to an improvement of the conceptual schema and the transactions.

32 citations
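
The following Python sketch imitates, under stated assumptions, the idea of generated update transactions that embed the declared integrity constraints and undo their work on violation; the table, the constraint, and the transaction are invented examples, not generated Modula/R code.

```python
# Sketch of the idea behind generated update transactions: every update is
# wrapped in a transaction that checks a declared integrity constraint (here a
# first-order-style predicate over the whole relation) and rolls back on
# violation. Table and constraint are invented examples, not Gambit output.

import copy

accounts = [{"id": 1, "balance": 100}]

def constraint_holds(rel):
    # "for all a in accounts: a.balance >= 0"
    return all(a["balance"] >= 0 for a in rel)

def generated_withdraw(rel, acc_id, amount):
    snapshot = copy.deepcopy(rel)             # begin transaction
    for a in rel:
        if a["id"] == acc_id:
            a["balance"] -= amount
    if not constraint_holds(rel):             # constraint compiled into the code
        rel[:] = snapshot                     # abort: restore previous state
        return False
    return True                               # commit

print(generated_withdraw(accounts, 1, 30))    # True, balance becomes 70
print(generated_withdraw(accounts, 1, 500))   # False, update rolled back
print(accounts)
```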



Journal ArticleDOI
TL;DR: This work introduces the concept of “essential-ISD” which is a concise, abstract and easily “read” ISD, containing the most important elements of the conceptual schema, and is used to produce a “skeleton” of the database schema, which outlines the normalized record types and their associations.

23 citations



Journal ArticleDOI
TL;DR: This paper integrates data, schema and meta-schema into a uniform model and provides one data language to manipulate and modify both data and schema.
Abstract: This paper integrates data, schema and meta-schema into a uniform model and provides one data language to manipulate and modify both data and schema. The modifications on the schema are then propagated to the schema's extension via propagation rules. The integration and the schema modification are all developed within the framework of self-describing and self-documenting models of data and they are fundamental in capturing the evolution of the database, which includes not only data changes, but schema changes as well.

12 citations
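
A minimal sketch of the idea, assuming an invented relation and propagation rule: the schema and its extension live in one structure, and a schema modification is propagated to the existing tuples.

```python
# Sketch of one structure holding both data and schema, with propagation
# rules that push a schema change down to the schema's extension.
# The relation, attributes and default value are invented for the example.

db = {
    "schema": {"person": ["name", "age"]},
    "data":   {"person": [{"name": "Ada", "age": 36}]},
}

def add_attribute(rel, attr, default=None):
    """Schema modification + propagation rule: every existing tuple
    of `rel` receives the new attribute with a default value."""
    db["schema"][rel].append(attr)
    for row in db["data"][rel]:
        row[attr] = default

def drop_attribute(rel, attr):
    """Inverse modification: the attribute disappears from schema and tuples."""
    db["schema"][rel].remove(attr)
    for row in db["data"][rel]:
        row.pop(attr, None)

add_attribute("person", "email", default="")
print(db["schema"]["person"])     # ['name', 'age', 'email']
print(db["data"]["person"])       # existing tuples updated by the rule
```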


Proceedings Article
01 Jan 1985

9 citations


Journal ArticleDOI
TL;DR: The method is proved to generate the optimal Internal Schema and to be easily amenable to updates in the Conceptual and External Schemata.

8 citations


Proceedings Article
18 Aug 1985
TL;DR: The goal of this research is to investigate the possibility of automatically inferring a database schema using certain features occurring in natural languages, namely, the syntactic structure of sentences.
Abstract: The goal of this research is to investigate the possibility of automatically inferring a database schema. Our motivation is to make the task of the database designer easier. We require the designer only to provide a picture of how she expects the database to be used. This is provided in the form of natural language queries which the database might be expected to answer. The system synthesises a schema from this information. The above problem can be viewed as a problem in learning. The inference method we are proposing incrementally constructs the schema. The central idea of the inference mechanism is that it exploits certain features occurring in natural languages, namely, the syntactic structure of sentences.
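
A very rough sketch of the incremental idea, assuming a single invented sentence pattern: each query contributes an entity and an attribute to the growing schema. The paper's inference mechanism is considerably more general.

```python
# Very rough sketch: incrementally infer entities and attributes from queries
# of the fixed form "what is the <attribute> of <entity> <value>?".
# The pattern and vocabulary are invented; they stand in for the syntactic
# analysis the paper proposes.

import re
from collections import defaultdict

PATTERN = re.compile(r"what is the (\w+) of (\w+) (\w+)\?", re.IGNORECASE)

schema = defaultdict(set)                # entity -> set of attributes

def absorb(query):
    m = PATTERN.match(query.strip())
    if m:
        attribute, entity, _value = m.groups()
        schema[entity].add(attribute)    # incremental refinement of the schema

for q in ["What is the salary of employee Smith?",
          "What is the manager of department Sales?",
          "What is the budget of department Sales?"]:
    absorb(q)

print(dict(schema))
# e.g. {'employee': {'salary'}, 'department': {'manager', 'budget'}}
# (set ordering may vary)
```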

Journal ArticleDOI
01 Jun 1985
TL;DR: One possible expert system design aid environment is suggested to assist the designer in his work; it is a step towards closing the gap between conventional database theory and AI databases.
Abstract: With the current penetration of expert systems into the business world, the question is how the expert system idea can be used to enhance existing information systems with more intelligence in usage and operation. This interest is not surprising given the advancement of the fifth generation of computer technology and the avid interest in the field of Artificial Intelligence. The design of an information system for an application therefore becomes more complex, and the difficulty the human designer has in dealing with it increases. To design intelligent systems, we have to be able to forecast the behavior of the information system more precisely before implementing it, i.e. we have to support the specification process. Clearly, technology such as database systems leads on efficiency issues, such as those needed for the construction, retrieval and manipulation of large shared databases. On the other hand, AI techniques have improved significantly with functions such as deductive reasoning and natural language processing. It is important to find a way to merge these technologies into one mainstream of computing. A meeting point for the two areas is the issue of conceptual knowledge modelling, so that models can be created that will define the role and the ways to use data in AI systems. In the framework of this study, one possible expert system design aid environment is suggested to assist the designer in his work. In a conceptual modelling environment a model is given for analysing complex real-world problems, known as the Conceptual Knowledge Model (CKM), represented by a Graphical and a Formal Representation. The Graphical Representation consists of three graphs: the Conceptual Requirement Graph, the Conceptual Behavior Graph, and the Conceptual Structure Graph. These graphs are developed by involving the expert during the design process. The graphs are then transformed into first-order predicate logic to represent the logical axioms of a theory, which constitutes the knowledge base of the Expert System. The model suggested here is a step towards closing the gap between conventional database theory and AI databases.

Proceedings ArticleDOI
05 Jun 1985
TL;DR: A survey performed by Safayeni showed that only 22% of offices that have installed office automation equipment use any kind of software for filing documents; even when used, the percentage of documents on-line is very small.
Abstract: The question arises regarding the effectiveness of the current automated document storage and retrieval systems for offices. A survey performed by Safayeni et al. [1] showed that only 22% of offices that have installed office automation equipment use any kind of software for filing documents; even when used, the percentage of documents on-line is very small. Furthermore, none of the respondents identified the automated filing of documents as an example of a "successful technology." The level of satisfaction with document retrieval software was lower than that for word processing and electronic mail systems.



01 Jan 1985
TL;DR: A hierarchical model of forms is introduced that captures a number of significant characteristics of the entities in a concise manner and is based on an extended version of the Entity-Relationship Model.
Abstract: A conceptual data schema is constructed from the analysis of the business forms which are used in an enterprise. In order to perform the analysis, a data model, a forms model, and heuristics to map from the forms model to the data model are developed. The data model we use is an extended version of the Entity-Relationship Model. Extensions include the addition of the min-max cardinalities and generalization hierarchy. By extending the min-max cardinalities to attributes we capture a number of significant characteristics of the entities in a concise manner. We introduce a hierarchical model of forms. The model specifies various properties of each form field within the form such as their origin, hierarchical structure, and cardinalities. The interconnection of the forms is expressed by specifying which form fields flow from one form to another. The Expert Database Design System creates a conceptual schema by incrementally integrating related collections of forms. The rules of the expert system are divided into six groups: (1) Form Selection, (2) Entity Identification, (3) Attribute Attachment, (4) Relationship Identification, (5) Cardinality Identification, and (6) Integrity Constraints. The rules of the first group use knowledge about the form flow to determine the order in which forms are analyzed. The rules in other groups are used in conjunction with a designer dialogue to identify entities, relationships, and attributes of a schema that represents the collection of forms.
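
The sketch below illustrates, with an invented order form and a much-simplified heuristic, the flavor of the rule groups above: field groups become entities, key fields and non-key fields are separated, and a field flowing in from another form yields a relationship. It is not the Expert Database Design System's rule base.

```python
# Simplified sketch in the spirit of the forms-to-schema rules: each group of
# form fields becomes an entity, key and non-key fields are separated, and a
# field flowing in from another form becomes a relationship.
# The order form below is an invented example.

order_form = {
    "name": "OrderForm",
    "fields": [
        {"name": "order_no",   "group": "Order",    "key": True},
        {"name": "order_date", "group": "Order",    "key": False},
        {"name": "cust_no",    "group": "Customer", "key": True,
         "flows_from": "CustomerForm"},
        {"name": "cust_name",  "group": "Customer", "key": False},
    ],
}

def derive_schema(form):
    entities, relationships = {}, []
    for f in form["fields"]:
        ent = entities.setdefault(f["group"], {"keys": [], "attributes": []})
        (ent["keys"] if f["key"] else ent["attributes"]).append(f["name"])
        if "flows_from" in f:
            relationships.append((form["name"], f["flows_from"], f["name"]))
    return entities, relationships

print(derive_schema(order_form))
```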




Book
01 Jan 1985
TL;DR: In a spinning unit comprising a fiber separating device and a spinning device, fibers are separated by withdrawing the fibers from a combing roller of the fiber separating device by a pressure air stream.
Abstract: In a spinning unit comprising a fiber separating device and a spinning device, fibers are separated by withdrawing the fibers from a combing roller of the fiber separating device by a pressure air stream, directed tangentially to the working surface of the combing roller and conveying the fibers to a sliding wall of the rotary spinning chamber.

Journal ArticleDOI
TL;DR: In this article, a twelve-dimensional psychosocial and environmental taxonomy is presented based on data gathered from 989 older residents of 18 small Kansas towns with populations of 2,500 or less.


Proceedings ArticleDOI
01 Mar 1985
TL;DR: The model suggested here is a step towards closing the gap between data base theory and AI databases.
Abstract: In a conceptual modelling environment a model is given for analysing complex real-world problems, known as the Conceptual Knowledge Model (CKM), represented by a Graphical Representation and a Formal Representation. The Graphical Representation consists of three graphs: the Conceptual Requirement Graph, the Conceptual Behavior Graph, and the Conceptual Structure Graph. These graphs are developed by consulting the expert during the design process. The graphs are then transformed into first-order predicate logic to represent the non-logical axioms of a first-order theory. The model suggested here is a step towards closing the gap between data base theory and AI databases.
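
A tiny sketch of the final transformation step, assuming an invented conceptual structure graph: each edge is rendered as a first-order-style axiom (represented here simply as a string), in the spirit of turning the graphs into a theory's axioms.

```python
# Tiny sketch: edges of a conceptual structure graph rendered as
# first-order-style axioms (plain strings here). The graph is an invented
# example, not the paper's CKM notation.

structure_graph = [
    ("Employee", "works_in", "Department"),
    ("Department", "belongs_to", "Company"),
]

def to_axioms(edges):
    return [f"forall x,y: {rel}(x, y) -> {src}(x) & {dst}(y)"
            for src, rel, dst in edges]

for axiom in to_axioms(structure_graph):
    print(axiom)
# forall x,y: works_in(x, y) -> Employee(x) & Department(y)
# forall x,y: belongs_to(x, y) -> Department(x) & Company(y)
```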

Proceedings ArticleDOI
01 Oct 1985
TL;DR: A mathematical foundation to the data structuring problem based on an analysis of natural languages is presented, which provides a unifying approach to the 'Entity-Relationship Data Model'.
Abstract: A mathematical foundation to the data structuring problem based on an analysis of natural languages is presented, which provides a unifying approach to the 'Entity-Relationship Data Model'. The mathematical operations are straightforward mappings of both the indicative and imperative/interrogative modes of natural languages. This analysis, which emphasizes the fundamental concept of "time", focuses on the role of VERBS in the Entity-Relationship model of data. These techniques use a subset of natural language to define the data schemas which are then inverted in the interrogative mode. This provides a simple approach to designing, implementing and interrogating real-world management information systems (MIS), using techniques which are understandable by both systems designers and end-users of the database.
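
A small sketch of the verb-centred idea, assuming two invented sentence patterns: the verb of an indicative sentence is read as a relationship between two entities, and a stored fact is "inverted" to answer a question posed in the interrogative mode.

```python
# Small sketch: the verb of a simple indicative sentence becomes a relationship
# between two entities, and stored facts are "inverted" to answer a question in
# the interrogative mode. The sentence patterns are invented examples.

import re

facts = []                                    # (subject, verb, object)

def tell(sentence):                           # indicative mode
    subj, verb, obj = re.match(r"(\w+) (\w+) (\w+)\.", sentence).groups()
    facts.append((subj, verb, obj))

def ask(question):                            # interrogative mode
    verb, obj = re.match(r"who (\w+) (\w+)\?", question, re.I).groups()
    return [s for (s, v, o) in facts if v == verb and o == obj]

tell("Acme supplies parts.")
tell("Bolt supplies parts.")
print(ask("Who supplies parts?"))             # ['Acme', 'Bolt']
```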

Book ChapterDOI
01 Jan 1985
TL;DR: Many psychological phenomena are so very complex that no single approach can do justice to their complexity, as mentioned in this paper, and intelligence would seem to be a prime example of such a phenomenon: no matter how one defines intelligence, its complexity seems to overwhelm the conceptual resources any one approach can bring to bear on understanding it.
Abstract: Many psychological phenomena are so very complex that no single approach can do justice to their complexity. Intelligence would seem to be a prime example of such a phenomenon. No matter how one defines intelligence, its complexity seems to overwhelm the conceptual resources any one approach can bring to bear on understanding it. Even limited aspects of intelligence seem almost staggering in their complexity. Consider, for example, that aspect of intelligence measured by conventional IQ tests. If almost a century of research on IQ test performance has shown anything, it is that no simple conceptual scheme or methodological approach has led, or perhaps can lead to an understanding of all the complexities that underlie test performance. The conceptual scheme and methodology one chooses will, of course, depend in large part upon the kinds of questions one wishes to ask.

Book ChapterDOI
01 Jan 1985
TL;DR: The members of Working Group Omega originally belonged to Working Group Beta and, therefore, their deliberations ought to be seen in the context of that Working Group; already at Seillac II, the issue of the user's conceptual model (UCM) received serious attention.
Abstract: The members of Working Group Omega originally belonged to Working Group Beta and, therefore, their deliberations ought to be seen in the context of that Working Group. Already at Seillac II (Guedj et al. 1980), the issue of the user’s conceptual model (UCM) received serious attention. We felt that the time was ripe to make an attempt at the formal specification of such user conceptual models. As a starting point we take the following definition of a UCM: A UCM is the set of all concepts and conceptual relations possessed by a (human) user with respect to some set of entities. To each UCM there corresponds a universe of discourse that is the expression of those concepts and conceptual relations. The M-notation, introduced in (Mac an Airchinnigh 1984), may be used as a basis for further development. Specifically, a user u possesses a conceptual model M of some application domain a. This UCM may then be denoted by M(u, a). It must be emphasised that such UCMs exist solely and entirely in the minds of their owners! The next step is to consider how to represent such UCMs. The following section contains the details.
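
One way to make the working definition concrete, purely as an illustration and not the M-notation itself: M(u, a) modelled as a set of concepts plus conceptual relations, indexed by user and application domain.

```python
# Illustration of the working definition only: M(u, a) modelled as the set of
# concepts and conceptual relations a user u holds about a domain a. The data
# structure is an invented stand-in, not the M-notation of the paper.

from dataclasses import dataclass, field

@dataclass
class UCM:
    user: str
    domain: str
    concepts: set = field(default_factory=set)
    relations: set = field(default_factory=set)   # (concept, relation, concept)

M = {}                                             # (u, a) -> UCM

def ucm(u, a):
    return M.setdefault((u, a), UCM(u, a))

m = ucm("alice", "library")
m.concepts |= {"Book", "Loan", "Member"}
m.relations.add(("Member", "borrows", "Book"))
print(M[("alice", "library")])
```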

Book ChapterDOI
01 Jan 1985
TL;DR: Microcomputers are installed every day in hospital clinical departments, commonly with only a short conceptual phase before implementation; the consequences of such an approach are problems with semantic definitions, problems arising from redundancy, and problems with the structure of data.
Abstract: Many microcomputers are installed every day in hospital clinical departments. Commonly, there is only a short conceptual phase before implementation; the consequences of such an approach are problems with semantic definitions, problems arising from redundancy, and problems with the structure of data.

Book ChapterDOI
01 Jan 1985
TL;DR: The ANSI/SPARC DBSSG(1) took a major step forward for the database community when it identified the need for a conceptual schema in the context of a three-schema framework for database systems.
Abstract: The ANSI/SPARC DBSSG(1) took a major step forward for the database community when it identified the need for a conceptual schema in the context of a three-schema framework for database systems. The three-schema framework allows a clear separation of the conceptual schema from the external schema and the internal physical schema, resulting in databases which are flexible and adaptable to changes. A new step forward needs to be taken if the database system framework is to allow databases to be flexible and adaptable to changes of the conceptual schema.
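
A compact sketch of the three-schema separation, with an invented conceptual relation: the external view and the internal (storage) layout are both defined as mappings over the conceptual schema, so either can change without touching the other.

```python
# Sketch of the ANSI/SPARC three-schema idea: an external view and an internal
# (storage) layout are both defined over the conceptual schema, so either side
# can change without affecting the other. Relation and mappings are invented.

conceptual = {"Employee": ["emp_no", "name", "dept", "salary"]}

# External schema: a user view defined over the conceptual schema.
def external_view(tuples):
    return [{"name": t["name"], "dept": t["dept"]} for t in tuples]

# Internal schema: a storage mapping of conceptual tuples (here, column-wise).
def internal_store(tuples):
    return {attr: [t[attr] for t in tuples] for attr in conceptual["Employee"]}

data = [{"emp_no": 1, "name": "Ada", "dept": "ENG", "salary": 50_000}]
print(external_view(data))      # what the user sees
print(internal_store(data))     # how the same conceptual data is stored
```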

Proceedings ArticleDOI
01 May 1985
TL;DR: A physical storage structure based on the List data structure is presented, meant to serve as an internal model of a database system utilizing a semantic conceptual data model.
Abstract: A physical storage structure based on the List data structure is presented. The structure, along with the operations defined on it, is meant to serve as an internal model of a database system utilizing a semantic conceptual data model. The advantages of the List-based storage structure stem from its ability to: cluster the related data items arbitrarily; represent the entire database, i.e., storage structures, access structures, and schemata, in a uniform way using the same structure; reorganize the storage structures using well-defined operations on the structures; and incorporate basic integrity constraints at the storage structure level. The structure is particularly suitable for environments where the amount of structural information is large relative to the size of the database - a characterization that fits a personal information environment.The definition of the List-based structure, the operations defined for it, and an implementation technique are discussed.
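
A toy sketch of the central idea, with an invented layout: one list-based node type represents schema entries and data entries uniformly, and related items are clustered by nesting sublists.

```python
# Toy sketch of the core idea: one list-based node type represents schema
# entries and data entries uniformly, and related items can be clustered by
# nesting sublists. The layout and names are invented for illustration.

class LNode:
    """A list cell: either an atom or a (nested) list of LNodes."""
    def __init__(self, value=None, children=None):
        self.value = value
        self.children = children or []

    def is_atom(self):
        return not self.children

# Schema and data expressed with the same structure; related items are
# clustered by placing them in the same sublist.
database = LNode(children=[
    LNode("schema:person(name, age)"),
    LNode(children=[LNode("person"), LNode("Ada"), LNode("36")]),
    LNode(children=[LNode("person"), LNode("Bob"), LNode("41")]),
])

def walk(node, depth=0):
    if node.is_atom():
        print("  " * depth + str(node.value))
    for child in node.children:
        walk(child, depth + 1)

walk(database)   # prints schema entry and clustered tuples uniformly
```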