
Showing papers on "Data management published in 1978"


Book Chapter
Jim Gray
01 Jan 1978
TL;DR: This paper is a compendium of data base management operating systems folklore and focuses on particular issues unique to the transaction management component, especially locking and recovery.
Abstract: This paper is a compendium of data base management operating systems folklore. It is an early paper and is still in draft form. It is intended as a set of course notes for a class on data base operating systems. After a brief overview of what a data management system is, it focuses on particular issues unique to the transaction management component, especially locking and recovery.

1,635 citations
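
Since locking is one of the two issues the notes single out, a small illustration may help. The following is a minimal sketch of a shared/exclusive lock table with strict two-phase release, in the spirit of the techniques the paper surveys; the class, method names, and granting policy are simplifications invented for this example, not taken from the paper.

```python
from collections import defaultdict

class LockTable:
    def __init__(self):
        # resource -> {txn_id: mode}, where mode is "S" (shared) or "X" (exclusive)
        self.holders = defaultdict(dict)

    def acquire(self, txn, resource, mode):
        """Grant the lock if compatible; return False (caller would wait) otherwise."""
        held = self.holders[resource]
        others = {t: m for t, m in held.items() if t != txn}
        if mode == "S" and all(m == "S" for m in others.values()):
            held.setdefault(txn, "S")      # keep an existing X; otherwise record S
            return True
        if mode == "X" and not others:     # alone on the resource: grant or upgrade
            held[txn] = "X"
            return True
        return False                       # conflict: a real system would enqueue

    def release_all(self, txn):
        """Strict two-phase locking: release everything only at commit or abort."""
        for held in self.holders.values():
            held.pop(txn, None)

table = LockTable()
assert table.acquire("T1", "account:42", "S")
assert table.acquire("T2", "account:42", "S")      # shared locks are compatible
assert not table.acquire("T2", "account:42", "X")  # blocked by T1's share lock
table.release_all("T1")
assert table.acquire("T2", "account:42", "X")      # upgrade now succeeds
```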


Book Chapter
01 Jan 1978
TL;DR: DADM uses deductive pathfinding and inference planning to select small sets of relevant premises and to construct skeletal derivations that guide the retrieval of data values and produce proofs supporting those answers.
Abstract: Inference planning techniques have been implemented and incorporated within a prototype deductive processor designed to support the extraction of information implied by, but not explicitly included in, the contents of a relationally structured data base. Deductive pathfinding and inference planning are used to select small sets of relevant premises and to construct skeletal derivations. When these “skeletons” are verified, the system uses them as plans to create data-base access strategies that guide the retrieval of data values, to assemble answers to user requests, and to produce proofs supporting those answers. Several examples are presented to illustrate the current capability of the prototype Deductively Augmented Data Management (DADM) system.

40 citations
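
To make the idea of derived answers concrete, here is a small sketch of query answering over stored facts plus deduction rules. The rule format and the bottom-up fixpoint evaluation are assumptions chosen for brevity; DADM itself plans top-down "skeleton" derivations before touching the data.

```python
FACTS = {("parent", "ann", "bob"), ("parent", "bob", "cal")}
RULES = [  # (head atom, body atoms); "?x"-style terms are variables
    (("grandparent", "?x", "?z"),
     [("parent", "?x", "?y"), ("parent", "?y", "?z")]),
]

def match(pattern, fact, env):
    """Extend env so that pattern equals fact, or return None on mismatch."""
    env = dict(env)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if env.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return env

def saturate(facts, rules):
    """Derive new facts from the rules until a fixpoint is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            envs = [{}]
            for atom in body:  # join each body atom against the known facts
                envs = [e2 for e in envs for f in facts
                        if len(f) == len(atom)
                        and (e2 := match(atom, f, e)) is not None]
            for env in envs:
                derived = tuple(env.get(t, t) for t in head)
                if derived not in facts:
                    facts.add(derived)
                    changed = True
    return facts

def query(goal):
    """Answer a goal (possibly with variables) over stored plus derived facts."""
    return [f for f in saturate(FACTS, RULES)
            if len(f) == len(goal) and match(goal, f, {}) is not None]

print(query(("grandparent", "?who", "cal")))  # [('grandparent', 'ann', 'cal')]
```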


Journal Article
TL;DR: Management computing is most prevalent in governments with professional management practices, where top management is supportive of computing and tends to control computing decisions, and where department users have less control over design and implementation activities.
Abstract: Traditional concepts of management information systems (MIS) bear little relation to the information systems currently in use by top management in most US local governments. What exists is management-oriented computing, involving the use of relatively unsophisticated applications. Despite the unsophisticated nature of these systems, management use of computing is surprisingly common, but also varied in its extent among local governments. Management computing is most prevalent in those governments with professional management practices, where top management is supportive of computing and tends to control computing decisions, and where department users have less control over design and implementation activities. Finally, management computing clearly has impacts on top managers, mostly involving improvements in decision information.

38 citations



Journal Article
TL;DR: A set of the most fundamental principles that have emerged in the field to guide the development of information systems in organizations is identified, including such important areas as data management, data independence, and information system structure.
Abstract: This article draws together and examines research findings and practical industrial experiences related to computer-based information systems. Its purpose is to identify a set of the most fundamental principles that have emerged in the field to guide the development of information systems in organizations. It also presents a foundation from which further research efforts can be launched. The characteristics of computer-based information systems are examined as background. Technical and developmental principles are then examined, including such important areas as data management, data independence, and information system structure. Behavioral and organizational principles are discussed next, encompassing resistance to systems, user and management involvement in development activities, and integration of systems into the organization. Finally, the question of "where do we go from here" is raised to point out areas needing continued research and development.

34 citations


Journal Article
TL;DR: This paper focuses on solutions to the problems of data management in distributed systems, some of which arise directly from the nature of distributed architecture, while others carry over from centralized systems, acquiring new importance in their broadened environment.
Abstract: Successful implementation of most distributed processing systems hinges on solutions to the problems of data management, some of which arise directly from the nature of distributed architecture, while others carry over from centralized systems, acquiring new importance in their broadened environment. Numerous solutions have been proposed for the most important of these problems.

33 citations


01 Jan 1978
TL;DR: The DATA System is an excellent tool for implementing alerting due to its ability to view the database at previous points in time; the work describes the implementation of the DATA System.
Abstract: This thesis presents the DATA (Dynamic Alerting Transaction Analysis) System as an alternative to a conventional database management system. The DATA System contains no records corresponding to entities but rather is simply a time-ordered list of transactions. The advantages of DATA in the areas of security, integrity, and operational ease are discussed, and the concept of alerting is presented. An alerting system provides facilities to monitor changes to the database in order to perform some action whenever certain conditions become true. An alerter can be thought of as a program that continuously monitors the database and takes some specified action when the corresponding condition becomes true. The DATA System is an excellent tool for implementing alerting due to its ability to view the database at previous points in time. The work describes the implementation of the DATA System.

31 citations
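
The alerter concept lends itself to a compact illustration. Below is a hypothetical sketch of a transaction-list store with alerters evaluated on each append, plus the historical view the abstract mentions; all names are invented, not the DATA System's actual interface.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class TransactionLog:
    transactions: List[dict] = field(default_factory=list)
    alerters: List[Tuple[Callable, Callable]] = field(default_factory=list)

    def add_alerter(self, condition: Callable, action: Callable):
        """Register a condition over the transaction list and an action to run."""
        self.alerters.append((condition, action))

    def append(self, txn: dict):
        """The only write operation: add a transaction, then check every alerter."""
        self.transactions.append(txn)
        for condition, action in self.alerters:
            if condition(self.transactions):
                action(txn)

    def as_of(self, time) -> List[dict]:
        """View the database at a previous point in time by replaying a prefix."""
        return [t for t in self.transactions if t["time"] <= time]

log = TransactionLog()
log.add_alerter(
    condition=lambda txns: sum(t["amount"] for t in txns) < 0,
    action=lambda txn: print("ALERT: balance went negative at time", txn["time"]),
)
log.append({"time": 1, "amount": 100})
log.append({"time": 2, "amount": -150})  # fires the alerter
print(log.as_of(1))                       # [{'time': 1, 'amount': 100}]
```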


Journal Article
TL;DR: A pilot study undertaken to develop and test analytical methodologies for application in comprehensive flood plain information studies is described; the methodology permits and encourages comprehensive, systematic, practical assessments of present and alternative future basin-wide development patterns.
Abstract: A pilot study undertaken to develop and test analytical methodologies for application in comprehensive flood plain information studies is described. The methodology permits and encourages comprehensive, systematic, practical assessments of present and alternative future basin-wide development patterns, as reflected by alternative land use patterns and physical works, in terms of flood hazard, economic damage potential, and selected environmental consequences. The analysis methodologies are centered on the integrated use of computerized spatial, gridded geographic and resource data files. A family of special-purpose utility computer programs accesses the data files, extracts appropriate variables, and interprets and formats the data into specific analytical parameters that are subsequently formatted for input to traditional modeling computer programs. An example application to Trail Creek in Clarke County, Georgia, is described.

29 citations


Journal Article
Glen G. Langdon
TL;DR: Some alternatives to the design of comparators, garbage collection, and domain extraction for architectures like the Relational Associative Processor (RAP) are offered.
Abstract: Associative “logic-per-track” processors for data management are examined from a technological and engineering point of view. Architectural and design decisions are discussed. Some alternatives to the design of comparators, garbage collection, and domain extraction for architectures like the Relational Associative Processor (RAP) are offered.

27 citations


01 Jan 1978
TL;DR: The goal of this work is to design mechanisms that can automatically select a near-optimal partition of a file's attributes, based on the usage pattern of the file and on the characteristics of the data in the file.
Abstract: One technique that is sometimes employed to enhance the performance of a data base management system is known as attribute partitioning. This is the process of dividing the attributes of a file into subfiles that are stored separately. By storing together those attributes that are frequently requested together by transactions, and by separating those that are not, attribute partitioning can reduce the number of pages that must be transferred from secondary storage to primary memory in order to process a transaction. The goal of this work is to design mechanisms that can automatically select a near-optimal partition of a file's attributes, based on the usage pattern of the file and on the characteristics of the data in the file. Because the space of possible partitions is large, the approach taken combines design heuristics with a cost estimator. The heuristics propose a small set of promising partitions to submit for detailed analysis. The estimator assigns a figure of merit to any proposed partition that reflects the cost that would be incurred in processing the transactions in the usage pattern if the file were partitioned in the proposed way. We have also conducted an extensive series of experiments with a variety of design heuristics; as a result, we have identified a heuristic that nearly always finds the optimal partition of a file. The context of this study is a relational data base management system that can process transactions made against relations whose physical partitioning is unknown to the user. In specifying and modeling this system, it is necessary to address the problem of optimizing nonprocedural queries made to a partitioned file. We have derived a number of such optimization techniques and have provided the results of a number of experiments with them.

25 citations
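
The cost trade-off described in the abstract can be made concrete with a toy example. The sketch below exhaustively enumerates partitions of a small attribute set and scores each against a workload under a simplified cost model (subfile width plus a fixed per-subfile access overhead); the thesis itself uses design heuristics and a detailed estimator rather than brute force, and the model here is an assumption for illustration only.

```python
from itertools import combinations

def partitions(attrs):
    """Enumerate every way of dividing the attribute list into subfiles."""
    if not attrs:
        yield []
        return
    first, rest = attrs[0], attrs[1:]
    for k in range(len(rest) + 1):
        for group in combinations(rest, k):
            remaining = [a for a in rest if a not in group]
            for p in partitions(remaining):
                yield [[first, *group]] + p

def cost(partition, workload):
    """A transaction reads every subfile holding an attribute it needs; each
    touched subfile costs its width plus a fixed per-access overhead of 1."""
    return sum(freq * sum(len(sub) + 1 for sub in partition
                          if set(sub) & set(needed))
               for needed, freq in workload)

attrs = ["id", "name", "salary", "photo"]
workload = [(("id", "name"), 80),   # (attributes needed, relative frequency)
            (("id", "salary"), 15),
            (("photo",), 5)]
best = min(partitions(attrs), key=lambda p: cost(p, workload))
print(best)  # [['id', 'name'], ['salary'], ['photo']] under this workload
```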


01 Aug 1978
TL;DR: The semantic data model (SDM) has been designed as a natural application modelling mechanism that can capture and express the structure of an application environment and is designed to enhance the effectiveness and usability of data base systems.
Abstract: The conventional approaches to the structuring of data bases provided in contemporary data base management systems are in many ways unsatisfactory for modelling data base application environments. The features they provide are too low-level, computer-oriented, and representational to allow the semantics of a data base to be directly expressed in its structure. The semantic data model (SDM) has been designed as a natural application modelling mechanism that can capture and express the structure of an application environment. The features of the SDM correspond to the principal intentional structures naturally occurring in contemporary data base applications. The SDM provides a rich but limited vocabulary of data structure types and primitive operations, striking a balance between semantic expressibility and the control of complexity. Furthermore, facilities for expressing derived (conceptually redundant) information are an essential part of the SDM; derived information is as prominent in the description of an SDM data base as is primitive data. The SDM is designed to enhance the effectiveness and usability of data base systems. There are many data base management systems in use today which represent a considerable investment on the parts of their developers and users; the SDM can be effectively used in conjunction with these existing data base systems to enhance their effectiveness and usability.
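
One SDM idea noted above, derived (conceptually redundant) information declared alongside primitive data, can be loosely illustrated in a few lines. This Python rendering is an assumption chosen for the example; SDM itself is a modelling notation, not a programming API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Ship:                      # an SDM-style entity class
    name: str                    # primitive attributes
    built: date
    tonnage: float

    @property
    def age_in_years(self) -> int:
        """Derived attribute: conceptually redundant, yet declared as part of
        the class description, on a par with the primitive attributes."""
        return (date.today() - self.built).days // 365

fleet = [Ship("Maru", date(1970, 5, 1), 12000.0)]
# Derived information is queried exactly like primitive data:
print([s.name for s in fleet if s.age_in_years > 40])  # ['Maru']
```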

Proceedings Article
31 May 1978
TL;DR: An overview of a methodology developed to support systems analysts in the process of database design is provided; application of the modeling approach to a realistic design problem is described, and modeling accuracy to within four percent is claimed.
Abstract: This paper provides an overview of a methodology developed to support systems analysts in the process of database design. The design approach is built upon an analytic model composed of (1) parametric descriptions for components of a generalized database organization, (2) costing equations which can evaluate a proposed modular database design, (3) an analyst interface which accepts an arbitrary database organization for evaluation, and (4) search procedures which automatically generate and compare thousands of alternative designs. Performance is measured as the sum of storage, retrieval, and maintenance costs and is estimated from parameters of the proposed design, the problem description and the storage environment. A virtual, record-frame view of secondary storage has been developed in which data records are added, deleted and modified with minimal effect on existing data structures. Application of the modeling approach to a realistic design problem is described, and modeling accuracy to within four percent is claimed.
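
The evaluate-and-search structure of the methodology can be sketched in a few lines. Everything below, from the parameter choices to the costing equation, is an invented placeholder meant only to show the shape of generate-and-compare design search; the paper's actual parametric descriptions and cost equations are far more detailed.

```python
from itertools import product

BLOCK_SIZES = [1024, 2048, 4096]   # hypothetical record-frame sizes (bytes)
INDEX_LEVELS = [1, 2, 3]           # hypothetical index depths

def cost(block_size, index_levels, *, records=1_000_000, record_len=200,
         retrievals=10_000, updates=1_000):
    """Placeholder costing equation: weighted storage + retrieval + maintenance."""
    storage = records * record_len * 1.05 ** index_levels   # index space overhead
    retrieval = retrievals * (index_levels + 1)             # probes per lookup
    maintenance = updates * index_levels * (block_size / 1024)
    return storage * 1e-6 + retrieval + maintenance

# Search procedure: generate the alternatives and compare their estimated costs.
best = min(product(BLOCK_SIZES, INDEX_LEVELS), key=lambda d: cost(*d))
print("best design:", best, "estimated cost:", round(cost(*best), 1))
```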

Journal Article
TL;DR: This paper examines the attributes of a generalized data base management system with respect to its impact on managerial decision making and the utilization of a facile method for non-programming users to interrogate the data base.
Abstract: This paper examines the attributes of a generalized data base management system with respect to its impact on managerial decision making. The discussion focuses upon two primary considerations: 1) the organization of data within a data base such that all intricate relationships are represented; and 2) the utilization of a facile method for non-programming users to interrogate the data base. Examples drawn from the field of material requirements planning are used to illustrate the concepts and potential of the generalized data base management system.

Proceedings Article
13 Sep 1978
TL;DR: It is shown how a relational database can be supported on a specific database machine, known as the database computer (DBC), with good performance, and how the size of the relational software is considerably reduced by using the DBC for supporting relational databases.
Abstract: Database machines are special-purpose devices that are expected to perform the common data management operations efficiently. In this paper, we attempt to show how a relational database can be supported on a specific database machine, known as the database computer (DBC), with good performance. The DBC employs modified moving-head disks for database storage. To achieve high-volume accessing, the read-out mechanisms of the moving-head disks are modified for tracks-in-parallel operation. To provide content-addressable search, the disk controller incorporates a set of microprocessors corresponding to the tracks of a cylinder. In this way, not only can an entire cylinder of data be accessed in one disk revolution, but relevant data which satisfies the user request can also be found and output in the same revolution. To minimize the number of cylinders involved in a database access, some structural information about the database is maintained in a block-oriented content-addressable memory made of charge-coupled devices (CCDs). Furthermore, clustering and security mechanisms are part of the hardware features provided by the DBC. With a cylinder-oriented content-addressable database store, a block-oriented content-addressable structure memory, and several functionally specialized components, the DBC can achieve one or two orders of magnitude of performance improvement over the conventional computer in database management. Also, a possible twofold increase in database storage requirement as compared to a conventional implementation is adequately offset by one or more orders of magnitude reduction in storage for structural information. The purpose of this paper is to analyze these performance issues. By using the DBC for supporting relational databases, the size of the relational software is considerably reduced. Specifically, the query optimizer of conventional systems is now rendered unnecessary. In comparison with a conventional implementation of a relational system, the DBC has been found to contribute larger performance gains. These gains are tabulated in the paper. All these tend to demonstrate that the DBC in particular, and database machines in general, can indeed contribute to an appreciable improvement in database management.
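
A toy software analogue may clarify the content-addressable search the abstract describes: each track of a cylinder is scanned by its own comparator during one (simulated) revolution, so selection happens at the storage device rather than in the host. This is purely illustrative; the DBC performs this in hardware.

```python
from concurrent.futures import ThreadPoolExecutor

CYLINDER = [  # one list per track; each record is (key, value)
    [("emp:1", "smith"), ("emp:2", "jones")],
    [("emp:3", "brown"), ("emp:4", "jones")],
]

def track_processor(track, predicate):
    """One per-track microprocessor: filter its track in a single pass."""
    return [rec for rec in track if predicate(rec)]

def search_cylinder(cylinder, predicate):
    # All tracks are read concurrently, mimicking tracks-in-parallel read-out.
    with ThreadPoolExecutor(max_workers=len(cylinder)) as pool:
        results = pool.map(track_processor, cylinder,
                           [predicate] * len(cylinder))
    return [rec for track_hits in results for rec in track_hits]

print(search_cylinder(CYLINDER, lambda rec: rec[1] == "jones"))
# -> [('emp:2', 'jones'), ('emp:4', 'jones')]
```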

Report
31 Dec 1978
TL;DR: An experimental information system which exploits the user's sense of spatiality to organize and access data, and which should have special appeal for that class of user for whom directness and immediacy are essential qualities for interaction.
Abstract: The Architecture Machine Group has been developing an experimental information system which exploits the user's sense of spatiality to organize and access data. Conceptual roots lie in the observation that one can readily locate and retrieve some book from one's bookshelf, or the appointment calendar from one's desktop, on the basis of where it is, or where one put it, in a well-learned, familiar space. A prototype system has been developed that uses wall-sized, full color digital television with synchronized stereo sound to create a virtual spatial world, 'Dataland,' over which the user helicopters via joystick control. Items of interest seen through a graphics 'window' can be zoomed in upon and interactively perused. Data types include: maps, text, book-like items, letters, photographs, slides, movies, sound and television. Results thus far suggest that the user quickly learns (on the order of minutes) to navigate about such a space, and readily adopts a spatial way of regarding and discussing data. The approach of managing data spatially is offered not as an alternative in competition with managing data on a symbolic or name basis, but as a complement thereto. This manner of dealing with data should have special appeal for that class of user for whom directness and immediacy are essential qualities for interaction.

Proceedings Article
13 Nov 1978
TL;DR: Some of the main design features of the 'S' interactive language and system are described and the implementation strategy is outlined.
Abstract: 'S' is an interactive language and system developed at Bell Laboratories for statistical computing, graphics, and data management. The language provides a combination of simplicity with power and extensibility, and applies a number of recent ideas in computer science to statistical systems for the first time. This paper describes some of the main design features of the system and outlines the implementation strategy.

Journal Article
TL;DR: Software management is considered from the corporate headquarters viewpoint, and standardization is presented as the most effective management device available at the corporate level for enhancing the overall software posture.
Abstract: Software management is considered from the corporate headquarters viewpoint. This perspective encompasses all facets of management, but those brought to bear specifically on software management are dealt with here; obstacles and ways to cope with them are presented. Standardization is presented as the most effective management device available at the corporate level for enhancing the overall software posture. Corporate management actions available for favorably influencing the quality of software over its life cycle, as well as research initiatives, are described.

01 Jan 1978
TL;DR: An overview of the relational information management (RIM) system, a prototype data management system developed for the Integrated Programs for Aerospace-Vehicle Design (IPAD) project, is provided.
Abstract: An overview of the relational information management (RIM) system is provided. RIM is a prototype data management system for the Integrated Programs for Aerospace-Vehicle Design (IPAD) project.

Book
01 Dec 1978
TL;DR: In this article, the authors present a framework for looking at management development in practice, and a specific procedure called Management Development Audit, by which organisations may obtain data of direct use in the assessment of their management development systems, analysed in a way which will encourage recommendations and strategies for change.
Abstract: Introduction The purpose of this monograph is to describe a practical process for the assessment of management development in organisations. This aim is achieved through the examination both of a framework for looking at management development in practice, and a specific procedure—the Management Development Audit—by which organisations may obtain data of direct use in the assessment of their management development systems, analysed in a way which will encourage recommendations and strategies for change. Although the monograph focuses specifically on the Management Development Audit, it is also intended to stimulate the reader's thinking in terms of examination and reflection upon their own management development systems. This may be best achieved by reflecting carefully upon the kinds of questions and issues to which the Management Development Audit addresses itself.

Proceedings Article
13 Sep 1978
TL;DR: This paper describes a man-machine interface system, called EUFID, that will permit users of data management systems to communicate with those systems in natural language and will act as a security screen to prevent unauthorized users from having access to particular fields in a data base.
Abstract: This paper describes a man-machine interface system, called EUFID, that will permit users of data management systems to communicate with those systems in natural language. At the same time, EUFID will act as a security screen to prevent unauthorized users from having access to particular fields in a data base. Our specific objective is to build a system that will be practical, efficient, and widely usable in existing, real-world applications. Our approach is to model the restricted set of linguistic structures and functions required for each application, rather than the manifold linguistic properties of natural language per se. This allows our system to be powerful enough to efficiently process English queries against specific data bases without attempting to understand forms of English that have little or no function in the contexts of those data bases.
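
The two central ideas, an application-specific vocabulary and a security screen over fields, can be sketched briefly. The table, fields, roles, and translation below are hypothetical stand-ins; EUFID's actual grammar and application models are far richer.

```python
VOCAB = {            # application-specific phrases -> (table, field)
    "salary": ("employees", "salary"),
    "name": ("employees", "name"),
    "department": ("employees", "dept"),
}
ACCESS = {"clerk": {"name", "dept"}, "manager": {"name", "dept", "salary"}}

def translate(question: str, role: str) -> str:
    """Map a restricted-English question to the fields it mentions."""
    wanted = [VOCAB[w] for w in question.lower().split() if w in VOCAB]
    if not wanted:
        raise ValueError("question uses no terms this application models")
    for table, field in wanted:              # the security screen
        if field not in ACCESS[role]:
            raise PermissionError(f"role {role!r} may not read {field!r}")
    fields = ", ".join(f for _, f in wanted)
    return f"SELECT {fields} FROM {wanted[0][0]}"

print(translate("list every name and department", "clerk"))
# translate("show each salary", "clerk") would raise PermissionError
```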

Journal Article
TL;DR: An interactive minicomputer-based system has been developed that enables the clinical research investigator to personally explore and analyze his research data and, as a consequence of these explorations, to acquire more information.

Journal Article
TL;DR: This paper focuses on data base management systems that allow unanticipated queries to be answered with relative ease while dealing with complex data structures and variable-length fields, records, and files.
Abstract: Within the past few years data base management systems have received an increasingly large share of the computing world's efforts. They are attractive, despite their high processing cost, because they relieve the programmer of the need to program frequently used operations such as access methods for complicated data structures and variable-length fields, records, and files. As data base management systems become more flexible, they also allow unanticipated queries to be answered with relative ease.

01 Jan 1978
TL;DR: The final step in this research is to evaluate proposed hardware that could be utilized to implement a data dictionary and part of a data directory by comparing its processing times with the processing times for a conventional sequential computer.
Abstract: This research is concerned with the development of a mathematical base that can be utilized to model data base management systems from the user level down to the bit level, and to develop and evaluate proposed hardware that could be utilized to implement a data dictionary and part of a data directory. The mathematical modeling development is accomplished through set theory and the addition of order to sets. This mathematical base is used to define in detail some of the functions that must be performed in Data Base Management (DBM) by operating on the following four levels of data: (1) the user computer interface (Reserved Word); (2) the attribute and file or relationship (F/R) names (Data Name); (3) the modifiers of the attribute and F/R names (Data Descriptors); and (4) the occurrences of the attributes and F/Rs (Data Occurrence). Hardware implementation designs are then considered for a subset of these functions and data levels. The data levels considered are the Data Name and Data Descriptor levels. Specifically, hardware designs are developed for the data and functions performed by a data dictionary and parts of a data directory. Given the proposed hardware implementation, the final step in this research is to evaluate this hardware by comparing its processing times with the processing times for a conventional sequential computer.
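
The Data Name and Data Descriptor levels that the proposed hardware serves can be rendered as a small software lookup. The layout below is an assumption for illustration; the thesis evaluates special-purpose hardware, not software.

```python
DATA_DICTIONARY = {
    # Data Name level  ->  Data Descriptor level
    "EMPLOYEE": {"kind": "file", "attributes": ["EMP_NO", "NAME", "SALARY"]},
    "EMP_NO":   {"kind": "attribute", "type": "integer", "key": True},
    "NAME":     {"kind": "attribute", "type": "char(30)", "key": False},
    "SALARY":   {"kind": "attribute", "type": "decimal", "key": False},
}

def describe(name: str) -> dict:
    """The lookup the proposed hardware accelerates: data name -> descriptors."""
    try:
        return DATA_DICTIONARY[name.upper()]
    except KeyError:
        raise KeyError(f"{name!r} is not a defined data name") from None

print(describe("employee"))
# {'kind': 'file', 'attributes': ['EMP_NO', 'NAME', 'SALARY']}
```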

01 Sep 1978
TL;DR: This manual is intended to be initial documentation of the basic procedures that are necessary for the successful creation of a grid cell data bank and was prepared primarily to aid the XFPI pilot studies in which gridded data banks are being created as a major focal point of the studies.
Abstract: Spatial data management techniques are rapidly becoming practical tools for use by Corps of Engineers field offices in a variety of their responsibilities. Various aspects of these techniques have been applied in traditional Survey and Phase I General Design Memorandum studies and on a large scale in the Expanded Flood Plain Information (XFPI) studies of the Corps' Flood Plain Management Services program. The use of grid data, e.g., spatial data stored in computer files in a specific grid cell format, has been determined to be the only spatial data management technique that offers significant analytical opportunities when compared to polygon-oriented approaches. The grid structures successfully used in applications to date have included square, rectangular, and triangular cells, the latter being an emerging method with particular potential in the management of terrain (topographic) data. This manual is intended to be initial documentation of the basic procedures that are necessary for the successful creation of a grid cell data bank. The manual was prepared primarily to aid the XFPI pilot studies, in which gridded data banks are being created as a major focal point of the studies. However, anyone interested in spatial data management should find that the manual contains valuable information.
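
The grid cell format the manual documents can be sketched as a simple keyed structure: observations are binned into fixed-size cells indexed by row and column. Cell size and variable names below are arbitrary choices for the example.

```python
from collections import defaultdict

CELL_SIZE = 100.0   # ground units per cell side

def cell_of(x: float, y: float) -> tuple:
    """Map a coordinate to its grid cell index (row, column)."""
    return (int(y // CELL_SIZE), int(x // CELL_SIZE))

# The "data bank": one record of variables per cell, e.g. land use and elevation.
data_bank = defaultdict(dict)

def store(x, y, variable, value):
    data_bank[cell_of(x, y)][variable] = value

store(130.0, 245.0, "land_use", "residential")
store(150.0, 210.0, "elevation_ft", 612.0)   # lands in the same cell, (2, 1)
print(data_bank[(2, 1)])
# {'land_use': 'residential', 'elevation_ft': 612.0}
```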

Journal Article
TL;DR: Information is the coin of the management realm, being the essential link between means and ends, and can be studied, analyzed, organized, systematized, and stored for future reference.
Abstract: Information is the coin of the management realm, being the essential link between means and ends. It can be studied, analyzed, organized, systematized, and stored for future reference. Management participation in the design and testing of information systems greatly enhances organizational communication. Construction management increasingly depends on meaningful information. Management success depends to a large degree on what information is chosen, and how it is utilized.

01 Jan 1978
TL;DR: SDMS permits direct storage and retrieval of an element set by specifying the corresponding key element values, and allows intermediate or scratch data to be stored in temporary data bases which vanish at job end.
Abstract: SDMS is a data base management system developed specifically to support scientific programming applications. It consists of a data definition program to define the forms of data bases, and FORTRAN-compatible subroutine calls to create and access data within them. Each SDMS data base contains one or more data sets. A data set has the form of a relation. Each column of a data set is defined to be either a key or data element. Key elements must be scalar. Data elements may also be vectors or matrices. The data elements in each row of the relation form an element set. SDMS permits direct storage and retrieval of an element set by specifying the corresponding key element values. To support the scientific environment, SDMS allows the dynamic creation of data bases via subroutine calls. It also allows intermediate or scratch data to be stored in temporary data bases which vanish at job end.
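
The subroutine-level behavior the abstract describes can be approximated in a few lines: each data set is a relation whose rows ("element sets") are stored and fetched directly by their key element values, and data elements may be vectors or matrices. The class and call names below are invented; SDMS itself exposes FORTRAN-compatible subroutines.

```python
import numpy as np

class DataSet:
    def __init__(self, key_elements, data_elements):
        self.key_elements = tuple(key_elements)    # keys must be scalar
        self.data_elements = tuple(data_elements)  # data may be vector/matrix
        self.rows = {}                             # key tuple -> element set

    def put(self, keys: tuple, **data):
        """Direct storage of an element set under its key element values."""
        self.rows[keys] = {name: data[name] for name in self.data_elements}

    def get(self, keys: tuple) -> dict:
        """Direct retrieval by key; no search through unrelated records."""
        return self.rows[keys]

# Data elements may be vectors or matrices, per the abstract:
pressures = DataSet(key_elements=("run_id", "station"),
                    data_elements=("coords", "pressure_field"))
pressures.put(("run7", 3), coords=np.array([1.0, 2.0, 0.5]),
              pressure_field=np.eye(3))
print(pressures.get(("run7", 3))["coords"])   # [1.  2.  0.5]
```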