
Showing papers on "Data management published in 1981"



Journal ArticleDOI
TL;DR: This paper shows how first-order predicate calculus can be used as a language for formally stating modeling knowledge and how knowledge stated in this manner can be subjected to the resolution principle.
Abstract: In view of the growing prominence of corporate modeling, an important area of research concerns techniques for facilitating the design and utilization of models. In this paper we show how first-order predicate calculus can be used as a language for formally stating modeling knowledge. Furthermore, knowledge stated in this manner can be subjected to the resolution principle. The result is that application specific modeling knowledge need not be embedded in a computer program. Rather, it can be stored in a data base and utilized as needed by a problem processing system employing resolution techniques. Advantages of a decision support system taking an approach of this sort are considerable modeling flexibility, capacity for automating the model formulation and execution processes, and compatibility with a high-level user interface language.
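The paper's mechanism is full first-order resolution; as a rough illustration of the idea only, the sketch below does propositional backward chaining (a restricted resolution strategy) over modeling knowledge held as data rather than as program logic. All rule and fact names (profit_model, price_data, and so on) are invented for this example.

```python
# Minimal sketch, not the authors' system: modeling knowledge is stored as
# Horn-style rules and facts (data), and a small problem processor derives
# whether a requested model can be formulated. All names are illustrative.

RULES = {
    # head: list of alternative bodies (conjunctions of subgoals)
    "profit_model": [["revenue_model", "cost_model"]],
    "revenue_model": [["price_data", "volume_data"]],
    "cost_model": [["fixed_cost_data", "variable_cost_data"]],
}
FACTS = {"price_data", "volume_data", "fixed_cost_data", "variable_cost_data"}

def provable(goal):
    """Backward chaining: a restricted form of the resolution principle."""
    if goal in FACTS:
        return True
    return any(all(provable(sub) for sub in body)
               for body in RULES.get(goal, []))

print(provable("profit_model"))  # True: the model can be formulated
print(provable("npv_model"))     # False: nothing stored supports it
```

Because the rules live in a data structure rather than in code, new modeling knowledge can be added without changing the problem processor, which is the flexibility the abstract claims.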

115 citations


01 Oct 1981
TL;DR: PAVER as mentioned in this paper is a field-tested, validated pavement maintenance management system designed to optimize the funds allocated for pavement maintenance and rehabilitation (M&R) at military installations.
Abstract: This report describes PAVER, a field-tested, validated pavement maintenance management system for military installations, designed to optimize the funds allocated for pavement maintenance and rehabilitation (M&R).

102 citations



Proceedings Article
09 Sep 1981
TL;DR: It is demonstrated that no one type of database machine is best for executing all types of queries, and that for several classes of queries certain proposed database machine designs are actually slower than a DBMS on a conventional processor.
Abstract: The rapid advances in the development of low-cost computer hardware have led to many proposals for the use of this hardware to improve the performance of database management systems. Usually the design proposals are quite vague about the performance of the system with respect to a given data management application. In this paper we develop an analytical model of the performance of a conventional database management system and four generic database machine architectures. This model is then used to compare the performance of each type of machine with a conventional DBMS. We demonstrate that no one type of database machine is best for executing all types of queries. We also show that for several classes of queries certain database machine designs which have been proposed are actually slower than a DBMS on a conventional processor.
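The paper's analytical model is not reproduced here; the toy sketch below only illustrates the kind of comparison it enables. The cost formulas and every parameter value (startup_ms, comm_ms_per_page, and so on) are assumptions of this sketch, not the published model.

```python
# Illustrative only: a toy analytical model in the spirit of the paper's
# comparison. All formulas and parameter values are assumptions of this
# sketch, not the model the authors published.

def conventional_dbms(pages, io_ms=30.0, cpu_ms_per_page=5.0):
    """Response time for a sequential scan on one conventional processor."""
    return pages * (io_ms + cpu_ms_per_page)

def database_machine(pages, processors, startup_ms=500.0,
                     io_ms=30.0, cpu_ms_per_page=5.0, comm_ms_per_page=2.0):
    """Parallel scan plus a fixed coordination overhead and a per-page
    transfer cost back to the host."""
    parallel = pages * (io_ms + cpu_ms_per_page) / processors
    return startup_ms + parallel + pages * comm_ms_per_page

for pages in (10, 1000):
    c = conventional_dbms(pages)
    m = database_machine(pages, processors=16)
    print(f"{pages:5d} pages: conventional {c:8.0f} ms, machine {m:8.0f} ms")
```

With these invented numbers the machine loses on the small query (the fixed overhead dominates) and wins decisively on the large one, which is the shape of the paper's conclusion that no single design is best for all query classes.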

45 citations


Proceedings Article
01 Jan 1981
TL;DR: A field study was conducted on the relationships between use of an information system, as a primary criterion for system success, and three secondary criteria: profitability, contribution to user performance, and user satisfaction.

37 citations



Journal ArticleDOI
19 Jun 1981-Science
TL;DR: Various types of scientific and technical data are required for the solution of key societal problems such as energy supply, environmental quality, and industrial productivity, and data management must be given a higher priority by the scientific community, industry, and government.
Abstract: Various types of scientific and technical data are required for the solution of key societal problems such as energy supply, environmental quality, and industrial productivity. Ensuring the quality of these data bases is essential. Modern computer and telecommunications technology offers opportunities for major improvements in the dissemination of data, but data management must be given a higher priority by the scientific community, industry, and government.

31 citations


Journal ArticleDOI
TL;DR: The capabilities of the Mistral/11 document retrieval system, which is based on the relational data model and designed for general document retrieval applications, are outlined.

30 citations


01 Jan 1981
TL;DR: This thesis proposes a model to study the I/O complexity of sorting n numbers with any special-purpose hardware device of size s, and shows a lower bound of Ω(n log n / log s).
Abstract: This thesis explores the design and use of custom-made VLSI hardware in the area of database problems. Our effort differs from most previous ones in that we search for structures and algorithms, directly implementable on silicon, for the solution of computation-intensive database problems. The types of target database systems include general database management systems and design database systems. The thesis deals mainly with database systems of the relational model. One common view concerning special-purpose hardware usage is that it performs a specific task. The proposed device is not a hardware solution to a specific problem, but provides a number of useful data structures and basic operations. It can be used to improve the performance of any sequential algorithm which makes extensive use of such data structures and basic operations. The design is based on a few basic cells, interconnected in the form of a complete binary tree. The proposed device can handle all the basic relational operations: select, join, project, union, and intersection. With a special-purpose device of limited size attached to a host, the overall performance may ultimately be dictated by the I/O between the two sites. The ideal special-purpose device design is one that achieves a balance between computation and I/O. We propose a model to study the I/O complexity of sorting n numbers with any special-purpose hardware device of size s, and show a lower bound of Ω(n log n / log s). We present an optimal design achieving this bound. An important finding is that for practical ranges on the quantity of data to be sorted, systolic sorting devices of small sizes can beat fast sequential sorting algorithms.
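As a quick numeric illustration of the Ω(n log n / log s) bound (constant factors are ignored, so only the trend as the device size s grows is meaningful; these are not the thesis's measurements):

```python
# Evaluate the lower-bound expression n log n / log s for a fixed n and
# several device sizes s. Values show the trend only; constants dropped.
import math

def io_lower_bound(n, s):
    return n * math.log2(n) / math.log2(s)

n = 1_000_000
for s in (2**4, 2**10, 2**20):
    print(f"s = 2^{int(math.log2(s)):2d}: ~{io_lower_bound(n, s):>12,.0f} I/Os")
```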

24 citations


Book ChapterDOI
01 Jan 1981
TL;DR: New performance-enhancing features of the DADM system include improved user interfaces, improved visibility of processes and data structures, structure sharing, improvements in inference-planning mechanisms, methods for dealing with incomplete information, utilization of semantic advice, and means for controlling recursive premises.
Abstract: A system for applying the theory of logical deduction and proof procedures to the accessing of data stored in conventional data management systems is described and illustrated with several examples. The DADM (Deductively Augmented Data Management) system has been developed along several dimensions of utility and performance to provide a vehicle for research on interactive techniques for reasoning with data, answering questions, and supporting on-line decision making. After illustrating present system operation by means of several examples, new performance-enhancing features of the system are described. These features include improved user interfaces, improved visibility of processes and data structures, structure sharing, improvements in inference-planning mechanisms, methods for dealing with incomplete information, utilization of semantic advice, and means for controlling recursive premises.
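DADM's actual inference planner is far richer; the fragment below only illustrates the core idea of deductive augmentation: a general premise applied at query time to facts fetched from a conventional store, so that derived relationships need not be stored explicitly. Relation and constant names are invented.

```python
# Illustrative only: extensional facts as they might be returned by the
# underlying data management system.
REPORTS_TO = {("ada", "grace"), ("grace", "edsger"), ("edsger", "john")}

def above(emp, boss):
    """Premise: above(E, B) <- reports_to(E, B)
                above(E, B) <- reports_to(E, M) and above(M, B)
    This recursive premise terminates here because the facts are acyclic;
    cyclic data would need the recursion controls the paper discusses."""
    if (emp, boss) in REPORTS_TO:
        return True
    return any(above(mid, boss) for (e, mid) in REPORTS_TO if e == emp)

print(above("ada", "john"))  # True, though no such tuple is stored
print(above("john", "ada"))  # False
```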


Journal ArticleDOI
TL;DR: This paper presents a system framework intended to improve understanding of environmental management; by analyzing the links between elements of the environmental management system, it shows the general relationships that exist between complex system elements.
Abstract: This paper presents a system framework whose purpose is to improve understanding of environmental management. By analyzing the links between elements of the environmental management system, it is possible to construct a model that aids thinking systematically about the decision-making subsystem, and other subsystems, of the entire environmental management system. Through a multidisciplinary environmental approach, each of the individual subsystems is able to adapt to threats and opportunities. The fields of government, market economics, social responsibility and ecology, for example, are so complex that it is extremely difficult to develop a framework that gives full consideration to all aspects. This paper, through the application of a highly idealized system framework, attempts to show the general relationships that exist between complex system elements.

Book
31 Mar 1981
TL;DR: The Organization of Data Base Administration is a guide to managing the Data Resource and the DBA function, and to the development of data base applications and standards.
Abstract: Table of contents:

Part I: The Organization of Data Base Administration
1. Managing the Data Resource
   1.1. The Traditional Approach to Data Management
   1.2. The Data Base Approach
   1.3. Criteria for Using the Data Base Approach
   1.4. Implications for Management
2. The DBA Function
   2.1. An Overview of the DBA Function
   2.2. The History of the DBA Function
   2.3. The Nature of DBA Tasks
   2.4. DBA Organization and Staffing
   2.5. DBA in Practice
3. DBA within the Organization
   3.1. DBA Interfaces
   3.2. The DBA and Data Processing
   3.3. The Data Processing Organization
   3.4. Placement of the DBA
   3.5. Other Factors Affecting Organizational Position
4. DBA Organization and Staff
   4.1. DBA Skills Inventory
   4.2. Internal Organization of DBA
5. Organizational Dynamics
   5.1. The Evolution of the DBA Function
   5.2. Conflicts in the DBA's Role
   5.3. Power and the DBA
Part II: Data Base Planning
6. Components of Data Base Planning
   6.1. The Planning Process
   6.2. Data Base Goals
   6.3. Data Base Plans
   6.4. Summary
7. Evaluation and Selection of Data Base Management Systems
   7.1. DBMS Features
   7.3. Methods of Evaluation
   7.4. The DBA's Role
Part III: Data Base Design
8. The Data Base Design Process
   8.1. Overview of the Data Base Design Process
   8.2. Trade-Offs in Data Base Design
   8.3. Constraints of the DBMS
   8.4. The DBA's Role
9. Logical Data Base Design
   9.1. The Objectives of Logical Design
   9.2. Basic Concepts
   9.3. Steps in the Logical Design Process
   9.4. The DBA's Role in Logical Design
10. Physical Data Base Design
   10.1. The Objectives of Physical Design
   10.2. Basic Concepts
   10.3. Two Illustrations of Cost-Performance Trade-Offs
   10.4. Steps in the Physical Design Process
   10.5. Other Issues in Physical Design
   10.6. The DBA's Role in Physical Design
Part IV: Data Base Operation and Control
11. Maintaining Data Base Integrity
   11.1. Sources of Error
   11.2. Techniques for Maintaining Integrity
   11.3. Integrity Features in DBMS Packages
   11.4. The DBA's Role in Maintaining Data Base Integrity
12. Controlling Data Base Access
   12.1. Threats to Data Base Security
   12.2. Methods for Ensuring Data Security
   12.3. DBMS Security Features
   12.4. Components of Data Base Security Policy
   12.5. The Implications of Privacy
13. Monitoring Data Base Performance
   13.1. Causes of Imbalance
   13.2. Measures of Performance
   13.3. Performance Tools
   13.4. Resolution of Performance Problems
Part V: Managing the User Interface
14. Data Administration
   14.1. Types of Metadata
   14.2. Uses of Metadata
   14.3. The Role of Data Dictionary/Directory Systems
15. Data Base Standards
   15.1. Conventions for Data Element Naming
   15.2. Standards for Data Base Application Programs
   15.3. Data Base Documentation
   15.4. The Development of Data Base Applications
   15.5. The DBA and Data Base Standards
Part VI: Case Histories
16. The DBA in Practice
   16.1. Company X: Getting Started
   16.2. Company Y: Balancing Technical and Communications Skills
   16.3. Company Z: DBA as "Storage Cop"
   16.4. Observations
Appendix A: Data Base Management System Packages (Model 204)
Appendix B: Data Dictionary/Directory Packages (Data Catalogue 2; the Model 204 Data Dictionary)

Proceedings ArticleDOI
01 Dec 1981
TL;DR: An interactive computer-aided design program used to perform systems-level design and analysis of large spacecraft concepts is presented in this paper, with emphasis on rapid design, analysis of integrated spacecraft, and automatic spacecraft modeling for lattice structures.
Abstract: An interactive computer-aided design program used to perform systems-level design and analysis of large spacecraft concepts is presented. Emphasis is on rapid design, analysis of integrated spacecraft, and automatic spacecraft modeling for lattice structures. Capabilities and performance of multidiscipline application modules, the executive and data management software, and graphics display features are reviewed. A single user at an interactive terminal can create, design, analyze, and conduct parametric studies of Earth-orbiting spacecraft with relative ease. Data generated in the design, analysis, and performance evaluation of an Earth-orbiting large-diameter antenna satellite are used to illustrate current capabilities. Computer run-time statistics for the individual modules quantify the speed at which modeling, analysis, and design evaluation of integrated spacecraft concepts are accomplished in a user-interactive computing environment.



01 Sep 1981
TL;DR: The compilation was divided into seven major parts including data organization, dialogue modes, user input devices, command language and command processing, feedback and error management, security and disaster prevention, and multiple user communication.
Abstract: The purpose of this report is to compile in one document the variety of user considerations relating to software design of computer-based information systems. Approximately 500 such considerations, drawn from thirteen source documents, are included. For organizational purposes, the compilation was divided into seven major parts: data organization, dialogue modes, user input devices, command language and command processing, feedback and error management, security and disaster prevention, and multiple-user communication.


Proceedings Article
01 Jun 1981
TL;DR: This report is an anthology of papers prepared by the investigators which identify and discuss the design, development, and implementation of a uniform user interface to multiple heterogeneous remote database management systems, and the development of a demonstrable prototype.
Abstract: This report documents the three-year investigation by the National Bureau of Standards into the technical issues involved in providing a uniform user and program environment for (possibly concurrent) access to multiple heterogeneous remote database management systems. It is an anthology of papers prepared by the investigators which identify and discuss the design, development, and implementation of such a user interface, and the development of a demonstrable prototype.
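The NBS prototype's actual interface is specified in the report itself; the sketch below is only a generic shape for the problem: one uniform call signature adapted onto dissimilar back ends, with results merged for the user. All class and method names here are invented.

```python
# A generic sketch of the problem's shape, not the NBS prototype.
from abc import ABC, abstractmethod

class RemoteDBMS(ABC):
    """Uniform interface each heterogeneous back end is wrapped in."""
    @abstractmethod
    def select(self, relation, predicate):
        """Return matching records as plain dictionaries."""

class RelationalBackend(RemoteDBMS):
    def __init__(self, tables):          # flat rows per relation name
        self.tables = tables
    def select(self, relation, predicate):
        return [row for row in self.tables.get(relation, []) if predicate(row)]

class HierarchicalBackend(RemoteDBMS):
    """Records are nested under parent segments; the adapter flattens
    them so the caller sees the same shape as the relational case."""
    def __init__(self, root):
        self.root = root
    def select(self, relation, predicate):
        rows = [child for parent in self.root.get(relation, [])
                for child in parent["segments"]]
        return [r for r in rows if predicate(r)]

def federated_select(backends, relation, predicate):
    """One user request fanned out to every remote system, results merged."""
    out = []
    for b in backends:
        out.extend(b.select(relation, predicate))
    return out

rel = RelationalBackend({"patients": [{"id": 1, "age": 64}]})
hier = HierarchicalBackend({"patients": [{"segments": [{"id": 2, "age": 71}]}]})
print(federated_select([rel, hier], "patients", lambda r: r["age"] > 60))
```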


06 May 1981
TL;DR: The conceptual framework of Modal Data Management reflects three years of experience with the DATA system, in which a time-oriented DBMS has been conceived, designed, and implemented.
Abstract: The sense of time is implicit in almost all human activity, yet it is rarely reflected in the computer database views of these activities. This paper offers a method of dealing with time, modal storage and retrieval, and describes formal and practical realizations of the concept. The conceptual framework of Modal Data Management reflects three years of experience with the DATA (Dynamic Alerting Transaction Analysis) system, in which a time-oriented DBMS has been conceived, designed, and implemented. The paper first anchors the Modal Data Management concept in the context of the relevant research in information modelling and storage technologies, followed by a discussion of the architectural and functional attributes of the DATA system. The major lessons from the DATA experience are then presented, with special emphasis on their impact on the design of future versions of the system. The concluding comments deal with specific implications of Modal Data Management in the domains of DBMS usability and software engineering.
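As a hedged sketch of what time-oriented ("modal") storage and retrieval means in practice (an assumption-laden toy, not the DATA system's design): every write below is kept with its timestamp, and reads are answered as of a stated moment.

```python
# Minimal time-oriented store: history is retained, reads are "as of" t.
import bisect

class ModalStore:
    def __init__(self):
        self.times = {}    # key -> sorted timestamps
        self.values = {}   # key -> values, parallel to the timestamps

    def put(self, key, value, t):
        ts = self.times.setdefault(key, [])
        vs = self.values.setdefault(key, [])
        i = bisect.bisect_right(ts, t)   # keep history ordered by time
        ts.insert(i, t)
        vs.insert(i, value)

    def get_as_of(self, key, t):
        """Most recent value written at or before time t, else None."""
        ts = self.times.get(key, [])
        i = bisect.bisect_right(ts, t)
        return self.values[key][i - 1] if i else None

s = ModalStore()
s.put("bp_systolic", 150, t=1)
s.put("bp_systolic", 128, t=5)
print(s.get_as_of("bp_systolic", 3))   # 150: the value that held at t=3
print(s.get_as_of("bp_systolic", 9))   # 128: the current value
```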

Journal ArticleDOI
TL;DR: A re-evaluation of the initial design criteria for MUMPS is now warranted; language enrichment, compilation and/or preprocessing, and functional integration into general-purpose environments are the major characteristics to be expected in the evolution of MUMPS systems.

Book
01 Jan 1981
TL;DR: This paper limits the discussion to a class of DCSs which have an interconnection of dedicated/shared, programmable, functional PEs working on a set of jobs which may be related or unrelated.
Abstract: The recent advances in large-scale integrated logic and memory technology, coupled with the explosion in size and complexity of the application areas, have led to the design of distributed architectures. Basically, a Distributed Computer System (DCS) is considered as an interconnection of digital systems called Processing Elements (PEs), each having certain processing capabilities and communicating with each other. This definition encompasses a wide range of configurations, from a uniprocessor system with different functional units to a multiplicity of general-purpose computers (e.g., ARPANET). In general, the notion of "distributed systems" varies in character and scope with different people [30]. So far, there is no accepted definition of, or basis for classifying, these systems. In this paper, we limit our discussion to a class of DCSs which have an interconnection of dedicated/shared, programmable, functional PEs working on a set of jobs which may be related or unrelated.


Journal ArticleDOI
TL;DR: Computer graphics has become a true management tool and this nontechnical discussion introduces its capabilities and benefits to potential users.
Abstract: Computer graphics has become a true management tool. This nontechnical discussion introduces its capabilities and benefits to potential users.


01 Dec 1981
TL;DR: The Dynamics Explorer project has acquired the ground data processing system from the Atmosphere Explorer project to provide a central computer facility for the data processing, data management and data analysis activities of the investigators.
Abstract: The Dynamics Explorer project has acquired the ground data processing system from the Atmosphere Explorer project to provide a central computer facility for the data processing, data management, and data analysis activities of the investigators. Access to this system is via remote terminals at the investigators' facilities, which provide ready access to the data sets derived from groups of instruments on both spacecraft. The original system has been upgraded with both new hardware and enhanced software systems. These new systems include color and grey-scale graphics terminals, an augmentation computer, a micrographics facility, a versatile data base with a directory and data management system, and graphics display software packages.

04 Nov 1981
TL;DR: In this paper, a two-phase approach to the statistical and mathematical analyses of cardiology data distributed over many large (Hewlett-Packard Image 1000) computer data bases is presented, which allows data from many distinct applications to be merged and interrelated in a reasonable time frame without making a priori decisions on data base applications.
Abstract: A two-phase approach to the statistical and mathematical analyses of cardiology data distributed over many large (Hewlett-Packard Image 1000) computer data bases is presented. During the first phase, patients satisfying specified criteria in each of the data bases are selected, and the specific data of these patients required for analyses during the second phase are retrieved and merged into a single random-access file. During the second phase, statistical and mathematical analyses of the merged data are performed and the results reported. Both phases are accomplished using global data management commands. The approach presented allows data from many distinct applications to be merged and interrelated in a reasonable time frame without making a priori decisions on data base applications.
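A hedged sketch of the two-phase pattern follows. The original ran against Hewlett-Packard Image 1000 data bases using global data management commands; the field names, selection criteria, and in-memory structures below are invented for illustration.

```python
# Phase 1: select and merge across distinct data bases.
# Phase 2: analyze the merged file. All data and names are invented.
import statistics

# Distinct application data bases, each holding different patient data.
db_cath = [{"pid": 1, "ef": 0.42}, {"pid": 2, "ef": 0.61}]
db_ecg  = [{"pid": 1, "qrs_ms": 128}, {"pid": 2, "qrs_ms": 92}]

def phase1(selection, *databases):
    """Select qualifying patients from each data base and merge the fields
    needed later into one random-access structure keyed by patient."""
    merged = {}
    for db in databases:
        for rec in db:
            if selection(rec):
                merged.setdefault(rec["pid"], {}).update(rec)
    return merged

# Phase 2: statistical analysis over the merged file.
merged = phase1(lambda rec: True, db_cath, db_ecg)
efs = [r["ef"] for r in merged.values() if "ef" in r]
print(f"patients merged: {len(merged)}, mean EF: {statistics.mean(efs):.2f}")
```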
