
Showing papers on "Data management published in 1979"


08 Oct 1979
TL;DR: Discusses how the expressive power of the query language was increased, how these changes affected query processing in a distributed data base, and some limitations of and planned extensions to the current system.
Abstract: As part of the continuing development of the LADDER system [1] [2], we have substantially expanded the capabilities of the data base access component that serves as the interface between the natural-language front end of LADDER and the data base management systems on which the data is actually stored. SODA, the new data base access component, goes beyond its predecessor IDA [3] in that it accepts a wider range of queries and accesses multiple DBMSs. This paper is concerned with the first of these areas: it discusses how the expressive power of the query language was increased, how these changes affected query processing in a distributed data base, and some limitations of and planned extensions to the current system.
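
As a rough illustration of what query processing in a distributed data base involves, the sketch below (all names invented, not SODA's actual interfaces) routes per-relation subqueries to the DBMSs that hold each relation and joins the results in the access component.

    class Backend:
        """One underlying DBMS holding some of the relations."""
        def __init__(self, name, relations):
            self.name = name
            self.relations = relations          # {relation name: list of row dicts}

        def select(self, relation, predicate):
            return [row for row in self.relations[relation] if predicate(row)]

    def distributed_join(backends, rel_a, rel_b, key, pred_a, pred_b):
        """Route each subquery to whichever DBMS stores the relation, join locally."""
        holder = {name: b for b in backends for name in b.relations}
        left = holder[rel_a].select(rel_a, pred_a)
        right = holder[rel_b].select(rel_b, pred_b)
        index = {}
        for row in right:
            index.setdefault(row[key], []).append(row)
        return [dict(l, **r) for l in left for r in index.get(l[key], [])]

    dbms1 = Backend("dbms1", {"ships": [{"ship": "Kennedy", "class": "CV"}]})
    dbms2 = Backend("dbms2", {"positions": [{"ship": "Kennedy", "port": "Norfolk"}]})
    print(distributed_join([dbms1, dbms2], "ships", "positions", "ship",
                           lambda r: r["class"] == "CV", lambda r: True))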

163 citations


Proceedings ArticleDOI
30 May 1979
TL;DR: PLAIN incorporates a relational database definitional facility, along with low-level and high-level operations on relations, showing how the database operations are combined with programming language notions such as type checking, block structure, expression evaluation and iteration.
Abstract: The programming language PLAIN has been designed to support the construction of interactive information systems within the framework of a systematic programming methodology. One of the key goals of PLAIN has been to achieve an effective integration of programming language and database management concepts, rather than either the functional interface to database operations or the low-level database navigation operations present in other schemes. PLAIN incorporates a relational database definitional facility, along with low-level and high-level operations on relations. This paper describes those features informally, showing how the database operations are combined with programming language notions such as type checking, block structure, expression evaluation, and iteration. A brief description of the implementation status is included.
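
PLAIN itself is a Pascal-style language; the sketch below is not PLAIN syntax but a hedged illustration of the integration idea, treating a relation as a first-class, type-checked value over which ordinary iteration and high-level selection both operate.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Employee:          # the "tuple type" of the relation
        name: str
        dept: str
        salary: int

    class Relation:
        def __init__(self, row_type):
            self.row_type, self.rows = row_type, set()

        def insert(self, row):
            if not isinstance(row, self.row_type):   # compile-time in PLAIN,
                raise TypeError("row type mismatch")  # a runtime check here
            self.rows.add(row)

        def where(self, pred):                        # high-level selection
            out = Relation(self.row_type)
            out.rows = {r for r in self.rows if pred(r)}
            return out

        def __iter__(self):                           # low-level iteration
            return iter(self.rows)

    emp = Relation(Employee)
    emp.insert(Employee("Ada", "R&D", 900))
    emp.insert(Employee("Max", "Sales", 700))
    for e in emp.where(lambda r: r.salary > 800):     # loop over a relation
        print(e.name)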

51 citations


Journal ArticleDOI
TL;DR: A system to systematically compare the performance of various methods (software modules) for the numerical solution of partial differential equations; its ease of use makes it possible to obtain confidence in the results, and its portability allows others to check or extend the measurements.
Abstract: This paper describes a system to systematically compare the performance of various methods (software modules) for the numerical solution of partial differential equations. We discuss the general nature and large size of this performance evaluation problem and the data one obtains. The system meets certain design objectives that ensure a valid experiment: 1) precise definition of a particular measurement; 2) uniformity in definition of variables entering the experiment; and 3) reproducibility of results. The ease of use of the system makes it possible to make the large sets of measurements necessary to obtain confidence in the results, and its portability allows others to check or extend the measurements. The system has four parts: 1) semiautomatic generation of problems for experimental input; 2) the ELLPACK system for actually solving the equation; 3) a data management system to organize and access the experimental data; and 4) data analysis programs to extract graphical and statistical summaries from the data.
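
A minimal sketch of that four-part structure, with invented stand-ins for the problem generator and solver modules (the real system solves elliptic PDEs through ELLPACK):

    import time, statistics

    def generate_problems(n):                       # part 1: problem generation
        return [{"id": i, "size": 10 * (i + 1)} for i in range(n)]

    def solve(problem, method):                     # part 2: run a solver module
        t0 = time.perf_counter()
        sum(x * x for x in range(problem["size"] * 1000))   # stand-in workload
        return {"problem": problem["id"], "method": method,
                "seconds": time.perf_counter() - t0}

    measurements = [solve(p, m)
                    for p in generate_problems(5)
                    for m in ("method_A", "method_B")]

    by_method = {}                                  # part 3: organize the data
    for rec in measurements:
        by_method.setdefault(rec["method"], []).append(rec["seconds"])

    for method, times in by_method.items():         # part 4: statistical summary
        print(method, "median:", round(statistics.median(times), 4))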

48 citations


Proceedings ArticleDOI
27 Nov 1979
TL;DR: The paper describes the major problems and design decisions adopted for the Data Access and Transfer System (DATS) and identifies a data access and transfer level.
Abstract: The following paper presents a local heterogeneous network, the HMINET. The adopted software architecture identifies a data access and transfer level, which affords the higher-level applications a mechanism for data access and transfer that alleviates the effects of the incompatibilities and differences between the data management systems. The paper describes the major problems and the design decisions we adopted for the Data Access and Transfer System (DATS).
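
A hedged sketch of what such a data access and transfer level can look like: one uniform call hides the mismatched interfaces of two invented back-end data management systems.

    class DbmsAlpha:
        def fetch(self, key): return {"key": key, "val": "from alpha"}

    class DbmsBeta:
        def lookup(self, k): return (k, "from beta")

    class DatsAdapter:
        """Uniform access call, whatever the underlying system expects."""
        def __init__(self, system): self.system = system
        def get(self, key):
            if hasattr(self.system, "fetch"):
                return self.system.fetch(key)["val"]
            k, v = self.system.lookup(key)
            return v

    for node in (DatsAdapter(DbmsAlpha()), DatsAdapter(DbmsBeta())):
        print(node.get("record-1"))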

45 citations


Proceedings ArticleDOI
30 May 1979
TL;DR: The effect on the performance of data management systems of the use of extended storage devices, multiple processors and prefetching data blocks is analyzed with respect to one system, INGRES, and it is shown that the random access model of data references holds only for overhead-intensive queries.
Abstract: The effect on the performance of data management systems of the use of extended storage devices, multiple processors and prefetching data blocks is analyzed with respect to one system, INGRES. Benchmark query streams, derived from user queries, were run on the INGRES system and their CPU usage and data reference patterns traced. The results show that the performance characteristics of the two query types, data-intensive queries and overhead-intensive queries, are so different that it may be difficult to design a single architecture to optimize the performance of both types. It is shown that the random access model of data references holds only for overhead-intensive queries, and then only if references to system catalogs are not considered data references. Significant sequentiality of reference was found in the data-intensive queries. It is shown that back-end data management machines that distribute processing toward the data may be cost effective only for data-intensive queries. It is proposed that the best method of distributing the processing of the overhead-intensive query is through the use of intelligent terminals. A third benchmark set, multi-relation queries, was devised, and proposals are made for taking advantage of the locality of reference which was found.
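
The finding about prefetching can be illustrated with a small, self-contained simulation (not the paper's methodology): a buffer that prefetches the next block helps a sequential reference string but not a random one.

    import random

    def hit_rate(refs, prefetch, buf_size=8):
        buf, hits = [], 0
        for blk in refs:
            if blk in buf:
                hits += 1
            else:
                buf.append(blk)
                if prefetch:
                    buf.append(blk + 1)    # fetch the next block too
                buf = buf[-buf_size:]       # simple FIFO eviction
        return hits / len(refs)

    sequential = list(range(1000))                              # data-intensive scan
    random.seed(0)
    scattered = [random.randrange(1000) for _ in range(1000)]   # overhead-intensive

    for name, refs in (("sequential", sequential), ("random", scattered)):
        print(name, "no prefetch:", hit_rate(refs, False),
              " prefetch:", hit_rate(refs, True))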

38 citations


Journal ArticleDOI
TL;DR: This three-part paper reviews general features of scientific data management from a functional standpoint; special emphasis is given to the discussion of trends in the development of large-scale programs, as well as to the assembly of database-linked program networks for engineering analysis.

36 citations


ReportDOI
12 Feb 1979
TL;DR: An architecture for a generalized model management system that facilitates the integration of management science models into a decision support system to support the decision-maker both in specifying a problem and in effecting a solution.
Abstract: This paper presents an architecture for a generalized model management system that facilitates the integration of management science models into a decision support system. The objective of the system is to support the decision-maker both in specifying a problem and in effecting a solution. This is accomplished by providing a means for interacting with a complex structured database to specify the structure of a problem, and for solving the model defined for that problem using appropriate information (either from the database or some other source) and efficient solution procedures.

32 citations


01 Jan 1979
TL;DR: LADDER, a natural language query system developed as a potential aid to command control data retrieval from large data bases, was studied in order to identify significant performance characteristics associated with its use in a Navy command control environment.
Abstract: Natural language query systems have been developed as potential aids to command control data retrieval processes involving large data bases. One such system, LADDER (Language Access to Distributed Data with Error Recovery), was studied in order to identify significant performance characteristics associated with its use in a Navy command control environment. Ten officers received moderate training in LADDER and subsequently employed it in a search and rescue scenario. Both system and user performance were examined. Basic patterns of usage were established, and troublesome syntactic expressions were identified. Design recommendations for the man-computer interface in command control query systems are discussed.

21 citations


Journal ArticleDOI
TL;DR: Simulation models are becoming increasingly important as tools for synthesizing and applying information in almost all aspects of land management for predicting and comparing outcomes of alternative decisions and assumptions.
Abstract: Simulation models are becoming increasingly important as tools for synthesizing and applying information in almost all aspects of land management. They are particularly valuable for predicting and comparing outcomes of alternative decisions and assumptions. Models also permit managers to consider and integrate the potential influences of a large number of variables.

12 citations


Journal ArticleDOI
TL;DR: A schema-driven, time-oriented record for oncology (TORO) on a minicomputer for storage, retrieval, and analysis of cancer patient data, which combines the ability to manage individual patient data for clinical purposes with the capability for rapid cross-patient analyses for research purposes.

12 citations


Proceedings ArticleDOI
29 Jun 1979
TL;DR: EUFID allows a user to query his data base in natural English, including sloppy syntax and misspellings, and is modular and table driven so that it can be interfaced to different applications and data management systems.
Abstract: EUFID is a natural language frontend for data management systems. It is modular and table driven so that it can be interfaced to different applications and data management systems. It allows a user to query his data base in natural English, including sloppy syntax and misspellings. The tables contain a data management system view of the data base, a semantic/syntactic view of the application, and a mapping from the second to the first.
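
A minimal sketch of the table-driven idea, with invented tables: synonyms fold sloppy input onto application concepts, and a second table maps those concepts onto the DBMS view.

    semantic_to_dbms = {
        # application concept -> (relation, column) in the data management system
        "ship":     ("VESSELS",   "VNAME"),
        "location": ("POSITIONS", "PORT"),
    }
    synonyms = {"boat": "ship", "vessel": "ship", "where": "location"}

    def normalize(word):
        """Fold synonyms and sloppy casing onto a known concept, if any."""
        word = word.lower()
        return synonyms.get(word, word)

    def translate(words):
        concepts = [normalize(w) for w in words]
        targets = [semantic_to_dbms[c] for c in concepts if c in semantic_to_dbms]
        columns = ", ".join(f"{rel}.{col}" for rel, col in targets)
        tables = ", ".join(sorted({rel for rel, _ in targets}))
        return f"SELECT {columns} FROM {tables}"

    print(translate("Where is the boat".split()))
    # SELECT POSITIONS.PORT, VESSELS.VNAME FROM POSITIONS, VESSELS

Retargeting the front end to a different application or DBMS then means replacing the tables, not the code.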

Journal ArticleDOI
TL;DR: This paper surveys the context within which DBMS developments in the USSR are occurring, examines the features of Soviet DBMS systems reported in the current literature, and reviews current research related to data management software technology.
Abstract: During the past several years there have been significant developments in the USSR in the utilization of computer-based information processing across all sectors of the Soviet economy and at all levels of its administration. These developments have been supported by the evolution of a third-generation computer hardware and software base and stimulated by an awareness of the critical importance of information processing technology to the functioning of a centrally planned and monitored economy. There is, therefore, a strong impetus to the development of database management system (DBMS) technology to support a broad spectrum of information processing applications. This paper surveys the context within which DBMS developments in the USSR are occurring, examines the features of Soviet DBMS systems reported in the current literature, and reviews current research related to data management software technology.

Proceedings ArticleDOI
03 Oct 1979
TL;DR: It is concluded that very highly secure data base management systems are feasible, and a kernel based architectural approach is developed and successfully applied to the existing data management system Ingres.
Abstract: The problem of providing a secure implementation of a data base management system is examined, and a kernel based architectural approach is developed. The design is then successfully applied to the existing data management system Ingres. It is concluded that very highly secure data base management systems are feasible.
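
A minimal sketch of the kernel-based approach, assuming a simple no-read-up policy (labels and calls invented; real designs also constrain writes): only the small kernel touches stored records, so the untrusted DBMS code above it cannot bypass the check.

    LEVELS = {"unclassified": 0, "secret": 1, "top secret": 2}

    class Kernel:
        """The only code allowed to touch stored records; everything above it
        (parser, optimizer, utilities) is untrusted and must call through here."""
        def __init__(self):
            self._store = {}                 # record id -> (classification, value)

        def put(self, rec_id, level, value):
            self._store[rec_id] = (level, value)   # writes simplified here

        def get(self, subject_level, rec_id):
            level, value = self._store[rec_id]
            if LEVELS[subject_level] < LEVELS[level]:   # mandatory check: no read-up
                raise PermissionError("access denied")
            return value

    kernel = Kernel()
    kernel.put("r1", "secret", "fleet positions")
    print(kernel.get("top secret", "r1"))    # read-down: allowed
    try:
        kernel.get("unclassified", "r1")     # read-up: refused by the kernel
    except PermissionError as exc:
        print("denied:", exc)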

Journal ArticleDOI
TL;DR: An interactive system for analyzing large samples of collected data that yields a significant improvement in the efficiency of data management in recent high energy physics experiments (Ω-project) and is suitable for all experiments treating large volumes of experimental data.

Book ChapterDOI
15 May 1979
TL;DR: In this article, the authors describe a comprehensive geo-facility system, where data management techniques are used to preserve the relationships between a facility and other facilities, and between the facility and the various components of its description.
Abstract: All utility systems face the problem of producing and maintaining accurate records on their geographic facilities -- those facilities located throughout a geographic area for the purpose of providing service to the utility's customers. These records commonly include both alphanumeric and graphic information, such as distribution maps, circuit diagrams and design specifications. Some early attempts to apply computers to cope with this problem have attacked a single piece -- for example, map drafting systems have been designed to faithfully reproduce and edit the final graphic report (maps). However, in such systems, the alphanumeric data vital for engineering and design calculations are often lost. This paper describes a comprehensive geo-facility system. In this system, data management techniques are used to preserve the relationships between a facility and other facilities, and between a facility and the various components of its description. The description of a facility is developed using data definition techniques for both facility attributes and graphic presentation. The geo-facility workstation supports an x-y tablet input device. A variety of specialized operations can be invoked by pointing to menus located on the tablet. The design of these menus and the relationship of these menus to the facility definition requires additional data definition. The interaction of the various definition and management functions described above is illustrated with a sample design session.
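
A hedged sketch (record layout invented) of the paper's central point: keeping a facility's engineering attributes, graphic presentation, and relationships in one managed record, so the alphanumeric data are not lost when the map is drawn.

    class Facility:
        """One facility record: engineering attributes, graphic presentation,
        and relationships to other facilities, kept together."""
        def __init__(self, fid, attributes, symbol, location):
            self.fid = fid
            self.attributes = attributes    # alphanumeric engineering data
            self.symbol = symbol            # how it is drawn on the map
            self.location = location        # x, y position on the map sheet
            self.connected = []             # ids of related facilities

    def connect(a, b):
        a.connected.append(b.fid)
        b.connected.append(a.fid)

    pole = Facility("P-101", {"material": "wood", "height_ft": 35}, "pole", (12.0, 4.5))
    xfmr = Facility("T-220", {"kva": 50, "phases": 3}, "transformer", (12.0, 4.6))
    connect(pole, xfmr)

    # The same record serves an engineering calculation...
    print("transformer capacity:", xfmr.attributes["kva"], "kVA")
    # ...and a map redraw, without losing the link between the two facilities.
    print(pole.symbol, "at", pole.location, "connected to", pole.connected)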

Journal ArticleDOI
TL;DR: A statistical computing system, the Computer-Assisted Data Analysis (CADA) Monitor, for use in performing interactive statistical data analysis, written in a transportable subset of BASIC, and versions are currently available for a variety of computers.
Abstract: This article describes a statistical computing system, the Computer-Assisted Data Analysis (CADA) Monitor, for use in performing interactive statistical data analysis. Especially easy to use because of its conversational nature, CADA includes facilities for data management, evaluation of probability distributions, Bayesian parametric models, Bayesian simultaneous estimation, Bayesian full-rank analysis of variance, and exploratory data analysis. CADA is written in a transportable subset of BASIC, and versions are currently available for a variety of computers.

Book ChapterDOI
20 Jun 1979
TL;DR: The paper analyzes the requirements for an interactive pictorial data base system and shows that the known requirements for pictorial applications can best be met by a general purpose system such as IDAMS.
Abstract: IDAMS is a prototype of a general purpose system for interactive data manipulation. The paper analyzes the requirements for an interactive pictorial data base system and shows that the known requirements for pictorial applications can best be met by a general purpose system such as IDAMS; its major features are then described.


Book ChapterDOI
TL;DR: In this paper, the importance of careful data quality checking is stressed, and examples illustrate the highly relevant information with regard to aircraft usage and loading environment that may be obtained from AIDS-recordings.
Abstract: Inspection periods and safe service lives of transport aircraft are based on an assumed average design load experience. Actual service loads may deviate appreciably from these assumptions. Fatigue-related load data may be obtained on a routine basis from AIDS-recordings. The importance of careful data quality checking is stressed. Examples illustrate the highly relevant information with regard to aircraft usage and loading environment that may be obtained.



Proceedings ArticleDOI
15 May 1979
TL;DR: The paper describes an integrated interactive computer package for power system analysis ("POWSYS") that was developed at the Electricity Supply Commission (South Africa) and has been used extensively by its planning and operating engineers for a number of years.
Abstract: A number of digital computer programs are currently available for power system studies. These programs are being used to model networks of increasing size and complexity, and the provision of effective data management facilities has become crucial to their successful implementation. The paper describes an integrated interactive computer package for power system analysis ("POWSYS"). This package was developed at the Electricity Supply Commission (South Africa) and has been used extensively by its planning and operating engineers for a number of years. The package features a common file structure of network data and results which is accessed by a set of power system analysis programs. These programs provide the major digital simulation capabilities required to study the behaviour of power systems under both steady state and transient conditions. A simple free-format command language enables the user to control the sequence of studies interactively and to utilize the built-in data management and reporting facilities provided. The package is open-ended and the addition of new commands and analysis programs is easily accomplished. The paper will describe the main features and facilities afforded by the data management software and will give a brief overview of the analysis programs currently implemented.
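
A minimal sketch of the open-ended command idea (command names invented): free-format input is split into a keyword and arguments and dispatched against a registry over a common network-data structure, so adding an analysis program means registering one more keyword.

    network = {"buses": ["A", "B"], "results": {}}   # the common file structure

    commands = {}
    def command(name):
        def register(fn):
            commands[name] = fn
            return fn
        return register

    @command("LOADFLOW")
    def loadflow(*args):
        network["results"]["loadflow"] = f"solved for {len(network['buses'])} buses"

    @command("REPORT")
    def report(*args):
        for study, outcome in network["results"].items():
            print(study, "->", outcome)

    def execute(line):
        keyword, *args = line.split()        # simple free-format parsing
        commands[keyword.upper()](*args)

    execute("loadflow")
    execute("REPORT")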

Journal ArticleDOI
A. K. Fitzgerald, B. F. Goodrich
TL;DR: The Data Management component of the new IBM 8100 Distributed Processing Programming Executive (DPPX) provides for the storage and retrieval of data on disk and tape by means of a layered structure, an improved concept of device independence, and the use of catalogs.
Abstract: The Data Management component of the new IBM 8100 Distributed Processing Programming Executive (DPPX) provides for the storage and retrieval of data on disk and tape. Its objectives are to support a broad range of functions and be easy to use, be easily extendible, and entail minimal cost for the user. The Data Management component is designed to meet those objectives by means of a layered structure, an improved concept of device independence, and the use of catalogs.
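
A hedged sketch of the layered structure and catalog idea (names invented, not DPPX interfaces): programs name data sets, a catalog resolves each name to a device, and an access-method layer offers the same read call whether the data live on disk or tape.

    class DiskDevice:
        def read(self, address): return f"<disk block {address}>"

    class TapeDevice:
        def read(self, address): return f"<tape block {address}>"

    CATALOG = {  # data-set name -> (device, starting address)
        "PAYROLL.MASTER": (DiskDevice(), 100),
        "PAYROLL.ARCHIVE": (TapeDevice(), 0),
    }

    class DataSet:
        """Access-method layer: device-independent interface to one data set."""
        def __init__(self, name):
            self.device, self.base = CATALOG[name]
        def read_block(self, n):
            return self.device.read(self.base + n)

    for name in ("PAYROLL.MASTER", "PAYROLL.ARCHIVE"):
        print(name, "->", DataSet(name).read_block(2))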

01 Jan 1979
TL;DR: The program accommodates data from multiple sensors simultaneously for processing of functions with two geometric independent variables (e.g., chord and radius); this report is a systems manual for assistance in program maintenance, modification, and/or installation.
Abstract: The Operational Loads Survey/Data Management System (OLS/DMS) was designed and programmed as a computer software tool for data management and processing of the Operational Loads Survey (OLS) test data base. With limited modification, the OLS/DMS will accommodate other large, time-based test data bases. The system transfers selected test data to a large, direct access disc file and maintains the data on a semi-permanent basis. Data are retrieved from this file, processed, and displayed interactively or in batch. Plot output is generated on a Tektronix 4014 or an incremental plotter (e.g., Calcomp). A small sample of available processing options includes amplitude spectrum, harmonic analysis, digital filtering, blade static pressure coefficient, and blade normal force coefficient. The program will accommodate data from multiple sensors simultaneously for processing of functions with two geometric independent variables (e.g., chord and radius). This report is a systems manual for assistance in program maintenance, modification, and/or installation.
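
As a hedged sketch of one listed processing option, harmonic analysis, the following extracts per-revolution harmonic amplitudes from a synthetic time-based record (the signal and sample rates are invented; numpy supplies the FFT).

    import numpy as np

    def harmonic_amplitudes(samples, samples_per_rev, n_harmonics):
        """Amplitude of the 1/rev, 2/rev, ... components via the FFT."""
        spectrum = np.fft.rfft(samples) / (len(samples) / 2)
        per_rev = len(samples) // samples_per_rev   # bins per 1/rev harmonic
        return [abs(spectrum[k * per_rev]) for k in range(1, n_harmonics + 1)]

    # synthetic blade-pressure signal: strong 1/rev, weaker 3/rev component
    revs, spr = 8, 64
    t = np.arange(revs * spr) / spr * 2 * np.pi
    signal = 2.0 * np.sin(t) + 0.5 * np.sin(3 * t)
    print([round(a, 3) for a in harmonic_amplitudes(signal, spr, 4)])
    # ~[2.0, 0.0, 0.5, 0.0]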

Book ChapterDOI
Ralph Bernstein
20 Jun 1979
TL;DR: Some of the data base requirements for future programs are identified, and the concept of a global data and information base that is geographically accessible, contains data from all earth observation programs, and is easily and economically disseminated is discussed.
Abstract: Remote sensing of the earth has evolved from a film based, manual interpretation technology to a digital multispectral and multisensor technology with significant machine processing for correction, information extraction, data management, and modelling. This transition is not without growing pains. Of the 10^15 bits of data that are currently acquired per year in the NASA program, only about 10^13 bits (roughly one percent) are utilized. Future programs involving higher resolution and wider spectral range sensors will increase the data acquisition rates by an order of magnitude. Technological problems exist today in data correction, information extraction, processing, storage, retrieval and dissemination. This paper will identify some of the data base requirements for future programs, and discuss technological approaches for improving the handling and processing of remotely sensed data. Fundamental to this approach is the concept of a global data and information base that is geographically accessible, contains data from all earth observation programs, and is easily and economically disseminated. This capability is needed, and the technology is available for its implementation.


01 Jan 1979
TL;DR: In this article, the authors present both theory and practice of management and discuss the four major functions of the management process: planning, organization, directing, and controlling; major activities, subactivities, and some specific tasks for each function are covered.
Abstract: This document presents both theory and practice of management. After discussing the foundation, framework, and environment of management, the next four parts cover the four major functions of the management process: planning, organization, directing, and controlling. Major activities, subactivities, and some specific tasks for each of the four major functions are discussed.

01 Mar 1979
TL;DR: The engineering-specified requirements for integrated information processing by means of the Integrated Programs for Aerospace-Vehicle Design (IPAD) system are presented and a data model is described, based on the design process of a typical aerospace vehicle.
Abstract: The engineering-specified requirements for integrated information processing by means of the Integrated Programs for Aerospace-Vehicle Design (IPAD) system are presented. A data model is described and is based on the design process of a typical aerospace vehicle. General data management requirements are specified for data storage, retrieval, generation, communication, and maintenance. Information management requirements are specified for a two-component data model. In the general portion, data sets are managed as entities, and in the specific portion, data elements and the relationships between elements are managed by the system, allowing user access to individual elements for the purpose of query. Computer program management requirements are specified for support of a computer program library, control of computer programs, and installation of computer programs into IPAD.
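
A minimal sketch of the two-component model (all names invented): the general portion manages data sets as opaque entities, while the specific portion manages elements and relationships so a query can reach an individual value.

    general_catalog = {          # data sets managed as entities
        "wing_loads_v3": {"owner": "structures", "format": "binary", "blob": b"..."},
    }

    specific_schema = {          # elements and relationships managed by the system
        "wing_station": {"elements": ["station_id", "chord_m", "load_n"],
                         "related_to": {"wing_loads_v3": "derived_from"}},
    }

    wing_stations = [
        {"station_id": 1, "chord_m": 4.2, "load_n": 15000},
        {"station_id": 2, "chord_m": 3.1, "load_n": 11000},
    ]

    # entity-level access: fetch the data set as a unit
    print(general_catalog["wing_loads_v3"]["owner"])

    # element-level query: reach an individual value and its lineage
    hit = next(r for r in wing_stations if r["station_id"] == 2)
    print(hit["load_n"], specific_schema["wing_station"]["related_to"])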

Proceedings ArticleDOI
01 Jun 1979
TL;DR: The use of associative or content-addressable memories and of hardware designs based on non-numerical as well as numerical operations allows information stored at unknown locations to be processed efficiently on the basis of some knowledge of its content.
Abstract: Recent years have witnessed a widespread and intensive effort to develop systems to store, maintain, and access data bases of varied size. Such systems are referred to as DBMS (Data Base Management Systems). A wide variety of DBMS exist in different areas, such as artificial intelligence, management information systems, military and corporate logistics, and medical diagnosis. All these systems have generally been implemented on conventional computers based on the von Neumann design, in which operations are performed on the information in memory by means of its address. Because of the size of typical data bases and the cost of memory, we cannot hold all information in main memory, and swapping converts the search problem into a transportation problem. Present-day systems have to transfer large sets of data from mass storage to the CPU, where simple compare functions are performed in order to separate relevant data from irrelevant data. The transfer channels, with their limited capacity, form the main bottleneck of such systems, and as a result great efforts have been made to reduce the necessary data flow by means of sophisticated software systems and additional redundancy such as index tables and inverted files. With these techniques, the address of a piece of information is obtained from a directory. Although the directory partially solves the bottleneck problem, it creates problems of its own. The directory must logically be kept in main memory; large data bases imply large directories, which occupy a large portion of that memory, and the use of directories adds complexity to the search, update, and delete algorithms. Conventional computers are all based on numerical operations; the necessity of designing new hardware based on non-numerical operations has been discussed in detail by one of the authors [1]. In contrast, the use of associative or content-addressable memories, and of hardware designs based on non-numerical as well as numerical operations, allows information stored at unknown locations to be processed efficiently on the basis of some knowledge of its content.
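
The contrast the authors draw can be sketched in a few lines (the "associative" search is only simulated here, cell by cell; real content-addressable hardware performs the comparisons in parallel).

    records = [                      # memory cells holding whole records
        {"id": 7, "city": "Boston", "balance": 120},
        {"id": 9, "city": "Dayton", "balance": 45},
        {"id": 12, "city": "Boston", "balance": 300},
    ]

    # von Neumann style: build and consult a directory (index) of addresses
    directory = {}
    for addr, rec in enumerate(records):
        directory.setdefault(rec["city"], []).append(addr)
    matches_by_address = [records[a] for a in directory.get("Boston", [])]

    # associative style: every cell compares its content against the search
    # argument; no directory is built or maintained
    matches_by_content = [rec for rec in records if rec["city"] == "Boston"]

    assert matches_by_address == matches_by_content
    print(matches_by_content)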